Forgetting and remembering in the digital age

In the past month I have participated in a number of events which have highlighted the juxtaposition between “remembering” and “forgetting”.

And 2014 is a rather important year on both fronts.

Firstly, 2014 is the tercentenary of the Longitude Act (1714), by which the British Government established a prize to solve the “Longitude Problem”. As part of these celebrations the National Maritime Museum is hosting the exhibition “Ships, Clocks and Stars – the story of Longitude”, which “tells the extraordinary story of the race to determine longitude (east-west position) at sea, helping to solve the problem of navigation and saving seafarers from terrible fates including shipwreck and starvation.”

The problem of Longitude was a very real one as European powers expanded their interest in, and exploration of, the world. Each of the suggested solutions relied on the painstaking gathering of information: huge numbers of measurements taken by humans with various instruments and then meticulously entered into tables for ongoing reference.

As I wandered around the exhibition, what struck me was the sheer scale of the task, not only of making the observations but of compiling and continually updating the tables themselves. This was in stark contrast to the ease with which I later consulted Google Maps to find the location of a nearby pub, together with the timetable of the Thames Clipper, in order to navigate my own way home.

We who live in the Western world take so much for granted as we tap away at our smartphones. We rarely think about how fragile our systems are until we are hit with power outages, such as those in North America during last year’s “polar vortex”, or last week’s slowing-down of the internet, which most people were oblivious to.

We forget how far we have come in a very short time, and it is useful to visit places like the Royal Observatory to reflect on these past accomplishments, as well as exhibitions such as “the Digital Revolution” at the Barbican, which not only documents some of the developments in film, architecture, design, music and game development, but also demonstrates the creative possibilities offered by augmented reality, artificial intelligence, wearable technologies and 3-D printing. For those of us who have not only lived through, but have an active interest in, the evolution of digital information technologies, the exhibition was a wonderful reminder of human ingenuity and creativity; for many others it was a demonstration of what is now “real” and no longer “magic”, no longer “the future” but very much the present.

The notion of “presence” is hugely important when it comes to remembering and forgetting. As the World remembers the “War to end all Wars”, by commemorating the Centenary of the First World War, this is absolutely the right moment to ask some very important questions about what sort of world we want to create and what sort of future we want to live in, and to determine how best we are going to answer those questions for the benefit of ourselves and future generations.

The idea of “memory” is something which we have traditionally approached from the human perspective. It has social and often creative aspects, and the “convenience” of forgetting can be as useful as the “right” to remember, depending on circumstance and objective. As Napoleon is supposed to have said:

“History is the version of past events that people have decided to agree upon.”

So, what happens when it is not “people” who decide, but machines? And how are those decisions made?

Much of the conversation around me at the moment is about the European Court of Justice’s recent ruling on the “Right to be Forgotten”. Last week I went to a debate on the “right to be forgotten” hosted by the Central London Debating Society. Most interestingly, those who had actually researched the EU legislation seemed to have a far more positive view of it than those who had not, regardless of which side of the debate they were representing.

The legislation itself is certainly getting people talking, even those who would normally not be interested in something like this. For those in the data and information worlds it is presenting all sorts of unforeseen and complex challenges, and highlighting the need for legislators and policymakers to have far more developed digital skills and capabilities in order to deal with governing in the digital age.

When it comes to articles that are “on the public record” and “in the public interest” I can see the case against this legislation, but when it comes to the situation where, according to Eric Schmidt, young people are going to have to change their names in order to “escape their cyber past”, I find this both disconcerting and, in fact, very sad.

It could well be that the concept of “privacy”, and indeed the fallibility of human memory, is now a thing of the past, and that young people are no longer able to experiment with who and what they want to be (as many of us were able to do) because of the greed of large information companies whose business models feed on the information that those young people often unwittingly provide.

I have written about information and privacy in previous posts (particularly the digital brand), but it seems that these issues are now impacting at a personal level, and my only hope is that ordinary citizens at last wake up to the fact that digital information presents them with very different choices to make in terms of how they interact both with organisations and with each other.

It may be that what will emerge is not just something like the “personal data store” but a whole new transparency in the relationship between those who supply data (individuals) and those who seek to use it (organisations). Perhaps a system will finally emerge whereby data itself has value as a source of currency and exchange, and the key elements of the digital brand will be more clearly articulated.

All of this, of course, requires people to have a greater understanding of data and information: a “digital literacy”.

In order to understand this on a personal basis I have just spent two days with Decoded, doing both their “Code in a Day” and “Data in a Day” courses. These were incredibly valuable days which not only demystified the whole concept of “coding” for me, but also gave me insights into the actual mechanics of data, and into the incredible array of tools and resources that are now easily, and often freely, available on the Web.

Whilst there are obvious benefits for people from both the advertising and retail sectors, the key insight for me was the skills gap that exists between the people who develop and make policy and legislation and those who actually work with data and code on a daily basis.

For far too long “IT” has been seen as “rocket science”, and, whilst I am not going to underestimate the skills required to write code artfully, I am going to say that those skills are absolutely teachable, and that they are as important as “reading, writing and ‘rithmetic” … the traditional “Three R’s”.

Coding is a language and a state of mind. It takes patience and a certain aptitude for the “craft”, but it is logical and it can be taught to everyone, as the UK Government has already determined it should be. Whilst we don’t all need to rush off and become “coders”, I absolutely believe that each and every one of us who lives a “digital life” needs to be equipped with at least the basic skills to understand what we are doing, in order to analyse, understand and communicate with data more effectively.
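
For anyone wondering what that basic level of skill might look like in practice, here is a minimal, purely illustrative sketch in Python of the sort of exercise a “Data in a Day” style course covers: reading a small file of measurements and summarising it. The file name and column names are invented for the example.

    # Purely illustrative: read a small CSV of observations and print a
    # simple summary. The file "observations.csv" and its "reading"
    # column are hypothetical examples, not real data.
    import csv
    from statistics import mean

    readings = []
    with open("observations.csv", newline="") as f:
        for row in csv.DictReader(f):
            readings.append(float(row["reading"]))

    print(len(readings), "readings")
    print("average:", round(mean(readings), 2))
    print("range:", min(readings), "to", max(readings))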

Only then can we have any sort of reasonable debate about “remembering” and “forgetting”, and make a conscious decision as to whether history will be written by humans or by machines – though hopefully it will be by both.

Observing the Web as a way of making sense of a world of “wicked” problems

(Image courtesy of Thanassis Tiropanis in his “Web Observatory” presentations).

In a previous post I talked about Socrates and the notion of the “examined life”. Countless words have been written on human attempts to examine the psyche, and on the challenge of more effectively understanding how we can live together more harmoniously for the betterment of all.

This is what is at the heart of most governments, and during the ANZSOG Master Class earlier this year I asked John Alford what he felt was the greatest challenge currently facing public sector managers. He felt that it was the need to simultaneously manage the demands of “business as usual” alongside the complexity presented by so-called “wicked problems”.

A “wicked problem” is one that “resists resolution” and, all too often, one whose attempted solutions give rise to further “wicked problems”. In discussing this with colleagues an interesting distinction emerged: the difference between problems that are “complex” and those that are “complicated”. When it comes to dealing with “wicked problems”, many are complicated but not complex – the NASA Space programme, for example. Whenever problems arose during the race to the Moon, the Director took his team through the same three steps:

  1. What is the problem?
  2. Do we still want to do this?
  3. Get on and solve the problem.

Much of what was required was to uncomplicate the seemingly complicated – to simplify and retain the essence without getting overwhelmed – and the results speak for themselves. Something that is complicated can seem complex but, when it is analysed in terms of systems and processes, it can be simplified and solutions can be found.

The challenges of twenty-first-century government and governance are both complicated and complex and, as The Economist outlined in a recent article, globalisation and digital technologies are making many of the traditional systems and processes seem outdated, adding another layer of complexity. In order to even begin to deal with these issues we first need to be able to observe what is going on, and then to apply tools to those observations in order to make sense of them.

This is at the core of the Web Science “Web Observatory” initiative.

A “Web Observatory” is, in essence, a “Social Machine to observe Social Machines” (see an Overview paper here). Consider the world of astronomy: astronomers utilise a range of telescopes (such as the “Square Kilometre Array”) to focus on different parts of the sky, and from these divergent observations they construct a picture of the universe. The Web Observatory (or “Web of Observatories”) consists of a range of Web “telescopes” which focus on particular parts of the Web – be they particular Social Machines, such as Twitter or Wikipedia, or applications which utilise Open Data, such as maps or transport services.

Given the vast amount of open data that is now available, we are not short of telescopes; the trick is to make sense of it all, and that requires a range of tools to both visualise and analyse what is going on.
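
As a rough, hypothetical illustration of what one of these Web “telescopes” might look like at its very simplest, the Python sketch below polls an invented open-data feed, counts records by category, and prints a crude text visualisation. The URL and the “category” field are assumptions made for the example; a real Web Observatory component would add storage, provenance and far richer analytics on top of something like this.

    # A minimal, hypothetical "Web telescope": fetch an open-data feed,
    # count records by category and print a crude text bar chart.
    # The URL and the "category" field are invented for this sketch.
    import json
    from collections import Counter
    from urllib.request import urlopen

    FEED_URL = "https://example.org/opendata/transport.json"  # hypothetical endpoint

    with urlopen(FEED_URL) as response:
        records = json.load(response)  # expects a JSON list of objects

    counts = Counter(record.get("category", "unknown") for record in records)

    for category, count in counts.most_common():
        print(f"{category:<20} {'*' * count}")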

Making sense of data is not the purview of scientists alone, however; it is absolutely at the core of twenty-first-century policy development, and we are now contributing to this with a new ANZSOG research project, “Governance in the age of Social Machines: Web Informed policy making” (an overview of which can be found as both an Overview Paper and a Presentation).

This project is being undertaken in partnership with the Web Science Trust, the University of Southampton, the University of South Australia and the Government of South Australia, and our objectives are:

  1. To develop the data publishing and governance structures which enable the SA Government to publish its data on the Observatory;
  2. To develop a methodology to utilise that data to inform policy making; and
  3. To develop cases which underpin a “digital literacy” education programme to be developed by ANZSOG together with the SA Government for delivery to other jurisdictions.

The project will address three key research questions:

  1. How do we build a “Social Machine” to better observe the workings of government?
  2. How can this Government Web Observatory better inform the creation of public policy?
  3. What are some of the key challenges which governments will face as a result of being armed with a Web Observatory?

It is both exciting and refreshing to see organisations such as ANZSOG supporting initiatives like this, which focus on the core of our digital literacy and competencies. The reality is that we can have all the data in the world, but if we cannot learn to “read” it – and from there turn it into information and knowledge – then we are merely flailing about in the dark.

The UK has recognised this with its move to teach coding in schools, and the House of Lords has just called for evidence for its Committee on Digital Skills, stating that its goal is to examine “the digital capability of the nation”.

In his recent presentation at the National Archives “Information: The Currency of the Digital Economy” conference in Canberra, the Minister for Communications, Malcolm Turnbull, stated:

“In reality data is not an ice-cold world of algorithms and automatons. There is an essential role for people, with all the limitations, inaccuracies, subjectivity and humanity that come with being human … It is now the responsibility of us all, government, industry and the community more broadly to unleash the latent power of today’s modern information for a common social and economic good.”

I couldn’t agree more. We need to learn to design our world for “data” as much as we need to design for humans, because with the emergence of Social Machines humans and data are becoming inextricably bound.

It’s about the people, stupid!

Last week I attended the “Humanising the Robot Economy” event at Nesta, which was essentially the launch of “Our Work Here is Done: Visions of a Robot Economy”.

This, together with the “Fourth Revolution” at the RSA, reinforced for me the challenge we are facing as a society to both understand and integrate this current “revolution” being driven by digital information and interaction technologies.

The panel of speakers included:

  • Frances Coppola – Associate Editor at Pieria
  • Dr. Nick Hawes – Senior Lecturer in Intelligent Robotics at the University of Birmingham
  • Izabella Kaminska – Reporter for the Financial Times Alphaville service
  • Elly Truitt – Professor at Bryn Mawr College
  • Ryan Avent – Economics Correspondent, The Economist; and
  • Carlota Perez – Professor of Technology and Development, LSE and University of Tallinn.

The event was moderated by Stian Westlake, Executive Director of Policy and Research for Nesta, and brought together a range of expertise and perspectives on how “robots” are changing, and potentially will change, the economy and the relationship between humans and “work”.

“Robots” can be quite an emotional topic to discuss because, throughout human history, the idea of “automatons” has persisted: “artificially alive devices” which can be viewed as either friend or foe – something that is of benefit to human society or something that is a threat.

According to Wikipedia, a robot is a “mechanical or virtual artificial agent”.

Encyclopaedia Britannica defines a robot as “any automatically operated machine that replaces human effort, though it may not resemble human beings in appearance or perform functions in a humanlike manner”, and Merriam-Webster describes a robot as a “machine that looks like a human being and performs various complex acts (as walking or talking) of a human being”, a “device that automatically performs complicated often repetitive tasks”, or a “mechanism guided by automatic controls”.

In all cases the idea of a “robot” is of something that is independent of human connection, because of its “autonomous” nature and the focus on mechanical and, more recently, information engineering. But what I feel is missing in this conversation is the emerging science of bio-engineering and “bionics”: “the transfer of technology between lifeforms and manufactures”.

Let’s have a think for a minute about what is on the horizon.

Firstly, the augmenting of our “knowledge” and “information” through personal computing, be it the smartphones permanently grafted to our palms, Google Glass on our faces, or our Fitbits and personal health monitors.

Secondly, the augmentation of our physical selves through the replacement or amendment of body parts (pacemakers, artificial limbs, cochlear implants).

Thirdly, the emerging “internet of things”, where anything that can be connected to the internet via wireless technologies is being connected, uploading real-time data and increasingly integrating our physical selves into the digital universe.

As the conversation at Nesta unfolded I kept thinking about the legacy of Descartes in terms of dividing how we see the physical and social sciences, the “man in the machine” perspective, where there is a division between us and the world around us. I felt that the “robots” discussion was pretty much limited to this separation, and came away feeling that something really crucial was missing.

In his book “Built to Last” Jim Collins talks about “the tyranny of the ‘or’, and the power of the ‘and’”, and the importance of being able to consider all options at times when we are encouraged to make a single choice. I have always subscribed to this view, and what I think was missing from the “robot society” conversation was a focus on the potential convergence of humans AND machines – not necessarily cyborgs, but certainly a much closer inter-relationship and integration, with boundaries that are increasingly blurred and ambiguous.

My intuition tells me that this is where the real “robots” are going to have the greatest impact, and we can already see the early stages of this in the concept of the “social machine”. What concerns me is that if we continue to bifurcate and divide the way we think about humans and “robot” technologies, then not only will we miss out on the real opportunities, but we will also be unaware of the real threats.

So, are we talking about “humanising the robot economy” or “roboticising the human economy”?

I believe we are on the brink of some crucial decisions for humanity, and there is a pressing need to educate people so that they can make at least a somewhat informed choice about the world they want to create. There are some exciting initiatives now under way, all of which are contributing to the development of a “digital literacy”.
