
Trying to value “social capital”…

Last month, Lloyd Davis gave two performances at the Centre for Creative Collaboration, telling stories from his recent trip – sorry, social art project – to the States, “Please Look After this Englishman”.

Today, he set off on his next project: an unplanned, working journey around the UK, led by the contacts, leads and ideas generated from social media.

Lloyd’s performance was very enjoyable – he is a good storyteller – and thought-provoking. I meant to write about it, but my own social art project – sorry, trip up north – got in the way, and I didn’t get around to it. Until now.

Lloyd Davis, telling tales

I won’t tell my own versions of Lloyd’s tales – you’ll have to see him for those – but I will pick up some of the issues he raised. Lloyd travelled from San Francisco to New York, guided by his social media contacts (with the proviso that he needed to stop off at South by Southwest Interactive in Austin, Texas). His contacts prompted his route, where and with whom he stayed, and even how (and by whom) he was entertained. We are clearly a perverse bunch: he was directed north from San Francisco to Seattle, east to Wisconsin and then south to Austin, before heading back north to Maine and finally New York. Not the most direct route – although that wasn’t really the point of the project.

Redrawing the map of the USA

His contacts – largely his online friends, friends of friends, and friends of friends of friends – became his safety net. (It would be interesting to know how many degrees of separation this network extended, but of course anyone can become a one-degree-of-separation friend online with the click of a mouse.) They put him in touch with people and places, and occasionally the net broke: he found the extent of his network when it was overstretched, and vague contacts wondered who the hell this guy was and what he wanted.

It seems to me that this was an exercise in realising “social capital” (a phrase Lloyd used, too) – but it also made me think what a bad term “social capital” is. It is of course a metaphor, a way of conceptualising the value of a social network as one might value the capital built up by an enterprise.

But a balance sheet or bank account isn’t the right model for this concept. Social capital is not spent: indeed, using social capital – as Lloyd did, by meeting and conversing with his many connections – actually creates more social capital, rather than depleting it. The highly intangible – and possibly volatile – nature of social capital means it isn’t something that we can consciously build a stock of.

It is clearly something that is created through social interaction and social acts. Sharing something on Twitter, writing a blog post (perhaps even this…), meeting new people, introducing others – all these things (and more!) can create value within a network.

It isn’t a zero-sum game – there isn’t a finite amount in an account in which we can measure the rise or fall as a result of our actions. Or inaction – I am sure that my social capital decreases when I have quiet periods on Twitter or fail to post to this blog for a month or so (an all too frequent occurrence). It isn’t static.

I’m not sure where this gets us, aside from a belief that borrowing the metaphor from a financial concept is less than helpful. Is there a better term that more adequately explains the ephemeral nature of social capital? I’m not sure that I can think of one – at least, not yet.

(You can follow Lloyd’s progress around Britain on Twitter, @LloydDavis.)

Eli Pariser and the Filter Bubble

Another week, another talk at the RSA… This time, Eli Pariser was discussing his thoughts (and his book) on the filter bubble – the way the internet shows us a cut-down, “me-too” world.

To some extent, the filter bubble is nothing new. We have always surrounded ourselves with people like us, who share similar views and like similar things; we buy (at least, I still do…) newspapers which reflect our views and interests. It is no surprise that on the internet, we do the same: our friends on Facebook and those we follow on Twitter generally reflect our views and opinions, creating an echo chamber.

Pariser’s argument extends this with a healthy dose of paranoia. He noticed that Facebook was filtering out his friends with views which differed from his, and he realised that this was because Facebook was applying an algorithm to the stream of updates on Pariser’s page which ranked those with similar views most highly, just as Amazon uses an algorithm to generate recommendations – “if you like that, you may like this…”. (Presumably, Facebook only does this when “top news” is selected – I assume that by selecting “most recent” to get updates in chronological order, any algorithmic bias is removed. This may be wrong!)
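Facebook has never published its ranking algorithm, so this can only be a toy sketch of the principle Pariser describes – the names, topics and scoring below are all invented, and bear no relation to what Facebook actually does. The idea is simply that each update is scored by its similarity to the reader’s own interests, so posts from unlike-minded friends quietly sink below the cut-off:

```python
# Toy sketch of relevance-ranked "top news" (all names, topics and the
# scoring function are invented; Facebook's real algorithm is not public).

def affinity(user_interests, post_topics):
    """Jaccard similarity between the reader's interests and a post's topics."""
    if not user_interests or not post_topics:
        return 0.0
    return len(user_interests & post_topics) / len(user_interests | post_topics)

def top_news(posts, user_interests, limit=2):
    """Rank posts by similarity to the reader and keep only the top few -
    updates from unlike-minded friends fall below the cut-off unnoticed."""
    ranked = sorted(posts, key=lambda p: affinity(user_interests, p["topics"]),
                    reverse=True)
    return ranked[:limit]

posts = [
    {"author": "alice", "topics": {"politics", "progressive"}},
    {"author": "bob",   "topics": {"politics", "conservative"}},
    {"author": "carol", "topics": {"cycling"}},
]

# A progressive, cycling-minded reader never sees bob's updates at all.
for post in top_news(posts, user_interests={"politics", "progressive", "cycling"}):
    print(post["author"])
```

A chronological “most recent” view would simply skip the scoring step altogether – which is why I assume it removes the bias.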

Google uses similar algorithms to return personalised search results, based on previous search behaviour and internet use (mediated by cookies, I believe). Pariser got some friends to use Google to search for “Egypt”, and was surprised by the wide disparity in the results Google returned. Pariser’s view was that most people’s ignorance of this made it quite dangerous.
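Google’s ranking is proprietary, so again this is only a hedged sketch – a hypothetical mini-index with invented topic tags – but it shows how the same query can come back in a different order for users with different histories, which is roughly what Pariser’s friends experienced:

```python
# Toy sketch of personalised search (the index, topic tags and boost rule
# are all invented; Google's actual ranking is proprietary).

def personalised_search(query, index, history):
    """Return results for `query`, floating pages whose topic matches
    something in the user's browsing history to the top."""
    results = index.get(query, [])
    # sorted() is stable, so pages with equal scores keep their original order.
    return sorted(results, key=lambda page: page.split(":")[0] in history,
                  reverse=True)

# Each hypothetical result is tagged "topic:title".
index = {
    "egypt": ["travel:Nile cruise deals",
              "news:Protests in Cairo",
              "history:Pyramids of Giza"],
}

print(personalised_search("egypt", index, history={"travel"}))  # holidays first
print(personalised_search("egypt", index, history={"news"}))    # protests first
```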

I can understand the need for Google and others to use their algorithms to filter what they show us. There is simply so much information that we couldn’t process it without some filtering. The problem is that most people aren’t aware that it is happening. (Nor would most care. Probably.) They are also trying to make money, which they do by having us click on adverts and paid links – so obviously they will show us the links which will make them the most money.

Worse than filtering, though, he said that by mining our friends’ data, companies (or, perhaps, government agencies?) are able to predict our own behaviour with 80% accuracy – without recourse to us at all. Credit rating agencies could use this to assess one’s creditworthiness, for instance, or insurance companies the risk of a claim – and since it is not based on our own data, we would have no influence over it (and nor might it be covered by privacy legislation). This week, WPP, the world’s largest ad agency, announced that it has built the world’s largest database to track our use of the internet, and the London Evening Standard reported that

Google has been hit with a lawsuit by an Irish hotel… one of the first results via Google’s autocomplete function would have been “receivership”. There are no stories or links suggesting the hotel is in receivership… the hotel points out that Google Instant’s suggestion was bad for business

These corporations are only accountable to shareholders – not to us, and not to those whose results they demote or incorrectly list.

I find this disturbing, and perhaps awareness of the ways corporations use data is one way to combat it, the price of freedom being eternal vigilance. Pariser felt that the filter bubble represents a threat to democracy, since a functioning democracy needs informed citizens to exercise their choice. He may be right, but my guess is that people are much better informed of the world around them because of Google than they were before its advent.

It is the almost accidental way that Google, Facebook and others are editing – or personalising – our web experience that is worrying. When I buy a newspaper or watch a TV news programme or documentary, I am aware of the editorial influence. The algorithms designed to filter the web are unthinking and data-driven. Their designers are engineers (probably male, probably white, possibly lacking in emotional intelligence and with a strong belief in rationality). Pariser said there is a view amongst the technocrats that privacy is over – all our information will be out there, and we can’t control it (although those with wealth and power may have more chance than others). This may produce a beneficial panopticon, bringing back village values to the global village; but do we want to live in a prison?

I was discussing Pariser’s talk with David Jennings, who pointed out that he’d written critically about the idea of the filter bubble a few years ago. I don’t dispute David’s three objections: that the filtering is far from perfect (Rory Cellan-Jones tried to repeat Pariser’s experiment by getting his friends to use Google to search for the same terms, and Google returned the same links to everyone); that if it were perfect, we’d get bored of it, and corporations would have to engineer in serendipity; and that we can see further than our computer screens – we know there are different views out there, and we have a variety of other news sources.

I still find the filter bubble disquieting: there are huge issues around the use of our own and others’ data in this way. It is people’s ignorance of it – the fact that most people won’t care – that worries me. Maybe I need a new tag for this blog – “paranoia”…

(Pariser’s website recommends 10 ways to pop your filter bubble.)

Changing Education: “Education for Uncertain Futures”

I have spent many months over the past few years working with public servants in the Scottish Government on change programmes in the education sector. I wasn’t designing or leading the change – there were pedagogues to do that – but I was responsible for programme management of some workstreams.

Change in the education sector is difficult. There are a lot of deep-rooted interest groups – parents, teachers, unions (surprisingly, learners rarely seem to get a look in…). The change happens in classrooms, far removed from the design and political drivers of change. There is a very long time lag – changing a school curriculum means changing the assessment and exam system. And the political pressures to tinker can be huge.

So changing education is difficult and complicated.

I was therefore really interested in a recent debate at the RSA entitled “Education for Uncertain Futures”. This was held to launch the output of an RSA project, “Building Agency in the Face of Uncertainty: a thinking tool for educators and education leaders”. I haven’t read the pamphlet – yet – but the debate was interesting and raised a lot of issues.

There were four speakers: Keri Facer, who was co-author of the pamphlet (sorry – thinking tool…); Patrick Hazlewood, a head teacher who has been doing some work with the RSA; Carolyn Usted, an educationalist and former inspector of schools; and Dougald Hine, an itinerant thinker (and co-founder of the Everything Unplugged meetup I sometimes go to). A varied bunch, each coming from a different perspective.

It made for an interesting talk, but there was a lack of cohesion in the views and issues discussed. This is what some of the contributors talked about, and some thoughts of my own they prompted. (There was an underlying assumption in the discussion that mainstream education is the way forward. It might not be; but it probably will be the system that most young people work through, so there seems little point in challenging that assumption. Maybe that’s another post!)

The education establishment, like other organisations, faces lots of uncertainty – economic, environmental, technological – but the feedback in the system may take years. Governments and educators are designing the education system in a fog of data: and the educational environment is changing much faster than the system can. The policies being implemented now are not just unlikely to work in the future – they’re unlikely to work today, because they’re based on data that is now outdated.

Beneath this uncertainty, though, is a continuity: the purpose of education remains the same (if we can agree what that is – there seems to be no overriding philosophy of why we educate; we may not even share a common language to discuss learning). The daily business of teaching remains (more or less) the same. Against the background of change and uncertainty, what teachers do and try to achieve remains pretty constant.

The curriculum and its objectives have changed little: the early 21st-century curriculum would be recognised by teachers from the early 20th century, though the tools and practice may have changed considerably. Teachers aim to get measurable results – so they “teach to the test”, and always have done (because that is how they are assessed). Teaching is generally a linear, sequential process (though learning may be recursive). Generally, we teach what we know based on the past, not what we think will be needed for the future (the one thing we can be certain of about predictions of the future is that they will be wrong).

Change may be a constant, but the rate of change – particularly technology and the way we use it – has greatly accelerated. Young people – the primary consumers of the education system – are at the forefront of that change, ahead (maybe way ahead) of their parents and teachers. Their access to information and other resources is far superior to previous generations’. Helping people manage the vast amounts of information available now, sifting value from the chaff, would be useful.

One thing that the education system could do is prepare young people to cope with this rate of change: to enable them to live with uncertainty and ambiguity, to improvise in novel situations. Society may come to rely on these skills as institutions – banks or universities or governments – fail. The gap between those in power and influence and those without – consumers of it, perhaps – is changing too, in ways the powerful may not understand (the Arab spring illustrates this, and it could happen anywhere).

Allowing educators to improvise and experiment – removing the yoke of management by results from them – to see what works and what doesn’t in their situation (and every school may be different) might add a lot of value. Teachers are the people who have to manage the change and pass it on to their learners: providing learners with skills rather than answers would be a good start.