To some extent, the filter bubble is nothing new. We have always surrounded ourselves with people like us, who share similar views and like similar things; we buy (at least, I still do…) newspapers which reflect our views and interests. It is no surprise that on the internet, we do the same: our friends on Facebook and those we follow on Twitter generally reflect our views and opinions, creating an echo chamber.
Pariser’s argument extends this with a healthy dose of paranoia. He noticed that Facebook was filtering out updates from friends whose views differed from his, and he realised that this was because Facebook applies an algorithm to the stream of updates on his page which ranks those with similar views most highly, just as Amazon uses an algorithm to generate recommendations – “if you like that, you may like this…”. (Presumably, Facebook only does this when “top news” is selected – I assume that selecting “most recent”, which shows updates in chronological order, removes any algorithmic bias. This may be wrong!)
Google uses similar algorithms to return personalised search results, based on previous search behaviour and internet use (mediated by cookies, I believe). Pariser got some friends to use Google to search on “Egypt”, and was surprised by the wide disparity in the results Google returned. Pariser’s view was that most people’s ignorance of this made it quite dangerous.
I can understand the need for Google and others to use their algorithms to filter what they show us. There is simply so much information that we couldn’t process it without some filtering. The problem is that most people aren’t aware that it is happening. (Nor would most care. Probably.) They are also trying to make money, which they do by having us click on adverts and paid links – so obviously they will show us the links which will make them the most money.
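To make the idea concrete, here is a purely illustrative sketch of the kind of score-and-rank personalisation being described – the fields, weights and items are all invented for the example, and none of it reflects any real company’s actual algorithm:

```python
# Illustrative only: a toy "filter bubble" ranker. Items that match the
# user's existing interests score highest, nudged by how much revenue a
# click would earn the platform. All names and weights are invented.

def rank_items(items, user_interests, revenue_weight=0.3):
    """Rank items by overlap with the user's interests, blended with revenue."""
    def score(item):
        # Similarity: fraction of the item's topics the user already likes.
        overlap = len(set(item["topics"]) & user_interests)
        similarity = overlap / max(len(item["topics"]), 1)
        # Blend in how much the platform earns if the item is clicked.
        return (1 - revenue_weight) * similarity + revenue_weight * item["revenue"]
    return sorted(items, key=score, reverse=True)

items = [
    {"title": "Protest news", "topics": ["politics", "egypt"], "revenue": 0.1},
    {"title": "Holiday deals", "topics": ["travel", "egypt"], "revenue": 0.9},
    {"title": "Cat video", "topics": ["pets"], "revenue": 0.2},
]

# A politics-minded user never sees the cat video near the top - and a
# different user, with different interests, would get a different ordering.
for item in rank_items(items, user_interests={"politics", "egypt"}):
    print(item["title"])
```

The point of the toy example is that nothing here is malicious or even hand-edited: two users with different interest sets simply get different orderings, which is exactly the quiet, automated editing Pariser describes.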
Worse than filtering, though, Pariser said that by mining our friends’ data, companies (or, perhaps, government agencies?) are able to predict our own behaviour with 80% accuracy – without recourse to us at all. Credit rating agencies could use this to assess one’s creditworthiness, for instance, or insurance companies the risk of a claim – and since it is not based on our own data, we would have no influence over it (and nor might it be covered by privacy legislation). This week, WPP, the world’s largest ad agency, announced that it has built the world’s largest database to track our use of the internet, and the London Evening Standard reported that
Google has been hit with a lawsuit by an Irish hotel… one of the first results via Google’s autocomplete function would have been “receivership”. There are no stories or links suggesting the hotel is in receivership… the hotel points out that Google Instant’s suggestion was bad for business
These corporations are only accountable to shareholders – not to us, and not to those whose results they demote or incorrectly list.
I find this disturbing, and perhaps awareness of the ways corporations use data is one way to combat it, the price of freedom being eternal vigilance. Pariser felt that the filter bubble represents a threat to democracy, since a functioning democracy needs informed citizens to exercise their choice. He may be right, but my guess is that people are much better informed of the world around them because of Google than they were before its advent.
It is the almost accidental way that Google, Facebook and others are editing – or personalising – our web experience that is worrying. When I buy a newspaper or watch a TV news programme or documentary, I am aware of the editorial influence. The algorithms designed to filter the web are unthinking and data driven. Their designers are engineers (probably male, probably white, possibly lacking in emotional intelligence and with a strong belief in rationality). Pariser said there is a view amongst the technocrats that privacy is over – all our information will be out there, and we can’t control it (although those with wealth and power may have more chance than others). This may produce a beneficial panopticon, bringing back village values to the global village; but do we want to live in a prison?
I was discussing Pariser’s talk with David Jennings, who pointed out that he’d written critically about the idea of the filter bubble a few years ago. I don’t dispute David’s three objections – that the filtering is far from perfect (Rory Cellan-Jones tried to repeat Pariser’s experiment by getting his friends to use Google to search for the same terms, and Google returned the same links to everyone); that if it were perfect, we’d get bored of it, and corporations would have to engineer in serendipity; and that we can see further than our computer screens – we know there are different views out there, and we have a variety of other news sources.
I still find the filter bubble disquieting: there are huge issues around the use of our own and others’ data in this way. It is people’s ignorance of it – the fact that most people won’t care – that worries me. Maybe I need a new tag for this blog – “paranoia”…
(Pariser’s website recommends 10 ways to pop your filter bubble.)