Information Filter Bubble – The Limitations of Search Results on the Internet

The filter bubble is a concept developed by Internet activist Eli Pariser in his book of the same name. It describes the phenomenon in which websites use algorithms to selectively guess what information a particular user might like to see, based on information about that user such as location, past click behavior, and search history.

Facebook displays things you ‘Like’ or prefer to read on friends’ pages, Google personally tailors your search results, and Yahoo News and Google News customize the news for you. According to Pariser, users are thus less exposed to conflicting viewpoints and become intellectually isolated in their own information bubbles. Pariser relates an example in which one user who searched Google for “BP” got investment news about British Petroleum, while another user’s search for the same term returned news about the Deepwater Horizon oil spill; the two results pages were strikingly different.

The filter bubble seems like a comfortable place, and by definition it is populated by the things that most compel you to click. But it is also a real problem: the set of things users are likely to click on (sports, music, and things that are highly personally relevant) is not the same as the set of things they need to know.

The rush to build the filter bubble is typically driven by commercial interests. It has become quite evident that organizations that want lots of people to use their websites must provide those users with personally relevant information, and if they want to reap monetary benefits through ads, they need to serve ‘relevant’ ads. This has resulted in a personal-information gold rush, in which major companies such as Google, Facebook, Microsoft, and Yahoo compete to build the most comprehensive portrait of each web user to drive personalized products. There is also an entire “behavior market” opening up, in which every action a user takes online – each mouse click, each form entry – can be sold as a commodity.

So what is the Internet actually hiding from its users?

According to Google engineer Jonathan McPhie, it is (obviously) different for every individual – and indeed, even Google does not quite know how it plays out at the individual level. At an aggregate level, Google can see that personalization makes people click more. But it cannot predict how each individual user’s information environment is altered.

In general, the things most likely to be edited out are the things users are least likely to click on. Sometimes this is a real service – if a user never reads articles about sports, why would a newspaper put a football story on its front page for them? But when the same logic is applied to, say, stories about foreign policy, a problem emerges. Some things, like homelessness or genocide, are not very ‘clickable’ but are highly important – and it is precisely these that the information bubble filters out.

In one form or another, nearly all major websites use personalization, but the one that matters most is Google. If two people search for the same thing using the exact same keywords on Google, they may get very different results. Google tracks each user’s search history, the kind of computer or mobile device they use, and even the average time between their clicks – all of this information is used to tailor results.
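
To make this concrete, here is a minimal sketch of how signal-based re-ranking could work in principle. The signal names, weights, and scoring formula are assumptions for illustration only, not Google’s actual algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Hypothetical per-user signals; real systems track far more.
    clicked_topics: dict = field(default_factory=dict)  # topic -> click count
    device: str = "desktop"

def personalized_score(topic: str, base_relevance: float,
                       profile: UserProfile) -> float:
    """Boost a result's relevance by how often this user clicked its topic.

    A toy illustration of signal-based re-ranking, not a real ranking model.
    """
    clicks = profile.clicked_topics.get(topic, 0)
    return base_relevance * (1.0 + 0.1 * clicks)

# Two users issue the identical query "BP" but see different orderings.
results = [("BP investment news", "finance", 0.70),
           ("Deepwater Horizon oil spill", "environment", 0.68)]

investor = UserProfile(clicked_topics={"finance": 12})
activist = UserProfile(clicked_topics={"environment": 9})

for name, profile in [("investor", investor), ("activist", activist)]:
    ranked = sorted(results,
                    key=lambda r: personalized_score(r[1], r[2], profile),
                    reverse=True)
    print(name, "sees first:", ranked[0][0])
```

Run as-is, the ‘investor’ profile surfaces the investment story first while the ‘activist’ profile surfaces the spill coverage, mirroring Pariser’s “BP” example above.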

Even if users are not logged in to Google, there are 57 signals the site uses to figure out who they are: whether they are on a Mac, a PC, or an iPad, where they are located while ‘Googling’, and so on. In the near future it will also be possible to “fingerprint” unique devices, so that sites can tell which specific machine a person is using. This is why erasing all browser cookies is at best a partial solution – it only partially limits the information available to personalization algorithms.
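
As a rough illustration of why cookie deletion is only a partial fix, the sketch below combines a few such signals into a stable identifier with no cookie involved. The specific signals chosen here are assumptions for the example, not the actual 57 that Google uses.

```python
import hashlib

def fingerprint(signals: dict) -> str:
    """Hash a set of browser/device signals into a stable identifier.

    A toy sketch of cookieless fingerprinting; real trackers combine
    dozens of signals (fonts, plugins, canvas rendering, and so on).
    """
    canonical = "|".join(f"{key}={signals[key]}" for key in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# The same device yields the same ID on every visit, cookies or not.
visit = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15)",
    "screen": "2560x1600",
    "timezone": "America/New_York",
    "language": "en-US",
}
print(fingerprint(visit))                              # stable across visits
print(fingerprint({**visit, "screen": "1920x1080"}))   # a different device
```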

What Internet users actually need is for online brands to take responsibility for the immense power they hold – the power to decide what we see and do not see, what we know and do not know. They should make sure that users still have access to public discourse and to the things that matter to the majority. A world based exclusively on the things we prefer (or ‘Like’, in Facebook terminology) is an incomplete world.
