Everything that’s wrong with Facebook’s ‘echo chamber’ study

What happens when you combine confirmation bias and conflicts of interest?

S.E. Smith

Facebook wants you to know that if your feed is an echo chamber, it’s your fault. In a study just published in Science, the social networking giant claims that individual user choices, rather than the site’s complex algorithms, are the real culprit behind limited exposure to opposing views. The problem is that the study says no such thing.

On a purely academic level, the study is just bad science. It’s poorly constructed, it wasn’t conducted with care, and the data weren’t analyzed or discussed honestly, with some of the most interesting and important information buried in appendices. That’s why the results sound awfully favorable for Facebook, and it’s no coincidence that glowing press releases about the study landed in journalists’ inboxes across the country last week.

Facebook and other social media sites routinely face accusations of creating an echo chamber. As in real life, the logic goes, people seek out those with similar social and political views, which isolates them from opposing opinions and meaningful discussion. In the case of social media, this problem can be exacerbated by the high volume of content confirming the opinions of readers and the ability to readily filter out things users don’t want to see.

At Wired, Alan Martin sums up the problem: “If you surround yourself with voices that echo similar opinions to those you’re feeding out, they will be reinforced in your mind as mainstream, to the point that it can distort your perception of what is the general consensus.”

The study aimed to explore the alleged echo chamber effect by analyzing a sample of Facebook users, with identifying information removed, to determine how they interacted with the site. According to the abstract accompanying the research, the study concluded: “Compared to algorithmic ranking, individuals’ choices about what to consume had a stronger effect limiting exposure to cross-cutting content.”

However, that’s not quite how this works. The sample selection and size were deeply flawed, as were the conditions surrounding the study, as Eszter Hargittai pointed out at Crooked Timber in a blistering post about Science’s failure to evaluate the methodology with care. Only three researchers, all of whom were affiliated with Facebook, worked on the study, and when evaluating any science, it’s critical to ask who benefits.

In this case, Facebook does, and while the researchers claim to be impartial, it’s hard to believe that internal researchers would be popular with executives if they routinely criticized their parent company. While researchers at Facebook may be excellent data scientists and sociologists, they’re still biased.

Moreover, the study is functionally unrepeatable, because these researchers are the only ones with access to the data involved. Third-party scientists cannot use Facebook’s massive body of user data, and a study that can’t be independently tested has questionable validity. It gets worse, though, because the data themselves are also a problem.

The sample size of the study was extremely small, representing only four percent of users. To add to the problem, sample selection was hardly unbiased or random: the researchers looked only at people who had outed their political identities on their Facebook profiles, a huge no-no in sociological research.

It’s impossible to generalize these kinds of study findings to anyone other than users who specifically discuss their political leanings in their profiles, a relatively small proportion of Facebook’s population. At Medium, Zeynep Tufekci explains why this is a problem:

Sometimes, in cases like this, the sampling affects behavior: People who self-identify their politics are almost certainly going to behave quite differently, on average, than people who do not, when it comes to the behavior in question which is sharing and clicking through ideologically challenging content.

While an individual feed is usually strong evidence of political alignment (the user sharing Socialist Worker links is clearly different from the one sharing National Review content), very few users explicitly include their political affiliation in their profiles.
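To see why that matters, here is a toy simulation, in Python, of the selection problem Tufekci describes. Every number in it is invented for illustration; it uses none of the study’s data, and it makes one assumed behavioral gap explicit: self-identifying users click through cross-cutting content at a different rate than everyone else.

```python
# A toy simulation (not the study's data) of the selection problem:
# if users who list their politics on their profile click cross-cutting
# content at a different rate than everyone else, a sample drawn only
# from self-identifiers misestimates the population-wide rate.
# All numbers below are invented for illustration.

import random

random.seed(0)
POPULATION = 100_000

users = []
for _ in range(POPULATION):
    self_identifies = random.random() < 0.04          # ~4% list their politics
    # Assumed behavioral gap: self-identifiers click through
    # ideologically challenging content less often.
    click_rate = 0.10 if self_identifies else 0.25
    users.append((self_identifies, random.random() < click_rate))

overall = sum(clicked for _, clicked in users) / POPULATION
sampled = [clicked for ids, clicked in users if ids]
sample_rate = sum(sampled) / len(sampled)

print(f"population click-through rate: {overall:.2%}")
print(f"self-identifier-only sample:   {sample_rate:.2%}")
# The self-identifier sample systematically understates cross-cutting
# engagement, so conclusions about "users in general" don't follow.
```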

Such poor study construction already casts doubt on the research, but the problems don’t end there, because the results, dubious as they were, were also badly analyzed. The question of the echo chamber and what people see on Facebook is complicated by what people subscribe to and who they friend: conservatives, for example, are more likely to friend other conservatives and follow conservative pages. Unsurprisingly, they see a limited amount of liberal content.

That sounds like it would reinforce the study’s conclusions—that users build their own echo chamber—except for a glaring hole in the study’s reasoning. What users actually see is highly dependent on the site’s own algorithm, which determines which stories are suppressed and which ones actually appear in user feeds. The same goes for trending stories, also customized for users.

Facebook makes those decisions on the basis of which links are generating attention. If you interact with Polly Pocket more than other friends, you’ll see her links more than others—like the ones from that guy from high school you friended out of a sense of obligation who’s always posting downer links about subjects you disagree with.

Facebook itself openly admits this in a discussion about how user feeds function:

How does News Feed know which…stories to show? By letting people decide who and what to connect with, and by listening to feedback. When a user likes something, that tells News Feed that they want to see more of it; when they hide something, that tells News Feed to display less of that content in the future.

Facebook tailors itself for users, determining what it thinks people want to see. While it may do so on the basis of user behaviors, it’s impossible to separate the algorithm from those behaviors; in other words, Facebook has an amplifying effect. Asking whether users make the feed or the algorithm makes the feed is an apples-and-oranges comparison.
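To make that feedback loop concrete, here is a minimal, hypothetical sketch of an engagement-weighted feed ranker in Python. The class, the affinity scores, and the 1.5x/0.5x multipliers are all invented for illustration; this is not Facebook’s News Feed code, only the general shape of “clicks boost a source, hides suppress it” that the company’s own description implies.

```python
# A minimal, hypothetical sketch of an engagement-weighted feed ranker.
# It is NOT Facebook's actual News Feed code; it only illustrates the
# feedback loop described above: clicks raise a source's weight, a
# higher weight pushes that source's posts up the feed, and a better
# position attracts more clicks.

from collections import defaultdict

class ToyFeed:
    def __init__(self):
        # Affinity score per friend or page; starts equal for everyone.
        self.affinity = defaultdict(lambda: 1.0)

    def rank(self, posts):
        # posts: list of (source, story) tuples.
        # Higher-affinity sources float to the top of the feed.
        return sorted(posts, key=lambda p: self.affinity[p[0]], reverse=True)

    def record_click(self, source):
        # Positive engagement boosts the source's future ranking.
        self.affinity[source] *= 1.5

    def record_hide(self, source):
        # Hiding a story suppresses the source going forward.
        self.affinity[source] *= 0.5


feed = ToyFeed()
posts = [("Polly Pocket", "vacation photos"),
         ("high-school guy", "downer political link")]

# The user clicks Polly's posts and hides the other item a few times...
for _ in range(3):
    feed.record_click("Polly Pocket")
    feed.record_hide("high-school guy")

# ...so the ranker now buries the opposing content.
print(feed.rank(posts))
```

After a few iterations the opposing source sinks to the bottom of the feed, and there is no clean way to say whether the user’s clicks or the ranker responding to them did the burying; the two act together.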

Sociologist Nathan Jurgenson discusses this problem in his detailed takedown of the Facebook study:

Individual users choosing news they agree with and Facebook’s algorithm providing what those individuals already agree with is not either-or but additive. That people seek that which they agree with is a pretty well-established social-psychological trend. We didn’t need this report to confirm that.

He points out that this has serious implications:

Facebook’s ideological push to dismiss the very real ways they structure what users see and do is the company attempting to simultaneously embrace control and evade responsibility. Their news team doesn’t need to be competent in journalism because they don’t see themselves as doing journalism. But Facebook is doing journalism, and the way they code their algorithms and the rest of the site is structuring and shaping personal, social, and civic life.

Christian Sandvig’s comment that this amounts to Facebook’s “not our fault study” is strikingly accurate. The researchers involved in the study abused science, misinterpreted results, and tried to dodge the fact that the algorithm and user behaviors are compounding factors that feed on each other.

Of course people are going to “like” things that align with their political beliefs—this is hardly news to sociologists. Naturally, Facebook’s algorithm will respond to this user behavior. And, of course, opposing views will be suppressed as a result—because that’s exactly how the algorithm is supposed to work.

Facebook’s alleged absolution isn’t so absolving after all: All this study proves is that sloppy science generates headlines.

S.E. Smith is a writer, editor, and agitator with numerous publication credits, including the Guardian, AlterNet, and Salon, along with several anthologies. Smith also serves as the Social Justice Editor for xoJane and will be co-chairing Wiscon 40—the preeminent feminist science-fiction conference—in 2016.

Image via mkhmarketing/Flickr (CC BY 2.0)

 