Any time “scientists” at a company purport to have done a
study involving that company in any way, the public has good reason to be
suspicious of the reported conclusions. A management genuinely intent on providing
credible information would commission independent scholars (i.e., researchers not
compensated by the company); precisely because its desire to give the public an
answer was so strong, such a management would want to avoid even the appearance
of a conflict of interest. The management at Facebook, then, may not have been
very invested in providing the public an answer to the question: how much
influence do users actually have over the content in their feeds? In May 2015,
three “Facebook data scientists” published a
peer-reviewed study in Science Magazine
on how often Facebook users had been “exposed to political views different from
their own.”[1]
The “scientists” concluded that if users “mostly see news and updates from
friends who support their own political ideology, it’s primarily because of
their own choices—not the company’s
algorithm.”[2]
Academic scholars criticized the study’s methodology and cautioned that the
risk of polarized “echo chambers” on Facebook was nonetheless significant.[3]
I was in academia long enough to know that methodological criticism from more
than one scholar is enough to put an empirical study’s findings in doubt. Nowadays,
however, I am more interested in the broader implications of the “echo-chamber” criticism.
Although the study’s primary question was how much of
what users see in their feeds is due to the company’s algorithm, the larger
issue was whether Facebook was contributing to the increasing political
polarization in the U.S. “What we do show, very definitively,”
Eytan Bakshy, who worked on the study, said, “is that individuals do have
diverse social networks. . . . The majority of people do have more friends from
the other side than many have speculated. We’re putting facts behind this.”[4]
Facebook users who self-reported a liberal or conservative orientation had, on
average, 23% of their friends holding the opposing political ideology, and 28.5% of
the hard news that such users encountered in the News Feed cut across
ideological lines.[5]
We know from neuroscience that the human brain privileges people who are
similar, so these low percentages do not necessarily mean that Facebook’s
algorithm has a contributing impact. That it could have such an impact is
perhaps of more significance.
According to the 2014 Political Polarization in the American
Public study by the Pew Research Center, political polarization increased from
1994 to 2014; the emptying out of a “middle” is especially pronounced among the
politically engaged.[6]
The proliferation of news networks specializing in particular political market segments
and the ability of social-network users to pick what they encounter are
arguably contributing factors, and subtle leanings in a giant company’s algorithm
could also play a significant role at an aggregate scale.
The potential for political influence on a mass
scale, given a usership as large as Facebook’s, warrants consideration in any case.
At the time of the study, Mark Zuckerberg, founder and CEO of Facebook, had the
wherewithal to push a political point subtly by carefully amending the company’s
algorithm (as well as by using his public platform). Imagine Starbucks’ Howard
Schultz, who felt free to use his company (and its employees) to further the
political issues (and positions) he valued, in Zuckerberg’s position at
Facebook. Would users appreciate being manipulated, subtly or not, into
discussions on race that tilted toward Schultz’s position?[7]
The impact on public policy could be astounding simply on account of the
proportion of Americans who actively use Facebook. The mining of user data
would give such a CEO even more power, not only over particular users but also
at a societal level.[8]
The problem, in other words, is that of an unelected person holding such massive
political power in a viable representative democracy. By a rationale similar to
that behind anti-trust laws, limits on the usership size of social-media
companies could therefore be justified with some degree of precedent.
See also "Taking the Face Off Facebook."
See also "Taking the Face Off Facebook."
1. Alexander B. Howard, “Facebook Study Says Users Control What They See, But Critics Disagree,” The Huffington Post, May 12, 2015.
2. Ibid. I put the quotes around “scientists” to make the point that the conflict of interest renders the label itself questionable as applied to the study’s investigators.
3. See, for example, Christian Sandvig, “The Facebook ‘It’s Not Our Fault’ Study,” Multicast, Harvard Law School Blogs, May 7, 2015.
4. Howard, “Facebook Study.” See Eytan Bakshy, Solomon Messing, and Lada Adamic, “Exposure to Diverse Information on Facebook,” Facebook Blog, May 7, 2015.
5. Ibid.
6. Ibid. See “Political Polarization in the American Public,” U.S. Politics & Policy, Pew Research Center, June 12, 2014.