Tuesday, March 20, 2018

Oligarchic Social Media Companies: Winnowing the Internet Unethically

Too much power in a few hands is inherently dangerous. That goes for private as well as public, or governmental, power. In the world of social media, the companies that own and control the platforms are essentially governmental in nature in that their executives promulgate rules and, ideally, see that they are enforced. The downsides of too few platforms, each with an extraordinary amount of power, include a constricting of ideas, or content, on the internet and potentially unanswered violations of the rights of the networks' respective users. The public-policy repercussions, I submit, include applying antitrust law to social media companies so that none becomes as massively dominant as Facebook had been allowed to become.
In an open letter in March 2018, Tim Berners-Lee, the inventor of the World Wide Web, proposed a regulatory framework to balance the interests of social media companies and their users. In the wake of the Facebook scandal then unfolding, in which a third party, Cambridge Analytica, had psychologically and politically manipulated up to 50 million users, the obvious inference was that privacy rights were in dire need of being shored up by regulators, as Facebook's management had failed even to notify users of the invasive use of their data. Yet a single-minded focus on that problem risks missing a more subtle one.
Berners-Lee points in his letter to the “concentration of power” in a few social media companies that “creates a new set of gatekeepers, allowing a handful of platforms to control which ideas and opinions are seen and shared.”[1] As a result, the “Web that many connected to years ago is not what new users will find today. What was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms.”[2] The bloggers who can make good use of Facebook’s algorithm get to see their ideas (and blogs) popularized, whereas bloggers who eschew Facebook stand a greater chance of being relegated to a marginal position on the internet.
I am a case in point. I could have used Facebook for years to promote essays I have posted online, but I decided on principle not to use Facebook because of how the company had treated my attempts to create and use an account. On my first attempt, Facebook suspended my account because I had sent some text with a link to one of my academic articles to some scholars whom I actually knew. No one at Facebook bothered to ask me whether my posts were spam; I was deemed to have sordid motives without much evidence to support the projection of distrust. I deleted the account. A few years later, I tried again. That time, Facebook demanded that I upload a clear facial picture of myself so I could be identified. Facebook had verified my phone number and email address, and thus my name, but strangely those were not enough. I had not yet even used the account and thus could not have violated any of the company's use-policies, so the presumption of distrust was unacceptable to me. I deleted that account rather than supply a picture of myself to be scanned. I was also concerned about how the facial-recognition software would be used, especially when combined with other basic information I had included in the profile. It turns out I had reason to be concerned, for even if my personality had not been profiled and I had not been subjected to psychological-political manipulation, the fact that Facebook failed to prevent the invasive actions of a political firm in 2015 means that other harvesting of data could have been going on without users being informed. Even before that scandal broke, I did not trust Facebook's staff.
I suspect that the fact that I had written a booklet, Taking the Face off Facebook, had something to do with Facebook making it difficult for me to create and use an account. Because the platform was so huge at the time, keeping me off made it much more difficult for me to popularize my essays at The Worden Report. If so, Facebook was exploiting a conflict of interest by keeping out ideas critical of its own management. Although I made considerable use of LinkedIn and some use of Twitter, I felt as though I were swimming upstream in steering clear of Facebook as a possible means of publicizing my site. Even though business ethics was one of my areas of expertise, and thus one of the subjects of the essays on my site, it felt strange to take an ethical stand against my own use of Facebook when I really could have used the added publicity for my site.
Interestingly, a faculty member at the University of Chicago's business school wrote to me just after the Facebook scandal became public that if only I would get on Twitter and Facebook and attack the positions of other people, my essays would be picked up by the major media and I would no longer be making things harder on myself than need be. If only I would "attack people." Really? The University of Chicago must be quite a place! That was another ethical line in the sand that I would not cross. Years earlier, I had stopped attending the Academy of Management's "academic" conferences because the "scholars" had made a "blood sport" out of tearing apart scholars giving paper presentations. I found that I could be more helpful to the presenters by suggesting fruitful directions rather than trashing what had already been written. Any dead wood would eventually fall off the tree anyway, whereas a useful insight would be cited and thus popularized. I had the same philosophy about my essays, sans any "facilitator" like Facebook. I suspect that Facebook's culture may have been allowed to become akin to that of the Academy of Management. If so, vindictiveness could be added as a reason why the range of ideas on the internet has been narrowed, and why more attention was not devoted to enforcing policies on third-party uses of user data. With great power comes great responsibility; so should the power remain when it has become clear that the responsibility has been lacking?
In short, social media companies like Google and Facebook had been allowed by the U.S. Government, and ultimately the American people, to get too big, with too much coverage and control of the internet. There should have been another platform similar to Facebook's that I could have used to reach more readers. Heather West of Mozilla stated at the SXSW conference in 2018 that people were "realizing the power that technology has in our lives and [were] asking technology companies to be more transparent and responsible."[3] I doubt that, and, besides, I submit that something more than asking was needed. Social media companies like Facebook were clinging at the time to their mantra that they merely provide platforms rather than provide (or even curate) the content. That is similar to Goldman Sachs insisting in the wake of the financial crisis of 2008 that the bank merely puts markets together rather than also acting as a proprietary player in them. At the SXSW conference, Kara Swisher of Vox Media and Christiane Amanpour of CNN mocked Twitter and Facebook for insisting, "We're a tech platform that facilitates media."[4] Even in just that there is a conflict of interest, for which media is to be facilitated and which relegated as problematic? Clearly, fake political ads are problematic, but are the social critics whose range includes critiquing social media companies also problematic? Perhaps Facebook's actual role is a blend of public and private: a private-sector government, one might say. If so, democratic accountability, and even accountability to stockholders, is problematic and thus not to be relied on.
The answer is more government regulation of companies like Facebook, essentially meaning that Facebook's public-governance functions should be overseen at the very least by public policy rather than by corporate governance and pressure from users. But government regulation has its limits. Regulators cannot be everywhere, and they cannot get at the problem of the winnowing of blogs on the internet due to factors controlled by the social media companies. For there to be a true democracy of ideas on the World Wide Web, alternative platforms of substantial but not dominating scale must be viable without being snuffed out by a bloated platform like Facebook's. Breaking up that company could mean that potential upstarts would get a chance to grow without being bought out and shelved by the giant. Oligopoly is not good for competition, and whether or not an industry is permitted to attain an oligarchic structure is a matter for governments, and ultimately electorates, to decide.


For more on this topic, see the booklet, Taking the Face off Facebook.





[1] Rob Pegoraro, “SXSW Takes a Skeptical Look at Tech,” USA Today, March 13, 2018.
[2] Ibid.
[3] Ibid.
[4] Ibid.