Wednesday, September 21, 2016

Tech Industry Self-Regulation: Sufficient to Handle the Ethics of A.I.?

Five of the world’s largest tech companies—Alphabet (Google’s parent), Amazon, Facebook, IBM, and Microsoft—were by September 2016 working out the impact of artificial intelligence on jobs, transportation, and the general welfare.[1] The basic intention was “to ensure that A.I. research is focused on benefiting people, not hurting them.”[2] The underlying ethical theory is a utilitarian consequentialism, wherein benefit is maximized and harm minimized. Whether the companies should be joining together when the aim is to forestall government regulation is ethically less clear, given the checkered past of industry self-regulation and the conflict of interest involved.

People at the companies were concerned at the time that regulators would “create rules around their A.I. work,” so the managers were “trying to create a framework for a self-policing organization.”[3] I submit that the self-policing itself is problematic. For one thing, industry self-regulation can be less than fully effective, as companies have an immediate self-interest in colluding so that the industry body goes easy on enforcing the planks agreed to, even as that body presents a solid front to outsiders. Put another way, people’s faith in companies notwithstanding, senior managers across an industry can all agree to let the industry’s own regulatory body ease up on enforcement so that all of the companies—or even just the market leader—benefit. Hence industry self-regulation can devolve into the proverbial fox guarding the henhouse.

More broadly, the intention to forestall government regulation (with or without industry self-regulation) is ethically problematic in that it presupposes that lawlessness, or inherent weakness, lies outside the companies and their industry rather than within them. Put another way, the pernicious mentality that government control is for the other guy can lie behind such an intention. Incredibly, Wall Street bankers still felt that financial deregulation was needed even after the subprime-mortgage bond scandal that nearly brought down the American, and perhaps even the global, financial system. Imagine such a mentality—of not needing government regulation in spite of known industry flaws—paired with self-regulation. The mentality that the other guy is to blame—in that case, the mortgage borrowers—is conducive to intentional lapses in self-regulation.

Back to the tech companies: also in September 2016, a Stanford University project issued a report that was funded by a Microsoft researcher. The report “attempts to define the issues that citizens of a typical North American city will face in computers and robotic systems that mimic human capabilities.”[4] The report is a mixed bag.

On the one hand, the report claims that attempts to regulate A.I. would be misguided because there is no clear definition of A.I. and because “the risks and considerations are very different in different domains” of it.[5] Regulations are naturally oriented to specifics, however, so they could indeed be tailored to fit the distinctiveness of any given domain of A.I.

On the other hand, the report’s authors wanted to “raise the awareness of and expertise about artificial intelligence at all levels of government,” according to Peter Stone, one of the report’s authors.[6] “There is a role for government and we respect that,” said David Kenny of IBM’s A.I. division. On this view, attempting to stave off government regulation altogether would be foolish. Yet a sustained practice of supplying information to a regulatory agency risks regulatory capture, wherein the agency comes to rely so heavily on that information that the regulated wind up manipulating the agency via slanted “information,” and even controlling it. All levels of government should therefore keep their information sources diverse, so that no single source—especially the industry’s—is in a position to manipulate or control the government on the matter of A.I.

Ideally, the industry and government should work together to keep the companies within acceptable boundaries, yet crucially without the government giving up alternative sources of information, by which the veracity of the industry’s own information can be checked and the government kept from being captured. There is indeed a legitimate role for government regulation, as opposed to relying on an industry to regulate itself; the profit motive is simply too strong to go exclusively with self-regulation. Even Adam Smith maintained that government has a role even in a perfectly competitive industry. And just how many truly competitive industries are there?

[1] John Markoff, “Devising Real Ethics for Artificial Intelligence,” The New York Times, September 2, 2016.
[2] Ibid.
[3] Ibid.
[4] Ibid.
[5] Ibid.
[6] Ibid.