Thursday, September 22, 2016

Russian Electoral Fraud: A Threat to Constitutional Governance

In spite of Ella Pamfilova’s appointment in March 2016 to “clean house and oversee transparent, democratic elections,” . . . “a statistical analysis of the official preliminary results of the country’s September 18 [2016] State Duma elections points to a familiar story: massive fraud in favor of the ruling United Russia party.”[1] “The results of the current Duma elections were falsified on the same level as the Duma and presidential elections of 2011, 2008, and 2007, the most falsified elections in post-Soviet history, as far as we can tell,” physicist and data analyst Sergei Shpilkin said to The Atlantic. In 2008, Shpilkin estimated that United Russia had actually won 277 seats in the Duma rather than the 315 it was awarded—a “constitutional majority” because 315 exceeds the two-thirds supermajority (300 of 450 seats) required for the Duma to pass constitutional amendments.[2] With 315 seats, in other words, Putin’s party could unilaterally amend the Russian constitution; with 277, it could not. From a constitutional standpoint, either the hurdles in the amendment process are too low or the election fraud has been so massive that the entire form of government is impaired.

The official turnout for the 2016 election “was 48 percent, and United Russia polled 54.2 percent of the party-list vote—about 28,272,000 votes. That total gave United Russia 140 of the 225 party-list seats available in the Duma. . . . In addition, United Russia candidates won 203 of the 225 contests in single-mandate districts, giving the party an expected total of 343 deputies in the 450-seat house.”[3] With the “projected 343 deputies in the new parliament, United Russia once again has enough votes to unilaterally alter the constitution.”[4]
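The arithmetic is easy to verify. Here is a minimal sketch in Python; the seat and vote figures come from the article, while the registered-voter total used in the consistency check (roughly 110 million) is my own rough estimate, not a figure from the source.

# Seat arithmetic from the article's figures.
seats_party_list = 140        # of 225 party-list seats
seats_single_mandate = 203    # of 225 single-mandate districts
total_seats = seats_party_list + seats_single_mandate   # 343
duma_size = 450
supermajority = 2 * duma_size // 3   # two-thirds threshold: 300 seats
print(total_seats, total_seats >= supermajority)  # 343 True: enough to amend

# Consistency check on the vote count: 28,272,000 votes at 54.2 percent
# implies about 52.2 million valid party-list ballots, which squares with
# 48 percent turnout among roughly 110 million registered voters
# (my rough estimate; the registration figure is not in the article).
implied_ballots = 28_272_000 / 0.542           # ~52.2 million
print(round(implied_ballots), round(0.48 * 110_000_000))  # ~52.2M vs. ~52.8M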

“By my estimate,” Shpilkin said, “the scope of the falsification in favor of United Russia in these elections amounted to approximately 12 million votes.”[5] He “shows that almost all ‘extra’ votes from polling stations reporting higher-than-average turnout went to United Russia. That is, a party such as ultranationalist Vladimir Zhirinovsky’s LDPR received virtually the same number of votes from polling stations reporting a turnout of 95 percent as it did from stations reporting turnouts of 65 percent. United Russia, by contrast, received about four times as many at the 95 percent stations.”[6]
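Shpilkin’s approach can be illustrated with a minimal sketch, assuming a per-polling-station data file with hypothetical column names (turnout as a fraction, plus each party’s raw vote count). Binning stations by reported turnout and averaging each party’s votes per station within each bin reproduces the comparison he describes: in clean data, every party’s average moves roughly in parallel across turnout levels, whereas a party whose average climbs steeply at implausibly high-turnout stations bears the signature of stuffed ballots.

# A sketch of Shpilkin-style turnout analysis. The file name and
# column names ("turnout", "votes_united_russia", "votes_ldpr")
# are hypothetical placeholders.
import csv
from collections import defaultdict

bins = defaultdict(lambda: defaultdict(int))   # turnout bin -> party -> votes
counts = defaultdict(int)                      # stations per bin

with open("stations.csv", newline="") as f:
    for row in csv.DictReader(f):
        b = round(float(row["turnout"]) * 20) / 20   # 5-point turnout bins
        counts[b] += 1
        for party in ("votes_united_russia", "votes_ldpr"):
            bins[b][party] += int(row[party])

# Average votes per station in each turnout bin.
for b in sorted(bins):
    avg = {p: bins[b][p] / counts[b] for p in bins[b]}
    print(f"turnout {b:.0%}: {avg}")

On Shpilkin’s figures, LDPR’s per-station average would be roughly flat between the 65 percent and 95 percent bins, while United Russia’s would roughly quadruple.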

Fraud on the order of 12 million votes is indeed massive, and it is clearly enough to render the existing constitutional amendment process dysfunctional; a constitution should not be a document that one party can unilaterally change. The crime, ostensibly committed by Putin’s party, is therefore sufficient to impair the rule of law from a democratic standpoint.

The problem with reforming the elections is that lesser powers cannot thwart the hegemonic party, which could easily block even small reforms. It may well take a huge groundswell of Russian people uniting to push for meaningful change that would loosen United Russia’s overwhelming grip. For this to occur, the groundswell would have to be non-partisan rather than channeled through the other extant parties; power at once solidified and widespread would have to coalesce in order to overcome that of United Russia. Such a cause would naturally be to safeguard the constitutional system itself from the reach of even the largest party. This is a formidable feat, not only because of the continuing power of United Russia, but also because of the difficulty of concentrating the diverse and decentralized power of the people—the popular sovereign, to whom the Russian government and the constitution should rightfully defer, as an agent defers to its principal.


[1] Valentin Baryshnikov and Robert Coalson, “12 Million Extra Votes for Putin’s Party,” The Atlantic, September 21, 2016.
[2] Ibid.
[3] Ibid.
[4] Ibid.
[5] Ibid.
[6] Ibid.

Wednesday, September 21, 2016

Tech Industry Self-Regulation: Sufficient to Handle the Ethics of A.I.?

Five of the world’s largest tech companies—Google’s Alphabet, Amazon, Facebook, IBM, and Microsoft—had by September 2016 been working out the impact of artificial intelligence on jobs, transportation, and the general welfare.[1] The basic intention was “to ensure that A.I. research is focused on benefiting people, not hurting them.”[2] The underlying ethical theory is premised on a utilitarian consequentialism wherein benefit is maximized while harm is minimized. Whether the companies should be joining together when the aim is to forestall government regulation is ethically less clear, given the checkered past of industry self-regulation and the conflict of interest involved.

People at the companies were concerned at the time that regulators would “create rules around their A.I. work,” so the managers were “trying to create a framework for a self-policing organization.”[3] I submit that the self-policing itself is problematic. For one thing, industry self-regulation can be less than fully effective: companies have an immediate self-interest in colluding so that the industry body eases up on enforcing the planks agreed to, even as that body presents a solid front to outsiders. Put another way, people’s faith in companies notwithstanding, the senior managers of an industry’s companies can all agree to let the industry’s own regulatory body relax enforcement so that all of the companies, or even just the market leader, will benefit. Hence, industry self-regulation can devolve into the proverbial fox guarding the henhouse.

More broadly, the intention to forestall government regulation (with or without industry self-regulation) is ethically problematic in that it presupposes that lawlessness, or inherent weakness, lies only beyond the companies and their industry themselves. Put another way, the pernicious mentality that government control is for the other guy can lie behind such an intention. Incredibly, Wall Street bankers still felt that financial deregulation was needed even after the scandal over bonds based on subprime mortgages nearly brought down the American, and perhaps even the global, financial system. Imagine such a mentality—of not needing government regulation in spite of known industry flaws—paired with self-regulation. The mentality that the other guy is to blame (in that case, the mortgage borrowers) is conducive to intentional lapses in self-regulation.
   
Returning to the tech companies: also in September 2016, a Stanford University project issued a report funded by a Microsoft researcher. The report “attempts to define the issues that citizens of a typical North American city will face in computers and robotic systems that mimic human capabilities.”[4] The report is a mixed bag.

On the one hand, the report claims that attempts to regulate A.I. would be misguided because there is no clear definition of A.I. and because “the risks and considerations are very different in different domains” of it.[5] Regulations are naturally oriented to specifics, however, so they could indeed be tailored to fit the distinctiveness of any given domain of A.I.

On the other hand, the report’s authors wanted to “raise the awareness of and expertise about artificial intelligence at all levels of government,” according to Peter Stone, one of the authors of the report.[6] “There is a role for government and we respect that,” said David Kenny of IBM’s A.I. division. By this logic, the attempt to stave off government regulation would be foolish. Yet a sustained practice of feeding information to a regulatory agency risks regulatory capture, wherein the agency comes to rely on the information so heavily that the regulated wind up manipulating the agency (via slanted “information”) and even controlling it. So all levels of government should keep their information sources diverse, such that no source, especially the industry’s own, is in a position to manipulate or control the government on the matter of A.I.

Ideally, the industry and government should work together to keep the companies within acceptable boundaries, yet crucially without the government giving up alternative sources of information, by which the veracity of the industry’s own information can be checked and the government kept from being captured. There is indeed a legitimate role for government regulation, as opposed to relying on an industry to regulate itself; the profit motive is simply too strong to rely exclusively on self-regulation. Even Adam Smith maintained that government has a role in a perfectly competitive industry. And just how many truly competitive industries are there?



[1] John Markoff, “Devising Real Ethics for Artificial Intelligence,” The New York Times, September 2, 2016.
[2] Ibid.
[3] Ibid.
[4] Ibid.
[5] Ibid.
[6] Ibid.