Thursday, May 16, 2019

Facebook: Holding User Accounts Hostage

A Facebook “challenge” asking users to post a current photo and one from a decade earlier went viral in early 2019. Even though it is unlikely that the company was behind the “challenge” going viral, the fact that the company had been working on facial-recognition technology left users suspicious of the motive behind it.[1] A writer for Wired observed at the time, “Imagine that you wanted to train a facial recognition algorithm on age-related characteristics and, more specifically, on age progression (e.g., how people are likely to look as they get older). Ideally, you’d want a broad and rigorous dataset with lots of people’s pictures. It would help if you knew they were taken a fixed number of years apart—say, 10 years.”[2] Why would Facebook want to track how a person is likely to look years later? Some users put up an old picture of themselves or simply never update the existing photo, so why would Facebook want to know how those users likely look now? Perhaps the company wanted to be able to identify those users in current pictures uploaded by others. If so, why did the company deny using the “challenge” for such a legitimate purpose as connecting people socially? Whatever the motive, the company insisted that it gained no benefit from the “challenge” going viral. This statement seems suspicious, especially given the company’s earlier lapses on user privacy. I contend that an even more toxic subterfuge existed at Facebook at the time: the company held user accounts hostage until a clear facial picture could be supplied.
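To see why such a meme would be valuable, consider a minimal sketch in Python (all names and data here are invented for illustration; nothing below reflects any actual Facebook code or disclosed practice) of how self-labeled then-and-now photo pairs with a known ten-year gap could become supervised training examples for an age-progression model:

    # Hypothetical sketch only: how a viral "then vs. now" meme could
    # yield labeled training pairs for an age-progression model.
    from dataclasses import dataclass

    @dataclass
    class MemePost:
        user_id: str
        photo_then: str   # hypothetical path/URL of the "2009" photo
        photo_now: str    # hypothetical path/URL of the "2019" photo

    def to_training_pairs(posts, years_apart=10):
        """Turn meme posts into (input, target, years_elapsed) examples.

        The meme's format does the curation work for free: each post
        supplies two photos of the same person, self-labeled by the
        user, a fixed number of years apart.
        """
        return [(p.photo_then, p.photo_now, years_apart) for p in posts]

    posts = [
        MemePost("alice", "alice_2009.jpg", "alice_2019.jpg"),
        MemePost("bob", "bob_2009.jpg", "bob_2019.jpg"),
    ]

    for young, old, gap in to_training_pairs(posts):
        print(f"train: predict {old} from {young} ({gap} years later)")

The point of the sketch is simply that the hard part of building such a dataset, namely clean pairing of photos of the same person and a known time gap between them, is exactly what the meme’s format handed over voluntarily.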

Perhaps because of the company’s track record on user privacy since Cambridge Analytica, the company’s statement also sought to reassure users by reminding them that they “can choose to turn facial recognition on or off at any time.”[3] While technically true, this reassurance glosses over the fact that the company could freeze any user’s account, supposedly for security reasons (that is, to protect the user), until a current picture clearly showing the face was supplied.

A month or so after I created a user account, and in spite of having given no legitimate grounds for a security concern, I discovered one day that Facebook’s system had blocked my account until I could verify the phone number I had used in setting up the account, even though that number had already been verified successfully at the time. I verified it again nonetheless, only to find days later that a current picture clearly showing my face was needed “for security reasons.” Until I supplied one, I could not use my account, supposedly for the sake of my own security. That I had not used the account for anything remotely suspicious, not to mention trolling or spam, led me to view the “security” rationale as fake, or at least as excessive. In demanding a face picture from me, the company indicated that the picture would not go on my profile, but this distinction makes no difference to the company’s ability to run facial-recognition software on me from then on, whether for internal uses or even those of external stakeholders such as the FBI. Facebook’s work on facial-recognition AI over the previous two years had included such uses as tagging users in other users’ photos even if the photographed user did not know the photographer. Under the subterfuge of a “security need” for a current picture that other users would not see, the picture could still be used to tag the user without his or her awareness.

I contend, therefore, that Facebook’s demand for a clear face photo was unethical. Given the company’s horrendous track record on safeguarding user privacy, holding users’ accounts for ransom nonetheless seems presumptuous, like a badly behaved child demanding that his parents take him to Disneyland anyway. The verdict also rests on the felt vulnerability that is natural (even among innocent people!) in handing a face picture to unknown people, even ones with a good reputation for securing privacy. For a company to dismiss that vulnerability and go so far as to demand a clear face picture can be reckoned a harm (even a form of passive aggression) that is ethically unjustifiable. Furthermore, the lying, such as Facebook’s claim that it gained no benefit from the treasure trove of before-and-after pictures, and the subterfuge of assuring users that they can always turn facial recognition off (even as the company uses it under the pretext of a security need to connect a face to an account) are themselves unethical. Anticipating the later revelations of privacy breaches, I wrote a booklet, Taking the Face off Facebook, on the unethical management at the company. It may therefore be that I am on a “make problems for” list at Facebook, but other users have complained of having their accounts held hostage too, so I suspect the problem lay with the company’s managers, including its CEO.

[1] Kate O’Neill, “Facebook’s ‘10 Year Challenge’ Is Just a Harmless Meme—Right?,” Wired, January 15, 2019.
[2] Ibid.
[3] Ibid.

Wednesday, May 15, 2019

The FAA Deferred to Boeing on the 737 MAX Jet

After a misfiring-prone automatic stall-prevention device on the 737 MAX jet had caused two crashes in which 346 people died, an internal review at the U.S. Federal Aviation Administration, a regulatory agency, found that the regulators had relied too much on Boeing employees to conduct the safety inspections of the planes. Incredibly, Congress had expanded the agency’s industry-reliance practice in 2018. Both the FAA and Congress were admittedly motivated by the added efficiency that such “sub-contracting” could bring. However, to focus on the economic benefit while ignoring the inherent (and obvious) conflict of interest in “sub-contracting” to the very companies that the FAA regulates is itself a red flag. A subservient or over-reliant regulatory agency cannot be a check on a company’s claims of not having sacrificed safety, or even safety checks, in order to focus more on profitability. Of course, the political influence of a large company such as Boeing may have played a role in the FAA’s “back-seat” approach, but in this case the government’s own interest in stretching the coverage of its human resources may have been dominant. That such an interest could involve minimizing or ignoring outright a blatant conflict of interest may point to a wider culture in which institutional conflicts of interest are presumed to be innocuous or even benign, rather than too toxic to permit even when they have not been actively exploited.

During the FAA certification process for the 737 MAX, Boeing “didn’t flag the automated stall-prevention feature as a system whose malfunction or failure could cause a catastrophic event.”[1] The FAA’s report does not point to any fabrication on the part of the company. The problem is that “FAA engineers and midlevel managers deferred to Boeing’s early safety classification.”[2] Such deference provided no check on the company’s determination. It is astounding that managers at a regulatory agency could have neglected or ignored this basic point, which gets at the raison d’être of any regulatory agency. It is truly unbelievable.

In fact, the company’s initial safety classification allowed “company experts to conduct subsequent analyses of potential hazards with limited agency oversight.”[3] The operative assumptions in this practice seem to be that experts cannot be wrong at the outset, or that they will eventually catch their own errors, and that such experts are not subject to pressure from managers to get the planes in the air and generating revenue that can at least cover the payments on the planes themselves.

Even worse, the FAA classified certain Boeing employees as “designated agency representatives.”[4] Employees of a regulated company cannot represent the regulatory agency, for such a designation is itself an institutional conflict of interest. It is, in effect, to designate one wolf as a police-wolf around a hen house! How can this not be obvious? I submit that only in a permissive culture can such blind spots thrive. The FAA’s practice of designating some employees of regulated companies as being able “to act for the agency” was set up by the FAA and “endorsed and expanded” by Congress with “the aim of freeing up government resources to focus on what are deemed the most important and complex safety matters.”[5] Was not something that ended up killing hundreds of people an important safety matter? FAA managers might retort, “But we didn’t know this except in hindsight.” Exactly. That is precisely what minimizing or ignoring a huge conflict of interest can do.

See Institutional Conflicts of Interest, available at Amazon.

[1] Andy Pasztor, Andrew Tangel, and Alison Sider, “FAA Left 737 MAX Review to Boeing,” The Wall Street Journal, May 15, 2019.
[2] Ibid.
[3] Ibid., italics added.
[4] Ibid.
[5] Ibid.