A Facebook “challenge” asking users to post a current photo alongside one from a decade earlier went viral in early 2019. Although the company was probably not behind the “challenge” going viral, the fact that it had been working on facial recognition technology made users suspicious of the motive behind the “challenge.”[1]
A writer for Wired wrote at the time, “Imagine that you wanted to train a
facial recognition algorithm on age-related characteristics and, more
specifically, on age progression (e.g., how people are likely to look as they
get older). Ideally, you’d want a broad and rigorous dataset with lots of
people’s pictures. It would help if you knew they were taken a fixed number of
years apart—say, 10 years.”[2]
Why would Facebook want to track how a person is likely to look years later? Some users put up an old picture of themselves or simply never update the existing photo; Facebook may have wanted to know what those users likely look like now so it could identify them in current pictures uploaded by others. If so, why did the company deny using the “challenge” for such a seemingly legitimate purpose as connecting people socially? The company insisted that it gained no benefit from the “challenge” going viral. This statement seems suspicious, especially given the company’s earlier lapses on user privacy. I contend that an even more toxic subterfuge existed at Facebook at the time: a policy that held user accounts hostage until a clear facial picture could be supplied.
Perhaps because of the company’s track record on user privacy since Cambridge Analytica, the company’s statement also sought to reassure users by reminding them that they “can choose to turn facial recognition on or off at any time.”[3] While technically true, this obscured the fact that the company could freeze any user’s account, supposedly for security reasons, until a current picture clearly showing the user’s face was supplied.
A month or so after I created an account, even though nothing about my activity could have raised a legitimate security concern, I discovered one day that Facebook’s system had blocked my account until I re-verified the phone number I had used in setting it up, a number that had already been verified successfully at the time. I complied nonetheless, only to find days later that a current picture clearly showing my face was needed “for security reasons”; until I supplied one, I could not use my account, ostensibly for my own protection. That I had not used the account for anything remotely suspicious, let alone trolling or spam, led me to view the “security” rationale as fake, or at least as excessive.
In demanding a face picture from me, the company indicated that the picture would not go on my profile, but this distinction makes no difference to the company’s ability to run facial recognition software on me from then on, for its own internal uses and even for those of external stakeholders such as the FBI. Facebook’s work on facial recognition AI over the previous two years had included such uses as tagging users in other users’ photos even when the photographed user does not know the photographer. Under the subterfuge of a “security need” for a current picture that other users will not see, the picture can still be used to tag the user without his or her awareness.
I contend, therefore, that Facebook’s demand for a clear face photo was unethical. Beyond the company’s horrendous track record on safeguarding user privacy, holding users’ accounts for ransom is presumptuous, like a misbehaving child demanding that his parents take him to Disneyland anyway. The unethical verdict also follows from the felt vulnerability that is natural (even among innocent people!) in handing a face picture to strangers, even strangers with a good reputation for securing privacy. For a company to dismiss that vulnerability and go so far as to demand a clear face picture can be reckoned a harm (even a passive aggression) that is ethically unjustifiable. Furthermore, the lying, such as Facebook’s claim that it gained no benefit from the treasure trove of before-and-after pictures, and the subterfuge that a user can always turn facial recognition off (even as the company uses it under the pretext of a security need to connect a face to an account) are themselves unethical. Anticipating the later revelations of privacy breaches, I wrote a booklet, Taking the Face off Facebook, on the unethical management at the company. It may therefore be that I am on a “make problems for” list at Facebook, but other users have complained of having their accounts held hostage too, so I suspect the problem lay with the company’s managers, including its CEO.
1. Nicole Martin, “Was the Facebook ’10 Year Challenge’ a Way to Mine Data for Facial Recognition AI?,” Forbes, January 17, 2019.
2. Ibid.
3. Ibid.