Cambridge Analytica, a political data firm founded by Stephen
Bannon and Robert Mercer and with ties to U.S. President Trump’s 2016 campaign,
“was able to harvest private information from more than 50 million Facebook
profiles without the social network’s alerting users.”[1]
The firm had purchased the data from a developer (a psychology professor at
Cambridge University in the U.K.) who had created a personality test, supposedly
for academic purposes, that Facebook users could take. The developer
violated Facebook’s policy on how user data could be used by third parties. The
data firm “used the Facebook data to develop methods that [the firm] claimed
could identify the personalities of individual American voters and influence
their behavior.”[2]
In other words, Cambridge Analytica used the purchased data to try to manipulate users
into voting for Donald Trump for U.S. president in 2016 by sending them pro-Trump messages.
Although Facebook had not known of the sale of the data to Cambridge Analytica
at the time, the social network, upon learning in 2015 of Cambridge Analytica’s
political use of the data, failed to notify the users whose data had been
compromised. Although 270,000 Facebook users took the developer’s personality
test, “the data of some 50 million users
. . . was harvested without their explicit consent via their friend networks.”[3]
It bears noting that the users among the 50 million who had not taken the
personality test should certainly have been informed. At the very least,
Facebook’s management could not be trusted either to keep users informed or to
protect them in the first place by adequately enforcing the third-party-use
policy. So it is ironic that Facebook’s untrustworthy management could be
unduly distrustful of ordinary users.
The psychological-political mixture in Cambridge Analytica’s
use of the data is downright creepy. Tapping into a psychology professor’s
methodology for inferring personality from data on a social-network platform
in order to send politically manipulative advertising to certain Facebook
users is highly invasive, even for the users who voluntarily took the
professor’s personality test online. Regardless of party affiliation, a
reaction of disapprobation to such overreach could be expected; hence the
operation was carried out by stealth, which is why Facebook’s management
erred so badly in failing to inform the 50 million users. Facebook’s stock
deserved to fall when the story finally broke in March 2018.
It is odd that Facebook’s management even permitted the developer,
the psychology professor who went on to sell the data to Cambridge Analytica,
to obtain the data in the first place to develop personality constructs for
academic purposes. It is also odd that Facebook’s management had been so naïve
concerning a political data firm, yet so demanding of individual users who
gave no cause for suspicion. Facebook suspended an account I set up
because I had sent a link to one of my academic articles to some scholars I knew. I deleted the account. A few
years later, I tried again. That time, Facebook demanded that I upload a clear facial picture of myself so I
could be identified. Apparently my phone number and email address were not
sufficient, even though I had not yet even used the account and thus could not
have violated any of the company’s use policies. I deleted that account rather
than supply a picture of myself because I was concerned about how the
facial-recognition software would be used, especially when combined with other
basic information I had included in the profile. It turns out I had reason to
be concerned, for even if my personality had not been inferred and I had not
been subjected to psychologically targeted political manipulation, the fact
that Facebook let a political firm in the door means that other harvesting
could have been going on. Furthermore, even if Facebook had discovered other
extractions, I could not trust that the company would have informed me.
It is telling, in short, that a company so distrustful demanded that I upload a
picture of my face so I could be identified, as if I were the one not to be
trusted. I suspect that the managers and their employees were projecting their
own untrustworthiness onto innocent users, while giving firms like Cambridge
Analytica a free hand. In other words, the folks at Facebook were very bad at
determining who is trustworthy.
The lesson here is that Facebook was not worthy of its users’ trust, and yet
strangely the users did not bolt en masse.
It could be that people in modern society had become so used to being
distrusted by people working in organizations, and to interacting with
distrustful companies, that the Facebook revelation was a mere blip on the
radar screen.
The philosopher Kant reasoned that promise-making is valid only
in a context in which promises tend to be kept; otherwise, promises would
simply be dismissed as worthless drivel. If large companies keep their
promises only when doing so is convenient for them, such a context could
recalibrate just how much worth promise-making justifiably deserves. If so, the business
world itself could contribute to a society in which distrust rather than trust
is the norm. When I lived in Tucson, Arizona, I experienced such a society. I
could feel not only the angst in the air, but also the passive aggression in
the distrust itself. Besides the police-state mentality being “beyond the pale”
even on the local university’s campus, the guarded watchfulness that was (and
surely still is) practiced between strangers on the city streets (as well as
between bus drivers and riders) carried an inherent aggressiveness. Likewise,
Facebook’s failure to notify users of the “harvesting” and its demand that I
furnish a photo of my face involved passive aggression, which is inherent in
unjustified disrespect. Are companies like Facebook unwittingly
turning modern society into Tucsons? If so, the link between distrust and
aggression should be made transparent so people can at least be aware of the
change.
For a business-ethics critique of Facebook, see Taking the Face Off Facebook.
1. Matthew Rosenberg and Sheera Frenkel, “Facebook Role In Data Misuse Sets Off
Storm,” The New York Times, March 19,
2018.
2. Ibid.
3. “Cambridge Analytica: Facebook ‘being investigated by FTC,’” BBC News (accessed March 20, 2018).