The question of the role of social-media companies exploded onto the world stage during the “Arab Spring” in the Middle East in early 2011, as protesters used social-media platforms to communicate before and during protests. Lest it be presumed that the companies’ respective policies mattered only in terms of what content (or users) was allowed and how that content could affect events on the ground, the policies themselves reflected the West’s claim about what liberty means. In other words, had social-media companies been (allowed to be) oppressive or otherwise disrespectful of their customers, the overall message to the oppressed in the Middle East could not have been that greater freedom is indeed possible because it exists in the West. Lest our own private sector unwittingly undercut the words and efforts of the protesters, we might use this case to ask whether we could not be freer too.
According to Ebele Okobi-Harris, the director of the business and human rights program at Yahoo, which owned Flickr at the time, the case of el-Hamalawy, an Egyptian activist whose uploaded pictures of security agents were abruptly taken down by Flickr staff, prompted internal discussions at the company about whether Flickr should reconsider its approach. What if the photos had been his own and he had not yet backed them up? Flickr’s abrupt and unannounced action suddenly seems quite oppressive. Fortunately, managers at Flickr were at least thinking about the issue. “As the uses of these social networks evolve,” Okobi-Harris said, “we have to start thinking about how to create rules on how to apply rules that also facilitate human rights activists using these tools.”[1]
Okobi-Harris “pointed to the challenges of balancing the existing rules and terms of service for users with the new ways that activists are using these tools. One challenge is whether a company should maintain its commitment to remain neutral about content, even when politicized content could offend users or even put people in danger. ‘Does a company take responsibility for the content?’”[2] For instance, what, el-Hamalawy asked, would Flickr do if a group that opposes abortion wanted to post photographs of doctors who perform abortions? In his own case, el-Hamalawy “said Flickr’s decision to take down the photos left him not only frustrated and angry but also terrified. ‘Everyone knew that I had released those photos,’ he said. ‘Then the photos were gone. I couldn’t sleep. I was thinking that at any minute, they were going to come for me.’”[3] Would Flickr managers have been responsible for el-Hamalawy’s death had it been occasioned by Flickr’s action? Or was it his own act of uploading the photos in the first place that put his life at risk? To be sure, Flickr should at least have notified him before taking down his pictures; the company was certainly responsible for causing him fear. However, this incident seems to me more like bad business than unethical conduct on Flickr’s part. Whether a customer was a protester oppressed by a dictatorship or simply a novice photographer who had uploaded her own pictures, there was reason to withhold one’s trust from Flickr.
Beyond the matter of bad customer relations, which seem to have been getting worse in American business, lies a question that must be addressed: whether social-media companies, including Facebook, Flickr, Instagram, Twitter, and YouTube, have been unwittingly biased toward oppressive governments, even if only out of a desire to maintain control over their respective sites.
In early 2011, it became clear that social-media platforms were increasingly being used by activists and pro-democracy forces, especially in the Middle East and North Africa, to the chagrin of the respective governments. As Okobi-Harris asked of Flickr, does a social-media company bear responsibility for the content? Furthermore, should such a company be susceptible to the influence of angered governments, whether in identifying users or barring their content?
According to The New York Times at the time, the “new role for social media has put these companies in a difficult position: how to accommodate the growing use for political purposes while appearing neutral and maintaining the practices and policies that made these services popular in the first place.”[4] In November 2007, YouTube had removed videos flagged as “inappropriate” by a community member “because they showed a person in Egypt being tortured by the police. They were uploaded by Wael Abbas, another Egyptian blogger involved in opposing torture in Egypt. After a public outcry, YouTube managers reviewed the videos and restored them.”[5] Had the managers been influenced by Egypt in taking down the videos, the company would have effectively taken sides in the dispute between the Egyptian government and its people.
Prima facie, removing videos or photos of the police torturing one or several of their own citizens is pro-authoritarian and anti-liberty. In other words, the very act enables aggression by states against their own people, whose liberty is treated as an unwanted stepchild. YouTube’s initial decision to pull the videos back in 2007 was in this sense a political decision. Accordingly, it should not have been labeled as simply a business decision. Seen in this light, YouTube’s employees enjoyed power beyond what working in business entails, and thus beyond what it entitles.
Alternatively, the issue could simply be whether a warning notice is appropriate given the graphic nature of anti-government material. I made the horrible mistake, for example, of watching the slow beheading of a Western hostage by a terrorist group in the Middle East. Even a year or two later, I could still hear the man’s raspy voice shouting for dear life as his throat was gradually deprived of air. Because I ignored the “graphic content” warning, the issue facing YouTube in 2011 perhaps does indeed go beyond whether such a warning should apply; in my case, my curiosity got the better of me. Should YouTube have been responsible for protecting me from myself? That could be good business, as I stayed away from the platform for a while once I realized what emotional power “real” videos could have. Yet such paternalism would open YouTube employees and their managers to grasping what is actually political power, at the very least where the subject matter itself is political. A blurry line runs between business and political power in the business realm, and power-aggrandizement can take advantage of the resulting discretion. Perhaps social-media managements could limit their intervention to extremely graphic content, with internal review in particularly harsh cases. Still, in a free society, citizens ultimately must take responsibility for ignoring warnings.
Regarding Facebook, The New York Times reported on March 26, 2011 that the company “has remained mostly quiet about its increasing role among activists in the Middle East who use the site to connect dissident groups, spread information about government activities and mobilize protests. But Facebook is now finding itself drawn into the Israeli-Palestinian conflict and has been pushed to defend its neutral approach and terms of service to some supporters of Israel, including an Israeli government official. Yuli Edelstein, an Israeli minister of diplomacy and diaspora affairs, sent a letter [in March 2011] to Facebook’s chief executive, Mark Zuckerberg, asking him to remove a Facebook page created that March named the Third Palestinian Intifada.” The page, which called for an uprising in the occupied Palestinian territory that May, had more than 240,000 members at the time. “As Facebook’s C.E.O. and founder, you are obviously aware of the site’s great potential to rally the masses around good causes, and we are all thankful for that,” Mr. Edelstein wrote. “However, such potential comes hand in hand with the ability to cause great harm, such as in the case of the wild incitement displayed on the above-mentioned page.” Facebook had, so far, not removed the page. Because the administrators of the page were not advocating violence, the page fell within the company’s definition of acceptable speech, company officials said. “We want Facebook to be a place where people can openly discuss issues and express their views, while respecting the rights and feelings of others,” said Andrew Noyes, a spokesman for public policy at the company. Facebook was trying to maintain its neutrality without getting drawn into politics.
The problem is, “wild incitement” could pertain to the pro-democracy rallies that had been taking place throughout the Middle East at the time. Even if violence were being called for in the Intifada, would Facebook (or Instagram) remove such content if it had been put up by an Egyptian or Libyan protester? More pointedly, what if a page or photo called for “wild incitement” in the midst of an attack by government troops or police? How far removed is an occupied people from such intimidation on a daily basis? Should they be barred from tweeting, “Come help me at X intersection b/c police are beating my elderly parents”? The staff at Facebook were smart not to intervene in disputes between a government and its people. If anything, an American-based company should have a bias in favor of liberty by taking the side of the oppressed, for the United States came into existence out of British oppression. Relatedly, the U.S. Government itself acts in concert with its own beginning whenever it takes the side of a people protesting against governmental oppression. Of course, American companies acting in favor of American values make de facto political judgments and decisions whose application in other cultures may be ill-fitting or even inappropriate.
Yet it is possible that particular company policies are inherently to the advantage of vengeful governments and thus a threat to Facebook’s customers under those governments. For example, The New York Times reported at the time that “Human rights advocates have also criticized Facebook for not being more flexible with some of its policies, specifically its rule requiring users to create accounts with their real names. Danny O’Brien, the Internet advocacy coordinator for the Committee to Protect Journalists, cited the case of Michael Anti, an independent journalist and blogger from China whose Facebook account was deactivated in January because he had not used his state-given name to create it. In addition to losing the ability to publish and communicate on Facebook, and not wanting to use his real name because of China’s strict rules governing freedom of speech and harsh response to those activists who violate them, he . . . lost the contact information for thousands of people in his Facebook community. ‘One can’t expect all of these services to provide everything to everyone,’ said Mr. O’Brien. ‘I think that part of the solution is to provide people with a dignified way of leaving the service.’”
O’Brien was giving too much credit to Facebook. It is insufficient to expect Facebook to have merely provided its customers with a dignified way to leave (or be deprived of) the service. In addition, Facebook ought to have respected the preference of some of its customers for anonymity, especially where their real names could bring them into harm’s way. Facebook still would have had those customers’ contact information (more of which could be demanded and verified in such cases), so anonymity would not have been an excuse to get away with unethical or illegal conduct, such as publicly defaming someone by making false claims. At the very least, I contend that being refused anonymity gives a person an ethical basis for lying about being on Facebook.
Moreover, Facebook’s insistence at the time that real names be used adds to the argument that the U.S. Constitution should be amended to include an explicit right to privacy. Much of the criticism of Roe v. Wade is actually that the justices “found” such a right to be implicit in the Constitution. Even people in favor of making abortion legal could raise (and have raised) that objection. Another objection concerns whether the decision should have been made at the federal level or left with the States.
While Facebook’s insistence that customers use their real names may have had implications for U.S. constitutional law, bad business (i.e., bad customer service) may have been Facebook’s underlying problem. In fact, had Facebook respected its potential and actual customers who preferred anonymity, it could have sent a message stronger than any from the protesters or human-rights advocates in the Middle East, namely: “Look over here! Real freedom is indeed possible!”
1. Jennifer Preston, “Ethical Quandary for Social Sites,” The New York Times, March 27, 2011.
2. Ibid.
3. Ibid.
4. Ibid.
5. Ibid.