"The greatness and the genuine trait of your thought and writings lie on the fact that you positively and interestingly make use of philosophical thoughts and thoughtfulness in order to deeply and concretely cogitate about America's social issues. . . . This does not mean that your thought is reducible to your era: your thought, being inspired by issues characterizing your era . . . , overcomes your era and will still likely be up to date even after your era, for future generations." Bruno Valentin

Thursday, March 29, 2018

An Interfaith Declaration of Business (Ethics)

Released in 1994, “An Interfaith Declaration: A Code of Ethics on International Business for Christians, Muslims, and Jews” consists of two parts: principles and guidelines. The four principles (justice, mutual respect/love, stewardship, and honesty) are described predominantly in religious terms, devoid of any connection to business. In contrast, the guidelines invoke the principles in their ethical sense, devoid of any religious connotation. The disconnect in applying religious ethics to business is not merely in books; the heavenly and earthly cities are as though separated by a great ocean of time.

Are these religions applicable to business?    

To be sure, the text refers to business in discussing the ethical principles of love, stewardship, and honesty, however briefly. Love in the business world is to extend beyond corporate boundaries to stakeholders. Stewardship applies to a business’s use of resources such that ownership itself is qualified beyond the reach of regulation. Lastly, honesty includes the use of “true scales.” The honest are said to receive a religious reward (i.e., resurrection), presumably to compensate for any monetary loss incurred by being honest in business.

Turning to the guidelines for business, they are portrayed in the text predominantly as a defense of corporate capitalism. Strangely, the reference to the principles is devoid of any religious association. The following guideline is typical: “The efficient use of scarce resources will be ensured by the business” (A.7). Another guideline adds a reference to an ethical principle: “Competition between businesses has generally been shown to be the most effective way to ensure that resources are not wasted, costs are minimized and prices fair” (A.2). To be sure, fairness is indeed an ethical principle, which John Rawls applies in his A Theory of Justice. However, fairness is not among the religious ethical principles. Furthermore, no religious content is referenced in that guideline, nor in still another: “The basis of the relationship with the principal stakeholders shall be honesty and fairness, by which is meant integrity” (B.3). The reader is left to ponder what integrity looks like in terms of the three Abrahamic religions.

A major problem in relating monotheism to business ethics comes down to the enigma that God’s omnipotence cannot be limited by a human ethical system, and yet divine decrees that violate secular ethical principles are untenable and thus typically considered to be invalid. For example, killing people who refuse to convert because God says so rankles the modern conscience into seemingly rebelling against the Ultimate. The question naturally flares up of whether God really decrees the sordid practice. Looking out of a smoked window in this earthly realm, we mortals tend to conceptualize or sense God as extending beyond the limits of human perception and cognition. This means that we cannot rely on any firm answer in justifying a divine decree above a social ethic.

For example, insisting that employees keep the Sabbath, whether on Friday, Saturday, or Sunday, may not be fair to the workers who do not recognize the validity of the Ten Commandments. Given the limitations discussed above, which preempt religious intuition, belief, and experience from being recognized as factual knowledge, an employer cannot justifiably treat the revelation as though it were a fact that an objecting employee has no cause to ignore. The question of the revelation's divine validity is ultimately at stake here, and no answer can possibly settle the matter in dispute.

In conclusion, it follows that throwing monotheism into the mix of business and ethics cannot be reduced to a simplistic list of determinate guidelines. Getting beyond the “oil and water” of the sacred and profane turns out to be a whale of a challenge for religious business practitioners. In Christian terms, the problem can be put in terms of whether the "fully human and fully divine" Christology, devoid of blending, is a sufficient basis to cross the ocean of time between Sunday and Monday.


Related paper: "Religion in Strategic Leadership: A Positivistic, Normative/Theological and Strategic Analysis," Journal of Business Ethics (2005) 57: 221-239.

Related book: God's Gold. The text goes through the history of Christian thought on how greed is related to wealth and profit-seeking, and proffers an explanation for why the dominant stance shifted historically from anti-wealth to pro-wealth.

Friday, March 23, 2018

Corporate Social Responsibility Is Not Altruistic: The Case of Amazon Prime

In a doctoral seminar on corporate social responsibility (CSR), the professor turned to me, perhaps because by then I was also taking courses in the religious studies department, and asked, “What is enlightened self-interest?” In my answer, I argued that such self-interest is distinctly oriented to the long term, rather than, for example, to immediate profits. Alternatively, I could have stressed the ethical connotation of the word enlightened, but the self-interest component would seem to invalidate an ethical basis. In line with the notion of love as caritas, which is human love (eros) sublimated and directed up to God, as distinct from agape, which excludes lower, self-interested love, doing good can go along with long-term self-interest. In other words, doing good has value because good is done, even if self-interest is salient in the motive. In regard to CSR, the self-interest that coincides is long-term-oriented. Amazon, for instance, giving the poor (i.e., Medicaid recipients) 50 percent off the monthly charge for Amazon Prime is in line with gaining full-paying customers eventually, for it usually takes a while for poor people to move up the economic ladder.
In 2017, Amazon made a discount of almost 50 percent on Prime memberships available to people receiving “food stamps.” The following year, the company expanded its reach by extending the discount to people with Medicaid medical insurance. The first step in increasing a standard customer base is to reach out to people who would not become customers without an additional incentive. Amazon’s management wanted “to gain more market share among low-income consumers and those without access to traditional banking and credit.”[1] The company was betting that a significant enough percentage of the discount-taking customers would eventually have enough wealth and access to banking that they could pay the full monthly price. I suspect that a manager “ran the numbers” based on an estimate of that percentage and set the discount accordingly as a break-even point.
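My suspicion that a manager “ran the numbers” can be made concrete with a toy model. The prices, cost, and time horizons below are purely illustrative assumptions of mine, not Amazon’s actual figures; the sketch only shows the kind of break-even arithmetic such an estimate would involve.

```python
# A toy break-even model for a discounted-membership cohort.
# All numbers are illustrative assumptions, not Amazon's actual figures.
FULL_PRICE = 12.99     # assumed full monthly membership price
DISCOUNT_PRICE = 5.99  # assumed discounted monthly price
COST_TO_SERVE = 7.00   # assumed monthly cost of serving a member

def breakeven_conversion(months_discounted=36, months_full=60):
    """Return the fraction of discounted members who must eventually
    convert to the full price for the cohort to break even
    (no churn or time-discounting, for simplicity)."""
    # Every discounted member is served below cost during the discount period.
    loss_per_member = (COST_TO_SERVE - DISCOUNT_PRICE) * months_discounted
    # Each convert later yields a margin over the full-price period.
    gain_per_convert = (FULL_PRICE - COST_TO_SERVE) * months_full
    # Break even when expected gains offset the cohort-wide loss.
    return loss_per_member / gain_per_convert
```

Under these made-up numbers, roughly one in ten discounted members would need to become a full-paying member for the program to pay for itself; a real analysis would add churn, time-discounting of future revenue, and spillover purchases.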
That Amazon’s management was likely geared to the company’s long-term financial interest, in terms of market share generally and of turning impoverished people into full-paying customers more specifically, does not mean that societal good was not enhanced, for the purchasing power of the poorest of the poor could be expanded. The good, in other words, lay in the added utility, and this is a significant ethical good, for the poorest, I can attest, suffer unrelentingly under the hardships of poverty. Not even hard work can result in appreciable change in income and wealth. The poor benefiting from Amazon’s discount justifiably don’t care whether the company’s management extended the offer in order to gain market share.
A company’s enlightened self-interest in CSR does not mean that good is not done. It’s “certainly the case that we’re hoping to create some lifetime Prime members here,” a program manager at Amazon said when the expansion to Medicaid occurred in 2018.[2] The company was positioning itself to go head to head with Walmart. Amazon was clear that it was “making this move for business reasons, not for altruism, but”—and here is my point—“that doesn’t mean it won’t help people,” said Avi Greengart, an industry analyst at a marketing research firm.[3] Altruism may actually be quite rare in human nature, or even non-existent in its pure form, even as human nature appreciates the good.
Caritas is much more realistic than agape. It is for this reason that the latter is designated as divine love—the self-emptying (hence selfless) love of a deity that has no human nature. In Christianity, Augustine and Calvin emphasize in their respective writings that God is love. These theologians differed, however, on whether it is too much to ask humans to have and display selfless (agape) love rather than merely self-interest-infused love aimed high at God (caritas); Calvin was the more idealistic in this respect.
Doing good in the sense of improving the lot of other people applies not only to the Christian notion of neighbor-love, that is, caritas seu benevolentia universalis, but also simply to wanting to make a positive impact on society. Self-interest is more salient in the latter--that is, in doing good ethically in the absence of love--but this does not mean that good is not done, even if as a byproduct. This brings us back to corporate social responsibility, realistically construed.

1. Elizabeth Weise, “Medicaid Recipients Can Get Discount on Amazon Prime,” USA Today, March 8, 2018.
2. Ibid.
3. Ibid.

Monday, March 19, 2018

The Founder of Theranos: A Flawed Charismatic Vision and Leader

“Theranos rose quickly from being a college dropout’s idea to revolutionize the blood analysis industry to a hot tech bet that accrued $700 million in funding and many famous names for its board.”[1] Elizabeth Holmes, the company’s founder, was stripped of her position at the company in 2018 after the SEC discovered her deep involvement in the fraud at the company. Her “smarts, fierce determination and Steve Jobs-inspired look . . . were critical” to her ability to perpetuate the lie that the company had a device that could do blood tests with just a scant amount of blood, obviating the unpleasant experience of having blood drawn by needle.[2] Although Jack Welch, Bill Gates, and Steve Jobs accomplished enough to warrant their fame, I submit that companies are too prone to create “champions”—even strangely calling them “rock stars.” In other words, even though a charismatic vision is of value to a business, neither such a leader nor his or her vision should be overplayed. Business, I submit, has a marked tendency to do just that, and often with impunity.

On leadership vision, see Skip Worden, The Essence of Leadership: A Cross-Cultural Foundation

[1] Marco della Cava, “Behind the Scenes of Theranos’ Dramatic Rise, Fall,” USA Today, March 16, 2018.
[2] Ibid.

Facebook: A Distrustful Company Projecting Distrust

Cambridge Analytica, a political data firm founded by Stephen Bannon and Robert Mercer and with ties to U.S. President Trump’s 2016 campaign, “was able to harvest private information from more than 50 million Facebook profiles without the social network’s alerting users.”[1] The firm had purchased the data from a developer (a psychology professor at Cambridge University in the E.U.) who had created a personality test that Facebook users could take, supposedly for academic purposes. The developer violated Facebook’s policy on how user data could be used by third parties. The data firm “used the Facebook data to develop methods that [the firm] claimed could identify the personalities of individual American voters and influence their behavior.”[2] In other words, Cambridge Analytica used the purchased data to manipulate users into voting for Donald Trump for U.S. president in 2016 by sending pro-Trump messages. Although Facebook had not known of the sale of the data to Cambridge Analytica at the time, the social network, upon learning in 2015 of Cambridge Analytica’s political use of the data, failed to notify its users whose data had been compromised. Although 270,000 Facebook users took the developer’s personality test, “the data of some 50 million users . . . was harvested without their explicit consent via their friend networks.”[3] It bears noting here that those of the 50 million users who had not taken the personality test should definitely have been informed. At the very least, Facebook’s management could not be trusted either to keep users informed or to protect them in the first place by adequately enforcing the third-party-use policy. So it is ironic that Facebook’s untrustworthy management could be unduly distrustful of ordinary users.
The psychological-political mixture in Cambridge Analytica’s use of the data is downright creepy. Tapping into a psychology professor’s methodology for inferring personality from data on a social network platform so as to be able to send politically manipulative advertising to certain Facebook users is highly invasive, even for the users who voluntarily took the professor’s personality test online. Regardless of party affiliation, a reaction of disapprobation to such an over-reach could be expected; hence the operation was stealthy—which is why Facebook’s management erred so in failing to inform the 50 million users. Facebook’s stock deserved to fall when the story finally broke in March 2018.
It is odd that Facebook’s management even permitted the developer, the psychology professor who went on to sell the data to Cambridge Analytica, to obtain the data in the first place to develop personality constructs for academic purposes. It is also odd that Facebook’s management had been so naïve concerning a political data firm, and yet so demanding of individual users who displayed no cause for suspicion. Facebook suspended an account I set up because I had sent a link to one of my academic articles to some scholars I knew. I deleted the account. A few years later, I tried again. That time, Facebook demanded that I upload a clear facial picture of myself so I could be identified. Apparently my phone number and email address were not sufficient, even though I had not yet even used the account and thus could not have violated any of the company’s use-policies. I deleted that account rather than supply a picture of myself because I was concerned about how the facial-recognition software would be used, especially when combined with other basic information I had included in the profile. It turns out I had reason to be concerned, for even if my personality had not been profiled and I had not been subjected to psychological political manipulation, the fact that Facebook let a political firm in the door means that other harvesting could have been going on. Furthermore, even if Facebook had discovered other extractions, I could not trust that the company would have informed me.
It is telling, in short, that a company so distrustful demanded that I upload a picture of my face so I could be identified—as if I were the one not to be trusted. I suspect that the managers and their employees were projecting their own distrustfulness onto innocent users, while giving firms like Cambridge Analytica a free hand. In other words, the folks at Facebook were very bad at determining who is trustworthy. The lesson here is that Facebook was not worthy of its users’ trust, and yet strangely the users did not bolt en masse. It could be that people in modern society had become so used to being distrusted by people working in organizations, and to interacting with distrustful companies, that the Facebook revelation was a mere blip on the radar screen.
The philosopher Kant reasoned that promise-making is only valid in a context in which promises tend to be kept; otherwise, promises would simply be dismissed as worthless drivel. If large companies keep their promises only when doing so is convenient for them, such a context could recalibrate just how much worth promise-making justifiably deserves. If so, the business world itself could contribute to a society in which distrust rather than trust is the norm. When I lived in Tucson, Arizona, I experienced such a society. I could feel not only the angst in the air, but also the passive aggression in the distrust itself. Besides the police state being “beyond the pale” even on the local university’s campus, the guarded watchfulness that was (and surely still is) practiced between strangers on the city streets (as well as between bus drivers and riders) included an inherent aggressiveness. Likewise, Facebook’s refusal to notify users of the “harvesting” and Facebook’s demand that I furnish a photo of my face involved passive aggression—which is inherent in unjustified disrespect. Are companies like Facebook unwittingly turning modern society into Tucsons? If so, the link between distrust and aggression should be made transparent so people can at least be aware of the change.

For a business ethics critique of Facebook, see Taking the Face off Facebook

1. Matthew Rosenberg and Sheera Frenkel, “Facebook Role In Data Misuse Sets Off Storm,” The New York Times, March 19, 2018.
2. Ibid.
3. “Cambridge Analytica: Facebook ‘being investigated by FTC,’” BBC News (accessed March 20, 2018).

Saturday, February 24, 2018

Novartis Investigated for Bribery in the E.U.: On the Ethics of Suffering

Two former prime ministers, the central bank governor, and the federal commissioner for migration stood accused by prosecutors in the E.U. state of Greece of receiving bribes from Novartis “in exchange for fixing the price of its medicines at artificially high levels and increasing” the company’s access in the state.[1] The state legislature voted in February 2018 to investigate the accusations and to vote by secret ballot at the conclusion of the investigation on whether to revoke immunity, which would be necessary for any of the accused to be indicted. The prime minister at the time, Alexis Tsipras, said, “Those who enriched themselves from human pain must suffer the consequences.”[2] This statement reveals an ethical truism of sorts—namely, that people who knowingly cause others pain should suffer. It is right, in other words, that they suffer.
A gay man, for instance, who knowingly risks infecting sex partners with HIV by lying to them may receive less sympathy if he becomes ill. Mortgage producers who knowingly subject borrowers to the likely risk of losing their homes deserve to suffer punishment. Suffering should be in balance. Yet the infliction of retributive suffering does not undo the original suffering. Even were the 10 politicians to suffer by being imprisoned, their imprisonment would not bring back any patients who died because medications were too expensive. A corresponding suffering does not make the world fair; it merely makes the victims or their allies feel better by relieving their anger. But does this render the corresponding suffering ethical?
It is better, ethically speaking, to prevent the original suffering, for even adding a corresponding suffering does not undo the original suffering for the victims. Novartis had been investigated for bribery in China, South Korea, Turkey, and the U.S. Why had the European Commission not subjected the company to close scrutiny? To look the other way concerning such a company is itself unethical, because the original suffering could have been prevented.

For more on unethical business, see Cases of Unethical Business

[1] Nici Kitsantonis, “Did Novartis Bribe 10 Politicians? Greece Approves an Investigation,” The New York Times, February 23, 2018.
[2] Ibid.

The Commercial Media as Gate-Keepers Looking Down on Bloggers as "Non-Journalists"

In a few days during July in 2010, the American media was obsessed with Shirley Sherrod, who in a tightly edited video clip had made apparently racist statements about not helping a Caucasian farmer because he was Caucasian. She was quickly fired by Tom Vilsack, the U.S. Secretary of Agriculture, who, like the journalists and the NAACP, had failed to look at the full video. The day after Sherrod was fired, the NAACP looked at the full video and realized that she was actually a racial healer rather than a racist. In the fuller video, she said, “I have come to realize that we have to work together … we have to overcome the divisions we have.” Even as she used questionable language, such as “his own kind,” it should not be forgotten that the Klan had killed her father. In other words, she deserved some slack. At any rate, it was not long after the NAACP’s about-face that the agriculture department and the media were also doing an about-face. According to The New York Times, “the White House and Mr. Vilsack offered their profuse apologies to her for the way she had been humiliated and forced to resign after a conservative blogger put out a misleading video clip that seemed to show her admitting antipathy toward a white farmer.”

Bill O’Reilly of Fox apologized, though while doing so he suggested that Sherrod “very well could have seen things through a racial prism” and had been “blatantly partisan” on the job, possibly in violation of the Hatch Act, and so should not work in government. O’Reilly was apologizing for not having done due diligence in “reporting” the story by watching the entire video before making a judgment. Like so many other journalists, he leapt at the story without adequately checking the source—the video or Sherrod herself. Even as the journalists were apologizing for their bad work, they wanted to distinguish themselves as journalists from the “blogger” or “activist” who had posted the edited video clip in the first place. O’Reilly promised his viewers that they could still come to him for good journalism even though he had gotten the story wrong.

Beyond the momentary obsession that the media enjoyed at Sherrod’s expense—the obsession itself being a problem missed by the journalists themselves—this case allows us to glimpse how much journalism changed in the first decade of the twenty-first century. The case put journalists in the position of distinguishing themselves from bloggers when both had engaged in bad judgment. Hence Bill O’Reilly’s statement that his viewers could come to him for good reporting (rather than have to rely on bloggers) in spite of the fact that he had just engaged in bad journalism and may have done Sherrod another injustice even in his apology. To be sure, the blogger had erred in posting such an edited video clip without providing the context. However, given the opining of many mainstream journalists who work for media companies and the actual news provided on blogs, the line between “journalist” and “blogger” is blurred. Hence the journalists working for media companies were sure to distance themselves from the blogger, who they said was not a real journalist, even though they had all made the same mistake. Were there a clear distinction to be made between the journalists and the bloggers, the former would have done better work—but they didn’t.

Back in 1984, when Daniel Schorr was working at CNN, he objected to the network’s plan to couple him with John Connally, who had been the Governor of Texas and a Secretary of the Treasury, to cover the Republican Convention. It was improper, Mr. Schorr said, to mix a politician with a journalist. In 2010, journalists were saying that it was improper to mix a journalist with a blogger. By then, many television journalists were giving opinions, and were thus closer to being politicians, while many bloggers were providing news even before the networks. Lest the journalists point to their educational credentials from schools of journalism, how many American journalists in the nineteenth or even the twentieth century majored in journalism? Is learning on the job at a newspaper so different from what the entrepreneurs who freelance at their own blogs do to provide news? If the two are so different, why didn’t the “journalists” in the Sherrod case catch, rather than perpetuate, the blogger’s mistake? The proof is in the pudding.

The fact is that many bloggers are able to provide news because a person does not have to study journalism to have access to some information that is new. As a blogger myself, I have not found myself in this position—hence I confine myself to providing analysis based on my years of formal education and on the news provided by others—bloggers or “journalists.” I must admit that I am more apt to trust the news from a company simply because there are institutional requirements for verifying stories, though as the Sherrod case shows, a media company’s procedures are not always sufficient. The difference between a news company and a blogger perhaps lies in the checking or verification function, rather than so much in the getting of news (though the companies have more resources). Even so, news can come from a variety of sources—not just from people who have a BA in journalism. As a consequence, there is all the more need for verification—precisely because there are so many blogger-entrepreneurs operating. To dismiss them by saying they are not really journalists is an ill-founded over-reaction. However, to insist on due diligence and verification for any report is even more pressing. Perhaps rather than have their journalists invoke artificial diremptions, news organizations could hire, or contract with per piece, many of the bloggers who are providing news, so the latter could have access to the organizational wherewithal to verify stories. These bloggers would then have the advantages both of being entrepreneurs and of having the wherewithal to do due diligence.

Daniel Schorr, a protégé of Edward R. Murrow at CBS News and an aggressive reporter who got into conflict with censors, the Nixon administration, and network superiors, would likely see the advantages that bloggers have in terms of freedom, while being worried (as I am as well) about the due-diligence limitations faced by the entrepreneurs. He got his first scoop, which earned him $5, when he was 12. A woman fell or jumped from the roof of the apartment house where he lived, and he called the police, interviewed them about the victim, and then called The Bronx Home News, which paid for news tips. Had there been an internet, he likely would have been a blogger. Would that have made a difference?

While good to a point, a profession’s gate-keeping can readily be subverted into simply keeping out people who are otherwise doing good work. In spite of the Sherrod blogger, other bloggers have been providing news—otherwise, the media companies would not be citing them as sources. Rather than fighting the bloggers, the “journalists” who got the Sherrod story wrong might offer a hand; they might just find that they will be helped in return. News is like water in a stream—there are many feeder streams. Moreover, the nature of news is freedom, which is inherently broad rather than circumscribed. This is particularly so in a high-tech world where the internet has had a democratizing effect in expanding the sources of news and analysis. In this context, we might be wise to remember Ben Franklin and Thomas Jefferson concerning the need for an educated electorate, rather than try to monopolize information-getting within the club.

Source on Schorr: http://www.nytimes.com/2010/07/24/business/media/24schorr.html?_r=1&hp
Source on Sherrod: http://www.nytimes.com/2010/07/22/us/politics/22sherrod.html?scp=1&sq=sherrod&st=cse

Monday, February 19, 2018

How Much Economic Distance Is Justified?

The median household income in the U.S. in 2016 was about $60,000. The following year, Citibank’s CEO, Mike Corbat, received a 48% increase in compensation, to $23 million. Goldman Sachs’ Lloyd Blankfein enjoyed a 9% rise, to $24 million. JP Morgan Chase’s Jamie Dimon received $29.5 million.[1] The sheer distance between the median household income and the bank CEOs’ pay not only raises the question of whether the executive compensations were justified, but also the question of whether such distance itself is justified.
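The distance can be put in concrete terms with simple arithmetic on the figures just cited (the median is the approximate $60,000 from the text, so the ratios are likewise approximate):

```python
# Ratio of each reported 2017 CEO compensation to the approximate
# 2016 U.S. median household income cited above.
MEDIAN_HOUSEHOLD_INCOME = 60_000  # approximate figure from the text

ceo_pay = {
    "Corbat (Citibank)": 23_000_000,
    "Blankfein (Goldman Sachs)": 24_000_000,
    "Dimon (JP Morgan Chase)": 29_500_000,
}

# How many median household incomes each pay package represents.
ratios = {name: pay / MEDIAN_HOUSEHOLD_INCOME for name, pay in ceo_pay.items()}
```

Each package amounts to several hundred median household incomes; Dimon’s, for example, comes to roughly 490 times the median. Whatever one concludes about justification, the order of magnitude of the distance is not in dispute.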