Wednesday, August 24, 2016

Apollo Global Flew Too Close to the Sun: Personal and Institutional Conflicts of Interest


I submit that people tend to get more upset over the exploitation of personal conflicts of interest than over the institutional sort. That is to say, our blood boils when we learn of a person contravening a duty in order to gain financially, yet we don’t much mind when a CPA firm falsely gives an unqualified (clean) opinion on an audit so the company being audited will retain the firm the following year. This is not logical: the money involved is greater in the case of the CPA firm, and individuals within the firm stand to benefit personally as the firm is enriched by the continued business. Even so, it is direct personal enrichment from a conflict of interest that we cannot stand. In August 2016, Apollo Global, a large private-equity firm, settled with the SEC; both personal and institutional conflicts of interest brought on the $53 million fine. Hence, the case is useful in comparing the two sorts of conflict of interest.
The S.E.C. accused the private equity firm of misleading investors and of failing to supervise a senior executive who was twice caught “improperly charging personal items and services” to Apollo’s funds (and, by extension, to the investors).[1] Misleading investors is an institutional conflict of interest here because the activity was (1) systemic in the organization rather than the act of one person and (2) premised on the institutional relationship between the investor class and the firm. A person improperly charging personal items constitutes a personal conflict of interest because the individual’s personal gain is put before his or her obligation to the company. In both cases, a narrower gain supplants a wider benefit, which in turn is usually associated with a duty.
The misleading of investors involved the private-equity firm’s failure to inform its investors of “so-called monitoring fees.”[2] Apollo had been charging the fees to some of the companies it owned as compensation for the consulting and advice it had provided to them. The Apollo executives were essentially breaking out the supervisory aspect of owning a company and charging for it; in short, Apollo was charging some of its companies for being owned. The firm was even accelerating the monitoring fees when one of its companies was about to be sold or taken public. Specifically, “Apollo would accelerate the remaining years of monitoring fees into lump-sum payments.”[3] According to the S.E.C., these payments effectively reduced the “amounts available for distribution to fund investors.”[4] Apollo, and therefore its management, stood to gain. This represents a narrowing of the beneficiary group (from the companies and the fund’s investors to Apollo itself) by exploiting the fund’s duty to inform its investors. In other words, a conflict of interest was exploited.
Regarding the personal conflict of interest, one of Apollo’s senior executives submitted “fabricated information to Apollo in an effort to conceal his conduct” from 2010 to mid-2013.[5] The SEC charged that Apollo’s management knew of the executive’s misconduct yet failed to do anything about it. In charging the fund for personal items, the executive gained personally while the fund paid the price; here again, the narrowing of a benefit is involved. The executive exploited his duty to report only work-related expenses in order to gain personally.
Which conflict of interest here aggravates you more? Another person enriching himself—stealing, in effect—or the fund’s charging its companies for functions that are part of ownership and misleading investors about it? I contend that most people would say the former, even though the misleading of fund investors has been a recurring problem. “A common theme in our recent enforcement actions against private equity firms is their failure to properly disclose fees and conflicts of interest to fund investors,” said Andrew Ceresney, the S.E.C.’s head of enforcement.[6] Ceresney could have cited the Blackstone Group and Kohlberg Kravis Roberts & Company as just two such cases.
I submit the following explanation. We humans are more easily resentful of other people enriching themselves unethically than of organized groups of people doing the same thing institutionally. A person found stealing raises our ire more than a company found misleading investors so as to profit at their expense. Something about groups and institutionalization mitigates our reactions. As a result, better legislation and improved regulatory enforcement oriented to breaking up institutional conflicts of interest (even before they are exploited!) find insufficient political will. The Dodd-Frank financial-reform law of 2010, coming on the heels of a major financial crisis, thus left the CPA and rating-agency conflicts of interest intact. We can expect, likewise, that private-equity firms will continue to be tempted to exploit their conflicts of interest even as individual managers found stealing from the company "trough" face prosecution. I contend that American society, including its business sector, could do worse than to regard institutional conflicts of interest as more, rather than less, harmful than the personal variety.




1. Ben Protess, “Apollo Global Settles Securities Case as S.E.C. Issues $53 Million Fine,” The New York Times, August 23, 2016.
2. Ibid.
3. Ibid.
4. Ibid.
5. Ibid.
6. Ibid.

Monday, August 22, 2016

Homeless “Campers” Starting Wildfires: Outside the Social Contract


Nederland, Colorado, a town in Boulder County that had embraced marijuana dispensaries for profit, found itself just outside a wildfire that burned 600 acres in July 2016. Two homeless men were charged with fourth-degree arson for failing to put out their campfire. The townsfolk reacted in anger, pointing to the increasing number of homeless people in the nearby national forest. Officials had been forced to deal with “more emergency calls, drug overdoses, illegal fires and trash piles deep in the woods.”[1] Some residents urged the U.S. Forest Service to crack down on the homeless by imposing tighter rules on camping, or banning it altogether in the parts of the woods most popular with the homeless. An analysis drawing on the political philosophy of Thomas Hobbes, a seventeenth-century English philosopher, can reveal a broader perspective on the problem.
In his masterpiece, Leviathan, Hobbes theorizes that people in the state of nature once made a social contract wherein they ceded their political freedom to a sovereign who could forestall civil strife and war. Self-preservation is the dominant motive here: in agreeing to give up some freedom to a system of laws and police, and to be bound by them, people believe themselves more likely to survive.
Social-contract theory more generally is not limited to the political dimension. In living in society, people agree to give up some of their economic self-sufficiency that comes from living off the land. Economic interdependence comes from specialization of labor, trade, and even the use of money. In an economy, people are interrelating parts rather than being wholly self-sufficient. As recessions and the loss of particular industries demonstrate, being a part in an economy is not necessarily best for a person’s self-preservation.
Therefore, it is hardly surprising that people for whom the socio-economy—a system of interdependence—does not make self-preservation more assured would head to a forest to live off the land. The homeless in the national forest near Nederland can hardly be blamed for doing what is necessary to survive. Hobbes maintained that people have the right (of self-preservation) to fight off execution even though the punishment is issued by a sovereign who rightly holds all political (and theological) power.
To be sure, a state of nature in a forest located next to a modern society may be inherently problematic. That one homeless man camping long-term in the national forest outside of Nederland asked a forest official when the trash would be picked up points to the problems entailed—problems that would not exist were we all in the state of nature. If modern society can no longer tolerate people living in the state of nature, then places must be found for the extricated humans within the socio-political economy consistent with their self-preservation.
In the E.U., the operative principle is solidarity. Social policy is the typical means by which governments implement the principle, wherein self-preservation is taken to be a human right that a society is obligated to protect. In the U.S., the principle is scant, eclipsed perhaps by that of economic liberty within interdependence. Hence, the safety net within American society has gaps. It is only natural for people falling through them, for whatever reason, to seek self-preservation outside of society. It is also natural for people accustomed to the safety of society to fear the human landscape outside of it, where the liberties given up in society are taken back up; the people in society fear those liberties precisely because they traded them away for safety. Therefore, we can see, using Hobbes’s theory, that it is in the interest of the residents of Nederland to petition the government of Colorado to accommodate the forest people back in society rather than to continue fighting their nearby presence by pushing them further away.



1. Jack Healy, “As Homeless Find Refuge in Forests, ‘Anger is Palpable’ in Nearby Towns,” The New York Times, August 21, 2016.

Thursday, July 14, 2016

Hillary Clinton's Extreme Recklessness with National Security: A Rigged Justice Department or Falling Short of Gross Negligence?

In July 2016, the FBI concluded that while Hillary Clinton was serving as U.S. Secretary of State, she had put classified information at risk by using private computer servers for email and other purposes. The FBI’s director explicitly stated that she had been extremely reckless, which in legal terms means gross negligence. At the time, a 99-year-old statute under which gross negligence is sufficient for a fine or imprisonment of up to ten years was still on the books. Whether the person knew the actions were wrong is not relevant to the statute, and thus to its enforcement. So it was perplexing to a significant number of Americans, including prosecutors and other lawyers, that the FBI director did not recommend prosecution.
The FBI director pointed out that the statute had not been used as a basis for prosecution, and that it was therefore not fitting to apply it in 2016. Does this reasoning mean that just because nobody has been prosecuted for lynching black Americans since, say, 1916, a person who lynches a black man in 2016 should not be prosecuted? I have simply increased the seriousness of the crime, but is being extremely reckless with national security not also a serious crime? Is the American legal system prepared to say that any statute not yet used in a prosecution is therefore unenforceable, such that only statutes already utilized could be used to prosecute people? No legal basis exists for such a view, and yet the FBI director got away with it.
Just a week before the FBI director’s announcement, Hillary Clinton’s husband, former President Bill Clinton, boarded Attorney General Loretta Lynch’s jet on the tarmac at Phoenix’s airport for a discussion, presumably about grandchildren. This opens the door to the possibility that the president who had appointed Lynch to a lower office made a deal so his wife, who was running for president at the time, would not be prosecuted. The FBI’s strained reasoning on “extremely reckless” adds more support to that possibility. In short, where there’s smoke, there’s usually fire.
At the very least, the appearance of corruption is noxious and thus unacceptable. CPA firms treat not only material conflicts of interest but also the appearance thereof as problematic. Such conflicts are rather obvious, and they are avoidable. Perhaps Bill and Hillary Clinton were desperate to make a deal, appearance notwithstanding, because they knew she had been reckless in going against the State Department’s policy; even freshman congressmen know not to put classified material on private email servers. Hillary engaged in such traffic even while on hostile soil, such as in China and Russia.
What amazes me about this case is just how easy it was for the FBI to recommend no prosecution, given how weak the rationale was, and how easy it had been for Bill Clinton and Loretta Lynch to get away with the 30-minute discussion on her plane “on the grandchildren.” That the American people take all this at face value shakes my faith in American representative democracy. Put another way, if the players could get away with corruption or, at the very least, incompetence in such a blatant case, other players could get the message that the American system of justice is no match for corrupt deals made by powerful people. Are the people really so naïve, or are we simply apathetic? Either way, the message from this case is not good regarding accountability.


Note: This essay is not meant to convey an opinion on the 2016 U.S. Presidential election, and more specifically on Hillary Clinton as a candidate. Rather, the question is whether an inter-institutional conflict of interest exists between the White House and the U.S. Department of Justice (i.e., whether that department is immune from political pressure).

On the Business Ethics and Technology of Self-Driving Cars at Tesla

During the summer of 2016, Tesla came under public fire over both its technology and its ethics. Both issues can be put into a wider perspective in the company’s favor. Put another way, both the technological and the ethical analyses can be enhanced by placing the specific problems within a larger perspective, even in terms of time.

Regarding the technological problem at issue, the company’s cars running on “auto-pilot” could not yet take into account another car’s lateral movement. For example, the technology could not detect a car travelling alongside in an adjacent lane shifting over into the Tesla car. A man died in just such an occurrence. He was not paying attention at the time, and yet Tesla’s incomplete technology also received a lot of blame.

Given the incomplete state of the technology, and simply for safety’s sake, the company had communicated to buyers that even though the cars could self-drive, drivers still needed to pay active attention. So a driver who was filmed sleeping behind the wheel of a Tesla during a slow-paced commute was culpable even though the car did not crash.

The argument that the company was culpable held that it had misled customers by stating that the cars were self-driving. In other words, drivers could reasonably assume that they need not pay attention. This argument fails because pilots know they must still pay attention when the airplanes are on auto-pilot. Therefore, that a car can self-drive does not imply that drivers can take naps or fixate on their smartphones. Such people are not smart at all.

The wider perspective shows that early self-driving technology is apt to have limitations and even faults. Drivers who dismissed these were missing the point of how technology progresses: development takes a while, rather than being perfected at launch. At the start, drivers keeping this in mind could not reasonably conclude that the technology could support their sleeping or being distracted.

As the self-driving technology develops—sadly in part from trial and error—drivers may one day be able to sleep or play on their smart-phones with the reasonable expectation that paying active attention is not necessary. Also, as the proportion of cars that are self-driving increases on the roads, the case for not paying attention while the cars are self-driving improves still more. 

In 2016, and likely for years to come, the roads could be even more dangerous than either before the advent of self-driving cars or after the technology matures and the proportion of self-driving cars rises. The temporal vulnerability behind the problems of the summer of 2016 is like a donut hole: a sufficient number of drivers did not adequately understand that the technology was not yet complete enough to justify what they were doing at the expense of paying active attention. Perhaps that is human nature, but Tesla was not at fault on either technological or ethical grounds.

Tuesday, March 22, 2016

Reefer Madness: One of Nixon's Dirty Tricks

Journalist Dan Baum wrote in the April 2016 cover story of Harper’s about how he had interviewed John Ehrlichman, President Nixon’s domestic-policy adviser, in 1994 while working on a book about drug prohibition. Ehrlichman provided some shockingly honest insight into the motives behind the drug war. From Harper’s:
“You want to know what this was really all about?” he asked with the bluntness of a man who, after public disgrace and a stretch in federal prison, had little left to protect. “The Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people. You understand what I’m saying? We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.”

Friday, March 18, 2016

SEC Investigating Hedge-Fund Priest: Christianity’s Pro-Wealth Paradigm Lapsing into Greed?


It is against U.S. securities law to knowingly make false statements or publish false information about a company you are shorting (selling borrowed stock now and buying the shares back later, hence betting that the stock price will go down). In other words, you can’t try to drive down the stock price of a company you are shorting so you can profit from the trade. Besides being illegal, the practice is unethical. Just go to Kant for that! The guy was fanatical against lying.
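The mechanics of a short sale can be sketched with hypothetical numbers (the function and figures below are illustrative, not drawn from the Lemelson case):

```python
# Minimal sketch of short-sale arithmetic, with hypothetical numbers.
# A short seller borrows shares and sells them now, hoping to buy them
# back later at a lower price; profit is sale proceeds minus buyback cost.

def short_profit(shares: int, sell_price: float, buyback_price: float) -> float:
    """Profit (negative for a loss) on a short position, ignoring fees."""
    return shares * (sell_price - buyback_price)

# Short 100 shares at $50; if the price falls to $40, the short gains:
print(short_profit(100, 50.0, 40.0))  # 1000.0
# If the price instead rises to $60, the short loses:
print(short_profit(100, 50.0, 60.0))  # -1000.0
```

Driving the price down with false statements would turn the first scenario from a bet into a sure thing, which is exactly what the law forbids.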

You wouldn’t expect to read, therefore, that the SEC is investigating a Greek Orthodox priest, who sidelines as a hedge-fund manager, for trashing commercial reputations in order to make money off shorting stock. BloombergBusiness reported on March 18, 2016 that the SEC was “examining whether the Reverend Emmanuel Lemelson of Massachusetts made false statements about companies he was shorting.”[1] He reportedly referred to his trading skills as a “gift from God.”[2] Such a claim is on a slippery slope, theologically speaking.

The priest may have lapsed off the plank of Christianity's pro-wealth paradigm into outright greed hidden under rationalizations as to means and ends. Is Christianity itself at risk for having gone so far into the worldly realm? Then again, the Medieval Roman Church was very worldly as a political power.

When the story broke, I had just days earlier finished revising my second book, on Christian attitudes toward profit-seeking and wealth in relation to greed. Lemelson’s “gift from God” language reminds me of the pro-wealth writings of the Italian Renaissance, two centuries before the Calvinist work-ethic of industriousness. The Italian theologians of the fifteenth century tended to lighten up on profit-seeking and wealth. Cosimo de' Medici got a pass from Pope Eugene IV in spite of a fortune based on usury (lending at interest). One priest, Marsilio Ficino, went so far as to claim that humans are gods on Earth, given the dominion we have over its resources. Far from the camel that could not get through the eye of the proverbial needle, a Christian during the Renaissance (and after) knew he had to be rich in order to exercise the Christian virtue of munificence. Whereas liberality pertains to typical gifts, munificence involves donating money to build a cathedral, for instance. Being able to make a lot of money was a “gift of God” that would enable the successful Christian to give philanthropically on a scale worthy of God’s majesty.
Of course, the pro-wealth paradigm in Christianity is vulnerable to lapsing into love of gain (i.e., greed). Luther’s extremely anti-wealth stance can be interpreted as an effort to put on the brakes before the by-then dominant pro-wealth attitude in Christianity hit the skids and flipped over into greed. Luther did not succeed. Nor did Calvin or the Puritans, though they were more accommodating to the dominant perspective. The result was a clear line to the Prosperity Gospel—the notion that God rewards true believers with not just salvation, but material wealth as well. This idea came from the Old Testament, wherein God promises Israel that material wealth would come if His People hold to the covenant.
In my book, God’s Gold, I search for a theological undercurrent below the gradual shift from anti-wealth to pro-wealth dominance; I discount the impact of the commercializing context. With regard to the hedge-fund priest, I would be hesitant simply to say he was a manifestation of a pro-business American culture. This may be so, but more significant, I submit, are the rationalizations presumably going on in the man’s head. Bearing false witness (i.e., lying) to harm others is difficult to view as a gift from God. Even as a means to a salubrious end, the juxtaposition of a gift from God and lying without concern for others’ welfare is odd at best.
In the book, I come to a discussion of how the human brain functions in the domain of religion. If we are vulnerable to certain cognitive “short-circuiting” and yet have a religious instinct, are we not, as a species, in a double-bind? Put another way, if Lemelson can neither cognitively nor perceptually recognize his own rationalization, is his urge to be religious compromised? I don’t think so; rather, other aspects of the brain, or mind, may obstruct or circumvent that urge as it manifests itself. I do think these shortcomings can be made transparent, and thereby reduced at least somewhat in severity, or swollenness, but denial is indeed a formidable and intractable obstacle. I suppose the dominance in Christianity since the Renaissance of the pro-wealth paradigm (i.e., profit-seeking and wealth decoupled from the stain of greed) renders the “mind-games” that much more harmful in terms of rationalizing some rather un-Christian behavior toward others. For one thing, making money in order to serve God better can enable some pretty nasty means-ends justifications. In this way, Christianity itself is now more vulnerable than the religion was when being wealthy and being Christian were presumed mutually exclusive (i.e., when greed was assumed to be tightly stapled to virtually any wealth). Ironically, the theology may be partially to blame, insofar as anthropomorphism unwittingly lifts the religious status of money and property.[3]



[1] Matt Robinson, “Hedge Fund Priest’s Trades Probed by Wall Street Cop,” BloombergBusiness, March 18, 2016.
[2] Ibid.
[3] The secret to that sauce is in chapter 12 of the book, God’s Gold. I got so into the writing of that chapter in revising it that in retrospect the chapters on the historical shift seemed a bit like a very long preface.

Thursday, March 10, 2016

Picking a U.S. President: Excessive and Insufficient Democracy

The Electoral College has never performed as intended. Instead of functioning as a buffer against "mob rule," the method of selecting the U.S. federal president has been at the mercy of the two major political parties. While they have made certain that their electors vote for the party candidate, the parties have lost control of the presidential-election process itself. The void left by the demise of the Electoral College as a check-and-balance feature has enabled the process to deteriorate. Even though this is not good for democracy, the American electorate has refused to demand that the process be fixed. Both the failure of the Electoral College to function as intended and the related elongation of the presidential-campaign "season" indicate that the system, or process, has run amok, yet even so, the voters of both parties don't seem to mind. Both Thomas Jefferson and John Adams, as per their letters in retirement, would be very concerned about such an electorate. The viability of the American republics, including the Union, is at risk, these Founding Fathers would no doubt warn us.
That American voters would elect electors by state, and that said electors would in turn meet in their respective state capitals to cast votes for the candidates, reflects the Convention delegates' fear that the masses voting directly would be risky, because people have difficulty resisting their immediate passions. Additionally, because the number of voters for the federal-level office was so large even when the U.S. had an initial population of about four million, only a tiny fraction could have personal knowledge of the candidates--even just from seeing one in person. Having a much smaller number of electors actually vote for the candidates would enhance the quality of the democracy, theoretically speaking, because the electors were few enough in number to actually meet the candidates in person. Additionally, the electors could have "inside information" not available through the media and thus not available to the American people generally. Put another way, the empire-scale of even the Union of thirteen republics renders direct representative democracy less than optimal. Other things equal, the larger the electoral district, the less the direct contact between the electors and the candidates; the electors, as a group, thus have less information going into the decision.
That so many of the more than 320 million Americans in 2016 depended on the news media for information on the presidential candidates explains in part why the "primary season" took on the air of a circus. Debates on public policy easily succumbed to titillating personal barbs, including, unbelievably, the size of a candidate's hands and how much another candidate sweats!
The sheer length of the presidential-campaign "season" had also gone out of control. In 2015, Canada's official election season was extended to 11 weeks from its typical five or six weeks. "Many Canadians saw the extension as an excruciating marathon."[1] It is odd, therefore, that Americans put up with a campaign "season" that started during the spring of 2015 and would not end until November of the following year. Most Canadians thought that the length of the American presidential-election cycle had become truly absurd.[2]
American presidential-campaign "seasons" were not always so long. In 1960, John F. Kennedy did not announce his run until 11 months before the election. In 1972, however, Iowa moved its caucuses to the first month of the year, requiring candidates to begin campaigning well before then.[3]
Ironically, the excess democracy is compatible with insufficient democracy. Most notably, the longer the campaign "season," the shorter the period that elected representatives have to viably govern. Less time for governing makes it more difficult for the People's will to be enacted into legislation and executed in regulations.
Furthermore, even though having the various primary elections and caucuses spread out over months can entertain the masses week after week, the fact that "weaker" candidates can drop out along the way means that voters in a state whose primary falls weeks or months into the season, and who want to vote for such a candidate, are effectively disenfranchised. That Americans never stop to realize this point suggests to me that the gladiatorial excitement has taken on a life of its own. In effect, nominating party candidates becomes a reality television show even as (strangely) the American people are oblivious to the gradual slide. The reigning assumption in the status quo is that the process by which a candidate is made federal president is not broken. For an assumption to be so utterly wrong and yet so widely (and unconsciously) subscribed to should cause us perhaps the most concern, for an electorate out of touch with itself is perhaps the most dangerous thing in a republic. Moreover, the presumption of not being able to be wrong renders such a people very vulnerable.
For a people to recognize and accept its own weaknesses and go on even to build procedural safeguards to check even democracy itself is what led to the Electoral College. It was meant to be a check on the excesses possible in an electorate--especially a big one. Doubtless, the device was an utter failure, but this does not mean that no alternatives to the status quo are possible.
In the federal convention, for instance, delegates considered having the governors elect the federal president. We could conceivably add even more possibilities, such as having the newly elected Congress meet in joint session to elect the president. Having elected representatives themselves select among candidates for a federal post is actually very consistent both historically and theoretically with ancient federalism (i.e., confederalism). In the E.U., another empire-level federal system, officials at the federal level select the presidents of the Commission and the European Council.
My basic point is that with such historical and comparative knowledge at hand, even a people wedded to the status quo can recognize the brokenness of a system and go on to come up with alternatives. Sadly, viable fixes can be labeled outlandish or impracticable by a People used to slow, incremental change. They miss the point that rearranging deck chairs on the Titanic falls short when a system has become fundamentally broken. As John Adams and Thomas Jefferson both wrote in their exchange of letters in retirement, a viable republic requires an educated and virtuous citizenry.



1. Daniel Victor, "The U.S. Election Is in Its Final 11 Weeks. Canadians Wonder, 'Why So Long?'," The New York Times, August 23, 2016.
2. Ibid.
3. Ibid.

Sunday, January 3, 2016

On the Key Role of Energy in the Industrial Revolution

Reading from Peter Stearns' "The Industrial Revolution in World History," I'm intrigued with the twin elements of fossil fuels to power the machinery and organizational management to organize the production process, including the continued use of human energy/labor. I suppose Descartes' "mind-body" dualism is getting in the way of my understanding of the industrial revolution from the standpoint of the leap of energy and the related increase in complexity. 

Stearns contends that "the industrial revolution constituted one of those rare occasions in world history when the human species altered its framework of existence." This is a huge statement! Stearns suggests that "the only previous development comparable in terms of sheer magnitude was the Neolithic revolution--the conversion of hunting and gathering to agriculture as the basic form of production for survival." As a result of the industrial revolution, human beings use 100 times the energy necessary for survival. Such energy is necessary to maintain and establish increasingly complex technological devices and organizational/communications systems or networks. 
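Stearns' "100 times" figure can be checked with back-of-the-envelope arithmetic (the figures below are rough, illustrative estimates of my own, not from the book):

```python
# Rough check of the claim that industrial humans use on the order of
# 100 times the energy necessary for survival.

KCAL_TO_JOULES = 4184.0
SECONDS_PER_DAY = 86400

# Metabolic energy for bare survival: roughly 2,000 kcal per day,
# which works out to a continuous power draw of about 97 watts.
survival_watts = 2000 * KCAL_TO_JOULES / SECONDS_PER_DAY

# Per-capita primary energy use in a heavily industrialized country
# (U.S. order of magnitude): roughly 10,000 watts.
industrial_watts = 10_000.0

print(round(survival_watts))                     # 97
print(round(industrial_watts / survival_watts))  # 103, i.e., roughly 100x
```

The exact multiple depends on the country and the year, but the order of magnitude, about a hundredfold, is what makes Stearns' comparison to the Neolithic revolution plausible.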

I get how the energy harnessed from coal represents a leap in the industrial revolution, but the boost in energy from "industrial-style organization [,which] involved more conscious management of workers toward a faster as well as a more fully coordinated work pace" does not seem to me to be the same kind of energy as is unleashed in a chemical reaction (e.g., burning coal).

Stearns emphasizes the "contrast with the more relaxed work styles characteristic of much preindustrial labor, including a good bit of slave labor." I can understand that workers working more means more energy (as I understand that term in physics), but the managerial coordination of work does not seem like "real energy" to me. I realize I'm wrong about this, but that energy just seems different to me--less real. Likewise, "redefined work discipline and specialization, [which,] along with growth in the size of the work unit, defined the organizational core of the industrial revolution," seems to me like concentrated mental energy rather than more energy. Of course, increasing the size of a factory means more energy in the factory. But mental energy, as is involved in managerial coordination and increased self-discipline, seems different to me--more like the "energy" involved in thinking. 

I do realize that the brain's activity is indeed a draw on the body's total energy (including holding the brain up), but I suppose I have a bit of the Cartesian mind/body dualism going on here. My reading here of the industrial revolution is that it rivals the Neolithic revolution (hunter/gatherer to agriculture) in that the source of the energy changed from animals and humans to fossil fuels, which have a lot of stored energy from the sun. It is the tapping of that stored energy--being able to use much more of the sun's energy--that I believe is the key change in the industrial revolution; so when I look at the energy used in thinking, including managerial coordination and the self-discipline of labor, I don't see the especially great energy required since it is largely mental. In other words, I lean on coal (and then oil) as the energy-signature of the industrial revolution. 

Wednesday, December 30, 2015

On the Financial Crisis of 2008: Why Business Ethics Failed

I submit that the academic field of business ethics failed in not being able to anticipate the fraud and exploited conflicts of interest that precipitated the financial crisis of 2008. That is to say, business-ethics scholars, including myself, failed utterly. To the extent that the general public relies on us to shoot off flares in advance of a high likelihood of icebergs in the water ahead, we failed in our social responsibility, ironically even as many of us were admonishing corporate managers to be socially responsible. Many of those managers, meanwhile, could use their ethics programs as advertisements or even window-dressing. In this essay, I point to some of the academic reasons why business-ethics scholars failed so miserably.

I think the focus on ethical decision-making is culpable here, as it diverts attention from the institutional level (e.g., inter- and intra-institutional conflicts of interest that are themselves systemic). Also, the focus assumes that managers want at least to consult the ethical dimension (or, worse, that they can become ethical by knowing how ethical decisions are or should be made). The 2015 film, "The Big Short," makes the point that this assumption is faulty at best.

Relatedly, the "managerialization" of the field, as evinced in coming up with procedures by which to make decisions in terms of ethical principles (and, in the field of Business & Society, in CSR2), may mean that business-ethics scholars think too much like managers, so we didn't see the systemic failure coming. We did not have a vantage point from which we could have seen what the Wall Streeters (except for a few of them) could not.

I think the hegemony of the individual and firm levels in the field has also contributed to the failure, in that business-ethics scholars missed even the obvious ethical problem in how rating agencies are compensated. We also missed the inherent conflict of interest in a bank holding proprietary positions apart from those that serve as counterparties to client trades.

In short, I contend that business ethics is too managerial in perspective (and content) and too micro. My intuition tells me that business ethics scholarship missed (and misses) the forest for a few trees. I would even say, moreover, that scholarship from business schools is not sufficiently distanced from the phenomena being studied. I suspect that many professors of business are caught between identifying themselves as managers and scholars, and this is no doubt reflected in their work because they are actually in neither camp.

The department of business environment and public policy at Pitt, in which I studied as a doctoral student, was an exception in that most of the faculty members held doctorates in disciplines such as political science, sociology, and psychology, as well as institutional economics. While the make-up was not culturally managerial--the department was quite obviously the part of the school that was "not like the others" (to quote PBS's "Sesame Street")--the academic orientation was nevertheless managerial! I suspect that the department's faculty tried especially hard to be managerially useful precisely because they had come from disciplines in the social sciences. With subfields in Business & Government, Business & Society, and Business Ethics (though I would argue that the last was an exogenous field, given the lack of ethical-theory courses), such a department was in theory almost uniquely positioned (along with Berkeley and Minnesota, at the time) to see the iceberg in the dark (trading) water ahead.

Instead, scholars in "Business, Government & Society" were too managerial--too wedded to the system to be really critical--and not academic enough--in the sense of not grasping the real significance of the subject matter--to get the big picture: that the system itself was (and still is) riddled with fraud and self-serving behavior shamelessly at the expense of the whole. Not seeing the Wall Street culture of greed and opportunism as the proverbial canary in the coal mine, the scholars missed the blatant fraud at Wall Street banks, at mortgage servicers such as Countrywide, and at the related rating agencies, all at the expense of investors, clients, and the general public. Scholars at business schools should have had megaphones to warn the American people of the baleful major institutional conflicts of interest that should be rather obvious to everyone. 
The silence of the learned enabled self-satisfied business practitioners to assure the public that institutional "firewalls" could be trusted to prevent such conflicts from being exploited--even as they were being exploited!

Meanwhile, I "overshot" while I was in the doctoral program, taking the macro orientation to the point of studying the business SYSTEM as an entity in relation to other systems in society (e.g., the system of government). Hence I missed the inter-institutional conflicts of interest WITHIN the business system (e.g., the way CPA firms and rating agencies are compensated). I was resisting the gravitational pull of the manager- and firm-level orientation of the field of "Business, Government & Society." I had taken an MBA seminar at Indiana on the environment of international business, which covered economic, financial, governmental, and cultural/religious systems, and I patterned my doctoral studies at Pitt on that course (even earning an MA and a Ph.D. minor in religious studies and taking substantial coursework in international political economy in political science).

Looking back, I'm astounded that throughout the coursework in "Business, Government & Society" at Pitt, coverage of conflicts of interest was confined to "regulatory capture theory," wherein regulators depend on the regulated for information. Even as an account of that one conflict of interest, the information-siphon framing now strikes me as hilariously minimalist. How about the revolving door? How about political pressure on the SEC from government officials who have accepted political donations from Wall Street banks? How about ex-CEOs of those banks serving as high up as Secretary of the Treasury?

We should not count on practitioners to recognize even monstrous institutional conflicts of interest. When I briefly worked as a staff auditor (before going back to academia out of sheer boredom and lack of brain-stimulus), I used the tickmark, "As per comptroller, discrepancy resolved," without even noticing how inherently problematic it is ethically: an independent audit that takes the word of the comptroller. Decades later, I was shocked to hear the man in charge internationally of that CPA firm's internal "firewall" tell me that such a device is effective. I submit that it was not, is not, and in fact cannot be so, and yet the Dodd-Frank financial-reform law of 2010 looks the other way. Nice.

That structural conflicts of interest are still in place in public accounting and the rating agencies, as well as at post-Glass-Steagall banks, in spite of the financial crisis and even the Dodd-Frank Act, suggests to me that the pedestrian "scholars" (aka managerialists) in the field of business ethics still don't get it. The public is still left unprotected. The American people really need scholars of business ethics who DO get it and can inform the public honestly even if future consulting gigs take a hit. Sitting in on a doctoral seminar in business while visiting a school before the financial crisis, I was stunned when the professor told his students (who had been bankers) to run their research conclusions from surveys (i.e., empirical research) by the managements of the firms surveyed and change the conclusions as needed so as to keep consulting possibilities open. What if one of those students went on to knowingly publish a flawed risk-analysis metric? As we know from the case of systemic risk in 2008, the financial system could hang in the balance. Perhaps "regulatory capture" theory should be applied to business schools. Astronomers who study Mars certainly don't live there or try to act like Martians; likewise, business scholars should not conflate themselves with their subject matter. A certain distance between the scholar and the thing being studied is necessary for good scholarship--that is to say, scholarship that gets the big picture concerning the object of study.  

Anyway, all this is to explain my gut reaction at the end of the film: business ethics failed. It was the sort of immediate intuition that falls with a thud on concrete. My field failed. I failed. How could I not have seen it? How presumptuous of me to think that the financial crisis would make a great case study for me to teach and write about, my past studies of the environment of business, beginning at the master's level, notwithstanding. Relatedly, how presumptuous of the people at the helm in government and business who missed the danger signals to go on to consider themselves vital in saving the financial system and the economy. The U.S. Treasury Secretary and the Chairman of the Fed--a scholar whose dissertation was on the Great Depression!--didn't see the iceberg ahead, and yet both presumed to be the men of the hour to fix the ship.

Tuesday, September 29, 2015

Business Implications of Power in Mergers: The Case of the New United Airlines

Ideally, a merger combines the best features of one company with those of another such that the whole is of greater value than the sum of the two parts. Such an optimal combination may imply, or at least depend on, a rough power balance between the two adjoining companies, for otherwise distended dominance could translate into the worst of one company (i.e., the dominant one) being foisted onto the merged entity. The opportunity cost, or benefit lost in going with the worst of the dominant company, could be measured by the extent to which the same function in the other company is better than that of the dominant company. Put another way, it would make no sense to go into a merger planning to let each company continue to do what it does worse than the other. Sadly, power can eclipse economic criteria even in a company. The merger of Continental Airlines and United Airlines provides a case in point.

According to The New York Times, “The merger . . . was supposed to combine Continental’s reputation for solid customer service with the broader reach of United’s domestic and international network. Instead, [the merger turned into] an exercise in frustration for [the] fliers, with frequent delays, canceled flights, and lost bags.”[1] Customer unhappiness is a pretty good indication that something went horribly wrong in the formation of the combined company.

One business passenger, a frequent flier, provides us with a synopsis. “Continental was probably the best airline . . . that you could travel on pre-United. I would say United is one of the lowest.”[2] Specifically, he cited poor service, bad wifi connections, and cut-backs on perks and upgrades that evince little appreciation for frequent fliers. “I feel that at 100,000 miles, somebody should care and make me feel like a valued customer. You’re treated as just a commodity, and it’s a race to the bottom. They don’t really appreciate me at all.”[3] He would have quickly switched to another carrier, but the new United held 70 percent of all routes in and out of Newark, his main hub, at the time. Monopoly in a market, and perhaps even oligopoly, may enable sub-optimal merged companies to continue when they otherwise would have gone bankrupt.

United's "Love in the Air" promotion highlighted couples who met in the air. The case of the winning couple pictured here just happens to involve an "upgrade." The love in the air does not refer to the employees on board or at the gate, even though the intended impression may be that flying United is a loving experience. (United Airlines)

In any case, the poor service of the pre-merger United somehow trumped Continental's excellent service in the combined airline; the sordid mentality survived the salubrious one. Behind this dynamic lies dominance, or power, disproportionate, I submit, from the standpoint of a merged company optimized according to business criteria--that is to say, power over effectiveness. Lest it be presumed that business principles and calculation play a predominant role in mergers, the management of the power dynamics should not be left out of the equation.




[1] Jad Mouawad and Martha White, “Despite Shake-Up at Top, United Faces Steep Climb,” The New York Times, September 15, 2015.
[2] Ibid.
[3] Ibid.