Sunday, January 29, 2017

The French Socialist Party’s Proposal of a Universal Income Amended: An Economic Floor Providing Economic Security to the Poor

Benoit Hamon, “riding to victory” from political obscurity on a proposal to “pay all adults a monthly basic income,” defeated the former prime minister, Manuel Valls, in a presidential primary runoff election of the Socialist Party in the E.U. state of France.[1] Although “Hamon wasn’t as tainted as Valls by Hollande’s unpopularity” because Hamon had “rebelled and quit the government in 2014,” whereas Valls had served more than two years as Hollande’s prime minister, Hamon’s “proposal for a 750 euros ($800) ‘universal income’ that would be gradually granted to all adults also proved a campaign masterstroke. It grabbed headlines and underpinned his surprise success in the primary’s two rounds of voting.”[2] I submit that the proposal, although flawed from the standpoint of economic security, fits well with the industrial world of global capitalism.

Under Hamon’s proposal, the no-strings-attached payments could be made to more than 50 million adults in the state. The “no-strings-attached” aspect is crucial to the provision of economic security, which is itself of significant psychological and financial value to people who are either unemployed or live from paycheck to paycheck. Put another way, the lack of conditionality can give such people a more stable peace of mind, which could only improve the quality of daily life in a town or city, including its interpersonal dynamics. The temptation would be to begin to insist that the money be used for A, B, and C, but not on X, Y, and Z. Even such salubrious conditionality would undercut the stability afforded by the faith that the money would come every month, come hell or high water. I submit that Western peoples tend to discount the value of financial assurance or stability, essentially the provision of a floor or net that can be relied on, even just in terms of the foregone anxiety alone.

The problem is that Hamon meant the payments to go to every adult, irrespective of income and wealth. A wealthy person with a good income already has financial security, so adding a floor of 750 euros would be a waste of money from the standpoint of providing economic security. So the cost of the program, which Hamon reckoned to be at least 300 billion euros ($320 billion), can be reckoned as excessive, given the purpose of the program. In other words, taxpayers need not pay so much to make sure that every person has at least an adequate amount of economic security. Lest it be said that the middle and upper economic "classes" would then have little self-interest in supporting the proposal, I would simply point to the value of the peace of mind in knowing that should financial ruin hit, whether from an economic recession (or depression), injury, or illness, economic security would be maintained. Simply knowing this can lighten the step of even a wealthy person, since none of us can say with complete certainty that tomorrow will be like today.
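The cost argument can be put in rough numbers. A minimal sketch follows; only the 750-euro floor and the 50-million-adult figure come from the proposal itself, and the means-tested share is a hypothetical assumption for illustration:

```python
# Rough annual cost of the proposed floor. Only the 750-euro figure and the
# 50-million-adult figure come from the article; the rest are assumptions.
monthly_floor = 750.0     # euros per adult per month
adults = 50_000_000       # "more than 50 million adults"

universal_cost = monthly_floor * 12 * adults
print(f"universal floor: {universal_cost / 1e9:.0f} billion euros/year")  # 450

# Means-tested variant: suppose (hypothetically) only a fifth of adults
# lack economic security and receive the payment.
needy_share = 0.20
targeted_cost = universal_cost * needy_share
print(f"targeted floor:  {targeted_cost / 1e9:.0f} billion euros/year")   # 90
```

Even these rough figures show why paying everyone is so much costlier than paying only those who lack a floor, which is the essay's point about waste.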


Given the destructive competition that is a part of life in advanced industrial states, the rationale for the claim that every person should be financially secure from hardship is valid. That Hamon proposed a tax on robots to help finance “the measure’s huge costs” points to his rationale for why a modern society cannot simply rely on jobs and even unemployment insurance to provide economic security.[3] Automation has permanently removed many manufacturing jobs, both in the E.U. and U.S. Additionally, companies’ financial incentive to move factories to low-wage countries such as Mexico and China means that employment in industrial countries can no longer be relied on to provide economic security to a significant segment of populations, for not everyone is going to go to law school or business school and graduate, even if education were tuition-free.

Abstractly put, the logic of global capital is not in sync with the fact that in any society, a portion of the adult population is oriented to blue-collar rather than white-collar work. Even if the E.U. were to become a manufacturing utopia, some people, such as the disabled, would still lack economic security, and thus stability, were jobs the exclusive means of providing it. 

In short, the nearly “post” industrial world cannot simply become a world of lawyers, physicians, accountants, and business managers, yet everyone needs food, shelter, and access to medical care. Providing even a very low floor would pay dividends for everyone, as society would be a more civil place, and the cost need not be as high as paying 750 euros to every adult regardless of whether the security is needed. In fact, with the savings from targeting only those who need the floor, perhaps 1,000 euros would then be an option. Life is too short to sweat the small stuff, yet some people must, and their lives are painful for lack of financial security.

Financial worry is like an internal, perpetual war to the poor person, eviscerating life of its pleasure. Quality of life matters, and not just for the poor. The anxiety of the people you interact with, whether their hardship is deserved or not, can easily ruin your day, whereas being around calm people can make it. No man is an island, and in modern society economic interdependence has its drawbacks. Giving other people the psychological security of a financial floor each month can indeed pay dividends to the payers, without the floor necessarily being raised so high that the beneficiaries can take advantage of the blessing of security made possible by others.



1. Associated Press, “Hard-left Candidate wins French Socialists’ Presidential primary,” Foxnews.com. January 29, 2017.
2. Ibid.
3. Ibid.

A Federal Court Stays President Trump’s Muslim-Ban: Flawed Reportage?

Judge Ann Donnelly of the U.S. Federal District Court in Brooklyn, New York, issued a nationwide injunction on January 28, 2017 concerning President Donald Trump’s executive order barring people from seven countries from entering the United States. On the same day, BBC (America) radio reported that Trump had been stopped in his tracks. I submit that this instance points to the importance of investigative journalism prior to reporting. Alternatively, the case may illustrate a partisan or otherwise ideological penchant among journalists officially tasked with investigating and reporting rather than interpreting the news.

According to The New York Times, the order was “limited in scope, applying only to people on their way to the United States or already here.”[1] People en route to the United States on January 28th would not be sent back because of the undue hardship involved. Rather than being allowed into the U.S., however, those few hundred people would be detained. The federal president’s travel ban was thus essentially untouched, so the notion that the federal judicial branch had stopped Trump is clearly untrue. The hyperbole to the contrary may “make good press,” whether for opposition groups or journalists. Yet the political value of sketching a political reality in which the new president was already going too far, and thus had to be stopped, cannot be ignored. In politics, perception can indeed become reality. So the possibility of a political agenda cannot be ruled out.

With regard to journalists and media companies, sheer ignorance or ideological preference is no excuse for not reporting the obvious: only people on their way to the U.S. would not be turned back, and they would remain detained until or unless they have been cleared on a case-by-case basis, as the executive order permits. To characterize the judge’s ruling as an injunction staying the order is nothing short of misleading.

Societally, the order itself was being taken out of perspective, as it was intended to be only temporary, giving the U.S. Government enough time to strengthen its vetting process. The president is on firm ground in that, as the statement itself reads, “No foreign national in a foreign land, without ties to the United States, has any unfettered right to demand entry into the United States or to demand immigration benefits in the United States.”[2] To take a rather blatant example, a person does not have the right to cross an international border unilaterally, much less to go on to insist on the benefits owed to those who have crossed lawfully.

The order is on much less firm ground to the extent that its intent and outcome discriminate against Muslims. “The smoking gun they put in the executive order is the idea that they would grant exceptions for minority religions,” said Anthony Romero of the ACLU.[3] The one thing you can’t do under the establishment clause of the First Amendment of the U.S. Constitution is favor one religion over another, he added. The clause states that the U.S. Government cannot establish (or favor) a religion, or prohibit the free exercise of religion.

Would declaring or acting in such a way that only Christians are admitted from the seven countries in the Middle East mentioned in the order be to establish a religion? I submit that this question is more difficult than meets the eye. Clearly, favoring one religion over another runs contrary to the American value of toleration (it is certainly distasteful to me, comparative religion being one of my academic fields!), but does favoring people of a given religion establish that religion as the official religion of the United States? My point is merely that an elastic reading of the establishment clause may be necessary to get to establishment from barring Muslims.

To be sure, the executive order does not bar Muslims; rather, it may discriminate against them. Yet even this may be a function of security rather than religious preference (which in itself falls short of establishing a religion). From a security standpoint, making distinctions among religions does have "a kernel of truth," given the facts on the ground. It makes sense, therefore, that stricter scrutiny would apply where a predominately Muslim country has lax security. I suspect that the selection of the seven countries had to do with weaknesses in the U.S.’s vetting process specific to those countries. Perhaps their security infrastructures pale in comparison with that of Saudi Arabia, which was omitted from the list in spite of being a conservative Muslim country!

So, again, to report or characterize the executive order as a “Muslim ban” implies the sordid presence of either ignorance or a political agenda. Either way, it should be noted that a viable republic wherein the People stand as the ultimate sovereign depends on accurate reporting of the affairs of government to the principals. For intermediaries to become lazy or insert their own agendas is to disrespect the very notion of a republic, and the People as well.



[1] Adam Liptak, “Rulings on Trump’s Immigration Order Are First Step on Long Legal Path,” The New York Times, January 29, 2017.
[2] Ibid.
[3] Ibid.

Friday, January 27, 2017

Brexit and Calexit: Excessive Democracy?

Ordered by Britain’s Supreme Court to get the state’s Parliament’s approval for secession from the Union, the Prime Minister, Theresa May, faced the prospect of debate, amendments, and the votes themselves in the House of Commons and the House of Lords. In the latter chamber, May’s Conservative Party did not at the time have a majority. Some in her party “suggested that she should quickly appoint enough new lords to give her the votes she needs. But few say they expect that to be necessary: with little democratic legitimacy, the 805 lords are unlikely to dare to block” the referendum outcome favoring secession.[1] I submit that the democratic criterion is ill-fitting to the House of Lords.

Clearly, the British House of Lords is not a democratic institution. This does not delegitimize it, however, as such a chamber can serve as a check against the excesses of democracy, such as mob rule, which Plato and Aristotle both viewed as the bad side of democracy. Hypothetically, were the state’s House of Commons to abruptly vote to secede from the Union because of an emotional reaction to something taking place at the federal level, the lords could step in and say, in effect: slow down; let us think over whether secession is in our own best interests.

The distinctive assets that the House of Lords has in making sure the secession-vote is in the best interest of the state depend on the bases on which the lords are appointed. Heredity, for instance, could bring to bear the maturity of having been raised well (including education). Appointing lords based on commercial success would bring good analysis to bear on the problem, and appointing highly educated people would increase the chance that the implications of secession (or staying in the Union) are well thought out.

I am by no means suggesting that the will of an unelected legislative chamber is superior to that of a democratic one. Rather, my point is that the check-and-balance feature of the British state legislature requires that both chambers not be held to a criterion fitting only one of them. The claim that the House of Lords had better rubber-stamp the House of Commons simply because the democratic votes favor secession is spurious, and thus not in the state’s best interest.

We can contrast that case, “Brexit,” with that of California: “Calexit.” As Theresa May was submitting her secession document to Parliament, a “proposal for California to secede from the United States was submitted to the Secretary of State’s Office. The proposed ‘Calexit’ initiative—its name borrowed from the UK’s ‘Brexit’ . . . would ask voters to repeal the part of the state constitution that declares California an inseparable part of the U.S.”[2] Fortunately for Britain, neither the state’s nor the E.U.’s basic law has such a perpetual-union requirement built in. Hence, the California nationhood proposal faced an uphill climb, including the need for 600,000 signatures and a successful referendum vote.

Presumably both chambers of California’s legislature would also have to consent to the constitutional amendment rendering the state “a free, sovereign and independent country.”[3] Herein lies the rub. Unlike the British legislature, both chambers in California are democratically elected. This is clearly duplicative; the chief difference between the bodies is their size, as the senatorial districts are much larger (on the scale of the UK’s main regions: Wales, Scotland, England, and Northern Ireland).

Additionally, California lacks the check-and-balance feature that existed at the time in the British Parliament, and that absence carries an opportunity cost (i.e., the foregone benefit or value of the alternative). Other than California’s Supreme Court, which, like that of Britain, could intervene, “Calexit” could be pursued solely on a democratic basis and thus be vulnerable to democracy’s deficiencies or drawbacks. Mob rule, such as partisan opposition to the federal president, Donald Trump, could lead to a result (albeit unlikely) that is not in the best interest of California.

Furthermore, practical experience, maturity, and seasoned analysis do not have an institutional perch in the California legislature. It does not follow that the majority vote of an electorate will see to it that these virtues are sent to the legislature. So in this respect, California’s legislature can be seen as weaker than that of Britain. In California’s case, the U.S. Congress would also have to approve the state’s secession, and this added hurdle could serve as a check of sorts, though, again, only under democratic auspices, for both the U.S. House of Representatives and the U.S. Senate have been democratically elected bodies since 1913, when senators became directly elected rather than chosen by their respective state legislatures. In contrast, the E.U.’s European Council, which like the U.S. Senate represents states rather than citizens, was not as of early 2017 directly elected.


[1] Katrin Bennhold, “Ordered to Seek Approval on ‘Brexit,’ Theresa May Does So. Tersely,” The New York Times, January 26, 2017.
[3] Ibid.

Thursday, January 26, 2017

Power beyond the Constraints of Federalism: The Case of Gambia’s 2016 Presidential Election

Even though Adama Barrow defeated the longtime president of Gambia, Yahya Jammeh, in the state’s presidential election in December 2016, Barrow was rushed to the state of Senegal for security reasons when Jammeh refused to relinquish the power of the presidency. Jammeh had come to power by leading a successful coup in 1994, so it is no surprise that days after accepting the election result, he “changed his mind, declared the election results invalid and vowed to use the power of his military to stay in charge.”[1] This attests to the allure of power and how difficult it is to give up. In the E.U. and U.S., the protocols and institutional procedures are so well established that the nature of power is eclipsed from view as one political party assumes power previously held by another. The reality of power as it lives in human nature is much more raw in the case of Gambia’s transition of presidents in 2016. I submit that federalism at the empire level was too lax to bracket the true nature of power at the state level.

Gambia's new president, Adama Barrow, returning to the state after the previous president agreed to leave office. (Jerome Delay/AP)

“It took repeated personal overtures from West African presidents and finally a regional coalition of troops that crossed into Gambia to persuade [Jammeh], renowned for human rights abuses, to step down.”[2] That he felt compelled to leave Gambia for Equatorial Guinea says as much about the reach of the International Criminal Court as it does about how Gambia’s rule of law is no match for raw power, here human vengeance materializing through political power. In other words, the extraordinary measures, including the need for a regional coalition of troops and Jammeh’s self-imposed exile, point to the reality of power absent the channels of well-established, or fortified, institutional rules and even societal customs.

Furthermore, the ad hoc nature of the regional coalition bespeaks the need for a strengthening of the African Union. Unlike the E.U. and U.S., the A.U. is a mere confederation, with little or no governmental sovereignty at the federal level. Were the A.U. balanced in terms of state and federal power (and the same could be said of the Articles of Confederation in the U.S. and the EEC before the E.U.), the federal level could have acted as a check against Jammeh’s obstinate decision to remain in office. On the other side—and Americans in particular need to be reminded of this—the state governments in a federal system should have enough power to act as a check against over-reach at the federal level. The E.U. is much closer to a balanced federalism than either the A.U. or the U.S., which sit on either side of it (i.e., risking dissolution and consolidation, respectively).




[1] Jaime Y. Barry and Dionne Searcey, “His Predecessor Gone, Gambia’s New President Finally Comes Home,” The New York Times, January 26, 2017.
[2] Ibid.

Wednesday, January 25, 2017

Bringing Back Manufacturing Jobs to the U.S.A.: Confronting Tough Realities

Meeting with American corporate CEOs at the White House on the first “working day” of his presidency, Donald Trump warned, “A company that wants to fire all of its people in the United States and build some factory somewhere else, then thinks that product is going to just flow across the border into the United States . . . that’s just not going to happen.”[1] The new president was up against “tectonic forces” in trying to bring back “blue collar” manufacturing jobs to his base using tax policy. Yet the business calculus runs immediately on financial advantage, and the contours of the “game board” include the various tax and trade policies of countries.

Without an import tax large enough to erase the cost savings of moving a factory abroad, CEOs will naturally succumb to the pressure “to increase earnings at a double-digit rate when the American economy is growing by only 2 percent, and the quickest way to deliver higher profits is by reducing labor costs, whether through automation or moving jobs to cheaper locales like Mexico or China.”[2] The push, in other words, is excessive. The cause, according to the New York Times, “is the drive for bigger returns on 401(k) accounts, pension plans and other retirement vehicles that depend on steadily rising corporate profits and, in turn, a buoyant stock market.”[3] Whereas a U.S. president has a term of four years in which to see his policies realized, no such time-span is permitted where quarterly earnings reports are all the rage. Simply put, CEOs must make sure their policies see results, and quickly. With many emerging-market economies, as well as China, growing at more than twice the rate of the U.S. at the time Trump took office, global capital, including American capital, takes flight.

It is not as though the CEOs of American companies who move factories offshore are unethical. Scott Paul of the Alliance for American Manufacturing told the New York Times, “I believe a lot of the C.E.O.s in that room [with Trump] want to do the right thing and create jobs in America, but the realities of Wall Street pressure and a globalized economy [lead those C.E.O.s] to off-shore a lot of these jobs.”[4] One way to align the patriotic value with the business calculus is to alter the “game board” in such a way that it would be cheaper for companies to manufacture domestically those products geared for domestic sale; products directed to the Chinese consumer could still be manufactured in China. The key lies in raising the tariff or tax high enough and in adequately enforcing it.
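The tariff logic here can be sketched with hypothetical unit costs (all the numbers below are assumptions for illustration, not figures from the article): the break-even tariff is the rate at which offshoring's landed cost catches up to the domestic cost.

```python
# Hypothetical unit costs (illustrative assumptions, not real data).
domestic_unit_cost = 100.0   # cost to make the product in the U.S.
offshore_unit_cost = 70.0    # cost to make the same product abroad
shipping_per_unit = 5.0      # freight back to the U.S. market

def landed_cost(tariff_rate: float) -> float:
    """Offshore cost delivered to the U.S., with an ad valorem tariff."""
    return (offshore_unit_cost + shipping_per_unit) * (1 + tariff_rate)

# Break-even tariff: the rate at which offshoring stops being cheaper.
break_even = domestic_unit_cost / (offshore_unit_cost + shipping_per_unit) - 1

print(f"break-even tariff: {break_even:.1%}")                  # 33.3%
print(f"landed cost at 20% tariff: {landed_cost(0.20):.2f}")   # 90.00, still below 100
print(f"landed cost at 40% tariff: {landed_cost(0.40):.2f}")   # 105.00, now above 100
```

The sketch shows why "high enough" matters: a tariff below the break-even rate merely narrows the savings from offshoring without eliminating the incentive.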

To be sure, automation would still mean that a return to the manufacturing heyday could not be expected. Herein lies a much more difficult challenge: what to do with the remaining blue-collar workers who are not oriented to moving into white-collar professions and yet cannot find jobs in manufacturing. Behind the legitimacy of a tax on American companies moving factories abroad is the hard truth that significant numbers of people in any geographical region are not going to fit into white-collar jobs, for a variety of reasons including, but not limited to, education, upbringing, and values.


[1] Nelson D. Schwartz and Alan Rappeport, “Call to Create Jobs, or Else, Tests Trump’s Sway,” The New York Times, January 24, 2017.
[2] Ibid.
[3] Ibid.
[4] Ibid.

Tuesday, January 24, 2017

On the U.S. Government’s Fiscal Imbalance: Federalism to the Rescue?

At the outset of the Trump administration in the U.S., real economic output was projected to grow at an annual rate of 1.9 percent over the next decade.[1] The new federal president was hoping his proposals of tax cuts and $1 trillion in additional infrastructure spending over a decade would bump up the annual growth to 4 percent. I submit, however, that just over 2 percent more in the growth rate would not alter the stark “budget reality” facing the new president and the American people.

With an accumulated federal debt of roughly $20 trillion, excluding “agency debt” and the unfunded liabilities of entitlement programs like Social Security, Medicare, and Medicaid, the prospect of $9.4 trillion more from 2018 to 2027, as the Congressional Budget Office predicted at the time,[2] makes the matter of 1.9% versus 4% almost trivial. To be sure, such a difference in growth rate is not trivial to people hanging onto, or in need of, a job. “By 2023, the deficit would reach $1 trillion, and in 2027, a projected $1.4 trillion deficit would be equal to 5 percent of the economy, well over the 3 percent that economists view as the danger point.”[3] Lest it be presumed that a danger point would have an effect on the American consciousness, even in its own self-interest, the world was set to gleefully sail past a 2-degree-centigrade increase in the Earth’s temperature, another danger point. To be sure, an economic danger point for a superpower, or modern empire, does not represent such a pending cataclysm.
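The arithmetic behind the quoted figures can be checked directly, since a deficit's share of the economy is simply the deficit divided by GDP (the implied GDP below is an inference from those figures, not a number from the report):

```python
# A back-of-the-envelope check of the CBO figures quoted above.
deficit_2027 = 1.4e12    # projected 2027 deficit, in dollars
deficit_share = 0.05     # "equal to 5 percent of the economy"
danger_point = 0.03      # the share "economists view as the danger point"

implied_gdp = deficit_2027 / deficit_share
print(f"implied 2027 GDP: ${implied_gdp / 1e12:.0f} trillion")        # $28 trillion

# The largest deficit that would still sit at the 3-percent danger point:
max_safe_deficit = danger_point * implied_gdp
print(f"3%-of-GDP deficit: ${max_safe_deficit / 1e12:.2f} trillion")  # $0.84 trillion
```

On these figures, the projected deficit would overshoot the danger point by well over half a trillion dollars in a single year.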

Yet a major depression is nothing to sneeze at. The Congressional Budget Office claimed “that the share of debt held by the public was expected to reach 89 percent of gross domestic product in 2027. Such a high level of debt could increase the likelihood of a financial crisis and raise the possibility that investors will become skittish about financing the government’s borrowing.”[4] “Skittish” strikes me as a euphemism here. The question is rather why the holders of U.S. Treasury bonds had not already realized, by January 2017, that even $20 trillion in debt would never be repaid. The headless horseman is already dead; is this really a news flash? I submit that the matter of economic growth pales in comparison. Even the matter of an additional $9.4 trillion in debt, which does not take into account proposed tax cuts and increased infrastructure (and military!) spending, can realistically be viewed as an interesting fact past the finish line.

The fiscal dynamics of the U.S. federal government had been off track since at least the deficits of President Reagan’s terms in office in the 1980s, with the exception of a surplus in the late 1990s, only half of which went to retiring some of the debt. The imbalance can be reckoned, in other words, as systemic. I submit that it is a symptom of the broader imbalance in American federalism, wherein the federal government has assumed more and more power at the expense of the States since the Great Depression of the 1930s. As I see it, the only way out of the fiscal mess at the federal level is to hold federal taxation constant while transferring most of the domestic programs to the state level. That is to say, the states would have control of the programs, including raising the revenue to fund them. To be sure, individual state governments would have the power to decide whether to continue the programs. Even in terms of military spending, the U.S. military could be allowed to shrink (decreasing the amount of federal taxation going to the military), while more reliance is placed on the armies (i.e., militias) of the States. Most of the federal tax revenue would then go to paying down the federal debt.

My overall point is that American government had been off balance for some time before the Trump administration; the problem is systemic. Furthermore, the magnitude of the imbalance and its accumulated ballast warrant a drastic, even fundamental, realignment. Lest this seem frightening, especially to the vested interests sustaining the status quo, the inherent incrementalism of American politics means that the most likely course is to continue out of balance while debating how the deck chairs are arranged on the Titanic.



[1] Alan Rappeport, “Federal Debt Projected to Grow by Nearly $10 Trillion Over Next Decade,” The New York Times, January 24, 2017.
[2] Ibid.
[3] Ibid.
[4] Ibid.

Monday, January 16, 2017

The Wealth of 8 People and 3.6 Billion People: Utilitarianism Applied

As of the end of 2016, eight people held as much wealth as the 3.6 billion people who make up the world’s poorest half. Just a year earlier, a similar study had “found that the world’s richest 62 people had as much wealth as the bottom half of the population.”[1] Part of the difference in these findings is due to new data gathered by Credit Suisse. Put another way, the richest of the rich were richer than had been thought. In this essay, I want to call attention to the sheer magnitude of the wealth involved, as it pertains to the richest.
Forbes’ 2016 list of billionaires has Bill Gates, the founder of Microsoft, with a net worth of $75 billion, followed by Amancio Ortega, the founder of Inditex, at $67 billion. Warren Buffett came in third with $60.8 billion.[2] I could go on, but these three figures are sufficient to raise the question of how much is enough. By the calculus of greed, which is the love of gain itself, as in more and more ad infinitum, this question can only be extrinsic. In terms of use, however, the question is ripe, for there is indeed a limit to how much a person can realistically consume.
In terms of declining marginal utility, wherein a person does not get as much pleasure out of the fourth or fifth ice-cream cone in a row as from the first, it takes a lot more money added to $67 billion to trigger pleasure than added to $100. Add $1,000 to $100 and you have made the guy’s day, but add $1,000 to $67 billion and you might get a yawn. Pareto claimed that no such interpersonal comparisons of pleasure can be made, but I think Bentham was correct in making this point. Whereas Pareto relies on the valid point that pleasure itself is not quantifiable, Jeremy Bentham (whose mummified body, absent its head, sits in an open closet in a university building’s hallway in London) stressed declining marginal utility as it pertains to very different quantities of wealth.
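The Bentham side of that point can be illustrated numerically with logarithmic utility, a common stand-in for declining marginal utility (the functional form is my assumption; Bentham specified none):

```python
import math

def utility(wealth: float) -> float:
    """Log utility: each extra dollar adds less pleasure as wealth grows."""
    return math.log(wealth)

def marginal_gain(base: float, bonus: float = 1_000.0) -> float:
    """Extra utility from adding `bonus` dollars to `base` dollars of wealth."""
    return utility(base + bonus) - utility(base)

gain_poor = marginal_gain(100.0)   # $1,000 added to $100
gain_rich = marginal_gain(67e9)    # $1,000 added to $67 billion

print(f"utility gain at a $100 base: {gain_poor:.4f}")
print(f"utility gain at a $67B base: {gain_rich:.2e}")
print(f"the poor man's gain is {gain_poor / gain_rich:,.0f} times larger")
```

Under this (admittedly stylized) utility function, the same $1,000 yields a utility gain roughly a hundred million times larger at the $100 base, which is the "made his day" versus "yawn" contrast in quantitative form.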
Bentham’s utilitarian ethics gives primacy to the greatest good (i.e., pleasure, which he viewed as happiness) for the greatest number of people. Redistribution from the rich to the poor is in line with this ethic, given that a poor person would get more pleasure, or utility, from $1,000 than the pain felt by the rich man who is now without the $1,000.
Even just the gigantic sums of accumulated wealth themselves, such as Warren Buffett’s $60.8 billion—holding aside the question of added utility/pleasure from adding more wealth to the base—are not efficient, so to speak, in terms of utility/pleasure. “In my entire lifetime,” Warren Buffett said, “everything that I’ve spent will be quite a bit less than 1 percent of everything I make. The other 99 percent plus will go to others because it has no utility to me. So it’s silly for me to not transfer that utility to people who can use it.”[3] Because other people could use the money to derive more pleasure/utility, there is indeed an opportunity cost in the rich holding such vast sums. In other words, the retention of billions of dollars does represent a harm in that people who could really use it are deprived of it.
Admittedly, Buffett’s invested funds have led to pleasure from added productive enterprise and even innovation. The assumption of added productive uses can be questioned, however, as presumably alternative means of raising capital exist. An enterprise strategically oriented to expanding could go to a bank, for example, were Buffett’s invested funds depleted by voluntary or involuntary redistribution. In fact, banks would presumably have more money to lend to the extent that people receiving the redistributed funds deposit some portion (even the added consumption would go to existing businesses, thus giving them more retained earnings to invest in expansion and innovation). Furthermore, Buffett could have redistributed some wealth to the poor in the form of stocks and bonds, which would give the poor more economic security given the dividends and bond payments are on a base of wealth. In general, such means of increasing productive enterprise and innovation would be more in line with the greatest good for the greatest number of people, given declining marginal utility.
To be sure, Bentham warns that if redistribution crosses a threshold, the rich will not be motivated to create more wealth by work or investing more funds. The total “pie” would thus decrease; other things equal, there would be less pleasure/utility all around. We humans react more to losing $1,000 than to gaining $1,000, Bentham points out. Additionally, a rich person may be emotionally agitated if he or she feels that the redistribution is unfair—even stealing. This is in spite of Buffett’s point that very little utility relative to billions of dollars accrues to a rich person. However, Buffett’s statement suggests that losing a lot of money to redistribution—admittedly voluntary in his case—need not trigger the pain of loss. Even considering such pain to exist and be material, it must surely be less than the pleasure on the other side of the redistribution, given declining marginal utility. In Buffett’s words, other people can get more use out of the funds, and this added pleasure (which was not in Buffett’s holding of the wealth) is more than any pain in losing the wealth (given the pleasure that Buffett would still have from even just 1 percent of his wealth!).
From another perspective, owning tens of billions of dollars can be deemed excessive in not being justified in terms of property-rights theory. I have in mind John Locke’s labor theory of property. A person gains a natural right of ownership by “mixing” his or her labor with an asset, such as land. If you till the ground and plant the corn, you have earned a property right, or exclusive claim, on that land and its corn. It would be unethical for other people to trespass and consume the corn.
Applied to founders such as Gates, Gaona, and Buffett, the question is whether having wealth of tens of billions of dollars is proportionate to the labor (and even risk of loss) put into the respective foundings. This question pertains to executive compensation—are CEOs who are also founders paid inordinately because of their power and status in their respective organizations?—and to stock ownership—is there a public interest in limiting the amount of stock value one person holds in a company? The public interest, if one exists, would presumably borrow from Bentham’s point that billions of poor people would get more pleasure, or utility, from the redistributed surpluses than all the pain (if any) inflicted on the richest of the rich from the loss of some of their stock wealth. Given that only so much wealth can be consumed by any single person, there would presumably be more than enough wealth remaining such that the richest would not suffer.
In conclusion, sound theoretical reasons support the claim that the eight richest people in the world should not have as much wealth as 3.6 billion of the poorest. Just as it is difficult for the human mind to conceive of billions of people, the same applies to billions of monetary units. Accordingly, it is difficult to grasp the sheer vastness of the imbalance. From this basis alone, the inequality can be deemed problematic. As this point is itself in dispute—perhaps in part because some people simply hate government—I have not gone on to prescriptions on how the problem can or should be solved. In other words, just establishing that there is a problem is a task in need of theoretical justification and argumentation. My essay here is a flawed (e.g., too limited) attempt to fortify the position that the massive inequality of wealth is indeed a serious problem, ethically speaking. That is to say, the holding of such huge sums as I’ve cited above is not justified by the efforts expended to “get the ball rolling.”




[1] Gerry Mullany, “World’s 8 Richest Have as Much Wealth as Bottom Half of Global Population,” The New York Times, January 16, 2017.
[2] Ibid.
[3] Jonathan Stempel, “Gates Charity to Sell 60 Million Berkshire Shares, as Buffett Urged,” Reuters, January 18, 2017.


Sunday, January 15, 2017

Behind Brexit: State Sovereignty, Not Markets

Lest it be thought that trade—indeed, economics—was the foremost consideration in the British decision to secede from the E.U., the state’s prime minister tasked with implementing the secession made it clear that the political union had been the prime antagonist from the British standpoint. In American terms, such a position has been labeled as anti-federalist and even “states’ rights.” Economic considerations are not primary; rather, federalism is front and center—in particular, where power should be lodged. This ought not to strike fear into British business practitioners.

In characterizing the secession, Prime Minister Theresa May said, “We are leaving. . . . We will be able to have control of our borders, control of our laws.”[1] As a state of the E.U., the United Kingdom had only qualified sovereignty, the rest residing with the E.U. itself. The secession of a state means a return of the sovereignty given up when Britain joined the Union. Similarly, were California to secede from the U.S., that state would be fully sovereign.

Lest it be thought that the sovereignty in question in the British case was only economic in nature, May and her officials “made it clear that her two main priorities are ending the jurisdiction of the European Court of Justice over British law, and restoring British control over its borders and immigration.”[2] The economics follow from these political priorities, as the decisions “mean that Britain could no longer be a part of the single market for goods, capital, people and services of the European Union, because the rules for that market are adjudicated by the European Court of Justice.”[3] In other words, reclaiming the sovereignty that the UK had ceded to the Court was primary; giving up the common market and the customs union followed by implication.

By implication, the interests of particular British business sectors were not primary in May’s decision. This is not to say that particular British businesses had a basis to move offshore as a result, for a bilateral trade deal between the sovereign state and the E.U. would hardly be a minor thing. Put another way, given the amount of trade that would be possible, neither the state nor the Union would be apt to construct a trade agreement that thwarts trade. Therefore, even when economics is not in the driver’s seat, it does not follow that political changes will result in dire consequences for business.



[1] Steven Erlanger, “U.K. Set to Choose Sharp Break from European Union,” The New York Times, January 15, 2017, italics added.
[2] Ibid.
[3] Ibid.

Saturday, January 14, 2017

The Age of the Imperial CEO: The Case of F. Ross Johnson at RJR Nabisco

F. Ross Johnson, former CEO of RJR Nabisco, was known “for the fleet of corporate jets that ferried him to celebrity golf events and other luxurious perks he awarded himself.”[1] The key words here are awarded himself, for Johnson epitomized the sort of imperial CEO that made an oxymoron out of the notion that the corporate board is to serve as an overseer of management in corporate governance. Awarded himself should be the oxymoron, for such a conflict of interest runs against the logic of any viable business calculus.

That Johnson had “scant interest in the daily corporate grind” should also be an oxymoron, for the principal role of a CEO is to manage the business.[2] “He was not strategic,” said John Greeniaus, who ran the Nabisco business under Johnson.[3] This too should be regarded as an oxymoron, given the salience of strategic management in a CEO’s role. At some point, the business under such a CEO had to have taken a hit. For example, that offices “were abruptly moved, [and business] units [were] suddenly sold” could not have been good for the bottom line.

How could such a condition be permitted to go on in a major company? The corporate governance was undone, as Johnson handed out free plane rides, lucrative fees, and consulting contracts—each one representing a conflict of interest for the board members. Even though the board finally said no to Johnson’s attempted leveraged buyout, in part because he would “reap outsized profits from the deal,” he received $53 million in golden parachute payments after he resigned and the board went with another takeover bid.[4] The golden parachute should have been regarded as an oxymoron, and yet the payments attest to just how easy the CEO had had it.

My point is simply to ask, at what expense? How is it that a major company would even hire a man who was little interested in strategy? Wouldn’t this have shown through in the interviews? When he put cigarettes and cookies together in the same company, wouldn’t it have dawned on the board that the two areas were not a good fit? To be sure, the snacks and cigarettes were broken apart in 1999, but wouldn’t such an acknowledgement have naturally reflected on the CEO? Even so, he received $53 million in golden parachute payments.

Clearly, this case suggests that the system by which corporations are governed is vulnerable from a business standpoint. To be sure, decreasing marginal utility means that it would take a lot of money to improve the happiness of a rich CEO, but does a board need to pay so much heed to this dynamic, which flies in the face of sound compensation management? As for Johnson, Greeniaus describes the man as being “like a really intelligent six-year-old in a sandbox.”[5] Just because it would take a lot of money to interest a spoiled rich kid does not mean that boards should become enablers; it is not as if adults would not take the job.




[1] James R. Hagerty, “F. Ross Johnson,” The Wall Street Journal, January 7-8, 2017.
[2] Ibid.
[3] Ibid.
[4] Ibid.
[5] Ibid.

Wednesday, January 4, 2017

Is the E.U. an Unimportant Tower of Babel?

With 24 official languages, the E.U. spent about 1 billion euros on translation and interpretation in 2016. The defense that diversity and language-learning were thereby promoted rests on the specious reduction of cultural diversity to language and on the faulty assumption that conducting E.U. business in a myriad of languages prompts E.U. citizens to pick up an additional language. After all, such an undertaking is not like changing clothes or knitting a sweater. Meanwhile, the true cost of using the E.U. to make ideological claims, with language as the symbol, goes beyond euros to include the foregone ability of the E.U. to integrate even enough to conduct its existing competencies, or domains of authority, adequately.

Fortunately, officials and staff at the European Commission “usually write internally in only three [languages]—English, French and German—and often speak in English.”[1] That this has annoyed French speakers disproportionately (relative to German speakers) is but one indication that practicality could too easily be sacrificed in the very functioning of the E.U.’s federal institutions, even at a baleful time for the Union.

The movement to recognize Luxembourgish is similarly at the expense of practicality. At least as of 2016, residents of the state of Luxembourg also spoke German and French, and the state’s laws were in French! Incroyable! Could the E.U. afford to add such an unnecessary language, especially given the anticipated secession of Britain and the toll that it could take on the Union, even just psychologically? Why hamper the E.U.’s functioning in such a baleful context—literally adding to its budget for translation and interpretation—just to enhance the status of Luxembourgish, an aim resting on a specious, sophistic assumption anyway?

Incredibly, some politicians at the state level were urging the removal of English as an official language after the British secession, even though the language had been so useful functionally at the European Commission. That Ireland and Malta relied at the time on English, and that the language was “extremely popular in Central and Eastern Europe,”[2] just adds ammunition to the charge that government officials in the E.U. were not taking its existential threats seriously enough. The implication of the movement is that the functioning of the E.U. at the federal level does not really matter; word games matter more. Priorities matter, especially at turning points. The secession of a big state is a big deal for a federal system; to go on and deepen integration anyway, Europeans would need to give the E.U. a higher priority than was the case amid the jealous language games of 2016.



[1] James Kanter, “As the E.U.’s Language Roster Swells, So Does the Burden,” The New York Times, January 4, 2017.
[2] Ibid.

Tuesday, January 3, 2017

The Electoral College Hampered: The Case of Nixon’s 1968 Campaign Treason

While he was running for the U.S. presidency in 1968, Richard Nixon told H.R. Haldeman “that they should find a way to secretly ‘monkey wrench’ peace talks in Vietnam” by trying to get the South Vietnamese government to refuse to attend peace talks in Paris until after the U.S. election.[1] Specifically, Nixon gave instructions that Anna Chennault, a Republican fundraiser, should keep “working on” South Vietnamese officials so they would not agree to a peace agreement before the U.S. election.[2] “Potentially, this is worse than anything he did in Watergate,” said John Farrell, who discovered evidence of Nixon’s involvement in Haldeman’s notes on a conversation with the candidate. That Nixon committed a crime to win the election is itself an indication that the way Americans elect the federal president is flawed. That he went on to cover up the Watergate crime committed during the 1972 campaign, only to win by a landslide, should give pause to anyone having faith in an unchecked popular election. I contend that the American Founders designed the Electoral College in part to keep such a candidate from becoming president, even if the College has never operated as such. Yet it could.
Through surveillance, President Johnson learned of Chennault’s intervention at the behest of the Nixon campaign. Privately, the president believed that the intervention amounted to treason, though he said nothing publicly, lacking proof of Nixon’s personal involvement. “There’s really no doubt this was a step beyond the normal political jockeying, to interfere in an active peace negotiation given the stakes with all the lives.”[3] Johnson was planning to announce a bombing pause precisely to encourage the South Vietnamese to come to the table. Thanks to Farrell’s discovery, we know that Nixon did indeed attempt to undermine U.S. policy. Put another way, he put his own ambition above his country’s national security and interest.
One of the purposes of the Electoral College, as designed, is to act as a check on the American electorate, which can be misled by designing candidates. With so many Americans—even just the seven million at the time of the commencement of the U.S. federal constitution—it could not be assumed that the voters could have enough information on the candidates to take their actual activities into account. The relatively few electors in the Electoral College, however, could uncover non-publicized information pertinent to a good judgment on who should be president. Electors, for example, could have spoken with Johnson and done some digging on their own to get to the bottom of whether Nixon had committed treason to get elected. Because the electors “work for” the American people, which is sovereign over the government, government intel would rightly have been available to the electors.
“It is my personal view that disclosure of the Nixon-sanctioned actions by [Anna] Chennault would have been so explosive and damaging to the Nixon 1968 campaign that Hubert Humphrey would have been elected president,” said Tom Johnson, the note taker in the Johnson White House meetings about this episode.[4] So had the presidential electors of the Electoral College been free of the Republican Party and cognizant of their function to make up for deficiencies in the popular election, Nixon might not have been elected president in 1968. The “great national nightmare” of Watergate would have been averted. Unfortunately, the selection of the president was limited to public information, and the media was not able to make up the difference by getting to the root of the story.
We can look back at all this as a failure of the Electoral College and ask how its electors could be selected in such a way that their function as a check on the deficiencies of the popular judgment is enabled and protected. Allowing the political parties to select the electors can be regarded as an obstacle. Perhaps a given state’s electors could be selected in several ways—each elector being determined differently—such that no dominant power could subvert the College. The state legislature, for instance, could select one; the governor, another; the state’s supreme court, still another. A few more could be elected directly by the people by region. Having electors serve rotating multi-year terms might further insulate them, so they could resist popular or concentrated private pressure at election time. Paradoxically, American democracy would be strengthened rather than diminished, for the unearthed evidence of Nixon’s pre-election treason demonstrates how faulty the grounds of popular, public judgment can be at the ballot box.




[1] Peter Baker, “Nixon Sought ‘Monkey Wrench’ in Vietnam Talks,” The New York Times, January 3, 2017.
[2] Ibid.
[3] Ibid.
[4] Ibid.