Monday, January 16, 2017

The Wealth of 8 People and 3.6 Billion People: Utilitarianism Applied

As of the end of 2016, eight people held as much wealth as the 3.6 billion people who make up the world’s poorest half. Just a year earlier, a similar study had “found that the world’s richest 62 people had as much wealth as the bottom half of the population.”[1] Part of the difference in these findings is due to new data gathered by Credit Suisse. Put another way, the richest of the rich were richer than had been thought. In this essay, I want to call attention to the sheer magnitude of the wealth involved, as it pertains to the richest.

Forbes’ 2016 list of billionaires has Bill Gates, the founder of Microsoft, with a net worth of $75 billion, followed by Amancio Ortega, the founder of Inditex, at $67 billion. Warren Buffett came in third with $60.8 billion.[2] I could go on, but these three figures are sufficient to raise the question of how much is enough. By the calculus of greed, which is the love of gain itself—as in more and more ad infinitum—this question admits of no internal answer. In terms of use, however, the question is ripe, for there is indeed a limit to how much a person can realistically consume.

In terms of declining marginal utility, wherein a person does not get as much pleasure out of the fourth or fifth ice-cream cone in a row as from the first, it takes a lot more money added to $67 billion to trigger pleasure than added to $100. Add $1,000 to $100 and you have made the guy’s day, but add $1,000 to $67 billion and you might get a yawn. Pareto claimed that no such interpersonal comparisons of pleasure can be made, but I think Bentham had the better of the argument. Whereas Pareto relied on the valid point that pleasure itself is not quantifiable, Jeremy Bentham (whose clothed skeleton, topped with a wax head, sits in an open cabinet in a hallway of University College London) stressed declining marginal utility as it pertains to very different quantities of wealth.
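Bentham never formalized a utility function, but the intuition here can be sketched with a logarithmic utility curve. The log form is purely an illustrative assumption of mine, not Bentham’s own formula:

```python
import math

def utility_gain(base_wealth, addition):
    # Log utility, u(w) = ln(w): an illustrative assumption, chosen only
    # because its marginal utility falls as wealth rises
    return math.log(base_wealth + addition) - math.log(base_wealth)

gain_poor = utility_gain(100, 1_000)             # $1,000 added to $100
gain_rich = utility_gain(67_000_000_000, 1_000)  # $1,000 added to $67 billion

print(gain_poor)              # roughly 2.4
print(gain_rich)              # roughly 0.000000015
print(gain_poor / gain_rich)
```

Under this assumption, the same $1,000 yields on the order of a hundred million times more utility to the person with $100 than to the billionaire—the very asymmetry that Pareto’s refusal to compare utilities sets aside.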

Bentham’s utilitarian ethics gives primacy to the greatest good (i.e., pleasure, which he viewed as happiness) for the greatest number. Redistribution from the rich to the poor is in line with this ethic, because a poor person would get more pleasure, or utility, from $1,000 than the pain felt by the rich man who is now without that $1,000. To be sure, we humans react more to losing than to gaining, but here, given the huge difference in base wealth between the poor and the rich person, the pain in the rich man must surely be much less than the pleasure in the poor man. Pareto overlooked this point, which is vital when applied to such vast sums of wealth as I’ve cited above.

From yet another perspective, the sums can be deemed excessive. I have in mind John Locke’s labor theory of property. A person gains a natural right of ownership by “mixing” his or her labor with an asset, such as land. If you till the ground and plant the corn, you have earned a property right, or exclusive claim, on that land and its corn. It would be unethical for other people to trespass and consume the corn.

Applied to founders such as Gates, Ortega, and Buffett, the question is whether having wealth of tens of billions of dollars is proportioned to the labor (and even risk of loss) put into the respective foundings. This question pertains to executive compensation—are CEOs who are also founders paid inordinately because of their power and status in their respective organizations?—and to stock ownership—is there a public interest in limiting the amount of stock-value one person holds in a company? The public interest, if one exists, would presumably borrow from Bentham’s point that billions of poor people would get more pleasure, or utility, from the redistributed surpluses than all the pain (if any) inflicted on the richest of the rich from the loss of some of their stock-wealth. Given that only so much wealth can be consumed by any single person, there would presumably be more than enough wealth remaining such that the richest would not suffer.

In conclusion, sound theoretical reasons support the claim that the eight richest people in the world should not have as much wealth as 3.6 billion of the poorest. Just as it is difficult for the human mind to conceive of billions of people, the same applies to billions of monetary units. Accordingly, it is difficult to grasp the sheer vastness of the imbalance. From this basis alone, the inequality can be deemed problematic.

[1] Gerry Mullany, “World’s 8 Richest Have as Much Wealth as Bottom Half of Global Population,” The New York Times, January 16, 2017.
[2] Ibid.

Sunday, January 15, 2017

Behind Brexit: State Sovereignty, Not Markets

Lest it be thought that trade—indeed, economics—was the foremost consideration in the British decision to secede from the E.U., the state’s prime minister tasked with implementing the secession made it clear that the political union had been the prime antagonist from the British standpoint. In American terms, such a position has been labeled as anti-federalist and even “states’ rights.” Economic considerations are not primary; rather, federalism is front and center—in particular, where power should be lodged. This ought not to strike fear into British business practitioners.

In characterizing the secession, Prime Minister Theresa May said, “We are leaving. . . . We will be able to have control of our borders, control of our laws.”[1] As a state of the E.U., the United Kingdom had only qualified sovereignty; the rest resided with the E.U. itself. The secession of a state means a return of the sovereignty given up when Britain joined the Union. Similarly, were California to secede from the U.S., that state would be fully sovereign.

Lest it be thought that the sovereignty in question in the British case was only economic in nature, May and her officials “made it clear that her two main priorities are ending the jurisdiction of the European Court of Justice over British law, and restoring British control over its borders and immigration.”[2] The economics fall out of these political priorities, as these decisions “mean that Britain could no longer be a part of the single market for goods, capital, people and services of the European Union, because the rules for that market are adjudicated by the European Court of Justice.”[3] In other words, reclaiming the sovereignty that the UK had ceded to the Court was primary; giving up the common market and the customs union followed by implication.

By implication, the interests of particular British business sectors were not primary in May’s decision. This is not to say that particular British businesses had a basis to move “off shore” as a result, for a bilateral trade deal between the sovereign state and the E.U. would hardly be a minor thing. Put another way, given the amount of trade that would be possible, neither the state nor the Union would be apt to construct a trade agreement that thwarts trade. Therefore, even when economics is not in the driver’s seat, it does not follow that political changes will result in dire consequences for business.

[1] Steven Erlanger, “U.K. Set to Choose Sharp Break from European Union,” The New York Times, January 15, 2017, italics added.
[2] Ibid.
[3] Ibid.

Saturday, January 14, 2017

The Age of the Imperial CEO: The Case of F. Ross Johnson at RJR Nabisco

F. Ross Johnson, former CEO of RJR Nabisco, was known “for the fleet of corporate jets that ferried him to celebrity golf events and other luxurious perks he awarded himself.”[1] The key words here are awarded himself, for Johnson epitomized the sort of imperial CEO who made a mockery of the notion that the corporate board is to serve as an overseer of management in corporate governance. Awarded himself should be the oxymoron, for such a conflict of interest runs against the logic of any viable business calculus.

That Johnson had “scant interest in the daily corporate grind” should also be an oxymoron, for the principal role of a CEO is to manage the business.[2] “He was not strategic,” said John Greeniaus, who ran the Nabisco business under Johnson.[3] This too should be regarded as an oxymoron, given the salience of strategic management in a CEO’s role. At some point, the business under such a CEO had to have taken a hit. For example, that offices “were abruptly moved, [and business] units [were] suddenly sold” could not have been good for the bottom line.

How could such a condition be permitted to go on in a major company? The corporate governance was undone, as Johnson handed out free plane rides, lucrative fees, and consulting contracts—each one representing a conflict of interest for the board members. Even though the board finally said no to Johnson’s attempt at a leveraged buyout because in part he would “reap outsized profits from the deal,” he received $53 million in golden parachute payments after he resigned as the board went with another takeover bid.[4] The golden parachute should have been regarded as an oxymoron, and yet the payments attest to just how easy the CEO had had it.

My point is simply to ask, at what expense? How is it that a major company would even hire a man who was little interested in strategy? Wouldn’t this have shown through in the interviews? When he put cigarettes and cookies together in the same company, wouldn’t it have dawned on the board that the two areas were not a good fit? To be sure, the snacks and cigarettes were broken apart in 1999, but wouldn’t such an acknowledgement have naturally reflected on the CEO? Even so, he received $53 million in golden parachute payments.

Clearly, this case suggests that the system by which corporations are governed is vulnerable from a business standpoint. To be sure, decreasing marginal utility means that it would take a lot of money to improve the happiness of a rich CEO, but does a board need to pay so much heed to this dynamic, which flies in the face of sound compensation management? As for Johnson, Greeniaus describes the man as being “like a really intelligent six-year-old in a sandbox.”[5] Just because it would take a lot of money to interest a spoiled rich kid does not mean that boards should become enablers; it is not as if adults would not take the job.

[1] James R. Hagerty, “F. Ross Johnson,” The Wall Street Journal, January 7-8, 2017.
[2] Ibid.
[3] Ibid.
[4] Ibid.
[5] Ibid.

Wednesday, January 4, 2017

Is the E.U. an Unimportant Tower of Babel?

With 24 official languages, the E.U. spent about 1 billion euros on translation and interpretation in 2016. The defense, that diversity and language-learning were thereby promoted, rests on the specious reduction of cultural diversity to language and on the faulty assumption that conducting E.U. business in a myriad of languages prompts E.U. citizens to pick up an additional language. After all, such an undertaking is not like changing clothes or knitting a sweater. Meanwhile, the true cost of using the E.U. to make ideological claims with language as a symbol goes beyond euros to include the foregone ability of the E.U. to integrate even enough to adequately conduct its existing competencies, or domains of authority.

Fortunately, officials and staff at the European Commission “usually write internally in only three [languages]—English, French and German—and often speak in English.”[1] That this has annoyed French-speakers disproportionately (relative to German-speakers) is but one indication that practicality could too easily be sacrificed in the very functioning of the E.U.’s federal institutions, even at a baleful time for the E.U.

The movement to recognize Luxembourgish was similarly at the expense of practicality. At least as of 2016, residents of the state of Luxembourg spoke German and French too, and the state’s laws were in French! Incroyable! Could the E.U. afford to add such an unnecessary language, especially given the anticipated secession of Britain and the toll that could take on the Union, even just psychologically? Why hamper the E.U.’s functioning in such a baleful context, literally adding to its budget for translation and interpretation, just to enhance the status of Luxembourgish? The assumption that official recognition would do so is specious, even sophistical, anyway.

Incredibly, some politicians on the state level were urging the removal of English as one of the official languages after the secession of the British, even though the language had been so useful functionally at the European Commission. That Ireland and Malta relied at the time on English, and that the language was “extremely popular in Central and Eastern Europe,”[2] just adds ammunition to the charge that government officials in the E.U. were not taking its existential threats seriously enough. The implication of the movement is that the functioning of the E.U. at the federal level is not really very important; word-games matter more. Priorities matter, especially at turning points. The secession of a big state is a big deal for a federal system; to go on enhancing integration anyway, Europeans would need to give the E.U. a higher priority than it received amid the jealous language-games of 2016.

[1] James Kanter, “As the E.U.’s Language Roster Swells, So Does the Burden,” The New York Times, January 4, 2017.
[2] Ibid.

Tuesday, January 3, 2017

The Electoral College Hampered: The Case of Nixon’s 1968 Campaign Treason

While he was running for the U.S. presidency in 1968, Richard Nixon told H.R. Haldeman “that they should find a way to secretly ‘monkey wrench’ peace talks in Vietnam” by trying to get the South Vietnamese government to refuse to attend peace talks in Paris until after the U.S. election.[1] Specifically, Nixon gave instructions that Anna Chennault, a Republican fundraiser, should keep “working on” South Vietnamese officials so they would not agree to a peace agreement before the U.S. election.[2] “Potentially, this is worse than anything he did in Watergate,” said John Farrell, who discovered evidence of Nixon’s involvement in Haldeman’s notes on a conversation with the candidate. That Nixon committed a crime to win the election is itself an indication that the way Americans elect the federal president is flawed. That he went on to cover up the Watergate crime committed during the 1972 campaign, only to win by a landslide, should give pause to anyone having faith in an unchecked popular election. I contend that the American Founders designed the Electoral College in part to prevent such a candidate from becoming president, even if the College has never operated as such. Yet it could.
Through surveillance, President Johnson learned of Chennault’s intervention at the behest of the Nixon campaign. Privately, the president believed that the intervention amounted to treason, though he said nothing publicly, lacking proof of Nixon’s personal involvement. “There’s really no doubt this was a step beyond the normal political jockeying, to interfere in an active peace negotiation given the stakes with all the lives.”[3] Johnson had been planning to announce a bombing pause precisely to encourage the South Vietnamese to come to the table. Thanks to Farrell’s discovery, we know that Nixon did indeed attempt to undermine U.S. policy. Put another way, he put his own ambition above his country’s national security and interest.
One of the purposes of the Electoral College, as designed, is to act as a check on the American electorate, which can be misled by designing candidates. With so many Americans—even just the seven million at the time of the commencement of the U.S. federal constitution—it could not be assumed that the voters could have enough information on the candidates to take their actual activities into account. The relatively few electors in the Electoral College, however, could uncover non-publicized information pertinent to a good judgment on who should be president. Electors, for example, could have spoken with Johnson and done some digging on their own to get to the bottom of whether Nixon had committed treason to get elected. Because the electors “work for” the American people, which is sovereign over the government, government intel would have rightly been available to the electors.
“It is my personal view that disclosure of the Nixon-sanctioned actions by [Anna] Chennault would have been so explosive and damaging to the Nixon 1968 campaign that Hubert Humphrey would have been elected president,” said Tom Johnson, the note taker in the Johnson White House meetings about this episode.[4] So had the presidential electors of the Electoral College been free of the Republican party and cognizant of their function to make up for deficiencies in the popular election, Nixon might not have been elected president in 1968. The “great national nightmare” of Watergate would have been averted. Unfortunately, the selection of the president was limited to public information, and the media was not able to make up the difference by getting to the root of the story.
We can look back at all this as a failure of the Electoral College and ask how the electors therein could be selected in such a way that their function as a check on the deficiencies of the popular judgment is enabled and protected. Allowing the political parties to select the electors can be regarded as an obstacle. Perhaps a given state’s electors could be selected in several ways—each elector being determined in a different way—such that no dominant power could subvert the College. The state legislature, for instance, could select one, the governor another, the state’s supreme court still another. A few more could be elected directly by the people by region. Perhaps electors could serve rotating multi-year terms, protecting them from undue external influence so they could resist popular or concentrated private pressure at election time. Paradoxically, American democracy would be strengthened, rather than diminished. The unearthed evidence of Nixon’s pre-election treason demonstrates how faulty the grounds of popular, public judgment can be at the ballot-box.

[1] Peter Baker, “Nixon Sought ‘Monkey Wrench’ in Vietnam Talks,” The New York Times, January 3, 2017.
[2] Ibid.
[3] Ibid.
[4] Ibid.

Tuesday, December 20, 2016


If a picture is worth a thousand words, then how many words are moving pictures worth? Add in a script and you have actual words—potentially quite substantive words—grounding all that pictorial worth. Moving pictures, or movies for short, are capable of conveying substantial meaning to audiences. In the case of the film Snowden (2016), the meaning is heavy in political theory, in particular democratic theory. The film’s value lies in depicting how far the U.S. Government has slipped from that theory, and, indeed, from the People to which that government is in theory accountable.

The full essay is at "Snowden."

Monday, December 12, 2016

How American Presidents Are Selected: Beyond Russian Interference

Most delegates in the U.S. Constitutional Convention in 1787 recognized the value of constitutional safeguards against excess democracy, or mob rule. The U.S. House of Representatives was to be the only democratically elected federal institution—the U.S. Senate, the U.S. Supreme Court, and even the U.S. Presidency were to be filled by the state legislatures, by the U.S. President and U.S. Senate, and by electors elected by citizens, respectively. The people were to be represented in the U.S. House and the State governments in the U.S. Senate. The Seventeenth Amendment, ratified in 1913, which made U.S. senators elected by the people rather than selected by the governments of the States, materially unbalanced the original design. In terms of the selection of the U.S. president by electors, the political parties captured them, such that whichever party’s candidate wins a State, the electors there are those of the winning party. Even if the electors could vote contrary to the popular vote in a State, such voting could only be a rare exception, given the party-control. Hence the electors have not been able to function as intended—as a check against excess democracy. The case of Russian interference in the presidential election of 2016 presents an additional use for the Electoral College, were it to function as designed and intended. Of course, this is a huge assumption to make, even just taking into account the American mentality regarding self-governance.

Suppose, for example, that a presidential election were to take place only months after an attack by another country, such as the one at Pearl Harbor on December 7, 1941. The American people might be inclined to vote for whichever candidate has promised to nuke the belligerent power off the face of the Earth. Clearly, such a knee-jerk reaction would not be in the best interest of the American people. Were the electors in the Electoral College free of party-affiliation as well as any law requiring them to vote according to their State’s popular vote in the “presidential election” (i.e., actually for the electors), the electors of the College could elect another candidate—one not so inclined to beat the war drum to capitalize on the momentary passions of the people.

In short, American voters elect electors by state, and said electors in turn then meet in their respective state capitols to cast votes for president roughly a month later—that being the actual presidential election. This system reflects the delegates' fear that the masses voting directly would be risky because people have difficulty resisting their immediate passions. Demagogues running for office can too easily take advantage of the ignorance and inattention of the electorate, especially when the “campaign season” lasts 14 months!

That a population even as large as 7 million in the U.S. in 1789, and still more one of 310 million in 2016, must depend on the media for information on candidates—it being extremely unlikely that more than a tiny fraction of the people could meet the candidates—adds weight to the case for having electors whose task it is to act as a check on the deficiencies of a democratic election on such a scale. As for the number of electors in the Electoral College, each State has as many as the total number of its U.S. senators and U.S. House representatives. The number is few enough that the electors could actually meet the candidates in person and question them. Additionally, the electors could more feasibly have access to information on the candidates, and even to U.S. intel. In voting for these electors, the American people would be voting for people whose judgment is deemed up to the task.
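The arithmetic of the College's size, described above, can be sketched in a few lines; the function name here is my own:

```python
def electors(house_seats):
    # Each state's electors equal its U.S. House seats plus its two U.S. senators
    return house_seats + 2

# California's 53 House seats in 2016 yield 55 electors;
# a single-seat state such as Wyoming yields the minimum of 3.
print(electors(53))  # 55
print(electors(1))   # 3
```

Even summed across all the states, the resulting body numbers only in the hundreds, small enough for the in-person vetting described above.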

So it is fitting, given the purpose and design of the Electoral College, that the electors could receive U.S. intel on Russia’s interference in the 2016 presidential election. Hillary Clinton’s presidential campaign chairman, John Podesta, supported a proposal that electors be given “an intelligence briefing on alleged political interference by Russia.”[1] A group of 10 electors had written to the Director of National Intelligence to request a briefing. Those electors cited their role as a “deliberative body” designed in part to prevent foreign powers from trying to influence elections.[2] Although I am not aware of any direct reference to foreign interference as an explicit reason for the Electoral College in the Constitutional Convention (via Madison’s Notes), the rationale can fall within the broader one of the College serving as a check on deficiencies in the presidential election (i.e., the election of electors by the American people).

As the American people themselves selected the electors to in turn select the federal president, the extant federal officials, as agents of the People, were duty-bound to defer to the electors for such a purpose bearing on the task of electing the next president. It is up to the electors to decide whether any new information gained after the presidential election warrants the selection of someone other than the candidate whose party controls the majority of the electors (i.e., “won” the Electoral College). It does not necessarily follow that the electors should select the candidate who came in second.

Hypothetically, events taking place between the “presidential election” and the electors’ own vote could warrant the election of another candidate than the one who “won” the electoral college. New information on either of the major candidates could also justify such an outcome. The overall point, or aim, is that the best possible selection is made for the United States and its people. Holding to popular vote, whether by State or nationwide, pales in comparison, and is not necessarily optimal. One delegate, for instance, argued at the Convention that “the people at large . . . will never be sufficiently informed of characters.”[3] Another delegate said, “The people are uninformed and would be misled by a few designing men.”[4] That delegate felt this problem so grave that “the popular mode of electing the [president] would certainly be worst of all.”[5] Still another delegate argued that the selection of the president should be “by those who know most of the eminent characters & qualifications,” not “by those who know least”—meaning millions of people across an empire.[6] Such delegates were not themselves government officials, so the recognition of the limitations of a popular election by people like themselves is itself awe-inspiringly humble. For a people to recognize its own deficiencies and design safeguards even at the expense of their own future electoral preferences renders such a people worthy of self-government. Maturity, in addition to being educated and virtuous as Jefferson and Adams insisted, is requisite for self-governance.

I submit that Americans in 2016 were overwhelmingly—and conveniently—deficient in governmental maturity. Instead of a willingness to face their own complicity in standing by, or enabling, as presidential campaigns became so sordid and devoid of policy or even debate, a blind charge could be heard immediately after the election toward a new system based on an unprotected, and thus vulnerable, nationwide popular vote. Legitimacy supposedly hung in the balance, and the People could not be wrong. So it is ironic that the need for safeguards against the electorate itself was so easily dismissed. In other words, it is nothing short of astonishing that such an electorate would assume that an overhaul was not necessary in how presidents are selected and, moreover, that no safeguards would be needed for going by a nationwide majority vote. The underlying problem can be put as a question: Does a people that refuses to recognize the need for safeguards on itself, even for its own protection (i.e., in its own best interest), deserve self-government? Can such government function for long without the electorate being willing and able to keep its system of government in good condition? What if a people cannot recognize brokenness, whether in itself or in how its president is selected? Can such a people self-govern for long?

It is much easier to focus on foreign interference than to recognize deficiencies much closer to home. Taking the most comfortable route, rather than making difficult choices, is lethal for a viable republic, especially when the lack of character is combined with ignorance as to what constitutes good and bad systems of public governance. It is particularly revealing that a people most in need of safeguards is most apt to make the convenient assumption that they are not necessary. The rise and fall of mammoth empires is the stuff of history; every empire has come and gone. The fall of even a modern-day empire can come from within, as from a squalid mentality that absolves itself of even the possibility of being wrong about itself. This, I submit, is the American blight, and plight.


[1] Cody Derespina, “Clinton Campaign Backs Call for Electors to Get Trump-Russia Intel Briefing,” Fox News, December 12, 2016.
[2] Ibid.
[3] James Madison, Notes of Debates in the Federal Convention of 1787 Reported by James Madison (New York: W. W. Norton, 1966): 306.
[4] Ibid., 327.
[5] Ibid.
[6] Ibid., 405.

Wednesday, December 7, 2016

A Business Surtax on Income Inequality: Target the Proceeds

The median compensation in 2015 for the 200 highest-paid executives at publicly held companies in the U.S. was $19.3 million; five years earlier, the figure was $9.6 million.[1] CEO pay compared with the earnings of average workers surged from a multiple of 20 in 1965 to almost 300 in 2013.[2] “Income inequality is real, it is a national problem and the federal government isn’t doing anything about it,” said Charlie Hales, the mayor of Portland, Oregon, in 2016, when that city passed a surtax on companies whose CEOs earn more than 100 times the median pay of their rank-and-file workers.[3] According to the law, set to take effect in 2017, companies whose ratios are between 100 and 249 would pay an additional 10 percent in taxes; companies with higher ratios would face a 25 percent surtax on the city’s business-license tax. Whether the new law would make a dent in reversing the increasing income-inequality was less than clear.
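The ordinance's two tiers amount to a simple schedule, which can be sketched as follows. The function, the example figures, and the reading of the tier boundaries are my own illustrative assumptions, not text from the ordinance:

```python
def portland_surtax(license_tax, ceo_pay, median_worker_pay):
    # Tiers as described above: ratio of 100-249 -> 10% surtax on the
    # business-license tax; 250 and above -> 25% (boundary reading assumed)
    ratio = ceo_pay / median_worker_pay
    if ratio >= 250:
        return license_tax * 0.25
    if ratio >= 100:
        return license_tax * 0.10
    return 0.0

# Hypothetical company: $100,000 license tax, CEO paid 300x the median worker
print(portland_surtax(100_000, 15_000_000, 50_000))  # 25000.0
```

Note that the surtax is levied on the license tax, not on the CEO's pay itself, which is one reason to doubt, as argued below, that the sums involved would sway a board's compensation decisions.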
The most direct route to reversing the trend of growing inequality would be to use the proceeds from the surtax to increase the average incomes of the poor. Cash assistance to city residents below the poverty line, for instance, or increased rent subsidies would qualify. Alternatively, the city council could pass and fund a minimum-income level for local residents. As still another option, the financial assistance could be meted out more specifically to workers in the companies subject to the surtax, or to local companies more generally. Unfortunately, the proceeds were set to go into the city’s general fund, only part of which increases the incomes of the poor. “City officials in Portland estimated that the new tax would generate $2.5 million to $3.5 million a year for the city’s general fund, which pays for basic public services such as housing and police and firefighter salaries.”[4] If rental assistance is included and expanded, then the inequality of effective income could be impacted locally, though adding more police and firefighters, and perhaps even buying more police cars and firetrucks, would not affect the ratio.
In short, for the surtax to address the matter of income inequality most directly, the use of the tax revenue would have to be targeted to increasing the effective incomes of the poor (and middle class). Simply increasing the city’s budget dilutes the impact substantially.
On the CEO-pay end, the assumption that the surtax would result in lower CEO compensation figures is also subject to critique. What a board offers a prospective CEO must contend with what that particular labor market will bear. Furthermore, it is not clear that even 25% of a local license tax is enough money to motivate a board to reduce top executive salaries. Nor is it clear that $2.5 to $3.5 million would appreciably raise income levels in a city the size of Portland, Oregon’s largest city. Were the city to increase the tax to motivate companies to bring down CEO pay and/or make a dent in the incomes of the city’s poor, companies could simply move out of the city; they could even stay in Oregon.
To be sure, Portland’s mayor at the time admitted that the surtax is “an imperfect instrument” with which to tackle the momentous problem of increasing income-inequality in the U.S.[5] A better instrument would be at the State or federal level, with the proceeds going to fund a minimum income for all citizens. Lest such a “Robin Hood” approach be too stark, proceeds could be targeted more closely to the worker-CEO ratio by increasing the incomes or disposable incomes of workers.

[1] Gretchen Morgenson, “Portland Adopts Surcharge on C.E.O. Pay in Move vs. Income Inequality,” The New York Times, December 7, 2016.
[2] Ibid.
[3] Ibid.
[4] Ibid.
[5] Ibid.

Monday, December 5, 2016

Analysis of Italy’s 2016 Referendum: Beyond the Euro and the E.U.

The predominant axis of analysis in the wake of the Italian referendum in early December, 2016 centered on the euro, the federal currency of the European Union. For example, an article in The Wall Street Journal begins with the following: “Sunday’s referendum vote in Italy reinforced a widening split between the economics needed to sustain Europe’s common currency and the continent’s rising tide of populism.”[1] At the time, however, the populism in the E.U.’s states had more to do with immigration than the federal currency. Even so, analysts predicted that Italian parties antagonistic to the currency could be expected to benefit. Stephen Gallo at BMO Financial Group went so far as to claim, “Eurozone breakup risks are rising,” given “the political currents at work in the Eurozone.”[2] Although he made a good observation in noting the lack of a political will in the States “to finish building the missing architecture of the single currency area,” implying that the underpinnings of the euro were inherently unstable, he overlooked the matter of the distribution of wealth, and in particular the element of fairness, which I submit is salient in the Italians’ ‘No’ vote as well as in the rising anti-establishment, or shall we say, anti-elite, populism of the day.
The constitutional changes proposed in the referendum were “aimed at streamlining lawmaking and boosting competitiveness.”[3] Whereas The Wall Street Journal adds that the ‘No’ vote marked “a sobering start to a defining year ahead for the European Union,” I submit that the pro-business nature of the proposed changes is more salient.[4] Matteo Renzi, Italy’s prime minister who resigned after the vote, had argued in defense of the changes that Italy needed to be more competitive by making it easier for companies to do business there.[5] Translation: the changes would have enriched businesses in the State. To be sure, boosting the State’s economic output would have positively impacted the euro’s underlying stability as well as that of the Union itself. Even so, it is interesting to contrast the pro-business impact of the proposed reforms with the populist, or anti-establishment, 5-Star Movement party’s platform’s proposed income guarantees for all Italians.[6] The contrast gets to the real meaning of populism: the populace, or people, as distinct from the mutually reinforcing economic and political elites. In this light, the resounding 60% ‘No’ vote can be understood as a rejection of that mutually reinforcing dynamic as well as the increasing concentration of wealth, rather than as being primarily about the euro or even the E.U. If so, the question is whether such populism can concentrate enough power of its own to overcome the resistance and power of the financial and political elite. In other words, can populism result in a wholesale change in the political elite when so much money funnels in from the business world? This is a question for every republic, rather than just for E.U. states that use the euro currency.

1. Stephen Fidler, “Italy’s ‘No’ Opens Harrowing Year for EU,” The Wall Street Journal, December 5, 2016.
2. Jon Sindreu, “Euro Falls as Italians Reject Renzi’s Changes,” The Wall Street Journal, December 5, 2016.
3. Fidler, “Italy’s ‘No’.”
4. Ibid.
5. Ibid.
6. Ibid.

Young Japanese: An Early Verdict on Climate Change

Is the verdict in, and have we, mankind, lost our own self-inflicted climate battle? Is this what Japanese millennials were saying in 2016 when, according to a government survey, only 75 percent expressed interest in climate change, whereas close to 90 percent of the same age group (18-29) had expressed interest just a few years earlier?[1] Their intuition may have been the proverbial canary in the coal mine.
Midori Aoyagi, a principal researcher at the National Institute for Environmental Studies in Japan, reports that the young people in her focus groups “always felt a kind of hopelessness” toward their daily lives, their jobs, and social issues.[2] She suggests the pessimism might be “a result of having grown up during a prolonged period of economic stagnation known as the lost decades,” but this would not account for the drop from 90% to 75% in just a few years.[3] Interviews with Japanese aged 22 to 26 elicited a similar attitude. “These young people cited the huge scale and timeline of the problem, a feeling of powerlessness, silence from the media and preoccupation with more important issues.”[4] I want to unpack this revealing piece of evidence.
The huge scale of the problem and the silence of the media, combined with the political power of the extant energy sector, whose financial benefits are grounded in the status quo, suggest that nothing short of a sustained effort aimed at transitioning to clean energy could possibly suffice to obviate the worst of climate change in the decades to come. Not sensing such effort, as per the silence of the media, the young people may have intuited that their time would be more usefully spent on other societal problems, which still had a chance of being solved. To be sure, unforeseen technological developments could at least in theory still redeem the species in spite of its self-destructive urge for instant gratification. Yet without a hint of promise from the species’ unique tool-making ability, the young people could not but sense a slipping away of the window for solving the climate-change problem. Indeed, because they could live to see the worst of climate change as it unfolds, the sense of hopelessness makes sense. So it is particularly telling for the rest of us that more of them were moving on to tackle other, more solvable societal problems.

[1] Tatiana Schlossberg, “Japan Is Obsessed with Climate Change. Young People Don’t Get It,” The New York Times, December 5, 2016.
[2] Ibid.
[3] Ibid.
[4] Ibid.