Wednesday, February 20, 2019

Corporate Political-Campaign Contributions as Decisive in Anti-Trust Enforcement

On August 31, 2011, “the [U.S.] Justice Department sued to block AT&T’s $39 billion takeover of T-Mobile USA, a merger that would create the nation’s largest mobile carrier. 'We believe the combination of AT&T and T-Mobile would result in tens of millions of consumers all across the United States facing higher prices, fewer choices and lower-quality products for their mobile wireless services,' said James M. Cole, the deputy attorney general.”[1] The New York Times claimed at the time that it was “arguably the most forceful antitrust move” by the Obama administration.[2] To be sure, there were “few blockbuster mergers with the potential to reshape entire industries and affect large swaths of consumers.”[3] However, one could cite the UAL merger with Continental and Comcast’s acquisition of NBC as accomplished mergers. It is more likely that the housing-induced recession made the administration reluctant to risk letting a major company in search of a buyer go bankrupt. I would not be surprised if the vested interests behind major mergers and acquisitions “played the bankruptcy card” as leverage with the Justice Department. Moreover, the political power of mega-corporations in the U.S. can be expected to have come into play.
To be sure, the U.S. Department of Justice was capable of flexing its political muscle. Nasdaq withdrew its $11 billion bid for NYSE Euronext, the parent company of the Big Board, after government lawyers warned of legal action. However, conditioning Comcast’s purchase of NBC on the latter’s giving up control of Hulu, an online movie/television conduit, evinced a strange indifference to a much larger distribution company (Comcast) having a vested financial interest in some of the content (i.e., NBC programming).
The Justice Department looked the other way on a rather obvious conflict of interest potentially operating at the expense of the consumer (assuming people want to watch more than NBC programming on cable). Of course, cable is not the only distribution channel for television programming. Customers dissatisfied with Comcast’s vaunting of NBC programming (and even possibly restricting other content) could go to DirecTV, for example. However, conflating distribution and content seems a bit like the repeal of the Glass-Steagall law, which had prohibited the combination of commercial and investment banking (and brokerage) from 1933 to 1999. The law reflected the belief that institutional conflicts of interest can and should be avoided, even if not every commercial-investment firewall would succumb to the immediacy of the profit motive.
In short, the Obama administration could have gone further in its antitrust actions. While admittedly not as pro-business as the preceding administration, the Obama administration’s tacit acceptance of mega-corporations may translate into an insufficient defense of competition. It should not be forgotten that Goldman Sachs contributed $1 million to Obama’s election campaign in 2008. For the president to actively promote competitive markets would require him to bite the hands that have been feeding him. In other words, there is a conflict of interest involved in allowing corporations to make political contributions while government officials are tasked with replacing oligopolies with competitive marketplaces.

1. Ben Protess and Michael J. De La Merced, “The Antitrust Battle Ahead,” New York Times, August 31, 2011. 
2. Ibid.
3. Ibid.

President Trump’s Spending on a Border Wall: Federalism at Risk?

U.S. President Trump announced in February of 2019 that he would fully fund a wall on the U.S.’s southern border. He would first use the $1.375 billion granted by Congress, to be followed by $600 million from a Treasury Department asset-forfeiture fund for law enforcement, $2.5 billion from a military anti-drug account, and $3.6 billion in military construction funds.[1] The president’s rationale hinged on his declaration of a national emergency due to illegal immigration, drug trafficking, and crime/gangs, all of which he said had been coming across the border on a regular basis. In federal court, sixteen of the U.S.’s member-states challenged the president’s declaration and use of funds. The U.S. president’s legal authority to declare national emergencies was pitted against the authority of the U.S. House of Representatives to initiate federal spending legislation. The House therefore had standing to sue. The question of the states’ legal standing is another matter. It is particularly interesting not only because it involved whether a given state would be harmed by the wall, or by the president’s diversion of funds that could otherwise have gone to projects in states far from the border, but also because federalism itself could be harmed in a way that affects all of the states.
Prima facie, it seems difficult that California and New Mexico could show injury from a wall that would not be built in either of those states. On this basis, the claimed injury to Hawaii seems even more far-fetched, as the ocean functions as that republic’s border. Similarly, New York is nowhere near the U.S.’s southern border. Arguing, however, that “the president’s unconstitutional action could cause harms in many parts” of the U.S., California’s attorney general at the time insisted that the member-states had standing apart from where the wall would be built.[2] Given the sources of the funding, all of the states could “lose funding that they paid for with their tax dollars, money that was destined for drug interdiction or for the Department of Defense for military men and women and military installations,” he explained.[3] This point, I admit, is valid, but it lacks a larger constitutional view.
In a federal system in which the member-states and the federal governmental institutions each have their own basis of governmental sovereignty, a power-grab by one means less power for the other. The judicial trend since the war between the U.S.A. and C.S.A. during the first half of the 1860s has been to validate encroachments by the federal government on the powers of the states. President Trump’s decision to build a wall in some of the member-states represented a power-grab not only with respect to Congress, but also with respect to the states. In the E.U., by contrast, the states have more say in how the E.U.’s border is protected. The European model of federalism values cooperation at both the policy and implementation stages more than does the American model, in which ambition is set to counter ambition.
The U.S. Senate was originally intended to be the access point in the federal government at which the state governments could affect or even block proposed federal legislation. When U.S. senators became popularly elected by voters in the respective states rather than appointed by the state governments, the latter lost their direct access to the federal government. Before then, a majority of states could defend not only their own interests, but also the interest of the state “level” in the federal system. Afterward, it became more difficult for the state governments to forestall encroachments (i.e., power-grabs) by the federal government, and the federal system itself began to suffer from a growing imbalance.
With the state governments no longer able to express themselves directly in the U.S. Senate, whose senators had an obvious incentive to satisfy constituent and especially financial-backer interests, going to the courts became the only route for trying to stop the federal president’s spending plan for a wall. Yet even that strategy suffered from the institutional conflict of interest implicit in a federal court deciding disputes between the states and the federal government. Looking narrowly at the anticipated injuries to the 16 states would attest to the federal bias in the federal courts, which nonetheless have a responsibility to consider the standing that the states have in the federal system. After all, the states, rather than the federal government, enjoy residual sovereignty. Is not a federal encroachment itself an injury to the state governments in the form of their loss of power? By the twenty-first century, the federal government could claim preemption in order to keep the governments of the states from legislating in an area of law even when the federal government did not intend to legislate in it! The danger in such an imbalanced federal system—that is, a lopsided system of governance—is that the encroaching government becomes tyrannical not just toward the states, but toward the People as well. As the power-checking-power mechanism breaks down, absolute power becomes increasingly likely.

For more comparisons of American and European federalism, see Essays on Two Federal Empires: Comparing the E.U. and U.S., and American and European Federalism: A Critique of Rick Perry's "Fed Up"!  Both are available at Amazon.

1. Charlie Savage and Robert Pear, “States’ Lawsuit Aims to Thwart Emergency Bid,” The New York Times, February 19, 2019.
2. Ibid.
3. Ibid.

Saturday, February 16, 2019

On the Various Causes of the Financial Crisis of 2008: Have We Learned Anything?

In January 2011, the Financial Crisis Inquiry Commission announced its findings. The usual suspects were not much of a surprise; what is particularly notable is how little had changed on Wall Street since the crisis in September of 2008. According to The New York Times, "The report examined the risky mortgage loans that helped build the housing bubble; the packaging of those loans into exotic securities that were sold to investors; and the heedless placement of giant bets on those investments." In spite of the Dodd-Frank Financial Reform Act of 2010 and the panel's report, The New York Times reported that "little on Wall Street has changed." One commissioner, Byron S. Georgiou, a Nevada lawyer, said the financial system was “not really very different” in 2010 from before the crisis. “In fact," he went on, "the concentration of financial assets in the largest commercial and investment banks is really significantly higher today than it was in the run-up to the crisis, as a result of the evisceration of some of the institutions, and the consolidation and merger of others into larger institutions.” Richard Baker, the president of the Managed Funds Association, told The Financial Times, "The most recent financial crisis was caused by institutions that didn't know how to adequately manage risk and were over-leveraged. And I worry that if there is another crisis, it will be because the same institutions have failed to learn from the mistakes of the past." From the testimonies of managers of some of those institutions, one might surmise that the lack of learning in the two years after the crisis was due to a refusal to admit to even a partial role in the crisis. In other words, there appears to have been a crisis of mentality; containing intractable assumptions and ideological beliefs, as well as stubborn defensiveness, such a mentality is not easily dislodged, which made legislation beyond Dodd-Frank unlikely ever to pass.
It is admittedly tempting to go with the status quo rather than be responsible for reforms. If the reformers are also the former perpetrators, their defensiveness and ineptitude mesh well with the continuance of the status quo, even if an entire economy the size of an empire is left vulnerable to a future crisis. To comprehend the inherent danger in the sheer continuance of the status quo, it is helpful to digest the panel's findings.
The crisis commission found "a bias toward deregulation by government officials, and mismanagement by financiers who failed to perceive the risks." The commission concluded, for example, that "Fannie and Freddie had loosened underwriting standards, bought and guaranteed riskier loans and increased their purchases of mortgage-backed securities because they were fearful of losing more market share to Wall Street competitors." These two organizations were not really market participants, as they were guaranteed by the U.S. Government. That government-backed corporations would act so much like private competitive firms undercuts the assumed civic mission that premises government underwriting. All this ought to have raised a red flag for everyone--not just for the panel, which stressed the need for a pro-regulation verdict.

Lehman was a particularly inept player leading up to the crisis.

In terms of the private sector, The New York Times reported that the panel "offered new evidence that officials at Citigroup and Merrill Lynch had portrayed mortgage-related investments to investors as being safer than they really were. It noted — Goldman’s denials to the contrary — that 'Goldman has been criticized — and sued — for selling its subprime mortgage securities to clients while simultaneously betting against those securities.'” The bank's proprietary net-short position could not be justified simply by market-making as a counter-party to its clients, Blankfein's congressional testimony notwithstanding.
Relatedly, the panel also pointed to problems in executive compensation at the banks. For example, Stanley O’Neal, chief executive of Merrill Lynch, a bank that did not survive the crisis as an independent firm, told the commission about a “dawning awareness” through September 2007 that mortgage securities had been causing disastrous losses at the firm; in spite of his incompetence, he walked away weeks later with a severance package worth $161.5 million. The panel might have gone on to point to the historically large gap between CEO and lower-level manager compensation and to question its merit, but such a conclusion would have gone beyond the commission's mission to explain the financial crisis.
With regard to the government, The New York Times reported that the panel "showed that the Fed and the Treasury Department had been plunged into uncertainty and hesitation after Bear Stearns was sold to JPMorgan Chase in March 2008, which contributed to a series of 'inconsistent' bailout-related decisions later that year." “The Federal Reserve was clearly the steward of lending standards in this country,” said one commissioner, John W. Thompson, a technology executive. “They chose not to act.” Furthermore, Sabeth Siddique, a top Fed regulator, described how his 2005 warnings about the surge in “irresponsible loans” had prompted an “ideological turf war” within the Fed — and resistance from bankers who had accused him of “denying the American dream” to potential home borrowers. That is to say, the Federal Reserve, the U.S. central bank, was too beholden to bankers instead of to the common good. So we are back to the issue of a government-created institution acting like, or on behalf of, private companies (and badly at that).
We can conclude generally that governmental, government-supported, and private institutions, all acting in their self-interests, contributed to a "perfect storm" that knocked down Bear Stearns, Lehman Brothers, Countrywide, AIG, and Fannie Mae and Freddie Mac. Systemically, the commercial paper market--short-term corporate borrowing--seized up, and many of the housing markets in the U.S. took a severe fall, such that home borrowers awoke to find their homes under water. The Federal Reserve was caught off-guard, as its chairman, Ben Bernanke, had been claiming that the housing markets could be relied on to stay afloat. Relatedly, AIG insured holders of mortgage-based bonds without bothering to hold enough cash in reserve in case of a major simultaneous decline in the housing markets. Neither the insurer nor the investment banks that had packaged the subprime mortgages into bonds thought to investigate whether Countrywide's mortgage producers had pushed through very risky mortgages before selling them to the banks to package. In short, people who were inept believed nonetheless that they could not be wrong. Dick Fuld, Lehman's CEO, had the firm take on too much debt to buy real estate so that his firm would eventually be as big as Goldman Sachs. That such recklessness would be in the service of a childish desire to be as big as the other banks testifies to the need for financial regulation that goes beyond the "comfort zone" of Wall Street's bankers and their political campaign "donations."

See also Essays on the Financial Crisis: Systemic Greed and Arrogant Stupidity, available at Amazon.


Sam Jones, "Hedge Funds Rebuke Goldman," Financial Times, January 28, 2011, p. 18.

Wednesday, February 13, 2019

Decreasing Bank Size by Increasing Capital-Reserve Requirements: Plutocracy in Action?

Although the Dodd-Frank Financial Reform Act was passed in 2010 with some reforms, such as liquidity standards, stress tests, a consumer-protection bureau, and resolution plans, the emphasis on additional capital requirements (i.e., the SIFI surcharges) could be considered weak because they may not be sufficient should another financial crisis trigger a shutdown in the commercial paper market (i.e., short-term corporate borrowing). A study by the Federal Reserve Bank of Boston found that even the additional capital requirements in Dodd-Frank would not have been enough for eight of the 26 banks with the largest capital losses during the financial crisis of 2008. As overvalued assets, such as subprime mortgage-backed derivatives, plummet in value, banks can burn through their capital reserves very quickly. A frenzy of short-sellers can quicken the downward cycle even more. This raises the question of whether additional capital would quickly be "burnt through" rather than standing for long as a bulwark. The financial crisis showed the cascading effect that can quickly run through a banking sector as fear spreads even between banks, with one damaged bank impacting another, and another.
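The arithmetic behind "burning through" capital can be made concrete. The following sketch uses invented balance-sheet figures (not numbers from the Boston Fed study) to show how a modest write-down on assets can wipe out an equity cushion:

```python
# Hypothetical bank balance sheet; all figures invented for illustration.
def capital_ratio(assets, liabilities):
    """Equity capital (assets minus liabilities) as a share of total assets."""
    return (assets - liabilities) / assets

assets = 100.0      # book value of assets, in billions
liabilities = 92.0  # debt owed; the equity cushion starts at 8%

# Write down the assets cumulatively and watch the cushion shrink.
for writedown in (0.02, 0.04, 0.06, 0.08):
    marked = assets * (1 - writedown)
    print(f"{writedown:.0%} write-down -> capital ratio "
          f"{capital_ratio(marked, liabilities):.1%}")
```

Because the liabilities do not shrink along with the assets, an 8% loss on assets erases an 8% equity cushion entirely; this is why a leveraged bank can look well capitalized one quarter and insolvent the next.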
With the $6.2 billion trading loss at JPMorgan Chase in hindsight, Sen. Sherrod Brown (D-Ohio) and Sen. David Vitter (R-La.) proposed a bill in the U.S. Senate that would require banks with more than $400 billion in assets to hold at least 15 percent of those assets in hard capital. The two senators meant this requirement to encourage the multi-trillion-dollar banks to split up into smaller banks. Although it had been argued that gigantic banks are necessary given the size of the loans wanted by the largest corporations, banks had of course been able to form syndicates to finance such mammoth deals.
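The intended incentive to split up can be sketched in a few lines. The $400 billion threshold and the 15 percent surcharge come from the bill as described above; the 8 percent base ratio and the bank sizes are assumptions for illustration only:

```python
def required_capital(assets_bn, threshold_bn=400, high_ratio=0.15, base_ratio=0.08):
    """Hard-capital requirement in billions: 15% of assets for banks above
    the threshold, an assumed lower 8% ratio for banks at or below it."""
    ratio = high_ratio if assets_bn > threshold_bn else base_ratio
    return assets_bn * ratio

# One $2-trillion bank versus the same assets split into five $400bn banks.
as_one_bank = required_capital(2000)        # 2000 * 0.15 = 300
as_five_banks = 5 * required_capital(400)   # each at the base ratio: 5 * 32 = 160
print(f"one mega-bank: ${as_one_bank:.0f}bn; five smaller banks: ${as_five_banks:.0f}bn")
```

Under these assumed numbers, splitting up frees $140 billion of capital for lending or payouts, which is precisely the pressure the senators hoped would make the mega-banks shrink voluntarily.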
The Senate had recently voted 99-0 on a nonbinding resolution to end taxpayer subsidies to too-big-to-fail banks, so the U.S. Senate had Wall Street’s attention. Considering, however, that the U.S. House of Representatives was working on legislation to deregulate derivatives, the chances that the U.S. Government would stand up to Wall Street, even in the face of too-big-to-fail systemic risk, were slim to none. Indeed, the U.S. Department of Justice’s criminal division had been going easy in prosecuting the big banks for fraud, whether out of fear that a conviction would cause a bank collapse or because President Obama had received very large donations from Wall Street banks, including notably Goldman Sachs.
The two senators’ indirect strategy of breaking up the biggest banks by raising their reserve requirements disproportionately didn't work, at least as of 2019. The advantages of size, including the human desire to empire-build (witness Dick Fuld at Lehman Brothers), could have been expected to outweigh the economic appeal of a lower, more proportionate reserve requirement. Even with antitrust laws having been used to break up giants such as Standard Oil and AT&T, the thought of breaking up the banks too big to fail, even in the wake of the financial crisis, was strangely viewed as radical and thus at odds with American incrementalism. The question was simply whether systemic risk should be added to monopoly (i.e., restraint of trade) as an additional rationale for breaking up huge concentrations of private property. This question could have been made explicit, rather than trying to manipulate the big banks into losing some weight.
The approach of using disproportionate reserve requirements can be critiqued on at least two grounds. First, should one or more of those banks decide to live with the 15% requirement rather than break up into smaller firms, even the additional capital might not be enough to protect a bank during a financial crisis; the Boston Fed study discussed above suggested as much. Second, even if the additional requirements turned out to be sufficient in a crisis, the approach would sidestep an explicit decision by the government on whether systemic risk justifies a cap on how large banks can get.
I suspect that the U.S. Congress and the president backed off from really reforming Wall Street because of Wall Street's money in campaign finance. In short, the big banks on Wall Street didn't want to shrink. In a system of political economy wherein the economy is regulated by government, rather than vice versa, backing off just because large concentrations of wealth (and thus power, even political power) don't like the plans is unacceptable. Moreover, it is a sign of encroaching plutocracy, wherein the regulated dictate behind closed doors to the regulators and politicians. Meanwhile, the public, including the economy itself, remains vulnerable.

See Essays on the Financial Crisis: Systemic Greed and Arrogant Stupidity, available at Amazon.

Eric Rosengren, “Bank Capital: Lessons from the U.S. Financial Crisis,” Federal Reserve Bank of Boston, February 25, 2013.
Zach Carter and Ryan Grim, “Break Up the Banks’ Bill Gains Steam in Senate As Wall Street Lobbyists Cry Foul,” The Huffington Post, April 8, 2013.

Johnson’s “Reinvention” of JC Penney: Too Much and Too Little

In April 2013, JC Penney’s board wished the CEO, Ron Johnson, “the best in his future endeavors.” His effort to “reinvent” the company had been “very close to a disaster,” according to the largest shareholder, William Ackman. During Johnson’s tenure as CEO, shares fell more than fifty percent. In February 2013, Johnson admitted to having made “big mistakes” in the turnaround. For one thing, he did not test-market the changes in product lines and price points. The latter in particular drove away enough customers for the company’s sales to decline by 25 percent. Why did Johnson fail so miserably?
Ron Johnson's short tenure as CEO of JC Penney was disastrous, according to Ackman.
Some commentators on CNBC claimed that JC Penney’s board directors should have known better than to hire someone from Apple into so much responsibility right off the bat in a department store. However, Johnson had been V.P. for merchandising at Target before going over to Apple. Therefore, Penney’s board cannot be accused of ignoring the substantive differences between sectors. Even so, Target and Walmart are oriented to one market segment, whereas JC Penney, Kohl's, and Macy's are oriented to another. Perhaps had he taken the time to have market tests done at JC Penney, any error in applying what he had learned at Target could have been made transparent.
Although, as Ullman, the former CEO who would replace Johnson, pointed out, customer tastes are always changing, so a company can't simply go back to what worked in the past, to “reinvent” a company goes too far in the other direction. For one thing, it is risky for a retail company to shift from one market segment to another, given the company's image. Additionally, to “reinvent” something is to start from scratch and come up with something totally new. Even if that were possible for a retail chain, the “new front” would likely seem fake to existing customers. “They are trying to be something they are not,” such customers might say. Put another way, Ron Johnson might have gotten carried away.
In an interview just after Johnson’s hiring at JC Penney had been announced in June 2011, he said, “In the U.S., the department store has a chance to regain its status as the leader in style, the leader in excitement. It will be a period of true innovation for this company.” A department store is exciting? Was he serious? Perhaps his excitement got the better of him in his zeal for change. Were the changes really “true innovation”? Adding Martha Stewart kitchen product lines was hardly innovative—nor was getting rid of clearance sales and renovating store designs and the company logo.
Renovation, generally speaking, is rather superficial, designed perhaps to give customers an impression of more change than is actually the case. Is a given renovation an offshoot of marketing or of strategy? Ron Johnson may have been prone to exaggeration, as evinced by his appropriation of faddish jargon, while coming up short in terms of substantive change. In an old company trying to be something it's not (i.e., going from a promotional to a specialty pricing strategy), too much superficial change can easily outweigh too little real change. Even upper-level managers can get carried away with their own jargon in trying to make their companies into something they are not, much as a person might try to be someone he or she is not. In "reinventing" JC Penney, Ron Johnson was trying to make an old woman come off as young by applying make-up and new clothes.
Stephanie Clifford, “J.C. Penney Ousts Chief of 17 Months,” The New York Times, April 9, 2013.

Joann Lublin and Dana Mattioli, “Penney CEO Out, Old Boss Back In,” The Wall Street Journal, April 8, 2013.

Monday, February 11, 2019

Is Modest Growth vs. Full Employment a False Dichotomy?

As summer slid into autumn in 2012, the Chinese government was giving no hint of any ensuing economic-stimulus program. This was more than slightly unnerving for some, as a recent manufacturing survey had slumped more than expected, to 49.2 in August; a score of 50 separated expansion from contraction. A similar survey, by HSBC, came in at 47.6, down from 49.3 the previous month. Bloomberg suggested that China might face a recession in the third quarter. So why no stimulus announcement? Was the Chinese government really just one giant tease? I submit that the false dichotomy of moderate economic growth versus full employment was in play. In short, the Chinese government did not want to overheat even a stagnant economy, even though the assumption was that full employment would thus not be realizable.
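Why 50 is the dividing line follows from how such purchasing-managers' surveys are typically scored as diffusion indexes. The sketch below uses the standard construction; the sample percentages are invented for illustration:

```python
def diffusion_index(pct_improved, pct_unchanged):
    """PMI-style diffusion index: the percentage of respondents reporting
    improvement plus half the percentage reporting no change (the remainder
    report deterioration)."""
    return pct_improved + 0.5 * pct_unchanged

# When improvement and deterioration balance exactly, the index sits at 50.
print(diffusion_index(30.0, 40.0))  # 30 up, 40 same, 30 down -> 50.0

# More firms reporting deterioration than improvement pulls the index
# below 50, in the ballpark of the 49.2 August reading cited above.
print(diffusion_index(24.0, 50.4))  # 24 up, 50.4 same, 25.6 down
```

This is why a reading of 49.2 signals mild contraction rather than a crash: it means slightly more survey respondents reported conditions worsening than improving.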

Wang Tao, an economist at UBS, explained the “very reactionary, cautious approach” as being motivated by the desire to avoid repeating the “excesses of last time.”[1] The stimulus policy in the wake of the 2008 global downturn had sparked inflation and caused a housing bubble in China. According to The New York Times, China was avoiding “measures that could reignite another investment binge of the sort that sent prices for property and other assets soaring in 2009 and 2010.”[2] A repeat of any such binge could not be good, for it can spark the sort of irrational excitement that takes on a life of its own.
In short, too much stimulus in an economy can cause inflation and put people’s homes at risk of foreclosure once the housing bubble bursts, whereas a lack of stimulus means that a moderate growth rate is likely, rather than one that could give rise to full employment. Is there no way out of this trade-off?
Keeping fiscal or monetary stimulus within projections of moderate growth can be combined with government spending targeted at giving private employers a financial incentive to hire more people and at increasing the number of people hired by state enterprises. In the spirit of the U.S. Employment Act of 1946, a government can see to it that anyone who wants a job has one, while still maintaining only a moderate stimulus. A modest growth rate can co-exist with full employment.

1. Bettina Wassener, “As Growth Flags, China Shies From Stimulus,” The New York Times, September 3, 2012.
2. Ibid.

Saturday, February 9, 2019

Behind Cameron's Referendum on Britain's Secession from the E.U.

Governors of other E.U. states reacted quickly to David Cameron’s announcement that, if his party were re-elected to lead the House of Commons, he would give his state’s residents a chance to vote yes or no on seceding from the European Union. The result would be decisive, rather than readily replaced by a later referendum. Cameron said the referendum would also be contingent on his not being able to renegotiate his state’s place in the Union. This renegotiation in particular prompted some particularly acute reactions from the governments of other “big states.” Behind these reactions was a sense that the British government was being too selfish. This was not fair, I submit, because the ground of the dispute was the nature of the E.U. itself as a federal system.
David Cameron, PM of Britain
With the basic or underlying difference still intact, it should be no surprise that the renegotiation did not go well. German Foreign Minister Guido Westerwelle said at the time that Britain should not be allowed to “cherry pick” from among the E.U. competencies only those that the state likes. What, then, should we make of the opt-outs at the time—provisions from which states other than Britain benefitted? Surely one size does not fit all in such a diverse federal union (and that goes for the U.S. as well). Westerwelle was saying that Cameron had abused a practice that was meant as an exception rather than the rule. Britain was exploiting this means of flexibility in the Union because people in that state tended to view the E.U. as a confederation or, worse, a trade "bloc," even though the E.U. and its states each had some governmental sovereignty.
The president of the European Parliament, Martin Schulz, said the approach of the British government would work to the detriment of the Union. Specifically, he warned of “piecemeal legislation, disintegration and potentially the breakup of the union” if Britain were allowed to be bound only to the E.U. competencies that the party in power in the House of Commons liked. A player joining a baseball team would undermine the game by demanding only to bat because that is the only part that is fun. In higher education, the education itself would be incomplete if students could limit their classes to what interests them. Such a player or student would essentially hold a different view of the sport or education, respectively: a view so at odds with the fundamentals of the thing that the thing itself would be severely undercut. This is what had been going on as Britain navigated within the E.U.
Carl Bildt, the Swedish foreign minister, also touched on the detriment to the whole from what he erroneously took to be the selfishness of a part. He said that Cameron’s notion of a flexible arrangement for his own state would lead to there being “no Europe at all. Just a mess.” French foreign minister Laurent Fabius said that “Europe a la carte” would introduce dangerous risks for Britain itself. So if the British government was being selfish, it could have been to the state's own detriment, though of course I contend that selfishness does not go far enough as an explanation.
In short, the visceral reactions in other states to Cameron’s announcement manifested a recognition of the selfishness of one part at the expense of the whole. Those reactions were rash and, even more importantly, lacking in recognition of the underlying fault-line in the Union erupting between Britain and the Union somewhere out in the Channel. Cameron and plenty of other Brits viewed the E.U. simply as a series of multilateral treaties in which sovereign states could pursue their respective interests. “What he wants, above all,” according to Deutsche Welle, “is a single market.” Therefore, he “wants to take powers back from Brussels” to return the E.U. to a network of sovereign states. It followed from this view that each state, being fundamentally sovereign, “should be able to negotiate its level of integration in the EU.” Such would indeed be the case were the E.U. merely a bundle of multilateral international treaties, or a network to which Britain was a party, rather than a federal union of semi-sovereign states and a semi-sovereign federal level. Herein lies the real conflict of ideas within the E.U. Cameron’s strategy was selfish only on the assumption that the E.U. is something more than a network to which Britain happens to belong.
Ultimately the problem was the uneasy co-existence of two contending conceptions of what the union was in its very essence. The real question was whether the E.U. could long exist with both conceptions being represented by different states. The negative reaction from officials of states who held the “modern federal” conception (i.e., dual sovereignty) of the E.U. suggests that Cameron’s conception was ultimately incompatible with the union’s continued viability, given what the union actually was at the time.

“EU Leaders Hit Out Over Cameron Referendum Pledge,” Deutsche Welle, 23 January 2013.
“Cameron Wants Another EU,” Deutsche Welle, 24 January 2013.


Greek Austerity: Pressure on the Environment

“While patrolling on a recent cold night, environmentalist Grigoris Gourdomichalis caught a young man illegally chopping down a tree on public land in the mountains above Athens. When confronted, the man broke down in tears, saying he was unemployed and needed the wood to warm the home he shares with his wife and four small children, because he could no longer afford heating oil. ‘It was a tough choice, but I decided just to let him go’ with the wood, said Mr. Gourdomichalis, head of the locally financed Environmental Association of Municipalities of Athens, which works to protect forests around Egaleo, a western suburb of the capital.”[1] Tens of thousands of trees had disappeared from parks and forests in Greece during the first half of the winter of 2013 alone as unemployed Greeks had to contend with the loss of the home heating-oil subsidy as part of the austerity program demanded by the state’s creditors. As impoverished residents too broke to pay for electricity or fuel turned to fireplaces and wood stoves for heat, smog was just one of the manifestations—the potential loss of forests being another. On Christmas Day, for example, pollution over Maroussi was more than twice the E.U.’s standard. Furthermore, many schools, especially in the northern part of Greece, faced hard choices for lack of money to heat classrooms.
Greek forests were succumbing in 2012 to the Greeks' need to heat their homes as austerity hit. Source: Getty Images
Essentially, austerity was bringing many people back to pre-modern living, perhaps including a resurgence in vegetable gardens during the preceding summer. At least in respect to the wood, the problem was that the population was too big—and too concentrated in Athens—for the primitive ways to return, given the environment's capacity. 
To be sure, even in the Middle Ages, England had lost forests as the population (and royal plans) grew. In December 1952, a deadly smog blanketed London, fed largely by coal burned in domestic fireplaces and furnaces during a cold spell. Thousands died as a result, and Parliament eventually restricted domestic coal fires in the city. No one probably thought to ask whether the city had gotten too big—and too dense. No policy was enacted that would result in a shift in population out of the region.
Generally speaking, human population levels made possible by modern technology and medical advances have become too large for a return to pre-modern ways of life. Because modern cities, including Athens, are so extraordinarily large, suddenly removing modern technology, which includes government subsidies, is especially problematic, for many people are then forced to fend for themselves to meet basic needs. The efficiency of modern technology, including in regard to utilities and food distribution, is often taken for granted, even by governments, so the impacts on the environment when masses of people “return to nature” can be surprising. Nature has become “used to” seven billion humans on the planet in large part because we have economized via technology, so the full brunt of the population’s size is not felt. Particularly in industrial countries, societies are reliant on modern technology because without it the bulging population is unsustainable.
Put another way, we have distanced ourselves from nature, and our growth in numbers in the meantime has made it impossible for us to “get back to nature” in a jolt, especially en masse. It is in this sense that governmental austerity programs that cut back on sustenance are dangerous not only for society but also for the ecosystems in which humans live. Accordingly, by mid-January 2013, the Greek government was considering proposals to restore heating-oil subsidies. It is incredible that the financial interests of institutional creditors, including other governments, were even allowed to put the subsidies at risk.
In ethical terms, the basic sustenance of a people takes priority over a creditor’s “need” for interest. The sin of usury goes back to the origins of lending as an instance of charity rather than of money-making, whether off the plight of the poor or from commercial profit.[2] When a person in antiquity was in trouble financially, someone with a bit of cash would lend some with the expectation that only that sum would be returned. The demand for interest on top was viewed by the historical Church as adding insult to injury (i.e., the bastardization of charity into a money-making ruse). Then exceptions were made for commercial lending, wherein a creditor could legitimately demand a share of the profit made from the borrowed money in addition to the return of the principal. As commercial lending came increasingly to characterize lending, the demand for interest became the norm, even on consumption loans from which no profit would ensue to pay off the loan with interest. The notion that interest is conditional on a borrower having enough funds was lost, causing much pain to many in the name of fidelity of contract, as if it or the creditor’s financial interest were absolute. Put another way, the default presumption has swung from favoring borrowers to favoring lenders to such an extent that society may look the other way as people literally have to cut down trees to heat their homes because creditors have demanded and won austerity cuts touching on sustenance programs.
Therefore, especially in Christian Europe, pressuring E.U. state governments to make payments to creditors even in the context of a financial crisis, at the expense of people’s basic sustenance, can be considered untenable, ethically speaking. I am not suggesting that states should be profligate with borrowed funds. Rather, just as Adam Smith’s Wealth of Nations is bracketed by his Theory of Moral Sentiments, so too an economy (and financial system) functions best within moral constraints.

1. Nektaria Stamouli and Stelios Bouras, “Greeks Raid Forests in Search of Wood to Heat Homes,” The Wall Street Journal, January 11, 2013.
2. Skip Worden, God's Gold, available at Amazon. 

Friday, February 8, 2019

Second-Term Inaugural Addresses of American Presidents: Of Transformational or Static Leadership?

According to a piece in the National Journal, “George Washington might have had the right idea. Second inaugural addresses should be short and to the point. Of course, speaking only 135 words as Washington did in 1793 might be a little severe.”[1] Consider how short, and (yet?) so momentous, Lincoln’s Gettysburg Address was. The challenge for second-term presidents, whether Barack Obama or the sixteen two-term presidents before him, is “how to make a second inaugural address sound fresh, meaningful and forward-looking.” Almost all of Obama’s predecessors failed at this. “Only Abraham Lincoln and Franklin D. Roosevelt made history with their addresses. One stirred a nation riven by civil war; the other inspired a country roiled by a deep depression. All but forgotten are the 14 other addresses, their words having been unable to survive the test of time. Even those presidents famed for their past oratory fell short.”[2] This is a particularly interesting observation: surviving the test of time is the decisive criterion. Even a president whose silver tongue mesmerizes the people of his or her own time may not deliver ideas that survive beyond being a cultural artifact of that time. What of an address that is quite meaningful in its immediate time yet does not pass the test of time so as to be recognized as a classic?

The full essay is at "Inaugural Addresses: Of Leaders?"

1. George E. Condon, Jr., “The Second-Term Inaugural Jinx,” National Journal, January 20, 2013.
2. Ibid.

Increasing Income Inequality in the U.S.: Deregulation to Blame?

Most Americans have no idea how unequal wealth, as well as income, is in the United States. This is the thesis of Les Leopold, who wrote How to Make a Million Dollars an Hour. In an essay, he points out that economic inequality, after declining in mid-century, increased sharply in the closing decades of the twentieth century. His explanation hinges on financial deregulation. I submit that reducing the answer to deregulation does not work, for it does not go far enough.
In 1928, the top one percent of Americans earned more than 23% of all income. By the 1970s the share had fallen to less than 9 percent. Leopold attributes this enabling of a middle class to the financial regulation erected as part of the New Deal in the context of the Great Depression. In 1970 the top 100 CEOs made $40 for every dollar earned by the average worker. By 2006, the CEOs were receiving $1,723 for every worker dollar. In between came a period of deregulation, beginning with Carter’s deregulation of the airline industry in the late 1970s and Reagan’s more widespread deregulation. Even Clinton got into the act, agreeing to shelve the Glass-Steagall Act, which since 1933 had kept commercial banking from the excesses of investment banking. The upshot of Leopold’s argument is that financial regulation strengthens the middle class and reduces inequality by tempering the wealth and income of those “on the top.” Deregulation has the reverse effect.
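To put the cited CEO-pay figures in perspective, a quick back-of-the-envelope calculation (a sketch of my own, using only the two ratios cited above) shows how fast the gap compounded:

```python
# Back-of-the-envelope arithmetic on the CEO-to-worker pay ratios cited above.
ratio_1970 = 40      # top-100 CEO dollars per average-worker dollar, 1970
ratio_2006 = 1723    # same measure, 2006
years = 2006 - 1970  # 36 years

growth_factor = ratio_2006 / ratio_1970           # ~43-fold increase in the ratio
annual_rate = growth_factor ** (1 / years) - 1    # implied compound annual growth

print(f"{growth_factor:.1f}x over {years} years, about {annual_rate:.1%} per year")
```

In other words, the pay gap did not merely widen; the ratio itself compounded at roughly eleven percent a year for over a third of a century.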
The increasing role of the financial sector in the second half of the 1900s meant that finance itself could claim an increasing share of compensation.
Leopold misses the increasing proportion of the financial sector in GDP from the end of World War II to 2002. The ending of the Glass-Steagall Act in 1999 does not translate into more output on Wall Street relative to other sectors. Indeed, the trajectory of the increasing role of finance in the U.S. economy is independent of even the deregulatory period. Leopold’s explanation can be turned aside, moreover, by merely recognizing that the “young Turks” on Wall Street have generally been able to run circles around the rules of their regulators. Even though financial deregulation can open the floodgates to excessive risk-taking, such as in selling and trading sub-prime-mortgage-based derivatives and the related insurance swaps, I suspect that the rising compensation on Wall Street has had more to do with the increasing role of the financial sector in the American economy.
The larger question, which Leopold misses in his essay, is whether the “output” of Wall Street is as “real” as that of the manufacturing and retail sectors, for example. Is there any added value in brokering financial transactions, which in turn are means to investments in such things as plants and equipment used to “make real things”? Surely there is value in the function of intermediaries, but as that function takes on an increasing share of GDP, it is fair to ask whether the real value of the economy’s overall “production” is thereby diminished.
Given the steady increase of the financial sector as a percent of GDP, one would expect a steadier divergence of these two lines. Reagan's deregulation fits the divergence pictured, though one would expect a further increase in divergence after the repeal of the Glass-Steagall Act in 1999. Source: Les Leopold

As for the rising income and wealth of Wall Streeters, increasing risk, which is admittedly encouraged by deregulation, is likely only part of the story. If the financial products are premium goods, as distinct from the goods sold at Walmart, for instance, then as the instruments became increasingly complex, one would expect the compensation to increase as well.
Leopold is on firmest ground in his observation that Americans are largely oblivious to the extent of economic inequality in the United States. Few Americans have a sense of how much more economic inequality there is in the U.S. than in the E.U., where the ratio of CEO to average worker compensation is much lower. One question worth asking centers on what in American society, such as in what is valued in it, allows or even perpetuates such inequality, both in absolute and relative terms. The relative terms suggest that part of the explanation lies in cultural values having relative salience in American society. Possible candidates include property rights and the related notion of economic liberty, the value placed on wealth itself as a good thing, and the illusion of upward mobility that allows for sympathy for the rich from those “below.”
In short, beyond actual regulations, particular values esteemed in American society and the increasing role of the financial sector in the American GDP may provide us with a fuller explanation of why economic inequality increased so markedly during the last quarter of the twentieth century and showed no signs of stopping during the first decade of the next century. Americans by and large were wholly unaware of the role of their values in facilitating the growing inequality, and even of the sheer extent of the inequality itself. In a culture where political equality has been so mythologized, the acceptance of so much economic inequality is perplexing. At the very least, the co-existence of the two seems like a highly unstable mixture from the standpoint of the viability of the American republics “for which we stand.” Yet absent a re-calibration of societal values, the mixture may be an enduring paradox of American society even if the democratic element succumbs.

Les Leopold, “Inequality Is Much Worse Than You Think,” The Huffington Post, February 7, 2013.