Saturday, August 30, 2014

Budget Austerity in the E.U.: Turning the Russian Invasion of Ukraine into an Advantage

With economic growth in the E.U. flat-lining in mid-2014 after a modest recovery, pressure mounted to relax the federal "austerity" constraints on the state budgets. According to The New York Times at the time, "(p)olitical and financial instability related to Russia's confrontation with Ukraine and the effects of escalating economic sanctions between [the E.U.] and Russia have further clouded the economic outlook."[1] Mired in the austerity vs. fiscal stimulus dichotomy, E.U. leaders may have been missing an opportunity here.

With yet another round of sanctions in the works on the heels of a recent Russian invasion and unemployment at a stubborn 11.5 percent, and the threat of runaway deflation hitting wages in particular, the E.U.'s economy looked poised for an ongoing onslaught of stag-deflation. The E.U. "is menaced by long and possibly interminable stagnation if we don't act," Francois Hollande of the state of France warned.[2]  He had in mind some movement along the ongoing relaxation vs. austerity dichotomy in the direction of larger state deficits--something the governor over in Germany was still fiercely resisting. We "really must question whether we can go on receiving less than we spend, so that our debts keep on growing. Indeed," Angela Merkel pointed out, "a whole crisis of confidence has grown out of that."[3] Such a basic imbalance in state finance undercuts the equilibrium that is so vital to the survival of the macro system in the long run.

So, it would appear that the well-worn dichotomy had hit the proverbial brick wall. I contend that in such a case thinking beyond the either/or strictures is advisable. To illustrate my point, I present a thought-experiment of sorts (i.e., unrealistic, but it gets the point across).

Let's imagine that the president of Ukraine met with the European Council as Russian troops were crossing the border into Ukraine with the eventual aim of separating the eastern half of the independent state from Kiev.

"I come before you with an admittedly unorthodox suggestion," President Poroshenko might have told the Council. "Without a massive infusion of support from the E.U., my country will split apart and Russia will gain the eastern half."

"What kind of support do you have in mind," the Council's Van Rompuy might has asked.

"Well, the sort that would make your fast-track accession process look like a snail's pace," Ukraine's head of state might have replied with a curious grin that told of something very new coming. "Make Ukraine a state; my government will accept all of your conditions without reservation. Send in your Commission's bureaucrats right away to implement the conditions. To protect them, I recommend that you send along military troops from your state militias as well as the small federal army you have. We could even request that U.N. peace-keepers come along. Putin is already in hot water at the U.N. for continuing his KGB tactics as president."

"What if he keeps sending in Russian troops?" Merkel might have asked.

"This is why speed would be so vital, both in Ukraine's accession, which could be of a limited term if that is easier for you, and the influx of bureaucrats and others doing the E.U.'s business and protecting them. Ukraine would agree to the Schengen Agreement on open borders, and I would request that the E.U. attend immediately to the external border--meaning that which we share with Russia. Securing that border has precedent for the E.U., does it not?" In fact, the need to protect the E.U. bureaucrats pouring into the eastern parts of Ukraine with troops from the state armies would mean that NATO would be relevant. This point, if made explicit, could deter Putin from sending in still more military hardware and troops.

"Please excuse us as we discuss this proposal," Van Rompuy might have politely yet curtly told the Ukrainian head of state. After all, the E.U. leaders do their best work behind closed doors. The point is that their minds need not be closed either. Lemons can indeed be made into lemonade.

Even if this scenario is too outlandish to be taken seriously in a world so wedded to the status quo as its default, thinking in such terms "outside the box" could stimulate more realistic policy prescriptions going beyond the austerity vs. fiscal stimulus dichotomy. For example, the notion of troops and hardware from state militias in the E.U. going along to protect federal bureaucrats might prompt an E.U. leader to suggest that the state armies transfer even more to the small federal army of 60,000 troops. Doing so would enable the state budgets to accommodate both more fiscal stimulus and lower deficits, as less military spending would be needed. I am assuming the E.U. would pick up the tab for the operation of the added hardware and the salaries of the additional troops. From the perspective of the E.U., the shift would mean less duplication. How likely is it really that Belgium and Portugal, for example, would need to use their respective armies anyway? In the context of continued stag-deflation, such nationalist luxuries are difficult to justify, especially considering the opportunity cost in terms of stimulating the economy.

In short, the E.U. need not have faced a future of stagnation. Ideas hitherto undiscovered can indeed have great value in practical results. The key is to think beyond the confines of what are presumed to be the only possibilities. The human brain has a tendency to shrink the possible in a way that cuts off many potentially fruitful avenues without any recognition of doing so. The advisable posture is one of receptivity: welcoming such ideas into the public discourse rather than going with the knee-jerk reaction of "that's too radical!" or "that would never see the light of day." We might be surprised at what could see the dawn and beyond.


1. Liz Alderman and Alison Smale, "Divisions Grow as a Downturn Rocks Europe," The New York Times, August 29, 2014.
2. Ibid.
3. Ibid.

Tuesday, August 26, 2014

Should the ECB Buy State Bonds and Encourage State Deficits?

In remarks at Jackson Hole, Wyoming, European Central Bank president Mario Draghi urged greater fiscal and monetary coordination to boost the E.U.’s economy. A ship cannot move along at full speed if all the sails are not coordinated so that each is poised at its optimal angle to the prevailing winds. So too, various policies in a political economy must all sail in the same direction for a full-sail recovery to really take off. Just as a sailing ship must avoid jagged pitfalls lurking in rocky waters, so too must policy makers; for it is all too easy in focusing on one point on the horizon to ignore or dismiss baleful downsides to the dominant policies.


At the foot of the Teton Mountains, there being no foothills, Draghi “called for explicit policy co-ordination between the [Eurozone’s] monetary guardian and [the states].”[1] Showing considerably less concern about inflation, he linked the ECB’s future purchases of state bonds (i.e., quantitative easing, or QE) to structural reforms, tax cuts, and more spending at the state level.


At the time, E.U. law limited state budget deficits to 3% of gross domestic product. Pointing to the existing flexibility in that law, the central banker urged the state governors to “better address the weak recovery and to make room for the cost of needed structural reforms.”[2] I submit that the accent should be placed on the latter, as they would have more staying power. It is like the difference between consuming sugar (even in fruit) before running, and drinking a protein shake after lifting weights; both food elements are helpful, but only the protein becomes a part of the body and can thus strengthen it for the future.

Quantitative easing can unfortunately impact an economy in both foreseen and unforeseen ways due to the intended artificially-low interest rates. The market-mechanism cannot but be distorted, with harsh byproducts free to silently ravage certain segments while others benefit without merit. The human brain is not so omniscient as to be able to fully anticipate and plug all the leaks that can arise from a systemic distortion in a macro-economy. That is to say, the system is so complex that a huge distortion using one macro policy tool can introduce significant systemic risk.

In the U.S., for example, low interest rates in the early 2000s incentivized a housing bubble whose collapse in 2007 triggered the potentially catastrophic credit freeze and the collapse of Lehman Brothers in September 2008. The government bailout of GM, AIG, and Wall Street banks worsened the federal public debt, which by 2014 had reached over $17 trillion. Meanwhile, the Federal Reserve printed money far exceeding the meager growth of GDP, buying bonds to artificially lower interest rates and prompt a recovery.

With interest rates low, money flooded into stocks. “The market is in effect rigged because of the [low] interest rate,” Charles Biderman said on CNBC as the summer of 2014 was coming to an end.[3] A grinding ethical fault-line ran between stocks increasing at 25 percent a year and wages and salaries increasing at a mere 3 percent. Moreover, should rates be raised to counter the inflation from too many dollars chasing too few goods, the middle and lower economic classes would doubtless feel the brunt of any fear-driven collapse of the stock market.
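To see how quickly such a gap compounds, here is a minimal sketch using the two growth rates cited above; the five-year horizon and the common starting base of 100 are my own assumptions for illustration, not figures from the text.

```python
# Hypothetical illustration: compound the two growth rates cited above
# (stocks at 25% a year, wages at 3% a year) from a common base of 100.
# The 5-year horizon is an assumption, not a figure from the text.
def compound(base, rate, years):
    """Return `base` grown at `rate` per year for `years` years."""
    return base * (1 + rate) ** years

stocks = compound(100, 0.25, 5)
wages = compound(100, 0.03, 5)

print(f"stocks after 5 years: {stocks:.1f}")  # 305.2
print(f"wages after 5 years:  {wages:.1f}")   # 115.9
print(f"stocks end up {stocks / wages:.2f}x the level of wages")
```

Even over just five years, the hypothetical stock index nearly triples while wages gain about 16 percent, which is the fault-line the paragraph above describes.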

With borrowing money being so cheap at the low rates, government officials did not feel the normal market pressure to hem in the deficits. Corporate managements could borrow money cheaply to acquire or merge with other companies without being perhaps as discerning of the degree of compatibility and synergy as would be the case under naturally determined interest rates. In 2014 through August, more than $2 trillion in mergers and acquisitions were announced—an increase of 70 percent over the same period the year before.[4] While managers, stockholders, and lawyers make out like bandits on the deals, subordinate employees are left out of the largess and may even lose their jobs as the price of synergy.

As dark as the underside of quantitative easing is in pushing rates abnormally low, the most potentially harmful byproduct of Draghi’s plan concerns his intent to encourage the E.U.’s state governments to make greater use of the “existing flexibility” already in the federal law limiting state budget deficits to 3% of annual economic output. Even large states such as France and Germany had had trouble keeping within the law; states such as Greece, Spain, and even Italy carried so much debt that the systemic risk of default became a problem not just for the E.U., but for the global financial system as well. To encourage flexibility might be like giving money to an alcoholic standing outside a liquor store. The last thing state legislators need to hear is that additional flexibility to tax less and spend more can be found in the fine print.

As an alternative, Draghi could have emphasized that the ECB would focus on assisting states with structural reforms both by lending its expertise and finding the right monetary incentives that would not distort the financial market in potentially unforeseen ways. Sticking to old ways is not necessarily the best route to getting structural changes capable of ushering in new ways.



1. Claire Jones, Peter Spiegel, and Robin Harding, “Draghi Softens Tone on Austerity,” The Financial Times, August 22, 2014.
2. Ibid.
3. Charles Biderman, CNBC TV, August 28, 2014.
4. Trish Regan, “Has Fed Jumped the Shark?” USA Today, August 26, 2014.


Sunday, August 17, 2014

Mergers and Acquisitions: What about the Stockholders?

Why do companies merge and acquire other companies? Synergy is the textbook answer. Typically, the stockholders of the target company see an appreciation in the value of their stock, while stockholders in the initiating firm see a downtick. The reason why is simple: corporations typically overpay. The value-added of the anticipated synergy must be greater than not only any overpayment, but also the intangible costs in aligning the corporate cultures. Yet another factor—an opportunity cost, really—is frequently overlooked: that of whether the extra cash on hand should be returned to the stockholders as dividends.

During 2014 up to August 15th, merger activity around the world totaled $2.2 trillion, up from $1.29 trillion in the same period the previous year.[1] Comcast, for example, was buying Time Warner Cable for $45 billion, and Reynolds American was buying Lorillard for $27 billion.[2] The nonfinancial companies in the S&P 500 had a near-record $1.2 trillion in cash in an economic context of low interest rates and a bit of inflation.[3] Buying another company would thus be cheaper than either expanding from within or investing the cash in interest-bearing securities.
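As a quick sanity check, the year-over-year growth implied by the two merger totals can be computed directly; it comes out at roughly 70 percent, consistent with the figure cited in the preceding post.

```python
# Checking the growth implied by the merger totals cited above:
# $2.2 trillion in 2014 (through mid-August) vs. $1.29 trillion over
# the same period in 2013.
prior, current = 1.29, 2.2  # worldwide merger activity, $ trillions
pct_increase = (current - prior) / prior * 100
print(f"year-over-year increase: {pct_increase:.1f}%")  # about 70%
```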

In fact, not buying a company under the circumstances could easily be thought of as doing nothing with the cash. As Chris Lee of Fidelity Select Financial Services Portfolio puts it, “Now, the risk of doing nothing seems greater than the risk of doing something.”[4] Although mergers can be very good in the long haul for the stockholders of both parties to a merger or acquisition, a board in touch with its fiduciary duty to the company’s owners would properly consider the alternative of returning the surplus cash to them. Lest too much attention be paid to appreciation in the price of a stock—admittedly of value to those owners who intend to sell—the dividend is a means by which all of a company’s current owners benefit.

To be sure, the decision is not merely a financial one. The return of capital to the providers of equity is an ideological matter as well. To own property, even if most of it is in the form of a concentration of capital, brings with it the right to a share in the profits. This goes beyond the impact of the successive surpluses on the stock’s price. The perspective in which not using extra cash to acquire or otherwise merge with another company is reckoned as doing nothing eclipses the alternative use altogether. Returning excess cash to the stockholders is decidedly not “doing nothing.” The implication that it is intimates a bias in the interest of management that is at odds with its fiduciary duty. 

Put another way, the tendency of corporate managements, and even boards, to spend excess money rather than return it to the owners may not always be in the best interests of the principals or of the principle of property rights. The dynamic resembles the political debate over whether to reduce taxes, so that citizens can hold onto more of their own money, or to spend the additional revenue on governmental budget items.



[1] John Waggoner, “When 2 Companies Love Each Other Very Much . . . “, USA Today, August 15, 2014.
[2] Ibid.
[3] Ibid.
[4] Ibid.

Saturday, August 16, 2014

On the Tyranny of the Status Quo

Ever wondered why so much energy must be expended to dislodge a long-established institution, law, or cultural norm? Why does the default have so much staying power? Are we as human beings ill-equipped to bring about, not to mention see, even the “no-brainer” changes that are so much (yet apparently not so obviously) in line with our individual and collective self-interest? In this essay, I look at Ukraine, Spain, and Illinois to make some headway on this rather intractable difficulty.

Ukrainian President Yanukovych refused for months to budge then suddenly disappeared as if a teenager fleeing from a now-likely punishment. (Image Source: AP)

In Kiev’s central square, Ukrainian protesters braved bitter cold for months in late 2013 and early 2014 without any movement whatsoever in dislodging a divisive president who might have gone on to surrender Ukraine’s sovereignty for money in line with Vladimir Putin’s imperial dream of a restored Russian empire under the rubric of a “Eurasian Union.” It took twenty and then seventy deaths before the steadfast protesters would see the president replaced by an interim parliament-centered coalition government. After months of stalemate, the president actually fell from power quite suddenly once his partisan support in parliament had sunk below a threshold.[1] Until that point was reached, any trickles of power shifting behind the scenes did not register in the least as even a slight movement toward a resolution in the massive tug-of-war. Such ongoing intransigence, or gravity, seemingly inheres in a default; it is itself an obstacle that can easily dissuade anyone who comes to view the way things are as not only contingent, but also, well, rather stupid. Such an individual might wonder why societal self-corrections in the public interest are so elusive even though they are rather obvious.

Even realizing that a given domain is subject to the rigid longevity of invisible sub-optimality can be difficult to achieve. For example, only after seven decades did the E.U. state of Spain seriously reassess Franco’s decree of May 2, 1942, moving Spain from the GMT time zone, which Spain had adopted at the International Meridian Conference in 1884, to GMT+1. Moving the clocks forward an hour would put the dictator on the same time as Hitler’s Germany (and France) and Mussolini’s Italy. Seven decades later, in October 2012, the VII National Congress for Rationalising Spanish Time Zones proposed returning to GMT. With more daylight in the morning and less in the evening, state residents might not stay up so late on work-nights. Once the state had been bailed out by the E.U. federal government after the financial crisis of 2008, Spain could ill afford the continued loss of 8 percent of the state’s GNP due to productivity losses from the nocturnal proclivity that coincided with another cultural icon, the siesta. For our purposes here, why did it take decades even to propose the easily rationalized correction, given that it meant returning to a rule that had been in effect for decades before the rise of Nazi Germany?

Let’s travel across the Atlantic Ocean to Illinois, whose major metropolis is the bewindowed city of Chicago. The latest sunset there is at 8:30pm (20:30 hrs), which occurs during the last week of June. The sun rises during that week at around 5:15am, though relatively few Chicagoans are awake that early to witness daybreak. Bentham’s rule of utilitarianism would have us believe that the greatest good for the greatest number somehow matters in life. Might it be rather obvious that taking an hour of daylight during the summer from early morning and depositing it at the other end of the day, prolonging evening from turning into night, would be more optimal? Perhaps it is merely common sense that many more people could enjoy the hour of light in question were they awake to see it. This point would not be missed by many tourists from Spain. Why is it so difficult for the people losing out to become aware of what they could have in a better life?

Even moving another hour of daylight, such that sunrises would occur roughly between 7:00 and 7:30am during June and July (daylight before 7) and sunsets would come after 10:00pm (22:00 hrs), would not unduly penalize “morning people.” Yet this would mean a three-hour shift from standard time. Achieving even a 9:30pm sunset would entail a two-hour change (not necessarily on the same days). Lest such a proposal seem too drastic, the PSOE, a political party in Spain, established the addition of another hour in summer beginning in the 1980s.
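The clock arithmetic in the last two paragraphs can be sketched as follows. The standard-time sunrise and sunset (4:15 and 19:30) are back-computed from the daylight-saving times quoted above, and the specific late-June date is an arbitrary choice for the example.

```python
from datetime import datetime, timedelta

# Standard-time sun events for Chicago in late June, derived from the
# daylight-saving times quoted above (5:15am sunrise, 8:30pm sunset,
# i.e. one hour ahead of standard time). The date is arbitrary.
sunrise_std = datetime(2014, 6, 25, 4, 15)
sunset_std = datetime(2014, 6, 25, 19, 30)

# The 1-, 2-, and 3-hour shifts from standard time discussed above.
for shift in (1, 2, 3):
    rise = sunrise_std + timedelta(hours=shift)
    sset = sunset_std + timedelta(hours=shift)
    print(f"+{shift}h: sunrise {rise:%H:%M}, sunset {sset:%H:%M}")
```

The two-hour shift yields the 9:30pm sunset mentioned above, and the three-hour shift gives a sunrise just after 7:00am with a sunset after 10:00pm.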

Perhaps the fear of the unknown is assuaged by the news of the same unknown being part of the default somewhere else. Furthermore, perhaps what does not work in a state of one Union may work just fine in a state in another Union; even within an empire-scale Union diversity of clime and custom justify allowances for interstate differences (e.g., via federalism).

Perhaps, moreover, members of the homo sapiens species are “hard-wired” to prefer “missing out” in the face of even a relatively simple change that would add appreciably to the good for the greatest number. In this case, the good to be had is in terms not only of summer enjoyment in a clime whose long winters can keep the door open to extreme cold from the Arctic, but also of improved health (from more exercise en plein air) and greater safety (i.e., fewer muggings and rapes). Perhaps the over-riding lesson here is simply that making life a little better—a bit more enjoyable—need not be so difficult.

Dorothy, in The Wizard of Oz, could have used the magic slippers to return to Kansas at any time. Unfortunately, she did not even ponder the possibility, and thus had to come to it the hard way.
(Image Source: Hollywoodreporter.com)

I am reminded of Dorothy in the 1939 film, The Wizard of Oz. The good witch of the North tells her at the end that she could have used her ruby slippers to return to Kansas at any time, but that she had to come to realize this herself. Perhaps the question for us is why we have such trouble coming to realize that we, too, need not wait so long to effect change that lies within our power. The pickle in all of this is that enough people in a given society must come to this self-empowering realization themselves for any movement to take place. For once a threshold is met, even a societal change can be effected surprisingly fast and far more easily than expected or feared.

1. Jim Heintz and Angela Charlton, "Ukraine Parliament Boss Takes Presidential Powers," GlobalPost, February 23, 2014.

Tuesday, August 12, 2014

Global Geopolitical Risks: Is Wall Street Hypersensitive and Reductionistic?

The Dow dropped 140 points on August 5, 2014, on a rumor that the Russian military was about to invade eastern Ukraine. Three days later, amid hints of de-escalation and the end of troop “exercises” on the Ukrainian border, the Dow gained 186 points. Three days after that, as Russia’s president approved a deal whereby the Russian OAO Rosneft and the American ExxonMobil could begin drilling a $700 million well in the Arctic Ocean, the Dow gained 16 points.[1] Are stock analysts and Wall Street investors really so hypersensitive to day-to-day changes in geopolitical risk? It may be simply that such news sells.

Just because two events are correlated, in that they occur at the same time, does not necessarily mean that one caused the other. According to the eighteenth-century philosopher David Hume, we think we have a better grasp on causation than we actually do. In the case of a positive correlation between two events, a third one could be behind both, rather than one of the correlated events causing the other. In general terms, causation may be much more complex than the human brain is naturally inclined to accept.
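Hume's point about a hidden third cause is easy to demonstrate with synthetic data: in the sketch below, x and y come out strongly correlated even though neither influences the other, since both are driven by a common variable z. All the numbers here are invented for illustration.

```python
import random

# Synthetic illustration of a confounder: z drives both x and y,
# so x and y correlate strongly despite no direct causal link.
random.seed(42)
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 0.3) for zi in z]  # caused by z, plus noise
y = [zi + random.gauss(0, 0.3) for zi in z]  # caused by z, plus noise

def corr(a, b):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    var_a = sum((ai - ma) ** 2 for ai in a) / n
    var_b = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (var_a * var_b) ** 0.5

print(f"corr(x, y) = {corr(x, y):.2f}")  # high, with no direct link
```

An analyst who saw only x and y, say a stock index and a stream of geopolitical headlines, could easily mistake the correlation for causation.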

In the case of the changes in the Dow cited above, many analysts held at the time that a “plunge in geopolitical risks related to the Ukraine-Russia crisis leads to a rally in U.S. stock prices.” Yet how “micro” can we take such a change to be? Do rumors and hints coming out of Moscow really have such power to reverberate throughout Wall Street? If so, stock analysts and investors may have a proclivity to minimize or simply not see the tremendous inertia that the geopolitical status quo enjoys. To be sure, significant events do happen. A century before, on August 4, 1914, Britain entered World War I, the “war to end all wars” that would nonetheless help prompt Adolf Hitler into political action.

Additionally, analysts may be inclined to develop an “either/or” perceptual framework that ignores the gray areas in sizing up the relative importance of multiple geopolitical hot spots around the world. Some analysts dismissed the negative impact of the escalating fighting in Iraq and Israel even as the American press was in full-blown obsession over them. The markets in New York and Illinois may indeed have been “shrugging off” the detrimental impacts of the American bombing in Iraq and Israel’s in Gaza because, as USA Today reported at the time, the conflicts in the Middle East were seen as having little direct bearing on the American economy. Yet even though Russia can have a clear economic impact on Europe, which in turn could hamper the American economy, neither the E.U. nor the U.S. is indifferent to possible impediments to oil coming out of the Middle East. Russ Koesterich, the chief investment strategist at BlackRock, said as much in maintaining that the American markets would be higher were it not for the increase in geopolitical risk in the Middle East.

Interestingly, however, Koesterich also stressed that “other issues are also weighing on stocks, including elevated valuations in the U.S. market, renewed economic weakness in Europe, concerns about an earlier-than-expected interest rate hike by the Federal Reserve and recent weakness in the high-yield bond market.” It is easier to reduce the cause to one, and point the finger squarely at the particular military conflagration having the most direct line, or seemingly so, to the economies in the West, than to go back to the hackneyed, even banal, myriad of domestic usual suspects. The human brain yearns for simplicity even when the actual causation process is more complicated, and the media dutifully comply.



[1] All quotes are from Adam Shell, “If Russia Sneezes, Wall St. Gets a Cold,” USA Today, August 12, 2014.

Should Entitlement Programs Be Cut?

If human beings have survival among our inalienable rights as citizens, for whom both rights and duties apply, then we as a society must grapple with how sustenance can be guaranteed to those among us who are not meeting their own needs. I put it this way to highlight the lack of conditionality in the right. That is to say, if the right is inalienable, then it is irrelevant whether the person is lazy or of a bad temperament.

Sustenance or "extra" cash? Only one is a human right. (Image source: timthethief.com)
To be sure, in 2 Thessalonians 3:10, the Apostle Paul declares, “For even when we were with you, this we commanded you, that if any would not work, neither should he eat.” I suspect that many moderns, particularly in America, agree with Paul here. Unpacking the quote, I submit that a number of subterranean assumptions lurk just below the surface. We naturally assume that Paul was not referring to a woman who cannot work because she was so brutally gang-raped that she is mentally and/or physically unable to work. We assume the Apostle did not have a blind and mute old man in mind. Rather, we assume that the person is a deadbeat, seeking to have others work while he enjoys himself, or that the person has a pernicious personality and therefore cannot secure even a menial job because no one could stand being around him. The punishment we dish out against the deadbeat and the rude man is a death sentence, in effect.
That the rest of us might be engaged in self-idolatry in setting ourselves up as Judge goes without punishment in our convenient scheme. Our presumed entitlement to omniscience (all-knowing) goes under the radar screen. We presume we are fit to know why some among us do not meet their sustenance needs on their own in an interdependent modern economy, and we feel at liberty to pronounce judgment based on the motives of others that we presume to know. Perhaps the poorest among us should pronounce judgment on the rest of us! The unassuming poor in spirit could quote the following from Matthew (19:30), “But many who are first will be last, and many who are last will be first.” This line refers to the Kingdom of God, which in crucial ways turns the world’s logic upside-down or ties it up like a pretzel so the worldly wise are confounded into fools even as they count themselves as clever.
The typical rationale made by those citizens among us who do not regard sustenance as an inalienable human right in a civilized society hinges on the assumption that charity can fill the gap. Even though human beings need at least one meal a day (many of us are used to three!), the assumption here is that such a demanding requirement can be met by chance as people give to charity. The fallacy here lies in presuming certainty in something that is not certain.
Indeed, an examination of the giving pattern of the wealthy in America suggests that sustenance is not even a priority. According to the 2011 report of the American Association of Fundraising Counsel, 32 percent of the $298 billion given away by Americans went to religious organizations, 13 percent to cultural organizations, and only 12 percent to social services. It cannot be assumed that the bulk of the money going to religious and cultural organizations went directly to sustenance for the poor.
Furthermore, the Chronicle of Philanthropy indicates that the top five donations in New York, totaling $190 million, went to Columbia University’s business school, the Metropolitan Museum of Art, and the Brooklyn Bridge Park Corporation (for the building of an indoor cycling track). Not one of the top 49 gifts went to support social services explicitly, yet somehow charity can be relied on to bridge the gap in health care, housing, and food for those who would otherwise die without such sustenance. Rarely if ever do we as a society ask whether an indoor cycling track should be conditioned on the sustenance of every citizen (or legal resident) having been satisfied. A new terminal or runway is typically viewed, at least implicitly, as having an equal (or higher) claim on our resources, societally.
Were sustenance treated as an inalienable human right to which an explicit duty of citizenship would correspond, society would be more than an aggregate of individuals out for themselves. Lest we assume that no one would work if a basic security based on the value of solidarity were known to be met, personal ambition would doubtless show itself as going beyond a concern for mere sustenance. In other words, enterprise would not falter. We have no idea how it would feel to live life with a basic, or existential, security in knowing that if the worst were to happen to us, we would not starve or be homeless or go without needed medicine for want of voluntary charity by others. We have no idea how such a basic psychological security would permeate and thus impact society, not to mention something called "quality of life."

Instead of exercising statesmanship for the good of each other, we as a body politic make it harder on even ourselves than need be because we presume that such a basic security would be at our expense and undeserved by the beneficiaries. In other words, our selfishness is ultimately at our own expense, and society itself is held back in the process; we get the societies we deserve. The sad thing is that the perpetuation of the existential insecurity is so unnecessary and ultimately in nobody’s interest. Even so, we keep stumbling over ourselves in avarice so as to keep what we have, and relatedly in fear that someone might possibly get something for nothing at our expense. In our presumed omniscience, we conveniently assume that we cannot be wrong, and this seals the deal on our small fate.

Source:
Ginia Bellafante, “Bulk of Charitable Giving Not Earmarked for the Poor,” The New York Times, September 8, 2012. http://www.nytimes.com/2012/09/09/nyregion/bulk-of-charitable-giving-not-earmarked-for-poor.html?hp

Monday, August 11, 2014

On the Democratization of Credibility: Global Warming Experts

Even as late as 2013, as if the mounting evidence of global warming and our carbon footprint were some new kind of faith narrative whose white-coated high priests preside over a new political religion distinctly American, some members of Congress, political commentators, and lay apostates recoiled with the declaration, "I don't believe in global warming." Epistemologically, to believe is less rigorous than to know. Actually, what the deniers mean is that they know that our industries and carbon-emitting vehicles are blameless, even pure. It is the sheer declarativeness and the underlying epistemological assumption that I want to make transparent, as being inherently problematic and yet likely "hard-wired" into the very fabric of the human brain.

In mid-February 2014, Charles Krauthammer, a psychiatrist turned political commentator, sarcastically asked on a Fox "News" show how the "Global Warming Religion" was doing since the polar vortex had drifted southward in January, well into several U.S. states unaccustomed to such frigid air. He also pointed to the severe drought in California as discrediting the scientists' finding that warmer temperatures would bring more, not less, rain. Were global warming a religion, belief would be the appropriate currency, and the commentator would be entitled to protest the established civic religion by preaching his own belief. Unfortunately for Krauthammer, who strikes me as a natural in analyzing politics, the abundance of scientific evidence on the warming planet and the human contribution to it made him look ignorant rather than a prophet chastising zealots.

Perhaps most incredibly, Krauthammer was ignorant of the actual scientific position on rainfall. Rather than increasing everywhere, precipitation is expected to shift closer toward the poles. So, while Alaska, Canada, and Europe would gain, regions already lean on rainfall, such as California, Arizona, Texas, and North Africa, would tend to face even more severe droughts. Not only was the commentator unaware of the "more extremes" nuance within the overall warming; he also missed the ubiquitous refrain of climatologists that (1) it is extremely difficult to assess the impact of climate change on a particular storm or season's weather in a given region, in part because (2) the predicted changes from global warming are tendencies rather than the case for every storm or season.

Let's take the extreme cold dipping unusually far southward in North America in the winter of 2013-2014 as an example. Can we conclude that the onslaught of frigid Arctic air will occur every winter thereafter? No. We can say only that such an occurrence becomes more likely. Because the poles are warming disproportionately relative to the regions closer to the equator, the temperature differential between the latter and the poles shrinks (i.e., a less steep downward slope). Less energy "riding" the slope downward means the resulting "river of air," or jet stream, is weaker and hence loopier (not unlike some tired people). Similarly, the circular "river of air" that fences in the Arctic (and Antarctic) air weakens too, the temperature differential there also being smaller. As a result, the polar vortex "relaxes," circulating further southward and with greater lopsidedness.

Combine the Northern Hemisphere's jet stream with the Arctic's spinning polar vortex fencing in the Arctic air, and you have a vortex elongating more and being "enabled" by the elongated loops of the jet stream. The video and pictures (from NASA) below tell the tale. The jet stream looks like a white noodle winding around the Northern Hemisphere. The elongated loop southward in North America, captured here on December 16, 2013, opens a sort of void into which the Arctic air can slip because the vortex's "river of air" is more pliable, or stretchable.

Can we say, therefore, that the cold winter of 2013-2014 resulted from climate change? (Paradoxically, the way in which the planet warms causes both warm and cold extremes in various regions.) Perhaps, if the extreme cold is understood as a more likely tendency, though here too factors idiosyncratic to that winter had a significant impact. Namely, the low-pressure system over Hudson Bay in Canada meant that the counter-clockwise movement of air around the low would facilitate the Arctic air's trip southward.





Therefore, to claim that a cold winter in North America (and Siberia too, as shown in "The Big Chill" picture above) invalidates global warming shows just how substantial ignorance of the science can be. Similarly, to claim that a drought in California also invalidates "the religion of global warming," rather than adding support to the science, reflects the same gross over-simplification. Charles Krauthammer's innate insight into politics does not translate into scientific knowledge of climatology. The same can be said of many other self-declared "experts" on global warming on both sides of the political debate.

Being active in political public discourse is not sufficient to count oneself as knowledgeable on the underlying science. Similarly, having a college degree in medicine does not necessarily mean that a person knows something about climate change, not to mention climatology. In fact, few Americans know that the M.D. is the first degree in medicine, and thus a prerequisite to the doctorate in medicine, the D.Sci.M. (Doctorate in the Science of Medicine). Being a terminal degree, a doctorate (including those of professional schools such as law, medicine, and business) cannot be a prerequisite to a higher degree in the same discipline or body of knowledge. Perhaps the sheer esteem in American society for the well-compensated practices of law and medicine, relative to the supposedly irrelevant academics in the "ivory towers" (e.g., "those who can't do, teach"), has enabled the undergraduate degrees in medicine (M.D.) and law (LL.B., a.k.a. J.D.) to be counted as if they were doctorates, which are terminal rather than prerequisite to another degree, require comprehensive exams administered by professors (not industry boards), and demand a substantial contribution of original research (typically a book-length dissertation defended before professors in the specialty).

Therefore, Americans are particularly susceptible to the fallacy that a psychiatrist, physician, or lawyer can be taken as a credible mouthpiece on another discipline, such as climatology. Put another way, Charles Krauthammer was not even entitled to use the title of doctor, even as he presumed he had earned a degree equivalent to the D.Sci.M. (an illogical presumption, as only graduates in medicine, i.e., holders of the M.D., can be admitted to the doctoral program in medicine), assuming he had even heard of the doctorate in his own field. Like arrogance on stilts during a flood, the pundit could leap from that presumption to the one I have detailed in this essay. Presumption can be addictive, perhaps even becoming a personality disorder, and such a disorder can even be part of the collective unconscious of a society. Now, to round out this circle (of hot air?), might it be that presuming we know more than we do has a genetic basis in our species' DNA?

Might human nature contain the seed of its own destruction, manifesting as extinction due to human-induced climate change? That carbon emissions hit a record high in 2012 (rather than being on a downturn by then) may point to such a basic dysfunction in a species that has ironically done so well in terms of natural selection (i.e., multiplying DNA via population growth). We fail to realize that too much success for a species can spell disaster as a result.

The U.S. Producing More Oil: A Panacea or Obstacle?

The International Energy Agency projected in 2012 that a shale-oil boom would catapult the United States over the state of Saudi Arabia as the world's largest oil producer by 2020. In the words of the Wall Street Journal, the global energy map was "being redrawn by the resurgence in oil and gas production in the United States."[1] Although the United States would benefit from this trajectory in the near term, the drawbacks should not be ignored. In fact, the trend could be harmful in the long term if preparedness for a world without oil is put off as a consequence.

On the plus side, producing more natural gas in place of coal to generate electricity reduces carbon-dioxide emissions from what they would otherwise have been. The IEA has projected that natural gas will displace oil as the largest single fuel in the U.S. by 2030. Indeed, carbon-dioxide emissions in the U.S. were down in 2012 from 2011: in the first eight months of 2012, natural gas accounted for 31% of electricity generation, up from 24% in 2011. The question of whether more black gold is a good thing, even if produced at home, is considerably murkier.

"Drill, baby, drill," an expression used by Republicans in the 2008 election campaign, captures a policy with some immediate benefits but also some risks. On the one hand, were a greater proportion of the oil consumed in the U.S. extracted domestically, American demand would be less dependent on states in the Middle East, and the U.S. would thus have a freer hand in foreign policy. Already in 2012, the U.S. received less than 20% of its oil imports from the Persian Gulf region, whereas China received half of its oil imports from there. This trajectory could give the State Department more bargaining power with major oil producers in the Middle East, and even free up the $60 to $80 billion a year the U.S. was spending to protect the Middle East sea lanes.

At the same time, "drill, baby, drill" risks reducing the energy problem facing the U.S. to one of insufficient oil drilled within the country's borders. In fact, focusing on drilling more may be counter-productive if too much oil is being consumed as it is. The relatively narrow aim could frustrate efforts to rely more on "clean energy" sources. Even on its own terms, the trajectory of increased domestic production is not all that one might suppose at first glance.

Demand continues to exceed domestic production throughout the period.

Even with the additional domestic production, domestic demand, even after its 8.4% drop from 2011 to 18.9 million barrels a day in 2012, was still projected at the time to exceed domestic production in 2020. This suggests that oil imports, expected to fall to four million barrels a day by 2020 from 10 million a day in 2012, would still be material and thus relevant to foreign policy. Economically, extraction in the U.S. is relatively expensive, and prices are set globally, so more domestic production does not necessarily translate into lower prices. Indeed, the continuing dependence of the U.S. on at least some foreign production would still put at least some upward pressure on oil prices. Even this is not so simple, as American oil companies extract oil around the world. In fact, after being expelled from Iraq in 1973, the companies were invited to bid for leasing contracts in the wake of the U.S. invasion, a turn of events that certainly invites speculation on the Bush administration's motive for invading Iraq.
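Granting the projections quoted above (18.9 million barrels a day of demand in 2012, imports falling from 10 million to 4 million a day by 2020, and production rising to 11.1 million a day by 2020), the arithmetic behind the claim can be sketched as follows. This is an illustrative back-of-the-envelope check only; the variable names and the derived figures are my own, not the IEA's.

```python
# All quantities are millions of barrels per day, taken from the essay's
# 2012-era projections.
demand_2012 = 18.9        # U.S. demand after the 8.4% drop from 2011
imports_2012 = 10.0       # U.S. imports in 2012
production_2020 = 11.1    # projected domestic production in 2020
imports_2020 = 4.0        # projected imports in 2020

# Domestic production in 2012 is simply demand minus imports.
production_2012 = demand_2012 - imports_2012          # 8.9

# If both 2020 projections hold, implied 2020 demand is production plus
# imports -- still well above 2020 domestic production alone.
implied_demand_2020 = production_2020 + imports_2020  # 15.1

# Imports as a share of demand: roughly halved, yet still material.
share_2012 = imports_2012 / demand_2012               # ~0.53
share_2020 = imports_2020 / implied_demand_2020       # ~0.26

print(round(production_2012, 1), round(implied_demand_2020, 1))
print(round(share_2012, 2), round(share_2020, 2))
```

In other words, even under the rosiest reading of the projections, imports would still cover roughly a quarter of demand in 2020, which is the "still material" point made above.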

In the long run, less pressure to develop alternative sources of energy, in spite of the gap, could leave the United States vulnerable to a "day of reckoning" when oil finally does begin to run out. After taking office in early 2009, President Obama used the perception of energy scarcity and increasing concern about global warming to urge members of Congress to pass legislation capping greenhouse-gas emissions and to spend billions of dollars on green-energy companies. The Republican majority in the U.S. House and rising fiscal deficits compromised the president's ventures. Opening up more off-shore waters to oil drilling was a much easier, bipartisan route. The expected surge in U.S. oil production to 11.1 million barrels a day in 2020 allowed the president to mend some fences with the Republicans in Congress and their corporate sponsors. It is a truism of sorts that more jobs, even those created by oil companies, are always in the interest of both parties. That such companies have massive treasuries to draw on to lobby and to fund political advertising via "social welfare" non-profits means that the truism is doubtless being reinforced rather than subjected to critique. Indeed, the reduction of the energy problem to a need to drill, baby, drill was no accident.

In short, the benefits of increased domestic production may be muted, especially given that oil drilled within the borders of the United States is not so different from the oil extracted by American companies overseas. Moreover, the complacency bred by a sense of greater self-reliance was already easing pressure in 2012 to enact policy to reduce global warming. The record melting of ice at the North Pole that summer should have been a wake-up call, but it could easily be drowned out by the cheer, "drill, baby, drill!" Such attention on domestic drilling could be just the tangent that people in denial on planetary warming had been seeking, thwarting such policy as per the short- and medium-term financial interests of the extant oil companies. It may indeed be difficult for democratically elected representatives to orient policy to a long-term benefit even when there is a tail-wind; the prospect of tremendous profits at America's oil giants represents a formidable head-wind, which of course means that more fuel, or self-discipline, will be necessary domestically.

1. Benoit Faucon and Keith Johnson, “U.S. Redraws World Oil Map,” The Wall Street Journal, November 13, 2012.

Sunday, August 10, 2014

AIG’s Benmosche on Bonuses amid the Bailout

Robert Benmosche, former CEO of American International Group (AIG), one of the biggest corporate recipients of government bail-out (TARP) funds, likened the resistance by the American public and some government officials to the partial bonuses paid to hundreds of employees in the ill-fated financial-products unit to a racial lynching.

The full essay is at AIG's Benmosche.

Monday, August 4, 2014

Wall Street Subsidies Silently Magnifying Systemic Risk

At a U.S. Senate hearing on a GAO report on the costs of expectations of government support for banks should they go under, “discussion went far beyond the report and delved into the current state of banking, the limits of the Dodd-Frank Act and what should be done about banks that are simply too big to manage,” according to The New York Times.[1] Six years after the massive credit freeze, a major question hinged on whether some financial institutions were still too large, complex, and interconnected to be liquidated in an orderly and containable manner should they head under water. 

The full essay is at "Wall Street Subsidies."


1. This and all quotes in this essay are from Gretchen Morgenson, “Big Banks Still a Risk,” The New York Times, August 3, 2014.

Friday, August 1, 2014

On the Duty of Public Service: The Case of Rep. Eric Cantor

Public service, such as holding public office and defending the homeland under attack, is rooted historically in a duty rather than being intended to further personal ambitions. Hence, public advancement is a reward for having gone beyond the call of duty in one's public service. To be sure, it is not unheard of that an elected official views his or her post as a launching pad for personal enrichment, whether in terms of wealth or power. When this aim becomes primary, the duty aspect of the public service can easily fall away like a tadpole's tail off a bumpy toad. U.S. House representative (and majority leader) Eric Cantor is a case in point, both in why he lost his seat and in his decision to resign it early rather than finish his term.

U.S. House Majority Leader Eric Cantor (Mark Wilson, Getty Images)

After his defeat in the Republican primary, Cantor acknowledged the criticism that he had not kept sufficiently in touch with his district. Although his position as majority leader in the U.S. House of Representatives involved much work in Washington, D.C., the post was also fodder for further advancement; indeed, having reached the office points to a strong ambition. The duty of a representative to represent his or her constituents can easily be slighted or crowded out by such personal ambition. Essentially, the office becomes a creature of the man rather than of the duty.

When Cantor contacted a newspaper hours after he stepped down as majority leader to make public his intention to resign his seat in time for a special election to coincide with the upcoming general election, he emphasized that his successor would be able to take office immediately, so his constituents would not go without representation.[1] Of course, they would also have been covered had Cantor decided to serve out his full term, but that would have meant enduring what several Republicans, speaking to Politico, called "the humbling shift from 11 years in the leadership to being a back bencher, even if only for four months."[2] That Cantor never began his move into the small Capitol office he had been assigned when he announced he would be stepping down as majority leader suggests that he may have decided on election night to resign his seat rather than serve out his term. In making that decision (whenever he made it), Cantor was once again putting his own interests above public service; being in Congress had been about him rather than about serving. As soon as "serving" became uncomfortable, even embarrassing, he easily tossed off the duty and bolted during the summer break.

The American Founding Fathers envisioned citizens taking leave from their occupations to serve a term of office to represent their fellow citizens. The operative assumption, hopefully gained from empirical observation, was that the citizens urged to run would only grudgingly part from their businesses or farms to serve in Congress. Out of this assumption, a felt duty can be readily inferred. For an elected representative to disregard his representative role would connote an indifference to the very duty that he had accepted in agreeing to run. Likewise, to resign before the end of his term would have played as a shirking of duty, and thus as a sign of a weak character.



[1] The Associated Press, “Eric Cantor Plans to Resign House Seat Earlier Than Expected: Report,” The Huffington Post, August 1, 2014.
[2] Anna Palmer, Jake Sherman, and John Bresnahan, "Why Eric Cantor Really Resigned," Politico, August 1, 2014.