Friday, June 6, 2014

GDP and Poverty: Is Economic Growth the Answer?

From 1959 to 1973, the American economy grew 82 percent per capita. It is easy to assume this is why the poverty rate fell from 22 percent to 11 percent.[1] From roughly 1985 to 1990, and again from 1995 to 2000, heady growth rates also correlated with declining poverty rates. But correlation is not causation. Indeed, had the 1959-1973 relationship continued to hold, the subsequent per-capita growth would have ended poverty by 1986. What, then, are we to make of the relationship between GDP and poverty?
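The arithmetic behind that counterfactual is worth making explicit. Here is a minimal sketch in Python; only the 1959-1973 figures come from the article, and the post-1973 growth input is a hypothetical placeholder. On the naive linear trend, poverty sheds about 0.13 percentage points for every point of per-capita growth, so another 82 percent of cumulative growth would erase the remaining 11 points.

```python
# Naive extrapolation of the 1959-1973 growth/poverty relationship.
# Only the 1959-1973 numbers come from the article; the growth input
# passed to the function below is a hypothetical placeholder.

POVERTY_1959, POVERTY_1973 = 22.0, 11.0  # poverty rates, percent
GROWTH_1959_1973 = 82.0                  # per-capita GDP growth, percent

# Percentage points of poverty shed per point of per-capita growth.
POINTS_PER_GROWTH = (POVERTY_1959 - POVERTY_1973) / GROWTH_1959_1973  # ~0.134

def extrapolated_poverty(cumulative_growth_since_1973: float) -> float:
    """Poverty rate implied by naively extending the 1959-1973 trend."""
    return max(0.0, POVERTY_1973 - POINTS_PER_GROWTH * cumulative_growth_since_1973)

# Another 82% of cumulative growth would, on this trend, end poverty outright.
print(extrapolated_poverty(82.0))  # -> 0.0
```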

According to Heidi Shierholz, an economist at the Economic Policy Institute, the “very tight relationship between overall growth and fewer and fewer Americans living in poverty” broke apart in the 1970s.[2] In spite of the OPEC oil cartel’s inflationary shocks in 1973 and 1979, the poverty rate remained relatively constant through the decade of “stagflation,” spiking only after Reagan took office, perhaps on account of David Stockman’s domestic budget cuts, which hit the poor especially hard, and Paul Volcker’s high interest rates at the Fed, which fought inflation by raising the cost of borrowing. To be sure, the recessions of the early 1980s and early 1990s are associated with increases in the poverty rate, whose decline lagged the subsequent recoveries, and the rate fell as the economy hummed along in the late 1980s and 1990s. Even so, eleven percent seems to be the rate’s floor. Perhaps this floor is why the relationship broke apart in the 1970s.

According to Thomas Piketty, the period from World War I to the 1970s is economically unusual because the shocks of that era reduced returns on capital relative to the growth rates of income and GDP. The economist suggests that inequalities in income and even wealth narrow under this rather artificial arrangement; typically, returns on capital have outpaced growth in income, thereby widening the inequalities. Perhaps the inverse relationship we are examining holds only when the rate of return on capital is low relative to the GDP growth rate. However, as I indicate above, heady growth periods after the 1970s can be found in which the poverty rate was decreasing, and recessionary periods in which the rate was increasing.

So we are back to the 11 percent floor. When the relationship broke apart in the early 1970s, the poverty rate was indeed at 11 percent. Instead of continuing downward, the rate hovered, went up a bit, then drifted slightly downward to just above 11 percent before heading starkly upward in 1981.[3] What might be behind the floor? It seems doubtful to me that 11 percent of American adults are simply unable to work for non-economic reasons. It seems more likely that the market mechanism, which can hold wages at the minimum wage absent any upward pressure (e.g., from labor shortages), settles at an equilibrium short not only of full employment but also of rising wages (relative to returns on profits and stock appreciation). In other words, we cannot look to the market to “grow” us out of poverty. While undoubtedly a help, the market is only part of the solution.

The question is thus how to fill in the rest of the solution. What, outside of the economy, can make up the difference? As the source and enforcer of societal rules, government is not intrinsically the answer, for being the “man in charge” does not in itself entail supplying materials (including jobs). Yet the government could direct that materials and/or jobs be provided outside of private industry, such as by non-profit organizations funded from tax revenues and thus directly under government oversight. Even now, the Employment Act of 1946 makes it a task of the federal government to see that everyone who wants a job is able to have one. Lest employment be viewed as an end in itself rather than a means, we could stipulate that everyone has at least a minimum amount of money and wealth. Hence the unemployable veteran, for example, would not have to suffer in poverty. Lest breaking through the 11 percent floor make it more difficult for Walmart or McDonald’s to pay cashiers the minimum wage of just over $7 an hour, we might remember that a part of the solution is not in itself more important than the solution itself.

That is to say, marginal hits taken by one part of the solution as the rest is added should not outweigh the solution itself, for the end is more important than any one of the means. Too often, I fear, American public discourse obsesses over the marginal downsides to a particular means, all too willing to preempt a full solution just so that means is not slighted in any way. I suppose this is a sort of tunnel vision, with greed playing a supporting role. Death and taxes may be inevitable, but surely poverty is not. Indeed, poverty may be viewed as an artificial byproduct of human society, and thus as fully within our power and responsibility to eliminate rather than take as a given.




[1] Neil Irwin, “Growth Has Been Good for Decades. So Why Hasn’t Poverty Declined?” The New York Times, June 4, 2014.
[2] Ibid.
[3] Ibid.

Tuesday, June 3, 2014

What We Know about Coal and Natural Gas: The EPA’s Coal Emissions Targets

Coal is the bad guy. At least it is the antagonist in the U.S. Environmental Protection Agency’s 645-page carbon-emissions plan unveiled in early June 2014. Even though the plan’s 30 percent reduction in CO2 emissions from 2005 levels would not be required until 2030, critics showed their myopic focus on the present by pointing to the likely current costs: electric bills increasing $4 or so a month in West Virginia; lost jobs, as if the criteria of capital were also those of labor. In short, short-term inconveniences without a hint of the other side of the ledger. I submit that this is precisely the element in human nature that can be likened to the proverbial “seed of its own destruction” in terms of the future of our species. Yet as menacing as such “reductionism to today” is, the assumptions underlying the EPA’s proposal, that coal is the definitive obstacle and that we are not missing some other huge but invisible danger, are just as problematic from the standpoint of the species’s survival.

I have in mind the EPA’s projections by fuel type, which run from 2012 out to 2030. Coal generated 37 percent of American electricity in 2012; the comparable projected figure for 2030 is 32 percent.[1] While the direction relative to the other fuel types is at least downward, climatologists would doubtless say more of a drop is necessary to stave off a global temperature increase of more than 2°C. As damning as this “okay but not good enough” scenario is for coal, the more damning feature of the report is that it may be very wrong about something it takes to be an improvement.

For example, natural gas is projected to go from 30 percent of generation in 2012 to 35 percent in 2030.[2] That’s good, right? It is, after all, the “clean” fossil fuel. Not so fast. Independent empirical studies of leaks in Utah, Los Angeles, and Washington, D.C. have found much higher levels of methane escaping into the atmosphere than the 1 percent touted by producers and adopted (without independent confirmation) by the EPA; the observed leakage rates ran to double digits. The problem is that the break-even point with coal, in terms of global-warming impact, is about 3 percent leakage. Methane, the chief component of natural gas, escapes unburned from wells and pipelines, and unburned methane turns out to have roughly ten times the warming impact of coal.
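To see why a few percentage points of leakage matter so much, consider a back-of-the-envelope calculation. The Python sketch below uses illustrative values that I am assuming, rough per-kWh emission intensities for coal and gas plants and a 20-year warming potential for methane; none of these numbers comes from the EPA plan or the leak studies. Under these assumptions, gas loses its climate advantage over coal once leakage climbs past roughly 3 to 4 percent, in line with the break-even point cited above.

```python
# Back-of-the-envelope comparison of coal vs. leaky natural gas per kWh
# of electricity. All constants are illustrative assumptions, not figures
# from the EPA report or the cited leak studies.

COAL_CO2_PER_KWH = 1.00        # kg CO2/kWh, assumed typical coal plant
GAS_CO2_PER_KWH = 0.45         # kg CO2/kWh, assumed typical gas plant
CO2_PER_KG_CH4_BURNED = 2.75   # kg CO2 per kg CH4 combusted (44/16)
GWP_METHANE_20YR = 86          # assumed 20-year warming potential of CH4

def gas_co2e_per_kwh(leak_fraction: float) -> float:
    """CO2-equivalent emissions per kWh from gas, counting leaked methane."""
    ch4_burned = GAS_CO2_PER_KWH / CO2_PER_KG_CH4_BURNED  # kg CH4 burned/kWh
    # Leakage is a fraction of total gas produced: leaked = burned * f/(1-f).
    ch4_leaked = ch4_burned * leak_fraction / (1.0 - leak_fraction)
    return GAS_CO2_PER_KWH + ch4_leaked * GWP_METHANE_20YR

for leak in (0.01, 0.03, 0.04, 0.10):
    co2e = gas_co2e_per_kwh(leak)
    verdict = "beats coal" if co2e < COAL_CO2_PER_KWH else "worse than coal"
    print(f"leakage {leak:4.0%}: {co2e:.2f} kg CO2e/kWh -> {verdict}")
```

On these assumed numbers, 1 percent leakage keeps gas well ahead of coal, while double-digit leakage makes it roughly twice as bad; the moral survives considerable uncertainty in the inputs.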

My point is that even though by now we are used to the contingents who put today’s convenience above the risk to the future of the species, we don’t know what we don’t know, and that ignorance can be even more dangerous. In other words, what we assume to be a good thing may in actual fact be doing a lot of damage under our very eyes. We may not have a clue as to how what we are doing today is affecting the planet’s atmosphere. This may be one reason why scientists have repeatedly had to move up their projections of when the polar ice sheets will melt.

Human nature may be much more problematic from the standpoint of the species’s own survival than we know. Not only have 1.8 million years of natural selection ingrained in us a focus on today (e.g., fight or flight) at the expense of tomorrow; we may also be very wrong about things we assume we got right, and be totally unaware of it. It is as though our species were a person with long hair who never bothers to use a mirror to make sure the hair in back is brushed. The laugh is on that person, and yet she (or he) has no idea. The industrialists who are instinctively wedded to the status quo out of a desire for financial gain may be just the tip of the iceberg; we had better look underneath before it has totally melted.



[1] Wendy Koch, “EPA Carbon-Cutting Plan Could See Power Shift,” USA Today, June 3, 2014.
[2] Ibid.

President Obama as Chief Executive: Missing the Fraud at the Veterans Administration

Buffeted by a whirlwind of criticism in the wake of revelations of widespread fraud in VA hospital and outpatient-clinic wait times, President Obama somewhat sheepishly admitted during a news conference on the matter that he had heard nothing of the practice on his travels around the country. With at least one instance of falsified scheduling at 65 percent of the facilities between September 30, 2013 and March 31, 2014, and with 13 percent of schedulers having been instructed in how to falsify wait times,[1] it is odd that word had not reached the president’s ear. Then again, maybe it is not so odd after all, for the president’s domestic trips tended to revolve around campaign fundraisers and speeches promoting proposed legislation. In other words, the president, and Barack Obama is hardly alone here, put his legislative role above his office as chief executive.

It is worth noting that the legislative role of the American federal president is negative, in that the power is exercised by vetoing legislation. To be sure, the president is constitutionally encouraged to make recommendations through the State of the Union report to Congress. Even so, the time and attention that presidents have directed to pushing favored legislative bills goes beyond making recommendations, and thus the opportunity cost (i.e., foregone attention to other matters, such as managing the executive branch of the U.S. Government) is not justified. Put another way, having two branches focused at the top on legislating is not only redundant, or overkill; the joint focus leaves the executive branch without a chief except on parchment.

This is not to say that proactive rather than veto-based presidential involvement in the legislative process cannot bear fruit. Franklin D. Roosevelt, president for much of the Great Depression in the 1930s, expended tremendous effort in seeing to it that his New Deal programs were legislated into actuality. In an “exit interview” at the conclusion of decades in the U.S. House of Representatives, Rep. John Dingell (D-Michigan) calls FDR “The Giant, one of probably the three greatest” presidents in American history.[2] In dull contrast, Dwight Eisenhower was a “fine chairman of the board, . . . but didn’t do much.”[3] This stinging critique implies that the managerial imprint translates into lethargy, or at least a lack of accomplishment.

Relatedly, Dingell criticizes Jimmy Carter for not being able to see the forest even as he could see every tree in the woods.[4] While a president as presider should keep his or her eyes on the big picture, protecting society and the systems of business and government as wholes from systemic risk, the president as chief executive should focus on the trees, at least relative to the orientation in presiding. To be sure, the focus of any particular agency is considerably narrower, and no CEO rightfully gets hung up at that level; but neither does a CEO focus on society at the expense of the business itself. Carter took micromanaging to the extreme, personally approving even the White House Christmas cards. As dysfunctional as this is for a chief executive, equally problematic is a president who acts as if he or she were a Congressional leader, or who privileges presiding over managing. Yet legislating and presiding have come to swallow up the very notion of the American presidency; Rep. Dingell’s comments illustrate this default.

It hardly bears mentioning that, for a politically oriented person, traveling around the member states making speeches oriented to a vision of society is unquestionably more exciting than exercising executive responsibilities. As a result, it has been all too easy for the campaign-oriented people who have occupied the Oval Office to effectively leave the mammoth executive branch without a CEO or managerial chairperson, a choice that tacitly enables the sort of widespread fraud that was found in the Veterans Administration in 2014. It is fanciful to suppose that word of even such a widespread managerial practice would somehow show up on a rope line as a celebrity president passes by. Yet in his news conference on the fraud at the VA, President Obama saw no such disjunction. Instead, he sought to appear managerially on top of the intricacies of the VA scheduling process.

I suspect that the encroachment of campaigning on governing has a correlate inside the White House, wherein legislating has come to crowd out the executive functions. Perhaps the Electoral College was established in part so that a good executive, rather than a good campaigner, would have a chance at the office; the increasing salience of the popular vote has been like a storm wave washing over everything else, effectively determining the sort of person who gets the prize. Relatedly, the underlying problem doubtless includes the character flaw that too easily ignores some of a job’s responsibilities in selfishly favoring others. In other words, we can indeed blame Barack Obama and many of his predecessors for slighting their managerial responsibilities across the executive branch in order to have more influence legislatively. Ironically, fewer speeches on pending legislation would carry much more currency and free up the president to manage the executive branch. As for the quite legitimate presiding role that is literal to the presidency itself, catastrophic threats to the systems of business, government, and society do not arise every day; the role does not “eat” a lot of time on a daily basis if understood correctly rather than applied to every symptom that pops up on an oversensitive radar screen. Leaving legislating largely to Congress, a presiding president would likely find enough time to manage the executive branch effectively, assuming an optimal mix of direct supervision and delegation. Generally speaking, balance and proper boundaries would do a lot of good; unfortunately, human nature may be more schizogenic than homeostatic, more maximizing (e.g., desire) than oriented to equilibrium.



[1] Meghan Hoyer and Gregg Zoroya, “Fraud Masks VA Wait Times,” USA Today, June 3, 2014.
[2] “John Dingell Rates the Presidents,” USA Today, May 2, 2014.
[3] Ibid.
[4] Ibid.

Dismembering Time Inc: A Critique of the Conglomerate Strategy

Typically, management is assumed to be a skill or practice that enables a person so trained to work in virtually any company, regardless of the sector. A manager is presumed fully able to go from managing a bank to managing a restaurant. Organizing is the common thread; passion for the particular output is not. Yet surely product-specific knowledge, and indeed fascination, must count for something. This point struck me as I read one journalist’s ideas about what should become of Time Inc. once it is separated from the mother ship of Time Warner. If the parts of Time Inc. are indeed worth more distributed among different companies than kept together as a whole, it is worth asking whether it makes sense to assign each part to a company oriented to the same theme or domain. If so, then the particular theme of a publication, or of a business more broadly, should have some bearing on who manages it.

Playing with how a freed-up Time Inc. might be internally dismembered, Michael Wolff suggests that Time’s InStyle magazine might go to Hearst or Condé Nast; Entertainment Weekly to The Hollywood Reporter; Sports Illustrated to ESPN; and Fortune to The Wall Street Journal.[1] Although management dogma has it that the practice can enable even a non-publishing company to take in any of Time’s magazines effectively, synergies in content would doubtless add value if the purchasing company not only already publishes, but publishes in the same subject matter. Beyond that, I submit that collecting together managers who share a particular interest in a certain subject matter, such as sports or finance, and having those managers fully occupied only with it, has both financial and psychological value. The balance could in fact tip away from the conglomerate enterprise should the concept of a shared, intense passion take hold in a managerial setting.

Accordingly, the salience of the content being managed, the particular subject matter of a product or service, may render managerial skill less of a passport than is currently supposed or taught in business schools. Just because a person has an MBA does not mean that he or she can manage a magazine for a few years and then move on to manage a restaurant; the delimiting question would be: what interests you other than management? Perhaps then the world would see more managers of flesh and blood, fully alive even at work, rather than mere stand-ins composed of sanitized skeletal bone. To be viable, management may have to transcend itself into specific, non-transferable content.




[1] Michael Wolff, “The Once and Future Time Inc. Is in Flux,” USA Today, May 27, 2014.