Thursday, February 28, 2019

Regulating Smoking in China: A Socialist Conflict of Interest

Government ownership and control of the means of production is the standard definition of socialism, even if some linguistic revisionists want to redefine the term as merely the control of a business or industry. In short, a government must own economic enterprises, rather than merely regulate private businesses, to meet the definition of socialism. Socialism, I contend, involves a structural conflict of interest that a government that both owns and controls an enterprise, an industry, or even an entire economy may be tempted to exploit for its own ends rather than the public good. The key here is the regulating of that which is owned. Specifically, where a government as owner enjoys the benefit of profit or surplus, that government has a financial interest that can weigh against restricting the product being produced. Such a monopolistic restriction could admittedly be warranted by public health or safety, but the gain could also be private in the sense that it is limited to the government and even the personal financial interests of government officials. In other words, the public good can be distinct from a government's own financial (and related political) interest even as that government is charged with acting in the public interest in part by owning and regulating state enterprises. It is the pivot between the public and the private interest that sets up the conflict of interest, because the human urge is to go with the narrower, private interest at the expense of the public good. In other words, the very possibility, even likelihood given human nature, that a government would sacrifice the wider distribution of benefits for the narrower one (i.e., to the government itself) is the basis of a conflict of interest. I argue elsewhere that this mere possibility renders even an as-yet unexploited conflict of interest inherently unethical. Here, I examine the matter of public health in China as a case of a (partly) socialist government that has had such a conflict of interest.
Three hundred million Chinese were smokers in 2010. This number is roughly equivalent to the entire U.S. population in 2000. In 2010, the addiction killed an estimated 3,000 people a day in China, amounting to roughly 1.2 million tobacco-related deaths for the year. One out of three cigarettes smoked worldwide was smoked in China. It was estimated that smoking would kill about a third of Chinese men under 30. On May 1, 2011, the Chinese government banned smoking in indoor public places. However, the law contained no penalties. According to Time magazine, the law was not likely to have any effect.[1]
The reason for the lenient regulation may have had something to do with the powerful China National Tobacco Corporation, a state-owned and state-controlled enterprise. In 2010, taxes and profits from the monopoly amounted to roughly 7% of the government's revenue.[2] That gave government officials an incentive to protect the enterprise's revenue and a disincentive to issue regulations that could be expected to reduce the consumption of cigarettes in China, even if such a reduction were in the public interest. This combination of incentive and disincentive is an earmark of a conflict of interest, whose basis lies in the instinctual human urges behind it.
This may be why more incentive typically exists to protect and increase incoming revenue than to minimize costs, even if the costs exceed the revenue. Even if the government's expense in covering health-care costs for the 3,000 Chinese a day who died of smoking in 2010 exceeded 7% of the government's total revenue, even a partial loss of that revenue would likely have been resisted by government officials. Attention to revenue can dwarf attention to costs, especially where no market competition exists, because extracting more revenue is relatively easy whereas cost-containment remains difficult.
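To make the asymmetry concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 7% revenue share comes from the post above; the total-revenue and health-cost figures are assumed placeholders for illustration, not reported data.

```python
# Back-of-the-envelope sketch of the incentive asymmetry described above.
# Only the 7% revenue share is taken from the post; the other figures are
# assumed placeholders, not reported data.

total_gov_revenue = 1_000.0        # assumed total government revenue (arbitrary units)
tobacco_share = 0.07               # tobacco taxes and profits as a share of revenue (from the post)
tobacco_revenue = tobacco_share * total_gov_revenue

smoking_health_costs = 90.0        # assumed state health-care costs attributable to smoking

net_fiscal_effect = tobacco_revenue - smoking_health_costs
print(f"Tobacco revenue:              {tobacco_revenue:8.1f}")
print(f"Smoking-related health costs: {smoking_health_costs:8.1f}")
print(f"Net fiscal effect:            {net_fiscal_effect:8.1f}")

# Even when the net effect is negative (diffuse costs exceed the concentrated
# 7% revenue share), the revenue line is visible and easily defended, which is
# the asymmetry of official attention the paragraph describes.
```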
Ethically, the government officials otherwise tasked with regulating so as to protect the public health in China, and thus prevent deaths from smoking, suffered from both a personal conflict of interest (if kickbacks were involved) and an institutional one, wherein the government's financial interest and its public-health goals were in conflict. That is to say, the officials not only had their own ethical dilemmas to resolve; there was also a larger institutional problem akin to a house designed to be at odds with itself. The part of the government oriented to protecting and even increasing the revenue may have had disproportionate influence over the public-health department, because the narrower the benefits, the greater the incentive. Seven percent of the state's revenue doubtless got more attention from the state itself than did its broader public-health measures, including those that made it more difficult for people to smoke in public. 





1. “A Smoking Ban without Teeth,” Time, May 20, 2011.
2. Ibid.


Monday, February 25, 2019

Public Access to the Public Domain Increasingly Privatized for Profit

To Aaron Swartz, the subject of the documentary The Internet's Own Boy (2014), the major concern of his day regarding the internet was not the ability of a person to create a blog or use social media; rather, the problem was the concentrating power of the gate-keepers, who tell you where on the internet you want to go. In other words, the issue concerned what commands our attention; more specifically, who gets access to the ways people find things on the internet. "Now everyone has a license to speak; it's a question of who gets heard," he said. Although he was a computer whiz, he also had political aspirations; both were on display as he lobbied against the Stop Online Piracy Act (SOPA), which was introduced in Congress in October of 2011. Unfortunately, the combination of his computer and political skills got the attention of the FBI, which engaged in a relentless pursuit of him until, under the pressure, he committed suicide at the age of 26. His short life was one of idealism that should not have been squashed by an unstoppable criminal-justice system, especially one influenced by political pressure from corporations and politicians. Lest the overzealousness of law enforcement obscure a vision of Aaron's idealism, that idealism can be viewed as restoring public access to the public domain on the internet.


The full essay is at "The Internet's Own Boy."

Wednesday, February 20, 2019

Corporate Political-Campaign Contributions as Decisive in Anti-Trust Enforcement

On August 31, 2011, “the [U.S.] Justice Department sued to block AT&T’s $39 billion takeover of T-Mobile USA, a merger that would create the nation’s largest mobile carrier. 'We believe the combination of AT&T and T-Mobile would result in tens of millions of consumers all across the United States facing higher prices, fewer choices and lower-quality products for their mobile wireless services,' said James M. Cole, the deputy attorney general.”[1] The New York Times claimed at the time that it was “arguably the most forceful antitrust move” by the Obama administration.[2] To be sure, there were “few blockbuster mergers with the potential to reshape entire industries and affect large swaths of consumers.”[3] However, one could cite the UAL merger with Continental and Comcast’s acquisition of NBC as accomplished mergers. It is more likely that the housing-induced recession made the administration reluctant to risk having a major company that was looking for a buyer go bankrupt instead. I would not be surprised if the vested interests behind major mergers and acquisitions “played the bankruptcy card” as leverage with the Justice Department. Moreover, the political power of mega-corporations in the U.S. can be expected to have come into play.
To be sure, the U.S. Department of Justice was capable of flexing its political muscle. Nasdaq withdrew its $11 billion bid for NYSE Euronext, the parent company of the Big Board, after government lawyers warned of legal action. However, conditioning Comcast’s purchase of NBC on the latter giving up control of Hulu, an online movie and television conduit, evinced a strange indifference to a much larger distribution company (Comcast) having a vested financial interest in some of the content (i.e., NBC programming).
The Justice Department looked the other way on a rather obvious conflict of interest potentially operating at the expense of the consumer (assuming people want to watch more than NBC programming on cable). Of course, cable is not the only distribution channel for television programming. Customers dissatisfied with Comcast’s vaunting of NBC programming (and even possibly restricting other content) could go to DirecTV, for example. However, combining distribution and content seems a bit like the repeal of the Glass-Steagall law, which had prohibited the combination of commercial and investment banking (and brokerage) from 1933 to 1999. The law reflected the belief that institutional conflicts of interest can and should be avoided, even if not every commercial-investment firewall could be expected to succumb to the immediacy of the profit motive.
In short, the Obama administration could have gone further in its antitrust actions. While admittedly not as pro-business as the preceding administration, the Obama administration, in its tacit acceptance of mega-corporations, may have mounted an insufficient defense of competition. It should not be forgotten that employees and PACs associated with Goldman Sachs contributed roughly $1 million to Obama’s election campaign in 2008. For the president to actively promote competitive markets would require him to bite the hands that had been feeding him. In other words, a conflict of interest is involved in allowing corporations to make political contributions while government officials are tasked with replacing oligopolies with competitive marketplaces.

1. Ben Protess and Michael J. De La Merced, “The Antitrust Battle Ahead,” New York Times, August 31, 2011. 
2. Ibid.
3. Ibid.

President Trump’s Spending on a Border Wall: Federalism at Risk?

U.S. President Trump announced in February of 2019 that he would fully fund a wall on the U.S.’s southern border. He would first use the $1.375 billion granted by Congress, to be followed by $600 million from a Treasury Department asset-forfeiture fund for law enforcement, $2.5 billion from a military anti-drug account, and $3.6 billion in military construction funds.[1] The president’s rationale hinged on his declaration of a national emergency due to illegal immigration, drug trafficking, and crime/gangs, all of which, he claimed, had been coming across the border on a regular basis. In federal court, sixteen of the U.S.’s member-states challenged the president’s declaration and use of funds. The U.S. president’s legal authority to declare national emergencies was pitted against the authority of the U.S. House of Representatives to initiate federal spending legislation. The House therefore had standing to sue. The question of the states’ legal standing is another matter. It is particularly interesting because it involved not only whether a given state would be harmed by the wall, or by the president’s diversion of funds that could otherwise be used for projects in states not directly affected by the wall, but also whether federalism itself could be negatively affected in a way that harms all of the states.
Prima facie, it seems difficult that California and New Mexico could show injury from a wall that would not be built in either of those states. On this basis, the injury to Hawaii seems far-fetched, as the ocean functions as that republic’s border. Similarly, New York is nowhere near the U.S.’s southern border. Arguing, however, that “the president’s unconstitutional action could cause harms in many parts” of the U.S., California’s attorney general at the time insisted that the member-states had standing apart from where the wall would be built.[2] Given the sources of the funding, all of the states could “lose funding that they paid for with their tax dollars, money that was destined for drug interdiction or for the Department of Defense for military men and women and military installations,” he explained.[3] This point, I admit, is valid, but it lacks a larger constitutional view.
In a federal system in which the member-states and the federal governmental institutions both have their own basis of governmental sovereignty, a power-grab by one means less power for the other. The judicial trend since the war between the U.S.A. and C.S.A. during the first half of the 1860s has been to validate encroachments by the federal government on the powers of the states. President Trump’s decision to build a wall in some of the member-states represents a power-grab not only with respect to the Congress, but also with respect to the states. In the E.U., by contrast, the states have more say in how the E.U.’s border is protected. The European model of federalism values cooperation at both the policy and implementation stages more than does the American model, in which ambition is set to counter ambition.
The U.S. Senate was originally intended to be the access point in the federal government through which the state governments could affect or even block proposed federal legislation. When U.S. senators became popularly elected by voters in the respective states rather than appointed by the state governments, the latter lost their direct access to the federal government. Before then, a majority of states could defend not only their own interests, but also the interest of the state “level” in the federal system. Afterwards, it would be more difficult for the state governments to forestall encroachments (i.e., power-grabs) by the federal government. The federal system itself would suffer from a growing imbalance.
With the state governments no longer able to express themselves directly in the U.S. Senate, because senators had an obvious incentive to satisfy constituent and especially financial-backer interests, going to the courts became the only route for trying to stop the federal president’s spending plan for a wall. Yet even that strategy suffered from the institutional conflict of interest implicit in a federal court deciding disputes between the states and the federal government. Perhaps looking narrowly at anticipated injuries to the 16 states would attest to the federal bias in the federal courts, which nonetheless have a responsibility to consider the standing that the states have in the federal system. After all, they, rather than the federal government, enjoy residual sovereignty. Is not a federal encroachment itself an injury to the state governments, given their loss of power? By the twenty-first century, the federal government could claim preemption in order to keep the governments of the states from legislating in an area of law even when the federal government did not intend to legislate in it! The danger in such an imbalanced federal system—that is, a lopsided system of governance—is that the encroaching government becomes tyrannical not just toward the states, but toward the People as well. As the power-checking-power mechanism breaks down, absolute power becomes increasingly likely.

For more comparisons of American and European federalism, see Essays on Two Federal Empires: Comparing the E.U. and U.S., and American and European Federalism: A Critique of Rick Perry's "Fed Up"!  Both are available at Amazon.


1. Charlie Savage and Robert Pear, “States’ Lawsuit Aims to Thwart Emergency Bid,” The New York Times, February 19, 2019.
2. Ibid.
3. Ibid.

Saturday, February 16, 2019

On the Various Causes of the Financial Crisis of 2008: Have We Learned Anything?

In January 2011, the Financial Crisis Inquiry Commission announced its findings. The usual suspects were not much of a surprise; what is particularly notable is how little had changed on Wall Street since the crisis in September of 2008. According to The New York Times, "The report examined the risky mortgage loans that helped build the housing bubble; the packaging of those loans into exotic securities that were sold to investors; and the heedless placement of giant bets on those investments." In spite of the Dodd-Frank Financial Reform Act of 2010 and the panel's report, The New York Times reported that "little on Wall Street has changed." One commissioner, Byron S. Georgiou, a Nevada lawyer, said the financial system was “not really very different” in 2010 from before the crisis. “In fact," he went on, "the concentration of financial assets in the largest commercial and investment banks is really significantly higher today than it was in the run-up to the crisis, as a result of the evisceration of some of the institutions, and the consolidation and merger of others into larger institutions.” Richard Baker, the president of the Managed Funds Association, told the Financial Times, "The most recent financial crisis was caused by institutions that didn't know how to adequately manage risk and were over-leveraged. And I worry that if there is another crisis, it will be because the same institutions have failed to learn from the mistakes of the past." From the testimonies of managers of some of those institutions, one might surmise that the lack of learning in the two years after the crisis was due to a refusal to admit to even a partial role in the crisis. In other words, there appears to have been a crisis of mentality, which, containing intractable assumptions and ideological beliefs as well as stubborn defensiveness, is not easy to dislodge; legislation going beyond Dodd-Frank was therefore unlikely ever to be passed.
It is admittedly tempting to go with the status quo rather than be responsible for reforms. If the reformers are also the former perpetrators, their defensiveness and ineptitude mesh well with the continuance of the status quo, even if an entire economy the size of an empire is left vulnerable to a future crisis. To comprehend the inherent danger in the sheer continuance of the status quo, it is helpful to digest the panel's findings. 
The crisis commission found "a bias toward deregulation by government officials, and mismanagement by financiers who failed to perceive the risks." The commission concluded, for example, that "Fannie and Freddie had loosened underwriting standards, bought and guaranteed riskier loans and increased their purchases of mortgage-backed securities because they were fearful of losing more market share to Wall Street competitors." These two organizations were not really market participants, as they were implicitly backed by the U.S. Government. That government-backed corporations would act so much like private competitive firms undercuts the assumed civic mission that premises government underwriting. All this ought to have raised a red flag for everyone--not just the panel, which stressed the need for a pro-regulation verdict. 

Lehman was a particularly inept player leading up to the crisis. Source: Zambio

In terms of the private sector, The New York Times reported that the panel "offered new evidence that officials at Citigroup and Merrill Lynch had portrayed mortgage-related investments to investors as being safer than they really were. It noted — Goldman’s denials to the contrary — that 'Goldman has been criticized — and sued — for selling its subprime mortgage securities to clients while simultaneously betting against those securities.'" The bank's proprietary net-short position could not be justified by market-making alone as a counter-party to its clients, Blankfein's congressional testimony notwithstanding. 
Relatedly, the panel also pointed to problems in executive compensation at the banks. For example, Stanley O’Neal, chief executive of Merrill Lynch, a bank that later had to be sold in distress during the crisis, told the commission about a “dawning awareness” through September 2007 that mortgage securities had been causing disastrous losses at the firm; in spite of his incompetence, he walked away weeks later with a severance package worth $161.5 million. The panel might have gone on to point to the historically huge gap between CEO and lower-level manager compensation and questioned its relative merit, but such a conclusion would have gone beyond the commission's mission to explain the financial crisis.
With regard to the government, The New York Times reported that the panel "showed that the Fed and the Treasury Department had been plunged into uncertainty and hesitation after Bear Stearns was sold to JPMorgan Chase in March 2008, which contributed to a series of 'inconsistent' bailout-related decisions later that year." “The Federal Reserve was clearly the steward of lending standards in this country,” said one commissioner, John W. Thompson, a technology executive. “They chose not to act.” Furthermore, Sabeth Siddique, a top Fed regulator, described how his 2005 warnings about the surge in “irresponsible loans” had prompted an “ideological turf war” within the Fed — and resistance from bankers who had accused him of “denying the American dream” to potential home borrowers. That is to say, the Federal Reserve, a creation of the U.S. Government with a public mandate, was too beholden to bankers instead of to the common good. So we are back to the issue of a government-backed institution acting like, or on behalf of, private companies (and badly at that).
We can conclude generally that governmental, government-supported, and private institutions, all acting in their self-interests, contributed to a "perfect storm" that knocked down Bear Stearns, Lehman Brothers, Countrywide, AIG, and Fannie Mae and Freddie Mac. Systemically, the commercial paper market (short-term borrowing by banks and corporations) seized up, and many of the housing markets in the U.S. took a severe fall, such that home borrowers awoke to find their homes underwater. The Federal Reserve was caught off guard, as its chairman, Ben Bernanke, had been claiming that the housing markets could be relied on to stay afloat. Relatedly, AIG insured holders of mortgage-based bonds without bothering to hold enough cash in reserve in case the housing markets declined all at once. Neither the insurer nor the investment banks that had packaged the subprime mortgages into bonds thought to investigate whether Countrywide's mortgage producers had pushed through very risky mortgages before selling them to the banks to package. In short, people who were inept believed nonetheless that they could not be wrong. Dick Fuld, Lehman's CEO, had the firm take on too much debt to buy real estate so that eventually his firm would be as big as Goldman Sachs. That such recklessness was at the behest of a childish desire to be as big as the other banks testifies to the need for financial regulation that goes beyond the "comfort zone" of Wall Street's bankers and their political campaign "donations."

See also Essays on the Financial Crisis: Systemic Greed and Arrogant Stupidity, available at Amazon.

Source:

Sam Jones, "Hedge Funds Rebuke Goldman," Financial Times, January 28, 2011, p. 18.

Wednesday, February 13, 2019

Decreasing Bank Size by Increasing Capital-Reserve Requirements: Plutocracy in Action?

Although the Dodd-Frank Financial Reform Act was passed in 2010 with some reforms, such as liquidity standards, stress tests, a consumer-protection bureau, and resolution plans, the emphasis on additional capital requirements (i.e., the SIFI surcharges) could be considered weak because the surcharges may not be sufficient should another financial crisis trigger a shutdown in the commercial paper market (i.e., short-term borrowing by banks and corporations). A study by the Federal Reserve Bank of Boston found that even the additional capital requirements in Dodd-Frank would not have been enough for eight of the 26 banks with the largest capital losses during the financial crisis of 2008. As overvalued assets, such as subprime mortgage-backed derivatives, plummet in value, banks can burn through their capital reserves very quickly. A frenzy of short-sellers can quicken the downward cycle even more. This raises the question of whether additional capital reserves would quickly be "burnt through" rather than standing for long as a bulwark. The financial crisis showed the cascading effect that can quickly run through a banking sector as fear spreads even between banks, with one damaged bank impacting another, and another. 
With the $6.2 billion trading loss at JPMorgan Chase in hindsight, Sen. Sherrod Brown (D-Ohio) and Sen. David Vitter (R-La.) proposed a bill in the U.S. Senate that would require banks with more than $400 billion in assets to hold at least 15 percent of those assets in hard capital. The two senators meant this requirement to encourage the multi-trillion-dollar banks to split up into smaller banks. Although it had been argued that gigantic banks are necessary given the size of the loans wanted by the largest corporations, banks had of course been able to form syndicates to finance such mammoth deals. 
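A minimal sketch of the arithmetic behind the proposal follows. The $400 billion threshold and the 15 percent ratio come from the bill as described above; the bank sizes and the 10% crisis loss rate are assumed purely for illustration.

```python
# Sketch of the hard-capital arithmetic in the Brown-Vitter proposal as
# described above: banks with more than $400 billion in assets would hold
# at least 15% of those assets in hard capital. The bank sizes and the 10%
# crisis loss rate below are assumed for illustration, not actual figures.

THRESHOLD = 400e9   # asset threshold from the proposal
RATIO = 0.15        # hard-capital ratio from the proposal

def required_hard_capital(assets: float) -> float:
    """Hard capital the proposal would require for a bank of a given size.
    Returns 0.0 for banks under the threshold, meaning only that this
    proposal would not apply, not that no capital is required at all."""
    return RATIO * assets if assets > THRESHOLD else 0.0

for assets in (300e9, 500e9, 2_000e9):          # assumed bank sizes
    capital = required_hard_capital(assets)
    if capital == 0.0:
        print(f"assets ${assets/1e9:,.0f}B -> below the $400B threshold; proposal does not apply")
        continue
    crisis_loss = 0.10 * assets                 # assumed loss of 10% of assets in a severe crisis
    cushion = capital - crisis_loss
    print(f"assets ${assets/1e9:,.0f}B -> required capital ${capital/1e9:,.0f}B, "
          f"assumed crisis loss ${crisis_loss/1e9:,.0f}B, remaining cushion ${cushion/1e9:,.0f}B")

# As the critique later in this post notes, even 15% can be "burnt through"
# if losses approach that share of assets, and the requirement by itself
# sidesteps the question of whether banks should be capped in size at all.
```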
The Senate had recently voted 99-0 on a nonbinding resolution to end taxpayer subsidies to too-big-to-fail banks, so the U.S. Senate had Wall Street’s attention. Considering that the U.S. House of Representatives was working on legislation to deregulate derivatives, however, the chances that the U.S. Government would stand up to Wall Street, even on the too-big-to-fail systemic risk, were slim to nil. Indeed, the U.S. Department of Justice’s criminal division had been going easy on prosecuting the big banks for fraud, whether out of fear that a conviction would cause a bank collapse or because President Obama had received very large donations from Wall Street banks, notably including Goldman Sachs.
The two senators’ strategy of breaking up the biggest banks indirectly, by increasing their reserve requirements disproportionately, didn't work, at least as of 2019. The advantages of size, including the human desire to empire-build (witness Dick Fuld at Lehman Brothers), could have been expected to outweigh the economic preference for a lower, more proportionate reserve requirement. With anti-trust laws having been used to break up giants such as Standard Oil and AT&T, the thought of breaking up the banks too big to fail, even in the wake of the financial crisis, was strangely viewed as radical and thus at odds with American incrementalism. The question was simply whether systemic risk should be added to monopoly (i.e., restraint of trade) as an additional rationale for breaking up huge concentrations of private property. This question could have been made explicit, rather than trying to manipulate the big banks into losing some weight.  
The approach of using disproportionately high reserve requirements can be critiqued on at least two grounds. First, should one or more of those banks decide to go with the 15% requirement rather than break up into smaller firms, even the additional capital might not be enough to protect a bank during a financial crisis; the study discussed above suggested as much. Second, even if the additional requirements turned out to be sufficient in a crisis, the approach would sidestep a decision by the government on whether systemic risk justifies a cap on how large banks can get. 
I suspect that the U.S. Congress and the president backed off from really reforming Wall Street because of its money in campaign finance. In short, the big banks on Wall Street didn't want to shrink. In a system of political economy wherein the economy is regulated by government, rather than vice versa, backing off just because large concentrations of wealth (and thus power, even political power) don't like the plans is unacceptable. Moreover, it is a sign of encroaching plutocracy, wherein the regulated dictate behind closed doors to the regulators and politicians. Meanwhile, the public, including the economy itself, remains vulnerable. 

See Essays on the Financial Crisis: Systemic Greed and Arrogant Stupidity, available at Amazon.

Sources:
Eric Rosengren, “Bank Capital: Lessons from the U.S. Financial Crisis,” Federal Reserve Bank of Boston, February 25, 2013.
Zach Carter and Ryan Grim, “Break Up the Banks’ Bill Gains Steam in Senate As Wall Street Lobbyists Cry Foul,” The Huffington Post, April 8, 2013.

Johnson’s “Reinvention” of JC Penney: Too Much and Too Little

In April 2013, JC Penney’s board wished the CEO, Ron Johnson, “the best in his future endeavors.” His effort to “reinvent” the company had been “very close to a disaster,” according to the largest shareholder, William Ackman. During Johnson’s time at the company as its CEO, shares fell more than fifty percent. In February 2013, Johnson admitted to having made “big mistakes” in the turnaround. For one thing, he did not test-market the changes in product-line and pricing-points. The latter in particular drove away enough customers for the company’s sales to decline by 25 percent. Why did Johnson fail so miserably?
Ron Johnson's short tenure as CEO of JC Penney was disastrous, according to Ackman. Source: Reuters
Some commentators on CNBC claimed that JC Penney’s board directors should have known better than to hire someone from Apple to take on so much responsibility right off the bat in a department store. However, Johnson had been V.P. for merchandising at Target before going over to Apple. Therefore, Penney’s board cannot be accused of ignoring the substantive differences between sectors. Even so, Target and Walmart are oriented to one market-segment, whereas JC Penney, Kohl's, and Macy's are oriented to another. Perhaps had he taken the time to have market tests done at JC Penney, any error in applying what he had learned at Target could have been made transparent.
Although, as Ullman, the former CEO who would be replacing Johnson, pointed out, customer tastes are always changing, so a company cannot simply go back to what worked in the past, to “reinvent” a company goes too far in the other direction. For one thing, it is risky for a retail company to shift from one market-segment to another, given the company's image. Additionally, to “reinvent” something is to start from scratch to come up with something totally new. Even if that were possible for a retail chain, the “new front” would likely seem fake to existing customers. “They are trying to be something they are not,” such customers might say. Put another way, Ron Johnson might have gotten carried away.
In an interview just after Johnson’s hiring at JC Penney had been announced in June 2011, he said, “In the U.S., the department store has a chance to regain its status as the leader in style, the leader in excitement. It will be a period of true innovation for this company.” A department store is exciting? Was he serious? Perhaps his excitement got the better of him in his zeal for change. Were the changes really “true innovation”? Adding Martha Stewart kitchen product-lines was hardly innovative—nor was getting rid of clearance sales and renovating store designs and the company logo.
Renovation, generally speaking, is rather superficial, designed perhaps to give customers an impression of more change than is actually the case. Is a given renovation an offshoot of marketing or of strategy? Ron Johnson may have been prone to exaggeration, as evinced by his appropriation of faddish jargon, while coming up short in terms of substantive change. In an old company trying to be something it's not (i.e., going from a promotional to a specialty pricing strategy), too much superficial change can easily accompany too little real change. Sometimes even upper-level managers can get carried away with their own jargon in trying to make their respective companies something they are not. It is like a person trying to be someone he or she is not. In "reinventing" JC Penney, Ron Johnson was trying to make an old woman come off as young by applying make-up and new clothes.
Sources:
Stephanie Clifford, “J.C. Penney Ousts Chief of 17 Months,” The New York Times, April 9, 2013.

Joann Lublin and Dana Mattioli, “Penney CEO Out, Old Boss Back In,” The Wall Street Journal, April 8, 2013.

Monday, February 11, 2019

Is Modest Growth vs. Full Employment a False Dichotomy?

As summer slid into autumn in 2012, the Chinese government was giving no hint of any ensuing economic stimulus program. This was more than slightly unnerving for some, as a recent manufacturing survey had slumped more than expected, to 49.2 in August; a score of 50 separates expansion from contraction. A similar survey, by HSBC, came in at 47.6, down from 49.3 the previous month. Bloomberg suggested that China might face a recession in the third quarter. So why no stimulus announcement? Was the Chinese government really just one giant tease? I submit that the false dichotomy of moderate economic growth versus full employment was in play. In short, the Chinese government did not want to overheat even a stagnant economy, even though the assumption was that full employment would thus not be realizable.

Wang Tao, an economist at UBS, explained the “very reactionary, cautious approach” as being motivated by the desire to avoid repeating the “excesses of last time.”[1] The stimulus policy in the wake of the 2008 global downturn had sparked inflation and caused a housing bubble in China. According to The New York Times, China was avoiding “measures that could reignite another investment binge of the sort that sent prices for property and other assets soaring in 2009 and 2010.”[2] A repeat of any such binge could not be good, for it can spark the sort of irrational excitement that has a life of its own.
In short, too much stimulus in an economy can cause inflation and put people’s homes at risk of foreclosure once the housing bubble bursts, whereas a lack of stimulus means that a moderate growth rate is likely, rather than one that could give rise to full employment. Is there no way out of this trade-off? 
Keeping fiscal or monetary stimulus within projections of moderate growth can be combined with government spending targeted both to giving private employers a financial incentive to hire more people and to increasing the number of people hired by state enterprises. In the spirit of the U.S. Employment Act of 1946, a government can see to it that anyone who wants a job has one, while still maintaining only a moderate stimulus. A modest growth-rate can co-exist with full employment. 

1. Bettina Wassener, “As Growth Flags, China Shies From Stimulus,” The New York Times, September 3, 2012. 
2. Ibid.

Saturday, February 9, 2019

Behind Cameron's Referendum on Britain's Secession from the E.U.

Governors of other E.U. states reacted quickly to David Cameron’s announcement that if his party were re-elected to lead the House of Commons, he would give his state’s residents a chance to vote yes or no on seceding from the European Union. The result would be decisive, rather than readily replaced by a later referendum. Cameron said the referendum would also be contingent on his not being able to renegotiate his state’s place in the Union. This proposed renegotiation in particular prompted some particularly acute reactions from the governments of other “big states.” Behind these reactions was a sense that the British government was being too selfish. This was not fair, I submit, because the dispute was really grounded in the nature of the E.U. itself as a federal system. 
David Cameron, PM of Britain
With the basic or underlying difference still intact, it should be no surprise that the renegotiation did not go well. German Foreign Minister Guido Westerwelle said at the time that Britain should not be allowed to “cherry pick” from among the E.U. competencies only those that the state likes. What then should we make of the opt-outs in place at the time—provisions from which states other than Britain benefitted? Surely one size does not fit all in such a diverse federal union (and that goes for the U.S. as well). Westerwelle was saying that Cameron had abused a practice that was meant as an exception rather than the rule. Britain was exploiting this means of flexibility in the Union because people in that state tended to view the E.U. as a confederation or, worse, a trade "bloc," even though the E.U. and its states each had some governmental sovereignty. 
The president of the European Parliament, Martin Schulz, said the approach of the British government would work to the detriment of the Union. Specifically, he warned of “piecemeal legislation, disintegration and potentially the breakup of the union” if Britain were allowed to be bound only to the E.U. competencies that the party in power in the House of Commons liked. A player joining a baseball team undermines the game in demanding that he will only bat because batting is the only part that is fun. In higher education, the education itself could only be incomplete if students could limit their classes to what interests them. Such a player or student would essentially have a different view of the sport or of education, respectively; that view of the nature of the thing would be so at odds with its fundamentals that the thing itself would be severely undercut. This is what had been going on in the case of Britain navigating in the E.U. 
Carl Bildt, the Swedish foreign minister, also touched on the detriment to the whole from what he erroneously took to be the selfishness of a part. He said that Cameron’s notion of a flexible arrangement for his own state would lead to there being “no Europe at all. Just a mess.” French foreign minister Laurent Fabius said that “Europe a la carte” would introduce dangerous risks for Britain itself. So if the British government was being selfish, it could have been to the state's own detriment, though of course I contend that selfishness does not go far enough as an explanation. 
In short, the visceral reactions in other states to Cameron’s announcement manifested a recognition of the selfishness of one part at the expense of the whole. Those reactions were rash and, even more importantly, lacking in recognition of the underlying fault-line in the Union erupting between Britain and the Union somewhere out in the Channel. Cameron and plenty of other Brits viewed the E.U. as simply a series of multilateral treaties in which sovereign states could pursue their respective interests. “What he wants, above all,” according to Deutsche Welle, “is a single market.” Therefore, he “wants to take powers back from Brussels” to return the E.U. to a network of sovereign states. It followed, according to this view, that each state, being fundamentally sovereign, “should be able to negotiate its level of integration in the EU.” Such would indeed be the case were the E.U. merely a bundle of multilateral international treaties, or a network to which Britain was a party, rather than a federal union of semi-sovereign states and a semi-sovereign federal level. Herein lies the real conflict of ideas within the E.U. Cameron’s strategy was selfish only on the assumption that the E.U. is something more than a network to which Britain happens to belong.
Ultimately the problem was the uneasy co-existence of the two contending conceptions of what the union was in its very essence. The real question was whether the E.U. could long exist with both conceptions being represented by different states. The negative reaction from officials of other states who held the “modern federal” conception (i.e., dual sovereignty) of the E.U. suggests that Cameron’s conception of the E.U. was ultimately incompatible with the union’s continued viability, given what the union actually was at the time.

Sources:
“EU Leaders Hit Out Over Cameron Referendum Pledge,” Deutsche Welle, January 23, 2013.
“Cameron Wants Another EU,” Deutsche Welle, January 24, 2013.

Essays on Two Federal Empires, available at Amazon.

Essays on the E.U. Political Economy, available at Amazon.

Greek Austerity: Pressure on the Environment

“While patrolling on a recent cold night, environmentalist Grigoris Gourdomichalis caught a young man illegally chopping down a tree on public land in the mountains above Athens. When confronted, the man broke down in tears, saying he was unemployed and needed the wood to warm the home he shares with his wife and four small children, because he could no longer afford heating oil. ‘It was a tough choice, but I decided just to let him go’ with the wood, said Mr. Gourdomichalis, head of the locally financed Environmental Association of Municipalities of Athens, which works to protect forests around Egaleo, a western suburb of the capital.”[1] Tens of thousands of trees had disappeared from parks and forests in Greece during the first half of the winter of 2013 alone, as unemployed Greeks had to contend with the loss of the home heating-oil subsidy as part of the austerity program demanded by the state’s creditors. As impoverished residents too broke to pay for electricity or fuel turned to fireplaces and wood stoves for heat, smog was just one of the manifestations—the potential loss of forests being another. On Christmas Day, for example, pollution over Maroussi was more than two times the E.U.’s standard. Furthermore, many schools, especially in the northern part of Greece, faced hard choices for lack of money to heat classrooms.
Greek forests were succumbing in 2012 to the Greeks' need to heat their homes as austerity hit. Source: Getty Images
Essentially, austerity was bringing many people back to pre-modern living, perhaps including a resurgence in vegetable gardens during the preceding summer. At least with respect to the wood, the problem was that the population was too big—and too concentrated in Athens—for the primitive ways to return, given the environment's capacity. 
To be sure, even in the Middle Ages, England had lost forests as the population (and royal plans) grew. In December 1952, many Londoners burned fuel in their fireplaces to keep warm, and the resulting smog blanketed the city; thousands died, and restrictions on such fires followed. No one probably thought to ask whether the city had gotten too big—and too dense. No policy was enacted that would shift population out of the region.
Generally speaking, human population levels made possible by modern technology and medical advances have become too large for a return to pre-modern ways of life. Because of the extraordinarily large size of the modern city, including Athens, suddenly removing modern technology, which here includes government subsidies, is especially problematic when many people are forced to fend for themselves to meet basic needs. The efficiency of modern technology, including in regard to utilities and food distribution, is often taken for granted, even by governments, so the impacts on the environment when masses of people “return to nature” can be surprising. Nature has become "used to" seven billion humans on the planet in large part because we have economized via technology, so the full brunt of the population-size is not felt. Particularly in industrial countries, societies are reliant on modern technology because without it the bulging population is unsustainable. 
Put another way, we have distanced ourselves from nature, and our growth in numbers in the meantime has made it impossible for us to “get back to nature” in a jolt, especially for many people at once. It is in this sense that governmental austerity programs that cut back on sustenance are dangerous not only for society, but also for the ecosystems in which humans live. Accordingly, by mid-January 2013, the Greek government was considering proposals to restore heating-oil subsidies. It is incredible that the financial interests of institutional creditors, including other governments, were even allowed to put the subsidies at risk.
In ethical terms, the basic sustenance of a people takes priority over a creditor’s “need” for interest. The sin of usury goes back to the origins of lending as an instance of charity rather than of money-making, whether from the plight of the poor or from profitable uses.[2] When a person in antiquity was in trouble financially, someone with a bit of cash would lend some with the expectation that only that sum would be returned. The demand for interest on top was viewed by the historical Church as adding insult to injury (i.e., the bastardization of charity into a money-making ruse). Then exceptions were made for commercial lending, wherein a creditor could legitimately demand a share of the profit made from the borrowed money in addition to the return of the principal. As commercial lending came increasingly to characterize lending, the demand for interest became the norm, even on consumption loans from which no profit would ensue to pay off the loan with interest. The notion that interest is conditional on a borrower having enough funds was lost, causing much pain to many in the name of fidelity of contract, as if it or the creditor’s financial interest were an absolute. Put another way, the default has swung over from the borrowers to the lenders to such an extent that society may look the other way as people literally have to cut down trees to heat their homes because creditors have demanded and won austerity that touches on sustenance programs.
Therefore, especially in Christian Europe, squeezing people by means of pressure applied to state governments in the E.U. to make debt payments even in the context of a financial crisis can be considered untenable, ethically speaking. I am not suggesting that states should be profligate with borrowed funds. Rather, just as Adam Smith’s Wealth of Nations is bracketed by his Theory of Moral Sentiments, so too an economy (and financial system) functions best within moral constraints. 

1. Nektaria Stamouli and Stelios Bouras, “Greeks Raid Forests in Search of Wood to Heat Homes,” The New York Times, January 11, 2013.
2. Skip Worden, God's Gold, available at Amazon. 

Friday, February 8, 2019

Second-Term Inaugural Addresses of American Presidents: Of Transformational or Static Leadership?

According to a piece in the National Journal, “George Washington might have had the right idea. Second inaugural addresses should be short and to the point. Of course, speaking only 135 words as Washington did in 1793 might be a little severe.”[1] Consider how short, and yet how momentous, Lincoln's Gettysburg Address was. The challenge for second-term presidents, whether Barack Obama or the sixteen two-term presidents before him, is “how to make a second inaugural address sound fresh, meaningful and forward-looking. Almost all of Obama’s predecessors failed at this. Only Abraham Lincoln and Franklin D. Roosevelt made history with their addresses. One stirred a nation riven by civil war; the other inspired a country roiled by a deep depression. All but forgotten are the 14 other addresses, their words having been unable to survive the test of time. Even those presidents famed for their past oratory fell short.”[2] This is a particularly interesting observation: surviving the test of time as the decisive criterion. Even a president whose silver tongue mesmerizes the people of his or her time may not deliver ideas that survive beyond being a cultural artifact of the president’s own time. What of an address that is quite meaningful in its immediate time yet does not pass the test of time so as to be recognized as a classic? 

The full essay is at "Inaugural Addresses: Of Leaders?"

1. George E. Condon, Jr., “The Second-Term Inaugural Jinx,” National Journal, January 20, 2013.
2. Ibid.

Increasing Income Inequality in the U.S.: Deregulation to Blame?

Most Americans have no idea how unequal wealth, as well as income, is in the United States. This is the thesis of Les Leopold, who wrote How to Make a Million Dollars an Hour. In an essay, he points out that economic inequality increased over the last decades of the twentieth century. His explanation hinges on financial deregulation. I submit that reducing the answer to deregulation does not work, for it does not go far enough.
In 1928, the top one percent of Americans earned more than 23% of all income. By the 1970s, the share had fallen to less than 9 percent. Leopold attributes this enabling of a middle class to the financial regulation erected as part of the New Deal in the context of the Great Depression. In 1970, the top 100 CEOs made $40 for every dollar earned by the average worker. By 2006, the CEOs were receiving $1,723 for every worker dollar. In between was a period of deregulation, beginning with Carter’s deregulation of the airline industry in the late 1970s and Reagan’s more widespread deregulation. Even Clinton got into the act, agreeing to repeal the Glass-Steagall Act, which since 1933 had kept commercial banking from the excesses of investment banking. The upshot of Leopold’s argument is that financial regulation strengthens the middle class and reduces inequality by tempering the wealth and income of those “on the top.” Deregulation has the reverse effect.
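As a rough check on the magnitude of that shift, the implied compound growth of the pay ratio can be computed from the two figures Leopold cites; the arithmetic below is only an illustration of how steep the climb was.

```python
# Rough arithmetic on the CEO-to-average-worker pay ratio cited above:
# about 40:1 in 1970 versus 1,723:1 in 2006 for the top 100 CEOs.

ratio_1970, ratio_2006 = 40, 1723
years = 2006 - 1970

growth_factor = ratio_2006 / ratio_1970              # roughly a 43-fold increase in the ratio
annual_growth = growth_factor ** (1 / years) - 1     # implied compound annual growth of the ratio

print(f"The ratio grew roughly {growth_factor:.0f}-fold over {years} years,")
print(f"or about {annual_growth:.1%} per year compounded.")
```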
The increasing role of the financial sector in the second half of the 1900s meant that finance itself could claim an increasing share of compensation.  
Leopold misses the increasing proportion of the financial sector in GDP from the end of World War II to 2002. The ending of the Glass-Steagall Act in 1999 does not by itself translate into more output on Wall Street relative to other sectors. Indeed, the trajectory of the increasing role of finance in the U.S. economy is independent even of the deregulatory period. Leopold’s explanation can be turned aside, moreover, by merely recognizing that the “young Turks” on Wall Street have generally been able to run circles around the rules produced by their regulators. Even though financial deregulation can open the floodgates to excessive risk-taking, such as in selling and trading subprime-mortgage-based derivatives and the related insurance swaps, I suspect that the rising compensation on Wall Street has had more to do with the increasing role of the financial sector in the American economy.
The larger question, which Leopold misses in his essay, is whether the “output” of Wall Street is as “real” as that of the manufacturing and retail sectors, for example. Is there any added value to brokering financial transactions, which in turn are means to investments in such things as plants and equipment used to “make real things”? Surely there is value to the function of intermediaries, but as that function takes on an increasing share of GDP, it is fair to ask whether the overall value of “production” is inferior.
Given the steady increase of the financial sector as a percent of GDP, one would expect a more steady divergence of these two lines. Reagan's deregulation fits the divergence pictured, though one would expect a further increase in divergence after the repeal of the Glass-Steagall Act in 1999. Source: Les Leopold

As for the rising income and wealth of Wall Streeters, increasing risk, which is admittedly encouraged by deregulation, is likely only part of the story. If the financial products are premium goods as distinct from the goods sold at Walmart, for instance, then as the instruments are increasingly complex one would expect the compensation to increase as well.
Leopold is on firmest ground in his observation that Americans are largely oblivious to the extent of economic inequality in the United States. Few Americans have a sense of how much more economic inequality there is in the U.S. than in the E.U., where the ratio of CEO to average-worker compensation is much lower. One question worth asking centers on what in American society, such as what is valued in it, allows or even perpetuates such inequality, in both absolute and relative terms. The relative terms suggest that part of the explanation lies in cultural values having relative salience in American society. Possible candidates include property rights and the related notion of economic liberty, the value placed on wealth itself as a good thing, and the illusion of upward mobility that allows for sympathy for the rich from those “below.”
In short, beyond actual regulations, particular values esteemed in American society and the increasing role of the financial sector in American GDP may provide a fuller explanation of why economic inequality increased so much during the last quarter of the twentieth century and showed no signs of stopping during the first decade of the next. Americans by and large were wholly unaware of the role of their values in facilitating the growing inequality, and even of the sheer extent of the inequality itself. In a culture where political equality has been so mythologized, the acceptance of so much economic inequality is perplexing. At the very least, the co-existence of the two seems like a highly unstable mixture from the standpoint of the viability of the American republics “for which we stand.” Yet absent a re-calibration of societal values, the mixture may be an enduring paradox of American society even if the democratic element succumbs.

Source:
Les Leopold, “Inequality Is Much Worse Than You Think,” The Huffington Post, February 7, 2013.

Thursday, February 7, 2019

A U.S. Senator Aiding a Contributor While Averting a "Fiscal Cliff": Turning a Crisis into an Opportunity

The law passed by Congress in the first days of January 2013 to avert the across-the-board tax increases and “sequester” (i.e., across-the-board budget cuts) was “stuffed with special provisions helping specific companies and industries.” While many of the provisions would increase the U.S. Government’s debt, at least one would decrease it. Is the latter any more ethical because it is in line with the more general interest in reducing the federal debt? Put another way, does the end justify the means? Do good consequences justify bad motives? These are extremely difficult questions. The best I can do here is suggest how they can be approached by analysis of a particular case study.
In the legislation, a provision reduced the Medicare reimbursement rate for a radiosurgery device manufactured by the E.U. company Elekta AB. The cut was pushed by a competitor, Varian Medical Systems. Senate Majority Leader Harry Reid asked Sen. Max Baucus, chair of the Senate Finance Committee, to write the cut into the legislation. While both senators could point to the public interest in the debt-reduction result of the cut, their relationship with Varian makes their motives suspect. Specifically, they may have exploited personal conflicts of interest that eclipsed a more expansive duty to the wider (i.e., not private, or personal) public interest. 
While it is perhaps simplistic to relate campaign contributions to a senator’s subsequent action, it is significant that Varian spent $570,000 in 2012 on lobbying. The company added Capitol Counsel, a firm that had contacts with Sen. Baucus. Varian already had connections to Reid through Cornerstone Government Affairs lobbyist Paul Denino, a former Reid deputy chief of staff. Additionally, the leading beneficiary of the contributions of Varian executives and the company’s PAC over the previous four years was Sen. Reid, whose committees received $21,200. Varian’s lobbyists added $42,700 more to Reid’s campaign.[1] While Sen. Reid’s subsequent urging of the reimbursement-rate cut could have been unrelated to these contributions and contacts, the senator’s involvement compromised him ethically. Put another way, it was at the very least bad form, or unseemly. It implies that companies making political contributions and hiring lobbyists connected to public officials do so (or worse, should do so) to have special access to those particular officials in order to turn upcoming legislation to the companies’ financial advantage. Even if the public also benefits, it can be asked whether the companies deserve their particular benefits. In the case of Varian, it may be asked whether the company deserved the cut in the reimbursement rate going to Elekta.
As could be expected, spokespersons at both companies sought to argue the merits of their respective cases in the court of public opinion. It is more useful to look at the regulators’ rationale for increasing the reimbursement rate for Elekta’s “Gamma Knife” in the first place. Originally, the knife and Varian’s linac machines were lumped together by the Centers for Medicare and Medicaid Services (CMS) under the same CMS code. In 2001, the Centers separated the devices in terms of data collection so an analysis could be conducted on whether the devices should receive different reimbursement rates. The Huffington Post reported that the reimbursement rate for the Gamma Knife was increased because “it typically requires only one treatment, while the linacs often require multiple treatments.” Also, “Gamma Knives machines are more expensive to obtain and maintain due to the storage of radioactive cobalt and regulation by both the Nuclear Regulatory Commission and the Department of Homeland Security. Linacs don’t use nuclear material and are regulated by the Food and Drug Administration.”[2] So, due to the cost and use differential, CMS increased the Gamma Knife reimbursement in 2006 to $7,000. From the standpoint of the regulators’ criteria, the data-collection and analysis method and the resulting rationale are legitimate. In contrast, because neither the use nor the cost differential had changed by January 2013, the cut in the reimbursement rate cannot enjoy such legitimacy. Hence it is possible that exogenous factors, such as the political influence of Varian’s lobbyists and campaign contributions, were behind the change. From the standpoint of the previous rate differential, the change cannot be justified. Neither Sen. Reid nor Sen. Baucus could justify their actions (and motives) by the substance of the case. However, they could still appeal to the salubrious budget-cutting effect as justifying their involvement.
The question here is whether the favorable consequences of the cut on the government’s subsequent deficits mitigate the shady scenario of a senator acting on behalf of a company that had contributed to his or her campaign. I would advise a member of Congress to avoid even the appearance of a conflict of interest. If the result in this particular case is in the public interest (i.e., reducing the deficit), does this positive consequence justify the senators’ actions and even the questionable appearance? It’s a no-brainer that the senators would immediately point to the public interest in the consequence, but does it effectively remove the taint of immoral political conduct (and perhaps motive)?
The link between the company-senator relation, the senators’ action from which the company stood to benefit financially in a material way, and the company’s resulting gain can be distinguished ethically from a good consequence to the public. A bystander would naturally view the consequence to the public as salubrious even while feeling a sentiment of disapprobation toward the company’s gain as well as toward the senators’ action and their relation to the company. In other words, the favorable impact on the public does not remove the stain on the company and the senators. To be sure, that stain would be greater had the public been harmed rather than helped, but even with the positive general consequence the senators may have acted for the private benefit. Moreover, the push for the cut could have come from senators without ties to Varian, which would have obviated the ethical problem. In short, the public interest does not remove either senator from the ethically problematic situation that they chose to occupy. Even if their motive had been solely the public interest, they failed to avoid the appearance of unethical motive and conduct.
“The end justifies the means” is a slippery slope in terms of what the human mind can rationalize as legitimate. Great harm has seemingly been justified by great ideals. Even in the face of the ideals, the harms provoke a sentiment of disapprobation in the observer (sociopaths excepted). This suggests that the ideals cannot completely justify unethical means. It may indeed be that unethical means are necessary in some particular cases, but this does not render those means ethically pure. Ethical principles do not know practical compromise. Rather, people do.


1. Paul Blumenthal, “Varian Medical Systems Used Fiscal Cliff Deal to Hurt Competitor,” The Huffington Post, February 8, 2013.
2. Ibid.

On the Impact of Political Rhetoric: From “Global Warming” to “Climate Change”

Words matter in politics. The side that can frame a question by definitively naming it in the public mind enjoys a subtle though often decisive advantage in the debate and thus in any resulting public policy as well. For example, “pro-choice” privileges the pregnant woman, while “pro-life” defines the abortion debate around the fetus. Similarly, “global warming” implies a human impact, whereas “climate change” defines the issue around nature. Even though the shift from “global warming” to “climate change” is more in keeping with the evolving science and won’t be bumped off by a cold winter, political players have been the driving force—language hardly being immune to ideological pressure.
Regarding the weather’s role in shifting popular perception on the issue, research published in Public Opinion Quarterly in 2011 claimed that a bad winter can indeed discredit the “global warming” label.[1] The Washington Policy Center similarly claimed that the heavy snowfall of the preceding winter had led to “climate change” replacing “global warming.”[2] The cold that refused to relent in March of 2013 and hit North America hard in January of 2019 seemed to undercut or even repudiate the scientific “global warming” hypothesis, even though climate science, as an empirical science, always demands long-term data.
However, in looking back at the name change, we must consider the influence of political actors, who are prone to manipulate the public’s perception in part by using words to frame the debate. In 2002, for example, Frank Luntz wrote a confidential memo to the Republican Party suggesting that, because the Bush administration was vulnerable on the climate issue, the White House should abandon the phrase “global warming” in favor of “climate change.”[3] As if by magic, although “global warming” appeared frequently in President Bush’s speeches in 2001, “climate change” populated the president’s speeches on the topic by 2002.[4] In other words, the president’s political vulnerability on the issue was answered by changing the label to reframe the debate. Not missing a beat, critics charged that the motive was political: to downplay the possibility that carbon emissions were a contributing factor.[5] Both Bush and Cheney had ties to the oil and gas industry. In fact, Cheney’s ties through Halliburton may have played a role in the administration’s advocacy of invading Iraq under the subterfuge that Iraq had been involved in the 2001 attacks on the Pentagon and the World Trade Center. 
The Obama administration likely went with “climate change” rather than “global warming” because the former was less controversial. The corporate Democrat tended to hold to the political center; after all, Goldman Sachs employees had contributed roughly a million dollars to his first presidential campaign in 2008. In September 2010, the White House floated replacing the term “global warming” with “global climate disruption.”[6] The administration subsequently dropped the term. 
So much attention to a mere label indicates just how important what you call something is to the outcome. Labels are not always neutral. For instance, the term “African American” was making inroads while “Black American” was hardly ever heard. “African” slips in ethnicity, whereas “Black” refers to race. The change of label thus shifted the axis on which the controversy had hinged from race to ethnicity. Meanwhile, the American public didn’t notice the artful conflation of ethnicity (i.e., culture) and race. Obama used the ethnic term and applied it to himself even though his mother was Caucasian. He also claimed Illinois as his home state even though he had moved to Chicago only after college. He could benefit politically from the support of Black Americans and Illinoisans. 
Similarly, Obama could benefit politically from adopting “climate change.” As the academic journal Public Opinion Quarterly reported in 2011, “Republicans are far more skeptical of ‘global warming’ than of ‘climate change,’” whereas the vast majority of Democrats were indifferent to which label was used.[7] With “global warming” carrying “a stronger connotation of human causation, which has long been questioned by conservatives,” Obama stood to gain some Republican support simply by changing how he referred to the issue.[8] That support was part of the president’s ability to straddle the center in American politics. 
Given the effort that has gone into crafting labels, it is surprising that Congress has not spent more time debating them. I am also curious why the American people did not realize that they were being manipulated by the choice of label. If “climate change” allows for the contention that human-sourced carbon emissions into the atmosphere have not been a cause of the warming of the oceans and air, then the very survival of the species could be in jeopardy because of a label chosen for short-term economic and political reasons.

1. Tom Jacobs, “Wording Change Softens Global Warming Skeptics,” Pacific Standard, March 2, 2011. 
2. Washington Policy Center, “Climate Change: Where the Rhetoric Defines the Science,” March 8, 2011.
3. Oliver Burkeman, “Memo Exposes Bush’s New Green Strategy,” The Guardian, March 3, 2003.
4. Ibid.
5. Washington Policy Center, “Climate Change: Where the Rhetoric Defines the Science,” March 8, 2011.
6. Erik Hayden, “Republicans Believe in ‘Climate Change,’ Not ‘Global Warming,’” The Atlantic Wire, March 3, 2011.
7. Tom Jacobs, “Wording Change Softens Global Warming Skeptics,” Pacific Standard, March 2, 2011.
8. Ibid.