Saturday, October 27, 2018

The Debt-Ceiling and the U.S. Budget as Ransom: A Structural Flaw of Democracy?

It is likely a drawback of democracy that hard decisions—those in which fixing the problem goes against instant gratification or financial advantage—get pushed back, or “kicked down the road,” rather than addressed definitively. This structural problem can be seen in how Congressional leaders and the U.S. President merely postponed the “fiscal cliff” for two months at the beginning of 2013. More generally, the political tactic of holding the federal budget and the debt-ceiling for ransom evinces a fundamental flaw in democracy itself.
To be sure, political excess was also in the mix, as each side stepped back from closing a deal whenever a more opportunistic bargaining position presented itself. For example, President Obama suddenly demanded $400 billion more in revenue in his “grand bargain” with Speaker Boehner when the bipartisan “gang of six” in the U.S. Senate announced its own deal, which included more revenue than the “grand bargain” did. Put another way, Obama got greedy and undercut his own credibility in sticking to a deal that he had led the Speaker to believe had been reached. Later, as pressure from conservative Republicans mounted, the Speaker walked away even from the “grand bargain” without the additional $400 billion in revenue. The result was frustration, distrust, and a “quick fix” that merely “kicked the can down the road” and unnerved markets with the prospect of ongoing uncertainty. At the very least, the trajectory bespoke the dysfunction rather than the triumph of politics. More subtly, the verdict on representative democracy could not have been good. Although less obvious, this point is far more serious, for even given the vulnerabilities in self-government, no alternative to democracy could claim superiority.
Behind the leaders’ “inability” to reach a “grand bargain” capable of solving structural budgetary imbalances (beyond those that come simply from “digesting” the aging of the baby-boom generation) was pressure from competing ideologies within the electorate itself on the size and role of government. Bridging such distant ideologies can be difficult even for visionary leadership; deal-making is oriented to a more micro level of policy.
The Speaker had been wise in wanting to do something much bigger in the “grand bargain” than merely getting the country’s debt ceiling raised and making a dent in the budget deficit. He had wanted fundamental tax and entitlement reform that would put the U.S. Government on the path to fiscal balance. “I did not come here to have a big title,” he said. “I came here to do big things.” Indeed, he put his title as Speaker at risk just by negotiating with the President with revenues on the table, given the emergence of the anti-tax “Tea Party” Republicans in the House Republican caucus.
Upping the ante, as it were, was the choice made by Rep. Paul Ryan (R-WI) to use the debt-ceiling vote as leverage to extract concessions from the White House. According to The New York Times, “Republicans vowed to use the need to raise the federal debt ceiling in early 2013 to force deeper spending reductions before agreeing to an extension until May.” Making passage of an increase in the ceiling contingent in any sense was itself destabilizing to markets, for it introduced new uncertainty over whether the U.S. Government would default. Added uncertainty in the business environment translates into businesses reducing or deferring investments in expanded operations. The announcement itself added risk to U.S. Treasury bonds, even if the strategy would never go as far as actually standing by while the United States Treasury defaulted on its obligations.
In reaction to Rep. Ryan’s announcement, Tim Geithner, Secretary of the U.S. Treasury, advised the President to make a deal because a default on Treasuries would trigger not only a significant downgrade in the nation’s debt rating, but also another Great Depression that would take generations to run its course. Even if Ryan’s “take no prisoners” negotiation strategy was clever in a narrow political sense, it is difficult to accept an “ends justify the means” justification for even opening up the mere possibility of another Great Depression. In other words, even great political strategy raises red flags if the country itself is put at catastrophic economic risk, even for a time. It makes sense that the American Founders viewed partisanship so negatively, even though the Federalists and Anti-Federalists could be as partisan as they come. A willingness to up the ante without limit on the risk of harm to the whole may be part of an escalation of ideological passion that eclipses common sense and eventually sinks the entire ship. An observer from Mars might get the idea that the humans over here are getting more desperate. Given the sheer ideological distance between the competing visions at issue, using something as catastrophic as not raising the debt ceiling as leverage can reasonably be regarded as reckless, if not foolish, even if the political calculus is clever and ultimately effective in terms of the ideological objectives.
Fortunately (relative to having the debt-ceiling as leverage), the minority leader and the president of the U.S. Senate came up with a “fiscal cliff” to replace the debt-ceiling as leverage. Even though legislative patrons of various parts of the federal budget claimed that the across-the-board cuts, or sequestration, would devastate particular departments or programs, “going over” the “fiscal cliff” would be preferable to even risking a U.S. default. In other words, moving to a less catastrophic threat represents a ray of sanity in an otherwise insane escalation of systemic risk.
To be sure, the media had made sequestration sound as though the U.S. Government would be paralyzed and the sky would fall. The economic fear and uncertainty unleashed by the hyperbolic rhetoric were perhaps more harmful than the actual “cuts” would be. The across-the-board “cuts” scheduled to go into effect on March 1, 2013, absent a deficit-reduction law in the meantime, total $85 billion. This is a mere sliver of a budget of more than $3.5 trillion. Indeed, the “cuts” total less than the last annual increase in the budget—far from likely to send the U.S. economy into recession.
According to the Fiscal Year 2012 Mid-Session Review, the enacted 2011 budget called for $3.63 trillion in outlays. The enacted 2012 budget called for $3.796 trillion in outlays, according to the Office of Management and Budget. The annual increase, $166 billion, is almost twice the $85 billion at issue in the threatened sequester for March through December 2013. Put another way, the sequester’s cuts for 2013, beginning on March 1st, equal about half of the increase in the budget from FY2011 to FY2012.
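For readers who want to check the arithmetic, here is a quick back-of-the-envelope sketch in Python; the outlay figures are the rounded ones cited above, not official to-the-dollar amounts.

```python
# Back-of-the-envelope check of the sequester figures cited above.
# Outlay figures are the rounded numbers from the text, in billions of dollars.
outlays_fy2011 = 3630   # enacted FY2011 outlays, per the Mid-Session Review
outlays_fy2012 = 3796   # enacted FY2012 outlays, per the OMB
sequester_2013 = 85     # across-the-board cuts scheduled for March-December 2013

annual_increase = outlays_fy2012 - outlays_fy2011
print(annual_increase)                      # 166 (billion)
print(annual_increase / sequester_2013)     # ~1.95, i.e., "almost twice"
print(sequester_2013 / annual_increase)     # ~0.51, i.e., "about half"
```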
The reckless nature of the sequestration does not lie in taking back half of the last annual increase. In fact, total outlays would still increase steadily throughout the ten-year period subject to sequestration.
Rather, the craziness pertains to two points. First, although the $85 billion is about half of the prior year’s increase in the budget, the sequestered amount would not apply, according to the Congressional Budget Office, to about 70% of mandatory spending. That mandatory spending, such as Social Security, Medicare, and Medicaid, made up about two-thirds of the budget at the time. It follows that sequestration would not touch roughly 47% of the federal budget. For the remaining 53 percent, the reduction would go deeper than the increases in those categories, or “buckets,” from the prior year. In other words, about half of the budget would bear the full weight of the sequester, so the “cuts” there really would be cuts (i.e., going beyond removing the prior year’s increase). Reports of suspended public services, such as air traffic control at some 100 smaller airports, could thus be expected even though the total sequester amount is only about half of the total budget increase from the prior year.
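The arithmetic behind the 47/53 split, and behind why the exposed half would feel real cuts, can be sketched as follows; this is a rough illustration using only the approximate shares cited above.

```python
# Rough illustration of why the sequester bites despite its small total size.
# All figures are the approximate ones cited above, in billions of dollars.
total_budget = 3500          # "more than $3.5 trillion," rounded down
mandatory_share = 2 / 3      # mandatory spending's share of the budget
mandatory_exempt = 0.70      # share of mandatory spending exempt from sequester
sequester = 85               # total scheduled cuts for 2013

exempt_share = mandatory_share * mandatory_exempt
print(round(exempt_share, 2))               # ~0.47: 47% of the budget untouched

exposed = (1 - exempt_share) * total_budget
print(round(exposed))                       # ~1867: the half bearing the full $85B
print(round(sequester / exposed * 100, 1))  # ~4.6% cut concentrated on that half
```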
It would be like adding an additional load to an already-loaded caravan of camels crossing a desert. The caravan could easily absorb the addition, except that the decision is made to put the extra weight onto only about half of the camels. From the strain on those camels, an observer might easily conclude that the additional load is too much for the caravan itself. Any question of adding still more would be dismissed out of hand even though the further addition is feasible and would make the caravan more profitable.
Second, each “budgetary bucket” in the 53% of the federal budget subject to the sequestration would face the same “automatic” percentage reduction, regardless of how vital the particular bucket happens to be. A department could not shift its “cuts” from payroll, for example, to conferences in order to avoid layoffs. Put another way, all of the buckets in a given department would have to be treated the same way in the sequestration (i.e., automatic, across-the-board). As a result, even just $85 billion out of $3.5 trillion could result in significant layoffs.
From the standpoint of achieving fiscal balance, it could be argued that even more should be cut, or that some combination of additional revenue and “cuts” should go beyond a total that merely removes about half of the prior year’s increase in the budget. However, this macro-level point would doubtless pale beside the real cuts to the budget buckets subject to sequestration. The sequestration’s design gives rise to a perception of severity that does not hold at the macro level, and this perception can arrest any movement to bring spending and revenue further into line. Put another way, the way the sequestration approaches the federal budget makes it more difficult to muster enough political will to “finish the job” by ending structural deficits rather than merely narrowing them.
In conclusion, using the debt-ceiling and sequestration as leverage are not comparable from the standpoint of actual (rather than media-hyped) harm to the United States. The harm from sequestration applies only to certain “buckets,” the overall “hit” being merely a taking back of some of the increase from the prior year’s budget. In contrast, failing to raise the debt-ceiling enough for the Treasury to avoid default gives rise to the systemic harm of a governmental default. Perceptions notwithstanding, the particular harms from the sequestration are qualitatively and quantitatively different. Accordingly, the move from the debt-ceiling to sequestration as political leverage represents a bright spot in what otherwise looks like democracy being utterly incapable of tackling fundamental problems facing a republic. To be sure, the obsessive fixation on “revenue vs. cuts” contributes to the limited perspective that keeps more fundamental solutions from even entering public deliberation and discourse. Generally speaking, we the people are holding ourselves back even from being aware of more far-reaching proposals because of the distortions in the media’s “reporting” (or opining) as well as in the design of the sequestration itself. The fundamental question is whether such “holding back” is intrinsic to self-governance of and by the People.

Source:

“Cliffhanger,” Frontline, PBS, February 11, 2013.

Neil Cavuto, “Sequestration Really the End of the World?” Fox News, February 20, 2013.

A Weak Economy as a Competitive Advantage to the Largest Corporations

Size matters, at least in the business world. Richard Fuld, the last CEO and chairman of Lehman Brothers, overextended "his" bank with risky real-estate holdings and financial derivatives in part so that Lehman Brothers would be as big as Goldman Sachs.



Empire-building (and ego) aside, the largest corporations can indeed perform differently than smaller firms in an economy. In April 2013, it was clear that the biggest companies were outpacing smaller ones. Analysts estimated that profits for the 100 largest companies in the Standard & Poor’s 500 stock index would rise 6.6% in the second quarter, while earnings for the bottom 100 were expected to fall by 1.6 percent. Of all the profits earned by the companies in the S&P 500, 22% would come from the 10 largest companies, giving them relatively more wherewithal with which to gain still more market share. Put another way, beyond a certain point, organizational size can protect or buffer a company in the midst of a languid economy. It is not only the market mechanism that accounts for this phenomenon.
One benefit enjoyed by the big corporations is being able to profit from other markets around the world and thus make up for a slow market at home. Smaller firms, with little or no global access, are more constrained in being limited to the conditions of the domestic economy. Additionally, large companies enjoy easier access to credit. A bad economy need not keep the biggies from investing in expanding operations still more, hence building on their existing size-advantage. There appears to be a certain threshold in organizational size beyond which a company's operations can act as a buffer mitigating the negative effects of a slowing economy. It may even be that the market mechanism itself rewards size, and in so doing facilitates the shift from competition to oligopoly or even monopoly in a given market. This does not necessarily result in greater efficiency in business or benefits for the consumer.
Yet once an organization reaches a certain size, diseconomies of scale kick in. Further increases in size bring disproportionate increases in costs, which counter the earlier economies of scale. As James D. Thompson argues in Organizations in Action (1967), as organizational size increases, the cost of integrating the various divisions and departments increases more than proportionately. In fact, coordination and integration can become virtually impossible once an organization has reached a certain size. For example, banks like JPMorgan and Bank of America may be so large that they cannot be effectively managed, let alone managed cost-efficiently.
In addition to the impact of the market mechanism and business practice, federal legislation can benefit the largest corporations disproportionately or even exclusively, thereby widening the gap between the "haves" and "have-nots." For example, federal budget-cutting tends to hurt small business more than large corporations. Also, large corporations like G.E. can afford to lure tax experts away from the IRS in order to minimize their corporate income-tax liability. In fact, G.E. got away with paying zero tax for 2010, despite having earned billions of dollars in the U.S. that year.
Furthermore, large corporations have considerably more lobbying power than small companies do. With some strategically placed political campaign contributions, a large company can get its particular situation explicitly exempted in proposed tax legislation. In some cases, the companies or their lobbying groups have even handed the tax-writing Congressional staff the legislative language--saving the staff some work.
A large corporation can even create a sustainable competitive advantage for itself by getting Congress to increase the tax burden on smaller firms! The same logic applies to the formulation of regulations: a large company with exclusive access to domestic and even international market data can "give" that data to regulators, who depend on it in crafting regulations that will be effective in the market. Of course, the company can use its informational asset as leverage with which to sway regulators to soften the regulations in a way that benefits the company over its competitors. Even the prospect of future, well-compensated employment can influence a regulator to see to it that the future employer will not face regulations that are too costly. Otherwise, the eventual salary of the regulator-turned-regulatee could be lower amid the lower profits from costly regulations. This use of assets to sway regulators to go easy on the company (but not its competitors!) is known as the strategic use of regulation. I submit that organizational size facilitates such use because the assets that can be used as leverage are more valuable to regulators.
The power of large corporations over public policy and regulations is not the market mechanism at work; rather, it is a manifestation of plutocracy at the expense of representative democracy and the public interest. In a plutocracy, the good of the part trumps the good of the whole. This is sub-optimal for the whole because the interest of a part is not necessarily in line with the interest of the whole. Accordingly, public policy can be justified in countering the “artificial” advantages of organizational size in the political arena. Furthermore, even the size bias of the market mechanism and business itself could be reduced or even countered as per the public interest in more competitive markets in place of oligopolies and monopolies. This would take thinking systemically from the perspective of the public interest.

Source:


Nelson Schwartz, “As Wall St. Soars in Tough Era, Company Size Is a Big Factor,” The New York Times, April 15, 2013.

Tuesday, October 23, 2018

Canada Takes On the United States: A Case of Two Empires?

Two centuries after the War of 1812, the Canadian government sought to commemorate the “fact” that Canada had thwarted an invasion by troops of the American republics to the south. “Two hundred years ago, the United States invaded our territory,” a narrator says over dark images and ominous music in the government’s ad. “But we defended our land; we stood side by side and won the fight for Canada.” However, the New York Times points out that “because Canada did not become a nation until 1867, the War of 1812 was actually a battle between the young United States and Britain.” The fight was not for Canada, because the British troops were fighting for the British empire rather than for the colonies in what is now Canada.

The British are coming! A British hero in "Upper Canada." Source: rpsc.org

The real question is why the young American empire sought to take on the British empire--an empire within challenging the seat of the larger empire (as if an empire, the United Colonies, being inside another empire makes sense and is durable).
The other correction that comes with shifting the question to why a young empire would challenge an older and larger one involves the distinction between a colony, state, or host kingdom on the one hand and an empire thereof on the other. In the case of the American colonies, a very large one (e.g., Virginia), several of them in an informal group (e.g., New England), and even the United Colonies of North America as a whole were referred to as empires. By the time of American independence, the term empire was generally applied on both sides of the Atlantic both to the U.S. (and, hitherto, to the United Colonies) and to the British empire. In contrast, the few colonies north of that American “empire within an empire” were not viewed as an empire, but rather as colonies of the British empire.
In the context of the meaning of empire as the political unit just above many kingdom-level polities (including colonies as such polities) of sufficient scale, a few colonies must surely fall short. Even in the twenty-first century, the usable land and population of Canada (34 million in 2011, only a few million more than California’s population) are equivalent to those of one of the large states in the U.S. To be sure, the cultural differences between Quebec and Newfoundland, for instance, are of such magnitude as to rival those from province to province in an empire, but the scale and number of Canadian provinces with markedly distinct cultures are not sufficient for Canada itself to be considered on the empire level alongside the U.S., E.U., China, India, and Russia (at least not until global warming renders much of northern Canada habitable such that the population and number of states in Canada increase dramatically).
Accordingly, the U.S.'s Articles of Confederation allowed for Canada to enter the Union as a state. To be sure, the ten Canadian provinces (and three territories) could join the U.S. as a few medium-sized member-states rather than altogether as one big state like California. Either way, it would not be a case of two empires uniting. No European would say that Turkey joining the E.U. would be a merger of two empires. “That would be like Mexico becoming a state in the U.S.,” a European official once put the matter to me. That is to say, the “United States of” Mexico would translate into one big state (or a few smaller ones) rather than into another United States of America.
There is thus a category mistake in the following statement by James Moore, who as minister of Canadian heritage was in charge of the advertising (or propaganda) campaign on the War of 1812. “Canada was invaded, the invasion was repelled and we endured, but we endured in partnership with the United States,” he said. It was the British Empire that was invaded, and the accession of Canada would not be a matter of partnership. To take a few maple leaves and consider them commensurate with a branch is to make a category mistake that cannot but lead to erroneous conclusions.

Source:
Ian Austen, “Canada Puts Spotlight on War of 1812, With U.S. as Villain,” The New York Times, October 8, 2012.

See also, British Colonies Forge an American Empire, by Skip Worden. 

Friday, October 12, 2018

On a Blatant Conflict of Interest in Georgia


A coalition of advocacy groups filed a lawsuit on October 11, 2018 to “block Georgia from enforcing a practice critics say endangers the votes of more than 50,000 people in [the upcoming election] and potentially larger numbers heading into the 2020 presidential election cycle.”[1] Brian Kemp was at the time Georgia’s Secretary of State, which means he had considerable discretion concerning how the election would be run. The conflict of interest lies in the fact that he was running for governor—interestingly against Stacey Abrams, a candidate who had been a voting-rights lawyer! I submit that such a conflict of interest should never have been permitted.
Rather than focus on the controversial “exact match” issue at the center of the suit, I want to call attention to the fact that “the Abrams campaign called for Kemp to resign as the state’s top elections official in order for Georgia voters to ‘have confidence that their Secretary of State [will] competently and impartially oversee the election.’”[2] For a candidate to also be the top elections official is such a blatant conflict of interest that we can legitimately ask in retrospect why the travesty was allowed to exist in the first place. Shouldn’t candidates be barred from overseeing their own elections? Why, moreover, didn’t Georgians scoff at the conflict of interest and demand that it be dismantled immediately after Kemp declared his candidacy for governor?


[2] Ibid.

Thursday, October 11, 2018

Congressional Cuts to Food Stamps: Violating a Human Right?

The natural right to food unconditionally in society is based, I submit, on the assumption that it is because a person without food is in society that he or she is without food. Were the person in an agrarian economy in which people live off the land, having enough food to eat would not be such a formidable problem. Rousseau makes this point in his Discourse on Inequality.[1] Hence, Mandeville's claim that food was distributed equally among city dwellers because farmers sold their surplus crops to buy frivolous vanities can be viewed as highly optimistic, and so too, along with that account, can Adam Smith's claim that competitive markets satisfy the food needs of specialized factory-laborers. Hence the need for governments to supply food to the most vulnerable, whose incomes and other expenses, such as rent, keep them from being able to participate in (competitive?) food markets.


During the debate in the U.S. House of Representatives in June 2013 on a proposed $20.5 billion in cuts over 10 years to the Supplemental Nutrition Assistance Program (SNAP), otherwise known as the food-stamps program, proponents of the cuts denied that the cuts would make it more difficult for the poor to feed themselves. Rep. Rick Crawford claimed that the cuts would be “eliminating abuse.”[1] For example, some drug addicts sell their “food stamps” for something like half their value and use the cash to buy drugs, managing to get their food at pantries and soup kitchens. While such fraud exists, the proposed cuts would have hit bone. According to the Center on Budget and Policy Priorities, nearly 2 million people would have lost SNAP eligibility had the cuts become law.[2] After the debate, “Tea Party” Republicans wanting even deeper cuts combined with Democrats opposed to any cuts to defeat the proposal. Three months later, the U.S. House voted 217 to 210 to cut food stamps by $40 billion. Obama had already promised a veto, and the narrow tally fell far short of the two-thirds needed to override one. Even so, that no vote had been taken to suspend or end foreign aid to Egypt on account of the military coup, or to cut corporate welfare, is telling about priorities. Even as some House supporters of the bill insisted that the supposedly innocuous decrease in federal funding merely reflected increased enforcement of existing income limits, other House supporters admitted that the cuts were oriented to getting as many able-bodied (i.e., non-disabled) recipients as possible to get a job. "If you're a healthy adult and don't have someone relying on you to care for them, you ought to earn the benefits you receive," said Rep. Tim Huelskamp (R-Kan.). "Look for work. Start job training to improve your skills or do community service. But you can no longer sit on your couch or ride a surfboard like Jason in California and expect the federal taxpayer to feed you."[3] That is to say, rather than being a right, sustenance ought to be contingent on work. To the extent that the bill reflected this aim, more was involved in the cuts than merely strengthening enforcement of existing caps. In fact, the proposed decrease in funding could even take a pound of flesh out of the human right to sustenance in a society of interdependence.

I suspect that part of the argument on behalf of earning as a prerequisite reflects a failure to realize that the increased number of food-stamp recipients since 2007 was in large measure due to the post-financial-crisis economic downturn. In 2012, for example, the SNAP program spent around $80 billion on about 47 million Americans—one in seven.[4] According to the Congressional Budget Office, the increased cost and usage of the program over the previous few years were due to the recession following the financial crisis of 2008 and the subsequent nearly jobless recovery.[5] Nevertheless, the ballooning cost made the program politically vulnerable to being “downsized.” Hence the debate on the U.S. House floor in June 2013 and the claim on the Hill that too many Americans had become dependent on the federal government for food. Meanwhile, people on food stamps were wondering how they were supposed to get off the aid when “there are no jobs.”[6]

A similar catch-22, or double-bind, would also apply to the proposal by Rep. Steve Southerland “that would allow—but not require—individual states to test work requirements.”[7] The 1996 welfare law had included work requirements for food-stamp recipients, though most states were later granted waivers by the Obama administration. Getting recipients to attend mandatory weekly “check-in” meetings and fill out weekly job-search forms, let alone actually find a job, turned out to be a lesson in futility for state employees. Members of Congress and the Clinton administration had put the front-line employees at the local level in the impossible position of fitting a uniform federal requirement to the actual conditions of the recipients. As for Southerland’s proposal in 2013, while it would accommodate the different conditions of the states and respect their portion of sovereignty, a work requirement would not fit the children, elderly, and disabled, who made up a significant share of the recipients. Again, it would seem that members of Congress were out of touch, with ordinary people potentially at risk of having to pay the price. Rather than expecting reason to unravel the “earnings/no jobs” double-bind, we need to look at the passions whose role is hinted at by the existence of the logical contradiction itself.

I contend that the earnings-rationale is in part exaggerated anger at real abuses. That is, the work ethic here is in part a front for an instinct to retaliate. Plato would point out that a person talking reason to his or her own undisciplined passion is necessary to render the psyche just (i.e., passions and courage ruled by reason). Moreover, a polis (i.e., society) is just if and only if it is ruled by reason rather than by passions such as resentment. As is often the case with vengeance, collateral damage unforeseen by the hypertrophic passion would result. The vote had the potential of triggering a wake-up call of sorts: what happens in Congress can really hit home on Main Street. Sadly, the most vulnerable can indeed fall through the cracks, with the resentment rejoicing as the human right takes a hit.

Even reducing the funding of the SNAP program by a certain percentage can set in motion consequences unknown to members of Congress. For example, well into a month in which the state had halved recipients’ food benefits, I went to a food pantry. The place was inundated with people who had run out of food funds unexpectedly early. SNAP recipients who had never been to the pantry had to wait two hours just to be registered, after which they were told to go to the end of the “regular” line. The pantry ran out of food, rationing portions to most of the first-timers and turning away still others. Recipients I spoke with scoffed at the notion that they were enjoying “being dependent,” or indeed had much choice in the matter, given the lack of jobs. As for the pantry’s volunteers, they admitted that their procedure for the first-timers was unfair; however, this did not keep them from using the occasion to spread their Christian beliefs to the frustrated first-timers standing in their second line. Were Congress to reduce funding to the states for the SNAP program, it would not take much for the situation on the ground to get out of hand. From my observations, food pantries should not be relied on to take up the slack.

Fundamentally, because food is a daily requirement for human beings, I contend that a daily supply of food is a human right. To make fulfilling that need contingent at all does not match the lack of contingency in the daily need. Subjecting it to the politics in Congress or a work requirement essentially holds the SNAP recipients hostage. Even just referring to food as nutrition is problematic, as the latter is not strictly speaking as much of a need as food itself. Eating more nutritious food is a worthy goal, whereas eating food is a daily requirement. Distinguishing, or bracketing, those things that are necessary for daily sustenance from all other budget items can thus be justified on the basis of human physiology—and thus human rights.

In dealing with something as necessary and individual as food consumption, small changes in a federal law can have huge, unexpected consequences as front-line state employees translate the changes into effects on particular lives. For this reason, Rep. Ryan’s proposal to move the SNAP program to the states in a block grant makes sense.[8] Besides state legislators being closer to local contexts, a fixed block grant is more in line with the dual-sovereignty feature of modern federalism than Congressional programs are. To be sure, the state governments would then bear sole responsibility for seeing to it that the most vulnerable are not inadvertently blown over by the violent political winds accompanying even minor statewide changes to the programs. As a rule of thumb, representatives in Congress could do much worse than treat food as unconditional in terms of human consumption. Hence, if a person cannot secure enough food on his or her own, the role of government would be to make food-sustenance as close to unconditional in practice as possible.


1. Ned Resnikoff, “House Debates $20.5 Billion Cuts to Food Stamps,” MSNBC, June 18, 2013.
2. Dottie Rosenbaum and Stacy Dean, “House Agricultural Committee Farm Bill Would Cut Nearly 2 Million People Off SNAP,” The Center on Budget and Policy Priorities, May 16, 2013. “By eliminating the categorical eligibility state option, which over 40 states have adopted, the bill would cut nearly 2 million low-income people off SNAP.”
3. Arthur Delaney and Michael McAuliff, "House Votes to Cut Food Stamps by $40 Billion," The Huffington Post, September 19, 2013.
4. Associated Press, “House GOP Considers Food Stamp Work Requirements, Cutting Spending for Feeding Program,” The Washington Post, July 24, 2013.
5. Dottie Rosenbaum and Stacy Dean, “House Agricultural Committee Farm Bill Would Cut Nearly 2 Million People Off SNAP,” The Center on Budget and Policy Priorities, May 16, 2013. “By eliminating the categorical eligibility state option, which over 40 states have adopted, the bill would cut nearly 2 million low-income people off SNAP.”
6.  I heard this complaint from several people when I visited a food pantry run by a non-profit organization.
7. Associated Press, “House GOP Considers Food Stamp Work Requirements, Cutting Spending for Feeding Program,” The Washington Post, July 24, 2013.
8. Ibid.

Food as a Human Right: A Basis in Rousseau

The natural right to food unconditionally in society is based, I contend, on the assumption that it is because a person without food is in society that he or she is going without. In other words, were he or she in the state of nature, acquiring enough food would not be a problem. Rousseau makes this point in his Discourse on Inequality.[1]

Jean-Jacques Rousseau (1712-1778)  Source: Wikimedia Commons.

Basing a human right to food on Rousseau’s philosophy risks the criticism that rights cannot possibly exist in the philosopher’s beloved state of nature, as rights depend on there being a government. However, Rousseau adopts Locke’s notion that one’s labor added to land makes it one’s property as a matter of right even without the institution of government. For my purpose here, it is enough to claim that food-sustenance is a human right in political society. It is precisely on account of how that society differs from the state of nature that the human right is necessary only in society.

“As long as men remained satisfied with their rustic cabins; as long as they confined themselves to the use of clothes made of the skins of other animals, . . . ; in a word, as long as they undertook such works only as a single person could finish, and stuck to such arts as did not require the joint endeavours of several hands, they lived free, healthy, honest and happy, as much as their nature would admit, and continued to enjoy with each other all the pleasures of an independent intercourse.” In other words, with people being limited in production or collection to their own needs, there is likely to be enough for all. From “the moment one man began to stand in need of another's assistance; from the moment it appeared an advantage for one man to possess the quantity of provisions requisite for two, all equality vanished.” From natural differences between people even in the state of nature, as soon as some people of superior strength and industriousness desire food enough for many, perhaps to sell or give away the surplus for money or power, more scarcity than is due to nature is apt to set in for other people not so constituted.

With more labor necessary to produce or collect a surplus beyond one person’s own needs, “boundless forests became smiling fields, which it was found necessary to water with human sweat, and in which slavery and misery were soon seen to sprout out and grow with the fruits of the earth.” With artifice being superimposed on nature’s provisions that otherwise are open to all, the output is skewed in distribution toward some.

Furthermore, with the economic interdependence that comes with society and an economy of different sectors and specialization of labor, the connection that everyone has to nature’s fruits is broken for many and fewer hands remain to work the land even though everyone must eat. “The more hands were employed in manufactures, the fewer hands were left to provide subsistence for all, though the number of mouths to be supplied with food continued the same.” The natural right to food as unconditional kicks in, and is due to, the fact that the must eat continues, being based on nature, even as instituting an economy puts the supply of food at risk for some. Hence, the right is natural because we must eat on a regular basis, even if people establish and superimpose an economy on nature’s fruits, distorting their relatively equal distribution. The right is a right because it is only necessary once society, including an economy and government, has taken people out of the state of nature.

On Securing the Human Right, See: "Should Charities Replace Government?"

1. Jean-Jacques Rousseau, Discourse on the Origins of Inequality, in Harvard Classics, Charles W. Eliot, ed., Vol. 34 (Cambridge: Harvard University Press, 1910).

Income Inequality: Natural or Artificial?

In the United States, the disposable income of families in the middle of the income distribution shrank by 4 percent between 2000 and 2010, according to the OECD.[1] Over roughly the same period, the income of the top 1 percent increased by 11 percent. In 2012, the average CEO of one of the 350 largest U.S. companies made about $14.07 million, while the average pay for a non-supervisory worker was $51,200.[2] In other words, the average CEO made 273 times as much as the average worker. In 1965, CEOs were paid just 20 times more; by 2000, the figure had peaked at 383 times. The ratio fell in the wake of the dot-com bubble and then the financial crisis and its recession, but in 2010 it began to rebound. According to an OECD report, rising incomes of the top 1 percent accounted for the rising income inequality in Europe in 2012, though that level of inequality was “notably less” than the one in the U.S.[3] The increasing economic gap between the very rich and everyone else was not limited to the E.U. and the U.S., moreover; a rather pronounced global phenomenon of increasing economic inequality was clearly in the works by 2013.
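The pay-ratio arithmetic can be checked quickly; note that dividing the rounded figures above yields roughly 275, slightly above the 273-to-1 ratio the study reported, a difference that presumably reflects rounding in the cited averages.

```python
# The CEO-to-worker pay ratio implied by the figures cited above.
# Inputs are rounded, so the result differs slightly from the study's 273.
ceo_pay = 14_070_000    # average CEO pay, 350 largest U.S. companies, 2012
worker_pay = 51_200     # average non-supervisory worker pay, 2012
print(round(ceo_pay / worker_pay))   # ~275; compare 20 in 1965 and 383 in 2000
```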



Accordingly, much study has gone into discovering the causes and making prognoses both for capitalism and democracy, for extreme economic inequality puts “one person, one vote” at risk of becoming irrelevant at best. One question is particularly enticing—namely, can we distinguish the artificial, or “manmade,” sources of economic inequality from those innate in human nature? Natural differences include those from genetics, such as body type, beauty, and intelligence. Although unfair because no one deserves to be naturally prone to weight-gain, blindness, or a learning disability, no one is culpable in nature’s lot. No one is to be congratulated either, for a person is not born naturally beautiful or intelligent because someone else made it so. This is not to say that artifacts of society, as well as their designers and protectors, cannot or should not be praised or found blameworthy in how they positively or negatively impact whatever nature has deigned to give or withhold. It is the artificial type of inequalities, which exist only once a society has been formed, that can be subject to dispute, both morally and in terms of public policy.
A society's macro economic and political systems, as well as the society itself, can be designed to accentuate or diminish the level of inequality artificially; it is also true that a design can be neutral, having no impact one way or the other on natural inequalities. How institutions, such as corporations, schools, and hospitals, are designed and run can also give rise to artificial inequalities. In his A Theory of Justice, John Rawls argues that to be fair, the design of a macro system or even an institution should benefit the least well off the most. Under this rubric, artificial inequalities would tend to diminish existing inequalities. Unfortunately, a society’s existing power dynamics may work against such a trajectory, preferring ever-increasing inequality because it is in the financial interests of the most powerful. Is it inevitable, one might ask, that as the human race continues to live in societies, the very rich will get richer and richer while “those below” stagnate or get poorer? Jean-Jacques Rousseau (1712-1778) distinguishes natural and artificial (or what he calls “moral”) inequalities with particular acuity and insight. He answers yes, but only until the moral inequalities reach a certain point. Even if his “state of nature” is impractical, applying his theory helps us make more sense of the growing economic inequalities globally, and particularly in the U.S.


1. Eduardo Porter, “Inequality in America: The Data Is Sobering,” The New York Times, July 30, 2013.
2. Mark Gongloff, “CEOs Paid 273 Times More Than Workers in 2012: Study,” The Huffington Post, June 26, 2013.
3. Kaja B. Fredricksen, “Income Inequality in the European Union,” OECD, Economics Department Working Paper No. 952, 2012.

Monday, October 8, 2018

Bank of America Exploited State Tax-Rate Differentials in the E.U.: Systemic Risk and Federalism Blindsided

In 2012, Britain reduced its corporate income-tax rate from 26% to 24 percent. With the comparable rate at 29% in Germany and 33 percent in France, Britain stood to reap the benefits of a significantly lower tax rate within the European Union. That the 24% rate would be pared down to 21% in 2014 suggests that, everything else being equal, the state of Britain was set to reap a sustainable competitive advantage over other E.U. states with respect to attracting business, and thus jobs. The move was not without risks, however.
The move by the British could have triggered reduced rates in other states, resulting in a “race to the bottom” wherein corporations would get away with paying less tax and governments would have to cut back on basic services for lack of revenue. In early 2013, for example, Bank of America moved billions of pounds of complex financial transactions from Dublin to London in order to apply the loss carry-forwards on the underlying investments in the state with the higher tax rate. At the time, the corporate tax rate in Ireland was only 12.5%, so the loss deductions would benefit the bank more if applied against profit in Britain. As a result, Britain stood to collect less in tax from the bank and the bank stood to pay less in tax--all due to the rate differential between the states of Ireland and Britain.
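A stylized sketch shows why the carried-forward losses were worth more in London than in Dublin; the £1 billion loss figure is hypothetical, chosen only for illustration, while the rates are the ones cited above.

```python
# Stylized illustration of the tax-rate arbitrage described above.
# The loss amount is hypothetical; only the rates come from the text.
loss_carry_forward = 1_000_000_000   # hypothetical losses, in pounds
rate_ireland = 0.125                 # Irish corporate tax rate at the time
rate_britain = 0.24                  # British corporate tax rate in 2013

tax_shield_dublin = loss_carry_forward * rate_ireland   # 125,000,000
tax_shield_london = loss_carry_forward * rate_britain   # 240,000,000
print(tax_shield_london - tax_shield_dublin)            # 115,000,000 extra benefit
```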
In short, a bank that had made horrible acquisitions in 2008 was able to “play the rates” to get some kind of “silver-lining” benefit at the expense of the E.U.’s state governments. Because of the disproportionate fiscal role of those governments in the E.U., business could effectively play them off against each other. Were there a federal corporate income tax, the benefit of shifting carry-forward losses from Dublin to London would have been mitigated because more of the tax bill in Europe would have been unaffected. Therefore, in addition to forestalling more of a fiscal balance within the E.U. to the benefit of the euro, the reliance on state taxation in the E.U. could be exploited by corporations such that less tax revenue would be collected.
In terms of business, Bank of America’s taking advantage of differential tax rates illustrates a sort of “operating at the margins.” Did this redeem the bank? Lest it be forgotten, the bank had screwed up rather royally in acquiring Merrill Lynch and Countrywide in 2008, given all the real-estate debt and financial derivatives held by those institutions. That is to say, any cleverness in minimizing the tax bill within the E.U. could not possibly make up for the colossal blunders made by Ken Lewis and the board in 2008. The bank was even then too big to fail, meaning that its collapse would have posed systemic risk to the financial system (and economy). Any cleverness at working tax differentials should therefore not distract us from the big picture: that bank, like any large bank, was too big to fail and yet fully capable of huge blunders that could compromise its very existence. In other words, expertise in reducing the tax bill in the E.U. does not make up for greater ineptitude, because the low-probability but very high systemic risk is simply too dangerous; the Great Depression of the 1930s illustrates what could happen.
As for the E.U., it was (and is) fiscally vulnerable because of its reliance on the states to collect and spend tax revenue. The E.U. Government, like the early U.S. Government, has evinced the weakness of dependency on its states. The reluctance of state officials to cede more governmental sovereignty to the Union has been at least part of the problem, with banks like Bank of America able to exploit differential state tax-rates as a result. The welfare of the whole--the Union--has suffered accordingly.

Sources:


Jill Treanor, “Bank of America Makes Derivatives Switch from Dublin to London,” The Guardian, January 28, 2013.
Dan Milmo, “Corporation Tax Rate Cut to 21% in Autumn Statement,” The Guardian, December 5, 2012.

See also: Skip Worden, Essays on the E.U. Political Economy and Essays on Two Federal Empires.

Were Raises at Bailed-Out U.S. Companies Approved by Treasury?

In early 2013, the Special Inspector General for the Troubled Asset Relief Program reported that the U.S. Treasury Department had disregarded its own guidelines in order to allow large pay increases for executives at three major companies that had received bailouts during the financial crisis. In particular, eighteen raises for executives at American International Group (AIG), General Motors, and Ally Financial were approved. Fourteen were for $100,000 or more; a raise for the CEO of a division of AIG was $1 million. Treasury approved these raises even though they exceeded the pay limits set in its own guidelines.
Was Treasury Secretary Tim Geithner smirking because his friends were happy? Source: NYT
In assessing Treasury’s approval of the raises, one must weigh the argument that they were needed to retain the expertise required to restore the companies to financial health (and thus to pay back the bailouts) against the argument that bailouts should come with strings so that the funds are not used opportunistically. At the very least, executives associated with the companies’ failures should not be rewarded. But what about new hires brought in to restore the companies? If the restoration is successful, shouldn’t those managers be compensated? Even if the raises were not necessary to retain talent, managers who had not been part of the problem should be compensated for effective work. At the same time, it is proper and fitting that companies being bailed out be subject to strings, so that neither the companies nor their employees benefit inordinately.
That Treasury disregarded its own guidelines can be read as an indication that the officials were concerned that vital talent would be lost had the guidelines been followed. Yet the bailouts in the E.U. contained limits on executive compensation without any apparent hindrance to the viability of the banks. In other words, the argument that the raises were necessary to retain talent could have been a ruse. An alternative interpretation consistent with this scenario is that the business sector had too much influence over Treasury officials. In addition to lobbying influence and connections between Treasury officials and former colleagues on Wall Street, it is possible that pro-business officials had adopted the business line that government should not interfere with business—even with companies being bailed out.
Put another way, contrasting the lack (or disregarding) of strings at Treasury with the salience of strings in the E.U.’s bailouts may illustrate a cultural difference between Americans and Europeans with respect to pro-business ideology. Had executives at the three bailed-out companies enjoyed inordinate influence within Treasury, the conflict of interest for the government officials could have been enabled by a shared ideology: namely, that what is good for GM is good for America.

Source:
Marcy Gordon, “Treasury Disregarded Own Guidelines, Allowed Executive Raises At Bailed-Out GM, AIG,” The Huffington Post, January 28, 2013.

See also: Skip Worden, Essays on the Financial Crisis.

Egypt: A Missed Opportunity to Interiorize Protests

How a democratic system is designed can be as important as whether government officials have been elected or appointed. In constructing a democracy, it is not sufficient simply to hold elections. While the victors may have democratic legitimacy, the government itself may still not. Egypt amid the violent protests in early 2013 may be a case in point. Even though the sitting president, unlike a year earlier, had been democratically elected, it is too simplistic to say that the Egyptian government and constitution had democratic legitimacy.
In January 2013, after an Egyptian court sentenced 21 residents of Port Said to death for their roles in the 2012 stadium disaster there, the chief of the army said that the ongoing violence could bring about the collapse of Morsi’s government. The opposition demanded that the president establish a nationally unified government and rewrite controversial parts of the recently passed constitution. That the constitution had been pushed through by religionists amid an increasingly polarized citizenry left even the democratically elected government vulnerable. It was not enough that Morsi had been democratically elected.
Particularly in a highly polarized country, simply holding elections is not sufficient to usher in a sustainable democracy. If a partisan party holds virtually all of the power in the government of a highly polarized country, the opposition will have no recourse but to resort to protests and even violence. Put another way, democratic legitimacy requires more where a citizenry is polarized in the sense of operating under very different, and thus highly conflicting, assumptions and prescriptions. In such a context, a democratic system that hands virtually all the power over to one “side” is insufficiently democratic.
This is not to say that a “unity government” is the answer. Given the polarization, any unity would be illusory. More realistically, Morsi could have viewed the sheer intensity of the violence in the protests as an indication that the new democratic system was being monopolized by one party at the expense of others. Providing the opposition parties with their own democratically elected bases of power within the government would bring the external political strife inside, replacing violence on the streets with debate and negotiation between governmental institutions. The latter is not predicated on unity; even a resulting legislative compromise would not necessarily imply unity.

Can such intense violence be "interiorized" as debate and politics in a legislature? Government itself can be viewed as civic violence "redacted" and "refined." Source: thestar.com
Interiorizing the conflict on the streets by giving it some political power within the government could be accomplished through a bicameral legislature whose chambers have very different bases of membership, or through a qualified-majority voting mechanism in a single chamber. Power could also be separated by government branch, with one party controlling one branch and the opposition controlling at least part of another.
In the U.S., for example, a Democrat controlled the White House at the time, while the Republicans controlled the U.S. House. The opposition did not have to have a share of the power in the House because the minority there had another power base within the government. Were a government completely dominated by even democratically elected Republicans, as was the case in Wisconsin after the election of 2010, activist Democrats would head for the streets. That the legitimacy of such a government can quickly become suspect in spite of its democratic basis is illustrated by the fourteen Democratic senators from Wisconsin who literally fled to Illinois so that Wisconsin’s senate could not function with a quorum. Democracy involves the design of a government as much as whether crucial offices are elected rather than appointed. To be legitimate democratically, the government’s design should interiorize political strife by providing a power base to more than one party.
In short, it is not sufficient for the Egyptian president and even the parliament to be democratically elected if one party can dominate both simply by the numbers. Given the extent of polarization among the citizenry, such domination is doomed to failure, even if the dominant party is in no hurry to differentiate power-bases within the government. Particularly in cases such as Egypt, where the parties are “not on the same page,” the opposition must have some basis within the government to act as a check and thus balance out the excess otherwise possible in one-party rule. With a polarized citizenry, such excess quickly pushes the other side to extreme reactions on the street.
The task in constructing a viable government in Egypt would seem to be providing opposition groups with enough ownership within the government without thereby providing a veto over any legislative output. As of early 2013 at least, Egypt might have to go a couple of rounds before adopting a design that effectively interiorizes the violence. Otherwise, splitting the country into two—one secular and the other a theocracy—might be the only viable solution (other than federalism and the sort of democratic design discussed here). It is notoriously difficult to relocate people, however, so as to have truly distinct secular and religionist societies. Given the daylight between the two camps, partition, such as that which occurred between Pakistan and India in 1947, might simply reflect the fact that two distinct nations had already come to exist in Egypt. If so, it is the fossilized nature of a defined country that may be the underlying obstacle to Egypt catching up with itself.

Source:
“Egypt Political Factions Condemn Violence, Urge Dialogue,” Deutsche Welle, January 31, 2013.