Thursday, August 31, 2017

Free Speech in the EU: On the Judgment on John Galliano's Anti-Semitism

On March 1, 2011, Sidney Toledano, CEO of the French fashion house Christian Dior, announced that he was dismissing its chief designer, John Galliano, after the surfacing of a video that showed “his anti-Semitic outbursts at a Paris bar.” The word choice of outbursts by The New York Times is interesting, for the actual video shows him in a rather mellow, notably intoxicated, “well you know” mood. The article’s writer admits that the designer had used “a slurred voice.” Galliano was telling a Jewish couple that they should feel lucky that their ancestors were not killed by the Nazis, because so many did not survive. He said “people like you would be dead,” and “your mothers, your forefathers” could all have been “gassed.” Although I am applying a rational criterion to a drunk man, I wonder in what sense he meant “I love Hitler.” Considering that Galliano is gay and Hitler sent homosexuals to concentration camps, I suspect that Galliano was lying simply to hurt the couple in what was undoubtedly a back-and-forth in a verbal fight. Indeed, it takes two to tangle, and the rest of us might do well to recognize the difficulty of interpreting a snippet without having observed the entire exchange.

While hurtful and inappropriate even in the midst of a disagreement, Galliano's adversarial comments hardly constituted an outburst, as if he had lost control of himself and thrown his bar table against a wall. Why, one might ask, would a journalist at a major New York paper use a word that (deliberately?) overstates the case against the designer? Perhaps even in a free society, there is a tendency to gang up on an unpopular, even loathed, minority opinion in a way that distorts the story in order to give occasion for further fulminations. We don't know, for example, what the couple might have said to Mr. Galliano that sparked his vitriol. Lyes Meftahi, a 38-year-old Parisian who runs an audiovisual company, said that Mr. Galliano was certainly drunk, speaking slowly and slurring his words. So much for any outburst. Furthermore, the witness said that the designer was keeping to himself and was “provoked” by a woman who had called Mr. Galliano “ugly.” Mr. Galliano himself was threatened with violence at one stage during the altercation, according to Mr. Meftahi. It is difficult for the rest of us to know what happened based on an objectionable snippet.

Rather than defending the designer, whose comments I concur were highly inappropriate (note that I'm applying rationalism again to a drunk person), I want to contend that the rush to judgment against him had a certain amount of presumption attached. That is to say, we as human beings may tend to presume we are in a position to judge when in fact we are not. Taking ourselves as gods on earth, in effect, we tend to assume omniscience rather than limited creatureliness as our mantle. For a part to take itself as the whole is to truncate reality itself into a mere projection of the part. Lest we forget, we are all fallible, even when we judge with apparent certitude.

For example, that Mr. Galliano had “helped to energize Dior after he joined it in 1996 as creative director, increasing sales and making it a jewel of the LVMH Moët Hennessy Louis Vuitton luxury-goods empire” was wantonly or unintentionally tossed aside by Mr. Toledano in what bears all the signs of a rush to judgment. In its statement, Dior said it had ‘‘immediately suspended relations’’ with Mr. Galliano and ‘‘initiated dismissal procedures.’’ It cited the ‘‘particularly odious comments’’ contained in the video. It is as though the weight of history came slamming down on the star designer, suffocating him from even proffering a self-defense before the fall of the guillotine. In the face of this injustice, it might be quel dommage pour M. Galliano were it not for his own choice of weapon. He undoubtedly esteemed his own faculties too much in assuming he could handle being drunk. Again, human beings do not have as much pith as we tend to think.

To be sure, anti-Semitism and racism ought to be relegated to the ash heap following the twentieth century. For all its technological progress, that century was remarkably decadent and stagnant. In early 2011, the world dared to hope that popular protests sweeping the Middle East might have been ushering in a new progression of freedom in the establishment of republics in what had been autocracies for centuries. Would that region sport the tolerance that is necessary for a free society to truly be free? Can it look to Europe, where certain speech, even in a small group, can get one thrown in prison? According to The New York Times, "French law makes it a crime to incite racial hatred; the statute has been used in the past to punish anti-Semitic remarks." Yet to incite seems to connote a public broadcasting or speaking format, as in inciting the mob to storm the Bastille (or, as in 1792, the republic's prison filled with aristocrats and clergy--a massacre that Robespierre denounced as a travesty of the rights of man). Does a person incite hatred against a particular group simply by giving his opinion in a dispute with another person? The dubious applicability of the French law seems to hinge in this case on treating a private gathering, albeit in a public establishment, as a public (political) event.

Of course, in the United States, even the latter is protected by the First Amendment's guarantee of free speech, though even there hate-crime laws exist. In the European Union, where speech is punished on account of the Nazi experience, the society looks overly restrictive and unfree, at least from an American perspective. To be sure, the reverse has also been the case. In 1948, for example, the U.S. Government banned showings in the U.S. of the American documentary, Nuremberg: Its Lessons for Today, even as Germans were free (and encouraged) to see it in Germany. The American military did not want Americans seeing the Soviets as allies (and the Germans, whose help the American government was then seeking against the Russians, as enemies). It is precisely such a proclivity that the First Amendment was designed to thwart. The human species is insufficiently equipped to be able to curtail innate freedom effectively.

Source: http://www.nytimes.com/2011/03/02/fashion/02dior.html?pagewanted=1&sq=john galliano&st=cse&scp=2

Financial Sector Lobbyists Put the Republic at Risk: The Case of the U.S. Senate Bill on Financial Reform in 2010

Considering the gravity of the risk in Wall Street banks being too big to fail, the financial reform bill passed by the U.S. Senate in 2010 may have been influenced too much by financial interests. It can thus serve as a good case study of how a republic can be subject to too much influence from moneyed interests. It could be asked, moreover, whether there is an inevitable trajectory by which a polity goes from being a republic to becoming a plutocracy (rule by the wealthy).

Executives and political action committees from Wall Street banks, hedge funds, insurance companies and related financial sectors showered Congressional candidates with more than $1.7 billion in the last decade, with much of it going to members of the financial committees that oversee the industry’s operations. In the 2010 election cycle before the financial reform bill passed the Senate, members of the financial committees far outpaced those of other committees in fund-raising, holding 845 events. The 14 freshmen who served on the House Financial Services Committee raised 56 percent more in campaign contributions than other freshmen, and most freshmen on the panel, the analysis found, were in competitive re-election fights. In return, the financial sector has enjoyed virtually front-door access and what critics say is often favorable treatment from many lawmakers. But that relationship, advantageous to both sides for many years, was being tested in ways rarely seen, as the nation’s major financial firms sought to call in their political chits to stem regulatory changes they believed would hurt their business.

Even after the passage of the Senate’s bill, the financial industry was confident that a provision that would force banks to spin off their derivatives businesses would be stripped out, but in the final rush to pass the bill, that did not happen. The opposition came not just from the financial industry. The chairman of the Federal Reserve and other senior banking regulators opposed the provision, and top Obama administration officials said they would continue to push for it to be removed. Such officials could include Larry Summers, who along with Alan Greenspan and Robert Rubin pushed for derivatives to be left unregulated in the late 1990s while Summers and Rubin were in the Clinton Administration.

Not missing a beat, Wall Street lobbyists began an 11th-hour effort to remove it just as House and Senate conferees were preparing to meet to reconcile their two bills. Lobbyists said they were already considering the possible makeup of the conference panel in order to focus their office visits and potential fund-raising. Rep. Barney Frank, chairman of the House Financial Services Committee, signaled on May 25th that the prohibition on banks trading in derivatives using their own funds could be dropped. “I don’t see the need for a separate rule regarding derivatives because the restriction on banks engaging in proprietary activities would apply to derivatives as well as everything else,” Mr. Frank said, according to The Wall Street Journal (May 22, 2010, p. A6). However, if this were the case, why would Sen. Dodd, the White House, and the banks be so set against removing the derivatives language? Is redundancy really that big of a deal? Something is rotten here; I can smell it. With Sen. Dodd retiring, I wouldn’t be surprised if he made a deal with Wall Street concerning his financial future–it is odd how much he has mellowed on reform. Not surprisingly, the Chamber of Commerce had already spent more than $3 million to lobby against parts of the bill, including the derivatives provision, and as the Senate was passing its bill the Chamber was planning to keep fighting for a loosening of the regulatory restrictions — first in the House-Senate conference, then in the implementation phase after final passage of a bill, and “if all else fails,” in court. With all these avenues open, it is no wonder that the money of Wall Street banks moves with such ease in Washington.

There are so many points along the way of a bill becoming a law at which a provision with teeth can be targeted by well-funded parties with a vested interest against it that it would appear no change can happen in the U.S. that is not in an industry’s interest. Because wealthy firms have presumably done well under the status quo, it is not clear why they would have an interest in supporting systemic change even if systemic risk warrants it. In the case of financial reform, the financial houses have a rather obvious conflict of interest. Furthermore, the behavior of the big bankers in September of 2008 suggests that they do not view rescuing a financial system in crisis as their job. There is no reason to suppose that the legislation they support would be geared to repairing the system when it is in crisis. The real problem is apt to manifest on the crest of the next bubble, for the industry had already gotten much of what it wanted even with the derivatives language in the Senate bill.

According to The New York Times, “Despite the outcry from lobbyists and warnings from conservative Republicans that the legislation will choke economic growth, bankers and many analysts think that the bill approved by the Senate … will reduce Wall Street’s profits but leave its size and power largely intact. Industry officials are also hopeful that several of the most punitive provisions can be softened before it is signed into law.” This is dangerous because so many bankers, including those at Goldman Sachs, have been in denial as to how they should behave. According to The Wall Street Journal, Bank of America, Deutsche Bank, and Citigroup have continued their practice of “window-dressing”: temporarily shedding debt just before reporting their finances to the public. This suggests that “the banks are carrying more risk most of the time than their investors or customers can easily see.” As of 2010, this activity had actually increased since 2008, “when the financial crisis brought actions like these under greater scrutiny.” For the ten quarters ending in March 2010, the three banks lowered their net borrowings in the repo market by an average of 41% at the ends of the quarters (as compared with during them). This represents a significant misstatement, which the banks’ auditors should have highlighted. The Wall Street Journal reported in April 2010 that 18 large banks, as a group, had routinely reduced their short-term borrowings in this way.
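
The arithmetic behind that 41% figure can be made concrete. Below is a minimal sketch in Python using hypothetical daily figures (the Journal did not publish the underlying daily data in the passage quoted above): the quarter-end balance is compared against the average balance during the quarter.

```python
# A minimal sketch, with hypothetical figures, of the "window-dressing"
# arithmetic described above: compare a bank's quarter-end net repo
# borrowings with its average net borrowings during the quarter.

# Hypothetical daily net repo borrowings (in $ billions) over one quarter.
daily_borrowings = [102, 98, 110, 105, 99, 104, 101]
quarter_end_borrowings = 61  # hypothetical figure reported on the last day

intra_quarter_avg = sum(daily_borrowings) / len(daily_borrowings)
reduction = (intra_quarter_avg - quarter_end_borrowings) / intra_quarter_avg

# A value near 0.41 would correspond to the 41% average reduction that
# The Wall Street Journal reported for the three banks over ten quarters.
print(f"Quarter-end borrowings are {reduction:.0%} below the intra-quarter average.")
```

The point of the sketch is that investors who see only the quarter-end snapshot would systematically underestimate the leverage the bank carries during the rest of the quarter.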

So it appears that Wall Street went on in its old ways even after the crisis, and there was relief that financial reform would not rock the boat. According to The New York Times, “If you talk to anyone privately, there’s a sigh of relief,” said one veteran investment banker who insisted on anonymity because of the delicacy of the issue. “It’ll crimp the profit pool initially by 15 or 20 percent and increase oversight and compliance costs, but there’s no breakup of any institution or onerous new taxes.” In other words, incrementalism rather than systemic change. Washington had been bought–even the “real change” agent himself, Barack Obama. Mr. Obama was not unaware of the powerful friends he would need in 2012; he received just under a million dollars from Goldman Sachs for his 2008 campaign. How difficult it is to let go of power and say that one term will be enough, or even better; yet that is what seems to be necessary for one to fight against the entrenched culprits of the status quo. Andrew Jackson showed in 1832 that standing up to a large bank (in his case, the Second Bank of the U.S.) can actually be consistent with winning reelection. But absent such faith in the people to come through, saying to hell with reelection seems to be necessary for a president to be an authentic agent of real change. If anything called for such change, it was the financial crisis of 2008. Yet as the House and Senate compared notes on their respective bills, Wall Street was actually relieved. That really says something. The culprits have an effective veto on what safeguards will be put in place to keep the banks from risking the world economy again. The wolves have to accede to the design of the new chicken coop. To my fellow Americans, I say: this is our system.

Even though the financial crisis far outweighed any health-care crisis, the financial reform was far more incremental–though both bills fall within that rubric. “The health care bill is going to transform the structure of health care exponentially more than this legislation on financial regulation is going to change Wall Street,” said Roger C. Altman, the chairman of Evercore Partners and deputy Treasury secretary in the Clinton administration. “It’s not even close.” It could be that the greater transformation in the health-care legislation was in the interest of the health-insurance companies and hospitals, whereas incrementalism was in the interest of Wall Street.

Of course, it could be that regardless of the regulation, financial bubbles are bound to come and go, and the sheer scale of financial deals today requires large banks. Donald B. Marron, the former chief executive of Paine Webber, avers, “Despite these new rules, Wall Street will continue to provide the same important business services because the same needs are still there — creating liquidity; financing governments, corporations and individuals; and providing financial advice and products.” So perhaps the financial leverage of Wall Street in Washington does not keep us from achieving a solution; rather, the markets and their players may simply be following a trajectory defined by the growth of global finance itself. Consider, for example, the pressure on banks from globalization to amass more and more capital for bigger and bigger deals. In other words, the “too big to fail” phenomenon could be a necessary part of an increasingly globalized financial market. Of course, this could point to the need for greater international financial regulation. But with Europe embroiled in a debt crisis of its own and China warily watching its own housing bubble, there were at the time of the passage of the Senate’s bill more financial bush-fires in the world than firemen. The problem is that the fire chiefs are too often paid off (and intimidated) by the profiteering arsonists, so we are left woefully unprotected even if the veneer of regulatory reform has the look of an effective prophylactic.

Sources:  http://www.nytimes.com/2010/05/23/us/politics/23lobby.html?scp=2&sq=financial%20lobbying&st=cse ; http://www.nytimes.com/2010/05/24/business/24reform.html?hp ; WSJ (May 26, 2010).

Betraying an Electorate: On President Obama's Deal with Drug Companies

While campaigning for the U.S. presidency in 2008, Barack Obama decried the greedy Republican lawmakers acting at the behest of the drug companies to keep drug prices artificially high. A year later, those same drug companies wanted Obama to oppose a Democratic proposal intended to bring down the prices of medicine. Beyond betraying those who voted for him on the basis of his campaign rhetoric on drug prices, Obama undermined the trust that is necessary for a viable republic to function democratically.

“On June 3, 2009,” according to The New York Times, “one of the lobbyists e-mailed Nancy-Ann DeParle, the president’s top health care adviser. Ms. DeParle sent a message back reassuring the lobbyist. Although Mr. Obama was overseas, she wrote, she and other top officials had ‘made decision, based on how constructive you guys have been, to oppose importation on the bill.’ Just like that, Mr. Obama’s staff abandoned his support for the reimportation of prescription medicines at lower prices and with it solidified a growing compact with an industry he had vilified on the campaign trail the year before.”

As per the quid pro quo, the industry sponsored its own advertising campaign in favor of Obama’s health-insurance proposal. It was not just the lobbyists’ “constructiveness” that had convinced Obama’s staff to “make the deal.” To be sure, the staff could have been supposing that the subsidies enabling middle-income Americans to afford health insurance and the expansion of Medicaid to cover 30 million uninsured Americans would relieve people from having to pay high prices for medicine (though everyone would still pay the price, whether through higher taxes or higher insurance premiums). However, there would be no guarantee that the government would pick up the tab when needed. Furthermore, the higher prices could widen government deficits.

Beyond health-care and budget policy, moreover, lies the contradiction between Obama’s campaign speeches and his staff’s decision to oppose importation. The implication is that Obama went back on his word, essentially betraying anyone who voted for him because he had promised to support lower drug prices. Beyond Obama’s public credibility lies the mechanism of democracy, wherein voters trust that a candidate’s campaign bears some relation to the candidate’s governance. Otherwise, the “will of the people” breaks down and mistrust sets in on a societal basis. In other words, when representatives say one thing on the trail and quietly do the opposite behind their desks, democracy itself suffers.

In the wake of the financial crisis of 2008, it was said that trust is vital to the financial system. The same can be said of a viable republic. The question, therefore, is how candidates can be held accountable for the assumed congruence when so much of governance is done behind closed doors. In a two-party electoral system, withholding a vote from a disappointing office-holder can benefit a rival candidate even further from the voter’s preferences, so it can be difficult indeed to hold an office-holder accountable at the ballot box.

Source:

Peter Baker, “Lobby E-Mails Show Depth of Obama Ties to Drug Industry,” The New York Times, June 8, 2012.



Wednesday, August 23, 2017

The Veto Power of the U.S. President

On September 12, 1787, in the U.S. Constitutional Convention, Elbridge Gerry claimed that the "primary object of the revisionary check on the President is not to protect the general interest, but to defend his own department" (Madison, Notes, p. 628). Gerry was stressing the value of maintaining the separation of powers that was to exist between the three branches of the U.S. (General, or federal) Government. I believe he was inordinately fixated on his point--missing the presiding function of the U.S. President. Also on September 12, Madison averred that the "object of the revisionary power is twofold. 1. to defend the Executive Rights 2. to prevent popular or factious injustice" (Madison, Notes, p. 629). In addition to being an advocate of the separation of powers within the U.S. Government, Madison was concerned that a large faction in the majority might oppress a minority faction, and he viewed the expanded republic of the union as a means to minimize such tyranny. He too was slighting the presiding role of the president.

At the end of the convention, George Washington, who had been presiding over it as one controversial point after another was debated, noted the problems inherent in both presiding and advocating on particular issues. Madison reports that when "the PRESIDENT rose, for the purpose of putting the question [of the Constitution], he [Washington] said that although his situation had hitherto restrained him from offering his sentiments on questions depending in the House, and it might be thought, ought now to impose silence on him, yet he could not forbear expressing his wish that the . . . smallness of the proportion of Representatives [in the U.S. House] had been considered by many members of the Convention an insufficient security for the rights & interests of the people. . . . he thought this of so much consequence that it would give [him] much satisfaction to see it adopted. No opposition was made . . . it was agreed to unanimously" (Madison, Notes, p. 655). Washington believed that, as he was presiding over the Convention, it was necessary for him to remain silent on all of the particular points being debated; even on the last day he hesitated in expressing his desire that there be no less than 30,000 people per House Rep. rather than 40,000, as the Convention had decided.

The silence of a presider places him or her in a good position to weigh in on a point "of so much consequence." In other words, a presider literally sits before, rather than participates, so as to be able to protect the whole from dangers posed by points of large consequence. Weighing in on every partisan point, as most U.S. Presidents have done, not only keeps them from seeing the forest for the particular trees, but also detracts from the credibility with which they could push through the few matters of such consequence that the system would otherwise succumb.

It follows that the veto should be used not to give the President a share in every piece of legislation, but to enable him or her to stop bills that would otherwise compromise the system as a whole. In the U.S. Constitution as drafted by the Convention, the U.S. House was the only democratically elected body in the U.S. Government. Neither the U.S. Senators nor the U.S. President were elected by the people: the Senate represented (and protected) the state governments, and special electors were chosen by the state legislatures to select the U.S. President. The quality of representative democracy in the U.S. House was therefore vital to the Government having a balance within which democracy was a part. Compromise democracy in the House, and the U.S. Government might become an aristocracy or monarchy. These terms were used by many of the convention's delegates.

George Washington understood the nature of presiding, as can be gleaned from Madison's report of what the PRESIDENT said on the last day of the convention. It is a pity that his example has been lost on so many U.S. Presidents.

Source: James Madison, Notes of Debates in the Federal Convention of 1787 (New York: Norton, 1987).

Judicial Ethics: Friendship and Philanthropy

Harlan Crow was a Dallas real estate magnate and a major contributor to conservative causes. He did many favors for his friend, Clarence Thomas, “helping finance a Savannah library project dedicated to Justice Thomas, presenting him with a Bible that belonged to Frederick Douglass and reportedly providing $500,000 for [Virginia] Thomas to start a Tea Party-related group.” The two friends spent time together at “gatherings of prominent Republicans and businesspeople at Crow’s Adirondacks estate and his camp in East Texas.” Crow also “stepped in at Thomas’ urging” to finance the multimillion-dollar purchase and restoration of the cannery that had employed the justice’s mother. Crow’s restoration “featured a museum about the culture and history of Pin Point that has become a pet project of Justice Thomas’s. . . . While the nonprofit Pin Point museum is not intended to honor Justice Thomas, people involved in the project said his role in the community’s history would inevitably be part of it, and he participated in a documentary film that is to accompany the exhibits.”

News “of Mr. Crow’s largess provoked controversy and questions, adding fuel to a rising debate about Supreme Court ethics. But Mr. Crow’s financing of the museum, his largest such act of generosity, previously unreported, raises the sharpest questions yet — both about Justice Thomas’s extrajudicial activities and about the extent to which the justices should remain exempt from the code of conduct for federal judges. Although the Supreme Court is not bound by the code, justices have said they adhere to it. Legal ethicists differed on whether Justice Thomas’s dealings with Mr. Crow pose a problem under the code.”

The code says judges “should not personally participate” in raising money for charitable endeavors, out of concern that donors might feel pressured to give or entitled to favorable treatment from the judge. In addition, judges are not even supposed to know who donates to projects honoring them. . . . “(T)he restriction on fund-raising is primarily meant to deter judges from using their position to pressure donors, as opposed to relying on ‘a rich friend’ like Mr. Crow, said Ronald D. Rotunda, who teaches legal ethics at Chapman University in California.” On the other side of the argument, Deborah L. Rhode, a Stanford University law instructor who has called for stricter ethics rules for Supreme Court justices, said Justice Thomas “should not be directly involved in fund-raising activities, no matter how worthy they are or whether he’s being centrally honored by the museum.”

Ethical Analysis:

Ethical analysis is hardly an objective science. Nietzsche’s view that a philosopher’s philosophy is merely a reflection of his or her most dominant instinct expressed via cognition seems particularly relevant. In other words, out of the tussle of one’s instincts, one prevails, and it can be expressed as one’s thought. For instance, the thought that first came to my mind in reading the Times article was that exempting U.S. Supreme Court justices from the judicial ethics code violates the ethical principle of fairness. This “first find” seems to me to be the most indubitable conclusion of the case, ethically speaking. However, my perception, as well as “the salience” of the principle of fairness, may have more to do with which of my instincts is most dominant in my psyche than with any objective determination of ethical outcome.

The principle-instinct of fairness could be so dominant for me because it was conditioned as such through my early years. Specifically, I have a brother who is 1.5 years younger than I am, and the closeness in age meant that the principle of fairness was seldom far removed when we were kids. For instance, a while after we moved into a house my parents had had built, they split our large, shared bedroom into two. The question never far from the surface was, “Is the space equal?”—as if square feet would matter to two boys (we eyeballed it and concluded the rooms were “fair enough”).

What instinct and supporting personal experience lie behind the lawyer’s thou-shalt-not claim that justices should not be involved in fundraising PERIOD? Is there an ethical principle in that asseveration? Considering that the lawyer at Stanford does not have a graduate degree in ethics (or law, for that matter), her declaration is very likely based on a dominant instinct that has an urge to express itself in the garb of ethical language.

The lawyer at Chapman is more discerning, pointing to the purpose of the ethical prohibition on fundraising: the point is to keep justices from using their influence to get rich people to donate. In the case of Justice Thomas, it appears that Harlan Crow wanted to make his donations of his own accord. Unlike a case in which Thomas had used his influence to secure the gifts, this is not ethically problematic. The ethical problem would arise should a matter of concern to Crow come before Thomas’s court; the justice would be ethically obliged to recuse himself to obviate his personal conflict of interest. Announcing such a conflict would not be sufficient, as the underlying temptation to lean in favor of the benefactor would still exist; it would simply be apt to be better camouflaged by legalese. Of course, should a justice choose not to know the sources of beneficial donations, recusals to avoid such conflicts of interest would be less likely.

Therefore, one way to play it ethically is to allow each justice to decide whether to know a benefactor’s identity, and thus how much exposure to a possible recusal he or she is willing to accept. Hardly objective, this ethical strategy is one of several that are possible. It reflects empowering individual justices to determine the extent to which they want to subject themselves to the possibility of having to recuse to avoid a conflict of interest. The principle of boyhood fairness insists that the strategy be applicable to any federal judge, without exception.

Last but not least, the field of judicial ethics would be better served if American law schools followed their European counterparts in hiring legal scholars (i.e., holders of the doctorate in law, the J.S.D.). Also, a scholar of judicial ethics should have at least one degree in philosophy (ethical theory)—preferably a master’s or a joint Ph.D./J.S.D.


Source:

Mike McIntire, “Friendship of Justice and Magnate Puts Focus on Ethics,” The New York Times, June 18, 2011.

The Flemish and Walloons: Worlds Apart?

I contend that the cultural differences between the Flemish and Walloons within Belgium have been exaggerated to such an extent that the state government of Belgium has been paralyzed and solutions have eluded the Belgians. Reducing the fear-induced swelling of the admittedly real differences within Belgium may therefore facilitate relief from the paralysis. In other words, the added perspective from viewing the cultural differences as less traumatic can help the Flemish and Walloons to either live together or, ironically, be able to separate. That’s right—a more realistic assessment of the differences can actually facilitate the separation of Belgium into two (or three) E.U. states (or Flanders joining the Netherlands and Wallonia joining France—and the German-speaking area joining Germany). Exaggerating differences can snuff out consideration of such alternatives and enable continued paralysis.

To be sure, distinctions can indeed be made between the Flemish and Walloons; we can’t simply assume that the overall Belgian (or European) identity relegates the regional distinctions. “I am Flemish first, Belgian second,” says Pascal Francois of Aalst. Another Flemish man says, “It’s a toss-up when I’m in Belgium.” Even though I am a citizen of the U.S. rather than of the E.U., I can relate.

I regard myself as a Midwesterner first, an Illinoisan second. Being a Midwesterner essentially means to me having absorbed the down-to-earth culture of my native region of Illinois, which, being mostly rural with only a medium-sized city as its de facto capital, is distinct from Chicagoland (which is less Midwestern than the other regions). To be sure, “the Midwest” is a broad area in the middle of North America that transcends political categories. The label goes far beyond geographic connotation, for “the Midwest” stands for a certain “home-grown” (rather than foreign) culture wherein honesty (and bluntness), prudence, populism, and humility (and stubbornness) are particularly valued. The Midwest is known as “the heartland” because of these ethical virtues. In Illinois at least, being a Midwesterner can be readily identified with one’s specific region because the cultural values are more immediate than the political identification associated with being an Illinoisan. So being a Midwesterner is to being Flemish as being an Illinoisan is to being Belgian. So too, being an American is like being a European, even if the emphasis differs. Ideally, a federal system proffers political expression to each of these respective identities. Unfortunately, fear and the related intransigence (or stubbornness) can block full expression of one or more of the levels of cultural identification.

In the case of giving political expression to regional identification in Illinois, fear of change has gotten in the way. For example, the Illinois Senate could represent the regions (i.e., clusters of four or five counties), hence facilitating their expression, but it does not. Given how much the regions differ, the result has been a deficit in political identification within Illinois. Because the republic is quite heterogeneous (including linguistically, which, by the way, by no means exhausts the ways in which cultures can differ), I did not grow up identifying myself as an Illinoisan. In fact, the regions in Southern Illinois have more than once attempted to secede from Illinois due to economic, political and cultural differences—mainly from Chicago (whose culture is foreign even from the vantage point of the two other regions in Northern Illinois). In my late twenties, I visited Southern Illinois once from the North. Even though I am not from the Chicago region, I felt at the time how strange it was that the place was “Illinois.” You’re not Illinois, I thought to myself; this place is different and far away. The people talk differently. Unfortunately, I did not have a regional political identity on which to rest this intuitive reaction of semi-foreignness. Perhaps the Walloons feel a semi-foreignness when visiting Flanders (and so too the Flemish, when visiting Wallonia), though in their case, unlike mine, regional political identification can fortify the regional cultural bases of “home.”

In short, I can understand why a Belgian might identify as Flemish or Walloon first and want to give political expression to it, given the cultural diversity within Belgium. Such identification is not a bad thing in itself. Of course, whereas there are regional dialects (and some unique vocabularies) in Illinois, Flanders and Wallonia enjoy different languages—indeed, it can even be said that these regions enjoy standing for Dutch and French, respectively. Even as language is a major point of difference between the two regions, this basis can indeed be exaggerated, playing on the generalized fear by emphasizing the standing for over the simple enjoyment. Il est facile de craindre—it is easy to fear.

For example, The Telegraph reported in 2010 that “Pascal Smet, the schools minister for Flanders, has horrified [the Walloons] by suggesting that Flemish children, who are Dutch speakers, should learn English as their second language, rather than the French spoken by two fifths of their countrymen in Wallonia.” While being horrified constitutes an over-reaction, Pascal Smet must have known in 2010 that he had “picked a broader fight” under the reasonable rationale that English should be learned because it is becoming the common language of the E.U. “I note that the engine of European integration is sputtering. One reason is that we do not speak the same tongue, hence my plea for a common European language,” he said, according to The Telegraph.

Of course, Smet could have satisfied his purpose by proposing that both English and French be taught to Flemish kids. His needless insensitivity alone can be seen to have fomented an exaggerated response. According to The Telegraph, “Smet’s proposal that children in Flanders can dispense with French [has] deeply angered Belgian Walloons already fearful over their fate and Belgium’s future after Flemish separatists won the largest share of the vote in elections.” In other words, even sensible proposals involving the languages can escalate, fueled by the more generalized fear in a context of mistrust.

In short, the already-stark differences existing between the Flemish and Walloons are easily exaggerated, creating a self-fulfilling prophecy of separateness wherein people have a knee-jerk tendency to over-react. This can be seen as well where the Flemish and Walloons come into close contact. At least in the short run, integration can provoke flash-points.

According to the BBC, “Flemish defensiveness is at its sharpest near Brussels. The capital, which used to have a Dutch-speaking majority until the early 20th Century, is now overwhelmingly francophone. Its population is spreading outward in search of greenery and cheaper homes - a move that many in the Flemish suburbs find threatening. Liedekerke, a traditionally working-class town 15 miles (25km) west of Brussels, is one of many suburbs that have seen an influx of both rich expatriates and African immigrants.” It is strange that Walloons from the south of the state would be compared to expats and African immigrants.

The cultural differences within Belgium should not be construed as though they were a microcosm of cultural differences within the E.U. or even internationally. For example, the BBC avers that the “cultural divide between Europe’s Germanic north and Latin south has run through the middle of Belgium since the Roman Empire.” However, Flanders is not exactly Bavaria, nor is Wallonia populated by Spaniards and Sicilians. That is to say, perspective ought to be maintained in assessing the extent of the cultural differences within a small E.U. state. Let’s not get carried away.

Of course, as I suggest above, cultural differences do indeed exist between the Flemish and Walloons. Among the relevant factors, economic differences have fueled the continued salience of the regional identities—indeed, they have exaggerated them as well. Luc De Bruyckere, chairman of the Ghent-based food group Ter Beke and vice-president of FEB, Belgium’s main employers’ federation, for example, “points out that Flanders has a very tight labour market, while Wallonia is suffering from 17% unemployment.” Remi Vermeiren, a former chairman of the banking giant KBC, contends that Flemish people “believe more in a market economy” than Walloons. However, I have met Flemish people who have stressed the European socio-political virtue of solidarity (which is virtually absent from the American political lexicon).

Therefore, I suspect that the economic ideological differences between the Flemish and Walloons are overstated. It is not as though the Flemish have adopted Sarah Palin’s view of capitalism while the Walloons have adopted a command-and-control economy akin to that of the defunct Soviet Union.

Furthermore, economic disparities have fomented prejudice, which has the effect of exaggerating cultural differences and inhibiting viable solutions. According to the BBC, “Flanders indeed has wealth, a hard-working population, and beautiful, world-famous cities - like Bruges, Ghent and Antwerp. Many there are asking why their taxes should prop up what they regard as a lagging, mismanaged region.” Are the Walloons really not “hard-working” and not able to manage themselves? Such assumptions do not necessarily follow from economic differences. More likely, regions differ economically because their dominant industries are different and perform differently. Even so, Roger Vandervoorde, 65, a retired sales director, for example, told the BBC, “Walloons should be responsible for what they do.” Prejudice drips off this statement, reflecting more on his state of mind than on any lack of responsibility among the Walloons. Besides exaggerating cultural differences, such prejudice can impact political recommendations and reactions, which in turn have exaggerated the differences.

According to the BBC, “resurgent Flemish pride is based on much deeper forces than just material wealth.” Specifically, “The sense of Flemish identity is all the more acute as it was suppressed by the French-speaking elites that ran Belgium after the 1830 revolution. The constitution was written in French. A Dutch version, written a century later, was not given equal legal force until 1967. As the Dutch-speaking majority demanded recognition, it was mainly pressing claims against the Belgian state.” Accordingly, “a wide majority in Flanders reject Flemish separatism. Most people just want more autonomy within the Belgian state.” This desire for autonomy can be read as a reaction to having felt oppressed (or as a fear of potential oppression in the future).

The generalized fear pervading the Flemish is evident in the following observation from the BBC: “Wallonia may be poorer, but it is part of the 200m-strong francophone community. The Flemish are not standing on the shoulders of a friendly giant next door - and can be irked by Walloon cultural self-assurance.” Lest such fear be given too much leeway, the Flemish might recognize that Flemish conservatives have been dominating the Belgian state government of late and that both Belgium and France are states in the European Union. The ECJ, for example, is fully capable of restraining an imperialistic France intervening in Belgium on behalf of the Walloons.

Similarly, a generalized fear has pervaded the Walloons. This can be seen in the Walloons’ reaction to Vandervoorde’s claim (perhaps made on the basis of his prejudice), “The best would be a confederation, with each part responsible for itself and only a few small matters handled federally.” Perhaps reacting subconsciously to the prejudice in addition to the proposal itself, “the Walloons are digging in their heels. They regard confederation as secession in all but name, and insist on keeping tax and welfare policies at federal level.” The Walloons’ political reaction, in other words, may not simply be a desire for continued redistribution. At root, the fear might be that of being rejected. Such emotional and political fear need not exaggerate the perception of cultural differences or of natural reactions to them.

Federalism, and even separation, can be natural reactions to real cultural differences. De Bruyckere has a point in urging, “We have to organise ourselves in such a way that the different problems can be answered. One size fits all is not a solution.” While this dictum pertains especially to empire-scale unions such as the E.U. and U.S., it can also apply to heterogeneous states such as Belgium and Illinois. Just as the Chicago region ought not dominate the other regions of Illinois, Flanders ought not dominate Wallonia. That the two republics are themselves states in empire-level federal systems can be expected to relegate the “shock” thought to ensue from the partitioning of either Belgium or Illinois.

Even as prejudice can exaggerate the salience of extant cultural differences, being in an overarching federal system can be an asset in dealing with them. Because Belgium is a state in the E.U., a more activist E.U. presence in the state (e.g., dealing directly with Flanders and Wallonia) can take some of the pressure off the Belgian government. Alternatively, the E.U. can facilitate Belgium in reconfiguring into two states or in splitting between the Netherlands and France. Accordingly, Belgians, whether Flemish or Walloon, can afford to take a breath and gain sufficient perspective to stop clutching in fear to what has been at the very least a rather uncomfortable status quo.

Sources:

BBC News, “Rich Flanders Seeks More Autonomy,” September 30, 2008.

Bruno Waterfield, “Flemish-Speaking Belgian Minister Wants English To Be Europe’s ‘Common Language’,” The Telegraph, September 27, 2010.


Ronald Reagan

Ronald Reagan’s extolling of individualism, amid the problem that he saw as government itself, resonated with the religious overtones of American divine providence as a city on a hill—a promised land akin to the New Jerusalem. Even as material self-interest taking advantage of unbridled markets under the guise of competition was not Reagan’s primary orientation, greed could easily trump the force of Reagan’s normative envelope, human nature being what it is.

According to Madrick (p. 116), “The transformation of a political and economic message to a moral one was Reagan’s strength.” Religious would have to be added to moral for one to get a sense of Reagan’s individualism beyond its economic and political aspects. In a speech in 1963, for example, Reagan said that the inalienable rights of individuals are “God-given,” and that this individualism “puts us in opposition to . . . a prevailing attitude of many who have turned to a modern-day secularism” (Madrick, p. 117). Freedom is God-given, whereas totalitarianism is inherently secular. Hence, Reagan referred to the U.S.S.R. as the “evil empire” on more than one occasion—even as far back as his television work for G.E.

Reagan saw the American welfare state as similar to the totalitarian regime of the Soviets. According to Robert Dallek, “To Reagan . . . there are striking similarities between a Communist Russia and a welfare-state America that [he sees] as abandoning its traditional spirit of rugged individualism” (Madrick, p. 116). Reagan claimed that American government had failed to protect those truly in need of sustenance, but I do not believe he thought that government should enable the survival of those individuals whom misfortune or illness would otherwise kill. Reagan’s mindset was formed in the twentieth century largely before the rise in divorce and the associated fragmentation of the American family.

In Reagan’s America, it was generally assumed that church, charity, and family could be relied on—albeit perhaps idealistically—to sustain those who could not otherwise survive on their own. Whether Reagan’s concern for the working class would, in another era (i.e., one in which those other safety nets were compromised), include support of government aid for the long-term unemployed as a last resort would have to contend with his disdain for lazy people living off the work of others. Reagan agreed with Paul’s dictum that those who do not work do not eat. In more abstract terms, Reagan’s value on individual self-reliance and his association of the welfare state with totalitarianism together trump a solidarity value based on the human right to life qua “right to survival.”

Similarly, Reagan associated government regulation with totalitarianism. Accordingly, he viewed deregulation as essential to individual freedom. In a speech in 1959, he said that the power of “the stultifying hand of government regulation and interference . . . under whatever name or ideology, is the very essence of totalitarianism” (Ibid.). His push for deregulation was therefore not primarily to enable corporations to become bigger and richer; freedom as divinely-endowed rather than mere materialism was Reagan’s sun.

In a speech in 1967, Reagan said, “The world’s truly great thinkers have not pointed us towards materialism; they have dealt with the great truths and with the high questions of right and wrong, of morality and of integrity. They have dealt with the question of man, not the acquisition of things” (Madrick, p. 124). The moral (and religious) basis of Reagan’s political and economic ideology discounts not only material gain, but also the underlying self-interest. Madrick (p. 124) observes that “Reagan mostly avoided making economic self-interest the centerpiece of his economic program. . . . It was the selflessness of hard work, self-reliance, and courage associated with an American Protestant ethic. Material success was its by-product, not its objective.” Selflessness is indeed a value in American conservatism, even if it was relegated by the version oriented to economic self-interest following Reagan. Indeed, individualism itself need not be reduced to selfishness and greed, even in conservatism.

However, in advocating deregulation, Reagan’s religio-moral individualism allowed for the ensuing materialist-based, free-market economic conservatism that has been so susceptible to unfettered corporate empire-building and the related love of gain, or greed, as an end in itself. For example, Madrick (p. 116) points out that “Reagan agreed with Friedman that unfettered capitalism gave people the freedom to find their own way; this was its greatest benefit.” Even though Reagan’s orientation was on freedom, it allowed for the unfettered capitalism that enabled the unregulated sub-prime mortgage derivatives that in turn nearly toppled the financial system in 2008. Madrick (p. 124) concludes that Reagan “planted a visceral distaste for government in the American belly, justifying to many, and even making moral, runaway individualism and greed.” Whereas Reagan’s religio-moral orientation was not tucked within an economic paradigm of economizing self-interest, his anti-government plank enabled even a moral basis for such self-interest; such a moral basis is not that of Adam Smith’s moral sentiments that constrain competition.

Reagan’s legacy is not his religio-moral basis for individual rights; rather, he is known for having facilitated and consolidated a fundamental shift in the American psyche, which had begun in the context of the Vietnam conflict and Watergate. Reagan made it explicit that government was the problem rather than a solution; he made this into a cause. The financial crisis of 2008, including the failure of regulators to check the greed on Wall Street, can be related directly back to this paradigmatic shift. In fact, it is likely that economic self-interest unfettered by “evil” regulations came out on top as a result of the shift—even compromising Reagan’s God-given individual rights through the organizational power made possible by deregulated (and thus consolidated) corporate capitalism. Indeed, it could even be argued that the latter is more totalitarian than government regulation, at least from the standpoint of the mere individuals who happen to be citizens.

The triumph of the legal-persons doctrine, with its associated rights, is just one indication of the hypertrophy that has dwarfed Reagan’s highest virtue, even as the reductionism has sprung from Reagan’s very own apparatus. Had the former president modified his anti-government plank such that government would be put in the positive service—government service being ideally selfless—of protecting and furthering fundamental individual (not corporate) rights against commercial as well as governmental totalitarianism, the materialist hypertrophy of economic empires might not have been able to gain so much power over individuals.

Source:

Jeff Madrick, Age of Greed: The Triumph of Finance and the Decline of America, 1970 to the Present (New York: Alfred A. Knopf, 2011).

Banning Corporate Earmarks: Too Broad?

In March 2010, the U.S. House Appropriations Committee banned earmarks to for-profit companies. Had such a ban been in place in 2009, it would have meant the elimination of about 1,000 awards worth a total of about $1.7 billion. Many of those earmarks went to military contractors for projects in lawmakers’ home districts. The committee seemingly meant to end a practice that had steered billions of dollars in no-bid contracts to companies and set off corruption scandals. However, it is also possible that the vote was a “dog and pony show” not meant to result in any eventual law. Such a show would give the American public the illusion of Congressional effort to reduce the impact of business on elected representatives.

Most likely, as the House committee had anticipated, the U.S. Senate balked. The allure of earmarks was simply too great for a ban to survive intact. The confluence of projects sited in representatives’ respective districts and of corporate campaign donors being rewarded is too hard for reform-minded legislators to crack. According to the Office of Congressional Ethics, there is a “widespread perception” among the private-sector recipients of earmarks that giving political contributions to lawmakers helps secure appropriations.

I contend that banning earmarks misses the mark in thwarting the corruption that naturally stems from corporate political campaign donors being able to receive Congressional largess. The root of the problem is a conflict of interest wherein campaign contributors are allowed to benefit financially from government appropriations. Banning all earmarked projects in a district does not target this conflict of interest. It would be interesting to see whether corporations would still contribute to political campaigns if no financial benefit from the public purse could follow from the legislature.


Source:

Eric Lichtblau, “Leaders in House Block Earmarks to Corporations,” The New York Times, March 10, 2010.

The Keynesian Drug: America’s Achilles’ Heel

Keynes posited that government deficit spending could boost an economy’s output of goods and services when the economy is short of full employment. In the context of an economy near full steam, by contrast, tax cuts and/or more government spending could trigger inflation while adding little to GDP. To maintain balance in the government accounts, surpluses during the ensuing upswing are used to pay off the accumulated debt. This is the theory. Unfortunately, it seems to be at odds with representative democracy. Specifically, a systemic bias exists in favor of recurrent deficits, and thus accumulating debt.

As one example, the surpluses in the U.S. Government in the late 1990s were not devoted entirely to debt reduction; even if they had been, the debt would not have been wiped out, given the economic downturn in 2001. Clinton’s assumption of 15 years of surpluses turned out to be wildly idealistic, and his decision to spend portions of the surpluses in the late 1990s imprudent. Besides the natural human preference for immediate gratification over paying down debt, and the associated enabling by elected representatives who are easily distracted by other goals, the tendency of an economy to stay well below full employment means deficits will continue to be called for more often than surpluses. In other words, on economic terms alone, Keynesianism is inherently unbalanced, and political dynamics rooted in human nature exacerbate the imbalance.
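
To make that asymmetry concrete, here is a minimal sketch of the standard debt-accumulation identity (my own illustrative formalization, not notation drawn from the sources cited below): debt grows with interest plus the primary deficit, and shrinks only when the primary surplus exceeds the interest bill.

```latex
% Illustrative debt-dynamics identity (standard textbook form,
% not drawn from the sources cited in this post):
%   D_t      = public debt at the start of year t
%   r        = average interest rate paid on the debt
%   G_t, T_t = government spending and tax revenue in year t
\[
  D_{t+1} = (1 + r)\, D_t + (G_t - T_t)
\]
% The debt shrinks only when the primary surplus exceeds the interest
% bill, i.e., when T_t - G_t > r D_t. If the economy sits below full
% employment in more years than not, deficit years outnumber surplus
% years and D_t ratchets upward: the systemic bias described above.
\]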

The propensity of government officials to elevate other agenda items, such as the “size of government” and “jobs,” above fiscal balance in the face of deficits over $1 trillion and an accumulated debt of over $14 trillion boggles the mind. To be sure, those other items are important; but if they are allowed to eclipse or block the achievement of fiscal balance, it can be asked whether a people (and their elected representatives) are sufficiently mature to manage public debt.

For example, referring to the two-year extension of the Bush tax cuts in 2010 in the midst of a huge deficit (and accumulated debt), Alan Simpson (R-WY) bemoaned, "It's a great disappointment, a tremendous disappointment, because—what is it, $858 billion in two years added to the deficit? I mean, that just breaks your heart. What the hell do you think we've been talking about?" The former U.S. Senator was pointing to the immaturity involved in placing other priorities, even his own (i.e., smaller government), above reducing deficits running over $1 trillion and a debt of over $14 trillion. He was being an adult, bracketing priorities dear to him while talking to irresponsible children who ought not be allowed to play with debt, especially when it is at a dangerous level.

Speaking on the possibility of the Chinese pulling out as a creditor holding U.S. Treasury bonds, Simpson warned, "It will be precipitous. It won't be six months, might not even be six weeks. It might be six days when they suddenly start the flight. And I know how bankers are: once the flight starts, and the money and rumors, it'll be fast and difficult. . . . We don't know the tipping point. But the tipping point will come if you fail to address the long-term problem of debt, deficit, and interest." Given this danger, it is foolish not to throw everything we have at the problem, including more revenue and lower spending, and then to debate jobs and the size of government later.

In short, the American people ought to stay away from the Keynesian drug and find a better way, or else enact mechanisms by which we enforce fiscal balance on ourselves. A balanced-budget amendment would disallow Keynesianism outright; a more balanced mechanism would instead limit deficits to what a government commits itself to paying off within a few years' time.

Even if modern Americans are in a convenient denial, the Anti-federalists saw the danger in sustained government borrowing (and the possibility of an associated increase in the U.S. Government's power). Brutus (p. 151), for example, observes: "The power to borrow money is general and unlimited . . . By this means, they may create a national debt, so large, as to exceed the ability of the country ever to sink. I can scarcely contemplate a greater calamity that could befal this country, than to be loaded with a debt exceeding their ability ever to discharge. If this be a just remark, it is unwise and improvident to vest in the general government a power to borrow at discretion, without any limitation or restriction. (I)t would certainly have been a wise provision in this constitution, to have made it necessary that two thirds of the members should assent to borrowing money—when the necessity was indispensible, this assent would always be given, and in no other cause ought it to be." Brutus was likely thinking of war stemming from an invasion, rather than of stimulating the economy. Lest it be assumed that the latter would necessarily be excluded, two-thirds of the U.S. House members and U.S. Senators could still vote for borrowing to stimulate the economy and create needed jobs. To be sure, such a vote would be less likely than under the hurdle of a mere majority, but Keynesianism would not be excluded in theory. Designing a mechanism that enforces balance in the use of the Keynesian drug is difficult precisely because of the gravity of the addiction and the enabling rationalizations of slippage.
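
As a quick sketch of what Brutus's two-thirds rule would mean arithmetically in the modern chambers (assuming the familiar 435 House seats and 100 Senate seats):

    import math

    def supermajority_threshold(seats, fraction=2/3):
        """Smallest number of votes satisfying the supermajority fraction."""
        return math.ceil(seats * fraction)

    print(supermajority_threshold(435))  # 290 votes in the U.S. House
    print(supermajority_threshold(100))  # 67 votes in the U.S. Senate

A hurdle of 290 and 67 votes respectively would not bar stimulus borrowing, but it would demand a far broader consensus than the bare majorities of 218 and 51.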


Sources:

Brutus, Letter 8, January 10, 1788, 2.9.95, in Herbert J. Storing, ed., The Anti-Federalist (Chicago: University of Chicago Press, 1985), p. 151.

Alan Simpson, Newsweek, December 27, 2010, p. 28.

On the Ethics of Legislating

The means by which a bill becomes a law are sometimes referred to as "how sausage gets made" because they are not suited for public display. The belief is that were the citizens to see what goes on in the process, they would demand that it be changed. Perhaps this means that it should be changed.

For example, Rep. David Dreier, R-California, argued in March 2010 in the U.S. House that while "the process of lawmaking should be ugly, I have never seen it as ugly as it seems to be coming before us this week. … I think that [James] Madison would be spinning in his grave at the fact that there is absolutely no accountability to what is taking place here." He was referring to the Speaker's attempt to bypass a direct vote on the U.S. Senate's health-care bill by voting on a rule that would deem that bill passed in the House. More generally, the ugliness in passing legislation refers to the horse-trading, which can include special favors and even payoffs. For example, the U.S. Senate's health-care reform bill of 2010 included a "Louisiana Purchase" and a special deal for Nebraska; both were engineered to obtain the votes of the two states' senators.

In horse-trading, a representative votes against the preference of a majority of his or her constituents because they will get something else out of it. The official judges that the typical constituent would agree to vote against his or her own preference on the given issue because of the payoff to the district. "I'll go ahead and vote for health-care reform with a significant government role because we'll get a new bridge." If the representative's judgment matches that of the majority of his or her constituents, the sausage-making is ethical because the representative is indeed representing.

If most of the representative's constituents would say no to the deal, then the yes vote would be ethical only if the representative judges correctly that the deal is in their best interest. A representative can justifiably depart from polls to act in his or her constituents' best interest because this, too, represents them. Were there no value in this, we would have a direct rather than a representative democracy. The latter contains a check on the passions of the people; the strength of the check depends on the length of the term, because an election further off gives the representative more time for the "in your best interest" to become evident.

The ethical waters darken appreciably the more the deal benefits the representative rather than the constituents.  A friend or relative getting a judgeship, for example, does not justify a vote that is at odds with the preference (or best interest) of the majority of the constituents.  This is essentially a principal-agent problem, where the representative is the agent who is not acting in the interest of his or her principal.  It is unethical because the agent has agreed to the duty of acting in the principal’s interests.
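
The decision rule sketched over the last few paragraphs can be compressed into a toy Python function. The predicate names are my own hypothetical labels; in real life each is a judgment call rather than a computable fact:

    def vote_is_ethical(matches_constituent_preference,
                        in_constituents_best_interest,
                        benefits_mainly_representative):
        """A vote that mainly benefits the representative fails outright
        (the principal-agent problem); otherwise it is ethical if it tracks
        the constituents' preference or their correctly judged interest."""
        if benefits_mainly_representative:
            return False
        return matches_constituent_preference or in_constituents_best_interest

    # The bridge-for-a-vote trade described above:
    print(vote_is_ethical(False, True, False))  # True: still representation
    print(vote_is_ethical(False, False, True))  # False: the agent serves himself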

So sausage-making is not necessarily as sordid for the individual representative as it might seem overall. An ethical representative need not fear discussing it. In fact, making it transparent as part of the political discourse (through the media) would permit him or her to explain to the constituents why he or she is making the deal. Ensuing feedback might lead the representative to revise his or her assessment.

Of course, the representative might fear that the deal would unravel were it made public. However, it is unethical to include stealth deals in public legislation. A deal that is not agreed to in the light of day should not be part of a given bill, because the majority voting for a bill should know what they are voting on. There is nothing shameful in a deal that is in the interest of one's constituents, and such a deal might be in the interest of the other representatives who want the bill to pass. "We will give your people the bridge that is in their interest in exchange for their support on our bill here." This is essentially a recognition of legitimate priorities. However, it can also be argued that a bill should stand on its own merits.

Strictly speaking, if a majority of the representatives do not support the health-care reform bill on its own merits, for example, the bill should not be passed. Proponents of a bill overreach in looking for deals to put them over the top, because the majority vote with the deals does not match the merits of the bill apart from the unrelated deals. Bad public policy can be enacted because some districts care more about other things. In this sense, it would be ethical for the proponents to admit that they do not have a majority and that the bill therefore ought not to pass. Standing in the way of such an admission is the selfishness of "I want it my way anyway." This too ought to be part of the public dialogue when a majority can be achieved only by horse-trading.

However, the horse-traders could argue that including the deals reflects the priorities in the society, and these ought to be reflected in a legislature. Ethically speaking, a legislature should design its sausage-making in such a way that such priorities are reflected without enabling legislation that only a minority wants on its merits apart from those priorities. How to distinguish legitimate priorities from overreaching is the difficult problem facing us, both ethically and in terms of legislative process. Ah, but there's the rub: what reforms would institute this messy ethical distinction? Applied ethics may not be sausage-making, but it is messy nonetheless.


For Rep. Dreier's quote, please see:

Friday, August 18, 2017

The U.S. House of Representatives: An Aristocratic Democracy-Deficit?

The abrupt resignation of Jesse Jackson, Jr., from the U.S. House of Representatives in 2012 only weeks after being re-elected gave Democratic politicians in Chicago a rare opportunity to get their hands on a Congressional seat. The New York Times observed at the time that such seats “in Democratic strongholds” of Chicago “do not come open very often, and when they do, a line forms fast.” According to Debbie Halvorson, who ran against Jackson, “If someone is thinking of becoming a congresswoman or congressman, this might be their only chance. Whoever gets this will have it forever, they say. That’s why everyone wants to take a chance.” In other words, the office is a sort of personal entitlement. From a democratic standpoint, this represents “slippage.”

The reason for the two-year term in the U.S. House is to render the representatives responsive to their respective electorates or constituents. The six-year term in the U.S. Senate was chosen for precisely the opposite reason: such a term gives the senators some breathing room from the "real-time" demands of their respective states. The bicameral result would ideally be a check and balance between the people's momentary passions and deliberation on the country's best interests beyond today. Having a House seat "forever" renders its occupant immune, at least potentially, from having to respond legislatively to contemporary demands "back home." Indeed, having the job for life, an incumbent might even move to Washington, D.C., with only occasional visits "back home." With only 435 members representing a combined population of about 310 million (as of 2012), the U.S. House is already aristocratic in nature; a re-election rate of over 90% for incumbents and certain seats being virtually lifetime appointments render the "people's House" akin to a House of Lords.

Because the U.S. Senate was intended to represent the propertied interest as well as the states, an aristocratic House gives the elite too much institutional power in the U.S. Government. Other things equal, the democratic element—that of the people—will in theory eventually revolt. Were the imbalance in favor of the masses, the propertied would soon “opt out” too. Hence, the delegates at the U.S. constitutional convention in 1787 intentionally fashioned a federal government reflecting “the one, the few, and the many” in a sort of balance.

The U.S. President is “the one”—the antecedent being the imperial monarch. The U.S. Senate and the U.S. Supreme Court both refer to “the few.” The U.S. House, being at first the sole repository of democracy in the U.S. Government, was to represent “the many.” As in juggling, if one ball joins one of the others rather than there being three separated equidistantly, the balance is off and all of the balls are likely to be dropped. At the very least, the pernicious impact of the imbalance can be lessened by shifting domains of authority back to the governments of the member-states.

Even though the U.S. House chamber looks large, it represents 310 million people. (Image source: Britannica)

Additionally, the U.S. House could be enlarged to the size of the European Parliament—both containing representatives of a people spanning an empire in scope. Lest it be concluded that such a size is nonetheless unwieldy for a legislative body, it could be argued that the “extended republic” has become too big for even a “repository of democracy,” in which case we are back to the notion that power could be transferred back to the states—many of which have populations equal to or exceeding that of the United States when the U.S. Constitution was formulated and ratified.
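
Some rough arithmetic, sketched in Python, suggests what enlargement would buy. The figures I am assuming (310 million Americans, roughly 500 million E.U. residents, and roughly 750 seats in the European Parliament, circa 2012) are approximations and should be checked:

    # Back-of-the-envelope comparison; all inputs are approximate.
    def constituents_per_seat(population, seats):
        return population // seats

    print(constituents_per_seat(310_000_000, 435))  # ~712,600 per U.S. House member today
    print(constituents_per_seat(310_000_000, 750))  # ~413,300 if enlarged to EP size
    print(constituents_per_seat(500_000_000, 750))  # ~666,700 per MEP, for comparison

On these assumptions, a House at European Parliament scale would cut the constituents-per-seat ratio nearly in half, to below even the E.U.'s own figure.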

As for transferring power back to the states, several of them might consider adopting federalism internally; the states' own "states" (i.e., provinces, "countries" in U.K. terms, cantons, or Länder in European terms) might function as the state legislatures did in the early United States (i.e., citizen representatives serving for a time). Put another way, the large and medium republics in the U.S. at the end of the twentieth century may themselves be commensurate with the early U.S. as a whole from the standpoint of population and representative democracy. Even so, the diversity within a given state is not as great as that which exists, even in 2012, from state to state. Europeans who travel from New York to Miami and on to San Francisco and perhaps Utah discover that the United States do indeed differ culturally, albeit perhaps in different ways than the member-states of the European Union differ at home. Even though they are less diverse internally than the states are from one another, some of the United States are internally diverse enough, and populous enough, to warrant the application of federalism, such that legislatures covering a number of counties could be formed and given a portion of the state's remaining sovereignty. Just as the E.U. deals directly with regions of its states, the U.S. could as well.

In short, because the U.S. House of Representatives is in many respects an aristocratic body (an outcome that some of the founders, particularly the Anti-federalists, anticipated back in 1787), enlarging that chamber democratically and extending federalism down into the states could lessen the democratic deficit in the system overall.

Source:
Steven Yaccino and Monica Davey, “Illinois Sets Election Dates to Replace Jackson in House,” The New York Times, November 27, 2012.