Tuesday, April 30, 2019

Glimpsing behind the Curtain: Vice President Lyndon Johnson and the Kennedy Assassination

Robert Ross interviewed Lyndon Johnson’s mistress, Madeleine Duncan Brown, in what Ross titled, “The Clint Murchison Meeting in Dallas November 21, 1963.” The interview took place sometime before her death on June 22, 2002. The content is revealing, and she comes across as very credible, as it is obvious she still had feelings even then for the late president. She also had a credible motive for opening up to the American people. So in watching the interview, I did not view it as just another conspiracy theory; I paid attention. Sometimes the truth finally emerges in plain sight, rather than through complicated theories as in Oliver Stone’s film, JFK (1991). The most revealing facts to emerge from the interview are that Jack Ruby, who killed Oswald just two days after the assassination, had been at the meeting at Murchison’s mansion on the night before the assassination, and that LBJ told Madeleine while leaving Murchison’s house after the meeting, “After tomorrow, those SOB’s will never embarrass me again.” That the official narrative from the Warren Commission would still carry weight as the default account at least in the first two decades of the next century astounded me. All of Madeleine’s knowledge of the players should have caused at least a tremor when the interview was made public. The status quo has that much inertia. Even so, the American public can glean from Brown’s account just how different the reality of the power-brokers in (and outside of) the U.S. Government can be from what the public knows. Unfortunately, the patina, or gloss, even of acting can have incredible staying-power even in the face of the facts revealed. Members of the political elite and their companions may want to protect their legacies in old age, or want the freedom of conscience that comes with the impunity that only death can bring. 
The resulting piecemeal facts must justify themselves, however, whereas the long-standing official version often has the benefits not only of the protective power and entrenchment that come with having been the default for so long, but also of a coherent (i.e., contrived) narrative.  

Madeleine had met LBJ in 1948. By her reckoning, she and Lyndon had a “wonderful relationship.” Johnson fathered Madeleine’s son, Steve Brown, who had died of cancer by the time of the interview. In spite of having cancer, Steve had sued to get part of Johnson’s estate. Madeleine was hurt by the way the power structure in Texas had handled Steve by preventing him from appearing in court. “I probably would never have opened my mouth, but the way they handled my son. They can’t take anything from me now. The public needs to know.” Essentially, she says in the interview that the assassination of Kennedy was the result of a domestic plot that had been planned since the 1960 Democratic Convention.

Joe Kennedy and H. L. Hunt met three days before the convention and cut a deal: Johnson would be the VP. At the time, Hunt told Madeleine, “We may have lost a battle but we’re going to win the war.” On the day of the assassination, he would tell her, “We won the war.” Madeleine concluded the assassination was “a political crime for political power.” H. L. Hunt, the richest man in the world at the time, and others “mapped a plot to get rid of John Kennedy” from just after the convention. The 8F group included oil men such as Clint Murchison and Hunt, Texas politicians such as John Connally, and even, occasionally, J. Edgar Hoover.


Meeting the night before the assassination at Clint Murchison’s house on Nov 21, 1963 were Lyndon Johnson, J. Edgar Hoover, John McCloy, H. L. Hunt (who had had flyers reading “Wanted for Treason: John F. Kennedy” passed out in downtown Dallas), John Currington, George Brown, Richard Nixon, Amon Carter, Jr., Texas Gov. John Connally, Earle Cabell (mayor of Dallas, whose brother Kennedy had fired after the botched Bay of Pigs invasion), W. O. Bankston, Clint Peoples, Bill Decker (sheriff of Dallas County), Cliff Carter, Malcolm Wallace, and, representing the mafia, Carlos Marcello, Joe Civello, and Jack Ruby (an old buddy, Madeleine remarks). I submit that the mafia had a motive to kill the president, whose brother Robert had turned the U.S. Department of Justice on the mob, including the very mobster in Chicago, Sam Giancana, who is said to have put Illinois over the top in voting for Kennedy. It is particularly relevant, therefore, that Ruby, who would later claim to have killed Oswald out of anger at the assassination of the president, was at a meeting with such notable insiders on the night before the assassination. Also, the inclusion of the FBI and the sheriff of Dallas County fits with the obvious need to cover up the crime. That Richard Nixon, who had lost the 1960 election to Kennedy (unfairly, according to the man known as “Tricky Dick”), would be in a meeting with Johnson supporters should also raise some eyebrows; it would make sense, however, if the Democrats wanted assurances that the other party would not try to uncover the plot. It is therefore significant that Nixon was already in town; he and Johnson had met two days earlier.

At any rate, the social party at the mansion, to which Madeleine had been invited, broke up at 11 p.m. when the Vice President arrived. He and others went into a conference room. Jack Ruby brought a call-girl, Shirley, to the meeting. When Johnson came out of the meeting at its conclusion, he told Madeleine: “After tomorrow, those SOB’s [i.e., sons of bitches] will never embarrass me again.” Johnson was angry. “The Irish mafia, I think,” Madeleine says in the interview when Ross asks her whom Johnson was referring to. However, in her book, written five years earlier, Madeleine wrote that Johnson had told her, “After tomorrow, those goddamned Kennedys will never embarrass me again.”[1] Because she looks like her mind is going astray at that point in the interview (she would, after all, die soon), I suspect she confused Lyndon’s antipathy toward the Irish mob with his loathing of the two Kennedy brothers. 


Even if Johnson didn’t get along with a mobster, his frustrating relationship with the Kennedy brothers in the White House is well documented. Regardless of whom he was angry at, that Lyndon Johnson knew something would be very different for him on the next day (the day of the assassination) suggests that he knew of it beforehand. In fact, that he made such a statement with such strident certainty just after the meeting suggests to me that the meeting’s purpose had been to decide whether to go ahead with the plan. If indeed Lyndon Johnson had at the very least been aware of the assassination beforehand, the way in which he publicly reacted afterward can be seen in a different light: as acted out rather than authentic. By implication, the American people had no clue as to what was actually going on behind the scenes. The difference ought to be of concern from the standpoint of democracy, because such a degree of acting can be used on an ongoing basis to hoodwink the electorate.

People on the periphery of the plotting group were in an interesting predicament, being let into at least some of the inside information and yet not truly part of the group. Hence they could be expected to share at least one of their points of reference with the public and thus feel guilty enough to speak, or finally turn on the insiders by divulging tidbits of information even in the face of a seemingly overwhelming public narrative. Clint Murchison’s secretary, for instance, committed suicide days after the assassination. Even though Madeleine still had feelings for Johnson (i.e., they had not ended on a bad note), she was convinced that he had been in on the assassination, and yet she said nothing of this publicly until she was old, after her sons had died, so she had nothing to lose. For one thing, she says in the interview that if Kennedy had not been assassinated when he was, Johnson would have faced “serious political problems when he returned to Washington.” He had been involved in the Billie Sol Estes and Bobby Baker scandals, and Kennedy was already looking for another VP candidate for 1964, according to Kennedy’s secretary, Evelyn Lincoln.[2] At the time of the assassination, a U.S. House committee was planning to indict Johnson. A man who would later be shot was going to testify that Johnson had taken kickbacks from agricultural programs. When Lyndon was president, he kept the Vietnam War going for so long because he was getting kickbacks on military contracts to his business friends.

Johnson’s real mentality, however, went deeper than corruption. According to Madeleine, Malcolm “Mac” Wallace was Johnson’s hit-man. In a letter to the Department of Justice in 1984, Douglas Caddy, the lawyer for Billie Sol Estes, claimed to have evidence that Johnson ordered hits on eight men, including Kennedy.[3] Johnson “had no qualm about having someone killed,” the still-smitten Madeleine says in the interview. “Whatever it takes to get a job done,” she says of Lyndon’s mentality. She agrees with Ross in his conclusion that Johnson must have thought the end justified the means. Madeleine points out that Johnson even ordered the killing of an innocent woman who had seen Madeleine and Johnson together in a hallway. Even to conceive that a U.S. president had a hit man is difficult; to a public kept largely in the dark, such a thing (and that the American electorate voted for a mafia-like man in 1964) must seem inconceivable, or else fiction, like the series House of Cards. Hence the vulnerability lodged in American democracy, wherein the electorate is left with merely superficial or artificial perceptions of the candidates and office-holders, remains largely hidden from view.

All of the above, hitherto hidden from view, does not even count the stealth role of corporations in influencing Congress, the President, and even the agencies that regulate those very corporations and industries. The relationship can indeed be quite cozy in spite of the conflicts of interest that should be obvious. The allowance of “dark money” contributions to political campaigns, affirmed by the U.S. Supreme Court in its Citizens United decision, is just one indication of how the real relationship between business and government in the U.S. can be deliberately hidden from plain view, and especially from the disinfectant effect of sunlight. If sunlight is essential for the popular sovereign (i.e., the People) to hold its government officials accountable, then representative democracy in the U.S. is seriously flawed. To get caught up in debating who shot Kennedy may be just what the political elite wants, because not only do such myopic investigations tend to be premised on the Warren Commission’s report as the default narrative to be disproven, but the obsession with one historical event also comes at the expense of uncovering the true nature of the current office-holders in government and the real relationship between business and government.


[2] James Hepburn, Farewell America: The Plot to Kill JFK (Penmarin Books: 2002).
[3] Ibid.

Saturday, April 27, 2019

Eight Good Behaviors of Managers: Googled by Google

In early 2009 at Google, "statisticians . . . embarked on a plan code-named Project Oxygen. The 'people analytics' teams at the company produced what might be called the Eight Habits of Highly Effective Google Managers. 'My first reaction was, that’s it?' says Laszlo Bock, Google’s vice president . . .  for human resources. 'The starting point was that our best managers have teams that perform better, are retained better, are happier — they do everything better,' Mr. Bock says. 'So the biggest controllable factor that we could see was the quality of the manager, and how they sort of made things happen. The question we then asked was: What if every manager was that good? And then you start saying: Well, what makes them that good? And how do you do it?' He tells the story of one manager whose employees seemed to despise him. He was driving them too hard. They found him bossy, arrogant, political, secretive. They wanted to quit his team. 'He’s brilliant, but he did everything wrong when it came to leading a team,' Mr. Bock recalls. Because of that heavy hand, this manager was denied a promotion he wanted, and was told that his style was the reason. But Google gave him one-on-one coaching — the company has coaches on staff, rather than hiring from the outside. Six months later, team members were grudgingly acknowledging in surveys that the manager had improved." (1)

Analysis:

"What if every manager were good?" sounds a lot like "What if everyone were above average?" I suppose there will always be the proficient and the lacking in any profession. Even then, some organizations will be better managed than others. Corporate culture has a bearing on such differences, even among supervisors. For example, more than one American company probably has a culture in which supervisors view training as the way to correct an employee's bad attitude toward customers. This sense of "bad" is different from "bad" as in incompetent, and even in this sense training may not be sufficient.

For instance, once at a grocery store at night I encountered both a cashier and a customer-service person who did not know how to calculate a "rain check." I was stunned that when I pointed out the most basic of mistakes, the two people had blank stares. I politely told them I had to go; I had realized that a transaction would not be likely that evening. The next day, I spoke with the "front line" supervisor, who agreed with me that the incompetence had been "off the charts," and yet she said that for a few hours in the evening, that customer-service employee was in charge in the cashier area. The manager could not do anything about it. Clearly, the management of the store was bad in terms of managerial competence. In fact, as past experience at Walmart stores taught me, incompetence can be so bad, so far removed from that which is customary and thus expected, that horrendous incompetence may itself be unethical. Typically, unethical retail conduct is limited to attitude and related bad conduct toward customers. 

To get good managers, including supervisors, we must consider in what sense they are to be good. Good-hearted? Good as in having mastered managerial skills? Good as in having a style that fits the particular corporate culture? The question of what makes a manager good hinges on what is meant by "good." Of course, all of these senses of good are important, and not even incompetence can necessarily be cured with training. 

In the case of the bossy and arrogant manager at Google, I contend that what was "bad" was not limited to or sourced in his style; rather, the problem was his personality, which transcends style. Arrogance, for example, is a basic attitude rather than a style. It is no surprise that "coaching" (a misnomer or bad analogy outside of sports) did not turn the guy around. Perhaps the guy needed therapy or counseling. That Google would reduce a "bad" personality to a leadership style and prescribe "coaching" rather than a therapist is no accident.

It is commonly taught in business schools and believed in business settings that the science of management is applicable to virtually any business in any industry. In fact, one can theoretically manage a "team" (another misnomer from sports being used out of context) without having any skill or knowledge particular to the product. The idea, in short, is that anything, and virtually anyone (certainly anyone who has been hired!), can be managed by being (re-)trained. Just as it is assumed that a person with a Master of Business Administration (MBA) can manage organizations in virtually any industry (i.e., without necessarily knowing much about the product coming in, or even well into the job of managing), having a bad (in any sense) employee re-trained is often the default route. 

It is often assumed, for instance, that training and even re-training can be efficacious with anyone. From my observations of the cashier and the supervisor on duty in the grocery store, I would hope that the store manager would consider that some people, even hired ones(!), may not be educated or intelligent enough to comprehend and apply the re-training. It is as if managers conveniently assume that their company's hiring process is so good that training should be all that is necessary for any employee. Alternatively, a short-sighted mentality, especially concerning money, may be behind the view of training as a cure-all, for to fire and re-hire is, or should be, a considerable process. 

So what is actually a psychological problem is transmuted into managerial terms such as "style" in need of "coaching." Personality, in other words, is reduced to the extent to which it fits within management. Moreover, reducing managing to behaviors, as if that which is inside the manager were a black box, is to ignore that which separates the mice from the men as managers, both in terms of getting along with others (i.e., "good" as in interpersonal relations) and in not having a trivial or short-sighted lack of perspective. Improving a manager's "style" by trying to change (manipulate?) his or her behavior is apt to be insufficient. It is like paddling a rowboat without moving the anchor; the boat isn't going to move very far. The anchor must move too, and there are limits to what management, and especially retraining, can do in that respect. Oftentimes in badly managed businesses, the hiring process is flawed such that bad (in any sense) people get in, whether as managers, supervisors, or employees.

With this in mind, I turn now to critique the "Eight Good Behaviors" that the good people at Google recommend.
  • Be a good coach. Included: provide specific feedback without being too negative, and "present" solutions to problems. But isn't this just management? I don't see much substance in the term transferred from sports (i.e., what coaches actually do).
  • Empower your team and don't micromanage. Freedom vs. advice. Challenge the "team" with "big" problems. This sounds like something written by a "team" of school teachers to their young students. Empower is a faddish, politically correct term that is rarely adequately defined. With regard to micromanaging, every micro-manager I have encountered has had control issues, meaning psychological problems involving or impacting personality and interpersonal conduct (not rooted in conduct, or style!).
  • Express interest in team members' success and personal well-being. Get to know about their lives outside of work and make new team members feel welcome. Helping new people to feel welcome is laudable; it is perhaps the area where a manager can truly be most human. Success, however, is a vague term implying an ending (e.g., Did you succeed in getting the kids to sleep last night?), whereas business typically is ongoing and thus not like a race or contest after which contestants can know whether they won. Furthermore, when used more broadly than in regard to a specific project or plan, success is vaguer still. With regard to getting to know things about subordinates outside of work, including their personal well-being, some subordinates may feel pressured to say more than they would like, given the power differential. Also, the "authentic" questions may come with a hidden agenda, namely, to manipulate the subordinates so they will want to stay at the company and be more productive. 
  • Don't be a sissy: Be productive and results-oriented. Focus on the "team" setting achievement goals and priorities. We are back to elementary-school language (e.g., sissy) and to what is essentially management itself (producing results, not visions). A business is a results-oriented enterprise. A focus "on what employees want the team to achieve" belies a manager's true intention: to set goals for his or her subordinates so they will pay more attention to results and thus be more productive. Having "the team" set its own goals and priorities can itself be understood as a motivating tool as long as the goals and priorities are approved by the manager. The patina of democracy, or of decentralized decision-making, is often a manipulative sham designed to get more production.
  • Be a good communicator and listen to the team. Two-way communication: "Hold all-hands meetings and be straightforward" in communicating; "encourage open dialogue and listen." All-hands? At any rate, should we really be encouraging managers to have more meetings? Being straightforward is laudable, however, as is open dialogue. The question is perhaps whether this is even possible where managers view their subordinates as lower. In other words, can there be straightforward dialogue where there is a power relation between boss and employee?
  • Help your employees with career development. Here, too, the difficult matter of being straightforward is relevant, given how organizational politics (i.e., collusion or friendship) and a manager's own career interests can all too easily bear on others' career development, possibly resulting in problems for the friend or ally once he or she has been elevated to more difficult tasks.
  • Have a clear vision and strategy for the firm even in the midst of turmoil. Involve the team in setting the vision. Grouping together strategy and vision ignores the vital distinction between management and leadership. My dissertation presents a model by which integrity (i.e., ethical principles) can moderate between the interests of strategic management and leadership vision. The latter is not the same as long-term strategy; rather, vision is an ideal, to which strategy is a means. The leadership vision of a large company like Google includes the company's place or role in society, such that the vision extends to the societal level. Hence the vision is set at the top, typically by the CEO and perhaps even the chairman of the board, so the notion that a "team" lower down sets the vision is simply wrong; it is a consequence of conflating management and leadership, an epidemic in American business. It also follows that vision is not the same as long-term strategic goals; this conflation, too, results from fusing management and leadership. Strategic leadership has two main components, which are distinct. In fact, they can be in tension. Reconciling a credible societal vision with pressing strategic interests can be difficult because upholding the integrity of a vision can involve short- and medium-term costs that are at odds with budgets ensuing from corporate strategy. Google's grouping of vision and strategy ignores this tension. Just in using the term "vision," Google is using yet another vague analogy that has been a fad since the 1980s. How does a vision differ from coming up with a goal? Has anyone in the study of leadership defined vision? Regarding faddish words used as weak analogies, people can use them without knowing what they mean! Lastly, the use of the word turmoil, as if it were only occasional rather than the typical condition of the business environment, over-dramatizes the need for someone at the helm. 
Even a turbulent business environment pales in comparison with the havoc in and following the protests in the Middle East and the Japanese earthquake in 2011. Lest it be assumed that turmoil has increased over the decades, plenty of oil refiners and producers were going out of business amid the destructive competition of the 1860s. This was Rockefeller's rationale for creating a refining monopoly, a justification at odds even with his own vision of himself as a Christian "helping" competitors avoid going under. Turbulence can be used as an excuse.
  • Have key technical skills so you can help advise the team. Work side by side with your subordinates when needed, and understand the work they are doing. This principle, or "habit," challenges the notion that a person can learn management skills and apply them to virtually any business, knowledge of how to make the particular product being unnecessary. I suspect this is an American view of management. The Japanese have traditionally hired managers from the factory floor precisely because they are familiar with the technical skills being used to make the particular products. Even so, Japan has not been without cases of horrendously incompetent management. A good manager, I contend, is one who is already proficient with most of the tasks of his or her subordinates and can therefore help out when needed. So it would appear that Google got one right.

1. Adam Bryant, "Google's Quest to Build a Better Boss," The New York Times, March 12, 2011.

Friday, April 26, 2019

Getting More For Doing Less: Bank Board Directors

Executive compensation is an art rather than a science. It is not as if numbers are fed into a computer and the correct compensation pops out. More discretion is involved than meets the eye. “Since the financial crisis,” The New York Times reported in 2013, “compensation for the directors of [America’s] biggest banks has continued to rise even as the banks themselves, facing difficult markets and regulatory pressures, are reining in bonuses and pay.” [1] Just five years after the financial crisis, it is interesting how the banks' respective managements decided to spend the TARP money from Congress and even more money from the Federal Reserve Bank. Also of note, board and upper management compensations seemed to be going in different directions in spite of both being presumably tied to the same firm performance. Even a performance-incentive approach tied to firm-performance can accommodate a lot of latitude, such that banks differ in how much they pay their respective boards. The discretion permits inside collusion and even outlandish demands by "celebrity" members whose advice does not necessarily come up to celebrity status.  
At $488,709 in 2011, Goldman Sachs had the highest director-pay of any American bank. Some of the bank’s 13 directors made more than $500,000 because they had extra board responsibilities. As the directors were paid in stock, 2012 promised to be an even better year for the board members. Compensation experts have stated that banks must pay a premium for what is essentially part-time work in order to get the best advice. However, JPMorgan, the largest American bank, gave its directors “only” an average of $278,194 in 2011. Bank of America paid its directors $275,000 each. Equilar reported that the average compensation for a director at one of the six largest American banks in 2011 was $328,655. This compares with $232,142 at almost 500 publicly-traded companies, according to Spencer Stuart, in spite of the fact that regulations had narrowed the responsibilities of bank boards.
One would think that compensation would reflect changes in the number of tasks even more than macro indicators of bank performance. “I get you have to pay up for sophisticated board, but what is that complexity worth?” said Timothy M. Ghriskey, co-founder of the Solaris Group, a financial services shareholder that voted in 2011 to reject a pay plan for top executives at Citigroup. “Does it take $200,000 or $500,000? The discrepancy between a board like JPMorgan and Goldman is confusing.”[2] I submit that it is confusing only from a rationalistic standpoint. 
The differential indicates that the matter is far more subjective than meets the eye. Collusion between upper management and its board may be happening. So when a compensation expert claims that a certain level is necessary, the claim can be questioned rather than taken at face value. In fact, the false-necessity may be a subterfuge used by insiders seeking to enrich each other. You scratch my back, and I’ll scratch yours. The dispersed stockholders are left with less.
In short, it can be doubted whether the director compensation levels at banks are necessary or even in the stockholders’ interest. The excess probably reflects the difficulty facing stockholders in holding the insiders accountable. Accordingly, one consequence of corporate governance reform may be reining in the pay for what is really a part-time job with fewer and fewer responsibilities. If very wealthy or renowned board members demand a premium, it is not justified in terms of corporate governance unless the advice is commensurately more valuable. 

See Essays on the Financial Crisis: Systemic Greed and Arrogant Stupidity, available at Amazon.

1. Susanne Craig, “At Banks, Board Pay Soars Amid Cutbacks,” The New York Times, April 1, 2013.
2. Ibid.

Saturday, April 20, 2019

Too Big To Fail: The U.S. Is Still at Risk

On March 20, 2013, more than two years after the Dodd-Frank financial reform legislation had become law, Federal Reserve chairman Ben Bernanke made it clear that the problem of too-big-to-fail banks had not been solved. “Too Big To Fail is not solved and gone,” he said in a press conference. “It’s still here.”[1] That is, providing an orderly liquidation process for bankrupt banks would be insufficient to keep the U.S. economy safe from even one of the biggest banks taking down the financial sector merely by going bankrupt. Congress should not have missed or minimized this point while working on the Dodd-Frank Act. The self-interested power of Wall Street in Washington and the need for campaign funds in Congress coalesced to dilute the law in spite of the detriment to the public good.
Suggesting that more legislation might be needed, Bernanke said, “Too Big To Fail was a major source of the crisis . . . and we will not have successfully responded to the crisis if we do not address that successfully.”[2] More would be needed to rid the U.S. economy of the threat of banks too big to fail. If holding more capital does not make the big banks safer, “we will have to take additional steps.” This, he said, “is important.” Yet somehow his voice was not adequately heard. Other voices were louder on Capitol Hill.
Meanwhile, Wall Street banks faced little downside. Because the mammoth size of big banks such as Citibank and Bank of America makes their failure a threat to the viability of the financial system and even the overall economy, such size is an advantage to the banks because the bankers can reasonably bet that the U.S. Government would have to bail them out even if they face financial ruin by having taken on too much risk as the economy sours. The sense of invincibility, plus lower borrowing costs, could lead big banks to not only stay big (or even get larger!), but also take bigger risks. Bankers at such banks may even feel free to commit fraud because U.S. Attorney General Eric Holder admitted in 2013 that large banks were nearly immune from government prosecution for crimes, given the risks to the economy from the failure of a convicted bank. What about the fraudulent bankers who sold "crap" while claiming the mortgage-based bonds were sturdy? In short, the risk taken on by a big bank could easily outstrip even the additional capital requirements in the Dodd-Frank Act.
Even apart from reckless banking at the top of the U.S. financial system, if a sizable market in the U.S., such as many of the housing markets, were to collapse all at once, as in 2007-2008, many banks would be hit. The additional reserves would not likely buttress individual banks from the domino effect that was evinced, albeit halted, in September 2008. I doubt that more money in reserves would have stopped the cascading momentum. While higher reserves might safeguard a bank while others are intact, the claim seems doubtful at best when the undercurrent from the momentum of many banks being hit at once or in a row is strong. It is no accident, I contend, that the Obama campaign of 2008 accepted $1 million from Goldman Sachs. 

1. Mark Gongloff, “Ben Bernanke: ‘I Agree With Elizabeth Warren 100 Percent’ On Too Big To Fail,” The Huffington Post, March 20, 2013.
2. Ibid.

Behind Corporate Loopholes: Wealth and Power

A company in the U.S. wants a tax loophole to apply. Starbucks, for example, wanted to be able to use the manufacturing deduction by stretching manufacturing to include the roasting of coffee beans. So in 2004 the company hired Michael Evans, a lobbyist at K&L Gates who just a year before had worked as a top lawyer on the U.S. Senate Finance Committee, which writes tax law. Evans was able to urge his former colleagues in the Senate to expand the definition of manufacturing to include roasting in a clause added to a 243-page tax bill called the American Jobs Creation Act. As you might imagine, Starbucks was not the only company to get a tax break written into that law. By 2013, the manufacturing deduction had saved Starbucks $88 million that the company would otherwise have had to pay in corporate income tax. In 2012, corporate tax breaks and loopholes added $150 billion in lost revenue for the federal government, increasing the budget deficit by that amount.[1] Three lessons can be gleaned from the hidden corporate loopholes. 
First, the damage done to the U.S. debt by corporate loopholes has been significant. While dwarfed by the debt incurred to finance the Iraq and Afghanistan wars ($2.4 trillion added to the debt by 2013), $150 billion of lost revenue from corporate tax benefits in 2012 alone is nonetheless significant. 
Second, the “insider influence” itself violates the principles of openness and fairness, which are so esteemed in a democracy. The many points of access to influence legislation can be abused by legislators and lobbyists alike through stealth dealings, sometimes literally in the middle of the night as a bill is about to be voted on. Ideally, the many points of access mean that various groups (and citizens) can reach legislators, not that the most powerful interests can abuse their ability to contact lawmakers for private gain (both to the interests and to the lawmakers, thanks to political campaign contributions). In fact, for a lobbyist, including a corporate lobbyist, to have disproportionate influence on a bill so as to make it financially beneficial to the lobbyist's clients can be reckoned a conflict of interest, because even the information supplied is apt to be biased. The many points of access are meant to dilute the influence of the private interests that stand to benefit most from loopholes. 
Third, the contacts that lobbyists have in government from having worked there themselves can play a major role in loopholes being granted, even in secret. Other self-interested parties cannot check the influence of the companies or industries that would gain most, so the private benefit gets away with eclipsing the public good. A law prohibiting former legislators and Congressional staffers from lobbying for at least ten years might make a dent in the inordinate insider influence of corporations in Congress. However, a former Speaker of the House such as John Boehner, who became a corporate lobbyist after resigning from Congress, would hardly see his private influence, and thus his earnings, diminished. Information that only insiders have sells. 
Like water, pent-up power naturally seeks its way around an obstruction in order to reach its objective. The influence of wealth inexorably finds its way into the halls of power, especially in democracies, as they have many points of access. This vulnerability is particularly great where candidates for public office must raise large sums of money to get elected. Asking the candidates to look the other way when a big donor is knocking at the door runs against human nature; even if laws prevent large donations, power finds its own way in the dark. The power both of candidates/lawmakers and of corporations can be so massive that space itself bends toward mutual objectives. Perhaps the question is whether trying to bend space back only slightly is worth the time and energy of passing a law. Although removing candidates' financial need for campaign funds (e.g., by public funding of advertising) could in theory take out part of the incentives on one side of the equation, corporations could tempt the incentive for private gain in other ways, such as with the promise of a lucrative job afterwards. 
In the end, the threat to democracy is the inordinate power from the concentration of private wealth, as in large corporations. The citizens are hardly focused in their collective use of power, so the insiders in government tend to be influenced inordinately by moneyed interests at the expense of the public good, the good of the whole.  

1 Ben Hallman and Chris Kirkham, “As Obama Confronts Corporate Tax Reform, Past Lessons Suggest Lobbyists Will Fight For Loopholes,” The Huffington Post, February 15, 2013.

See Institutional Conflicts of Interest, available at Amazon. Conflicts within the U.S. Government, in business, and between business and government are explored, as well as the very nature of an institutional conflict of interest. 

Thursday, April 18, 2019

Regulating Wall Street after a Financial Crisis

On Columbus Day 2011, The New York Times observed that the Volcker rule, “intended to limit trading when the bank's money is at risk, a sweet spot for banks, is seen as a centerpiece of the sprawling financial overhaul of the Dodd-Frank Act of 2010. In anticipation, the nation's biggest banks, like Goldman Sachs and Bank of America, have already shut down their stand-alone proprietary trading desks.”[1] Even so, the long and tortuous route by which any regulation is written was leaving its own mark, in the sense that promising loopholes were finding their way into the rule. In other words, the regulated would have a disproportionate influence on the writing of the regulations. This conflict of interest is dangerous because it leaves the financial system vulnerable to another crisis in which the greed on Wall Street knows no bounds. 
Regulators were leaving room for “significant changes,” according to the Times. Wall Street was “lobbying furiously to tame the Volcker Rule, holding roughly 40 meetings with various regulators, warning that the changes will eat into profits at a difficult time for banks.” Those banks were undoubtedly threatening to charge more to their customers if the rule weren’t weakened. “In essence, the [rule] would upend the banking industry's lucrative, yet risky trading system, forcing powerhouse investment banks to resemble sleepier brokerage firms.” It is difficult to see Morgan Stanley and Goldman Sachs readily becoming mere market-makers and deposit and loan banks without a fight. To be sure, Lloyd Blankfein did insist that his bank was only a market maker when he testified before Sen. Levin’s Senate committee after the credit freeze of 2008.
At the time the Volcker Rule was being proposed, it was already apparent that there would be some wiggle-room for the banks. "Unfortunately, this initial proposal does not deliver on the promise of the Volcker Rule or the requirements of the statute," said Marcus Stanley, policy director of Americans for Financial Reform, an advocacy group. In the proposal, “a number of controversial exemptions emerged. While the regulation prevents big banks from placing bets on many stocks, corporate bonds and derivatives, it exempts trading in government bonds and foreign currencies. The proposal also provided a path for getting around the ban, for instance, when banks hedge against risk that comes from carrying out a customer's trade. Market-making and underwriting are excused, too, though the line is often fuzzy between these pure client activities and proprietary bets.” Lastly, the proposal would allow “banks to hedge against theoretical or ‘anticipatory’ risk, rather than just clear-and-present problems.” Armed with their lawyers and astute financial wizards, Wall Street banks could conceivably continue with business as usual.
Exempted trading in government bonds and foreign currencies, plus hedging against even theoretical risk (presumably with anything), constitute an obstacle course that any Wall Street banker could run without breaking a sweat. With so much on the line and public scrutiny less potent at the regulatory stage, the financial-sector lobbyists could be expected to achieve just enough, and then some. Once again, systemic risk would not be a factor, and history could repeat itself.

See: Skip Worden, Institutional Conflicts of Interests, available at Amazon.

1. Ben Protess, “Banking Industry Revamp Moves Step Closer to Law,” The New York Times, October 12, 2011. 

Morgan Stanley: Systemic Mistrust or Bad Financials after the Financial Crisis?

"Morgan Stanley by any measure is a safe and solid investment bank. Except for one: The amount of trust people have in the whole financial and political system. It's just about zero,” according to Jesse Eisinger of The New York Times in October 2011. Even as there is undoubtedly an element of hyperbole in his conclusion—for zero trust in the financial system and governments would occasion far greater problems than the world faced at the time of Eisinger’s report—his broader point that bankers would be held accountable one way or the other for not having learned their lesson on derivatives (and risk more generally) is valid. The subtext is that even though banks like Morgan Stanley were in actuality in solid financial shape, they deserved the negative repercussions from the systemic skepticism that the banks themselves brought about by virtually ignoring risk analysis in preference to a run of profits and (not coincidentally) bonuses.
Eisinger points out that, at least as of October 2011, Morgan Stanley “has almost $60 billion in common equity, compared with $36 billion before September 2008, and its ratios are stronger. Its trading book - which is volatile and where any bank can take sudden, large losses - is smaller than it was. Morgan Stanley has more long-term debt and higher deposits, both of which stabilize its finances. The bank has more cash available in case there's a crunch and a smaller amount of Level III assets, which don't have an independently verifiable value and so must be estimated by the bank. Hedge funds have parked a smaller amount of assets at Morgan Stanley. That's good because in the financial crisis, they pulled them from the bank.” But because all of this could be easily wiped out by a run on the bank occasioned or fueled by a wider mistrust of the financial sector, Eisinger brings up the topic of derivatives as a way of showing that the bankers did not in fact learn their lesson (i.e., all the improved stats may be for naught). Accordingly, the bankers deserved the systemic mistrust, even at the cost of whatever added financial strength their efforts had produced.  
According to the reporter, Morgan Stanley had a face value of $56 trillion in derivatives in October 2011. He notes that JP Morgan Chase had more: a face value of $79 trillion, a sum that dwarfs the GDP of any single country. Even though the bankers insisted at the time that they had adequately hedged their long positions, the hedges themselves could fail, especially if the derivatives are positively correlated, as in September 2008 when AIG was completely overwhelmed due to the housing-based derivatives caving in virtually all at once.
In other words, those of us capable of learning lessons know that we should not trust hedges in so far as systemic risk is concerned; the system itself can be overwhelmed by the sheer momentum of a really big wave. So we are back to the issue of trust in the entire financial system, which is and ought to be a drag on even stellar financials until the broader lesson is learned. Unfortunately, that lesson may not be in the immediate financial interest of particular banks due to externalities occasioned by moral hazard (e.g., the possibility of being rescued while another bank, such as Lehman, fails).
Even though governments can step in to protect the broader system (unless captured by the regulated), legislators and regulators cannot force bankers to learn their lesson. A mentality to safeguard even one’s own bank as a going concern cannot be imposed; it must be felt and valued from the inside. All too often, bankers are engaged in “managing” regulations as impediments to be minimized rather than stepping back to ask why the regulations exist in the first place. They might exist for the banks’ own good. If so, the banking lobby trying to water down the Volcker Rule might have been working at odds with those institutions that the lobby ostensibly represents. Be careful what you wish for, Wall Street bankers. You might just get it, especially if you have the gold and therefore can make the rules. It would be ironic if the protesters rather than yourselves had your back, even as you ridicule the masses marching below your towering be-windowed edifices of greed.

Source:

Jesse Eisinger, “Between the Lines, Wall St. Banks Face a Deficit of Trust,” The New York Times, October 12, 2011. 

Thursday, April 11, 2019

Disenfranchising an Electorate: Using Legal Language on Referendums

Popular sovereignty, the ultimate sovereignty of a people as a whole, is typically exercised by an electorate at the ballot box. Such sovereignty is above that of governments (i.e., governmental sovereignty), which might come as a surprise given how little voters actually decide. Typically, the will of the people is limited to filling public offices by selecting among candidates or write-ins. In the last few decades of the twentieth century, California effectively expanded the power of popular sovereignty by adding a number of referendum questions to its ballots, but even those questions have not come close to covering the full spectrum of major policy issues, which are typically left to the office-holders: the agents of the People. Even though the popular sovereign (i.e., the direct will of the people) can make mistakes, such as requiring a 2/3 legislative majority to pass a tax increase in California, the expansion from merely filling public offices to actually making basic public-policy decisions is, from a democratic perspective, a good thing. The key is to make the questions broad enough that judgment, rather than technical expertise or specialized knowledge, is called for. This effectively enfranchises at least the vast majority of an electorate, as nearly everyone is capable of judging among competing values, whereas only a small percentage of people are highly educated in any given society, even in advanced industrial states. 

The problem, it seems to me, lies in how the policy questions on a ballot are written. In particular, they must be written in such a way that they are understandable to the typical voter. Writing a question, whether on policy, law, or a constitutional amendment, in legalese circumvents the expansion in popular sovereignty. Such an approach defies common sense itself, and yet the Florida legislature did just that in 2012, placing the Florida electorate in a nearly impossible position as the popular sovereign. Perhaps the legislators knew that the incomprehensible legalese would effectively safeguard their existing power.

Concerning the 2012 election in Florida, The Florida Times-Union in Jacksonville wrote that much “of what’s on the ballot is legalese and difficult-to-understand wording associated with the amendments.” The newspaper advised Florida’s citizens, “To save time, it will help to know what each amendment does, how you want to vote, and if a ‘Yes’ or ‘No’ achieves that desired vote. In short, be prepared.” Much too much is assumed in this advice concerning the wherewithal of the typical citizen to make sense of the technical legal words, and even to research each question before voting in order to understand what the technical language means (assuming that the typical voter is going to wade through even the newspaper’s own deciphering). In general, assuming too much of an electorate reflects negatively not on the electorate but, rather, on the legislators who crafted and approved the ballot’s language.

For example, the matter of the third proposed amendment on the Florida ballot was put to the voter in the following words (from the ballot): “This proposed amendment to the State Constitution replaces the existing state revenue limitation based on Florida personal income growth with a new state revenue limitation based on inflation and population changes. Under the amendment, state revenues, as defined in the amendment, must be deposited into the budget stabilization fund until the fund reaches its maximum balance, and thereafter shall be used for the support and maintenance of public schools by reducing the minimum financial effort required from school districts for participation in a state-funded education finance program, or, if the minimum financial effort is no longer required, returned to the taxpayers.” Besides the basic, rather obvious point that the typical voter could not possibly be expected to understand this language (and yet someone approved it nevertheless!), the language assumes that the voter knows what goes into the existing revenue limitation (and can thus compare it with basing a limitation on inflation and population changes). Furthermore, the voter is assumed to be familiar with what a budget stabilization fund is, and able to assess its dynamic (e.g., maximum balance, etc.). The proposal is so specific, moreover, it may be misplaced as basic or constitutional law rather than as a mere statute. Indeed, the decision on the question is more along the lines of governmental than popular sovereignty (i.e., the elected representatives, who write laws and thus either understand the legalese or have a staff that does).

Supposing perhaps that the typical voter has a real estate brokerage license, the fourth proposed amendment on the ballot states in part: “In certain circumstances, the law requires the assessed value of homestead and specified nonhomestead property to increase when the just value of the property decreases. Therefore, this amendment provides that the Legislature may, by general law, provide that the assessment of homestead and specified nonhomestead property may not increase if the just value of that property is less than the just value of the property on the preceding January 1, subject to any adjustment in the assessed value due to changes, additions, reductions, or improvements to such property which are assessed as provided for by general law.” The legislators erred, in my view, in projecting their own language onto the general public, and, moreover, in conflating what is actually a statute with constitutional language, which is (or at least should be) much broader.

Another proposed “amendment” was for “an exemption from ad valorem taxes levied by counties, municipalities, school districts, and other local governments on tangible personal property if the assessed value of an owner’s tangible personal property is greater than $25,000 but less than $50,000.” Besides the Latin term and the legal jargon, the specification of dollar figures is clearly statutory rather than constitutional. Furthermore, how many Floridians knew what “ad valorem taxes” were?

To be sure, some items of detail should be decided by the People’s agents because broad judgment is not required. For example, could not the legislators have been entrusted with making the decision on whether to “replace the president of the Florida Student Association with the chair of the council of state university student body presidents as the student member of the Board of Governors of the State University System”? Is this even constitutional language? The typical voter might legitimately have wondered: what is the council of state university student body presidents, and is it really much different from the Florida Student Association? Moreover, why am I being asked to decide this? Because popular sovereignty is superior to governmental sovereignty, that which a legislature puts to the electorate to decide must reach a certain threshold of importance. Deciding between contending student representatives so obviously does not meet this test that one might wonder whether the Florida legislature was fit even to legislate, much less address matters to the agents’ principal: the voters as a group.

The truly unfortunate thing about Florida’s bastardization of popular sovereignty is that sensible proposals to expand popular sovereignty could face unnecessary hurdles based on botched attempts to have an electorate decide on proposed constitutional amendments. In other words, by using legalese, the Florida legislators set the electorate up to fail. Going back to limiting popular sovereignty to the selection of candidates would suffer from the fact that so many reasons go into why voters elect a person to an office that it is impossible to say that a majority of the voters have expressed a general will on a given policy by electing a particular candidate.

Here’s what the Florida legislature missed: Whereas the typical voter does not have a background in real estate, accounting, and law, he or she could be expected to reflect on and answer questions such as: should abortion be illegal, should the U.S. give aid to Israel, should military spending be cut, should Florida provide subsidized health insurance to residents unable to afford it, and should property taxes be cut to reduce the deficit or raised to add funding for roads and education? A legislature could even add some non-legal terms to such questions to clarify them without losing the typical voter, and testing such questions with focus groups could add confidence that the legislators have not unintentionally projected too much of their own world into the wording. Additional specificity could be handled by the agents (i.e., the legislators), as per the nature of principal-agent relationships.

In short, expecting too much from the electorate is not only utterly unfair to the voters, it also risks undercutting real progress on popular sovereignty, an electorate being fully capable of deciding general policy and even law, with the legislative agents then being tasked with writing the expressed general will into legal language. Yet legislators have so much power in deciding major policy that being mere implementers would surely be a let-down. Hence they would resist efforts to enhance the People’s exercise of popular sovereignty.


Sources:

Matt Dixon, “Florida Constitutional Amendments: Voter’s Guide,” The Florida Times-Union, October 13, 2012.

Misconceptions of the E.U. Budget

Could it be that at least some of the British voters who were in favor of secession from the E.U. held misconceptions of the federal budget? If so, perhaps the antagonism was unduly harsh in the referendum.  
So many misconceptions have existed regarding the E.U.’s budget that the European Commission published a “myth-buster” page on its website in 2013. As against the claim that the E.U.’s budget was enormous, for example, the Commission pointed out that the 2011 budget was about €140 billion, while the combined budgets of the 27 states were €6.3 trillion. In fact, the E.U.’s budget was smaller than the budgets of medium-sized states, such as Austria and Belgium. Whereas the E.U. budget represented about 1% of the E.U.’s GDP (the total value of all goods and services produced in the E.U.), the typical state’s budget was 44% of that state’s GDP. Relative to economic activity, the E.U. budget was not enormous, the Commission concluded.
In terms of the growth of the E.U. budget, the Commission pointed out that between 2000 and 2010, the state budgets had increased by 62% while the E.U. budget had increased by only 37 percent. Lest it be argued that the state budgets had been more democratically determined, the European Parliament, the members of which are directly elected by E.U. citizens, must approve the E.U. budget.
Regarding the misconception that most of the E.U. budget went to administration, the Commission pointed out that administrative expenses amounted to less than 6% of the total 2011 E.U. budget, with salaries accounting for half of that 6 percent. More than 94% of the budget, according to the Commission, “goes to citizens, regions, cities, farmers and businesses.” In this regard, the federal spending was not much different from state spending. In fact, state and local officials typically selected the E.U.-sponsored projects best suited to the officials’ respective areas.
Lastly, regarding the misconception that most of the E.U. budget has gone to farmers, direct aid to farmers and market-related programs was just 30% of the budget in 2011, and rural development spending was only 11 percent. For perspective, around 70% of the EC’s budget in 1985 was spent on agriculture. Put another way, the E.U. had diversified, hence reaching more citizens.
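The Commission's comparisons can be checked with quick arithmetic. The sketch below uses only the figures cited in this section (the 2011 E.U. budget of roughly €140 billion, combined state budgets of €6.3 trillion, and the 2000-2010 growth rates); the variable names are my own.

```python
# Quick arithmetic check of the budget comparisons cited above.
eu_budget = 140e9        # 2011 E.U. budget, in euros (~EUR 140 billion)
state_budgets = 6.3e12   # combined budgets of the 27 states (~EUR 6.3 trillion)

# The E.U. budget as a share of all public budgets in the union:
share = eu_budget / (eu_budget + state_budgets)
print(f"E.U. share of total public spending: {share:.1%}")  # about 2.2%

# Growth comparison, 2000-2010: state budgets +62%, E.U. budget +37%.
state_growth, eu_growth = 0.62, 0.37
print(f"State budgets grew {state_growth / eu_growth:.1f}x as fast")  # about 1.7x
```

The roughly 2% share underscores the Commission's point: even counting the E.U. budget alongside all state budgets, federal spending was a small fraction of public spending in the union.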

Source:

“Myths and Facts,” E.U. Commission.

Friday, April 5, 2019

On the Unitary and Imperial American Presidency

In December 2009, Abdullah II, King of Jordan, dismissed the prime minister and replaced him with a palace aide and loyalist, dissolved Parliament, and postponed legislative elections for a year. For all the defects of a representative democratic system, it is far superior to autocratic rule, especially by a dictator. It is natural for people to resist such preemption of their role. “The nature of humans is they want democracy,” said Ali Dalain, an independent member of the Parliament that was dissolved. “One person cannot solve all problems and cannot make everyone happy, so people must share in determining their fate.”[1] These quotes are revealing from the standpoint of the unitary and imperial American presidency. 
   
Regarding “one person cannot solve all problems,” the American theory of the unitary executive and, moreover, the imperial presidency can be challenged. The unitary executive means that one person as president is better than, for example, a presidential council. In a council, it may be difficult to reach a final decision, which is a drawback especially in times of emergency. Hence, the president's role as commander in chief has been tied to the unitary executive model. However, the emergency card has, I submit, been overplayed. A better reason is that a final say may be needed on contending military plans, but a majority vote of a council could supply one. Most importantly, one person can be wrong, even in military matters. Would President George W. Bush have been able to link Iraq to the attack on September 11, 2001, and thus invade and occupy that country for years, had a presidential council had to sign off? To be sure, only Congress can declare war, for it would be a conflict of interest for the commander in chief to do so. Yet the fact that such commanders have been able to begin military engagements unilaterally means that the problem of one person being wrong should be taken seriously.  

The imperial presidency refers to the increase in presidential power in the twentieth century in the U.S. This has been at the expense not only of Congress, but also the state governments, given the federal power of preemption. In proposing laws, the president depends theoretically on Congressional leaders to steer the legislation through the lawmaking machinery. Should the Congress pass an alternative, the president can veto it, yet this does not mean the president's own proposal becomes law. So, constitutionally, the relationship seems balanced, and ample opportunity for voices exists. Even so, the president has an edge on Congress in that the latter goes on recesses whereas the West Wing is always working (though the same could be said of congressional staffs). So more to the point, the president is nearly always in the spotlight--relative even to individual senators--and thus can mold public opinion. 

Given the increased power of the presidency, it can be argued that too much power has come to be in the hands of one person. Human nature may not handle wielding so much power very well; the Stanford prison experiment of 1971 on the abuse of power testifies to the problem. Whereas the presidency may have a figurehead without running into this problem, spreading out the power may fit better with how humans are constituted, especially those humans who suffer from ailments such as malignant narcissism. A presidential council could put a check on such a person, especially if he tends to lose control of his urges of the moment at the risk of the reputation, at least, of the presidency and the U.S.  

Disassociating the presidency from "one person" could also dispel any associated hero worship that has held on from ancient king-worship. This tendency is evinced not just when a president is sworn in, but also when he gives the State of the Union address. Contributing to the problem, the media obsesses over his every move, including what he is doing on vacation.  

1. Michael Slackman, "Jordan's King Remakes His Government," The New York Times, December 22, 2009. 

Should Health Care Be a Right?

In the Spring of 2019, President Trump promised that a Republican alternative to "Obamacare" would soon be unveiled; the majority leader of the U.S. Senate, Mitch McConnell, quickly informed the president that the prospects of such legislation passing the Democratic-controlled U.S. House were zilch. This virtually guaranteed that health care would play a salient role in the upcoming 2020 presidential race. The underlying question, I submit, has been whether health care ought to be a right, which the government would be obligated to ensure. Such a right would obviously not be one of those that hold government back (e.g., the right to liberty). Whether ensured by government or holding government back, a right is by its nature to be respected by others, whether individuals, organizations, or the state. Such respect, being an obligation, constrains those others. Hence, health care as a right has been controversial in the U.S. 
The senior U.S. Senator from Illinois, Dick Durbin, said the following just before one of the votes in December 2009 on the Affordable Care Act, the health-insurance reform legislation initiated by President Obama: “Thirty million Americans who currently don’t have health insurance have the peace of mind of knowing that they have health insurance,” Mr. Durbin said. He added, “This is a real debate over whether or not health care is going to be a right or a privilege in America.”[1] By using the word privilege, Sen. Durbin was implying that having access to health care on the sole basis of whether a person has money is unfair. 
If being wealthy is a good indication of being worthy of survival, then it may be assumed that health care for all, whether through private, non-profit, or government insurance, would undermine survival of the fittest. This in turn takes fit to mean strong or good. Were the humans in the financial sector before the financial crisis of 2008 strong or good? Does not fraud point to an underlying weakness? When Dick Fuld was CEO of Lehman Brothers before it collapsed, was he a strong leader or a pitiful man whose ambition got the best of him? 
In "survival of the fittest," fit has to do with fitting in with a changed environment. Such fitness, or fit, is on nature's terms rather than necessarily according to our notions of strong and good. For instance, a young drug dealer in a large city may have twelve "baby mamas." This means that the man had impregnated twelve women, who had been attracted to him on some basis that they valued. The sheer number of offspring suggests that the man was successful in reproducing himself; he thus fit well in his environment on this nauralistic basis. If survival of the fittest lies the availability of health care, should that man be covered while a poor religious man who has contributed to society without earning much money or having children should not? 

See also "Congressional Cuts to Foodstamps: Violating a Human Right?"

1. David Herszenhorn and Robert Pear, "Parties Stay United as Health Bill Clears Steps in Senate," The New York Times, December 22, 2009.