DRI-128 for week of 12-28-14: The Student-Loan Bubble: Blackboard Economics Strikes Again

An Access Advertising EconBrief: 

The Student-Loan Bubble: Blackboard Economics Strikes Again

The subjugation of print and broadcast news media by the Internet has changed many aspects of the news business, but crisis mode still predominates. The financial crisis gave the Great Recession its headline stories, the biggest of which were the housing bubble and accompanying subprime-mortgage loan scandals. Ever since then the New Media have been beating the bushes for their next big crisis. The front-running nominee seems to be the impending student-loan debacle.

News outlets across the political spectrum have vied for the shrillest note of alarm in detailing the deplorable state of the student-loan market. In fact, this use of the term “market” is highly stylized, similar to its use in government bonds, home mortgages and military defense. Ever since 2010, the federal government has effectively monopolized the market for student loans obtained for the purchase of higher education. This monopoly operates in a manner analogous to the federal monopoly on home mortgages enjoyed by Fannie Mae and Freddie Mac. And a popular consensus has formed around the idea that the student-loan result will duplicate that in home mortgages; namely, a bubble with disastrous consequences for the economy at large.

The Outlines of the Disaster in the Making

Why is everybody all het up? Here are the outlines of the disaster as the various sources see it coming:

Between 2003 and 2013, college tuition rose by almost 80%. To put this rise in perspective, consider that it is roughly double the rise in the average cost of medical care over the same period. That sounds bad, but a price increase is not a bad thing ipso facto. Marked increases in quality can account for a higher price, for example, by causing increases in demand. Demand can increase for other reasons, too. Might higher tuition derive from these causes?

Surveys have demonstrated that tenured professors now average between six and nine hours of teaching per week at public universities, compared to the former average of nine to twelve hours. A majority of courses are now taught by non-tenured faculty, consisting of full-time, non-tenured faculty members, part-time adjunct faculty and graduate students. In order to believe that students are now receiving higher-quality teaching, we must believe that this motley mix consists of better teachers than the older, better-credentialed tenured faculty members.

On its face, that proposition seems wildly far-fetched. In fact, it is not at all unusual to observe younger, non-tenured faculty members winning teaching awards and popularity polls. But this still doesn’t make a case for higher-quality education, for if it were generally true then it would argue for the separation of teaching and research altogether. And indeed, this may well be the optimal organization of labor in higher education. We will never know until we deregulate the industry, cutting off all government funds and allowing markets to determine the question.

The real source of tuition increases is not an organic increase in demand for higher education. Since 1999, the total volume of student loans has grown by 511%. Thus, the felt, effective demand for higher education has increased dramatically because it is an artificial, subsidized demand.

This has led to about $1 trillion being allocated to student loans – more money than is currently tied up in consumer credit-card debt. The so-called average individual owes about $24,000. This form of personal debt is very tenacious. Unlike most forms of personal debt, it cannot be discharged in bankruptcy. Only death can extinguish it.

Not surprisingly, the high rate, volume and burden of student-loan debt have produced defaults on that debt. Some $146 billion worth of student-loan defaults have been recorded to date. The default rate stands at its highest level since 1996.

Where there is default, there are those seeking to deflect it. There is even a term used to describe this practice; it is called forbearance. A Wall Street Journal op-ed (“The Hidden Student-Debt Bomb,” by Jason Delisle, WSJ 12/31/14) describes the practice and its spread.

Forbearance is the generic term for various means of avoiding on-time payment of student loans. Of the total $1 trillion in student loan debt outstanding, the amount in forbearance is $125 billion and rising.

The standard forbearance benefit is usually granted by the company servicing the loan. The student calls the company and requests forbearance. Upon approval of the request, the student receives a postponement of payments for as long as three years. Since this benefit is granted at the discretion of the company, there need be no qualifying criteria, although there are sometimes income qualifications.

Forbearance can also be used to cure a delinquency status. As author Delisle notes, this becomes rather quaint – accrued interest is added to the loan principal, so that the initial too-onerous-to-pay amount is bulked up considerably by the time the next payment comes due. Really, then, Delisle argues, forbearances should be treated as equivalent to delinquencies and defaults rather than as a treatment for them. Thus, their steady upward march in recent years (12.5% of loans in repayment in 2006, 13.3% in 2013 and 16% of the $778 billion in repayment today) is reason for alarm.
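
A back-of-the-envelope sketch in Python, using hypothetical loan terms rather than any figures from Delisle, shows how a balance balloons when payments stop and unpaid interest is capitalized:

    # Hypothetical example: a $24,000 balance at an assumed 6% annual rate,
    # three years of forbearance with no payments and interest capitalized.
    balance = 24_000.00
    rate = 0.06
    for year in range(3):
        balance += balance * rate      # unpaid interest is added to principal
    print(round(balance, 2))           # about 28,584 - roughly 19% more than the original debt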

Another technique bases forbearance on ability to repay, or income. At an income of 150% of a poverty-level income or less, payment is zero. As income rises, payments rise on an ascending scale to between 1% and 15% of income. At some point – either 10, 20 or 25 years, depending on the precise details of the program – remaining debt is forgiven completely and taxpayers pick up the remaining tab. The interesting feature of income-based programs is their separation from standard amortization principles of debt repayment. For example, the most generous income-program cases do not even cover the accrued interest on the student loan! Thus, there is really no pretense that the loan will ever be repaid under these circumstances – the program just gives the student a thin layer of epidermis in the game. According to Delisle, “the Obama administration estimated in 2012 that the average amount forgiven in income-based repayment plans will be $41,000 per borrower” (!).
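
A stylized sketch of such an income-based schedule appears below; the bracket shape and the poverty guideline are illustrative assumptions, not the actual program parameters.

    # Illustrative sketch only: the ascending scale and poverty guideline are assumed.
    POVERTY_LINE = 11_670                      # assumed single-person poverty guideline

    def annual_payment(income):
        if income <= 1.5 * POVERTY_LINE:       # at or below 150% of poverty: pay nothing
            return 0.0
        # payment rises with income on an ascending scale, capped at 15% of income
        share = min(0.15, 0.01 + (income - 1.5 * POVERTY_LINE) / 500_000)
        return share * income

    print(round(annual_payment(25_000)))       # about 625 a year - less than the roughly
                                               # 1,440 of annual interest on a 24,000 loan at 6%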

Since those loans represent expenditures financed by the federal government, the money was acquired by the federal government in one of the three standard (and exclusive) ways: by taxing, borrowing and “printing” (i.e., creating money). That means that taxpayers have already paid for it or will pay for it in the future. If students do not repay their loans, taxpayers will bear the burden. Then again, even if students do repay the loans, the only form of “repayment” taxpayers get is reimbursement to the Treasury, which defrays future expenses on some other boondoggle. But even if you discount the dubious notion that taxpayers are repaid by students, it is clear that the practice of forbearance encourages students to take on heavy debt loads and later shed them at taxpayer expense.

One obvious paradox is that the overall U.S. economy has been improving recently while delinquency, default and forbearance on student loans have been increasing. This makes no economic sense, and it leads Delisle to wonder whether the purpose of student loans is political rather than economic.

Regular readers of this space should already have reached that conclusion by this point. Before broaching this issue fully, we should pause to ponder the question: Exactly what is the economic purpose of student loans for higher education underwritten by the federal government?

The Orthodox Economic Case for Government Subsidies to Higher Education

The economic case for subsidies to higher education by government can be found in virtually every undergraduate economics textbook. It is cited as an example of a positive externality. Ordinarily, an economic transaction involves a buyer and a seller – the benefits of the good being purchased are confined to the buyer and the costs of production are incurred by the seller. Education, it is claimed, benefits everybody, not just the student. So, there are benefits “external” to the parties immediately involved in the purchase, making the externality a “positive” one. (The presence of external costs would be a negative externality; pollution flowing from a production site would be an example.) Because students take only their own future benefits into consideration when weighing an investment in their human capital, they will not purchase enough education. It is up to government to subsidize education to make up for this inherent flaw in the free market. True, you and I are forced to pay for the education of others, but that is justified by the benefits we receive from their education – better goods and services that they produce, better conversation that they make with us, better government that they give us and more.

Even the apostle of free markets and laissez-faire, Milton Friedman, gives lip service to this argument in his treatise Capitalism and Freedom. And just as Keynesians like Ben Bernanke cite Milton Friedman’s slightest obiter dictum as support for their loose-money policies, so have government spenders cited him in support of spending on higher education.

This is a classic case of what the late Nobel laureate Ronald Coase called “blackboard economics.” Teachers develop an argument on the blackboard and “prove” it using the assumptions they assert under the terms of their model. Then – because they probably had a vested interest to promote in the first place – they proceed to promote policies that are based on its validity. Coase described the practice this way:

“Economic policy involves a choice among alternative social institutions, and these are created by the law or are dependent on it. The majority of economists do not see the problem in this way. They paint a picture of an ideal economic system and then, comparing it with what they observe (or think they observe), they prescribe what is necessary to reach this ideal state without much consideration for how this could be done. The analysis is carried out with great ingenuity but it floats in the air. It is, as I have phrased it, ‘blackboard economics.’ There is little investigation of how the economy actually operates, and in consequence it is hardly surprising that we find…that the factual examples given are often quite misleading.”

Coase cited two famous blunders by Nobel Prize-winning economists. Paul Samuelson, author of the all-time bestselling economics text, followed the precedent set by several generations of economists going back to John Stuart Mill in the 19th century by flatly stating that lighthouses were an example of a positive externality and could only be provided by government, never privately in a free market. In reality, private lighthouses flourished for centuries. James Meade declared that bee pollination of orchards could never be handled by free markets, blithely overlooking the fact that beekeeping in the U.S. had done just that for many decades at the time (the early 1950s) that he wrote. Ironically, despite his suggestive term, Coase never applied his logic to higher education itself.

Coase implies strongly that the problem with “blackboard economics” is a lack of empirical investigation. He was trained at the London School of Economics and taught for many years in the Law School at the University of Chicago. Thus, he was exposed to the influence of two famous philosophical positivists, John Neville Keynes (father of John Maynard Keynes) and Milton Friedman. Both men developed a school of economic logic and practice that was very widely taught and practiced within the profession. It preached that economists should not only develop hypotheses but test them empirically using formal statistical inference. Only those hypotheses that pass the tests – that is, the ones that are empirically sound – should be vetted for policy purposes.

This philosophy is purportedly based on the habits developed by the natural sciences – physics, biology, chemistry et al. It sounds – or, more precisely, sounded – attractive, which accounts for its onetime dominance of the profession. It now lies in ruins. Few theorists pretend to “test” economic hypotheses today, although everybody goes on mechanically employing statistical tools and looking for new ones. The concept of “statistical significance” today brings a blush to professional cheeks after its scandalous misuse by generations of social scientists.

Coase made a minor point, all right; economists were arrogant for not at least peeking out the windows of their ivory towers before applying the theories they formulated so carelessly. But the decisive point is theoretical, not empirical. The externalities argument is badly reasoned in the first instance. Coase himself proved this when he laid the groundwork for the so-called “Coase Theorem,” which shows that when transactions costs are disregarded, the existence of an externality does not make a case for government involvement. The two parties involved have an incentive to bargain their way to a solution.

The positive externality argument for government subsidies to higher education has an even bigger hole in it, one big enough to drive a truck holding $1 trillion through.

When Is An Investment Not An Investment?

When we stand back and view the positive externality argument and today’s reality of student-loan spending by government in some sort of perspective, it is blindingly obvious that something is missing. Something vital was overlooked all along in the mad rush to get money in the hands of students. What was it, exactly?

After Forbes Magazine published one of the cautionary articles referred to earlier, a young student sent in a dissenting response. His economic arguments were chillingly naïve: Since the loans are made and supported by the government, the private sector is “protected” against the fallout from default, unlike the case with defaults on subprime mortgage loans; the loans are not securitized via derivative assets and thus have less potential for harm. But most telling of all is his closing comment that “after all, education is not a cost, it is an investment.”

Incredible as it seems, this is the same hazy-crazy-lazy blue-skies frame of mind with which economists themselves have approached the subject. Let us rectify this carefree, careless approach with some incisive thinking. Education is a good. The purchase of education by an individual is an investment that entails a cost. The cost is the highest-valued alternative foregone by that individual in the purchase as it is viewed BY THAT INDIVIDUAL. The benefit is the discounted present value of the future benefits expected to accrue from the human capital created as they are viewed BY THAT INDIVIDUAL. Nobody else’s views matter in evaluating this investment. Nobody else can evaluate the benefits because they are his or her benefits – nobody else’s. Nobody else can evaluate the cost because it is his or her cost – nobody else is foregoing the alternative(s).
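
In symbols, the individual’s calculation is just a present-value test. A sketch of the standard formulation (the notation here is ours, not the author’s):

    NPV = \sum_{t=1}^{T} \frac{B_t}{(1+r)^t} - C_0

Here C_0 is the value of the highest-valued alternative the individual gives up, B_t is the benefit he or she expects in year t from the added human capital, and r is his or her personal discount rate. The education is a sound investment, for that individual, only if NPV is positive.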

Are there other people who gain in some way from that individual’s education? Fine – let them subsidize his or her education, if they want to. If they want to run the risk that he or she will purchase too little education, let them run it. If they don’t perceive sufficient benefit to them from his or her education to subsidize it, then the only sensible policy is to treat that external benefit as negligible. In practice, we see various people and institutions willing to subsidize the educations of others.

Now the shortcoming of the current system sticks out like the proverbial sore thumb. As it stands now, the education decision is NOT an investment – because the individual making it considers only benefits, not costs. By manipulating the system, the student-borrower can slide out from under a very substantial proportion of the nominal cost.

And that’s not all. The investment decision is further distorted by the fact that, when the buyer perceives the cost to be zero or very low, the quantity demanded will be very high. Thus, the price – tuition – will be driven artificially high. Expansion of capacity – that is, supply – comes rather slowly because public funding to build more universities or expand existing ones comes from legislatures, while private universities are funded largely from endowments.

The Political Basis of the Current System

Delisle’s conjecture about the political basis of the current system is well-founded. Conservatives sometimes act as if sin originated with the election of Barack Obama in 2008, but iniquity in public education goes back over a century. The economists who formulated the positive externality theory worked for the government, as do most economists today. The 20th century saw education become a captive of the state. The subjection of students via student loans is only the latest foray by a marauding government.

The current design of student-loan programs is not the result of laxity or well-meaning over-generosity, but of political calculation. The concept of “predatory lending” has absolutely no meaning in a private, free-market economy because private, profit-seeking businesses have no incentive to write bad loans. But government does, and the student-loan program is the locus classicus of predatory lending à la government. Its purpose is to entrap students in loans from which they have no alternative except to default. Government is both their benefactor – for “giving” them a college education – and their savior – for rescuing them from financial ruin and penury with forbearance. Thus, government has now created a built-in, guaranteed constituency. Moreover, this new constituency comes complete with an army of bureaucrats that also owe their jobs to government. Bureaucrats are needed first of all to administer the loans – to fill out the forms, check eligibility (as if!), recruit new borrowers and keep the loans flowing; bureaucrats are needed later on to administer the forbearance phase in which de-facto defaults are carefully managed and nurtured to their soft landings.

Both these new constituencies will vote for big government forever.

And “forever” lasts just as long as it takes for the money to run out and the ultimate financial debacle to take down the whole monetary and financial system.

Bust Up the Blackboard 

Most utter debacles come about in spite of economic theory and logic. This one was carefully engineered with the aid of economics and economists. We cannot fine-tune our way out of this disaster. The only way out is to privatize education at all levels. Severing the financial lifeline of these subsidies is the only way to kill this two-headed student-loan beast that devours our real income with each mouth.

DRI-228 for week of 10-5-14: Can We Afford the Risk of EPA Regulation?

An Access Advertising EconBrief:

Can We Afford the Risk of EPA Regulation?

Try this exercise in free association. What is first brought to mind by the words “government regulation?” The Environmental Protection Agency would be the answer of a plurality, perhaps a majority, of Americans. Now envision the activity most characteristic of that agency. The testing of industrial chemicals for toxicity, with a view to determining safe levels of exposure for humans, would compete with such alternative duties as monitoring air quality and mitigating water pollution. Thus, we have a paradigmatic case of government regulation of business in the public interest – one we would expect to highlight regulation at its best.

One of the world’s most distinguished scientists recently reviewed EPA performance in this area. Richard Wilson, born in Great Britain but long resident at Harvard University, made his scientific reputation as a pioneer in the field of particle physics. In recent decades, he became perhaps the leading expert in nuclear safety and the accidents at Three Mile Island, Chernobyl and Fukushima, Japan. Wilson is a recognized leader in risk analysis, the study of risk and its mitigation. In a recent article in the journal Regulation (“The EPA and Risk Analysis,” Spring 2014), Wilson offers a sobering explanation of “how inadequate – and even mad and dangerous – the U.S. Environmental Protection Agency’s procedures for risk analysis are, and why and how they must be modified.”

Wilson is neither a political operative nor a laissez-faire economist. He is a pure scientist whose credentials gleam with ivory-tower polish. He is not complaining about excesses or aberrations, but rather characterizing the everyday policies of the EPA. Yet he has forsworn the dispassionate language of the academy for words such as “mad” and “dangerous.” Perhaps most alarming of all, Wilson despairs of finding anybody else willing to speak publicly on this subject.

The EPA and Risk 

The EPA began life in 1970 during the administration of President Richard Nixon. It was the culmination of the period of environmental activism begun with the publication of Rachel Carson’s book Silent Spring in 1962. The EPA’s foundational project was the strict scrutiny of industrial society for the risks it allegedly posed to life on Earth. To that end, the EPA proposed “risk assessment and regulations” for about 20 common industrial solvents.

How was the EPA to assess the risks of these chemicals to humans? Well, standard scientific procedure called for laboratory testing that would isolate the chemical effects from the myriad of other forces impinging on human health. There were formidable problems with this approach, though. For one thing, teasing out the full range of effects might take decades; epidemiological studies on human populations are commonly carried out over 10 years or more. Another problem is that human subjects would be exposed to considerable risk, particularly if dosages were amped up to shorten the study periods.

The EPA solved – or rather, addressed – the problem by using laboratory animals such as rats and mice as test subjects. Particularly in the beginning, few people objected when rodents received astronomically high dosages of industrial chemicals in order to determine the maximum level of exposure consistent with safety.

Of course, everybody knew that rodents were not comparable to people for research purposes. The EPA addressed that problem, too, by adjusting its test results in the simplest ways. It treated the results applicable to humans as scalar multiples of the rodent results, with the scale determined by body weight. It assumed that the chemicals were linear in their effects on people, rather than (say) having little or no effect up to a certain point or threshold. (A linear effect would be infinitesimally small with the first molecule of exposure and rise with each subsequent molecule of exposure.)
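
A stylized sketch of the procedure just described, with made-up numbers and a hypothetical slope (this illustrates the approach, not the EPA’s actual model):

    # Scale a rodent result to humans by body weight, then assume risk is linear in dose.
    RAT_WEIGHT_KG, HUMAN_WEIGHT_KG = 0.35, 70.0          # assumed body weights

    def human_equivalent_dose(rat_dose_mg):
        return rat_dose_mg * (HUMAN_WEIGHT_KG / RAT_WEIGHT_KG)   # simple weight scaling

    def lifetime_risk(human_dose_mg, slope_per_mg=1e-6):
        # linear, no-threshold: any exposure carries some risk; a threshold model
        # would instead return 0.0 for doses below some cutoff
        return slope_per_mg * human_dose_mg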

Of all the decisions made by EPA, none was more questionable than the standard it set for allowable risk from exposure to toxic chemicals. The standard set by EPA was no more than one premature death per million of exposed population over a statistical lifetime. Moreover, the EPA also assumed the most unfavorable circumstances of exposure – that is, that those exposed would receive exposure daily and get the level of exposure that could only be obtained occupationally by workers routinely exposed to high levels of the substance. This maximum safe level of exposure was itself a variable, expressed as a range rather than a single point, because the EPA could not assume that all rats and mice were identical in their response to the chemicals. Here again, the EPA assumed the maximum degree of uncertainty in reaction when calculating allowable risk. As Wilson points out, if the EPA had assumed average uncertainty instead, this would have reduced their statistical risk to about one in ten million.

It is difficult for the layperson to evaluate this “one out of a million” EPA standard. Wilson tries to put it in perspective. The EPA is saying that the a priori risk imposed by an industrial chemical should be roughly equivalent to that imposed by smoking two cigarettes in an average lifetime. Is that a zero risk? Well, not in the abstract sense, but it will do until something better comes along. Wilson suggests that the statistical chance of an asteroid hitting Earth is from 100 to 1000 times greater than this. There are several chemicals found in nature, including arsenic and mercury, each of which poses a risk of death to man about 1,000 times greater than this EPA-stipulated risk. Still astronomically small, mind you – but vastly greater than the arbitrary standard set by the EPA for industrial chemicals.

Having painted this ghastly portrait of your federal government at work, Wilson steps back to allow us a view of the landscape that the EPA is working to alter. There are some 80,000 industrial chemicals in use in the U.S. Of these, about 20 have actually been studied for their effects on humans. Somewhere between 10,000 and 20,000 chemicals have been tested on lab animals using methods like those described above. That means that, very conservatively speaking, there are at least 60,000 chemicals for which we have only experience as a guide to their effects on humans.

What should we do about this vast uncharted chemical terrain? Well, we know what the EPA has done in the past. A few years ago, Wilson reminds us, the agency was faced with the problem of disposing of stocks of nerve gas, including sarin, one of the most deadly of all known substances. The agency conducted a small test incineration and then looked at the resulting combustion products. When it found only a few on its list of toxic chemicals, it ignored the various other unstudied chemicals among the byproducts and dubbed the risk of incineration to be zero! It was so confident of this verdict that it solicited the forensic testimony of Wilson on its behalf – in vain, naturally.

Wilson has now painted a picture of a government agency gripped by analytical psychosis. It arrogates to itself the power to dictate safety to us, imposes unreal standards of safety on chemicals it studies – then arbitrarily assumes that unstudied chemicals are completely safe! Now we see where Wilson’s words “mad and dangerous” came from.

Economists who study government should be no more surprised by the EPA’s actions than by Wilson’s horrified reaction to them. The scientist reacts as if he were a child who has discovered for the first time that his parents are capable of the same human frailties as other humans. “Americans deserve better from their government. The EPA should have a sound, logical and scientific justification for its chemical exposure regulations. As part of that, agency officials need to accept that they are sometimes wrong in their policymaking and that they need to change defective assessments and regulations.” Clearly, Wilson expects government to behave like science – or rather, like science is ideally supposed to behave, since science itself does not live up to its own high standards of objectivity and honesty. Economists are not nearly that naïve.

The Riskless Society

Where did the EPA’s standard of no more than one premature death per million exposed per statistical lifetime come from? “Well, let’s face it,” the late Aaron Wildavsky quipped, “no real man tells his girlfriend that she is one in a hundred thousand.” Actually, Wildavsky observes, “the real root of ‘one in a million’ can be traced to the [government’s] efforts to find a number that was essentially equivalent to zero.” Lest the reader wonder whether Wilson and Wildavsky are peculiar in their insistence that this “zero-risk” standard is ridiculous, we have it on the authority of John D. Graham, former director of the Harvard School of Public Health’s Center for Risk Analysis, that “No one seriously suggested that such a stringent risk level should be applied to a[n already] maximally exposed individual.”

Time has also been unkind to the rest of EPA’s methodological assumptions. Linear cancer causation has given way to recognition of a threshold up to which exposure is harmless or even beneficial. This jibes with the findings of toxicology, in which the time-honored first principle is “the dose makes the poison.” It makes it next-to-impossible to gauge safe levels of exposure using either tests on lab animals or experience with low levels of human exposures. As Wildavsky notes, it also helps explain our actual experience over time, in which “health rates keep getting better and better while government estimates of risk keep getting worse and worse.”

During his lifetime, political scientist Aaron Wildavsky was the pioneering authority on government regulation of human risk. In his classic article “No Risk is the Highest Risk of All” (The American Scientist, 1979, 67(1), pp. 32-37) and his entry on the “Riskless Society” in the Fortune Encyclopedia of Economics (1993, pp. 426-432), Wildavsky produced the definitive reply to the regulatory mentality that now grips America in a vise.

Throughout mankind’s history, human advancement has been spearheaded by technological innovation. This advancement has been accompanied by risk. The field of safety employs tools of risk reduction. There are two basic strategies for risk reduction. The first is anticipation. The EPA, and the welfare state in general, tacitly assume this to be the only safety strategy. But Wildavsky notes that anticipation is a limited strategy because it only works when we can “know the quality of the adverse consequence expected, its probability and the existence of effective remedies.” As Wildavsky dryly notes, “the knowledge requirements and the organizational capacities required to make anticipation an effective strategy… are very large.”

Fortunately, there is a much more effective remedy close at hand. “A strategy of resilience, on the other hand, requires reliance on experience with adverse consequences once they occur in order to develop a capacity to learn from the harm and bounce back. Resilience, therefore, requires the accumulation of large amounts of generalizable resources, such as organizational capacity, knowledge, wealth, energy and communication, that can be used to craft solutions to problems that the people involved did not know would occur.” Does this sound like a stringent standard to meet? Actually, it shouldn’t. We already have all those things in the form of markets, the very things that produce and deliver our daily bread. Markets meet and solve problems, anticipated and otherwise, on a daily basis.

Really, this is an old problem in a new guise. It is the debate between central planning – which assumes that the central planners already know everything necessary to plan our lives for us – and free competition – which posits that only markets can generate the information necessary to make social cooperation a reality. Wildavsky has put the issue in political and scientific terms rather than the economic terms that formed the basis of the Socialist Calculation debates of the 1920s and 30s between socialists Oskar Lange and Fred Taylor and Austrian economists Ludwig von Mises and F. A. Hayek. The EPA is a hopelessly outmoded relic of central planning that not only fails to achieve its objectives, but threatens our freedom in the bargain.

In “No Risk is the Highest Risk of All,” Wildavsky utilizes the economic concept of opportunity cost to make the decisive point that by utilizing resources inefficiently to drive one particular risk all the way to zero, government regulators are indirectly increasing other risks. Because this tradeoff is not made through the free market but instead by government fiat, we have no reason to think that people are willing to bear these higher alternative risks in order to gain the infinitesimally small additional benefits of driving the original risk all the way to zero. As a purely practical matter, we can be sure that this tradeoff is wildly unfavorable. The EPA bans an industrial chemical because it does not meet their impossibly high safety standard. Businesses across the nation have to utilize an inferior substitute. This leaves the businesses, their shareholders, employees and consumers poorer, with less real income to spend on other things. Safety is a normal good, something people and businesses spend more on when their real incomes rise and less on when real incomes fall. The EPA’s foolish “zero-risk” regulatory standard has created a ripple effect that reduces safety throughout the economy.

The Proof of the Safety is in the Living

Wildavsky categorically cited the “wealth to health” linkage as a “rule without exception.” To get a concrete sense of this transformation in the 20th century, we can consult the U.S. historical life expectancy and mortality tables. In the near-century between 1890 and 1987, life expectancy for white males rose from 42.5 years to 72.2 years; for non-white males, from 32.54 years to 67.3 years. For white females, it rose from 44.46 years to 78.9 years; for non-white females, from 35.04 years to 75.2 years. (Note, as did Wildavsky, that the longevity edge enjoyed by females over males came to exceed that enjoyed by whites over non-whites.)

Various diseases were fearsome killers at the dawn of the 20th century, but petered out over the course of the century. Typhoid fever killed an average of 26.7 people per 100,000 as the century turned (from 1900-04); by 1980 it had been virtually wiped out. Communicable diseases of childhood (measles, scarlet fever, whooping cough and diphtheria) carried away 65.2 out of every 100,000 people in the early days of the century but, again by 1980, they had been virtually wiped out. Pneumonia used to be called “the old man’s friend” because it was the official cause of so many elderly deaths, which is why 161.5 deaths per 100,000 people were attributed to it during 1900-04. But this number had plummeted to 22.0 by 1980. Influenza caused 22.8 deaths out of every 100,000 during 1900-04, but the disease was near extinction in 1980 with only 1.1 deaths ascribed to it. Tuberculosis was another lethal killer, racking up 184.7 deaths per 100,000 on average in the early 1900s. By 1980, the disease was on the ropes with a death rate of only 0.8 per 100,000. Thanks to antibiotics, appendicitis went from lethal to merely painful, with a death rate of just 0.3 per 100,000 people. Syphilis went from scourge of sexually transmitted diseases to endangered species of same, going from 12.9 deaths per 100,000 to 0.1.

Of the major causes of death, only cancer and cardiovascular disease showed significant increase. Cancer is primarily a disease of age; the tremendous jump in life expectancy meant that many people who formerly died of all the causes listed above now lived to reach old age, where they succumbed to cancer. That is why the incidence of most diseases fell but why cancer deaths increased. “Heart failure” is a default listing for cause of death when the proximate cause is sufficient to cause organ failure but not acute enough to cause death directly. That accounts for the increase in cardiovascular deaths, although differences in lifestyle associated with greater wealth also bear part of the blame for the failure of cardiovascular deaths to decline despite great advances in medical knowledge and technology. (In recent years, this tendency has begun to reverse.)

The activity-linked mortality tables are also instructive. The tables are again expressed as a rate of fatality per 100,000 people at risk, which can be translated into absolute numbers with the application of additional information. By far the riskiest activity is motorcycling, with an annual death rate of 2,000 per 100,000 participants. Smoking lags far behind at 300, with only 120 of these ascribable to lung cancer. Coal mining is the riskiest occupation with 63 deaths per 100,000 participants, but it has to share the title with farming. It is riskier a priori to drive a motor vehicle (24 deaths per 100,000) than to be a uniformed policeman (22 deaths). Roughly 60 people per year are fatally struck by lightning. The lowest risk actually calculated by statisticians is the 0.000006 per 100,000 (six-billionths of one percent) risk of dying from a meteorite strike.

It is clear that risk is not something to be avoided at all costs; rather, it accompanies activities that provide benefits at a cost. Driving, coal mining and policing carry the risk of death but also provide broad-based benefits not only to practitioners but to consumers and producers. Industrial chemicals also provide widespread benefits to the human race. It makes no sense to artificially mandate a “one in a million” death-risk for industrial solvents when just climbing into the driver’s seat of a car subjects each of us to a risk that is tens of thousands of times greater than that. We don’t need all-powerful government pretending to regulate away the risk associated with human activities while actually creating new hidden risks. We need free markets to properly price the benefits and costs associated with risk to allow us both to run risks efficiently and to avoid them.
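
A back-of-envelope comparison makes the disproportion concrete. The 70-year statistical lifetime below is our assumption; the driving figure is the 24-per-100,000 annual rate cited above:

    # Compare the EPA's allowable risk with an everyday risk people accept voluntarily.
    LIFETIME_YEARS = 70                                  # assumed statistical lifetime
    epa_annual_risk = (1 / 1_000_000) / LIFETIME_YEARS   # 1 per million per lifetime, per year
    driving_annual_risk = 24 / 100_000                   # annual risk per driver: 24 per 100,000
    print(driving_annual_risk / epa_annual_risk)         # roughly 17,000 times the EPA standard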

This fundamental historical record has been replicated with minor variations across the Western industrial landscape. It was not achieved by heavy-duty government regulation of business but by economic growth and markets, which began to slow as the welfare state and regulation began to predominate. Ironically, recent slippage in health and safety has been associated with the transfer of public distrust from government – where it is well-founded – to science. Falling vaccination rates have produced a revival of diseases, such as measles and diphtheria, which had previously been nearly extinct.

The Jaundiced Eye of Economists

If there is any significant difference in point of view between scientists (Wilson) and political scientists (Wildavsky) on the one hand, and economists on the other, it is the willingness to take the good faith of government for granted. Wilson apparently believes that government regulators can be made to see the error of their ways. Wildavsky apparently viewed government regulators as belonging to a different school of academic thought (“anticipation vs. resilience”) – maybe they would see the light when exposed to superior reasoning.

Economists are more practical or, if you like, more cynical. It is no coincidence that government regulatory agencies do not practice good science even when tasked to do so. They are run by political appointees and funded by politicians; their appointees are government employees who are paid by political appropriations. The power they possess will inevitably be wielded for political purposes. Most legal cases are settled because they are too expensive to litigate and because one or both parties fear the result of a trial. Government regulatory agencies use their power to bully the private sector into acquiescence with the political results favored by politicians in power. Private citizens fall in line because they lack the resources to fight back and because they fear the result of an administrative due process in which the rules are designed to favor government. This is the EPA as it is known to American businesses in their everyday world, not as it exists in the conceptual realities of pure natural science or academic political science.

The preceding paragraph describes a kind of bureaucratic totalitarianism that differs from classical despotism. The despot or dictator is a unitary ruler, while the bureaucracy wields a diffused form of absolute power. Nevertheless, this is the worst outcome associated with the EPA and top-down federal-government regulation in general. The risks of daily life are manageable compared to the risks of bad science dictated by government. And both these species of risk pale next to the risk of losing our freedom of action, the very freedom that allows us to manage the risks that government regulation does not and cannot begin to evaluate or lessen.

The EPA is just too risky to have around.

DRI-265 for week of 2-23-14: False Confession Under Torture: The So-Called Re-Evaluation of the Minimum Wage

An Access Advertising EconBrief:

False Confession Under Torture: The So-Called Re-Evaluation of the Minimum Wage

For many years, the public pictured an economist as a vacillator. That image dated back to President Harry Truman’s quoted wish for a “one-armed economist,” unable to hedge every utterance with “on the one hand…on the other hand.”

Surveys of economists belied this perception. The profession has remained predominantly left-wing in political orientation, but its support for the fundamental logic of markets has been strong. Economists have backed free international trade overwhelmingly. They have opposed rent control – which socialist economist Assar Lindbeck deemed the second-best way to destroy a city, ranking behind only bombing. And economists have denounced the minimum wage with only slightly less force.

Now, for the first time, this united front has begun to break up. Recently a gaggle of some 600 economists, including seven Nobel Laureates, has spoken up in favor of a 40% increase in the minimum wage. The minimum wage has always retained public support. But what could possibly account for this seeming about-face by the economics profession?

The CBO Study

This week, the Congressional Budget Office (CBO) released a study that was hailed by both proponents and opponents of the minimum wage. The CBO study tried to estimate the effects of raising the current minimum of $7.25 per hour to $9 and to $10.10. It provided an interval estimate of the job loss resulting from President Obama’s State of the Union suggestion of a $10.10 minimum wage. The interval stretched from roughly zero to one million. It took the midpoint of this interval – 500,000 jobs – as “the” estimate of job loss because… because…well, because 500,000 is halfway between zero and 1,000,000, that’s why. Averages seem to have a mystical attraction to statisticians as well as to the general public.

Economists looking for signs of orthodox economic logic in the CBO study could find them. “Some jobs for low-wage workers would probably be eliminated, the income of most workers who became jobless would fall substantially, and the share of low-wage workers who were employed would probably fall slightly.” The minimum wage is a poorly-targeted means of increasing the incomes of the poor because “many low-income workers are not members of low-income families.” And when an employer chooses which low-wage workers to retain and which to cut loose after a minimum-wage hike, he will likely retain the upper-class employee with good education and social skills and lay off the first-time entrant into the labor force who is poor in income, wealth and human capital. These are traditional sentiments.

On the other hand, the Obama administration’s hired gun at the Council of Economic Advisers (CEA), Chairman Jason Furman, looked inside the glass surrounding the minimum wage and found it half-full. He characterized the CBO’s job-loss conclusion as a “0.3% decrease in employment” that “could be essentially zero.” Furman cited the CBO estimate that 16.5 million workers would receive an increase in income as a result of the minimum-wage increase. Net benefits to those whose incomes currently fall below the so-called poverty line are estimated at $5 billion. The overall effect on real income – what economists would call the general equilibrium result of the change – is estimated to be a $2 billion increase in real income.

The petitioning economists, the CBO and the CEA clearly are all not viewing the minimum wage through the traditional textbook prism. What caused this new outlook?

The “New Learning” and the Old-Time Religion on the Minimum Wage

The impetus to this eye-opening change has ostensibly been new research. Bloomberg Businessweek devoted a lead article to the supposed re-evaluation of the minimum wage. Author Peter Coy declares that “the argument that a wage floor kills jobs has been weakened by careful research over the past 20 years.” Not surprisingly, Coy locates the watershed event as the Card-Krueger comparative study of fast-food restaurants in New Jersey and Pennsylvania in 1994. This study not only made names for its authors, it began the campaign to make the minimum wage respectable in academic economic circles.

“The Card-Krueger study touched off an econometric arms race as labor economists on opposite sides of the argument topped one another with increasingly sophisticated analyses,” Coy relates. “The net result has been to soften the economics profession’s traditional skepticism about minimum wages.” If true, this would be a sign of softening brains, not skepticism. The arguments advanced by the re-evaluation of the minimum wage have been around for decades. Peter Coy is saying that, somehow, new studies done in the last 20 years have produced different results than those done for the previous fifty years, and those different results justify a turnabout by the economics profession.

That stance is, quite simply, hooey. Traditional economic opposition to the minimum wage was never based on empirical research. It was based on the economic logic of choice in markets, which argues unequivocally against the minimum wage. Setting a wage above the market-determined wage will create a surplus of low-skilled labor; i.e., unemployment. Thus, any gains accruing to the workers who retain their jobs will come at the expense of workers who lose their jobs. The public supports the minimum wage on the misapprehension that the gains come at the expense of employers. This is true only transitorily, during the period in which some firms go out of business, prices rise and workers are laid off. During this short-run transition period, the gains of still-employed workers come at the expense of business owners and laid-off workers. But once the adjustments occur, the business owners who survive the transition are once again earning a “normal” (competitive) rate of profit, as they were before the minimum wage went up. Now, and indefinitely going forward, the gains of still-employed workers come at the expense of laid-off workers and consumers who pay higher prices for the smaller supply of goods and services produced by low-skilled workers.
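
The logic reduces to a two-line sketch: with any downward-sloping demand for low-skilled labor and upward-sloping supply, a wage floor set above the market-clearing wage leaves more labor offered than employers will buy. The linear schedules below are invented purely for illustration:

    # Invented linear demand and supply schedules for low-skilled labor (arbitrary units).
    def demand(wage):
        return 1000 - 50 * wage        # hours of work employers want at this wage

    def supply(wage):
        return 100 + 40 * wage         # hours of work offered at this wage

    # The market clears at a wage of 10, where demand(10) == supply(10) == 500.
    wage_floor = 12                    # a legislated minimum set above the clearing wage
    surplus = supply(wage_floor) - demand(wage_floor)   # 580 - 400 = 180 hours with no buyer
    print(surplus)                     # this surplus is the unemployment described in the text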

The still-employed workers are by no means all “poor,” despite the fact that they earn the minimum wage. Some are teenagers in middle- or upper-class households, whose good educations and social skills preserved their jobs after the minimum-wage hike. Some are older workers whose superior discipline and work skills made them irreplaceable. The workers who rate to lose their jobs are the poorest and least able to cope – namely, first-time job holders and those with the fewest cognitive and social skills. The minimum wage transfers income from the poor to the non-poor. What a victory for social justice! That is why even left-wing economists like Alan Blinder formerly pooh-poohed the minimum wage as a means of helping the poor. (While he was a member of the CEA under President Clinton, Blinder was embarrassed when the arguments against the minimum wage in his economics textbook were juxtaposed alongside the administration’s support of a minimum-wage increase.)

This does not complete the roster of the minimum wage’s defects. Government price-setting has mirror-image effects on both above-market prices and below-market prices. By creating a surplus of low-skilled labor, the minimum wage makes it costless for employers to discriminate against a class of workers they find objectionable – black, female, politically or theologically incorrect, etc. Black-market employment of illegal workers – immigrants or off-the-books employees – can now gain a foothold. Business owners are encouraged to substitute machines for workers and have done so throughout the history of the minimum wage. In cases such as elevator operators, this has caused whole categories of workers to vanish. This expanded range of drawbacks somehow never finds its way into popular discussions of the minimum wage, which are invariably confined to the effects on employment and income distribution.

“If there are negative effects on total employment, the most recent studies show, they appear to be small,” according to Bloomberg Businessweek. The trouble is that the focus of the minimum wage is not properly on total employment. The minimum wage itself applies only to the market for low-skilled labor, comprising roughly 20 million Americans. There are certainly effects on other labor and product markets. But it is difficult enough to estimate the quantitative effect of the minimum wage on the one market directly affected, let alone to gauge the secondary impact on the other markets comprising the remaining 300 million people. The Obama administration, the vocal economists, Bloomberg Businessweek and the political Left are ostensibly concerned with the poor. Why, then, do they insist on couching employment effects only in total terms?

It is clear that the same reasons why economists have traditionally chosen not to confuse the issue by dragging in total employment are also the reasons why economists now choose precisely to do so. They want to confuse the issue, to disguise the full magnitude of the adverse effects on low-skilled workers by hiding them inside the much smaller percentage effect on total employment. That is what allows CEA Chairman Jason Furman to brag that the “CBO’s central estimate…leads to a 0.3% decrease in employment… [that] could be essentially zero.” 500,000 is not 0.3% of 20 million (that would be 60,000) but rather 0.3% of the larger total work force of around 170 million. 0.3% sounds like such a small number. That’s almost zero, isn’t it? Surely that isn’t such a high price to pay for paying people what they’re worth – or what a bunch of economists think they’re worth, anyway.
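
The framing is easy to check with the CBO’s own figure and the round workforce numbers in the text:

    # One job-loss estimate, two denominators.
    job_loss = 500_000
    total_workforce = 170_000_000        # the overall work force, per the text
    low_wage_workers = 20_000_000        # the workers the minimum wage directly affects
    print(job_loss / total_workforce)    # about 0.003 - the "essentially zero" 0.3%
    print(job_loss / low_wage_workers)   # 0.025 - a 2.5% hit to the group actually affected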

But we digress. Just what is it that causes those “apparently small” effects on total employment, anyway? “Higher wages reduce turnover by increasing job satisfaction, so at any given moment there are fewer unfilled openings. Within reasonable ranges of a minimum wage, the churn-reducing effect seems to offset whatever staff reductions occur because of higher labor costs. Also, some businesses manage to pass along the costs to customers without harming sales.”

This is mostly warmed-over sociology, imported by economists for cosmetic purposes. The American economy is pockmarked with industries plagued by high turnover, such as trucking. If higher wages were a panacea for this problem, it would have been solved long since. Today, we have a minimum wage. We also have a gigantic mismatch of unfilled jobs and discouraged workers. The shibboleth of businesses “passing along” costs to consumers with impunity was a cherished figment imagined in books by John Kenneth Galbraith in the 1950s and 60s, but neither Galbraith nor today’s economists can explain what hypnotic power businesses exert over consumers to accomplish this feat.

The magic word never mentioned by Peter Coy or the 600 economists or Jason Furman is productivity. Competitive markets enforce a strict link between market wages and productivity – specifically, between the wage and the discounted marginal value product of the marginal worker’s labor. Once that link is severed, the tether to economic logic has been cut and the discussion drifts along in never-never land. The political Left maunders on about the “dignity of human labor” and “a living wage” and “the worth of a human being” – nebulous concepts that have no objective meaning but allow the user to attach their own without fear of being proven wrong.

Bloomberg Businessweek‘s cover features a young baggage handler holding a sign identifying his job and duties, with a caption reading “How Much Is He Worth?” Inside the magazine, a page is taken up with workers posing for pictures showing their jobs and their own estimation of their “worth.” These emotive exercises may or may not sell magazines, but they prove and solve nothing. Asking a low-skilled worker to evaluate their own worth is like asking a cancer victim what caused their disease. Broadcast journalists do it all the time, but if that were really valuable, we would have cured cancer long ago. If a low-skilled worker were an expert on valuing labor, he or she would qualify as an entrepreneur – and would be set up to make some real money.

A Fine-Tuned Minimum Wage

Into the valley of brain death rode the 600 economists who supported a minimum wage of $10.10 per hour. Their ammunition consisted of fine-tuning based on econometrics. Let us hear from Paul Osterman, labor economist of MIT. “To jump from $7.25 to $15 would be a long haul. That would in my view be a shock to the system.” Mr. Osterman, exercising his finely-honed powers of insight denied to the rabble, is able to peer into the econometric mists and discern that $10.10 would be …somehow… just right – barely felt by 320 million people generating $16 trillion in goods and services, but $15 – no, that would shock the system. In other words, that first 40% increase would be hardly a tickle, but the subsequent 49% would be a bridge too far.

In any other context, it would be quite a surprise to the economics profession to discover that the study of econometrics had advanced this far. (The phrase “science of econometrics” was avoided advisedly.) For decades, graduate students in economics were taught a form of logical positivism originally outlined by John Neville Keynes (father of John Maynard Keynes) and developed by Milton Friedman. Economic theory was advanced by developing hypotheses couched in the form of conditional predictions. These were then tested in order to evaluate their worth. The tests ranged from simple observation to more complex tests of statistical inference. Hypotheses meeting the tests were retained; those failing to do so were discarded.

Simple and attractive though that may sound, this philosophy has failed utterly in practice. The tests have failed to convince anybody; it is axiomatic that no economic theory was ever accepted or rejected on the basis of econometric evidence. And the econometric tools themselves have been the subject of increasing skepticism by economists themselves as well as the outside world. One of the ablest and most respected practitioners, Edward Leamer, titled a famous 1983 article, “Let’s Take the Con Out of Econometrics.”

The time period pictured by Peter Coy as an “econometric arms race” employing “increasingly sophisticated” tools and models overlapped with a steadily growing scandal enveloping the practice of econometrics – or, more precisely, statistical practice across both the natural and social sciences. Within economics alone, it concerned the continuing failure of the leading economists and economic journals to correctly enforce the proper interpretation of the term “statistical significance.” This failure has placed the quantitative value of most of the econometric work done in the last 30 years in question.

The general public’s exposure to the term has encouraged it to regard a “statistically significant” variable or event as one that is quantitatively large or important. In fact, that might or might not be true; there is no necessary connection between statistical significance and quantitative importance. The statistician needs to take measures apart from ascertaining statistical significance in order to gauge quantitative importance, such as calculating a loss function. In practice, this has been honored more in the breach than the observance. Two leading economic historians, Deirdre McCloskey and Steven Ziliak, have conducted a two-decade crusade to reform the statistical practice of their fellow scientists. Their story is not unlike that of the legendary Dr. Semmelweis, who sacrificed his career in order to wipe out childbed fever among women by establishing doctors’ failure to wash their hands as the transmitter of the disease.
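
A small simulation with invented data (using numpy and scipy) makes the McCloskey-Ziliak point concrete: given a large enough sample, an economically trivial effect will register as “statistically significant”:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 2_000_000                                        # a very large sample
    control = rng.normal(loc=100.00, scale=1.0, size=n)
    treated = rng.normal(loc=100.01, scale=1.0, size=n)  # true effect: one-hundredth of one percent

    t_stat, p_value = stats.ttest_ind(treated, control)
    print(p_value)                            # minuscule p-value: highly "significant"
    print(treated.mean() - control.mean())    # about 0.01: quantitatively negligible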

This scandal could not be more relevant to the current rehabilitation of the minimum wage. The entire basis for that rehabilitation is supposedly the new, improved econometric work done beginning in 1994 – the very time when the misuse and overemphasis of statistical significance was in full swing. The culprits included many of the leading economists in the profession – including Drs. Card and Krueger and their famous 1994 study, which was one of dozens of offending econometric studies identified by McCloskey and Ziliak. And the claim made by today’s minimum-wage proponents is that their superior command of econometrics allows them to gauge the quantitative effects of different minimum wages so well that they can fine-tune the choice of a minimum wage, picking a minimum wage that will benefit the poor without causing much loss of jobs and real income. But judging the quantitative effect of explanatory variables on outcomes is exactly what econometrics has done badly from the 1980s to the present, owing to its preoccupation with statistical significance. The last thing in the world that the lay public should do is take the quantitative pretensions of these economists on faith.

This doesn’t sound like a profession possessing the tools and professional integrity necessary to fine-tune a minimum wage to maximize social justice – whatever that might mean. In fact, there is no reason to take recent pronouncements by economists on the minimum wage at face value. This is not professional judgment talking. It is political partisanship masquerading as analytical economics.

The Wall Street Journal pointed out that the $2 billion net gain in real income projected by the CBO if the minimum wage were to rise to $10.10 is a minute percentage gain compared to the size of a $16 trillion GDP. (It is about 0.0125%, a bit over one-hundredth of one percent.) The notion of risking a job loss of one million for a gain of that size is quixotic. Even more to the point, the belief that economists can predict gains or losses of that tiny magnitude in a general equilibrium context using econometrics is absurd. The CEA and the CBO are allowing themselves to be used for political purposes and, in the process, allowing the discipline of economics to be prostituted.
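
For readers who want to verify that percentage, the arithmetic uses only the two figures just quoted (a quick check in Python):

    projected_gain = 2e9         # CBO's projected net real-income gain, in dollars
    gdp = 16e12                  # the roughly $16 trillion GDP cited above, in dollars
    print(projected_gain / gdp)  # 0.000125 -- about 0.0125% of GDP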

The increasing politicization of economics is beginning to produce the same effects that subservience to political orthodoxy produced on Russian science under Stalin. The Soviet agronomist Trofim Lysenko became immortal not because of his scientific achievements but because of his willingness to distort science to comport with Communist doctrine. The late, great economist Ronald Coase once characterized the economics profession’s obsession with econometrics as a determination to “torture the data until it confesses.” Those confessions are now taking on the hue of Soviet-style confessions from the 1930s, exacted under torture from political dissidents who wouldn’t previously knuckle under to the regime. Today, politically partisan economists torture recalcitrant data on the minimum wage in order to extract results favorable to their cause.

The CBO and the CEA should have new stationery printed. Their logo should be an image of Lubyanka Prison in old Soviet Russia.

DRI-221 for week of 12-8-13: What’s (Still) Wrong with Economics?

An Access Advertising EconBrief:

What’s (Still) Wrong with Economics?

Taking stock is an end-of-year tradition. This space devotes the remainder of the year to explaining the value of economics, so it’s fitting and proper to don a hair shirt and break out the penance whips as 2013 fades into the distance. What’s wrong with economics? Why doesn’t its productivity justify its title of queen of the social sciences – and what could be done about that?

This omnibus indictment demands an orderly presentation, organized by subject area.

Teaching: Although the motto of the Econometric Society is “science is measurement,” a better operational definition is “science is knowing what the hell you’re talking about.” On that score, economics has a lot to answer for. A science is only as good as its practitioners, who regurgitate what they are taught. Teaching is the first place to lay blame for the shortcomings of economics as a science.

In the past, economics has seldom been taught at the secondary level. That is changing, but only slowly. The subject is so difficult to master and absorption is such an osmotic process that an early start would vastly improve results. It would also force an improvement in the standard mode of teaching.

At the college level, economics is taught by teaching the same formal theory that Ph. D. students are required to master. Granted, college freshmen begin at the most basic level using far simpler tools, but they learn the same techniques. As the successful business economist Leif Olsen (among others) has pointed out, the tacit premise of college economics instruction is that all students will go on to study for their doctorate in the subject.

That is absurd. It forces textbooks to concentrate on force-feeding students bits (or chunks) of technique, supposedly to ensure that all students are exposed to the tools and reasoning used by working economists. The use of the word “exposed” in this context should call to mind a disreputable man clothed only in a raincoat, accosting impressionable females in a public park. That captures both the thoroughness and duration of the exposure to each technical refinement and the depth of understanding and relative appeal to the emotions and intellect on the part of the students.

What is needed here are textbooks and teachers that cover much less ground but do it much more thoroughly. Only a tiny fraction of students seek, let alone obtain, the Ph. D. The rest need to grasp the basic logic behind supply and demand, opportunity cost and the role of markets in coordinating the dispersed knowledge of humanity. This requires intensive study of basics – something that would also benefit today’s eventual doctoral candidates, many of whom never learn those basics. The only textbook serving this need that comes quickly to mind is The Economic Way of Thinking, by the late Paul Heyne.

In addition to the benefits accruing to undergraduate education, other advantages would follow from this superior approach. As it now stands, graduate students in economics are hamstrung by the subject’s austere formalism. The mathematical approach is now so rigorous at the highest levels of economics that the subject bears a stronger resemblance to engineering or physics than to the political economy practiced by classical economists in the 18th and 19th centuries. If this so-called rigor added value in the form of precision to the practice of economics, it would be worth its cost in pain and hardship.

Alas, it doesn’t. Even worse, graduate students have to spend so much time grappling with mathematics that they lack the time to absorb the basic elements underlying the mathematics. Often, the mathematical models must eliminate the basic elements in order to make the mathematics tractable. We are then left with the anomaly of an economic theory that must truncate or amputate its economic content in order to satisfy certain abstract scientific criteria. This obsession with formalism has substituted bad science for good economics – the worst kind of tradeoff.

The reader might wonder who benefits from the status quo, since beneficiaries have not been evident in the telling thus far. The current system creates a narrow road to academic success for career economists. They must fight their way through the undergraduate curriculum, then labor as part-time teachers and research assistants while taking their own graduate courses. Writing the Ph. D. dissertation can take years, after which they have a short time (usually six years) in which to write publishable research and get it placed in the small number of peer-reviewed economics journals. If they succeed in all this, they may end up with tenure at an American university. This will entitle them to job security and opportunity for advancement and a sizable income. If they fail – well, there’s always the private sector, where a small number of economists attain comparable career success. It is the survivors of this process, the tenured faculty at major American colleges and universities, who benefit from the system as it exists.

Perhaps these privileged few are an extraordinarily productive lot? Well, only a tiny handful of the professoriate produces research output that might reasonably be classed as valuable. Most articles published in professional journals, though, are virtually worthless. Nobody would pay any significant money to sponsor them directly. That’s not all. In addition to the arid mathematics employed by the theoretical research, there is also the statistical technique used to generate empirical articles. For several decades, the primary desideratum in statistical economics has been to obtain a “statistically significant” relationship between the variable(s) in the economic model and the variable we are trying to understand. If questioned about this, the average person would probably define this criterion as “a large enough effect or impact to be worth measuring, or large enough to make us think what we are measuring has an important influence on what we are studying.”

Wrong! “Statistical significance” is a term of art that means something else – something that is more qualitative than quantitative. Essentially, it means that there is a likelihood that the relationship between the model variable(s) and the variable of interest is not due to random chance but is, rather, systematic. Another way of putting it would be to say that statistical significance answers a binary, “yes-no” question instead of the question we are usually most interested in. The big question, the one we most want the answer to, is usually a “how much” question. How much influence does one variable have on another; how great is the importance of one variable on another? The question answered by statistical significance is interesting and useful, but it is not the one we care about the most. Yet it is almost the only one the social sciences have cared about for decades. And, believe it or not, it is apparent that many economists do not even realize the mistake in emphasis they have been making.
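
The point can be demonstrated concretely. Below is a minimal simulation in Python (using numpy and scipy); the sample size and effect size are invented for illustration and have nothing to do with any particular study. With a large enough sample, a variable whose influence is economically trivial will nevertheless pass the significance test with flying colors.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 1_000_000                            # a very large sample
    x = rng.normal(size=n)                   # explanatory variable
    true_effect = 0.01                       # a one-std.-dev. change in x moves y by ~1% of y's std. dev.
    y = true_effect * x + rng.normal(size=n)

    slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
    print(f"estimated effect: {slope:.4f}")  # about 0.01 -- tiny in economic terms
    print(f"p-value: {p_value:.1e}")         # far below 0.05 -- "statistically significant"

    # The significance test answers only the yes-no question ("is the effect
    # distinguishable from zero?"); it says nothing about whether the effect is
    # large enough to matter for anyone's decision.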

Yet it is not the small number of beneficiaries or even their ghastly mistakes that indicts the current system. Rather, it is economic theory itself, which insists that people benefit from consumption rather than production. It is consumers of economics – students and the general public – who should be reaping rewards. The rewards enjoyed by tenured professors would be unobjectionable if they were earned by providing comparable benefits to those consumers rather than by reaping monopoly profits from an exclusionary process. But students are lowest on the totem pole on any major university campus. Tenured faculty members teach as little as possible, usually only two courses per semester. Teaching is little rewarded and often poorly done by tenured and non-tenured faculty alike. Academic lore is filled with stories of award-winning teachers who neglected research for teaching and were dumped by their university in spite of their teaching accomplishments.

The late Nobel laureate James Buchanan likened the position of academic economists today to “a kind of dole”; that is, they are living off the taxpayer rather than earning their keep. Administrators are fellow beneficiaries of the system, although they are pilot fish riding the backs of all academicians, not merely economists.

The Public: Consumers of economics include not merely those who study the subject in school but also the general public. Economists advise businesses on various subjects, including the past, present and future level of economic activity overall and within specific sectors, industries and businesses. They provide expert-witness services in forensic settings by estimating business value, damages and losses in litigation, and they represent the various parties in regulatory proceedings and particularly in antitrust litigation. Economists are the second-most numerous profession in government employment, behind lawyers.

For some seventy years, economists have played an important role in the making of economic policy. One might expect that economists would play the most important role; who is qualified to decide economic policy if not economists? In fact, modern governments place politicians and bureaucrats ahead of everybody when it comes to policymaking, regardless of expertise. This has created a situation in which we would be better off with no economic policy at all than with an economic policy run by non-economists. Still, the recent efforts of professional economists do not paint the profession in a favorable light, either.

The problem with public perception of economics and economists is that they have come to regard economics as synonymous with “macroeconomics;” that is, with forecasting and policymaking aimed at economic statistical aggregates like employment, gross domestic product and interest rates in the plural. This is the unfortunate byproduct of the Keynesian Revolution that overtook economics in the 1930s and reigned supreme until the late 1970s. The overarching Keynesian premise was that only such an aggregative focus could cure the recurrent recessions and depressions that Keynesians ascribed to the inherent instability and even stagnation of a private economy left to its own devices.

It is ironic that every premise on which Keynes based his conclusions was subsequently rejected by the four decades of extensive and intensive research devoted to the subject. It is even more ironic that the conclusion reached by the profession was that attention needed to be focused on developing “microfoundations of macroeconomics,” since it was the very notion of microeconomics that Keynes rejected in the first place. And the crowning irony was that, while Keynes’s ideas filtered down into the textbook teaching of economics and even into media presentation of economic news and concepts to the general public, the rejection of Keynesian economics never reached the news media or the general public. Textbooks were revised (eventually), but without the fanfare that accompanied the “Keynesian Revolution.”

So it was that when the financial crisis of 2008 and ensuing Great Recession of 2009 reacquainted America with economic depression, Keynesian economists could reemerge from the subterranean depths of intellectual isolation like zombies from a George Romero movie without triggering screams of horror from the public. Only those with very long memories and a healthy quotient of temerity stood up to ask why discredited economic policies had suddenly acquired cachet.

When the Nobel Foundation began awarding quasi-Nobel prizes for economics in the late 1960s, a good deal of grumbling was heard in the ranks of the hard sciences. Economics wasn’t a real science, they maintained stubbornly. A real science is cumulative; it creates a body of knowledge that grows larger over time owing to its revealed truth and demonstrated value in application. Economics just recycles the same ideas, they scoffed, which go in and out of fashion like women’s hemlines rather than being proved or disproved.

From today’s vantage point, we can see more than just a grain of truth in their disparagement – more like a boulder, in fact. What macroeconomist Alan Blinder referred to in a journal article as “the death and life of Keynesian economics” is a perfect case in point. Keynesian economics did not return to life because it was a superior theory – research proved its theoretical inferiority. Not only that, it took decades to settle the point, which doesn’t exactly constitute a testimonial to the value of the subject or the lucidity of its doctrines. Keynesian economics did not triumph in the arena of practical application; that is, countries did not eliminate recessions and depressions using Keynesian policies, thereby proving their worth. Just the opposite; after decades of pinning his party’s hopes on Keynesian economics, British Prime Minister James Callaghan renounced it in a celebrated speech to the Labour Party conference in the mid-1970s.

No, Keynesian economics made a comeback because it was politically useful to the Obama administration. It enabled them to spend vast amounts of money and direct the spending to political supporters on the pretext that they were “stimulating the economy.” If economics had to justify its existence by pointing to the results of “economic policy,” economists would be thrown out into the street and forbidden to practice their craft.

In 1965, Time Magazine put John Maynard Keynes on its cover and proclaimed the death of the business cycle. This obituary proved to be premature. Like Icarus, economists tried to fly too high. Their wings melted by the solar heat, the profession is now in freefall, putting up a bold front and proclaiming “so far, so good” as they plummet to Earth. The only remedy for this hubris is to straightforwardly admit that economics is not a hard, quantitatively predictive science in the mold of the natural sciences. Its fundamental insights are not quantitative at all, but they are absolutely vital to our well-being. When combined with such other social sciences as law and political science, economics can explain patterns of human behavior involving choice. It holds the key to human progress by making the knowledge sequestered in billions of individual brains accessible in useful form for the mutual benefit of all. Thanks to economics, billions of people can live who would die without its insights. These benefits are anything but trivial.

Economics can even ameliorate the hardships imposed by the business cycle, as long as we do not expect too much and can resign ourselves to occasional recessions of limited length and severity. In this regard, success can be likened to hitting home runs in baseball. Trying to hit home runs by swinging too hard usually doesn’t work; making solid contact is the key to hitting homers. Many great home-run hitters, including Hank Aaron and Ernie Banks, were not large, powerful men who swung for the fences. They were wiry, muscular hitters who hit solid line drives. The economic analogue of this philosophy is to allow free markets to work and relative prices to govern the allocation of resources rather than trying to use government spending, taxes and money creation as a bludgeon to hammer the efforts of markets into a politically acceptable shape.

Remedies: In thinking about ways to right its wrongs, economics should take its own advice and fall back on free markets. Rather than trying to administratively reshape the academic status quo and tenure-based faculty system, for example, economists should simply support privatization of education. This is simply taking current professional support of tuition vouchers and charter schools to the next logical level. Tenure is a protected academic monopoly, unlikely to survive in a free private market. If it does, this will mean that it has unsuspected virtues; so much the better, then.

Recent decades have seen the rise of applied popular economics books written to bring economics to the masses. The best-known and most popular of these, Freakonomics, is among the least useful – but it is better than nothing. Better works have been written by economists like Steven Landsburg (The Armchair Economist) and David Friedman. Their worthy efforts have helped to turn the tide by correcting misapprehensions and redirecting focus away from macroeconomics. This is another good example of reform from within the profession that does not require economists to sacrifice their own well-being.

Perhaps the one missing link in economics today is leadership. Revolutions in scientific theory and practice are typically effected by individuals at the head of scientific movements. In economics, these have included men like Adam Smith, David Ricardo, Karl Marx, the Austrian economists of the 19th century, Alfred Marshall, Keynes and Milton Friedman. Today there is a leadership vacuum in the profession; nobody with the intellectual stature of Friedman remains to take the lead in reforming economics.

Given the woes of economics and economic theory, a new candidate seems unlikely to come riding over the horizon. It may be that economists will have to prop up an intellectual giant of the past to ride like El Cid against the ancient foes of ignorance, apathy, prejudice and vested interest. There is one outstanding candidate: the man who saved the 20th century in life and whose wide-ranging thought and multi-disciplinary theory is alone capable of midwifing a new, sustainable economics of the future. That would be F.A. Hayek. Recent stirrings within the profession suggest a growing acknowledgment that Hayek’s economics has been too long neglected and explains the crisis, recession and current stagnation far better than anything offered by Keynes or his followers. There is no better body of work to serve as a model for what is wrong with economics and how to correct it than his.

DRI-234 for week of 11-17-13: Economists Start to See the Light – and Speak Up

An Access Advertising EconBrief:

Economists Start to See the Light – and Speak Up

In order for dreadful economic policies to be ended, two things must happen. Economists must recognize the errors – then, having seen the light, they must say so publicly. For nearly five years, various economists have complained about Federal Reserve economic policies. Unfortunately, the complaints have been restrained and carefully worded to dilute their meaning and soften their effect. This has left the general public confused about the nature and degree of disagreement within the profession. It has also failed to highlight the radicalism of the Fed’s policies.

Two recent Wall Street Journal economic op-eds have broken this pattern. They bear unmistakable marks of acuity and courage. Both pieces focus particularly on the tactic of quantitative easing, but branch out to take in broader issues in the Fed’s conduct of monetary policy.

A Monetary Insider Kneels at the Op-Ed Confessional to Beg Forgiveness

Like many a Wall Street bigwig, Andrew Huszar has led a double life as managing director at Morgan Stanley and Federal Reserve policymaker. After he served seven years at the Fed from 2001-2008, good behavior won him a parole to Morgan Stanley. But when the Great Financial Crisis hit, TARP descended upon the landscape. This brought Huszar a call to return to public service in spring, 2009 as manager of the Fed’s program of mortgage-backed securities purchases. In “Confessions of a Quantitative Easer” (The Wall Street Journal, 11/12/2013), Huszar gives us the inside story of his year of living dangerously in that position.

Despite his misgivings about what he perceived as the Fed’s increasing subservience to Wall Street, Huszar accepted the post and set about purchasing $1.25 trillion (!) of mortgage-backed securities over the next year. This was the lesser-known half of the Fed’s quantitative-easing program, the little brother of the Fed’s de facto purchases of Treasury debt. “Senior Fed officials… were publicly acknowledging [past] mistakes and several of those officials emphasized to me how committed they were to a major Wall Street revamp.” So, he “took a leap of faith.”

And just what, exactly, was he expected to have faith in? “Chairman Ben Bernanke made clear that the Fed’s central motivation was to ‘affect credit conditions for households and businesses.'” Huszar was supposed to “quarterback the largest economic stimulus in U.S. history.”

So far, Huszar’s story seems straightforward enough. For over half a century, economists have had a clear idea of what it meant to stimulate an economy via central-bank purchases of securities. That idea has been to provide banks with an increase in reserves that simultaneously increases the monetary base. Under the fractional-reserve system of banking, this increase in reserves will allow banks to increase lending, causing a pyramidal expansion of deposits and the money stock and, with it, increases in spending, income and employment. John Maynard Keynes himself was dubious about this use of monetary policy, at least during the height of a depression, because he feared that businesses would be reluctant to borrow in the face of stagnant private demand. However, Keynes’ neo-Keynesian successors gradually came to understand that the simple Keynesian remedy of government deficit spending would not work without an accompanying increase in the money stock – hence the need for reinforcement of fiscal stimulus with monetary stimulus.
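
The textbook mechanism can be illustrated with a simple deposit-expansion calculation. The following is a minimal sketch in Python; the reserve ratio and the size of the reserve injection are illustrative round numbers, not actual figures from the QE program.

    # Stylized deposit-expansion ("money multiplier") arithmetic.
    reserve_ratio = 0.10      # hypothetical fraction of deposits banks hold as reserves
    new_reserves = 100.0      # hypothetical central-bank reserve injection (say, $ billions)

    # Each round, banks lend out the portion of new deposits they need not hold as
    # reserves; the loans return to the banking system as fresh deposits.
    total_new_deposits = 0.0
    lendable = new_reserves
    for _ in range(200):
        total_new_deposits += lendable
        lendable *= (1 - reserve_ratio)

    print(round(total_new_deposits, 1))      # approaches 1000.0
    print(new_reserves / reserve_ratio)      # closed form: 100 / 0.10 = 1000.0

    # New deposits pyramid to (1 / reserve ratio) times the injected reserves --
    # provided banks actually lend, which is precisely the link Huszar watched fail.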

Only, doggone it, things just didn’t seem to work out that way. Sure enough, the federal government passed a massive trillion-dollar spending measure that took effect in 2009. But “it wasn’t long before my old doubts resurfaced. Despite the Fed’s rhetoric, my program wasn’t helping to make credit any more accessible for the average American. The banks were only issuing fewer and fewer loans. More insidiously, whatever credit they were issuing wasn’t getting much cheaper. QE may have been driving down the wholesale cost for banks to make loans, but Wall Street was pocketing most of the extra cash.”

Just as worrisome was the reaction to the doubts expressed by Huszar and fellow colleagues within the Fed. Instead of worrying “obsessively about the costs versus the benefits” of their actions, policymakers seemed concerned only with feedback from Wall Street and institutional investors.

When QE1 concluded in April, 2010, Huszar observed that Wall Street banks and near-banks had scored a triple play. Not only had they booked decent profits on those loans they did make, but they also collected fat brokerage fees on the Fed’s securities purchases and saw their balance sheets enhanced by the rise in mortgage-security prices. Remember – the Fed’s keenness to buy mortgage-backed securities in the first place was due primarily to the omnipresence of these securities in bank portfolios. Indeed, mortgage-backed securities served as liquid assets throughout the financial system and it was their plummeting value during the financial crisis that caused the paralyzing credit freeze. Meanwhile, “there had been only trivial relief for Main Street.”

When, a few months later, the Fed announced QE2, Huszar “realized the Fed had lost any remaining ability to think independently from Wall Street. Demoralized, I returned to the private sector.”

Three years later, this is how Huszar sizes up the QE program. “The Fed keeps buying roughly $85 billion in bonds a month, chronically delaying so much as a minor QE taper. Over five years, its purchases have come to more than $4 trillion. Amazingly, in a supposedly free-market nation, QE has become the largest financial-market intervention by any government in world history.”

“And the impact? Even by the Fed’s sunniest calculations, aggressive QE over five years has generated only a few percentage points of U.S. growth. By contrast, experts outside the Fed…suggest that the Fed may have [reaped] a total return of as little as 0.25% of GDP (i.e., a mere $40 billion bump in U.S. economic output).” In other words, “QE isn’t really working” – except for Wall Street, where 0.2% of U.S. banks control 70% of total U.S. bank assets and form “more of a cartel” than ever. By subsidizing Wall Street banks at the expense of the general welfare, QE had become “Wall Street’s new ‘too big to fail’ policy.”

The Beginning of Wisdom

Huszar’s piece gratifies on various levels. It answers one question that has bedeviled Fed-watchers: Do the Fed’s minions really believe the things the central bank says? The answer seems to be that they do – until they stop believing. And that happens eventually even to high-level field generals.

It is obvious that Huszar stopped drinking Federal Reserve Kool-Aid sometime in 2010. The Fed’s stated position is that the economy is in recovery – albeit a slow, fragile one – midwived by previous fiscal and monetary policies and preserved by the QE series. Huszar doesn’t swallow this line, even though dissent among professional economists has been muted over the course of the Obama years.

Most importantly, Huszar’s eyes have been opened to the real source of the financial crisis and ensuing recession; namely, government itself. “Yes, those financial markets have rallied spectacularly…but for how long? Experts…are suggesting that conditions are again ‘bubble-like.'”

Having apprehended this much, why has Huszar’s mind stopped short of the full truth? Perhaps his background, lacking in formal economic training, made it harder for him to connect all the dots. His own verdict on the failings of QE should have driven him to the next stage of analysis and prompted him to ask certain key questions.

Why did banks “only issu[e] fewer and fewer loans”? After all, this is why QE stimulated Wall Street but not Main Street; monetary policy normally provides economic stimulus by inducing loans to businesses and (secondarily) consumers, but in this case those loans were conspicuous by their absence. The answer is that the Fed deliberately began paying interest on the excess reserves it holds for its member banks. Instead of making risky loans, banks could make a riskless profit by holding excess reserves. This unprecedented state of affairs was deliberately stage-managed by the Fed.
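
The incentive is easy to see with hypothetical numbers. The sketch below is in Python; the rate paid on reserves, the lending rate, the default probability and the servicing cost are all invented for illustration and are not actual Fed or market figures.

    # Stylized choice facing a bank: make a loan, or park funds as excess reserves?
    rate_on_excess_reserves = 0.0025   # hypothetical riskless rate paid by the Fed
    loan_rate = 0.04                   # hypothetical gross lending rate
    default_probability = 0.05         # hypothetical chance of losing the entire principal
    servicing_cost = 0.015             # hypothetical cost of originating and servicing the loan

    expected_loan_return = ((1 - default_probability) * loan_rate
                            - default_probability * 1.0
                            - servicing_cost)
    reserve_return = rate_on_excess_reserves   # riskless, no servicing cost

    print(round(expected_loan_return, 4))      # -0.027: an expected loss
    print(reserve_return)                      # +0.0025, with zero risk

    # With numbers like these, holding excess reserves dominates lending --
    # the incentive described in the paragraph above.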

Why has the Fed been so indifferent to the net effects of its actions, instead of “worry[ing] obsessively about the costs versus the benefits”? The answer is that the Fed has been lying to the public, to Congress and conceivably even to the Obama Administration about its goals. The purpose of its actions has not been to stimulate the economy, but rather to keep it comatose (for “its” own good) while the Fed artificially resuscitates the balance sheets of banks.

Why did the Fed suddenly start buying mortgage-backed securities after “never [buying] one mortgage bond…in its almost 100-year history”? Bank portfolios (more particularly, portfolios of big banks) have been stuffed to the gills with these mortgage-backed securities, whose drastic fall in value during the financial crisis threatened the banks with insolvency. By buying mortgage-backed securities as if they were going out of style, the Fed increased the demand for those securities, driving up their price. That acted as artificial respiration for bank balance sheets, just as Andrew Huszar relates in his op-ed.

The resume of Fed Chairman Ben Bernanke is dotted with articles extolling the role played by banks as vital sources of credit to business. Presumably, this – rather than pure cronyism, as vaguely hinted by Huszar – explains Bernanke’s obsession with protecting banks. (It was Bernanke, acting with the Treasury Secretary, who persuaded Congress to pass the enormous bailout legislation in late 2008.)

Why has “the Fed’s independence [been] eroding”? There is room for doubt about Bernanke’s motivations in holding both short-term and long-term interest rates at unprecedentedly low levels. These low interest rates have enabled the Treasury to finance trillions of dollars in new debt and roll over trillions more in existing debt at low rates. At the higher interest rates that would normally prevail under circumstances like ours, debt service would devour most of the federal budget. Thus, Bernanke is carrying water for the Treasury. Reservoirs of water.
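
The stakes are easy to illustrate with round numbers. The sketch below is in Python; the debt figure is a rough, illustrative order of magnitude and the two interest rates are hypothetical, chosen only to show how sensitive debt service is to the level of rates.

    # Illustrative debt-service arithmetic.
    federal_debt = 17e12        # illustrative round figure, in dollars
    suppressed_rate = 0.02      # hypothetical average rate with Fed-suppressed yields
    normal_rate = 0.06          # hypothetical "normal" average rate

    print(federal_debt * suppressed_rate)   # about $340 billion per year in interest
    print(federal_debt * normal_rate)       # about $1.02 trillion per year in interest

    # The gap -- several hundred billion dollars a year -- is the sense in which
    # low rates carry water for the Treasury.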

Clearly, Huszar has left out more than he has included in his denunciation of QE. Yet he has still been savaged by the mainstream press for his presumption. This speaks volumes about the tacit gag order that has muffled criticism of the Administration’s economic policies.

It’s About Time Somebody Started Yellin’ About Yellen

Kevin Warsh was the youngest man ever to serve as a member of the Federal Reserve Board of Governors when he took office in 2006. He earned a favorable reputation in that capacity until he resigned in 2011. In “Finding Out Where Janet Yellen Stands” (The Wall Street Journal, 11/13/2013), Warsh digs deeper into the views of the incoming Federal Reserve Chairman than the questions on everybody’s lips: “When will ‘tapering’ of the QE program begin?” and “How long will the period of ultra-low interest rates last?” He sets out to “highlight – then question – some of the prevailing wisdom at the basis of current Fed policy.”

Supporters of QE have pretended that quantitative easing is “nothing but the normal conduct of monetary policy at the zero-lower-bound of interest rates.” Warsh rightly declares this to be hogwash. While central banks have traditionally lowered short-term interest rates to stimulate investment, “the purchase of long-term assets from the U.S. Treasury to achieve negative real interest rates is extraordinary, an unprecedented change in practice… The Fed is directly influencing the price of long-term Treasurys – the most important asset in the world, the predicate from which virtually all investment decisions are judged.”

Since the 1950s, modern financial theory as taught in orthodox textbooks has treated long-term U.S. government bonds as the archetypal “riskless asset.” This provides a benchmark for one end of the risk spectrum, a vital basis for comparison that is used by investment professionals and forensic economists in court testimony. Or rather, all this used to be true before Ben Bernanke unleashed ZIRP (the Zero Interest Rate Policy) on the world. Now all the finance textbooks will have to be rewritten. Expert witnesses will have to find a new benchmark around which to structure their calculations.
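
The benchmark matters because it anchors routine calculations. A forensic economist valuing a future loss, for example, discounts it at a rate built on the riskless Treasury yield. The sketch below is in Python; the loss amount, the horizon and the two yields are purely illustrative, chosen only to show how much the benchmark matters.

    # Present value of a hypothetical $100,000 loss due ten years from now,
    # discounted at two illustrative "riskless" benchmark yields.
    future_loss = 100_000.0
    years = 10

    for riskless_yield in (0.05, 0.01):     # pre-ZIRP-style vs. ZIRP-era-style yields
        present_value = future_loss / (1 + riskless_yield) ** years
        print(riskless_yield, round(present_value, 2))   # ~61391.33 and ~90528.70

    # Shifting the benchmark changes every such valuation, which is why suppressing
    # the long Treasury yield ripples through expert testimony and investment
    # analysis alike.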

Worst of all, the world’s investors are denied a source of riskless fixed income. They can still purchase U.S. Treasurys, of course, but these are no longer the same asset that they knew and loved for decades. Now the risk of default must be factored in, just as it is for the bonds of a banana republic. Now the effects of inflation must be factored into their prices. The effect of this transformation on the world economy is incalculably, unfavorably large.

Ben Bernanke has repeatedly maintained that the U.S. economy would benefit from a higher rate of inflation. Or, as Warsh puts it, that “the absence of higher inflation is sufficient license” for the QE program. Once again, Warsh begs to differ. Here, he takes issue with Bernanke’s critics as much as with Bernanke himself. “The most pronounced risk of QE is not an outbreak of hyperinflation,” Warsh contends. “Rather, long periods of free money and subsidized credit are associated with significant capital misallocation and malinvestment – which do not augur well for long-term growth or financial stability.”

Déjà Va-Va-Voom

Of all the hopeful signs to recently emerge, this is the most startling and portentous. For centuries – at least two centuries before John Maynard Keynes wrote his General Theory and in the years since – the most important effect of money on economic activity was thought to be on the general level of prices; i.e., on inflation. Now Warsh is breaking with this time-honored tradition. In so doing, he is paying long-overdue homage to the only coherent business-cycle theory developed by economists.

In the early 1930s, F.A. Hayek formulated a business-cycle theory that temporarily vied with the monetary theory of John Maynard Keynes for supremacy among the world’s economists. Hayek’s theory was built around the elements stressed by Warsh – capital misallocation and malinvestment caused by central-bank manipulation of the money supply and interest rates. In spite of Hayek’s prediction of the Great Depression in 1929 and of the failure of the Soviet economy in the 1930s, Hayek’s business-cycle theory was ridiculed by Keynes and his acolytes. The publication of Keynes’ General Theory in 1936 relegated Hayek to obscurity in his chosen profession. Hayek subsequently regained worldwide fame with his book The Road to Serfdom in 1944 and even won the Nobel Prize in economics in 1974. Yet his business-cycle theory has survived only among the cult of Austrian-school economists that stubbornly refused to die out even as Keynesian economics took over the profession.

When Keynesian theory was repudiated by the profession in the late 1970s and 80s, the Austrian school remained underground. The study of capital theory and the concept of capital misallocation had gone out of favor in the 1930s and were ignored by the economics profession in favor of the less-complex modern Quantity Theory developed by Milton Friedman and his followers. Alas, monetarism went into eclipse in the 80s and 90s and macroeconomists drifted back towards a newer, vaguer version of Keynesianism.

The Great Financial Crisis of 2008, the subsequent Great Recession and our current Great Stagnation have made it clear that economists are clueless. In effect, there is no true Macroeconomic theory. Warsh’s use of the terms “capital misallocation” and “malinvestment” may be the first time since the 1930s that these Hayekian terms have received favorable mention from a prominent figure in the economic Establishment. (In addition to his past service as a Fed Governor, Warsh also served on the National Economic Council during the Bush Administration.)

For decades, graduate students in Macroeconomics have been taught that the only purpose of stimulative economic policies by government is to speed up the return to full employment when recession strikes. The old Keynesian claims that capitalist economies could not achieve full employment without government deficit spending or money printing were discredited long ago. But this argument in favor of artificial stimulus has itself now been discredited by events, not only in the U.S. and Europe but also in Japan. Not only that, the crisis and recession proceeded along lines closely following those predicted by Hayek – lavish credit creation fueled by artificially low interest rates long maintained by government central banks, coupled with international transmission of capital misallocation by flexible exchange rates. It is long past time for the economics profession to wrench its gaze away from the failed nostrums of Keynes and redirect its attention to an actual theory of business cycles with a demonstrated history of success. Warsh has taken the key first step in that direction.

The Rest of the Story

When a central bank deliberately sets out to debase a national currency, the shock waves from its actions reverberate throughout the national economy. When the economy is the world’s productive engine, those waves resound around the globe. Warsh patiently dissects current Fed policy piece by piece.

In answer to the oft-repeated defense that the Fed is merely in charge of monetary policy, Warsh correctly terms the central bank the “default provider of aggregate demand.” In effect, the Fed has used its statutory mandate to promote high levels of employment as justification for assuming the entire burden of economic policy. This flies in the face of even orthodox, mainstream Keynesian economics, which sees fiscal and monetary policies acting in concert.

The United States is “the linchpin in the international global economy.” When the Fed adopts extremely loose monetary policy, this places foreign governments in the untenable position of having either to emulate our monetary ease or to watch their firms lose market share and employment to U.S. firms. Not surprisingly, politics pulls them in the former direction and this tends to stoke global inflationary pressures. If the U.S. dollar should depreciate greatly, its status as the world’s vehicle currency for international trade would be threatened. Not only would worldwide inflation imperil the solidity of world trade, but the U.S. would lose the privilege of seigniorage, the ability to run continual trade deficits owing to the world’s willingness to hold American dollars in lieu of using them to purchase goods and services.

The Fed has made much of its supposed fidelity to “forward guidance” and “transparency,” principles intended to allow the public to anticipate its future actions. Warsh observes that its actions have been anything but transparent and its policy hints anything but accurate. Instead of giving lip service to these cosmetic concepts, Warsh advises, the Fed should simply devote its energies to following correct policies. Then the need for advance warning would not be so urgent.

Under these circumstances, it is hardly surprising that we have so little confidence in the Fed’s ability to “drain excess liquidity” from the markets. We are not likely to stand in awed admiration of the Fed’s virtuosity in monetary engineering when its pronouncements over the past five years have varied from cryptic to highly unsound and its predictions have all gone wrong.

Is the Tide Turning?

To a drowning man, any sign that the waters are receding seems like a godsend. These articles appear promising not only because they openly criticize the failed economic policies of the Fed and (by extension) the Obama Administration, but because they dare to suggest that the Fed’s attempt to portray its actions as merely conventional wisdom is utterly bogus. Moreover, they imply or (in Kevin Warsh’s case) very nearly state that it is time to reevaluate the foundations of Macroeconomics itself.

Is the tide turning? Maybe or maybe not, but at last we can poke our heads above water for a lungful of oxygen. And the fresh air is intoxicating.

DRI-277 for week of 11-3-13: Why Are There No Economists Among Leading Opinion-Molders Today?

An Access Advertising EconBrief:

Why Are There No Economists Among Leading Opinion-Molders Today?

Today the discipline of economics occupies a strange position in the public mind. The Financial Crisis of 2008 and ensuing Great Recession brought economics to the daily attention of Americans more forcefully than at any time since the Great Depression of the 1930s. The Federal Reserve’s monetary policies, particularly its recent tactic of quantitative easing (QE), the large-scale expansion of the stock of money, have made monetary policy the object of attention to a greater extent than at any time since the days of stagflation and supply-side economics during the Reagan administration in the early 1980s. One would expect to find economists occupying center stage almost every day.

Not so, surprisingly enough. Contrast the position of economics today with, say, that in the 1960s and 70s, just prior to Ronald Reagan’s election as President. At that point, three economists were familiar on sight and sound to a great many Americans: John Kenneth Galbraith, Paul Samuelson and Milton Friedman. Today, the venues, space and time available to economists far outnumber those existing forty years ago. Yet no economist today even approaches the influence and familiarity of the Big Three in their heyday. A brief recollection of each is in order for the benefit of younger readers.

The Big Three of Yesteryear: Galbraith, Samuelson and Friedman

The Big Three economists of yesteryear bestrode the 20th century like colossi and stood tall into the 21st. They were all born and died within a few years of each other: Galbraith (1908-2006) outlived Samuelson (1915-2009) and Friedman (1912-2006) in terms of lifespan, although he died first, in April 2006. They were all self-made and experienced the Depression first hand. Each was a prolific writer but appealed to a different audience.

The best-selling writer of the three was John Kenneth Galbraith, whose literary zenith produced The Affluent Society (1958) and The New Industrial State (1967). Compactly put, Galbraith’s thesis was that Americans were satiated with consumer goods but starved for so-called “public goods;” i.e., the goods government was uniquely situated to provide. The economy thrived not on competition but on monopoly exercised by giant corporations, who artificially created the demand for their products via advertising rather than merely responding to the inchoate demands expressed by consumers. Since consumer wants depended on the same process that satisfied them, that process of want-satisfaction could not be justified or defended as simply “giving the people what they want.” Therefore, government was not merely allowed but morally required to tax and regulate business to restrain their behavior and acquire the resources necessary to redress the imbalance between public and private spending. Galbraith’s views resonated with the general public much more than with the economics profession itself, where only the New Left, radicals, institutionalist admirers of Thorstein Veblen and quasi-Marxists found them attractive. Needless to say, Galbraith’s ideas seem quaint today in light of the decline and fall of the supposedly invulnerable giant corporations he worshipped.

In addition to his economic works, which also included American Capitalism (1952) and A Theory of Price Control (1952), Galbraith wrote novels and memoirs of his travels and tenure as Ambassador to India. His iconoclastic views – he minted the phrase “the conventional wisdom” – and ironic style endeared him to the general public, whose distrust of authority he shared. This seems odd of a man whose World War II service as deputy head of the Office of Price Administration made him one of America’s chief bureaucrats, but Galbraith’s early life was spent on a farm in Canada. Another of his books was a novel satirizing America’s foreign-policy establishment (“Foggy Bottom”). Perhaps the chief object of his scorn over the years was the corporate hierarchy, whose morals and mores he never tired of mocking despite his exaggerated opinion of their power over markets.

Paul Samuelson was the leading theoretician among American economists and the first American awarded the economics version of the Nobel Prize in 1970. His scholarly articles numbered in the hundreds, but he is remembered today chiefly for two books: Foundations of Economic Analysis, based on his doctoral dissertation, which signaled a turning point in economics to mathematics as the formal mode of analysis; and Economics, his all-time best-selling college text that combined principles textbooks in microeconomics and macroeconomics in order to integrate the two analytically into the so-called “Neoclassical synthesis.” Samuelson combined the elements of classical price theory as developed by Alfred Marshall and refined by subsequent generations with Keynesian macroeconomics as modified from Keynes by the neo-Keynesian generation that included Samuelson himself, Franco Modigliani and James Tobin, among others. This book (really, a double book) taught generations of economists through over twenty editions from the 1940s until the 21st century.

Samuelson’s central conceit was that individual free markets worked beautifully, but markets in the aggregate were prone to unemployment or inflation. This aggregate shortcoming could only be corrected by government spending directed by… well, by men like Samuelson himself; although he always refused to take a policy post in the Democrat administrations he supported and advised.

As with Galbraith, it is difficult for non-economists today to credit the veneration Samuelson inspired in certain quarters. In the late 1950s, Samuelson began predicting that the Soviet Union would soon overtake the U.S. in per-capita GDP (then GNP). He retained this prediction in successive editions of his textbook – until the final overthrow of the Soviet Union in 1991. Galbraith and Samuelson made an odd couple of Keynesians – the former supporting massive government spending in spite of his distrust of the bureaucracy and the latter embracing deficit spending by government because he had faith in the ability of government to fine-tune aggregate economic activity. Samuelson shared a forum in Newsweek Magazine in an alternating column with the third member of our Big Three, Milton Friedman.

Friedman became well-known among fellow economists long before he attracted public notice. He won the John Bates Clark medal awarded to the leading economist under the age of 40 and published a notable collection entitled Essays in Positive Economics that contains some of the best expository writing ever done in his subject. 1957 saw what he considered his best piece of work, A Theory of the Consumption Function, which successfully reconciled cross-section data on aggregate consumption among different groups over the same time period with time-series data on consumption among all groups over time.

In the early 1960s, Friedman published two books within a year of each other that catapulted him to public attention and professional eminence. Capitalism and Freedom made both the political and economic case for free markets, an analytical position that had nearly been lost through neglect. A Monetary History of the United States, which he co-authored with Anna Schwartz of the National Bureau of Economic Research, made an empirical case for the money stock as perhaps the chief economic variable of interest both historically and for policy purposes. Milton Friedman became the world’s chief exponent of the Quantity Theory of Money, which had been around ever since David Hume in the 18th century but had never before been put to such comprehensive use in economic theory. Ironically, Friedman’s single-minded focus on the money stock proved to be his Achilles heel. Although Friedman is still greatly respected for his manifold contributions to economic theory and his prodigious talents as a defender of freedom and popularizer of economic thought, his monetary theory is little regarded among professionals of all ideological stripes.

As the 1960s and 1970s wore on, Friedman headed up the disloyal opposition to Keynesian economics within the economics profession. Keynes had been posthumously crowned king in the 1950s and early 1960s as Western economies began to adopt the policy of spending their way to prosperity. But the advent of simultaneous high inflation and high unemployment, or “stagflation,” in the 1970s put paid to the Keynesian tenure atop the profession. Friedman and Edmund Phelps independently and more or less simultaneously developed the hypothesis of a “natural rate of unemployment” that defied Keynesian efforts to reduce it via deficit spending. Only through continually increasing injections of money into the economy – producing ever-increasing rates of inflation and resulting unrest – could unemployment be reduced and held below this “natural rate.” Friedman’s Nobel Prize, received in 1976, was by this time a foregone conclusion.

In 1980, Friedman reached his zenith of public popularity with the best-selling book and accompanying PBS television series Free to Choose. This was a popularized version of Capitalism and Freedom, updated for the 80s. For the first time, an economist had scaled the heights of public popularity, professional acclaim and policy prominence. Like Samuelson, Friedman preferred to exercise his influence outside of government. Unlike Samuelson, though, Friedman had actually worked for government in World War II. It was Milton Friedman, of all people, who devised the concept of government tax withholding to streamline the process of revenue collection.

Vacuum at the Top

Today, economics is omnipresent in our lives. Yet there is nobody in the public square whose position rivals that of the Big Three of yesteryear. The closest would be Paul Krugman, who has written several popular books, whose Nobel Prize is spelled exactly like the one received by Samuelson and who believes that the stock of money can play an important role in economic policy. In other words, he is a pale shadow of Galbraith, Samuelson and Friedman.

Noted economist Sam Peltzman probed this seeming paradox in an article published in the May, 2013 issue of Econ Journal Watch, 10(2) pp. 205-209, entitled “Why Is There No Milton Friedman Today?” Peltzman’s analytical qualifications are impeccable. He has carved out a distinguished career as a critic of government regulation. His crown jewel is a famous 1975 study on automobile safety that introduced the pioneering concept of “risk compensation” to the social sciences.

Risk compensation refers to the behavioral effects created by safety improvements and regulations. When people take more risk in response to safety improvements and/or regulation, this change in behavior has been christened the “Peltzman Effect.” Thus, Sam Peltzman has been given the greatest scientific honor of all – a scientific principle has been named for him.

Peltzman notes the absence of successors to the Big Three. He especially abhors the vacuum created by the loss of Milton Friedman. Peltzman explains it by citing Friedman’s unique talents. The first of these was his knack for communicating economic insights to the masses. The same expository skill Friedman brought to his professional work equipped him to educate the general public.

Peltzman illustrates Friedman’s style with a revealing anecdote from his own (Peltzman’s) academic career. Peltzman’s first graduate-school class was Friedman’s legendary class in Price Theory at the University of Chicago. The students “eagerly awaited our introduction to the technical mysteries of our chosen profession. Instead, we got an extended paraphrase of an article entitled ‘I, Pencil,’ in which a humble pencil tells us of the herculean coordination problem required to get itself produced and distributed and of the virtues of markets in solving that problem.” Peltzman correctly attributes the essay to Leonard Read, founder of the Foundation for Economic Education and its journal, The Freeman, in which the essay originally appeared. Peltzman’s points are that Friedman’s pedagogy was time-tested and simple and he employed it before professional audiences as well as public ones.

Friedman’s second unique virtue was his zest for combat. Libertarian economists were scarce in Friedman’s day and he knew his arguments would be received with scorn and incredulity. Nevertheless, his rejoinders were cheerful and clever; he relished the opportunity to buck the tide of collectivist conformism. And his devotion to his principles was unyielding. “All against one makes for a good show,” observes Peltzman, “and Friedman liked the odds.” This brings to mind the answer made by John Wayne’s character J.B. Books, the dying gunfighter in the movie The Shootist, when asked to account for his luck in surviving so many gunfights over the years: “I was willing.”

It is clear that even Galbraith and Samuelson couldn’t measure up to Milton Friedman by Peltzman’s criteria. Galbraith had the communication skills and debating talent but little worthwhile to communicate; his theory badly needed shoring up. Samuelson had the theory but communicated largely by writing letters to his fellow economists in the language of differential equations. His text worked well enough for a captive academic audience but nobody ever characterized his persona as “dynamic.” Both these men were, to some greater or lesser extent, arguing for the status quo, while one of Friedman’s books was titled The Tyranny of the Status Quo.

So far, so good. Peltzman makes a concise, compelling case for Milton Friedman as sui generis. Now, though, Peltzman tries to explain why today’s economists do not measure up to the standard set by Friedman. Although his observations of the economics profession seem descriptively accurate, his attitude toward their change in behavior is disturbingly complacent.

The Contemporary Economist as Engineer

In assessing the state of the profession today, Peltzman at first sounds optimistic. It’s true that there is no Milton Friedman leading the charge for freedom and free markets. But that isn’t due to a lack of free-market economists. “There are…numbers of them within our gates, perhaps more than in Friedman’s time…But they lack something that Friedman had in…his time.” Actually, they lack several somethings.

First, they lack the kind of dedicated, first-rate opponents Friedman had in abundance. “…The range of belief within economics has narrowed, partly because of Friedman’s efforts…the modal economist is less [interventionist]… than the modal economist of Friedman’s era…Market solutions…are given a respectable hearing or are part of the consensus today (think flexible exchange rates or unregulated railroad rates). There is less room today for a good fight among economists.” Apparently, Peltzman does not read Paul Krugman’s column in the New York Times.

If this sounds dubious, just listen to Peltzman’s next assertion. “Consider…what has happened in the aftermath of the financial crisis of 2008. The chattering class pronounced with excited joy that Capitalism is now Dead, but the political center hardly moved, and in some countries even moved right – to fiscal rectitude, labor market reform, etc. Hardly any left party that moved away from socialism in Friedman’s heyday has moved back since. What is a committed free-market economist spoiling for a good fight to do when the other side is not so far away?”

This narrative hardly sounds like a description of the multi-trillion dollar stimulus, multiple bailouts of big banks and financial firms, government seizure and handover to autoworkers of two of the Big Three auto companies, impending nationalization of health care, regulatory reign of terror and Federal-Reserve money-creation and asset-purchase binges that have characterized the U.S. since 2008. Contrary to Peltzman, events since 2008 have conformed more to Newsweek‘s famous cover headline: “We Are All Socialists Now.” And what has today’s “modal economist” done in response to this overwhelming frontal assault on free markets?

If Peltzman’s judgment that the economics profession has gravitated toward freer markets were correct, we would expect to read protests from our modal economist. Instead, he has, according to Peltzman, turned into “a much cooler customer. This one tends to be less committed to any politico-economic system.” Wait a minute – what happened to all those “numbers of …free-market economists…within our gates” just a minute ago? We could sure use them, because it now turns out that among the cooler customers, “the animating spirit is more the engineer solving specific problems than the philosopher seeking a unified world view. The questions asked tend to be smaller than, say, the connection between capitalism and freedom.”

Strangely, Peltzman doesn’t seem perturbed about this loss of ideological fervor, because “the skill with which the question is answered tends to be greater than in times past.” What about their professional duty to educate the public in the great truths of economics? “At some point,” Peltzman declares airily, “today’s leading economists may want to communicate their results to a wider audience. But this is an afterthought, in the sense that what is valued within the profession – the skill in obtaining the result – is not what the outside audience is interested in.”

Peltzman is surely wrong about the outside audience, who is intensely interested in “the skill in obtaining the result” because (at least in principle) it should affect the veracity of the result. Presumably what Peltzman meant to say is that the audience doesn’t care what method economists use to get the answer as long as they get the right one. And in this connection, it is hard to see what economics profession Peltzman is referring to – surely not the one that actually exists. For over two decades, Deirdre McCloskey and Steven Ziliak have proclaimed that econometric practice within the social sciences – in economics and elsewhere – is scandalously incompetent. Most empirical articles in the leading professional journals over-rely upon and misuse the principle of “statistical significance.” Thus the foundation of empirical economics has rotted away – and with it has gone Peltzman’s claim of greater skill.

Peltzman is not merely blind to the failings of his profession today; he is complacent about its future prospects. “It is hard for me to see a reversal of the kind of trends I have described…in…fields where the engineer has replaced the philosopher. Perhaps an economic calamity will shake things up in economics. But we had one in 2008, and very little changed within the profession. There was a period of befuddlement [after which] economists went back to their tinkering and were largely irrelevant to the political response to the crisis.”

Peltzman’s complacency even extends back to Friedman’s work. He attributes the fact that “there is no serious socialist faction left within economics” to “Friedman’s success,” which “makes it harder for someone to follow in his footsteps.” Peltzman declares flatly that “there is no serious political/economic alternative to some form of capitalist organization in any major economy.” Peltzman cannot have forgotten – can he? – that this was exactly the point made by Ludwig von Mises and reinforced by Mises and his student F.A. Hayek in the Socialist Calculation debates of the 1930s. This was a central contention of Hayek in his great polemic The Road to Serfdom in 1944. It was Hayek, the guiding spirit behind the Mont Pelerin Society of worldwide free-market economists, who sparked Friedman’s interest in political activism in the late 1950s. Friedman admitted all this in his Introduction to the 1994 edition of The Road to Serfdom and in interviews with Hayek’s biographer, Alan Ebenstein.

Peltzman’s most outrageous error is his claim that “the Fed chairman learned from Friedman not to permit a credit freeze to turn into a monetary implosion.” Milton Friedman would have slit both wrists and reclined in a warm bath before endorsing the policies followed by Ben Bernanke before, during or after the Financial Crisis of 2008. Friedman’s criticism of Federal Reserve policy during the Great Depression did not pertain to a “credit freeze” but rather to the wholesale failure of banks throughout the U.S. and resulting nosedive taken by the money stock when deposits were destroyed. A credit freeze – whatever else it might entail – implies no such rapid decline in the money supply and therefore does not demand a “helicopter drop” of money, a la Milton Friedman, in order to cure it. Peltzman’s jaw-dropping attempt to imply a posthumous endorsement of Bernanke by Friedman is as inexcusable as it is inexplicable.

Peltzman has chosen the wrong model for his model economist – Friedman rather than Hayek. He has also chosen the wrong model for his modal economist – the engineer rather than the philosopher. In The Counter-Revolution of Science (recently republished under its original planned title, Studies on the Abuse and Decline of Reason), Hayek outlines the disastrous effects of subjecting society to control by the “mind of the engineer.”

The engineer strives to bring all aspects of a problem under his conscious control in order to achieve a technical optimum. He chafes at external constraints such as prices, incomes and interest rates; they are not “objective attributes of things but reflections of a particular human situation at a given time and place.” He sees them as meaningless, irrational interferences with his optimization techniques. When an engineer confronts a machine, for instance, he typically strives to gain the maximum power or energy output from given inputs of resources. In fact, as Hayek points out, the engineer’s technical optimum is usually just the solution that would obtain if the supply of working capital or resources were unlimited or the interest rate were zero. In adopting the perspective of the engineer, the economist is losing his own unique perspective. A good real-world example of the engineering perspective gone wrong in economic practice would be the misguided activist economic policies of former engineer Herbert Hoover in trying to combat the Great Depression.

Peltzman correctly recalls that Milton Friedman advanced the view that the profession should pursue “positive economics” by formulating hypotheses and testing them empirically. But Peltzman neglects to inform his readers that today this viewpoint is as dead as the dodo – deader, actually, since scientists now at least talk of bringing the dodo back to life, but we are not about to resurrect the canard that econometrics can be used to test predictive hypotheses in the social sciences in the same way that laboratory experiments test natural scientific hypotheses. In academic economics today, nobody believes that anymore. The massive, sausage-producing enterprise of submitting articles to refereed professional journals for acceptance continues, but purely as a ritual for granting tenure. Nobody now pretends that this process has any value above the purely ceremonial. It is now axiomatic in economics that econometrics does not prove anything, test any hypotheses or rule out (or in) any part of economic theory.

The formal mathematical models economists swear by give the appearance of scientific rigor, but this is spurious. In order to reduce actual human activity to systems of solvable equations and stable equilibria, economists have to remove so much realistic detail that their models are unrecognizable to the layman. They are virtually useless for making quantitative predictions. We know this because, as the former Donald McCloskey put it, economists cannot answer “the American question: If you’re so smart, why aren’t you rich?”

Today, economic policy is taking measures that economists have warned against for centuries. The attempt to create wealth and induce prosperity by massive money creation is traditionally a tactic of desperation, one that inevitably ends in crisis and chaos. Yet economists sit silent instead of rising in indignant protest. And Peltzman appears to approve both the desperation tactics and the compliance of his profession.

Actually, Peltzman does betray deep-seated doubts about the current path of the economics profession in his last sentence. It reads: “But one wonders still: is this only the calm before the storm?” And one wonders if Peltzman will have cause to regret his failure to speak out.

Whither Economics?

Sam Peltzman has courageously taken on one of the great contemporary mysteries. It is a missing-persons case. Where did the economist go in our public discourse? Peltzman succeeds in finding his quarry, all right. But having found him, he is distressingly indifferent to the outcome. His confidence in the methods and motives of today’s economists seems utterly misplaced. Without realizing it, Peltzman himself is providing part of the explanation for the absence of economists from public discourse. He is sanctioning the abandonment of what they do best – teaching the philosophy behind economics – in favor of what they do worst – pretending to employ the methods and techniques of engineering in the foreign realm of economics.

DRI-326 for week of 9-1-13: Quantity vs. Quality in Economists

An Access Advertising EconBrief:

Quantity vs. Quality in Economists

Students of economics have long complained that economics texts focus too much on quantity and not enough on quality when evaluating goods. The same issue arises when comparing economists themselves. The career of Nobel Laureate Ronald Coase, who died this week at age 102, is a polar case.

Economists advance by publishing articles in prestigious, peer-reviewed journals. The all-time leader in the number of articles published is the late Harry G. Johnson. Despite dying young at age 53, Johnson compiled the staggering total of 526 published articles during his lifetime.

The use of advanced mathematics and abstract modeling techniques has enabled economists to rack up impressive publications scores by introducing slight mathematical refinements that add little to the substantive meaning or practical value of their achievement. When asked to account for the comparative modesty of a list of publications only one-fourth the size of Johnson’s, Nobel Laureate George Stigler countered, “Yeah, but mine are all different.”

Coase stands at the other extreme. His complete list of articles numbers fewer than twenty, but two of those are among the most-frequently consulted by economists, lawyers and other specialists, not to mention by the general public. He published a long-awaited, widely noticed book in 2012 despite having passed his centenary the year before. His life is an advertisement for the value of quality over quantity in an economist.

Another notable aspect of Coase’s work is its accessibility. In an age when few professional contributions can be read and understood by non-specialists, much less by interested non-economists, Coase’s work is readily comprehensible by the educated layperson. Now is the time to rehearse the insights that made Coase’s name a byword within the economics profession. His death makes this review emotionally as well as intellectually fitting.

Why Do Businesses Exist?

At 26 years of age, Ronald Coase was a left-leaning economics student. He pondered the following contradictory set of facts: On the one hand, socialists ever since Saint-Simon had advocated running a nation’s economy “like one big factory.” On the other hand, orthodox economists declared this to be impossible. Yet some highly successful corporations reached enormous size.

Who was right? It seemed to Coase that the answer depended on the answer to a more fundamental question – why do businesses exist? Be it a one-person shop or a huge multinational corporation, a business arises voluntarily. What conditions give birth to a business?

Coase found the answer in the concept of cost. (In his 1937 article, “The Nature of the Firm,” Coase used the term “marketing costs,” but the economics profession refined the term to “transactions costs”.) A business arises whenever it is less costly for people to organize into a hierarchical, centralized structure to produce and distribute output than it is to produce the same output and exchange it individually. And the business itself performs some activities within the firm while outsourcing others outside the firm. Again, cost determines the locus of these activities; any activity more cheaply bought than performed inside the firm is outsourced, while activities more cheaply done inside the firm are kept internal.

Like most brilliant, revolutionary insights, this seems almost childishly simple when explained clearly. But it was the first lucid justification for the existence of business firms that relied on the same economic logic that business firms themselves (and consumers) used in daily life. Previously, economists had been in the ridiculous position of assuming that businesses used economic logic but arose through some non-economic process such as habit or tradition or government direction.

Today, we have a regulatory process that flies in the face of Coase’s model. It implicitly assumes that markets are incapable of correctly organizing, assigning and performing basic business functions, ranging from safety to hiring to providing employee benefits. To make matters worse, the underlying assumption is that government regulatory behavior is either costless or less costly than the correlative function performed by private markets. As Coase taught us over 75 years ago, this flies in the face of the inherent “nature of the firm.”

The “Coase Theorem”

In mid-career, while working amongst a group of free-market economists at the University of Virginia, Ronald Coase made his most famous discovery. It assured him immortality among economists. Just about the best way to make a name for yourself is to give your name to a theory, the way John Maynard Keynes or Karl Marx did. But in Coase’s case, the famous “Coase Theorem” was actually devised by somebody else, using Coase’s logic – and Coase himself repudiated the use to which his work was put!

To appreciate what Coase did – and didn’t do – we must grasp the prior state of economic theory on the subject of “externalities.” Tort case law contained examples of railroad trains whose operation created fires by throwing off sparks into combustible areas like farm land. The law treated these cases by either penalizing the railroad or ignoring the damage. A famous economist named A. C. Pigou declared this to be an example of an “externality” – a cost created by business production that is not borne by the business itself because the business’s owners and/or managers do not perceive the damage created by the sparks to be an actual cost of production.

Rather than simply penalizing the railroad, Pigou observed, the economically appropriate action is to levy a per-unit tax against the railroad equal to the cost incurred by the victims of the sparks. This would cause the railroad to reduce its output by exactly the same amount as if it had perceived its sparks to be a legitimate cost of production in the first place. In effect, the railroad “pays” the costs of its actions in the form of reduced output (and reduced use of the resources necessary to provide railroad transport services), rather than paying them in the form of a fine. Why is the former outcome better than the latter? Because the purpose is not to hurt the railroad as retaliation for its hurting the farmer, the way one child hurts another in revenge for being hurt. A railroad is a business – in effect, it is a piece of paper expressing certain contractual relationships. It cannot feel hurt the way a human being can, so the fine may make the farmer feel better (if he or she received the fine as proceeds of a tort suit) but does not compensate for the waste of resources caused by having the railroad produce too much output. (“Too much” because resources will have to be devoted to repairing the damage caused by the sparks, and consumers value the resources used to do this more than the farmer values the loss.) In contrast, when the costs are factored into the railroad’s production decision, everybody values the resulting output of railroad services and other things as exactly worth their cost.
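
To make the mechanics concrete, here is a minimal numerical sketch – not from the original brief, and using made-up numbers – of how a per-unit tax equal to the spark damage reproduces exactly the output the railroad would choose if it treated the damage as an ordinary cost of production.

    # Illustrative sketch with assumed numbers (not from the article): competitive
    # railroad services with inverse demand P = 10 - 0.01*Q, private marginal cost
    # of $4 per unit, and spark damage of $2 per unit of output.

    def market_output(choke_price=10.0, slope=0.01, marginal_cost=4.0, per_unit_tax=0.0):
        """Competitive output: where willingness to pay equals marginal cost plus tax."""
        return (choke_price - (marginal_cost + per_unit_tax)) / slope

    untaxed  = market_output()                    # 600 units: the $2 spark damage is ignored
    taxed    = market_output(per_unit_tax=2.0)    # 400 units: Pigouvian tax equal to the damage
    internal = market_output(marginal_cost=6.0)   # 400 units: damage counted as a production cost
    print(untaxed, taxed, internal)

The taxed and internalized outputs coincide, which is Pigou’s point: the tax makes the railroad behave as if the damage had been its own cost all along.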

Of course, the catch is that somebody has to (1) realize the existence of the externality; and (2) calculate exactly how much tax to levy on the railroad to neutralize (or internalize) the externality; and then (3) do it. In the manner of a philosopher king, Pigou declared that this task should be assigned to a government regulatory bureaucracy. And for the next half-century (Pigou was writing in the early 1900s), mainstream economists salivated at the prospect of regulatory agencies passing rules to internalize all the pesky externalities that liberals and bureaucrats could dream up.

In 1960, Ronald Coase came along and gave the world a completely new slant on this age-old problem. Consider the following type of situation: you are flying from New York to Los Angeles on a low-price airline. You have settled somewhat uncomfortably into your seat, survived the takeoff and are just beginning to contemplate the six-hour flight when the passenger in front of you presses a button at his side and reclines his seatback – thereby preempting what little leg room you previously had. Now what?

This is not actually the example Coase used – it was used by contemporary economist Peter Boettke to illustrate Coase’s ideas – but it is especially good for our purposes. Using the same logic Coase applied to his examples, we reason thusly: It would be completely arbitrary to assign either of us a property right to the space preempted by the seatback. Why? Because the problem is not to stop bad people from doing bad things. Instead, we are faced with a situation in which ordinary people want to do good things that are in some sense contradictory or offsetting in their effects. On the airplane, my wish to stretch out is no more or less morally compelling than his to recline. The problem is that we can’t do both to the desired extent at the same time without getting in each other’s way; i.e., without offsetting each other’s efforts.

Indeed, this is true of most so-called externalities, including the railroad/farmer case. Case law usually treated the railroad as a nefarious miscreant imposing its will on the innocent, helpless farmer. But the railroad’s wish to provide transport services is just as reasonable as the farmer’s to grow and harvest crops. It is not unthinkable to enjoin the railroad against creating sparks – but neither should we overlook the possibility of requiring the farmer to protect against sparks or perhaps even not locate a farm within the threatened area. Indeed, what we really want in all cases is to discover the least-cost solution to the externality. That might involve precautions taken by the railroad, or by the farmer, or emigration by the farmer, or payment by the railroad to the farmer as compensation for the spark damage, or payment by the farmer to the railroad as compensation for spark avoidance.

In general, it is crazy to expect an uninvolved third party – particularly a government regulator – to divine the least-cost solution and implement it. The logical people to do that are the involved parties themselves, who know the most about their own costs and preferences and are on the scene. These are also the people who have the incentive to find a mutually beneficial solution to the problem. In our airline example, I might offer the man in front of me a small payment for not reclining. Or he might pay me for the privilege of reclining. But either way, we will bargain our way to a solution that leaves us both better off, if there is one. One of us would object to any proposed solution that did not leave him better off.

Of course, it would be useful for bargaining purposes to have an assignment of property rights; that is, a specification that I have the right to my space or that the man in front of me has the right to recline. That way, the direction of compensatory payments would be clear – money would flow to the right-holder from the right-seeker.

What if bargaining does not produce a change that both parties can agree on? That means that the right-holder values his right at more than the right-seeker is willing to pay – but in that case the status quo is already the efficient outcome, and no government tax could engineer an improvement either.

On the airline, suppose that I value the leg room preempted by the reclining seat at $10. Suppose, further, that airline policy gives him the right to recline. If I offer him $6 not to recline, he will accept my offer if he values reclining at any amount less than $6 – say, $5. Notice that we are now both better off than under the status quo ante. I get leg room I valued at $10 – of course, I had to pay $6 for it, but that is better than not having it at all, just as having the airline’s cocktail is better than being thirsty even though I had to pay $5 for it. He loses his right to recline, but he gets $6 instead – and reclining was only worth $5 to him. He is better off, just as he would be if he accepted an airline’s offer of $500 to surrender his seat and take a later flight, as sometimes happens.
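
The bargaining arithmetic can be sketched in a few lines. The snippet below is purely illustrative (it is not from the article), assumes zero transaction costs, and uses the hypothetical $10 and $5 valuations above; it shows that the seat stays upright – the efficient result – whichever passenger holds the right, with only the direction of the compensating payment changing. That is the point formalized in the “Coase theorem” discussed below.

    # Minimal sketch with assumed numbers from the example above: the rear passenger
    # values the leg room at $10, the front passenger values reclining at $5.
    # Bargaining is assumed costless; any price between the two valuations (such as
    # the $6 offer in the text) would be acceptable to both parties.

    def bargain(legroom_value, recline_value, right_holder):
        """Return (outcome, payment) after costless bargaining.

        right_holder: 'front' if the front passenger may recline,
                      'rear' if the rear passenger owns the space.
        The payment flows from the right-seeker to the right-holder; the midpoint
        of the two valuations stands in for the agreed price.
        """
        if legroom_value > recline_value:
            outcome = "seat stays upright"
            payment = (legroom_value + recline_value) / 2 if right_holder == "front" else 0.0
        else:
            outcome = "seat reclines"
            payment = (legroom_value + recline_value) / 2 if right_holder == "rear" else 0.0
        return outcome, payment

    for holder in ("front", "rear"):
        print(holder, bargain(10.0, 5.0, holder))
    # Either way the seat stays upright; only the compensating payment changes.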

We cannot even begin to estimate how many times people solve everyday problems like this through individual bargains. The world would be vastly better off if we were trained from birth in the virtues of a voluntary society where bargaining is a way to solve everyday disputes and make everybody better off. That training would stress the virtues of money as the lubricant that facilitates this sort of bargain because it is readily exchangeable for other things and because it is the common denominator of value. Instead, most of us are burdened by an instinctive tribal suspicion that money is evil and bargaining is used only to seek personal advantage at the expense of others. Experienced businesspeople know otherwise, but throughout the world the Zeitgeist is working against Coase’s logic. More and more, government and statutory law are held up as the only fair mechanism for resolving disputes.

University of Chicago economist George Stigler used Coase’s logic to devise the so-called Coase theorem, which says that when the transactions costs of bargaining are zero, the ultimate price and output results will be the same regardless of the initial assignment of property rights. This is true because both parties will have the incentive to bargain their way to an efficient improvement, if one exists. The assignment of property rights will affect the wealth of the bargainers, because it will determine the direction of the money flow, but economists are concerned with welfare (determined by prices and quantities), not wealth. No government regulatory body can improve on the free-market solution.

Coase disagreed with the theorem named after him – not because he disputed its logic, but because he foresaw the results. Economists would use it to look for circumstances when transactions costs were low or non-existent. Instead, Coase wanted to investigate real-world institutions such as government to compare its transactions costs to those of the market. He knew that real-world transactions costs were seldom zero but that government solutions almost never worked out as neatly in practice as they did on the blackboard. In fact, he invented the phrase “blackboard economics” to refer to solutions that could never work in practice, only on a theoretical blackboard, because real-world governments never had either the information or the incentive necessary to apply the solution.

How China Became Capitalist

Ronald Coase devoted his last years to learning how and why China evolved from the world’s last major Communist dictatorship to the world’s emerging economic superpower. In How China Became Capitalist, he and his research partner Ning Wang delivered an account that contravened the popular explanation for China’s rise. China’s ruling central-government oligarchy has received credit for the country’s emergence as the growth leader among developing nations. Since the Communist Party retained political control throughout the growth spurt, it must have been responsible for it – so the usual explanation runs. Coase and Wang showed that government’s presence as the agency in charge of political life does not automatically entitle it to credit for economic growth.

The death of Mao Zedong in 1976 rescued China from decades of terror, famine and dictatorship. Mao’s designated successor, Hua Guofeng, outlined a program of state-run investment in heavy industry called the “Leap Outward.” This resembled the various Five-Year Plans of Soviet ruler Joseph Stalin in general approach and in overall lack of results. Hua’s successors, Deng Xiaoping and Chen Yun, abandoned the Leap Outward in favor of an emphasis on agriculture and light industry. Although Deng was the political figurehead who garnered the lion’s share of the publicity, Chen was the guiding spirit behind this second centralized plan designed to spur Chinese economic growth. It placed less emphasis on production of capital goods and more on consumer goods. Chen allowed state-controlled agricultural prices to rise in an effort to stimulate production on China’s collective farms, which had failed disastrously under Mao, resulting in approximately 40 million deaths from famine. He also allowed state-run enterprises a measure of autonomy and private profit, heretofore unthinkable under Communism.

Although these central-government measures were the ostensible spur to China’s remarkable growth run, Coase and Wang assign actual responsibility to the resurgence of China’s private economy. Private farms had always existed as part of the nation’s 30,000 villages and towns, much as neighboring Russians continued to nurture their tiny private plots of land alongside the Soviet collective farms. And, just as was the case in Russia, the smaller private farms began to outdo the larger collectives in productivity and output. Mao fanatically insisted on agricultural collectivization, but his death freed private farmers to resume their former lives.  By late 1980, the Beijing government was forced to officially acknowledge the private farms. In 1982, China formally abandoned its costly experiment with collective agriculture and de-collectivized its farms. Official grain prices were allowed to rise and grain imports were permitted.

Agriculture wasn’t the only industry that flourished at the local level. Small businesses in rural China labored under official handicaps; their access to raw materials was not protected and they had no officially sanctioned distribution channels for their output. But they bought inputs on the black market at high prices and groomed their own sales representatives to scour the nation drumming up business for their goods. These local Davids outperformed the state-run Goliaths; they were the real vanguard of Chinese economic growth.

Growth was slower in China’s major cities. Mao had sent some 20 million youths to the countryside to escape unemployment in the cities. After his death, many of these youths returned to the cities – only to find themselves out of work again. They demonstrated and formed opposition political movements, sometimes paralyzing daily life with their protests. This forced Beijing to permit self-employment for the first time – another Communist sacred cow sacrificed to political expediency. This, in turn, created an urban class of Chinese entrepreneurs. This led to yet another government reaction in the form of Special Economic Zones, somewhat reminiscent of the U.S. “enterprise zones” of the 1980s. Economic freedom and lower taxes were allowed to exist in a controlled environment; Chinese officials hoped to encourage controlled doses of capitalist prosperity in order to save socialism.

Gradually, the limited reforms of the Special Economic Zones became more general. Increased freedom of market prices was introduced in 1992, taxes were lowered in 1994 and privatization of failing state-run enterprises began in the mid-1990s. For the first time, China began to replace local and regional markets with a single national market for many goods.

Coase and Wang identify perhaps the most important but least-known capitalist element to arise in China as the improved pursuit of knowledge. They accurately attribute the recognition of knowledge’s role in economics to Nobel Laureate F.A. Hayek and note the increasing popularity of books and articles by Hayek, his mentor Ludwig von Mises and classical forebears such as Adam Smith. The economics profession has pigeonholed the subject of knowledge under the heading of “technical coefficients of production,” but the authors know that this is only the beginning of the knowledge needed to make a free-market economy work. The knowledge of market institutions and the dispersed, specialized “knowledge of particular time and place” that can only be collated and shared by free markets are even more important than technical knowledge about how to produce goods and services.

The upshot of China’s private resurgence has been to make the country a “laboratory for capitalist experimentation,” according to Coase and Wang. That laboratory has brewed a recipe for unparalleled economic growth since the 1990s, leading to China’s admittance into the World Trade Organization in 2001. The final piece of the puzzle, the authors predict, is a true free market for ideas – the one thing that Western economies have that China lacks. When this falls into place, China will become the America of the 21st century.

Thus did Ronald Coase add a landmark study in economic history to his select resume of classic works.

Quality vs. Quantity

Never in the history of economics has one economist achieved so much productivity with so little scholarly output. Ronald Coase economized on the scarce resources of time and human effort (ours) by devoting the longest career of any great economist to specializing in quality, not quantity, of work.

DRI-280 for week of 7-7-13: Unintended Consequences and Distortions of Government Action

An Access Advertising EconBrief:

Unintended Consequences and Distortions of Government Action

The most important cultural evolution of 20th-century America was the emergence of government as the problem-solver of first resort. One of the most oft-uttered phrases of broadcast news reports was “this market is not subject to government regulation” – as if this automatically bred misfortune. The identification of a problem called for a government program tailored to its solution. Our sensitivity, compassion and nobility were measured by the dollar expenditure allocated to these problems, rather than by their actual solution.

This trend has increasingly frustrated economists, who associate government action with unintended consequences and distortions of markets. Since voluntary exchange in markets is mutually beneficial, distortions of the market and consequences other than mutual benefit are bad things. Economists have had a hard time getting their arguments across to the public.

One reason for this failure is the public unwillingness to associate a cause with an effect other than that intended. We live our lives striving to achieve our ends. When we fail, we don’t just shrug and forget it – we demand to know why. Government seems like a tool made to order for our purposes; it wields the power and command over resources that we lack as individuals. Our education has taught us that democracy gives us the right and even the duty to order government around. So why can’t we get it to work the way we want it to?

The short answer to that is that we know what we want but we don’t know how government or markets work, so we don’t know how to get what we want. In order to appreciate this, we need to understand the nature of government’s failures and of the market’s successes. To that end, here are various examples of unintended consequences and distortions.

Excise Taxation

One of the simplest cases of unintended, distortive consequences is excise taxation. An excise tax is a tax on a good, either on its production or its consumption. Although few people realize it, the meaningful economic effects of the tax are the same regardless of whether the tax is collected from the buyer of the good or from the seller. In practice, excise taxes are usually collected from sellers.

Consider a real-world example with purely hypothetical numbers used for expository purposes. Automotive gasoline is subject to excise taxation levied at the pump; i.e., collected from sellers but explicitly incorporated into the price consumers pay. Assume that the price of gas net of tax is $2.00 per gallon and the combination of local, state and federal excise taxes adds up to $1.00 per gallon. That means that the consumer pays $3.00 per gallon but the retail gasoline seller pockets only $2.00 per gallon.

Consider, for computational ease, a price decrease of $.30 per gallon. How likely is the gasoline seller to take this action? Well, he would be more likely to take it if his total revenue were larger after the price decrease than before. But with the excise tax in force, a big roadblock exists to price reductions by the seller. The $.30 price decrease subtracts 15% from the price (the net revenue per unit) the seller receives, but only 10% from the price per unit that the buyer pays. And it is the reduction in price per unit paid by the buyer that will induce purchase of more units, which is the only reason the seller would have to want to reduce price in the first place. The fact that net revenue per unit falls by a larger percentage than price per unit paid by consumers is a big disincentive to lowering price.

Consider the kind of case that is most favorable to price reductions, in which demand is price-elastic. That is, the percentage increase in consumer purchases exceeds the percentage decrease in price (net revenue). Assume that purchases were originally 10,000 gallons per week and increased to 11,200 (an increase of 12%, which exceeds the percentage decrease in price). The original total revenue was 10,000 x $2.00 = $20,000. Now total revenue is 11,200 x $1.70 = $19,040, nearly $1,000 less. Since the total costs of producing 1,200 more units of output are greater than before, the gasoline seller will not want to lower price if he correctly anticipates this result. Despite the fact that consumer demand responds favorably (in a price-elastic manner) to the price decrease, the seller won’t initiate it.

Without the excise taxation, consumers and seller would face the same price. If demand were price-elastic, the seller would expect to increase total revenue by lowering price and selling more units than before. If the increase in total revenue were more than enough to cover the additional costs of producing the added output, the seller would lower price.

Excise taxation can reduce the incentive for sellers to lower price when it is imposed in specific form – a fixed amount per unit of output. When the excise tax is levied ad valorem, as a percentage of value rather than a fixed amount per unit, that disincentive is no longer present. In fact, the specific tax is the more popular form of excise taxation.
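
A short calculation makes the contrast concrete. The sketch below is illustrative only and simply re-runs the hypothetical numbers above; the 50% ad valorem rate is chosen because it reproduces the same $1.00 tax at the original $2.00 net price.

    # Hypothetical numbers from the example above: $2.00 net price, $1.00-per-gallon
    # specific tax, 10,000 gallons/week, and a 12% rise in quantity when the pump
    # price falls 10% (from $3.00 to $2.70).

    def net_revenue(net_price_per_gal, gallons):
        return net_price_per_gal * gallons

    q0, q1 = 10_000, 11_200        # gallons per week, before and after the price cut
    gross0, gross1 = 3.00, 2.70    # pump price paid by consumers

    # Case 1: specific tax of $1.00 per gallon. The whole $0.30 cut comes out of the
    # seller's net price, which falls 15% while the pump price falls only 10%.
    specific_tax = 1.00
    before = net_revenue(gross0 - specific_tax, q0)   # 2.00 * 10,000 = 20,000
    after = net_revenue(gross1 - specific_tax, q1)    # 1.70 * 11,200 = 19,040
    print("specific tax:", round(before), "->", round(after))        # revenue falls

    # Case 2: ad valorem tax of 50% of the net price (same $3.00 pump price at first).
    # Now the net price falls by the same 10% as the pump price.
    rate = 0.50
    before_av = net_revenue(gross0 / (1 + rate), q0)  # 2.00 * 10,000 = 20,000
    after_av = net_revenue(gross1 / (1 + rate), q1)   # 1.80 * 11,200 = 20,160
    print("ad valorem tax:", round(before_av), "->", round(after_av))  # revenue rises

With the specific tax, the elastic demand response is not enough to offset the disproportionate fall in the seller’s net price; with the ad valorem tax, it is.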

The irony of this unintended consequence is felt most keenly in times of rising gasoline prices. Demagogues hold sway with talk about price conspiracies and monopoly power exerted by “big corporations” and oil companies. Talk-show callers expound at length on the disparity between price increases and price decreases and the relative reluctance of sellers to lower price. Yet the straightforward logic of excise taxation is never broached. The callers are right, but for entirely the wrong reason. The culprit is not monopoly or conspiracy. It is excise taxation.

This unintended consequence was apparently first noticed by Richard Caves of Harvard University in his 1964 text American Industry: Structure, Conduct, Performance.

ObamaCare: The 29’ers and 49’ers

The recent decision to delay implementation of the Affordable Care Act – more familiarly known as ObamaCare – has interrupted two of the most profound and remarkable unintended consequences in American legislative history. The centerpiece of ObamaCare is its health mandates: the requirement that individuals who lack health insurance acquire it or pay a sizable fine and the requirement that businesses of significant size provide health plans for their employees or, once again, pay fines.

It is the business mandate, scheduled for implementation in 2014, which was delayed in a recent online announcement by the Obama administration. The provisions of the law had already produced dramatic effects on employment in American business. It seems likely that these effects, along with the logistical difficulties in implementing the plan, were behind the decision to delay the law’s application to businesses.

The law requires businesses with 50 or more “full-time equivalent” employees to make a health-care plan available to employees. A “full-time-equivalent” employee is defined as any combination of employees whose combined hours add up to those of one full-time employee. Full-time employment is defined as 30 hours per week, in contrast to the longtime definition of 40 hours. Presumably this change was made in order to broaden the scope of the law, but it is clearly having the opposite effect – a locus classicus of unintended consequences at work.

Because the “measurement period” during which each firm’s number of full-time equivalent number of employees is calculated began in January 2013, firms reacted to the provisions of ObamaCare at the start of this year, even though the business mandate itself was not scheduled to begin until 2014. No sooner did the New Year unfold than observers noticed changes in fast-food industry employment. The changes took two basic forms.

First, firms – that is, individual fast-food franchises – cut off their number of full-time employees at no more than 49. Thus, they became known as “49’ers.” This practice was obviously intended to stop the firm short of the 50-employee minimum threshold for application of the health-insurance requirement under ObamaCare. At first thought, this may seem trivial if highly arbitrary. Further thought alters that snap judgment. Even more than foods, fast-food firms sell service. This service is highly labor-intensive. An arbitrary limitation on full-time employment is a serious matter, since it means that any slack must be taken up by part-timers.

And that is part two of the one-two punch delivered to employment by ObamaCare. Those same fast-food firms – McDonald’s, Burger King, Wendy’s, et al – began limiting their part-time work force to 20 hours per week, thereby holding them below the 30-hour threshold as well. But, since many of those employees were previously working 30 hours or more, the firms began sharing employees – encouraging their employees to work 20-hour shifts for rival firms and logging shift workers from those firms on their own books. Of course, two 20-hour shifts still comprise (more than) a full-time-equivalent worker, but as long as total worker hours do not exceed the 1,500-hour weekly total of 50 workers at 30 hours, the firm will still escape the health-insurance requirement. Thus were born the “29’ers” – those firms who held part-time workers below the 30-hour threshold for full-time-equivalent employment.

Are the requirements of ObamaCare really that onerous? Politicians and left-wing commentators commonly act as if health-insurance were the least that any self-respecting employer could provide any employee, on a par with providing a roof to keep out the rain and heat to ward off freezing cold in winter. Fast-food entrepreneurs are striving to avoid penalties associated with hiring that 50th full-time-equivalent employee. The penalty for failing to provide health insurance is $2,000 per employee beginning with 30. That is, the hiring of the 50th employee means incurring a penalty on the previous 20 employees, a total penalty of $40,000. Hiring (say) 60 employees would raise the penalty to $60,000.
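
The cliff in those numbers is easy to reproduce. The sketch below is a simplified illustration of the rules as described here – a 30-hour full-time definition, a 50-full-time-equivalent threshold, and $2,000 per employee beyond the first 30 – and not a legal calculator; the actual statute and regulations contain further wrinkles.

    # Simplified sketch of the employer-mandate arithmetic as described above.

    def full_time_equivalents(weekly_hours):
        """Full-time equivalents implied by each worker's weekly hours (30 hours = 1 FTE)."""
        return sum(min(h, 30) for h in weekly_hours) / 30

    def mandate_penalty(fte_count, offers_coverage=False):
        """Annual penalty: $2,000 per FTE beyond the first 30, once the firm hits 50 FTEs."""
        if offers_coverage or fte_count < 50:
            return 0
        return 2_000 * max(0, int(fte_count) - 30)

    forty_niner = [30] * 49 + [20]   # 49 full-timers plus one part-timer held under 30 hours
    fifty = [30] * 50                # hiring the 50th full-timer
    sixty = [30] * 60
    for staff in (forty_niner, fifty, sixty):
        fte = full_time_equivalents(staff)
        print(round(fte, 2), mandate_penalty(fte))
    # 49.67 -> $0; 50.0 -> $40,000; 60.0 -> $60,000, matching the figures in the text.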

A 2011 study by the Hudson Institute found that the average fast-food franchise makes a profit of $50,000-100,000 per year. Thus, ObamaCare penalties could eat up most or all of a year’s profit. The study’s authors foresaw an annual cost to the industry of $6.4 billion from implementation of ObamaCare. 3.2 million jobs were estimated to be “at risk.” All this comes at a time when employment is painfully slow to recover from the Great Recession of 2007-2009 and the exodus of workers from the labor force continues apace. Indeed, it is just this exodus that keeps the official unemployment rate from reaching double-digit heights reminiscent of the Great Depression of the 1930s.

Our first distortion was an excise tax. The ObamaCare mandates can also be viewed as a tax. The business mandates are equivalent to a tax on employment, since their implementation and penalties are geared to the level of employment. The Hudson study calculated that, assuming a hypothetical wage of $12 per hour, employing the 50th person would cost the firm $52 per hour, of which only $12 was paid out in wages to the employee. The difference between what the firm must pay out and what the employee receives is called “the wedge” by economists, since it reduces the incentive to hire and to work. The wider the wedge, the greater the disincentive. Presumably, this is yet another unintended consequence at work.

ObamaCare is a law that was advertised as the solution to a burgeoning, decades-old problem that threatened to engulf the federal budget. Instead, the law itself now threatens to bring first the government, then the private economy to a standstill. In time, ObamaCare may come to lead the league in unintended consequences – a competition in government ineptitude that can truly be called a battle of the all-stars.

The Food Stamp Program: An Excise Subsidy

In contrast to the first two examples of distortion, the food-stamp program is not a tax but rather its opposite number – a subsidy. Because food stamps are a subsidy given in-kind instead of in cash – a subsidy on a good in contrast to a tax on a good – they are an excise subsidy.

Food stamps began in the 1940s as a supplement to agricultural price supports. Their primary purpose was to dispose of agricultural surpluses, which were already becoming a costly nuisance to the federal government. Their value to the poor was seen as a coincidental, though convenient, byproduct. Although farmers and the poor have long since exchanged places in the hierarchy of beneficiaries, vestiges of the program’s lineage remain in its residence in the Agriculture Department and the source of its annual appropriations in the farm bill. (Roughly 80% of this year’s farm bill was given over to monies for the food-stamp program, which now reaches some 47.3 million Americans, or 15% of the population.)

The fact that agricultural programs help people other than their supposed beneficiaries is not really an example of unintended consequences, since we have known from the outset that price supports, acreage quotas, target prices and other government measures harm the general public and help large-scale farmers much more than small family farmers. The unintended consequences of the food-stamp program are vast, but they are unrelated to its tenuous link to agriculture.

Taxes take real income away from taxpayers, but – at least in principle – they fund projects that ostensibly provide compensating benefits. The unambiguous harm caused by taxes results from the distortions they create, which cause deadweight losses, or pure waste of time, effort and resources. Subsidies, the opposite number of taxes, create similar distortions. The food stamp program illustrates these distortions vividly.

For many years, program recipients received stamp-like vouchers entitling them to acquire specified categories of foodstuffs from participating sellers (mostly groceries). The recipient exchanged the stamps for food at a rate of exchange governed by the stamps’ face value. Certain foods and beverages, notably beverage alcohol, could not be purchased using food stamps.

Any economist could have predicted the outcome of this arrangement. A thriving black market arose in which food stamps could be sold at a discount to face value in exchange for cash. The amount of the discount represented the market price paid by the recipient and received by the broker; it fluctuated with market conditions but often hovered in the vicinity of 50% (!). This transaction allowed recipients to directly purchase proscribed goods and/or non-food items using cash. The black-market broker exchanged the food stamps (quasi-) legally at face value in a grocery in exchange for food or illegally at a small discount with a grocery in exchange for cash. (In recent years, bureaucrats have sought to kill off the black market by substituting a debit card for the stamp/vouchers.)

The size of the discount represents the magnitude of the economic distortion created by giving poor people a subsidy in excise form rather than in cash. Remarkably, large numbers of poor people preferred cash to the in-kind subsidy so markedly that $.50 in cash was preferred to $1.00 worth of (government-approved) foodstuffs. This suggests that a program of cash subsidies could have made recipients better off while spending roughly half as much money on subsidies and dispensing with most of the large administrative costs of the actual food-stamp program.

Inefficiency has been the focus of various studies of the overall welfare system. Their common conclusion has been that the U.S. could lift every man, woman and child above the arbitrary poverty line for a fraction of our actual expenditures on welfare programs simply by giving cash to recipients and forgoing all other forms of administrative endeavor.

Of course, the presumption behind all this analysis is that the purpose of welfare programs like food stamps is to improve the well-being of recipients. In reality, the history of the food-stamp program and everyday experience suggests otherwise – that the true purpose of welfare programs is to improve the well-being of donors (i.e., taxpayers) by alleviating guilt they would otherwise feel.

The legitimate objections to cash subsidy welfare programs focus on the harm done to work incentives and the danger of dependency. The welfare reform crafted by the Republican Congress elected in 1994 and reluctantly signed by President Clinton in 1996 was guided by this attitude, hence its emphasis on work requirements. But the opposition to cash subsidies from the general public, all too familiar to working economists from the classroom and the speaking platform, arises from other sources. The most vocal opposition to cash subsidies is expressed by those who claim that recipients will use cash to buy drugs, alcohol and other “undesirable” consumption goods – undesirable as gauged by the speaker, not by the welfare recipient. The clear implication is that the food-stamp format is a necessary prophylactic against this undesirable consumption behavior by welfare recipients, the corollary implication being that taxpayers have the moral right to control the behavior of welfare recipients.

Taxpayers may or may not be morally justified in asserting the right to control the behavior of welfare recipients whose consumption is taxpayer-subsidized. But this insistence on control is surely quixotic if the purpose of the program is to improve the welfare of recipients. And, after all, isn’t that what a “welfare” program is – by definition? The word “welfare” cannot very well refer to the welfare of taxpayers, for then the program would be a totalitarian program of forced consumption run for the primary benefit of taxpayers and the secondary benefit of welfare recipients.

The clinching point against the excise subsidy format of the food-stamp program is that it does not prevent recipients from increasing their purchases of drugs, alcohol or other forbidden substances. A recipient of (say) $500 in monthly food stamps who spends $1,000 per month on (approved) foodstuffs can simply use the food stamps to displace $500 in cash spending on food, leaving them with $500 more in cash to spend on drugs or booze. In practice, a recipient of a subsidy will normally prefer to increase consumption of all normal goods (that is, goods whose consumption he or she increases when real income rises). Any excise subsidy, including food stamps, will therefore be inferior to a cash subsidy for this reason. In terms of economic logic, an excise subsidy starts out with three strikes against it as a means of improving a recipient’s welfare.
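
The displacement logic is simple enough to put in a few lines. The sketch below is illustrative only; the $1,000 food budget and $500 stamp grant come from the paragraph above, while the $1,500 monthly cash income is a hypothetical figure added for the example.

    # Illustrative sketch: an in-kind grant no larger than the recipient's existing
    # food budget displaces cash spending dollar for dollar, freeing that cash for
    # anything else -- exactly what an equal cash grant would do.

    def budget_after_stamps(cash_income, food_budget, stamp_grant):
        """Return (food spending, cash left for everything else)."""
        food_from_stamps = min(stamp_grant, food_budget)
        food_from_cash = food_budget - food_from_stamps
        return food_budget, cash_income - food_from_cash

    # Without stamps: $1,000 on food leaves $500 of a $1,500 income for other goods.
    # With a $500 stamp grant: food spending is unchanged, but $1,000 is now free for
    # other goods (including the "forbidden" ones) -- a $500 gain, just as with cash.
    print(budget_after_stamps(cash_income=1_500, food_budget=1_000, stamp_grant=500))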

So why do multitudes of people insist on wasting vast sums of money in order to make people worse off, when they could save that money by making them better off? The paradox is magnified by the fact that most of these money-wasters are politically conservative people who abhor government waste. The only explanation that suggests itself readily is that by wasting money conspicuously, these people relieve themselves of guilt. They are no longer troubled by images of poor, hungry downtrodden souls. They need feel no responsibility for enabling misbehavior through their tax payments. They have lifted a heavy burden from their minds.

The Rule, Not the Exception

The common themes developed by these examples are the distortion of otherwise-efficient markets by government action and the unintended consequences that flow from those distortions. By its very nature, government acts through compulsion and coercion rather than mutually beneficial voluntary exchange. Consequently, distortions are the rule rather than the exception; examples such as those above are not aberrations but the normal case.

DRI-307 for week of 6-30-13: Paving the Road to Hell: A Short History of Bailouts

An Access Advertising EconBrief:

Paving the Road to Hell: A Short History of Bailouts

A versatile sports anecdote of obscure lineage pits a combative baseball manager against a first-base umpire. The manager conducts a prolonged, high-decibel – but utterly unavailing – protest against the umpire’s decision to call a runner out at first base. Upon returning to the dugout, the manager encounters a quizzical coach.

“Why waste all that energy?” the coach inquires. “You know he’s not going to change his call.”

“I’m not arguing about that call,” the manager replies vehemently. “I’m arguing for the next one.”

The story may be apocryphal, but its point is sound. Umpires are known to be influenced by their own nagging suspicions that they have blown a call, so much so that umpire schools teach pupils not to compensate for mistakes in subsequent decisions. The immediate aftermath of the play is the manager’s only window of opportunity to influence the umpire – about future plays, not the one argued about.

From the beginning, economists have argued against “bailouts” – the use of government (i.e., taxpayer) funds to rescue failing business firms. Although the arguments supporting bailouts pretend to be economic, the true motivation is invariably political. This suggests that economists’ opposition is futile. Yet the opposition continues, just as the bailouts themselves do.

Like the proverbial manager, economists are arguing for the next one. They know that the bailout process has a cumulative momentum. A bailout is not an independent, isolated event that stands or falls purely on its own merits. Each bailout establishes the precedent for the succeeding one. Moreover, each new generation requires a fresh introduction to the illogic of the bailout, as well as to the history of the process. Economists direct their arguments against past bailouts, but their true targets are the bailouts to come – the ones whose fate they can influence.

That is why a history of bailouts and the ghastly reasoning that inspired them is far from pointless. It is our only prophylactic against the flood of bailouts to come.

Penn Central (1970)

The Penn Central Railroad was created by the 1968 merger of two venerable American railroad companies: the New York Central Railroad and the Pennsylvania Railroad. A year later, the New York, New Haven and Hartford Railroad joined the party to form Penn Central Transportation Company. These railroads all shared common features, particularly their location in the northeast United States. The Northeast corridor was the most population-dense region of the country. Each of these roads specialized in short hauls of people and freight, in contrast to the mostly long-haul traffic carried by railroads elsewhere in the U.S.

The problem was that, while shorter routes made geographic sense, many competing means of transport had evolved by the late 1960s. Barges carried bulky, low value-to-weight commodities like gravel and sand. Trucks carried retail goods and foodstuffs, including refrigerated perishables. Buses and automobiles carried passenger traffic. This left specialized raw materials like coal and commuting passengers for the railroads.

The roads wanted and needed to lower freight and passenger rates to compete with rival industries. Alas, they were hamstrung by the Interstate Commerce Commission, whose regulations forbade rate changes without regulatory hearings. Ironically, the very regulatory body ostensibly created (in 1887) to prevent railroads from utilizing monopoly power now prevented them from behaving competitively. The erosion of railroad customer base to these competing transportation modes left the railroads with scads of excess capacity and no way to utilize it. This was a recipe for bankruptcy.

The theory behind Penn Central was that merger would allow the single entity to better utilize capacity by selling off or abandoning track and rolling stock. Unfortunately, it succeeded only in building a bigger, bulkier and less efficient mousetrap. Penn Central declared bankruptcy in 1970 and was eventually declared unsuitable for reorganization. The federal government took over its intercity passenger business through Amtrak and eventually folded its freight operations into the government-created Conrail.

Railroads in general and Conrail in particular were saved, not by government bailouts, but by the deregulation of railroads in the Staggers Act of 1980. This gave railroad companies the freedom and flexibility to act quickly and decisively to serve customers by cutting prices and dumping unprofitable lines of business. Unfortunately, the federal government continued to operate a nationalized passenger-rail transport system called Amtrak. Today, a completely deregulated railroad industry would undoubtedly serve the part of the U.S. where passenger-rail service remains viable – the Northeast. Instead, Amtrak continues to serve markets where the demand for passenger service is feeble and the costs of service are astronomically high.

Why in the world was Penn Central bailed out to produce Conrail? What crying necessity demanded it? What calamity would have accompanied an orderly bankruptcy and the demise and liquidation of the company? “None” and “none,” respectively, are the answers to the last two questions. Many upper-middle-class and upper-class Northeasterners traveled the commuter routes served by the roads, and the railroad unions wielded political clout in inverse proportion to the value created by their members for the railroads. (The term “featherbedding” was coined to describe the work practices of railroad-union employees.) The Republican (!) administration in power was powerless to resist the political temptation to “save jobs” and preserve a highly visible service catering to an influential elite.

Today, everybody has forgotten about Conrail. Nobody remembers the first great federal bailout of private business. Of course, it did not end in a huge fiasco. And today the railroad sector is a tremendous transportation success story. But the reason for success is the subsequent deregulation of railroads, and the remaining legacy of the bailout – Amtrak – continues to hemorrhage red ink and suck involuntary transfusions from taxpayers.

Great oafs from little acorns grow.

Lockheed (1971)

The longtime producer of jets had come to derive the bulk of its business from government contracts. This made it a creature of government, even though it technically operated in competition with other airplane manufacturers. The bankruptcy of British firm Rolls Royce – famous for its luxury automobile but also a proficient builder of engines – threatened the completion of Lockheed’s TriStar L-1011 wide-body jetliner, for which Rolls Royce was building the engines. Collapse of the TriStar program would have put Lockheed under. To tide the company over, the U.S. Congress issued some $250 million in loan guarantees to Lockheed, over the protests of free-marketers.

This time, the rationale was somewhat different. Lockheed’s defense status allowed the company to wrap itself in the cloak of national security, a nuisance that probably destroys more GDP annually than any other economic pest. This required considerable chutzpah on Lockheed’s part, considering that America could still boast firms like Boeing and McDonnell Douglas even if Lockheed had padlocked its doors. But that didn’t stop the company from pointing to the dread specter of its 60,000 jobs that would be lost – gone forever! – if Congress did not ride to its rescue.

Sure enough, the TriStar made it to market. Fittingly, it was deep-sixed by competitors like Boeing’s 747 and McDonnell Douglas’s DC-10. When the TriStar ceased production in 1983, Lockheed abandoned commercial jet production (so much for our national security) and later merged with Martin Marietta to form Lockheed Martin.

Note, once again, that even though Lockheed did not default on its loans, the bailout was still exposed as a fraud. The pretext of protecting national security proved to be nonsense; the object of the loans proved to be superfluous; and as for the jobs – well, the loan guarantees ended up propping up a product that deserved to fail, yet did not immunize the company against an eventual loss of jobs, which went unnoticed anyway.

Chrysler (1980)

In 1979 Chrysler, the smallest of America’s “Big 3” automakers, turned in a then-gigantic $1 billion loss in net income and teetered on the edge of bankruptcy. Dynamic CEO Lee Iacocca heeded the newly evolving American tradition that, when the going gets tough, the tough go begging – to Washington for a bailout. Probably recalling Lockheed’s loan guarantees, Iacocca secured $1.5 billion in guarantees for Chrysler. In addition to the (by now) old chestnut that he was “protecting jobs,” old-hog Iacocca was able to root up a new chestnut – that America’s automotive vanguard had to be protected against the encroachment of foreign competition from Japan. This was a conveniently flexible argument. If there had been no competition from Japan, Iacocca would then have argued that Chrysler needed to be saved to make sure that Americans didn’t run out of cars. Now he could argue that Chrysler needed to be saved to make sure that America “won” the “car war” with Japan. The fact that “winning” by subsidizing an inferior product was the same thing as losing didn’t seem to occur to most people – certainly not to Congress – and Iacocca was hailed as a genius for his lobbying efforts.

President Carter signed the bailout legislation in January 1980. His administration saved face by requiring Chrysler to raise its own financing for the loans. Iacocca could later brag that the company returned to profitability by 1983 and repaid its loans. No harm, no foul, right? What a triumph for bailouts! At least, that was the general impression conveyed. Yet American consumers paid for Chrysler’s comeback in the form of quotas and restrictions levied on imports of Japanese automobiles. That price was very steep.

The biggest price, though, came later. The Chrysler bailout set the stage for the later bailouts of General Motors and – once again – Chrysler itself. The precedent set in 1980 made it easy – indeed, virtually inevitable – to bail out the ailing automakers when their time came. Not only was it that much harder to reject the same bogus “jobs” rhetoric Iacocca had advanced, but the mere fact that Chrysler had done it and gotten away with it set a psychological minimum standard for treatment of ailing corporate giants. Previous bailees had been either quasi-utilities like Penn Central or quasi-government firms like Lockheed. This was a straightforward case of corporate welfare. It was a line jumped, a Rubicon crossed, a rule broken. Things would never be the same again.

Long Term Capital Management (1998)

In the early 1990s, a group of investors that included economists who would go on to win the Nobel Prize formed the hedge fund Long Term Capital Management (LTCM). The fund was designed to incorporate asset pricing and portfolio management principles embodied in tools like the Capital Asset Pricing Model developed by William Sharpe. The most striking notions employed by LTCM were those involving portfolio risk.

LTCM designed highly risky portfolios that included long-term fixed-income instruments and currencies. It was precisely the long terms that produced the high risk, since the interest-rate risk of fixed-income securities increases with term to maturity. Currency risk likewise increases with the holding period. The high risk produced very high rates of return. So far, there was nothing remarkable about LTCM’s activities or methods. But the firm was able to offset most of the high risk through a hedge position, whose value was specifically designed to move inversely to the value of the risky portfolios. Alternatively put, the hedge was supposed to move directly with interest rates. The general idea behind this hedge investment was simple in concept but hard to achieve in practice: it should rise in value when LTCM’s risky portfolios were falling in value, thus offsetting the otherwise-high risk LTCM was running. This made it possible for LTCM to earn spectacularly high profits in good times and break even (more or less) in bad times.

The hedge investment was a short position in U.S. Treasury securities. When worldwide interest rates rose, LTCM’s risky portfolio value would plummet. But LTCM’s managers knew that investors would bid down the prices of Treasury securities and, as a result, their effective yields (interest rates) would rise; only this higher yield would make Treasury bonds equally satisfactory to investors when world interest rates had risen. The fall in Treasury-bond prices would generate big profits on LTCM’s short position to offset the losses on its risky portfolios. And so it went for several years – until 1998.
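
The offsetting logic can be seen in a back-of-the-envelope calculation. The sketch below is purely illustrative – the maturities, yields and equal face values are invented and bear no relation to LTCM’s actual book – but it shows why a short Treasury position pays off when rates rise across the board, and why a flight to safety, in which Treasury yields fall while risky yields rise, hurts both legs at once:

# Back-of-the-envelope sketch of the hedge logic described above.
# All maturities, yields and position sizes are invented for illustration;
# they are not LTCM's actual positions.

def zero_coupon_price(yield_rate, years):
    """Price of a $100 face-value zero-coupon bond."""
    return 100.0 / (1.0 + yield_rate) ** years

def portfolio_pnl(risky_y0, risky_y1, treasury_y0, treasury_y1, years=20):
    """P&L on a long risky bond plus a short Treasury of equal face value."""
    long_pnl = zero_coupon_price(risky_y1, years) - zero_coupon_price(risky_y0, years)
    # A short position gains when the Treasury's price falls.
    short_pnl = -(zero_coupon_price(treasury_y1, years) - zero_coupon_price(treasury_y0, years))
    return round(long_pnl, 2), round(short_pnl, 2)

# Normal times: world rates rise and Treasury yields rise in sympathy.
# The long risky bond loses, but the short Treasury gains -- roughly a wash.
print(portfolio_pnl(0.08, 0.09, 0.05, 0.06))   # (-3.61, 6.51)

# 1998-style flight to safety: risky yields rise while Treasury yields fall.
# Both legs lose at once -- the scenario that sank the fund.
print(portfolio_pnl(0.08, 0.09, 0.05, 0.04))   # (-3.61, -7.95)

In practice the two legs would be sized so the gains and losses offset more closely; the point here is only the sign of each leg.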

That was the year of the Russian government default. Suddenly the world’s investors abandoned risky investments altogether. They embarked on a “flight to safety.” At that point, the U.S. government’s Treasury bond was still the prototypical riskless asset. So investors bought Treasury bonds, driving up their price and driving down their effective yields (interest rates).

Whoops! Now LTCM was losing boatloads of money on both sides of its trades. In no time it was going down for the third time, financially speaking. And its owners, having kept their eyes open during the preceding decades of bailouts, did what any red-blooded American financier or CEO would do. They ran to the federal government for a bailout.

LTCM was not a railroad. It was not a defense contractor. It was not a car company. It wasn’t even a bank. It was just an investment company whose investment strategy had blown up in its face. Now its investors and owners were suddenly staring insolvency in the face. Except, in this case, they decided to stare Fed Chairman Alan Greenspan in the face instead. And Greenspan blinked. Acting through the Federal Reserve Bank of New York, the Fed passed the plate around Wall Street and collected some $3.6 billion in funds with which to salvage the firm’s investments while delivering the firm into the hands of its rescuers.

And what was the rationale for this unprecedented act? Basically, to prevent turmoil in the markets. LTCM was so big that the Fed was afraid that its failure would scare investors to death. Note that there was now no pretense of saving jobs, defending national security, preserving the sanctity of motherhood or the recipe for Mom’s apple pie.

LTCM was a hedge fund whose investors were people of considerable means. The whole idea behind the tight regulation of the investment business is to make sure that investors and investments are suitable for each other and that risks are borne by willing individuals who can afford to lose the money. And now… the Fed said we couldn’t afford to let them lose the money! Why? Because the knowledge that one firm had failed would drive this group of rational investors to collectively commit irrational acts. The Fed intervened massively in capital markets to reverse the outcomes of legitimate trades because its subjective reading of collective psychology told it that this was the thing to do. And it arbitrarily commandeered private resources to do so, without statutory or judicial warrant.

The Bailouts of the Great Recession and the Financial Crisis (2007-2010)

For most people, the steps taken by the federal government during the Great Recession and the Financial Crisis of 2008 seemed unique and precipitous. But our history of bailouts shows that their roots extend back for decades.

The nationalizations of General Motors, Fannie Mae and Freddie Mac were preceded by the nationalization of Conrail. The bailout of GM came after the bailout of Chrysler. The bailout of a financial firm like LTCM paved the way for future bailouts of AIG, Goldman Sachs and others. The numerous bank and near-bank bailouts in the Financial Crisis were the grandchildren of the Continental Illinois bailout.

The ostensible legacy of the Great Depression was that particular markets needed tight regulation. Financial markets needed it to ensure that all parties had the information needed to make rational voluntary exchange possible. Banking needed it because the principle of fractional-reserve banking allowed banks in the aggregate to exert an undue influence over the supply of money through credit creation. In good times, this could facilitate inflation and the creation of bubbles. In bad times, it could cause disaster when bank runs and bank failures exerted a downwardly cascading effect on the money supply.
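
The credit-creation mechanism referred to here is the familiar textbook money multiplier. The following is a minimal sketch, assuming a hypothetical 10% reserve ratio and a $1,000 initial deposit; the figures are illustrative only:

# Minimal sketch of textbook fractional-reserve credit creation.
# The 10% reserve ratio and $1,000 deposit are illustrative assumptions.

def total_deposits(initial_deposit, reserve_ratio, rounds=200):
    """Sum of deposits created as each bank lends out its excess reserves
    and the loan proceeds are redeposited at the next bank."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1.0 - reserve_ratio)
    return round(total, 2)

print(total_deposits(1000, 0.10))   # approaches 1000 / 0.10 = 10,000

Run the process in reverse – deposits withdrawn in a bank run – and the same multiplier shrinks the money supply; that is the downwardly cascading effect mentioned above.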

Our history of bailouts, however, indicates that bailouts began for firms in specialized sectors like railroads, defense and banking, but gradually spread to mundane sectors like manufacturing and investment. It comes as no surprise, therefore, that today programs like TARP offer bailouts to a substantial sector of the American population. Homeowners make up a majority of U.S. households, and it is not hard to envision a day when a mortgage will come with a guarantee against foreclosure.

The ultimate guarantors of a bailout are taxpayers. The government can obtain funds to bail out a business firm from only three sources: tax receipts, borrowing and money creation. Taxes reduce the real income of taxpayers. Borrowing requires the repayment of principal and interest; thus, it reduces taxpayer real incomes unless it funds the creation of a productive asset. Money creation reduces the value of taxpayers’ money holdings, which is tantamount to a tax.

When everybody bails out everybody else, the process is self-defeating. It becomes impossible and purposeless to sort out gainers and losers. Only the brokers – the politicians and bureaucrats – are net gainers. Since the expenditure of resources necessary to produce the bailouts far exceeds the gains enjoyed by these groups, economists frown on the whole process. Far better to allow markets to allocate resources and pass judgment on how well or how badly business firms use them to satisfy consumers. Of course, anybody who wants to voluntarily contribute their own resources to compensate losers in the competitive process is welcome to do so. When people act voluntarily, we can presume they gain more than they lose from their actions.

But when government meddling takes the form of bailouts, there is no such presumption.

DRI-259 for week of 2-10-13: Coverage of President Obama’s SOTU Minimum-Wage Proposal

An Access Advertising EconBrief:

Coverage of President Obama’s SOTU Minimum-Wage Proposal

As advertised, President Barack Obama’s 2013 State of the Union (SOTU) address outlined the economic agenda for his second term in office. Among its planks was a proposal to raise the minimum wage to $9.00 an hour from its current $7.25. The reputation of economics as “the dismal science” is vindicated by the coverage of this proposal in the news media, which is indeed nothing short of dismal.

The Wall Street Journal‘s Coverage of Obama’s Minimum-Wage Proposal

The Wall Street Journal is the leading financial publication in America – indeed, in the world. On page four of its morning-after coverage of Obama’s SOTU message, the Journal provided a five-column box headlined “Bid on Minimum Wage Revives Issue That Has Divided Economists,” written by reporters Damian Paletta and Jon Hilsenrath. The pair predicts that “President Obama’s proposal… is likely to rekindle debates over whether the measure helps or hurts low-income workers.” And the debates will be between “White House officials” who “say the move …is aimed at addressing poverty and helping low-income Americans” and “Republicans and business groups, which have traditionally said raising the minimum wage discourages companies from hiring low-skilled workers.”

The article rehearses the specifics of the President’s proposal, which would raise the minimum wage in stages to $9.00 per hour by 2015, after which it would be indexed to the rate of inflation. It reminds readers that Mr. Obama originally proposed to raise the minimum to $9.50 by 2011. It reports confident projections by “Administration officials” that at least 15 million Americans would directly benefit from the increase by 2015, not counting those now earning above the minimum whose wages would be driven higher by the measure.

Three paragraphs in the middle of the piece gloss over the views of “economists and politicians [who] are divided over the issue.” These consist of two economists, one proponent and one opponent, and one central banker. David Neumark of the University of California, Irvine unequivocally maintains that “the effects of the minimum wage are declines in employment for the very least skilled workers,” while “a lot of the benefits …leak out to families way above the poverty line.” Alan Krueger of Princeton University, currently Chairman of the President’s Council of Economic Advisors, “found positive effects” from the minimum wage on fast-food workers in New Jersey. The authors do not remark on the apparent coincidence that Neumark and Krueger studied precisely the same group of workers in reaching their conclusions. Janet Yellen, Vice-Chairman of the Federal Reserve, was quoted as refusing to endorse the minimum-wage increase on grounds of its irrelevance to current conditions, while admitting its adverse effects would probably be small.

The authors close out by summarizing the political strategies of the White House and Republicans in proposing and opposing the measure. The authors toss in a few numbers of general economic significance – surging stock market, recent increase in hiring, persistently slow economic growth, nagging high unemployment, decline in median real income since 2000. They cite the most recent minimum-wage increase in 2009 and note that 19 states already have statewide minima in excess of the current federal minimum.

The reader will notice that the Journal‘s headline refers to a revival of a debate between economists. Yet the article cites only two economists, and the debate consists of approximately five lines out of five columns of prose – just over 4% of the article’s 120 lines. A reader who isn’t already thoroughly familiar with the issue will learn virtually nothing at all about why the minimum wage is bad or – for that matter – why its proponents think it is good. The closest things to analysis are cryptic references to “discourages companies from hiring,” “declines in employment” and – most mysterious of all – “benefits” that “leak out to families way above the poverty line.”

Between 90% and 95% of the article is devoted to politics. And that coverage is utterly superficial. The world’s leading financial publication has devoted substantial space to a Presidential proposal of economic significance, yet its readers would never suspect that the subject is one of the most highly researched, well-considered and settled questions in all of economics. The minimum wage has been a staple application in microeconomics textbooks for over a half-century. Along with policy measures like free international trade and rent control, the minimum wage has generated the most lopsided responses in opinion surveys taken of economists. In percentages ranging from 75% to 90%, economists have resoundingly affirmed their belief that minimum wages promote higher unemployment among low-skilled workers – among their many undesirable effects.

Yet today Wall Street Journal reporters imply that it’s a 50-50 proposition. Or rather, they imply that economists are evenly divided on the merits of the measure. The article mentions a revival of a debate without explaining the terms of the debate or its previous resolution. Indeed, even the arguments of the proponent economist – the Chairman of the CEA, no less! – go unmentioned.

Something more than mere journalistic incompetence is on display here. The WSJ reporters are showing contempt for the discipline of economics. The only significant thing about economists, they imply, is that they are “divided.” The economics itself is hardly worth our attention.

Economists have only themselves to blame for their low repute. But readers deserve a truthful and complete understanding of the minimum wage.

The Minimum Wage As Seen by Economics – and Economists

The minimum wage is a species of the economic genus known as the “minimum price.” Other species include agricultural price supports, imposed for the ostensible purpose of increasing the incomes of family farmers. The idea behind all minimum prices is to make the price of something higher than it would otherwise be. The alternative embodied in “otherwise” is to allow the price of human labor to find its own level in a free labor market. That level is the point at which the amount of labor workers wish to supply is equal to the amount that businesses want to hire. Economists call the wage that equalizes the quantity of labor supplied with the quantity demanded the equilibrium wage.

In practice, the minimum wage is always legislatively pegged at a higher level than the current equilibrium wage. Otherwise there would be no point to it. And in practice, the minimum wage applies only to low-skilled labor. This is because wages reflect the value of labor’s productivity and low-skilled labor is the least productive kind. What is the effect of a higher-than-equilibrium wage for low-skilled labor?

Holding the price of anything above its equilibrium level produces a surplus of that thing. A surplus of human labor is called “unemployment” in layman’s terms. Thus, a minimum wage produces unemployment where there would otherwise be no (persistent) unemployment.
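
A stylized supply-and-demand example makes the arithmetic concrete. The linear schedules below are invented for illustration – no real labor market is this tidy – but they show how a wage floor set above equilibrium leaves more labor offered than hired:

# Stylized linear labor market, with invented coefficients, illustrating
# the surplus (unemployment) created by a wage floor above equilibrium.

def quantity_demanded(wage):
    """Labor firms want to hire; falls as the wage rises."""
    return max(0.0, 100 - 10 * wage)

def quantity_supplied(wage):
    """Labor workers offer; rises as the wage rises."""
    return max(0.0, 10 * wage - 20)

equilibrium_wage = 6.0   # quantity_demanded(6) == quantity_supplied(6) == 40
minimum_wage = 8.0       # legislated floor above the equilibrium wage

surplus = quantity_supplied(minimum_wage) - quantity_demanded(minimum_wage)
print(surplus)           # 60 - 20 = 40 units of labor offered but not hired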

If this sounds pretty categorical, cut-and-dried and matter-of-fact, that’s because it is. Supply and demand are economics. More precisely, they are what we today label “microeconomics.” Since there is no macroeconomic theory left standing that is worthy of the name, that leaves us with supply and demand and very little else.

The federal minimum wage was first introduced in the Fair Labor Standards Act of 1938. It first began attracting attention from academic economists after World War II when George Stigler wrote a celebrated article outlining its basic effects in 1946. The first edition of Stigler’s legendary textbook on microeconomic price theory appeared in 1949. This may have been the debut of the minimum wage as a textbook application – a way of illustrating what happens when the principles of supply and demand are flouted by government.

Stigler may have been the first future Nobel Laureate to oppose the minimum wage, but he headed up what became a long procession with few absentees. Since then, it has been rare to find an intermediate (junior-senior level) micro textbook that doesn’t feature an analysis of the minimum wage and the effects of the labor surplus it causes. This practice has crossed political lines. Liberal economists write textbooks, too, and they have been pitiless in their view of the minimum wage – at least until recently, anyway. Alan Blinder, former CEA member under President Bill Clinton, was embarrassed by the revelation that his political support for a minimum wage conflicted with his textbook’s unsparing criticism of it.

The Effects of a Minimum Wage

The effects of a minimum wage are those of a minimum price generally, translated into the specific context of the market for low-skilled labor. The overarching effect, whose implications far exceed the obvious, is the surplus of labor created. The resulting unemployment, in and of itself, confounds the expectations of minimum-wage proponents. Their stated purpose is to increase the monetary incomes, hence real incomes, hence well-being of low-income workers. But you can only benefit from a higher wage if you have a job from which to earn that wage. The low-skilled workers whose jobs are lost because of the minimum wage are harmed by it, not helped. Moreover, they have nowhere to go except the unemployment line. Ordinarily, people who lose their job look for another job. But low-skilled workers are already scraping the bottom of the barrel – when that residue is suddenly denied them, they’re out of luck.

But what about the people who don’t lose their job? They benefit, don’t they? True enough, at least as a first approximation. The minimum wage should be viewed as transferring income from some low-skilled workers to other low-skilled workers. It is tempting to say it transfers income from some poor workers to other poor workers, but this is not always true. Sometimes it transfers income from poor people to rich people or, more precisely, to the offspring of rich people. That outcome will be explained below.

Wait a minute – what about employers? To hear the left wing talk, the problems of the poor are mostly the fault of greedy bosses who refuse to pay the poor what they’re worth. (The ever-popular formulation is that employers owe their workers a “living wage.”) At least the minimum wage sticks it to those greedy bastards, doesn’t it?

The answer is: Yes and no, but mostly no. In the short run, owners of businesses share the cost of the minimum wage with the workers who are driven out of work. Business owners share that cost because higher wages mean higher costs, and higher costs reduce profits and drive marginal businesses out of business. The reduced supply of goods then drives prices to consumers higher.

In the long run, though, the higher price gradually attained by the market in response to the higher costs will restore the business rate of return (i.e., profit) to its “normal” level. So, the remaining businesses in the market do not suffer long-run harm from the minimum wage. In the long run, the burden of the minimum wage is borne by consumers of the products produced using low-skilled labor and by low-skilled workers who remain out of work or whose prospects for work and productivity are permanently reduced by their sojourn into unemployment.

In other words, the minimum wage does not exact class revenge against evil, greedy businessmen. It harms poor, low-skilled workers and consumers – who are mostly ordinary people. Is it any wonder, then, that even liberal economists have traditionally refused to endorse the minimum wage as a means of transferring income to the poor?

Wait – There’s More. A LOT More

If John Paul Jones were an economist, he might interject at this point that we have not yet begun our analytical fight against popular misconceptions about the minimum wage. The artificial surplus created by the minimum wage has even more insidious implications.

In a competitive market, the tendency toward equality between quantity supplied and quantity demanded of the good or service being provided exerts a restraining influence on the actions of buyers and sellers. If you show up to rent an apartment or house and the landlord doesn’t like the color of your skin, he might decide not to rent to you. But when there are exactly as many would-be renters as there are apartments and houses on the market, this will cost him money. The economic history of the world – and the history of discrimination in the American South and South Africa, among other places – very strongly confirms that competition and economic incentives are the best means of overcoming racial discrimination.

But when the market is in surplus, the picture changes dramatically. Now the landlord can afford to discriminate among would-be renters by turning down one he doesn’t particularly like, because he knows that others are out there waiting for his unit.

The logic applies to buyers as well. Under competitive conditions, employers cannot afford to discriminate against workers for any reasons not related to productivity. They know only too well how hard it is to find good help when the market is tight. But when unemployment is high, an employer with a “taste for discrimination” can afford to indulge it. (It is idle to talk about whether that behavior is against the law or not. In practice, a charge of discrimination can rarely be proved; legal cases are won by imposing heavy costs on defendants until they give up and settle by admitting guilt whether true or not. And the cases are prosecuted in the first place for political or economic reasons, not to achieve justice.)

One of life’s supreme ironies is that the very people who cry the loudest for an end to racial discrimination and lament the injustice of our racist society are the same people who lobby in favor of the minimum wage. By creating a surplus of low-skilled labor and reducing the effective cost of discrimination to zero, the minimum wage surely makes it easy for employers to exercise whatever racist urges they might feel.

…And More

The minimum wage is anti-black in its effects not only because it promotes discrimination, but also because it places blacks at an objective disadvantage. One thing employers look for is experience. On average, blacks are younger than other ethnic groups and have less experience. Thus, they are less able to cope with the labor surplus created by the minimum wage.

Both minimum and maximum prices bring the issue of product quality to bear on the decisions of businesses. When businesses can’t raise prices due to maximum prices, or price controls, they try to reduce product quality instead. Similarly, when businesses suddenly face an increase in the minimum wage, they look to offset its effects by retaining only their highest-quality low-skilled workers. That is, they retain the best-dressed, most punctual, technologically adept workers rather than the shabbier, less reliable, socially and technically awkward workers. All too often, the workers let go are the ones who need the job the most – namely, low-income blacks picking up the necessary skills to succeed in the working world. Their places are taken by the sons and daughters of the well-to-do, whose cultural and economic advantages gave them an occupational leg up when they entered the labor market. This is what David Neumark meant by benefits leaking out to the well-to-do.

Black (illicit) markets are an inevitable by-product of minimum and maximum prices. In this case, the existence of a labor surplus means that there are people willing to work at a lower wage than the prevailing wage. By offering sub-standard working conditions and employment “off the books,” some employers can induce workers into work that they wouldn’t accept in a competitive labor market. This is still another ill effect of the minimum wage and another way in which low-skilled workers bear its brunt.

The late Milton Friedman was outraged by popular efforts to depict the minimum wage as the salvation of the poor and underprivileged. He called it the most anti-black law on the books.

The Card-Krueger Study

In 1993, two economists made a bid to overturn the decades-old economic consensus against the minimum wage. David Card and Alan Krueger conducted a phone survey of fast-food establishments in New Jersey and Pennsylvania. They chose these two adjoining states because New Jersey raised its minimum wage during the study period, while Pennsylvania’s law remained unchanged. Their study found that New Jersey’s employment of low-skilled labor increased by 13% relative to Pennsylvania’s. They ascribed this to the fact that the higher wage had certain desirable effects on the labor force.
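
The comparison Card and Krueger relied on is what economists call a difference-in-differences: the change in employment in the state that raised its wage is compared with the change in the neighboring state that did not. The sketch below uses placeholder figures, not the study’s data, simply to show the structure of the calculation – and why the answer hinges entirely on which employment numbers are fed into it:

# Structure of the two-state, before/after comparison (difference-in-differences).
# The employment figures are placeholders, NOT Card and Krueger's data.

nj_before, nj_after = 20.0, 21.0   # avg. employment per store, New Jersey
pa_before, pa_after = 23.0, 21.5   # avg. employment per store, Pennsylvania

nj_change = nj_after - nj_before   # change where the minimum wage rose
pa_change = pa_after - pa_before   # change in the "control" state

diff_in_diff = nj_change - pa_change
print(diff_in_diff)                # positive: NJ gained relative to PA

Swap in different underlying numbers – say, payroll records instead of phone-survey responses – and the sign of that final figure can flip, which is exactly what was at issue in the replication described below.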

Both the general public and the economics profession went gaga over this single result. Despite decades of studies and negative results by dozens of distinguished economists, this one study was said to have revolutionized thinking on the minimum wage. In reality, its effects were more political than economic.

An attempt to replicate the study, published by the National Bureau of Economic Research, used payroll records from the businesses surveyed by Card and Krueger rather than relying on the phone surveys. Apparent anomalies had been found in both the New Jersey and Pennsylvania data gathered by the phone surveys, so the payroll records were substituted as a check on the results. Sure enough, recalculation using the payroll records reversed the results of the study – New Jersey employment was now found to have declined by about 4% relative to Pennsylvania’s.

Alan Krueger parlayed the popularity of his study into the Chairmanship of the President’s Council of Economic Advisors. David Card has written a book about the subject of the minimum wage. But there is little reason to accept the results of their original study at face value.

The Political Purpose of the Minimum Wage

Economists have long known that the true purposes of the minimum wage are political rather than economic. Low-skilled labor is a substitute for unionized labor and higher-skilled labor. By making low-skilled labor less attractive to employers, the minimum wage makes union labor more attractive. That is why unions have supported a minimum wage since long before it was actually adopted, both in the U.S. and in places like South Africa.

Unions are one of the strongest and most numerous constituent groups of the Obama administration. That is why the President has now opted to advance this proposal to increase the minimum wage. Yet The Wall Street Journal‘s piece – which purported to describe the economics and politics of the measure – did not breathe a word of this.

Note: The first draft of this post erred by saying that the minimum wage was introduced in the Wagner Act of 1935, rather than the Fair Labor Standards Act of 1938.