DRI-183 for week of 3-1-15: George Orwell, Call Your Office – The FCC Curtails Internet Freedom In Order to Save It

An Access Advertising EconBrief:

George Orwell, Call Your Office – The FCC Curtails Internet Freedom In Order to Save It

February 26, 2015 is a date that will live in regulatory infamy. That assertion is subject to revision by the courts, as is nearly everything undertaken these days by the Obama administration. As this is written, the Supreme Court is hearing yet another challenge to “ObamaCare,” the Affordable Care Act. President Obama’s initiative to achieve a single-payer system of national health care in the U.S. is rife with Orwellian irony, since it cannot help but make health care unaffordable for everybody by further removing the consumer of health care from any exposure to its price. Similarly, the latest administration initiative is the February 26 approval by the Federal Communications Commission (FCC) of the so-called “Net Neutrality” doctrine in regulatory form. Commission Chairman Tom Wheeler’s summary of his regulatory proposal – consisting of 332 pages that were withheld from the public – has been widely characterized as a proposal to “regulate the Internet like a public utility.”

This episode is shot through with a totalitarian irony that only George Orwell could fully savor. The FCC is ostensibly an independent regulatory body, free of political control. In fact, Chairman Wheeler long resisted the “net neutrality” doctrine (hereinafter, shortened to “NN” for convenience). The FCC’s decision was a response to pressure from President Obama, which made a mockery of the agency’s independence. The alleged necessity for NN arises from the “local monopoly” over “high-speed” broadband exerted by Internet service providers (again, hereinafter abbreviated as “ISPs”) – but a “public utility” was, and is, by definition a regulated monopoly. Since the alleged local monopoly held by ISPs is itself fictitious, the FCC is in fact proposing to replace competition with monopoly.

To be sure, the particulars of Chairman Wheeler’s proposal are still open to conjecture. And the enterprise is wildly illogical on its face. The idea of “regulating the Internet like a public utility” treats those two things as equivalent entities. A public utility is a business firm. But the Internet is not a single business firm; indeed, it is not a single entity at all in the concrete sense. In the business sense, “the Internet” is shorthand for an infinite number of existing and potential business firms serving the world’s consumers in countless ways. The clause “regulate the Internet like a public utility” is quite literally meaningless – laughably indefinite, overweening in its hubris, frightening in its totalitarian implications.

It falls to an economist, former FCC Chief Economist Thomas Hazlett of Clemson University, to sculpt this philosophy into its practical form. He defines NN as “a set of rules… regulating the business model of your local ISP.” In short, it is a political proposal that uses economic language to prettify and conceal its real intentions. NN websites are emblazoned with rhetoric about “protecting the Open Internet” – but the Internet has thrived on openness for over 20 years under the benign neglect of government regulators. This proposal would end that era.

There is no way on God’s green earth to equate a regulated Internet with an open Internet; the very word “regulated” is the antithesis of “open.” NN proponents paint scary scenarios about ISPs “blocking or interfering with traffic on the Internet,” but their language is always conditional and hypothetical. They are posing scenarios that might happen in the future, not ones that threaten us today. Why? Because competition and innovation protected consumers up to now and continue to do so. NN will make its proponents’ scary predictions more likely, not less, because it will restrict competition. That is what regulation does in general; that is what public-utility regulation specifically does. For over a century, public-utility regulation has installed a single firm as a regulated monopoly in a particular market and has forcefully suppressed all attempts to compete with that firm.

Of course, that is not what President Obama, Chairman Wheeler and NN proponents want us to envision when we hear the words “regulate the Internet like a public utility.” They want us to envision a lovely, healthy flock of sheep grazing peacefully in a beautiful meadow, supervised by a benevolent, powerful Shepherd with a herd of well-trained, affectionate shepherd dogs at his command. Soothing music is piped down from heaven and love and tranquility reign. At the far edges of the meadow, there is a forest. Hungry wolves dwell within, eyeing the sheep covetously. But they dare not approach, for they fear the power of the Shepherd and his dogs.

In other words, the Obama administration is trying to manipulate the emotions of the electorate by creating an imaginary vision of public-utility regulation. The reality of public-utility regulation was, and is, entirely different.

The Natural-Monopoly Theory of Public-Utility Regulation

The history of public-utility regulation is almost, but not quite, coextensive with that of government regulation of business in the United States. Regulation began at the state level with Munn v. Illinois, which paved the way for state regulation of the grain business in the 1870s. The Interstate Commerce Commission’s inaugural voyage with railroad regulation followed in the late 1880s. With the commercial introduction of electric lighting and the telephone came business firms tailored to those ends. And in their wake came the theory of natural monopoly.

Both electric power and telephones came to be known as “natural monopoly” industries; that is, industries in which both economic efficiency and commercial viability dictated that a single firm serve the entire market. This was the outgrowth of economies of scale in production, owing to decreasing long-run average cost of production. This decidedly unusual state of affairs is a technological anomaly. Engineers recognize it in conjunction with the “two-thirds rule.” There are certain cases in which total cost increases as the two-thirds power of output, which implies that average cost decreases steadily as output rises. (The throughput of pipes and cables and the capacity of cargo holds are examples.) In turn, this implies that the firm that grows the fastest will undersell all others while still covering all its costs. The further implication is that consumers will receive the most output at the lowest price if one monopoly firm serves everybody – if, and only if, the firm’s price can be constrained equal to its long-run average cost at the rate of output necessary to meet market demand. An unconstrained monopoly would produce less than this optimal rate of output and charge a higher price, in order to maximize its profit. But the theoretical outcome under regulated monopoly equates price with long-run average cost, which provides the utility with a rate of return equal to what it could get in the best alternative use for its financial capital, given its business risk.
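The arithmetic of the two-thirds rule can be sketched in a few lines of Python. The scale constant and output levels below are arbitrary assumptions chosen for round numbers, not data from any actual utility:

```python
# Illustrative sketch of the "two-thirds rule": if total cost rises as the
# two-thirds power of output, average cost falls steadily as output grows.
# The scale constant k and the output levels are hypothetical.

k = 90.0  # assumed scale constant

def total_cost(q):
    return k * q ** (2 / 3)

def average_cost(q):
    return total_cost(q) / q  # algebraically, k * q ** (-1/3)

# Average cost declines monotonically: an eightfold increase in output
# cuts per-unit cost in half, so the fastest-growing firm undersells all
# rivals while still covering its costs.
for q in (1, 8, 27, 64):
    print(q, round(average_cost(q), 2))
```

With these assumed numbers, average cost falls from 90 at one unit of output to 45 at eight units and 30 at twenty-seven, which is the declining-cost pattern the natural-monopoly argument rests on.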

In the U.S. and Canada, this regulated outcome is sought via periodic rate hearings staged by a public-utility regulatory commission (PUC for short). The utility is privately owned by shareholders. In Europe, utilities are generally not privately owned. Their prices are (in principle) set equal to long-run marginal cost, which lies below average cost and thus produces a loss in accounting terms. Taxpayers subsidize this loss – these subsidies are the alternative to the profits earned by regulated public-utility firms in the U.S. and Canada.
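The European marginal-cost deficit follows directly from the declining-cost arithmetic. A brief sketch under the same hypothetical two-thirds cost curve (assumed constants, not actual utility data) shows why pricing at marginal cost cannot cover total cost:

```python
# With total cost C(q) = k * q**(2/3), marginal cost is (2/3) * k * q**(-1/3),
# which is exactly two-thirds of average cost k * q**(-1/3). A price set at
# marginal cost therefore recovers only two-thirds of total cost; the
# remaining third is the loss that taxpayers subsidize.

k = 90.0  # same hypothetical scale constant as before

def average_cost(q):
    return k * q ** (-1 / 3)

def marginal_cost(q):
    return (2 / 3) * k * q ** (-1 / 3)

q = 27.0
price = marginal_cost(q)      # European-style marginal-cost price
revenue = price * q
cost = k * q ** (2 / 3)
loss = cost - revenue         # always one-third of total cost here
```

At 27 units, total cost is 810 and marginal-cost revenue is 540, leaving a 270 deficit; under any declining-cost technology, marginal cost sits below average cost, so the loss is structural rather than a sign of mismanagement.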

These regulatory schemes represent the epitome of what the Nobel laureate Ronald Coase called “blackboard economics” – economists micro-managing reality as if they possessed all the information and control over reality that they do when drawing diagrams on a classroom blackboard. In practice, things did not work out as neatly as the foregoing summary would lead us to believe. Not even remotely close, in fact.

The Myriad Slips Twixt Theoretical Cup and Regulatory Lip

What went wrong with this theoretical set-up, seemingly so pat when viewed in a textbook or on a classroom blackboard? Just about everything, to some degree or other. Today, we assume that the institution of regulated monopoly came in response to market monopolies achieved and abuses perpetrated by electric and telephone companies. What mostly happened, though, was different. There were multiple providers of electricity and telephone service in the early days. In exchange for submitting to rate-of-return regulation, though, one firm was extended a grant of monopoly and other firms were excluded. Only in very rare cases did competition exist for local electric service – and curiously, this rate competition actually produced lower electric rates than did public-utility regulation.

This result was not the anomaly it seemed, since the supposed economies of scale were present only in the distribution of electric power, not in power generation. So the cost superiority of a single firm producing for the whole market turned out to be not the slam-dunk that was advertised. That was just one of many cracks in the façade of public-utility regulation. Over the course of the 20th century, the evolution of public-utility regulation in telecommunications proved to be paradigmatic for the failures and inherent shortcomings of the form.

Throughout the country, the Bell System was handed a monopoly on the provision of local service. Its local service companies – the analogues to today’s ISPs – gradually acquired reputations as the heaviest political hitters in state-government politics. The high rates paid by consumers bought lobbyists and legislators by the gross, and they obediently safeguarded the monopoly franchise and kept the public-utility commissions (PUCs) staffed with tame members. That money also paid the bill for a steady diet of publicity designed to mislead the public about the essence of public-utility regulation.

We were assured by the press that the PUC was a vigilant watchdog whose noble motives kept the greedy utility executives from turning the rate screws on a helpless public. At each rate hearing, self-styled consumer advocacy groups paraded their compassion for consumers by demanding low rates for the poor and high rates on business – as if it were really possible for some non-human entity called “business” to pay rates in the true sense, any more than they could pay taxes. PUCs made a show of ostentatiously requiring the utility to enumerate its costs and pretending to laboriously calculate “just and reasonable” rates – as if a Commission possessed juridical powers denied to the world’s greatest philosophers and moralists.

Behind the scenes, after the press had filed their poker-faced stories on the latest hearings, increasingly jaded and cynical reporters, editors and industry consultants rolled their eyes and snorted at the absurdity of it all. Utilities quickly learned that they wouldn’t be allowed to earn big “profits,” because this would be cosmetically bad for the PUC, the consumer advocates, the politicians and just about everybody involved in this process. So executives, middle-level managers and employees figured out that they had to make their money differently than they would if working for an ordinary business in the private sector. Instead of working efficiently and productively and striving to maximize profit, they would strive to maximize cost instead. Why? Because they could make money from higher costs in the form of higher salaries, higher wages, larger staffs and bigger budgets. What about the shareholders, who would ordinarily be shafted by this sort of behavior? Shareholders couldn’t lose because the PUC was committed to giving them a rate of return sufficient to attract financial capital to the industry. (And the shareholders couldn’t gain from extra diligence and work effort put forward by the company because of the limitation on profits.) That is, the Commission would simply ratchet up rates commensurate with any increase in costs – accompanied by whatever throat-clearing, phony displays of concern for the poor and cost-shifting shell games were necessary to make the numbers work. In the final analysis, the name of the game was inefficiency and consumers always paid for it – because there was nobody else who could pay.

So much for the vaunted institution of public-utility regulation in the public interest. Over fifty years ago, a famous left-wing economist named Gardiner Means proposed subjecting every corporation in the U.S. to rate-of-return regulation by the federal government. This held the record for most preposterous policy program advanced by a mainstream commentator – until Thomas Wheeler announced that henceforth the Internet would be regulated as if it were a public utility. Now every American will get a taste of life as Ivan Denisovich, consigned to the Gulag Archipelago of regulatory bureaucracy.

Of particular significance to us in today’s climate is the effect of this regime on innovation. Outside of totalitarian economies such as the Soviet Union and Communist China, public-utility regulation is the most stultifying climate for innovation ever devised by man. The idea behind innovation is to find ways to produce more goods using the same amount of inputs or (equivalently) the same amount of goods using fewer inputs. Doing this lowers costs – which increases profits. But why go to the trouble if you can’t enjoy the increase in profits? Of course, utilities were willing to spend money on research, provided they could get it in the rate base and earn a rate of return on the investment. But they had no incentive to actually implement any cost-saving innovations. The Bell System was legendary for its unwillingness to lower its costs; the economic literature is replete with jaw-dropping examples of local Bell companies lagging years and even decades behind the private sector in technology adoption – even spurning advances developed in Bell’s own research labs!

Any reader who suspects this writer of exaggeration is invited to peruse the literature of industrial organization and regulation. One nagging question should be dealt with forthwith. If the demerits of public-utility regulation were well recognized by insiders, how were they so well concealed from the public? The answer is not mysterious. All of those insiders had a vested interest in not blowing the whistle on the process because they were making money from ongoing public-utility regulation. Commission employees, consultants, expert witnesses, public-interest lawyers and consumer advocates all testified at rate hearings or helped prepare testimony or research it. They either worked full-time or traveled the country as contractors earning lucrative hourly pay. If any one of them had been crazy enough to launch an exposé of the public-utility scam, he or she would have been blackballed from the business while accomplishing nothing – the institutional inertia in favor of the system was so enormous that it would have taken mass revolt to effect change. So they just shrugged, took the money and grew more cynical by the year.

In retrospect, it seems miraculous that anything did change. In the 1960s, local Bell companies were undercharging for local service to consumers and compensating by soaking business and long-distance customers with high prices. The high long-distance rates eventually attracted the interest of would-be competitors. One government regulator grew so fed up with the inefficiency of the Bell system that he granted the competitive petition of a small company called MCI, which sought to compete only in the area of long-distance telecommunications. MCI was soon joined by other firms. The door to competition had been cracked slightly ajar.

In the 1980s, it was kicked wide open. A federal antitrust lawsuit against AT&T led to the breakup of the firm. At the time, the public was dubious about the idea that competition was possible in telecommunications. The 1990s soon showed that regulators were the only ones standing between the American public and a revolution unlike anything we had seen in a century. After vainly trying to protect the local Bells against competition, regulators finally succumbed to the inevitable – or rather, they were overrun by the competitive hordes. When the public got used to cell phones and the Internet, they ditched good old Ma Bell and land-line phones.

This, then, is public-utility regulation. The only reason we have smart phones and mobile Internet access today is that public-utility regulation in telecommunications was overrun by competition despite regulatory opposition in the 1990s. But public-utility regulation is the wonderful fate to which Barack Obama, Thomas Wheeler and the FCC propose to consign the Internet. What is the justification for their verdict?

The Case for Net Neutrality – Debunked

As we have seen, public-utility regulation was based on a premise that certain industries were “natural monopolies.” But nobody has suggested that the Internet is a natural monopoly – which makes sense, since it isn’t an industry. Nobody has suggested that all or even some of the industries that utilize the Internet are natural monopolies – which makes sense, since they aren’t. So why in God’s name should we subject them to public-utility regulation – especially since public-utility regulation didn’t even work well in the industries for which it was ideally suited? We shouldn’t.

The phrase “net neutrality” is designed to achieve an emotional effect through alliteration and a carefully calculated play on the word “neutral.” In this case, the word is intended to appeal to egalitarian sympathies among hearers. It’s only fair, we are urged to think, that ISPs, the “gatekeepers” of the Internet, be scrupulously fair or “neutral” in letting everybody in on the same terms. And, as with so many other issues in economics, the case for “fairness” becomes just so much sludge upon closer examination.

The use of the term “gatekeepers” suggests that God handed to Moses on Mount Sinai a stone tablet for the operation of the Internet, on which ISPs were assigned the role of “gatekeepers.” Even as hyperbolic metaphor, this bears no relation to reality. Today, cable companies are ISPs. But they began life as monopoly-killers. In the early 1960s, Americans chose among three monopoly VHF-TV networks, broadcast by ABC, NBC and CBS. Gradually, local UHF stations started to season the diet of content-starved viewers. When cable-TV came along, it was like manna from heaven to a public fed up with commercials and ravenous for sports and movies. But government regulators didn’t allow cable-TV to compete with VHF and UHF in the top 100 media markets of the U.S. for over two decades. As usual, regulators were zealously protecting government monopoly, restricting competition and harming consumers.

Eventually, cable companies succeeded in tunneling their way into most local markets. They did it by bribing local government literally and figuratively – the latter by splitting their profits via investment in pet political projects of local politicians as part of their contracts. In return, they were guaranteed various degrees of exclusivity. But this “monopoly” didn’t last because they eventually faced competition from telecommunication firms who wanted to get into their business and whose business the cable companies wanted to invade. And today, the old structural definitions of monopoly simply don’t apply to the interindustry forms of competition that prevail.

Take the Kansas City market. Originally, Time Warner had a monopoly franchise. But eventually a new cable company called Everest invaded the metro area across the state line in Johnson County, KS. Overland Park is contiguous with Kansas City, MO, and consumers were anxious to escape the toils of Time Warner. Eventually, Everest prevailed upon KC, MO to gain entry to the Missouri side. Now even the cable-TV market was competitive. Then Google selected Kansas City, KS as the venue for its new high-speed service. Soon KC, MO was included in that package, too – now there were three local ISPs! (Everest has morphed into two successive incarnations, one of which still serves the area.)

Although this is not typical, it does not exhaust the competitive alternatives. This is only the picture for fixed service. Americans are now turning to mobile forms of access to the Internet, such as smart phones. Smart watches are on the horizon. For mobile access, the ISP is a wireless company like AT&T, Verizon, Sprint or T-Mobile.

The NN websites stridently maintain that “most Americans have only a single ISP.” This is nonsense; a charitable interpretation would be that most of us have only a single cable-TV provider in our local market. But there is no necessary one-to-one correlation between “cable-TV provider” and “ISP.” Besides, the state of affairs today is ephemeral – different from what it was a few years ago and from what it will be a few years from now. It is only under public-utility regulation that technology gets stuck in one place, because under public-utility regulation there is no incentive to innovate.

More specifically, the FCC’s own data suggest that 80% of Americans have two or more ISPs offering 10Mbps downstream speeds. 96% have two or more ISPs offering 6Mbps downstream and 1.5Mbps upstream speeds. (Until quite recently, the FCC’s own criterion for “high-speed” Internet was 4Mbps or more.) This simply does not comport with any reasonable structural concept of monopoly.

The current flap over “blocking and interfering with traffic on the Internet” is the residue of disputes between Netflix and ISPs over charges for transmission of the former’s streaming services. In general, there is movement toward higher charges for data transmission than for voice transmission. But the huge volumes of traffic generated by Netflix cause congestion, and the free-market method for handling congestion is a higher price, or the functional equivalent. That is what economists have recommended for dealing with road congestion during rush hours and congested demand for air-conditioning and heating services at peak times of day and during peak seasons. Redirecting demand to the off-peak is not a monopoly response; it is an efficient market response. Competitive bar and restaurant owners do it with their pricing methods; competitive movie theater owners also do it (or used to).
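The peak-load pricing logic can be made concrete with a toy example. The capacity figure and the linear demand curve below are hypothetical numbers invented for illustration, not FCC or Netflix data:

```python
# Illustrative peak-load pricing sketch (hypothetical numbers): a uniform
# price leaves peak-period demand above network capacity (congestion),
# while a peak surcharge prices the peak down to exactly what the
# network can carry, redirecting marginal users to the off-peak.

CAPACITY = 100          # units of traffic the network can carry per period

def demand(price):
    """Simple assumed linear demand curve: quantity falls as price rises."""
    return max(0, 150 - 10 * price)

# Uniform pricing at 4: peak demand is 110 units against 100 of capacity,
# so traffic is congested and must be degraded or rationed somehow.
uniform_peak = demand(4)

# Peak-load pricing: raise the peak price until demand just fits capacity.
peak_price = (150 - CAPACITY) / 10  # solves 150 - 10p = CAPACITY, i.e. p = 5
priced_peak = demand(peak_price)    # exactly 100 units: congestion cleared
```

The point of the sketch is that the surcharge is doing the same job as rush-hour road tolls or matinee ticket discounts: it allocates scarce peak capacity by price rather than by queueing or degraded service, which is a competitive-market response, not a monopolistic one.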

Similar logic applies to other forms of hypothetically objectionable behavior by ISPs. The prioritization of traffic, creation of “fast” and “slow” lanes, blocking of content – these and other behaviors are neither inherently good nor bad. They are subject to the constraints of competition. If they are beneficial on net balance, they will be vindicated by the market. That is why we have markets. If a government had to vet every action by every business for moral worthiness in advance, it would paralyze life as we know it. The only sensible course is to allow free markets and competition to police the activities of competitors.

Just as there is nothing wrong or untoward with price differentials based on usage, there is nothing virtuous about government-enforced pricing equality. Forcing unequals to be treated equally is not meritorious. NN proponents insist that the public has to be “protected” from that kind of treatment. But this is exactly what PUCs did for decades when they subsidized residential consumers inefficiently by soaking business and long-distance users with higher rates. Back then, the regulatory mantra wasn’t “net neutrality,” it was “universal service.” Ironically, regulators never succeeded in achieving rates of household telephone subscription that exceeded the rate of household television service. Consumers actually needed – but didn’t get – protection from the public-utility monopoly imposed upon them. Today, consumers don’t need protection because there is no monopoly, nor is there any prospect of one absent regulatory intervention. The only remaining vestige of monopoly is the residue of the grants of local cable-TV monopoly given by municipal governments. Compensating for past mistakes by local government is no excuse for making a bigger mistake by granting monopoly power to FCC regulators.


The late, great economist Frank Knight once remarked that he had heard do-gooders utter the equivalent words to “I want power to do good” so many times for so long that he automatically filtered out the last three words, leaving only “I want power.” Federal-government regulators want the maximum amount of power with the minimum number of restrictions, leaving them the maximum amount of flexibility in the exercise of their power. To get that, they have learned to write excuses into their mandates. In the case of NN and Internet regulation, the operative excuse is “forbearance.”

Forbearance is the wave of the hand with which they will brush away all the objections raised in this essay. The word appears in the original Title II regulations. It means that regulators aren’t required to enforce the regulations if they don’t want to; they can “forbear.” “Hey, don’t worry – be happy. We won’t do the bad stuff, just the good stuff – you know, the ‘neutrality’ stuff, the ‘equality’ stuff.” Chairman Wheeler is encouraging NN proponents to fill the empty vessel of Internet regulation with their own individual wish-fulfillment fantasies of what they dream a “public utility” should be, not what the ugly historical reality tells us public-utility regulation actually was. For example, he has implied that forbearance will cut out things like rate-of-return regulation.

This merely evades the questions raised by the issue of “regulating the Internet like a public utility.” The very elements that Wheeler proposes to forbear constitute part and parcel of public-utility regulation as we have known it. If these are forborne, we have no basis for knowing what to expect from the concept of Internet public-utility regulation at all. If they are not, after all, forborne – then we are back to square one, with the utterly dismal prospect of replaying 20th-century public-utility regulation in all its cynical inefficiency.

Forbearance is a good idea, all right – so good that we should apply it to the whole concept of Internet regulation by the federal government. We should forbear completely.

DRI-172 for week of 1-18-15: Consumer Behavior, Risk and Government Regulation

An Access Advertising EconBrief: 

Consumer Behavior, Risk and Government Regulation

The Obama administration has drenched the U.S. economy in a torrent of regulation. It is a mixture of new rules formulated by new regulatory bodies (such as the Consumer Financial Protection Bureau), new rules levied by old, preexisting federal agencies (such as those slapped on bank lending by the Federal Reserve) and old rules newly imposed or enforced with new stringency (such as those emanating from the Department of Transportation and bedeviling the trucking industry).

Some people within the business community are pleased by these regulations, but it is fair to say that most are not. Yet the President and his subordinates have been unyielding in their insistence that the regulations are not merely desirable but necessary to the health, well-being, vitality and economic growth of America.

Are the people affected by the regulations bad? Do the regulations make them good, or merely constrain their bad behavior? What entitles the particular people designing and implementing the regulations to perform in this capacity – is it their superior motivations or their superior knowledge? That is, are they better people or merely smarter people than those they regulate? The answer can’t be democratic election, since regulators are not elected directly. We are certainly entitled to ask why a President could possibly suppose that some people can effectively regulate an economy of over 300 million people. If they are merely better people, how do we know that their regulatory machinations will succeed, however well-intentioned they are? If they are merely smarter people, how do we know their actions will be directed toward the common good (whatever in the world that might be) and not toward their own betterment, to the exclusion of all else? Apparently, the President must select regulators who are both better people and smarter people than their constituents. Yet government regulators are typically plucked from comparative anonymity rather than from the firmament of public visibility.

Of all American research organizations, the Cato Institute has the longest history of examining government regulation. Recent Cato publications help rebut the longstanding presumptions in favor of regulation.

The FDA Graciously Unchains the American Consumer

In “The Rise of the Empowered Consumer” (Regulation, Winter 2014-2015, pp.34-41, Cato Institute), author Lewis A. Grossman recounts the Food and Drug Administration’s (FDA) policy evolution beginning in the mid-1960s. He notes that “Jane, a [hypothetical] typical consumer in 1966… had relatively few choices” across a wide range of food-products like “milk, cheese, bread and jam” because FDA’s “identity standards allowed little variation.” In other words, the government determined what kinds of products producers were allowed to legally produce and sell to consumers. “Food labels contained barely any useful information. There were no ‘Nutrition Facts’ panels. The labeling of many foods did not even include a statement of ingredients. Nutrient content descriptors were rare; indeed, the FDA prohibited any reference whatever to cholesterol. Claims regarding foods’ usefulness in preventing disease were also virtually absent from labels; the FDA considered any such statement to render the product an unapproved – and thus illegal – drug.”

Younger readers will find the quoted passage startling; they have probably assumed that ingredient and nutrient-content labels were forced on sellers over their strenuous objections by noble and altruistic government regulators.

Similar constraints bound Jane should she have felt curiosity about vitamins, minerals or health supplements. The types and composition of such products were severely limited and their claims and advertising were even more severely limited by the FDA. Over-the-counter medications were equally limited – few in number and puny in their effectiveness against such infirmities as “seasonal allergies… acid indigestion…yeast infection[s] or severe diarrhea.” Her primary alternative for treatment was a doctor’s visit to obtain a prescription, which included directions for use but no further enlightening information about the therapeutic agent. Not only was there no Internet, but copies of the Physicians’ Desk Reference were unavailable in bookstores. Advertising of prescription medicines was strictly forbidden by the FDA outside of professional publications like the Journal of the American Medical Association.

Food substances and drugs required FDA approval. The approval process might as well have been conducted in Los Alamos under FBI guard as far as Jane was concerned. Even terminally ill patients were hardly ever allowed access to experimental drugs and treatments.

From today’s perspective, it appears that the position of consumers vis-à-vis the federal government in these markets was that of a citizen in a totalitarian state. The government controlled production and sale; it controlled the flow of information; it even controlled the life-and-death choices of the citizenry, albeit with benevolent intent. (But what dictatorship – even the most savage in history – has failed to reaffirm the benevolence of its intentions?) What led to this situation in a country often advertised as the freest on earth?

In the late 19th and early 20th centuries, various incidents of alleged consumer fraud and the publicity given them by muckraking authors prompted the Progressive administrations of Theodore Roosevelt, William Howard Taft and Woodrow Wilson to launch federal-government consumer regulation. The FDA was the flagship creation of this movement, the outcome of what Grossman called a “war against quackery.”

Students of regulation observe this common denominator. Behind every regulatory agency there is a regulatory movement; behind every movement there is an “origin story;” behind every story there are incidents of abuse. And upon investigation, these abuses invariably prove either false or wildly exaggerated. But even had they been meticulously documented, they would still not substantiate the claims made for them and not justify the regulatory actions taken in response.

Fraud was illegal throughout the 19th and 20th centuries, and earlier. Competitive markets punish producers who fail to satisfy consumers by putting the producers out of business. Limiting the choices of producers and consumers harms consumers without providing compensating benefits. The only justification for FDA regulation of the type practiced during the first half of the 20th century was that government regulators were omniscient, noble and efficient while consumers were dumbbells. That is putting it baldly, but it is hardly an overstatement. After all, consider the situation that exists today.

Plentiful varieties of products exist for consumers to pick from. They exist because consumers want them to exist, not because the FDA decreed their existence. Over-the-counter medications are plentiful and effective. The FDA tries to regulate their uses, as it does for prescription medications, but thankfully doctors can choose from a plethora of “off-label” uses. Nutrient and ingredient labels inform the consumer’s quest to self-medicate such widespread ailments as Type II diabetes. That disease spread to near-epidemic status but is now being controlled, thanks to rejection of the diet that the government promoted for decades and embrace of a diet that the government condemned as unsafe. Doctors and pharmacists discuss medications and supplements with patients and provide information about ingredients, side effects and drug interactions. And patients are finally rising in rebellion against the tyranny of FDA drug approval and the pretense of compassion exhibited by the agency’s “compassionate use” drug-approval policy for patients facing life-threatening diseases.

Grossman contrasts the totalitarian policies of yesteryear with the comparative freedom of today in polite academic language. “The FDA treated Jane’s… cohort…as passive, trusting and ignorant consumers. By comparison, [today’s consumer] has unmediated [Grossman means free] access to many more products and to much more information about those products. Moreover, modern consumers have acquired significant influence over the regulation of food and drugs and have generally exercised that influence in ways calculated to maximize their choice.”

Similarly, he explains the transition away from totalitarianism to today’s comparative freedom in hedged terms. To be sure, the FDA gave up much of its power over producers and consumers kicking and screaming; consumers had to wrest all the things listed above from the agency rather than receive them as the gifts of a generous FDA. Nevertheless, Grossman insists that consumers’ distrust of the word “corporation” is so profound that they believe the FDA exerts some sort of countervailing authority to ensure “the basic safety of products and the accuracy and completeness of labeling and advertising.” This concerning an agency that fought labeling and advertising tooth and claw! As to safety, Grossman adds the further caveat that consumers “prefer that government allow consumers to make their own decisions regarding what to put in their bodies…except in cases in which risk very clearly outweighs benefit” [emphasis added]. That implies that consumers believe the FDA has some special competence to assess risks and benefits to individuals, which completely contradicts the principle that individuals should be free to make their own choices.

Since Grossman clearly treats consumer safety and risk as a special case of some sort, it is worth investigating this issue at special length. We do so below.

Government Regulation of Cigarette Smoking

For many years, individual cigarette smokers sued cigarette companies under the product-liability laws. They claimed that cigarettes “gave them cancer,” that the cigarette companies knew it and that consumers didn’t, and that the companies were liable for selling dangerous products to the public.

The consumers got nowhere.

To this day, an urban legend persists that the tobacco companies’ run of legal success was owed to their deep financial pockets and fancy legal footwork. That is nonsense. As the leading economic expert on risk (and the longtime cigarette controversy), W. Kip Viscusi, concluded in Smoke-Filled Rooms: A Postmortem on the Tobacco Deal, “the basic fact is that when cases reached the jury, the jurors consistently concluded that the risks of cigarettes were well-known and voluntarily incurred.”

In the early 1990s, all this changed. States sued the tobacco companies for medical costs incurred by government due to cigarette smoking. The suits never reached trial. The tobacco companies settled with four states; a Master Settlement Agreement applied to remaining states. The aggregate settlement amount was $243 billion, which in the days before the Great Recession, the Obama administration and the Bernanke Federal Reserve was a lot of money. (To be sure, a chunk of this money was gobbled up by legal fees; the usual product-liability portion is one-third of the settlement, but gag orders have hampered complete release of information on lawyers’ fees in these cases.)

However, the states were not satisfied with this product-liability bonanza. They increased existing excise taxes on cigarettes. In “Cigarette Taxes and Smoking,” Regulation (Winter 2014-2015, pp. 42-46, Cato Institute), authors Kevin Callison and Robert Kaestner ascribe these tax increases to “the hypothesis… that higher cigarette taxes save a substantial number of lives and reduce health-care costs by reducing smoking, [which] is central to the argument in support of regulatory control of cigarettes through higher cigarette taxes.”

Callison and Kaestner cite research from anti-smoking organizations and comments to the FDA that purport to find price elasticities of demand for cigarettes of between -0.3 and -0.7, with the lower figure applying to adults and the higher to adolescents. (The words “lower” and “higher” refer to the absolute, not algebraic, value of the elasticities.) Price elasticity of demand is defined as the percentage change in quantity demanded associated with a 1 percent change in price. Thus, a 1% increase in price would cause quantity demanded to fall by between 0.3% and 0.7% according to these estimates.
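The definition can be turned into arithmetic in a few lines. This sketch uses the elasticity range cited above but otherwise hypothetical numbers, not figures from the studies themselves:

```python
# Illustrative elasticity arithmetic; the price change is hypothetical,
# not drawn from the studies cited in the text.
def quantity_change_pct(price_change_pct, elasticity):
    """Percentage change in quantity demanded implied by a price elasticity."""
    return elasticity * price_change_pct

# A 10% price increase under the cited elasticity range of -0.3 to -0.7:
for elasticity in (-0.3, -0.7):
    drop = -quantity_change_pct(10, elasticity)
    print(f"elasticity {elasticity}: quantity demanded falls about {drop:.0f}%")
```

Even at the adolescent figure of -0.7, a 10% price hike trims consumption by only 7% – demand remains inelastic throughout the cited range.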

The problem with these estimates is that they were based on research done decades ago, when smoking rates were much higher. The authors estimate that today’s smokers are mostly the young and the poorly educated. Their price elasticities are very, very low. Higher cigarette taxes have only a minuscule effect on consumption of cigarettes. They do not reduce smoking to any significant extent. Thus, they do not save on health-care costs.

They serve only to fatten the coffers of state governments. Cigarette taxes today play the role played by the infamous tax on salt levied by French kings before the French Revolution. When the tax goes up, the effective price paid by the consumer goes up. When consumption falls by a much smaller percentage than the price increase, tax revenues rise. Both the cigarette-tax increases of today and the salt-tax increases of the 17th and 18th centuries were big revenue-raisers.
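The revenue logic can be sketched with hypothetical numbers for price and quantity, combined with the adult elasticity of -0.3 cited earlier:

```python
# Hypothetical illustration of why inelastic demand makes a cigarette tax
# a revenue-raiser. Price and quantity figures are invented for the example.
base_price, base_quantity = 6.00, 1_000_000    # price per pack, packs sold
elasticity = -0.3                               # inelastic demand (adult estimate)

price_increase = 0.10                           # a 10% tax-driven price increase
new_price = base_price * (1 + price_increase)
new_quantity = base_quantity * (1 + elasticity * price_increase)  # falls only 3%

old_revenue = base_price * base_quantity
new_revenue = new_price * new_quantity
print(f"revenue rises from ${old_revenue:,.0f} to ${new_revenue:,.0f}")
```

Because the 10% price rise outweighs the 3% drop in packs sold, total revenue grows by roughly 6.7% – the salt-tax mechanism in miniature.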

In the 1990s, tobacco companies were excoriated as devils. Today, though, several of the lawyers who sued the tobacco companies are either in jail for fraud, under criminal accusation or dead under questionable circumstances. And the state governments who “regulate” the tobacco companies by taxing them are now revealed as merely in it for the money. They have no interest in discouraging smoking, since it would cut into their profits if smoking were to fall too much. State governments want smoking to remain price-inelastic so that they can continue to raise more revenue by raising taxes on cigarettes.


Can Good Intentions Really Be All That Bad? The Cost of Federal-Government Regulation

The old saying “You can’t blame me for trying” suggests that there is no harm in trying to make things better. The economic principle of opportunity cost reminds us that the use of resources for one purpose – in this case, the various ostensibly benevolent and beneficent purposes of regulation – denies the benefits of using them for something else. So how costly is that?

In “A Slow-Motion Collapse” (Regulation, Winter 2014-2015, pp. 12-15, Cato Institute), author Pierre Lemieux cites several studies that attempted to quantify the costs of government regulation. The most comprehensive of these was by academic economists John Dawson and John Seater, who used variations in the annual Code of Federal Regulations as their index for regulatory change. In 1949, the CFR had 19,335 pages; by 2005, that total had risen to 134,261 pages, a seven-fold increase in six-plus decades. (Remember, this includes federal regulation only, excluding state and local government regulation, which might triple that total.)

Naturally, proponents of regulation blandly assert that the growth of real income (also roughly seven-fold over the same period) requires larger government, hence more regulation, to keep pace. This nebulous generalization collapses upon close scrutiny. Freedom and free markets naturally result in more complex forms of goods, services and social interactions, but if regulatory constraints “keep pace” this will restrain the very benefits that freedom creates. The very purpose of freedom itself will be vitiated. We are back at square one, asking the question: What gives regulation the right and the competence to make that sort of decision?

Dawson and Seater developed an econometric model to estimate the size of the bite taken by regulation from economic growth. Their estimate was that it has reduced economic growth on average by about 2 percentage points per year. This is a huge reduction. Applied to 2011 GDP, it works as follows: had all regulation imposed after 1949 not happened, 2011 GDP would have been $39 trillion higher – about $54 trillion instead of the roughly $15 trillion actually recorded. As Lemieux put it: “The average American (man, woman and child) would now have about $125,000 more per year to spend, which amounts to more than three times [current] GDP per capita. If this is not an economic collapse, what is?”
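The magnitude follows from ordinary compound-growth arithmetic. A sketch under round-number assumptions (2011 GDP of about $15 trillion; 2 extra percentage points of growth per year from 1949 onward) lands in the same neighborhood as the article’s figure; the exact number depends on the base-year GDP and timing conventions used:

```python
# Compound-growth sketch of the Dawson-Seater estimate (round-number assumptions).
actual_gdp_2011 = 15.0          # trillions of dollars, approximate
extra_growth = 0.02             # 2 percentage points per year lost to regulation
years = 2011 - 1949             # 62 years of forgone compounding

counterfactual_gdp = actual_gdp_2011 * (1 + extra_growth) ** years
print(f"counterfactual 2011 GDP: roughly ${counterfactual_gdp:.0f} trillion")
```

Two points a year sounds small; compounded over six decades it more than triples the economy, which is why the estimated loss is so staggering.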

Lemieux points out that, while this estimate may strain the credulity of some, it may actually incorporate the effects of state and local regulation, even though the model’s index did not include them. That is because it is reasonable to expect a statistical correlation among federal, state and local regulation. When federal regulation rises, it often does so in ways that require matching or complementary state and local actions. Thus, those forms of regulation are hidden in the model to some considerable degree.

Lemieux also points to Europe, where regulation is even more onerous than in the U.S. – and growth has been even more constipated. We can take this reasoning even further by bringing in the recent example of less-developed countries. The Asian Tigers experienced rapid growth when they espoused market-oriented economics; might their relative lack of regulation be part of this economic-development success story? India and mainland China turned their economies around when they turned away from socialism and Communism, respectively; regulation still hamstrings India, while China is dichotomized into a relatively autonomous small-scale competitive sector and a heavily regulated and planned, government-controlled big-business economy. Signs point to a recent Chinese growth dip tied to the bursting of a bubble created by easy money and credit granted to the regulated sector.

The price tag for regulation is eye-popping. It is long past time to ask ourselves why we are stuck with this lemon.

Government Regulation as Wish-Fulfillment

For millennia, children have cultivated the dream fantasies of magical figures that make their wishes come true. These apparently satisfy a deep-seated longing for security and fulfillment. Freud referred to this need as “wish fulfillment.” Although Freudian psychology has long ago been discredited, the term retains its usefulness.

When we grow into adulthood, we do not shed our childish longings; they merely change form. In the 20th century, motion pictures became the dominant art form in the Western world because they served as fairy tales for adults by providing alternative versions of reality that were preferable to daily life.

When asked by pollsters to list or confirm the functions regulation should perform, citizens repeatedly compose “wish lists” that are either platitudes or, alternatively, duplicate the functions actually approximated by competitive markets. It seems even more significant that researchers and policymakers do exactly the same thing. Returning to Lewis Grossman’s evaluation of the public’s view of FDA: “Americans’ distrust of major institutions has led them to the following position: On the one hand, they believe the FDA has an important role to play in ensuring the basic safety of products and the accuracy and completeness of labeling and advertising. On the other hand, they generally do not want the FDA to inhibit the transmission of truthful information from manufacturers to consumers, and – except in cases in which risk very clearly outweighs benefit – they prefer that the government allow consumers to make their own decisions regarding what to put in their own bodies.”

This is a masterpiece of self-contradiction. Just exactly what is an “important role to play,” anyway? Allowing an agency that previously denied the right to label and advertise to play any role is playing with fire; it means that genuine consumer advocates have to fight a constant battle with the government to hold onto the territory they have won. If consumers really don’t want the FDA to “inhibit the transmission of truthful information from manufacturers to consumers,” they should abolish the FDA, because free markets do the job consumers want done by definition and the laws already prohibit fraud and deception.

The real whopper in Grossman’s summary is the caveat about risk and benefit. Government agencies in general and the FDA in particular have traditionally shunned cost/benefit and risk/benefit analysis like the plague; when they have attempted it they have done it badly. Just exactly who is going to decide when risk “very clearly” outweighs benefit in a regulatory context, then? Grossman, a professional policy analyst who should know better, is treating the FDA exactly as the general public does. He is assuming that a government agency is a wish-fulfillment entity that will do exactly what he wants done – or, in this case, what he claims the public wants done – rather than what it actually does.

Every member of the general public would scornfully deny that he or she believes in a man called Santa Claus who lives at the North Pole and flies around the world on Christmas Eve distributing presents to children. But for an apparent majority of the public, government in general and regulation in particular play a similar role because people ascribe quasi-magical powers to them to fulfill psychological needs. For these people, it might be more apropos to view government as “Mommy” or “Daddy” because of the strength and dependent nature of the relationship.

Can Government Control Consumer Risk? The Emerging Scientific Answer: No 

The comments of Grossman, assorted researchers and countless other commentators and onlookers over the years imply that government regulation is supposed to act as a sort of stern but benevolent parent, protecting us from our worst impulses by regulating the risks we take. This is reflected not only in cigarette taxes but also in the draconian warnings on cigarette packages and in numerous other measures taken by regulators. Mandatory seat belt laws, adopted by state legislatures in 49 states since the mid-1980s at the urging of the federal government, promised the near-elimination of automobile fatalities. Government bureaucracies like the Occupational Safety and Health Administration have covered the workplace with a raft of safety regulations. The Consumer Product Safety Commission presides with an eagle eye over the safety of the products that fill our market baskets.

In 1975, University of Chicago economist Sam Peltzman published a landmark study in the Journal of Political Economy. In it, Peltzman revealed that the various devices and measures mandated by government and introduced by the big auto companies in the 1960s had not actually produced statistically significant improvements in safety, as measured by auto fatalities and injuries. In particular, use of the new three-point seat belts seemed to show a slight improvement in driver fatalities that was more than offset by a rise in fatalities to others – pedestrians, cyclists and possibly occupants of victim vehicles. Over the years, subsequent research confirmed Peltzman’s results so repeatedly that former Chairman of the Council of Economic Advisers N. Gregory Mankiw dubbed this the “Peltzman Effect.”

A similar kind of result emerged throughout the social sciences. Innovations in safety continually failed to produce the kind of safety results that experts anticipated and predicted, often failing to provide any improved safety performance at all. It seems that people respond to improved safety by taking more risk, thwarting the expectations of the experts. Needless to say, this same logic applies also to rules passed by government to force people to behave more safely. People simply thwart the rules by finding ways to take risk outside the rules. When forced to wear seat belts, for example, they drive less carefully. Instead of endangering only themselves by going beltless, now they endanger others, too.

Today, this principle is well-established in scientific circles. It is called risk compensation. The idea that people strive to maintain, or “purchase,” a particular level of risk and hold it constant in the face of outside efforts to change it is called risk homeostasis.

These concepts make the entire project of government regulation of consumer risk absurd and counterproductive. Previously it was merely wrong in principle, an abuse of human freedom. Now it is also wrong in practice because it cannot possibly work.

Dropping the Façade: the Reality of Government Regulation

If the results of government regulation do not comport with its stated purposes, what are its actual purposes? Are the politicians, bureaucrats and employees who comprise the legislative and executive branches and the regulatory establishment really unconscious of the effects of regulation? No, for the most part the beneficiaries of regulation are all too cynically aware of the façade that covers it.

Politicians support regulation to court votes from the government-dependent segment of the voting public and to avoid being pilloried as killers and haters or – worst of all – a “tool of the big corporations.” Bureaucrats tacitly do the bidding of politicians in their role as administrators. In return, politicians do the bidding of bureaucrats by increasing their budgets and staffs. Employees vote for politicians who support regulation; in return, politicians vote to increase budgets. Employees follow the orders of bureaucrats; in return, bureaucrats hire bigger staffs that earn them bigger salaries.

This self-reinforcing and self-supporting network constitutes the metastatic cancer of big government. The purpose of regulation is not to benefit the public. It is to milk the public for the benefit of politicians, bureaucrats and government employees. Regulation drains resources away from and hamstrings the productive private economy.

Even now, as we speak, this process – aided, abetted and drastically accelerated by rapid money creation – is bringing down the economies of the Western world around our ears by simultaneously wreaking havoc on the monetary order with easy money, burdening the financial sector with debt and eviscerating the real economy with regulations that steadily erode its productive potential.

DRI-241 for week of 11-9-14: The Birth of Public-Utility Regulation

An Access Advertising EconBrief:

The Birth of Public-Utility Regulation

Today’s news heralds the wish of President Obama that the Federal Communications Commission (FCC) pass strict rules ensuring that Internet providers give equal treatment to all customers. This is widely interpreted (as, for example, by The Wall Street Journal front-page article of 11/11/2014) as saying that “the Federal Communications Commission [would] declare broadband Internet service a public utility.”

More specifically, the Journal’s unsigned editorial of the same day explains that the President wants the FCC to apply the common-carrier provisions of Title II of the Communications Act of 1934. Its “century-old telephone regulations [were] designed for public utilities.” In fact, the wording was copied from the original federal regulatory legislation, the Interstate Commerce Act of 1887; the word “railroad” was stricken and “telephone” was added to “telegraph.”

In other words, Mr. Obama wants to resurrect enabling regulatory legislation that is a century and a quarter old and apply it to the Internet.

We might be pardoned for assuming that the original legislation has been a rip-roaring success. After all, the Internet has revolutionized our lives and the conduct of business around the world. The Internet has become a way of life for young and old, from tribesmen in central Africa to dissidents from totalitarian regimes to practically everybody in developed economies. If we’re now going to entrust its fate to the tender mercies of Washington bureaucrats, the regulatory schema should presumably be both tried and true.

Public-utility regulation has been tried, that’s for sure. Was it true? And how did it come to be tried in the first place?


Natural Monopoly: The Party Line on Public-Utility Regulation


Public-utility regulation is a subset of the economic field known as industrial organization. Textbooks designed for courses in the subject commonly devote one or more chapters to utility regulation. Those texts rehearse the theory underlying regulation, which is the theory of natural monopoly. According to that theory, the reason we have (or had) regulated public utilities in areas like gas, electricity, telegraphs, telephones and water is that free competition cannot long persist. Regulated public utilities are greatly preferable to the alternative of a single unregulated monopoly provider in each of these fields.

The concept of natural monopoly rests on the principle of decreasing long-run average cost. In turn, this is based on the idea of economies of scale. Consider the production of various economic goods. All other things equal, we might suppose that as all inputs into the production process increase proportionately, the total monetary cost of production for each one might do so as well. Often it does – but not always. Sometimes total cost increases more-than-proportionately, usually because the industry to which the good belongs uses so much of a particular input that expansion bids up the input’s price, thereby increasing total cost more-than-proportionately.

The rarest case is the opposite one, in which total cost increases less-than-proportionately with the increase in output. Although at first thought this seems paradoxical, there are technical factors that occasionally operate to bring it about. One of these is the engineering principle known as the two-thirds rule. In certain applications, such as the thru-put of a pipeline or the contents of the containers used by ocean-going freight vessels, the surface area of the surrounding enclosure varies as the two-thirds power of the volume it encloses. In other words, when the pipe grows larger and larger, the amount that can be transmitted through the pipe increases more-than-proportionately; when the container is made larger, the amount of freight the container can hold increases more-than-proportionately. The economic implication of this technical law is far-reaching, since production cost is a function of the size of the pipe or container (surface area) while the amount of output is a function of the thru-put of the pipe or the amount of freight (volume). In other words, this exactly describes the condition called “economies of scale,” in which output increases more-than-proportionately when all inputs are increased equally. Since average cost is the ratio of total cost to output, the fact that the denominator in the ratio increases more than the numerator causes the ratio to fall, thus producing decreasing average total cost.
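A minimal numerical sketch of this scaling logic, treating cost as proportional to surface area and output as proportional to volume (the constants and units are arbitrary, chosen only to show the direction of the effect):

```python
# Two-thirds rule sketch: cost tracks surface area, capacity tracks volume,
# and surface area grows as the two-thirds power of volume.
def average_cost(volume, cost_per_unit_area=1.0):
    """Average cost per unit of capacity; falls as capacity grows."""
    surface_area = volume ** (2 / 3)   # geometric scaling, constants omitted
    return cost_per_unit_area * surface_area / volume

# Each eight-fold jump in capacity halves average cost:
for v in (1, 8, 64, 512):
    print(f"capacity {v:>3}: average cost {average_cost(v):.3f}")
```

The printed series falls monotonically – exactly the decreasing average total cost the natural-monopoly argument requires.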

Why does decreasing average cost create this condition of natural monopoly? Think of unit price as “average revenue.” Decreasing average cost allows a seller to lower price continuously as the scale of output increases. This is important because it suggests that the seller who achieves the largest scale of output – that is, grows faster than competitors – could undersell all others while still charging a viable price. The textbooks go on to claim that after driving all competitors from the field, the successful seller would then achieve an insurmountable monopoly and raise its price to the profit-maximizing point, dialing its output back to the level commensurate with consumer demand at that higher price. Rather than subjecting consumers to the agony of this pure monopoly outcome, better to compromise by settling on an intermediate price and output that allows the regulated monopolist a price just high enough to attract the financial capital it needs to build, expand and maintain its large infrastructure. That is the raison d’être of public-utility regulation, which is accomplished in the U.S. by an administrative law process involving hearings and testimony before a commission consisting of political appointees. Various interest groups – consumers, the utility company, the commission itself – are legally represented in the hearings.

Why is the regulated price and output termed a “compromise”? The Public Utility Commission (PUC) forces the company to charge a price equal to its average cost, incorporating a rate of profit sufficient to attract investor capital. This regulatory result is intermediate between the outcomes under pure monopoly and pure competition. A profit-maximizing monopoly firm will always maximize profit by producing the rate of output at which marginal revenue is equal to marginal cost. The monopolist’s marginal revenue is less than its average revenue (price) because every change in price affects inframarginal units, either positively or negatively, and the monopolist is all too aware of its singular status and the large number of inframarginal units affected by its pricing decisions. Under pure competition, each firm treats price as a parameter and neglects the tiny effect its supply decisions have on market price; hence price and marginal revenue are effectively equal. Thus, each competitive firm will produce a rate of output at which price equals marginal cost, and the total output resulting from each of these individual firm decisions is larger – and the resulting market price is lower – than would be the case if a single monopoly firm were deciding on price and output for the whole market. The PUC does not attempt to duplicate this pure competitive price because it assumes that, under decreasing average cost, marginal cost is less than average cost and a price less than average cost would not cover all the utility firm’s costs. Rather than subsidize these losses out of public funds (as is commonly done outside of the U.S. and Canada), the PUC allows a higher price sufficient to cover all costs including the opportunity cost of attracting financial capital.
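The three outcomes compared above – pure monopoly, marginal-cost pricing and the regulated average-cost compromise – can be illustrated with a stylized linear-demand example. All the numbers are hypothetical; marginal cost is held constant while a fixed cost makes average cost decline, the textbook natural-monopoly configuration:

```python
# Stylized comparison of monopoly, marginal-cost, and regulated pricing.
# Demand: P = a - b*q.  Costs: fixed cost F plus constant marginal cost c,
# so average cost F/q + c falls as output rises (natural-monopoly condition).
a, b, F, c = 100.0, 1.0, 1000.0, 20.0

# Monopoly: marginal revenue (a - 2*b*q) equals marginal cost c.
q_monopoly = (a - c) / (2 * b)
p_monopoly = a - b * q_monopoly

# Pure-competition benchmark: price equals marginal cost -- but at that
# price the fixed cost F goes uncovered, so the firm loses money.
p_competitive = c

# Regulation: price equals average cost, i.e. a - b*q = F/q + c.
# Rearranged: b*q**2 - (a - c)*q + F = 0; take the larger root (more output).
disc = (a - c) ** 2 - 4 * b * F
q_regulated = ((a - c) + disc ** 0.5) / (2 * b)
p_regulated = a - b * q_regulated

print(f"monopoly:  q = {q_monopoly:.1f}, P = {p_monopoly:.2f}")
print(f"regulated: q = {q_regulated:.1f}, P = {p_regulated:.2f}")
print(f"marginal-cost price {p_competitive:.2f} would not cover fixed cost")
```

With these numbers the regulated price (about 35.5) sits between the monopoly price (60) and the loss-making marginal-cost price (20) – the “compromise” the passage describes.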

How well does this theoretical picture of natural monopoly fit industrial reality? Many public-utility industries possess at least some technical features in common with it. Electric and telephone transmission lines, natural-gas pipelines and water pipe all obey the two-thirds rule. This much of the natural monopoly doctrine has a scientific basis. On the other hand, power generation (as opposed to transmission or transport) does not usually exhibit economies of scale. There are plenty of industries that are not regulated public utilities despite showing clear scale economies – ocean-going cargo vessels are one obvious case. This is enough to provoke immediate suspicion of the natural-monopoly doctrine as a comprehensive explanation of public-utility regulation. Suffice it to say that scale economies seldom dominate the production functions even of public-utility goods.

The Myth of the Birth of Public-Utility Regulation – and the Reality


In his classic article (“Hornswoggled! How Ma Bell and Chicago Ed Conned Our Grandparents and Stuck Us With the Bill,” Reason Magazine, February 1986, pp. 29-33), author Marvin N. Olasky recounts the birth of public-utility regulation. When “angry consumers and other critics call for an end to [public-utility] monopolies, choruses of utility PR people and government regulators recite the same old story – once upon a time there was competition among utilities, but ‘the public’ got fed up and demanded regulation… Free enterprise in utilities lost in a fair fight.”

As Olasky reveals, “it makes a good story, but it’s not true.” It helps to superimpose the logic of natural monopoly theory on the scenario spun by the “fair fight” myth. If natural-monopoly logic held good, how would we expect the utility-competition scenario to deteriorate?

Well, the textbooks tell us that the condition of natural monopoly (decreasing long-run average total cost) allows one firm to undersell all others by growing faster. Then it drives rivals out of business, becomes a pure monopoly and gouges consumers with high prices and reduced output. So that’s what we would expect to find as our “fair-fight” scenario: dog-eat-dog competition resulting in the big dog devouring all rivals, then rounding on consumers, whose outraged howls produce the dog-catching regulators who kennel up the company as a regulated public utility. The problem with this scenario is that it never happened. It is nowhere to be found in the history books or contemporary accounts.


Well, somebody must have said something about life before utility regulation. After all, it was only about a century ago, not buried in prehistory. If events didn’t unfold according to textbook theory, how did public-utility regulation happen?

Actually, conventional references to the pre-regulatory past are surprisingly sparse. More to the point, they are contradictory. Mostly, they can be grouped under the heading of “wasteful competition.” This is a very different story from the one told by the natural monopoly theory. It maintains that competitive utility provision was a prodigal fiasco: numerous firms all vying for the same market by laying cable and pipe and building transmission lines. All this superfluous activity and expenditure drove costs – and, presumably, prices – through the roof. Eventually, a fed-up public put an end to all this competitive nonsense by demanding relief from the government. This is the scenario commonly cited by the utility PR people and regulators, who care little about theory and even less about logical consistency. All they want is an explanation that will play in Peoria, meeting whatever transitory necessity confronts them at the moment.

Fragmentary support for this explanation exists in the form of references to multiple suppliers of utility services in various markets. In New York City, for example, there were six different electricity franchises granted by a single 1887 City Council resolution. But specific references to competitive chaos are hard to come by, which we wouldn’t expect if things were as bad as they are portrayed.

Could such a situation have arisen and persisted for the 20-40 years that filled the gap between the development of commercial electricity and telephony and the ascendance of public-utility regulation in the 1920s? No, the thought of competitive firms chasing their tails up the cost curve and losing money for decades is implausible on its face. Anyway, we have gradually pieced together the true picture.

The Reality of Pre-Regulatory Utility Competition


Marvin Olasky pinpoints 1905 as a watershed year in the saga of public utilities in America. That year a merger took place between two of the nation’s largest electric companies, Chicago Edison and Commonwealth Electric. Olasky cites a 1938 monograph by economist Burton Behling, which declared that prior to 1905 the market for municipal electricity “was one of full and free competition.” Market structure bore a superficial resemblance to cable television today in that municipalities assigned franchise rights for service to corporate applicants, the significant difference being that “the common policy was to grant franchises to all who applied” and met minimum requirements. Olasky describes the resulting environment as follows: “Low prices and innovative developments resulted, along with some bankruptcies and occasional disruption of service.”

That qualification “some bankruptcies and occasional disruption of service” raises no red flags for economists; it is the tradeoff they expect to encounter for the benefits provided by low prices and innovation. But it is integral to the story we are telling here. The anecdotal tales of dislocation are the source of the historical scare stories told by later generations of economic historians, utility propagandists and left-wing opportunists. They also provided contemporaneous proponents of public-utility regulation with ammunition for their promotional salvos.

Who roamed the utility landscape during the competitive years? In 1902, American Bell Co. had about 1.3 million subscribers, while the independent companies that competed with it had over 2 million subscribers altogether. By 1905, Bell’s industry leadership was threatened sufficiently to inspire publication of a book entitled How the Bell Lost its Grip. In Toledo, OH, an independent company, Home Telephone Co., began competing with Bell in 1901. It charged rates half those of Bell. By 1906, it had 10,000 subscribers compared to 6,700 for the local Bell Co. In the states of Nebraska and Iowa, independent company subscribers outnumbered those of Bell by 260,000 to 80,000. Numerous cities held referenda on the issue of granting competitive franchises for telephone service. Competition usually won out. In Portland, OR, the vote was 12,213 to 560 in favor of granting the competitive franchise. In Omaha, NE, the independent franchise won by 7,653 to 3,625. A national survey polled 1,400 businessmen on the issue; 1,245 said that competition had or could produce better phone service in their community. 982 said that competition had forced their Bell company to improve its service.

Obviously, one option open to the Bell (and Edison electric) companies was to cut prices to meet competition. But because Bell and Edison were normally the biggest companies in their cities or regions, with the most subscribers, a price cut was much more costly to them than to a smaller independent: the big company had so many inframarginal customers, each of whom would now pay the lower price. Consequently, these leading companies looked around for alternative ways of dealing with pesky competitors. The great American rule of thumb in business is: If you can’t beat ’em, join ’em; if you can’t beat ’em or join ’em, bar ’em.
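The inframarginal arithmetic is worth a quick sketch. The subscriber counts loosely echo the Toledo figures above; the $4 and $2 monthly rates are invented for illustration:

```python
# Matching a rival's price cut forfeits revenue on every existing
# (inframarginal) subscriber. The monthly rates are hypothetical;
# the subscriber counts loosely echo the Toledo example.

def monthly_revenue_forgone(subscribers, old_rate, new_rate):
    """Revenue lost on customers who were already paying the old rate."""
    return subscribers * (old_rate - new_rate)

bell_loss = monthly_revenue_forgone(6_700, 4.0, 2.0)     # $13,400 per month
upstart_loss = monthly_revenue_forgone(1_000, 4.0, 2.0)  # $2,000 per month

# Same percentage cut, nearly seven times the absolute sacrifice --
# which is why the incumbents reached for non-price weapons instead.
```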

The Deadly Duo: Theodore Vail and Samuel Insull


Theodore Vail was a leading American business executive of the 19th century. He was President of American Bell from 1880 to 1886, and later rejoined the Bell system when he became an AT&T board member in 1902. Vail commissioned a city-by-city study of Bell’s competitive position. It persuaded him that Bell’s business strategy needed overhauling. Bell’s corporate position had been that monopoly was the only technically feasible arrangement because it enabled telephone users in different parts of a city and even different cities to converse. As a company insider conversant with the latest advances, Vail knew that this excuse was wearing thin because system interconnections were even then becoming possible. Competition was eating into Bell’s market share already, and with interconnection on the horizon Vail knew that Bell’s supremacy would vanish unless it was revitalized.

The idea Vail hit upon was based upon the strategy employed by the railroads about fifteen years earlier. In order to win public acceptance for the special government favors they had received, the roads commissioned puff pieces from free-lance writers and bribed newspaper and magazine editors to print them. Vail expanded this technique into what later came to be called “third-party” editorial services; he employed companies for the sole purpose of producing editorial matter glorifying the Bells. One firm earned over $100,000 from the Bell companies while simultaneously earning $84,000 per year to place some 13,000 favorable articles annually about electric utilities. (These usually appeared as what we would now call “advertorials” – unsigned editorials citing no source.) The companies did not formally acknowledge their link with utilities, although it was exposed in investigative works such as 1931’s The Public Pays by Ernest Gruening.

Vail combined this approach with another tactic borrowed from the railroads – the pre-emptive embrace of government regulation. Historian Gabriel Kolko provided documentation for his thesis that the original venture in federal-government regulation, the Interstate Commerce Act of 1887, was sponsored by the railroads themselves as a means of cartelizing the industry and suppressing the troublesome competitive forces that had bankrupted one railroad after another by producing price wars and persistently low freight rates. The public uproar over differential rates for long hauls and short hauls gave both railroads and regulators the necessary excuse to claim that competition had failed and only regulation could provide “just and reasonable rates.” Not surprisingly, the regulatory solution was to impose fairness and equality by requiring railroads to raise long-haul rates to the level of short-haul rates, so that all shippers now paid equally high rates per mile.

Vail was desperate to suppress competition from independent phone companies, but knew that he would then face the danger of lawsuits under the embryonic Sherman Antitrust Act, which contained a key section forbidding monopolization. The only kind of competition Vail approved of was “that kind which is rather ‘participation’ than ‘competition,’ and operates under agreement as to prices or territory.” That is, Vail explicitly endorsed cartelization over competition. Unfortunately, the Sherman Act also contained a section outlawing price collusion. Buying off the public was clearly not enough; Vail would have to stave off the federal government as well. So he sent AT&T lobbyists to Washington, where they achieved passage of legislation (the Mann-Elkins Act of 1910) placing interstate telephone and telegraph communications under the aegis of the ICC.

Vail feared competition, not government. He was confident that regulation could be molded and shaped to the benefit of the Bells. He knew that the general public and particularly his fellow businessmen would take a while to warm up to regulation. “Some corporations have as yet not quite got on to the new order of things,” he mused. By the time Vail died in 1920, that new order had largely been established thanks to the work of Vail’s contemporary, Samuel Insull.

Insull emigrated from England in 1881 to become Thomas Edison’s secretary. He rose rapidly to become Edison’s strategic planner and right-hand man. At Edison’s side, Insull saw firsthand the disruptive effects of innovation on markets when competition was allowed to function. Insull made a mental note not to let himself become the disruptee. With Edison’s blessing, Insull took the reins of Chicago Edison in 1892. His tenure gave him an education in the field of politics to complement the one Edison had given him in technology. In 1905, he merged Chicago Edison with Commonwealth Electric to create the nation’s leading municipal power monopoly.

Like Vail, Insull recognized the threat posed by marketplace competition. Like Vail, Insull saw government as an ally and a tool to suppress his competitors. Insull’s embrace of government was even warmer than Vail’s because he perceived its vital role to be placating and anesthetizing the public. As Olasky put it, “Insull argued that utility monopoly… could best be secured by the establishment of government commissions, which would present the appearance of popular control.”

The commission idea would be sold to the public as a democratic means of establishing fair utility rates. Sure, these rates might be lower than the highest rates utility owners could get on their own, but they would certainly be higher than those prevailing with competition. And the regulated rates would be stable, a sure thing, not the crap shoot offered by the competitive market. In a 1978 article in the prestigious Journal of Law and Economics, economic historian Gregg Jarrell documents that the first states to implement utility regulation saw rising prices and profits and falling utility output, while states that retained competitive utility markets had lower utility prices. Jarrell’s conclusion: “State regulation of electric utilities was primarily a pro-producer policy.”

Over the years, this trend continued, even though utility competition died off almost to the vanishing point. Yet it remained true that those few jurisdictions that allowed utility competition – usually phone, sometimes electric – benefitted from lower rates. This attracted virtually no public attention.

Insull realized that the popularity of competition was just as big an obstacle as its reality in the marketplace. So he slanted his public-relations efforts to heighten the public’s fear of socialism and promote utility regulation as the alternative to a government-owned, socialized power system. Insull foresaw that politicians and regulators would need to use the utility company as a whipping boy by pretending to discipline it severely and accusing it of cupidity and greed. This would allow government to assume the posture of a stern guardian of the public welfare and champion of the consumer – all the while catering to the utility’s welfare behind closed doors. Generations of economists became accustomed to seeing this charade performed at PUC hearings. Their cynicism was tempered by the fact that these same economists were earning handsome incomes by specializing as consultants to one of the several interested parties at those hearings. Over the years, this iron quadrangle of interested parties – regulators, lawyers, economists and “consumer advocates” – became the staunchest and most reliable defender of the public-utility regulation process. Despite the fact that these people were in the best position to appreciate the endless waste and hypocrisy, their self-interest blinded them to it.

Insull enthusiastically adopted the promotional methods pioneered by the railroads and imitated by Theodore Vail. One of his third-party firms, the Illinois Committee on Public Utility Information, was led by Insull subordinate Bernard J. Mullaney. The Committee distributed 5 million pieces of pro-utility literature in the state in 1920 and 1921. Mullaney carefully cultivated the favor of editors by feeding them news and information of all kinds in order to earn a key quid pro quo – publication of his press releases. This favoritism went as far as providing the editors with free long-distance telephone service as an in-kind bribe. Not to be overlooked, of course, is that most traditional of all shady relationships in the newspaper business – buying ads in exchange for preferential treatment in the paper. Electric companies, like the Bells, were prodigious advertisers and took lavish advantage of this leverage. In eventual hearings held by the Federal Trade Commission and the Federal Communications Commission, testimony and exhibits revealed that Bell executives had newspaper editors throughout the West and Midwest in their pockets.

Over the years, as public-utility regulation became a respected institution, the need for big-ticket PR support waned. But utilities never stopped cultivating political support. The Bell companies in particular bought legislators by the gross, challenging teachers’ unions as the leading political force in statehouses across the nation. When the challenge of telecommunications deregulation loomed, the Bells were able to stall it off and postpone its benefits to U.S. consumers for a decade longer than those enjoyed abroad.

Profit regulation left utilities with no profit motive to innovate or cut costs. This caused costs to inflate like a hot-air balloon. Sam Insull realized that he could make a healthy profit by guaranteeing his market, killing off his competition and writing his profit in stone through regulation. Then he could ratchet up real income by “gold-plating the rate base” – increasing salaries and other costs and forcing the ratepayers to pay for them. Ironically, he ended up going broke despite owning a big portfolio of utilities. He borrowed huge sums of money to buy them and expand their operations. When the Depression hit, he found that he couldn’t raise rates to service the debt he had run up. He was indicted, left the country, returned to win acquittal on criminal charges but died broke from a heart attack – just one more celebrated riches-to-rags Depression-era tale.

The lack of motivation made utilities a byword for inefficiency. Bell Labs invented the transistor, but AT&T was one of the last companies to use it because it still had vacuum tubes on hand and had no profit motivation to switch and no competitive motivation to serve its customers. An AT&T company made the first cell phone call in 1946, but the technology withered on the vine for 40 years because the utility system had no profit motivation to deploy it. Touch-tone dialing was invented in 1941 but not rolled out until the 1970s. Bell Labs developed early high-speed computer modems but couldn’t test high-speed data transmission because regulators hadn’t approved tariffs (prices) for data transmission. The list goes on and on; in fact, the entire telecommunications revolution began by accident when a regulator became so fed up with AT&T’s inefficiency that he changed one regulation in the 1970s and allowed one company called MCI to compete with the Bells. (We owe Andy Kessler, longtime AT&T employee and current hedge-fund manager, for this litany of innovative ineptitude.)

What is Net Neutrality All About?


Today, the call for “net neutrality” by politicians like President Obama is a political pose, just as the call for public-utility regulation was a century ago. Robert Litan of the Brookings Institution has pointed out the irony that slapping a Title II common-carrier classification on broadband Internet providers would not even prevent them from practicing the paid prioritization the President complained of in his speech! Indeed, for most of the 20th century, public utilities practiced price discrimination among different classes of buyers in order to redistribute income from business users to household users.

The Internet as we know it today is the result of an unimpeded succession of competitive innovations over the last three decades; i.e., the very “open and free Internet” that the New York Times claims President Obama will now bestow upon us. Net neutrality would bring all this to a screeching halt by imposing regulation on most of the Web and taxes on consumers. Today, the biggest chunk of phone bills goes to a “universal service” charge, a redistributive tax ostensibly intended to make sure everybody has phone service. Yet before the proliferation of cell phones, the percentage of the U.S. population owning televisions – which were unregulated and benefitted from no “universal service” tax – was several percentage points higher than the percentage owning and using telephones. In reality, the universal service tax was used to perpetuate the regulatory process itself.

In summary, then, the balance sheet on public utilities shows they were plotted by would-be monopolists to stymie competition and enlist government and regulators as co-conspirators. The conspiracy stuck consumers with high prices, reduced output, mediocre service, high excise taxes and – worst of all – stagnant innovation for decade after decade. All this is balanced against the dubious benefit of stability – the sort of stability the U.S. economy has shown in the last five years.

A similar future awaits us if we treat the Internet’s imagined ills with the regulatory nostrum called net neutrality.

DRI-311 for week of 3-30-14: The Dead Hand of Regulation

An Access Advertising EconBrief:

The Dead Hand of Regulation

One of the venerable legal principles, dating from common law and drummed into every lawyer, is the rule against perpetuities. Its purpose is to stay the “dead hand” of the past from controlling life into the indefinite future.

Economists have adopted the study of economic regulation as a specialized field. They have gradually come to realize that regulation acts as a “dead hand” that retards the advance of progress wherever it prevails.

This characterization departs radically from the conventional view of regulation as a cure-all for whatever ails an economic system. The knee-jerk responses of mainstream news media and academia to a problem are variously to blame an absence of regulation, excoriate lax regulation, perceive insufficient regulation or lament poorly funded regulation. Indeed, these constitute virtually the full menu of choices.

The obvious inference to draw from this is that regulation is the sine qua non of problem solution. Sure, sometimes the shortsighted legislators neglect to regulate some stray industry or human activity; sometimes the regulators get lazy and just sit on their hands or posteriors; sometimes those silly laws tie the hands of regulators; and sometimes we stingy so-and-sos don’t give the heroic regulators the money they need to do their jobs. But set up a good old government bureau, populate it with noble, altruistic, tough-minded, tender-hearted civil servants and unlock the public treasury – then just watch those regulators go. The industry will hum like a Welsh chorus – so we are supposed to assume.

Unfortunately, economists have been hard put to find even one exemplary case of regulation. It isn’t just that regulatory agencies share the deficiencies common to all government agencies. No, they invent new ones. And this is just about the only kind of innovation they do promote. In all other respects, regulation is analogous to the “dead hand” that the institution of law has been trying to thwart for over a millennium.

Some of the most famous case studies of regulation provide persuasive testimony for the economic case against it.

Taxicab Regulation

Almost every major city in the U.S. regulates taxicabs. Taxi regulation began not long after the automobile’s popularity began to zoom upward; it accompanied the regulation of trucking in the 1920s and 1930s. It is a variety of occupational regulation, requiring licensure and registration with the police department. Taxi fares are regulated by city bureaus rather than marketplace competition, and this regulation serves as a cartel that prevents competitive price declines by taxi firms (or individual drivers). Licenses have been severely restricted – for example, New York City has issued virtually no new taxi licenses since World War II. The licenses, or “medallions,” could be sold. The restriction on their number and cartelization of fares combine to create monopoly profits that are capitalized into the market price of the medallions; that is why medallion prices have sometimes reached six figures.
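The capitalization of monopoly profit into medallion prices follows the standard perpetuity formula. A minimal sketch, with an invented annual rent and discount rate:

```python
# A medallion is a claim on a stream of monopoly profit, so its market
# price approximates the present value of that stream. The $12,000
# annual rent and 8% discount rate are hypothetical numbers.

def capitalized_value(annual_rent, discount_rate):
    """Present value of a perpetual annual rent stream: rent / rate."""
    return annual_rent / discount_rate

medallion_price = capitalized_value(12_000, 0.08)  # $150,000 -- six figures
```

The same formula explains why competitive entry alarms medallion owners: anything that credibly threatens the rent stream shows up immediately as a drop in the medallion’s resale price.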

Economics textbooks teach that consumption is the end-in-view behind all economic activity. We are all consumers. They demonstrate that competitive markets produce larger output and lower prices than does the same market dominated by a single monopoly producer, thus validating the economist’s customary preference for pure competition over pure monopoly as a form of market organization. Since the local taxi cartels created by taxicab regulation essentially duplicate the outcome of pure monopoly – but with producer benefits spread over multiple firms rather than a single one – it is not surprising that economics textbooks have long featured taxicab regulation as a real-world application of regulation-gone-wrong.

Anybody who has ridden a taxicab regularly over the last half-century didn’t need to pick up an economics textbook to know that something was wrong with taxi service. It was proverbially difficult to get a taxi during morning and evening rush hours, and also during the “bar rush” late at night. When the city was visited by bad weather or a big convention, taxis were even scarcer. Wait times could stretch into hours even during the off-peak times for predominantly black residential areas plagued by high rates of crime. The level of professionalism among drivers varied from sky high to dirt poor.

We can gauge the degree of influence exerted on political reality by academic economists and taxi consumers by the fact that only a tiny handful of communities have deregulated taxi service in response to the complaints of either group. Washington, D.C. long featured the nation’s least regulated taxi market – and lowest taxi fares – among major cities. In the mid-1980s, Kansas City, MO deregulated both taxi fares and entry into the market, ushering in a few years of fierce taxi competition and dramatic benefits to local consumers and tourists. But the forces of regulation eventually regained the upper hand in the 1990s and the market was once more cartelized. A few other cities experienced their competitive moments. Until recently, the Dark Side ruled mostly unopposed.

In 2010, somebody came up with a new kind of challenge to taxicab regulation. Bucking the regulatory establishment with lawsuits and publicity campaigns was too costly and difficult. Rather than confront taxicab regulation head on, it was better to make an end run around it.

The Uber Innovation: Outflanking Taxi Regulation

Technology made it inevitable. The intersection of the Internet, online credit-card payment methods, smart phones and GPS mapping suddenly made traditional taxicab service virtually obsolete. Now customers could book a taxi the same way they make a dinner reservation, by calling up a provider, reserving a car and paying in advance. Then they could track their vehicle’s progress to their location. No more “mission impossible” trips during peak hours! No more dealing with unresponsive monopoly providers! No more surly, badly dressed drivers or uncomfortable vehicles! No more monopoly taxi fares!

But wait – what about the dearth of drivers? What happened to the regulatory bottleneck that prevented entry of firms? This is where the end run came in. The new taxicab entrant, a firm called Uber, got around taxi regulation by not being a taxicab company. First, it entered the market for transportation services by providing town cars for its trips. These Lincolns, Cadillacs, BMWs and Mercedes were functioning as livery vehicles because they were available by appointment only; thus, they were technically outside the realm of taxi regulation. (As we’ll see, that hasn’t stopped taxi regulators from trying to regulate the company or its imitators.) Then it doubled as a middleman that recruited ordinary drivers and matched them up with people who needed trips where the drivers were going – or were willing to go. Thus, Uber cut overhead costs to the bone by using pre-existing capital goods that had competing alternative uses. Either way, though, the central idea is to provide the services of a taxicab company without being shackled by taxicab regulation.

The company takes about a fifth of the cost of a trip, leaving the rest as gross revenue to the driver. The trip cost itself is calculated much as a taxi fare would be – distance-based unless the car is moving at less than 11 miles per hour, in which case it is time-based. Uber’s driver compensation incorporates a tip, and the passenger advisory declares tipping unnecessary. This muddies direct comparison with taxi fares, particularly for Uber’s high-end livery-like town-car trips. Most of the customary problems faced by both driver and passenger are eliminated or greatly reduced. For example, there is very little incentive to rob an Uber driver, since drivers do not collect the revenue (even tips) for their trips. Not surprisingly, Uber has attracted imitative competitors like the ride-sharing services Lyft and Sidecar.
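The fare mechanics can be sketched in a few lines. Only the 11-mph threshold and the roughly one-fifth commission come from the text; the per-mile and per-minute rates are invented for illustration:

```python
# Fare logic as described: distance-based while the car moves faster
# than 11 mph, time-based at or below that speed. The $2.50/mile and
# $0.50/minute rates and the exact 20% commission are hypothetical.

def fare_segment(miles, minutes, per_mile=2.50, per_minute=0.50):
    """Charge by distance above 11 mph, by time at or below it."""
    speed_mph = miles / (minutes / 60.0) if minutes else float("inf")
    if speed_mph > 11.0:
        return miles * per_mile
    return minutes * per_minute

def split(total_fare, commission=0.20):
    """Company keeps about a fifth; the rest is driver gross revenue."""
    return total_fare * commission, total_fare * (1 - commission)

fast_leg = fare_segment(5.0, 10.0)   # 30 mph -> distance-based: $12.50
crawl_leg = fare_segment(0.5, 6.0)   # 5 mph  -> time-based: $3.00
company_cut, driver_gross = split(fast_leg + crawl_leg)
```

Switching to a time-based meter at crawl speeds keeps the driver compensated in gridlock, which a purely distance-based fare would not.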

It would seem, then, that this innovative new form of competition to traditional taxicab service is a boon to consumers. Since the whole purpose of economic activity is to benefit consumers, that change is a good thing. Since the purpose – the ostensible purpose – of regulation is to make things better, regulators should welcome this change with open arms. How do the noble, heroic, altruistic, tough-minded municipal civil servants feel about Uber and its ilk?

Why, they hate it, of course.

The Empire Strikes Back Against Uber, et al

“I’m hoping that people will…pay attention to what this actually is, which is an attempt to deregulate the taxi industry.” That is the view of Matthew Daus, former chairman of New York City’s Taxi and Limousine Commission, as quoted in a profile of Uber in Bloomberg Businessweek (2/24-3/02, “Invasion of the Taxi Snatchers,” by Brad Stone). His opinions were apparently shared by officials in Miami and Austin, TX, where Uber was prevented from operating by regulators. Perhaps because New York City has long been profiled in so many unfavorable examinations of taxicab regulation, the Bloomberg piece focused particularly on Uber‘s operations in San Francisco.

Despite an increasing population (up by 300,000 in the last decade), San Francisco “has long capped the number of taxi medallions.” Members of the taxi cartel “didn’t seem to care about prompt customer service since they make money primarily by leasing their cars to drivers” and extracting the monopoly rents embedded in the medallions through the fees charged for lease, dispatch and phone-order services provided to drivers. (The only way drivers could afford weekly lease fees of $400 or more was by reaping the benefits of monopoly taxi fares.) The article quotes the acerbic view of David Autor, MIT economics professor, that the industry is “characterized by high prices, low service, and no accountability. It was ripe for entry because everybody hates it.”

Well, that certainly doesn’t sound like an idyllic regulated industry, does it? It sounds more like taxi operations in New York City; indeed, like the typical regulated industry anywhere. Why, then, are regulators so averse to change? Why are they dead set against interlopers like Uber?

Innovation implies change. Change means doing things differently. That means that either the people doing them now must change – or new people must do them using the new methods. And that makes the incumbent doers unhappy, since they lose their jobs or suffer a loss of income or both. In this case, the disaffected include taxi-company owners and employees and taxicab drivers.

The Bloomberg piece is replete with complaints. Some complainants are cab drivers, who “complain that they can no longer pick up riders in the city’s tonier neighborhoods.” They stare down Uber and ride-sharing drivers, block them in traffic and confront them at airport terminals. “I’ve made it my personal mission to make it as difficult as possible for these guys to operate,” vows a director of the San Francisco Cab Drivers Association.

Taxi companies play a harder game of ball. “In Boston and Chicago, taxi operators have sued their cities for allowing unregulated companies to devalue million-dollar operating permits.” Uber has faced lawsuits and regulatory objections in San Francisco, New York City and virtually everywhere else it has been. Taxi companies “accuse Uber of risking passengers’ lives by putting untested drivers on the road, offering questionable insurance, and lowering prices as part of a long-term conspiracy to kill the competition, among other transgressions.”

Of course Uber is lowering prices – that is what competition is all about, reducing monopoly prices and benefitting consumers. Uber lacks the ability to “kill competition,” as the entrance of various competing ride-sharing services proves. As for killing its own customers – well, Uber’s interest hardly lies in attracting big-dollar liability lawsuits. Nothing could be more slipshod than traditional taxi regulation: its taxi inspections and employee background checks were laughably lax; for example, it is a truism that driving a cab is often the only job a convicted felon can get. Monopoly taxi firms have much less incentive to keep passenger interests in mind than Uber does today.

Regulators themselves feed off the complaints of incumbents because they need a constituency to protect. By protecting the interests of incumbents, they are really protecting their own interests. If the regulated go out of business, then regulators will have nobody to regulate and no justification for retaining their jobs and incomes. Notice the tacit premise in the quoted passage objecting to Uber‘s hidden agenda of deregulation; namely, that deregulation is unthinkable.

But what about consumer complaints? They were legion under regulation. Have they vanished with the advent of competition?

How Innovation Solves Problems that Regulation Doesn’t

The principal complaint is lodged against Uber‘s “surge pricing,” which “jacks up” (Bloomberg’s wording) prices in peak times like rush hours. While prices are announced to customers in advance, that doesn’t forestall grumbling or accusations of “exploiting customers.”

For decades, academic economists pushed this very type of “peak-load pricing” for regulated electric utilities, which experience the same peak loading problems that taxi firms do. The idea was to persuade customers to use less electricity during peak times and spread their use more evenly throughout the day.

The same logic applies to taxi demand; ironically, consumer complaints show that demand is indeed sensitive to price on-peak. But Uber‘s real innovation is that surge pricing has solved one of the age-old supply problems of taxicab operation: how to “produce” more drivers in bad weather, evening rush-hour and late night peaks. Bloomberg interviewed a dozen drivers and found agreement that the higher prices attracted drivers magnetically to the street just at the time they were most needed. The peaking problem is that the quantity of taxi services demanded greatly exceeds the quantity supplied. Surge pricing significantly reduces the former and greatly increases the latter. Voila! Problem solved in less than four years. This is something taxicab regulation never succeeded in doing – hardly even attempting – over a century of existence.
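The logic of surge pricing is ordinary supply and demand. A minimal sketch with invented linear schedules, showing the peak-hour shortage shrinking from both sides as the price rises:

```python
# Illustrative linear demand and supply for rides during a peak hour.
# All coefficients are hypothetical; the point is only that a higher
# price closes the shortage from both directions at once.

def quantity_demanded(price):
    return max(0.0, 1_000 - 40 * price)   # riders balk as price rises

def quantity_supplied(price):
    return max(0.0, 20 * price - 100)     # drivers come out as price rises

base_price, surge_price = 10.0, 18.0
shortage_at_base = quantity_demanded(base_price) - quantity_supplied(base_price)
shortage_at_surge = quantity_demanded(surge_price) - quantity_supplied(surge_price)
# With these numbers the shortage falls from 500 rides to 20.
```

A regulated fare frozen at the base price leaves the 500-ride gap in place permanently, which is the peaking problem taxicab regulation never solved.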

This is just the start of competition’s problem-solving career in taxi-type services. Uber CEO Travis Kalanick envisions “a dense network of Uber cars in every city,” which could be used “to deliver such things as packages from online stores and takeout food. (The company delivered flowers on Valentine’s Day.) Uber could one day even allow other companies – say, a laundry pickup startup – to use its fleet.” Eventually, the widespread coordination of private cars for multiple purposes could vastly cut down on the number of autos and fuel use and improve the efficient use of existing transportation capacity. Again, this can only be accomplished via a competitive pricing mechanism; it is something that government regulation has never come close to achieving or even contemplating.

The (Non-) Case for Taxicab Regulation

Regulators take it for granted that taxicabs must be regulated by government. For decades, New York City has been warning its residents against using the vast supply of illegal, unregulated “gypsy” cabs that prowl the streets. Customers are risking their money and their very lives, warn the regulators darkly. But the history of taxi regulation does not support the necessity of regulation.

Government regulation of business began at the state level in the second half of the 19th century with regulation of grain elevators. In 1887, the Interstate Commerce Commission was created to regulate railroads. Letters exchanged by the wealthy railroad owners at the time indicate that they intended to use the ICC to cartelize the industry, and that is indeed what happened. In the early 20th century, trucks were regulated because their competition threatened the freight-carriage business of railroads. Taxicabs were regulated because they threatened the business of streetcars, which lingered into the mid-20th century despite their technological obsolescence.

The pattern is clear. Regulation occurs not to cure the evils of competition but to protect incumbents from the effects of competition. In the vernacular of economics, regulation and competition are substitutes, not complements. This gives the lie to the pretense that regulation is supposed to polish and buff away the excesses, evils, flaws, mistakes and unsightly features of competitive capitalism. The purpose of regulation is to replace free-market capitalism with government control of markets.

To illustrate the flimsiness of the regulatory case, consider the “argument” advanced for taxicab regulation in the Bloomberg piece by – of all people – a professor of economics at Northwestern. “Traditionally, we had to have price regulation in cabs because when you are hailing a cab or standing in a taxi stand, you had to take the first car and you didn’t know the price in advance. You could be exploited.” Of course, all that price regulation does is to require all cabs to charge the same price – it doesn’t, in and of itself, inform the customer what that price is. That is accomplished by painting the fare on the outside of the cab. But that could be done under competitive pricing, too – and was in deregulated markets like Kansas City in the 1980s! In essence, the Northwestern economist was saying that a high monopoly price wasn’t “exploitation,” but the chance that a consumer might not know all possible prices charged by all companies was – even though most consumers undoubtedly don’t possess perfect knowledge of all prices. For this we need tenured professors of economics?

The Past and the Future

The history of taxicab regulation suggests that regulation produces bad current outcomes. But the inherent logic of regulation suggests that its effect on the future is just as bad through its discouraging impact on innovation. According to Bloomberg, “there’s a battle for the future of transportation being waged outside our offices and homes” involving “Uber and a collection of startups.” If regulators succeed in killing Uber and its imitators, their vast potential for economic growth will die in infancy.

The next EconBrief will review various new products and industries whose innovative benefits are similarly threatened by government regulation.

DRI-313 for week of 3-9-14: Economic Rewind: What Went Wrong With America: What Went Wrong?

An Access Advertising EconBrief:

Economic Rewind: What Went Wrong With America: What Went Wrong?

Steadily increasing technological innovation carries with it a growing focus on the future. Looking backward nowadays tends to be more and more an occasion for nostalgia. This denies us a valuable tool. Reviewing yesterday in light of subsequent events and discoveries can improve our navigation through the future.

In 1991, the Philadelphia Inquirer published a nine-part series of articles by its star investigative reporting team, Donald L. Barlett and James B. Steele. The series was entitled “America: What Went Wrong?” The articles purported to show what had recently gone wrong with the country, roughly over the preceding decade. The popularity of the series encouraged the publication of the collected articles in book form under the same title. The book became a runaway New York Times bestseller. The authors, who had both previously won two Pulitzer Prizes for investigative reporting, gained national fame and climbed the career ladder to high-level writing positions at Time Magazine and Vanity Fair.

One motive for reinvestigating this subject matter is immediately apparent. In retrospect, the suggestion that America went to hell in a hand basket during the 1980s seems decidedly eccentric, to say the least. Although the nation began the decade in a mood often characterized as “malaise,” dramatic changes in economic policies fostered by President Reagan, Federal Reserve Chairman Paul Volcker and a small number of maverick economists ushered in an economic boom that eventually became the longest peacetime economic expansion in American history up to that point. True, there was a mild recession in 1991, the year of the Philadelphia Inquirer series, but this was followed by an even longer period of expansion.

It is possible to formulate a theory that bad things happened in the 1980s and subsequently. Former Reagan administration official David Stockman developed this thesis in his recent lengthy memoir. But Stockman didn’t portray those years as years of deprivation and despair; rather, he maintained that the underlying foundations of the boom were built on excessive government spending. The good times were good enough while they lasted, in other words, but were destined to end badly.

This was most definitely not the picture painted by Barlett and Steele. It behooves us to closely examine their bestselling book today. The perspective of time enables us to appreciate how bizarre and wrongheaded their thesis was. Our economic focus will show how completely wrong their conclusions were. And, most startling of all, we will realize that the two most celebrated investigative reporters of their day apparently failed in their duty to their readers and their profession.

Readers who remember the powerful impact Barlett and Steele’s book had over twenty years ago and who are awed by their current eminence will find the foregoing contentions incredible. They should swallow their incredulity and read on.

BS on “The High Cost of Deregulation”

Barlett and Steele (hereinafter shortened to BS) organized their book into ten chapters (one more than the original number of articles). We will focus on the most economics-intensive chapters, as determined by the subject matter. Chapter 6, entitled “The High Cost of Deregulation,” is probably the narrowest in its concentration on pure economic subject matter.

We can infer that the chapter deals intensively with economics from the fact that an economist is quoted in it. You would never learn that from the chapter itself, though. BS quote “Darius W. Gaskins, Jr., former Chairman of the ICC.” But they do not bother to tell their readers that he is economist Darius Gaskins, a leading specialist in the field of Industrial Organization, who was appointed Chairman of the ICC by President Carter specifically because of his expertise in trucking regulation.

Most of the chapter deals with the deregulation of the trucking industry that began in 1978 and resulted in the passage of the Motor Carrier Act of 1980, which deregulated the substance although not the technical form of pricing and entry into interstate trucking. This is odd. The introductory page (“What Went Wrong”) previews the topic: “Thousands of firms gone. 200,000 jobs lost. Deregulation has been costly to workers and consumers alike.” It continues by citing the costs of the savings and loan cleanup and claiming that “now the push is on to deregulate the banking industry.” Yet the chapter itself deals almost completely with the two big deregulatory efforts of the 1980s – trucking and commercial airlines.

The headline of the section on trucking deregulation is “Wrecking Industries and Lives.” The inference is clear: the trucking industry was “wrecked” by deregulation and lives were destroyed as a direct result. How was the industry wrecked?

Here is the BS explanation: “Since deregulation of the trucking industry in 1980, more than 100 once-thriving trucking companies have gone out of business. More than 150,000 workers at those companies lost their jobs.” They provide a full-page table headed “The Collapse of the U.S. Trucking Industry.” It lists “the top 30 trucking firms of 1979,” most of which are “gone.” More precisely, 17 had folded, 3 merged and 10 still operated.

How did this destruction occur? The BS view is that deregulation was based on false premises: “Removing government restrictions on the private sector would let free and open competition rule the marketplace. Getting rid of regulations would spur the growth of new companies. Existing companies would become more efficient or perish. Competition would create jobs, drive down prices and benefit consumers and businesses alike…That’s the theory. The gritty reality, as imposed on the daily lives of the men and women most directly affected, is a little different.”

BS illustrate the difference between the promise of deregulation and its actual performance with anecdotes. A trucking-company employee was stricken with a “rare bone cancer.” His company went bankrupt. He lost his medical insurance and could not pay for his medical treatment. (Oddly, BS say that the company’s checks paying for the treatment “began bouncing.”) The treatments apparently continued until the man’s death, but his wife suffered harassment by the collections department of the hospital.

A woman worked for nineteen years as an accountant for a trucking firm. It went out of business. She took part-time jobs at a lower salary. Another woman suffered the loss of her husband, who was killed in a highway accident. Rather than paying her out of the state worker’s compensation fund, regulators allowed the trucking company to pay her directly. But “deregulation was helping drive it…out of business.” After bankruptcy, the woman’s checks stopped and she became an unsecured creditor.

These three anecdotes are drawn out and decorated with detail to stress the suffering of the principals. Likewise, the good old days of regulated trucking are fondly recalled. “My uncle was a truck driver twenty years ago and, wow, he made a lot of money…drove a nice car…had a nice house.” “Trucking was a good job in those days…the pay was good, it was steady work.” The contrast with deregulation is stark. “Deregulation has been a nightmare…now, drivers are struggling to survive.” “For truckers, the 1980s were a dismal time.”

The BS Theory of Deregulatory Disintegration

It is one thing to claim that things are bad under deregulation but another to actually state a logical chain of reasoning to underpin the disintegration. Why did things go bad? Why must it be inevitable that deregulation is doomed to failure while regulation should succeed? The closest that BS come is to insist that government statistics do not tell the true story, that deregulation causes wages to spiral downward and “good, middle-class jobs” to disappear and that deregulation produces financial disaster for the industry.

BS actually cite a Brookings Institution study claiming a $20 billion net benefit from trucking deregulation. (They say nothing about who did the study, how it was done or the nature of the benefit.) They are willing to concede that “companies that hire truckers have profited from lower rates” but maintain that “there are no economic data showing that the cost savings have been passed along to consumers.”

BS admit that “according to the Bureau of Labor Statistics…between 1980 and 1990, the number of employees increased 248,000 [and] average yearly earnings went from $18,400 to $23,400” in the trucking industry. But the good jobs, those with seniority, high wages, benefits and health insurance, went down the tubes along with the 100 big trucking firms that failed. In their place came a raft of “one-owner, shoestring trucking operations.” Deregulation “eliminated two jobs that paid, say, $30,000 and created three jobs that paid $20,000 or less.” The $23,400 earnings figure is misleading because “the government excludes one major category of drivers from its figures – self-employed drivers. And their earnings are generally lower than those for drivers employed by major companies.”

BS spend the best part of seven pages decrying the particular efforts that failing companies employed to stay afloat. These included mergers, leveraged buyouts and particularly the use of employee stock ownership plans (ESOPs) as a means of raising capital by selling ownership in the company to its employees. “Since 1980, more than two dozen ESOPs financed by worker wage cuts have been adopted by large trucking companies. With few exceptions, the companies failed anyway.” BS wax especially indignant at the activities of one (1) financier who owned an S&L as well as a trucking company and was eventually convicted of fraud and sentenced to twenty-four years in jail.

Such is the BS theory of deregulatory disintegration. On regulation, BS are even vaguer. They see it as a kind of mediative or ameliorative process, apparently intended to smooth out (or saw off) the rough edges of the competitive process. BS drop stray hints that regulation was not perfect. (“Undoubtedly, the ICC, like its counterpart in the aviation industry, the Civil Aeronautics Board (CAB), had stifled competition and discouraged innovation…If the ICC had been guilty of overregulation in the past…”) But deregulation was not the answer; instead, policymakers should have repaired regulation. (“Rather than correct the defects in the regulatory system, Congress chose instead to throw it out…It was somewhat akin to eliminating the referees in a football game because of flawed calls, instead of merely replacing them.”) Do BS mean that regulators should have been replaced? Of course, civil-service regulations make it difficult, if not impossible, to replace anybody below cabinet level for political or policy reasons. Higher-ups usually get replaced anyway as administrations change.

In any case, as we shall see below, virtually everything BS say in this chapter is wrong, misleading or irrelevant.

The Truth About Trucking Deregulation

That this account of trucking deregulation could appear in a major metropolitan newspaper under the guise of investigative reporting, then subsequently form the basis for a non-fiction bestseller, is astonishing. America: What Went Wrong was subsequently named one of the 100 leading pieces of investigative journalism in the 20th century by New York University’s Department of Journalism. This gives it a status approximating that of Piltdown Man in the scientific world. We can begin to appreciate this by methodically recounting the truth about trucking deregulation, as revealed by the logical and empirical tools of economics.

As with all forms of industrial regulation, the roots of trucking regulation have been meticulously traced. It did not arise owing to a spontaneous, grass-roots demand by the public. Railroads resented the competition for freight carriage presented by trucks during the 1920s and 1930s. They strenuously lobbied state regulators and the ICC. In 1935, the ICC subjected the interstate trucking industry to tight regulatory control governing entry into the market and prices charged by trucking firms to shippers, as well as safety standards of operation. Meanwhile, each state established analogous control over intrastate trucking.

The essence of entry control was the certificate of convenience and necessity, which required would-be new entrants to prove to regulatory authorities that their new service was both necessary and appropriate. (Incumbent firms were mostly grandfathered into the business without having to meet the requirements.) The regulatory authorities included current active truckers, so would-be entrants were begging their competitors for permission to compete with them. Not surprisingly, their entreaties were seldom heeded. Applications for new service by certificated firms were given precedence over those by new entrants.

Prices were submitted to regulatory authorities at least 30 days in advance of their effective date. Any interested party could view them and object to them. Once again, this was a virtual bar to any effective form of competition, since objections would give rise to a bureaucratic investigation and competitive gains would be eroded or eliminated altogether by the investigative costs.

The rules governing service authority under regulation were staggeringly inefficient. Because the chances of passing muster for a certificate of convenience and necessity were virtually nil, the only way companies could enter the business in practice was to buy the authority to provide trucking service to a particular route. This cost hundreds of thousands of dollars (in today’s money, the equivalent of millions of dollars). The lack of price competition and difficulty of competitive entry meant that trucking companies earned monopoly profits. The only way an existing trucking firm would sell its authority to operate was if it were compensated for the loss of those monopoly profits in the sale price; that is, the price reflected the discounted present value of anticipated future monopoly profits. This is directly analogous to the sky-high prices received for the sale of taxicab medallions in places like New York City, where entry and price competition were similarly restricted. Unlike the taxi case, though, trucking-company owners had to split the monopoly profits with unionized company employees, who made up about 60% of large-company workers. (Deregulation was vehemently opposed by both the Teamsters Union and the American Trucking Associations, which included the large trucking firms.)
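The pricing of operating authority described above is a standard discounted-present-value calculation. A minimal sketch, using purely hypothetical figures (the article gives no specific dollar amounts for any particular route):

```python
# Why regulated route authority sold for hundreds of thousands of dollars:
# its sale price approximates the discounted present value of the monopoly
# profits the buyer expects to earn. All figures below are hypothetical.

def present_value(annual_profit, discount_rate, years):
    """PV of a stream of equal annual profits: sum of profit / (1+r)^t."""
    return sum(annual_profit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# A route expected to yield $100,000/year in monopoly profit for 20 years,
# discounted at 8%, is worth nearly $1 million today -- which is why no
# incumbent would sell its authority for less.
price_of_authority = present_value(100_000, 0.08, 20)
```

The same logic explains New York taxi medallion prices: the asset price capitalizes the expected stream of monopoly profits, so deregulation (which destroys those profits) destroys the asset's value.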

If a trucking firm managed to raise the money to pay the price of entry, it still faced barriers to efficient operation. Consider the case of a firm serving the route from Cleveland to Buffalo under regulation. Let’s say it purchased the right to a route from Buffalo to Pittsburgh. Could it carry freight traveling from Cleveland to Pittsburgh? No, it could not travel straight between those two cities because it had no authority to serve that route; it had to travel many miles out of its way by going through Buffalo in order to serve Pittsburgh from Cleveland. And what about the return trip? As every commercial driver knows, “deadheading” or returning empty is mind-bogglingly inefficient. But that’s what trucks had to do under regulation.

How inefficient was regulation? Certain commodities were exempt from regulated carriage, and their rates averaged 20-40% lower than regulated rates. Dressed poultry was exempt; its rates were 50% below that of regulated cooked poultry. Trucking rates in West Germany and the U.S., which had similar forms of trucking regulation, were 75% above those in unregulated Great Britain.

Does something seem not exactly copacetic about this arrangement? So it must have seemed to Congress, which passed the Reed-Bullwinkle Act in 1948 exempting motor carriers from the anti-trust laws. Thus, the ICC effectively functioned as a cartel allowing trucking-industry firms to act collectively as a monopoly and the firms were immunized against the legal consequences of doing that by act of Congress.

How much of this did BS tell their readers? Absolutely nothing.

Deregulation was proposed by John F. Kennedy in 1962. It was proposed again by Gerald Ford in 1975. It was backed by economists who studied the industry virtually from the outset of regulation in 1935 until 1978, when Senator Edward Kennedy sponsored the enabling legislation signed by President Jimmy Carter in 1980.

BS’s piece on deregulation appeared in October, 1991; their book appeared in Spring, 1992. By then, careful economic studies on the effects of trucking deregulation had already appeared. Since then, of course, even more empirical work has been done confirming and strengthening the initial results.

Economist Thomas Gale Moore published an early study in 1987 and later contributed two summaries, one to the Fortune Encyclopedia of Economics and one to its successor, the Concise Encyclopedia of Economics. He found that average rates for (full) truckload shipments declined by 25% between 1977 and 1987 and LTL (less-than-truckload) rates fell by 10-20% between 1979 and 1986. Overall, revenue per truckload ton fell by about 22%. The cause of the decline in prices and revenue was the increase in price competition caused by the tremendous influx of new entrants into the market, which broke up the monopolies exerted by the big trucking firms and destroyed their monopoly profits. It also produced an increase in trucking output. By 1990, there were about 40,000 truckers in the U.S., more than twice the number operating under regulation. (BS themselves recognized both an increase in truckers and an 11% increase in trucking output, which they derided as “too many trucks…suddenly chasing too little freight.”)

In surveys, 77% of shippers approved of deregulation. Official complaints by shippers against trucking firms fell to less than 10% of their former levels under regulation. The Department of Transportation conservatively estimated the gains from trucking deregulation at $38–56 billion per year.

One of the major unanticipated gains was the role played by deregulation in adoption of the “just-in-time” (JIT) system of inventory control. JIT is now recognized as a key prophylactic against the fate suffered by the U.S. economy in 1920, when the large inventories accumulated by firms produced a short but extremely sharp recession. The tremendous improvement in trucking efficiency produced by deregulation allowed business firms to cut their level of inventories from 14% of GNP in 1981 to 10.8% in 1987.

BS noted with disapproval the role played by Senator Kennedy in deregulation. Other than that – and the derogatory reference to the increase in trucking output – they told their readers absolutely nothing of the true genesis and effects of trucking deregulation.

What Did BS Owe Their Readers? What Did They Deliver?

Readers may be a bit dizzy at this point. Can it really be true that the two leading investigative reporters of all time – ranking above even Woodward and Bernstein in the esteem of many commentators – could have produced an article so completely lacking in merit? After all, BS are not themselves economists; they were only reporters and never pretended otherwise. Is this EconBrief holding them to an impossibly high standard?

The following compares what the readers of BS had every right to expect with what BS delivered.

Journalistic Integrity. A reporter is not an editorialist. He is not supposed to state unsupported opinion. He is supposed to seek out expert, authoritative opinion whenever possible – not set himself up as an expert. He should cite his sources of information or state that their anonymity is being preserved. An investigative reporter is supposed to learn the facts and present them, not create a story to fit his preconceptions. The reporter’s opinions are irrelevant to the story.

Economists are the authoritative experts on the organization of industry, consumer benefit and government regulation. They offer university courses on these subjects, provide expert testimony in regulatory and judicial proceedings and advise government in official and unofficial capacities. They offer the only formal theory of human behavior dealing directly with these issues.

BS made no visible effort to consult expert, authoritative opinion. They cited no economist. They quoted one economist, but hid his economic credentials from their readers. They did not understand the economic theory and logic underpinning their subject. They did not understand the implications of the scanty data they did cite on the subject. They operated on the basis of an apparent, implicit “theory” concerning the trucking market and its deregulation. That implicit theory bore no relationship to reality or to economic logic.

BS violated most of the canons of sound reporting. They adopted an advocate’s role with no factual or logical basis underlying it. The case they presented to their readers was incoherent, resting on illogic and non sequitur. Thus, they relied entirely upon eliciting an emotional reaction from their readers. Whatever else this is, it is not good journalism.

Analytical Coherence. BS paint a picture. In order to be worthwhile, that picture must be coherent. Otherwise, it has value only as a kind of surrealist exercise. “The High Cost of Deregulation” fails that test completely. The very title is a misnomer. The word “cost” – like all terms used by economists – is a term of art. It denotes opportunity cost, the highest-valued alternative foregone. BS assumed the burden of proving that (say) the failed businesses were in fact in their highest-valued use, or that employees had greater productivity in the job they lost than in their subsequent employment. They didn’t do this and made no attempt at it.

BS want us to presume that an industry is “wrecked” when large businesses fail and people lose their jobs. But business failure happens daily throughout the economy. A majority of small businesses fail within five years of inception, but their industries survive. Large numbers of restaurants fail and their employees must seek out new jobs, while others prosper and create wealth for their owners and employees. That is how the competitive process learns exactly what buyers want and weeds out inefficient suppliers. Considering that “wants” and “efficiency” change frequently, successful competition is something we should cherish, not deplore.

BS invert the stereotypical casting of the left-wing business morality play. Here, it is the big, bloated corporations that are heroic because they provide their doughty workers with “good, middle-class” wages and safe jobs. They are undone by the hordes of evil, one-owner small businesses that invade the industry when unleashed by deregulation. But economics is not a morality play. Trucking deregulation was beneficial because it replaced a monopolistic cartel with competition. The beneficiaries of monopoly were made unhappy by deregulation. But their satisfaction did not take precedence over that of workers, business owners and consumers who benefitted from it.

The trucking industry was not “wrecked.” Numbers of trucking firms increased dramatically. Today, trucking carries roughly two-thirds of all freight moved in the U.S. The only problem with deregulation was that it was not complete; it did not extend to intrastate trucking and it only affected pricing and entry, not the remaining aspects of the business. BS complain that deregulation was based on a false premise, yet everything that BS say did not happen under deregulation – free and open competition, entry of new firms, removal of burdensome regulations, lower prices, more output, job creation, benefits to consumers and businesses alike – did happen.

Incredibly, BS insist that there is no “data” verifying that the “lower rates” produced by deregulation were “passed along to consumers.” Apparently, BS were expecting to find an actual government data category reading “price decrease to consumers caused by deregulation,” or some such. If the “rates” refer to freight rates, that is what happens by definition; that is what is measured by the benefit studies done by Brookings, Moore, et al. If “rates” refers to wage rates, it is again true by definition. After all, deregulation abolished the monopoly cartel run by the ICC, it did not introduce monopoly. There is no mechanism by which sellers can benefit from lower wages created by competition and then arbitrarily keep all the gains for themselves. If they believe otherwise, BS should go start a trucking firm and utilize that mysterious arbitrary power to earn profits that will prove their case.

BS also insist that government data showing a $5,000 yearly increase in average yearly earnings for truckers is “misleading” because deregulation “eliminated two jobs paying $30,000 and created three jobs paying $20,000 or less.” But their own arithmetic doesn’t even support their argument; it produces stable or only slightly lower total earnings spread across a larger number of jobs, not a collapse. And their claim that government data excludes self-employed truckers does not support their case, either – deregulation destroyed the cozy monopoly combination between big trucking firms and unions, thereby freeing up the market and enabling small-business truckers to increase their incomes.
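The arithmetic in BS's own hypothetical is easy to check; these are their illustrative numbers, not actual labor-market data:

```python
# Checking BS's own example: "eliminated two jobs that paid $30,000 and
# created three jobs that paid $20,000 or less."

jobs_lost, wage_lost = 2, 30_000
jobs_gained, wage_gained = 3, 20_000

total_before = jobs_lost * wage_lost      # $60,000 paid to 2 workers
total_after = jobs_gained * wage_gained   # $60,000 paid to 3 workers

# Total payroll is unchanged (or lower only via the "or less"), while one
# more person is employed. Average earnings per job fall from $30,000 to
# $20,000 -- but that is wage dispersion across more jobs, not destruction.
net_jobs_created = jobs_gained - jobs_lost
```

On BS's own numbers, deregulation leaves total earnings roughly constant and employment higher, which undercuts the "wrecked industry" framing the example was meant to support.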

BS rest their anti-deregulation case on anecdotal misfortunes suffered by workers. But in every case, deregulation was a non sequitur, not the cause of the misfortune. Loss of health insurance in bankruptcy is caused by the linking of insurance coverage to employment, which is the major cause of the crisis in health care generally. Deregulation is merely one of a myriad of factors that bring this problem to the surface. By her own admission, the fifty-nine-year-old accountant who lost her job chose to limit herself to part-time employment thereafter. Substitution away from worker’s compensation was done by regulators; there was no “deregulator” who made the decision that the widow’s compensation would be paid by the failing firm rather than the state fund. ESOPs originated in 1974, well before deregulation. The same is true of the other tactics used by struggling firms. Indeed, the fraud and safety derelictions complained of by BS were failures of regulation, not deregulation; these areas were not deregulated. BS are blaming the invisible hand of deregulation because the visible hand of regulation failed in its explicit duties.

There is nothing left of the BS case. Every single point made against deregulation by BS was analytically wrong, misleading or irrelevant.


The Implications of BS

Since 1992, the world of journalism has undergone earthshaking change. The print newspaper business has become an endangered species. The conventional thinking ascribes this fall to the rise of the Internet. But the decline in newspaper circulation began before the rise of the World Wide Web.

Up to this point, our focus has been purely analytical. Now it becomes speculative. An educated conjecture is that newspapers throughout the land emulated the techniques of BS. Those were the techniques of “advocacy journalism,” which is a euphemism for disregarding the objective basis of reporting in favor of political partisanship. They can be summarized as follows.

“First, write the basic outline of your story – then research it. Your every action as reporter will serve your agenda. Objective fact will enter only as an incidental by-product in your story. Do not approach any sources whose views will dispute or even mitigate that story. Tell your story mainly in personal anecdotes. Use emotive language that paints a vivid picture for your audience. Limit yourself to short factoids that will intensify the picture you are painting. Make your story a morality play, a Manichean struggle between good and evil. Poor helpless victims grip the emotions of the audience and are ideal centerpieces for your story. Rich, powerful villains trigger anger in your audience and sway them in your favor.”

Roughly half the country gradually came to the realization that this agenda, not the precepts of journalism, now guided the reporting as well as the editorial policy of most major metropolitan newspapers. They gradually lost respect for, and enjoyment of, those papers, patronizing them only when absolutely necessary. The rise of the Internet made their decision much easier and speeded their transition away from print media, but it was not actually the decisive factor in that move.

Has this speculative addendum given BS less than their due? No, it has done them a favor. Otherwise we would regard their work simply as hopelessly incompetent. The next EconBrief will continue to analyze this watershed work in the decline of American journalism.

DRI-270 for week of 10-6-1: How is Job Safety Produced?

An Access Advertising EconBrief:

How is Job Safety Produced?

The best-selling book on economics in the 20th century was probably Free to Choose, the 1980 defense of free markets by Milton and Rose Friedman. It contained a chapter entitled, “Who Protects the Worker?” In it, the authors highlighted the tremendous improvement in the working conditions and living standards of workers from the Industrial Revolution onward. What, they inquired rhetorically, accounted for this? The Friedmans suggested “labor unions” and “government” as the likely top two answers to any poll taken on this subject.

One of the nation’s leading experts on the subject of risk and safety is W. Kip Viscusi, long an economics professor at Harvard, Duke and Vanderbilt universities and now affiliated with the Independent Institute. In an essay on “Job Safety” for the Fortune Encyclopedia of Economics, Viscusi wrote: “Many people believe that employers do not care whether their workplace conditions are safe. If the government were not regulating job safety, they contend, workplaces would be unsafe.”

The Friedmans and Viscusi knew something that the general public doesn’t know about job safety; namely, that free markets and competition are what keep workers safe. The notion of a “market” for risk or safety seems hopelessly abstruse to most people. The general attitude toward competition can best be described as ambivalent. Still, it is the job of economists to make the complex understandable. Herewith an explanation of how job safety is really produced.

Compensating Differentials

Most people seem comfortable with the fact that wage differentials exist between jobs. Moreover, the direction of difference is not random. Different types of manual labor may differ radically in the element of physical risk to which workers are subjected – coal mining, for example, presents a much higher probability of death or severe injury than does loading-dock work. The greater the risk associated with an employment, the higher the wage its workers will command.

In free markets, wages result from the interaction of supply and demand. Does the “risk differential” reflect variations in the supply of labor or the demand for it? Either or both. The toll of current and future mortality in coal mining – from accidents and “black-lung” disease, respectively – tends to restrict the supply of labor to the profession, driving up miners’ wages on that account alone. Coal’s high-BTU content makes a miner’s output from a 40-hour workweek much more valuable than that of the dock worker, so the demand for coal miners exceeds that for dock workers – a factor also tending to drive miners’ wages above those of dock workers.

This logic extends to other characteristics of employment, beyond those ordinarily associated with risk. Library work is viewed as pleasant because of its low-key, low-stress, peaceful character and attractive environment. This attracts a plentiful supply of applicants for low-rung library jobs (pages and assistants) and the continuous pursuit of graduate degrees in library science (required for librarians). This bountiful supply of labor tends to depress wages within libraries below those of comparable other jobs, such as clerks, cashiers, tellers and such. The particular attractions of library work influence people to accept lower wages than would otherwise be acceptable – in effect, library employees receive part of their payment in kind rather than in cash.

Economists use the term compensating differentials as shorthand to denote and explain differences in work-related remuneration that compensate for differences in how different jobs are perceived or experienced. The phenomenon was first observed and categorized by the great Adam Smith in 1776 in his magnum opus, An Inquiry into the Nature and Causes of the Wealth of Nations. Smith observed that positive wage differences would exist for occupations that were dirty or unsafe, such as coal mining or butchering, those that were odious, such as performing executions, and those that were difficult to learn.

Compensating differentials play a key role in job safety. Opponents of markets – who tend to be the same people promoting government regulation of job safety – insist that employers are too parsimonious to spend money on job safety. And why should they? From the employer’s standpoint, the anti-market man maintains, expenditures on job safety are a waste because they are a cost that does not contribute to the employer’s revenue. The compensating differentials argument supplies a potential motivation for the employer’s investment in safety. A safer workplace will increase the worker’s willingness to accept lower wages, thus allowing the employer to recoup his investment over time, just as if the investment allowed him to earn more revenue.

This positive incentive may have a negative counterpart as well. The ability of workers to sue in tort for injury provides an incentive to improve worker safety. (In this regard, Viscusi makes a vital distinction: firms must correctly understand and anticipate liability in order to feel this incentive. The most famous example is the massive asbestos litigation, in which longtime principles of tort liability were overturned in order to hold large companies like Johns Manville liable for worker illnesses contracted many years before the link between asbestos and mesothelioma was uncovered.)

The Market for Job Safety

The most common way of assessing risk is to calculate the approximate rate of death or injury per annum. For example, a job requiring physical labor entailing moderate risk might result in one death per 10,000 workers per year. Workers in this employment should expect to earn a modest premium – perhaps $500 – $700 per year – over workers doing labor involving essentially no risk of death. Another way to view this premium would be to call it the amount that workers would willingly give up to avoid the risk they bear.  And this amount also sets a ceiling on what employers would pay to improve jobsite safety, since any amount below this will save the employer money, while any amount above it will cost more in safety expenditures than the amount the employer could save in avoided wage premia.
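The arithmetic behind this ceiling can be made concrete. The sketch below uses the illustrative figures from the paragraph above (a 1-in-10,000 annual fatality risk and a $500–$700 annual premium); the 10,000-worker pool follows from the stated risk, and the “value of a statistical life” label is a standard economists’ construction, not a figure from the text.

```python
# Worked example using the illustrative numbers from the text
# (not empirical estimates): a job carries 1 fatality per 10,000
# workers per year, and workers in it earn a $500-$700 annual
# premium over comparable risk-free work.

annual_fatality_risk = 1 / 10_000        # deaths per worker per year
premium_low, premium_high = 500, 700     # annual wage premium, dollars

# Implied "value of a statistical life": the total premium paid to the
# pool of workers among whom one statistical death occurs each year.
vsl_low = premium_low / annual_fatality_risk
vsl_high = premium_high / annual_fatality_risk
print(f"Implied value of a statistical life: ${vsl_low:,.0f} - ${vsl_high:,.0f}")

# Ceiling on the employer's safety spending: eliminating the risk for a
# 10,000-worker workforce saves at most the total premium now being paid.
workers = 10_000
ceiling_low = premium_low * workers
ceiling_high = premium_high * workers
print(f"Annual safety-spending ceiling: ${ceiling_low:,.0f} - ${ceiling_high:,.0f}")
```

Note that the two calculations coincide: per statistical life saved, the employer’s spending ceiling just equals the value workers themselves place on the risk, which is the sense in which the market prices safety.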

The market for safety is one in which workers assess the risk characteristics of jobs they contemplate. Their assessment determines their willingness to work at that job and the wage at which they will work. It is obvious that the successful functioning of this market demands that workers correctly assess a job’s risk/safety profile.

“How well does the safety market work?” Viscusi asks rhetorically. “For it to work well, workers must have some knowledge of the risks they face. And they do.” [emphasis added] He cites one study showing that 496 workers correctly paired a higher risk of injury with a higher level of danger in their industry. Only 24% of workers in women’s outerwear manufacturing and communications equipment characterized their industry as “dangerous.” But 100% of workers in logging and meat products described their industry as dangerous – correctly, as it turned out.

Are workers ever wrong about the risks they face? Well, they sometimes mis-estimate the level of risk they face, not by assuming it to be zero but by wrongly assuming it to be higher or lower than it is. But the evidence strongly suggests that the market does work.

Another datum supporting this conclusion is the general reduction in job risk throughout the 20th century. As real income rose throughout the century, we would expect that workers would take some of their gains in the form of risk reduction; that is, they would deliberately seek out less job risk because the increase in real wages allows them this luxury. In effect, this implies that safety (or risk-reduction) is a normal good, something workers choose to “purchase” more of when their real incomes rise. In fact, that is exactly what did happen over time. Real wages roughly tripled from 1933 to 1970, and average death rates on the job fell from about 37 per 100,000 workers to about 18.

Still another aspect of the market for safety is the incentive it provides to learn. This applies to both employee and employer. Do workers keep track of new information developed about job-related safety hazards? Yes; the evidence for this is the high quit rate (about 33%) among workers who learn that job risk has risen since their initial hire. Since the hiring and training process is expensive for employers, this represents an incentive for them to hold down those risks.

Government Regulation as a Way to Improve Job Safety

To Americans under forty years of age, it must seem as though the federal government has always been omnipresent in economic life. Actually, the bulk of federal-government regulation is the legacy of two historical periods – the New Deal administration of President Franklin Roosevelt from 1933-1945 and the Great Society regime of President Lyndon Johnson from 1963-1968. Most of the non-financial regulatory apparatus, including the agencies dealing with health and safety, was created in the late 1960s and early 1970s. The publicity created by the muckraking exposés of consumer activist Ralph Nader played a key role in stimulating the implementing legislation for these agencies. (Viscusi’s career began with the two years he spent as an apprentice in the Nader organization prior to his academic training.)

In 1970, the federal Occupational Safety and Health Act created the agency known as OSHA (the Occupational Safety and Health Administration). The agency is an attempt to engineer a theoretically safe workplace and implement it by government fiat. This de-glamorized mission statement highlights the agency’s glaring flaw: the substitution of technological criteria for economic ones in solving economic problems. OSHA’s attempt to ban formaldehyde from the workplace in 1987 resulted in rulemakings that were estimated to cost $72 billion for each life they purported to save. To add insult to this grievous injury to economic logic, the U.S. Supreme Court ruled that OSHA regulations could not be subjected to any cost-benefit test, thus enshrining the agency’s right to commit acts of fiscal and economic lunacy with apparent impunity.

It seems difficult to believe that the judges could not envision the possibility that the $72 billion committed to saving that life had alternative uses that included saving multiple other lives. Yet the idea that the Constitution should codify any kind of respect for economic logic remains outside the legal mainstream to this day, despite the efforts of scholarly judges like Richard Posner and Frank Easterbrook to bring their substantial economic learning to bear.

Viscusi notes that “increases in safety from OSHA’s activities have fallen short of expectations. According to some economists’ estimates, OSHA’s regulations have reduced workplace injuries by at most 2 to 4%.” He compares the fines OSHA collects in the average year (about $10 million at the time Viscusi wrote) to the size of the aggregate risk premium embedded in U.S. wages (about $120 billion at that point). Obviously, the market for safety was disciplining employers and employees alike much more powerfully than OSHA.
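The disparity between the two figures Viscusi cites is worth making explicit. The sketch below is simple arithmetic on those two numbers as reported in the text, nothing more:

```python
# Magnitude check using the figures Viscusi cites (as reported above):
# annual OSHA fines vs. the aggregate risk premium embedded in U.S. wages.
osha_fines = 10e6            # roughly $10 million in fines per year
risk_premium_total = 120e9   # roughly $120 billion in aggregate wage premia

ratio = risk_premium_total / osha_fines
print(f"The market's risk premium is {ratio:,.0f}x the annual OSHA fines")
# prints: The market's risk premium is 12,000x the annual OSHA fines
```

A financial incentive four orders of magnitude larger than the regulatory penalty is the quantitative content of the claim that the market, not OSHA, was doing the disciplining.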

As the Friedmans pointed out, though, “government does protect one class of workers very well; namely, those employed by government.” Government employees have job security and incomes linked to the cost of living. Their civil-service retirement pensions are also indexed to inflation and superior to anything available from the Social Security system most Americans are tied to by law. Those government employees who retire early enough to log enough quarters of private-sector employment to qualify for Social Security benefits can “double-dip” from the government pension trough. Needless to say, this is not exactly the concept that OSHA, et al, were designed to further.

Labor Union Bargaining

The role of labor unions in securing improvements in job safety is limited to whatever provisions the union might succeed in embedding into negotiated labor contracts. Unions cannot add to the market incentives to improve safety – incentives that would exist whether unions existed or not. Indeed, if anything, the opposite is true.

Unions can succeed in raising the wage received by their members. They do this either by restricting the legal supply of workers to union members or by bargaining for a wage higher than the one that would otherwise prevail in a free market. Either way, the result of this above-market wage is unemployment of labor. To the extent that workers leave the unionized industry for employment elsewhere, the higher unionized wages are counterbalanced by lower wages elsewhere.

The wage premium for risk will represent a lower fraction or percentage of the higher, unionized wage than of the market-level wage. Thus, labor unions dilute or lessen the impact of wage premia in creating job safety for workers.
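A small numerical sketch shows the dilution. The dollar figures below are hypothetical, chosen only to illustrate the fraction argument; the text itself supplies no specific wages.

```python
# Illustrative sketch (hypothetical numbers): the same dollar risk premium
# is a smaller share of a bargained, above-market union wage than of the
# market wage, so the safety signal it sends is proportionally weaker.

risk_premium = 600     # annual wage premium for bearing the job's risk
market_wage = 30_000   # competitive annual wage (hypothetical)
union_wage = 40_000    # bargained above-market wage (hypothetical)

share_market = risk_premium / market_wage
share_union = risk_premium / union_wage
print(f"Premium share at market wage: {share_market:.1%}")  # prints 2.0%
print(f"Premium share at union wage:  {share_union:.1%}")   # prints 1.5%
```

The premium itself is unchanged; only its weight in the worker’s (and employer’s) calculations shrinks as the base wage is pushed above the market level.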

Risk Compensation

The most powerful development in the economics of risk and safety over the last four decades has been the recognition of risk compensation behavior as an offset to rulemaking by government. In the late 1960s, University of Chicago economist Sam Peltzman began to investigate federal-government automotive safety laws designed to force automobile companies to add safety equipment to cars.

The laws made no sense to him. He could see that car companies had incentives to add safety improvements to cars, provided customers wanted them – and he didn’t doubt that many consumers did. But he didn’t see why the companies had to be, or should be, forced to do something that might well be in their own interest anyway or, alternatively, might not make sense at all.

Peltzman’s research, summarized in a now-classic 1975 article in the Journal of Political Economy, found that the safety regulations did not improve safety on net balance. That is, they either failed to improve actual safety or the lives saved or injuries avoided were offset by other lives lost and injuries incurred because of the laws and safety measures taken.

The key overall principle at work was risk compensation. Safety measures like air bags, seat belts and anti-lock brakes made people feel safer. Consequently, the most risk-loving individuals drove faster and incurred more driving risk to offset the death-and-injury risk that had been reduced by the new safety measures and equipment.

Peltzman’s results were initially greeted with massive skepticism. But forty years of research have vindicated them resoundingly. The “Peltzman Effect” is now recognized worldwide by social and physical scientists. It has been verified empirically in research involving motorcycle and bicycle accidents as well as automobile crashes, and in such diverse fields as athletics, children’s play, recreational pursuits like skydiving and fields like insurance and finance.

Really, risk compensation is not nearly as counterintuitive as it seems upon first exposure. The logic of command-and-control government rules is that most people are mindless robots who are incapable of perceiving incentives, let alone acting in their own interest – but who are capable of following rules laid down by government.  Alternatively, they are docile enough to pay fines ad infinitum after racking up violations. The glaring exceptions are government rule makers, who are well-informed and well-intentioned enough to make the rules that the robots are supposed to follow.

Nothing about actual human behavior suggests that human beings conform to this model. Evidence clearly reveals human beings as rational, subject to the informational constraints under which they all labor. The idea that we react to the presence of rules that run counter to our predilections is fully consistent with this picture. It is perfectly clear why OSHA’s rules have “fallen short of expectations” – because OSHA failed to realize that when it forces people to obey rules against their will, it takes away happiness that people will strive to regain. That is true by definition; that is what “against their will” means.

The Common Sense of the Free-Market Approach to Job Safety

In free markets, workers demand a “wage premium” to reflect the degree of danger or “unsafety” they perceive in a job. They don’t “demand” it by walking into an employer’s office and banging on the desk – they don’t have to. They just work only when and where wages rise sufficiently to compensate them for the risk they bear. This voluntary approach allows the amount of work supplied to equal the amount employers seek at the equilibrium market wage. This contrasts with the approach of labor unions, which creates involuntary unemployment by insisting on a bargained, above-market wage and/or working conditions that employers would not voluntarily provide.

The common sense of the wage premium can be expressed in figurative terms: “In our (workers’) opinion, this wage premium reflects the degree of danger – above the norm or average – that we associate with this job. You (the employer) are free to make any safety modifications in the job or working environment that will cost less than this amount (in the aggregate), but beware of spending more than this. Meanwhile, we have freely chosen to accept the currently-existing risks – as we perceive them.”

This provides a framework for efficient improvements in job safety. Without it, we are left with vague, grandiose rhetoric about how “nothing less than absolute safety is tolerable for our workers” or “how would you like your son or daughter to work in such an environment?” That kind of nebulous talk is complete rubbish. Every human being willingly takes risks every day of his or her life, consciously or not. One of the most important parts of growing up is learning the risks of everyday life – which ones are necessary, which ones are reasonable and which ones are foolish. It makes no sense whatever to whine that people “shouldn’t have to risk their lives mining coal” so that “big corporations can make profits.” Does it make sense to say that people can willingly risk their lives climbing mountains, fighting bulls, racing automobiles, jumping out of planes, fighting fires and so on – but they can’t risk their lives to keep people warm in the winter? Free markets allow the individuals directly concerned – workers, employers and consumers – to gauge the risks and calculate which improvements in safety are worth making and which aren’t. They recruit the people most willing and able to bear risk by offering them a premium for their efforts. They warn the timid by differentiating jobs according to risk – if all jobs paid the same, a tedious and dangerous process of trial and error would be required to learn which jobs to avoid.

Contrast this reasoned, rational approach with that of government regulatory agencies. They substitute their own technological, engineering view of safety for the free-market approach and impose it on the public in the form of command-and-control, one-size-fits-all, take-it-or-leave-the-country rules and regulations. One might reply that engineers have a more informed view of safety than do workers and employers. Yet research by leading experts like W. Kip Viscusi shows that market wage premia closely track technological and ex post statistical measures of risk. And the government approach runs the risk of being skewed by politics; the regulatory agency’s objective studies may be overridden by a determination to please their bosses in the administration or Congressional patrons upon whom their funding depends.

The final word belongs to formal logic, which declares that there is no such thing as a pure engineering optimum in resource allocation. An engineer can determine (say) the optimum output from given inputs into a particular machine, but he or she can never determine the value to place on the inputs or output. Only producers and consumers can do that; that is why we need markets to solve economic problems. The formaldehyde case, noted above, shows the ghastly extremes to which engineers and bureaucracy can go when given free rein.

How is Job Safety Produced?

Our investigation reveals that job safety is produced primarily by free markets through wage premia and voluntary improvements in safety enacted by employers. It is produced much less efficiently, less productively and more haphazardly by government and even less proficiently by the actions of labor unions.