DRI-202 for week of 4-26-15: The Comcast/Time-Warner Cable Merger Bites the Dust

An Access Advertising EconBrief:

The Comcast/Time-Warner Cable Merger Bites the Dust

This week brings the news that the year’s biggest and most highly publicized merger, between cable-television titans Comcast and Time-Warner Cable, has been called off. Although the decision was technically made by Comcast, which announced it on Monday, it really came from the Federal Communications Commission (FCC), whose de facto opposition to the merger became public last week. This continues a virtually unbroken string of economically inane measures taken by the Obama administration and its regulatory minions.

Theoretically, merger policy falls within the province of industrial organization, the economic specialty spawned by the theory of the firm. Actually, the operative logic had nothing whatever to do with economics. Instead, the decision was dictated by the peculiar incentives governing the behavior of government.

The high visibility of the intended merger and the huge volume of comment it spawned make it worthwhile to examine carefully. What made it so attractive to the principals? Why was it denounced so bitterly in certain quarters? Was the FCC right to oppose it?

Who Were the Principals in the Merger?

Comcast and Time-Warner Cable (hereinafter, TWC) are today the two leading firms in the so-called “pay-TV” industry. The quotation marks reflect the fact that the term has undergone several changes over the course of television history. Today it refers to two different groups of television consumers. First are subscribers to cable television, the biggest revenue source for both Comcast and TWC. Born in the 1950s and nurtured in the 1960s, cable TV fought tooth and nail to gain a toehold against “free” broadcast television. It succeeded by offering better reception from buried coaxial-cable transmission lines, more viewing choices than the “Big 3” national broadcast networks offered on free TV, and a blessed absence of commercial interruption. Its success came despite the efforts of government regulators, who barred local cable companies from serving major metropolitan areas until the 1980s.

In the early days, municipalities were so desperate to get cable TV that local governments would offer a grant of monopoly to the first cable franchise that promised to lay cable and serve the citizenry. In return, the cable firm would have to pay various legal and illegal bribes. The legal ones came in the form of community-access and public-service channels that few watched but which gave lip service to the notion that the cable firm was serving the “public interest” and not merely maximizing profit. Predictably, these monopoly concessions eventually came back to haunt municipal governments when cable firms inexorably began raising their rates without providing commensurate increases in programming value and customer service.

Today, the contractual arrangements with cable firms survive. But the grants of monopoly are no more. In many markets, other cable firms have entered to compete with the original franchisees. Even more important, though, are the other sources of competitive television service. First, there is satellite TV service provided by companies like DirecTV and Dish. A satellite dish – usually located on the customer’s roof – gathers the signal transmitted by the company and provides hundreds of channels to customers. Wireless firms like AT&T and Verizon can transmit television signals to provide television service as well. And finally, it has become possible to “stream” television digitally by means similar to those used to stream audio for songs. Consequently, a movie-streaming service like Netflix has become a potent competitor to cable television as well.

What Did Comcast and TWC Have to Gain from the Merger? 

The late, great Nobel laureate Ronald Coase taught us that business firms exist to do things that individuals can’t do for themselves – or, more precisely, things that individuals find too costly to do themselves and more efficient to “import” from outsiders. Take this same logic and extend it to business firms. Firms produce some things internally and purchase other things outside the firm. Logically, the inputs they produce internally are the ones they can produce at a cost lower than the external cost of purchase, while external purchases are made when the internal cost of production is too high.

Now extend this logic even further – to the question of merger, in which one firm purchases another. Both firms have to agree to the terms, including a price, which means that both firms consider the merged operation superior to separation. The term used to denote the advantages that arise from combination is synergy – a word suggesting that melding two elements produces a greater output of energy than the two produce in isolation.

Why should putting two firms together improve on their separate efficiency? The first place to look for an answer is cost – the reason why businesses exist in the first place and the reason why they purchase inputs in the second. The primary synergy in most mergers is the elimination of duplicative functions. Because mergers themselves take time, effort and other resources to effect, there must be substantial duplication that can be eliminated in order to justify a merger on this ground alone. That is why mergers so often occur (or are threatened) among similar, competing firms with similar internal structures.

This applies to Comcast and TWC. Large parts of both firms are devoted to the same function; namely, providing cable television to subscribers. A merger would still leave them with the same total territory to service. But one central office, much smaller than the combined size of both before the merger, could now handle administration for the entire territory. The largest efficiencies would undoubtedly have been available in advertising. Economies of scale would have been gained from having one advertising department handle all advertising for the merged firm. Economies of size would have been available because the much larger total of advertising would have commanded volume discounts from sellers.

Given the gigantic size of the firms – their combined revenue would have been well over $80 billion – these economies alone might well have justified the merger. And that leaves out the most important reason for the merger. In times of market turmoil, mergers are often referred to as “consolidation.” This is a polite way of saying that the firms involved are girding their loins for future battle. They are fighting for their business lives.

This is completely at odds with the picture painted by self-styled “consumer advocates” and government regulators. The former whine about the poor quality of service provided by Comcast to its cable subscribers, calling the company a “lazy monopolist.” By definition, a lazy monopolist doesn’t have to worry about its future – it is living off the fat of the land or, as an economist puts it, taking some of its profits in the form of leisure. (Of course, the critics can’t have it both ways – if the firm is “lazy” then it must be extracting less profit from consumers than it could if it were “aggressive.” But the act of moral posturing uses up so much mental energy that there is little left for critics to use in applying logic.) Government regulators say that Comcast and Time-Warner have so much power that, when combined, they could exclude their potential competitors from the market for “high-speed broadband.”

But the picture painted by market analysts is completely different. Comcast and TWC are leading players in a market that is beginning to wither on the vine. They are not merely providing “pay TV”; they are providing it via coaxial cable buried in the ground and via subscription. This method of providing television service will sooner or later become an endangered species – and the evidence is leaning toward “sooner.” People are beginning to “cut the cord” binding them to cable television. They are doing it in at least three ways. For years, satellite services have made modest inroads into cable markets. Now wireless companies are increasing these inroads. Finally, streaming services are promoting the ultimate heresy – people are renouncing their television sets entirely by streaming TV programming on their computers. Consumers abandoned pay-TV in both 2013 and 2014; in the past year, cord-cutting in favor of streaming TV has occurred by the millions.

Not surprisingly, the prime mover behind all of these threats to cable TV is cost. In the early days of cable, hundreds of channels were a dazzling novelty after the starvation diet of three major networks (with perhaps one UHF channel as an added spice). People occasionally surfed the channels just to find out what they might be missing or for something of genuine interest. Over time, though, they bore an increasing cost of holding an inventory of dozens of channels handy on the mere off-chance that something interesting might turn up. That experience gradually made the tradeoff seem less and less favorable, making the lure of a TV lineup tailored to their specific preferences and budget more attractive. And from here, the prices of cable TV’s competitors will go nowhere but down.

These competitors are not only competing on the basis of price but also on the basis of product quality. Increasingly, they are now creating their own programming content. This trend began years ago with Home Box Office (HBO), which started life as a movie channel but entered the top tier of television competition when it began producing its own movies and specials. Now Netflix has followed suit and everybody else sees the handwriting on the wall.

The biggest attraction of the merger for Comcast and Time-Warner was the combined resources of the two firms, which would have given the resulting merged firm the kind of war chest it needed to fight a multi-front competitive war with all these competitors. Each of the two firms brought its own special advantages to the fight, complementing the weaknesses of the other. Comcast owns NBC, currently the most successful broadcast-TV network and a locus of programming expertise. Another of its assets is Universal Studios, a leading film producer since the dawn of Hollywood and a television pioneer since the 1950s. TWC brought the additional heft and nationwide presence necessary to lift Comcast from regional cable-TV leader to international media player.

What Is an “Industry”?

Everybody has heard the word “industry” used throughout their lives. Everybody thinks they know what it means. The federal government lists and classifies industries according to the Standard Industrial Classification (SIC) code. The SIC code defines an industry by its technical characteristics, and the definition becomes narrower as the work performed by the firms becomes more specialized. From the point of view of economics, though, there is a problem with this strictly technical approach to definition.

It has no necessary connection to economics at all.

The only economic definition of an industry relates to the economic substitutability of the products produced by its members. If the products are viewed by consumers as economically homogeneous – i.e., interchangeable – then the aggregate of firms constitutes an industry. This holds true regardless of the technical features of those products. They may be physically identical; indeed, that might seem highly likely. But identical or not, their physical similarity has nothing to do with the question of industrial status.

If the goods are close substitutes, we may regard the firms as comprising an industry. How close is “close”? Well, in practice, economists usually use price as their yardstick. If significant variations in the price of any firm’s output will induce consumers to shift their custom to a different seller, then that is sufficient to stamp the outputs of the different sellers as close substitutes. (We hold product quality constant in making this evaluation.)
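
The standard formalization of this price yardstick – implicit in the paragraph above, though not spelled out there – is the cross-price elasticity of demand:

\[
E_{xy} = \frac{\%\,\Delta Q_x}{\%\,\Delta P_y}
\]

If a rise in seller y’s price induces a large percentage increase in the quantity demanded of seller x’s output – that is, if \(E_{xy}\) is large and positive – then the two outputs are close substitutes and the two sellers belong to the same economic industry, whatever the SIC code says.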

This distinction – between the definition of an industry in strictly technical terms and in economic terms – is the key to understanding modern-day telecommunications, the digital universe and the Comcast/TWC merger.

Without saying it in so many words, the FCC proposes to define markets and industries in non-economic terms that suit its own bureaucratic self-interest. It does this despite the fact that only economic logic can be used when evaluating the welfare of consumers and interpreting the meaning of antitrust law.

The FCC’s Rationale for Ordering a Hearing on the Comcast/TWC Merger

Comcast decided to pull the plug on its proposed merger with TWC because the FCC’s announced decision to hold a regulatory hearing on the merger was a signal of the agency’s intention to oppose it. (The power of the federal government to legally coerce citizens is so great that innocent defendants commonly plead guilty to criminal charges in order to minimize penalties, so it is not strange that Comcast should surrender preemptively.) It is natural to wonder what was behind that opposition. There are two answers to that question. The first answer is the one that the agency itself would have provided in the hearing and that has already been provided in statements made by FCC Chairman Thomas Wheeler. That answer should be considered the regulatory pretext for opposition to the merger.

For years, another regulatory agency – the Federal Trade Commission (FTC) – passed both formal and informal judgment on antitrust law in general and business combinations in particular. The FTC even provided a set of guidelines for what mergers would be viewed favorably and unfavorably. The guidelines looked primarily at what industrial-organization economists called industry structure. That term refers to the makeup of firms existing within the industry. Traditionally, this field of economics studies not only industry structure – the number of firms and the division of industry output among them – but also the conduct of existing firms – competition might be fierce, lackadaisical or even give way to collusive attempts to set price – and their actual performance – prices, output and product quality might be consistent either with competitive results or with monopolistic ones. But the FTC concerned itself with structural attributes of the market when reviewing proposed mergers, to the exclusion of other factors. It calculated what were known as concentration ratios – fractions of industry output produced by the leading handful of firms currently operating. If the ratio was too high, or if the proposed merger would make it too high, then the merger would be disallowed. When feeling particularly esoteric, the agency might even deploy a hyper-scientific tool like the “Herfindahl-Hirschman Index” of industry concentration as evidence that a merger would “harm competition.”
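
For concreteness, here is a minimal sketch of the two structural yardsticks just described, computed over invented market shares (the numbers below are illustrative only and do not come from the cable case):

```python
# Minimal sketch of two structural measures of industry concentration:
# the four-firm concentration ratio (CR4) and the Herfindahl-Hirschman
# Index (HHI). All shares are hypothetical percentages.

def cr4(shares):
    """Four-firm concentration ratio: sum of the four largest shares."""
    return sum(sorted(shares, reverse=True)[:4])

def hhi(shares):
    """HHI: sum of squared percentage shares; 10,000 means pure monopoly."""
    return sum(s ** 2 for s in shares)

shares = [30, 25, 15, 10, 10, 5, 5]  # hypothetical shares summing to 100

print(cr4(shares))  # 80 -- the top four firms produce 80% of industry output
print(hhi(shares))  # 2000 -- a level the old guidelines treated as highly concentrated
```

A merger of the two 5% firms would raise the HHI from 2,000 to 2,050 (the new 10% firm contributes 100 points where the separate firms contributed 25 each) – which is all the index ever “sees” of a merger’s effects.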

In our case, the FCC needed a rationale to stick its nose into the case. That was provided by President Obama’s insistence on the policy of “net neutrality” as he defined it. This policy contended that the leading cable-TV providers were “gatekeepers” of the Internet by virtue of their local monopoly on cable service. In order to give this policy a semblance of concreteness – and also to make the FCC look as busy as possible – the agency established a policy that the top pay-TV firm could control no more than 30% of the “total” market. This criterion is at least loosely reminiscent of the old FTC merger guidelines – except that the FTC guidelines at least had a tenuous relationship with economic theory and logic. The FCC’s policy has as much to do with astrology as it does with economics; that is, roughly nothing to do with either. But, mindful of the FCC’s rule and in order to keep its merger hopes alive, Comcast sold enough of its cable-TV properties to Charter Communications to reduce the two companies’ combined pay-TV holdings to the 30% threshold.

In order to create the appearance of being progressive in the technical as well as the political sense, the FCC set itself up as the guardian of “high-speed broadband service.” For years leading up to the merger announcement, the FCC’s definition of “high-speed” was a speed greater than or equal to 4 Mbps. But after the merger announcement, the FCC abruptly changed its definition of the “high-speed market” to 25 Mbps or greater. Why this sudden change? Comcast’s sale of cable-TV assets had circumvented the FCC’s 30% market threshold, so the agency now had an incentive to invent a new hurdle to block the merger. The faster broadband-speed classification had the effect of including fewer firms, thereby making the (artificially defined) market smaller than before. In turn, this made the shares of the existing firms higher. Under this revised definition – surprise, surprise! – the Comcast/TWC merger would have given the resulting firm 57% of this newly defined “market” rather than the 37% it would previously have had.
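
The arithmetic of this maneuver deserves to be made explicit. In the sketch below, the subscriber counts are invented purely to reproduce the mechanism; only the 37% and 57% endpoints come from the case itself:

```python
# How narrowing a market definition inflates measured market share.
# All subscriber counts are hypothetical; only the resulting percentages
# (37% and 57%) correspond to figures cited in the text above.

merged_firm   = 20e6  # merged firm's broadband subscribers (hypothetical)
rivals_4mbps  = 34e6  # rival subscribers at >= 4 Mbps (hypothetical)
rivals_25mbps = 15e6  # subset of rivals offering >= 25 Mbps (hypothetical)

share_old = merged_firm / (merged_firm + rivals_4mbps)   # old definition
share_new = merged_firm / (merged_firm + rivals_25mbps)  # new definition

print(f"{share_old:.0%}")  # 37% -- unremarkable under the 4 Mbps definition
print(f"{share_new:.0%}")  # 57% -- a "dominant" share created by redefinition alone
```

Nothing about the firms changed between the two calculations; only the denominator did.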

Still, most industry observers figured that Comcast’s divestiture sale to Charter Communications, combined with what Holman Jenkins of The Wall Street Journal called “Comcast’s vast lobbying spending and carefully cultivated donor ties with the Obama administration,” would see the merger over the regulatory hurdles. Clearly, they reckoned without the determination of FCC Chairman Wheeler.

What Was the Actual Motivation of the FCC in Frustrating the Comcast/TWC Merger?

Regulators regulate. That is the explanation for the FCC’s de facto denial of the Comcast/TWC merger. It is the bureaucratic version of Descartes’s “I think, therefore I am.” After over a century of encroaching totalitarianism, it is only gradually dawning on America that big government is dedicated solely to the proposition that government of, by and for itself shall not perish from the Earth.

A recent Bloomberg Business editorial is an implicit rationale for the FCC’s action. The editor marvels at how, only recently, the forces of cable-TV darkness seemed to have the upper hand, poised with their jackboots on the throats of consumers the world over. But now, with startling suddenness, cable’s position seems wholly tenuous, beset on all sides by uncertainty. And whom should we thank for this sudden reversal? Why, the FCC, of course, whose wise regulation has turned the tide. Instead of crediting competitive forces with making the FCC’s action unnecessary if not a complete non sequitur, the editorial gives the agency credit for circumstances that preexisted its action and in which it had no hand.

One of Milton Friedman’s famous characterizations of bureaucracy compared it to the lead duck in a V-formation who, upon discovering that the rest of the formation has deserted him and is flying off in a different direction, scrambles to get back in front of the V again. By denying the merger, the FCC has re-positioned itself to claim credit for anything and everything that competition has accomplished so far and will accomplish in the future. If it had done nothing, regulation would have had to cede credit to market forces. By doing something – even something as crazy, useless and downright counterproductive as frustrating a potentially beneficial merger – the FCC has not only set itself up to claim credit in the future, it has also fulfilled the first goal of every government bureaucracy.

It has justified its existence.

All this would have been true even if the FCC’s pre-existing commitment to net neutrality had not forced it to twitch reflexively every time the words “high-speed broadband” arise in a policy context. As it is, the agency was compelled to invent a “policy” for regulating a market that will soon be the most hotly competitive arena in the world – unless the federal government succeeds in wrestling competition to a standstill here as it did in telecommunications in the 1990s.

Why Are Economic Theory and Logic Absent from the FCC’s Actions in the Comcast/TWC Merger?

Begin with a few matter-of-fact sentences from Forbes magazine’s summary of the merger. “Comcast and TWC do not directly compete with each other… and there is no physical overlap in the areas in which these companies offer services.” Competitors such as DirecTV, Dish, AT&T, Verizon and Netflix have “reduced the importance of the cable-TV market and given its customers other alternatives… Hence this merger would not significantly impact the choices available to the consumers in the service areas of these two companies.”

Forbes’ point was that old-time opposition to mergers by agencies like the FTC was based on the simplistic premise that when competitors merge, there is one fewer competitor in the market – which is then one step closer to monopoly. When there were few competitors to begin with, this line of thinking had a certain naïve appeal, even though it was wrong. But when the merging companies weren’t competitors in the first place, even this rather flimsy rationale evaporates. And this holds just as true in the so-called “market for high-speed broadband” as it does in the market for pay-TV. Why? Because President Obama and FCC Chairman Wheeler have anointed the cable companies as the gatekeepers of that “market,” and the only markets they can be gatekeepers of are the same local markets in which Comcast and Time-Warner weren’t competitors before the merger announcement. Therefore the merger couldn’t have affected developments there, either.

The end-in-view of all economic activity is consumption. Consumers – the people who watch TV in whatever form – would not have been harmed or adversely affected by the merger. The consumer advocates who cite the bad service given by Comcast to its customers seem to have taken the view that the remedy for this offense is to make sure that nothing good happens to Comcast from now on. They apparently expect that the merger would have reduced the total volume of employment at the two firms – which it undoubtedly would – and that this would on its face have made customer service even worse – which it most certainly would not have done. Government never ceases to object to budget cuts and to predict even worse customer service when they are implemented, but bigger government never produced better customer service. Only competition does that – and the merger was a desperate attempt to prepare for and cope with competition.

The FCC’s imaginary market for high-speed broadband and its 30% threshold were as irrelevant to market competition as the price of tea in Ceylon. The entire digital universe is inventing its way around the anachronistic gatekeeper function performed by local cable firms. (The Wall Street Journal’s editors couldn’t help reacting in amazement to the FCC’s announcement: “Is anybody at the FCC under 40?” Today it is only the senior-citizen crowd that is still tethered to desktop computers for Web access.)

Why Should the Man in the Street Be Expected to Embrace a Merger Between Large Corporations?

It has been estimated that the sum of mankind’s knowledge has increased more since 2003 than it did from the dawn of human history up to that point. Given the breakneck advance of learning, we cannot expect to comprehend the meaning and benefit of all that goes on around us. Instead, we must choose between the presumptive value of freedom and the restraining hand of government. We owe most of what we value to freedom and private initiative. It is genuinely difficult to identify much – if anything – that government does adequately, let alone brilliantly.

This straightforward comparison, rather than complex mathematics, econometrics or “he said, she said” debates between vested interests, should sway us to side with freedom and free markets. The average person shouldn’t “embrace” a corporate merger because he or she shouldn’t evaluate the issue on the basis of emotion. The merger should have been “tolerated” as an exercise of free choice by responsible adults – period.

DRI-183 for week of 3-1-15: George Orwell, Call Your Office – The FCC Curtails Internet Freedom In Order to Save It

An Access Advertising EconBrief:

George Orwell, Call Your Office – The FCC Curtails Internet Freedom In Order to Save It

February 26, 2015 is a date that will live in regulatory infamy. That assertion is subject to revision by the courts, as is nearly everything undertaken these days by the Obama administration. As this is written, the Supreme Court hears yet another challenge to “ObamaCare,” the Affordable Care Act. President Obama’s initiative to achieve a single-payer system of national health care in the U.S. is rife with Orwellian irony, since it cannot help but make health care unaffordable for everybody by further removing the consumer of health care from any exposure to its price. Similarly, the latest administration initiative is the February 26 approval by the Federal Communications Commission (FCC) of the so-called “Net Neutrality” doctrine in regulatory form. Commission Chairman Tom Wheeler’s summary of his regulatory proposal – consisting of 332 pages that were withheld from the public – has been widely characterized as a proposal to “regulate the Internet like a public utility.”

This episode is riven with a totalitarian irony that only George Orwell could fully savor. The FCC is ostensibly an independent regulatory body, free of political control. In fact, Chairman Wheeler long resisted the “net neutrality” doctrine (hereinafter shortened to “NN” for convenience). The FCC’s decision was a response to pressure from President Obama, which made a mockery of the agency’s independence. The alleged necessity for NN arises from the “local monopoly” over “high-speed” broadband exerted by Internet service providers (hereinafter abbreviated as “ISPs”) – but a “public utility” was, and is, by definition a regulated monopoly. Since the alleged local monopoly held by ISPs is itself fictitious, the FCC is in fact proposing to replace competition with monopoly.

To be sure, the particulars of Chairman Wheeler’s proposal are still open to conjecture. And the enterprise is wildly illogical on its face. The idea of “regulating the Internet like a public utility” treats those two things as equivalent entities. A public utility is a business firm. But the Internet is not a single business firm; indeed, it is not a single entity at all in the concrete sense. In the business sense, “the Internet” is shorthand for an infinite number of existing and potential business firms serving the world’s consumers in countless ways. The clause “regulate the Internet like a public utility” is quite literally meaningless – laughably indefinite, overweening in its hubris, frightening in its totalitarian implications.

It falls to an economist, former FCC Chief Economist Thomas Hazlett of Clemson University, to sculpt this philosophy into its practical form. He defines NN as “a set of rules… regulating the business model of your local ISP.” In short, it is a political proposal that uses economic language to prettify and conceal its real intentions. NN websites are emblazoned with rhetoric about “protecting the Open Internet” – but the Internet has thrived on openness for over 20 years under the benign neglect of government regulators. This proposal would end that era.

There is no way on God’s green earth to equate a regulated Internet with an open Internet; the very word “regulated” is the antithesis of “open.” NN proponents paint scary scenarios about ISPs “blocking or interfering with traffic on the Internet,” but their language is always conditional and hypothetical. They are posing scenarios that might happen in the future, not ones that threaten us today. Why? Because competition and innovation have protected consumers up to now and continue to do so. NN will make its proponents’ scary predictions more likely, not less, because it will restrict competition. That is what regulation does in general; that is what public-utility regulation specifically does. For over a century, public-utility regulation has installed a single firm as a regulated monopoly in a particular market and has forcefully suppressed all attempts to compete with that firm.

Of course, that is not what President Obama, Chairman Wheeler and NN proponents want us to envision when we hear the words “regulate the Internet like a public utility.” They want us to envision a lovely, healthy flock of sheep grazing peacefully in a beautiful meadow, supervised by a benevolent, powerful Shepherd with a herd of well-trained, affectionate shepherd dogs at his command. Soothing music is piped down from heaven and love and tranquility reign. At the far edges of the meadow, there is a forest. Hungry wolves dwell within, eyeing the sheep covetously. But they dare not approach, for they fear the power of the Shepherd and his dogs.

In other words, the Obama administration is trying to manipulate the emotions of the electorate by creating an imaginary vision of public-utility regulation. The reality of public-utility regulation was, and is, entirely different.

The Natural-Monopoly Theory of Public-Utility Regulation

The history of public-utility regulation is almost, but not quite, co-synchronous with that of government regulation of business in the United States. Regulation began at the state level with Munn vs. Illinois, which paved the way for state regulation of the grain business in the 1870s. The Interstate Commerce Commission’s inaugural voyage with railroad regulation followed in the late 1880s. With the commercial introduction of electric lighting and the telephone came business firms tailored to those ends. And in their wake came the theory of natural monopoly.

Both electric power and telephones came to be known as “natural monopoly” industries; that is, industries in which both economic efficiency and commercial viability dictate that one single firm serve the entire market. This is the outgrowth of economies of scale in production, which yield a decreasing long-run average cost of production. This decidedly unusual state of affairs is a technological anomaly. Engineers recognize it in conjunction with the “two-thirds rule.” There are certain cases in which cost increases as the two-thirds power of output, which implies that average cost decreases steadily as output rises. (The thru-put of pipes and cables and the capacity of cargo holds are examples.) In turn, this implies that the firm that grows the fastest will undersell all others while still covering all its costs. The further implication is that consumers will receive the most output at the lowest price if one monopoly firm serves everybody – if, and only if, the firm’s price can be constrained equal to its long-run average cost at the rate of output necessary to meet market demand. An unconstrained monopoly would produce less than this optimal rate of output and charge a higher price, in order to maximize its profit. But the theoretical outcome under regulated monopoly equates price with long-run average cost, which provides the utility with a rate of return equal to what it could get in the best alternative use for its financial capital, given its business risk.
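
The arithmetic behind the two-thirds rule is compact. If total cost rises with the two-thirds power of output q, then

\[
C(q) = kq^{2/3} \;\Longrightarrow\; AC(q) = \frac{C(q)}{q} = kq^{-1/3}, \qquad MC(q) = \frac{2}{3}kq^{-1/3}.
\]

Average cost falls continuously as output grows, and marginal cost is always two-thirds of average cost – that is, below it. That permanent gap is the signature of natural monopoly and, as discussed below, the reason a price set equal to marginal cost would fail to cover the firm’s total costs.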

In the U.S. and Canada, this regulated outcome is sought via periodic rate hearings staged by a public-utility regulatory commission (PUC for short). The utility is privately owned by shareholders. In Europe, utilities have generally not been privately owned. Instead, their prices are (in principle) set equal to long-run marginal cost, which is below the level of average cost and thus produces a loss in accounting terms. Taxpayers subsidize this loss – these subsidies are the alternative to the profits earned by regulated public-utility firms in the U.S. and Canada.

These regulatory schemes represent the epitome of what the Nobel laureate Ronald Coase called “blackboard economics” – economists micro-managing reality as if they possessed all the information and control over reality that they do when drawing diagrams on a classroom blackboard. In practice, things did not work out as neatly as the foregoing summary would lead us to believe. Not even remotely close, in fact.

The Myriad Slips Twixt Theoretical Cup and Regulatory Lip

What went wrong with this theoretical set-up, seemingly so pat when viewed in a textbook or on a classroom blackboard? Just about everything, to some degree or other. Today, we assume that the institution of regulated monopoly came in response to market monopolies achieved and abuses perpetrated by electric and telephone companies. What mostly happened, though, was different. There were multiple providers of electricity and telephone service in the early days. In exchange for submitting to rate-of-return regulation, though, one firm was extended a grant of monopoly and the other firms were excluded. Only in very rare cases did competition persist in local electric service – and curiously, that competition actually produced lower electric rates than did public-utility regulation.

This result was not the anomaly it seemed, since the supposed economies of scale were present only in the distribution of electric power, not in power generation. So the cost superiority of a single firm producing for the whole market turned out not to be the slam-dunk that was advertised. That was just one of many cracks in the façade of public-utility regulation. Over the course of the 20th century, the evolution of public-utility regulation in telecommunications proved to be paradigmatic for the failures and inherent shortcomings of the form.

Throughout the country, the Bell System was handed a monopoly on the provision of local service. Its local service companies – the analogues to today’s ISPs – gradually acquired reputations as the heaviest political hitters in state-government politics. The high rates paid by consumers bought lobbyists and legislators by the gross, and they obediently safeguarded the monopoly franchise and kept the public-utility commissions (PUCs) staffed with tame members. That money also paid the bill for a steady diet of publicity designed to mislead the public about the essence of public-utility regulation.

We were assured by the press that the PUC was a vigilant watchdog whose noble motives kept greedy utility executives from turning the rate screws on a helpless public. At each rate hearing, self-styled consumer advocacy groups paraded their compassion for consumers by demanding low rates for the poor and high rates on business – as if it were really possible for some non-human entity called “business” to pay rates in the true sense, any more than it could pay taxes. PUCs ostentatiously required the utility to enumerate its costs and pretended to laboriously calculate “just and reasonable” rates – as if a commission possessed juridical powers denied to the world’s greatest philosophers and moralists.

Behind the scenes, after the press had filed their poker-faced stories on the latest hearings, increasingly jaded and cynical reporters, editors and industry consultants rolled their eyes and snorted at the absurdity of it all. Utilities quickly learned that they wouldn’t be allowed to earn big “profits,” because this would be cosmetically bad for the PUC, the consumer advocates, the politicians and just about everybody involved in this process. So executives, middle-level managers and employees figured out that they had to make their money differently than they would if working for an ordinary business in the private sector. Instead of working efficiently and productively and striving to maximize profit, they would strive to maximize cost instead. Why? Because they could make money from higher costs in the form of higher salaries, higher wages, larger staffs and bigger budgets. What about the shareholders, who would ordinarily be shafted by this sort of behavior? Shareholders couldn’t lose because the PUC was committed to giving them a rate of return sufficient to attract financial capital to the industry. (And the shareholders couldn’t gain from extra diligence and work effort put forward by the company because of the limitation on profits.) That is, the Commission would simply ratchet up rates commensurate with any increase in costs – accompanied by whatever throat-clearing, phony displays of concern for the poor and cost-shifting shell games were necessary to make the numbers work. In the final analysis, the name of the game was inefficiency and consumers always paid for it – because there was nobody else who could pay.

So much for the vaunted institution of public-utility regulation in the public interest. Over fifty years ago, a famous left-wing economist named Gardiner Means proposed subjecting every corporation in the U.S. to rate-of-return regulation by the federal government. This held the record for the most preposterous policy program advanced by a mainstream commentator – until Thomas Wheeler announced that henceforth the Internet would be regulated as if it were a public utility. Now every American will get a taste of life as Ivan Denisovich, consigned to the Gulag Archipelago of regulatory bureaucracy.

Of particular significance to us in today’s climate is the effect of this regime on innovation. Outside of totalitarian economies such as the Soviet Union and Communist China, public-utility regulation is the most stultifying climate for innovation ever devised by man. The idea behind innovation is to find ways to produce more goods using the same amount of inputs or (equivalently) the same amount of goods using fewer inputs. Doing this lowers costs – which increases profits. But why go to the trouble if you can’t enjoy the increase in profits? Of course, utilities were willing to spend money on research, provided they could get it into the rate base and earn a rate of return on the investment. But they had no incentive actually to implement any cost-saving innovations. The Bell System was legendary for its unwillingness to lower its costs; the economic literature is replete with jaw-dropping examples of local Bell companies lagging years and even decades behind the private sector in technology adoption – even spurning advances developed in Bell’s own research labs!

Any reader who suspects this writer of exaggeration is invited to peruse the literature of industrial organization and regulation. One nagging question should be dealt with forthwith. If the demerits of public-utility regulation were well recognized by insiders, how were they so well concealed from the public? The answer is not mysterious. All of those insiders had a vested interest in not blowing the whistle on the process because they were making money from ongoing public-utility regulation. Commission employees, consultants, expert witnesses, public-interest lawyers and consumer advocates all testified at rate hearings or helped prepare or research testimony. They either worked full-time or traveled the country as contractors earning lucrative hourly pay. If any one of them had been crazy enough to launch an exposé of the public-utility scam, he or she would have been blackballed from the business while accomplishing nothing – the institutional inertia in favor of the system was so enormous that it would have taken mass revolt to effect change. So they just shrugged, took the money and grew more cynical by the year.

In retrospect, it seems miraculous that anything did change. In the 1960s, local Bell companies were undercharging for local service to consumers and compensating by soaking business and long-distance customers with high prices. The high long-distance rates eventually attracted the interest of would-be competitors. One government regulator grew so fed up with the inefficiency of the Bell system that he granted the competitive petition of a small company called MCI, which sought to compete only in the area of long-distance telecommunications. MCI was soon joined by other firms. The door to competition had been cracked slightly ajar.

In the 1980s, it was kicked wide open. A federal antitrust lawsuit against AT&T led to the breakup of the firm. At the time, the public was dubious about the idea that competition was possible in telecommunications. The 1990s soon showed that regulators were the only ones standing between the American public and a revolution unlike anything we had seen in a century. After vainly trying to protect the local Bells against competition, regulators finally succumbed to the inevitable – or rather, they were overrun by the competitive hordes. When the public got used to cell phones and the Internet, they ditched good old Ma Bell and land-line phones.

This, then, is public-utility regulation. The only reason we have smart phones and mobile Internet access today is that public-utility regulation in telecommunications was overrun by competition despite regulatory opposition in the 1990s. But public-utility regulation is the wonderful fate to which Barack Obama, Thomas Wheeler and the FCC propose to consign the Internet. What is the justification for their verdict?

The Case for Net Neutrality – Debunked

As we have seen, public-utility regulation was based on a premise that certain industries were “natural monopolies.” But nobody has suggested that the Internet is a natural monopoly – which makes sense, since it isn’t an industry. Nobody has suggested that all or even some of the industries that utilize the Internet are natural monopolies – which makes sense, since they aren’t. So why in God’s name should we subject them to public-utility regulation – especially since public-utility regulation didn’t even work well in the industries for which it was ideally suited? We shouldn’t.

The phrase “net neutrality” is designed to achieve an emotional effect through alliteration and a carefully calculated play on the word “neutral.” In this case, the word is intended to appeal to egalitarian sympathies among hearers. It’s only fair, we are urged to think, that ISPs, the “gatekeepers” of the Internet, be scrupulously fair or “neutral” in letting everybody in on the same terms. And, as with so many other issues in economics, the case for “fairness” becomes just so much sludge upon closer examination.

The use of the term “gatekeepers” suggests that God handed Moses on Mount Sinai a stone tablet for the operation of the Internet, on which ISPs were assigned the role of “gatekeepers.” Even as hyperbolic metaphor, this bears no relation to reality. Today, cable companies are ISPs. But they began life as monopoly-killers. In the early 1960s, Americans chose among three monopoly VHF-TV networks – ABC, NBC and CBS. Gradually, local UHF stations started to season the diet of content-starved viewers. When cable TV came along, it was like manna from heaven to a public fed up with commercials and ravenous for sports and movies. But government regulators didn’t allow cable TV to compete with VHF and UHF in the top 100 media markets of the U.S. for over two decades. As usual, regulators were zealously protecting government-granted monopoly, restricting competition and harming consumers.

Eventually, cable companies succeeded in tunneling their way into most local markets. They did it by bribing local government literally and figuratively – the latter by splitting their profits via investment in pet political projects of local politicians as part of their contracts. In return, they were guaranteed various degrees of exclusivity. But this “monopoly” didn’t last because they eventually faced competition from telecommunication firms who wanted to get into their business and whose business the cable companies wanted to invade. And today, the old structural definitions of monopoly simply don’t apply to the interindustry forms of competition that prevail.

Take the Kansas City market. Originally, Time Warner had a monopoly franchise. But eventually a new cable company called Everest invaded the metro area across the state line in Johnson County, KS. Johnson County’s Overland Park is contiguous with Kansas City, MO, and consumers were anxious to escape the toils of Time Warner. Eventually, Everest prevailed upon Kansas City, MO to grant it entry to the Missouri side. Now even the cable-TV market was competitive. Then Google selected Kansas City, KS as the venue for its new high-speed service. Soon Kansas City, MO was included in that package, too – now there were three local ISPs! (Everest has since passed through two successive incarnations, one of which still serves the area.)

Although Kansas City is not typical, even this does not exhaust the competitive alternatives, for it is only the picture for fixed service. Americans are now turning to mobile forms of access to the Internet, such as smart phones. Smart watches are on the horizon. For mobile access, the ISP is a wireless company like AT&T, Verizon, Sprint or T-Mobile.

The NN websites stridently maintain that “most Americans have only a single ISP.” This is nonsense; a charitable interpretation would be that most of us have only a single cable-TV provider in our local market. But there is no necessary one-to-one correlation between “cable-TV provider” and “ISP.” Besides, the state of affairs today is ephemeral – different from what it was a few years ago and from what it will be a few years from now. It is only under public-utility regulation that technology gets stuck in one place, because under public-utility regulation there is no incentive to innovate.

More specifically, the FCC’s own data suggest that 80% of Americans have two or more ISPs offering downstream speeds of 10 Mbps; 96% have two or more ISPs offering 6 Mbps downstream and 1.5 Mbps upstream. (Until quite recently, the FCC’s own criterion for “high-speed” Internet was 4 Mbps or more.) This simply does not comport with any reasonable structural concept of monopoly.

The current flap over “blocking and interfering with traffic on the Internet” is the residue of disputes between Netflix and ISPs over charges for transmission of the former’s streaming services. In general, there is movement toward higher charges for data transmission than for voice transmission. But the huge volumes of traffic generated by Netflix cause congestion, and the free-market method for handling congestion is a higher price, or the functional equivalent. That is what economists have recommended for dealing with road congestion during rush hours and congested demand for air-conditioning and heating services at peak times of day and during peak seasons. Redirecting demand to the off-peak is not a monopoly response; it is an efficient market response. Competitive bar and restaurant owners do it with their pricing methods; competitive movie theater owners also do it (or used to).
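
The textbook peak-load pricing rule underlying this recommendation – the standard formalization, not anything contained in the FCC’s order – is simple to state. With marginal operating cost b per unit delivered and capacity cost β per unit of capacity,

\[
p_{\text{off-peak}} = b, \qquad p_{\text{peak}} = b + \beta.
\]

Only peak-period users, whose demands press against capacity and force its expansion, pay the capacity charge; off-peak users pay bare operating cost. Charging heavy streaming traffic more at congested hours is this principle at work, not an exercise of monopoly power.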

Similar logic applies to other forms of hypothetically objectionable behavior by ISPs. The prioritization of traffic, creation of “fast” and “slow” lanes, blocking of content – these and other behaviors are neither inherently good nor bad. They are subject to the constraints of competition. If they are beneficial on net balance, they will be vindicated by the market. That is why we have markets. If a government had to vet every action by every business for moral worthiness in advance, it would paralyze life as we know it. The only sensible course is to allow free markets and competition to police the activities of competitors.

Just as there is nothing wrong or untoward about price differentials based on usage, there is nothing virtuous about government-enforced pricing equality. Forcing unequals to be treated equally is not meritorious. NN proponents insist that the public has to be “protected” from that kind of treatment. But this is exactly what PUCs did for decades when they inefficiently subsidized residential consumers by soaking business and long-distance users with higher rates. Back then, the regulatory mantra wasn’t “net neutrality”; it was “universal service.” Ironically, regulators never succeeded in achieving rates of household telephone subscription that exceeded the rate of household television service. Consumers actually needed – but didn’t get – protection from the public-utility monopoly imposed upon them. Today, consumers don’t need protection because there is no monopoly, nor is there any prospect of one absent regulatory intervention. The only remaining vestige of monopoly is the residue of the grants of local cable-TV monopoly given by municipal governments. Compensating for past mistakes by local government is no excuse for making a bigger mistake by granting monopoly power to FCC regulators.

Forbearance? 

The late, great economist Frank Knight once remarked that he had heard do-gooders utter the equivalent words to “I want power to do good” so many times for so long that he automatically filtered out the last three words, leaving only “I want power.” Federal-government regulators want the maximum amount of power with the minimum number of restrictions, leaving them the maximum amount of flexibility in the exercise of their power. To get that, they have learned to write excuses into their mandates. In the case of NN and Internet regulation, the operative excuse is “forbearance.”

Forbearance is the wave of the hand with which regulators will brush aside all the objections raised in this essay. The word appears in the original Title II regulations. It means that regulators aren’t required to enforce the regulations if they don’t want to; they can “forbear.” “Hey, don’t worry – be happy. We won’t do the bad stuff, just the good stuff – you know, the ‘neutrality’ stuff, the ‘equality’ stuff.” Chairman Wheeler is encouraging NN proponents to fill the empty vessel of Internet regulation with their own individual wish-fulfillment fantasies of what they dream a “public utility” should be, not what the ugly historical reality tells us public-utility regulation actually was. For example, he has implied that forbearance will cut out things like rate-of-return regulation.

This just begs the questions raised by the issue of “regulating the Internet like a public utility.” The very elements that Wheeler proposes to forbear from enforcing are part and parcel of public-utility regulation as we have known it. If these are forborne, we have no basis for knowing what to expect from the concept of Internet public-utility regulation at all. If they are not, after all, forborne – then we are back to square one, with the utterly dismal prospect of replaying 20th-century public-utility regulation in all its cynical inefficiency.

Forbearance is a good idea, all right – so good that we should apply it to the whole concept of Internet regulation by the federal government. We should forbear completely.

DRI-241 for week of 11-9-14: The Birth of Public-Utility Regulation

An Access Advertising EconBrief:

The Birth of Public-Utility Regulation

Today’s news heralds the wish of President Obama that the Federal Communications Commission (FCC) pass strict rules ensuring that Internet providers give equal treatment to all customers. This is widely interpreted (as, for example, by The Wall Street Journal front-page article of 11/11/2014) as saying that “the Federal Communications Commission [would] declare broadband Internet service a public utility.”

More specifically, the Journal’s unsigned editorial of the same day explains that the President wants the FCC to apply the common-carrier provisions of Title II of the Communications Act of 1934. Its “century-old telephone regulations [were] designed for public utilities.” In fact, the wording was copied from the original federal regulatory legislation, the Interstate Commerce Act of 1887; the word “railroad” was stricken and “telephone” was added to “telegraph.”

In other words, Mr. Obama wants to resurrect enabling regulatory legislation that is a century and a quarter old and apply it to the Internet.

We might be pardoned for assuming that the original legislation has been a rip-roaring success. After all, the Internet has revolutionized our lives and the conduct of business around the world. The Internet has become a way of life for young and old, from tribesmen in central Africa to dissidents from totalitarian regimes to practically everybody in developed economies. If we’re now going to entrust its fate to the tender mercies of Washington bureaucrats, the regulatory schema should presumably be both tried and true.

Public-utility regulation has been tried, that’s for sure. Was it true? And how did it come to be tried in the first place?

Natural Monopoly: The Party Line on Public-Utility Regulation

Public-utility regulation is a subset of the economic field known as industrial organization. Textbooks designed for courses in the subject commonly devote one or more chapters to utility regulation. Those texts rehearse the theory underlying regulation, which is the theory of natural monopoly. According to that theory, the reason we have (or had) regulated public utilities in areas like gas, electricity, telegraphs, telephones and water is that free competition cannot long persist. Regulated public utilities are greatly preferable to the alternative of a single unregulated monopoly provider in each of these fields.

The concept of natural monopoly rests on the principle of decreasing long-run average cost. In turn, this is based on the idea of economies of scale. Consider the production of various economic goods. All other things equal, we might suppose that as all inputs into the production process increase proportionately, the total monetary cost of production would increase proportionately as well. Often it does – but not always. Sometimes total cost increases more-than-proportionately, usually because the industry to which the good belongs uses so much of a particular input that expansion bids up the input’s price, thereby increasing total cost more-than-proportionately.

The rarest case is the opposite one, in which total cost increases less-than-proportionately with the increase in output. Although at first thought this seems paradoxical, there are technical factors that occasionally operate to bring it about. One of these is the engineering principle known as the two-thirds rule. In certain applications, such as the thru-put of a pipeline or the contents of containers used by ocean-going freight vessels, the surface area of the surrounding enclosure varies as the two-thirds power of the volume it contains. In other words, when the pipe grows larger and larger, the amount that can be transmitted through the pipe increases more-than-proportionately relative to the material enclosing it. When the container is made larger, the amount of freight the container can hold increases more-than-proportionately. The economic implication of this technical law is far-reaching, since production cost is a function of the size of the pipe or container (surface area) while the amount of output is a function of the thru-put of the pipe or the amount of freight (volume). In other words, this exactly describes the condition called “economies of scale,” in which output increases more-than-proportionately when all inputs are increased equally. Since average cost is the ratio of total cost to output, the fact that the denominator in the ratio increases more than the numerator causes the ratio to fall, thus producing decreasing average total cost.
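
The geometry is easy to verify. Scale every linear dimension of a container or pipe by a factor s: surface area and volume then obey

\[
A \propto s^{2}, \qquad V \propto s^{3} \;\Longrightarrow\; A \propto V^{2/3}.
\]

If cost tracks the enclosing material (surface area A) while output tracks capacity (volume V), doubling capacity raises cost by a factor of only \(2^{2/3} \approx 1.59\) – total cost rising less-than-proportionately with output, exactly the condition described above.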

Why does decreasing average cost create this condition of natural monopoly? Think of unit price as “average revenue.” Decreasing average cost allows a seller to lower price continuously as the scale of output increases. This is important because it suggests that the seller who achieves the largest scale of output – that is, grows faster than competitors – could undersell all others while still charging a viable price. The textbooks go on to claim that after driving all competitors from the field, the successful seller would then achieve an insurmountable monopoly and raise its price to the profit-maximizing point, dialing its output back to the level commensurate with consumer demand at that higher price. Rather than subject consumers to the agony of this pure-monopoly outcome, better to compromise by settling on an intermediate price and output that allows the regulated monopolist a price just high enough to attract the financial capital it needs to build, expand and maintain its large infrastructure. That is the raison d’être of public-utility regulation, which is accomplished in the U.S. by an administrative-law process involving hearings and testimony before a commission consisting of political appointees. Various interest groups – consumers, the utility company, the commission itself – are legally represented in the hearings.

Why are the regulated price and output termed a “compromise”? The Public Utility Commission (PUC) forces the company to charge a price equal to its average cost, incorporating a rate of profit just sufficient to attract investor capital. This regulatory result is intermediate between the outcomes under pure monopoly and pure competition. A profit-maximizing monopoly firm will produce the rate of output at which marginal revenue equals marginal cost. The monopolist’s marginal revenue is less than its average revenue (price) because every change in price affects inframarginal units, either positively or negatively, and the monopolist is all too aware of its singular status and of the large number of inframarginal units affected by its pricing decisions. Under pure competition, each firm treats price as a parameter and neglects the tiny effect its supply decisions have on market price; hence price and marginal revenue are effectively equal. Thus, each competitive firm will produce a rate of output at which price equals marginal cost, and the total output resulting from these individual firm decisions is larger – and the resulting market price lower – than would be the case if a single monopoly firm were deciding price and output for the whole market. The PUC does not attempt to duplicate this purely competitive price because, under decreasing average cost, marginal cost is less than average cost, and a price below average cost would not cover all the utility firm’s costs. Rather than subsidize the resulting losses out of public funds (as is commonly done outside the U.S. and Canada), the PUC allows a higher price sufficient to cover all costs, including the opportunity cost of attracting financial capital.
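
A stylized numerical illustration may help; the demand and cost figures are assumed for arithmetic convenience, not drawn from any actual rate case. Let demand be $P = 100 - Q$ and total cost $C = 400 + 20Q$, so that average cost $AC = 400/Q + 20$ declines continuously while marginal cost is constant at 20:

$$\text{Pure monopoly: } MR = 100 - 2Q = 20 \;\Rightarrow\; Q = 40,\; P = 60.$$
$$\text{Marginal-cost pricing: } P = 20 \;\Rightarrow\; Q = 80,\; AC = 25,\; \text{loss} = 400.$$
$$\text{Average-cost (PUC) pricing: } 100 - Q = \frac{400}{Q} + 20 \;\Rightarrow\; Q \approx 74.6,\; P \approx 25.4.$$

The regulated outcome sits between the two poles: price far below the monopoly level of 60, yet high enough to cover all costs, which marginal-cost pricing cannot do.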

How well does this theoretical picture of natural monopoly fit industrial reality? Many public-utility industries possess at least some technical features in common with it. Electric and telephone transmission lines, natural-gas pipelines and water pipes all obey the two-thirds rule. This much of the natural-monopoly doctrine has a scientific basis. On the other hand, power generation (as opposed to transmission or transport) does not usually exhibit economies of scale. And there are plenty of industries that are not regulated public utilities despite showing clear scale economies – ocean-going cargo vessels are one obvious case. This is enough to provoke immediate suspicion of the natural-monopoly doctrine as a comprehensive explanation of public-utility regulation. Suffice it to say that scale economies seldom dominate the production functions even of public-utility goods.

The Myth of the Birth of Public-Utility Regulation – and the Reality

In his classic article “Hornswoggled! How Ma Bell and Chicago Ed Conned Our Grandparents and Stuck Us With the Bill” (Reason Magazine, February 1986, pp. 29-33), Marvin N. Olasky recounts the birth of public-utility regulation. When “angry consumers and other critics call for an end to [public-utility] monopolies, choruses of utility PR people and government regulators recite the same old story – once upon a time there was competition among utilities, but ‘the public’ got fed up and demanded regulation… Free enterprise in utilities lost in a fair fight.”

As Olasky reveals, “it makes a good story, but it’s not true.” It helps to superimpose the logic of natural monopoly theory on the scenario spun by the “fair fight” myth. If natural-monopoly logic held good, how would we expect the utility-competition scenario to deteriorate?

Well, the textbooks tell us that the condition of natural monopoly (decreasing long-run average total cost) allows one firm to undersell all others by growing faster. Then it drives rivals out of business, becomes a pure monopoly and gouges consumers with high prices and reduced output. So that’s what we would expect to find as our “fair-fight” scenario: dog-eat-dog competition resulting in the big dog devouring all rivals, then rounding on consumers, whose outraged howls produce the dog-catching regulators who kennel up the company as a regulated public utility. The problem with this scenario is that it never happened. It is nowhere to be found in the history books or contemporary accounts.

Oops.

Well, somebody must have said something about life before utility regulation. After all, it was only about a century ago, not buried in prehistory. If events didn’t unfold according to textbook theory, how did public-utility regulation happen?

Actually, conventional references to the pre-regulatory past are surprisingly sparse. More to the point, they are contradictory. Mostly, they can be grouped under the heading of “wasteful competition.” This is a very different story from the one told by natural-monopoly theory. It maintains that competitive utility provision was a prodigal fiasco: numerous firms all vying for the same market by laying cable and pipe and building transmission lines. All this superfluous activity and expenditure drove costs – and, presumably, prices – through the roof. Eventually, a fed-up public put an end to the competitive nonsense by demanding relief from the government. This is the scenario commonly cited by utility PR people and regulators, who care little about theory and even less about logical consistency. All they want is an explanation that will play in Peoria, meeting whatever transitory necessity confronts them at the moment.

Fragmentary support for this explanation exists in the form of references to multiple suppliers of utility services in various markets. In New York City, for example, six different electricity franchises were granted by a single 1887 City Council resolution. But specific references to competitive chaos are hard to come by – odd, if things were really as bad as they are portrayed.

Could such a situation have arisen and persisted for the 20-40 years between the development of commercial electricity and telephony and the ascendance of public-utility regulation in the 1920s? No; the thought of competitive firms chasing their tails up the cost curve and losing money for decades is implausible on its face. In any case, we have gradually pieced together the true picture.

The Reality of Pre-Regulatory Utility Competition

Marvin Olasky pinpoints 1905 as a watershed year in the saga of public utilities in America. That year a merger took place between two of the nation’s largest electric companies, Chicago Edison and Commonwealth Electric. Olasky cites a 1938 monograph by economist Burton Behling, which declared that prior to 1905 the market for municipal electricity “was one of full and free competition.” Market structure bore a superficial resemblance to cable television today in that municipalities assigned franchise rights for service to corporate applicants, the significant difference being that “the common policy was to grant franchises to all who applied” and met minimum requirements. Olasky describes the resulting environment as follows: “Low prices and innovative developments resulted, along with some bankruptcies and occasional disruption of service.”

That qualification – “some bankruptcies and occasional disruption of service” – raises no red flags for economists; it is the tradeoff they expect to encounter in exchange for the benefits of low prices and innovation. But it is integral to the story we are telling here. Those anecdotal tales of dislocation are the source of the historical scare stories told by later generations of economic historians, utility propagandists and left-wing opportunists. They also provided contemporaneous proponents of public-utility regulation with ammunition for their promotional salvos.

Who roamed the utility landscape during the competitive years? In 1902, American Bell Co. had about 1.3 million subscribers, while the independent companies competing with it had over 2 million subscribers altogether. By 1905, Bell’s industry leadership was threatened sufficiently to inspire publication of a book entitled How the Bell Lost its Grip. In Toledo, OH, an independent company, Home Telephone Co., began competing with Bell in 1901, charging rates half those of Bell. By 1906, it had 10,000 subscribers compared to 6,700 for the local Bell company. In Nebraska and Iowa, independent-company subscribers outnumbered Bell’s by 260,000 to 80,000. Numerous cities held referenda on the issue of granting competitive franchises for telephone service. Competition usually won out. In Portland, OR, the vote was 12,213 to 560 in favor of granting the competitive franchise; in Omaha, NE, the independent franchise won by 7,653 to 3,625. A national survey polled 1,400 businessmen on the issue; 1,245 said that competition had produced or could produce better phone service in their community, and 982 said that competition had forced their Bell company to improve its service.

Obviously, one option open to the Bell (and Edison electric) companies was to cut prices to meet competition. But because Bell and Edison were normally the biggest companies in a city or region, with the most subscribers, a price cut was much more costly to them than to a smaller independent: the big company had so many more inframarginal customers. Consequently, these leading companies looked around for alternative ways of dealing with pesky competitors. The great American rule of thumb in business is: If you can’t beat ’em, join ’em; if you can’t beat ’em or join ’em, bar ’em.
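
The arithmetic behind that asymmetry is simple; the subscriber counts below are assumed purely for illustration. If Bell serves 100,000 subscribers in a city while an independent serves 10,000, matching a $1-per-month price cut sacrifices

$$100{,}000 \times \$1 = \$100{,}000 \;\text{(Bell)} \qquad\text{vs.}\qquad 10{,}000 \times \$1 = \$10{,}000 \;\text{(independent)}.$$

The incumbent forgoes ten times the inframarginal revenue, which is why meeting competition head-on was the least attractive item on Bell’s menu.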

The Deadly Duo: Theodore Vail and Samuel Insull

Theodore Vail was a leading American business executive of the 19th century. He was President of American Bell from 1880 to 1886, and rejoined the Bell system upon becoming an AT&T board member in 1902. Vail commissioned a city-by-city study of Bell’s competitive position. It persuaded him that Bell’s business strategy needed overhauling. Bell’s corporate position had been that monopoly was the only technically feasible arrangement because it enabled telephone users in different parts of a city, and even in different cities, to converse. As a company insider conversant with the latest advances, Vail knew this excuse was wearing thin, because system interconnections were even then becoming possible. Competition was already eating into Bell’s market share, and with interconnection on the horizon Vail knew that Bell’s supremacy would vanish unless the company’s strategy was revitalized.

The idea Vail hit upon was based on the strategy employed by the railroads about fifteen years earlier. In order to win public acceptance for the special government favors they had received, the roads commissioned puff pieces from free-lance writers and bribed newspaper and magazine editors to print them. Vail expanded this technique into what later came to be called “third-party” editorial services; he employed companies for the sole purpose of producing editorial matter glorifying the Bells. One such firm earned over $100,000 from the Bell companies while simultaneously earning $84,000 per year to place some 13,000 favorable articles annually about electric utilities. (These usually appeared as what we would now call “advertorials” – unsigned editorials citing no source.) The companies did not formally acknowledge their link with the utilities, although it was exposed in investigative works such as 1931’s The Public Pays by Ernest Gruening.

Vail combined this approach with another tactic borrowed from the railroads – the pre-emptive embrace of government regulation. Historian Gabriel Kolko provided documentation for his thesis that the original venture in federal regulation, the Interstate Commerce Act of 1887, was sponsored by the railroads themselves as a means of cartelizing the industry and suppressing the troublesome competitive forces that had bankrupted one railroad after another by producing price wars and persistently low freight rates. The public uproar over differential rates for long hauls and short hauls gave both railroads and regulators the excuse they needed to claim that competition had failed and only regulation could provide “just and reasonable rates.” Not surprisingly, the regulatory solution was to impose fairness and equality by requiring railroads to raise long-haul rates to the level of short-haul rates, so that all shippers now paid equally high rates per mile.

Vail was desperate to suppress competition from independent phone companies, but knew that he would then face the danger of lawsuits under the embryonic Sherman Antitrust Act, which contained a key section forbidding monopolization. The only kind of competition Vail approved of was “that kind which is rather ‘participation’ than ‘competition,’ and operates under agreement as to prices or territory.” That is, Vail explicitly endorsed cartelization over competition. Unfortunately for him, the Sherman Act also contained a section outlawing price collusion. Buying off the public was clearly not enough; Vail would have to stave off the federal government as well. So he sent AT&T lobbyists to Washington, where they achieved passage of legislation (the Mann-Elkins Act of 1910) placing interstate telephone and telegraph communications under the aegis of the ICC.

Vail feared competition, not government. He was confident that regulation could be molded and shaped to the benefit of the Bells. He knew that the general public and particularly his fellow businessmen would take a while to warm up to regulation. “Some corporations have as yet not quite got on to the new order of things,” he mused. By the time Vail died in 1920, that new order had largely been established thanks to the work of Vail’s contemporary, Samuel Insull.

Insull emigrated from England in 1881 to become Thomas Edison’s secretary. He rose rapidly to become Edison’s strategic planner and right-hand man. At Edison’s side, Insull saw firsthand the disruptive effects of innovation on markets when competition was allowed to function. Insull made a mental note not to let himself become the disruptee. With Edison’s blessing, Insull took the reins of Chicago Edison in 1892. His tenure gave him an education in the field of politics to complement the one Edison had given him in technology. In 1905, he merged Chicago Edison with Commonwealth Electric to create the nation’s leading municipal power monopoly.

Like Vail, Insull recognized the threat posed by marketplace competition. Like Vail, Insull saw government as an ally and a tool to suppress his competitors. Insull’s embrace of government was even warmer than Vail’s because he perceived its vital role to be placating and anesthetizing the public. As Olasky put it, “Insull argued that utility monopoly… could best be secured by the establishment of government commissions, which would present the appearance of popular control.”

The commission idea would be sold to the public as a democratic means of establishing fair utility rates. Sure, these rates might be lower than the highest rates utility owners could get on their own, but they would certainly be higher than those prevailing with competition. And the regulated rates would be stable, a sure thing, not the crap shoot offered by the competitive market. In a 1978 article in the prestigious Journal of Law and Economics, economic historian Gregg Jarrell documents that the first states to implement utility regulation saw rising prices and profits and falling utility output, while states that retained competitive utility markets had lower utility prices. Jarrell’s conclusion: “State regulation of electric utilities was primarily a pro-producer policy.”

Over the years, this trend continued, even though utility competition died off almost to the vanishing point. Yet it remained true that those few jurisdictions that allowed utility competition – usually phone, sometimes electric – benefitted from lower rates. This attracted virtually no public attention.

Insull realized that the popularity of competition was just as big an obstacle as its reality in the marketplace. So he slanted his public-relations efforts to heighten the public’s fear of socialism and to promote utility regulation as the alternative to a government-owned, socialized power system. Insull foresaw that politicians and regulators would need to use the utility company as a whipping boy, pretending to discipline it severely and accusing it of cupidity and greed. This would allow government to assume the posture of stern guardian of the public welfare and champion of the consumer – all the while catering to the utility’s welfare behind closed doors. Generations of economists became accustomed to seeing this charade performed at PUC hearings. Their cynicism was tempered by the fact that those same economists were earning handsome incomes as consultants to one or another of the interested parties at those hearings. Over the years, this iron quadrangle of interested parties – regulators, lawyers, economists and “consumer advocates” – became the staunchest and most reliable defender of the public-utility regulation process. Despite being in the best position to appreciate its endless waste and hypocrisy, they were blinded to it by self-interest.

Insull enthusiastically adopted the promotional methods pioneered by the railroads and imitated by Theodore Vail. One of his third-party firms, the Illinois Committee on Public Utility Information, was led by Insull subordinate Bernard J. Mullaney. The Committee distributed 5 million pieces of pro-utility literature in the state in 1920 and 1921. Mullaney carefully cultivated the favor of editors by feeding them news and information of all kinds in order to earn a key quid pro quo – publication of his press releases. This favoritism went as far as providing editors with free long-distance telephone service as an in-kind bribe. Not to be overlooked, of course, is that most traditional of all shady relationships in the newspaper business – buying ads in exchange for preferential treatment in the paper. Electric companies, like the Bells, were prodigious advertisers and took lavish advantage of the fact. In eventual hearings held by the Federal Trade Commission and the Federal Communications Commission, testimony and exhibits revealed that Bell executives had newspaper editors throughout the West and Midwest in their pockets.

Over the years, as public-utility regulation became a respected institution, the need for big-ticket PR support waned. But utilities never stopped cultivating political support. The Bell companies in particular bought legislators by the gross, rivaling teachers’ unions as the leading political force in statehouses across the nation. When the challenge of telecommunications deregulation loomed, the Bells were able to stall it, postponing its benefits to U.S. consumers for a decade longer than those enjoyed abroad.

Profit regulation left utilities with no profit motive to innovate or cut costs. This caused costs to inflate like a hot-air balloon. Sam Insull realized that he could make a healthy profit by guaranteeing his market, killing off his competition and writing his profit in stone through regulation. Then he could ratchet up real income by “gold-plating the rate base” – increasing salaries and other costs and forcing the ratepayers to pay for them. Ironically, he ended up going broke despite owning a big portfolio of utilities. He borrowed huge sums of money to buy them and expand their operations. When the Depression hit, he found that he couldn’t raise rates to service the debt he had run up. He was indicted, left the country, returned to win acquittal on criminal charges but died broke from a heart attack – just one more celebrated riches-to-rags Depression-era tale.

The lack of motivation made utilities a byword for inefficiency. Bell Labs invented the transistor, but AT&T was one of the last companies to use it, because it still had vacuum tubes on hand and had no profit motive to switch and no competitive motive to serve its customers. An AT&T company made the first mobile-phone call in 1946, but the technology withered on the vine for 40 years because the utility system had no profit motive to deploy it. Touch-tone dialing was invented in 1941 but not rolled out until the 1970s. Bell Labs developed early high-speed computer modems but couldn’t test high-speed data transmission because regulators hadn’t approved tariffs (prices) for data transmission. The list goes on and on; in fact, the entire telecommunications revolution began by accident, when a regulator became so fed up with AT&T’s inefficiency that he changed one regulation in the 1970s and allowed a company called MCI to compete with the Bells. (We owe Andy Kessler, longtime AT&T employee and current hedge-fund manager, for this litany of innovative ineptitude.)

What is Net Neutrality All About?

Today, the call for “net neutrality” by politicians like President Obama is a political pose, just as the call for public-utility regulation was a century ago. Robert Litan of the Brookings Institution has pointed out the irony that slapping a Title II common-carrier classification on broadband Internet providers would not even prevent them from practicing the paid prioritization the President complained of in his speech! Indeed, for most of the 20th century, public utilities practiced price discrimination among different classes of buyers in order to redistribute income from business users to household users.

The Internet as we know it today is the result of an unimpeded succession of competitive innovations over the last three decades; i.e., the very “open and free Internet” that the New York Times claims President Obama will now bestow upon us. Net neutrality would bring all this to a screeching halt by imposing regulation on most of the Web and taxes on consumers. Today, the biggest chunk of the typical phone bill is the charge for “universal service,” a redistributive tax ostensibly intended to make sure everybody has phone service. Yet before the proliferation of cell phones, the percentage of the U.S. population owning televisions – which were unregulated and benefitted from no “universal service” tax – was several percentage points higher than the percentage owning and using telephones. In reality, the universal-service tax was used to perpetuate the regulatory process itself.

In summary, then, the balance sheet on public utilities shows they were plotted by would-be monopolists to stymie competition and enlist government and regulators as co-conspirators. The conspiracy stuck consumers with high prices, reduced output, mediocre service, high excise taxes and – worst of all – stagnant innovation for decade after decade. All this is balanced against the dubious benefit of stability – the sort of stability the U.S. economy has shown in the last five years.

A similar future awaits us if we treat the Internet’s imagined ills with the regulatory nostrum called net neutrality.

DRI-281 for week of 2-24-13: Our Telecommunications Marketplace: The Rest of the Story

An Access Advertising EconBrief:

Our Telecommunications Marketplace: The Rest of the Story

Last week’s EconBrief told the tale of the man who, with reasoned premeditation, set out to release the telecommunications marketplace from the thrall of natural monopoly. This week we counter with what the late Paul Harvey might have called “the rest of the story” – the complement to the policy revolution wrought by Tom Whitehead in the White House Office of Telecommunications Policy.

This is a different story altogether. The actions of the Federal Communications Commission (FCC) and the Department of Justice (DOJ) were triggered by the chance decision of one man. That man was not an economist or a free-market ideologue. He was a lawyer and bureaucrat motivated by helplessness and disgust with his task of regulating the Bell system. He sought only to inflict a pinprick – but ended up helping to topple the world’s largest corporation from its monopoly throne.

The key elements of the story were told by economic historian Peter Temin in his short essay, “The Primrose Path,” in Second Thoughts: Myths and Morals of U.S. Economic History, edited by Donald N. McCloskey.

Enter Bernie Strassburg

Bernie Strassburg was a lawyer who headed the FCC’s Common Carrier Bureau. He was charged with regulating telephone and telegraph companies; e.g., he rode regulatory herd on AT&T and Western Union.

In the early 1960s, AT&T was the world’s largest corporation. Federal law gave it a virtual monopoly on American telephone service, both at the local level and for long-distance service. The monopoly extended to telephone equipment as well: it was illegal to use any equipment not manufactured by Bell. Thus, the Bell system was vertically integrated.

Regulating the Bell system was like wrestling an octopus. Each of the Bell regional companies was regulated by the public-utility commissions (PUCs) within its service area. PUCs conducted hearings to determine the allowable “fair rate of return” on the utility’s rate base. This formed the basis for the rates charged by the company.

The word “rates” applies literally. Instead of charging one universal rate to all users, the Bells charged differential rates to different classes of users. Residential users got preferential low rates, thanks to the doctrine of “universal service.” Telephone service was deemed a necessity for health and safety reasons, and the low rates were ostensibly necessary to make it affordable to low-income residents. Business users got special rates – special high rates, that is. After all, businesses could apparently afford to provide all kinds of non-salary benefits for employees, such as health insurance, pensions and retirement accounts. Why not make businesses pay high rates for telephone service and use the proceeds to subsidize residential service?

Of course, economists know why not. This is precisely analogous to a tax on business, and no business ever really pays a tax. Instead, the tax – or, in this case, the high charge for telephone use – is borne in the short run by owners and employees of firms driven out of business by the higher costs, as well as by consumers of goods produced using telephone services as an input. In the long run, all costs are borne by suppliers of inputs and/or consumers – here, consumers of goods that use telephone services in their production and suppliers of inputs to those industries. Too few of those goods are produced and too many resources are devoted to providing telephone services to residential consumers. Although residential consumers pay a lower price for telephone service – at least temporarily – their real incomes are almost surely lower thanks to the smaller quantity of other goods and services they consume.
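
The mechanics of the cross-subsidy are easy to sketch with assumed numbers. Suppose each line costs $20 per month to serve and there are two residential lines for every business line. Charging residences $15 and businesses $30 just covers cost on average:

$$\frac{2 \times \$15 + 1 \times \$30}{3} = \$20.$$

The $10 overcharge per business line operates exactly like an excise tax on telephone-using firms, with the incidence consequences described above.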

A crowning irony of the politically sacrosanct doctrine of universal service is that the penetration of telephone service never reached the levels attained by television. Apparently telephone service wasn’t as necessary as television service, no matter what regulators claimed.

Public-utility owners traded the chance of high profits for lower nominal profits earned at virtually zero risk. Utility managers earned high salaries, worked in plush offices and oversaw huge staffs. Utility executives substituted easy living and a quiet life for the go-go, big-profit lifestyle of corporate America. Well-off elderly Americans held AT&T and regional Bell stocks in their portfolios for risk-free returns. And this cushy deal was safeguarded by Bell’s political activities. State and local rate regulation attracted Bell lobbyists like locusts to the legislative harvest. Lobbying costs were paid by ratepayers.

The Bell system’s equipment monopoly was just as stifling as its monopoly on phone service, which was reinforced by a prohibition on the conjunction of Bell and non-Bell equipment. Thus, use of competing answering machines, modems and telephones was barred if it involved interaction with Bell facilities. In 1949, DOJ filed an antitrust lawsuit against AT&T challenging the integrated company’s refusal to allow “private communication” on its network.

Bell’s response was that it was willing to provide such items for its customers. Indeed it was – at a price. Bell’s AT&T Long Lines company also provided long-distance service – at high rates that subsidized the system’s artificially low residential customer rates. It provided data transmission service to business – at prices so high that some businesses even incurred the expense of setting up their own two-way private networks between key locations. The issue wasn’t so much provision of service as its terms.

The Paradox of Natural Monopoly Regulation

The idea behind natural monopoly is that one single firm is the most efficient supplier for the entire market. Even if competition is allowed, the process will inevitably culminate in the victory of a single firm, and that firm will then proceed to establish the price and output of a pure monopolist. Because that price is so much higher and the rate of output so much less than would be “chosen” (in the aggregate) by a competitive industry of firms, government regulation intervenes to seek a preferable compromise. The efficiency of single-firm production is enjoyed, while the price and output outcomes of pure monopoly are moderated – not to the degree attained under competitive conditions, but enough to reward the firm’s owners with only a “normal profit.” That rate of profit is only just sufficient to attract the capital needed by the firm.

This compromise seemed superficially attractive. It avoided the disadvantages of the other popular public-utility model, adopted in Europe and Canada. Equating the public-utility price to marginal cost would approximate the price and output result under competition. But public utilities often exhibit decreasing average costs of production for technological reasons such as the famous two-thirds rule. When an average magnitude is falling, its corresponding marginal value must be less than the average; the marginal is pulling the average down. If price is set equal to marginal cost, it must be less than average cost under decreasing-cost conditions of production. When price is less than average cost, the firm is losing money. The European/Canadian model is feasible only when accompanied by large public subsidies to the public-utility firm. Meanwhile, all the same difficulties and expense of conducting rate cases and calculating the utility’s costs are still present.
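
The relationship between a falling average and its marginal can be stated compactly. With total cost $C(q)$, average cost $AC = C(q)/q$ and marginal cost $MC = C'(q)$:

$$\frac{d(AC)}{dq} = \frac{C'(q)\,q - C(q)}{q^{2}} = \frac{MC - AC}{q},$$

so average cost is falling exactly when $MC < AC$. Setting $P = MC$ then yields profit $(P - AC)\,q = (MC - AC)\,q < 0$: a marginal-cost-pricing utility loses money whenever its average cost is declining, which is why the European/Canadian model cannot dispense with subsidies.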

In actual practice, the case of the Bell system exposed the gaping flaws in the U.S. version of natural monopoly regulation – indeed, in the very concept of natural monopoly itself. If regulation had established a single price for all users, it might have remained viable. But this would have exposed the true costs of providing phone service to the American public. It would have allowed them to judge whether the benefits of having a single integrated firm provide service to everybody were worth the drawbacks of excluding competitors and innovation from the market.

Government was not willing to tell the public the truth when a politically irresistible lie was within its grasp. By setting residential-consumer rates artificially low, it could pose as the public’s benefactor, the savior who rescued them from the clutches of the evil monopoly. Of course, regulators would then have to make good on their promise to the public utility’s owners by making up the lost revenue somewhere else. They did this by allowing the company to charge draconian prices to business and long-distance users. This won the votes of dreamers who liked to fantasize that non-human entities called “businesses” could pay taxes and lift the burden of high prices from ordinary people.

It is true that the public avoided the obvious ill effects of unregulated pure monopoly – a single high price, reduced output and above-normal profits. But all these same effects were realized in hidden form – monopoly prices paid by businesses and long-distance users, reduced output of private communications and goods using telephone services and risk-free profits and lifestyles enjoyed by public-utility owners, managers and employees.

While an unregulated monopolist doesn’t have to worry about regulation, he does have to worry about entry of competing firms. Of course, the theory of natural monopoly claims that firms won’t want to enter once the natural monopoly is attained. But in that case, why did the federal government constantly fend off the advances of firms wanting to compete with AT&T? This highlights the worst aspect of natural monopoly regulation – the strangling of incipient competition in its crib.

And this is where Bernie Strassburg came in.

The Pinprick

The DOJ lawsuit, filed in 1949 and settled by consent decree in 1956, sought to restructure the Bell system along European lines by forcing divestiture of the Bell Operating Companies (the regional Bells) from the equipment divisions (Western Electric and Bell Labs). Bell insisted on maintaining its integrated system. The Eisenhower administration asked the FCC whether it could regulate the integrated system.

Writing in his capacity as head of the Common Carrier Bureau, Strassburg drafted a memo maintaining that the FCC had the authority to regulate the entire Bell system but lacked the resources and expertise to do the job. Strassburg was reflecting the reality of natural-monopoly regulation as we have described it. But his bosses at the FCC, thinking only of their own welfare, deleted the second part of his reply and submitted the edited memo to the administration. Consequently, the Bell system’s position was accepted on the presumption that the federal government’s regulatory authority would suffice to protect the public welfare.

Now Strassburg was in a fix. He had been told to herd an unruly rogue elephant without being given as much as a stick to help with the job. In desperation, he cast about for any means of prodding the beast towards lowering costs (hence, prices) and accepting competition. First, he used the imminence of computer technology as an excuse to force acceptance of private devices as adjuncts to Bell technology. He was aided by the FCC’s decision in the Carterfone case, which forbade Bell’s prohibition of outside equipment on private lines.

Next, Strassburg considered the application of a tiny company with only 100 employees: MCI. It wanted to lease its microwave-tower facilities stretching from St. Louis to Chicago to private businesses for use in voice and data transmission via radio waves. Companies that could not afford to build their own internal networks could lease MCI’s facilities more cheaply than they could purchase Bell’s expensive package of business services.

Microwave technology had been around since World War II. The FCC had already decided in 1959 that the Bell system did not own airwave rights to microwave radio transmissions. The question was: Could MCI meet the government standard of “convenience and necessity” required to get permission to enter the market?

There were countless businesses tired of paying through the nose for AT&T’s business service, even on this one route, so lining up prospective customers to testify on MCI’s behalf was easy. But MCI had to show a “need” for its service by proving that it was more efficient than Bell. It wasn’t enough that it could offer customers a lower price – the government didn’t recognize that as sufficiently valuable to justify allowing competition.

MCI claimed that its microwave technology was more efficient than AT&T’s landline technology. AT&T countered that MCI was simply “skimming the cream” of AT&T’s business customers, who were paying AT&T’s monopoly price, without having to assume the burden of providing residential service to AT&T’s local customers. In a narrow sense, AT&T was right, because the technological differences did not represent large differences in cost. But in the substantive economic sense, MCI was lined up on the side of economic efficiency and consumer welfare. MCI was breaking up the AT&T pattern of cross-subsidy between business and residential consumers, which was creating concealed monopoly inefficiencies and harming consumers on net balance.

Strassburg had no illusions that MCI would be able to compete effectively with powerful AT&T. As Peter Temin noted, all Strassburg wanted was a pin to prick the rogue elephant with, something to wake it up to the changing technological realities of the unfolding new age. So he supported MCI’s petition to operate. AT&T’s appeal was denied.

MCI Unleashed

In 1969, MCI was allowed into the market for microwave voice and data transmission. Its new CEO, venture capitalist William McGowan, didn’t waste a second. He knew that there were dozens of routes whose profit opportunities mimicked those of the St. Louis-Chicago corridor. So he created over 2,000 MCI-affiliate companies whose applications flooded the FCC.

Bernie Strassburg abandoned all pretense of considering each individual application. In 1971, the FCC issued a general rulemaking approving microwave facilities that met stated criteria for service.

In order to serve hundreds of different customers, MCI couldn’t contemplate building separate connection facilities with each one. Instead, MCI applied to interconnect with Bell’s facilities. By this time, AT&T could see the handwriting on the wall and knew that MCI was a genuine competitive threat. It refused MCI’s interconnection requests. MCI filed an antitrust action against AT&T in 1974 alongside the DOJ’s celebrated suit.

Also in 1974, MCI offered its own package of switched long-distance service. This marked a competitive milestone. In five years, MCI had gone from a piddling 100-employee firm with no revenue and one private-service route to a full-fledged competitor of the mighty AT&T.

This was too much even for the FCC, which opposed MCI’s petition to offer long-distance service. But the genie was out of the bottle now. Bernie Strassburg had unleashed the forces of competition, and nothing could pen them back up again. By 1982, AT&T had agreed to give up ownership of the Bell Operating Companies in exchange for the right to retain its vertically integrated manufacturing and long-distance operations. The Bell System as such was gone. The monopoly was broken.

During its corporate career, MCI developed important innovations. The company applied for the first common-carrier satellite license when the White House OTP’s “Open Skies” policy went into effect. It was the first telecommunications firm to install single-mode fiber-optic cable, which is the industry standard today. In the early 1980s, MCI developed an early version of electronic mail. And in the mid-80s, MCI worked with several universities to establish high-speed telecommunications links between their computer systems – a forerunner of the Internet.

The Rest of the Story

Bernie Strassburg’s story complements that of Tom Whitehead. The birth of our modern telecommunications marketplace was a miracle. Tom Whitehead intended the substitution of competition for monopoly but it was miraculous that he ever ascended to a position of sufficient power to effect it. Bernie Strassburg intended no such outcome as the birth of competitive telecommunications; all he ever wanted was to get more regulatory leverage over the Bell System. He never questioned the bona fides of the natural monopoly argument nor did he hope that MCI would ever compete successfully with AT&T.

The fact that we needed a miracle to give us the manifest blessings of cell phones, digital technology, iPhones, smartphones, cable telephony and streaming Internet is profoundly disturbing. In a competitive environment, the fact that any particular firm succeeds as spectacularly as Microsoft or Apple have may be amazing, but the fact that some firm does is no miracle at all; it is what we justifiably expect. Regulation gave us plodding, inefficient, complacent monopoly for decades; the fact that competition eventually triumphed over it was a miraculous accident.

Nor did it have to happen this way. The well-known industrial organization economist Harold Demsetz pointed out some four decades ago that regulated monopoly is not natural, necessary or inevitable. Even if there is no competition in the market, firms can still compete for the market. That is, we could have put up the right to operate as a monopolist in a public-utility market for competitive bids. In effect, firms could bid by committing to the price and quantity targets they would subsequently meet, with the best bid winning the contract. In this way, the bidding process itself would be the check on monopoly power. If there were enough bidders, we would expect the outcome to approximate that of a competitive process.
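
Demsetz’s mechanism is simple enough to sketch in a few lines of code. The following is a minimal illustration, not a description of any actual franchise auction; the firm names and figures are invented, and real bids would be multidimensional (price schedules, quality and service commitments):

```python
# A deliberately simplified sketch of Demsetz-style franchise bidding:
# competition FOR the market substitutes for competition IN the market.
# All firm names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Bid:
    firm: str
    price: float   # per-unit price the firm commits to charge
    output: float  # minimum output the firm commits to supply

def award_franchise(bids: list[Bid]) -> Bid:
    """Award the franchise to the lowest committed price,
    breaking ties in favor of the larger committed output."""
    return min(bids, key=lambda b: (b.price, -b.output))

bids = [
    Bid("Acme Power", price=27.0, output=70.0),
    Bid("Beta Electric", price=25.4, output=74.6),
    Bid("Gamma Utility", price=30.0, output=60.0),
]

winner = award_franchise(bids)
print(f"Franchise awarded to {winner.firm} at a committed price of {winner.price}")
```

With enough bidders, rivalry at the auction stage drives the winning price commitment down toward average cost – approximating the regulated outcome without the rate-case apparatus.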

In his book Capitalism and Freedom, Milton Friedman pondered the possibilities under so-called “natural monopoly” conditions. He concluded that unregulated monopoly is preferable to regulated monopoly. The history of public-utility regulation vindicates Friedman’s position. The defects of hidden monopoly under regulation outweigh those of straightforward monopoly.

Today, the concept of natural monopoly is laughable when applied to the telecommunications marketplace because technological innovation proceeds so quickly that it offsets any temporary effects of monopoly power. Today’s “monopolist” is tomorrow’s has-been; a downward-sloping cost curve cannot compete with a downward-shifting cost curve.

Rather than rely on regulation and hope for miracles, it is long past time to reform regulation drastically or eliminate it altogether.