DRI-202 for week of 4-26-15: The Comcast/Time-Warner Cable Merger Bites the Dust

An Access Advertising EconBrief:

The Comcast/Time-Warner Cable Merger Bites the Dust

This week brings the news that the year’s biggest and most highly publicized merger, between cable television titans Comcast and Time-Warner Cable, has been called off. Although the decision was technically made by Comcast, which announced it on Monday, it really came from the Federal Communications Commission (FCC), whose de facto opposition to the merger became public last week. This continues a virtually unbroken string of economically inane measures taken by the Obama administration and its regulatory minions.

Theoretically, merger policy falls within the province of industrial organization, the economic specialty spawned by the theory of the firm. Actually, the operative logic had nothing whatever to do with economics. Instead, the decision was dictated by the peculiar incentives governing the behavior of government.

The high visibility of the intended merger and the huge volume of comment it spawned make it worthwhile to examine carefully. What made it so attractive to the principals? Why was it denounced so bitterly in certain quarters? Was the FCC right to oppose it?

Who Were the Principals in the Merger?

Comcast and Time-Warner Cable (hereinafter, TWC) are today the two leading firms in the so-called “pay-TV” industry. The quotation marks reflect the fact that the term has undergone several changes over the course of television history. Today it refers to two different groups of television consumers. First are subscribers to cable television, the biggest revenue source for both Comcast and TWC. Born in the 1950s and nurtured in the 1960s, cable TV fought tooth and nail to gain a toehold against “free” broadcast television. It succeeded by offering better reception from buried coaxial-cable transmission lines, more viewing choices than the “Big 3” national broadcast network channels offered on free TV and a blessed absence of commercial interruption. Its success came despite the efforts of government regulators, who barred local cable companies from serving major metropolitan areas until the 1980s.

In the early days, municipalities were so desperate to get cable TV that local governments would offer a grant of monopoly to the first cable franchise to lay cable and promise to serve the citizenry. In return, the cable firm would have to pay various legal and illegal bribes. The legal ones came in the form of community-access and public-service channels that few watched but which gave lip service to the notion that the cable firm was serving the “public interest” and not merely maximizing profit. Predictably, these monopoly concessions eventually came back to haunt municipal governments when cable firms inexorably began to raise their rates without providing commensurate increases in programming value and customer service.

Today, the contractual arrangements with cable firms survive. But the grants of monopoly are no more. In many markets, other cable firms have entered to compete with the original franchisees. Even more important, though, are the other sources of competitive television service. First, there is satellite TV service provided by companies like DirecTV and Dish. A satellite dish – usually located on the customer’s roof – gathers the signal transmitted by the company and provides hundreds of channels to customers. Wireless firms like AT&T and Verizon can transmit television signals to provide television service as well. And finally, it has become possible to “stream” television signals digitally by means similar to those used to stream audio signals for songs. Consequently, a movie-streaming service like Netflix has become a potent competitor to cable television.

What Did Comcast and TWC Have to Gain from the Merger? 

The late, great Nobel laureate Ronald Coase taught us that business firms exist to do things that individuals can’t do for themselves – or, more precisely, things that individuals find too costly to do themselves and more efficient to “import” from outsiders. Take this same logic and extend it to business firms. Firms produce some things internally and purchase other things outside the firm. Logically, the inputs they produce internally are the ones they can produce at a cost lower than the external cost of purchase, while external purchases are made when the internal cost of production is too high.

Now extend this logic even further – to the question of merger, in which one firm purchases another. Both firms have to agree to the terms, including a price, which means that both firms consider the merged operation superior to separation. The term used to denote the advantages that arise from combination is synergy, which suggests that melding two elements produces a greater output than the two produce in isolation.

Why should putting two firms together improve on their separate efficiency? The first place to look for an answer is cost, the reason why businesses exist in the first place and the reason why they purchase inputs in the second place. The primary synergy in most mergers is elimination of duplicative functions. Because mergers themselves take time, effort and other resources to effect, there must be substantial duplication that can be eliminated in order to justify a merger on this ground alone. That is why mergers so often occur (or are threatened) among similar, competing firms with similar internal structures.

This applies to Comcast and TWC. Large parts of both firms are devoted to the same function; namely, providing cable television to subscribers. A merger would still leave them with the same total territory to service. But one central office, much smaller than the combined size of both before the merger, could now handle administration for the entire territory. The largest efficiencies would undoubtedly have been available in advertising. Economies of scale would have been gained from having one advertising department handle all advertising for the merged firm. Economies of size would have been available because the much larger total of advertising would have commanded volume discounts from sellers.

Given the gigantic size of the firms – their combined revenue would have yielded well over $80 billion – these economies alone might well have justified the merger. And that leaves out the most important reason for the merger. In times of market turmoil, mergers are often referred to as “consolidation.” This is a polite way of saying that the firms involved are girding their loins for future battle. They are fighting for their business life.

This is completely at odds with the picture painted by self-styled “consumer advocates” and government regulators. The former whine about the poor quality of service provided by Comcast to its cable subscribers, calling the company a “lazy monopolist.” By definition, a lazy monopolist doesn’t have to worry about its future – it is living off the fat of the land or, as an economist puts it, taking some of its profits in the form of leisure. (Of course, the critics can’t have it both ways – if the firm is “lazy” then it must be extracting less profit from consumers than it could if it were “aggressive.” But the act of moral posturing uses up so much mental energy that there is little left for critics to use in applying logic.) Government regulators say that Comcast and Time-Warner have so much power that, when combined, they could exclude their potential competitors from the market for “high-speed broadband.”

But the picture painted by market analysts is completely different. Comcast and TWC are leading players in a market that is beginning to wither on the vine. They are not merely providing “pay TV;” they are providing it via coaxial cable buried in the ground and via subscription. This method of providing television service will sooner or later become an endangered species – and the evidence is leaning toward “sooner.” People are beginning to “cut the cord” binding them to cable television. They are doing it in at least three ways. For years, satellite services have made modest inroads into cable markets. Now wireless companies are increasing these inroads. Finally, streaming services are promoting the ultimate heresy – people are renouncing their television sets entirely by streaming TV programming on their computers. Consumers abandoned pay-TV in both 2013 and 2014; in the last year, cord-cutting in favor of streaming TV has occurred by the millions.

Not surprisingly, the prime mover behind all of these threats to cable TV is cost. In the early days of cable, hundreds of channels were a dazzling novelty after the starvation diet of three major networks (with perhaps one UHF channel as an added spice). People occasionally surfed the channels just to find out what they might be missing or for something of genuine interest. Over time, though, they bore an increasing cost of holding an inventory of dozens of channels handy on the mere off-chance that something interesting might turn up. That experience gradually made the tradeoff seem less and less favorable and the lure of a TV lineup tailored to their specific preferences and budget more attractive. From here on, the prices of cable TV’s competitors will go nowhere but down.

These competitors are not only competing on the basis of price but also on the basis of product quality. Increasingly, they are now creating their own programming content. This trend began years ago with Home Box Office (HBO), which started life as a movie channel but entered the top tier of television competition when it began producing its own movies and specials. Now Netflix has followed suit and everybody else sees the handwriting on the wall.

The biggest attraction of the merger for Comcast and Time-Warner was the combined resources of the two firms, which would have given the resulting merged firm the kind of war chest it needed to fight a multi-front competitive war with all these competitors. Each of the two firms brought its own special advantages to the fight, complementing the weaknesses of the other. Comcast owns NBC, currently the most successful broadcast-TV channel and a locus of programming expertise. Another of its assets is Universal Studios, a leading film producer since the dawn of Hollywood and a television pioneer since the 1950s. TWC brings the additional heft and nationwide presence necessary to lift Comcast from regional cable-TV leader to international media player.

What is an “Industry?”

Everybody has heard the word “industry” used throughout their lives. Everybody thinks they know what it means. The federal government lists and classifies industries according to the Standard Industrial Classification (SIC) code. The SIC code defines an industry by its technical characteristics, and the definition becomes narrower as the work performed by the firms becomes more specialized. From the point of view of economics, though, there is a problem with this strictly technical approach to definition.

It has no necessary connection to economics at all.

The only economic definition of an industry relates to the economic substitutability of the products produced by its members. If the products are viewed by consumers as economically homogeneous – i.e., interchangeable – then the aggregate of firms constitutes an industry. This holds true regardless of the technical features of those products. They may be physically identical; indeed, that might seem highly likely. But identical or not, their physical similarity has nothing to do with the question of industrial status.

If the goods are close substitutes, we may regard the firms as comprising an industry. How close is “close?” Well, in practice, economists usually use price as their yardstick. If significant variations in the price of any firm’s output will induce consumers to shift their custom to a different seller, then that is sufficient to stamp the output of different sellers as close substitutes. (We hold product quality constant in making this evaluation.)
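To put a number on “close,” economists often compute a cross-price elasticity: the percentage change in the quantity demanded of one seller’s product divided by the percentage change in another seller’s price. A minimal sketch in Python, using purely hypothetical figures:

```python
def cross_price_elasticity(pct_change_qty_x, pct_change_price_y):
    """Percentage change in quantity demanded of good X divided by
    the percentage change in the price of good Y."""
    return pct_change_qty_x / pct_change_price_y

# Hypothetical: cable rates rise 10% and streaming subscriptions rise 8%.
print(cross_price_elasticity(8.0, 10.0))  # 0.8
# A significantly positive value marks the two products as close
# substitutes; a negative value would mark them as complements.
```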

This distinction – between the definition of an industry in strictly technical terms and in economic terms – is the key to understanding modern-day telecommunications, the digital universe and the Comcast/TWC merger.

Without saying it in so many words, the FCC proposes to define markets and industries in non-economic terms that suit its own bureaucratic self-interest. It does this despite the fact that only economic logic can be used when evaluating the welfare of consumers and interpreting the meaning of antitrust law.

The FCC’s Rationale for Ordering a Hearing on the Comcast/TWC Merger

Comcast decided to pull the plug on its proposed merger with TWC because the FCC’s announced decision to hold a regulatory hearing on the merger was a signal of the agency’s intention to oppose it. (The power of the federal government to legally coerce citizens is so great that innocent defendants commonly plead guilty to criminal charges in order to minimize penalties, so it is not strange that Comcast should surrender preemptively.) It is natural to wonder what was behind that opposition. There are two answers to that question. The first answer is the one that the agency itself would have provided in the hearing and that has already been provided in statements made by FCC Chairman Thomas Wheeler. That answer should be considered the regulatory pretext for opposition to the merger.

For years, another regulatory agency – the Federal Trade Commission (FTC) – passed both formal and informal judgment on antitrust law in general and business combinations in particular. The FTC even provided a set of guidelines for what mergers would be viewed favorably and unfavorably. The guidelines looked primarily at what industrial-organization economists called industry structure. That term refers to the makeup of firms existing within the industry. Traditionally, this field of economics studies not only industry structure – the number of firms and the division of industry output among them – but also the conduct of existing firms – competition might be fierce, lackadaisical or even give way to collusive attempts to set price – and their actual performance – prices, output and product quality might be consistent either with competitive results or with monopolistic ones. But the FTC concerned itself with structural attributes of the market when reviewing proposed mergers, to the exclusion of other factors. It calculated what were known as concentration ratios – fractions of industry output produced by the leading handful of firms currently operating. If the ratio was too high, or if the proposed merger would make it too high, then the merger would be disallowed. When feeling particularly esoteric, the agency might even deploy a hyper-scientific tool like the “Herfindahl-Hirschman Index” of industry concentration as evidence that a merger would “harm competition.”
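For readers who have not met these tools, both measures reduce to simple arithmetic on market shares. A minimal sketch in Python, with hypothetical shares:

```python
def concentration_ratio(shares, n=4):
    """Sum of the market shares (in percent) of the n largest firms."""
    return sum(sorted(shares, reverse=True)[:n])

def herfindahl_hirschman(shares):
    """Sum of squared market shares (in percent); 10,000 for a pure monopoly."""
    return sum(s ** 2 for s in shares)

shares = [30, 25, 20, 15, 10]  # hypothetical market; shares sum to 100
print(concentration_ratio(shares))   # CR4 = 90
print(herfindahl_hirschman(shares))  # HHI = 2250
```

The higher either number, the more “concentrated” the market is deemed to be, and the dimmer a proposed merger’s regulatory prospects.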

In this case, the FCC needed a rationale to stick its nose into the merger. That was provided by President Obama’s insistence on the policy of “net neutrality” as he defined it. This policy contended that the leading cable-TV providers were “gatekeepers” of the Internet by virtue of their local monopoly on cable service. In order to give the policy a semblance of concreteness – and also to make the FCC look as busy as possible – the agency established a rule that the top pay-TV firm could control no more than 30% of the “total” market. This criterion is at least loosely reminiscent of the old FTC merger guidelines – except that the FTC merger guidelines at least had a tenuous relationship with economic theory and logic. The FCC’s policy has about as much to do with astrology as it does with economics; i.e., roughly nothing in either case. But, mindful of the FCC’s rule and in order to keep its merger hopes alive, Comcast sold enough of its cable-TV properties to Charter Communications to reduce the two companies’ combined pay-TV holdings to the 30% threshold.

In order to create the appearance of being progressive in the technical as well as the political sense, the FCC set itself up as the guardian of “high-speed broadband service.” For years leading up to the merger announcement, the FCC’s definition of “high-speed” was a speed greater than or equal to 4 Mbps. But after the merger announcement, the FCC abruptly changed its definition of the “high-speed market” to 25 Mbps or greater. Why this sudden change? Comcast’s sale of cable-TV assets had circumvented the FCC’s 30% market threshold, so the agency now had an incentive to invent a new hurdle to block the merger. The faster broadband-speed classification had the effect of including fewer firms, thereby making its (artificially defined) market smaller than before. In turn, this made the shares of existing firms higher. Under this revised definition – surprise, surprise! – the Comcast/TWC merger would have given the resulting firm 57% of this newly defined “market” rather than the 37% it would previously have had.
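The arithmetic of the maneuver is worth spelling out. In the sketch below, the subscriber counts are invented purely to show the mechanism; they are not the FCC’s actual figures:

```python
def market_share(firm_subs, market_subs):
    """A firm's share of a 'market' is only as meaningful as the
    definition of the denominator."""
    return 100 * firm_subs / market_subs

# Hypothetical subscriber counts, in millions. The merged firm serves
# the same customers either way; only the market definition changes.
merged_firm = 32.0
market_4mbps = 86.0    # all providers offering 4 Mbps or better
market_25mbps = 56.0   # the narrower set offering 25 Mbps or better

print(f"{market_share(merged_firm, market_4mbps):.0f}%")   # 37%
print(f"{market_share(merged_firm, market_25mbps):.0f}%")  # 57%
# Nothing about the firm changed; shrinking the denominator is what
# manufactured the larger "market share."
```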

Still, most industry observers figured that Comcast’s divestiture sale to Charter Communications, combined with what Holman Jenkins of The Wall Street Journal called “Comcast’s vast lobbying spending and carefully cultivated donor ties with the Obama administration”, would see the merger over the regulatory hurdles. Clearly, they reckoned without the determination of FCC Chairman Wheeler.

What Was the Actual Motivation of the FCC in Frustrating the Comcast/TWC Merger?

Regulators regulate. That is the explanation for the FCC’s de facto denial of the Comcast/TWC merger. It is the bureaucratic version of Descartes’s “I think, therefore I am.” After over a century of encroaching totalitarianism, it is only gradually dawning on America that big government is dedicated solely to the proposition that government of, by and for itself shall not perish from the Earth.

A recent Bloomberg Business editorial is an implicit rationale for the FCC’s action. The editorial marvels at how only recently it seemed that the forces of cable-TV darkness had the upper hand and were poised with their jackboots on the throats of consumers the world over. But now, with startling suddenness, cable’s position seems wholly tenuous, beset on all sides by uncertainty. And whom should we thank for this sudden reversal? Why, the FCC, of course, whose wise regulation has turned the tide. Instead of crediting competitive forces with making the FCC’s action unnecessary if not a complete non sequitur, the editorial gives the credit to the FCC for creating circumstances that preexisted and in which the agency had no hand.

One of Milton Friedman’s famous characterizations of bureaucracy compared it to the flight leader of a flock of ducks who, upon discovering that the rest of his V-formation has deserted him and is flying off in a different direction, scrambles to get back in front of the V again. By denying the merger, the FCC has re-positioned itself to claim credit for anything and everything that competition has accomplished so far and will accomplish in the future. If it had done nothing, regulation would have had to cede credit to market forces. By doing something – even something as crazy, useless and downright counterproductive as frustrating a potentially beneficial merger – the FCC has not only set itself up for future benefits, it has also fulfilled the first goal of every government bureaucracy.

It has justified its existence.

All this would have been true even if the FCC’s pre-existing commitment to net neutrality had not forced it to twitch reflexively every time the words “high-speed broadband” arise in a policy context. As it is, the agency was compelled to invent a “policy” for regulating a market that will soon be the most hotly competitive arena in the world – unless the federal government succeeds in wrestling competition to a standstill here as it did in telecommunications in the 1990s.

Why are Economic Theory and Logic Absent from the FCC’s Actions in the Comcast/TWC Merger?

Begin with a few matter-of-fact sentences from Forbes magazine’s summary of the merger. “Comcast and TWC do not directly compete with each other… and there is no physical overlap in the areas in which these companies offer services.” Competitors such as DirecTV, Dish, AT&T, Verizon and Netflix have “reduced the importance of the cable-TV market and given its customers other alternatives… Hence this merger would not significantly impact the choices available to the consumers in the service areas of these two companies.”

Forbes’ point was that old-time opposition to mergers by agencies like the FTC was based on the simplistic premise that when competitors merge, there is one fewer competitor in the market – which is then one step closer to monopoly. When there were few competitors to begin with, this line of thinking had a certain naïve appeal, even though it was wrong. But when the merging companies weren’t competitors in the first place, even this rather flimsy rationale evaporates. And this holds just as true in the so-called “market for high-speed broadband” as it does for the market for pay-TV. Why? Because President Obama and FCC Chairman Wheeler have anointed the cable companies as the gatekeepers of that “market,” and the only markets they can be the gatekeepers of are those same local markets in which Comcast and Time-Warner weren’t competitors before the merger announcement. Therefore the merger couldn’t have affected developments there, either.

The end-in-view of all economic activity is consumption. Consumers – the people who watch TV in whatever form – would not have been harmed or adversely affected by the merger. The consumer advocates who cite the bad service given by Comcast to its customers seem to have taken the view that the remedy for this offense is to make sure that nothing good happens to Comcast from now on. They apparently expect that the merger would have reduced the total volume of employment by the two firms – which it undoubtedly would – and that this would on its face have made customer service even worse – which it most certainly would not have done. Government never ceases to object to budget cuts and to predict even worse customer service when they are implemented, but bigger government never produced better customer service. Only competition does that – and the merger was a desperate attempt to prepare for and cope with competition.

The FCC’s imaginary market for high-speed broadband and its 30% threshold were as irrelevant to market competition as the price of tea in Ceylon. The entire digital universe is inventing its way around the anachronistic gatekeeper function performed by local cable firms. (The Wall Street Journal‘s editors couldn’t help reacting in amazement to the FCC’s announcement: “Is anybody at the FCC under 40?” Today it is only the senior-citizen crowd that is still tethered to desktop computers for Web access.)

Why Should the Man in the Street Be Expected to Embrace a Merger Between Large Corporations?

It has been estimated that the sum of mankind’s knowledge has increased more since 2003 than it did from the dawn of human history up to that point. Given the breakneck advance of learning, we cannot expect to comprehend the meaning and benefit of all that goes on around us. Instead, we must choose between the presumptive value of freedom and the restraining hand of government. We owe most of what we value to freedom and private initiative. It is genuinely difficult to identify much – if anything – that government does adequately, let alone brilliantly.

This straightforward comparison – rather than complex mathematics, econometrics or “he said, she said” debates between vested interests – should sway us to side with freedom and free markets. The average person shouldn’t “embrace” a corporate merger because he or she shouldn’t evaluate the issue on the basis of emotion. The merger should have been “tolerated” as an exercise of free choice by responsible adults – period.

DRI-241 for week of 11-9-14: The Birth of Public-Utility Regulation

An Access Advertising EconBrief:

The Birth of Public-Utility Regulation

Today’s news heralds the wish of President Obama that the Federal Communications Commission (FCC) pass strict rules requiring Internet providers to give equal treatment to all customers. This is widely interpreted (as, for example, by The Wall Street Journal front-page article of 11/11/2014) as saying that “the Federal Communications Commission [would] declare broadband Internet service a public utility.”

More specifically, the Journal’s unsigned editorial of the same day explains that the President wants the FCC to apply the common-carrier provisions of Title II of the Communications Act of 1934. Its “century-old telephone regulations [were] designed for public utilities.” In fact, the wording was copied from the original federal regulatory legislation, the Interstate Commerce Act of 1887; the word “railroad” was stricken and “telephone” was added to “telegraph.”

In other words, Mr. Obama wants to resurrect enabling regulatory legislation that is a century and a quarter old and apply it to the Internet.

We might be pardoned for assuming that the original legislation has been a rip-roaring success. After all, the Internet has revolutionized our lives and the conduct of business around the world. The Internet has become a way of life for young and old, from tribesmen in central Africa to dissidents from totalitarian regimes to practically everybody in developed economies. If we’re now going to entrust its fate to the tender mercies of Washington bureaucrats, the regulatory schema should presumably be both tried and true.

Public-utility regulation has been tried, that’s for sure. Was it true? And how did it come to be tried in the first place?

Natural Monopoly: The Party Line on Public-Utility Regulation

Public-utility regulation is a subset of the economic field known as industrial organization. Textbooks designed for courses in the subject commonly devote one or more chapters to utility regulation. Those texts rehearse the theory underlying regulation, which is the theory of natural monopoly. According to that theory, the reason we have (or had) regulated public utilities in areas like gas, electricity, telegraphs, telephones and water is that free competition cannot long persist. Regulated public utilities are greatly preferable to the alternative of a single unregulated monopoly provider in each of these fields.

The concept of natural monopoly rests on the principle of decreasing long-run average cost. In turn, this is based on the idea of economies of scale. Consider the production of various economic goods. All other things equal, we might suppose that as all inputs into the production process increase proportionately, the total monetary cost of production for each one might do so as well. Often it does – but not always. Sometimes total cost increases more-than-proportionately, usually because the industry to which the good belongs uses so much of a particular input that expansion bids up the input’s price, thereby increasing total cost more-than-proportionately.

The rarest case is the opposite one, in which total cost increases less-than-proportionately with the increase in output. Although at first thought this seems paradoxical, there are technical factors that occasionally operate to bring it about. One of these is the engineering principle known as the two-thirds rule. In certain applications, such as the thru-put of a pipeline or the contents of containers used by ocean-going freight vessels, the surface area of the surrounding enclosure varies as the two-thirds power of the volume it holds. In other words, as the pipe grows larger and larger, the amount that can be transmitted through it increases more than proportionately to the material needed to build it; as the container is made larger, the amount of freight it can hold increases more than proportionately to its surface. The economic implication of this technical law is far-reaching, since production cost is a function of the size of the pipe or container (surface area) while the amount of output is a function of the thru-put of the pipe or the amount of freight (volume). This exactly describes the condition called “economies of scale,” in which output increases more-than-proportionately when all inputs are increased equally. Since average cost is the ratio of total cost to output, the fact that the denominator in the ratio increases more than the numerator causes the ratio to fall, thus producing decreasing average total cost.
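A minimal numerical sketch of the rule in Python (the unit cost and capacities are purely illustrative):

```python
# Two-thirds rule: construction cost rises with surface area, which
# varies as the two-thirds power of capacity (volume).
def construction_cost(capacity, unit_cost=1.0):
    return unit_cost * capacity ** (2 / 3)

for capacity in [1, 8, 64, 512]:
    cost = construction_cost(capacity)
    print(f"capacity {capacity:4d}   cost {cost:5.1f}   "
          f"average cost {cost / capacity:.3f}")
# Each eightfold increase in capacity only quadruples cost, so average
# cost per unit of capacity falls by half: decreasing long-run average
# cost, the precondition for "natural monopoly."
```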

Why does decreasing average cost create this condition of natural monopoly? Think of unit price as “average revenue.” Decreasing average cost allows a seller to lower price continuously as the scale of output increases. This is important because it suggests that the seller who achieves the largest scale of output – that is, grows faster than competitors – could undersell all others while still charging a viable price. The textbooks go on to claim that after driving all competitors from the field, the successful seller would then achieve an insurmountable monopoly and raise its price to the profit-maximizing point, dialing its output back to the level commensurate with consumer demand at that higher price. Rather than subjecting consumers to the agony of this pure monopoly outcome, better to compromise by settling on an intermediate price and output that allows the regulated monopolist a price just high enough to attract the financial capital it needs to build, expand and maintain its large infrastructure. That is the raison d’etre of public-utility regulation, which is accomplished in the U.S. by an administrative law process involving hearings and testimony before a commission consisting of political appointees. Various interest groups – consumers, the utility company, the commission itself – are legally represented in the hearings.

Why are the regulated price and output termed a “compromise?” The Public Utility Commission (PUC) forces the company to charge a price equal to its average cost, incorporating a rate of profit sufficient to attract investor capital. This regulatory result is intermediate between the outcomes under pure monopoly and pure competition. A monopoly firm will always maximize profit by producing the rate of output at which marginal revenue is equal to marginal cost. The monopolist’s marginal revenue is less than its average revenue (price) because every change in price affects inframarginal units, either positively or negatively, and the monopolist is all too aware of its singular status and the large number of inframarginal units affected by its pricing decisions. Under pure competition, each firm treats price as a parameter and neglects the tiny effect its supply decisions have on market price; hence price and marginal revenue are effectively equal. Thus, each competitive firm will produce a rate of output at which price equals marginal cost, and the total output resulting from each of these individual firm decisions is larger – and the resulting market price is lower – than would be the case if a single monopoly firm were deciding on price and output for the whole market. The PUC does not attempt to duplicate this pure competitive price because it assumes that, under decreasing average cost, marginal cost is less than average cost and a price equal to marginal cost would not cover all the utility firm’s costs. Rather than subsidize these losses out of public funds (as is commonly done outside of the U.S. and Canada), the PUC allows a higher price sufficient to cover all costs including the opportunity cost of attracting financial capital.
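A worked numerical sketch of the three outcomes, assuming a linear demand curve and the simplest declining-average-cost technology (a fixed cost plus constant marginal cost); every parameter is hypothetical:

```python
import math

# Hypothetical demand: P = 100 - Q. Hypothetical costs: fixed cost 600,
# marginal cost 20, so average cost = 600/Q + 20 falls as output grows.
a, b, c, F = 100.0, 1.0, 20.0, 600.0

# Pure monopoly: marginal revenue (a - 2bQ) equals marginal cost.
q_mon = (a - c) / (2 * b)
p_mon = a - b * q_mon

# Pure competition: price equals marginal cost; price < average cost,
# so revenue falls short of total cost by the fixed cost F.
q_comp = (a - c) / b
p_comp = c

# Regulated utility: price equals average cost, i.e. a - bQ = c + F/Q.
# Solve b*Q**2 - (a - c)*Q + F = 0; take the larger (high-output) root.
q_reg = ((a - c) + math.sqrt((a - c) ** 2 - 4 * b * F)) / (2 * b)
p_reg = a - b * q_reg

print(f"monopoly:    Q = {q_mon:5.1f}, P = {p_mon:5.2f}")   # 40.0, 60.00
print(f"regulated:   Q = {q_reg:5.1f}, P = {p_reg:5.2f}")   # ~71.6, ~28.38
print(f"competitive: Q = {q_comp:5.1f}, P = {p_comp:5.2f}")  # 80.0, 20.00
```

The regulated price sits between the monopoly and pure-competition prices, and the regulated output between the corresponding outputs – exactly the compromise described above.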

How well does this theoretical picture of natural monopoly fit industrial reality? Many public-utility industries possess at least some technical features in common with it. Electric and telephone transmission lines, natural-gas pipelines and water pipe all obey the two-thirds rule. This much of the natural monopoly doctrine has a scientific basis. On the other hand, power generation (as opposed to transmission or transport) does not usually exhibit economies of scale. There are plenty of industries that are not regulated public utilities despite showing clear scale economies – ocean-going cargo vessels are one obvious case. This is enough to provoke immediate suspicion of the natural-monopoly doctrine as a comprehensive explanation of public-utility regulation. Suffice it to say that scale economies seldom dominate the production functions even of public-utility goods.

The Myth of the Birth of Public-Utility Regulation – and the Reality

In his classic article (“Hornswoggled! How Ma Bell and Chicago Ed Conned Our Grandparents and Stuck Us With the Bill,” Reason Magazine, February 1986, pp. 29-33), author Marvin N. Olasky recounts the birth of public-utility regulation. When “angry consumers and other critics call for an end to [public-utility] monopolies, choruses of utility PR people and government regulators recite the same old story – once upon a time there was competition among utilities, but ‘the public’ got fed up and demanded regulation… Free enterprise in utilities lost in a fair fight.”

As Olasky reveals, “it makes a good story, but it’s not true.” It helps to superimpose the logic of natural monopoly theory on the scenario spun by the “fair fight” myth. If natural-monopoly logic held good, how would we expect the utility-competition scenario to deteriorate?

Well, the textbooks tell us that the condition of natural monopoly (decreasing long-run average total cost) allows one firm to undersell all others by growing faster. Then it drives rivals out of business, becomes a pure monopoly and gouges consumers with high prices and reduced output. So that’s what we would expect to find as our “fair-fight” scenario: dog-eat-dog competition resulting in the big dog devouring all rivals, then rounding on consumers, whose outraged howls produce the dog-catching regulators who kennel up the company as a regulated public utility. The problem with this scenario is that it never happened. It is nowhere to be found in the history books or contemporary accounts.

Oops.

Well, somebody must have said something about life before utility regulation. After all, it was only about a century ago, not buried in prehistory. If events didn’t unfold according to textbook theory, how did public-utility regulation happen?

Actually, conventional references to the pre-regulatory past are surprisingly sparse. More to the point, they are contradictory. Mostly, they can be grouped under the heading of “wasteful competition.” This is a very different story from the one told by the natural monopoly theory. It maintains that competitive utility provision was a prodigal fiasco: numerous firms all vying for the same market by laying cable and pipe and building transmission lines. All this superfluous activity and expenditure drove costs – and, presumably, prices – through the roof. Eventually, a fed-up public put an end to all this competitive nonsense by demanding relief from the government. This is the scenario commonly cited by the utility PR people and regulators, who care little about theory and even less about logical consistency. All they want is an explanation that will play in Peoria, meeting whatever transitory necessity confronts them at the moment.

Fragmentary support for this explanation exists in the form of references to multiple suppliers of utility services in various markets. In New York City, for example, there were six different electricity franchises granted by a single 1887 City Council resolution. But specific references to competitive chaos are hard to come by, which we wouldn’t expect if things were as bad as they are portrayed.

Could such a situation have arisen and persisted for the 20-40 years that filled the gap between the development of commercial electricity and telephony and the ascendance of public-utility regulation in the 1920s? No, the thought of competitive firms chasing their tails up the cost curve and losing money for decades is implausible on its face. Anyway, we have gradually pieced together the true picture.

The Reality of Pre-Regulatory Utility Competition

Marvin Olasky pinpoints 1905 as a watershed year in the saga of public utilities in America. That year a merger took place between two of the nation’s largest electric companies, Chicago Edison and Commonwealth Electric. Olasky cites a 1938 monograph by economist Burton Behling, which declared that prior to 1905 the market for municipal electricity “was one of full and free competition.” Market structure bore a superficial resemblance to cable television today in that municipalities assigned franchise rights for service to corporate applicants, the significant difference being that “the common policy was to grant franchises to all who applied” and met minimum requirements. Olasky describes the resulting environment as follows: “Low prices and innovative developments resulted, along with some bankruptcies and occasional disruption of service.”

That qualification – “some bankruptcies and occasional disruption of service” – raises no red flags for economists; it is the tradeoff they expect to encounter for the benefits provided by low prices and innovation. But it is integral to the story we are telling here. The anecdotal tales of dislocation are the source of the historical scare stories told by later generations of economic historians, utility propagandists and left-wing opportunists. They also provided contemporaneous proponents of public-utility regulation with ammunition for their promotional salvos.

Who roamed the utility landscape during the competitive years? In 1902, American Bell Co. had about 1.3 million subscribers, while the independent companies who competed with it had over 2 million subscribers altogether. By 1905, Bell’s industry leadership was threatened sufficiently to inspire publication of a book entitled How the Bell Lost its Grip. In Toledo, OH, an independent company, Home Telephone Co., began competing with Bell in 1901. It charged rates half those of Bell. By 1906, it had 10,000 subscribers compared to 6,700 for the local Bell Co. In the states of Nebraska and Iowa, independent company subscribers outnumbered those of Bell by 260,000 to 80,000. Numerous cities held referenda on the issue of granting competitive franchises for telephone service. Competition usually won out. In Portland, OR, the vote was 12,213 to 560 in favor of granting the competitive franchise. In Omaha, NE, the independent franchise won by 7,653 to 3,625. A national survey polled 1,400 businessmen on the issue; 1,245 said that competition had or could produce better phone service in their community, and 982 said that competition had forced their Bell company to improve its service.

Obviously, one option open to the Bell (and Edison electric) companies was to cut prices to meet competition. But because Bell and Edison were normally the biggest company in a city or region, with the most subscribers, a price cut was much more costly to them than to a smaller independent: the big company had so many inframarginal customers. Consequently, these leading companies looked around for alternative ways of dealing with pesky competitors. The great American rule of thumb in business is: If you can’t beat ’em, join ’em; if you can’t beat ’em or join ’em, bar ’em.
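A back-of-the-envelope sketch makes the inframarginal point concrete; the subscriber counts and rates below are invented for illustration:

```python
def net_revenue_change(base_subs, old_rate, new_rate, subs_gained):
    """Monthly revenue after a price cut minus monthly revenue before it."""
    return (base_subs + subs_gained) * new_rate - base_subs * old_rate

# Hypothetical: each firm cuts its monthly rate from $10 to $8 and
# each attracts 5,000 new subscribers as a result.
print(net_revenue_change(50_000, 10, 8, 5_000))  # incumbent: -60000
print(net_revenue_change(5_000, 10, 8, 5_000))   # entrant:   +30000
# The incumbent forgoes $2 on each of its 50,000 existing (inframarginal)
# customers; the entrant forgoes it on only 5,000. The identical price
# cut is profitable for the small firm and ruinous for the large one.
```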

The Deadly Duo: Theodore Vail and Samuel Insull

Theodore Vail was a leading American business executive of the 19th century. He was President of American Bell from 1880 to 1886, and then later rejoined the Bell system when he became an AT&T board member in 1902. Vail commissioned a city-by-city study of Bell’s competitive position. It persuaded him that Bell’s business strategy needed overhauling. Bell’s corporate position had been that monopoly was the only technically feasible arrangement because it enabled telephone users in different parts of a city and even different cities to converse. As a company insider conversant with the latest advances, Vail knew that this excuse was wearing thin because system interconnections were even then becoming possible. Competition was eating into Bell’s market share already, and with interconnection on the horizon Vail knew that Bell’s supremacy would vanish unless it was revitalized.

The idea Vail hit upon was based upon the strategy employed by the railroads about fifteen years earlier. In order to win public acceptance for the special government favors they had received, the roads commissioned puff pieces from free-lance writers and bribed newspaper and magazine editors to print them. Vail expanded this technique into what later came to be called “third-party” editorial services; he employed companies for the sole purpose of producing editorial matter glorifying the Bells. One firm earned over $100,000 from the Bell companies while simultaneously earning $84,000 per year to place some 13,000 favorable articles annually about electric utilities. (These usually appeared as what we would now call “advertorials” – unsigned editorials citing no source.) The companies did not formally acknowledge their link with utilities, although it was exposed in investigative works such as 1931’s The Public Pays by Ernest Gruening.

Vail combined this approach with another tactic borrowed from the railroads – the pre-emptive embrace of government regulation. Political scientist Gabriel Kolko provided documentation for his thesis that the original venture in federal-government regulation, the Interstate Commerce Act of 1887, was sponsored by the railroads themselves as a means of cartelizing the industry and suppressing the troublesome competitive forces that had bankrupted one railroad after another by producing price wars and persistently low freight rates. The public uproar over differential rates for long hauls and short hauls gave both railroads and regulators the necessary excuse to claim that competition had failed and only regulation could provide “just and reasonable rates.” Not surprisingly, the regulatory solution was to impose fairness and equality by requiring railroads to raise the rates for long hauls to the level of short-haul rates, so that all shippers now paid equally high rates per-mile.

Vail was desperate to suppress competition from independent phone companies, but knew that he would then face the danger of lawsuits under the embryonic Sherman Antitrust Act, which contained a key section forbidding monopolization. The only kind of competition Vail approved of was “that kind which is rather ‘participation’ than ‘competition,’ and operates under agreement as to prices or territory.” That is, Vail explicitly endorsed cartelization over competition. Unfortunately, the Sherman Act also contained a section outlawing price collusion. Buying off the public was clearly not enough; Vail would have to stave off the federal government as well. So he sent AT&T lobbyists to Washington, where they successfully achieved passage of legislation placing interstate telephone and telegraph communications under the aegis of the ICC.

Vail feared competition, not government. He was confident that regulation could be molded and shaped to the benefit of the Bells. He knew that the general public and particularly his fellow businessmen would take a while to warm up to regulation. “Some corporations have as yet not quite got on to the new order of things,” he mused. By the time Vail died in 1920, that new order had largely been established thanks to the work of Vail’s contemporary, Samuel Insull.

Insull emigrated from England in 1881 to become Thomas Edison’s secretary. He rose rapidly to become Edison’s strategic planner and right-hand man. At Edison’s side, Insull saw firsthand the disruptive effects of innovation on markets when competition was allowed to function. Insull made a mental note not to let himself become the disruptee. With Edison’s blessing, Insull took the reins of Chicago Edison in 1892. His tenure gave him an education in the field of politics to complement the one Edison had given him in technology. In 1905, he merged Chicago Edison with Commonwealth Electric to create the nation’s leading municipal power monopoly.

Like Vail, Insull recognized the threat posed by marketplace competition. Like Vail, Insull saw government as an ally and a tool to suppress his competitors. Insull’s embrace of government was even warmer than Vail’s because he perceived its vital role to be placating and anesthetizing the public. As Olasky put it, “Insull argued that utility monopoly… could best be secured by the establishment of government commissions, which would present the appearance of popular control.”

The commission idea would be sold to the public as a democratic means of establishing fair utility rates. Sure, these rates might be lower than the highest rates utility owners could get on their own, but they would certainly be higher than those prevailing with competition. And the regulated rates would be stable, a sure thing, not the crap shoot offered by the competitive market. In a 1978 article in the prestigious Journal of Law and Economics, economic historian Gregg Jarrell documents that the first states to implement utility regulation saw rising prices and profits and falling utility output, while states that retained competitive utility markets had lower utility prices. Jarrell’s conclusion: “State regulation of electric utilities was primarily a pro-producer policy.”

Over the years, this trend continued, even though utility competition died off almost to the vanishing point. Yet it remained true that those few jurisdictions that allowed utility competition – usually phone, sometimes electric – benefitted from lower rates. This attracted virtually no public attention.

Insull realized that the popularity of competition was just as big an obstacle as its reality in the marketplace. So he slanted his public-relations to heighten the public’s fear of socialism and promote utility regulation as the alternative to a government-owned, socialized power system. Insull foresaw that politicians and regulators would need to use the utility company as a whipping boy by pretending to discipline it severely and accusing it of cupidity and greed. This would allow government to assume the posture of a stern guardian of the public welfare and champion of the consumer – all the while catering to the utility’s welfare behind closed doors. Generations of economists became accustomed to seeing this charade performed at PUC hearings. Their cynicism was tempered by the fact that these same economists were earning handsome incomes by specializing as consultants to one of the several interested parties at those hearings. Over the years, this iron quadrangle of interested parties – regulators, lawyers, economists and “consumer advocates” – became the staunchest and most reliable defender of the public-utility regulation process. Despite the fact that these people were in the best position to appreciate the endless waste and hypocrisy, their self-interest blinded them to it.

Insull enthusiastically adopted the promotional methods pioneered by the railroads and imitated by Theodore Vail. One of his third-party firms, the Illinois Committee on Public Utility Information, was led by Insull subordinate Bernard J. Mullaney. The Committee distributed 5 million pieces of pro-utility literature in the state in 1920 and 1921. Mullaney carefully cultivated the favor of editors by feeding them news and information of all kinds in order to earn a key quid pro quo – publication of his press releases. This favoritism went as far as providing the editors with free long-distance telephone service as an in-kind bribe. Not to be overlooked, of course, is that most traditional of all shady relationships in the newspaper business – buying ads in exchange for preferential treatment in the paper. Electric companies, like the Bells, were prodigious advertisers and took lavish advantage of the practice. In eventual hearings held by the Federal Trade Commission and the Federal Communications Commission, testimony and exhibits revealed that Bell executives had newspaper editors throughout the West and Midwest in their pockets.

Over the years, as public-utility regulation became a respected institution, the need for big-ticket PR support waned. But utilities never stopped cultivating political support. The Bell companies in particular bought legislators by the gross, challenging teachers’ unions as the leading political force in statehouses across the nation. When the challenge of telecommunications deregulation loomed, the Bells were able to stall it off and postpone its benefits to U.S. consumers for a decade longer than those enjoyed abroad.

Profit regulation left utilities with no profit motive to innovate or cut costs. This caused costs to inflate like a hot-air balloon. Sam Insull realized that he could make a healthy profit by guaranteeing his market, killing off his competition and writing his profit in stone through regulation. Then he could ratchet up real income by “gold-plating the rate base” – increasing salaries and other costs and forcing the ratepayers to pay for them. Ironically, he ended up going broke despite owning a big portfolio of utilities. He borrowed huge sums of money to buy them and expand their operations. When the Depression hit, he found that he couldn’t raise rates to service the debt he had run up. He was indicted, left the country, returned to win acquittal on criminal charges but died broke from a heart attack – just one more celebrated riches-to-rags Depression-era tale.

The lack of motivation made utilities a byword for inefficiency. Bell Labs invented the transistor, but AT&T was one of the last companies to use it because it still had vacuum tubes on hand and had no profit motivation to switch and no competitive motivation to serve its customers. An AT&T company made the first cell phone call in 1946, but the technology withered on the vine for 40 years because the utility system had no profit motivation to deploy it. Touch-tone dialing was invented in 1941 but not rolled out until the 1970s. Bell Labs developed early high-speed computer modems but couldn’t test high-speed data transmission because regulators hadn’t approved tariffs (prices) for data transmission. The list goes on and on; in fact, the entire telecommunications revolution began by accident when a regulator became so fed up with AT&T’s inefficiency that he changed one regulation in the 1970s and allowed one company called MCI to compete with the Bells. (We are indebted to Andy Kessler, longtime AT&T employee and current hedge-fund manager, for this litany of innovative ineptitude.)

What is Net Neutrality All About?

Today, the call for “net neutrality” by politicians like President Obama is a political pose, just as the call for public-utility regulation was a century ago. Robert Litan of the Brookings Institution has pointed out the irony that slapping a Title II common-carrier classification on broadband Internet providers would not even prevent them from practicing the paid prioritization of buyers that the President complained of in his speech! Indeed, for most of the 20th century, public utilities practiced price discrimination among different classes of buyers in order to redistribute income from business users to household users.

The Internet as we know it today is the result of an unimpeded succession of competitive innovations over the last three decades; i.e., the very “open and free Internet” that the New York Times claims President Obama will now bestow upon us. Net neutrality would bring all this to a screeching halt by imposing regulation on most of the Web and taxes on consumers. Today, the biggest chunk of the phone bill goes to a charge for “universal service,” a redistributive tax ostensibly intended to make sure everybody has phone service. Yet before the proliferation of cell phones, the percentage of the U.S. population owning televisions – which were unregulated and benefitted from no “universal service” tax – was several percentage points higher than the percentage owning and using telephones. In reality, the universal service tax was used to perpetuate the regulatory process itself.

In summary, then, the balance sheet on public utilities shows they were plotted by would-be monopolists to stymie competition and enlist government and regulators as co-conspirators. The conspiracy stuck consumers with high prices, reduced output, mediocre service, high excise taxes and – worst of all – stagnant innovation for decade after decade. All this is balanced against the dubious benefit of stability – the sort of stability the U.S. economy has shown in the last five years.

A similar future awaits us if we treat the Internet’s imagined ills with the regulatory nostrum called net neutrality.