DRI-192 for week of 6-7-15: Adding Entrepreneurship to Economics Makes ‘Disruptive’ Innovations Coordinative

An Access Advertising EconBrief:

Adding Entrepreneurship to Economics Makes ‘Disruptive’ Innovations Coordinative

Journalism pretends to be an objective profession. In reality, it is a subjective business. The subjective component derives from the normal limitations nature places on human perception; journalists may aspire to Olympian standards of accuracy and detachment, but they labor under the same biases as everybody else. The need to make a profit causes journalistic enterprises to cater to intellectual fads and fashions just as haute couture does when selling clothes.

The trendy business buzzword these days is “disruptive.” Ever since the Internet began revolutionizing life on the planet, technology has been occupying a bigger part of our lives. Somebody started saying “disruptive” to define new businesses that seemed to usher in noticeable changes in the status quo. When it comes to vocabulary, journalists imitate each other like parrots and chatter like magpies. Now slick magazines, websites and blogs are crawling with articles like “The 10 Most Disruptive Technologies/50 Most Disruptive Firms,” “How to Identify the Next Big Disruptive Technology” and “Which Sector Needs Disrupting the Most?”

It isn’t hard to identify disruptive firms; just picture the firms that have garnered the biggest and most recurring headlines – Apple, Amazon, Uber, Lyft, Airbnb, SpaceX and such. Our job here is to ascertain whether a systematic logic unites the success of these firms and whether the term “disruptive” is economically descriptive – or not. Business writers often associate disruptive technologies with economist Joseph Schumpeter, whose work we examined in last week’s EconBrief.

This association is understandable, but unfortunate. Schumpeter’s linking of entrepreneurial progress and capitalism with technological innovation is not the general case, but only a special case. That is, it is only a small part of the reason why capitalism has been so successful. Schumpeter’s view of the forest was obscured by a few redwoods, figuratively speaking. Even worse, the term “disruptive” – like Schumpeter’s famous phrase “creative destruction” – conveys an utterly misleading impression about the impact of entrepreneurial progress and technological innovation under capitalism.

Journalists and business analysts were right to look to economics for an understanding of technological innovation. But, as we saw last week, they got little help from traditional economic theory. And they picked the wrong maverick economist to consult.

A Brief Review 

Our previous EconBrief identified a serious lacuna in economic theory. No, make that multiple lacunae – certain simplifying assumptions that have alienated academic economics from reality. The pervasive use of high-level mathematics and statistical testing encouraged these assumptions because they kept economic theory tractable. Without them, economic models would not have been spare and abstract enough for mathematical and statistical purposes. In effect, the economics profession has chosen theoretical models useful for its own professional advancement but well-nigh useless for the practical benefit of the general public.

Evidence of this is supplied by the traditional indifference to entrepreneurship and innovation shown by mainstream theorists and textbooks. For contrast, we analyzed two striking exceptions to this pattern: the ideas of Joseph Schumpeter and F. A. Hayek. Schumpeter was contemptuous of the mainstream obsession with perfectly competitive equilibrium. He believed that economic development under capitalism was accomplished by a process of “creative destruction.” This did not involve small, incremental increases in output and decreases in price by perfectly competitive firms, each one of which had insignificant shares of its market. Instead, Schumpeter envisioned competition as a life-and-death struggle between large monopoly firms, each producing new products that replaced existing goods and improved consumer welfare by leaps and bounds. “Creative destruction” was a hugely disruptive process, a wholesale overturning of the status quo.

Hayek criticized mainstream theory just as strongly, but from a different angle. Hayek maintained that mainstream, textbook economic theory started out by assuming the things it should be explaining. Where did consumers and producers get the “perfect information” that traditional theory assumed was “given” to them? In effect, Hayek grumbled, it was “given” to them by the economists in their textbooks, not actually given in reality. He had the same complaint about product quality, an issue traditional theory assumed away by treating goods as homogeneous in nature. The trouble is that the vast quantity of information needed by consumers and producers isn’t available in one place; it is dispersed in fragmentary form inside billions of human brains. Only the price system, operating via a functioning free-market system, can collate and transmit this information to all market participants.

Hayek saw the true nature of equilibrium differently than did mainstream economists. The latter took their cue from mathematical economists such as 19th-century pioneer Leon Walras, who formulated equations for supply and demand curves and solved them algebraically to derive an equilibrium at which the quantity demanded and quantity supplied were equal. To Hayek, equilibrium meant that the plans human beings make in the course of living daily life turn out to be compatible, not chaotically inconsistent. That is the true Economic Problem – how to collect and transmit the dispersed information necessary to market functioning among billions of people in order to allow their plans to be mutually compatible.

Entrepreneurship – the Engine of Capitalism

Hayek’s work opened the door to an understanding of capitalism. We had long known that capitalism worked and socialism failed. But we could not supply a nuts-and-bolts, nitty-gritty explanation for why and how this was so. The general public attaches little importance to theory, but the absence of theory makes itself felt. The lack of a thoroughgoing theory of capitalist superiority has allowed a myth of socialist superiority to survive and even thrive despite the utter failure of socialism to prosper in practice. A disciple of Hayek and of Hayek’s mentor, Ludwig von Mises, utilized the intellectual capital created by his teachers to complete their work.

Israel Kirzner was taught at New York University by Ludwig von Mises. His doctoral dissertation became his first book, The Economic Point of View; he followed it with an intermediate textbook on price theory, Market Theory and the Price System. In 1973, Kirzner synthesized the ideas of Mises and Hayek in a book called Competition and Entrepreneurship. For the first time, we had an explicit justification and explanation of the vital role played by the entrepreneur in economic life.

Heretofore, the entrepreneur had been the mystery figure of economic theory, akin to the Abominable Snowman or Bigfoot. To some, he was simply the organizer of production. To others, he was a salesman or promoter. To Schumpeter, he was an innovator who created new products using the lever of technology. Israel Kirzner took a completely different tack.

The keynote in Kirzner’s view of the entrepreneur is alertness to opportunity within a market framework. As a first approximation, the entrepreneur’s attention is fixed upon the price system. He or she is constantly searching for “value discrepancies;” that is, differences between the price(s) of input(s) and output. For example, he may observe that a, b and c can combine in production to produce D. The price of amounts of a, b and c sufficient to produce one unit of D is $5, while the entrepreneur sees (or envisions) that D will sell for $10. This act of intellectual visualization itself is what constitutes entrepreneurship in Israel Kirzner’s theory. Acting upon entrepreneurial observation requires productive activity.
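The arithmetic of a value discrepancy is simple enough to sketch in a few lines of code. Here is a minimal illustration in Python using the hypothetical figures from the paragraph above; the $2/$2/$1 breakdown of the $5 input cost is an added assumption for concreteness:

```python
# A minimal sketch of a Kirznerian "value discrepancy," using the
# hypothetical numbers from the text. All names and prices are
# illustrative assumptions, not data.

input_prices = {"a": 2.00, "b": 2.00, "c": 1.00}  # inputs for one unit of D; sums to $5
envisioned_price_of_D = 10.00                     # what the entrepreneur sees D selling for

total_input_cost = sum(input_prices.values())
value_discrepancy = envisioned_price_of_D - total_input_cost

# The act of "seeing" this gap is the entrepreneurship; capturing it
# requires actually organizing production.
print(f"Input cost: ${total_input_cost:.2f}")          # $5.00
print(f"Value discrepancy: ${value_discrepancy:.2f}")  # $5.00
```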

There is a family resemblance between Kirzner’s concept of entrepreneurship and what is often termed “arbitrage.” But the two are far from identical. Arbitrage is loosely defined as buying and selling in different markets to profit from price differentials. Often, the same good is purchased and sold – simultaneously if possible – to reduce or even eliminate any risk of financial loss. Kirznerian entrepreneurship is far more comprehensive. Different goods may be involved, purchases need not be simultaneous or even close to it; indeed, markets for some of the goods or inputs involved may not even exist at the point of visualization! The entrepreneur may be contemplating the introduction of an entirely new good, a la Schumpeter. At the other extreme, the entrepreneur may be hoping to profit from the smallest price discrepancy in the most homogeneous good, as banks or traders do when they arbitrage away tiny price differences in stocks, bonds or foreign currencies in different exchanges.
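To sharpen the contrast: classic arbitrage is the limiting case in which the good is identical and the purchase and sale are simultaneous, so the profit is (nearly) riskless. A sketch, with hypothetical quotes and quantities:

```python
# Classic arbitrage: the SAME good quoted at different prices in two
# markets at the same moment. All prices and quantities are hypothetical.

quote_market_a = 99.80    # price at which the good can be bought in market A
quote_market_b = 100.05   # price at which it can be sold in market B

def arbitrage_profit(buy: float, sell: float, quantity: int) -> float:
    """Profit from buying and selling the same good simultaneously."""
    return (sell - buy) * quantity

# Simultaneous execution locks in the spread with (almost) no risk.
print(arbitrage_profit(quote_market_a, quote_market_b, 1_000))  # 250.0
```

Kirznerian entrepreneurship drops every one of these restrictions: the goods may differ, the “sale” may lie years in the future, and the market for the output may not yet exist.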

In fact, the entrepreneur need not even be a producer or a seller at all. Consumers can and do engage in entrepreneurial activity all the time. Consumers clip and redeem coupons. They scan newspapers and online ads for sales and comparative prices. This activity is analytically indistinguishable from the activity of producers, Kirzner claims, because in both cases there is a net increase in value derived by consumers – and consumption is the end-in-view behind all economic activity.

The Consumer as Entrepreneur – A Case Study

In 1965, Samuel Rubin and a few friends were dismayed by the vanishing interest in, and availability of, silent movies. They held a small film festival for silent-movie enthusiasts and created the Society for Cinephiles. This gathering became the first classic-movie film festival. Fifty years later, Cinecon remains the oldest and most respected of this now-worldwide genre. Three years later, Steven Haynes, John Baker and John Stingley hosted a small gathering for classic-movie lovers in Columbus, Ohio. This year, Mr. Haynes died after planning the 47th meeting of the Cinevent festival, which annually attracts a few hundred dedicated lovers of silent and studio-system-era movies. In 1980, classic-movie fanatic Phil Serling began the Cinefest gathering in Syracuse, New York with a few close friends. 2015 marked the final meeting of this festival, which attracted attendees from around the world. Today the San Francisco Silent Film Festival is a headline-making event featuring the latest newly found and restored rarities.

This genre of classic-movie worship was begun by consumers, not by profit-motivated producers. But these consumers nevertheless were alert to opportunity – the discrepancy in value between the movies currently available for viewing and those of the past. Prior to the digital age, older movies (particularly silent movies) were seldom screened and hard to view. Moreover, they were disintegrating rapidly and dangerous to maintain because of the fire danger posed by nitrate film stock. Yet thanks to the efforts of these pioneering consumers, today we have multiple television channels exclusively, primarily or secondarily devoted to showing classic films, including silent movies. Turner Classic Movies (TCM) leads the way, while the Fox Channel is close behind. Over twenty thousand people attend the Turner Classic Movies Festival in Hollywood every year and TCM’s annual cruise and other promotions attract thousands more. Film preservation is a major endeavor, with new discoveries of heretofore “lost” movies occurring every year. Classic movies are big business, thanks to the dispersed entrepreneurial efforts of a scattered but determined few decades ago. The small net gains in value experienced by the silent-movie lovers of 1965 have multiplied millions-fold into the consumption gains of millions worldwide today, on television and in person.

Schumpeter Vs. Hayek/Kirzner: Away from Equilibrium or Towards It?

Contemporary business analysts take an ambivalent attitude toward innovation and entrepreneurship. They pay lip service to its benefits – new products and services, the gains reaped by consumers. But they insist in no uncertain terms that these benefits carry a terrible price. Terms like “creative destruction,” with heavy emphasis placed on the second word, directly state that there is a tradeoff between consumer gains and destructive loss suffered by workers, owners of businesses driven into insolvency and even members of the general public who lose non-human resources that are somehow vaporized by the awesome power of technology. Instead of stressing the labor-saving properties of technology, commentators are more apt to refer to job-killing innovations. No wonder, then, that journalists have turned to Schumpeter, whose apocalyptic view of capitalism was that its superior productivity would ultimately prove its undoing. With friends like Schumpeter, capitalism has grown ever more defenseless against its enemies.

Schumpeter believed that entrepreneurial innovation was both creative and destructive – creative because its products were new, destructive because they completely supplanted existing competing products, driving rivals from the field. In the technical sense, then, Schumpeter saw entrepreneurs as a disequilibrating force, spearheading a movement away from one stable equilibrium position to a different one. Schumpeter himself recognized that, in practice and unlike the blackboard transitions that academic economists effect in the blink of an eye, these movements would often be wrenching. But the analysis of Kirzner, using the framework built by Hayek and Mises, leads to different conclusions.

Kirzner acknowledged the validity of Schumpeter’s form of entrepreneurship. But he recognized that it was only the exceptional case. The garden variety, everyday forms of entrepreneurship – practiced by consumers as well as producers – produce movements toward equilibrium, not away from it. This is true for two reasons. First, entrepreneurship does not lead away from equilibrium because the traditional concept of equilibrium is a myth; reality changes far too quickly for actual equilibrium ever to be reached, let alone be maintained. Second, entrepreneurship leads toward equilibrium because it enables human beings to better coordinate their plans by allowing a more efficient exchange of information. Hayek objected to the traditional economic assumption of “perfect information” because he claimed that this assumed the existence of equilibrium at the outset. Kirzner’s theory of entrepreneurship tells us that the so-called “disruptive” businesses of today are pushing us closer and closer to that condition of perfect information – which means we are getting closer and closer to perfectly coordinated equilibrium. Of course, we never reach this blissful state, but capitalism keeps us steadily on the move in the right direction.

What is Google, with its search-engine technology, if not the search for the economist’s informational Shangri-La of perfect information? Wikipedia, a user-created encyclopedia, is the archetype of Hayek’s model of a world in which information exists in dispersed, fragmentary form that is unified by a voluntary, beneficial market. Facebook has become a colossus by making it easy for people to provide information about themselves to others – and in the process has become a kind of worldwide clearinghouse for information of all kinds. Pinterest has narrowed this same type of focus to photos, but the key is still information. Newer technology businesses like CrowdStrike, specializing in cyber intelligence and security, and the Chinese company Tencent, with its emphasis on mobile advertising, are also informational in character.

In each of these cases, entrepreneurs were alert to the market opportunities opened by technology and signaled by the low prices ushered in by the digital age. The entrepreneurial character of some of these businesses has baffled the business establishment because it has not emulated the conventional, profit-seeking model. That is usually because the initial entrepreneurs have been consumers striving to create value for their own direct use. Only later have they realized the potential for exporting the value surplus created to the rest of the world. This looks outré to most observers but it is fully consistent with Israel Kirzner’s theory of entrepreneurship.

Another of the unrealistic simplifying assumptions deplored by Hayek was “costless” transactions, particularly entry, exit and determination of product quality. This was another case of economists assuming what they should be proving, or at least investigating; it started out by assuming equilibrium and skipped the market process necessary to produce – or, more realistically, approach – an eventual equilibrium. The technological innovations of the last two decades that weren’t informational in character were mostly directed at reducing various costs, natural or man-made.

The Internet itself is a mammoth exercise in reducing the costs of transport and communication. Instead of calling on the telephone, we can now send an e-mail. By inventing smartphones, Apple has one-upped the Internet and desktop computers by making this communication mobile. In between these two inventions, of course, came cell phones – invented decades earlier but made practical when Moore’s Law eventually shrank them to pocket size. The shocking thing is how little economics had to say about any of these revolutionary human innovations – because traditional economic theory had long assumed zero transport and transactions costs. Why concern yourself with an innovation when your theory says there is no need for it in the first place?

The development of cell phones was held back for years by government regulation of telecommunications, which fought tooth and claw to prevent competition between phone companies and innovation by monopoly providers. In formal terms, the effect of government regulation is best envisioned as equivalent to the effect of a mountain range or an ocean on transportation. Alternatively, think of costs as being like taxes. Transport costs are “levied” by nature, while taxes are levied by governments. Transactions costs may be either natural or man-made. And a review of recent “disruptive” businesses shows many designed specifically to overcome either natural or man-made costs.

The entrepreneurs of Uber and Lyft observed the artificially high taxi fares created by local-government regulation in the U.S. and elsewhere in the world. They envisioned lower prices and faster response times resulting from assembling a voluntary workforce of casual drivers and independent professionals, operating free from the stranglehold of regulation. Airbnb looked at the rental market for habitation and saw the potential for achieving the same kind of economies by enlisting owners as vendors. Jeff Bezos of Amazon envisioned consumers freed from the shackles of traveling to retail stores and a supplier with transport costs lowered by economies of scale. The result has shaken the world of retail sales to its foundations. (We should note that this combines the lowering of natural transport costs and the lowering of artificial man-made sales taxes.) Driverless cars threaten an even bigger revolution in the world of transportation by overcoming the costs of human error and accidents – if they can overcome the “tax” of government regulation to achieve liftoff. Body sensors are a revolutionary innovation triggered by the consumer desire to overcome the high medical costs of maintaining good health, which are an artifact of regulation. The new website OpenBazaar dubs itself “a decentralized peer-to-peer marketplace” whose goal “is to give everyone in the world the ability to directly engage in trade with each other.” In other words, it is dedicated to reducing transactions costs to the irreducible minimum.

Once again, these cost-based innovations are entrepreneur-driven. Again, some of them were pioneered by consumers rather than by the corporate or venture-capital establishment. This is exactly what we would expect, given the theory developed by Israel Kirzner.

Monopoly or Competition? 

Schumpeter believed that true progress came from monopoly, not competition. He meant monopoly in the effective, substantial sense, not merely the formalistic sense of a transitory market hegemony enjoyed by the innovator. Events have clearly proven Schumpeter wrong. It is hard to find a case today that would correspond to Schumpeter’s archetype; instead, the initial innovator has been superseded by somebody else. Market leadership has been the result of performance, not entry barriers or patents or government pull. And the innovators themselves have often been “nobodies” rather than monopolists boasting war chests heavy with monopoly profits.

Pattern Prediction

In 1929, Ludwig von Mises predicted a “great crash” and refused to take a position in the Austrian government for fear of association with the economic downturn he anticipated. F.A. Hayek predicted a sharp recession, pursuant to the business-cycle theory he had recently developed. Later, Hayek predicted the failure of Keynesian counter-cyclical fiscal and monetary policies and the high worldwide inflation of the 1970s, coupled with the recession that followed the measures taken to break the inflation.

In general, Hayek did not believe that accurate quantitative prediction of economic events was possible. At most, he felt, economic theory could offer “pattern predictions” of a more general nature. His own statements, both in economics and political philosophy, tended to support this approach.

Israel Kirzner did not “predict” the advent of the Internet or the invention of the smartphone. But the technological revolution and the businesses spearheading it conformed to the general pattern of entrepreneurship outlined in Israel Kirzner’s theory. In this sense, while this revolution came as a complete surprise to the mainstream economics profession, it can hardly have surprised Kirzner. The revolution was led by people behaving just as Kirzner hypothesized that entrepreneurs do behave.

Can the Status Quo be “Disruptive?”

Based on our analysis and Israel Kirzner’s theory of entrepreneurship, the business buzzword “disruptive” is misleading when applied to the cutting-edge firms and technologies of today. It is indeed true that these technologies overturn the status quo. But the status quo is hindering human progress and preventing attainment of true economic equilibrium; it is hurting people rather than helping them. If transport costs or transaction costs or taxes or regulation are hurting people – and helping at most only a minority vested interest in the process – then changing the status quo is the indicated action. “Stability” is not always good. After all, Stalin’s Soviet Union was stable. Fortunately, that stability eventually disintegrated and the Soviet Union collapsed.

As Israel Kirzner himself has always maintained, economics is all about making people better off. When this criterion is placed foremost, discarding the pure formalism of mainstream theory, it becomes clear that Mises, Hayek and Kirzner were right and Schumpeter was wrong. Entrepreneurship is equilibrating because it tends to better coordinate the plans made by individual human beings.

The process by which Nobel Prizes are awarded is highly secretive. The Nobel committee keeps its candidate “cards” close to its vest. Rumors have circulated, however, placing Israel Kirzner’s name on the short list of potential awardees. No man alive has done more than he to redeem the tarnished prestige of economics as a subject worth studying for its practical value to humanity.

DRI-184 for week of 5-31-15: Why is Economic Theory MIA Amidst Humanity’s Biggest Innovation Boom?

An Access Advertising EconBrief: 

Why is Economic Theory MIA Amidst Humanity’s Biggest Innovation Boom?

It is obvious even to casual observers that humanity has experienced an unprecedented boom in technological improvements in recent decades. Apparently even greater advances lie in store, although some contrarians insist that the best is behind us. We might expect to find economists in the thick of all this – spotting trends, lauding entrepreneurs and listing the factors responsible for their success, toting up the gains in real income, output and wealth, applauding the effects on rich and poor alike and approving the nosediving rate of world poverty.

Those expectations would be disappointed, at least by a perusal of mainstream sources. True, there are periodic ex cathedra pronouncements by stray economists on these matters. Scattered foundations, think tanks and institutes devoted to entrepreneurship pop up. The continuing popularity of the late maverick economist Joseph Schumpeter ensures that the subject of innovative entrepreneurship does not fade entirely from the public consciousness or the minds of economists. But the leading professional journals in economics, such as The American Economic Review and the Journal of Political Economy, remain preoccupied with the perennial concerns of the profession. And those do not include the topics of innovation and entrepreneurship.

Why not? What have critics of mainstream theory suggested to improve matters? Those are the subjects of this EconBrief. Next week we will see how non-traditional economic theory can improve our understanding of revolutionary technological innovation.

The Wrong Turns in Economic Theory

In the 1870s, economic theory underwent a revolution. Prior to that time, a vital element was missing from economics. Its theory of value was defective. The Classical Economists believed that the value of economic goods depended on the objective cost of the inputs that went into their production. They lacked a solid, systematic theory of consumer demand. Beginning in 1871, three different economists – working independently in England, Switzerland and Austria – developed the concept of marginal utility, thereby laying the foundation for the modern theory of consumer demand. This Marginal Revolution presaged the Laws of Supply and Demand and the famous diagram depicting equilibrium price formation via the intersection of the supply and demand curves. (The diagram was dubbed the “Marshallian Cross,” after the great English economist who popularized it, Alfred Marshall.)

One of the original three founders of marginal utility, Leon Walras, was also the modern developer of mathematical economics. Walras believed that the most concise and precise means of depicting economic relationships was by expressing them in mathematical form. He envisioned an economy as a mathematical model consisting of supply-curve equations for all goods and demand-curve equations for all consumers. He stated that such a system of equations could be solved simultaneously – that is, algebraically – to yield an equilibrium solution. That equilibrium would be one in which the quantity of each good chosen by all consumers and the quantity supplied by all producers would be identical. Eighty years later, two economists proved Walras’s conjecture correct and later received a Nobel Prize for their efforts.
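A one-good toy version shows what “solving for equilibrium” means. The linear functional forms below are chosen purely for illustration; Walras’s actual system spans all goods and consumers simultaneously:

```latex
% Hypothetical linear one-good market, for illustration only
Q_d = a - bP \quad\text{(demand)} \qquad Q_s = c + dP \quad\text{(supply)}
% Equilibrium: quantity demanded equals quantity supplied
Q_d = Q_s \;\Longrightarrow\; a - bP^{*} = c + dP^{*}
\;\Longrightarrow\; P^{*} = \frac{a - c}{b + d}, \qquad Q^{*} = a - bP^{*}
```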

Walras believed that his procedure was more scientific than that followed heretofore by economists because it imitated the procedures of the natural sciences like biology, chemistry and astronomy. Despite his scientific pretensions, he also believed that economists could never hope to actually formulate a full set of general equilibrium equations in which actual coefficients were calculated for the variables. As the years went on, Walras’s mathematical approach gained steadily in popularity, but economists inherited none of his realism. Meanwhile, the canons of statistical inference developed by the English mathematical statistician Ronald Fisher also gained favor and were applied to the social science of economics as well as to the natural sciences. After World War II, economists increasingly practiced their craft by developing a mathematical model to express a theoretical hypothesis and using statistical methods to “test” its validity and quantitative boundaries.

This modus operandi seduced the economics profession en masse. In view of its disastrous effects, we might well ponder why this research agenda proved so irresistible. First, it provided a made-to-order research agenda to justify diverting attention away from instruction. Second, it provided an apparently objective standard by which to evaluate faculty for tenure and later promotion. This, in turn, allowed administrators to press graduate students and non-tenured adjunct faculty into service as cut-price teachers of the undergraduate curriculum while the faculty did research and earned money from consulting contracts. It turned economics departments of public universities into sausage factories for producing research studies for academic journals. This made politicians and bureaucrats happy because it gave them several excellent excuses for spending more money – “investing” in research, democratizing higher education by lending money to students in an effort to create “universal” higher education. (“Universal service” and “affordability” are the two leading political excuses for redistributive spending.) The fact that this “research” was completely worthless to everybody except economists meant that the public wouldn’t poke its nose too deeply into the process – which suited everybody involved.

Indeed, the output of this research agenda turned out to be of little value even within the economics profession. The fact that a mathematical model is “precise” and “rigorous” means nothing in itself. The question is: Can the mathematical models of economists capture human action sufficiently well to be of practical use? In the mid-1990s, the noted economists Deirdre McCloskey and Steven Ziliak discovered that economists (and many other scientists) had been misusing the statistical tools of Fisher, et al for years, thereby vitiating the empirical as well as the theoretical basis of most economic research.

Mathematics and statistics work well in the natural sciences because the phenomena under study can be isolated in controlled circumstances, which enables the staging of meaningful experiments. This permits the discovery of empirical regularities or laws in the natural sciences. But human action, unlike that of inanimate objects and simple life forms, is both purposeful and full of complexity and ambiguity. Moreover, economic life is ordinarily not subject to controlled experimentation. Consequently, the practical results of the economic research model using mathematical models and statistical testing have been hugely disappointing.

The model still lingers on because it is so convenient for the people whose preferences matter most in universities; namely, government, administrators and faculty. The people badly served – undergraduate and graduate students – are the lowest forms of animal life in the university setting.

It is highly interesting to observe that this outcome is directly counter to the very logic taught by economics. Consumption is the end-in-view behind all economic activity. This includes university study and research. Thus, economic logic counsels removing universities from the aegis of government and subjecting them to market competition by abolishing tenure, privatizing research funding and separating the functions of teaching and research. Unfortunately, the two vested interests who have the most to lose from this change in approach, faculty and administrators, are the ones most powerfully in control of the present system.

If You’re So Smart, Why Ain’t You Rich? 

Inevitably, some readers will disagree with the foregoing, perhaps even find it outrageous. The dissenters should ask themselves what the distinguished economic historian and statistician Donald (now Deirdre) McCloskey called “the American question”: “If you’re so smart, why ain’t you rich?” Here, the “you” are economists who devise theoretical models for stock and options prices, bond prices, GDP and interest rates. If those models really work – if they are statistically “robust” – why haven’t economists become rich as Croesus from using them to predict the future course of financial markets? For that matter, why were economists generously willing to publish their results for the world to see rather than jealously hoarding them as a source of income?

Most people couldn’t care less whether economists themselves make money from their work, but they are passionately convinced that government should somehow “regulate” the economy to make good things happen for them and prevent bad things from happening. Where did governments, which have existed for thousands of years of human history in myriad forms, suddenly acquire this mystical power to control human behavior and steer the course of future events?

Well, if the alleged control relates to the so-called “macro economy,” it clearly dates back to 1936 and the publication of John Maynard Keynes’ famous treatise on employment, interest and money. Here, the version of the American question relates to policy: Why hasn’t Keynesian economics worked as advertised? After forty years of the most intensive research ever expended on a scientific topic and forty more years of attempts to modify Keynesian theory and put it into practice, the world finds itself perched on a financial precipice.

Then there are those who apply the term “regulation” in an administrative sense to individual industry sectors, or even to individual firms. In this case, the “American question” should be modified to “if you’re so smart, why ain’t you running the business?” Agency regulation is such a nebulous concept that any attempt to criticize it allows proponents to slide out from under by changing the terms of the argument. But proponents cannot be permitted this luxury; regulation must have some definite purpose. And in practice, government regulation of business fails every test known to mortal man. The things that most people claim they want from regulation are precisely the things that can only be supplied by market competition rather than by regulation. Regulation is not a supplement or corrective to competition; it is an inferior substitute for it.

This failure of economic theory is particularly important because it drags the research model down to failure along with it. The majority of academic economists are left-wing in political orientation. (After all, they work for government.) In practice, their theoretical model and statistical tests have been designed to demonstrate the failure of free markets and the need for government intervention to produce an optimal result. The optimal result is the one that would obtain if private markets worked perfectly. Since they don’t, so runs the academic party line, we need government intervention and regulation to correct the market failures.

But real life has overtaken the academic research model. It is free markets, not government-controlled ones, that deliver the goods. This is still another argument for junking the current research model. It’s hard to do good research starting with a bad economic theory.

The Nitty-Gritty: Where Does Mainstream Economic Theory Go Wrong?

We have said that the mathematical model seduces economists into wrongly specifying their theoretical models. Exactly what does this mean?

Go back to Walras’s model of supply and demand. He, or rather his successors, assumed that we could model consumer demand as a function of consumers’ incomes, tastes and the prices of substitutes and complements for the good under study. But this implicitly assumes that consumers know all this information. As we all realize, they don’t. Nevertheless, it was long traditional for economists to begin by assuming the existence of “perfect information.” Since people consume not only in the present moment but also save for future consumption, this perfection of knowledge applied to the future as well as the present.

How’s that for an abstract model with no relationship to reality?

The same consideration applies on the supply side of the market, where producers are assumed to know not only every price relevant to the production of their own product – all input prices, the prices of all competing goods and so on – but also all technological facts relevant to production of their product and related products. And that’s not all, folks.

When devising models of general equilibrium, economists long assumed that all firms were “price-takers.” That simply meant that each firm supplied such a minuscule fraction of total market output that its contribution to that output had virtually no effect on the market price. That is, regardless of whether it operated at maximum production or went out of business, the market supply curve didn’t budge enough to change the equilibrium price materially. Therefore each firm took the market price as a parameter and treated the quantity it supplied as its only decision variable.
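A toy calculation makes the point. In the sketch below, 10,000 identical firms serve a market with a linear demand curve; the demand function, firm count and outputs are all hypothetical numbers chosen for illustration:

```python
# Toy illustration of "price-taking." With 10,000 identical firms,
# one firm's disappearance barely moves the market-clearing price.
# The demand curve and all quantities are hypothetical.

def clearing_price(total_quantity: float) -> float:
    """Linear inverse demand, illustrative: P = 120 - 0.0001 * Q."""
    return 120 - 0.0001 * total_quantity

firms, output_per_firm = 10_000, 100
q_with_all_firms = firms * output_per_firm          # 1,000,000 units
q_minus_one_firm = (firms - 1) * output_per_firm    # 999,900 units

print(round(clearing_price(q_with_all_firms), 2))   # 20.0
print(round(clearing_price(q_minus_one_firm), 2))   # 20.01 -- about a penny
```

No single firm can profitably behave otherwise than as if the price were fixed, which is exactly what the “parameter” assumption asserts.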

What about the quality of the good it produced? That led to still another simplifying assumption. Since “quality” was a variable that seemed to defy quantification, economists at first sought to treat the output of all firms as homogeneous – thereby removing product quality from discussion.

At this point, readers are probably experiencing the same mixture of disillusion and disbelief that hits college freshmen and sophomores when they are exposed to the economic concept of “perfect competition” for the first time. “What planet do economists live on?” is a representative specimen of the thoughts running through student heads at this moment.

As a temporary venture in devil’s advocacy, it is worth noting that an individual farmer operating in certain industries may meet some of these criteria. It is not too big a stretch to treat a particular variety of (say) wheat as a homogeneous good and it is definitely no stretch to treat the output of (say) one family farmer as an insignificant fraction of industry output. But even this kind of partial correspondence between model and reality is the exception, not the rule.

Over the decades, economists have modified the stringent assumptions listed above in various ways. But these modifications have been minor in their practical consequences. Instead of assuming perfect knowledge, for example, economists assumed that market participants possessed probability distributions about the outcome of future events or the existence of certain kinds of information. This minor concession didn’t add much value to their models. If I can play blackjack using the “card-counting” technique, this shifts the odds slightly in my favor. I will always win in the long run, assuming that my initial stake is big enough to withstand any runs of bad luck and I can play “forever.” Unfortunately, most economic decisions do not offer even this probabilistic level of certainty, let alone the perfect information available in the less sophisticated version of economic theory. (And in real life, blackjack doesn’t either; the casinos will ban me if they catch me card-counting.)
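The card-counting analogy can be made concrete with a small simulation. The sketch below is purely illustrative: the 51% win probability, even-money $1 bets, session length and trial counts are all assumed numbers, not casino data. It shows why the “long run” only arrives if the initial stake can absorb runs of bad luck:

```python
import random

# Monte Carlo sketch of the card-counter's predicament: even with a
# small positive edge (hypothetical 51% win probability, even-money
# $1 bets), a small starting stake is often wiped out by a bad run
# before the long-run advantage can assert itself.

def ruin_frequency(stake: int, p_win: float = 0.51,
                   rounds: int = 5_000, trials: int = 1_000) -> float:
    ruined = 0
    for _ in range(trials):
        bankroll = stake
        for _ in range(rounds):
            bankroll += 1 if random.random() < p_win else -1
            if bankroll == 0:          # busted before the long run arrived
                ruined += 1
                break
    return ruined / trials

print(ruin_frequency(stake=20))    # roughly 0.4: small stakes bust often
print(ruin_frequency(stake=500))   # near 0.0: a big stake rides out bad luck
```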

Economists introduced even more modifications on the supply side of markets. Beginning in the 1930s, they began to contemplate alternatives in between the polar opposites of perfectly competitive markets and pure monopoly. But these alternatives, such as product differentiation and strategic interaction among a small number of large firms, were so slow to catch on that economists became habituated to focusing only on the equilibrium outcomes of markets and not on market processes. This meant that even when more sophisticated models began utilizing game theory and other non-traditional approaches, their focus was still directed away from entrepreneurship and innovation.

The Effects on the Study of Innovation and Entrepreneurship

The esoteric assumptions behind mainstream, traditional economic theory have backed that theory into a corner. Economists came to depend on the research model behind the theory for their livelihood. This gave them an underlying, unconscious identification with its biases and conclusions.

When Alfred Marshall first promoted his supply-demand Marshallian Cross, he viewed it as a valuable teaching tool for educating the masses. But economists became so obsessed with the concept of equilibrium that it became the primary focus of every theoretical model. The conditions necessary for equilibrium and the conditions prevailing at the state of equilibrium became the centerpiece of nearly every journal article. Little or nothing was said about the time-path to equilibrium and what might affect it.

The noted economist Joseph Schumpeter (1883-1950) prided himself on his personal and professional eccentricity. (He is said to have espoused the goals of being the best horseman in Vienna, the best lover in Europe and the best economist in the world.) In his theory of economic development, he derided the mainstream obsession with equilibrium, perfect competition, perfect information and – most of all – product homogeneity. Schumpeter believed that economic progress was made primarily by firms that created entirely new products. This could come about only as a result of innovation.

But Schumpeter knew that the mainstream world inhabited by his colleagues was hostile to the notion of innovation. In traditional economic theory, perfectly competitive firms were each earning a “normal” profit in long-run equilibrium. That is another way of saying that each firm’s books recorded exactly enough money under the heading “profit” to prevent shareholders from withdrawing their money and investing elsewhere, but not enough to attract the entry of new competitors into the industry. (Another way of putting it would be to say that the firm’s investment earned an amount equal to the best alternative investment of equal risk; e.g., its “opportunity cost” of investment was exactly covered.) In such an environment, an innovator would find that any temporary profits from creating a new product would soon – in principle, instantaneously – be competed away by a horde of imitative firms entering the market. After all, with “perfect information” all relevant information necessary for production would be publicly known.
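The notion of “normal” profit can be put in arithmetic form. All figures below are invented for illustration; the point is only that the books show a profit which exactly covers the opportunity cost of capital:

```python
# "Normal profit" in hypothetical numbers: the books show a profit,
# but it exactly covers the opportunity cost of the invested capital.

invested_capital = 1_000_000      # shareholders' stake in the firm
alternative_return = 0.08         # yield on a comparable-risk investment elsewhere
accounting_profit = 80_000        # what appears under "profit" on the books

opportunity_cost = invested_capital * alternative_return
economic_profit = accounting_profit - opportunity_cost

# Zero economic profit: shareholders have no reason to withdraw their
# money, and outsiders see no supra-normal return to chase by entering.
print(economic_profit)  # 0.0
```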

According to Schumpeter, innovative firms strive not only to erect but to maintain durable monopoly positions in the products they create. The resulting monopoly profits not only reward owners for the risks they take but also bankroll the research necessary to improve their product and create new innovative products. The actual world of imperfect information makes it harder on producers but it also makes it easier to maintain monopoly status once it is attained.

Mainstream economists couldn’t stomach this analysis because they had been preaching (and practicing) a doctrine of enforced competition and government intervention to eradicate monopoly. How could they now praise the monopoly structure that they had made their bones by condemning? (Of course, economists were all-too-willing to relax their standards and overlook monopoly when it was organized and enforced by government itself because they viewed government as the sole economic actor not actuated by self-interest. In effect, economists of Schumpeter’s day were, and remain today, employees of Government R’Us.)

Schumpeter replied to his mainstream colleagues by pointing out that innovating monopolists did face competition even if they were able to exclude direct competitors from their market by (for example) obtaining patent protection for their new products. That competition came from other creative would-be monopolists. After all, the demand for the original monopolist’s product had to come from people shifting purchases from goods being produced by competitive firms. Why wasn’t the monopolist also vulnerable to the same line of attack from other innovators?

For Schumpeter, “competition” was not merely a dull, incremental process of bland, homogeneous products duking it out for tiny shares of a market and a normal profit. He called his model of competition between monopolists “creative destruction,” implying that innovation can occur only by destroying or disrupting the existing order in favor of a new creative equilibrium – which will eventually be toppled by a new innovator. Thus, said Schumpeter, “…competition from the new commodity, the new technology, the new source of supply, the new type of organization… which… strikes not at the margins of the profits and the outputs of the existing firms but at their foundations and their very lives,” is the true explanation behind the superiority of free-market capitalism to other systems. In Capitalism, Socialism and Democracy, Schumpeter cited the example of ALCOA, a “monopoly” so notorious that it would soon be convicted under U.S. antitrust laws. Yet between 1890 and 1929, the price it charged for aluminum had fallen by 91% and its output had risen by a factor of 30,000! Schumpeter believed that the company had, in effect, been competing against the threat posed by potential competition.

Schumpeter was the most popular of the economic heretics because his model corresponds much more closely with certain aspects of reality. New products and product heterogeneity are a fact of life. Market uncertainty faces every participant, none more so than the would-be innovator. If injected with truth serum, every economist would be forced to admit that the concept of equilibrium is best conceived as a constantly changing point toward which competitive markets tend, rather than a point of rest actually attained by real-world markets.

The more telling critique of traditional economic theory, though, was made by Schumpeter’s fellow Austrian, F.A. Hayek (1899-1992), from a different theoretical perspective. Hayek pointed out that the term “perfect competition” violates every commonsense precept of the word competition. Under perfect competition, each firm has no sense of any other firm as a rival, hence does not perceive itself as “competing” with anybody. It has no incentive to lower its price for competitive reasons since it can already sell all it produces at the prevailing market price. If it attempted to raise its price arbitrarily, its sales would fall to zero. Every firm produces exactly the same product, so there is no competition on the basis of product quality.

Another simplifying assumption of traditional theory has been that no barriers to entry or exit exist in a “competitive” industry. This absence of barriers was formalized mathematically as costless entry and exit, meaning that the emergence of profits above those available in comparable investments elsewhere would instantly attract new entrants. The additional supply provided by that new entry would lower market price until the supra-normal profits were fully eroded.

What is there left to compete about? Nothing. Each firm selects the rate of output optimal to its situation; that is all. “Price-taking behavior” is the antithesis of “competition” as it is commonly understood. In “The Meaning of Competition” (1946), Hayek observes that the array of simplifying assumptions made by traditional theory assumes competitive equilibrium to exist – the process that brings it about is not explained by the theory but merely assumed at the outset. Nowhere does the theory explain how or why information should be so perfect, entry should be so easy, goods should be homogeneous and so many firms should exist.

Hayek found the assumption of “perfect information” especially paradoxical. Assuming that everybody knows everything is really just a way of evading the question that economists should be making central to their studies; namely, how is information transmitted and acquired in a market economy? We know that people know some of the things that economists assume they know – the question is how they came to know them.

When Hayek broached this issue in a seminal article – “The Use of Knowledge in Society” in 1945 – the fashion among economists was to treat information about prices and goods as “given data.” He wondered to whom the data were “given.” The phrase must have meant “given to the observing economist” rather than actually given to the people who were supposed to possess it, since there was no agency that literally gave people such information. “The data from which the economic calculus starts are never for the whole society ‘given’ to a single mind… and can never be so given.” In fact, no one person or institution possessed it in its totality. It existed only in dispersed, fragmentary form in the minds of many millions (today, read “billions”) of people.

There is only one way for people to acquire the invaluable information they need to participate effectively in a market economy. They get it from markets themselves. That is why free markets are a necessary prerequisite for economic prosperity.

In another article (1937’s “Economics and Knowledge”), Hayek illumined the concept of equilibrium even more brightly than did Schumpeter. Rather than treating equilibrium merely by defining it as the correspondence of quantity demanded with quantity supplied in a market or markets, Hayek looked at the human implications of this fact. People order their lives by making plans that guide their behavior. When their individual plan is optimal when juxtaposed with the galaxy of facts at their disposal, the individual is said to be “in equilibrium.” But each individual’s plan is typically made independently of others; all plans need not automatically or necessarily be compatible with each other a priori. A market is said to be in equilibrium when all plans do mesh and are compatible. Thus, the impersonal workings of a free market serve to coordinate the plans of individuals by collating the dispersed information existing in the minds of its participants and using it to reconcile the wants and needs of all.

Writers of economics textbooks have traditionally begun by outlining what they call the Economic Problem. Since the resources necessary to produce economic goods are scarce and have alternative uses, we must allocate them logically in order to best satisfy the infinite wants of consumers. Optimal allocative logic is what textbook writers envision as economic theory.

Hayek redefined the Economic Problem. Because economists themselves do not possess the knowledge that mainstream theory has assumed market participants possess, they cannot “allocate” resources. Neither can government, for the same reason. The knowledge exists only in dispersed form, and the only way to unlock and make use of it is by utilizing markets to collate it and distribute it. That same market process then coordinates the plans of market participants to make them (more) compatible. The true Economic Problem is how to coordinate the plans of individuals by distributing the dispersed information not possessed by any one individual or institution.

We know that free markets perform this function better than government central planning and regulation. For over seventy years, central planning reigned in the Soviet Union. The result was the antithesis of coordination, in which an ordinary citizen might spend as much as six hours per day standing in line or hiring substitutes to do it for him. And the reward was a level of income and wealth equal to a small fraction of that obtainable in free societies without having to stand in line.

The Revised Economic Theory: Innovation and Entrepreneurship

Hayek’s work paved the way for an explicit economic theory of entrepreneurship and innovation, one that not only corrected the errors of mainstream theory but also put the work of Schumpeter in its proper perspective. In this space next week, we will explain how one man – now apparently on the short list for the Nobel Prize in economics – extended and refined Hayek’s analysis.

DRI-202 for week of 4-26-15: The Comcast/Time-Warner Cable Merger Bites the Dust

An Access Advertising EconBrief:

The Comcast/Time-Warner Cable Merger Bites the Dust

This week brings the news that the year’s biggest and most highly publicized merger, between cable television titans Comcast and Time-Warner Cable, has been called off. Although the decision was technically made by Comcast, who announced it on Monday, it really came from the Federal Communications Commission (FCC), whose de facto opposition to the merger became public last week. This continues a virtually unbroken string of economically inane measures taken by the Obama administration and its regulatory minions.

Theoretically, merger policy falls within the province of industrial organization, the economic specialty spawned by the theory of the firm. Actually, the operative logic had nothing whatever to do with economics. Instead, the decision was dictated by the peculiar incentives governing the behavior of government.

The high visibility of the intended merger and the huge volume of comment it spawned make it worthwhile to examine carefully. What made it so attractive to the principals? Why was it denounced so bitterly in certain quarters? Was the FCC right to oppose it?

Who Were the Principals in the Merger?

Comcast and Time-Warner Cable (hereinafter, TWC) are today the two leading firms in the so-called “pay-TV” industry. The quotation marks reflect the fact that the term has undergone several changes over the course of television history. Today it refers to two different groups of television consumers. First are subscribers to cable television, the biggest revenue source for both Comcast and TWC. Born in the 1950s and nurtured in the 1960s, cable TV fought tooth and nail to gain a toehold against “free” broadcast television. It succeeded by offering better reception from buried coaxial-cable transmission lines, more viewing choices than the “Big 3” national broadcast network channels offered on free TV and a blessed absence of commercial interruption. Its success came despite the efforts of government regulators, who barred local cable companies from serving major metropolitan areas until the 1980s.

In the early days, municipalities were so desperate to get cable TV that local governments would offer a grant of monopoly to the first cable franchise to lay cable and promise to serve the citizenry. In return, the cable firm would have to pay various legal and illegal bribes. The legal ones came in the form of community-access and public-service channels that few watched but which gave lip service to the notion that the cable firm was serving the “public interest” and not merely maximizing profit. Predictably, these monopoly concessions eventually came back to haunt municipal governments when cable firms inexorably began to raise their rates without providing commensurate increases in programming value and customer service.

Today, the contractual arrangements with cable firms survive. But the grants of monopoly are no more. In many markets, other cable firms have entered to compete with the original franchisees. Even more important, though, are the other sources of competitive television service. First, there is satellite TV service provided by companies like DirecTV and Dish. A satellite dish – usually located on the customer’s roof – gathers the signal transmitted by the company and provides hundreds of channels to customers. Wireless firms like AT&T and Verizon can transmit television signals and provide television service as well. And finally, it has become possible to “stream” television signals digitally by means similar to those used to stream audio signals for songs. Consequently, a movie-streaming service like Netflix has become a potent competitor to cable television.

What Did Comcast and TWC Have to Gain from the Merger? 

The late, great Nobel laureate Ronald Coase taught us that business firms exist to do things that individuals can’t do for themselves – or, more precisely, things that individuals find too costly to do themselves and more efficient to “import” from outsiders. Take this same logic and extend it to business firms. Firms produce some things internally and purchase other things outside the firm. Logically, the inputs they produce internally are the ones they can produce at a cost lower than the external cost of purchase, while external purchases are made when the internal cost of production is too high.

Now extend this logic even further – to the question of merger, in which one firm purchases another. Both firms have to agree to the terms, including a price, which means that both firms consider the merged operation superior to separation. The term used to denote the advantages that arise from combination is synergy – a hybrid of “synthesis” and “energy” suggesting that melding two elements produces a greater output of energy than do the elements in isolation.

Why should putting two firms together improve on their separate efficiency? The first place to look for an answer is cost, the reason why businesses exist in the first place and purchase inputs in the second place. The primary synergy in most mergers is the elimination of duplicative functions. Because mergers themselves take time, effort and other resources to effect, there must be substantial duplication available for elimination in order to justify a merger on this ground alone. That is why mergers so often occur (or are threatened) among similar, competing firms with similar internal structures.

This applies to Comcast and TWC. Large parts of both firms are devoted to the same function; namely, providing cable television to subscribers. A merger would still leave them with the same total territory to service. But one central office, much smaller than the combined size of both before the merger, could now handle administration for the entire territory. The largest efficiencies would undoubtedly have been available in advertising. Economies of scale would have been gained from having one advertising department handle all advertising for the merged firm. Economies of size would have been available because the much larger total of advertising would have commanded volume discounts from sellers.

Given the gigantic size of the firms – their combined revenue would have totaled well over $80 billion – these economies alone might well have justified the merger. And that leaves out the most important reason for the merger. In times of market turmoil, mergers are often referred to as “consolidation.” This is a polite way of saying that the firms involved are girding their loins for future battle. They are fighting for their business lives.

This is completely at odds with the picture painted by self-styled “consumer advocates” and government regulators. The former whine about the poor quality of service provided by Comcast to its cable subscribers, calling the company a “lazy monopolist.” By definition, a lazy monopolist doesn’t have to worry about its future – it is living off the fat of the land or, as an economist puts it, taking some of its profits in the form of leisure. (Of course, the critics can’t have it both ways – if the firm is “lazy” then it must be extracting less profit from consumers than it could if it were “aggressive.” But the act of moral posturing uses up so much mental energy that there is little left for critics to use in applying logic.) Government regulators say that Comcast and Time-Warner have so much power that, when combined, they could exclude their potential competitors from the market for “high-speed broadband.”

But the picture painted by market analysts is completely different. Comcast and TWC are leading players in a market that is beginning to wither on the vine. They are not merely providing “pay TV;” they are providing it via coaxial cable buried in the ground and via subscription. This method of providing television service will sooner or later become an endangered species – and the evidence is leaning toward “sooner.” People are beginning to “cut the cord” binding them to cable television. They are doing it in at least three ways. For years, satellite services have made modest inroads into cable markets. Now wireless companies are increasing these inroads. Finally, streaming services are promoting the ultimate heresy – people are renouncing their television sets entirely by streaming TV programming on their computers. Consumers abandoned pay-TV in both 2013 and 2014; in the past year alone, defections to streaming TV have numbered in the millions.

Not surprisingly, the prime mover behind all of these threats to cable TV is cost. In the early days of cable, hundreds of channels were a dazzling novelty after the starvation diet of three major networks (with perhaps one UHF channel as an added spice). People occasionally surfed the channels just to find out what they might be missing or for something of genuine interest. Over time, though, they bore an increasing cost of holding an inventory of dozens of channels handy on the mere off-chance that something interesting might turn up. That experience gradually made the tradeoff seem less and less favorable, and the lure of a TV lineup tailored to their specific preferences and budget more and more attractive. From here on, the prices of cable TV’s competitors will go nowhere but down.

These competitors are not only competing on the basis of price but also on the basis of product quality. Increasingly, they are now creating their own programming content. This trend began years ago with Home Box Office (HBO), which started life as a movie channel but entered the top tier of television competition when it began producing its own movies and specials. Now Netflix has followed suit and everybody else sees the handwriting on the wall.

The biggest attraction of the merger for Comcast and Time-Warner was the combined resources of the two firms, which would have given the resulting merged firm the kind of war chest it needed to fight a multi-front competitive war with all these competitors. Each of the two firms brought its own special advantages to the fight, complementing the weaknesses of the other. Comcast owns NBC, currently the most successful broadcast-TV channel and a locus of programming expertise. Another of its assets is Universal Studios, a leading film producer since the dawn of Hollywood and a television pioneer since the 1950s. TWC would have brought the additional heft and nationwide presence necessary to lift Comcast from regional cable-TV leader to international media player.

What Is an “Industry”?

Everybody has heard the word “industry” used throughout their lives. Everybody thinks they know what it means. The federal government lists and classifies industries according to the Standard Industrial Classification (SIC) code. The SIC code defines an industry by its technical characteristics, and the definition becomes narrower as the work performed by the firms becomes more specialized. From the point of view of economics, though, there is a problem with this strictly technical approach to definition.

It has no necessary connection to economics at all.

The only economic definition of an industry relates to the economic substitutability of the products produced by its members. If the products are viewed by consumers as economically homogeneous – that is, interchangeable – then the aggregate of firms constitutes an industry. This holds true regardless of the technical features of those products. They may be physically identical; indeed, that might seem highly likely. But identical or not, their physical similarity has nothing to do with the question of industrial status.

If the goods are close substitutes, we may regard the firms as comprising an industry. How close is “close”? Well, in practice, economists usually use price as their yardstick. If significant variations in the price of any firm’s output will induce consumers to shift their custom to a different seller, then that is sufficient to stamp the outputs of the different sellers as close substitutes. (We hold product quality constant in making this evaluation.)
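
One conventional way to operationalize this yardstick is the cross-price elasticity of demand. The following is a minimal sketch in Python, with hypothetical numbers:

# Cross-price elasticity: the % change in quantity demanded of seller A's
# output per 1% change in seller B's price. Large positive values mark
# close substitutes; values near zero mark unrelated goods.
def cross_price_elasticity(pct_change_qty_a, pct_change_price_b):
    return pct_change_qty_a / pct_change_price_b

# Hypothetical: a 10% price hike by seller B shifts 8% more custom to seller A.
print(cross_price_elasticity(8.0, 10.0))  # 0.8 -- close substitutes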

This distinction – between the definition of an industry in strictly technical terms and in economic terms – is the key to understanding modern-day telecommunications, the digital universe and the Comcast/TWC merger.

Without saying it in so many words, the FCC proposes to define markets and industries in non-economic terms that suit its own bureaucratic self-interest. It does this despite the fact that only economic logic can be used when evaluating the welfare of consumers and interpreting the meaning of antitrust law.

The FCC’s Rationale for Ordering a Hearing on the Comcast/TWC Merger

Comcast decided to pull the plug on its proposed merger with TWC because the FCC’s announced decision to hold a regulatory hearing on the merger was a signal of the agency’s intention to oppose it. (The power of the federal government to legally coerce citizens is so great that innocent defendants commonly plead guilty to criminal charges in order to minimize penalties, so it is not strange that Comcast should surrender preemptively.) It is natural to wonder what was behind that opposition. There are two answers to that question. The first answer is the one that the agency itself would have provided in the hearing and that has already been provided in statements made by FCC Chairman Thomas Wheeler. That answer should be considered the regulatory pretext for opposition to the merger.

For years, another regulatory agency – the Federal Trade Commission (FTC) – passed both formal and informal judgment on antitrust law in general and business combinations in particular. The FTC even provided a set of guidelines for which mergers would be viewed favorably and which unfavorably. The guidelines looked primarily at what industrial-organization economists called industry structure. That term refers to the makeup of firms existing within the industry. Traditionally, this field of economics studies not only industry structure – the number of firms and the division of industry output among them – but also the conduct of existing firms – competition might be fierce, lackadaisical or even give way to collusive attempts to set price – and their actual performance – prices, output and product quality might be consistent either with competitive results or with monopolistic ones. But the FTC concerned itself with structural attributes of the market when reviewing proposed mergers, to the exclusion of other factors. It calculated what were known as concentration ratios – fractions of industry output produced by the leading handful of firms currently operating. If the ratio was too high, or if the proposed merger would make it too high, then the merger would be disallowed. When feeling particularly esoteric, the agency might even deploy a hyper-scientific tool like the “Herfindahl-Hirschman Index” of industry concentration as evidence that a merger would “harm competition.”
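
Both structural measures are simple arithmetic. A minimal sketch in Python, using hypothetical market shares:

# Hypothetical market shares (percent of industry output), for illustration only.
shares = [30, 25, 20, 15, 10]

# Four-firm concentration ratio: the combined share of the four largest firms.
cr4 = sum(sorted(shares, reverse=True)[:4])

# Herfindahl-Hirschman Index: the sum of squared shares, ranging from
# near 0 (atomistic competition) to 10,000 (pure monopoly).
hhi = sum(s ** 2 for s in shares)

print(f"CR4 = {cr4}%, HHI = {hhi}")  # CR4 = 90%, HHI = 2250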

In our case, the FCC needed a rationale to stick its nose into the case. That was provided by President Obama’s insistence on the policy of “net neutrality” as he defined it. This policy contended that the leading cable-TV providers were “gatekeepers” of the Internet by virtue of their local monopoly on cable service. In order to give the policy a semblance of concreteness – and also to make the FCC look as busy as possible – the agency established a rule that the top pay-TV firm could control no more than 30% of the “total” market. This criterion is at least loosely reminiscent of the old FTC merger guidelines – except that those guidelines at least had a tenuous relationship with economic theory and logic, whereas the FCC’s policy has as much to do with astrology as with economics; that is, roughly nothing to do with either. But, mindful of the FCC’s rule and in order to keep its merger hopes alive, Comcast sold enough of its cable-TV properties to Charter Communications to reduce the two companies’ combined pay-TV holdings to the 30% threshold.

In order to create the appearance of being progressive in the technical as well as the political sense, the FCC set itself up as the guardian of “high-speed broadband service.” For years leading up to the merger announcement, the FCC’s definition of “high-speed” was a speed greater than or equal to 4 Mbps. But after the merger announcement, the FCC abruptly changed its definition of the “high-speed market” to 25 Mbps or greater. Why this sudden change? Comcast’s sale of cable-TV assets had circumvented the FCC’s 30% market threshold, so the agency now had an incentive to invent a new hurdle to block the merger. The faster broadband-speed classification had the effect of including fewer firms, thereby making the (artificially defined) market smaller than before. In turn, this made the shares of existing firms higher. Under this revised definition – surprise, surprise! – the Comcast/TWC merger would have given the resulting firm 57% of the newly defined “market” rather than the 37% it would previously have had.
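
The arithmetic of the maneuver deserves to be spelled out. In the sketch below, the subscriber counts are hypothetical, chosen only to reproduce the 37% and 57% shares cited above:

# Redefining the "market" shrinks the denominator while the merged firm's
# numerator stays put, so its measured share jumps. Figures are hypothetical.
merged_firm = 37.0      # merged firm's qualifying connections (millions)
market_4mbps = 100.0    # all connections >= 4 Mbps (millions)
market_25mbps = 65.0    # rivals' slower connections no longer count (millions)

print(f"share under 4 Mbps definition:  {merged_firm / market_4mbps:.0%}")   # 37%
print(f"share under 25 Mbps definition: {merged_firm / market_25mbps:.0%}")  # 57%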

Still, most industry observers figured that Comcast’s divestiture sale to Charter Communications, combined with what Holman Jenkins of The Wall Street Journal called “Comcast’s vast lobbying spending and carefully cultivated donor ties with the Obama administration”, would see the merger over the regulatory hurdles. Clearly, they reckoned without the determination of FCC Chairman Wheeler.

What Was the Actual Motivation of the FCC in Frustrating the Comcast/TWC Merger?

Regulators regulate. That is the explanation for the FCC’s de facto denial of the Comcast/TWC merger. It is the bureaucratic version of Descartes’s “I think, therefore I am.” After over a century of encroaching totalitarianism, it is only gradually dawning on America that big government is dedicated solely to the proposition that government of, by and for itself shall not perish from the Earth.

A recent Bloomberg Business editorial is an implicit rationale for the FCC’s action. The editor marvels at how, only recently, the forces of cable-TV darkness seemed to have the upper hand, poised with their jackboots on the throats of consumers the world over. But then, with startling suddenness, cable’s position came to seem wholly tenuous, beset on all sides by uncertainty. And who should we thank for this sudden reversal? Why, the FCC, of course, whose wise regulation has turned the tide. Instead of crediting competitive forces with making the FCC’s action unnecessary if not a complete non sequitur, the editorial gives the agency credit for circumstances that preexisted its action and in which it had no hand.

One of Milton Friedman’s famous characterizations of bureaucracy compared it to the flight leader of a covey of ducks who, upon discovering that the remainder of his V-formation have deserted him and are flying off in a different direction, scrambles to get back in front of the V again. By denying the merger, the FCC has re-positioned itself to claim credit for anything and everything that competition has accomplished so far and will accomplish in the future. If it had done nothing, regulation would have had to cede credit to market forces. By doing something – even something as crazy, useless and downright counterproductive as frustrating a potentially beneficial merger – the FCC has not only set itself up for future benefits, it has also fulfilled the first goal of every government bureaucracy.

It has justified its existence.

All this would have been true even if the FCC’s pre-existing commitment to net neutrality had not forced it to twitch reflexively every time the words “high-speed broadband” arise in a policy context. As it is, the agency was compelled to invent a “policy” for regulating a market that will soon be the most hotly competitive arena in the world – unless the federal government succeeds in wrestling competition to a standstill here as it did in telecommunications in the 1990s.

Why are Economic Theory and Logic Absent from the FCC’s Actions in the Comcast/TWC Merger?

Begin with a few matter-of-fact sentences from Forbes magazine’s summary of the merger. “Comcast and TWC do not directly compete with each other… and there is no physical overlap in the areas in which these companies offer services.” Competitors such as DirecTV, Dish, AT&T, Verizon and Netflix have “reduced the importance of the cable-TV market and given its customers other alternatives… Hence this merger would not significantly impact the choices available to the consumers in the service areas of these two companies.”

Forbes’ point was that old-time opposition to mergers by agencies like the FTC was based on the simplistic premise that when competitors merge, there is one fewer competitor in the market – which is then one step closer to monopoly. When there were few competitors to begin with, this line of thinking had a certain naïve appeal, even though it was wrong. But when the merging companies weren’t competitors in the first place, even this rather flimsy rationale evaporates. And this holds just as true in the so-called “market for high-speed broadband” as it does for the market for pay-TV. Why? Because President Obama and FCC Chairman Wheeler have anointed the cable companies as the gatekeepers of that “market,” and the only markets they can be the gatekeepers of are those same local markets in which Comcast and Time-Warner weren’t competitors before the merger announcement. Therefore the merger couldn’t have affected developments there, either.

The end-in-view of all economic activity is consumption. Consumers – the people who watch TV in whatever form – would not have been harmed or adversely affected by the merger. The consumer advocates who cite the bad service given by Comcast to its customers seem to have taken the view that the remedy for this offense is to make sure that nothing good happens to Comcast from now on. They apparently expect that the merger would have reduced the total volume of employment by the two firms – which it undoubtedly would – and that this would on its face have made customer service even worse – which it most certainly would not have done. Government never ceases to object to budget cuts and predict even worse customer service when they are implemented, but bigger government never produced better customer service. Only competition does that – and the merger was a desperate attempt to prepare for and cope with competition.

The FCC’s imaginary market for high-speed broadband and its 30% threshold were as irrelevant to market competition as the price of tea in Ceylon. The entire digital universe is inventing its way around the anachronistic gatekeeper function performed by local cable firms. (The Wall Street Journal‘s editors couldn’t help reacting in amazement to the FCC’s announcement: “Is anybody at the FCC under 40?” Today it is only the senior-citizen crowd that is still tethered to desktop computers for Web access.)

Why Should the Man in the Street Be Expected to Embrace a Merger Between Large Corporations?

It has been estimated that the sum of mankind’s knowledge has increased more since 2003 than it did from the dawn of human history up to that point. Given the breakneck advance of learning, we cannot expect to comprehend the meaning and benefit of all that goes on around us. Instead, we must choose between the presumptive value of freedom and the restraining hand of government. We owe most of what we value to freedom and private initiative. It is genuinely difficult to identify much – if anything – that government does adequately, let alone brilliantly.

This straightforward comparison, rather than complex mathematics, econometrics or “he said, she said” debates between vested interests should sway us to side with freedom and free markets. The average person shouldn’t “embrace” a corporate merger because he or she shouldn’t evaluate the issue on the basis of emotion. The merger should have been “tolerated” as an exercise of free choice by responsible adults – period.

DRI-285 for week of 11-18-12: Twinkie Recipe: Separate Politics from Economics, Bake Cheaply and Deliver Efficiently


An Access Advertising EconBrief:

Twinkie Recipe: Separate Politics from Economics,

Bake Cheaply and Deliver Efficiently

Contemporary economic theory is now so heavily formalized by high-level mathematics and statistics as to be inaccessible to non-specialists. This has many drawbacks. Among them is the difficulty of integrating the effect of politics on markets. This is one of the few points of agreement between left- and right-wing commentators, who insist that we have a system of political economy rather than a system of markets as such.

Both sides are correct. Unfortunately, this realization causes them to neglect economics rather too much and concentrate on politics too heavily. Faced with a controversy, they tend to choose sides as if engaged in a war – by looking at the political uniform worn by the participants. Their most recent skirmish has attracted national attention. The baker, snack-food and confectioner Hostess, Inc. has filed for bankruptcy after a protracted dispute with its unions. One union in particular, the bakers’ union, has drawn the focus of attention.

The Decline and Fall of Hostess

Hostess was formerly Interstate Brands Corp., of Kansas City, MO, producer of over 30 brands of breads, cakes and snacks. These include legendary names like the Twinkie, Wonder Bread and Hostess Ding Dongs. The current dispute between Hostess and its unions is only the terminal event in a decades-long history of gradual decline. Hostess’s bankruptcy is its second; the first resulted in reorganization and the name change from IBC to Hostess. How could a company with such a distinguished roster of popular brands have fallen so low?

Some of the decline is due to a change in consumer tastes. High-calorie, high-fat, high-sugar snacks have lost favor. The realization that carbohydrate consumption carries just as much danger as fat consumption, if not more, has dampened the American enthusiasm for bread and cake. This is only part of the explanation for Hostess’s problems, though.

The longtime popularity of brands like Twinkies and Ding Dongs allowed the company to endure some highly uneconomical labor practices. The Teamsters Union – one of 12 unions operating under more than 300 collective-bargaining agreements with Hostess – forbade drivers from helping to load and unload their trucks. A stocker had to be employed to drive to the store and stock retail shelves with products transferred from storage. Some brands, such as Wonder Bread, could not share space on trucks with others.

When the falloff in brand popularity hit, Hostess could no longer subsidize this sort of inefficiency. The company has operated in bankruptcy reorganization for most of the preceding decade. The final crisis occurred within the last week, when Hostess announced that it had asked for contract concessions from the bakers’ union, having already received concessions from the other 11 unions. It could not operate under the current contract, and the law forbade operation without a contract. Thus, it announced that unless the bakers agreed to a deal, Hostess would once more file for bankruptcy and this time would proceed to liquidate the company’s assets.

The bakers refused. The company filed for bankruptcy. A federal judge intervened by demanding that the parties undergo mediation. That process failed, and the bankruptcy and liquidation will now proceed.

The Left-Wing Reaction

The response on the hard left-wing, particularly among union proletarians, is that once more a company was undone by “vulture capitalism.” Private-equity firms took over the firm and ran roughshod over the rights of honest workers, raping and pillaging the firm’s assets. These commentators are doubtless fortified by the election returns, which suggest that the campaign of career-character assassination against former Bain Capital CEO Mitt Romney worked well enough to secure re-election of a fairly unpopular President.

The commentators looked at the day-to-day uniforms worn by the managers of Hostess and saw “venture capital” emblazoned thereon. Had they looked behind the scenes, however, they would have noticed that many of the particular venture capitalists involved with Hostess were closely associated with the Democrat Party. That’s right – the party of compassion, of equality and fairness, of comparable worth and social justice and the 99% and share-the-wealth and soak-the-rich. How could this be?

Actually, the real question is: How could it be any other way? Take-over artists and private-equity managers are primarily engaged in turning around businesses, not liquidating them. A liquidation is a fire sale, in which assets are generally sold at rock-bottom prices. That is why potential buyers tend to wait out dramas like the Hostess episode rather than riding to the rescue like the Lone Ranger. The rate of return on an asset depends crucially on the price paid for it. Who wouldn’t rather pay a low price than a high one? Private-equity managers are business experts, right enough, but there’s no such thing as an expert in getting a high price at a close-out sale. Ask any business owner who ever went bust or any grieving son or daughter who ever liquidated their parent’s possessions at an auction. It’s pretty tough to profit from this process and it’s just as tough to earn fees from producing outcomes like this, since nobody has an incentive to pay the fees.

No, the people who bought Hostess bought it in order to run it, not break it up. Their record shows they usually succeed in doing that. They’re liquidating Hostess now because they failed this time and there’s no point in throwing good money after bad by failing to play their hole card. That card is the fact that Hostess’s 30+ brands still have considerable market value. In fact, their individual market values – outside the company and freed from the dead weight of union presence – probably exceed their collective value inside Hostess.

The link between private-equity and the Democrat Party is eminently logical. It is economic, not political; that is, it has no necessary connection with the political sympathies of the vulture capitalists involved. Takeover targets are failing companies that have the potential to succeed. Why does a potentially successful company fail? Answer: it is being dragged down by unions, just as Hostess was. How to overcome this roadblock? Answer: persuade the unions to cease and desist from their uneconomic practices for their own good as well as the good of the shareholders. The best people to do this are not card-carrying Republican Party members or Ayn Rand sympathizers. They are fellow Democrats, who can at least gain the ear of the union bosses and perhaps retain a shred of credibility with the rank and file. And look what happened here – Hostess’s managers succeeded in keeping the company going for over a decade and persuaded 11 of the 12 unions to sign off on their latest resuscitation plan.

So much for the standard left-wing boilerplate view of the Hostess affair. Alas, the view from the right wing is not much more cogent.

The Right-Wing Reaction

Somewhat surprisingly, the right-wing view has gained considerable momentum even in mainstream media. The bakers’ union suffers from false consciousness, say the mavens of talk radio. They stubbornly cling to their high union wages and benefits at the cost of their own jobs – and the 18,500 other jobs at Hostess in the bargain! How selfish can you get? Just one more case of “union bloody-mindedness…at work.”

Wall Street Journal columnist Holman Jenkins (11/21/2012) provides a refreshing antidote to the stereotypical thinking of both right and left. He reveals that the Hostess story is a tale of two unions, not just one. It is the Teamsters who are the stereotypical hard-liners, insisting on featherbedding work rules that have driven Hostess’s product distribution costs into the stratosphere. The bakers (the Bakery, Confectionery, Tobacco Workers and Grain Millers International Union) have made repeated concessions, to the point where production costs hardly exceed industry norms.

From the bakers’ standpoint, they are being asked to make even more concessions now in order to protect the current status of the Teamsters, whose work rules are still hamstringing the company. No matter what you may have heard, solidarity is not “forever” – that is merely a song lyric.

As organized under laws mostly passed in the 1920s and 1930s and reinforced by labor regulations handed down for decades by the federal government, a labor union is a cartel. It is analogous to cartels set up by businessmen who sell products and services. Cartels strive to emulate the outcome of a monopoly, which is to thwart the competitive process and attain the same collective profit-maximizing outcome theoretically open to a pure monopoly seller.

In practice, a pure monopolist cannot even approach that theoretical outcome without the aid of government in restricting competition. That is even truer of cartels and much truer of labor unions. That is why the federal government has conferred coercive powers upon unions. Unions operate to raise wages above the level that would otherwise prevail in a free labor market. The only ways to do that are to artificially hold wages high or to artificially restrict the supply of labor to the market. Unions do one or the other, depending on circumstances.

Both of these practices reduce employment in the unionized sector. This drives workers into unemployment and/or into non-unionized sectors, thereby driving down wages there. Union workers have no particular incentive to sacrifice on behalf of other union workers, who are, after all, merely workers like the non-union workers whose interests the union cause has already harmed.
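
The employment effect can be seen in a minimal supply-and-demand sketch; the linear schedules below are hypothetical, not estimates of any actual labor market:

# Hypothetical labor market: demand L_d = 100 - 2w, supply L_s = 4w - 20
# (L in thousands of workers, w in dollars per hour).
def labor_demand(w): return 100 - 2 * w
def labor_supply(w): return 4 * w - 20

w_market = 20.0                    # clears the market: both sides equal 60
w_union = 25.0                     # union wage held above the market level
employed = labor_demand(w_union)   # employers hire along the demand curve: 50

print(f"employment falls from {labor_demand(w_market):.0f} to {employed:.0f} thousand")
print(f"{labor_demand(w_market) - employed:.0f} thousand workers spill into non-union sectors")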

Jenkins points out that the bakers had a strong case for not agreeing to Hostess’s offer. Why not “hold back further concessions, let the company liquidate, and try their luck with a new owner or owners who might materialize for its bakery operations. These new owners presumably would be in a position to invest cash in marketing and promotion… They would benefit from the deluge in free media that has befallen the Twinkies brand this week. All the more so given that Hostess plans to close or sell some of the bakery plants anyway, that unemployment benefits are generous, that bakery jobs have become crummy-paying thanks to previous givebacks, that the government-run Pension Benefit Guaranty Corp. will be assuming the Hostess pensions in any case.”

So it seems that the bakers are not dumbbells after all. They are pursuing their own interests rationally given the cards they were dealt under a system they didn’t design. The right wing is repeating a frequent mistake of blaming victims of progressive socialism for acting in their own behalf. The right should instead expend all its energies working to change the system.

Jenkins observes that “one could always ask about the wisdom of a labor-law structure that causes companies like Hostess to drag on for decades without adapting to their marketplaces.” Indeed. This is a structural consequence of the substitution of politics for economics.

The Vocabulary of Political Theater

The medium of political theater employs a vocabulary of perception rather than one of real meaning. Words are assigned a political meaning unrelated to their substantive economic impact. One such word is “corporation.”

A corporation is a set of legal arrangements that assign claims to various assets. But the political meaning of the word “corporation” describes a personified entity that is “large,” “wealthy,” “powerful,” “insensitive,” and “evil” when remotely viewed, or “paternalistic,” “secure” and (still) “wealthy” when viewed up close – say, from the perspective of an employee. All these traits are those of individual human beings; the political view of a corporation equates it to a person.

When a corporation goes out of business, it closes – often declaring bankruptcy – and its assets are liquidated. When a person goes out of business, he or she dies. A person cannot undergo “asset liquidation” even though a person’s assets can be liquidated. Thus, a person is not a corporation. But because politics views a corporation as a person, bankruptcy is viewed as akin to human death, even though it is not.

Bankruptcy is a process of evaluating the business to determine whether, and in what form, the business should go forward. That evaluation will gauge whether the business’s assets are worth more in combination or singly. This determination is a vital social process because your welfare and mine suffer if business assets are misused. True, we may not be owners of the business, but the real beneficiaries of a business are consumers, who benefit from what the business produces. That, after all, is the whole purpose of businesses – to produce goods and services for consumption.

When companies like Hostess die lingering deaths of a thousand union and bureaucratic cuts, all of us experience imperceptible losses. We pay more for government regulatory and bureaucratic functions. We pay more for the goods and services those businesses produce and we get less. Perhaps we are able to buy less in the coin of a depreciated currency.

Bankruptcy is in no sense analogous to human death. If an analogy is absolutely necessary, the “burnoff” of dead, accumulated brush that occurs in nature would be a good one. This pruning away of dead, useless stuff enables the remaining ecosystem to thrive.

One of the most destructive of all political terms is “economics,” which in political usage means “macroeconomics.” Currently, there really is no such coherent economic theory. Even less is there a set of valid, generally recognized policy prescriptions that could be grouped under that heading. The only valid meaning for the term “economics” would be described by the sub-head “microeconomics,” with the proviso that this would include the specialty of monetary theory and the study of business-cycle dynamics. One of the two sub-disciplines of microeconomics is the theory of the firm. That logic is of more help in understanding Hostess than anything provided by the Council of Economic Advisors.

A politico-economic term that has no meaningful economic referent is “job creation.” The purpose of economics is not to create jobs but to create value. Human labor is the key means of doing that, but it is the value, not the labor itself, that is the desired end product. Totalitarian regimes are wonderful job creators; there was no unemployment in ancient Egypt or in Soviet Russia or Communist China under Mao. The trick is not putting people to work; it is getting the most out of the work they do. That is what the “labor-law structure” referred to by Holman Jenkins completely overlooks.

Whither Twinkies, et al?

A few observers are sheepishly acknowledging that maybe we haven’t seen the last of Twinkies after all. The current owners of Hostess intend to sell the rights to produce all those branded products, which portends a bright future for any brand not encumbered by the same union rules that felled Hostess. And it may well mean a brighter future for many of those in the bakers’ union as well.

DRI-390 for week of 9-2-12: Cause and Effect

An Access Advertising EconBrief:


Cause and Effect

Early man began trying to link cause and effect by simple observation of the world around him. Combining this with imagination produced the first scientific theories. Over time, man began to record his observations, hoping to weed out chance relationships from the truly systematic ones.

Gradually, science took on a formal character. Logic became a formal study. Early thinkers like Euclid and Pythagoras developed the basic structure of mathematics. Eventually, Newton and Leibniz paved the way for modern science by inventing the calculus. When the foundations of probability and statistics were laid in the late 19th and early 20th centuries, the way was clear for scientists to develop and test theories rigorously.

Although the progress made by the natural sciences has lifted man from the primordial ooze into a life of relative ease and comfort, the social sciences are still finding their proper place in the world. Economics, queen of the social sciences, has yielded the most benefit.

Unfortunately, it has also wreaked the most havoc. Formal economic theory is only a few centuries old – by historical standards, it is in its infancy. The problems of the social sciences tend to be much more complex than those of the natural sciences because human reason and motivation complicate the analytical process exponentially. It is hard enough to learn how plants fight disease, but at least scientists do not have to grapple with any plant motivations more complex than simple survival. Social scientists have tended to assume that sophisticated mathematical and statistical methods will succeed just as well for them as they have for natural scientists, but it seems increasingly clear that this is not so.

To make matters worse, the general public has an upside-down view of this scientific dichotomy, believing that social-scientific problems are easier than natural-scientific ones. Because the facts of economics are the stuff of everyday experience, we tend to assume that casual observation and common sense are sufficient to allow deduction of cause and effect relationships in economics. As a result, Mark Twain’s Theorem applies with powerful force to public understanding of economics.

Twain’s Theorem is “It ain’t what you don’t know that hurts you, it’s the things you know that just ain’t true.” Popular economic theories of cause and effect, based on casual observation and superficial correlation, have embedded many myths and fallacies in the public consciousness. This leaves economists with much work to do in undoing them.

Urban Overcrowding and Poverty

Large populations of urban poor became commonplace during the Industrial Revolution. The sight of thousands of people crammed together in tight quarters, living cramped lives marred by comparatively poor nutrition and hygiene, inspired literary outrage from authors such as Dickens and reformist zeal from do-gooders like Jane Addams. Although average real incomes have zoomed upward in recent centuries, the phenomenon of urban overcrowding remains in cities like Mumbai and Calcutta, India. Today, the mantle of reformism has fallen upon filmmakers.

There has long been a presumption in the general public that the poverty experienced by the masses is due to their excessive numbers and close proximity – that is, to overcrowding. Indeed, this is generally viewed as so obvious that the causal connection is allowed to speak for itself. The tacit argument runs as follows: The more people there are living close to each other, the greater is the competition for the fixed amount of resources in that limited geographic area. As overcrowding increases, resource availability per capita falls. This implies that per-capita production of goods and services will also fall. The limited amount of resources will keep resource prices high, insuring that prices of consumer goods will also be high, causing real incomes of residents to be low.

The urban overcrowding hypothesis has great superficial appeal, particularly when it is given an environmentalist slant. The overcrowding also strains the natural resources of the geographic area by hurting the quality of air, water and land. An inexorable tendency for population to increase creates a doomsday scenario in which both economic growth and quality of life swoon into a downward spiral. Only drastic measures such as forced population control, draconian environmental regulations and socialist distribution programs can save human life or, indeed, the planet itself from destruction.

Urban Overcrowding Explained

Despite its visceral appeal, the urban overcrowding hypothesis is wrong. One of its obvious flaws is the lemming-like behavioral pattern it imputes to hundreds of millions of urban residents. Why should they be attracted and held by a way of life that operates so severely to their disadvantage? The answer is that they aren’t.

Contrary to the impression created by overcrowding theorists, only about 5% of the earth’s land area is occupied by human habitation. While some of the remainder is clearly uninhabitable, most of the rest is up for grabs. This puts a crimp in the environmental doomsday hypothesis, since resource depletion can hardly be more than a local matter under these conditions. But the urban poor do not escape their poverty by fleeing the city, because their advantage lies in remaining there. Urban overcrowding benefits poor residents in numerous ways.

The urban poor can live much more cheaply in their overcrowded cities than in suburbia or in rural areas. In the city, they have access to mass transportation – buses, subways, taxis – and forms of individual transportation like bicycling and walking that would be uneconomical, hence unavailable, on the outside. Their access to work, entertainment, recreation and medical care is also better. Bear in mind as well that close proximity has offsetting benefits – human companionship can and does serve as a substitute for material wealth.

A corollary to the choice made by the urban poor is the fact that middle-class, upper-class and rich families tend to spend real income on larger living quarters located outside the city. Their larger incomes offset the higher transport and transactions costs of suburban and rural living.

Cause and Effect Reversed

Although the correct reasoning is quite straightforward, stating it in cause-and-effect terms startles most people: Poverty causes overcrowding, not the other way around. Modern science and capitalist markets make it possible for many people to live who would otherwise never be born or would die early in life. These people make the best of their circumstances by living in conditions that those better situated find inexplicable and repellent. Moreover, the terms “poverty” and “overcrowding” are relative, not scientific absolutes. Poverty in the United States implies a level of real income many times greater than the same status does in India.

These facts have not prevented moralists and reformers from decrying the state of poverty and overcrowding in American cities throughout the 19th and 20th centuries, up to the present day.

The “Decline” of Wages and Income

Movements in income and their effects on American households have given rise to other popular theories of cause and effect. The financial crisis of 2008 and ensuing recession have produced persistent and lingering after-effects on income and employment. These have tilled the ground for hypotheses of decline for the U.S. economy and way of life. Two recent books have repeated longtime claims about the flatness of average U.S. wages over the past 30 years. A related claim is that average household income has also declined over the same time period.

These claims result not so much from casual observation as from casualness in the compilation and interpretation of data. Although economists are not normally themselves the guilty parties, they bear a measure of guilt for these popular errors.

In their anxiety to make use of sophisticated scientific tools, economists have fallen over themselves providing data to use the tools on. But the theorizing process only works as long as the man handling the data doesn’t manhandle the data. The combination of haste, carelessness, vested political and academic interest and personal prejudice adds up to a lot of ways for economic theory to go wrong. And when the lay public starts to monkey with the data, the result is chaos.

Take the case of wages. During World War II, the federal government chose to finance the war by borrowing and printing money. In the time-honored manner of governments throughout history, it slapped on controls intended to prevent the expenditure of printed and borrowed money from bidding up wages and prices to the stratosphere. But there were no controls on in-kind benefits like employer-paid health insurance. In addition, these benefits were not taxable by the IRS – unlike earned wage income.

The wartime trend in favor of substituting fringe benefits for wage increases survived the war and accelerated beginning in the 1970s. These benefits constitute real income because people could otherwise have taken wage income and used it to buy health insurance. If employers paid for (say) employees’ fruit and vegetable purchases, this would constitute real income to employees for analogous reasons.

Clearly, people who study increases in the average or general level of wages usually do it in order to gauge the purchasing power of wage recipients. Equally clearly, it would be grossly misleading to omit fringe benefits from this calculation, since they enhance employee purchasing power as well. But all too often this is exactly the mistake made by (non-professional) students of wage trends.

The cause-and-effect theory of the non-professional students is straightforward enough. People work and earn wages in order to get real income with which to purchase goods and services. When their wages stagnate, their real income must do likewise. With rising prices, their purchasing power will fall.

This overlooks the fact that employment benefits also increase real income and purchasing power. And people can choose to work for employers who provide more and better fringe benefits. Thus, a marked trend favoring substitution of non-taxable benefits for taxable wages suggests that people are better served by the former than the latter.
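
A back-of-the-envelope calculation shows why flat wages need not mean flat purchasing power. The figures in this sketch are hypothetical, chosen only to illustrate the accounting:

# Total compensation = wages + employer-paid benefits (inflation-adjusted $/hr).
# Hypothetical: measured wages nearly flat, benefits growing briskly.
wages_then, benefits_then = 20.00, 3.00
wages_now, benefits_now = 20.50, 9.50

wage_growth = (wages_now / wages_then - 1) * 100
comp_growth = ((wages_now + benefits_now) / (wages_then + benefits_then) - 1) * 100

print(f"wages up {wage_growth:.1f}%, total compensation up {comp_growth:.1f}%")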

Household Income in an Era of Changing Household Composition

Similar problems await the unwary student of trends in income. One popular data category is household income. At first glance, this choice seems eminently logical. The family is the predominant unit of American cultural and economic organization. Decisions about expenditure and saving are not made by mothers, fathers and children in isolation but rather in a household context.

So far, so good. The problems start when the neophyte starts making simple comparisons between household income at different points in time. Since population tends to increase over time, it seems logical to use average household income as the basis for comparison. But a hidden trap lies in store.

Throughout the Western industrialized world, birth rates have fallen in recent decades. The number of children in the average household has fallen. Divorce has increased, a factor tending to produce more and smaller households.

When the “average” household gets smaller, it will tend to have a lower income – even when business and personal productivity, Gross Domestic Product and other indicators of production and real income have not fallen. And this is exactly what has happened in the U.S. over the last 30 years. While observers have complained about “falling” household income, real personal consumption has increased some 74% over the last 30 years, according to economist Alan Reynolds. This strongly suggests that the decline in household income is artificial, a statistical artifact that does not reflect the true economic reality that the writers and commentators are trying to capture.
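
The artifact is pure arithmetic: average household income equals income per person multiplied by persons per household. A minimal sketch with hypothetical numbers:

# Hypothetical: real income per person rises 12%, yet households shrink,
# so measured income per household falls anyway.
income_per_person_then, household_size_then = 25_000, 2.7
income_per_person_now, household_size_now = 28_000, 2.3

print(f"household income then: {income_per_person_then * household_size_then:,.0f}")  # 67,500
print(f"household income now:  {income_per_person_now * household_size_now:,.0f}")    # 64,400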

Cause and Effect Complicated

Theorists of wage/income decline told a story of cause and effect between wages and living standards. Hourly workers work for wages in order to be able to buy goods and services. If wages don’t rise, their consumption can’t rise. Wage increases cause rising consumption; wage decreases cause falling consumption; stagnant wages hold consumption constant.

But this story ran afoul of a confounding third factor – income. In any scientific endeavor, this plagues the linking of cause and effect. Not only is there the problem of deciding which of two correlated variables causes changes in the other, there is also the possibility that neither is the causal agent. A third variable may be causing changes in one or both of the first two.

Here the confounding variable is income. It is really income that facilitates consumption. Wages are one component of income, but benefits are another. A wage decline that is more than offset by an increase in benefits is compatible with an increase in consumption.

A parallel case involves theories of poverty. Studies claiming that the percentage of people living below the poverty line has been largely unaffected by increases in material wealth have generally failed to take into account all sources of income for poverty-level people. The steady rise in the number and size of government transfer programs designed to benefit poverty-level recipients has nurtured a trend in which these types of benefits have simply replaced work-related earnings as income sources. As with employer-paid benefits, this has been the choice of recipients rather than a factor beyond their control.

The Theory of Affordable Housing

A final popular theory of cause and effect is the theory of affordable housing. This theory is the implicit backing behind virtually every federal-government housing enactment since the 1930s. The theory says that, left to the whims and vagaries of a free market, the poorest people in society would die of exposure due to lack of housing or, at best, suffer unconscionably in intolerably poor housing. In order to insure an adequate supply of affordable housing, it is necessary for government to subsidize both the production of housing and its consumption. The government-produced housing is sold to the poor at artificially low prices. Government subsidy programs enable the poor to purchase homes or rent housing at affordable rates.

The Federal Housing Administration (FHA), Federal National Mortgage Association (Fannie Mae), Government National Mortgage Association (Ginnie Mae) and Federal Home Loan Mortgage Corporation (Freddie Mac) are all programs founded on the foregoing logic. The housing bubble preceding the recent financial crisis and Great Recession was fostered by men like James Johnson of Fannie Mae beginning in the mid-1990s; it was started, nurtured and stoked to fever pitch on the basis of this reasoning.

The truly amazing thing about this popular theory is its utter lack of corroboration. All of the available evidence suggests that government intervention in the housing market makes housing less rather than more affordable.

In 1901, housing costs comprised 23% of median income. In 2003, after a myriad of government programs intended to make housing more affordable, the figure was 33%. One of the most infamous of the World War II-era regulatory survivors is rent control, the program ostensibly designed to keep rental housing affordable by putting a legal ceiling on the height to which housing rents can rise. Swedish economist Assar Lindbeck delivered the consensus verdict of the economics profession on rent control by declaring that the best way to destroy a city was by bombing it and the second-best way was by imposing rent control within its boundaries. The artificial shortage of rental housing created by the controls not only reduces the quantity of rental housing available to consumers, it reduces housing quality by giving landlords an incentive to reduce the quality of their housing stock in compensation for not being able to raise rent. It kills the incentive to produce more rental housing but increases the incentive to convert rental housing stock to condominiums, whose prices are not controlled.
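
The shortage mechanism is the textbook price-ceiling result. A minimal sketch with hypothetical linear schedules:

# Hypothetical rental market: demand Q_d = 100 - P, supply Q_s = P - 20
# (Q in thousands of units, P = monthly rent in tens of dollars).
def rental_demand(p): return 100 - p
def rental_supply(p): return p - 20

p_market = 60.0                # clears the market: both sides equal 40
p_ceiling = 50.0               # legal rent held below the market level
shortage = rental_demand(p_ceiling) - rental_supply(p_ceiling)

print(f"quantity supplied falls to {rental_supply(p_ceiling):.0f} thousand units")
print(f"shortage of {shortage:.0f} thousand units at the controlled rent")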

Affordable housing activists emphasize the fact that developers do not normally construct housing aimed directly at the poorest buyers. With no new construction of private affordable housing, they reason, it must be true that government production will be necessary to prevent the supply of housing available to poor people from drying up altogether as houses wear out but are not replaced.

Housing expert William Tucker has pointed out the fallacy inherent in this view. As the housing stock gradually ages, housing originally constructed for the rich or well-off eventually declines sufficiently in quality and value to be within the reach of the poor. This process is called “filtering.” Thus, it is not primarily new construction that provides housing stock for the poor, but rather secondary, used housing. As Tucker points out, this gives the lie to the necessity of government housing-construction programs.

A onetime counterargument to the filtering principle is that the poor deserve bright, shiny, new housing of their own. The experience of large-scale federal housing projects built in the 1950s and 60s has silenced proponents of this view. In cases like Pruitt-Igoe in St. Louis and Wayne Miner in Kansas City, tenants and vandals methodically destroyed the quality of the housing. In effect, government ownership is no ownership; the government lacks the private incentive to protect the value of the property.

In addition to direct government interference in housing, indirect interference via Federal Reserve pegging of artificially low interest rates must also be considered. Respected mainstream economists like John Taylor have now conceded that these actions constitute inefficient and counterproductive housing subsidies that were instrumental in inflating the housing bubble. Artificially subsidizing interest rates may make a housing purchase seem superficially more attractive, but it does not create the future real income necessary to amortize a mortgage nor does it make buying a house more economically efficient than renting in true economic terms.
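
The amortization point can be illustrated with the standard fixed-rate payment formula; the principal and rates below are hypothetical:

# Standard fixed-rate mortgage payment. A subsidized interest rate lowers the
# monthly payment, but it creates none of the income needed to make the payments.
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12
    n = years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

print(round(monthly_payment(200_000, 0.06, 30), 2))  # market rate: ~1199.10
print(round(monthly_payment(200_000, 0.04, 30), 2))  # subsidized rate: ~954.83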

Another form of indirect interference with housing markets is land-use restrictions. Starting in the 1970s, land-use restrictions were part of an anti-growth strategy that greatly reduced the volume of new housing construction in many U.S. markets. The result of a constriction in the supply of new housing is an increase in price. Moreover, the effect of the restrictions was to make the supply of housing less responsive, or less “elastic” in economic language. This means that the quantity response to any increase in demand is more restrained, resulting in a higher price for a given increase in demand. Economist Thomas Sowell showed that almost all of the regional markets that saw sharp upward spikes in housing prices during the housing bubble also featured these land-use restrictions.
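
The elasticity point can be quantified with a standard log-linear approximation, sketched below with hypothetical elasticities:

# With log-linear demand and supply, a demand shift of s percent raises the
# price by roughly s / (e_supply + e_demand) percent.
def price_rise(demand_shift_pct, e_supply, e_demand=1.0):
    return demand_shift_pct / (e_supply + e_demand)

# The same 10% demand increase under elastic vs. restricted supply:
print(f"{price_rise(10, 2.0):.1f}% price rise with elastic supply")          # ~3.3%
print(f"{price_rise(10, 0.2):.1f}% price rise under land-use restrictions")  # ~8.3%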

Housing markets were among the most stable and reliable throughout U.S. and world history – until governments insisted on making housing affordable. Then we began to experience housing bubbles and ensuing recessions. The case for affordable housing policy is nonexistent.

Cause and Effect Turned Upside Down

The affordable-housing crowd asserts a direct causal relationship between government activity and housing affordability. Reality reveals a relationship that is not only inverse but perverse – the opposite of that intended. The more government intervenes, the less affordable housing becomes. Perversity is an uncommon result in a market economy characterized by voluntary action and rational choice, since people will usually act to bring about that which they intend. But it is typical of government, where people lack the information, the motivation and the incentives to bring about the ends that private individuals would consider optimal.

Cause and Effect in the Social Sciences

The foregoing examples demonstrate that cause and effect in the social sciences is not a simple matter of casual observation and common sense. Instead it is a complex matter that requires the application of training, logic and rigorous study. Economists do it better than non-specialists but are still prey to occupational and temperamental blind spots.