DRI-192 for week of 6-7-15: Adding Entrepreneurship to Economics Makes ‘Disruptive’ Innovations Coordinative

An Access Advertising EconBrief:

Adding Entrepreneurship to Economics Makes ‘Disruptive’ Innovations Coordinative

Journalism pretends to be an objective profession. In reality, it is a subjective business. The subjective component derives from the normal limitations nature places on human perception; journalists may aspire to Olympian standards of accuracy and detachment, but they labor under the same biases as everybody else. The need to make a profit causes journalistic enterprises to cater to intellectual fads and fashions just as haute couture does when selling clothes.

The trendy business buzzword these days is “disruptive.” Ever since the Internet began revolutionizing life on the planet, technology has been occupying a bigger part of our lives. Somebody started saying “disruptive” to define new businesses that seemed to usher in noticeable changes in the status quo. When it comes to vocabulary, journalists imitate each other like parrots and chatter like magpies. Now slick magazines, websites and blogs are crawling with articles like “The 10 Most Disruptive Technologies/50 Most Disruptive Firms,” “How to Identify the Next Big Disruptive Technology” and “Which Sector Needs Disrupting the Most?”

It isn’t hard to identify disruptive firms; just picture the firms that have garnered the biggest and most recurring headlines – Apple, Amazon, Uber, Lyft, Airbnb, SpaceX and such. Our job here is to ascertain whether a systematic logic unites the success of these firms and whether the term “disruptive” is economically descriptive – or not. Business writers often associate disruptive technologies with economist Joseph Schumpeter, whose work we examined in last week’s EconBrief.

This association is understandable, but unfortunate. Schumpeter’s linking of entrepreneurial progress and capitalism with technological innovation is not the general case, but only a special case. That is, it is only a small part of the reason why capitalism has been so successful. Schumpeter’s view of the forest was obscured by a few redwoods, figuratively speaking. Even worse, the term “disruptive” – like Schumpeter’s famous phrase “creative destruction” – conveys an utterly misleading impression about the impact of entrepreneurial progress and technological innovation under capitalism.

Journalists and business analysts were right in looking to economics for an understanding of technological innovation. And, as we saw last week, they certainly didn’t get much help from traditional economic theory. But they picked the wrong maverick economist to consult.

A Brief Review 

Our previous EconBrief identified a serious lacuna in economic theory. No, make that multiple lacunae – certain simplifying assumptions that have alienated academic economics from reality. The pervasive use of high-level mathematics and statistical testing encouraged these assumptions because they kept economic theory tractable. Without them, economic models would not have been spare and abstract enough for mathematical and statistical purposes. In effect, the economics profession has chosen theoretical models useful for its own professional advancement but well-nigh useless for the practical benefit of the general public.

Evidence of this is supplied by the traditional indifference to entrepreneurship and innovation shown by mainstream theorists and textbooks. For contrast, we analyzed two striking exceptions to this pattern: the ideas of Joseph Schumpeter and F. A. Hayek. Schumpeter was contemptuous of the mainstream obsession with perfectly competitive equilibrium. He believed that economic development under capitalism was accomplished by a process of “creative destruction.” This did not involve small, incremental increases in output and decreases in price by perfectly competitive firms, each one of which had insignificant shares of its market. Instead, Schumpeter envisioned competition as a life-and-death struggle between large monopoly firms, each producing new products that replaced existing goods and improved consumer welfare by leaps and bounds. “Creative destruction” was a hugely disruptive process, a wholesale overturning of the status quo.

Hayek criticized mainstream theory just as strongly, but from a different angle. Hayek maintained that mainstream, textbook economic theory started out by assuming the things it should be explaining. Where did consumers and producers get the “perfect information” that traditional theory assumed was “given” to them? In effect, Hayek grumbled, it was “given” to them by the economists in their textbooks, not actually given in reality. He had the same complaint about product quality, an issue traditional theory assumed away by treating goods as homogeneous in nature. The trouble is that the vast quantity of information needed by consumers and producers isn’t available in one place; it is dispersed in fragmentary form inside billions of human brains. Only the price system, operating via a functioning free-market system, can collate and transmit this information to all market participants.

Hayek saw the true nature of equilibrium differently than did mainstream economists. The latter took their cue from mathematical economists such as 19th-century pioneer Leon Walras, who formulated equations for supply and demand curves and solved them algebraically to derive an equilibrium at which the quantity demanded and quantity supplied were equal. To Hayek, equilibrium meant that the plans human beings make in the course of living daily life turn out to be compatible, not chaotically inconsistent. That is the true Economic Problem – how to collect and transmit the dispersed information necessary to market functioning among billions of people in order to allow their plans to be mutually compatible.

Entrepreneurship – the Engine of Capitalism

Hayek’s work opened the door to an understanding of capitalism. We had long known that capitalism worked and socialism failed. But we could not supply a nuts-and-bolts, nitty-gritty explanation for why and how this was so. The general public attaches little importance to theory, but theory’s absence makes itself felt. The lack of a thoroughgoing theory of capitalist superiority has allowed a myth of socialist superiority to survive and even thrive despite the utter failure of socialism to prosper in practice. A disciple of Hayek and Hayek’s mentor, Ludwig von Mises, utilized the intellectual capital created by his teachers to complete their work.

Israel Kirzner was taught at New York University by Ludwig von Mises. His doctoral dissertation was published as The Economic Point of View, a study of the nature and scope of economics. In 1973, Kirzner synthesized the ideas of Mises and Hayek in a book called Competition and Entrepreneurship. For the first time, we had an explicit justification and explanation of the vital role played by the entrepreneur in economic life.

Heretofore, the entrepreneur had been the mystery figure of economic theory, akin to the Abominable Snowman or Bigfoot. To some, he was simply the organizer of production. To others, he was a salesman or promoter. To Schumpeter, he was an innovator who created new products using the lever of technology. Israel Kirzner took a completely different tack.

The keynote in Kirzner’s view of the entrepreneur is alertness to opportunity within a market framework. As a first approximation, the entrepreneur’s attention is fixed upon the price system. He or she is constantly searching for “value discrepancies” – that is, differences between the prices of inputs and the price of output. For example, he may observe that a, b and c can combine in production to produce D. The price of amounts of a, b and c sufficient to produce one unit of D is $5, while the entrepreneur sees (or envisions) that D will sell for $10. This act of intellectual visualization itself is what constitutes entrepreneurship in Israel Kirzner’s theory; acting upon the observation requires productive activity.
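
The arithmetic behind the value discrepancy above can be sketched in a few lines. The individual prices assigned to a, b and c here are hypothetical, chosen only to reproduce the $5 input cost and $10 output price in the example:

```python
# Kirzner's "value discrepancy": the gap between the cost of the inputs
# needed for one unit of output D and the price D is expected to fetch.
# The per-input prices are hypothetical; only their $5 total and the
# $10 output price come from the example in the text.
input_prices = {"a": 2.00, "b": 1.50, "c": 1.50}
expected_output_price = 10.00

input_cost = sum(input_prices.values())                      # $5 per unit of D
entrepreneurial_profit = expected_output_price - input_cost  # $5 per unit of D

print(f"input cost: ${input_cost:.2f}, profit: ${entrepreneurial_profit:.2f}")
```

The computation is trivial by design: in Kirzner’s theory the entrepreneurial act is not the subtraction but the alertness that notices the $10 figure exists at all.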

There is a family resemblance between Kirzner’s concept of entrepreneurship and what is often termed “arbitrage.” But the two are far from identical. Arbitrage is loosely defined as buying and selling in different markets to profit from price differentials. Often, the same good is purchased and sold – simultaneously if possible – to reduce or even eliminate any risk of financial loss. Kirznerian entrepreneurship is far more comprehensive. Different goods may be involved, purchases need not be simultaneous or even close to it; indeed, markets for some of the goods or inputs involved may not even exist at the point of visualization! The entrepreneur may be contemplating the introduction of an entirely new good, a la Schumpeter. At the other extreme, the entrepreneur may be hoping to profit from the smallest price discrepancy in the most homogeneous good, as banks or traders do when they arbitrage away tiny price differences in stocks, bonds or foreign currencies in different exchanges.

In fact, the entrepreneur need not even be a producer or a seller at all. Consumers can and do engage in entrepreneurial activity all the time. Consumers clip and redeem coupons. They scan newspapers and online ads for sales and comparative prices. This activity is analytically indistinguishable from the activity of producers, Kirzner claims, because in both cases there is a net increase in value derived by consumers – and consumption is the end-in-view behind all economic activity.

The Consumer as Entrepreneur – A Case Study

In 1965, Samuel Rubin and a few friends were dismayed by the vanishing interest in, and availability of, silent movies. They held a small film festival for silent-movie enthusiasts and created the Society for Cinephiles. This gathering, which became known as Cinecon, was the first classic-movie film festival. Fifty years later, Cinecon remains the oldest and most respected of this now-worldwide genre. Three years later, Steven Haynes, John Baker and John Stingley hosted a small gathering for classic-movie lovers in Columbus, Ohio. This year, Mr. Haynes died after planning the 47th meeting of the Cinevent festival, which annually attracts a few hundred dedicated lovers of silent and studio-system-era movies. In 1980, classic-movie fanatic Phil Serling began the Cinefest gathering in Syracuse, New York with a few close friends. 2015 marked the final meeting of this festival, which attracted attendees from around the world. Today the San Francisco Silent Film Festival is a headline-making event featuring the latest newly found and restored rarities.

This genre of classic-movie worship was begun by consumers, not by profit-motivated producers. But these consumers nevertheless were alert to opportunity – the discrepancy in value between the movies currently available for viewing and those of the past. Prior to the digital age, older movies (particularly silent movies) were seldom screened and hard to view. Moreover, they were disintegrating rapidly and dangerous to maintain because of the fire-danger posed by nitrate film stock. Yet thanks to the efforts of these pioneering consumers, today we have multiple television channels exclusively, primarily or secondarily devoted to showing classic films, including silent movies. Turner Classic Movies (TCM) leads the way, while the Fox Channel is close behind. Over twenty thousand people attend the Turner Classic Movies Festival in Hollywood every year and TCM’s annual cruise and other promotions attract thousands more. Film preservation is a major endeavor, with new discoveries of heretofore “lost” movies occurring every year. Classic movies are big business, thanks to the dispersed entrepreneurial efforts of a scattered but determined few decades ago. The small net gains in value experienced by the silent-movie lovers in 1965 multiplied millions-fold into the consumption gains of millions worldwide today on television and in person.

Schumpeter Vs. Hayek/Kirzner: Away from Equilibrium or Towards It?

Contemporary business analysts take an ambivalent attitude toward innovation and entrepreneurship. They pay lip service to its benefits – new products and services, the gains reaped by consumers. But they suggest in no uncertain terms that these benefits carry a terrible price. Terms like “creative destruction,” with heavy emphasis placed on the second word, directly state that there is a tradeoff between consumer gains and destructive loss suffered by workers, owners of businesses driven into insolvency and even members of the general public who lose non-human resources that are somehow vaporized by the awesome power of technology. Instead of stressing the labor-saving properties of technology, commentators are more apt to refer to labor-killing innovations. No wonder, then, that journalists have turned to Schumpeter, whose apocalyptic view of capitalism was that its superior productivity would ultimately prove its undoing. With friends like Schumpeter, capitalism has grown ever more defenseless against its enemies.

Schumpeter believed that entrepreneurial innovation was both creative and destructive – creative because its products were new, destructive because they completely supplanted the competing products they replaced, driving rivals from the field. In the technical sense, then, Schumpeter saw entrepreneurs as a disequilibrating force, spearheading a movement away from one stable equilibrium position to a different one. Schumpeter himself recognized that, in practice and unlike the blackboard transitions that academic economists effect in the blink of an eye, these movements would often be wrenching. But the analysis of Kirzner, using the framework built by Hayek and Mises, leads to different conclusions.

Kirzner acknowledged the validity of Schumpeter’s form of entrepreneurship. But he recognized that it was only the exceptional case. The garden variety, everyday forms of entrepreneurship – practiced by consumers as well as producers – produce movements toward equilibrium, not away from it. This is true for two reasons. First, entrepreneurship does not lead away from equilibrium because the traditional concept of equilibrium is a myth; reality changes far too quickly for actual equilibrium ever to be reached, let alone be maintained. Second, entrepreneurship leads toward equilibrium because it enables human beings to better coordinate their plans by allowing a more efficient exchange of information. Hayek objected to the traditional economic assumption of “perfect information” because he claimed that this assumed the existence of equilibrium at the outset. Kirzner’s theory of entrepreneurship tells us that the so-called “disruptive” businesses of today are pushing us closer and closer to that condition of perfect information – which means we are getting closer and closer to perfectly coordinated equilibrium. Of course, we never reach this blissful state, but capitalism keeps us steadily on the move in the right direction.

What is Google, with its search-engine technology, if not the search for the economist’s informational Shangri-La of perfect information? Wikipedia, a user-created encyclopedia, is the archetype of Hayek’s model of a world in which information exists in dispersed, fragmentary form that is unified by a voluntary, beneficial market. Facebook has become a colossus by making it easy for people to provide information about themselves to others – and in the process become a kind of worldwide clearinghouse for information of all kinds. Pinterest has narrowed this same type of focus to photos, but the key is still information. Newer technology businesses like CrowdStrike, specializing in cyber intelligence and security, and the Chinese company Tencent, with its emphasis on mobile advertising, are also informational in character.

In each of these cases, entrepreneurs were alert to the market opportunities opened by technology and signaled by the low prices ushered in by the digital age. The entrepreneurial character of some of these businesses has baffled the business establishment because it has not emulated the conventional, profit-seeking model. That is usually because the initial entrepreneurs have been consumers striving to create value for their own direct use. Only later have they realized the potential for exporting the value surplus created to the rest of the world. This looks outré to most observers but it is fully consistent with Israel Kirzner’s theory of entrepreneurship.

Another of the unrealistic simplifying assumptions deplored by Hayek was “costless” transactions, particularly entry, exit and determination of product quality. This was another case of economists assuming what they should be proving, or at least investigating; it started out by assuming equilibrium and skipped the market process necessary to produce – or, more realistically, approach – an eventual equilibrium. The technological innovations of the last two decades that weren’t informational in character were mostly directed at reducing various costs, either natural or man-made.

The Internet itself is a mammoth exercise in reducing the costs of transport and communication. Instead of calling on the telephone, we can now send an e-mail. By inventing smartphones, Apple has one-upped the Internet and desktop computers by making this communication mobile. In between these two inventions, of course, came cell phones – invented decades earlier but made practical when Moore’s Law eventually shrank them to pocket size. The shocking thing is how little economics had to say about any of these revolutionary human innovations – because traditional economic theory had long assumed zero transport and transactions costs. Why concern yourself with an innovation when your theory says there is no need for it in the first place?

The development of cell phones was held back for years by government regulation of telecommunications; regulators fought tooth and claw to prevent competition between phone companies and innovation by monopoly providers. In formal logic, the effect of government regulation is best envisioned as equivalent to the effect of a mountain range or an ocean on transportation. Alternatively, think of costs as being like taxes. Transport costs are “levied” by nature, while taxes are levied by governments. Transactions costs may be either natural or man-made. And a review of recent “disruptive” businesses shows many designed specifically to overcome either natural or man-made costs.

The entrepreneurs of Uber and Lyft observed the artificially high taxi fares created by local-government regulation in the U.S. and elsewhere in the world. They envisioned lower prices and faster response-times resulting from assembling a voluntary workforce of casual drivers and independent professionals, operating free from the stranglehold of regulation. Airbnb looked at the rental market for habitation and saw the potential for achieving the same kind of economies by enlisting owners as vendors. Jeff Bezos of Amazon envisioned consumers freed from the shackles of traveling to retail stores and a supplier with transport costs lowered by economies of scale. The result has shaken the world of retail sales to its foundations. (We should note that this combines the lowering of natural transport costs and the lowering of artificial man-made sales taxes.) Driverless cars threaten an even bigger revolution in the world of transportation by overcoming the costs of human error and accidents – if they can overcome the “tax” of government regulation to achieve liftoff. Body sensors are a revolutionary innovation triggered by the consumer desire to overcome high medical costs of maintaining good health, which are an artifact of regulation. The new website Open Bazaar dubs itself “a decentralized peer-to-peer marketplace” whose goal “is to give everyone in the world the ability to directly engage in trade with each other.” In other words, it is dedicated to reducing transactions costs to the irreducible minimum.

Once again, these cost-based innovations are entrepreneur-driven. Again, some of them were pioneered by consumers rather than by the corporate or venture-capital establishment. This is exactly what we would expect, given the theory developed by Israel Kirzner.

Monopoly or Competition? 

Schumpeter believed that true progress came from monopoly, not competition. He meant monopoly in the effective, substantial sense, not merely the formalistic sense of a transitory market hegemony enjoyed by the innovator. Events have clearly proven Schumpeter wrong. It is hard to find a case today that would correspond to Schumpeter’s archetype; instead, the initial innovator has been superseded by somebody else. Market leadership has been the result of performance, not entry barriers or patents or government pull. And the innovators themselves have often been “nobodies” rather than monopolists boasting war chests heavy with monopoly profits.

Pattern Prediction

In 1929, Ludwig von Mises predicted a “great crash” and refused to take a position in the Austrian government for fear of association with the economic downturn he anticipated. F.A. Hayek predicted a sharp recession, pursuant to the business-cycle theory he had recently developed. Later, Hayek predicted the failure of Keynesian counter-cyclical fiscal and monetary policies and the high worldwide inflation of the 1970s, coupled with the recession that followed the measures taken to break the inflation.

In general, Hayek did not believe that accurate quantitative prediction of economic events was possible. At most, he felt, economic theory could offer “pattern predictions” of a more general nature. His own statements, both in economics and political philosophy, tended to support this approach.

Israel Kirzner did not “predict” the advent of the Internet or the invention of the smartphone. But the technological revolution and the businesses spearheading it conformed to the general pattern of entrepreneurship outlined in Kirzner’s theory. In this sense, while this revolution came as a complete surprise to the mainstream economics profession, it can hardly have surprised Kirzner. The revolution was led by people behaving just as Kirzner hypothesized entrepreneurs behave.

Can the Status Quo be “Disruptive?”

Based on our analysis and Israel Kirzner’s theory of entrepreneurship, the business buzzword “disruptive” is misleading when applied to the cutting-edge firms and technologies of today. It is indeed true that these technologies overturn the status quo. But the status quo is hindering human progress and preventing attainment of true economic equilibrium; it is hurting people rather than helping them. If transport costs or transaction costs or taxes or regulation are hurting people – and helping at most only a minority vested interest in the process – then changing the status quo is the indicated action. “Stability” is not always good. After all, Stalin’s Soviet Union was stable. Fortunately, the Soviet Union later collapsed when that stability disintegrated.

As Israel Kirzner himself has always maintained, economics is all about making people better off. When this criterion is placed foremost, discarding the pure formalism of mainstream theory, it becomes clear that Mises, Hayek and Kirzner were right and Schumpeter was wrong. Entrepreneurship is equilibrating because it tends to better coordinate the plans made by individual human beings.

The process by which Nobel Prizes are awarded is highly secretive. The Nobel committee keeps its candidate “cards” close to its vest. Rumors have circulated, however, placing Israel Kirzner’s name on the short list of potential awardees. No man alive has done more than he to redeem the tarnished prestige of economics as a subject worth studying for its practical value to humanity.

DRI-184 for week of 5-31-15: Why is Economic Theory MIA Amidst Humanity’s Biggest Innovation Boom?

An Access Advertising EconBrief: 

Why is Economic Theory MIA Amidst Humanity’s Biggest Innovation Boom?

It is obvious even to casual observers that humanity has experienced an unprecedented boom in technological improvements in recent decades. Apparently even greater advances lie in store, although some contrarians insist that the best is behind us. We might expect to find economists in the thick of all this – spotting trends, lauding entrepreneurs and listing the factors responsible for their success, toting up the gains in real income, output and wealth, applauding the effects on rich and poor alike and approving the nosediving rate of world poverty.

Those expectations would be disappointed, at least by a perusal of mainstream sources. True, there are periodic ex cathedra pronouncements by stray economists on these matters. Scattered foundations, think tanks and institutes devoted to entrepreneurship pop up. The continuing popularity of the late maverick economist Joseph Schumpeter ensures that the subject of innovative entrepreneurship does not fade entirely from the public consciousness or the minds of economists. But the leading professional journals in economics, such as The American Economic Review and the Journal of Political Economy, remain preoccupied with the perennial concerns of the profession. And those do not include the topics of innovation and entrepreneurship.

Why not? What have critics of mainstream theory suggested to improve matters? Those are the subjects of this EconBrief. Next week we will see how non-traditional economic theory can improve our understanding of revolutionary technological innovation.

The Wrong Turns in Economic Theory

In the 1870s, economic theory underwent a revolution. Prior to that time, a vital element was missing from economics. Its theory of value was defective. The Classical Economists believed that the value of economic goods depended on the objective cost of the inputs that went into their production. They lacked a solid, systematic theory of consumer demand. Beginning in 1871, three different economists – working independently in England, Switzerland and Austria – developed the concept of marginal utility, thereby laying the foundation for the modern theory of consumer demand. This Marginal Revolution presaged the Laws of Supply and Demand and the famous diagram depicting equilibrium price formation via the intersection of the supply and demand curves. (The diagram was dubbed the “Marshallian Cross,” after the great English economist who popularized it, Alfred Marshall.)

One of the original three founders of marginal utility, Leon Walras, was also the modern developer of mathematical economics. Walras believed that the most concise and precise means of depicting economic relationships was by expressing them in mathematical form. He envisioned an economy as a mathematical model consisting of supply-curve equations for all goods and demand-curve equations for all consumers. He stated that such a system of equations could be solved simultaneously – that is, algebraically – to yield an equilibrium solution. That equilibrium would be one in which the quantity of each good chosen by all consumers and the quantity supplied by all producers would be identical. Eighty years later, two economists proved Walras’s conjecture correct and eventually received a Nobel Prize for their efforts.
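
Walras’s procedure can be illustrated with a single market. The linear demand and supply coefficients below are hypothetical, chosen only for illustration; the algebra is simply his step of setting quantity demanded equal to quantity supplied and solving:

```python
# One-market sketch of Walras's method: write demand and supply as
# equations, then solve them simultaneously for the equilibrium price.
# All coefficients are hypothetical, chosen only for illustration.
a, b = 100.0, 2.0   # demand:  Qd = a - b*P
c, d = 10.0, 4.0    # supply:  Qs = c + d*P

# Setting Qd = Qs:  a - b*P = c + d*P  =>  P* = (a - c) / (b + d)
p_star = (a - c) / (b + d)
q_star = a - b * p_star   # quantity demanded equals quantity supplied at P*

print(f"equilibrium price: {p_star}, equilibrium quantity: {q_star}")
```

A full Walrasian system repeats this with one equation per good and per consumer; proving that so vast a system of simultaneous equations has a solution at all was the achievement of the existence proof mentioned above.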

Walras believed that his procedure was more scientific than that followed heretofore by economists because it imitated the procedures of the natural sciences like biology, chemistry and astronomy. Despite his scientific pretensions, he also believed that economists could never hope to actually formulate a full set of general equilibrium equations in which actual coefficients were calculated for the variables. As the years went on, Walras’s mathematical approach gained steadily in popularity, but economists inherited none of his realism. Meanwhile, the canons of statistical inference developed by the English mathematical statistician Ronald Fisher also gained favor and were applied to the social science of economics as well as to the natural sciences. After World War II, economists increasingly practiced their craft by developing a mathematical model to express a theoretical hypothesis and using statistical methods to “test” its validity and quantitative boundaries.

This modus operandi seduced the economics profession en masse. In view of its disastrous effects, we might well ponder why this research agenda proved so irresistible. First, it provided a made-to-order research agenda to justify diverting attention away from instruction. Second, it provided an apparently objective standard by which to evaluate faculty for tenure and later promotion. This, in turn, allowed administrators to press graduate students and non-tenured adjunct faculty into service as cut-price teachers of the undergraduate curriculum while the faculty did research and earned money from consulting contracts. It turned economics departments of public universities into sausage factories for producing research studies for academic journals. This made politicians and bureaucrats happy because it gave them several excellent excuses for spending more money – “investing” in research, democratizing higher education by loaning money to students in an effort to create “universal” higher education. (“Universal service” and “affordability” are the two leading political excuses for redistributive spending.) The fact that this “research” was completely worthless to everybody except economists meant that the public wouldn’t poke its nose too deeply into the process – which suited everybody involved.

Indeed, the output of this research agenda turned out to be of little value even within the economics profession. The fact that a mathematical model is “precise” and “rigorous” means nothing in itself. The question is: Can the mathematical models of economists capture human action sufficiently well to be of practical use? In the mid-1990s, the noted economists Deirdre McCloskey and Steven Ziliak discovered that economists (and many other scientists) had been misusing the statistical tools of Fisher et al. for years, thereby vitiating the empirical as well as the theoretical basis of most economic research.

Mathematics and statistics work well in the natural sciences because the phenomena can be studied under controlled circumstances, which enables the staging of meaningful experiments. This permits the finding of empirical regularities or laws in the natural sciences. But human action, unlike that of inanimate objects and simple life forms, is both purposeful and full of complexity and ambiguity. Moreover, economic life is ordinarily not subject to controlled experimentation. Consequently, the practical results of the economic research model using mathematical models and statistical testing have been hugely disappointing.

The model still lingers on because it is so convenient for the people whose preferences matter most in universities; namely, government, administrators and faculty. The people badly served – undergraduate and graduate students – are the lowest forms of animal life in the university setting.

It is highly interesting to observe that this outcome is directly counter to the very logic taught by economics. Consumption is the end-in-view behind all economic activity. This includes university study and research. Thus, economic logic counsels removing universities from the aegis of government and subjecting them to market competition by abolishing tenure, privatizing research funding and separating the functions of teaching and research. Unfortunately, the two vested interests who have the most to lose from this change in approach, faculty and administrators, are the ones most powerfully in control of the present system.

If You’re So Smart, Why Ain’t You Rich? 

Inevitably, some readers will disagree with the foregoing, perhaps even find it outrageous. The dissenters should ask themselves what the distinguished economic historian and statistician Donald (now Deirdre) McCloskey called “the American question”: “If you’re so smart, why ain’t you rich?” Here, the “you” are economists who devise theoretical models for stock and options prices, bond prices, GDP and interest rates. If those models really work – if they are statistically “robust” – why haven’t economists become rich as Croesus from using them to predict the future course of financial markets? For that matter, why were economists generously willing to publish their results for the world to see rather than jealously hoarding them as a source of income?

Most people couldn’t care less whether economists themselves make money from their work, but they are passionately convinced that government should somehow “regulate” the economy to make good things happen for them and prevent bad things from happening. Where did governments, which have existed for thousands of years of human history in myriad forms, suddenly acquire this mystical power to control human behavior and steer the course of future events?

Well, if the alleged control relates to the so-called “macro economy,” it clearly dates back to 1936 and the publication of John Maynard Keynes’ famous treatise on employment, interest and money. Here, the version of the American question relates to policy: Why hasn’t Keynesian economics worked as advertised? After forty years of the most intensive research ever expended on a scientific topic and forty more years of attempts to modify Keynesian theory and put it into practice, the world finds itself perched on a financial precipice.

Then there are those who apply the term “regulation” in an administrative sense to individual industry sectors, or even to individual firms. In this case, the “American question” should be modified to “if you’re so smart, why ain’t you running the business?” Agency regulation is such a nebulous concept that any attempt to criticize it allows proponents to slide out from under by changing the terms of the argument. But proponents cannot be permitted this luxury; regulation must have some definite purpose. And in practice, government regulation of business fails every test known to mortal man. The things that most people claim they want from regulation are precisely the things that can only be supplied by market competition rather than by regulation. Regulation is not a supplement or corrective to competition; it is an inferior substitute for it.

This failure of economic theory is particularly important because it drags the research model down to failure along with it. The majority of academic economists are left-wing in political orientation. (After all, they work for government.) In practice, their theoretical model and statistical tests have been designed to demonstrate the failure of free markets and the need for government intervention to produce an optimal result. The optimal result is the one that would obtain if private markets worked perfectly. Since they don’t, so runs the academic party line, we need government intervention and regulation to correct the market failures.

But real life has overtaken the academic research model. It is free markets, not government-controlled ones, that deliver the goods. This is still another argument for junking the current research model. It’s hard to do good research starting with a bad economic theory.

The Nitty-Gritty: Where Does Mainstream Economic Theory Go Wrong?

We have said that the mathematical model seduces economists into wrongly specifying their theoretical models. Exactly what does this mean?

Go back to Walras’s model of supply and demand. He, or rather his successors, assumed that we could model consumer demand as a function of consumers’ incomes, tastes and the prices of substitutes and complements for the good under study. But this implicitly assumes that consumers know all this information. As we all realize, they don’t. Nevertheless, it was long traditional for economists to begin by assuming the existence of “perfect information.” Since people consume not only in the present moment but also save for future consumption, this perfection of knowledge applied to the future as well as the present.

How’s that for an abstract model with no relationship to reality?

The same consideration applies on the supply side of the market, where producers are assumed to know not only every price relevant to the production of their own product – all input prices, the prices of all competing goods and so on – but also all technological facts relevant to production of their product and related products. And that’s not all, folks.

When devising models of general equilibrium, economists long assumed that all firms were “price-takers.” That simply meant that each firm supplied such a minuscule fraction of total market output that its contribution to that output had virtually no effect on the market price. That is, regardless of whether it operated at maximum production or went out of business, the market supply curve didn’t budge enough to change the equilibrium price materially. Therefore each firm took the market price as a parameter and treated the quantity it supplied as its only decision variable.
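The price-taking assumption is easy to see with a toy numerical sketch (all numbers here are hypothetical, chosen only for illustration): with linear market demand and thousands of identical small firms, removing one firm's entire output barely moves the market-clearing price.

```python
# Toy illustration of "price-taking" -- all numbers are hypothetical.
# Market demand: quantity demanded Q = 1,000,000 - 10,000 * P.
# 10,000 identical small firms each supply 50 units.

def market_price(total_supply, intercept=1_000_000, slope=10_000):
    """Invert linear demand Q = intercept - slope * P to get the clearing price."""
    return (intercept - total_supply) / slope

n_firms, per_firm_output = 10_000, 50
p_all = market_price(n_firms * per_firm_output)              # every firm produces
p_one_exits = market_price((n_firms - 1) * per_firm_output)  # one firm shuts down

print(f"price with all firms: {p_all:.4f}")     # 50.0000
print(f"price if one exits:   {p_one_exits:.4f}")  # 50.0050
```

One firm's total disappearance changes the market price by half a cent, which is why each firm sensibly treats the price as a parameter and chooses only its quantity.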

What about the quality of the good it produced? That led to still another simplifying assumption. Since “quality” was a variable that seemed to defy quantification, economists at first sought to treat the output of all firms as homogeneous – thereby removing product quality from discussion.

At this point, readers are probably experiencing the same mixture of disillusion and disbelief that hits college freshmen and sophomores when they are exposed to the economic concept of “perfect competition” for the first time. “What planet do economists live on?” is a representative specimen of the thoughts running through student heads at this moment.

As a temporary venture in devil’s advocacy, it is worth noting that an individual farmer operating in certain industries may meet some of these criteria. It is not too big a stretch to treat a particular variety of (say) wheat as a homogeneous good and it is definitely no stretch to treat the output of (say) one family farmer as an insignificant fraction of industry output. But even this kind of partial correspondence between model and reality is the exception, not the rule.

Over the decades, economists have modified the stringent assumptions listed above in various ways. But these modifications have been minor in their practical consequences. Instead of assuming perfect knowledge, for example, economists assumed that market participants possessed probability distributions about the outcome of future events or the existence of certain kinds of information. This minor concession didn’t add much value to their models. If I can play blackjack using the “card-counting” technique, this shifts the odds slightly in my favor. I will always win in the long run, assuming that my initial stake is big enough to withstand any runs of bad luck and I can play “forever.” Unfortunately, most economic decisions do not offer even this probabilistic level of certainty, let alone the perfect information available in the less sophisticated version of economic theory. (And in real life, blackjack doesn’t either; the casinos will ban me if they catch me card-counting.)
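The blackjack analogy can itself be sketched numerically. A gambler's-ruin simulation (with a hypothetical 51% win probability on even-money bets) shows why even a genuinely favorable bet delivers its "long run" payoff only to players whose stake can survive the runs of bad luck:

```python
# Gambler's-ruin sketch: a card-counter with a small edge (hypothetical
# 51% win probability on even-money $1 bets) still goes broke sometimes
# if the bankroll is small -- the long run only helps if you survive it.
import random

def survives(bankroll, bet=1, p_win=0.51, target=500, rng=None):
    """Bet until ruin (bankroll hits 0) or the target; True if target reached."""
    rng = rng or random
    while 0 < bankroll < target:
        bankroll += bet if rng.random() < p_win else -bet
    return bankroll >= target

rng = random.Random(42)  # fixed seed so the sketch is reproducible
results = {start: sum(survives(start, rng=rng) for _ in range(100))
           for start in (10, 100)}
print(results)  # the larger starting bankroll survives far more often
```

With a $10 stake the favorable bettor is ruined most of the time; with $100 he almost always reaches the target. Most real economic decisions offer far less than even this probabilistic certainty.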

Economists introduced even more modifications on the supply side of markets. Beginning in the 1930s, they began to contemplate alternatives in between the polar opposites of perfectly competitive markets and pure monopoly. But these alternatives, such as product differentiation and strategic interaction among a small number of large firms, were so slow to catch on that economists became habituated to focusing only on the equilibrium outcomes of markets and not on market processes. This meant that even when more sophisticated models began utilizing game theory and other non-traditional approaches, their focus was still directed away from entrepreneurship and innovation.

The Effects on the Study of Innovation and Entrepreneurship

The esoteric assumptions behind mainstream, traditional economic theory have backed that theory into a corner. Economists came to depend on the research model behind the theory for their livelihood. This gave them an underlying, unconscious identification with its biases and conclusions.

When Alfred Marshall first promoted his supply-demand Marshallian Cross, he viewed it as a valuable teaching tool for educating the masses. But economists became so obsessed with the concept of equilibrium that it became the primary focus of every theoretical model. The conditions necessary for equilibrium and the conditions prevailing at the state of equilibrium became the centerpiece of nearly every journal article. Little or nothing was said about the time-path to equilibrium and what might affect it.

The noted economist Joseph Schumpeter (1883-1950) prided himself on his personal and professional eccentricity. (He is said to have espoused the goals of being the best horseman in Vienna, the best lover in Europe and the best economist in the world.) In his theory of economic development, he derided the mainstream obsession with equilibrium, perfect competition, perfect information and – most of all – product homogeneity. Schumpeter believed that economic progress was made primarily by firms that created entirely new products. This could come about only as a result of innovation.

But Schumpeter knew that the mainstream world inhabited by his colleagues was hostile to the notion of innovation. In traditional economic theory, perfectly competitive firms were each earning a “normal” profit in long-run equilibrium. That is another way of saying that each firm’s books recorded exactly enough money under the heading “profit” to prevent shareholders from withdrawing their money and investing elsewhere, but not enough to attract the entry of new competitors into the industry. (Another way of putting it would be to say that the firm’s investment earned an amount equal to the best alternative investment of equal risk; e.g., its “opportunity cost” of investment was exactly covered.) In such an environment, an innovator would find that any temporary profits from creating a new product would soon – in principle, instantaneously – be competed away by a horde of imitative firms entering the market. After all, with “perfect information” all relevant information necessary for production would be publicly known.
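The "normal profit" bookkeeping described above reduces to simple arithmetic (the figures below are hypothetical): economic profit is accounting profit minus the opportunity cost of the owners' capital, and a "normal" profit is the case where that difference is exactly zero.

```python
# "Normal profit" arithmetic -- all figures hypothetical.
# Economic profit = accounting profit - opportunity cost of capital.
invested_capital = 1_000_000
alt_return_rate = 0.08       # best alternative investment of equal risk
accounting_profit = 80_000   # what the firm's books record as "profit"

opportunity_cost = invested_capital * alt_return_rate  # 80,000
economic_profit = accounting_profit - opportunity_cost
print(economic_profit)  # 0.0 -> "normal" profit: no signal to enter or exit
```

Positive economic profit would attract entrants; negative economic profit would drive owners to withdraw their capital. Zero is the long-run resting point of the traditional model.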

According to Schumpeter, innovative firms strive not only to erect but to maintain durable monopoly positions in the products they create. The resulting monopoly profits not only reward owners for the risks they take but also bankroll the research necessary to improve their product and create new innovative products. The actual world of imperfect information makes it harder on producers but it also makes it easier to maintain monopoly status once it is attained.

Mainstream economists couldn’t stomach this analysis because they had been preaching (and practicing) a doctrine of enforced competition and government intervention to eradicate monopoly. How could they now praise the monopoly structure that they had made their bones by condemning? (Of course, economists were all-too-willing to relax their standards and overlook monopoly when it was organized and enforced by government itself because they viewed government as the sole economic actor not actuated by self-interest. In effect, economists of Schumpeter’s day were, and remain today, employees of Government R’Us.)

Schumpeter replied to his mainstream colleagues by pointing out that innovating monopolists did face competition even if they were able to exclude direct competitors from their market by (for example) obtaining patent protection for their new products. That competition came from other creative would-be monopolists. After all, the demand for the original monopolist’s product had to come from people shifting purchases from goods being produced by competitive firms. Why wasn’t the monopolist also vulnerable to the same line of attack from other innovators?

For Schumpeter, “competition” was not merely a dull, incremental process of bland, homogeneous products duking it out for tiny shares of a market and a normal profit. He called his model of competition between monopolists “creative destruction,” implying that innovation can occur only by destroying or disrupting the existing order in favor of a new creative equilibrium – which will eventually be toppled by a new innovator. Thus, said Schumpeter, “…competition from the new commodity, the new technology, the new source of supply, the new type of organization… which… strikes not at the margins of the profits and the outputs of the existing firms but at their foundations and their very lives,” is the true explanation behind the superiority of free-market capitalism to other systems. In Capitalism, Socialism and Democracy, Schumpeter cited the example of ALCOA, a “monopoly” so notorious that it would soon be convicted under U.S. antitrust laws. Yet between 1890 and 1929, the price it charged for aluminum had fallen by 91% and its output had risen by a factor of 30,000! Schumpeter believed that the company had, in effect, been competing against the threat posed by potential competition.

Schumpeter was the most popular of the economic heretics because his model corresponds much more closely with certain aspects of reality. New products and product heterogeneity are a fact of life. Market uncertainty faces every participant, none more so than the would-be innovator. If injected with truth serum, every economist would be forced to admit that the concept of equilibrium is best conceived as a constantly changing point toward which competitive markets tend, rather than a point of rest actually attained by real-world markets.

The more telling critique of traditional economic theory, though, was made by Schumpeter’s fellow Austrian, F.A. Hayek (1899-1992), from a different theoretical perspective. Hayek pointed out that the term “perfect competition” violates every commonsense precept of the word competition. Under perfect competition, each firm has no sense of any other firm as a rival, hence does not perceive itself as “competing” with anybody. It has no incentive to lower its price for competitive reasons since it can already sell all it produces at the prevailing market price. If it attempted to raise its price arbitrarily, its sales would fall to zero. Every firm produces exactly the same product, so there is no competition on the basis of product quality.

Another simplifying assumption of traditional theory has been that no barriers to entry or exit exist in a “competitive” industry. This absence of barriers was formalized mathematically as costless entry and exit, meaning that the emergence of profits above those available in comparable investments elsewhere would instantly attract new entrants. The additional supply provided by that new entry would lower market price until the supra-normal profits were fully eroded.
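The entry mechanism can be sketched with a small loop (demand and cost numbers are hypothetical): entrants keep arriving so long as the market price exceeds average cost, and each entrant's added supply pushes the price down toward the zero-economic-profit level.

```python
# Costless-entry sketch -- hypothetical linear demand and cost numbers.
# Firms enter while price exceeds average cost (which already includes
# the "normal" return on capital); each entrant adds supply and lowers
# the market-clearing price.
def clearing_price(n_firms, per_firm_q=10, intercept=100.0, slope=0.05):
    return intercept - slope * n_firms * per_firm_q

avg_cost = 20.0   # includes the normal return on capital
n = 50            # incumbent firms earning supra-normal profit
while clearing_price(n + 1) > avg_cost:  # entry is assumed costless, instant
    n += 1
print(n, clearing_price(n))  # 159 20.5
```

Entry stops only when one more firm would drive price to (or below) average cost; in the continuous version of the model the supra-normal profit is eroded exactly to zero.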

What is there left to compete about? Nothing. Each firm selects the rate of output optimal to its situation; that is all. “Price-taking behavior” is the antithesis of “competition” as it is commonly understood. In “The Meaning of Competition” (1946), Hayek observes that the array of simplifying assumptions made by traditional theory assumes competitive equilibrium to exist – the process that brings it about is not explained by the theory but merely assumed at the outset. Nowhere does the theory explain how or why information should be so perfect, entry should be so easy, goods should be homogeneous and so many firms should exist.

Hayek found the assumption of “perfect information” especially paradoxical. Assuming that everybody knows everything is really just a way of evading the question that should be central to economic study; namely, how is information transmitted and acquired in a market economy? We know that people know some of the things that economists assume they know – the question is how they came to know them.

When Hayek broached this issue in a seminal article – “The Use of Knowledge in Society” in 1945 – the fashion among economists was to treat information about prices and goods as “given data.” He wondered to whom the data were “given.” The phrase must have meant “given to the observing economist” rather than actually given to the people who were supposed to possess it, since there was no agency that literally gave people such information. “The data from which the economic calculus starts are never for the whole society ‘given’ to a single mind… and can never be so given.” In fact, no one person or institution possessed it in its totality. It existed only in dispersed, fragmentary form in the minds of many millions (today, read “billions”) of people.

There is only one way for people to acquire the invaluable information they need to participate effectively in a market economy. They get it from markets themselves. That is why free markets are a necessary prerequisite for economic prosperity.

In another article (1937’s “Economics and Knowledge”), Hayek illumined the concept of equilibrium even more brightly than did Schumpeter. Rather than treating equilibrium merely by defining it as the correspondence of quantity demanded with quantity supplied in a market or markets, Hayek looked at the human implications of this fact. People order their lives by making plans that guide their behavior. When their individual plan is optimal when juxtaposed with the galaxy of facts at their disposal, the individual is said to be “in equilibrium.” But each individual’s plan is typically made independently of others; all plans need not automatically or necessarily be compatible with each other a priori. A market is said to be in equilibrium when all plans do mesh and are compatible. Thus, the impersonal workings of a free market serve to coordinate the plans of individuals by collating the dispersed information existing in the minds of its participants and using it to reconcile the wants and needs of all.

Writers of economics textbooks have traditionally begun by outlining what they call the Economic Problem. Since the resources necessary to produce economic goods are scarce and have alternative uses, we must allocate them logically in order to best satisfy the infinite wants of consumers. Optimal allocative logic is what textbook writers envision as economic theory.

Hayek redefined the Economic Problem. Because economists themselves do not possess the knowledge that mainstream theory has assumed market participants possess, they cannot “allocate” resources. Neither can government, for the same reason. The knowledge exists only in dispersed form, and the only way to unlock and make use of it is by utilizing markets to collate it and distribute it. That same market process then coordinates the plans of market participants to make them (more) compatible. The true Economic Problem is how to coordinate the plans of individuals by distributing the dispersed information not possessed by any one individual or institution.

We know that free markets perform this function better than government central planning and regulation. For over seventy years, central planning reigned in the Soviet Union. The result was the antithesis of coordination, in which an ordinary citizen might spend as much as six hours per day standing in line or hiring substitutes to do it for him. And the reward was a level of income and wealth equal to a small fraction of that obtainable in free societies without having to stand in line.

The Revised Economic Theory: Innovation and Entrepreneurship

Hayek’s work paved the way for an explicit economic theory of entrepreneurship and innovation, one that not only corrected the errors of mainstream theory but also put the work of Schumpeter in its proper perspective. In this space next week, we will explain how one man – now apparently on the short list for the Nobel Prize in economics – extended and refined Hayek’s analysis.

DRI-192 for week of 5-24-15: Why Incremental Reform of Government Is a Waste of Time

An Access Advertising EconBrief:

Why Incremental Reform of Government Is a Waste of Time

Any adult American who follows politics has seen it, heard it and read it ad infinitum. A person of prominence proposes to reform government. The reform is supposed to “make government work better.” Nothing earthshaking, understand, just something to improve the dreadful state that confronts us. And if there’s one thing that everybody agrees on, it’s that government is a mess.

Newspapers turn them out by the gross – it’s one of the few things that newspapers still publish in bulk. They can be found virtually every day in opinion sections. Let’s look at a brand-spanking new one, bright and shiny, just off the op-ed assembly line. It appeared in The Wall Street Journal (5/27/2015). The two authors are a former governor of Michigan (John Engler) and a current President of the North America Building Trades Unions (Sean McGarvey). The title – “It’s Amazing Anything Ever Gets Built” – aptly expresses the current level of exasperation with day-to-day government.

The authors think that infrastructure in America – “airports, power plants and factories” are cited specifically – is absurdly difficult to build, improve and replace. The difficulty, they feel, is mostly in acquiring government permission to proceed. “The permitting process for infrastructure projects… is burdensome, slow and inconsistent.” Why? “Gaining approval to build a new bridge or factory typically involves review by multiple federal agencies – such as the Environmental Protection Agency, the U.S. Forest Service, the Interior Department, the U.S. Army Corps of Engineers and the Bureau of Land Management – with overlapping jurisdictions and no real deadlines. Often, no single federal entity is responsible for managing the process. Even after a project is granted permits, lawsuits can hold things up for years – or, worse, halt a half-completed construction project.”

Gracious. These are men with impressive-sounding titles and prestigious resumes. They traffic in the measured prose of editorialists rather than the adjective-strewn rhetoric of alarmists. And their language seems all the more reasonable for its careful wording and conclusions. Naturally, having taken good care to gain the reader’s attention, they now hold it with an example: “The $3 billion Transwest Express [is] a multi-state power line that would bring upward of 3,000 megawatts of wind-generated electricity from Wyoming to about 1.8 million homes and businesses from Las Vegas to San Diego. The project delivers on two of President Obama’s priorities, renewable power and job creation, so the administration in October 2011 named [it] one of seven transmission projects to ‘quickly advance’ through federal permitting.”

You guessed it; the TransWest Express “has languished under federal review since 2007.” That’s eight (count ’em) years for a project that the Obama administration favors; we can all imagine how less well-regarded projects are doing, can’t we? In fact, we don’t have to use our imaginations, since we have the example of the Keystone XL Pipeline before us.

Last month, the Bureau of Land Management pronounced the ink dry on an environmental-impact statement well done. That left only the EPA, the Federal Highway Administration, the Corps of Engineers, the Forest Service, the National Park Service, the Bureau of Reclamation, the U.S. Fish and Wildlife Service (!) and the Bureau of Indian Affairs (!!) to be heard from. At the rate these agencies are careening through the approval process, the TransWest Express should come online about the time that the world supply of fossil fuels is entirely extinguished – a case of exquisitely timed federal permitting.

According to Messrs. Engler and McGarvey, the worst thing about this egregious case study in federal-government overreach is that it leaves “thousands of skilled craft construction workers [to] sit on their hands.” Apparently, the Obama administration was in general agreement with this line of thought, because “President Obama’s Jobs Council examined how other countries expedite the approval of large projects” and its gaze fell upon Australia.

“Australia used to be plagued with overlapping layers of regulatory jurisdiction that resemble the current regulatory structure in the U.S.” before it installed the type of reform that the two authors are laying before us. The Australian province of New South Wales “now prioritizes permit applications based on their potential economic impact, and agreements among various reviewing agencies ensure that projects are subject to a single set of requirements.” As a result of this sunburst of reformist illumination, “permitting times have shrunk… from a once-typical 249 days to 134 days.”

Mind you, that was the President’s Jobs Council talking, not the authors. And the President, listening intently, created an “interagency council… dedicated to streamlining the permitting process.” Just to make sure we knew the President wasn’t kidding, “the White House also launched an online dashboard to track the progress of select federal permit applications.”

At this point, readers might envision the two authors reading their op-ed to a live audience consisting of Wall Street Journal readers – who would greet the previous two paragraphs with a few seconds of incredulous silence, followed by gales of hilarious laughter. Doubtless sensing the pregnancy of these passages, the authors follow with some rhetorical throat-clearing: “It has become clear, however, that congressional action is needed to make these improvements permanent and to require meaningful schedules and deadlines for permit review. Fortunately, Sens. Rob Portman (R-Ohio) and Claire McCaskill (D-Mo.) have introduced the Federal Permitting Improvement Act.”

“The bill would require the government to designate a lead agency to manage the review process when permits from multiple agencies are needed. It would establish a new executive office to oversee the speed of permit processing and to maintain the online dashboard that tracks applications.”

“The bill would also impose sensible limits on the subsequent judicial review of permits by reducing the statute of limitations on environmental lawsuits from six years to two years and by requiring courts to weigh potential job losses when considering injunction requests.”

Ah-hah. Let’s summarize this. President Obama, whose world renown for taking unilateral action to achieve his ends was earned by his selective ignoring and rewriting of law, confronted a situation in which two of his administration’s priorities were being thwarted by federal agencies over which he, as the nation’s Chief Executive, wielded administrative power. What action did he take? He turned to a presidential council – a century-old political buckpassing dodge to avoid making a decision. The council proceeded to do a study – another political wheeze that dates back at least to the 19th century and has never failed to waste money while failing to solve the problem at hand. When the study ostensibly uncovered an administrative reform purporting to achieve incremental gains in efficiency, the President (a) “streamlined the process” by telling two of the agencies that were creating the worst problems in the first place to cooperate with each other via an additional layer of bureaucracy (an “interagency council”) and (b) created an “online dashboard” so that we could all watch the ensuing slow-motion failure more closely. All these Presidential actions took place in 2011. It is now mid-2015.

And what do our two intrepid authors propose to deal with this metastatic bureaucratic cancer? Congress will point its collective finger at one of the agencies causing the original problem and give it more power by making it “manager” of the review process. (This action implies that the root cause of the problem is that somebody in government doesn’t have enough power.) Of course, the premise that “permits from multiple agencies are needed” is taken completely for granted. Next, Congress would establish still another layer of bureaucracy (the “executive office”) to “oversee” the very problem that is supposedly being solved (e.g., “speed of permit processing”). (This implies that we have uncovered two more root causes of the problem – not enough layers of bureaucracy and not enough oversight exercised by bureaucrats.) A classic means of satisfying everybody in government is by getting every branch of government into the act. Accordingly, Congress points its collective finger at “the courts” and tells them to “weigh” job losses when considering requests for injunctions against projects. (The fact that this conflicts with the original “potential economic impact” mandate doesn’t seem to have concerned Congress or, for that matter, Messrs. Engler and McGarvey.) Finally, Congress throws a last glance at this unfolding Titanic scenario and, collective chins resting on fists, rearranges one last deck chair with a four-year reduction in the statute of limitations on environmental lawsuits.

The most amazing thing is not that anything ever gets built, but that these two authors could restrain their own laughter long enough to submit this op-ed for publication. The above summary reads more like a parody submitted for consideration by Saturday Night Live or Penn and Teller.

Two questions zoom, rocket-like, to the reader’s lips upon reading this op-ed and the above summary. What good, if any, could possibly result from this kind of proposal? Why do these proposals pop up with monotonous regularity in public print? The answers to those questions give rise in turn to a third question: What are the elements of a truly effective program for government reform and why has it not emerged?

Why Doesn’t Incremental Reform Work? 

The reform proposed by Messrs. Engler and McGarvey is best characterized as “incremental” because it does not change the structure of government in any fundamental way; it merely tinkers with its operational details. It aims merely to change one small part of the vast federal regulatory apparatus (permitting) by improving one element (its speed of operation) to a noticeable but modest degree (reducing the average time needed to secure a permit from 249 days to 134 days). And the rhetoric employed by the authors stresses this point – aside from the attention-grabbing headline, they are at pains to emphasize their modest goal as a major selling point of their proposal. They’re not trying to change the world here. “Americans of all stripes know that something is seriously wrong when other advanced countries can build infrastructure faster and more efficiently than the U.S., the country that built the Hoover Dam.” They use words like “bipartisan proposal” and “strengthen the administration’s efforts” rather than heaping ridicule on the blatant hypocrisy and stark contradiction of the Obama administration’s actions. They want to get a bill passed. But do they want actual reform?

Superficially, it seems odd that two authors would propose reform while opposing reform. Yet close inspection confirms that hypothesis not only for this op-ed, but in general. The authors deploy the standard op-ed bureaucratic argle-bargle that we have absorbed by osmosis from thousands of other op-eds – “infrastructure,” “permitting,” “priorities,” “job creation,” “streamline [government] process,” “expedite approval,” “implemented reforms,” “economic impact,” “manage the review process,” “lead agency,” “executive office.” The trouble is that if all this really worked, we wouldn’t be where we are today. The TransWest Express review wouldn’t have begun in 2007 and still be in limbo today. The Obama Administration wouldn’t have started remedial measures in 2011 and still be waiting on them to take effect in 2015. The U.S. wouldn’t be staggering under a cumulative debt load exceeding its GDP. The federal government wouldn’t have unfunded liabilities exceeding $24 trillion. The Western world wouldn’t be supporting a welfare state that is teetering on the brink of collapse.

Who are John Engler and Sean McGarvey? John Engler was formerly the Governor of Michigan. At one time, he was considered the bright hope of the Republican Party. He began by trying to reform state government in Michigan. He failed. Instead, he was co-opted by big government. Detroit went on to declare bankruptcy. John Engler left office and went to work for the Business Roundtable. Business organizations like the Chamber of Commerce exist today for the same reason that other special-interest organizations like La Raza and AARP exist – to secure special government favors for their members and protect them from being skewered by the special favors doled out to other special-interest organizations. Sean McGarvey is President of North America’s Building Trades Unions, a department of the AFL-CIO that performs coordinative, lobbying and “research” (i.e., public-relations) functions. Unions can achieve higher wages for their members only by affecting either the supply of labor or the demand for it. There is precious little they can do to affect the demand for labor, which comes from businesses, not unions. Unions can affect the supply of labor only by reducing it, which they do in various ways. This causes unemployment, which in turn exerts continuous public-relations pressure on unions to support “job creation” measures. But true job creation can come only from the combination of consumer demand and labor productivity, which underlie the economic concept of marginal value productivity of labor.

In the jargon of economics, all these organizations are rent-seekers that seek benefits unobtainable in the marketplace. They represent their members in their capacities as producers or input suppliers, not in their capacities as consumers. In other words, rent-seekers and the op-eds they write structure their pleas for “reform” to raise the prices of goods and inputs supplied by their member/constituents and/or provide jobs to them. Virtually all the op-eds appearing in print are written by rent-seekers striving to shape pseudo-reforms in ways that suit their particular interests.

In the Engler-McGarvey case, there are two possibilities. Possibility number 1: The Federal Permitting Improvement Act actually passes Congress and actually achieves the incremental improvement promised. In this wildly unlikely case, Mr. Engler’s business clients benefit from the modest reduction in permitting times. Since the entire wage and hiring process for infrastructure projects – government or otherwise – is grossly biased in favor of union labor, Mr. McGarvey’s clients benefit as well. Possibility number 2: As the above Summary suggests, the likelihood of actual incremental improvement is infinitesimal even if the legislation were to pass, since it requires efficient behavior by the same government bureaucracy that has caused the problems requiring reform in the first place. So the chances are that the result of the reform proposal will be nil.

As far as you and I are concerned, this represents a colossal waste of time and money. But for Messrs. Engler and McGarvey, this is not so. They are creatures of government. The next-best alternative to positive benefits for their client-constituents is no change in the status quo. For Mr. Engler, the status quo gives the biggest companies big advantages over smaller competitors. For Mr. McGarvey, the status quo gives unions and union labor big advantages that they cannot begin to earn in the competitive marketplace. Unions have been losing market share steadily in the private sector for many years. But they have been gaining influence and membership in the government sector, which is ruled by legislation and lobbyists.

Op-eds and reform proposals like this one allow people like Mr. Engler and Mr. McGarvey to earn their lucrative salaries as lobbyist and union president/lobbyist, respectively, by sponsoring and promoting pseudo-reform policies whose effects on their client-constituents can be characterized as “heads we win, tails we break even.”

But what about the effects on the rest of us?

What Would Real Reform Require – and Why Don’t We Get It?

A fundamental insight of economics – we might even call it THE fundamental insight – is that consumption is the end-in-view behind all economic activity. All of us are consumers. But this very fact works against us in the realm of big government, because it diffuses the monetary stake each one of us has in any one particular issue as a consumer. A tax on an imported good will raise its price, which is a bad thing for millions of Americans. But because that good forms only a small part of the total consumption of each person, the money it costs him or her will be small. The cost will not be enough to motivate him or her to organize politically against the tax. On the other hand, a worker threatened with losing his or her job to the competition posed by the imported good may have a very large sum of money at stake – or may believe that to be true. The same is true for owners of domestic import-competing firms. Consequently, there are many lobbyists for legislation against imports and almost no lobbyists in favor of free, untaxed international trade. Yet economists know that free international trade will create more happiness, more overall goods and services and almost certainly more jobs than will international trade that is limited by taxes and quotas.
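The asymmetry described above is easy to see with a hypothetical set of numbers (all figures invented purely for illustration, not drawn from any actual tariff):

```python
# Hypothetical tariff illustrating diffuse costs vs. concentrated benefits.
# All figures are invented for illustration only.
total_tariff_cost = 300e6   # extra dollars consumers pay per year
consumers = 300e6           # rough U.S. population
protected_workers = 10_000  # workers in the import-competing industry

cost_per_consumer = total_tariff_cost / consumers        # about $1 each
gain_per_worker = total_tariff_cost / protected_workers  # about $30,000 each

print(f"Cost per consumer: ${cost_per_consumer:,.2f}")
print(f"Gain per worker:   ${gain_per_worker:,.2f}")
```

A one-dollar annual cost motivates no consumer to organize politically; a $30,000 stake motivates intense lobbying. That is the arithmetic behind the imbalance between import-competing producers and consumers.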

This explains why so many op-ed writers are rent-seekers and so few argue in favor of economic efficiency. True reform of government would not focus on the aims of rent-seekers. It would not strive to preserve the artificial advantages currently enjoyed by large companies – neither, for that matter, would it seek to preserve the presence of small companies merely for their own sake. True reform would allow businesses to perform their inherent function; namely, to produce the goods and services that consumers value the most. The only way to effect that reform is to remove the artificial influence of government from markets and confine government to its inherent limited role in preventing fraud and coercion.

Based on this evaluation, we might expect to see economists writing op-eds opposing the views of rent-seekers. Instead, this happens only occasionally. Economists are just as keenly attuned to their self-interest as other people. Most economists are employed by government, either directly as government employees or indirectly as teachers in public universities or fellows in research institutions funded by government. At best, these economists will favor the status quo rather than true reform. Only the tiny remnant of economists who work outside government for free-market oriented research organizations can be relied upon to support true reform.

Incremental Reform Vs. Structural Reform 

Incremental reforms are sponsored by rent-seekers. They are designed either to fail or, if they succeed, to yield rents to special interests instead of real reform. Real reform must be pro-consumer in nature. But the costs of organizing consumers are vast. To mobilize reform on that scale, the reform must offer benefits that are just as vast or greater in size and scope. That means that true reform must be structural rather than incremental. It cannot merely preserve the status quo; it must overturn it.

In other words, true reform must be revolutionary. This does not imply that it must be violent. The reform that overturned Soviet Communism, perhaps the most powerful totalitarian dictatorship in human history, was almost completely non-violent. Admittedly, it had outside help from the international community, in political and moral form, from people like Lech Walesa, Pope John Paul II, British Prime Minister Margaret Thatcher and, most of all, President Ronald Reagan.

As the efforts of the Tea Party have recently demonstrated, pro-consumer reform cannot be “organized” in the mechanistic sense. It can only arise spontaneously because that is the least costly way – and therefore the only feasible way – to achieve it.

We are unlikely to read about such a reform in the public prints because most of them are owned or sponsored by people who have vested interests in big government. These interests are usually financial but may sometimes be purely ideological. Big government may be a means of suppressing competition. It may be a means of subsidizing their enterprise. It may be a means of providing a bailout when digital competition becomes too fierce. In any event, we cannot look to the op-ed pages for leadership of real government reform.

DRI-168 for week of 5-17-15: Who Killed the Amtrak 8?

An Access Advertising EconBrief: 

Who Killed the Amtrak 8?

At 9:21 PM on Tuesday, May 12, 2015, Amtrak Northeast Regional passenger train 188 was proceeding northeast en route from Washington, D.C. to New York City. Specifically, it was traveling through Philadelphia a few miles north of 30th Street station in an area called Frankford Junction. Passing through a short stretch of eastbound track, it came to a fairly sharp northeast curve. The speed limit for a train entering the curve was 50 mph. According to the train’s “black box,” or data recorder, it was traveling at 106 mph as it entered the curve. The train’s engineer apparently applied emergency brakes immediately after reaching the curve, but a few seconds later – the point at which the data recorder stopped receiving data – the train had slowed only to 102 mph.

The reason the data recorder ceased operations was that the train derailed at that point. Seven people were killed at the crash site and one died subsequently; around thirty others were hospitalized with injuries of varying severity. The dead included the CEO of a technology firm and a naval-academy midshipman.

Reactions were predictable. Philadelphia Mayor Michael Nutter solemnly, somberly lamented the tragedy. Federal-government regulators stressed the desirability of transportation safety and the length of time necessary to decelerate a train. And, most predictable of all, politicians and political commentators placed blame on their political opponents.

Murdering Republicans Strike Again

An activist liberal policy group called “Agenda Project Action Fund” made a video giving their version of the events leading up to and including the derailment. It was titled “Republican Cuts Kill Again.” The “cuts” referred to were budget cuts by the U.S. Congress, a majority of whose members are currently Republican. An author named Josh Israel of “Think Progress” wrote an article titled “Currently Available Technology May Have Prevented Fatal Amtrak Crash, But Congress Never Funded It.” Politico.com chipped in with the headline “House panel votes to cut Amtrak budget hours after deadly crash.” Rep. Nita Lowey sententiously volunteered that “starving rail of funding will not enable safer train travel” – without, of course, mentioning that the combined federal and state Amtrak budgets have increased every year since 2008.

The immediate questions that arise are: What is this “currently available technology?” Why didn’t Congress fund it? But that doesn’t begin to exhaust the relevant sources of curiosity. How long has the technology been available? Why is Congressional funding even an issue in the first place, since Amtrak is nominally a for-profit corporation? Most importantly of all, what is the optimal framework for providing transportation services in general and passenger-rail services in particular – and how does Amtrak fit into that framework?

What is Amtrak and Why are People Saying These Terrible Things About It?

“Amtrak” is a hybrid name for the National Railroad Passenger Corporation. It is one of those centaur-like organizations common to modern big government – a nominally for-profit corporation that is nevertheless publicly funded. It receives annual appropriations from the federal government that have averaged around $1.4 billion in recent years. It also receives annual funding from various state-level sources, primarily some 14 state governments and the three largest Canadian provinces.

Amtrak serves 46 U.S. states and those three Canadian provinces. But the bulk of its business is provided in what is called the “Northeast Corridor” of the U.S. Although Amtrak’s routes comprise over 500 destinations, more than two-thirds of its nearly 31 million passengers come from the ten largest metropolitan areas in the U.S.; 83% travel routes of less than 400 miles.

Amtrak began operations on May 1, 1971. Today it runs over 300 trains per day across 21,000 miles of track. It has over $2 billion in annual revenue. But it has yet to turn a profit. It has always required subsidies. During the Reagan administration, these subsidies hit an annual low of $600 million before rising again subsequently. They have waxed and waned, but state-level subsidies have recently tended to compensate for cuts at the federal level. Government has also provided capital subsidies for investment; this explains why the left wing can call for Congress to fund safety improvements.

Although Congress provided limited authorization for Amtrak to deviate from labor-union agreements in the late 1990s, Amtrak has long employed union labor. It negotiates with 14 separate unions and has 24 separate agreements with those unions. For many decades under federal regulation by the Interstate Commerce Commission (ICC), the railroad business was a classic case of “featherbedding,” or the employment of superfluous workers in union-protected jobs. This remains true today with Amtrak. It is no coincidence that Amtrak’s national headquarters is in Washington, D.C.

Amtrak is a lightning rod for political controversy. The left wing loves it because mass transit is a sacred cow of both the old left and environmentalists. The fact that Amtrak is horrendously inefficient is politically advantageous to the left because it means that Amtrak employs more labor than necessary to produce a given output; the very inefficiency that outrages any competent economist delights the left wing.

It is true that left-wingers cite cost comparisons claiming that train travel is the most efficient form of passenger transportation. Unfortunately for their argument, the comparisons are bogus. They use “on-time” as a criterion for comparing airlines and trains while rigging the definitions to allow trains absurd margins of lateness. Even more telling is the fact that they completely ignore the element of consumer demand. The reason most people do not ride trains is the same reason that they prefer to drive automobiles – cars provide point-to-point transportation and maximum personal convenience. This economizes on the value of an individual’s time. Since we are all mortal and have limited hours in the day and in our lifetime, this is a vast benefit to us. But this is completely ignored in the cost comparisons claiming superiority for train travel. When economists conduct the comparisons and account for human time preference, this claimed superiority for mass transit vanishes.

The right wing hates Amtrak, but that doesn’t mean that it is unpopular with Republicans. Amtrak is popular in the most populous parts of the country, which means that most of the geographic U.S. (but a minority of the population) is subsidizing a relatively small part of the country (but a majority of the population). Consequently, Republicans – most of whom have the political backbone of invertebrates – tend to support subsidies. (This is particularly true in the House of Representatives, which is based on population.) How else could the subsidies have continued, year after year after year? Instead of cutting Amtrak loose or voting for privatization, Republicans content themselves with rhetorical volleys against it and cosmetic measures designed to “make it work better.”

The “Currently Available Technology”

The “currently available technology” referred to by the liberal activist group is the Positive Train Control (PTC) system. It uses a combination of radio signals and GPS technology to pinpoint the position of all trains. Not only can it slow speeding trains, it can also prevent collisions between trains, prevent trains from proceeding through wrongly positioned switches and prevent trains from entering work zones. In other words, PTC is an all- (or at least, multi-)purpose train safety system.

In 2008, a California commuter train suffered a deadly collision. Senator Dianne Feinstein (D-Cal), one of the most powerful Senate Democrats, did not let this crisis go to waste. She seized the chance to push through legislation mandating full installation of PTC for both passenger and freight railroad systems in the U.S. by 2015.

So what, many readers are doubtless thinking to themselves? Isn’t that the way the system is supposed to work? Isn’t this a victory for big government, the regulatory state?

Not hardly. Just the opposite, in fact.

Why PTC Is DOA

To an economist, the first thing that pops into mind is the question: If PTC is the greatest thing since sliced bread, why does Congress have to mandate its adoption? After all, freight railroads have been an extremely successful industry for years. Those ads touting their success in squeezing efficiency from train fuel are not hyperbole. Warren Buffett didn’t buy Burlington Northern because he thought its management was brain-dead. Why in the world wouldn’t the industry rush to adopt PTC if it were the last word in safety, since safety is vital to any successful freight operation?

The answer to that question was provided by the Reason Foundation. Thanks to the expertise of its founder, Robert Poole, the Foundation has long been recognized as the ranking expert in transportation. Policy analyst Baruch Feigenbaum gave his readers the lowdown on PTC.

The Federal Railroad Administration performed a cost-benefit study on PTC technology. It found a projected benefit range (discounted present value) of $0-$400 million. But the cost was $13 billion. Whoops. That means that for every $1 of (maximum) benefit, it cost over $32 to install.
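That benefit-cost ratio can be checked directly from the FRA figures quoted above (the calculation uses the maximum of the $0-$400 million benefit range; the variable names are mine):

```python
# Benefit-cost arithmetic for the PTC mandate, using the FRA estimates
# cited in the text (upper bound of the projected benefit range).
benefit_max = 400e6  # discounted present value of projected benefits, max
cost = 13e9          # estimated installation cost

cost_per_benefit_dollar = cost / benefit_max
print(f"Cost per dollar of maximum benefit: ${cost_per_benefit_dollar:.2f}")
# Cost per dollar of maximum benefit: $32.50
```

And that is the most charitable case: at any benefit figure below the $400 million maximum, the cost per dollar of benefit is even higher.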

But because Congress, in its infinite wisdom, forced both passenger and freight railroads to install it, the entire railroad business has been slaving away at it for the last seven years. Of course, nobody is too crazy about throwing money down this rathole. The use of radio signals requires coordination with the FCC, and Amtrak, which can’t even coordinate with itself well enough to make a profit, is finding that difficult. Then there are the various regulatory hurdles. Yes, that’s right – the same government which has legislatively mandated the adoption of PTC is throwing up regulatory hurdles to it in the form of environmental and historic-preservation review for each of the 20,000 required communications antennas in the system. This has led to a year-long moratorium on installation, according to the Association of American Railroads’ CEO Edward Hamberger in a Wall Street Journal op-ed.

But, uh, well, at least PTC is better than nothing, right? Wouldn’t we be stuck with no safety at all if we hadn’t passed that unbelievably dumb, wildly wasteful law? Apparently that’s what the political left wants us to believe; that is its intellectual fall-back position when confronted with the facts about PTC. Presumably, that is as far as the average person’s thinking goes on the subject.

But the inherent meaning of cost-benefit analysis is that “cost” refers to foregone alternatives. When cost exceeds benefit, there are better, more beneficial ways of spending the money than on the project being analyzed because the foregone alternatives represent benefits available elsewhere.

And in this particular case, some of those benefits are alternative safety projects within the railroad industry itself.

ATC – A Better, Lower-Cost Alternative

Both Feigenbaum and Hamberger describe another type of railroad safety technology now in use. Amtrak and freight railroads currently utilize a type of safety technology that relates specifically to speeding trains. It is called “Automatic Train Control” (ATC). It is installed on the tracks and sends signals to trains telling them what the speed limit is, allowing the train to automatically slow itself before reaching the speed-change point. In short, it is a mechanism for eliminating the particular type of human (engineer) error apparently responsible for the Philadelphia derailment. It would have prevented the Philadelphia accident.

It is quite true that ATC handles only this particular type of error; it lacks the all-encompassing scope of PTC. But ATC has the advantage of being relatively cheap and easy to install. We know this because after the recent Philadelphia derailment, Amtrak quietly installed ATC on the section of track where the accident occurred. It accomplished the installation in one weekend.

Nor is this the only type of alternative safety improvement to ponder. Marc Scribner of the Competitive Enterprise Institute recently noted that about 270 people die every year in accidents at train crossings. Why not take some of that $13 billion and devote it instead to improving crossing safety in various low-cost ways, thereby saving dozens of lives every year instead of 8-10 lives every 7 years or so?

Both Amtrak and private freight railroads have installed ATC. Why haven’t they completed that installation? Well, they both labor under the burden of meeting mandatory legal deadlines for which they will eventually be fined when 2015 expires without completion of the PTC system. Hamberger estimates 2018 as the PTC completion date, with another two years necessary for “testing and validation.”

Who Killed the Amtrak 8? 

Given the facts as outlined above, it is obvious who killed the Amtrak 8. Big government and the regulatory state killed them. Even Amtrak might have had the corporate brains to install ATC throughout the Northeast Corridor – by far its biggest revenue generator and arguably profitable in its own right – were it not faced with the overwhelming burden of having to install PTC.

This verdict is seconded by Wall Street Journal columnist Holman Jenkins in his latest column (WSJ, 05/20/2015, “How Congress Railroaded the Railroads”). “Is there a more absurd technology than positive train control, which Congress imposed as an unfunded mandate on railroads in 2008, and which supposedly would have prevented last week’s Philly Amtrak crash? Except it didn’t since its implementation has been draggy and its design so clearly inferior to cheaper, faster, more up-to-date solutions.”

Even beyond this, though, is the decisive point relating to the fate of passenger rail had big government not established and continually sustained Amtrak in the first place.

A World Without Amtrak

When Lyndon Johnson succeeded John F. Kennedy in the White House, he recognized that Kennedy’s assassination had created an extraordinary mandate for change. Johnson was perhaps the century’s premier legislative spearhead, and he essentially created the regulatory welfare state that presides over the country today. Johnson predicted that it would take 50 years to determine the success or failure of his “experiment” in social policy. A half-century later, we can deliver the verdict that the welfare state is imploding not just in the U.S., but worldwide.

Similarly, forty-four years should be sufficient to pronounce Amtrak a failure. Its infrastructure is ramshackle, its finances are a mess and its organization is a shambles. Amtrak’s only positive feature is a core constituency that leaves open the possibility of a profitable passenger rail service. That constituency, in the “Northeast Corridor” of America, boasts a population density roughly ten times greater than the rest of the U.S. This makes it possible for a for-profit, private-sector business to identify and isolate this customer base. In no sense is passenger-rail service a “public good” in the classical economic sense; it is neither non-exclusive nor non-rival.

Thus, the obvious solution to the problems plaguing Amtrak, of which safety is merely the one occupying front-pages currently, is to end its public subsidies, acknowledge its bankruptcy and sell off its assets. This includes its rights of way, which would enable a privatized successor to operate passenger rail for the benefit of the large number of people in the relatively confined area where that business is economically feasible.

To be fair, it should be noted that some are skeptical of privatization not on principle but as a practical matter. Like Holman Jenkins of The Wall Street Journal, they think the profits of the Northeast Corridor are overestimated and costs of service underestimated. Variable costs should take into account incremental wear and tear on infrastructure, which are now obscured by capital subsidies. Congress has given Amtrak preferential right-of-way over freight traffic on lines owned by the freight railroads – another implicit subsidy that would vanish under privatization. Various regional commuter transporters now tacitly agree not to compete with Amtrak, which is still another hidden subsidy. Could a privatized rail carrier still serve the Northeast Corridor without these subsidies? The only way to know is to try it and see.

To be workable, privatization would demand relief from the killing mandate currently crippling Amtrak and greatly hindering freight railroads – namely, the 2008 law mandating the adoption of the already-obsolete and dreadfully expensive PTC. This would save hundreds, if not thousands of lives, and improve life for millions of people. The only losers would be regulators, politicians and, possibly, union members who would lose jobs and be forced to take lower-paying ones. The union members could be bought off through severance. The others would simply have to eat their losses. In fact, this is increasingly the choice that confronts us not merely in passenger rail but in the entire transportation system.

As things stand today, the American transportation system is a massive form of human sacrifice to the gods of government regulation and unionization. Tens of thousands of Americans lose their lives every year so that government regulators and union members can hold their jobs and earn more money than would otherwise be the case.

Let us hear from Holman Jenkins again: “Which brings us to another headline from the brave new world of self-driving vehicles. This month the truck maker Freightliner introduced a robotically controlled truck, licensed to operate on the roads of Nevada. Its onboard system, designed to relieve drivers of the monotony of motoring for hours down calm stretches of well-marked interstate, ‘never gets tired. It never gets distracted. It’s always at 100%,’ company executive Wolfgang Bernhard told the media.”

“Alas, Mr. Bernhard deflated expectations by predicting that, though the system is ready to roll today, deployment is likely five years off. ‘The biggest obstacle that we see is the regulatory framework'” [emphasis added].

“Five years may be optimistic: An unspoken burden for the future is the legacy of the Toyota travesty of 2010, in which congressmen and, most damningly, a head of the Transportation Department, whose agency knows better, preferred to allege an undetected electronic bug in Toyotas rather than acknowledge that drivers (i.e., voters) cause accidents by pressing the gas instead of the brake.”

“This scandal, hugely costly to Toyota and largely fabricated, has never been acknowledged or investigated by the government or the media…One big inconvenient precedent lies in its wake. As Toyota found, because it’s impossible to prove the nonexistence of a software bug, anytime there’s an accident involving a system in which software plays a role, the software will be blamed and the driver will be excused. Perhaps the only way forward, then, is to remove the driver altogether” [emphasis added].

Whether it is cars, planes or trains, the dirty little secret that nobody is willing to talk about is the driver – the source of almost all the deaths and injuries. Here we have a train traveling at 106 mph in a 50 mph zone and an engineer with a case of amnesia. Sure, there was a dent in the windshield and talk of a projectile. But the dent didn’t penetrate the windshield and there is no logical explanation for how a projectile would cause the train’s speed to double. Is a left-wing lawyer going to emerge claiming that the train’s engine was manufactured by Toyota? Or are we eventually going to wind up with “driver error” as the cause of the derailment? Once again, with trains as with planes and cars, self-driving is the ultimate way forward.

Holman Jenkins is now acknowledging what this space declared over two years ago with respect to self-driving cars and almost a year ago with respect to commercial aviation. Now the same chickens are roosting on the tracks of passenger rail. Big government and regulators are standing athwart technology and yelling “Stop!” while over 30,000 people are killed every year on the nation’s roads, hundreds die in each commercial air crash and hundreds more die annually in various forms of railroad accident.

Up to now, none dare call it murder. Yet Democrats get away with accusing Republicans of murder for the sin of holding a Congressional majority.

DRI-186 for week of 5-10-15: How Can the Framework of Economics Help Us Assign Responsibility for War Crimes in World War II?

An Access Advertising EconBrief:

How Can the Framework of Economics Help Us Assign Responsibility for War Crimes in World War II?

The previous EconBrief explains how the classical theory of voluntary exchange and the moral concept of individual responsibility mutually reinforce each other. The mutually beneficial character of voluntary exchange allows individuals to assume responsibility for their own actions in a free society. Individual responsibility permits voluntary exchange to function without the necessity of, say, review of each transaction by a neutral third party to ensure fairness. The role of government in a voluntary society is minimal – to enforce contracts and prevent coercion.

Recently, the issue of responsibility for war crimes committed during World War II has been raised by various independent events. In Germany, a 93-year-old man is standing trial as an accessory to war crimes committed while he worked at the Auschwitz concentration camp during World War II. His presence in the camp is known, but his actual role and behavior are disputed. Should the prosecution have to prove he actually committed crimes, or would his participation as (say) a guard be enough to warrant his conviction as a war criminal?

A recent column in The Wall Street Journal by Bret Stephens (“From Buchenwald to Europe,” 05/05/2015) observes that many people in Germany were victims of Nazism, not Nazis – including many non-Jews. How should this affect Germany’s national policies today on European union, immigration and attitude toward systematic anti-Semitism and misogyny practiced by Muslim immigrants? “It isn’t easy, or ultimately wise, [for Germany] to live life in a state of perpetual atonement,” Mr. Stephens thinks.

Japan’s Prime Minister Shinzo Abe has publicly marveled about the transformation in relations between Japan and America, two countries that became deadly rivals in the late 1930s and waged total war in the 1940s, culminating in mankind’s only nuclear attack. Today we are two of the planet’s closest trading partners. Abe clearly wants to enlist the cooperation of the U.S. in Japan’s efforts to re-arm against the imminent threat of mainland China’s sabre-rattling territorial ambitions. But Abe has also made disturbing noises in domestic politics, worshipping at the shrine of Japan’s war dead and speaking equivocally about Japan’s aggressive invasion of its Asian neighbors in the 1930s. These speeches are a rough Japanese analogue to holocaust-denial.

In deciding what to make of these events, our analytical anchor is once again the economic logic of individual responsibility arising in a context of voluntary exchange.

The Flawed Notion of National Responsibility for War Crimes

In his Wall Street Journal piece, Bret Stephens depicts “the drama of postwar Germany” as its “effort to bury the Nazi corpse,” which “haunts Germany at every turn.” This phrasing is troubling. It implies that Germany’s residents bear a collective burden for sins committed long before most of them were even born.

Not surprisingly, this burden hasn’t just been heavy – it has been unshakeable. “Should Germany’s wartime sins be expiated by subsidizing the spendthrift habits of corrupt Greek governments? Should fear of being accused of xenophobia require Germans to turn a blind eye to Jew-hatred and violent misogyny when the source is Germany’s Muslim minority?” These questions, posed rhetorically by Mr. Stephens, should be placed in the pantheon of pointlessness with queries about the angel-carrying capacity of pinheads.

Even before World War II ended, many people realized that the Axis powers would have to be called to account for their sins. Members of the German and Japanese governments and military had committed acts that plumbed new depths of depravity. Civilization had institutions and standards for judging and punishing the familiar forms of crime, but the scope and magnitude of Axis atrocities persuaded the Allies to hold separate war-crimes tribunals for Germany and Japan. And the defendants at every trial were individual human beings, not collective entities called “Germany” or “Japan.”

To be sure, there were arguments – some of them almost as bitter as the fighting that preceded the trials – about which individuals should be tried. At least some of the disagreement probably reflected disappointment that the most deserving defendants (Hitler, Himmler, Goebbels et al.) had cheated the hangman by committing suicide beforehand. But nobody ever entertained the possibility of putting either nation on trial. In the first place, it would have been a practical impossibility. And without an actual trial, the proceedings would have been a travesty of justice. Even beyond that, though, the greater travesty would have been to suggest that the entirety of either nation had been at fault for acts such as the murder of millions of Jews by the Nazis.

We need look no further than Stephens’ own article to substantiate this. He relates the story of his father-in-law, Hermann, who celebrated his 11th birthday on VE-Day, May 8th, 1945. He was the namesake of his father, a doctor who died in a German prison camp, where he had been imprisoned for the crime of “xenophilia” – showing friendly feelings toward foreign workers. The elder Hermann apparently treated inhabitants of forced-labor camps and had predicted the likelihood of an ultimate Russian victory over Germany. Not only was he not committing atrocities, he was trying to compensate for their effects and was killed for his pains. Were we supposed to prosecute his 11-year-old son? What madness that would have been! As Stephens put it, “what was a 10-year-old boy, whose father had died at Nazi hands, supposed to atone for?”

History tells us that Germany also harbored its own resistance movement, which worked behind the scenes to oppose Fascism in general and the war in particular. In fact, the Academy Award for Best Actor in 1943 went not to Humphrey Bogart, star of Best Picture winner Casablanca, but instead to Paul Lukas, who played a German who risked his life fighting the Nazis in the movie Watch on the Rhine. The Freiburg School, a German free-market school of economists formed before the war, openly opposed Fascist economic policies even during World War II. Their prestige was such that the Nazis did not dare kill them, instead preferring to suppress their views and prevent their professional advancement. Then there were the sizable number of Germans who did not join the Nazi Party and were not politically active.

Hold every contemporary German criminally accountable for the actions of Hitler, Goebbels, Hess, Goering, Mengele and the rest? Unthinkable. In which case, how can we even contemplate asking today’s Germans, who had no part in the war crimes, weren’t even alive when they were committed and couldn’t have prevented them even if inclined to try, to “atone” for them?

The longer we think about the notion of contemporary national guilt for war crimes, the more we wonder how such a crazy idea ever wandered into our heads in the first place. Actually, we shouldn’t wonder too long about that. The notion of national, or collective, guilt came from the same source as most of the crazy ideas extant.

It came from the intellectual left wing.

The Origin of “Social Wholes”

There is no more painstaking and difficult pastime than tracing the intellectual pedigree of ideas. The modern concept of the “social whole” or national collective seems traceable to the French philosopher Claude Henri de Rouvroy, Comte de Saint-Simon (hereinafter Saint-Simon). Saint-Simon is rightfully considered the father of Utopian Socialism. Born an aristocrat in 1760, he lived three lives – the first as a French soldier who fought for America in the Revolution, the second as a financial speculator who made and lost several fortunes, the third as an intellectual dilettante whose personal writings attracted the attention of young intellectuals and made him the focus of a cult.

Around age 40, Saint-Simon decided to focus his energies on intellectual pursuits. He was influenced by the intellectual ferment within France’s École Polytechnique, where the sciences of mathematics, chemistry, physics and physiology turned out distinguished specialists such as Lagrange, Laplace and Monge. Unfortunately, Saint-Simon himself was able to appreciate genius but not to emulate it. Even worse, he was unable to grasp any distinction between the natural sciences and social sciences such as economics. In 1803, he wrote a pamphlet in which he proposed to attract funds by subscription for a “Council of Newton,” composed of twenty of the world’s most distinguished men of science, to be elected by the subscribers. They would be deemed “the representatives of God on earth,” thus displacing the Pope and other divinely ordained religious authorities, but with additional powers to direct the secular affairs of the world. According to Saint-Simon, these men deserved this authority because their competence in science would enable them to consciously order human affairs more satisfactorily than heretofore. Saint-Simon claimed to have received this plan in a revelation from God.

“All men will work; they will regard themselves as laborers attached to one workshop whose efforts will be directed to guide human intelligence according to my divine foresight [emphasis added]. The Supreme Council of Newton will direct their works… Anybody who does not obey their orders will be treated … as a quadruped.” Here we have the beginnings of the collective concept: all workers work for a single factory, under one central administration and one boss.

We can draw a direct line between this 1803 publication of Saint-Simon and the 20th century left-wing “Soviet of engineers” proposed by institutional economist Thorstein Veblen, the techno-socialism of J. K. Galbraith and the “keep the machines running” philosophy of Clarence Ayres. “Put government in the hands of technical specialists and give them absolute authority” has been the rallying cry of the progressive left wing since the 19th century.

Saint-Simon cultivated a salon of devotees who propagated his ideas after his death in 1825. These included most notably Auguste Comte, the founder of the “science” of sociology, which purports to aggregate all the sciences into one collective science of humanity. Comte inherited Saint-Simon’s disregard for individual liberty, referring contemptuously to “the anti-social dogma of the ‘liberty of individual conscience.'” It is no coincidence that socialism, which had its beginnings with Saint-Simon and his salon, eventually morphed into Nazism, which destroyed individual conscience so completely as to produce the Holocaust. That transformation from socialism to Nazism was described by Nobel laureate F. A. Hayek in The Road to Serfdom.

Today, the political left is committed to the concept of the collective. Its political constituencies are conceived in collective form: “blacks,” “women,” “labor,” “farmers,” “the poor.” Each of these blocs is represented by an attribute that blots out all trace of individuality: skin color, gender, economic class (or occupation), income. The collective concept implies automatic allegiance, unthinking solidarity. This is convenient for political purposes, since any pause for thought before voting might expose the uncomfortable truth that the left has no coherent policy program or set of ideas. The left traffics exclusively in generalities that attach themselves to social wholes like pilot fish to sharks: “the 1%,” “the 99%,” “Wall St. vs. Main St.,” “people, not profit,” “the good of the country as a whole.” This is the parlor language of socialism. The left finds it vastly preferable to nitty-gritty discussion of the reality of socialism, which is so grim that it couldn’t even be broached on college campuses without first issuing trigger warnings to sensitive students.

The left-wing rhetoric of the collective has special relevance to the question of war crimes. Actual war crimes are committed by individual human beings. Human beings live discrete, finite lives. But a collective is not bound by such limitations. For example, consider the business concept of a corporation. Every single human being whose efforts comprise the workings of the corporation will eventually die, but the corporation itself is – in principle – eternal. Thus, it is a collective entity that corresponds to left-wing notions because it acts as if animated by a single will and purpose. And the left constantly laments the obvious fact that the U.S. does not and cannot act with this singular unanimity of purpose. For decades, left-wing intellectuals such as Arthur Schlesinger and John Kenneth Galbraith have looked back with nostalgia at World War II because the U.S. united around the single goal of winning the war and subordinated all other considerations to it.

The Rhetorical Convenience of Collective Guilt

Given its collective bent, we would expect to find the left in the forefront of the “collective guilt” school of thought on the issue of war crimes. And we do. For the left, “the country” is one single organic unity that never dies. When “it” makes a ghastly error, “it” bears the responsibility and guilt until “it” does something to expiate the sin. That explains why Americans have been figuratively horsewhipped for generations about the “national shame” and “original sin” of slavery. It is now 152 years after the Emancipation Proclamation and 150 years since the end of the Civil War, when a half-million Americans died to prevent slaveholding states from seceding from the Union. Following the Civil War, the U.S. Constitution was amended specifically to grant black Americans rights previously denied them. Yet “we” – that is, the collective entity of “the country” on which left-wing logic rests – have not yet expunged this legacy of slavery from “our” moral rap sheet. Exactly how the slate should be wiped clean is never clearly outlined – if it were, then the left wing would lose its rhetorical half-nelson on the public debate over race – but each succeeding generation must carry this burden on its shoulders in a race-reversed reprise of the song “Ol’ Man River” from the musical Show Boat. “Tote that barge, lift that bale” refers in this case not to cotton but to the moral burden of being responsible for things that happened a century or more before our birth.

If this burden can be made heavy enough, it can motivate support for legislation like forced school busing, affirmative action and even racial reparations. Thus, the collective concept is a potentially powerful one. As Bret Stephens observes, it is now being pressed into service to prod Germany into bailing out Greeks, whose status as international deadbeats is proverbial. Exactly how were Greeks victimized by Germans? Were they somehow uniquely tyrannized by the Nazis – more so than, say, the Jews who later emigrated to Israel? No, Germany’s Nazism of seventy or eighty years ago is merely a handy pig bladder with which to beat today’s Germans over the head to extract blackmail money for the latest left-wing cause du jour. Since the money must come from the German government, German taxpayers must fork it over. A justification must be found for blackmailing German taxpayers. The concept of collective guilt is the ideal lever for separating Germans from their cash. Every single German is part of the collective; therefore, every single German is guilty. Voila!

The Falsity of Social Wholes

In The Counter-Revolution of Science (1952), Nobel laureate F.A. Hayek meticulously traced the pedigree of social wholes back to their roots. He sketched the life and intellectual career of Saint-Simon and his disciple Auguste Comte. Hayek then carefully exposed the fallacies behind the holistic method and explained why the unit of analysis in the social sciences must be the individual human being.

Holistic concepts like “the country” are abstract concepts that have no concrete referent because they are not part of the data of experience for any individual. Nobody ever interacts directly with “the country,” nor does “the country” ever interact directly with any other “country.” The only meaning possible for “the country” is the sum of all the individual human beings that comprise it, and the only possible theoretical validity for social wholes generally arises when they are legitimately constructed from their individual component parts. Indeed, Hayek views one role for social scientists as the application of this “compositive” method of partial aggregation as a means of deriving theories of human interaction.

The starting point, though, must be the individual – and theory can proceed only as far as individual plans and actions can be summed to produce valid aggregates. The left-wing historical modus operandi has reversed this procedure, beginning with one or more postulated wholes and deriving results, sometimes drawing conclusions about individual behavior but more often subsuming individuals completely within a faceless mass.

An example may serve to clarify the difference in the two approaches. The individualist approach, common to classical and neoclassical economics, is at home with the multifarious differences in gender, race, income, taste, preferences, culture and historical background that typify the human race. There is only one assumed common denominator among people – they act purposefully to achieve their ends. (For purposes of simplicity, those ends are termed “happiness.”) Then economic theory proceeds to show how the price system tends to coordinate the plans and behavior of people despite the innumerable differences that otherwise characterize them.

In contrast, the aggregative or holistic theory begins with certain arbitrarily chosen aggregates – such as “blacks.” It assumes that skin color is the defining characteristic of members of this aggregate; that is, skin color determines both the actions of the people within the aggregate and the actions of non-members toward those in the aggregate. The theory derived from this approach is correct if, and only if, this assumption holds. The equivalent logic holds true of other aggregates like “women,” “labor,” et al., with respect to the defining characteristic of each. Since this basic assumption is transparently false to the facts, holistic theories – beginning with Saint-Simonian socialism, continuing with Marxism, syndicalism and the theories of Fourier, the Fabian socialists, Lenin, Sombart, Trotsky, and the various modern socialists and Keynesians – have had to make numerous ad hoc excuses for the “deviationism” practiced by some members of each aggregate and for the failure of each theory.

The Hans Lipschis Case

Is it proper in principle that Hans Lipschis, a former employee of Auschwitz and now ninety-three years old, be repatriated to Germany from the U.S. and tried as an accessory to the murder of 300,000 inmates of the notorious World War II death camp? Yes. The postwar tribunals, notably at Nuremberg, reaffirmed the principle that “following orders” of duly constituted authority is not a license to aid and abet murder.

Lipschis’s defense is that he was a cook, not a camp guard. But a relatively new legal theory, used to convict another elderly war-crimes defendant, John Demjanjuk, is that the only purpose of camps like Auschwitz was to inflict death upon inmates. Thus, the defendant’s presence at the camp as an employee is sufficient proof of guilt. Is this theory valid? No. A cook’s actions benefitted the inmates; a guard’s actions harmed them. If guards refused to serve, the camps could not have functioned. But if cooks refused to serve, the inmates would have died of starvation.

Verdicts such as that in the Demjanjuk case were undoubtedly born of the extreme frustration felt by prosecutors and men like Simon Wiesenthal and other Nazi hunters. It is almost beyond human endurance to have lived through World War II and then be forced to watch justice be cheated time after time after time. First the leading Nazis escaped or committed suicide. Then some of them were recruited to aid Western governments. Then some were sheltered by governments in South America and the Middle East. Over time, attrition eventually overtook figures such as Josef Mengele. Occasionally, an Adolf Eichmann was brought to justice – but even he had to be kidnapped by Israeli secret agents before he could be prosecuted. Now the job of legally proving actual criminal acts committed by minor functionaries fifty, sixty or seventy years after the fact becomes too difficult. So we cannot be surprised when desperate prosecutors substitute legal fancies for the ordinary rules of evidence.

Nevertheless, if the prosecution cannot prove that Lipschis committed actual crimes, then he must be acquitted. This has nothing to do with his age or the time lapse between the acts and the trial. Any other decision is a de facto application of the bogus principle of collective guilt.

Shinzo Abe and Guilt for Japanese Aggression in World War II

Japanese Prime Minister Abe is a classic politician. Like the Roman god Janus, he wears two faces, one when speaking abroad to foreign audiences and another when seeking reelection by domestic voters. His answers to questions about whether he was repudiating the stance taken by a previous Prime Minister in 1996 – that Japan was indeed guilty of aggression for which the Japanese government formally apologized – were delicately termed “equivocal” by the U.S. magazine U.S. News & World Report. That is a euphemism meaning that Abe was lying by indirection, a political tactic used by politicians the world over. He wanted his answer to be interpreted one way by Japanese voters without having to defend that interpretation to the foreign press.

Abe’s behavior was shameful. But that has absolutely nothing to do with the question of Japanese guilt for war crimes committed during and prior to World War II. That guilt was borne by specific individual Japanese and established by the Tokyo war-crimes tribunal. Indeed, one government spokesman eventually admitted this in just those words, albeit grudgingly, after Abe’s comments had attracted worldwide attention and criticism.

The implications of this are that Japanese today bear no “collective guilt” for the war crimes committed by previous Japanese. (It would be wrong to use the phrase “by their ancestors,” since presumably few Japanese today are related by blood to the war criminals of seventy or eighty years ago.) The mere coincidence of common nationality does not constitute common ancestry except in the broad cultural sense, which is meaningless when discussing moral guilt. Are we really supposed to believe, for example, that the surviving relatives of Jesse James or Billy the Kid should carry around a weighty burden of guilt for the crimes of their forebear? In a world where the lesson of the Hatfields and McCoys remains unlearned in certain precincts, this presumption seems too ridiculous for words.

Similarly, the fact that Japanese leaders in the 1920s, 30s and 40s were aggressively militaristic does not deny Japanese today the right to self-defense against a blatantly aggressive Chinese military establishment.

Much is made of Abe’s unwillingness to acknowledge the “comfort women” – women from Korea, China and other Asian nations who were held captive as prostitutes by Japanese troops. Expecting politicians to behave as historians is futile. If Japanese war criminals remain at large, apprehend and indict them. If new facts are unearthed about the comfort women or other elements of Japanese war crimes, publish them. But using these acts as a club against contemporary Japanese leaders is both wrong and counterproductive.

Besides, it’s not as if no other ammunition was available against Abe. He has followed Keynesian fiscal policies and monetary policies of quantitative easing since becoming prime minister. These may not be crimes against humanity, but they are crimes against human reason.

Macro vs. Micro

Academic economics today is divided between macroeconomics and microeconomics. The “national economy” is the supposed realm of macroeconomics, the study of economic aggregates. But as we have just shown, it is the logic of individual responsibility that actually bears on the issue of war crimes committed by the nations of Germany and Japan – because the crimes were committed by individuals, not by “nations.”

One of the most valuable lessons taught by classical economic theory is that the unit of analysis is the individual – in economics or moral philosophy.

DRI-179 for week of 5-3-15: Why Economics is Inseparable From Individual Responsibility

An Access Advertising EconBrief:

 Why Economics is Inseparable From Individual Responsibility

Many people know that the father of modern economics, Adam Smith, wrote An Inquiry Into the Nature and Causes of the Wealth of Nations in 1776. Few today realize that his most famous prior work was The Theory of Moral Sentiments in 1759. In Smith’s day, the conjunction of economics and moral philosophy was accepted, even taken for granted. Now economists are viewed as social scientists rather than philosophers, let alone moralists. Yet some of the most penetrating recent books and policy debates have revealed the economic underpinnings of genuine morality, rooted in the concept of individual responsibility.

Of all moral principles, individual responsibility may have taken the worst beating at the hands of the 20th century. The chief abuser was Sigmund Freud, founder of the modern school of psychology and the profession of psychiatry. The book Admirable Evasions: How Psychology Undermines Morality is primarily an exposé of the harm wrought by Freud and his descendants. The author, Theodore Dalrymple, is a psychiatrist who has viewed the profession from the inside as a former prison doctor and psychiatrist in private practice. (“Dalrymple” is the pen name of Englishman Anthony Daniels, but to avoid confusion we follow the author’s convention in this article.) He wonders whether “Mankind…would…be the loser or the gainer… if all the anti-depressants and anxiolytics… were thrown into the sea… all textbooks of psychology were withdrawn and pulped” and “all psychologists ceased to practice.” He is in doubt despite the “modest contributions to the alleviation of suffering” by some areas of clinical psychological practice. This implies that the harm done by psychology must be both significant and ongoing.

The maxim “It takes one to know one” was never better illustrated than by Dalrymple. His only drawback is occupational tunnel vision; he gives short shrift to economic logic as the motive force behind the failure of psychology.

Freudian Fraud

Sigmund Freud, born in 1856 in Freiberg, Moravia, and raised in Vienna, Austria, underwent conventional medical education and training in neurology. Based on his interviews of patients, he founded the study of psychoanalysis. The fundamental principle of psychoanalysis is that the analyst possesses certain a priori truths about the patient’s mental makeup that establish a hierarchical relationship between the two. The analyst should enjoy a position of dominance, which the patient will inevitably resist. Only submission will enable the analyst to unlock the complexes and neuroses that plague the patient. These afflictions are the result of sexual pressures emerging in early childhood, including the male Oedipus complex and female penis envy. Patients are powerless to perceive and grapple with these primal forces; only psychoanalysis can bring them to the surface and resolve their conflicts.

Does it occur to you to wonder how the psychoanalyst himself became immune to these primal forces, hence worthy of the dominant analyst’s role? Well, the analyst himself supposedly had his own analyst, but the infinite regression involved in this issue was one of many logical problems never resolved in Freudian theory.

The term “psychology” derives from the ancient word “psyche,” used to denote human consciousness. Freud divided the human psyche into three parts: the ego, or conscious mind that allows us to interact with reality; the id, or unconscious; and the superego, the way station between id and ego and repository of societal and parental norms that control our behavior.

The first half of the 20th century elevated Freud to the status of cultural hero and icon. In the second half, rigorous study of his career, methods and techniques left Freudian theory in tatters. Freud based his theories on a combination of empirical generalization from his case histories and speculative conjecture. Many a successful scientific theory has been built on less, but in Freud’s case the result was a mess. Freud’s case histories were published using pseudonyms, a commendable attempt to protect the personal privacy of his subjects. This delayed their investigation and study. Eventually, it became clear that they had little or no scientific validity because their results were not measurable, they could not be replicated and they did not seem to be robust. Freud’s famous concepts – id, ego, superego, Oedipus complex and penis envy – have all been dropped from the lexicon of modern psychiatry.

Indeed, psychiatric practice today owes almost nothing to Freudian method. It is divided between the biological practitioners and the behaviorists. The biologicals treat “mental illness” completely differently than Freud did. Instead of viewing it as a unique phenomenon of the psyche, they see it as simply another branch of modern medicine. Conditions like schizophrenia and manic depression (now called bipolar disorder) are recognized as physical illnesses caused by chemical imbalances within the brain; they are treated with prescription medicines. This reinforces the logic of training psychiatrists as medical doctors rather than wizards of the psyche. Behaviorists talk with patients about their problems and help them cope with those problems – in this they bear a superficial resemblance to psychoanalysts. But there is no hierarchical relationship and no a priori theory about the origin of those problems. Moreover, behaviorists must be on the lookout for psychological problems with a biological source.

Where does psychoanalysis fit into this modern paradigm? It doesn’t. Maybe we should call psychoanalysts an endangered species – but there isn’t much impetus to preserve the species. Psychology is a profit-motivated profession. If psychoanalysis were capable of curing patients by resolving their problems rather than merely relieving them of an overburdened wallet, it would be thriving today. Instead, psychoanalysis is facing extinction.

If the commission of pseudoscience were Freud’s only sin, he would have slipped quietly into obscurity by now. Alas, this is the least of Freud’s mistakes. Sigmund Freud’s legacy lives on in ways that Freud himself hardly intended and would not have approved.

The Unintended Consequences of Freudian Psychology

Among Freud’s contentions was that sexually restrictive social mores created neuroses and inhibitions that repressed natural human behavior. In his day, this earned Freud a reputation as a libertine. The label was false, for Freud was sexually quite strait-laced and conventional. As Theodore Dalrymple acutely observes, the “profoundly subversive” element of Freudian theory was “that desire, if not fulfilled, will lead to pathology… [This] makes self-indulgence man’s highest goal. It is a kind of treason to the self, and possibly to others, to deny oneself anything” [emphasis added]. Dalrymple supplies a chilling example of this philosophy in action, quoting one of his patients, a murderer: “I had to kill her, doctor, or I don’t know what I would have done.”

The idea that customs, traditions and morality evolve because they have value – survival value and competitive value in fulfilling human desires – may not have occurred to Freud. It definitely did not occur to his many successors, who were determined to engineer human evolution according to a central plan. The effects have been the reverse of those intended. Throughout the 20th century, Freudian psychology has walked side by side with Marxian philosophy and economics. Yet by encouraging people to shrug off the so-called “repression” of self that motivates respect for the rights and sensitivities of others, Freudianism has been the enabler of the self-absorption so often decried by critics of capitalist materialism.

Behaviorism

The heir to Freudian psychology is the behaviorism of B.F. Skinner and his disciples. Here, Dalrymple deplores the behaviorist tendency to categorize every complaint as a “disorder,” subject to psychiatric eradication by behavior modification. “No statement that a psychiatric disturbance has such-and-such a prevalence in such-and-such a population should be taken at face value, especially when it is a plea, as it so often is, explicit or implicit as the case may be, for more resources to treat it, the supposed prevalence having risen shockingly in the last few years.”

Dalrymple is not merely questioning the statistical validity of this technique – although that is sufficient justification for the warning, since the bogus use of statistics has been the biggest scandal of the last two decades in both the social sciences and the natural sciences. He is also extending the Heisenberg principle that by investigating a phenomenon the scientist also alters its course. “It is not merely that epidemiological researchers in this field can find what they are looking for; it is that they can provoke what they are looking for.” This principle cannot be stressed too strongly.

The social-welfare establishment has identified dozens of conditions requiring treatment. This treatment requires money and the existence of a bureaucratic establishment to provide, fund and supervise it. That establishment provides a living for many people. The “victims” of the conditions get real income in various forms: money, medical treatment and certified “victim” status as addicts or whatever the jargon term is for their condition.

And the victims also get a certified excuse for their misbehavior.

This is a form of real income whose value cannot be overestimated. Whereas in pre-psychology days, the victims were ostracized or otherwise discouraged from engaging in the behavior, now they are encouraged in it by the various subsidies provided. While proponents of the “therapeutic state” may indignantly object that nobody wants to be sick, objective research strongly confirms the role of incentives in enabling bad behavior.

This whole system has become self-promoting and self-aggrandizing. “The expansion of psychiatric diagnoses leads paradoxically and simultaneously to overtreatment and undertreatment. The genuinely disturbed get short shrift; those with chronic schizophrenia, which seems most likely to be a genuine pathological malfunction of the brain [e.g., not “mental illness” at all but physical illness of the brain], are left to molder in doorways, streets and stations of large cities, while untold millions have their fluctuating preoccupations attended to with the kind of attention that an overconcerned mother gives her spoiled child with more or less the same results.”

The genuinely ill get less treatment because, being less able to earn income, they get less attention. The pseudo-ill are more able to command attention and show better “results” with less effort; therefore, they are easier and more satisfactory to “treat.”

Psychology is able to create the demand for its services by creating pseudo-illness. It does so, argues Mona Charen in her National Review review of Dalrymple’s book, by “creating one excuse after another for bad behavior – our terrible childhoods, our genes, our neurotransmitters, our addictions. In each case, and often with extremely unscientific reasoning, we are offered absolution. None of us is really responsible for our behavior. The whole psychological enterprise, Dalrymple argues, has had the effect of excusing poor choices and bad character. ‘Virtue is not manifested in one’s behavior, always so difficult and tedious to control, but in one’s attitude to victims'” [emphasis added].

This book may have opened our eyes to the 20th century. But it was written by a psychiatrist. How does economics come into it?

The Economics of Individual (Ir-) Responsibility 

In both classical and neoclassical economics, the unit of analysis is the individual human being. (For immediate purposes, the separation between “classical” and “neoclassical” will be taken as the “Marginal Revolution” in the theory of consumer demand beginning roughly in the 1870s. This distinction is not important to what follows.) When the focus shifts to the theory of the firm, the unifying element is the assumption of profit maximization that directs the diverse strivings of the firm’s members toward a single goal.

Free markets are governed by the principle of mutually beneficial voluntary exchange. Mutual benefit provides the motivation to exchange voluntarily. There is a tacit presumption that each individual is responsible for his or her actions; that is, neither is liable for the actions of the other. This is entirely logical, since each one is the reigning expert on his or her wants, desires, shortcomings, plans and expectations. Neither can possibly know as much about the other as he or she knows about himself or herself. Thus, the concept of individual responsibility is an automatic byproduct of the philosophy of free markets.

No wonder, then, that Adam Smith trafficked in moral philosophy. The surprising thing is that somewhere along the way this got lost in the transition of economists to men in white coats peddling business forecasts of future growth rates of GDP and interest rates.

Contrast the relationship between human beings engaging in free trade and that between analyst and patient in today’s “therapeutic state.” The patient has a problem. No surprise there, since all of us do virtually all the time. The patient has an incentive to view this problem as beyond his control – if not a physical illness, then a neurosis, a complex, an addiction, a “sickness” of a metaphoric kind. The incentive is multi-pronged.

First, his lack of control relieves him of responsibility. He has no moral responsibility for having created, nurtured or tolerated it. Since he has no responsibility for it, he need feel no guilt over it.

Second, he now has a moral claim on the resources of others that did not previously exist. This claim is a form of real income that may become tangible if he can extract voluntary charity from them or involuntary payment in the form of government subsidies.

Third, his status as a moral claimant who suffers from a problem not of his own making makes him a victim. Victim status makes him a member of a recognized interest group. In addition to the possibility of extracting tangible real income via charity or government subsidies, he can also receive the psychic benefit that goes with public recognition as a member of a victim class.

Now shift attention to the analyst, whose incentives run parallel with those of the patient. He has an incentive to identify the patient’s problem as either a physical sickness or a psychic “mental illness.” Either way, this identification immediately relieves him of any guilt that might otherwise attach to treating the patient. Now he is merely a doctor treating a sick patient. He need feel no guilt over that.

And once his doctor status is secure, the analyst has no qualms about filing an intellectual lien on the assets of the public, either by appealing to their charitable sympathies or to their legal responsibilities as citizens and taxpayers.

Victims require saving. Saving requires saviors. Saviors are heroic figures. Thus, analysts earn psychic benefits from assuming heroic public status, just as patients gain psychic benefits from assuming victim status.

When two groups of people have so much to gain from pursuing a congruent sequence of activities, what does economic logic say will happen? The “equimarginal principle” – the fundamental principle of economic optimization underlying the theories of consumer demand, the firm and input supply – says that as long as the marginal benefit of an activity exceeds its marginal cost, economic actors will increase their pursuit of the activity. Indeed, if two non-competing groups find that their ends coincide, the groups may even collude, either openly or tacitly, to further those ends.
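The stopping rule embedded in the equimarginal principle can be made concrete with a small sketch. The benefit and cost schedules below are hypothetical numbers invented purely for illustration; the point is only the rule itself: expand an activity unit by unit as long as marginal benefit covers marginal cost.

```python
# Illustrative sketch of the equimarginal principle: keep expanding an
# activity while its marginal benefit (MB) meets or exceeds its marginal
# cost (MC). The schedules below are hypothetical.

def optimal_level(marginal_benefit, marginal_cost, max_units=100):
    """Return the last unit for which MB >= MC."""
    level = 0
    for unit in range(1, max_units + 1):
        if marginal_benefit(unit) >= marginal_cost(unit):
            level = unit
        else:
            break  # every further unit costs more than it yields
    return level

# Diminishing marginal benefit, rising marginal cost (hypothetical numbers):
mb = lambda q: 100 - 8 * q
mc = lambda q: 10 + 2 * q

print(optimal_level(mb, mc))  # → 9: the activity stops where MB falls below MC
```

The same mechanics apply to the patient and the analyst: as long as another unit of diagnosis or treatment yields more (income, absolution, status) than it costs, economic logic predicts it will be supplied.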

And that is just what has happened in mental health during the 20th century. Psychologists and patients have tacitly colluded to enlarge the “mental-health” establishment. That is what Theodore Dalrymple has had the temerity to point out in his politically incorrect book. Its political incorrectness is its outstanding virtue; its sole vice is its economic incorrectness. Where Dalrymple has made a literary-career specialty of telling unpopular and unpleasant truths about havoc wreaked by the pseudo-science of modern psychology, he has been unaccountably reticent in failing to disclose the economic logic underlying his position.

Why is it Important to Acknowledge the Role of Individual Responsibility in Economics?

In the most important excerpt quoted above, Dalrymple acknowledges that “the genuinely disturbed get short shrift.” These are people who suffer from psychoses formerly diagnosed as “mental illness” and treated with (utterly useless) psychotherapy. Thanks to the onetime heretics who refused to knuckle under to Freudian dogma, we now know that schizophrenia and manic depression (currently called bipolar disorder) are neurochemical disorders of the brain. As is true with the most intractable physical disorders, we can offer only limited medical therapy for these conditions. But even this help is often denied to those who need it most.

Dalrymple rightly sees the outlines of the problem because he has spent a lifetime within the system as prison doctor and psychiatrist in private practice. As a resident of the U.K., he lived under Great Britain’s infamous National Health Service (NHS). He knows the workings of government the way a gulag prisoner knows the workings of a concentration camp. But it would be expecting too much to hope that a man who spent his life acquiring expertise in medicine and psychiatry and emerged alive from the toils of NHS should also be conversant with economic theory.

The reason for the denial of therapy to the “genuinely disturbed” is straightforward. The victims are unable to act as their own advocates. The treatment of so-called mental illness is plagued by a version of Gresham’s Law (“bad money drives out good money”), in which bad therapy drives out good therapy. The pseudo-victims are the squeaky wheels, greased by their own financial and political resources and the very fact that their lack of true illness yields better “results” from treatment. Because the treatment of mental illness is a jealously guarded prerogative of government and government budget-allocation is a jealously guarded prerogative of politicians, funds allocated to the treatment of the truly psychotic are a small slice of an already-small pie.

Individual responsibility is vital to the operation of civil society. It goes hand-in-hand with human freedom and free markets. But it breaks down in the rare – but real – cases where individuals are incapable of acting in their own behalf.

As things stand, government is the agency designated to act for those who cannot act for themselves. For example, children cannot enter into contracts for employment without the consent of their parents or guardian. Just to make sure that this position is not abused, children’s earnings are subject to protection by trusts. Child-welfare agencies also exist (ostensibly) to prevent other types of abuse.

But when it comes to mental health, government is a walking, talking, breathing conflict of interest. Essentially, it is in the same conflicted position as the analyst because government is not a neutral party. It does not act for “the common good” because there is no “common good” – there are only diverse goods. This diversity can be reconciled only by a mechanism that allows relative value to be placed on each good so that the tradeoffs required by the reconciliation can be made efficiently and consistently. When government becomes the arbiter in a situation where its decision can produce more government, it always decides in favor of government intervention. (The only exception is when it is called upon to perform a true function of government, which would require a sacrifice of some other non-essential government activity – in which case it always chooses the non-essential over the essential.) Relying on government, with its built-in conflict of interest, is what got us into the fix we’re in.

When people cannot act in their own behalf, somebody must act for them. Their closest relatives or spouse are the first place to turn. When they cannot or will not act and government is disqualified, the only alternative is private charity.

Why has the word “charity” acquired a pejorative tinge? After all, research shows that Americans are very much inclined to support charitable causes. The problem is that too many Americans are still bewitched by the wish-fulfillment fantasy of government as problem-solver of first resort. Were government confined to its true functions, we would have the additional real income and discretion with which to solve the problems that government is now purporting – but failing – to solve.

As Dalrymple notes, the paradigm for any problem relating to health is to identify a “new” disorder, spread the alarm about its “epidemic” status and demand (what else?) government action at once, if not sooner. The good news about Dalrymple’s book is that the “problem” is vastly smaller than advertised. The bad news is that a real problem exists that is not being addressed and is immune to government action. In fact, the best thing would be to keep government away altogether. The worst news of all is that the attempt to solve the non-existent problem has created a worse one – the erosion of the irreplaceable concept of individual responsibility.

The key to sorting all this out is the economic logic underlying it all.

DRI-202 for week of 4-26-15: The Comcast/Time-Warner Cable Merger Bites the Dust

An Access Advertising EconBrief:

The Comcast/Time-Warner Cable Merger Bites the Dust

This week brings the news that the year’s biggest and most highly publicized merger, between cable television titans Comcast and Time-Warner Cable, has been called off. Although the decision was technically made by Comcast, which announced it on Monday, it really came from the Federal Communications Commission (FCC), whose de facto opposition to the merger became public last week. This continues a virtually unbroken string of economically inane measures taken by the Obama administration and its regulatory minions.

Theoretically, merger policy falls within the province of industrial organization, the economic specialty spawned by the theory of the firm. Actually, the operative logic had nothing whatever to do with economics. Instead, the decision was dictated by the peculiar incentives governing the behavior of government.

The high visibility of the intended merger and the huge volume of comment it spawned make it worthwhile to examine carefully. What made it so attractive to the principals? Why was it denounced so bitterly in certain quarters? Was the FCC right to oppose it?

Who Were the Principals in the Merger?

Comcast and Time-Warner Cable (hereinafter, TWC) are today the two leading firms in the so-called “pay-TV” industry. The quotation marks reflect the fact that the term has undergone several changes over the course of television history. Today it refers to two different groups of television consumers. First are subscribers to cable television, the biggest revenue source for both Comcast and TWC. Born in the 1950s and nurtured in the 1960s, cable TV fought tooth and nail to gain a toehold against “free” broadcast television. It succeeded by offering better reception from buried coaxial-cable transmission lines, more viewing choices than the “Big 3” national broadcast network channels offered on free TV and a blessed absence of commercial interruption. Its success came despite the efforts of government regulators, who forbade local cable companies from serving major metropolitan areas until the 1980s.

In the early days, municipalities were so desperate to get cable-TV that local government would offer a grant of monopoly to the first cable franchise to lay cable and promise to serve the citizenry. In return, the cable firm would have to pay various legal and illegal bribes. The legal ones came in the form of community-access and public-service channels that few watched but which gave lip service to the notion that the cable firm was serving the “public interest” and not merely maximizing profit. Predictably, these monopoly concessions eventually came back to haunt municipal government when cable firms inexorably began to raise their rates without providing commensurate increases in programming value and customer service to their customers.

Today, the contractual arrangements with cable firms survive. But the grants of monopoly are no more. In many markets, other cable firms have entered to compete with the original firms. Even more important, though, are the other sources of competitive television service. First, there is satellite TV service provided by companies like DirecTV and Dish. A satellite dish – usually located on the customer’s roof – gathers the signal transmitted by the company and provides hundreds of channels to customers. Wireless firms like AT&T and Verizon can transmit television signals to provide television service as well. And finally, it has become possible to “stream” television signals digitally by means similar to those used to stream audio signals for songs. Consequently, a movie-streaming service like Netflix has become a potent competitor to cable television as well.

What Did Comcast and TWC Have to Gain from the Merger? 

The late, great Nobel laureate Ronald Coase taught us that business firms exist to do things that individuals can’t do for themselves – or, more precisely, things that individuals find too costly to do themselves and more efficient to “import” from outsiders. Take this same logic and extend it to business firms. Firms produce some things internally and purchase other things outside the firm. Logically, the inputs they produce internally are the ones they can produce at a cost lower than the external cost of purchase, while external purchases are made when the internal cost of production is too high.

Now extend this logic even further – to the question of merger, in which one firm purchases another. Both firms have to agree to the terms, including a price, which means that both firms consider the merged operation superior to separation. The term used to denote the advantages that arise from combination is synergy – a hybrid of “synthesis” and “energy” suggesting that melding two elements produces a greater output of energy than do the individuals in isolation.

Why should putting two firms together improve on their separate efficiency? The first place to look for an answer is cost, the reason why businesses exist in the first place and the reason why they purchase inputs in the second place. The primary synergy in most mergers is elimination of duplicative functions. Because mergers themselves take time, effort and other resources to effect, there must be substantial duplication that can be eliminated in order to justify a merger on this ground alone. That is why mergers so often occur (or threaten) among similar, competing firms with similar internal structures.

This applies to Comcast and TWC. Large parts of both firms are devoted to the same function; namely, providing cable television to subscribers. A merger would still leave them with the same total territory to service. But one central office, much smaller than the combined size of both before the merger, could now handle administration for the entire territory. The largest efficiencies would undoubtedly have been available in advertising. Economies of scale would have been gained from having one advertising department handle all advertising for the merged firm. Economies of size would have been available because the much larger total of advertising would have commanded volume discounts from sellers.

Given the gigantic size of the firms – their combined revenue would have yielded well over $80 billion – these economies alone might well have justified the merger. And that leaves out the most important reason for the merger. In times of market turmoil, mergers are often referred to as “consolidation.” This is a polite way of saying that the firms involved are girding their loins for future battle. They are fighting for their business life.

This is completely at odds with the picture painted by self-styled “consumer advocates” and government regulators. The former whine about the poor quality of service provided by Comcast to its cable subscribers, calling the company a “lazy monopolist.” By definition, a lazy monopolist doesn’t have to worry about its future – it is living off the fat of the land or, as an economist puts it, taking some of its profits in the form of leisure. (Of course, the critics can’t have it both ways – if the firm is “lazy” then it must be extracting less profit from consumers than it could if it were “aggressive.” But the act of moral posturing uses up so much mental energy that there is little left for critics to use in applying logic.) Government regulators say that Comcast and Time-Warner have so much power that, when combined, they could exclude their potential competitors from the market for “high-speed broadband.”

But the picture painted by market analysts is completely different. Comcast and TWC are leading players in a market that is beginning to wither on the vine. They are not merely providing “pay TV;” they are providing it via coaxial cable buried in the ground and via subscription. This method of providing television service will sooner or later become an endangered species – and the evidence is leaning toward “sooner.” People are beginning to “cut the cord” binding them to cable television. They are doing it in at least three ways. For years, satellite services have made modest inroads into cable markets. Now wireless companies are increasing these inroads. Finally, streaming services are promoting the ultimate heresy – people are renouncing their television sets entirely by streaming TV programming on their computers. Consumers abandoned pay-TV in both 2013 and 2014; in the last year, cord-cutting in favor of streaming TV has begun to occur in the millions.

Not surprisingly, the prime mover behind all of these threats to cable TV is cost. In the early days of cable, hundreds of channels were a dazzling novelty after the starvation diet of three major networks (with perhaps one UHF channel as an added spice). People occasionally surfed the channels just to find out what they might be missing or for something of genuine interest. Over time, though, they bore an increasing cost of holding an inventory of dozens of channels handy on the mere off-chance that something interesting might turn up. That experience gradually made the tradeoff seem less and less favorable, making the lure of a TV lineup tailored to their specific preferences and budget more attractive. From here on, the prices of cable TV’s competitors have nowhere to go but down.

These competitors are not only competing on the basis of price but also on the basis of product quality. Increasingly, they are now creating their own programming content. This trend began years ago with Home Box Office (HBO), which started life as a movie channel but entered the top tier of television competition when it began producing its own movies and specials. Now Netflix has followed suit and everybody else sees the handwriting on the wall.

The biggest attraction of the merger for Comcast and Time-Warner was the combined resources of the two firms, which would have given the resulting merged firm the kind of war chest it needed to fight a multi-front competitive war with all these competitors. Each of the two firms brought its own special advantages to the fight, complementing the weaknesses of the other. Comcast owns NBC, currently the most successful broadcast-TV channel and a locus of programming expertise. Another of its assets is Universal Studios, a leading film producer since the dawn of Hollywood and a television pioneer since the 1950s. TWC brings the additional heft and nationwide presence necessary to lift Comcast from regional cable-TV leader to international media player.

What is an “Industry?”

Everybody has heard the word “industry” used throughout their lives. Everybody thinks they know what it means. The federal government lists and classifies industries according to the Standard Industrial Classification (SIC) code. The SIC code defines an industry by its technical characteristics, and the definition becomes narrower as the work performed by the firms becomes more specialized. From the point of view of economics, though, there is a problem with this strictly technical approach to definition.

It has no necessary connection to economics at all.

The only economic definition of an industry relates to the economic substitutability of the products produced by its members. If the products are viewed by consumers as economically homogeneous – i.e., interchangeable – then the aggregate of firms constitutes an industry. This holds true regardless of the technical features of those products. They may be physically identical; indeed, that might seem highly likely. But identical or not, their physical similarity has nothing to do with the question of industrial status.

If the goods are close substitutes, we may regard the firms as comprising an industry. How close is “close?” Well, in practice, economists usually use price as their yardstick. If significant variations in the price of any firm’s output will induce consumers to shift their custom to a different seller, then that is sufficient to stamp the output of different sellers as close substitutes. (We hold product quality constant in making this evaluation.)
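The price yardstick described above is what economists formalize as cross-price elasticity of demand: the percentage change in the quantity demanded of one seller’s product per percentage change in another seller’s price. A strongly positive value marks the two products as close substitutes. The sketch below uses hypothetical quantities and prices purely for illustration:

```python
# Cross-price elasticity: %-change in demand for good B divided by
# %-change in the price of good A. Large positive values indicate
# close substitutes. All figures below are hypothetical.

def cross_price_elasticity(qb_before, qb_after, pa_before, pa_after):
    pct_change_qb = (qb_after - qb_before) / qb_before
    pct_change_pa = (pa_after - pa_before) / pa_before
    return pct_change_qb / pct_change_pa

# Seller A raises price 10% (from $20.00 to $22.00); demand for
# seller B's product jumps 25% (from 1000 to 1250 units):
e = cross_price_elasticity(1000, 1250, 20.00, 22.00)
print(round(e, 1))  # → 2.5: strongly positive, hence close substitutes
```

By this yardstick, cable TV, satellite, wireless and streaming services belong to one industry if a significant cable price increase sends subscribers to the alternatives – exactly the “cord-cutting” pattern described below.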

This distinction – between the definition of an industry in strictly technical terms and in economic terms – is the key to understanding modern-day telecommunications, the digital universe and the Comcast/TWC merger.

Without saying it in so many words, the FCC proposes to define markets and industries in non-economic terms that suit its own bureaucratic self-interest. It does this despite the fact that only economic logic can be used when evaluating the welfare of consumers and interpreting the meaning of antitrust law.

The FCC’s Rationale for Ordering a Hearing on the Comcast/TWC Merger

Comcast decided to pull the plug on its proposed merger with TWC because the FCC’s announced decision to hold a regulatory hearing on the merger was a signal of the agency’s intention to oppose it. (The power of the federal government to legally coerce citizens is so great that innocent defendants commonly plead guilty to criminal charges in order to minimize penalties, so it is not strange that Comcast should surrender preemptively.) It is natural to wonder what was behind that opposition. There are two answers to that question. The first answer is the one that the agency itself would have provided in the hearing and that has already been provided in statements made by FCC Chairman Thomas Wheeler. That answer should be considered the regulatory pretext for opposition to the merger.

For years, another regulatory agency – the Federal Trade Commission (FTC) – passed both formal and informal judgment on antitrust law in general and business combinations in particular. The FTC even provided a set of guidelines for what mergers would be viewed favorably and unfavorably. The guidelines looked primarily at what industrial-organization economists called industry structure. That term refers to the makeup of firms existing within the industry. Traditionally, this field of economics studies not only industry structure – the number of firms and the division of industry output among them – but also the conduct of existing firms – competition might be fierce, lackadaisical or even give way to collusive attempts to set price – and their actual performance – prices, output and product quality might be consistent either with competitive results or with monopolistic ones. But the FTC concerned itself with structural attributes of the market when reviewing proposed mergers, to the exclusion of other factors. It calculated what were known as concentration ratios – fractions of industry output produced by the leading handful of firms currently operating. If the ratio was too high, or if the proposed merger would make it too high, then the merger would be disallowed. When feeling particularly esoteric, the agency might even deploy a hyper-scientific tool like the “Herfindahl-Hirschman Index” of industry concentration as evidence that a merger would “harm competition.”
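The structural measures named above are simple arithmetic. A four-firm concentration ratio (CR4) sums the shares of the four largest firms; the Herfindahl-Hirschman Index (HHI) sums the squares of every firm’s share, reaching 10,000 for a pure monopoly. The market shares below are hypothetical, chosen only to show the calculations:

```python
# Structural measures used in traditional merger review.
# Shares are hypothetical, expressed in percent of industry output.

def cr4(shares):
    """Four-firm concentration ratio: sum of the four largest shares."""
    return sum(sorted(shares, reverse=True)[:4])

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared shares (max 10,000)."""
    return sum(s ** 2 for s in shares)

shares = [30, 25, 20, 15, 10]  # five hypothetical firms
print(cr4(shares))  # → 90
print(hhi(shares))  # → 2250
```

Under the 2010 DOJ/FTC Horizontal Merger Guidelines, an HHI above 2,500 is treated as “highly concentrated” – which is precisely the kind of threshold test that, as the text argues, measures structure while saying nothing about conduct or performance.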

In our case, the FCC needed a rationale to stick its nose into the case. That was provided by President Obama’s insistence on the policy of “net neutrality” as he defined it. This policy contended that the leading cable-TV providers were “gatekeepers” of the Internet by virtue of their local monopoly on cable service. In order to give this policy a semblance of concreteness – and also to make the FCC look as busy as possible – the agency established a policy that the top pay-TV firm could control no more than 30% of the “total” market. This criterion is at least loosely reminiscent of the old FTC merger guidelines – except for the fact that the FTC merger guidelines had at least a tenuous relationship with economic theory and logic. Here, the FCC’s policy had as much to do with astrology as it did with economics; i.e., roughly zero in both cases. But, mindful of the FCC’s rule and in order to keep its merger hopes alive, Comcast sold enough of its cable-TV properties to Charter Communications to reduce the two companies’ combined pay-TV holdings to the 30% threshold.

In order to create the appearance of being progressive in the technical as well as the political sense, the FCC set itself up as the guardian of “high-speed broadband service.” For years leading up to the merger announcement, the FCC’s definition of “high-speed” was a speed greater than or equal to 4 Mbps. But after the merger announcement, the FCC abruptly changed its definition of the “high-speed market” to 25 Mbps or greater. Why this sudden change? Comcast’s sale of cable-TV assets had circumvented the FCC’s 30% market threshold, so the agency now had an incentive to invent a new hurdle to block the merger. The faster broadband-speed classification had the effect of including fewer firms, thereby making its (artificially defined) market smaller than before. In turn, this made the shares of existing firms higher. Under this revised definition – surprise, surprise! – the Comcast/TWC merger would have given the resulting firm 57% of this newly defined “market” rather than the 37% it would previously have had.
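The jump from 37% to 57% is a pure denominator effect: excluding slower rivals shrinks the measured market, so the same subscriber base becomes a larger share. The subscriber counts below are hypothetical units chosen only to reproduce the two percentages in the text:

```python
# Market-share arithmetic behind a market redefinition.
# Subscriber figures are hypothetical illustration units.

def share(firm_subs, market_subs):
    """Firm's percentage share of the defined market."""
    return 100 * firm_subs / market_subs

merged_firm = 37.0     # merged firm's subscribers (unchanged throughout)
broad_market = 100.0   # market counting all broadband providers
narrow_market = 65.0   # rivals below the new speed cutoff excluded

print(round(share(merged_firm, broad_market)))   # → 37
print(round(share(merged_firm, narrow_market)))  # → 57
```

Nothing about the firm’s actual position changes between the two lines; only the regulator’s definition of the denominator does.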

Still, most industry observers figured that Comcast’s divestiture sale to Charter Communications, combined with what Holman Jenkins of The Wall Street Journal called “Comcast’s vast lobbying spending and carefully cultivated donor ties with the Obama administration”, would see the merger over the regulatory hurdles. Clearly, they reckoned without the determination of FCC Chairman Wheeler.

What Was the Actual Motivation of the FCC in Frustrating the Comcast/TWC Merger?

Regulators regulate. That is the explanation for the FCC’s de facto denial of the Comcast/TWC merger. It is the bureaucratic version of Descartes’s “I think, therefore I am.” After over a century of encroaching totalitarianism, it is only gradually dawning on America that big government is dedicated solely to the proposition that government of, by and for itself shall not perish from the Earth.

A recent Bloomberg Business editorial is an implicit rationale for the FCC’s action. The editor marvels at how only recently it seemed that the forces of cable-TV darkness had the upper hand and were poised with their jackboots on the throats of consumers the world over. But then, with startling suddenness, cable’s position now seems wholly tenuous as it is beset on all sides with uncertainty. And who should we thank for this sudden reversal? Why, the FCC, of course, whose wise regulation has turned the tide. Instead of crediting competitive forces with making the FCC’s action unnecessary if not a complete non sequitur, the editorial gives the credit to the FCC for creating circumstances that preexisted and in which the agency had no hand.

One of Milton Friedman’s famous characterizations of bureaucracy compared it to the flight leader of a covey of ducks who, upon discovering that the remainder of his V-formation have deserted him and are flying off in a different direction, scrambles to get back in front of the V again. By denying the merger, the FCC has re-positioned itself to claim credit for anything and everything that competition has accomplished so far and will accomplish in the future. If it had done nothing, regulation would have had to cede credit to market forces. By doing something – even something as crazy, useless and downright counterproductive as frustrating a potentially beneficial merger – the FCC has not only set itself up for future benefits, it has also fulfilled the first goal of every government bureaucracy.

It has justified its existence.

All this would have been true even if the FCC’s pre-existing commitment to net neutrality had not forced it to twitch reflexively every time the words “high-speed broadband” arise in a policy context. As it is, the agency was compelled to invent a “policy” for regulating a market that will soon be the most hotly competitive arena in the world – unless the federal government succeeds in wrestling competition to a standstill here as it did in telecommunications in the 1990s.

Why are Economic Theory and Logic Absent from the FCC’s Actions in the Comcast/TWC Merger?

Begin with a few matter-of-fact sentences from Forbes magazine’s summary of the merger. “Comcast and TWC do not directly compete with each other… and there is no physical overlap in the areas in which these companies offer services.” Competitors such as DirecTV, Dish, AT&T, Verizon and Netflix have “reduced the importance of the cable-TV market and given its customers other alternatives… Hence this merger would not significantly impact the choices available to the consumers in the service areas of these two companies.”

Forbes’ point was that old-time opposition to mergers by agencies like the FTC was based on the simplistic premise that when competitors merge, there is one fewer competitor in the market – which is then one step closer to monopoly. When there were few competitors to begin with, this line of thinking had a certain naïve appeal, even though it was wrong. But when the merging companies weren’t competitors in the first place, even this rather flimsy rationale evaporates. And this holds just as true in the so-called “market for high-speed broadband” as it does for the market for pay-TV. Why? Because President Obama and FCC Chairman Wheeler have anointed the cable companies as the gatekeepers of that “market,” and the only markets they can be the gatekeepers of are those same local markets in which Comcast and Time-Warner weren’t competitors before the merger announcement. Therefore the merger couldn’t have affected developments there, either.

The end-in-view of all economic activity is consumption. Consumers – the people who watch TV in whatever form – would not have been harmed or adversely affected by the merger. The consumer advocates who cite the bad service given by Comcast to its customers seem to have taken the view that the remedy for this offense is to make sure that nothing good happens to Comcast from now on. They apparently expect that the merger would have reduced the total volume of employment by the two firms – which it undoubtedly would – and that this would on its face have made customer service even worse – which it most certainly would not have done. Government never ceases to object to budget cuts and predict even worse customer service when they are implemented, but bigger government never produced better customer service. Only competition does that – and the merger was a desperate attempt to prepare for and cope with competition.

The FCC’s imaginary market for high-speed broadband and its 30% threshold were as irrelevant to market competition as the price of tea in Ceylon. The entire digital universe is inventing its way around the anachronistic gatekeeper function performed by local cable firms. (The Wall Street Journal‘s editors couldn’t help reacting in amazement to the FCC’s announcement: “Is anybody at the FCC under 40?” Today it is only the senior-citizen crowd that is still tethered to desktop computers for Web access.)

Why Should the Man in the Street Be Expected to Embrace a Merger Between Large Corporations?

It has been estimated that the sum of mankind’s knowledge has increased more since 2003 than it did since the dawn of human history up to that point. Given the breakneck advance of learning, we cannot expect to comprehend the meaning and benefit of all that goes on around us. Instead, we must choose between the presumptive value of freedom and the restraining hand of government. We owe most of what we value to freedom and private initiative. It is genuinely difficult to identify much – if anything – that government does adequately, let alone brilliantly.

This straightforward comparison, rather than complex mathematics, econometrics or “he said, she said” debates between vested interests, should sway us to side with freedom and free markets. The average person shouldn’t “embrace” a corporate merger because he or she shouldn’t evaluate the issue on the basis of emotion. The merger should have been “tolerated” as an exercise of free choice by responsible adults – period.