DRI-211 for week of 3-29-15: Which First – Self-Driving Cars or Self-Flying Planes?

An Access Advertising EconBrief: 

Which First – Self-Driving Cars or Self-Flying Planes?

As details of the grisly demise of Lufthansa’s Germanwings flight 9525 gradually emerged, the truth became inescapable. The airliner had made a rapid but controlled descent, not the dead drop or death spiral of a disabled plane. No distress calls were sent. It became clear that the airplane had been deliberately steered into a mountainside. The recovery of the plane’s cockpit voice recorder – the relevant “black box” – provided the anticlimactic evidence of a mass murder wrapped around an apparent suicide: the sound of a seat sliding back as the captain left the cockpit, followed by the sound of the cockpit door closing, followed by the steady breathing of the copilot as the captain tried in vain to return. The sounds of the captain’s knocks and increasingly frantic demands to be readmitted to the cockpit were finally accompanied by the last-minute screams and shrieks of the passengers as they saw the French Alps looming up before them.

The steady breathing inside the cockpit showed that the copilot remained awake until the crash.

As we would expect, the reaction of Germanwings’ parent airline, Lufthansa, and of government officials has been one of shock and disbelief. Brice Robin, the Marseille public prosecutor, was asked whether the copilot, Andreas Lubitz, had – to all intents and purposes – committed suicide. “I haven’t used the word suicide,” Robin demurred, while acknowledging the validity of the question. Carsten Spohr, Lufthansa’s CEO and himself a former pilot, begged to differ: “If a person takes 149 other people to their deaths with him, there is another word than suicide.” The obvious implication was that the other people were innocent bystanders, making this an act of mass murder that dwarfed the significance of the suicide.

This particular mass murder caught the news media off guard. We are inured to the customary form of mass murder, committed by a lone killer with a handgun or rifle who uses murder and the occasion of his own death to attain the sense of personal empowerment he never realized in life. The news media react in stylized fashion, with pious moralizing and calls for more and stronger laws against whatever weapon the killer happened to be using.

In the case of the airline industry, the last spasm of government regulation is still fresh in all our minds. It came in response to the mass murder of nearly 3,000 people on September 11, 2001, when terrorists hijacked commercial airliners and crashed them into the World Trade Center and the Pentagon. Regulation has since marred airline travel with the pain of searches, scans, delays and tedium. Beyond that, the cockpits of airliners have been hardened to make them impenetrable from the outside – in order to provide absolute security against another deliberately engineered crash by madmen.

Oops. What about the madmen within?

But, after a few days of stunned disbelief, the chorus found its voice again. That voice sounded like Strother Martin’s in the movie Cool Hand Luke. What we have here is a failure to regulate. We’ll simply have to find a way to regulate the mental health of pilots. Obviously, the private sector is failing in its clear duty to protect the public, so government will have to step in.

Now if it were really possible for government to regulate mental health, wouldn’t the first priority be to regulate the mental health of politicians? Followed closely by bureaucrats? The likely annual deaths attributable to government run to six figures, far beyond any mayhem suicidal airline pilots might cause. Asking government to regulate the mental health of others is a little like giving the job to the inmates of a psychiatric hospital – perhaps on the theory that only somebody with mental illness can recognize and treat it in others.

Is this all we can muster in the face of this bizarre tragedy? No, tragedy sometimes gives us license to say things that wouldn’t resonate at other times. Now is the time to reorganize our system of air-traffic control, making it not only safer but better, faster and cheaper as well.

The Risk of Airline Travel Today: The State of the Art

Wall Street Journal columnist Holman Jenkins goes straight to the heart of the matter in his recent column (03/28-29/2015, “Germanwings 9525 and the Future of Flight Safety”). The apparent mass-murder-by-pilot “highlights one way the technology has failed to advance as it should have.” Even though the commercial airline cockpit is “the most automated workplace in the world,” the sad fact is that “we are further along in planning for the autonomous car than for the autonomous airliner.”

How has the self-flying plane become not merely a theoretical possibility but a practical imperative? What stands in the way of its realization?

The answer to the first question lies in comparing the antiquated status quo in air-traffic control with the potential inherent in a system updated to current technological standards. The answer to the second lies in recognizing the incentives created by political economy.

Today’s “Horse and Buggy” System of Air-Traffic Control

For almost a century, air-traffic control throughout the world has operated under a “corridor system.” It has been aptly compared to the system of roads and lanes that governs vehicle transport on land, the obvious difference being that it adds a vertical dimension not present in the latter. Pilots file flight plans that notify air-traffic controllers of their origin and ultimate destination. Planes are required to travel within specified flight corridors that are analogous to the lanes of a roadway. Controllers enforce separation distances between planes, analogous to the “car-lengths” of following distance between cars on roadways, and they regulate the order and sequence of takeoffs and landings at airports to prevent collisions.

Unfortunately, the corridor system is pockmarked with gross inefficiencies. Rather than being organized purely by function, it is governed primarily by political jurisdiction. This is jarringly evident in Europe, home to many countries in close physical proximity. An airline flight from one end of Europe to the other may pass through dozens of different political jurisdictions, undergoing at each boundary a “handoff” of air-traffic-control radio contact from one ground facility to the next.

In the U.S., centralized administration by the Federal Aviation Administration (FAA) surmounts some of this difficulty, but the antiquated reliance on radar for geographic positioning still demands that commercial aircraft report their positions periodically for handoff to a new air-traffic control boss. And the air corridors in the U.S. are little changed from the dawn of air-mail delivery in the 1920s and 30s, when hillside beacons provided vital navigational aids to pilots. Instead of regular, geometric air corridors, we have irregular, zigzag patterns that cause built-in delays in travel and waste of fuel. Meanwhile, the slightest glitch in weather or airport procedure can stack up planes on the ground or in the air and lead to rolling delays and mounting frustration among passengers.

Why Didn’t Airline Deregulation Solve or Ameliorate These Problems? 

Throughout the 20th century, the demand for airline travel grew like Topsy, but the system of air-traffic control remained antiquated. The only way that system could adjust to increased demand was by building more airports and hiring more air-traffic controllers. Building airports was complicated because major airports were constructed with public funds, not private investment; rights-of-way, land-acquisition costs and the advantages of sovereign immunity all militated against privatization. And once air-traffic controllers became unionized, the union predictably strove to restrict its membership in order to raise wages. This, too, made it difficult to cope with increases in passenger demand.

The deregulation of commercial airline entry and pricing that began in 1978 was an enormous boon to consumers. It ushered in a boom in airline travel. Paradoxically, this worsened the quality of the product consumers were offered because the federal government retained control over airline safety. This guaranteed that airport capacity and air-safety technology would not increase pari passu with consumer demand for airline travel. As Holman Jenkins puts it, the U.S. air-traffic-control system is “a government-run monopoly, astonishingly slow to upgrade its technology.” He cites the view of the leading expert on government regulation of transportation, Robert Poole of the Reason Foundation, that the system operates “as if Congress is its main customer.”

Private, profit-maximizing airlines have every incentive to ensure the safe operation of their planes and the timely provision of service. Product quality is just as important to consumers as the price paid for service; indeed, it may well be more important. History shows that airline crashes have highly adverse effects on the business of the companies affected. At the margin, an airline that offers a lower price for a given flight, provides safer transportation to its customers or gives its customers less aggravation during their trip stands to make more money through its actions.

In contrast, government regulators have no occupational incentive to improve airline safety. To be sure, they have an incentive to regulate – hire staff, pass rules, impose directives and generally look as busy as possible in their everyday operations. When a crash occurs, they have a strong incentive to assume a grave demeanor, rush investigators to the scene, issue daily updates on results of investigations and eventually issue reports. These activities are the kinds of things that increase regulatory staffs and budgets, which in turn increase salaries of bureaucrats. They serve the public-relations interests of Congress, which controls regulatory budgets. But government regulators have no marginal incentive whatsoever to reduce the incidence of crashes or flight delays or passenger inconvenience – their bureaucratic compensation is not increased by improved productivity in these areas despite the fact that THIS IS REALLY WHAT WE WANT GOVERNMENT TO DO.

Thus, government regulators really have no incentive to modernize the air-traffic control system. And guess what? They haven’t done it; nor have they modernized the operation of airports. Indeed, the current system meets the needs of government well. It guarantees that accidents will continue to happen – this will continue to require investigation by government, thus providing a rationale for the FAA’s accident-investigation apparatus. Consumers will continue to complain about delays and airline misbehavior – this will require a government bureau to handle complaints and pretend to rectify mistakes made by airlines. And results of accident investigations will continue to show that something went wrong – after all, that is the definition of an accident, isn’t it? Well, the FAA’s job is to pretend to put that something right, whatever it might be.

The FAA and the National Transportation Safety Board (NTSB) are delighted with the status quo – it justifies their current existence. The last thing they want is a transition to a new, more efficient system that would eliminate accidents, errors and mistakes. That would weaken the rationale for big government. It would threaten the rationale for their jobs and their salaries.

Is there such a system on the horizon? Yes, there is.

Free Flight and the Future of Fully Automatic Airline Travel

A 09/06/2014 article in The Economist (“Free Flight”) is subtitled “As more aircraft take to the sky, new technology will allow pilots to pick their own routes but still avoid each other.” The article describes the activities of a Spanish technology company, Indra, involved in training a new breed of air-traffic controllers. These controllers do not shepherd planes to their destinations like leashed animals. Instead, they merely supervise autonomous pilots to make sure that their decisions harmonize with each other. The controllers are analogous to the auctioneer in the general-equilibrium models of pricing developed by the 19th-century economist Léon Walras.

The basic concept of free flight is that the pilot submits a flight plan allowing him or her to fly directly from origin to destination, without having to queue up in a travel corridor behind other planes and follow the comparatively indirect route dictated by the air-traffic control system. This allows closer spacing of planes in the air. Upon arrival, it also allows “continuous descent” rather than the more circuitous approach procedure that is now standard. This saves both time and fuel. For the European system, the average time saved has been estimated at ten minutes per flight. For the U.S., it would undoubtedly be greater. Translated into fuel, this would be a huge saving. For those concerned about the carbon dioxide emissions of airliners, it would be a boon as well.

The obvious question is: How are collisions to be avoided under a system of free flight? Technology provides the answer. Flight plans are submitted no less than 25 minutes in advance. Today’s high-speed computing power allows conflicts to be reconciled and any necessary adjustments in flight paths to be made prior to takeoff. “Pilots” need only stick to their flight plans.
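To make that reconciliation step concrete, here is a minimal sketch in Python of how pre-departure conflict checking might work in principle. It is not drawn from Indra’s software or any real air-traffic-control system; the flat-grid geometry, the five-nautical-mile separation minimum, the speeds and the callsigns are all illustrative assumptions.

```python
"""Toy illustration of pre-departure conflict checking for 'free flight'.

A minimal sketch, not any real ATC algorithm: each filed flight plan is
reduced to straight-line motion from origin to destination at constant
speed, and every pair of plans is checked for loss of a (hypothetical)
minimum separation before clearance is issued.
"""

from dataclasses import dataclass
import math

MIN_SEPARATION_NM = 5.0  # hypothetical separation minimum, nautical miles


@dataclass
class FlightPlan:
    callsign: str
    origin: tuple          # (x, y) in nautical miles on a flat grid
    destination: tuple
    departure_min: float   # departure time, minutes after a common epoch
    speed_kts: float       # ground speed in knots (nm per hour)

    def position_at(self, t_min: float):
        """Straight-line position at time t_min, or None if not airborne."""
        dx = self.destination[0] - self.origin[0]
        dy = self.destination[1] - self.origin[1]
        dist = math.hypot(dx, dy)
        elapsed_hr = (t_min - self.departure_min) / 60.0
        if elapsed_hr < 0:
            return None                      # not yet departed
        travelled = self.speed_kts * elapsed_hr
        if travelled > dist:
            return None                      # already landed
        frac = travelled / dist
        return (self.origin[0] + frac * dx, self.origin[1] + frac * dy)


def conflicts(plans, horizon_min=120, step_min=1.0):
    """Return (callsign, callsign, time) triples that violate separation."""
    found = []
    for i in range(len(plans)):
        for j in range(i + 1, len(plans)):
            a, b = plans[i], plans[j]
            t = 0.0
            while t <= horizon_min:
                pa, pb = a.position_at(t), b.position_at(t)
                if pa and pb and math.dist(pa, pb) < MIN_SEPARATION_NM:
                    found.append((a.callsign, b.callsign, t))
                    break
                t += step_min
    return found


if __name__ == "__main__":
    filed = [
        FlightPlan("ALPHA1", (0, 0), (200, 200), departure_min=0, speed_kts=450),
        FlightPlan("BRAVO2", (200, 0), (0, 200), departure_min=0, speed_kts=450),
        FlightPlan("CHARLIE3", (0, 300), (300, 300), departure_min=10, speed_kts=430),
    ]
    for a, b, t in conflicts(filed):
        print(f"Predicted conflict between {a} and {b} around t={t:.0f} min; "
              f"adjust one plan before clearance.")
```

The point of the sketch is simply that, once direct-route plans are filed well before departure, detecting and resolving conflicts is a routine computational task performed once on the ground, rather than a minute-by-minute shepherding job performed in the air.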

Streamlining of flight paths is only the beginning of the benefits of free flight. Technology now exists to replace the current system of radar and radio positioning of flights with satellite navigation. This would enable controllers to pinpoint a flight’s exact position at any given moment. The European air-traffic control system is set to transition to satellite navigation by 2017; the U.S. system, by 2020.

The upshot of all these advances is that the travel delays that currently have the public up in arms would be gone under the free flight system. It is estimated that the average error in flight arrivals would be no more than one minute.

Why must we wait another five years to reap the gains from a technology so manifestly beneficial? Older readers may recall the series of commercials in which Orson Welles promoted a wine with the slogan “We will sell no wine before its time.” The motto of government regulation should be “We will save no life before its time.”

The combination of free flight and satellite navigation is incredibly potent. As Jenkins notes, “the networking technology required to make [free flight] work [lends] itself naturally and almost inevitably to computerized aircraft controllable from the ground.” In other words, the human piloting of commercial aircraft has become obsolete – and has been so for years. The only thing standing between us and self-flying airliners has been the open opposition of commercial pilots and their union and the tacit opposition of the regulatory bureaucracy.

Virtually all airline crashes that occur now are the result of human error – or human deliberation. The publication Aviation Safety Network listed 8 crashes since 1994 that are believed to have been deliberately caused by the pilot. The fatalities involved were (in ascending order) 1, 1, 4, 12, 33, 44, 104 and 217. Three cases involved military planes stolen and crashed by unstable pilots, but of the rest, four were commercial flights whose pilots or copilots managed to crash their plane and take the passengers with them.

Jenkins resurrects the case of a Japanese pilot who crashed his DC-8 into Tokyo Bay in 1982. He cites the case of the Air Force pilot who crashed his A-10 into a Colorado mountain in 1997. And he states what so far nobody else has been willing to say, namely that “last March’s disappearance of Malaysia Airlines 370 appears to have been a criminal act by a member of the crew, though no wreckage has been recovered.”

The possibility of human error and human criminal actions is eliminated when the human element is removed. That is the clincher – if one were needed – in the case for free flight to replace our present antiquated system of air-traffic organization and control.

The case for free flight is analogous to the case for free markets and against the system of central planning and government regulation.

What if… 

Holman Jenkins reveals that as long ago as 1993 (!) no less a personage than Al Gore (!!) unveiled a proposal to partially privatize the air-traffic control system. This would have paved the way for free flight and automation to take over. As Jenkins observes retrospectively, “there likely would have been no 9/11. There would have been no Helios 522, which ran out of fuel and crashed in 2005 when its crew was incapacitated. There would have been no MH 370, no Germanwings 9525.” He is omitting the spillover effects on private aviation, such as the accident that claimed the life of golfer Payne Stewart.

The biggest “what if” of all is the effect on self-driving cars. Jenkins may be the most prominent skeptic about the feasibility – both technical and economic – of autonomous vehicles in the near term. But he is honest enough to acknowledge the truth. “Today we’d have decades of experience with autonomous planes to inform our thinking about autonomous cars. And disasters like the intentional crashing of the Germanwings plane would be hard to conceive of.”

What actually happened was that Gore’s proposal was poured through the legislative and regulatory cheesecloth. What emerged was funding to “study” it within the FAA – a guaranteed ticket to the cemetery. As long as commercial demand for air travel was increasing, pressure on the agency to do something about travel delays and the strain on airport capacity kept the idea alive. But after 9/11, the volume of air travel plummeted for years and the FAA was able to keep the lid on reform by patching up the aging, rickety structure.

And pilots continued to err. On very, very rare occasions, they continued to murder. Passengers continued to die. The air-traveling public continued to fume about delays. As always, they continued to blame the airlines instead of placing blame where it belonged – on the federal government. Now air travel is projected to more than double by 2030. How long will we continue to indulge the fantasy of government regulation as protector and savior?

Free markets solve problems because their participants can only achieve their aims by solving the problems of their customers. Governments perpetuate problems because the aims of politicians, bureaucrats and government employees are served by the existence of problems, not by their solution.

DRI-254 for week of 7-6-14: The Selling of Environmentalism

An Access Advertising EconBrief:

The Selling of Environmentalism

The word “imperialism” was popularized by Lenin to describe a process of exploitation employed by developed nations in the West on undeveloped colonies in the Eastern hemisphere. In recent years, though, it has been used in a completely different context – to describe the use of economic logic to explain practically everything in the world. Before the advent of the late Nobel laureate Gary Becker, economists were parochial in their studies, confining themselves almost exclusively to the study of mankind in its commercial and mercantile life. Becker trained the lens of economic theory on the household, the family and the institution of marriage. Ignoring the time-honored convention of treating “capital” as plant and equipment, he (along with colleagues like Theodore Schultz) treated human beings as the ultimate capital goods.

Becker ripped the lid off Pandora’s Box and the study of society will never be the same again. We now recognize that any and every form of human behavior might profitably be seen in this same light. To be sure, that does not mean employing the sterile and limiting tools of the professional economist; namely, advanced mathematics and formal statistics. It simply means subjecting human behavior to the logic of purposeful action.

Environmentalism Under the Microscope

The beginnings of the environmental movement are commonly traced to the publication of Silent Spring in 1962 by marine biologist Rachel Carson. That book sought to dramatize the unfavorable effects of pesticides, industrial chemicals and pollution upon wildlife and nature. Carson had scientific credentials – she had previously published a well-regarded book on oceanography – but this book, completed during her terminal illness, was a polemic rather than a sober scientific tract. Its scientific basis has been almost completely undermined in the half-century since publication. (A recent book devoted entirely to re-examination of Silent Spring by scientific critics is decisive.) Yet this book galvanized the movement that has since come to be called environmentalism.

An “ism” ideology is, or ought to be, associated with a set of logical propositions. Marxism, for example, employs the framework of classical economics as developed by David Ricardo but deviates in its creation of the concept of “surplus value,” generated by labor and appropriated by capitalists. Capitalism is a term Marx intended invidiously, but it has since morphed into the descriptor of the system of free markets, private property rights and limited government. What is the analogous logical system implied by the term “environmentalism”?

There isn’t one. Generically, the word connotes an emotive affinity for nature and a corresponding distaste for industrial civilization. Beyond that, its only concrete meaning is political. The problem of definition arises because, in and of itself, an affinity for nature is insufficient as a guide to human action. For example, consider the activity of recycling. Virtually everybody would consider it de rigueur as part of an environmentalist program. The most frequently stated purpose of recycling is to relieve pressure on landfills, which are ostensibly filling up with garbage and threatening to overwhelm humanity. The single greatest component of landfills is newsprint. But the leachates created by the recycling of newsprint are extremely harmful to “the environment”; e.g., their acidic content poisons soils and water, and they are very costly to divert. We have arrived at a contradiction – is recycling “good for the environment” or “bad for the environment”? There is no answer to the question as posed; the effects of recycling can only be evaluated in terms of tradeoffs. In other words, the issue is one of economics, not emotion alone.

No matter where we turn, “the environment” confronts us with such tradeoffs. Acceptance of the philosophy of environmentalism depends on getting us to ignore these tradeoffs by focusing on one side and ignoring the other. Environmental advocates of recycling, for instance, customarily ignore the leachates and robotically beat the drums for mandatory recycling programs. When their lopsided character is exposed, environmentalists retreat to the carefully prepared position that the purity of their motives excuses any lapses in analysis and overrides any shortcomings in their programs.

Today’s economist does not take this attitude on faith. He notes that the political stance of environmentalists is logically consistent even if their analysis is not. The politics of environmentalism can be understood as a consistent attempt to increase the real income of environmentalists in two obvious ways: first, by redistributing income in favor of their particular preferences for consumption (enjoyment) of nature; and second, by enjoying real income in the form of power exerted over people whose freedom they constrain and real income they reduce through legislation and administrative and judicial fiat.

Thus, environmentalism is best understood as a political movement existing to serve economic ends. In order to do that, its adherents must “sell” environmentalism just as a producer sells a product. Consumers “buy” environmentalism in one of two ways: by voting for candidates who support the legislation, agencies, rules and rulings that further the environmental agenda; and by donating money to environmental organizations that provide real income to environmentalists by employing them and lobbying for the environmental agenda.

Like the most successful consumer products, environmentalism has many varieties. Currently, the most popular and politically successful one is called “climate change,” which is a model change from the previous product, “global warming.” In order to appreciate the economic theory of environmentalism, it is instructive to trace the selling of this doctrine in recent years.

Why Was the Product Called “Climate Change” Developed?

The doctrine today known as “climate change” grew out of a long period of climate research on a phenomenon called “global warming.” This began in the 1970s. Just as businessmen spend years or even decades developing products, environmentalists use scientific (or quasi-scientific) research as their product-development laboratory, in which promising products are developed for future presentation to the market. Although global warming was “in development” throughout the 1970s and 80s, it did not receive its full “rollout” as a full-fledged environmental product until the early 1990s. We can regard the publication of Al Gore’s Earth in the Balance in 1992 as the completed rollout of global warming. In that book, Gore presented the full-bore apocalyptic prophecy that human-caused global warming threatened the destruction of the Earth within two centuries.

Why was global warming “in development” for so long? And after spending that long in development limbo, why did environmentalists bring it “to market” in the early 1990s? The answers to these questions further cement the economic theory of environmentalism.

Global warming joined a long line of environmental products that were brought to market beginning in the early 1960s. These included conservation, water pollution, air pollution, species preservation, forest preservation, overpopulation, garbage disposal, inadequate food production, cancer incidence and energy insufficiency. The most obvious, logical business rationale for a product to be brought to market is that its time has come, for one or more reasons. But global warming was brought to market by a process of elimination. All of the other environmental products were either not “selling” or had reached dangerously low levels of “sales.” Environmentalists desperately needed a flagship product and global warming was the only candidate in sight. Despite its manifest deficiencies, it was brought to market “before its time,” i.e., before its scientific merits had been demonstrated. In this regard, it differed from most (although not all) of the previous environmental products.

Those are the summary answers to the two key questions posed above. Global warming (later climate change) spent decades in development because its scientific merits were difficult if not impossible to demonstrate. It was brought to market in spite of that limitation because environmentalists had no other products with equivalent potential to provide real income and had to take the risks of introducing it prematurely in order to maintain the “business” of environmentalism as a going concern. Each of these contentions is fleshed out below.

The Product Maturation Suffered by Environmentalism

Businesses often find that their products lead limited lives. These limitations may be technological, competitive or psychological. New and better processes may doom a product to obsolescence. Competitors may imitate a product into senescence or even extinction. Fads may simply lose favor with consumers after a period of infatuation.

As of the early 1990s, the products offered by environmentalism were in various stages of maturity, decline or death.

Air pollution was a legitimate scientific concern when environmentalism adopted it in the early 1960s. It remains so today because the difficulty of enforcing private property rights in air makes a free-market solution to the problem of air pollution elusive. But by the early 1990s, even the inefficient solutions enforced by the federal government had reduced the problem of air pollution to full manageability.

Between 1975 and 1991, the six air pollutants tracked by the Environmental Protection Agency (EPA) fell by between 24% and 94%. Even if we go back to 1940 as a standard of comparison – forcing us to use emissions as a proxy for the pollution we really want to measure, since the latter wasn’t calculated prior to 1975 – we find that three of the six were lower in 1991, as were total emissions. (Other developed countries showed similar progress during this time span.)

Water pollution was already decreasing when Rachel Carson wrote and continued to fall throughout the 1960s, 70s and 80s. The key was the introduction of wastewater treatment facilities to over three-quarters of the country. Previously polluted bodies of water like the Cuyahoga River, the Androscoggin River, the northern Hudson River and several of the Great Lakes became pure enough to host sport-fishing and swimming. The Mississippi River became one of the industrialized world’s purest major rivers. Unsafe drinking water became a non-problem. Again, this was accomplished despite the inefficient efforts of local governments, the worst of these being the persistent refusal to price water at the margin to discourage overuse.

Forests were thriving in the early 1990s, despite the rhetoric of environmental organizations that inveighed against “clear-cutting” by timber companies. In reality, the number of wooded acres in the U.S. had grown by 20% over the previous two decades. The state of Vermont had been only 35% covered by forest in the late nineteenth century. By the early 1990s, that coverage had risen to 76%.

This improvement was owed to private-sector timber companies, which practiced the principle of “sustainable yield” timber management. By the early 1990s, annual timber growth had exceeded harvest every year since 1952. By 1992, the actual timber harvest was a minuscule 384,000 acres, six-tenths of 1% of the land available for harvest. Average annual U.S. wood growth was three times greater than in 1920.

Environmentalists whined about the timberlands opened up for harvest by the federal government in the national parks and wildlife refuges, but less logging was occurring in the National Forests than at any time since the early 1950s. Clear-cut timber was being replaced with new, healthier stands that attracted more wildlife diversity than the harvested “old-growth” forest.

As always, this progress occurred in spite of government, not because of it. The mileage of roads hacked out of national-forest land by the Forest Service is three times greater than that of the federal Interstate highway system. The subsidized price at which the government sells logging rights on that land is a form of corporate welfare for timber companies. But the private sector bailed out the public sector in a manner that would have made John Muir proud.

Garbage disposal and solid-waste management may have been the most unheralded environmental victory won by the private sector. At the same time that Al Gore complained that “the volume of garbage is now so high that we are running out of places to put it,” modern technology had solved the problem of solid-waste disposal. The contemporary landfill had a plastic bottom and a clay liner that together prevented leakage. It was topped with dirt to prevent odors and run-off. The entire estimated U.S. supply of solid waste for the next 500 years could be safely stored in one landfill 100 yards deep and 20 miles on a side. The only real problem with landfills was siting, owing to the NIMBY (“not in my back yard”) philosophy fomented by environmentalism. Whatever benefit recycling offered could be had from private markets, which recycle only those materials whose benefits (sales revenue) exceed their reclamation costs (including a “normal” profit).
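A rough plausibility check on that landfill figure is possible. Using assumed inputs that are not in the article (on the order of 200 million tons of municipal solid waste per year and a compacted density of roughly one ton per cubic yard), the arithmetic runs as follows.

```latex
% Back-of-the-envelope check. Assumed inputs (not from the article):
% roughly 2 x 10^8 tons of municipal solid waste per year and a compacted
% density of about 1 ton per cubic yard.
\[
V_{\text{landfill}} = (20\ \text{mi})^{2} \times 100\ \text{yd}
                    = (35{,}200\ \text{yd})^{2} \times 100\ \text{yd}
                    \approx 1.2 \times 10^{11}\ \text{yd}^{3}
\]
\[
V_{\text{waste over 500 yr}} \approx 500\ \text{yr} \times 2 \times 10^{8}\ \tfrac{\text{tons}}{\text{yr}}
                             \times 1\ \tfrac{\text{yd}^{3}}{\text{ton}}
                             = 1.0 \times 10^{11}\ \text{yd}^{3}
\]
```

On those assumptions the hole described would indeed hold roughly five centuries of the nation’s solid waste; different density or tonnage assumptions would shift the figure, but not by enough to change the qualitative point.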

Overpopulation was once the sales leader of environmentalism. In 1968’s The Population Bomb, leading environmentalist Paul Ehrlich wrote that “the battle to feed all of humanity is over. In the 1970s, the world will undergo famines – hundreds of millions of people are going to starve to death in spite of any crash programs embarked upon now. At this late date, nothing can prevent a substantial increase in the world death rate….” Ehrlich also predicted food riots and plummeting life expectancy in the U.S. and biological death for a couple of the Great Lakes.

Ehrlich was a great success at selling environmentalism. His book, and its 1990 sequel The Population Explosion, sold millions of copies and recruited untold converts to the cause. Unfortunately, his product had a limited shelf life because his prophecies were spectacularly inaccurate. The only famines were politically, not biologically, triggered; deaths were in the hundreds of thousands, not the hundreds of millions predicted. Death rates declined instead of rising. The Great Lakes did not die; they were completely rehabilitated. Even worse, Ehrlich made a highly publicized bet with economist Julian Simon that the prices of five metals handpicked by Ehrlich would rise in real terms over a ten-year period. (The loser would pay the algebraic sum of the price changes incurred.) The prices went down even in nominal terms despite the rising general level of prices over the interval – another spectacular prophetic failure by Ehrlich.
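Spelled out, the settlement rule described above (“the loser would pay the algebraic sum of the price changes”) can be read as follows; the notation and the quantity weights are my own illustrative reading, not the bettors’ published terms.

```latex
% Illustrative notation: q_i is a fixed quantity of metal i agreed at the
% outset; p_{i,1980} and p_{i,1990} are its inflation-adjusted prices at the
% start and end of the ten-year period.
\[
S \;=\; \sum_{i=1}^{5} q_i \left( p_{i,1990} - p_{i,1980} \right)
\]
```

If S is positive, prices have risen on balance and Ehrlich wins that amount; if S is negative, Simon wins its absolute value. Since the basket’s prices fell over the interval, S was negative and Ehrlich paid.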

It’s not surprising that Ehrlich, rather than the population, bombed. In the 1960s, the world’s annual population growth was about 2.0%. By the 1990s, it would fall to 1.6%. (Today, of course, our problem is falling birth rates – the diametric opposite of that predicted by environmentalism.)

Thus, the phantom population growth predicted by environmentalism never materialized, removing one pillar of the inadequate food supply foreseen with uncanny inaccuracy by environmentalists. Ehrlich and others had foreseen a Malthusian scenario in which rising population growth overtook diminishing agricultural productivity. They were just as wrong about productivity as about population. The Green Revolution ushered in by Norman Borlaug et al. led one of the world’s leading agricultural economists to declare that “the scourge of famine due to natural causes has been almost conquered….”

The other leg of environmentalism’s collapsing doomsday scenario of inadequate food was based on cancer incidence. Not only would the food supply prove insufficient, according to environmentalists, it was also unsafe. Industrial chemicals and pesticides were entering the food supply through food residues and additives. They were causing cancer. How did we know this? Tests on animals – specifically, on mice and rats – proved it.

There was only one problem with this assertion. Scientifically speaking, it was complete hooey. The cancer risk of one glass of wine was about 10,000 to 12,000 times greater than that posed by the additives and pesticide residues (cumulatively) in most food products. Most of our cancer risk comes from natural sources, such as sunlight and natural pesticides produced by plants, some of which occur in common foods. Meanwhile, cancer rates had remained steady or fallen over the previous fifty years except for lung cancers attributable to smoking and melanomas attributable to ultraviolet light. Cancer rates among young adults had decreased rapidly. Age-adjusted death rates had mostly fallen.

Energy insufficiency had been brought to market by environmentalists in the 1970s, during the so-called Energy Crisis. It sold well when OPEC was allowed to peg oil prices at stratospheric levels. But when the Reagan administration decontrolled prices, domestic production rose and prices fell. As the 1990s rolled around, environmentalists were reduced to citing “proven reserves” of oil (45 years) and natural gas (63 years) as “proof” that we would soon run out of fossil fuels and that energy prices would then skyrocket. Of course, this was more hooey; proven reserves are the energy equivalent of inventory. Businesses hold inventory as prospective benefits and costs dictate. Current inventories say nothing about the long-run prospect of shortages.

In 1978, for example, proven reserves of oil stood at 648 billion barrels, or 29.2 years’ worth at then-current levels of usage. Over the next 14 years, we used about 84 billion barrels, but – lo and behold – proven reserves had risen to nearly a trillion barrels by 1992. That happened because it was now profitable to explore for and produce oil in a newly free market of fluctuating oil prices, making it cost-efficient to hold larger inventories of proven reserves. (And in today’s energy market, it is innovative technologies that are driving discoveries and production of new shale oil and gas.) Really, it is an idle pastime to estimate the number of years of “known” resources remaining, because nobody knows how much of a resource remains. It is not worth anybody’s time to make an accurate estimate; it is easier and more sensible to simply let the free market take its course. If the price rises, we will produce more and discover more reserves to hold as “inventory.” If we can’t find any more, the resultant high prices will give us the incentive to invent new technologies and find substitutes for the disappearing resource. That is exactly what has happened with the process called “fracking.” We have long known that conventional methods of oil drilling left 30-70% of the oil in the ground because it was too expensive to extract. When oil prices rose high enough, fracking allowed us to get at those sequestered supplies. We knew this in the early 1990s, even if we didn’t know exactly what technological process we would ultimately end up using.
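The inventory logic can be stated compactly. Writing R for proven reserves at the start of a year, Q for production during the year and D for additions from new discoveries and reclassifications (my notation, not the article’s):

```latex
% Proven reserves evolve like any other inventory:
\[
R_{t+1} \;=\; R_{t} - Q_{t} + D_{t},
\qquad
\text{``years remaining''} \;=\; \frac{R_{t}}{Q_{t}}
\]
```

Whenever additions D outpace production Q, both reserves and the “years remaining” ratio rise even as consumption continues, which is just what the 1978-1992 figures above illustrate. The ratio measures how much inventory it pays to prove up at current prices and technology, not how much oil exists.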

Conservation was the first product packaged and sold by environmentalism, long predating Rachel Carson. It dated back to the origin of the national-park system in Theodore Roosevelt’s day and the times of John Muir and John James Audubon. By the early 1990s, conservation was a mature product. The federal government was already the biggest landowner in the U.S. We already had more national parks than the federal government could hope to manage effectively. Environmentalists could no longer make any additional sales using conservation as the product.

Just about the only remaining salable product the environmentalists had was species preservation. Environmentalism flogged it for all it was worth, but that wasn’t much. After the Endangered Species Act was passed and periodic additions made to its list, what was left to do? Not nearly enough to support the upper-middle-class lifestyles of a few million environmentalists. (It takes an upper-middle-class income to enjoy the amenities of nature in all their glory.)

Environmentalism Presents: Global Warming

In the late 1980s, the theory that industrial activity was heating up the atmosphere by increasing the amount of carbon dioxide in the air began to gain popular support. In 1989, Time Magazine modified its well-known “Man of the Year” award to “Planet of the Year,” which it gave to “Endangered Earth.” It described the potential effects of this warming process as “scary.” The Intergovernmental Panel on Climate Change (IPCC), an organization of environmentalists dedicated to selling their product, estimated that warming could average as much as 0.5 degrees Fahrenheit per decade over the next century, resulting in a 5.4-degree increase in average temperature. This would cause polar ice caps to melt and sea levels to rise, swamping coastal settlements around the world – and that was just the beginning of the adverse consequences of global warming.

No sooner had the rollout begun than skepticism rolled in along with the new product. Scientists could prove that atmospheric carbon dioxide was increasing and that industrial activity was behind the increase, but they could not prove that carbon dioxide was causing the amount of warming actually measured. As a matter of fact, there wasn’t even an unambiguous case to be made for warming. What warming could be found had mostly occurred at night, in the winter and in the Southern Hemisphere (not the locus of most industrial activity). And to top it all off, it was not clear whether the warming should be ascribed to the very long-run cyclical forces that have alternated the Earth between ice ages and tropical warming periods for many thousands of years. By 1994, Time Magazine (which needed a continuous supply of exciting new headlines just as much as environmentalists needed a new supply of products with which to scare the public) had given up on global warming and resuscitated a previous global-climate scare from the 1970s, the “Coming Ice Age.”

It is easy to see the potential benefits of the global-warming product for environmentalists. Heretofore, almost all environmentalist products had an objective basis. That is, they spotlighted real problems. Real problems have real solutions, and the hullabaloo caused by purchase of those products led to varying degrees of improvement in the problems. Note the distinction: the products themselves did not cause or lead to the improvement; it was the uproar created by the products that did the job. Most of the improvement was midwifed by economic measures, and environmentalism rejects economics the way vampires reject the cross. This put environmentalists in an anomalous position. Their very (indirect) success had worked against them. Their real income was dependent on selling environmentalism in any of various ways. Environmentalists cannot continue to sell more books about (say) air pollution when existing laws, regulations and devices have brought air quality to an acceptable level. They cannot continue to pass more coercive laws and regulations when the legally designated quality has been reached. Indeed, they will be lucky to maintain sales of previously written books to any significant degree. And they cannot continue to (credibly) solicit donations on the strength of a problem that has been solved, or at least effectively managed.

Unfortunately for environmentalists, the environmental product is not like an automobile that gives service until worn out and needs replacement, ad infinitum. It is more like a vaccine that, once taken, needn’t be retaken. Once the public has been radicalized and sensitized to the need for environmentalism, it becomes redundant to keep repeating the process.

Global warming was a new kind of product with special features. Its message could not be ignored or softened. Either we reform or we die. There was no monkeying around with tradeoffs.

Unlike the other environmental products, global warming was not a real problem with real solutions. But that was good. Real problems get solved – which, from the environmentalist standpoint, was bad. Global warming couldn’t even be proved, let alone solved. That meant that we were forced to act and there could be no end to the actions, since they would never solve the problem. After all, you can’t solve a problem that doesn’t exist in the first place! Global warming, then, was the environmentalist gift that would keep on giving, endlessly beckoning the faithful, recruiting ever more converts to the cause, ringing the cash register with donations and decorating the mast of environmentalism for at least a century. Its very scientific dubiety was an advantage, since that would keep it in the headlines and keep its critics fighting against it – allowing environmentalists the perfect excuse to keep pleading for donations to fend off the evil global-warming deniers. Of course, lack of scientific credibility is also a two-edged sword, since environmentalists cannot force the public to buy their products and can never be quite sure when the credibility gap will turn the tide against them.

When you’re selling the environmentalist product, the last thing you want is certainty, which eliminates controversy. Controversy sells. And selling is all that matters. Environmentalists certainly don’t want to solve the problem of global warming. If the problem is solved, they have nothing left to sell! And if they don’t sell, they don’t eat, or at least they don’t enjoy any real income from environmentalism. Environmentalism is also aimed at gaining psychological benefits for its adherents by giving their lives meaning and empowering them by coercing people with whom they disagree. If there is no controversy and no problem, there is nothing to give their lives meaning anymore and no basis for coercing others.

The Economic Theory of Environmentalism

Both environmentalists and their staunchest foes automatically treat the environmental movement as a romantic crusade, akin to a religion or a moral reform movement. This is wrong. Reformers or altruists act without thought of personal gain. In contrast, environmentalists are self-interested individuals in the standard tradition of economic theory. Some of their transactions lie within the normal commercial realm of economics and others do not, but all are governed by economic logic.

That being so, should we view environmentalism in the same benign light as we do any other industry operating in a free market? No, because environmentalists reject the free market in favor of coercion. If they were content to persuade others of the merits of their views, their actions would be unexceptional. Instead, they demand subservience to their viewpoint via legal codification and all forms of legislative, executive, administrative and judicial tyranny. Their adherents number a few would-be dictators and countless petty dictators. Their alliance with science is purely opportunistic; one minute they accuse their opponents of being anti-scientific deniers and the next they are praying to the idol of Gaia and Mother Earth.

The only thing anti-environmentalists have found to admire about the environmental movement is its moral fervor. That concession is a mistake.