An Access Advertising EconBrief:
Can We Afford the Risk of EPA Regulation?
Try this exercise in free association. What is first brought to mind by the words “government regulation”? The Environmental Protection Agency would be the answer of a plurality, perhaps a majority, of Americans. Now envision the activity most characteristic of that agency. The testing of industrial chemicals for toxicity, with a view to determining safe levels of exposure for humans, would compete with such alternative duties as monitoring air quality and mitigating water pollution. Thus, we have a paradigmatic case of government regulation of business in the public interest – one we would expect to showcase regulation at its best.
One of the world’s most distinguished scientists recently reviewed EPA performance in this area. Richard Wilson, born in Great Britain but long resident at Harvard University, made his scientific reputation as a pioneer in the field of particle physics. In recent decades, he became perhaps the leading expert on nuclear safety, including the accidents at Three Mile Island, Chernobyl and Fukushima. Wilson is a recognized leader in risk analysis, the study of risk and its mitigation. In a recent article in the journal Regulation (“The EPA and Risk Analysis,” Spring 2014), Wilson offers a sobering explanation of “how inadequate – and even mad and dangerous – the U.S. Environmental Protection Agency’s procedures for risk analysis are, and why and how they must be modified.”
Wilson is neither a political operative nor a laissez-faire economist. He is a pure scientist whose credentials gleam with ivory-tower polish. He is not complaining about excesses or aberrations, but rather characterizing the everyday policies of the EPA. Yet he has forsworn the dispassionate language of the academy for words such as “mad” and “dangerous.” Perhaps most alarming of all, Wilson despairs of finding anybody else willing to speak publicly on this subject.
The EPA and Risk
The EPA began life in 1970 during the administration of President Richard Nixon. It was the culmination of the period of environmental activism begun with the publication of Rachel Carson’s book Silent Spring in 1962. The EPA’s foundational project was the strict scrutiny of industrial society for the risks it allegedly posed to life on Earth. To that end, the EPA proposed “risk assessment and regulations” for about 20 common industrial solvents.
How was the EPA to assess the risks of these chemicals to humans? Well, standard scientific procedure called for laboratory tests that isolate the chemical’s effects from the myriad other forces impinging on human health. There were formidable problems with this approach, though. For one thing, teasing out the full range of effects might take decades; epidemiological studies on human populations are commonly carried out over 10 years or more. Another problem is that human subjects would be exposed to considerable risk, particularly if dosages were amped up to shorten the study periods.
The EPA solved – or rather, addressed – the problem by using laboratory animals such as rats and mice as test subjects. Particularly in the beginning, few people objected when rodents received astronomically high dosages of industrial chemicals in order to determine the maximum level of exposure consistent with safety.
Of course, everybody knew that rodents were not comparable to people for research purposes. The EPA addressed that problem, too, by adjusting its test results in the simplest way possible. It treated the results applicable to humans as scalar multiples of the rodent results, with the scale determined by body weight. It assumed that the chemicals were linear in their effects on people, rather than (say) having little or no effect up to a certain point or threshold. (A linear effect would be infinitesimally small with the first molecule of exposure and rise with each subsequent molecule of exposure.)
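The two assumptions described above – weight-based scaling from rodents to humans, and a linear (no-threshold) dose-response curve – can be sketched in a few lines of Python. Every number here (body weights, slope, threshold) is an illustrative assumption, not an EPA figure:

```python
# Illustrative sketch of the two modeling assumptions described in the
# text; all numeric parameters are invented for demonstration.

def human_equivalent_dose(rodent_dose_mg, rodent_kg=0.03, human_kg=70.0):
    """Scale a rodent dose to a human dose as a simple weight multiple."""
    return rodent_dose_mg * (human_kg / rodent_kg)

def linear_risk(dose, slope=1e-6):
    """Linear (no-threshold) model: any nonzero dose carries risk."""
    return slope * dose

def threshold_risk(dose, threshold=100.0, slope=1e-6):
    """Threshold model: no risk at all below the threshold dose."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

dose = 50.0  # hypothetical daily dose, arbitrary units
print(human_equivalent_dose(1.0))  # 1 mg in a 30 g rat scales to ~2,333 mg
print(linear_risk(dose))           # nonzero risk at any exposure level
print(threshold_risk(dose))        # zero risk below the assumed threshold
```

The contrast between the last two lines is the heart of the dispute: under the linear model no exposure is ever risk-free, while under a threshold model low doses carry no risk at all.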
Of all the decisions made by the EPA, none was more questionable than the standard it set for allowable risk from exposure to toxic chemicals: no more than one premature death per million of exposed population over a statistical lifetime. Moreover, the EPA also assumed the most unfavorable circumstances of exposure – that those exposed would receive exposure daily, at levels that could otherwise be reached only occupationally by workers routinely handling high concentrations of the substance. This maximum safe level of exposure was itself a variable, expressed as a range rather than a single point, because the EPA could not assume that all rats and mice were identical in their response to the chemicals. Here again, the EPA assumed the maximum degree of uncertainty in reaction when calculating allowable risk. As Wilson points out, if the EPA had assumed average uncertainty instead, the statistical risk would have fallen to about one in ten million.
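To see how small "one in a million per lifetime" is, it helps to turn the standard into expected deaths and an annual probability. The population size and the 70-year statistical lifetime below are assumptions for the sake of arithmetic:

```python
# Back-of-the-envelope reading of the "one in a million per lifetime"
# standard; the population and lifetime figures are assumptions.

exposed_population = 1_000_000
lifetime_risk = 1e-6      # one premature death per million exposed
lifetime_years = 70       # assumed statistical lifetime

expected_deaths = exposed_population * lifetime_risk
annual_risk = lifetime_risk / lifetime_years  # crude per-year equivalent

print(expected_deaths)  # 1.0 expected premature death in a million people
print(annual_risk)      # roughly 1.4e-08 per person per year
```

One expected death among a million people exposed at the worst-case level for an entire lifetime is, as the text argues, a standard "essentially equivalent to zero."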
It is difficult for the layperson to evaluate this “one in a million” EPA standard. Wilson tries to put it in perspective. The EPA is saying that the a priori risk imposed by an industrial chemical should be roughly equivalent to that imposed by smoking two cigarettes in an average lifetime. Is that a zero risk? Well, not in the abstract sense, but it will do until something better comes along. Wilson suggests that the statistical chance of an asteroid hitting Earth is from 100 to 1,000 times greater than this. There are several chemicals found in nature, including arsenic and mercury, each of which poses a risk of death roughly 1,000 times greater than this EPA-stipulated risk. Still astronomically small, mind you – but vastly greater than the arbitrary standard set by the EPA for industrial chemicals.
Having painted this ghastly portrait of your federal government at work, Wilson steps back to allow us a view of the landscape that the EPA is working to alter. There are some 80,000 industrial chemicals in use in the U.S. Of these, about 20 have actually been studied for their effects on humans. Somewhere between 10,000 and 20,000 chemicals have been tested on lab animals using methods like those described above. That means that, very conservatively speaking, there are at least 60,000 chemicals for which we have only experience as a guide to their effects on humans.
What should we do about this vast uncharted chemical terrain? Well, we know what the EPA has done in the past. A few years ago, Wilson reminds us, the agency was faced with the problem of disposing of stocks of nerve gas, including sarin, one of the deadliest of all known substances. The agency conducted a small test incineration and then looked at the resulting combustion products. When it found only a few on its list of toxic chemicals, it ignored the various other unstudied chemicals among the byproducts and declared the risk of incineration to be zero! It was so confident of this verdict that it solicited Wilson’s forensic testimony on its behalf – in vain, naturally.
Wilson has now painted a picture of a government agency gripped by analytical psychosis. It arrogates to itself the power to dictate safety to us, imposes unreal standards of safety on the chemicals it studies – then arbitrarily assumes that unstudied chemicals are completely safe! Now we see where Wilson’s words “mad and dangerous” came from.
Economists who study government should be no more surprised by the EPA’s actions than by Wilson’s horrified reaction to them. The scientist reacts as if he were a child discovering for the first time that his parents are subject to the same human frailties as everybody else. “Americans deserve better from their government. The EPA should have a sound, logical and scientific justification for its chemical exposure regulations. As part of that, agency officials need to accept that they are sometimes wrong in their policymaking and that they need to change defective assessments and regulations.” Clearly, Wilson expects government to behave like science – or rather, as science is ideally supposed to behave, since science itself does not live up to its own high standards of objectivity and honesty. Economists are not nearly that naïve.
The Riskless Society
Where did the EPA’s standard of no more than one premature death per million exposed per statistical lifetime come from? “Well, let’s face it,” the late Aaron Wildavsky quipped, “no real man tells his girlfriend that she is one in a hundred thousand.” Actually, Wildavsky observes, “the real root of ‘one in a million’ can be traced to the [government’s] efforts to find a number that was essentially equivalent to zero.” Lest the reader wonder whether Wilson and Wildavsky are peculiar in their insistence that this “zero-risk” standard is ridiculous, we have it on the authority of John D. Graham, former director of the Harvard School of Public Health’s Center for Risk Analysis, that “No one seriously suggested that such a stringent risk level should be applied to a[n already] maximally exposed individual.”
Time has also been unkind to the rest of the EPA’s methodological assumptions. Linear cancer causation has given way to recognition of a threshold up to which exposure is harmless or even beneficial. This gibes with the findings of toxicology, in which the time-honored first principle is “the dose makes the poison.” The existence of a threshold makes it next to impossible to gauge safe levels of exposure using either tests on lab animals or experience with low levels of human exposure. As Wildavsky notes, it also helps explain our actual experience over time, in which “health rates keep getting better and better while government estimates of risk keep getting worse and worse.”
During his lifetime, political scientist Aaron Wildavsky was the pioneering authority on government regulation of human risk. In his classic article “No Risk is the Highest Risk of All” (The American Scientist, 1979, 67(1), pp. 32-37) and his entry on the “Riskless Society” in the Fortune Encyclopedia of Economics (1993, pp. 426-432), Wildavsky produced the definitive reply to the regulatory mentality that now grips America in a vise.
Throughout mankind’s history, human advancement has been spearheaded by technological innovation. This advancement has been accompanied by risk. The field of safety employs tools of risk reduction. There are two basic strategies for risk reduction. The first is anticipation. The EPA, and the welfare state in general, tacitly assume this to be the only safety strategy. But Wildavsky notes that anticipation is a limited strategy because it only works when we can “know the quality of the adverse consequence expected, its probability and the existence of effective remedies.” As Wildavsky dryly notes, “the knowledge requirements and the organizational capacities required to make anticipation an effective strategy… are very large.”
Fortunately, there is a much more effective remedy close at hand. “A strategy of resilience, on the other hand, requires reliance on experience with adverse consequences once they occur in order to develop a capacity to learn from the harm and bounce back. Resilience, therefore, requires the accumulation of large amounts of generalizable resources, such as organizational capacity, knowledge, wealth, energy and communication, that can be used to craft solutions to problems that the people involved did not know would occur.” Does this sound like a stringent standard to meet? Actually, it shouldn’t. We already have all those things in the form of markets, the very things that produce and deliver our daily bread. Markets meet and solve problems, anticipated and otherwise, on a daily basis.
Really, this is an old problem in a new guise. It is the debate between central planning – which assumes that the central planners already know everything necessary to plan our lives for us – and free competition – which posits that only markets can generate the information necessary to make social cooperation a reality. Wildavsky has put the issue in political and scientific terms rather than the economic terms that formed the basis of the Socialist Calculation debates of the 1920s and 30s between socialists Oskar Lange and Fred Taylor and Austrian economists Ludwig von Mises and F. A. Hayek. The EPA is a hopelessly outmoded relic of central planning that not only fails to achieve its objectives, but threatens our freedom in the bargain.
In “No Risk is the Highest Risk of All,” Wildavsky utilizes the economic concept of opportunity cost to make the decisive point that by utilizing resources inefficiently to drive one particular risk all the way to zero, government regulators indirectly increase other risks. Because this tradeoff is made not through the free market but by government fiat, we have no reason to think that people are willing to bear these higher alternative risks in order to gain the infinitesimally small additional benefit of driving the original risk all the way to zero. As a purely practical matter, we can be sure that this tradeoff is wildly unfavorable. The EPA bans an industrial chemical because it does not meet the agency’s impossibly high safety standard. Businesses across the nation have to utilize an inferior substitute. This leaves the businesses, their shareholders, employees and consumers poorer, with less real income to spend on other things. Safety is a normal good – something people and businesses spend more on when their real incomes rise and less on when real incomes fall. The EPA’s foolish “zero-risk” regulatory standard thus creates a ripple effect that reduces safety throughout the economy.
The Proof of the Safety is in the Living
Wildavsky categorically cited the “wealth to health” linkage as a “rule without exception.” To get a concrete sense of this transformation in the 20th century, we can consult the U.S. historical life expectancy and mortality tables. Between 1890 and 1987, life expectancy for white males rose from 42.5 years to 72.2 years; for non-white males, from 32.54 years to 67.3 years. For white females, it rose from 44.46 years to 78.9 years; for non-white females, from 35.04 years to 75.2 years. (Note, as did Wildavsky, that the longevity edge enjoyed by females over males came to exceed that enjoyed by white males over non-white males.)
Various diseases were fearsome killers at the dawn of the 20th century but petered out over its course. Typhoid fever killed an average of 26.7 people per 100,000 as the century turned (1900-04); by 1980 it had been virtually wiped out. Communicable diseases of childhood (measles, scarlet fever, whooping cough and diphtheria) carried away 65.2 of every 100,000 people in the early days of the century but, again, by 1980 they had been virtually wiped out. Pneumonia used to be called “the old man’s friend” because it was the official cause of so many elderly deaths, which is why 161.5 deaths per 100,000 people were attributed to it during 1900-04. That number had plummeted to 22.0 by 1980. Influenza caused 22.8 deaths per 100,000 during 1900-04, but the disease was near extinction in 1980, with only 1.1 deaths ascribed to it. Tuberculosis was another mass killer, racking up 184.7 deaths per 100,000 on average in the early 1900s. By 1980, the disease was on the ropes with a death rate of only 0.8 per 100,000. Thanks to antibiotics, appendicitis went from lethal to merely painful, with a death rate of merely 0.3 per 100,000 people. Syphilis went from scourge of sexually transmitted diseases to endangered species of same, going from 12.9 deaths per 100,000 to 0.1.
Of the major causes of death, only cancer and cardiovascular disease showed significant increase. Cancer is primarily a disease of age; the tremendous jump in life expectancy meant that many people who formerly died of all the causes listed above now lived to reach old age, where they succumbed to cancer. That is why the incidence of most diseases fell but why cancer deaths increased. “Heart failure” is a default listing for cause of death when the proximate cause is sufficient to cause organ failure but not acute enough to cause death directly. That accounts for the increase in cardiovascular deaths, although differences in lifestyle associated with greater wealth also bear part of the blame for the failure of cardiovascular deaths to decline despite great advances in medical knowledge and technology. (In recent years, this tendency has begun to reverse.)
The activity-linked mortality tables are also instructive. The tables are again expressed as a rate of fatality per 100,000 people at risk, which can be translated into absolute numbers with the application of additional information. By far the riskiest activity is motorcycling, with an annual death rate of 2,000 per 100,000 participants. Smoking lags far behind at 300, with only 120 of these ascribable to lung cancer. Coal mining is the riskiest occupation, with 63 deaths per 100,000 participants, but it has to share the title with farming. It is riskier a priori to drive a motor vehicle (24 deaths per 100,000) than to be a uniformed policeman (22 deaths). Roughly 60 people per year are fatally struck by lightning. The lowest risk actually calculated by statisticians is the 0.000006 per 100,000 risk of dying from a meteorite strike.
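The conversion from a per-100,000 rate to absolute numbers, and the comparison with the EPA standard, are simple arithmetic. In the sketch below, the rider population is a hypothetical figure supplied only to show the conversion, and the EPA lifetime standard is crudely treated as an annual rate:

```python
# Converting per-100,000 mortality rates to absolute numbers and
# relative risks. The participant count is a hypothetical assumption.

def expected_deaths(rate_per_100k, participants):
    """Annual expected deaths given a fatality rate per 100,000 at risk."""
    return rate_per_100k * participants / 100_000

# Motorcycling at 2,000 deaths per 100,000, among an assumed
# population of 500,000 riders:
print(expected_deaths(2000, 500_000))  # 10000.0

# Relative risk: driving (24 per 100,000 annually) versus the EPA's
# one-in-a-million standard, treated crudely as 1 per million per year.
print((24 / 100_000) / (1 / 1_000_000))  # driving is ~240x riskier
```

Even under this deliberately crude annual comparison, merely driving dwarfs the risk level the EPA deems tolerable for industrial chemicals, which is the article’s point about the standard’s arbitrariness.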
It is clear that risk is not something to be avoided at all cost but rather an activity that provides benefits at a cost. Driving, coal mining and policing carry the risk of death but also provide broad-based benefits not only to practitioners but to consumers and producers. Industrial chemicals also provide widespread benefits to the human race. It makes no sense to artificially mandate a “one in a million” death-risk for industrial solvents when just climbing in the driver’s seat of a car subjects each of us to a risk that is hundreds of thousands of times greater than that. We don’t need all-powerful government pretending to regulate away the risk associated with human activities while actually creating new hidden risks. We need free markets to properly price the benefits and costs associated with risk to allow us to both efficiently run risks and avoid them.
This fundamental historical record has been replicated with minor variations across the Western industrial landscape. It was achieved not by heavy-duty government regulation of business but by economic growth and markets – growth that began to slow as the welfare state and regulation began to predominate. Ironically, recent slippage in health and safety has been associated with the transfer of public distrust from government – where it is well founded – to science. Falling vaccination rates have produced a revival of diseases, such as measles and diphtheria, that had previously been nearly extinct.
The Jaundiced Eye of Economists
If there is any significant difference in point of view between scientists (Wilson) and political scientists (Wildavsky) on the one hand, and economists on the other, it is the willingness to take the good faith of government for granted. Wilson apparently believes that government regulators can be made to see the error of their ways. Wildavsky apparently viewed government regulators as belonging to a different school of academic thought (“anticipation vs. resilience”) – maybe they would see the light when exposed to superior reasoning.
Economists are more practical or, if you like, more cynical. It is no coincidence that government regulatory agencies do not practice good science even when tasked to do so. They are run by political appointees and funded by politicians; their employees are paid out of political appropriations. The power they possess will inevitably be wielded for political purposes. Most legal cases are settled because they are too expensive to litigate and because one or both parties fear the result of a trial. Government regulatory agencies use their power to bully the private sector into acquiescence with the political results favored by politicians in power. Private citizens fall in line because they lack the resources to fight back and because they fear the result of administrative proceedings in which the rules are designed to favor the government. This is the EPA as it is known to American businesses in their everyday world, not as it exists in the conceptual realities of pure natural science or academic political science.
The preceding paragraph describes a kind of bureaucratic totalitarianism that differs from classical despotism. The despot or dictator is a unitary ruler, while the bureaucracy wields a diffused form of absolute power. Nevertheless, this is the worst outcome associated with the EPA and top-down federal-government regulation in general. The risks of daily life are manageable compared to the risks of bad science dictated by government. And both these species of risk pale next to the risk of losing our freedom of action, the very freedom that allows us to manage the risks that government regulation does not and cannot begin to evaluate or lessen.
The EPA is just too risky to have around.