DRI-312 for week of 8-18-13: Understanding Risk, Benefit and Safety

An Access Advertising EconBrief:

Understanding Risk, Benefit and Safety

The mainstream press has propagated an informal historical narrative of safety in America. Prior to the Progressive era and the advent of muckraking journalism, the public lay at the mercy of rapacious businessmen who knowingly produced unsafe products and unwholesome foods in order to maximize their personal wealth. Thanks to the unselfish labors of investigative journalists and the subsequent creation of government regulatory agencies, products and foods became safe for the first time.

Now regulators and the press fight a never-ending battle for safety against the forces of greedy capitalism. Alas, there are so many industries and goods to regulate and so little time and money in the federal budget with which to do it.

In order to appreciate the full falsity of this doctrine, we must grasp the economic meaning of concepts like risk, benefit and safety. A good route to this goal lies through our own inner sense of the logic of human behavior.

A Reductio Ad Absurdum

To highlight the concepts of risk, benefit and safety, consider the following example. It is a reductio ad absurdum – an example “reduced to absurdity” in order to eliminate extraneous considerations and shine a spotlight on a few insights.

Assume you have only one day left to live – exactly twenty-four hours. You are aware of this. You are also aware that your death will be instantaneous and painless and your vitality, faculties and awareness will remain unimpaired up to your last second of consciousness. How will this affect your behavior?

A little thought should convince you that the effect will be profound. You have only one day left to wring whatever excitement, enjoyment and satisfaction you can from life. Will that day be business as usual, awakening at the normal time and departing to work at your job? Unless you work at one of the world’s most stimulating and fulfilling jobs, the last thing you will want is to spend your final day on Earth at work.

Instead, you will devote your time to the most intense and meaningful pleasures. These may be physical or mental, aesthetic or gastronomic, boisterous or sedate. The word “pleasure” inevitably evokes the notion of hedonism in some people, but this need not apply here. The pleasures you seek during your last day may be sensual but they may just as easily be as cerebral as reading a book or as contemplative as observing a sunset. Your personal selections from the vast menu of choice will be highly subjective, in the sense that my choices might very well differ drastically from yours. In spite of this, though, the example affords highly useful insights about economics – particularly the concepts of risk, benefit and safety.

Economic Benefit

The first conclusion to emerge from our artificial but enlightening example relates to the nature of economic benefit. In recent decades, a Martian studying Earth by scanning its news media transmissions and publications might well conclude that the benefit of human existence derives from work. After all, politicians and commentators yammer endlessly about the glories of, and necessity for, “jobs, jobs, jobs.” Taking this preoccupation at face value implies that work, in and of itself, is what makes life worthwhile. The obiter dicta of the rich and famous, who recklessly profess such heartfelt love for their profession that they would practice it for nothing, reinforce this impression.

Our example, though, shatters this shibboleth. Economic value inheres not in work but rather in the things that work produces, which yield pleasure and satisfaction when consumed. It is certainly possible to love one’s work, but it is no coincidence that the people who love it the most are the ones most highly compensated for it; their earnings can purchase the most satisfaction and pleasure. It is a famous truism that nobody’s deathbed reflection is a regret at not having spent more time at the office.


Risk

Ever since the pathbreaking work of economist Frank Knight some ninety years ago, economists have defined risk as mathematically expressed variance of possible future outcomes. Uncertainty, the first cousin of risk, applies when future outcomes vary in ways not susceptible to mathematical expression. For our purposes, however, we will view risk colloquially, as the possibility of unfavorable future outcomes.
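The distinction can be made concrete with a toy calculation (the numbers are my own illustration, not from the text): two gambles can share the same expected payoff while differing sharply in variance, which is the dimension Knight labeled risk.

```python
# Two gambles with the same expected payoff but different variance,
# i.e. different Knightian risk. Payoffs are invented for illustration.
safe_bet = [100, 100, 100, 100]   # pays 100 in every outcome
risky_bet = [0, 50, 150, 200]     # same mean, much wider spread

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(mean(safe_bet), variance(safe_bet))    # 100.0 0.0
print(mean(risky_bet), variance(risky_bet))  # 100.0 6250.0
```

The two prospects are identical "on average," yet no one would call them equivalent; the variance captures the difference.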

Again, it should be obvious that the prospect of death in twenty-four hours’ time will radically affect your attitude toward risk and benefit. You are out to grab all the gusto you can get in the day you have left. From experience, we realize that the pursuit of pleasure can involve some element of risk. For example, the most hair-raising rollercoaster ride may well provoke the most pleasurable response. But it may also produce nausea, vertigo and unsteadiness. There is even the risk of injury or death if the mechanism malfunctions or you somehow are thrown from the ride.

If you are the kind of person who enjoys rollercoasters, you will be undeterred by their risk in our special case. You are certainly not going to pass up this big thrill for fear of a one-in-a-hundred-million chance of death – you’re going to be dead tomorrow anyway! On the other hand, you might well refuse to ride the coaster with your safety belt unbuckled for the first twenty-three hours of your last day. You don’t want to take foolish risks and waste most of your last day. But you might well reverse that decision during your final hour, especially if you always wondered what it would be like to take that ride unbuckled. You certainly aren’t risking much for that thrill, are you, with only minutes left to live?


Safety

Safety is best understood as reduction in risk or uncertainty. In colloquial terms, it is time and trouble taken to reduce the likelihood of unfavorable outcomes. Put in those terms, the equivocal nature of safety is clear. It demands the sacrifice of time – and time is just what you have so little of. Why should you take much trouble reducing the likelihood of an unfavorable outcome when you will experience the most unfavorable outcome of all within twenty-four hours? Every second you spend on safety reduces the time you could spend experiencing pleasure; every bit of trouble you take avoiding risk lowers your potential for happiness during the dwindling time you have left.

Now is the time for you to go hang gliding, even launching off a mountain top if the idea takes your fancy. Bungee jumping is another good candidate. In neither case will you spend an hour or two inspecting your equipment for defects or weakness.

Of course, safety is a significant concern for all of us in our daily lives. That is one of the changes introduced when we revert from our model back to reality. Comparing reality with the polar extreme of our reductio ad absurdum traces out the continuum of risk, benefit and safety.

The Reality of Risk, Benefit and Safety

Reality differs from our artificial example in key respects. Although a relative few of us really do have only twenty-four hours to live, only a small fraction of those know (or suspect) the truth. And of those, virtually none have the freedom and vitality accorded the individual in our example. That clearly affects the central conclusions reached by our model – that the individual would seek out pleasure, eschew work, embrace risk if doing so heightened pleasure significantly and “purchase” little safety at the cost of foregoing pleasure.

We observe, and instinctively realize, that most people must work in order to earn income with which to buy pleasurable consumption goods. They tend to be “risk-averse” within relevant ranges of income and wealth; that is, they will buy a lottery ticket but not play roulette with the rent money. They value safety, but nowhere near the extent implied by the mainstream news media and politicians. In a world of work and production, safety is produced using time and physical resources that cannot then be used to produce pleasurable goods. Thus, safety production adds to the money cost and price of consumption goods, which creates a tradeoff between safety and purchasing power. Nobel Laureate George Stigler once colorfully averred that he would rather crash once every 500,000 takeoffs than pay a fortune to fly between major U.S. cities.
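Risk-aversion of this kind is conventionally modeled with a concave utility-of-wealth function, under which a sure sum is preferred to a fair gamble of equal expected value. A minimal sketch, with an assumed square-root utility function and invented dollar amounts:

```python
import math

# A concave utility function exhibits diminishing marginal utility of
# wealth, which is the standard model of risk-aversion. The square-root
# form and the dollar figures are assumptions for illustration only.
def utility(wealth):
    return math.sqrt(wealth)

sure_thing = utility(100)                            # utility of a sure $100
fair_gamble = 0.5 * utility(0) + 0.5 * utility(200)  # 50/50 gamble, same $100 expected value

print(sure_thing > fair_gamble)  # True: the risk-averse person declines the fair gamble
```

The same framework explains the lottery-ticket-but-not-roulette pattern: small stakes barely bend the utility curve, while large ones expose its concavity.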

In other words, the insights gained from our reductio ad absurdum turn out to be surprisingly useful. We merely have to adjust for the length, variability and unpredictability of actual life spans in order to predict the general character of human behavior in the face of risk. And when we apply these adjustments retroactively, we appreciate how badly astray the mainstream historical view of safety has led us.

Rewriting (Pseudo) History

The mainstream view contains at least a grain of truth in its suggestion that the emphasis on safety is a modern development. But the blame attached to profit-hungry capitalists is wrongheaded. This is not because capitalists aren’t profit-hungry; they most certainly are. But the hunger for profits has always been strong even as the production and consumption of safety have varied. Profit-hunger did not suppress safety for centuries, could not prevent the demand for safety from arising and cannot put it back into the bottle now that it has emerged.

The industrial revolution and the rise of free markets created a tremendous increase in human productivity, thereby increasing real incomes throughout the world. The increases were not uniform; certain countries benefitted much more, and faster, than others. The higher incomes increased the demand for safety and for medical research, which in turn led to tremendous gains in life expectancy.

Longer life spans increased the demand for safety even more. This is our reductio ad absurdum played out in reverse. The longer we expect to live, the more future value we safeguard by sacrificing present pleasure with our “purchases” of safety. Prior to the 20th century, with life expectancies at birth of around 50 years even in the developed industrial nations, it didn’t pay to make great sacrifices in current consumption to safeguard the safety of people whose longevity was limited anyway. But as life expectancy steadily lengthened – particularly for those in the later stages of life – the terms of the tradeoff changed dramatically.

Risk Compensation

Another factor that greatly affects the balance between risk and safety also emerged in our artificial example. We noted that many pleasure-producing human activities carry risk along with their beneficial properties; indeed, the risk itself may even be the source of the pleasure. This is true of a wide range of human pursuits, ranging from the rollercoaster ride in our model to auto racing, casino gambling and bungee jumping. Some pastimes such as mountain climbing and hang gliding may produce secondary benefits like physical fitness to supplement their primary purpose of slaking a thirst for risk.

Mainstream society has traditionally viewed risky activities ambivalently. It has tolerated some (mountain-climbing) and frowned on others (gambling, illicit drug-taking) without acknowledging the bedrock similarity common to all. That failure has not only caused much needless death and suffering but has also endangered our freedoms.

Strongly influenced by mid-century muckraker Ralph Nader’s research on the Chevrolet Corvair (later discredited), the U.S. Congress passed legislation beginning in the 1960s requiring American automakers to include safety equipment on all vehicles as standard equipment rather than optional extras. Those safety features included seat belts and, eventually, air bags. Starting in 1975, University of Chicago economist Sam Peltzman published studies of the results of this legislation. His work showed that any lives saved among occupants of vehicles tended to be offset by lives lost among pedestrians, cyclists and other non-occupants. That is not to deny the existence of a trend toward fewer highway vehicle deaths. Indeed, that trend had been underway well before the safety legislation was passed, owing to factors such as improvements in vehicle design, production and maintenance. Sorting out the effects of this trend from those of the legislation required considerable statistical effort, not to say guesswork.

But the existence of a countervailing force was clear. Peltzman suggested that the safety devices made people feel safer, causing them to drive less carefully: willing to embrace a certain level of risk when driving, they compensated for their increased level of personal protection by taking additional driving risks.

Politicians, regulators and do-gooders of all sorts went ballistic when confronted with Peltzman’s conclusions. How dare he suggest that federal-government safety legislation was anything less than a shining example of nobility and good intentions at work? Rather than ponder the implications of his analysis, they hardened their position. Not only did they force businesses to produce safety, they began forcing consumers to consume safety as well. This campaign began with mandatory seat-belt legislation requiring first drivers, then passengers and eventually children to wear seat belts while vehicles were in operation.

Essentially, the implication of the regulatory position was that markets are dysfunctional. In a competitive market, producers not only build automobiles that provide transportation services, they also provide various complementary features for those autos. One of those features is safety. (In fact, virtually every safety feature was offered by private auto companies before it was required by the government.) Consumers can patronize auto companies and models that provide the most and best safety features, such as seat belts, air bags, anti-lock brakes and more. Or they can reject safety features by buying autos that omit them. Why would they do that? The obvious reason is that safety features require physical resources and engineering talent to provide, making them costly. Consumers may not wish to pay the cost.

By overriding producer decisions and consumer preferences, regulators in effect assert that markets do not work and government commands should replace the voluntary choices made in the marketplace. One obvious problem with this approach is that it creates momentum in the direction of a centrally planned, totalitarian economy and away from a voluntary, free-market one. But for those who believe that the end justifies the means, the loss of freedom may be justified by the greater safety resulting from the regulatory command-and-control approach.

As time went on, however, it became clear that the regulatory approach was not achieving the results claimed for it. Not only were markets being circumvented, but the regulatory nirvana of a risk-free world was no closer to reality. How could this be? What was going wrong?

As far back as 1908, the British counterpart of America’s auto club urged landowners to cut back their hedges to improve visibility for drivers of the newly invented automobile. A retired Army colonel responded to this appeal by noting that hedge-trimming had caused unintended consequences: his lawn had been filled with dust by zooming motorists who exceeded speed limits and skidded into his yard. When detained by police, the offenders maintained that “it was perfectly safe” to drive so fast because visibility was clear for a long distance. So the colonel changed his mind and let his shrubs grow in order to deter the speeders.

Following Sam Peltzman’s lead, researchers in succeeding decades discovered a myriad of analogous phenomena. The proliferation of wilderness- and mountain-rescue teams induced hikers and climbers to take more and bigger risks, thus assuring that deaths and injuries from hiking and climbing would not decline despite the increase in resources devoted to rescue. Parachute manufacturers built superior rip cords, but chutists pulled the rip cord later because they were more confident of the cord’s resilience. The result was stability of death rates for sky divers. Stronger levees did not reduce the incidence of death, injury and damage from floods because people were induced to remain in floodplain areas rather than move out. Indeed, the desirability of these locations meant that more people moved in when they became safer, leading to even more deaths, injuries and damage when a flood did occur. Workers who began wearing back supports still suffered injuries from lifting because the supports encouraged them to lift heavier loads, which overcame the effect of the supports. Research on children who began wearing more protective sports equipment consistently showed that the kids responded by playing more roughly, offsetting the benefits of the equipment and leaving injury rates undiminished. Better contraceptives and more effective medical treatments for HIV infection encouraged people to engage in riskier sexual practices, thereby preventing infection rates from declining as much as expected.

The technical term for all these cases is risk compensation. The general public and those with vested interests in government regulation tend to scoff at the concept, but its presence has been confirmed so repeatedly that it is now conventional wisdom. According to the popular purveyor of mainstream science, Smithsonian Magazine, “This counterintuitive idea was introduced in academic circles several years ago and is broadly accepted today…today the issue is not [about] whether it exists but about the degree to which it does.” We see it “in the workplace, on the playing field, at home, in the air” (“Buckle Up Your Seat Belt and Behave,” by William Ecenberger, April 2009).

The implications of this research for even so widely venerated a government policy as mandatory seat-belt use are startlingly negative. People already inclined to use seat belts are unaffected by the laws. Unwilling wearers who are forced to buckle up, however, are presumably risk-loving types; after all, they must have had a reason for refusing the belt in the first place, and risk-preference is the logical explanation. When their seat belts are firmly in place, they feel safer, and they will take additional driving risks in order to return to their preferred level of risk tolerance. Studies of seat-belt mandates by economists do tend to show this result.

Risk compensation is so widely accepted among scientists outside of government that a Canadian psychologist has carried it to a logical extreme. Gerald J. S. Wilde propounds the philosophy of risk homeostasis, which posits that human beings automatically adjust their behavior to keep their exposure to risk at a constant level, just as the human body regulates its internal temperature at 98.6 degrees Fahrenheit despite variations in external conditions.

The Economic View of Risk

We need not carry belief in adjustment to risk this far in order to recognize the futility of government attempts to fit society into a one-size-fits-all risk-free straitjacket. Not only is it a blatant violation of freedom and free markets, it doesn’t even achieve its intended objectives. It is wrong in theory and wrong in practice.

Risk is not an unambiguous bad thing. It is an unavoidable fact of life toward which different people take widely varying attitudes. For some people, risk is a benefit in and of itself. For practically everybody, risk is a by-product of other beneficial products and activities. Free markets give the most scope for the satisfaction of those different attitudes by allowing the risk-averse to avoid risk and the risk-loving to embrace it – and enabling both groups to do so efficiently via the price system.

Those who claim to see a role for government in allowing the risk-averse to avoid risk are practitioners of what Nobel Laureate Ronald Coase calls “blackboard economics.” This is favored by policymakers standing at a figurative blackboard and divorced from the real-world costs and complications of actually putting their government intervention into operation. In practice, risk and safety policies are delegated to regulators who issue orders and run roughshod over markets. The end result benefits regulators by increasing the size and power of government. The rest of us are stuck with obeying the regulations and picking up the tab.

DRI-250 for week of 1-27-13: What Are the Lessons of Econometrics?

An Access Advertising EconBrief:

What Are the Lessons of Econometrics?

Recently, Federal Reserve official Janet Yellen earned attention with a speech in which she justified monetary easing by citing the Fed’s use of a new “macroeconometric model” of the economy. The weight of the term seemed to give it rhetorical heft, as if the combination of macroeconomics and econometrics produced a synergy that each one lacked individually. Does econometrics hold the key to the mysteries of optimal macroeconomic policy? If so, why are we only now finding that out? And, more broadly, is economics really the quantitative science it often pretends to be?


As practiced for roughly eight decades, econometrics combines the knowledge of three fields – economics, mathematics and statistics. Economics develops the pure logic of human choice that gives logical structure to our quantitative investigations into human behavior. Mathematics determines the form in which economic principles are expressed for purposes of statistical analysis. Statistics allows for the systematic processing and analysis of sample data organized into meaningful form using the principles of economics and mathematics.

Suppose we decide to study the market for production and consumption of corn in the U.S. Economics tells us that the principles of supply and demand govern production and consumption. It further tells us that the price of corn will gravitate toward the point at which the aggregate amount of corn that U.S. farmers wish to produce will equal the aggregate amount that U.S. consumers wish to consume and store for future use.

Mathematics advises us to portray this relationship between supply and demand by expressing both as mathematical equations. That is, both supply and demand will be expressed as mathematical functions of relevant variables. The orthodox formulation treats the price of corn as the independent variable and the quantity of corn supplied and demanded, respectively, as the dependent variable of each equation. Other variables, such as income or input prices, are included in the equations as well, isolated from price in their effects on quantity; the coefficients attached to these variables are the parameters to be estimated. Finally, our model of the corn market will stipulate that the two equations produce an equal quantity demanded and supplied of corn.
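A minimal numerical sketch of such a two-equation model may help (all coefficient values below are invented for illustration):

```python
# Toy version of the corn-market model: linear demand and supply,
# with equilibrium defined by quantity demanded = quantity supplied.
# Coefficient values are made up for illustration.
a, b = 120.0, 2.0   # demand: Qd = a - b*P
c, d = 30.0, 1.0    # supply: Qs = c + d*P

# Setting Qd = Qs and solving for price gives the equilibrium.
p_star = (a - c) / (b + d)   # (120 - 30) / (2 + 1) = 30.0
q_star = a - b * p_star      # 120 - 2*30 = 60.0

# Check: at p_star, the supply equation yields the same quantity.
assert abs((c + d * p_star) - q_star) < 1e-9
print(p_star, q_star)  # 30.0 60.0
```

In a real econometric study the coefficients a, b, c and d would not be assumed but estimated from sample data, which is where statistics enters.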

Statistics allows us to gather data on corn without having to compile every single scrap of information on every ear of corn produced during a particular year. Instead, sample data (probably provided by government bureaus) can be consulted and carefully processed using the principles of statistical inference.

In principle, this technique can derive equations for both the supply of corn and its demand. These equations can be used either to predict future corn harvests or to explain the behavior of corn markets in the past. For over a half-century, training in econometrics has been a mandatory part of postgraduate education in economics at nearly all American universities.
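The statistical step can be sketched in miniature. Here simulated sample data are generated from an assumed “true” demand curve and the slope and intercept are recovered by ordinary least squares (the data and coefficients are invented, and real studies would also confront identification and simultaneity problems that this toy example ignores):

```python
import random

# Generate noisy sample observations from an assumed "true" demand curve,
# then recover its parameters with simple-regression OLS formulas.
random.seed(0)
true_a, true_b = 120.0, -2.0  # assumed demand: Q = 120 - 2*P
prices = [p / 2 for p in range(20, 81)]                       # prices from 10.0 to 40.0
quantities = [true_a + true_b * p + random.gauss(0, 1) for p in prices]

n = len(prices)
mean_p = sum(prices) / n
mean_q = sum(quantities) / n
b_hat = sum((p - mean_p) * (q - mean_q) for p, q in zip(prices, quantities)) \
        / sum((p - mean_p) ** 2 for p in prices)   # OLS slope
a_hat = mean_q - b_hat * mean_p                    # OLS intercept

print(round(a_hat, 1), round(b_hat, 2))  # estimates close to 120 and -2
```

With well-behaved data the estimates land near the true values; the econometrician’s difficulty is that real markets rarely supply data this tidy.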

Does this procedure leave you scratching your head? In particular, are you moved to wonder why mathematics and simultaneous equations should intrude into the study of economics? Or have we outlined a beautiful case of interdisciplinary cooperation in science?

Historical Evolution

As it happens, the development of econometrics was partly owing to the collision of scientific research programs that evolved concurrently in similar directions. Economics has interacted with data virtually since its inception. In the 1600s, Sir William Petty utilized highly primitive forms of quantitative analysis in England to analyze subjects like taxation and trade. Adam Smith populated The Wealth of Nations with various homely numerical examples. In the early 19th century, a French economist named Cournot used mathematics to develop pathbreaking models of monopoly and oligopoly, which anticipated more famous work done many decades later.

The French economist Leon Walras, who taught in Switzerland, and an Italian, Enrico Barone, applied algebraic mathematics to economics by expressing economic relationships in the form of systems of simultaneous equations. They did not attempt to fill in the parametric coefficients of their economic variables with real numbers – in fact, they explicitly denied the possibility of doing so. Their intent was purely symbolic. In effect, they were saying: “Isn’t it remarkable how the relationships in an economic system resemble those in a mathematical system of simultaneous equations? Let’s pretend that an economy of people could be described and analyzed using algebraic mathematics as a tool – and then see what happens.”

At almost the same time (the early 1870s), the British economist William Stanley Jevons developed the principles of marginalism, which have been the cornerstone of economic logic ever since. Economic value is determined at the margin – which means that both producers and consumers gauge the effects of incremental changes in action. If the benefits of the action exceed the costs, they approve the action and take it. If the reverse holds, they spurn it. Their actions produce tendencies toward marginal equality of benefits and costs, similar in principle to the quantity supplied/quantity demanded equality cited above. Jevons thought it amazing that this incremental logic seemed to correspond so closely to the logic inherent in the differential calculus. So he developed his theory of consumer demand in mathematical terms, using calculus. (It is also fascinating that the Austrian simultaneous co-discoverer of marginalism, Carl Menger, refused to employ calculus in his formulations.)

By the early 1900s, mathematics had taken firm root in economics. Soon the British statistician Ronald Fisher would modernize the science of statistics. It was only a matter of time until mathematical economists began using statistics to put numbers into the coefficient slots in their equations, which were previously occupied by algebraic letters serving as symbolic place-holders.

In 1932, economist and businessman Alfred Cowles endowed the Cowles Commission for Research in Economics, which later made its home at the University of Chicago. The purpose of the Commission was to do economic research, but the research was targeted toward the union of mathematics and economics. The original motto of the Commission was the same as that of the Econometric Society. It was taken from the words of the great physicist Lord Kelvin: “Science is measurement.”

Seldom have three words conveyed so much meaning. The implication was that economics was, or should strive to be, a “science” in exactly the same sense as physics, biology, chemistry and the rest of the hard physical sciences. The physical sciences did science by observing empirical regularities and expressing them mathematically. They tested their theories using controlled laboratory experiments. They were brilliantly successful. The progress of mankind can be traced by following the progression of their work.

In retrospect, it was probably inevitable that social sciences like economics should take this turn – that they should come to define their success, their very meaning, by the extent and degree of their emulation of the natural sciences. The Cowles Commission was the cutting edge of econometrics for the next 20 years, after which time its major focus shifted from empirical to theoretical economics – back to mathematical models of the economy using simultaneous equations. But by that time, econometrics had gained an impregnable beachhead in economics.

The Role of Econometrics

Great hopes were held out for econometrics. Of course, it was young as sciences go, but by careful study and endless trial and error, we would gradually get better at building economic models, choosing just the right mathematical forms and using exactly the right statistical techniques. Our forecasts would slowly, but surely, improve.

After all, we had a country full of universities whose economists had nothing better to do than monkey around with econometrics. They would submit their findings for review by their peers. The review would lead to revisions. The best studies would be published in the leading economics journals. At last, at long last, we would discover the empirical regularities of economics, the rules and truths that had remained hidden from us for centuries. The entire system of university tenure and promotion would be based on this process, leading to the notorious maxim “publish or perish.” Success would be tied to the value of government research grants acquired to do this research. The brightest young minds would succeed and climb the ladder of university success. They would teach in graduate school. A virtuous cycle of success would produce more learning, better economics, better econometrics, better models, better predictions, more economic prosperity in society, better education for undergraduates and graduate students alike and a better life for all.

As it turned out, none of these hopes have been fulfilled.

Well, that’s not entirely accurate. A system was created that has ruled academic life for decades and, incredibly, shows no sign of slowing down. Young economists are taught econometrics, after a fashion. They dutifully graduate and scurry to a university where they begin the race for tenure. Like workers in a sausage factory, they churn out empirical output that is read by nobody excepting a few of their colleagues. The output then dies an unlamented death in the graveyard of academic journals. The academic system has benefitted from econometrics and continues to do so. It is difficult to imagine this system flourishing in its absence.

Meanwhile, back at the ranch of reality, the usefulness of econometrics to the rest of the world asymptotically approaches zero. Periodically, well-known economists like Edmond Malinvaud and Carl Christ review the history of econometrics and the Cowles Commission. They are laudatory. They praise the Commission’s work and the output of econometricians. But they say nothing about empirical regularities uncovered or benefits to society at large. Instead, they gauge the benefits of econometrics entirely from the volume of studies done and published in professional journals and the effort expended by generations of economists. In so doing, they violate the very standards of their profession, which dictates that the value of output is judged by its consumers, not by its producers, and that value is determined by price in a marketplace rather than by weight on a figurative scale.

It is considered a truism within the economics profession that no theoretical dispute was ever settled by econometrics – that is a reflection of how little trust economists place in it behind closed doors. In practice, economists put their trust in theory and choose their theories on the basis of their political leanings and emotional predilections.

We now know, as surely as we can know anything in life, that we cannot predict the future using econometrics. As Donald (now Deirdre) McCloskey once put it, you can figure this out yourself without even going to graduate school. All you have to do is figuratively ask an econometrician the “American question”: “If you’re so smart, why ain’t you rich?” Accurate predictions would yield untold riches to the predictors, so the absence of great wealth is the surest index of the poverty of econometrics.

Decades of econometric research have yielded no empirical regularities in economics. Not one. No equivalent to Einstein’s equation for energy or the Law of Falling Bodies.

It is true that economists working for private business sometimes generate predictions about individual markets using what appears to be econometrics. But this is deceptive. The best predictions are usually obtained by a technique called “data mining,” which violates the basic precepts of econometrics. The economists are not interested in doing good econometrics or statistics – just in getting a prediction with some semblance of accuracy. Throwing every scrap of data they can get their hands on into the statistical pot and cooking up a predictive result doesn’t tell you much about which variables are the most important or the degree of independent influence each has on the outcome. But the only hope for predictive success may lie in assuming that the future is an approximation of the past, in which case the stew pot may cook up a palatable result.
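The statistical trap behind data mining can be sketched in a few lines of Python. This is a deliberately artificial illustration, not anyone’s actual model: regress pure noise on as many junk regressors as there are observations and the in-sample fit looks perfect, yet the fitted model says nothing about fresh data drawn from the very same process.

```python
# Sketch: why mining the data "fits" the past without explaining it.
# All data here are pure noise by construction.
import numpy as np

rng = np.random.default_rng(0)
n = 30
X = rng.normal(size=(n, n))    # as many unrelated regressors as observations
y = rng.normal(size=n)         # the "dependent variable" is random noise

# Least-squares fit; with a square, full-rank X the fit is exact.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def r_squared(X, y, beta):
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_in = r_squared(X, y, beta)          # essentially 1.0: a "perfect" fit

# A fresh draw from the same noise process: the mined model collapses.
X_new = rng.normal(size=(n, n))
y_new = rng.normal(size=n)
r2_out = r_squared(X_new, y_new, beta)  # far below the in-sample fit

print(round(r2_in, 3), r2_out < r2_in)
```

The in-sample R² is effectively 1.0 even though there is literally nothing to explain, which is exactly why a high past fit is no warrant for predictive success.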

The Great “Statistical Significance” Scandal

In the science of medicine, doctors are sworn to obey the dictum of Hippocrates: “First, do no harm.” For over twenty years, economists Deirdre McCloskey and Stephen Ziliak have preached this lesson to their colleagues in the social sciences. The use of tests of “statistical significance” as a criterion of value was rampant by the 1980s, when the two began their crusade against its misuse. For, as they pointed out, the term is misunderstood not only by the general public but even by the professionals who employ it.

When a variable is found statistically significant, this does not constitute an endorsement of its quantitative importance. It merely indicates how unlikely the estimated effect would be to arise from sampling variation alone if the true effect were zero. That information is certainly useful. But it is not the summum bonum of econometrics. What we usually want to know is what McCloskey and Ziliak refer to as the “oomph” of a variable (or a model in its totality) – how much quantitative effect it has on the thing it affects.
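The distinction can be made concrete with a toy calculation. The numbers below are invented for illustration: one coefficient is economically trivial but estimated very precisely (hence “significant”), while the other is economically huge but estimated noisily (hence “insignificant” by the conventional test).

```python
# Significance measures precision relative to sampling noise,
# not the size ("oomph") of an effect. Figures are made up.

def t_stat(coef, std_err):
    """Conventional t-statistic: estimate divided by its standard error."""
    return coef / std_err

tiny_but_precise = (0.01, 0.001)   # trivial effect, huge sample
big_but_noisy    = (5.0, 4.0)      # large effect, small sample

t1 = t_stat(*tiny_but_precise)     # about 10: "statistically significant"
t2 = t_stat(*big_but_noisy)        # 1.25: "statistically insignificant"

# Judged by the 5% convention (|t| > 1.96), the trivial variable
# "matters" and the big one "doesn't" - the inversion McCloskey
# and Ziliak complain about.
print(t1 > 1.96, t2 > 1.96)
```

A researcher who reads the first variable as important and discards the second has confused precision with magnitude, which is precisely the error the two authors documented.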

These two modern-day Diogenes figures conducted two studies of the econometric articles published in the American Economic Review, the leading professional journal. In the 1980s, most of the authors erred in their use and interpretation of the concept of statistical significance. In the 1990s, after McCloskey and Ziliak began writing and speaking out on the problem, the proportion of mistakes actually increased. Among the culprits were some of the profession’s most distinguished names, including several Nobel Prize winners. When it comes to statistics and econometrics, it seems, economists literally do not know what they are doing.

According to McCloskey – who is herself a practitioner and believer in econometrics – virtually all the empirical work done in econometrics to date will have to be redone. Most of the vast storehouse of econometric work done since the 1930s is worthless.

The Difference Between the Social Sciences and the Natural Sciences

Statistics has been proven to work well in certain contexts. The classical theory of relative-frequency probability is clearly valid, for example; if it weren’t, Las Vegas would have gone out of business long ago. Those who apply statistics properly, like W. Edwards Deming, have used it with tremendous success in practical applications. Deming’s legendary methods of quality control involving sampling and testing have been validated time and again across time and cultures.

When econometrics was born, a small band of critics protested its use on the grounds that the phenomena being studied in the social sciences were not amenable to statistical inference. They do not involve replicative, repetitive events that resemble coin flips or dice throws. Instead, they are unique events involving different elements whose structures differ in innumerable ways. The number of variables involved usually differs between the physical and social sciences, being vastly larger when human beings are the phenomena under study. Moreover, the free will exerted by humans is different from the unmotivated, instinctive, chemically or environmentally induced behavior found in nature. Free will can defy quantitative expression, whereas instinctive behavior may be much more tractable.

In retrospect, it now seems certain that those critics were right. Whatever the explanation, the social sciences in general and economics in particular resist the quantitative measurement techniques that took natural sciences to such heights.

The Nature of Valid Economic Prediction

We can draw certain quantitative conclusions on the basis of economic theory. The Law of Demand says that when the price of something rises, desired purchases of that thing will fall – other things equal. But it doesn’t say how much they’ll fall. And we know intuitively that, in real life, other things are never unchanged. Yet despite this severely limited quantitative content, there is no proposition in economic theory that has demonstrated more practical value.

Economists have long known that agriculture is destined to claim a smaller and smaller share of total national income as a nation gets wealthier. There is no way to predict the precise pattern of decrease, but we know that it will happen. Why? Agricultural goods are mostly either food or fiber. We realize instinctively that when our real incomes increase, we will purchase more food and more clothing – but not in proportion to the increase in income. That is, a 20% increase in real income will not motivate us to eat 20% more food – not even Diamond Jim Brady was that gluttonous. Similarly, increases in agricultural productivity will increase output and lower price over time. But a 20% decline in food prices will not call forth 20% more desired food purchases. Economists say that the demand for agricultural goods is price- and income-inelastic.
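The arithmetic behind this pattern prediction can be sketched in a few lines. The elasticity and budget-share figures below are illustrative assumptions, not estimates: all the argument requires is an income elasticity below one.

```python
# Sketch of why an income-inelastic good's budget share must shrink
# as incomes grow. The 0.5 elasticity and 20% budget share are
# assumed purely for illustration.

def pct_change_in_quantity(elasticity, pct_change_in_driver):
    """First-order approximation: elasticity times the driver's % change."""
    return elasticity * pct_change_in_driver

income_elasticity_of_food = 0.5   # assumption: inelastic (< 1)
income_rise = 20.0                # percent

food_rise = pct_change_in_quantity(income_elasticity_of_food, income_rise)
print(food_rise)   # 10.0 - food purchases rise, but by half as much as income

# Hence food's share of the budget falls as incomes grow:
old_share = 0.20
new_share = old_share * (1 + food_rise / 100) / (1 + income_rise / 100)
print(round(new_share, 3))   # about 0.183, down from 0.20
```

Any elasticity below one produces the same qualitative result – the share falls – which is exactly the kind of “more or less” prediction the next paragraph describes.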

These are the types of quantitative predictions economists can make with a clear conscience. They are couched in terms of “more” or “less,” not in terms of precise numerical predictions. They are what Nobel laureate F. A. Hayek called “pattern predictions.”

It is one of history’s great ironies that Hayek, an unrelenting critic of macroeconomics and foe of statistics and econometrics, nevertheless made some of the most prescient economic predictions of the 20th century. In 1929, Hayek predicted that the economic boom of the 1920s would soon end in economic contraction – which it did, with a vengeance. (Hayek’s mentor, Ludwig von Mises, went even further by refusing a prestigious appointment because he anticipated that “a great crash” was imminent.) In the 1930s, both Hayek and von Mises predicted the failure of the Soviet economy due to its lack of a functioning price system, particularly the absence of meaningful interest rates. That prediction, too, eventually bore fruit. In the 1950s, Hayek declared that Keynesian economic policies would produce accelerating inflation. Western industrial nations endured withering bouts of inflation beginning in the late 1960s and lasting for over a decade. Then Hayek broke with his fellow economists by insisting that this inflationary cycle could be broken, but only by drastically slowing the rate of monetary growth and enduring the resulting recession for as long as it lasted. Right again – and the recession was followed by two decades of prosperity that came to be known as the Great Moderation.

Ask the Fed

One of the tipoffs to the complicity of the mainstream press in the Obama administration’s policies is the fact that nobody has thought to ask Janet Yellen questions like this: “If your macroeconometric model is good enough for you to rely on it as a basis for a highly unconventional set of policies, why did it not predict the decline in Gross Domestic Product in fourth quarter 2012? Or if it did, why did the Fed keep that news a secret from the public?”

The press doesn’t ask those questions. Perhaps they are cowed by the subject of “macroeconometrics.” In fact, macroeconomics and econometrics are the two biggest failures of contemporary economics. And there are those who would substitute the word “frauds” for “failures.” Unless you take the position that combining two failures produces a success, there is no reason to expect anything valuable from macroeconometrics.

DRI-280 for week of 11-11-12: Restaurant-Dish Takeaway and Comparative Economic Systems

An Access Advertising EconBrief:

 Restaurant-Dish Takeaway and Comparative Economic Systems

You are eating dinner in a casual restaurant with a spouse. No sooner does the last forkful of food ascend toward your mouth than your waiter whisks away the plate. His request for permission – “Done with that?” – is purely a formality since the plate is gone before you can object.

You have observed a tendency in recent years for restaurant servers to remove dishes with increasing alacrity. You remark on this to your dinner companion who, unlike you, is a non-economist. Her all-purpose explanation of human behavior is binary: Is the object of study a nice guy or not? Nice guys remove dishes quickly so diners have more elbow room to relax.

You are an economist. You believe people act purposefully to achieve their ends. Moreover, you are thoroughly acquainted with tradeoffs. You have often had waiters take your plate before you were through with it. Some people bristle when they perceive others constantly hovering over them. There are even those – not you, of course, but boors and gluttons – who eat the food of others after finishing their own. One of these types might just react by snatching back his plate and declaring, a la John Paul Jones, “I have not yet begun to eat!”

The “nice-guy” explanation won’t suffice, since the quick-takeaway approach will suit many people well but others poorly. Restaurants that follow a consistent policy of quick takeaway risk offending some customers. Offending customers is not something restaurants do lightly. In order to make this risk worthwhile, there should be some strong motivation in the form of a compensating prospect of gain. What might that be?

One way to define economists is to say that they are the kind of people who ask themselves questions like this. And the mark of a good economist is that he can supply not only answers but also further implications and ramifications for social life and government policy.

The Economics of Restaurant Service

Americans have eaten in restaurants ever since America became the United States and before that. While the basic concepts underlying the restaurant sector have remained intact, structural changes have remade the industry in recent decades. The most important contributor has been the institution of franchising.

Fast-service franchising began in the 1920s with A&W root-beer stands and Howard Johnson motel-restaurants. Baskin Robbins, Dairy Queen and Tastee Freeze hopped on the bandwagon in the 1930s and 40s. McDonald’s became big business in the 1950s. The decade of the 1960s saw restaurant franchises zoom to over 100,000 in number. After overcoming legal challenges posed by antitrust and the economic threat of OPEC in the 70s, franchising became the dominant form of restaurant business organization in the 1980s.

Franchising enlarged markets and made competitive entry easier. By standardizing both product and service, it made restaurant operation easier. It raised the stakes involved in success and failure. All these increased the intensity of competition. In turn, this shone the spotlight on even the minutest aspects of restaurant operation. Franchises and food groups ran schools in which they taught their franchisees and managers the fundamentals of restaurant success. Managers went out on their own to put those principles into practice. The level of professional operation ratcheted upward throughout the industry.

The word “professional” means numerous things, but in context it refers to the rigorous, even relentless application of restaurant practices single-mindedly aimed at achieving profitable operation. This entails developing a repeat-customer base and making the largest profit possible from serving that base.

Whether the quality of all types of restaurant food improved is open to debate, but it cannot be doubted that average quality rose. Today, the “greasy spoons” of yesteryear are nearly as scarce as passenger pigeons.

It was during this period of franchise domination that the practice of quick takeaway gained widespread currency. Maximizing the daily turnover of the given restaurant capacity is a commandment in the operations bible for profit-maximization. Minimizing the time between the departure of one set of guests and the arrival of their successors at each table is one way to maximize turnover. One way to reduce the time taken by clearing tables at meal’s completion is to begin the process before departure rather than waiting until the guests get up to leave; that way, fewer dishes remain to remove upon actual departure.

Fast removal of dishes not only maximizes turnover, it also maximizes the revenue take from each separate turnover. From the restaurant owner’s perspective, maximizing the size of each table’s check is another step toward maximizing total profit. After-dinner items like coffee and dessert are the obvious route to that goal. (Alcoholic drinks are the before-dinner complement of this strategy, which is why attainment of a liquor license is a coveted goal for most restaurants.) Quick takeaway aids this strategy in two ways. First, it speeds the transition from dinner to dessert. Second, it aids the server, who is in no position to handle dish removal when arriving at the table laden with desserts.

“Quick takeaway” has been standard practice throughout most of the industry for quite a while, though. This doesn’t account for a recent speedup. For that, look deeper into the details of restaurant operation.

Table Size, Takeaway and… Demographic Trends?

Concomitant with the trend toward faster takeaway, the economist has also observed a trend toward smaller tables and booths in casual restaurants. Tables, chairs and booths come in standard sizes (there are five different booth sizes, for example), but the observed trend has been toward more booths designed to accommodate two people. Greater usage has been made of bar areas to provide food service, wherein diners can often obtain quicker service at the cost of table space and chairs limited to two people.

To understand the rationale for this changeover, pretend for a moment that all of the restaurant’s patronage consists of parties of two. Larger tables and booths would waste space and unnecessarily limit revenue per turnover, whereas designing for two would maximize the number of people served (and revenue collected) from an individual full-house turnover.

The link between table size and quick takeaway is obvious. Smaller table and booth sizes leave less room to accommodate elbows, books, newspapers, miscellaneous articles – not to mention additional dishes like dessert. (Technically, a smaller table doesn’t mean less room per person, but the whole idea behind the move to smaller tables is to achieve better utilization of capacity – the result leaves much less unused space available than did the larger tables and booths.) Now servers have even more reason to get those vacated dishes moving back to the kitchen, since there was barely room for them on the table to begin with. This reinforces the preexisting motivation for fast table-clearing and enlists the diners’ sympathy on the side of management, since table-crowding has become all too obvious.

There is still one major link left out of the chain of reasoning. In practice, restaurant parties do not consist entirely of twosomes. Casual restaurants usually include a few larger tables and/or booths, but what is to prevent larger parties from dominating smaller ones in the great scheme of things?

The last four decades have seen a demographic trend toward smaller U.S. household size. In 1970, the average U.S. household comprised 3.1 people. By 2000, this had fallen to 2.62; by 2007, to 2.6; and by 2010, to 2.59.

Several forces drove this trend. First has been a shrinking birthrate. Here the U.S. is merely following the lead of other Western industrialized nations, which saw birthrates shrink throughout the 20th century. In the U.S., the shrinkage has waxed and waned since the 1930s. The 1990s saw a modest resurgence, and the total fertility rate barely struggled above 2.0 births per woman early in the millennium. That is roughly the replacement point – the level at which births and deaths counterbalance. As noted by leading demographer Ben Wattenberg and others, the large influx of Hispanic immigrants in recent decades undoubtedly spearheaded this comeback. Hispanics tend to be Catholic, fecund and pro-life. But since 2007, the rate has backslid to 1.9; even Hispanics seem to have assimilated the American cultural indifference to reproduction.

Other cultural forces have reinforced demography. Birth control has become omnipresent and routine. Divorce and illegitimacy have lost their stigma, thereby conducing to households containing only one parent. Whereas formerly it was commonplace for two men or two women to room together and share expenses, the legal status granted to homosexual partnerships has now placed a question mark around those arrangements. (This applies particularly to males; apparently the politically correct status conferred upon homosexuals does not much reassure two heterosexual men who contemplate cohabitation.) Indeed, it is today less socially questionable for unmarried male/female couples to live together than for same-sex couples – but this is practical only as a substitute for marriage, so its effect on household size is negligible.

The aggregate effect of this cultural attrition has been nearly as potent as the declining birthrate. In 1970, the fraction of households containing one person living alone was 17%. By 2007, this had risen to 27%.

Given this trend toward declining household size, we would expect to see a corresponding decline in the average size of parties at casual restaurants. After all, households (particularly adults) typically dine together rather than separately. Certainly, large groups do assemble on special occasions and regular get-togethers. But the overall trend should follow this declining pattern.

And there you have it. Smaller average household size produces smaller restaurant table and booth size, which in turn produces quick – or rather, quicker – takeaway of dishes at or before meal completion.

Many people instinctively reject this kind of analysis because they can’t picture most restaurant owners and employees thinking this deeply about such minute details or putting their plans into practice. But the foregoing analysis doesn’t necessarily assume that all restaurant owners and managers are this single-minded and obsessive. In a hotly competitive environment, the restaurants that survive and thrive will be those that do take this attitude. They will attract more business – thus, the odds of encountering smaller tables and quick takeaway will be greater even though those practices may not be uniform across the industry. Indeed, this reasoning supports the very notion of profit maximization itself. This survivorship principle was pioneered by the great economist Armen Alchian.

The Larger Meaning of Little Details

Economics is capable of supplying answers to life’s quaint little questions. (Some people would rearrange the wording of that sentence to “quaint little answers to life’s questions.”) But economics was developed to tackle bigger issues. It turns out that the little questions bear on the big ones.

One of the big questions economists ask about the behavior of business firms is: Is it socially beneficial? Business firms exist because, and to the extent that, they produce goods and services cheaper and better than individual households can. The gauge of success is the welfare of consumers.

Smaller tables and quick takeaway enable restaurants to achieve better capacity utilization. This enables them to cut costs and serve more customers. These are beneficial to consumers. The more intense competition serves to lower prices of restaurant food. This also benefits consumers.

What about the quality of food served? Table size and dish removal do not bear directly on this question, but the industry shift towards corporate control and franchised ownership has sometimes been blamed for a supposed decline in overall food quality. This hypothesis overlooks the analytical nose on its face – the fact that consumers themselves are the only possible judges of quality. Even if we assume that average quality has fallen, we have no basis for second-guessing the willingness of consumers to trade off lower quality for lower price and greater quantity. This is the same sort of tradeoff we make in every other sphere of consumption – housing, clothing, entertainment, medical care, ad infinitum.

The Left wing has recently developed a variation on its theme of corporate malignity in production and distribution of food. Corporations are destroying the health of their customers by purveying food containing too much sugar, salt, fat and taste. Only stringent government regulation of restaurant operations can hope to counteract the otherwise-irresistible lure of corporate advertising and junk food.

This hypothesis is not merely wrongheaded but wrong on the facts. Consumers have every right to trade off lower longevity for heightened enjoyment of life. This is something people often do in non-nutritive contexts such as athletics, extreme leisure pursuits like hang-gliding or public-service activities like missionary work. History indicates that, far from promoting public health, government has aided and abetted the increased incidence of type-II diabetes through wrong-headed dietary insistence on carbohydrate consumption as the foundational building block of nutrition.

Any objective appraisal must recognize that nowhere on earth can consumers find such abundance and diversity of cuisine as in the United States of America. World cuisine is amply represented even in mid-size metropolitan markets like Kansas City, Missouri and Sioux City, Iowa. There is no taste left unfulfilled – even the esoteric insistence on vegetarian meals, organic cultivation and free-range animal raising.

Restaurant Regulation

In order to appreciate the operation of a free market for restaurant meals, we need to dial down our level of abstraction and compare economic systems. Heretofore we have conducted an imaginative exercise: we have explained a piece of restaurant operations under free-market competition. Now we need to envision how that piece would work under an alternative system like socialism.

In a socialist system, public ownership of the means of production dictates thoroughgoing, top-down regulation of business practice. For example, a regulator will pose the questions: How many booths and tables should the restaurant have? How big should they be? How far apart should they be spaced? How many people should we allow the restaurant to serve and how many should be allowed to sit at each table and booth?

In a socialist system, a regulator or group of them will ask this question in a centralized fashion. That is, he will ask it for a large grouping of restaurants – perhaps all restaurants, perhaps all fast-service restaurants, all bar-restaurants, all casual sit-down restaurants and all fine-dining restaurants. Or perhaps regulators will choose to group the restaurant industry differently. But group it they will and regulate each group on a one-size-rule-fits-all basis.

How will the regulator decide what regulations to impose? He will have government statistics at his disposal, such as the information cited above on average household size. It will be up to him to decide which information is relevant and how to apply the aggregate or collective information that governments collect to each individual restaurant being regulated. Even in the wildly unlikely instance that a regulator could actually visit each regulated restaurant, that could hardly happen more than once per year.

As we have just seen, free markets don’t work that way. One of the most misleading of popular perceptions is that free markets are “unregulated.” In reality, they are subject to the most stringent regulation of all – that of competition. But because the regulation part of competition works invisibly, people seem to miss its importance completely.

Markets do not wait for a central authority to certify a product as tasty and wholesome; they supply their own verdict. Consumers try it for themselves. They ask their friends or take note when opinions are volunteered. They seek out reviews in newspapers, online and on television. When the verdict is unfavorable, bad news travels fast. This applies even more strongly to the aspect of health, by the way. Nothing empties a restaurant quicker than food-borne illness or even the rumor of it – as entrepreneurs know only too well.

In contrast, government health regulation doesn’t move nearly this fast. The cumbersome process of visits by the health inspector, trial-by-checklist followed by re-inspection – a pattern broken only rarely by a shutdown – is a classic example of bureaucracy at work. Political favoritism can affect the choice of inspections and the result. The de facto health inspector is the free market, not the government employee who holds that title.

Competitive regulation is decentralized. In our restaurant example, decisions about table size and dish takeaway are not made by a far-off government authority and applied uniformly. They are made on the spot, at each restaurant, on a day-by-day basis. Restaurant owners and managers may possibly have the same government-collected information available to regulators, although it seems likely that they will be too busy to spend much time evaluating it. More to the point, though, they will have what the late Nobel laureate F. A. Hayek called the knowledge of “the particular circumstances of time and place.” That is the time- and place-specific information about each particular restaurant that only its owner and managers can mobilize.

Merely because average household size has fallen over the U.S. does not mean that households in each and every individual neighborhood are smaller. It may be the case, for example, that in Hispanic neighborhoods – not gripped by declining birthrates or an epidemic of divorce – average household size has not fallen as it mostly has elsewhere. Those restaurants would not feel the urge to decrease table size and speed up dish collections in line with most restaurants. And well they shouldn’t, since they would serve their particular customers better by not blindly playing follow-the-leader with national trends.

Would centralized regulators pick up on this distinction? No, they would have to be clairvoyant in order to sort out the kind of exceptions that markets automatically catch. After all, their aggregate statistics simply do not sift the data finely enough to make individual distinctions and differences visible.

But decentralized markets make those individual differences keenly felt by the people most affected. For restaurants, variations in consumer preference are felt by the very people who serve the consumer groups. Changes in demographic trends are witnessed by those whose very livelihoods are at stake. Competitive regulation works because it is on the spot, informed by the exact information needed and directed by the very people – on both sides of the market – with the motivation and expertise needed to make it effective.

Free markets allow participants to collect, disperse and heed information from any source but do not force people to respond to it. They do, however, provide incentives to respond proportionately to the magnitude of the information provided. A huge disruption of the supply of something will produce a big increase in its price, suggesting to people that they reduce their consumption of that good a lot. A small decrease in a good’s price will offer a gentle inducement to increase consumption of it, but not to go hog wild over it.

Again and again, we find ourselves saying that free markets nudge people in the right direction, towards doing the thing that we would want done if we could somehow magically observe all economic activity and direct it by waving a magic wand. Economists laconically define this quality as being “efficient.”

Restaurant Economics and Rational Behavior

This object lesson in restaurant economics reminds us of a perceptive argument for free markets put forward by Hayek. He was responding to longtime arguments put forth by critics on the Left. The same arguments have recently reechoed following the housing bubble, financial crisis and ensuing Great Recession. Free markets may be logical, the critics concede, but only if people are rational. Since people behave irrationally, free markets must fail in practice, however well grounded their principles might be.

Hayek observed that the critics had it backwards. Markets do not require rational behavior by participants in order to function. Instead, markets encourage rational behavior by rewarding those who act rationally and penalizing those who do not. The history of mankind reveals a gradual movement towards more rational behavior; the widely noted reduction in the incidence of warfare is one noteworthy example of this.

The Audience Responds With a Burst of Applause

Can you imagine a nobler progression from the trivially mundane to the globally significant? That is what economists do.

And, by way of gratitude for this insight, your dinner companion rewards you by inquiring: “OK, now explain why restaurants are so stingy with the butter these days.”