DRI-317 for week of 6-9-13: Che Lives! And Truth Dies…

An Access Advertising EconBrief:

Che Lives! And Truth Dies…

Cuban Communist revolutionary Che Guevara was killed in Bolivia in 1967, but his spirit lives on to this day. Indeed, his spirit is more popular than he ever was while alive. As observed by the noted Latin American historian and journalist Alvaro Vargas Llosa, Che’s likeness “adorns mugs, hoodies, key chains, wallets, baseball caps, toques, bandannas, tank tops, club shirts, couture bags, denim jeans, herbal tea, and… T-shirts.” Nothing bespeaks revolutionary fervor like the legendary photograph “of the socialist heartthrob in his beret during the early days of the [Cuban] revolution.” Since 1997, Che has been the subject of no fewer than five books and three movies, the best known of each being The Motorcycle Diaries, which was co-produced in movie form by Robert Redford.

Llosa’s 2005 article “The Killing Machine” (reprinted in The New Republic) explored the irony behind Che’s transformation “from Communist Firebrand to Capitalist Brand.” The irony lies not merely in the fact that he is celebrated today by the capitalists whom he held in contempt during his lifetime, tried to destroy and killed in fair numbers. There is also the undoubted fact that most of his worshipful admirers are ignorant of his brutal resume and his utter lack of genuine accomplishment. Finally, there is the polar contrast between the egg-cracking school of revolution and economic development favored by Che and his disciples on the Left and the less glamorous but vastly more successful approach of the free-market Right.

The Real Che Guevara

Che Guevara was born, not in Cuba, but in Argentina. His parents were upper-middle-class Argentines. He was well-educated, graduating from medical school. In 1952, at age 23, he embarked on the tour of South America recounted in The Motorcycle Diaries. He wrote about his experiences not only in the diaries but in letters to his mother, to whom he remained close throughout his life. A 1954 letter to her was posted from Guatemala, where Guevara witnessed the overthrow of President Jacobo Arbenz’s leftist government. “It was all a lot of fun,” he wrote, “what with the bombs, speeches, and other distractions to break the monotony I was living in.”

It was in Guatemala that Guevara formed one of his seminal revolutionary convictions: namely, that Arbenz’s overthrow was due to his failure to execute all of his enemies when he assumed power. It was a mistake that Guevara was determined not to repeat. He spent two years with Fidel Castro in Cuba’s Sierra Maestra Mountains before helping to orchestrate the celebrated storming of Havana and takeover of Cuba’s government in 1959. In the transition following the overthrow of dictator Fulgencio Batista, Guevara assumed the role of jailer and executioner.

Castro gave Guevara command of the fortress of San Carlos de la Cabana, a onetime bulwark against pirates that had more recently functioned as a military barracks. Now it became a prison housing Batista functionaries and supporters, reporters, clerics, foreigners, suspected spies and displaced persons. As warden, Guevara followed the principle he had laid down as guerilla leader: “If in doubt, kill him.” Recipients of this maxim had included fellow guerilla Eutimio Guerra, whom he suspected of informing for Batista’s secret police. “I ended the problem with a .32 caliber pistol, in the right side of his brain,” he wrote matter-of-factly in his diary. There was Aristidio, a peasant who was uncomfortable with the presence of the rebels. Echevarria, brother of a comrade-in-arms, was another acknowledged victim of his pistol.

At La Cabana, formal military tribunals were held at which Guevara presided. These occupied daytime hours. Night was the time for the executions. In the evening, appeals were heard and paperwork was completed to be forwarded to the Interior Ministry. After all the formalities had been observed, in the middle of the night, the prisoners were executed.

Javier Arzuaga was a Basque chaplain at La Cabana. He witnessed many executions there. According to Arzuaga, “Che Guevara presided over the appellate court. He never overturned a sentence… I pleaded many times with Che on behalf of prisoners. Che did not budge.”

Estimates of the total number of executions at La Cabana vary widely, from as few as 200 to as many as 2,000. Since Guevara commanded La Cabana only from January to June 1959, it seems likely that only about 400 executions were carried out under his authority. “He said they were all CIA agents,” said CIA agent and Cuban exile Felix Rodriguez, who eventually hunted down Guevara in Bolivia in 1967.

In a 1967 address entitled “Message to the Tricontinental,” Guevara declared the necessity for “hatred as an element of struggle, unbending hatred for the enemy, which pushes a human being beyond his natural limitations, making him into an effective, violent, selective, and cold-blooded killing machine.”

This was the real Che Guevara.

The Omelette Theory of Revolutionary Socialism

Guevara’s theory of revolutionary socialism was epitomized in the old saying, “You can’t make an omelette without breaking eggs.” (In its political context, the aphorism is most often attributed to New York Times reporter Walter Duranty, who won a Pulitzer Prize in the 1930s for reporting now adjudged to be scandalously lax in overlooking Stalinist atrocities in Soviet Russia.) Socialism viewed society not as an aggregation of individuals but as a collective, an organic unity. Revolution consisted of a remaking of society from scratch according to a recipe, much as a cook might assemble an omelette. The remaking process began with the breaking of eggs; i.e., the killing of individual human beings who opposed the revolution. This was no more regrettable than the killing of fertilized ova inside the eggs comprising an omelette – indeed, it was a necessity that demanded the hardening of the revolutionary’s psyche to the task.

Viewed in this light, Guevara takes on the same moral coloration as other revolutionary killers who preceded and followed him, such as Lenin, Stalin, Mao and Guevara’s boss, Castro. (Guevara was a follower, not a leader, so we might better compare him to Goebbels, Beria and Chou En-lai.) They justified their crimes against humanity in a similar manner. The needs of “society” outrank those of any individual. The only individuals who stand above the crowd are those who promote societal needs by fomenting revolution. Thus, the mass murder of counter-revolutionaries is not a crime but rather a prophylactic.

True to the omelette approach, the Castro regime began reassembling Cuban society in a completely different form. Prior to the revolution, the political structure of the Batista regime was crony-ridden and thoroughly corrupt. But the Cuban economy was one of the strongest in Latin America. That didn’t stop Castro et al. from breaking it up and playing with the pieces.

Che’s first economic portfolio was the dual leadership of the National Bank (Cuba’s central bank) and the National Institute for Agrarian Reform. His effectiveness as central banker was somewhat hindered by the fact that, according to his subordinate Ernesto Betancourt, he “was ignorant of the most elementary economic principles.” He oversaw the redistribution of land from the wealthy sugar growers to – no, not to the peasants but instead to bureaucrats in the Cuban government.

During Che’s tenure, Cuba’s powerful sugar industry collapsed. Total land under cultivation was reduced. Real incomes of Cubans began a decline that has continued to the present day. By 1997, the average Cuban subsisted on a monthly diet of five pounds of rice, one pound of beans, one pound of soybean paste, four eggs and less than one ounce of meat.

In 1963, Guevara was promoted to Minister of Industry. By this time, Cuba was internally producing virtually no raw materials at all and had to export its entire sugar crop to the Soviet Union as partial payment for the massive cash and oil subsidies (equivalent to billions of U.S. dollars annually) that kept the island alive. Consequently, it could not support any local industry.

Cubans began defecting in steady streams to the U.S., only ninety miles away. An ordinary person might have ascribed this to the failure of his policies. Guevara saw it as a victory. He told Egypt’s President Nasser that the success of land reform and similar policies should be measured in the number of people “who feel there is no place for them in the new society.” He began talking of the “New Man” that the revolution would create.

Exporting Revolution: The Author of Guerilla Warfare as Military Leader

One of the great internal debates within the world Communist movement was over the issue of “socialism in one country” – the Soviet Union – versus the export of revolution throughout the world. At first, the expense and difficulty of maintaining a military establishment kept the focus internal. As the Russian economy grew steadily worse and after the ghastly toll taken by World War II on Russia’s population and resources, Communists turned abroad in search of resources and converts to the cause.

Guevara was the most ideologically loyal of Cuba’s leaders. Castro was an opportunist who saw Communism primarily as a means to power, while his brother Raul fell in between Fidel and Guevara on the political spectrum. Che had made a name for himself as a guerilla leader during the years in the Sierra Maestra, although today his military victories and tactical skills have been disputed by the recollections of surviving comrades. Still, he was the logical choice as spearhead of the Soviet revolutionary salient into Latin America.

The results of Che’s military campaigns supported the revisionist skeptics who doubted his abilities. In the immediate aftermath of the victory over Batista, beginning in late 1959, Guevara took his guerilla show on the road to Nicaragua, the Dominican Republic, Panama and Haiti. Each of the guerilla forces he led was soundly defeated and scattered. Chastened, Guevara was content merely to act as advisor to Jorge Ricardo Masetti’s efforts to overturn the newly elected democratic government in Che’s home country of Argentina. Masetti failed just as miserably as Guevara had and was killed in the bargain.

In the Belgian Congo, a ghastly war had raged for years during the 1950s and early 1960s. The United States, South Africa and Cuban exiles on one side and Soviet Russia on the other sponsored local factions that warred for the dubious privilege of supplanting the Belgians. The United Nations strove ineffectually to mediate among these belligerents. Guevara mounted a 1965 expedition to aid Cuban (i.e., Russian) guerilla contingents on opposite sides of the country. On a continent whose blood-drenched history is approached only by that of Eastern Europe and Asia, the Cuban Communists left a bloody trail of death and destruction whose remnants are etched in local memories still. But Guevara himself was forced to leave the country to avoid capture.

Bolivia in 1967 was Guevara’s last try at notching a successful guerilla operation outside Cuba. To his chagrin – as recorded in his captured diary – he found little local support and acceptance. “The peasant masses don’t help us at all,” he whined in print. But his real misfortune came at the hands of the local Communist opposition, which betrayed him by leading him into an ambush in southeast Bolivia at Yuro ravine. There he was trapped by the Bolivian military, which was soon joined by a CIA unit led by Cuban exile Felix Rodriguez.

Che Guevara’s life came to an ignominious but altogether fitting end when he was accorded the same treatment he had given to so many other human beings. He was summarily executed. His body was tied to the skids of a helicopter, taken to military headquarters and photographed. It was buried in an unmarked grave and remained lost until 1997, when a retired Bolivian general revealed the location and the remains were exhumed.

Che Guevara’s Comeback in the Movies

The 1960s were memorable in the United States for the rise of the counterculture and the New Left. This movement was well represented in Hollywood and in 1969 Che Guevara was the subject of a major motion picture, Che!, starring Omar Sharif as Guevara and Jack Palance as Fidel Castro.

Sharif, one of the great matinee idols of his day, was just off his international triumph in David Lean’s Lawrence of Arabia and his Hollywood star turn as Nick Arnstein in Funny Girl, which featured Barbra Streisand’s Oscar-winning debut as Fanny Brice. In short, he was ensconced as a glamorous leading man. The role of Che Guevara was created in that same mold, but tailored to the left-wing revolutionary temper of the times. The movie portrayed Che as a heroic fighter against capitalist colonialism and the corrupt U.S. foreign-policy establishment. It occupied a prominent place on the list of big-budget box-office flops of the era. Leonard Maltin’s Movie and Video Guide handed it its rarely-granted rating of “BOMB,” calling the film “one of the biggest film jokes of the 1960s” for its “comic-book treatment” of its subject. Palance’s flamboyant portrayal of Castro did win some critical plaudits, however.

Resounding failure usually dampens the enthusiasm of Hollywood producers. Here, it took some three decades, physical exhumation and intellectual resurrection to re-ignite it. The spate of Che biographies and the rise to financial preeminence of the Hollywood left wing laid the tinder; famous Hollywood personalities supplied the spark. Che Guevara became a hero on film once again, as he never had been in life.

The publication in English of The Motorcycle Diaries jump-started the new cycle of Che Guevara romanticism. The Diaries were apparently kept by Guevara and his friend Alberto Granado on their 1952 journey across South America by motorcycle. Guevara was then a 23-year-old medical student on the verge of completing his studies, while Granado was a 29-year-old doctor. The two toured Argentina, Chile, Peru and Colombia, and each recorded his impressions. The trip allegedly radicalized Guevara, who vowed to devote his life to the poor.

The books were reviewed rhapsodically in literary circles, which are occupationally academic, hence left-wing in orientation. The trip was described as a voyage of self-discovery comparable to Don Quixote’s quest. Guevara was characterized as one of the leading guerilla figures of the 20th century – which, if true, certainly denigrates the revolutionary vigor of the century. Given this reception, it was only a matter of time until Hollywood renewed its interest in Che.

In 2004, aging Hollywood heartthrob and Oscar-winning director Robert Redford produced a movie version of the book. The directing chores, though, were handed off to Walter Salles. The movie captured the romantic quality of the books; Maltin awarded it his highest rating of **** and it featured an Oscar-winning song. Two Hispanic actors, Gael Garcia Bernal and Rodrigo de la Serna, played Guevara and Granado as young men who are politically awakened to the “repressive” character of “right-wing, conservative” political regimes in South America.

In 2005, American director Josh Evans produced Che Guevara on what was evidently a small budget, once again using a foreign cast devoid of familiar Hollywood names, although Sonia Braga did appear in a cameo role. The movie showed Guevara reviewing his life in flashback while languishing in a Bolivian prison. Reviews described the production as “amateurish” and it was released straight to video without appearing in movie theaters.

In 2008, Che returned to the big time in a major studio production. Oscar-winning director Steven Soderbergh produced and directed Che, actually two two-hour-plus movies that split Guevara’s life story into equal parts. His star was Oscar-winning actor Benicio Del Toro, who claimed to have previously known of Che Guevara only as a “bad person.” But starring in this two-part blockbuster opened his eyes to the “real man.” Soderbergh found his motivation for the project in the “bucketsful of love” expressed for Guevara by the surviving colleagues and family members whom he interviewed.

Part 1 (The Argentine) follows Guevara’s life through the victory of the Cuban revolution. Part 2 (The Guerilla) portrays Guevara as exporter of revolution. As in Che! 40 years earlier, Che is portrayed as a sympathetic hero rather than the implacable killer he really was. Significantly, Soderbergh omitted any depiction of Guevara’s tenure as jailer and executioner at La Cabana prison. His rationalization for this decision was “there is no amount of accumulated barbarity that would have satisfied the people who hate him.” Del Toro’s marathon performance earned him Best Actor at the prestigious Cannes Film Festival.

The film earned many good reviews along with mediocre ones. Those reviewers who expressed dissatisfaction, however, generally were unhappy with the length of the films rather than their factual inaccuracy.

The overriding significance of Che Guevara’s career as romantic hero of motion pictures is the utter unwillingness to come to grips with the truth. Hollywood’s disposition to glamorize outlaws is legendary, yet even outrageously romanticized portrayals of murdering thugs like Jesse James, Billy the Kid and John Dillinger have acknowledged profound character flaws in their subjects. Clearly, the difference was made by politics, which not only gave filmmakers moral license to invent heroism where none existed but also to look the other way at murder.

What would happen if, say, Adolf Hitler (or Goebbels) received the same kind of artistic treatment? We don’t have to speculate about this, for we already know the answer. When historian David Irving published books purporting to treat Hitler’s actions and philosophy even-handedly, he was read out of civilized society. Despite the startling realization that Communism racked up an even bigger score of murder, famine, rapine, territorial conquest, concentration camps and political repression than did Nazism, we treat Communists with kid gloves while treating Nazis as unredeemed monsters. If there is an obvious explanation for this dissonance, it is that so many leftists are or were Communists and socialists and comparatively few were Nazis.

Obvious as this seems, it still doesn’t explain everything. The founding organ of modern conservatism, National Review magazine, began publishing in 1955. The overwhelming majority of those on its first editorial masthead were former Communists. Indeed, most of those were former Communist spies or intelligence assets. Yet they were able not only to overcome their original fanaticism but to reverse its polarity.

There is no more tragic irony in the case of Che Guevara than the sort of apology made for Guevara’s destruction of Cuba’s economy. “He had to tear capitalism out by its roots; those he killed were the evil exploiters; of course he made mistakes, since he was starting all over again from scratch with nothing to guide him; he was building the New Socialist Man in a hostile world – he needed more time to show results.” This is the tiresome litany of excuses.

To hear Che Guevara’s apologists talk, one would think that nobody had ever successfully promoted economic development in Latin America. But that would be quite wrong. We need only return to the days of yesteryear, in Che Guevara’s home country of Argentina, to prove that.

Argentina from 1852 to 1928: the Influence of Juan Bautista Alberdi

In 1852, the Argentine revolutionary leader Justo Jose de Urquiza overthrew the ruling tyrant of Argentina, Juan Manuel Rosas – just as Castro overthrew Batista over a century later. Urquiza had a key lieutenant who advised him on political economy, just as Guevara later advised Castro. The lieutenant’s name was Juan Bautista Alberdi.

At fourteen years of age, Alberdi walked the length of Argentina, from pampas to deserts – just as Guevara later traversed South America. When Urquiza took power, Alberdi represented his government abroad – not, to be sure, by employing Guevara’s guerilla tactics but as a diplomat and intellectual. (Like Guevara, Alberdi also died abroad.) In fact, as Llosa pointed out, “Alberdi never killed a fly,” and he opposed Argentina’s war with Paraguay.

Unlike Guevara, Alberdi believed in limited government, not totalitarianism. Alberdi supported free trade. He encouraged immigration into Argentina. He staunchly supported private property rights. Like Guevara, Alberdi wrote books. One of them, Bases y puntos de partida para la organizacion de la Republica Argentina, formed the basis for the Argentine Constitution of 1853.

Alberdi differed from Guevara diametrically not only on first principles but also in results attained. Guevara virtually destroyed the Cuban economy. In contrast, look at what Alberdi achieved. In the last half of the 19th century, Argentina had the second-highest rate of economic growth in the world. At the turn of the century, real incomes of Argentine workers exceeded those of workers in Switzerland, Germany and France (!!!). By 1928, when Argentina reversed course and reverted to tyranny, it had the 12th-highest per-capita GDP in the world.

And, as Alvaro Vargas Llosa noted wryly, “his [Alberdi’s] likeness does not adorn Mike Tyson’s abdomen.”

Today, the murderous totalitarian and abject failure Che Guevara is widely considered a hero. Juan Bautista Alberdi lies forgotten in his grave. This is an irony to wring tears not just from Argentina – the whole world should weep. Where is Andrew Lloyd Webber when you need him, anyway?

DRI-306 for week of 6-2-13: What Is (Or Was) ‘American Exceptionalism’?

An Access Advertising EconBrief:

What Is (Or Was) ‘American Exceptionalism’?

Ever since the 1970s, but increasingly since the financial crisis of 2008 and ensuing Great Recession, eulogies have been read for American cultural and economic preeminence. If accurate, this valedictory chorus would mark one of the shortest reigns of any world power, albeit also the fastest rise to supremacy. Even while pronouncing last rites on American dominance, however, commentators unanimously acknowledge our uniqueness. They dub this quality “American exceptionalism.”

This makes sense, since you can’t very well declare America’s superpower status figuratively dead without having some idea of what gave it life in the first place. And by using the principles of political economy, we can identify the animating features of national greatness. This allows us to perform our own check of national vital signs, to find out if American exceptionalism is really dead or only in the emergency room.

Several key features of the American experience stand out.

Free Immigration

Immigration (in-migration) fueled the extraordinary growth in U.S. population throughout its history. Immigration was mostly uncontrolled until the 1920s. (The exception was Chinese immigration, which was subject to controls in the late 19th century.) Federal legislation in the 1920s introduced the concept of immigration quotas determined by nation of origin. These were eventually loosened in the 1960s.

From the beginning of European settlement in the English colonies, inhabitants came not only from the mother country but also from Scotland, Ireland, Wales, the Netherlands, Spain, France, Germany and Africa. Scandinavia soon contributed to the influx. Some of the earliest settlers were indentured servants; slaves were introduced in the middle of the 17th century.

Today it is widely assumed that immigrants withdraw value from the U.S. rather than enhancing it, but this could hardly have been true during colonial times when there was little or no developed economy to exploit. Immigrants originally provided the only source of labor and have continued to augment the native labor supply down to the present day. For most of American history, workers were drawn to this country by wages that were probably the highest in the world. This was due not just to labor’s relative scarcity but also to its productivity. Immigrants not only increased the supply of labor (in and of itself, tending to push wages down) but also complemented native labor and made it more productive (tending to push wages up). The steady improvements in technology during the Industrial Revolution drove up productivity and the demand for labor faster than the supply of labor increased, thereby increasing real wages and continually drawing new immigrants.
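
The two opposing wage effects described above can be made concrete with a minimal numerical sketch. Everything below (the linear labor-demand curve, the fixed labor supply, and all the numbers) is an invented assumption for illustration, not an estimate of any actual labor market.

```python
# Minimal sketch of the two opposing wage effects of immigration described
# above. The linear demand curve and every number are invented assumptions.

def equilibrium_wage(demand_intercept, demand_slope, labor_supply):
    """Inverse labor demand: wage = intercept - slope * employment.
    With a fixed (vertical) labor supply, the market-clearing wage is
    read directly off the demand curve."""
    return demand_intercept - demand_slope * labor_supply

# Baseline: native labor force of 100 (arbitrary units).
w0 = equilibrium_wage(50.0, 0.2, 100)   # 30.0

# Immigration alone: labor supply rises to 120, pushing the wage down.
w1 = equilibrium_wage(50.0, 0.2, 120)   # 26.0

# Immigration plus complementarity: the same supply increase, but labor
# demand shifts out (higher intercept) because immigrants make native
# labor more productive. The wage can end up higher than before.
w2 = equilibrium_wage(60.0, 0.2, 120)   # 36.0

print(f"baseline wage: {w0:.1f}")
print(f"supply shift only: {w1:.1f}")
print(f"supply shift plus demand shift: {w2:.1f}")
```

Whether the wage rises or falls on balance depends, as the paragraph says, on whether the demand shift (productivity and complementarity) outruns the supply shift; the numbers above are chosen merely to show both possibilities.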

Economists have traditionally underrated the importance of entrepreneurship in economic development, but historians have noted the unusual role played by Scottish entrepreneurs like Andrew Carnegie in U.S. economic history. At the turn of the 20th century, the business that became the motion-picture industry was founded almost entirely by immigrants. Most of them were Jews from Eastern Europe who stepped on dry land in the U.S. with no income or assets. They built the movie business into the country’s leading export industry by the end of the century. In recent years, Asians and Hispanics have taken up the entrepreneurial slack left by the native population.

An inexplicably ignored chapter in U.S. economic history is the culinary (and gastronomic) tradition linked to immigration. Early American menus were heavily weighted with traditional English dishes like roast beef, breads and puddings. Soon, however, immigrants brought their native cuisines with them. At first, each ethnic enclave fed its own appetites. Then immigrants opened restaurants serving their own communities. Gradually, these establishments attracted native customers. Over decades, immigrant dishes and menus became assimilated into the native U.S. diet.

Germans were perhaps the first immigrants to make a powerful impression on American cuisine. Many Germans fought on the American side in the Revolution. After independence was won, a large percentage of opposing Hessian mercenaries stayed on to make America their home. Large German populations inhabited Pennsylvania, Illinois and Missouri. The so-called Pennsylvania Dutch, whose cooking won lasting fame, were German (“Deutsch”).

In the 19th century, hundreds of thousands of Chinese laborers came to the U.S., many to work on western railroad construction. They formed Chinese enclaves, the largest one located in San Francisco. Restaurants serving regional Chinese cuisines sprang up to serve these immigrants. When Americans displayed a taste for Chinese food, restaurateurs discovered that they had to tailor the cooking to American tastes, and these “Chinese restaurants” served Americanized dishes in the dining room and authentic Chinese food in the kitchen for immigrant patrons. Today, this evolutionary cycle is complete; American Chinese restaurants proudly advertise authentic dishes specialized along Mandarin, Szechuan and Cantonese lines.

Meanwhile, back in the 1800s, Italians were emigrating to America. Italian food was also geographically specialized and subsequently modified for American tastes. Today, Italian food is as American as apple pie and as geographically authentic as its Chinese counterpart. The Irish brought with them a simple but satisfying mix of recipes for starches and stews. Although long restricted to cosmopolitan coastal centers, French cooking eventually made its way into the American diet.

Mexicans began crossing the Rio Grande into the U.S. during the Great Depression. Their numbers increased in the 1950s, and this coincided with the advent of Mexican food as the next great ethnic specialty. Beginning in the late 1960s and coinciding with the rise of franchising as the dominant form of food retailing, Mexican food took the U.S. palate by storm. It followed the familiar pattern, beginning with Americanized “Tex-Mex” and culminating with niche Mexican restaurants catering to authentic regional Mexican cuisines.

Today, restaurant dining in America is an exercise in gastronomic globe-trotting. Medium-size American cities offer restaurants flying the ethnic banners of a dozen, fifteen or twenty nations – not just Italian, Chinese and Mexican food, but the dishes of Spain, Ethiopia, Thailand, Vietnam, Ireland, India, Greece, Denmark, the Philippines, Germany and more.

Immigration was absolutely necessary to all this development. As any experienced cook can attest, simple copying of recipes could not have reproduced the true flavor of these dishes, nor could non-natives have accomplished the delicate task of modifying them for the American palate while keeping the original versions alive until they eventually found favor with the U.S. market.

It is ironic that so much debate focuses on the alleged need for immigrants to assimilate U.S. culture. This single example shows how America has assimilated immigrant culture to a far greater degree. Indeed, American culture didn’t exist prior to immigration and has been created by this very assimilation process. Now, apart from learning English, it is not clear how much is left for immigrants to assimilate. For example, consumer products like Coca-Cola and McDonald’s hamburgers have become familiar to immigrants before they set foot here through U.S. exports.

Cultural Heterogeneity

Many of the great powers of the past were trading civilizations, like the Phoenicians and the Egyptians. By trading in the goods and languages of many nations, they developed a cosmopolitan culture.

In contrast, physical trade formed a fairly modest fraction of economic activity in the U.S. until well into the 20th century. The U.S. achieved its cultural heterogeneity less through trade in goods and services than via trade in people. The knowledge and experience shared by immigrants with natives produced a similar result.

Economists have long known that these two forms of trade substitute for each other in useful ways. For example, efficient use of a production input – whether labor, raw material or machine – requires that its price in different locations be equal. Where prices are not equal, equalization can be accomplished directly by movements of the input from its low-priced location to its high-priced location, which tends to raise the input’s price in the former location and lower it in the latter. Or, it can be accomplished indirectly by trade in goods produced using the input: since the good will tend to be cheaper where the input is cheap, that location will export the good, raising the demand for the input and pulling the input’s price up there, while imports displace production and push the input’s price down in the high-priced location.

Input-price equalization is a famous case of trade in goods obviating the necessity for trade in (movement of) people. Cultural heterogeneity is a much less well-known case of the reverse phenomenon – immigration substituting for trade in goods.
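
As a rough illustration of the direct (migration) route to input-price equalization, here is a minimal sketch. It assumes linear labor demand in each of two regions and a fixed total labor force; every parameter is invented for illustration, and the indirect (goods-trade) route described above would push the two wages toward each other in the same way.

```python
# Minimal sketch of input-price equalization by migration, assuming linear
# labor demand in two regions and a fixed total labor force. All numbers
# are invented for illustration.

def equalize_by_migration(a_high, a_low, b, total_labor):
    """Each region's wage is a_i - b * L_i. Workers move toward the
    high-wage region until wages are equal; solve
    a_high - b*L_high = a_low - b*(total_labor - L_high) for L_high."""
    l_high = (a_high - a_low + b * total_labor) / (2 * b)
    common_wage = a_high - b * l_high
    return l_high, common_wage

# Region A has the stronger labor demand (e.g. a high-productivity
# location); region B the weaker. Total labor force is 100.
l_a, wage = equalize_by_migration(a_high=60.0, a_low=40.0, b=0.5, total_labor=100)
print(f"labor in A: {l_a:.1f}, labor in B: {100 - l_a:.1f}, common wage: {wage:.1f}")
# labor in A: 70.0, labor in B: 30.0, common wage: 25.0
```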

The importance of cultural heterogeneity has been almost completely overshadowed by the modern obsession with “diversity,” which might be concisely described as “difference for difference’s sake.” Unlike mindless diversity, cultural heterogeneity is rooted in economic logic. Migration is governed by the logic of productivity; people move from one place to another because they are more productive in their new location. Estimates indicate, for example, that some low-skilled Mexican workers are as much as five times more productive in the U.S. than in Mexico.

That is only the beginning of the benefits of migration. Because workers often complement the efforts of other workers, immigration also raises the productivity (and wages) of native workers as well. And there is another type of benefit that is seldom, if ever, noticed.

The late, great Nobel laureate F.A. Hayek defined the “economic problem” more broadly than merely the efficient deployment of known inputs for given purposes. He recognized that all individuals are limited in their power to store, collate and analyze information. Consumers do not recognize all choices available to them; producers do not know all available resources, production technologies or consumer wants. The sum of available knowledge is not a given; it is locked up in the minds of billions of individuals. The economic problem is how to unlock it in usable form. That is what free markets do.

Our previous extended example involving immigration and the evolution of American cuisine illustrates exactly this market information process at work. The free market made it efficient and attractive for immigrants to come to the United States. U.S. consumers became acquainted with a vast new storehouse of potential consumption opportunities – eventually, U.S. entrepreneurs could also mine this trove of opportunity. Immigrant producers became aware of a new source of demand and new inputs with which to meet it. And the resulting knowledge became embedded in the mosaic of American culture, making our cuisine the most cosmopolitan in the world.

The upshot is that, without consciously realizing it, Americans have had access to vast amounts of knowledge, expertise and experience. This store of culture has acted as a kind of pre-cybernetic Internet, the difference being that culture operates outside our conscious perception. At best, we can observe its residue without directly measuring its input. One way of appreciating its impact is to compare the progress of open societies like the U.S. with civilizations that were long closed to outside influence, like Japan and China. Isolation severely retarded their economic development.

Status Mobility

In his recent book, Unintended Consequences, financial economist Edward Conard stresses the necessity of risk-taking entrepreneurial behavior as a source of economic growth. The risks must be organically generated by markets rather than artificially created by politicians; the latter were the source of the recent financial crisis and ensuing Great Recession.

According to Conard, it is the striving for status that drives entrepreneurs to run big risks in search of huge rewards that few will ultimately attain. Status may take various forms – social, occupational or economic. Its attraction derives from the human craving to distinguish oneself. It is this need for disproportionate reward – whether measured in esteem, dollars or professional recognition – that balances the high risk of failure associated with big-league entrepreneurship.

In the U.S., status striving has long been ridiculed by sociologists and psychologists. “Keeping up with the Joneses” has been stigmatized as a neurotic preoccupation. Yet the American version of status compares favorably with its ancient European ancestor.

England is famous for its class stratification. A half-century ago, its “angry young men” revolted against a stifling class system that defined status at birth and sharply limited upward mobility. Elsewhere in Europe, lingering remnants of the feudal system remained in place for centuries.

But the U.S. was comparatively classless. Economics defined its classes, and the economic categories embodied a high degree of mobility. Even those who started on the bottom rung usually climbed to the higher ones, where the rarefied climate proved difficult to endure for more than a generation or two.

The best feature of the status-striving U.S. class system has been the broad distribution of its benefits. The unimaginable fortunes acquired by titans of industry like Carnegie, Rockefeller, Gates and Buffett have made thousands of people rich while building a floor of real income under the nation. Our working lives and leisure have been defined by these men. The value created by a Bill Gates, say, is almost beyond enumeration.

Thus, it is not the striving for status per se that makes a national economy exceptional. It is the mobility that accompanies status. This will determine the form taken by the status striving process.

Before free markets rose to prominence, wealth was gained primarily through plunder. Seekers after status were warlords, kings or politicians. They gained their status at the expense of others. Today, plunder is the exception rather than the rule. Drug cartel bosses are a vestige of Prohibition; they profit purely from the illegalization of a good. Politicians are their counterpart in the straight world.

When status is accompanied by mobility, anybody can gain status. But they cannot have it without increasing the real incomes of large numbers of people. Ironically, the biggest complaint lodged against the American version of capitalism – that it promotes greed and income inequality – turns out to be dead wrong on both counts. Mobility is achieved through competition and free markets, which absolutely demand that in order to get rich the status-striver must satisfy the wants of other people en masse. And income inequality is the inevitable concomitant of risk-taking entrepreneurship – somebody must bear the risks of ferreting out the dispersed information about wants, resources and technologies lodged in billions of human brains. If we don’t reward the person who succeeds in doing the job, the billions of people who gain from the process don’t get their real-income gains.

Free Markets

You might suppose that bureaucracy was invented by the New Deal. In fact, Elizabethan England knew it well. Price controls date back at least to the Roman emperor Diocletian. Prior to Adam Smith’s lesson on the virtues of trade and David Ricardo’s demonstration of the principle of comparative advantage, the philosophy of mercantilism held that government must tightly regulate economic activity lest it burst its bonds. Thus, free markets are a historical rarity.

England’s abolition of the Corn Laws in the mid-1800s provides a brief historical window on a world of free international trade, but the U.S. prior to 1913 probably best approximates the case of a world power living under free markets. Immigration was uncontrolled and tariffs were low; both goods and people flowed freely across political boundary lines.

Prices coordinate the flow of goods and services in the “present”; that is, over short time spans. Production and consumption over time are coordinated by markets developed to handle the future delivery of goods (futures and forward markets) and by prices that modify the structure of production and consumption in accord with our needs and wants for consumption and saving in the present and the future. For the most part, these prices are called interest rates.

Interest rates reflect consumers’ desires to save for future consumption and producers’ desires to invest to augment productive capabilities for the future. Just as a price tends to equalize the amount of a good producers want to produce and consumers want to purchase in a spot market, an interest rate tends to equalize the flow of saving by consumers with the investment in productive capital by producers. Without interest rates, how would we know that the amounts of goods wanted by consumers in the future would correspond to what producers will have waiting for them? As it happens, we are now experiencing firsthand the answer to that question under the Federal Reserve’s “zero-interest-rate policy,” which substitutes artificial, Federal Reserve-determined interest rates for interest rates determined by the interaction of consumers and producers.
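
The market-clearing role of the interest rate described in this paragraph can be sketched with invented linear saving and investment schedules. The parameters are illustrative assumptions only; the second half of the sketch shows what happens when the rate is pinned below its market-clearing level, as the text suggests a zero-interest-rate policy does.

```python
# Minimal sketch of an interest rate equalizing saving and investment,
# with invented linear schedules: S(r) = s0 + s1*r, I(r) = i0 - i1*r.

def clearing_rate(s0, s1, i0, i1):
    """Setting S(r) = I(r) gives r* = (i0 - s0) / (s1 + i1)."""
    return (i0 - s0) / (s1 + i1)

s0, s1, i0, i1 = 10.0, 400.0, 40.0, 600.0
r_star = clearing_rate(s0, s1, i0, i1)
print(f"market-clearing rate: {r_star:.1%}")          # 3.0%

# Pin the rate below r* (a stand-in for a zero-interest-rate policy):
# desired investment now exceeds desired saving, so the two sets of
# plans no longer mesh, the coordination failure described in the text.
r_pinned = 0.01
saving = s0 + s1 * r_pinned        # 14.0
investment = i0 - i1 * r_pinned    # 34.0
print(f"at {r_pinned:.0%}: saving {saving:.1f} < investment {investment:.1f}")
```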

Without knowing what policies were followed, we can scrutinize development outcomes in places like China, India, Southeast Asia and Africa and draw appropriate inferences about departures from free markets. High hopes and failure were associated with statism and market interference in China, India and Africa for over a half-century. Successful development has followed free markets like pigs follow truffles. But the obstacles to free markets are formidable, and no country has as yet found the recipe for keeping them in force over time.

What About Political Freedom?

Discussions of American exceptionalism invariably revolve around America’s unique political and constitutional history and its heritage of political freedom. Yet the preceding definition of exceptionalism has leaned heavily on economics. The world does not lack for political protestations and formal declarations of freedom and justice. Many of these are modeled on our own U.S. Constitution. History shows, though, that only a reasonably well-fed, prosperous population is willing to fight to preserve its political rights. Time and again, economic freedom has preceded political freedom.

When the level of economic development is not sufficient and free markets are not in place, the populace is not willing to sacrifice material real income to gain political freedom because it is already too close to the subsistence level of existence. And even in the exceptional case (usually in Africa or Latin America) in which a charismatic, status-striving leader heads a successful political movement, the leader will not surrender leadership status – even though the ostensible purpose of the independence movement was precisely to gain political freedom. Instead, he or she preserves that status by cementing political power for life. Why? Because there is no substitute status reward to fall back on; his or her economic and social status depends on wielding political power. This is the fault of the political Left, which has demanded that “mere” economic rights be subordinated to claims of equality, with the result that neither equality nor wealth has been realized.

Observation shows that when economic growth begins – but not before that – people begin to sacrifice consumption to control pollution and improve health. Similar considerations apply to political freedom. Expressing the relationship in the jargon of economics, we would say that political freedom is a normal good. This means that we “purchase” more of it as our real incomes increase. In this context, the word “purchase” does not imply acquisition with money as the medium of exchange; it means that we must sacrifice our time and effort to get political freedom, leaving less leisure time available for consumption purposes.

The U.S. was the exception because its economic freedom and real income were well advanced before the Revolution. Enough Americans were willing to oppose the British Crown to achieve independence because colonial America was living well above the subsistence level – at that, the ratio of rebels to Tories was close to even. George Washington was offered a crown rather than a Presidency, but he declined – and declined again when offered a third Presidential term. His Virginia plantation offered a substitute status reward; he did not need to hold office to maintain his economic status or social esteem. It is interesting to speculate about the content of the Constitution and the course of U.S. history had the U.S. lacked the firm economic foundation laid by its colonial history and favorable circumstances.

DRI-325 for week of 5-26-13: Stockman on Reagan: He Should Have Known

An Access Advertising EconBrief:

Stockman on Reagan: He Should Have Known

The publishing sensation du jour – at least in the field of politics and economics – is the cautionary memoir of former Reagan administration budget director David Stockman, entitled The Great Deformation. The title derives from the religious Great Reformation of the 16th century, which befits the missionary zeal Stockman brings to his tale. The titular deformation was suffered by American capitalism during the 20th century, but particularly during Stockman’s adult life.

Stockman’s book is part history, part gloomy prophecy and part score-settling with his critics. Each part has value for readers, but it is his historical recollections that deserve primary attention. Stockman assigns significant responsibility for the ongoing demise of free-market capitalism to policies initiated during the Reagan administration he served. One of those policies is widely considered to be President Reagan’s crowning achievement – the demise of the Soviet Union.

Stockman’s Revisionist View of Soviet Decline

The conventional account has the Soviet Union declining rapidly during the 1980s before finally toppling of its own weight between 1989 and 1991. The proximate cause was economic: the Soviet economy was so ponderously inefficient that it eventually lost the capacity to feed and clothe its own citizens, most of whom were forced to either stand in queues for hours daily at government stores or purchase basic goods at elevated prices in the black market. The government devoted most of its resources to producing military goods or subsidizing Communism abroad. The lack of a functioning price system – current prices for goods and services and interest rates determining the value of capital goods – severed the link between the production of goods and services and the wants of the people, thereby leaving the economy adrift and floundering.

Stockman does not quarrel with this verdict. In fact, he endorses it so forcefully that he claims that the celebrated Reagan-administration defense build-up of the early 1980s was unnecessary and counterproductive. It was unnecessary because the Soviet economy was already collapsing of its own weight and, consequently, the Soviet military was no threat to the U.S. (The emphasis is mine.) It was counterproductive because military expenditures are an inherent drag on the production of goods and services for private consumption. And during the 1980s, we experienced what Stockman called “the greatest stampede of Pentagon log-rolling and budget aggrandizement by the military-industrial complex ever recorded.”

Stockman accuses Reagan Defense Secretary Caspar Weinberger of picking out of a hat 7% annual increases on a baseline defense-spending figure of $142 billion, simply because 7% represented the midpoint between candidate Reagan’s promised 5% annual increase and the 8-9% demanded by a hawkish group of advisors led by Senator John Tower of Texas. Stockman waxes at indignant length about the budgetary waste embedded in this program. He juxtaposes this fiscal profligacy alongside the Administration’s ineffectual efforts to cut spending on entitlement programs, which resulted in a feeble reduction of 1/3 of 1% in the percentage of GDP deployed by government.

Stockman goes on to record the fiscal depredations of the two Bush administrations that followed, curiously lauding the Clinton administration for its budget “surpluses” despite the fact that these were really accounting artifacts achieved by off-budget borrowing. He then describes, with mounting alarm, the fiscal death spiral executed by the Bush-Obama regimes – more properly linked as a hyphenate than were Reagan-Bush – which combined monetary excess with fiscal profligacy to nose-dive the U.S. economy into the ground.

The 742-page volume contains a wealth of valuable material, much of it insider information delivered by a participant in federal budgetary battles and Wall Street machinations. But the datum of immediate interest is Stockman’s putative debunking of Reagan’s role as Soviet dragon slayer.

History as Hindsight

David Stockman claims that the Soviet Union was in terrible economic shape when Ronald Reagan took office in January, 1981 – so bad that it presented no serious military threat to the U.S. Given the force of his argument, it may seem somewhat surprising that he presents no direct evidence to support this claim. It is not difficult to grasp the nature of his inferential case, though.

The Soviet Union’s political collapse began in 1989 and occurred with shocking suddenness. Even more stunning was its non-violence; hardly a shot was fired despite various confrontations involving tanks and troops. Stockman’s implicit argument presumably runs something like this: “In order for public mobilization against the government to have attained this critical mass by 1989, economic deterioration as of 1981 must have been well under way and quite advanced.” This is a reasonable inference. Moreover, it is supported by the research and release of documents that occurred in the window of time between 1991 and the onset of the Putin regime, after which access to Soviet archives was again closed to the West and even to Russian researchers.

But that isn’t all. Implicitly, Stockman continues along the following lines: “Because we now know that the Soviet Union was scheduled to fall apart beginning in 1989, it was therefore unnecessary for the Reagan administration to waste all that money on its defense build-up. And since that build-up was a major contributor to the ensuing decline and fall of the U.S. economy and free markets, Reagan himself must bear a big share of responsibility for the fix we are in today.” Obviously, the quoted paraphrase is my interpretation of Stockman’s argument; the reader will have to judge its fairness.

But if it is a fair rendering of Stockman’s case, then that case fails utterly. Each of the three elements comprising it is false. The first of these is the most obvious. Stockman has committed the fallacy of hindsight. Thirty years later, we now know that the Soviet Union was scheduled to fall apart in 1989. But in 1981, Ronald Reagan didn’t know it. In fact, nobody knew it.

Soviet Disintegration: The View From 1981

Based on David Stockman’s harsh judgment of Ronald Reagan’s conduct, one reads The Great Deformation with bated breath, waiting for the meeting with Reagan at which Stockman sternly declaims, “Mr. President, the Soviet Union is falling apart. You know it as well as I do. How dare you waste all this money on military spending? You’re going to spend us into the poorhouse. Thirty years from now, our economy will implode.”

But we wait in vain; no such passage is included in the book. Presumably, no such conversation took place because David Stockman was just as ignorant of the Soviet Union’s true economic status as Reagan was.

Actually, there is every reason to believe that Reagan was better informed than Stockman. At least two books have been written about Reagan’s campaign to win the Cold War, the better one being Reagan’s War, by Peter Schweizer. We learn that Reagan himself expected the Soviet economy to collapse; he was one of the few people outside the ranks of hard-core free marketers who did. What he didn’t know was when this would happen.

This is the economist’s eternal bugbear, after all, and Reagan was the only U.S. President to actually hold a degree in economics. Economists usually know what’s going to happen, but they are notoriously unable to predict the timing of events, which accounts for their lackluster forecasting reputation.

At the point of Reagan’s inauguration, the Soviet Union’s star was ascendant internationally. It had not yet retreated from Afghanistan, and its advisors and acolytes were meddling throughout the Third World. The U.S. was under worldwide pressure to succumb to détente and negotiate away its nuclear superiority – the very factor that Stockman claims made its defense build-up superfluous.

Since Stockman didn’t know that the Soviet Union was a basket case but is implicitly saying that Reagan should have known it, he must mean that somebody else who did know it told him the truth, or should have told him. Who would this have been? The logical candidate would have been the CIA. But we now know that the CIA didn’t know it; their failure to provide advance warning of the Soviet collapse is proverbial.

Perhaps Stockman thinks that economists should have anticipated the economic collapse of the Soviet Union. At first thought, this doesn’t seem so unreasonable, does it? But the leading academic economist of the day, Paul Samuelson of MIT, is now famous – no, make that infamous – for including a running graph in succeeding editions of his best-selling college textbook that shows the Soviet Union overtaking the U.S. in per-capita economic growth during the 1980s. (Actually, the point of convergence was quietly moved farther back in time in later editions.) Clearly, Samuelson didn’t have a clue about actual economic growth in the Soviet Union or he never would have made a fool of himself in print for posterity.

Maybe some of Reagan’s free-market economist friends knew the truth, or should have known. Certainly F.A. Hayek and Ludwig von Mises knew that the Soviet economy was fated to collapse in the 1930s when they argued that economic calculation under socialism was impossible. But Mises was dead by 1981 and Hayek was in his 80s, only recently rehabilitated within the profession by the award of his Nobel Prize in 1974. In reality, free-market economists were demoralized by their ostracism from the profession and the seeming invulnerability of the Soviet Union to public criticism and the laws of economics. They had long ago predicted its demise, only to be confounded and humiliated when it kept rolling along – aided in no little part by the subsidies it received from Western governments and the praise it gained from Western intellectuals. Few, if any, free-market economists were optimistically predicting the end of Communism in 1981.

Realistically speaking, nobody should now expect Ronald Reagan to have predicted the future then. But suppose, hypothetically, he knew the general state of the Soviet economy. What then? The fact is that even this knowledge would not have made Stockman’s argument valid – just the opposite would seem to be true.

History is Not a Controlled Experiment

If the Soviet Union had been as impregnable as most of the world believed it to be, then there might have been a case for détente or a milder policy of rapprochement. Reagan didn’t know the truth, but he suspected that the Soviet economy was shaky. He told his advisors, “Here’s my strategy of the Cold War: We win, they lose.” He wanted to give the Soviet economy the push that would shove it over the cliff and destabilize the regime. He knew that forcing the Kremlin into an arms race would pose a fatal dilemma: Either the Soviet government would devote more resources to military production or it would refuse the challenge and devote more resources to civilian goods and services. The former choice would expose it to civil revolt and eventual rebellion. The latter choice would condemn it to inferiority in conventional arms as well as nuclear capability; it would pose no threat to the U.S. or the rest of the world and could be isolated to die on the vine in good time.

The Soviet Union had killed upwards of 100 million people during the 20th century by means of execution, deliberately contrived famine and exile to gulags. Reagan felt that the Soviet economy would collapse eventually. But when? For all he knew, it might take ten years, twenty years, thirty years – after all, the Soviets had lasted 64 years up to that point and they had most of the world on their side. There seemed to be a window of opportunity to win the Cold War now, but he wouldn’t be President forever. If he failed to do the job on his watch, his successor(s) might give away whatever advantage he had gained. In the event, despite the single-minded dedication of Reagan and his small band of advisors, it took two full terms – eight years – to get the job done as it was.

World War II was a conventional war against totalitarianism, fought with conventional weapons against Hitler, Tojo and Mussolini. It was won by spending (and wasting) vast quantities of money on soldiers, bombs, ships, planes, tanks and the like. Most of these armaments were actually used for their intended purposes. Millions of people were killed in the process.

The Cold War was an unconventional war against totalitarianism, fought and won by Ronald Reagan by the unconventional means of spending his Soviet opponents into submission when they elected to compete with him militarily. The superior American economy provided the wherewithal to defeat the inferior Soviet economy. In the direct sense, nobody was killed in the process, although the tremendous waste involved did cost many lives. But because the Soviet Union killed millions of people directly and indirectly and would have continued to do so, Reagan’s actions saved many lives.

The futility of Stockman’s case is illustrated by his criticism of Reagan’s build-up of conventional weapons. Reagan had campaigned against the Soviet Union’s nuclear capability, Stockman maintained, yet insisted on spending vast sums on conventional weapons. “What actually kept the Soviets at bay was the retaliatory [capability] of submarine-based Trident missile warheads…along with…land-based Minuteman ICBMs…This deterrent force was what actually kept the nation safe and had been fully in place for years.” In contrast, Stockman scoffed, “the $20 billion MX ‘peacekeeper’ missile…was an offensive weapon that undermined deterrence and wasn’t actually deployed until the Cold War was nearly over.” No, the MX contributed as much as, if not more than, the Tridents and the ICBMs to winning the Cold War, because all these weapons were not used to fight a conventional war. They were not even valuable primarily for their deterrent effect, although that was also quite useful. They “fought” the war peacefully by forcing the Soviet Union to use resources to compete with them. Indeed, the greatest weapon of the Cold War was never even produced. Former Soviet leader Mikhail Gorbachev and his advisors specifically cited the Strategic Defense Initiative (derisively termed “Star Wars” by its detractors) as the crucial factor in the Soviet Union’s demise.

At this juncture in the discussion, the point may seem almost too obvious to need stress: Ronald Reagan’s actions themselves contributed to the collapse of the Soviet Union and may well have been its proximate cause; therefore we cannot assume that the Soviet Union would have collapsed anyway if Reagan had not acted as he did.


The first element of Stockman’s case is that because we now know the Soviet Union was falling apart, Reagan should have known it and should have guessed that the Soviet Union would therefore collapse in 1989. This is false; it is the hindsight fallacy. The second element is that because the Soviet Union did collapse in 1989, it would have done so even if Ronald Reagan had not actively won the Cold War. This is not merely false; it is an absurdity since Ronald Reagan’s actions themselves contributed decisively to the speed and completeness of the collapse.

The Fiscal Fallout: Eisenhower vs. Bush

At this point, David Stockman’s case against Ronald Reagan as Cold Warrior is on the ropes. But there remains a counter-argument to the points raised in opposition. Even if Reagan did win the Cold War, it must have come at a terrible cost if it led on a direct line to the end of free-market capitalism in America and impending fiscal and monetary collapse, as David Stockman says it did. Who could argue with that?

David Stockman, as a matter of fact. Stockman himself provides the final refutation of his own argument against Reagan’s role in the Cold War. And, incredibly, he shows no realization of it. Stockman lists President Eisenhower among his heroes for the courage he displayed by taking on a job disdained by his predecessor, Harry Truman. Eisenhower and his Treasury Secretary, George Humphrey, recognized that wartime tax rates were far too high to promote prosperity. But they were determined to complement a reduction in tax rates with spending reductions that would bring government’s percentage take of GDP (then called gross national product, or GNP) back into line with historic norms. So between them they hammered out over $145 billion in defense-budget reductions over three years that accomplished the de facto demobilization of the military.

Stockman is fortunate that the Reagan-era parallel was not a snake; else he would be in need of anti-venom serum. The Soviet Union’s collapse was not clear-cut until just after Reagan left office. Reagan had no chance to unwind the successful chain of events he had set in motion. Thus, to make Reagan’s triumph complete, his successor George Bush needed to do the same job Eisenhower did – namely, drastically downsize a military budget whose only rationale had been to win the Cold War non-violently. It was the cravenness and stupidity of the Bush Administration, not Ronald Reagan’s failings, that started the budget rot leading to our present problems. At least, that is the end product of David Stockman’s own logic.

The third element of David Stockman’s case – that Reagan’s fiscal program led directly to our present malaise – is refuted by Stockman’s own choice of Eisenhower as hero and Stockman’s explanation of Eisenhower’s defense-budget cuts. Bush, not Reagan, was the malefactor.


The Stockman Manifesto

David Stockman’s manifesto is a tour de force that commands our close attention. In particular, his debunking of the so-called “financial crisis” in 2008 should be required reading for every American. Unfortunately, his omnibus explanation of our fiscal and monetary woes explains too much – or not enough, depending on how you choose to express it. Perhaps Stockman’s own involvement in the Reagan Administration colored his analysis of Reagan’s policies. For whatever reason, his history of the Cold War is inexcusably short-sighted and unworthy of somebody whose views are otherwise acute.

DRI-326 for week of 5-12-13: Paul Krugman Can’t Stand the Truth About Austerity

An Access Advertising EconBrief: 

Paul Krugman Can’t Stand the Truth About Austerity

The digital age has produced many unfortunate byproducts. One of these is the rise of shorthand communication. In journalism, this has produced an overreliance on buzzwords. The buzzword substitutes for definition, delineation, distinction and careful analysis. Its advantage is that it purports to say so much within the confines of one word – which is truly a magnificent economy of expression, as long as the word is telling the truth. Alas, all too often, the buzzword rips through its subject matter like a chainsaw, leaving truth mutilated and amputated in its wake.

The leading government budgetary buzzword of the day is “austerity.” For several years, members of the European Union have either undergone austerity or been threatened with it – depending on whose version of events you accept. Now the word has crossed the Atlantic and awaits a visa for admission to this country. It has met a chilly reception.

In a recent (05/11/2013) column, economist Paul Krugman declares that “at this point, the economic case for austerity…has collapsed.” In order to appreciate the irony of the column, we must probe the history of the policy called “austerity.” Tracing that history back to the 1970s, we find that it was originated by Keynesian economists – ideological and theoretical soul mates of Paul Krugman. This revelation allows us to offer a theory about otherwise inexplicable comments by Krugman in his column.

The Origin of “Austerity”

The word “austerity” derives from the root word “austere,” which is used to denote something that is harsh, cold, severe, stern, somber or grave. When applied to a government policy, it must imply an intention to inflict pain and hardship. That is, the severity must be inherent in the policy chosen – it cannot be an invisible or unwitting byproduct of the policy. There may or may not be a compensating or overriding justification for the austerity, but it is the result of deliberation.

The word was first mated to policy during the debt crisis. No, this wasn’t our current federal government debt crisis or even the housing debt and foreclosure crisis that began in 2007. The original debt crisis was the 1970s struggle to deal with non-performing development loans made by Western banks to sovereign nations. At first, most of the debtor countries were low-income, less-developed countries in Africa and Latin America. Eventually, the contagion of bad loans and debt spread to middle-income countries like Mexico and Argentina. This episode was a rehearsal for the subprime-mortgage-loan defaults to follow decades later.

The lending behind the original debt crisis was motivated by the same sort of “can’t miss” thinking that later produced the housing mess. Sovereign nations were the perfect borrowers, reasoned the big Wall Street banks of the 1970s, because a country can’t go broke the way a business can. After all, it has the power to tax its citizens, doesn’t it? Since it can’t go broke, it won’t default on its loan payments.

This line of reasoning – no, let’s call it “thinking” – found willing sets of ears on the heads of Keynesian economists, who had long been berating the West for its stinginess in funding development among less-developed countries. Agencies like the International Monetary Fund and the World Bank perked up their ears, too. The IMF was created at the end of World War II to administer a worldwide regime of fixed exchange rates. When this regime, named for the venue (Bretton Woods, New Hampshire) at which it was formally established, collapsed in 1971, the IMF was a great big international bureaucracy without a mandate. It was only too happy to switch its attention to economic development. By brokering development loans to poor countries in Africa, Central and South America, it could collect administrative fees coming and going – coming, by carving off a chunk of the original loan in the form of an origination fee, and going, by either rolling over the original loan or reformulating the development plan completely when the loan went bust.

The reformulation was where the austerity came in. Standard operating procedure called for the loan to be repaid either with revenues from the development project(s) funded by the loan(s) or by tax revenues reaped from taxing the profits of the project(s). Of course, the problem was that development loans made by big bureaucratic banks to big bureaucratic governments in Third World nations were usually subverted to benefit leaders in the target countries or their cronies. This meant that there were usually no business revenues or tax revenues left from which to repay the loans.

Ordinarily, that would leave the originating banks high and dry, along with the developers of the failed investment projects. “Ordinarily” means “in the context of a free market, where lenders and borrowers must suffer the consequences of their own actions.” But the last thing Wall Street banks wanted was to get their just deserts. They influenced their colleagues at the IMF and the World Bank to act as their collection agents. The agencies took off their “economic development loan broker” hats and put on one of their other hats; namely, their “international economics expert advisor” hat. They advised the debtor country how to extricate itself from the mess that the non-performing loan – the same one that they had collected fees for arranging in the first place – had got it into. Does this sound like a conflict of interest? Remember that these agencies were making money coming and going, so they had a powerful incentive to maintain the process by keeping the banks happy – or at least solvent.

Clearly, the Third World debtor country would have to scare up additional revenue with which to pay the loan. One possible way would be to divert revenue from other spending. But the agency economists were Keynesians to the marrow of their bones. They believed that government spending was stimulative to the economy and increased real income and employment via the fabled “multiplier effect,” in which unused resources were employed by the projects on which the government funds were spent. So, the last thing they were willing to advise was a diversion of spending away from the government and into repayment of debt. On the other hand, they were willing to advise Third World countries to acquire money to spend through taxation. If government were to raise $X in taxes and spend those $X, the net effect would not be a wash – it would be to increase real income by $X. Why? Because taxation takes away not only money that private citizens would otherwise spend but also money that they would otherwise save; private spending therefore falls by less than $X while government spends the entire $X. The net effect is to increase total spending – or so went the Keynesian thinking. One of Keynes’ most famous students, Nicholas Kaldor, later to become Lord Kaldor in Great Britain, put the point bluntly in a famous 1963 article: “Will Underdeveloped Countries Learn to Tax?”
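
The arithmetic behind this claim is the textbook “balanced-budget multiplier.” A minimal sketch in generic Keynesian notation (a standard textbook derivation, not anything specific to the development agencies), where b is the marginal propensity to consume:

\[
Y = C + I + G, \qquad C = a + b\,(Y - T), \qquad 0 < b < 1.
\]

Raising taxes and spending by the same amount, \(\Delta G = \Delta T = X\), yields

\[
\Delta Y = b\,(\Delta Y - X) + X \;\Longrightarrow\; (1-b)\,\Delta Y = (1-b)\,X \;\Longrightarrow\; \Delta Y = X.
\]

Taxpayers cut consumption by only bX (the rest of the tax would have come out of saving), while government spends the full X, so measured income rises by exactly X.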

Thus, the development agencies kept a clear conscience when they advised their Third World clients to raise taxes in order to repay the debt incurred to Western banks. Not surprisingly, this policy advice was not popular with the populations of those countries. That policy acquired the descriptive title of “austerity.” Viewing it from a microeconomic or individual perspective, it is not hard to see why. By definition, a tax is an involuntary exaction that reduces the current or future consumption of the vict-…, er, the taxpayer. The taxpayer gains from it if, and only if, the proceeds are spent so as to more-than-compensate for the loss of that consumption and/or saving. Well, in this case, Third World taxpayers were being asked to repay loans for projects that failed to produce valuable output in the first place and did not produce the advertised gains in employment either. A double whammy – no wonder they called it “austerity!”

How austere were these development-agency recommendations? In Wealth and Poverty (1981), George Gilder offers one contemporary snapshot. “The once-solid economy of Turkey, for example, by 1980 was struggling under a 55 percent [tax] rate applying at incomes of $1,600 and a 68 percent rate incurred at just under $14,000, while the International Monetary Fund (IMF) urged new ‘austerity’ programs of devaluation and taxes as a condition for further loans.” Note Gilder’s wording; the word “austerity” was deliberately chosen by the development-agency economists themselves.

“This problem is also widespread in Latin America,” noted Gilder. Indeed, as the 1970s stretched into the 80s and 90s, the problem worsened. “[Economic] growth in Africa, Latin America, Eastern Europe, the Middle East and North Africa went into reverse in the 1980s and 1990s,” onetime World Bank economist William Easterly recounted sadly in The Elusive Quest for Growth (2001). “The 1983 World Development Report of the World Bank projected a ‘central case’ annual per-capita growth [rate] in the developing countries from 1982 to 1995,” but “the actual per-capita growth would turn out to be close to zero.”

Perhaps the best explanation of the effect of taxes on economic growth was provided by journalist Jude Wanniski in The Way the World Works (1978). A lengthy chapter is devoted to the Third World debt crisis and the austerity policies pushed by the development agencies.

Two key principles emerge from this historical example. First, today’s knee-jerk presumption that government spending is always good, always wealth enhancing, always productive of higher levels of employment depends critically on the validity of the multiplier principle. Second, the original definition of austerity was painful increases in taxation, not decreases in government spending. And it was left-wing Keynesians themselves who were its practitioners, and who ruled out government spending decreases in favor of tax increases.

Fast Forward

Fast forward to the present day. Since the 1970s, the worldwide experience with taxes has been so unfavorable – and the devotion to lower taxes has become so ingrained – that virtually nobody outside of Scandinavia will swallow a regime of higher taxes nowadays.

Keynesian economics, thoroughly discredited not only by its disastrous economic-development policy failures but also by the runaway inflation it started but could not stop in the 1970s, has emerged from under the earth like a zombie in a George Romero movie. Its devotees still preach the gospel of stimulative government spending and high taxes. But they stress the former and downplay the latter. And, instead of embracing their former program of austerity as the means of overcoming debt, they now accuse their political opponents of practicing it. They have effected this turnabout by redefining the concept of austerity. They now define it as “slashing government spending.”

The full quotation from the Paul Krugman column quoted earlier was: “At this point, the economic case for austerity – for slashing government spending even in the face of a weak economy – has collapsed.” Notice that Krugman says nothing about taxes even though that was a defining characteristic of austerity as pioneered by development-agency Keynesians of his youth. (Krugman does not neglect devaluation, the other linchpin, since he advocates printing many more trillions of dollars than even Ben Bernanke has done so far.)

When Krugman’s Keynesian colleagues originated the policy of austerity, they did it with malice aforethought – using the term themselves while fully recognizing that the high-tax policies would inflict pain on recipients. Now Krugman projects this same attitude onto his political opponents by claiming not only that reduced government spending has harmful effects on real income and employment, but that Republicans will it so. The Republicans, then, are both evil and stupid. Republicans are evil because they “have long followed a strategy of ‘starving the beast,’ slashing taxes so as to deprive the government of the revenue it needs to pay for popular programs.” They are stupid because their reluctance “to run deficits in times of economic crisis” is based on the premise that “politicians won’t do the right thing and pay down the debt in good times.” And, wouldn’t you know, the politicians who refuse to pay down the debt are the Republicans themselves. The Republicans are “a fiscal version of the classic definition of chutzpah…killing your parents, then demanding sympathy because you’re an orphan.”

But the real analytical point is that Krugman, and Democrats in general, are exhibiting the chutzpah. They have taken a policy term originated and openly embraced not merely by Democrats, but by Keynesian Democrats exactly like Krugman himself. They have imputed that policy to Republicans, who would never adopt this Democrat policy tool because its central tenet is excruciatingly high taxes. They have correctly accused Republicans of wanting to reduce government spending but wrongly associated that action with austerity in spite of the fact that their Keynesian Democrat forebears did not include it in the original austerity doctrine.

Why have they done this? For no better reason than that they oppose the Republicans politically. Psychology recognizes a behavior called “projection,” the imputing of a detested personal trait or characteristic to others. Having first developed the policy of austerity in the late 1970s and seen its disastrous consequences, Democrats now project its advocacy on their hated Republican opponents. In Krugman’s case, there are compelling reasons to suspect a psychological root cause for his behavior. His ancillary comments reveal an alarming propensity to ignore reality.

Paul Krugman’s Flight from Reality

In the quoted column alone, Krugman makes numerous factual claims that are so clearly and demonstrably untrue as to suggest a basis in abnormal psychology. Pending a full psychiatric review, we can only compare his statements with the factual record.

“In the United States, government spending programs designed to boost the economy are in fact rare – FDR’s New Deal and President Barack Obama’s much smaller recovery act are the only big examples.” Robert Samuelson’s recent book The Great Inflation and Its Aftermath (2008) covers in detail the growth and history of Keynesian economics in the U.S. In 1965, Time Magazine featured Keynes on its cover to promote a story conjecturing that Keynesian economics had ended the business cycle. Samuelson followed Keynesian economics and such luminaries as Council of Economic Advisors Chairman Walter Heller and Nobel Laureates Paul Samuelson and James Tobin through the Kennedy, Johnson, Carter and Reagan administrations. One of his major theses was precisely that Keynesian economists produced the stagflation of the 1970s by refusing to stop deficit spending and excessive money creation – a view that helped to discredit Keynesianism in the 1980s. There can be no doubt that U.S. economic policy was dominated by Keynesian policies “designed to boost the economy” throughout the 1960s and 1970s.

Moreover, every macroeconomics textbook from the 1950s forward taught the concept of “automatic stabilizers” – government programs in which spending is designed to increase automatically when the level of economic activity declines. These certainly qualify as “big” in terms of their omnipresence, even if Krugman might deny their bigness in some quantitative sense. But they are certainly government spending programs, they are certainly designed to boost the economy and they are certainly continually operative – which makes Krugman’s statement still more bizarre.
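
The mechanism is easy to state in one line. In generic textbook form (an illustration, not a description of any particular program), disposable income is

\[
Y_d = (1 - t)\,Y + B(u),
\]

where t is a proportional income-tax rate and benefit outlays B rise automatically as unemployment u rises. When income Y falls, tax collections fall and B grows with no new legislation required, cushioning the drop in disposable income; that is all “automatic stabilizer” means.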

“So the whole notion of perma-stimulus is a fantasy… Still, even if you don’t believe that stimulus is forever, Keynesian economics says not just that you should run deficits in bad times, but that you should pay down debt in good times.” The U.S. government has had one true budget surplus since 1961, bequeathed by the Johnson administration to President Nixon in 1969. (The accounting surpluses during the Clinton administration years of 1998-2001 are suspect due to borrowing from numerous off-budget government agencies like Social Security.) This amply supports the contention that politicians will not balance the budget cyclically, let alone annually. European economies are on the verge of collapse due to sovereign debt held by their banking systems and to the inexorable downward drift of productivity caused by their welfare-state spending. Krugman’s tone and tenor imply that “Keynesian economics” should be given the same weight as a doctor prescribing an antibiotic – a proven therapy backed by solid research and years of favorable results. Yet the history of Keynesian economics is that of a discredited theory whose repeated practical application has failed to live up to its billing. Now Krugman is in a positive snit because we don’t blindly take it on faith that the theory will work as advertised for the first time and that politicians will behave as advertised for the first time. If nothing else, one would expect a rational economist to display humility when arguing the Keynesian case – as Keynesians did when repenting their sins in favor of a greatly revised “New Keynesian Economics” during the mid-1980s.

“Unemployment benefits have fluctuated up and down with the business cycle and as a percentage of GDP they are barely half what they were at their recent peak.” Unemployment benefits have “fluctuated” up to 99 weeks during the Great Recession because Congress kept extending them. The rational Krugman knows that his fellow economists have debated whether these extensions have caused people to stop looking for work and instead rely on unemployment benefits. Robert Barro says they have, and finds that the extensions have added about two percentage points to the unemployment rate. Keynesian economists demur, claiming instead that the addition is more like 0.4%. In other words, the profession is not arguing about whether the extensions increase unemployment, only about how much. Meanwhile, Krugman is in his own world, pacing the pavement and mumbling “up and down, up and down – they’re only half what they were at their highest point when you measure them as a percentage of GDP!”

“Food stamp use is still rising thanks to a still-terrible labor market, but historical experience suggests that it too will fall sharply if and when the economy really recovers.” Food stamp (SNAP) use has steadily risen to nearly 48 million Americans. Even during the pre-recession years 2000-2008, food-stamp use rose by about 60%. Thus, the growth of the program has far outpaced growth in the rate of poverty. The Obama administration has bent over backward to liberalize criteria for qualification, allowing even high-wealth, low-income households into the program. This does not depict a temporary program whose enrollment fluctuates up and down with economic change, but rather a tightening vise of dependency.

Krugman’s picture of a “still-terrible labor market” cannot be reconciled with his claim that government spending is an effective counter-cyclical tool. If Krugman’s reaction to the anemic response to the Obama administration’s economic stimulus is a demand for much higher spending, he will presumably pull out that get-out-of-jail-free card no matter what the effects of a spending program are. Why would much higher spending work when the actual amount failed? Krugman makes no theoretical case and cites no historical examples to support his claim – presumably because there are none. Governments need no urging to spend money – European governments are collapsing like dominoes from doing exactly that. European unemployment has lingered in double digits for years despite heavy government spending, recent complaints about “austerity” to the contrary notwithstanding.

“The disastrous turn toward austerity has destroyed many jobs and ruined many lives. And it’s time for a U-turn.” Keep in mind that Krugman’s notion of “austerity” is reduced government spending but not higher taxes. This means that he is claiming that taxes have not gone up – when they have. And he is claiming that government spending has gone down, presumably by a lot since it has “destroyed many jobs and ruined many lives.” But government spending has not gone down; only a trivial reduction in the rate of growth of government spending has occurred during the first four and one-half months of 2013.

“Yet calls for a reversal of the destructive turn toward austerity are still having a hard time getting through.” Krugman’s rhetoric implies that Keynesian economics is a sound, sane voice that cannot be heard above the impenetrable din created by right-wing Republican voices. As the rational Krugman well knows, the mainstream news media has long been completely dominated by the Left wing. (It is the Right wing that should be complaining, because the public is unfamiliar with the course of economic research over the last 40 years and the mainstream news media has done nothing to educate them on the subject.) Its day-to-day vocabulary is permeated with Keynesian jargon like “multiplier” and “automatic stabilizers.” The rhetorical advantage lies with Democrats and Keynesians. It is practical reality that has let them down. The economics profession conducted an unprecedented forty-five-year research program on Keynesian economics. Its obsession with macroeconomics led to a serious neglect of microeconomics in university research throughout the 40s, 50s and 60s. By approximately 1980, the verdict was in. Keynesian economics was theoretically discredited, although its theoretical superstructure was retained in government and academia. Even textbooks were eventually revised to debunk the Keynesian debunking of Classical economics. Macroeconomic policy tools were retained not because free markets were inherently flawed but because policy was ostensibly a faster way to return to “full employment” than by relying on the slower adjustment processes of the market. The reaction to recent “stimulus” programs has demonstrated that even that modest macroeconomic aim is too ambitious.

Keynesian economics has had no trouble getting a hearing. It has had the longest, fairest hearing in the history of the social sciences. The verdict is in. And Krugman stands in the jury box, screaming that he has been framed by conservative Republicans as the bailiffs try to remove him from the courtroom.

Memory records no comparable flight from reality by a prominent economist.

DRI-343 for week of 4-21-13: Lockdown Lessons

An Access Advertising EconBrief:

Lockdown Lessons

On Monday, April 15, 2013 – Tax Day or Patriots’ Day, according to your viewpoint – two powerful explosions rocked the finish line of the venerable Boston Marathon. The homemade bombs killed three people and injured over 200 others. Although no individuals or groups stepped forward to claim responsibility for the acts, the modus operandi strongly suggested terrorism as the source.

Law enforcement officials reacted quickly. Local police were already on hand to provide security. They were joined by special police, state police and (eventually) the FBI. (Federal law confers jurisdictional seniority on the FBI when terrorism is implicated in the crime.) Immediate attention focused on identifying suspects for the bombings, using surveillance video of the marathon scene. Two young men wearing white and black caps, respectively, were shown apparently placing satchels like those carrying the bombs.

Frames from the videos were publicly disseminated late Thursday afternoon. The public was put on notice to watch for the suspects. Late Thursday evening, a holdup at a campus convenience store drew the attention of police. The first responder, an MIT campus policeman, encountered the two suspects. Initially, it was thought that they had held up the store, but later the meeting was ascribed to an “ambush” of the officer by the suspects. In the ensuing exchange of gunfire, the policeman was killed. A few minutes later, the suspects carjacked a Mercedes and its driver, who subsequently escaped. In the wee hours of Friday morning, police pursuit overtook the pair in the Watertown suburb of Boston. In the resulting shootout, one suspect was fatally wounded and run over by the other, who made his escape. Meanwhile, another police officer was critically injured.

Now possessing a body to work with, authorities were able to identify the suspects as two ethnic Chechen brothers who had been in this country for nine years. The dead man was 26-year-old Tamerlan Tsarnaev, a former Golden Gloves boxer and community-college dropout. The suspect at large was his 19-year-old brother Dzhokhar, a nursing student. By the time Boston residents woke up Friday morning, they discovered that much of the metropolitan area was “on lockdown.”

Lockdown

The implications of the evocative term are mostly self-evident. Residents were told to stay indoors, keep their doors locked and respond only to the police. Transit service – subway, buses and taxis – was suspended. Amtrak cancelled its daily service. The FAA declared a no-fly zone centered on the manhunt area. Logan Airport was closed to outgoing traffic; vehicles entering the airport were stopped at roadblocks. Businesses were requested to close, except for those providing service to emergency workers. The streets were deserted, populated only by police personnel and vehicles, FBI, emergency medical personnel and news media crews. Private vehicles were stopped every few blocks for searches and interrogations. Military personnel and equipment, including armored vehicles, patrolled the streets.

Law enforcement personnel began a house-to-house search for Dzhokhar Tsarnaev in Watertown. By the end of the day, they had not found him, and the lockdown was lifted. Within minutes of the unlocking, a man in Watertown entered his backyard and noticed what appeared to be blood on the tarp covering his boat. He peered under the tarp and saw a wounded man crouching there. He immediately notified police, who converged on the house and confronted the suspect. Yet another firefight ensued, which ended with the boat full of bullet holes and a seriously wounded Dzhokhar Tsarnaev in custody. The weeklong drama was over.

In the immediate aftermath of this significant episode in American history, much factual information remains to be learned. It is clear that we still lack a coherent framework of legal principles applicable to terrorism. But one issue sticks out like the proverbial sore thumb as worthy of discussion.

Why was a substantial portion of the Boston metropolitan area – including the municipalities of Watertown, Newton, Waltham, Cambridge, Belmont and Arlington and the neighborhoods of Allston-Brighton in Boston – essentially frozen in place with a lockdown order for over twelve hours because one 19-year-old murder suspect remained at large?

The Rationale for the “Historic Lockdown”

The action was accepted matter-of-factly by local residents and news media. The Wall Street Journal reported that, apart from “a crowd [who] gathered at the police blockade set up near the suspects’ home in Cambridge,” people mostly stayed indoors. “It was so quiet that leaves shaking in a gentle breeze could be heard. Only a handful of people ventured outside.” Those included a few businesses catering to the needs of emergency personnel.

Yet this was, as the Journal termed it, a “historic lockdown.” Journal columnist Holman Jenkins called it “a reaction unlike any other triple homicide in Boston history.” He might have upped the ante by citing U.S. history. After all, Chicago averages over a murder per day, but we’ve never seen the city locked down as Boston was last Friday. Serial killers, professional assassins and even terrorists like Carlos the Jackal have been on the loose before – but we never brought a major U.S. city to its knees to search for them. Why now?

Lip service was given to the pretext that it was for the safety of residents. That is patently absurd. The bombings occurred on Monday. We knew somebody had done it. But there was no lockdown. By Tuesday, investigators had surveillance photos of two suspects and satchels, as well as remnants of bombs providing evidence that primitive, but effective, homemade explosives had been used. And the suspects and their weapons remained at large. But there was no lockdown.

On Thursday and Friday, our suspicions about the suspects were confirmed when they were sighted and flushed from cover after they had ambushed and killed the MIT police officer. But there was no lockdown until after the death of one policeman and the critical wounding of another, until after one of the suspects was fatally wounded, then run over by his brother, who proceeded to escape. At this point, we were pursuing one 19-year-old suspect known to possess a handgun and only those explosives he might possibly have had on his person. (His lodgings were under guard.)

Now, suddenly, when the danger was less than at any point since the bombing at the start of the week and our knowledge was the greatest, the lockdown was instituted. The one thing we can say for sure is that residents’ safety was not the motivation. What was?

The only other possible motivation for the lockdown was to assist in the apprehension of the remaining suspect at large, Dzhokhar Tsarnaev. Presumably the thinking of authorities was that the absence of pesky citizens would make anybody on the streets stand out. Police, military, national security and emergency personnel could be easily identified. Anybody else would be readily noticeable and pinpointed as an outlier, who would be unable to blend into a crowd and vanish. He would be easy to apprehend, interrogate, identify or – if necessary – kill.

The reaction of the mainstream media has been that the successful apprehension of the suspect vindicated this action. It is interesting to speculate about their reaction had the chase gone on longer. But as it was, the attitude might be summed up as: All’s well that ends well.

This is perfectly ridiculous.

Why the Lockdown Was Wrong in Theory

“All’s well that ends well” is a time-honored maxim but that doesn’t ratify any exercise of power under the Rule of Law. We didn’t know beforehand that things would turn out as well as they did. Moreover, there are always unintended consequences flowing from actions like the lockdown – how do they affect our evaluation of this case?

The tacit premises of the lockdown appear to be that, first, since there is a terrorist on the loose it is prudent to put everything else in the immediate vicinity on hold until he is captured. Second, the best way to facilitate that capture is to freeze the city in place and forestall most normal activity.

What does “normal activity” consist of? In economic terms, production and consumption of goods and services. (We are now excluding production for the sake of law-enforcement and emergency personnel, since a few shops remained open for this purpose.) So the implicit logic of the lockdown is that bringing all this to a halt is no great sacrifice compared to the potential loss of life that might ensue if the suspect remained at large. Perhaps lockdown proponents were imagining how trivial most of everyday life seemed compared to the high drama of cops and terrorists.

Of course, this was all wrong. Wrong in theory and equally wrong in practice.

In theory, as Anthony Gregory of the Independent Institute has pointed out, the authority to suspend transportation, interdict travel, limit mobility, stop and interrogate citizens at will, and search house-to-house without specific warrants amounts to a declaration of martial law. Only the governor of a state can declare it. Precedent sanctions it only in times of true emergency – wartime or natural disaster – when local security is threatened by invasion or riotous disorder.

The governor did not declare martial law, which is not surprising since there was neither war nor natural disaster. Of course, the tentative identification of the suspects hinted at terrorism as the motive for the bombings. But this is miles away from legal justification. The suspect was not even on a terrorist watch list. He was not known to possess explosives or any weapon apart from a handgun. To invoke a metaphoric “War on Terrorism” as an excuse for martial law would have made a mockery of the principle and a laughingstock of the public official responsible.

So, since there was no legal justification for their actions, the authorities just went ahead and took them. They acted without legal authority. And nobody questioned them, apparently.

This is not an isolated incident. It is a culmination of a decades-long trend that was accelerated by the “War on Drugs.” The Drug War has featured escalating levels of violence by criminals and police, increasing militarization of the police force and willful repudiation of such individual rights as private property and freedom of speech, action and mobility. The Boston lockdown is merely one step further on the same path of arbitrary power seized by government and freedom surrendered by private citizens.

Confronted by this reasoning, proponents of the lockdown have so far resorted to two defenses. The first is that the end justifies the means; i.e., the favorable outcome achieved by the lockdown washes away any sin committed in its name. The second is emotional: We are under attack and must use the weapons of war to defend ourselves and our way of life.

The first of these arguments is wrong in this specific case and wrong in general. The second, like any appeal to emotion, cannot be refuted in logical terms. It can only be met with a countervailing emotional appeal.

Why the Lockdown Was Wrong in Practice

The joy and relief that greeted the apprehension of Dzhokhar Tsarnaev seem to have overcome the public’s reasoning powers. The lockdown was a failure, not a success. Dzhokhar Tsarnaev was caught after the lockdown was lifted, not while it was in force. Within minutes of the lifting, a private citizen went into his back yard and, noticing blood on his covered boat, lifted the tarp to discover a wounded man crouching underneath. He called police.

Despite the clear field created by the lockdown, the small army of law-enforcers did not find the suspect. Finally, they had to admit defeat and lift the lockdown. And within minutes, one solitary citizen did what all the King’s horses and all the King’s men failed to do – merely by strolling into his back yard. What an incredible irony!

Or was it?

An economist worth his or her salt might have predicted it. The late, great Nobel laureate F. A. Hayek was the first to point out that free markets avail themselves of “the knowledge of the particular circumstances of time and place” – data dispersed among billions of individuals and beyond the reach of central-planning authorities. The back-yard capture of Tsarnaev is just such a case. For centuries, police have known that voluntary compliance is required for successful law enforcement. The chief source of information used in solving crimes is tips and testimony from the public; that is why police constantly appeal for witnesses to come forward. That is why crime flourishes within communities (such as urban black ghettos) where distrust of police prevents this cooperation.

The lockdown foreclosed this process. It did not merely fail in this particular case; it invited failure by excluding the public and paralyzing normal life. It is very likely that Dzhokhar Tsarnaev would have been uncovered hours earlier in a city going about its day-to-day business. Of course, he probably would have been discovered by private citizens rather than police, thereby exposing the finder(s) to danger – but that’s exactly what happened anyway. Except that this way, it happened later, after the damage of the lockdown had been done.

The idea that the lockdown itself caused concrete damage seems not to have occurred to the authorities, as if a day’s worth of production and consumption in a great city is nothing to worry about, more or less. In economic terms, this attitude is madness.

Every day, people produce things that save lives and enhance the quality of life. By forcing people to stay home, the authorities drastically skewed this process away from highly valuable activities and toward relatively trivial ones. In the process, they flushed away a few hundred million dollars’ worth of goods and services. (Ironically, the lockdown treated these production activities as though they were comparatively valueless.) There is no objective way to measure the lives that were lost and devalued by the lockdown, but we know that this loss was real. Only its magnitude is unknown. In contrast, the danger posed by the suspect was speculative. There might have been no loss of life or injury at all on that Friday. (In fact, one policeman was apparently wounded in the conclusive shootout, and the finder’s boat was badly shot up.)
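
The magnitude is easy to sanity-check with back-of-the-envelope arithmetic. Assume, purely for illustration, that metro Boston produces on the order of $300 billion of output per year (the order of magnitude is right for 2013, though the exact figure is an assumption) and that the lockdown idled perhaps a third to a half of the metro economy for most of a working day:

\[
\frac{\$300\text{B}}{365\ \text{days}} \approx \$0.8\text{B per day}, \qquad \$0.8\text{B} \times \left(\tfrac{1}{3}\ \text{to}\ \tfrac{1}{2}\right) \approx \$270\text{M to }\$410\text{M}.
\]

That is how “a few hundred million dollars” of foregone goods and services emerges from a single locked-down Friday.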

The voluntary aid provided by private citizens to law enforcement is not negligible. Indeed, news sources suggest that it was the belated publication of the suspects’ photographs that triggered their chaotic actions on Thursday, which were apparently part of their attempted flight from the city. Thus, it was this very informational link between citizenry and law enforcement that had neutralized the first suspect and put the second behind the eight-ball. The lockdown severed that link by putting citizens in the dark, in their homes.

The bureaucratic monopoly held by local police and higher-level agencies has had predictable economic consequences. The lack of competition has allowed law enforcement to grow larger (thereby increasing real incomes of its members) and less efficient, without having to pay the price paid by private businesses that would lose customers, revenue and profit if they behaved similarly.

The Boston lockdown illustrates the results of this process. The authorities failed to identify the suspects from the surveillance video (which was taken by a private business, Lord & Taylor, and given to police). Eventually, the suspects were sighted after they ambushed a campus policeman. They were chased to Watertown, where a shootout followed. One suspect was wounded. The other suspect, a 19-year-old who was not an experienced criminal, nonetheless managed to escape the police, FBI and military for almost 24 hours, despite having run over his brother at the scene. What a sorry exhibition of law enforcement!

The law-enforcement authorities reacted to their own incapable display by demanding more power with which to do their job. This is the reflex reaction of government, which reacts to its own failure by ascribing it to inadequate resources and demanding even more money and power. In a free market, this would be tantamount to a business blaming consumers for its failure to produce its product efficiently and insisting that it be given more production inputs and allowed to raise its price.

Apart from clearing the decks of normal daily activity to allow police, FBI and military unimpeded movement, the lockdown also performed another, more subtle, function. It was a psychological escalation of force that distracted attention from government’s failure to accomplish its task by sending the message: “We’re on the case and now we’re going to get really serious about this terrorist.”

Whereas a private business must actually serve the consumer’s wants in order to stay in business, a government monopoly faces no such constraint. Its only concerns are political. It doesn’t actually have to solve problems; it need only look and act busy. The lockdown not only looks busy, it forces citizens to actually feel how busy the government has become in its anti-terrorist activities. The fact that the lockdown is actually counterproductive is beside the point.

And to top off the political advantages of the lockdown, it sets a precedent for further exercise of government power. Now the government can do virtually anything it wishes as long as it blows an official whistle and announces “terrorism” as its rationale for action. Alas, what is good for government is bad for the citizenry at large.

The notion that the lockdown succeeded is a ghastly misreading. It failed miserably in theory and practice. It was a bad idea from the start and only got worse. The myth of its success will cause it to be emulated in the future. And the stakes will only rise from this point on.

Freedom or Security?

Proponents of the lockdown give short shrift to legalities and freedom. They give no appearance of knowing or caring about its known economic loss, let alone weighing it against the much more speculative character of any sparing of life or limb. They invoke the specter of war and the principle that national security outweighs every other consideration.

Benjamin Franklin famously warned that those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety. It is probably fair to say that lockdown proponents would deny that the liberties lost are essential and would vigorously dispute that the security gained is either small or temporary. But the arguments above cannot be waved away.

Only wartime defense can justify the surrender of freedom to the state. And “war” means literal life-or-death combat with a nation state that can end only with death or surrender by one of the combatant nations. The rhetorical accompaniment to the advancing power of government and disregard of constitutional law is the cheapening of language. It is akin to the devaluing of a currency’s purchasing power caused by over-issue of money by big government. We have seen the word “war” applied as a rhetorical intensifier to juice up political support for government spending on poverty or anti-drug programs. Now it is used to justify the arbitrary actions of the law-enforcement authorities.

Those arbitrary actions are converting the U.S. from a voluntary society that solicits cooperation among citizens and between citizens and government to a hierarchical society in which government demands obedience from citizens, who fight over control of the all-powerful government. That is the danger posed by measures like those of last Friday. The meekness that greeted the lockdown signals how far along that road we have come.

Rather than invoke the emotional image of war, we should instead appeal to our love of freedom. No band of terrorists is strong enough to threaten the security of the United States, despite the toll of victims they might rack up. (Consider the example of Israel, which has survived a vastly greater onslaught of terror despite its small size and stock of resources.) But history proves that we are a danger to ourselves. The Constitution was crafted expressly to forestall the danger that we now face – not from terrorists, but from the arbitrary actions of our own government.

DRI-265 for week of 2-3-13: Women in Combat: What Are the Issues?

An Access Advertising EconBrief:

Women in Combat: What Are the Issues?

Recently the Pentagon announced the dropping of the other shoe on its policy of women in the military. Women have long (since 1994) been deployed to theaters of combat. Now they will be allowed to serve in combat units.

This has stirred up the predictable hornet’s nest of controversy. Mostly, the battle lines form along the familiar boundary between right and left wing – the left wing hailing the announcement as a long-overdue victory for feminism and the right wing stressing the unsuitability of women for combat roles.

On the face of it, this would seem to be grist for the mill of economics. The logical approach – which is another way of describing the way economists view the world – is apparently to allow people to sort themselves into occupational slots according to their personal preferences and productivities. The price of labor, its wage, serves as the yardstick measuring labor’s value at the margin, enabling businesses to compare it with the monetary value of labor’s technical productivity.

Any woman who can produce more value than she costs is hired – simple as that! And indeed, history tells us that competitive markets are the best known antidote to arbitrary forms of discrimination, whether based on race, gender, age or any other factor extraneous to productivity.

Furthermore, there are reasonable grounds to believe that in a free market for labor, some women could pass the physical tests for qualification as combat soldiers. Does this make the Pentagon’s action a step in the right direction, at the very least?

No. The decision is based solely on political considerations, not economic ones. It will probably work badly and cause death, dissension and abdication in the ranks of the armed forces.

Marginal Productivity Theory and Female Soldiers

A commonly heard rationale in opposition to women in combat is that “men are stronger than women.” This generalization is woefully imprecise and virtually meaningless without further definition. In principle, it might mean that every single man is stronger than every single woman – that no woman is stronger than any man. Of course, we know from personal experience that opponents don’t mean that and that this global statement is not true. In fact, there are some indices of strength by which women tend to be stronger than men – taking the word “stronger” in its colloquial sense of “stronger on average,” with both the mean value and the median individual as the basis for comparison.

For military combat, upper-body strength is perhaps the most relevant index. Male upper-body strength is indeed superior on average. But some women have sufficient upper-body strength to meet military-qualification standards. Comparison on other relevant criteria, such as aerobic capacity, produces similar results. We know this even without examining military records, simply by observing world records in athletic events involving upper-body and aerobic performance. Women’s records fall short of men’s records, but rank well above average male performance and implicitly exceed the standards set for combat soldiers. It is therefore possible for women to perform the physical functions demanded by combat.

There was a time when the American woman would have been adjudged too delicate, too sensitive to perform an act as brutal as killing another human being hand-to-hand or even using a weapon. That time is long past. (Indeed, reference to it from personal memory dates the age of the speaker at least to the early baby-boom cohort.) The performance of women in combat in Israel, among other countries, establishes that women can kill. The actions of women in American politics over the last half-century demonstrate the same cold calculation, lack of sensitivity and sheer brutality exhibited by men. Women are just as willing to kill for their beliefs as are men.

Pure economic logic says that optimal selection of men and women for combat duty would require equalization of their marginal productivities. That is, whenever another combat soldier is needed, the highest-productivity applicant is picked (male or female) – the limiting case or long-run tendency is toward a stable equilibrium in which marginal productivities tend toward equality. Because mean male strength is so much higher, this will result in many male soldiers and few female soldiers, as the sketch below illustrates.
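
Here is a minimal simulation sketch of that selection rule, with purely hypothetical numbers – the means, spreads, pool sizes and slot count are illustrative assumptions, not military data:

    import random

    random.seed(42)  # reproducible illustration

    def applicants(n, mean, sd, sex):
        # Hypothetical "combat productivity" scores, normally distributed
        return [(random.gauss(mean, sd), sex) for _ in range(n)]

    # Assumed distributions: the sexes overlap, but the male mean is higher
    pool = applicants(5000, 100.0, 15.0, "M") + applicants(5000, 80.0, 15.0, "F")

    # Marginal-productivity selection: fill each slot with the highest-scoring
    # applicant remaining, regardless of sex
    slots = 1000
    chosen = sorted(pool, key=lambda a: a[0], reverse=True)[:slots]

    men = sum(1 for _, sex in chosen if sex == "M")
    print(f"{men} men, {slots - men} women selected for {slots} slots")
    print(f"productivity cutoff: {chosen[-1][0]:.1f}")  # one threshold for everyone

Because a single productivity cutoff applies to every applicant, the last man and the last woman admitted are equally productive (the equalization condition), yet the great majority of selectees are male, exactly as described above.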

So much for pure economics. Why, then, has the military up to this point chosen to forgo the productivity gains that would have accrued from accepting women in combat?

The Rationale For An All-Male Fighting Force

In a pure market setting, the productivity gains from accepting women in combat would be small because only a few women would actually apply, qualify and serve. Some women capable of qualifying would instead prefer to pursue careers in fields such as athletics, which are much more lucrative. And there have always been compelling arguments against trying to realize those small gains.

In a recent Wall Street Journal op-ed, a onetime combat soldier in Iraq spelled out the brutal realities of life as a combat soldier. Some “grunts” who spearheaded the blitz against Baghdad in 2003 spent 48 consecutive hours racing in a column toward the city. Unable to dismount their vehicles, they had to urinate and defecate in place, in full view of and proximity to their comrades. Forcing men and women to endure this together would be to add social strain and humiliation to the already severe strain of combat.

A letter writer to the Journal, also a soldier, pointed out that the inevitable result of coed combat battalions would be pairing off and formation of sexual liaisons. In turn, this would upset the vital cohesion necessary to effective function of the unit by interposing jealousy and envy between squad members. This was not mere speculation on his part, but rather the evidence gathered from coed combat experiments in other countries.

That same kind of evidence argues strongly against the presence of women on the battlefield. The sight of women wounded, threatened with capture and torture, drives male soldiers to commit imprudent acts, thereby jeopardizing the safety and success of their units.

These kinds of disruptions could potentially ruin the effectiveness of a rifle platoon. What’s more, they are only the tip of the iceberg. Admission of women is an open invitation to future allegations of discrimination, sexual harassment and rape. The discrimination can of worms is a wriggling mess of litigation and adverse publicity. The potency of a volunteer force is dependent on successful recruiting, which would be threatened by allegations, scandals and lawsuits. (Indeed, there are already rumblings that thousands of re-enlistments have been jeopardized by the shift in policy.) The risk of such serious losses is not counterbalanced by the small productivity gains accrued by adding women to combat units. That is why the military high command preferred to exclude women entirely from combat roles rather than court potential disaster from the side effects of their presence.

Did this policy “discriminate” against women? Of course. The purpose of creating and maintaining an army is not to give every race, gender, religious affiliation, political party and community organization equal representation among its ranks. The only purpose of an army is to defend the nation as productively as possible. Any combat deployment that achieves that purpose is fair because it delivers on the promise of life, liberty and the opportunity to pursue happiness – for everybody. A job is not and cannot be a property right. And it is goods and services for consumers that businesses are supposed to provide, not equality of outcomes for the people who supply inputs to them. As far as that goes, it would be just as true to say that the policy discriminated against those male soldiers who would have benefitted from close contact with women – just as true and just as irrelevant, for the same reasons.

Women in the Military

Throughout the 20th century, the left wing has distorted the true meaning of concepts like “freedom” and “rights.” The word “freedom” has been used as a euphemism for the concept of power – the power to dictate the terms of trade in what would otherwise be free, voluntary exchanges in free markets. Lack of bargaining power or real income has been wrongly characterized as absence of freedom, calling for government intervention to redress injustice. Inability to work one’s will on others has been misdescribed as an absence of rights, calling for government rules to establish new rights.

Freedom is the absence of coercion, not the ability to impose one’s will on others. A right only exists when its exercise does not reduce someone else’s rights. The issue of women in combat brings these classic fallacies back into action once more.

In the February 6, 2013, issue of Time Magazine, author Darlene Iskra asks rhetorically: “Women In Combat: Is It Really That Big of a Deal?” She poses the question as a false dichotomy between “naysayers” who maintain that “women can’t do combat infantry” and “…dedicated women who only want a chance to serve their country like their male peers” and who believe that “military jobs should be based on performance.” She closes her case with anecdotal histories of a few women who served in the military – as divers, not combat soldiers. In other words, the only issues are biological and political, and the solution is government-imposed equal opportunity.


It is true that arguments opposing women in combat are sometimes carelessly put. But every other point made by Ms. Iskra is either dishonest or disingenuous. From the moment the military began admitting women alongside men, its focus began shifting away from maintaining its productivity as a fighting force and toward fulfilling the goals of women as individuals. When women began enlisting, they soon discovered that many of them could not meet the physical standards of performance previously established for the all-male military. Men who could not pass the physical tests were washed out of combat service. But the failure of women produced a different result – a lowering of the standards of acceptance, applied only to women.

This created a climate of cynicism and disillusion, within both the service and the general public. Soldiers realized that the overriding purpose of the military was no longer to defend the nation. Their loyalty was no longer to the consumers of their product, the nation’s civilians. Now some of them were allowed to put their own wants ahead of the defense of the nation. And this attitude potentially put male soldiers’ own lives in jeopardy.

The general public realized that, while all men were created equal, women were created more equal because their wants were given priority over the life, liberty and happiness of civilians. The stage was set for the coup de grace to be administered to the public’s belief in the Rule of Law and equality under the law. It came with the Pentagon’s latest decision.

The dictates of political correctness demand that we rejoice at this great victory for equal rights for women. And most people will doubtless give lip service to that reaction. But deep down, they know that this cannot be the right decision for the nation.


The Purpose of a Fighting Force

Proponents of a government-mandated female presence in combat units claim that it is a woman’s right not merely to enlist in the military but to fight in combat as well. By phrasing the issue in terms of the rights of the soldier, they are implicitly treating an army as an organization created to further the self-expression of its individual members. This attitude strongly resembles that taken by the left wing toward business and employment in general; namely, that the purpose of a business is to provide both real income and personal fulfillment for its employees. Any other purposes are secondary to these primary goals.

Economics teaches us otherwise. The purpose of a business – its only purpose – is to produce goods and services for consumers. The fact that the business’s goal may be to maximize the profit it earns for its owners doesn’t alter its purpose. The minute consumers stop wanting what it produces, the business stops – what the owners want no longer matters.

The purpose of the military is to defend the nation. The purpose of combat soldiers is to fulfill their employer’s purpose by fighting the nation’s enemies as productively as possible. For most of its history, the United States fielded soldiers widely considered inferior to those of other nations. This was true throughout World War II, when German troops were generally viewed as the world’s best, and in Korea. It was only when America adopted the all-volunteer armed forces – thereby adopting the principles of the free market in recruiting its labor – that U.S. forces became acknowledged as the world’s finest. This should make it easier to see that the military serves the nation as a producer serves his customers. Its purpose is not to make its employees (the soldiers) happy, any more than a business’s purpose is to make its employees happy. The military’s consumers are the nation; its purpose is to serve them.

The U.S. Constitution was preceded by the Declaration of Independence, the country’s founding document. In it, Thomas Jefferson proclaims our right to “life, liberty and the pursuit of happiness.” It is in order to protect our right to life that government is granted a monopoly on force and violence. A military combat force exists in order to safeguard our right to life by fighting our enemies.

The left wing is putting its radical agenda ahead of the military’s constitutional duty to defend us. In effect, proponents of government-mandated women in combat are saying, “We are perfectly willing to put our abstract notions of gender equality ahead of the Constitution and the safety of the country. If soldiers have to die, quit the military or suffer anguish because of the presence of women in combat, that is a small price to pay for the satisfaction gained from seeing women serve in combat over the objections of the military and parts of the civilian public.”

What is Behind the Pentagon’s Action?

The left wing’s motives are clear. But why has the Pentagon reversed its previous stance on women in combat?

The military finds itself in a precarious situation. Both Democrats and Republicans are desperately looking for spending to cut. Their gaze has come to rest on the military. Each party has its own reasons for this choice. Democrats look upon the military as ipso facto evil, the only part of government that needs to be downsized. Moreover, women are a gigantic interest group – not that every woman endorses the new policy – and this announcement is a politically easy way to placate them.

Republicans would like to reduce the size of government. They are frantic to cut spending – some spending, any spending. But they have had absolutely no luck cutting wasteful spending. Now they find themselves contemplating the defense budget, like a starving man stranded on a desert island who eventually finds himself surreptitiously measuring the body weight and protein content of the only other person on the island.

The military is in no position to enforce its will on either party. It has caved in to the Democrats because the Democrats are the party in power. The Pentagon is a mammoth bureaucracy held hostage. To a bureaucracy, there is no prospect more terrifying than a budget cut. By changing its policy in acquiescence to the Democrats, it is tacitly begging its captor: “If I let you do this to me, you won’t hurt me, will you?”

Who Speaks for the People?

In everything said so far, both sides of the controversy are behaving according to form. The left wing is ignoring economic logic, the general welfare and the Rule of Law in order to further its aims. The right wing is too confused to formulate a coherent argument, despite the fact that it has had plenty of time to get its intellectual house in order on this issue. Bureaucracies – the federal government in general and the Pentagon in particular – are so far acting exactly as we have come to expect.

And the big loser from this resolution of the longtime debate is the American public, whose military defense will suffer with no counterbalancing gain. Who speaks for them?

A dispassionate appraisal yields a depressing finding: Nobody.

DRI-271 for week of 1-13-13: How (Not) to Help Orphans

An Access Advertising EconBrief:

How (Not) to Help Orphans

The current issue of Great Britain’s venerable weekly The Economist contains a revealing anecdote about Vice President Joe Biden – revealing not merely about Biden himself but about economics, politics and their interaction.

The anecdote is recounted by one of the magazine’s American correspondents, whose byline is “Lexington.” The column identifies Biden as chief mediator between the Obama administration and the Republican opposition. Lexington finds Biden suited to that task, citing his 40-year Congressional career mostly spent brokering deals and schmoozing colleagues. It is not Biden’s fault that “America’s problems are larger than the deals that a vice-president can cut.” It seems that, according to Lexington, “small-government conservatives – backed by the Tea Party and allies on the airwaves and online – have raised the political costs of dispensing political pork and favours.”

Since it is not clear why this is a bad thing, it would seem that The Economist’s left-wing bias is showing. This is confirmed when Lexington cites “an old Senate belief cherished by Mr. Biden: that fellow politicians may be wrong but are rarely bad. Mr. Biden likes to recall his shock as an angry young senator on learning that a seemingly heartless Republican foe of disability rights, Jessie Helms, had adopted a disabled orphan.” Lexington’s point is that this experience chastened Biden and made him tolerant of Republicans, willing to oppose their policies but not to question their motives.

Lexington is wrong on both counts. Biden is bigoted, not tolerant. The episode reveals his intolerance of the right wing. But that is the least of its importance.

Helping the Disabled

Even as the current Economist was hitting the newsstands, the tolerant, conciliatory Mr. Biden was floating proposals for his boss to suspend the Second Amendment rights of Americans via executive order. In so doing, Biden was displaying the same callous insensitivity he displayed toward Jesse Helms in assuming that Helms’ opposition to federal disability “rights” legislation reflected a personal animus toward the disabled as a class.

Today, thanks to research by Arthur Brooks of the American Enterprise Institute, we know that right-wingers like Jesse Helms provide the bulk of charitable assistance in America. Left-wingers tend to consider their tax payments as their contribution to charity. We also know that federal-government welfare programs have become a monstrosity, mushrooming in number and size while failing to make a dent in the problems they were ostensibly intended to solve. The latter conclusion is now shared by many on the left as well as practically everybody else.

The notion that opposition to big government is “heartless” implies that compassion is expressed impersonally, indirectly and ruthlessly by taking money from some people and giving it to others, rather than personally and directly by immediately benefitting those who need help. This not only prejudges the motives of the opponent, it takes for granted both the good will and the efficiency of the government. In other words, it was not only bigoted but dumb.

Biden’s opposition to Helms was simply the reflexive action of a man not given to reflective thought. His numerous verbal gaffes committed while Vice President reinforce this interpretation. Biden’s status as the Obama administration’s designated dealmaker does not bespeak any innate sense of empathy for his opposite numbers across the aisle, any more than a used-car dealer need feel kinship with his customers.

Jesse Helms vs. Joe Biden

Lexington’s anecdote has much more revealing economic implications. Contrast the two types of problem-solving approach illustrated. On the one hand, there is the “Jesse Helms” approach. Orphans are in trouble. They need help. Helms sees them. He responds immediately and directly – by helping orphans.

Now compare this with the “Joe Biden” approach. He sees orphans in trouble. He responds by – well, he “responds” by setting in motion a lengthy, ponderous, indirect process that just may, if all goes well, after many months or even several years elapse, succeed in helping some orphans, to some vague and indeterminate degree.

Is this comparison unduly pejorative? Does it prejudice the case against the Biden approach? No, this would seem to be a pretty dispassionate summation of the history of federal-government welfare programs over the last five decades, when balanced against the efforts of the private sector. The Congressional legislative process is indeed protracted, beginning with bill introduction, committee study and submission to the full chamber, followed by reconciliation and eventual passage by both houses. This alone often takes up the better part of one legislative session. Sometimes bills are held over into the next session; sometimes they linger on for years.

When the aid-to-orphans bill passes, does that mean the problem is solved? Certainly not. It means that government machinery is formally set up. It may take months or even years for the resulting program to become operational. After it does, the program may operate indirectly through pre-existing state and/or local programs. The federal program may generate related programs, exhibiting a form of political cellular mitosis.

The programs themselves are intended to help orphans, but they do not provide the form of direct help that Jesse Helms provided. That is, they do not take in orphans and provide them with those things the lack of which makes them orphans in the first place; namely, a loving, caring, compassionate home and family. They may provide institutional shelter in the form of a state-run home. They may provide real income, mostly in the form of in-kind assistance. This second-best form of care will be dispensed by bureaucrats and tied to all kinds of strings and rules. These rules are ostensibly designed to ensure that the taxpayer funds bankrolling the program are wisely spent. But the result of this bureaucracy is invariably a system that works poorly and is disliked by the social workers who administer it, the recipients of its largesse and the taxpayers who fund it.

Ah, but surely private charity comes with its own constraints, its own delays, its own bureaucratic drawbacks and roadblocks? For example, Jesse Helms almost surely had to undergo a suitability test in order to adopt; running that gauntlet took time and effort. True enough, but the example of Father Flanagan and Boys’ Town in Omaha, Nebraska shines a glaring light of contrast on the difference between government welfare and private charity. Starting with nothing but a handful of homeless and impoverished boys and his own determination, Father Flanagan built Boys’ Town into a self-sufficient city of self-governing boys that has attracted orphans like a magnet for nearly a century.

It is true that the sunk costs of enabling legislation and setting up programs have already been expended; the welfare system is already in place. But instead of the time taken to pass new laws, we must factor in the time and expense of re-authorizing and financing programs already in place. Indeed, the crisis posed by public debt alone is reason enough to abandon the fiction of the “compassionate” Biden and the “heartless” Helms. It is not only that the Helms approach works and the Biden approach fails. The Biden approach is drowning representative government in a sea of debt throughout the world.

Roundabout Production

Even if we stipulate that the “Biden approach” has failed dismally in this particular case, can we say that this is a general result? That is, should we apply this lesson not merely to welfare programs but in all situations involving private vs. public assistance? And does it have even broader applicability?

“Helms vs. Biden” illustrates a lesson in the economic theory of production. To drive home the lesson in general terms, consider the example of fishing – a productive activity man has undertaken throughout recorded history. The most primitive production process is also the most direct: wading into the water and catching fish with bare hands. This requires skill and patience as well as access to shallow water holding fish.

A somewhat more productive process involves building a net, which improves the catch-per-unit-of-time. The first net builders had to take time off from fishing or hunting, which required them to build up a store of food to support themselves while net building. In turn, this required reducing their food intake for awhile prior to the investment period. This was an early historical example of the economic process of saving (dietary stricture and food stockpiling) and investment (net building).

More productive still is to construct rod and line to supplement the net. Yet more productive is to build a boat to enlarge the geographic range of fishing. These broaden the time frame of the production process considerably since they require much more time spent on investment and fishing itself. But the huge improvement in physical productivity in terms of potential catch makes the time spent worthwhile.

In the last half-century, fishing has become a production activity analogous to farming. Businesses have purchased infant fish and/or breeding stock and ponds, lakes or defined oceanic territory in which to raise colonies of fish for commercial harvesting. Obviously, this is the most protracted and costly of all fishing production processes, as well as potentially the most productive and lucrative.

The economic term of art that describes this continuum of production processes is “roundaboutness.” The most direct production processes are those that translate inputs into consumption output the quickest. Successively less direct processes take more and more time and involve more and more steps, but tend to gain more productivity with each increase in time and stages. The great Austrian economist of the late 19th and early 20th century, Eugen von Böhm-Bawerk, summarized this by observing that roundabout production processes tend to be more productive.

Böhm-Bawerk also found roundabout processes to be more characteristic of capitalism. Owners of capital (the machines and goods-in-process vital to the productivity of longer processes) can borrow to finance their own investment in these longer processes. They pay workers the discounted value of their marginal product for the work they do and use the premium above the discount to repay the borrowing. Thus, everybody can benefit from the enhanced productivity of roundabout production. Interest rates are reflected in the borrowing and in the discounting process that produces the premium.

Capitalism comes into the discussion because roundaboutness cannot be properly evaluated without the existence of prices and interest rates. It is tempting to view the productivity of roundabout production as an immutable physical law, but sometimes the good being produced is a service that has no physical yield. Now we have no alternative except to evaluate that yield in monetary terms using its price as a multiplicand. Even more compelling is the fact that a larger quantity of physical output in the future is not necessarily preferable to a smaller quantity today; it depends on the time preferences of individuals and their rate of preference for consumption today versus consumption in the future. A sufficiently high rate of preference for consumption today could override the possibility of more output in the future and tip the balance in favor of the simplest and most direct production process rather than a more roundabout one.

Another factor that might argue against roundabout processes is scarcity of inputs used in those processes. Thus, input prices have to figure in the evaluation, too. And interest rates reflect the intensity of consumer time preferences as well as the scarcity of funds made available by savers for investment purposes. Thus, interest rates are key to the calculation of costs and benefits for roundabout processes.
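
To see how interest rates and time preference decide the matter, consider a minimal sketch in Python. All of the numbers are hypothetical illustrations tied to the fishing example above, not data from any source: a direct process yields 10 fish per day starting immediately, while a roundabout process sacrifices the first seven days to net-building and yields 30 fish per day thereafter. Discounting each day’s catch at a daily time-preference rate shows when roundaboutness pays.

# Illustrative comparison of a direct vs. a roundabout fishing process.
# All numbers are hypothetical; they are not drawn from any data source.

def present_value(daily_yield, start_day, horizon_days, daily_rate):
    # Sum each day's catch, discounted back to day 0 at the given rate.
    return sum(daily_yield / (1 + daily_rate) ** t
               for t in range(start_day, horizon_days))

HORIZON = 60  # evaluate both processes over the same 60-day window

for rate in (0.00, 0.05, 0.20):  # hypothetical daily time-preference rates
    direct = present_value(10, 0, HORIZON, rate)      # bare hands, from day 0
    roundabout = present_value(30, 7, HORIZON, rate)  # 7 days of net-building first
    winner = "roundabout" if roundabout > direct else "direct"
    print(f"rate {rate:.2f}: direct {direct:7.1f}, roundabout {roundabout:7.1f} -> {winner}")

At a zero or modest discount rate the roundabout process dominates; at the deliberately punishing 20-percent daily rate, current consumption is prized so heavily that the direct process wins despite its far lower physical yield.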

In a pure capitalist economy, roundabout processes are used only when they are profitable. That is the same as saying that they are used only when the value created by their higher productivity exceeds the value lost to their higher investment cost. Thus, under capitalism we are doubly blessed. As consumers, we benefit from the ofttimes greater productivity of roundabout production without having it jammed down our throats when it is not beneficial on net balance. The safety factor is the presence of the profit motive. When roundabout production is too costly, it will be unprofitable and firm owners and managers will veto it.

Government and Roundabout Production

Government is roundabout production to the max. The very existence of the legislative process itself gets government started in a roundabout direction. Stages of production increase every time a new level of bureaucracy is created. The difficulty of interacting with bureaucrats and repeating budget authorization procedures annually maintains and even increases the temporal distance between the consumer and the good or service being provided by government.

Unlike production in a pure capitalist economy, however, government production possesses no inherent internal check on roundabout processes. There is no profit motive; thus, there is no easy way to tell how much recipients like the service being provided. The absence of profit means that there is no check on costs incurred; indeed, the value of government services is traditionally gauged according to the value of the inputs used in providing them! In other words, the more we spend on government, the better off we are supposed to be. The polite way of describing this state of affairs is to say that the incentives are perverse.

Nobody has any reason to spend money carefully, since bureaucrats are rewarded for overspending their budgets (with bigger budgets and larger departments) and for increasing the size of their departments (with promotions, larger salaries and more impressive titles). Government employees are the inputs into the roundabout production of government services; those production costs are income to them. Thus, the higher costs soar, the better they like it – no matter how economically inefficient this might be. True, government employees pay taxes, too, but they pay only a tiny fraction of the costs of their services while reaping all the wage, salary and fringe benefits.

To make matters worse, the demand side of the market is least amenable to roundabout production for goods and services provided by government. Welfare payments, disaster relief, military goods and services, “social insurance” and medical care for the aged and impecunious are things typically desired with the highest degree of immediate urgency. That is, they are areas where time preference is presumed to be very high and the wish for current consumption is at its greatest. Thus, even where productivity gains from roundabout production might be available, it is by no means likely that recipients of government aid would consider those gains to be “worth the wait” in the economic sense. Judging from the high level of dissatisfaction commonly expressed with government production, it is probable that neither consumers of government nor taxpayers are getting their money’s worth.

In summary, then, roundabout production has proven to be an economic triumph in free capitalist markets, where it has spurred tremendous improvements in productive techniques and living standards. And it has proven disastrous when used by government to produce goods and services. The difference between the two outcomes is the presence of the profit motive under free markets and its absence in government.

Why Has “Biden” Triumphed Over “Helms”?

Over time, various rationales have been advanced for the “Biden approach” and against the “Helms approach.” Originally, the “Helms approach” was seen as a “do-nothing” approach. The presumption – sometimes tacit, sometimes explicit – was that unless government adopted its roundabout approach, nothing would be done to help the poor, sick, orphaned, old, infirm, stricken, et al. We know now that this is not true and was never true. Even in past centuries, much voluntary effort was expended to help those in need. The reason why this effort looks skimpy to modern eyes is twofold. First, real incomes in general were much lower and less was available for every purpose – charity included. Second, much activity was carried out informally within the boundaries of the family, neighborhoods and churches, without ever being recorded. Today, the omnipresence and scope of government has diminished the importance of the family and reduced the importance of the voluntary private sector.

The problem with the “do-nothing” presumption is that it contradicts the other premises of the welfare state. Much is made of the fact that we “voluntarily tax ourselves” to enable government to undertake its work. Of course, if this were really true, taxation would be superfluous and wasteful. The purpose of taxation is to coerce the unwilling; they are being taxed, not those who voluntarily surrender their income to the state. If people are unwilling, they presumably have a good reason. Either way, there is no reason to preserve a status quo that has broken down. Let the willing contribute to charities of their choice. This gesture will undoubtedly recruit many who are now unwilling to allow government to waste their money but would willingly give money if allowed to supervise, evaluate and fine-tune their contributions.

Of course, somebody must be lobbying strongly in favor of the current system. It is the administrators, managers and employees of the 180 or so federal agencies that make up the welfare system. Welfare started out as an ostensible benefit for the poor but has now become a kind of dole for those who operate the system rather than its supposed beneficiaries. Most of these people earn higher salaries and larger employment benefits than they would otherwise earn in the private sector. Thus, they have a very strong motivation to preserve the status quo even though they are themselves taxpayers.

There is one more group – a small one – whose self-interest is identified strongly with the “Joe Biden approach.” That is the relatively small number of politicians who gain a large number of votes from their staunch support of this system. And it is this group whose resistance to change has kept that system in place.

Meanwhile, what of the orphans themselves, disabled or otherwise? In a voluntary society, they could choose where to seek assistance just as the rest of us could choose whether and how to render it. The problem would be getting those who need help together with those willing and able to help. Today, the one thing available to all in profusion is information. It is impossible to believe that the voluntary efforts of a free people would accomplish less than the self-interested efforts of a badly motivated, poorly informed government.

That is the crowning irony of “Biden vs. Helms”: the Helms approach empowers the poor while the Biden approach renders them relatively powerless. The best way to help orphans is to keep government away from them.

DRI-190 for week of 12-30-12: Stereotypes Overturned: Race, Hollywood and the Jody Call

An Access Advertising EconBrief:

Stereotypes Overturned: Race, Hollywood and the Jody Call

The doctrine often referred to as “political correctness” ostensibly aims to overturn reigning stereotypes governing matters such as race. Yet all too often it results in the substitution of new stereotypes for old. Economics relies on reason and motivation rather than political programming to provide answers to human choices. Nothing could be more subversive of stereotypes than that.

What follows is a tale of Hollywood, race and the American military. At the time, each of these elements was viewed through a stylized, stereotypical lens – as they still are to some extent. But in no case did this tale unfold according to type. The reasons for that were economic.

The Movie Battleground (1949)

In 1949, Metro Goldwyn Mayer produced one of the year’s biggest box-office hits, Battleground. It told the story of World War II’s Battle of the Bulge as seen through the eyes of a single rifle squad in the 101st Airborne Division of the U.S. Army. In late 1944, Germany teetered on the edge of defeat. Her supreme commanders conceived the idea of a desperate mid-winter offensive to grab the initiative and rock the Allies back on their heels. The key geographic objective was the town of Bastogne, Belgium, located at the confluence of seven major roads serving the Ardennes region and Antwerp harbor. Germany launched an attack that drove such a conspicuous salient into the Allied line that the engagement acquired the title of the “Battle of the Bulge.”

The Screaming Eagles of the 101st Airborne were the chief defenders of Bastogne. This put them somewhat out of their element, since their normal role was that of attack paratroopers. Despite this, they put up an unforgettable fight even though outnumbered ten to one by the German advance. The film’s scriptwriter and associate producer, Robert Pirosh, was among those serving with the 101st and trapped at Bastogne.

Battleground accurately recounted the Battle of the Bulge, including an enlisted man’s view of the legendary German surrender demand and U.S. General McAuliffe’s immortal response: “Nuts.” But the key to the film’s huge box-office success – it was the second-leading film of the year in ticket receipts – was its continual focus on the battle as experienced by the combat soldier.

The men display the range of normal human emotions, heightened and intensified out of proportion by the context. Courage and fear struggle for supremacy. Boredom and the Germans vie for the role of chief nemesis. The film’s director, William Wellman, had flown in the Lafayette Escadrille in World War I and was one of Hollywood’s leading directors of war films, including the first film to win a Best Picture Oscar, Wings.

Some of MGM’s leading players headed up the cast, including Van Johnson, George Murphy, John Hodiak, and Ricardo Montalban. The film was nominated for six Academy Awards and won two, for Pirosh’s story and screenplay and Paul Vogel’s stark black-and-white cinematography. In his motion-picture debut, James Whitmore was nominated for Best Supporting Actor and won a Golden Globe Award as the tobacco-chewing sergeant, Kinnie.

Whitmore provides the dramatic highlight of the film. Starving and perilously low on ammunition, the men of the 101st grimly hold out. They are waiting for relief forces led by General George Patton. Overwhelming U.S. air superiority over the Germans is of no use because fog and overcast have Bastogne completely socked in, grounding U.S. planes. Whitmore’s squad is cut off, surrounded and nearly out of bullets. Advised by Whitmore to save their remaining ammo for the impending German assault, the men silently fix bayonets to their rifles and await their death. Hobbling back to his foxhole on frozen feet, Whitmore notices something odd that stops him in his tracks. Momentarily puzzled, he soon realizes what stopped him. He has seen his shadow. The sun has broken through the clouds – and right behind it come American planes to blast the attacking German troops and drop supplies to the 101st. The shadow of doom has been lifted from “the battered bastards of Bastogne.”

1949 audiences were captivated by two scenes that bookended Battleground. After the opening credits and scene-setting explanation, soldiers are seen performing close-order drill led by Whitmore. These men were not actors or extras but were actual members of the 101st Airborne. They executed Whitmore’s drill commands with precise skill and timing while vocalizing a cadence count in tandem with Whitmore. This count would eventually attain worldwide fame and universal acceptance throughout the U.S. military. It began:

You had a good home but you left

You’re right!

You had a good home but you left

You’re right!

Jody was there when you left

You’re right!

Your baby was there when you left

You’re right!

Sound Off – 1,2

Sound Off – 3,4

Cadence Count – 1,2,3,4

1,2 – 3-4!

At the end of the movie, surviving members of Whitmore’s squad lie exhausted beside a roadway. Upon being officially relieved and ordered to withdraw, they struggle to their feet and head toward the rear, looking as worn out and numb as they feel. They meet the relief column marching towards them, heading to the front. Not wishing for the men to seem demoralized and defeated, Van Johnson suggests that Whitmore invoke the cadence count to bring them to life. As the movie ends, the squad marches smartly off while adding two more verses to the cadence count, supported by the movie’s music score:

Your baby was lonely as lonely could be

Until he provided company

Ain’t it great to have a pal

who works so hard to keep up morale?

Sound Off – 1,2

Sound Off – 3,4

Cadence Count – 1,2,3,4

1,2 – 3-4!

You ain’t got nothing to worry about

He’ll keep her happy ’till I get out

And I won’t get out ’till the end of the war

In Nineteen Hundred and Seventy-four

Sound Off – 1,2

Sound Off – 3,4

Cadence Count – 1,2,3,4

1,2 – 3-4!

The story of this cadence count, its inclusion in Battleground, its rise to fame and the fate of its inventor and his mentor are the story-within-the-story of the movie Battleground. This inside story speaks to the power of economics to overturn stereotypes.

The Duckworth Chant

In early 1944, a black Army private named Willie Lee Duckworth, Sr., was returning to Fort Slocum, NY, from a long, tiresome training hike with his company. To pick up the spirits of his comrades and improve their coordination, he improvised a rhythmic chant. According to Michael and Elizabeth Cavanaugh in their blog, “The Duckworth Chant, Sound Off and the Jody Call,” this was the birth of what later came to be called the Jody (or Jodie) Call.

Duckworth’s commanding officer learned of the popularity of Duckworth’s chant. He encouraged Duckworth to compose additional verses for training purposes. Soldiers vocalized the words of the chant along with training commands as a means of learning and coordinating close-order drill. Duckworth’s duties exceeded those of composer – he also taught the chant to white troops at Fort Slocum. It does not seem overly imaginative to envision episodes like this as forerunners to the growth of rap music, although it would be just as logical to attribute both phenomena to a different common ancestor.

Who is Jody (or Jodie)? The likely derivation is from a character in black folklore, Joe de Grinder, whose name would have been shortened first to Jody Grinder, then simply to Jody. The word “grind” has a sexual connotation, and Jody’s role in the cadence count has indeed been to symbolize the proverbial man back home and out of uniform, who threatens to take the soldier’s place with his wife or girlfriend.

Already our story has turned certain deeply ingrained racial stereotypes upside down. In 1944, America was a segregated nation, not just in the South but North, East and West as well. This was also true of our armed forces. Conventional thinking (as distinct from conventional wisdom) holds that a black Army private had no power to influence his fate and was little more than a pawn under the thumb of larger forces.

Yet against all seeming odds and expectations, a black draftee from the Georgia countryside spontaneously introduced his own refinement into military procedure – and that refinement was not only accepted but wholeheartedly embraced. The black private was even employed to train white troops – at a point when racial segregation was the status quo.

Pvt. Duckworth’s CO was not just any commanding officer. He was Col. Bernard Lentz, the senior colonel in the U.S. Army at that time. Col. Lentz was a veteran of World War I, when he had developed the Cadence System of Teaching Close-Order Drill – his own personal system of drill instruction using student vocalization of drill commands. When Lentz heard of Duckworth’s chant, he immediately recognized its close kinship with his own methods and incorporated it into Fort Slocum’s routine.

The public-choice school of economics believes that government bureaucrats do not serve the “public interest.” Partly, this is because there is no unambiguous notion of the public interest for them to follow. Consequently, bureaucrats can scarcely resist pursuing their own ends, since it is easy to fill the objective-function vacuum with their own personal agendas. This is a case in which the public interest was served by a bureaucrat pursuing his own interests.

Col. Lentz had a psychological property interest in the training system that he personally developed. He had a vocational property interest in that system since its success would advance his military career. And in this case, there seems to be little doubt that the Duckworth Chant improved the productivity of troop training. Its use spread quickly throughout the army. According to the Cavanaughs, it was being used in the European Theater of Operations (ETO) by V-E Day. Eventually, Duckworth’s name recognition faded, to be replaced by that of his chant’s eponymous character, Jody. But the Jody Call itself remains to this day as a universally recognized part of the military experience.

Thus, the stereotypes of racial segregation and bureaucratic inertia were overcome by the economic logic of property rights. And the morale of American troops has benefitted ever since.

Hollywood as User and Abuser – Another Myth Exploded

The name of Pvt. Willie Lee Duckworth, Sr. does not exit the pages of history with the military’s adoption of his chant as a cadence count. Far from it. To paraphrase the late Paul Harvey, we have yet to hear the best of the rest of the story.

As noted above, the Duckworth chant spread to the ETO by early 1945. It was probably there that screenwriter Robert Pirosh encountered it and germinated the idea of planting it in his retelling of the Battle of the Bulge. When Battleground went into production, MGM representative Lily Hyland wrote to Col. Lentz asking if the cadence count was copyrighted and requesting permission to use it in the film.

Col. Lentz replied, truthfully, that the cadence count was not under copyright. But he requested compensation for Pvt. Duckworth and for a half-dozen soldiers who were most responsible for conducting training exercises at Fort Slocum. The colonel suggested monetary compensation for Duckworth and free passes to the movie for the other six. MGM came through with the passes and sent Pvt. Duckworth a check for $200.

As the Cavanaughs point out, $200 sounds like a token payment today. But in 1949, $200 was approximately the monthly salary of a master sergeant in the Army, so it was hardly trivial compensation. This is still another stereotype shot to pieces.

Hollywood has long been famed in song and story – and in its own movies – as a user and abuser of talent. In this case, the casual expectation would have been that a lowly black soldier with no copyright on a rhyming chant he had first made up on the spur of the moment, with no commercial intent or potential, could expect to be stiffed by the most powerful movie studio on earth. If nothing else, we would have expected that Duckworth’s employer, the Army, would have asserted a proprietary claim for any monies due for the use of the chant.

That didn’t happen because the economic interests of the respective parties favored compensating Duckworth rather than stiffing him. Col. Lentz wanted the Army represented in the best possible light in the film, but he particularly wanted the cadence count shown to best advantage. If Pvt. Duckworth came forward with a public claim against the film, that would hurt Lentz’s psychological and vocational property interests. The last thing MGM wanted was a lawsuit by a soldier whose claim would inevitably resonate with the public, making him seem an exploited underdog and the studio look like a bunch of chiseling cheapskates – particularly when the studio could avoid it with a payment of significant size to him but infinitesimal as a fraction of a million-dollar movie budget.

A Hollywood Ending – Living Happily Ever After

We have still not reached the fadeout in our story of Col. Lentz and Pvt. Duckworth. Carefully observing the runaway success of Battleground, Col. Lentz engaged the firm of Shapiro, Bernstein & Co. to copyright an extended version of the Duckworth chant in 1950 under the title of “Sound Off.” Both he and Willie Lee Duckworth, Sr. were listed as copyright holders. In 1951, the first of many commercial recordings was made by Vaughn Monroe. In 1952, a film titled Sound Off was released. All these commercial exploitations of “Sound Off” resulted in payments to the two men.

How much money did Pvt. Duckworth receive as compensation for the rights to his chant, you may ask? By 1952, Duckworth was apparently receiving about $1,800 per month. In current dollars, that would amount to an income well in excess of $100,000 per year. Of course, like most popular creations, the popularity of “Sound Off” rose, peaked and then fell off to a whisper. But the money was enough to enable Duckworth to buy a truck and his own small pulpwood business. That business supported him, his wife and their six children. It is fair to say that the benefits of Duckworth’s work continued for the rest of his life, which ended in 2004.
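
The arithmetic behind that estimate is easy to check. Here is a minimal sketch in Python; the Consumer Price Index figures in it are approximate assumptions supplied for illustration, not data from the Cavanaughs or any other source in this article.

# Converting Pvt. Duckworth's reported 1952 royalty income to current
# dollars. The CPI figures are rough approximations, not article data.

MONTHLY_1952 = 1800   # reported monthly receipts by 1952
CPI_1952 = 26.5       # approximate annual-average CPI for 1952
CPI_TODAY = 230.0     # approximate CPI for the early 2010s

annual_1952 = MONTHLY_1952 * 12                      # $21,600 per year
annual_today = annual_1952 * (CPI_TODAY / CPI_1952)  # about $187,000
print(f"${annual_1952:,} in 1952 is roughly ${annual_today:,.0f} today")

Even allowing generous error in the CPI figures, the result lands comfortably above the $100,000-per-year mark claimed in the text.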

If still dubious about the value of what MGM gave Duckworth, consider this. The showcase MGM provided for Duckworth’s chant amounted to advertising worth many thousands of dollars. Without it, the subsequent success of “Sound Off” would have been highly problematic, to put it mildly. It seems unlikely that Col. Lentz would have been inspired to copyright the cadence count, and any benefits received by the two men would have been minuscule in comparison.

The traditional Hollywood movie ending is a fadeout following a successful resolution of the conflict between protagonist and antagonist, after which each viewer inserts an individual conception of perpetual bliss as the afterlife of the main characters. In reality, as Ernest Hemingway reminds us, all true stories end in death. But Willie Lee Duckworth, Sr.’s story surely qualifies as a reasonable facsimile of “happily ever after.”

This story is not the anomaly it might seem. Although Hollywood itself was not a powerful engine of black economic progress until much later, free markets were the engine that pulled the train to a better life for 20th century black Americans. Research by economists like Thomas Sowell has established that black economic progress long preceded black political progress in the courts (through Brown v. Board of Education of Topeka) and the U.S. Congress (through legislation like the Civil Rights Act of 1964).

The Movie that Toppled a Mogul

There were larger economic implications of Battleground. These gave the film the sobriquet of “the movie that toppled a mogul.” As Chief Operating Officer of MGM, Louis B. Mayer had long been the highest-paid salaried employee in the U.S. The size of MGM’s payroll made it the largest contributor on the tax rolls of Southern California. Legend had endowed Mayer with the power to bribe police and influence politicians. Seemingly, this should have secured his job tenure completely.

Battleground was a project developed by writer and executive Dore Schary while he worked at rival studio RKO. Schary was unable to get the movie produced at RKO because his bosses there believed the public’s appetite for war movies had been surfeited by the wave of propaganda-oriented pictures released during the war. When Schary defected to MGM, he brought the project with him and worked ceaselessly to get it made.

Mayer initially opposed Battleground for the same reasons as most of his colleagues in the industry. He called it “Schary’s Folly.” Yet the movie was made over his objections. And when it became a blockbuster hit, the fallout caused Mayer to be removed as head of the studio that bore his name. To add insult to this grievous injury, Schary replaced Mayer as COO.

For roughly two decades, economists had supported the hypothesis of Adolf Berle and Gardiner Means that American corporations suffered from a separation of ownership and control. Ostensibly, corporate executives were not controlled by boards of directors who safeguarded the interests of shareholders. Instead, the executives colluded with boards to serve their joint interests. If ever there was an industry to test this hypothesis, it was the motion-picture business, dominated by a tightly knit group of large studios run by strong-willed moguls. MGM and Louis B. Mayer were the locus classicus of this arrangement.

Yet the production, success and epilogue of Battleground made it abundantly clear that it was MGM board chairman Nicholas Schenck, not Mayer, who was calling the shots. And Schenck had his eye fixed on the bottom line. Appearances to the contrary notwithstanding, Louis B. Mayer was not the King of Hollywood after all. Market logic, not market failure, reigned. Economics, not power relationships, ruled.

Thanks to Battleground, stereotypes were dropping like soldiers of the 47th Panzer Corps on the arrival of Patton’s Third Army in Bastogne.

No Happy Ending for Hollywood

Battleground came at the apex of American movies. Average weekly cinema attendance exceeded the population of the nation. The studio system was a smoothly functional, vertically integrated machine for firing the popular imagination. It employed master craftsmen at every stage of the process, from script to screen.

Although it would have seemed incredible at the time, we know now that it was all downhill from that point. Two antitrust decisions in the late 1940s put an end to the Hollywood studio system. One particular abomination forbade studios from owning chains of movie theaters; another ended up transferring creative control of movies away from the studios.

The resulting deterioration of motion pictures took place in slow motion because the demand for movies was still strong and the studio system left us with a long-lived supply of people who still preserved the standards of yore. But the vertically integrated studio system has been gone for over half a century. Today, Hollywood is a pale shadow of its former self. Most movies released by major studios do not cover their costs through ticket sales. Studio profits result from sales of ancillary merchandise and rights. Theater profits are generated via concession sales. Motion-picture production is geared toward those realities and targeted predominantly toward the very young. Subsidies by local, state and national governments are propping up the industry throughout the world. And those subsidies must disappear sooner or later – probably sooner.

This has proved to be the ultimate vindication of our thesis that economics, not stereotypical power relationships, governed the movie business in Hollywood’s Golden Age. Free markets put consumers and shareholders in the driver’s seat. The result created the unique American art form of the 20th century. We still enjoy its fruits today on cable TV, VHS, DVD and the Internet. Misguided government attempts to regulate the movie business ended up killing the golden goose or, more precisely, reducing it to an enfeebled endangered species.

DRI-311 for week of 11-04-12: Natural and Unnatural Disasters

An Access Advertising EconBrief:

Natural and Unnatural Disasters

A natural disaster can wreak havoc on the economy of a city or a region. But it remains to be seen whether this is worse than the unnatural disaster created by politicians grimly determined to cope with the resulting crisis.

Hurricane Sandy struck the U.S. East Coast last weekend. Although only a Category 1 storm – hardly of vast magnitude by historic standards – Sandy nevertheless inflicted considerable death and destruction. Among her sundry devastations were the trashing of New York City harbor and the interruption of electric power for millions of local residents.

By interdicting shipments of gasoline into the port, Sandy left inhabitants of metropolitan New York City and New Jersey temporarily out of gas. By shutting off power, the storm left many gas stations unable to open for business even if they had gas to sell. On Friday, some two-thirds of New York City gas stations were closed. On Saturday, the proportion of closures still numbered one-third.

A triumvirate of elected officials – Gov. Andrew Cuomo of New York, Gov. Chris Christie of New Jersey and Mayor Michael Bloomberg of New York City – reacted in the inimitable way of politicians everywhere when faced with a career-defining, character-revealing problem.

They ran amok.

The Political Devastation: Gasoline Rationing

Gov. Chris Christie of New Jersey is a Republican. He is highly regarded in certain right-wing circles. Once that would have testified to his economic bona fides. His reaction to Sandy and the sudden spike in scarcity of gasoline, though, was more reminiscent of the Neanderthal left.

Gov. Christie imposed gasoline rationing. This is a time-honored reaction to a decrease in supply or increase in price of something popular or important. Honored by time, that is, but not by logic. Rationing is a political means of deciding which buyer’s desires get satisfied when there’s not enough of the good to satisfy everybody. And the reason why there’s not enough is that the good’s price is not allowed to rise high enough to call forth a sufficient quantity supplied.

After all, the Law of Supply is one-half of the Laws of Supply and Demand. It says that producers will wish to produce more of any good for sale at higher prices of that good than at lower prices – all other things equal. Superimpose this Law over the Law of Demand – which says that buyers will wish to buy more of any good at lower prices of that good than at relatively higher prices, all other things equal – and you have the makings of a market. Together, the two Laws combine to generate an equilibrium price – the price at which the quantity buyers wish to purchase equals the quantity producers wish to produce for sale. This is the price towards which a competitive market will tend to gravitate and the only price that could (in principle) persist indefinitely.

Competitive markets tend to equalize the amount people want to purchase and the amount producers want to produce and sell. They do this through fluctuations in price. If a drastic decline in supply occurs – perhaps through the intervention of a disaster like Hurricane Sandy – the immediate effect of this will be a shortage of the good at the previously prevailing equilibrium price. Suddenly buyers are no longer able to get the amount of the good they previously purchased at the former price. Their dissatisfaction will goad sellers to increase production and shipments to the market in order to enjoy a higher price and increase their profits.

Gradually, as price rises ever higher, two things happen. Producers supply more and more because they are making more and more profit. Buyers wish to buy less and less as price continues to rise. Consequently, the shortage gets smaller and smaller. Ultimately – bam! – the point is reached where the amount producers ship and sell equals what buyers wish to purchase. At that point, nobody has an incentive to change their behavior further. The fewer the political and logistical constraints, the shorter this adjustment process will be.
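
A minimal numerical sketch in Python makes the adjustment concrete. The linear demand and supply curves below are invented for illustration only; nothing in them comes from actual gasoline data. Before the shock, demand and supply balance at an equilibrium price; after supply shifts left, the old price produces a shortage, which vanishes as price rises to the new equilibrium.

# Invented linear curves: the Law of Demand (quantity falls as price
# rises) and the Law of Supply (quantity rises with price).

def demand(p):
    return 100 - 10 * p

def supply_before(p):
    return 20 * p - 20

def supply_after(p):  # the storm shifts the supply curve leftward
    return 20 * p - 50

def equilibrium(dem, sup):
    # Scan prices in one-cent steps; ample precision for linear curves.
    return min((cents / 100 for cents in range(0, 1001)),
               key=lambda p: abs(dem(p) - sup(p)))

p_old = equilibrium(demand, supply_before)  # -> 4.0
p_new = equilibrium(demand, supply_after)   # -> 5.0
print(f"old equilibrium: P = {p_old}, Q = {demand(p_old)}")
print(f"shortage at the old price after the shock: {demand(p_old) - supply_after(p_old)}")
print(f"new equilibrium: P = {p_new}, Q = {demand(p_new)}")
# Rationing that freezes price at 4.0 merely preserves the 30-unit
# shortage that a rising price would otherwise eliminate.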

If the supply disruption caused by a natural disaster were sufficiently protracted in time – something like, say, the dislocations caused by the earthquake and follow-up tsunami in Japan in 2011 – the continuing availability of supranormal profits would attract entry by new firms into the industry. The increase in supply caused by this new entry would eventually lower price until all firms were earning merely a competitive rate of return.

But Hurricane Sandy is a short-run phenomenon whose supply disruptions will be handled entirely by existing firms. Recent declines in crude oil prices have reflected a brewing worldwide recession. These declines have translated into lower gasoline prices. Refineries have accordingly cut back on production in response to this cyclical decline in demand. The trick is to get them to increase production and stand the cost of shipping product to the East Coast to meet this sizable temporary need. That is the function of the price system – one it is ideally equipped to handle.

There is no rationale for intercession by government into the process, either in the short run or the long run. The stated justification for government rationing is always the same. It is to prevent the price from rising “too high,” to prevent the good from becoming “unaffordable,” to preserve “equity” and “fairness,” to prevent sellers from “exploiting an emergency” to earn “windfall profits” or “obscene profits” or “profiteering” or “putting profits above people” or “earning profits off the backs of the disadvantaged victims.” None of the quoted phrases have any objective meaning or definition. All of them are emotive terms designed to evoke terror or pity or outrage in an observer without having to meet any analytical standard of proof. In short, they are the rhetorical currency of the politician.

In this case, Gov. Christie announced a formula for gasoline rationing. He announced it with the utmost gravity despite the fact that it was entirely whimsical. It was the “odd-even” formula. New Jersey motorists whose license plates ended in an odd number could legally acquire gasoline only on odd-numbered days of the month. Owners of plates ending in even numbers bought on even-numbered days.

Competitive markets allow price to move to the level necessary to equate the quantity supplied and the quantity demanded of the good in question. Rationing has the specific intent of preventing price from increasing to the level it would otherwise reach. Rationing deliberately strives to preserve the condition of shortage. Of course, it does so in the name of “fairness.” But then, what political atrocity isn’t committed under some noble-sounding pretext or another?

More Political Devastation: “Free” Gasoline

Not to be outdone by a neighboring politician, Gov. Andrew Cuomo of New York state elbowed his way into the act with an executive announcement of his own. No less august an agency than the Department of Defense – presumably taking a break from its successful prosecution of various wars or “wars” – was riding to the rescue. It would establish mobile fueling stations in New York City and Long Island, with gas supplied by the federal government.

“And the good news is,” the governor concluded triumphantly, “it’s going to be free.” What a triumph for the Nanny State! State disaster relief provided free by the federal government herself! (Of course, there would be a 10-gallon limit on purchases – the Governor was countering Christie’s proposal with his own differentiated rationing product.)

Unfortunately, the best-laid economic plans of bureaucrats gang aft agley. In fact, they gang invariably agley. It apparently never occurred to these master planners to worry about what people would do at the prospect of “free” gasoline.

What they did at the Freeport Armory in Long Island was to line up, some 1,000 strong, waiting for the station to open. But when it did, they learned that it would be eight hours until the gasoline itself arrived. At another mobile station in Queens, would-be buyers formed a line that stretched for 20 blocks.

Needless to say, the public was not happy when it felt the strings on that “free gas” offer. One caption on a picture of the resulting turmoil read: “Tempers flared after people camped out all night, waiting for their turn at the pump…” Teacher and gasoline consumer Lauren Popkoff commented, “There’s just so many people getting very frustrated. People don’t know what to do.”

At length, the state intervened with a plea that the public eschew the mobile stations until additional gasoline supplies arrived. Now the government had to order the public to avoid the special gas stations it had set up especially to relieve their “gasoline poverty.” It finally fell to the State Division of Military Affairs (!) to administer this fiasco, which lent just the right comic-opera touch to the proceedings.

There’s No Such Thing As A Free Lunch – Or Free Gasoline

Students often react reflexively to tales like this with responses such as, “Well, at least the people got their gas free.” Of course, this is arrant nonsense. We are so habituated to smoothly functioning markets that we see ourselves driving up to a pump, getting out, pumping gas and leaving – all within a short span of time. This is the implicit context within which we define our notion of “free gasoline.”

Significant time spent queuing – let alone marathon waits of eight hours or more – changes this picture completely. Now we must face the fact that the true economic price of gas also includes the opportunity cost of the time spent acquiring it. That is represented by the value of our time – either our labor time or our leisure time.

What is an hour of your time worth? Back east, eight dollars an hour is a low wage. Yet an eight-hour wait valued at $8 per hour comes to $64 – meaning that the 10 gallons of “free gas” cost customers at the Freeport Armory in Long Island a minimum of $6.40 per gallon, and that’s just the price for a minimum-wage customer who was first in line. Customers who were last in line might easily have “paid” double that much. If they were people with decent jobs, paying $20-50 per hour, they might have paid six or seven times that much. It is odd that the egalitarian left wing, obsessed with the concept of discrimination, has never worried about the differential pricing effects of rationing. Perhaps the right wing should coin a slogan for these cases – something like “people, not politics” or “reason, not rationing.”
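
The implied calculation is simple enough to set down explicitly. Here is a minimal sketch in Python, treating the wait lengths and wage rates as the illustrative assumptions they are:

# The true economic price of "free" gasoline once queuing time counts.
# Wait lengths and wage rates below are illustrative assumptions.

GALLONS = 10  # the ration limit per customer

def effective_price_per_gallon(wait_hours, wage_per_hour, pump_price=0.0):
    # Pump price plus the opportunity cost of the time spent in line.
    return pump_price + (wait_hours * wage_per_hour) / GALLONS

print(effective_price_per_gallon(8, 8.00))    # first in line, $8/hr wage: 6.4
print(effective_price_per_gallon(16, 8.00))   # last in line, $8/hr wage: 12.8
print(effective_price_per_gallon(8, 40.00))   # higher-paid worker: 32.0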

Go back to September 11, 2001. On that day, “runs” on gasoline stations were not uncommon. Quite a few motorists anticipated widespread dislocations and disruptions resulting from terrorism – at that point, the scale, scope and source of the attacks were unknown. Long lines formed at the pumps, which gave a few enterprising station owners the brainstorm of charging the ultra-high price of $5 per gallon for “no-waiting” gasoline. Predictably, this gave rise to cries of discrimination and price-gouging. But these entrepreneurs were solving the problem and making us better off. They simply allowed the public to sort itself into those with low-valued time – who waited in line at Quik-Trip or at oil-company stations – and those with high-valued time – who paid $5 per gallon to avoid paying much more in lost time at work or lost leisure.

The contrast between these two types of response is instructive. Government takes arbitrary actions that are mandatory and coercive and that take no account whatsoever of individual differences and preferences. They are designed purely to serve the interests of politicians and bureaucrats. The private market takes actions tailored to the interests of the different customer groups it serves. Those actions allow customers to maximize their own welfare by meeting their own needs in light of the differing prices and values they confront at each point in time. Entrepreneurs act this way not necessarily because they are noble and altruistic – they may or may not be – but because they must serve their customers well in order to survive commercially and prosper.

Anti-Price-Gouging Laws Gouge Consumers

It is easy to laugh at the comical antics of chief executives – mayors, governors and presidents. They are chosen more for their personalities and political instincts than for their analytical skills. But attorneys general and legislators are mostly lawyers who are supposedly trained analysts. They craft, pass and enforce complex legislation. These are people whose mental faculties are finely honed. Yet when they open their mouths on economics, they become blithering idiots.

Immediately after 9/11, state legislatures began to pass anti-price-gouging bills, ostensibly designed to protect consumers against high prices in emergencies. In New Jersey, businesses were forbidden from raising prices more than 10% within 30 days of a declared emergency. In New York State, merchants could not charge “unconscionably excessive price[s]” for “vital and necessary” goods. As to what constitutes “unconscionably excessive,” the law remained mute.

And sure enough, no sooner had Sandy made landfall than the respective AGs went into their act. New Jersey Attorney General Jeffrey Chiesa: “Anyone violating the law will find the penalties they face far outweigh the profits of taking unfair advantage of their fellow New Jerseyans during a time of great need.” Just to make sure that people knew the government meant business, it had previously hit a gas station with a $50,000 fine for raising its price by 16% during Hurricane Irene.

This reminded economist Benjamin Powell of the famous directive of Roman emperor Diocletian in 301 A.D. The emperor instituted a maximum price for bread and threatened violators with death. Powell noted that the chief result of this was an absence of bread. And “much as the Roman threat of death couldn’t force producers to bring products to the market, neither can New Jersey’s excessive fines.”

The one thing Northeasterners want most is gasoline. Prosecuting the producers who supply it will not encourage them to keep supplying it. And a theoretical right to obtain unlimited free quantities of a good of which there is no supply is not worth a tinker’s dam to consumers.

And Then There Was Bloomberg

Hovering over the crisis like Big Brother was Mayor Michael Bloomberg of New York City. It isn’t often that Mayor Bloomberg is upstaged in a public controversy, but in this case he was technically outranked by Gov. Cuomo. Still, he managed to get his rhetorical licks in. On Saturday, he announced that the gas shortages should be over “in a couple more days,” when the Port of New York City was reopened. But as late as Tuesday, November 6, the Huffington Post still carried accounts of the shortage. Offers to trade sex for gas were popping up on Craigslist.

A reliable byproduct of any durable program of rationing is the appearance of black (illicit) markets. Technically, the motivation for black markets arises from the condition of shortage. When price is not allowed to rise and eliminate the shortage, the result is a standing condition in which the maximum price buyers are willing to pay for the constrained quantity exceeds the legal price suppliers are allowed to receive. Buyers have an incentive to offer a higher-than-legal price to get more of the good; producers have an incentive to violate the law by supplying more to the market at prices above the legal level. Thus, the dance floor is prepared for the black-market tango.
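
A toy numerical sketch shows the mechanics. The linear demand and supply curves and every number below are invented for arithmetic convenience – they illustrate the textbook mechanism, not the actual gasoline market:

```python
# Toy sketch of a price ceiling creating a shortage and a black-market
# premium. The linear curves and all numbers are invented for
# illustration; they model the textbook mechanism, not any real market.

def demand_price(q):
    """Maximum price buyers will pay for the q-th unit."""
    return 10.0 - 0.5 * q

def supply_price(q):
    """Minimum price needed to call forth the q-th unit."""
    return 2.0 + 0.5 * q

# The unregulated market clears where the two curves meet:
# 10 - 0.5q = 2 + 0.5q  ->  q* = 8 units at p* = $6.
q_star, p_star = 8.0, demand_price(8.0)

# A legal ceiling below p* shrinks the quantity supplied...
ceiling = 4.0
q_supplied = (ceiling - 2.0) / 0.5             # producers offer only 4 units

# ...while buyers value that constrained quantity far above the ceiling.
willingness_to_pay = demand_price(q_supplied)  # $8.00 for the 4th unit

print(f"market-clearing: {q_star:.0f} units at ${p_star:.2f}")
print(f"under a ${ceiling:.2f} ceiling, only {q_supplied:.0f} units appear")
print(f"marginal buyer would pay ${willingness_to_pay:.2f} - "
      f"a ${willingness_to_pay - ceiling:.2f} black-market premium")
```

The gap between the $8.00 the marginal buyer would pay and the $4.00 legal price is exactly the room in which the black-market tango is danced.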

Sex-for-gas is somewhat irregular, but by no means outré. After all, a cash transaction would be traceable, at least theoretically, but sexual barter is much more difficult to trace and prove.

Rationing By Price vs. Non-Price Rationing

Supply disruptions create a situation in which a limited quantity must be allocated among many buyers. Economics suggests that there is a good and a bad way to do that. The good way is to ration the demand of many buyers using price. The bad way is to ration demand by queue, by coupon or by some other non-price method.

Rationing by price has certain key advantages. Among these are:

1. The ability of price to rise works to the advantage of supply, since it gives producers an incentive to bring more of the good to market. And in a case like Sandy’s, that is exactly what exasperated consumers want most – more of the good in question.

2. A higher price gives buyers an incentive to conserve, while allowing each buyer to take his or her own particular circumstances into account. A poor consumer, for example, may nonetheless need to purchase a large amount of the good and may willingly pay a high price to do it.

3. In order to maximize utility or satisfaction in an ordinary marketplace setting, a consumer equalizes his personal rate of tradeoff for the good to the rate offered by the market. That is to say, he buys the amount of any good that equates its marginal value or benefit to its marginal cost or price. All consumers face the same price for a good, so all consumers equalize their personal rates of tradeoff to the same market rate. Since two (or more) things that are equal to the same third thing are equal to each other, marketplace exchange guided by money prices achieves the same ideal outcome that would otherwise require an impossible amount of time and effort to reach using barter exchange without money. Rationing, in contrast, frustrates this outcome by driving a wedge between consumers’ personal rates of tradeoff – which is precisely what encourages black markets and criminality, as the formal statement below makes explicit.
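
For readers who want point 3 in symbols, here is a compact restatement of the standard textbook condition. The notation (MV_i for consumer i’s marginal value of the good, p for its money price) is generic shorthand, not anything specific to this episode:

```latex
% With a common money price p, each consumer i buys up to the point
% where marginal value equals price:
\[
  MV_i(q_i) = p \quad \text{for every consumer } i
  \;\Longrightarrow\;
  MV_1(q_1) = MV_2(q_2) = \cdots = p .
\]
% Under non-price rationing at fixed allotments \bar{q}_i, in general
\[
  MV_1(\bar{q}_1) \neq MV_2(\bar{q}_2),
\]
% and whenever the marginal values differ, both parties gain from a
% resale at any price strictly between them -- the black-market wedge.
```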

This analysis is a staple of microeconomics textbooks, the kind used to teach undergraduates in hundreds of U.S. colleges and universities. Economists testify to its validity as expert witnesses in court cases of various kinds – regulatory, antitrust, civil and criminal.

Venality or Stupidity?

There is a venerable maxim governing motivation and behavior: “Never ascribe to venality that which can be explained by mere stupidity.” In a world of imperfectly distributed information and intractable subjective perception, this is a sound rule of thumb.

Yet the continual refusal of politicians, regulators and lawmakers to take seriously the best-established principles of economic theory and logic – while embracing only the quack remedies of macroeconomics – can no longer be put down to mere stupidity. People who are smart enough to gerrymander legislative districts to cement their incumbency and bury their mistakes in legislation numbering thousands of pages cannot be written off as simply too stupid to master basic economics.

This means that they must have ulterior motives for acting as they do. Since their actions harm the constituents they are sworn to help, those motives are clearly anything but benign.

The logical motivation would be to deliberately thwart suppliers in order to leave constituents at the mercy of government. By making the public dependent on government, the minions of government protect the permanence of their own positions by enhancing their budgets and the scope of their power.

Natural vs. Unnatural Disaster

Natural disasters are bad enough. When the free market is given free play to cope with them, their effects can be mitigated. But when politicians, lawmakers and bureaucrats are allowed to use them as vehicles to serve their own interests at the public’s expense, the long-run harm of the resulting unnatural disaster rivals that of its natural counterpart.