DRI-186 for week of 5-10-15: How Can the Framework of Economics Help Us Assign Responsibility for War Crimes in World War II?

An Access Advertising EconBrief:

How Can the Framework of Economics Help Us Assign Responsibility for War Crimes in World War II?

The previous EconBrief explains how the classical theory of voluntary exchange and the moral concept of individual responsibility mutually reinforce each other. The mutually beneficial character of voluntary exchange allows individuals to assume responsibility for their own actions in a free society. Individual responsibility permits voluntary exchange to function without the necessity of, say, review of each transaction by a neutral third party to ensure fairness. The role of government in a voluntary society is minimal – to enforce contracts and prevent coercion.

Recently, the issue of responsibility for war crimes committed during World War II has been raised by various independent events. In Germany, a 93-year-old man is standing trial as an accessory to war crimes committed while he worked at the Auschwitz concentration camp during World War II. His presence in the camp is known, but his actual role and behavior are disputed. Should the prosecution have to prove he actually committed crimes, or would his participation as (say) a guard be enough to warrant his conviction as a war criminal?

A recent column in The Wall Street Journal by Bret Stephens (“From Buchenwald to Europe,” 05/05/2015) observes that many people in Germany were victims of Nazism, not Nazis – including many non-Jews. How should this affect Germany’s national policies today on European union, immigration and attitude toward systematic anti-Semitism and misogyny practiced by Muslim immigrants? “It isn’t easy, or ultimately wise, [for Germany] to live life in a state of perpetual atonement,” Mr. Stephens thinks.

Japan’s Prime Minister Shinzo Abe has publicly marveled at the transformation in relations between Japan and America, two countries that became deadly rivals in the late 1930s and waged total war in the 1940s, culminating in mankind’s only nuclear attack. Today we are two of the planet’s closest trading partners. Abe clearly wants to enlist the cooperation of the U.S. in Japan’s efforts to re-arm against the imminent threat of mainland China’s sabre-rattling territorial ambitions. But Abe has also made disturbing noises in domestic politics, worshipping at the shrine of Japan’s war dead and speaking equivocally about Japan’s aggressive invasion of its Asian neighbors in the 1930s. These speeches are a rough Japanese analogue to Holocaust denial.

In deciding what to make of these events, our analytical anchor is once again the economic logic of individual responsibility arising in a context of voluntary exchange.

The Flawed Notion of National Responsibility for War Crimes

In his Wall Street Journal piece, Bret Stephens depicts “the drama of postwar Germany” as its “effort to bury the Nazi corpse,” which “haunts Germany at every turn.” This phrasing is troubling. It implies that Germany’s residents bear a collective burden for sins committed long before most of them were even born.

Not surprisingly, this burden hasn’t just been heavy – it has been unshakeable. “Should Germany’s wartime sins be expiated by subsidizing the spendthrift habits of corrupt Greek governments? Should fear of being accused of xenophobia require Germans to turn a blind eye to Jew-hatred and violent misogyny when the source is Germany’s Muslim minority?” These questions, posed rhetorically by Mr. Stephens, should be placed in the pantheon of pointlessness with queries about the angel-carrying capacity of pinheads.

Even before World War II ended, many people realized that the Axis powers would have to be called to account for their sins. Members of the German and Japanese governments and military had committed acts that plumbed new depths of depravity. Civilization had institutions and standards for judging and punishing the familiar forms of crime, but the scope and magnitude of Axis atrocities persuaded the Allies to hold separate war-crimes tribunals for Germany and Japan. And the defendants at every trial were individual human beings, not collective entities called “Germany” or “Japan.”

To be sure, there were arguments – some of them almost as bitter as the fighting that preceded the trials – about which individuals should be tried. At least some of the disagreement probably reflected disappointment that the most deserving defendants (Hitler, Goering et al) had cheated the hangman by committing suicide beforehand. But nobody ever entertained the possibility of putting either nation on trial. In the first place, it would have been a practical impossibility. And without an actual trial, the proceedings would have been a travesty of justice. Even beyond that, though, the greater travesty would have been to suggest that the entirety of either nation had been at fault for acts such as the murder of millions of Jews by the Nazis.

We need look no farther than Stephens’ own article to substantiate this. He relates the story of his father-in-law, Hermann, who celebrated his 11th birthday on VE-Day, May 8th, 1945. He was the namesake of his father, a doctor who died in a German prison camp, where he had been imprisoned for the crime of xenophilia – showing friendly feelings toward foreign workers. The elder Hermann apparently treated inhabitants of forced-labor camps and had indicated the likelihood of an ultimate Russian victory over Germany. Not only was he not committing atrocities, he was trying to compensate for their effects – and he was killed for his pains. Were we supposed to prosecute his 11-year-old son? What madness that would have been! As Stephens put it, “what was a 10-year-old boy, whose father had died at Nazi hands, supposed to atone for?”

History tells us that Germany also harbored its own resistance movement, which worked behind the scenes to oppose Fascism in general and the war in particular. In fact, the Academy Award for Best Actor in 1943 went not to Humphrey Bogart, star of Best Picture winner Casablanca, but instead to Paul Lukas, who played a German who risked his life fighting the Nazis in the movie Watch on the Rhine. The Freiburg School, a German free-market school of economists formed before the war, openly opposed Fascist economic policies even during World War II. Their prestige was such that the Nazis did not dare kill them, instead preferring to suppress their views and prevent their professional advancement. Then there was the sizable number of Germans who did not join the Nazi Party and were not politically active.

Hold every contemporary German criminally accountable for the actions of Hitler, Goebbels, Hess, Goering, Mengele and the rest? Unthinkable. In which case, how can we even contemplate asking today’s Germans, who had no part in the war crimes, weren’t even alive when they were committed and couldn’t have prevented them even if inclined to try, to “atone” for them?

The longer we think about the notion of contemporary national guilt for war crimes, the more we wonder how such a crazy idea ever wandered into our heads in the first place. Actually, we shouldn’t wonder too long about that. The notion of national, or collective, guilt came from the same source as most of the crazy ideas extant.

It came from the intellectual left wing.

The Origin of “Social Wholes”

There is no more painstaking and difficult pastime than tracing the intellectual pedigree of ideas. The modern concept of the “social whole” or national collective seems traceable to the French philosopher Claude Henri de Rouvroy, Comte de Saint-Simon (hereinafter Saint-Simon). Saint-Simon is rightfully considered the father of Utopian Socialism. Born an aristocrat in 1760, he lived three lives – the first as a French soldier who fought for America in the Revolution, the second as a financial speculator who made and lost several fortunes, the third as an intellectual dilettante whose personal writings attracted the attention of young intellectuals and made him the focus of a cult.

Around age 40, Saint-Simon decided to focus his energies on intellectual pursuits. He was influenced by the intellectual ferment within France’s Ecole polytechnique, where the sciences of mathematics, chemistry, physics and physiology turned out distinguished specialists such as Lavoisier, Lagrange and Laplace. Unfortunately, Saint-Simon himself was able to appreciate genius but not to emulate it. Even worse, he was unable to grasp any distinction between the natural sciences and social sciences such as economics. In 1803, he wrote a pamphlet in which he proposed to attract funds by subscription for a “Council of Newton,” composed of twenty of the world’s most distinguished men of science, to be elected by the subscribers. They would be deemed “the representatives of God on earth,” thus displacing the Pope and other divinely ordained religious authorities, but with additional powers to direct the secular affairs of the world. According to Saint-Simon, these men deserved this authority because their competence in science would enable them to consciously order human affairs more satisfactorily than heretofore. Saint-Simon had received this plan in a revelation from God.

“All men will work; they will regard themselves as laborers attached to one workshop whose efforts will be directed to guide human intelligence according to my divine foresight [emphasis added]. The Supreme Council of Newton will direct their works… Anybody who does not obey their orders will be treated … as a quadruped.” Here we have the beginnings of the collective concept: all workers work for a single factory, under one central administration and one boss.

We can draw a direct line between this 1803 publication of Saint-Simon and the 20th century left-wing “Soviet of engineers” proposed by institutional economist Thorstein Veblen, the techno-socialism of J. K. Galbraith and the “keep the machines running” philosophy of Clarence Ayres. “Put government in the hands of technical specialists and give them absolute authority” has been the rallying cry of the progressive left wing since the 19th century.

Saint-Simon cultivated a salon of devotees who propagated his ideas after his death in 1825. These included most notably Auguste Comte, the founder of the “science” of sociology, which purports to aggregate all the sciences into one collective science of humanity. Comte inherited Saint-Simon’s disregard for individual liberty, referring contemptuously to “the anti-social dogma of the ‘liberty of individual conscience.'” It is no coincidence that socialism, which had its beginnings with Saint-Simon and his salon, eventually morphed into Nazism, which destroyed individual conscience so completely as to produce the Holocaust. That transformation from socialism to Nazism was described by Nobel laureate F. A. Hayek in The Road to Serfdom.

Today, the political left is committed to the concept of the collective. Its political constituencies are conceived in collective form: “blacks,” “women,” “labor,” “farmers,” “the poor.” Each of these blocs is represented by an attribute that blots out all trace of individuality: skin color, gender, economic class (or occupation), income. The collective concept implies automatic allegiance, unthinking solidarity. This is convenient for political purposes, since any pause for thought before voting might expose the uncomfortable truth that the left has no coherent policy program or set of ideas. The left traffics exclusively in generalities that attach themselves to social wholes like pilot fish to sharks: “the 1%,” “the 99%,” “Wall St. vs. Main St.,” “people, not profit,” “the good of the country as a whole.” This is the parlor language of socialism. The left finds it vastly preferable to nitty-gritty discussion of the reality of socialism, which is so grim that it couldn’t even be broached on college campuses without first issuing trigger warnings to sensitive students.

The left-wing rhetoric of the collective has special relevance to the question of war crimes. Actual war crimes are committed by individual human beings. Human beings live discrete, finite lives. But a collective is not bound by such limitations. For example, consider the business concept of a corporation. Every single human being whose efforts comprise the workings of the corporation will eventually die, but the corporation itself is – in principle – eternal. Thus, it is a collective entity that corresponds to left-wing notions because it acts as if animated by a single will and purpose. And the left constantly laments the obvious fact that the U.S. does not and cannot act with this singular unanimity of purpose. For decades, left-wing intellectuals such as Arthur Schlesinger and John Kenneth Galbraith have looked back with nostalgia at World War II because the U.S. united around the single goal of winning the war and subordinated all other considerations to it.

The Rhetorical Convenience of Collective Guilt

Given its collective bent, we would expect to find the left in the forefront of the “collective guilt” school of thought on the issue of war crimes. And we do. For the left, “the country” is one single organic unity that never dies. When “it” makes a ghastly error, “it” bears the responsibility and guilt until “it” does something to expiate the sin. That explains why Americans have been figuratively horsewhipped for generations about the “national shame” and “original sin” of slavery. It is now 152 years after the Emancipation Proclamation and 150 years since the end of the Civil War, in which a half-million Americans died to prevent slaveholding states from seceding from the Union. Following the Civil War, the U.S. Constitution was amended specifically to grant black Americans rights previously denied them. Yet “we” – that is, the collective entity of “the country” on which left-wing logic rests – have not yet expunged this legacy of slavery from “our” moral rap sheet. Exactly how the slate should be wiped clean is never clearly outlined – if it were, then the left wing would lose its rhetorical half-Nelson on the public debate over race – but each succeeding generation must carry this burden on its shoulders in a race-reversed reprise of the song “Ol’ Man River” from the musical Show Boat. “Tote that barge, lift that bale” refers in this case not to cotton but to the moral burden of being responsible for things that happened a century or more before our birth.

If this burden can be made heavy enough, it can motivate support for legislation like forced school busing, affirmative action and even racial reparations. Thus, the collective concept is a potentially powerful one. As Bret Stephens observes, it is now being pressed into service to prod Germany into bailing out Greeks, whose status as international deadbeats is proverbial. Exactly how were Greeks victimized by Germans? Were they somehow uniquely tyrannized by the Nazis – more so than, say, the Jews who later emigrated to Israel? No, Germany’s Nazism of seventy or eighty years ago is merely a handy pig bladder with which to beat today’s German over the head to extract blackmail money for the latest left-wing cause du jour. Since the money must come from the German government, German taxpayers must fork it over. A justification must be found for blackmailing German taxpayers. The concept of collective guilt is the ideal lever for separating Germans from their cash. Every single German is part of the collective; therefore, every single German is guilty. Voila!

The Falsity of Social Wholes

In The Counter-Revolution of Science (1952), Nobel laureate F.A. Hayek meticulously traced the pedigree of social wholes back to their roots. He sketched the life and intellectual career of Saint-Simon and his disciple Auguste Comte. Hayek then carefully exposed the fallacies behind the holistic method and explained why the unit of analysis in the social sciences must be the individual human being.

Holistic concepts like “the country” are abstract concepts that have no concrete referent because they are not part of the data of experience for any individual. Nobody ever interacts directly with “the country,” nor does “the country” ever interact directly with any other “country.” The only meaning possible for “the country” is the sum of all the individual human beings that comprise it, and the only possible theoretical validity for social wholes generally arises when they are legitimately constructed from their individual component parts. Indeed, Hayek views one role for social scientists as the application of this “compositive” method of partial aggregation as a means of deriving theories of human interaction.

The starting point, though, must be the individual – and theory can proceed only as far as individual plans and actions can be summed to produce valid aggregates. The left-wing historical modus operandi has reversed this procedure, beginning with one or more postulated wholes and deriving results, sometimes drawing conclusions about individual behavior but more often subsuming individuals completely within a faceless mass.

An example may serve to clarify the difference in the two approaches. The individualist approach, common to classical and neoclassical economics, is at home with the multifarious differences in gender, race, income, taste, preferences, culture and historical background that typify the human race. There is only one assumed common denominator among people – they act purposefully to achieve their ends. (For purposes of simplicity, those ends are termed “happiness.”) Then economic theory proceeds to show how the price system tends to coordinate the plans and behavior of people despite the innumerable differences that otherwise characterize them.

In contrast, the aggregative or holistic theory begins with certain arbitrarily chosen aggregates – such as “blacks.” It assumes that skin color is the defining characteristic of members of this aggregate; that is, skin color determines both the actions of the people within the aggregate and the actions of non-members toward those in the aggregate. The theory derived from this approach is correct if, and only if, this assumption holds. The equivalent logic holds true of other aggregates like “women,” “labor,” et al., with respect to the defining characteristic of each. Since this basic assumption is transparently false to the facts, holistic theories – beginning with Saint-Simonian socialism, continuing with Marxism, syndicalism and the theories of Fourier, the Fabian socialists, Lenin, Sombart, Trotsky, and the various modern socialists and Keynesians – have had to make numerous ad hoc excuses for the “deviationism” practiced by some members of each aggregate and for the failure of each theory.

The Hans Lipschis Case

Is it proper in principle that Hans Lipschis, a former employee of Auschwitz and now ninety-three years old, be repatriated to Germany from the U.S. and tried as an accessory in the murder of 300,000 inmates of the notorious World War II death camp? Yes. The postwar tribunals, notably at Nuremberg, reaffirmed the principle that “following orders” of duly constituted authority is not a license to aid and abet murder.

Lipschis’s defense is that he was a cook, not a camp guard. But a relatively new legal theory, used to convict another elderly war-crimes defendant, John Demjanjuk, is that the only purpose of camps like Auschwitz was to inflict death upon inmates. Thus, the defendant’s presence at the camp as an employee is sufficient to provide proof of guilt. Is this theory valid? No. A cook’s actions benefitted the inmates; a guard’s actions harmed them. If guards refused to serve, the camps could not have functioned. But if cooks refused to serve, the inmates would have died of starvation.

Verdicts such as that in the Demjanjuk case were undoubtedly born of the extreme frustration felt by prosecutors and men like Simon Wiesenthal and other Nazi hunters. It is almost beyond human endurance to have lived through World War II and then be forced to watch justice be cheated time after time after time. First the leading Nazis escaped or committed suicide. Then some of them were recruited to aid Western governments. Then some were sheltered by governments in South America and the Middle East. Over time, attrition eventually overtook figures such as Josef Mengele. Occasionally, an Adolf Eichmann was brought to justice – but even he had to be kidnapped by Israeli secret agents before he could be prosecuted. Now the job of legally proving actual criminal acts committed by minor functionaries fifty, sixty or seventy years after the fact becomes too difficult. So we cannot be surprised when desperate prosecutors substitute legal fancies for the ordinary rules of evidence.

Nevertheless, if the prosecution cannot prove that Lipschis committed actual crimes, then he must be acquitted. This has nothing to do with his age or the time lapse between the acts and the trial. Any other decision is a de facto application of the bogus principle of collective guilt.

Shinzo Abe and Guilt for Japanese Aggression in World War II

Japanese Prime Minister Abe is a classic politician. Like the Roman god Janus, he wears two faces, one when speaking abroad to foreign audiences and another when seeking reelection by domestic voters. His answers to questions about whether he was repudiating the stance taken by a previous Prime Minister in 1996 – that Japan was indeed guilty of aggression for which the Japanese government formally apologized – were delicately termed “equivocal” by the U.S. magazine U.S. News & World Report. That is a euphemism meaning that Abe was lying by indirection, a political tactic used by politicians the world over. He wanted his answer to be interpreted one way by Japanese voters without having to defend that interpretation to the foreign press.

Abe’s behavior was shameful. But that has absolutely nothing to do with the question of Japanese guilt for war crimes committed during and prior to World War II. That guilt was borne by specific individual Japanese and established by the Tokyo war-crimes tribunal. Indeed, one government spokesman eventually admitted this in just those words, albeit grudgingly, after Abe’s comments had attracted worldwide attention and criticism.

The implications of this are that Japanese today bear no “collective guilt” for the war crimes committed by previous Japanese. (It would be wrong to use the phrase “by their ancestors,” since presumably few Japanese today are related by blood to the war criminals of seventy or eighty years ago.) The mere coincidence of common nationality does not constitute common ancestry except in the broad cultural sense, which is meaningless when discussing moral guilt. Are we really supposed to believe, for example, that the surviving relatives of Jesse James or Billy the Kid should carry around a weighty burden of guilt for the crimes of their forebear? In a world where the lesson of the Hatfields and McCoys remains unlearned in certain precincts, this presumption seems too ridiculous for words.

Similarly, the fact that Japanese leaders in the 1920s, 30s and 40s were aggressively militaristic does not deny Japanese today the right to self-defense against a blatantly aggressive Chinese military establishment.

Much is made of Abe’s unwillingness to acknowledge the “comfort women” – women from Korea, China and other Asian nations who were held captive as prostitutes by Japanese troops. Expecting politicians to behave as historians is futile. If Japanese war criminals remain at large, apprehend and indict them. If new facts are unearthed about the comfort women or other elements of Japanese war crimes, publish them. But using these acts as a club against contemporary Japanese leaders is both wrong and counterproductive.

Besides, it’s not as if no other ammunition was available against Abe. He has followed Keynesian fiscal policies and monetary policies of quantitative easing since becoming prime minister. These may not be crimes against humanity, but they are crimes against human reason.

Macro vs. Micro

Academic economics today is divided between macroeconomics and microeconomics. The “national economy” is the supposed realm of macroeconomics, the study of economic aggregates. But as we have just shown, it is the logic of individual responsibility that actually bears on the issue of war crimes attributed to the nations of Germany and Japan – because the crimes were committed by individuals, not by “nations.”

One of the most valuable lessons taught by classical economic theory is that the unit of analysis is the individual – in economics or moral philosophy.

DRI-191 for week of 3-15-15: More Ghastly than Beheadings! More Dangerous than Nuclear Proliferation! It’s…Cheap Foreign Steel!

An Access Advertising EconBrief:

More Ghastly than Beheadings! More Dangerous than Nuclear Proliferation! It’s…Cheap Foreign Steel!

The economic way to view news is as a product called information. Its value is enhanced by adding qualities that make it more desirable. One of these is danger. Humans react to threats and instinctively weigh the threat-potential of any problematic situation. That is why headlines of print newspapers, radio-news updates, TV evening-news broadcasts and Internet websites and blogs all focus disproportionately on dangers.

This obsession with danger does not jibe with the fact that human life expectancy has doubled over the last century and that violence has never been less threatening to mankind than today. Why do we suffer this cognitive dissonance? Our advanced state of knowledge allows us to identify and categorize threats that passed unrecognized for centuries. Today’s degraded journalistic product, more poorly written, edited and produced than formerly, plays on our neuroscientific weaknesses.

Economists are acutely sensitive to this phenomenon. Our profession made its bones by exposing the bogey of “the evil other” – foreign trade, foreign goods, foreign labor and foreign investment as ipso facto evil and threatening. Yet in spite of the best efforts of economists from Adam Smith to Milton Friedman, there is no more dependable pejorative than “foreign” in public discourse. (The word “racist” is a contender for the title, but overuse has triggered a backlash among the public.)

Thus, we shouldn’t be surprised by this headline in The Wall Street Journal: “Ire Rises at China Over Glut of Steel” (03/16/2015, By Biman Mukerji in Hong Kong, John W. Miller in Pittsburgh and Chuin-Wei Yap in Beijing). Surprised, no; outraged, yes.

The Big Scare 

The alleged facts of the article seem deceptively straightforward. “China produces as much steel as the rest of the world combined – more than four times as much as the peak U.S. production in the 1970s.” Well, inasmuch as (a) the purpose of all economic activity is to produce goods for consumption; and (b) steel is a key input in producing countless consumption goods and capital goods, ranging from vehicles to buildings to weapons to cutlery to parts, this would seem to be cause for celebration rather than condemnation. Unfortunately…

“China’s massive steel-making engine, determined to keep humming as growth cools at home, is flooding the world with exports, spurring steel producers around the globe to seek government protection from falling prices. From the European Union to Korea and India, China’s excess metal supply is upending trade patterns and heating up turf battles among local steelmakers. In the U.S., the world’s second-biggest steel consumer, a fresh wave of layoffs is fueling appeals for tariffs. U.S. steel producers such as U.S. Steel Corp. and Nucor Corp. are starting to seek political support for trade action.”

Hmmm. Since this article occupies the place of honor on the world’s foremost financial publication, we expect it to be authoritative. China has a “massive steel-making engine” – well, that stands to reason, since it’s turning out as much steel as everybody else put together. It is “determined to keep humming.” The article’s three (!) authors characterize the Chinese steelmaking establishment as a machine, which seems apropos. They then endow the metaphoric machine with the human quality of determination – bad writing comes naturally to poor journalists.

This determination is linked with “cooling” growth. Well, the only cooling growth that Journal readers can be expected to infer at this point is the slowing of the Chinese government’s official rate of annual GDP growth from 7.5% to 7%. Leaving aside the fact that the rest of the industrialized world is pining for growth of this magnitude, the authors are not only mixing their metaphors but mixing their markets as well. The only growth directly relevant to the points raised here – exports by the Chinese and imports by the rest of the world – is growth in the steel market specifically. The status of the Chinese steel market is hardly common knowledge to the general public. (Later, the authors eventually get around to the steel market itself.)

So the determined machine is reacting to cooling growth by “flooding the world with exports,” throwing said world into turmoil. The authors don’t treat this as any sort of anomaly, so we’re apparently expected to nod our heads grimly at this unfolding danger. But why? What is credible about this story? And what is dangerous about it?

Those of us who remember the 1980s recall that the monster threatening the world economy then was Japan, the unstoppable industrial machine that was “flooding the world” with exports. (Yes, that’s right – the same Japan whose economy has been lying comatose for twenty years.) The term of art was “export-led growth.” Now these authors are telling us that massive exports are a reaction to weakness rather than a symptom of growth.

“Unstoppable” Japan suddenly stopped in its tracks. No country has ever ascended an economic throne based on its ability to subsidize the consumption of other nations. Nor has the world ever died of economic indigestion caused by too many imports produced by one country. The story told at the beginning of this article lacks any vestige of economic sense or credibility. It is pure journalistic scare-mongering. Nowhere do the authors employ the basic tools of international economic analysis. Instead, they employ the basic tools of scarifying yellow journalism.

The Oxymoron of “Dumping” 

The authors have set up their readers with a menacing specter described in threatening language. A menace must have victims. So the authors identify the victims. Victims must be saved, so the authors bring the savior into their story. Naturally, the savior is government.

The victims are “steel producers around the globe.” They are victimized by “falling prices.” The authors are well aware that they have a credibility problem here, since their readers are bound to wonder why they should view falling steel prices as a threat to them. As consumers, they see falling prices as a good thing. As prices fall, their real incomes rise. Falling prices allow consumers to buy more goods and services with their money incomes. Businesses buy steel. Falling steel prices allow businesses to buy more steel. So why are falling steel prices a threat?

Well, it turns out that falling steel prices are a threat to “chief executives of leading American steel producers,” who will “testify later this month at a Congressional Steel Caucus hearing.” This is “the prelude to launching at least one anti-dumping complaint with the International Trade Commission.” And what is “dumping?” “‘Dumping,’ or selling abroad below the cost of production to gain market share, is illegal under World Trade Organization law and is punishable with tariffs.”

After this operatic buildup, it turns out that the foreign threat to America spearheaded by a gigantic, menacing foreign power is… low prices. Really low prices. Visualize buying steel at Costco or Wal-Mart.

Oh, no! Not that. Head for the bomb shelters! Break out the bug-out bags! Get ready to live off the grid!

The inherent implication of dumping is oxymoronic because the end-in-view behind all economic activity is consumption. A seller who sells for an abnormally low price is enhancing the buyer’s capability to consume, not damaging it. If anybody is “damaged” here, it is the seller, not the buyer. And that raises the question: why would a seller do something so foolish?

More often than not, proponents of the dumping thesis don’t take their case beyond the point of claiming damage to domestic import-competing firms. (The three Journal reporters make no attempt whatsoever to prove that the Chinese are selling below cost; they rely entirely on the allegation to pull their story’s freight.) Proponents rely on the economic ignorance of their audience. They paint an emotive picture of an economic world that functions like a giant Olympics. Each country is like a great big economic team, with its firms being the players. We are supposed to root for “our” firms, just as we root for our athletes in the Summer and Winter Olympics. After all, don’t those menacing firms threaten the jobs of “our” firms? Aren’t those jobs “ours?” Won’t that threaten “our” incomes, too?

This sports motif is way off base. U.S. producers and foreign producers have one thing in common – they both produce goods and services that we can consume, either now or in the future. And that gives them equal economic status as far as we are concerned. The ones “on our team” are the ones that produce the best products for our needs – period.

Wait a minute – what if the producers facing those low prices happen to be the ones employing us? Doesn’t that change the picture?

Yes, it does. In that case, we would be better off if our particular employer faced no foreign competition. But that doesn’t make a case for restricting or preventing foreign competition in general. Even people who lose their jobs owing to foreign competition faced by their employer may still gain more income from the lower prices brought by foreign competition in general than they lose by having to take another job at a lower income.

There’s another pertinent reason for not treating foreign firms as antagonistic to consumer interests. Foreign firms can, and do, locate in America and employ Americans to produce their products here. Years ago, Toyota was viewed as an interloper for daring to compete successfully with the “Big 3” U.S. automakers. Now the majority of Toyota automobiles sold in the U.S. are assembled on American soil in Toyota plants located here.

Predatory Pricing in International Markets

Dumping proponents have a last-ditch argument that they haul out when pressed with the behavioral contradictions stressed above. Sure, those foreign prices may be low now, import-competing producers warn darkly, but just wait until those devious foreigners succeed in driving all their competitors out of business. Then watch those prices zoom sky-high! The foreigners will have us in their monopoly clutches.

That loud groan you heard from the sidelines came from veteran economists, who would no sooner believe this than ask a zookeeper where to find the unicorns. The thesis summarized in the preceding paragraph is known as the “predatory pricing” hypothesis. The behavior was notoriously ascribed to John D. Rockefeller by the muckraking journalist Ida Tarbell. It was famously disproved by the research of economist John McGee. And ever since, economists have stopped taking the concept seriously even in the limited market context of a single country.

But when propounded in the global context of international trade, the whole idea becomes truly laughable. Steel is a worldwide industry because its uses are so varied and numerous. A firm that employed this strategy would have to sacrifice trillions of dollars in order to reduce all its global rivals to insolvency. This would take years. These staggering losses would be accounted in current outflows. They would be weighed against putative gains that would begin sometime in the uncertain future – a fact that would make any lender blanch at the prospect of financing the venture.

As if the concept weren’t already absurd, what makes it completely ridiculous is the fact that even if it succeeded, it would still fail. The assets of all those firms wouldn’t vaporize; they could be bought up cheaply and held against the day when prices rose again. Firms like the American steel company Nucor have demonstrated the possibility of compact and efficient production, so competition would be sure to emerge whenever monopoly became a real prospect.

The likelihood of any commercial steel firm undertaking a global predatory-pricing scheme is nil. At this point, opponents of foreign trade are, in poker parlance, reduced to “a chip and a chair” in the debate. So they go all in on their last hand of cards.

How Do We Defend Against Government-Subsidized Foreign Trade?

Jiming Zou, analyst at Moody’s Investor Service, is the designated spokesman of last resort in the article. “Many Chinese steelmakers are government-owned or closely linked to local governments [and] major state-owned steelmakers continue to have their loans rolled over or refinanced.”

Ordinary commercial firms might cavil at the prospect of predatory pricing, but a government can’t go broke. After all, it can always print money. Or, in the case of the Chinese government, it can always “manipulate the currency” – another charge leveled against the Chinese with tiresome frequency. “The weakening renminbi was also a factor in encouraging exports,” contributed another Chinese analyst quoted by the Journal.

One would think that a government with the awesome powers attributed to China’s wouldn’t have to retrench in all the ways mentioned in the article – reduce spending, lower interest rates, and cut subsidies to state-owned firms including steel producers. Zou is doubtless correct that “given their important role as employers and providers of tax revenue, the mills are unlikely to close or cut production even if running losses,” but that cuts both ways. How can mills “provide tax revenue” if they’re running huge losses indefinitely?

There is no actual evidence that the Chinese government is behaving in the manner alleged; the evidence is all the other way. Indeed, the only actual recipients of long-term government subsidies to firms operating internationally are creatures of government like Airbus and Boeing – firms that produce most or all of their output for purchase by government and are quasi-public in nature, anyway. But that doesn’t silence the protectionist chorus. Government-subsidized foreign competition is their hole card and they’re playing it for all it’s worth.

The ultimate answer to the question “how do we defend against government-subsidized foreign trade?” is: We don’t. There’s no need to. If a foreign government is dead set on subsidizing American consumption, the only thing to do is let them.

If the Chinese government is enabling below-cost production and sale by its firms, it must be doing it with money. There are only three ways it can get money: taxation, borrowing or money creation. Taxation bleeds Chinese consumers directly; money creation does it indirectly via inflation. Borrowing does it, too, when the bill comes due at repayment time. So foreign exports to America subsidized by the foreign government benefit American consumers at the expense of foreign consumers. No government in the world can subsidize the world’s largest consumer nation for long. But the only thing more foolish than doing it is wasting money trying to prevent it.

What Does “Trade Protection” Accomplish?

Textbooks in international economics spell out in meticulous detail – using either carefully drawn diagrams or differential and integral calculus – the adverse effects of tariffs and quotas on consumers. Generally speaking, tariffs have the same effects on consumers as taxes in general – they drive a wedge between the price paid by the consumer and the price received by the seller, provide revenue to the government and create a “deadweight loss” of value that accrues to nobody. Quotas are, if anything, even more deleterious. (The relative harm depends on circumstances too complex to enumerate.)
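To make the wedge concrete, here is a minimal numerical sketch of the standard small-country tariff analysis – not drawn from any particular textbook – with assumed linear demand and supply curves; every parameter value below is an illustrative assumption.

```python
# Minimal sketch of small-country tariff welfare arithmetic.
# Assumptions: linear demand Qd = a - b*P, linear supply Qs = c + d*P,
# and a tariff t that raises the domestic price by its full amount.

def tariff_effects(a=100.0, b=1.0, c=10.0, d=1.0, p_world=20.0, t=5.0):
    def qd(p): return a - b * p          # domestic demand
    def qs(p): return c + d * p          # domestic supply

    p0, p1 = p_world, p_world + t        # price before/after the tariff

    # Trapezoid areas between the two prices under each (linear) curve
    consumer_loss = t * (qd(p0) + qd(p1)) / 2.0   # lost consumer surplus
    producer_gain = t * (qs(p0) + qs(p1)) / 2.0   # gained producer surplus
    revenue = t * (qd(p1) - qs(p1))               # tariff revenue on imports
    deadweight = consumer_loss - producer_gain - revenue  # accrues to nobody
    return consumer_loss, producer_gain, revenue, deadweight

cl, pg, rev, dwl = tariff_effects()
print(f"consumer loss {cl:.1f} = producer gain {pg:.1f} "
      f"+ revenue {rev:.1f} + deadweight loss {dwl:.1f}")
# consumer loss 387.5 = producer gain 162.5 + revenue 200.0 + deadweight loss 25.0
```

The deadweight loss is simply what remains of the consumers’ loss after the producers’ gain and the government’s revenue are netted out – the part of the wedge that accrues to nobody.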

This leads to a painfully obvious question: If tariffs hurt consumers in the import-competing country, why in the world do we penalize alleged misbehavior by exporters by imposing tariffs? This is analogous to imposing a fine on a convicted burglar along with a permanent tax on the victimized homeowner.

Viewed in this light, trade protection seems downright crazy. And in purely economic terms, it is. But in terms of political economy, we have left a crucial factor out of our reckoning. What about the import-competing producers? In the Wall Street Journal article, these are the complainants at the bar of the International Trade Commission. They are also the people economists have been observing ever since the days of Adam Smith in the late 18th century, bellied up at the government-subsidy bar.

In Smith’s day, the economic philosophy of Mercantilism reigned supreme. Specie – that is, gold and silver – was considered the repository of real wealth. By sending more goods abroad via export than returned in the form of imports, a nation could produce a net inflow of specie payments – or so the conventional thinking ran. This philosophy made it natural to favor local producers and inconvenience foreigners.

Today, the raison d’etre of the modern state is to take money from people in general and give it to particular blocs to create voting constituencies. This creates a ready-made case for trade protection. So what if it reduces the real wealth of the country – the goods and services available for consumption? It increases the electoral prospects of the politicians responsible and appears to increase the real wealth of the beneficiary blocs, which is sufficient for legislative purposes.

This is corruption, pure and simple. The authors of the Journal article present this corrupt process with a straight face because their aim is to present cheap Chinese steel as a danger to the American people. Thus, their aims dovetail perfectly with the corrupt aims of government.

And this explains the front-page article on the 03/16/2015 Wall Street Journal. It reflects the news value of posing a danger where none exists – that is, the corruption of journalism – combined with the corruption of the political process.

The “Effective Rate of Protection”

No doubt the more temperate readers will object to the harshness of this language. Surely “corruption” is too harsh a word to apply to the actions of legislators. They have a great big government to run. They must try to be fair to everybody. If everybody is not happy with their efforts, that is only to be expected, isn’t it? That doesn’t mean that legislators aren’t trying to be fair, does it?

Consider the economic concept known as the effective rate of protection. It is unknown to the general public, but it appears in every textbook on international economics. It arises from the conjunction of two facts: first, that a majority of goods and services are composed of raw materials, intermediate goods and final-stage (consumer) goods; and second, that governments have an irresistible impulse to levy taxes on goods that travel across international borders.

To keep things starkly simple and promote basic understanding, take the simplest kind of numerical example. Assume the existence of a fictional textile company. It takes a raw material, cotton, and spins, weaves and processes that cotton into a cloth that it sells commercially to its final consumers. This consumer cloth competes with the product of domestic producers as well as with cotton cloth produced by foreign textile producers. We assume that the prevailing world price of each unit of cloth is $1.00. We assume further that domestic producers obtain one textile unit’s worth of cotton for $.50 and add a further $.50 worth of value by spinning, weaving and processing it into cloth.

We have a basic commodity being produced globally by multiple firms, indicating the presence of competitive conditions. But legislators, perhaps possessing some exalted concept of fairness denied to the rabble, decide to impose a tariff on the importation of cotton cloth. Not wishing to appear excessive or injudicious, the solons set this ad valorem tariff at 15%. Given the competitive nature of the industry, this will soon elevate the domestic price of textiles above the world price by the amount of the tariff; e.g., by $.15, to $1.15. Meanwhile, there is no tariff levied on cotton, the raw material. (Perhaps cotton is grown domestically and not imported into the country or, alternatively, perhaps cotton growers lack the political clout enjoyed by textile producers.)

The insight gained from the effective rate of protection begins with the realization that the net income of producers in general derives from the value they add to any raw materials and/or intermediate products they utilize in the production process. Initially, textile producers added $.50 worth of value for every unit of cotton cloth they produced. Imposition of the tariff allows the domestic textile price to rise from $1.00 to $1.15, which causes textile producers’ value added to rise from $.50 to $.65.

Legislators judiciously and benevolently decided that the proper amount of “protection” to give domestic textile producers from foreign competition was 15%. They announced this finding amid fanfare and solemnity. But it is wrong. The tariff has the explicit purpose of “protecting” the domestic industry, of giving it leeway it would not otherwise get under the supposedly harsh and unrelenting regime of global competition. But this tariff does not give domestic producers 15% worth of protection. $.15 divided by $.50 – that is, the increase in value added divided by the original value added – is .30, or 30%. The effective rate of protection is double the size of the “nominal” (statutory) level of protection. In general, think of the statutory tariff rate as the surface appearance and the effective rate as the underlying truth.

Like oh-so-many economic principles, the effective rate of protection is a relatively simple concept that can be illustrated with simple examples, but that rapidly becomes complex in reality. Two complications need mention. When tariffs are also levied on raw materials and/or intermediate products, this affects the relationship between the effective and nominal rate of protection. The rule of thumb is that higher tariff rates on raw materials and intermediate goods relative to tariffs on final goods tend to lower effective rates of protection on the final goods – and vice-versa.

The other complication is the percentage of total value added comprised by the raw materials and intermediate goods prior to, and subsequent to, imposition of the tariff. This is a particularly knotty problem because tariffs affect prices faced by buyers, which in turn affect purchases, which in turn can change that percentage. When tariffs on final products exceed those on raw materials and intermediate goods – and this has usually been the case in American history – an increase in this percentage will increase the effective rate.
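For readers who want the arithmetic in general form, the standard textbook formula can be sketched as follows. Here the parameter names are our own and the numbers are illustrative, with the input share denoting the raw material’s fraction of the final good’s value at world prices; the first call reproduces the textile example above, the second illustrates the rule of thumb about input tariffs.

```python
# Standard textbook formula for the effective rate of protection (ERP):
# ERP = (t_final - a * t_input) / (1 - a), where a is the input's share
# of the final good's value at world prices.

def effective_rate(t_final: float, t_input: float, input_share: float) -> float:
    return (t_final - input_share * t_input) / (1.0 - input_share)

# The textile example: 15% tariff on cloth, no tariff on cotton, and
# cotton is half the cloth's world value -- protection is 30%, double
# the nominal 15% rate.
print(round(effective_rate(0.15, 0.00, 0.5), 2))   # 0.3

# The rule of thumb: a tariff on the raw material lowers the ERP.
print(round(effective_rate(0.15, 0.10, 0.5), 2))   # 0.2
```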

But for our immediate purposes, it is sufficient to realize that appearance does not equal reality where tariff rates are concerned. And this is the smoking gun in our indictment of the motives of legislators who promote tariffs and restrictive foreign-trade legislation.


Corrupt Legislators and Self-Interested Reporting are the Real Danger to America

In the U.S., the tariff code includes thousands of tariffs of widely varying sizes. These not only allow legislators to pose as saviors of numerous business constituent classes. They also allow them to lie about the degree of protection being provided, the real locus of the benefits and the reasons behind them.

Legislators claim that the size of tariff protection being provided is modest, both in absolute and relative terms. This is a lie. Effective rates of protection are higher than they appear for the reasons explained above. They unceasingly claim that foreign competitors behave “unfairly.” This is also a lie, because there is no objective standard by which to judge fairness in this context – there is only the economic standard of efficiency. Legislators deliberately create bogus standards of fairness to give themselves the excuse to provide benefits to constituent blocs – benefits that take money from the rest of us. International trade bodies are created to further the ends of domestic governments in this ongoing deception.

Readers should ask themselves how many times they have read the term “effective rate of protection” in The Wall Street Journal, The Financial Times of London, Barron’s, Forbes or any of the major financial publications. That is an index of the honesty and reputability of financial journalism today. The term was nowhere to be found in the Journal piece of 03/16/2015.

Instead, the three Journal authors busied themselves flacking for a few American steel companies. They showed bar graphs of increasing Chinese steel production and steel exports. They criticized the Chinese because the country’s steel production has “yet to slow in lockstep” with growth in demand for steel. They quoted self-styled experts on China’s supposed “problem [with] hold[ing] down exports” – without ever explaining what rule or standard or economic principle of logic would require a nation to withhold exports from willing buyers. They cited year-over-year increases in exports in January of 2013, 2014 and 2015 as evidence of China’s guilt, along with the fact that the Chinese were on pace to export more steel than any other country “in this century.”

The reporters quoted the whining of a U.S. Steel vice-president that demonstrating damage from Chinese exports is just “too difficult” to satisfy trade commissioners. Not content with this, they threw in complaints by an Indian steel executive and South Koreans as well. They neglected to tell their readers that Chinese, Indian and South Korean steels tend to be of lower grades – a datum that helps to explain their lower prices. U.S. and Japanese steels tend to be of higher grade, which helps to explain why companies like Nucor have been able to keep prices and profit margins high for years. The authors cited one layoff at U.S. Steel but forgot to cite the recent article in their own Wall Street Journal lauding the history of Nucor, which has never laid off an employee despite the pressure of Chinese competition.

That same article quoted complaints by steel buyers in this country about the “competitive disadvantage” imposed by the higher-priced U.S. steel. Why are the complaints about cheap Chinese exports front-page news while the complaints about high-priced American steel are buried in the back pages – and not even mentioned by a subsequent banner article boasting input by no fewer than three Journal reporters? Why did the reporters forget to cite the benefits accruing to American steel users from low prices for steel imports? Don’t these reporters read their own newspaper? Or do they report only what comports with their own agenda?

DRI-135 for week of 1-4-15: Flexible Wages and Prices: Economic Shock Absorbers

An Access Advertising EconBrief:

Flexible Wages and Prices: Economic Shock Absorbers

At the same time that free markets are becoming an endangered species in our daily lives, they enjoy a lively literary existence. The latest stimulating exercise in free-market thought is The Forgotten Depression: 1921 – The Crash That Cured Itself. The author is James Grant, well-known in financial circles as editor/publisher of “Grant’s Interest Rate Observer.” For over thirty years, Grant has cast a skeptical eye on the monetary manipulations of governments and central banks. Now he casts his gimlet gaze backward on economic history. The result is electrifying.

The Recession/Depression of 1920-1921

The U.S. recession of 1920-1921 is familiar to students of business cycles and few others. It was a legacy of World War I. Back then, governments tended to finance wars through money creation. Invariably this led to inflation. In the U.S., the last days of the war and its immediate aftermath were boom times. As usual – when the boom was the artifact of money creation – the boom went bust.

Grant recounts the bust in harrowing detail. In 1921, industrial production fell by 31.6%, a staggering datum when we recall that the U.S. was becoming the world’s leading manufacturer. (The President’s Conference on Unemployment reported in 1929 that 1921 was the only year after 1899 in which industrial production had declined.) Gross national product (today we would cite gross domestic product; neither statistic was actually calculated at that time) fell about 24% between 1920 and 1921 in nominal dollars, or 9% when account is taken of price changes. (Grant compares this to the figures for the “Great Recession” of 2007-2009, which were 2.4% and 4.3%, respectively.) Corporate profits nosedived commensurately. Stocks plummeted; the Dow Jones Industrial average fell by 46.6% between the cyclical peak of November, 1919 and trough of August, 1921. According to Grant, “the U.S. suffered the steepest plunge in wholesale prices in its history (not even eclipsed by the Great Depression),” over 36% within 12 months. Unemployment rose dramatically to a level of some 4,270,000 in 1921 – and included even the President of General Motors, Billy Durant. (As the price of GM’s shares fell, he augmented his already-sizable shareholdings by buying on margin – ending up flat broke and out of a job.) Although the Department of Labor did not calculate an “unemployment rate” at that time, Grant estimates the nonfarm labor force at 27,989,000, which would have made the simplest measure of the unemployment rate 15.3%. (That simple measure would undoubtedly have counted labor-force dropouts and part-time workers who preferred full-time employment among the unemployed.)
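A quick check of the arithmetic implied by Grant’s figures (the figures are his; the calculation below is merely a verification sketch):

```python
# Verify the implied 1921 unemployment rate from Grant's figures
unemployed = 4_270_000            # Grant's 1921 unemployment estimate
nonfarm_labor_force = 27_989_000  # Grant's nonfarm labor-force estimate
print(f"{unemployed / nonfarm_labor_force:.1%}")   # 15.3%
```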

A telling indicator of the dark mood enveloping the nation was passage of the Quota Act, the first step on the road to systematic federal limitation of foreign immigration into the U.S. The quota was fixed at 3% of foreign nationals present in each of the 48 states as of 1910. That year evidently reflected nostalgia for pre-war conditions, since the then-popular agricultural agitation for farm-price “parity” likewise sought to peg prices to the levels prevailing at that time.

In the Great Recession and accompanying financial panic of 2008 and subsequently, we had global warming and tsunamis in Japan and Indonesia to distract us. In 1920-1921, Prohibition had already shut down the legal liquor business, shuttering bars and nightclubs. A worldwide flu pandemic had killed hundreds of thousands. The Black Sox had thrown the 1919 World Series at the behest of gamblers.

The foregoing seems to make a strong prima facie case that the recession of 1920 turned into the depression of 1921. That was the judgment of the general public and contemporary commentators. Herbert Hoover, Secretary of Commerce under Republican President Warren G. Harding, who followed wartime President Woodrow Wilson in 1920, compiled many of the statistics Grant cites while chairman of the President’s Conference on Unemployment. He concurred with that judgment. So did the founder of the study of business cycles, the famous institutional economist Wesley C. Mitchell, who influenced colleagues as various and eminent as Thorstein Veblen, Milton Friedman, F. A. Hayek and John Kenneth Galbraith. Mitchell referred to “…the boom of 1919, the crisis of 1920 and the depression of 1921 [that] followed the patterns of earlier cycles.”

By today’s lights, the stage was set for a gigantic wave of federal-government intervention, a gargantuan stimulus program. Failing that, economists would have us believe, the economy would sink like a stone into a pit of economic depression from which it would likely never emerge.

What actually happened in 1921, however, was entirely different.

The Depression That Didn’t Materialize

We may well wonder what might have happened if the Democrats had retained control of the White House and Congress. Woodrow Wilson and his advisors (notably his personal secretary, Joseph Tumulty) had greatly advanced the project of big government begun by Progressive Republicans Theodore Roosevelt and William Howard Taft. During World War I, the Wilson administration seized control of the railroads, the telephone companies and the telegraph companies. It levied wage and price controls. The spirit of the Wilson administration’s efforts is best characterized by the statement of the Chief Price Controller of the War Industries Board, Robert Brookings. “I would rather pay a dollar a pound for [gun]powder for the United States in a state of war if there was no profit in it than pay the DuPont Company 50 cents a pound if they had 10 cents profit in it.” Of course, Mr. Brookings was not actually himself buying the gunpowder; the government was only representing the taxpayers (of whom Mr. Brookings was presumably one). And their attitude toward taxpayers was displayed by the administration’s transformation of an income tax initiated at insignificant levels in 1913 into one with a marginal rate of 77% (!!) on incomes exceeding $1 million.

But Wilson’s obsession with the League of Nations and his Fourteen Points for international governance had not only ruined his health, it had ruined his party’s standing with the electorate. In 1920, Republican Warren G. Harding was elected President. (The Republicans had already gained substantial Congressional majorities in the off-year elections of 1918.) Except for Hoover, the Harding circle of advisors was composed largely of policy skeptics – people who felt there was nothing to be done in the face of an economic downturn but wait it out. After all, the U.S. had endured exactly this same phenomenon of economic boom, financial panic and economic bust before – in 1812, 1818, 1825, 1837, 1847, 1857, 1873, 1884, 1890, 1893, 1903, 1907, 1910 and 1913. The U.S. economy had not remained mired in depression; it had emerged from all these recessions – or, in the case of 1873, a depression. If the 19th-century system of free markets were to be faulted, it would not be for failure to lift itself out of recession or depression, but for repeatedly re-entering the cycle of boom and bust.

The Federal Reserve did not flood the economy with liquidity, peg interest rates at artificially low levels or institute a “zero interest-rate policy.” Indeed, the rules of the gold-standard “game” called for the Federal Reserve to raise interest rates to stem the inflation that still raged in the aftermath of World War I; had it not done so, a gold outflow might theoretically have drained the U.S. dry. The Fed did just that, and interest rates hovered around 8% for the duration. Deliberate deficit spending as an economic corrective would have been viewed as madness. As Grant put it, “laissez faire had its last hurrah in 1921.”

What was the result?

In the various individual industries, prices, wages and output fell like a stone. Auto production fell by 23%. General Motors, as previously noted, was particularly hard hit. Its sales fell from 52,000 vehicles per month to 13,000, and then to 6,150, in the space of seven months. Some $85 million in inventory was eventually written off in losses.

Hourly manufacturing wages fell by 22%. Average disposable income in agriculture, which comprised just under 20% of the economy, fell by over 55%. Bankruptcies overall tripled to nearly 20,000 over the two years ending in 1921. In Kansas City, MO, a haberdashery shop run by Harry Truman and Eddie Jacobson held out through 1920 before finally folding in 1921. The resulting personal bankruptcy and debt plagued the partners for years. Truman evaded it by taking a job as judge of the Jackson County Court, where his salary was secure against liens. But his bank accounts were periodically raided by bill collectors for years until 1935, when he was able to buy up the remaining debt at a devalued price.

In late 1920, Ford Motor Co. cut the price of its Model T by 25%. GM at first resisted price cuts but eventually followed suit. Farmers, who as individuals had no control over the price of their products, had little choice but to cut costs and increase productivity – increasing output was an individual’s only way to increase income. When all or most farmers succeeded, this produced lower prices. How much lower? Grant: “In the second half of [1920], the average price of 10 leading crops fell by 57 percent.” But how much more food can humans eat; how many more clothes can they wear? Since the price- and income-elasticities of demand for agricultural goods were less than one, the collapse in prices meant that agricultural revenue and incomes fell with them.
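The elasticity point deserves a moment’s pause, since it carries the whole argument about farm incomes. When demand is price-inelastic, total revenue (price times quantity) moves in the same direction as price. A minimal sketch, assuming for illustration a constant price-elasticity of 0.5 – the specific elasticity value is our assumption, not Grant’s:

```python
# Constant-elasticity demand: Q = k * P**(-elasticity), so that
# revenue R = P * Q is proportional to P**(1 - elasticity).
# When elasticity < 1, revenue falls as price falls.
elasticity = 0.5          # assumed for illustration only
price_factor = 1 - 0.57   # Grant: leading crop prices fell 57%

quantity_factor = price_factor ** (-elasticity)
revenue_factor = price_factor * quantity_factor

print(f"Quantity demanded: x{quantity_factor:.2f}")  # up ~52%
print(f"Revenue:           x{revenue_factor:.2f}")   # down ~34%
```

Even though the lower prices coax out roughly half again as many purchases, farm revenue still falls by about a third.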

As noted by Wesley Mitchell, the U.S. slump was not unique but rather part of a global depression that began as a series of commodity-price crashes in Japan, the U.K., France, Italy, Germany, India, Canada, Sweden, the Netherlands and Australia. It encompassed commodities including pig iron, beef, hemlock, Portland cement, bricks, coal, crude oil and cotton.

Banks that had speculative commodity positions were caught short. Among these was the largest bank in the U.S., National City Bank, which had loaned extensively to finance the sugar industry in Cuba. Sugar prices were brought down in the commodity crash and brought the bank down with them. That is, the bank would have failed had it not received sweetheart loans from the Federal Reserve.

Today, the crash of prices would be called “deflation.” So it was called then, and with much more precision. Today, deflation can mean anything from the kind of nosediving general price level seen in 1920-1921 to relatively stable prices to mild inflation – in short, any general level of prices that does not rise fast enough to suit a commentator.

But there was apparently general acknowledgment that deflation was occurring in the depression of 1921. Yet few people apart from economists found that ominous. And for good reason. Because after some 18 months of panic, recession and depression, the U.S. economy recovered. Just as it had done 14 times previously.

It didn’t merely recover. It roared back to life. President Harding died suddenly in 1923, but under President Coolidge the U.S. economy experienced the “Roaring 20s.” This was an economic boom fueled by low tax rates and high productivity, the likes of which would not be seen again until the 1980s. It was characterized by innovation and investment. Unfortunately, in the latter stages, the Federal Reserve forgot the lessons of 1921 and increased the money supply to “keep the price level stable” and prevent deflation in the face of the wave of innovation and productivity increases. This helped to usher in the Great Depression, along with numerous policy errors by the Hoover and Roosevelt administrations.

Economists like Keynes, Irving Fisher and Gustav Cassel were dumbfounded. They had expected deflation to flatten the U.S. economy like a pancake, increasing the real value of debts owed by debtor classes and discouraging consumers from spending in the expectation that prices would fall in the future. Not.

There was no economic stimulus. No TARP, no ZIRP, no QE. No wartime controls. No meddlesome regulation a la Theodore Roosevelt, Taft and Wilson. The Harding administration and the Fed left the economy alone to readjust and – mirabile dictu – it readjusted. In spite of the massive deflation or, much more likely, because of it.

The (Forgotten) Classical Theory of Flexible Wages and Prices

James Grant wants us to believe that this outcome was no accident. The book jacket for The Forgotten Depression bills it as “a free-market rejoinder to Bush’s and Obama’s Keynesian stimulus applied to the 2007-9 recession,” which “proposes ‘less is more’ with respect to federal intervention.”

His argument is almost entirely empirical and very heavily oriented to the 1920-1921 depression. That is deliberate; he cites the 14 previous cyclical contractions but focuses on this one for obvious reasons. It was the last time that free markets were given the opportunity to cure a depression; both Herbert Hoover and Franklin Roosevelt supervised heavy, continual interference with markets from 1929 through 1941. We have much better data on the 1920-21 episode than, say, the 1873 depression.

Readers may wonder, though, whether there is underlying logical support for the result achieved by the deflation of 1921. Can the chorus of economists advocating stimulative policy today really be wrong?

Prior to 1936, the policy chorus was even louder. Amazing as it now seems, it advocated the stance taken by Harding et al. Classical economists propounded the theory of flexible wages and prices as an antidote to recession and depression. And, without stating it in rigorous fashion, that is the theory that Grant is following in his book.

Using the language of modern macroeconomics, the problem posed by cyclical downturns is unemployment due to a sudden decline in aggregate (effective) demand for goods and services. The decline in aggregate demand causes declines in demand for all or most goods; the decline in demand for goods causes declines in demand for all or most types of labor. As a first approximation, this produces surpluses of goods and labor. The surplus of labor is defined as unemployment.

The classical economists pointed out that, while the shock of a decline in aggregate demand could cause temporary dislocations such as unsold goods and unemployment, this was not a permanent condition. Flexible wages and prices could, like the shock absorbers on an automobile, absorb the shock of the decline in aggregate demand and return the economy to stability.

Any surplus creates an incentive for sellers to lower price and buyers to increase purchases. As long as the surplus persists, the downward pressure on price will remain. And as the price (or wage) falls toward the new market-clearing point, the amount produced and sold (or the amount of labor offered and purchased) will increase once more.
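A stylized supply-and-demand example makes the adjustment concrete. In the sketch below, the linear curves and every parameter value are our own assumptions, chosen purely for illustration: a demand shock creates a surplus at the old price, and the price must fall to a new level before the market clears again.

```python
# A linear market: demand Qd = a - b*P, supply Qs = c + d*P.
# All parameter values are illustrative assumptions.
b, d = 2.0, 1.0
c = 10.0

def clearing_price(a):
    # Qd = Qs  =>  a - b*P = c + d*P  =>  P = (a - c) / (b + d)
    return (a - c) / (b + d)

a_old, a_new = 40.0, 31.0      # decline in aggregate demand shifts demand down
p_old = clearing_price(a_old)  # old equilibrium price: 10.0
surplus = (c + d * p_old) - (a_new - b * p_old)  # unsold goods at the old price
p_new = clearing_price(a_new)  # new, lower market-clearing price: 7.0

print(f"Surplus at the old price of {p_old}: {surplus} units")
print(f"Price falls to {p_new}, and the surplus is cleared")
```

The same mechanics apply in the labor market, with the wage as the price and unemployment as the surplus.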

Flexibility of wages and prices is really a two-part process. Part one works to clear the surpluses created by the initial decline in aggregate demand. In labor markets, this serves to preserve the incomes of workers who remain willing to work at the now-lower market wage. If they were unemployed, they would have no wage, but working at a lower wage gives them a lower nominal income than before. That is only part of this initial process, though. Prices in product markets are decreasing alongside the declining wages. In principle, fully flexible prices and wages would mean that even though the nominal incomes of workers would decline, their real incomes would be restored by the decline of all prices in equal proportion. If your wage falls by (say) 20%, declines in all prices by 20% should leave you able to purchase the same quantities of goods and services as before.

The emphasis on real magnitudes rather than nominal magnitudes gives rise to the name given to the second part of this process. It is called the real-balance effect. It was named by the classical economist A. C. Pigou and refined by the later macroeconomist Don Patinkin.
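A minimal numerical sketch of both parts of the process, echoing the 20% example above (the dollar figures are ours, chosen only for illustration): proportional deflation leaves the real wage intact, and it raises the real value of money balances – which is precisely the real-balance effect.

```python
# Part one: proportional wage and price declines preserve the real wage.
nominal_wage, price_level = 100.0, 1.00
deflation = 0.20  # assume wages and all prices fall by 20%

new_wage = nominal_wage * (1 - deflation)
new_prices = price_level * (1 - deflation)
print(f"Real wage: {new_wage / new_prices:.2f}")  # 100.00 - unchanged

# Part two: cash holdings buy more at the lower prices (real-balance effect).
money_balances = 1_000.0
print(f"Real balances: {money_balances / new_prices:.2f}")  # 1250.00 - up 25%
```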

When John Maynard Keynes wrote his General Theory of Employment, Interest and Money in 1936, he attacked classical economists by attacking the concepts of flexible wages and prices. First, he attacked their feasibility. Then, he attacked their desirability.

Flexible wages were not observed in reality because workers would not consent to downward revisions in wages, Keynes maintained. Did Keynes really believe that workers preferred remaining unemployed at the relatively high prevailing wage – earning nothing – to working at a lower market wage? Well, he said that workers oriented their thinking toward the nominal wage rather than the real wage and thus did not perceive that they had regained their former position with lower prices and a lower wage. (This became known as the fallacy of money illusion.) His followers spent decades trying to explain what he really meant or revising his words or simply ignoring his actual words. (It should be noted, however, that Keynes was English, and trade unions exerted vastly greater influence on prevailing wage levels in England than they did in the U.S. for at least the first three-quarters of the 20th century. This may well have biased Keynes’ thinking.)

Keynes also decried the assumption of flexible prices for various reasons, some of which continue to sway economists today. The upshot is that macroeconomics has lost touch with the principles of price flexibility. Even though Keynes’ criticisms of the classical economists and the price system were discredited in strict theory, they were accepted de facto by macroeconomists because it was felt that flexible wages and prices would take too long to work, while macroeconomic policy could be formulated and deployed relatively quickly. Why make people undergo the misery of unemployment and insolvency when we can relieve their anxiety quickly and compassionately by passing laws drafted by macroeconomists on the President’s Council of Economic Advisers?

Let’s Compare

Thanks to James Grant, we now have an empirical basis for comparison between policy regimes. In 1920-1921, the old-fashioned classical medicine of deflation, flexible wages and prices and the real-balance effect took 18 months to turn a panic, recession and depression into a rip-roaring recovery that lasted 8 years.

Fast forward to December, 2007. The recession has begun. Unfortunately, it is not detected until September, 2008, when the financial panic begins. The stimulus package is not passed until February, 2009 – barely in time for the official end of the recession in June, 2009. Whoops – unemployment is still around 10% and remains stubbornly high until 2013. Moreover, it only declines because Americans have left the labor force in numbers not seen for over thirty years. The recovery, such as it is, is so anemic as to hardly merit the name – and it is now over 7 years since the onset of recession in December, 2007.

It is no good complaining that the stimulus package was not large enough, because we are comparing it with a case in which the authorities did nothing – or rather, did nothing stimulative, since their interest-rate increase should properly be termed contractionary. That is exactly what macroeconomists call it when referring to Federal Reserve policy in the 1930s, during the Great Depression, when they blame Fed policy and high interest rates for prolonging the Depression. Shouldn’t they instead be blaming the continual series of government interventions by the Fed and the federal government under Herbert Hoover and Franklin Roosevelt? And this comparison doesn’t even count the stimulus package introduced by the Bush administration, which came and went without making a ripple in terms of economic effect.

Economists Are Lousy Accident Investigators 

For nearly a century, the economics profession has accused free markets of possessing faulty shock absorbers; namely, inflexible wages and prices. When it comes to economic history, economists are obviously lousy accident investigators. They have never developed a theory of business cycles but have instead assumed a decline in aggregate demand without asking why it occurred. In figurative terms, they have assumed the cause of the “accident” (the recession or the depression). Then they have made a further assumption that the failure of the “vehicle’s” (the economy’s) automatic guidance system to prevent (or mitigate) the accident was due to “faulty shock absorbers” (inflexible wages and prices).

Would an accident investigator fail to visit the scene of the accident? The economics profession has largely failed to investigate the flexibility of wages and prices even in the Great Depression, let alone the thirty-odd other economic contractions chronicled by the National Bureau of Economic Research. The work of researchers like Murray Rothbard, Vedder and Gallaway, Benjamin Anderson and Harris Warren overturns the mainstream presumption of free-market failure.

The biggest empirical failure of all is one ignored by Grant; namely, the failure to demonstrate policy success. If macroeconomic policy worked as advertised, then we would not have recessions in the first place and could reliably end them once they began. In fact, we still have cyclical downturns, we cannot use policy to end them, and macroeconomists can point to no policy successes to bolster their case.

Now we have this case study by James Grant that provides meticulous proof that deflation – full-blooded, deep-throated, hell-for-leather deflation in no uncertain terms – put a prompt, efficacious end to what must be called an economic depression.

Combine this with the 40-year-long research project conducted on Keynesian theory, culminating in its final discrediting by the early 1980s. Throw in the existence of the Austrian Business Cycle Theory, which combines the monetary theory of Ludwig von Mises and interest-rate theory of Knut Wicksell with the dynamic synthesis developed by F. A. Hayek. This theory cannot be called complete because it lacks a fully worked out capital theory to complete the integration of monetary and value theory. (We might think of this as the economic version of the Unified Field Theory in the natural sciences.) But an incomplete valid theory beats a discredited theory every time.

In other words, free-market economics has an explanation for why the accident repeatedly happens and why its effects can be mitigated by the economy’s automatic guidance mechanism without the need for policy action by government. It also explains why policy actions are ineffective at both remedying and preventing these accidents.

James Grant’s book will take its place in the pantheon of economic history as the outstanding case study to date of a self-curing depression.

DRI-319 for week of 6-22-14: Redskins Bite the Dust – and So Do Free Markets

An Access Advertising EconBrief:

Redskins Bite the Dust – and So Do Free Markets

The Trademark Trial and Appeal Board (TTAB) of the United States Patent and Trademark Office (USPTO) recently cancelled the federal registrations of the trademarks held by the Washington Redskins professional football team of the National Football League (NFL). The legal meaning of this action is actually much more complex than public opinion would have us believe. The importance of this action transcends its technical legal meaning, however. If we can believe polls taken to test public reaction to the case, 83% of the American public disapproves of the decision. They, too, sense that there is more at stake here than merely the letter of the law.

The Letter of the Law – and Other Letters

The federal Lanham Trademark Act of 1946 forbids the registration of “any marks that may disparage persons or bring them into contempt or disrepute.” That wording forms the basis for the current suit filed by a group of young Native American plaintiffs in 2006. The hearing was held before TTAB in March, 2013. This week the judges issued a 99-page opinion cancelling each of the 6 different trademark registrations of the name “REDSKINS” and the Redskins’ logo, an Indian brave’s head in silhouette with topknot highlighted on the left. The decision called the trademarks “disparaging to Native Americans at the respective times they were registered.” The wording was necessary to the verdict; indeed, the dissenting judge in the panel’s 2-1 ruling claimed that the majority failed to prove that the registrations were contemporaneously disparaging.

This was not the first attempt to invalidate the Redskins trademarks – far from it. The previous try came in 1999 when the TTAB also ruled against the team. That ruling was overturned on appeal. The grounds for rejection were both technical and substantive. The judges noted that the plaintiffs were well over the minimum filing age of 18 and that the registrations went as far back as the 1930s. Thus, the plaintiffs had undermined their claim to standing by failing to exercise their rights to sue earlier – if the trademarks were known to have been such an egregious slur, why hadn’t plaintiffs acted sooner? The plaintiffs also cited a resolution by the National Congress of American Indians in 1993 that denounced the name as offensive. The Congress claimed to represent 30% of all Native Americans, which the judges found insufficiently “substantial” to constitute a validation of plaintiffs’ claim.

Meanwhile, an Annenberg Public Policy Center poll found in 2004 that “90% of Native Americans [polled] said the name didn’t bother them,” as reported in the Washington Post. Team owner Daniel Snyder’s consistent position is that he will “never” change the team name since it was chosen to “honor Native Americans,” the same stand taken by NFL Commissioner Roger Goodell. Various Native American interest groups and celebrities, such as 5000-meter Olympic track gold-medalist Billy Mills, have sided with the plaintiffs. Senate Majority Leader Harry Reid jumped at the chance to play a race card, calling the team name a “racial slur” that “disparages the American people” (!?). He vows to boycott Redskins’ games until the name is changed. Roughly half his Senate colleagues sent a letter to the team demanding a name change.

The Practical Effects of the Ruling

Numerous popular sources have opined that anybody is now “free” to use the name “Redskins” for commercial purposes without repercussions. Several lawyers have pointed out that this is not true. For one thing, this latest decision is subject to judicial review just as were previous ones. Secondly, it affects only the federal registration status of the trademarks, not the right to the name. The enforceability of the trademark itself still holds under common law, state law and even federal law as outlined in the Lanham Act. The law of trademark itself takes into account such concepts as “pervasiveness of use,” which reflects actual commercial practice. In this case, the name has been in widespread use by the team for over 80 years, which gives it a strong de facto claim. (If that sounds confusing, join the club.) Finally, the appeals process itself takes at least two years to play out, so even the registration status will not change officially for a while.

Thus, the primary impact of the ruling will be on public relations in the short run. The same commentators who cast doubt on the final result still urge Daniel Snyder to take some sort of token action – set up a foundation to benefit Native Americans, for instance – to establish his bona fides as a non-racist and lover of Native Americans.

Why the Law is an Ass

There are times when you’re right and you know why you’re right. There are other times when you’re right and you know you’re right, but you can’t quite explain why you’re right. The general public is not made up of lawyers. If judges say the trademark registrations are illegal, the public is prepared to grant it. But, like Charles Dickens’ character Mr. Bumble, they insist that the law is an ass. They just can’t demonstrate why.

The provision in the Lanham Act against disparaging trademarks is the kind of legal measure that governments love to pass. It sounds both universally desirable and utterly innocuous. Disparaging people and holding them up to ridicule and contempt is a bad thing, isn’t it? We’re against that, aren’t we? So why not pass a law against it – in effect – by forbidding disparaging trademarks. In 1946, when the Lanham Act passed, governments were big on passing laws that were little more than joint resolutions. The Employment Act of 1946, for example, committed the federal government to achieving “maximum employment, production and purchasing power.” There is no objective way to define these things and lawmakers didn’t try – they just passed the law as a way to show the whole world that they were really, really serious about doing good, not just kidding around the way legislatures usually are. Oh, and by the way, any time they needed an excuse for spending a huge wad of the taxpayers’ money, they now had one. (Besides, before the war a famous economist had said that it was all right to spend more money than you had.)

The law against disparaging trademarks was passed in the same ebullient mood as was the Employment Act of 1946. Government doesn’t actually have the power to guarantee maximum employment or income or purchasing power and it also doesn’t have the power to objectively identify disparagement. Unlike beauty, a slur is not in the eye of the beholder. It is in the brain of the author; it is subjective because it depends on intent. Men often call each other “bastard” or “son of a bitch”; each can be either deadly serious invective or completely frivolous, depending on the context. The infamous “n-word,” so taboo that it dare not speak its name, is in fact used by blacks toward each other routinely. It can be either a casual form of address or a form of disparagement and contempt – depending on the intent of the user.

Everybody – including even Native Americans – knows that Washington football team owner George Preston Marshall, one of the legendary patriarchs of the NFL, did not choose the team name “Redskins” in order to disparage Native Americans or hold up to ridicule or contempt. He chose it to emphasize the fighting and competitive qualities he wanted the team to exemplify, because Indians in the old West were known as fierce, formidable fighters. Whether he actually meant to honor Native Americans or merely to trade on their reputation is open to debate, but it is an open-and-shut, 100%, Good-Housekeeping-seal-of-approval-certified certainty that he was not using the word “Redskins” as a slur. Why? Because by doing so he would have been committing commercial suicide by slandering his own team, that’s why.

That brings us to the second area of resemblance between the Lanham Act and the Employment Act of 1946. The Employment Act was unnecessary because free markets, when left to their own devices, already do the best job of promoting high incomes, low unemployment and strong purchasing power that can be done. And free markets are the best guarantee against the use of disparaging trademarks, because the inherent purpose of a trademark is to promote identification with the business. Who wants their business identified with a slur? We don’t need a huge bureaucracy devoted to the business of rooting out and eradicating business trademarks that are really slurs. Free markets do that job automatically by driving offending businesses out of business. Why otherwise would businesses spend so much time and money worrying about public relations and agonizing over names and name changes?

If the only reason for the persistence of legislation like the Employment Act and the Lanham Act were starry-eyed idealism, we could write them off as the pursuit of perfect justice, the attempt to make government write checks it can’t cover in the figurative sense as well as the financial. Idealism may explain the origin of these laws but not their persistence long after their imposture has been exposed.

Absolute Democracy

By coincidence, another political-correctness scandal competed with the Redskins trademark revocation for headlines. The story was first reported as follows: A 3-year-old girl suffered disfiguring facial bites from three dogs (allegedly “pit bulls”). A parent took her to a Kentucky Fried Chicken franchise and placed an order for her favorite meal of sweet tea and mashed potatoes, whereupon the girl was asked to leave because her presence was “disrupting the other customers.” Her relatives took this story of “discrimination” to the news media.

Representatives of the parent corporation were guarded in their reaction to the accusation, but unreserved in the sympathy they expressed for the girl. They promised a donation of $30,000.00 to aid in treatment of her injuries and for her future welfare. They also promised to follow up to confirm what actually happened at the store.

What actually happened, according to their follow-up investigation, was nothing. This was the result of their internal probe and a probe by an independent company they hired to do its own investigation. Review of the store’s surveillance tape showed no sign of the girl or her relatives on the day in question. A review of transactions showed no order for “sweet tea and mashed potatoes” on that day, either. KFC released a finding that the incident was a hoax, a conclusion that was disputed by another relative of the girl who was not one of those supposedly present at the incident.

Perhaps the most significant part of this episode is that KFC did not retract their promise of a $30,000.00 donation to the girl – despite their announced finding that her relatives had perpetrated a hoax against the corporation.

The Redskins trademark case and the apparent KFC hoax are related by the desire of interested parties to use political correctness as a cover for extracting money using the legal system. Pecuniary extortion is crudely obvious in the KFC case; $30,000 is the blackmail that company officials are willing to pay to avoid being crucified in a public-relations scandal manufactured out of nothing.

Their investigation was aimed at avoiding a charge of “discrimination” against the girl, which might have resulted in a six- or seven-figure lawsuit and an even-worse PR scandal. But their willingness to pay blackmail suggests an indifference to the problem of “moral hazard,” something that clearly influences Daniel Snyder’s decision not to change the Redskins’ team name. Willingness to pay encourages more blackmail; changing the team name encourages more meddling by activists.

The Redskins case is more subtle. Commentators stress that plaintiffs are unlikely to prevail on the legal merits, but doubt that the team can stand the continuous heat put on it by the PR blowtorch lit by the TTAB verdict. That is where the money comes in – owner Daniel Snyder will have to pony up enough money to the various Native American interest groups to buy their silence. Of course, this will be spun by both sides as a cultural contribution, meant to make reparations for our history of injustice and brutality to the Native American, and so on.

Of course, Snyder may turn out to be as good as his word; he may never agree to change the Redskins’ team name. The NFL – either the Commissioner or the other owners exerting their influence – may step in and force a name change. Or Snyder may even sell the team rather than be forced to change their name against his will. That would leave the plaintiffs and Native American interest groups out in the cold – financially speaking. Does that invalidate the economic theory of absolute democracy as applied to this case?

No. Plaintiffs stand to benefit in an alternative manner. Instead of gaining monetary compensation for their efforts, they would earn psychological (psychic) utility. From everyday observation, as well as our own inner grasp of human nature, we realize that some people who cannot achieve nevertheless earn psychic pleasure from thwarting the achievements of others. In this particular case, the prospective psychic gains earned by some Native Americans from overturning the Redskins name and the prospective monetary gains earned from blackmailing the Redskins’ owner are substitute goods; the favorable verdict handed down by TTAB makes it odds-on that one or the other will be enjoyed.

This substitution potential is responsible for the rise and continued popularity of the doctrine of political correctness. “Race hustlers” like Jesse Jackson and Al Sharpton have earned handsome financial rewards for themselves and/or clients by demonizing innocuous words and deeds of whites as “racist.” What is seldom recognized, though, is the fact that their popularity among blacks at large is owed to the psychic rewards they confer upon the rank-and-file. When (let us say) a white English teacher is demoted or fired for teaching the wrong work by Mark Twain or Joseph Conrad, followers of Jackson and Sharpton delight. They know full well that the exercise is a con – that is the point. They feel empowered by the fact that they may freely use the n-word while whites are prevented from doing so. Indeed, this is simply a reversal of the scenario under Jim Crow, when blacks were forced to the back of the bus or to restricted drinking fountains. In both cases, the power of the law is used to earn psychic rewards by imposing psychic losses on others.

Legal action was necessary in the Redskins’ case because plaintiffs were bucking an institution that had been validated by the free market. The Washington Redskins have over 80 years of marketplace success on their record; the free market refused to punish their so-called slur against Native Americans. In fact, the better case is that the team has rehabilitated the connotation of the word “redskins” through its success on the field and its continuing visibility in the nation’s capital. Goodness knows, countless words have undergone this sort of metamorphosis, changing from insults to terms of honor.

When plaintiffs could not prevail through honest persuasion they adopted the modern American method – they turned to legal force. However tempting it might be to associate this tactic exclusively with the political correctness of the left, the truth is that it is the means of first resort for conservatives as well. That is the seeming paradox of absolute democracy, which represents the dictatorship of the law over free choice.

Inevitably, advocates of political correctness cite necessity as their justification. The free market is not free and does not work, so the government must step in. The planted axioms – that free markets usually fail while governments always work – are nearly 180 degrees out of phase. The failures of government highlight our daily lives, but the successes of the free market tend to be taken for granted. The famous episode of Little Black Sambo and its epilogue serves as a reminder.

The Little Black Sambo Stories and Sambo’s Restaurants

The character of Little Black Sambo and the stories about him have been redefined by their detractors – that is to say, demonized as racist caricatures that dehumanize and degrade American blacks. This is false. In the first place, the original character of Little Black Sambo, as first portrayed in stories written in the late 19th and early 20th centuries, was Tamil (Indian or Sri Lankan) – a reflection of the ecumenical reach exerted by the term “black” in those days. Eventually, the character was adapted to many nationalities and ethnic identities, including not only American black but also Japanese. (Indeed, he remains today a hero to children of Japan, who remain blissfully untouched by the political correctness familiar to Americans.) This is not surprising, since the stories portray a little boy whose heroic perseverance in the face of obstacles is an imperishable life lesson. Presumably, that is why the stories are among the bestselling children’s storybooks of all time.

When American versions of the story portrayed Little Black Sambo as an American or African black, this eventually caught the eye of disapproving blacks like the poet Langston Hughes, who called the picture-book depiction a classic case of the “pickaninny” stereotype. Defenders of the stories noted that when the single word “black” was removed and any similarity to American or African blacks deleted from the illustrations, the stories attracted no charges of racism. Yet black interest groups echoed the psychologist Alvin Poussaint, who claimed that “I just don’t see how I can get past the title and what it means,” regardless of any merit the stories might contain. The storybooks disappeared from schools, nurseries and libraries.

In 1957, two restaurant owners in Santa Barbara, CA, opened a casual restaurant serving ethnic American food. In the manner of countless others, they chose a name that combined their two nicknames, “Sam” (Sam Battistone) and “Bo” (Newell Bohnett). Over time, the restaurant’s popularity encouraged them to franchise their concept. It grew into a nationwide company with 1,117 locations. Many of these were decorated with pictures and statuary that borrowed from the imagery of the “Little Black Sambo” stories.

The restaurants were a marketplace success, based on their food, service and ambience. But in the 1970s, black interest groups began raising objections to the use of the “Sambo” name and imagery, calling it – you guessed it – racist. Defenders of the franchise cited the value and longstanding popularity of the stories. They noted the success and popularity of the restaurants. All to no avail. By 1981, the franchising corporation was bankrupt. Today, only the original Santa Barbara location remains.

This was certainly not a victory for truth and justice. But it was a victory for the American way – that is, the true American way of free markets. Opponents of Sambo’s Restaurants went to the court of public opinion and made their case. Odious though it seemed to patrons of the restaurants, the opponents won out.

So much for the notion that free markets are rigged against political correctness. In the case of Sambo’s Restaurants, people concluded that the name tended to stigmatize blacks and they voluntarily chose not to patronize the restaurants. The restaurants went out of business. This was the appropriate way to reach this outcome because the people who were benefitting from the restaurants decided that the costs of production outweighed the benefits, and chose to forego those benefits. The decisive factor was that bigotry was (apparently) a cost of production.

Instead of achieving their aim through legal coercion or blackmail, activists achieved it through voluntary persuasion. Alas, that lesson has now been forgotten by both the political Left and Right.

DRI-312 for week of 6-15-14: Wealth and Poverty: Blame and Causation

An Access Advertising EconBrief:

Wealth and Poverty: Blame and Causation

Among the very many cogent distinctions made by the great black economist Thomas Sowell is that between blame and causation. Blame is a moral or normative concept. Causation is a rational, cause-and-effect concept. “Sometimes, of course, blame and causation may coincide, just as a historic event may coincide with the Spring equinox,” Sowell declared in Economic Facts and Fallacies. “But they are still two different things, despite such overlap.”

Unfortunately, blame has overtaken causation in the public perception of how the world works. This is bad news for economics, which is a rational discipline rather than a morality play.

Economic Development

There is a specialized branch of economics called economic development. Not surprisingly, its precepts derive from the principles of general economic theory, adapted to apply in the special case of areas, regions and nation states whose productive capabilities rise from a primitive state to advanced status.

The public perception of economic development, though, is that of a historical morality play. Developed Western nations in Europe engaged in a practice called “imperialism” by colonizing nations in South America and Africa. Then they proceeded to exploit the colonial natives economically. This exploitation not only reduced their standard of living contemporaneously, it left them with a legacy of poverty that they have been subsequently unable to escape. Only government aid programs of gifts or loans, acting as analogues to the welfare programs for impoverished individuals in the Western countries, can liberate them and expiate the sins of the West.

The idea that moral opprobrium attaches to acts of national conquest has a considerable appeal. The conventional approach to what is loftily called “international law” – or, more soberly, “foreign policy” – is that military force applied aggressively beyond a country’s own international boundaries is wrong. But the impact of wrongful acts does not necessarily condemn a nation to everlasting poverty.

In fact, world history to date has been overwhelmingly a tale of conquest. For centuries, nations attained economic growth not through production but through plunder. Only since the Industrial Revolution has this changed. It is worthwhile to question the presumption that defeat automatically confers a legacy of economic stasis and inferiority.

That is why we must distinguish between blame and causation. We may assign blame to colonizers for their actions. But those actions and their effects occurred in the colonial era, prior to independence. Cause-and-effect relationships are necessarily limited to relationships in the same temporal frame; the past cannot hold the present prisoner. Even if we were to claim that (say) inadequate past investment under colonization is now responsible for constraining present economic growth, we would still have to explain why current investment cannot grow and eventually stimulate future economic growth.

Great Britain was the world’s leading economic power during the 18th and 19th centuries. She conquered and held a worldwide empire of colonies. She must have commanded great wealth, both military and economic, in order to achieve these feats. Yet Great Britain herself was conquered by the Romans and spent centuries as part of the Roman Empire. The “indigenous peoples” of the British Isles (perhaps excluding the Irish, who may have escaped the Roman yoke) must have recovered from the pain of being subjugated by the Romans. They must have overcome the humiliation of bestowing upon William the title of “Conqueror” after his victory at Hastings in 1066. They must – otherwise, how else could they have rebounded to conquer half the world themselves?

Great Britain’s legacy of military defeat, slavery and shame did not thwart its economic development. It did not stop the British pound sterling from becoming the vehicle currency for world trade, just as the U.S. dollar is today. If anything, Great Britain and Europe prospered under Roman domination and suffered for centuries after the collapse of the empire.

Germany has been an economic powerhouse since the 19th century. It survived utter devastation in two world wars and calumniation in their wake, only to rise from the ashes to new heights of economic prominence. Yet its legacy prior to this record of interrupted success was a history of squabbles and conflict between regional states. They, too, were subjugated by Rome and arose from a long period of primitive savagery. Why didn’t this traumatize the German psyche and leave them forever stunted and crippled?

It is hard to think of any nation that had a tougher row to hoe than China. True, China was the world’s greatest economic power over a millennium ago. But centuries of isolation squandered this bequest and left them a medieval nation in a modern world. As if this weren’t bad enough, they reacted by embracing a virulent Communism that produced the world’s worst totalitarian state, mass famine and many millions of innocent deaths. At the death of Mao Zedong in 1976, China was a feeble giant – the world’s most populous nation but unable to feed itself even a subsistence diet. Yet this legacy of terror, famine, defeat and death failed to prevent the Chinese from achieving economic development. Less than 40 years later, China is a contender for the title of world’s leading economic power.

It is certainly true that some countries in Africa and South America were colonized by European powers and subsequently experienced difficulty in raising their economic productivity. But it is also true that there are “countries mired in poverty that were never conquered.” Perhaps even more significantly, “for thousands of years, the peoples of the Eurasian land mass and the peoples of the Western Hemisphere were unaware of each other’s existence,” which constitutes a legacy of isolation even more profound and enduring than any residue left by the much shorter period of contact between them.

Economists have identified various causal factors that affect economic development much more directly and clearly than military defeat or personal humiliation suffered by previous generations. Most prominent among these are the geographic factors.

Mankind’s recorded history began with settlements in river valleys. A river valley combines two geographic features – a river and a valley. The river is important because it provides a source of water for drinking and other important uses. Rivers also serve as highways for transportation purposes. Finished goods, goods-in-process and primary inputs are all transported by water. In modern times, with the advent of swifter forms of transportation, only commodities with low value relative to bulk travel by water. But throughout most of human history, rivers were the main transportation artery linking human settlements. Oceans were too large and dangerous to risk for ordinary transportation purposes; lakes were not dispersed widely enough to be of much help.

If we contrast the kind and quality of rivers on the major continents, it is not hard to see why North America’s economic development exceeded that of Africa. Not only is North America plentifully supplied with rivers, but its largest rivers, the Mississippi and the Missouri, tend to be highly navigable. Its coastline contains many natural harbors. Africa’s rivers, in contrast, are much more problematic. While the Nile is navigable, its annual floods have made life difficult for nearby settlers. The Congo River’s navigability (including its access from the ocean) is hindered by three large falls. The African coastline contains comparatively few natural harbors and is often difficult or impossible for ships to deal with – a fact that hindered international trade between Africa and the outside world for decades. The Congo is the world’s second largest river in terms of water-volume discharged; the Amazon River in South America is the largest. Yet the tremendous hydropower potential of both rivers has hardly been tapped owing to various logistical and political obstacles.

Valleys contrast favorably with mountainous regions because they are more fertile and easier to traverse. Sowell quotes the great French historian Fernand Braudel’s observation that “mountain life lagged persistently behind the plain.” He cites mountainous regions like the Appalachians in the U.S., the mountains of Greece, the Rif Mountains in Morocco and the Scottish Highlands to support his generalization. Not only do both Africa and South America contain formidable mountain barriers, their flatlands are much less conducive to economic development than those of (say) North America. Both Africa and South America contain large rainforests and jungles, which not only make travel and transport difficult or impossible but are also hard to clear. As if that weren’t a big enough barrier, both continents face political hurdles to the exploitation of the rainforests.

South America differs from its northern neighbor particularly in topography. The Andes Mountains to the west have traditionally divided the continent and represented a formidable geographic barrier to travel and transportation. One of the great stories in the history of economic geography is the tale, told most vividly by legendary flier and author Antoine de Saint-Exupery in his prize-winning novel Night Flight, of the conquest of the Andes by airline mail-delivery companies in the formative days of commercial aviation. Unlike North America’s, the flatlands of South America do not consist primarily of fertile, easily traversed plains.

Climate has similar effects on economic development. A priori, temperate climate is more suitable for agriculture and transportation than either the extremes of heat or cold. Both Africa and South America contain countries located within tropical latitudes, where heat and humidity exceed the more temperate readings typical of North America and Europe. Indeed, Africa’s average temperature makes it the hottest of all continents. While North America does contain some desert land, it cannot compare with northern Africa, where the Sahara approaches the contiguous U.S. in size. The barrenness of this climate makes it less suitable for human habitation and development than any area on Earth save the polar regions. Speaking of which, subarctic climates can be found on the highest mountain regions on each continent.

The economic toll taken by geographic barriers to trade can be visualized as akin to taxes. Nature is levying a specific tax on the movement of goods, services and people over distance. The impact of this “transport tax” can extend far beyond the obvious. As Sowell points out, the languages of Africa comprise 30% of the world’s languages but are spoken by only 13% of the world’s population. The geographic fragmentation and separation of the continent have caused cultural isolation that has produced continual fear, hatred, conflict and even war between nations. The civil war currently raging between Sunni, Shiite and Kurd is the same kind of strife that T.E. Lawrence sought to suppress during World War I almost a century ago. Thus, an understanding of basic geography is sufficient to convey the severe handicap imposed on most countries in Africa and South America compared to the nations of Europe and North America.
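One conventional way economists make this “transport tax” precise is the so-called iceberg assumption – our illustration, not Sowell’s: to deliver one unit of a good, more than one unit must be shipped, because some fraction “melts” (is lost or used up) in transit. A minimal sketch, with the melt fractions chosen arbitrarily:

```python
# Iceberg transport costs: delivering 1 unit requires shipping
# 1 / (1 - melt) units, so geography works like an ad valorem tax.
def delivered_cost(origin_price, melt_fraction):
    # melt_fraction: share of the shipment lost or consumed in transit
    return origin_price / (1 - melt_fraction)

print(f"${delivered_cost(100.0, 0.05):.2f}")  # navigable river:   $105.26
print(f"${delivered_cost(100.0, 0.30):.2f}")  # difficult terrain: $142.86
```

On these assumed numbers, the same $100 good costs roughly 36% more delivered over difficult terrain than over a navigable river – a permanent levy no legislature ever has to renew.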

Political Economy

It is certainly true that geography alone placed Africa and South America behind the economic-development 8-ball. Still, each continent does contain a share of desirable topographies and climates. History even records some economic-development success stories there. Argentina was one of the world’s leading economic powers in the 19th century. Not only was its national income ranked among world leaders, its rate of growth was high and growing. Its share of world trade also grew. Today, its status is dismal, exactly the reverse of its prior prosperity – its GDP is barely one-tenth of ours. But it was not conquered by a colonial power, nor was it “exploited” by “imperialism.”

Argentina won its independence from Spain well before it rose to economic prominence. Unfortunately, its political system gradually evolved away from free-market economics and toward the dictatorial socialism epitomized by Juan Peron and his wife, Evita. This produced inflation, high taxes, loss of foreign trade and investment and a steady erosion of real income.

Elsewhere in South America, economic evolution followed a similar course, albeit by a different route. Most countries lacked the same experience with free markets and institutions that lifted Argentina to the heights. Even when independence from colonial rule brought republican government, this quickly morphed into one-party rule or military dictatorship. Although the political Left insists that South America has been victimized by capitalism, South America’s history really reeks of the same “crony capitalism” that reigns supreme in the Western nations today. This means authoritarian rule, unlimited government and favoritism exerted on behalf of individuals or constituent groups. Moreover, erosion of property rights has weakened a key bulwark of free-market capitalism in the West today, just as it did throughout the history of South America.

In Africa, the situation was even worse and has remained so until quite recently. After crying out for independence from colonial oppressors, native Africans surrendered their freedom to a succession of dictators who proved more oppressive, brutal and bloodthirsty than the colonizers. Now, with the rise of the Internet and digital technology, Africans at last possess the ability to exist and thrive independently of government. They can also now overcome the transaction costs of organizing protest against dictatorship.

The importance of markets and institutions can be divined from a roll call of the most successful countries. Great Britain, Japan, Hong Kong, Singapore and the Scandinavian countries all lack either great size or abundant natural resources, or both. One thing that Africa and South America did possess in quantities rivaling those of Europe and North America was resource wealth. But the ability to turn resources into goods and services requires the other things that Africa and South America lacked: not only favorable geography and climate, but also favorable institutions, laws and mores. Even in North America, the U.S. had all the favorable requisites, while Mexico lacked the legal and institutional environment and Canada lacked the favorable geography and climate.

Viewed in this light, it is not chauvinism to invoke a principle of “American exceptionalism;” it is just clear-eyed analysis. The country that later became the United States of America was blessed with ideal geography and climate. While it faced aboriginal opposition, that was much less fierce than it might have been. Great Britain’s colonial stewardship allowed the colonies to develop economically, albeit in a restricted framework. Moreover, the colonists developed a close acquaintanceship with British laws and institutions. This proved vital to the eventual birth of the American Declaration of Independence and Constitution. The U.S. was indeed the exception when it came to economic development because it faced few of the obstacles that hampered the development of almost all other countries. Coupled with the most favorable constitution ever written for free markets and a century and a half of virtually free immigration, the result was the growth of the world’s greatest economy.

Culture

Through the ages, historians have accorded culture an increasing emphasis in their studies. Oddly, though, it has seldom been linked to economics in general and almost never to economic development in particular. Yet even a cursory glance suggests it as an explanation for some of what otherwise would stand as paradoxes.

India has long ranked as the “phenom” of economic development – perennially expected to bust loose to assume its rightful place among the world’s economic powerhouses, and perennially a disappointment. As a legacy of centuries of colonial rule by Great Britain, it inherited a cadre of well-trained and educated civil servants. The world’s second-largest population provided a ready source of labor. The country did not lack for capital goods despite the abject poverty of most of its citizens, thanks to British investment. What, exactly, was holding India back?

The political left supplied its standard answer by attaching blame for India’s poverty to its “legacy of colonialism.” Movies like Gandhi portrayed British behavior toward Indians as beastly and sanctified Gandhi’s policy of passive resistance within a framework of civil disobedience. These answers were less than complete, however. They did not explain how the U.S., also a British colony and occasional victim of British beastliness for a century and a half, was able to succeed so brilliantly while India failed so dismally. Nor did they explain why India failed while employing the same socialist economic policies that England had incubated throughout the early 1900s before installing them at home just before granting India’s independence.

India’s adoption of socialism was the political complement to its cultural reverence for poverty, created and nurtured by Gandhi. India could hardly have picked a worse symbol for hero worship. Fortunately, India’s independence was delayed until after World War II, in which India refused to embrace Gandhi’s pacifism and participated significantly in her own defense and that of the Eastern theater. Then, after independence, India continued to stoke regional hostilities with neighbors China and Pakistan in subsequent decades, ignoring Gandhi’s views in the one context in which they might have done some good. Meanwhile, the country’s steadfast unwillingness to adopt a commercial ethic, root out public corruption and eradicate traditional taboos against the unhindered operation of markets foreclosed any possibility of real economic growth.

If there was ever a culture that seemed impervious to economic growth, it was India’s. Even China never seemed such a hopeless case, for Chinese who emigrated became the success story of Southeast Asia; clearly Chinese institutions, not Chinese culture, were holding back economic development. Well, India’s cultural head is still buried in the sands of the past, but her institutions have changed sufficiently to midwife noticeable economic growth beginning in the late 1990s.

Foreign Aid and Foreign Investment

Two great myths of economics relate to foreign aid and foreign investment. For decades, intellectuals and governments sang the praises of foreign aid as a recipe for prosperity and cure for poverty. Alas, institutions like the World Bank and International Monetary Fund – both of which were created for completely unrelated purposes – have failed miserably to promote economic development despite decades of trying and billions of dollars in loans, grants and consulting contracts.

The failures have been particularly glaring in Africa, where real incomes were the lowest in the world throughout the 20th century. In retrospect, it is not easy to figure out why international aid should have succeeded in raising real incomes. After all, one of the signature measures employed by newly independent regimes in Africa and South America was to expropriate wealth owned by foreigners through nationalization. This raised the incomes of government officials and their cronies but did not raise real incomes generally. As Sowell observes, “there is no more reason to expect automatic benefits from wealth transfers through international agencies than from wealth transfers through internal confiscations.” And indeed, “the incentives facing those disbursing the aid and those receiving it seldom make economic development the criterion of success.” Aid agencies simply strive to give money away; host governments simply strive to get money. And that is pretty much what happened.

Lenin developed a theory of imperialism to explain why capitalism did not succumb to revolution on schedule. When the declining profit from capital threatened their viability, capitalists would turn to the less-developed nations, where their foreign investment would earn “super profits” at the expense of the host peoples. Unfortunately, his theory was overturned by experience, which showed that capitalists in developed countries invested mostly in other developed countries. (Today’s neo-Marxism has returned full-circle to the exploitation theories of original Marxism with the newly popular theory of French economist Thomas Piketty. His theory postulates a return on capital persistently greater than the economy’s rate of growth – and hence the growth of labor incomes – which promotes an ever-greater (hypothesized) inequality in income and wealth. Having failed to sell a theory of inequality based on a declining rate of profit, the Left is switching tactics – the return on capital is too high, not declining.)

The real recurring example of successful “foreign investment” has come through immigration. Welsh miners have come to the U.S. and mined successfully. Chinese entrepreneurs have migrated throughout Southeast Asia and dominated entrepreneurship in their adopted countries. Jews have migrated to countries throughout the world and dominated industries such as finance, clothing, motion pictures and education. German workers helped Argentina become a world leader in wheat production and export. Indian immigrants have become leading entrepreneurs in motels and hotels in the U.S. Italian and Lebanese immigrants migrated to Africa and the U.S. and achieved entrepreneurial success in various fields. Yet, ironically, immigration has typically been opposed by natives in spite of the consistent benefits it generates.

Causation, not Blame

History is a record of strife and conflict, of conquest and submission. At one time or another, practically every people has been conquered and subjugated. Colonial status has sometimes been disastrous to natives, as with some countries colonized by Spain in the Age of Exploration. Sometimes it has been relatively beneficial, as it was in the early stages of the American colonies. Often it turned out to be a mixed bag of benefits and drawbacks. But economic development has never been either guaranteed or foreclosed by the mere existence of a colonial past. Economic logic lists too many causal factors affecting development for us to play the blame game.