DRI-223 for week of 3-22-15: The Truth About Black Actors in Hollywood Under the Studio System

An Access Advertising EconBrief:

The Truth About Black Actors in Hollywood Under the Studio System

In a recent (03/09/2015) issue of National Review, author Jay Nordlinger laments the American propensity for “race rows.” He relates the insistence of a friend that last year’s movie Selma had been shut out of Academy Award contention. In fact the movie had received two Oscar nominations, including the coveted nod for Best Picture.

Nordlinger knew why his friend had been deceived. She was reacting to the latest in a seemingly never-ending series of stylized eruptions of race-motivated indignation. As Nordlinger noted, this year’s row followed the nine Oscar nominations and three Oscars (including Best Picture) awarded last year to 12 Years a Slave, last year’s black-experience blockbuster. “The academy must have rediscovered its inner racism in twelve months’ time,” Nordlinger observed drily.

Nordlinger has realized that for the black left wing, historical victimization is a key economic good that they cannot afford to be without. They must keep its history continually alive in order to continue reaping its benefits. He is skeptical about the thesis that Hollywood is dead set on victimizing black artists, but despairs of ever seeing his viewpoint vindicated. “An academy voter cannot acquit himself of a charge of racism – not if he preferred another movie, he can’t,” Nordlinger concludes.

The study of economics is uniquely positioned to inform us about the subject of Hollywood’s treatment of blacks. Suppose we begin at the beginning, by investigating the dawn of blacks in Hollywood under the studio system. We will begin our analysis in the late 1920s, during the waning days of silent movies.

Before doing that, though, we should quickly review economic fundamentals pertaining to the effects of free markets on minorities, particularly those disfavored or discriminated against politically and legally. We will then test that framework against the conventional stylized portrait of black actors as a victimized class within Hollywood since its inception.

Free Markets and Minorities 

Many great economists have grappled with the economic issues raised by discrimination against minorities of all types – political, social, racial and ethnic. Free markets do not promise the eradication of discrimination – indeed, all of us discriminate against things and people we prefer to avoid without giving it conscious thought. But free markets make discrimination an economic choice in which the chooser evaluates benefits and costs. When discrimination is too costly, it will be foregone.

That is why persecuted minorities throughout history – Chinese in Asia, Jews throughout the world, blacks in South Africa and America – have found refuge under the banner of free markets. Free markets protect their economic productivity by preserving the incentive to employ or patronize them. Free markets protect their welfare by giving them real income when it might be denied them by political authorities.

The Hollywood Victimization Thesis

Conventional thinking has long stressed what we will call the “Hollywood Victimization Thesis” (hereinafter abbreviated HVT for convenience). We view the HVT through the lens of a speech made by Walter White, then-President of the National Association for the Advancement of Colored People (NAACP), in March, 1942. This speech and accompanying remarks by NAACP counsel Wendell Willkie marked “the beginning of a new awareness in Hollywood towards the portrayal of blacks in films” (author Champ Clark in his book Shuffling to Ignominy: The Tragedy of Stepin Fetchit). Thomas Cripps, film historian (Slow Fade to Black: The Negro in American Film), said that “March, 1942, became a date by which to measure the future against the past.”

White spoke to Hollywood filmmakers, telling them that they had projected an image of “the Negro as a barbaric dolt, a superstition-ridden ninny… a race of intellectual inferiors, cowardly, benighted, different from the superior group.” He urged them to reject that image, instead choosing to portray “the Negro as a normal human being and an integral part of human life and activity.” Willkie – the same Wendell Willkie who had been the Republican Presidential candidate in 1940 – was also a board member of 20th Century Fox, one of the “Big Five” Hollywood studios, so we might expect his statements to carry weight with that studio. He not only condemned racial stereotypes per se but made the practical case that they were harmful to the war effort, which demanded that all segments of society pull together for the common good.

All of the major Hollywood studio heads were in the audience for this speech. Later they signed a pledge in which they agreed to avoid the projection of negative racial stereotypes in their films.

What is particularly interesting about this speech, and the watershed it represents, is the difference between its viewpoint and the HVT as typically represented today. The contemporary version goes roughly as follows: The Hollywood studios victimized black actors by consigning them to insignificant parts as railroad porters, servants, maids, butlers, slaves, laborers, tramps and miscellaneous menials – always subservient to whites.

Walter White’s original version of the HVT was distinctly different from this. Taken literally, it implied that portrayals of Negroes onscreen were not insignificant, since a character possessing traits so markedly unfavorable and a station so removed from normal life could hardly be unimportant to, or unnoticed by, the viewer – the viewer must take notice of the Negro character in order for this unfavorable impression to register. As we shall see later, this difference is vital.

What accounts for the difference in these two versions? In his 1942 speech, Walter White provided an illustration to back up his claim of negative stereotyping by Hollywood studios. He mentioned one actor by name. This mention had devastating consequences for the actor, because even though White was referring to the roles played by the actor, the stigma from the speech never left the actor. It began a chain reaction that sent the actor’s career spiraling downward. And it made the actor synonymous with the HVT.

The actor’s name was Stepin Fetchit. His story is the real story of the early black experience in Hollywood.

The Story of Stepin Fetchit

Who was Stepin Fetchit? Stepin Fetchit was the first black actor ever to receive a featured credit in a Hollywood movie, In Old Kentucky. He was the first black actor to sign a long-term contract with a major Hollywood studio, Metro Goldwyn Mayer. According to his biographer, “he was the first black actor to drive through the front gates of a Hollywood studio – with a chauffer [sic] at the wheel.” He was, in his own words, “the first black actor universally acclaimed a star by the public.” According to a Ripley “Believe It Or Not” feature, Stepin Fetchit became a millionaire by portraying one kind of character in Hollywood movies. According to a 1968 documentary (“Black History – Lost, Stolen or Strayed”) narrated by Bill Cosby, “the cat [Fetchit] made $2,000,000 in five years in the middle 30s.” In 1960, Stepin Fetchit became the first black actor to get a star on the Hollywood Walk of Fame.

Does this sound like a man who was victimized by the Hollywood studios? As John Wayne might say, not hardly. And it is not out of place to invoke John Wayne as authority here, because John Wayne was Stepin Fetchit’s dresser. As Marion “Duke” Morrison, freshly arrived in Hollywood while still in college at USC in 1927, Wayne held down every existing menial job on the movie sets of directors like John Ford, whom he adopted as his mentor. In 1976, the year in which John Wayne made his last movie, The Shootist, Stepin Fetchit lay in a Los Angeles hospital suffering the effects of a stroke. Wayne himself was in the throes of his last series of illnesses, but he was not too infirm to cheer up his old friend with a visit. And never was a man more in need of cheering up. Between 1927 and 1975, when Stepin Fetchit last appeared in a movie, a tidal wave of change engulfed America and Stepin Fetchit all but drowned in it.

What was Stepin Fetchit’s unforgivable sin? Well, his venial sins were many. Mostly they were the garden variety sins of Hollywood movie stars – wine, women and a tendency to arrogance. But his unforgivable sin was that he was funny.

This is not an eccentric personal opinion. In 1929, at the time that talking pictures were staging a hostile takeover of silent movies in Hollywood, Robert Benchley said, “I see no reason for even hesitating in saying that Stepin Fetchit is the best actor that the movies have produced. His voice, his manner, his timing, everything that he does is as near to perfection as one could hope. He is one of the great comedians of the screen.” Robert Benchley was an Oscar-winning actor, one of the great American humorists and a leading scriptwriter in Hollywood. He knew humor as well as anybody then or now.

Or listen to another expert – Bill Cosby, whose criticism of Stepin Fetchit in 1968 echoed Walter White’s in 1942. “Fetchit’s one of the greatest comedians who ever lived. There was no intent on my part to ridicule him.”

Stepin Fetchit was born Lincoln Theodore Monroe Perry in 1902. He was baptized a Catholic and remained one all his life. In the early 1920s, he entered show business via the “chitlin’ circuit” – an informal chain of over 100 Southern vaudeville theaters catering to black audiences. It was here that he perfected his classic shtick: a “somnolent southern boy, all slow-motion hesitation and mumbles… almost terminally lazy” (Clark). Eventually, this laziness became his trademark. He developed himself into a theatrical act that he advertised as “the laziest man in the world.” And it was his own patented character, honed to physical and verbal perfection, that he took to the movies.

In 1927, Fetchit answered a general casting call for black performers for the silent MGM movie In Old Kentucky. When hired, he proceeded to astonish director John Stahl with his comedic skills. The director created a featured part for him as a plantation boy, and even inserted him into a romantic subplot in the script involving black actress Carolyn Snowden. The movie was a hit. MGM offered Fetchit a six-month contract. But when the studio found no further roles for him, Fetchit broke with MGM and signed with Stahl’s Tiffany-Stahl Productions for more than double the money MGM had paid him. He appeared in featured parts in several more silent pictures before his next breakthrough picture came along.

It was Universal Pictures’ talking version of Show Boat, the seminal production of the American musical theater, based on the famous Edna Ferber novel. Unfortunately, the studio could not acquire the rights to the Jerome Kern-Oscar Hammerstein score, but Fetchit performed one of the studio-composed songs for the film.

In 1929, Fetchit notched another “first” by starring in Hearts in Dixie, the first Hollywood film with an all-black cast. This was also a musical and it cemented Fetchit’s star status. The importance of those two words – “star status” – cannot be overstressed.

The Breakthrough of Black Actors in Hollywood

Prior to Stepin Fetchit, black actors were virtually absent from Hollywood. It would be easy to ascribe this to the presence of Jim Crow laws and white racism, but that would be false. To understand why this is so, consider the case of Paul Robeson.

Paul Robeson was born in New Jersey in 1898, the son of a former slave who had become a minister. He grew up singing to help support his family and help out his father in church. He was valedictorian of his high-school class and lettered in four sports. He became only the third black person ever to attend Rutgers University. At Rutgers, he captained the debate team and twice made first-team All-American in football; Walter Camp called him the greatest end ever to play the game to that point. Upon graduation, he attended law school while working on the side – helping to pioneer the fledgling National Football League, playing opposite the likes of Jim Thorpe.

After a few years, Robeson quit to become an opera star and theater performer. He played to packed houses in London and on Broadway. But when Paul Robeson first performed in movies, he worked for director Oscar Micheaux, who made “race movies” shown only to black audiences. His first film was 1925’s Body and Soul. At that point, despite Robeson’s fame and popularity in the U.S., there was no audience for Robeson as a movie star. He was much too important to play a minor role, and the kind of hybrid character/star part played by Fetchit did not yet exist. Therefore Robeson did not work in Hollywood until the 1930s – after Stepin Fetchit had carved out a place for black actors by revealing the existence of a demand for blacks in high-visibility, featured roles. In 1936, Robeson sang the famous “Ol’ Man River” in that year’s version of Show Boat, supporting Irene Dunne and Allan Jones. He co-starred in the first sound version of King Solomon’s Mines opposite Sir Cedric Hardwicke and starred in the English film Proud Valley as a coal miner who travels to Wales and performs with a male voice choir. There is every reason to believe that Robeson might have become an earlier version of Sidney Poitier had not World War II and his own misguided Stalinist politics intervened to thwart him.

Was Stepin Fetchit a true movie star? In one sense, yes. He attained fame, wealth and high visibility. These are the superficial attributes of stardom. But in the truest sense, no. The Hollywood studios carefully selected and groomed leading actors for their productions. Those actors were always performers with whom audiences could identify and about whom they could fantasize. Pure talent and dramatic success were not enough to qualify an actor for the position of star.

Stepin Fetchit could not achieve the highest level of movie stardom in the 1920s and 1930s – lead actor in major productions. Neither could Paul Robeson. There was an incredibly small number of men and women who could. Not all of them were Americans and not all of them were Caucasians. Not all of them were humans, either; Rin Tin Tin was an authentic movie star for years. Shirley Temple attained stardom at age five, but lost it in her teens. Sidney Poitier was finally able to reach that elevated pinnacle twenty-five years after Fetchit and Robeson strove for it. His movie career began in 1950, before the era of civil-rights legislation, marches or demonstrations. Why was he the first to succeed?

Anybody who really knows the answer to that question could earn vast wealth as a talent scout for Hollywood casting companies today. But it is absurd to stigmatize white Americans as racists when they embraced Stepin Fetchit, Paul Robeson and the complement of black actors in the 1930s – as if a totalitarian government should somehow have utilized thought control to force Americans to confer full stardom upon blacks earlier in history.

Stepin Fetchit’s Peak – and Fall From Grace

Stepin Fetchit claimed that his first run of success in Hollywood was cut short when he refused to perform a scene in the movie The Southerner in 1930. Fetchit said that the scene implied, without saying so directly, that his character was guilty of rape. The director complained to the studio about Fetchit’s recalcitrance, and Fetchit’s contract was terminated.

It was commonplace back then even for big stars to be fined or suspended by the studio – even to be fired in extreme cases. But the biggest stars were always rehired or found work elsewhere. Stepin Fetchit had to rehabilitate his career all over again. And he did.


He appealed to actor-humorist Will Rogers, with whom he had worked briefly in silent films. Rogers knew Fetchit’s work and appreciated it. Despite his great fame in the theater and on Broadway, Rogers had been only a marginal success in silent films. He desperately needed to succeed in talking pictures. So he approved Fetchit’s hiring as a supporting character in his films.

This move was a spectacular success. In 1934 and 1935, before his untimely death in a plane crash, Will Rogers was the leading box-office star in Hollywood, ahead of Shirley Temple (with whom Fetchit also appeared). The four films he made with Stepin Fetchit established the two as the movies’ leading comedy team.

Fetchit’s biographer, Champ Clark, quotes columnist James Bacon about a conversation between Darryl Zanuck, head of 20th Century Fox studio, and Will Rogers. Zanuck tells Rogers “My God, he’s so funny, he steals the show from you.” Rogers responds, “I don’t care, he makes the movie better.” Rogers demanded that Stepin Fetchit be cast in all his movies.

The peak of Rogers’ career coincided with Fetchit’s career peak as well. With Rogers’ death, Stepin Fetchit’s career started to wane. He had to compete for parts with other black actors. Some of them were imitating him. Fetchit drank, got into scrapes with the law and showed flashes of temperament at work. When he wasn’t working on movies, he polished his skills with his stage act. He wrote a column for the black newspaper the Chicago Defender.

Eventually Stepin Fetchit fell victim to his high style of living – a problem once summarized neatly by Errol Flynn as follows: “My problem is reconciling my net income with my gross habits.” He fell into arrears with the IRS, his ex-wife and various other creditors. Movie work dried up.

At the point of Walter White’s NAACP speech in 1942, Stepin Fetchit was down. That speech put him out – out of Hollywood for a decade. His old friend, John Ford, wanted to hire him for the post-war film, My Darling Clementine, in 1946. Darryl Zanuck wouldn’t hear of it. “…To put him on the screen at this time would… raise terrible objections from the colored people. Walter White … singled out Stepin Fetchit… as an example of the humiliation of the colored race. Stepin Fetchit always plays the lazy, stupid half-wit and this is the thing that the colored people are furious about.”

Stepin Fetchit hung around the fringes of show business, working in nightclubs, developing his song and dance talents, branching out into stand-up comedy. He made a few “race movies.” In 1952, he returned to Hollywood for the movie Bend of the River, playing a straight, non-comedic supporting part. He worked for John Ford again in 1953’s The Sun Shines Bright. White movie critics like the New York Times’ Bosley Crowther savaged him, wondering why the movie was made and why Fetchit was cast in it. Once more, Stepin Fetchit’s career was in tatters.

Still he soldiered on, working in cheap theaters and clubs and cadging a living off friends. The critic and film author, Joseph McBride, wrote of admiring his work as a stand-up comedian under harrowing circumstances.

In the early 1960s, Fetchit became a hanger-on of Cassius Clay, soon to morph into Muhammad Ali. Ali claimed that Fetchit taught him the “secret punch” with which Ali dispatched Sonny Liston in their second meeting. Once again, Stepin Fetchit was up. Then, in 1968, came the CBS special, “Black History – Lost, Stolen or Strayed,” narrated by Bill Cosby. It was the Walter White speech all over again. For the fourth time, Stepin Fetchit was down. But still not out.

In 1974, Stepin Fetchit joined Moms Mabley and other black show-business veterans to make a movie fittingly entitled Amazing Grace. It was a touching exit for all the old troupers, but especially for him. Fetchit’s last film came in 1975. And in 1976, the NAACP – clearly suffering a bad case of institutional guilt – gave him an award for opening up the frontiers of entertainment to blacks. In 1978, Stepin Fetchit was admitted into the Black Filmmakers’ Hall of Fame. He died in 1985.

Was Stepin Fetchit Victimized by Hollywood?

Stepin Fetchit was not victimized by Hollywood. He seized the opportunity provided by Hollywood to gain fame and wealth hitherto undreamt of by a black actor in the movies. In so doing, he kicked open the door of opportunity for other black actors.

Stepin Fetchit’s character was assassinated by left-wing blacks in the NAACP and white liberals. They used him as a tool for their purposes – which were to portray blacks as oppressed victims of white society who needed saving by the NAACP and the federal government. In so doing, they falsified the history of the black experience in Hollywood.

Fetchit claimed that his name derived from a race horse. That name has since come to be synonymous with black subservience. That is unjust, because there is no doubt that his movie shtick was invented by Fetchit out of whole cloth. It was not devised by a white scriptwriter or producer in order to stigmatize blacks. Moreover, this screen character had nothing to do with Stepin Fetchit’s real personality. On this point there is unanimity. Stepin Fetchit the man was a highly intelligent businessman who liked classical music and was conversant with the fine points of moviemaking. He bore no resemblance to his movie persona.

Stepin Fetchit’s Imitators and the Rise of Black Character Actors in Hollywood

With the rise of Stepin Fetchit to quasi-stardom, the studio bosses of Hollywood realized two things: there was a market for black actors in character roles in movies, and their leading candidate to fill those roles was a temperamental handful. They had long made it a practice to find and cultivate competitive substitutes for their stars anyway. So it was natural for them to seek out other black actors, not only to keep their new discovery from getting too cocksure but also to meet this newfound demand. In Hollywood, imitation has always been the sincerest form of plagiarism; the new crop of black actors bore a strong occupational resemblance to Stepin Fetchit.

The most blatant of these Fetchit imitators was Willie Best (1916-1962), who drove to Hollywood in 1930 while working as a chauffeur and wound up often playing one onscreen. Best worked with the Marx Brothers, Laurel and Hardy and Shirley Temple. His most famous role came in 1940’s The Ghost Breakers, whose star, Bob Hope, called him “the best actor I know.” Comedy producer Hal Roach considered him one of the best comedy talents in show business. Before a drug arrest ended his movie career in 1950 and cancer ended his life in 1962, Best made 113 appearances in movies and a handful in short films and television.

Mantan Moreland (1902-1973) followed Best to Hollywood in 1933 and made some 125 movie appearances over the next 40 years. While Best was almost a Stepin Fetchit clone whose shtick was his sleepy-eyed appearance and skill at alternating lassitude with fright, Moreland specialized in pop-eyed expressions and a growling bass dialect. His most famous appearances were as Charlie Chan’s sidekick, Birmingham Brown, in pictures produced by the bottom-feeding studio, Monogram Pictures.

Fred Toones (1906-1962) may have been the busiest of all black character actors. Between 1931 and 1947, he appeared in over 200 films using the professional name “Snowflake.” In over 140 of them, he received no screen credit, although he was sometimes referred to by a character name onscreen. Toones’ best work was a hilarious turn as Fred MacMurray’s valet in the memorable romantic comedy Remember the Night (1940), costarring Barbara Stanwyck. He also appeared in other classic comedies like Twentieth Century (1934), Christmas in July (1940) and The Palm Beach Story (1942).

Does it seem pedantic to mention arcana like movie appearances, screen credits and character names? These details were not trivial to the actors, because an actor was paid extra for a screen credit and for speaking lines. Assignment of a character name meant that the actor was considered important by the scriptwriter, the director and the producer. This information shows that these black actors were favored, not victimized, by the Hollywood studio system.

The great character actor, Walter Brennan, won three Academy Awards as Best Supporting Actor between 1936 and 1940. But before that, between 1925 and 1935, he received screen credit for only 33 of his first 118 movies. Pat Flaherty was known as the “King of the Uncredited Actors” because his face and voice were so familiar to audiences who didn’t know his name; he received a screen credit in only 29 of his 197 films. Robert Dudley gave an unforgettable performance as the “Wienie King” in Preston Sturges’ Palm Beach Story, but he received a screen credit for only 34 of his 123 film appearances. Long before she became immortal as a television star on I Love Lucy, Lucille Ball began her career in movies. But she did not receive a screen credit until her 25th film. “Bit” parts seldom gave screen credit or assigned character names in the days of the studio system. Today, a favorite parlor game among classic movie fans is to “spot a star” playing an uncredited bit part before attaining stardom.

Compare the status of the black actors mentioned here. Stepin Fetchit received a screen credit in all but 2 of his 49 movies and 6 shorts, and a character name in all but 11. This was unheard of; indeed, Fetchit’s case is so special that he should be considered in a class by himself as a kind of star/character actor/bit player. Willie Best was credited in 73 of his 117 appearances and named in 79 of them, which made him the aristocrat of bit players. Mantan Moreland got 70 screen credits and nearly as many character-named appearances out of his 125 screen appearances, which put him nearly on a par with Best. Toones had many more uncredited appearances, but this was because he packed so many movies (over 200) into such a comparatively short career, which still numbered over 60 credited appearances including notable performances and movies.

We have not just recounted a history of victimization. This is a distinguished record of achievement in one of America’s leading industries – for which these men were well paid.

For the sake of brevity, we have considered only those actors who were directly comparable in style and status to Stepin Fetchit; that is, comic bit players. Left out of account was the comic genius Hattie McDaniel, whose specialty of playing maids earned her the first Academy Award given to a black performer, for Gone With the Wind (1939). Actors like Louise Beavers, Ernest Whitman, Rex Ingram and James Edwards were also delivering distinguished performances before Sidney Poitier came along, thanks to the trail blazed by Stepin Fetchit.

And thanks to the free markets that allowed the Hollywood studio system to arise and flourish in the first half of the 20th century.

DRI-191 for week of 3-15-15: More Ghastly than Beheadings! More Dangerous than Nuclear Proliferation! It’s…Cheap Foreign Steel!

An Access Advertising EconBrief:

More Ghastly than Beheadings! More Dangerous than Nuclear Proliferation! It’s…Cheap Foreign Steel!

The economic way to view news is as a product called information. Its value is enhanced by adding qualities that make it more desirable. One of these is danger. Humans react to threats and instinctively weigh the threat-potential of any problematic situation. That is why headlines of print newspapers, radio-news updates, TV evening-news broadcasts and Internet websites and blogs all focus disproportionately on dangers.

This obsession with danger does not jibe with the fact that human life expectancy has doubled over the last century and that violence has never been less threatening to mankind than today. Why do we suffer this cognitive dissonance? Our advanced state of knowledge allows us to identify and categorize threats that passed unrecognized for centuries. Today’s degraded journalistic product, more poorly written, edited and produced than formerly, plays on our neuroscientific weaknesses.

Economists are acutely sensitive to this phenomenon. Our profession made its bones by exposing the bogey of “the evil other” – foreign trade, foreign goods, foreign labor and foreign investment as ipso facto evil and threatening. Yet in spite of the best efforts of economists from Adam Smith to Milton Friedman, there is no more dependable pejorative than “foreign” in public discourse. (The word “racist” is a contender for the title, but overuse has triggered a backlash among the public.)

Thus, we shouldn’t be surprised by this headline in The Wall Street Journal: “Ire Rises at China Over Glut of Steel” (03/16/2015, By Biman Mukerji in Hong Kong, John W. Miller in Pittsburgh and Chuin-Wei Yap in Beijing). Surprised, no; outraged, yes.

The Big Scare 

The alleged facts of the article seem deceptively straightforward. “China produces as much steel as the rest of the world combined – more than four times as much as the peak U.S. production in the 1970s.” Well, inasmuch as (a) the purpose of all economic activity is to produce goods for consumption; and (b) steel is a key input in producing countless consumption goods and capital goods, ranging from vehicles to buildings to weapons to cutlery to parts, this would seem to be cause for celebration rather than condemnation. Unfortunately…

“China’s massive steel-making engine, determined to keep humming as growth cools at home, is flooding the world with exports, spurring steel producers around the globe to seek government protection from falling prices. From the European Union to Korea and India, China’s excess metal supply is upending trade patterns and heating up turf battles among local steelmakers. In the U.S., the world’s second-biggest steel consumer, a fresh wave of layoffs is fueling appeals for tariffs. U.S. steel producers such as U.S. Steel Corp. and Nucor Corp. are starting to seek political support for trade action.”

Hmmm. Since this article occupies the place of honor on the world’s foremost financial publication, we expect it to be authoritative. China has a “massive steel-making engine” – well, that stands to reason, since it’s turning out as much steel as everybody else put together. It is “determined to keep humming.” The article’s three (!) authors characterize the Chinese steelmaking establishment as a machine, which seems apropos. They then endow the metaphoric machine with the human quality of determination – bad writing comes naturally to poor journalists.

This determination is linked with “cooling” growth. Well, the only cooling growth that Journal readers can be expected to infer at this point is the slowing of the Chinese government’s official rate of annual GDP growth from 7.5% to 7%. Leaving aside the fact that the rest of the industrialized world is pining for growth of this magnitude, the authors are not only mixing their metaphors but mixing their markets as well. The only growth directly relevant to the points raised here – exports by the Chinese and imports by the rest of the world – is growth in the steel market specifically. The status of the Chinese steel market is hardly common knowledge to the general public. (Later, the authors eventually get around to the steel market itself.)

So the determined machine is reacting to cooling growth by “flooding the world with exports,” throwing said world into turmoil. The authors don’t treat this as any sort of anomaly, so we’re apparently expected to nod our heads grimly at this unfolding danger. But why? What is credible about this story? And what is dangerous about it?

Those of us who remember the 1980s recall that the monster threatening the world economy then was Japan, the unstoppable industrial machine that was “flooding the world” with exports. (Yes, that’s right – the same Japan whose economy has been lying comatose for twenty years.) The term of art was “export-led growth.” Now these authors are telling us that massive exports are a reaction to weakness rather than a symptom of growth.

“Unstoppable” Japan suddenly stopped in its tracks. No country has ever ascended an economic throne based on its ability to subsidize the consumption of other nations. Nor has the world ever died of economic indigestion caused by too many imports produced by one country. The story told at the beginning of this article lacks any vestige of economic sense or credibility. It is pure journalistic scare-mongering. Nowhere do the authors employ the basic tools of international economic analysis. Instead, they employ the basic tools of scarifying yellow journalism.

The Oxymoron of “Dumping” 

The authors have set up their readers with a menacing specter described in threatening language. A menace must have victims. So the authors identify the victims. Victims must be saved, so the authors bring the savior into their story. Naturally, the savior is government.

The victims are “steel producers around the globe.” They are victimized by “falling prices.” The authors are well aware that they have a credibility problem here, since their readers are bound to wonder why they should view falling steel prices as a threat to them. As consumers, they see falling prices as a good thing. As prices fall, their real incomes rise. Falling prices allow consumers to buy more goods and services with their money incomes. Businesses buy steel. Falling steel prices allow businesses to buy more steel. So why are falling steel prices a threat?

Well, it turns out that falling steel prices are a threat to “chief executives of leading American steel producers,” who will “testify later this month at a Congressional Steel Caucus hearing.” This is “the prelude to launching at least one anti-dumping complaint with the International Trade Commission.” And what is “dumping?” “‘Dumping,’ or selling abroad below the cost of production to gain market share, is illegal under World Trade Organization law and is punishable with tariffs.”

After this operatic buildup, it turns out that the foreign threat to America spearheaded by a gigantic, menacing foreign power is… low prices. Really low prices. Visualize buying steel at Costco or Wal-Mart.

Oh, no! Not that. Head for the bomb shelters! Break out the bug-out bags! Get ready to live off the grid!

The inherent implication of dumping is oxymoronic because the end-in-view behind all economic activity is consumption. A seller who sells for an abnormally low price is enhancing the buyer’s capability to consume, not damaging it. If anybody is “damaged” here, it is the seller, not the buyer. And that raises the question: why would a seller do something so foolish?

More often than not, proponents of the dumping thesis don’t take their case beyond the point of claiming damage to domestic import-competing firms. (The three Journal reporters make no attempt whatsoever to prove that the Chinese are selling below cost; they rely entirely on the allegation to pull their story’s freight.) Proponents rely on the economic ignorance of their audience. They paint an emotive picture of an economic world that functions like a giant Olympics. Each country is like a great big economic team, with its firms being the players. We are supposed to root for “our” firms, just as we root for our athletes in the Summer and Winter Olympics. After all, don’t those menacing firms threaten the jobs of “our” firms? Aren’t those jobs “ours?” Won’t that threaten “our” incomes, too?

This sports motif is way off base. U.S. producers and foreign producers have one thing in common – they both produce goods and services that we can consume, either now or in the future. And that gives them equal economic status as far as we are concerned. The ones “on our team” are the ones that produce the best products for our needs – period.

Wait a minute – what if the producers facing those low prices happen to be the ones employing us? Doesn’t that change the picture?

Yes, it does. In that case, we would be better off if our particular employer faced no foreign competition. But that doesn’t make a case for restricting or preventing foreign competition in general. Even people who lose their jobs owing to foreign competition faced by their employer may still gain more income from the lower prices brought by foreign competition in general than they lose by having to take another job at a lower income.

There’s another pertinent reason for not treating foreign firms as antagonistic to consumer interests. Foreign firms can, and do, locate in America and employ Americans to produce their products here. Years ago, Toyota was viewed as an interloper for daring to compete successfully with the “Big 3” U.S. automakers. Now the majority of Toyota automobiles sold in the U.S. are assembled on American soil in Toyota plants located here.

Predatory Pricing in International Markets

Dumping proponents have a last-ditch argument that they haul out when pressed with the behavioral contradictions stressed above. Sure, those foreign prices may be low now, import-competing producers warn darkly, but just wait until those devious foreigners succeed in driving all their competitors out of business. Then watch those prices zoom sky-high! The foreigners will have us in their monopoly clutches.

That loud groan you heard from the sidelines came from veteran economists, who would no sooner believe this than ask a zookeeper where to find the unicorns. The thesis summarized in the preceding paragraph is known as the “predatory pricing” hypothesis. The behavior was notoriously ascribed to John D. Rockefeller by the muckraking journalist Ida Tarbell. It was famously disproved by the research of economist John McGee. And ever since, economists have stopped taking the concept seriously even in the limited market context of a single country.

But when propounded in the global context of international trade, the whole idea becomes truly laughable. Steel is a worldwide industry because its uses are so varied and numerous. A firm that employed this strategy would have to sacrifice trillions of dollars in order to reduce all its global rivals to insolvency. This would take years. These staggering losses would be accounted in current outflows. They would be weighed against putative gains that would begin sometime in the uncertain future – a fact that would make any lender blanch at the prospect of financing the venture.

As if the concept weren’t already absurd enough, what makes it completely ridiculous is the fact that even if it succeeded, it would still fail. The assets of all those bankrupted firms wouldn’t vaporize; they could be bought up cheaply and held against the day when prices rose again. Firms like the American steel company Nucor have demonstrated the possibility of compact and efficient production, so competition would be sure to emerge whenever monopoly became a real prospect.

The likelihood of any commercial steel firm undertaking a global predatory-pricing scheme is nil. At this point, opponents of foreign trade are, in poker parlance, reduced to “a chip and a chair” in the debate. So they go all in on their last hand of cards.

How Do We Defend Against Government-Subsidized Foreign Trade?

Jiming Zou, analyst at Moody’s Investors Service, is the designated spokesman of last resort in the article. “Many Chinese steelmakers are government-owned or closely linked to local governments [and] major state-owned steelmakers continue to have their loans rolled over or refinanced.”

Ordinary commercial firms might cavil at the prospect of predatory pricing, but a government can’t go broke. After all, it can always print money. Or, in the case of the Chinese government, it can always “manipulate the currency” – another charge leveled against the Chinese with tiresome frequency. “The weakening renminbi was also a factor in encouraging exports,” contributed another Chinese analyst quoted by the Journal.

One would think that a government with the awesome powers attributed to China’s wouldn’t have to retrench in all the ways mentioned in the article – reduce spending, lower interest rates, and cut subsidies to state-owned firms including steel producers. Zou is doubtless correct that “given their important role as employers and providers of tax revenue, the mills are unlikely to close or cut production even if running losses,” but that cuts both ways. How can mills “provide tax revenue” if they’re running huge losses indefinitely?

There is no actual evidence that the Chinese government is behaving in the manner alleged; the evidence is all the other way. Indeed, the only actual recipients of long-term government subsidies to firms operating internationally are creatures of government like Airbus and Boeing – firms that produce most or all of their output for purchase by government and are quasi-public in nature, anyway. But that doesn’t silence the protectionist chorus. Government-subsidized foreign competition is their hole card and they’re playing it for all it’s worth.

The ultimate answer to the question “how do we defend against government-subsidized foreign trade?” is: We don’t. There’s no need to. If a foreign government is dead set on subsidizing American consumption, the only thing to do is let them.

If the Chinese government is enabling below-cost production and sale by its firms, it must be doing it with money. There are only three ways it can get money: taxation, borrowing or money creation. Taxation bleeds Chinese consumers directly; money creation does it indirectly via inflation. Borrowing does it, too, when the bill comes due at repayment time. So foreign exports to America subsidized by the foreign government benefit American consumers at the expense of foreign consumers. No government in the world can subsidize the world’s largest consumer nation for long. But the only thing more foolish than doing it is wasting money trying to prevent it.

What Does “Trade Protection” Accomplish?

Textbooks in international economics spell out in meticulous detail – using either carefully drawn diagrams or differential and integral calculus – the adverse effects of tariffs and quotas on consumers. Generally speaking, tariffs have the same effects on consumers as taxes in general – they drive a wedge between the price paid by the consumer and the price received by the seller, provide revenue to the government and create a “deadweight loss” of value that accrues to nobody. Quotas are, if anything, even more deleterious. (The relative harm depends on circumstances too complex to enumerate.)
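For readers who want to see that textbook accounting in action, here is a minimal numerical sketch. The linear demand and supply curves and every number in it are invented for illustration; only the welfare arithmetic is standard:

```python
# Welfare effects of a small-country tariff with linear demand and supply.
# All numbers are made up for illustration.

def demand(p):   # quantity demanded at price p
    return 100 - p

def supply(p):   # quantity supplied domestically at price p
    return p - 20

p_world, tariff = 40, 10
p_domestic = p_world + tariff   # a small country's tariff raises the domestic price one-for-one

def area_under(f, lo, hi):
    # trapezoid rule, exact for linear curves
    return (f(lo) + f(hi)) / 2 * (hi - lo)

consumer_loss  = area_under(demand, p_world, p_domestic)             # lost consumer surplus: 550
producer_gain  = area_under(supply, p_world, p_domestic)             # gain to import-competing firms: 250
tariff_revenue = tariff * (demand(p_domestic) - supply(p_domestic))  # 10 * 20 = 200
deadweight     = consumer_loss - producer_gain - tariff_revenue      # value accruing to nobody: 100
```

Consumers lose 550; producers and the government together capture only 450; the remaining 100 is the deadweight loss that the textbooks diagram as two triangles.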

This leads to a painfully obvious question: If tariffs hurt consumers in the import-competing country, why in the world do we penalize alleged misbehavior by exporters by imposing tariffs? This is analogous to imposing a fine on a convicted burglar along with a permanent tax on the victimized homeowner.

Viewed in this light, trade protection seems downright crazy. And in purely economic terms, it is. But in terms of political economy, we have left a crucial factor out of our reckoning. What about the import-competing producers? In the Wall Street Journal article, these are the complainants at the bar of the International Trade Commission. They are also the people economists have been observing ever since the days of Adam Smith in the late 18th century, bellied up at the government-subsidy bar.

In Smith’s day, the economic philosophy of Mercantilism reigned supreme. Specie – that is, gold and silver – was considered the repository of real wealth. By sending more goods abroad via export than returned in the form of imports, a nation could produce a net inflow of specie payments – or so the conventional thinking ran. This philosophy made it natural to favor local producers and inconvenience foreigners.

Today, the raison d’etre of the modern state is to take money from people in general and give it to particular blocs to create voting constituencies. This creates a ready-made case for trade protection. So what if it reduces the real wealth of the country – the goods and services available for consumption? It increases the electoral prospects of the politicians responsible and appears to increase the real wealth of the beneficiary blocs, which is sufficient for legislative purposes.

This is corruption, pure and simple. The authors of the Journal article present this corrupt process with a straight face because their aim is to present cheap Chinese steel as a danger to the American people. Thus, their aims dovetail perfectly with the corrupt aims of government.

And this explains the front-page article in the 03/16/2015 Wall Street Journal. It reflects the news value of posing a danger where none exists – that is, the corruption of journalism – combined with the corruption of the political process.

The “Effective Rate of Protection”

No doubt the more temperate readers will object to the harshness of this language. Surely “corruption” is too harsh a word to apply to the actions of legislators. They have a great big government to run. They must try to be fair to everybody. If everybody is not happy with their efforts, that is only to be expected, isn’t it? That doesn’t mean that legislators aren’t trying to be fair, does it?

Consider the economic concept known as the effective rate of protection. It is unknown to the general public, but it appears in every textbook on international economics. It arises from the conjunction of two facts: first, that a majority of goods and services are composed of raw materials, intermediate goods and final-stage (consumer) goods; and second, that governments have an irresistible impulse to levy taxes on goods that travel across international borders.

To keep things starkly simple and promote basic understanding, take the simplest kind of numerical example. Assume the existence of a fictional textile company. It takes a raw material, cotton, and spins, weaves and processes that cotton into a cloth that it sells commercially to its final consumers. This consumer cloth competes with the product of domestic producers as well as with cotton cloth produced by foreign textile producers. We assume that the prevailing world price of each unit of cloth is $1.00. We assume further that domestic producers obtain one textile unit’s worth of cotton for $.50 and add a further $.50 worth of value by spinning, weaving and processing it into cloth.

We have a basic commodity being produced globally by multiple firms, indicating the presence of competitive conditions. But legislators, perhaps possessing some exalted concept of fairness denied to the rabble, decide to impose a tariff on the importation of cotton cloth. Not wishing to appear excessive or injudicious, the solons set this ad valorem tariff at 15%. Given the competitive nature of the industry, this will soon elevate the domestic price of textiles above the world price by the amount of the tariff; i.e., by $.15, to $1.15. Meanwhile, there is no tariff levied on cotton, the raw material. (Perhaps cotton is grown domestically and not imported into the country or, alternatively, perhaps cotton growers lack the political clout enjoyed by textile producers.)

The insight gained from the effective rate of protection begins with the realization that the net income of producers in general derives from the value they add to any raw materials and/or intermediate products they utilize in the production process. Initially, textile producers added $.50 worth of value for every unit of cotton cloth they produced. Imposition of the tariff allows the domestic textile price to rise from $1.00 to $1.15, which causes textile producers’ value added to rise from $.50 to $.65.

Legislators judiciously and benevolently decided that the proper amount of “protection” to give domestic textile producers from foreign competition was 15%. They announced this finding amid fanfare and solemnity. But it is wrong. The tariff has the explicit purpose of “protecting” the domestic industry, of giving it leeway it would not otherwise get under the supposedly harsh and unrelenting regime of global competition. But this tariff does not give domestic producers 15% worth of protection. $.15 divided by $.50 – that is, the increase in value added divided by the original value added – is .30, or 30%. The effective rate of protection is double the size of the “nominal” (statutory) level of protection. In general, think of the statutory tariff rate as the surface appearance and the effective rate as the underlying truth.
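The arithmetic can be checked in a few lines. The figures below are those of the fictional textile company above:

```python
# Effective rate of protection for the fictional textile example:
# the increase in value added, divided by the free-trade value added.

world_price = 1.00   # world price of a unit of cloth
cotton_cost = 0.50   # cotton input per unit (no tariff on cotton)
tariff_rate = 0.15   # 15% nominal tariff on imported cloth

domestic_price = world_price * (1 + tariff_rate)    # 1.15
value_added_before = world_price - cotton_cost      # 0.50
value_added_after  = domestic_price - cotton_cost   # 0.65

effective_rate = (value_added_after - value_added_before) / value_added_before
print(f"{effective_rate:.0%}")   # 30% -- double the 15% statutory rate
```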

Like oh-so-many economic principles, the effective rate of protection is a relatively simple concept that can be illustrated with simple examples, but that rapidly becomes complex in reality. Two complications need mention. When tariffs are also levied on raw materials and/or intermediate products, this affects the relationship between the effective and nominal rate of protection. The rule of thumb is that higher tariff rates on raw materials and intermediate goods relative to tariffs on final goods tend to lower effective rates of protection on the final goods – and vice-versa.

The other complication is the percentage of total value added comprised by the raw materials and intermediate goods prior to, and subsequent to, imposition of the tariff. This is a particularly knotty problem because tariffs affect prices faced by buyers, which in turn affect purchases, which in turn can change that percentage. When tariffs on final products exceed those on raw materials and intermediate goods – and this has usually been the case in American history – an increase in this percentage will increase the effective rate.
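Both complications are captured by the standard textbook formula, ERP = (t − a·tᵢ)/(1 − a), where t is the tariff on the final good, tᵢ the tariff on the input, and a the input’s share of the final good’s free-trade price. A minimal sketch follows; the first call reproduces the textile example, while the other scenarios are invented for illustration:

```python
def effective_rate_of_protection(t_final, t_input, input_share):
    """Standard textbook formula: ERP = (t - a * t_i) / (1 - a),
    where a is the input's cost share at free-trade prices."""
    return (t_final - input_share * t_input) / (1 - input_share)

# The textile example: 15% tariff on cloth, none on cotton, 50% input share.
print(effective_rate_of_protection(0.15, 0.00, 0.50))   # 0.30

# An equal tariff on the input pulls the effective rate back down to the nominal rate.
print(effective_rate_of_protection(0.15, 0.15, 0.50))   # 0.15

# A higher tariff on the input than on the final good yields negative protection.
print(effective_rate_of_protection(0.15, 0.40, 0.50))   # -0.10
```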

But for our immediate purposes, it is sufficient to realize that appearance does not equal reality where tariff rates are concerned. And this is the smoking gun in our indictment of the motives of legislators who promote tariffs and restrictive foreign-trade legislation.


Corrupt Legislators and Self-Interested Reporting are the Real Danger to America

In the U.S., the Commercial Code includes thousands of tariffs of widely varying sizes. These not only allow legislators to pose as saviors of numerous business constituent classes. They also allow them to lie about the degree of protection being provided, the real locus of the benefits and the reasons behind them.

Legislators claim that the size of tariff protection being provided is modest, both in absolute and relative terms. This is a lie. Effective rates of protection are higher than they appear for the reasons explained above. They unceasingly claim that foreign competitors behave “unfairly.” This is also a lie, because there is no objective standard by which to judge fairness in this context – there is only the economic standard of efficiency. Legislators deliberately create bogus standards of fairness to give themselves the excuse to provide benefits to constituent blocs – benefits that take money from the rest of us. International trade bodies are created to further the ends of domestic governments in this ongoing deception.

Readers should ask themselves how many times they have read the term “effective rate of protection” in The Wall Street Journal, The Financial Times of London, Barron’s, Forbes or any of the major financial publications. That is an index of the honesty and reputability of financial journalism today. The term was nowhere to be found in the Journal piece of 03/16/2015.

Instead, the three Journal authors busied themselves flacking for a few American steel companies. They showed bar graphs of increasing Chinese steel production and steel exports. They criticized the Chinese because the country’s steel production has “yet to slow in lockstep” with growth in demand for steel. They quoted self-styled experts on China’s supposed “problem [with] hold[ing] down exports” – without ever explaining what rule, standard or economic principle would require a nation to withhold exports from willing buyers. They cited year-over-year increases in January exports in 2013, 2014 and 2015 as evidence of China’s guilt, along with the fact that the Chinese were on pace to export more steel than any other country “in this century.”

The reporters quoted the whining of a U.S. steel vice-president that demonstrating damage from Chinese exports is just “too difficult” to satisfy trade commissioners. Not content with this, they threw in complaints by an Indian steel executive and South Koreans as well. They neglected to tell their readers that Chinese, Indian and South Korean steels tend to be lower grades – a datum that helps to explain their lower prices. U.S. and Japanese steels tend to be higher grade, and that helps to explain why companies like Nucor have been able to keep prices and profit margins high for years. The authors cited one layoff at U.S. Steel but forgot to cite the recent article in their own Wall Street Journal lauding the history of Nucor, which has never laid off an employee despite the pressure of Chinese competition.

That same article quoted complaints by steel buyers in this country about the “competitive disadvantage” imposed by the higher-priced U.S. steel. Why are the complaints about cheap Chinese exports front-page news while the complaints about high-priced American steel are buried in the back pages – and not even mentioned by a subsequent banner article boasting input by no fewer than three Journal reporters? Why did the reporters forget to cite the benefits accruing to American steel users from low prices for steel imports? Don’t these reporters read their own newspaper? Or do they report only what comports with their own agenda?

DRI-204 for week of 3-8-15: The West-Coast Port Lockout: Causes and Consequences

An Access Advertising EconBrief:

The West-Coast Port Lockout: Causes and Consequences

When unions make news in the U.S. these days, chances are they are public-employee unions. The days of high-profile strikes, high-visibility picket lines and high-voltage tension between labor and corporate management are long gone. Or so we thought until the last nine months, when 29 ports in California, Oregon and Washington were crippled by a maritime lockout staged by the Pacific Maritime Association (PMA) against the International Longshore and Warehouse Union (ILWU). As the lockout lengthened, unloaded cargo ships dotted the coastline of cities like Long Beach and Los Angeles. Predictably, President Obama eventually intervened by dispatching Secretary of Labor Thomas Perez westward with a threatening message to the principals.

The lockout ended with a settlement on Friday evening, February 21, 2015. The lockout may be over, but we are left to make sense of it. In an age when private-sector unionism is dying on the vine, why is one specimen healthy enough to produce this job action? What actual role – if any – did the President play in settling the dispute? And, as always, what general principles of economic logic emerge to enlighten us?

In this case, the background to the dispute is voluminous and wide-ranging in history and subject. But it is vital to understanding.

The Economic Importance of the West Coast Trade

The West Coast ports host the Trans-Pacific trade between China and the United States. This international trade connection, linking ports like Shanghai and Los Angeles, is by far the most lucrative of the nine major global trade routes. In value terms, about half of all U.S. maritime cargo docks there. It is the jewel in the crown of globalism.

The U.S. is the world’s leading consumer nation. China is the world’s leading supplier of human labor and, consequently, a contender with the U.S. for the world lead in production of goods and services. As the U.S. has increasingly developed a comparative advantage in services – particularly finance – China has produced an increasing share of the world’s manufactured goods. These goods are shipped to their ultimate consumers primarily by sea.

The vessels carrying Chinese manufactured goods are highly specialized for this purpose. They resemble nautical skyscrapers much more than oceangoing vessels of old. The goods are mostly held in gigantic containers. A standard container is twenty feet long, eight feet six inches high and eight feet wide – that is, 6.09 meters by 2.6 meters by 2.4 meters. These containers are stacked in the mammoth cargo hold of the ship – seven or eight high and fourteen across. That is not all. Cargo is even placed on the top deck of the vessel in 45-foot, 48-foot and 53-foot boxes. The size and tonnage of one of these modern-day arks is wondrous to behold. Each ship holds roughly three warehouses full of goods. Of course, bulkier and more specialized merchandise like cars, trucks and agricultural machinery is carried in vessels specifically tailored to the purpose – but still mind-numbingly capacious.

Now envision the ports of Los Angeles and Long Beach shortly after settlement of the lockout. Some 29 of these ships were still bobbing at anchor, unloaded, in those waters. The value of the merchandise being stalled by this dispute was enormous – undoubtedly larger than the GDP of many a country.

So far, we have posited a commercial dispute that has slowed and frustrated business affecting roughly half of international trade traveling by sea in the U.S. Given the zeitgeist, it is not hard to anticipate the next chapter in the story. It is government intervention – and given the nature of the current political administration, can Presidential interference be far behind?

Presidential Intervention

The Taft-Hartley law was passed in 1947 by the overwhelmingly Republican-dominated Congress in response to countless pro-labor, big-government measures adopted by Franklin Roosevelt’s New Deal. These included the Wagner Act. Taft-Hartley was widely viewed as a re-balancing of the scales back toward “business” in the field of labor negotiations. Among its provisions was a clause allowing the U.S. President to intervene in strikes or lockouts that adversely affected the “national interest.” The intervention could take various forms, including a call for compulsory arbitration as a means of breaking a negotiating deadlock. The general philosophy behind the provision was something like this: Each side in the negotiations is selfishly pursuing its own interests while consumers are being harmed by the goods and services not produced and work not performed as the strike lingers on. So the President – who, it is presumed, has only the “national interest” at heart – will nobly step in and discipline the selfish adults who are acting like unruly children.

Readers should take special note of the overarching logic here: Market participants are selfish, immature and short-sighted, while the government in general and the President in particular are far-sighted, benevolent and noble. We, the general public, are dumb. The government is smart and must rule us for our own good. Of course, the government patiently allows us the freedom to live our own lives for a while – but when we get out of hand, the government steps in firmly and decisively, in no uncertain terms and corrects our foolish and careless mistakes.

As noted above, the lockout was resolved with a settlement between PMA and ILWU on February 21, 2015. At the eleventh hour, President Barack Obama intervened in the dispute by sending his Secretary of Labor, Thomas Perez, to California. Perez’s mandate was as follows: “Get this done now – if necessary, in Washington.” Was this the decisive measure in achieving resolution of the dispute?

No. This was classic political theater. The President was undoubtedly following the progress of the negotiations. When it seemed agreement was near, he sensed that his chance for seizing credit for ending the dispute was slipping away, so he did what politicians do – he looked as busy as possible while pretending to solve the problem. He had to act fast to get his response on the public record before the disputants settled their disagreement without him. That way, when the two sides reached agreement, many people would assume that they had been prodded into action by his tough talk.

Although President Obama’s career specialty is high-handed, unilateral, unprecedented executive action, his dispatching Secretary Perez to the West Coast does not fit into that category. Obama’s predecessor, George W. Bush, intervened in a similar manner in 2002. Indeed, the principals were the same – the PMA and the ILWU – and the basic scenario was identical – a lockout staged by the PMA. That lockout was motivated by a “work slowdown” called by the ILWU when contract negotiations between the two groups were deadlocked. President Bush threatened both groups with a court injunction that would have forced them to abandon their respective stances and reach agreement. The President’s threatened injunction would have nullified the PMA’s lockout; meanwhile, the Bush administration threatened to reclassify the dock workers as “railway workers.” This odd-seeming bureaucratic step would have had profound repercussions for the union, since railway workers were legally treated as essential to national safety and forbidden to strike. The threat had its desired effect when the parties ratified a new contract without requiring the prod of an injunction. Since Obama was not about to threaten a radical left-wing union whose members were a key voting bloc for his party, his admonition to Secretary Perez was the emptiest of threats.

Very well. If President Obama’s threat did not bring the recalcitrant disputants to heel, what did? Why was the lockout settled? And what incentives operate in these cases to persuade the parties to settle and limit the damage to consumers?

Trade-Route Competition

The true story is a familiar one. Government perennially poses as the savior of first, last and customary resort for consumers. But this is just a pose; it’s for show, not for go. Government can’t help consumers because government doesn’t organize resources to produce output, doesn’t possess entrepreneurial skills necessary to find out what consumers want, doesn’t possess the incentives and motivation imparted by the profit motive and lacks the information necessary to utilize the price system the way market participants do.

In other words, government lacks just about every concrete attribute necessary to actually help consumers. Where can that help be found? In competitive markets, that’s where. 

Let’s apply that generalization to the West Coast port lockout. For over nine months, roughly half of the tangible cargo entering the United States has been funneled through the 29 bottlenecked West Coast ports, producing costly time delays and increasingly frustrated transporters, wholesalers and retailers. Do all these frustrated people have to suffer in silence?

No. There is more than one way to move goods from China to North America. The Trans-Pacific route from Shanghai to the West Coast is the shortest route, taking 12-14 days. But the port lockout has encouraged shippers to experiment with alternative routes. These alternatives may be longer – up to twice as long, in fact – but the cargo can be unloaded immediately upon arrival rather than waiting in the West Coast queue for an indeterminate time. The most common of these alternatives is to bypass the West Coast ports and continue southward down the coast past the Baja Peninsula. The ships can stop off at a Mexican port, or reach Central America and cross the Panama Canal before turning north towards the Atlantic Coast, reaching New York City in 25 days.

The travel itinerary of any particular ship will depend on the composition of its freight. As reported by The Wall Street Journal (“Ports Gridlock Reshapes Trade,” 03/06/2015, by Laura Stevens and Paul Ziobro), “the biggest shippers, including Wal-Mart Stores, Inc., Home Depot Inc. and Target Corp., have employed for years what is known in the industry as a four-corner strategy, in which networks are expanded to include warehouses at northern and southern ports on both coasts and the Gulf of Mexico. Now even smaller companies are diversifying.” A Journal survey found that 65% of U.S. shippers planned to reduce shipments routed through the West Coast during the remainder of this year and next year. Inevitably, some of those re-routing decisions will become permanent.

A shorthand way of describing this changeover is to say that the Trans-Pacific trade route faces competition from the alternative route to the East Coast – which is really a combination of multiple routes with a terminal route ending in New York City. More precisely, each of the 29 ports served by the Trans-Pacific route competes with the many port-terminal facilities in Mexico, the Gulf Coast, the Atlantic Coast and New York City. In January, according to the Journal, the port of Oakland suffered a 32% drop in volume while the port of Virginia picked up a 15% increase. Toymaker Hasbro stopped splitting its shipments between coasts and routed its full, lucrative complement of toys to Savannah, GA by way of the Panama Canal. It was the hot breath of this kind of competitive pressure that the members of the PMA felt on their necks during negotiations with ILWU. It was the threat of losing business to other ports that finally drove them to settle with ILWU. (As it happens, the final matter at issue was the details of an arbitration agreement between the two bodies, rather than wages or working conditions.)

There is still another maritime trade route from China to North America. It is less important for the U.S. because the route passes through Southeast Asia, Africa and Europe via the South China Sea, the Indian Ocean, the Suez Canal, the Mediterranean and Gibraltar. Ships will drop off cargo at points along the way, and only the residual will remain to continue across the Atlantic to the East Coast. But the ports lockout has redirected some cargo to this route as well.

Like many economic questions, trade-route optimality becomes more complicated the closer it is examined. Larger companies are more apt to change their routes, either temporarily or permanently, because they have the resources and flexibility to seize any potential advantage. They can utilize economies of scale, size and time beyond the reach of smaller companies.

The West Coast ports are not the only ones grappling with the problem of bottlenecks. The Panama Canal has been a key trade connection for over a century. Constructed at the dawn of the 20th century, it wasn’t built for the supersized vessels that careen across today’s maritime superhighways. It is now undergoing enlargement, a process scheduled for completion in 2016. The anticipation of this upgrade was a factor influencing the PMA to hold out rather than settle quickly with the ILWU. Why? The port owners knew that they were bound to lose some business to the East Coast next year, when the Panama Canal revamp was complete. Thus, if that business switched early – this year instead of next – only this year’s lost business would be attributable to their negotiating stance, not the full discounted present value of all future business lost, because that future business was slated to transfer away anyway.
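The present-value logic in that tradeoff can be sketched numerically. All figures below are invented for illustration – a hypothetical $10 million annual margin on the at-risk cargo, a 5% discount rate and a 20-year horizon; none come from the PMA.

```python
# Hedged illustration of the PMA's calculus: losing business one year
# early is far cheaper than losing it permanently. All numbers invented.

def present_value(annual_profit, years, rate):
    """Discounted present value of a level stream of annual profits."""
    return sum(annual_profit / (1 + rate) ** t for t in range(1, years + 1))

annual_profit = 10.0   # hypothetical $10M/yr margin on the at-risk cargo
rate = 0.05            # hypothetical discount rate

# Cost of driving the business away permanently (20-year horizon):
pv_permanent = present_value(annual_profit, 20, rate)

# Cost when the business was leaving next year anyway: the holdout
# sacrifices only the current year's profit, received one year hence.
pv_one_year = annual_profit / (1 + rate)

print(f"PV of business lost forever:     {pv_permanent:.1f}")
print(f"PV of one prematurely lost year: {pv_one_year:.1f}")
```

On these assumptions the permanent loss is roughly thirteen times the one-year loss, which is why business already slated to move away discounts the cost of holding out.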

It is evident that competition from other trade routes – that is, from other ports – was the real motivation for settlement of the West Coast port lockout. The PMA faced a complicated tradeoff – weighing the gains from further negotiation against the net loss of business to other ports. Having disposed of the shibboleth that Presidential intervention under Taft-Hartley was the instrument of salvation, we should next turn the argument around. Why were port operators intent on locking out the ILWU in the first place? What did the PMA have to gain?

The Gains from the Lockout

The most fruitful way to examine the issue of gains from the lockout ordered by the PMA is to look backward to the previous lockout in 2002. This was the one in which President George W. Bush intervened, after the PMA ordered a lockout in response to the ILWU’s work slowdown. The key issue from the PMA’s standpoint, on which it finally prevailed, was the employment of new technology. The technique of “containerization” outlined above was becoming the efficient production technique for transporting ocean cargo. In order to move and stack large containers inside gigantic ships, port personnel had to use the latest mechanized technology. The ILWU resisted this strenuously, but finally conceded.

For a century and a half, labor unions have portrayed technology as the enemy of labor. That is ironic. Machines make labor more productive, thereby increasing the marginal-value product of labor, which is defined as the amount of output produced by an additional increment of labor multiplied by the price of that output. The increase in marginal-value product increases the demand for labor by business firms, thereby driving up wages. Essentially, this is the process by which wages have increased in America ever since colonial times. These increases have been broad-based across industries ranging from agriculture to manufacturing to extractive industries to service industries like warehousing.
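That definition lends itself to a back-of-the-envelope sketch. The quantities below are hypothetical – tons moved per longshoreman-hour and a price per ton invented purely for illustration:

```python
# Marginal-value product (MVP) = marginal physical product x output price.
# Hypothetical figures: an extra longshoreman-hour moves 4 tons of cargo
# priced at $30/ton; mechanization raises that to 10 tons per hour.

def marginal_value_product(marginal_product, output_price):
    """Value of the extra output from one more unit of labor."""
    return marginal_product * output_price

price_per_ton = 30.0                                    # hypothetical
mvp_before = marginal_value_product(4, price_per_ton)   # hand methods
mvp_after = marginal_value_product(10, price_per_ton)   # mechanized

# Competition among employers bids the wage toward the MVP, so the
# productivity gain shows up as higher pay, not as a loss to labor.
print(f"MVP before mechanization: ${mvp_before:.0f}/hour")
print(f"MVP after mechanization:  ${mvp_after:.0f}/hour")
```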

Unions paint technology as the enemy of labor because it can substitute for labor instead of complementing it. It should be obvious that this process of substitution has severe limits. Indeed, it is only now, after a few centuries of technological progress, that we are reaching the point where robots can substitute for people to any significant degree. While this does eliminate some jobs, its effect on productivity is so tremendous that remaining jobholders – those whose work is complemented by the machines – see huge increases in income.

Unions must provide gains to their members that cannot be had without union participation. Obviously, workers can enjoy the benefits of technology without joining a union, since employers are willing and even anxious to adopt innovations that promise to increase the rate of output per unit of input expended. That is why unions have always sought to make union membership a condition of employment. Next, unions must find a way to raise members’ wages above the level prevailing in the marketplace. Having made membership compulsory for jobholders, they then restrict membership in order to restrict the supply of labor to the marketplace. The restriction doesn’t reduce the number of workers willing to work, only the number able to work. The artificially high wage that results attracts more willing workers than there are jobs, creating an artificial surplus of labor. That labor surplus is what we call unemployment. Various sources estimate that thousands of people occupy waiting lists for membership in the ILWU; tens of thousands apply whenever a vacancy opens up.
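That mechanism can be made concrete with a stylized linear labor market. Both curves below are pure inventions chosen for round numbers; only the qualitative result – a higher wage plus a queue of excluded workers – carries over to the docks.

```python
# Stylized sketch: capping the number of workers allowed to work raises
# the wage and creates a queue of willing but excluded workers.
# Both curves are invented for illustration.

def labor_demanded(w):
    """Workers employers want to hire at wage w (hypothetical curve)."""
    return 1000 - 10 * w

def labor_supplied(w):
    """Workers willing to work at wage w (hypothetical curve)."""
    return 100 + 20 * w

# Competitive equilibrium: 1000 - 10w = 100 + 20w  ->  w = 30, L = 700
w_comp = 30
assert labor_demanded(w_comp) == labor_supplied(w_comp) == 700

# The union caps employment at 400; the wage rises to what employers
# will pay for that restricted quantity (read off the demand curve):
cap = 400
w_union = (1000 - cap) / 10            # -> 60

queue = labor_supplied(w_union) - cap  # willing workers shut out
print(f"union wage: {w_union}, workers queuing for membership: {queue:.0f}")
```

The queue in the sketch is the analogue of the ILWU’s waiting lists: workers willing to work at the union wage but barred from doing so.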

Labor unions are cartels. In many ways, they are analogous to business cartels organized to raise prices paid by consumers above the prevailing market price. (Unlike labor cartels, business cartels don’t produce surpluses of goods; instead, the business combination reduces output in accordance with the higher price contrived by the cartel. The essential difference between the two cases is that business cartels theoretically control the total supply of output to the market but unions do not control the total supply of labor to the labor market.) But while business cartels have been forbidden by the antitrust laws, labor cartels are actively encouraged by government labor laws like the Wagner Act. In particular, government encourages labor unions to compel membership and compel payment of dues by workers who do not belong to unions.

The 14,000 members of the ILWU who work in the West Coast ports have been described by the San Francisco Chronicle as “the aristocrat[s] of the working class,” who “can earn well over $100,000 a year with excellent benefits.” Partly, this is owed to decades of exclusionary policies followed by the ILWU, which has studiously restricted membership in the union. The union employs the ancient “hiring hall” method of allocating the drastically restricted number of jobs. Union members – including thousands of part-time “casual” workers – show up at hiring halls where bosses inform them what work is available. Movie fans will recall the Oscar-winning film On the Waterfront, which graphically portrayed dockside hiring at its most arbitrary and corrupt.

But the living standard enjoyed by ILWU members today is also the result of the contract negotiated in 2002 at the insistence of the PMA. That was the year in which the new technology was finally adopted by the West Coast ports. The result: in succeeding years, employment at West Coast ports increased by 32% – a reflection of the added business generated by the productivity gains from technological innovation.

For years, the ILWU reacted to technological progress by instituting job cuts. The union watched while black union members were laid off because they were junior to their white brothers. Today, this would be called “disparate impact,” but during the ILWU Presidency of the Communist sympathizer Harry Bridges (1937-1977), his political influence precluded any criticism. Similarly, Bridges managed to exclude women from union membership without attracting attention as a sexist. The ILWU’s restrictive and exclusionary policies kept thousands from working in West Coast ports. In contrast, the PMA’s lockout won a concession for technological adoption that spurred productivity and added thousands of jobs to port payrolls.

The Real Story and Consequences of the Port Lockout, as Told by Economic Logic

Our analysis of the West Coast port lockout, using the tools of history and economic logic, paints a completely different picture than the one drawn by the news media, left-wing academics and union propagandists. The lockout was indeed the action of two sides, each pursuing its own interests; that much is true. But Presidential intervention was neither necessary nor sufficient to end it, either in the current instance or in the past. Market participants are not immature children governed by passions over which they have no control. They are adults who make rational choices based on the limited information at their disposal. In this case, market forces – specifically, the competition embodied in alternative global trade routes – put limits on the amount of time that the PMA would be willing to negotiate for its ends.

Presidents are not noble men governed by altruistic motives foreign to market participants. Presidents are politicians. Presidents act to achieve their political ends. Political ends typically involve persuading a bloc of people that you will provide the largest possible benefit to them at little or no cost. This is never economically logical or feasible, so political action and economics are constantly at odds. Thus, it is not surprising that neither President Bush nor President Obama would rely on market forces to handle their respective West Coast port lockouts, nor is it surprising that their tough talk designed to bring the disputants up short and break the negotiating deadlock was not meaningful.

Ending the deadlock by force majeure was not economically beneficial to American consumers because it was not required. Market forces would do that anyway. The measures ancillary to achieving that coercive end were not beneficial, either. Forcing people to do things against their will is only justified to prevent crimes and loss of rights. In this case, the PMA and ILWU were harming only themselves. Shippers have alternative ways to get their goods to consumers and were, in fact, making use of those alternatives.

The central fact of the West Coast port lockouts is illuminated by the aftermath of the 2002 agreement: The adoption of productivity-enhancing technology demanded by the PMA ushered in a 32% increase in employment on the West Coast docks. This benefitted America’s consumers, the PMA and several thousand workers who previously had been shut out of jobs by the exclusionary policies of the ILWU. The PMA’s lockout was the force working for the public good – not the actions of the President and certainly not the actions of the ILWU. The PMA was the beneficial force not because its members were inherently more noble or altruistic than everybody else, but because they were the only ones directly associated with the case who were responding to market forces. This was a contemporary instance of Adam Smith’s “invisible hand” at work. It was so invisible that nobody else has noticed it until now.

To be sure, this isn’t the way that competition is supposed to work. Firms are supposed to introduce productivity-enhancing technology as it becomes available. They shouldn’t need to ask permission from anybody – not government regulators, Presidents or labor unions. But the fact that federal-government labor policy allowed a union like the ILWU to gain a stranglehold on the maritime labor market put port owners and operators in a bind. They had to form their own organization just to negotiate with the ILWU. They had to risk losing business to other ports during negotiations in order to win the right to adopt the technology that allowed them to compete successfully with those ports. This is no way to run a railroad – or a maritime cargo terminal – but that’s the hand of cards they were dealt. They played it out to the best of their ability. For their trouble, they have been vilified.

And the ILWU? In 2001, its members celebrated the centenary of the birth of the union’s founding President, the Communist Harry Bridges. Fittingly, they celebrated it with a work stoppage. They shut down the port of San Pedro, CA, for eight hours.

The PMA negotiated tenaciously for the right to create jobs – and did so. The ILWU negotiated for the right to restrict employment. Its members celebrated the birth of their founder by organizing the waste of a day’s worth of work. In a nutshell, that sums up the contending sides in the West Coast port lockout. The lockout itself was not a pointless waste of time, a threat to consumers, or a danger to national welfare demanding Presidential intervention. It was the only way that the interests of consumers could be served, given the powers placed in the hands of a labor union determined to thwart progress and benefit a small number of incumbent union members at the expense of job seekers, consumers and everybody else. In short, the reality of the West Coast port lockout is diametrically opposed to the conventional narrative.

DRI-183 for week of 3-1-15: George Orwell, Call Your Office – The FCC Curtails Internet Freedom In Order to Save It

An Access Advertising EconBrief:

George Orwell, Call Your Office – The FCC Curtails Internet Freedom In Order to Save It

February 26, 2015 is a date that will live in regulatory infamy. That assertion is subject to revision by the courts, as is nearly everything undertaken these days by the Obama administration. As this is written, the Supreme Court is hearing yet another challenge to “ObamaCare,” the Affordable Care Act. President Obama’s initiative to achieve a single-payer system of national health care in the U.S. is rife with Orwellian irony, since it cannot help but make health care unaffordable for everybody by further removing the consumer of health care from any exposure to the price of health care. Similarly, the latest administration initiative is the February 26 approval by the Federal Communications Commission (FCC) of the so-called “Net Neutrality” doctrine in regulatory form. Commission Chairman Tom Wheeler’s regulatory proposal – 332 pages that were withheld from the public – has been widely characterized as a proposal to “regulate the Internet like a public utility.”

This episode is shot through with a totalitarian irony that only George Orwell could fully savor. The FCC is ostensibly an independent regulatory body, free of political control. In fact, Chairman Wheeler long resisted the “net neutrality” doctrine (hereinafter shortened to “NN” for convenience). The FCC’s decision was a response to pressure from President Obama, which made a mockery of the agency’s independence. The alleged necessity for NN arises from the “local monopoly” over “high-speed” broadband exerted by Internet service providers (hereinafter abbreviated as “ISPs”) – but a “public utility” was, and is, by definition a regulated monopoly. Since the alleged local monopoly held by ISPs is itself fictitious, the FCC is in fact proposing to replace competition with monopoly.

To be sure, the particulars of Chairman Wheeler’s proposal are still open to conjecture. And the enterprise is wildly illogical on its face. The idea of “regulating the Internet like a public utility” treats those two things as equivalent entities. A public utility is a business firm. But the Internet is not a single business firm; indeed, it is not a single entity at all in the concrete sense. In the business sense, “the Internet” is shorthand for an infinite number of existing and potential business firms serving the world’s consumers in countless ways. The clause “regulate the Internet like a public utility” is quite literally meaningless – laughably indefinite, overweening in its hubris, frightening in its totalitarian implications.

It falls to an economist, former FCC Chief Economist Thomas Hazlett of Clemson University, to sculpt this philosophy into its practical form. He defines NN as “a set of rules… regulating the business model of your local ISP.” In short, it is a political proposal that uses economic language to prettify and conceal its real intentions. NN websites are emblazoned with rhetoric about “protecting the Open Internet” – but the Internet has thrived on openness for over 20 years under the benign neglect of government regulators. This proposal would end that era.

There is no way on God’s green earth to equate a regulated Internet with an open Internet; the very word “regulated” is the antithesis of “open.” NN proponents paint scary scenarios about ISPs “blocking or interfering with traffic on the Internet,” but their language is always conditional and hypothetical. They are posing scenarios that might happen in the future, not ones that threaten us today. Why? Because competition and innovation have protected consumers up to now and continue to do so. NN will make its proponents’ scary predictions more likely, not less, because it will restrict competition. That is what regulation does in general; that is what public-utility regulation specifically does. For over a century, public-utility regulation has installed a single firm as a regulated monopoly in a particular market and has forcefully suppressed all attempts to compete with that firm.

Of course, that is not what President Obama, Chairman Wheeler and NN proponents want us to envision when we hear the words “regulate the Internet like a public utility.” They want us to envision a lovely, healthy flock of sheep grazing peacefully in a beautiful meadow, supervised by a benevolent, powerful Shepherd with a herd of well-trained, affectionate shepherd dogs at his command. Soothing music is piped down from heaven and love and tranquility reign. At the far edges of the meadow, there is a forest. Hungry wolves dwell within, eyeing the sheep covetously. But they dare not approach, for they fear the power of the Shepherd and his dogs.

In other words, the Obama administration is trying to manipulate the emotions of the electorate by creating an imaginary vision of public-utility regulation. The reality of public-utility regulation was, and is, entirely different.

The Natural-Monopoly Theory of Public-Utility Regulation

The history of public-utility regulation is almost, but not quite, coextensive with that of government regulation of business in the United States. Regulation began at the state level with Munn vs. Illinois, which paved the way for state regulation of the grain-storage business in the 1870s. The Interstate Commerce Commission’s inaugural voyage with railroad regulation followed in the late 1880s. With the commercial introduction of electric lighting and the telephone came business firms tailored to those ends. And in their wake came the theory of natural monopoly.

Both electric power and telephones came to be known as “natural monopoly” industries; that is, industries in which both economic efficiency and commercial viability dictated that a single firm serve the entire market. This was the outgrowth of economies of scale in production, owing to decreasing long-run average cost of production. This decidedly unusual state of affairs is a technological anomaly. Engineers recognize it in conjunction with the “two-thirds rule.” There are certain cases in which total cost increases as the two-thirds power of output (the thru-put of pipes and cables and the capacity of cargo holds are examples), which implies that average cost decreases steadily as output rises. In turn, this implies that the firm that grows the fastest will undersell all others while still covering all its costs. The further implication is that consumers will receive the most output at the lowest price if one monopoly firm serves everybody – if, and only if, the firm’s price can be constrained equal to its long-run average cost at the rate of output necessary to meet market demand. An unconstrained monopoly would produce less than this optimal rate of output and charge a higher price, in order to maximize its profit. But the theoretical outcome under regulated monopoly equates price with long-run average cost, which provides the utility with a rate of return equal to what it could get in the best alternative use for its financial capital, given its business risk.
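A few lines of arithmetic make the two-thirds rule concrete. The cost constant and output levels below are arbitrary; the point is the shape of the curve, not the particular numbers.

```python
# If total cost rises as the two-thirds power of output,
#   C(q) = k * q**(2/3),
# then average cost AC(q) = k * q**(-1/3) falls steadily as output
# grows -- the signature of natural monopoly. k is arbitrary.

k = 100.0

def total_cost(q):
    return k * q ** (2 / 3)

def average_cost(q):
    return total_cost(q) / q

for q in (1, 8, 64, 512):
    print(f"q = {q:3d}   AC = {average_cost(q):6.2f}")

# Every eightfold increase in output halves average cost, since
# 8**(-1/3) == 1/2:
assert abs(average_cost(8) - average_cost(1) / 2) < 1e-9
```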

In the U.S. and Canada, this regulated outcome is sought via the medium of periodic rate hearings staged by a public-utility regulatory commission (PUC for short). The utility is privately owned by shareholders. In Europe, utilities are typically government-owned, and their prices are (in principle) set equal to long-run marginal cost, which under decreasing average cost lies below average cost and thus produces a loss in accounting terms. Taxpayers subsidize this loss – these subsidies are the alternative to the profits earned by regulated public-utility firms in the U.S. and Canada.
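The difference between the two pricing rules can be sketched with the two-thirds-power cost function the engineers’ rule describes. The constant and output level below are arbitrary; the useful fact is that with C(q) = k·q^(2/3), marginal cost is always exactly two-thirds of average cost.

```python
# Sketch of the two regulatory pricing rules under decreasing average
# cost. Numbers are arbitrary; with C(q) = k*q**(2/3), marginal cost
# dC/dq = (2/3)*k*q**(-1/3) is always 2/3 of average cost.

k, q = 100.0, 1000.0

total_cost = k * q ** (2 / 3)             # ~10,000 at these parameters
avg_cost = total_cost / q                 # AC = 10
marg_cost = (2 / 3) * k * q ** (-1 / 3)   # MC = 20/3

# North American rule: price = AC -> the utility breaks even.
revenue_at_ac = avg_cost * q

# European rule: price = MC -> revenue covers only 2/3 of cost,
# and taxpayers subsidize the rest.
revenue_at_mc = marg_cost * q
subsidy = total_cost - revenue_at_mc

print(f"break-even revenue at AC pricing:  {revenue_at_ac:.0f}")
print(f"taxpayer subsidy under MC pricing: {subsidy:.0f}")
```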

These regulatory schemes represent the epitome of what the Nobel laureate Ronald Coase called “blackboard economics” – economists micro-managing reality as if they possessed all the information and control over reality that they do when drawing diagrams on a classroom blackboard. In practice, things did not work out as neatly as the foregoing summary would lead us to believe. Not even remotely close, in fact.

The Myriad Slips Twixt Theoretical Cup and Regulatory Lip

What went wrong with this theoretical set-up, seemingly so pat when viewed in a textbook or on a classroom blackboard? Just about everything, to some degree or other. Today, we assume that the institution of regulated monopoly came in response to market monopolies achieved and abuses perpetrated by electric and telephone companies. What mostly happened, though, was different. There were multiple providers of electricity and telephone service in the early days. In exchange for submitting to rate-of-return regulation, though, one firm was extended a grant of monopoly and other firms were excluded. Only in very rare cases did competition exist for local electric service – and curiously, this rate competition actually produced lower electric rates than did public-utility regulation.

This result was not the anomaly it seemed, since the supposed economies of scale were present only in the distribution of electric power, not in power generation. So the cost superiority of a single firm producing for the whole market turned out to be not the slam-dunk that was advertised. That was just one of many cracks in the façade of public-utility regulation. Over the course of the 20th century, the evolution of public-utility regulation in telecommunications proved to be paradigmatic for the failures and inherent shortcomings of the form.

Throughout the country, the Bell system was handed a monopoly on the provision of local service. Its local service companies – the analogues to today’s ISPs – gradually acquired reputations as the heaviest political hitters in state-government politics. The high rates paid by consumers bought lobbyists and legislators by the gross, and they obediently safeguarded the monopoly franchise and kept the public-utility commissions (PUCs) staffed with tame members. That money also paid the bill for a steady diet of publicity designed to mislead the public about the essence of public-utility regulation.

We were assured by the press that the PUC was a vigilant watchdog whose noble motives kept the greedy utility executives from turning the rate screws on a helpless public. At each rate hearing, self-styled consumer advocacy groups paraded their compassion for consumers by demanding low rates for the poor and high rates on business – as if it were really possible for some non-human entity called “business” to pay rates in the true sense, any more than it could pay taxes. PUCs made a show of requiring the utility to enumerate its costs and of laboriously calculating “just and reasonable” rates – as if a Commission possessed juridical powers denied to the world’s greatest philosophers and moralists.

Behind the scenes, after the press had filed their poker-faced stories on the latest hearings, increasingly jaded and cynical reporters, editors and industry consultants rolled their eyes and snorted at the absurdity of it all. Utilities quickly learned that they wouldn’t be allowed to earn big “profits,” because this would be cosmetically bad for the PUC, the consumer advocates, the politicians and just about everybody involved in this process. So executives, middle-level managers and employees figured out that they had to make their money differently than they would if working for an ordinary business in the private sector. Instead of working efficiently and productively and striving to maximize profit, they would strive to maximize cost instead. Why? Because they could make money from higher costs in the form of higher salaries, higher wages, larger staffs and bigger budgets. What about the shareholders, who would ordinarily be shafted by this sort of behavior? Shareholders couldn’t lose because the PUC was committed to giving them a rate of return sufficient to attract financial capital to the industry. (And the shareholders couldn’t gain from extra diligence and work effort put forward by the company because of the limitation on profits.) That is, the Commission would simply ratchet up rates commensurate with any increase in costs – accompanied by whatever throat-clearing, phony displays of concern for the poor and cost-shifting shell games were necessary to make the numbers work. In the final analysis, the name of the game was inefficiency and consumers always paid for it – because there was nobody else who could pay.

So much for the vaunted institution of public-utility regulation in the public interest. Over fifty years ago, a famous left-wing economist named Gardiner Means proposed subjecting every corporation in the U.S. to rate-of-return regulation by the federal government. This held the record for most preposterous policy program advanced by a mainstream commentator – until Thomas Wheeler announced that henceforth the Internet would be regulated as if it were a public utility. Now every American will get a taste of life as Ivan Denisovich, consigned to the Gulag Archipelago of regulatory bureaucracy.

Of particular significance to us in today’s climate is the effect of this regime on innovation. Outside of totalitarian economies such as the Soviet Union and Communist China, public-utility regulation is the most stultifying climate for innovation ever devised by man. The idea behind innovation is to find ways to produce more goods using the same amount of inputs or (equivalently) the same amount of goods using fewer inputs. Doing this lowers costs – which increases profits. But why go to the trouble if you can’t enjoy the increase in profits? Of course, utilities were willing to spend money on research, provided they could get it into the rate base and earn a rate of return on the investment. But they had no incentive to actually implement cost-saving innovations. The Bell System was legendary for its unwillingness to lower its costs; the economic literature is replete with jaw-dropping examples of local Bell companies lagging years and even decades behind the private sector in technology adoption – even spurning advances developed in Bell’s own research labs!

Any reader who suspects this writer of exaggeration is invited to peruse the literature of industrial organization and regulation. One nagging question should be dealt with forthwith: if the demerits of public-utility regulation were well recognized by insiders, how were they so well concealed from the public? The answer is not mysterious. All of those insiders had a vested interest in not blowing the whistle, because they were making money from ongoing public-utility regulation. Commission employees, consultants, expert witnesses, public-interest lawyers and consumer advocates all testified at rate hearings or helped prepare and research testimony. They either worked full-time or traveled the country as contractors earning lucrative hourly pay. If any one of them had been crazy enough to launch an exposé of the public-utility scam, he or she would have been blackballed from the business while accomplishing nothing – the institutional inertia in favor of the system was so enormous that it would have taken mass revolt to effect change. So they just shrugged, took the money and grew more cynical by the year.

In retrospect, it seems miraculous that anything did change. In the 1960s, local Bell companies were undercharging for local service to consumers and compensating by soaking business and long-distance customers with high prices. The high long-distance rates eventually attracted the interest of would-be competitors. One government regulator grew so fed up with the inefficiency of the Bell system that he granted the competitive petition of a small company called MCI, which sought to compete only in the area of long-distance telecommunications. MCI was soon joined by other firms. The door to competition had been cracked slightly ajar.

In the 1980s, it was kicked wide open. A federal antitrust lawsuit against AT&T led to the breakup of the firm. At the time, the public was dubious about the idea that competition was possible in telecommunications. The 1990s soon showed that regulators were the only ones standing between the American public and a revolution unlike anything we had seen in a century. After vainly trying to protect the local Bells against competition, regulators finally succumbed to the inevitable – or rather, they were overrun by the competitive hordes. When the public got used to cell phones and the Internet, they ditched good old Ma Bell and land-line phones.

This, then, is public-utility regulation. The only reason we have smart phones and mobile Internet access today is that public-utility regulation in telecommunications was overrun by competition despite regulatory opposition in the 1990s. But public-utility regulation is the wonderful fate to which Barack Obama, Thomas Wheeler and the FCC propose to consign the Internet. What is the justification for their verdict?

The Case for Net Neutrality – Debunked

As we have seen, public-utility regulation was based on a premise that certain industries were “natural monopolies.” But nobody has suggested that the Internet is a natural monopoly – which makes sense, since it isn’t an industry. Nobody has suggested that all or even some of the industries that utilize the Internet are natural monopolies – which makes sense, since they aren’t. So why in God’s name should we subject them to public-utility regulation – especially since public-utility regulation didn’t even work well in the industries for which it was ideally suited? We shouldn’t.

The phrase “net neutrality” is designed to achieve an emotional effect through alliteration and a carefully calculated play on the word “neutral.” In this case, the word is intended to appeal to egalitarian sympathies among hearers. It’s only fair, we are urged to think, that ISPs, the “gatekeepers” of the Internet, be scrupulously fair or “neutral” in letting everybody in on the same terms. And, as with so many other issues in economics, the case for “fairness” becomes just so much sludge upon closer examination.

The use of the term “gatekeepers” suggests that God handed Moses on Mount Sinai a stone tablet for the operation of the Internet, on which ISPs were assigned the role of “gatekeepers.” Even as hyperbolic metaphor, this bears no relation to reality. Today, cable companies are ISPs. But they began life as monopoly-killers. In the early 1960s, Americans chose among three monopoly VHF-TV networks, broadcast by ABC, NBC and CBS. Gradually, local UHF stations started to season the diet of content-starved viewers. When cable-TV came along, it was like manna from heaven to a public fed up with commercials and ravenous for sports and movies. But government regulators didn’t allow cable-TV to compete with VHF and UHF in the top 100 media markets of the U.S. for over two decades. As usual, regulators were zealously protecting government monopoly, restricting competition and harming consumers.

Eventually, cable companies succeeded in tunneling their way into most local markets. They did it by bribing local government literally and figuratively – the latter by splitting their profits via investment in pet political projects of local politicians as part of their contracts. In return, they were guaranteed various degrees of exclusivity. But this “monopoly” didn’t last because they eventually faced competition from telecommunication firms who wanted to get into their business and whose business the cable companies wanted to invade. And today, the old structural definitions of monopoly simply don’t apply to the interindustry forms of competition that prevail.

Take the Kansas City market. Originally, Time Warner had a monopoly franchise. But eventually a new cable company called Everest invaded the metro area across the state line in Johnson County, KS. Overland Park is contiguous with Kansas City, MO, and consumers were anxious to escape the toils of Time Warner. Eventually, Everest prevailed upon KC, MO to gain entry to the Missouri side. Now even the cable-TV market was competitive. Then Google selected Kansas City, KS as the venue for its new high-speed service. Soon KC, MO was included in that package, too – now there were three local ISPs! (Everest has morphed into two successive incarnations, one of which still serves the area.)

Although this is not typical, it does not exhaust the competitive alternatives. This is only the picture for fixed service. Americans are now turning to mobile forms of access to the Internet, such as smart phones. Smart watches are on the horizon. For mobile access, the ISP is a wireless company like AT&T, Verizon, Sprint or T-Mobile.

The NN websites stridently maintain that “most Americans have only a single ISP.” This is nonsense; a charitable interpretation would be that most of us have only a single cable-TV provider in our local market. But there is no necessary one-to-one correlation between “cable-TV provider” and “ISP.” Besides, the state of affairs today is ephemeral – different from what it was a few years ago and from what it will be a few years from now. It is only under public-utility regulation that technology gets stuck in one place, because under public-utility regulation there is no incentive to innovate.

More specifically, the FCC’s own data suggest that 80% of Americans have two or more ISPs offering 10Mbps downstream speeds, and fully 96% have two or more ISPs offering 6Mbps downstream and 1.5Mbps upstream speeds. (Until quite recently, the FCC’s own criterion for “high-speed” Internet was 4Mbps or more.) This simply does not comport with any reasonable structural concept of monopoly.

The current flap over “blocking and interfering with traffic on the Internet” is the residue of disputes between Netflix and ISPs over charges for transmission of the former’s streaming services. In general, there is movement toward higher charges for data transmission than for voice transmission. But the huge volumes of traffic generated by Netflix cause congestion, and the free-market method for handling congestion is a higher price, or the functional equivalent. That is what economists have recommended for dealing with road congestion during rush hours and congested demand for air-conditioning and heating services at peak times of day and during peak seasons. Redirecting demand to the off-peak is not a monopoly response; it is an efficient market response. Competitive bar and restaurant owners do it with their pricing methods; competitive movie theater owners also do it (or used to).
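The peak-load logic above can be sketched with purely hypothetical numbers. The congestion function, capacity and load figures below are illustrative assumptions, not data from any ISP or from the essay:

```python
def congestion_cost(load, capacity):
    """Stylized congestion penalty: zero up to capacity, rising sharply beyond it."""
    excess = max(0, load - capacity)
    return excess ** 2

CAPACITY = 100

# Flat pricing: demand piles onto the peak period.
flat = {"peak": 140, "off_peak": 40}
# Peak pricing: a higher peak price redirects some demand off-peak.
peaked = {"peak": 100, "off_peak": 80}

flat_cost = sum(congestion_cost(v, CAPACITY) for v in flat.values())
peaked_cost = sum(congestion_cost(v, CAPACITY) for v in peaked.values())

print(flat_cost, peaked_cost)  # 1600 0
```

The same total demand is served in both cases; the price differential merely moves part of it away from the congested period, which is the efficient market response described above.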

Similar logic applies to other forms of hypothetically objectionable behavior by ISPs. The prioritization of traffic, creation of “fast” and “slow” lanes, blocking of content – these and other behaviors are neither inherently good nor bad. They are subject to the constraints of competition. If they are beneficial on net balance, they will be vindicated by the market. That is why we have markets. If a government had to vet every action by every business for moral worthiness in advance, it would paralyze life as we know it. The only sensible course is to allow free markets and competition to police the activities of competitors.

Just as there is nothing wrong or untoward with price differentials based on usage, there is nothing virtuous about government-enforced pricing equality. Forcing unequals to be treated equally is not meritorious. NN proponents insist that the public has to be “protected” from that kind of treatment. But this is exactly what PUCs did for decades when they subsidized residential consumers inefficiently by soaking business and long-distance users with higher rates. Back then, the regulatory mantra wasn’t “net neutrality,” it was “universal service.” Ironically, regulators never succeeded in achieving rates of household telephone subscription that exceeded the rate of household television service. Consumers actually needed – but didn’t get – protection from the public-utility monopoly imposed upon them. Today, consumers don’t need protection because there is no monopoly, nor is there any prospect of one absent regulatory intervention. The only vestige of monopoly remaining is the residue of the grants of local cable-TV monopoly made by municipal governments. Compensating for past mistakes by local government is no excuse for making a bigger mistake by granting monopoly power to FCC regulators.


The late, great economist Frank Knight once remarked that he had heard do-gooders utter the equivalent words to “I want power to do good” so many times for so long that he automatically filtered out the last three words, leaving only “I want power.” Federal-government regulators want the maximum amount of power with the minimum number of restrictions, leaving them the maximum amount of flexibility in the exercise of their power. To get that, they have learned to write excuses into their mandates. In the case of NN and Internet regulation, the operative excuse is “forbearance.”

Forbearance is the wave of the hand with which they will brush aside all the objections raised in this essay. The word appears in the original Title II regulations. It means that regulators aren’t required to enforce the regulations if they don’t want to; they can “forbear.” “Hey, don’t worry – be happy. We won’t do the bad stuff, just the good stuff – you know, the ‘neutrality’ stuff, the ‘equality’ stuff.” Chairman Wheeler is encouraging NN proponents to fill the empty vessel of Internet regulation with their own individual wish-fulfillment fantasies of what they dream a “public utility” should be, not what the ugly historical reality tells us public-utility regulation actually was. For example, he has implied that forbearance will cut out things like rate-of-return regulation.

This merely dodges the questions raised by the issue of “regulating the Internet like a public utility.” The very elements that Wheeler proposes to forbear are part and parcel of public-utility regulation as we have known it. If these are forborne, we have no basis for knowing what to expect from the concept of Internet public-utility regulation at all. If they are not, after all, forborne – then we are back to square one, with the utterly dismal prospect of replaying 20th-century public-utility regulation in all its cynical inefficiency.

Forbearance is a good idea, all right – so good that we should apply it to the whole concept of Internet regulation by the federal government. We should forbear completely.

DRI-180 for week of 2-22-15: Will Macroeconomics Survive the Aftershocks of the Great Recession?

An Access Advertising EconBrief:

 Will Macroeconomics Survive the Aftershocks of the Great Recession?

Today there are courses on Macroeconomics in the Economics departments of every American university. It was not ever thus. Macroeconomics was born in the agony of the Great Depression. Before that, economists worked with aggregative concepts like the Quantity Theory of Money, but there was no holistic study or theory of economic aggregates. It was not clear why there should be, since all economic action originated in the minds of individual human beings and statistical data have no life of their own apart from the people whose actions they embody.

The Depression focused public attention on economics and economists, who were previously obscure. People wanted to know what went wrong and how to recover from it. When reigning economic theory proved unavailing and the Depression resisted the frantic resuscitative efforts of government, the economics profession threw professional decorum to the winds and started chasing after any explanation that seemed either plausible or palatable. The winner in this guess-the-theory sweepstakes was John Maynard Keynes, whose General Theory of Employment, Interest and Money offered an apparent answer to the more important of the two big questions; namely, how do we get out of this fix?

Keynes’ prescription was deficit spending by government – the more, the better until the cloud of Depression lifted. Keynes won the professional competition, but World War II made his victory anticlimactic; when the smoke cleared after the war and normal life resumed, the Depression was over. But the economics profession had taken the bit between its teeth. It had organized a new system of national income accounts around the aggregative theory of income and employment advanced by Keynes and his burgeoning school of disciples. Professional journals bulged with articles on Keynesian economics, triggering a forty-year odyssey of research.

Economics split in two. Formerly, economics studied individual economic entities like the consumer, the producer and the worker. Now the theory of consumer demand, the theory of the firm and the theory of input supply and demand were pigeonholed under the study of Microeconomics. Monetary theory, which had formerly sought to express the barter theory of pure exchange in the language of indirect monetary exchange, was now converted into Macroeconomics – the study of national economic aggregates using the language of Keynesian economics.

Keynesian economics fell into disrepute in the early 1980s and textbooks were revised accordingly. But its skeletal structure and aggregative logic still survive, as does the Micro/Macro split.

The Great Recession and its stubbornly lingering aftermath have midwifed a lengthening string of books and articles purporting to explain what went wrong and how to prevent it from happening again. In that respect, we are witnessing a replay – or perhaps “remake” might be more accurate – of the founding story of Macroeconomics. This time, though, we are heading for an entirely different ending. Whereas the Great Depression fused the study of Macroeconomics around the core of Keynesian theory, the Great Recession has fragmented the subject almost to the limits of recognition.

The Fragmentation of Macroeconomics

Of course, the fragmentation process began even earlier, with the organized opposition to Keynesian theory. That began in the 1950s with the rise to prominence of Milton Friedman. Friedman’s revival of the Quantity Theory of Money as a Monetarist theory that competed with Keynesian economics made him famous. He vied with John Kenneth Galbraith in public recognition and popularity and nearly single-handedly restored free-market economics to respectability in America. His promotion of floating exchange rates for international currencies and the permanent-income theory of consumption established an academic reputation that eventually earned him the Nobel Prize.

In the 1970s, Friedman’s Monetarism was joined by the Rational Expectations theory of Robert Lucas and Thomas Sargent, each of whom later earned the Nobel Prize. In a sense, Rational Expectations competed with both Keynesian economics and Monetarism, since both previous theories shared a common analytical framework. Rational Expectations theory denied that policymakers could systematically trick the public by printing money and inflating the currency as Keynes had advocated in a famous passage of the General Theory.

Keynesian economics fell, but rose again in the form of the New Keynesian Economics. This version was created to purge the failings of its ancestor, so it has more in common with Monetarism and Rational Expectations than it does with its namesake. But its most striking feature is its ecumenism. Perusing a list of economists who style themselves New Keynesians is an exercise in cognitive dissonance. The members have as little in common as did the members of opposing schools of thought in the 1970s.

Paul Krugman is an unreconstructed, big-spending Keynesian and defender of big government. N. Gregory Mankiw could just as easily be called a “New Monetarist.” John Taylor is the inventor of the “Taylor Rule,” the successor to Milton Friedman’s “monetary rule” that tied the hands of Federal Reserve policymakers by prescribing fixed annual increases in the quantity of money. Stanley Fischer is a famous central banker and textbook author who combined Rational Expectations theory with New Keynesian economics. David and Christina Romer are a husband and wife who frequently form a research team. As individuals, they have sometimes expressed skepticism about the effects of activist Keynesian policy measures that other New Keynesians like Krugman approve, such as tax increases. Indeed, Christina once authored a well-known paper doubting the efficacy of post-World War II federal-government macroeconomic policy intervention. (This viewpoint was nowhere to be heard, though, when she became head of President Obama’s Council of Economic Advisers.) About the only thing uniting these economists is a belief that government should take some active measures on a regular basis to improve economic outcomes. But they are dramatically, not to say violently, at odds over exactly what those measures should be.

Free-Market Economists and Macroeconomics

Historically, free-market economists have always been outliers. Unlike everybody else, they never accepted Keynes, nor did they accept his aggregative methods. Indeed, the great free-market economist F.A. Hayek was the principal rival of Keynes during the 1930s. Hayek was the only economist to offer a coherent explanatory theory of recessions and depressions. While Keynes offered an active measure to cure the Great Depression without pretending to explain why it had occurred, Hayek did just the opposite. He provided a logical explanation for the onset of the Depression, but maintained that – like the common cold – the Depression could not be cured, only endured. More specifically, Hayek insisted that active fiscal and monetary measures would merely make things worse.

This kind of stubborn independence persisted throughout succeeding decades. At least superficially, it remains intact to this day. While unreconstructed Keynesians like Joseph Stiglitz ascribe the Great Recession to “deregulation” that allowed the commercial and shadow-banking sectors to run amok, free-market economists demur by noting the nearly complete absence of any deregulatory initiatives other than the comparatively trivial Gramm-Leach-Bliley Act of 1999. But upon closer inspection of the numerous free-market exegeses of the financial crisis and Great Recession, it emerges that free-market theorists have broken ranks. They have become individualists analytically as well as temperamentally. They now appear every bit as fragmented as the rest of the economics profession.

A recent book (Boom and Bust Banking: The Causes and Cures of the Great Recession, edited and Introduced by David Beckworth; Independent Institute, 2012) collected the views of prominent free-market economists on the Great Recession and financial crisis. The Introduction, by one of the leading exponents of the school, conveys the impression that all of the contributors are on the same page. This is true in only one respect: they all disapprove of actions taken by the Federal Reserve prior to and during the crisis. Beyond that, however, they are almost as diverse in their views as are the New Keynesians. This movement toward ideological and analytical atomism is unprecedented in modern economics.

Every Man Is an Island

Lawrence White is a longtime Austrian economist whose specialty is money and banking. Along with colleague George Selgin, he is a leading proponent of free banking, the advocacy of free competitive banking over the heavily regulated and protected banking that exists now.

White attributes the Great Recession and financial crisis not to “laissez faire or deregulation” but rather to “the interaction of an unanchored government fiat monetary system with a perversely regulated financial system.” The Federal Reserve’s cheap-credit policy “kept interest rates too low for too long” in 2001-2006, creating the “housing boom and bust cycle” of 2001-2007. Real interest rates (that is, nominal rates minus the rate of inflation) were negative from 2002-2005. Nominal spending was artificially high in 1998-2000, leading to a boom that went bust in 2001. This pattern was repeated in 2002-2004, leading eventually to recession in 2007-2009. The second time around, the bubble created by artificially high demand disproportionately surrounded the housing sector. Housing prices shot up during 2001-2006, leveled off, then crashed. The resulting problems were amplified by political and regulatory mistakes that produced bailouts for financial firms and dilution of credit standards for house buyers. Overall, though, the Fed’s actions were the proximate cause of disaster. Thus, the Fed has worsened the chronic currency problems it was created to cure.
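The parenthetical definition of a real interest rate can be made concrete with a small sketch. The 1.0% and 2.5% figures below are invented for illustration; they are not White's numbers:

```python
def real_rate(nominal_rate, inflation_rate):
    """Approximate real interest rate: the nominal rate minus the inflation rate."""
    return nominal_rate - inflation_rate

# Hypothetical magnitudes: a 1.0% nominal Fed funds rate alongside 2.5%
# inflation leaves lenders earning a negative real return.
r = real_rate(1.0, 2.5)
print(r)  # -1.5
```

A negative real rate means borrowers repay in dollars worth less than the interest compensates for, which is what made credit artificially cheap during the episode White describes.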

White believes that we need an alternative set of monetary institutions, with free banking leading the list.

David Beckworth is a New Monetarist. In one of his two contributions, he wonders why Fed policy was too loose for too long. He finds the answer in the Fed’s mishandling of the U.S.’s “productivity boom” from 2002-2004. During those years, U.S. total factor productivity rose by an average annual rate of about 2.5%. This compares with an average rate of 0.9% over the previous 30 years in the U.S. This sounds like good economic news if anything ever did. Yet, incredible as it seems, Federal Reserve policy turned it into bad news.

The Fed was – and still is – dead set on avoiding “deflation” at all costs. The quotation marks reflect uncertainty about just what constitutes the sort of falling overall price level we are, or should be, trying to avoid. In practice, the Fed and other central banks treat the prospect of even a tiny overall fall in prices as a catastrophe of the first order. Supposedly, deflation fatally strains borrowers, who are thereby forced to repay debt with dollars of successively greater value. In any case, a general increase in productivity causes costs to fall and, all else equal, will tend to cause prices to fall, too. Rather than allow this to happen, the Fed increases the money supply to lower interest rates, increase inflation and raise the general level of prices. And that is what happened in 2002-2004 to offset the productivity boom. This tended to create an artificial increase in the general level of demand. The Fed was assuming that the falling price level was the precursor of a decrease in aggregate demand, lower investment, lower real income, less employment and more unemployment. It believed that its actions were needed to prevent a recession. Wrong; the Fed generated an artificial increase in aggregate demand and an increase in inflation instead. Then, later on, the boom turned into a bust.

Beckworth believes that the Fed makes so many mistakes because it lacks a proper source of feedback from markets. He advocates targeting of Nominal Gross Domestic Product (NGDP) by the Fed. NGDP is simply the nominal level of spending in the economy, which Beckworth believes should be held to as constant a level as possible for optimal results.
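NGDP is the price level times real output, so a productivity boom that raises output while prices fall can leave nominal spending unchanged – the outcome an NGDP target would accept but an inflation-avoiding Fed resisted. A sketch with entirely made-up numbers:

```python
# NGDP = P * Y: price level times real output. All figures are hypothetical.
P0, Y0 = 100.0, 1000.0
ngdp0 = P0 * Y0                  # baseline nominal spending: 100,000

# Productivity boom: real output rises 3%. Under an NGDP target, prices are
# simply allowed to fall enough to keep nominal spending on its level path.
Y1 = Y0 * 1.03
P1 = ngdp0 / Y1                  # prices fall roughly 2.9%
ngdp1 = P1 * Y1

print(abs(ngdp1 - ngdp0) < 1e-6)  # True: nominal spending is unchanged
```

On this view, the gently falling price level of 2002-2004 was benign; the Fed's mistake was reading it as incipient demand-deflation and inflating against it.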

Beckworth’s ideas are elaborated further by Scott Sumner, the leading New Monetarist who writes one of the most widely followed economics blogs in the world. Sumner proposes the creation of a NGDP futures market to give the Fed market feedback on the level of NGDP. Sumner finds the cause of the Great Recession and financial crisis in “misdiagnosis by macroeconomists,” not mistakes made by bankers and regulators. Indeed, he exhibits touching faith in the willingness of central bankers to strive for favorable economic outcomes – if only they would see the light! Alas, macroeconomics is ruled by “superstitions, including the view that good economists are those who can predict the business cycle, or asset-market crashes.” Stop relying on policymakers being smarter than markets, Sumner pleads; instead, “restructure macro around market expectations.”

Hedge-fund manager and finance theorist Diego Espinosa believes that the Fed created the housing bubble and ensuing financial crisis by creating an environment that not only allowed, but actively encouraged, traders to pursue a “carry trade” in mortgage securities. Carry trading means borrowing short-term and lending long-term. Leverage is used to juice up the return since the spread earned by the trader is usually small. The mortgage securities used were provided by investment-banking houses rather than commercial banks because investment bankers had the necessary experience and expertise to “securitize” mortgages in packaged form, supervise their rating and distribute them widely. The distribution in tranched form from highest to lowest rated allowed the greatest possible distribution among all risk classes of buyers. After all, with small spreads, there were only two ways to increase returns – ever-greater leverage and ever-greater volume. Volume was further enhanced by diminution of credit standards in every way: lower down payments, higher loan-to-value standards, lower income requirements, lower consumer-credit-rating standards, low or no verification of consumer application statements.
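The leverage arithmetic behind the carry trade can be sketched with a stylized, frictionless example. The rates and leverage ratio below are invented for illustration; real trades face haircuts, defaults and funding shocks that this ignores:

```python
def carry_roe(borrow_rate, lend_rate, leverage):
    """Stylized return on equity for a carry trade.

    Each $1 of equity supports $leverage of long-term assets, funded by
    $(leverage - 1) of short-term borrowing. Frictionless: no haircuts,
    defaults or funding shocks."""
    return lend_rate * leverage - borrow_rate * (leverage - 1)

# Hypothetical numbers: borrow overnight at 1%, hold mortgage paper at 1.5%.
unlevered = carry_roe(0.01, 0.015, 1)   # a thin 1.5% return on equity
levered = carry_roe(0.01, 0.015, 20)    # 20x leverage turns the 0.5% spread into ~11%
```

This is why small spreads drove traders toward ever-greater leverage and volume: the spread itself is tiny, and multiplying it is the only way to make the trade pay.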

This recipe – increasing leverage, increasing volume and wide distribution of indebtedness, abandonment of any semblance of credit standards – was predestined to end in disaster. So why did traders embrace it so fervently? Espinosa confides that the mastermind of this scheme was the Federal Reserve. In 2002, the Fed announced a policy of maintaining a low Fed funds (overnight) rate long after a recession. It promised to raise rates only slowly, in small increments, to give market participants time to unwind positions taken in response to this policy. Thus, traders believed that “the fix was in” – they couldn’t lose the carry-trade game in the usual way, by being caught short when interest rates rose. Instead, they ended up losing in ways they didn’t anticipate. When the crisis arrived, their ability to borrow short was impaired and the securities they owned became illiquid and/or worthless.

Of course, the Fed was motivated by its own political aims. The Fed was bound and determined to prevent deflation in the worst way. That’s exactly what it did – by leading the country into a huge financial crisis and recession. It provoked traders into demanding its short-term funds and buying mortgage securities, thereby achieving both its policy aims and the administration’s political aims simultaneously.

Espinosa knows that the Fed’s actions were utterly unprincipled. But his only policy recommendation is that the Fed should “recognize the limits of its own powers.”

Jeffrey Rogers Hummel is an economic historian who has evolved into a leading monetary theorist. His study of Ben Bernanke completely overturns the mainstream view of Bernanke’s tenure as Federal Reserve Chairman. Bernanke is famous for his homage to Milton Friedman, so much so that he gained the sobriquet “Helicopter Ben” in reference to Friedman’s insistence that the Fed could drop money from helicopters if needed. But Hummel shows that Bernanke actually repudiated Friedman’s legacy by bailing out particular banks while refusing liquidity for the banking system in general. Bernanke epitomized the FDR prototype of a leader: a whirlwind of action, always willing to experiment with other people’s money and welfare and confident that his good intentions would justify any result. He centralized control of the financial system within the Fed, thereby earning Hummel’s title of “central planner in chief.” Bernanke completed the transition of the Fed from the role it played in its first decades, a banker’s bank and custodian of the money supply, to that of financial central planner for the economy and even for the world at large.

Lawrence Kotlikoff is an old-line Chicago economist who has held various academic, research and journalistic posts. His contribution reflects his heritage. He recognizes the disastrous role played by fractional-reserve banking in the economic history of the U.S. and the world. Why, he complains, do we continue to put up with banking panics, recessions and the accompanying dislocations? There is only one way out of this box. We must reform the practice of banking itself.

“The economic moral is simple. If you want markets to function, don’t let critical market-makers… gamble with their businesses. Apply the moral to banks and the regulatory prescription is clear. Don’t let banks take risky positions. Make banks stick to their two critical functions – mediating the payments system and connecting lenders to borrowers.”

Kotlikoff’s solution is limited-purpose banking, a proposal with roots in the old “Chicago Plan” of 1933. By law, banks would be limited to two types of activity: a plain-vanilla banking operation that operates as a mutual fund with an interest-paying checking-account service only; and another, entirely separate, operation that operates a mutual fund offering opportunities in bonds, mortgages, stocks, private equity, real estate and other financial securities. The cash mutual fund would hold 100% reserves and would thus require no taxpayer protections of any kind, including government deposit insurance. In the investment form of banking, banks would initiate but not hold their own loans. A Federal Financial Authority would hold the loans and audit all books. According to Kotlikoff, “never again would a Bernie Madoff be free to custody his own accounts; i.e., to lie about the actual investments being made with investor money.”

George Selgin is one of the modern proponents of free banking. Implicitly, he rebuts Kotlikoff by asking the question: Suppose central banks and banking regulation did not exist; what arrangements would take their place? “Banks would issue banknotes that would be backed by some kind of reserve,” which could be specie or the U.S. monetary base. These banknotes would circulate and clear as checks do today. Interbank clearing and reserve transfers would stem overissue of banknotes by any individual bank. This system would handle the level of money demand by the public. It would provide an automatic system of equating the supply of money to the amount of money demanded. That is to say, it would automatically solve the central problem of monetary theory. In turn, this would automatically stabilize the total level of nominal dollar spending. Presto! At a stroke, the key problems of monetary theory, banking and Macroeconomics are solved.
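The adverse-clearing check on overissue can be sketched as a toy mechanism. The numbers and the function below are entirely hypothetical illustrations of the idea, not anything drawn from Selgin's text:

```python
# Toy sketch of adverse clearing under free banking: notes issued beyond what
# the public wants to hold flow to rival banks, which redeem them for reserves
# at the clearinghouse, draining the overissuing bank's own reserves.
def reserves_after_clearing(reserves, notes_issued, notes_demanded):
    """Reserves left once excess notes clear against the issuing bank."""
    excess = max(0.0, notes_issued - notes_demanded)
    return reserves - excess

prudent = reserves_after_clearing(reserves=50.0, notes_issued=100.0, notes_demanded=100.0)
overissuer = reserves_after_clearing(reserves=50.0, notes_issued=130.0, notes_demanded=100.0)

print(prudent, overissuer)  # 50.0 20.0
```

The prudent bank keeps its reserves; the overissuer loses reserves one-for-one with its excess issue, which is the automatic feedback that equates money supplied to money demanded.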

Selgin then explains at length how the history of central banking reveals the inherently destabilizing nature of that institution. Not only does a central bank such as the Federal Reserve lack any automatic feedback system allowing it to equate the quantity of money demanded and supplied – this is in and of itself a fatal flaw of central banking – but central banks are also inherently compromised by their political connections. Originally, central banks were created to pander to the financial needs of the sovereign. Thus, the needs of the government took precedence over the needs of the public at large. Even today, we see the Fed catering to the financing needs of the government by holding interest rates artificially low for years to allow the federal government to finance its outsize public debt.

A Mass of Contradictions 

A quick perusal uncovers the mass of contradictions among the free-market contributors. Kotlikoff and Selgin are poles apart in their insistence on a rigid, government-controlled approach to banking (Kotlikoff) as opposed to a free-market-feedback approach (Selgin). Sumner and Beckworth are both prominent New Monetarists; both favor the latest Macroeconomic-stabilization-policy gimmick, Nominal Gross Domestic Product stabilization. So they have to be in agreement, right? Wrong. Sumner sees falling prices in the Depression as a sign of “deflation” that the Fed should have corrected with loose monetary policy. But Beckworth regards 2002-2004 as a “productivity boom” for the U.S., not a time of disastrous deflation. Well, the 1920s saw a similar boom in productivity, with similar effects on prices. Presumably, Beckworth would regard them similarly – which puts him squarely at odds with Sumner.

White and Selgin are both determined to put markets in control and depose the Fed. Sumner is equally keen to utilize the principle of market feedback because macroeconomists have disastrously misdiagnosed the ills of the economy. Moreover, he believes passionately that policymakers are not smarter than the market. But he proposes to take his carefully cultivated, pet panacea of NGDP stabilization and put it in the hands of the Fed – the very policymakers and macroeconomists he bad-mouths! And those same people are bossed around by… politicians! That puts Sumner somewhere on the opposite side of the world from White and Selgin.

As for Espinosa and Hummel… well, their analysis may be the most detailed, penetrating and acute of any being offered on the market today. But when it comes to drawing implications from their conclusions, they opt out.

Increasingly, the operative principle of Macroeconomics is becoming “to each his own (theory).”

DRI-180 for week of 2-15-15: The Midnight Ride of the Interest-Rate Alarmists

An Access Advertising EconBrief:

The Midnight Ride of the Interest-Rate Alarmists

In every Middlesex village and farm – and these days, the word “Middlesex” carries a decided double meaning – the alarm is being sounded. Interest rates will rise. The only question is when.

For six years, the question has been “if,” not “when.” At first, interest rates were held down by “stimulus” – the combination of fiscal and monetary policy embodied in the multi-billion (or trillion, depending on how one counts) dollar program enacted in the early days of the Obama administration in 2009. Then, when the “zero lower bound” beckoned, the series of QE (quantitative-easing) expansions of monetary “stimulus” helped enforce a continuing ZIRP (Zero Interest Rate Policy).

Now, we have reached a point at which some middle-school youths have no memory of what a real interest rate looked or felt like. And quite a few adults in financial and policymaking circles have no desire to relive their old memories, either. They have mounted up, a la Paul Revere, to cry “The rate hike is coming! The rate hike is coming!”

In a recent Wall Street Journal op-ed (“Why the Alarms About a Slight Rate Hike?” WSJ, 02/18/2015), author Omid Malekan quotes several of these alarmists. “Charles Evans, president of the Chicago Fed and a voting member of the board that determines rate policy, said last month that raising rates too soon would be a ‘catastrophe.’ Former CEO of General Electric Jack Welch, during a Feb. 4 interview on CNBC, called a possible spring rate hike ‘ludicrous.’ Billionaire investor Warren Buffett told Fox Business Network on the same day that he didn’t think a rate increase this year would be ‘feasible.'”

Malekan’s view of these modern-day midnight riders is droll. “Catastrophe. Ludicrous. Not feasible. Really?” For the previous five decades, Malekan notes, the benchmark overnight Fed funds rate averaged 5.7%, ranging from a high of 19% in the early 1980s down to 1% in the early 2000s. For most of that five-decade reference period – including the Vietnam War, most of the Cold War, the stagflation of the 1960s and 70s and two serious recessions in the 70s and 80s – the 5.7% figure wasn’t far off the mark. Yet “since December 2008 the fed-funds rate has been kept close to zero.”

And what would the Fed’s proposed interest-rate hike, anathematized as unthinkable by its critics, do? It “would take the fed-funds rate from near zero to about 0.25%, and no that isn’t a misplaced decimal point. We aren’t talking about 2.5%, which would still be less than half the 1954-2007 average. We are talking about 0.25%, which would mean the Fed’s monetary policy would be rolled back from full pedal-to-the-metal to a fraction above pedal-to-the-metal. On a historical chart of the fed-funds rate, the proposed hike would barely be visible to the naked eye. Does that sound like inviting catastrophe?”

The fact that Malekan can mine humor from ZIRP and QE is testimony to the human capacity for finding fun in the darkest of circumstances. After all, one of the most popular motion-picture comedies of all time poked fun at nuclear war and ended with the destruction of the planet. A rise of a quarter of a percentage point in interest rates is hardly that apocalyptic, so a little black humor isn’t out of place. But the underlying issues make this no laughing matter.

Hamlet or Waiting for Godot?

For free-market economists, the last six years have been a living nightmare. Like many nightmares, this one has been murky and hard to follow. It has many features of Shakespearean tragedy. The Fed often seems to be playing the role of Hamlet, as when it cannot make up its mind whether or when to raise interest rates. At other times, economic policy takes on the surrealism of a Samuel Beckett play. The QE sequence and the long wait for the return of normality to monetary policy cast the Open Market Committee as the characters from Waiting For Godot – waiting for someone or something they aren’t sure they know or want.

One of the alarmists cited above is actually an Open Market Committee member and Fed policymaker. This just adds to the atmosphere of surrealism surrounding economic policy. But it jibes with the ambivalent reactions that the Fed itself has displayed to its own rate-hike proposal.

The exasperation of Fed watchers is captured in another Wall Street Journal op-ed (“A Muddle of Mixed Messages From the Fed,” WSJ, 02/19/2015) by two members of the Shadow Open Market Committee, Charles W. Calomiris and Peter Ireland. The SOMC is a group of economic and finance professors whose avocation is criticizing the Fed’s monetary-policy actions.

These two men begin by noting that the conventional index of market expectations is the futures market. Interest-rate futures indicate that markets do not believe the Fed will follow through on its stated intention to raise interest rates discretely over the next two years, beginning in mid-2015. Instead, markets expect rates to rise more slowly beginning later this year. Why does this divergence exist? Because the Fed has been giving mixed signals; Fed leaders say one thing (“rates will rise beginning in June”) but hint otherwise (by implying in various forums that both labor markets in particular and the overall economy in general are still shaky). Market participants believe the hints that Janet Yellen and other Fed officials are dropping, not the official policy statements issuing from the agency.

The Fed is legendary for using language reminiscent of the Delphic Oracle as a means of preserving its policy flexibility. While this is politically and bureaucratically useful to the agency, it is economically harmful. If market participants plan for one type of monetary policy and interest-rate environment but later experience a different one, their plans will be adversely affected. The very essence and purpose of interest rates is to coordinate the plans of savers and investors over time, so this confusion cannot be a good thing.

Without saying it in so many words, the two authors also accuse the Fed of reverting to old-line Keynesian habits. This wouldn’t be surprising in view of Chairwoman Janet Yellen’s left-wing Keynesian ideological slant. The hoary Phillips Curve tradeoff between inflation and unemployment has apparently been resuscitated with the Fed’s pathological fear of deflation, insistence on a 2% annual rate of inflation as a positive goal and Ahab-like pursuit of the ever-receding goal of “full employment.” Calomiris and Ireland insist that falling oil prices are not something to be feared and cannot – in and of themselves – cause a deflationary Depression. Only a sudden and severe decline in the money supply can do that. In effect, they are invoking the spirit of Milton Friedman’s famous dictum, “Inflation is always and everywhere a monetary phenomenon” – only in reverse gear.

The Fed’s problem is that Keynesians like Yellen were trained to believe that the interest-rate hike they are now advertising will torpedo an economic expansion – and the existence of a current expansion is the ostensible justification for the interest-rate hike in the first place. As the two authors point out, “even with a hike beginning in midyear, interest rates would remain very low and still well below the inflation rate, implying a negative real interest rate. Prior rate hikes in similar circumstances in 1994 and 2004 did not throw the economy into recession.”

Calomiris and Ireland also resurrect another Friedmanism – his famous reference to the “long and variable lags” with which changes in monetary policy affect the economy. Since 2011, the broad measure of the money supply, M2, has increased at an annual rate of over 6%. The two men see the excess reserves of banks gradually being absorbed into the economy after long sitting idle on deposit at the Fed. This will eventually – sooner rather than later – ratchet the annual rate of inflation toward and above the Fed’s target rate of 2% and completely offset the downward price momentum created by the decline in oil prices. Why, they complain, doesn’t the Fed own up to this?

Thus, the Fed’s case in favor of its announced policy is vastly stronger than the Fed pretends. The Fed is acting as though it doesn’t believe in its own policy.

The Crowning Irony

As if all this weren’t enough to leave any sensible observer groggy, we are forced to acknowledge that the Fed’s critics – fans of interest-rate hikes who are itching to “get back to a normal monetary policy” – suffer from their own blind spot.

Ironically, Calomiris, Ireland and Malekan are so dumbfounded by the Fed’s progressive march away from monetary reality that they haven’t noticed how far into the swamp that march has taken us. Having marched in, we can’t just turn around and march back out again and expect that the exit will be as smooth as the entry.

Calomiris and Ireland cite the interest-rate hikes of 1994 and 2004 as precedent for the one upcoming in June. But the previous increases did not take place in an economy staggering under the public and private debt load we carry today. Malekan cites the quarter-point increase derisively; who’s afraid of a big, bad quarter point, anyhow, he laughs? Hell, we used to live with real interest rates of 5.7% in the old days. So we did, but then the federal-government debt wasn’t $14 trillion, either. We weren’t forced to finance federal-government debt with short-term debt instruments to hold down the rate. If we had to pay even halfway realistic interest rates on our current debt, the federal-government budget would be eaten alive. Suddenly, the U.S. would become Europe – no, it would become Greece, facing a full-blown fiscal crisis that would instantly become a political crisis.

Oh. Well, then – maybe it’s right to be so cautious, after all. Come to think of it, maybe we shouldn’t increase rates at all. Maybe we’re just stuck. You know, life really isn’t so bad. After all, unemployment has declined to the neighborhood of 5%. The economy is growing – slowly, but it’s growing. Let’s just stay where we are, then. Why is the Fed even talking about increasing rates?

From Op-ed Page to Front Page

Let’s jump from the op-ed page of the Wall Street Journal to the front page. The headline for 02/19/2015 reads: “Borrowers Flock to Subprime Loans.” Uh-oh; déjà vu all over again. “Loans to consumers with low credit scores have reached the highest level since the start of the financial crisis, driven by a boom in car lending and a new crop of companies extending credit. Almost four of every 10 loans to autos, credit cards and personal borrowing in the U.S. went to subprime customers in the first 11 months of 2014,” based on data supplied by Equifax.

In other words, the ultra-low interest rates stage-managed by the Fed have paved the way for a new financial crisis. The lead-in to the article didn’t even mention student loans, probably because the category of “subprime” is not meaningful for that type of loan. The auto-loan, credit-card and personal-finance industries are different from real estate. Banks no longer face the same risk exposures as they did in the early years of this millennium. Various elements of this impending crisis differ from the mortgage-finance-dominated crisis that preceded it. To be sure, history does not repeat itself – but it does rhyme, in the words of one sage observer.

It has now penetrated even the thick skulls of Federal Reserve policymakers, though, that asset bubbles are not born spontaneously. They are generated by bad government policies, with interest-rate manipulation prominent among those. It cannot have escaped notice that fixed investment during the six years of ZIRP and QE has fallen to anemic levels. Apparently, it is not so much low interest rates that promote healthy levels of investment as real, genuine interest rates – that is, interest rates that actually reflect and coordinate the desires of savers and investors.

Savers are people who plan savings today and on an ongoing basis to provide for future consumption. Investors are people who plan investments today and on an ongoing basis to provide the future productive capacity that makes future consumption possible. Interest rates coordinate the activities of these two groups of market participants over differing future time periods. This serves to coordinate intertemporal production and consumption in a manner analogous to the way that the prices of goods and services coordinate production and consumption over short-term time periods. (In this connection, “short-term” refers to time periods too short for interest rates to play the major role.)
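A small worked example may make this coordinative role concrete. The sketch below is hypothetical (the project payoff, cost and rates are invented for illustration): the market interest rate tells an investor, via present-value discounting, whether a future payoff justifies tying up savers’ resources today.

```python
# A minimal, hypothetical sketch of how an interest rate coordinates
# plans over time. All figures are invented for illustration.

def present_value(future_amount, rate, years):
    """Discount a future sum back to today at a given annual interest rate."""
    return future_amount / (1 + rate) ** years

payoff = 1210.0   # what the project pays out two years from now
cost = 1000.0     # the resources it absorbs today

# Compare the same project at two different interest rates. A low rate
# makes the future payoff look valuable today; a high rate (reflecting
# savers' stronger preference for present goods) makes it look poor.
for rate in (0.02, 0.10):
    pv = present_value(payoff, rate, 2)
    verdict = "undertake" if pv > cost else "reject"
    print(f"rate {rate:.0%}: PV = {pv:.2f} -> {verdict}")
```

The same project passes muster at a 2% rate and fails at 10%; if the 2% rate is an artifact of policy rather than of saver preferences, the investment signal is false, which is the point the surrounding paragraphs develop.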

When the interest rates prevailing in the market are not real interest rates but artificial rates controlled by a central authority, rates are not performing their vital coordinative function. And that means that future investments fail because investors were responding to a false market signal, one that told them that savers wanted more goods and services in the future than they actually did. Having been burned very badly by this process just a few years ago, investors evidently aren’t about to be suckered again. They’re sitting things out, waiting for markets to normalize so they can invest in a market environment that works instead of one that fails. (The exceptions are situations in which “the fix is in” – when investors can get subsidies from government or are sure they will be bailed out in case of failure.)

If this comes as a surprise, it shouldn’t. Over a 70-year period, the Soviet Union tried to live without functioning capital markets. Any mention of interest rates was verboten in Communist circles, but after a while the need for intertemporal coordination in production was so acute that Soviet planners had to invent the concept of an interest rate. But they couldn’t call their invention an interest rate without risking execution, so they called it an “efficiency index.” Alas, merely calling it that did not actually give it the coordinative properties possessed by genuine market interest rates, and the Soviet economy collapsed under the weight of its failures in the late 1980s. Similarly, the Chinese Communist economy got nowhere until, in desperation, Deng Xiaoping liberated market forces sufficiently to allow flexible prices and interest rates to prevail in an independent, competitive sector of the Chinese economy. And it was this sector that thrived and promoted Chinese economic growth, while the official, government-controlled sector stagnated.

More and more, respected commentators and observers across the spectrum are speaking out about the untenable status quo into which the Fed has forced us. The speech usually takes the form of grumbling about the need for return to a “more normal” policy. Of course, the problem is that any sort of normal policy is now impossible given the box we are in, but the point is that recognition of the harm caused by ZIRP and QE is becoming general.

So the Fed can’t just sit tight either, much as it would like to. The pressure to change the status quo has built up and is growing by the day. If the Fed continues to stall, it will be obvious to all and sundry that its so-called political independence is a fiction and that its policy is aimed at saving the government’s skin by preserving deficit finance and stalling off fiscal reform.

Actually, the proper metaphor for our current dilemma is probably that of a man riding a tiger. Once the man is atop the tiger, he faces a pair of impossible, or at least wildly unattractive, options. If he gets off, the tiger will kill and eat him. But if he stays on, he will be scratched, clawed and whipsawed to death eventually. Really, the question he must be asking himself as he tries desperately to hang on is: How in the world did I ever get myself in this position?

That question is purely academic to the man on the tiger but vitally important to us as we contemplate the Fed’s dilemma. How in the world did the Fed ever get itself in this no-win situation? What made it seem attractive for the Fed to follow a policy that now seems disastrous? Alternatively, what made it seem necessary?

The Keynesian Link With ZIRP: Keynes’ Embrace of Marx

Close students of John Maynard Keynes know that Keynesian economic theory was mostly the work of Keynes’ followers. Students like Nicholas Kaldor, Piero Sraffa, Joan Robinson, Richard Kahn and John Hicks made numerous contributions to the theory that eventually dominated macroeconomics textbooks for some four decades and still survives today in skeletal form.

Nobel laureate Paul Samuelson once observed that Keynes’ General Theory was a work of genius in spite of its poor organization, confusing theoretical structure and intermittent moments of inspiration. Even more pertinent to our present predicament is that the second half of The General Theory leaves economics behind and takes up the cause of social policy.

Keynes faulted capitalism for its preoccupation with what he called the “fetish of liquidity.” It was the capitalist’s insistence on liquidity that underlay the speculative demand for money, which created idle balances that thwarted the expenditure of money necessary to purchase the short-term full-employment level of output. The payment of interest similarly thwarted the level of investment requisite for long-term full-employment. Capitalism would have to be supplanted with a kind of quasi-socialism in order for the market order to be preserved.

The linchpin of this new, stable market order would be a government-directed investment policy specifically intent on driving the rate of interest to zero by injecting fiat money as necessary. Only then would long-term investment be maximized, because the marginal efficiency of investment would be zero. (Another way of characterizing this outcome would be to say that all possible benefit would be squeezed out of investment.) Reading this second section of the General Theory makes it clear that Keynes was the original impetus behind ZIRP.

Keynes’ antipathy towards capitalism and the charging of interest brought him into general sympathy with Marx. Although they reached their respective conclusions by different routes, they both fervently sought the negation of capital markets and the castration of capitalism. Keynes felt he was preserving the institution of private property while Marx sought to destroy it, but in practice Keynesianism and Marxism have had similar effects on free markets and private property.

Should we be surprised, then, that Keynesians in Japan and the U.S. unveiled ZIRP to the world? Certainly not. ZIRP was the deep-seated secret desire of their hearts, the long-denied, long-awaited desideratum for which the financial crisis finally provided the pretext.

Reconsidering the Financial Rescue

Malekan no doubt echoed the views of most when he blandly observed that “although at the time few could argue with the need for such extraordinary Fed action,” things were different now: ZIRP and QE had outlived their usefulness and were no longer needed. But our full analysis suggests something quite different. If the Fed’s actions got us into a box from which there is no escape, then the only answer to the dilemma we face today is: Don’t get ourselves into this situation in the first place.

That means that we shouldn’t have ratcheted up federal-government debt with the Obama stimulus – or, for that matter, the Bush stimulus that preceded it. That conclusion will not resonate with most observers, given the overwhelming consensus that we had to do something to prevent the recurrence of a 1930s-style Depression and that massive government stimulus was the only thing to do. But we certainly aren’t forced to take that consensus verdict at face value now, six years after the fact. Six years ago, we felt under time pressure to do something fast, before it was too late. Now we have the luxury of retrospective review.

Neither stimulus lifted the U.S. economy out of recession. The Obama stimulus had hardly been spent when the U.S. economy officially emerged from recession in June, 2009. The unemployment rate declined with painful slowness in the six years after the stimulus, notwithstanding that academic students of economics are taught that the only theoretical rationale for preferring stimulative policies is that they act faster than waiting for markets to eliminate unemployment on their own. There is compelling evidence that the decline in unemployment resulted mostly from long-term departures from the labor force and elimination of unemployment-benefit extensions rather than from job creation. Malekan remarks that “the fact that there is a debate about a quarter-point rate hike tells us that extraordinarily low interest rates have mostly failed to deliver a robust recovery. That people opposed to even the tiniest increase in rates are resorting to hyperbole tells us that they too know this.” And what did we get for what Malekan calls “modest benefits,” which we can now see were really almost no benefits at all, but a flock of trouble? We are riding a tiger with no way out of the fix that confronts us.

Although the reflex action of critics and commentators was to blame the financial crisis and the Great Recession on the usual suspects – greedy capitalists, Wall Street and deregulation – the passage of time has produced numerous studies decisively refuting this emotive response. The roster of government failures at the local, state and federal level was so lengthy that no single study has comprehensively included them all. That lengthy list is the only bit of evidence implying that things could have been worse than they actually were. Everything else – a priori logic and the long history of recessions since the founding of the republic – leads us to think that if left alone to recover, the U.S. economy would be vastly better off now than it actually is.

James Grant has recently written at book length about the severe U.S. recession of 1920-1921, which lasted no more than eighteen months despite no countercyclical government action at all. This is a template for government (in-) action in the face of impending recession. We have tried every form of preventive, stimulative and recuperative remedy the mind of man can devise and they have all failed. Maybe, if we’re lucky, we will someday have the chance to try the free-market cure.

DRI-178 for week of 2-8-15: A Closer Look at Prices

An Access Advertising EconBrief:

A Closer Look at Prices

There is no more important tool in the economist’s kit than the price of a good or service. Microeconomics was formerly called “price theory.” That conveys the correct impression that the theories of household, firm and input behavior are best characterized as processes of price formation.

Basic economics texts painstakingly develop the fundamentals of market pricing. This is necessary; we must crawl before walking, walk before running and run in order to stay in shape. But if all economists did was endlessly draw simple supply and demand curves and point meaningfully to the intersection of two lines, they would spend all their working hours in the classroom teaching undergraduate students. In order to avoid this ignominious fate, economists have had to grapple with the multifarious pricing schemes, strategies and tactics encountered in actual practice.

If market price equates the ex-ante quantity demanded and quantity supplied of a good, how do we explain the existence of sales? In particular, what about the perennial favorite, the after-Christmas sale? What are we to make of the market for coupons, which has been estimated at approximately $1 billion in face value in the U.S.? How should we evaluate the phenomenon of the manufacturer’s rebate, a practice that has proven almost as hard for economists to accept as it is for consumers to handle?

For many years, economists ruminated over these matters in the isolation of scholarly journals. Over the last decade or so, their musings have been publicized in books on popular economics. The idea is to make economic logic not merely accessible, but downright useful, to the masses.

How are we doing? You be the judge.

The After-Christmas Sale 

No class of beginning economics students would be at a loss to explain the origin and purpose of the after-Christmas sale. Everybody knows that retail stores make their bones at Christmas time. They place orders with sugarplum visions of Christmas sales dancing in their heads. Then comes the dawn on December 26th, and all those after-Christmas inventories have to be disposed of. Ideally, this should be achieved before yearend, to keep carryover inventories as low as possible for tax purposes. So, to make a virtue of necessity, the after-Christmas sale is born.

As with many a popular explanation for familiar economic phenomena, the beauty of this one is only epidermal. Sure, anybody can make a mistake – even a retail buyer. But the same mistake? Year after year after year? Buying for the most crucial segment of the store’s calendar year, on which its annual profitability depends? Should we suppose, then, that the professional life expectancy of a retail buyer is exactly one year – hired every January and fired every December 31 after annually misjudging the Christmas demand for the store’s product(s)?

No, this story does not withstand close scrutiny. Something else is at work here. What is it?

Economists believe two factors account for after-Christmas sales. Richard McKenzie identifies the first in his survey of pricing, Why Popcorn Costs So Much at the Movies: And Other Pricing Puzzles. To an economist, the most salient feature of the after-Christmas sale is an identical good, sold at two radically different prices in remarkably close temporal succession. One day, good X sells for price Y. The very next day, the same good X sells for Y less a discount of anywhere from 15% to 75%.

When economists encounter simultaneous price differentials for sales of the same good, they label the practice price discrimination. Economists use familiar words in their own inimitable way, and in this case the word “discrimination” need not be pejorative. (Technically, the practice carries antitrust penalties if sellers use it as a device to impede competition.) Sellers practice price discrimination to exploit differential price-sensitivities among their customers – by charging a higher price to less price-sensitive buyers and a lower price to more price-sensitive buyers, the seller can earn more total revenue than by charging a single price to all buyers.

Why does this tactic increase total revenues? The common sense of it is this: For price-insensitive buyers, the extra revenue earned from a higher price outweighs the revenue lost to the slight fall in purchases, while for price-sensitive buyers, the extra revenue from the increase in sales caused by a lower price outweighs the revenue given up on each unit sold.

If this sounds almost too good to be true, we should hasten to add that this condition does not exist for all goods and services and sellers may not be able to exploit it even if it does. But the possibility is enticing.
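The arithmetic behind this claim is easy to check. Below is a minimal sketch in Python using invented linear demand curves for the two buyer groups; the numbers are illustrative assumptions, not figures from McKenzie’s book.

```python
# Hypothetical illustration of price discrimination raising total revenue.
# Two buyer groups with made-up linear demand curves:
#   price-insensitive group: quantity = 100 - 2 * price
#   price-sensitive group:   quantity = 100 - 8 * price

def revenue(price, slope, intercept=100):
    """Revenue = price * quantity demanded, with quantity floored at zero."""
    quantity = max(intercept - slope * price, 0)
    return price * quantity

# Best single price charged to everyone, found by grid search
prices = [p / 10 for p in range(1, 500)]
single = max(prices, key=lambda p: revenue(p, 2) + revenue(p, 8))
single_rev = revenue(single, 2) + revenue(single, 8)

# Separate revenue-maximizing price for each group (price discrimination)
p_insensitive = max(prices, key=lambda p: revenue(p, 2))
p_sensitive = max(prices, key=lambda p: revenue(p, 8))
discrim_rev = revenue(p_insensitive, 2) + revenue(p_sensitive, 8)

print(f"single price {single}: revenue {single_rev:.0f}")
print(f"two prices {p_insensitive}/{p_sensitive}: revenue {discrim_rev:.0f}")
assert discrim_rev > single_rev  # discrimination beats the single price
```

With these assumed demands, the best single price effectively abandons the price-sensitive group, while charging each group its own price captures revenue from both, which is exactly the logic of the after-Christmas sale.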

In our after-Christmas sale, the general idea of price discrimination applies but the context is atypical. Instead of charging different prices to different buyers at the same point in time, sellers are charging different prices to the same buyers at different points in time. But the motivation is exactly the same – to earn more total revenue than would be earned by keeping price the same throughout. The “different points in time” are these: before Christmas and the day after Christmas (continuing for the duration of the sale, perhaps to Dec. 31).

Why are the different prices charged? That’s the easy part – most people are much less price-sensitive during the Christmas season and much more price-sensitive as soon as Christmas is over. And unlike many contexts, in which it may take keen analysis to distinguish the price-sensitive buyers from the price-insensitive ones, there is no buyer-identification problem to plague sellers. Sellers just change price tags at midnight on Christmas or, more conveniently, at closing time on Christmas Eve.

Viewed in this light, it is obvious that after-Christmas sales are not mistakes. Obviously, sellers want to have enough inventories on hand to take advantage of high-priced Christmas demand, but they do not expect to sell out or even come close. They have factored in the price-sensitive demand after Christmas. Indeed, they look upon the existence of the Christmas holiday as built-in market segmentation. The inherent problems facing any potential exercise of price discrimination are two: first, effectively dividing the market into price-sensitive and price-insensitive buyers; and second, preventing resales of the good or service by the first group to the second. The Christmas holiday solves both problems automatically with temporal separation. People are automatically willing to pay higher prices before Christmas and automatically unwilling to pay more afterwards. And low-price buyers cannot resell to high-price buyers because the high-price buyers already made their purchases first. And, as an extra added attraction, the low price after Christmas will lure new buyers to the product who would never try the product under a single-price policy.

When we turn our attention to other types of sale, we find that price discrimination still plays a big explanatory role. Future EconBriefs will tackle some of these advanced cases. It is certainly not impossible that sellers can occasionally over-order stock and may need to reduce inventories; a sale may well be the expedient means to recover from this error. But we should be wary of imputing systematic errors to experienced market participants. Systematic errors result in insolvency – in which case, the erring seller isn’t around to repeat the mistake.

Coupons
Discount coupons have been around for well over a century. Your parents and grandparents have seen them throughout their lives. Today they can be found in newspapers, magazines, circulars and online. Has it ever occurred to you to wonder why they exist? After all, coupons are costly to produce and distribute – “costly,” that is, in the economic sense that the resources necessary to make and provide them have alternative uses.

The total cost to manufacturers has been estimated (for example, by McKenzie) at around $1 billion for the approximately 153 billion coupons distributed to Americans. (These figures are roughly a decade old and may be somewhat lower today.) If the purpose of coupons is simply and solely to discount the price of the product, might it be both simpler and cheaper to just lower the nominal market price rather than issue coupons?

No, not necessarily. Various arguments bolster the use of coupons. Some of them are transparent. Others are much less clear but just as compelling.

The simplest is the notification effect. Consumers cannot buy a product if they are ignorant of its existence. A coupon is a simple and relatively cheap way of announcing a new product and making it cheap for consumers to try it.

Coupons that reward repeat purchases strive to create brand loyalty. For years, some economists have distrusted markets, doubted the ability of consumers to make rational choices and celebrated the effectiveness of government intervention in markets. They have decried efforts to promote brand loyalty as wasteful, inefficient and downright evil. But it is difficult to see why a repeat purchase should be any more inimical to a consumer than an initial one. Apart from powerful narcotic goods, goods or services do not possess addictive qualities. Once the terms of the coupon have been fulfilled, the consumer is back at square one – but with the added knowledge gleaned from multiple trials of a new product. How bad can this be? It may well be a good thing indeed if the consumer’s brand loyalty reflects a genuine preference – and who is the economist to doubt that?

Still, the most-often cited rationale for coupon issuance is price discrimination. As noted above, sellers want and need to identify and separate the price-sensitive from the price-insensitive among their customers. Coupon distribution is a tried-and-true means to that end.

The term found in economics textbooks to characterize the degree of price sensitivity among buyers is price elasticity of demand. (The rule of thumb among economists is to shun common, ordinary, easily grasped words and phrases in favor of unusual, esoteric, obscure terms – preferably in a foreign language.) Factors conducive to high price elasticity are the existence of copious substitutes for the good, low real incomes and a price that comprises a high fraction of the buyer’s real income.
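The textbook measure can be made concrete with a small calculation. The sketch below uses the arc (midpoint) formula for price elasticity of demand; the prices and quantities are invented purely for illustration and are not taken from the article.

```python
def price_elasticity(q1, q2, p1, p2):
    """Arc (midpoint) price elasticity of demand: the percentage change
    in quantity demanded divided by the percentage change in price,
    each measured against the midpoint of the two values."""
    pct_change_quantity = (q2 - q1) / ((q1 + q2) / 2)
    pct_change_price = (p2 - p1) / ((p1 + p2) / 2)
    return pct_change_quantity / pct_change_price

# Hypothetical example: price falls from $5.00 to $4.00 and weekly
# quantity demanded rises from 100 units to 130 units.
e = price_elasticity(100, 130, 5.00, 4.00)
print(round(e, 2))  # -1.17; |e| > 1 means demand is price-elastic here
```

A magnitude greater than 1 marks the price-sensitive buyers sellers want to identify; a magnitude below 1 marks the relatively price-insensitive.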

The collection and use of coupons take up the buyer’s time. It takes time to collect the coupons – time to hunt them up in their various sources, time to clip and save them, time to gather them again before shopping and dig them up when paying. This time is economically significant. Time has alternative uses. The time of low-income people has a lower alternative-use value than that of higher-income people. And the value gained from the coupon comprises a larger fraction of the low-income person’s real income than it does of the higher-income person’s income, so low-income people stand to gain more from coupon use. For these reasons, we expect low-income people to be more price-sensitive and also to be more avid users of coupons. Thus, coupon distribution is a relatively cheap and effective way for sellers to segment their market into price-sensitive and (relatively) price-insensitive buyers. Thanks to the coupons, the price-sensitive buyers arrange to pay lower prices and the price-insensitive buyers pay the nominal (higher) price.

Since the costs of coupon production and distribution are low on a per-unit basis, sellers gain net revenue from coupon distribution even when the costs of the coupons are factored into the accounting.
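The per-unit cost claim follows directly from the figures cited earlier. A back-of-the-envelope check, using McKenzie's roughly decade-old estimates of $1 billion spent on about 153 billion coupons:

```python
# Back-of-the-envelope: implied cost per coupon from the cited estimates.
total_cost = 1e9               # ~$1 billion in production/distribution cost
coupons_distributed = 153e9    # ~153 billion coupons per year

cost_per_coupon = total_cost / coupons_distributed
print(round(cost_per_coupon, 4))  # 0.0065 -- about two-thirds of a cent each
```

At well under a cent per coupon, even a modest gain in revenue per distributed coupon leaves the seller ahead.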

Economist Steven Landsburg, another noted exponent of popular economics in works such as The Armchair Economist and More Sex Is Safer Sex, has made another, related argument for coupons. This is the peak-load pricing argument.

Some businesses face a demand for their product(s) that varies dramatically according to time of day or season. If demand at the highest (peak) time exceeds or strains their capacity, it is in their interest to shift some of this peak demand to off-peak times. Electric utilities are one famous example of this phenomenon. At one point, movie theaters were another case, and this gave rise to twilight-hour movie pricing. These days, though, it’s a rare movie indeed that strains the capacity of a movie theater. Popular bars started the practice of “Happy Hour” to shift some of the late evening clientele to the early evening. Uber has seized upon the idea of charging higher prices during rush hours, something taxicabs should have done years ago but didn’t.

Landsburg noticed that coupon-clippers are disproportionately retirees, who tend to be low-income, price-sensitive people. They can shop in grocery stores in off-peak times such as mid-morning and mid-afternoon, while full-time workers must shop on their way to or from work. The working population has higher incomes and tends to be less price-sensitive on balance. Obviously, the opportunity cost of taking off work makes grocery shopping during the off-peak disproportionately expensive for working people. But the coupons do make it more attractive for them to shop in the one off-peak time that is convenient for them – namely, the after-dinner hours. So, by distributing coupons, grocery stores can shift some of their demand from the morning and (particularly) evening rush-hour peaks to the off-peak, thereby lessening their staffing problems, increasing their total revenue and improving their competitive position relative to convenience stores.

Manufacturer’s Rebates

The $1 billion in annual coupon distributions is dwarfed by the estimated $6 billion in the value of manufacturer’s rebates offered annually. McKenzie estimates that “a third of all personal computers and their peripherals, and a fifth of all digital cameras, camcorders, and LCD TVs are sold with rebate offers.” He cites previous research suggesting that the total number of rebate offers approaches 400 million annually.

As most people already know, a rebate offers the return of money expended for a good or service in return for showing proof of purchase. That showing generally demands a fair amount of the buyer’s time and trouble – mailing in a “proof of purchase” (such as a receipt), perhaps accompanied by one or more completed documents, within a specified time period (called the “redemption period”). The redemption period may vary from a week to a year, but once it is exceeded the customer’s rebate privileges are lost.

Given the prominence of rebates on the retail landscape, it is not surprising that they have evolved a unique vernacular. The term lift is defined as the increase in sales stimulated by a given manufacturer’s rebate. Breakage is the percentage of customers who do not seek, or fail to obtain, a rebate during the redemption period. Slippage is the percentage of customers who obtain the rebate but then fail to cash (!) their rebate checks.
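The three terms just defined are easiest to grasp with numbers attached. The sketch below works through a hypothetical rebate campaign; every figure is invented for illustration, not drawn from McKenzie or any real promotion.

```python
# Hypothetical rebate campaign illustrating the vernacular: lift,
# breakage and slippage. All figures are invented for illustration.
units_sold_with_rebate = 50_000   # units sold during the promotion
baseline_units = 38_000           # estimated sales without the rebate
redemptions_submitted = 20_000    # buyers who mailed in proofs of purchase
checks_cashed = 17_000            # redeemers who actually cashed their checks
rebate_value = 30.00              # dollars returned per redeemed rebate

# Lift: the increase in sales attributable to the rebate offer.
lift = units_sold_with_rebate - baseline_units

# Breakage: the share of buyers who never obtain the rebate.
breakage = 1 - redemptions_submitted / units_sold_with_rebate

# Slippage: the share of redeemers who never cash their checks.
slippage = 1 - checks_cashed / redemptions_submitted

# The manufacturer's actual payout reflects only cashed checks.
payout = checks_cashed * rebate_value

print(lift, breakage, slippage, payout)
# 12000 extra units; 60% breakage; 15% slippage; $510,000.00 paid out
```

Note how breakage and slippage work in the seller's favor: the rebate generates 12,000 extra sales while the seller pays out on only 17,000 of the 50,000 units sold.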

The existence of this vernacular implies that there is many a slip between the rebate cup and the consumer’s lip. According to self-styled consumer advocates, the slips constitute “rebate abuse.” Sellers may deliberately – how could it be inadvertent? – specify short redemption periods (say, one week), discontinuous with purchase (perhaps starting three weeks after purchase). The purpose behind such tactics is clear – to frustrate attempts at redemption.

Naturally, this will not endear the seller or the product to consumers. Perhaps, though, the company is on the ropes and facing insolvency; its managers are staging a desperate “hail Mary” promotion to raise cash. The company is willing to risk offending rebate redeemers when facing commercial oblivion. Since the alternative is simply to sell the product for its nominal price, consumers cannot be deemed worse off for being offered a rebate, however strict the terms – unless we consider the rising blood pressure and indignation they suffer upon reading the fine print in the rebate terms as part of their cost.

Natural curiosity leads to the question: What is the average rate of redemption, anyway? Economists believe the average rate may be as high as 40-50%. The term “average” (or mean, as statisticians call it) is a measure of central tendency. It is useful by itself, but vastly more useful if contemplated alongside the amount of variance around that central tendency. In this case, the variance is huge.

Some rebate offers produce redemption rates at or nearing 100%. McKenzie cites the case of firms producing digital scanners whose rebate offers were universally redeemed – after which the companies went out of business! Before writing this off as colossal misjudgment, we should ponder the not unlikely possibility that the companies were seeking a last-ditch lift from rebates to avoid insolvency, hoping that breakage would be their salvation. In the digital age, fierce competition and low prices have been the handmaidens of technological innovation.

At the other extreme, sometimes the redemption rate is virtually nil. McKenzie quotes one retailer whose comments are quite revealing, if not perceptive: “Manufacturers love rebates because redemption rates are close to none… they get people into stores, but when it comes time to collect, few people follow through. And this is just what the manufacturer has in mind.” As McKenzie notes, this is wrong not just in practice but also in theory. If rebates were really a guaranteed way to increase sales, everybody would use them. The truth is much more complicated, hence more interesting.

By now, readers can begin to appreciate the basic strategy behind rebates. Manufactured products such as computers and printers carry price tags large enough to stimulate price sensitivity among many consumers. It is in the interest of sellers to sort out the price-sensitive from the price-insensitive buyers and charge differential prices. This is not easy; sellers must segment their market and prevent resales by low-price buyers to high-price buyers.

Manufacturer’s rebates perform market segmentation for the same reasons that coupons do. They are attractive to price-sensitive lower-income buyers whose time has a lower value and who therefore are more willing to take the time to comply with rebate terms. Thus, this is the market segment that actually gets the lower price; that is, the market price discounted by the amount of the rebate.
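The revenue logic of this segmentation can be shown with a toy model. The sketch below assumes just two buyer groups and invented figures: price-sensitive buyers who purchase only at the rebated price and redeem, and price-insensitive buyers who would pay full price and do not redeem. (It abstracts from breakage, slippage and administrative costs.)

```python
# Toy price-discrimination model, with invented figures: why a rebate can
# beat a single uniform price. Two buyer groups are assumed.
nominal_price = 200.0
rebate = 50.0
sensitive_buyers = 1_000    # buy only at the effective (rebated) price
insensitive_buyers = 800    # willing to pay the full nominal price

# Option 1: one uniform price low enough to attract both groups.
uniform_revenue = (nominal_price - rebate) * (sensitive_buyers + insensitive_buyers)

# Option 2: post the nominal price, offer a rebate. Sensitive buyers
# redeem and effectively pay less; insensitive buyers pay full price.
segmented_revenue = ((nominal_price - rebate) * sensitive_buyers
                     + nominal_price * insensitive_buyers)

print(uniform_revenue, segmented_revenue)
# 270000.0 vs 310000.0 -- segmentation gains $40,000 in this example
```

The gain comes entirely from no longer giving the discount to buyers who never needed it, which is the whole point of using a costly redemption process rather than a simple price cut.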

McKenzie also points out that rebates affect – and are affected by – the reputation of the company offering them. This can give them the quality of a “self-enforcing contract.” (The term was first used by University of Chicago economist Lester Telser.) He uses Dell Computers as a case in point. When Dell offers a rebate, consumers take it seriously. They know Dell will follow through on terms because any slip-ups would harm its reputation – something Dell can ill afford. In turn, that makes Dell’s rebate promotions that much more effective in terms of their lift. So even though Dell’s breakage will be minimal, its lift will be maximal – giving it a solid, consistent, dependable return on its rebate program.

Manufacturer’s rebates may be the most controversial of all pricing policies because their terms offer such scope for variation and their results are so variable. Any detrimental effect on consumers resulting from a manufacturer’s rebate cannot help but be small, not to say minuscule. Related complaints are purely emotional – which makes them an ideal topic for the political left wing, which is bereft of intellectual content and must rely entirely on emotive appeals.

Prices and Information

In his Preface, McKenzie quotes the famous passage from F. A. Hayek’s “Economics and Knowledge,” in which the late Nobel laureate describes the value of prices as collectors and transmitters of information. The various pricing practices of sellers put this feature on display. We cannot contemplate any central authority possessing or acquiring the quantity or quality of information that is routinely exchanged by the price system. Sellers have the strongest possible incentive to benefit consumers, while a central authority’s only institutional incentives are political. The more one learns about market pricing, the stronger the case for it becomes.

A future EconBrief will explore the links between market pricing and the evolutionary development of the human brain.