DRI-233 for week of 12-9-12: Subsidy Nation

An Access Advertising EconBrief:

Subsidy Nation

Recently, several think tanks such as the American Enterprise Institute (AEI) have quantified the degree of Americans’ dependence on government. Federal-government transfer payments have increased from less than $100 billion in 1960 to well over $2 trillion today. Even in real terms, adjusted for inflation, transfer payments per capita have increased sevenfold. In 1983, around 30% of U.S. households contained at least one member receiving a subsidy check from the federal government. Today, the number is close to 50%. The fraction of individuals between ages 18 and 64 receiving Social Security disability payments in 1960 was about 0.04%. By 2010, the percentage on disability had risen to about 4.6%. (This coincides with the time period in which the government agencies charged with policing workplace safety were created.)
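For readers who want to see the arithmetic behind a “real, per-capita” comparison of this kind, here is a minimal Python sketch. Every input below – the price deflator, the populations and the dollar totals – is an illustrative assumption chosen to be consistent with the ranges cited above; these are not the think tanks’ actual data.

NOMINAL_1960 = 25e9        # assumed 1960 total, well under the $100 billion ceiling cited above
NOMINAL_TODAY = 2.2e12     # assumed current total ("well over $2 trillion")
CPI_1960, CPI_TODAY = 29.6, 229.6    # assumed CPI-U levels (1982-84 = 100)
POP_1960, POP_TODAY = 180e6, 312e6   # assumed U.S. population

real_pc_1960 = NOMINAL_1960 / (CPI_1960 / 100) / POP_1960
real_pc_today = NOMINAL_TODAY / (CPI_TODAY / 100) / POP_TODAY

print(f"real per-capita transfers grew roughly {real_pc_today / real_pc_1960:.1f}-fold")

With these placeholder inputs the script prints a growth factor of about 6.5 – the same order of magnitude as the sevenfold increase cited above.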

A book by AEI’s Nicholas Eberstadt summarizing our dependence on government is entitled A Nation of Takers. That conjures up the picture of a government-run gravy train on which an army of citizens queues up to hitch a ride, like hobos gathering just outside town at dusk. Transform that picture into a painting and its title would be: Subsidy Nation.

The Roots of Subsidy Nation

The roots of Subsidy Nation were planted in the 19th century with Henry Clay’s American System. Clay built a political coalition that offered American businesses protection from foreign competition in the form of tariffs – taxes levied on imported goods. The owners and employees of import-competing domestic businesses gained from these tariffs. Everybody else lost, even allowing for the fact that the taxes were the primary basis for federal-government revenue prior to 1913. The American System died with Clay, but the tariffs remained. Their height waxed with the Fordney-McCumber and Smoot-Hawley bills of 1922 and 1930, respectively, and waned with subsequent multilateral bureaucratic efforts to free up international trade through the General Agreement on Tariffs and Trade (GATT). But when the chips were down – e.g., when re-election was at stake – politicians could be relied upon to sacrifice the general interest of consumers and import-dependent producers to the special interests of import-competing producers.

American businessmen learned a valuable lesson from U.S. commercial policy. An ounce of protection purchased from government is worth a pound of competitive zeal. Subsidy Nation was born.

Over the course of American history, farmers have made the U.S. the world’s breadbasket. Along the way, they have had to overcome the twin handicaps of price-inelastic and income-inelastic demand for most of their products. That is, when the prices of food and fiber decline or real incomes rise, people do not increase their purchases proportionally. After all, you can only eat so much or wear so many clothes. And falling prices were the norm throughout the 19th and 20th centuries, thanks to continuous improvements in technology and productivity and the resulting increases in agricultural supply.

But American farmers are tough. They know that when the going gets tough, the tough get going – to the government for handouts. Activist farmers carefully picked out a year when agricultural prices were high, then enshrined that year’s prices as their desideratum. They demanded “parity” – farm prices commensurate with those in the good old days. Eventually, after a few decades of hectoring by populist legislators, the New Deal acquiesced with agricultural subsidies.

Federal-government farm-subsidy programs have used techniques like price supports, which prop prices up artificially using government purchases, creating huge surpluses and wasteful storage costs – not to mention the waste of resources in producing more agricultural goods than are desired in the first place. Target-price programs at least got rid of the surpluses by allowing market prices to fall until the surpluses were disposed of, while paying farmers a direct subsidy to make up the difference. But farmers were embarrassed at getting welfare checks instead of price-support checks, so target prices got shot down and taxpayers got stuck with the storage costs after all. Acreage allotments paid farmers not to farm part of their land. This produced a crop of undesirable consequences, ranging from overfarming and overfertilization of the smaller allotments to monopoly rents accruing to owners of the subsidized acreage.

The joke turned out to be on farmers in the end. Technology gradually turned agriculture into big business by requiring sizable capital investment for machinery and expertise (both scientific and financial). Large corporations are designed for the express purpose of raising large amounts of capital. Not surprisingly, agribusiness took over most of agriculture. Subsidy programs rewarded farmers according to output; therefore, most of the farm subsidies went to owners of the large corporations. The political left wing, which had called the loudest for government involvement to protect the small “family farmer,” now screamed bloody murder when the beneficiaries turned out to be Archer Daniels Midland instead of Okies from the Dust Bowl.

Subsidy Nation was up and running.

Social Insecurity

The concept of “social insurance” dates back at least to the 18th century, but it found concrete expression in late-19th-century Germany. Chancellor Otto von Bismarck had his hands full coping with the socialist movement that had swept over Europe beginning in 1848. To consolidate his power by appeasing the opposition to his left, he agreed to a system of old-age, sickness and accident benefits, funded and administered by the state. By the 1930s, proponents of Social Security had conveniently forgotten its origins and remembered only its intentions; namely, to end the poverty and neglect suffered by the elderly.

The Roosevelt Administration sold its proposal to the American public by trading heavily on the word “insurance.” Social Security would take in “contributions” from citizens and “invest” them in “special trust funds” where they would be “pooled” for future need. In other words, it would operate much like private insurance, except it would have no need to earn profits and would consequently behave in a reasonable, compassionate manner toward its “beneficiaries.”

This always was, and remains, pure malarkey. Like all so-called “social insurance” systems of its day and until the last two decades, the U.S. Social Security system began and remains a “pay as you go” system in which current benefits are paid from taxes levied on contemporaneous workers and employers. (In the true economic sense, the full incidence of the tax falls on the worker: half the tax is nominally paid by the employer, but the worker pays that half in lower wages.) The tax is a regressive tax levied on earnings up to a maximum. Thus, it transfers income not merely from young to old but from poor to rich(er). Total benefits have virtually no connection with amounts paid in. The first Social Security recipient began paying in 1937, began receiving benefits in 1940, died at age 100 in 1975, paid in $24.75 and received $22,888.92. Today, most new recipients will not receive even what they pay in, let alone a reasonable rate of return on their “investment.”
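Those figures invite a back-of-the-envelope calculation: what compound annual return would turn the first recipient’s contributions into her lifetime benefits? The Python sketch below treats the $24.75 as a single deposit held for roughly 35 years – a simplifying assumption, not an actuarial computation.

paid_in = 24.75          # lifetime contributions, from the figures above
received = 22_888.92     # lifetime benefits, from the figures above
years = 35               # roughly 1940 through 1975, a simplifying assumption

ratio = received / paid_in
implied_return = ratio ** (1 / years) - 1

print(f"benefits were {ratio:,.0f}x contributions")
print(f"implied compound annual return: {implied_return:.1%}")

The script prints a benefit/contribution ratio of about 925x and an implied annual return over 21% – a windfall no private investment could have promised, and one the system could pay only by taxing later workers.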

For years, Social Security was held up as the crown jewel of the welfare state. The baby boom created huge numbers of taxpaying workers relative to benefit recipients, masking the inherent unsoundness of the system. Today, the reverse is true. Falling birth rates throughout the Western world have combined with increasing life expectancies at age 65 to produce an actuarial nightmare – the unfunded liabilities of social insurance systems are off the charts, dwarfing even sovereign debt. Politically powerful senior citizens’ groups like AARP agitate against reforms to the system and demand the return of “their” money, oblivious to the facts that it is long gone and that the trust funds are a mere accounting fiction.

When private citizens save for their own retirement, their savings are invested in productive assets. Of course, not every investment is successful, but on balance, American business is productive. After all, the increase in productivity from year to year is what enables more goods and services to be produced and consumed; that is what constitutes America’s rising standard of living. When government takes our Social Security contributions in lieu of allowing us to save, that money is spent to pay current benefit recipients, and the productive investment that would otherwise occur is never made.

With the passage of Social Security, America had crossed an invisible divide. Subsidy Nation had hit the big time.

Welfare – But Whose?

In the mid-1940s, the federal government began subsidizing the lunches of public schoolchildren. The program was originally designed to do two things: help get rid of surplus foodstuffs piled up by federal farm subsidies and improve the nutrition of poor schoolchildren. Over the years, the program grew in size and scope. It expanded to include breakfasts as well as lunches – which, of course, required children to arrive at school earlier. Recently, dinners have shown up on the menu. And what started out as a school-lunch program for the poor has become an all-purpose program of nourishment for public schoolchildren.

As a means of improving welfare, it suffers all the defects of government programs. The program has expanded pari passu with the rise in childhood obesity and incipient diabetes. We now know that government-imposed dietary standards are responsible for some large measure of this; today’s nutritional learning has stood yesterday’s virtually on its head. The subsidized prices students pay (or don’t) distort the choices made; encouraging food consumption via artificially low prices is not the way to deal with obesity. The distortion affects the supply side of the market, too, by reducing the incentive to craft desirable menus; yesterday’s rotting surplus food is today’s uneaten food shoveled down the drain or into trash bins.

Emboldened by its rip-roaring success with schoolchildren, the federal government broadened its food-subsidy programs in the early 1960s to include poor people generally. It began issuing stamps for use as vouchers in purchasing food, each stamp good for a stated value of food purchases. Again, the program began with limited purposes – to ensure a minimum amount of nutrition for citizens with incomes below the poverty line. Again, the program grew like Topsy. Today, some 47.7 million people – nearly 1 out of every 7 Americans – receive food stamps.

Was this stupendous growth owing to the brilliant success of the program? Hardly. The food-stamp program is such a notorious example of government subsidy gone wrong that it is featured in economics textbooks as a case study in what not to do when trying to help people. Over the years, food stamps have usually traded at a discount for cash in the black market – sometimes at rates of 50 cents per dollar of nominal value. The words “waste, fraud and abuse” should appear in dictionaries alongside the entry for “food stamps.”

Both school-lunch and food-stamp programs run afoul of the general economic principle that subsidies in cash are preferable to subsidies in kind. People can use cash in any way they prefer, but are limited to particular uses for school-lunch or food-stamp subsidies. This raises the highly pertinent question: whose welfare are welfare programs supposed to increase – that of recipients or that of taxpayers? Given the tremendous waste inherent in both programs, that may seem a dumb question. Yet taxpayers display stubborn resistance to reform of these programs, typically based on the presumption that cash subsidies would be misused by the poor and needy – who, unlike taxpayers, are presumably too stupid to judge their own best interests.

At a moment in history when Western governments are staggering under the burden of overwhelming debt, it seems germane to point out that cash subsidies sufficient to lift every man, woman and child in the U.S. above the poverty line would cost much less than is currently spent on “welfare” programs of all types. How much less? Over the decades, back-of-the-envelope estimates have put the cash cost anywhere from one-half to one-tenth of current spending.
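A hedged, back-of-the-envelope version of that comparison in Python. Every figure below is a hypothetical placeholder, chosen to show the shape of the arithmetic rather than to report actual data:

POOR_POPULATION = 46e6            # assumed number of Americans below the poverty line
AVG_POVERTY_GAP = 9_000           # assumed average dollar shortfall per poor person
TOTAL_WELFARE_SPENDING = 1.0e12   # assumed spending on all "welfare" programs combined

cash_cost = POOR_POPULATION * AVG_POVERTY_GAP
print(f"cash needed to close the poverty gap: ${cash_cost / 1e9:,.0f} billion")
print(f"current spending is {TOTAL_WELFARE_SPENDING / cash_cost:.1f}x that amount")

With these placeholder inputs, closing the gap outright would cost about $414 billion – roughly 2.4 times less than assumed total spending, at the conservative end of the range cited above.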

In other words, these subsidies are staggeringly inefficient.

Medicare and Medicaid

In the 1960s, the architects of President Lyndon Johnson’s Great Society observed that there was evidently political capital in creating benefits for the elderly. Even more importantly, the fact that these benefits were financed by taxes and cost more than the value they created did not seem to be generally recognized. Nor did it affect their political popularity. Johnson’s advisors rubbed their hands and set to work creating a system of government medical care for the elderly and the indigent.

On the surface, Medicare may seem different from government-owned and -operated systems like Great Britain’s National Health Service. The difference derives almost completely from the fact that Medicare is administered by large private contractors, such as Blue Cross and Blue Shield and hospitals. That does make at least one important difference on the supply side of the market – the presence of profit throughout the system means that there is an incentive to make and maintain capital investments in medical technology and pharmaceutical products. In Great Britain, by contrast, the lack of state-of-the-art equipment is a scandal.

But as the consumer experiences them, there is little to choose between the U.S. system and government systems. The overriding similarity is the rule of third-party payment, in which the consumer chooses treatments but the government or insurance company pays the bills. Thus, the consumer has almost no incentive to economize or choose wisely, and resources are wasted hand over fist. Walk into a U.S. emergency room – the context in which high prices would and should place the highest premium on careful choices by consumers – and chances are that hospital staff will refuse to quote a price for any particular service, at most providing a flat rate for the provision of service. The consent-to-treatment form the patient is obligated to sign is either a blank check written on the insurance company or (for the uninsured) a farewell note to his or her net worth.

Even worse is the effect on the demand for medical care. When somebody else is paying the bill, consumers react as if the price of medical services were zero. Demand zooms into the stratosphere. Proponents of government-run health care pretend that medical care is an absolute necessity, but only a tiny fraction of it pertains to immediate threats to life. The need for a working price system in health care is urgent.

All this is bad enough, but Medicare and Medicaid compound their basic felonies with Byzantine regulations that add complication in the name of saving money without reducing true economic costs. Economic cost is the value of forgone alternatives in production and consumption, as reflected in market prices. Since Medicare and Medicaid suppress market prices, their supposed “cost savings” are bogus. Both bureaucrats and government contractors lack the information necessary to centrally plan the medical care of millions of individuals. Arbitrarily reducing doctors’ fees is not cost saving; it merely reduces the level of care, substituting poor service for higher prices.

Subsidy Nation had achieved another milestone: entrapping the elderly in an inferior system of subsidized medical care with no escape route.

Environmentalism, Alternative Energy and Streetcars

The publication of Rachel Carson’s Silent Spring in 1962 marked the watershed at which the movement for conservation of natural resources – a goal with some basis in logic – changed course into a secular religion called environmentalism. The latter term has no logical underpinnings, since it offers no grounds for favoring one aspect of “the environment” over another. Clean air is a good thing, right enough – but how clean does it need to be? And whose idea of “clean” gets imposed on everybody else? Ditto for water, land and the rest of “the environment.” The very essence of free markets is to provide an efficient answer to questions like these; in Subsidy Nation, those questions go not only unanswered but unasked.

Insofar as environmentalism has anything one could call “principles,” it believes that there is some sort of absolute standard for damage to the holistic, personified entity called Mother Nature, and that the resources of nature cannot be exploited without violating that standard. This provides the implicit justification for subsidizing technologies like solar and wind and fuels like ethanol. Without subsidies, these would vanish from sight due to their unproductiveness. Indeed, the subsidies must be given on both sides of the market – business-firm subsidies to stimulate production and consumer subsidies to stimulate consumption. This is the sine qua non of Subsidy Nation: government at every level has to play Dr. Frankenstein by artificially animating the entire market.

Environmental hysteria reaches its apogee with the recent fad for streetcars. In an age when science has given us the power to ride in computer-guided, driverless cars, trucks and planes – thus vastly enhancing safety, increasing speed and boosting human productivity – we are going to ride around in streetcars? Voluntarily? These are so monumentally inefficient and ineffectual that they require massive subsidies funded by taxes enacted by stealth. This is Subsidy Nation run amok.

Uneconomic Development

Over the last quarter century, casual observers of state and local business have blinked in amazement at the rise to prominence of “economic development.” Every state in the nation established a department, bureau or corporation of economic development. “Economic development incentives” (EDIs) became the order of the day. After a couple of centuries of folding, spindling and mutilating economic principles and logic, local politicos were at last getting the message. Surely prosperity would follow closely in the wake of this phenomenon.

Not.

Seasoned observers knew better. They knew that the discipline of economics contains a field of specialization called “economic development,” pioneered by one of the most famous economists of all time, Joseph Schumpeter. And they had remarked on the curiosity that – like the dog that didn’t bark in the nighttime – state departments of economic development seldom employed actual economists and almost never spoke a word of genuine economic-development theory.

The reality of economic development after World War II in the undeveloped Third World – Africa, most of Latin America and much of Asia – foundered on the lack of effective markets. A major roadblock was the stifling effect of crony capitalism – the preemption of investment by friends, relatives and associates of those in power. This is the kind of “economic development” being preached and practiced at the state and local level today. It may have been best described by the wags in Arkansas who remarked drily that, while Governor, Bill Clinton was bent on achieving economic development one business at a time – starting with his friends.

EDIs take various forms, but they all involve artificial encouragement of business and particularly of investment. The operative meaning of “artificial” is defined in two ways: by selectivity and by distortion. Selectivity implies the granting of favors and the making of distinctions for one or a few, but not for any or all. The process known as “tax increment financing” (TIF) is a classic example. It allows recipient businesses to get favored tax treatment on their business investment, and it is awarded by a commission rather than being available to all on equal terms. Naturally, the commission is set up precisely to award TIF status to those who enjoy the favor of and/or play ball with the local political establishment. Also naturally, the pretense is made that TIF follows sound, established economic principles. Accordingly, awards and publicity are well larded with jargon and blue-sky prospectuses. Distortion involves the waiving or modifying of normal outcomes and procedures, such as market prices, taxes and government rules and processes.

It takes a professional to see all the way through the economic-development charade. For example, many well-meaning amateurs comment approvingly about the “competition between state and local governments for business” that gives rise to EDIs, and compare it with states lowering corporate and individual income-tax rates to attract businesses and residents. In the first place, tax reductions for everybody do not distort the relative merits of individual investments the way EDIs do. In the second place, taxes discourage economic activity by distorting prospective returns and incentives – hence reducing taxes across the board eliminates a distortion. EDIs create distortions; by making one investment appear misleadingly attractive, they make every alternative appear misleadingly unattractive.

But the professionals aren’t fooled. Organizations as diverse as the Minneapolis Federal Reserve and the United Nations have commented unfavorably upon EDIs. Economists across the political spectrum have condemned them. The problem is that the only time economists can command public attention is when the debate focuses on unemployment, inflation, “creating jobs” or predicting interest rates. Since these are things economists don’t do well, the well is poisoned for discussion of genuine economics.

Going Down for the Third Time

Today, the U.S. is drowning in a sea of subsidies. Most Americans have grown up being subsidized by the federal government. They have watched their friends, neighbors and enemies being subsidized with their tax dollars. They have come to view the political process exclusively as a zero-sum game, a fight in which the majority gets to use its absolute power to take the minority’s money for its own use. “Rights” simply define the ways and means of effecting the seizure. Economic growth, if it is pondered at all, is viewed not in the aggregate but rather as their own, personal, cost-of-living increase, decreed from above by somebody in authority. Apparently, the source of all that bounty that comprises the American way of life is a mystery to them.

The truth is grim. Throughout the 20th century, real income rose in the U.S. But this aggregate outcome concealed an underlying struggle between two forces pulling real income in opposite directions. Technology was advancing, increasing productivity and driving real income up. Meanwhile, each new round of subsidies was reducing economic efficiency, driving real income down. The net result – fairly steady increases in real income – reflected the fact that science, technology and a brain-inflow from the rest of the world were enough to pay the subsidy bills with a little left over to grow on. But that long holiday is now over. The end of the baby boom and the death knell of Reaganomics sent economic growth on a downhill slide.

And we aren’t alone. A good part of the world preceded us, even outdid us, in the transition to Subsidy Nation. Their downfall is the preview of our coming detractions. They are already underwater. We are going down for the third time. Like them, we are going not with a bang, but with a whimper. Crying for political compromise. Moaning for our entitlements. Whining for our subsidies.

DRI-445: Size Matters

For the umpteenth time, we awake to find Greece in the headlines. Her bickering political parties cannot form a coalition government – new elections are unavoidable. Capital is fleeing the country – almost $900 million worth on a single day. Greece will have to abandon the euro; the currency’s viability is in doubt. The European welfare state is imploding in slow motion, like a desolated high-rise public housing project condemned as uninhabitable.

The Wall Street Journal put its editorial finger on the problem. “The euro zone was conceived as a currency union…rather than …a fiscal or debt union…Trying to turn the euro into a larger political union has put the entire euro zone in jeopardy.”

For the last two years, governments of the European Community have applied one band-aid after another to the debt-and-spending problems of its smaller, weaker members. Each application was accompanied by ostentatious public exhalations of relief and proclamations that fiscal peace in our time had been attained. None of the temporary fixes addressed the underlying problem, which is that Greece, Portugal, Spain and Italy have been supporting bloated, inefficient governments by overspending their budgets and borrowing the overage.

Eventually, creditors recognized the imminent danger of default and refused to play along with the charade any longer. Greece is the first country to face the choice that will soon confront the rest; namely, default on the debt – thereby shredding their credit rating and any hope of borrowing in the future – or leave the European Community and the euro behind.

The Eurozone can afford to lose Greece, but a protracted procession to the monetary exits would write finis to the euro as a currency. Consequently, the dominant member of the organization, Germany, has tried to impose a program of fiscal austerity on Greece. “Fiscal austerity” is shorthand for tax increases and spending cuts. Having ridden the governmental gravy train for most of their lives, Greece’s voters are in no mood to be thrown off by a foreign chancellor – Germany’s Angela Merkel – whom they never voted for. Instead, Greek political rallies feature posters of Merkel dressed in a Nazi uniform. Meanwhile, Spain offers a preview of coming attractions: mobs of protestors called “indignants” parade in opposition to austerity measures.

Economically, Ms. Merkel is right. After all, the only way to keep Greece and the euro both afloat is to bail the Greeks out, and German taxpayers are the only ones solvent enough to take on that job. Ms. Merkel is like the captain of a lifeboat currently holding its capacity of twenty souls who is importuned by another dozen shipwreck survivors. But politically, the protestors are right. When the citizens of a democracy surrender control over their tax and spending policies to a foreign power, they are no longer living in a democracy. How can such a stalemate possibly be resolved?

Free Trade and the Optimum Size of Government

The European Community is the successor to the old Common Market. Its purpose is to leapfrog the political borders that otherwise hamstring economic transactions. Its advent superseded thousands of rules, regulations, tariffs, quotas and barriers that previously made commerce between European nations a quagmire of cost and complication.

Now the economy of Europe resembles that of the United States. People, goods and services move freely across the political boundaries that separate the several states. Almost everybody benefits from this, the only exceptions being those that would reap large gains from excluding foreign competition. (Economists call these people import-competing suppliers of goods, services and inputs.)

It is not enough to eliminate the political impediments to trade across national boundary lines. Trade within the United States is lubricated by the use of a common currency that allows all prices and relative values to be expressed in terms of a common denominator. Failing that, the need for rates of exchange to facilitate trade involving different national currencies can impede trade nearly as much as political barriers. Problems arise because each government’s control of its own national monetary unit lets it create money for its own purposes. Money creation depreciates the national money in the foreign-exchange market, causing the exchange rate – the domestic price of foreign currency – to rise. The price of goods is supposed to be based on supply and demand, but exchange-rate changes cause the prices of internationally traded goods to change for reasons unrelated to their underlying supply and demand. When people react to these camouflaged prices, the resulting changes in supply and demand falsify the real wants and needs of the people.
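A minimal Python sketch of that distortion, with hypothetical prices and exchange rates. The foreign supply-and-demand price never moves; only domestic money creation moves the exchange rate, yet the price the domestic buyer reacts to changes anyway:

foreign_price_eur = 100.0   # the good's supply-and-demand price abroad, unchanged

rate_before = 1.10          # dollars per euro, before domestic money creation
rate_after = 1.32           # after money creation depreciates the dollar

for label, rate in [("before", rate_before), ("after", rate_after)]:
    print(f"{label}: the same good costs ${foreign_price_eur * rate:.2f}")

The domestic buyer sees the price jump from $110 to $132 and cuts back purchases, even though nothing about the good’s real scarcity or desirability has changed.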

That is why the European Community created a common monetary unit – the euro – for its members. By affording all Europeans the luxury of a common currency, the euro made it possible to effectuate the ideal envisioned by T. H. White, author of The Once and Future King. White felt that war and strife between nations could be avoided if we could somehow see the world as a bird does, from the air – without boundaries. The classical liberals of the 19th century agreed that free trade between nations was the best prophylactic against war. The great French economist Bastiat proclaimed that if goods did not cross borders, soldiers would.

But White was only half right. Political borders might be an economic nuisance, but they serve a useful purpose. In their absence, the only alternative would be one gigantic world government. The existence of political borders allows us to keep government small and manageable. Lord Acton, the English historian and political philosopher, declared that “power tends to corrupt and absolute power corrupts absolutely.” Unfortunately, the wider the scope of government, the more power is needed to enforce its dictates and fund its administration. The bigger government is, the harder it falls upon the heads (and wallets) of its subjects.

Economics says to eliminate political borders. Politics says that the more political borders we create, the smaller governments are and the freer and more prosperous we become. We can have our cake and eat it too by keeping governments small and numerous while allowing free trade to cross political borders.

The problem is that officeholders have an incentive to make government as big as possible and oppose free trade across national boundaries. In other words, the incentives facing government are perverse – they tend to produce the opposite outcome to that desired. The only way to overcome that is by imprisoning government in a constitutional straitjacket that forces it to allow free trade while severely limiting its scope for growth.

The Size of Companies

Nobel laureate Ronald Coase saw transactions costs as the raison d’être of businesses. When households migrated away from self-sufficiency and toward specialization, why didn’t each one simply purchase the other goods it consumed directly from the specialist household? Coase believed that the transactions costs of producing goods and making them widely available in trade demanded the efforts of a firm organized to incur those costs efficiently. Thus was born the business firm.

The more efficient businesses became, the larger they grew. The greater the number and variety of production operations businesses could perform efficiently, the larger they became. In some cases, physical laws created “economies of scale” – greater-than-proportional increases in output resulting from proportional increases in all inputs. One such law is the so-called “two-thirds rule,” found in structures and processes like oceangoing cargo ships and pipelines: construction cost varies with surface area while throughput varies with volume. Since area tends to increase as the two-thirds power of volume, the larger the structure, the lower the unit cost of a given output. This gives firms a powerful incentive to grow as large as possible.
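A minimal Python sketch of that scaling arithmetic, assuming cost proportional to surface area and capacity proportional to volume – an idealization that real ships and pipelines only approximate:

for scale in [1, 8, 64]:            # capacity (volume) multiples
    cost = scale ** (2 / 3)         # cost tracks surface area = volume^(2/3)
    unit_cost = cost / scale        # cost per unit of capacity
    print(f"capacity x{scale:>2}: cost x{cost:5.1f}, unit cost x{unit_cost:.2f}")

Octupling capacity only quadruples cost, halving the unit cost; at 64 times the capacity, unit cost falls to a quarter of the original.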

No matter the source of the increase, it cannot persist indefinitely. That is, no business could grow infinitely large without encountering some factor limiting its optimal size. In a free-market economy, the limiting factor always finds expression in the profit earned by the firm.

Perhaps the firm eventually grows so big that it cannot monitor product quality well enough. What constitutes “well enough” and how will firm managers know when the critical point is reached? Consumers will let them know by reducing their purchases of the product, thereby lowering the firm’s profits. Maybe the firm’s size loosens its grip on purchasing decisions. This will cause its costs to rise, thereby reducing its profits. Maybe the firm expands so far and so fast that spending on research and development falls, thereby depressing its rate of innovation and dropping its productivity. This will drive its costs up and its profits down. It might be as simple as just losing touch with the needs of its customers or with the pulse of the market. Whatever the cause, the symptom is the same – lower profits.

Note the difference between government and private business. Both face incentives to grow, but may grow too large. The profit motive is an inborn, inherent governor on the growth propensities of business. Government contains no such automatic, inherent restraint. The only restraining forces on government are exerted at the ballot box or in the streets.

Elections are an uncertain means of reducing government growth. The two watershed elections in recent Western industrial history – the elections of Ronald Reagan as U.S. President in 1980 and Margaret Thatcher as British Prime Minister in 1979 – succeeded only in slowing the pace of government growth in their respective nations. Revolutions, whether violent or peaceful, can be more transformational but are also riskier and potentially more costly.

Why the Left is Wrong About Company Size

One would be hard put to reconcile the foregoing with the picture of business painted by the left wing. The Left’s favorite villain may be the corporation. It uses the word “corporate” as an all-purpose pejorative modifier. “Corporate greed,” “corporate welfare,” “corporate cronyism,” “corporate profits” – even “corporate culture” takes on a pejorative cast.

If there is one connotation invariably and immutably associated with the corporation by the Left, it is bigness. Yet there is no necessary connection between size and the corporation. Economic historian Robert Hessen pointed out that almost every member of the Fortune 500 started as a small, closely held corporation whose stock was owned by a tiny group of owner-operators. When convinced of the firm’s potential, investors became willing to supply investment capital.

Eventually, the potential to reach national and international markets justified “going public” – selling equity shares to the public at the cost of relinquishing management control of the firm to a board of directors charged with safeguarding the shareholders’ interests. It is actually this “separation of ownership from control” that is the single defining characteristic of the corporation, even more so than the limitation of investor liability.

The joint-stock companies organized to explore the New World beginning in the 16th and 17th centuries were gigantic by the standards of their time. They had to be. But they weren’t corporations in the modern sense, merely companies organized to aggregate capital. Today, limitation of liability seems inseparable from the corporation (and extends to other forms of business organization as well). But the concept didn’t firmly attach itself to the corporation until the late 19th and early 20th centuries. Andrew Carnegie’s steel companies were structured as limited partnerships, while John D. Rockefeller built Standard Oil using the trust, a vehicle designed to facilitate expansion by merger and acquisition.

It wasn’t the corporate form itself that gave rise to big business, but rather the economic imperatives of the marketplace – economies of scale and scope, increasing market size, reductions in transport costs and all the rest. And these imperatives carried with them their own inherent, automatic limitation. Profit was the trouble light on the instrument panel, warning owners and investors when growth had reached its natural limits.

The implication of the Left’s sinister portrayal of the corporation is that small business is somehow pure, noble, untarred by corporate pitch. In practice, of course, this is pure hooey. The relationship between big business and small business is symbiotic.

Virtually all big businesses start small – in that sense, the former couldn’t exist without the latter. Less teleologically, big corporations rely on small businesses for raw materials, supplies, recruiting needs, equipment purchases and servicing. But the flow of influence also runs in the opposite direction. Big business serves as the training ground in which entrepreneurs learn their trade; indeed, the standard advice to the would-be entrepreneur is to take a job working for wages in his or her chosen industry. Oftentimes, that job is corporate – either a salaried position at a major corporation or perhaps ownership of a franchise, where the owner can “paint by the numbers” using a proven operational and marketing plan in order to learn the ropes of running a business.

Much is made of the fact that the bulk of job growth comes from small business. This is not the result of virtue or the small-business mentality or even superior agility, so much as simply economic and arithmetic logic. Big businesses are, by definition, big; this means that a preponderance of them have approached, attained or (occasionally) surpassed their optimum size. They have little or no room for growth in income and employment. Small businesses, being small, have more “room” for growth. While some of them will remain small, the ones that thrive will grow. Some will grow a little, some quite a bit, a few will grow enormously. The outcome of this is that small business naturally generates most of the new job growth.
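A toy Python rendering of that arithmetic. All firm sizes and growth rates below are hypothetical; the point is that even modest growth among the minority of thriving small firms can swamp the job creation of large firms already near their optimum size:

big_firms = [(10_000, 0.005)] * 50                   # (employees, assumed growth rate)
small_firms = [(20, 0.0)] * 800 + [(20, 1.0)] * 200  # most stay flat; some thrive

def jobs_added(firms):
    # Sum each firm's new hires: current size times its growth rate.
    return sum(round(size * rate) for size, rate in firms)

print("jobs added by big firms:  ", jobs_added(big_firms))
print("jobs added by small firms:", jobs_added(small_firms))

With these placeholder numbers, the big firms employ half a million people yet add only 2,500 jobs, while the small firms – employing just 20,000 in total – add 4,000. No virtue required; just room to grow.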

The Mixed Economy – the Case of Banks

As an intermediate case between growth in government and free-market growth in private business, we have heavily regulated private firms. On the one hand, regulation is conducted by government bureaucrats within the framework of government. On the other hand, its ostensible purpose is to replicate, replace or fine-tune the competitive process in cases where competition is either absent or impracticable. To which of these polar extremes will the outcome of regulation tend?

Banks are a locus classicus of business regulation. Although banks were regulated in the 19th century, thoroughgoing regulation began with the New Deal and the Great Depression. Widespread bank failures are cited by just about everybody as an important cause of the Depression. It is striking that our next-door neighbor, Canada, suffered next to no bank failures compared to the thousands we endured.

At the time, branch banking was illegal in the U.S. but legal in Canada. Various monetary theorists have suggested that Canadian branch banks protected their parent companies against risk by diversifying each company’s asset base geographically, better offsetting region-specific loan losses. This is a case in which restrictive U.S. regulation kept our banks from growing large enough to neutralize the risks borne in their local environment.

The opposite case began unfolding in the 1980s and has persisted for roughly three decades since. A philosophy gradually developed that the larger a bank grew, the greater were the number and strength of its ties to other firms. If the bank failed, it might take many of these connected firms down with it. The term for this interconnectedness problem was systemic risk. Fear of systemic risk gave rise to the doctrine known as “too big to fail.” Some banks, and financial firms generally, were considered too big to be subject to the risk of failure. This mindset is widely thought to underlie the massive bailouts of late 2008 and early 2009.

The most striking feature of the “too big to fail” doctrine is its effect on the size of banks. Clearly, banks have a huge incentive to get as large as possible in order to qualify for this kid-glove treatment. More generally, the incentives confronting banks under regulation tend to be perverse, motivating them to do the wrong thing – not diversifying in the 1930s and growing to improve eligibility for bailouts in the 2000s.

One of the most perverse features of bank regulation was that the biggest banks adopted policies that were quite unfriendly to their own customers. For example, financial commentators remarked that one mega-bank apparently had sought to induce its depositors to overdraw their accounts in order to incur large fees. In a competitive market, a bank would never take such a punitive stance toward its own customers, for fear of losing them. Under regulation, mega-banks apparently take a heads-I-win, tails-I-get-bailed-out attitude.

Size Matters

There is no doubt that, as elsewhere, size matters in politics and economics. Free markets feature an automatic mechanism that regulates business-firm growth and size; namely, the firm’s rate of profit. Rising profits encourage continued growth, while falling profits tell the firm to pull back. Governments find it difficult to restrain their own size and growth because they lack any such warning signal.

Regulatory behavior is actually perverse in its effects on banking firm size. The “too big to fail” doctrine nudges firms toward gigantism, where they may win favor for a bailout. It also promotes neglect of customer service – a failing that would come back to haunt a competitive practitioner.

There is a countervailing hypothesis that it is not size per se that matters, but the use to which any size is put. Free markets make the best of things by encouraging businesses to reach their optimum size. Instead of regulating firms with discretionary orders issued by human bureaucrats, markets appoint consumers as the regulatory czars. The profit motive becomes the tool to impose regulatory discipline on businesses.