Do We Need a Social Welfare State?

One must evaluate all the trade-offs.

The following article in today’s NY Times asks the provocative question of whether we can afford a major shift to a social welfare state. One must also ask if the USA needs such a level of social welfare spending and what trade-offs it might impose. This is a question that must be answered through the democratic political process because the economic trade-offs are real.

See my comments interspersed below (in RED in the original formatting).

Can America Afford to Become a Major Social Welfare State?

nytimes.com/2021/09/15/opinion/biden-spending-plan-welfare.html

By N. Gregory Mankiw

September 15, 2021

In the reconciliation package now being debated in Washington, President Biden and many congressional Democrats aim to expand the size and scope of government substantially. Americans should be wary of their plans — not only because of the sizable budgetary cost, but also because of the broader risks to economic prosperity.

The details of the ambitious $3.5 trillion social spending bill are still being discussed, so it is unclear what it will end up including. In many ways, it seems like a grab bag of initiatives assembled from the progressive wish list. And it may be bigger than it sounds: Reports suggest that some provisions will arbitrarily lapse before the end of the 10-year budget window to reduce the bill’s ostensible size, even though lawmakers hope to extend those policies at a later date.

People of all ages are in line to get something: government-funded pre-K for 3- and 4-year-olds, expanded child credits for families with children, two years of tuition-free community college, increased Pell grants for other college students, enhanced health insurance subsidies, paid family and medical leave, and expansions in Medicare for older Americans. A recent Times headline aptly described the plan’s coverage as “cradle to grave.”

If there is a common theme, it is that when you need a helping hand, the government will be there for you. It aims to assist people who are struggling in our rough-and-tumble market economy. On its face, that instinct doesn’t sound bad. Many Western European nations have more generous social safety nets than the United States. The Biden plan takes a big step in that direction.

Can the United States afford to embrace a larger welfare state? From a narrow budgetary standpoint, the answer is yes. But the policy also raises larger questions about American values and aspirations, and about what kind of nation we want to be.

The issue Prof. Mankiw addresses here is whether the costs of such programs yield the benefits desired. There is much talk on the left that Modern Monetary Theory demonstrates deficits don’t constrain government spending, so politicians should spend whatever is needed to achieve whatever objectives they choose. This is wishful fantasy. What matters economically and financially is whether such spending yields a greater return in terms of freedom and quality of life for society as a whole. If such spending merely increases the deficit but does not invest in the productivity of the economy, it is a dead weight upon society. It is not much different from a personal choice between buying a new car and investing in education: one must compare what each choice will yield in financial freedom and happiness over the longer term.

The Biden administration has promised to pay for the entire plan with higher taxes on corporations and the very wealthy. But there’s good reason to doubt that claim. Budget experts, such as Maya MacGuineas, president of the Committee for a Responsible Federal Budget, are skeptical that the government can raise enough tax revenue from the wealthy to finance Mr. Biden’s ambitious agenda.

The United States could do what Western Europe does — impose higher taxes on everyone. Most countries use a value-added tax, a form of a national sales tax, to raise a lot of revenue efficiently. If Americans really want larger government, we will have to pay for it, and a VAT could be the best way.

The costs of an expanded welfare state, however, extend beyond those reported in the budget. There are also broader economic effects.

Arthur Okun, the former economic adviser to President Lyndon Johnson, addressed this timeless issue in his 1975 book, “Equality and Efficiency: The Big Tradeoff.” According to Mr. Okun, policymakers want to maximize the economic pie while slicing it equally. But these goals often conflict. As policymakers attempt to rectify the market’s outcome by equalizing the slices, the pie tends to shrink.

Mr. Okun explains the trade-off with a metaphor: Providing a social safety net is like using a leaky bucket to redistribute water among people who hold different amounts. While bringing water to the thirstiest may be noble, it is also costly because some water is lost in transit.

In the real world, this leakage occurs because higher taxes distort incentives and impede economic growth. And those taxes aren’t just the explicit ones that finance benefits such as public education or health care. They also include implicit taxes baked into the benefits themselves. If these benefits decline when your income rises, people are discouraged from working. This implicit tax distorts incentives just as explicit taxes do. That doesn’t mean there is no point in trying to help those in need, but it does require being mindful of the downsides of doing so.
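The implicit-tax mechanism described above can be made concrete with a toy benefit phase-out. All of the numbers here (benefit size, phase-out rate, explicit tax rate) are hypothetical, chosen only to illustrate the mechanism, not drawn from any actual program:

```python
# Illustrative only: the benefit amount, phase-out rate, and tax rate
# below are hypothetical, not taken from any real program.

def take_home(earnings, benefit_max=12_000, phaseout_rate=0.5, tax_rate=0.2):
    """Net income when a benefit is withdrawn as earnings rise."""
    benefit = max(0.0, benefit_max - phaseout_rate * earnings)
    return earnings * (1 - tax_rate) + benefit

def effective_marginal_rate(earnings, delta=1_000):
    """Share of an extra $1,000 of earnings lost to explicit tax
    plus benefit withdrawal (the 'implicit tax')."""
    gain = take_home(earnings + delta) - take_home(earnings)
    return 1 - gain / delta

print(effective_marginal_rate(10_000))  # 0.7: 20% explicit tax + 50% benefit withdrawal
```

At $10,000 of earnings, this hypothetical worker keeps only 30 cents of each extra dollar, even though the explicit tax rate is just 20 percent; the other 50 cents vanish as the benefit phases out. That combined rate is the disincentive the paragraph above describes.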

Yes, we must reconcile the trade-off, but I would also characterize it as the freedom and liberty to pursue one’s personal happiness versus the promise of individual economic security offered by the collective. The fulfillment of that promise is often costlier than anticipated, and the benefits disappointing.

Which brings us back to Western Europe. Compared with the United States, G.D.P. per person in 2019 was 14 percent lower in Germany, 24 percent lower in France and 26 percent lower in the United Kingdom.

Economists disagree about why European nations are less prosperous than the United States. But a leading hypothesis, advanced by Edward Prescott, a Nobel laureate, in 2003, is that Europeans work less than Americans because they face higher taxes to finance a more generous social safety net.

In other words, most European nations use that leaky bucket more than the United States does and experience greater leakage, resulting in lower incomes. By aiming for more compassionate economies, they have created less prosperous ones. Americans should be careful to avoid that fate.

The point, of course, is not that leisure time is undesirable but that people should be free to choose how they invest their time and energy, rather than have state policy reward or penalize that choice arbitrarily. In a free and just society, this choice should be left to the individual. Liberty and security are not mutually exclusive goals.

Compassion is a virtue, but so is respect for those who are talented, hardworking and successful. Most Americans descended from immigrants, who left their homelands to find freedom and forge their own destinies. Because of this history, we are more individualistic than Europeans, and our policies rightly reflect that cultural difference.

That is not to say that the United States has already struck the right balance between compassion and prosperity. It is a continuing tragedy that children are more likely to live in poverty than the overall population. That’s why my favorite provision in the Biden plan is the expanded child credit, which would reduce childhood poverty. (I am also sympathetic to policies aimed at climate change, which is an entirely different problem. Sadly, the Biden plan misses the opportunity to embrace the best solution — a carbon tax.)

But the entire $3.5 trillion package is too big and too risky. The wiser course is to take more incremental steps rather than to try to remake the economy in one fell swoop.

Actually, I would suggest that the choice between liberty and security is a false one, and that the assumption that individual security can only be secured by the state is also false. The leftist assumption is that the state must intervene to redistribute wealth after the fact, when instead we can design policies that empower citizens to share in the distribution of that wealth by participating in the risk-taking venture before the fact. Then the distribution of resources in society will mostly take care of itself. As it is now, and with this social welfare expansion, we prevent most individuals who need to participate from participating, forcing them to depend on the largesse of the state or the dictates of the market. This is hardly optimal in the search for liberty and justice. In light of my preference to preserve my liberty and take care of my own security, my answer to Prof. Mankiw’s question would be NO.

N. Gregory Mankiw is a professor of economics at Harvard. He was the chairman of the Council of Economic Advisers under President George W. Bush from 2003 to 2005.

QE4Ever

A bit like love, eh?

This article offers some good insights into monetary manipulation. The one thing I see missing is the recapitalization of assets based on depressed long-term interest rates, which is a result of Quantitative Easing and Zero Interest Rate Policy (ZIRP). So we have massive asset bubbles across many real asset classes as a result. No one seems to have any idea how this unwinds, but unwind it must.

‘Quantitative Easing’ Isn’t Stimulus, and Never Has Been

By Ken Fisher, RealClearMarkets

(AP Photo/Jose Luis Magana)

Upside down and backwards! Nearly 13 years since the Fed launched “quantitative easing” (aka “QE”), it is still misunderstood, both upside down and backwards. One major camp believes it is inflation rocket fuel. The other deems it essential for economic growth—how could the Fed even consider tapering its asset purchases amid Delta variant surges and slowing employment growth, they shriek! But both groups’ fears hinge on a fatal fallacy: presuming QE is stimulus. It isn’t, never has been and, in reality, is anti-stimulus. Don’t fear tapering—welcome it.

Banking’s core business is sooooooo simple: taking in short-term deposits to finance long-term loans. The spread between short- and long-term interest rates approximates new loans’ gross profit margins (effectively cost versus revenue). Bigger spreads mean bigger loan profits—so banks more eagerly lend more.

Overwhelmingly, people think central banks “print money” under QE. Wrong. Very wrong. Super wrong! Under QE, central banks create non-circulating “reserves” they use to buy bonds banks own. This extra demand boosts bond prices relative to what they would be otherwise. Prices and yields move inversely, so long-term interest rates fall.

Fed Chair Jerome Powell and the two preceding him wrongheadedly label QE stimulus, thinking lower rates spur borrowing—pure demand-side thinking. Few pundits question it, amazingly. But economics hinges on demand … and supply. Central bankers almost completely forget the latter—which is much more powerful in monetary matters. These “bankers” ignore banking’s core business! When short-term rates are pinned near zero, lowering long rates shrinks spreads (“flattening” the infamous yield curve). Lending grows less profitable. So guess what banks do? They lend less! Increase demand all you want—if banks lack incentive to actually dish out new loans, it means zilch. Stimulus? In any developed-world, central-bank-based system, so-called “money creation” stems from the total banking system increasing net outstanding loans. QE motivates exactly the opposite.

Doubt it? Consider recent history. The Fed deployed three huge QE rounds after 2008’s financial crisis. Lending and official money supply growth shriveled. In the five pre-2008 US expansions, loan growth averaged 8.2% y/y. But from the Fed’s first long-term Treasury purchases in March 2009 to December 2013’s initial taper, loan growth averaged just 0.8% y/y. After tapering nixed the nonsense, it accelerated, averaging 5.8% until COVID lockdowns truncated the expansion. While broad money supply measures are flawed, it is telling that US official quantity of money grew at the slowest clip of any expansion in history during QE.

Now? After a brief pop tied to COVID aid, US lending has declined in 12 of the last 14 months. In July it was 4.7% above February 2020’s pre-pandemic level—far from gangbusters growth over a 17-month span.

Inflation? As I noted in June, it comes from too much money chasing too few goods and services worldwide. By discouraging lending, QE creates less money and decreases inflation pressure. You read that right: QE is disinflationary. Always has been. Wherever it has been tried and applied inflation has been fried. Like Japan for close to …ah…ah…ah….forever. Demand-side-obsessed “experts” can’t see that. But you can! Witness US prices’ measly 1.6% y/y average growth last expansion. Weak lending equals weak real money growth and low inflation—simple! The higher rates we have seen in recent months are all about distortions from lockdowns and reopenings—temporary.

The 2008 – 2009 recession was credit-related, so it was at least conceivable some kind of central bank action might—maybe kinda sorta—actually help. Maybe! But 2020? There was zero logic behind the Fed and other central banks using QE to combat COVID. How would lowering long rates stoke demand when lockdowns halted commerce?

It didn’t. So fearing QE’s wind-down makes absolutely no sense. Tapering, other things equal, would lift long-term rates relative to short rates—juicing loans’ profitability. Banks would lend more. Growth would accelerate. Stocks would zoom! Almost always, when central banks try to get clever, they wield a cleaver relative to what they desire. A lack of Fed action is what would otherwise be called normalcy.

Fine, but might a QE cutback still trigger a psychological freak-out, roiling markets? Maybe—briefly. Short-term volatility is always possible, for any or no reason. But it wouldn’t last. Tapering is among the most watched financial stories—has been for months. Pundits over-worry about it for you. Their fretting largely pre-prices QE’s end, so you need not sweat it. This is why Powell’s late-August Jackson Hole commentary—as clear a statement that tapering is near as Fed heads can make—didn’t stoke market swings. The ECB’s September 9 “don’t call it a taper” taper similarly did little. Remember: Surprises move markets materially. Neither fundamentals nor sentiment suggest tapering is bear market fuel.

Not buying it? Look, again, at history. The entrenched mythological mindset paints 2013’s “Taper Tantrum” as a game-changer for markets. Untrue! After then-Fed Chairman Ben Bernanke first hinted at tapering back in May 2013, long-term Treasury bond prices did sink—10-year yields jumped from 1.94% to 3.04% by that yearend. But for US stocks, the “tantrum” amounted to a -5.6% decline from May 21 through late June—insignificant volatility. After that, stocks shined. By yearend, the S&P 500 was up 12.2% from pre-taper-talk levels. Stocks kept rising in 2014 after tapering began. 10-year yields slid back to 2.17%. My sense is even tapering’s teensy impact then is smaller this time because, whether people consciously acknowledge it or not, we all saw this movie before.

Taper terror may well worsen ahead of each coming Fed meeting until tapering actually arrives. Any disappointing economic data will spark cries of “too soon!” Tune them down. History and simple logic show QE fears lack the power to sway stocks for long.

Ken Fisher, the founder, Executive Chairman and co-CIO of Fisher Investments, authored 11 books and is a widely published global investment columnist.

Modern Monetary Fantasies 2

The Myth of Big Government Deficits – A TED Talk

This is quite the tale. I’m sure Ms. Kelton studied her economics, but here, with MMT, she takes a few basic truths and spins an elaborate fantasy. Essentially her argument is that debt is no obstacle to economic policy and economic outcomes. You want a Ferrari? No problem, the Fed can write a check and it’s yours, no taxes, no worries. Advocates will hate this simplification, but that’s essentially what Ms. Kelton is selling. (You can substitute free healthcare, free college, whatever you want, but I’d go with the Ferrari 365GT.)

MMT is utopian economics. Yes, in theory it can make sense; just don’t go too far down that rabbit hole. Government debt is not like private debt because it never has to be paid back, only serviced and rolled over. So the debt in dollar terms doesn’t matter, but the productivity of that debt matters a lot (the debt-to-GDP ratio is a good indicator, and it looks worse every day).

She lauds the pandemic stimulus because that essentially was an MMT experiment. Look, no recession! But recessions are measured in monetary terms (not value), and if the Fed keeps pumping out money, voila! No recession. But value creation matters and in value terms, we are suffering an extreme recession and stagflation. How many small businesses have closed in the past 2 years? How much price inflation are we experiencing? 5-9%? Have you tried to buy a house lately? 20% price increases. Tried to get a plumber or electrician?

Yes, when the government spends $28 trillion, that money goes somewhere in the private sector. And yes, we’ve seen it skimmed off by the banking industry, the asset-rich who have merely leveraged 3% debt, and the securities markets that have bubbled up even as production has declined. This is what is driving inequality to new heights as the global elites suck up this cheap credit courtesy of the central banks. Check out the number of mega yachts plying the oceans.

Yes, we’ve seen the fantasy of MMT in action and that’s why we’re having a political revolution. Kelton and the handful of economists selling MMT are assuming a utopian political world where everybody always does the right thing. Ultimately, intellectual dishonesty like this is extremely damaging.

Read her book; there’s nothing there that addresses these false assumptions. Credit and debt are tools that the market uses to restrain profligacy. Without those restraints, the party will eventually implode.

Modern Monetary Fantasies

I read this comment to an article on cultural conflict and politics (the article was a UK perspective and not that insightful – see link below). I was struck by this reader’s comment because it hits the nail on the head, despite its rudimentary tone and language. I could write an empirical and theoretical analysis that would bore readers to tears but it would all support this view.

It’s US$ monetary policy, along with globalization and technology, that is driving the distributional consequences of deficit spending into the cul-de-sac we find ourselves in. Think about it: when the government borrows and spends $28 trillion, where do we think it goes? Into private pockets controlled by those at the top. (All those real estate assets we own are merely keeping pace – it’s still the same four walls and roof.)

There are probably no more than a handful of politicians in Washington, D.C., who could explain or even understand this, yet they are all setting the policies in ignorance.

Money Printing, the ability to spend more than is taken in has had a vast set of consequences – and almost all the problems can be laid at its feet. Really Nixon in 1971 taking US off the ‘Gold Standard’ to fund Johnson’s ‘Great Society’ and the Vietnam War. Both these make the rich richer. The Military-Industrial Complex goes directly to the wealthy, and the increased Social Spending $ always trickle-up while paying the poor to be poor traps them in poverty.

And so it has progressed till the National debt is 28$ Trillion! About equal to 8 years of all USA’s tax revenues. At the current ZERO percent interest rate it takes 1.5$ Trillion to service the debt! About half of all the Fed Tax revenues! Biden wants to add 4.5$ Trillion on human infrastructure (waste, pork, corruption, and free money to minorities, to trickle to the super-rich (and China, via Amazon and Walmart)). This on top of the monthly 120$ BILLION purchases of Treasuries and mortgage-backed securities the Fed buys – and the 1-3$ Trillion budget deficit! (If, when, interest rises to 5% it will take all the gov tax revenue just to service the national debt – )

Anyway, the printed $ all rise to the super wealthy, they get all of it. The poor just get addicted to the drug of the Welfare Trap, and become multi-generational poor. The working class and middle class have all their savings and pensions harvested by the stealth Tax called Inflation (now officially 5%, but really 9) because interest must be kept at Zero for the debt to be serviced. So all workers’ savings get eaten up by inflation Tax of 5% – (minus the bank and bond interest of 1% = MINUS 4% savings growth). Their pensions and savings melting like snow as the printing inflates the money supply….

But the above just scratches the surface of the harm. USA will eventually lose ‘Reserve Currency Status’ over this. The foreign trade deficit is a Trillion – how can that continue – the hard assets and Equities so inflated – and the wealthy own them. The rich have hard assets which appreciate, they carry HUGE debt at 3% interest while inflation eats the debt basis away – and Dividends, so make money while everyone goes broke.

This is what Lefty/Liberal MMT is doing – the death of America. The Left economics is always same – all the money to the elites, and the rest go broke.
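Rudimentary as the comment is, its savings-erosion arithmetic (roughly 1% nominal interest against 5% inflation) compounds into real losses quickly. The figures in this sketch are the commenter's, not official data:

```python
# Back-of-the-envelope version of the commenter's point. The 1% nominal
# yield and 5% inflation are the comment's figures, not official data.

def real_value(savings, nominal_rate, inflation, years):
    """Purchasing power of savings after compounding a nominal yield
    against a steady inflation rate."""
    return savings * ((1 + nominal_rate) / (1 + inflation)) ** years

# A decade of ~1% interest under ~5% inflation erodes roughly a third
# of the original purchasing power.
print(round(real_value(100_000, 0.01, 0.05, years=10)))
```

The approximate "minus 4% a year" shorthand in the comment is the first-order version of this same calculation.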

Why Does America Hate Itself?

Bretton Woods – #2 of Series

Second of a Series of articles on the international monetary regime reprinted from the NY Sun.

Not sure I would agree with all of this. Net exports are different from manufacturing exports and manufacturing employment, especially in the global information economy. I believe the problem here is that reserve-currency status allows the US central bank to issue too many US$ credit liabilities without paying the direct consequence. Our trading partners are not exactly happy about this either, since they surrender control of their currencies to the dominance of the US Federal Reserve and US politics. I think we need to rein in political discretion over the value of money.

Time To Reverse the Curse Over the Dollar

nysun.com/national/beyond-bretton-woods-the-road-from-genoa/91606/

By JOHN MUELLER

Journalism thrives on simple narratives and round numbers. So I must note that what President Nixon ended 50 years ago was not the international gold standard, which persisted despite interruptions for more than two millennia to 1914, but its complicated parody: the gold-exchange standard, established 99, not 50, years ago by a 1922 agreement at Genoa.

Prime Minister David Lloyd George convened the Genoa Conference in an effort to restore the economies of Central and Eastern Europe, modify the schedule of German reparations owed to France, and begin the re-integration of Soviet Russia into the European economy. Lacking any American support, the conference was a failure on all those counts.

The gold-exchange standard, John Maynard Keynes’ idea, was Genoa’s one tangible result. Keynes had proposed in 1913 that the monetary system of British colonial India be adopted world-wide. The British pound would remain convertible into gold, but India’s and other countries’ domestic payments would be backed by ostensibly gold-convertible claims on London. Following Genoa, the pound could be exchanged for gold, and other national currencies could be exchanged for pounds.

But there was a complication: unlike most currencies, the Indian rupee actually was based on silver, not gold, and British officials, including Keynes, overvalued the silver rupee, hoping to reduce heavy demands for British gold. British monetary experts inserted this scheme (without the silver wrinkle) in the 1922 Genoa accord, incidentally forestalling impecunious Britain’s repayment of its World War I debts in gold.

While working 35 years ago for Congressman Jack Kemp, I first coined the term the “reserve currency curse.” I was tutored in the subject by Lewis E. Lehrman, who in turn was influenced by the French economist Jacques Rueff (1896-1978). Keynes had claimed that what matters is only the value, not kind, of monetary reserves. It was Rueff who countered in 1932 that foreign exchange is qualitatively different from an equal value of precious metal.

With the creation of, say, dollar reserves, purchasing power “has simply been duplicated, and thus the American market is in a position to buy in Europe, and in the United States, at the same time.” This credit duplication causes prices to rise faster in the reserve-currency country than its trading partners, precipitating the reserve-currency country’s deindustrialization. That fate soon befell Great Britain, then the United States after the dollar replaced the pound under the 1944 Bretton Woods agreement.

Other countries backing their currencies with dollar-denominated securities led to a dilemma for America. The United States is the only major country with negative net monetary reserves (foreign official assets minus liabilities). All others — even those whose currencies are used by foreign central banks — have positive net reserves (i.e., those countries’ foreign official assets exceed their foreign official liabilities).

There is a correlation of more than 90% between America’s net reserves and its manufacturing employment. American net reserves had been positive before but turned negative by 1960, and manufacturing jobs have since disappeared in direct proportion to the decline in our net reserves. Focusing on one bilateral trade balance or other — say, the US and China — is a mug’s game. What matters is the total balance, not bilateral subsets.

How could an American president reverse the reserve-currency curse? By making honesty the best policy: negotiating and starting repayment of all outstanding dollar reserves over several decades. Since international payments must be settled in real goods — not IOUs — the necessary production of American goods for export is the surest way to revive America’s manufacturing employment.

To increase our manufacturing jobs back to the peak of 17 million from today’s 12 million, it would be necessary to repay most outstanding official dollar reserves. If President Biden is as ineffectual as most of his recent predecessors in responding to the “reserve-currency curse,” he, too, will have to get used to the title “ex-President.”

________

Mr. Mueller is the Lehrman Institute Fellow in Economics at the Ethics and Public Policy Center in Washington DC and author of “Redeeming Economics.” Image: Conferees at the Genoa Conference, with Prime Minister Lloyd George of Britain front and center. Detail of a British Government photo, via Wikipedia Commons.

What is Money?

This looks to be an excellent series of articles concerning the most important policy issue of the past 50 years. The global monetary regime that uses the US$ as the reserve currency and gives the world’s central banks discretion and control over the supply of fiat currency drives current global events, for better and worse. The effects range from economic crises and financial meltdowns to inequality, political conflict, and environmental degradation. Given the importance of money, I print the following article from the NY Sun in full…

God and Money: ‘A Perfect and Just Measure Shalt Thou Have’

nysun.com/national/god-and-money-a-perfect-and-just-measure-shalt/91597/

By JUDY SHELTON

Following begins a new series of columns marking the 50th anniversary of the collapse of the Bretton Woods gold exchange standard established in the closing months of World War II. A related editorial appears nearby.

* * *

The 50th anniversary of the collapse, on August 15, 1971, of the Bretton Woods monetary system is a momentous moment in the history of money. It should provide an occasion for thoughtful discussion focused on the road to reform, our priceless constitutional foundation, and the restoration of honest money.

Let us avoid an academic food fight among economists over prior international monetary systems. We should not be arguing about the classical gold standard versus the Bretton Woods pegged exchange-rate system, as these are just variations on the more significant theme of gold convertibility and the role of government in regulating money.

We can’t even usefully revert to debating the old fixed-versus-flexible arguments that were part of Milton Friedman’s justification for freely floating rates in the 1960s; the theoretical models for both positions have been mugged by reality.

Instead, we should be talking about money itself — what is its basic purpose, its relationship with productive economic growth — and whether today’s dysfunctional international monetary regime deserves to be designated any kind of system at all.

As the former chief of the International Monetary Fund, Jacques de Larosiere, noted at a conference in February 2014 at Vienna, today’s central bank-dominated monetary arrangements foster “volatility, persistent imbalances, disorderly capital movements, currency misalignments.”

These, he warned, were all major factors in the explosion of credit and leverage that precipitated the 2008 global financial crisis. Such an unanchored approach, he said, does not amount to a “non-system” but something considerably worse: an “anti-system.”

It is time to think creatively about money. We need to remind ourselves what it means as a measure, how it facilitates voluntary commerce and opportunity — how it can lead to greater shared prosperity while remaining compatible with liberty, individualism, and free enterprise. We’re at a moment when everything is on the table. For the wisdom of central bank mechanisms for conducting monetary policy is being called into question just as private alternative monies are making ever more credible bids for legitimacy.

Looking back and looking ahead, we can see that the most relevant and stimulating views emphasize the importance of productive economic activity and an open global marketplace. Money’s crucial role is to provide clear price signals to optimize the rewards of entrepreneurial endeavor and increased human knowledge.

Adam Smith wrote his treatise “The Wealth of Nations” during an age when nations forged a global monetary system by defining their currencies in terms of precise weights of gold and silver. A level monetary playing field arising from a system inherently disciplined by forces outside the control of government — wherein the economic decisions of private individuals are not held hostage to the ambitions of politicians—served profoundly liberal goals such as rule of law, private property, and the equal protection of human rights.

Modern-day visionaries likewise focus on the integrity of market signals conveyed through money. When Elon Musk says, “I think about money as an information system,” he goes to the heart of money’s unit-of-account function and underscores the importance of price signal clarity. When he tweets that “goods and services are the real economy, any form of money is simply the accounting thereof,” he illuminates the same reasoning that caused our constitutional Framers to include the power to coin money and regulate the value of American money, and of foreign coin, in the same sentence of our Constitution that grants Congress the power to fix our standard of weights and measures.

Money is meant to be a reliable measure, a meaningful unit of account, and a dependable store of value. When those qualities are undermined — especially by government — for purposes of redirecting economic outcomes at the risk of global financial instability, the dynamism and productive potential of free-market forces is diminished.

Political arguments in favor of maintaining government control over the issuance of money tend to invoke short-term objectives couched in words such as “stimulus” and the need for central bank “support” for an economy. Such calls are met with somber warnings about long-term “unsustainability” from the monetary authorities who nevertheless indulge them.

“But thou shalt have a perfect and just weight, a perfect and just measure shalt thou have,” goes the passage from the Book of Deuteronomy (25:15), “that thy days may be lengthened in the land which the LORD thy God giveth thee.” The biblical injunction against dishonest measures can be interpreted as alluding to sustainability not only in economic terms but also in the moral realm.

As noted by Robert Bartley, editor of the editorial page of The Wall Street Journal for more than 30 years, economist Robert Mundell was correct in his assessment that the only closed economy is the world economy. It’s time to start building an ethical international monetary system.

________

Judy Shelton, an economist, is a senior fellow at the Independent Institute and author of “Money Meltdown.” Image: The conference room at the Mount Washington Hotel, Bretton Woods, New Hampshire, where, in 1944, the Bretton Woods Treaty was crafted. Via Wikipedia Commons.

The Gig Economy (sic)

The Gig Economy has merely exposed the lie that our labor is the most valuable asset we own. Rather, the value of our man-hours has depreciated drastically over the last 50 years. Much of this is due to the explosion in capital credit after Nixon abandoned the gold peg of the Bretton Woods international monetary regime in 1971. Cheap credit has driven capital-labor substitution, technological innovation, and productivity increases that have reduced the demand for labor, both skilled and unskilled. It’s made some of us richer.

The second contributing factor has been capital mobility under globalization and the economic liberalization of the developing world, which has vastly increased the supply of both skilled and unskilled labor. China and India have supplied that labor for the past 30 years, and Africa and South America are still in the pipeline.

The combined effect of these policies and geopolitical trends has driven the marginal price of labor down toward the subsistence level. We need to think outside this shrinking box. By the way, union organization will do nothing to reverse these trends as long as its focus remains on controlling the supply of labor and artificially raising wages. Asset ownership, risk sharing, and personal data ownership are key.

(Note: We can’t really expect Vanity Fair to tackle these issues.)

“What Have We Done?”: Silicon Valley Engineers Fear They’ve Created a Monster

Vanity Fair

In the heart of San Francisco, the gig economy reigns supreme. Walk into a grocery store, and a large number of shoppers you see are independent contractors for grocery-delivery start-up Instacart. Step outside, and cars with black-and-white Uber stickers or flashing Lyft dashboard lights are sitting, hazards on, blocking the bike lane as they wait for passengers. Cyclists zigzag around the cars, many hauling bags branded with various logos—Caviar, Postmates, Uber Eats—as they deliver food to customers around the city. You can stand on a street corner and count the number of gig-economy workers walking by, as I often do; sometimes it’s 2 out of every 10. On some corners, like the one near the Whole Foods on 4th and Harrison, I’ve counted 8 out of every 10.

The gig-economy ecosystem was supposed to represent the promised land, striking a harmonious egalitarian balance between supply and demand: consumers could off-load the drudgery of commuting or grocery shopping, while workers were set free from the Man. “Set your own schedule,” touts the Uber-driver Web site; “Be your own boss,” tempts Lyft; “Make an impact on people’s lives,” lures Instacart. These companies have been wildly successful: Uber, perhaps the most notorious, is also the most valuable start-up in the U.S., reportedly worth $72 billion. Lyft is valued at $11 billion, and grocery delivery start-up Instacart is valued at just over $4 billion. In recent months, however, a spate of lawsuits has highlighted an alarming by-product of the gig economy—a class of workers who aren’t protected by labor laws, or eligible for benefits provided to the rest of the nation’s workforce—evident even to those outside the bubble of Silicon Valley. A July report commissioned by the New York City Taxi and Limousine Commission found that 85 percent of New York City’s Uber, Lyft, Juno, and Via drivers earn less than $17.22 an hour. When the California Supreme Court ruled in May that delivery company Dynamex must treat its gig workers like full-time employees, Eve Wagner, an attorney who specializes in employment litigation, predicted to Wired, “The number of employment lawsuits is going to explode.”

Of course, the threads of this disillusionment are woven into the very structure that has made these start-ups so successful. A few weeks into my tenure at Uber, where I started as a software developer just a year after graduating from college, still blindly convinced I could make the world a better place, a co-worker sat down next to my desk. “There’s something you need to know,” she said in a low voice, “and I don’t want you to forget it. When you’re writing code, you need to think of the drivers. Never forget that these are real people who have no benefits, who have to live in this city, who depend on us to write responsible code. Remember that.”

I didn’t understand what she meant until several weeks later, when I overheard two other engineers in the cafeteria discussing driver bonuses—specifically, ways to manipulate bonuses so that drivers could be “tricked” into working longer hours. Laughing, they compared the drivers to animals: “You need to dangle the carrot right in front of their face.” Shortly thereafter, a wave of price cuts hit drivers in the Bay Area. When I talked to the drivers, they described how Uber kept fares in a perfectly engineered sweet spot: just high enough for them to justify driving, but just low enough that not much more than their gas and maintenance expenses were covered.

Those of us on the front lines of the gig economy were the first to spot and expose its flaws—two months after leaving Uber, I wrote a highly publicized account of my time there, describing the company’s toxic work environment in detail. Now, as Silicon Valley struggles to come to terms with its corrosive underpinnings, a new vein of disquiet has wormed its way into the Slack chats and happy-hour outings of low-level rank-and-file engineers, spurred by a question that seems to drown out everything else: What have we done? It’s a question that I, too, have been forced to grapple with as I notice how my job as a software engineer has changed the nature of work in general—and not necessarily for the better.

The risk, we agreed, is that the gig economy will become the only economy.

Gig-economy “platforms,” as they’re called, take their inspiration from software engineering, where the goal is to create modular, scalable software applications. To do this, engineers build small pieces of code that run concurrently, dividing a task into ever smaller pieces to conquer it more efficiently. Start-ups function in a similar way; tasks that used to make up a single job are broken down into the smallest possible code pieces, then partitioned so those pieces can be accomplished in parallel. It’s been a successful approach for start-ups for the same reason it’s a successful approach to writing code: it is perfectly, beautifully efficient. Across so-called platforms, there are no individuals—no bosses delegating tasks. Instead, various algorithms run on the platform, matching consumers with workers, riders with the nearest driver, and hungry customers with delivery people, telling them where to go, what to do, and how to do it. Constant needs and their quick solutions all hummingly, perpetually aligned.
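The dispatch step the passage describes — an algorithm, not a boss, pairing each request with the nearest available worker — can be reduced to a toy sketch. This is an illustration of the general idea only, not any company’s actual matching logic; the function and data here are invented for the example, and straight-line distance stands in for the road distances, ETAs, and pricing a real platform would use.

```python
from math import hypot

def match_nearest(request, drivers):
    """Return the id of the available driver closest to a request.

    request: (x, y) coordinates of the rider.
    drivers: dict mapping driver_id -> (x, y) for available drivers.
    """
    return min(drivers, key=lambda d: hypot(drivers[d][0] - request[0],
                                            drivers[d][1] - request[1]))

# Three available drivers at different positions.
drivers = {"d1": (0.0, 0.0), "d2": (5.0, 5.0), "d3": (1.0, 1.0)}
print(match_nearest((1.2, 0.9), drivers))  # prints "d3", the closest driver
```

The point of the sketch is how little supervisory judgment remains once the job is decomposed this far: the "manager" is a one-line `min` over distances, run millions of times a day.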

By now it’s clear that these companies represent more than a trend. Though it’s difficult to accurately determine the size of the gig economy—estimates range from 0.7 to 34 percent of the national workforce—the number grows with each new start-up that figures out how to break down another basic task. There’s a relatively low risk associated with launching gig-economy companies, start-ups that can engage in “a kind of contract arbitrage” because they “aren’t bearing the corporate or societal cost, even as they reap fractional or full-time value from workers,” explains Seattle-based tech journalist Glenn Fleishman. Thanks to this buffer, they’re almost guaranteed to multiply. As the gig economy grows, so too does the danger that engineers, in attempting to build the most efficient systems, will chop and dice jobs into pieces so dehumanized that our legal system will no longer recognize them. [Note: yes, labor contracts will be obsolete and meaningless, which means asset ownership is the only defensible right.] And along with this comes an even more sinister possibility: jobs that would and should be recognizable—especially supervisory and management positions—will disappear altogether. If a software engineer can write a set of programs that breaks a job into smaller increments, and can follow it up with an algorithm that fills in as the supervisor, then the position itself can be programmed to redundancy.

A few months ago, a lunchtime conversation with several friends turned to the subject of the gig economy. We began to enumerate the potential causes of worker displacement—things like artificial intelligence and robots, which are fast becoming a reality, expanding the purview of companies such as Google and Amazon. “The displacement is happening right under our noses,” said a woman sitting next to me, another former engineer. “Not in the future—it’s happening now.”

“What can we do about it?” someone asked. Another woman replied that the only way forward was for gig-economy workers to unionize, and the table broke out into serious debate [Labor contracts, union or otherwise, will be legally ill-defined and indefensible]. Yet even as we roundly condemned the tech world’s treatment of a vulnerable new class of worker, we knew the stakes were much higher: high enough to alter the future of work itself, to the detriment of all but a select few. “Most people,” I said, interrupting the hubbub, “don’t even see the problem unless they’re on the inside.” Everyone nodded. The risk, we agreed, is that the gig economy will become the only economy, swallowing up entire groups of employees who hold full-time jobs, and that it will, eventually, displace us all. The bigger risk, however, is that the only people who understand the looming threat are the ones enabling it. 

Was Quantitative Easing the Father of Millennial Socialism?

If you’ve been reading these pages for the past eight years, you know that central bank policy has been a constant refrain. The financial policies of the Fed over the past generation, under both Greenspan and Bernanke, have created a historic asset bubble with cheap credit. This has greatly aggravated wealth inequality and invited greater risks of both economic catastrophe and political chaos. We are still discovering where it leads. The eventual correction will likely be more painful than the original problem…

From the Financial Times:

Is Ben Bernanke the father of Alexandria Ocasio-Cortez? Not in the literal sense, obviously, but in the philosophical and political sense.

As we mark the 10th anniversary of the bull market, it is worth considering whether the efforts of the US Federal Reserve, under Mr Bernanke’s leadership, to avoid 1930s-style debt deflation ended up spawning a new generation of socialists, such as the freshman Congresswoman Ms Ocasio-Cortez, in the home of global capitalism.

Mr Bernanke’s unorthodox “cash for trash” scheme, otherwise known as quantitative easing, drove up asset prices and bailed out baby boomers at the profound political cost of pricing out millennials from that most divisive of asset markets, property. This has left the former comfortable, but the latter with a fragile stake in the society they are supposed to build. As we look towards the 2020 US presidential election, could Ms Ocasio-Cortez’s leftwing politics become the anthem of choice for America’s millennials?

But before we look forward, it is worth going back a bit. The 2008 crash itself didn’t destroy wealth, but rather revealed how much wealth had already been destroyed by poor decisions taken in the boom. This underscored the truism that the worst of investments are often taken in the best of times. Mr Bernanke, a keen student of the 1930s, understood that a “balance sheet recession” must be combated by reflating assets. By exchanging old bad loans on the banks’ balance sheets with good new money, underpinned by negative interest rates, the Fed drove asset prices skywards. Higher valuations fixed balance sheets and ultimately coaxed more spending and investment. [A sharp correction and reflation of solvent banks would have given asset speculators the correct lesson for their imprudent risks. Prudent investors would have had access to capital to purchase those assets at rational prices. Instead, we rewarded the profligate borrowers and punished the prudent.]

However, such “hyper-trickle-down” economics also meant that wealth inequality was not the unintended consequence, but the objective, of policy. Soaring asset prices, particularly property prices, drive a wedge between those who depend on wages for their income and those who depend on rents and dividends. This wages versus rents-and-dividends game plays out generationally, because the young tend to be asset-poor and the old and the middle-aged tend to be asset-rich. Unorthodox monetary policy, therefore, penalizes the young and subsidizes the old. When asset prices rise much faster than wages, the average person falls further behind. Their stake in society weakens. The faster this new asset-fuelled economy grows, the greater the gap between the insiders with a stake and outsiders without. This threatens a social contract based on the notion that the faster the economy grows, the better off everyone becomes. What then? Well, politics shifts.

Notwithstanding Winston Churchill’s observation about a 20-year-old who isn’t a socialist not having a heart, and a 40-year-old who isn’t a capitalist having no head, polling indicates a significant shift in attitudes compared with prior generations. According to the Pew Research Center, American millennials (defined as those born between 1981 and 1996) are the only generation in which a majority (57 per cent) hold “mostly/consistently liberal” political views, with a mere 12 per cent holding more conservative beliefs. Fifty-eight per cent of millennials express a clear preference for big government. Seventy-nine per cent of millennials believe immigrants strengthen the US, compared to just 56 per cent of baby boomers. On foreign policy, millennials (77 per cent) are far more likely than boomers (52 per cent) to believe that peace is best ensured by good diplomacy rather than military strength. Sixty-seven per cent want the state to provide universal healthcare, and 57 per cent want higher public spending and the provision of more public services, compared with 43 per cent of baby boomers. Sixty-six per cent of millennials believe that the system unfairly favors powerful interests.

One battleground for the new politics is the urban property market. While average hourly earnings have risen in the US by just 22 per cent over the past 9 years, property prices have surged across US metropolitan areas. Prices have risen by 34 per cent in Boston, 55 per cent in Houston, 67 per cent in Los Angeles and a whopping 96 per cent in San Francisco. The young are locked out.
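The squeeze can be made concrete with a back-of-the-envelope calculation using only the figures quoted above: dividing each city’s price growth by the 22 per cent wage growth gives the change in housing’s price-to-earnings ratio for a wage earner. (A rough sketch; it ignores compounding differences in timing within the nine-year window.)

```python
# Growth figures quoted in the article: home prices by metro vs.
# average hourly earnings, over the same roughly 9-year period.
wage_growth = 0.22
price_growth = {"Boston": 0.34, "Houston": 0.55,
                "Los Angeles": 0.67, "San Francisco": 0.96}

for city, g in price_growth.items():
    # How much housing's price-to-earnings ratio worsened for wage earners.
    ratio_change = (1 + g) / (1 + wage_growth) - 1
    print(f"{city}: housing is {ratio_change:.0%} less affordable")
```

By this measure San Francisco housing became roughly 61 per cent less affordable relative to wages, and even "cheap" Boston lost about 10 per cent.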

Similar developments in the UK have produced comparable political generational divides. If only the votes of the under-25s were counted in the last UK general election, not a single Conservative would have won a seat. Ten years ago, faced with the real prospect of another Great Depression, Mr Bernanke launched QE to avoid mass default. Implicitly, he was underwriting the wealth of his own generation, the baby boomers. Now the division of that wealth has become a key battleground for the next election with people such as Ms Ocasio-Cortez arguing that very high incomes should be taxed at 70 per cent.

For the purist, capitalism without default is a bit like Catholicism without hell. But we have confession for a reason. Everyone needs absolution. QE was capitalism’s confessional. But what if the day of reckoning was only postponed? What if a policy designed to protect the balance sheets of the wealthy has unleashed forces that may lead to the mass appropriation of those assets in the years ahead?

The Asset Divide

Below is a recent article explaining the growing wealth inequality based on asset ownership and control. This shouldn’t even be phrased as a question: our easy credit policies, massive real estate debt leverage, and favored housing policy have created an almost insurmountable wealth divide between the asset-rich and the asset-poor. Who, and what policies, do we think those left behind are going to be voting for? Non-gender bathrooms? See also Thomas Edsall’s article in the NYT.

Is Housing Inequality the Main Driver of Economic Inequality?

Richard Florida

A growing body of research suggests that inequality in the value of Americans’ homes is a major factor—perhaps the key factor—in the country’s economic divides.

Economic inequality is one of the most significant issues facing cities and entire nations today. But a mounting body of research suggests that housing inequality may well be the biggest contributor to our economic divides.

Thomas Piketty’s influential book, Capital in the Twenty-First Century, put economic inequality—and specifically, wealth inequality—front and center in the global conversation. But research by Matthew Rognlie found that housing inequality (that is, how much more expensive some houses are than others) is the key factor in rising wealth.

Rognlie’s research documented that the share of wealth or capital income derived from housing has grown significantly since around 1950, and substantially more than for other forms of capital. In other words, those uber-expensive penthouses, luxury townhomes, and other real estate holdings in superstar cities like London and New York amount to a “physical manifestation” of Piketty’s insights into wealth inequality, as Felix Salmon so aptly puts it.

More recent research on this topic by urban economists David Albouy and Mike Zabek documents the surge in housing inequality in the United States. Their study, published as a National Bureau of Economic Research working paper, charts the rise in housing inequality across the U.S. from the onset of the Great Depression in 1930 through the great suburban boom of the 1950s, 1960s, and 1970s, to the more recent back-to-the-city movement, the 2008 economic crash, and the subsequent recovery, up to 2012. They use data from the U.S. Census on both homeowners and renters.

Over the period studied, the share of owner-occupied housing rose from less than half (45 percent) to nearly two-thirds (65 percent), although it has leveled off somewhat since then. The median cost of a home tripled in real dollar terms, according to their analysis. Housing now represents a huge share of America’s total consumption, comprising roughly 40 percent of the U.S. total capital stock, and two-thirds of the wealth held by the middle class.

What Albouy and Zabek find is a clear U-shaped pattern in housing inequality (measured in terms of housing values) over this 80-year period. Housing inequality was high in 1930 at the onset of the Depression. It then declined, alongside income inequality, during the Great Compression and suburban boom of the 1950s and 1960s. It started to creep back up again after the 1970s. There was a huge spike by the 1990s, followed by a leveling off in 2000, and then another significant spike by 2012, in the wake of the recovery from the economic crisis of 2008 and the accelerating back-to-the-city movement.

By 2012, the level of housing inequality in the U.S. looked much the same as it did in the ’30s. Now as then, the most expensive 20 percent of owner-occupied homes account for more than half of total U.S. housing value.

Data by Albouy et al. Design by Madison McVeigh/CityLab

Rents show a different pattern. Rent inequality—the gap between what some pay in rent relative to others—was high in the 1930s, then declined dramatically until around 1960. Starting in about 1980, it began to increase gradually, but much less than housing inequality (based on owner-occupied homes) or income inequality. And much of this small rise in rental inequality seems to stem from expensive rental units in very expensive cities.

The study suggests this less severe pattern of rent inequality may be the result of measures like rent control and other affordable housing programs to assist lower-income renters, especially in expensive cities such as New York and San Francisco.

That said, there also is an additional and potentially large wealth gap between owners and renters. Homeowners are able to basically lock in their housing costs after purchasing their home, and benefit from the appreciation of their properties thereafter. Renters, on the other hand, see rents increase in line with the market, and sometimes faster. This threatens their ability to maintain shelter, while they accumulate no equity in the place where they live.
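The owner-renter wedge described above compounds over time: the owner’s payment is locked in while the renter’s tracks the market. A minimal sketch with illustrative numbers (the $2,000 starting payment and 3 per cent rent growth are assumptions for the example, not figures from the study):

```python
def cumulative_housing_cost(monthly, years, annual_growth=0.0):
    """Total paid over `years`, with the monthly payment growing by
    `annual_growth` each year (0 for a locked-in fixed-rate mortgage)."""
    total = 0.0
    for _ in range(years):
        total += monthly * 12
        monthly *= 1 + annual_growth
    return total

# Both households start at $2,000/month; only the renter's cost rises.
owner = cumulative_housing_cost(2000, 10)          # locked in at purchase
renter = cumulative_housing_cost(2000, 10, 0.03)   # rent tracks the market
print(round(renter - owner))  # extra paid by the renter over the decade
```

Under these assumptions the renter pays roughly $35,000 more over ten years — and, unlike the owner, accumulates no equity for it.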

***

But what lies behind this surge in housing inequality? Does it stem from the large housing-price differences between superstar cities and the rest, or does it stem from inequality within cities and metro areas—for instance, high-priced urban areas and suburban areas compared to less advantaged neighborhoods?

The Albouy and Zabek study considers three possible explanations: The change over time from smaller to larger housing units; geographic or spatial inequality between cities and metro areas; and economic segregation between rich and poor within metro areas.

Even as houses have grown bigger and bigger, with McMansions replacing bungalows and Cape Cods in many cities and suburbs since the 1930s (even as the size of households shrank), the study finds that, at best, 30 percent of the rise in housing inequality can be pegged to changes in the size of houses themselves.

Ultimately, the study concludes that the rise in both housing wealth and housing inequality stems mainly from the increase in the value of land. In other research, Albouy found that the value of America’s urban land was $25 trillion in 2010, roughly double the nation’s 2016 GDP.

But here’s the kicker: The main catalyst of housing inequality, according to the study, comes from the growing gap within cities and metro areas, not between them. The graph below shows the differences in housing inequality between “commuting zones”—geographic areas that share a labor market—over time. In it, you can see that inequality varies sharply within commuting zones (marked “CZ”) while it remains more or less constant between them.

In other words, the spatial inequality within metros is what drives housing inequality. Factors like safety, schools, and access to employment and local amenities lead individual actors to value one neighborhood over the next.

Data by Albouy et al. Design by Madison McVeigh/CityLab

All this forms a fundamental contradiction in the housing market. Housing is at once a basic mode of shelter and a form of investment. As this basic necessity has been transformed over time into a financial instrument and source of wealth, not only has housing inequality increased, but housing inequality has become a major contributor to—if not the major overall factor in—wealth inequality. When you consider the fact that what is a necessity for everyone has been turned into a financial instrument for a select few, this is no surprise.

The rise in housing inequality brings us face to face with a central paradox of today’s increasingly urbanized form of capitalism. The clustering of talent, industry, investment, and other economic assets in small parts of cities and metropolitan areas is at once the main engine of economic growth and the biggest driver of inequality. The ability to buy and own housing, much more than income or any other source of wealth, is a significant factor in the growing divides between the economy’s winners and losers.


Health Care Fantasies

A couple of articles today outlining how far apart from reality are the pro and con arguments for different possible reforms. This is going to matter at some point soon, if not now.

Socialized Medicine Has Won the Health Care Debate

The first article, by Sarah Jaffe, published in The New Republic, suggests that “socialized” healthcare has won the policy debate. Citing opinion polls (whose questions display a certain bias), the author claims that the American public favors government-run socialized medicine. (Here’s a good example of survey bias: “Do you favor free healthcare for all?” How many No’s do you think that question elicits?)

Ms. Jaffe explains away Obamacare’s unpopularity with this: “What people don’t like are the inequities that still prevail in our health care system, not the fact that government is too involved. … The law didn’t go too far for Americans to get behind. It didn’t go far enough. And while single-payer opponents continue to evoke rationed care, long lines and wait times, and other problems that supposedly plague England or Canada, the public seems well aware that the reality for many Americans is far worse.”

Really?

What’s more, what makes her think that government control removes inequalities rather than making them worse according to different selection criteria?

Finally, she proclaims, “This is now an American consensus. And if socialism is the medicine our system needs, the country is ready to embrace it—even by name.”

At no point does Ms. Jaffe discuss the associated costs, who is going to pay them, and what kind of trade-offs this will impose on citizens and taxpayers. This is an argument motivated by political ideology, not reality.

***

This brings us to the second article, by Sally Pipes in Investor’s Business Daily (this should give us a clue that Pipes actually plans to address money issues).

Sanders’ Single-Payer Fairy Tale

Ms. Pipes first gives us an indication of polling bias: “The idea is … enchanting ordinary Americans. Fifty-three percent support single payer, according to a June 2017 poll from the Kaiser Family Foundation. But this supposed support is a mirage. According to the same Kaiser poll, 62% would oppose single-payer if it gave the government too much power over health care. Sixty percent would reject it if it increased taxes.”

Sen. Sanders estimates that “Medicare for all” would cost an extra $14 trillion over 10 years, while the Urban Institute’s analysis of the plan puts the figure at $32 trillion. Our current annual health spending is $3.2 trillion, so Medicare at minimum would double that spending level, with no viable way to pay for it, with taxes or otherwise.
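The doubling claim follows directly from the figures quoted above. A quick arithmetic check (spreading each 10-year estimate evenly across the decade is a simplification, since actual costs would ramp up):

```python
# Figures quoted above, in trillions of dollars.
current_annual = 3.2  # current annual US health spending
estimates_10yr = {"Sanders": 14.0, "Urban Institute": 32.0}

for source, ten_year_cost in estimates_10yr.items():
    # Spread the 10-year added cost evenly and compare to current spending.
    ratio = (current_annual + ten_year_cost / 10) / current_annual
    print(f"{source}: {ratio:.2f}x current spending")
```

The Urban Institute figure of $32 trillion works out to $3.2 trillion per year of added cost — exactly equal to all current annual health spending, hence a doubling; even Sen. Sanders’s own lower estimate implies roughly a 44 percent increase.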

Medicare for the 65+ crowd is already a deficit buster, so the nation cannot afford such care for the entire population, and promises to do so are a dangerous fantasy. We do know what will happen: the “free” care we expect will never be delivered, and the politicians who sold such snake oil will be long gone.

The real problem with our health care debates is that they focus solely on distribution and ignore the question of adequate supply. If no one is producing health care goods, what is there to distribute?