What’s Goin’ On…

This essay by Victor Davis Hanson is worth reprinting in full (with citation). Our current politics is so focused on the Trump phenomenon that people miss the fact that this all started long before Trump set his sights on the presidency. Trump is a symptom, not a cause.

We may all have laudable goals for society, but it matters how we attain them.

Why Are the Western Middle Classes So Angry?

What is going on with the unending Brexit drama, the aftershocks of Donald Trump’s election and the “yellow vests” protests in France? What drives the growing estrangement of southern and eastern Europe from the European Union establishment? What fuels the anti-EU themes of recent European elections and the stunning recent Australian re-election of conservatives?

Put simply, the middle classes are revolting against Western managerial elites. The latter group includes professional politicians, entrenched bureaucrats, condescending academics, corporate phonies and propagandistic journalists.

What are the popular gripes against them?

One, illegal immigration and open borders have led to chaos. Lax immigration policies have taxed social services and fueled multicultural identity politics, often to the benefit of boutique leftist political agendas.

Two, globalization enriched the cosmopolitan elites who found worldwide markets for their various services. New global markets and commerce meant Western nations outsourced, offshored and ignored their own industries and manufacturing (or anything dependent on muscular labor that could be replaced by cheaper workers abroad).

Three, unelected bureaucrats multiplied and vastly increased their power over private citizens. The targeted middle classes lacked the resources to fight back against the royal armies of tenured regulators, planners, auditors, inspectors and adjustors who could not be fired and were never accountable.

Four, the new global media reached billions and indoctrinated rather than reported.

Five, academia became politicized as a shrill agent of cultural transformation rather than focusing on education — while charging more for less learning.

Six, utopian social planning increased housing, energy and transportation costs.

One common gripe framed all these diverse issues: The wealthy had the means and influence not to be bothered by higher taxes and fees or to avoid them altogether. Not so much the middle classes, who lacked the clout of the virtue-signaling rich and the romance of the distant poor.

In other words, elites never suffered the firsthand consequences of their own ideological fiats.

Green policies were aimed at raising fees on, and restricting the use of, carbon-based fuels. But proposed green belt-tightening among hoi polloi was not matched by a cutback in second and third homes, overseas vacations, luxury cars, private jets and high-tech appurtenances.

In education, government directives and academic hectoring about admissions quotas and ideological indoctrination likewise targeted the middle classes but not the elite. The micromanagers of Western public schools and universities often preferred private academies and rigorous traditional training for their own children. Elites relied on old-boy networks to get their own kids into colleges. Diversity administrators multiplied at universities while indebted students borrowed more money to pay for them.

In matters of immigration, the story was much the same. Western elites encouraged the migration of indigent, unskilled and often poorly educated foreign nationals who would ensure that government social programs — and the power of the elites themselves — grew. The champions of open borders made sure that such influxes did not materially affect their own neighborhoods, schools and privileged way of life.

Elites masked their hypocrisy by virtue-signaling their disdain for the supposedly xenophobic, racist or nativist middle classes. Yet the non-elite have experienced firsthand the impact on social programs, schools and safety from sudden, massive and often illegal immigration from Latin America, the Middle East, Africa and Asia into their communities.

As for trade, few still believe in “free” trade when it remains so unfair. Why didn’t elites extend to China their same tough-love lectures about global warming, or about breaking the rules of trade, copyrights and patents?

The middle classes became nauseated by the constant elite trashing of their culture, history and traditions, including the tearing down of statues, the Trotskyizing of past heroes, the renaming of public buildings and streets, and, for some, the tired and empty whining about “white privilege.”

If Western nations were really so bad, and so flawed at their founding, why were millions of non-Westerners risking their lives to reach Western soil?

How was it that elites themselves had made so much money, had gained so much influence, and had enjoyed such material bounty and leisure from such a supposedly toxic system — benefits that they were unwilling to give up despite their tired moralizing about selfishness and privilege?

In the next few years, expect more grassroots demands for the restoration of the value of citizenship. There will be fewer middle-class apologies for patriotism and nationalism. The non-elite will become angrier about illegal immigration, demanding a return to the idea of measured, meritocratic, diverse and legal immigration.

Because elites have no answers to popular furor, the anger directed at them will only increase until they give up — or finally succeed in their grand agenda of a non-democratic, all-powerful Orwellian state.

(C) 2019 TRIBUNE CONTENT AGENCY, LLC.

Victor Davis Hanson is a classicist and historian at the Hoover Institution, Stanford University. His latest book is The Savior Generals from Bloomsbury Books. You can reach him by e-mailing author@victorhanson.com.

The World’s Greatest Procrastinator: ADHD?

The inspiration for my book, Saving Mona Lisa. From the Smithsonian:

New Study Suggests Leonardo da Vinci Had A.D.H.D.

Despite his global fame, Leonardo da Vinci’s reputation as an artist is based on just 20 paintings still known to exist. While a few works have been lost or possibly destroyed over the centuries, there’s another reason we have so few genuine works by the master: the Italian artist was notorious for beginning and never completing artworks. He toiled on plans for the Sforza Horse, intended to be the largest cast bronze sculpture ever, off and on for 12 years before abandoning it. A commissioned mural of the Battle of Anghiari was plastered over when the master painter failed to complete the work. Some researchers even believe the Mona Lisa is unfinished, something mentioned by Leonardo’s first biographer.

Looking at the scant details of his life and his penchant for procrastinating and abandoning artworks, two neuroscientists have presented a possible reason for Leonardo’s behavior in the journal Brain. They suggest that the artist may have had attention deficit hyperactivity disorder (A.D.H.D.).

“While impossible to make a postmortem diagnosis for someone who lived 500 years ago, I am confident that A.D.H.D. is the most convincing and scientifically plausible hypothesis to explain Leonardo’s difficulty in finishing his works,” co-author Marco Catani of King’s College London says in a press release. “Historical records show Leonardo spent excessive time planning projects but lacked perseverance. A.D.H.D. could explain aspects of Leonardo’s temperament and his strange mercurial genius.”

In the paper, the researchers report that while Leonardo dedicated “excessive” time to planning out his ideas, his perseverance waned when it came to executing them. “Leonardo’s chronic struggle to distill his extraordinary creativity into concrete results and deliver on commitments was proverbial in his lifetime and present since early childhood,” they write.

In fact, in a biography of famous sculptors and painters, the first to include information about Leonardo, Giorgio Vasari writes an almost textbook definition of A.D.H.D.:

“in learning and in the rudiments of letters he would have made great proficiency, if he had not been so variable and unstable, for he set himself to learn many things, and then, after having begun them, abandoned them.”

When Leonardo was older and began apprenticing in the workshop of painter Andrea del Verrocchio in Florence, his inability to execute became more apparent. There, he received his first commissions, and though he planned the works extensively, he ultimately walked away from them. In 1478, he received his first commission as a solo painter for an altarpiece in the Chapel of San Bernardo. Despite taking an advance of 25 florins, Leonardo did not deliver.

This may explain why Leonardo stayed in Verrocchio’s workshop until the relatively advanced age of 26 while other painters set off on their own. When he left the atelier, it wasn’t as a painter, but as a musician working for the Duke of Milan.

When the Duke of Milan finally let Leonardo go after 20 years of service, the artist wrote in his diary that he had never finished any of the many projects the Duke had commissioned from him. Even the pope got on his case; after working for the Vatican for three years, he was dismissed by Pope Leo X, who exclaimed, “Alas! this man will never do anything, for he begins by thinking of the end of the work, before the beginning.”

Novelist and contemporary Matteo Bandello, who observed Leonardo during the time he worked on The Last Supper, provides one of the few glimpses we have of his work habits:

“I have also seen him, as the caprice or whim took him, set out at midday, […] from the Corte Vecchio, where he was at work on the clay model of the great horse, and go straight to the Grazie and there mount on the scaffolding and take up his brush and give one or two touches to one of the figures and suddenly give up and go away again”

Besides these biographical tidbits, Emily Dixon at CNN reports there are other signs of A.D.H.D. Leonardo is known to have worked continuously through the night, alternating cycles of short naps and waking. He was also left-handed, and some research indicates he may have been dyslexic, both of which are associated with A.D.H.D. At age 65, Leonardo suffered a left-hemisphere stroke, yet his language centers were left intact. That indicates that his language centers resided in the right hemisphere of his brain, a condition found in less than 5 percent of the population and prevalent in children with A.D.H.D. and other neurodevelopmental conditions.

While this study may feel like a slam-dunk diagnosis, Jacinta Bowler at ScienceAlert cautions that these types of postmortem diagnoses are always problematic. That’s because, in many cases, medical professionals don’t have the skills to properly critique or place into context historical documents and may interpret things incorrectly. And anecdotes, short biographies and diary entries are no substitute for a direct examination.

Graeme Fairchild of the department of psychology at the University of Bath tells Dixon at CNN that diagnosing Leonardo with A.D.H.D. could be a positive. It shows that “people with A.D.H.D. can still be incredibly talented and productive, even though they might have symptoms or behaviors that lead to impairment such as restlessness, poor organizational skills, forgetfulness and inability to finish things they start,” he says.

It also highlights the fact that the disorder affects adults too, not just children as some think. “For many people, A.D.H.D. is a lifelong condition rather than something they grow out of, and it certainly sounds like Leonardo da Vinci had major problems in many of these areas throughout his life,” says Fairchild.

Leonardo recognized his difficulties with time and project management and sometimes teamed up with other people to get things done. But he also beat himself up for what he saw as his lack of discipline. Even at the end of his life, he regretted his failures and reportedly said “that he had offended God and mankind in not having worked at his art as he should have done.”

Catani tells Kate Kelland at Reuters that Leonardo could serve as the poster child for A.D.H.D., which in the public mind is often associated with low IQ or misbehaving children. He says there are many successful people with the problem, and they can be even more successful if they learn how to manage or treat the disorder.

“Leonardo considered himself as someone who had failed in life – which is incredible,” he says. “I hope (this case) shows that A.D.H.D. is not linked to low IQ or lack of creativity, but rather the difficulty of capitalizing on natural talents.”

In fact, recent research indicates that adults with A.D.H.D. are often more creative than those without, giving them a leg up in certain fields.

Tweet, Tweet, Twitter…

I repost this article in full because I think it makes very good points about what people are doing on Twitter and why. Focusing on social behaviors and instincts helps us understand where social media needs to go. Republished from The New Atlantis:

The Emergent Order of Twitter

Why the platform should be fixed from the bottom up, not the top down

Andy Smarick

When a set of arrangements is making people miserable, coercion is often a big part of the explanation. Think of authoritarianism, discrimination, or vigilantism, where individuals suffer because of conditions they can’t change, imposed by others possessing power.

But in some cases, incentives, not coercion, are to blame. This happens often in markets and in personal relationships — and it’s true also of Twitter. The environment is such that free people, making individually rational decisions, harm themselves and the group as a whole, creating suboptimal but — paradoxically — highly stable outcomes. History, economics, psychology, and sociology are rife with examples. Or, looking to game theory, we might say that Twitter is a dilemma in which we are all prisoners.
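
[Blogger’s note: the “we are all prisoners” framing can be made concrete with a toy payoff matrix. A minimal Python sketch follows; the payoff numbers are mine, chosen purely for illustration, not anything from the article:]

```python
# Toy prisoner's dilemma: each user chooses restraint or outrage-posting.
# Individually rational choices produce a stable but mutually worse outcome.

PAYOFFS = {  # (my_choice, their_choice) -> my payoff
    ("restrain", "restrain"): 3,  # civil discourse: decent for both
    ("restrain", "outrage"):  0,  # I'm drowned out while they go viral
    ("outrage",  "restrain"): 5,  # I go viral at their expense
    ("outrage",  "outrage"):  1,  # the familiar shouting match
}

def best_response(their_choice: str) -> str:
    """The choice that maximizes my payoff, given the other user's choice."""
    return max(("restrain", "outrage"),
               key=lambda mine: PAYOFFS[(mine, their_choice)])

# Whatever the other player does, outrage pays better for me...
assert best_response("restrain") == "outrage"
assert best_response("outrage") == "outrage"
# ...so (outrage, outrage) is the stable equilibrium, even though mutual
# restraint (3, 3) would leave both players better off than (1, 1).
```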

When misery is caused by coercion, the solution is typically straightforward: Stop those with illegitimate power from hurting people. But when misery is caused by voluntary activity, the proper intervention is less clear. Respect for liberty generally requires avoiding the use of a central authority — whether the state or Silicon Valley algorithmic overlords — to override the lawful, morally permissible choices of individuals. Even when there is general agreement that people’s choices are causing damage to themselves or others, authorizing an authority to intervene also means authorizing it to decide what the right outcomes are, what constitutes enough social “damage” to justify intervention, what kinds of penalties should be applied and when, and so on. State authorities, when granted such power, may well go on to claim they’ve found negative externalities warranting suppression of people’s choices in other areas — how they spend their income, where they live, which organizations they join, how they raise their children. There is always a technocrat, a redistributionist, or a “nudger” convinced that the world would be much improved if her learning and sense of justice could replace everyone else’s.

No one is forced to use Twitter. It is a mess founded on voluntary choices. So, although it may be doing harm to individuals, degrading public discourse and social norms, we should begin by appreciating that its users must be currently assessing that their participation provides them greater benefits than costs.

A fruitful approach might therefore not be to bemoan Twitter’s downsides or to infringe upon individuals’ liberty to speak and associate in this way, but to start by understanding what the utility is that keeps people on the platform. We may then appreciate that Twitter is bound to change — perhaps even to fix itself — as users change their assessment. Our aim should be to seek not engineering or policy solutions but a gradual, organic transformation of the platform by the users themselves.

What benefits does Twitter offer its users? Obviously, it is a way to hear about news and opinions. It also helps us to manage FOMO, the fear of missing out on trends and memes and fun things our friends are doing. And it gives us a chance to self-promote and virtue-signal. Although these are usually derogatory terms, they can also simply acknowledge that we have a need to be recognized for our worth and to be seen as on the side of the angels. Many journalists and other content providers are also under pressure from business managers to prioritize “engagement,” which manifests in everything from clickbait headlines to provocative content to engaging directly, if often pointlessly, with users on social media. Twitter also serves as a virtually cost-free venting mechanism, catharsis at the fingertips. Your fury can be decompressed almost instantaneously with nothing but a few keystrokes.

These benefits have a common feature. They all enable us to feel like we matter — that we are part of something, that we’re being heard, that we’re on the right side. In an era of profound dislocation, Twitter offers something resembling community. We can find our tribe and our anti-tribe. We can speak and get a reaction. By simply typing a few words and hitting “tweet,” we are given voice. With every reply, like, retweet, and new follower we are given a sense of efficacy. The prospect of our meme or witty retort going viral offers us the potential of mattering a great deal.

Unfortunately, with Twitter the costs of bad behavior are generally delayed or are felt by individuals other than the actor: It’s the target of the outrage mob rather than the instigator who loses her job; the full consequences of destroying social norms are only felt far down the line. So the typical user’s short-term cost–benefit analysis approves more tweeting and fails to warn against Twitter’s anti-social use.

Of course, we make many of our decisions in less analytical and more impulsive ways, especially when we are feeling anxious, disconnected, or under assault. Splurging on that pricey item, yelling at a friend, or relapsing into an addiction doesn’t make sense in the long term, but by a calculation in the moment it does make sense, when the benefits feel so immediate and exaggerated, and the costs so abstract and distant. Similarly, our hunger for meaning and connection is so acute in this historical moment that we inflate social media’s immediate gains and discount its future costs.
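
[Blogger’s note: the in-the-moment calculation described here is just steep temporal discounting. A small Python sketch with made-up magnitudes, mine rather than the author’s:]

```python
# Steep discounting makes an immediate benefit outweigh a much larger
# delayed cost, so the impulsive act "makes sense" in the moment.

def present_value(amount: float, delay_days: float, daily_rate: float = 0.05) -> float:
    """Felt value today of a payoff or cost arriving delay_days from now."""
    return amount / (1 + daily_rate) ** delay_days

venting_now = present_value(10, delay_days=0)      # catharsis, felt immediately
fallout_later = present_value(-50, delay_days=90)  # damaged norms and relationships

print(f"felt value today: {venting_now + fallout_later:+.1f}")  # comes out positive
print(f"undiscounted sum: {10 - 50:+d}")                        # a net loss overall
```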

However, because so many users have had years of experience with Twitter, its corrosive consequences, once far on the horizon, can now be felt by many of us. We have witnessed its depressive and isolating effects, and we have seen how it harms relationships and civil discourse. We should recognize the platform’s trouble with profits in recent years as a lagging indicator of its social costs.

The good news is that, because voluntary systems allow for a gradual, evolutionary process of self-reform, we can expect that behavior on Twitter may improve on its own.

First, voluntary associations, unlike systems of coercion, include participants’ right of exit. Those engaged can disengage at any time and for any reason. Second, norms and traditions are highly malleable, since they are not encoded in legislation. Dissenters can arise at any moment to challenge them. The shifting views and actions of countless individuals then continuously remold the communities and systems of which they are part. Consider customs related to manners, courting, chivalry, child-rearing, and so on. These didn’t change suddenly or at the direction of a central authority; shifts were organic and gradual, but their influence was systemic in scale.

So while great attention has been paid to Twitter’s official terms of use and its enforcement thereof, the more important and lasting changes will almost certainly be brought about by individuals’ changes in behavior resulting from their recalibrated cost–benefit analysis. Twitter will change as some users drop out and others decide to reshape its norms of conduct — both actions that lead users to make ever new assessments about how to use Twitter, and whether using it at all is worth it.

There have already been high-profile examples of fed-up media figures, politicians, and celebrities quitting Twitter. These instances of exit are likely to be bellwethers, not outliers: Other users will likely follow suit, and this could cause a cascade. A platform with fewer users and less attention offers the remaining users less voice, efficacy, and sense of community. The benefit part of the cost–benefit ratio will drop — which will make remaining users less willing to bear the costs, perhaps decreasing their willingness to tolerate bad behavior.
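
[Blogger’s note: the cascade logic here is essentially a Granovetter-style threshold model. A toy Python sketch; the threshold distribution and the starting point are my assumptions, not the author’s:]

```python
# Each user stays only while the share of users still active (a proxy for
# voice, efficacy, and community) exceeds a personal tolerance threshold.
# Thresholds here are spread uniformly between 20% and 100%.

def still_active(share: float) -> float:
    """Fraction of users whose threshold lies below the current active share."""
    return max(0.0, min(1.0, (share - 0.2) / 0.8))

share = 0.95  # a few high-profile departures nudge activity below 100%
for step in range(1, 15):
    share = still_active(share)
    print(f"round {step:2d}: {share:5.1%} still active")

# A small initial exit feeds on itself: less activity means less benefit,
# pushing more users past their threshold, so the trickle becomes a cascade.
```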

Gradual changes in norms on the platform could lead to even more significant improvements in the average Twitter user’s experience. For instance, there is currently a clear incentive to join a mob vilifying someone who has done something you find objectionable. Doing so is virtually costless to the participant, and it can contribute to the effort to get the offender chastened or fired. But a change of certain norms might well produce a new sense of proportion. People might become less willing to offer gut-wrenching public apologies for minor infractions, and employers might become more willing to privately admonish and forgive transgressors. If so, the wind would be taken out of the mob’s sails. Similarly, if Twitter users begin reprimanding those who dredge up a public figure’s embarrassing tweets from when she was 14, that practice could disappear. Or, if journalists agree to stop engaging with anyone disrespectful or anonymous, disrespect and anonymity could decrease.

A social-media community is not an institution, a forced or planned entity instituted by a powerful authority. It is more like a garden; it forms organically, with decentralized tending rather than centralized direction. The path to altering an institution is clear: Whatever powerful device, such as legislation or regulation, was employed to bring it about can be employed to change it. Organic formations, on the other hand, emerge through voluntary responses to conditions and incentives. And they evolve because of voluntary responses to changes in these conditions and incentives. If we want Twitter and social media to change, we need to approach the problem more like gardeners, not engineers.


Andy Smarick (@smarick) is the Director of Civil Society, Education, and Work at the R Street Institute. He has tweeted more than 55,000 times.

Was Quantitative Easing the Father of Millennial Socialism?

If you’ve been reading these pages for the past 8 years, you know that central bank policy has been a constant refrain. The financial policies of the Fed for the past generation under both Greenspan and Bernanke have created a historic asset bubble with cheap credit. This has greatly aggravated wealth inequality and invited greater risks of both economic catastrophe and political chaos. We’re still finding out where it leads. The eventual correction will likely be more painful than the original problem…

From the Financial Times:

Is Ben Bernanke the father of Alexandria Ocasio-Cortez? Not in the literal sense, obviously, but in the philosophical and political sense.

As we mark the 10th anniversary of the bull market, it is worth considering whether the efforts of the US Federal Reserve, under Mr Bernanke’s leadership, to avoid 1930s-style debt deflation ended up spawning a new generation of socialists, such as the freshman Congresswoman Ms Ocasio-Cortez, in the home of global capitalism.

Mr Bernanke’s unorthodox “cash for trash” scheme, otherwise known as quantitative easing, drove up asset prices and bailed out baby boomers at the profound political cost of pricing out millennials from that most divisive of asset markets, property. This has left the former comfortable, but the latter with a fragile stake in the society they are supposed to build. As we look towards the 2020 US presidential election, could Ms Ocasio-Cortez’s leftwing politics become the anthem of choice for America’s millennials?

But before we look forward, it is worth going back a bit. The 2008 crash itself didn’t destroy wealth, but rather revealed how much wealth had already been destroyed by poor decisions taken in the boom. This underscored the truism that the worst of investments are often taken in the best of times. Mr Bernanke, a keen student of the 1930s, understood that a “balance sheet recession” must be combated by reflating assets. By exchanging old bad loans on the banks’ balance sheets with good new money, underpinned by negative interest rates, the Fed drove asset prices skywards. Higher valuations fixed balance sheets and ultimately coaxed more spending and investment. [A sharp correction and reflation of solvent banks would have given asset speculators the correct lesson for their imprudent risks. Prudent investors would have had access to capital to purchase those assets at rational prices. Instead, we rewarded the profligate borrowers and punished the prudent.]

However, such “hyper-trickle-down” economics also meant that wealth inequality was not the unintended consequence, but the objective, of policy. Soaring asset prices, particularly property prices, drive a wedge between those who depend on wages for their income and those who depend on rents and dividends. This wages versus rents-and-dividends game plays out generationally, because the young tend to be asset-poor and the old and the middle-aged tend to be asset-rich. Unorthodox monetary policy, therefore, penalizes the young and subsidizes the old. When asset prices rise much faster than wages, the average person falls further behind. Their stake in society weakens. The faster this new asset-fuelled economy grows, the greater the gap between the insiders with a stake and outsiders without. This threatens a social contract based on the notion that the faster the economy grows, the better off everyone becomes. What then? Well, politics shifts.

Notwithstanding Winston Churchill’s observation about a 20-year-old who isn’t a socialist not having a heart, and a 40-year-old who isn’t a capitalist having no head, polling indicates a significant shift in attitudes compared with prior generations. According to the Pew Research Center, American millennials (defined as those born between 1981 and 1996) are the only generation in which a majority (57 per cent) hold “mostly/consistently liberal” political views, with a mere 12 per cent holding more conservative beliefs. Fifty-eight per cent of millennials express a clear preference for big government. Seventy-nine per cent of millennials believe immigrants strengthen the US, compared to just 56 per cent of baby boomers. On foreign policy, millennials (77 per cent) are far more likely than boomers (52 per cent) to believe that peace is best ensured by good diplomacy rather than military strength. Sixty-seven per cent want the state to provide universal healthcare, and 57 per cent want higher public spending and the provision of more public services, compared with 43 per cent of baby boomers. Sixty-six per cent of millennials believe that the system unfairly favors powerful interests.

One battleground for the new politics is the urban property market. While average hourly earnings have risen in the US by just 22 per cent over the past 9 years, property prices have surged across US metropolitan areas. Prices have risen by 34 per cent in Boston, 55 per cent in Houston, 67 per cent in Los Angeles and a whopping 96 per cent in San Francisco. The young are locked out.
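
[Blogger’s note: taking the article’s own numbers, the arithmetic of the squeeze is easy to show. A quick Python sketch using only the growth rates quoted above:]

```python
# How much the price-to-income ratio moved against wage earners over the
# nine years in question, using the growth rates quoted in the article.

wage_growth = 0.22
price_growth = {"Boston": 0.34, "Houston": 0.55,
                "Los Angeles": 0.67, "San Francisco": 0.96}

for city, g in price_growth.items():
    squeeze = (1 + g) / (1 + wage_growth)  # ratio of price index to wage index
    print(f"{city}: price-to-income ratio now {squeeze:.2f}x its former level")

# San Francisco comes out around 1.61x: homes absorb roughly 61% more
# years of income than they did nine years earlier.
```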

Similar developments in the UK have produced comparable political generational divides. If only the votes of the under-25s were counted in the last UK general election, not a single Conservative would have won a seat. Ten years ago, faced with the real prospect of another Great Depression, Mr Bernanke launched QE to avoid mass default. Implicitly, he was underwriting the wealth of his own generation, the baby boomers. Now the division of that wealth has become a key battleground for the next election with people such as Ms Ocasio-Cortez arguing that very high incomes should be taxed at 70 per cent.

For the purist, capitalism without default is a bit like Catholicism without hell. But we have confession for a reason. Everyone needs absolution. QE was capitalism’s confessional. But what if the day of reckoning was only postponed? What if a policy designed to protect the balance sheets of the wealthy has unleashed forces that may lead to the mass appropriation of those assets in the years ahead?

Suburbs vs. Urbans

Our modern politics are defined by the locational choices of urban, suburban, and rural communities. None of these dominates the others, nor should it, as we enjoy each during different phases of our lives. Our parties wage a senseless war over this. By Joel Kotkin in The Daily Beast:

The Democrats Finally Won the Suburbs. Now Will They Destroy Them?

Whatever the thinkers say people should do, what they keep doing, given the choice, is moving to places where they can live in single-family homes.


Financial Moral Hazard

If we believe this Houdini act, then we have only ourselves to blame.

How Many Bank Bailouts Can America Withstand?

The architects of the 2008 rescues pretend they’ve been vindicated.

Ten years after the financial crisis of 2008, the architects of the bailouts are still describing their taxpayer-backed rescues of certain financial firms as great products which were poorly marketed to the American people. The American people still aren’t buying.

A decade ago, federal regulators were in the midst of a series of unpredictable and inconsistent interventions in the financial marketplace. After rescuing creditors of the investment bank Bear Stearns and providing a partial rescue of its shareholders in March of 2008, the feds then shocked markets six months later by allowing the larger Lehman Brothers to declare bankruptcy. Then regulators immediately swerved again to take over insurer AIG and use it as a vehicle to rescue other financial firms.

Within days legislative drafts were circulating for a new bailout fund that would become the $700 billion Troubled Asset Relief Program. Throughout that fall of 2008 and into 2009, the government continued to roll out novel inventions to support particular players in the financial industry and beyond. Some firms received assistance on better terms than others and of course many firms, especially small ones outside of banking, received no help at all.

In the fall of 2008, Ben Bernanke chaired the Federal Reserve, Timothy Geithner ran the New York Fed and Hank Paulson served as U.S. Treasury secretary. Looking back now, the three bailout buddies have lately been congratulating themselves for doing a dirty but important job. They recently wrote in the New York Times:

Many of the actions necessary to stem the crisis, including the provision of loans and capital to financial institutions, were controversial and unpopular. To us, as to the public, the responses often seemed unjust, helping some of the very people and firms who had caused the damage. Those reactions are completely understandable, particularly since the economic pain from the panic was devastating for many.

The paradox of any financial crisis is that the policies necessary to stop it are always politically unpopular. But if that unpopularity delays or prevents a strong response, the costs to the economy become greater. We need to make sure that future generations of financial firefighters have the emergency powers they need to prevent the next fire from becoming a conflagration.

The authors say that their actions saved the United States and the world from catastrophe, but of course this claim cannot be tested. We’ll never get to run the alternative experiment in which investors and executives all have to live with the consequences of their investments. But Stanford economist John Taylor has made the case that massive ad hoc federal interventions were among the causes of the conflagration. On the fifth anniversary of the crisis he noted that in 2008 markets deteriorated as the government was taking a more active role in the financial economy, which may have contributed to a sense of panic:

…the S&P 500 was higher on September 19—following a week of trading after the Lehman Brothers bankruptcy—than it was on September 12, the Friday before the bankruptcy. This indicates that some policy steps taken after September 19 worsened the problem… Note that the stock market crash started at the time TARP was being rolled out… When former Treasury Secretary Hank Paulson appeared on CNBC on the fifth anniversary of the Lehman Brothers failure, he said that the markets tanked, and he came to the rescue; effectively, the TARP saved us. Appearing on the same show minutes later, former Wells Fargo chairman and CEO Dick Kovacevich—observing the same facts in the same time—said that the TARP… made things worse.

CNBC reported at the time on its Kovacevich interview:

TARP caused the crisis to get “much greater,” he added.

“Shortly after TARP, the stock market fell by 40 percent,” he continued. “And the banking industry stocks fell by 80 percent. How can anyone say that TARP increased the confidence level of an industry, when its stock market valuation fell by 80 percent.”

Perhaps the argument can never be resolved. What is known but is conveniently left out of the Times op-ed is an acknowledgment of the role that regulators played in creating the crisis by encouraging financial firms to invest in mortgage debt, to operate with high leverage and to expect help in a crisis. The Times piece includes no mention of Mr. Bernanke and his Fed colleagues holding interest rates too low for too long, or the massive risks at Citigroup overseen by Mr. Geithner’s New York Fed, or the mortgage bets at AIG approved by the Office of Thrift Supervision at Mr. Paulson’s Treasury Department.

Foolish regulators creating bad incentives was nothing new, though Beltway blunders had rarely if ever occurred on such a scale. What was of course most shocking for many Americans in 2008 was observing so many of their tax dollars flowing into the coffers of large financial institutions. For months both the financial economy and the real economy suffered as Washington continued its ad hoc experiments favoring one type of firm or another.

In 2009 markets began to recover and, thanks in no small part to years of monetary expansion by the Federal Reserve, stock investors enjoyed a long boom. But when it comes to economic growth and wages for the average worker there was no such boom, just an era of discouraged Americans leaving the labor force. And by keeping interest rates near zero for years, the Fed punished savers and enabled an historic binge of government borrowing.


That federal borrowing binge was also enabled by the rescue programs. The basic problem was that once Washington said yes to bailing out large financial houses, politicians could hardly say no to anyone else. It was no coincidence that just months after enacting the $700 billion TARP, lawmakers enacted an $800 billion stimulus plan. So began the era of trillion-dollar annual deficits. Since the fall of 2008, federal debt has more than doubled and now stands at more than $21 trillion.


The expansion of government also included record-setting levels of regulation, which limited economic growth. A financial economy heavily distorted by federal housing policy was cast as the free market that failed, and decision-making affecting every industry was further concentrated in Washington.

Messrs. Bernanke, Geithner and Paulson make the case that they saved the financial system but failed to sell the public on the value of their interventions. It’s a sale that can never be made. Even if the bailouts hadn’t led to an era of diminished opportunity and skyrocketing federal debt, Americans would have resisted the idea that our system requires occasional instant welfare programs for wealthy recipients chosen by un-elected wise men.

The bailout buddies are now urging the creation of more authorities for regulators to stage future bailouts. The Trump administration should do the opposite, so that bank investors finally understand they will get no help in a crisis.

This column isn’t sure how many bailouts of financiers the American political system can withstand but is certain that such efforts will never be welcomed by non-financiers.

***


Bank Bailout 3.0

I’d have to agree with this. As we’ve said all along, saving the banking system was necessary, saving the bankers was not. Now we’re set up for the next bailout of the financial elite. What a great casino this is: heads they win, tails we lose.

The bank bailout of 2008 was unnecessary. Fed Chairman Ben Bernanke scared Congress into it

By Dean Baker

This week marked 10 years since the harrowing descent into the financial crisis — when the huge investment bank Lehman Bros. went into bankruptcy, with the country’s largest insurer, AIG, about to follow. No one was sure which financial institution might be next to fall.

The banking system started to freeze up. Banks typically extend short-term credit to one another for a few hundredths of a percentage point more than the cost of borrowing from the federal government. This gap exploded to 4 or 5 percentage points after Lehman collapsed. Federal Reserve Chair Ben Bernanke — along with Treasury Secretary Henry Paulson and Federal Reserve Bank of New York President Timothy Geithner — rushed to Congress to get $700 billion to bail out the banks. “If we don’t do this today we won’t have an economy on Monday,” is the line famously attributed to Bernanke.
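
[Blogger’s note: the “gap” described here is an interbank credit spread, usually quoted in basis points (1 bp = 0.01 percentage point). A minimal Python sketch; the rate levels are placeholders I chose, and only the sizes of the gaps come from the article:]

```python
# Express the interbank premium over government borrowing in basis points.

def spread_bps(interbank_rate_pct: float, government_rate_pct: float) -> float:
    """Spread between two annualized rates, converted to basis points."""
    return (interbank_rate_pct - government_rate_pct) * 100

print(f"{spread_bps(2.05, 2.00):.0f} bps")  # normal times: a few hundredths of a point
print(f"{spread_bps(6.50, 2.00):.0f} bps")  # post-Lehman: the 4-5 point gap, ~450 bps
```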

The trio argued to lawmakers that without the bailout, the United States faced a catastrophic collapse of the financial system and a second Great Depression.

Neither part of that story was true.

Still, news reports on the crisis raised the prospect of empty ATMs and checks uncashed. There were stories in major media outlets about the bank runs of 1929.

No such scenario was in the cards in 2008.

Unlike 1929, we have the Federal Deposit Insurance Corporation. The FDIC was created precisely to prevent the sort of bank runs that were common during the Great Depression and earlier financial panics. The FDIC is very good at taking over a failed bank to ensure that checks are honored and ATMs keep working. In fact, the FDIC took over several major banks and many minor ones during the Great Recession. Business carried on as normal and most customers — unless they were following the news closely — remained unaware.

Had bank collapses been more widespread, stretching the FDIC staff thin, it is certainly possible that there would have been glitches. This could have led to some inability to access bank accounts immediately, but that inconvenience would most likely have lasted days, not weeks or months.

Following the collapse of Lehman Bros., however, the trio promoting the bank bailout pointed to a specific panic point: the commercial paper market. Commercial paper is short-term debt (30 to 90 days) that companies typically use to finance their operations. Without being able to borrow in this market, even healthy companies not directly affected by the financial crisis, such as Boeing or Verizon, would have been unable to meet their payroll or pay their suppliers. That really would have been a disaster for the economy.

However, a $700-billion bank bailout wasn’t required to restore the commercial paper market. The country discovered this fact the weekend after Congress approved the bailout, when the Fed announced a special lending facility to buy commercial paper, ensuring the availability of credit for businesses.

Without the bailout, yes, bank failures would have been more widespread and the initial downturn in 2008 and 2009 would have been worse. We were losing 700,000 jobs a month following the collapse of Lehman. Perhaps this would have been 800,000 or 900,000 a month. That is a very bad story, but still not the makings of an unavoidable depression with a decade of double-digit unemployment.

The Great Depression ended because of the massive government spending needed to fight World War II. But we don’t need a war to spend money. If the private sector is not creating enough demand for workers, the government can fill the gap by spending money on infrastructure, education, healthcare, childcare or many other needs.

There is no plausible story where a series of bank collapses in 2008-2009 would have prevented the federal government from spending the money needed to restore full employment. The prospect of Great Depression-style joblessness and bread lines was just a scare tactic used by Bernanke, Paulson and other proponents of the bailout to get the political support needed to save the Wall Street banks.

This kept in place the bloated financial structure that had developed over the previous three decades. And it allowed the bankers who got rich off the risky financial practices that led to the crisis to avoid the consequences of their actions.

While an orderly transition would have been best, if the market had been allowed to work its magic, we could have quickly eliminated bloat in the financial sector and sent the unscrupulous Wall Street banks into the dustbin of history. Instead, millions of Americans still suffered through the Great Recession, losing homes and jobs, and the big banks are bigger than ever. Saving the banks became the priority of the president and Congress. Saving people’s homes and jobs mattered much less or not at all.

Dean Baker is senior economist at the Center for Economic and Policy Research and the author of “Rigged: How Globalization and the Rules of the Modern Economy Were Structured to Make the Rich Richer.”


Generalists vs. Specialists

Specialization has led to great economic gains over the course of civilization. But specialization as such is an economic imperative, not a humanistic or evolutionary one. Evolution, as this article argues, may strongly favor the generalist capabilities of a species. And generalism rewards the individual with humanistic benefits, if not material gains.

As an unrepentant generalist across many disciplines, I especially appreciate the intangible benefits and payoffs (if not the monetary tradeoffs!).

I’m reminded of the academic distinction: a generalist is somebody who knows nothing about everything, while a specialist is one who knows everything about nothing.

The Generalist Specialist: Why Homo Sapiens Succeeded

By Gemma Tarlach | July 30, 2018

Being a generalist specialist, a unique niche, is the hallmark of our species, say researchers — and the reason Homo sapiens (left) are still around but other hominins, including Neanderthals (right), are not. (Credit: Wikimedia Commons)

Some animals are jacks of all trades, some masters of one. Homo sapiens, argues a provocative new commentary, are an evolutionary success story because our ancestors pulled off a unique feat: being masterly jacks of all trades. But is this ecological niche, the generalist specialist, the real reason our species is the last hominin standing?

When paleoanthropologists and archaeologists define what makes our species unique, they usually focus on our use of symbolism and language, as well as our skills in social networking (long before Facebook) and technological innovation. Those arguments for human exceptionalism have been challenged in recent years, however, as researchers have uncovered evidence that other members of the genus Homo, notably Neanderthals, were capable of similar cognitive processes, from artistic expression to producing fire at will.

But maybe, say two researchers, we got it wrong. What defines our species, and has allowed H. sapiens to survive and even thrive after all other hominins went extinct, is not about making better stone projectiles, or networking, or sprucing up the cave walls with a little ochre artwork. We’re the last hominins on Earth because we’re really good at adapting to a huge range of environments, including the extreme.

Over The River And Through The Woods (And The Tundra, And The Desert…)

To make their case, researchers mapped out the likely ranges of archaic members of the genus Homo according to current fossil, paleoenvironmental and archaeological evidence. Being a fan of the scientific method, I think it’s worth noting here that this map almost certainly will change as new finds turn up. But for now, working with the best body of evidence we’ve got, it’s clear that early H. sapiens, once they left Africa, seemed to explode across the Old World, moving into territory previously occupied by one or at most two other hominin species.

A map of the estimated ranges of archaic members of the genus Homo, spanning the period H. sapiens emerged in Africa and dispersed across the rest of the Old World, roughly 60,000-300,000 years ago. (Credit: Roberts and Stewart, 2018. Defining the ‘generalist specialist’ niche for Pleistocene Homo sapiens. Nature Human Behaviour. 10.1038/s41562-018-0394-4)

To be clear, there is good evidence that other hominins called extreme environments home. Denisovans appear to have adapted to high-altitude life in Central Asia, for example, while diminutive H. floresiensis was at home in equatorial island rainforests. It’s been argued, heatedly (no pun intended), that Neanderthals were high-latitude specialists. But only H. sapiens turn up in all of those environments. What might not be immediately evident from the map is that early H. sapiens dispersal wasn’t just about setting foot on a new continent; it was also about moving into new and often extremely challenging environments, from deserts to arctic climes, from treeless, high-altitude plateaus to dense tropical rainforests.

Nevertheless We Persisted

It’s the “unique ecological plasticity” of our species that’s our defining trait, argue the researchers, and it’s what gave us a leg up on surviving, whether moving into new territories or adapting to changing climate conditions. While this conclusion may seem obvious to us now, it’s only been possible to reach it thanks to the flood of new evidence that’s revised the timeline of human evolution and dispersal.

The new research has shown our species evolved earlier than once thought (our start date is now at least 300,000 years ago) and spread beyond Africa sooner than expected: Consider, for example, the first H. sapiens fossil found in the Arabian Peninsula — once thought inhospitable to early humans — and described earlier this year, or a H. sapiens partial jaw from Israel that’s 177,000-194,000 years old.

The key to proving their hypothesis is correct — and to understanding how this ecological plasticity arose in our species — will be acquiring not just more evidence of a H. sapiens presence at different sites, but also strong paleoenvironmental data, particularly in Africa where the earliest H. sapiens lived.

In the meantime, the researchers have coined a novel niche for the intrepid early H. sapiens: the generalist specialist. The team looked at the ecological niche profiles of specialists, such as pandas, and generalists, like the trash panda (aka the raccoon). They concluded that H. sapiens’ unique generalist specialist niche allowed early members of our species to adapt to, and specialize in, living in wildly different environments.


Pandas are considered specialists because all individuals utilize a single food web. Raccoons, on the other hand (paw?), are generalists adept at exploiting whatever food web they can find, as anyone who has left an unsecured trash can out at night probably knows. Our species has often been considered a generalist, but the authors of today’s commentary propose a new ecological niche for us: the generalist specialist, with different populations capable of adapting to and specializing in a wide range of environments and resources. (Credit: Roberts and Stewart, 2018)

While occupying the unique niche of generalist specialist will no doubt appeal to fans of H. sapiens exceptionalism, it’s unclear that it provides what the researchers describe as a “framework for discussing…how our species became the last surviving hominin on the planet.” Specialists tend to face extinction, for example, only if their specialized ecological niche is wiped out — or they are out-competed by an invasive species. Ahem.

Pondering National Governance

This is a recent article published in the NY Times. To make any sense of our answers to this question requires some ideological and historical clarity. [Blog comments]

Is the United States Too Big to Govern?

By Neil Gross

May 11, 2018

Last month the Pew Research Center released a poll showing that Americans are losing faith in their system of government. Only one-fifth of adults surveyed believe democracy is working “very well” in the United States, while two-thirds say “significant changes” are needed to governmental “design and structure.” [Because nobody really knows what these words mean, or they don’t agree among the many meanings, polling results are questionable indicators.]

The 2016 election is one explanation for these findings. Something is not right in a country where Donald Trump is able to win the presidency. [Well, that’s a selective value judgment – one could easily substitute in the names Hillary Clinton or Bernie Sanders. The point of a democratic society is that the people get to make those decisions and the people agree to abide by them or revolt. Are the people revolting against themselves or against their political representatives?]  

But here’s another possibility: What if trust in American democracy is eroding because the nation has become too big to be effectively governed through traditional means? With a population of more than 325 million and an enormously complex society, perhaps this country has passed a point where — no matter whom we elect — it risks becoming permanently dissatisfied with legislative and governmental performance. [There’s an implicit assumption here that the original intent of the founders is that some central authority should “govern” the affairs of the population and manage the national interest (“traditional means”?). This is probably half true in that a national interest must be represented as the sum of its many parts. We have a Federal government. What was not intended was an all-powerful Federal government.]

Political thinkers, worried about the problem of size, have long advocated small republics. Plato and Aristotle admired the city-state because they thought reason and virtue could prevail only when a polis was small enough that citizens could be acquaintances. Montesquieu, the 18th-century French political philosopher, picked up where the ancient Greeks left off, arguing for the benefits of small territories. “In a large republic,” he wrote, “the common good is sacrificed to a thousand considerations,” whereas in a smaller one the common good “is more strongly felt, better known, and closer to each citizen.” [I suspect Dunbar’s number is at work here.]

The framers of the United States Constitution were keenly aware of these arguments. As the political scientists Robert Dahl and Edward Tufte noted in their 1973 book, “Size and Democracy,” the framers embraced federalism partly because they thought that states were closer in scale to the classical ideal. Ultimately, however, a counterargument advanced by James Madison won the day: Larger republics better protected democracy, he claimed, because their natural political diversity made it difficult for any supersized faction to form and dominate. [With Federalism and the separation of powers and overlapping jurisdictions, I think the founders split the difference here.]

Two and a half centuries later, the accumulated social science suggests that Madison’s optimism was misplaced. Smaller, it seems, is better. [This is a false and impossible choice. When complex networks grow too large, they break up into smaller, more manageable pieces, but these smaller entities are vulnerable to competitive pressures. This is true in industrial organization, economic and financial markets, and digital and social networks. It also applies to social choice and governance. The founders’ idea was to create a coordinated network of states, counties, and municipalities to manage affairs at the appropriate jurisdictional level. National issues are the sole responsibility of a Federal government balanced by parochial interests. This would secure the strongest union to guarantee citizens’ rights and freedoms. As that task grows in complexity, the need for decentralization and coordination reasserts itself.]

There are clear economic and military advantages to being a large country. But when it comes to democracy, the benefits of largeness — defined by population or geographic area — are hard to find. Examining data on the world’s nations from the 19th century until today, the political scientists John Gerring and Wouter Veenendaal recently discovered that although size is correlated with electoral competition (in line with the Madisonian argument), there is no association between size and many other standard measures of democratic functioning, such as limits on executive power or the provision of human rights. [Another question raised here is what exactly we mean by democracy. Strictly speaking, democracy means government by the people, but popular democracy is a narrow offshoot of that definition. It also raises the question of what a government by the people is trying to accomplish. Our founders made it clear they thought it was life, liberty, and the pursuit of happiness. Note: the pursuit of happiness, not its guarantee.]

In fact, large nations turn out to have what the political scientist Pippa Norris has called “democratic deficits”: They don’t fully satisfy their citizens’ demands for democracy. [Again, what is that demand? Is it coherent?] For one thing, citizens in large nations are generally less involved in politics and feel they have less of a voice. [Are they unable to secure life, liberty and pursue happiness or do they just not like the results?] Voter turnout is lower. [Low voter turnout could mean that voters are happy with the status quo, or don’t believe voting matters to their individual fates.] According to the political scientist Karen Remmer, smaller-scale political entities encourage voting in ways large ones can’t by “creating a sense of community” and “enforcing norms of citizenship responsibility.” [Perhaps because they enjoy more intrinsic rewards to participation. This would suggest more localized control over politics.] In addition, small countries promote political involvement by leaning heavily on forms of direct democracy, like referendums or citizen assemblies. [This is a feature of scale. Direct democracy on a large scale can empower the tyranny of the popular majority because the effects are so far removed from that majority.]

A second problem is political responsiveness: The policies of large nations can be slow to change, even if change is needed and desired. In a book published last year, the sociologists John Campbell and John Hall compared the reactions to the 2007-2008 financial crisis in Denmark, Ireland, and Switzerland. These three small countries didn’t cause the crisis; a homegrown Irish housing bubble notwithstanding, the shock wave they dealt with came from America. But though the countries were economically vulnerable, Mr. Campbell and Mr. Hall observed, this vulnerability fostered unexpected resilience and creativity, generating in each nation “a sense of solidarity or ‘we-ness’” that brought together politicians, regulators, and bankers eager to do whatever was necessary to calm markets. [Again, a sense of “we-ness” is one of scale. Cultural homogeneity helps.] 

With the United States lacking the same sense of shared fate and vulnerability, American policymakers could organize only a tepid response, which helps explain why the recovery here was so slow. This theory sheds light as well on developments in environmental and social welfare policy, where it is increasingly common to find a complacent America lagging behind its smaller, more innovative peers. [Complexity plus centralization leads to sclerosis, which is why centralizing authority in a large, diverse, pluralist society may be unworkable.]

Finally, largeness can take a toll on citizen trust. The presence of a wide variety of social groups and cultures is the primary reason for this. Nearly all scholars who study country size recognize, as Madison did, that large nations are more socially heterogeneous, whether because they represent an amalgamation of different regions, each with its own ethnolinguistic, religious or cultural heritage; or because their economic vitality encourages immigration; or because population size and geographic spread promote the growth of distinctive subcultures; or because they have more differentiated class structures. [Agreed, which is why encouraging a large diverse population of the virtues of multiculturalism may actually be a detriment. I believe the original idea, or at least the one that prevailed in past influxes of cultural groups, was the melting pot of gradual, voluntary assimilation.]

It isn’t inevitable that a large amount of social variation would undermine trust. Well-governed societies like Canada address the issue by stitching diversity and multiculturalism into their national identities. Yet in the absence of cultural and institutional supports, heterogeneity and trust are frequently in tension, as different ways of life give rise to suspicion and animosity. Without at least a veneer of trust among diverse social groups, politics spirals downward. [This characterization of Canada seems counter-intuitive. Stitching ethnic diversity and multiculturalism into a national identity means that national identity must be based not on ethnicity, race, or diverse cultures but in a national identity based on universal principles and social contracts. In other words, on something called patriotism and fealty to the larger community, subsuming ethnic, racial, and cultural differences.]

The challenges of American largeness are here to stay. The task now is for individuals, civic organizations and institutions to commit themselves to building stronger communities and a renewed sense of shared responsibility and trust among different groups. Within the constraints of our nation’s size, we can create conditions for as much democracy as possible. [So, we converge on the idea that it is inevitable we decentralize power and assume the responsibility of self-governance? What then is the real political conflict of interest?]

Neil Gross is a professor of sociology at Colby College.

How the Enlightenment Ends
