The Other

I find this book and its review oddly obtuse. The root cause remains mysterious? Perhaps the fact that these behaviors are documented across time, place, and culture suggests a root cause in human nature. The author labels this xenophobia, but that is not the accurate word because it implies "otherism" is rooted in human psychology, whereas we observe the same behaviors in other species, especially pack animals like wolves, hyenas, and apes.

Instead I would attribute "otherism" to a natural survival instinct that sees the other as a possible threat, especially when the encounter involves rivalry over scarce resources. This would apply across the many species that exhibit a sense of "insider vs. outsider" groups.

The difference with human society is that we are aware of the moral implications of ostracizing or persecuting the "other," who are fellow Homo sapiens. We also have a multitude of characteristics we can use to differentiate groups: skin color, race, ethnicity, language, gender, cultural habits, etc. In fact, this multitude implies that our current obsession with race may not be the most important factor. I would guess that several of these characteristics coalesce around the nuances of cultural antipathy. In other words, it may not be skin color that matters most; it is merely the most obvious.

For example, a black male who attends Ivy League schools and works on Wall St. can assimilate easily into mainstream society, and apparently can even become President, whereas a black rapper who speaks an urban dialect, sports tattoos, and sags his pants below his posterior has almost no chance of assimilating into the dominant cultural milieu, no matter how rich he is. The same would apply, on a less obvious scale, to those who cannot fluently speak the dominant language of a society.

The challenge for a diverse society is to manage the cultural conflicts that arise from our differences. These conflicts cannot be managed with platitudes and bromides about tolerance, or by focusing solely on chosen identities. Unfortunately this is where our author and reviewer end up: quoting polls about how people feel about national identity in Europe and the USA. It's an odd comparison because the historical definition of being French or German is categorically different from being American. For centuries, people's identities were defined by the dominant local culture into which they were born. The American experiment is a complete departure from that because it is a land of immigrants (and involuntary servitude in the case of slavery). The true difference between indigenous tribes and European settlers is really a matter of when they arrived on the continent. The struggle for power dominance between insiders and outsiders is a global historical phenomenon, not just a North American one.

How can we meet this challenge of the "other" when globalization is turning us all into "others"? First, we must recognize that antipathy toward the other is partly driven by fear, and the fear may very well be rational. Fear of Middle Eastern terrorists touting the conquering mission of Islam is not an irrational fear. Fear of an invasion of migrants across borders is a rational fear. The point is that rational fears can be overcome, but not by denying or condemning them.

Some have wrongly assumed that because nationalism can engender a negative attitude toward the other, the nation-state must be a detriment to peace and harmony. This is exactly backwards: nation-states with borders and defined governance are precisely what prevent chaos and conflict by defining the rights to scarce resources. It is why the nation-state has been so durable over the last 400 years. In this respect, One Worldism makes no sense and is a dangerous flirtation.

Second, as this idea of the nation-state suggests, we need to understand that a multicultural society can be detrimental to a free, democratic one. All communities develop and maintain cultural norms and values that make it easier to live together in peace. Accepting the dominant values of the society we live in is merely to understand this; it is not an impediment to celebrating one's own cultural heritage. America has been more successful than other developed democracies because being American is not defined by skin color, or language, or race, but by the voluntary acceptance of the American credo of individual rights and freedoms. It is truly the melting pot. Anyone from anywhere in the world can adopt this spirit, even if they cannot transplant themselves. But this fact also underscores the importance of assimilation to the dominant values of a society's culture, and the USA is no exception. In the USA we might classify these values according to constitutional principles of liberty, justice, and law, as well as according to commonly accepted behavioral norms. It does not mean surrendering to any "other's" cultural heritage, but merely accepting those attributes easily assimilated without sacrificing our individual identities.

We can see that uncontrolled borders with uncontrolled waves of migrants only undermine the good will people harbor for embracing the other. They create uncertainty, disrupt stable societal norms, and stir anxiety over scarce material resources. They also threaten the touchstones of national identity. Unfortunately, the southern border crisis is now something American society will have to manage, and it is not helped by wrongly attributing the problem to systemic racism. This is merely a tragic fallacy. A free, diverse society can embrace and has embraced a tolerant attitude toward newcomers, but a prudent pace of adaptation is crucial. No society can peacefully absorb a horde of migrants completely unassimilated to the cultural values and norms of that society. It only invites chaos and conflict.

One can only pray that our national leaders in Washington D.C. wake up to these realities.

‘Of Fear and Strangers’ Review: The Others

Many of history’s most nightmarish episodes are rooted in humanity’s propensity for hatred of ‘The Other.’ But the root cause remains mysterious.

wsj.com/articles/of-fear-and-strangers-history-xenophobia-book-review-the-others-11634912374

By Adam Kuper Oct. 22, 2021 10:20 am ET

George Makari’s concern with xenophobia goes back to a childhood trauma. In 1974, at the age of 13, he was taken on a family visit to his parents’ native Beirut. Suddenly, the travelers found themselves caught in the midst of what would become a civil war. “To me, it was bizarre,” Dr. Makari recalls in “Of Fear and Strangers: A History of Xenophobia.” He continues: “All these bewildering sects were far more alike than different. All were Levantines who spoke the same dialect; all loved the same punning humor, devoured the same cuisine, abided by strict rules of hospitality, and approached any purchase as a three-act play: bargain, stage a walk-out, then settle. They were quick with proverbs and went agog when Fairuz sang. And yet, subtle distinctions in their identities now meant life or death.”

Of Fear and Strangers: A History of Xenophobia

By George Makari

W.W. Norton

Today, Dr. Makari, a psychiatrist, psychoanalyst and the director of Weill Cornell’s DeWitt Wallace Institute of Psychiatry, sees xenophobia as a threat to social peace, not only in the Middle East but also in Europe and North America, where recent political convulsions have been driven by a bristling hostility toward strangers and outsiders. Dr. Makari is clear that a lot of different impulses are often conflated here: “ethnocentrism, ultranationalism, racism, misogyny, sexism, anti-Semitism, homophobia, transphobia, or Islamophobia.” What might they have in common? “Is there any one term specific enough to not be meaningless, while broad enough to allow us to consider whatever common strands exist between these phenomena?” He thinks that there is: xenophobia. And if all these disorders are variants of the same affliction, then perhaps they have the same cause and might be susceptible to the same treatment.

Dr. Makari traces the invention of "xenophobia" to the 1880s, when psychiatrists came up with a variety of "phobias" apparently caused by traumatic experience. "Hydrophobia"—a fear of water—was an old term for rabies. There followed a rash of other phobias, from claustrophobia to my personal favorite, phobophobia—the fear of being frightened. (One commentator remarked that the number of phobias seemed limited only by an Ancient Greek dictionary.) Xenophobia entered a medical dictionary in 1916 as a "morbid dread of meeting strangers."

Like many psychiatric classifications, early definitions of xenophobia covered too much ground. What began as a psychiatric diagnosis would soon be used to describe the fury with which colonized populations often turned on settlers. These settlers, in turn, would be accused of xenophobia by the critics of colonialism, as waves of migrations in the years leading up to World War I provoked fears of a loss of national identity.



In the U.S., three confrontations between different segments of the population proved formative. The first pitted the Puritans, who were themselves refugees from religious persecution, against Native Americans. The second was the forced migration and enslavement of millions of Africans by descendants of the country’s European settlers. The third was provoked by the migrants, first from Europe, then from Asia, who arrived after the Civil War largely for economic reasons.

Dr. Makari notes that in 1860 60% of the white population in the U.S. was of British origin, while 35% were broadly classified as German. By 1914, after 20 million immigrants had passed through American ports, 11% of the white population had British roots, 20% German, 30% Italian and Hispanic, and 34% Slavic. The settled sense of identity enjoyed by established white American Protestants was threatened. There was, in particular, a panic about Chinese immigration, even though the number of arriving Chinese was relatively small. This led to the passage of the Chinese Exclusion Act in 1882, prohibiting the immigration of Chinese laborers. In 1892, 241 lynchings were recorded in America. Two-thirds of the victims were black; the remaining third were mostly Chinese and Italian. In 1908, the Harvard philosopher Josiah Royce asked: “Is it a ‘yellow peril,’ or ‘black peril,’ or perhaps, after all, is it not some form of ‘white peril’ which threatens the future of humanity in this day of great struggles and complex issues?”

Dr. Makari’s whirlwind historical survey tells a compelling story of racial and ethnic animosity, but he might have paid more attention to religious conflicts. Europe in the 16th and 17th centuries was torn by bloody wars between Catholics and Protestants, a feud that still festered in 20th-century Ireland. The Partition of India in 1947 was accompanied by violent Hindu-Muslim confrontations and the displacement of more than 10 million people. When communist Yugoslavia fell apart, Orthodox Christians and Muslims waged war in the Balkans. The Middle East is currently going through another cycle of Shiite-Sunni wars. Are these religious hatreds also to be considered xenophobia?

Then there are sometimes ferocious confrontations between political parties, or fratricidal quarrels between factions within parties. And what about all those brawling sports fans? So many apparently irrational fears and hatreds. Could they all possibly come down to the same psychic or social forces?

One idea is that there is something fundamentally human here. Early human groups competed for territory. All intruders were enemies. The more you feared and hated outsiders, the better your chances of survival. So xenophobia bestowed an evolutionary advantage. Sports fans are simply expressing inherited tribal instincts. Even babies are frightened by a strange face.

This is a popular one-size-fits-all explanation. But it is problematic. For one thing, anthropologists do not agree that constant strife was the norm during the 95% of human history when small nomadic bands lived by hunting and gathering. The Victorian anthropologist Edward Burnett Tylor said that early humans would have had to choose between marrying out or being killed out. When Europeans in the early 19th century made contact with surviving communities of hunter-gatherers, different bands were observed forming marriage alliances and trading partnerships that generally kept feuds from raging out of control.

In the aftermath of World War II and the Holocaust, however, a better explanation of mass hatreds was needed. The orthodox theory in American psychology at the time was behaviorism, which explained habitual attitudes and responses as the products of conditioning: Pavlov’s dogs salivated at the sound of a bell because they had been conditioned to recognize this as a cue for food. In the same sort of way, children are warned against strangers and so conditioned to fear others.

Less orthodox, but more influential in the long run, is the notion of projection. Each of us half-recognizes our shameful desires, infantile fears, aggressive impulses. Instead of dealing with them, we may accuse someone else of harboring those same feelings, cleansing ourselves by shifting the blame onto a scapegoat.

According to yet another analytic theory, the people most susceptible to collective paranoia are the children of strict and demanding fathers whom they feared and adored. Theodor Adorno, the lead author of the classic account “The Authoritarian Personality,” wrote that the typical subject “falls, as it were, negatively in love.” Cowed by the father-figure, he is masochistically submissive to authority and sadistically takes out his anger on the weak.

These psychoanalytic theories all seek to explain the personal traumas and particular pathologies of individuals. But how do whole populations come to share common anxieties and antipathies? In 1928, the sociologist Emory Bogardus published the landmark study “Immigration and Race Attitudes.” One of its disconcerting findings was that the most widely disliked people in the U.S. at the time were “Turks.” Though very few Americans had actually encountered a member of that group, they had heard about them. And what they had heard about was the massacre of Armenians in Turkey after World War I, which was presented in the press as a slaughter of Christians at the hands of Muslims.

It was this power of the media to shape popular sentiment that the journalist Walter Lippmann came to dread. An early supporter of American involvement in World War I, Lippmann had joined the Inter-Allied Propaganda Board in London. In 1922 he published “Public Opinion,” his study of “how public opinion is made.” In it, he borrowed a term from the printing press: stereotype. We all share ready-made ideas that facilitate snap judgments about people and situations. These stereotypes are crude but may be useful in a pinch. They save time and trouble.

Effective propaganda weaponizes stereotypes. Lippmann’s work inspired Sigmund Freud’s American nephew Edward Bernays, who set up the first public relations business. Bernays in turn influenced Joseph Goebbels, who made terrible use of his ideas. Social media now serves up propaganda on steroids.

Yet surely not everyone is gulled—at least not all the time. How then to explain what is going on when strangers are demonized? Dr. Makari suggests that some combination of these psychological and sociological theories may be cobbled together to guide our thinking. This is probably the best that we can manage at present. What then can be done to limit the damage? Here Dr. Makari is less helpful. He suggests that all will be well if society becomes more equal, open and informed. He might as well add that social media should be better regulated, and the public better equipped for critical thought. Failing that, we may have to relive these nightmares of collective hatred again and again for a long time to come.

Yet there are grounds for hope. A study released in May this year by the Pew Research Center reported that conceptions of national identity in the U.S. and Western Europe have recently become more inclusive. Compared with 2016, “fewer people now believe that to truly be American, French, German or British, a person must be born in the country, must be a Christian, has to embrace national customs, or has to speak the dominant language.” This may suggest that xenophobia waxes and wanes with recent events, and that politicians can fan or tamp down outbreaks of public fear and fury. Wise and prudent leaders really might spare us a great deal of trouble.

—Mr. Kuper, a specialist on the ethnography of Southern Africa, has written widely on the history of anthropology.

Copyright ©2021 Dow Jones & Company, Inc. All Rights Reserved. 

On Thomas Sowell

This is a nice review of a biography of one of the pre-eminent economic intellectuals of our time, Thomas Sowell.

The triumph of Thomas Sowell

newcriterion.com/issues/2021/6/the-triumph-of-thomas-sowell

Features June 2021

Thomas Sowell. Photo: Free to Choose Network.

by John Steele Gordon

On Maverick: A Biography of Thomas Sowell, by Jason L. Riley.

Thomas Sowell is one of the towering American intellectuals of our time. An economist trained at the University of Chicago and a social theorist of the first rank, he has been a senior fellow at the Hoover Institution at Stanford University since 1980.

He has written an astonishing fifty books (if you count revised and expanded editions), numerous essays, and a long-running, twice-a-week newspaper column. Extraordinarily wide ranging, he has covered everything from the rudiments of economics to race relations, the housing crisis of 2008 to late-talking children.

His best-known book, Basic Economics (2000), a best-selling, chart-, graph-, and jargon-free introduction to the subject, is now in its fifth edition and has been translated into seven languages.

No less an authority than Milton Friedman, who taught Sowell at the University of Chicago, has said that “The word ‘genius’ is thrown around so much that it’s becoming meaningless, but nevertheless I think Tom Sowell is close to being one.”

So it's about time for there to be a biography of this remarkable man, although it should be noted that Maverick is far more an intellectual biography than a personal one.1 And we should be grateful to Jason L. Riley for writing a very good one. Riley is the author of Please Stop Helping Us: How Liberals Make It Harder for Blacks to Succeed (Encounter). He is also a senior fellow at the Manhattan Institute and a columnist at The Wall Street Journal.

Sowell’s life did not get off to an easy start, to put it mildly. In 1930, the year he was born into a black family in Gastonia, North Carolina, the Great Depression was gathering strength. And Jim Crow was in full force, so he seldom encountered white people in his early years. As Riley explains, “He’d been turned away from restaurants and housing because of his skin color. He’d felt the pain and humiliation of racism firsthand throughout his life. He needed no lectures from anyone on the evils of Jim Crow.”

His father had died a few months before his birth, and his mother, a housemaid, already had four children. So he was raised by a great-aunt.

The family moved to Harlem when he was nine, part of the great migration of black families from the South to the North in search of greater opportunity in those years. Forced to drop out of high school to get a job, he only went to college after a stint in the Marines during the Korean War.

He was the first member of his family to get beyond the seventh grade, and he was ignorant of even the basics of higher education. At first he thought that professors who were addressed as “doctor” were physicians as well as professors. “It came as a revelation to me that there was education beyond college,” he wrote, “and it was some time before I was clear whether an M.A. was beyond a Ph.D. or vice versa. Certainly I had no plans to get either.”

At first he attended night classes at the historically black Howard University. There, his professors noted his remarkable intellect and capacity for hard work and helped him transfer to Harvard the next year. He thrived there intellectually and graduated magna cum laude at the age of twenty-eight.

But he was less enamored of the social atmosphere in Cambridge. Sowell noted that he “resented attempts by some thoughtless Harvardians to assimilate me, based on the assumption that the supreme honor they could bestow was to allow me to become like them.”


He got his master’s degree the next year at Columbia and intended to get his doctorate there as well, so he could study under George Stigler, who had written an essay on the early economist David Ricardo that Sowell had greatly admired. (It might be noted that the very first quotation in Sowell’s Basic Economics, written many years later, is from George Stigler.) But when Stigler (who won a Nobel Prize in 1982) moved to the University of Chicago, Sowell followed him there. He was very glad he did.

For while Sowell thought Columbia was a sort of a “watered-down” version of Harvard, Chicago was not an imitation of anything. It was wholly itself.

And the economics department was extraordinarily rigorous. Ross Emmett, an authority on the economics department at Chicago, told Riley that “During that period of time, Harvard took in twenty-five to twenty-seven students and graduated twenty-five of them, whereas Chicago took in seventy students and graduated twenty-five of them.” In the fifty-two years that Nobel Prizes in economics have been awarded, no fewer than thirteen have gone to scholars associated with the University of Chicago.

Although Chicago has long been the center of the study of free-market economics, Sowell was a Marxist in his twenties. He explained that, when working as a Western Union messenger after he left high school, he would sometimes ride the bus from the Wall Street area to his home in Harlem. The ride took him past the upscale department stores on Fifth Avenue, past Carnegie Hall, and through the affluent residential neighborhoods of Riverside Drive. “And then,” Sowell wrote, “somewhere around 120th Street, it would cross a viaduct and onto 135th Street, where you have the tenements. And that’s where I got off. The contrast between that and what I’d been seeing most of the trip really baffled me. And Marx seemed to explain it.”

But then he took a summer job at the U.S. Department of Labor in 1960, when he turned thirty. Even after a year at the University of Chicago, including a course under Milton Friedman, Sowell had “remained as much a Marxist as I had been before arriving.”

He spent the summer analyzing the sugar industry in Puerto Rico, where a minimum wage was set by the U.S. Government. It wasn’t long before he noticed that as the minimum wage had risen, the number of sugar workers fell. He had always supported minimum wages, assuming they helped the poor earn a decent living. But now he realized that minimum-wage laws cost jobs and were a net detriment to the poor.

“From there on,” Sowell wrote, “as I learned more and more from both experience and research, my adherence to the visions and doctrines of the left began to erode rapidly.”

Soon, Sowell was “rethinking the whole notion of government as a potentially benevolent force in the economy and society.” He also couldn’t help noticing that his fellow bureaucrats did not care if the minimum wage helped workers. Their job was to enforce the laws. It was not to see if the laws did any good.

"It forced me to realize," Sowell wrote, "that government agencies have their own self-interest to look after, regardless of those for whom a program has been set up." Marxist theory ignores the powerful force of self-interest in the working of economies, and Sowell came to realize the centrality of self-interest to the human universe.

At Chicago, Sowell studied the history of ideas under the great Friedrich Hayek, but it was Hayek’s own ideas that had lasting consequences for him. Hayek’s essay “The Use of Knowledge in Society” dealt with how the information used to make economic decisions spreads through an economy. Its central insight is that knowledge is highly dispersed and no one person or group can possess all the knowledge needed to make good economic decisions. Therefore, he argued, the decision-making process should also be decentralized, the opposite of what Marx argued for.

Later, when Sowell was asked to teach a course on the Soviet economy, the significance of Hayek’s essay hit home:

I could see what the factors were that led the Soviets to do what they were doing, and why it wasn’t working. There was a knowledge problem that was inherent in that system. In a nutshell, those with the power didn’t have the knowledge, and those with the knowledge didn’t have the power.

Out of this came one of Sowell’s most important books, Knowledge and Decisions (1980), which extended Hayek’s work and, as Riley says, “would do so in ways that even Hayek had never contemplated.”

In hopes of reaching a wider audience than Hayek, who wrote in the technical language of economics, Sowell's book "in lieu of graphs and equations . . . offers rich metaphors and copious real-world examples that make the weightier concepts under discussion not merely digestible but tasty." This appeal to a wider audience is no small part of the reason that Sowell has been so influential.

Another is that, while Sowell is an economist by training, his mastery of subjects extends far wider. Gerald Early, of Washington University, noted that his expertise covers sociology and history as well. "He had some kind of mastery of other fields to do the kind of comprehensive stuff he was doing. Whether you agree totally with his ideas or not, it was impressive what he was doing. Who knew an economist could write that stuff?"

Indeed, far too many economists can’t write, period. Sowell most certainly can. Early, who is black himself, noted that “I knew lots of black people who were not academics and who had heard about him and were reading his stuff because it was accessible.”

Another thing that distinguishes Sowell from all too many other economists is his insistence that theory be tested in the real world. Gunnar Myrdal, who won the Nobel Prize in economics in 1974, for instance, argued that third-world countries could not develop without extensive foreign aid and much central planning, despite the fact that post-war Japan, Taiwan, South Korea, and Singapore developed rapidly in the late twentieth century without relying on either.

“I got no sense,” Sowell wrote, “that Myrdal actually investigated these theories of his and compared them with anything that actually happened. I myself, of course, started out on the left and believed a lot of this stuff. The one thing that saved me was that I always thought facts mattered. And once you think that facts matter, then of course that’s a very different ball game.”

Myrdal and his type are essentially theoretical in their approach to economics. Sowell, like Stigler, Hayek, and Friedman, is empirical, demanding real-world proof, not just elegant ideas.


Sowell has always regarded himself as fortunate that his higher education came before the era of affirmative action, which he regards as an unmitigated disaster for blacks. In his memoir, My Grandfather’s Son (2007), the Supreme Court Justice Clarence Thomas recalled how shocked he had been when his law degree from Yale and his sterling grades failed to impress the white-shoe law firms where he applied for a job. “Now I knew what a law degree from Yale was worth when it bore the taint of racial preference,” he wrote.

But Sowell had predicted this in the very first days of affirmative action. “The double standard of grades and degrees is an open secret on many college campuses, and it is only a matter of time before it is an open secret among employers as well,” he predicted in 1970. “The market can be ruthless in devaluing degrees that do not mean what they say. It should be apparent to anyone not blinded by his own nobility that it also devalues the student in his own eyes.”

One of Sowell's most important contributions has been to notice how wide the gap often is between ordinary black Americans and black intellectuals and civil rights leaders. In a pair of op-eds in The Washington Post in 1981, Sowell wrote that

Historically, the black elite has been preoccupied with symbolism rather than pragmatism. Like other human beings, they have been able to rationalize their special perspective and self-interest as a general good. Much of their demand for removing racial barriers was a demand that they be allowed to join the white elite and escape the black masses.

In other words, they have been all too anxious to do what Sowell had spurned doing many years before at Harvard.

In fact, Sowell doesn’t have much use for the pretensions of intellectuals of whatever color. Perhaps my favorite quote in Maverick is used by Riley to open his chapter on “Sowell’s Wisdom”: “Some of the biggest cases of mistaken identity are among intellectuals who have trouble remembering that they are not God.”

In this short, well-written book, Jason Riley leads the reader on an enlightening tour of the thought and experiences of one of the most luminous minds this country has produced.

It should cause many readers to explore the works of Thomas Sowell. They will be richly rewarded for doing so.

1Maverick: A Biography of Thomas Sowell, by Jason L. Riley; Basic Books, 304 pages, $30.

Facing Facts About Race


Last week President Obama weighed in again on the Trayvon Martin episode. Sadly, most of what he said was wrong, both literally and ethically.

I don’t usually post about cultural politics but I link to this truly excellent article by Victor Davis Hanson published by National Review Online, regarding an ongoing tragedy that has sucked a lot of oxygen out of our public discourse.

Liberty, Politics, and Justice…


…a delicate mix.

As readers well know, this blog focuses primarily on economic policy and monetary issues, but policies are not made in a political vacuum. The legacy of the post-60s period in American politics has been distilled down to momentous Supreme Court decisions, which have become the tail that wags the dog of our collective lives. But politicizing court decisions is not really the way "rule by the people" (democracy) was meant to work. No wonder our democracy has become so dysfunctional: as we raise irreconcilable issues such as race, abortion, and sexual preference to the level of national politics, we become divided by emotions or distracted by 'bread and circuses.' Meanwhile, the political class runs the government in its own narrow interest. We the citizens become the losers by our own design. Whether winning or losing in this judicial lottery, nobody should be really content with the present state of affairs.

From the WSJ:

Our Rights, Not the Court’s

There’s no good reason to give the justices the last word on race, abortion and gay marriage

 By MARK TUSHNET

Reacting to this past week’s Supreme Court decisions, a conservative law-school colleague told me, “Law matters in the Supreme Court from October to May, not so much in June.” Politics takes over in June, and the Supreme Court becomes a super-legislature, deciding by majority vote what our constitutional rights are.

“Deciding” is the right verb. In June, no serious observer of the Court can think that the justices are just “calling balls and strikes” or interpreting the Constitution’s words or telling us what most people understood the words to mean when they were placed in the Constitution.

But if the justices are deciding rather than interpreting, why should they be the ones to decide, substituting their decisions for ours? The usual explanation is that we can trust them to do the right thing—and we can’t trust ourselves.

After all, some of us think that affirmative action promotes constitutional rights; others think that it violates them. Some of us think that the Voting Rights Act promotes, well, voting rights; others think that it violates the structural principles that make our government worth having.

We usually use our legislatures as the forum in which to discuss and resolve such differences. We call that “politics,” and for lots of issues (tax rates, spending programs, declaring war), we think that politics, with majority rule, works well enough.

So why don’t we use these same institutions for hot-button constitutional issues like abortion rights, gun rights and affirmative action? In these cases, we let nine other people decide, by majority vote, what they think our rights are. That majority vote then becomes “constitutional law.” The great puzzle is, why do we let them get away with it?

Consider the Supreme Court’s decisions this past week. Conservatives liked the rulings upholding property rights, limiting affirmative action and striking down a key element of the Voting Rights Act. Liberals liked the decisions striking down the federal Defense of Marriage Act and allowing California to have gay marriage. Only a few people, though, think that this mixed bag of results should lead us to rethink the whole system. But it should.

What justifies giving the Court the last word on our constitutional rights?

The most common answer is to say that legislatures don’t do a good job of protecting minority interests. Liberals think that Congress messed up in enacting the Defense of Marriage Act, failing to protect the interests of gays and lesbians. Conservatives think that it messed up in re-enacting the Voting Rights Act, failing to protect the sovereign equality of states. And so on down the line.

But the Voting Rights Act shows that Congress sometimes does protect racial minorities. And though gays and lesbians are clearly a minority in the country, the rapid spread of legislative recognition of gay marriage shows that some legislatures can protect their interests too.

Another common view is that, though conservatives and liberals like some decisions and dislike others, they all hope that the justices will get it right eventually—becoming conservative or liberal across the board. Everything will be fine, they think, if only the justices get their heads straight about the Constitution.

But there is no reason to think this is going to happen. Maybe if we get a long run of conservative or liberal presidents, enough new justices will be appointed to make the Court consistently conservative or liberal. That happened with the Warren Court, for example. But the Court didn’t become consistently conservative even with Republican presidents appointing every justice from 1968 to 1994.

A third perspective is purely strategic. Pick your favorite policy outcomes, and then do a complicated calculation. For each issue, ask, “How likely is it that I will win in the legislature and have the courts uphold my victory? How likely is it that I will win in the legislature only to have the courts snatch my victory away? How likely is it that I will lose in the legislature but get the courts to give me what I want?” Then weight each issue according to its importance to you. Finally, add everything up. If, on balance, you get more of the policies that you like from letting the courts oversee the legislature, you should be for judicial review. Otherwise, you should oppose it.

I think that this is an entirely sensible way of deciding whether you like judicial review or not. But I don’t think anyone actually goes through the calculation, which is maddeningly complicated.

A final perspective would be to resign yourself to the status quo, take your victories when they come and live to fight another day when you lose.

This might make sense, at least if you’re not completely annihilated on the battlefield. The Court usually does leave paths open to renewing the fight in ordinary political arenas. Voting rights advocates can try to get Congress to enact a new coverage formula. Defenders of the traditional family can try to get Congress to enact more precisely targeted restrictions on benefits for people in gay marriages. But why should they have to bother? They won these battles once, and the only reason they have to re-fight them is that five justices thought they were wrong about who had what constitutional rights.

Some scholars say that all of this is no big deal because, if we put aside some short, anomalous periods, the Supreme Court never gets too far out of line with the national majority. But if this is true, what’s the point of judicial review? To police those states that depart from national views? To cleanse the statute books of antiquated laws that no longer have majority support?

Maybe. But note that this is not what happened in last week’s decisions. The Voting Rights Act was re-enacted in 2006, the Defense of Marriage Act was adopted in 1996, and affirmative action has lots of supporters today.

If judicial review is a problem, what can we do about it? I’m fond of a Canadian innovation that Judge Robert Bork also found interesting: Let the justices strike down statutes they think are unconstitutional and give their explanations. Then let Congress respond. If a congressional majority agrees with the Court, the decision stands. But if a majority thinks that the Court got it wrong, Congress can override the decision.

I, for one, would welcome a chance to engage in constitutional politics. My own policy views are so eccentric that I can’t count on any justice to reflect them consistently. I’d rather take my chances trying to persuade my fellow citizens and representatives to agree with me.