Journalistic Integrity or Just Plain Dumb?

If the Wuhan lab-leak hypothesis is true, expect a political earthquake

This incredible article is by Thomas Frank, a respected journalist and author (What’s the Matter With Kansas? – an exercise in urban political myopia) who is a well-educated and well-read member of the liberal urban media. Here’s an excerpt of his political touchstones:

Like everyone else I know, I spent the pandemic doing as I was told. A few months ago I even tried to talk a Fox News viewer out of believing in the lab-leak theory of Covid’s origins. The reason I did that is because the newspapers I read and the TV shows I watched had assured me on many occasions that the lab-leak theory wasn’t true, that it was a racist conspiracy theory, that only deluded Trumpists believed it, that it got infinite pants-on-fire ratings from the fact-checkers, and because (despite all my cynicism) I am the sort who has always trusted the mainstream news media.

[Ah, yes, it’s Trump’s fault. LOL.]

If an individual whose entire career and livelihood is wrapped up in ‘getting it right’ is so easily misled by our dominant media sources, what hope is there for the rest of us who have better things to do? Now he’s wondering whether he got it all wrong, and what the larger consequences might be.

This is the problem with the urban corporate media, which began to seriously degenerate after the 2000 election. But we have also learned that the decline started long before, as alternative media such as cable news, Talk Radio, and the Internet presented an existential financial challenge to traditional media outlets, especially print newspapers and broadcast news.

Mr. Frank and his colleagues in corporate media (NYT, WaPo, LAT, Fox) need to undergo a serious bit of soul searching to discover whether they have a role as the Fourth Estate in our information economy, or whether they should just go pursue a career in real estate somewhere. Journalists today have to understand that nobody is going to hero worship them as the modern-day Woodward and Bernstein. Honest journalism and reputational capital are their own reward, and can actually be lucrative on platforms like Substack.

So here is what Frank has discovered:

  •  Lab leaks happen. They aren’t the result of conspiracies: “a lab accident is an accident,” as Nathan Robinson points out; they happen all the time, in this country and in others, and people die from them.
  •  There is evidence that the lab in question, which studies bat coronaviruses, may have been conducting what is called “gain of function” research, a dangerous innovation in which diseases are deliberately made more virulent. By the way, right-wingers didn’t dream up “gain of function”: all the cool virologists have been doing it (in this country and in others) even as the squares have been warning against it for years.
  •  There are strong hints that some of the bat-virus research at the Wuhan lab was funded in part by the American national-medical establishment — which is to say, the lab-leak hypothesis doesn’t implicate China alone.
  •  There seem to have been astonishing conflicts of interest among the people assigned to get to the bottom of it all, and (as we know from Enron and the housing bubble) conflicts of interest are always what trip up the well-credentialed professionals whom liberals insist we must all heed, honor, and obey.
  •  The news media, in its zealous policing of the boundaries of the permissible, insisted that Russiagate was ever so true but that the lab-leak hypothesis was false false false, and woe unto anyone who dared disagree. Reporters gulped down whatever line was most flattering to the experts they were quoting and then insisted that it was 100% right and absolutely incontrovertible — that anything else was only unhinged Trumpist folly, that democracy dies when unbelievers get to speak, and so on.
  •  The social media monopolies actually censored posts about the lab-leak hypothesis. Of course they did! Because we’re at war with misinformation, you know, and people need to be brought back to the true and correct faith — as agreed upon by experts.

With this we get Mr. Frank’s revelation:

If it does indeed turn out that the lab-leak hypothesis is the right explanation for how it began — that the common people of the world have been forced into a real-life lab experiment, at tremendous cost — there is a moral earthquake on the way.

Because if the hypothesis is right, it will soon start to dawn on people that our mistake was not insufficient reverence for scientists, or inadequate respect for expertise, or not enough censorship on Facebook. It was a failure to think critically about all of the above, to understand that there is no such thing as absolute expertise. 

Yeah, no kidding. And that’s a bad thing? It’s doubly ironic that most of the voices haranguing us to “follow the science” were really constraining true science. Critical thinking is merely what real scientists have been telling us all along, as opposed to those succumbing to “political” science. There are no absolutes in science, only skepticism and hypothesis testing – this applies to the pandemic as well as climate change and systemic racism and Modern Monetary Theory. And mea culpas won’t save journalists from the anvils of “I told you so’s” that will rain down upon their heads.

Generalists vs. Specialists

Specialization has led to great economic gains over the course of civilization. But specialization as such is an economic imperative, not a humanistic or evolutionary one. Evolution, as this article argues, may strongly favor the generalist capabilities of a species. And it rewards individuals with its humanistic benefits, if not in material gains.

As an unrepentant generalist across many disciplines, I especially appreciate the intangible benefits and payoffs (if not the monetary tradeoffs!).

I’m reminded of the academic distinction: a generalist is somebody who knows nothing about everything, while a specialist is one who knows everything about nothing.

The Generalist Specialist: Why Homo Sapiens Succeeded

By Gemma Tarlach | July 30, 2018 10:00 am



Being a generalist specialist, a unique niche, is the hallmark of our species, say researchers — and the reason Homo sapiens (left) are still around but other hominins, including Neanderthals (right), are not. (Credit: Wikimedia Commons)

Some animals are jacks of all trades, some masters of one. Homo sapiens, argues a provocative new commentary, are an evolutionary success story because our ancestors pulled off a unique feat: being masterly jacks of all trades. But is this ecological niche, the generalist specialist, the real reason our species is the last hominin standing?

When paleoanthropologists and archaeologists define what makes our species unique, they usually focus on our use of symbolism and language, as well as our skills in social networking (long before Facebook) and technological innovation. Those arguments for human exceptionalism have been challenged in recent years, however, as researchers have uncovered evidence that other members of the genus Homo, notably Neanderthals, were capable of similar cognitive processes, from artistic expression to producing fire at will.

But maybe, say two researchers, we got it wrong. What defines our species, and has allowed H. sapiens to survive and even thrive after all other hominins went extinct, is not about making better stone projectiles, or networking, or sprucing up the cave walls with a little ochre artwork. We’re the last hominins on Earth because we’re really good at adapting to a huge range of environments, including the extreme.

Over The River And Through The Woods (And The Tundra, And The Desert…)

To make their case, researchers mapped out the likely ranges of archaic members of the genus Homo according to current fossil, paleoenvironmental and archaeological evidence. Being a fan of the scientific method, I think it’s worth noting here that this map almost certainly will change as new finds turn up. But for now, working with the best body of evidence we’ve got, it’s clear that early H. sapiens, once they left Africa, seemed to explode across the Old World, moving into territory previously occupied by one or at most two other hominin species.

A map of the estimated ranges of archaic members of the genus Homo, spanning the period H. sapiens emerged in Africa and dispersed across the rest of the Old World, roughly 60,000-300,000 years ago. (Credit: Roberts and Stewart, 2018. Defining the ‘generalist specialist’ niche for Pleistocene Homo sapiens. Nature Human Behaviour. 10.1038/s41562-018-0394-4)

To be clear, there is good evidence that other hominins called extreme environments home. Denisovans appear to have adapted to high-altitude life in Central Asia, for example, while diminutive H. floresiensis was at home in equatorial island rainforests. It’s been argued, heatedly (no pun intended), that Neanderthals were high-latitude specialists. But only H. sapiens turn up in all of those environments. What might not be immediately evident from the map is that early H. sapiens dispersal wasn’t just about setting foot on a new continent; it was also about moving into new and often extremely challenging environments, from deserts to arctic climes, from treeless, high-altitude plateaus to dense tropical rainforests.

Nevertheless We Persisted

It’s the “unique ecological plasticity” of our species that’s our defining trait, argue the researchers, and it’s what gave us a leg up on surviving, whether moving into new territories or adapting to changing climate conditions. While this conclusion may seem obvious to us now, it’s only been possible to reach it thanks to the flood of new evidence that’s revised the timeline of human evolution and dispersal.

The new research has shown our species evolved earlier than once thought (our start date is now at least 300,000 years ago) and spread beyond Africa sooner than expected: Consider, for example, the first H. sapiens fossil found in the Arabian Peninsula — once thought inhospitable to early humans — and described earlier this year, or a H. sapiens partial jaw from Israel that’s 177,000-194,000 years old.

The key to proving their hypothesis is correct — and to understanding how this ecological plasticity arose in our species — will be acquiring not just more evidence of a H. sapiens presence at different sites, but also strong paleo-environmental data, particularly in Africa where the earliest H. sapiens lived.

In the meantime, the researchers have coined a term for the novel niche of the intrepid early H. sapiens: the generalist specialist. The team looked at the ecological niche profiles of specialists, such as pandas, and generalists, like the trash panda (aka the raccoon). They concluded that H. sapiens’ unique generalist specialist niche allowed early members of our species to adapt to, and specialize in, living in wildly different environments.


Pandas are considered specialists because all individuals utilize a single food web. Raccoons, on the other hand (paw?), are generalists adept at exploiting whatever food web they can find, as anyone who has left an unsecured trash can out at night probably knows. Our species has often been considered a generalist, but the authors of today’s commentary propose a new ecological niche for us: the generalist specialist, with different populations capable of adapting to and specializing in a wide range of environments and resources. (Credit: Roberts and Stewart, 2018)

While occupying the unique niche of generalist specialist will no doubt appeal to fans of H. sapiens exceptionalism, it’s unclear that it provides what the researchers describe as a “framework for discussing…how our species became the last surviving hominin on the planet.” Specialists tend to face extinction, for example, only if their specialized ecological niche is wiped out — or they are out-competed by an invasive species. Ahem.

Networks and Hierarchies

This is a review of British historian Niall Ferguson’s new book titled The Square and the Tower: Networks, Hierarchies and the Struggle for Global Power. It’s interesting to take the long arc of history into account in this day and age of global communication networks, which might seem to herald the permanent dominance of networks over hierarchies. That history cautions us otherwise.

Ferguson notes two predominant ages of networks. The first began with the advent of the printing press in 1452, which led to an explosion of networks across the world until around 1800. This was the Enlightenment period that helped transform economics, politics, and social relations.

Today, the second age of networks consumes us, starting at about 1970 with microchip technology and continuing forward to the present. It is the age of telecommunications, digital technology, and global networks. Ours is an age where it seems “everything is connected.”

Ferguson notes that, beginning with the invention of written language, all that has happened is that new technologies have facilitated our innate, ancient urge to network – in other words, to connect. This seems to affirm Aristotle’s observation that “man is a social animal,” as well as a large library of psychological behavioral studies over the past century. He also notes that most networks may reflect a power law distribution and be scale-free. In other words, large networks grow larger and become more valuable as they do so. This means the rich get richer and most social networks are profoundly inegalitarian. This implies that the GoogleAmazonFacebookApple (GAFA) oligarchy may be taking over the world, leaving the rest of us as powerless as feudal serfs.
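The “rich get richer” dynamic behind scale-free networks can be seen in a minimal preferential-attachment simulation (a sketch in the spirit of the Barabási–Albert model, not anything from Ferguson’s book; the function name and parameters here are illustrative). Each new node links to an existing node with probability proportional to that node’s current degree, and a handful of hubs end up holding a wildly disproportionate share of the connections:

```python
import random

def preferential_attachment(n_nodes, seed=42):
    """Grow a network one node at a time; each newcomer attaches to an
    existing node chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    degrees = [1, 1]       # two seed nodes joined by one edge
    endpoints = [0, 1]     # flat list of edge endpoints; a uniform draw
                           # from it is automatically degree-weighted
    for new_node in range(2, n_nodes):
        target = rng.choice(endpoints)   # the rich-get-richer step
        degrees.append(1)                # the newcomer starts with degree 1
        degrees[target] += 1
        endpoints.extend([new_node, target])
    return degrees

degrees = preferential_attachment(10_000)
# share of all connections held by the 100 best-connected nodes (top 1%)
top_share = sum(sorted(degrees, reverse=True)[:100]) / sum(degrees)
print(f"top 1% of nodes hold {top_share:.0%} of all connections")
```

Run with a uniform random attachment instead of the degree-weighted draw and the concentration largely disappears; the inequality is a product of the attachment rule itself, which is Ferguson’s point about why large networks are profoundly inegalitarian.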

But there is a fatal weakness inherent to this futuristic scenario, in that complex networks create interdependent relationships that can lead to catastrophic cascades, such as the global financial crisis of 2008. Or an explosion of “fake news” and misinformation spewed out by global gossip networks.

We are also seeing a gradual deconstruction of networks that compete with the power of nation-state sovereignty. This is reflected in the rise of nationalistic politics in democracies and authoritarian monopoly control over information in autocracies.

However, from the angle of hierarchical control, Ferguson notes that failures of democratic governance through the administrative state “represents the last iteration of political hierarchy: a system that spews out rules, generates complexity, and undermines both prosperity and stability.”

These historical paths imply that the conflict between distributed networks and concentrated hierarchies is likely a natural tension in search of an uneasy equilibrium.

Ferguson notes “if Facebook initially satisfied the human need to gossip, it was Twitter – founded in March 2006 – that satisfied the more specific need to exchange news, often (though not always) political.” But when I read Twitter feeds I’m thinking Twitter may be more of a tool for disruption rather than constructive dialogue. In other words, we can use these networking technologies to tear things down, but not so much to build them back up again.

As a Twitter co-founder confesses:

‘I thought once everybody could speak freely and exchange information and ideas, the world is automatically going to be a better place,’ said Evan Williams, one of the co-founders of Twitter, in May 2017. ‘I was wrong about that.’

Rather, as Ferguson asserts, “The lesson of history is that trusting in networks to run the world is a recipe for anarchy: at best, power ends up in the hands of the Illuminati, but more likely it ends up in the hands of the Jacobins.”

Ferguson is quite pessimistic about today’s dominance of networks, with one slim ray of hope. As he writes,

“…how can an urbanized, technologically advanced society avoid disaster when its social consequences are profoundly inegalitarian?

“To put the question more simply: can a networked world have order? As we have seen, some say that it can. In the light of historical experience, I very much doubt it.”

That slim ray of hope? Blockchain technology!

A thought-provoking book.

The Death of Text?


The following short essay was published in the NY Times feature called The Fate of the Internet. Frankly, it’s difficult to take these arguments too seriously, despite the transformative effects of technology.

Welcome to the Post-Text Future

by Farhad Manjoo, NY Times

I’ll make this short: The thing you’re doing now, reading prose on a screen, is going out of fashion. [Which means what? Its popularity is fading as a communication channel?]

We’re taking stock of the internet right now, with writers [Hmm, what’s a writer without a reader?] who cover the digital world cataloging some of the most consequential currents shaping it. If you probe those currents and look ahead to the coming year online, one truth becomes clear. The defining narrative of our online moment concerns the decline of text, and the exploding reach and power of audio and video. [Yes, but where does real “power” really reside? In cat videos and selfies? Those behind the curtain are really smiling.]

This multimedia internet has been gaining on the text-based internet for years. But last year, the story accelerated sharply, and now audio and video are unstoppable. The most influential communicators online once worked on web pages and blogs. They’re now making podcasts, Netflix shows, propaganda memes, Instagram and YouTube channels, and apps like HQ Trivia.

Consider the most compelling digital innovations now emerging: the talking assistants that were the hit of the holidays, Apple’s face-reading phone, artificial intelligence to search photos or translate spoken language, and augmented reality — which inserts any digital image into a live view of your surroundings.

These advances are all about cameras, microphones, your voice, your ears and your eyes.

Together, they’re all sending us the same message: Welcome to the post-text future. [No, they are welcoming us to the distractions of circuses. That’s what entertainment is.]

It’s not that text is going away altogether. Nothing online ever really dies, and text still has its hits — from Susan Fowler’s whistle-blowing blog post last year about harassment at Uber to #MeToo, text was at the center of the most significant recent American social movement.

Still, we have only just begun to glimpse the deeper, more kinetic possibilities of an online culture in which text recedes to the background, and sounds and images become the universal language.

The internet was born in text because text was once the only format computers understood. Then we started giving machines eyes and ears — that is, smartphones were invented — and now we’ve provided them brains to decipher and manipulate multimedia. [Yes, but civilization was not born with ASCII. Computers are becoming clever TVs, but they still deliver a lot of trivia as content, and video formats probably amplify that. Perhaps we are seeing the trivialization of popular culture? Has it ever not been trivial?]

My reading of this trend toward video as a substitute for text is that it applies to certain types of media and content. Certain commentators have adapted readily to YouTube channels to transmit knowledge and ideas, and the educational potential is just being tapped. But true power in the world of ideas is held by those who know how to manipulate text to understand the abstract intellectual ideas that govern our world.

The question is, is technology turning us into sheep or shepherds? Because for sure, there are wolves out there.

As John Maynard Keynes wrote,

The ideas of economists and political philosophers, both when they are right and when they are wrong are more powerful than is commonly understood. Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back…


G–gle Culture



These excerpts are from a recent online interview of the fired Google employee James Damore by Stefan Molyneux, in which Damore explains himself:

“Generally, I just really like understanding things,” he said about his reasons for compiling his argument. “And recently, through interactions with people, I have noticed how different political ideologies divide us in many ways. I wanted to understand what was behind all that.”

“I read a lot into Jonathan Haidt’s work, a lot about what exactly is the philosophy behind all of these things. And that led me to the beginning of the document,” he explained. 

He described his crystallizing moment: “I could see that all of us are really blind to the other side, so in these environments where everyone is in these echo chambers just talking to themselves, they are totally blind to so many things. We really need both sides to talk to each other about these things and to try to understand each other.”

He critiques both the left and right for not working together: “The easiest way of understanding the left is: It is very open, it is looking for changes. While the right is more closed, and wants more stability. There are definitely advantages to both of those. Sometimes there are things that need to change, but you actually need a vision for what you want. There is value in tradition, but not all traditions should be how they are.”

“We create biases for ourselves. This is particularly interesting, when we talk about how it relates to reality,” he said.

“Both sides are biased in a way, they have motivated reasoning to see what they want out of a lot of things,” he continued. 

“This happens a lot in social science, where it is 95% leaning to the left. And so they only study what they want, and they only see the types of things that they want, and they really aren’t as critical of their own research as they should be. The popular conception is that the right doesn’t understand science at all, that the right is anti-science. It is true that they often deny evolution and climate science, climate change, but the left also has its own things that it denies. Biological differences between people — in this case, sex differences,” he explained.

He described the experience of diversity training at Google, which inspired him to write: “I heard things I definitely disagreed with in some of the programs. I had some discussions with people there, but there was a lot of just shaming. ‘No you can’t say that, that’s sexist, you can’t do this.’ There is so much hypocrisy in a lot of things they are saying. I decided to just create the document to clarify my thoughts.”

I have often recommended Jon Haidt’s research presented in his book, The Righteous Mind. It’s worth a read because much of what is happening in social and political discourse these days reflects a psychological pathology that should be completely unnecessary. But getting out of our own way in politics is a difficult challenge.

I find nothing particularly mendacious about Mr. Damore’s document or his intentions to clarify what is basically an empirical puzzle concerning gender differences. Of course, this was all blown way out of proportion because it challenges some unscientific political agenda.

As a scientist, I assume that all empirical phenomena should be open to skepticism and challenges. I’m not sure how we progress intellectually any other way. The attack on Mr. Damore is an attack on science and for me can only reveal an indefensible political agenda. This is sad, if not dangerous, to say the least.

My own approach in this blog has been to suggest analytical frameworks to help understand how human behavior aggregates up into social behavior that defines our civilization: past, present, and future (see Common Cent$). The universe is constantly changing, and survival depends on successful adaptation. Unsuccessful adaptation leads to extinction. Thus, the problem for all species is how to successfully adapt.

It seems to me our knowledge base in the biological and social sciences, and in the arts and humanities, can help us humans out here, and I can’t understand why anyone who wants to survive would ignore or discount anything we can learn from that wealth of knowledge. Yet some would choose to ignore anything that might challenge their world-view, even when they suspect it is false. G–gle seems to have succumbed to that pressure. That’s a shame, but not a path any of us have to accept.

What’s G–gle’s motto again?





Why You Should Play Music


The following text is excerpted from The Ultimate Killer App: The Power to Create and Connect, Chapter 3.

…Music is a bewitching art because it seems to engage areas of our brain that integrate emotions, memory, language/communication, and motor skills. Music not only stimulates more areas of the brain, it resonates to the very core of our physical being, especially when we dance and sing.

Through the ages philosophers and artists have often argued over which of the arts is preeminent and most venerated.[i] The ancient Greeks lauded poetry, Leonardo da Vinci exalted painting, and Michelangelo favored sculpture as the most sublime art of all. I have to side with philosopher Arthur Schopenhauer’s judgment that music portrays the inner flow of life more directly than the other arts,[ii] and Friedrich Nietzsche, who famously said “Without music, life would be a mistake.” With music we dance, we sing, we communicate, we synchronize and coordinate, we contemplate, we remember. Sometimes we even fall into an otherworldly trance. Reggae icon Bob Marley perhaps puts it most simply when he sings, “One good thing about music, it gets you feeling okay…”


Reflect, for a moment, on how we interact with music: how we remember and respond to certain melodies over time; how a particular song or melody can replay constantly in our mind’s ear, even to the point of distraction[iii]; how particular melodies and harmonies can make us feel joyful or sad, fearful or fearless; how some individuals can see musical pitches as colors; how a particular shuffle rhythm can make us relax with a resting heartbeat, or an up-tempo straight beat can make our hearts race. Interestingly, humans are unique among primates in being able to tap their feet in time to a rhythm, an activity that involves a process of meter extraction so complicated that most computers cannot do it.

E.O. Wilson argues from an evolutionary perspective that creating and performing music is instinctual, one of the true universals of our species. Anthropological studies of tribal cultures show the extent to which singing and dancing is a natural activity in various communities, seamlessly integrated and involving everyone.[iv] In many of the world’s languages, the verb for singing is the same as the one for dancing; there is no distinction, since it is assumed that singing involves bodily movement.

Functional brain imaging shows that playing and listening to music involves nearly every region of the brain and nearly every neural subsystem. Learning to play a musical instrument even alters the structure of our brains, from subcortical circuits that encode sound patterns to neural fibers that connect the two cerebral hemispheres and patterns of gray matter density in certain regions of the cerebral cortex. One neuroscientist [Harvard’s Gottfried Schlaug] has shown that the front portion of the corpus callosum—the mass of fibers connecting the two cerebral hemispheres—is significantly larger in musicians than in non-musicians.[v]

Music is also powerful in its impact on human feeling and on perception. This is why movie soundtracks have the sublime capacity to enhance our multisensory experience. Music is extraordinarily complex in the neural circuits it employs, appearing to elicit emotion in at least six different brain mechanisms. We have all experienced the pleasures of music and neuroscientists have found that music is strongly associated with the brain’s reward system through the release of dopamine.

The emotional power of music is also reflected in that most time-honored form, the romantic love song. One researcher who analyzed the lyrics of the year’s 10 most popular songs listed in Billboard for two eras, 2002-2005 and 1968-1971, found that 24 of the 40 songs in the modern era — 60 percent — and half the songs of the classic era were devoted to the subject of love and relationships.[vi]

In The Descent of Man Darwin surmised that “musical notes and rhythm were first acquired by the male or female progenitors of mankind for the sake of charming the opposite sex. Thus, musical tones became firmly associated with some of the strongest passions an animal is capable of feeling, and are consequently used instinctively.” Beyond love and sex, music in politics and revolution can become a national anthem, a rallying cry, or a military march. In a communal celebration, such as Mardi Gras, music becomes an expression of collective joy and celebration.

Music is a language, not only an aural language but a written one. Music invokes some of the same neural regions as language but, far more than language does, music taps into primitive brain structures involved with motivation, reward, and emotion. The mental structure in music requires both halves of the brain, while the mental structure of language only requires the left half. In this sense, music is even more powerful than spoken language and is its likely precursor. Music may have prepared our pre-human ancestors for speech communication and for the very cognitive, representational flexibility necessary to become human. Singing and instrumental activities might have helped our species to refine motor skills, paving the way for the development of the exquisitely fine muscle control required for vocal or signed speech.

Not surprisingly, studies have found that children who take music lessons for two years also process language better. Music therapy using listening and instrument playing has been shown to help people overcome a broad range of psychological and neurological problems. Patients suffering from Parkinson’s disease, in whom movements tend to be incontinently fast or slow, or sometimes frozen, can overcome these disorders of timing when they are exposed to the regular tempo and rhythm of music.

In This is Your Brain on Music: The Science of a Human Obsession, neuroscientist Daniel J. Levitin offers evidence to support the view that musical ability served as an indicator of cognitive, emotional and physical health, and was evolutionarily advantageous as a force that led to social bonding and increased fitness. Levitin writes:

The story of your brain on music is the story of an exquisite orchestration of brain regions, involving both the oldest and newest parts of the human brain, and regions as far apart as the cerebellum in the back of the head and the frontal lobes just behind your eyes. It involves a precision choreography of neurochemical release and uptake between logical prediction systems and emotional reward systems. When we love a piece of music, it reminds us of other music we have heard, and it activates memory traces of emotional times in our lives. Your brain on music is all about…connections.[vii] (emphasis added)

Medical research into two specific neuro-developmental disorders reveals an interesting neurological link between music and social development. Williams Syndrome (WS) is a rare genetic disorder that causes physical and cognitive deficits, such as heart defects, stunted physical development, brain abnormalities, low IQs, high levels of emotional anxiety and various learning disabilities. However, WS individuals also exhibit high levels of sociability, gregariousness, and an affinity and talent for music. In contrast to WS is the family of Autism Spectrum Disorders (ASD), such as Asperger’s syndrome. Individuals with ASD exhibit deficits in sociability and an inability to empathize. In general, they also display no emotional affinity for music. As Levitin explains, complementary syndromes such as these, which neuroscientists call a double dissociation, strengthen the putative link between music and social bonding.

Historically and anthropologically, music has been involved with social activities. People sing and dance together in every culture, and one can imagine them doing so around the first fires a hundred thousand years ago. This observation dovetails with E.O. Wilson’s narrative of the campfire as the focus of social and community development cited in Chapter 1.

In Music and the Mind, psychologist Anthony Storr stresses that in all societies, a primary function of music is collective and communal, to bring and bind people together. As Storr explains, in modern culture the choice of music has important social consequences. People listen to the music their friends listen to and people who listen to the same music form friendships. Particularly when we are young, and in search of our identity, we form bonds or social groups with people whom we want to be like, or with whom we believe we have something in common. As a way of externalizing the bond, we dress alike, share activities, and listen to the same music. It becomes a mark of our chosen identity. This ties in with the evolutionary idea of music as a vehicle for social bonding and societal cohesion. Music and musical preferences become a mark of personal and group identity and of distinction.

As a powerful biological, psychological, emotional, and communicative medium, music reinforces the ties that bring us together and then bind us. Think of two musicians playing together, jamming, or playing a structured piece – the music is heard as one indivisible expression. A duet can become a trio, then a quartet, a quintet, and finally a full orchestra or big band. The possibilities for creative variation multiply with collaborative input. There is nothing more enjoyable to jazz aficionados – players and audiences alike – than an artful improvisation on a theme that becomes a new musical exploration of the unknown. Philharmonic audiences, likewise, are thrilled by the grandeur of an orchestra that plays as one.

I have deliberately highlighted the role of creativity in music because it provides strong evidence for the synergistic power of creating and sharing (connecting). The power of creative art is that it connects us to one another, and to larger truths about what it means to be alive and what it means to be human.


[i] Granted, this judgment may be largely influenced by the era in which the art is technically applied. Film was certainly a dominant art form of the 20th century, and some claim that virtual gaming will be the preeminent creative art form of the near future. Nevertheless, I will stick with the universality and simplicity of music.

[ii] See Schopenhauer on the “Hierarchy among the fine arts.”

[iii] For some inexplicable reason as I write this, the song “Winchester Cathedral” keeps repeating in my head. It is a song I most certainly have not heard replayed for at least 50 years, and yet there it is, playing back in my memory. Not my first choice!

[iv] This points out the modern travesty of dividing communal music performance between virtuosi and the rest of us listening in the audience. The communal drum circle is much more in tune with our nature.

[v] Gottfried Schlaug, “Musicians and music making as a model for the study of brain plasticity.” Prog Brain Res. 2015; 217: 37–55.


[vii] Daniel J. Levitin, This Is Your Brain on Music, p. 188. For a lovely graphic illustrating the myriad brain functions that music engages, which I cannot print here due to copyright issues, go to

Finite and Infinite Games: the Internet and Politics

About two decades ago James Carse, a religious scholar and historian, wrote a philosophical text titled Finite and Infinite Games. As he explained, there are two kinds of games. One could be called finite, the other infinite. A finite game is played for the purpose of winning, an infinite game for the purpose of continuing the play.

This simple distinction invites some profound thought. War is a finite game, as is the Super Bowl. Peace is an infinite game, as is the game of love. Finite games end with winners and losers, while infinite games seek perpetual play. Politics is a finite game; democracy, liberty, and justice are infinite games.

Life itself, then, could be considered a finite or infinite game depending on which perspective one takes. If one believes that ‘he who dies with the most toys wins,’ one is living in a finite game that ends with death. If one chooses to create an entity that lives beyond the grave, a legacy that perpetuates through time, then one is playing an infinite game.

One can imagine that we often play a number of finite games within an infinite game. This supports the idea of waging war in order to attain peace (though I wouldn’t go so far as saying it validates destroying the village in order to save it). The taxonomy also relates to the time horizon of one’s perspective in engaging in the game. In other words, are we playing for the short-term gain or the long-term payoff?

I find Carse’s arguments compelling when I relate them to the new digital economy and how the digital world is transforming how we play certain games, especially those of social interaction and the monetization of value. That sounds a bit hard to follow, but what I’m referring to is the value of the information network (the Internet) as an infinite game.

I would value the internet according to its power to help people connect and share ideas. (I recently wrote a short book on this power called The Ultimate Killer App: The Power to Create and Connect.) The more an idea is shared, the more powerful and valuable it can be. In this sense, the internet is far more valuable than the sum of its various parts, and for it to end as the victim of a finite game would be a tragedy for all. So, I see playing on the information network as an infinite game.

The paradox is that most of the big players on the internet – the Googles, Facebooks, Amazons, etc. – are playing finite games on and with the network. In fact, they are using the natural monopoly of network dynamics to win finite games for themselves, reaping enormous value in the process. But while they are winning, many others are losing. Yes, we do gain in certain ways, but the redistribution of informational power is leading to the redistribution of monetary gains and losses across the population of users. In many cases those gains and losses are redistributed quite arbitrarily.

For instance, let us take the disruption of the music industry, or the travel industry, or the publishing industry. One need not lament the fate of obsolete business models to recognize that for play to continue, players must have the possibility of adapting to change in order to keep the infinite game on course. Most musicians and authors believe their professions are DOA. What does that say for the future of culture?

Unfortunately, this disruption across the global economy wrought by digitization is being reflected in the chaotic politics of our times, mostly across previously stable developed democracies.

These economic and political developments don’t seem particularly farsighted and one can only speculate how the game plays out. But to relate it to current events, many of us are playing electoral politics in a finite game that has profound implications for the more important infinite game we should be playing.


Science: In the Battle Between Faith and Reason


I was recently engaged in some interesting discussions about science and reason in tension with religious doctrine and faith, probably inspired by the publicity generated by the Catholic Bishop of Rome, Pope Francis. As a social scientist tempered by skepticism when it comes to scientific or spiritual truths, I easily agree with the quote by Simone Weil above.

For me, the discussion resonated with the major theme of my first book, The City of Man: A Trilogy, which explored the dramatic story of an epic battle waged in 15th-century Italy between these very forces, a battle that marked the transition from the Age of Faith to the Age of Reason. The clash of ideas lent itself readily to personification through the historical characters of the fundamentalist preacher Girolamo Savonarola and the first political modernist Niccolo Machiavelli. Yet their story is far more nuanced and complex than a simple progression of man’s reasoning intellect.

I have excerpted my Author’s Note from the book to reprint here.

Author’s Note

Girolamo Savonarola can hardly be considered an obscure figure in European history. Many people are readily familiar with his infamous Bonfire of the Vanities and have a vague understanding of his relation to the art and politics of his day. I have always been astounded, however, by the way in which this friar’s tale, set within its particular historical circumstances, so closely approximates the great myths and legends that transcend both time and place. To borrow the words of one scholar, the life of Savonarola in Florence approximates “the battle between good and evil, played out against a background of order and chaos, fought for the redemption of fallen and painfully self-conscious man.”[1] In this sense we may appreciate its universalism and relevance. My interpretation is presented along three important dimensions of historical literature: context, character, and theme.

Context: Most of us have a basic knowledge of the Italian Renaissance, primarily within the context of art history and the genius of such figures as Leonardo da Vinci, Michelangelo, and Raphael. Standard reference texts characterize this remarkable cultural period as one in which new conceptions of the individual in relation to the universe contributed to a great flourishing of scholarly, literary, philosophical, scientific, and artistic achievement. This novel arose from my desire to comprehend the richness of the Renaissance and the causality of historical events. How did such a great flowering of human achievement come about? Was it a historical accident? An unfathomable mystery? Cultural or religious destiny? Plain dumb luck?

The most helpful view in understanding the conceptual framework for this novel is the classic one that characterizes the Renaissance as a transition, a bridge between the Middle Ages and the early modern world. It was a rapid period of change between the Age of Faith and the Enlightenment, brought forth by an upheaval of social, economic, and political institutions. Such periods of transition and change are not unique in history. We may look to the Age of Pericles in Athens, the Age of Rome under Julius Caesar, the nineteenth century’s Industrial Revolution, and our own technological information age for parallels ancient and modern.

History, of course, is littered with winners and losers, and every remarkable period of human advancement has been accompanied by rather less appealing characteristics and events. The great flowering of ideas and creativity during the Renaissance occurred on a continent beset with low life expectancy and high death rates due to famine, plague, and frequent wars. Medical and sanitary conditions were abysmal and the Black Death still haunted the urban landscape. Great disparities in wealth, combined with the tyranny of ruthless despots and oligarchs, resulted in constant economic and political instability. The imperial powers of France, Spain, and the Ottoman Turks were in their initial phases of expansionary conquest. The Roman Catholic Church engaged in corrupt practices, such as selling indulgences, and the Popes themselves were hardly paragons of virtue with their large retinues of mistresses and bastard children. The entire European countryside was largely mired in poverty, brutalized by war and famine, and ruled by despots and superstitions.

The two sides of the Renaissance—its glory and brutality—bring many important issues into sharp relief. Periods of social and cultural upheaval have motivated mankind to ponder deeper philosophical and religious questions concerning the purpose and meaning of existence. As the Renaissance bridged two periods classified as the Ages of Faith and Reason, such questions were particularly momentous because of the immediate conflict between the worldviews that defined these two eras. Prior to the Renaissance, the West was gradually becoming aware of new ways of relating to the universe. In fathoming the mysteries of the unknown, the intellect, employing reason and scientific inquiry, seemed to hold more promise than the traditional touchstones of spiritual faith and superstition. God and the universe became centered in Man, and the experience of man became paramount for understanding the world. As coincidence would have it, self-serving church leaders ensured that Faith was being corrupted just as Science and Reason were emerging as powerful cornerstones of a new philosophical humanism. Existing structures of power were challenged by the new usurpers to that power and the ensuing clash was violent.

The highest ethical objective of the new philosophical humanism became the salvation of the soul through the earthly good of humanity and the perfectibility of man. Such a philosophy justified the mass accumulation of wealth and power—ostensibly for the ultimate glory of God—and represented a significant shift away from the church’s moral teachings of poverty, humility, and penance. The result was an explosion of new expression through art, poetry, architecture and philosophy that overwhelmed the piety and reverent morality of Christian doctrine. Savonarola was acutely aware of the conflict in which he was engaged as his program of reform was symbolically represented by the transformation of the sinful, earthly city of Man (Babylon or Rome) into the heavenly city of God (the “New Jerusalem”).

These issues remain sharply delineated in our own societal distinctions between Church and State. Whereas we, in the modern West, presume a separation between religious and secular institutions, the concept of a unified religious state has been widely pursued in both western and non-western societies. The most obvious recent example is the rise of the Islamic fundamentalist state, but one must also consider the Victorian Age of morality, Puritanism, and modern Christian fundamentalism. From the perspective of the dramatist, these philosophical positions are often manifested in the attitudes and characters of real persons. Girolamo Savonarola can be viewed as the last vestige of the medieval age of faith while Niccolo Machiavelli can be celebrated (or vilified) as the harbinger of a new, enlightened age of science and reason. In my interpretation of the Renaissance I explore these two archetypal historical characters in depth.

Character: The character of Savonarola is particularly intriguing within the aforementioned historical context. Here was an obscure, ascetic monk who rose to the height of power and influence in the richest and most sophisticated city of his day. Under Savonarola’s guidance, Florence developed new institutions of government, economic justice, and religious charity. Evidence suggests that many of Florence’s cultural icons—Michelangelo, Botticelli, Pico della Mirandola, to name only a few—were quite taken with the charismatic preacher’s message and became his devout followers. Ultimately, however, Savonarola fell victim to his own overreaching ambitions and emotional weaknesses. His story is a classic, earthly Greek tragedy. Savonarola has been immortalized by history, and his message has inspired religious reformers from Martin Luther to our present day.

Most modern characterizations of Savonarola appear to reflect a contemporary bias that finds horrifying any assault on the primacy of reason and scientific inquiry. Many studies portray him as a fanatic and a reactionary, a religious fundamentalist who resisted science and reason by burning books and art and condemning the enlightened views of his day. But if he were a madman, how are we to judge Michelangelo, Botticelli, Lorenzo de’Medici and thousands of other Florentines who were deeply impressed by him? Were they all mad as well?

Niccolo Machiavelli is another figure who holds a significant place in our cultural imagination and, like Savonarola, is in some need of reevaluation. Like Marx and ‘Marxist’, Machiavelli and ‘Machiavellian’ suffer a strained, uneasy relationship warped by time. Machiavelli was blessed with an analytical mind and is often cited as the first political scientist. He studied the human drive for power and sought to devise a strategy to harness that drive to the greater good. His reputation, however, has been hijacked throughout history in the service of those who pursue power as the means to any ends. In this he is certainly misunderstood—perhaps not wrongly, but surely not fully.

We have no writings and little knowledge of Machiavelli that predate a letter he wrote near the end of the Savonarolan episode. We can conjecture that many of Machiavelli’s early ideas derived from his experiences in Florence in the 1490s, at which time he was a young man in his twenties. I have tried to employ his eyes to view the Savonarolan phenomenon from a modern perspective. I draw on Machiavelli’s The Prince, The Discourses, and History of Florence to extrapolate back to those youthful experiences, when he must have struggled to make sense of a rapidly changing world. In my interpretation Machiavelli seeks to impose order and restrain chaos by the most efficient means he can imagine. In this respect he is no different from Savonarola, who seeks the same through faith in God.

Theme: The Age of Reason that commenced more than five hundred years ago is the age in which we still reside. In this age, the vast capability of man and his intellect is expected to solve all puzzles and answer all questions that plague existence. From such a perspective, Savonarola is inevitably dismissed as a reactionary who resisted a new world that he could not understand. But perhaps he understood it all too well. At the turn of the twenty-first century we have discovered that science overpromises, at least in the sense of immediate gratification, and that we need to recognize the limits imposed on reason. For Savonarola, the salvation of the soul was paramount, but certainly not to the exclusion of the development of the mind. The preacher was a man of considerable intellectual talents and was an avid proponent of Augustine and Thomas Aquinas (and, by implication, Aristotle) on the reconciliation of philosophy and Christian theology. The conflicts and tensions within the heart and mind of this one individual cannot be dismissed by caricaturing him as a zealot.

The story of Savonarola and Machiavelli is, above all, a story of the conflicts within the human spirit—what we might refer to as the soul of mankind. As the human soul is a prisoner of the body, the struggle of mankind, like that of Savonarola, is the struggle to free the soul from the body, or the flesh. The material world imposes itself on our sense of the spiritual world of the soul and, indeed, to reconcile the demands of the material with the needs of the spiritual has been the struggle of Buddhists, Hindus, Christians, Muslims, and Jews alike. This struggle has profound dimensions, such as the search for a higher being or the experience of love, and also mundane ones, such as how to function within the world of industrial employment in order to secure a living. A long literary tradition addresses this struggle and includes works by such renowned and celebrated authors as Dostoyevsky, Hesse, Hugo, Kafka, Kazantzakis, Mann, Conrad, Camus, and Kundera.

I believe this eternal struggle of the soul is why the life of Girolamo Savonarola is so compelling. Today, more than five hundred years after his death, his story seems as relevant as ever to the universals of human experience. More Ancient Greek than modern Western hero, Savonarola is a man of shining virtues and tragic flaws, and in this he is all too human.

Through the struggle of the friar and the observations of Machiavelli, this novel also examines the question of mankind’s capacity for both extreme good and evil. The Florentines glorified Savonarola, and ultimately crucified him when he became an inconvenience. The guilt of this contradiction lives on today as humble citizens honor him with the memorial marking the site of his hanging and burning on May 23, 1498 in the Piazza della Signoria in Florence. (Every May 23 flowers appear in the morning to cover the brass plaque in the Piazza.) The trials of history reveal that the barbarism of men extends well into our own time. Our collective experience defies our faith in both God and reason to fathom the depths of the human heart and soul. And this is, perhaps, how it should be. We are, and will remain, a mystery.

In addressing “The Modern Spiritual Problem,” Carl Jung captures the dilemma from the perspective of the psychoanalyst:

The modern man has lost all the metaphysical certainties of his medieval brother, and set up in their place the ideals of material security, general welfare and humaneness. But it takes more than an ordinary dose of optimism to make it appear that these ideals are still unshaken. Material security, even, has gone by the board, for the modern man begins to see that every step in material progress adds just so much force to the threat of a more stupendous catastrophe. The very picture terrorizes the imagination. What are we to imagine when cities today perfect measures of defense against poison-gas attacks, and practice them in “dress rehearsals”? We cannot but suppose that such attacks have been planned and provided for—again on the principle ‘in time of peace prepare for war.’ Let man accumulate his materials of destruction and the devil within him will soon be unable to resist putting them to their fated use.

…if [modern man] turns away from the terrifying prospect of a blind world in which building and destroying successively tip the scale, and if he then turns his gaze inward upon the recesses of his own mind, he will discover a chaos and darkness there which he would gladly ignore. Science has destroyed even the refuge of the inner life. What was once a sheltering haven has become a place of terror.[2]

Jung, writing in the 1930s before the conflagration of the Second World War, was eerily prescient of future experiences with terrorism and genocide. The evidence continues to accumulate that mankind is both good and evil, light and dark. One particular hypothesis explored in this book is that change, which we often label progress, can have corrupting influences on society, well apart from its positive effects. When it is the harbinger of chaos and crisis, change incites fear and lays bare the most base and cruel of human instincts. Only by understanding this dimension of ourselves, either consciously or intuitively, can we hope to resist the temptation to evil. Faith, whether it is in God or science, fellow man or self, is the only antidote to fear. Faith props up efforts to reestablish order and maintain control of human destiny. Savonarola put his faith in God and the Bible while Machiavelli put his in Cicero, the Republic and Realpolitik.

Background texts:

To tease out the thematic elements of the story, I have drawn heavily upon three principal texts: the Bible, Augustine’s City of God, and Dante’s The Divine Comedy. The Bible, especially the Old Testament, was the primary reference from which Savonarola drew his sermons and developed his prophecies. His particular gift was to draw parallels between the Old Testament and events of his day and, in so doing, reiterate the teachings of the Gospels. His intellectual sources for the ideal organization of earthly society, along religious guidelines, were Augustine and Aquinas. Savonarola liberally used the Old Testament and Augustine’s City of God to delineate a clear moral distinction between the earthly and heavenly cities. He employed Aquinas’ writings on politics for practical implementation. His mission was to transform the earthly city of Florence into the heavenly City of God.

Dante’s The Divine Comedy can be viewed as a mythical, Christian text delivered from a more secular, historical perspective. Dante’s pilgrim takes us on a journey that first plunges into the depths of sin in Hell, finds redemption in Purgatory, and ultimately ascends to salvation in Heaven. On this journey Dante takes us through the Florence and Italy of his day, exposing the factionalism, conflict, and corruption that impede the good and the just. In so doing he, much like Savonarola, provided a framework for common Florentine citizens to comprehend their everyday world. Dante is Florence’s most famous son and all Florentines were intimately familiar with his verse. Furthermore, his three-stage pilgrimage through condemnation, redemption, and salvation parallels Savonarola’s own brief journey across Florence’s stage.

This interpretation of the events of Savonarola’s life is uniquely my own and is intended to adhere closely to the historical record of Renaissance Florence, and to the words attributed to Savonarola, Machiavelli, Lorenzo de’ Medici, Pope Alexander VI and others. (The only purely fictional character is Chiara Corbinelli and those created by her circumstances, such as the Prioress. The relationships between Tommaso Soderini and Machiavelli, as well as those between Tommaso and the Compagnacci are inferred from historical evidence, but there is no indication that Tommaso and Niccolo were close friends. I have assumed they were because they were neighbors, close in age, and because a close mentor relationship did exist between Niccolo and Piero Soderini, Tommaso’s uncle.) Certain plot elements have been created to tie the characters together but do not, to the best of my historical research, contradict any historical evidence. The historical background is gleaned from the professional research of respected scholars and historians. Artistic license, though kept to a minimum, and mistakes, hopefully minimized as well, are my sole responsibility.

The Renaissance city of Florence was a moment of promise—a promise of the mind, body, and spirit of man; a sensual awakening, the puberty of modern civilization; the birth of l’uomo universale; the blossoming of intellectual discipline and humanistic interpretation. We have embraced its ideals and continue to uphold the myth. But we should always question how well it serves us and never forget that every man is modern to his times.

[1] Jordan B. Peterson, Maps of Meaning: the Architecture of Belief.

[2] Carl G. Jung, from “The Modern Spiritual Problem,” in Modern Man in Search of a Soul [p. 204].
