The Importance of Being Blasphemous

Stephen R. Welch


This past January, millions marched throughout France in memory of the victims of the terror attack on the Paris offices of the satirical weekly newspaper Charlie Hebdo. The attack, perpetrated earlier that month by two French Islamists, had purportedly been committed to avenge the newspaper’s cartoon portrayals of Muhammad. The outrage expressed by the French public was unequivocal. In rallies counted among the largest in French history, “Je suis Charlie” (“I am Charlie”) became a global rallying cry, both as an expression of solidarity for the principle of free speech and in defiance of the terrorists’ attempt to suppress it. “Je suis Charlie” quickly went viral on social media.

Though by no means freighted with the same gravity, a similar affront to free speech occurred in the latter half of last year, when hackers stole data from Sony Pictures and made threats against the studio. The threats specifically targeted Sony for its comedy film The Interview, which depicted the assassination of the “Great Leader” of North Korea, Kim Jong-un. Citing “public safety concerns,” Sony cancelled release of The Interview. Days later, after its corporate backbone had received stiffening from the public outcry and words of “disappointment” from the U.S. president, Sony reversed its decision.

What is happening? For over two decades, ever since the now-infamous fatwa issued against Salman Rushdie for his novel The Satanic Verses, there has been a seemingly inexorable retrenchment in the public’s defense of controversial or offensive speech and art. Are we seeing, at last, a thaw in the long chill of self-censorship?

It is far too early to be sanguine. The battle to annex iconoclasm under the ever-expanding domain of the taboo is still being vigorously waged, particularly among the ideological Left. The voices of suppression, warning that Charlie Hebdo’s blasphemy was a reckless indulgence or decrying it as a form of hate speech, make essentially the same arguments that were leveled against Salman Rushdie more than twenty-six years ago. Behind the rhetoric is a very real fear. The Ayatollah Khomeini’s edict ordering all Muslims of the world to kill “without delay” the author, editors, and publishers of The Satanic Verses may not have succeeded in censoring the book. But the lesson delivered to us all on that Valentine’s Day in 1989 is one that no flurry of public rallies or the short-lived bloom of a well-meant hashtag will easily dispel.

 

There was a time when the only real fear a publisher had to face was the critic’s pen. Tracing the legacy of the Rushdie Affair in his book From Fatwa to Jihad, Kenan Malik highlights the contrast between then and now in an interview with publisher Peter Mayer. In 1989, Mayer was CEO of Penguin Books, publisher of The Satanic Verses. When he first learned of the fatwa, Mayer says that his primary concern was for Rushdie and his Penguin staff. His other reaction was bafflement. “The fate of being a publisher,” he said to Malik, is that one “always find[s] people offended by books you publish.” When Jews and Christians had objected—in writing; usually it went no further than that—to a book he had published, he would respond by simply saying that he could not publish only inoffensive books. The understanding among publishers, authors, and the reading public at the time was that the right to publish books included the publishing of offensive books, and any differences in taste and opinion were sorted out through discussion. “It was generally a civilized dialogue,” Mayer recalls. “One relied on the sanity of secular democracy.”

That a head of state would issue a death sentence upon the author of a novel and all involved in its publication was not something Mayer, or anyone in the industry, could have anticipated. The bewilderment quickly turned to something grimmer when Mayer began receiving letters and phone calls threatening, in graphic terms, him and his family with death. Yet Penguin did not back down from publishing The Satanic Verses. In his interview with Malik, Mayer recalls telling the Penguin board that, despite the threats and intimidation, they must take the long view on the matter: “Any climb-down . . . will only encourage future terrorist attacks by individuals or groups offended for whatever reason by other books that we or any publisher might publish. If we capitulate, there will be no publishing as we know it.”

Mayer and his Penguin colleagues became acutely aware that their decision would affect not only the future of publishing but also that of free inquiry, and by extension civil society itself. Such awareness and the urgency with which it was felt, Malik soberly observes, “seems to belong to a different age.”

This same defense of principle seemed to reemerge, at least for a time, in Paris earlier this year. Yet even in the wake of the Charlie Hebdo massacre, the right to offend was treated with equivocation by those who should know better. While some media outlets reproduced the Hebdo Muhammad cartoons, most did not: the New York Times, the BBC, and the UK’s Channel 4 would not do so. Even as they condemned the attacks, notable commentators such as CNN’s Jay Carney questioned the judgment of the magazine’s editors for publishing images “we know . . . will be deeply offensive” and have the “potential to be inflammatory,” while Financial Times Europe editor Tony Barber admonished that such publications are “being stupid” when they “provoke Muslims.”*

This finger-wagging speaks not so much to a principled stance against offending religious sensibilities as it reveals the foregone conclusion of violence. And it is this fear of violence, disingenuously cloaked in the rhetoric of prudence, that has come to serve as a de facto “blasphemy law.” As Nick Cohen acerbically notes, though the fear is arguably justified, it is reprehensible that writers and journalists—those who, one would presume, have the most to lose—cannot bring themselves to admit their fear and thus acknowledge their self-censorship. An honest admission, Cohen suggests, would “shred the pretence that journalists are fearless speakers of truth to power. But it would be a small gesture of solidarity. It would say to everyone, from Pakistani secularists murdered for opposing theocratic savagery, to British parents worried sick that their boys will join Islamic State, that radical Islam is a real fascistic force.”

Instead, Cohen says, journalists and many in the arts and academia have been living a lie. “We take on the powerful—and ask you to admire our bravery—if, and only if, the powerful are not a paramilitary force that may kill us.”

One silver lining in this depressing cloud is that the Charlie Hebdo incident has fomented public debate over the merits of the right to blaspheme in a free society, and whether that right truly jeopardizes social harmony or is intrinsic to the values of a liberal society. The Rushdie affair did not, unfortunately, precipitate the same level of public discourse. With the exception of a few voices (including Rushdie’s own), the fatwa was generally treated in the media as a problem directed singularly against Rushdie and, therefore, suffered by him alone. The possibility that Khomeini’s edict was delivered against the liberal principles of the West in toto or that the death threat was truly leveled at all authors and publishers—and by extension, readers—was not widely appreciated.

This naïveté seems like folly now. In his memoir of the fatwa years, Joseph Anton, Rushdie likens his ordeal to an unheeded Cassandra-like warning of things to come. Borrowing from Hitchcock’s The Birds, he illustrates how the threat of Islamism gathered while we in the West sat, oblivious. Recounting a famous scene from the film, he describes the actress Tippi Hedren as she sits on a bench outside an elementary school, unaware of the blackbirds gathering ominously on the jungle-gym behind her:


The children in the classroom . . . sing a sad nonsense song. Outside the school a cold wind is blowing. A single blackbird flies down from the sky and settles on the climbing frame in the playground. The children’s song is a roundelay. It begins but it doesn’t end. It just goes round and round. . . .

There are four more blackbirds on the climbing frame, and then a fifth arrives. Inside the school the children are singing. Now there are hundreds of blackbirds . . . and thousands more birds fill the sky, like a plague of Egypt. A song has begun, to which there is no end.

When the first bird comes down to roost, Rushdie explains, it was just about him, “individual, particular, specific. Nobody [felt] inclined to draw any conclusions from it.” It is only a dozen years later, “after the plague begins,” when people see that the first bird had been a harbinger.

In January of 1989, four months after The Satanic Verses was published, the first book-burnings in Britain occurred. This was followed in February by a small protest in Pakistan that turned deadly after police fired into the crowd of demonstrators. Five people were killed. Two days later, on Valentine’s Day, Khomeini issued his edict.

From the outset, many stood fast in their support of Rushdie. On the day of the book’s publication in the United States later that month, the Association of American Publishers, the American Booksellers Association, and the American Library Association paid for a full-page advertisement in the New York Times. The ad asserted that free people write books, free people read books, free people publish and sell books, and in the “spirit of . . . commitment to free expression” affirmed that The Satanic Verses “will be available to readers at bookshops and libraries throughout the country.” One hundred Muslim writers jointly published a book of essays in defense of free speech titled For Rushdie. Poets and writers from across the Arab world courageously, and publicly, defended him. “I choose Salman Rushdie,” wrote Syrian novelist Jamil Hatmal, “over the murderous turbans.”

In contrast to his defenders, there grew a loud chorus of detractors. Counted among them, sadly, were some fellow authors, including Egyptian novelist Naguib Mahfouz—himself also once accused of blasphemy—who, though first decrying the Ayatollah’s act as “terrorism,” later backtracked and stated that Rushdie “did not have the right to insult . . . anything considered holy.” One of the most notable critics was John le Carré, who on the pages of the Guardian sniffed, “[T]here is no law in life or nature that says great religions may be insulted with impunity.” Self-appointed leaders of the Muslim “community” in the United Kingdom voiced support for the fatwa. UK parliamentarians, pandering to their Muslim constituencies, focused their efforts not on defending their citizens’ rights but on preventing the paperback publication of the book. And the archbishop of Canterbury, George Carey, scolded Rushdie for his “abuse of free speech,” declaring that the novel was an “outrageous slur” and that “[w]e must be more tolerant of Muslim anger.”

Victim-blaming continues to have traction where Rushdie’s critics are concerned. One of the more notable reviews of Joseph Anton was also one of the most negative. In her piece in the December 20, 2012, issue of the New York Review of Books, Zoë Heller did not exude quite the same disdain for Rushdie as his earlier detractors had, but many of her criticisms repeated the same canards and placed the same tired emphasis on the author’s perceived foibles. Heller found particularly objectionable the hardening of Rushdie’s perspective on Islam. “Respect for Islam,” Rushdie had written without qualm, was merely fear of Islamist violence cloaked in a “Tartuffe-like hypocrisy” by the dogma of multiculturalism. Like his critics two decades ago, Heller makes no effort to ascribe responsibility to the Islamists for sullying Islam or making the world feel “smaller and grimmer.” Instead, she lays it at Rushdie’s feet, chastising him for having “narrowed” his viewpoint.

Writing, as surely Heller herself knows, is a deeply personal endeavor. One can imagine, certainly, that a man finding his world turned upside-down for nine years, his life threatened, his character maligned, his worth as an artist questioned, and the fruit of his work—his novel—crucified and immolated by a mob, might succumb to the human response of taking it all quite personally. But such generosity of perspective becomes increasingly impossible for one who has been demonized. The first proposition of the assault against him, Rushdie recalls, was that “anyone who wrote a book with the word ‘satanic’ in the title must be satanic, too. Like many false propositions that flourished in the incipient Age of Information (or disinformation), it became true by repetition. Tell a lie about a man once and many people will not believe you. Tell it a million times and it is the man himself who will no longer be believed.”

The real man Salman Rushdie was replaced by an invented “Satan Rushdy,” an effigy that his adversaries could offer up to the hysterical mobs. Likewise, The Satanic Verses itself had been subjected to a vigorous campaign of demonization. The book he had written about migration, transformation, and identity, Rushdie laments, vanished and was replaced by one that “scarcely existed.” It was this imaginary novel, this figment, against which the “rage of Islam” would be directed.

Even the propositions surrounding that “rage” were the products of disinformation, repeated until presumed to be true. Malik lays out several facts that reduce to rubbish any claims that The Satanic Verses had caused mortal offense to Muslims en masse. There was “barely a squeak of protest,” he points out, among Muslims in France or Germany, nor any mass protests in the United States. Arabs and Turks were likewise “unmoved by Rushdie’s blasphemies.” Most Muslim countries (Pakistan and Saudi Arabia being the notable exceptions) did not ban the novel. It was not banned in Iran. In fact, in the months prior to Khomeini’s edict, the novel was reviewed in the Iranian press and discussed in government ministries and at street cafes. The Iranian literary journal Kayhan Farangi, though criticizing The Satanic Verses on artistic grounds and for a “caricature-like . . . image of Islamic principles,” did not once raise the specter of blasphemy. Kayhan Farangi did acknowledge that the book was a “work of imagination” and went as far as to suggest that the ban in India had been driven by politics rather than theology.

The fatwa was not an answer to The Satanic Verses or its putatively blasphemous contents. The decade following 1979 had brought Iran’s Islamic revolution to a disappointing standstill; Iran had fought a long and bitter war with Iraq to a costly stalemate, and it had failed to unseat the Saudi regime from its perceived role as the face of world Islam. Meanwhile, reformers within the Iranian parliament were growing restive. On his deathbed, brooding over his legacy, Khomeini made a calculated move to put fire back into his revolution’s belly. The fatwa against Salman Rushdie and the publishers of his novel was, in a manner of speaking, the Ayatollah’s parting shot. (It is almost certain that the old man had never read the novel.)

Rushdie’s sin was not that he wrote a book that incurred the wrath of Islam, but that he wrote the right book at the right time to be exploited by Islamist demagogues. Despite what critics may still repeat, nowhere in The Satanic Verses does Rushdie slander the Prophet or his companions as “scums and bums,” though characters persecuting his fictional Prophet use these words; nor does he malign the wives of the Prophet as whores, though, again, characters in a fictional brothel so name themselves. His Prophet is not-quite-Muhammad and his Mecca is not-quite-Mecca; his protagonist, Gibreel, is no more or less the angel Gabriel than he is the Indian film actor Amitabh Bachchan; and the book’s narrator is no more Satan than he is Salman Rushdie. It is neither polemic nor satire; nor is it an allegory or insult, veiled or otherwise. The Satanic Verses is no more or less than what its author intended—a novel.

Rushdie’s “offense” was, by fictionalizing him, to make the Prophet merely human, and in so doing to subvert the fiction of Muhammad’s divinity. No less an undertaking would be expected of an author of Rushdie’s caliber, a man who by his own admission is “godless, but fascinated by gods and prophets.” The novel is not an attack against Islam. On the contrary, it is an engagement with that religion’s legacy, an attempt by a man who is not a believer to reconcile that faith’s long shadow with his own nonbelief. Within the pages of The Satanic Verses, Salman Rushdie recreated Islam in an image of his own making. That is the blasphemy for which some believers, and those who speak in their name, will not forgive him.

 

This past May, PEN America gave its annual Freedom of Expression Courage Award to the surviving staff of Charlie Hebdo. Explaining the decision to the Guardian, PEN president Andrew Solomon reminded readers that the award was for courage, not content, adding, “[t]here is courage in refusing the very idea of forbidden statements, an urgent brilliance in saying what you have been told not to say in order to make it sayable.”

Several well-known writers protested the PEN decision, igniting a brief furor on social media. The protesters raised the familiar arguments from taste, objecting to the perceived racist or “phobic” content of the magazine’s cartoons. Rushdie was one of the first to defend PEN America’s decision, as were Nick Cohen and Kenan Malik, among others. Those who defended PEN did so in full recognition that the freedom to speak derives precisely from those few who have the courage to say the unsayable, and that it is the freedom to speak upon which all other freedoms depend.

Our conviction that these freedoms have value has grown alarmingly weak over the past two decades, the consequence in large part of our embrace of the morally incoherent dogma of cultural relativism. In the years following the fatwa, bookstores were firebombed and assassination attempts were made upon publishers and translators—in one case, successfully—and yet publication of The Satanic Verses continued. Today no violence, nor even a credible threat of violence, is required; the mere suggestion of “offense,” in the form of an organized protest or social-media campaign, is enough now to shut down a book, a play, or an art installation. Where the courage to publish the unsayable is lacking, the courage of those who write and speak it comes to naught.

Sometimes all it takes is a phone call. In 2008, twenty years after the fatwa, Random House bought The Jewel of Medina, a historical romance written by journalist Sherry Jones. Jones’s novel had almost nothing in common with Rushdie’s apart from its fictionalizing of Islamic history. The protagonist of The Jewel of Medina is Aisha, one of Muhammad’s wives. Though by all accounts the novel is self-consciously positive in its portrayal of the Prophet (in the words of Douglas Murray, “stomach-churningly fawning”), this did not save it from the ire of the self-righteous. In researching Aisha’s legacy, Jones had used a book by Denise Spellberg, an associate professor of Islamic history at the University of Texas. Random House, seeking a cover endorsement, sent the galley proofs to Spellberg. After reading them, Spellberg took it upon herself to phone an editor at Random House and, condemning the novel as an “offensive” and “ugly piece of work,” warned that it was “far more controversial than The Satanic Verses” and could pose “a very real possibility of major danger for the . . . staff and widespread violence.” Spellberg recommended that the book be withdrawn as soon as possible. Apparently on the strength of that single phone call, reinforced by negative posts on an online forum (also initiated by Spellberg), Random House—Salman Rushdie’s current publisher—pulled The Jewel of Medina from publication.

It is not only literature that suffers from this voluntary censorship, and it is not only Islam that cries foul. In late 2004 a production of Behzti, a play by Sikh writer Gurpreet Kaur Bhatti that depicted sexual abuse and murder in a gurdwara, was cancelled in response to protests by activists from the Sikh community in Birmingham, England. As recently as last year, Exhibit B, an art installation that depicted live black actors in a recreation of a colonial-era “human zoo,” was also forced to close by protesters. The critics of Behzti condemned the play for its “blasphemy” and “offense” against the Sikh religion, while the protesters against Exhibit B charged it with “complicit racism” (the very social ill it had intended to critique). Nor does a production have to be altogether cancelled to compromise freedom of speech. Last year, the New York Metropolitan Opera capitulated to protesters and cancelled the simulcast to cinemas of its production of John Adams’s controversial opera The Death of Klinghoffer, effectively censoring it for anyone who could not afford the privilege of paying more than one hundred dollars to see the live performance.

All forms of inquiry and expression today are subject to the veto of the offended. Academic works, which normally do not generate much controversy (or attention) outside the confines of the ivory tower, are no less subject to suppression. Last year, a scholarly work, The Hindus: An Alternative History by American Indologist Wendy Doniger, was withdrawn from publication in India as the result of a lawsuit brought by members of the Hindu Right. The publisher was Penguin, and, in a sad irony, all Indian copies of Doniger’s book—cited for “denigration of Hinduism” by the plaintiffs—were pulped during the week of the twenty-fifth anniversary of the Salman Rushdie fatwa.

That Penguin, the original publisher of The Satanic Verses, had pulled Doniger’s work brings the saga full circle. It was India that was first to ban Rushdie’s novel. It is deeply troubling that the lesson learned from Khomeini’s fatwa over the past twenty-six years has not been how to better champion and protect our writers, playwrights, and scholars but rather how to best emulate the “rage of Islam” in order to suppress any speech and art that an aggrieved party can claim has offended them. Free speech has become an indulgence, whereas grievance culture is now an equal-opportunity entitlement.

The Rushdie Affair, as Malik observes, was a watershed. Rushdie’s detractors “lost the battle in the sense that they never managed to stop the publication of The Satanic Verses,” but, he says, “they won the war by pounding into the liberal consciousness the belief that to give offence was a morally despicable act.” We have internalized the fatwa, a fact affirmed by the writers interviewed in From Fatwa to Jihad. “What is really dangerous is when you don’t know you’ve censored yourself,” worries Monica Ali, whose 2003 novel Brick Lane was subject to protest marches amid the familiar accusations of offense and insult. The writing process is unconscious, and as such, she laments, “it is difficult to know to what extent you’ve been infected by the debate about offense.”

Hanif Kureishi, another prominent British novelist and a contemporary of Rushdie’s, goes further. “Nobody would have the balls today to write The Satanic Verses. Writing is now timid because writers are terrified.”

It is often said that it is the most offensive and unpopular speech that must be protected. However, it is not necessarily the work of the iconoclast or the polemicist that is most at risk. Works of honest inquiry—the history that questions received truths or the novel that dares to humanize the divine or demonic—all are threatened by this collective, internalized taboo against giving offense. Satire may stir the type of public attention that garners marches and pronouncements by presidents, and polemic may goad the ire of those it scorns. But it is the type of work that only casually disturbs or discomforts us, art that succeeds in penetrating the shell of our unexamined assumptions—the best of art, in other words—that is most likely to be censored, not necessarily by the spectacle of violence but by the stroke of an editor’s or publisher’s rejection or, worse, the author’s fear of embarking on the work to begin with. From this perspective, it is clear that the lesson delivered to us by the Ayatollah Khomeini in 1989, and reprised early this year in Paris, has yet to be unlearned.

The true demonstration that we have at last freed ourselves will not be found in a march of solidarity with the next assassinated writer, or cartoonist, or playwright. It will manifest in something more prosaic. Proof that the old man’s fatwa has been truly exorcized, that we have indeed conquered it, will arrive when the next Satanic Verses is published, bought, read, and reviewed despite the protests, the threats, and the misinformation and shaming campaigns organized by the offended.

But first, someone needs to write it.

Further Reading

Cohen, Nick. 2015. “Paris Attacks: Unless We Overcome Fear, Self-censorship Will Spread.” Guardian, January 10. http://www.theguardian.com/commentisfree/2015/jan/11/paris-attacks-we-must-overcome-fear-or-selfcensorship-will-spread. Accessed May 12, 2015.

Flood, Alison, and Alan Yuhas. 2015. “Salman Rushdie Slams Critics of PEN’s Charlie Hebdo Tribute.” Guardian, April 27. http://www.theguardian.com/books/2015/apr/27/salman-rushdie-pen-charlie-hebdo-peter-carey. Accessed May 21, 2015.

Heller, Zoë. 2012. “The Salman Rushdie Case.” The New York Review of Books, December 20. http://www.nybooks.com/articles/archives/2012/dec/20/salman-rushdie-case/?pagination=false. Accessed May 19, 2015.

Khomeini, Ruhollah Mostafavi Moosavi. “Ayatollah Sentences Author to Death.” http://news.bbc.co.uk/onthisday/hi/dates/stories/february/14/newsid_2541000/2541149.stm. Accessed May 12, 2015.

Malik, Kenan. 2010. From Fatwa to Jihad: The Rushdie Affair and Its Aftermath. Brooklyn: Melville House Publishing.

———. 2014. “On The Death of Klinghoffer.” Pandaemonium (blog). November 13. https://kenanmalik.wordpress.com/2014/11/13/on-the-death-of-klinghoffer/. Accessed May 21, 2015.

Muir, Hugh. 2014. “Barbican Criticizes Protesters Who Forced Exhibit B Cancellation.” Guardian, September 24. http://www.theguardian.com/culture/2014/sep/24/barbican-criticise-protesters-who-forced-exhibit-b-cancellation. Accessed May 21, 2015.

Murray, Douglas. 2013. Islamophilia: A Very Metropolitan Malady. New York: emBooks.

Prashad, Vijay. 2014. “Wendy Doniger’s Book Is a Tribute to Hinduism’s Complexity, Not an Insult.” Guardian, February 12. http://www.theguardian.com/commentisfree/2014/feb/12/wendy-doniger-book-hinduism-penguin-hindus. Accessed May 21, 2015. Thanks to Kenan Malik for pointing out the concurrence with the twenty-fifth anniversary week of the fatwa, in https://kenanmalik.wordpress.com/2014/12/19/fear-and-free-speech/.

Rushdie, Salman. 2012. Joseph Anton: A Memoir. New York: Random House.

———. 2008. The Satanic Verses. New York: Random House.

Singh, Gurharpal. 2004. “Sikhs Are the Real Losers from Behzti.” Guardian, December 23. http://www.theguardian.com/stage/2004/dec/24/theatre.religion. Accessed May 21, 2015.

Vale, Paul. 2015. “Financial Times Europe Editor Tony Barber Accuses Charlie Hebdo of ‘Muslim Baiting.’” Huffington Post UK, January 7. http://www.huffingtonpost.co.uk/2015/01/07/financial-times-europe-editor-tony-barber-accuses-charlie-hebdo-of-muslim-baiting_n_6431346.html. Accessed May 12, 2015.

Wemple, Erik. 2015. “On CNN, Jay Carney Sticks to Position that Charlie Hebdo Should Have Pulled Back.” Washington Post, January 8. http://www.washingtonpost.com/blogs/erik-wemple/wp/2015/01/08/on-cnn-jay-carney-sticks-to-position-that-charlie-hebdo-should-have-pulled-back/. Accessed May 12, 2015.

 

*Barber later “updated and expanded” his Financial Times opinion piece to excise the words “being stupid.”

Stephen R. Welch

Stephen R. Welch is a freelance writer based in New York. He writes regularly for Free Inquiry; his last article, “The Importance of Being Blasphemous: Literature, Self-Censorship, and the Legacy of The Satanic Verses,” appeared in the October/November 2015 issue.


How Morality Has the Objectivity that Matters—Without God

Ronald A. Lindsay

The thesis of this essay is that morality is not objective in the same way that statements of empirically verifiable facts are objective, yet morality is objective in the ways that matter: moral judgments are not arbitrary; we can have genuine disagreements about moral issues; people can be mistaken in their moral beliefs; and facts about the world are relevant to and inform our moral judgments. In other words, morality is not “subjective” as that term is usually interpreted. Moral judgments are not equivalent to descriptive statements about the world—factual assertions about cars, cats, and cabbages—but neither are they merely expressions of personal preferences.

This thesis has obvious importance to our understanding of morality. Moreover, this thesis has special relevance to humanists and other nonreligious people, because one of the most frequently made arguments against atheism is that it is incompatible with the position that morality is objective and that rejecting the objectivity of morality would have unacceptable consequences.

The Need for God: The Argument from Morality

For centuries now, those who argue for theism have been running out of room to maneuver. Things that once seemed to require a supernatural explanation—whether it was thunder, volcanoes, diseases, human cognition, or the existence of the solar system—have long since become the domain of science. (Admittedly, some, such as Bill O’Reilly, remain unaware that we can explain the regularity of certain phenomena, such as the tides, without reliance on divine intervention.) So the theists have changed tactics. Instead of using God to explain natural phenomena, theistic apologists have increasingly relied on arguing that God is indispensable for morality. At first, this contention often took the form of an accusation that atheists can’t be trusted; they’re immoral. In the last few decades, however, many theists have—in the face of overwhelming evidence—grudgingly conceded that at least some atheists can be good people. So has God now become irrelevant? Do we need a deity for anything?

Yes, says the theist. Sure, some individual atheists can be relied upon to act morally, but, as political commentator Michael Gerson put it, “Atheists can be good people; they just have no objective way to judge the conduct of those who are not.” In other words, without God, atheists cannot explain how there are objective moral truths, and without objective moral truths, atheists have no grounds for saying anything is morally right or wrong. We atheists might act appropriately, but we cannot rationally justify our actions; nor can we criticize those who fail to act appropriately.

Furthermore, this contention that God is required for morality to be objective has become the new weapon of choice for those wishing to argue for the existence of God. For example, the Christian apologist William Lane Craig has made what he regards as the reality of objective moral truths the key premise of one of his favorite arguments for the existence of God. According to Craig, there can be no objective moral truths without God, and since there are objective moral truths, God must exist.

One traditional counter to the argument that God is required to ground objective morality is that we cannot possibly rely on God to tell us what’s morally right and wrong. As Plato pointed out long ago in his dialogue Euthyphro, divine commands cannot provide a foundation for morality. From a moral perspective, we have no obligation to follow anyone’s command—whether it’s God’s, Putin’s, or Queen Elizabeth’s—just because it is a command. Rules of conduct based on the arbitrary fiats of someone more powerful than us are not equivalent to moral norms. Moreover, it is no solution to say that God commands only what is good. This response presupposes that we can tell good from bad, right from wrong, or, in other words, that we have our own independent standards for moral goodness. But if we have such independent standards, then we don’t need God to tell us what to do. We can determine what is morally right or wrong on our own.

This response to the theist is effective as far as it goes. Contrary to the theist, God cannot be the source of morality. However, this doesn’t address the concern that morality then loses its objectivity. It becomes a matter of personal preference. We cannot really criticize others for doing something morally wrong, because all we’re saying is “we don’t like that.”

It’s this fear that without God we’ll have a moral vacuum and descend into nihilism that sustains some in the conviction that there is a God or that we need to encourage belief in God regardless of the evidence to the contrary. It sustains belief in God (or belief in belief) even in the face of the argument from Euthyphro. Logic does not always triumph over emotion, and the dread that without God we have no moral grounding—“without God, everything is permitted”—can be a powerful influence on many.

The notion that God’s word is what counts and what makes the difference between moral and immoral actions comforts some because it provides them with the sense that there is something beyond us, something outside of ourselves to which we can look to determine whether some action is morally right or wrong. Is murdering someone wrong? Sure, God tells us that in the Bible. For the devout, that’s a fact. A fact that can be confirmed, just like the fact that ripe tomatoes are red, not blue. It’s not a matter of subjective opinion. And if morality isn’t objective, then it must be subjective, correct?

For these reasons—and also because we want a firm grounding for morality ourselves—it is incumbent upon humanists, and secular ethicists generally, to address squarely the contentions that without God there is no objectivity in morality and that this situation would be something dreadful. The problem is that most try to do this by arguing that morality is objective in a way similar to the way in which ordinary descriptive statements are objective. The better argument is that morality is neither objective nor subjective as those terms are commonly understood.

Secular Attempts to Make Morality Objective

Some secular ethicists have tried to supply substitutes for God as the moral measuring-stick while adhering to the notion that morality must be objective and that moral judgments can be determined to be true or false in ways similar to statements about the world. Some argue that facts have certain moral implications. In this way, morality is based on natural facts, and statements about morality can be determined to be true or false by reference to these facts. Often, the starting point for such arguments is to point out undisputed facts, such as that pain is a bad thing and, all other things being equal, people avoid being in pain. Or, if one wants to approach the issue from the other direction, well-being is a good thing, and, all other things being equal, people want to have well-being. The argument then proceeds from this foundation to the claim that we have a moral obligation to avoid inflicting pain or to increase well-being. But this will not do. Granted, pain is “bad” in a nonmoral sense, and people don’t want it, but to say that inflicting pain on someone is presumptively morally bad implies we have some justification for saying that this action is morally bad, not just that it’s unwanted. From where does this moral obligation derive, and how do we detect it?

The problem with trying to derive moral obligations directly from facts about the world is that it’s always open for someone to ask “Why do these facts impose a moral obligation?” Sure, well-being may be desirable, and I may want well-being for myself and those close to me, but that doesn’t imply that I am obliged to increase well-being in general. Certainly, it’s not inconsistent for people to say that they want well-being for themselves and those close to them, but that they feel no moral obligation to increase the well-being of people they don’t know. This is not the equivalent of saying ripe tomatoes are both red and blue simultaneously.

The difficulty in deriving moral obligations directly from discrete facts about the world was famously noted by the eighteenth-century Scottish philosopher David Hume, who remarked that from a statement about how things are—an “is” statement—we cannot infer a moral norm about how things should be—an “ought” statement. Despite various attempts to show Hume wrong, his argument was and is sound. Note that Hume did not say that facts are not relevant to moral judgments. Nor did he claim that our moral norms are subjective—although this is a position often mistakenly attributed to him. He did not assert that the truth of moral judgments is determined by referring to our inner states, which would be a subjectivist position. Instead, he maintained that a factual statement, considered in isolation, cannot imply a moral norm. An “is” statement and an “ought” statement are distinct classes of statements.

Some have tried to circumvent the difficulty in deriving moral obligations directly from factual statements by arguing that “nonnatural” facts or properties supply the grounding for morality. However, all such attempts to do so have foundered on the inability to describe with precision the nature of these mysterious nonnatural facts or properties and how it is we can know them. “Intuition” is sometimes offered as a method for knowing moral facts, but intuitions notoriously differ.

Derek Parfit, an Oxford scholar whom some regard as one of the most brilliant philosophers of our time (and I so regard him), recently produced a massive work on ethics titled On What Matters. This two-volume work covers a lot of ground, but one of its main claims is that morality is objective, and we can and do know moral truths but not because moral judgments describe some fact. Indeed, moral judgments do not describe anything in the external world, nor do they refer to our own feelings. There are no mystical moral or normative entities. Nonetheless, moral judgments express objective truths. Parfit’s solution? Ethics is analogous to mathematics. There are mathematical truths even though, on Parfit’s view, there is no such thing as an ideal equation 2 + 2 = 4 existing somewhere in Plato’s heaven. Similarly, we have objectively valid moral reasons for not inflicting pain gratuitously even though there are no mystical moral entities to which we make reference when we declare, “Inflicting pain gratuitously is morally wrong.” To quote Parfit, “Like numbers and logical truths … normative properties and truths have no ontological status” (On What Matters, vol. 2, p. 487).

Parfit’s proposed solution is ingenious because it avoids the troublesome issues presented when we tie moral judgments to facts about the world (or facts about our feelings). However, ingenuity does not ensure that a theory is right. Parfit provides no adequate explanation of how we know ethical truths, other than offering numerous examples where he maintains we clearly have a decisive reason for doing X rather than Y. In other words, at the end of the day he falls back on something such as intuition, with the main difference between his theory and other theories being that his intuitions do not reference anything that exists; instead they capture an abstract truth.

So secular attempts to provide an objective foundation for morality have been … well, less than successful. Does this imply we are logically required to embrace nihilism?

No. Let me suggest we need to back up and look at morality afresh. The whole notion that morality must be either entirely subjective or objective in some way comparable to factual (or in Parfit’s case, mathematical) truths is based on a misguided understanding of morality. It’s based on a picture of morality in which morality serves functions similar to factual descriptions (or mathematical theorems). We need to discard that picture. Let’s clear our minds and start anew.

The Functions of Morality

So, if we are starting from the ground up, let’s ask basic questions. Why should we have morality? What is its purpose? Note that I am not asking, “Why should I be moral?”—a question often posed in introductory philosophy courses. I do not mean to be dismissive of this question, but it raises a different set of issues than the ones we should concentrate on now. What I am interested in is reflection on the institution of morality as a whole. Why bother having morality?

One way to begin to answer this question is just to look at how morality functions, and has functioned, in human societies. What is it that morality allows us to do? What can we accomplish when (most) people behave morally that we would not be able to accomplish otherwise? Broadly speaking, morality appears to serve these related purposes: it creates stability, provides security, ameliorates harmful conditions, fosters trust, and facilitates cooperation in achieving shared and complementary goals. In other words, morality enables us to live together and, while doing so, to improve the conditions under which we live.

This is not necessarily an exhaustive list of the functions of morality, nor do I claim to have explained the functions in the most accurate and precise way possible. But I am confident that my list is a fair approximation of some of the key functions of morality.

How do moral norms serve these functions? In following moral norms we engage in behavior that enables these functions of morality to be fulfilled. When we obey norms like “don’t kill” and “don’t steal,” we help ensure the security and stability of society. It really doesn’t take a genius to figure out why, but that hasn’t stopped some geniuses from drawing our attention to the importance of moral norms. As the seventeenth-century English philosopher Thomas Hobbes and many others have pointed out, if we always had to fear being injured or having our property stolen, we could never have any rest. Our lives would be “solitary, poor, nasty, brutish, and short.” Besides providing security and stability by prohibiting certain actions, moral norms also promote collaboration by encouraging certain actions and by providing the necessary framework for the critical practice of the “promise”—that is, a commitment that allows others to rely on me. Consider a simple example, one that could reflect circumstances in the Neolithic Era as much as today. I need a tool that you have to complete a project, so I ask you to lend it to me. You hesitate to lend me the tool, but you also believe you are obliged to help me if such help doesn’t significantly harm you. Moreover, I promise to return the tool. You lend me the tool; I keep my promise to return the tool. This exchange fosters trust between us. Both of us will be more inclined to cooperate with each other in the future. Our cooperation will likely improve our respective living conditions.

Multiply this example millions of times, and you get a sense of the numerous transactions among people that allow a peaceful, stable, prospering society to emerge. You also can imagine how conditions would deteriorate if moral norms were not followed. Going back to my tool example, let us imagine you do not respond positively to my request for assistance. This causes resentment and also frustrates my ability to carry out a beneficial project. I am also less likely to assist you if you need help. Or say you do lend me a tool, but I keep it instead of returning it as promised. This causes distrust, and you are less likely to assist me (and others) in the future. Multiplied many times, such failures to follow moral norms can result in mistrust, reduced cooperation, and even violence. If I do not return that tool peacefully, you may resort to brute force to reacquire it.

Fortunately, over time, humans have acted in ways that further the objectives of morality far more often than in ways that frustrate these objectives. Early humans were able to establish small communities that survived, in part, because most members of the community followed moral norms. These small communities eventually grew larger, again, in part because of moral norms. In this instance, what was critical was the extension of the scope or range of moral norms to those outside one’s immediate community. Early human communities were often at war with each other. Tribe members acted benevolently only to fellow members of their tribe; outsiders were not regarded as entitled to the same treatment. One of the earliest moral revolutions was the extension of cooperative behavior—almost surely based initially on trade—to members of other communities, which allowed for peaceful interaction and the coalescing of small human groups into larger groups. This process has been repeated over the millennia of human existence (with frequent, sanguinary interruptions) until we have achieved something like a global moral community.

This outline of morality and its history is so simple that I am sure some will consider it simplistic. I have covered in a couple of paragraphs what others devote thick tomes to. But it suffices for my purposes. The main points are that in considering morality, we can see that it serves certain functions, and these functions are related to human interests. Put another way, we can describe morality and its purposes without bringing God into the picture; moreover, we can see that morality is a practical enterprise, not a means for describing the world.

Moral Judgments Versus Factual Assertions

The practical function of morality is the key to understanding why moral judgments are not true or false in the same way that factual statements are true or false. The objective/subjective dichotomy implicitly assumes that moral judgments are used primarily to describe, so they must have either an objective or subjective reference. But, as indicated, moral judgments have various practical applications; they are not used primarily as descriptive statements.

Consider these two statements:

Kim is hitting Stephanie.

Without provocation, we ought not to hit people.

Do these statements have identical functions? I suggest that they do not. The first statement is used to convey factual information; it tells us about something that is happening. The second statement is in the form of a moral norm that reflects a moral judgment. Depending on the circumstances, the second statement can be used to instruct someone, condemn someone, admonish someone, exhort someone, confirm that the speaker endorses this norm, and so forth. The second statement has primarily practical, not descriptive, functions. Admittedly, in some circumstances, moral norms or descriptive counterparts of moral norms also can be used to make an assertion about the world, but they do not primarily serve to convey factual information.

In rejecting the proposition that moral judgments are equivalent to factual statements about the world, I am not endorsing the proposition that moral judgments are subjective. A subjective statement is still a descriptive statement that is determined to be true by reference to facts. It’s simply a descriptive statement referring to facts about our inner states—our desires, our sentiments—as opposed to something in the world. To claim that moral judgments are subjective is to claim that they are true or false based on how a particular person feels. That’s not how most of us regard moral judgments.

But if Moral Judgments Do Not Refer to Facts, How Do We Decide What’s Right and Wrong?

It’s obvious that people disagree about moral issues, but the extent of that disagreement is often exaggerated. The reality is that there is a core set of moral norms that almost all humans accept. We couldn’t live together otherwise. For humans to live together in peace and prosper, we need to follow norms such as do not kill, do not steal, do not inflict pain gratuitously, tell the truth, keep your commitments, reciprocate acts of kindness, and so forth. The number of core norms is small, but they govern most of the transactions we have with other humans. This is why we see these norms in all functioning human societies, past and present. Any community in which these norms were lacking could not survive for long. This shared core of moral norms represents the common heritage of civilized human society.

These shared norms also reflect the functions of morality as applied to the human condition. Earlier I observed that morality has certain functions; that is, it serves human interests and needs by creating stability, providing security, ameliorating harmful conditions, fostering trust, and facilitating cooperation in achieving shared and complementary goals. One can quibble about my wording, but that morality has something like these functions is beyond dispute. The norms of the common morality help to ensure that these functions are fulfilled by prohibiting killing, stealing, lying, and so forth. Given that humans are vulnerable to harm, that we depend upon the honesty and cooperation of others, and that we are animals with certain physical and social needs, the norms of the common morality are indispensable.

We can see now how morality has the type of objectivity that matters. If we regard morality as a set of practices that has something like the functions I described, then not just any norm is acceptable as a moral norm. “Lie to others and betray them” is not going to serve the functions of morality. Because of our common human condition, morality is not arbitrary; nor is it subjective in any pernicious sense. When people express fears about morality being subjective, they are concerned about the view that what’s morally permissible is simply what each person feels is morally permissible. But morality is not an expression of personal taste. Our common needs and interests place constraints on the content of morality. Similarly, if we regard morality as serving certain functions, we can see how facts about the world can inform our moral judgments. If morality serves to provide security and foster cooperation, then unprovoked assaults on others run counter to morality’s aims. Indeed, these are among the types of actions that norms of the common morality try to prevent. For this reason, when we are informed that Kim did hit Stephanie in the face without provocation, we quickly conclude that what Kim did was wrong, and her conduct should be condemned.

Note that in drawing that conclusion, we are not violating Hume’s Law. Facts by themselves do not entail moral judgments, but if we look upon morality as a set of practices that provide solutions to certain problems, for example, violence among members of the community, then we can see how facts are relevant to moral judgments. Part of the solution to violence among members of the community is to condemn violent acts and encourage peaceful resolution of disputes. Facts provide us with relevant information about how to best bring about this solution in particular circumstances.

Similarly, with a proper understanding of morality, we can also see how we can justify making inferences from factual statements to evaluative judgments. Recall that the fact/value gap prevents us from inferring a moral judgment from isolated statements of fact. But if we recognize and accept that morality serves certain functions and that the norms of the common morality help carry out these functions, the inference from facts to moral judgments is appropriate because we are not proceeding solely from isolated facts to moral judgments; instead, we are implicitly referencing the background institution of morality. An isolated factual observation cannot justify a moral judgment, but a factual observation embedded in a set of moral norms can justify a moral judgment.

Objection 1: Just Because Morality Serves Certain Functions Does Not Imply It Should Have Those Functions

At this point, the perceptive reader might object that even assuming that the functions of morality I have described correspond to functions served by morality, this does not address the question of what the functions of morality should be. Haven’t I just moved the fact/value gap back one step, from the level of an individual factual statement to the level of a description of the institution of morality as a whole? Put another way, explaining how morality functions doesn’t address the issue of how it should function.

This is a reasonable objection, but it is one I can meet. So let’s consider this issue: Should morality have objectives that reflect the functions of morality that I have described, that is, serving human interests and needs by creating stability, providing security, ameliorating harmful conditions, fostering trust, and facilitating cooperation in achieving shared and complementary goals? Perhaps the best way to answer this question is with another question: What’s the alternative? If morality should not aim to create stability, provide security, ameliorate harmful conditions, and so forth, what’s the point of morality otherwise? To increase the production of cheese? One could maintain that cheese production is an overriding imperative, and one could label this a moral imperative, but the reality is that for humans to live and work together we would still need something to fulfill the functions of what we now characterize as morality. Perhaps we’d call it “shmorality,” but we’d still have a similar body of norms and practices, whatever its name.

Granted, some philosophers have argued that morality should have objectives somewhat different than the ones I have outlined. Various philosophers have argued that morality should aim at maximizing happiness, or producing a greater balance of pleasure over pain, or producing virtuous characters. Without digressing into a long discussion of ethical theory, I believe these views grasp certain aspects of the moral enterprise, but they mistakenly elevate part of what we accomplish through morality into the whole of it. There is no single simple principle that governs morality. Yes, we want to encourage people to be virtuous—that is, to be kind, courageous, and trustworthy—but to what end? Likewise, we want people to be happy, but exactly how do we measure units of happiness, and how do we balance the happiness of different individuals against one another or against the happiness of the community? If we look at morality as a practical enterprise, something like the objectives I have outlined represents a better description of what we want morality to accomplish. (I say “something like” because I am not claiming to give the best possible description of morality’s objectives.)

Objection 2: I Haven’t Explained Why Moral Norms Are Obligatory

A second important objection to my argument is that I have not explained how it is that moral norms are binding on us. Even if we accept that there is a common morality, why must we follow these norms?

There are two types of answers I can give here. Both are important, so we need to keep them distinct. One answer would appeal to human psychology. The combination of our evolutionary inheritance and the moral training most of us receive disposes us to act morally. We should not lose sight of this fact because if we were not receptive to moral norms, no reference to a divine command, no appeal to an ethical argument, could ever move us to behave morally. For a moral norm to act as a motivating reason to do or refrain from doing something, we must be the type of person who can respond to moral norms. Ethicists as far back as Aristotle have recognized this. Good moral conduct owes much to moral training, and the most sublime exposition of the magnificence of the moral law will not persuade those who have been habituated into antisocial behavior.

But in addition to a causal explanation of why we feel a sense of moral obligation, we also want an explanation of the reason for acknowledging moral obligations. In my view, it’s largely a matter of logical consistency. If we accept the institution of morality, then we are tacitly agreeing to be bound by moral norms. We cannot logically maintain that moral norms apply to everyone except us. If we think it is morally wrong for others to break their promises to us, as a matter of logic we cannot say that we are under no obligation to keep our promises. In saying that an action is morally wrong, we are committed to making the same judgment regardless of whether it is I or someone else performing the action. In accepting the institution of morality, we are also accepting the obligations that come with this institution. Hence, there is a reason, not just a psychological cause, for acknowledging our obligation to follow moral norms.

What if someone rejects the institution of morality altogether? The perceptive reader will not have failed to notice that I italicized “if” when I stated, “If we accept the institution of morality, then we are tacitly agreeing to be bound by moral norms.” I emphasized this condition precisely to draw attention to the fact that, as a matter of logic, there is nothing preventing an individual from rejecting the institution of morality entirely, from “opting out” of morality, as it were—that is, apart from the likely unpleasant consequences for that person of such a decision. There is nothing to be gained by pretending otherwise. There is no mystical intuition of “the moral law” that inexorably forces someone to accept the institution of morality. Nor is there any set of reasons whose irresistible logic compels a person to behave morally. Put another way, it is not irrational to reject the institution of morality altogether. One can coherently and consistently prefer what one regards as one’s own self-interest to doing the morally appropriate thing. However, leaving aside those who suffer from a pathological lack of empathy, few choose this path. Among other things, this would be a difficult decision to make psychologically.

That said, there is no guarantee that people will not make this choice. But notice that bringing God into the picture doesn’t change anything. People can make the decision to reject morality even if they think God has promulgated our shared moral norms. Indeed, many believers have made this decision, as evidenced by the individuals who throughout history have placed themselves outside the bounds of human society and have sustained themselves by preying on other humans. Many ruthless brigands and pirates have had no doubts about God’s existence. They robbed, raped, and murdered anyway.

You may say: “But what they did was objectively wrong—and an atheist can’t say this. As you have admitted, there is nothing outside the institution of morality to validate this institution, so the obligations of morality are not really binding.” If one means by “objectively wrong” something that conforms to a standard of wrongness that exists completely independently of the human condition and our moral practices, then, correct, an atheist might not use “objectively wrong” in this sense. (Some ethicists who are atheists might, as I have already discussed.) But so what? First, as indicated by the Euthyphro argument, the notion that God could provide such an external standard is highly questionable. Second, and more important, what is lost by acknowledging that morality is a wholly human phenomenon that arose to respond to the need to influence behavior so people can live together in peace? I would argue that nothing is lost, except some confused notions about morality that we would do well to discard.

The temptation to think that we need some standard external to morality in order to make morality objective and to make moral obligations really binding is buttressed by the fear that the only alternative is a subjectivist morality—but recognizing that morality is based on human needs and interests, and not on God’s commands, doesn’t make one a subjectivist. As already discussed, when those who don’t think that morality is derived from God say that something is morally wrong, they don’t (typically) mean that this is just how they as individuals feel, which would be a true subjectivist position. One cannot argue with feelings. But most nonreligious people think we can argue about moral issues and that some people are mistaken about their conclusions on moral matters.

To have genuine disagreements about moral issues, we need accepted standards for distinguishing correct from incorrect moral judgments, and facts must influence our judgments. Morality as I have described it meets these conditions. All morally serious individuals accept the core moral norms I have identified, and it is these core norms that provide an intersubjective foundation for morality and for disagreements about more complex moral issues. For example, all morally serious individuals recognize that there is a strong presumption that killing is wrong, and our knowledge that we live among others who also accept this norm allows us to venture outside instead of barricading ourselves in our homes. There is no dispute about this norm. But there are discrete areas of disagreement regarding the applicability of this norm, for example, in the debate over physician-assisted dying. Such disputes on complex issues do not indicate that morality is subjective; to have a dispute—a genuine dispute, and not just dueling statements of personal preference—the parties to the dispute must have shared premises. In discussing and trying to resolve such moral disputes, we make reference to norms of the common morality (such as the obligation not to kill versus the obligation to show compassion and prevent suffering), interpret them in light of relevant facts, and try to determine how our proposed resolution would serve the underlying rationale of the applicable norms. Only the morally inarticulate invoke subjective “feelings.” (In my forthcoming book, The Necessity of Secularism: Why God Can’t Tell Us What To Do, I devote a chapter to illustrating how we can express disagreement on public policy matters without invoking God or just saying “that’s how I feel.”)

From the foregoing, we can also see that morality is not arbitrary. People can argue intelligently about morality and can also assert that an action is morally wrong—not just for them, but wrong period. They can condemn wrongdoers, pointing out how their actions are inconsistent with core norms (although most wrongdoers are already aware of their transgressions). Furthermore, if the offense is serious enough, they will impose severe punishment on the wrongdoer, possibly including removal from society. All that seems pretty objective, in any relevant sense of the term. Granted, it’s not objective in the same way that the statement that it is raining outside is objective, but that’s because, as we have already established, factual statements have a different function than moral judgments.

At this point, the believer might protest, “But there has to be something more than that. Morality is not just a human institution.” Well, what is this something more? Why is it not enough to tell wrongdoers that everyone condemns them because what they did violated our accepted norms, which are essential to our ability to live together in peace? Do we have to add, “Oh, by the way, God condemns you too”? Exactly what difference would that make?

What some believers (and, again, some secular ethicists) appear to want is some further fact, something that will make them more comfortable in claiming that moral norms are authoritative and binding. Somehow it is not sufficient that a norm prohibiting the gratuitous infliction of violence reduces pain and suffering and allows us to live together in peace, and has, therefore, been adopted by all human societies. No; for the believer there has to be something else. A moral norm must be grounded in something other than its beneficial effects for humans and human communities. The statement that “it was wrong for Kim to hit Stephanie” must pick out some mystical property that constitutes “wrongness.” For the believer, this further fact is usually identified as a command from God, but as we have already established, God’s commands cannot be regarded as imposing moral obligations unless we already possess a sense of right and wrong independent of his commands.

Those who cling to the “further fact” view—that is, the view that there must be something outside of morality that provides the objective grounding for morality—are not unlike those naïve economists who insist that currency has no value unless it’s based on gold or some other precious metal. Hence, we had the gold standard, which for many years provided that a dollar could be exchanged for a specific quantity of gold. The gold standard reassured some that currency was based on something of “objective” value. However, the whole world has moved away from the gold standard with no ill effects. Why was there no panic? Why didn’t our economic systems collapse or become wildly unstable? Because currency doesn’t need anything outside of the economic system itself to provide it with value. Money represents the value found within our economic system, which, in turn, is based on our economic relationships.

Similarly, moral norms represent the value found in living together. There is no need to base our moral norms on something outside of our relationships. Moral norms are effective in fostering collaboration and cooperation and in improving our conditions, and there is no need to refer to a mystical entity, a gold bar, or God to conclude that we should encourage everyone to abide by common moral norms.

Conclusion

In conclusion, the claim that we need God to provide morality with objectivity does not withstand analysis. To begin with, God would not be able to provide objectivity, as the argument from Euthyphro demonstrates. Moreover, morality is neither objective nor subjective in the way that statements of fact are said to be objective or subjective; nor is that type of objectivity really our concern. Our legitimate concern is that we don’t want people feeling free “to do their own thing,” that is, we don’t want morality to be merely a reflection of someone’s personal desires. It’s not. To the extent that intersubjective validity is required for morality, it is provided by the fact that, in relevant respects, the circumstances under which humans live have remained roughly the same. We have vulnerabilities and needs similar to those of people who lived in ancient times and medieval times, and to those of people who live today in other parts of the world. The obligation to tell the truth will persist as long as humans need to rely on communications from each other. The obligation to assist those who are in need of food and water will persist as long as humans need hydration and nutrition to sustain themselves. The obligation not to maim someone will persist as long as humans cannot spontaneously heal wounds and regrow body parts. The obligation not to kill someone will persist as long as we lack the power of reanimation. In its essentials, the human condition has not changed much, and it is the circumstances under which we live that influence the content of our norms, not divine commands. Morality is a human institution serving human needs, and the norms of the common morality will persist as long as there are humans around.

Ronald A. Lindsay

Ronald A. Lindsay is the former president and CEO of the Center for Inquiry. Currently, he is senior research fellow for CFI and adjunct professor of philosophy at Prince George’s Community College.


The Fable of the Christ

Michael Paulkovich


I have always been a staunch Bible skeptic but not a Christ-mythicist. I maintained that Jesus probably existed but had fantastic stories foisted upon the memory of his earthly yet iconoclastic life.

After exhaustive research for my first book, I began to perceive both the light and darkness from history. I discovered that many prominent Christian fathers believed with all pious sincerity that their savior never came to Earth or that if he did, he was a Star-Trekian character who beamed down pre-haloed and full-grown, sans transvaginal egress. And I discovered other startling bombshells.

An exercise that struck me as meritorious, even today singular, involved reviving research into Jesus-era writers who should have recorded Christ tales but did not. John Remsburg enumerated forty-one “silent” historians in The Christ (1909). To this end, I spent many hours bivouacked in university libraries, the Library of Congress, and on the Internet. I terminated that foray upon tripling Remsburg’s count: in my book, I offer 126 writers who should have but did not write about Jesus (see “The Silent Historians” below). Perhaps the most bewildering “silent one” is the super-Savior himself. Jesus is a phantom of a wisp of a personage who never wrote anything. So, add one more: 127.

Perhaps none of these writers is more fascinating than Apollonius Tyanus, saintly first-century adventurer and noble paladin. Apollonius was a magic-man of divine birth who cured the sick and blind, cleansed entire cities of plague, foretold the future, and fed the masses. He was worshiped as a god and as a son of a god. Despite such nonsense claims, Apollonius was a real man recorded by reliable sources.

Because Jesus ostensibly performed miracles of global expanse (such as in Matthew 27), his words going “unto the ends of the whole world” (Rom. 10), one would expect virtually every literate person to have recorded those events. A Jesus contemporary such as Apollonius would have done so, as well as those who wrote of Apollonius.

Such is not the case. In Philostratus’s third-century chronicle Vita Apollonii, there is no hint of Jesus. Nor does Jesus appear in the works of other Apollonius epistolarians and scriveners: Emperor Titus, Cassius Dio, Maximus, Moeragenes, Lucian, Soterichus Oasites, Euphrates, Marcus Aurelius, or Damis of Hierapolis. It seems that none of these first- to third-century writers ever heard of Jesus, his miracles and alleged worldwide fame be damned.

Another bewildering author is Philo of Alexandria. He spent his first-century life in the Levant and even traversed Jesus-land. Philo chronicled contemporaries of Jesus—Bassus, Pilate, Tiberius, Sejanus, Caligula—yet knew nothing of the storied prophet and rabble-rouser enveloped in glory and astral marvels.

Historian Flavius Josephus published his Jewish Wars circa 95 CE. He had lived in Japhia, one mile from Nazareth—yet Josephus seems unaware of both Nazareth and Jesus. (I devoted a chapter to the interpolations in Josephus’s works that make him appear to write of Jesus when he did not.)

The Bible venerates the artist formerly known as Saul of Tarsus, but he was a man essentially oblivious to his savior. Paul was unaware of the virgin mother and ignorant of Jesus’s nativity, parentage, life events, ministry, miracles, apostles, betrayal, trial, and harrowing passion. Paul didn’t know where or when Jesus lived and considered the crucifixion metaphorical (Gal. 2:19–20). Unlike what is claimed in the Gospels, Paul never indicated that Jesus had come to Earth. And the “five hundred witnesses” claim (1 Cor. 15) is a forgery.

Qumran, hidey-hole for the Dead Sea Scrolls, lies twelve miles from Bethlehem. The scroll writers, coeval and abutting the holiest of hamlets one jaunty jog eastward, never heard of Jesus. Christianity still had that new-cult smell in the second century, but Christian presbyter Marcion of Pontus in 144 CE denied any virgin birth or childhood for Christ. Jesus’s infant circumcision (Luke 2:21) was thus a lie, as well as the crucifixion! Marcion claimed that Luke was corrupted; Christ self-spawned in omnipresence, esprit sans corps.

I read the works of second-century Christian father Athenagoras and never encountered the word Jesus—Athenagoras was unacquainted with the name of his savior! This floored me. Had I missed something? No; Athenagoras was another pious early Christian who was unaware of Jesus.

The original Mark ended at 16:8, with later forgers adding the fanciful resurrection tale. John 21 also describes post-death Jesus tales, another forgery. Millions should have heard of the crucifixion with its astral enchantments: zombie armies and meteorological marvels (Matt. 27) recorded not by any historian but only in the dubitable scriptures scribbled decades later by superstitious folks. The Jesus saga is further deflated by Nazareth, a town without piety and in fact having no settlement until after the war of 70 CE—suspiciously, just around the time the Gospels were concocted.

Conclusion

When I consider those 126 writers, all of whom should have heard of Jesus but did not—and Paul and Marcion and Athenagoras and Matthew with a tetralogy of opposing Christs, the silence from Qumran and Nazareth and Bethlehem, conflicting Bible stories, and so many other mysteries and omissions—I must conclude that Christ is a mythical character. Jesus of Nazareth was nothing more than an urban (or desert) legend, likely an agglomeration of several evangelic and deluded rabbis who might have existed.

I also include in my book similarities of Jesus to earlier God-sons such as Sandan and Mithra and Horus and Attis, too striking to disregard. The Oxford Classical Dictionary and Catholic Encyclopedia, as well as many others, corroborate.

Thus, today I side with Remsburg—and with Frank Zindler, John M. Allegro, Godfrey Higgins, Robert M. Price, Salomon Reinach, Samuel Lublinski, Charles-François Dupuis, Allard Pierson, Rudolf Steck, Arthur Drews, Prosper Alfaric, Georges Ory, Tom Harpur, Michael Martin, John Mackinnon Robertson, Alvar Ellegård, David Fitzgerald, Richard Carrier, René Salm, Timothy Freke, Peter Gandy, Barbara Walker, D.M. Murdock, Thomas Brodie, Earl Doherty, Thomas L. Thompson, Bruno Bauer, and others—heretics and iconoclasts and freethinking dunces all, it would seem.

If all the evidence and nonevidence, including 126 (127?) silent writers, cannot convince, I’ll wager that we will uncover much more, for this is but the tip of the mythical-Jesus iceberg. Nothing adds up for the fable of the Christ.

 

The Silent Historians

  • Aelius Theon
  • Albinus
  • Alcinous
  • Ammonius of Athens
  • Alexander of Aegae
  • Antipater of Thessalonica
  • Antonius Polemo
  • Apollonius Dyscolus
  • Apollonius of Tyana
  • Appian
  • Archigenes
  • Aretaeus
  • Arrian
  • Asclepiades of Prusa
  • Asconius
  • Aspasius
  • Atilicinus
  • Attalus
  • Bassus of Corinth
  • C. Cassius Longinus
  • Calvisius Taurus of Berytus
  • Cassius Dio
  • Chaeremon of Alexandria
  • Claudius Agathemerus
  • Claudius Ptolemaeus
  • Cleopatra the physician
  • Cluvius Rufus
  • Cn. Cornelius Lentulus Gaetulicus
  • Cornelius Celsus
  • Columella
  • Cornutus
  • D. Haterius Agrippa
  • D. Valerius Asiaticus
  • Damis
  • Demetrius
  • Demonax
  • Demosthenes Philalethes
  • Dion of Prusa
  • Domitius Afer
  • Epictetus
  • Erotianus
  • Euphrates of Tyre
  • Fabius Rusticus
  • Favorinus Flaccus
  • Florus
  • Fronto
  • Gellius
  • Gordius of Tyana
  • Gnaeus Domitius
  • Halicarnassensis Dionysius II
  • Heron of Alexandria
  • Josephus
  • Justus of Tiberias
  • Juvenal
  • Lesbonax of Mytilene
  • Lucanus
  • Lucian
  • Lysimachus
  • M. Antonius Pallas
  • M. Vinicius
  • Macro
  • Mam. Aemilius Scaurus
  • Marcellus Sidetes
  • Martial
  • Maximus Tyrius
  • Moderatus of Gades
  • Musonius
  • Nicarchus
  • Nicomachus Gerasenus
  • Onasandros
  • P. Clodius Thrasea
  • Paetus Palaemon
  • Pamphila
  • Pausanias
  • Pedacus Dioscorides
  • Persius/Perseus
  • Petronius
  • Phaedrus
  • Philippus of Thessalonica
  • Philo of Alexandria
  • Phlegon of Tralles
  • Pliny the Elder
  • Pliny the Younger
  • Plotinus
  • Plutarch
  • Pompeius Saturninus
  • Pomponius Mela
  • Pomponius Secundus
  • Potamon of Mytilene
  • Ptolemy of Mauretania
  • Q. Curtius Rufus
  • Quintilian
  • Rubellius Plautus
  • Rufus the Ephesian
  • Saleius Bassus
  • Scopelian the Sophist
  • Scribonius
  • Seneca the Elder
  • Seneca the Younger
  • Sex. Afranius Burrus
  • Sex. Julius Frontinus
  • Servilius Damocrates
  • Silius Italicus
  • Soranus
  • Soterides of Epidaurus
  • Sotion
  • Statius the Elder
  • Statius the Younger
  • Suetonius
  • Sulpicia
  • T. Aristo
  • T. Statilius Crito
  • Tacitus
  • Thallus
  • Theon of Smyrna
  • Thrasyllus of Mendes
  • Ti. Claudius Pasion
  • Ti. Julius Alexander
  • Tiberius
  • Valerius Flaccus
  • Valerius Maximus
  • Vardanes I
  • Velleius Paterculus
  • Verginius Flavus
  • Vindex

 


Michael Paulkovich is an aerospace engineer and freelance writer, a frequent contributor to Free Inquiry and Humanist Perspectives magazines, a contributing editor at The American Rationalist, and a columnist for American Atheist. His book No Meek Messiah was published in 2013 by Spillix.
