Have Christians Accepted the Scientific Conclusion That God Does Not Answer Intercessory Prayer?

Brian Bolton

In 1982, a young cardiologist at the San Francisco General Medical Center named Randolph Byrd had a brilliant insight that would motivate several important investigations of prayer during the following two decades. He realized that the standard research paradigm known as the double-blind randomized clinical trial could be used to test the efficacy of intercessory prayer with medical patients in the hospital setting.

Dr. Byrd, who identified as a born-again Christian, believed that he could put (heretofore faith-based) prayer on a solid scientific foundation by implementing rigorous study procedures within an experimental framework. In his groundbreaking investigation, nearly four hundred cardiac patients were randomly assigned to prayed-for and not-prayed-for conditions. The prayers offered by Christian intercessors requested a quick recovery with minimal complications.

A variety of intermediate measures of medical progress, as well as the ultimate indicator of patient success—recovery versus mortality—were objectively recorded for all subjects, prayer recipients and control subjects alike. After the data were analyzed, Byrd erroneously concluded that the results supported the hypothesis that intercessory prayers to the Judeo-Christian god have a beneficial therapeutic effect on cardiac patients. He thanked God for responding to the prayers for healing. Byrd’s landmark study was published in the Southern Medical Journal in 1988.

While the initial results did suggest the possibility that prayer conferred a small benefit, the subsequent revelation that a major violation of experimental protocol had occurred cast even this modest ray of hope into doubt. Later studies confirmed the correct inference, namely, that God does not answer intercessory prayer.

After mixed reactions from research specialists, medical personnel, and religious believers, four new major investigations incorporating the same procedures Byrd had followed were carried out at other medical centers. The principal investigators of the four replication studies, with their affiliations and publication dates for their articles, are: William Harris, Saint Luke’s Hospital (1999); Jennifer Aviles, Mayo Clinic (2001); Mitchell Krucoff, Duke University Medical Center (2005); and Herbert Benson, Beth Israel Deaconess Medical Center (2006). Brief summaries of the studies are provided in the Appendix of this article.

Why Investigate Intercessory Prayer?

If we begin with a general definition of prayer as a petition or request for assistance directed to a higher power, then we can distinguish among three types of prayer: personal, interpersonal, and intercessory. Why not investigate the first two types of prayer?

The alleged benefits of personal prayer (for oneself) and interpersonal prayer (with or for others who are present or know that they are targets) are difficult to isolate because of the confounding effects of belief (or placebo effect) and interpersonal support, respectively.

In contrast, intercessory prayer (for others who are not present and do not know they are intended recipients) is amenable to scientific investigation. It was this critical recognition by Byrd that intercessory prayer could be studied objectively that established the conceptual basis for his visionary project. Note that intercessory prayer is sometimes referred to as “distant healing” in the literature.

Critical Features of the Investigations

All five of the large-sample randomized trials of intercessory prayer with hospitalized cardiac patients shared a set of attributes that justify confidence in the findings obtained from the studies.

  1. Careful random assignment of subjects to the prayed-for and not-prayed-for conditions ensured that groups were equivalent (within the limits of statistical sampling fluctuation) on all relevant variables.
  2. The double-blind requirement, in which neither patients nor hospital personnel knew the condition (prayer or non-prayer) to which subjects had been assigned, prevented contamination. This was the flaw in Byrd’s project—the study coordinator did know which patients were receiving prayers at the time she interacted with them.
  3. The principal investigators were all committed Christian advocates of the healing power of prayer, except for the leader of the fifth study, a physician sympathetic to the value of prayer and spiritual support in medical practice. It should be emphasized that none was an unbeliever or skeptic.
  4. The overwhelming majority of the members of the teams of intercessors came from Christian backgrounds, including various Protestant denominations and Catholic traditions, with a small number of Jews, Muslims, and Buddhists participating in some studies.
  5. The critical outcome indicator common to all investigations was patient mortality, a definitive criterion of success or failure that reflected directly the primary focus of the intercessory prayers for speedy recovery with few complications.

Unambiguous Experimental Results

Refuting the sincere beliefs and hopes of the dedicated researchers and funding agencies, the results of the five investigations of intercessory prayer were uniformly negative. This series of carefully conducted clinical trials did not produce any evidence in support of the proposition that hospitalized cardiac patients derive benefit from the altruistic prayers of committed intercessors.

Specifically, the mortality rates listed in the sidebar were very similar for the prayed-for and not-prayed-for patients. It can be seen that the outcomes of three of the individual studies slightly favored the not-prayed-for patients, although none of the five differences was statistically significant. The total comparison (a simple meta-analysis) is statistically significant only because of the huge aggregate samples; the effect size is minuscule, stems mostly from Byrd’s study, which accounted for 80 percent of the difference, and is obviously not the achievement of an omnipotent god.
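For readers who want to see the arithmetic behind this kind of claim, the short sketch below (in Python, using the scipy library) illustrates how a pooled two-group mortality comparison of the sort described above is typically tested: a chi-square test on a two-by-two table of deaths versus survivors, with the absolute difference in mortality rates as a crude effect size. This is only an illustration under stated assumptions, not the analysis the investigators performed; the function name is hypothetical, and the counts in the commented example call are placeholders rather than the sidebar figures.

```python
# Minimal sketch (illustrative only, not the studies' actual analysis):
# chi-square test and risk difference for pooled mortality counts.
from scipy.stats import chi2_contingency


def pooled_prayer_comparison(deaths_prayed, n_prayed, deaths_control, n_control):
    """Return chi-square statistic, p value, and risk difference for a 2x2 table."""
    table = [
        [deaths_prayed, n_prayed - deaths_prayed],      # prayed-for: died, survived
        [deaths_control, n_control - deaths_control],   # not-prayed-for: died, survived
    ]
    result = chi2_contingency(table)                    # works with older and newer scipy
    chi2, p_value = result[0], result[1]
    risk_difference = deaths_prayed / n_prayed - deaths_control / n_control
    return chi2, p_value, risk_difference


# Example call with placeholder counts (not the sidebar data):
# chi2, p, rd = pooled_prayer_comparison(100, 2000, 110, 2000)
```

The general point the sketch makes explicit is that as the pooled samples grow, ever smaller mortality differences will register as statistically significant, which is why the magnitude of the effect, and not the p value alone, is what matters here.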

 

It is apparent that the Judeo-Christian god disregards fervent prayers for recovery: favorable outcomes occur just as often without them. In other words, restoration to good health is a matter of competent medical care and good luck. Intercessory prayer does not improve the probability of survival. Why would God allow 116 patients in the five study samples who were recipients of Christian prayer to die, when Jesus promised, “If you ask anything in my name, I will do it” (John 14:10–14)?

One anomalous finding from the Benson study that, after a dozen years, continues to be cited in popular-magazine articles on prayer is that a third group of participants, composed of patients who knew that they were receiving intercessory prayer, exhibited significantly more post-operative complications during the recovery period. However, the overall mortality rate for this group (2.16 percent) was very similar to that of the other two study groups, indicating that the higher level of complications did not manifest as an increased probability of death.

The truly remarkable data presented in the sidebar deserve special comment. These figures summarize the mortality statistics for two thousand recipients of intercessory prayer and two thousand not-prayed-for cardiac patients in five major investigations conducted over a span of twenty years. This level of documentation is unparalleled in the history of prayer research.

Excuses, Denial, Ignorance

How did Christian medical personnel and other believers in the healing power of prayer react to the disappointing results of the five scientific prayer studies? Six assertions capture the range of excuses and denials expressed by ignorant critics of the investigations:

  • The research is premised on a misconception of how God responds to prayer.
  • God is outside the domain of science and therefore is not amenable to experimental evaluation.
  • It is not possible to randomize God or truly understand his will.
  • Research can neither prove nor disprove the validity of divine intervention.
  • Because God already knows who needs healing, prayer is superfluous.
  • It is a corruption of faith, if not willful blasphemy or sacrilege, to test God.

All six of these rationalizations entail the invocation of theological dogma in an attempt to explain away the irrefutable evidence that the Judeo-Christian god does not answer prayer. It is important to note that these worthless explanations from the dumbfounded religious community only emerged after the negative results of the clinical trials were thoroughly replicated. Two legitimate questions that prayer advocates should have addressed are:

  1. Why would an omnipotent, omniscient, omnibenevolent god refuse to respond to altruistic healing prayers for cardiac patients (or anyone else)?
  2. Why didn’t God instantaneously cure all patients in both the prayed-for and not-prayed-for groups? This miraculous mediation would have demonstrated conclusively that God really does intervene supernaturally to heal disease and illness.

Illustrating just how far removed from reality prayer advocates can be, the principal investigator of one major study asserted that prayer could conceivably overdose the human body with fatal consequences! He then declared that if prayer were as powerful as he believed it to be, prayer interventions and therapies would be taught in all medical schools. (These extremely optimistic comments were made before the negative results of his study were known.)

The most bizarre reaction to the results of the prayer studies was published in an evangelical Christian magazine. The following quotes demonstrate the devout authors’ utter confusion: “The real scandal is that the not-prayed-for group received just as much, if not more of God’s blessings; True to his character, God appears inclined to heal and bless as many as possible; His answers often don’t give us the where, when, or how that we originally sought; We know that prayer works: The real question is, are we prepared for God’s answer.” These comments exemplify the fundamentalist rejection of science, which is discussed below.

A Common Criticism of the Studies

An oft-voiced criticism of the investigations is that patients in the not-prayed-for condition were recipients of prayers from family, friends, and churches. In other words, all study participants, including those in the control groups, received healing intercessory prayers from others.

Christian defenders of the studies explained that patients in the prayed-for experimental groups actually received supplemental or incremental prayers of a more carefully focused nature, in addition to prayers from family, friends, and church. Based on the assumption of a positive dose-response relationship, supporters of the investigations expected patients in the prayed-for condition to experience greater healing benefit.

Of course this criticism would be moot if the presumed omnipotent, omniscient, omnibenevolent Judeo-Christian god had simply eliminated all human illness in the first place. But remember that God routinely inflicts disease on his children and their enemies as punishment for disobedience or to test their faith.

Small Sample Studies

Articles in news magazines continue to assert that the results of research on prayer have been mixed, with some investigations finding that prayer can improve disease outcomes and prolong survival. This misleading and unwarranted conclusion is typically based on the uncritical acceptance of claims from various small sample studies of intercessory prayer, all spawned by Byrd’s innovative research project.

For example, a study of intercessory prayer with advanced AIDS patients (Western Journal of Medicine, 1998, 169, 356–363) appeared to have generated positive findings until several egregious methodological violations were exposed, rendering the results meaningless.

Another investigation that received much attention was subsequently labeled the “Columbia Miracle Prayer Study” (Journal of Reproductive Medicine, 2001, 46, 781–787). The researchers purportedly used intercessory prayer with infertile women and reported an outcome so amazing that independent reviewers investigated; they eventually concluded that the data were contrived, because there were no records indicating that the study had ever been carried out.

Yet at the time, prominent prayer advocates and “distant healing” gurus touted these and other investigations as evidence for a scientific revolution in healing modalities. Investigations inspired by “the Byrd effect” included intercessory-prayer interventions with rheumatoid arthritis patients, patients with common skin warts, and alcoholic patients. After these and other similar studies were carefully examined, it became apparent that the hoped-for healing breakthroughs attributable to prayer simply did not occur.

Some Skeptics Were Confused

While it is obvious from their comments that many devout believers in the power of prayer refused to accept the unequivocal results of the prayer studies, it is also true that even some rational observers had difficulty appreciating the very important contribution that intercessory prayer research made to the scientific study of supernatural claims.

For example, one prominent skeptic reached two completely unwarranted conclusions, declaring that scientific prayer research is “bad science and worse religion.” Of course, it was clearly not bad science, because the randomized double-blind clinical trial has established the respected and highly useful body of knowledge known as evidence-based medicine. Nobody argues with the scientific success of this endeavor.

Elaborating his allegation of defective religion, the skeptic propounded the colorful charge that “scientific prayer makes God a celestial lab rat.” It is apparent that the committed Christian investigators who carried out the studies were convinced that their research procedures were theologically sound, or they would not have undertaken their projects in the first place.

Distrust of Scientific Methodology

More than a decade after results from the last of the five replicated studies were reported in the medical literature, no additional efforts to document the benefits of healing prayer have been implemented. This suggests that believers may have seen the “handwriting on the wall,” or that they followed the logic of induction and gave up on their goal of putting prayer on a scientific foundation.

Is this because Christians have accepted the verdict of science and dismissed the value of intercessory prayer? No. When science fails to support faith, it is science that is rejected, not faith. (For details, see Chris Mooney, The Republican War on Science, 2005; Shawn Otto, The War on Science, 2016; and Antony Alumkal, Paranoid Science: The Christian Right’s War on Reality, 2017.) The numerous excuses proffered by Christian believers summarized above amply testify to this pervasive distrust of science.

What sustains belief in the efficacy of intercessory prayer in the face of overwhelming negative scientific evidence? The simple fact is that Christians are much more likely to absorb uncritically the claims of anecdotal accounts in newspapers, on television, and in personal testimonials than to accept the replicated results of randomized clinical trials.

For example, after a prominent actor’s twelve-day-old twins recovered from a massive accidental medication overdose, he announced that “the power of prayer from so many is what saved them.” At about the same time, in an unrelated tragedy, a young man died of incurable cancer.

Situations such as these, which occur daily across America, raise a perplexing question: Why would a benevolent deity save the actor’s infant twins but allow a young man to die of cancer? In trying to rescue prayer from failure, the young man’s obituary asserted that “many prayer requests by Christian intercessors were miraculously answered along the way” (to death)!

Regardless of the nature of the tragedy – whether lost or abducted children, patients with serious illnesses, or victims of natural disasters – the reaction is always the same. Family, friends, and strangers pray for a happy outcome. When the prayed-for individual survives, all credit is given to God and the power of prayer is acclaimed. But when the prayed-for person dies, God is never blamed for the negative outcome. Typical responses are “God has called him home” and “She has gone to be with Jesus.” The infallibility of prayer is an article of faith that cannot be questioned.

In plain language, the traditional Christian framework does not permit prayer to fail.

Believers are quick to cite cases where prayer is associated with positive results, while they steadfastly disregard those circumstances where prayer is followed by unfavorable outcomes. It is this selective attention to everyday events that helps sustain faith in the power of prayer.

Is there any reason to think that the unequivocal negative outcome of the prayer studies has resulted in any diminishment of prayer activity in the United States? No. Surveys indicate that the vast majority of Americans still believe in the value of prayer, which is virtually identical to belief in the existence of God, because prayer is simply communication with the deity.

Moreover, intercessory prayers for healing continue to be a critical feature of many Christian activities, including regular worship services, the ubiquitous prayer circles, and continuous (“24/7”) prayer ministries that address health problems of parishioners, community leaders, and national figures. In other words, despite the disconfirming scientific evidence, Christians have not lost faith in the power of prayer for healing.

Another strong indicator of the undiminished Christian belief in the value of prayer is the huge volume of books on the subject published over the past decade at a rate exceeding one thousand titles per year.

Popular news magazines routinely publish feature articles touting the alleged connection between religiosity and health. The topic of prayer is inevitably addressed under headings such as “New Proof Prayer Works” and “The Healing Power of Prayer.” Writers typically depend on believers as their sole source of information and never devote equal time to skeptics.

It is obvious that Christians have not accepted the scientific conclusion that God does not answer intercessory prayer, demonstrating again that faith surpasses evidence in the religious mind, especially when reinforced by irresponsible propaganda generated by major news outlets for the purpose of perpetuating the dominant religious mythology.

Implications for Larger Issues

Despite the endless excuses generated by fundamentalist believers, the only rational conclusion supported by the five replicated studies is that God does not respond to altruistic prayers for hospitalized cardiac patients. This conclusion can reasonably be generalized to non-hospitalized patients, to those with other medical conditions, and to all forms of prayer to all gods under all circumstances. Put simply, God does not answer prayer. What does this conclusion mean for two larger issues of Christian faith?

First, it thoroughly undermines the indispensable axiom of so-called “theistic science,” which is that the supernatural realm actually exists and therefore supernatural causation is a legitimate scientific explanation. The replicated failure of the attempted intercessory prayer demonstrations constitutes the strongest rebuke yet to this doctrinal claim, which derives from a universal religious belief.

Second, it casts substantial doubt on the foundational claim of theology, which is that the postulated god has independent existence in objective reality. In other words, theology asserts that God actually exists outside of the human mind, rather than just being a figment of the imagination. A reasonable inference from the failed prayer investigations is that the allegedly omnipotent Judeo-Christian god does not exist. We are reminded of Mark Twain’s astute aphorism, “Faith is believing what you know ain’t so.”


Appendix

Capsule Summaries of Five Intercessory Prayer Studies

The first major experimental investigation of intercessory prayer was conducted by Randolph Byrd and reported in the Southern Medical Journal (1988, 81, 826–829). A total of twenty-nine diagnostic and outcome variables were recorded for 393 coronary patients who had been randomly assigned to prayed-for and not-prayed-for groups. Christian intercessors prayed daily for the patients in the experimental group. Statistically significant differences were obtained on six variables and also on a global judgment of improvement completed by Byrd. However, he subsequently revealed that the study contained two serious procedural violations. The global judgment of improvement was made after the data were unmasked, and the study coordinator knew which patients were assigned to the prayer group at the time she interacted with them. These compromises could have easily accounted for the slightly favorable results.

A decade after the Byrd study was published, William Harris and nine colleagues attempted to replicate Byrd’s findings (Archives of Internal Medicine, 1999, 159, 2273–2278). Thirty-five variables, including all of Byrd’s, were recorded for 990 cardiac patients who had been randomly assigned to prayed-for and control groups. None of Byrd’s twenty-nine variables, nor his global judgment of progress, was statistically significant. Clearly, the attempted replication completely failed. However, a barely significant result was obtained for a new composite recovery score. Contrary to the impression given by the original report, less than one percent of the variance in overall recovery was explained by prayer—hardly evidence for the power of an omnipotent god.

The third large-sample investigation of intercessory prayer was carried out by Jennifer Aviles and six colleagues (Mayo Clinic Proceedings, 2001, 76, 1192–1198). Five primary outcome variables were assessed at a six-month follow-up for 762 coronary patients who had been assigned to prayed-for and not-prayed-for groups. Christian intercessors initiated prayers for patients in the experimental group at the time of hospital discharge. No statistically significant differences were obtained for the total group comparisons or for subgroups of high-risk and low-risk patients.

The fourth major study of intercessory prayer was conducted by Mitchell Krucoff and fourteen colleagues (Lancet, 2005, 366, 211–217). Four outcome variables, assessed in hospital and at six-month follow-up, were recorded for 748 coronary patients who were treated at nine medical centers. Participating prayer groups encompassed Christian, Muslim, Jewish, and Buddhist traditions. There were no statistically significant differences between the randomized prayed-for and control patients. It is noteworthy that Krucoff is an enthusiastic advocate of the use of prayer in conjunction with standard medical treatment.

The fifth and largest investigation of the alleged benefits of intercessory prayer was reported by Herbert Benson and fifteen collaborators (American Heart Journal, 2006, 151, 934–942). Ten indicators of complications were recorded for 1,201 coronary bypass patients at six hospitals who were randomly assigned to prayed-for and not-prayed-for groups. The prayed-for participants received fourteen consecutive days of prayer from Catholic and Protestant intercessors. Statistical comparisons between the experimental and control subjects on the ten indicators yielded no significant findings.

It is ironic that Benson’s study, which was the final nail in the coffin for the intercessory prayer claim, was funded by the John Templeton Foundation, a Christian grant-awarding institution that has the mission of demonstrating that science and religion are compatible worldviews.


An Evolution of Lies

Mark Cagnetta

As I made the short walk back to my house from my mailbox, shuffling my stack of mail along the way, an impersonally addressed, oversized postcard caught my eye. So intriguing was this invitation to an upcoming event that I began perusing its content before I reached the front door. Apparently, per the return label, “Emmanuel”—the Messiah himself, who now resides in Tecumseh, Michigan—was on a nationwide tour to educate Americans. “The Evidence: Does God Exist?” was prominently displayed on the back of the card with a list of nine dates, each corresponding to a religious topic, most of which, not coincidentally, have been challenged at some point by modern secular scholarship. The colloquium was to start with a bang, answering the initial query, Does God Exist? Each consecutive night’s subject matter was designed as a counterargument to topics concerned with science, paleontology, archaeology, history, and reason: Where did life on Earth come from? Does science support evolution? Are fossils millions of years old? Does archaeology support the Bible? Can the Bible predict the future? The last three evenings focused on theological questions: Whatever happened to the second coming of Jesus? Why theodicy? What is love?

At first glance, this appeared to be an exercise in disguised apologetics. Truth, which Thomas Henry Huxley referred to as “the heart of morality,” was obviously going to take a backseat to these pseudoscientific prevarications. Having been raised a Catholic, I was well-versed in the mythology that is religion, and these imminent proceedings roused in my mind a scene from the pages of the Old Testament. As I recall, in Exodus, the necessity of truth-telling was revealed to God’s minions. With a dollop of poetic license, the biblical narrative went something like this: To the musical accompaniment of a supernatural trumpeter, Yahweh, disguised as a thick cloud, descended upon the mythical Mount Sinai to address the Israelites. Amid a raging thunderstorm and a concomitant earthquake, Yahweh presented Moses and his followers with a slew of precepts ranging from the stoning of misbehaving children to a demand for monotheism in a wildly polytheistic world. One such dictate addressed the need for truthfulness, however ironic it may seem, particularly in light of the fact that the entire narrative is a fabrication. “Thou shalt not bear false witness against thy neighbor,” God commanded his chosen people. “False witness,” technically, means to not perjure oneself, but it has commonly been interpreted as a moral imperative to tell the truth.

Assuming those who currently practice the Judeo-Christian religion adhere to this adjuration as vehemently as they wish to uphold the Levitical law against homosexuality, one would assume that honesty, for them, is of the utmost importance. I decided to attend at least one of these indoctrinational seminars to put my supposition to the test.


Why I Am Pro-Abortion, Not Just Pro-Choice

Valerie Tarico

I believe that abortion care is a positive social good—and I think it’s time people said so.

Not long ago, the Daily Kos published an article titled “I Am Pro-Choice, Not Pro-Abortion.” “Has anyone ever truly been pro-abortion?” one commenter asked.

Uh. Yes. Me. That would be me.

I am pro-abortion like I’m pro–knee replacement and pro-chemotherapy and pro–cataract surgery. As the last protection against ill-conceived childbearing when all else fails, abortion is part of a set of tools that help women and men to form the families of their choosing. I believe that abortion care is a positive social good. And I suspect that a lot of other people secretly believe the same thing. I think it’s time we said so.

Note: I’m also pro-choice. Choice is about who gets to make the decision. The question of whether and when we bring a new life into the world is, to my mind, one of the most important decisions a person can make. It is too big a decision for us to make for each other, especially for perfect strangers.

But independent of who owns the decision, I’m pro on the procedure. I’ve decided that it’s time, for once and for all, to count it out on my ten fingers.

  1. I’m pro-abortion because being able to delay and limit childbearing is fundamental to female empowerment and equality. A woman who lacks the means to manage her fertility lacks the means to manage her life. Any plans, dreams, aspirations, responsibilities, or commitments—no matter how important—have a great big contingency clause built in: “… until or unless I get pregnant, in which case all bets are off.” Think of any professional woman you know. She wouldn’t be in that role if she hadn’t been able to time and limit her childbearing. Think of any girl you know who imagines becoming a professional woman. She won’t get there unless she has effective, reliable means to manage her fertility. In generations past, nursing care was provided by nuns and teaching by spinsters, because avoiding sexual intimacy was the only way women could avoid unpredictable childbearing and so be freed up to serve their communities in other capacities. But if you think that abstinence should be our model for modern fertility management, consider the little graves that get found every so often under old nunneries and Catholic homes for unwed mothers.
  2. I’m pro-abortion because well-timed pregnancies give children a healthier start in life. We now have ample evidence that babies do best when women are able to space their pregnancies and get both prenatal and preconception care. The specific nutrients we ingest in the weeks before we get pregnant can have a lifelong effect on the well-being of our offspring. Rapid repeat pregnancies increase the risk of low birth-weight babies and other complications. Wanted babies are more likely to get their toes kissed, to be welcomed into families that are financially and emotionally ready to receive them, to get preventive medical care during childhood, and to receive the kinds of loving engagement that helps young brains to develop.
  3. I’m pro-abortion because I take motherhood seriously. Most female bodies can incubate a baby; thanks to antibiotics, cesareans, and anti-hemorrhage drugs, most of us are able to survive pushing a baby out into the world. But parenting is a lot of work, and doing it well takes twenty dedicated years of focus, attention, patience, persistence, social support, mental health, money, and a whole lot more. This is the biggest, most life-transforming thing most of us will ever do. The idea that women should simply go with it when they find themselves pregnant after a one-night stand, or a rape, or a broken condom completely trivializes motherhood.
  4. I’m pro-abortion because intentional childbearing helps couples, families, and communities to get out of poverty. Decades of research in countries ranging from the United States to Bangladesh show that reproductive policy is economic policy. It is no coincidence that the American middle class rose along with the ability of couples to plan their families, starting at the beginning of the last century. Having two or three kids instead of eight or ten was critical to prospering in the modern industrial economy. Early, unsought childbearing nukes economic opportunity and contributes to multigenerational poverty. Today in the United States, unsought pregnancy and childbearing is declining for everyone but the poorest families and communities, contributing to what some call a growing “caste system” in America. Strong, determined girls and women sometimes beat the odds, but their stories inspire us precisely because they are the exceptions to the rule. Justice dictates that the full range of fertility management tools—including the best state-of-the-art contraceptive technologies and, when that fails, abortion care—be equally available to all, not just a privileged few.
  5. I’m pro-abortion because reproduction is a highly imperfect process. Genetic recombination is a complicated progression with flaws and false starts at every step along the way. To compensate, in every known species including humans, reproduction operates as a big funnel. Many more eggs and sperm are produced than will ever meet; more combine into embryos than will ever implant; more implant than will grow into babies; and more babies are born than will grow up to have babies of their own. This systematic culling makes God or nature the world’s biggest abortion provider: nature’s way of producing healthy kids essentially requires every woman to have an abortion mill built into her own body. In humans, an estimated 60 to 80 percent of fertilized eggs self-destruct before becoming babies, which is why the people who kill the most embryos are those like the Duggars who try to maximize their number of pregnancies. But the weeding-out process is also highly imperfect. Sometimes perfectly viable combinations boot themselves out; sometimes horrible defects slip through. A woman’s body may be less fertile when she is stressed or ill or malnourished, but as pictures of skeletal moms and babies show, some women conceive even under devastating circumstances. Like any other medical procedure, therapeutic contraception and abortion complement natural processes designed to help us survive and thrive.
  6. I’m pro-abortion because I think morality is about the well-being of sentient beings. I believe that morality is about the lived experience of sentient beings—beings who can feel pleasure and pain, preference and intention and who at their most complex can live in relation to other beings, love and be loved, and value their own existence. What are they capable of wanting? What are they capable of feeling? These are the questions my husband and I explored with our children when they were figuring out their responsibility to their chickens and guinea pigs. It was a lesson that turned expensive when the girls stopped drinking milk from cows that didn’t get to see the light of day or eat grass, but it’s not one I regret. Do unto others as they want you to do unto them. It’s called the “Platinum Rule.” In this moral universe, real people count more than potential people, hypothetical people, or corporate people.
  7. I’m pro-abortion because contraceptives are imperfect, and people are too. The pill is 1960s technology, now half a century old. For decades, women were told that the pill was 99 percent effective, and they blamed themselves when they got pregnant anyway. But that 99 percent is a “perfect use” statistic. In the real world, where most of us live, people aren’t perfect. In the real world, one in eleven women relying on the pill gets pregnant each year. For a couple relying on condoms, that’s one in six. Young and poor women—those whose lives are least predictable and most vulnerable to being thrown off course—are also those who have the most difficulty taking pills consistently. Pill technology most fails those who need it most, which makes abortion access a matter not only of compassion but of justice. State-of-the-art IUDs and implants radically change this equation, largely because they take human error out of the picture for years on end, or until a woman wants a baby. And despite the deliberate misinformation being spread by opponents, these methods are genuine contraceptives, not abortifacients. Depending on the method chosen, they disable sperm or block their path, or prevent an egg from being released. Once settled into place, an IUD or implant drops the annual pregnancy rate below one in five hundred. And guess what? Teen pregnancies and abortions plummet—which makes me happy, because even though I’m pro-abortion, I’d love the need for abortion to go away. Why mitigate harm when you can prevent it?
  8. I’m pro-abortion because I believe in mercy, grace, compassion, and the power of fresh starts. Many years ago, my friend Chip was driving his family on vacation when his kids started squabbling. His wife, Marla, undid her seatbelt to help them, and, as Chip looked over at her, their top-heavy minivan veered onto the shoulder and then rolled, and Marla died. Sometimes people make mistakes or have accidents that they pay for the rest of their lives. But I myself have swerved onto the shoulder and simply swerved back. The price we pay for a lapse in attention or judgment or an accident of any kind isn’t proportional to the error we made. Who among us hasn’t had unprotected sex when the time or situation or partnership wasn’t quite right for bringing a new life into the world? Most of the time we get lucky; sometimes we don’t. And in those situations we rely on the mercy, compassion, and generosity of others. In this regard, an unsought pregnancy is like any other accident. I can walk today only because surgeons reassembled my lower leg after it was crushed between the front of a car and a bicycle frame when I was a teen. And I can walk today (and run and jump) because another team of surgeons reassembled my knee joint after I fell off a ladder. And I can walk today (and bicycle with my family) because a third team of surgeons repaired my other knee after I pulled a whirring brush mower onto myself, cutting clear through bone. Three accidents, all my own doing, and three knee surgeries. Some women have three abortions.
  9. I’m pro-abortion because the future is always in motion, and we have the power and responsibility to shape it well. As a college student, I read a Ray Bradbury story about a man who travels back into prehistory on a “time safari.” The tourists have been coached about the importance of not disturbing anything lest they change the flow of history. When they return to the present, they realize that the outcome of an election has changed, and they discover that the protagonist, who had gone off the trail, has a crushed butterfly on the bottom of his shoe. In baby-making, as in Bradbury’s story, the future is always in motion, and every little thing we do has consequences we have no way to predict. Any small change means that a different child comes into the world. Which nights your mother had headaches, the sexual position of your parents when they conceived you, whether or not your mother rolled over in bed afterward—if any of these things had been different, someone else would be here instead of you. Every day, men and women make small choices and potential people wink into and out of existence. We move, and our movements ripple through time in ways that are incomprehensible, and we can never know what the alternate futures might have been. But some things we can know or predict, at least at the level of probability, and I think this knowledge provides a basis for guiding wise reproductive decisions. My friend Judy says that parenting begins before conception. I agree. How and when we choose to carry forward a new life can stack the odds in favor of our children or against them, and to me that is a sacred trust.
  10. I’m pro-abortion because I love my daughter. I first wrote the story of my own abortion when Dr. George Tiller was murdered, and I couldn’t bear the thought of abortion providers standing in the crosshairs alone. “My Abortion Baby” was about my daughter, Brynn, who exists only because a kind doctor such as George Tiller gave me and my husband the gift of a fresh start when we learned that our wanted pregnancy was unhealthy. Brynn literally embodies the ever-changing flow of the future, because she could not exist in an alternate universe in which I would have carried that first pregnancy to term. She was conceived while I would still have been pregnant with a child we had begun to imagine but who never came to be. My husband and I felt very clear that carrying forward that pregnancy would have been a violation of our values, and neither of us ever second-guessed our decision. Even so, I grieved. Even when I got pregnant again a few months later, I remember feeling petulant and thinking, I want that baby, not this one. And then Brynn came out into the world, and I looked into her eyes, fell in love, and never looked back.

All around us living, breathing, and loving are the chosen children of mothers who waited, who ended an ill-timed or unhealthy pregnancy and then later chose to carry forward a new life. “I was only going to have two children,” my friend Jane said as her daughters raced, screeching joyfully, across my lawn. Jane followed them with her eyes. “My abortions let me have these two when the time was right, with someone I loved.”

Those who see abortion as an unmitigated evil often talk about the “millions of missing people” who were not born into this world because a pregnant woman decided “Not now.” But they never talk about the millions of children and adults who are here today only because their mothers had abortions—real people who exist in this version of the future, people who are living out their lives all around us—loving, laughing, suffering, struggling, dancing, dreaming, and having babies of their own.

When those who oppose abortion lament the “missing people,” I hear an echo of my own petulant thought: I want that person, not this one. And I wish that they could simply experience what I did, that they could look into the beautiful eyes of the people in front of them and fall in love.

Editors’ note: This article was originally posted on AlterNet. It has been adapted for publication in Free Inquiry with the permission of the author.

Valerie Tarico

Valerie Tarico is a psychologist, author, and founder of WisdomCommons.org. She is the author of Trusting Doubt: A Former Evangelical Looks at Old Beliefs in a New Light (Oracle Institute Press, 2010).


The Importance of Being Blasphemous

Stephen R. Welch


This past January, millions marched throughout France in memory of the victims of the terror attack on the Paris offices of the satirical weekly newspaper Charlie Hebdo. The attack, perpetrated earlier that month by two French Islamists, had purportedly been committed to avenge the newspaper’s cartoon portrayals of Muhammad. The outrage expressed by the French public was unequivocal. In rallies counted among the largest in French history, “Je suis Charlie” (“I am Charlie”) became a global rallying cry, both as an expression of solidarity with the principle of free speech and in defiance of the terrorists’ attempt to suppress it. “Je suis Charlie” quickly went viral on social media.

Though by no means freighted with the same gravity, a similar affront to free speech occurred in the latter half of last year, when hackers stole data from Sony Pictures and made threats against the studio. The threats specifically targeted Sony for its comedy film The Interview, which depicted the assassination of the “Great Leader” of North Korea, Kim Jong-un. Citing “public safety concerns,” Sony cancelled release of The Interview. Weeks later, after its corporate backbone had received stiffening from the public outcry and words of “disappointment” by the U.S. president, Sony reversed its decision.

What is happening? For over two decades, ever since the now-infamous fatwa issued against Salman Rushdie for his novel The Satanic Verses, there has been a seemingly inexorable retrenchment in the public’s defense of controversial or offensive speech and art. Are we seeing, at last, a thaw in the long chill of self-censorship?

It is far too early to be sanguine. The battle to annex iconoclasm under the ever-expanding domain of the taboo is still being vigorously waged, particularly among the ideological Left. The voices of suppression that warn that Charlie Hebdo’s blasphemy was a reckless indulgence or decry it as a form of hate speech make essentially the same arguments that were leveled against Salman Rushdie more than twenty-six years ago. Behind the rhetoric is a very real fear. The Ayatollah Khomeini’s edict ordering all Muslims of the world to kill “without delay” the author, editors, and publishers of The Satanic Verses may not have succeeded in censoring the book. But the lesson delivered to us all on that Valentine’s Day in 1989 is one that no flurry of public rallies or the short-lived bloom of a well-meant hashtag will easily dispel.

 

There was a time when the only real fear a publisher had to face was the critic’s pen. Tracing the legacy of the Rushdie Affair in his book From Fatwa to Jihad, Kenan Malik highlights the contrast between then and now in an interview with publisher Peter Mayer. In 1989, Mayer was CEO of Penguin Books, publisher of The Satanic Verses. When he first learned of the fatwa, Mayer says that his primary concern was for Rushdie and his Penguin staff. His other reaction was bafflement. “The fate of being a publisher,” he said to Malik, is that one “always find[s] people offended by books you publish.” When Jews and Christians had objected—in writing, usually it went no further than that—to a book he had published, he would respond by simply saying that he could not publish only inoffensive books. The understanding among publishers, authors, and the reading public at the time was that the right to publish books included the publishing of offensive books, and any differences in taste and opinion were sorted out through discussion. “It was generally a civilized dialogue,” Mayer recalls. “One relied on the sanity of secular democracy.”

That a head of state would issue a death sentence upon the author of a novel and all involved in its publication was not something Mayer, or anyone in the industry, could have anticipated. The bewilderment quickly turned to something grimmer when Mayer began receiving letters and phone calls threatening, in graphic terms, him and his family with death. Yet Penguin did not back down from publishing The Satanic Verses. In his interview with Malik, Mayer recalls telling the Penguin board that, despite the threats and intimidation, they must take the long view on the matter: “Any climb-down . . . will only encourage future terrorist attacks by individuals or groups offended for whatever reason by other books that we or any publisher might publish. If we capitulate, there will be no publishing as we know it.”

Mayer and his Penguin colleagues became acutely aware that their decision would affect not only the future of publishing but also that of free inquiry and, by extension, civil society itself. Such awareness and the urgency with which it was felt, Malik soberly observes, “seems to belong to a different age.”

This same defense of principle seemed to reemerge, at least for a time, in Paris earlier this year. Yet even in the wake of the Charlie Hebdo massacre, the right to offend was treated with equivocation by those who should know better. While some media outlets reproduced the Hebdo Muhammad cartoons, most did not: the New York Times, the BBC, and UK’s Channel 4 would not do so. While condemning the attacks, notable journalists such as CNN’s Jay Carney questioned the judgment of the magazine’s editors for publishing images “we know . . . will be deeply offensive” and have the “potential to be inflammatory,” while journalist Tony Barber admonished that such publications are “being stupid” when they “provoke Muslims.”*

This finger-wagging speaks not so much to a principled stance against offending religious sensibilities as it reveals the foregone conclusion of violence. And it is this fear of violence, disingenuously cloaked in the rhetoric of prudence, that has come to serve as a de facto “blasphemy law.” As Nick Cohen acerbically notes, though the fear is arguably justified, it is reprehensible that writers and journalists—those who, one would presume, have the most to lose—cannot bring themselves to admit their fear and thus acknowledge their self-censorship. An honest admission, Cohen suggests, would “shred the pretence that journalists are fearless speakers of truth to power. But it would be a small gesture of solidarity. It would say to everyone, from Pakistani secularists murdered for opposing theocratic savagery, to British parents worried sick that their boys will join Islamic State, that radical Islam is a real fascistic force.”

Instead, Cohen says, journalists and many in the arts and academia have been living a lie. “We take on the powerful—and ask you to admire our bravery—if, and only if, the powerful are not a paramilitary force that may kill us.”

One silver lining in this depressing cloud is that the Charlie Hebdo incident has fomented public debate over the merits of the right to blaspheme in a free society, and whether that right truly jeopardizes social harmony or is intrinsic to the values of a liberal society. The Rushdie affair did not, unfortunately, precipitate the same level of public discourse. With the exception of a few voices (including Rushdie’s own), the fatwa was generally treated in the media as a problem directed singularly against Rushdie and, therefore, suffered by him alone. The possibility that Khomeini’s edict was delivered against the liberal principles of the West in toto or that the death threat was truly leveled at all authors and publishers—and by extension, readers—was not widely appreciated.

This naïveté seems like folly now. In his memoir of the fatwa years, Joseph Anton, Rushdie likens his ordeal to an unheeded Cassandra-like warning of things to come. Borrowing from Hitchcock’s The Birds, he illustrates how the threat of Islamism gathered while we in the West sat, oblivious. Recounting a famous scene from the film, he describes the actress Tippi Hedren as she sits on a bench outside an elementary school, unaware of the blackbirds gathering ominously on the jungle-gym behind her:


The children in the classroom . . . sing a sad nonsense song. Outside the school a cold wind is blowing. A single blackbird flies down from the sky and settles on the climbing frame in the playground. The children’s song is a roundelay. It begins but it doesn’t end. It just goes round and round. . . .

There are four more blackbirds on the climbing frame, and then a fifth arrives. Inside the school the children are singing. Now there are hundreds of blackbirds . . . and thousands more birds fill the sky, like a plague of Egypt. A song has begun, to which there is no end.

When the first bird comes down to roost, Rushdie explains, it was just about him, “individual, particular, specific. Nobody [felt] inclined to draw any conclusions from it.” It is only a dozen years later, “after the plague begins,” when people see that the first bird had been a harbinger.

In January of 1989, four months after The Satanic Verses was published, the first book-burnings in Britain occurred. This was followed in February by a small protest in Pakistan that turned deadly after police fired into the crowd of demonstrators. Five people were killed. Two days later, on Valentine’s Day, Khomeini issued his edict.

From the outset, many stood fast in their support of Rushdie. On the day of the book’s publication in the United States later that month, the Association of American Publishers, the American Booksellers Association, and the American Library Association paid for a full-page advertisement in the New York Times. The ad asserted that free people write books, free people read books, free people publish and sell books, and in the “spirit of . . . commitment to free expression” affirmed that The Satanic Verses “will be available to readers at bookshops and libraries throughout the country.” One hundred Muslim writers jointly published a book of essays in defense of free speech titled For Rushdie. Poets and writers from across the Arab world courageously, and publicly, defended him. “I choose Salman Rushdie,” wrote Syrian novelist Jamil Hatmal, “over the murderous turbans.”

In contrast to his defenders, there grew a loud chorus of detractors. Counted among them, sadly, were some fellow authors, including Egyptian novelist Naguib Mahfouz—himself also once accused of blasphemy—who, though first decrying the Ayatollah’s act as “terrorism,” later backtracked and stated that Rushdie “did not have the right to insult . . . anything considered holy.” One of the most notable critics was John le Carré, who on the pages of the Guardian sniffed, “[T]here is no law in life or nature that says great nations may be insulted with impunity.” Self-appointed leaders of the Muslim “community” in the United Kingdom voiced support for the fatwa. UK parliamentarians, pandering to their Muslim constituencies, focused their efforts not on defending their citizens’ rights but on preventing the paperback publication of the book. And the archbishop of Canterbury, George Carey, scolded Rushdie for his “abuse of free speech,” declaring that the novel was an “outrageous slur” and that “[w]e must be more tolerant of Muslim anger.”

Victim-blaming continues to have traction where Rushdie’s critics are concerned. One of the more notable reviews of Joseph Anton was also one of the most negative. In her piece in the December 20, 2012, issue of the New York Review of Books, Zoë Heller did not exude quite the same disdain for Rushdie as his earlier detractors did, but many of her criticisms repeated the same canards and exerted the same tired emphasis on the author’s perceived foibles. Heller found particularly objectionable the hardening of Rushdie’s perspective on Islam. “Respect for Islam,” Rushdie had written without qualm, was merely fear of Islamist violence cloaked in a “Tartuffe-like hypocrisy” by the dogma of multiculturalism. Like his critics two decades ago, Heller makes no effort to ascribe responsibility to the Islamists for sullying Islam or making the world feel “smaller and grimmer.” Instead, she lays it at Rushdie’s feet, chastising him for having “narrowed” his viewpoint.

Writing, as surely Heller herself knows, is a deeply personal endeavor. One can imagine, certainly, that a man finding his world turned upside-down for nine years, his life threatened, his character maligned, his worth as an artist questioned, and the fruit of his work—his novel—crucified and immolated by a mob, might succumb to the human response of taking it all quite personally. But such generosity of perspective becomes increasingly impossible for one who has been demonized. The first proposition of the assault against him, Rushdie recalls, was that “anyone who wrote a book with the word ‘satanic’ in the title must be satanic, too. Like many false propositions that flourished in the incipient Age of Information (or disinformation), it became true by repetition. Tell a lie about a man once and many people will not believe you. Tell it a million times and it is the man himself who will no longer be believed.”

The real man Salman Rushdie was replaced by an invented “Satan Rushdy,” an effigy that his adversaries could offer up to the hysterical mobs. Likewise, The Satanic Verses itself had been subjected to a vigorous campaign of demonization. The book he had written about migration, transformation, and identity, Rushdie laments, vanished and was replaced by one that “scarcely existed.” It was this imaginary novel, this figment, against which the “rage of Islam” would be directed.

Even the propositions surrounding that “rage” were the products of disinformation, repeated until presumed to be true. Malik lays out several facts that reduce to rubbish any claims that The Satanic Verses had caused mortal offense to Muslims en masse. There was “barely a squeak of protest,” he points out, among Muslims in France or Germany, nor any mass protests in the United States. Arabs and Turks were likewise “unmoved by Rushdie’s blasphemies.” Most Muslim countries (Pakistan and Saudi Arabia being the notable exceptions) did not ban the novel. It was not banned in Iran. In fact, in the months prior to Khomeini’s edict, the novel was reviewed in the Iranian press and discussed in government ministries and at street cafes. The Iranian literary journal Kayhan Farangi, though criticizing The Satanic Verses on artistic merits and for a “caricaturelike . . . image of Islamic principles,” did not once raise the specter of blasphemy. Kayhan Farangi did acknowledge that the book was a “work of imagination” and went as far as to suggest that the ban in India had been driven by politics rather than theology.

The fatwa was not an answer to The Satanic Verses or its putatively blasphemous contents. The decade following 1979 had brought Iran’s Islamic revolution to a disappointing standstill; Iran had fought a long and bitter war with Iraq to a costly stalemate, and it had failed to unseat the Saudi regime from its perceived role as the face of world Islam. Meanwhile, reformers within the Iranian parliament were growing restive. On his deathbed, brooding over his legacy, Khomeini made a calculated move to put fire back into his revolution’s belly. The fatwa against Salman Rushdie and the publishers of his novel was, in a manner of speaking, the Ayatollah’s parting shot. (It is almost certain that the old man had never read the novel.)

Rushdie’s sin was not that he wrote a book that incurred the wrath of Islam, but that he wrote the right book at the right time to be exploited by Islamist demagogues. Despite what critics may still repeat, nowhere in The Satanic Verses does Rushdie slander the Prophet or his companions as “scums and bums,” though characters persecuting his fictional Prophet use these words; nor does he malign the wives of the Prophet as whores, though, again, characters in a fictional brothel so name themselves. His Prophet is not-quite-Muhammad and his Mecca is not-quite-Mecca; his protagonist, Gibreel, is no more or less the angel Gabriel than he is the Indian film actor Amitabh Bachchan; and the book’s narrator is no more Satan than he is Salman Rushdie. It is neither polemic nor satire; nor is it an allegory or insult, veiled or otherwise. The Satanic Verses is no more or less than what its author intended—a novel.

Rushdie’s “offense” was, by fictionalizing him, to make the Prophet merely human, and in so doing to subvert the fiction of Muhammad’s divinity. No less an undertaking would be expected of an author of Rushdie’s caliber, a man who by his own admission is “godless, but fascinated by gods and prophets.” The novel is not an attack against Islam. On the contrary, it is an engagement with that religion’s legacy, an attempt by a man who is not a believer to reconcile that faith’s long shadow with his own nonbelief. Within the pages of The Satanic Verses, Salman Rushdie recreated Islam in an image of his own making. That is the blasphemy for which some believers, and those who speak in their name, will not forgive him.

 

This past May, PEN America gave its annual Freedom of Expression Courage Award to the surviving staff of Charlie Hebdo. Explaining the decision to the Guardian, PEN president Andrew Solomon reminded readers that the award was for courage, not content, adding, “[t]here is courage in refusing the very idea of forbidden statements, an urgent brilliance in saying what you have been told not to say in order to make it sayable.”

Several well-known writers protested the PEN decision, igniting a brief furor on social media. The protesters raised the familiar arguments from taste, objecting to the perceived racist or “phobic” content of the magazine’s cartoons. Rushdie was one of the first to defend PEN America’s decision, as were Nick Cohen and Kenan Malik, among others. Those who defended PEN did so in full recognition that the freedom to speak derives precisely from those few who have the courage to say the unsayable, and that it is the freedom to speak upon which all other freedoms depend.

Our conviction that these freedoms have value has grown alarmingly weak over the past two decades, the consequence in large part of our embrace of the morally incoherent dogma of cultural relativism. In the 1980s, bookstores had been firebombed and assassination attempts made upon publishers and translators—in one case, successfully—and yet publication of The Satanic Verses continued. Today no violence, nor even a credible threat of violence, is required; the mere suggestion of “offense,” in the form of an organized protest or social-media campaign, is enough now to shut down a book, a play, or an art installation. Where the courage to publish the unsayable is lacking, the courage of those who write and speak it comes to naught.

Sometimes all it takes is a phone call. In 2008, twenty years after the fatwa, Random House bought The Jewel of Medina, a historical romance written by journalist Sherry Jones. Jones’s novel had almost nothing in common with Rushdie’s except its fictionalizing of Islamic history. The protagonist of The Jewel of Medina is Aisha, one of Muhammad’s wives. Though by all accounts the novel is self-consciously positive in its portrayal of the Prophet (in the words of Douglas Murray, “stomach-churningly fawning”), this did not save it from the ire of the self-righteous. In researching Aisha’s legacy, Jones had used a book by Denise Spellberg, an associate professor of Islamic history at the University of Texas. Random House, seeking a cover endorsement, sent the galley proofs to Spellberg. After reading them, Spellberg took it upon herself to phone an editor at Random House and, condemning the novel as an “offensive” and “ugly piece of work,” warned that it was “far more controversial than The Satanic Verses” and could pose “a very real possibility of major danger for the . . . staff and widespread violence.” Spellberg recommended that the book be withdrawn as soon as possible. Apparently on the strength of that single phone call, reinforced by negative posts on an online forum (also initiated by Spellberg), Random House—Salman Rushdie’s current publisher—pulled The Jewel of Medina from publication.

It is not only literature that suffers from this voluntary censorship, and it is not only Islam that cries foul. In 2004, a production of Behzti, a play by Sikh writer Gurpreet Kaur Bhatti that depicted sexual abuse and murder in a gurdwara, was cancelled in response to protests by activists from the Sikh community in Birmingham, England. As recently as last year, Exhibit B, an art installation that depicted live black actors in a recreation of a colonial-era “human zoo,” was also forced to close by protesters. The critics of Behzti condemned the play for its “blasphemy” and “offense” against the Sikh religion, while the protesters against Exhibit B charged it with “complicit racism” (the very social ill it had intended to critique). Nor does a production have to be altogether cancelled to compromise freedom of speech. Last year, the New York Metropolitan Opera capitulated to protesters and cancelled the simulcast to cinemas of its production of John Adams’s controversial opera The Death of Klinghoffer, effectively censoring it for anyone who could not afford the privilege of paying more than one hundred dollars to see the live performance.

All forms of inquiry and expression today are subject to the veto of the offended. Academic works, which normally do not generate much controversy (or attention) outside the confines of the ivory tower, are no less subject to suppression. Last year, a scholarly work, The Hindus: An Alternative History by American Indologist Wendy Doniger, was withdrawn from publication in India as the result of a lawsuit brought by members of the Hindu Right. The publisher was Penguin, and, in a sad irony, all Indian copies of Doniger’s book—cited for “denigration of Hinduism” by the plaintiffs—were pulped during the week of the twenty-fifth anniversary of the Salman Rushdie fatwa.

That Penguin, the original publisher of The Satanic Verses, had pulled Doniger’s work brings the saga full circle. It was India that was first to ban Rushdie’s novel. It is deeply troubling that the lesson learned from Khomeini’s fatwa over the past twenty-six years has not been how to better champion and protect our writers, playwrights, and scholars but rather how to best emulate the “rage of Islam” in order to suppress any speech and art that an aggrieved party can claim has offended them. Free speech has become an indulgence, whereas grievance culture is now an equal-opportunity entitlement.

The Rushdie Affair, as Malik observes, was a watershed. Rushdie’s detractors “lost the battle in the sense that they never managed to stop the publication of The Satanic Verses,” but, he says, “they won the war by pounding into the liberal consciousness the belief that to give offence was a morally despicable act.” We have internalized the fatwa, a fact affirmed by the writers interviewed in From Fatwa to Jihad. “What is really dangerous is when you don’t know you’ve censored yourself,” worries Monica Ali, whose 2003 novel Brick Lane was subject to protest marches amid the familiar accusations of offense and insult. The writing process is unconscious, and as such, she laments, “it is difficult to know to what extent you’ve been infected by the debate about offense.”

Hanif Kureishi, another prominent British novelist and a contemporary of Rushdie’s, goes further. “Nobody would have the balls today to write The Satanic Verses. Writing is now timid because writers are terrified.”

It is often said that it is the most offensive and unpopular speech that must be protected. However, it is not necessarily the work of the iconoclast or the polemicist that is most at risk. Works of honest inquiry—the history that questions received truths or the novel that dares to humanize the divine or demonic—all are threatened by this collective, internalized taboo against giving offense. Satire may stir the type of public attention that garners marches and pronouncements by presidents, and polemic may goad the ire of those it scorns. But it is the type of work that only casually disturbs or discomforts us, art that succeeds in penetrating the shell of our unexamined assumptions—the best of art, in other words—that is most likely to be censored, not necessarily by the spectacle of violence but by the stroke of an editor’s or publisher’s rejection or, worse, the author’s fear of embarking on the work to begin with. From this perspective, it is clear that the lesson delivered to us by the Ayatollah Khomeini in 1989, and reprised early this year in Paris, has yet to be unlearned.

The true demonstration that we have at last freed ourselves will not be found in a march of solidarity with the next assassinated writer, or cartoonist, or playwright. It will manifest in something more prosaic. Proof that the old man’s fatwa has been truly exorcized, that we have indeed conquered it, will arrive when the next Satanic Verses is published, bought, read, and reviewed despite the protests, the threats, and the misinformation and shaming campaigns organized by the offended.

But first, someone needs to write it.

Further Reading

Cohen, Nick. 2015. “Paris Attacks: Unless We Overcome Fear, Self-censorship Will Spread.” Guardian, January 10. http://www.theguardian.com/commentisfree/2015/jan/11/paris-attacks-we-must-overcome-fear-or-selfcensorship-will-spread. Accessed May 12, 2015.

Flood, Alison, and Alan Yuhas. 2015. “Salman Rushdie Slams Critics of PEN’s Charlie Hebdo Tribute.” Guardian, April 27. http://www.theguardian.com/books/2015/apr/27/salman-rushdie-pen-charlie-hebdo-peter-carey. Accessed May 21, 2015.

Heller, Zoë. 2012. “The Salman Rushdie Case.” The New York Review of Books, December 20. http://www.nybooks.com/articles/archives/2012/dec/20/salman-rushdie-case/?pagination=false. Accessed May 19, 2015.

Khomeini, Ruhollah Mostafavi Moosavi. “Ayatollah Sentences Author to Death.” http://news.bbc.co.uk/onthisday/hi/dates/stories/february/14/newsid_2541000/2541149.stm. Accessed May 12, 2015.

Malik, Kenan. 2010. From Fatwa to Jihad: The Rushdie Affair and Its Aftermath. Brooklyn: Melville House Publishing.

———. 2014. “On The Death of Klinghoffer.” Pandaemonium (blog). November 13. https://kenanmalik.wordpress.com/2014/11/13/on-the-death-of-klinghoffer/. Accessed May 21, 2015.

Muir, Hugh. 2014. “Barbican Criticizes Protesters Who Forced Exhibit B Cancellation.” Guardian, September 24. http://www.theguardian.com/culture/2014/sep/24/barbican-criticise-protesters-who-forced-exhibit-b-cancellation. Accessed May 21, 2015.

Murray, Douglas. 2013. Islamophilia: A Very Metropolitan Malady. New York: emBooks.

Prashad, Vijay. 2014. “Wendy Doniger’s Book Is a Tribute to Hinduism’s Complexity, Not an Insult.” Guardian, February 12. http://www.theguardian.com/commentisfree/2014/feb/12/wendy-doniger-book-hinduism-penguin-hindus. Accessed May 21, 2015. Thanks to Kenan Malik for pointing out the concurrence with the twenty-fifth anniversary week of the fatwa, in https://kenanmalik.wordpress.com/2014/12/19/fear-and-free-speech/.

Rushdie, Salman. 2012. Joseph Anton: A Memoir. New York: Random House.

———. 2008. The Satanic Verses. New York: Random House.

Singh, Gurharpal. 2004. “Sikhs Are the Real Losers from Behzti.” Guardian, December 23. http://www.theguardian.com/stage/2004/dec/24/theatre.religion. Accessed May 21, 2015.

Vale, Paul. 2015. “Financial Times Europe Editor Tony Barber Accuses Charlie Hebdo of ‘Muslim Baiting.’” Huffington Post UK, January 7. http://www.huffingtonpost.co.uk/2015/01/07/financial-times-europe-editor-tony-barber-accuses-charlie-hebdo-of-muslim-baiting_n_6431346.html. Accessed May 12, 2015.

Wemple, Erik. 2015. “On CNN, Jay Carney Sticks to Position that Charlie Hebdo Should Have Pulled Back.” Washington Post, January 8. http://www.washingtonpost.com/blogs/erik-wemple/wp/2015/01/08/on-cnn-jay-carney-sticks-to-position-that-charlie-hebdo-should-have-pulled-back/. Accessed May 12, 2015.

 

*Barber later “updated and expanded” his Financial Times opinion piece to excise the words “being stupid.”

Stephen R. Welch

Stephen R. Welch is a freelance writer based in New York. He writes regularly for Free Inquiry; his most recent article, “The Importance of Being Blasphemous: Literature, Self-Censorship, and the Legacy of The Satanic Verses,” appeared in the October/November 2015 issue.


How Morality Has the Objectivity that Matters—Without God

Ronald A. Lindsay

The thesis of this essay is that morality is not objective in the same way that statements of empirically verifiable facts are objective, yet morality is objective in the ways that matter: moral judgments are not arbitrary; we can have genuine disagreements about moral issues; people can be mistaken in their moral beliefs; and facts about the world are relevant to and inform our moral judgments. In other words, morality is not “subjective” as that term is usually interpreted. Moral judgments are not equivalent to descriptive statements about the world—factual assertions about cars, cats, and cabbages—but neither are they merely expressions of personal preferences.

This thesis has obvious importance to our understanding of morality. Moreover, this thesis has special relevance to humanists and other nonreligious people, because one of the most frequently made arguments against atheism is that it is incompatible with the position that morality is objective and that rejecting the objectivity of morality would have unacceptable consequences.

The Need for God: The Argument from Morality

For centuries now, those who argue for theism have been running out of room to maneuver. Things that once seemed to require a supernatural explanation—whether it was thunder, volcanoes, diseases, human cognition, or the existence of the solar system—have long since become the domain of science. (Admittedly, some, such as Bill O’Reilly, remain unaware that we can explain the regularity of certain phenomena, such as the tides, without reliance on divine intervention.) So the theists have changed tactics. Instead of using God to explain natural phenomena, theistic apologists have increasingly relied on arguing that God is indispensable for morality. At first, this contention often took the form of an accusation that atheists can’t be trusted; they’re immoral. In the last few decades, however, many theists have—in the face of overwhelming evidence—grudgingly conceded that at least some atheists can be good people. So has God now become irrelevant? Do we need a deity for anything?

Yes, says the theist. Sure, some individual atheists can be relied upon to act morally, but, as political commentator Michael Gerson put it, “Atheists can be good people; they just have no objective way to judge the conduct of those who are not.” In other words, without God, atheists cannot explain how there are objective moral truths, and without objective moral truths, atheists have no grounds for saying anything is morally right or wrong. We atheists might act appropriately, but we cannot rationally justify our actions; nor can we criticize those who fail to act appropriately.

Furthermore, this contention that God is required for morality to be objective has become the new weapon of choice for those wishing to argue for the existence of God. For example, the Christian apologist William Lane Craig has made what he regards as the reality of objective moral truths the key premise of one of his favorite arguments for the existence of God. According to Craig, there can be no objective moral truths without God, and since there are objective moral truths, God must exist.

One traditional counter to the argument that God is required to ground objective morality is that we cannot possibly rely on God to tell us what’s morally right and wrong. As Plato pointed out long ago in his dialogue Euthyphro, divine commands cannot provide a foundation for morality. From a moral perspective, we have no obligation to follow anyone’s command—whether it’s God’s, Putin’s, or Queen Elizabeth’s—just because it is a command. Rules of conduct based on the arbitrary fiats of someone more powerful than us are not equivalent to moral norms. Moreover, it is no solution to say that God commands only what is good. This response presupposes that we can tell good from bad, right from wrong, or, in other words, that we have our own independent standards for moral goodness. But if we have such independent standards, then we don’t need God to tell us what to do. We can determine what is morally right or wrong on our own.

This response to the theist is effective as far as it goes. Contrary to the theist, God cannot be the source of morality. However, this doesn’t address the concern that morality then loses its objectivity. It becomes a matter of personal preference. We cannot really criticize others for doing something morally wrong, because all we’re saying is “we don’t like that.”

It’s this fear that without God we’ll have a moral vacuum and descend into nihilism that sustains some in the conviction that there is a God or that we need to encourage belief in God regardless of the evidence to the contrary. It sustains belief in God (or belief in belief) even in the face of the argument from Euthyphro. Logic does not always triumph over emotion, and the dread that without God we have no moral grounding—“without God, everything is permitted”—can be a powerful influence on many.

The notion that God’s word is what counts and what makes the difference between moral and immoral actions comforts some because it provides them with the sense that there is something beyond us, something outside of ourselves to which we can look to determine whether some action is morally right or wrong. Is murdering someone wrong? Sure, God tells us that in the Bible. For the devout, that’s a fact. A fact that can be confirmed, just like the fact that ripe tomatoes are red, not blue. It’s not a matter of subjective opinion. And if morality isn’t objective, then it must be subjective, correct?

For these reasons—and also because we want a firm grounding for morality ourselves—it is incumbent upon humanists, and secular ethicists generally, to address squarely the contentions that without God there is no objectivity in morality and that this situation would be something dreadful. The problem is that most try to do this by arguing that morality is objective in a way similar to the way in which ordinary descriptive statements are objective. The better argument is that morality is neither objective nor subjective as those terms are commonly understood.

Secular Attempts to Make Morality Objective

Some secular ethicists have tried to supply substitutes for God as the moral measuring-stick while adhering to the notion that morality must be objective and that moral judgments can be determined to be true or false in ways similar to statements about the world. Some argue that facts have certain moral implications. In this way, morality is based on natural facts, and statements about morality can be determined to be true or false by reference to these facts. Often, the starting point for such arguments is to point out undisputed facts, such as that pain is a bad thing and, all other things being equal, people avoid being in pain. Or, if one wants to approach the issue from the other direction, well-being is a good thing, and, all other things being equal, people want to have well-being. The argument will then proceed by using this foundation to argue that we have a moral obligation to avoid inflicting pain or to increase well-being. But this will not do. Granted, pain is “bad” in a nonmoral sense, and people don’t want it, but to say that inflicting pain on someone is presumptively morally bad implies we have some justification for saying that this action is morally bad, not just that it’s unwanted. From where does this moral obligation derive, and how do we detect it?

The problem with trying to derive moral obligations directly from facts about the world is that it’s always open for someone to ask “Why do these facts impose a moral obligation?” Sure, well-being may be desirable, and I may want well-being for myself and those close to me, but that doesn’t imply that I am obliged to increase well-being in general. Certainly, it’s not inconsistent for people to say that they want well-being for themselves and those close to them, but that they feel no moral obligation to increase the well-being of people they don’t know. This is not the equivalent of saying ripe tomatoes are both red and blue simultaneously.

The difficulty in deriving moral obligations directly from discrete facts about the world was famously noted by the eighteenth-century Scottish philosopher David Hume, who remarked that from a statement about how things are—an “is” statement—we cannot infer a moral norm about how things should be—an “ought” statement. Despite various attempts to show Hume wrong, his argument was and is sound. Note that Hume did not say that facts are not relevant to moral judgments. Nor did he claim that our moral norms are subjective—although this is a position often mistakenly attributed to him. He did not assert that the truth of moral judgments is determined by referring to our inner states, which would be a subjectivist position. Instead, he maintained that a factual statement, considered in isolation, cannot imply a moral norm. An “is” statement and an “ought” statement are distinct classes of statements.

Some have tried to circumvent the difficulty in deriving moral obligations directly from factual statements by arguing that “nonnatural” facts or properties supply the grounding for morality. However, all such attempts to do so have foundered on the inability to describe with precision the nature of these mysterious nonnatural facts or properties and how it is we can know them. “Intuition” is sometimes offered as a method for knowing moral facts, but intuitions notoriously differ.

Derek Parfit, an Oxford scholar whom some regard as one of the most brilliant philosophers of our time (and I so regard him), recently produced a massive work on ethics titled On What Matters. This two-volume work covers a lot of ground, but one of its main claims is that morality is objective, and we can and do know moral truths but not because moral judgments describe some fact. Indeed, moral judgments do not describe anything in the external world, nor do they refer to our own feelings. There are no mystical moral or normative entities. Nonetheless, moral judgments express objective truths. Parfit’s solution? Ethics is analogous to mathematics. There are mathematical truths even though, on Parfit’s view, there is no such thing as an ideal equation 2 + 2 = 4 existing somewhere in Plato’s heaven. Similarly, we have objectively valid moral reasons for not inflicting pain gratuitously even though there are no mystical moral entities to which we make reference when we declare, “Inflicting pain gratuitously is morally wrong.” To quote Parfit, “Like numbers and logical truths … normative properties and truths have no ontological status” (On What Matters, vol. 2, p. 487).

Parfit’s proposed solution is ingenious because it avoids the troublesome issues presented when we tie moral judgments to facts about the world (or facts about our feelings). However, ingenuity does not ensure that a theory is right. Parfit provides no adequate explanation of how we know ethical truths, other than offering numerous examples where he maintains we clearly have a decisive reason for doing X rather than Y. In other words, at the end of the day he falls back on something such as intuition, with the main difference between his theory and other theories being that his intuitions do not reference anything that exists; instead they capture an abstract truth.

So secular attempts to provide an objective foundation for morality have been … well, less than successful. Does this imply we are logically required to embrace nihilism?

No. Let me suggest we need to back up and look at morality afresh. The whole notion that morality must be either entirely subjective or objective in some way comparable to factual (or in Parfit’s case, mathematical) truths is based on a misguided understanding of morality. It’s based on a picture of morality in which morality serves functions similar to factual descriptions (or mathematical theorems). We need to discard that picture. Let’s clear our minds and start anew.

The Functions of Morality

So, if we are starting from the ground up, let’s ask basic questions. Why should we have morality? What is its purpose? Note that I am not asking, “Why should I be moral?”—a question often posed in introductory philosophy courses. I do not mean to be dismissive of this question, but it raises a different set of issues than the ones we should concentrate on now. What I am interested in is reflection on the institution of morality as a whole. Why bother having morality?

One way to begin to answer this question is just to look at how morality functions, and has functioned, in human societies. What is it that morality allows us to do? What can we accomplish when (most) people behave morally that we would not be able to accomplish otherwise? Broadly speaking, morality appears to serve these related purposes: it creates stability, provides security, ameliorates harmful conditions, fosters trust, and facilitates cooperation in achieving shared and complementary goals. In other words, morality enables us to live together and, while doing so, to improve the conditions under which we live.

This is not necessarily an exhaustive list of the functions of morality, nor do I claim to have explained the functions in the most accurate and precise way possible. But I am confident that my list is a fair approximation of some of the key functions of morality.

How do moral norms serve these functions? In following moral norms we engage in behavior that enables these functions of morality to be fulfilled. When we obey norms like “don’t kill” and “don’t steal,” we help ensure the security and stability of society. It really doesn’t take a genius to figure out why, but that hasn’t stopped some geniuses from drawing our attention to the importance of moral norms. As the seventeenth-century English philosopher Thomas Hobbes and many others have pointed out, if we always had to fear being injured or having our property stolen, we could never have any rest. Our lives would be “solitary, poor, nasty, brutish, and short.” Besides providing security and stability by prohibiting certain actions, moral norms also promote collaboration by encouraging certain actions and by providing the necessary framework for the critical practice of the “promise”—that is, a commitment that allows others to rely on me. Consider a simple example, one that could reflect circumstances in the Neolithic Era as much as today. I need a tool that you have to complete a project, so I ask you to lend it to me. You hesitate to lend me the tool, but you also believe you are obliged to help me if such help doesn’t significantly harm you. Moreover, I promise to return the tool. You lend me the tool; I keep my promise to return the tool. This exchange fosters trust between us. Both of us will be more inclined to cooperate with each other in the future. Our cooperation will likely improve our respective living conditions.

Multiply this example millions of times, and you get a sense of the numerous transactions among people that allow a peaceful, stable, prospering society to emerge. You also can imagine how conditions would deteriorate if moral norms were not followed. Going back to my tool example, let us imagine you do not respond positively to my request for assistance. This causes resentment and also frustrates my ability to carry out a beneficial project. I am also less likely to assist you if you need help. Or say you do lend me a tool, but I keep it instead of returning it as promised. This causes distrust, and you are less likely to assist me (and others) in the future. Multiplied many times, such failures to follow moral norms can result in mistrust, reduced cooperation, and even violence. If I do not return that tool peacefully, you may resort to brute force to reacquire it.

Fortunately, over time, humans have acted in ways that further the objectives of morality far more often than in ways that frustrate these objectives. Early humans were able to establish small communities that survived, in part, because most members of the community followed moral norms. These small communities eventually grew larger, again, in part because of moral norms. In this instance, what was critical was the extension of the scope or range of moral norms to those outside one’s immediate community. Early human communities were often at war with each other. Tribe members acted benevolently only to fellow members of their tribe; outsiders were not regarded as entitled to the same treatment. One of the earliest moral revolutions was the extension of cooperative behavior—almost surely based initially on trade—to members of other communities, which allowed for peaceful interaction and the coalescing of small human groups into larger groups. This process has been repeated over the millennia of human existence (with frequent, sanguinary interruptions) until we have achieved something like a global moral community.

This outline of morality and its history is so simple that I am sure some will consider it simplistic. I have covered in a couple of paragraphs what others devote thick tomes to. But it suffices for my purposes. The main points are that in considering morality, we can see that it serves certain functions, and these functions are related to human interests. Put another way, we can describe morality and its purposes without bringing God into the picture; moreover, we can see that morality is a practical enterprise, not a means for describing the world.

Moral Judgments Versus Factual Assertions

The practical function of morality is the key to understanding why moral judgments are not true or false in the same way that factual statements are true or false. The objective/subjective dichotomy implicitly assumes that moral judgments are used primarily to describe, so they must have either an objective or subjective reference. But, as indicated, moral judgments have various practical applications; they are not used primarily as descriptive statements.

Consider these two statements:

Kim is hitting Stephanie.

Without provocation, we ought not to hit people.

Do these statements have identical functions? I suggest that they do not. The first statement is used to convey factual information; it tells us about something that is happening. The second statement is in the form of a moral norm that reflects a moral judgment. Depending on the circumstances, the second statement can be used to instruct someone, condemn someone, admonish someone, exhort someone, confirm that the speaker endorses this norm, and so forth. The second statement has primarily practical, not descriptive, functions. Admittedly, in some circumstances, moral norms or descriptive counterparts of moral norms also can be used to make an assertion about the world, but they do not primarily serve to convey factual information.

In rejecting the proposition that moral judgments are equivalent to factual statements about the world, I am not endorsing the proposition that moral judgments are subjective. A subjective statement is still a descriptive statement that is determined to be true by reference to facts. It’s simply a descriptive statement referring to facts about our inner states—our desires, our sentiments—as opposed to something in the world. To claim that moral judgments are subjective is to claim that they are true or false based on how a particular person feels. That’s not how most of us regard moral judgments.

But if Moral Judgments Do Not Refer to Facts, How Do We Decide What’s Right and Wrong?

It’s obvious that people disagree about moral issues, but the extent of that disagreement is often exaggerated. The reality is that there is a core set of moral norms that almost all humans accept. We couldn’t live together otherwise. For humans to live together in peace and prosper, we need to follow norms such as do not kill, do not steal, do not inflict pain gratuitously, tell the truth, keep your commitments, reciprocate acts of kindness, and so forth. The number of core norms is small, but they govern most of the transactions we have with other humans. This is why we see these norms in all functioning human societies, past and present. Any community in which these norms were lacking could not survive for long. This shared core of moral norms represents the common heritage of civilized human society.

These shared norms also reflect the functions of morality as applied to the human condition. Earlier I observed that morality has certain functions; that is, it serves human interests and needs by creating stability, providing security, ameliorating harmful conditions, fostering trust, and facilitating cooperation in achieving shared and complementary goals. One can quibble about my wording, but that morality has something like these functions is beyond dispute. The norms of the common morality help to ensure that these functions are fulfilled by prohibiting killing, stealing, lying, and so forth. Given that humans are vulnerable to harm, that we depend upon the honesty and cooperation of others, and that we are animals with certain physical and social needs, the norms of the common morality are indispensable.

We can see now how morality has the type of objectivity that matters. If we regard morality as a set of practices that has something like the functions I described, then not just any norm is acceptable as a moral norm. “Lie to others and betray them” is not going to serve the functions of morality. Because of our common human condition, morality is not arbitrary; nor is it subjective in any pernicious sense. When people express fears about morality being subjective, they are concerned about the view that what’s morally permissible is simply what each person feels is morally permissible. But morality is not an expression of personal taste. Our common needs and interests place constraints on the content of morality. Similarly, if we regard morality as serving certain functions, we can see how facts about the world can inform our moral judgments. If morality serves to provide security and foster cooperation, then unprovoked assaults on others run counter to morality’s aims. Indeed, these are among the types of actions that norms of the common morality try to prevent. For this reason, when we are informed that Kim did hit Stephanie in the face without provocation, we quickly conclude that what Kim did was wrong, and her conduct should be condemned.

Note that in drawing that conclusion, we are not violating Hume’s Law. Facts by themselves do not entail moral judgments, but if we look upon morality as a set of practices that provide solutions to certain problems, for example, violence among members of the community, then we can see how facts are relevant to moral judgments. Part of the solution to violence among members of the community is to condemn violent acts and encourage peaceful resolution of disputes. Facts provide us with relevant information about how to best bring about this solution in particular circumstances.

Similarly, with a proper understanding of morality, we can also see how we can justify making inferences from factual statements to evaluative judgments. Recall that the fact/value gap prevents us from inferring a moral judgment from isolated statements of fact. But if we recognize and accept that morality serves certain functions and that the norms of the common morality help carry out these functions, the inference from facts to moral judgments is appropriate because we are not proceeding solely from isolated facts to moral judgments; instead, we are implicitly referencing the background institution of morality. An isolated factual observation cannot justify a moral judgment, but a factual observation embedded in a set of moral norms can justify a moral judgment.

Objection 1: Just Because Morality Serves Certain Functions Does Not Imply It Should Have Those Functions

At this point, the perceptive reader might object that even assuming that the functions of morality I have described correspond to functions served by morality, this does not address the question of what the functions of morality should be. Haven’t I just moved the fact/value gap back one step, from the level of an individual factual statement to the level of a description of the institution of morality as a whole? Put another way, explaining how morality functions doesn’t address the issue of how it should function.

This is a reasonable objection, but it is one I can meet. So let’s consider this issue: Should morality have objectives that reflect the functions of morality that I have described, that is, serving human interests and needs by creating stability, providing security, ameliorating harmful conditions, fostering trust, and facilitating cooperation in achieving shared and complementary goals? Perhaps the best way to answer this question is with another question: What’s the alternative? If morality should not aim to create stability, provide security, ameliorate harmful conditions, and so forth, what’s the point of morality otherwise? To increase the production of cheese? One could maintain that cheese production is an overriding imperative, and one could label this a moral imperative, but the reality is that for humans to live and work together we would still need something to fulfill the functions of what we now characterize as morality. Perhaps we’d call it “shmorality,” but we’d still have a similar body of norms and practices, whatever its name.

Granted, some philosophers have argued that morality should have objectives somewhat different than the ones I have outlined. Various philosophers have argued that morality should aim at maximizing happiness, or producing a greater balance of pleasure over pain, or producing virtuous characters. Without digressing into a long discussion of ethical theory, I believe these views grasp certain aspects of the moral enterprise, but they mistakenly elevate part of what we accomplish through morality into the whole of it. There is no single simple principle that governs morality. Yes, we want to encourage people to be virtuous—that is, to be kind, courageous, and trustworthy—but to what end? Likewise, we want people to be happy, but exactly how do we measure units of happiness, and how do we balance the happiness of different individuals against one another or against the happiness of the community? If we look at morality as a practical enterprise, something like the objectives I have outlined represents a better description of what we want morality to accomplish. (I say “something like” because I am not claiming to give the best possible description of morality’s objectives.)

Objection 2: I Haven’t Explained Why Moral Norms Are Obligatory

A second important objection to my argument is that I have not explained how it is that moral norms are binding on us. Even if we accept that there is a common morality, why must we follow these norms?

There are two types of answers I can give here. Both are important, so we need to keep them distinct. One answer would appeal to human psychology. The combination of our evolutionary inheritance and the moral training most of us receive disposes us to act morally. We should not lose sight of this fact because if we were not receptive to moral norms, no reference to a divine command, no appeal to an ethical argument, could ever move us to behave morally. For a moral norm to act as a motivating reason to do or refrain from doing something, we must be the type of person who can respond to moral norms. Ethicists as far back as Aristotle have recognized this. Good moral conduct owes much to moral training, and the most sublime exposition of the magnificence of the moral law will not persuade those who have been habituated into antisocial behavior.

But in addition to a causal explanation of why we feel a sense of moral obligation, we also want an explanation of the reason for acknowledging moral obligations. In my view, it’s largely a matter of logical consistency. If we accept the institution of morality, then we are tacitly agreeing to be bound by moral norms. We cannot logically maintain that moral norms apply to everyone except us. If we think it is morally wrong for others to break their promises to us, as a matter of logic we cannot say that we are under no obligation to keep our promises. In saying that an action is morally wrong, we are committed to making the same judgment regardless of whether it is I or someone else performing the action. In accepting the institution of morality, we are also accepting the obligations that come with this institution. Hence, there is a reason, not just a psychological cause, for acknowledging our obligation to follow moral norms.

What if someone rejects the institution of morality altogether? The perceptive reader will not have failed to notice that I italicized “if” when I stated, “If we accept the institution of morality, then we are tacitly agreeing to be bound by moral norms.” I emphasized this condition precisely to draw attention to the fact that, as a matter of logic, there is nothing preventing an individual from rejecting the institution of morality entirely, from “opting out” of morality, as it were—that is, apart from the likely unpleasant consequences for that person of such a decision. There is nothing to be gained by pretending otherwise. There is no mystical intuition of “the moral law” that inexorably forces someone to accept the institution of morality. Nor is there any set of reasons whose irresistible logic compels a person to behave morally. Put another way, it is not irrational to reject the institution of morality altogether. One can coherently and consistently prefer what one regards as one’s own self-interest to doing the morally appropriate thing. However, leaving aside those who suffer from a pathological lack of empathy, few choose this path. Among other things, this would be a difficult decision to make psychologically.

That said, there is no guarantee that people will not make this choice. But notice that bringing God into the picture doesn’t change anything. People can make the decision to reject morality even if they think God has promulgated our shared moral norms. Indeed, many believers have made this decision, as evidenced by the individuals who throughout history have placed themselves outside the bounds of human society and have sustained themselves by preying on other humans. Many ruthless brigands and pirates have had no doubts about God’s existence. They robbed, raped, and murdered anyway.

You may say: “But what they did was objectively wrong—and an atheist can’t say this. As you have admitted, there is nothing outside the institution of morality to validate this institution, so the obligations of morality are not really binding.” If one means by “objectively wrong” something that conforms to a standard of wrongness that exists completely independently of the human condition and our moral practices, then, correct, an atheist might not use “objectively wrong” in this sense. (Some ethicists who are atheists might, as I have already discussed.) But so what? First, as indicated by the Euthyphro argument, the notion that God could provide such an external standard is highly questionable. Second, and more important, what is lost by acknowledging that morality is a wholly human phenomenon that arose to respond to the need to influence behavior so people can live together in peace? I would argue that nothing is lost, except some confused notions about morality that we would do well to discard.

The temptation to think that we need some standard external to morality in order to make morality objective and to make moral obligations really binding is buttressed by the fear that the only alternative is a subjectivist morality—but recognizing that morality is based on human needs and interests, and not on God’s commands, doesn’t make one a subjectivist. As already discussed, when those who don’t think that morality is derived from God say that something is morally wrong, they don’t (typically) mean that this is just how they as individuals feel, which would be a true subjectivist position. One cannot argue with feelings. But most nonreligious people think we can argue about moral issues and that some people are mistaken about their conclusions on moral matters.

To have genuine disagreements about moral issues, we need accepted standards for distinguishing correct from incorrect moral judgments, and facts must influence our judgments. Morality as I have described it meets these conditions. All morally serious individuals accept the core moral norms I have identified, and it is these core norms that provide an intersubjective foundation for morality and for disagreements about more complex moral issues. For example, all morally serious individuals recognize that there is a strong presumption that killing is wrong, and our knowledge that we live among others who also accept this norm allows us to venture outside instead of barricading ourselves in our homes. There is no dispute about this norm. But there are discrete areas of disagreement regarding the applicability of this norm, for example, in the debate over physician-assisted dying. Such disputes on complex issues do not indicate that morality is subjective; to have a dispute—a genuine dispute, and not just dueling statements of personal preference—the parties to the dispute must have shared premises. In discussing and trying to resolve such moral disputes, we make reference to norms of the common morality (such as the obligation not to kill versus the obligation to show compassion and prevent suffering), interpret them in light of relevant facts, and try to determine how our proposed resolution would serve the underlying rationale of the applicable norms. Only the morally inarticulate invoke subjective “feelings.” (In my forthcoming book, The Necessity of Secularism: Why God Can’t Tell Us What To Do, I devote a chapter to illustrating how we can express disagreement on public policy matters without invoking God or just saying “that’s how I feel.”)

From the foregoing, we can also see that morality is not arbitrary. People can argue intelligently about morality and can also assert that an action is morally wrong—not just for them, but wrong period. They can condemn wrongdoers, pointing out how their actions are inconsistent with core norms (although most wrongdoers are already aware of their transgressions). Furthermore, if the offense is serious enough, they will impose severe punishment on the wrongdoer, possibly including removal from society. All that seems pretty objective, in any relevant sense of the term. Granted, it’s not objective in the same way that the statement that it is raining outside is objective, but that’s because, as we have already established, factual statements have a different function than moral judgments.

At this point, the believer might protest, “But there has to be something more than that. Morality is not just a human institution.” Well, what is this something more? Why is it not enough to tell the wrongdoer that everyone condemns him or her because what he or she did violated our accepted norms, which are essential to our ability to live together in peace? Do we have to add, “Oh, by the way, God condemns you too”? Exactly what difference would that make?

What some believers (and, again, some secular ethicists) appear to want is some further fact, something that will make them more comfortable in claiming that moral norms are authoritative and binding. Somehow it is not sufficient that a norm prohibiting the gratuitous infliction of violence reduces pain and suffering and allows us to live together in peace, and has, therefore, been adopted by all human societies. No; for the believer there has to be something else. A moral norm must be grounded in something other than its beneficial effects for humans and human communities. The statement that “it was wrong for Kim to hit Stephanie” must pick out some mystical property that constitutes “wrongness.” For the believer, this further fact is usually identified as a command from God, but as we have already established, God’s commands cannot be regarded as imposing moral obligations unless we already possess a sense of right and wrong independent of his commands.

Those who cling to the “further fact” view—that is, the view that there must be something outside of morality that provides the objective grounding for morality—are not unlike those naïve economists who insist that currency has no value unless it’s based on gold or some other precious metal. Hence, we had the gold standard, which for many years provided that a dollar could be exchanged for a specific quantity of gold. The gold standard reassured some that currency was based on something of “objective” value. However, the whole world has moved away from the gold standard with no ill effects. Why was there no panic? Why didn’t our economic systems collapse or become wildly unstable? Because currency doesn’t need anything outside of the economic system itself to provide it with value. Money represents the value found within our economic system, which, in turn, is based on our economic relationships.

Similarly, moral norms represent the value found in living together. There is no need to base our moral norms on something outside of our relationships. Moral norms are effective in fostering collaboration and cooperation and in improving our conditions, and there is no need to refer to a mystical entity, a gold bar, or God to conclude that we should encourage everyone to abide by common moral norms.

Conclusion

In conclusion, the claim that we need God to provide morality with objectivity does not withstand analysis. To begin with, God would not be able to provide objectivity, as the argument from Euthyphro demonstrates. Moreover, morality is neither objective nor subjective in the way that statements of fact are said to be objective or subjective; nor is that type of objectivity really our concern. Our legitimate concern is that we don’t want people feeling free “to do their own thing,” that is, we don’t want morality to be merely a reflection of someone’s personal desires. It’s not. To the extent that intersubjective validity is required for morality, it is provided by the fact that, in relevant respects, the circumstances under which humans live have remained roughly the same. We have vulnerabilities and needs similar to those of people who lived in ancient times and medieval times, and to those of people who live today in other parts of the world. The obligation to tell the truth will persist as long as humans need to rely on communications from each other. The obligation to assist those who are in need of food and water will persist as long as humans need hydration and nutrition to sustain themselves. The obligation not to maim someone will persist as long as humans cannot spontaneously heal wounds and regrow body parts. The obligation not to kill someone will persist as long as we lack the power of reanimation. In its essentials, the human condition has not changed much, and it is the circumstances under which we live that influence the content of our norms, not divine commands. Morality is a human institution serving human needs, and the norms of the common morality will persist as long as there are humans around.

Ronald A. Lindsay

Ronald A. Lindsay is the former president and CEO of the Center for Inquiry. Currently, he is senior research fellow for CFI and adjunct professor of philosophy at Prince George’s Community College.


The Fable of the Christ

Michael Paulkovich


I have always been a staunch Bible skeptic but not a Christ-mythicist. I maintained that Jesus probably existed but had fantastic stories foisted upon the memory of his earthly yet iconoclastic life.

After exhaustive research for my first book, I began to perceive both the light and darkness from history. I discovered that many prominent Christian fathers believed with all pious sincerity that their savior never came to Earth or that if he did, he was a Star-Trekian character who beamed down pre-haloed and full-grown, sans transvaginal egress. And I discovered other startling bombshells.

An exercise that struck me as meritorious, even today singular, involved reviving research into Jesus-era writers who should have recorded Christ tales but did not. John Remsburg enumerated forty-one “silent” historians in The Christ (1909). To this end, I spent many hours bivouacked in university libraries, the Library of Congress, and on the Internet. I terminated that foray upon tripling Remsburg’s count: in my book, I offer 126 writers who should have but did not write about Jesus (see the box “The Silent Historians” at the end of this article). Perhaps the most bewildering “silent one” is the super-Savior himself. Jesus is a phantom of a wisp of a personage who never wrote anything. So, add one more: 127.

Perhaps none of these writers is more fascinating than Apollonius Tyanus, saintly first-century adventurer and noble paladin. Apollonius was a magic-man of divine birth who cured the sick and blind, cleansed entire cities of plague, foretold the future, and fed the masses. He was worshiped as a god and as a son of a god. Despite such nonsense claims, Apollonius was a real man recorded by reliable sources.

Because Jesus ostensibly performed miracles of global expanse (such as in Matthew 27), his words going “unto the ends of the whole world” (Rom. 10), one would expect virtually every literate person to have recorded those events. A Jesus contemporary such as Apollonius would have done so, as well as those who wrote of Apollonius.

Such is not the case. In Philostratus’s third-century chronicle Vita Apollonii, there is no hint of Jesus. Nor does Jesus appear in the works of other Apollonius epistolarians and scriveners: Emperor Titus, Cassius Dio, Maximus, Moeragenes, Lucian, Soterichus Oasites, Euphrates, Marcus Aurelius, or Damis of Hierapolis. It seems that none of these first- to third-century writers ever heard of Jesus, his miracles and alleged worldwide fame be damned.

Another bewildering author is Philo of Alexandria. He spent his first-century life in the Levant and even traversed Jesus-land. Philo chronicled contemporaries of Jesus—Bassus, Pilate, Tiberius, Sejanus, Caligula—yet knew nothing of the storied prophet and rabble-rouser enveloped in glory and astral marvels.

Historian Flavius Josephus published his Jewish Wars circa 95 CE. He had lived in Japhia, one mile from Nazareth—yet Josephus seems unaware of both Nazareth and Jesus. (I devoted a chapter to the interpolations in Josephus’s works that make him appear to write of Jesus when he did not.)

The Bible venerates the artist formerly known as Saul of Tarsus, but he was a man essentially oblivious to his savior. Paul was unaware of the virgin mother and ignorant of Jesus’s nativity, parentage, life events, ministry, miracles, apostles, betrayal, trial, and harrowing passion. Paul didn’t know where or when Jesus lived and considered the crucifixion metaphorical (Gal. 2:19–20). Unlike what is claimed in the Gospels, Paul never indicated that Jesus had come to Earth. And the “five hundred witnesses” claim (1 Cor. 15) is a forgery.

Qumran, hidey-hole for the Dead Sea Scrolls, lies twelve miles from Bethlehem. The scroll writers, coeval and abutting the holiest of hamlets one jaunty jog eastward, never heard of Jesus. Christianity still had that new-cult smell in the second century, but Christian presbyter Marcion of Pontus in 144 CE denied any virgin birth or childhood for Christ. Jesus’s infant circumcision (Luke 2:21) was thus a lie, as well as the crucifixion! Marcion claimed that Luke was corrupted; Christ self-spawned in omnipresence, esprit sans corps.

I read the works of second-century Christian father Athenagoras and never encountered the word Jesus—Athenagoras was unacquainted with the name of his savior! This floored me. Had I missed something? No; Athenagoras was another pious early Christian who was unaware of Jesus.

The original Mark ended at 16:8; later forgers added the fanciful resurrection tale. John 21, with its post-death Jesus tales, is another forgery. Millions should have heard of the crucifixion and its astral enchantments: zombie armies and meteorological marvels (Matt. 27), recorded not by any historian but only in the dubitable scriptures scribbled decades later by superstitious folks. The Jesus saga is further deflated by Nazareth, a town without piety and in fact without settlement until after the war of 70 CE, suspiciously just around the time the Gospels were concocted.

Conclusion

When I consider those 126 writers, all of whom should have heard of Jesus but did not—and Paul and Marcion and Athenagoras and Matthew with a tetralogy of opposing Christs, the silence from Qumran and Nazareth and Bethlehem, conflicting Bible stories, and so many other mysteries and omissions—I must conclude that Christ is a mythical character. Jesus of Nazareth was nothing more than an urban (or desert) legend, likely an agglomeration of several evangelic and deluded rabbis who might have existed.

I also include in my book similarities between Jesus and earlier God-sons such as Sandan and Mithra and Horus and Attis, similarities too striking to disregard. The Oxford Classical Dictionary and the Catholic Encyclopedia, among many other sources, corroborate these parallels.

Thus, today I side with Remsburg—and with Frank Zindler, John M. Allegro, Godfrey Higgins, Robert M. Price, Salomon Reinach, Samuel Lublinski, Charles-François Dupuis, Allard Pierson, Rudolf Steck, Arthur Drews, Prosper Alfaric, Georges Ory, Tom Harpur, Michael Martin, John Mackinnon Robertson, Alvar Ellegård, David Fitzgerald, Richard Carrier, René Salm, Timothy Freke, Peter Gandy, Barbara Walker, D.M. Murdock, Thomas Brodie, Earl Doherty, Thomas L. Thompson, Bruno Bauer, and others—heretics and iconoclasts and freethinking dunces all, it would seem.

If all the evidence and nonevidence, including 126 (127?) silent writers, cannot convince, I’ll wager that we will uncover much more, for this is but the tiny tip of the mythical-Jesus iceberg. Nothing adds up for the fable of the Christ.

 

The Silent Historians

  • Aelius Theon
  • Albinus
  • Alcinous
  • Ammonius of Athens
  • Alexander of Aegae
  • Antipater of Thessalonica
  • Antonius Polemo
  • Apollonius Dyscolus
  • Apollonius of Tyana
  • Appian
  • Archigenes
  • Aretaeus
  • Arrian
  • Asclepiades of Prusa
  • Asconius
  • Aspasius
  • Atilicinus
  • Attalus
  • Bassus of Corinth
  • C. Cassius Longinus
  • Calvisius Taurus of Berytus
  • Cassius Dio
  • Chaeremon of Alexandria
  • Claudius Agathemerus
  • Claudius Ptolemaeus
  • Cleopatra the physician
  • Cluvius Rufus
  • Cn. Cornelius Lentulus Gaetulicus
  • Cornelius Celsus
  • Columella
  • Cornutus
  • D. Haterius Agrippa
  • D. Valerius Asiaticus
  • Damis
  • Demetrius
  • Demonax
  • Demosthenes Philalethes
  • Dion of Prusa
  • Domitius Afer
  • Epictetus
  • Erotianus
  • Euphrates of Tyre
  • Fabius Rusticus
  • Favorinus Flaccus
  • Florus
  • Fronto
  • Gellius
  • Gordius of Tyana
  • Gnaeus Domitius
  • Halicarnassensis Dionysius II
  • Heron of Alexandria
  • Josephus
  • Justus of Tiberias
  • Juvenal
  • Lesbonax of Mytilene
  • Lucanus
  • Lucian
  • Lysimachus
  • M. Antonius Pallas
  • M. Vinicius
  • Macro
  • Mam. Aemilius Scaurus
  • Marcellus Sidetes
  • Martial
  • Maximus Tyrius
  • Moderatus of Gades
  • Musonius
  • Nicarchus
  • Nicomachus Gerasenus
  • Onasandros
  • P. Clodius Thrasea
  • Paetus Palaemon
  • Pamphila
  • Pausanias
  • Pedacus Dioscorides
  • Persius/Perseus
  • Petronius
  • Phaedrus
  • Philippus of Thessalonica
  • Philo of Alexandria
  • Phlegon of Tralles
  • Pliny the Elder
  • Pliny the Younger
  • Plotinus
  • Plutarch
  • Pompeius Saturninus
  • Pomponius Mela
  • Pomponius Secundus
  • Potamon of Mytilene
  • Ptolemy of Mauretania
  • Q. Curtius Rufus
  • Quintilian
  • Rubellius Plautus
  • Rufus the Ephesian
  • Saleius Bassus
  • Scopelian the Sophist
  • Scribonius
  • Seneca the Elder
  • Seneca the Younger
  • Sex. Afranius Burrus
  • Sex. Julius Frontinus
  • Servilius Damocrates
  • Silius Italicus
  • Soranus
  • Soterides of Epidaurus
  • Sotion
  • Statius the Elder
  • Statius the Younger
  • Suetonius
  • Sulpicia
  • T. Aristo
  • T. Statilius Crito
  • Tacitus
  • Thallus
  • Theon of Smyrna
  • Thrasyllus of Mendes
  • Ti. Claudius Pasion
  • Ti. Julius Alexander
  • Tiberius
  • Valerius Flaccus
  • Valerius Maximus
  • Vardanes I
  • Velleius Paterculus
  • Verginius Flavus
  • Vindex

 


Michael Paulkovich is an aerospace engineer and freelance writer, a frequent contributor to Free Inquiry and Humanist Perspectives magazines, a contributing editor at The American Rationalist, and a columnist for American Atheist. His book No Meek Messiah was published in 2013 by Spillix.
