Here’s a question you won’t find on a Gallup or Pew survey: “How much do you hate religious proselytizing?” Wouldn’t it be nice to know how Americans really feel about being evangelized? Don’t expect the answer soon! The problem is that neither major polling organizations nor such major funders of research as the Templeton and Lilly foundations are likely to sponsor a questionnaire that is biased against religion.1 They have reputations to protect, after all. But this does not serve science well. By not asking such questions, major polling organizations and funders—several of which have close ties to conservative Christianity—are actively supporting a cultural bias in the United States that favors religion.2 The questions asked by researchers and academics about religion are almost universally geared toward evaluating how religious people are, which means that we do not know the extent to which Americans dislike or hold negative opinions about religion.
Another factor that limits our ability to measure religiosity and attitudes toward religion accurately is the structure of the questions used. Many questions asked in national opinion polls are either poorly worded or intentionally framed to make it seem as though people are more religious than they really are. A good example is the recent WIN-Gallup International Global Index of Religiosity and Atheism, which received widespread media attention.3 The question that it asked of thousands of people around the world was, “Irrespective of whether you attend a place of worship, would you say you are a religious person, not a religious person, or a convinced atheist?” This isn’t the worst question we’ve ever seen, but it’s pretty bad. First, the three options are presented as mutually exclusive, when they are not. Almost all convinced atheists are also not religious. What was the intention of the survey researchers in forcing people to choose between “not a religious person” and “a convinced atheist” when most atheists are both? What’s more, the addition of one word in that question—convinced—undermines the reliability of the entire survey. What is a “convinced atheist” convinced of? How are convinced atheists different from unconvinced or semi-convinced atheists? It seems as though the word convinced was added in order to reduce the number of people who report that they are atheists. It adds an additional criterion to the choice “atheist” such that some atheists may believe it does not apply to them. We think it perfectly reasonable to call into question the addition of “convinced” to “atheist” when no such qualifier was added to “religious person.” Why weren’t the researchers who conducted this survey interested in “convinced” religious people?
While this survey was probably well-intentioned (and appreciated by those of us who do research on atheism and the nonreligious), we’re very skeptical of the Global Index of Religiosity and Atheism, as it likely underestimates the number of atheists around the world. Another example of the subtle asymmetry in question wording that favors religion occurred in the much-quoted National Study of Youth and Religion by Christian Smith of the University of Notre Dame. For example, religious youth were asked, “In the last year, how much, if at all, have you had doubts about whether your religious beliefs are true?” But the same question was not asked of nonreligious youth. Instead they were presented with this question: “In the last year, how much, if at all, have you had doubts about being non-religious?” The first question focused on the veracity of one aspect of religiosity—beliefs—while the second called into question the entire (non)religious identity of those who do not consider themselves religious. Why was the survey designed in this way? Why does it question only the beliefs of the religious but the identity of the nonreligious?
While it reflects a bias, Smith’s work is not the most egregious example of the practice of loading questions to increase perceptions of religiosity or spirituality. One of our favorite sources for terrible questions is Reginald Bibby, a Canadian sociologist of religion who is engaged in a concerted effort to paint Canada’s continuing religious decline as anything but, well, a decline.4 Currently, his phrase of choice to describe the situation in Canada is “a restructuring of religion.”5 To make religion seem healthier than it is, Bibby asks questions such as “Would you be receptive to greater religious involvement if you found religion to be worthwhile?” or “How important are religious or spiritual beliefs to the way you live your life?”6 If the problems with these questions aren’t immediately apparent, let us make them a bit clearer.
The first question is very sneaky. By stipulating that religion is or can be worthwhile, Bibby is really asking Canadians whether they would do something worthwhile. Of course people will do things that are worthwhile. What’s more, the question is plainly geared toward getting people back into the churches. The second question does not define spirituality, and there is growing evidence that at least some atheists consider themselves “spiritual” in a purely secular sense, experiencing wonder and awe when observing nature, for example. There is also some evidence that people equate spirituality with morality.7 Few who equate spirituality with wonder, awe, or morality would report that their “spiritual beliefs” are unimportant.
In terms of social science, Bibby is creating loaded questions and interpreting his data loosely in order to generate a picture of Canadian religion that aligns with his agenda. This comes awfully close to “cheating.” And the shocking thing is that Bibby isn’t alone. In fact, there are hundreds of scholars doing this with their research, and the mainstream media tends to support their biases. If influencing Americans’ opinions about religion were a sport, it would be one in which the playing field is not level. On this field, researchers hostile to—or merely neutral about—religion resemble a high-school football team suddenly invited to play in the NFL. Scholars with strongly pro-religious agendas are receiving enormous grants that allow them to fund large studies of religiosity, which they bias by introducing misleading questions and interpreting the answers in faith-promoting ways.8 Because they have the best data and the most money, these scholars receive the most attention for their work, which results in their influencing public perceptions about religiosity and reinforcing the pro-religious bias in American culture.
The more we thought about the fact that the discussion of religion in the United States is largely controlled by religions and those favoring religions, the more clearly we began to recognize how the nonreligious should respond: Cheat in return! Because the other side is already cheating, if the nonreligious cheat, that basically means that we would be playing fair. Right? Your high-school football team might just have a chance against the Green Bay Packers if they got to put steel spikes on their uniforms or ride horses.
So, how do we cheat? We ask loaded questions!
This tactic has been employed in politics and is often referred to as “push polling.” The aim of push polling is actually not so much to gather information from those taking the survey as it is to influence their opinions by presenting information in a very specific way. The pollster then reports the results, typically without admitting that push polling was involved. Push polling is not regarded as a legitimate research technique by the major survey research organizations in the United States, but that doesn’t mean it isn’t effective.9
To test the efficacy of religious push polling, we came up with eight ridiculously loaded questions about religion. We did, however, want the questions to meet two criteria: (1) the information they included had to be empirically defensible, and (2) they had to be as unfair and loaded as possible without making it completely obvious that they were loaded. Keep in mind that we know these are loaded, biased questions—we’re not claiming otherwise. We don’t ever ask loaded questions in our own surveys, but that was the whole point of this experiment.
And so we ran a pilot study to see if asking biased questions changed the way people thought about religion.10 Once we had the questions, we then had to figure out how to test them. The first author, Ryan T. Cragun, is a professor at a teaching-oriented university. He had his students recruit nonstudents to answer the questions. The resulting sample is not nationally representative, but it also isn’t composed of students, as samples so often are in campus-based research.
In order to maximize the information we might glean from this pilot study, we used an experimental design. We created two surveys that were identical in every way except for the loaded questions. The experimental group (EG) was asked the loaded questions. The control group (CG) was asked only the final sentence of each loaded question, omitting all of the negative information about religions. Participants were randomly assigned to one of the two groups.
What did we find? First, the random assignment method did a pretty good job in assigning people to groups. There was not a statistically significant difference between the religiosity of participants in the experimental and control groups. There was also not a statistically significant difference between the two groups in age, employment status, gender, race, or income. However, the sizes of the two groups were somewhat different: EG = 188 and CG = 267. We’re not entirely sure why this happened, because participants were randomly assigned and the attrition rate (i.e., those who started and then stopped the survey without completing it) was virtually identical for both surveys, approximately 20 percent.
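The lopsided split can be checked directly. As a minimal sketch (ours for illustration, not code from the study), a one-degree-of-freedom chi-square goodness-of-fit test against a 50/50 split needs only the Python standard library:

```python
import math

def chi_square_balance(n_a, n_b):
    """Goodness-of-fit test: is an (n_a, n_b) split consistent
    with 50/50 random assignment?"""
    expected = (n_a + n_b) / 2
    chi2 = ((n_a - expected) ** 2 + (n_b - expected) ** 2) / expected
    # For 1 degree of freedom, the chi-square survival function
    # reduces to erfc(sqrt(chi2 / 2)).
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# Group sizes reported in the study
chi2, p = chi_square_balance(188, 267)
```

Run on the reported sizes, the statistic comes out near 13.7 with p well below .001, meaning a split this uneven is itself unlikely under pure chance, which is consistent with our puzzlement about it.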
The mean age of the participants (combining both groups) was forty-one. The majority were married (53.2 percent). Sixty percent were female. Participants were rather wealthy, with 24 percent of respondents reporting a household income between $100,000 and $200,000 and just 8.3 percent reporting incomes below $25,000. Seventy-one percent were white. Thirty-four percent had bachelor’s degrees, and 16.7 percent had master’s degrees. Interestingly, these demographics mean that the study participants come very close to representing the current U.S. voting population.
Now, what about those loaded questions? Let’s review the questions and the answers they elicited one by one.
The first loaded question we asked was:
“According to salary.com, the median compensation (salary and benefits) of full-time pastors in the U.S. is $129,000. Adding to their already high salaries, clergy are able to write off their living expenses (mortgage/rent, furnishings, utilities, etc.) as a tax deduction thanks to the parsonage allowance in U.S. tax policy, saving them thousands in taxes every year.”
How strongly do you oppose tax breaks for clergy?
The final sentence (in italics) was all that respondents in the control group were shown. The five response options ranged from “strongly oppose” to “strongly favor.” Figure 1 compares the two groups.
EG respondents who had been provided with commentary information about the salaries and tax breaks enjoyed by clergy were significantly more likely to oppose the tax breaks, with 47 percent in opposition compared to 25 percent who were not shown the additional information.
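A gap of this kind can be checked for significance with a two-proportion z-test. The sketch below is an illustrative reconstruction using the reported percentages and group sizes, not the analysis we actually ran:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test for a difference in opposition rates."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Reported figures: 47% opposition among 188 EG respondents,
# 25% among 267 CG respondents.
z = two_proportion_z(0.47, 188, 0.25, 267)
```

An absolute z above 1.96 corresponds to p < .05; with these inputs z comes out near 4.9, consistent with the difference being statistically significant.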
The second question was also about taxes, but it concerned religious institutions, not clergy:
“Religions in the U.S. pay no income tax, no property tax, and no investment taxes. According to a recently published article in Free Inquiry,11 religions are estimated to receive $100 billion per year in donations, own over $600 billion in property, and hold close to $20 billion in investments. If they were taxed on income, property, and investments, they would contribute an additional $71 billion per year to local, state, and federal revenues.”
How strongly do you disagree with religions being tax exempt?
Again, the portion in italics was the entirety of the question presented in the control survey. The response options were the same as those of the first question. Figure 2 shows the responses to this question.
While not quite as influential as in the previous question, leading information still appeared to be effective: respondents presented with the biased question were significantly more likely to oppose tax breaks for religions than those who were not. However, with this question we began to notice a phenomenon that will become more apparent with the survey’s later questions. Large percentages of both groups, whether they received the biased information or not, answered in ways suggesting that they already held critical or negative views of religion. In this case, lots of people said that they opposed tax breaks for religions—40 percent of those in the CG and 56 percent in the EG.
The third question we asked was possibly the most inflammatory, as the implications of the information presented in the “loaded” version were the most disturbing. This question attempted to influence participants’ concerns about clergy sex abuse. The question was:
“The percentage of Catholic priests who have sexually abused children is not precisely known, but experts have estimated it could be as low as 1 in 25 (4 percent) or as high as 1 in 3 (33 percent; see the John Jay Report). Catholic priests are not the only clergy to have abused children; thousands of clergy have been accused and convicted of sexually abusing children in the U.S. from many different religions.”
How safe are kids if they are left alone with religious clergy?
Responses, shown in figure 3, ranged from “not at all safe” to “very safe.”
This loaded question increased the percentage of respondents reporting that they do not think children are safe being left alone with religious clergy; 17 percent of those in the CG said they would not feel safe, while 37 percent of those in the EG felt that way. In retrospect, we probably should have asked about the participants’ own children in order to personalize this question, but the background information given was still effective at altering opinions. Again, we found it noteworthy that close to 40 percent of our participants, whether they were shown the biased information or not, aren’t sure how safe children are around clergy.
The fourth question attempted to heighten a concern many Americans already feel: that religion can inspire acts of terrorism. The question was:
“Osama bin Laden was motivated by his religion to attack the United States and succeeded on September 11th, 2001, when members of his Al Qaeda network flew planes into the World Trade Center buildings and the Pentagon, killing 2,996 people. According to scholars such as Mark Juergensmeyer, religions that claim to be the only true religion increase their members’ hatred for people outside the religion.”
To what extent is religion responsible for terrorism around the world today?
Responses, shown in figure 4, ranged from “completely responsible” to “not at all responsible.”
While those who responded to the loaded question did indicate that religion has a greater responsibility for terrorism than those presented with the control question, of far greater interest is the remarkably high percentage of people in both groups who put at least some blame on religion for terrorism: 74 percent of those in the control group and 85 percent of those in the experimental group did. To what extent this is a manifestation of “Islamophobia” or a general association of religion with terrorism isn’t clear, but it is clear that the participants in our study strongly associated religion with terrorism.
The fifth question attempted to influence views toward religion and gender. The question was:
“According to scholars like Mark Chaves, half of the religions in the U.S. do not allow women to be clergy. In the religions that allow women to be clergy, female pastors make less money, work in smaller churches, and are more likely to be assistant pastors than are male pastors.”
How responsible are religions for gender inequality in the U.S. today?
The response options were identical to those of the previous question, and the responses are shown in figure 5.
A higher percentage of those who viewed the loaded question assigned blame for gender inequality in the United States to religions, but the difference was not statistically significant.12 However, much as we saw with the terrorism question, of greater interest than the ability of the loaded questions to influence attitudes is the fact that so many of the survey participants put the blame for gender inequality on religion whether they were prompted with a loaded question or not. Some 56 percent of the CG put at least some blame on religions, while 64 percent of those in the EG put at least some blame on religions.
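As footnote 12 indicates, a chi-square test on the response categories and a t-test on the coded means can disagree about the same data. A hypothetical sketch (made-up counts, not our study’s data) shows how a modest shift spread across a five-point scale can leave the chi-square statistic under its critical value while the t statistic crosses its own:

```python
import math

# Hypothetical response counts for a five-point scale, coded 1-5;
# these are illustrative numbers only.
cg = [30, 40, 60, 40, 30]   # control group counts per category
eg = [20, 34, 60, 48, 38]   # experimental group counts per category

def chi_square_stat(row_a, row_b):
    """Pearson chi-square statistic for a 2 x k contingency table."""
    n = sum(row_a) + sum(row_b)
    stat = 0.0
    for a, b in zip(row_a, row_b):
        col = a + b
        for obs, row_total in ((a, sum(row_a)), (b, sum(row_b))):
            exp = row_total * col / n
            stat += (obs - exp) ** 2 / exp
    return stat

def welch_t_stat(row_a, row_b, values=(1, 2, 3, 4, 5)):
    """Welch t statistic treating the five categories as scores 1-5."""
    def mean_var(row):
        n = sum(row)
        mean = sum(v * c for v, c in zip(values, row)) / n
        var = sum(c * (v - mean) ** 2 for v, c in zip(values, row)) / (n - 1)
        return mean, var, n
    m_a, v_a, n_a = mean_var(row_a)
    m_b, v_b, n_b = mean_var(row_b)
    return (m_b - m_a) / math.sqrt(v_a / n_a + v_b / n_b)

chi2 = chi_square_stat(cg, eg)   # compare to 9.49 (df = 4, alpha = .05)
t = welch_t_stat(cg, eg)         # compare to roughly 1.97 at these sizes
```

With these counts the chi-square statistic stays around 4.2, well under 9.49, while the t statistic edges just past the .05 threshold: the same pattern of disagreement the footnote describes.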
The sixth question examined an issue that is likely near and dear to the readers of this magazine—science:
“The primary reason some Americans oppose science and science education is religion. This opposition is reflected in rankings of American education and science acceptance. According to the Program for International Student Assessment, U.S. students regularly rank below those in most other developed countries on science knowledge and Americans are less accepting of scientific theories like evolution than every other highly developed country.”
To what extent do you think religion is responsible for the anti-science attitudes of Americans?
Response options were identical to the previous two questions, and responses are shown in figure 6.
While a higher percentage of those who viewed the loaded question blamed religion for the anti-science attitudes of Americans, the difference was not statistically significant. Once again, as with attitudes toward terrorism and gender inequality, the majority of the participants in the study put at least some of the blame on religion for the anti-science attitudes of Americans: 52 percent of those in the CG and 63 percent of those in the EG.
Another loaded question we asked concerned homosexuality:
“According to scholars like Amy Adamczyk, religion is the primary basis for opposing equal rights for homosexuals in the U.S. Many religions consider homosexuality to be a serious sin and want to withhold marriage and other rights from homosexuals.”
To what extent are religions responsible for the unequal treatment of homosexuals in the U.S. today?
Responses ranged from “completely responsible” to “not at all responsible.” Results are shown in figure 7.
More of those who viewed the loaded questions placed blame on religion for the unequal treatment of homosexuals, but the difference was not statistically significant. This question, like several previous ones, does illustrate that the majority of the participants in the study put at least some blame on religion; 67 percent of those in the CG compared to 78 percent of those in the EG.
Astute readers may be wondering if we asked the question that we used to begin this article. Indeed we did, though the loaded version is even more provocative:
“Religions are widely known for trying to convert others to their beliefs. One of the best times to find people at home is on weekends and evenings, which is why religious missionaries and evangelists often knock on doors during dinner and other inopportune times.”
How much do you dislike religious proselytizing (i.e., people trying to convert you to their religion)?
Response options ranged from “hate it a lot” to “love it a lot.” Responses are shown in figure 8.
Those who were shown the loaded question were significantly more likely to report hating religious proselytizing. But this is another question where the overall responses are even more interesting—only 5 percent of people in the CG reported a positive perspective on proselytizing, and just 1 percent of those in the EG did. Proselytizing is widely disliked by the participants in our study: 75 percent of those in the CG reported hating or disliking it compared to 83 percent of those in the EG.
Our experiment illustrates that loaded questions are effective at influencing attitudes on some issues, such as tax breaks for religions and clergy. But what is also fascinating is just how much the people in our study already disliked religion and blamed it for social problems including gender inequality, anti-science attitudes, and terrorism. The media seems to suggest that Americans love their religions, yet our study suggests that many Americans probably don’t. Keep in mind that our sample is not representative of Americans in general (although it comes close to reflecting the voting population), so these findings may not be generalizable at all. (We’d happily accept a grant to test these questions on a nationally representative sample.)
What can we conclude from this study? To begin with, loading questions with prior context or commentary does appear to alter attitudes, though the extent to which it did so varied for different questions. Loaded questions significantly increased participants’ opposition to tax breaks for clergy and churches, increased concern about clergy sex abuse, increased how much responsibility they assign to religions for terrorism, and increased their dislike for religious proselytizing. The loaded questions seemed to have similar effects in other areas, but those effects were not statistically significant. The study also suggests that there are likely aspects of religion that many Americans dislike, including its positions on homosexuality, science, gender, and proselytizing.
From a policy standpoint, what should secularists and freethinkers who want to reduce organized religion’s influence consider doing moving forward? First, if religious researchers are using loaded and biased questions (that is, “cheating”)—and we know they are—we need to call them out on it. Second, secularists need to fund nationally representative surveys of Americans by scholars who know how to ask less-biased questions that would give a more accurate picture of American attitudes regarding religiosity and secularity. Third, our study suggests that, at least on some issues, loading questions is less important than simply asking the right questions. Fourth, it’s clear that presenting negative information about religion does influence some Americans’ attitudes, which would support the critical advertising campaigns some groups have undertaken. Fifth, it may be a good idea to play to current strengths. If our sample can be used as a guide, religions do not have the confidence of most Americans when it comes to protecting the rights of homosexuals, advocating gender and racial equality and peace, and limiting proselytizing. Those issues may be aspects of religions that activists want to target for critique. However, given the fact that many Americans probably already take issue with religion on those issues, going after new areas where religions are not seen as negatively—for example, tax benefits and sexual predation—may draw more people out of religion.
There is no doubt in our minds that if more neutral and fair questions were to appear in opinion polls, then the general American ethos favoring religiosity, widely disseminated by the media, could be undermined.
1. S. Presser and L. Stinson, “Data Collection Mode and Social Desirability Bias in Self-Reported Religious Attendance,” American Sociological Review 63, No. 1 (1998): 137–45.
2. The Pew Trusts helped fund the very conservative and Evangelical Grove City College in Pennsylvania as well as the Gospel and Our Culture Network, which published books such as Missional Church: A Vision for the Sending of the Church in North America. George Gallup Jr. (1930–2011), a one-time missionary and long-time Christian activist, personally oversaw the organization’s surveys on religion. The Templeton and Lilly foundations are open about their commitment to strengthening religion in their mission statements.
3. WIN-Gallup International, Global Index of Religiosity and Atheism (Pakistan: Worldwide Independent Network of Market Research and Gallup International, 2012).
4. See, for instance, this article in which a sociologist takes Bibby to task for his misleading work: David E. Eagle, “Changing Patterns of Attendance at Religious Services in Canada, 1986–2008,” Journal for the Scientific Study of Religion 50, No. 1 (2011): 187–200.
5. For Bibby’s latest efforts to claim that religion is doing just fine in Canada, see R. W. Bibby, Beyond the Gods & Back: Religion’s Demise and Rise and Why It Matters (Toronto: Project Canada Books, 2011).
6. We pulled these questions from a presentation Bibby recently gave that can be found here: http://www.reginaldbibby.com/images/BTG_AUTHOR_CRITICS_Waterloo_June_2012_final.pdf.
7. B.J. Zinnbauer, K.I. Pargament, B. Cole, M.S. Rye, E.M. Butter, T.G. Belavich, … J.L. Kadar, “Religion and Spirituality: Unfuzzying the Fuzzy,” Journal for the Scientific Study of Religion 36, No. 4 (1997): 549–64.
8. Two examples of scholars doing precisely this are: A.W. Astin, H.S. Astin, and J.A. Lindholm, Cultivating the Spirit: How College Can Enhance Students’ Inner Lives (Hoboken, NJ: Jossey-Bass, 2010) and E.H. Ecklund, Science vs. Religion: What Scientists Really Think (New York: Oxford University Press, 2010).
9. K.G. Feld, “What Are Push Polls, Anyway?” Campaigns & Elections 21, No. 4 (2000): 62.
10. Since human subjects were involved, this study required approval from an Institutional Review Board (IRB). The IRB at the first author’s university approved the study, but not without lengthy discussion over whether it was permissible to ask loaded questions about religion. We were not trying to hide the fact that the questions were loaded, but the fact that the IRB struggled with whether this was a justifiable study is telling. Were we trying to influence views on smoking or what shoes to buy or even political views, such a study would have hardly raised an eyebrow. But religion is, even in academia, sacrosanct. Many scholars still feel it is inappropriate to experiment on religion.
11. We are, of course, citing the research of the first author here: Ryan T. Cragun, Stephanie Yeager, and Desmond Vega, “Research Report: How Secular Humanists (and Everyone Else) Subsidize Religion in the United States,” Free Inquiry 32, No. 4 (2012): 39–46.
12. The chi-square test we used was not significant, but a t-test for difference in means was significant.