Enhancing Virtues: Fairness

James Hughes

Our moral codes are rooted in preconscious feelings of disgust with people who hurt others, cheat, are disloyal, disobey authority, and violate social taboos. Some of these moral feelings support modern Enlightenment ideas of morality, while others are in contradiction with modern values of individual rights and critical thought. By illuminating the ways that our value systems are shaped by prerational impulses, we can make more conscious choices about how to build a fair society and practice the civic virtues of fairness and engaged citizenship. But we also can begin to experiment with ways to enhance our moral reasoning with drugs and devices to become even better citizens than previously possible.

Just as we have ancient neural architectures for bonding with our fellow mammals, we also appear to have evolved deeply wired neural intuitions about fairness and morality. One of our deeply ingrained moral intuitions is that it is wrong to cheat and that cheaters need to be punished. This impulse can be demonstrated in a laboratory experiment called the “ultimatum game.” One participant is given some money and instructed to offer a portion of it to the other participant. Any fraction of the amount, or none at all, can be offered, but the first person doesn’t get to keep any of the money if the other person rejects the split. Three-quarters of participants offer something between 40 percent and 50 percent. When the splitter offers less than half, it triggers a disgust reaction in the amygdala of the person who must choose to accept or reject the split. When that disgust reaction is strong enough, which is usually when the offer is less than 40 percent, the person will reject the split even though it means he or she is giving up whatever was offered. That self-sacrifice to spank the “cheater” at a cost to oneself is known as “altruistic punishment.”
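The logic of altruistic punishment can be sketched as a toy simulation. The 40 percent threshold mirrors the figure cited above, but the payoff mechanics and function names are purely illustrative:

```python
def responder_accepts(offer_fraction, disgust_threshold=0.40):
    """Toy responder: reject the split ('altruistic punishment') when the
    offered share falls below the disgust threshold, even though
    rejection leaves the responder with nothing."""
    return offer_fraction >= disgust_threshold

def play_round(pot=100, offer_fraction=0.30):
    """Return (proposer_payoff, responder_payoff) for one ultimatum round."""
    offer = pot * offer_fraction
    if responder_accepts(offer_fraction):
        return pot - offer, offer
    return 0, 0  # both walk away empty-handed

print(play_round(100, 0.30))  # -> (0, 0): the responder gives up 30 to punish
print(play_round(100, 0.45))  # -> (55.0, 45.0): a fair-ish split is accepted
```

The point the sketch makes concrete is that rejection is strictly irrational for a self-interested responder; the punishment only makes sense as an evolved enforcement mechanism for fairness norms.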

These intuitions can be observed in our simian cousins and human children. When chimpanzees and human children are set up in ultimatum situations, they also mostly offer fairish splits, and their willingness to sacrifice rewards to punish cheaters is the same as in adult humans.1, 2 Even human infants under two years old react negatively when they observe unequal rewards given to others.3 According to Paul Bloom, one of the leading researchers on the moral life of infants and the author of Just Babies,4 infants exhibit four moral sensibilities:

  • Moral judgment: some capacity to distinguish between kind and cruel actions.
  • Empathy: suffering at the pain of those around us and wishing to make this pain go away.
  • Fairness: a tendency to favor those who divide resources equally.
  • Justice: a desire to see good actions rewarded and bad actions punished.5

We experience these biologically rooted moral intuitions differently than we do other kinds of values. A group at DePaul University in Chicago surveyed students about a variety of moral attitudes. Some had been ranked by previous researchers6 as biologically determined and heritable, such as attitudes toward premarital sex, racism, and the death penalty; and others as only weakly influenced by biology and genes, such as attitudes about privacy. They found that the stronger the likely genetic influence on a value, the more deeply held the students’ beliefs about that value were.7

Just as empathy has to be cultivated by intelligence to become a mature theory of mind and social intelligence, our moral intuitions can take us only so far. Is affirmative action fair? Is collateral damage in a war morally justified? Should a poor person be allowed to steal bread? In order to cultivate the virtue of fairness, we need to move from innate moral intuitions to mature moral reasoning.

Liberal and Conservative Brains

The psychologist Jonathan Haidt adds to this picture by showing that we are not all equally sensitive to inherited moral intuitions. Haidt began his research on moral intuitions by studying reactions to topics such as cannibalism and incest. By unraveling how people felt about these deeply emotive subjects, he eventually identified a set of core moral intuitions that he and the other proponents of moral foundations theory believe have evolutionary and neurobiological roots:

  • Care/harm: protecting others from harm.
  • Fairness/cheating: treating others in proportion to their actions.
  • Liberty/oppression: judgments about whether subjects are tyrannized.
  • In-group loyalty: to your race, group, family, nation.
  • Respect for authority/hierarchy: deference to leaders, institutions, and traditions.
  • Sanctity/purity: avoiding disgusting things, foods, and actions.

Haidt found that conservatives, liberals, and libertarians differ in their sensitivity to these innate, monkey-brain moral sentiments. Liberals are more sensitive to the first two, the impulses to protect others from harm and to ensure fairness. Conservatives are less sensitive to those and more sensitive to the impulses to protect the in-group, to defer to authority, and to feel disgust for the profane. For instance, liberals are more likely to agree with the statement “I wish there were no nations or borders and we were all part of one big group,” while conservatives are more likely to agree that “Respect for authority is something all children need to learn.” Libertarians are more sensitive to the liberty/oppression intuition and less sensitive to the other five.8

Using survey responses from twenty-five thousand people that allowed them to be assessed for their response to these moral intuitions and their views on political issues, Haidt and his team found that these moral intuitions predicted positions on issues ranging from gay marriage and immigration to global warming and defense spending.9 These innate moral sentiments help explain why our political debates so often feel as though we are speaking different languages. We simply can’t understand how the other side can take certain kinds of arguments or sentiments seriously and not see the importance of our views. As Haidt and his collaborators recently framed it, liberals and conservatives are as different as people from entirely different cultures.10

These political differences are deeply rooted in neurobiological differences. The idea that political ideology has biological roots seems counterintuitive, since political views seem so determined by the time and place we find ourselves in. Also, what evolutionary advantage could there have been for humans to develop such divergent moral and political views? But a recent study by researchers from Harvard University, Brown University, and Penn State University dramatically illustrates how deep the biological roots of political ideology appear to be. They recruited twenty-one adults, ten strongly liberal and eleven strongly conservative. The participants were asked to bathe with scent-free soap and refrain from cigarettes, alcohol, deodorants, perfumes, sex, or sleeping with humans or pets. They then taped a gauze pad under their arms for twenty-four hours. The pads were frozen in vials and thawed out later to be smelled by 125 participants whose politics had also been ascertained to be either strongly liberal or strongly conservative. The smellers rated each vial on a scale of 1 to 5 on attractiveness of the body odor.
Controlling for gender, conservatives found the smell of other conservatives more attractive, and liberals preferred the smell of other liberals. Somehow the biological bases of ideological preferences were being communicated through body odor.11

The evidence that these biological determinants of ideology are heritable is now quite strong. A 2014 meta-analysis of the effects of genes on politics looked at nineteen studies of twelve thousand twins in five countries spanning three decades.12 As in studies of genetic influences on intelligence, they did not find any single gene that explained a significant amount about the twins’ political views. But they did find a significant and substantial genetic influence on political views across a wide range of issues in every country and time period. Attitudes toward things as diverse as school prayer, the death penalty, gay rights, foreign aid, feminism, taxation, and global warming all showed genetic influence.

PFC vs. Amygdala

One of the most popular scenarios used in the emerging experimental philosophy field is the trolley dilemma.13 In the first trolley scenario, the participant is told to imagine standing beside a track and seeing a runaway trolley about to hit five men down the line. The participant is standing next to a lever that can switch the trolley to a track on which only one man is standing. Will the participant pull the lever, diverting the trolley to kill just one man instead of five? This is a classic utilitarian choice; the greater good for five outweighs the harm imposed on one. Most people choose to pull the lever.

In the second scenario, the “footbridge dilemma,” the participant is standing next to a very fat man on a bridge over the track. The participant is told that (however implausibly) the only way to stop the trolley hitting the five men is to push the fat man onto the track. Most people say they wouldn’t or couldn’t push the fat man, even though the result would be the same as in the first scenario: one man dies, five live.

Since neuroscientist Josh Greene and colleagues first used functional magnetic resonance imaging (fMRI) to watch the brains of people making these trolley decisions, more than a decade of experiments has shown that the utilitarian decision in the first scenario is largely handled by the rational prefrontal cortex (PFC), while the second, footbridge dilemma strongly stirs up the emotional centers of the brain, overriding rational utilitarian calculation.14, 15 Passive moral judgments based on intuitions such as “it’s never OK to push someone to his or her death” are based in the amygdala, while active moral reasoning, such as the reasoning necessary to rationalize pushing the fat man onto the track, relies on parts of the PFC.16 People with larger, more active, and better-connected prefrontal cortices are better able to filter and channel the hot moral intuitions—not only the desires to protect and to punish others but also disgust, loyalty, and submission to authority—bubbling up from our amygdalas. On the other hand, when people are sleepy, distracted, pressed for time, or under stress, they are less likely to make rational, utilitarian judgments.17, 18, 19, 20, 21, 22

Another way of understanding the genetic influences on moral and political thought is that our genes partly determine the relative influence of the prefrontal cortex versus the more emotional parts of the brain such as the amygdala on our moral and political decision-making. Conservatives have larger and twitchier amygdalas than liberals and libertarians, startle more easily, and react more strongly to bad smells and unpleasant images.23, 24, 25, 26, 27, 28 Conservatives are therefore more sensitive to the discomfort of uncertainty and cognitive dissonance and work harder to avoid it.29 Sensitivity to the two liberal moral intuitions, care and fairness, is correlated with larger volumes in the PFC, while sensitivity to conservative moral intuitions, deference to authority, in-group loyalty, and purity/sanctity, is correlated with larger volumes in the emotive limbic system.30 When the influence of the PFC over the amygdala is reduced by alcohol or other cognitive burdens, people express more racial bias31 and conservative opinions.32 They become more conservative and morally judgmental when the amygdala’s disgust response is triggered by bad odors or the feeling of stickiness.33, 34

Liberal Virtues

How then can we understand liberal versus conservative ideas of virtue? As Haidt and his colleagues recently observed, the intuitive style of thought favored by conservatives is the human default style, while the analytical style of thought more common among liberals has to be learned.35 Liberals and conservatives don’t actually differ in their moral intuitions about authority, in-group loyalty, and sacred values. Both liberals and conservatives have prefrontal cortices that have been taught Enlightenment values and amygdalas pinging them with disgust and alarm reactions. Rather, their differences emerge because the prefrontal cortices of liberals filter out the signals from the amygdala more successfully than those of conservatives do. When liberals feel impulses for deference to authority and hierarchy, they are checked by reminders of the importance of equality and the questioning of authority. When liberals feel uneasy about out-groups or impulses to favor their own kind, they are checked by reminders of the importance of tolerance and universalism.36 When liberals feel revulsion about the breaking of taboos, such as seeing two men kiss, the feelings are checked by reminders that “They aren’t hurting anyone.”

The real difference between liberal virtue and conservative virtue then is why and how the two tribes come to moral conclusions. Conservatives believe that moral intuitions are self-justifying. Liberals believe that reason needs to interrogate our intuitions. This leads liberals to be more tolerant and humble in their moral and political claims, a cautiousness and diffidence that conservatives interpret as weakness and uncertainty.

Intelligence, Personality, and Ideology

In 2010, the evolutionary psychologist Satoshi Kanazawa published an article provocatively titled “Why Liberals and Atheists Are More Intelligent.” Kanazawa reviewed the large body of evidence that correlates intelligence with atheism37 and political liberalism38, 39 and proposed the “Savanna-IQ Interaction Hypothesis.”40 The theory starts with the observation that human brains first evolved in the African savanna between 2.5 million and 130,000 years ago. Then, as we faced environmental challenges and started migrating around the globe, we had to evolve new cognitive abilities to deal with novel situations. This flexible form of learning and problem-solving is the basis of general intelligence, which then allowed us to invent tools, agriculture, and civilization. Individuals and groups with more of this ability are more open to novel experiences, more tolerant of ambiguity and complexity, and more open to novel ways of thinking, such as atheism and liberalism.

Earlier, I reviewed how the personality trait of openness to novelty is partly genetic and correlated with intelligence. It is also correlated with political liberalism.41 Across more than seventy studies of personality and politics reviewed by Sibley and Duckitt, people who scored higher on openness to experience were less right-wing, racially prejudiced, and authoritarian.42 Just as the variations in serotonin genes may partly explain why some populations are happier, geographic variations in the genetic settings for personality may be influencing the politics of countries and American states. Using personality data for six hundred thousand Americans, a group at the University of Illinois found that the liberalism of a state was strongly related to its citizens’ level of openness to experience.43

Liberals are not without their own cognitive biases, of course, and there are intelligent conservatives and stupid liberals. Both liberals and conservatives are prone to tune out information that doesn’t fit with their worldviews.44 But the biases of liberals and conservatives are not symmetrical. The psychological factors that tend toward liberalism undercut cognitive bias in ways that conservative psychology does not. Liberals are far more invested in the project of a deliberative democracy guided by science and rational discussion.

Haidt has drawn a very different conclusion from the differences between liberals and conservatives, however. To Haidt, liberals are deaf to important conservative moral intuitions that they should work harder to appreciate. This is a version of the “naturalistic fallacy,” the idea that something is right because it exists. The root of liberal deafness to conservative moral intuitions is not that liberals lack a cognitive faculty but that, in general, they are better at exercising their cognitive faculties.

The enhancement of fairness, moral reasoning, and the “liberal virtues” is, therefore, part of the larger project of cognitive enhancement, focused on becoming increasingly aware of and independent of one’s own cognitive biases.

Fairness is a liberal virtue rooted in instinctive aversion to cheating and inequality but then filtered through prefrontal cognition. Since the spread of Enlightenment values, fairness has grown in importance as a virtue, especially for liberals with stronger prefrontal cortices and weaker amygdalas. Fairness finds less support among conservatives, for whom respect for authority, in-group loyalty, and disgust/sanctity are more neurologically salient. What impact do social policy and individual practices have on the influence of fairness and cognitive biases?

Building a Fairer Society

Education. Much of the spread of the liberal virtues of tolerance, antiauthoritarianism, egalitarianism, and secularism can be attributed to rising levels of education, which both spreads those norms and strengthens the prefrontal cognitive faculties and habits of reflection that enable them. For instance, educational level is the strongest predictor of Americans’ tolerance of sexual and racial minorities and general liberalism45, 46 and of Europeans’ acceptance of immigrants.47 Education is also a predictor of endorsement of fairness and caring moral intuitions. In an analysis of almost sixty thousand people who had taken the Haidt et al. Moral Foundations Questionnaire, Leeuwen, Koenig, Graham, and Park found that people with more education were more likely to endorse the caring and fairness moral intuitions.48

Class and social equality. The structure of society, and our position within it, also has a powerful effect on the way we view morality and fairness. People not only become more tolerant as they are exposed to higher education but also as they become more financially secure49 in more equal societies.50 Citizens of more equal societies generally are also more supportive of redistributive policies; acceptance of social inequality is both a cause and an effect of actual social inequality.51, 52 On the other hand, the affluent—influenced by their vested interest in society—are generally less supportive of egalitarian redistribution than the poor.53

So the natural political polarization on class lines is between an egalitarian but racial-nationalist, moralistic, and authoritarian working class and the tolerant and cosmopolitan but inegalitarian middle and upper classes.54 There is less of this moral polarization in more equal countries, however; the relatively equal Finns and Danes have a higher moral consensus around the importance of an equal and tolerant society than the relatively unequal Britons and Swiss.55 In other words, social inequality and social class distort the impact of liberal virtues on moral cognition, especially by weakening the egalitarian moral intuitions of educated and affluent cosmopolitans, while liberal virtues are expressed more consistently and broadly in more equal societies.

Training to Reduce Implicit Racial Bias

One especially timely application of fairness enhancement is the attempt to reduce implicit racial biases in policing, spurred by the disproportionate killing of black men by American police. But evidence for the ubiquity of unconscious biases about race, gender, and all kinds of things has been accumulating for sixty years, since a study of racialized attitudes toward dolls helped convince the U.S. Supreme Court to decide the desegregation case Brown v. Board of Education. The most common tool used to test for implicit racial bias today is the Harvard Implicit Association Test (IAT). The IAT asks subjects to rapidly match positive and negative words on a computer screen with white or black faces. More rapidly associating positive words with white faces and negative words with black faces than the reverse pairings is a measure of unconscious racial bias, which is often at odds with the professed values of the subject. The test finds unconscious negative associations with black faces in both white and black subjects.
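The IAT’s latency-based logic can be illustrated with a simplified scoring sketch. The real test uses a more elaborate scoring algorithm; the latencies and function name below are invented for illustration:

```python
from statistics import mean, stdev

def iat_d_score(congruent_ms, incongruent_ms):
    """Simplified IAT-style score: the mean latency difference between
    stereotype-inconsistent and stereotype-consistent pairings, divided
    by the pooled standard deviation. Larger positive values indicate a
    stronger implicit association."""
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# Invented per-trial latencies in milliseconds:
consistent = [650, 700, 680, 720]    # e.g., white faces + positive words
inconsistent = [820, 860, 790, 845]  # e.g., black faces + positive words
print(round(iat_d_score(consistent, inconsistent), 2))  # -> 1.75
```

The key idea the sketch captures is that the bias measure is a difference in reaction speed, not a self-report, which is why it can diverge so sharply from a subject’s professed values.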

Many strategies for reducing biases have been attempted, but only now are we systematically evaluating their efficacy. As with the rethinking of psychotherapeutic approaches to trauma, which has discovered that some forms of talk therapy reinforce rather than dampen trauma, research on antiracism programs has found that some can actually cause resentment and reinforce racial antagonism.56 Some of the most effective interventions turn out not to be discussions of racism or the importance of fairness but rather exercises that build positive associations with stigmatized groups, such as reading about the heroism of black soldiers, using a black avatar in a video game,57 or imagining oneself being rescued by a black firefighter.58 Loving-kindness meditation, which explicitly works on associating positive emotions with people one doesn’t like, has also been found to be effective in reducing implicit racial bias. In one controlled trial that compared whites randomly assigned to practice loving-kindness meditation, talk about loving-kindness, or do nothing, the loving-kindness meditators saw significant declines in implicit racism.59

These methods work by changing the emotional valence of the stigmas bubbling up from the amygdala. Another approach, however, is to slow down those reactions and give the prefrontal cortex a chance to intercept and reject the biases. There is evidence, in fact, that people with stronger executive function exhibit less implicit bias.60, 61 By shining a light of awareness on our biased sentiments, we can develop our moral muscles.62

One study that demonstrated the effects of bias awareness looked at the calls made by National Basketball Association (NBA) referees before and after a major report on referee racial bias was published. The report showed that referees were more likely to call personal fouls against basketball players who were of a different race than the referee.63 The report was released in May 2007 and received a lot of attention in basketball circles. When researchers looked for the same patterns after the report had been published, however, the patterns had disappeared.64 The referees, along with society, had examined their behavior and overcome their unconscious biases.

Practicing Mindfulness of Biases

Would the same effect have been achieved if only the referees had become aware of their racial biases and not society as well? One suggestive study found that bilingual people make more utilitarian decisions in the trolley dilemma when they use their less-used language; having to think harder slows down the instinctive reaction of the amygdala to reject pushing the fat man.65 There is also accumulating evidence that mindfulness meditation can dampen biases, such as ageism and racism,66 change political cognition, and actually shrink the size of the amygdala.67

In 2013, geneticist James Fowler and some colleagues recruited 139 people for an experiment on the effect of mindfulness on political opinion. The participants were told they would be shown some disgusting images and assigned to one of three groups. The first group was given this instruction to mindfully reappraise feelings of disgust:

As you view the images, please try to adopt a detached and unemotional attitude. Or, you could think about the positive aspect of what you are seeing. Please try to think about what you are seeing objectively, watch all images carefully, but please try to think about what you are seeing in such a way that you feel less negative emotion.

A second group was instructed to suppress feelings of disgust:

As you view the images, if you have any feelings, please try your best not to let those feelings show. Watch all images carefully, but try to behave so that someone watching you would not know that you are feeling anything at all.

The third group was given no instruction. Then the three groups were shown images of things such as cockroaches and dirty toilets and asked to fill out the Moral Foundations Questionnaire that Haidt developed to test moral intuitions. The mindful reappraisal group was significantly less disgusted and was significantly less likely to express moral purity concerns on the Moral Foundations questions.

Next, the researchers recruited 119 people and first asked them to answer political questions. The researchers then wired the participants to track their heart rates and asked them a series of questions to measure their sensitivity to disgust, such as whether they would touch a dead body. The subjects were then randomly assigned to the three groups—reappraisal, suppression, and no instruction—and shown disgusting images. The mindful reappraisers’ heart rates did not respond to the images, while those of the other two groups did. Then they were tested on moral intuitions and policy views. Disgust-prone subjects remained more conservative in the suppression and no-instruction groups. But for the mindful reappraisers, disgust sensitivity was no longer related to adopting morally and politically conservative views.

Fairness Reminders and Ethical Assistance Software

In a sense, we have used exocortical aids to improve moral decision-making since the beginning of civilization, in the form of amulets, tattoos, clothing, and haircuts designed to remind us and our community of moral commitments. Today the moral exocortex has expanded to include “What Would Jesus Do?” bracelets and electronic Bible and Qur’an apps. But many secular digital aids are also emerging. The New York State Bar Association, for instance, has created an app that gives users access to more than nine hundred decisions of its Professional Ethics Committee on issues confronting judges and attorneys. The Moral Compass app provides a flowchart of moral decision-making questions, and the SeeSaw app allows users to query other users about which action they should take in a situation.

Secular ethics assistants will also likely emerge from the efforts to design “moral machines” and ethical artificial intelligence (AI). Some of this work is being done in order to provide onboard rules of engagement for autonomous battlefield robots, but moral decision-making applications are also being considered for robots in many occupations, including industry, transportation, and medicine. Should your autonomous car drive you into the river to avoid killing five others?68 How should a robotic home caregiver react when a demented patient refuses to bathe, eat, or take medication?69 The effort to codify and balance all the factual and value considerations involved in messy, human moral decision-making will be very complicated and will result in multiple possible morality settings, since there is wide moral variability among humans. As Wendell Wallach and Colin Allen have argued, the full replication of recognizable human moral decision-making in machines will probably require both human-level cognitive abilities and the program of character development and moral reasoning that produces mature morality in humans.

Eventually, as these morality AIs become more sophisticated and woven into our environment and exocortices and then tied directly to our brains, they will become a seamless part of our own cognition, allowing us to choose consciously to achieve levels of moral consistency that are currently impossible for most.70

But what if our inner AI angel reminders aren’t as loud as the persistent voices of our hindbrain devils? Are there ways that we can affect the way our brains work to strengthen the hand of fairness and moral cognition?

Notes

  1. D. Proctor, S. F. Brosnan, and F. B. M. de Waal, “How Fairly Do Chimpanzees Play the Ultimatum Game?” Communicative & Integrative Biology 6, No. 3 (2013): e23819.
  2. D. Proctor, R. A. Williamson, F. B. M. de Waal, and S. F. Brosnan, “Chimpanzees Play the Ultimatum Game,” PNAS 110, No. 6 (2013): 2070–2075.
  3. S. Sloane, R. Baillargeon, and D. Premack, “Do Infants Have a Sense of Fairness?” Psychological Science 23, No. 2 (2012): 196–204.
  4. P. Bloom, Just Babies: The Origins of Good and Evil (New York: Crown, 2013).
  5. ———, “Did God Make These Babies Moral?” New Republic, 2014; available at http://www.newrepublic.com/article/116200/moral-design-latest-form-intelligent-design-its-wrong.
  6. E. J. Eaves, H. J. Eysenck, and N. G. Martin, Genes, Culture and Personality: An Empirical Approach (San Diego, Calif.: Academic Press, 1989).
  7. M. J. Brandt and G. A. Wetherell, “What Attitudes Are Moral Attitudes? The Case of Attitude Heritability,” Social Psychological and Personality Science 3 (2012): 172–79.
  8. R. Iyer, S. Koleva, J. Graham, P. Ditto, and J. Haidt, “Understanding Libertarian Morality: The Psychological Dispositions of Self-Identified Libertarians,” PLOS ONE 7, No. 8 (2012): e42366.
  9. S. P. Koleva, J. Graham, R. Iyer, P. H. Ditto, and J. Haidt, “Tracing the Threads: How Five Moral Concerns (Especially Purity) Help Explain Culture War Attitudes,” Journal of Research in Personality 46 (2012): 184–94.
  10. T. Talhelm et al., “Liberals Think More Analytically (More ‘WEIRD’) Than Conservatives,” Personality and Social Psychology Bulletin 41, No. 2 (2015): 250–67.
  11. R. McDermott, D. Tingley, and P. K. Hatemi, “Assortative Mating on Ideology Could Operate Through Olfactory Cues,” American Journal of Political Science, May 2014, 1–9.
  12. P. K. Hatemi et al., “Genetic Influences on Political Ideologies: Twin Analyses of 19 Measures of Political Ideologies from Five Democracies and Genome-Wide Findings from Three Populations,” Behavioral Genetics 44 (2014): 282–94.
  13. J. J. Thomson, “The Trolley Problem,” Yale Law Journal 94 (1985): 1395–1415.
  14. J. D. Greene, R. B. Sommerville, L. E. Nystrom, J. M. Darley, and J. D. Cohen, “An fMRI Investigation of Emotional Engagement in Moral Judgment,” Science 293 (2001): 2105–2108.
  15. A. Shenhav and J. D. Greene, “Integrative Moral Judgment: Dissociating the Roles of the Amygdala and Ventromedial Prefrontal Cortex,” Journal of Neuroscience 34, No. 13 (2014): 4741–4749.
  16. G. Sevinc and R. N. Spreng, “Contextual and Perceptual Brain Processes Underlying Moral Cognition: A Quantitative Meta-Analysis of Moral Reasoning and Moral Emotions,” PLOS ONE 9, No. 2 (2014): e87427.
  17. O. K. Olsen, S. Pallesen, and J. Eid, “The Impact of Partial Sleep Deprivation on Moral Reasoning in Military Officers,” Sleep 33, No. 8 (2010): 1086–1090.
  18. K. Starcke, A. C. Ludwig, and M. Brand, “Anticipatory Stress Interferes with Utilitarian Moral Judgment,” Judgment and Decision Making 7, No. 1 (2012): 61–68.
  19. B. Trémolière et al., “Mortality Salience and Morality: Thinking about Death Makes People Less Utilitarian,” Cognition 124, No. 3 (2012): 379–84.
  20. P. Conway and B. Gawronski, “Deontological and Utilitarian Inclinations in Moral Decision Making: A Process Dissociation Approach,” Journal of Personality and Social Psychology 104, No. 2 (2013): 216–35.
  21. J. Greene et al., “Cognitive Load Selectively Interferes with Utilitarian Moral Judgment,” Cognition 107, No. 3 (2008): 1144–1154.
  22. R. S. Suter and R. Hertwig, “Time and Moral Judgment,” Cognition 119, No. 3 (2011): 454–58.
  23. D. Schreiber et al., “Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans,” PLOS ONE 8, No. 2 (2013): e52970.
  24. D. R. Oxley et al., “Political Attitudes Vary with Physiological Traits,” Science 321 (2008): 1667–1670.
  25. M. D. Dodd et al., “The Left Rolls with the Good; The Right Confronts the Bad: Physiology and Cognition in Politics,” Philosophical Transactions of the Royal Society B: Biological Sciences 367, No. 1589 (2012): 640–49.
  26. K. B. Smith et al., “Disgust Sensitivity and the Neurophysiology of Left-Right Political Orientations,” PLOS ONE 6, No. 10 (2011): e25552.
  27. R. Kanai, “Political Orientations Are Correlated with Brain Structure in Young Adults,” Current Biology 21, No. 8 (2011): 677–80.
  28. E. G. Helzer and D. A. Pizarro, “Dirty Liberals! Reminders of Physical Cleanliness Influence Moral and Political Attitudes,” Psychological Science 22, No. 4 (2011): 517–22.
  29. H. H. Nam, J. T. Jost, and J. J. Van Bavel, “‘Not for All the Tea in China!’ Political Ideology and the Avoidance of Dissonance-Arousing Situations,” PLOS ONE 8, No. 4 (2013): e59837.
  30. G. J. Lewis, R. Kanai, T.C. Bates, and G. Rees, “Moral Values Are Associated with Individual Differences in Regional Brain Volume,” Journal of Cognitive Neuroscience 24, No. 8 (2012): 1657–1663.
  31. B. D. Bartholow, C. L. Dickter, and M. A. Sestir, “Stereotype Activation and Control of Race Bias: Cognitive Control of Inhibition and Its Impairment by Alcohol,” Journal of Personality and Social Psychology 90, No. 2 (2006): 272–287.
  32. S. Eidelman et al., “Low-Effort Thought Promotes Political Conservatism,” Personality and Social Psychology Bulletin 38, No. 6 (2011): 808–20.
  33. S. Schnall, J. Benton, and S. Harvey, “With a Clean Conscience: Cleanliness Reduces the Severity of Moral Judgments,” Psychological Science 2008.
  34. T. G. Adams, P. A. Stewart, and J. C. Blanchar, “Disgust and the Politics of Sex: Exposure to a Disgusting Odorant Increases Politically Conservative Views on Sex and Decreases Support for Gay Marriage,” PLOS ONE 9, No. 5 (2014): e95572.
  35. T. Talhelm, J. Haidt, S. Oishi, X. Zhang, F. F. Miao, and S. Chen, “Liberals Think More Analytically (More ‘WEIRD’) Than Conservatives,” Personality and Social Psychology Bulletin 41, No. 2 (2014): 250.
  36. D. M. Amodio, P.G. Devine, and E. Harmon-Jones, “Individual Differences in the Regulation of Intergroup Bias: The Role of Conflict Monitoring and Neural Signals for Control,” Journal of Personality and Social Psychology 94 (2008): 60–74.
  37. M. Zuckerman, J. Silberman, and J. A. Hall, “The Relation Between Intelligence and Religiosity: A Meta-Analysis and Some Proposed Explanations,” Personality and Social Psychology Review 17, No. 4 (2013): 325–354.
  38. G. Hodson and M. A. Busseri, “Bright Minds and Dark Attitudes: Lower Cognitive Ability Predicts Greater Prejudice through Right-Wing Ideology and Low Intergroup Contact,” Psychological Science 23, No. 2 (2012): 187–95.
  39. N. Carl, “Verbal Intelligence Is Correlated with Socially and Economically Liberal Beliefs,” Intelligence 44 (2014): 142–48.
  40. S. Kanazawa, “Why Liberals and Atheists Are More Intelligent,” Social Psychology Quarterly 73 (2010): 33–57.
  41. B. Verhulst, L. J. Eaves, and P. K. Hatemi, “Correlation Not Causation: The Relationship between Personality Traits and Political Ideologies,” American Journal of Political Science 56, No. 1 (2012): 34–51.
  42. C. G. Sibley and J. Duckitt, “Personality and Prejudice: A Meta-Analysis and Theoretical Review,” Personality and Social Psychology Review 12 (2008): 248–79.
  43. J. J. Mondak and D. Canache, “Personality and Political Culture in the American States,” Political Research Quarterly 67, No. 1 (2014): 26–41.
  44. E. C. Nisbet, K. E. Cooper, and R. K. Garrett, “The Partisan Brain: How Dissonant Science Messages Lead Conservatives and Liberals to (Dis)Trust Science,” Annals of the American Academy of Political and Social Science 658, No. 1 (2015): 36–66.
  45. M. J. Kozloski, “Homosexual Moral Acceptance and Social Tolerance: Are the Effects of Education Changing?” Journal of Homosexuality 57 (2010): 1370–1383.
  46. J. A. Davis, “A Generation of Attitude Trends Among US Householders as Measured in the NORC General Social Survey 1972–2010,” Social Science Research 42 (2013): 571–83.
  47. F. Borgonovi, “The Relationship between Education and Levels of Trust and Tolerance in Europe,” British Journal of Sociology 63, No. 1 (2012): 146–67.
  48. F. Van Leeuwen, B. L. Koenig, J. Graham, and J. H. Park, “Moral Concerns across the United States: Associations with Life-History Variables, Pathogen Prevalence, Urbanization, Cognitive Ability, and Social Class.” Evolution and Human Behavior 35 (2014): 464–71.
  49. H. Carvacho et al., “On the Relation between Social Class and Prejudice: The Roles of Education, Income, and Ideological Attitudes,” European Journal of Social Psychology 43 (2013): 272–85.
  50. S. Milligan, “Economic Inequality, Poverty, and Tolerance: Evidence from 22 Countries,” Comparative Sociology 11, No. 4 (2012): 594–619.
  51. W. R. Kerr, “Income Inequality and Social Preferences for Redistribution and Compensation Differentials,” Journal of Monetary Economics 66 (2014): 62–78.
  52. K. S. Trump, “The Status Quo and Perceptions of Fairness: How Income Inequality Influences Public Opinion,” dissertation submitted to the Harvard University Faculty of Arts and Sciences, 2012.
  53. R. Andersen and M. Yaish, “Public Opinion on Income Inequality in 20 Democracies: The Enduring Impact of Social Class and Economic Inequality,” AIAS GINI Discussion Paper 48, 2012.
  54. P. Flavin, “Differences in Income, Policy Preferences, and Priorities in American Public Opinion,” Paper presented at the annual meeting of the Midwest Political Science Association 67th Annual National Conference, 2009; available at http://citation.allacademic.com/meta/p362665_index.htm.
  55. J. Kulin and S. Svallfors, “Class, Values, and Attitudes Towards Redistribution: A European Comparison,” European Sociological Review 29, No. 2 (2013): 155–167.
  56. C. A. Moss-Racusin et al., “Scientific Diversity Interventions,” Science 343, No. 7 (2014): 615–16.
  57. T. C. Peck et al., “Putting Yourself in the Skin of a Black Avatar Reduces Implicit Racial Bias,” Consciousness and Cognition 22, No. 3 (2013): 779–787.
  58. C. K. Lai et al., “Reducing Implicit Racial Preferences: I. A Comparative Investigation of 17 Interventions,” Journal of Experimental Psychology: General 143, No. 4 (2014): 1765–1785.
  59. Y. Kang, J. R. Gray, and J. F. Dovidio, “The Nondiscriminating Heart: Lovingkindness Meditation Training Decreases Implicit Intergroup Bias,” Journal of Experimental Psychology: General 143, No. 3 (2014): 1306–1313.
  60. B. J. Diamond et al., “Implicit Bias, Executive Control and Information Processing Speed,” Journal of Cognition and Culture 12, No. 3–4 (2012): 183–193.
  61. T. A. Ito et al., “Toward a Comprehensive Understanding of Executive Cognitive Function in Implicit Racial Bias,” Journal of Personality and Social Psychology 108, No. 2 (2015): 187–218.
  62. C. A. Fitzgerald, “A Neglected Aspect of Conscience: Awareness of Implicit Attitudes,” Bioethics 28, No. 1 (2014).
  63. J. Price and J. Wolfers, “Racial Discrimination Among NBA Referees,” The Quarterly Journal of Economics 125, No. 4 (2010): 1859–1887.
  64. D. G. Pope, J. Price, and J. Wolfers, “Awareness Reduces Racial Bias,” NBER Working Paper, 2013; available at http://www.nber.org/papers/w19765.
  65. A. Costa et al., “Your Morals Depend on Language,” PLOS ONE 9, No. 4 (2014): e94842.
  66. A. Lueke and B. Gibson, “Mindfulness Meditation Reduces Implicit Age and Race Bias: The Role of Reduced Automaticity of Responding,” Social Psychological and Personality Science 2014: 1–8.
  67. B. K. Holzel et al., “Stress Reduction Correlates with Structural Changes in the Amygdala,” SCAN 5 (2010): 11–17.
  68. N. J. Goodall, “Machine Ethics and Automated Vehicles,” in Road Vehicle Automation, eds. G. Meyer and S. Beiker (New York: Springer, 2014), 93–102.
  69. P. Lin, K. Abney, and G. A. Bekey, Robot Ethics: The Ethical and Social Implications of Robotics (Cambridge, Mass.: MIT Press, 2011).
  70. J. Savulescu and H. Maslen, “Moral Enhancement and Artificial Intelligence: Moral AI?” in Beyond Artificial Intelligence, eds. J. Romportl et al. (New York: Springer, 2015), 79–95.

James Hughes

James J. Hughes is the founder and executive director of the Institute for Ethics and Emerging Technologies. He teaches health policy at Trinity College and is the author of Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future (Hachette, 2004).


