Can We Make More Moral Brains?

John Shook

Improving the brain’s cognitive performance is the next great frontier for not just the brain sciences but also the wider field of medical therapy. As soon as some fresh discovery about the brain’s functioning is announced, there are novel proposals for modifying and enhancing that brain process. Therapies that repair poorly functioning brains are treatments that often have the power to enhance those same functions to levels above normal. Is there anything about us—and who we really are—that cannot be improved? Modifying our brains is the modification of the deepest, most personal core of ourselves. What if you could still be you, only better?

Cognitive enhancers have already been invented and experimentally implemented. Drugs for slowing memory loss in patients suffering from degenerative diseases can also be drugs for increasing memory retention in healthy people. Helping people suffering from attention deficit disorders soon transfers into boosting other people’s length of focused attention to extraordinary degrees. From the perspective of pharmacology, the only question is how much modification to cognitive functioning is desired. The administration of a treatment to one patient receives the label of a “medical therapy,” while the same treatment administered to another patient is instead called a “pharmaceutical enhancement.” It all depends on the level of brain functioning with which one is starting.

Strictly speaking, all the brain does is “cognition”; the brain sciences confirm how the entire brain is involved with some kind of cognition in its broadest sense. Significant changes to any major brain process will impact the cognitive functioning of the rest. Highly specialized treatments try to narrowly modify just one or another aspect of the brain’s work, perhaps staying below the conscious threshold without noticeably affecting our conscious awareness. Pharmacological treatments are just one kind of inventive technology permitting brain/mind modification. Refined surgical techniques, genetic modifications for influencing brain development, nanotechnologies changing individual neurons, or even cybernetic fusions of brains with computers are all possibilities closer in the future than we may expect. There may be very little about our brains and about our conscious selves that will remain beyond the reach of these radical technologies by this century’s end. Think about it: who are you, really? And who would you like to become?


Improving features of intelligence—from better memory and attention to faster thinking and quicker problem-solving—is without question a desirable enhancement. But we are so much more than just our wits. When we think about who we are essentially, other aspects of ourselves are important, such as our strongest values, our highest priorities, and our deepest relationships with others. Our values, our priorities, and our relationships make us who we are just as much as anything else about us. Arriving at the realm of morality, where the brain controls our caring and conduct, there is a similar opportunity to turn the dials, pull the levers, and change how we behave morally. This is the area of moral enhancement.

Do we want more moral brains? This simple question hides at least three specific questions. First, do we want there to be more brains that possess expected levels of moral capacity? This is a question that involves therapy. Too many brains lack sufficient moral capacity; this is where psychology and psychiatry apply categories such as “psychopath,” “sociopath,” “social deviant,” and the like. Besides the other character shortcomings of persons afflicted by one of these conditions, we notice how they fail some of our basic moral expectations. It is only natural to want more normally moral brains, because we understandably want all people’s behavior to rise to regular moral expectations. Since therapies are treatments that seek to restore normal functioning, we can ask for “moral therapies.”

From a different angle, a second question can be posed: do we want brains to be more moral? Fulfilling minimally expected levels of moral conduct is one thing; elevating one’s moral performance is another. We like to take notice when people not only “do the right thing” but additionally “go above and beyond.” From the especially nice person who is more helpful than required, or the person who could have easily taken advantage of another’s weaker situation but decided against causing harm, to the heroic figure who is courageously self-sacrificing to save another, there are ways for society to applaud and recognize extraordinary moral conduct. Anyone can be heard to say, “I wish there were more people like that.” Maybe you yourself have thought, “I wish I was more like that.” Perhaps some of us will get that chance.

From the widest angle, wanting more moral brains may mean a third thing: expressing the wish that all of humanity would permanently elevate its moral capacity to some far higher level. Not only religions but secular philosophies, too, have occasionally harbored idealistic hopes for this species. Some say that we could already be ethical saints if not for the corrupting influences of a misguided society. Others say that while we have a mixed moral nature, great effort and training can ensure that our better selves prevail. Whatever the speculative view of the raw human material (and the ever-proliferating training exercises) that ethics has to work with, perhaps brain science and moral enhancement technologies will render all that experimentation completely irrelevant. Do we want to know what the brain is really doing? Ask the brain scientists. Do we want to know how to make more moral people? Ask the moral enhancement specialists. Already self-styled “transhumanists” are picturing a future kind of society full of such caring and cooperative people that you’d think you were in heaven—or a utopian “heaven on earth,” one might secularly say.

Discussion of not just cognitive enhancement but moral enhancement as well is becoming common enough in academic journals that a wider audience is starting to notice. When academics or journalists throw around the term moral enhancement, it can sound like everyone agrees what that could mean. And maybe there is a common enough sense for everyone to grasp it. Don’t we all mean pretty much the same thing by “morality” and by “being more moral”?

If we all had the same meaning in mind when we say “That’s moral” or “See, that’s the right thing to do,” then I suppose the great work of religion or philosophy might be completed. Yet there’s no real sign that any one religion or philosophy, or even one cultural tradition, has gotten everyone to the point where we agree on what is moral and what isn’t. Sure, there’s broad enough agreement on the moral basics; travel around the world and you will see foundational habits of moral conduct instilled in children that scarcely vary. Put another way, what counts as antisocial and grossly immoral conduct—what no parent would want to see a child doing, much less an adult—is roughly similar across cultures. Sure, it’s easy to focus on highly specific instances of cultural rituals or practices carried out by adults that are viewed as repulsive and immoral by other cultures. Looking deeper, humanity isn’t really all that diverse in its elemental habits regarding respectful, loyal, and fair social relationships. Yes, visitors to a very different culture from their own must learn speedily how to display a particular virtue, but there’s no confusion about the expected virtue itself.

But we are still talking about quite basic kinds of simple virtues upon which all societies rely. That doesn’t mean that all of morality is a matter of common agreement around the world; many moral disagreements obviously divide us. The fact that we can get into moral disagreements is proof of humanity’s general sense of morality and fairness; that we can’t easily resolve specific moral disagreements is likewise proof that not all of morality is settled among us.


We all agree that morality comes in degrees, and that we often wish that others, and ourselves as well, could be more moral on occasion. Let’s move beyond the thirst for more moral conduct and more moral brains to the deeper question about how such desires could possibly be satisfied. What would a moral enhancer specifically do? There’s no obvious answer to that simple question, even after we agree that more morality would be great and that everyone should at least fulfill minimally expected levels of moral conduct. When we point farther in the direction of “above-normal morality,” where are we pointing? Are we all pointing in the same direction?

It all depends on the specific ethical view one has in mind. Let’s consider an example. Suppose you think that no one is making moral choices unless they are using “free will,” and you also believe that this free will is such an unnatural and uncaused thing that no modifications to the brain could possibly increase it. If that’s your ethical view about morality, then you will deny that a drug targeting a brain process to make a person more moral could truly be a moral enhancer. At most you’d be prepared to admit that perhaps that drug has hit upon a “moral degrader,” some other brain process that clouds or obstructs what the morally good free will would do. Sure, the drug permits a person to do more moral things, but that’s only because the untouched moral part of the person is at greater liberty to do good. But for you, there will never be, by definition, any possible physical modification to the source of moral choices, so there could be “moral-behavior enhancers” but never “moral enhancers” in the narrower ethical sense. Since the brain sciences and technologies will never be dealing with any such nonexistent thing as an uncaused and nonnatural free will, we can drop that notion here. However, there are further senses of “morality” that similarly resist the very notion of a “moral enhancer.”

On first hearing, “moral enhancement” probably strikes most people as meaning something like “making a person more moral,” which specifically means something like “more likely to do the morally right thing.” When people consider the vague idea of moral enhancement in this general way, they are simply relying on whatever they already regard as moral; differences among their personal views on morality will readily become apparent. However, if a proposed moral enhancer actually resulted in no observable difference to a person’s moral conduct, that enhancer would be universally judged a failure. We are all practical-minded about morality, aren’t we? Well, maybe not all of us. For some people, a “moral enhancer” might be more like a mood enhancer that causes little outward behavioral modification but does cause a noticeable change to one’s inner mental qualities. “I feel so much more like a morally good person,” we could imagine a subject saying after a moral enhancement treatment. Perhaps being a moral person is mostly about one’s own personal sensitivity to the moral aspects of situations, or about one’s earnest inclination to want to do the right thing more often.

Besides moral mood enhancement, another sort of “moral” enhancement won’t turn out to be practical: the moral enhancer that works by placing one’s powers of rational command in greater control of decisions. Philosophers have dreamt of ethical utopias where reason rules. If only the rational center of our mind could always be in command! Indeed, some philosophers have gone so far as to argue that our reason always is pretty much in command. This is highly implausible, not only because those philosophers can’t explain why human behavior usually departs so widely from rationality but also because the brain sciences cannot find any neural basis for such an executive command center for rationality. There are portions of the cortex that participate in what we call deliberative and careful thinking. But a center for just pure reasoning? Pure fiction. And those cortical regions capable of making our conduct a little more careful, a little more logical, or a little more farsighted still have only limited regulatory participation, alongside so many other sensory and emotional centers. That’s a scientific account that more easily explains why our conduct is rational to only modest degrees. It is also an account that refutes an ethical theory expecting more or better morality to result from increasing one’s capacity for pure reason.

Will moral enhancement end up being just like pleasant mood enhancement or pure-reason enhancement? Probably not. Again, most of us are practical-minded people—we think good intentions are nice, but good conduct is what counts. Reports of changes to mood or inclination or logicality are not enough. The ultimate laboratory test of the efficacy of a genuine moral enhancement would be the observation of increased moral conduct, either in frequency or substance or both. Perhaps one would perform more altruistic acts and fewer selfish acts. Or perhaps there would be the same frequency of conformities to expected moral standards, the same raw number of good deeds, but one’s deeds would turn out to be more cooperative, helpful, and beneficial to others than before.

Of course, we have to remain within the realm of science: experimentally verifying changes to moral conduct is crucial, and those verifications presuppose some standard of moral conduct that is available to apply. People doing experimental checks would use some selected standard of what is to count as appropriately moral conduct. That standard may be taken from philosophy or religion, from cultural norms, or from one’s personal moral views. The experimenters could apply their selected standards, or they could simply choose to apply the moral standards of the subject. An experimental check on a moral enhancement’s efficacy would apply either an externalist or an internalist moral standard, where “internalist” means a standard that the subject of the enhancement already takes to be moral and “externalist” means a standard that the enhancer’s inventors take to be moral. Of course, an external standard may coincide with an internal standard, where subjects and experimenters agree on relevant moral standards to apply. But standards need not coincide, and the future of moral enhancement will confirm this.

A moral enhancer might work simply by increasing the frequency of conduct that the subject already believes is moral. This sort of “generic” moral enhancer doesn’t work by changing what a person believes is moral or by changing which motives actually determine a person’s conduct. It merely strengthens whatever moral intuitions, motives, or habits a person already has. For example, a vegetarian could be confident that a temptation to eat meat would be much less likely to overwhelm his moral intentions to be kind to animals. A banking executive could be confident that any temptation to be compassionate toward an employee would be much less likely to overwhelm his or her moral intentions to practice equal treatment. A soldier could be confident that any temptation to be merciful to another person would be much less likely to overwhelm his or her moral intentions to kill an enemy combatant. For a generic moral enhancer, experimenters can confirm efficacy by simply first ascertaining the subject’s own moral views.

If a moral enhancer accidentally altered what the subject believed to be morally right, experimenters would confirm this by showing the contrast between his or her moral beliefs as measured pre- and post-experiment. An experimental subject who was initially pro-choice might be shocked and dismayed after the enhancement was completed to hear his or her pro-choice views as recorded in a pre-experiment interview video. The experience of an experimental subject having deep moral beliefs turned on and off by some moral enhancement treatment could itself be deeply thought-provoking.

In contrast to any generic moral enhancer, there could be specific moral enhancers. Imagine if the members of a trial group that received an experimental moral enhancement stopped eating meat and began giving large amounts of money to animal-rights organizations. Was the treatment successful? Fine-tuning the specific kind of moral enhancement would be necessary in order to ensure that any effective moral treatment enhanced the kind of morality desired. A variety of specific moral enhancers might be possible—enhancers for compassionate giving to the poor, for scrupulous fairness toward only the deserving, or for prioritizing care for family and friends over strangers. Indeed, we could imagine the development of highly specific “boutique” moral enhancers that focus narrowly on some desired moral behavior. A teacher could desire to enhance pupils’ devotion to helping the immature and ignorant; a politician could wish to enhance his or her concern that large campaign donors receive every possible assistance.

Boutique moral enhancers may not impress most people as genuine cases of moral enhancement. However, by what moral standard should we judge efficacious moral enhancers? If we want to go beyond internal subjective moral standards, where shall we turn? Standout suggestions for alternatives arrive from cultural conventionalism and objectivism. Cultural conventionalism would suggest that the moral standards to apply should be those the culture as a whole generally takes to be morally right, while objectivism would suggest that the moral standards to apply should be the justifiably correct moral standards, regardless of what any culture or individual says. We may leave aside philosophical issues over how cultural conventionalism or objectivism can justify recommendations of moral standards. Out in the real-world market for consumers of moral enhancements, people will judge for themselves whether a proposed moral enhancer is actually efficacious. They all have a simple method for doing this: they will apply their own judgment of what is moral, regardless of what their culture says or what some philosopher says. If a drug manufacturer markets a moral enhancer by pointing out that your neighbors or some philosophers agree that it works, you won’t be too impressed unless your moral standards are getting confirmed in the process. Anyone could simply say, “Sure, they say it enhances morality, but it really doesn’t, according to what I know is right and wrong.” A future moral-enhancement market will confront a great deal of this sort of skepticism and relativism. How can moral-enhancement marketing surmount this initial problem? Initially, and perhaps for a long time, marketing will go around this problem by offering moral enhancements that conform to what the vast majority of people already think is moral. If you need many customers for your product, culture is king.

The standpoint of cultural conventionalism probably works best for designing moral enhancers intended to inhibit specific immoral conduct. For example, people who are easily aroused to violent behavior could benefit from moral enhancers targeting such things as respect for others or compassion for others’ suffering. Where a culture generally agrees that some kind of conduct is a severe moral transgression, such as a moral transgression that is also illegal, moral therapy could elevate moral conduct to something approaching normality. Objectivists may claim that a moral standard against unnecessary violence enjoys a status far more justified than just approval by some cultures, but in practice this philosophical difference is irrelevant. We would accept a moral treatment that conforms to our cultural standards of morality even if we think additionally that there is objective justification as well. It should not prove too difficult for cultural conventionalism to justify some moral therapies, especially if they successfully inhibit criminal conduct.

Cultural conventionalism won’t work as well for confirming proposed moral enhancements that elevate moral performance above normally expected levels. We may not really want what we say we want. For example, people widely approve of generosity. However, is there common agreement that normal generosity should be significantly elevated in our population? We may say that we wish people were more generous, but what we really mean is that other people should be more generous to us or that people should be more generous to others who deserve it. Generosity to rich people or malevolent people can seem misdirected. Conventionalism seems inadequate when trying to apply moral platitudes to moral enhancement, and objectivism won’t help either. Objectivism is useless for confirming proposed moral enhancements beyond normal levels. Either objectivism just largely agrees with what many cultures already believe, or objectivism can demand some moral standards that significantly deviate from widely shared cultural standards. If objectivism just delivers inflated cultural conventionalism, then cultural conventionalism’s limitations apply. If objectivism deviates from what most cultures believe, it renders itself impractical for testing the efficacy of moral enhancers because few people will accept objectivism’s proposed standards. If some alleged moral enhancement causes people to stop treating their friends any differently from strangers, Kantian objectivists may approve, but few others would. Of course, Kantian objectivists could seek their own boutique moral enhancers, but that returns our situation to a question of which special moral enhancer dominates the market.

Subjectivists, relativists, conventionalists, and the like could not object to the possibility of moral enhancers—but objectivists may. An objectivist could claim that since morality truly consists of certain rules or principles, any proposed moral enhancer that does not enhance conformity to those objective rules or principles is in fact not a moral enhancer at all because it cannot be enhancing morality, whatever else it may be enhancing. However, since the kinds of moral enhancers we are considering are designed to try to enhance what some people in the real world actually take to be moral already, moral objectivism is practically irrelevant to bringing moral enhancers to market.

Subjectivists and conventionalists would get into disagreements over boutique moral enhancers. Conventionalists would not be impressed by the more eccentric enhancers that seem remote from ordinary moral concerns or directly contradict common moral standards. If a person wants a moral enhancer that intensifies his or her devotion to following his or her favorite sports team to the point of fanaticism, are we still talking about a moral enhancer? What about a moral enhancer that enhances a religious person’s enthusiasm for abandoning family and job for a life of vagabond proselytizing?


Philosophical debates ensue quickly, but free-market economies will produce moral enhancement therapies based on conventional standards—and boutique moral enhancers based on subjective standards as well—without waiting for philosophical conclusions. Society might go ahead and demand—and enforce—a distinction between “genuine” moral enhancement and boutique moral modifiers, given that boutique modifications may not quite conform to what society expects of moral conduct.

This implies that the possibility of moral enhancement at present faces no deep philosophical or practical obstacles. So long as the field of moral enhancement does not permit itself to make implausible claims about the sort of morality enhanced or the degree of enhancement possible, then the future for the development of genuine moral enhancement remains open.

John Shook

John Shook is an associate editor of FREE INQUIRY and director of education and senior research fellow at the Center for Inquiry. He has authored and edited more than a dozen books, is coeditor of three philosophy journals, and travels for lectures and debates across the United States and around the world.
