On Ethics and Morality

David Tribe

Who would have thought in 1972 that “ethics” would ever become fashionable? That was when my Nucleoethics: Ethics in Modern Society was published in the same sociological series as Germaine Greer’s The Female Eunuch (1970).

Today, ethics is everywhere. The hitherto-obsolescent Hippocratic Oath for doctors has been joined by codes of ethics for advertisers, journalists, lawyers, accountants, and even financial planners; John Forge’s Responsible Scientist (2008) urges one for scientists. In almost every industry, big companies have ethics committees; there are “bioethics” features in every serious newspaper. But has this effusion of ethics led to any overflow of morality?

My own background is that of a Bible-believing Australian Puritan obsessed with “pure” thoughts and chastity. Now only the Puritan work ethic survives. Of course, scattered in both the moralistic Bible and the Puritan code are moral teachings of universal validity, but how does one detect them? My moral quest first produced “What Is Morality?” (The Plain View, Winter 1964): an “orthodox” mélange of humanist ethics and morality but with a hint of later heterodoxy. First the orthodoxy.

Secular humanists agree that religion cannot be the basis of morality. Differing moral codes—from religion to religion, sect to sect, and generation to generation—negate any universal moral law upheld by an eternal, unchanging deity. A more basic question is whether right conduct is simply what God decrees or whether God decrees it because it is right. The first alternative prompts two troubling conclusions: (1) humankind needs no moral insights, only obedience to God’s dictates; (2) as raised by Robert Stovold (Argus [U.K.], October 3, 2009), if human “sin” is merely disobeying God, there is no yardstick to distinguish between a heinous crime and a peccadillo. The second alternative also prompts two troubling conclusions: (1) God is subservient to the moral law and is thus superfluous morally; (2) humans must rely on the moral law to decide which gods are good and to be obeyed (Yahweh) and which are not (Satan). Further, how are they to know what the good god’s will really is—through a teaching church, ancient scriptures, or personal intuition? All of these communications are imperfect, especially regarding morality.

Practically, religions—especially Christianity—have moral dilemmas. Is motivation by fear of hell or hope of heaven moral at all or merely prudential? Morality is placed below faith, mediated through divine grace (Eph. 2:8–9). The extremity of this position is antinomianism, the view that Christians are freed from the moral law. So Martin Luther could say, “Pecca fortiter” (sin boldly). Those sects that practice confession and absolution can also sin boldly. Another regrettable aspect of Christianity is its frequent equation of sins of the mind (impure thoughts), which do not cause harm, with sins of the body, which may (Matt. 5:28). Another objectionable feature of the Judeo-Christian-Muslim moral purview is its obsession with sex, such that an “immoral” person is usually not taken to be one guilty of aggression, intolerance, untruthfulness, malice, greed, and exploitation but rather of forms of sexual activity or depiction of which religionists disapprove. Together with a messianic urge to proselytize, this obsession leads fundamentalists to extremes of torture and murder as practiced by their founders and the patriarchs of history, legend, or myth. Of course, most religionists today are not fundamentalists, but the best that can be said of them is that their conduct is no better than that of irreligionists from the same socioeconomic background.

In humanist circles in Britain, there was much talk during the 1960s and 1970s of altruism, individual morality, rationality based on science as the essence of morality, permissiveness, and an open society (oxymoron)—all unsupported by adequate definitions, or any definitions at all. Humanists wrote about determinism but spoke of free will. They endorsed philosophical materialism but eschewed mechanism or cited Arthur Koestler’s Ghost in the Machine (1967). Liberals of all backgrounds, including secular humanists, were strong on “rights” and giving “power” or “liberation” to hitherto-discounted segments of society (women, gays, blacks, students, and infants), but most said nothing about responsibilities. Despite its ongoing promotion of the theory of evolution and the work of ethologists like Desmond Morris, the U.K. humanist movement generally perpetuated an anthropocentric view of the world, often quoting Protagoras’s “Man is the measure of all things,” writing “god” and “Man,” and overlooking “animal liberation” (bioethicist Peter Singer, 1974).

Just as Darwinism (despite Charles Darwin’s recent apotheosis) was not the last word in evolution, neither was the above “orthodox” humanist position of 1964. Asked about the universe or humankind’s place in it, about natural law or moral law, a religionist could simply reply, “God did it.” It was clearly not enough for an unbeliever to say, “God did not do it.” Nor was it enough to add, “Morality arose through the situation of humankind as a social animal,” especially when almost all humanists then—and probably many today—asserted that morality was uniquely human and other social animals merely had “instincts.”

While presumptuously claiming to solve some fundamental philosophical problems, Nucleoethics freely concedes that we do not know, and are likely never to know, why there is something instead of nothing (see the writings of Polish philosopher Leszek Kolakowski) or why that something has the properties and potentialities it has. My book began with thirty-three definitions of some common words and several neologisms, in a vain hope of doing for ethics and morality what Mendeleyev’s periodic table had done for chemistry and Linnaeus’s natural systems for biology: that is, to clarify and systematize moral concepts. First, a distinction was made between ethics (“study or theory of the origin and nature of right and wrong”) and morality (“moral code or mores or moropractice or persomoropractice”). Thus Homo sapiens remains unique by formulating ethics, while morality can be attributed to other social species. “Moropractice” was defined as “following one’s own moral code. . . .”

By that I meant the individual’s “working” moral code, compounded of genetic inheritance (nature) and experiences in life, especially early life (nurture), and always somewhat different from a professed moral code in tune with public opinion. “Persomoropractice” refined moropractice. It meant the way an individual behaves in ordinary circumstances when not under the influence of “emergency ethics” (see Bernard Gert, Michael Walzer). Under emergency ethics, behavior is dictated by ideological frenzy, whipped up in times of war, crusades, jihads, pogroms, or revolutions, and by threats to oneself and one’s family.

Persomorality is to be distinguished from “individual morality,” promoted in the 1960s by psychologist and humanist James Hemming. His Individual Morality (1969) defines the “moral instinct” as a manifestation of “the transcendental and the spiritual” and stresses individuals’ “responsibilities to themselves to develop their individual uniqueness.” Not only is such quasi-religious language uncongenial to most humanists today, but the concept ignores age-old wisdom. The Jesuits famously declared, “Give me a child until he is seven and I will show you the man”; rationalist and utopian socialist Robert Owen proclaimed, “the Character of man is formed for him, and not by him” (New Moral World masthead). Thus, while every individual has a responsibility to learn from mistakes and respond to correction, society has a collective responsibility to provide an environment in which everyone can flourish.

There are ethical assumptions, and explicit or implicit moral codes or expectations flowing from them, within a number of ideologies: not only versions of religion and humanism, which are clearly committed to moral considerations, but also political philosophies, science, and individualism. All are seen as offering a blueprint for living. All worldviews have a moral core: that is, whatever the hidden agendas of their founders and promoters, they are presented as, and come to represent, a moral outlook justifying the allegiance of adherents. Those adherents, in turn, defect only when they come to doubt the ideology’s relevance to their societies or learn of—and are appalled by—the immorality of the worldview’s current figureheads. Unfortunately, the ethics of each ideology usually leads to moral requirements that, when viewed objectively, are either unacceptable or unattainable. Even when society benefits materially from the implementation of particular worldviews (all of which name their saints and martyrs), opponents can name their villains and oppressors; and the moral lives (moropractice) of ordinary people are not significantly affected at all.

What then are the factors that determine actual moropractice? Everyone is likely to concede the importance of parents and teachers, particularly during the vital first seven years identified by the Jesuits, and probably to recognize peer pressure on children and teenagers. But the background of each individual is unique, and civil society depends on a broad consensus concerning acceptable behavior. Factors I identified as producing this consensus are pragmatism, technology, admass (J.B. Priestley’s name for “the creation of the mass mind”), bureaucracy, and law, despite widespread contemporary demonization of most of them, though eternal vigilance is needed. Some practical questions and attempted answers follow.

How Does Morality Actually Work without God?

Collectively, morality works through the operation in each society of the factors just outlined, which in turn depend on contemporaneous knowledge. In McLuhan’s global village, dissemination of knowledge slowly leads to closer global approximation of moral codes and particularly of expected moropractice. While a Victorian “cult of progress” (my Words and Ideas, 2009) is no longer tenable, in Western societies life is inarguably better now than in past centuries for most people.

How Do Unbelievers Discover ‘the Good’?

“The good” is discovered through entirely natural processes accessible to everyone. Figures who have demonstrated exceptional wisdom or undertaken special “psychosocial” research (Julian Huxley’s term) may be called moral “authorities,” but reform is possible because their authority is not absolute and can be supplanted. Humanists regard “intuitions” as the natural emergence into consciousness of unconscious/preconscious ideas. Their value is problematic, and each needs rigorous testing.

What Is ‘Conscience’?

It is a natural thought process, a concern over personal moromalpractice (“breaking one’s own moral code”) or unrealized moropractice (deficient sympathy, empathy, or practical humanitarianism), usually played out obliquely in unremembered dreams rather than confronted consciously, and producing an adverse psychological response (abreaction). This is akin to neurological “negative feedback,” where an action causes a countervailing reaction, but it is less instantaneous and predictable. If change is to be more than a pious, superficial New Year’s resolution, some dramatic event is usually required to produce it. Moral education is an even slower and less predictable analogue of “positive feedback” (reinforcement), and if merely theoretical and moralistic, it can be counterproductive: most students tune out, and the school cheat or bully wins first prize.

Ancient beliefs and modern idioms center emotions in the heart, and a few physiologists even speak of a “cardiac brain,” but the real brain is “the thing that thinks” (Nucleoethics); its complex operations, though still imperfectly understood, are increasingly seen as mechanistic. Thus science aspires to be philosophical even while philosophy aspires to be scientific, seeking empirical evidence for conscience and a sense of guilt or shame. Most Anglo-Saxon humanists are now in the ambiguous position of having embraced empiricism from the beginning of the Enlightenment, yet still speaking like “armchair philosophers” of the Renaissance.

Does How We Think Influence What We Think?

Having rightly rejected the extremist 1920s behaviorism of J.B. Watson (who dismissed consciousness, sensation, imagery, perception, and will), as well as B.F. Skinner’s simplistic utopian behaviorism of the 1940s and his rejection of freedom and dignity in the 1970s, humanists have tended to overlook the importance of positive reinforcement and experimental psychology, as promoted by the behaviorists. While rightly rejecting the bogus diagnostic and therapeutic claims of psychoanalysis, they often undervalue its revelation of the importance of the unconscious in mental processes, including the development of morality. And they largely ignore the generally useful new insights of experimental pharmacology, neurophysiology, genetic engineering, robotics, and artificial intelligence research in understanding human evolution and its implications, exciting or excruciating, for the future.

In recent decades, a number of independent investigations of the way we think, and the consequences for ethics and morality, have been pursued and are converging in a discipline named “x-phi” (Joshua Knobe, Shaun Nichols), whose icon is an armchair in flames. X-phi has three components: (1) functional magnetic resonance imaging (fMRI); (2) questionnaires (“clipboard” documentation); (3) psychological experimentation.

The first was originally developed by Daniel Langleben for diagnostic purposes. The phenomena of percepts (appreciation of sensations) and concepts (translation of percepts into ideas) and the operation of mental triggers for observable behavior occur in different parts of the brain, which light up when functioning and exposed to electromagnetic radiation. The light patterns in healthy people are different from those with mental or neurological diseases or impairments. It was a short step from neuropathology to neurophysiology—the study of mental processes in healthy people—in an attempt to discover the sites of consciousness, free will (if it exists), memory, intelligence, personality, sexual orientation, and even beliefs and to resolve the conflict between rational and emotional reactions. Not everyone believes that these complex mental processes, and any moral conclusions that may be drawn from them, can be elucidated by brain scans; Raymond Tallis and Edward Vul have equated fMRI with phrenology.

The value of x-phi questionnaires has also been challenged (as by David Papineau). So have those used by psephologists, opinion pollsters, market researchers, and mid-twentieth-century linguistic analysts and promoters of “non-cognitivist meta-ethics,” the “logical study of the language of morals” (R.M. Hare, 1952). Do people respond accurately and honestly when confronted with questions about their use of moral language, a topic to which they have probably given scant attention? If not, are their answers likely to reflect what they perceive that their questioner, or respectable society in general, expects of them?

Increasingly, therefore, attention has concentrated on psychological experimentation, which largely involves eliciting spontaneous responses to moral alternatives. This approach has spawned a new branch of moral philosophy (in reality, an academic industry) known as “trolleyology.” It was facilitated by a growing interest in Aquinas’s theories of a “just war” (just cause pursued by just means) and “double effect,” which asserts that a good action with foreseeable bad side-effects can be justified if the side effects are not intended and the net result is good. This interest was in turn facilitated by the conviction of some philosophers (Kolakowski, John Gray) that religion and philosophy are inseparable, so that a growing “consequentialist” or pragmatic approach to moral questions outside the orbit of philosophy was called for. In opposition to the classical moral view, this approach asserts that right and wrong actions do not depend on right and wrong attitudes or abstract concepts of “good” and “bad,” “virtue” and “vice” but on right and wrong outcomes (consequences).

What Is Trolleyology and How Has It Evolved?

Trolleyology was invented some decades ago by philosophers Philippa Foot and Judith Jarvis Thomson. Today it is chiefly associated with the colorful biologist/psychologist Marc Hauser (Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong [2006] and Moral Sense Tests on the Internet), who began his career studying the social behavior of monkeys. The “trolley” is a hypothetical runaway train that endangers five people in its path unless (1) it is diverted to a siding (“spur”) with only one potential victim in its path, or (2) an obese person (“fat man”) is pushed off an overpass above the track to stop it. Responses to these two scenarios were originally sought from American college students and later from the general public. Marginally different results have been reported, but a significant majority always supports 1 and opposes 2. During a recent public lecture at Sydney University, Peter Singer asked for a show of hands and got the same response. Variants of 2 include Frances Kamm’s version, in which the fat man is pushed onto a turntable (“lazy Susan”) in the train’s path (support for pushing increases somewhat), and Josh Greene’s, in which a button is pressed to drop the fat man through a trapdoor onto the track (support increases greatly). The explanations given for these findings are that those who support 1 do not intend to kill the lone person on the siding, and it is not certain that they will do so (the train may be derailed by the switch), while those who oppose 2 see pushing the fat man as intentional homicide, which cannot be justified under any circumstances. Opposition declines with the lazy-Susan solution because the fat man is thought to have a chance of survival, and it falls away sharply with the trapdoor solution, which involves no personal contact and therefore much less emotional involvement (Greene). Singer told his Sydney audience he thought the spur and fat-man options morally equal. As an evolutionary moralist, a utilitarian, and a rationalist, he favored sacrificing one life for the sake of five in all circumstances.

Hauser has extended his analysis of spur and fat-man responses to those involving incest and eating vomit (the “yuck!” factor or “reactive attitude” [Peter Strawson]) as well as attitudes toward authority, reciprocity, aggression, dominance, submission, different types of weapons, harming chimps and humans, and actions or omissions. Instead of two categories of response—“for” and “against”—he employs three: “obligatory,” “permissible,” and “forbidden.”

Lawyers have long debated the relationship between the permissible and the forbidden and whether it should be the business of law to uphold morality. My (controversial) view as a layman is that it should. I justify simultaneous support for “permissive” legislation (many supporters of which assert that law should not uphold morality) by saying that such legislation really concerns matters of personal taste and not common morality. Moral philosophers are now debating the relationship between the forbidden and the obligatory, the latter being invoked in “good Samaritan” legislation. Bernard Gert has gained considerable support for his proposal of ten “negative” rules (to avoid causing harm to others) and accompanying “positive” ideals (to prevent harm to others). This revives the religious division between sins of commission and sins of omission, the latter traditionally seen as less blameworthy.

In contrast with the 1960s’ democratic concentration on social permissiveness, contemporary attention among sociologists and “applied” (as distinct from “pure”) philosophers is focusing on responsibility. “Role” responsibility (Peter Berger, Thomas Luckmann) is seen as applying not only to parents, teachers, and carers but to all professions and occupational groups, such as lifeguards. Here, conduct that for others might merely be desirable becomes obligatory. Also identified are ad hoc “group” responsibility (H. Lewis) and ongoing “collective” responsibility (Joel Feinberg), where all members are held responsible for the activities of their associates. An unfortunate by-product of role responsibility is a potential conflict of interest between the demands of clients and the state. An example is doctors’ official opposition to voluntary euthanasia as they “strive officiously to keep [patients] alive” (Arthur Hugh Clough).

Setting theoretical priorities in trolleyology has widespread practical implications for bioethics and government policy. The most obvious parallel, heard daily in the news, is “collateral damage”—usually a euphemism for killing civilians—in warfare. Infliction of collateral damage is universally practiced, and almost universally condemned. An exception is Singer, who sees both combatants and noncombatants as enemy targets. A less publicized but constant dilemma is medical “triage,” deciding which casualties to treat first (or at all) in emergencies in which resources of staff, equipment, and drugs are limited. As populations age and the ratio of taxpayers to beneficiaries of health care shifts, the dilemma is likely to extend beyond emergency situations.

Why Should We Do What We Think Is Right?

Moralists have periodically agonized over this issue since David Hume wrote A Treatise of Human Nature (1739). In his Principia Ethica (1903), G.E. Moore called the conflation of is and ought statements censured by Hume the “naturalistic fallacy,” but he was anxious lest readers conclude, as Max Stirner had (The Ego and His Own, 1845), that morality itself was a fallacy. Moore asserted unconvincingly that moral qualities were nonnatural, recognizable by “intuition.” In the mid-twentieth century, traditional normative ethics, which prescribed the good life, gave way to fashionable noncognitivist meta-ethics. By pointing out that “happiness” is elusive, subjective, and populist (“the greatest numbers”); ignores justice (John Rawls); and involves a measure of psychology rather than morality, the meta-ethicians were largely responsible for undermining utilitarianism, which had been formulated in 1725 and uncritically accepted by humanists as the yardstick of morality for two centuries; nor have humanists entirely abandoned it today. Hare addressed the naturalistic fallacy by saying that moral statements simply constitute “commending” and follow ordinary logic, while other meta-ethicians invoked a mysterious emotive logic or rejected all logic. Neither approach satisfactorily explained why anyone should follow a moral injunction.

I suggest that, far from being what Kant termed “categorical imperatives,” ought statements usually imply some doubt over whether they will be followed, by ourselves or anyone else, and I regret that morality is still tinged with the mysticism of religion. Moreover, I assert that the naturalistic fallacy is itself a fallacy. There is no essential difference between “Jones is a large man” and “Jones is a good man to be emulated.” Both statements are evaluative, depending on the experience of the commentator, and will be accepted or rejected by hearers according to their experiences. With sufficient numbers, a broad consensus will be reached, though the degree of certainty is likely to be greater for descriptive is statements than for prescriptive ought statements, and consensus on the former is therefore less likely to change.

Why Do We Act Against Professed Convictions with Little Compunction?

The most plausible explanation is that people’s actual moral codes are often different from what they profess, so there is little real moromalpractice. Another factor is selective memory. Recollections are actively suppressed, nonreactivated, or reactivated to rationalize vice and magnify virtue.

Is Our Behavior Motivated By Reason or Emotion or Both?

While some contemporary philosophers (Rawls, Singer, J. Rachels) agree with Kant that reason is paramount in morality, most probably support Hume’s dictum: “Reason is, and ought only to be, the slave of the passions.” This does not mean that we do, or should, succumb to every whim. There is evidence that the rational prefrontal cortex filters out bizarre emotional impulses arising in the amygdala. Yet our moral sense is innate, and functional efficiency and benevolence result from routinely acting on “impulse” and not consciously debating the merits of every action. Thus we can be saved from callous calculations like “economic rationalism.”

Nucleoethics posited that the traditional mental sequence of awareness, will, then memory should be reversed. Our memory of appropriate response leads to spontaneous action, which we almost instantaneously become aware of and imagine we “willed.” Neurophysiology has recently verified this order of brain activity, which occurs over a span of about half a second.

H. Tristram Engelhardt Jr. asserts that it is not possible for “reason to justify a canonical contentful morality” (The Foundations of Bioethics, 1996). Most secular humanists probably believe that, while our actions may be technically “impulsive,” we can usually give reasons for them; though we should be alert to the dangers of specious rationalizations.

Is Morality Dictated By Nature or Nurture or Both?

Child rearing has always revealed a balance between nature and nurture, but the publicly recognized relationship has changed over time. In the pagan world, human beings saw themselves as the puppets of fate or of spirits and gods who differed from humankind in power and in the acceptability of their vices and crimes. Judeo-Christians of all persuasions inherited this tradition and added predestination or “free will”—essentially the will of God or the wiles of Satan. All these concepts demonstrate the supremacy of nature.

Freethinkers progressively abandoned traditional theology, religioethics, and religiomorality. Nucleoethics described the fruits of unbelief as “irreligioethics and irreligiomorality (or religioethics [2] and religiomorality [2])” as a concession to those who might regard secular humanism as a “religion of humanity.” (I would not concede this today.) By the 1960s, most psychologists and sociologists were attributing criminality to society (nurture) and not the individual, though Hans Eysenck called it hereditary (nature). In the 1970s, genetics (nature) became trendy again when Richard Dawkins proclaimed The Selfish Gene (1976). Over time, particular genes (or their absence) were identified as causing diseases and regulating sensory or motor functions; teams led by Craig Venter and Francis Collins sequenced the human genome in 2003. Even genes for criminality, drug dependence, homosexuality, and religious belief were “identified” and then abandoned.

“Religion doesn’t have a ‘God spot’ as such, instead it’s embedded in a whole range of other belief systems in the brain that we use every day,” Jordan Grafman suggests. Brain scans show that networks activated by religious beliefs overlap those that mediate political and moral beliefs. Today a balance between nature and nurture seems to be recognized. Working with criminals, including twins and adoptees, in Dunedin, New Zealand, and London, Terrie Moffitt and Avshalom Caspi concluded there was a genetic predisposition to criminality but that it must be switched on by childhood ill-treatment in order to be expressed.

How does this operate? Nucleoethics postulated layers of computerlike programs in the brain: primary (nature or machine code); secondary (nurture or software); tertiary (nature or functional). The book wrongly stated what was believed in 1972: “The baby is born with a full complement of neurons.”

Are We Motivated By Free Will or Determinism?

Despite the quantum-mechanics fiction of “particulate freewill,” nature and nurture operate on causality (determinism). As the product of both, human activity is therefore determined. Moral philosophers, religionists, and many humanists then ask how this conclusion leaves room for personal responsibility. Philosophically this is worrisome, but real life largely depends on pragmatism (truth is what works). The law asks felons if they knew what they were doing and that it was wrong. If the answers are unequivocally no, they are deemed unfit to plead. Claims of being under the influence of alcohol, other drugs, or hypnosis are no defense against conviction, but they may be allowed as evidence of “diminished responsibility” for purposes of sentencing. Factors like age, provocation, prior convictions, “moral” duress, and family commitments are also considered today. Proven self-defense by “appropriate” means usually results in acquittal.

In academia, a new jargon has arisen of “compatibilists” and “incompatibilists.” John Forge says the former “believe that free will—and thus responsibility—is in fact compatible with determinism.” How can free will and determinism be compatible? But free will is not the same as responsibility. Leaders are widely held “responsible” for what subordinates do without their knowledge or approval. In Truman’s phrase, “The buck stops here.” And so it is with personal delinquency.

Civil society remains civil through enlightened penology, whereby public retribution and private payback yield to reformation and deterrence—both deterministic forces.

Should Altruism Be Fostered?

Many religionists and humanists advocate altruism, without definition. If it means Sermon-on-the-Mount self-abnegation, it is both unrealistic and undesirable. Evolution depends on the survival of groups and ultimately of individuals within them. Self-respect precedes respect for others. Self-regard should not exclude regard for others. Sane individuals usually call their heroic deeds (“guts and glory”) spontaneous, not suicidal or especially praiseworthy. Others praise them but may deem them foolish.

This even has a parallel at the interspecies level. An East African bird, the greater honeyguide (“honey caller”), has a special song it sings for Masai warriors, by which it leads them to a hard-to-find beehive. After smoking and robbing the hive, the warriors take most of the honeycomb but leave some for the bird. If it deems the reward insufficient, it ceases collaborating. Society functions best on enlightened self-interest.

Have the Above Developments Affected Humanism?

Certainly its terminology has multiplied. Inspired by Pierre Teilhard de Chardin’s “noosphere,” in New Bottles for New Wine (1957) Julian Huxley coined transhumanism to describe human direction of evolution as a psychosocial phenomenon. Transhumanism has reemerged in technological guise to describe genetic manipulation and human interfacing with machines to form cyborgs (Jean Baudrillard). “Posthumanism” (Donna Haraway, Cary Wolfe) combines speciesism, transhumanism, poststructuralism, and postmodernism. And there are “humanist posthumanism” (Singer) and “posthumanist posthumanism” (Jacques Derrida).

What Is the Role of Humanism?

The above account suggests that secular humanism, like other ideologies, has no direct impact on moropractice. Its significant influence is in the realm of ideas that help to create a society in which individuals can flourish physically, mentally, emotionally, and morally; in which science is tempered with humanity, and all assertions are tested.

For particularities, see Paul Kurtz’s “Affirmations of Humanism,” frequently reproduced in this magazine.

David Tribe

David Tribe is a leading secular humanist now in Australia and formerly in the United Kingdom, where he was chair of Humanist Group Action, president of the National Secular Society, and editor of The Freethinker. He is an honorary associate of Rationalist International. His books include The Rise of the Mediocracy and Questions of Censorship (both from Allen & Unwin, 1975 and 1973, respectively).

