Appeared in Free Inquiry, vol. 37, issue 3, March 30, 2017

ISSUES IN TECHNOLOGY AND ETHICS

Nanotech: New Legal and Moral Challenges

David Koepsell

We are interested in promoting beneficial technologies, of which nanotechnology certainly promises to be one, and in doing so in ways that are efficient, innovative, and ethical. At the same time, we must also consider how much and what type of regulation is warranted regarding this new technology. Societal and individual concerns regarding the environment, safety, and security may warrant regulation of a new technology in some form, whether by governments, international bodies, or the researchers and innovators themselves. Mistakes made in the past in developing and implementing technologies, and the resulting harms that have befallen individuals, populations, and the environment, serve as examples by which nanotechnology can be more carefully introduced into the stream of commerce and unnecessary harms avoided. We should be mindful, however, that critics’ exaggeration of risks and irrational public fears have also proved harmful to society, as worthwhile new technologies have been stifled or delayed when they were needed most.

Nanotechnology

Nanotechnology, like any technology, poses the potential both for significant improvements in our well-being and lifestyles and for harms. These harms may be environmental, or they may be direct harms to those working with the materials involved and to consumers who willingly purchase nanotech-based goods. The “Belmont Principles,” established in an influential 1979 report in the wake of the infamous Tuskegee Study,2 are relevant here. Expanding their moral horizon to humanity as a whole—which is, after all, subject to the introduction of new technologies even though we are not all subjects of studies—should help us to avoid some of the ethical lapses of the past. Also relevant is the recent experience of the medical research sector in coming to grips with the moral dimensions of risk. While a fair amount of the institutional machinery of today’s bioethics involves self-policing and peer review, given the lapses of the past, governmental rules, regulations, and laws now back up many of those institutions. Failures of ethics in the modern era can bring legal consequences, or at least significant institutional punishments, fines, and withholding of licenses, as well as personal and professional liability.1

Self-policing of behavior is preferable in modern liberal polities and economies not only because doing the right thing out of proper motivation is morally preferable to skirting duties or acting with bad intentions but also because it is more efficient. The less bureaucracy, the lower the costs of the technology or industry, and institutional rules and regulations add to bureaucracy.

It is incumbent upon both scientists and those seeking to create nanotech-based products, whether those products affect the environment or enter the stream of commerce and directly affect users and workers, to avoid the mistakes of the past. The ethical duties embodied in the Belmont Principles and similar international codes are owed regardless of the regulations that came to be deemed necessary. Choosing in the early stages of the development of nanotech to abide by ethical duties may help obviate the need later to enforce good behavior by institutional means and avoid top-down rules, regulations, and laws and the bureaucratic inefficiencies that inevitably follow. Even absent a desire to do the right thing for its own sake, enlightened self-interest should provide sufficient incentive to avoid the mistakes of other sciences and industries.

Proper respect for the ethical duties noted above requires special consideration of the characteristics of nanotech products and why they must be carefully studied before people are exposed to them. The very factors that make nanotech so interesting and useful, namely the size of the materials and machinery involved, make it a special concern regarding human exposure. Because of their high surface area relative to their volume, small things are more “reactive” and pose possibilities of harm that larger materials do not; because of their minute size, nanoscale materials and objects may be aspirated through airways, become deeply lodged in lungs and other tissues, and even permeate the skin, all offering means to harm people that many other products do not. Various organizations, professional groups, and governments have recognized the special nature of nanotechnology in regard to human health and have promulgated various recommendations and codes of behavior to guide the fledgling industry. The Foresight Institute, which owes its genesis to Eric Drexler, published its Foresight Guidelines for Responsible Nanotechnology Development in 2006.4 In 2008, the Commission of the European Communities published its report A Code of Conduct for Responsible Nanosciences and Nanotechnologies Research.5
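
As a back-of-the-envelope illustration (my own, not the author’s): for a spherical particle of radius r, the surface-area-to-volume ratio is

\[
\frac{A}{V} \;=\; \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} \;=\; \frac{3}{r},
\]

so shrinking a particle from 1 micrometer to 10 nanometers, a hundredfold reduction in r, exposes one hundred times more surface per unit of volume, and it is at surfaces that chemical and biological interactions occur.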

It remains to be seen whether nanotechnology will suffer lapses such as those of medical science and whether more regulation or other institutional responses will eventually be necessary to protect people from its potential harms. While nanotech poses unique harms, as noted above, some features have been present (and proven harmful in some circumstances) in other technologies and products. Dioxins are molecules; cigarette smoke (like other pollutants) is composed of nanoscale objects. But while we are familiar with the risks in general, each new nanotechnology product will pose unknown risks that must be carefully evaluated. And while we can agree that the community of well-intentioned researchers and developers of marketable technologies will do well to be guided by ethical principles, it is concerns about bad actors, whose intentions are already unethical and who wish to cause harm, that lead us to consider whether rules, regulations, and laws should govern the dissemination of technologies (such as some nanowares) that can be put to evil uses.

Security

Even the most benign technologies can be adapted in order to cause harm. Fertilizers meant to increase crop yields killed many scores of innocent people when Timothy McVeigh used them to bomb a U.S. federal building in Oklahoma City in 1995. Household implements have been used to hurt, maim, or murder people for as long as man has been making tools. Certain technologies, however, have recently been developed that are considered so inherently dangerous that significant regulatory steps have been developed to contain them.

Until the twentieth century, guns and gunpowder were generally available to any who could afford them, but larger, deadlier weapons such as cannon remained too expensive for those of ordinary means. While many technologies have been developed specifically for killing and warfare, techniques that incorporated the use of deadly chemicals into ordinary arms prompted the first attempts to curtail the use of certain technologies relating to warfare. As early as 1675, the Strasbourg Agreement between France and the Holy Roman Empire banned the use of poison-tipped bullets. In 1874, the Brussels Convention regulated the use of chemical weapons. Although three Hague treaties were signed before the start of the First World War, chemical gas weapons were used in that war and have occasionally been used even since the signing of the Treaty of Versailles, which in Article 171 stated that “the use of asphyxiating, poisonous or other gases and all analogous liquids, materials or devices being prohibited, their manufacture and importation are strictly forbidden in Germany . . . the same applies to materials specially intended for the manufacture, storage and use of the said products or devices.”6

After the Second World War, control of chemical weapons was subsumed into general international regulations concerning weapons of mass destruction, which included the newest, deadliest technology: nuclear weapons. Fear about the spread of nuclear technology inspired the United States to make the technology itself classified and to forbid patents on it as well. One of the purposes of a patent is to enable others to practice and improve the art disclosed in its claims once the patent expires and the invention moves into the public domain. Fear about the proliferation of nuclear weapons and the use of nuclear technology by other countries to create their own weapons encouraged secrecy and restraint of the technology itself—except by those within the U.S. government. But knowledge about the underlying science was already well known, and other governments soon duplicated the United States’ success in building both fission and fusion bombs. The genie was out of the bottle. Yet even after the spread and duplication of nuclear technology by other states, the United States and most of the other nuclear states attempted to regulate proliferation of both the knowledge and the production of nuclear materials and eventually entered into various treaties among themselves to further limit nuclear technology outside of the select group of first-comers.

Nuclear arms control and anti-proliferation treaties have created international monitoring and enforcement mechanisms to track the flow of fissile materials and to outlaw attempts by other states to build or otherwise possess nuclear weapons. Similar treaty regimes monitor pathogens capable of use in biological warfare, as well as the stockpiling of dangerous chemical agents. Since the Oklahoma City bombing, even quantities of fertilizers capable of being used for explosives are tracked, and there are limits on who can purchase them.

While uranium is found in the environment, purifying it for use in a weapon can hardly be done without attracting the attention of those agencies and organizations tasked with tracking attempts to build nuclear weapons. Creating fissile materials for bombs is expensive and complicated, requiring a certain level of technological advancement and the possession of specialized equipment. Manufacturing sufficient quantities of chemical weapons for use in war or for terror is also difficult to do without being noticed, although it is easier. Creating weaponized biowarfare agents is easier still and harder to track. Witness, for instance, the anthrax attacks in the United States in 2001.

Nanotechnology and synthetic biology raise the catastrophic possibility that rogue states and terrorists could attain weapons of mass destruction cheaply and without drawing attention. Because of the potential for essentially “garage”- or “basement”-made mayhem, those involved at the early stages of research and development are also trying to develop ways to track the use of the components of deadly products. Nanotech terrorism is a long way off (though arguably, the weaponized anthrax may have been purposely coated with silicon particles7). In the meantime, security concerns in connection with synthetic biology are rising among militaries, security agencies, and researchers themselves.

Synthetic biology is essentially engineering at the nanoscale using biological systems. Synthetic or systems biology extends an engineering approach to biological systems and attempts to create basic building blocks by altering genetic code to construct materials and even nanoscale machinery. One of the essential mechanisms for synthetic biology is the identification of useful snippets of genetic code and other biological materials so that they can be combined in new ways. Researchers can now order custom-made strings of DNA or, if they can afford the equipment, create the sequences themselves. In 2002, researchers at the State University of New York at Stony Brook created a synthetic polio virus, pathologically identical to a naturally occurring one (but with markers to denote its artificial manufacture), by using mail-ordered sequences.8 The potential for mischief as synthetic biology matures is clear. If polio could be constructed in the lab, then so could smallpox or, even worse, hitherto unknown biologically based weapons of mass destruction.

Synthetic biology is being touted as a quick and easy path to realizing some of the promises inherent in nanotechnology, piggybacking off nature’s success in designing nanoscale processes and products and speeding our means to achieve nanoscale constructions of our own. The tools to make it possible are also becoming cheaper and more available to amateur and professional synthetic biologists. With growing (and spreading) knowledge of the fundamentals of biological processes, combined with falling costs and greater availability of equipment, nation-states are growing nervous about potential uses by terrorists. Researchers understand that such an incident, should one ever happen, would bring this fledgling science to a screeching halt. Self-policing the industry by a variety of mechanisms has become a widely accepted necessity, even if there are questions about its efficacy.

In 2006, researchers in the field met for their second international conference, “Synthetic Biology 2.0,” and discussed in some depth issues relating to safety and security. Out of that meeting and subsequent meetings and colloquia (as well as public input) came a white paper titled “From Understanding to Action: Community-Based Options for Improving Safety and Security in Synthetic Biology.”9 The document heavily stresses the duties of researchers and commercial suppliers in the field to be aware and to self-police. Numerous other efforts by nongovernmental organizations, governmental bureaucracies, commissions, and law-enforcement agencies have also begun to examine the security implications of the spread of knowledge and means to conduct synthetic biology. Several national and international consortia in Europe have launched inquiries and studies into the ethics and practical concerns of regulating synthetic biology, with a special emphasis on security concerns. In 2007, a white paper was published by Synbiosafe, a project involving the University of Bath, the University of Bradford, and the Organization for International Dialogue and Conflict Management.10 In 2009, the European Group on Ethics published its report on ethical and practical issues in synthetic biology, noting certain security issues as well.11 The Synth-Ethics project, funded by the European Union and in which I have been an investigator, published its first report in late 2010.12

The trend, begun with the “Synthetic Biology 2.0” meeting, is to focus on voluntary notification and enforcement mechanisms, as well as individual researcher responsibility. This model is proposed in a joint report of the J. Craig Venter Institute, the Massachusetts Institute of Technology, and the Center for Strategic and International Studies (CSIS). The report, published in 2007 and titled “Synthetic Biology: Options for Governance,”13 explores a number of policy options. Although it presents no recommendations per se, the weight of the projected impacts of the various options presented leans heavily in favor of voluntary professional oversight, education, and openness in order to encourage innovation without significant top-down control mechanisms and to help prevent intentional misuse of the technology. The alternative to this trend is much more closed, tighter regulation of research that would affect knowledge dissemination and include oversight of labs, materials, and researchers. Of course, this sort of restricted environment is generally anathema to a liberal democracy, to say nothing of the smooth conduct of research in a rapidly evolving field; in any case, it is doubtful whether such measures would accomplish the overall goal of improving security.
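
To make supplier-side self-policing concrete, here is a minimal sketch, in Python, of the kind of order-screening logic a DNA-synthesis vendor might run before filling an order. Everything in it is an illustrative assumption: the stand-in “sequences of concern,” the k-mer window size, and the function names are mine, not a description of any real vendor’s screening system or of the white paper’s specific proposals.

```python
# Minimal sketch of supplier-side DNA order screening (illustrative only).
# A real system would screen against curated pathogen databases using
# alignment tools; exact k-mer matching is used here only for clarity.

KMER_SIZE = 20  # comparison window length (an assumption, not a standard)

# Hypothetical stand-ins for sequences of concern; not real pathogen DNA.
SEQUENCES_OF_CONCERN = [
    "ATGGCGATCGTTAAGCCGTACCGATTGCAGGTACCTA",
    "TTGACCGGTAAACGTCAGGCTATCGGAATTCCGATAC",
]


def kmers(seq, k):
    """Return the set of all length-k substrings of seq."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


def build_index(sequences, k):
    """Collect every k-mer appearing in any sequence of concern."""
    index = set()
    for s in sequences:
        index |= kmers(s, k)
    return index


def order_is_flagged(order_seq, index, k):
    """Flag an order that shares any k-mer with a sequence of concern."""
    return not kmers(order_seq, k).isdisjoint(index)


if __name__ == "__main__":
    index = build_index(SEQUENCES_OF_CONCERN, KMER_SIZE)
    # This order embeds a region of the second sequence of concern.
    order = "GGG" + "TTGACCGGTAAACGTCAGGCTATCGGAATTCCGATAC" + "CCC"
    if order_is_flagged(order, index, KMER_SIZE):
        print("Order flagged for human review before synthesis.")
    else:
        print("Order cleared.")
```

The point of the sketch is the workflow rather than the matching method: a flagged order goes to a human reviewer instead of being silently refused, which is the sort of voluntary notification mechanism the reports discussed above favor.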

As we have noted, the knowledge and materials necessary for synthetic biology are already generally available and growing cheaper every day. Unlike the tools and materials used in nuclear weapons, chemical weapons, and even some weaponized biological agents, there is really very little that can be done to effectively police the pursuit of synthetic biology. For now, this is not the case with the tools and knowledge necessary for pursuing true molecular nanotechnology. So far, the cost of things such as powerful electron microscopes is prohibitive for garage tinkerers, and the various grassroots efforts at creating fabricators are nowhere close to achieving molecular-scale detail. But if the trajectories of the top-down and bottom-up approaches to nanowares continue to merge, then nanotechnology will have to take issues of security as well as safety seriously. Perhaps more than any industrial failure, whether intentional or accidental, the use of nanotechnology by bad actors would undermine public confidence in the technology and bring to bear a measure of government regulation and oversight that could choke the field and hinder its progress and benefits.

What can we learn from regulatory efforts regarding other technologies in the past, and how can we best pursue nanotechnology’s benefits while avoiding the environmental, safety, and security dangers expressed above? And what other regulatory and governance issues can we address now, in the nascent stages of nanotech science?

The Path of Openness

Consider what might have happened had nuclear technology been kept open and the knowledge and means of producing nuclear weapons, as well as nuclear’s peaceful uses, not been regulated so heavily. Would the world have been less safe? At one point during the Cold War, when the Soviet Union and the United States had helped create an international climate in which those two states held a virtual monopoly on nuclear weapons, each side had enough warheads to destroy Earth; a nuclear exchange would have eliminated most life on the planet. How safe were we then? During the Cuban Missile Crisis, we came closer to nuclear war than at any other point so far in history; war was only narrowly averted, and a diplomatic failure could have spelled the end of civilization. The balance of terror maintained by the policy of Mutually Assured Destruction (MAD) may have helped avert nuclear war, or we may simply have been lucky that, despite our capabilities, some other inhibition kept us from using our nuclear weapons. In the post–Cold War era, some truths have emerged that have tested the MAD policy and suggested that, with some exceptions, deadly technologies are not likely to be used even by rogue states or terrorists.

Although nuclear technologies continue to proliferate and states such as Pakistan, India, Iran, Israel, and North Korea are known to possess, or likely possess, either the technology to produce nuclear devices or the devices themselves, these weapons have not yet been used. International nonproliferation treaties and policies of containment have generally failed; such agreements actually serve as bargaining chips rather than deterrents. As a society attains the level of technological capability to produce nuclear weapons, it makes more sense to do so secretly to the degree one can and then to use this capability, once achieved, to bargain for something. International pressure to limit proliferation of nuclear weapons creates a climate for blackmail. States that skirt these agreements and develop their own nuclear capabilities can then taunt the world community with their technological achievement, flaunt their violations of treaties, and use their new membership in the nuclear club as leverage to secure aid, cooperation in some other dispute, trade deals, or the satisfaction of other demands. Since the end of the Cold War, in which the two major nuclear superpowers came to a stalemate, the growth in the number of nuclear states has been steady. International efforts to curtail the spread of nuclear technology seem to have achieved the opposite. And yet, are we any less safe?

The post–Cold War era offers a glimpse of what the world might have been like had we never regulated nuclear technologies. If everyone has weapons of mass destruction, is the threat of nuclear conflagration any greater than if only two mortal enemies possess them? In the Cold War world, if the USSR or the United States had used nuclear weapons on a small state that had developed and used a nuclear weapon on its nonnuclear neighbor, the chances of a U.S./USSR nuclear exchange would have increased, and either of the two superpowers would have looked like a bully. In a post–Cold War world, in which (presumably) anyone might develop and possess nuclear technology, the risks to a state that chooses to use nukes increase dramatically: retaliation could be immediate, and it would pose fewer diplomatic consequences for the non-superpower states that carry it out, given that such a use would be directed at the aggressor and legitimately defensive.

Had the nuclear world after World War II been multipolar instead of bipolar, and had no caps been imposed upon the research and development of nuclear technology, an international stalemate would likely have prevented the wartime use of nukes. We are arguably less safe now, when any state might develop destructive technologies in secret and then resort to blackmail, than we would be if we simply assumed that any state with the capability might legally develop and possess weapons of mass destruction. The latter climate encourages multipolar diplomatic agreements to curtail the use of these weapons, rather than complex institutional measures and threats of force to prevent their development. There seems more to be gained for safety and security from openness than from tight regulation and curtailment of knowledge and technology development.

Astonishingly, we have failed to destroy ourselves as a species, despite the means to do so being available for the past sixty years. Of course, it’s still possible that we will do so—if for no other reason than because of the still-vast supply of nuclear weapons, primarily still in the hands of the United States and Russia. Every day that we reduce the number of nuclear weapons, the likelihood of global nuclear catastrophe falls. This doesn’t mean that someone won’t someday use a nuclear device in war (as the United States did twice) or for terror. But cheaper, easier means for conducting terror exist, as the events of September 11, 2001, graphically demonstrated. Doomsday scenarios aside, more banal and significantly more destructive means of killing people remain widely available, and no amount of regulation will rein them in.

Could it be that we can be trusted with dangerous knowledge? Might even the most evil of people be deterred by external factors, or by fear and trepidation, from engaging in deliberate acts of large-scale destruction? We can hope, though history shows that outliers emerge now and then who will stop at nothing to kill, commit genocide, or launch dreadful wars. And while scientists and innovators must be cognizant of the possibility that such people will use their technologies for harm, this cannot serve as an argument not to pursue potentially deadly knowledge. Attempts to build nuclear weapons were underway inside Nazi Germany; outside of it, knowledge of those efforts, and Albert Einstein’s awareness of the Nazis’ technical capability to succeed, arguably helped enable the Allied forces to prevail. No chance of victory could have emerged from attempts to squelch the knowledge itself.

It is the nature of information and knowledge to spread, despite attempts to curtail it. Attempting to curtail the spread of seemingly dangerous knowledge only encourages those who wish to use that knowledge to do harm to acquire it at any cost and to operate underground, secretively, and beyond the view of those who might be able to prevent its evil uses. Consider the drug trade. A dangerous underground system of manufacture and distribution exists. Thousands of people are killed each year in wars among rival gangs, and the products are unregulated, impure, tainted with the blood of innocents, and uncontrolled. Market demand has continued unabated, and has even been exacerbated, despite, and perhaps because of, regulation. As use is criminalized, the ability to intervene and treat addictions is diminished, and the cycle of illegal manufacture, distribution, and use is that much harder to track. If people want something, they will find a way to make it, or entrepreneurs will emerge who will satisfy the market demand.

Attempts to curtail knowledge about nanotechnology, or to regulate the availability of the machinery and equipment needed to realize its full potential, will ensure that a black market emerges. It will become that much harder to track the development of potentially harmful products and uses, and overall safety and security will be diminished. We should instead encourage openness. While we survived the Cold War, the emerging multilateral post–Cold War environment will be survivable only if we adopt maximum openness. Given the means, the better angels of our nature can be trusted to prevent intentional catastrophe. The distant possibility of “gray goo” should be kept in mind, and the current and near-future dangers of synthetic biology ought to motivate us, but only to educate those involved in these sciences about their duties and to encourage the free spread of knowledge as a preventive measure. The more we know about the possibilities, and the better we are able to evaluate risks, the more likely researchers will be motivated and able to prevent nanotechnology’s harmful uses.

Openness also points us toward proactive measures, beyond merely guiding our protective actions. Considerations of justice should encourage efforts to repudiate regulatory measures aimed at curtailing the free flow of information wherever it threatens the positive potential of the technology. As we have discussed, the modus operandi of liberal democracies is to increase political participation and encourage freedom and open markets. Yet powerful regulatory forces currently work against these goals out of an expressed motivation to encourage the progress of “science and the useful arts.” Intellectual property (IP) is now taken for granted as a right, although its history suggests that the rights established by IP are relatively recent, wholly creatures of positive law, and not founded upon the sort of principles that have grounded other human rights. If we are interested in promoting the growth and full potential of nanotechnology and investigating the role of all regulations in this effort, then we must also focus on the role and impact of IP on innovation in general and on nanotechnology in particular. We cannot take for granted that IP always achieves its stated ends or that we must accept its current forms without modification.

Notes

  1. For an extended discussion of the moral principles involved in evaluating risk in connection to both technological innovation and medical research, see my “The Morality of Risk: A Primer,” available online as an appendix to this issue on the Council for Secular Humanism website (www.secularhumanism.org).
  2. “Tuskegee Study—Timeline,” NCHHSTP, Centers for Disease Control, June 25, 2008, http://www.cdc.gov/tuskegee/timeline.htm, accessed September 22, 2010.
  3. David Koepsell, “On Genies and Bottles: Scientists’ Moral Responsibility and Dangerous Technology R&D,” Science and Engineering Ethics 16, No. 1 (2009):119–33.
  4. http://www.foresight.org/guidelines/current.html, accessed September 22, 2010.
  5. http://ec.europa.eu/nanotechnology/pdf/nanocode-rec_pe0894c_en.pdf, accessed September 22, 2010.
  6. The Avalon Project—The Laws of War, Yale Law School, Lillian Goldman Law Library, http://avalon.law.yale.edu/subject_menus/lawwar.asp, accessed September 22, 2010.
  7. Press Briefing by Homeland Security Director Tom Ridge, Health and Human Services Secretary Tommy Thompson, and Centers for Disease Control Emergency Environmental Services Director Dr. Pat Meehan, October 29, 2001, http://www.presidency.ucsb.edu/ws/index.php?pid=79187, accessed September 22, 2010.
  8. J. Cello, A. V. Paul, and E. Wimmer, “Chemical Synthesis of Poliovirus cDNA: Generation of Infectious Virus in the Absence of Natural Template,” Science 297, No. 5583 (2002): 1016–1018.
  9. http://gspp.berkeley.edu/iths/UC%20White%20Paper.pdf, accessed September 22, 2010.
  10. http://synbiosafe.eu and http://www.idialog.eu/uploads/file/Synbiosafe-Biosecurity_awareness_in_Europe_Kelle.pdf, accessed September 22, 2010.
  11. http://ec.europa.eu/european_group_ethics/docs/opinion25_en.pdf, accessed September 22, 2010.
  12. http://synthethics.eu/, accessed September 22, 2010.
  13. Michele S. Garfinkel, Drew Endy, Gerald L. Epstein, and Robert M. Friedman, “Synthetic Biology: Options for Governance,” Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science 5, No. 4 (December 2007): 359–62.


David Koepsell is an author, philosopher, attorney (retired), and educator whose recent research focuses on the nexus of science, technology, ethics, and public policy. He has provided commentary regarding ethics, society, religion, and technology on numerous media outlets, including MSNBC, Fox News Channel, the Guardian, the Washington Times, National Public Radio, Radio Free Europe, Air America, the Atlanta Journal-Constitution, and the Associated Press. He has been a tenured associate professor of philosophy at the Delft University of Technology, Faculty of Technology, Policy, and Management, in the Netherlands; visiting professor at UNAM (National Autonomous University of Mexico), Instituto de Investigaciones Filosóficas and Unidad de Posgrado, Mexico; director of research and strategic initiatives at the Comisión Nacional de Bioética in Mexico; and asesor del rector at UAM Xochimilco.
