Biotechnology describes a range of human activities in medicine, agriculture, and environmental management. One biotechnology in particular, gene technology, continues to evolve both in capacity and potential to benefit and harm society. The purpose of this article is to offer a policy bridge from unproductive descriptions of gene technology to useful methods for identifying sources of significant biological and socioeconomic risk in complex food systems. Farmers and the public could be voluntarily and involuntarily interacting with new techniques of genome editing and gene silencing in entirely new ways, limiting the usefulness of previous gene technology histories to predict safety. What we believe is a more consistent, verifiable, and practical approach is to identify the critical control points that emerge where the scale effects of a human activity diverge between risk and safety. These critical control points are where technical experts can collaborate with publics with different expertise to identify and manage the technology. The use of technical terminology describing biochemical-level phenomena discourages publics that are not technical experts from contesting the embedded cultural perspectives and uncertainty in “scientific” concepts and prejudices the risk discourse by ignoring other issues of significance to society. From our perspective as gene technologists, we confront the use of pseudo-scale language in risk discourse and propose an escape path from clashes over whether risks that arise spontaneously (from nature) can be perfectly mimicked by gene technology to a discussion on how best to control the risks created by human activity. Scale is conceptually implicit and explicit in gene technology regulation, but there is no agreement about what scales are most useful for managing risk and social expectations. Both differentiated governance (risk-tiered) and responsible research and innovation models could accommodate the critical control points mechanism that we describe.
Introduction
Legislators and regulators around the world are being asked to reconsider what gene technology techniques they regulate (Bratlie et al., 2019; Heinemann, 2019). New Zealand and the European Union (EU) courts have ruled that the new techniques referred to as gene/genome editing fall under the regulations specific to genetically engineered/modified organisms (Kershen, 2015; Gelinksy and Hilbeck, 2018), whereas Australia and Japan have excluded some applications of genome editing and/or gene silencing from regulation (Tsuda et al., 2019; Office of the Gene Technology Regulator, 2020).
Interpretations of terminology used in gene technology legislation have led to different conclusions on whether or how to regulate editing and silencing techniques. The terms used have been based largely on biochemical imaginaries of equivalence to nature. Discussions founded in approximation of or improvement upon nature have been remarkably harmonious with “highly optimistic promises of major social and industrial transformation” (Stilgoe et al., 2013) but are reaching the limits of utility for shaping a social consensus on the regulation of gene technology.
Biochemical imaginaries were also present at the beginning of the gene technology era, in the 1960s–1970s, but they were effectively countered by a generation of scientists and civil society organizations who made a distinction between how gene technology mimics or improves upon nature and how it differentially responds to human activity in nature (Wright, 1978; Wright, 1986). In their analyses, the degree or level of human activity is the primary feature that leads to harm and benefit from technology in general and is an appropriate focal point of regulation that limits harm while promoting responsible use.
We suggest that the concept of the degree or level of human activity be returned to the center of the discussion on why and how to find coherence in governance and regulation of gene technology. As Montenegro de Wit (2020) has observed: “Research spanning resilience theory to epigenetics to agroecology reveals a central paradox: while the scale of control afforded by science advances, so does the domain of uncertainty and potential risk.” There is no vaccine against the actual or potential harms of any technology, and technologies in their particular social and environmental contexts differ in their potential to channel and amplify harm. The potential to scale uncertainty and harm is a specific feature of the gene technology techniques that emerged in the 1970s and differentiates them from agricultural breeding and selection techniques dating back millennia.
In this article, two ideas are presented as either new or important for moving beyond the present mainly biochemical arguments that bog down legislators and courts in a mire of pseudo-technical distractions. First, that the new techniques of gene technology are distinctively scalable. Second, when the potential to cause harm or improve safety describes a relational scale, the property of scalability may be used to inform an improved structure for the regulation of gene technology including new editing and silencing techniques. Effects that scale with application have been drivers of public and scientific concerns about gene technology (Mueller, 2020), but currently proposed or adopted regulatory triggers for the new techniques often fail to link them with these relevant scales.
We begin by explaining what scale means in the context of gene technology. Scale is a complex concept that differs in meaning across disciplines (Sayre, 2009). It is not exclusively a measure of distance, area, volume, and time but also a mixture of these and their relationship with human activity. Where human activity intersects with the environment, there is risk (Figure 1), putting the intersection at the place where we may best control risks of our own making. The highest priorities for technology regulation, after deciding to adopt a technology, are harmful or beneficial effects that scale up quickly and/or widely as a result of human activity.
We present the case that scale relationships can be exploited in a regulatory framework for all techniques of gene technology so far invented, even if the relationship is not linear. A governance system should not fail to respond where “the degree of hazard or level of risk” (Conko et al., 2016) of a technology ascends with its use. We call these places “critical control points,” which regulation and policy can target for the most effective, precautionary outcomes.
Below, we will briefly recount the sometimes unnoticed use of essentially scale-based relations to describe the need for and means of regulating gene technology. In order to do this, we first set down some working definitions of risk and human activity. Recognizing that these concepts are contestable, and not wishing to impose a universal meaning on notions of nature, the intention is to describe the concepts that guide this essay. From there we discuss the language of this fast moving area of biotechnology, but mainly we refer readers with particular interest in biochemical terminology to other sources. The language of gene technology is growing and changing very quickly and can be an issue of confusion separate from the technology itself (Wirz et al., 2020). After choosing a vocabulary, we discuss how biochemical properties of gene technology have obscured scaling properties in addressing governance, regulation, and risk. Finally, we will examine how the scale features we discuss might work in two different proposed governance models, differentiated governance based on a risk-tiered regulatory framework and responsible research and innovation (RRI).
Language and context
Defining risk is inherently problematic because “risk to whom and what counts as risky are a set of questions with a contested political and social history” (Montenegro de Wit, 2020), which we will not resolve here. As will become apparent, however, biochemical imaginaries tend to cluster around a narrow conception of risk as actual or potential biological harms (Pavone et al., 2011; Herrero et al., 2015; Macnaghten and Habets, 2020). This view was formalized by the U.S. National Research Council (NRC, 1987) as: “Assessment of the risks of introducing R-DNA-engineered organisms into the environment should be based on the nature of the organism and the environment into which it will be introduced, not on the method by which it was modified.” It has evolved to mean “risk assessment more consistent with scientific principles” (Eriksson et al., 2020).
We accept this view of risk but do not limit ourselves to it. Risk includes “the nature, magnitude and likelihood of potential harms” within their “social context, including the attitudes and practices of those (individuals and institutions) involved in managing risk” (Pavone et al., 2011). There is risk in the way risks are described and therefore assessed (Pavone et al., 2011; Montenegro de Wit, 2020). After all, what is “the environment”? Where is it? These notions have ambiguous boundaries even when we think we know what they mean (Figure 1).
Nature is often invoked in gene technology governance discourse, usually as a foil in risk assessment, but is a term no less ambiguous than “the environment.” Both of these terms frame the risk window, defining when the technology is relevant to governance (e.g., when it is in the environment) and what it must be like to be governed at all (i.e., not like nature). For our purposes, nature is not a place but a condition: Nature is what can or does happen independently of human activity. In the past, it was everything that happened before humans existed, and in the present, it is everything that happens independently of direct manipulation of the environment by humans, even if the chain of events is one that was initiated by a human action.
We will use the word “nature” to mean “spontaneous.” Spontaneous has an established history of use this way, and it avoids circular definitions such as natural mutations are those produced naturally (see Department of Health [DoH], 2018). For example, spontaneous mutation describes variations in genes that arise independently of human activity. It therefore is the meaning that contrasts with the use of gene technology including the new techniques, and it does so by emphasis on how the variations come to be.
The environment may be where things happen spontaneously or include human activity and, in this article, activities associated with technology. “Technologies never operate outside a biophysical and social context, and it is their interaction with their contexts that generates effects, impacts and implications” (Pavone et al., 2011).
Technologies of food systems are highly diverse and include thousands of years of breeding and selecting crops and livestock as well as managing agroecosystems. Our focus is gene technology, but our main test will be whether scaling effects work as alternative scoping definitions for the new techniques of genome editing and gene silencing. These are seen as the most challenging to define by existing governance models (Eriksson et al., 2020).
Genome editing and gene silencing are called new techniques, but it is not always clear why. The kinds of technological futures ascribed to them are difficult to distinguish from those expressed for gene technology in general since the early 1970s (see Danielli, 1972). Methods of gene editing also were well documented already in the 1970s (Itakura and Riggs, 1980; Shortle et al., 1981; Rivera-Torres and Kmiec, 2015).
New tools have become available for genome editing, particularly the nucleases such as Cas9, TALENs, and ZFNs (for definitions, see Kawall et al., 2020), as have new methods for applying the tools to a larger range of organisms. For example, short pieces of DNA called oligonucleotides used in a genome editing technique called oligonucleotide-directed mutagenesis (ODM) were applied in yeast prior to the 1990s, much as they can now be applied in plants and animals in the 21st century (Moerschell et al., 1988; Baudin et al., 1993).
The new techniques are frequently contrasted with the “old” laws that regulate gene technology. For example, New Zealand’s Chief Science Advisor said, “I think the first step is probably to look at the legal and regulatory framework, because at the moment it’s not fit for purpose, because the act was written before the [new gene-editing] technologies we’re discussing were even invented” (Manhire, 2018). Similarly, in the wake of the European Court of Justice ruling that the genome editing techniques did create genetically modified organisms, the European Commission’s Chief Scientific Advisors (CSA, 2018) wrote: “In view of the Court’s ruling, it becomes evident that new scientific knowledge and recent technical developments have made the GMO Directive no longer fit for purpose” because the “definition of GMOs contained in the GMO Directive dates back to 1990.”
The concept and application of editing may be creating new possibilities in food systems and elsewhere. Nevertheless, the referenced laws were written at a time of familiarity with editing. The new tools have created capacities for harm that perhaps were not appreciated because of the technical limitations of editing tools available before the turn of the century, or because of the different contexts of the day. However, this kind of “newness” is akin to comparing a retail drone from 2020 to a remote control airplane of 1985. They are both remote control aircraft, but the sophistication and altitude of the 2020 version, in the context of other differences between society in 1985 and 2020, such as the density of airspace occupation and the availability of eavesdropping equipment, have added to what needs to be considered in governance. Those departures from past tools challenge the social license for use and subsequently the risk assessment if they are used.
The genome editing terminology used presently was rare or nonexistent in the 1980s–2000s, decades dominated by terms such as recombinant DNA (rDNA) and transgenes. Because of this, we include the term rDNA, or what the EU Chief Scientists call established techniques of genetic modification and described as methods “used to introduce DNA sequences from other organisms” (CSA, 2018), in order to make sense of when a source is trying to describe different risk pathways for new techniques. It is necessary to use “rDNA” and “transgenes” both because they are language of the sources we quote but also because they have embedded scale features.
The new techniques could be seen as being new in the sense that they have so far few released products (Montenegro de Wit, 2020). Therefore, there is also very little evidence confirming the long list of claims already made in their names (Gelinksy and Hilbeck, 2018; Hurlbut, 2018). Among the few products, we provide two examples relevant to agriculture and food that also have different positions in different scales. They are in two different species (one plant and one animal) and have different sized sequence changes (one a single nucleotide change and one a gene replacement).
The first example is a single nucleotide change, conferring herbicide resistance, in the enormously large and complex genome of canola (Gocal, 2014; Convention on Biological Diversity [CBD], 2020). The second is a gene alteration made in cattle to stop the development of horns. The reader should be aware that both of these products have been subject to postlaunch difficulties. The canola developer has, at time of writing, reportedly retracted earlier claims that its canola is the product of new techniques (Songstad et al., 2017; CBD, 2020). Instead, the change arose from somaclonal variation that occurs during tissue culture of the treated plant cell used to regenerate the modified plant (GMWatch, 2020; Meunier, 2020). The developer of hornless cattle has retracted its claims about precision and purity of the genome modifications (Carlson et al., 2016; Van Eenennaam et al., 2019) after FDA scientists found that they were not accurate (Norris et al., 2020). The company initially said that: “We have all the scientific data that proves that there are no off target effects” (quoted in Regalado, 2020), but it overlooked, among other changes, about 4,000 new nucleotides inserted during the application of the new techniques, including antibiotic resistance genes.
According to initial claims (Songstad et al., 2017), the herbicide tolerant canola plant had been produced using ODM and was an example of a crop plant made without using “transgenes” (Gocal, 2014). It is the type that could come about using what are called SDN-1 or ODM techniques. SDN-1/ODM techniques are the most influential examples of when technology approximates what happens spontaneously and therefore have provided regulators compelling reason to reconsider both governance and regulatory frameworks. Further developments on this will be discussed later.
SDN-1 applications cause “random changes to the genomic DNA sequence at specific locations, which are created by error-prone repair of double-strand breaks introduced by a” nuclease, an enzyme that breaks the bonds between nucleotides in a DNA molecule (Eckerstorfer et al., 2019). ODM applications involve a small DNA molecule called an oligonucleotide that is introduced into cells. The sequence of nucleotides in the oligonucleotide is used as a template to convert the sequence of nucleotides in the genome that is already similar but not identical to that in the oligonucleotide.
SDN-2, the kind of reaction used to make the hornless cattle (Sonstegard, 2018), involves the intended insertion of DNA, but the changes are normatively judged as “small” in scale: “repair of site-specific double-strand breaks to introduce small specific sequence changes at genomic targets” (Eckerstorfer et al., 2019). SDN-3 applications involve “larger-sized DNA elements of heterologous origin [inserted] into the recipient genome at specific locations” (Eckerstorfer et al., 2019). The kinds of changes that can be made with genome editing tools are not just small changes to existing DNA sequences; the same tools can be used to make “transgenic” organisms too. Our emphasis in this article is at the other end of the range, the class of outcomes that are argued to make existing laws out of date.
Gene silencing is induced in organisms by double-stranded RNA (dsRNA) molecules. These can be made by the organism or be synthesized and transferred into the cells of an organism. Gene silencing is often referred to as RNAi, for RNA interference. RNAi is only seen in eukaryotic cells, but prokaryotes have their own biochemistry for regulating gene expression with dsRNA. The important features of silencing are that it can be induced by human activity, it has the power to alter traits within a generation, and sometimes it too can cause heritable changes (Heinemann et al., 2013; Heinemann, 2019).
A brief history of scale in gene technology
Scale has always been at the foundation of gene technology governance models and regulatory systems. It has not always been explicitly understood in this way, however, and misunderstandings and substitutions of pseudo-scales have led to muddy thinking in the semantic (re)interpretation of legislation.
Pseudo-scales are normative judgments described in semiquantitative and measurable terms. They have features that can resemble relevant scales but have no or only a limited perceptual basis. They may have unspoken, perhaps often unrealized, value to some publics at some times but are inappropriate surrogates for measuring biological safety.
An example of a pseudo-scale in gene technology based on normative judgments is the measure of “foreignness,” as in “the status of organisms obtained through new plant breeding techniques requires confirmation that they have no nucleic acids derived from foreign organisms” or “foreign DNA” (Tsuda et al., 2019). This scale has embedded value relationships about how different components need to be from one another to be foreign (e.g., species concepts or places of origin) or how different the ways they come to be incorporated (e.g., bioballistics vs. bacterial conjugation) need to be for the outcome to be foreign. The hierarchy of editing classification as SDN-1 through SDN-3 mirrors the foreignness pseudo-scale.
In 1974, Nobel laureate Sydney Brenner articulated the relevance of scale to governance. Brenner’s conception of scale can be useful and generally applicable to governance models that require risk assessments.
It cannot be argued that this is simply another, perhaps easier way to do what we have been doing for a long time with less direct methods. For the first time, there is now available a method which allows us to cross very large evolutionary barriers and to move genes between organisms which have never before had genetic contact” and the “essence is that we now have the tools to speed up biological change and if this is carried out on a large enough scale then we can say that if anything can happen it certainly will. In this field, unlike motor car driving, accidents are self-replicating and could also be contagious. (emphasis added to Brenner, 1974)
Brenner was attentive to differences in organisms that may be sources or recipients of new (foreign) genes, but his focus was on the role of human activity as the scale driver for governance. He specifically spoke about how the emerging techniques of gene technology affected the rate and consequently the number of gene manipulations by direct human activity. There were no precedents of this pace of biological change in historical techniques of screening for traits and then selecting parents for mating in plant and animal breeding. Moreover, as we will see later in this article, the use of new genome editing and gene silencing techniques potentially makes biological changes at large geographical/spatial scales and across multiple species. There is no experience with the manipulation of the scales of time, space, and species range in gene technology prior to the invention of the new tools applied to the techniques of genome editing and silencing, much less as an analog of spontaneous events in nature. This feature separates human activities that may go unregulated from those that should be regulated. Gene technology in general, and the techniques of genome editing and gene silencing in particular, can scale risks one way while regulation looks the other way.
It was already known in the 1970s that genes could spontaneously transfer between organisms that had no known genetic contact (e.g., by mating) through what is called horizontal gene transfer (Heinemann, 1991; Heinemann, 1999). Brenner closed the door on the idea that because horizontal gene transfer happens independently of human activity, it is therefore no different to when humans transfer genes between genomes to create biotechnology products. The risk Brenner drew attention to was the compression of the time required to create a relevant number of transfers of a particular gene, which, in contrast to foreignness, can all be measured and controlled through regulation.
Nevertheless, language such as “natural GMOs” or “naturally transgenic” (Kyndt et al., 2015) instead of “spontaneous/natural horizontal gene transfer” still demonstrates—and causes—confusion, as in linking GMOs to the observation of sweet potato cultivars containing DNA sequences also found in a bacterial pathogen of plants. As the journal Nature proclaimed: “The sweet-potato genome contains genes from bacteria, so is an example of a naturally occurring genetically modified (GM) plant” (Anon, 2015). In fact, all plants and animals have genes from viruses and bacteria. Some come from mitochondria that reside in each of our cells. Mitochondrial ancestors were free-living bacteria. Some of these genes from the mitochondria have migrated to the chromosomes kept in the cell nucleus (Martin, 2003). There are many examples of genes from viruses and bacteria in plant and animal genomes because of horizontal gene transfer. It would have been extraordinary if the sweet potatoes had no DNA from bacteria.
These headlines and phrases linking spontaneous events in nature to outcomes of gene technology are intended to “influence the public’s current perception that transgenic crops are ‘unnatural’” (Kyndt et al., 2015). The DNA that the researchers found had entered the sweet potato lines spontaneously, via the same bacterium that was, beginning in the 1980s, manipulated by technologists to deliver transgenes into plant cells. Presumably, they thought that the distinction would be lost on nonbiotechnologists, in the same way that someone might confuse getting poked by a sharp branch while walking through a forest with being pierced by an arrow shot from a bow, because both the branch and arrow are made of wood.
Pseudo-scales as problems
In the simplistic view we the authors hold, regulatory systems fulfill the objectives of governance. Regulation acts when a legislative trigger is pulled. The trigger can differ by jurisdiction, such as when it can be pulled by all applications of gene editing techniques in Europe but only some of them in Japan. For a list of when new techniques result in exemptions from regulations in various countries, we refer readers to other sources (Eckerstorfer et al., 2019; Macnaghten and Habets, 2020).
Regulatory systems are categorized as based on process, product, or novelty (National Academies of Sciences, Engineering, and Medicine [NASEM], 2016; Eckerstorfer et al., 2019). There are no firm boundaries between these three types and operationally they overlap. For instance, process-based regulation still evaluates products, and product-based often selects products produced by specific processes. The United States regulates a genetically modified plant differently depending on whether or not a plant pest was part of the process in making the genetically modified plant (Waltz, 2015). Novelty is an aspect of a product that makes it subject to regulation (Eckerstorfer et al., 2019). All three frameworks have advantages and disadvantages, and their implementation is probably not purely one or the other in any jurisdiction.
The features of the new techniques discussed in this section are those that are commonly referred to as making them different in important ways from rDNA/transgenes, such as the ability to precisely make small interventions like single nucleotide changes in a genome that are indistinguishable from changes made by nature. Each of these features is described as a pseudo-scale, such as the range from imprecision (“randomness”) to sequence-unique precision, size of change, measures of equivalence, and measures of naturalness. However persuasive the pseudo-scales have been for some, they have not catalyzed a uniformity of opinion about whether or how new techniques should be regulated.
Foreignness
Gene technology has many different scaling relationships that can give rise to emergent properties or respond to stochastic effects. For example, the size of a foreign DNA molecule does not predict the size of change, if any, in a phenotype, nor does it predict the rate at which the change may spread in a population of organisms, nor does it predict the effects it has on the environment in which the new phenotype is introduced, and nor can it predict how an environment might adapt to an invading population of organisms with the phenotype. How much “foreign” do you need for harm?
Examples such as sickle cell anemia demonstrate the impotence of using pseudo-scales such as foreignness as legislative triggers for techniques that might only change a single nucleotide but at the global level might change patterns of habitation. The sickle cell phenotype arises from a single nucleotide change in the gene that specifies adult hemoglobin. The change of the nucleotide A (adenine) to T (thymine) at one position in the gene causes one amino acid change in the protein, from glutamic acid to valine. The change is small, a substitution of one nucleotide for another in a genome of 3 billion nucleotides. The change from A to T changes which amino acid is incorporated into the protein at a single position in a multi-peptide protein, amounting to a change of approximately 0.1% of its molecular mass. The change in interaction between the sickle cell hemoglobin and its intracellular environment alters the propensity of hemoglobin molecules to clump together. The effect of this clumping severely changes the capacity of red blood cells to carry oxygen. People who have some variant hemoglobin have a very different phenotypic susceptibility to malaria, allowing them to move into and occupy environments with mosquito vectors of the malaria-causing parasite; having only variant hemoglobin is lethal. At the very next position in the gene, a change from a G (guanine) to an A would have no effect. Thus, changes of equal “size” at the DNA level have different relationships to scale of change at other levels.
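To make the disproportion between these scales concrete, the sketch below works through the arithmetic in Python. The residue masses, the mass of the hemoglobin tetramer, and the assumption of one substitution in each of the two beta chains are approximate textbook values we supply for illustration; they are not taken from the sources cited above.

```python
# Minimal arithmetic sketch of the "size" of the sickle cell change at two levels.
# Assumed approximate values (not from the cited sources): amino acid residue
# masses and the mass of the adult hemoglobin tetramer (two alpha, two beta chains).
glu_residue_mass = 129.1             # glutamic acid residue, daltons (approx.)
val_residue_mass = 99.1              # valine residue, daltons (approx.)
hemoglobin_tetramer_mass = 64_500.0  # daltons (approx.)

# One A->T change in the beta-globin gene; the tetramer carries two beta chains,
# so two Glu->Val substitutions per hemoglobin molecule.
mass_difference = 2 * (glu_residue_mass - val_residue_mass)
fraction_of_protein_mass = mass_difference / hemoglobin_tetramer_mass

genome_size = 3_000_000_000          # nucleotides, approximate human genome
fraction_of_genome = 1 / genome_size

print(f"Change in protein mass:    {fraction_of_protein_mass:.2%}")  # ~0.09%, i.e., ~0.1%
print(f"Change in genome sequence: {fraction_of_genome:.1e}")        # ~3.3e-10
# Neither tiny fraction predicts the phenotypic consequences: altered hemoglobin
# clumping, impaired red blood cells, and changed susceptibility to malaria.
```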
Nature as a quantitative scale
The European Court of Justice repeated Brenner’s scale framing when it said that the new techniques “make it possible to produce genetically modified varieties at a rate and in quantities quite unlike those resulting from the application of conventional methods of random mutagenesis” (recounted in CSA, 2018). Although not explicit in the Court’s statement, the rate of production includes the rate at which modifications can be made by sequential applications of genome editing to one or more genes each time. (This will be a focus later because the rate of serial modifications to a genome is a dimension of scale that is very responsive to the new techniques and is a critical control point.)
In contrast, others have coined normative scales to describe nature or naturalness as quantitative. For example, a substantial number of members of the Norwegian Biotechnology Advisory Board (NBAB, 2018) refer to a quantity of naturalness, concluding that “genetic engineering is not inherently more problematic than other technologies if the products have similar traits and do not deviate too much from nature” (emphasis added). The New Zealand Royal Society Gene Editing Panel noted that “gene editing techniques provide a continuum of change that starts at the scale of natural mutations, and ends with the future possibility of creating synthetic organisms” (emphasis added to Everett-Hincks and Henaghan, 2019, p. 5). This is not a scale with small mutations at one end and large ones at the other, as one might think, because the Panel says that “Natural mutations can also involve long sequences being inserted, e.g., transposon insertions” (RSNZ, 2019). Thus, the Royal Society also invokes a pseudo-scale of nature as a quantity, as in how natural a change is.
In response to the European Court’s ruling, the EU Chief Scientists too drew special attention to a nature metaphor in the form of “a naturally transgenic food crop” (Kyndt et al., 2015). “Therefore, if referred to in the legislation, the concept of ‘naturalness’ should be based on current scientific evidence of what indeed occurs naturally, without any human intervention, in organisms and in their DNA” (CSA, 2018). They were suggesting that this interpretation of an interaction between a sweet potato and its pathogen (Agrobacterium tumefaciens) that occurred spontaneously in the distant past was the same or similar to any number of future human–plant interactions mediated through using any number of genes and variants of genes over unspecified time periods and measures of harm.
Implicit in this use of nature is not a scale where the probability or severity of harm of a technology is measured, but a different scale. This scale says that if the same harm arises by two different means, either spontaneously or through technology, then it is not necessary to control the risk from the latter. This is a messy scale because how to measure the metaphorical equivalence of “natural” processes with technology, or the harm, is not made explicit. Nor is it obvious why a society would not try to reduce harm from any source over which it has control. Because the scale therefore relies heavily on nonscientific judgments—regardless of whether or not scientists are making them—it is difficult to see how it is useful as part of a governance model that relies on risk capture through triggering regulation. We return to this point later in discussing whose metaphors matter (Box 1).
Distinguishable from nature
Along with the metaphor “natural” as a quantity is the inclination to measure differences from nature. Contested scientific interpretation is accumulating rapidly at this point in our discussion, and it is necessary to take a step back and clarify some ideas. Nature does two things nearly simultaneously when it determines “what indeed occurs naturally.” First, it is a condition in which spontaneous DNA changes can happen, as was observed by the Chief Scientists: “Mutations occur naturally without human intervention” (CSA, 2018). Second, selection based on the fitness of an organism in its environment determines whether it occurs in nature. It is an error by the Chief Scientists and others to say that mutation “is the underlying mechanism of natural evolution.”
“Evolution is a population concept. An individual does not evolve; only populations evolve in the face of the genetic changes accumulated from one generation to the next” (Russo and André, 2019). Mutation is a source of underlying genetic change in individuals that contributes to variation in phenotype, which is acted upon by natural selection, the underlying mechanism of evolution in populations. Human activity can substitute for natural selection and thus introduce possibilities that do not occur in nature even if the same mutations were to occur.
We offer a thought experiment to illustrate this point. Let’s assume that the spontaneous mutation rate in maize is on average 5 × 10–8/base pair. The probability of two plants arising with a mutation in the same position is 2.5 × 10–15. At current average planting densities, the United States would need to grow 12 billion acres, or 135 times the land used for maize now, for two such plants to spontaneously arise. Provided that maize was reproducing without human intervention, those two mutants would then have to amplify within the population as a function of their added fitness, if any. That process would occur over what is called evolutionary time. In contrast, using gene technology, nature can receive, year on year, tens of millions of plants with that particular mutation all at once, or using the “spray” technologies that are being developed (discussed below), tens of millions of maize plants in fields can be uniformly transformed while they grow in the field. Unlike the pseudo-scale of naturalness, human scalability factors can be precisely determined: in this instance, the technology increases the selected mutation rate by a factor approaching 1 billion and increases population sizes in proportion to the number of acres exposed.
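The arithmetic behind the thought experiment can be laid out explicitly. The sketch below, in Python, reproduces the reasoning; the planting density and the current US maize area are rounded assumptions we supply for illustration.

```python
# Sketch of the maize thought experiment; planting density and current US maize
# area are rounded assumptions supplied for illustration.
mutation_rate = 5e-8                         # assumed spontaneous rate per base pair
p_two_plants_same_site = mutation_rate ** 2  # 2.5e-15

plants_per_acre = 33_000                     # assumed average planting density
us_maize_acres = 90e6                        # assumed current US maize area, acres

# Plants, and therefore acres, needed before two plants carrying a mutation at
# the same position are expected to arise spontaneously.
plants_needed = 1 / p_two_plants_same_site   # 4e14 plants
acres_needed = plants_needed / plants_per_acre
fold_increase = acres_needed / us_maize_acres

print(f"Acres needed: {acres_needed:.1e}")                            # ~1.2e+10 ("12 billion")
print(f"Fold increase over current maize area: {fold_increase:.0f}")  # ~135
# By contrast, gene technology can deliver the chosen mutation to every plant
# sown, so tens of millions of plants per year carry it from the outset.
```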
Writing at the dawn of gene technology, a group of scientists from Scotland made a similar observation on this point. At the time of their writing, all rDNA technology was new and the first concerns were about making novel pathogens of humans, even by mistake. Nevertheless, their description of nature/technology (below) is still resonant today using our example of an unintended adverse mutation in maize.
We believe that experiments of the sort we envisage already occur naturally with a low but significant frequency…However, our experiments differ in one important respect from natural ones, in that we are able to produce very large quantities of a single type of transfectant, and to release it all at once. (Bishop et al., 1974)
Distinguishable from other mutations
Distinguishability is also invoked at the micro level, the biochemical reactions such as those that swap one nucleotide for another. As described by the Australian DoH (2018):
It has been argued these techniques produce changes that can be identical to those that are, or could be, produced in nature (i.e., naturally) and can be indistinguishable from conventional or other techniques that have been excluded from the Scheme (due to a history of safe use). There is also complexity in determining the reference point for what is ‘natural’, given it is not a static state.
Distinguishability is influenced by technology which is also not in a static state. Indistinguishability (i.e., perceived equivalence) is a conclusion that is reached using someone’s choice of the characteristics measured. At one time, monozygotic twins were (and are still referred to as) identical. Indistinguishability was taken away from them by fingerprint technology and the discovery of genomic methylation patterns, demonstrating how technology itself informs and shapes perception (Box 1). What characteristics make one thing like or unlike another depends on “what is chosen for knowing [which] means also choosing what may remain unknown, and such intentional or accidental social production of ignorance will affect societal ability to assess, manage, and respond to social and environmental hazards” (Agapito-Tenfen et al., 2018).
Advances in technology lurch from being reassurances that any changes made by the new techniques could be characterized, because with “modern gene sequencing, any unintended insertions can be identified and, if undesirable, can be eliminated from the breeding programme”, to being cast aside just sentences later because “some gene editing events will be indistinguishable from naturally occurring variation or variation induced by mutagenesis” (RSNZ, 2019). They may so remain if a governance structure allows distinguishability by either technological or legal means to be a chosen unknown.
The NBAB frames the problems of distinguishability similarly and suggests that the critical issue will be regulatory capacity. “This may solve one of the major contemporary challenges for regulatory authorities: that enforcing current provisions may prove difficult or even impossible when many genetically engineered products are indistinguishable from other products” (Bratlie et al., 2019). Reducing regulatory scope is seen not only as a solution to a hypothetical future challenge, it is also promoted as a hypothetical benefit:
New technologies are cheaper, more accessible and more precise, and enable an increasing range of applications and products. This is also driving a diversification of the stakeholder landscape as research and development are increasingly shifting from big industry towards academia and small- and medium-sized enterprises. (Bratlie et al., 2019)
In this frame, there are no downsides to deregulation. The decentralization of the technology is expected to increase the number and kind of users to include individuals working in their garages. There are inherent risks with larger numbers of people editing and silencing genes in all sorts of organisms, including bacteria that are part of food production and consumption, and with unknown ability to assess the degree or level of change being brought about in them (Heinemann and Walker, 2019; Montenegro de Wit, 2020). Changes in the number of users (and their contexts of use) are a critical control point because as the number of users increases, the capacity of regulation is challenged. A critical control point process would recognize the scale change to either potential harms or benefits that result from decentralization and deregulation.
In any case, questions of distinguishability are destined to be historical markers on the journey of gene technology. Already, cracks are emerging in the analogy of equivalence (Kawall, 2019). It is true that both SDNs and ultraviolet rays generated by the sun can cause breaks in DNA molecules (Jones, 2015), but that is only a superficial equivalence between genome editing and what occurs “naturally.” How breaks in DNA caused by these processes are reversed (referred to as damage repair) may differ depending on what caused the breaks. Brinkman et al. (2018) found that the genome editing nuclease Cas9 significantly skews the outcome of repair initiated by the damage it causes in human cells, where it is outside its natural context. This is evidence of distinguishability even if current technologies may not be fully able to exploit this evidence for traceability.
Furthermore, reactions mediated by enzymes such as SDNs and radiation energy are themselves not equivalent processes. Spontaneous mutations may not occur with equal probability everywhere in the genome (Makova and Hardison, 2015), which could make them different from those caused by artificial exposures to SDNs.
Unlike damage to genes that occurs in individual (mainly somatic) cells of multicellular organisms, the use of gene technology is concentrated in cells destined to lead to multigenerational effects. Finally, mass-produced formulations (“kits”) used in gene technology have now been found to be contaminated with sufficient quantities of DNA from the biological sources of their components to become part of the repair (Ono et al., 2019). Therefore, the assertion that some genome editing techniques do not introduce “foreign” DNA is likely to be false, and the aspiration to regulate based on the insertion or otherwise of foreign DNA is an ideological fiction.
Many different perspectives have been published that adopt a view expressed similarly to “if the only alteration introduced is nothing else than one single base mutation [as can be the case with some applications of genome editing], then it is not at all possible to identify the technique used” (Wasmer, 2019). As will be shown for the genome edited canola product discussed below, existing technologies can detect single nucleotide changes (Chhalliyil et al., 2020), and the detection technology available for these extreme examples is already able to do more than it is routinely required to do now (Agapito-Tenfen et al., 2018).
As for the point about how the single nucleotide came into being, that argument would also apply to large insertions of DNA such as transgenes. It is only possible to definitively know the origin of a DNA sequence that ranges in length from one nucleotide to thousands by having additional contextual information, such as reference genomes to compare to or legal compulsion to disclose detection methods for identifying the transgene. Recall the example of the “natural GMO” from above (Kyndt et al., 2015) and consider that it is also not possible to prove without contextual information that the DNA of bacterial origin described in sweet potato was spontaneous and not put there by human activity, later to be discovered by Kyndt et al. (2015).
Distinguishability also has a socioeconomic dimension that is systematically neglected in many accounts of why new techniques should be deregulated. For commercial applications, the intellectual property rights of the developer would be unenforceable and untraceable if the product were indeed indistinguishable from variants that have arisen in nature. As long as the use of the process of genome editing permits utility-type patents on products, the products will be distinguishable in the germ line if someone wants them to be. The origin of the changes to genomes will only be unknown if someone wants them to be, too.
The herbicide resistant canola variety that was created by ODM (discussed above) was considered to be an early demonstration of an agricultural and food product created through one of the new techniques. It was also a concrete example of how the new techniques create regulatory ambiguity because the alteration of the DNA sequence was a single nucleotide change, also called a point mutation or single nucleotide polymorphism, compared to the starting germplasm.
Depending on the mutation type and the context in which it is used, it will be difficult and sometimes impossible for applicants to provide a detection method for gene edited products which will meet regulatory requirements (Casacuberta and Puigdomènech, 2018), for instance in the case of point mutations. (CSA, 2018)
A group of civil society organizations and a commercial certification company took up that challenge and more. Even without the cooperation of the company that created the ODM canola, they were able to find a single base change in a genome of billions of bases. The genome also had very similar copies of this altered gene, with at least one of these copies containing a different single base change not created using new techniques (Chhalliyil et al., 2020). Despite all these challenges, their method worked and was verified by an independent certified testing laboratory, meeting legal requirements for specificity and sensitivity and demonstrating the ability to integrate into current testing infrastructure. This remarkable outcome from civil society demonstrates that it is possible to choose different unknowns, and theirs did not include indistinguishability.
Precision as a scale
Genome editing and gene silencing techniques are described as new, exciting, and powerful and are simultaneously claimed to be safe. These statements are made despite the rapid development of the techniques and are supported by evidence from only a few products across a very limited number of species (Hurlbut, 2018).
One of the iconic terms used to highlight genome editing and gene silencing and distinguish them from other processes is “precision.” Not unexpectedly, this quality is already beginning to look like a straw man for the fire. The “highly precise nature of the genome editing technology CRISPR/Cas” (Wolter et al., 2019)—a description from May 2019 nearly ubiquitous in the contemporary literature—is already becoming tomorrow’s clumsy imprecision. “For all the ease with which the wildly popular CRISPR–Cas9 gene-editing tool alters genomes, it’s still somewhat clunky and prone to errors and unintended effects” begins an article published in October 2019 (Ledford, 2019, p. 464).
This pattern of praise and perdition is surprisingly common in the history of gene technology with little explicit recall from technique to technique. Gene silencing (by a process called RNAi) was initially described as specific, a synonym of precise. The “RNAi machinery can be both flexible and exquisitely specific” to its target (Hannon, 2002, p. 248). RNAi’s specificity was muscled out with the rise of genome editing, as this excerpt from a patent on CRISPR/Cas9 shows: “Currently the most common approach for targeting arbitrary genes for regulation is to use RNA interference (RNAi). This approach has limitations. For example, RNAi can exhibit significant off-target effects and toxicity” (Doudna et al., 2019).
As demonstrated by the following passage written over 30 years ago to reflect on the events dating back another 15 years, the pattern also repeats between generations of scientists:
The R-DNA technology developed over the last 15 years has permitted a new and more precise kind of genetic manipulation…Breeders who use traditional techniques change (or mutate) genes and move them, but…[t]heir methods are much less precise and controlled. A mutation made by traditional techniques may be accompanied by many unknown mutations, which often have deleterious effects on the organism…The power of R-DNA techniques lies in their ability to make extremely precise alterations in an organism… (NRC, 1987, p. 11)
Precision has been used to market gene technology for a long time. The moving target of precision is framed as a scale from large (“random mutagenesis”) to small numbers of unintended changes that occur in genomes. How many mutations is a large change? How many unintended changes is a small change? The answers can be useful within a risk assessment but are impractical for guiding governance. Likewise, if greater biochemical precision for making mutations does not meet society’s harm prevention expectations, then who will be held to account? Similar questions are being asked in munitions technology where increases in precision make it harder for off-target damage to be dismissed as unintentional (Ciocca and Kahn, 2020).
The language of precision has not been decisive because it fails to acknowledge what Brenner saw from the beginning. Precision is also a means to hit your target with less effort and time and thereby “make it possible to produce genetically modified varieties at a rate and in quantities quite unlike those resulting from the application of conventional methods of random mutagenesis” (EU Court as quoted in CSA, 2018) or breeding. Because the scale of production grows with increases in precision, and grows faster than the incremental improvements in target specificity (Figure 2), “if anything can happen it certainly will” (Brenner, 1974).
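A minimal numerical sketch of that relationship follows; the throughput and error-rate figures are arbitrary assumptions chosen to illustrate the logic, not estimates for any real technique.

```python
# Illustrative sketch: expected unintended changes = applications x per-application
# error rate. The numbers are arbitrary assumptions chosen to show the relationship,
# not estimates for any real technique.
scenarios = [
    # (label, applications per year, unintended changes per application)
    ("older tool, small scale",   1_000,     0.10),
    ("precise tool, large scale", 1_000_000, 0.01),
]

for label, applications, error_rate in scenarios:
    expected_unintended = applications * error_rate
    print(f"{label}: {expected_unintended:,.0f} expected unintended changes per year")

# A 10-fold gain in per-application precision combined with a 1,000-fold gain in
# throughput still yields 100 times more unintended changes overall: "if anything
# can happen it certainly will."
```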
Useful scales for regulation
Pseudo-scales are exposed by having normative thresholds or superficial relationships to safety. The pseudo-scales of distinguishability and precision illustrate how industrial concepts of production can be repackaged as safety features. If a mutation created by a technique of gene technology is indistinguishable from one that emerges spontaneously, it will also be indistinguishable from another mutation made by a technique of gene technology. Uniformity of production methods and outcomes reduces the cost of production by increasing quality control and reliability of postsale performance. That industrial utility morphs into a suggestion that society should treat products of gene technology and spontaneous mutation the same. Likewise, precision increases the efficiency of making the intended modifications to genomes, thus reducing development costs. Whether or not these efficiencies reduce the number of unintended changes to a point where they become irrelevant to a risk assessment is far from clear (Macnaghten and Habets, 2020).
Pseudo-scales are also revealed through use of undefinable or arbitrary units of measure, such as in foreignness and naturalness. These units become defined by those with prevailing financial and political power and who hold the keys to particular technological imaginaries of the future (Heinemann, 2009; Heinemann et al., 2014). In turn, that alignment creates a social ratchet that resists movement “backward” to more inclusive decision making across all possible visions by multiple publics. Expectations and actions that “advance” a technology favored by powerful institutions drag social license behind it (Montenegro de Wit, 2020; Roberts et al., 2020). Society at large only influences the rate at which the ratchet turns.
Pseudo-scales serve narrow conceptions of risk. Transferring issues of social importance to a biological risk discourse has failed to produce a unifying social license (Kuzma and Grieger, 2020; Wirz et al., 2020). “Narrowing regulatory consideration to an assessment of empirically demonstrable risks to human health and safety of biotechnologies as isolated (and isolatable) objects therefore fails to recognise, account for and consider” that GMOs are “hybrid bio/socio/techno objects, they are shaped by the interests, values, goals and visions that arise with their contexts of development and deployment…they also shape the discourses, practices, knowledge, skills, meanings, problems and purposes of the human and non-human actants they emerge into being with” (Herrero et al., 2015). New approaches to obtaining social license are needed (Marris, 2015; Foley et al., 2016; Steinbrecher and Paul, 2017; Lassen, 2018; Dressel, 2019).
To partly address this need for social license, co-design is incorporated into our framework built around critical control points, as the entry gate. The gateway is the transition between the idea and the decision to act on the idea (Figure 3A; Wirz et al., 2020). It is inescapably and properly built on values of multiple worldviews rather than a “scale” as such. Seeking social license prior to commitment flips the familiar order where technologies seek their approval post release (Pavone et al., 2011) and democratizes how imaginaries, albeit “fraught with epistemic complexity” (Macnaghten and Habets, 2020), are made. The gateway is sincere when “the choice to say no to particular visions of progress” is real (Montenegro de Wit, 2020). This stage is similar to, among others, Macnaghten and Habets’ (2020) inclusion dimension as built into the RRI and an integral part of Pavone et al.’s (2011) in-context trajectory evaluation models.
If the transition from idea to a provisional social license has been completed, then a regulatory framework is required. Regardless of whether or not the framework approaches the RRI structure or persists as a biological safety assessment, critical control points can be identified based on processes/activities where agreed risks are most amplified.
From discussions at the origin of regulation of gene technology, some scientists understood that human activity was the feature that distinguished genetic change brought about by gene technology from spontaneous mutagenesis. This would not change by making technology mimic the biochemistry used by nature because that biochemistry is not the only source of risk (Macnaghten and Habets, 2020). Risk can be minimized or avoided by regulatory interventions that interrupt the connection between the use of gene technology and the generation of harm. We call these critical control points because it is at these transitions of use that the risk is amplified by the technology.
Containment facilities and restricted materials
Gene technology techniques, in contrast to breeding and selection, historically have been applied to cells or organisms in isolation (“in vitro”). For mutagens such as rDNA, the efficiency of delivery, that is, causing penetration of the target cell and relevant subcellular locations, is low, as is the frequency of its integration into chromosomes. Therefore, organisms to be transformed through the use of rDNA have to be concentrated and kept free of any contamination (Doyle et al., 2019). In many cases, from bacteria to plants, additional chemicals such as antibiotics are used to both concentrate and protect the rare recombinant cell. These conditions require laboratory environments (D’Alessio, 2019).
Containment strategies, ranging from physical containment inside secure laboratories to biological containment such as pollen sterility, also are commonly used to short circuit pathways through which products of gene technology can cause harm (Hurlbut, 2018). Moreover, containment strategies were already being applied to regulatory control of other technologies and products, including cancer-causing materials.
For mutagens such as highly radioactive material or hazardous chemicals, restrictions that control exposure are required to ensure worker safety. Such tools are also regulated to make sure that they are disposed of properly because of their potential to be weaponized or to avoid their use in criminal acts or industrial sabotage.
Although organisms modified through use of chemical and radiation mutagens may not be regulated as genetically modified organisms because they are exempt from the regulations in various countries, they are not de facto free from regulation or oversight. Powerful radiation sources and chemical mutagens are not released into the environment but used in highly regulated containment facilities. Tight regulation of the materials of the mutagenesis process prevents their broad-scale use, for example, by noncommercial or “do-it-yourself” communities or even by different scientists at the same university. Regulatory requirements imposed on those who purchase, use, and then dispose of mutagenic chemicals and radiation sources contribute to a risk management system that has provided evidence of a “history of safe use.” Moreover, living biological products are tracked through an international register maintained by the International Atomic Energy Agency.
Regulation applied to tools of this form of gene technology has managed risk. Centralization of actors and accountability of their activities has controlled the unintended effects from use or disposal of the mutagenic material. Decentralizing access to other tools of gene technology is a risk identified by Australia. “The Review includes recommendations…ensuring the [gene technologies] Scheme is suitably equipped to regulate work with GMOs undertaken outside of universities, research institutions or large companies” (DoH, 2018). The restricted access to chemical and radiation mutagens may have contributed to their history of safe handling. The risk from organisms made using the mutagens may also have been reduced because of the management of the mutagens and the constraints on those who were accessing them for the purposes of genetic modification.
Physical containment of genetically modified organisms is a ubiquitous strategy for managing the distributed risk arising from experiments conducted by tens of thousands of gene technologists around the world. It sometimes fails but has nevertheless been remarkably effective. However, the transition from physical containment to the open environment requires a new risk assessment that takes into account the outside world and its far greater number of variables (Ad Hoc Technical Expert Group, 2012). This frustrates some biotechnologists and companies and, in the case of agriculture, some farmers.
Their frustration is not just with regulation. Until now, the tools for creating products of gene technology have required carefully controlled laboratory environments, specialist personnel, and expensive equipment. Those demands have also been rate limiting.
Recognizing this, biotechnologists have been searching for tools that would be both unregulated and usable outside of the laboratory (Doyle et al., 2019). In an example from the patent literature, the issue of scalability is explicitly discussed in the context of treating organisms with nucleic acids (DNA, RNA) in the open environment, particularly for the purposes of gene silencing.
There is a need for introducing nucleic acids, such as DNA or RNA…into plants, where the methods are scalable so as to be practical for use in multiple plants, such as plants in a greenhouse or growing in a field. Most methods of introducing a nucleic acid for gene suppression are cumbersome and therefore generally of practical use only on individual plants in the laboratory or other small-scale environments. (emphasis added to Huang et al., 2018)
In that particular example, the two scales where human activity changes outcomes are species range and geographical/spatial range, taking the application of the methods for delivery of gene-altering treatments outside a laboratory and into a landscape. Along with geographical and species range, we will add reaction efficiency, serial/multiplex applications, and regulatory capacity.
Efficiency, serial, and multiplex applications
Genome editing using SDN types 1–2 or ODM may achieve phenotypic outcomes that could, in principle, be achieved without these tools. In practice, however, using other techniques or selecting spontaneous mutants would be uneconomical or simply too slow (Mueller et al., 2019; Wolter et al., 2019). The developers of the commercialized herbicide resistant canola discussed earlier, for example, said that “Tan et al. (2005) reported that the S563 N mutation in the canola AHAS gene was not successful[ly isolated] after several decades of chemical mutagenesis, thus reflecting the value of ODM in producing this mutation” (Songstad et al., 2017).
The impacts of human activity are differentially geared up by genome editing techniques because they reduce how long it takes biotechnologists to generate changes in genes (Kawall, 2019). The capacity to target a location allows serial changes to the same genomic locus to be made, making nearly any possible kind of change, desired or unintended, achievable (Schenkel and Leggewie, 2015).
Changes to many different loci may be made in a single treatment in multiplex applications. It is this capability that “random”—or more accurately unguided—mutagenic processes do not have and why it can be difficult or effectively impossible to apply them to the same desired ends. The ability to rapidly change and return to a particular locus for serial changes, or to treat multiple different parts of a genome simultaneously, decreases the time required to engineer desired phenotypes into existence (Lim and Choi, 2019; Reis et al., 2019; Riesenberg et al., 2019; Wolter et al., 2019). At a remarkable frequency of one in only 67 treated protoplasts (>0.02%), CRISPR-Cas9 performed simultaneous changes in two different genes to generate a new color in petunia. “The CRISPR-Cas system is now revolutionizing agriculture by allowing researchers to generate various desired mutations in plants at will…we demonstrated a precedent of ornamental crop engineering by DNA-free CRISPR method for the first time, which will greatly accelerate a transition from a laboratory to a farmer’s field” (Yu et al., 2020).
Serial and multiplex applications do not always require a laboratory. For example, as discussed more extensively below in the sections Species Range and Geographical Range, the means to use mechanical and chemical methods for external treatment of organisms is rapidly developing (Heinemann and Walker, 2019). Because of their ability to differentially collapse the time scale for use, serial and multiplex applications could be used to achieve outcomes similar to those proposed for gene drives. Gene drives are genetic elements that distort the transmission of traits to offspring (Sandler and Novitski, 1957; Conference of the Parties [COP], 2014; Dressel, 2019). In organisms that undergo meiosis in reproduction, such as people, each parent is expected to make an equal contribution to the offspring. A meiotic drive disrupts that balance in favor of one parent. Meiotic drives are not the only kind of gene drives, and genome editors for both “sexual” and “asexual” forms of reproduction can be constructed to distort inheritance patterns (Cooper and Heinemann, 2000; de Lorenzo, 2017; Valderrama et al., 2019). Although drive mechanisms perpetuate themselves, the tools for external applications of genome editors allow human activity to drive persistent effects instead. The “drive” becomes the ability of humans to make large-scale releases rather than use breeding as a dispersal mechanism for the effects on fitness.
Species range
The difficulty of delivering the materials (DNA, RNA, proteins) needed to cause edits or silencing into cells has limited how quickly the technology can be applied. The transition from radiation and chemical mutagens to rDNA genetic engineering required finding ways to get the DNA into a cell, and where relevant the cell nucleus, intact and in a state in which it could integrate or recombine with chromosomes (or replicate independently), and then to regenerate an entire organism from a single transformed cell (Birch, 1997; Doyle et al., 2019).
All plant transformation must currently utilise either Agrobacterium tumefaciens, biolistics, or regeneration from PEG transformed protoplasts, as a vehicle to introduce DNA, regardless of whether the changes are transient (non-heritable) or stable (inheritable). This means that not only are established advances such as GM species and cultivar limited, but new technologies such as gene editing suffer the same bottleneck as these delivery methods are still required. Furthermore, even in species and cultivars where plant transformation is possible, the process is expensive, slow, requires significant resources in terms of facilities and expertise, is frequently inefficient, and damages the plant genome. (Doyle et al., 2019)
Various and equally vexing challenges of the type referred to by Doyle et al. (2019) exist for all potential target organisms—bacteria, fungi, plants, and animals—with each requiring customized methods, many of which work for only some species of each group (e.g., Yoshida and Sato, 2009; Rivera et al., 2014; Kelliher et al., 2019; Ren et al., 2019; Lule-Chávez et al., 2020). The difficulty of developing and deploying customized methods and materials, along with the need for highly specialized expertise, limits the ability of gene technology to amplify the risks. As the need for customization diminishes, so do the limits.
Those limitations are being overcome with the invention of new delivery technologies. An examination of these new products is well beyond the scope of this article but was recently reviewed by us (Heinemann and Walker, 2019). Being able to easily penetrate living tissue of all organisms with DNA, RNA, and proteins, even achieving this delivery at a lower efficiency than existing customized methods, adds a differential scalability to gene technologies (Lule-Chávez et al., 2020).
Particularly because the new delivery technologies make it possible to edit and silence genes in real time out of doors, two scalable dimensions are simultaneously invoked. They are species range, as discussed in this section, and geographical range, as discussed next. For these reasons, this critical control point is addressed by regulating access to and use of the new delivery technologies.
Geographical range
The tools for delivering proteins and nucleic acids for editing and silencing are both easily accessible and mechanizable (Doyle et al., 2019; Heinemann and Walker, 2019). New delivery technology makes it possible to edit or silence genes in target organisms spread over large landscapes: lawns, crop fields, sewer networks, cities, regions, and oceans. No laboratory, expensive personnel, or special materials are needed, and therefore, this new delivery technology when combined with gene technology can be adopted by everyone from do-it-yourself genetic engineers and school teachers to private and state actors. Outdoor applications of genome editing and gene silencing thereby create unprecedented potential to amplify risk.
We have identified two dominant strategies for making genetic engineering a geo-scalable technology. The first is to create an organism that produces the necessary componentry itself. This componentry can be molecules of RNA (dsRNA, as used in gene silencing) or, as appropriate for genome editing, a site-directed nuclease (such as a TALEN or ZFN) and, in the case of CRISPR/Cas9, its nucleic acid guide. The organism may be inter alia a genetically modified virus, plant, or insect. Plants that are engineered to silence the genes of pest insects or viruses are already commercially available. An example of the former is a maize that produces dsRNA that silences an essential gene of the pest western corn rootworm. The dsRNA is taken into the rootworm by ingestion of plant material (Bachman et al., 2016).
A variant of this strategy, called haploid-induction editing, is used to accelerate breeding of elite plant varieties. One (transgenic) parent in a cross produces Cas9, an SDN and its nucleic acid guide molecule (called a guide RNA; Kelliher et al., 2019; Wang et al., 2019). This parent also has the trait called haploid induction. Haploid induction results in the loss of one of the two parental genomes in the fertilized embryo during development. Offspring that have only the genome from the elite parent (which does not have the cas9 gene) will have a change at the intended gene because of CRISPR/Cas9 activity in the embryo when both genomes were temporarily together. This technique is expected to greatly reduce the barriers to transforming many elite lines directly and can be used to create interspecies modifications. For instance, pollen from a transgenic haploid-inducing maize plant can fertilize wheat (Kelliher et al., 2019).
The second strategy is based on making contact between the genome editing/silencing materials (DNA, RNA, protein) and the outer surface of the organism or with its inner surfaces which become exposed through inhalation or ingestion (Heinemann and Walker, 2019). The delivery vector is not a genetically modified organism as above, but a formulation of a nuclease and/or nucleic acids that cause genome editing or gene silencing. The proteins and/or nucleic acids to be transferred are carried by mechanical or chemical vectors that have little specificity and thus overcome the barriers that have constrained application scales of gene technologies in the past. So whereas the proteins and/or nucleic acids may be site directed once inside a cell, the range of target and off-target effects will be determined by both the number of DNA sequences vulnerable to editing/silencing in the target organism and the number of vulnerable sequences in any exposed nontarget organism.
Taken together, these new delivery technologies, developing in parallel with the new techniques and usable out of doors through simple contact, create challenges for risk assessment that have never been encountered before. Unlike existing approaches, wherein the gene technology is applied to a single organism that is later amplified and tested in containment, future applications will result in the immediate release of unknown numbers of organisms with unmonitored changes.
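One purely illustrative way to express this change in the risk assessment problem is to write the expected number of altered genomes as a sum over exposures; the symbols below are our own shorthand for the purposes of illustration, not quantities drawn from the cited literature.
\[
E \;\approx\; \sum_{s \in S_{\mathrm{exposed}}} N_{s}\, d_{s}\, v_{s},
\]
where \(S_{\mathrm{exposed}}\) is the set of exposed species, \(N_{s}\) the number of exposed individuals of species \(s\), \(d_{s}\) the per-individual delivery efficiency of the formulation, and \(v_{s}\) the number of DNA sequences in that species vulnerable to the editing or silencing reagent. Under containment, \(S_{\mathrm{exposed}}\) is a single, characterized species and \(N_{s}\) is a counted number of treated individuals; in a landscape application, neither the membership of \(S_{\mathrm{exposed}}\) nor \(N_{s}\) is known or monitored, so the sum cannot be evaluated before release.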
It would be tempting to think that the predominant issue of outdoor use is one of the biohazards arising from the uncontrollable exposure of so many genomes, of both target and nontarget species, to the biochemical reagents used by the new technique. However, in the context of the commercial developments for outdoor use, there are also additional socioeconomic scale issues.
For example, the patent literature makes claims that go well beyond the formulations and procedures for delivery. These claims (which are still to be tested by courts) extend to the exposed organism itself, to products of that organism, and to its descendants (reviewed by Heinemann, 2019; Heinemann and Walker, 2019). This is a characteristic of utility patents, wherein the exclusionary right of the patent holder extends into other products in which the patented invention may be found, necessitating a license for use.
Extension of utility patents to plant breeding techniques was unprecedented prior to the introduction of genetically modified organisms (Quist et al., 2013; Shear, 2015). Significant revisions to the seed market in some countries resulted from the use of utility patents (Heinemann et al., 2014). However, the social and economic effects were not restricted to this tool. For example, the instruments of intellectual property and contract law combined in the United States, resulting in more restrictive research access (Editors, 2009). The full legal context of the technology in an adopting country is therefore relevant to its assessment.
Governance
The purpose of this article is to offer a bridge from frustratingly unproductive pseudo-scales relied on in risk-based legislation to what we believe are more fundamental, verifiable, and practical scale-based critical control points to identify and manage risk. The use of technological terms that emphasize contestable nuances in biochemical-level phenomena discourages challenge from publics that are not technical experts and cannot untangle the embedded cultural perspectives and uncertainty in “scientific” concepts (Montenegro de Wit, 2020).
Biotechnology and research institutions have further embattled themselves by continuing, in the face of evident public scepticism, to propagate a monovalent simple-realist discourse in which ‘the risks,’ though they may be imprecisely known, have a meaning which is taken for granted, not a political-cultural artefact whose meaning and definition have been (deliberately or not) constructed. To this cultural imagination, to suggest such meanings are constructed would be interpreted as saying that the risks are unreal. Moreover, the attendant idolatry of scientific thought here allows the notion of risk so conceptualized to become the assumed objective and universal meaning of the overall public issue, to the exclusion or subordination of all other dimensions of meaning with which the technology, its driving aims and conditions, and its possible implications, may be invested. (Wynne, 2002)
To address this, we briefly examine below how our proposed scale measures would integrate into two different governance models, the differentiated governance (risk-tiered) and RRI models.
The RRI governance model is explicit about how society engages in both the identification of and decisions about what is and is not chosen to be regulated from the set of technologies and technology products that might cause harm. In the RRI framework, “technological innovation should not just be the activity of the principal players of science, industry and government, with the general public merely in the role of recipient consumers” (Bruce and Bruce, 2019). It is a responsive and anticipatory form of governance that recognizes how the highly resourced hype of science and technology imaginaries undermines a society’s ability to make predictions about its own future (Heinemann, 2009; Macnaghten and Habets, 2020).
New Zealand’s Royal Society and the NBAB advocate for differentiated governance approaches to prevent overregulation of new techniques (DoH, 2018; Bratlie et al., 2019; Everett-Hincks and Henaghan, 2019). These approaches identify a number of levels of risk and assign each level to a class of regulatory action. At the bottom tier, low or nonexistent risk, the action is to deregulate the processes or products. The higher the category of risk, the more intense the regulatory action. For example, “Organisms with genetic changes that cross species barriers or involve synthetic (artificial) DNA sequences,” including SDN-3 processes, will fall into the highest risk category (see Figure 1 of Bratlie et al., 2019). The new U.S. SECURE (sustainable, ecological, consistent, uniform, responsible, efficient) framework is similar in that it automatically deregulates some products based on preformulated criteria and imaginaries of what happens in nature (Kuzma and Grieger, 2020).
The differentiated governance models do not preclude participation from nontechnical publics at either the early stages of development (Figure 3A) or later, during considerations of risk (Bratlie et al., 2019; Macnaghten and Habets, 2020), but in practice can (Kuzma and Grieger, 2020). They tend to privilege what the biotechnology community identifies as biological risks and the regulatory burden over “the ‘wisdom’ of the wider culture” because once tier criteria are set, it could be difficult to retain “at least some power to change what ‘the usual players’ would otherwise have done with their innovation” (Bruce and Bruce, 2019).
Automatic deregulation of some risk categories makes it difficult if not impossible to gather further information on them as they develop into new future contexts. Differentiated governance operates under the cloudy assumptions that functional equivalence can be presumed, that it has already been demonstrated in ways that matter to future publics and environments, and that the world is the same as it was before past products were introduced (Pavone et al., 2011; Kuzma and Grieger, 2020). For instance, a canola plant edited to become herbicide resistant would not be a weed in a field of canola plants, but the same plant would be in a field of wheat. Sequentially adding to this field more canola plants edited to be resistant to even more herbicides removes more, if not all, of the options to control canola plants that are weeds in wheat fields. Adding the last herbicide resistant variety to a field is not the same as adding the first, regardless of whether the process used for either plant was SDN-1 or SDN-3. Risk-tiered approaches assume an indefinite social license to operate products in environments that were possibly not even imagined, much less existed, outside the hypothetical class of what could be made using “conventional” methods (Mueller, 2020).
In contrast, the RRI framework seeks renewed mandates for social licenses because the “detrimental implications of new technologies are often unforeseen, and risk-based estimates of harm have commonly failed to provide early warnings of future effects” (Stilgoe et al., 2013, p. 1570). Although RRI can also accommodate a regulatory system where not all risks are equal and not all products are strictly reviewed before release, the assumption that only scientists can steward the science and choose the technological options for a future which a society will then accept is firmly rejected.
Our approach, which is to identify the critical control points from which risk and safety scale in response to use, may be subject to criticisms like those aimed at risk-tiering; it may be seen as furthering a risk discourse wherein risk is deterministic and thus fully predictable or manageable, or can be made so soon (Wynne, 2002; Pavone et al., 2011).
What we describe is more than a simple tweak to risk discourse, but it is certainly not a revolution compared to governance based on a “democratic upstream political and social agenda of (more co-constructed, hybrid and contingent) technology” (Wynne, 2002). We therefore prefer to think of a critical control point approach as a patch, rather than a tweak, to processes premised on formal risk assessment: a patch that could both improve risk-based legislative models of regulation and continue to contribute to other emerging models that offer more comprehensive revisions to governance frameworks. Furthermore, where risk assessment remains a part of any governance, clarity about scales will help focus the regulatory risk assessment without preventing a variety of publics from having a say in the process.
In contrast, risk-tiering could leave it to biotechnology developers and citizen scientists to diagnose whether or not their application of a gene technology creates “organisms with temporary, non-heritable changes” (Bratlie et al., 2019), a class that might be exempted from regulations. We found that even regulators were not always able to recognize when their assumptions about heritability were inaccurate, as in some applications of gene silencing techniques applied to fungi with RNA components in their genomes (Heinemann, 2019). Furthermore, the length of time that counts as temporary remains undefined (Heinemann, 2019). Does it have to be a change that does not cross a generation, or some number of generations? Does it expire in moments, days, or years? For example, if an apple tree is sprayed with dsRNA to induce gene silencing that affects the qualities of apples for years, but does not transmit through seed, is that temporary? What if a scion of that tree is grafted to another tree and retains the effects of the gene silencing treatment; is that nonheritable? If these decisions stand outside of review because of exemption from regulations, then they eliminate the potential for society to influence the perspectives of biotechnologists.
Similarly, risk tiering erodes the value of process-based regulations which New Zealand, the EU, and Australia have endorsed (DoH, 2018). Placing SDN-1 reactions into risk tier Level 1, which might only require notification (Bratlie et al., 2019), puts blind faith in the quality assurance capacity of the developer. This is difficult to do in light of the Recombinetics case where the “‘unintended’ addition of DNA from a different species occurred during the gene-editing process itself…It went undetected by the company even as it touted the animals as 100% bovine…‘It was not something expected, and we didn’t look for it’ says Tad Sontesgard, CEO of Acceligen, a subsidiary of Recombinetics that owns the animals” (Regalado, 2020). Lack of regulatory oversight and independent verification could contribute to a deregulation creep up the SDN scale because of an assumed absence of the unexpected. This is especially important now that it has been found that insertions of DNA from commercial kits may be common (Ono et al., 2019), and these kits would be available to both institutional and do-it-yourself biotechnologists.
Others see reducing oversight on SDN-1 processes as a positive. “[Availability of new techniques] is also driving a diversification of the stakeholder landscape as research and development are increasingly shifting from big industry towards academia and small- and medium-sized enterprises” (Bratlie et al., 2019). However, with this trend comes decentralization of practitioners, and this could result in more ad hoc applications by those that have fewer resources to thoroughly evaluate the modified organisms.
Critical control points: Where risk but not safety scales
This returns us to Brenner’s distinction between concepts of genetically intrinsic risk arising from an organism and the risks special to human activity. Whereas all technologies carry risks of creating unintended hazards, a governance system should not fail to respond when safety and risk diverge as the scale of use changes (Conko et al., 2016). We argue that risk and use scale together at critical control points, which makes these points useful for regulation and able to serve either an RRI or differentiated governance model.
As an example, recall that precision is the efficiency at achieving changes at a desired location in a DNA molecule. Increases in precision make “on-target” outcomes scalable. However, with increased use, “off-target” effects also are scalable. Because off-target effects are associated with risk, risk increases with use, while safety increases incrementally with slow improvements to target specificity (precision) built into the tools over time. Serial and multiplex applications further increase the number of uses per unit of time, causing safety and risk to change differentially in response to human activity. Efficiencies in on-target changes create a feedback loop of lower costs in materials and time to make products that can be sold, with the proceeds reinvested into more products arising from serial and multiplex applications.
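A minimal numerical sketch makes the differential scaling explicit; the symbols are hypothetical placeholders of our own, not parameters reported in the studies cited above.
\[
E_{\mathrm{on}} = N\,p_{\mathrm{on}}, \qquad E_{\mathrm{off}} = N\,p_{\mathrm{off}},
\]
where \(N\) is the number of editing reactions performed and \(p_{\mathrm{on}}\) and \(p_{\mathrm{off}}\) are the per-reaction probabilities of intended and unintended changes. Serial and multiplex applications and falling costs can increase \(N\) by orders of magnitude, whereas improvements in precision reduce \(p_{\mathrm{off}}\) only incrementally, so the expected number of risk-associated off-target events \(E_{\mathrm{off}}\) grows with use even as each individual reaction becomes “safer.”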
Because they are rooted in complex (and fallible) biochemical characteristics of the tools, the pseudo-scales of precision, size of change, sequence naturalness, or foreignness do little to reassure publics that are concerned about harms of gene technology. Regulations based on these pseudo-scales have questionable effects on controlling plausible harms from large releases of modified organisms while inflaming a sense of burden in the biotechnology community (Figure 1).
Differential risk and safety scalability is seen in other technologies too, as shown in the following examples. A vaccine may confer higher levels of population protection as a function of the frequency of vaccination, a phenomenon known as herd immunity. Increased use increases efficacy. Increased use of seat belts differentially scales safety, not risk. A seat belt helps a driver in part by allowing them to control the vehicle over a greater range of physical experiences that otherwise lead to accidents. If other drivers are wearing seat belts, they too will have an enhanced response range. A seat belt does not just protect a driver from the outcome of a collision; when all drivers wear seat belts, the result is a road environment in which accident-causing behavior and catastrophic harm plummet.
How could critical scale changes be adopted as regulatory triggers?
Critical control points identify where the use of a technology or technique has the potential for either or both large qualitative and quantitative changes in scale (Figure 3). They are also responsive to changing technology in ways that product-based triggers are not. The sale of formulations that allow farmers to spray their fields with gene-altering tools to, for example, kill or sterilize pests, delay ripening, or alter flower color will genetically modify not only those crops or pest animals but potentially any exposed organism, including the literally billions of bacteria and fungi in the soil. Instead of being sold seed to grow pure lines of a genetically modified crop, farmers will make their own genetic modifications.
With this comes a disruption to the existing market. Indeed, from the perspective of the consumer, it is the service (or process) of gene editing that is the “product” involved here. Farmers might be familiar with being dispersal agents of black-box pesticide formulations, but this new black-box may create independent populations of organisms with unanticipated traits unlike the currently certified and tested GM plants sold to them. The disruption caused has socioeconomic dimensions. As we have asked elsewhere (Heinemann, 2019), who is liable if formulations that assist RNA to penetrate cells get contaminated by RNA viruses and potentially seed the spread of the virus? If a formulation intended to make one farmer’s canola crop resistant to a herbicide creates weeds in another farmer’s wheat field, who owns the costs to the wheat farmer? Who is responsible for ensuring that other sources of DNA are not incorporated during the genome editing process, even if only SDN-1 type applications are intended?
Product-based triggers, where the products are the genetically modified organism and/or its particular trait, are thus becoming obsolete in one of the most active areas of future commercial interest. This is especially true for risk-tiered models. If an SDN-1 herbicide is sprayed on a field, it could entirely escape regulation in a risk-tiered model that exempts nonheritable modifications. Are all the other consequences of spraying a genome-editing pesticide from an airplane then also outside of social oversight, control, and consent? If they are not, then how much value does a product-based risk-tiered regulatory system offer?
Use of critical control points identified by scale changes in risk resolves the problems discussed above. If it is decided to proceed from idea to reality (Figure 3A), then the next scale consideration is whether the work will be done in physical containment, in a field trial, or as a release, and what pathway to each of these will be taken (Figure 3B). Although these stages are reminiscent of the Asilomar-era facilitation of gene technologies via a narrow conception of risk that could be mitigated through containment (Hurlbut, 2018), here there is an explicit requirement to seek social license from publics affected by both the technology and the changes special to transitions between each scale. For example, the use of a topical agent on crops at the field trial stage probably does not have implications for farmer seed saving and exchange, but use at the release stage could.
An embedded scale trigger in gene technology regulation could also address an inconsistency in the regulation of different mutagens with some exempted from review by specific regulations. Many chemical or radiation processes are exempt, for example, by the EU, Australia, and New Zealand. This is described by some as an anomaly unrelated to the risks that might arise from using those mutagens (DoH, 2018; Van Eenennaam et al., 2019).
A scale trigger could apply to chemical and radiation mutagenesis, especially in the transition from contained laboratory to either field trial or full release. Legislative consistency could be achieved as illustrated in Figure 3B. Organisms created through the use of chemical or radiation mutagenesis would have to pass the same first critical checkpoints as those made through the use of other kinds of mutagens, and to do so again if their scale circumstances significantly changed. The number of chemically or radiatively mutated plant varieties listed on the International Atomic Energy Agency register is not large, and the rate of additions has not increased over many years, suggesting that any actual increase in regulatory or compliance burden from including such organisms would be limited.
Scale is not static
A final consideration is the qualitative effect of scale changes. As noted above, the science of gene technology is not static, and neither are the technical and social contexts in which it and its products may be embedded. What is not a possibility at some scales becomes a characteristic at others. Genetic data have entered this transition. The ability to collect the browsing trends of billions of people, or the sequence variations in genomes of millions of species, provides information to develop products that could not be developed any other way (Heinemann et al., 2018).
A case in point was the use by police of a commercial genealogy database as a forensic tool. Commercial genealogy collections of DNA profiles were surreptitiously used by police to trace familial connections of the alleged Golden State Killer (Creet, 2018). These collections, some dating back to 1984, have become significant databases used by customers to trace their current and ancestral family trees. Police created a false account and uploaded sequences obtained from forensic samples. The suspect himself never participated in the service but was tracked through relatives who had.
Box 1. Whose biochemistry metaphors matter?
Who constructs the perspectives in discourse on gene technology (Wynne, 2002)? For example, what is natural about genome editing biochemistry?
ZFNs, TALENs, meganucleases, and the Cas proteins are site-directed nucleases because they or co-factors associated with them bias where they bind a DNA molecule. Not all SDNs are natural nucleases. “Some in the genome-engineering community consider ZFNs and TALENs an abomination of nature, espousing the philosophy that if nature had intended zinc fingers and TALEs to be nucleases, they would have been endowed with cleavage domains. Indeed, there are no natural ZFNs or TALENs” (Segal and Meckler, 2013, p. 144). Those that are, such as Cas9, are so far observed only in a small number of bacterial species even though they can be and are being used where they are not naturally found, in many different bacterial and nonbacterial species.
The “natural or not” discussion goes beyond the tools to the level of the biochemical reactions. Some emphasize reaction initiation, such as “a SDN can be considered as merely acting as a mutagen, albeit a more benign and highly-targeted one, to break DNA in the same way as high energy radiation or a chemical such as EMS does when used in mutation breeding” (Jones, 2015, p. 226). Stating incorrectly that reaction pathways initiated by physical or chemical processes are the same as those initiated by nucleases leads the reader to agree that the outcome of the reaction is also similar, and that is all that matters. This is how it was framed by the Australian Office of the Gene Technology Regulator (2018).
In nature, DNA breaks in the genome of an organism can be caused by a range of natural factors, and cells have evolved mechanisms to scan DNA for breaks and to repair them. The same repair mechanisms are employed, regardless of the cause of the DNA break…SDN-1 involves using a site-directed nuclease to cause a DNA break at a chosen DNA sequence which is then repaired using the cell’s natural mechanisms. The DNA repair is no more directed than the repair of DNA breaks occurring through other causes, resulting in the same range of possible DNA changes and the same range of possible changes to the characteristics of the organism as could occur in nature.
The assumptions above are contested. A team of researchers at the University of California-Berkeley found that repair pathways cannot be taken for granted (Richardson et al., 2018).
The enthusiasm for using CRISPR-Cas9 for medical or synthetic biology applications is great, but no one really knows what happens after you put it into cells…It goes and creates these breaks and you count on the cells to fix them. But people don’t really understand how that process works. (Sanders, 2018)
The views of the Australian regulator and Berkeley researchers would reconcile if the lack of understanding applied equally to repair processes initiated by spontaneous events and those initiated by people putting SDNs into cells. However, as the technology for investigating reactions gets better, they appear to be more dissimilar.
It has been suggested that the genome in a human cell may be hit by as many as 10–50 DSBs [double-strand breaks] per day. Yet, in the genome of skin cells of a 55-year-old individual, only about 2,000 small indels were detected by deep sequencing…In this light, our estimated error rates in the range of 20%–100% per break event seem rather high. This raises the possibility that repair of Cas9-induced DSBs is not representative for naturally occurring DSBs. (Brinkman et al., 2018)
Indeed, others have estimated that there are over 10,000 DNA lesions of all kinds per human cell per day (van den Berg et al., 2018). The overwhelming majority thus are repaired perfectly, in contrast to when a DSB is induced by an SDN in a species that does not spontaneously have these nucleases. This distinction is especially relevant when the genome editing reactions are performed in uncontrolled environments and with unknown species exposures, such as through topical applications (Heinemann and Walker, 2019). In a laboratory-controlled application, a particular outcome in an individual can be selected and tested, regardless of whether it is a rare or common repair. In outdoor use, the millions of potential exposures cannot be evaluated, and thus, what would be rare repair outcomes in nature may become common through this form of mutagenesis.
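A rough, back-of-the-envelope reading of the figures quoted above, ignoring complications such as detection limits and cell turnover, illustrates the contrast:
\[
10\text{–}50\ \tfrac{\text{DSBs}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{year}} \times 55\ \text{years} \approx 2\times10^{5}\text{–}1\times10^{6}\ \text{spontaneous DSBs},
\]
\[
\frac{\sim 2{,}000\ \text{detected indels}}{2\times10^{5}\text{–}1\times10^{6}\ \text{DSBs}} \approx 0.2\%\text{–}1\%,
\]
an apparent per-break error rate at least an order of magnitude below the 20%–100% reported for Cas9-induced DSBs (Brinkman et al., 2018).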
The inability to distinguish between chemical bonds forged by postdamage repair pathways has been a focal point in arguments promoting the deregulation of genome editing. Regulatory inconsistency is argued, as in “the disproportionate regulatory burden for products that could have been achieved using conventional breeding will likely disincentivize the use of gene editing” (Van Eenennaam et al., 2019).
Consistency as a test seems to be inconsistently applied. To our knowledge, consistency has not been sought for intellectual property rights protection. In extending the biochemical metaphors to indistinguishability, and equating chemical and radiation mutagenesis processes with genome editing (EC, 2017; Wasmer, 2019), products of genome editing could also be registered under much weaker intellectual property rights instruments, such as variety rights, rather than under utility patents, the latter resting on genome editing not creating products of nature (Shear, 2015). However, losing utility patent protection probably would also disincentivize the use of genome editing. Regardless of whether or not it is theoretically true that genome edits are indistinguishable from those arising by other means, they can be made distinguishable, and they are for the purposes of intellectual property claims.
To some extent, separation of issues of safety and intellectual property criteria is defensible. However, the distinction is stretched beyond accommodation when the patented product is the release of the tools, for example, in topical applications applied out of doors, to act on exposed organisms (Figure 3B). Here, the features that are relevant to some aspects of safety and other relevant regulation (unintended on- and off-target effects, unintended species exposures, transfer of intellectual property claims to treated organisms) are inseparable from the features that make the process eligible for patenting. These features include, among others, the landscape and efficacy scales at which the mutagen can be applied, scales that are themselves unnatural.
Likewise, increases in the scale of the research and commercial sectors have driven the availability of materials for genetic engineering (COP, 2014). This in turn has made it possible for amateurs (“hackers” or “do-it-yourselfers”) to obtain them, and deregulation of the processes of making genetically modified organisms allows their use by unprecedented numbers of people, institutions, and organizations. “[J]ust because the experiences with these technologies have been positive does not mean that all synthetic biology products may be assumed to be safe in the future” (Gronvall, 2018, p. 463). Likewise, professionals using the tools in a private capacity, or those with nefarious intent, including industrial or state sabotage, may work without oversight (Ahteensuu, 2017; Reeves et al., 2018; D’Alessio, 2019; Heinemann and Walker, 2019).
Conclusion
In this article, we described the qualities of the new gene techniques that make the potential harmful outcomes of their use distinctively scalable. Following from this, we identified how the potentials to cause harm or improve safety may vary with how the technology is used. Significantly, safety and harm scale differentially; that is, they do not scale together in either direction or magnitude, and most often it is the potential for harm, not safety, that increases with use. Finally, for these reasons, we suggest that the inherent property of scalability with the amount or kind of use may be used to inform a critical control point structure for the governance of new gene techniques.
Changes in scale in the development or use of gene technology can inform regulatory triggers as well as guide risk assessment. Scale issues are compatible with prevailing risk assessment frameworks for use of gene technology in food and agriculture, whether they are based on product, process, or novelty. Integrating critical control points at scale changes would be harmonious with a number of different governance options currently being used or discussed, including RRI and differentiated governance.
Some may see our proposal as a return to the Asilomar era (NASEM, 2016), a backward step on the march to deregulation as the means to reduce the “regulatory burden” on gene technology. If so, we are not unique, because risk-tiering was also 1970s thinking. The Asilomar conference suggested “three levels of safety precautions which should be accompanying various genetic manipulation experiments,” low risk, moderate risk, and high risk (Norman, 1975, p. 6). The assembled scientists worried about a meddling government that would interfere with progress. “Underlying much of the discussion of the need for self-imposed controls was…that if regulations were not imposed from within, legislation could be anticipated which would probably turn out to be much more restrictive” (Norman, 1975). That viewpoint is still common today (Hurlbut, 2018; Montenegro de Wit, 2020).
We concede that some Asilomar-era idea leaders were inspirational for us too, not because they sought to stifle innovation but because they had a clearheadedness about what caused risk. Regrettably, risk management became focused on control of biological harms, a choice to “assess in a rational manner concerns about possible adverse environmental effects” (NRC, 1987, p. 5). This evolved into a risk discourse that privileged the rationale of technological elites, narrowed attention to products, and was dominated by comparisons to extreme spontaneous events outside of human control or to previous harms caused by human actions.
We do not discount the value of assessments for biological risks, but we do contest the notion that the value judgments of some scientists about what constitutes risk and what is the appropriate comparator (e.g., nature or previous harms caused by technology) amount to effective governance. Furthermore, we believe that societies both accept risks and will abide by their decisions to take them.
In summary, we have attempted to unravel the concepts and language that underpin dominating narratives on regulation of gene technology. Surprisingly, after adjusting for differences in vocabulary over time, the themes, arguments, and discontent of actors appear not to have changed over the last approximately 50 years. From Asilomar forward, there has been unequal access to the means of transitioning imaginaries of the future into present day, with a science, technology, and entrepreneurial sector persistently resisting and lamenting democratic oversight, evading some but not all of it, and continuing to strive for greater levels of “self” regulation.
We have presented evidence that regulators, science advisors, research societies, and governments have, to varying degrees, been nudged further from a form of risk discourse that is unreflective on questions of why to use gene technology toward one that is protective of it, legitimizing adoption because it satisfies safety criteria disproportionately influenced by those who develop the technology. The language of persuasion has drawn on normative and industrial concepts, such as foreignness and precision, and reframed them as scalable safety features. In confronting that linguistic drift, we hope to have provided a rationale for, and a bridge to, a consistent and comprehensive approach to gene technology regulation.
Acknowledgments
The authors are grateful to Ben Hurlbut and Sarah Agapito-Tenfen for helpful discussions. Although it is dangerous to thank editors, we would be remiss failing to acknowledge how valuable ours were for helping us to sharpen our thesis and get closer to clear expression of it.
Competing interests
The authors declare that they have no competing interests in the publication of this article. JAH is a Guest Editor of the special feature: Gene Editing the Food System. He was not involved in the review process of this article.
Author contributions
JAH conceived of the article, drafted the abstract submitted to guest editors for approval, and was primary author. All other authors made substantial contributions to design and revision, as well as providing critically important intellectual content, and gave final approval for submission and publication.
References
How to cite this article: Heinemann, JA, Paull, DJ, Walker, S, Kurenbach, B. 2021. Differentiated impacts of human interventions on nature: Scaling the conversation on regulation of gene technologies. Elementa Science of the Anthropocene. 9: 1. DOI: https://doi.org/10.1525/elementa.2021.00086.