The rapid development of CRISPR-based gene editing has been accompanied by a polarized governance debate about the status of CRISPR-edited crops as genetically modified organisms. This article argues that the polarization around the governance of gene editing partly reflects a failure of public engagement with the current state of research in genomics and postgenomics. CRISPR-based gene-editing technology has become embedded in a narrow narrative about the ease and precision of the technique that presents the gene as a stable object under technological control. By tracing the considerably destabilized scientific understanding of the gene in genomics and postgenomics, this article highlights that this publicly mediated ontology strategically avoids positioning the “ease of CRISPR-based editing” in the wider context of the “complexity of the gene.” While this strategic narrowness of CRISPR narratives aims to create public support for gene-editing technologies, we argue that it stands in the way of socially desirable anticipatory governance and open public dialogue about societal promises and the unintended consequences of gene editing. In addressing the polarization surrounding CRISPR-based editing technology, the article emphasizes the need for engagement with the complex state of postgenomic science that avoids strategic simplifications of the scientific literature in promoting or opposing the commercial use of the gene-editing technology.
1. Introduction: The publicly mediated ontology of the gene
On July 25, 2018, the Court of Justice of the European Union ruled that gene-edited crops should be regulated as genetically modified organisms (GMOs). Caught off-guard by this ruling, the CRISPR-editing community responded with irritation and frustration. Both industry and scientists were quick to condemn the ruling as stifling innovation and threatening the competitive edge of European research and development. For example, the journal Nature described the ruling as a blow to scientists that would hinder investment in agricultural research and quoted the plant physiologist Stefan Jansson’s prediction that it “will have a chilling effect on research, in the same way that GMO legislation has had a chilling effect for 15 years now” (Callaway, 2018, p. 16).
As the industrial and scientific response to the ruling was being organized, the court’s decision was not only denounced as economically harmful but also as scientifically flawed. A position paper by 93 scientists from leading European research institutes identified existing EU GMO legislation as the root problem, countering that “it does not correctly reflect the current state of scientific knowledge,” and emphasizing that the European “GMO Directive should be thoroughly revised to correctly reflect scientific progress in biotechnology” (CBGP UPM-INIA, 2018, p. 1; see also Kershen, 2015, for the similar case of New Zealand). The position paper presents CRISPR-based gene editing as “simple,” “safe,” “targeted,” and “precise.” This narrative is not unique to the position paper but has dominated science communication on the rapid rise of CRISPR-based gene-editing technology.
CRISPR-based gene editing is presented to the public as safe and precise through a myriad of different platforms. In her 2015 TED talk—“How CRISPR lets us edit our DNA”—Jennifer Doudna, a professor of biochemistry at the University of California, Berkeley, who in 2020 was awarded the Nobel Prize in Chemistry (shared with Emmanuelle Charpentier) for her work on what the New York Times referred to as the CRISPR revolution, describes CRISPR as a technology “to edit the genome,” “to delete or insert specific bits of DNA into cells with incredible precision.” She further explains that “CRISPR/Cas9 is analogous to a word processing program to fix a typo.” She also mentions several times, in her TED talk and in many other interviews, how this “editing” can be done at a precise location in the genome with simplicity by inducing “just a tiny change in one gene of the entire genome.” In scientific terms, the CRISPR technique is described as follows: “In this system, a programmable guide RNA is used to bring the endonuclease (often Cas9) to a specific genomic target with unprecedented ease and precision. The endonuclease cleaves the DNA at the targeted site, triggering the cell to repair the double-strand break. The repair process can then be exploited to make genomic changes at the target site.” The change happens at the exact location where it is induced, without any further consequences for the rest of the genome, Doudna clarifies (Doudna, 2015) (all emphases ours).
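To make concrete what this quoted description of targeting amounts to, the following minimal sketch in Python (our own illustration, not drawn from Doudna or the CRISPR literature; the sequence, guide, and function name are invented, while the NGG protospacer-adjacent motif and the cut roughly 3 bp upstream of it reflect the commonly reported behavior of the Cas9 endonuclease) treats “bringing the endonuclease to a specific genomic target” as a simple string search: find a 20-letter match next to a short motif and report the predicted cut position.

```python
# Illustrative sketch only (not a laboratory protocol): the guide must match a
# 20-nucleotide "protospacer" that sits immediately upstream of an NGG PAM, and
# SpCas9 is commonly reported to cut roughly 3 bp upstream of that PAM.
import re

def find_cas9_target(genome, guide):
    """Return the first exact protospacer match followed by an NGG PAM, if any."""
    assert len(guide) == 20, "SpCas9 guides are conventionally 20 nt long"
    for m in re.finditer(guide, genome):
        pam = genome[m.end():m.end() + 3]            # the 3 bases after the protospacer
        if len(pam) == 3 and pam[1:] == "GG":        # 'NGG' PAM required by SpCas9
            return {
                "protospacer_start": m.start(),
                "pam": pam,
                "predicted_cut_site": m.end() - 3,   # blunt cut ~3 bp 5' of the PAM
            }
    return None

# Invented toy sequence containing one guide-matching site followed by a 'TGG' PAM.
genome = "ATGCGTACCCTGACTGGATCGTTACCGGATGGAGTCTGACCAGGTTTACG"
guide = "CTGACTGGATCGTTACCGGA"   # hypothetical 20-nt guide (DNA alphabet for simplicity)
print(find_cas9_target(genome, guide))
```

The apparent simplicity of this matching step is precisely what narratives of ease and precision foreground; everything that happens after the cut, in the cell’s repair machinery and in the wider regulatory context discussed below, falls outside it.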
The metaphor of CRISPR/Cas9 as a highly accurate “gene-editing” tool (also described as a technology or software) is now generally accepted and is communicated not only by the media but also by scientists themselves (Merriman, 2015; O’Keefe et al., 2015; Ledford, 2016; Haapaniemi et al., 2018). As Ledford (2019, p. 15) emphasized in Nature: “Gene edits—made with tools such as CRISPR—often alter just a few DNA letters, whereas conventional genetic modifications often involve transplanting longer stretches of DNA from one species to another.” According to the dominant narrative in public outreach, gene editing through CRISPR eliminates the uncertainties that characterized earlier GMO technologies by replacing the messy insertion of foreign DNA with “precise” (Ledford, 2019) and increasingly “super-precise” (Ledford, 2020) interventions into DNA sequences.
The result is what we call a “publicly mediated ontology of the gene.” Narratives about the ease and precision of CRISPR-based gene editing are ontological in the sense that they appeal to the nature of the gene as a stable object under technological control. Furthermore, this ontology is publicly mediated in the sense that it is strategically employed to gather public support, from TED talks (Doudna, 2015) to Nature News (Ledford, 2019, 2020), while circumventing decades of ontological controversy about the nature of the gene in genomic and postgenomic research (Strohman, 1997; El-Hani, 2007; Griffiths and Stotz, 2013).
The article explores 2 entangled problems with this publicly mediated ontology of the gene. First, we argue that it is out of touch with the state of scientific debates, some of them spanning decades and more recently culminating in a considerably destabilized understanding of the gene, for example, in the science of the Encyclopedia of DNA Elements (ENCODE) project (Section II), which has led to complex theoretical debates about the nature of the functional gene, the gene concept itself, and the causal relation between DNA sequences and traits. The use of a publicly mediated ontology of the gene as a stable object under technological control therefore creates tensions with the complex and contested ontology of the gene increasingly represented in the scientific literature (Section III). The lack of an open dialogue about the complexities of the postgenomic era (Richardson and Stevens, 2015) is, we argue, contributing to a wider public backlash that is already reminiscent of the GMO debates (Shah, 2011; Macnaghten and Habets, 2020; Montenegro de Wit, 2020).
Second, we argue that this mismatch between a publicly mediated and a scientifically contested ontology of the gene is not merely of theoretical interest but is entangled with the governance of CRISPR-based gene editing more generally (Section IV). The publicly mediated ontology of the gene as a stable object under technological control limits the socially desirable anticipatory and regulatory capacity of governance to respond to both intended and unintended consequences of gene editing and to move the technology forward in ways that genuinely engage with public concerns, hopes, and wider values. Furthermore, this mismatch between the public and scientifically mediated ontology of the gene runs the risk of reproducing the public polarization of previous GMO debates by avoiding public engagement with the complexities of the postgenomic era in which CRISPR technologies are inevitably embedded.
We argue that scientists need to respond to these developments by welcoming public and governance conversations about the complex science of gene editing in a postgenomic era rather than obscuring these complexities through a narrative about the ease and precision of editing DNA sequences. This article is an invitation and plea for such a conversation by returning to the basic ontological question that is carefully avoided in current CRISPR conversations: What is the gene that is being edited?
2. What is the gene? From a discrete to a destabilized object
The ease with which CRISPR/Cas9 is claimed to be able to modify the gene and associated traits implicitly appeals to the idea of the stability and structural unity of the gene. The idea of the gene that emerged in the first half of the 20th century was not only reductionist—in the sense that the gene was understood as nothing but a chemical unit lined up as a linear sequence at an exact location on the chromosome—but also mechanistic and static in that it did not include the concept of time. Presentations of the gene as analogous to “text”—commonplace in both public and scientific narratives on CRISPR—retain a close affinity with the framing of the gene as static and stable. The scientific understanding of the gene, however, has changed dramatically through genomics and postgenomics.
Through insights that emerged gradually over several decades and culminated in radically revised ideas, the genetic structure of organisms is now understood to be considerably more complex than described in the early debates. There are many sources of this shift, emerging slowly over a long period of time in the work of individual scientists (Shah, 2018). For the sake of analytical clarity, however, this section focuses on the ENCODE project, which has spanned the last 2 decades, as a forum through which to discuss and highlight key issues debated and contested in genomics and postgenomics. We then discuss how this current knowledge of the nature of the gene compares with the framing of the gene in current narratives of CRISPR-based gene-editing tools.
2.1. Negotiating the gene in ENCODE: Significant departures
The Human Genome Project, launched in 1990 and concluded in 2003, successfully sequenced the entire human genome. In 2001, the sequence of the human genome was announced by the International Human Genome Sequencing Consortium (Lander et al., 2001; Venter et al., 2001). Although sequencing nearly 3 billion bases of the human genome was a significant achievement, the function of the vast majority remained unknown. The task of the second project launched in September 2003, the Encyclopedia of DNA Elements (ENCODE), which brought together more than 400 international scientists, was described as both “simple and incredibly ambitious: to comprehensively annotate all functional sequences in the human genome” (Eddy, 2013; Diehl and Boyle, 2016, p. 238). Initially focusing on 1% of the genome, the pilot project was expanded to the whole genome in 2007 to cover the role of so-called “junk DNA,” or non-protein-coding DNA (The ENCODE Project Consortium, 2007; Ecker et al., 2012; Harrow et al., 2012). ENCODE was also the first project to compare long stretches of noncoding DNA across many mammals, from mice to monkeys to humans. Because the key functional features of noncoding DNA, known as constrained or conserved DNA, remain the same across species, the ENCODE results have far-reaching significance for the entire science of genomics (Check, 2007). More recently, in 2017, the fourth phase of the ENCODE project was extended by 3 more years to cover mouse, fly, and worm genomes (Kwon, 2017). ENCODE results have appeared in more than 2,000 peer-reviewed papers, a significant number of which have been published in the leading multidisciplinary science journal Nature (Diehl and Boyle, 2016). In 3 ways, the ENCODE results have consolidated earlier findings, destabilizing the early legacy of the static and reductionist understanding of the gene and contributing to a more complex ontology.
First, at the end of the project on human genome sequencing in 2001, earlier findings were confirmed that only a fraction of the human genome codes for proteins, that is, that only a fraction is composed of coding or so-called “structural genes,” while the rest consists of noncoding regions, also identified as involved in regulatory activities of some kind (Kellis et al., 2014). The noncoding regions were described as “junk DNA” in the project on human genome sequencing that preceded ENCODE, but over the last 2 decades, it has become increasingly clear that the noncoding regions have a profound regulatory effect on the timing and content of gene expression (Harrow et al., 2012). It is also acknowledged that the enormous expanse of the regulation of genomic activities is yet to be explored and understood and that gene regulatory mechanisms are far more complex than previously thought (Henikoff, 2007; Ecker et al., 2012). Gene expression is thus understood to be influenced by multiple stretches of regulatory DNA located both near and far from the gene itself and by myriad strands of RNA transcribed but not translated into proteins (Ecker et al., 2012). A cascade of transcriptional activities involving a vast number of cellular processes and a range of regulatory, structural, messenger, and microRNA, alongside entities such as exons (coding regions of messenger RNA) and introns (noncoding regions), are reported to be essential for a particular developmental action to take place (Stamatoyannopoulos, 2012). Transcription factors are another example of the complex regulatory mechanisms now identified: proteins essential for gene expression that act on promoters or enhancers to activate or repress the transcription of specific genes. Consolidated by the findings of ENCODE, studies of the regulatory system have made evident a set of biochemical feedback loops so complex that some scientists find it no longer productive to fold the regulatory system into the definition of the gene (Gerstein et al., 2007).
Second, the ENCODE findings confirmed that gene expression and regulation involve interaction with many other genes as well as with cellular, environmental, and epigenetic factors. ENCODE therefore reinforces a long-standing position which some philosophers of biology have called “causal democracy”: that is, that many cellular, genetic, and epigenetic processes are causally equally necessary in determining a developmental outcome (Stotz and Griffiths, 2016). These findings also extend to what is called a “many-many” problem in relation to gene expression—that is, that a vast number of genes are responsible for the vast array of developmental and regulatory activities that ultimately result in a particular trait. In other words, all traits require the action of many genes, and many genes contribute to the development of more than one trait. Even before the ENCODE project, it was well understood, for example, that a gene can affect more than 1 protein. Thus, mutating or editing 1 gene could result in so-called “pleiotropic” effects: having an outcome other than the one intended (Low, 2001). ENCODE consolidated these earlier findings from quantitative genetics by concluding that many traits are constituted through multiple loci in the genome and hence are multi-genic.
And third, the ENCODE project has advanced our understanding of the epigenome and chromatin organization. Epigenetics focuses on cases of heritable changes within a cell that do not result from changes in DNA sequences. Methylation and histone modification are 2 of the best known mechanisms for epigenetic control studied in the ENCODE project (Siggens and Ekwall, 2014). Often depicted as a new scientific revolution, the field of epigenetics has been defined as a theory, a process, a phenomenon, a scientific movement, a study of “how environmental factors modify our genes,” or the study of how “scientists now know that genes are not the only authors of inheritance. There are ghostwriters, too” (Rogers, 2012; cited in Stelmach and Nerlich, 2015, p. 201). The ENCODE project has emphatically recognized the indispensable role played by epigenomics in gene expression and in genome-wide disease association studies (Hardison, 2012). Yet so much remains unknown about epigenetic processes that they are often described through metaphors such as “the ghost in our genes,” “Grandma’s curse,” “womb doom,” “sins of the father,” “poison that keeps poisoning through the generations,” or “a time bomb in your genes” (as quoted in Stelmach and Nerlich, 2015, p. 202).
2.2. Negotiating the gene in ENCODE: New debates
The ENCODE project consolidated earlier findings on the complexity of the science of the gene, while at the same time sparking new debates on the contested ontologies of the gene. Recognizing the complex processes of transcription, translation, and other regulatory mechanisms, the ENCODE project proposed that the gene should be defined functionally in terms of what it does as opposed to what it may contain or where it is located on the chromosome. What then counts as a functional gene? ENCODE results have generated debate among geneticists and evolutionary biologists on the functional elements of the genome. ENCODE defines a functional element in broad terms as “a discrete genome segment that encodes a defined product (e.g., protein or noncoding RNA) or displays a reproducible biochemical signature (e.g., protein binding, or a specific chromatin structure)” (The ENCODE Project Consortium, 2012, p. 57). According to this broad definition, almost 80% of the genome is considered to have some sort of biochemical signature and hence to be functional. ENCODE thus claims to have partially solved the mystery of why a vast majority of the human genome does not code for proteins and why evolution would maintain large amounts of seemingly “useless” or “wasteful” DNA. It drives home the point that there are many “genes” out there in which DNA codes for RNA, not a protein, as the end product. Many geneticists argue that the fundamental unit of the genome and the basic unit of heredity should hence be the RNA transcript—the piece of RNA transcribed from DNA—and not the gene. And given that ENCODE has shown that a large part of the genome is “pervasively transcribed,” it is argued that the whole of the genome has function and purpose (Djebali et al., 2012; Stamatoyannopoulos, 2012).
The claim that 80% of the human genome is functional, however, has not been universally accepted and has certainly not contributed to a consensus around the viability of functional gene concepts. In particular, evolutionary biologists have accused the ENCODE project of playing “fast and loose with the term ‘function’” and argued that the claim of 80% functionality “flies in the face of current estimates according to which the fraction of the genome that is evolutionarily conserved through purifying selection is less than 10%” (Graur et al., 2013, p. 578; see also Doolittle, 2013). Graur et al. (2013, p. 579) oppose ENCODE’s broad “causal role” definition of function, according to which functional elements may have some causal role without having an adaptive or maladaptive evolutionary consequence for the organism. Instead, they endorse a “selected effect” definition of function, according to which a functional element of the gene is one that codes for a trait that is “selected” as a result of the “reproduction” (a copy or a copy of a copy) of some prior trait that performed some function. They argue that the human genome carries a load of what is otherwise termed “junk DNA” without definable “selected function” as a necessary part of the evolutionary process, and that natural selection maintains a vast reservoir of DNA that may or may not become functional in future. They “urge biologists not be afraid of junk DNA. The only people that should be afraid are those claiming that natural processes are insufficient to explain life and that evolutionary theory should be supplemented or supplanted by an intelligent designer…ENCODE’s take home message that everything has a function implies purpose, and purpose is the only thing that evolution cannot provide” (Graur et al., 2013, p. 587).
Graur and colleagues prefer the term “selected effect” because it embodies empirical evidence of the evolutionary process, whereas what they call the “causal role” is seen as ahistorical and non-evolutionary. These assertions connect to 2 iconic statements in the history of molecular biology. Theodosius Dobzhansky: “Nothing in biology makes sense except in the light of evolution” (Dobzhansky, 1973). And, François Jacob: “Natural selection does not work as an engineer…It works like a tinkerer” (Jacob, 1977). Jacob further argues that “What [the tinkerer] ultimately produces is generally related to no special project, and it results from a series of contingent events” (1977, p. 1164). This means that from a range of available variations in nature, a particular effect or trait is chosen through natural selection; this effect is blind to structural differences in DNA. In other words, this means that thousands of combinations of DNA sequences could in principle code for one trait. Natural selection maintains such variation. Therefore, there cannot be a single molecular structure for a single functional trait.
2.3. Negotiating the gene in ENCODE: Gene definition
Not only does the gene signify more than it used to, but the physicality of the functional gene as an object of investigation for its causal relation to traits has undergone a radical redefinition. Genome-wide association studies, which link variations in DNA sequence with specific traits and diseases, have in recent years been a significant driver of the field of genetic studies. Many scientists propose that the transcript be considered the basic atomic unit of inheritance. And concomitantly, “the term gene would then denote a higher-order concept intended to capture all those transcripts (eventually divorced from their genomic locations) that contribute to a given phenotypic trait” (Djebali et al., 2012, p. 108). Taking ENCODE’s liberal and broad definition of what may count as a functional gene, this field of study has identified thousands of DNA variants associated with hundreds of complex traits (such as height) and diseases (such as diabetes or cancer). But association is not causality, and identifying those variants that are causally linked to a given disease or trait and understanding how they exert such influence has been difficult. Furthermore, most of these associated variants lie in noncoding regions, so their functional effects have remained difficult to define (Ecker et al., 2012). This becomes even more complicated since some of these studies increasingly question whether there is any definable boundary between regulatory and structural functions of DNA: “Experimental interventions reveal high degrees of interdependency between these transcription units, which have been co-opted as gene regulatory mechanisms.…Thus transcription itself regulates transcription initiation or repression at many regions of the genome” (Mellor et al., 2016, p. 57). Yet again, results such as these call for a radical redefinition of the concept of the gene.
It has also been discovered that the gene itself has a discontinuous structure—1 gene can be contained or nested within another’s intron, or 1 gene can overlap with another without sharing exons or a regulatory system: “Noncoding transcription units overlap with genes and genes overlap with other genes, meaning that genomes are extensively interleaved” (Mellor et al., 2016, p. 57). The likely continued reduction in the lengths of intergenic regions has also steadily led to the thesis that most genes, previously assumed to be distinct genetic loci, overlap (Djebali et al., 2012, p. 108), supporting the proposition of a highly interleaved transcribed genome. But more importantly, it has prompted a reconsideration of the definition of the gene. Below, we discuss various attempts to redefine the gene, highlighting the contrast between the complexity of the gene emerging from this science and the simplistic idea of the gene adopted in the narratives on CRISPR-based gene editing.
ENCODE illustrates how postgenomics has escalated this definitional complexity by revealing “patterns of dispersed regulation and pervasive transcription […] together with nongenic conservation and the abundance of noncoding RNA genes” (Gerstein et al., 2007, p. 669) that undermine not only straightforward molecular definitions of the gene but also simple causal pathways between genes and traits. ENCODE has motivated various attempts to radically revise the definition of the gene. It has become increasingly productive to define the gene in terms of what it is not rather than in terms of what it is. One such attempt, citing Falk (1986), (un)defines the gene in these terms: “The gene is neither discrete…nor continuous…nor does it have a constant location…nor a clear cut function…and not even constant sequences…nor definite borderlines” (Gerstein et al., 2007, p. 679). In this (un)definition, the precision of the physical unity and location of the gene on the chromosome is shattered, and one may see ENCODE as reinforcing a narrative in which the crisis of the gene concept culminates in the collapse of the concept itself (Neumann-Held, 1999; Keller, 2000). While the first obituaries for the gene concept appeared more than 20 years ago, ENCODE’s (un)definition has not yet been declared the final death sentence. Many scholars argue that the “reports of the death of the gene are greatly exaggerated” (Knight, 2007), and for 2 important reasons that we discuss below. Whether or not the gene concept in its original form lives or dies, these definitional debates have established one fact beyond doubt: that the gene is a highly complex and contested entity.
First, the ENCODE project has driven various attempts to radically revise the definition of the gene so as to retain the concept as scientifically viable. Responding to the debate on the gene as a functional unit, discussed at length in the previous section, one such definition hinges not on structural elements but on functional products: “the gene is a union of genomic sequences encoding a coherent set of potentially overlapping functional products” (Gerstein et al., 2007, p. 677). Note that in this definition, multiple and overlapping DNA sequences correspond to multiple and overlapping functional products. Another attempt, addressing the disappearing boundaries between regulatory and structural elements, defines not the gene but the genome in relational terms: “genes as subroutines in the hugely complex genomic operating system,” and “gene transcription in terms of parallel threads of execution…intertwined in a rather higgledy-piggledy fashion, very much like what would be described as a sloppy, unstructured computer program code with lots of GOTO statements zipping in and out of loops and other constructs” (Gerstein et al., 2007, p. 675). In this definition, the genomic operating system lacks the neat quality of a normal computer operating system. Both these definitions—many DNA sequences overlapping with many functional products, or the genome as an unstructured computer program—emphasize how various components of the genome closely intertwine, interleave, and overlap, such that changes in one part of the genome inevitably result in changes to other parts. In other words, the function or role of any DNA sequence makes sense only in relation to many others.
Second, failures to formulate a universally accepted definition of the gene do not necessarily lead to the collapse of the gene concept. They have also motivated the development of pluralist accounts of different gene concepts. In the philosophy of biology, this proposal has been on the table for a while and includes Moss’ influential distinction between a Gene–P, that is a phenotype predictor, and a Gene–D, that “is defined by its nucleic acid sequence [and that] itself is indeterminate with respect to phenotype” (2003, p. 60). Griffiths and Stotz (2013, p. 75) formulate a recent pluralist proposal in the light of ENCODE that picks up Gerstein et al.’s (2007) definition of the “postgenomic gene” while insisting on the simultaneous use of the “nominal gene” in the sense of nucleotide sequences and the traditional Mendelian gene defined through its causal role rather than as a molecular entity.
Suggestions of elimination, reformulation, and pluralism demonstrate that the definition and nature of the gene remain highly contested in postgenomic research, and the various debates and contestations around ENCODE amply demonstrate this to be the case. For the purposes of our discussion, however, what these suggestions have in common matters more than where they differ: all of them agree that the gene is not a clearly defined structural and functional unit that would allow easy molecular identification and intervention through gene-editing technologies such as CRISPR. Crucially, the “crisis of the gene concept” (El-Hani, 2007) is therefore not merely a conceptual crisis of competing definitions—it is also a crisis of modeling and establishing clear causal pathways between genes and traits, including those that are the basis for gene-editing technologies such as CRISPR.
3. Unintended effects of CRISPR-based gene editing
Public debates about the ease and potential of CRISPR-based gene editing commonly ignore complexity by appealing to the gene as a stable object under technological control. For example, in a New York Times podcast (April 2, 2021), Ezra Klein and Walter Isaacson (who has written a biography of Doudna and of the scientific process that led to CRISPR) discuss the “implications” of “humanity’s awesome, terrifying takeover of evolution” by CRISPR (Isaacson, 2021; Isaacson and Klein, 2021). The tone of the podcast leaves no doubt that CRISPR will revolutionize human evolution and that the ethical debate needs to deliberate solely on the impacts of the transformation (e.g., how to ensure that the benefits are fairly distributed). As with many such public discourses, this podcast starts on a high note on how CRISPR-based editing had already cured sickle cell anemia in a woman named Victoria Gray from Mississippi, and how CRISPR has the potential to similarly correct mutations and develop gene therapies to cure diseases such as cystic fibrosis and Huntington’s disease.
These 3 diseases are repeatedly mentioned in such publicly mediated ontologies of the gene because they are relatively easy cases for gene-editing therapies, given that each disease involves mutations of only a single or a few base pairs out of the 3 billion or so that exist in the human genome. Isaacson and Klein then discuss how the application of CRISPR to more complicated traits like muscle mass or height is not yet technologically possible but will surely happen in a few years’ time. Scientists and science communicators in such public discussions rarely point to the possible unintended consequences of CRISPR-based editing, or to what may be highly difficult or even impossible to achieve. Below, we highlight how the ontological complexity of postgenomic research relates to the complexity of CRISPR editing along different dimensions that are controversially discussed in the current literature. The following sections highlight 3 of these dimensions: (1) the large number of genes that are involved in many traits and hence in intended and unintended effects, (2) off- and on-target effects, and (3) methods for detecting these effects. These 3 dimensions are further contextualized through the examples of CRISPR-edited hornless cattle and pigs and the history of genetic technologies.
3.1. Most traits involve a large number of genes
Scientists have long recognized that a majority of the traits relevant to agriculture—like drought tolerance and yield—are not encoded by a single gene but rather are spread across the genome through multiple interconnected loci. For example, a study published in the Proceedings of the National Academy of Sciences provides a detailed look at how a plant exercises exquisite control over its genome, switching some genes on and some genes off in response to harsh surroundings (Manke, 2019; Varoquaux et al., 2019). The study, based on 400 samples of sorghum plants grown over 17 weeks in open fields in California’s Central Valley, reveals that the plant modulates the expression of a total of 10,727 genes, or more than 40% of its genome, in response to drought stress. Many of these changes occur within a week of the plant first experiencing water stress, while another set of genes is again switched on and off when water returns. This finding relates to our previous discussion of how a large number of genes are responsible for one trait and how the genome is highly interleaved.
3.2. Examples of unintended (off- and on-target) effects
One of the most debated unintended consequences of CRISPR/Cas9 editing is its capacity to inhibit the functioning of p53—also known as TP53 or tumor protein—a gene that codes for a protein that regulates the cell cycle and hence functions as a tumor suppressor. Inhibition of p53 improves the efficiency of precision genome editing; however, the “inhibition of p53 leaves the cell transiently vulnerable to the introduction of chromosomal rearrangements and other tumorigenic mutations” (Haapaniemi et al., 2018, p. 930).
Indeed, few scientists would deny that “ease and precision” is de facto only part of a more complex story both at the level of the genome and at wider levels of interactions between genetic, epigenetic, and environmental factors. Even in the technological context of CRISPR-based editing, the current scientific literature has acknowledged various instances of complexity which remain in constant tension with the narrow narrative of ease and precision in the public ontology of the gene. For example, Kosicki et al. (2018, p. 765) reported large deletions and more complex rearrangements at targeted DNA sites and speculated “that current assessments may have missed a substantial proportion of potential genotypes generated by on-target Cas9 cutting and repair, some of which may have potential pathogenic consequences following somatic editing of large populations of mitotically active cells.” Shin et al. (2017, p. 1) sequenced target sites in mice and reported that both insertions and deletions in the mouse genome showed “unreported asymmetric deletions and large insertions of middle repetitive sequences.” Adikusuma et al. (2018, E8) raise similar concerns by demonstrating “that large deletions are frequently generated in mouse zygotes after CRISPR–Cas9 single cleavage, as has recently been noted by others.” It is highly problematic that publicly mediated narratives of CRISPR editing as precise and simple underplay such relational complexity of the genome—that is, the highly uncertain nature of the unintended consequences that gene editing can generate. A significant body of similar literature on the unintended, on- and off-target, consequences of CRISPR-based gene editing has recently emerged.
The Case of Hornless Cattle
Norris et al. (2020) analyzed publicly available whole genome sequencing data from cattle germline genome-edited to introduce polledness (lack of horns) by the biotech company Recombinetics. The company had at the time filed a patent on the gene-edited hornless cattle, which had been widely projected as a success story for the new genomic techniques and even as a boon for animal welfare, given that these gene-edited cattle would not need to be de-horned. The cattle were germline edited with gene-editing nucleases called TALENs (transcription activator-like effector nucleases), a prominent tool in genome editing alongside CRISPR/Cas9. However, the analysis by researchers at the U.S. Food and Drug Administration (FDA) demonstrated unintended effects at the intended target sites. Alongside the successful integration of the desired “polled” gene variant was an unintended incorporation of 2 different antibiotic resistance genes that make bacteria resistant to 3 different antibiotics. The resistance genes could potentially be picked up by bacteria that could then cause disease and be resistant to antibiotics. The genetically edited cattle thus unintentionally posed a significant potential risk to public health.
The Case of CRISPR-edited Pigs as Organ Donors for Xenotransplantation
Several studies have demonstrated not only that off-target errors go undetected but also that the narrative of on-target precision is troubled by the complex ontology of the gene. Gene-edited pigs designed to serve as organ donors for humans are so far arguably the most well-known and best-developed case of CRISPR editing. A major obstacle for pigs to function as effective organ donors for humans is the threat of infection from viruses present in the animals, particularly porcine endogenous retroviruses (PERVs). PERVs can infect the organ receiver and cause tumors, leukemia, or neuronal degeneration. CRISPR editing has been used to knock out PERVs at multiple sites (reportedly from 40 to 62) in the pig genome. The genetically edited cells are then exposed to a cocktail of chemicals to induce growth and to prevent the cell, whose DNA has been edited and hence damaged, from halting growth and division or from self-destructing (Niu et al., 2017). This extensive use of gene editing, however, is not sufficient. Scientists still need to knock out pig genes that provoke the human immune system, and insert others that would prevent toxic interactions with human blood. A review of multi-genetic modification of donor pigs concludes that despite the display of technological capacity for gene editing, the multi-modified pig xenotransplanted organ is still unlikely to survive in human bodies (Kemter et al., 2020). The longest survival of life-supporting xenografts in preclinical models has been achieved using pig donors with only a small number (2–3) of modifications, whereas the genome engineers researching PERV-free pigs are working with base-editing techniques that involve more than 13,000 CRISPR edits in a single cell (Servick, 2017; Regalado, 2019). Why such multi-site CRISPR-edited pig organs show low survival rates remains unclear; however, it points once again to the complex interactions associated with gene editing, and to the formidable challenges associated with prediction even of on-target edits.
3.3. Methods to detect unintended effects
Many scientists have warned against the potential hazards of unintended mutations, including single-nucleotide mutations in noncoding regions of the genome. However, many of the controversies over unintended consequences hinge on the fact that the techniques to identify these effects, as distinct from the spontaneous mutations and genetic drift that occur in the evolution of genomes, are not yet fully developed. A controversy broke out in 2017 after a study claimed that 1,500 single-nucleotide mutations and more than 100 larger deletions and insertions were identified after CRISPR-based gene editing was performed on mice. None of these DNA mutations had been predicted by the computer algorithms that are widely used by researchers to look for off-target effects (Schaefer et al., 2017). Scientists tend to use predictive algorithms when CRISPR is performed in cells or tissues in a dish to identify the areas most likely to be affected and then to examine those areas for deletions and insertions. Whole genome sequencing, however, is not regularly performed to look for off-target effects in living organisms. The study therefore recommended performing whole genome sequencing to identify unintended mutations, as even single-nucleotide mutations can have significant impacts.
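To illustrate the kind of heuristic on which such predictive algorithms typically rely, the following toy sketch in Python (our own illustration; the sequences are invented, and real tools weight mismatch positions and use empirically trained scores rather than a bare mismatch count) scans a sequence for guide-like sites adjacent to an NGG motif and flags those within a chosen mismatch cutoff. The point is that any such prediction is bounded by the assumptions built into the search, here the cutoff itself, which is why mutations falling outside those assumptions surface only in whole genome sequencing.

```python
def predict_off_targets(genome, guide, max_mismatches=3):
    """Yield (position, site, mismatch_count) for NGG-adjacent sites resembling the guide."""
    k = len(guide)                                   # conventionally 20 nt
    for i in range(len(genome) - k - 2):             # leave room for a 3-base PAM
        site = genome[i:i + k]
        pam = genome[i + k:i + k + 3]
        if pam[1:] != "GG":                          # require an 'NGG' PAM next to the site
            continue
        mismatches = sum(a != b for a, b in zip(site, guide))
        if mismatches <= max_mismatches:             # sites outside this cutoff go unreported
            yield i, site, mismatches

# Invented toy genome: one perfect on-target site and one 2-mismatch candidate off-target.
guide = "CTGACTGGATCGTTACCGGA"
genome = ("ATGCGTACC" + "CTGACTGGATCGTTACCGGA" + "TGG"        # on-target site + PAM
          + "AGTC" + "CTGTCTGGATCGATACCGGA" + "CGG" + "TTT")  # similar site + PAM
for pos, site, mm in predict_off_targets(genome, guide):
    print(pos, site, f"{mm} mismatch(es)")
```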
The publication of this study triggered not only a controversy within the scientific community but also a rapid decline in the stock value of several companies aiming to commercialize CRISPR-based applications (Montoliu and Whitelaw, 2018). The rebuttals critiquing this study therefore came from both academia and the commercial sector. In response to critics, the authors submitted 2 more papers providing additional sequencing data and explanations. Eventually, 5 rebuttal articles were published in the journal Nature Methods in 2018 on the same day that the original study was retracted by the journal without the approval of all authors—only 2 of the 6 authors approved the retraction (Montoliu and Whitelaw, 2018). With the retraction of the original paper, the editors of the journal added the following addendum:
The editors of Nature Methods are issuing an editorial expression of concern regarding this paper to alert our readers to concerns about interpretation of the data. Multiple groups have questioned the interpretation that single nucleotide changes seen in whole-genome sequences of two CRISPR–Cas9-treated mice are due to the CRISPR treatment. Since the background genetic variation between the control mouse and the CRISPR-treated animals is not known, an alternative proposed interpretation is that the observed changes are due to normal genetic variation. We are in contact with the critics and with the authors to examine this matter further. We will update our readers once these investigations are complete. All the authors do not agree with the journal’s decision to issue an editorial expression of concern.
The jury is still out on this particular controversy, which largely played out over differing interpretations of the controls and screening methods for off-target mutations. A more recent study carried out by the U.S. Food and Drug Administration’s Center for Veterinary Medicine in 2020 found an array of insertions, deletions, inversions, and translocations that had been difficult to detect by standard PCR and DNA sequencing methods (Norris et al., 2020). In response, the authors propose new sequencing-based methods to screen for off-target errors (e.g., GUIDE-Seq, SITE-Seq, CIRCLE-Seq, DISCOVER-Seq), and long-read sequencing of the target site to detect on-target errors. The authors further argue that each screening approach carries assumptions and biases of its own that may allow further alterations of unexpected types to go undetected. This controversy over screening methods and their capacity to detect unintended changes is not just a technical issue; it corroborates our overall argument about the complexity of postgenomic dynamics and how such complexity militates against easy identification of the unintended consequences of CRISPR editing.
Although the introduction of unintended mutations at off- and on-target sites within the genome has been reported frequently in the mammalian field, the precision of CRISPR/Cas9 gene editing in plant systems has also come under scrutiny. While a number of scientists have pointed out that off-target effects are also possible in plants, the genome-wide studies that are needed to identify such effects have been performed in only a few plant species—to date, in rice, maize, tomato, and Arabidopsis. In plant studies, a range of nuclease options other than Cas9—such as Cas12a, or engineered Cas9 enzymes such as SpCas9-HF—is being developed to achieve higher precision (Hahn and Nekrasov, 2019). While the CRISPR industry chases this elusive precision, a whole cottage industry of tools—CHOPCHOP, E-CRISP, and CRISPOR, to name a few—has developed in the pursuit of better prediction and to deal with unintended off-target edits in plants (table 1 in Hahn and Nekrasov, 2019). At the same time, a new set of whole genome sequencing methods is being proposed and tested to identify off- and on-target unintended effects.
Why are on- and off-target effects so difficult to predict and control? In part, the answer is associated with the operation of the CRISPR gene-editing technique itself. In the process of CRISPR-based gene editing, the (intended and unintended) insertion of genetic material is made possible by the cell’s innate DNA repair processes, which are activated after the gene-editing tool cuts and damages a DNA sequence. The intended effects do not follow from the cutting of the DNA, which indeed can be precisely targeted, but stem from the DNA repair process of the cell, which, by contrast, is inherently error-prone—especially with double-stranded DNA cuts, to which the cell reacts rather violently. More recently, new “base editing” techniques have emerged to overcome the need to cut both strands of DNA. Here it is important to note that cuts in both DNA strands with CRISPR/Cas9 have often resulted in the death of the cell. To avoid that problem, a team of scientists at Harvard and MIT led by the molecular engineer George Church has developed a variation of Cas9 called dead-Cas9 base editors (dBEs) that avoids cutting DNA and instead replaces one genetic letter with another—say, turning a C into a T. According to the reported research, the team was able to make over 13,000 changes at once in some cells without destroying them or causing gross genome-wide instability (Regalado, 2019; Smith et al., 2020). This team of scientists has the ambition to rewrite genomes at a far larger scale than has currently been possible, something they say could ultimately lead to the “radical redesign” of species (Regalado, 2019). While they retain the idealism of CRISPR scientists, their invention may already be making CRISPR gene editing appear passé, a tool without precision, even if the scientists still hold to the belief that they have the knowledge and capacity to make precise edits across all these genome sites.
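The contrast drawn above between a precisely placed cut and an error-prone repair outcome, and between cutting and base editing, can be illustrated with a deliberately crude toy model in Python (our own illustration; the sequence, function names, indel sizes, and probabilities are invented and do not model real repair biology):

```python
import random

def cut_and_repair(seq, cut_site, rng):
    """Double-strand cut at cut_site, then an 'NHEJ-like' repair modeled as a random small indel."""
    left, right = seq[:cut_site], seq[cut_site:]
    if rng.random() < 0.5:                               # deletion of 1-5 bases at the junction
        return left[:-rng.randint(1, 5)] + right
    insertion = "".join(rng.choice("ACGT") for _ in range(rng.randint(1, 5)))
    return left + insertion + right                      # or insertion of 1-5 random bases

def base_edit(seq, pos):
    """C-to-T substitution at pos, with no double-strand break (the dBE strategy described above)."""
    assert seq[pos] == "C", "this toy editor only converts C to T"
    return seq[:pos] + "T" + seq[pos + 1:]

rng = random.Random(0)                                   # fixed seed for reproducibility
target = "ATGCGTACCCTGACTGGATCG"                         # invented sequence
print("cut + repair:", cut_and_repair(target, 10, rng))  # outcome depends on the stochastic repair
print("base edit   :", base_edit(target, 9))             # a single, predictable letter change
```

Even in this caricature, the position of the intervention is fixed while the sequence produced by cut-and-repair is not, whereas the base edit changes a single letter; as we argue below, however, even a predictable sequence change does not guarantee a predictable phenotypic outcome.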
3.4. Unpacking the term “effects”
The scientific literature uses a number of terms to describe intended and unintended effects, in the form of on- and off-target effects, alternatively defined as “changes,” “consequences,” “mutations,” “edits,” and “errors.” We believe the term “effects” deserves to be unpacked to avoid common equivocations of “effect” being used to talk about the edit (the action of CRISPR/Cas), the event (the biochemical event resulting from the action) and the outcome (the phenotypic outcome of the action, mediated by all the complex ontology of the gene). This is also to reiterate the argument we have made in this paper on how these edits, events, and effects are rarely, if ever, in exact alignment.
We would like to clarify the following points, also to emphasize the main argument of this article. An edit using a nuclease like Cas may be “targeted,” but this is different from saying that it yields a “precise outcome”; in other words, any precision with respect to the mutation induced by the edit does not equate to control over the outcome. Not only is the cell repair in response to the cut or edit likely to be error-prone, as we have discussed in detail, but the biochemical event of the cell response arising from the edit may also lead to a phenotypic outcome other than what is intended, since it too has to interact with the complex ontology of the gene, arising from the discontinuous nature of the gene, the complex feedback loops of regulatory systems, the quantitative trait loci, the epigenetic effects, and so on. This complex understanding ruptures any simple account of clear causal pathways between genes and traits, and between the edits, events, and effects.
This distinction between edits, events, and effects has substantial relevance at the societal scale. The intended phenotypic outcome of an edit is generally limited to the organismal scale, but the societal impacts stretch far beyond it. Organisms relate to other organisms in communities. Communities provide functions and structure to living ecosystems. Ecologies are entangled in social, cultural, and political-economic “effects.” In other words, a significant number of downstream effects that may come out of gene-editing events can remain “unintended,” even if the edit is on-target. For this reason, we argue that the multi-scalar “effects” of gene editing are profoundly difficult to predict, regardless of biochemical targeting: whether the cut is on-target or off-target, and regardless of whether the cell repair was seamless or not. Some leading CRISPR scientists have indeed argued that gene expression is extraordinarily complex, and given such complexity they have been less concerned about what happens when CRISPR does not hit its target than when it does. Their fear resonates with our analysis, based on the ENCODE findings, that little is known about the complex networks that mediate gene expression, including over an organism’s lifetime and under changing environmental conditions. The unpredictable multi-scalar “effects” of gene editing connect to our discussion in Section IV of what happens when CRISPR meets its target and what ramifications or effects need to be discussed at a societal level.
3.5. Hope, hype, hubris, and history of (im)precision of genetic technologies
The publicly mediated ontology of the gene is embedded in narratives of ease and precision that clash not only with the complexity of postgenomic research but also with the challenges of CRISPR-based editing, as illustrated by our examples from the previous sections. However, these complexities and challenges are rarely addressed in narratives that focus on technological control and its transformative potential. In this sense, the framing of CRISPR is also a déjà vu moment that echoes dynamics of hope, hype, and hubris in the history of genetic science. To recount some of these historical moments: in 1927, Hermann J. Muller became an instant celebrity in America when he used X-ray technology to induce genetic mutations for the first time. Newspapers ran stories of how X-ray-induced mutations could help control the whole of evolution. A few years earlier, in 1924, J. B. S. Haldane had written an influential piece, “Science and the Future,” in which he predicted that Thomas Morgan’s work on hereditary material would create a super-human race after 150 years (as discussed in Shah, 2018).
The promise of radical transformation remained an integral part of the development of genetic sciences from recombinant DNA and “genetic engineering” in the 1970s to the mainstreaming of synthetic biology in the early 2000s. Echoes of this history are ubiquitous in debates about CRISPR, including transformative hopes expressed in iconic statements such as “we may be nearing the beginning of the end of genetic diseases,” made by Jennifer Doudna on multiple occasions, including in her 2015 TED talk. Such framing risks not only underdelivering on hopes but also reinforcing public backlash against the perceived hubris of genetic sciences. As we now highlight, this dynamic is especially salient in the agricultural domain, where debates about CRISPR-based crops risk repeating the mistakes of earlier controversies about GM crops, leading to polarization in public discourse and policy.
4. Rethinking the governance of CRISPR
So far, we have argued that the widely endorsed narrative of the “publicly mediated ontology of the gene” is both out of touch with scientific debates in genomics and postgenomics, and ill-prepared to recognize or anticipate the complex unintended (on- and off-target) effects that appear, in ways that are perhaps increasingly recognized, to accompany gene editing in practice. In this final section, we argue that this debate has a number of profound implications for the politics and governance of gene editing.
Especially in the agricultural domain, governance debates follow patterns of polarization that are remarkably stable in the development from transgenic GMOs to CRISPR-edited crops and foods (Gutzmann et al., 2017; Macnaghten and Habets, 2020; Montenegro de Wit, 2020). For over 2 decades, a wide coalition of scientists, journalists, and industry actors has supported transgenic GMOs by highlighting increased technological control against biosafety concerns and by stressing the transformative potential of GMOs in addressing global challenges such as food security, malnutrition, and poverty in the Global South. The ease and precision of CRISPR-based gene-editing techniques further strengthens this narrative by pointing to increased technological control with even more revolutionary promises of new crops with benefits to all. From this vantage point, opposition to gene-edited organisms may come with good intentions but has become fundamentally out of touch with the state of scientific knowledge about gene editing. Adopting this frame, Hotez (2020) sees opposition to gene editing as the next frontier of an anti-science agenda and argues that its fundamental misunderstanding of science puts it on a par with anti-vaccination movements and climate change denial.
However, opposition to gene-edited foods and crops is all too reasonable when subjected to a sociological gaze, not least because it is following a familiar narrative that reflects the controversies about transgenic GMOs. Proponents of transgenic GMOs may have promised benefits for all of humanity but often delivered benefits for large agricultural producers through herbicide- and insect-resistant crop varieties (Macnaghten and Habets, 2020). While “Golden Rice” may have been the hopeful face of the promised GMO revolution, “Roundup Ready” seeds for the widespread application of Monsanto’s herbicide were often its reality. The political economy and political ecology of global food production further hardened opposition to GMOs that became represented—and symbolized—as a technology of imposed neoliberal capitalism that assimilated the rural poor into monopolized and patented markets for large agricultural producers and seed companies (Macnaghten and Carro-Ripalda, 2015). As “local and global elites join hands” (Shah, 2005) in pushing for genetically engineered seeds, GMOs have become a symbol for allegedly value-neutral technological development that strategically downplays concerns about unintended consequences and hides its instrumental role in deepening global inequality (Jasanoff, 2002). From this vantage point, the development of gene-edited crops and foods is likely to be the next stage of a brutal modernization and agrarian intensification process that obscures its production of environmental and social injustices behind the lofty promise of “feeding the world.”
The lack of a substantial debate about the complexities and uncertainties of gene editing in the postgenomic era easily feeds into both narratives of this polarized debate and contributes to putting CRISPR governance on the same path as the governance of transgenic GMOs. For advocates of gene-edited crops, the narrow narrative about the ease and precision of CRISPR may come with short-term benefits by strengthening the case for biosafety and for transformative promises regarding food security, malnutrition, and poverty in the Global South. In the long run, however, this narrative stands in the way of anticipatory governance that is substantially reflective about the prospects of realizing these promises (Guston, 2014). As much as an emphasis on the ease of CRISPR highlights technological opportunities of inserting desired traits into crops, the complexity of the gene highlights the likely difficulties in realizing promises of precise molecular control of traits. Complementing the ubiquitous narrative of the ease of CRISPR-edited crops with an open debate about the complexity of the gene would provide a more substantial picture of the state of scientific knowledge that would allow a better-informed evaluation of both promises and fears surrounding gene-editing technologies. In addition to limiting substantial anticipatory governance, narrow narratives about the ease and precision of CRISPR editing are fueling discourse polarization by contributing to public mistrust. As this article has shown, the publicly mediated ontology of the gene as a stable object under technological control does not tell the whole story but is based on a strategically narrow narrative that omits the complexity and uncertainty of postgenomic research. By talking only about the precision of CRISPR/Cas in “cutting” pieces of DNA while omitting the complexity of the context of this technological intervention, the CRISPR-editing community is setting itself up for accusations of strategic simplification and dishonesty.
But there is a wider point. While scientists may have promoted the narrative of the ease and precision of gene-editing technologies, and their potential for societal progress and benefit, it is precisely this representation that has been resisted in public dialogue exercises, both in relation to emerging biotechnologies and to other domains of technovisionary science, such as nanotechnology and planetary climate engineering. In our own anticipatory public engagement research over more than 2 decades, for example, we have found widespread public resistance not only to the dominant imaginary that conflates technoscientific advance with societal progress, but also to the particular (genetic) reductionist trope that assumes that organisms can be reduced to constituent parts (including genes), and that these can be reconfigured through scientific work as a means to “improved” plants, animals and, indeed, societies (see Grove-White et al., 1997; Macnaghten, 2004; Macnaghten, 2010). Put simply, few people buy into this ontology, and it is for precisely this reason that people struggle to represent what it is that they find so uncomfortable. As social scientists, we have sought not only to recognize this phenomenon (which is so much more than a trade-off between risks and benefits), but to work with our participants to develop a robust set of representations, understandings, and contextualizations (Macnaghten et al., 2019; Macnaghten, 2021).
In part, this has been accomplished through reworking the category of the natural, the overriding concept through which public concerns are expressed, which we have viewed as a relational, contested, and historical category rather than an essentialist one (Macnaghten and Urry, 1998). For this reason, we have paid very close attention to expressed concerns that genetic technologies represent an “interfering” or “messing” with nature, particularly when presented as “simple,” “safe,” “targeted,” and “precise.” To adopt such a simplistic ontology is seen by our publics not simply as “hubris” (“can life in its complexity really be reduced to this?”), but as aligned to an imaginary in which “the rich get richer.” The danger is represented not as one of “scientists playing God” but of what Dupuy (2009) has called “false humility” (the assumption that biotechnological innovation at the molecular level is nothing special), premised on a biophysical style of thought “in which the biological can no longer be assumed to impose limits to human endeavour” (Macnaghten, 2010, p. 30). Thus, the charge is not simply that biotechnological innovation will have unforeseen consequences, which arguably could have been foreseen (at least in part) if science and innovation were aligned with a more complex ontology (the argument voiced in this paper), but that the CRISPR narrative is part of a wider style of thought that views life as infinitely plastic, without limit. Following this line of thought, we have sought to work with citizens to develop cultural resources through which public concerns can be situated and contextualized. This has been accomplished both by drawing on religious and theological perspectives, which are better equipped to question technological promises of release from earthly limitations (Davies et al., 2009), and by situating public concerns as relying on older, pre-Enlightenment ideas that configure the concept of nature “as having sacred qualities that establishes norms or order to the human world” (Macnaghten et al., 2019, p. 511).
Returning to the theme of anticipatory governance, four implications follow. First, if we are to govern gene-editing techniques in an anticipatory manner, then we will have to attend far more closely to uncertainties in the science and to the likelihood of unanticipated consequences. To put it bluntly, if we know that off-target effects have occurred and that genomes and cells are being altered in ways that are, at best, only partially transparent, then we should expect more of this to happen and integrate that expectation into scientific and regulatory practice. In part, this requires new and better methods, including the whole genome sequencing methods outlined above, to identify off- and on-target unintended effects. But there is a wider point concerning the need for a deeper and more humble epistemic way of thinking about intervening in genomes, in which “editing” may not be the appropriate metaphor.3 This requires that practitioners find time to slow down, to embrace uncertainty, to reflect on the language and metaphors they employ, and to collectively think through the effects of these in practice (Middelveld and Macnaghten, 2021).
Second, we need to recalibrate the role of governance and regulation in striking a balance between enabling benefits and managing risks. If a more complex ontology points to the likelihood of more pervasive unforeseen effects, then we need to develop anticipatory methodologies equipped for such exploration, noting that this extends well beyond traditional science-based risk assessment methodologies (Shah, 2011). Governance is not limited to matters of risk to health and the environment, both short- and long-term, but extends to a reflective evaluation of benefits (Shah, 2008) and to the profound transformations (societal, ethical, ecological) that gene editing may bring into the world. Methods designed to anticipate such effects include, among others, foresight, scenarios, horizon scanning, and technology assessment (Stilgoe et al., 2013; Macnaghten, 2020). Such an evaluation requires not only the application of anticipatory methods but also the broad inclusion of publics and stakeholders in regulatory processes, to ensure a substantive account of the socioeconomic and ethical issues that need to be addressed, and potentially even a tiered regulatory framework that includes a reflexive assessment of socioeconomic considerations (Macnaghten and Habets, 2020).
Third, there is the sphere of environmental and socioeconomic consequences that could manifest not because CRISPR has failed to meet its target but because it has met it. Here, the ontology of the gene is connected to an ontology of industrial agriculture that, despite its reformulation in the guise of “Sustainable Intensification,” “Climate Smart Agriculture,” “Precision Agriculture,” and so on, remains grounded in the logics of scarcity (to be answered with yield), simplification, control, and mastery.4 Unless such connections are challenged and questioned, CRISPR-based agriculture is likely to become embedded in technological regimes—with their associated sociotechnical lock-ins and path dependencies—that prevent other forms of agricultural innovation, such as agroecology, from taking root (Vanloqueren and Baret, 2009; Montenegro de Wit, 2020).
Fourth, and perhaps most challenging, we need to develop governance mechanisms and cultures equipped to shape science and society relations on matters of ontology, including the ontology of the gene, and to open up such deliberation to public debate. What is the role and relevance of ideas of naturalness, both for scientific research on gene editing and for its governance (see also Nuffield Council on Bioethics, 2015)? How to cultivate a debate within the gene-editing community about the (Earthly) limits of our capacities with respect to knowledge and power, and about what constitutes ill-judged action? How to design listening spaces where two-way dialogue can take place between molecular engineers and other actors on matters of ontology? How to integrate a plurality of public values, such as those of equity, solidarity, and sustainability, into research and governance processes (Nuffield Council on Bioethics, 2012)?
This article invites scientists and governance actors to do better. We have argued that the narrative of the ease and precision of CRISPR editing tends to invoke an ontology of the gene as a stable object under technological control, an ontology that both misrepresents the scientifically contested understanding of the gene in postgenomic research and frames the governance debate in unhelpful ways. In contrast, engagement with this complex state of research would allow scientists to clarify where CRISPR/Cas is indeed remarkably precise (e.g., in cutting pieces of DNA) and where these precise interventions interact with the messy complexities documented by postgenomic research. Moving beyond the narrow narrative of the ease of CRISPR/Cas would contribute to a governance discourse that can address the causal specificity of targeted gene edits—and their potentially far-reaching worldly effects—as well as the causal complexity of the systems in which they are embedded. Rather than circumventing this complexity as a threat to the public acceptance of CRISPR technologies, scientists have the opportunity to create a more open dialogue that can respond to the patterns of polarization of the GMO debates through a more honest conversation about knowledge and technological opportunities, as well as about uncertainties and the limitations of technological control in living organisms.
Data accessibility statement
Data for this publication came primarily from the published literature, which is cited in the text.
Acknowledgments
We would like to thank Michelle Habets and Keith Lindsey for their comments on an earlier version of the paper. We also thank Jack Heinemann, David S. Thaler, and one anonymous reviewer for comments that greatly strengthened our article, as well as the editorial team at Elementa, in particular Alastair Iles and Maywa Montenegro de Wit, for their guidance and support.
Funding
Phil Macnaghten’s contribution has been supported by a grant from the Netherlands Organization for Scientific Research (NWO) under the Responsible Innovation grant program (MVI; grant number: 313-99-331). David Ludwig’s contribution has been supported by an ERC Starting Grant (851004 Local Knowledge) and an NWO Vidi Grant (V1.Vidi.195.026 Ethnoontologies).
Competing interests
There are no competing interests.
Author contributions
The authors are solely responsible for the data collection, analysis, writing, and publication of this article.
Notes
1. Before we present our own argument, a terminological clarification is required. CRISPR is short for clustered regularly interspaced short palindromic repeats, a class of DNA sequences. Scientists use a specific enzyme called Cas9 (CRISPR-associated protein 9) that uses a single guide RNA (sgRNA) to recognize and cleave specific strands of DNA that are complementary to the guide sequence. Cas9 enzymes together with CRISPR sequences form the basis of a technology known as CRISPR/Cas9 that is used by scientists to edit genes within any organism. Cas9 is the enzyme that is used most often, and hence CRISPR is usually understood as the gene-editing system CRISPR/Cas9. However, because other enzymes such as Cpf1 can also be used to cut the DNA, we refer to CRISPR here as a gene-editing tool in broad terms.
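For readers who find a computational analogy helpful, the targeting logic described in this note can be illustrated with a minimal sketch: a guide sequence specifies a target site by base-pair complementarity, and the nuclease cuts where that match occurs. The Python snippet below is purely illustrative and is not part of the original analysis; the sequences, function names, and the simplification of ignoring PAM requirements and chemical detail are our own hypothetical assumptions, not an actual CRISPR design tool.

```python
# Toy illustration of guide-directed target recognition (not a real CRISPR design tool).
# A guide sequence base-pairs with the complementary DNA strand; here we simply locate
# the matching site in a (hypothetical) DNA string and report its complementary strand.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}


def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))


def find_target_site(genome: str, guide: str) -> int:
    """Locate the site a guide sequence would direct the nuclease to.

    The guide (written in DNA letters for simplicity) matches one strand
    and is complementary to the other. Returns the 0-based index of the
    first match, or -1 if no matching site exists.
    """
    return genome.find(guide)


# Hypothetical example sequences (far shorter than a real ~20-nt guide and genome).
genome = "TTACGGATCCGATGCTAGCTAACGT"
guide = "GATGCTAGC"

site = find_target_site(genome, guide)
if site >= 0:
    matched = genome[site:site + len(guide)]
    print(f"Guide matches at position {site}; complementary strand: "
          f"{reverse_complement(matched)}")
else:
    print("No complementary target found.")
```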
2. Transcription is the process by which a particular segment of DNA is copied into RNA by an enzyme (RNA polymerase). The resulting RNA transcripts, in turn, serve as templates for protein synthesis through translation.
3. We are grateful to Alastair Iles for this and other helpful points.
4. We are grateful to Maywa Montenegro de Wit for this and other helpful points.
References
How to cite this article: Shah, E, Ludwig, D, Macnaghten, P. 2021. The complexity of the gene and the precision of CRISPR: What is the gene that is being edited? Elementa: Science of the Anthropocene 9(1). DOI: https://doi.org/10.1525/elementa.2020.00072
Domain Editor-in-Chief: Alastair Iles, University of California, Berkeley, CA, USA
Guest Editor: Jack Heinemann, School of Biological Sciences, University of Canterbury, Christchurch, New Zealand
Knowledge Domain: Sustainability Transitions
Part of an Elementa Special Feature: Gene Editing the Food System