The “(classical) scientific view of the world” that characterizes the modern history of human civilization succeeded by objectifying nature, humans, and society, subjecting them to reductive analysis into (approximately) linear causation so as to allow prediction and control. However, as it grows in maturity and complexity, our modern society now confronts the multilayered causal structures underlying real phenomena, which classical science had abstracted away through reductive approximation; consequently, modern scientists are perplexed by limitations on comprehension, predictability, and controllability. The “uncertainty principle” of quantum physics, discovered a century ago, overthrew this classical mechanistic and deterministic worldview, but the “(quantum) scientific worldview” has remained confined to the level of microscopic science and has to date never been extended to the life-size human world.
However, as practical applications of the quantum computer become realistic, it may provide us with an innovative way to manipulate such complex causal structures and open a new era in the history of civilization. In this paper, we build on our earlier research findings concerning the evolutionary patterns of human cognition and extrapolate from them to advance speculations on the mechanism of a phase transition of worldviews, from those based on classical causal structures to those based on quantum ones, expecting to gain insights into practical modes of computation that could realize such a transition. The paper begins with a section examining the origin of the linear approximation adopted in classical science, backcasting from the evolutionary history of the (linguistic) consciousness of our human ancestors. In the next section, we show how human intelligence and civilization have in fact evolved in a manner analogous to quantum laws, review the limitations of modern science, and find an expression of these laws in Eastern philosophy. This section proceeds to show the potential of quantum computation not only to realize a fusion of Eastern and Western approaches but also to integrate the humanities and natural sciences. The final section concludes that this new framework can expand and develop the structure and function of human “consciousness” and build a bridgehead against recent “anti-scientism,” which is rooted in skepticism concerning the (classical) scientific view of the world and humanity.
1. Illusion of control
1.1. The scientific view of the human world and the limitations of human consciousness
The prosperity of modern scientific and technological civilization can be said to have been built on the mechanistic and deterministic worldview first established by Newtonian physics (Newton [1687] 2016) and Descartes’s mind-matter dualism (Descartes [1644] 1985) some 350 years ago. In this “scientific worldview,” the laws of classical mechanics were applied to all entities in nature, including humans themselves, enabling scientists to understand, predict, and control these entities’ states in the past, present, and future. This worldview, reduced to combinations of linear causations, prevailed over our intuitive understanding of reality. It exhibited an overwhelming power to spread across the globe, and it rapidly shaped our current human civilization.
Such a worldview expressed in terms of linear causality is highly compatible with the human-specific faculty of “language,” which expresses semantic content in a sequential, linear arrangement of symbols (Saussure [1916] 1995): a syntax, or word order. The scientific worldview could in effect be regarded as an extreme manifestation of the internal logic of the linguistic system. This superb linguistic ability, however, also carries restraints and limitations inherent in its characteristics. While the chronological linearity of language lets us understand, predict, and control the world through descriptions of exclusive causation, it cannot cope with developments of events that arise from the superposition and layering of multiple probabilistically parallel and coexistent possibilities, which classical science has thus far discarded in favor of linear approximation. These characteristics have been of particular benefit for the clear and straightforward presentation of “scientific” content, but when trying to describe such superposition, language is forced into extremely roundabout and esoteric phraseology. Thus, language has restricted our mode of logical scientific thinking: we approach the real world of multiple causal relationships through a comprehensible framework that converges and approximates those relationships into a single linear chain. (Semantic ambiguity in nonscientific daily utterances, poetry, art, or literature will be discussed separately later in box 4.)
This contraction to linearity is also tied to the unity of human “consciousness.” It is usually impossible for us to divide our consciousness into multiple coexisting streams, since these are normally experienced as a single unified temporal stream (James [1890] 1950; Husserl 1950); where such division has been achieved at all, it has been by a very small number of practitioners through special spiritual or martial arts training. To date, no school has succeeded in articulating and explicitly transmitting such a spiritual state in linguistic definitions—for example, in a book of traditional secrets. The only way the masters could approximate it was to propose methods of training that lead students to acquire it through their own “enlightenment.”
In other words, human consciousness, while linguistically constrained, has fostered the illusion of prediction and control over reality generated by a linear understanding of the world’s dynamics—a characteristic that underpinned the establishment of the modern scientific worldview. Hence, the human “scientific worldview,” “language,” and “consciousness” have coevolved, developing through interaction with each other throughout history.
1.2. Human evolution as “Triadic Niche Construction”
The origin of the human view of the world as a controllable object seems to date back to the era when our primitive ancestors began using tools some two million years ago. That coincided with a dramatic acceleration of the expansion of the brain (Iriki et al. 2021). Tools are the materialization of the user’s latent “intentionality” to act on the environment, and the mechanism for tool use becomes embedded in the positive feedback loop linking brain, cognition, and environment.
Our previous studies (Iriki and Taoka 2012) showed that when primates (including humans) hold tools in their hands intending to interact with the environment, (1) specific areas of the cerebral cortex (neural niche) that support the tool-use function expand, (2) new cognitive abilities (cognitive niches) emerge around these brain areas, and (3) such tool usage modifies the surrounding world (environmental niche). Thus, tool use established a loop-shaped mechanism of “triadic niche construction,” through which the next round of novel neural and cognitive niches would be constructed, in adaptation to the modified environment, in which extragenomic and transgenerational information is stored. The incorporation of materialized “oriented intentionality” into this process, as accidentally induced by tool use, is at the origin of humans’ rapid acquisition of their perspective of the environment as an object of control.
The process of “triadic niche construction” is here proposed as advancing in two phases (Iriki et al. 2021). In the first phase, the expansion of the neural niche must be instantiated through a large investment of parenchymal resources, including the neurons comprising brain tissue. It progresses gradually and reversibly by balancing cost and benefit for each specific cognitive domain (figure 1, top). Once the expansion of these niches reaches a certain extent, they saturate the whole brain, causing some of these domains to overlap or interconnect (figure 1, bottom). The precise sequence of such associations and the pattern of their combination may be probabilistic, depending on the situation. But when they collectively become supersaturated in the brain, the process suddenly explodes, like a phase transition. The novel neural niche is then represented by novel forms of neural networks, which can be achieved merely by “rewiring” among regions that have already reached saturation. The cost of this second phase is therefore minimal, and the speed with which it progresses is maximal. That makes the whole process irreversible and the resulting function domain-general.
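The dynamics of this second phase can be illustrated with a toy computation. The following sketch is an analogy of our own devising, not the authors’ model: it uses the giant-component transition of a random graph, where nodes stand in for saturated cognitive domains and randomly added edges for probabilistic “rewiring,” to show how large-scale connectivity appears abruptly once a critical density is crossed.

```python
# Toy analogy (not the authors' model): abrupt emergence of large-scale
# connectivity in a random graph G(n, p), as a stand-in for the "phase
# transition" from saturated but separate cognitive domains to a rewired,
# domain-general network. Node = cognitive domain; edge = one "rewiring".
import random

def giant_component_fraction(n: int, p: float, seed: int = 0) -> float:
    """Fraction of nodes in the largest connected component of G(n, p)."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x: int) -> int:                 # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):                       # add each possible edge with prob. p
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)

    sizes: dict[int, int] = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

n = 1000
for c in (0.5, 0.9, 1.1, 1.5, 3.0):          # mean degree c = p * (n - 1)
    frac = giant_component_fraction(n, c / (n - 1))
    print(f"mean degree {c:3.1f} -> largest component fraction {frac:.2f}")
# Below mean degree 1 the largest component stays tiny; just above it, a
# giant component suddenly spans a finite fraction of the whole network.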
This association among cognitive domains is realized through the principle of “stimulus equivalence” (Iriki et al. 2021) (box 1), which strongly reflects an (illogical) cognitive bias unique to humans. This principle is also the basis of human language, which allows us to transcend the constraints of the real world and has provided humans with the means to rapidly create new cognitive and environmental niches, fully exploiting the virtually cost-free physical operation of mere rewiring between exapted resources. As these conceptual achievements were shared and spread throughout human society through language, the intentionality already embedded in the human world through triadic niche construction manifested itself consciously and spread irreversibly across the globe at an explosive rate. In other words, the three elements—world, language, and consciousness—were created in tandem to shape human civilization as seemingly controllable, objectified matter.
Domain-general intelligence is created through the association of different cognitive domains: mental activities ranging from the expansion of body-part images through tool use, to social formations based on empathy between self and others, to logical thinking through language. Common to these activities is the discovery of equivalence relations between different objects or phenomena (tools and body, self and others, etc.) and the formation of new concepts based on such equivalence; equally important is the ability to freely reconstruct these connections by expressing them abstractly in language.
The mental activity common across all of these areas is the function of “stimulus equivalence.” This is exemplified in the learning of language, which is based on the interchangeability of functional equivalencies between arbitrary objects and stimuli that seemingly belong to completely different categories. For example, when a child is taught by its mother that the sound “rabbit” (A) corresponds to a real rabbit (B) (figure B1; A→B), the child will vocalize “rabbit” (B→A) just by looking at the real rabbit. When taught that the actual rabbit (B) corresponds to the written characters “rabbit” (C) (B→C), the child will recognize the actual rabbit just by looking at the characters (C→B). In other words, once taught A→B and B→C, the correspondences B→A (symmetry), C→B (symmetry), A→C (transitivity), and C→A (equivalence) spontaneously emerge without requiring specific explanations. When stimuli A, B, and C become functionally interchangeable in this way, “stimulus equivalence” (the equivalence relation of stimuli) is established.
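The combinatorics of this example can be made explicit in a few lines of code. The sketch below is ours, not part of box 1: it takes the two explicitly taught relations and closes them under symmetry and transitivity, recovering exactly the four untaught correspondences described above.

```python
# Minimal sketch of "stimulus equivalence": from the taught relations
# (A->B, B->C), derive the untaught relations that emerge spontaneously in
# humans -- symmetry (B->A, C->B), transitivity (A->C), and equivalence
# (C->A) -- by closing the relation under both rules.
def equivalence_closure(taught: set[tuple[str, str]]) -> set[tuple[str, str]]:
    derived = set(taught)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(derived):                  # symmetry
            if (y, x) not in derived:
                derived.add((y, x)); changed = True
        for (x, y) in list(derived):                  # transitivity
            for (y2, z) in list(derived):
                if y == y2 and x != z and (x, z) not in derived:
                    derived.add((x, z)); changed = True
    return derived - taught                           # only the untaught pairs

taught = {("sound 'rabbit'", "real rabbit"),          # A -> B
          ("real rabbit", "written 'rabbit'")}        # B -> C
for pair in sorted(equivalence_closure(taught)):
    print(pair)   # the four untaught correspondences emerge "for free"
```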
Stimulus equivalence is a feature that can be extended to any object or event—for example, to the sounds “lièvre” (in French) or “Hase” (in German), to different kinds of rabbits, to the written words “lièvre” and “Hase,” and so on. This property is established far more readily in humans than in other species, and there are very few confirmed cases of stimulus equivalence in nonhuman animals.
In humans, the phenomenon is observed from infancy onward without training, regardless of age or the presence of disease. Stimulus equivalence is clearly demonstrated in language acquisition, but it is not acquired through language use, since it can be established even in the absence of a speech repertoire and, though rare, has reportedly been established in some nonhuman animals. It is thus considered a background cognitive ability essential for the expression of the representational functions necessary for language use.
The concrete neural mechanism realizing this conceptual “stimulus equivalence” must be the emergence of long-range connections between different cortical regions in the second phase of triadic niche construction. While higher-level concepts emerge through inter-areal coupling, individual phenomena within each region that are not coupled—even those that could have been coupled but never actually were—will be discarded. However, as larger-scale networks form, they eventually become foundations for the emergence of similar functions. For example, while the languages of the world are extremely diverse, many similarities coexist in their grammatical structures, implying the existence of a universal grammar (Chomsky 1957) (see also box 4). The same must be true for the diversity and similarity of civilizations across the world, all of which emerged within nearly the same narrow window in the history of humankind.
Another notable feature of the principle of stimulus equivalence is the automatic generation of reverse logic (symmetry and equivalence) as a result of logical learning. This is a strong, human-specific illogical cognitive bias, as described above, and it leads to reasoning in which causality is easily reversed—an error that usually goes unnoticed (see section 3.2 below). Despite this downside, the advantage of stimulus equivalence as the basis of language, concept formation of various kinds, and abstraction has been massive, and human society has evidently been able to manage the disadvantages.
2. Fantasy of the causal world and the foundation of humanities
2.1. Complexity of causal paths along the evolution of humans and their cultures
The evolutionary mechanism of triadic niche construction discussed above is based on an algorithmic loop of reactions. It is temporally irreversible and evolves cumulatively on the basis of each immediately preceding cycle, rather than being timeless and reversible like the laws described by the equations of classical mechanics. Characteristic of this process is that the “latent capabilities” of each cognitive domain matured independently and accumulated during the first phase, and that they are combined and linked in the second phase in a probabilistic manner until they become fully integrated. Since there are a finite number of materials available, the conclusions reached by synthesizing them appear similar, but the paths leading to them can consist of various combinations and orderings. In this scheme of complex causal superimposition, the “what-if” stories common in discussions of evolutionary, developmental, or historical mechanisms, as well as discussions that pursue the “main causes” of the paths actually taken, are only some of many candidates coexisting in parallel in a “cloud of possibilities.” Once a particular path appears to instantiate a chain of linear causality, the other possible paths disappear.
Hence, when studying phenomena with such causal structures, it makes little sense to pursue an explanation that assumes a unique linear causation and requires ruling out alternative pathways. However, it is extremely difficult to systematically explain such complexities with human cognitive traits constrained by the linguistic limits of comprehension.
Here lies the opportunity for the next leap forward in academic knowledge: to identify the limitations of the scientific worldview rooted in the cognitive tendencies associated with classical mechanics, and to find a breakthrough in this area of study.
2.2. The “path integral” in quantum mechanics as a general principle of causation
The existence of these complex and layered causal structures, which could not be captured by the linguistic consciousness associated with the scientific worldview, was discovered in the early twentieth century as the principle of quantum physics (see Feynman 2005 for a general review), though at that time its potential application to the subject discussed in this paper was, of course, not yet recognized. In this theory, any attempt to explain phenomena as linearly caused forces us to accept fundamental uncertainties between the elements. To describe such ambiguous phenomena, Werner K. Heisenberg proposed “matrix mechanics,” which expresses the spatiotemporal interactions of the constituent elements by means of a matrix comprising multiple elements. Erwin R. J. A. Schrödinger described the probability distributions of such indeterminate phenomena by means of “wave functions.” But Richard P. Feynman’s “path integral” is probably the formulation most intuitively compatible with the subject of this paper.
These three methods of expression have proven to be mutually equivalent. The path integral can be summarized as follows. While classical mechanics specifies the trajectory of an object as a single path (figure 2, bottom left), the quantum motion handled by the path integral is represented as the oscillation or wave motion of a “field” expressing an infinite number of paths (figure 2, bottom middle). The path integral expresses the total state of the fluctuating quantum field, and when this expression is metaphorically applied to the evolutionary process, it can be interpreted as expressing the complex multiplicity of paths that develop from one evolutionary stage to another (figure 2, top; exemplified by the combination of multiple latent capabilities along the human evolutionary process of triadic niche construction depicted in section 1.2). In the path-integral picture, once one pathway is reinforced and observed, the other possible pathways, described as a wave packet, are weakened and eventually contract toward the observed one, and their existence disappears (figure 2, bottom right). Such analogical speculations are known to often result in “leaps” of scientific advancement (Holyoak and Thagard 1996) (see also box 4 for an example).
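For concreteness, Feynman’s prescription can be stated in a single standard textbook formula (added here for illustration; it is not specific to this paper’s argument). The amplitude K(b, a) for a transition from configuration a to configuration b is a sum over all conceivable paths x(t), each contributing a phase determined by its classical action S:

\[
K(b,a) \;=\; \sum_{\text{all paths } x(t)} e^{\,i S[x(t)]/\hbar},
\qquad
S[x(t)] \;=\; \int_{t_a}^{t_b} L\bigl(x(t),\dot{x}(t),t\bigr)\,dt .
\]

In the classical limit, as ħ → 0, the phases of neighboring paths cancel except near the path of stationary action, so only the single classical trajectory survives—formally the same “contraction to linearity” that this paper traces in cognition and language.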
When various constraints are assumed here and there along this pathway (in quantum theory, for example, one constraint corresponds to a combination of a barrier that particles cannot pass through and a slit that they can), one can imagine the analogue: a complex, layered combination of various elements and conditions in the course of evolutionary processes. Once development along a certain path is realized (observed), the other possibilities are erased, and the triadic niche construction algorithm advances to the next cycle of evolution.
Therefore, by extracting only the path that has been realized and trying to construct a linear causation from it, we will never grasp the general principles that lie behind the whole scene, nor will it be possible to establish reproducibility or generalizability.
However, if we examine this process as a whole, including phenomena that could have occurred but did not, we will be able to find the general principles. In contrast to the reduced deterministic worldview of classical mechanics, composed of approximate linear causality, this quantum worldview is reminiscent of holistic Eastern philosophy. Indeed, attempts to formalize and make sense of the I Ching (or Book of Changes) (https://en.wikipedia.org/wiki/I_Ching), regarded as the origin of Chinese philosophy, have been made at many stages in the development of Western science—for example, by Gottfried W. Leibniz ([1714] 2014) in the early days of classical mechanics and by Wolfgang E. Pauli and Carl G. Jung ([1954] 1971) in the early days of quantum mechanics (box 2).
Among the diverse theoretical attempts to explore new worldviews in response to the radical change in physics, Jung’s idea of synchronicity deserves more attention than it usually receives (Jung [1954] 1971). Collaborating with Pauli, a renowned physicist of his time, on the one hand, and assimilating the ancient Chinese philosophy of the I Ching on the other, Jung developed his idea of synchronicity. Following Yuasa (2008), we can summarize the concept of synchronicity in three points: (1) An acausal connecting principle: synchronicity is a principle that relates multiple phenomena not linked through linear causality. It typically manifests as contingent events that seem meaningfully related in cognitive perception and in the material realm beyond duality, such as parapsychological events. (2) Beyond dualism: the mechanistic worldview of classical physics was based on the mind-matter dualism that separates the human observer from the natural world—that is, separating subjectivity (res cogitans) from objectivity (res extensa), as in Cartesian philosophy (Descartes [1644] 1985). Synchronicity is considered a generating principle of the world, in which human subjectivity is embedded within and synchronized with the natural world. (3) Generation through meaning: the natural world involving human subjectivity is not deterministic but open to chronological development, especially through contingent events. For human subjectivity functioning as an internal observer of those events, the natural world seems to generate “meaning” that connects multiple events beyond randomized probability. For this reason, Jung also explains synchronicity as a principle of “meaningful coincidence.”
On this basis, Jung regarded the bagua patterns of the I Ching as prototypical “meanings” that stand behind synchronicity—that is, prototypal “meanings” through which multiple events are constellated. Each bagua is composed of three solid (yang) or broken (yin) lines, unfolding as nature multiplies the branchings of the yin and yang forces to produce a pattern of possible meanings after three branchings (figure B2). The bagua are further combined to form the sixty-four hexagrams, each of which represents how the natural world may develop by multiplying the bifurcations of yin and yang, generating the conceivable patterns of meaning after six bifurcations. According to Jung, the meaning of these figures is what constellates a human observer and the surrounding nature at a certain point in time, implying the direction toward which the natural world is chronologically developing through contingent events. The worldview represented in the I Ching is thus far from a deterministic one based on linear causality; it is one of emergence through contingency, based on a principle of conceivable meaning deriving from the fundamental forces of yin and yang and, ultimately, the unified universal principle of Taoism.
Looking further back in the history of Western philosophy, the worldview of monadology developed by Leibniz ([1714] 2014), who tried to apply his binary logic (composed of 1 and 0) to the pair of yin and yang to interpret the I Ching, corresponds with our argument here to a certain degree. In contrast to the Cartesian view of nature as matter, res extensa, monads are not quantitative but qualitative substances that appear as multiple agents in the world and operate as a unified force of the world. Each monad has capacities of “perception” and “appetite,” so that it moves or acts within the world according to its own internal tendency, reflecting the whole world from its own perspective. What is characteristic in Leibniz’s worldview is that, in contrast to Descartes, he emphasizes the living aspect of the natural world, including humans, as types of monads. In Cartesian mind-matter dualism, nature was separated from the human mind and enclosed within a deterministic domain governed by linear causation. Leibniz attempted to open up a worldview in which the basic substances of the natural world are all alive and develop through temporality, even though he also posited a preestablished harmony of this world by an act of God. Such primordial worldviews are also well depicted across indigenous knowledges, in a variety of quantum-like expressions (e.g., Deloria and Wildcat 2001), which might provide crucial resources for future scientific formalizations.
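Leibniz’s binary reading can be made concrete in a few lines of code (our illustration, mapping yang to 1 and yin to 0 as he proposed); note that this is precisely the dichotomous 0/1 reading that the next paragraph identifies as one reason these attempts failed.

```python
# Leibniz's binary reading of the I Ching (yang = 1, yin = 0): the 8 bagua
# are exactly the 3-bit patterns and the 64 hexagrams the 6-bit patterns.
def figure(value: int, n_lines: int) -> list[str]:
    """Render an n-line figure; least significant bit = bottom line."""
    return ["--- (yang)" if (value >> i) & 1 else "- - (yin)"
            for i in range(n_lines)]

bagua = [figure(v, 3) for v in range(2**3)]       # 8 trigrams
hexagrams = [figure(v, 6) for v in range(2**6)]   # 64 hexagrams
print(len(bagua), len(hexagrams))                 # 8 64
print("\n".join(reversed(figure(0b111010, 6))))   # one hexagram, top line first
```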
However, these attempts have not been successful to date. The failure can be explained by the fact that the intrinsic nature of individuality and singularity does not fit into the framework of a classical science based on universality and reproducibility, so that calculation (prediction and control) was not possible. Even more, the failure resulted from regarding yin and yang as an explicit opposition of 0 and 1, and thereby trying to reproduce the processes of this complex world merely by complicating deterministic calculations. As shown in figure B2 (center), yin and yang are not dichotomous but essentially intertwined, probabilistic, and multilayered structures that appear differently depending on the individual case. Western science to date has not had the language to calculate such phenomena.
2.3. Potential of quantum computing in the study of humanities
With the practical deployment of quantum computers now in prospect (box 3), it may become possible to raise path-integral mechanisms, thus far impossible to implement, to a conscious level. What had been theoretically envisioned but considered a distant dream is now within reach, thanks to developments in mathematics, in programming, and in the implementation of novel technological devices. At present, although the superparallel processing of this computational method and the resulting tremendous increases in scale and speed are attracting attention, we are still at a preliminary stage of identifying computational targets suited to quantum computing’s operational use. But as proposed earlier in this paper, fundamental operational possibilities concern the principles of human spiritual and cultural activities and their evolutionary development.
Classical computers process information by logical operations using only two types of numbers, 0 and 1, called “bits” (binary digits). In contrast, quantum computers process information through novel logical operations that exploit the quantum superposition of states on these bits (see Arute et al. 2019). It is as if yin and yang looked binary on the surface but were essentially intertwined and multilayered. By using quantum bits (“qubits”) with quantum properties, it is possible to represent 0 and 1 simultaneously (parallelism); furthermore, when information comprising multiple bits is combined, a quantum computer can process all the combinations at once, whereas a classical computer must process them one at a time (high speed).
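As an illustration (a minimal state-vector simulation in ordinary software, not a real quantum device), the following sketch shows what “parallelism” means concretely: n qubits are described by 2^n complex amplitudes, and applying a Hadamard gate to every qubit places the register in an equal superposition of all 2^n bit patterns at once.

```python
# Minimal state-vector sketch of qubit superposition (simulated with NumPy).
import numpy as np

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                  # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

def apply_to_all(gate: np.ndarray, state: np.ndarray, n: int) -> np.ndarray:
    """Apply the same single-qubit gate to each of the n qubits."""
    op = gate
    for _ in range(n - 1):
        op = np.kron(op, gate)                  # tensor product over all qubits
    return op @ state

state = apply_to_all(H, state, n)
probs = np.abs(state) ** 2
for i, p in enumerate(probs):
    print(f"|{i:0{n}b}>: amplitude {state[i].real:.3f}, probability {p:.3f}")
# All 8 basis states carry equal amplitude 1/sqrt(8): one register holds 2**n
# coexisting possibilities. Measurement "contracts" this to a single outcome.
```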
The idea of a quantum computer was first proposed by Feynman in the early 1980s. Inspired by this idea, physicists and mathematicians proposed technical methods and algorithms for realizing a quantum computer one after another from the 1990s onward, but most remained theoretical hypotheses, and scarcely any scientists or engineers seriously tried to build an operational machine. In the 2010s, however, when the IT giants entered quantum computer development in earnest, their huge R & D budgets and proficient PR strategies attracted worldwide interest. A particularly substantial shock came in 2019, when Google announced the demonstration of “quantum supremacy”: a calculation that would take the fastest classical supercomputer 10,000 years was completed in 3 minutes and 20 seconds by Google’s prototype.
In spite of the global enthusiasm, at the current state of development it will admittedly take quite a few more years to manufacture devices at a scale sufficient to realize quantum supremacy, through parallelism and high speed, on the kinds of problems the world generally envisions. The major remaining challenges are the development of “error correction algorithms” for extracting appropriate results from simultaneous, overlapping outputs, and of devices that can operate reliably at large numbers of qubits (at least megaqubits, for practical operation). However, small-scale prototype machines with a few hundred qubits are already available and operating, and proof-of-concept (POC) studies such as the one proposed below in box 4 are ready to be initiated immediately.
In the humanities, there are many phenomena that, judging by a small number of examples, seem to manifest principles analogous to quantum physics in a nonstatistical way. Although it is impossible in the humanities to base a statistical description on very large sample sizes as in physics, a principle of generalizability should nonetheless exist underneath, and the phenomena should follow such principles. Therefore, although many hurdles currently remain for the practical use of large-scale quantum computation, the small-scale prototype machines already in operation should be sufficient to initiate POC research (box 4) rooted in the fundamental principles of quantum theory.
One application of quantum computation, for starters, could be the study of how linguistic utterances are generated in nonscientific daily speech, art, poetry, or literature, as raised in section 1.1. Utterances are enabled by selecting each word from among those in associative relations and coordinating the chosen words into a linear sequence—that is, into syntagmatic relations. If an inappropriate word is selected, or the words are coordinated into an inappropriate syntax, the utterance loses its context and does not make sense. In a single utterance there are potentially infinite combinations of word selections and coordinating syntax. What kind of mechanism underpins a speaker’s ability to make meaningful utterances for specific contexts through appropriate selection and coordination?
Human speech is made possible by sequencing words according to certain rules and structures (e.g., “a-car-hit-a-passenger-at-an-intersection”). However, each individual element expressed through the act of speech has a group of potential elements allied to it in memory (e.g., “intersection” evokes corresponding ideas of a traffic light, road, corner, etc.). In Saussurean linguistics (Saussure [1916] 1995), a chain of manifest linear elements is called a “syntagmatic relation,” whereas a linkage of latent elements forming a group is called an “associative relation.” Hjelmslev (1975), who advanced Saussurean linguistics, renamed the associative relation the “paradigmatic relation.” He then pointed out that the syntagmatic relation manifests a relation of coalition (both/and), while the paradigmatic relation is a potential relation of alternation (either/or), and that it is impossible to conceive of syntagmatic relations without paradigmatic relations. In other words, human speech acts are established from the beginning not only by linear sequences but also by the exclusive selection of one possibility from a wide variety of potentially associated words, akin to the “wave packet contraction” in the theory of the path integral.
At present, this similarity between linguistic utterances and quantum mechanics may take the form of an analogy—that is, a comparison between two things for the purpose of explanation. In the case of linguistic utterances, however, it is possible to make a concrete list of the words in the paradigmatic relations of a given utterance and to calculate all the possible sentences that could be aligned in its syntagmatic relations (e.g., “a-car-hit-a-passenger-at-an-intersection” could be transformed into “a-cheeseburger-serenaded-a-rainbow-at-the-intergalactic-party”). The relationship between all these possible sentences and the sentence actually uttered represents the “wave packet contraction.” If so, it should also be possible to create an artificial intelligence that calculates all the possible sentences in advance and chooses an appropriate sentence to utter in a real conversation with a human. When such technological reproducibility is demonstrated, the idea in this paper will be more than an analogy; the analogical understanding will lead us to “leaps” in scientific advancement (Holyoak and Thagard 1996).
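The calculation just described can be sketched in a few lines (our illustration, with hypothetical slot names and word lists not taken from the paper): each syntagmatic slot carries a paradigm of latent candidates, the “cloud” of possible sentences is their Cartesian product, and uttering one sentence is a contraction of that cloud to a single realized path.

```python
# Minimal sketch of paradigmatic/syntagmatic sentence generation and its
# "wave packet contraction" (slot names and candidate words are hypothetical).
import itertools
import random

paradigms = {                # syntagmatic slot -> paradigm of latent candidates
    "agent":    ["a car", "a truck", "a bicycle"],
    "verb":     ["hit", "missed", "passed"],
    "patient":  ["a passenger", "a pedestrian", "a dog"],
    "location": ["at an intersection", "at a corner", "on the road"],
}

# The full "cloud of possibilities": every syntagmatic alignment of one
# candidate per slot (3**4 = 81 coexisting sentences).
cloud = [" ".join(words) for words in itertools.product(*paradigms.values())]
print(len(cloud), "possible sentences")

uttered = random.choice(cloud)   # the "contraction": one path is realized
print("uttered:", uttered)       # all other latent sentences vanish
```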
To generalize further, the description of natural phenomena in language is nothing other than an arrangement of factors that manifest themselves linearly, in a chronological series, in accordance with syntagmatic relations. It is impossible to include simultaneously in such a description all the remaining potential of the paradigmatic relations. This does not mean, however, that such latent potentialities have nothing to do with the phenomena manifested; they must be seen as possibilities, superimposed on one another, that give an invisible context to the manifestation of the phenomenon as a phenomenon. From this perspective, many research questions fall within the scope of POC paradigms, including, for example:
• Linguistic linearity and multilinearity (the mechanism of convergence of paradigms into syntagmas);
• The generative mechanisms of poetic language (metaphor, connotation, etc.);
• Dreaming state of consciousness as potentiality and waking state of consciousness as its linear convergence;
• Symptoms of schizophrenia as a pathology of dreaming consciousness spilling over into everyday life;
• Performance generation in mask plays, including Noh (a traditional Japanese performing art);
• Victory or defeat as a result of changes in situation in competitive group behavior.
3. Control of illusion
3.1. Power of extended consciousness to unify paradoxical causations
The “illusion of control” was an extremely effective means for human beings to face nature, expediently brought about by the incorporation of an intentionality operating on the environment in the course of the evolution of human cognitive ability. At the same time, however, this evolution induced a paradox by simultaneously inviting mutually conflicting characteristics, as follows. On the one hand, to tame the complex and multilayered causal structure of the world’s reality, the most effective method of computation (for prediction and control) was to extract an approximate linear causation that could not be divided or operated in parallel. This serves as a means of achieving linguistic clarity—conscious recognition of the world through “wave packet contraction”-like processes. The ultimate form of this process was the modern mechanistic (scientific) view of the world and humans, a fantasy carrying an illusion of prediction and control, in which human (linguistic) consciousness played a major role. On the other hand, behind the explosive success of modern scientific civilization there silently lurk complex causal relationships, spreading and layered in a probabilistic manner, which the current state of human civilization is quite unable to calculate or articulate (Bateson 2000).
This paradox, however, could be resolved by the future practical application of quantum computation technology, which offers the possibility of computing information from seemingly chaotic causal structures and exposing them to conscious experience. This means that a formerly indivisible consciousness would acquire a technology for explicitly incorporating the “Eastern” worldview and its ways of thinking, hitherto cloaked in the garb of religion or witchcraft. In other words, this innovation in computation, and the novel “extended consciousness” it realizes, would bring into existence the complex causal structures that in the past remained an illusion—a cloud of complex, probabilistically layered entities that humankind abandoned in preference for understanding and conscious control through language. This computing technology would become a practical means of bringing complex causal structures back into the ground of a (novel) scientific worldview, which would continue to treat them as objects of control, as manifestations of the human mind, thoughts, and intentionality.
3.2. Integration of science and the humanities to defeat “anti-scientism”
What does such a concerted combination of language, consciousness, intentionality, and quantum computation imply for a future, extended scientific view of the world, humanity, and society? Modern scientific and technological civilization flourished on a mechanistic and deterministic worldview first established some 350 years ago by Newtonian physics and the Cartesian dualism of mind and matter. In this perspective, the laws of classical mechanics were believed able to predict and control all entities in nature, including humans, and this framework spread across the globe with overwhelming power. In this course, what was discarded in classical science’s process of linear approximation—along with the complexity of the causal structures layered behind the real world—was the human mental world reserved in the Cartesian duality, and the historical development of the humanistic, civilized world whose workings have been the subject of discussion thus far. The application of quantum computational principles proposed in this paper is an attempt to reintegrate these various fields, fragmented in the maturation of the present scientific worldview. It is in skepticism about such a divorce within this worldview that the currently emerging trends of anti-scientism are likely rooted (Holton 1993).
However, it is extremely difficult to break through the boundary conditions and reintegrate these fragmented fields from within the norms of the classical, and thus far successful, scientific worldview, which is based on our contracted and streamlined consciousness. This is because, living in a normative world that has matured around human cognitive characteristics, compliance with the rules—which should initially have been a method of control and prediction—eventually becomes an objective in itself, serving to maintain the system. This tendency toward reverse causality inevitably derives from the stimulus equivalence principle (box 1) underlying this cognitive trait, and we cannot escape its constraints. However, if we acquire the novel, extended consciousness depicted earlier, we may be able to bring this illusion within the range of controllable objects. If we integrate the humanities and the natural sciences through this newly discovered “margin of growth” in consciousness, it can also become the “margin of growth” of future science and of the future scientific worldview based on it. Were this to happen, we could thereby overcome the emerging trends of anti-scientism, and we could expect it to open a new door for human civilization in the history of global evolution.
Competing Interests
The authors have no competing interests to disclose.
Author Biographies
Atsushi Iriki received his PhD in neuroscience from Tokyo Medical and Dental University in 1986. He held research associate positions at Tokyo Medical and Dental University and then at the Rockefeller University (USA). He joined the faculty of Toho University Medical School as an assistant professor and then as an associate professor in physiology (1991–99). In 1999, he returned to Tokyo Medical and Dental University as a full professor and chairman of Cognitive Neurobiology. In 2004, Iriki was appointed head of the Laboratory for Symbolic Cognitive Development at RIKEN (first at its Brain Science Institute and, from 2018 onward, at the Center for Biosystems Dynamics Research, until his retirement in 2023). He is currently a senior researcher at the RIKEN Innovation Design Office, an adjunct professor at Keio University, a visiting professor at Nanyang Technological University (Singapore), and a senior fellow of the Canadian Institute for Advanced Research (Canada).
Shogo Tanaka is a professor of psychology and philosophy at Tokai University in Tokyo, Japan. He is also a senior visiting scientist at RIKEN. He received his PhD in philosophical psychology from the Tokyo Institute of Technology in 2003. Dr. Tanaka is primarily interested in phenomenology and psychology—specifically, in clarifying the theoretical foundations of psychology from the perspective of embodiment—and draws inspiration from the ideas of Maurice Merleau-Ponty and Edmund Husserl. The topics of his published papers encompass a broad range of issues, including body schema, body image, skill acquisition, embodied self, social cognition, and intercorporeality. From 2013 to 2014, and from 2016 to 2017, he was a visiting scholar at the Department of Psychiatry of the University of Heidelberg in Germany, where he worked on phenomenology, psychology, and psychopathology. His recent publications include the book Body Schema & Body Image: New Directions (Oxford University Press, 2021, coedited with Yochai Ataria and Shaun Gallagher).