This article examines the cognitive load lie detection hypothesis: the idea that lying is more challenging than telling the truth and, thus, that imposing cognitive load can exacerbate the challenge liars face and expose lies. I reviewed 24 publications to flag the derivation chains authors employ to justify the hypothesis. The findings indicate that authors recycle the same set of justifications, but not systematically. That state of the literature shields cognitive load lie detection from severe testing in two ways. There is no clear justification to focus on when wanting to nominate or design severe tests. And the justifications contain ambiguities that make it challenging to determine what would count as a severe test of the hypothesis. I illustrate those limitations and discuss the need to make cognitive load lie detection amenable to severe testing.
The prevailing view on tackling lie detection calls for stakeholders to focus on cognition. Psychology researchers espouse approaches designed to influence liars’ and truth-tellers’ thinking and communication strategies (e.g., Granhag et al., 2014). A perspective within this school of thought, cognition-based lie detection, is that investigators can use cognitive load interventions to expose lies. That idea continues to gain traction (e.g., Krakovsky, 2009; Schafer, 2020). This review examines the foundations of cognitive load lie detection.
An Overview of Cognitive Load Lie Detection
Zuckerman et al. (1981) asserted that lying demands more cognitive resources than truth-telling: “Creating the details of a [plausible] lie is a more difficult task than telling the truth” (p. 10). In this article, lying means a sender making a believed-false statement to another person, the receiver, (a) to make the receiver believe that statement is true, (b) to convince the receiver that the sender believes that statement is true, or (c) both a and b (Mahon, 2008).
Legal and forensic psychology researchers draw on Zuckerman et al.’s (1981) proposition when developing techniques or methods to detect lying.1 The literature in question, called “the cognitive [load] approach to lie detection”, recommends several techniques (e.g., Vrij et al., 2017). The proponents have grouped those techniques into three categories. A summary follows, based on Vrij (2015) and Vrij et al. (2017), who discuss the taxonomy in detail.
Imposing cognitive load. This category refers to interventions designed to make the interview setting more mentally challenging. For example, the receiver, a person posing questions, could ask the sender to provide their response in reverse temporal order (e.g., Vrij et al., 2012).
Encouraging interviewees to provide more information. The name of this category is also its definition. Techniques under this classification include requesting drawings in addition to spoken or written messages (e.g., Leins et al., 2011a).
Unexpected questions. This designation refers to methods of questioning that pose inquiries interviewees do not anticipate. Examples include asking about spatial and temporal information (Vrij et al., 2009).
Exponents argue that cognitive load techniques compound the liar’s uphill battle by exacerbating the mental demand to produce a plausible lie. Consider the following premise that forms the bedrock of cognitive load lie detection.
“If lying requires more cognitive resources than truth-telling, liars will have fewer cognitive resources leftover [emphasis added]. If cognitive demand is further raised [emphasis added], which could be achieved by making additional requests [emphasis added], liars may not be as good as truth tellers in coping with these additional requests” (Vrij & Granhag, 2012, p. 113).
The premise invokes a counterfactual2: the strain cognitive load enhances is one would-be liars will experience if they lie. There are studies wherein, after interviews, liars have reported experiencing more cognitive load than truth-tellers did (e.g., Granhag & Strömwall, 2002; Strömwall et al., 2006). At a minimum, liars must manufacture some details whereas, all things being equal, truth-tellers relay known information. That difference presents a level of difficulty one can exploit. Cognitive load lie detection assumes that questioning methods can introduce non-trivial challenges for would-be liars and that those difficulties can expose lying: this is the cognitive load lie detection hypothesis. There are reasons, however, why one cannot take that hypothesis for granted.
Time to Pause and Reflect
What is the evidence in favor of the cognitive load lie detection hypothesis? A meta-analysis by Vrij et al. (2017) demonstrated that cognitive load techniques improved lie detection compared to control conditions (d ≈ .40; cf. Levine et al., 2018: d ≈ .38). But two meta-analyses give cause for further scrutinizing the idea of cognitive load lie detection (i.e., Mac Giolla & Luke, 2021; Verschuere et al., 2018).
Researchers employ reaction time as a lie detection cue and propose that people deliver lies more slowly than truths (e.g., Walczyk et al., 2003). The assumption is that slower reaction times indicate one was contending with a heavier cognitive load. Verschuere et al. (2018) investigated whether imposing cognitive load during questioning increased the difference between how quickly people deliver lies versus truths. The results revealed that people told the truth faster than lies. However, imposing cognitive load did not increase that difference. Instead, cognitive load slowed down the expected speed of truth-telling, obscuring the ability to distinguish lies from truths via reaction times (see also Frank et al., 2019).
Mac Giolla and Luke (2021) examined whether methods of questioning based on the cognitive load lie detection hypothesis improve observers’ ability to detect lies. Cognitive load approaches improved lie detection by 7% compared to control conditions; those gains rose to 27% when observers knew the cues to focus on. The authors note that those gains are promising and encourage optimism about the future of cognitive load lie detection.
But stakeholders would be prudent to remain skeptical considering the limitations the authors raise about the improvement cognitive load lie detection brings: “This promising result is qualified by indications of publication bias, considerable heterogeneity between studies, and a lack of research on important practical issues, such as the influence of counter-measures” (Mac Giolla & Luke, 2021; see also Jordan, 2016; Levine et al., 2018). The authors note that few studies in their meta-analysis included informed observers, and those studies employed research designs wherein training effects may have confounded the accuracy rates. Other reviews have revealed that the strengths of lie detection cues decline the more researchers study them (Luke, 2019). Moreover, publication bias and selective reporting lead meta-analyses to overstate effect sizes (Kvarven et al., 2020).
The situation calls for the field to subject cognitive load lie detection to severe testing: studies that examine the critical aspects of the hypothesis to assist in calibrating our confidence. A path toward severe testing is examining what exponents claim to be justifications of cognitive load lie detection. I use justification in a sense similar to Meehl’s (1990) “derivation chain”: the network of concepts, auxiliary assumptions, and theoretical mechanisms on which researchers stake hypotheses. Derivation chains must be substantiated to allow valid hypothesis testing (Scheel et al., 2021). Here, those justifications would be the properties of lying versus truth-telling that make lying more challenging such that imposing cognitive load can expose lies. This review aimed to flag the justifications featured in the cognitive load lie detection literature. Then stakeholders can nominate or design studies that would count as severe tests in light of those justifications.
Methods
Overview and Literature Selection
To my knowledge, authors draw from a list of seven justifications when positing the cognitive load lie detection hypothesis (see, e.g., Vrij, 2015, pp. 206–208).
Liars must monitor their lies to remain plausible and consistent.
Liars are less likely to take their credibility for granted than truth-tellers would.
Liars monitor receivers’ reactions to assess whether a lie is being believed.
Liars may be engrossed in reminding themselves to play the role of a truth-teller.
Lying requires a mental justification, but truth-telling does not.
Liars have to suppress the truth while lying.
The mental activation of a lie is deliberate, but the truth often comes to mind automatically.
Rather than speculate about the prevalence of those justifications, I re-reviewed the articles Mac Giolla and Luke (2021) used in their meta-analysis. The interested reader can consult that publication for exhaustive details about the search strategy and inclusion criteria. I chose to use the articles curated by Mac Giolla and Luke (2021) for four reasons: (a) to my knowledge, that meta-analysis is currently the most recent one on cognitive load lie detection; (b) the authors included studies missed by the preceding meta-analysis on the topic (viz., Vrij et al., 2017); (c) Mac Giolla and Luke (2021) selected studies of utmost applied relevance wherein observers made dichotomous lie–truth judgments as opposed to judging the extent of a statement’s veracity; and (d) by deferring the data curation to others, I hoped to exercise maximum objectivity over the inclusion and exclusion of studies. In all, the full text of 24 publications3 underwent review.
Extracting the Components of the Review
I extracted four components4 of the selected studies to explore the current question of interest. The extraction protocol was preregistered (osf.io/8hjsk).
The first component was whether a publication explicitly invoked Zuckerman et al.’s (1981) assertion that lying demands more cognitive resources than truth-telling does.
The second component comprised authors’ explicit or implied invocations of the cognitive load lie detection hypothesis (e.g., Vrij & Granhag, 2012). This category also included invocations wherein authors posit the hypothesis without explicit citations. I identified such invocations in two ways: (i) by flagging relevant sentences that appeared after authors quoted Zuckerman et al. (1981); or (ii) via compound sentences consisting of the Zuckerman et al. (1981) assertion, explicitly cited, followed by claims akin to the cognitive load lie detection hypothesis (e.g., Vrij & Granhag, 2012).
The third component flagged the authors’ cognitive load intervention(s), for example, unanticipated questions.
The fourth component consisted of explanations that indicate why the authors chose the intervention(s) they did. I flagged the reason(s) why the authors predicted that the intervention(s) they chose would exacerbate cognitive load and consequently facilitate lie detection. Such reasons are different from stating the cognitive load lie detection hypothesis. Those reasons specify justifications or a derivation chain: the network of concepts, auxiliary assumptions, and theoretical mechanisms on which researchers stake their hypothesis.
Analysis Strategy
The analysis strategy was fairly straightforward. The aim was to identify the frequency and patterns with which authors implicitly or explicitly invoked the cognitive load lie detection hypothesis via seminal works—for example, Vrij and Granhag (2012) or Zuckerman et al. (1981). Next, I examined the frequency and patterns with which authors specified derivation chains to justify those invocations and the corresponding cognitive load intervention. Then the analysis explored any themes evident in the flagged justifications.
The analysis largely followed the recommendations of Braun and Clarke (2006) on conducting thematic analysis. I first read over the extracted data to familiarize myself with its contents and developed initial codes representing those contents. This initial coding phase revealed that authors employed varying formulations of the previously mentioned seven justifications to support invocations of the cognitive load lie detection hypothesis. Moreover, each of the seven justifications appeared with different frequencies across invocations and interventions. Because the seven justifications already serve as interpretive attributes in the literature, I retained them as themes. I then compared the entire dataset against the delineated attributes to ensure correspondence and to identify any undiscovered justifications.
Plausibility and Consistency (J1)
This designation captures justifications suggesting that liars must monitor their lies to remain plausible and consistent. An example of such a justification is the claim that liars must ensure that what they report is plausible given the available information and whatever an interviewer can discover (i.e., Evans et al., 2013, p. 34).
Concern about Credibility (J2)
This category encapsulates justifications wherein authors assert that liars versus truth-tellers are less likely to take their credibility for granted. For instance, Warmelink et al. (2012, p. 178) write that “liars pay more attention to their credibility, which truth-tellers take for granted”.
Monitoring Success (J3)
This theme contains justifications alluding to the notion that liars versus truth-tellers monitor receivers’ reactions to evaluate whether the receiver believes the lie. An example is the claim by Vernham et al. (2014, p. 310) that liars scrutinize the interviewer to check whether the lie is being believed, whereas truth-tellers merely focus on storytelling.
Preoccupation with Role-play (J4)
Here, I grouped justifications wherein authors indicate that liars may be engrossed in playing the role of the truth-teller. As such, this category included assertions that liars versus truth-tellers are more concerned with impression management (e.g., Colwell et al., 2015).
The Basis for Lying (J5)
This designation contains verbatim claims that lying requires a justification, but truth-telling does not (see, e.g., Vrij et al., 2009).
Suppression of the Truth (J6)
This category included justifications stating that liars have to suppress or inhibit the truth, for example, their actual memory of an event or information (e.g., Evans & Michael, 2014).
Mental Activation of Truths and Lies (J7)
This theme comprised justifications implying that the mental activation of a lie is deliberate, whereas the truth comes to mind automatically. Examples include the claim that mnemonic devices like context reinstatement improve truth-tellers’ recall but raise the cognitive effort required to lie (i.e., Montalvo et al., 2013), and Leins et al.’s (2011b) assertion that truth-tellers can produce a reverse-order narrative from memory with relative ease, whereas lying requires a novel construction.
No Justification (NJ)
Some invocations of the cognitive load lie detection hypothesis and interventions of cognitive load included no justifications. Such instances received the no justification category.
Restating the Foundational Hypothesis (FP)
Some authors justified their cognitive load interventions by restating the cognitive load lie detection hypothesis. Those justifications received the current classification. An example is the claim that liars, compared with truth-tellers, will find it more difficult to produce their message in a second language because truth-telling is not particularly challenging (i.e., Evans & Michael, 2014).
Results
Figures 1 and 2 illustrate the frequency of justifications (J1 through J7) across the invocations of the cognitive load lie detection hypothesis and interventions, respectively. Supplemental Table 1 presents the publications reviewed and the justifications featured in each article. The interested reader may consult Supplemental Table 2 (https://osf.io/vmzna), which depicts the review in exhaustive detail. One can use Supplemental Table 2 to reproduce the results.
Invocations of the Cognitive Load Lie Detection Hypothesis
The findings indicated that few research works ascribe the idea of cognitive load lie detection to Zuckerman et al. (1981), although seminal works suggest Zuckerman et al. (1981) engendered cognitive load lie detection (e.g., Vrij, 2008, 2015). Most publications, 87.5% (21/24), invoked the cognitive load lie detection hypothesis, either by explicitly citing at least one seminal publication on the topic or by implying the hypothesis without an explicit citation. Implicit invocations, 41.66% (10/24), contained only a handful of justifications: seven of the ten publications offered no justification when implying that cognitive load can improve lie detection (see Supplemental Table 1). Of the three implicit invocations with justifications, Vrij et al. (2010) advanced J1 through J7; Fenn (2015) offered J7; and Evans et al. (2013) offered J1, J6, and J7. Explicit invocations, 45.83% (11/24), included more justifications than implicit invocations did (see Figure 1). Three explicit invocations came without justification, but the remaining eight contained at least one reason justifying the cognitive load lie detection hypothesis.
Notably, there appeared to be no systematic way by which authors applied justifications J1 through J7. Authors recycled the common list of seven justifications across the implicit and explicit invocations of the cognitive load lie detection hypothesis (see Supplemental Table 1).
Cognitive Load Interventions
Based on the authors’ descriptions, nine distinct interventions (i.e., training programs and manipulations) of cognitive load lie detection emerged from the dataset. The training programs were: (a) the Assessment Criteria Indicative of Deception (ACID); (b) general training in cognitive load lie detection; and (c) the cognitive interview for suspects (CIS). The manipulations included requiring interviewees to: (d) communicate in a nonnative language; (e) maintain eye contact with their interviewer; (f) perform a secondary task while communicating; (g) argue against their opinion (devil’s advocate); (h) provide reverse chronological reports; and (i) answer unanticipated questions. The ambiguous nature of what constitutes an unanticipated question became evident when flagging the interventions (see Figure 2). For example, some publications described reverse chronological reports as likely to be unanticipated (e.g., Geiselman, 2012). Other publications distinguished reverse reporting from unanticipated questions (e.g., Leins et al., 2011b).
Eight interventions (excluding the CIS) came with at least one of the familiar seven justifications. However, those justifications appeared with considerable variance and in no systematic manner (see Figure 2). Notably, six primary interventions were justified at least once by authors restating the foundational hypothesis of cognitive load lie detection, which amounts to a tautological justification. Stating that an interviewer will expose lies because an intervention will be challenging for liars fails to explain how that intervention exerts the hypothesized influence. Under the cognitive load lie detection hypothesis, any intervention classified as a cognitive load technique should make lying difficult by default. Thus, proponents of a cognitive load intervention must critically outline the processes that engender its difficulty.
Reliability Analysis
I recruited a second coder to assist in assessing the reliability of the analysis of how the justifications featured in the dataset. That aspect is susceptible to bias, given that I had free rein to determine how to categorize the justifications. I preregistered the protocol and the corresponding codebook: https://osf.io/k2vym. The second coder was blind to the findings and attempted to replicate them independently. I provided the second coder with descriptions of my codes. Then the coder rated the presence or absence of those codes in the relevant columns of the dataset: “invocations of the cognitive load lie detection hypothesis” and “reason(s) why authors chose their intervention”, respectively. The second coder was invited to suggest new codes if appropriate. I planned to consider those suggestions when discussing the findings.
There was a 98.4% agreement between the second coder’s ratings and mine, κ = .86, SE = .03, 95% CI [.80, .93]. The most consistent reason behind minor disagreements was ambiguity in text containing justifications. For example, Vrij et al. (2008, p. 255) note that reverse order recall “increase[s] cognitive demand because (a) it runs counter to the natural forward-order coding of sequentially occurring events […] and (b) it disrupts reconstructing events from a schema […]”. I coded the initial part of that sentence as a recapitulation of the cognitive load lie detection hypothesis (FP). And I coded items-a and -b as implying that the mental activation of a lie is deliberate, whereas the truth comes to mind automatically (J7). The second coder noted that the text simply indicates that reverse order recall will be demanding. One can access the second coder’s replication here: https://osf.io/zbe9f.
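For readers less familiar with the statistic reported above, Cohen’s kappa adjusts the observed agreement for the agreement two coders would reach by chance. A minimal restatement of the standard formula, using the figures above where available:

κ = (p_o − p_e) / (1 − p_e),

where p_o is the observed proportion of agreement (.984 here) and p_e is the proportion of agreement expected by chance, computed from the two coders’ marginal code frequencies (not reproduced here).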
Discussion
This review examined the prevalence of seven justifications authors cite when positing the cognitive load lie detection hypothesis. The findings indicate that many authors recycle the seven justifications, but in no systematic manner (see Supplemental Table 1). That state of the literature hinders nominating or designing studies that would count as severe tests of cognitive load lie detection. First, it is challenging to determine what would count as a severe test of the hypothesis per the respective justifications: some of those derivation chains contain unexamined research gaps. Second, analysts have no clear justification to focus on when wanting to nominate or design severe tests. There is no bright-line to flag the crucial justification(s); authors cite them unsystematically.
Unexamined Research Gaps
Each justification is a derivation chain on which exponents stake the cognitive load lie detection hypothesis: lying is more challenging than telling the truth; thus, imposing cognitive load will exacerbate the plight of liars more than that of truth-tellers, and one can exploit that constraint to expose lies. To warrant the hypothesis, the corresponding justifications must account for the challenges that liars and truth-tellers might face. When scrutinized, however, some justifications suggest that, like liars, truth-tellers might face challenges cognitive load can compound; call this research gap truth-tellers’ underappreciated plight. Other justifications suggest that liars might not face the presumed challenges, and thus imposing cognitive load could introduce similar impediments for liars and truth-tellers; call this research gap liars’ potential escape. Those research gaps remain unexamined, and their consequences for cognitive load lie detection are unknown.
Stakeholders should be wary that imposing cognitive load might handicap liars and truth-tellers similarly. Cognitive load interventions might then lead to the worst-case scenario: a miscarriage of justice. Law enforcement interviewers could misclassify truth-tellers as liars when the truth-tellers cannot perform as fluently as expected. For example, Verschuere et al.’s (2018) meta-analysis indicated that imposing cognitive load obscured the ability to distinguish lies from truths via reaction times (see also Frank et al., 2019).
For brevity’s sake, I will discuss justifications J1 (plausibility and consistency) and J6 (suppression of the truth). The goal is to illustrate the two types of research gaps mentioned above and call attention to the literature’s limitations: imposing cognitive load might affect liars and truth-tellers similarly. I intend to flesh out the issues with all the justifications in another article (see Neequaye, 2021 for a preliminary draft).
Plausibility and consistency: on truth-tellers’ underappreciated plight
According to this justification, liars, more than truth-tellers, must ensure that their messages align with what receivers might know or discover. One can exploit that hurdle to expose lies by imposing cognitive load. Arguably, preparing a lie in advance alleviates the burden of manufacturing details on the spot. Any difficulty with conjuring lies likely arises when the liar must invent a message instantly, say, because the question is unexpected. Asking unanticipated questions is one of the main strands of cognitive load lie detection. All things being equal, a truth-teller knows the information the interviewer is requesting, even if the question is unexpected. Truth-tellers, then, can surmount the demands of being plausible and consistent more easily than liars. The explanation just outlined obscures an alternative explanation that could impeach the cognitive load lie detection hypothesis.
We can expect that neurotypical people in forensic investigative interviews will want their message to appear plausible and consistent, given the risks of seeming dubious. Truth-tellers might also keep track of their responses to unexpected questions; arguably, neurotypical truth-tellers would want their answers to align with what the receiver knows or might discover. In doing so, truth-tellers will aim to be consistent with any previous messages or any aspects of the issue under discussion. Truth-tellers thus likely face their own predicament in providing plausible and consistent answers to unexpected questions. The nature of those challenges may differ from what liars face; nonetheless, important challenges arise for both liars and truth-tellers.
One cannot assume that because truth-tellers, unlike liars, ostensibly have the answers to unanticipated questions in their memory stores, truth-tellers can respond with more ease. Even if a truth-teller claims to remember the event to be discussed, they now have unexpected questions to contend with: a hurdle they have not yet considered; otherwise, the questions would be anticipated. We cannot take it for granted that truth-tellers will have a less demanding task to tackle.
There is no reason to privilege the alternative explanation over the plausibility-consistency justification (J1) or vice versa. I raise the alternative explanation to make this point: assuming that truth-tellers versus liars would be less focused on plausibility and consistency is tenuous. Justification-J1’s validity remains unexamined and unknown. Analysts cannot yet rely on the plausibility-consistency justification (J1) to determine severe tests of cognitive load lie detection.
Furthermore, exponents must nominate a definition and specify a falsifiable theory that systematically delineates question-types interviewees would generally not anticipate and why. Such efforts will reduce ambiguity about what qualifies as an unexpected question and, by extension, improve measurement practices. Then analysts can determine, with precision, studies that would count as severe tests—based on the plausibility-consistency justification (J1). Without such specification to offer an objective yardstick, one could call any manipulation an unanticipated question depending on the hypothesis one wants to support. This loophole hinders severe testing by inviting questionable research practices and argumentation. Suppose a manipulation fails to achieve the hypothesized effect of exposing lies from truths. One could blame the manipulation, not the cognitive load lie detection hypothesis.
Suppression of the truth: on liars’ potential escape
Another justification for cognitive load lie detection is the claim that liars must suppress the truth while lying. Exponents assert that because such suppression is straining, imposing cognitive load can expose lies (e.g., Leal et al., 2008; Vrij, 2015). Several ambiguities riddle this justification. One can hardly nail down the suppression mechanism or how it invites cognitive load that interviewers can exploit. Thus, analysts cannot determine severe tests of cognitive load lie detection based on the notion of suppression.
Let us begin by trying to decipher what the phrase suppression of the truth means. One possible interpretation is that the phrase is synonymous with lying; it simply means not telling the truth. That interpretation leaves the pressing questions unanswered: what the presumed suppression amounts to remains unknown. Exponents must define how suppression differs from lying. And they must specify how that difference makes lying more challenging than truth-telling such that one can capitalize on suppression to expose lies.
One may object to the limitation just described by contending that the analysis is overly simplistic. Critics might argue that people lie using various strategies that warp the truth in different ways. For example, liars may partially distort the truth, fabricate similar versions of the truth, or feign amnesia. Thus, suppression of the truth features differently depending on the chosen strategy.
For that argument’s sake, assume that suppressing the truth is not synonymous with lying. Then the next available interpretation is this: suppression is a separate behavior the liar performs in addition to whatever strategy they choose to communicate their lie. Two ambiguities become evident. First, what specific behavior during lying constitutes suppression, and does that behavior change depending on the strategy chosen as the preferred method of lying? Second, how do such behaviors make lying more laborious than truth-telling?
Consider a possible solution to the issues just outlined. One might propose that suppression constitutes a liar physically preventing themselves from communicating the truth before lying. Arguably, such a feat will be more challenging than simply communicating the truth. But does telling the truth come with no hesitation? No. That a would-be truth-teller can communicate a requested truth with no hesitation is a tenuous claim. Following conversational norms (Grice, 1975), a truth-teller might need a moment to ascertain, for example, the version of the truth that best answers a question. Suppose a liar prepares their lies in advance. In that case, the so-called suppression mechanism is unlikely to feature; the liar already knows what they intend to say. Hence, the suppression mechanism could occur only when the question is unexpected. I have already discussed specification issues with the theory surrounding unanticipated questions, but there are more limitations to raise.
Neuroimaging studies suggest that lying involves inhibitory control and task switching (e.g., Christ et al., 2009). However, those studies do not indicate that liars necessarily inhibit the truth or switch their response from a truth to a lie. Liars might well be inhibiting or switching away from irrelevant information, implausible lies, or inconsistent messages that may damage their goal. The results of Walczyk et al. (2003, Experiment 1) illustrate my contention. In that study, people reported the thoughts that occurred to them when lying in response to open-ended questions. Of those reports, 35.3% related to the requested truth, and an approximately equal proportion, 34.93%, related to other possible responses (p. 765).
Furthermore, based on Gricean norms, one can reasonably expect that truth-tellers might also inhibit what they consider irrelevant information, given a receiver’s inquiry. We do not know whether liars and truth-tellers experience different levels of intrusive thoughts. And we do not know if that feature corresponds to different levels of cognitive load, which can be exploited to expose lies.
There is another species of evidence one can leverage to support the idea that suppression constitutes physically restraining oneself from telling the truth: research about the cognitive consequences of secrecy. Lane and Wegner (1995) theorized that people become mentally preoccupied with thoughts about topics they want to keep secret. The corresponding empirical studies supported the theory; secrecy invites intrusive thoughts about topics people want to conceal (Lane & Wegner, 1995). If lying comprises an attempt to conceal something (call that thing item-i), item-i could behave like a secret. Then a liar would be mentally preoccupied with item-i. The liar might have to physically prevent herself from blurting out item-i while communicating the intended lie.
Lane and Wegner (1995, p. 238) explicitly noted that lying by commission recruits a different mechanism from the preoccupation model of secrecy they propose. A person will likely be preoccupied with thoughts of item-i when the person has nothing to say in place of item-i. “Unlike lying [emphasis added], secrecy suggests ‘no distracters’ to stand in for the omitted information, and so leaves the secret keeper with nothing to think about but the secret itself” (Lane & Wegner, 1995, p. 238). Thus, when the liar thinks of or has something to say in place of item-i, preoccupation with item-i need not feature.
Let us briefly grant the unwarranted concession that liars might be so preoccupied with item-i that item-i would become mentally intrusive. We still do not know whether that intrusion makes lying more laborious. As noted earlier, based on Gricean norms, truth-tellers might also contend with intrusive thoughts of a different variety: other possible truthful responses.
To summarize, whatever liars supposedly suppress, one cannot be sure that they become preoccupied with that thing. We do not know whether that presumed preoccupation makes lying laborious enough that one can exploit suppression to expose lies. We do not know whether liars physically prevent themselves from uttering the truth before relaying a lie. Suppose we grant the idea of thought suppression as a memory phenomenon, despite the debate on the issue (Werner et al., 2022). Even then, we still do not know whether thought suppression brings cognitive load interviewers can capitalize on.
The claim that suppression of the truth is a handicap interviewers can exploit to detect lying rests on assumptions and mechanisms that remain to be verified. Any attempt to nominate a severe test of cognitive load lie detection drawing on the suppression justification (J6) will be imprecise and susceptible to post hoc explanations.
The Absence of a Bright-line
An impediment to precise severe testing is the finding that there appears to be no bright-line on which to focus. Authors cite the justifications unsystematically. We do not know the primary justification(s) underpinning the cognitive load lie detection hypothesis, so analysts cannot choose a judicious path to severe testing. Are the justifications independent, stand-alone derivation chains? Or are they a network of derivation chains? Suppose the justifications were independent. Then analysts could focus on identifying the valid ones and designing severe tests based on the defensible justifications. And if the justifications are a network of interrelated assumptions and mechanisms, exponents must explain the necessary interrelations. Then analysts could determine what would count as severe tests according to the specified interrelations.
There is a problematic loophole if analysts embark on severe testing, given the current state of the literature. One could shrug off evidence rejecting a justification by shifting focus to another justification. Then the cognitive load lie detection hypothesis will remain perpetually shielded against severe testing. Stakeholders must remain skeptical until a bright-line appears to rid the hypothesis of the shields guarding it.
Critics might object to the thesis in this section with the following counterargument: the issues under contention are peculiar to empirical articles, and the limitations crop up because of the specific set of publications reviewed here. (1) Because empirical articles face space constraints, review publications, unlike empirical ones, might have specified the justifications in enough detail to allow severe testing. (2) Publications not included in this review might contain other theories or justifications.
To my knowledge, no review publications offer different discussions of the justifications. Review and empirical publications describe the list of the seven justifications similarly (e.g., Leal et al., 2008; Leal & Vrij, 2008; Vrij, 2008, 2014, 2015; Vrij et al., 2010, 2011; Vrij & Ganis, 2014).
What about other justifications and lie detection theories? For example, the second coder who assisted with the reliability analysis suggested additional codes (https://osf.io/m8q3k). And Sporer (2016) proposes a working memory model explaining lying and cognitive load. Stakeholders might prudently turn to other justifications and theories, should those alternatives prove valid. But the existence of alternatives is not the matter under contention here. Finding solace in alternatives does not alleviate the issues this article highlights. A substantial portion of the literature adheres to the seven justifications. Stakeholders should be concerned that the validity of those derivation chains remains unknown.
Another potential objection merits a response, and the complaint goes something like this. “OK, the seven justifications blur the path to severe testing. But that limitation is of no consequence because the justifications are not crucial, and we do not need the counterfactual that people will experience cognitive load if they lie. The crux of cognitive load lie detection is this: making lying versus truth-telling taxing can expose lies”.
The revised hypothesis could be a bright-line as the revision contains fewer assumptions, allowing for more precise severe testing. But the revised hypothesis is different from the original one. The revision effectively rids cognitive load lie detection of two things: (1) the counterfactual that people will experience cognitive load if they lie; (2) the shield the justifications provided. This review’s findings indicate that a substantial portion of the literature has relied on the counterfactual and justifications when designing cognitive load lie detection studies. Suppose exponents revise the hypothesis as outlined in the preceding paragraph. Given the new hypothesis, there would still be a need to nominate or design severe tests of cognitive load lie detection.
Concluding Remarks
I am not suggesting that we should reject any version of cognitive load lie detection outright. Let us not throw the baby out with the bathwater, but we must ensure that there is, indeed, a baby in the bathwater. And we can save that potential baby by putting the cognitive load lie detection hypothesis on firmer footing: the field must reexamine the corresponding justifications or revise the hypothesis and then determine what would count as severe tests.
Author Note
Many thanks to Jasmine Afram for assisting me with data extraction. I am fully and solely responsible for any errors in this article.
Competing Interests
There are no conflicts of interest to declare.
Data Accessibility Statement
All data supporting the findings of this research come from publicly available publications, and the present manuscript contains the pertinent aspects.
Appendix: An alternative interpretation of the cognitive load lie detection premise
One could interpret the cognitive load premise in another way besides the counterfactual premise.5 Cognitive load techniques further raise a preexisting mental demand of lying versus truth-telling—call this the preexisting-demand premise.
The proposal that one can further raise, enhance, or magnify the cognitive demand of lying versus truth-telling invokes the preexisting-demand premise by supposing that a would-be liar has a problem one can aggravate. That invocation, taken at face value, assumes a would-be liar contends with a difficulty before lying. A difficulty must necessarily exist before one can further raise it; otherwise, there is nothing to aggravate. The preexisting-demand premise thus rests on a problematic inference: that lying is burdensome before one enacts the behavior. That proposition is untenable. The effort a behavior requires can arguably only manifest during its performance, not before. Thus, one cannot further raise, enhance, or magnify the supposed initial difficulty of lying versus truth-telling; the presumed difficulty to be enhanced does not exist. The mental demand of lying is likely to appear only when the would-be receiver poses a difficult question. More difficult questions likely introduce a heavier cognitive load than less difficult questions would. Before the inquiry, the would-be sender has no challenge to grapple with.
A critic may object by arguing that the issues I have flagged with the preexisting-demand premise concern wording, not necessarily a conceptual problem; I concede. However, because that premise can come up, I discuss it for a pragmatic reason. By showing the attendant flaws, I hope to preempt possible future objections that invoke the preexisting-demand premise. Let us now abandon the preexisting-demand premise.
Footnotes
1. I refrain from using the word deceive and its variants (e.g., deception) except when quoting a source directly. Analysts (e.g., Mahon, 2008) argue that to deceive possibly implies a sender has achieved the goal of the lie. However, lying can be unsuccessful.
2. In the appendix, I explore an alternative interpretation of the cognitive load lie detection premise.
3. One article, Zimmerman et al. (2010), could not undergo this review because that study is a classified report, which prevents analysis of the text. Additionally, Mac Giolla and Luke (2021) included 21 articles, but this review contains 24. When I reached out to the first author for the materials, I received 24 articles (excluding Zimmerman et al., 2010). Later, the first author informed me that they excluded four articles at the data analysis stage because those publications did not fit their inclusion criteria (reasons listed below). The four articles were within the scope of this review, so I retained them.
Evans and Michael (2014): Cognitive load not manipulated, but inferred from a native versus non-native language classification.
Geiselman (2012): Deception judgments were made on a rating scale at different points throughout the interview.
Jordan (2016): Deception judgments were made on a bisected rating scale to generate an accuracy rating.
Leins et al. (2011b): Participants used a lever to rate truth and deception in real time: over zero = truth, under zero = lie.
4. I extracted the lie detection cues and the corresponding operationalizations in each publication. Lie detection cues are not pertinent to the current review but were included to provide a comprehensive database for further research.
5. The counterfactual premise is that cognitive load techniques further raise the mental strain a liar would have experienced after being asked a question, even if no cognitive load technique had been used.