This article examines the idea that cognitive load interventions can expose lies—because lying is more demanding than truth-telling. I discuss the limitations of that hypothesis by reviewing seven of its justifications. One such justification holds, for example, that liars must suppress the truth while lying and that this handicap makes lying challenging such that one can exploit the challenge to expose lies. The theoretical fitness of each justification is variable and unknown. Those ambiguities prevent analysts from ascertaining the verisimilitude of the hypothesis. I propose research questions whose answers could assist in specifying the justifications and making cognitive load lie detection amenable to severe testing.

This review examines the foundations of the cognitive load lie detection hypothesis: investigators can exploit the challenge lying brings—by imposing cognitive load—to expose lies (Vrij & Granhag, 2012). The prediction derives from the premise that would-be liars, unlike truth-tellers, will experience cognitive load if they lie (Zuckerman et al., 1981, p. 10). Thus, imposing cognitive load leaves liars versus truth-tellers with fewer reserves in their cognitive stores, which handicaps liars’ success (Vrij & Granhag, 2012). An investigator could implement cognitive load lie detection by asking an interviewee to recall an event in reverse rather than the natural forward order. That questioning style poses a greater challenge to liars versus truth-tellers because it can disrupt reconstructing events from a schema while allowing truthful respondents to recall honestly (Vrij et al., 2008). Researchers primarily recommend cognitive load lie detection to practitioners who conduct investigative interviews for national and international security purposes (Vrij et al., 2017). For example, to flag potential terrorists, security officers might be advised to use the approach to ascertain the intentions of people crossing an international border (e.g., Sooniste et al., 2013).

Authors have tested cognitive load lie detection by developing interviewing methods to make communicating a message challenging. Examples include asking questions people do not anticipate and inviting people to provide more information (see, e.g., Vrij et al., 2017 for a taxonomy). The first meta-analysis indicated that cognitive load techniques versus controls moderately improved lie detection (Vrij et al., 2017 [67% accuracy]; cf. Levine et al., 2018). Subsequent meta-analyses give pause. Mac Giolla and Luke’s (2021) meta-analysis yielded an accuracy rate of 60% [55.03% bias corrected]. Another synthesis complicates the picture: Verschuere et al. (2018) found that imposing cognitive load handicapped truth-tellers’ ability to relay their messages fluently.

Stakeholders must reexamine the verisimilitude of cognitive load lie detection, given the state of the evidence. But additional meta-analyses do not appear to be the way forward for two reasons. (a) Analysts disagree on the appropriate studies to include and tests to use in meta-analyses on cognitive load lie detection (see Vrij et al., 2017 versus Levine et al., 2018; cf. Vrij et al., 2018). For instance, Mac Giolla and Luke (2021) focused only on studies wherein participants made dichotomous (versus extent of veracity) judgments, arguing that such studies are apropos to practitioners’ work. (b) Meta-analyses inflate effect sizes due to selective reporting and publication bias (Kvarven et al., 2020).

Severe Testing as a Way Forward

How can analysts circumvent the limitations of meta-analysis in assessing the verisimilitude of cognitive load lie detection? The hypothesis must undergo multi-laboratory or large-scale severe tests—studies purposely designed to examine the foundations of the hypothesis. That approach would assist in calibrating our confidence and would not suffer publication bias or selective reporting (Kvarven et al., 2020; O’Donnell et al., 2021). But what would count as a severe test of cognitive load lie detection?

An objective way to determine severe tests1 is to examine the derivation chain or justifications of the hypothesis (Neequaye, 2022). I mean the network of concepts, assumptions, and theoretical mechanisms on which exponents stake the cognitive load lie detection hypothesis (Meehl, 1990; Scheel et al., 2021). The properties of lying that make the behavior demanding to execute under cognitive load would be the justifications of the hypothesis (Neequaye, 2022).

Cognitive load lie detection rests on seven justifications (e.g., Leal & Vrij, 2008; Vrij, 2008, 2014, 2015). They are claims explaining why lying versus truth-telling is more taxing such that imposing cognitive load can expose lies. A previous review has demonstrated the prevalence of the justifications and how authors cite them unsystematically (see Neequaye, 2022). Here, I focus on the theoretical issues with the justifications for the sake of conciseness.

  • Liars must monitor their lies to remain plausible and consistent.

  • Liars are less likely to take their credibility for granted than truth-tellers would.

  • Liars monitor receivers’ reactions to assess whether a lie is being believed.

  • Liars may be engrossed in reminding themselves to play the role of a truth-teller.

  • Lying requires a mental justification, but truth-telling does not.

  • Liars have to suppress the truth while lying.

  • The mental activation of a lie is deliberate, but the truth often comes to mind automatically.

The state of the literature impedes severe testing in ways stakeholders cannot afford to ignore. The unsystematic citations of the justifications shield cognitive load lie detection from severe testing (Neequaye, 2022). Analysts cannot determine which justification is the cornerstone of the hypothesis, and that loophole leaves evidence rejecting any justification susceptible to post hoc rebuttals. Other justifications can take the place of the rejected one, keeping cognitive load lie detection guarded indefinitely. See Neequaye (2022) for an extended discussion of the limitation just described.

A critic might counterargue that all the justifications, not just some, undergird cognitive load lie detection. Any attempt to determine severe tests must therefore engage with all seven justifications. This review aims to examine the seven justifications to flag what would count as severe tests of cognitive load lie detection. Then analysts can determine the studies to include in an audit. I will summarize the upcoming theoretical analysis to assist readers in tracking the arguments.

The State of the Justifications

All seven justifications contain research gaps whose consequences remain unexamined and unknown. This limitation hinders analysts from determining—with precision—what would count as severe tests of cognitive load lie detection. The justifications span various topics in psychology, making them unwieldy to discuss. But the research gaps are primarily alternative explanations that could impeach the justifications and, by extension, cognitive load lie detection.

To rein in the discussion, I have categorized the alternative explanations into two varieties (Neequaye, 2022). (a) Truth-tellers face challenges that imposing cognitive load can compound—call this truth-tellers’ plight. (b) Liars do not necessarily encounter challenges cognitive load might exacerbate—call this liars’ potential escape. Truth-tellers’ plight and liars’ potential escape point to a complication of note. Imposing cognitive load could bring the same hurdle to liars and truth-tellers—but in different ways.

Ignoring truth-tellers’ plight might erroneously permit the prediction that inducing cognitive load aggravates liars’ versus truth-tellers’ challenge to communicate a message. The reality might be that truth-tellers also face predicaments that cognitive load can compound. Verschuere et al. (2018) found that imposing cognitive load handicapped the expected ease with which people told the truth (see also Frank et al., 2019).

Disregarding liars’ potential escape creates a blind spot: the assumption that imposing cognitive load would exacerbate some difficulty—if a person lies. But the reality might be that lying comes with no challenge. For example, DePaulo et al. (2003, p. 79) explicitly reject the argument that lies, by definition, are more difficult to construct than truths. Truth-tellers and liars might face the same hurdle under cognitive load. Then interviewers might misclassify liars as truth-tellers when liars overcome the presumed difficulty. In the worst-case scenario, a miscarriage of justice could arise, given that researchers recommend cognitive load lie detection to the police: truth-tellers might resemble liars when truth-tellers do not perform as fluently as expected.

The justifications of cognitive load lie detection must account for truth-tellers’ plight and liars’ potential escape to warrant the hypothesis. But the justifications in the extant literature remain underspecified, which obstructs analysts from determining what would count as severe tests (Meehl, 1990; Scheel et al., 2021). I will illustrate the research gaps about truth-tellers’ plight and liars’ potential escape using three species of argumentation: (a) evidence from lie detection research; (b) theoretical and empirical evidence of other phenomena brought to bear on the justification in question; and (c) arguments based on logic and coherence.

Examining the Justifications

In-depth reviews of the seven justifications follow. I have discussed the plausibility-consistency and the suppression justifications elsewhere (Neequaye, 2022) but repeat and extend some of those points to be comprehensive.

Plausibility and Consistency. Liars must monitor their lies to remain plausible and consistent.

Vrij (2015) notes that conjuring a lie is arduous and explains why in three parts, summarized next. First, apart from manufacturing the details, the liar must monitor the lie to ensure it is plausible. The lie must align with what the receiver knows or might discover. Second, liars must remember their previous messages and the corresponding receivers. In doing so, the liar can appear consistent when repeating the previous message. Third, “liars should also avoid making slips of the tongue and should refrain from providing investigators with new leads” (Vrij, 2015, p. 206).

Let us now examine the justification just described, beginning with the claims about plausibility and consistency. All things being equal, preparing a lie in advance mitigates the challenge of inventing details on the spot. Logically speaking, any challenge with conjuring lies might arise when the liar is unprepared with the contents—for example, when the question is unexpected. The unanticipated questioning approach is a primary method of executing cognitive load lie detection (see, e.g., Mac Giolla & Luke, 2021). The assumption is that truth-tellers versus liars would know the details they must provide, even if the interviewer’s question is unexpected. Truth-tellers can better cope with being plausible and consistent when facing unanticipated questions.

The derivation chain just described contains logic and coherence limitations. Why would liars versus truth-tellers be more inclined to suffer predicaments unexpected questions might bring? Note that the interviewer does not know the would-be liar or truth-teller beforehand. If the interviewer knew who would lie, the interview would be unnecessary because the interviewer would also likely know the information in advance.

Neurotypical people in forensic investigative interviews will want their message to appear plausible and consistent—appearing dubious comes with risks. Truth-tellers might monitor their responses to an unexpected question just as liars would. Neurotypical truth-tellers would want to ensure coherence between their previous messages and other aspects of the issue under discussion. Truth-tellers might encounter challenges when striving to provide plausible and consistent responses to unexpected questions. Those challenges could differ from what liars face. Researchers must investigate the nature of the cognitive demands unexpected questions bring to both liars and truth-tellers.

The assumption that truth-tellers versus liars can answer unanticipated questions with more ease is premature. A truth-teller might claim to remember an event to be discussed. But that claim, by default, fails to take into account that the interviewer would ask questions they have not considered. Such a truth-teller would be contending with a hurdle they could not have foreseen. Whether truth-tellers will have a less demanding task remains unexamined and unknown. Studies have implemented various operationalizations of the unanticipated questions approach (see Vrij et al., 2011 for an overview; see also Lancaster et al., 2013). However, to my knowledge, none of those endeavors have examined whether truth-tellers versus liars find unexpected questions less demanding to answer and how that specific effect elicits lying cues.

Researchers must first outline the cognitive mechanisms underlying how would-be liars and truth-tellers answer unanticipated questions—to appear plausible and consistent. The highlighted gap in the literature also calls for analysts to nominate a definition and specify a falsifiable theory that systematically delineates question-types interviewees would generally not anticipate and why. Such research will reduce ambiguity about what qualifies as an unexpected question and, by extension, improve measurement practices in this area of study.

The literature contains little to no specification on what qualifies as an unanticipated question. That situation allows questionable research practices and argumentation to thrive. Suppose a manipulation fails to perform as the cognitive load lie detection hypothesis predicts. In that case, the manipulation could be charged as the culprit, absolving cognitive load lie detection indefinitely. Addressing the highlighted gaps in the literature will clarify the potential effects of unexpected questions on lying and truth-telling.

The final aspect of the three-part challenge liars versus truth-tellers tackle is avoiding slips of the tongue and refraining from providing new leads (Vrij, 2015). That argumentation suffers from logic and coherence limitations.

Assume “slips of the tongue” and “new leads” denote certain things liars must conceal. Suppose liars must hide the fact that their message is a lie; for example, a liar ought to conceal that they are making a believed-false statement. Then the justification does not specify why lying is prone to becoming more difficult than truth-telling, much less that one can capitalize on the challenge to expose lies.

Alternatively, we can assume the slippage liars must avoid is some event that, if exposed, can derail the communication or success of the lie. A theory is needed to systematically predict how the nature of such an event can make lying versus truth-telling laborious. The event a liar may want to conceal can vary on several relevant dimensions—for example, its complexity and when it occurred relative to the current interview (e.g., Sporer, 2016). The literature needs to better specify the important ways in which concealing information differs from lying, given that concealing information is an aspect of lying (McCornack, 1992). Then exponents must specify how those differences make lying taxing such that one can exploit that burden to reveal lies.

Summary. Claiming that lying versus truth-telling burdens people with more concern for plausibility and consistency is underspecified when considering truth-tellers’ plight. Like liars, truth-tellers contend with appearing plausible and consistent—as neurotypical people would. The extent to which truth-tellers monitor their messages to unexpected questions remains unexamined and unknown. The literature lacks insight into the challenges truth-tellers might face and the attendant risks cognitive load lie detection might bring.

Concern about Credibility. Liars are less likely to take their credibility for granted than truth-tellers would.

Another justification claims that liars versus truth-tellers will be less likely to take their credibility for granted (e.g., Leal & Vrij, 2008); exponents offer two reasons to support that claim. The first reason indicates that the stakes are sometimes greater for liars than for truth-tellers. Such high stakes comprise the negative repercussions of a lie being exposed and the advantages of the lie succeeding (Vrij, 2015). Second, the proposal notes that truth-tellers usually presume their innocence will be apparent because of certain cognitive biases. Truth-tellers assume their truth will “shine through” (e.g., Vrij et al., 2010).

According to Vrij (2015), truth-tellers suppose that people get what they deserve (viz., the just-world hypothesis, Lerner, 1998); truth-tellers also overestimate how perceptible their mental states are to others (viz., the illusion of transparency, Gilovich et al., 1998). Conversely, liars will be prone to monitoring and controlling their demeanor to appear honest. That burden makes the cognitive strain of lying susceptible to becoming compounded under cognitive load.

The logic and coherence of the arguments in the preceding paragraphs suffer from limitations. The framing or definition of high stakes hinders scrutiny. High stakes seem to apply to liars only; namely, “[the] negative consequence of getting caught and [the] positive consequence of getting away with the lie” (emphasis added; Vrij, 2008, 2015). Because that definition of high stakes necessarily excludes truth-tellers, the definition conceals the need to contemplate the plight of truth-tellers in the same situation. Let us reformulate that definition of high stakes to allow a comparison between liars and truth-tellers but retain the sentiment in the original definition. Take the phrase high stakes to mean a negative consequence of not being believed and a positive consequence of being believed.

Given the risks of appearing dubious in an investigative interview, one should expect neurotypical truth-tellers to be concerned about their credibility just as liars would be. For example, liars and truth-tellers report firmly denying guilt as a strategy when undergoing investigative interviews (Hartwig et al., 2007). Moreover, being subjected to an interview signals that, at a minimum, the interviewer is trying to find out some true information. Arguably, neurotypical adults will expect that the interviewer will not take their messages for granted. Suppose there are significant gains to achieve if one is believed and grave dangers if the interviewer is not convinced. In that case, the hypothesis to expect is that liars and truth-tellers will be similarly concerned about their credibility. For example, neurotypical suspects undergoing an interview about their involvement in a murder should be concerned about their credibility, irrespective of their culpability. If they appear dubious, they might become prime suspects and may later serve a prison sentence. Whatever cognitive demand high stakes induce, that strain should tax liars and truth-tellers, not just liars.

Let us consider whether the just-world hypothesis and the illusion of transparency license the claim that truth-tellers typically take their credibility for granted more than liars do. For argument’s sake, assume that those cognitive biases manifest in high-stakes scenarios. In this view, truth-tellers will take their believability for granted, not minding that they might fail to convince their interviewer and the consequences that possibility can invite. Suppose we accept that those cognitive biases are so formidable as to lead truth-tellers to assume the veracity of their message will be evident without question. Then we should expect would-be liars to refrain entirely from lying in high-stakes scenarios; their mental states will presumably be perceptible, and in a just world, they would surely suffer the deserved consequences. Nonetheless, people do lie when there are potentially grave consequences for doing so.

My point is that the presumed effects of the just-world hypothesis and the illusion of transparency have been implemented lopsidedly, focusing on truth-tellers without considering how the same premise might apply to liars. If those effects are so strong, why do they affect truth-tellers more than liars when both parties undergo a high-stakes interview where veracity is contested? If those cognitive biases hold in high-stakes interviews, then truth-tellers and liars should equally expect ‘the truth to shine through’. One cannot assume the cognitive biases absolve truth-tellers from being concerned about their credibility but have little influence on liars. Granting that assumption would suggest that those cognitive biases perfectly stifle lying, which is not the case. There are videotaped press conferences of murderers asking the public for assistance in finding their victims (ten Brinke & Porter, 2012; Vrij & Mann, 2001).

One could argue that certain personality traits (e.g., agreeableness) might make truth-tellers versus liars more susceptible to believing in a just world and the illusion of transparency. Such traits could make truth-tellers versus liars more liable to the cognitive biases and, thus, likely to take their credibility for granted more than liars would. That argument is limited in my view. Human communication is generally truthful, but it is incontrovertible that, occasionally, everybody lies (DePaulo & Bell, 1996; Serota et al., 2010; Serota & Levine, 2015). Hence, we cannot definitively associate lying or truth-telling with any personality trait (cf. Hart et al., 2020; McArthur et al., 2022). People lie or tell the truth, with variability, in different circumstances. Whatever characteristics make people susceptible to believing in a just world or the illusion of transparency should feature amongst a random sample of liars and truth-tellers, not just one group. Therefore, we cannot expect belief in a just world or the illusion of transparency to affect truth-tellers to a greater degree. Those cognitive biases and the corresponding effects should feature in any random sample of humans communicating a message.

Critics can leverage two rebuttals to the issues I have raised. (a) Liars are inclined to take risks, and truth-tellers hope for the best outcome. That explanation addresses why people lie in high-stakes scenarios. The explanation does not elucidate why one should expect liars to be more concerned about their credibility than truth-tellers are. One can hope for the best outcome and still be concerned about one’s credibility, just as a risk-taker can. (b) Innocent versus guilty suspects (i.e., truth-tellers vs. liars) think that they can convince their interviewer with the truth (e.g., Masip & Herrero, 2013); they might even waive their Miranda rights (e.g., Kassin & Norwick, 2004). Those findings do not warrant the claim that truth-tellers versus liars are less concerned about their credibility. A truth-teller can waive their Miranda rights and relay their truth to be convincing but remain concerned about their credibility. Truth-tellers are forthcoming precisely because they want to demonstrate their credibility. We do not know whether their expectation to be convincing relieves cognitive load, especially when an interviewer poses a difficult question.

Summary. The idea that high-stakes situations and cognitive biases lead liars versus truth-tellers to be more concerned about their credibility contains blind spots. The derivation chain ignores truth-tellers’ plight by leaving critical questions unanswered. Stakeholders should clarify the definition of high stakes and determine whether truth-tellers perceive high stakes differently than liars do. The literature must examine whether risk-taking versus hoping for the best outcome affects concern about credibility in different ways. And we need to know whether that possible difference yokes liars versus truth-tellers with a greater challenge one can capitalize on to expose lies.

Monitoring success. Liars monitor receivers’ reactions to assess whether a lie is being believed.

The present derivation chain builds on the concern-about-credibility justification and is similarly limited in logic and coherence. The contention is that liars may surveil their targets to check if the lie is succeeding, and such monitoring consumes cognitive resources (e.g., Vrij, 2008, 2014, 2015).

Two references often accompany the notion that liars monitor receivers’ reactions (viz., Buller & Burgoon, 1996; Schweitzer et al., 2002). Broadly speaking, Interpersonal Deception Theory (IDT) posits that communication is dynamic; interlocutors (in this case, liars) adapt their communication based on receivers’ feedback to be successful (Buller & Burgoon, 1996). Schweitzer et al. (2002) describe a particular instance they call a monitoring-dependent lie. They note that such lies usually require the liar to monitor the receiver, for example, when one equivocates on a bluff depending on the receiver’s initial reaction. Schweitzer et al. (2002) found that people capitalize on visual access to execute monitoring-dependent lies.

IDT (Buller & Burgoon, 1996) and the study by Schweitzer et al. (2002) suggest that liars might monitor receivers’ reactions to track the success of a lie. However, those examinations pertain to lying—not a comparison between lying and truth-telling. Why liars versus truth-tellers would be more inclined to monitor remains unclear. The current justification makes no predictions about truth-tellers; this omission ignores truth-tellers’ plight and conceals the need to consider how truth-tellers might behave. Interviews implicitly or explicitly call on interviewees to be convincing. We can assume a truth-teller will want to be believed even if they expect the truth to shine through. It is feasible to expect a truth-teller to check the reactions of their interviewer to assess whether the interviewer believes the truth being reported. For example, a truth-teller might clarify aspects of their message if the interviewer expresses disbelief in any way.

Summary. The claim that liars surveil their targets is a tenuous basis for cognitive load lie detection. The evidence cited to support the justification provides no information about truth-tellers’ tendencies to monitor their success. Whether surveillance is taxing or unique to lying remains unexamined and unknown. A research gap worth exploring is whether liars and truth-tellers necessarily monitor receivers in different ways and whether those differences moderate cognitive load.

Preoccupation with Role-play. Liars may be engrossed in reminding themselves to play the role of a truth-teller.

A factor posited to contribute to liars’ cognitive load is that liars might get absorbed in reminding themselves to act like truth-tellers. Drawing on DePaulo et al. (2003), the proponents posit that such role-play requires extra cognitive effort (e.g., Vrij, 2008, 2014, 2015). Extant lie detection research calls the present justification into question, and the derivation chain fails to consider liars’ potential escape.

Based on the idea of self-presentation (e.g., Schlenker & Pontari, 2000), DePaulo et al. (2003) note that lying and truth-telling involve the tendency to portray certain behaviors to receivers: personal qualities aimed at making the receiver believe the sender is sincere. Because the claims liars make are false, DePaulo et al. (2003) argue that liars’ self-presentation should be more deliberate than truth-tellers’. Liars try to behave as they imagine truth-tellers would. Consequently, the authors predicted that liars’ performance would seem less forthcoming, less convincing, less pleasant, and tenser insofar as liars act more deliberately than truth-tellers (DePaulo et al., 2003, p. 79).

Luke (2019) conducted a simulation study reexamining the available meta-analytic data on cues of lying (including DePaulo et al., 2003). The findings indicated that the cues featured in the deception literature are weak. Moreover, DePaulo et al. (2003) explicitly note that the self-presentation that attends deliberate behavior does not always hamper performance; the self-presentation demands on liars do not necessarily surpass what truth-tellers experience. DePaulo et al. (2003) note that truth-tellers might also be preoccupied. Given the risks failure might attract, neurotypical truth-tellers will want to portray personal qualities and deliver their messages in ways they believe will strengthen their credibility.

Summary. The issues raised in the preceding paragraph bring the liars’ potential escape to light. Stakeholders should be skeptical of the idea that because liars deliberately act like truth-tellers, liars will be preoccupied with role-playing and, in doing so, spend more mental resources. “It takes as much self-presentation skill to communicate accurate, truthful information that creates the desired impact on others as it does to tell lies that try to take advantage of others” (Schlenker & Pontari, 2000, p. 225). Interviewing methods intending to strain deliberate self-presentation might pose a similar challenge to liars and truth-tellers—not liars alone.

Three research gaps become evident. (a) Future theories must specify how self-presentation manifests in an investigative interview: do liars and truth-tellers engage in different types of self-presentation? (b) Does such self-presentation elicit different levels of cognitive load that one can exploit to expose lies? (c) Then advocates must elucidate how that difference, if one exists, hampers the construction of lies versus truths.

The Basis for Lying. Lying requires a mental justification truth-telling does not.

An extant rationale of cognitive load lie detection is that lying requires a justification, but truth-telling does not (e.g., Vrij, 2014, 2015). The proponents argue that people usually lie because material and psychological reasons motivate them. Those considerations drain mental resources because, after each question, a liar must decide whether lying is a worthwhile response (Vrij, 2014, p. 186). I will illustrate that the rationale outlined in this paragraph conceals the liars’ potential escape. The discussion will draw on phenomena relevant to the justification under examination (i.e., Bok, 1999), deception research (e.g., Levine & McCornack, 1996), and logical arguments.

When theorists invoke the present justification, they consistently cite Levine et al. (2010) as support; see, for example, Vrij (2014, p. 186, 2015, p. 207) and Vrij and Ganis (2014, p. 321). Levine et al. (2010) explicitly claim to empirically examine Bok’s (1999) exposition of the maxim that people prefer to tell the truth if they have no reason to lie. Thus, the present justification of cognitive load lie detection stems from Bok (1999).

Bok notes that humans generally consider lying morally reprehensible; lying erodes trust in human relations. That indictment creates an initial imbalance wherein lying carries a negative moral valence. But sometimes lying is a person’s last resort, and such instances invite us to consider whether lying is morally justified (Bok, 1999). For example, when someone lies due to an unexamined good intention: telling a friend their unpalatable meal is delightful to prevent hurt feelings. In this view, a discovered lie requires an explanation, but a discovered truth does not (Bok, 1999, p. 32). Humans behave in ways suggesting they want to avoid the potential disrepute lying brings (e.g., Levine et al., 2010).

Now let us reconsider the claim that lying saps mental resources because one has to decide whether lying is worthwhile (viz., Vrij, 2014, p. 186). Suppose one decides in advance to lie and prepares those lies, as the extant literature indicates liars in investigative interviews do (e.g., Hartwig et al., 2007). Then such would-be liars have already decided that lying is worthwhile and will likely not be contending with moral justifications during the act. What about unanticipated questions, which are relevant here? Is the moral justification for lying a significant item would-be liars contemplate after receiving an unexpected question? If so, does that contemplation necessarily make lying straining such that one can exploit the strain? The present rationale does not specify. Moreover, logically speaking, if one has decided in advance to lie, regardless of whether they have prepared the lie’s specific contents, the would-be liar has effectively moved past the issue of morality—even if they receive unexpected questions.

A relevant competing hypothesis is evident here: liars are likely to be most concerned with the material risks an investigative interview presents, for example, becoming a suspect if the interviewer discovers the lie. In this view, the moral legitimacy of lying will likely be trivial to liars. Possibly appearing morally reprehensible is arguably not a material risk, at least not an important one. Liars will be concerned with constructing plausible lies that will evade suspicion. If a liar were to summon cognitive resources, the primary consumer would likely be their attempt to navigate any material risks2. Mentally justifying the moral legitimacy of the lie will hardly compete for cognitive resources: this is a logical point that analysts can subject to empirical testing.

A critic may object to the issues just raised by saying that the present rationale of cognitive load lie detection has nothing to do with liars contending with the moral legitimacy of their lies. Such a critique will have to explain why the current rationale consistently cites Levine et al. (2010), which explicitly aimed to empirically examine Bok’s (1999) musings. But for argument’s sake, let us entertain the potential objection. Assume the rationale intends to claim that motives play a significant role in deploying lies. That is, “[people] lie because they are too embarrassed to tell the truth (psychological reasons) or they lie to make money or to avoid punishment (material reasons)” (e.g., Vrij, 2014, 2015). Because liars have to consider those motives, lying is prone to becoming difficult; consequently, imposing cognitive load will expose lies.

Why such motives make lying versus truth-telling more liable to become cognitively demanding remains unclear, unexamined, and unknown. As Levine et al. (2010) indicate, people lie when their motives make truth-telling more challenging; for instance, when the truth might bring unwanted consequences. Remember that the interviewer does not know the would-be liar or truth-teller beforehand. Suppose lying, in that case, is an easier response option than truth-telling. Then the idea that motives make lying prone to difficulty contradicts the premise that engendered the idea.

Furthermore, note that liars and truth-tellers share the same motive to be believed, given the consequences of appearing dubious in an investigative interview (see e.g., Hartwig et al., 2007). The reasons for wanting to be believed might vary. Future research must specify whether such lower-level variation exerts an important difference on the higher-order motive of wanting to be believed. For example, do the lower-level variations lead liars to want to be believed more than truth-tellers do? If such a difference exists, theorists must clarify how that difference makes imposing more cognitive load likely to expose lies.

Another objection could be that the claim—lying requires a justification and truth-telling does not—refers to answering unanticipated follow-up questions. Such a view supposes that liars versus truth-tellers will struggle more with justifying or answering follow-up questions to prove the plausibility of their statements. There are two critical limitations to that hypothesis.

First, it is unclear why liars would be more susceptible to needing a justification for their messages. An interviewer might well ask truth-tellers probing questions too. Remember that the interviewer does not know who would be honest. We cannot take it for granted that unexpected questions present truth-tellers with an easier hurdle than liars. Exponents must first specify what constitutes an unanticipated question and the mechanisms through which such questions operate.

The second limitation arises when considering the probing effect (e.g., Levine & McCornack, 1996). Studies have shown that probing questions make liars and truth-tellers appear honest (Buller et al., 1989; Levine & McCornack, 2001). Thus, whatever difficulty follow-up questions might invite, interviewees seem to rise to the challenge; probing hampers lie detection rather than improving it. What about studies suggesting unanticipated questions can expose liars, as discussed by Vrij et al. (2011)? Caution is warranted. Meta-analyses on cognitive load lie detection indicate that, generally, the approach might not be as efficacious as theorized (Levine et al., 2018; Mac Giolla & Luke, 2021; Verschuere et al., 2018). And a falsifiable theory of the unanticipated questions approach is yet to be posited. When researchers outline a theory of what generally constitutes unanticipated questions, analysts can reexamine the probing effect using the specified unanticipated questioning method. Such research will clarify how answering unexpected follow-up questions warrants cognitive load lie detection.

Summary. The idea that liars contend with moral justifications that invite exploitable cognitive load is prone to the limitation of liars’ potential escape. Liars and, for that matter, truth-tellers are likely to be concerned with the material risks an investigative interview brings. The idea that moral reprehensibility would sap the cognitive resources of those who have decided in advance to lie remains to be examined.

Suppression of the Truth. Liars have to suppress the truth while lying.

A claim undergirding cognitive load lie detection is that liars must suppress the truth while lying. And when one elects to lie, suppression invites a burden that cognitive load can compound (e.g., Vrij, 2015). Two strands of support often accompany that justification.

One strand comprises neuroimaging studies examining the brain regions involved in lying (viz., Christ et al., 2009; Spence et al., 2001; Spence & Kaylor-Hughes, 2008). Those neuroimaging studies indicate that lying recruits brain regions responsible for executive control, inhibitory control, task switching, and working memory. The other source of support comes from a study that examined how frequent lying or truth-telling subsequently influences the difficulty of performing the same behaviors (Verschuere et al., 2011).

The suppression justification suffers from limitations regarding liars’ potential escape. I will explain those limitations using logical arguments, lie detection research (i.e., neuroimaging studies), and insights from related phenomena (i.e., Gricean norms).

Let us begin with a logical limitation. What does the phrase suppression of the truth mean? The phrase cannot be synonymous with lying. Otherwise, the sentence ‘liars have to suppress the truth when they lie’ would amount to the following tautology: liars must lie (or not tell the truth) when they lie. The most charitable interpretation of truth suppression is that it is a separate behavior liars perform in addition to lying (Neequaye, 2022). But the literature contains no definition outlining the difference between suppression and lying. The presumed mechanism of suppression cannot warrant cognitive load lie detection if suppression of the truth remains unknown or indistinguishable from lying. We cannot be sure that suppression burdens lying such that one can exploit the presumed handicap to detect lies.

Let us consider possible rebuttals to the issues just outlined. The counterargument could be that suppression constitutes a liar physically preventing themselves from communicating the truth and then lying. That situation could arise when a question is unexpected. But as I have argued previously, the literature has yet to specify a falsifiable theory of unanticipated questions and the corresponding boundary conditions.

One could turn to the neuroimaging literature cited to support suppression as evidence that suppression is indeed physical. For example, inhibitory control and task switching come into play when people lie (e.g., Spence et al., 2001). But it remains unknown whether such inhibition involves liars switching from the truth to a lie. The inhibitory maneuver could entail switching away from implausible lies or inconsistencies that may damage the goal to deceive (Neequaye, 2022). Truth-tellers might also be concerned with appearing consistent, and they could engage in inhibitory control. Following Gricean norms (Grice, 1975), a truth-teller could decide not to report irrelevant information, given an interviewer’s question. Truth-tellers might use such discourse moves to present the version of the truth that best answers the interviewer’s inquiry (Neequaye, 2022). The literature needs research on whether liars and truth-tellers experience different types and/or levels of intrusive thoughts when communicating in investigative interviews. Then it must be ascertained whether that feature corresponds to different levels of cognitive load that one can exploit to expose lies.

Spence and Kaylor-Hughes (2008) theorize about a specific instance in which people might physically prevent themselves from uttering the truth before telling a lie. In that scenario, the truthful response to a question is yes, and the liar has a limited time to lie by saying no—or vice versa. Spence and Kaylor-Hughes (2008) posit that those conditions mentally prompt the word yes to the extent that the liar must stop themselves from saying yes before saying no. Even if one grants that proposition, the depicted scenario is narrow considering the possible range of responses in an investigative interview and conversations in general. Additionally, there is the practical matter: how does an interviewer reliably flag whether a yes- or no-response was slow enough to suggest that the sender spent the time inhibiting the alternative response? Verschuere et al. (2011) also used a yes- or no-response format, and its findings demonstrate that practice makes people fluent liars rather than providing evidence of suppression. These research works hardly offer evidence of a thought suppression mechanism specifically inducing cognitive load that one can exploit3.

Summary. The definition of the phrase suppression of the truth remains underspecified and conceals liars’ potential escape. The charitable interpretation that suppression involves liars physically preventing themselves from uttering the truth suffers from logic and coherence limitations. And the supporting literature hardly indicates that the presumed physicality of suppression is unique to lying. Associative memory is not reserved for liars alone. Truth-tellers might remember and refrain from reporting irrelevant information: they might “suppress” versions of their narrative that do not contribute to bolstering their credibility. The claim that liars must suppress the truth while lying rests on an elusive mechanism that remains to be verified. We cannot take it for granted that suppression burdens liars’ cognitive resources such that it justifies cognitive load lie detection.

The Activation of Truths and Lies. The mental activation of a lie is deliberate, but the truth often comes to mind automatically.

The current justification argues that differences in automaticity make lying more liable to mental effort than truth-telling, warranting cognitive load lie detection (e.g., Leal & Vrij, 2008; Vrij, 2015; Vrij & Ganis, 2014).

A limitation in coherence and logic becomes apparent when scrutinizing the present justification and the supporting evidence. Suppose a liar knows the prospective topics and prepares lies in advance. In that case, activation will not be an issue—the lie is already in mind. There will hardly be cognitive load to compound, providing liars with an escape from cognitive load lie detection. Activation might be problematic for liars when they encounter unexpected questions. But researchers must first provide enough specification to license unanticipated questioning as an unambiguous lie detection method.

For argument’s sake, let us assume that there are themes of questions people generally do not expect. Why truth-tellers versus liars will more easily activate answers tailored to questions they do not anticipate remains unclear. At bottom, an unexpected question should lead any prospective respondent to ponder their would-be answers, especially given the risk of appearing dubious in an investigative interview. I have already discussed the limitation with assuming that truth-tellers will have an easier time conjuring plausible and consistent responses to unanticipated questions. But one could claim that the contention of interest remains unaddressed. Despite the need to ponder a response to an unexpected question, are truths more automatically mentally activated than lies? To answer that question, let us turn our attention to the evidence cited to support the idea that activation makes lying versus truth-telling more difficult, namely Gilbert (1991) and Walczyk et al. (2003).

Gilbert’s (1991) exposition advances the idea that humans streamline their communication by relying on the convention that incoming messages are generally truthful. It is not immediately apparent why authors cite Gilbert (1991) as support for the current justification. Future research can elucidate the necessary connection.

Researchers assume that higher levels of cognitive load elicit longer reaction times (e.g., Verschuere et al., 2018). Walczyk et al. (2003) report specific conditions wherein the mental activation of lies versus truths elicited longer reaction times (and, by extension, more cognitive load). First, respondents were instructed to answer unexpected questions quickly. Second, the questions aimed to elicit short one- or two-word responses. As Walczyk et al. (2003) note, their findings offer a promising start, but they admit that the context they examined is narrow. It remains to be verified whether those results generalize to investigative interviews in which inquiries call for more than one- or two-word responses. Moreover, constraining interviewees to hasty answers is arguably impractical. It behooves future research to explore whether mentally generating narratives to unexpected questions presents a steeper challenge to liars versus truth-tellers. Then we must ascertain whether one can exploit that hurdle to expose lies.

Summary. Claiming that liars versus truth-tellers would find it more challenging to mentally activate their messages suffers from logical limitations, failing to recognize liars’ potential escape. The mental activation of a message will likely not be taxing when one prepares to lie—they have deliberated in advance. And we cannot be sure that unexpected questions will bring a steeper challenge to liars versus truth-tellers. Such questions should be taxing to any prospective respondent, not just liars. Appearing dubious in an investigative interview is risky; people are likely to ponder their answers.

Table 1.
A summary of limitations and issues for future research.
Justification: Liars must monitor their lies to remain plausible and consistent.

Limitations: Logic and coherence limitations regarding truth-tellers’ plight.
  1. Neurotypical people would want to appear plausible and consistent in investigative interviews. Truth-tellers might monitor their responses to unexpected questions.
  2. The constructs “slips of the tongue” and “new leads” recapitulate the construct “lying”.

Issues for future research:
  1. Exponents must outline a falsifiable theory of unanticipated questions that provides enough specification to license unanticipated questioning as an unambiguous lie-detection method.
     a. What are the cognitive mechanisms underlying how liars and truth-tellers answer unanticipated questions to appear plausible and consistent?
  2. Are there important differences between concealing information (i.e., slips of the tongue and new leads) and lying?
     a. How do those differences make lying taxing such that one can exploit that burden to reveal lies?

Justification: Liars are less likely to take their credibility for granted than truth-tellers would.

Limitations: Logic and coherence limitations regarding truth-tellers’ plight.
  1. The definition of high stakes and the alleged cognitive biases apply the risks of appearing dubious to liars and exclude such potential effects on truth-tellers. Neurotypical suspects undergoing an interview about their involvement in a serious crime should be concerned about their credibility, irrespective of their culpability. Whatever cognitive demand high stakes induce, that strain should tax liars and truth-tellers, not just liars.

Issues for future research:
  1. What defines high-stakes situations when it comes to lie detection? (See a suggested definition of high stakes in the main text.)
     a. Do liars and truth-tellers perceive high stakes differently?
     b. Does the risk-taking of liars versus truth-tellers hoping for the best in high-stakes situations affect concern about their credibility in different ways?
        i. Does that difference yoke liars with a greater challenge one can capitalize on to expose lies?

Justification: Liars monitor receivers’ reactions to assess whether a lie is being believed.

Limitations: Logic and coherence limitations regarding truth-tellers’ plight.
  1. It is feasible to expect a truth-teller to check the reactions of their interviewer to assess whether the interviewer believes the truth being reported. For example, a truth-teller might clarify aspects of their message if the interviewer expresses disbelief in any way. Whether surveillance is taxing or unique to lying remains unexamined and unknown.

Issues for future research:
  1. Do liars and truth-tellers monitor receivers in different ways to assess whether their receiver is believing the incoming message?
     a. Do those differences moderate cognitive load such that interviewers can exploit the difficulty to expose lies?

Justification: Liars may be engrossed in reminding themselves to play the role of a truth-teller.

Limitations: Limitations regarding liars’ potential escape when considering lie detection research and impression management.
  1. Luke (2019) demonstrated that cues featured in the deception literature, including cues based on self-presentation, are weak. “It takes as much self-presentation skill to communicate accurate, truthful information that creates the desired impact on others as it does to tell lies that try to take advantage of others” (Schlenker & Pontari, 2000, p. 225). Interviewing methods intending to strain deliberate self-presentation might pose a similar challenge to liars and truth-tellers—not liars alone.

Issues for future research:
  1. How do people self-present in an investigative interview: do liars and truth-tellers engage in different types of self-presentation?
     a. Does such self-presentation elicit different levels of cognitive load between liars and truth-tellers?
        i. How does that difference, if one exists, hamper the construction of lies versus truths?

Justification: Lying requires a mental justification truth-telling does not.

Limitations: Limitations regarding liars’ potential escape when considering Bok (1999), the probing effect (e.g., Levine & McCornack, 1996), and logic.
  1. The idea that liars contend with moral justifications that invite exploitable cognitive load is prone to the limitation of liars’ potential escape. Liars and, for that matter, truth-tellers are likely to be concerned with the material risks an investigative interview brings, for example, becoming a prime suspect if one appears dubious. The idea that moral reprehensibility would sap the cognitive resources of those who have decided in advance to lie remains to be examined.

Issues for future research:
  1. Is the moral justification of lying a significant item would-be liars contemplate after receiving an unexpected question? If so, does that contemplation necessarily make lying straining? This research gap requires a specified theory of unanticipated questions.
  2. Do the different reasons for wanting to be believed exert an important difference on lying versus truth-telling? For example, do the lower-level variations lead liars to want to be believed more than truth-tellers do?
     a. If such a difference exists, theorists must clarify how that difference imposes more cognitive load on liars versus truth-tellers. Then analysts can examine whether that presumed challenge is something interviewers can utilize to expose lies.
  3. Does the probing effect (e.g., Levine & McCornack, 1996) manifest when interviewers ask unanticipated follow-up questions? This research gap requires a specified theory of unanticipated questions.

Justification: Liars have to suppress the truth while lying.

Limitations: Limitations regarding liars’ potential escape when considering logic, neuroimaging studies (e.g., Spence et al., 2001), and Gricean norms (Grice, 1975).
  1. Suppression of the truth remains underspecified and conceals liars’ potential escape. The charitable interpretation that suppression involves liars physically preventing themselves from uttering the truth suffers from coherence limitations. The supporting literature hardly indicates that the presumed physicality of suppression is unique to lying. Associative memory is not reserved for liars alone. Truth-tellers might remember and refrain from reporting irrelevant information: they might “suppress” versions of their narrative that do not contribute to bolstering their credibility.

Issues for future research:
  1. How does suppression differ from lying? And how does that difference make lying versus truth-telling more challenging such that one can capitalize on suppression to expose lies?
  2. What specific behavior during lying constitutes suppression, and does that behavior change depending on the strategy chosen as the preferred method of lying?
     a. How do such behaviors make lying more laborious than truth-telling?
  3. Do liars and truth-tellers experience different types and/or levels of intrusive thoughts when communicating in investigative interviews?
     a. Does that difference correspond to different levels of cognitive load that one can exploit to expose lies?

Justification: The mental activation of a lie is deliberate, but the truth often comes to mind automatically.

Limitations: Logic and coherence limitations regarding liars’ potential escape.
  1. Claiming that liars versus truth-tellers would find it more challenging to mentally activate their messages fails to recognize liars’ potential escape. The mental activation of a message will likely not be taxing when one prepares to lie—they have deliberated in advance. And we cannot be sure that unexpected questions will bring a steeper challenge to liars versus truth-tellers. Appearing dubious in an investigative interview is risky; liars and truth-tellers are likely to ponder their answers.

Issues for future research:
  1. Does mentally generating narratives to unexpected questions present a steeper challenge to liars versus truth-tellers?
     a. Stakeholders must then verify whether an interviewer can capitalize on the ostensible challenge to flag lies.
     b. This research gap requires a specified theory of unanticipated questions.
JustificationLimitationsIssues for Future Research
  • Liars must monitor their lies to remain plausible and consistent.

 
  • Logic and coherence limitations regarding truth-tellers’ plight.

    1. Neurotypical people would want to appear plausible and consistent in investigative interviews. Truth-tellers might monitor their responses to unexpected questions.

    2. The constructs “slips of the tongue” and “new leads” recapitulate the construct “lying”.

 
  1. Exponents must outline a falsifiable theory of unanticipated questions that provides enough specification to license unanticipated questioning as an unambiguous lie-detection method.

    1. What are the cognitive mechanisms underlying how liars and truth-tellers answer unanticipated questions to appear plausible and consistent.

  2. Are there important differences between concealing information (i.e., slips of the tongue & new leads) and lying?

    1. How do those differences make lying taxing such that one can exploit that burden to reveal lies?

 
Justification: Liars are less likely to take their credibility for granted than truth-tellers would.

Logic and coherence limitations regarding truth-tellers’ plight.
  1. The definition of high stakes and the alleged cognitive biases attribute the risks of appearing dubious to liars alone and exclude such potential effects on truth-tellers. Neurotypical suspects undergoing an interview about their involvement in a serious crime should be concerned about their credibility, irrespective of their culpability. Whatever cognitive demand high stakes induce, that strain should tax liars and truth-tellers, not just liars.

Issues for Future Research:
  1. What defines high-stakes situations when it comes to lie detection? (See a suggested definition of high stakes in the main text.)
    1. Do liars and truth-tellers perceive high stakes differently?
    2. Does the risk-taking of liars versus truth-tellers who hope for the best in high-stakes situations affect their concern about credibility in different ways?
      1. Does that difference saddle liars with a greater challenge that one can capitalize on to expose lies?
 
Justification: Liars monitor receivers’ reactions to assess whether a lie is being believed.

Logic and coherence limitations regarding truth-tellers’ plight.
  1. It is reasonable to expect a truth-teller to check the reactions of their interviewer to assess whether the interviewer believes the truth being reported. For example, a truth-teller might clarify aspects of their message if the interviewer expresses disbelief in any way. Whether such monitoring is taxing or unique to lying remains unexamined and unknown.

Issues for Future Research:
  1. Do liars and truth-tellers monitor receivers in different ways to assess whether their receiver believes the incoming message?
    1. Do those differences moderate cognitive load such that interviewers can exploit the difficulty to expose lies?
 
Justification: Liars may be engrossed in reminding themselves to play the role of a truth-teller.

Limitations regarding liars’ potential escape when considering lie detection research and impression management.
  1. Luke (2019) demonstrated that cues featured in the deception literature, including cues based on self-presentation, are weak. “It takes as much self-presentation skill to communicate accurate, truthful information that creates the desired impact on others as it does to tell lies that try to take advantage of others” (Schlenker & Pontari, 2000, p. 225). Interviewing methods intending to strain deliberate self-presentation might pose a similar challenge to liars and truth-tellers, not liars alone.

Issues for Future Research:
  1. How do people self-present in an investigative interview: do liars and truth-tellers engage in different types of self-presentation?
    1. Does such self-presentation elicit different levels of cognitive load between liars and truth-tellers?
      1. How does that difference, if one exists, hamper the construction of lies versus truths?
 
Justification: Lying requires a mental justification that truth-telling does not.

Limitations regarding liars’ potential escape when considering Bok (1999), the probing effect (e.g., Levine & McCornack, 1996), and logic.
  1. The idea that liars contend with moral justifications that invite exploitable cognitive load is prone to the limitation of liars’ potential escape. Liars, and for that matter truth-tellers, are likely to be concerned with the material risks an investigative interview brings, for example, becoming a prime suspect if one appears dubious. The idea that moral reprehensibility would sap the cognitive resources of those who have decided in advance to lie remains to be examined.

Issues for Future Research:
  1. Is the moral justification of lying a significant item would-be liars contemplate after receiving an unexpected question? If so, does that contemplation necessarily make lying taxing? This research gap requires a specified theory of unanticipated questions.
  2. Do the different reasons for wanting to be believed make an important difference to lying versus truth-telling? For example, do the lower-level variations lead liars to want to be believed more than truth-tellers do?
    1. If such a difference exists, theorists must clarify how that difference imposes more cognitive load on liars versus truth-tellers. Then analysts can examine whether that presumed challenge is something interviewers can utilize to expose lies.
  3. Does the probing effect (e.g., Levine & McCornack, 1996) manifest when interviewers ask unanticipated follow-up questions? This research gap requires a specified theory of unanticipated questions.
 
Justification: Liars have to suppress the truth while lying.

Limitations regarding liars’ potential escape when considering logic, neuroimaging studies (e.g., Spence et al., 2001), and Gricean norms (Grice, 1975).
  1. Suppression of the truth remains underspecified and conceals liars’ potential escape. The charitable interpretation that suppression involves liars physically preventing themselves from uttering the truth suffers from coherence limitations. The supporting literature hardly indicates that the presumed physicality of suppression is unique to lying. Associative memory is not reserved for liars alone. Truth-tellers might remember and refrain from reporting irrelevant information: they might “suppress” versions of their narrative that do not contribute to bolstering their credibility.

Issues for Future Research:
  1. How does suppression differ from lying? And how does that difference make lying versus truth-telling more challenging such that one can capitalize on suppression to expose lies?
  2. What specific behavior during lying constitutes suppression, and does that behavior change depending on the strategy chosen as the preferred method of lying?
    1. How do such behaviors make lying more laborious than truth-telling?
  3. Do liars and truth-tellers experience different types and/or levels of intrusive thoughts when communicating in investigative interviews?
    1. Does that difference correspond to different levels of cognitive load that one can exploit to expose lies?
 
Justification: The mental activation of a lie is deliberate, but the truth often comes to mind automatically.

Logic and coherence limitations regarding liars’ potential escape.
  1. Claiming that liars versus truth-tellers would find it more challenging to mentally activate their messages fails to recognize liars’ potential escape. The mental activation of a message will likely not be taxing for someone who has prepared to lie: they have deliberated in advance. And we cannot be sure that unexpected questions will bring a steeper challenge to liars versus truth-tellers. Appearing dubious in an investigative interview is risky; liars and truth-tellers are likely to ponder their answers.

Issues for Future Research:
  1. Does mentally generating narratives in response to unexpected questions present a steeper challenge to liars versus truth-tellers?
    1. If so, stakeholders must verify whether an interviewer can capitalize on the ostensible challenge to flag lies.
    2. This research gap requires a specified theory of unanticipated questions.
 

General Summary and Additional Issues. The limitations of the justifications make cognitive load lie detection challenging to audit with precision. Each justification contains ambiguities that obfuscate what would count as a severe test of the hypothesis. Table 1 summarizes the limitations and the corresponding research questions to address. Until stakeholders address those research gaps, cognitive load lie detection will remain shielded from severe testing. One can explain away any evidence rejecting cognitive load lie detection using two maneuvers. (1) The justifications could be reframed—after the fact—because all of them contain ambiguities and unverified assumptions. (2) Alternatively, one could shift focus to another justification.

Having tackled the justifications individually, let us discuss them from a bird’s-eye view; further issues persist. Four justifications appear to invoke each other. Consider the claim that (a) liars are less likely to take their credibility for granted. That justification logically ushers in three other justifications. When a person is concerned about their credibility, they would necessarily (b) ensure that their messages are plausible and consistent; (c) monitor their success rate; and (d) be engrossed in executing their goals. Cognitive load lie detection does not specify whether the four justifications just described are related. And how those possible dependencies affect the hypothesis remains unknown. That underspecification is yet another loophole that adds to cognitive load lie detection’s perpetual shield. Analysts cannot determine appropriate severe tests using any of the four justifications just mentioned. Suppose evidence were presented to reject any of them. The rebuttal could be that the test failed to consider post hoc dependencies or non-dependencies between the justifications.

Cognitive load lie detection rests on a collection of underspecified justifications. The way forward is a better specification of what warrants the hypothesis to make it amenable to severe testing. At bottom, the literature must clarify whether the justifications hold under boundary conditions. For example, do the justifications hold early or late in interviews, or do they hold when truth-tellers versus liars fail to grasp the potential risks an investigative interview brings? Do the justifications hold over multiple interviews? Table 1 provides a list of research avenues to commence the reexamination process. Luke (2019) also offers critical methodological reforms.

One more potential objection is worth preempting; that complaint might go something like this. “How can psychology research support practitioners if we halt developing lie detection techniques and devote the time to fixing theoretical issues? Surely practitioners, like police officers, are concerned with applied matters? Their chief focus is knowing what works.” Any scientific field, even an applied one, should be concerned with staking recommendations on robust theory. Cognitive load lie detection sounds promising at face value. But it will benefit practitioners in the long term if researchers invest in ensuring that the approach is likely a productive way to detect lies.

While writing this article, I learned the most from conversations with and comments from Karl Ask, Irena Boskovic, Pär Anders Granhag, and Erik Mac Giolla (listed alphabetically by surname). I am fully and solely responsible for any errors in this article.

There are no conflicts of interest to declare.

This work contains no empirical data.

1. A related issue concerns the methodologies and statistical tests to use when examining cognitive load lie detection. This article provides a conceptual analysis to assist future research in flagging those appropriate tests.

2. This point is not to say that constructing plausible lies versus truth-telling invites a heavier cognitive load. As noted earlier, people—irrespective of their disposition—will be concerned about the plausibility of their messages.

3. One could draw on the cognitive costs of secrecy (i.e., Lane & Wegner, 1995) to defend the notion that suppression is physical. I address those objections elsewhere (see Neequaye, 2022).

Bok, S. (1999). Lying: Moral Choice in Public and Private Life. Vintage.
Buller, D. B., & Burgoon, J. K. (1996). Interpersonal Deception Theory. Communication Theory, 6(3), 203–242. https://doi.org/10.1111/j.1468-2885.1996.tb00127.x
Buller, D. B., Comstock, J., Aune, R. K., & Strzyzewski, K. D. (1989). The effect of probing on deceivers and truthtellers. Journal of Nonverbal Behavior, 13(3), 155–170. https://doi.org/10.1007/bf00987047
Christ, S. E., Van Essen, D. C., Watson, J. M., Brubaker, L. E., & McDermott, K. B. (2009). The Contributions of Prefrontal Cortex and Executive Control to Deception: Evidence from Activation Likelihood Estimate Meta-analyses. Cerebral Cortex, 19(7), 1557–1566. https://doi.org/10.1093/cercor/bhn189
DePaulo, B. M., & Bell, K. L. (1996). Truth and investment: Lies are told to those who care. Journal of Personality and Social Psychology, 71(4), 703–716. https://doi.org/10.1037/0022-3514.71.4.703
DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129(1), 74–118. https://doi.org/10.1037/0033-2909.129.1.74
Frank, A., Biberci, S., & Verschuere, B. (2019). The language of lies: A preregistered direct replication of Suchotzki and Gamer (2018; Experiment 2). Cognition and Emotion, 33(6), 1310–1315. https://doi.org/10.1080/02699931.2018.1553148
Gilbert, D. T. (1991). How mental systems believe. American Psychologist, 46(2), 107–119. https://doi.org/10.1037/0003-066x.46.2.107
Gilovich, T., Savitsky, K., & Medvec, V. H. (1998). The illusion of transparency: Biased assessments of others’ ability to read one’s emotional states. Journal of Personality and Social Psychology, 75(2), 332–346. https://doi.org/10.1037/0022-3514.75.2.332
Grice, H. P. (1975). Logic and Conversation. Speech Acts, 41–58. https://doi.org/10.1163/9789004368811_003
Hart, C. L., Lemon, R., Curtis, D. A., & Griffith, J. D. (2020). Personality Traits Associated with Various Forms of Lying. Psychological Studies, 65(3), 239–246. https://doi.org/10.1007/s12646-020-00563-x
Hartwig, M., Granhag, P. A., & Strömwall, L. A. (2007). Guilty and innocent suspects’ strategies during police interrogations. Psychology, Crime & Law, 13(2), 213–227. https://doi.org/10.1080/10683160600750264
Kassin, S. M., & Norwick, R. J. (2004). Why People Waive Their Miranda Rights: The Power of Innocence. Law and Human Behavior, 28(2), 211–221. https://doi.org/10.1023/b:lahu.0000022323.74584.f5
Kvarven, A., Strømland, E., & Johannesson, M. (2020). Comparing meta-analyses and preregistered multiple-laboratory replication projects. Nature Human Behaviour, 4(4), 423–434. https://doi.org/10.1038/s41562-019-0787-z
Lancaster, G. L. J., Vrij, A., Hope, L., & Waller, B. (2013). Sorting the Liars from the Truth Tellers: The Benefits of Asking Unanticipated Questions on Lie Detection. Applied Cognitive Psychology, 27(1), 107–114. https://doi.org/10.1002/acp.2879
Lane, J. D., & Wegner, D. M. (1995). The cognitive consequences of secrecy. Journal of Personality and Social Psychology, 69(2), 237–253. https://doi.org/10.1037/0022-3514.69.2.237
Leal, S., & Vrij, A. (2008). Blinking During and After Lying. Journal of Nonverbal Behavior, 32(4), 187–194. https://doi.org/10.1007/s10919-008-0051-0
Lerner, M. J. (1998). The Two Forms of Belief in a Just World. In L. Montada & M. J. Lerner (Eds.), Responses to Victimizations and Belief in a Just World (pp. 247–269). Springer US. https://doi.org/10.1007/978-1-4757-6418-5_13
Levine, T. R., Blair, J. P., & Carpenter, C. J. (2018). A critical look at meta-analytic evidence for the cognitive approach to lie detection: A re-examination of Vrij, Fisher, and Blank (2017). Legal and Criminological Psychology, 23(1), 7–19. https://doi.org/10.1111/lcrp.12115
Levine, T. R., Kim, R. K., & Hamel, L. M. (2010). People Lie for a Reason: Three Experiments Documenting the Principle of Veracity. Communication Research Reports, 27(4), 271–285. https://doi.org/10.1080/08824096.2010.496334
Levine, T. R., & McCornack, S. A. (1996). A Critical Analysis of the Behavioral Adaptation Explanation of the Probing Effect. Human Communication Research, 22(4), 575–588. https://doi.org/10.1111/j.1468-2958.1996.tb00380.x
Levine, T. R., & McCornack, S. A. (2001). Behavioral Adaptation, Confidence, and Heuristic-Based Explanations of the Probing Effect. Human Communication Research, 27(4), 471–502. https://doi.org/10.1111/j.1468-2958.2001.tb00790.x
Luke, T. J. (2019). Lessons From Pinocchio: Cues to Deception May Be Highly Exaggerated. Perspectives on Psychological Science, 14(4), 646–671. https://doi.org/10.1177/1745691619838258
Mac Giolla, E., & Luke, T. J. (2021). Does the cognitive approach to lie detection improve the accuracy of human observers? Applied Cognitive Psychology, 35(2), 385–392. https://doi.org/10.1002/acp.3777
Masip, J., & Herrero, C. (2013). ‘What Would You Say if You Were Guilty?’ Suspects’ Strategies During a Hypothetical Behavior Analysis Interview Concerning a Serious Crime. Applied Cognitive Psychology, 27(1), 60–70. https://doi.org/10.1002/acp.2872
McArthur, J., Jarvis, R., Bourgeois, C., & Ternes, M. (2022). Lying motivations: Exploring personality correlates of lying and motivations to lie. Canadian Journal of Behavioural Science / Revue Canadienne Des Sciences Du Comportement, 54(4), 335–340. https://doi.org/10.1037/cbs0000328
McCornack, S. A. (1992). Information manipulation theory. Communication Monographs, 59(1), 1–16. https://doi.org/10.1080/03637759209376245
Meehl, P. E. (1990). Why Summaries of Research on Psychological Theories are Often Uninterpretable. Psychological Reports, 66(1), 195–244. https://doi.org/10.2466/pr0.1990.66.1.195
Neequaye, D. A. (2022). A Metascientific Empirical Review of Cognitive Load Lie Detection. Collabra: Psychology, 8(1), 57508. https://doi.org/10.1525/collabra.57508
O’Donnell, M., Dev, A. S., Antonoplis, S., Baum, S. M., Benedetti, A. H., Brown, N. D., Carrillo, B., Choi, A. L., Connor, P., Donnelly, K., Ellwood-Lowe, M. E., Foushee, R., Jansen, R., Jarvis, S. N., Lundell-Creagh, R., Ocampo, J. M., Okafor, G. N., Azad, Z. R., Rosenblum, M., … Nelson, L. D. (2021). Empirical audit and review and an assessment of evidentiary value in research on the psychological consequences of scarcity. Proceedings of the National Academy of Sciences, 118(44), e2103313118. https://doi.org/10.1073/pnas.2103313118
Scheel, A. M., Tiokhin, L., Isager, P. M., & Lakens, D. (2021). Why Hypothesis Testers Should Spend Less Time Testing Hypotheses. Perspectives on Psychological Science, 16(4), 744–755. https://doi.org/10.1177/1745691620966795
Schlenker, B. R., & Pontari, B. A. (2000). The strategic control of information: Impression management and self-presentation in daily life. In Psychological perspectives on self and identity (pp. 199–232). American Psychological Association. https://doi.org/10.1037/10357-008
Schweitzer, M. E., Brodt, S. E., & Croson, R. T. A. (2002). Seeing and believing: Visual access and the strategic use of deception. International Journal of Conflict Management, 13(3), 258–375. https://doi.org/10.1108/eb022876
Serota, K. B., & Levine, T. R. (2015). A Few Prolific Liars: Variation in the Prevalence of Lying. Journal of Language and Social Psychology, 34(2), 138–157. https://doi.org/10.1177/0261927x14528804
Serota, K. B., Levine, T. R., & Boster, F. J. (2010). The Prevalence of Lying in America: Three Studies of Self-Reported Lies. Human Communication Research, 36(1), 2–25. https://doi.org/10.1111/j.1468-2958.2009.01366.x
Sooniste, T., Granhag, P. A., Knieps, M., & Vrij, A. (2013). True and false intentions: Asking about the past to detect lies about the future. Psychology, Crime & Law, 19(8), 673–685. https://doi.org/10.1080/1068316x.2013.793333
Spence, S. A., Farrow, T. F. D., Herford, A. E., Wilkinson, I. D., Zheng, Y., & Woodruff, P. W. R. (2001). Behavioural and functional anatomical correlates of deception in humans. Neuroreport, 12(13), 2849–2853. https://doi.org/10.1097/00001756-200109170-00019
Spence, S. A., & Kaylor-Hughes, C. J. (2008). Looking for truth and finding lies: The prospects for a nascent neuroimaging of deception. Neurocase, 14(1), 68–81. https://doi.org/10.1080/13554790801992776
Sporer, S. L. (2016). Deception and Cognitive Load: Expanding Our Horizon with a Working Memory Model. Frontiers in Psychology, 7, 420. https://doi.org/10.3389/fpsyg.2016.00420
ten Brinke, L., & Porter, S. (2012). Cry me a river: Identifying the behavioral consequences of extremely high-stakes interpersonal deception. Law and Human Behavior, 36(6), 469–477. https://doi.org/10.1037/h0093929
Verschuere, B., Köbis, N. C., Bereby-Meyer, Y., Rand, D., & Shalvi, S. (2018). Taxing the brain to uncover lying? Meta-analyzing the effect of imposing cognitive load on the reaction-time costs of lying. Journal of Applied Research in Memory and Cognition, 7(3), 462–469. https://doi.org/10.1016/j.jarmac.2018.04.005
Verschuere, B., Spruyt, A., Meijer, E. H., & Otgaar, H. (2011). The ease of lying. Consciousness and Cognition, 20(3), 908–911. https://doi.org/10.1016/j.concog.2010.10.023
Vrij, A. (2008). Detecting Lies and Deceit: Pitfalls and Opportunities. John Wiley & Sons.
Vrij, A. (2014). Interviewing to detect deception. European Psychologist, 19(3), 184–194. https://doi.org/10.1027/1016-9040/a000201
Vrij, A. (2015). A cognitive approach to lie detection. In Detecting deception: Current challenges and cognitive approaches (pp. 205–229). Wiley-Blackwell.
Vrij, A., Blank, H., & Fisher, R. P. (2018). A re-analysis that supports our main results: A reply to Levine et al. Legal and Criminological Psychology, 23(1), 20–23. https://doi.org/10.1111/lcrp.12121
Vrij, A., Fisher, R. P., & Blank, H. (2017). A cognitive approach to lie detection: A meta-analysis. Legal and Criminological Psychology, 22(1), 1–21. https://doi.org/10.1111/lcrp.12088
Vrij, A., & Ganis, G. (2014). Theories in Deception and Lie Detection. In D. C. Raskin, C. R. Honts, & J. C. Kircher (Eds.), Credibility Assessment (pp. 301–374). Academic Press. https://doi.org/10.1016/b978-0-12-394433-7.00007-5
Vrij, A., & Granhag, P. A. (2012). Eliciting cues to deception and truth: What matters are the questions asked. Journal of Applied Research in Memory and Cognition, 1(2), 110–117. https://doi.org/10.1016/j.jarmac.2012.02.004
Vrij, A., Granhag, P. A., Mann, S., & Leal, S. (2011). Outsmarting the Liars: Toward a Cognitive Lie Detection Approach. Current Directions in Psychological Science, 20(1), 28–32. https://doi.org/10.1177/0963721410391245
Vrij, A., & Mann, S. (2001). Who killed my relative? Police officers’ ability to detect real-life high-stake lies. Psychology, Crime & Law, 7(2), 119–132. https://doi.org/10.1080/10683160108401791
Vrij, A., Mann, S. A., Fisher, R. P., Leal, S., Milne, R., & Bull, R. (2008). Increasing cognitive load to facilitate lie detection: The benefit of recalling an event in reverse order. Law and Human Behavior, 32(3), 253–265. https://doi.org/10.1007/s10979-007-9103-y
Vrij, A., Mann, S., Leal, S., & Granhag, P. A. (2010). Getting into the minds of pairs of liars and truth tellers: An examination of their strategies. The Open Criminology Journal, 3(1), 17–22. https://doi.org/10.2174/18749178010030200017
Walczyk, J. J., Roper, K. S., Seemann, E., & Humphrey, A. M. (2003). Cognitive mechanisms underlying lying to questions: Response time as a cue to deception. Applied Cognitive Psychology, 17(7), 755–774. https://doi.org/10.1002/acp.914
Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and Nonverbal Communication of Deception. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology (Vol. 14, pp. 1–59). Academic Press. https://doi.org/10.1016/s0065-2601(08)60369-x
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Supplementary Material