Over a hundred prior studies show that reward-related distractors capture attention. It is less clear, however, whether and when reward-related distractors affect performance on tasks that require cognitive control. In this experiment, we examined whether reward-related distractors impair performance during a demanding arithmetic task. Participants (N = 81) solved math problems, while they were exposed to task-irrelevant stimuli that were previously associated with monetary rewards (vs. not). Although we found some evidence for reward learning in the training phase, results from the test phase showed no evidence that reward-related distractors harm cognitive performance. This null effect was invariant across different versions of our task. We examined the results further with Bayesian analyses, which showed positive evidence for the null. Altogether, the present study showed that reward-related distractors did not harm performance on a mental arithmetic task. When considered together with previous studies, the present study suggests that the negative impact of reward-related distractors on cognitive control is not as straightforward as it may seem, and that more research is needed to clarify the circumstances under which reward-related distractors harm cognitive control.

The prospect of earning rewards, such as monetary rewards, has a profound tendency to boost people’s performance (Botvinick & Braver, 2015). Indeed, when rewards can be earned, people learn faster (Le Pelley, Mitchell, Beesley, George, & Wills, 2016), invest more effort (Wigfield & Eccles, 2000), and generally perform better on cognitive tasks (Padmala & Pessoa, 2011). At first sight, it thus seems that rewards consistently facilitate performance during cognitive tasks. Intriguingly, however, research has also identified some specific circumstances under which rewards harm, not help, performance. For instance, smelling some delicious food during work at the office means that rewards are near, but these rewards do not necessarily improve performance on the task one is currently working on. In this research, we will examine how reward cues that are not directly related to the current task (i.e., reward-related distractors) affect cognitive performance.

Prior research has extensively studied how reward-related distractors grab people’s attention during visual search (for reviews, see Anderson, 2016a; Failing & Theeuwes, 2017). Typically, in these studies (e.g., Anderson, Laurent, & Yantis, 2011), participants first go through a learning phase, in which they learn to associate some stimulus (e.g., a red circle) with earning money (e.g., 5 cents). Later, in a testing phase, participants perform a search task (i.e., they have to search for a target), while the previously rewarded stimulus (e.g., the red circle) re-appears as a task-irrelevant stimulus (i.e., as a distractor). These studies showed that distractors that were associated with high reward captured visual attention more than distractors associated with low (or no) reward, and thus slowed down search for the targets (e.g., Anderson et al., 2011; Bucker, Belopolsky, & Theeuwes, 2014; Le Pelley, Pearson, Griffiths, & Beesley, 2015). In other words, these studies support the idea that reward-related irrelevant cues can harm cognitive processes.

Although previous studies demonstrate that reward-related distractors have a strong impact on visual attention (e.g., where people direct their eye-movements), less is known about whether reward-related distractors have broader cognitive and behavioral consequences. That is, in daily life at work and school, most tasks require complex interactions with information, not merely a search for target stimuli. In these contexts, optimal performance, for instance giving correct answers during an exam, giving a clear presentation, or constructively contributing to a staff meeting, relies on cognitive control processes. Cognitive control refers to the maintenance and adaptive regulation of thoughts and behavior in pursuit of internally represented behavioral goals (Braver, 2012). If reward-related distractors disrupt control processes, beyond visual attention, then being exposed to these reward-related distractors may turn out to be especially harmful to outcomes in work and educational settings. So, in the current study, we investigated whether reward-related distractors indeed harm performance during a task that requires cognitive control (i.e., a mental arithmetic task).

On the one hand, one might suspect that although reward-related distractors disrupt early cognitive processes (e.g., attentional shifts that occur before 150 ms; Posner, Inhoff, Friedrich, & Cohen, 1987), they may not affect later processing stages (e.g., those that occur after 150 ms). Support for this idea comes from a study by Theeuwes and Belopolsky (2012), who investigated how reward-related distractors impact eye movements. In their experiment, participants had to move their gaze to a certain target as fast as possible (i.e., an oculomotor capture task). While doing so, stimuli appeared that were previously associated with reward, but were now unrelated to the task. Findings showed that participants’ gaze was rapidly captured by reward-related distractors (e.g., saccades toward reward-related distractors), but after this initial capture, people could readily disengage. The authors concluded that rewards seem to increase the salience of a cue, and therefore capture attention rapidly, but that rewards are less likely to influence processes after this attentional capture (see also Failing, Nissens, Pearson, Le Pelley, & Theeuwes, 2015; Le Pelley, Seabrooke, Kennedy, Pearson, & Most, 2017; Maclean & Giesbrecht, 2015). Along similar lines, a recent meta-analysis showed that positive emotional cues – regardless of task relevance – capture attention rapidly and have a stronger impact during early (rather than later) stages of processing (Pool, Brosch, Delplanque, & Sander, 2015). Thus, if reward cues have similar effects to positive emotional cues, one could argue that reward-related distractors impact performance on tasks in which performance mainly depends on early processes (e.g., visual search), but not on tasks that mainly require cognitive control processes (e.g., active maintenance of goal-relevant cues).

On the other hand, some studies do suggest that reward-related distractors do not just affect visual attentional capture (Anderson, Laurent, & Yantis, 2012; Failing & Theeuwes, 2015; MacLean, Diaz, & Giesbrecht, 2016; Munneke, Belopolsky, & Theeuwes, 2016; Munneke, Hoppenbrouwers, & Theeuwes, 2015; Wang, Duan, Theeuwes, & Zhou, 2014) and that they may impair cognitive control processes. Support for this latter idea comes from Krebs and colleagues (2010), who found that distractors associated with reward disrupted conflict processing in a Stroop task (see also: Krebs, Boehler, Appelbaum, & Woldorff, 2013; Krebs, Boehler, Egner, & Woldorff, 2011). Specifically, in this prior study, some ink colors (e.g., red) were associated with monetary reward. When these stimuli appeared as distractors (in this case, as the semantic meaning of the word; e.g., the word “red” presented in yellow), they slowed down people’s responses in the Stroop task, more so than distractors not associated with reward. Along similar lines, another study showed that distractors associated with strong emotions disrupt control processes during working memory maintenance (Dolcos & McCarthy, 2006). Taken together, it seems plausible that reward-related distractors first attract visual attention (Anderson et al., 2011; Le Pelley et al., 2015; Theeuwes & Belopolsky, 2012). Then, in turn, these distractors are more likely to permeate into working memory (Gong & Li, 2014; Klink, Jeurissen, Theeuwes, Denys, & Roelfsema, 2017), which eventually leaves people with less capacity to carry out task-relevant processes, leading to worse performance on the task.

To test the latter possibility, we previously carried out two experiments (reported in Rusz, Bijleveld, & Kompier, 2018). In these experiments, we tested whether reward-related distractors could harm performance on a demanding math task, and whether this putative distraction effect was moderated by people’s current motivational states. In these previous experiments, participants first learned to associate different colors with high vs. no monetary rewards. Later, in the test phase, participants had to solve math problems. While they worked on these problems, the colors associated with high vs. no reward reappeared as distractors. To examine the effect of current motivational states, on some trials during the math task, participants could earn money for responding accurately. Findings from these previous studies were inconclusive. In particular, the first experiment showed that reward-related distractors harmed performance regardless of motivational states, but we could not directly replicate this effect in the second experiment. So, if anything, these previous studies yielded weak evidence for reward-based distraction on math tasks.

A possible explanation for why these prior studies yielded only weak evidence is that we manipulated people’s current motivational state in these prior studies (i.e., participants could earn money on some trials). A side-effect of this manipulation may have been that it increased participants’ general motivation throughout the math task, which may have made distraction less likely altogether (Müller et al., 2007). Thus, in the present study, we further investigated whether reward-related distractors can harm people’s performance on a math task. However, this time, we took a step back and did not manipulate people’s current motivational state. Instead, we focused solely on testing the reward-based distraction effect.

In particular, we predicted that distractors that were previously associated with reward (vs. no reward) would harm cognitive performance. Support for this hypothesis would mean that reward-related distractors are capable of disrupting not just visual search (i.e., quick saccades away from goal-relevant information that are corrected immediately), but also subsequent cognitive processes, with potentially more severe consequences.

To test our hypothesis, we used our original paradigm (Rusz et al., 2018) – without the motivation manipulation in the test phase – which was originally designed based on research on value-driven attention (Anderson et al., 2011) and on research on distraction during math performance (Beilock et al., 2004; Beilock & Carr, 2005). First, in a training phase, participants learned to associate earning high vs. no monetary rewards with two different colors. Later, in a test phase, participants performed a mental arithmetic task (they had to add up four numbers), while the previously rewarded colors reappeared as distractors. So, participants had to prioritize the arithmetic task while ignoring distractors, even though these were sometimes associated with reward. As simple mental additions can take up to 800–900 ms (Ashcraft, 1992; Ashcraft & Battaglia, 1978), we expected that the rather fast presentation of the numbers (700 ms per number) would make it especially difficult to perform these mental additions while trying to ignore distractors. Therefore, we expected that distractors associated with reward (vs. no reward) would disrupt cognitive performance, which we operationalized as the percentage of accurate responses.

To explore the circumstances under which reward-based disruption – if we can detect it at all – is strongest, we designed three variations of our task described above. In these variations, we tested whether distractors associated with reward impede cognitive performance more when they were previously always associated with rewards (Experiment 1A) vs. when they were randomly associated with rewards on 80% of trials (Experiment 1B), or when they were located further away from task-relevant cues (Experiment 1C).

Preregistration and data availability

We preregistered our study on AsPredicted (https://aspredicted.org/3j7gw.pdf). Unless otherwise noted, inclusion criteria and statistical analyses were pre-registered. The experimental task, data, and analysis scripts can be found via https://osf.io/dwa89/. This research was approved by the Ethics Committee of the Social Science Faculty (ECSW2017-0805-50).

Participants

Participants were 90 students from Radboud University. We recruited fluent Dutch speakers, who were 25 years old or younger, had slept at least 6 hours the night before the experiment, and were not colorblind. After data collection, 2 participants nevertheless reported having slept less than 6 hours, so they were excluded from the final analysis. Moreover, we excluded 7 participants who performed below 60% accuracy. The final sample consisted of 81 students (27 participants per task variation; 59 females and 22 males; Mage = 22.35 years, SDage = 1.96). Participants received compensation in cash based on their performance (maximum €6).

Procedure

Participants were seated in a cubicle in front of a computer. First, participants gave their written informed consent to participate in the study. Next, they reported their demographics (age, sex), hours of sleep the night before, and their need for money on a 1 (not at all) to 7 (very much) scale (“To what extent are you in need for money at the moment?”). Afterwards, they carried out the task (see below). Then, they reported how motivated they were, and how demanding and difficult they felt the task was on a 1 (not at all) to 7 (very much) scale. Finally, they filled out the Dutch version of the Barratt Impulsiveness Scale (BIS-11; Patton, Stanford, & Barratt, 1995; see Table 1 for descriptive statistics of all subjective measures). We collected these subjective measures in order to explore whether current states and traits affect reward-based distraction (based on results of Anderson et al., 2011). The experiment took around 35–40 minutes to finish. At the end, participants were thanked, paid, and debriefed.

Table 1

Descriptive Statistics of Subjective Measures.

Subjective Measures      M      SD     Range
Sleep                    7.72   .73    3.5
Task demands             5.17   1.5
Task difficulty          4.06   1.66
Fatigue                  4.48   1.57
Motivation               5.35   1.18
BIS                      2.07   .32    1.47
Need for money           4.33   1.52

General overview of the task

The task consisted of two parts: a training and a testing phase. In the training phase (see Figure 1), participants first saw a fixation cross, followed by four number + letter combinations presented one after the other (e.g., 8W, X5, 9Y, and Z7; see Figure 1). Their task was to report whether the letters were in the correct alphabetical order (e.g., W, X, Y, Z). They responded by pressing “Q” for correct and “P” for incorrect trials. On half of the trials, one of the letters had a different color. If this letter was blue (or red, counterbalanced across all participants, N = 90), participants could earn a monetary reward (8 cents). If it was red (or blue, counterbalanced), participants could earn no monetary reward. On no-reward trials (e.g., red), responses were followed by visual feedback indicating “Good” or “False”. High-reward trials (e.g., blue) were additionally followed by reward feedback (+ 8 cents) and the total amount that had been earned during the task so far. Participants were not informed about the reward contingency beforehand. In total, participants completed 4 practice trials and 150 training trials.

Figure 1

Training phase. Example of (A) no reward and (B) high reward trials.

In the test phase (see Figure 2), participants saw a fixation cross, then four number + letter combinations presented one after the other (e.g., 8W, X5, 9Y, and Z7), and lastly, a two-digit number that differed by +/–1 from the sum of the presented numbers (e.g., 28). Their task was to add up the four numbers and report whether their sum (e.g., 8 + 5 + 9 + 7 = 29) was higher or lower than the number presented in the last display (e.g., 28). They responded by pressing “Q” for smaller and “P” for larger sums (29 is larger than 28, so the correct response would be “P”). As in the training phase, one letter was red on half of the trials and blue on the other half. In this phase, the letters served as distractors that were previously associated with monetary (vs. no monetary) rewards. Importantly, because the spatial location of the distractor (left vs. right) was randomized, participants could not predict the location of the distractor, which made distractors difficult to ignore. In total, participants completed 10 practice and 64 test trials.
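
To make the structure of a test-phase trial concrete, the sketch below illustrates the arithmetic logic in Python (this is our own illustration with hypothetical names, not the actual experiment code, which is available via the OSF page).

```python
import random

def make_test_trial():
    """Sketch of one test-phase trial: four addends shown one by one,
    then a probe that differs from the true sum by +/- 1."""
    addends = [random.randint(2, 9) for _ in range(4)]  # e.g., 8, 5, 9, 7
    true_sum = sum(addends)                             # e.g., 29
    probe = true_sum + random.choice([-1, 1])           # e.g., 28
    correct_key = "P" if true_sum > probe else "Q"      # "P" = sum is larger
    return addends, probe, correct_key

# Example: with addends 8, 5, 9, 7 the sum is 29; if the probe is 28,
# the correct response is "P".
print(make_test_trial())
```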

Figure 2

Testing phase. Example trial with a distractor associated with (A) no rewards and with (B) high rewards.

Task variations

As mentioned above, we designed three variations of the experiment (between-subject factor), to which we randomly assigned participants (N = 30 per variation). All three variations followed the general task description above, with only slight differences. In Experiment 1A, the reward contingency in the training phase was 100%. That is, high-reward distractors were previously associated with earning money on all trials. To test whether a variable ratio schedule (Thorndike, 1898) makes the hypothesized disruption effect stronger, Experiment 1B used an 80%–20% reward contingency. That is, high-reward distractors were previously associated with earning money on 80% of the trials (vs. all trials). Finally, Experiment 1C was identical to Experiment 1A with one difference: both in the training and test phases, target (e.g., 8) and distractor (e.g., W) were located spatially further away from each other (8.5 mm vs. 34.2 mm between target and distractor, on a 1680 × 1050 display). In this version, the goal was to test whether reward-based disruption is stronger when it takes longer to direct eye movements from the distractor (e.g., a red-colored W) to the target (e.g., 8). If eye movements are indeed rapidly captured by reward-related distractors, it should take longer to direct attention back to task-relevant cues (Henderson & Macquistan, 1993; Laberge & Brown, 1986; Shulman, Wilson, & Sheehy, 1985), which may result in even worse performance on the math task (for an alternative perspective, see Gaspar & McDonald, 2014; Hickey & Theeuwes, 2011).
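
Purely as an illustration of how the three variations differ (the contingencies and distances are those reported above; the parameter names and the assignment of the larger distance to Experiment 1C are ours), the between-subject manipulation can be summarized as a small configuration:

```python
import random

# Hypothetical summary of the three between-subject task variations.
TASK_VARIATIONS = {
    "1A": {"reward_contingency": 1.00, "target_distractor_mm": 8.5},   # deterministic reward
    "1B": {"reward_contingency": 0.80, "target_distractor_mm": 8.5},   # 80%-20% variable schedule
    "1C": {"reward_contingency": 1.00, "target_distractor_mm": 34.2},  # increased spatial distance
}

def reward_given(variation: str) -> bool:
    """On a trial with the high-reward color, draw whether reward is delivered."""
    return random.random() < TASK_VARIATIONS[variation]["reward_contingency"]
```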

Training phase analyses (not pre-registered)

We should note that we did not preregister these analyses; thus, these results should be interpreted with caution. As we had no a priori hypotheses about the learning phase, we mainly rely on effect sizes when we interpret the results (Forstmeier, Wagenmakers, & Parker, 2017; Greenland et al., 2016; Nosek, Ebersole, DeHaven, & Mellor, 2018). Additionally, the p-values that are related to exploratory analyses in this paper are not corrected for multiple comparisons.

In order to examine reward learning, we ran a GLM analysis with reward (high vs. no) and block (first vs. second) as independent variables, separately for accuracy and response time (RT) as dependent variables. Based on prior research (e.g., Anderson, 2016b; Sha & Jiang, 2016), the first goal of this analysis was to explore whether participants were more accurate and/or faster when a reward-predictive (vs. non-predictive) color appeared in the sequence. In line with analyses from previous studies (Anderson, Laurent, & Yantis, 2014; Bourgeois, Neveu, Bayle, & Vuilleumier, 2015; Failing & Theeuwes, 2015), the second goal was to test whether this difference changed over time. More specifically, if there is a difference in responses to reward-predictive vs. non-predictive trials, this difference should be especially pronounced by the end of the training phase (e.g., in the second block), when participants have had enough time to pick up on stimulus–reward associations. We ran our GLM first on the whole sample and then separately for each experiment. Results are summarized in Table 2 and Figure 3a–d.
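
As a minimal sketch of this analysis (assuming a long-format data frame with hypothetical column names; the analysis scripts on the OSF page are the authoritative version), a 2 × 2 repeated-measures ANOVA could be run as follows:

```python
import pandas as pd
import pingouin as pg

# Long-format training data, one row per participant x reward x block cell.
# Hypothetical columns: participant, reward ("high"/"no"),
# block ("first"/"second"), accuracy, rt.
train = pd.read_csv("training_phase.csv")

for dv in ["accuracy", "rt"]:
    # Two within-subject factors: reward and block (and their interaction).
    aov = pg.rm_anova(data=train, dv=dv, within=["reward", "block"],
                      subject="participant", detailed=True)
    print(f"\n{dv}\n{aov}")
```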

Table 2

Analyses of the learning phase.

                           Accuracy                  RT
Effect               dfs   F      p      ηp2         F      p      ηp2

All experiments
1. Reward            1,80  2.79   .098   .03         <1     .478   <.01
2. Block             1,80  2.55   .115   .03         2.98   .088   .04
3. Reward × Block    1,80  <1     .646   <.01        <1     .983   <.01

Experiment 1A
1. Reward            1,26  2.91   .099   .10         .22    .641   .01
2. Block             1,26  2.08   .160   .07         1.15   .293   .04
3. Reward × Block    1,26  2.47   .127   .09         3.10   .089   .101

Experiment 1B
1. Reward            1,26  .04    .527   .02         .02    .875   <.01
2. Block             1,26  1.49   .232   .05         9.27   .005   .26
3. Reward × Block    1,26  8.91   .006   .26         5.29   .029   .17

Experiment 1C
1. Reward            1,26  .33    .568   .01         .85    .364   .03
2. Block             1,26  .04    .842   <.01        .15    .701   .01
3. Reward × Block    1,26  <1     .957   <.01        .10    .753   <.01
Figure 3

Accuracy scores and response times on high and no reward predictive trials in the first and second block of the training phase, shown for (a) all experiments pooled, (b) Experiment 1A, (c) Experiment 1B, and (d) Experiment 1C.

Inspection of Table 2 shows that high (vs. no) reward predictive colors did not influence accuracy or RT when considering the pooled data from all experiments. When examining the experiments separately, however, we found that reward learning seemed more pronounced in Experiment 1A and Experiment 1B (80–20% random reward) compared to Experiment 1C (increased distance). In Experiment 1A, participants seemed to be more accurate on high-reward trials, though this was not accompanied by faster response times. In Experiment 1B, participants became faster in the second half of the experiment, particularly on high-reward trials, but they did not become more accurate. So, while there were indications of reward learning in both Experiments 1A and 1B, these indications were not entirely consistent.

We further explored whether task-related subjective measures (motivation, difficulty, demands), current states (need for money, fatigue), and trait impulsivity were related to performance in the training phase. We found that trait impulsivity was negatively related to accuracy scores on reward predictive trials (r = –.34, p = .002). It could be that for participants who score high on this trait, reward predictive colors triggered some impulsive responses that were less likely to be accurate. Furthermore, self-reported motivation was positively related to accuracy scores in both no-reward (r = .26, p = .017), and high-reward conditions (r = .37, p < .001). All other correlations were not significant (ps > .05).

Main analyses (pre-registered)

Responses that were more than three standard deviations faster or slower than the participant’s mean, as well as responses faster than 300 ms (which were considered guesses), were deleted; this resulted in the exclusion of 1.4% of trials. To test our hypothesis, we performed a GLM analysis with the reward value of the distractor (high vs. no) as a within-subject independent variable, and accuracy scores (percentage of correct responses) as the dependent variable. There was no significant difference in accuracy scores between high vs. no distractor reward trials, F(1,80) = 0.26, p = .608, ηp2 = .00. Thus, contrary to our hypothesis, we found no evidence that reward-related distractors impede cognitive performance.
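
For transparency, a minimal sketch of this trimming rule and the main contrast (with hypothetical column names; the preregistered scripts on the OSF page contain the actual analysis) could look as follows. With only two within-subject levels, a paired t-test is equivalent to the GLM reported above (F = t²).

```python
import pandas as pd
from scipy import stats

# Hypothetical trial-level test-phase data: participant, distractor
# ("high"/"no"), rt (ms), correct (0/1).
test = pd.read_csv("test_phase.csv")

# Trimming rule described above: drop trials more than 3 SDs from the
# participant's mean RT, and trials faster than 300 ms.
z = test.groupby("participant")["rt"].transform(lambda x: (x - x.mean()) / x.std())
trimmed = test[(z.abs() <= 3) & (test["rt"] >= 300)]

# Accuracy (% correct) per participant and distractor condition.
acc = (trimmed.groupby(["participant", "distractor"])["correct"]
              .mean().unstack() * 100)

# Paired contrast: high-reward vs. no-reward distractor trials.
t, p = stats.ttest_rel(acc["high"], acc["no"])
print(f"t = {t:.2f}, p = {p:.3f}")
```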

To test whether the different task variations had any effect on performance in high (vs. no) reward distractor trials (see Table 3 for descriptive statistics), we ran the same GLM as before, now adding task variation as a between-subject factor. Crucially, the distractor reward × task variation interaction was also not significant, F(2,78) = 0.15, p = .864, ηp2 = .00. To conclude, we found no evidence that reward-related distractors harm performance more when the reward contingency is random (no difference between Experiment 1A and Experiment 1B) or when the distractor is located further away from the target (Experiment 1C).

Table 3

Descriptive Statistics for Accuracy and Response Times in High vs. No Distractor trials in All Task Variations.

Accuracy (percentage of correct responses)

                                 Experiment 1A      Experiment 1B      Experiment 1C
Within-subject condition         M       SD         M       SD         M       SD
No reward-related distractor     81.2%   11%        84.4%   13.9%      82.3%   12.2%
Reward-related distractor        80.7%   12%        84.5%   12.6%      81.0%   10.9%

Response Times (ms)

                                 M       SD         M       SD         M       SD
No reward-related distractor     2141    1699       1559    920        2113    2108
Reward-related distractor        2141    1683       1571    1088       2210    2380

To further explore these null results, it is helpful to visually inspect individual accuracy scores in the high vs. no reward distractor conditions. Figure 4 shows each participant’s difference score (i.e., high-reward accuracy minus no-reward accuracy) across task variations. A difference score below 0 would mean that our manipulation worked, that is, that participants had lower accuracy (i.e., performed worse) in the high-reward distractor condition. Inspection of Figure 4 makes clear that participants did not systematically score below 0.

Figure 4

Difference scores for accuracy, for all task variations. Negative difference scores indicate that people performed worse during high-reward distractor trials. Small, blue dots reflect mean difference scores for individual participants. Large, black dots reflect mean difference scores for all participants. Error bars reflect 95% confidence intervals around the group means.

As a secondary analysis, we tested whether high (vs. no) reward-related distractors had an impact on people’s speed. To this end, we ran another GLM with distractor reward (high vs. no) as a within-subject independent variable and RT as the dependent variable. We found no significant difference in RTs between high vs. no distractor reward trials, F(1,80) = 0.90, p = .346, ηp2 = .01. The distractor reward × task variation interaction was not significant either, F(2,78) = 0.57, p = .563, ηp2 = .01. In sum, we found no evidence that reward-related distractors slow down people’s responses on a math task.

Bayesian analyses (not pre-registered)

To provide more conclusive evidence for this null effect, we tested our predictions with a Bayesian approach (Dienes, 2014) in JASP (JASP Team, 2018). A Bayesian paired-samples t-test with distractor reward (high vs. no) as independent and accuracy scores as dependent variable yielded a BF01 = 6.68, which is considered moderate evidence for the null hypothesis (see Figure 5A). Another Bayesian paired-samples t-test with distractor reward (high vs. no) as independent variable and RTs as dependent variable also showed moderate evidence for the null hypothesis, i.e., BF01 = 5.26 (see Figure 5B). In sum, these results strengthen findings from our preregistered analyses: they directly support the idea that reward-related distractors do not harm cognitive performance.
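
The same comparisons can be sketched outside JASP, for instance with the pingouin package (using its default Cauchy prior width, so the exact Bayes factors may differ slightly from the values reported above; acc is the hypothetical participant-by-condition accuracy table from the earlier sketch):

```python
import pingouin as pg

# Paired comparison: pingouin reports BF10 alongside the frequentist t-test.
res = pg.ttest(acc["high"], acc["no"], paired=True)
bf10 = float(res["BF10"].iloc[0])  # evidence for the alternative hypothesis
bf01 = 1.0 / bf10                  # evidence for the null (as reported above)
print(f"BF01 = {bf01:.2f}")
```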

Figure 5

Sequential Bayesian analysis, separately for (A) accuracy and (B) RT. The plot shows whether the accumulating observations (i.e., as N grows) favour the null hypothesis (BF01 > 1) or the alternative hypothesis (BF01 < 1). Inspection of Figure 5 suggests that there is moderate evidence for the null hypothesis, i.e., that reward-related distractors do not harm cognitive performance. As BF01 does not grow further towards the null hypothesis after N ≈ 30, evidence for the null hypothesis would already have been obtained after collecting approximately 30 participants in total.

To test whether the different task variations had an effect on performance in high (vs. no) reward distractor trials, we ran a Bayesian repeated-measures ANOVA with distractor reward and task variation as independent variables, separately for accuracy and RT as dependent variables. The distractor reward × task variation interaction yielded BF01 = 91.29 for accuracy and BF01 = 52.68 for RT. These results provide strong evidence against the idea that task variation moderated performance in high vs. no distractor reward trials.

Further exploratory analyses (not pre-registered)

We explored whether task-related measures (motivation, difficulty, demands), current states (need for money, fatigue), and trait impulsivity were related to the amount of reward-related capture (calculated as RT when a high-reward distractor was present minus RT when a no-reward distractor was present; based on Anderson et al., 2011). Perceived difficulty was positively related to the amount of reward-related capture (r = .25, p = .028). This implies that the more difficult participants felt the task was, the more they were distracted (i.e., in this case, slowed down) by high-reward distractors. Other correlations were not significant (ps > .05).

Finally, we further explored whether performance in the training phase was related to the distractor effects in the test phase. For this purpose, we computed a reward learning score for each participant (i.e., the difference between RTs on no vs. high reward trials in the first half of the training phase minus that same difference in the second half of the training phase). The higher this score, the larger the RT difference between no vs. high reward predictive trials in the second relative to the first half of the training phase and, thus, the more successful reward learning. This reward learning score was not related to the amount of reward-related capture (r = .06, p = .58); thus, we found no evidence for a relationship between reward learning and reward-based distraction.
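
A minimal sketch of these two difference scores and their correlation (with hypothetical column names for per-participant mean RTs; sign conventions follow the verbal description above) could look like this:

```python
import pandas as pd
from scipy import stats

# Hypothetical per-participant mean RTs: no/high reward trials in the first
# and second half of the training phase, and no/high reward distractor
# trials in the test phase.
rt = pd.read_csv("participant_mean_rts.csv")

# Reward learning score: (no - high) RT difference in the first half of the
# training phase minus the same difference in the second half.
learning = ((rt["train_no_first"] - rt["train_high_first"])
            - (rt["train_no_second"] - rt["train_high_second"]))

# Reward-related capture score in the test phase (cf. Anderson et al., 2011):
# RT with a high-reward distractor minus RT with a no-reward distractor.
capture = rt["test_high_distractor"] - rt["test_no_distractor"]

r, p = stats.pearsonr(learning, capture)
print(f"r = {r:.2f}, p = {p:.2f}")
```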

Note: in Experiment 2 of our previous paper, we found that reward-related distractors impaired performance only when they appeared early (vs. late) during the trial. To be consistent, we performed the same analysis (a GLM with distractor reward and distractor timing as within-subject independent variables and accuracy as the dependent variable) on the current data too. This analysis yielded no meaningful effect (F < 1). This means that, unlike in our previous study (Rusz et al., 2018), distractor timing did not moderate reward-based distraction.

The goal of the present research was to test whether reward-related distractors harm cognitive performance. Participants first learned to associate reward (vs. no reward) with different colors in the training phase, via either a deterministic (Experiments 1A and 1C) or a random reward schedule (Experiment 1B). Afterwards, they performed a math task, which involved demanding operations (e.g., the active maintenance and updating of task-relevant information), while they were exposed to reward-related, but task-irrelevant, cues. Task-irrelevant and task-relevant stimuli appeared spatially close together (Experiments 1A and 1B) or far apart (Experiment 1C).

Our main confirmatory test suggested that being exposed to reward-related distractors does not necessarily thwart cognitive performance on a mental arithmetic task. This finding is unexpected, given that a large body of prior work showed that reward-related distractors do capture visual attention (e.g., Anderson et al., 2011; Bucker & Theeuwes, 2017; Le Pelley et al., 2015). Furthermore, this finding is not fully in line with our own previous findings on math performance. In particular, in our previous study (Rusz et al., 2018), we found only weak evidence for reward-driven distraction; in the current study, we found evidence against reward-based distraction. Although the current study’s method was somewhat different from the studies in Rusz et al. (2018), the studies collectively suggest that reward-related distractors do not harm math performance. Below, we discuss three possible explanations for finding evidence against our hypothesized effect.

First, it is plausible that reward-related distractors simply have no influence on task performance that relies on cognitive control processes. As discussed in the introduction, the rapid visual capture effect of reward-related cues may dissipate in later processing stages. Supporting this possibility, Theeuwes and Belopolsky (2012) showed that distractors associated with reward rapidly captured visual attention (i.e., saccades), but did not affect eye movements after this capture. Along these lines, recent research showed that reward-related distractors are more likely to influence early (vs. late) stages of visual processing (Failing et al., 2015; Lee & Shomstein, 2014; Maclean & Giesbrecht, 2015). Corroborating this finding, Le Pelley, Seabrooke, Kennedy, Pearson, & Most (2017) found that the negative effect of reward-related distractors was rather short-lived on a temporal attention task (but see Failing & Theeuwes, 2015). Based on these studies, one could speculate that reward-related distractors impede early cognitive processes, leading to misguided saccades. However, such early “disruptions” may have only limited effects on later processing stages, because people may be able to quickly correct for their impact.

Indeed, results from Experiment 1C seem consistent with the possibility that reward-related distractors rapidly capture visual attention, but do not affect later cognitive operations. In this version of our task, we increased the distance between target and distractor, assuming that this would increase the chance for visual attentional capture by the distractor associated with reward – thus maximizing the chance for reward-driven disruption. Surprisingly, this manipulation had no effect on how accurate participants were in high vs. no distractor reward conditions. Nevertheless, participants were somewhat slower on trials where a high (2210 ms) vs. no (2113 ms) reward-related distractor appeared. This effect was small (d = .29) and non-significant, so it should be interpreted with extreme caution. Still, based on this finding, one might speculate that although people showed misguided saccades to reward-related cues, they might have been able to overcome this initial shift (within 700 ms) as they were still able to respond accurately on the math task.

As we did not measure people’s gaze during our study, we cannot confirm nor disconfirm the latter possibility, but we do feel this would be a promising future direction. Such a design (i.e., one that combines eye tracking measures with math performance) would help to better understand whether optimal performance on the math task is still possible, even if attention is initially rapidly captured by reward-related distractors. Specifically, it would be informative to see which aspects of gaze behavior (e.g., whether the first fixation is on the distractor vs. the target; how quickly people can disengage from the distractor; whether people can avoid fixating on the distractor at all) are most likely to impair performance (Failing et al., 2015; Le Pelley et al., 2015).

Second, it is possible that reward-related distractors do impair cognitive processes beyond visual capture, but that our task was not sensitive enough to detect such an effect. After inspecting the results, one may argue that our task was too easy; overall, participants were quite accurate (i.e., above 80%). Thus, even if participants were initially disrupted by reward-related distractors, they may have managed to protect their goals from these interfering stimuli: they could quickly (i.e., within 700 ms) correct for the disruption and apparently were still able to process the targets (e.g., 8) and to perform mental operations on these targets (e.g., 9 + 8). In line with this explanation, exploratory findings revealed that perceived task difficulty was related to susceptibility to reward-based distraction. Specifically, participants who thought that the task was more difficult were slowed down more by high reward-related distractors. This explanation fits with previous research showing that under high working memory load, people are more susceptible to distraction (for a review, see Lavie, 2010). Speculatively, future research could make the task more difficult (e.g., by increasing the pace), or take into account individual differences, such as working memory capacity (e.g., Anderson et al., 2011).

The potential lack of difficulty might also explain why our results are not in line with Krebs et al. (2011, 2010), who reported that irrelevant reward associations interfere with conflict processing. The Stroop task used in Krebs’ studies differs from the current task in two ways. First, in the Stroop task, distractor colors are always associated with a competing response, so in those studies, distractor effects may reflect response interference, not just attentional capture. In our task, such response competition was absent, as participants never had to respond to the colors themselves. Second, in the Stroop task, the concept of color is always task-relevant, which makes ink colors difficult to strategically ignore. In our task, it may have been easier for people to categorically ignore the color dimension during the test phase, thus preventing distraction before it began. The latter issue could be addressed in future research, e.g., by introducing mini-blocks of learning and testing phases (e.g., Lee & Shomstein, 2014). Such a design could prevent people from strategically focusing only on numbers, not colors, thus making distractors even harder to ignore.

Third, although we found some indications that participants learned reward associations in the training phase (especially in Experiment 1B), these stimulus–reward associations may have been too subtle to disrupt performance during the test phase. Indeed, our learning phase differed from most previous studies in this area in three ways. First, in previous studies, the learning phase often employed instrumental learning, in which action towards a certain cue (e.g., finding a red circle) was directly associated with earning money. In our study, however, reward learning was arguably less instrumental in nature. Specifically, while stimuli (e.g., a blue X) predicted the availability of reward contingent on people’s action (indicating the correct order of the letters), people were never required to approach this cue, or actively search for it. Second, our training task required more focused attention, and perhaps greater working memory capacity, than learning tasks in prior studies (which often used visual search procedures). This strong attentional focus may have caused people to readily suppress task-irrelevant dimensions (in this case, color), impairing acquisition of stimulus–reward associations. Third, our training phase was shorter than training phases in previous studies (e.g., 240 trials in Anderson et al., 2011), which may simply have given participants less time to learn reward associations. We should note, though, that there are existing studies that used training phases with less instrumental reward learning (Bucker & Theeuwes, 2016; Le Pelley et al., 2015; Pool et al., 2014), learning tasks requiring more focused attention (Anderson, 2016b; Mine & Saiki, 2015), and shorter training sessions (e.g., 144 trials in Sali et al., 2014); these prior studies have shown reward-based attentional capture in a test phase, suggesting that our learning phase should, in principle, be conducive to reward learning. We recommend that future studies use a more established reward learning procedure, to facilitate interpretation of results.

We further examined how the different variations of our task affected performance in high vs. no distractor reward conditions. First, we introduced a random reward schedule in Experiment 1B (Skinner & Ferster, 2015; Thorndike, 1898), but confirmatory results from Experiment 1B (80% random reward) were similar to those of Experiment 1A (100% reward). Second, we located the distractor further away from the target in Experiment 1C. In this version of the task, misguided saccades towards the distractor should have had the strongest detrimental effect on performance. Nevertheless, in contrast to mounting evidence from vision research (Anderson, 2016a; Failing & Theeuwes, 2017), we did not find evidence for such an effect. It could be that even if participants’ eye movements were initially captured by reward-related distractors, they could quickly disengage from these cues and still perform the math task well. To conclude, task variations did not influence the magnitude of disruption of performance by reward-related distractors.

Concluding remarks

In the current research, we found no evidence that reward-related distractors harm performance during a mental arithmetic task. In our view, this finding is interesting as it contradicts findings from visual search tasks (but see Sha & Jiang, 2016), from conflict processing tasks (Krebs et al., 2011, 2010), and from our previous research on mental arithmetic tasks (Rusz et al., 2018). All in all, our research suggests that reward-related distractors may not harm performance on all types of tasks that require cognitive control.

The preregistration, all the stimuli, presentation materials, participant data, and analysis scripts can be found on this paper’s project page on the Open Science Framework (https://osf.io/dwa89/).

The authors have no competing interests to declare.

  • Contributed to conception and design: DR, EB, MK

  • Contributed to acquisition of data: DR, EB

  • Contributed to analysis and interpretation of data: DR, EB

  • Drafted and/or revised the article: DR, EB, MK

  • Approved the submitted version for publication: DR, EB, MK

References

Anderson, B. A. (2016a). The attention habit: How reward learning shapes attentional selection. Annals of the New York Academy of Sciences, 1369(1), 24–39.
Anderson, B. A. (2016b). Value-driven attentional capture in the auditory domain. Attention, Perception, & Psychophysics, 78, 242–250.
Anderson, B. A., Laurent, P. A., & Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108, 10367–10371.
Anderson, B. A., Laurent, P. A., & Yantis, S. (2012). Generalization of value-based attentional priority. Visual Cognition, 20, 37–41.
Anderson, B. A., Laurent, P. A., & Yantis, S. (2014). Value-driven attentional priority signals in human basal ganglia and visual cortex. Brain Research, 1587(1), 88–96.
Ashcraft, M. H. (1992). Cognitive arithmetic: A review of data and theory. Cognition, 44(1–2), 75–106.
Ashcraft, M. H., & Battaglia, J. (1978). Cognitive arithmetic: Evidence for retrieval and decision processes in mental addition. Journal of Experimental Psychology: Human Learning & Memory, 4(5), 527–538.
Beilock, S. L., & Carr, T. H. (2005). When high-powered people fail: Working memory and “choking under pressure” in math. Psychological Science, 16(2), 101–105.
Beilock, S. L., Holt, L. E., Kulp, C. A., & Carr, T. H. (2004). More on the fragility of performance: Choking under pressure in mathematical problem solving. Journal of Experimental Psychology: General, 133(4), 584–600.
Botvinick, M., & Braver, T. (2015). Motivation and cognitive control: From behavior to neural mechanism. Annual Review of Psychology, 66, 83–113.
Bourgeois, A., Neveu, R., Bayle, D. J., & Vuilleumier, P. (2015). How does reward compete with goal-directed and stimulus-driven shifts of attention? Cognition and Emotion, 31, 1–10.
Braver, T. S. (2012). The variable nature of cognitive control: A dual-mechanisms framework. Trends in Cognitive Sciences, 16(2), 106–113.
Bucker, B., Belopolsky, A. V., & Theeuwes, J. (2014). Distractors that signal reward attract the eyes. Visual Cognition, 23(1–2), 1–24.
Bucker, B., & Theeuwes, J. (2016). Pavlovian reward learning underlies value driven attentional capture. Attention, Perception, & Psychophysics, 79(2), 1–14.
Bucker, B., & Theeuwes, J. (2017). Pavlovian reward learning underlies value driven attentional capture. Attention, Perception, & Psychophysics, 79(2), 415–428.
Dienes, Z. (2014). Using Bayes to get the most out of non-significant results. Frontiers in Psychology, 5, 1–17.
Dolcos, F., & McCarthy, G. (2006). Brain systems mediating cognitive interference by emotional distraction. Journal of Neuroscience, 26(7), 2072–2079.
Failing, M., Nissens, T., Pearson, D., Le Pelley, M. E., & Theeuwes, J. (2015). Oculomotor capture by stimuli that signal the availability of reward. Journal of Neurophysiology, 114(4), 2316–2327.
Failing, M., & Theeuwes, J. (2015). Nonspatial attentional capture by previously rewarded scene semantics. Visual Cognition, 23(1–2), 82–104.
Failing, M., & Theeuwes, J. (2017). Selection history: How reward modulates selectivity of visual attention. Psychonomic Bulletin and Review, 1–25.
Forstmeier, W., Wagenmakers, E. J., & Parker, T. H. (2017). Detecting and avoiding likely false-positive findings – a practical guide. Biological Reviews, 92(4), 1941–1968.
Gaspar, J. M., & McDonald, J. J. (2014). Suppression of salient objects prevents distraction in visual search. Journal of Neuroscience.
Gong, M., & Li, S. (2014). Learned reward association improves visual working memory. Journal of Experimental Psychology: Human Perception and Performance, 40(2), 841–856.
Greenland, S., Senn, S. J., Rothman, K. J., Carlin, J. B., Poole, C., Goodman, S. N., & Altman, D. G. (2016). Statistical tests, P values, confidence intervals, and power: A guide to misinterpretations. European Journal of Epidemiology, 31(4), 337–350.
Henderson, J. M., & Macquistan, A. D. (1993). The spatial distribution of attention following an exogenous cue. Perception & Psychophysics.
Hickey, C., & Theeuwes, J. (2011). Context and competition in the capture of visual attention. Attention, Perception, and Psychophysics.
JASP Team. (2018). JASP (Version 0.8.6.0) [Computer software]. Retrieved from http://jasp-stats.org.
Klink, P. C., Jeurissen, D., Theeuwes, J., Denys, D., & Roelfsema, P. R. (2017). Working memory accuracy for multiple targets is driven by reward expectation and stimulus contrast with different time-courses. Scientific Reports, 7.
Krebs, R. M., Boehler, C. N., Appelbaum, L. G., & Woldorff, M. G. (2013). Reward associations reduce behavioral interference by changing the temporal dynamics of conflict processing. PLOS ONE, 8(1).
Krebs, R. M., Boehler, C. N., Egner, T., & Woldorff, M. G. (2011). The neural underpinnings of how reward associations can both guide and misguide attention. The Journal of Neuroscience, 31(26), 9752–9759.
Krebs, R. M., Boehler, C. N., & Woldorff, M. G. (2010). The influence of reward associations on conflict processing in the Stroop task. Cognition, 117, 341–347.
Laberge, D., & Brown, V. (1986). Variations in size of the visual field in which targets are presented: An attentional range effect. Perception & Psychophysics.
Lavie, N. (2010). Attention, distraction, and cognitive control under load. Current Directions in Psychological Science, 19(3), 143–148.
Lee, J., & Shomstein, S. (2014). Reward-based transfer from bottom-up to top-down search tasks. Psychological Science, 25(2), 466–475.
Le Pelley, M., Mitchell, C. J., Beesley, T., George, D. N., & Wills, A. J. (2016). Attention and associative learning in humans: An integrative review. Psychological Bulletin, 142(10), 1111–1140.
Le Pelley, M., Pearson, D., Griffiths, O., & Beesley, T. (2015). When goals conflict with values: Counterproductive attentional and oculomotor capture by reward-related stimuli. Journal of Experimental Psychology: General, 144, 158–171.
Le Pelley, M., Seabrooke, T., Kennedy, B. L., Pearson, D., & Most, S. B. (2017). Miss it and miss out: Counterproductive nonspatial attentional capture by task-irrelevant, value-related stimuli. Attention, Perception, & Psychophysics, 79(6), 1628–1642.
MacLean, M. H., Diaz, G. K., & Giesbrecht, B. (2016). Irrelevant learned reward associations disrupt voluntary spatial attention. Attention, Perception, & Psychophysics, 78(7), 2241–2252.
MacLean, M. H., & Giesbrecht, B. (2015). Neural evidence reveals the rapid effects of reward history on selective attention. Brain Research, 1606, 86–94.
Mine, C., & Saiki, J. (2015). Task-irrelevant stimulus-reward association induces value-driven attentional capture. Attention, Perception, & Psychophysics, 77(6), 1896–1907.
Müller, J., Dreisbach, G., Goschke, T., Hensch, T., Lesch, K.-P., & Brocke, B. (2007). Dopamine and cognitive control: The prospect of monetary gains influences the balance between flexibility and stability in a set-shifting paradigm. European Journal of Neuroscience, 26, 3661–3668.
Munneke, J., Belopolsky, A. V., & Theeuwes, J. (2016). Distractors associated with reward break through the focus of attention. Attention, Perception, & Psychophysics, 78(7), 2213–2225.
Munneke, J., Hoppenbrouwers, S. S., & Theeuwes, J. (2015). Reward can modulate attentional capture, independent of top-down set. Attention, Perception & Psychophysics, 2540–2548.
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences.
Padmala, S., & Pessoa, L. (2011). Reward reduces conflict by enhancing attentional control and biasing visual cortical processing. Journal of Cognitive Neuroscience, 23(11), 3419–3432.
Patton, J. H., Stanford, M. S., & Barratt, E. S. (1995). Factor structure of the Barratt Impulsiveness Scale. Journal of Clinical Psychology, 51(6), 768–774.
Pool, E., Brosch, T., Delplanque, S., & Sander, D. (2015). Attentional bias for positive emotional stimuli: A meta-analytic investigation. Psychological Bulletin, 142, 79–106.
Pool, E., Delplanque, S., Porcherot, C., Jenkins, T., Cayeux, I., & Sander, D. (2014). Sweet reward increases implicit discrimination of similar odors. Frontiers in Behavioral Neuroscience, 8.
Posner, M., Inhoff, A. W., Friedrich, F. J., & Cohen, A. (1987). Isolating attentional systems: A cognitive-anatomical analysis. Psychobiology, 15(2), 107–121.
Rusz, D., Bijleveld, E., & Kompier, M. A. J. (2018). Reward-associated distractors can harm cognitive performance. PLoS ONE.
Sali, A. W., Anderson, B. A., & Yantis, S. (2014). The role of reward prediction in the control of attention. Journal of Experimental Psychology: Human Perception and Performance, 40(4), 1654–1664.
Sha, L. Z., & Jiang, Y. V. (2016). Components of reward-driven attentional capture. Attention, Perception, & Psychophysics, 78(2), 403–414.
Shulman, G. L., Wilson, J., & Sheehy, J. B. (1985). Spatial determinants of the distribution of attention. Perception & Psychophysics.
Skinner, B., & Ferster, C. (2015). Schedules of reinforcement. Cambridge: B. F. Skinner Foundation.
Theeuwes, J., & Belopolsky, A. V. (2012). Reward grabs the eye: Oculomotor capture by rewarding stimuli. Vision Research, 74, 80–85.
Thorndike, E. L. (1898). Animal intelligence: An experimental study of the associative processes in animals. Psychological Review, 2(4), 1–107.
Wang, L., Duan, Y., Theeuwes, J., & Zhou, X. (2014). Reward breaks through the inhibitory region around attentional focus. Journal of Vision, 14(12), 2.
Wigfield, A., & Eccles, J. (2000). Expectancy-value theory of achievement motivation. Contemporary Educational Psychology, 25(1), 68–81.

The author(s) of this paper chose the Open Review option, and the peer review comments are available at: http://doi.org/10.1525/collabra.169.pr

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See http://creativecommons.org/licenses/by/4.0/.