Previous studies have demonstrated that people trade off between expending motor effort and memorization effort when completing a copying task that requires looking in different directions to gather information. When information is easy to gather with minimal movement, people choose to look more frequently and rely less on memorization; however, when looking requires larger (and more effortful) shifts of gaze, they reduce motor effort and rely more on memory. This paper investigated the eye-head-body movements that guided looking, as well as participants’ trade-off between using motor effort and memory, in a copying task. We add to prior work by characterizing the coordination between eyes, head, and body required to look at targets at different angles. In the task, participants copied models onto a board while wearing a mobile eye tracker that measured eye rotations and motion sensors that measured head rotations. We manipulated the angle of the model relative to the participant and the memorization difficulty of the models. Results showed that eye movements contributed more to small gaze shifts, but the head and body contributed more as the angle of the target increased (requiring larger-amplitude shifts of gaze). Participants chose to look frequently when models were adjacent to the workspace but relied more on memory for models that required greater shifts of gaze. Memorization difficulty affected the trade-off such that participants used more memory when the models were easy to remember. We discuss how participants’ decisions to look versus remember may depend on the contribution of eyes, head, and body to gaze shifts of varying amplitudes.

Suppose you are cooking an unfamiliar meal and you need to follow the recipe. Will you turn to look at the recipe at each step to avoid taxing your memory, or will you memorize several steps during each look at the recipe to reduce the effort incurred by looking multiple times? It may depend on how close the recipe is to the counter. If you only need to glance to the side of the counter to see the recipe, you may choose to look more frequently. If you have to turn completely around to see the recipe pinned to the fridge, you may prefer to memorize more steps at a time.

This example illustrates that people trade off the use of motor and cognitive abilities depending on task conditions. Previous studies have shown that people’s trade-off between using more motor effort or more memory depends on the degree of effort required to turn the eyes, head, and body to gather information (Gray et al., 2006; Hardiess et al., 2011; Hayhoe, 2000; Inamdar & Pomplun, 2003; Lagers-Van Haselen et al., 2000), the accessibility and predictability of the to-be-gathered information (Ballard et al., 1992; Droll & Hayhoe, 2007; Hayhoe et al., 1998), the time constraint to complete the task (Howes et al., 2015), the cost of making errors (Howes et al., 2015; Patrick et al., 2015), and the exposure to prior training (Patrick et al., 2015). In this paper, we will focus on how motor effort and memorization difficulty affect the motor-memory trade-off and the coordination of eye, head, and body movements to visually explore in different directions.

In a foundational experiment, Ballard and colleagues devised a block-copying task to examine the motor-memory trade-off (Ballard et al., 1995). In the block-copying task, participants were asked to copy models consisting of eight blocks onto a nearby blank board (i.e. workspace). Results showed that participants picked up one block at a time, looked at the model where the block was located, and then placed the block on the workspace. They followed the same sequence—look, pick up, look, place—to copy the blocks. The finding suggested that participants were willing to expend motor effort to look at the model each time rather than rely on memory to copy the models. However, in another study Ballard et al. (1992) found that if the models were only present for 10 s, participants were actually able to remember the locations of 4 blocks at a time and copied them correctly from memory. But participants did not do so voluntarily when given a choice.

Recently, Draschkow et al. (2021) manipulated the motor effort required to visually inspect the model by placing models at 45°, 90°, and 135° around the participants, varying the angle through which they had to turn to look. They measured how many features (e.g. the identity and/or location of one or multiple blocks) participants remembered in one look. Results showed that participants only memorized 1–2 features each time they looked at the model, and the number of features memorized remained similar as the angle increased. That is, the increase in memorization in response to the increased motor effort was very small. People were reluctant to use memory even though doing so could improve efficiency by reducing the number of model looks (and thus the total amount of motor movement).

A limitation in past work was how motor effort was operationalized. In previous studies, motor effort was considered as either a binary (e.g. more or less) or linear variable (e.g. ranging from 45° to 90° and from 90° to 135°). But simply increasing the angle may not result in equal, linear change in motor effort. As we will review, the coordination of eyes, head, and body can be complex.

Indeed, the motor effort to look depends on how the eyes, head, and body are coordinated to shift gaze (Franchak, 2020; Franchak et al., 2021; Freedman & Sparks, 2000; Land, 2004; Sidenmark & Gellersen, 2020; Solomon et al., 2006). The oculomotor range of human eyes is ±55° horizontally (Guitton & Volle, 1987; Stahl, 2001), which means that people can only look at things within 55° to either side without also moving the head or body. But people do not tend to make eye movements near the oculomotor limit unless head movement is restrained, such as in laboratory situations. Instead, people move both the eyes and head (and even the body slightly) to make small-amplitude gaze shifts in everyday tasks (Einhäuser et al., 2007; Franchak et al., 2021; Hu et al., 2017; Li et al., 2018; Oommen et al., 2004; Stahl, 1999). But, as the gaze shift angle increases, the head and body make a larger contribution to the gaze shift (Franchak et al., 2021; Freedman, 2008; Hollands et al., 2004; Proudlock & Gottlob, 2007; Sidenmark & Gellersen, 2020).

How might changing contributions of the eyes versus head versus body alter the motor effort of a gaze shift? First, eye, head, and body movements have different biomechanical properties: eye movements are quick but span a smaller range of rotation, whereas head and body movements are slow but span a larger range of rotation (Anastasopoulos et al., 2008; Gadotti et al., 2016; Hardiess et al., 2008; Hollands et al., 2004; Land, 2004; Solomon et al., 2006; Stahl, 2001). Second, eye movements cost less energetically than head and body movements (Solman & Kingstone, 2014). It is possible that head and body movements, compared with eye movements, may strongly shift people’s willingness to use motor effort versus memory.

Thus, it was unclear in the previous studies how the manipulation of angle changed the motor effort (i.e. eye-head-body movements). As a result, in the current study we tracked the movements of eyes, head, and body in the block-copying task to observe the eye-head-body coordination to view models varying from 45° to 180°. We aimed to measure the actual movements of the eyes, head, and body when each gaze shift was made in a motor-memory trade-off task. Understanding the relative contributions of low-effort effectors (eyes) versus higher-effort effectors (head and body) will help explain why effort varies between different angles of rotation, and may provide insight about how participants select how often to make gaze shifts in different circumstances.

In addition to motor effort, some previous studies manipulated memorization difficulty by designing different patterns of models to test its effect on the motor-memory trade-off (Hardiess et al., 2011; Lagers-Van Haselen et al., 2000), with mixed findings. In Lagers-Van Haselen et al. (2000), two types of models were generated: one type was organized into a single configuration, whereas the other formed two configurations (with space in between). Both types were composed of 6 blocks of 3 different colors. No effect of memorization difficulty was found in their study. However, in Hardiess et al. (2011), the “easy” and “difficult” models (both composed of 6 blocks of 6 different colors) differed in how adjacent blocks were connected: adjacent blocks shared a full edge in the easy models, but only a half edge or a corner point in the difficult models. They found a significant main effect of memorization difficulty on the motor-memory trade-off: the more difficult the model, the more motor effort participants spent looking at it.

Is it possible that the mixed results arose because the models were too difficult, so that participants minimized the use of memory even for the “easy” models? Thus, in the current study, we enlarged the difference between the easy and difficult models by making the easy model easier to remember. Specifically, the difficult model was equivalent to the models used in the foundational study (Ballard et al., 1995), consisting of 8 blocks of various colors. The easy model contained 8 blocks of the same color. The same-colored models were easy because participants only needed to remember the locations of the blocks, whereas the multi-colored models were difficult because both the color and location of the blocks had to be remembered. With decreased memorization difficulty in the easy model, participants might adopt a different trade-off by using more memory and reducing the number of times they looked at the model. Crossing memorization difficulty with the angle of the model allowed us to test whether memorization difficulty interacted with motor effort, or whether the two had independent effects on visual exploration.

Although motor effort has been identified as a factor that affects the motor-memory trade-off, little is known about the underlying eye-head-body movements that guide looking. In the current study, we first examined how motor effort and memorization difficulty affected people’s willingness to use more movement or memory in the block-copying task (Ballard et al., 1995). Second, we investigated how the eyes, head, and body coordinated to visually explore models at different rotations.

In the task, we manipulated two independent variables: 1) motor effort, by placing the models at 4 different angles on each side of the participants (45°, 90°, 135°, and 180°); and 2) memorization difficulty, by varying whether the model to be copied contained a set of blocks of one color or multiple colors. The two independent variables were fully crossed. We aimed to investigate how varying motor effort and memorization difficulty affected participants’ trade-off between motor and memory use.

To answer the first question, on how motor effort and memorization difficulty affect the motor-memory trade-off, a head-mounted eye tracker worn by the participants determined when participants looked at the models, allowing us to calculate two dependent variables. The first was how many times participants looked at the models in a trial (number of looks), which indexed the motor effort spent to view the model: a greater number of looks meant that participants moved their eyes, head, and/or body more frequently. The second was the mean duration of a model look in a trial (look duration): longer looks meant that participants spent more time memorizing the model during each look. We hypothesized that as the angle of the models increased, participants would move their eyes, head, and body less frequently to look at the models, but the duration of each look would increase as they needed to encode the models in memory. However, when models were more difficult to memorize, participants might spend less time memorizing the models each time, instead choosing to look at the models more frequently to acquire the information needed to copy.

To answer the second question, on eye-head-body coordination, we measured eye, head, and body rotations when participants looked at models at different angles. Specifically, we calculated the eye rotation within the head during model looks using the head-mounted eye tracking data. Two inertial measurement units (IMUs) were placed on the participants’ forehead and the base of the neck to measure the head rotation relative to the body during model looks. To estimate the body rotation relative to space, we subtracted the sum of the eye and head rotations from the angle of the model. Knowing the eye, head, and body rotations, we examined the absolute degrees the eyes, head, and body moved to look at the models and their relative proportions in accomplishing the rotation. We hypothesized that as the angle of the models increased, the eye, head, and body rotations would all increase, but at different rates: with each 45° increase in model angle, body rotation would increase the most, head rotation less, and eye rotation the least. That is, body movements would account for an ever-larger proportion of the rotation as participants needed to look at models at larger angles.

Participants and Design

Twenty-eight college students (14 males, 14 females, M age = 19.7 years, SD = 1.91) participated in the study. All participants had normal or corrected-to-normal vision and were not color blind. One participant was left-hand dominant and the remaining participants were right-hand dominant. They were recruited from the introductory psychology class at the University of California, Riverside and received course credit for their participation. Participants described their ethnicity as Hispanic/Latinx (14) or not Hispanic/Latinx (13); one participant chose not to answer. Participants described their race as Black (2), White (4), More than One Race (4), Asian (11), Pacific Islander (1), and Other (4); two chose not to answer. Four other participants were run but excluded from the study due to technical problems affecting the eye tracking and/or motion tracking (e.g. poor pupil detection, motion sensors stopped recording). A sensitivity analysis indicated that the final sample size had 95% statistical power to detect a medium effect size of d = .52.

This study used a within-subject design where each participant completed 16 trials of the model-copying task in a 60-min session. Two within-subject factors were the angle of the model (45°, 90°, 135°, or 180°) and the difficulty of the patterns (same-colored or multi-colored). Four random orders of the trials (2 instances of each angle-difficulty pairing) were generated and alternately used among participants.
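As a concrete illustration, the sketch below shows how such randomized orders could be generated in R. It is a minimal sketch under our reading of the design (the 16 trials crossing angle, side, and difficulty; see Model-Copying Task); the study’s actual randomization script is not part of the paper.

```r
# A minimal sketch of generating one randomized trial order (assumed
# procedure; the study's actual randomization script is not published here).
set.seed(1)  # arbitrary seed so the example is reproducible

conditions <- expand.grid(
  angle      = c(45, 90, 135, 180),
  side       = c("left", "right"),
  difficulty = c("easy", "difficult")
)
stopifnot(nrow(conditions) == 16)  # 4 angles x 2 sides x 2 difficulties

# Shuffle the rows to produce one of the four random orders
one_order <- conditions[sample(nrow(conditions)), ]
head(one_order)
```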

Apparatus

The experimental setup is shown in Figure 1. Participants stood in front of a music stand that held the workspace: an empty 4 × 4 grid with 8 magnets provided for participants to copy the model. The magnets were placed on the side of the grid corresponding to the participant’s dominant hand. The model was displayed on another music stand positioned at an angle of 45°, 90°, 135°, or 180° to the left or right side of the participant. The distance from the participant to the workspace was 35.56 cm (line ‘a’ in Figure 1) and to the model was 66.04 cm (line ‘b’). The heights of the workspace and model were adjusted to the participant’s chest level. The models had two levels of difficulty: the easy models were composed of 8 magnets of the same color (green, blue, pink, or yellow), and the difficult models were composed of 8 magnets of 4 different colors, with 2 magnets of each color. Two wall-mounted security cameras (on the participants’ left and right sides) recorded third-person-view videos.

Figure 1.
An example trial in which the multi-colored model was positioned at 90° on the right side.

To capture eye movements, participants wore a Positive Science head-mounted eye tracker (Figure 2). The eye tracker has two cameras: a scene camera positioned over participants’ right eye recording the participants’ first-person field of view, and an eye camera that points to the participants’ right eye and records the eye movements. The two cameras are mounted on a lightweight eyeglass frame. The videos recorded by the two cameras (sampling rate 30 Hz) were streamed to a recording device attached to the shoulder band worn by participants.

Figure 2.
Participants wore a head-mounted eye tracker, which has an eye camera and a scene camera, and two IMUs placed at the forehead and the base of the neck were attached to a headband and a shoulder band.

To capture head movements, participants wore two wireless STT Systems IWS inertial measurement units (IMUs) (dimensions: 56 × 38 × 18 mm, 46 g): one on the forehead (attached to a headband), the other on the neck (attached to a shoulder band) at the C7 vertebra (Figure 2). The IMUs streamed acceleration and gyroscope data at a rate of 400 Hz over WiFi to the STT software installed on a laptop, which recorded the data.

Procedure

In the 1-hour session, participants completed the model-copying task, before and after which they completed brief calibrations of the eye tracker and IMUs.

Calibration and Synchronization for the Eye Tracker and IMUs

After giving consent, participants put on a shoulder band, a head band, and the eye tracker. The experimenter adjusted the cameras of the eye tracker to ensure the scene camera would capture the participant’s first-person field of view and the eye camera would capture the pupil and corneal reflection no matter how participants moved their eyes. The experimenter attached the IMUs to the headband and shoulder band worn by the participants and ensured they were at the correct positions and orientation.

To calibrate the eye tracker, participants stood 35 cm away from a cardboard poster (76 cm × 51 cm) placed on the music stand. Nine targets (2.5 × 2.5 cm) were drawn on the board: 4 at the corners, 4 at the midpoints of the edges, and 1 at the center. Participants were asked to move their eyes to look at specific targets while keeping their head still. The eye tracker measured the rotations of the eye while participants looked at different targets in the field of view and determined the gaze direction in pixels relative to the scene camera video. The same calibration procedure was repeated in the middle and at the end of the model-copying task to ensure the accuracy of eye tracking throughout the session.
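Conceptually, this calibration amounts to fitting a mapping from pupil position in the eye camera to gaze position in the scene camera using the nine known targets. The sketch below illustrates that general idea with a second-order polynomial regression on simulated data; it is not the Yarbus software’s actual algorithm, and all values are invented.

```r
# Illustrative 9-point calibration: regress scene-camera gaze position on
# pupil position (eye camera). Simulated data; not the Yarbus algorithm.
set.seed(2)
calib <- expand.grid(pupil_x = c(-1, 0, 1), pupil_y = c(-1, 0, 1))

# Simulated "true" scene positions with mild nonlinearity plus noise
calib$scene_x <- 320 + 250 * calib$pupil_x + 15 * calib$pupil_x^2 + rnorm(9, sd = 2)
calib$scene_y <- 240 + 200 * calib$pupil_y + 10 * calib$pupil_y^2 + rnorm(9, sd = 2)

# Separate polynomial fits for the horizontal and vertical gaze coordinates
fit_x <- lm(scene_x ~ pupil_x + I(pupil_x^2) + pupil_y, data = calib)
fit_y <- lm(scene_y ~ pupil_y + I(pupil_y^2) + pupil_x, data = calib)

# Map a new pupil sample to scene-video pixel coordinates
predict(fit_x, newdata = data.frame(pupil_x = 0.4, pupil_y = -0.2))
```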

To calibrate the IMUs and synchronize them with the eye tracking data for subsequent analyses, participants were asked to look forward, hold the head still with the chin parallel to the floor, and then quickly turn the head to the left and right. These motions created identifiable moments in the IMU data that could be synchronized with the eye tracking data. The experimenter checked the visualization of the IMU data in the software to ensure it showed the correct motions.

Model-Copying Task

Participants completed two practice trials and 16 experimental trials of the model-copying task. Sixteen different configurations of the model patterns (8 same-colored and 8 multi-colored) were randomly created by a computer script. One same-colored and one multi-colored pattern were randomly assigned to each model position (4 angles × 2 sides). In each trial, participants were required to copy the model correctly while standing at a designated spot on the floor (incorrectly copied trials were excluded from the data analyses). They were also required to pick up one magnet at a time with the dominant hand. They faced the workspace at the beginning of each trial but were allowed to freely turn their head, body, and feet to look at the model as many times as they needed while copying. They were reminded to close their eyes while the experimenters changed the model between trials.
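The generation script itself is not included in the paper; the sketch below shows one way a pattern matching its description could be created. The color set for the difficult models is an assumption (the paper lists the palette only for the easy models).

```r
# A minimal sketch of randomly generating one model pattern. Cells of the
# 4 x 4 grid are numbered 1-16; the difficult-model palette is assumed to
# match the easy models' colors.
make_model <- function(difficulty = c("easy", "difficult")) {
  difficulty <- match.arg(difficulty)
  palette <- c("green", "blue", "pink", "yellow")
  cells <- sample(16, 8)  # place 8 magnets in distinct cells
  colors <- if (difficulty == "easy") {
    rep(sample(palette, 1), 8)      # all 8 magnets share one color
  } else {
    sample(rep(palette, each = 2))  # 4 colors with 2 magnets each
  }
  data.frame(row = (cells - 1) %/% 4 + 1,
             col = (cells - 1) %% 4 + 1,
             color = colors)
}

make_model("difficult")
```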

Data Processing and Analyses

First, the eye rotation in each frame of the scene videos (in units of pixels) was calculated from the eye and scene videos recorded by the eye tracker using the software Yarbus (Positive Science). The time series of horizontal eye rotation in pixels was then converted to degrees based on the camera’s horizontal field of view and lens correction, as in Franchak et al. (2021).
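As a rough illustration of the pixel-to-degree step, the sketch below applies a pinhole-camera model: a pixel offset from the image center maps to an angle through the focal length implied by the horizontal field of view. It omits the lens correction the actual pipeline applied, and the resolution and field of view are placeholders rather than the tracker’s specifications.

```r
# Convert horizontal gaze position in scene-video pixels to degrees under a
# pinhole-camera model (no lens correction; placeholder camera parameters).
px_to_deg <- function(gaze_px, width_px = 640, fov_deg = 90) {
  offset_px <- gaze_px - width_px / 2                     # offset from image center
  f_px <- (width_px / 2) / tan((fov_deg / 2) * pi / 180)  # focal length in pixels
  atan2(offset_px, f_px) * 180 / pi                       # angle from optical axis
}

px_to_deg(c(0, 320, 640))  # left edge, center, right edge: -45, 0, 45
```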

Second, the acceleration and gyroscope data recorded by the IMUs were used to calculate the time series of horizontal head rotation in degrees relative to the base of the neck by the software iSen (STT Systems).

Third, to synchronize the eye tracking and IMU data, the experimenter identified the sharp head turns participants made during calibration in the time series of IMU data and the corresponding moments in the scene videos (quick shifts of the field of view). The experimenter recorded the times of the head turns and the corresponding frame numbers in the videos, and a Matlab script then converted the time series to units of frames. Figure 3 shows the synchronized eye and head rotation data alongside the eye and scene videos.
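A minimal sketch of this landmark-based alignment appears below. The study used a Matlab script; the version here is in R, with assumed variable names: imu_t0 is the head-turn time on the IMU clock and frame0 is the matching frame number in the 30 Hz video.

```r
# Resample 400 Hz IMU yaw at the times of the 30 Hz video frames, using one
# shared landmark (a sharp head turn) to align the two clocks.
sync_imu_to_frames <- function(imu_time_s, imu_yaw_deg, imu_t0, frame0,
                               n_frames, fps = 30) {
  # Express every video frame as a time on the IMU clock
  frame_times <- imu_t0 + (seq_len(n_frames) - frame0) / fps
  # Linear interpolation of the IMU series at those frame times
  approx(x = imu_time_s, y = imu_yaw_deg, xout = frame_times)$y
}

# Example with fake data: 10 s of IMU yaw, landmark at 2.5 s / frame 75
imu_time <- seq(0, 10, by = 1 / 400)
imu_yaw  <- 30 * sin(imu_time)
yaw_per_frame <- sync_imu_to_frames(imu_time, imu_yaw,
                                    imu_t0 = 2.5, frame0 = 75, n_frames = 300)
```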

Figure 3.
The eye tracker videos were synchronized with the time series of horizontal eye and head rotations.

Left: One frame taken from the eye tracker videos (the eye video is superimposed over the scene video in the top left corner). The bulls-eye shows the participant’s gaze landing on the model, which was positioned at 90° to the participant’s right-hand side. Right: The time series of eye (upper) and head (lower) rotations in degrees, with positive values on the y-axis indicating rotations to the observer’s right. The x-axis is time, with positive values indicating the future and negative values indicating the past. The red vertical lines mark the current time (0), which corresponds to the video frame shown on the left.

After synchronization, two independent coders tagged each instance of gazing at the model in every frame of the scene videos; inter-rater reliability was 95.4%. A model look was defined as the participant’s gaze falling within the area of the model in the field of view for at least 2 frames, allowing 1 frame off or a blink in between (Figure 3). Once the model looks were defined, we calculated five outcome variables for each trial (a computational sketch follows the list):

  1. Number of looks: the total number of model looks in a trial. The larger the number of looks, the more frequently the participants moved their eyes, head, and body to look at the model (expending greater motor effort).

  2. Look duration: the mean duration of the model looks in a trial, computed in frames and converted to seconds. Longer average look durations suggest that participants spent more time memorizing the models.

  3. Eye rotation: the mean of the horizontal eye rotation during model looks in a trial. Note that the absolute values of the eye rotation were used in the calculation to prevent rotations of opposite sign (positive to the right, negative to the left) from cancelling out.

  4. Head rotation: the mean of the horizontal head rotation during model looks in a trial. Similarly, the absolute values of the head rotation were used in the calculation.

  5. Body rotation: estimated by subtracting the sum of the eye and head rotations from the model angle for the trial and then taking the absolute value.
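The sketch below shows how the look-based outcomes could be computed from a per-frame coding of whether gaze fell on the model, together with the body-rotation estimate from item 5. It is a minimal sketch with illustrative variable names; the published analysis scripts are available on the OSF page listed at the end of the paper.

```r
# Detect model looks from a per-frame gaze coding (30 Hz video): a look is a
# run of on-model frames lasting >= 2 frames, with single-frame gaps bridged.
detect_looks <- function(on_model, fps = 30) {
  x <- as.integer(on_model)
  # Bridge single-frame gaps (one frame off or a blink within a look)
  gaps <- which(x == 0)
  gaps <- gaps[gaps > 1 & gaps < length(x) & x[gaps - 1] == 1 & x[gaps + 1] == 1]
  x[gaps] <- 1
  runs <- rle(x)
  keep <- runs$values == 1 & runs$lengths >= 2
  data.frame(n_looks = sum(keep),                              # outcome 1
             mean_look_dur_s = mean(runs$lengths[keep]) / fps) # outcome 2
}

# Body rotation (outcome 5): model angle minus the summed eye and head
# rotations, then the absolute value, following the description above
body_rotation <- function(model_angle_deg, eye_deg, head_deg) {
  abs(model_angle_deg - (eye_deg + head_deg))
}

# Two looks: a 3-frame look, then a 7-frame look (the 1-frame gap is
# bridged; the 2-frame gap separates looks)
detect_looks(c(0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0))
```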

For statistical analyses, linear mixed-effects models (LMMs) were first applied to test the effects of model side (2 levels: left and right), angle (4 levels: 45°, 90°, 135°, 180°), and difficulty (2 levels: easy and difficult) on the number of looks and look duration using the lme4 package (Bates et al., 2015) in R (R Core Team, 2019). Side, angle, and difficulty were included as fixed effects and participant as a random effect (random-slope models failed to converge). Results showed no interactions between side and angle or difficulty. More importantly, we were mainly interested in how angle and difficulty affected the motor-memory trade-off and eye-head-body movements. Therefore, we collapsed across side and tested only the effects of angle and difficulty in the analyses reported below.
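A minimal sketch of this first-stage model in R is below, using simulated data in the assumed trial-level format (column names are illustrative, not verified against the OSF files).

```r
library(lme4)
library(lmerTest)  # lmer() then gains F tests with Satterthwaite df

# Fake trial-level data in the assumed format (28 participants x 16 trials)
trials <- expand.grid(participant = factor(1:28),
                      angle       = factor(c(45, 90, 135, 180)),
                      side        = factor(c("left", "right")),
                      difficulty  = factor(c("easy", "difficult")))
set.seed(3)
trials$n_looks <- rpois(nrow(trials), lambda = 5)

# Full model with side; in the study, side did not interact with angle or
# difficulty, so side was collapsed for the reported analyses
# (fits on these fake data may be singular; real data will differ)
m_full <- lmer(n_looks ~ side * angle * difficulty + (1 | participant),
               data = trials)
anova(m_full)

m_looks <- lmer(n_looks ~ angle * difficulty + (1 | participant),
                data = trials)
anova(m_looks)
```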

To understand the effects of angle and difficulty on the eye-head-body coordination, two 3-way LMMs predicting the rotation (in degrees) and proportion separately were calculated with angle, difficulty, and segment (eye, head, and body) as the independent variables. For all analyses, ANOVAs were used to test significance of main effects and interactions from the LMMs using the lmerTest package in R (Kuznetsova et al., 2017). Degrees of freedom were determined by the Satterthwaite approximation (Luke, 2016; Satterthwaite, 1941). Follow-up pairwise comparisons used the Holm-Bonferroni correction to adjust for multiple comparisons.
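A companion sketch for the 3-way segment models and follow-up comparisons appears below. The paper does not name its pairwise-comparison tooling; the emmeans package is one standard option and is our assumption here. Data are assumed to be in long format, one row per trial × segment.

```r
library(lme4)
library(lmerTest)  # Satterthwaite-based tests
library(emmeans)   # assumed tooling for the pairwise comparisons

# Fake long-format data: one row per trial x segment (eye, head, body)
set.seed(4)
rotations <- expand.grid(participant = factor(1:28),
                         angle       = factor(c(45, 90, 135, 180)),
                         difficulty  = factor(c("easy", "difficult")),
                         segment     = factor(c("eye", "head", "body")))
rotations$rotation <- rnorm(nrow(rotations), mean = 30, sd = 10)

m_rot <- lmer(rotation ~ angle * difficulty * segment + (1 | participant),
              data = rotations)
anova(m_rot)  # main effects and interactions, Satterthwaite df

# Compare segments within each angle (cf. Table 4), Holm-adjusted
pairs(emmeans(m_rot, ~ segment | angle), adjust = "holm")
```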

Motor Effort and Memorization Difficulty Affected Motor-Memory Trade-Off

The number of looks decreased as the angle of the model became larger (Figure 4A). That is, the participants moved their eyes, head, and body less frequently to look at the model if the models were positioned at larger angles that required greater rotations. Instead, they took longer to look at and memorize the models (Figure 4B). However, as the model became difficult to remember, participants relied on more frequent eye-head-body movements to gather information about the models rather than using memory. Descriptive statistics are shown in Table 2.

Figure 4.
The number of looks (A) and mean look duration (B) by angle and difficulty.

The results of the LMM predicting the number of looks showed significant main effects of angle and difficulty and a significant interaction (Table 1). To follow up on the angle × difficulty interaction, pairwise comparisons between adjacent angles within each difficulty condition showed that the number of looks at 45° was significantly greater than at 90° for both the easy (t(188) = 6.104, p < .001) and difficult levels (t(188) = 10.300, p < .001). The larger drop in the number of looks from 45° to 90° at the difficult level compared to the easy level produced the angle × difficulty interaction. There was no difference in the number of looks between 90° and 135° or between 135° and 180° for either difficulty level. Nevertheless, participants looked at the model more frequently in the difficult condition at every angle.

Table 1.
Summary of LMM results predicting the number of looks and look duration from angle and difficulty.

                     Number of Looks          Look Duration
                     F        p               F       p
Angle                77.38    <.001 ***       35.09   <.001 ***
Difficulty           145.01   <.001 ***       22.91   <.001 ***
Angle × Difficulty   3.35     .020 *          .59     .620 n.s.

*p < .05, **p < .01, ***p < .001

Table 2.
Descriptive statistics (M and SD) for the number of looks and look duration (in seconds) by angle and difficulty.

        Number of Looks              Look Duration
        Easy          Difficult      Easy          Difficult
45°     6.00 (1.86)   8.84 (2.78)    .75 (0.43)    .61 (0.35)
90°     4.00 (1.67)   5.46 (1.69)    1.13 (0.67)   .87 (0.54)
135°    3.71 (1.46)   5.61 (2.04)    1.25 (0.88)   1.03 (0.53)
180°    3.32 (1.18)   5.14 (1.34)    1.57 (1.12)   1.23 (0.60)

The results of the LMM predicting mean look duration showed only significant main effects of angle and difficulty (Table 1), indicating that participants looked at the models for longer periods of time when the models were at larger angles and when models were easy to remember. The descriptive statistics are shown in Table 2. Follow-up pairwise comparisons between adjacent angles showed an increase in look duration from 45° to 90° (t(188) = -4.538, p < .001), from 90° to 135° (t(188) = -1.997, p = .047), and from 135° to 180° (t(188) = -3.533, p = .001). Thus, unlike the number of looks, which changed from 45° to 90° but not beyond 90°, look duration increased progressively with each 45° increase in model angle.

Eye-Head-Body Coordination Differed Across Angles

As Figure 5A shows, the eye, head, and body rotations all increased as the angle of the models increased, but each segment (eyes, head, or body) increased at a different rate. Body rotation increased the most, and sharply; head rotation first increased quickly and then slowed; eye rotation remained nearly constant across the 4 angles. Proportionally, the contribution of eye movement to the gaze shifts decreased as the angle increased (Figure 5B). The contribution of head movement first increased and then slightly decreased, whereas the contribution of body movement kept rising. This implies that participants mainly moved their eyes to look at models at small angles; as larger movements were required, they relied more on head and body movements.

Figure 5.
A: The average eye, head, and body rotations at different angles. B: The proportion that the eye, head, and body movements contributed to the gaze shifts as the angle increased.

Note that the proportions were calculated at the trial level, so the sum of mean proportions is not equal to one.

The results of the LMMs (Table 3) predicting the rotation showed main effects of angle (4 levels: 45°, 90°, 135°, 180°) and segment (3 levels: eye, head, body) and an interaction between angle and segment. To follow up on the interaction (Table 4), pairwise comparisons showed: at 45°, the eye rotation was greater than the head (t(618) = 4.833, p < .001) and body rotations (t(618) = 4.700, p < .001) but there was no difference between the head and body rotations (t(618) = -.133, p = .894); at 90° the eye rotation was slightly smaller than the head rotation (t(618) = -2.618, p = .027), but there was no difference between the eye and body rotations (t(618) = -1.027, p = .305) or between the head and body rotations (t(618) = 1.591, p = .224); at 135° the eye rotation was significantly smaller than the head (t(618) = -10.418, p < .001) and body rotations (t(618) = -11.264, p < .001) but there was no difference between the head and body rotations (t(618) = -.846, p = .398); at 180° the eye rotation was far smaller than the head (t(618) = -13.355, p < .001) and body rotations (t(618) = -26.342, p < .001) and the head rotation was smaller than the body rotation as well (t(618) = -12.987, p < .001). The descriptive statistics are shown in Table 5.

Table 3.
Summary of LMM results predicting rotation and proportion from angle, difficulty, and segment.

                                Rotation                 Proportion
                                F        p               F       p
Angle                           469.53   <.001 ***       12.13   <.001 ***
Difficulty                      .01      .925 n.s.       .03     .862 n.s.
Segment                         148.82   <.001 ***       29.23   <.001 ***
Angle × Difficulty              .00      1.000 n.s.      .01     .998 n.s.
Angle × Segment                 99.41    <.001 ***       84.60   <.001 ***
Difficulty × Segment            .21      .807 n.s.       1.09    .336 n.s.
Angle × Difficulty × Segment    .38      .895 n.s.       .86     .526 n.s.

*p < .05, **p < .01, ***p < .001

Table 4.
Pairwise comparisons between segments for each angle.

        Absolute Rotation
45°     Eye > Head = Body
90°     Eye < Head, Head = Body, Eye = Body
135°    Eye < Head, Head = Body, Eye < Body
180°    Eye < Head < Body
Table 5.
Descriptive statistics (M and SD) for rotation (in degrees) and proportion by angle and segment.

        Rotation                                         Proportion
        Eye            Head            Body              Eye         Head        Body
45°     24.03 (4.77)   14.04 (5.87)    14.32 (5.58)      .53 (.11)   .31 (.13)   .32 (.12)
90°     28.04 (5.85)   33.45 (9.22)    30.16 (11.05)     .31 (.06)   .37 (.10)   .34 (.12)
135°    30.07 (5.64)   51.59 (10.81)   53.34 (14.47)     .22 (.04)   .38 (.08)   .40 (.11)
180°    32.41 (7.86)   60.26 (15.22)   87.33 (20.95)     .18 (.04)   .33 (.08)   .49 (.12)

Note that the proportions were calculated at the trial level, so the sum of mean proportions is not equal to one.

The results of the LMMs (Table 3) predicting the proportion of rotation showed main effects of angle and segment and an interaction between angle and segment. To follow up on the interaction (Table 6), pairwise comparisons showed that the proportion the eyes contributed to gaze shifts decreased significantly from 45° to 90° (t(618) = 11.939, p < .001), from 90° to 135° (t(618) = 4.770, p < .001), and from 135° to 180° (t(618) = 2.280, p = .023); the proportion of head rotation increased from 45° to 90° (t(618) = -3.198, p = .007), plateaued from 90° to 135° (t(618) = -.563, p = .574), and slightly dropped from 135° to 180° (t(618) = 2.533, p = .046); and the proportion of body rotation remained similarly low from 45° to 90° (t(618) = -.910, p = .363) but increased from 90° to 135° (t(618) = -3.218, p = .003) and from 135° to 180° (t(618) = -4.813, p < .001). The descriptive statistics are shown in Table 5.

Table 6.
Pairwise comparisons between angles for each segment based on the proportion of each gaze shift.

        Proportion
Eye     45° > 90° > 135° > 180°
Head    45° < 90° = 135°, 135° > 180°
Body    45° = 90° < 135° < 180°

In this study, we investigated the trade-off between using motor effort to visually explore versus cognitive effort to memorize the model in the block-copying task, and how that trade-off related to eye-head-body movements and memorization difficulty. We found that people coordinated their eyes, head, and body differently to look at models at different angles. The eyes contributed more to small gaze shifts, but the head and body contributed more as the gaze shift amplitude increased (Freedman, 2008; Hollands et al., 2004; Proudlock & Gottlob, 2007; Sidenmark & Gellersen, 2020). This means that even across equally-spaced 45° increases in angle, the increase in effort was not equal, because each angle incurred different relative contributions from lower-effort eye movements compared with higher-effort head and body movements. The resulting motor effort of looking influenced how people completed the copying task: they turned their eyes, head, and body more frequently to visually explore models at 45° compared to larger angles (90°, 135°, and 180°), but they spent longer memorizing the models with each 45° increase in angle (from 45° to 180°). Memorization difficulty also affected the trade-off, such that people used more memory when the models were easy to remember. Memorization difficulty interacted with motor effort for the number of looks, such that participants looked especially frequently at difficult models at 45°. But memorization difficulty did not interact with motor effort for look duration, nor did it affect how people coordinated the eyes, head, and body to look at the models.

Consistent with previous studies (Ballard et al., 1992, 1995; Draschkow et al., 2021; Gray et al., 2006; Hardiess et al., 2011; Lagers-Van Haselen et al., 2000), we found that people adapt the motor-memory trade-off to task conditions: they looked more frequently when the eye-head-body movements were less effortful or when memorization was more difficult. In this study we extended the maximum target angle to 180° (the largest amplitude in previous studies was 135°; Ballard et al., 1995; Draschkow et al., 2021), which resulted in dramatically different eye-head-body coordination but had a modest effect on the motor-memory trade-off compared with 135°. Participants turned their eyes, head, and body very frequently to look at models at 45°, but the frequency of eye-head-body movements remained similar across the 90°, 135°, and 180° angles. Yet, despite the stability of the number of looks over this range, the mean duration of model looks did significantly increase from 90° to 180°. One possibility is that the eyes, head, and body need more time to change their state of rotational motion and stabilize gaze for visual processing when making large gaze shifts. Alternatively, as the duration of gaze shifts increased, participants may have needed longer to encode and rehearse the information in memory to resist memory decay. These possibilities should be tested in future work.

Overall, memorization difficulty affected the number of looks and look duration but did not qualitatively change the motor-memory trade-off. In other words, participants tended to move the eyes, head, and body frequently to look when the gaze shift amplitude was small regardless of memorization difficulty, even though they could potentially have used more memory to improve efficiency when the models were easier to memorize. It seems that motor effort has a more pronounced effect on the motor-memory trade-off than memorization difficulty.

This is the first study to examine eye-head-body coordination in the block-copying task. The current study demonstrated that each 45° increase in angle did not produce an equivalent change in effort. Although each segment (eyes, head, and body) rotated more as the angle increased, the coordination among the eyes, head, and body at different angles was complex: the eyes contributed most to gaze shifts when the amplitude was small; even before the eyes reached the oculomotor limit, the head became increasingly involved; and at larger angles, the body overtook the head, contributing the most to the largest gaze shifts.

How might the differential contribution of low-effort eye movements versus high-effort head/body movements relate to the motor-memory trade-off? Comparing Figure 4A and Figure 5A, the large drop in the number of looks (between 45° and 90°) did not coincide with the sharp increase in body rotation (between 135° and 180°); instead, it co-occurred with the large increase in head movement. In other words, the change in body movement did not affect the motor-memory trade-off much, even though body movement is more costly than head movement. Participants changed their copying strategy by increasing memory use once head movement was required to look at the models. This implies that observers treat the motor effort of eye movements as distinct from the effort of head and body movements. Participants were more willing to move the eyes back and forth than to move both the head and the body, probably because of the short time scale of saccadic eye movements (about 300 ms) (Ballard et al., 1997). But note that the current study only explored the eye-head-body movements underlying the motor-memory trade-off observed in a copying task. Future work should investigate how visual exploration relates to the effort of the underlying effector movements across different tasks.

We acknowledge several limitations. First, the unavailability of a full-body motion capture system made it impossible to track the rotations of the trunk and feet. Body rotations were instead estimated by subtracting the eye and head rotations from the model angle. Although not ideal, this approximation could still shed light on body movement in the copying task. Second, there were individual differences in how participants traded off motor effort and memory use. It is possible that individuals with higher memory capacity chose to rely more on memory; future studies could investigate the impact of memory capacity on the motor-memory trade-off. Last, we did not directly measure the energetic cost of the eye-head-body movements. A recent study showed that people do not spontaneously select the walking speed, step length, and step width that minimize energetic cost (Antos et al., 2022). Likewise, we are unsure whether participants coordinated their eyes, head, and body in a way that was energetically optimal in the block-copying task.

In sum, this study enhanced our understanding of the motor-memory trade-off by investigating the underlying eye-head-body movements that guided looking in the block-copying task. We found that the eyes, head, and body coordinated differently when looking at different angles, resulting in varying motor effort. Future studies should further examine the interaction between the motor and cognitive systems.

Contributed to conception and design: Chuan Luo, John M. Franchak.

Contributed to acquisition of data: Chuan Luo.

Contributed to analysis and interpretation of data: Chuan Luo, John M. Franchak.

Drafted and/or revised the article: Chuan Luo, John M. Franchak.

Approved the submitted version for publication: Chuan Luo, John M. Franchak.

We thank the members of UCR Perception, Action, and Development Lab for their assistance in collecting and coding the data.

The authors declare that they have no conflict of interest.

The data and analysis scripts can be found on this paper’s project page on the Open Science Framework (doi:10.17605/osf.io/x8qac).

Anastasopoulos, D., Ziavra, N., Hollands, M., & Bronstein, A. (2008). Gaze displacement and inter-segmental coordination during large whole body voluntary rotations. Experimental Brain Research, 193(3), 323–336. https://doi.org/10.1007/s00221-008-1627-y
Antos, S. A., Kording, K. P., & Gordon, K. E. (2022). Energy expenditure does not solely explain step length–width choices during walking. Journal of Experimental Biology, 225(6). https://doi.org/10.1242/jeb.243104
Ballard, D. H., Hayhoe, M. M., Li, F., & Whitehead, S. D. (1992). Hand-eye coordination during sequential tasks. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 337(1281), 331–339. https://doi.org/10.1098/rstb.1992.0111
Ballard, D. H., Hayhoe, M. M., & Pelz, J. B. (1995). Memory representations in natural tasks. Journal of Cognitive Neuroscience, 7(1), 66–80. https://doi.org/10.1162/jocn.1995.7.1.66
Ballard, D. H., Hayhoe, M. M., Pook, P. K., & Rao, R. P. N. (1997). Deictic codes for the embodiment of cognition. Behavioral and Brain Sciences, 20(4), 723–742. https://doi.org/10.1017/s0140525x97001611
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01
Draschkow, D., Kallmayer, M., & Nobre, A. C. (2021). When Natural Behavior Engages Working Memory. Current Biology, 31(4), 869-874.e5. https://doi.org/10.1016/j.cub.2020.11.013
Droll, J. A., & Hayhoe, M. M. (2007). Trade-offs between gaze and working memory use. Journal of Experimental Psychology: Human Perception and Performance, 33(6), 1352–1365.
Einhäuser, W., Schumann, F., Bardins, S., Bartl, K., Böning, G., Schneider, E., & König, P. (2007). Human eye-head co-ordination in natural exploration. Network: Computation in Neural Systems, 18(3), 267–297. https://doi.org/10.1080/09548980701671094
Franchak, J. M. (2020). Visual exploratory behavior and its development. Psychology of Learning and Motivation, 59–94. https://doi.org/10.1016/bs.plm.2020.07.001
Franchak, J. M., McGee, B., & Blanch, G. (2021). Adapting the coordination of eyes and head to differences in task and environment during fully-mobile visual exploration. PLoS One, 16(8), e0256463. https://doi.org/10.1371/journal.pone.0256463
Freedman, E. G. (2008). Coordination of the eyes and head during visual orienting. Experimental Brain Research, 190(4), 369–387. https://doi.org/10.1007/s00221-008-1504-8
Freedman, E. G., & Sparks, D. L. (2000). Coordination of the eyes and head: Movement kinematics. Experimental Brain Research, 131(1), 22–32. https://doi.org/10.1007/s002219900296
Gadotti, I. C., Elbaum, L., Jung, Y., Garbalosa, V., Kornbluth, S., Da Costa, B., Maitra, K., & Brunt, D. (2016). Evaluation of eye, head and trunk coordination during target tracking tasks. Ergonomics, 59(11), 1420–1427. https://doi.org/10.1080/00140139.2016.1146345
Gray, W. D., Sims, C. R., Fu, W.-T., & Schoelles, M. J. (2006). The soft constraints hypothesis: A rational analysis approach to resource allocation for interactive behavior. Psychological Review, 113(3), 461–482. https://doi.org/10.1037/0033-295x.113.3.461
Guitton, D., & Volle, M. (1987). Gaze control in humans: Eye-head coordination during orienting movements to targets within and beyond the oculomotor range. Journal of Neurophysiology, 58(3), 427–459. https://doi.org/10.1152/jn.1987.58.3.427
Hardiess, G., Basten, K., & Mallot, H. A. (2011). Acquisition vs. memorization trade-offs are modulated by walking distance and pattern complexity in a large-scale copying paradigm. PLoS One, 6(4), e18494. https://doi.org/10.1371/journal.pone.0018494
Hardiess, G., Gillner, S., & Mallot, H. A. (2008). Head and eye movements and the role of memory limitations in a visual search paradigm. Journal of Vision, 8(1), 7. https://doi.org/10.1167/8.1.7
Hayhoe, M. M. (2000). Vision Using Routines: A Functional Account of Vision. Visual Cognition, 7(1–3), 43–64. https://doi.org/10.1080/135062800394676
Hayhoe, M. M., Bensinger, D. G., & Ballard, D. H. (1998). Task constraints in visual working memory. Vision Research, 38(1), 125–137. https://doi.org/10.1016/s0042-6989(97)00116-8
Hollands, M. A., Ziavra, N. V., & Bronstein, A. M. (2004). A new paradigm to investigate the roles of head and eye movements in the coordination of whole-body movements. Experimental Brain Research, 154(2), 261–266. https://doi.org/10.1007/s00221-003-1718-8
Howes, A., Duggan, G. B., Kalidindi, K., Tseng, Y.-C., & Lewis, R. L. (2015). Predicting Short-Term Remembering as Boundedly Optimal Strategy Choice. Cognitive Science, 40(5), 1192–1223. https://doi.org/10.1111/cogs.12271
Hu, B., Johnson-Bey, I., Sharma, M., & Niebur, E. (2017). Head movements during visual exploration of natural images in virtual reality. 2017 51st Annual Conference on Information Sciences and Systems (CISS), 1–6. https://doi.org/10.1109/ciss.2017.7926138
Inamdar, S., & Pomplun, M. (2003). Comparative Search Reveals the Tradeoff between Eye Movements and Working Memory Use in Visual Tasks. Proceedings of the Annual Meeting of the Cognitive Science Society, 25(25).
Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. (2017). lmerTest Package: Tests in Linear Mixed Effects Models. Journal of Statistical Software, 82(13). https://doi.org/10.18637/jss.v082.i13
Lagers-Van Haselen, G. C., Van Der Steen, J., & Frens, M. A. (2000). Copying strategies for patterns by children and adults. Perceptual and Motor Skills, 91(2), 603–615. https://doi.org/10.2466/pms.2000.91.2.603
Land, M. F. (2004). The coordination of rotations of the eyes, head and trunk in saccadic turns produced in natural situations. Experimental Brain Research, 159(2), 151–160. https://doi.org/10.1007/s00221-004-1951-9
Li, C.-L., Aivar, M. P., Tong, M. H., & Hayhoe, M. M. (2018). Memory shapes visual search strategies in large-scale environments. Scientific Reports, 8(1), 4324. https://doi.org/10.1038/s41598-018-22731-w
Luke, S. G. (2016). Evaluating significance in linear mixed-effects models in R. Behavior Research Methods, 49(4), 1494–1502. https://doi.org/10.3758/s13428-016-0809-y
Oommen, B. S., Smith, R. M., & Stahl, J. S. (2004). The influence of future gaze orientation upon eye-head coupling during saccades. Experimental Brain Research, 155(1), 9–18. https://doi.org/10.1007/s00221-003-1694-z
Patrick, J., Morgan, P. L., Smy, V., Tiley, L., Seeby, H., Patrick, T., & Evans, J. (2015). The influence of training and experience on memory strategy. Memory & Cognition, 43(5), 775–787. https://doi.org/10.3758/s13421-014-0501-3
Proudlock, F. A., & Gottlob, I. (2007). Physiology and pathology of eye–head coordination. Progress in Retinal and Eye Research, 26(5), 486–515. https://doi.org/10.1016/j.preteyeres.2007.03.004
R Core Team. (2019). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. https://www.R-project.org/
Satterthwaite, F. E. (1941). Synthesis of variance. Psychometrika, 6(5), 309–316. https://doi.org/10.1007/bf02288586
Sidenmark, L., & Gellersen, H. (2020). Eye, Head and Torso Coordination During Gaze Shifts in Virtual Reality. ACM Transactions on Computer-Human Interaction, 27(1), 1–40. https://doi.org/10.1145/3361218
Solman, G. J. F., & Kingstone, A. (2014). Balancing energetic and cognitive resources: Memory use during search depends on the orienting effector. Cognition, 132(3), 443–454. https://doi.org/10.1016/j.cognition.2014.05.005
Solomon, D., Kumar, V., Jenkins, R. A., & Jewell, J. (2006). Head control strategies during whole-body turns. Experimental Brain Research, 173(3), 475–486. https://doi.org/10.1007/s00221-006-0393-y
Stahl, J. S. (1999). Amplitude of human head movements associated with horizontal saccades. Experimental Brain Research, 126(1), 41–54. https://doi.org/10.1007/s002210050715
Stahl, J. S. (2001). Eye-head coordination and the variation of eye-movement accuracy with orbital eccentricity. Experimental Brain Research, 136(2), 200–210. https://doi.org/10.1007/s002210000593
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
