The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye movements are launched toward edible objects in a visual scene. However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: when the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate. Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but pose a challenge for theories assuming object-based visual indices.
Eye behavior is increasingly used as an indicator of internal versus external focus of attention, both in research and in application. However, available findings are partly inconsistent, which might be attributed to the different nature of the internal and external cognition tasks employed. The present study therefore investigated how consistently different eye parameters respond to internal versus external attentional focus across three task modalities: numerical, verbal, and visuo-spatial. Three eye parameters robustly differentiated between internal and external attentional focus across all tasks: blinks, pupil diameter variance, and fixation disparity variance were consistently increased during internally directed attention. We also observed substantial attentional-focus effects on other parameters (pupil diameter, fixation disparity, saccades, and microsaccades), but these were moderated by task type. Single-trial analysis of our data using machine learning techniques further confirmed our results: classifying the focus of attention by means of eye tracking works well across participants, but generalizing across tasks proves challenging. Based on the effects of task type on eye parameters, we discuss which eye parameters are best suited as indicators of internal versus external attentional focus in different settings.
It has been generally assumed in the Theory of Mind literature of the past 30 years that young children fail standard false-belief tasks because they attribute their own knowledge to the protagonist. Contrary to this traditional view, we have recently proposed that the children's bias is task-induced. This alternative view was supported by studies showing that 3-year-olds are able to pass a false-belief task that allows them to focus on the protagonist without drawing their attention to the target object in the test phase. For a more accurate comparison of these two accounts, the present study tested the true-belief default with adults. Four experiments measuring eye movements and response inhibition revealed that adults do not have an automatic tendency to respond to the false-belief question according to their own knowledge, and that the true-belief response need not be inhibited in order to correctly predict the protagonist's actions. The positive results observed in the control conditions confirm the accuracy of the various measures used. I conclude that the results of this study undermine the true-belief default view and the models that posit mechanisms of response inhibition in false-belief reasoning. Instead, the present study with adults and recent studies with children suggest that participants' focus of attention in false-belief tasks may be key to their performance.
‘Dira’ is a novel experimental paradigm for recording combinations of behavioural and metacognitive measures of the creative process. The task allows chronological and chronometric aspects of the creative process to be assessed directly, without a detour through creative products or proxy phenomena. In a study with 124 participants we show that (a) people spend more time attending to selected than to rejected potential solutions, (b) there is a clear connection between behavioural patterns and self-reported measures, (c) the reported intensity of Eureka experiences is a function of interaction time with potential solutions, and (d) experiences of emerging solutions can occur immediately after engaging with a problem, before participants have explored all potential solutions. The study exemplifies how ‘Dira’ can be used as an instrument to narrow down the moment when solutions emerge. We conclude that the ‘Dira’ experiment paves the way for studying the process, as opposed to the product, of creative problem solving.
How do people evaluate causal relationships? Do they just consider what actually happened, or do they also consider what could have counterfactually happened? Using eye tracking and Gaussian process modeling, we investigated how people mentally simulate past events to judge what caused the outcomes to occur. Participants played a virtual ball-shooting game and then—while looking at a blank screen—mentally simulated (a) what actually happened, (b) what counterfactually could have happened, or (c) what caused the outcome to happen. Our findings showed that participants moved their eyes in patterns consistent with the actual or counterfactual events that they mentally simulated. When simulating what caused the outcome to occur, participants moved their eyes in patterns consistent with simulations of counterfactual possibilities. These results favor counterfactual theories of causal reasoning, demonstrate how eye movements can reflect simulation during such reasoning, and provide a novel approach for investigating retrospective causal reasoning and counterfactual thinking.
Recent research has put forward the hypothesis that eye movements are integrated into memory representations and are reactivated during later recall. However, “looking back to nothing” during recall might be a consequence of spatial memory retrieval. Here, we aimed to distinguish between the effects of spatial and oculomotor information on perceptual memory. Participants' task was to judge whether a morph looked more like the first or the second of two previously presented faces. Crucially, faces and morphs were presented in such a way that the morph reactivated oculomotor and/or spatial information associated with one of the previously encoded faces. Perceptual face memory was strongly influenced by these manipulations. A simple computational model that expresses these biases as a linear combination of recency, saccade, and location effects provided an excellent match to the data. Surprisingly, saccades did not play a role. The results suggest that spatial and temporal information, rather than oculomotor information, biases perceptual face memory.
This thesis investigates the relationship between eye movements, mental imagery, and memory retrieval in four studies based on eye-tracking experiments. The first study is an investigation of eye movements during mental imagery elicited both visually and verbally. The use of complex stimuli, and the development of a novel method in which eye movements are recorded concurrently with verbal data, enabled this relationship to be studied to an extent going beyond what previous research had achieved. Eye movements were found to closely reflect content and spatial layout while participants were listening to a spoken scene description, while they were describing the same scene from memory, and while they were describing a picture they had previously seen. This effect was equally strong during recall from memory irrespective of whether the scene visualised had originally been inspected visually by the participants or whether it was constructed entirely from long-term memory. It was also found that eye movements "to nothing" appeared both when participants were visualising scenes while looking at a blank screen and when they were doing so in complete darkness. The second study explored an effect frequently observed in the first study: a "scaling down" during recall of participants' gaze patterns to an area smaller than that occupied by the encoded stimulus. This scaling effect was found to correlate with spatial-imagery ability: the gaze patterns of participants with weaker spatial-imagery ability were closer in size to the encoded scene than the gaze patterns of those with stronger spatial-imagery ability. In the third study, the role of eye movements during mental imagery was investigated in four experiments in which eye movements were prohibited during either the encoding phase or the recall phase.
Experiments 1 and 2 showed that maintaining central fixation during visual or auditory encoding, respectively, had no effect on how eye movements were executed during recall. Thus, oculomotor events during recall are not reproductions of those produced during encoding. In Experiments 3 and 4, central fixation was instead maintained during recall. This turned out to alter and impair scene recollection, irrespective of the modality of encoding. Finally, in the fourth study, the functional role of eye movements in memory retrieval was further investigated by means of direct eye-movement manipulation in the retrieval phase of an episodic-memory task. Four conditions were used: free viewing on a blank screen, maintaining central fixation, viewing within a square congruent with the location of the objects to be recalled, and viewing within a square incongruent with that location. The results show that gaze position plays an active and facilitatory role during memory retrieval. The findings from these studies are discussed in the light of current theories of eye movements during mental imagery and memory retrieval.