PhilPapers
  1. How does it feel to act together? Elisabeth Pacherie - 2014 - Phenomenology and the Cognitive Sciences 13 (1):25-46.
    This paper on the phenomenology of joint agency proposes a foray into a little explored territory at the intersection of two very active domains of research: joint action and sense of agency. I explore two ways in which our experience of joint agency may differ from our experience of individual agency. First, the mechanisms of action specification and control involved in joint action are typically more complex than those present in individual actions, since it is crucial for joint action that people coordinate their plans and actions. I discuss the implications that these coordination requirements might have for the strength of the sense of agency an agent may experience for a joint action. Second, engagement in joint action may involve a transformation of agentive identity and a partial or complete shift from a sense of self-agency to a sense of we-agency. I discuss several factors that may contribute to shaping our sense of agentive identity in joint action.
    46 citations
  • Joint Action: Mental Representations, Shared Information and General Mechanisms for Coordinating with Others. Cordula Vesper, Ekaterina Abramova, Judith Bütepage, Francesca Ciardo, Benjamin Crossey, Alfred Effenberg, Dayana Hristova, April Karlinsky, Luke McEllin, Sari R. R. Nijssen, Laura Schmitz & Basil Wahn - 2017 - Frontiers in Psychology 7.
  • The role of shared visual information for joint action coordination. Cordula Vesper, Laura Schmitz, Lou Safra, Natalie Sebanz & Günther Knoblich - 2016 - Cognition 153 (C):118-123.
    10 citations
  • Joint Action: Current Perspectives. Bruno Galantucci & Natalie Sebanz - 2009 - Topics in Cognitive Science 1 (2):255-259.
    In recent years researchers have begun to investigate how the perceptual, motor and cognitive activities of two or more individuals become organized into coordinated action. In the first part of this introduction we identify three common threads among the ten papers of this special issue that exemplify this new line of research. First, all of the papers are grounded in the experimental study of online interactions between two or more individuals. Second, albeit at different levels of analysis, the contributions focus on the mechanisms supporting joint action. Third, many of the papers investigate empirically the pre-requisites for the highly sophisticated forms of joint action that are typical of humans. In the second part of the introduction, we summarize each of the papers, highlighting more specific connections among them.
    17 citations
  • Performance in a Collaborative Search Task: The Role of Feedback and Alignment. Moreno I. Coco, Rick Dale & Frank Keller - 2018 - Topics in Cognitive Science 10 (1):55-79.
    When people communicate, they coordinate a wide range of linguistic and non-linguistic behaviors. This process of coordination is called alignment, and it is assumed to be fundamental to successful communication. In this paper, we question this assumption and investigate whether disalignment is a more successful strategy in some cases. More specifically, we hypothesize that alignment correlates with task success only when communication is interactive. We present results from a spot-the-difference task in which dyads of interlocutors have to decide whether they are viewing the same scene or not. Interactivity was manipulated in three conditions by increasing the amount of information shared between interlocutors. We use recurrence quantification analysis to measure the alignment between the scan-patterns of the interlocutors. We found that interlocutors who could not exchange feedback aligned their gaze more, and that increased gaze alignment correlated with decreased task success in this case. When feedback was possible, in contrast, interlocutors utilized it to better organize their joint search strategy by diversifying visual attention. This is evidenced by reduced overall alignment in the minimal feedback and full dialogue conditions. However, only the dyads engaged in a full dialogue increased their gaze alignment over time to achieve successful performances. These results suggest that alignment per se does not imply communicative success, as most models of dialogue assume. Rather, the effect of alignment depends on the type of alignment, on the goals of the task, and on the presence of feedback.
    7 citations
  • Social Beliefs and Visual Attention: How the Social Relevance of a Cue Influences Spatial Orienting. Matthias S. Gobel, Miles R. A. Tufft & Daniel C. Richardson - 2018 - Cognitive Science 42 (S1):161-185.
    We are highly tuned to each other's visual attention. Perceiving the eye or hand movements of another person can influence the timing of a saccade or the reach of our own. However, the explanation for such spatial orienting in interpersonal contexts remains disputed. Is it due to the social appearance of the cue—a hand or an eye—or due to its social relevance—a cue that is connected to another person with attentional and intentional states? We developed an interpersonal version of the Posner spatial cueing paradigm. Participants saw a cue and detected a target at the same or a different location, while interacting with an unseen partner. Participants were led to believe that the cue was either connected to the gaze location of their partner or was generated randomly by a computer (Experiment 1), and that their partner had higher or lower social rank while engaged in the same or a different task (Experiment 2). We found that spatial cue-target compatibility effects were greater when the cue related to a partner's gaze. This effect was amplified by the partner's social rank, but only when participants believed their partner was engaged in the same task. Taken together, this is strong evidence in support of the idea that spatial orienting is interpersonally attuned to the social relevance of the cue—whether the cue is connected to another person, who this person is, and what this person is doing—and does not exclusively rely on the social appearance of the cue. Visual attention is not only guided by the physical salience of one's environment but also by the mental representation of its social relevance.
    6 citations
  • Two Trackers Are Better than One: Information about the Co-actor's Actions and Performance Scores Contribute to the Collective Benefit in a Joint Visuospatial Task. Basil Wahn, Alan Kingstone & Peter König - 2017 - Frontiers in Psychology 8.
    6 citations
  • How the Eyes Tell Lies: Social Gaze During a Preference Task. Tom Foulsham & Maria Lock - 2015 - Cognitive Science 39 (7):1704-1726.
    Social attention is thought to require detecting the eyes of others and following their gaze. To be effective, observers must also be able to infer the person's thoughts and feelings about what he or she is looking at, but this has only rarely been investigated in laboratory studies. In this study, participants' eye movements were recorded while they chose which of four patterns they preferred. New observers were subsequently able to reliably guess the preference response by watching a replay of the fixations. Moreover, when asked to mislead the person guessing, participants changed their looking behavior and guessing success was reduced. In a second experiment, naïve participants could also guess the preference of the original observers but were unable to identify trials which were lies. These results confirm that people can spontaneously use the gaze of others to infer their judgments, but also that these inferences are open to deception.
    5 citations
  • Joint perception: gaze and social context. Daniel C. Richardson, Chris N. H. Street, Joanne Y. M. Tan, Natasha Z. Kirkham, Merrit A. Hoover & Arezou Ghane Cavanaugh - 2012 - Frontiers in Human Neuroscience 6.
  • What Am I Looking at? Interpreting Dynamic and Static Gaze Displays. Margot van Wermeskerken, Damien Litchfield & Tamara van Gog - 2018 - Cognitive Science 42 (1):220-252.
    Displays of eye movements may convey information about cognitive processes but require interpretation. We investigated whether participants were able to interpret displays of their own or others' eye movements. In Experiments 1 and 2, participants observed an image under three different viewing instructions. Then they were shown static or dynamic gaze displays and had to judge whether it was their own or someone else's eye movements and what instruction was reflected. Participants were capable of recognizing the instruction reflected in their own and someone else's gaze display. Instruction recognition was better for dynamic displays, and only this condition yielded above chance performance in recognizing the display as one's own or another person's. Experiment 3 revealed that order information in the gaze displays facilitated instruction recognition when transitions between fixated regions distinguish one viewing instruction from another. Implications of these findings are discussed.
    3 citations
  • Assessing Team Effectiveness by How Players Structure Their Search in a First-Person Multiplayer Video Game. Patrick Nalepka, Matthew Prants, Hamish Stening, James Simpson, Rachel W. Kallen, Mark Dras, Erik D. Reichle, Simon G. Hosking, Christopher Best & Michael J. Richardson - 2022 - Cognitive Science 46 (10):e13204.
    1 citation
  • Using gaze patterns to predict task intent in collaboration. Chien-Ming Huang, Sean Andrist, Allison Sauppé & Bilge Mutlu - 2015 - Frontiers in Psychology 6:144956.
    In everyday interactions, humans naturally exhibit behavioral cues, such as gaze and head movements, that signal their intentions while interpreting the behavioral cues of others to predict their intentions. Such intention prediction enables each partner to adapt their behaviors to the intent of others, serving a critical role in joint action where parties work together to achieve a common goal. Among behavioral cues, eye gaze is particularly important in understanding a person's attention and intention. In this work, we seek to quantify how gaze patterns may indicate a person's intention. Our investigation was contextualized in a dyadic sandwich-making scenario in which a “worker” prepared a sandwich by adding ingredients requested by a “customer.” In this context, we investigated the extent to which the customers' gaze cues serve as predictors of which ingredients they intend to request. Predictive features were derived to represent characteristics of the customers' gaze patterns. We developed a support vector machine-based (SVM-based) model that achieved 76% accuracy in predicting the customers' intended requests based solely on gaze features. Moreover, the predictor made correct predictions approximately 1.8 s before the spoken request from the customer. We further analyzed several episodes of interactions from our data to develop a deeper understanding of the scenarios where our predictor succeeded and failed in making correct predictions. These analyses revealed additional gaze patterns that may be leveraged to improve intention prediction. This work highlights gaze cues as a significant resource for understanding human intentions and informs the design of real-time recognizers of user intention for intelligent systems, such as assistive robots and ubiquitous devices, that may enable more complex capabilities and improved user experience.
    3 citations
  • Modest Sociality: Continuities and Discontinuities. Elisabeth Pacherie - 2014 - Journal of Social Ontology 1 (1):17-26.
    A central claim in Michael Bratman’s account of shared agency is that there need be no radical conceptual, metaphysical or normative discontinuity between robust forms of small-scale shared intentional agency, i.e., modest sociality, and individual planning agency. What I propose to do is consider another potential discontinuity, whose existence would throw doubt on his contention that the structure of a robust form of modest sociality is entirely continuous with structures at work in individual planning agency. My main point will be that he may be wrong in assuming that the basic cognitive infrastructure sufficient to support individual agency doesn’t have to be supplemented in significant ways to support shared agency.
    3 citations
  • Can Limitations of Visuospatial Attention Be Circumvented? A Review. Basil Wahn & Peter König - 2017 - Frontiers in Psychology 8.
  • The impact of expert visual guidance on trainee visual search strategy, visual attention and motor skills. Daniel R. Leff, David R. C. James, Felipe Orihuela-Espina, Ka-Wai Kwok, Loi Wah Sun, George Mylonas, Thanos Athanasiou, Ara W. Darzi & Guang-Zhong Yang - 2015 - Frontiers in Human Neuroscience 9.
  • But is it social? How to tell when groups are more than the sum of their members. Allison A. Brennan & James T. Enns - 2016 - Behavioral and Brain Sciences 39.
  • Face to face: The eyes as an anchor in multimodal communication. Desiderio Cano Porras & Max M. Louwerse - 2025 - Cognition 256 (C):106047.
  • The Social Situation Affects How We Process Feedback About Our Actions. Artur Czeszumski, Benedikt V. Ehinger, Basil Wahn & Peter König - 2019 - Frontiers in Psychology 10.
  • Optimistic metacognitive judgments predict poor performance in relatively complex visual tasks. Daniel T. Levin, Gautam Biswas, Joseph S. Lappin, Marian Rushdy & Adriane E. Seiffert - 2019 - Consciousness and Cognition 74 (C):102781.
  • Collective benefit in joint perceptual judgments: Partial roles of shared environments, meta-cognition, and feedback. Pavel V. Voinov, Natalie Sebanz & Günther Knoblich - 2019 - Cognition 189 (C):116-130.
  • Gaze Coordination of Groups in Dynamic Events – A Tool to Facilitate Analyses of Simultaneous Gazes Within a Team. Frowin Fasold, André Nicklas, Florian Seifriz, Karsten Schul, Benjamin Noël, Paula Aschendorf & Stefanie Klatt - 2021 - Frontiers in Psychology 12.
    The performance and the success of a group working as a team on a common goal depends on the individuals’ skills and the collective coordination of their abilities. On a perceptual level, individual gaze behavior is reasonably well investigated. However, the coordination of visual skills in a team has been investigated only in laboratory studies and the practical examination and knowledge transfer to field studies or the applicability in real-life situations have so far been neglected. This is mainly due to the fact that a methodological approach along with a suitable evaluation procedure to analyze the gaze coordination within a team in highly dynamic events outside the lab, is still missing. Thus, this study was conducted to develop a tool to investigate the coordinated gaze behavior within a team of three human beings acting with a common goal in a dynamic real-world scenario. This team was a basketball referee team adjudicating a game. Using mobile eye-tracking devices and an indigenously designed software tool for the simultaneous analysis of the gaze data of three participants, allowed, for the first time, the simultaneous investigation of the coordinated gaze behavior of three people in a highly dynamic setting. Overall, the study provides a new and innovative method to investigate the coordinated gaze behavior of a three-person team in specific tasks. This method is also applicable to investigate research questions about teams in dynamic real-world scenarios and get a deeper look at interactions and behavior patterns of human beings in group settings.
  • Multi-modal referring expressions in human-human task descriptions and their implications for human-robot interaction. Stephanie Gross, Brigitte Krenn & Matthias Scheutz - 2016 - Interaction Studies 17 (2):180-210.
    Human instructors often refer to objects and actions involved in a task description using both linguistic and non-linguistic means of communication. Hence, for robots to engage in natural human-robot interactions, we need to better understand the various relevant aspects of human multi-modal task descriptions. We analyse reference resolution to objects in a data collection comprising two object manipulation tasks and find that 78.76% of all referring expressions to the objects relevant in Task 1 are verbally underspecified and 88.64% of all referring expressions are verbally underspecified in Task 2. The data strongly suggests that a language processing module for robots must be genuinely multi-modal, allowing for seamless integration of information transmitted in the verbal and the visual channel, whereby tracking the speaker’s eye gaze and gestures as well as object recognition are necessary preconditions.
