
Furhat: A Back-Projected Human-Like Robot Head for Multiparty Human-Machine Interaction

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7403)


Abstract

In this chapter, we first summarize findings from two previous studies on the limitations of using flat displays with embodied conversational agents (ECAs) in face-to-face human-agent interaction. We then motivate the need for a three-dimensional display of faces to guarantee accurate delivery of gaze and directional movements, and present Furhat, a novel, simple, highly effective, and human-like back-projected robot head that uses computer animation to deliver facial movements and is equipped with a pan-tilt neck. After a detailed account of why and how Furhat was built, we discuss the advantages of optically projected animated agents for interaction, in terms of situatedness, environment and context awareness, and social, human-like face-to-face interaction with robots, where subtle nonverbal and social facial signals can be communicated. At the end of the chapter, we present a recent application of Furhat as a multimodal multiparty interaction system that was exhibited at the London Science Museum as part of a robot festival. We conclude by discussing future developments, applications, and opportunities of this technology.
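The implementation details of the head are behind the access wall, so the following is only an illustrative sketch of the geometry a two-degree-of-freedom pan-tilt neck, like the one the abstract describes, has to solve when directing gaze at a point in the room: converting a 3-D target position into yaw (pan) and pitch (tilt) angles. The function name, coordinate convention (x right, y up, z forward), and head-frame origin are assumptions for this example, not the authors' API.

```python
import math

def pan_tilt_to_target(target, head_origin=(0.0, 0.0, 0.0)):
    """Return (pan, tilt) in degrees aiming a two-axis neck at a 3-D point.

    Assumed frame: x right, y up, z forward from the head's neutral pose.
    """
    dx = target[0] - head_origin[0]
    dy = target[1] - head_origin[1]
    dz = target[2] - head_origin[2]
    # Pan: rotation about the vertical axis toward the target's horizontal bearing.
    pan = math.degrees(math.atan2(dx, dz))
    # Tilt: elevation above the horizontal plane, relative to horizontal distance.
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return pan, tilt

# A target one metre right and one metre ahead needs a 45-degree pan, no tilt.
print(pan_tilt_to_target((1.0, 0.0, 1.0)))
```

In the back-projected design, only coarse orientation needs the mechanical neck; eye gaze within the facial mask can be animated in software, which is part of what the chapter argues makes the approach effective.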



Author information

Authors and Affiliations

  1. Department of Speech, Music, and Hearing, KTH Royal Institute of Technology, Lindstedtsvägen 24, SE-100 44 Stockholm, Sweden

    Samer Al Moubayed, Jonas Beskow, Gabriel Skantze & Björn Granström


Editor information

Editors and Affiliations

  1. Department of Psychology and IIASS, Seconda Università degli Studi di Napoli, Italy

    Anna Esposito

  2. Istituto Nazionale di Geofisica e Vulcanologia, sezione di Napoli Osservatorio Vesuviano, Napoli, Italy

    Antonietta M. Esposito

  3. School of Computing Science, University of Glasgow, Glasgow, UK

    Alessandro Vinciarelli

  4. Laboratory of Acoustics and Speech Communication, Technische Universität Dresden, 01062, Dresden, Germany

    Rüdiger Hoffmann

  5. Dept. of Humanities and Social Sciences, Anatolia College/ACT, P.O. Box 21021, 55510, Pylaia, Greece

    Vincent C. Müller


Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Al Moubayed, S., Beskow, J., Skantze, G., Granström, B. (2012). Furhat: A Back-Projected Human-Like Robot Head for Multiparty Human-Machine Interaction. In: Esposito, A., Esposito, A.M., Vinciarelli, A., Hoffmann, R., Müller, V.C. (eds) Cognitive Behavioural Systems. Lecture Notes in Computer Science, vol 7403. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34584-5_9
