Modality (human–computer interaction)

From Wikipedia, the free encyclopedia
Not to be confused with Mode (user interface).

In the context of human–computer interaction, a modality is the classification of a single independent channel of input/output between a computer and a human. Such channels may differ based on sensory nature (e.g., visual vs. auditory),[1] or other significant differences in processing (e.g., text vs. image).[2] A system is designated unimodal if it has only one modality implemented, and multimodal if it has more than one.[1] When multiple modalities are available for some tasks or aspects of a task, the system is said to have overlapping modalities; when multiple modalities can each accomplish a task in full, the system is said to have redundant modalities. Multiple modalities can be used in combination to provide complementary methods that may be redundant but convey information more effectively.[3] Modalities can be generally defined in two forms: computer–human and human–computer modalities.

Computer–human modalities


Computers utilize a wide range of technologies to communicate and send information to humans.

Any human sense can be used as a computer-to-human modality. However, the modalities of seeing and hearing are the most commonly employed, since they are capable of transmitting information at a higher speed than other modalities: 250 to 300[4] and 150 to 160[5] words per minute, respectively. Though not commonly implemented as a computer–human modality, tactition can achieve an average of 125 wpm[6] through the use of a refreshable Braille display. Other more common forms of tactition are smartphone and game-controller vibrations.

Human–computer modalities


Computers can be equipped with various types of input devices and sensors to allow them to receive information from humans. Common input devices are often interchangeable if they have a standardized method of communication with the computer and afford practical adjustments to the user. Certain modalities can provide a richer interaction depending on the context, and having options for implementation allows for more robust systems.[7]

With the increasing popularity of smartphones, the general public is becoming more comfortable with more complex modalities. Motion and orientation sensing are commonly used in smartphone mapping applications, speech recognition is widely used in virtual-assistant applications, and computer vision is now common in camera applications that scan documents and QR codes.

Using multiple modalities

Main article: Multimodal interaction

Having multiple modalities in a system gives more affordance to users and can contribute to a more robust system. Having more modalities also allows for greater accessibility for users who work more effectively with certain modalities. Multiple modalities can be used as backup when certain forms of communication are not possible; this is especially true in the case of redundant modalities, in which two or more modalities are used to communicate the same information. Certain combinations of modalities can add to the expression of a computer–human or human–computer interaction, because each modality may be more effective at expressing one form or aspect of information than the others.

There are six types of cooperation between modalities, and they help define how a combination or fusion of modalities work together to convey information more effectively.[8]

  • Equivalence: information is presented in multiple ways and can be interpreted as the same information
  • Specialization: when a specific kind of information is always processed through the same modality
  • Redundancy: multiple modalities process the same information
  • Complementarity: multiple modalities take separate information and merge it
  • Transfer: a modality produces information that another modality consumes
  • Concurrency: multiple modalities take in separate information that is not merged
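
The two fusion-oriented types above, redundancy and complementarity, can be contrasted in a short sketch. This is purely illustrative (the class names and the "put that there" speech-plus-pointing scenario are hypothetical, not from the cited source): complementary channels each carry part of a command and are merged, while redundant channels carry the same information and are deduplicated.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Cooperation(Enum):
    """The six cooperation types between modalities."""
    EQUIVALENCE = auto()
    SPECIALIZATION = auto()
    REDUNDANCY = auto()
    COMPLEMENTARITY = auto()
    TRANSFER = auto()
    CONCURRENCY = auto()

@dataclass
class ModalityInput:
    modality: str   # channel name, e.g. "speech" or "gesture"
    payload: dict   # the (possibly partial) information it carries

def fuse(inputs, cooperation):
    """Combine channel inputs for the two fusion-style cooperation types."""
    if cooperation is Cooperation.COMPLEMENTARITY:
        merged = {}
        for item in inputs:              # separate pieces of information...
            merged.update(item.payload)  # ...merged into one command
        return merged
    if cooperation is Cooperation.REDUNDANCY:
        first = inputs[0].payload
        if any(item.payload != first for item in inputs):
            raise ValueError("redundant channels disagree")
        return first                     # same information, kept once
    raise NotImplementedError(cooperation)

# "Put that there": speech names the action, a pointing gesture supplies
# the target location; the two channels complement each other.
speech = ModalityInput("speech", {"action": "move"})
gesture = ModalityInput("gesture", {"target": (120, 45)})
command = fuse([speech, gesture], Cooperation.COMPLEMENTARITY)
```

Here `fuse` only models the two cooperation types that combine information; equivalence, specialization, transfer, and concurrency describe routing rather than fusion, so they are left unimplemented in this sketch.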

Complementary-redundant systems are those that have multiple sensors forming one understanding or dataset; the more effectively the information can be combined without duplicating data, the more effectively the modalities cooperate. Having multiple modalities for communication is common, particularly in smartphones, and their implementations often work together towards the same goal, for example a gyroscope and an accelerometer working together to track movement.[8]
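One common way such sensor pairs cooperate is a complementary filter, shown here as a minimal hypothetical sketch (the function name, weights, and readings are illustrative, not from the cited source): the gyroscope's rate is integrated for short-term accuracy, while the accelerometer's absolute tilt estimate corrects the gyroscope's long-term drift.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step for a tilt estimate, in degrees.

    The gyroscope contributes the integrated rate (responsive but drifts);
    the accelerometer contributes an absolute angle (noisy but drift-free).
    alpha weights the two modalities.
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Device held steady at 10 degrees: the gyro reports only a small drift
# bias, while the accelerometer reads the true tilt directly.
angle = 0.0
for step in range(300):
    gyro_rate = 0.05      # deg/s: pure drift bias while stationary
    accel_angle = 10.0    # noiseless reading, for the sketch
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt=0.01)
# angle converges close to the accelerometer's 10-degree estimate
```

Neither sensor alone would suffice: integrating the gyro accumulates the 0.05 deg/s bias without bound, while the raw accelerometer is too noisy in practice for responsive tracking; the filter keeps the strengths of both modalities.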


References

  1. ^ a b Karray, Fakhreddine; Alemzadeh, Milad; Saleh, Jamil Abou; Arab, Mo Nours (March 2008). "Human-Computer Interaction: Overview on State of the Art" (PDF). International Journal on Smart Sensing and Intelligent Systems. 1 (1): 137–159. doi:10.21307/ijssis-2017-283. Archived from the original (PDF) on April 30, 2015. Retrieved April 21, 2015.
  2. ^ Koh, Jing Yu; Salakhutdinov, Ruslan; Fried, Daniel (2023). "Grounding Language Models to Images for Multimodal Inputs and Outputs". arXiv:2301.13823 [cs.CL].
  3. ^ Palanque, Philippe; Paterno, Fabio (2001). Interactive Systems. Design, Specification, and Verification. Springer Science & Business Media. p. 43. ISBN 9783540416630.
  4. ^ Ziefle, M. (December 1998). "Effects of display resolution on visual performance". Human Factors. 40 (4): 554–68. doi:10.1518/001872098779649355. PMID 9974229.
  5. ^ Williams, J. R. (1998). "Guidelines for the use of multimedia in instruction". Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, 1447–1451.
  6. ^ "Braille". ACB. American Council of the Blind. Retrieved April 21, 2015.
  7. ^ Bainbridge, William (2004). Berkshire Encyclopedia of Human-Computer Interaction. Berkshire Publishing Group LLC. p. 483. ISBN 9780974309125.
  8. ^ a b Grifoni, Patrizia (2009). Multimodal Human Computer Interaction and Pervasive Services. IGI Global. p. 37. ISBN 9781605663876.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Modality_(human–computer_interaction)&oldid=1282991989"