W3C

XR Accessibility User Requirements

W3C Working Group Note

This version:
https://www.w3.org/TR/2021/NOTE-xaur-20210825/
Latest published version:
https://www.w3.org/TR/xaur/
Latest editor's draft:
https://w3c.github.io/apa/xaur/
Previous version:
https://www.w3.org/TR/2020/WD-xaur-20200916/
Editors:
Joshue O'Connor (W3C)
Janina Sajka
Jason White (Educational Testing Service)
Scott Hollier
Michael Cooper (W3C)
Participate:
GitHub w3c/apa
File an issue
Commit history
Pull requests

Copyright © 2020-2021 W3C® (MIT, ERCIM, Keio, Beihang). W3C liability, trademark and permissive document license rules apply.


Abstract

This document lists user needs and requirements for people with disabilities when using virtual reality or immersive environments, augmented or mixed reality and other related technologies (XR). It first introduces a definition of XR as used throughout the document, then briefly outlines some uses of XR. It outlines the complexity of understanding XR and introduces some technical accessibility challenges, such as the need for multi-modal support, synchronization of input and output devices, and customization. It then outlines accessibility related user needs for XR and suggests subsequent requirements. This is followed by related work that may be helpful in understanding the complex technical architecture and processes behind how XR environments are built and what may form the basis of a robust accessibility architecture.

This document is most explicitly not a collection of baseline requirements. It is also important to note that some of the requirements may be implemented at a system or platform level, and some may be authoring requirements.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

To comment on this draft, file an issue in the W3C APA GitHub repository. If this is not feasible, send email to public-apa@w3.org (archives). In-progress updates to the document may be viewed in the publicly visible editors' draft.

This document was published by the Accessible Platform Architectures Working Group as a Working Group Note.

GitHub Issues are preferred for discussion of this specification.

Publication as a Working Group Note does not imply endorsement by the W3C Membership.

This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the W3C Patent Policy. The group does not expect this document to become a W3C Recommendation.

This document is governed by the 15 September 2020 W3C Process Document.

1. Introduction

XR is an acronym used to refer to the spectrum of hardware, applications, and techniques used for virtual reality or immersive environments, augmented or mixed reality and other related technologies. This document is developed as part of a discovery into accessibility related user needs and requirements for XR. This document does not represent a formal working group position, nor does it currently represent a set of technical requirements that a developer or designer needs to strictly follow. It aims to outline the diversity of some current accessibility related user needs in XR and what potential requirements to meet those needs may be.

1.1 What does the term 'XR' mean?

As with the WebXR API spec and as indicated in the related WebXR explainer, this document uses the acronym XR to refer to the spectrum of hardware, applications, and techniques used for virtual reality or immersive environments, augmented or mixed reality and other related technologies. Examples include, but are not limited to:

The important commonality between them is that they all offer some degree of spatial tracking with which to simulate a view of virtual content, as well as navigation and interaction with the objects within these environments.

Terms like "XR Device", "XR Application", etc. are generally understood to apply to any of the above. Portions of this document that only apply to a subset of these devices will be indicated as appropriate.

1.2 Definitions of virtual reality and immersive environments

Virtual reality and immersive environment definitions vary but converge on the notion of immersive computer-mediated experiences. They involve interaction with objects, people and environments using a range of controls. These experiences are often multi-sensory and may be used for educational, therapeutic or entertainment purposes.

1.3 Definitions of augmented and mixed reality

Augmented and mixed reality definitions vary but converge on the notion of computer-mediated interactions involving overlays on the real world. These may be informational or interactive, depending on the application.

2. What is XR used for?

XR has a range of purposes, from work and education to gaming, multimedia and communication. It is evolving at a fast rate and, while not yet mainstream, this will change as computing power increases, hardware becomes cheaper and the quality of the user experience improves. XR will be more commonly used for the performance of work tasks, for therapeutic uses, education and for entertainment.

3. Understanding XR and Accessibility Challenges

Understanding XR itself presents various technical challenges. These include issues with a range of hardware, software and authoring tools. To make accessible XR experiences there is a need to understand interaction design principles, accessibility semantics and assistive technologies. However, these all represent 'basic' complexities that are in themselves substantial. To add to this, many designers and authors may neither know nor have access to people with disabilities for usability testing. Nor may they have a practical way of understanding accessibility related user needs from which to build a solid set of requirements. In short, they just may not understand what user needs they are trying to meet.

Some of the issues in XR, for example in gaming, for people with disabilities include:

There are a range of disabilities that will need to be considered in making XR accessible. It is beyond the scope of this document to describe them all in detail. General categories or types of disabilities are:

A person may have one of these disabilities or a combination of several. User needs are presented here that may relate to several of these disabilities, with a range of requirements that should be met by the author or the platform. For XR designers and authors, understanding these needs is crucial when making XR environments accessible.

Some things designers and authors need to be aware of:

3.1 Immersive Environment challenges

Some of the challenges of immersive environment (and gaming) accessibility include: the use of extremely complex input devices; control schemes that require a high degree of precision, timing and simultaneous action; the ability to distinguish subtle differences in busy visual and audio information; and having to juggle multiple complex goals and objectives [web-adapt].

There are also useful accessibility guidelines available that are specific to gaming [game-a11y].

3.2 XR and supporting multimodality

Modality relates to modes of sense perception such as sight, hearing, touch and so on. Accessibility can be thought of as supporting multi-modal requirements and the transformation of content or aspects of a user interface from one mode to another that will support various user needs.

Considering various modality requirements in the foundation of XR means these platforms will be better able to support accessibility related user needs. There will be many modality aspects for the developer and/or content author to consider.

XR authors and content designers will also need access to tools that support the multi-modal requirements listed below.

The following inputs and outputs can be considered modalities that should be supported in XR environments.

3.3 Various input modalities

The following are examples of some of the diverse input methods used by people with disabilities. In many real-world applications these input methods may be combined.

3.4 Various output modalities

The following is a list of outputs that can be available to a user to help them understand, interact with and 'sense' feedback from an XR application. Some of these are in common use on the Web, while others are exploratory (such as olfactory and gustatory).

3.5 XR controller challenges

As mentioned, there are a range of input devices that may be used. Supporting these controllers requires an understanding of what they are and how they work. There are a variety of alternative gaming controls that may be very useful in XR environments and applications. For example, the Xbox Adaptive Controller.

While XR is the experience, the controller plays a critical part in overcoming some of its complexity, as well as mediating issues that may relate to other challenges around usability and helping the user understand sensory substitution devices.

Controllers such as the Xbox Adaptive Controller and other switch type inputs allow the user to remap keyboard inputs to control or interact with virtual environments. These powerful customizations allow the user to "do that thing that is difficult" for them with ease. In conjunction with this controller, for example, users with limited mobility can also simulate actions in the XR environment that they would not be able to physically perform. WalkinVRDriver is a good example of this, where motion range, position and orientation can be set to the user's ability.

3.6 Customization of control inputs

Give the user the ability to modify their input preference or use a variety of input devices. The remapping of keys used to control movement or interaction in virtual environments is not currently required by WCAG. It is nevertheless noted in the literature as desirable.
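As a small, non-normative illustration of the kind of remapping described above, the following sketch maps device-specific inputs to abstract application actions so that any input can be rebound to any action. The InputRemapper class, the action names and the default bindings are assumptions made for this example only; they are not part of any existing API or a requirement of this document.

```typescript
// Hypothetical sketch: remap low-level inputs to abstract actions, so a
// user can substitute any key, switch or controller button for any action.

type Action = "move-forward" | "move-back" | "select" | "open-menu";

class InputRemapper {
  // Map from a device-specific input id (e.g. "KeyW", "gamepad:button-0")
  // to an application-level action.
  private bindings = new Map<string, Action>();

  constructor(defaults: Record<string, Action>) {
    for (const [input, action] of Object.entries(defaults)) {
      this.bindings.set(input, action);
    }
  }

  // Let the user (or an assistive technology) rebind any input to any action.
  remap(input: string, action: Action): void {
    this.bindings.set(input, action);
  }

  // Resolve an incoming input event to the action it is currently bound to.
  resolve(input: string): Action | undefined {
    return this.bindings.get(input);
  }
}

// Example defaults; a settings UI would expose remap() so the user can
// adapt these to a single switch, an adaptive controller, etc.
const remapper = new InputRemapper({
  "KeyW": "move-forward",
  "KeyS": "move-back",
  "Enter": "select",
  "gamepad:button-0": "select",
});

remapper.remap("Space", "select");      // the user prefers Space for selection
console.log(remapper.resolve("Space")); // "select"
```

A design like this keeps the remapping at the application or platform level, so movement and interaction are not tied to any one physical device.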

3.7 Using multiple diverse inputs simultaneously

A user with a disability may have several input devices or different assistive technologies. A user may switch their 'mode' of interaction, or the tools they use, and this should not degrade the user experience, for example by causing them to lose focus on a task and be unable to return to it, or to make unwanted inputs.

Complexity needs to be managed and co-ordinated between different kinds of assistive technology in immersive environments. There is a platform level requirement to support multiple assistive technologies in a cohesive manner. This would allow combinations to be used in a co-ordinated way, e.g. where the user's day-to-day AT can be used alongside other AT that may already be embedded in the environment.

Note

The REQ 5b: Voice activation also indicates potential issues with pairing multiple devices via Bluetooth.

3.8 Consistent tracking with multiple inputs

There may be tracking issues when switching input devices. A tracking issue is where the user's focus may be lost, or modified in unpredictable or unwanted ways; this can push the user to make unwanted inputs or choices.

Outputs sent to multiple devices will need to be synchronised.
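The following non-normative sketch shows one possible way of keeping focus consistent when the user switches input devices, as described above. The FocusManager class, the modality names and the idea of a single shared focus target are illustrative assumptions, not an existing platform API.

```typescript
// Hypothetical sketch: preserve the user's focus when they switch between
// input devices, so changing modality does not drop or move focus.

type InputModality = "gaze" | "controller" | "voice" | "switch";

class FocusManager {
  private current: string | null = null; // id of the currently focused object

  // Called by whichever input modality the user is driving focus with.
  setFocus(targetId: string, source: InputModality): void {
    this.current = targetId;
    console.log(`Focus on ${targetId} (via ${source})`);
  }

  // When the user switches devices, the new modality starts from the same
  // target rather than resetting focus or moving it unexpectedly.
  focusForNewModality(source: InputModality): string | null {
    console.log(`Switched to ${source}; focus stays on ${this.current}`);
    return this.current;
  }
}

const focus = new FocusManager();
focus.setFocus("menu-item-3", "controller");
focus.focusForNewModality("voice"); // still "menu-item-3"
```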

3.9 Usability and affordances in XR

An XR application should have a high level of usability for someone with a disability using assistive technology. Therefore, communicating affordances successfully is critical and needs to be done in a way that supports multiple modalities. Some related questions are:

Note

Regarding the discoverability of accessibility features in XR: it is important for designers of accessible XR to understand how to categorize various accessibility features and understand where to place them, in a menu for example. An accessibility related accommodation may have multiple contexts of use that may not be obvious. For example, the suggested use of "mono" in User Need 19 is not just an accessibility feature under a hearing-impaired category, as it is also useful for users with spatial orientation impairments or cognitive and learning disabilities. Care should be taken to ensure these features are categorized in menus correctly and discoverable in multiple contexts.
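As a small, purely hypothetical illustration of the point above, an accessibility feature could be tagged with every context in which it is useful, so that menus can surface it in more than one place. The category names and the feature entries below are assumptions made for this sketch, not a recommended taxonomy.

```typescript
// Hypothetical sketch: tag each accessibility feature with all the menu
// categories where users might reasonably look for it, so it remains
// discoverable in multiple contexts rather than a single one.

interface AccessibilityFeature {
  id: string;
  label: string;
  categories: string[]; // every menu section where the feature should appear
}

const features: AccessibilityFeature[] = [
  {
    id: "mono-audio",
    label: "Mono audio",
    categories: ["Hearing", "Spatial orientation", "Cognitive and learning"],
  },
  {
    id: "large-targets",
    label: "Larger interaction targets",
    categories: ["Vision", "Motor"],
  },
];

// Build the settings menu: each category lists every feature tagged with it.
function featuresForCategory(category: string): AccessibilityFeature[] {
  return features.filter((f) => f.categories.includes(category));
}

console.log(featuresForCategory("Spatial orientation").map((f) => f.label));
```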

4. XR User Needs and Requirements

This document outlines various accessibility related user needs for XR. These user needs should drive accessibility requirements for XR and its related architecture. These come from people with disabilities who use assistive technologies and wish to see the features described available within XR enabled applications.

User needs and requirements are often dependent on context of use. The following outline some accessibility user needs and requirements that may be applicable in immersive environments, augmented reality and 360° applications.

The following are neither exhaustive nor definitive, but are presented to help orientate the reader towards understanding some broad user needs and how to meet them.

4.1 Immersive semantics and customization

Note

In a spatialized augmented reality environment, a blind user may find a combination of text to speech and sonic symbols helpful. Using this combination, a blind user can take a self-guided tour of a given area with their smartphone. [spatialized-navigation]
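As a rough, non-normative sketch of the combination described in the note above, the following pairs a short Web Audio tone (the 'sonic symbol') with spoken output from the Web Speech API. The announcePointOfInterest helper, the cue frequencies and the example labels are assumptions for illustration; they are not taken from the cited work.

```typescript
// Illustrative sketch only: pair a brief sonic cue with spoken text, as a
// blind user on a self-guided tour might hear when nearing a landmark.

const audioCtx = new AudioContext();

function playSonicCue(frequencyHz: number, durationMs: number): void {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();
  osc.frequency.value = frequencyHz; // different pitches can stand for different landmark types
  gain.gain.value = 0.2;             // keep the cue quiet relative to speech
  osc.connect(gain).connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + durationMs / 1000);
}

function announcePointOfInterest(label: string, cueHz: number): void {
  playSonicCue(cueHz, 200);                                   // sonic symbol first...
  speechSynthesis.speak(new SpeechSynthesisUtterance(label)); // ...then text to speech
}

// Hypothetical usage with made-up labels and cue frequencies.
announcePointOfInterest("Main entrance, 10 metres ahead", 880);
announcePointOfInterest("Pedestrian crossing on your left", 440);
```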

4.2 Motion agnostic interactions

Note

There are accessibility issues specific to augmented reality. For example, the user may be expected to scan the environment, or scan physical objects, to determine the placement of virtual objects. The user may need to mark a location or an area in space so that the AR application can generate appropriate virtual objects. The user should be able to perform these actions in a motion agnostic way.
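The following non-normative sketch illustrates one way placement could be offered in a motion agnostic way: an object can be positioned either from a pose obtained by scanning, or from a named preset chosen without any physical movement. The PlacementRequest type, the preset names and the placeVirtualObject function are hypothetical and do not correspond to an existing AR API.

```typescript
// Hypothetical sketch: allow virtual object placement either from a pose
// produced by scanning the environment, or from a motion agnostic preset
// chosen without the user having to move the device or themselves.

interface Pose { x: number; y: number; z: number; }

type PlacementRequest =
  | { kind: "scanned"; pose: Pose }                             // from device motion / hit testing
  | { kind: "preset"; preset: "in-front" | "left" | "right" };  // motion agnostic choice

function placeVirtualObject(objectId: string, request: PlacementRequest): Pose {
  if (request.kind === "scanned") {
    return request.pose;
  }
  // Motion agnostic presets, expressed relative to the user's current position.
  const presets: Record<string, Pose> = {
    "in-front": { x: 0, y: 0, z: -1 },
    "left": { x: -1, y: 0, z: 0 },
    "right": { x: 1, y: 0, z: 0 },
  };
  return presets[request.preset];
}

// The same placement can be made by scanning or by a menu/voice choice.
placeVirtualObject("marker-1", { kind: "preset", preset: "in-front" });
```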

4.3 Immersive personalization

Note

Personalization involves tailoring aspects of the user experience to meet the needs and preferences of the individual user. W3C are working on various modules for web content that aim to support personalization and are exploring areas such as:

4.4 Interaction and target customization

Note

Users with cognitive and learning disabilities need to understand what items in a visual display are actionable targets and how to interact with them. There is a need for accessibility APIs that map custom user interface actions to control types. These actions can then be understood by a broad range of assistive technologies. This would help indicate to users which targets are actionable, and how they can interact with them. By supporting this kind of adaptation and personalization, the user can select preferred, familiar options from a set of alternatives. The W3C has produced a useful list of these patterns that could help readers understand the user needs of people with cognitive and learning disabilities, as well as in the development of suitable APIs [coga-usable], especially section 4, the Design Guide.
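The following is one possible, purely hypothetical shape for such a mapping: each custom action in a scene is described with a familiar control type and a human-readable label that assistive technologies could expose. The ActionDescriptor type, the role names and the example entries are assumptions for illustration, not an existing accessibility API.

```typescript
// Hypothetical data structure: describe custom XR interactions using
// familiar control types so assistive technologies can present them
// in a way the user already understands.

type ControlType = "button" | "toggle" | "slider" | "menu-item";

interface ActionDescriptor {
  target: string;            // id of the object in the scene
  controlType: ControlType;  // familiar role the AT can announce
  label: string;             // human-readable name
  hint?: string;             // optional "how to use" text
}

const actionableTargets: ActionDescriptor[] = [
  { target: "door-01", controlType: "button", label: "Open door",
    hint: "Activates like a button; opens the door in front of you." },
  { target: "lamp-03", controlType: "toggle", label: "Room light",
    hint: "Switches the light on or off." },
  { target: "volume-ctl", controlType: "slider", label: "Ambient volume",
    hint: "Adjusts background sound from 0 to 100." },
];

// An assistive technology could list these descriptors so the user knows
// which targets are actionable and how to interact with each one.
for (const a of actionableTargets) {
  console.log(`${a.label} (${a.controlType}): ${a.hint ?? ""}`);
}
```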

4.5 Voice commands

4.6 Color changes

4.7 Magnification context and resetting

Note

There are customisation approaches, such as the automatic generation of user interfaces as demonstrated in the SUPPLE project, which adapt to the different challenges the user may face, such as vision, motor control and other user preferences and abilities. A generated UI can make multiple adaptations for different user needs at the same time. This is achieved by generating one or more UIs after testing a person's abilities, using an algorithm to learn their preferences. [supple-project]

4.8 Critical messaging and alerts

4.9 Gestural interfaces and interactions

4.10 Signing videos and text description transformation

Note

Currently, it is not possible to provide an accurate live interpretation via a signing avatar. In general, animated or digital signing avatars should be avoided, as users find them less expressive than recorded video of humans, who can convey the natural quality and skill provided by appropriately trained and qualified interpreters and translators. Therefore, uses of signing avatars should rely only on pre-recordings of 'real people' who are trained and qualified interpreters and translators. See the concerns expressed in the WFD and WASLI 'Statement on Use of Signing Avatars'. [wfd-wasli]

However, we note this is an emerging field and exploration is encouraged to ensure the future development of quality signing avatars. For example, this could be by building a signing avatar that both provides a face with fully functioning muscular variables and can successfully parse the nuances of vocal expression and meaning.

4.11 Safe harbour controls

4.12 Immersive time limits

4.13 Orientation and navigation

4.14 Second screen devices

Note

'Second screen' is a term used in this document to denote any other external output device, such as a monitor or sound card, or assistive technology such as braille output. The use of the term is not restricted to just these devices and can refer to any output device a user may choose.

4.15 Interaction speed

Note

The term 'help' for REQ 15c may vary from explanatory information such as textual/symbolic annotations in an application, to human assistance in real time.

4.16 Avoiding sickness triggers

4.17 Spatial audio tracks and alternatives

4.18 Spatial orientation: Mono audio option

Note

People with traumatic brain injuries can have a range of impairments. These may be spatial orientation impairments, auditory processing difficulties, visual processing difficulties or a combination. They may miss information in stereo or binaural soundscapes. This can affect orientation while navigating. Even if provided with accurate directions, they may not recognize surroundings, or experience anxiety when navigating.
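As an illustration of how an application could offer such a mono option, the following non-normative sketch routes its audio through a single master node whose channel count can be forced to one, relying on the Web Audio API's channel down-mixing rules. The setMonoAudio helper and the assumption that all sources connect to one master node are made up for this example.

```typescript
// Sketch of a user-selectable mono option: route all application audio
// through a master node whose channel count can be forced to 1, so stereo
// and spatialized content is down-mixed for the listener.

const audioCtx = new AudioContext();

// Assumption for this sketch: every sound source in the application
// connects to this master node instead of directly to the destination.
const masterOut = audioCtx.createGain();
masterOut.connect(audioCtx.destination);

function setMonoAudio(enabled: boolean): void {
  if (enabled) {
    // Force down-mixing to a single channel per Web Audio mixing rules.
    masterOut.channelCount = 1;
    masterOut.channelCountMode = "explicit";
    masterOut.channelInterpretation = "speakers";
  } else {
    // Restore the default behaviour (channel count computed from inputs).
    masterOut.channelCountMode = "max";
  }
}

// Example: a settings menu checkbox labelled "Mono audio" calls this.
setMonoAudio(true);
```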

4.19 Captioning, Subtitling and Text: Support and customization

Note

The W3C Immersive Captions Community Group is actively contributing to this emerging accessibility standards work, representing a diverse range of user needs.

5. Related Documents

Other documents that relate to this and represent current work in the RQTF/APA are:

A. Change Log

The following is a list of new requirements and other changes in this document:

Requirements have been updated based on combined review feedback, discussion and Research Questions Task Force consensus. Other user needs have been edited to better reference related requirements such as with Second screen devices.

Various clarification or reference notes have been added relating to:

B. Acknowledgements

B.1 Participants of the APA working group active in the development of this document

B.2 Previously active participants, commenters, and other contributors

B.3 Enabling Funders

This work is supported by the EC-funded WAI-Guide Project.

C. References

C.1 Informative references

[able-gamers]
Thoughts On Accessibility and VR. AJ Ryan. March 2017. URL: https://ablegamers.org/thoughts-on-accessibility-and-vr/
[coga-usable]
Making Content Usable for People with Cognitive and Learning Disabilities. Lisa Seeman-Horwitz; Rachael Bradley Montgomery; Steve Lee; Ruoxi Ran. W3C. 29 April 2021. W3C Note. URL: https://www.w3.org/TR/coga-usable/
[game-a11y]
Game Accessibility Guidelines. Barrie Ellis; Ian Hamilton; Gareth Ford-Williams; Lynsey Graham; Dimitris Grammenos; Ed Lee; Jake Manion; Thomas Westin. 2019. URL: http://gameaccessibilityguidelines.com
[inclusive-seattle]
W3C Workshop on Inclusive XR Seattle. W3C; Pluto VR. W3C. Nov 2019. URL: https://www.w3.org/2019/08/inclusive-xr-workshop/
[maidenbaum-amendi]
Non-visual virtual interaction: Can Sensory Substitution generically increase the accessibility of Graphical virtual reality to the blind? Maidenbaum, S.; Amedi, A. In Virtual and Augmented Assistive Technology (VAAT), 2015 3rd IEEE VR International Workshop on (pp. 15-17). IEEE. 2015.
[mono-ios]
iPhone User Guide. Apple. 2020. URL: https://support.apple.com/en-gb/guide/iphone/iph3e2e2cdc/ios
[personalization-content]
Personalization Semantics Content Module 1.0. Lisa Seeman; Charles LaPierre; Michael Cooper; Roy Ran; Richard Schwerdtfeger. W3C. 2020. URL: https://www.w3.org/TR/personalization-semantics-content-1.0/
[personalization-requirements]
Requirements for Personalization Semantics. Lisa Seeman; Charles LaPierre; Michael Cooper; Roy Ran. W3C. 2020. URL: https://www.w3.org/TR/personalization-semantics-requirements-1.0/
[personalization-semantics]
Personalization Semantics Explainer 1.0. Lisa Seeman; Charles LaPierre; Michael Cooper; Roy Ran; Richard Schwerdtfeger. W3C. 2020. URL: https://www.w3.org/TR/personalization-semantics-1.0/
[raja-asl]
Legibility of Videos with ASL signers. Cornell University. 27 May 2021. URL: https://arxiv.org/abs/2105.12928
[spatialized-navigation]
What’s around Me? Spatialized Audio Augmented Reality for Blind Users with a Smartphone. Blum J.R.; Bouchard M.; Cooperstock J.R. Springer. 2012. URL: https://link.springer.com/chapter/10.1007/978-3-642-30973-1_5
[supple-project]
SUPPLE: Automatically Generating Personalized User Interfaces. Krzysztof Gajos et al. Harvard. 2010. URL: http://www.eecs.harvard.edu/~kgajos/research/supple/
[web-adapt]
W3C Workshop on Web Games Position Paper: Adaptive Accessibility. Matthew Tylee Atkinson; Ian Hamilton; Joe Humbert; Kit Wessendorf. W3C. Dec 2018. URL: https://www.w3.org/2018/12/games-workshop/papers/web-games-adaptive-accessibility.html
[wfd-wasli]
WFD and WASLI Statement on Use of Signing Avatars. World Federation of the Deaf. 14 March 2018. URL: https://wfdeaf.org/news/resources/wfd-wasli-statement-use-signing-avatars/

