Screen reader

From Wikipedia, the free encyclopedia
Assistive technology that converts text or images to speech or Braille

An example of someone using a screen reader showing documents that are inaccessible, readable and accessible

A screen reader is a form of assistive technology (AT)[1] that renders text and image content as speech or braille output. Screen readers are essential to blind people,[2] and are also useful to people who are visually impaired,[2] illiterate, or learning-disabled.[3] Screen readers are software applications that attempt to convey what people with normal eyesight see on a display to their users via non-visual means, like text-to-speech,[4] sound icons,[5] or a braille device.[2] They do this by applying a wide variety of techniques that include, for example, interacting with dedicated accessibility APIs, using various operating system features (like inter-process communication and querying user interface properties), and employing hooking techniques.[6]

Microsoft Windows operating systems have included the Microsoft Narrator screen reader since Windows 2000, though separate products such as Freedom Scientific's commercially available JAWS screen reader and ZoomText screen magnifier, and the free and open source screen reader NVDA by NV Access, are more popular for that operating system.[7] Apple Inc.'s macOS, iOS, and tvOS include VoiceOver as a built-in screen reader, while Google's Android provides the TalkBack screen reader and its ChromeOS can use ChromeVox.[8] Similarly, Android-based devices from Amazon provide the VoiceView screen reader. There are also free and open source screen readers for Linux and Unix-like systems, such as Speakup and Orca.

History


Around 1978, Al Overby of IBM Raleigh developed a prototype of a talking terminal, known as SAID (for Synthetic Audio Interface Driver), for the IBM 3270 terminal.[9] SAID read the ASCII values of the display in a stream and spoke them through a vocal-tract synthesizer the size of a suitcase, and it cost around $10,000.[10] Dr. Jesse Wright, a blind research mathematician, and Jim Thatcher, formerly his graduate student from the University of Michigan, both working as mathematicians for IBM, adapted this as an internal IBM tool for use by blind people. After the early IBM Personal Computer (PC) was released in 1981, Thatcher and Wright developed a software equivalent to SAID, called PC-SAID, or Personal Computer Synthetic Audio Interface Driver. This was renamed and released in 1984 as IBM Screen Reader, which became the proprietary eponym for that general class of assistive technology.[10]

Types


Command-line (text)


In early operating systems, such as MS-DOS, which employed command-line interfaces (CLIs), the screen display consisted of characters mapping directly to a screen buffer in memory and a cursor position. Input was by keyboard. All this information could therefore be obtained from the system either by hooking the flow of information around the system and reading the screen buffer, or by using a standard hardware output socket[11] and communicating the results to the user.
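Because the text-mode buffer had a fixed, documented layout (in VGA-compatible color text modes, an 80×25 grid of two-byte cells, a character byte followed by an attribute byte, starting at segment B800h), reading it back was straightforward. A minimal Python sketch of the idea, using a simulated buffer rather than real hardware memory:

```python
# Minimal sketch: extracting speakable text from a simulated text-mode
# screen buffer. Real DOS-era screen readers read the live buffer (e.g.
# at segment B800h in VGA color text mode); here it is simulated.

COLS, ROWS = 80, 25

def make_buffer(lines):
    """Build an 80x25 buffer of (char, attribute) byte pairs, space-padded."""
    buf = bytearray()
    for row in range(ROWS):
        text = (lines[row] if row < len(lines) else "").ljust(COLS)[:COLS]
        for ch in text:
            buf += bytes([ord(ch), 0x07])  # 0x07 = light grey on black
    return buf

def read_line(buf, row):
    """Return the text of one screen row, skipping the attribute bytes."""
    start = row * COLS * 2
    cells = buf[start : start + COLS * 2 : 2]  # every other byte is a char
    return cells.decode("cp437").rstrip()

buf = make_buffer(["C:\\> DIR", "README   TXT   1024  01-01-84"])
cursor_row = 1
print(read_line(buf, cursor_row))  # what a reader might speak at the cursor
```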

In the 1980s, the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham developed a Screen Reader for the BBC Micro and NEC Portable.[12][13]

Graphical


Off-screen models


With the arrival of graphical user interfaces (GUIs), the situation became more complicated. A GUI has characters and graphics drawn on the screen at particular positions, and therefore there is no purely textual representation of the graphical contents of the display. Screen readers were therefore forced to employ new low-level techniques, gathering messages from the operating system and using these to build up an "off-screen model", a representation of the display in which the required text content is stored.[14]

For example, the operating system might send messages to draw a command button and its caption. These messages are intercepted and used to construct the off-screen model. The user can switch between controls (such as buttons) available on the screen and the captions and control contents will be read aloud and/or shown on a refreshable braille display.
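The core bookkeeping can be sketched in a few lines. In this simplified Python sketch, each intercepted draw-text message is mirrored into a character grid that can later be queried by position; the message format is hypothetical, and a real implementation must also handle fonts, clipping, and overdrawn regions:

```python
# Simplified sketch of an off-screen model: intercepted "draw text"
# messages are mirrored into a grid so text can be read back by position.
# The message format here is hypothetical; real hooks are far messier.

class OffScreenModel:
    def __init__(self, cols=80, rows=25):
        self.grid = [[" "] * cols for _ in range(rows)]

    def on_draw_text(self, x, y, text):
        """Called whenever a draw-text message is intercepted."""
        for i, ch in enumerate(text):
            if 0 <= x + i < len(self.grid[0]):
                self.grid[y][x + i] = ch

    def text_at(self, y):
        """Reconstruct the text on one row for speech or braille output."""
        return "".join(self.grid[y]).strip()

model = OffScreenModel()
model.on_draw_text(10, 4, "Save file?")  # e.g. a dialog label being drawn
model.on_draw_text(10, 5, "OK")          # e.g. a button caption being drawn
print(model.text_at(4), "/", model.text_at(5))  # -> Save file? / OK
```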

Screen readers can also communicate information on menus, controls, and other visual constructs to permit blind users to interact with these constructs. However, maintaining an off-screen model is a significant technical challenge; hooking the low-level messages and maintaining an accurate model are both difficult tasks.[citation needed]

Accessibility APIs


Operating system and application designers have attempted to address these problems by providing ways for screen readers to access the display contents without having to maintain an off-screen model. These involve the provision of alternative and accessible representations of what is being displayed on the screen, accessed through an API. Existing APIs include:

- Android Accessibility Framework[15]
- Apple Accessibility API[16]
- AT-SPI
- IAccessible2
- Java Accessibility API / Java Access Bridge[17]
- Microsoft Active Accessibility (MSAA)
- Microsoft UI Automation

Screen readers can query the operating system or application for what is currently being displayed and receive updates when the display changes. For example, a screen reader can be told that the current focus is on a button, and what the button's caption is, so that this can be communicated to the user. This approach is considerably easier for the developers of screen readers, but fails when applications do not comply with the accessibility API: for example, Microsoft Word does not comply with the MSAA API, so screen readers must still maintain an off-screen model for Word or find another way to access its contents.[citation needed] One approach is to use available operating system messages and application object models to supplement accessibility APIs.
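As a concrete illustration, on Linux desktops the AT-SPI accessibility API exposes this information to assistive software. A minimal sketch using its Python binding, pyatspi (assuming a desktop session with the AT-SPI registry running), reporting the name and role of each control that gains focus:

```python
# Minimal sketch using AT-SPI (the Linux desktop accessibility API) via
# its Python binding, pyatspi: report the name and role of each control
# that gains focus. Assumes a session with the AT-SPI registry running.
import pyatspi

def on_focus(event):
    accessible = event.source            # the control whose state changed
    if event.detail1:                    # 1 = gained focus, 0 = lost it
        name = accessible.name or "unnamed"
        role = accessible.getRoleName()  # e.g. "push button"
        print(f"{name}, {role}")         # a real reader would speak this

pyatspi.Registry.registerEventListener(on_focus,
                                       "object:state-changed:focused")
pyatspi.Registry.start()                 # blocks, dispatching events
```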

Screen readers can be assumed to be able to access all display content that is not intrinsically inaccessible. Web browsers, word processors, icons, windows, and email programs are just some of the applications used successfully by screen reader users. However, according to some users,[who?] using a screen reader is considerably more difficult than using a GUI, and many applications have specific problems resulting from the nature of the application (e.g. animations) or failure to comply with accessibility standards for the platform (e.g. Microsoft Word and Active Accessibility).[citation needed]

Self-voicing programs and applications


Some programs and applications have voicing technology built in alongside their primary functionality. These programs are termed self-voicing and can be a form of assistive technology if they are designed to remove the need to use a screen reader.[citation needed]

Cloud-based


Some telephone services allow users to interact with the internet remotely. For example, TeleTender can read web pages over the phone and does not require special programs or devices on the user side.[citation needed]

Virtual assistants can sometimes read out written documents (textual web content, PDF documents, e-mails, etc.). The best-known examples are Apple's Siri, Google Assistant, and Amazon Alexa.

Web-based


A relatively new development in the field is web-based applications like Spoken-Web that act as web portals, managing content like news updates, weather, science and business articles for visually impaired or blind computer users.[citation needed] Other examples are ReadSpeaker or BrowseAloud, which add text-to-speech functionality to web content.[citation needed] The primary audience for such applications is those who have difficulty reading because of learning disabilities or language barriers.[citation needed] Although functionality remains limited compared to equivalent desktop applications, the major benefit is to increase the accessibility of said websites when viewed on public machines where users do not have permission to install custom software, giving people greater "freedom to roam".[citation needed]

This functionality depends on the quality of the software but also on the logical structure of the text. Use of headings, punctuation, the presence of alternate attributes for images, and so on, is crucial for good vocalization. A website may also look attractive through appropriate two-dimensional positioning with CSS, yet its standard linearization (for example, with CSS and JavaScript suppressed in the browser) may not be comprehensible.[citation needed]
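The dependence on structure is easy to demonstrate: a voicing tool essentially walks the document in source order and announces what the markup declares. A rough Python sketch of such a linearization pass, assuming the beautifulsoup4 library for parsing:

```python
# Sketch of how web vocalization depends on document structure: walk the
# HTML in source order and emit what a voicing tool could announce.
# Assumes the beautifulsoup4 package for parsing.
from bs4 import BeautifulSoup

html = """
<h1>Weather</h1>
<img src="map.png" alt="Storm front moving east">
<h2>Today</h2>
<p>Sunny, high of 20 degrees.</p>
<img src="logo.png">  <!-- no alt text: nothing useful to speak -->
"""

soup = BeautifulSoup(html, "html.parser")
for node in soup.find_all(["h1", "h2", "h3", "p", "img"]):
    if node.name == "img":
        alt = node.get("alt")
        print(f"graphic: {alt}" if alt else "graphic: [unlabelled]")
    else:
        print(f"{node.name}: {node.get_text(strip=True)}")
```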

Customization


Most screen readers allow the user to select whether most punctuation is announced or silently ignored. Some screen readers can be tailored to a particular application through scripting. One advantage of scripting is that it allows customizations to be shared among users, increasing accessibility for all. JAWS enjoys an active script-sharing community, for example.[citation needed]
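NVDA, mentioned above, illustrates the approach: its add-ons are written in Python and can bind keystrokes to scripts that speak arbitrary information. A minimal sketch of a global plugin (the gesture and message are illustrative only, and add-on packaging is omitted):

```python
# Minimal sketch of an NVDA global plugin: bind a keystroke to a script
# that speaks a message. The gesture and text are illustrative; a real
# add-on also needs a manifest and packaging as an .nvda-addon file.
import globalPluginHandler
import ui
from scriptHandler import script

class GlobalPlugin(globalPluginHandler.GlobalPlugin):

    @script(
        description="Announce a custom status message",
        gesture="kb:NVDA+shift+m",
    )
    def script_announceStatus(self, gesture):
        # ui.message() sends text to the current speech synthesizer
        # (and to a braille display, if one is attached).
        ui.message("All changes saved")
```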

Verbosity


Verbosity is a feature of screen reading software that supports vision-impaired computer users. Speech verbosity controls enable users to choose how much speech feedback they wish to hear. Specifically, verbosity settings allow users to construct a mental model of web pages displayed on their computer screen. Based on verbosity settings, a screen-reading program informs users of certain formatting changes, such as when a frame or table begins and ends, where graphics have been inserted into the text, or when a list appears in the document. The verbosity settings can also control the level of descriptiveness of elements, such as lists, tables, and regions.[18] For example, JAWS provides low, medium, and high web verbosity preset levels. The high web verbosity level provides more detail about the contents of a webpage.[19]
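Conceptually, verbosity acts as a filter applied before text reaches the synthesizer: each kind of announcement has a minimum level at which it is spoken. A schematic Python sketch, with hypothetical levels and event names loosely modelled on the presets described above:

```python
# Schematic sketch of verbosity as a filter on announcements: each kind
# of event has a minimum verbosity level at which it is spoken. The
# levels and event names are hypothetical.
LOW, MEDIUM, HIGH = 1, 2, 3

MIN_LEVEL = {
    "text": LOW,            # actual content is always spoken
    "list-start": MEDIUM,   # e.g. "list with 2 items"
    "table-start": MEDIUM,  # e.g. "table with 3 columns"
    "graphic": HIGH,        # e.g. "graphic: decorative border"
    "region": HIGH,         # e.g. "navigation region"
}

def announce(events, level):
    """Return only the announcements audible at the given verbosity."""
    return [text for kind, text in events if level >= MIN_LEVEL[kind]]

events = [
    ("list-start", "list with 2 items"),
    ("text", "Apples"),
    ("text", "Oranges"),
    ("graphic", "graphic: fruit photo"),
]
print(announce(events, LOW))   # ['Apples', 'Oranges']
print(announce(events, HIGH))  # everything, including the graphic
```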

Language


Some screen readers can read text in more than one language, provided that the language of the material is encoded in its metadata.[20]

Screen reading programs like JAWS, NVDA, and VoiceOver also include language verbosity, which automatically detects the language of the material and applies the matching speech output settings. For example, if a user navigated to a website based in the United Kingdom, the text would be read with an English accent.[citation needed]
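In HTML, this metadata is the lang attribute. A small Python sketch of the idea, again using beautifulsoup4, with an entirely hypothetical mapping from language tags to synthesizer voices:

```python
# Sketch of metadata-driven language switching: pick a synthesizer voice
# from the HTML lang attribute. The voice names are hypothetical; real
# readers query the installed synthesizer for its available voices.
from bs4 import BeautifulSoup

VOICES = {"en-GB": "British English voice", "de": "German voice"}

html = '<p lang="en-GB">Colour and honour.</p><p lang="de">Guten Tag.</p>'
soup = BeautifulSoup(html, "html.parser")

for p in soup.find_all("p"):
    lang = p.get("lang", "en")
    voice = VOICES.get(lang, "default voice")
    print(f"[{voice}] {p.get_text()}")
```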

References

  1. ^"Types of Assistive Technology Products". Microsoft Accessibility. RetrievedJune 13, 2016.
  2. ^abc"Screen reading technology".AFB. RetrievedFebruary 23, 2022.
  3. ^"Screen Readers and how they work with E-Learning". Virginia.gov. Archived fromthe original on November 13, 2018. RetrievedMarch 31, 2019.
  4. ^"Hear text read aloud with Narrator".Microsoft. RetrievedJune 13, 2016.
  5. ^Coyier, Chris (October 29, 2007)."Accessibility Basics: How Does Your Page Look To A Screen Reader?". CSS-Tricks. RetrievedJune 13, 2016.
  6. ^"What is a Screen Reader".Nomensa. RetrievedJuly 9, 2017.
  7. ^"Screen Reader User Survey #9".WebAIM. RetrievedJuly 1, 2021.
  8. ^"ChromeVox". Google. RetrievedMarch 9, 2020.
  9. ^Cooke, Annemarie (March 2004)."A History of Accessibility at IBM".The American Foundation for the Blind (AFB).
  10. ^ab"Making A Difference Award (2009) — Jim Thatcher (interview)".SIGCAS, the Association for Computing Machinery Special Interest Group for Computers and Society. 2009.
  11. ^"Talking Terminals. BYTE, September 1982". Archived fromthe original on June 25, 2006. RetrievedSeptember 7, 2006.
  12. ^Paul Blenkhorn, "TheRCEVH project on micro-computer systems and computer assisted learning", British Journal of Visual Impairment, 4/3, 101-103 (1986).Free HTML version at VisugateArchived September 28, 2007, at theWayback Machine.
  13. ^"Access to personal computers using speech synthesis. RNIB New Beacon No.76, May 1992". March 3, 2014.
  14. ^According to "Making theGUI Talk[dead link]" (by Richard Schwerdtfeger,BYTE December 1991, p. 118-128), the first screen reader to build an off-screen model was outSPOKEN.
  15. ^Implementing Accessibility on Android.
  16. ^Apple AccessibilityAPI.
  17. ^"Oracle Technology Network for Java Developers – Oracle Technology Network – Oracle".
  18. ^Zong, Jonathan; Lee, Crystal; Lundgard, Alan; Jang, JiWoong; Hajas, Daniel; Satyanarayan, Arvind (2022). "Rich Screen Reader Experiences for Accessible Data Visualization".Computer Graphics Forum.41 (3):15–27.arXiv:2205.04917.doi:10.1111/cgf.14519.ISSN 0167-7055.S2CID 248665696.
  19. ^"JAWS Web Verbosity".www.freedomscientific.com. RetrievedNovember 6, 2022.
  20. ^Chris Heilmann (March 13, 2008)."Yahoo! search results now with natural language support".Yahoo! Developer Network Blog.Archived from the original on January 25, 2009. RetrievedFebruary 28, 2015.