Web Speech API

The Web Speech API enables you to incorporate voice data into web apps. It has two parts: SpeechSynthesis (text-to-speech) and SpeechRecognition (asynchronous speech recognition).

Web speech concepts and usage

The Web Speech API enables web apps to handle voice data. It has two components:

  • Speech recognition is accessed via the SpeechRecognition interface, which provides the ability to recognize voice context from an audio source and allows your app to respond appropriately. Generally, you use the interface's constructor to create a new SpeechRecognition object. This object provides a number of event handlers to detect when speech is incoming from the device's microphone (or from an audio track). You can specify whether you want the speech recognition to use a service provided by the user's platform (the default) or be performed locally in the browser.
  • Speech synthesis is accessed via the SpeechSynthesis interface, a text-to-speech component that allows programs to read out their text content (normally via the device's default speech synthesizer). Different voice types are represented by SpeechSynthesisVoice objects, and different parts of text that you want to be spoken are represented by SpeechSynthesisUtterance objects. You can get these spoken by passing them to the SpeechSynthesis.speak() method.
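The two components above can be sketched in a few lines. The helper names below (speak, listen) are our own, not part of the API; the guards make the sketch a harmless no-op outside a browser:

```javascript
// Text-to-speech: returns false when speech synthesis is unavailable
// (e.g., outside a browser), true after queueing the utterance.
function speak(text) {
  if (typeof speechSynthesis === "undefined") return false;
  speechSynthesis.speak(new SpeechSynthesisUtterance(text));
  return true;
}

// Speech recognition: returns null when unsupported, otherwise the started
// recognizer. Chromium browsers expose a webkit-prefixed constructor,
// so we feature-detect both names.
function listen(onTranscript) {
  const Recognition =
    (typeof window !== "undefined" &&
      (window.SpeechRecognition || window.webkitSpeechRecognition)) ||
    null;
  if (!Recognition) return null;
  const recognition = new Recognition();
  recognition.addEventListener("result", (event) => {
    // results[0][0] is the best alternative of the first result.
    onTranscript(event.results[0][0].transcript);
  });
  recognition.start();
  return recognition;
}
```

In a supporting browser, `speak("Hello")` queues audible speech and `listen(console.log)` logs the first recognized phrase.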

For more details on using these features, see Using the Web Speech API.

Web Speech API Interfaces

Speech recognition

SpeechRecognition

The controller interface for the recognition service; this also handles the SpeechRecognitionEvent sent from the recognition service.
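A typical controller setup configures a few properties and wires the main events. The property names (lang, continuous, interimResults, maxAlternatives) and events (result, error, end) are part of the interface; the factory wrapper is our own sketch, guarded so it returns null where the API is absent:

```javascript
// Sketch: configure a recognizer and wire its main events.
// Returns null where the API is unavailable (e.g., outside a browser).
function createRecognizer() {
  const Recognition =
    (typeof window !== "undefined" &&
      (window.SpeechRecognition || window.webkitSpeechRecognition)) ||
    null;
  if (!Recognition) return null;

  const recognition = new Recognition();
  recognition.lang = "en-US";        // BCP 47 language tag
  recognition.continuous = true;     // keep listening after the first result
  recognition.interimResults = true; // deliver provisional results too
  recognition.maxAlternatives = 3;   // up to 3 guesses per result

  recognition.onresult = (event) => {
    const last = event.results[event.results.length - 1];
    console.log(last.isFinal ? "final:" : "interim:", last[0].transcript);
  };
  recognition.onerror = (event) => console.error("recognition error:", event.error);
  recognition.onend = () => console.log("recognition ended");
  return recognition;
}
```

Calling `createRecognizer().start()` in a browser begins capturing from the microphone after the user grants permission.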

SpeechRecognitionAlternative

Represents a single word that has been recognized by the speech recognition service.

SpeechRecognitionErrorEvent

Represents error messages from the recognition service.

SpeechRecognitionEvent

The event object for the result and nomatch events; it contains all the data associated with an interim or final speech recognition result.

SpeechRecognitionPhrase

Represents a phrase that can be passed into the speech recognition engine to be used for contextual biasing.

SpeechRecognitionResult

Represents a single recognition match, which may contain multiple SpeechRecognitionAlternative objects.

SpeechRecognitionResultList

Represents a list of SpeechRecognitionResult objects, or a single one if results are being captured in continuous mode.
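Because both the result list and each result expose length and numeric indexing, they can be walked with plain loops. This hypothetical helper collects the top-confidence transcript of each result; the mock below stands in for real result objects, so it runs anywhere:

```javascript
// Hypothetical helper: collect the best (index 0) transcript of each result.
// Works on a real SpeechRecognitionResultList or on any array-like mock,
// since both expose length and numeric indexing.
function bestTranscripts(results) {
  const transcripts = [];
  for (let i = 0; i < results.length; i++) {
    // Each result's alternatives are ordered by confidence; 0 is the best guess.
    transcripts.push(results[i][0].transcript);
  }
  return transcripts;
}

// Mock shaped like a two-result continuous-mode capture.
const mockResults = [
  [{ transcript: "open the", confidence: 0.92 }],
  [{ transcript: "pod bay doors", confidence: 0.88 }],
];
console.log(bestTranscripts(mockResults)); // → [ 'open the', 'pod bay doors' ]
```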

Speech synthesis

SpeechSynthesis

The controller interface for the speech service; this can be used to retrieve information about the synthesis voices available on the device, start and pause speech, and execute other commands.
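A quick sketch of the controller's main commands (getVoices, speak, pause, resume, cancel), wrapped in our own guard function so it stays loadable outside a browser:

```javascript
// Sketch of the SpeechSynthesis controller commands. Returns false when the
// API is unavailable (e.g., outside a browser), true after exercising it.
function demoSynthesisControls(text) {
  if (typeof speechSynthesis === "undefined") return false;

  // Voices may load asynchronously; getVoices() can be empty on first call.
  const voices = speechSynthesis.getVoices();
  console.log(`${voices.length} voices available`);

  speechSynthesis.speak(new SpeechSynthesisUtterance(text)); // queue speech
  speechSynthesis.pause();  // pause the current utterance
  speechSynthesis.resume(); // continue speaking
  // speechSynthesis.cancel() would empty the utterance queue entirely.
  return true;
}
```

Listening for the controller's voiceschanged event is the reliable way to get the full voice list, since getVoices() may return an empty array before voices have loaded.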

SpeechSynthesisErrorEvent

Contains information about any errors that occur while processing SpeechSynthesisUtterance objects in the speech service.

SpeechSynthesisEvent

Contains information about the current state of SpeechSynthesisUtterance objects that have been processed in the speech service.

SpeechSynthesisUtterance

Represents a speech request. It contains the content the speech service should read and information about how to read it (e.g., language, pitch, and volume).
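The specification bounds an utterance's pitch to 0–2, rate to 0.1–10, and volume to 0–1. The clamping helper below is our own (and testable without a browser); the guarded factory then applies the settings to a real utterance where the constructor exists:

```javascript
// Hypothetical helper: clamp requested settings into the specified ranges
// (pitch 0–2, rate 0.1–10, volume 0–1).
function clampUtteranceSettings({ pitch = 1, rate = 1, volume = 1 } = {}) {
  const clamp = (v, lo, hi) => Math.min(hi, Math.max(lo, v));
  return {
    pitch: clamp(pitch, 0, 2),
    rate: clamp(rate, 0.1, 10),
    volume: clamp(volume, 0, 1),
  };
}

// In a browser, apply the clamped settings to a real speech request;
// returns null where SpeechSynthesisUtterance is not defined.
function makeUtterance(text, settings) {
  if (typeof SpeechSynthesisUtterance === "undefined") return null;
  const utterance = new SpeechSynthesisUtterance(text);
  Object.assign(utterance, clampUtteranceSettings(settings), { lang: "en-US" });
  utterance.onend = () => console.log("finished speaking");
  return utterance;
}
```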

SpeechSynthesisVoice

Represents a voice that the system supports. Every SpeechSynthesisVoice is associated with a speech service and exposes information about its language, name, and URI.
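Voice objects expose lang, name, and default properties, so choosing one is ordinary filtering. This hypothetical picker prefers a voice flagged as default for the requested language; the plain-object mocks below stand in for real voices:

```javascript
// Hypothetical helper: pick the first voice whose lang matches a BCP 47
// prefix, preferring voices flagged as default. Works on real
// SpeechSynthesisVoice objects or plain mocks exposing lang/default/name.
function pickVoice(voices, langPrefix) {
  const matches = voices.filter((v) => v.lang.startsWith(langPrefix));
  return matches.find((v) => v.default) || matches[0] || null;
}

// Mock voice list (names are illustrative).
const mockVoices = [
  { name: "Anna", lang: "de-DE", default: false },
  { name: "Samantha", lang: "en-US", default: true },
  { name: "Daniel", lang: "en-GB", default: false },
];
console.log(pickVoice(mockVoices, "en").name); // → Samantha
```

In a browser, the chosen voice would be assigned to an utterance's voice property before calling speechSynthesis.speak().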

Window.speechSynthesis

Specified as part of a [NoInterfaceObject] interface called SpeechSynthesisGetter, and implemented by the Window object, the speechSynthesis property provides access to the SpeechSynthesis controller, and is therefore the entry point to speech synthesis functionality.

Deprecated interfaces

The concept of grammar has been removed from the Web Speech API. Related features remain in the specification and are still recognized by supporting browsers for backwards compatibility, but they have no effect on speech recognition services.

SpeechGrammar (Deprecated)

Represents words or patterns of words for the recognition service to recognize.

SpeechGrammarList (Deprecated)

Represents a list of SpeechGrammar objects.

Errors

For information on errors reported by the Speech API (for example, "language-not-supported" and "language-unavailable"), see the SpeechRecognitionErrorEvent.error documentation.

Security considerations

Access to the on-device speech recognition functionality of the Web Speech API is controlled by the on-device-speech-recognition Permissions-Policy directive.

Specifically, where a defined policy blocks usage, any attempts to call the API's SpeechRecognition.available() or SpeechRecognition.install() methods will fail.
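A hedged sketch of probing for on-device recognition before use. The option names below (langs, processLocally) follow the on-device speech recognition proposal as we understand it; treat both them and the returned status strings as assumptions if your browser disagrees:

```javascript
// Hedged sketch: check whether on-device recognition is ready for a language,
// installing the language pack if needed. Returns "unsupported" where the API
// (or the static methods) are absent — including when blocked by policy.
async function ensureOnDeviceRecognition(lang) {
  const Recognition =
    (typeof window !== "undefined" &&
      (window.SpeechRecognition || window.webkitSpeechRecognition)) ||
    null;
  if (!Recognition || typeof Recognition.available !== "function") {
    return "unsupported";
  }
  const status = await Recognition.available({ langs: [lang], processLocally: true });
  if (status === "available") return "ready";
  if (status === "downloadable" || status === "downloading") {
    const ok = await Recognition.install({ langs: [lang], processLocally: true });
    return ok ? "ready" : "install-failed";
  }
  return "unavailable";
}
```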

Examples

Our Web Speech API examples illustrate speech recognition and synthesis.

Specifications

  • Web Speech API (# speechreco-section)
  • Web Speech API (# tts-section)

Browser compatibility

  • api.SpeechRecognition
  • api.SpeechSynthesis

See also

This page was last modified by MDN contributors.
