Web Speech API

The Web Speech API enables you to incorporate voice data into web apps. The Web Speech API has two parts: SpeechSynthesis (Text-to-Speech) and SpeechRecognition (Asynchronous Speech Recognition).

Web Speech Concepts and Usage

The Web Speech API enables web apps to handle voice data. There are two components to this API:

  • Speech recognition is accessed via the SpeechRecognition interface, which provides the ability to recognize voice context from an audio input (normally via the device's default speech recognition service) and respond appropriately. Generally you'll use the interface's constructor to create a new SpeechRecognition object, which has a number of event handlers available for detecting when speech is input through the device's microphone. The SpeechGrammar interface represents a container for a particular set of grammar that your app should recognize. Grammar is defined using JSpeech Grammar Format (JSGF); a minimal sketch of both parts follows this list.
  • Speech synthesis is accessed via the SpeechSynthesis interface, a text-to-speech component that allows programs to read out their text content (normally via the device's default speech synthesizer). Different voice types are represented by SpeechSynthesisVoice objects, and different parts of text that you want to be spoken are represented by SpeechSynthesisUtterance objects. You can get these spoken by passing them to the SpeechSynthesis.speak() method.
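
As a rough illustration of both parts, the sketch below feature-detects the (possibly prefixed) SpeechRecognition constructor, logs whatever the recognizer hears, and then has the synthesizer speak a short phrase. The language tag and the spoken text are only example choices.

// Speech recognition: feature-detect the constructor, which is prefixed in some browsers.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.lang = "en-US";
recognition.onresult = (event) => {
  // Log the top alternative of the first result.
  console.log("Heard:", event.results[0][0].transcript);
};
recognition.start();

// Speech synthesis: wrap some text in an utterance and hand it to the controller.
const utterance = new SpeechSynthesisUtterance("Hello from the Web Speech API");
window.speechSynthesis.speak(utterance);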

For more details on using these features, see Using the Web Speech API.

Web Speech API Interfaces

Speech recognition

SpeechRecognition

The controller interface for the recognition service; this also handles the SpeechRecognitionEvent sent from the recognition service.

SpeechRecognitionAlternative

Represents a single word that has been recognized by the speech recognition service.

SpeechRecognitionErrorEvent

Represents error messages from the recognition service.

SpeechRecognitionEvent

The event object for the result and nomatch events; it contains all the data associated with an interim or final speech recognition result.

SpeechGrammar

The words or patterns of words that we want the recognition service to recognize.

SpeechGrammarList

Represents a list of SpeechGrammar objects.

SpeechRecognitionResult

Represents a single recognition match, which may contain multiple SpeechRecognitionAlternative objects.

SpeechRecognitionResultList

Represents a list of SpeechRecognitionResult objects, or a single one if results are being captured in continuous mode.
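
To illustrate how these result interfaces nest at runtime, here is a hedged sketch of a result handler: event.results is a SpeechRecognitionResultList, each entry in it is a SpeechRecognitionResult, and indexing into that gives a SpeechRecognitionAlternative with a transcript and a confidence score. The continuous and interimResults settings are only example choices.

const recognition = new (window.SpeechRecognition || window.webkitSpeechRecognition)();
recognition.continuous = true;     // keep listening and accumulate results
recognition.interimResults = true; // also deliver non-final results

recognition.addEventListener("result", (event) => {
  // event.results is a SpeechRecognitionResultList.
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const result = event.results[i]; // a SpeechRecognitionResult
    const best = result[0];          // its top SpeechRecognitionAlternative
    console.log(result.isFinal ? "final:" : "interim:", best.transcript, best.confidence);
  }
});
recognition.start();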

Speech synthesis

SpeechSynthesis

The controller interface for the speech service; this can be used to retrieve information about the synthesis voices available on the device, start and pause speech, and issue other commands besides.
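
A minimal sketch of those controller commands on the global speechSynthesis object; the setTimeout calls are not part of the API and are only there to space out the pause, resume, and cancel calls.

const synth = window.speechSynthesis;
synth.speak(new SpeechSynthesisUtterance("A fairly long sentence, so there is time to pause it."));

setTimeout(() => synth.pause(), 1000);  // pause the utterance currently being spoken
setTimeout(() => synth.resume(), 2000); // continue where it left off
setTimeout(() => synth.cancel(), 4000); // drop the current utterance and empty the queue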

SpeechSynthesisErrorEvent

Contains information about any errors that occur while processing SpeechSynthesisUtterance objects in the speech service.

SpeechSynthesisEvent

Contains information about the current state of SpeechSynthesisUtterance objects that have been processed in the speech service.

SpeechSynthesisUtterance

Represents a speech request. It contains the content the speech service should read and information about how to read it (e.g., language, pitch, and volume).
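
For example, a request might be configured as in the hedged sketch below before being handed to SpeechSynthesis.speak(); the particular text, pitch, rate, and volume values are arbitrary.

const utterance = new SpeechSynthesisUtterance("The quick brown fox jumps over the lazy dog.");
utterance.lang = "en-US"; // language to speak in
utterance.pitch = 1.2;    // 0 to 2, default 1
utterance.rate = 0.9;     // 0.1 to 10, default 1
utterance.volume = 0.8;   // 0 to 1, default 1
utterance.onend = () => console.log("Finished speaking.");
window.speechSynthesis.speak(utterance);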

SpeechSynthesisVoice

Represents a voice that the system supports. Every SpeechSynthesisVoice has its own relative speech service, including information about language, name, and URI.

Window.speechSynthesis

Specified as part of a [NoInterfaceObject] interface called SpeechSynthesisGetter and implemented by the Window object, the speechSynthesis property provides access to the SpeechSynthesis controller, and is therefore the entry point to speech synthesis functionality.
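
A short sketch of that entry point, assuming a browser that fires the voiceschanged event once its voice list is loaded; speakInFrench is only an illustrative helper name, and getVoices() can return an empty list before that event has fired.

const synth = window.speechSynthesis;

function speakInFrench(text) {
  const voices = synth.getVoices();
  const voice = voices.find((v) => v.lang.startsWith("fr")) || voices[0];
  const utterance = new SpeechSynthesisUtterance(text);
  if (voice) utterance.voice = voice;
  synth.speak(utterance);
}

synth.addEventListener("voiceschanged", () => speakInFrench("Bonjour !"));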

Errors

For information on errors reported by the Speech API (for example, "language-not-supported" and "language-unavailable"), see the documentation for SpeechRecognitionErrorEvent and SpeechSynthesisErrorEvent.

Examples

The Web Speech API examples on GitHub contain demos to illustrate speech recognition and synthesis.

Specifications

  • Web Speech API: #speechreco-section
  • Web Speech API: #tts-section

Browser compatibility

  • api.SpeechRecognition
  • api.SpeechSynthesis

See also

  • Using the Web Speech API
