Robust Speech Recognition via Large-Scale Weak Supervision

[Blog] [Paper] [Model card] [Colab example]

Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification.

Approach


A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. These tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing a single model to replace many stages of a traditional speech-processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets.
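
The multitask format can be inspected directly from the tokenizer. Below is a minimal sketch, assuming the whisper package is installed and that its whisper.tokenizer.get_tokenizer helper behaves as described here, which prints the special tokens prepended for a (hypothetical) Japanese transcription task:

from whisper.tokenizer import get_tokenizer

# Build a multilingual tokenizer configured for Japanese transcription.
tokenizer = get_tokenizer(multilingual=True, language="ja", task="transcribe")

# sot_sequence is assumed to hold the start-of-transcript, language, and task
# tokens, e.g. <|startoftranscript|><|ja|><|transcribe|>.
print(tokenizer.decode(list(tokenizer.sot_sequence)))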

Setup

We used Python 3.9.9 and PyTorch 1.10.1 to train and test our models, but the codebase is expected to be compatible with Python 3.8-3.11 and recent PyTorch versions. The codebase also depends on a few Python packages, most notably OpenAI's tiktoken for their fast tokenizer implementation. You can download and install (or update to) the latest release of Whisper with the following command:

pip install -U openai-whisper

Alternatively, the following command will pull and install the latest commit from this repository, along with its Python dependencies:

pip install git+https://github.com/openai/whisper.git

To update the package to the latest version of this repository, please run:

pip install --upgrade --no-deps --force-reinstall git+https://github.com/openai/whisper.git

It also requires the command-line tool ffmpeg to be installed on your system, which is available from most package managers:

# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg

# on Arch Linux
sudo pacman -S ffmpeg

# on MacOS using Homebrew (https://brew.sh/)
brew install ffmpeg

# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg

# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg
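
Once installed, you can confirm that ffmpeg is reachable on your PATH by printing its version:

ffmpeg -version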

You may need Rust installed as well, in case tiktoken does not provide a pre-built wheel for your platform. If you see installation errors during the pip install command above, please follow the Getting started page to install the Rust development environment. Additionally, you may need to configure the PATH environment variable, e.g. export PATH="$HOME/.cargo/bin:$PATH". If the installation fails with No module named 'setuptools_rust', you need to install setuptools_rust, e.g. by running:

pip install setuptools-rust

Available models and languages

There are six model sizes, four with English-only versions, offering speed and accuracy tradeoffs. Below are the names of the available models and their approximate memory requirements and inference speed relative to the large model. The relative speeds below were measured by transcribing English speech on an A100; real-world speed may vary significantly depending on many factors, including the language, the speaking speed, and the available hardware.

| Size   | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
|--------|------------|--------------------|--------------------|---------------|----------------|
| tiny   | 39 M       | tiny.en            | tiny               | ~1 GB         | ~10x           |
| base   | 74 M       | base.en            | base               | ~1 GB         | ~7x            |
| small  | 244 M      | small.en           | small              | ~2 GB         | ~4x            |
| medium | 769 M      | medium.en          | medium             | ~5 GB         | ~2x            |
| large  | 1550 M     | N/A                | large              | ~10 GB        | 1x             |
| turbo  | 809 M      | N/A                | turbo              | ~6 GB         | ~8x            |

The .en models for English-only applications tend to perform better, especially for the tiny.en and base.en models. We observed that the difference becomes less significant for the small.en and medium.en models. Additionally, the turbo model is an optimized version of large-v3 that offers faster transcription speed with a minimal degradation in accuracy.
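
The model names accepted by load_model() can also be listed programmatically; a minimal sketch, assuming the whisper.available_models() helper:

import whisper

# Lists the available model names, e.g. "tiny.en", "base", ..., "large", "turbo".
print(whisper.available_models())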

Whisper's performance varies widely depending on the language. The figure below shows a performance breakdown of the large-v3 and large-v2 models by language, using WERs (word error rates) or CERs (character error rates, shown in italic) evaluated on the Common Voice 15 and Fleurs datasets. Additional WER/CER metrics corresponding to the other models and datasets can be found in Appendix D.1, D.2, and D.4 of the paper, as well as the BLEU (Bilingual Evaluation Understudy) scores for translation in Appendix D.3.

[Figure: WER breakdown by language]

Command-line usage

The following command will transcribe speech in audio files, using the turbo model:

whisper audio.flac audio.mp3 audio.wav --model turbo

The default setting (which selects the turbo model) works well for transcribing English. To transcribe an audio file containing non-English speech, you can specify the language using the --language option:

whisper japanese.wav --language Japanese

Adding --task translate will translate the speech into English:

whisper japanese.wav --language Japanese --task translate

Run the following to view all available options:

whisper --help

See tokenizer.py for the list of all available languages.
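
As an illustration, the command below combines several of the options reported by whisper --help; the --output_dir and --output_format flags are assumptions based on that help output, and audio.mp3 is a placeholder file name:

whisper audio.mp3 --model turbo --language Japanese --task translate --output_dir transcripts --output_format srt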

Python usage

Transcription can also be performed within Python:

import whisper

model = whisper.load_model("turbo")
result = model.transcribe("audio.mp3")
print(result["text"])

Internally, the transcribe() method reads the entire file and processes the audio with a sliding 30-second window, performing autoregressive sequence-to-sequence predictions on each window.
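
The dictionary returned by transcribe() also carries per-segment details. The sketch below assumes the result includes a "segments" list whose entries have start, end, and text fields, which can be used to print simple timestamps:

import whisper

model = whisper.load_model("turbo")
result = model.transcribe("audio.mp3")

# Each segment is assumed to carry its start/end time in seconds and its text.
for segment in result["segments"]:
    print(f"[{segment['start']:7.2f} -> {segment['end']:7.2f}] {segment['text']}")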

Below is an example usage of whisper.detect_language() and whisper.decode(), which provide lower-level access to the model.

import whisper

model = whisper.load_model("turbo")

# load audio and pad/trim it to fit 30 seconds
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)

# make log-Mel spectrogram and move to the same device as the model
mel = whisper.log_mel_spectrogram(audio, n_mels=model.dims.n_mels).to(model.device)

# detect the spoken language
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")

# decode the audio
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)

# print the recognized text
print(result.text)

More examples

Please use the 🙌 Show and tell category in Discussions for sharing more example usages of Whisper and third-party extensions such as web demos, integrations with other tools, ports for different platforms, etc.

License

Whisper's code and model weights are released under the MIT License. See LICENSE for further details.
