whisper.cpp
Port of OpenAI's Whisper model in C/C++
High-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model:
- Plain C/C++ implementation without dependencies
- Apple Silicon first-class citizen - optimized via ARM NEON, Accelerate framework, Metal and Core ML
- AVX intrinsics support for x86 architectures
- VSX intrinsics support for POWER architectures
- Mixed F16 / F32 precision
- Integer quantization support
- Zero memory allocations at runtime
- Vulkan support
- Support for CPU-only inference
- Efficient GPU support for NVIDIA
- OpenVINO Support
- Ascend NPU Support
- Moore Threads GPU Support
- C-style API
- Voice Activity Detection (VAD)
Supported platforms:
- Mac OS (Intel and Arm)
- iOS
- Android
- Java
- Linux / FreeBSD
- WebAssembly
- Windows (MSVC and MinGW)
- Raspberry Pi
- Docker
The entire high-level implementation of the model is contained in whisper.h and whisper.cpp. The rest of the code is part of the ggml machine learning library.
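Because the public interface is a plain C-style API declared in whisper.h, the library can be called directly from your own C or C++ code. The snippet below is a minimal sketch rather than a sample taken from this repository: it assumes a ggml model at models/ggml-base.en.bin and, instead of decoding a real audio file, feeds one second of silence just to show the call sequence (load the model, run whisper_full, read the segments).

```c
#include "whisper.h"

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // load a ggml model (the path is an example - adjust to your setup)
    struct whisper_context_params cparams = whisper_context_default_params();
    struct whisper_context * ctx = whisper_init_from_file_with_params("models/ggml-base.en.bin", cparams);
    if (ctx == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // in a real application this buffer holds 16 kHz mono float32 PCM
    // decoded from your audio source; here it is just 1 s of silence
    const int n_samples = 16000;
    float * pcmf32 = calloc(n_samples, sizeof(float));

    // run the full encoder + decoder pipeline with default (greedy) parameters
    struct whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    if (whisper_full(ctx, wparams, pcmf32, n_samples) != 0) {
        fprintf(stderr, "failed to run inference\n");
        return 1;
    }

    // print the transcribed segments
    const int n_segments = whisper_full_n_segments(ctx);
    for (int i = 0; i < n_segments; ++i) {
        printf("%s\n", whisper_full_get_segment_text(ctx, i));
    }

    free(pcmf32);
    whisper_free(ctx);
    return 0;
}
```

Link it against the library built in the quick-start steps below; the programs in the examples folder (such as whisper-cli) show complete usage of the same API, including audio decoding.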
Having such a lightweight implementation of the model makes it easy to integrate into different platforms and applications. As an example, here is a video of running the model on an iPhone 13 device - fully offline, on-device: whisper.objc
whisper-iphone-13-mini-2.mp4
You can also easily make your own offline voice assistant application: command
command-0.mp4
On Apple Silicon, the inference runs fully on the GPU via Metal:
metal-base-1.mp4
First clone the repository:
git clone https://github.com/ggml-org/whisper.cpp.git
Navigate into the directory:
cd whisper.cpp
Then, download one of the Whisper models converted to ggml format. For example:
sh ./models/download-ggml-model.sh base.en
Now build the whisper-cli example and transcribe an audio file like this:
```
# build the project
cmake -B build
cmake --build build -j --config Release

# transcribe an audio file
./build/bin/whisper-cli -f samples/jfk.wav
```
For a quick demo, simply run make base.en.

The command downloads the base.en model converted to custom ggml format and runs the inference on all .wav samples in the folder samples.
For detailed usage instructions, run: ./build/bin/whisper-cli -h
Note that the whisper-cli example currently runs only with 16-bit WAV files, so make sure to convert your input before running the tool. For example, you can use ffmpeg like this:
ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
If you want some extra audio samples to play with, simply run:
make -j samples
This will download a few more audio files from Wikipedia and convert them to 16-bit WAV format via ffmpeg.
You can download and run the other models as follows:
```
make -j tiny.en
make -j tiny
make -j base.en
make -j base
make -j small.en
make -j small
make -j medium.en
make -j medium
make -j large-v1
make -j large-v2
make -j large-v3
make -j large-v3-turbo
```
Model | Disk | Mem |
---|---|---|
tiny | 75 MiB | ~273 MB |
base | 142 MiB | ~388 MB |
small | 466 MiB | ~852 MB |
medium | 1.5 GiB | ~2.1 GB |
large | 2.9 GiB | ~3.9 GB |
whisper.cpp supports POWER architectures and includes code which significantly speeds operation on Linux running on POWER9/10, making it capable of faster-than-realtime transcription on underclocked Raptor Talos II. Ensure you have a BLAS package installed, and replace the standard cmake setup with:
```
# build with GGML_BLAS defined
cmake -B build -DGGML_BLAS=1
cmake --build build -j --config Release
./build/bin/whisper-cli [ .. etc .. ]
```
whisper.cpp supports integer quantization of the Whisper ggml models. Quantized models require less memory and disk space and, depending on the hardware, can be processed more efficiently.
Here are the steps for creating and using a quantized model:
```
# quantize a model with Q5_0 method
cmake -B build
cmake --build build -j --config Release
./build/bin/quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0

# run the examples as usual, specifying the quantized model file
./build/bin/whisper-cli -m models/ggml-base.en-q5_0.bin ./samples/gb0.wav
```
On Apple Silicon devices, the Encoder inference can be executed on the Apple Neural Engine (ANE) via Core ML. This can result in a significant speed-up - more than 3x faster compared with CPU-only execution. Here are the instructions for generating a Core ML model and using it with whisper.cpp:
Install Python dependencies needed for the creation of the Core ML model:
```
pip install ane_transformers
pip install openai-whisper
pip install coremltools
```
- To ensure coremltools operates correctly, please confirm that Xcode is installed and execute xcode-select --install to install the command-line tools.
- Python 3.11 is recommended.
- MacOS Sonoma (version 14) or newer is recommended, as older versions of MacOS might experience issues with transcription hallucination.
- [OPTIONAL] It is recommended to utilize a Python version management system, such as Miniconda, for this step:
  - To create an environment, use: conda create -n py311-whisper python=3.11 -y
  - To activate the environment, use: conda activate py311-whisper
Generate a Core ML model. For example, to generate a base.en model, use:

./models/generate-coreml-model.sh base.en

This will generate the folder models/ggml-base.en-encoder.mlmodelc.
Build whisper.cpp with Core ML support:

```
# using CMake
cmake -B build -DWHISPER_COREML=1
cmake --build build -j --config Release
```
Run the examples as usual. For example:
```
$ ./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav

...

whisper_init_state: loading Core ML model from 'models/ggml-base.en-encoder.mlmodelc'
whisper_init_state: first run on a device may take a while ...
whisper_init_state: Core ML model loaded

system_info: n_threads = 4 / 10 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | COREML = 1 |

...
```
The first run on a device is slow, since the ANE service compiles the Core ML model to a device-specific format. Subsequent runs are faster.
For more information about the Core ML implementation, please refer to PR #566.
On platforms that support OpenVINO, the Encoder inference can be executed on OpenVINO-supported devices including x86 CPUs and Intel GPUs (integrated & discrete). This can result in significant speedup in encoder performance. Here are the instructions for generating the OpenVINO model and using it with whisper.cpp:
First, set up a Python virtual environment and install the Python dependencies. Python 3.10 is recommended.
Windows:
```
cd models
python -m venv openvino_conv_env
openvino_conv_env\Scripts\activate
python -m pip install --upgrade pip
pip install -r requirements-openvino.txt
```
Linux and macOS:
```
cd models
python3 -m venv openvino_conv_env
source openvino_conv_env/bin/activate
python -m pip install --upgrade pip
pip install -r requirements-openvino.txt
```
Generate an OpenVINO encoder model. For example, to generate a base.en model, use:

python convert-whisper-to-openvino.py --model base.en

This will produce ggml-base.en-encoder-openvino.xml/.bin IR model files. It's recommended to relocate these to the same folder as the ggml models, as that is the default location that the OpenVINO extension will search at runtime.

Build whisper.cpp with OpenVINO support:

Download the OpenVINO package from the release page. The recommended version to use is 2024.6.0. Ready-to-use binaries of the required libraries can be found in the OpenVino Archives.
After downloading and extracting the package onto your development system, set up the required environment by sourcing the setupvars script. For example:
Linux:
source /path/to/l_openvino_toolkit_ubuntu22_2023.0.0.10926.b4452d56304_x86_64/setupvars.sh
Windows (cmd):
C:\Path\To\w_openvino_toolkit_windows_2023.0.0.10926.b4452d56304_x86_64\setupvars.bat
And then build the project using cmake:
```
cmake -B build -DWHISPER_OPENVINO=1
cmake --build build -j --config Release
```
Run the examples as usual. For example:
```
$ ./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav

...

whisper_ctx_init_openvino_encoder: loading OpenVINO model from 'models/ggml-base.en-encoder-openvino.xml'
whisper_ctx_init_openvino_encoder: first run on a device may take a while ...
whisper_openvino_init: path_model = models/ggml-base.en-encoder-openvino.xml, device = GPU, cache_dir = models/ggml-base.en-encoder-openvino-cache
whisper_ctx_init_openvino_encoder: OpenVINO model loaded

system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | COREML = 0 | OPENVINO = 1 |

...
```
The first run on an OpenVINO device is slow, since the OpenVINO framework will compile the IR (Intermediate Representation) model to a device-specific 'blob'. This device-specific blob will get cached for the next run.

For more information about the OpenVINO implementation, please refer to PR #1037.
With NVIDIA cards the processing of the models is done efficiently on the GPU via cuBLAS and custom CUDA kernels. First, make sure you have installed CUDA: https://developer.nvidia.com/cuda-downloads

Now build whisper.cpp with CUDA support:
```
cmake -B build -DGGML_CUDA=1
cmake --build build -j --config Release
```
or, for newer NVIDIA GPUs (RTX 5000 series):
```
cmake -B build -DGGML_CUDA=1 -DCMAKE_CUDA_ARCHITECTURES="86"
cmake --build build -j --config Release
```
Vulkan is a cross-vendor solution which allows you to accelerate workloads on your GPU. First, make sure your graphics card driver provides support for the Vulkan API.

Now build whisper.cpp with Vulkan support:
```
cmake -B build -DGGML_VULKAN=1
cmake --build build -j --config Release
```
Encoder processing can be accelerated on the CPU via OpenBLAS. First, make sure you have installed OpenBLAS: https://www.openblas.net/

Now build whisper.cpp with OpenBLAS support:
```
cmake -B build -DGGML_BLAS=1
cmake --build build -j --config Release
```
Ascend NPU provides inference acceleration via CANN and AI cores.
First, check if your Ascend NPU device is supported:
Verified devices
Ascend NPU | Status |
---|---|
Atlas 300T A2 | Support |
Then, make sure you have installed the CANN toolkit. The latest version of CANN is recommended.
Now build whisper.cpp with CANN support:
```
cmake -B build -DGGML_CANN=1
cmake --build build -j --config Release
```
Run the inference examples as usual, for example:
./build/bin/whisper-cli -f samples/jfk.wav -m models/ggml-base.en.bin -t 8
Notes:
- If you have trouble with your Ascend NPU device, please create an issue with the [CANN] prefix/tag.
- If you run successfully with your Ascend NPU device, please help update the Verified devices table.
With Moore Threads cards the processing of the models is done efficiently on the GPU via muBLAS and custom MUSA kernels. First, make sure you have installed MUSA SDK rc4.0.1: https://developer.mthreads.com/sdk/download/musa?equipment=&os=&driverVersion=&version=4.0.1
Now build whisper.cpp with MUSA support:
```
cmake -B build -DGGML_MUSA=1
cmake --build build -j --config Release
```
or specify the architecture for your Moore Threads GPU. For example, if you have a MTT S80 GPU, you can specify the architecture as follows:
```
cmake -B build -DGGML_MUSA=1 -DMUSA_ARCHITECTURES="21"
cmake --build build -j --config Release
```
If you want to support more audio formats (such as Opus and AAC), you can turn on the WHISPER_FFMPEG build flag to enable FFmpeg integration.
First, you need to install required libraries:
```
# Debian/Ubuntu
sudo apt install libavcodec-dev libavformat-dev libavutil-dev

# RHEL/Fedora
sudo dnf install libavcodec-free-devel libavformat-free-devel libavutil-free-devel
```
Then you can build the project as follows:
```
cmake -B build -D WHISPER_FFMPEG=yes
cmake --build build
```
Run the following example to confirm it's working:
```
# Convert an audio file to Opus format
ffmpeg -i samples/jfk.wav jfk.opus

# Transcribe the audio file
./build/bin/whisper-cli --model models/ggml-base.en.bin --file jfk.opus
```
- Docker must be installed and running on your system.
- Create a folder to store big models & intermediate files (ex. /whisper/models)
We have two Docker images available for this project:
- ghcr.io/ggml-org/whisper.cpp:main: This image includes the main executable file as well as curl and ffmpeg. (platforms: linux/amd64, linux/arm64)
- ghcr.io/ggml-org/whisper.cpp:main-cuda: Same as main but compiled with CUDA support. (platforms: linux/amd64)
- ghcr.io/ggml-org/whisper.cpp:main-musa: Same as main but compiled with MUSA support. (platforms: linux/amd64)
```
# download model and persist it in a local folder
docker run -it --rm \
  -v path/to/models:/models \
  whisper.cpp:main "./models/download-ggml-model.sh base /models"

# transcribe an audio file
docker run -it --rm \
  -v path/to/models:/models \
  -v path/to/audios:/audios \
  whisper.cpp:main "whisper-cli -m /models/ggml-base.bin -f /audios/jfk.wav"

# transcribe an audio file in samples folder
docker run -it --rm \
  -v path/to/models:/models \
  whisper.cpp:main "whisper-cli -m /models/ggml-base.bin -f ./samples/jfk.wav"
```
You can install pre-built binaries for whisper.cpp or build it from source using Conan. Use the following command:
conan install --requires="whisper-cpp/[*]" --build=missing
For detailed instructions on how to use Conan, please refer to the Conan documentation.
Limitations:
- Inference only
This is a naive example of performing real-time inference on audio from your microphone. The stream tool samples the audio every half a second and runs the transcription continuously. More info is available in issue #10. You will need to have SDL2 installed for it to work properly.
```
cmake -B build -DWHISPER_SDL2=ON
cmake --build build -j --config Release
./build/bin/whisper-stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000
```
rt_esl_csgo_2.mp4
Adding the --print-colors argument will print the transcribed text using an experimental color coding strategy to highlight words with high or low confidence:
./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/gb0.wav --print-colors
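When using the C API directly instead of whisper-cli, the same per-token confidence values that drive the color coding are available programmatically. A minimal sketch, assuming whisper_full has already been run on ctx as in the earlier example:

```c
#include "whisper.h"

#include <stdio.h>

// print every decoded token together with its probability
// (the confidence value that --print-colors visualizes)
void print_token_confidence(struct whisper_context * ctx) {
    const int n_segments = whisper_full_n_segments(ctx);
    for (int i = 0; i < n_segments; ++i) {
        const int n_tokens = whisper_full_n_tokens(ctx, i);
        for (int j = 0; j < n_tokens; ++j) {
            printf("%s (p = %.2f)\n",
                   whisper_full_get_token_text(ctx, i, j),
                   whisper_full_get_token_p(ctx, i, j));
        }
    }
}
```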
The length of the generated text segments can be limited (experimental). For example, to limit the line length to a maximum of 16 characters, simply add -ml 16:
```
$ ./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -ml 16

whisper_model_load: loading model from './models/ggml-base.en.bin'
...
system_info: n_threads = 4 / 10 | AVX2 = 0 | AVX512 = 0 | NEON = 1 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 |

main: processing './samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...

[00:00:00.000 --> 00:00:00.850]   And so my
[00:00:00.850 --> 00:00:01.590]   fellow
[00:00:01.590 --> 00:00:04.140]   Americans, ask
[00:00:04.140 --> 00:00:05.660]   not what your
[00:00:05.660 --> 00:00:06.840]   country can do
[00:00:06.840 --> 00:00:08.430]   for you, ask
[00:00:08.430 --> 00:00:09.440]   what you can do
[00:00:09.440 --> 00:00:10.020]   for your
[00:00:10.020 --> 00:00:11.000]   country.
```
The --max-len argument can be used to obtain word-level timestamps. Simply use -ml 1:
```
$ ./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -ml 1

whisper_model_load: loading model from './models/ggml-base.en.bin'
...
system_info: n_threads = 4 / 10 | AVX2 = 0 | AVX512 = 0 | NEON = 1 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 |

main: processing './samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...

[00:00:00.000 --> 00:00:00.320]
[00:00:00.320 --> 00:00:00.370]   And
[00:00:00.370 --> 00:00:00.690]   so
[00:00:00.690 --> 00:00:00.850]   my
[00:00:00.850 --> 00:00:01.590]   fellow
[00:00:01.590 --> 00:00:02.850]   Americans
[00:00:02.850 --> 00:00:03.300]  ,
[00:00:03.300 --> 00:00:04.140]   ask
[00:00:04.140 --> 00:00:04.990]   not
[00:00:04.990 --> 00:00:05.410]   what
[00:00:05.410 --> 00:00:05.660]   your
[00:00:05.660 --> 00:00:06.260]   country
[00:00:06.260 --> 00:00:06.600]   can
[00:00:06.600 --> 00:00:06.840]   do
[00:00:06.840 --> 00:00:07.010]   for
[00:00:07.010 --> 00:00:08.170]   you
[00:00:08.170 --> 00:00:08.190]  ,
[00:00:08.190 --> 00:00:08.430]   ask
[00:00:08.430 --> 00:00:08.910]   what
[00:00:08.910 --> 00:00:09.040]   you
[00:00:09.040 --> 00:00:09.320]   can
[00:00:09.320 --> 00:00:09.440]   do
[00:00:09.440 --> 00:00:09.760]   for
[00:00:09.760 --> 00:00:10.020]   your
[00:00:10.020 --> 00:00:10.510]   country
[00:00:10.510 --> 00:00:11.000]  .
```
More information about this approach is available here: #1058
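The segment boundaries produced this way can also be read through the C API once whisper_full has run. A minimal sketch; whisper_full_get_segment_t0/t1 return timestamps in units of 10 ms:

```c
#include "whisper.h"

#include <stdio.h>

// print each segment with its start/end time in milliseconds
void print_segments_with_timestamps(struct whisper_context * ctx) {
    const int n_segments = whisper_full_n_segments(ctx);
    for (int i = 0; i < n_segments; ++i) {
        const long long t0_ms = 10 * (long long) whisper_full_get_segment_t0(ctx, i);
        const long long t1_ms = 10 * (long long) whisper_full_get_segment_t1(ctx, i);
        printf("[%lld ms --> %lld ms]%s\n", t0_ms, t1_ms,
               whisper_full_get_segment_text(ctx, i));
    }
}
```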
Speaker segmentation via tinydiarize is available as an experimental feature: using a tdrz-enabled model together with the -tdrz argument annotates speaker turns in the output as [SPEAKER_TURN]. Sample usage:
```
# download a tinydiarize compatible model
./models/download-ggml-model.sh small.en-tdrz

# run as usual, adding the "-tdrz" command-line argument
./build/bin/whisper-cli -f ./samples/a13.wav -m ./models/ggml-small.en-tdrz.bin -tdrz
...
main: processing './samples/a13.wav' (480000 samples, 30.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, tdrz = 1, timestamps = 1 ...
...
[00:00:00.000 --> 00:00:03.800]   Okay Houston, we've had a problem here. [SPEAKER_TURN]
[00:00:03.800 --> 00:00:06.200]   This is Houston. Say again please. [SPEAKER_TURN]
[00:00:06.200 --> 00:00:08.260]   Uh Houston we've had a problem.
[00:00:08.260 --> 00:00:11.320]   We've had a main beam up on a volt. [SPEAKER_TURN]
[00:00:11.320 --> 00:00:13.820]   Roger main beam interval. [SPEAKER_TURN]
[00:00:13.820 --> 00:00:15.100]   Uh uh [SPEAKER_TURN]
[00:00:15.100 --> 00:00:18.020]   So okay stand, by thirteen we're looking at it. [SPEAKER_TURN]
[00:00:18.020 --> 00:00:25.740]   Okay uh right now uh Houston the uh voltage is uh is looking good um.
[00:00:27.620 --> 00:00:29.940]   And we had a a pretty large bank or so.
```
The whisper-cli example provides support for output of karaoke-style movies, where the currently pronounced word is highlighted. Use the -owts argument and run the generated bash script. This requires ffmpeg to be installed.
Here are a few "typical" examples:
```
./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -owts
source ./samples/jfk.wav.wts
ffplay ./samples/jfk.wav.mp4
```
jfk.wav.mp4
```
./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/mm0.wav -owts
source ./samples/mm0.wav.wts
ffplay ./samples/mm0.wav.mp4
```
mm0.wav.mp4
```
./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/gb0.wav -owts
source ./samples/gb0.wav.wts
ffplay ./samples/gb0.wav.mp4
```
gb0.wav.mp4
Use the scripts/bench-wts.sh script to generate a video in the following format:
```
./scripts/bench-wts.sh samples/jfk.wav
ffplay ./samples/jfk.wav.all.mp4
```
jfk.wav.all.mp4
In order to have an objective comparison of the performance of the inference across different system configurations, use the whisper-bench tool. The tool simply runs the Encoder part of the model and prints how much time it took to execute it. The results are summarized in the following GitHub issue:
Additionally, a script to run whisper.cpp with different models and audio files is provided: bench.py.

You can run it with the following command; by default it will run against any standard model in the models folder.
python3 scripts/bench.py -f samples/jfk.wav -t 2,4,8 -p 1,2
It is written in Python with the intention of being easy to modify and extend for your benchmarking use case. It outputs a CSV file with the results of the benchmarking.
The original models are converted to a custom binary format. This allows packing everything needed into a single file:
- model parameters
- mel filters
- vocabulary
- weights
You can download the converted models using the models/download-ggml-model.sh script or manually from here:
For more details, see the conversion script models/convert-pt-to-ggml.py or models/README.md.
- Rust: tazz4843/whisper-rs | #310
- JavaScript: bindings/javascript
- React Native (iOS / Android): whisper.rn
- stlukey/whispercpp.py (Cython)
- AIWintermuteAI/whispercpp (Updated fork of aarnphm/whispercpp)
- aarnphm/whispercpp (Pybind11)
- abdeladim-s/pywhispercpp (Pybind11)
The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS, and macOS. It can be used in Swift projects without the need to compile the library from source. For example, the v1.7.5 version of the XCFramework can be used as follows:
```swift
// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "Whisper",
    targets: [
        .executableTarget(
            name: "Whisper",
            dependencies: [
                "WhisperFramework"
            ]),
        .binaryTarget(
            name: "WhisperFramework",
            url: "https://github.com/ggml-org/whisper.cpp/releases/download/v1.7.5/whisper-v1.7.5-xcframework.zip",
            checksum: "c7faeb328620d6012e130f3d705c51a6ea6c995605f2df50f6e1ad68c59c6c4a")
    ]
)
```
Support for Voice Activity Detection (VAD) can be enabled using the --vad argument to whisper-cli. In addition to this option, a VAD model is also required.
The way this works is that the audio samples are first passed through the VAD model, which detects speech segments. Using this information, only the detected speech segments are extracted from the original audio input and passed to whisper for processing. This reduces the amount of audio data that needs to be processed by whisper and can significantly speed up the transcription process.
The following VAD models are currently supported:
Silero-vad is a lightweight VAD model written in Python that is fast and accurate.
Models can be downloaded by running the following command on Linux or MacOS:
```
$ ./models/download-vad-model.sh silero-v5.1.2
Downloading ggml model silero-v5.1.2 from 'https://huggingface.co/ggml-org/whisper-vad' ...
ggml-silero-v5.1.2.bin  100%[==============================================>] 864.35K  --.-KB/s  in 0.04s
Done! Model 'silero-v5.1.2' saved in '/path/models/ggml-silero-v5.1.2.bin'
You can now use it like this:

  $ ./build/bin/whisper-cli -vm /path/models/ggml-silero-v5.1.2.bin --vad -f samples/jfk.wav -m models/ggml-base.en.bin
```
And the following command on Windows:
```
> .\models\download-vad-model.cmd silero-v5.1.2
Downloading vad model silero-v5.1.2...
Done! Model silero-v5.1.2 saved in C:\Users\danie\work\ai\whisper.cpp\ggml-silero-v5.1.2.bin
You can now use it like this:

C:\path\build\bin\Release\whisper-cli.exe -vm C:\path\ggml-silero-v5.1.2.bin --vad -m models/ggml-base.en.bin -f samples\jfk.wav
```
To see a list of all available models, run the above commands without any arguments.
This model can also be converted manually to ggml using the following command:
```
$ python3 -m venv venv && source venv/bin/activate
(venv) $ pip install silero-vad
(venv) $ python models/convert-silero-vad-to-ggml.py --output models/silero.bin
Saving GGML Silero-VAD model to models/silero-v5.1.2-ggml.bin
```
And it can then be used with whisper as follows:
```
$ ./build/bin/whisper-cli \
   --file ./samples/jfk.wav \
   --model ./models/ggml-base.en.bin \
   --vad \
   --vad-model ./models/silero-v5.1.2-ggml.bin
```
The following options can be used to tune the VAD behavior:

- --vad-threshold: Threshold probability for speech detection. A probability for a speech segment/frame above this threshold will be considered as speech.
- --vad-min-speech-duration-ms: Minimum speech duration in milliseconds. Speech segments shorter than this value will be discarded to filter out brief noise or false positives.
- --vad-min-silence-duration-ms: Minimum silence duration in milliseconds. Silence periods must be at least this long to end a speech segment. Shorter silence periods will be ignored and included as part of the speech.
- --vad-max-speech-duration-s: Maximum speech duration in seconds. Speech segments longer than this will be automatically split into multiple segments at silence points exceeding 98 ms to prevent excessively long segments.
- --vad-speech-pad-ms: Speech padding in milliseconds. Adds this amount of padding before and after each detected speech segment to avoid cutting off speech edges.
- --vad-samples-overlap: Amount of audio to extend from each speech segment into the next one, in seconds (e.g., 0.10 = 100 ms overlap). This ensures speech isn't cut off abruptly between segments when they're concatenated together.
There are various examples of using the library for different projects in the examples folder. Some of the examples are even ported to run in the browser using WebAssembly. Check them out!
Example | Web | Description |
---|---|---|
whisper-cli | whisper.wasm | Tool for translating and transcribing audio using Whisper |
whisper-bench | bench.wasm | Benchmark the performance of Whisper on your machine |
whisper-stream | stream.wasm | Real-time transcription of raw microphone capture |
whisper-command | command.wasm | Basic voice assistant example for receiving voice commands from the mic |
whisper-server | | HTTP transcription server with OAI-like API |
whisper-talk-llama | | Talk with a LLaMA bot |
whisper.objc | | iOS mobile application using whisper.cpp |
whisper.swiftui | | SwiftUI iOS / macOS application using whisper.cpp |
whisper.android | | Android mobile application using whisper.cpp |
whisper.nvim | | Speech-to-text plugin for Neovim |
generate-karaoke.sh | | Helper script to easily generate a karaoke video of raw audio capture |
livestream.sh | | Livestream audio transcription |
yt-wsp.sh | | Download + transcribe and/or translate any VOD (original) |
wchess | wchess.wasm | Voice-controlled chess |
If you have any kind of feedback about this project, feel free to use the Discussions section and open a new topic. You can use the Show and tell category to share your own projects that use whisper.cpp. If you have a question, make sure to check the Frequently asked questions (#126) discussion.