Note: This repository was archived by the owner on May 6, 2022, and is now read-only.


# Spokestack Android


Spokestack is an all-in-one solution for mobile voice interfaces on Android. It provides every piece of the speech processing puzzle, including voice activity detection, wakeword detection, speech recognition, natural language understanding (NLU), and speech synthesis (TTS). Under its default configuration (on newer Android devices), everything except TTS happens directly on the mobile device—no communication with the cloud means faster results and better privacy.

And Android isn't the only platform it supports!

Creating a free account at spokestack.io lets you train your own NLU models and test out TTS without adding code to your app. We can even train a custom wakeword and TTS voice for you, ensuring that your app's voice is unique and memorable.

For a brief introduction, read on; for more detailed guides, see the documentation at spokestack.io.

## Installation


Note: Spokestack used to be hosted on JCenter, but since the announcement of its discontinuation, we've moved distribution to Maven Central. Please ensure that your root-level `build.gradle` file includes `mavenCentral()` in its `repositories` block in order to access versions >= 11.0.2.


### A Note on API Level

The minimum Android SDK version listed in Spokestack's manifest is 8 because that's all you should need to run wakeword detection and speech recognition. To use other features, it's best to target at least API level 21.

If you include ExoPlayer for TTS playback (see below), you might have trouble running on versions of Android older than API level 24. If you run into this problem, try adding the following line to your `gradle.properties` file:

```properties
android.enableDexingArtifactTransform=false
```

### Dependencies

Add the following to your app's `build.gradle`:

```groovy
android {
  // ...
  compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
  }
}

// ...

dependencies {
  // ...

  // make sure to check the releases page for the latest version!
  implementation 'io.spokestack:spokestack-android:11.5.2'

  // for TensorFlow Lite-powered wakeword detection and/or NLU, add this one too
  implementation 'org.tensorflow:tensorflow-lite:2.6.0'

  // for automatic playback of TTS audio
  implementation 'androidx.media:media:1.3.0'
  implementation 'com.google.android.exoplayer:exoplayer-core:2.14.0'

  // if you plan to use Google ASR, include these
  implementation 'com.google.cloud:google-cloud-speech:1.22.2'
  implementation 'io.grpc:grpc-okhttp:1.28.0'

  // if you plan to use Azure Speech Service, include this, and
  // note that you'll also need to add the following to your top-level
  // build.gradle's `repositories` block:
  // maven { url 'https://csspeechstorage.blob.core.windows.net/maven/' }
  implementation 'com.microsoft.cognitiveservices.speech:client-sdk:1.9.0'
}
```

## Usage

See the quickstart guide for more information, but here's the 30-second version of setup:

1. You'll need to request the `RECORD_AUDIO` permission at runtime. See our skeleton project for an example of this, and a minimal sketch after the snippet below. The `INTERNET` permission is also required but is included by the library's manifest by default.
2. Add the following code somewhere, probably in an `Activity` if you're just starting out:
```kotlin
private lateinit var spokestack: Spokestack

// ...

spokestack = Spokestack.Builder()
    .setProperty("wake-detect-path", "$cacheDir/detect.tflite")
    .setProperty("wake-encode-path", "$cacheDir/encode.tflite")
    .setProperty("wake-filter-path", "$cacheDir/filter.tflite")
    .setProperty("nlu-model-path", "$cacheDir/nlu.tflite")
    .setProperty("nlu-metadata-path", "$cacheDir/metadata.json")
    .setProperty("wordpiece-vocab-path", "$cacheDir/vocab.txt")
    .setProperty("spokestack-id", "your-client-id")
    .setProperty("spokestack-secret", "your-secret-key")
    // `applicationContext` is available inside all `Activity`s
    .withAndroidContext(applicationContext)
    // see below; `listener` here inherits from `SpokestackAdapter`
    .addListener(listener)
    .build()

// ...

// starting the pipeline makes Spokestack listen for the wakeword
spokestack.start()
```
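As noted in step 1, `RECORD_AUDIO` must be granted before the pipeline can hear anything. Here's a minimal sketch using AndroidX's `ContextCompat`/`ActivityCompat` (the request-code constant and helper name are ours, not part of Spokestack):

```kotlin
import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// arbitrary constant used to match the result in onRequestPermissionsResult
private const val AUDIO_PERMISSION_REQUEST = 1

// returns true if the permission is already granted; otherwise prompts the
// user and returns false (the result arrives asynchronously)
fun ensureMicPermission(activity: Activity): Boolean {
    if (ContextCompat.checkSelfPermission(
            activity, Manifest.permission.RECORD_AUDIO
        ) == PackageManager.PERMISSION_GRANTED
    ) {
        return true
    }
    ActivityCompat.requestPermissions(
        activity,
        arrayOf(Manifest.permission.RECORD_AUDIO),
        AUDIO_PERMISSION_REQUEST
    )
    return false
}
```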

The builder example above assumes you're storing wakeword and NLU models in your app's cache directory; again, see the skeleton project for an example of decompressing these files from the assets bundle into this directory.

To use the demo "Spokestack" wakeword, download the TensorFlow Lite models: detect | encode | filter

If you don't want to bother with that yet, just disable wakeword detection and NLU, and you can leave out all the file paths above:

```kotlin
spokestack = Spokestack.Builder()
    .withoutWakeword()
    .withoutNlu()
    // ...
    .build()
```

In this case, you'll still need to `start()` Spokestack as above, but you'll also want to create a button somewhere that calls `spokestack.activate()` when pressed; this starts ASR, which transcribes user speech.
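For example, wiring `activate()` to a tap (the `micButton` id is hypothetical; any UI trigger works):

```kotlin
// inside your Activity, after building and starting Spokestack;
// `micButton` is a hypothetical Button in your layout
findViewById<Button>(R.id.micButton).setOnClickListener {
    // opens ASR; whatever the user says next will be transcribed
    spokestack.activate()
}
```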

Alternately, you can set Spokestack to start ASR any time it detects speech by using a non-default speech pipeline profile as described in the speech pipeline documentation. In this case you'd want the `VADTriggerAndroidASR` profile:

```kotlin
// replace
.withoutWakeword()
// with
.withPipelineProfile("io.spokestack.spokestack.profile.VADTriggerAndroidASR")
```

Note also the `addListener()` line during setup. Speech processing happens continuously on a background thread, so your app needs a way to find out when the user has spoken to it. Important events are delivered to a subclass of `SpokestackAdapter`; a minimal example follows the list below. Your subclass can override as many of the following event methods as you like. Choosing not to implement one won't break anything; you just won't receive those events.

- `speechEvent(SpeechContext.Event, SpeechContext)`: This communicates events from the speech pipeline, including everything from notifications that ASR has been activated/deactivated to partial and complete transcripts of user speech.
- `nluResult(NLUResult)`: When the NLU is enabled, user speech is automatically sent through NLU for classification. You'll want the results of that classification to help your app decide what to do next.
- `ttsEvent(TTSEvent)`: If you're managing TTS playback yourself, you'll want to know when speech you've synthesized is ready to play (the `AUDIO_AVAILABLE` event); even if you're not, the `PLAYBACK_COMPLETE` event may be helpful if you want to automatically reactivate the microphone after your app reads a response.
- `trace(SpokestackModule, String)`: This combines log/trace messages from every Spokestack module. Some modules include trace events in their own event methods, but each of those events is also sent here.
- `error(SpokestackModule, Throwable)`: This combines errors from every Spokestack module. Some modules include error events in their own event methods, but each of those events is also sent here.
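A minimal listener might look like the sketch below. The method signatures come from the list above, but accessor names like `transcript` and `intent`, the `RECOGNIZE` event, and the exact package paths are best confirmed against the Javadocs:

```kotlin
import android.util.Log
import io.spokestack.spokestack.SpeechContext
import io.spokestack.spokestack.SpokestackAdapter
import io.spokestack.spokestack.SpokestackModule
import io.spokestack.spokestack.nlu.NLUResult

class MyListener : SpokestackAdapter() {

    override fun speechEvent(event: SpeechContext.Event, context: SpeechContext) {
        when (event) {
            // ASR has produced a final transcript of user speech
            SpeechContext.Event.RECOGNIZE ->
                Log.i("MyApp", "heard: ${context.transcript}")
            else -> Unit
        }
    }

    override fun nluResult(result: NLUResult) {
        // use the classified intent to decide what the app does next
        Log.i("MyApp", "intent: ${result.intent}")
    }

    override fun error(module: SpokestackModule, err: Throwable) {
        Log.e("MyApp", "error in ${module.name}", err)
    }
}
```

An instance of this class is what gets passed to `addListener()` in the builder above.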

The quickstart guide contains sample implementations of most of these methods.

As we mentioned, classification is handled automatically if NLU is enabled, so the main methods you need to know about while Spokestack is running are:

- `start()`/`stop()`: Starts/stops the pipeline. While running, Spokestack uses the microphone to listen for your app's wakeword unless wakeword detection is disabled, in which case ASR must be activated another way. The pipeline should be stopped when Spokestack is no longer needed (or when the app is suspended) to free resources.
- `activate()`/`deactivate()`: Activates/deactivates ASR, which listens to and transcribes what the user says.
- `synthesize(SynthesisRequest)`: Sends text to Spokestack's cloud TTS service to be synthesized as audio. Under the default configuration, this audio will be played automatically when available. See the sketch after this list.
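For example (a minimal sketch; check the Javadocs for `SynthesisRequest.Builder`'s full set of options, such as voice and synthesis mode):

```kotlin
import io.spokestack.spokestack.tts.SynthesisRequest

// build a request for the text to synthesize; under the default
// configuration, playback begins automatically when audio is available
val request = SynthesisRequest.Builder("Hello from Spokestack!").build()
spokestack.synthesize(request)
```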

## Development

Maven is used for building/deployment, and the package is hosted at Maven Central.

This package requires the Android NDK to be installed and the `ANDROID_HOME` and `ANDROID_NDK_HOME` environment variables to be set. On macOS, `ANDROID_HOME` is usually `~/Library/Android/sdk` and `ANDROID_NDK_HOME` is usually `~/Library/Android/sdk/ndk/<version>`.

`ANDROID_NDK_HOME` can also be specified in your local Maven `settings.xml` file as the `android.ndk.path` property.
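For example (a sketch; the profile id is arbitrary, but the property must live in a profile that's active):

```xml
<profiles>
    <profile>
        <id>android</id>
        <properties>
            <android.ndk.path>/path/to/android-ndk</android.ndk.path>
        </properties>
    </profile>
</profiles>
<activeProfiles>
    <activeProfile>android</activeProfile>
</activeProfiles>
```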

### Testing/Coverage

```bash
mvn test jacoco:report
```

### Lint

```bash
mvn checkstyle:check
```

### Release

Ensure that your Sonatype/Maven Central credentials are in your user `settings.xml` (usually `~/.m2/settings.xml`):

```xml
<servers>
    <server>
        <id>ossrh</id>
        <username>sonatype-username</username>
        <password>sonatype-password</password>
    </server>
</servers>
```

On a non-master branch, run the following command. This will prompt you to enter a version number and tag for the new version, push the tag to GitHub, and deploy the package to the Sonatype repository.

```bash
mvn release:clean release:prepare release:perform
```

The Maven goal may fail due to a bug where it tries to upload the files twice, but the release has still happened.

Complete the process by creating and merging a pull request for the new branch on GitHub and updating the release notes by editing the tag.

For additional information about releasing, see http://maven.apache.org/maven-release/maven-release-plugin/

## License

Copyright 2021 Spokestack, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

