Sylber: Syllabic Embedding Representation of Speech from Raw Audio

Paper | Audio Samples

Sylber is the first model of its kind to produce extremely short token sequences from raw audio (on average, 4.27 tokens/sec) through dynamic tokenization at syllable granularity.
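To make the idea of dynamic tokenization concrete, here is a minimal, self-contained sketch (not Sylber's actual implementation) of segment-averaged pooling: frame-level features are averaged within each detected syllable span, so one token is emitted per variable-length syllable rather than per fixed-rate frame. The feature values and segment boundaries below are toy numbers.

```python
def pool_segments(frames, segments):
    """Average frame-level features within each [start, end) frame span,
    yielding one token vector per detected syllable."""
    tokens = []
    for start, end in segments:
        span = frames[start:end]
        dim = len(span[0])
        tokens.append([sum(f[d] for f in span) / len(span) for d in range(dim)])
    return tokens

# Toy example: 6 frames of 2-dim features, grouped into two syllable spans
# of different lengths (this is what makes the token rate "dynamic").
frames = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0], [6.0, 7.0], [8.0, 9.0], [10.0, 11.0]]
segments = [(0, 2), (2, 6)]
tokens = pool_segments(frames, segments)
print(tokens)  # [[1.0, 2.0], [7.0, 8.0]]
```

Six frames collapse to two tokens here; with real speech, the syllable detector decides the spans, so the output rate tracks the speaking rate instead of a fixed frame rate.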

The model is developed and trained by Berkeley Speech Group.

Updates

03/02/2025

  1. Released the inference package on PyPI.

01/22/2025

  1. Sylber is accepted at ICLR 2025!

12/25/2024

  1. Initial code release with training and inference pipelines.
  2. Checkpoint release

Installation

The model can be installed from PyPI for inference:

```bash
pip install sylber
```

Please check the demo notebook for usage. For training, follow the instructions below.

Usage

```python
from sylber import Segmenter

# Load Sylber
segmenter = Segmenter(model_ckpt="sylber")

# Run Sylber
wav_file = "samples/sample.wav"
outputs = segmenter(wav_file, in_second=True)
# in_second can be False to output segments in frame numbers.

# outputs = {"segments": numpy array of [start, end] of segments,
#            "segment_features": numpy array of segment-averaged features,
#            "hidden_states": numpy array of raw features used for segmentation}
```
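A quick sketch of post-processing the `outputs` dict described above, e.g. to measure the token rate. The dict here is mocked with toy values in the documented `"segments"` format, and the utterance length is an assumed number, so the block runs without the model:

```python
# Mocked Segmenter output: [start, end] times (seconds) of detected syllables.
outputs = {
    "segments": [[0.10, 0.32], [0.35, 0.60], [0.66, 0.90], [0.95, 1.20]],
}
audio_duration = 1.25  # seconds (assumed utterance length for this toy example)

n_tokens = len(outputs["segments"])
durations = [end - start for start, end in outputs["segments"]]
rate = n_tokens / audio_duration

print(f"{n_tokens} syllable tokens, {rate:.2f} tokens/sec")        # 4 syllable tokens, 3.20 tokens/sec
print(f"mean syllable duration: {sum(durations)/len(durations):.3f} s")  # mean syllable duration: 0.240 s
```

With `in_second=False`, the entries in `"segments"` are frame indices instead of seconds, so the same arithmetic would need the model's frame rate to convert back to time.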

Environment

Install the dependencies from `requirements.txt`:

```bash
pip install -r requirements.txt
```

Training SYLBER

Datasets and Checkpoints

  1. Noise Dataset for WavLM-based Augmentation: The noise dataset for the WavLM noise augmentation is sourced from the DNS Challenge. You can use the following script to download the dataset:

     ```bash
     bash download-dns-challenge-3.sh
     ```

     and untar `datasets_fullband/datasets_fullband.noise_fullband.tar.bz2`.

  2. Generated Datasets: The other data used for training SYLBER are generated using the SDHuBERT repository. Please follow the instructions there for data preparation.

  3. Checkpoints: Pretrained model checkpoints for sylber are available on Google Drive: link

Stage 1 Training

```bash
python train.py --config-name=sylber_base
```

Stage 2 Training

```bash
python train.py --config-name=sylber_base_stage2
```

The training is split into two stages. Make sure to review the configurations in the `configs/` directory for detailed settings.

Inference

Segmentation and Visualization

For inference to obtain segmentations and visualize results, please refer to `demo.ipynb`.

SPARC (formerly known as Articulatory Encodec)

For using SPARC, refer to Speech-Articulatory-Coding for installation and usage instructions.

Acknowledgements

Website adapted from: https://github.com/BytedanceSpeech/bytedancespeech.github.io

Citation

If you use this work, please cite our paper:

```bibtex
@article{cho2024sylber,
  title={Sylber: Syllabic Embedding Representation of Speech from Raw Audio},
  author={Cho, Cheol Jun and Lee, Nicholas and Gupta, Akshat and Agarwal, Dhruv and Chen, Ethan and Black, Alan W and Anumanchipalli, Gopala K},
  journal={arXiv preprint arXiv:2410.07168},
  year={2024}
}
```
