Data manipulation and transformation for audio signal processing, powered by PyTorch


pytorch/audio


Note

We have transitioned TorchAudio into a maintenance phase. This process removed some user-facing features, which were deprecated in TorchAudio 2.8 and removed in 2.9. Our main goals were to reduce redundancies with the rest of the PyTorch ecosystem, make it easier to maintain, and create a version of TorchAudio that is more tightly scoped to its strengths: processing audio data for ML. Please see our community message for more details.

The aim of torchaudio is to apply PyTorch to the audio domain. By supporting PyTorch, torchaudio follows the same philosophy of providing strong GPU acceleration, having a focus on trainable features through the autograd system, and having consistent style (tensor names and dimension names). Therefore, it is primarily a machine learning library and not a general signal processing library. The benefits of PyTorch show throughout torchaudio: all computations are expressed as PyTorch operations, which makes the library easy to use and feel like a natural extension of PyTorch.

Installation

Please refer to https://pytorch.org/audio/main/installation.html for the installation and build process of TorchAudio.

API Reference

The API reference is located here: http://pytorch.org/audio/main/

Contributing Guidelines

Please refer to CONTRIBUTING.md.

Citation

If you find this package useful, please cite as:

@article{yang2021torchaudio,
  title={TorchAudio: Building Blocks for Audio and Speech Processing},
  author={Yao-Yuan Yang and Moto Hira and Zhaoheng Ni and Anjali Chourdia and Artyom Astafurov and Caroline Chen and Ching-Feng Yeh and Christian Puhrsch and David Pollack and Dmitriy Genzel and Donny Greenberg and Edward Z. Yang and Jason Lian and Jay Mahadeokar and Jeff Hwang and Ji Chen and Peter Goldsborough and Prabhat Roy and Sean Narenthiran and Shinji Watanabe and Soumith Chintala and Vincent Quenneville-Bélair and Yangyang Shi},
  journal={arXiv preprint arXiv:2110.15018},
  year={2021}
}

@misc{hwang2023torchaudio,
  title={TorchAudio 2.1: Advancing speech recognition, self-supervised learning, and audio processing components for PyTorch},
  author={Jeff Hwang and Moto Hira and Caroline Chen and Xiaohui Zhang and Zhaoheng Ni and Guangzhi Sun and Pingchuan Ma and Ruizhe Huang and Vineel Pratap and Yuekai Zhang and Anurag Kumar and Chin-Yun Yu and Chuang Zhu and Chunxi Liu and Jacob Kahn and Mirco Ravanelli and Peng Sun and Shinji Watanabe and Yangyang Shi and Yumeng Tao and Robin Scheibler and Samuele Cornell and Sean Kim and Stavros Petridis},
  year={2023},
  eprint={2310.17864},
  archivePrefix={arXiv},
  primaryClass={eess.AS}
}

Disclaimer on Datasets

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!

Pre-trained Model License

The pre-trained models provided in this library may have their own licenses or terms and conditions derived from the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.

For instance, the SquimSubjective model is released under the Creative Commons Attribution Non-Commercial 4.0 International (CC-BY-NC 4.0) license. See the link for additional details.

Other pre-trained models that have different licenses are noted in the documentation. Please check out the documentation page.
