HumeAI/hume-python-sdk
There were major breaking changes in version 0.7.0 of the SDK. If upgrading from a previous version, please view the Migration Guide. That release deprecated several interfaces and moved them to the `hume[legacy]` package extra. The `legacy` extra was removed in 0.9.0. The last version to include `legacy` was 0.8.6.
API reference documentation is available here.
The Hume Python SDK is compatible across several Python versions and operating systems.
- For the Empathic Voice Interface, Python versions 3.9 through 3.11 are supported on macOS and Linux.
- For Text-to-speech (TTS), Python versions 3.9 through 3.12 are supported on macOS, Linux, and Windows.
- For Expression Measurement, Python versions 3.9 through 3.12 are supported on macOS, Linux, and Windows.
Below is a table which shows the version and operating system compatibilities by product:
| Product | Python Version | Operating System |
|---|---|---|
| Empathic Voice Interface | 3.9, 3.10, 3.11 | macOS, Linux |
| Text-to-speech (TTS) | 3.9, 3.10, 3.11, 3.12 | macOS, Linux, Windows |
| Expression Measurement | 3.9, 3.10, 3.11, 3.12 | macOS, Linux, Windows |
Install the SDK with your package manager of choice:

```sh
pip install hume
# or
poetry add hume
# or
uv add hume
```
Instantiate and use the client with the following:

```python
from hume.client import HumeClient

client = HumeClient(api_key="YOUR_API_KEY")
client.empathic_voice.configs.list_configs()
```
The SDK also exports an async client so that you can make non-blocking calls to our API.
```python
import asyncio

from hume.client import AsyncHumeClient

client = AsyncHumeClient(api_key="YOUR_API_KEY")

async def main() -> None:
    await client.empathic_voice.configs.list_configs()

asyncio.run(main())
```
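Because the async client is non-blocking, independent requests can run concurrently. Here is a minimal sketch using `asyncio.gather`; it assumes the async client mirrors the sync surface, e.g. the `list_tools` method shown in the pagination section below:

```python
import asyncio

from hume.client import AsyncHumeClient

client = AsyncHumeClient(api_key="YOUR_API_KEY")

async def main() -> None:
    # Both requests are issued concurrently instead of one after the other.
    configs, tools = await asyncio.gather(
        client.empathic_voice.configs.list_configs(),
        client.empathic_voice.tools.list_tools(),  # assumed to mirror the sync client's list_tools
    )
    print(configs, tools)

asyncio.run(main())
```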
Writing files with an async stream of bytes can be tricky in Python! `aiofiles` can simplify this. For example, you can download your job artifacts like so:
```python
import asyncio

import aiofiles

from hume import AsyncHumeClient

client = AsyncHumeClient()

async def main() -> None:
    async with aiofiles.open('artifacts.zip', mode='wb') as file:
        async for chunk in client.expression_measurement.batch.get_job_artifacts(id="my-job-id"):
            await file.write(chunk)

asyncio.run(main())
```
This SDK contains the APIs for Empathic Voice, TTS, and Expression Measurement. Even if you do not plan on using more than one API to start, the SDK provides easy access in case you would like to use additional APIs in the future.
Each API is namespaced accordingly:
```python
from hume.client import HumeClient

client = HumeClient(api_key="YOUR_API_KEY")

client.empathic_voice          # APIs specific to Empathic Voice
client.tts                     # APIs specific to Text-to-speech
client.expression_measurement  # APIs specific to Expression Measurement
```
All errors thrown by the SDK will be subclasses of `ApiError`.
```python
import hume.client

try:
    client.expression_measurement.batch.get_job_predictions(...)
except hume.core.ApiError as e:
    # Handle all errors
    print(e.status_code)
    print(e.body)
```
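Since every `ApiError` carries the response's `status_code` and `body`, you can also branch on specific failures rather than treating all errors alike. A minimal sketch (the specific codes handled here are illustrative):

```python
import hume.client

try:
    client.expression_measurement.batch.get_job_predictions(...)
except hume.core.ApiError as e:
    if e.status_code == 404:
        # The job was not found -- surface a friendlier message.
        print("No such job:", e.body)
    elif e.status_code and e.status_code >= 500:
        # Server-side failure -- generally safe to retry later.
        print("Hume API error, try again later:", e.status_code)
    else:
        raise
```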
Paginated requests will return a `SyncPager` or `AsyncPager`, which can be used as generators for the underlying object. For example, `list_tools` will return a generator over `ReturnUserDefinedTool` and handle the pagination behind the scenes:
```python
from hume.client import HumeClient

client = HumeClient(api_key="YOUR_API_KEY")

for tool in client.empathic_voice.tools.list_tools():
    print(tool)
```
You can also iterate page-by-page:
```python
for page in client.empathic_voice.tools.list_tools().iter_pages():
    print(page.items)
```
Or manually:
```python
pager = client.empathic_voice.tools.list_tools()

# First page
print(pager.items)

# Second page
pager = pager.next_page()
print(pager.items)
```
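Because the pager is an ordinary iterable, you can also materialize every item at once; keep in mind this fetches all pages eagerly:

```python
# Collect every tool across all pages into a single list.
# This issues one request per page, so use it only when the
# full result set is reasonably small.
all_tools = list(client.empathic_voice.tools.list_tools())
print(len(all_tools))
```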
We expose a websocket client for interacting with the EVI API as well as Expression Measurement.
When interacting with these clients, you can use them very similarly to how you'd use the common `websockets` library:
```python
import asyncio
import os

from hume import AsyncHumeClient, StreamDataModels

client = AsyncHumeClient(api_key=os.getenv("HUME_API_KEY"))

async def main() -> None:
    async with client.expression_measurement.stream.connect(
        options={"config": StreamDataModels(...)}
    ) as hume_socket:
        print(await hume_socket.get_job_details())

asyncio.run(main())
```
The underlying connection, in this case `hume_socket`, will support intellisense/autocomplete for the different functions that are available on the socket!
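For example, the streaming socket exposes senders for the payloads you want analyzed in addition to `get_job_details`. The `send_text` call below is an assumption based on Hume's streaming examples, so lean on autocomplete to confirm the exact method names and signatures in your installed version:

```python
import asyncio
import os

from hume import AsyncHumeClient, StreamDataModels

client = AsyncHumeClient(api_key=os.getenv("HUME_API_KEY"))

async def main() -> None:
    async with client.expression_measurement.stream.connect(
        options={"config": StreamDataModels(...)}
    ) as hume_socket:
        # `send_text` is assumed here for illustration; use autocomplete on
        # `hume_socket` to see the senders your SDK version actually provides.
        result = await hume_socket.send_text("Mary had a little lamb")
        print(result)

asyncio.run(main())
```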
The Hume SDK is instrumented with automatic retries with exponential backoff. A request will be retried as long as the request is deemed retriable and the number of retry attempts has not grown larger than the configured retry limit.
A request is deemed retriable when the API responds with a retriable HTTP status code, such as 408 (Request Timeout), 429 (Too Many Requests), or a 5XX server error.
Use the `max_retries` request option to configure this behavior.
```python
from hume.client import HumeClient
from hume.core import RequestOptions

client = HumeClient(...)

# Override retries for a specific method
client.expression_measurement.batch.get_job_predictions(
    ...,
    request_options=RequestOptions(max_retries=5),
)
```
By default, requests time out after 60 seconds. You can configure this with a `timeout` option at the client or request level.
```python
from hume.client import HumeClient
from hume.core import RequestOptions

client = HumeClient(
    # All timeouts are 20 seconds
    timeout=20.0,
)

# Override timeout for a specific method
client.expression_measurement.batch.get_job_predictions(
    ...,
    request_options=RequestOptions(timeout_in_seconds=20),
)
```
You can override the httpx client to customize it for your use case. Some common use cases include support for proxies and transports.
```python
import httpx

from hume.client import HumeClient

client = HumeClient(
    http_client=httpx.Client(
        proxies="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
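For instance, if you want to tune connection pooling and transport-level timeouts directly, you could pass a customized `httpx.Client` the same way (the specific limits below are illustrative):

```python
import httpx

from hume.client import HumeClient

client = HumeClient(
    http_client=httpx.Client(
        # Cap the connection pool and keep-alive connections (values are illustrative).
        limits=httpx.Limits(max_connections=100, max_keepalive_connections=20),
        # Apply a uniform 10-second timeout at the transport level.
        timeout=httpx.Timeout(10.0),
    ),
)
```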
While we value open-source contributions to this SDK, this library is generated programmatically.
Additions made directly to this library would have to be moved over to our generation code, otherwise they would be overwritten upon the next generated release. Feel free to open a PR as a proof of concept, but know that we will not be able to merge it as-is. We suggest opening an issue first to discuss with us!
On the other hand, contributions to the README are always very welcome!