AssemblyAI/assemblyai-python-sdk


AssemblyAI's Python SDK

Build with AI models that can transcribe and understand audio

With a single API call, get access to AI models built on the latest AI breakthroughs to transcribe and understand audio and speech data securely at large scale.

Overview

Documentation

Visit our AssemblyAI API Documentation to get an overview of our models!

Quick Start

Installation

```bash
pip install -U assemblyai
```

Examples

Before starting, you need to set the API key. If you don't have one yet, sign up for one!

```python
import assemblyai as aai

# set the API key
aai.settings.api_key = f"{ASSEMBLYAI_API_KEY}"
```

Core Examples

Transcribe a local audio file
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("./my-local-audio-file.wav")
print(transcript.text)
```
Transcribe a URL
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")
print(transcript.text)
```
Transcribe binary data
```python
import assemblyai as aai

transcriber = aai.Transcriber()

# Binary data is supported directly:
transcript = transcriber.transcribe(data)

# Or: Upload data separately:
upload_url = transcriber.upload_file(data)
transcript = transcriber.transcribe(upload_url)
```
Export subtitles of an audio file
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")

# in SRT format
print(transcript.export_subtitles_srt())

# in VTT format
print(transcript.export_subtitles_vtt())
```
List all sentences and paragraphs
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")

sentences = transcript.get_sentences()
for sentence in sentences:
    print(sentence.text)

paragraphs = transcript.get_paragraphs()
for paragraph in paragraphs:
    print(paragraph.text)
```
Search for words in a transcript
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")

matches = transcript.word_search(["price", "product"])
for match in matches:
    print(f"Found '{match.text}' {match.count} times in the transcript")
```
Add custom spellings on a transcript
```python
import assemblyai as aai

config = aai.TranscriptionConfig()
config.set_custom_spelling(
    {
        "Kubernetes": ["k8s"],
        "SQL": ["Sequel"],
    }
)

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3", config)

print(transcript.text)
```
Upload a file
```python
import assemblyai as aai

transcriber = aai.Transcriber()
upload_url = transcriber.upload_file(data)
```
Delete a transcript
```python
import assemblyai as aai

transcript = aai.Transcriber().transcribe(audio_url)
aai.Transcript.delete_by_id(transcript.id)
```
List transcripts

This returns a page of transcripts you created.

```python
import assemblyai as aai

transcriber = aai.Transcriber()
page = transcriber.list_transcripts()
print(page.page_details)  # Page details
print(page.transcripts)  # List of transcripts
```

You can apply filter parameters:

```python
params = aai.ListTranscriptParameters(
    limit=3,
    status=aai.TranscriptStatus.completed,
)
page = transcriber.list_transcripts(params)
```

You can also paginate over all pages by using the helper property `before_id_of_prev_url`.

The `prev_url` always points to a page with older transcripts. If you extract the `before_id` of the `prev_url` query parameters, you can paginate over all pages from newest to oldest.

```python
transcriber = aai.Transcriber()
params = aai.ListTranscriptParameters()

page = transcriber.list_transcripts(params)
while page.page_details.before_id_of_prev_url is not None:
    params.before_id = page.page_details.before_id_of_prev_url
    page = transcriber.list_transcripts(params)
```

LeMUR Examples

Use LeMUR to summarize an audio file
```python
import assemblyai as aai

audio_file = "https://assembly.ai/sports_injuries.mp3"
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(audio_file)

prompt = "Provide a brief summary of the transcript."
result = transcript.lemur.task(
    prompt, final_model=aai.LemurModel.claude3_5_sonnet
)

print(result.response)
```

Or use the specialized Summarization endpoint that requires no prompt engineering and facilitates more deterministic and structured outputs:

```python
import assemblyai as aai

audio_url = "https://assembly.ai/meeting.mp4"
transcript = aai.Transcriber().transcribe(audio_url)

result = transcript.lemur.summarize(
    final_model=aai.LemurModel.claude3_5_sonnet,
    context="A GitLab meeting to discuss logistics",
    answer_format="TLDR",
)

print(result.response)
```
Use LeMUR to ask questions about your audio data
```python
import assemblyai as aai

audio_file = "https://assembly.ai/sports_injuries.mp3"
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(audio_file)

prompt = "What is a runner's knee?"
result = transcript.lemur.task(
    prompt, final_model=aai.LemurModel.claude3_5_sonnet
)

print(result.response)
```

Or use the specialized Q&A endpoint that requires no prompt engineering and facilitates more deterministic and structured outputs:

```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/customer.mp3")

# ask some questions
questions = [
    aai.LemurQuestion(question="What car was the customer interested in?"),
    aai.LemurQuestion(question="What price range is the customer looking for?"),
]

result = transcript.lemur.question(
    final_model=aai.LemurModel.claude3_5_sonnet,
    questions=questions,
)

for q in result.response:
    print(f"Question: {q.question}")
    print(f"Answer: {q.answer}")
```
Use LeMUR with customized input text
```python
import assemblyai as aai

transcriber = aai.Transcriber()

config = aai.TranscriptionConfig(
    speaker_labels=True,
)
transcript = transcriber.transcribe("https://example.org/customer.mp3", config=config)

# Example converting speaker label utterances into LeMUR input text
text = ""
for utt in transcript.utterances:
    text += f"Speaker {utt.speaker}:\n{utt.text}\n"

result = aai.Lemur().task(
    "You are a helpful coach. Provide an analysis of the transcript "
    "and offer areas to improve with exact quotes. Include no preamble. "
    "Start with an overall summary then get into the examples with feedback.",
    input_text=text,
    final_model=aai.LemurModel.claude3_5_sonnet,
)

print(result.response)
```
Apply LeMUR to multiple transcripts
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript_group = transcriber.transcribe_group(
    [
        "https://example.org/customer1.mp3",
        "https://example.org/customer2.mp3",
    ],
)

result = transcript_group.lemur.task(
    context="These are calls of customers asking for cars. Summarize all calls and create a TLDR.",
    final_model=aai.LemurModel.claude3_5_sonnet,
)

print(result.response)
```
Delete data previously sent to LeMUR
```python
import assemblyai as aai

# Create a transcript and a corresponding LeMUR request that may contain sensitive information.
transcriber = aai.Transcriber()
transcript_group = transcriber.transcribe_group(
    [
        "https://example.org/customer1.mp3",
    ],
)

result = transcript_group.lemur.summarize(
    context="Customers providing sensitive, personally identifiable information",
    answer_format="TLDR",
)

# Get the request ID from the LeMUR response
request_id = result.request_id

# Now we can delete the data about this request
deletion_result = aai.Lemur.purge_request_data(request_id)
print(deletion_result)
```

Audio Intelligence Examples

PII Redact a transcript
```python
import assemblyai as aai

config = aai.TranscriptionConfig()
config.set_redact_pii(
    # What should be redacted
    policies=[
        aai.PIIRedactionPolicy.credit_card_number,
        aai.PIIRedactionPolicy.email_address,
        aai.PIIRedactionPolicy.location,
        aai.PIIRedactionPolicy.person_name,
        aai.PIIRedactionPolicy.phone_number,
    ],
    # How it should be redacted
    substitution=aai.PIISubstitutionPolicy.hash,
)

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3", config)
```

To request a copy of the original audio file with the redacted information "beeped" out, set `redact_pii_audio=True` in the config. Once the `Transcript` object is returned, you can access the URL of the redacted audio file with `get_redacted_audio_url`, or save the redacted audio directly to disk with `save_redacted_audio`.

```python
import assemblyai as aai

transcript = aai.Transcriber().transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(
        redact_pii=True,
        redact_pii_policies=[aai.PIIRedactionPolicy.person_name],
        redact_pii_audio=True,
    ),
)

redacted_audio_url = transcript.get_redacted_audio_url()
transcript.save_redacted_audio("redacted_audio.mp3")
```

Read more about PII redaction here.

Summarize the content of a transcript over time
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(auto_chapters=True),
)

for chapter in transcript.chapters:
    print(f"Summary: {chapter.summary}")  # A one paragraph summary of the content spoken during this timeframe
    print(f"Start: {chapter.start}, End: {chapter.end}")  # Timestamps (in milliseconds) of the chapter
    print(f"Headline: {chapter.headline}")  # A single sentence summary of the content spoken during this timeframe
    print(f"Gist: {chapter.gist}")  # An ultra-short summary, just a few words, of the content spoken during this timeframe
```

Read more about auto chapters here.

Summarize the content of a transcript
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(summarization=True),
)

print(transcript.summary)
```

By default, the summarization model will be `informative` and the summarization type will be `bullets`. Read more about summarization models and types here.

To change the model and/or type, pass additional parameters to the `TranscriptionConfig`:

```python
config = aai.TranscriptionConfig(
    summarization=True,
    summary_model=aai.SummarizationModel.catchy,
    summary_type=aai.SummarizationType.headline,
)
```
Detect sensitive content in a transcript
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(content_safety=True),
)

# Get the parts of the transcript which were flagged as sensitive
for result in transcript.content_safety.results:
    print(result.text)  # sensitive text snippet
    print(result.timestamp.start)
    print(result.timestamp.end)

    for label in result.labels:
        print(label.label)  # content safety category
        print(label.confidence)  # model's confidence that the text is in this category
        print(label.severity)  # severity of the text in relation to the category

# Get the confidence of the most common labels in relation to the entire audio file
for label, confidence in transcript.content_safety.summary.items():
    print(f"{confidence * 100}% confident that the audio contains {label}")

# Get the overall severity of the most common labels in relation to the entire audio file
for label, severity_confidence in transcript.content_safety.severity_score_summary.items():
    print(f"{severity_confidence.low * 100}% confident that the audio contains low-severity {label}")
    print(f"{severity_confidence.medium * 100}% confident that the audio contains mid-severity {label}")
    print(f"{severity_confidence.high * 100}% confident that the audio contains high-severity {label}")
```

Read more about the content safety categories.

By default, the content safety model will only include labels with a confidence greater than 0.5 (50%). To change this, pass `content_safety_confidence` (as an integer percentage between 25 and 100, inclusive) to the `TranscriptionConfig`:

```python
config = aai.TranscriptionConfig(
    content_safety=True,
    content_safety_confidence=80,  # only include labels with a confidence greater than 80%
)
```
Analyze the sentiment of sentences in a transcript
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(sentiment_analysis=True),
)

for sentiment_result in transcript.sentiment_analysis:
    print(sentiment_result.text)
    print(sentiment_result.sentiment)  # POSITIVE, NEUTRAL, or NEGATIVE
    print(sentiment_result.confidence)
    print(f"Timestamp: {sentiment_result.start} - {sentiment_result.end}")
```

If `speaker_labels` is also enabled, then each sentiment analysis result will also include a `speaker` field.

```python
# ...

config = aai.TranscriptionConfig(
    sentiment_analysis=True,
    speaker_labels=True,
)

# ...

for sentiment_result in transcript.sentiment_analysis:
    print(sentiment_result.speaker)
```

Read more about sentiment analysis here.

Identify entities in a transcript
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(entity_detection=True),
)

for entity in transcript.entities:
    print(entity.text)  # i.e. "Dan Gilbert"
    print(entity.entity_type)  # i.e. EntityType.person
    print(f"Timestamp: {entity.start} - {entity.end}")
```

Read more about entity detection here.

Detect topics in a transcript (IAB Classification)
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(iab_categories=True),
)

# Get the parts of the transcript that were tagged with topics
for result in transcript.iab_categories.results:
    print(result.text)
    print(f"Timestamp: {result.timestamp.start} - {result.timestamp.end}")
    for label in result.labels:
        print(label.label)  # topic
        print(label.relevance)  # how relevant the label is for the portion of text

# Get a summary of all topics in the transcript
for label, relevance in transcript.iab_categories.summary.items():
    print(f"Audio is {relevance * 100}% relevant to {label}")
```

Read more about IAB classification here.

Identify important words and phrases in a transcript
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(auto_highlights=True),
)

for result in transcript.auto_highlights.results:
    print(result.text)  # the important phrase
    print(result.rank)  # relevancy of the phrase
    print(result.count)  # number of instances of the phrase
    for timestamp in result.timestamps:
        print(f"Timestamp: {timestamp.start} - {timestamp.end}")
```

Read more about auto highlights here.


Streaming Examples

Read more about our streaming service.

Stream your microphone in real-time
```python
from typing import Type

import assemblyai as aai
from assemblyai.streaming.v3 import (
    BeginEvent,
    StreamingClient,
    StreamingClientOptions,
    StreamingError,
    StreamingEvents,
    StreamingParameters,
    StreamingSessionParameters,
    TerminationEvent,
    TurnEvent,
)


def on_begin(self: Type[StreamingClient], event: BeginEvent):
    "This function is called when the connection has been established."
    print("Session ID:", event.id)


def on_turn(self: Type[StreamingClient], event: TurnEvent):
    "This function is called when a new transcript has been received."
    print(event.transcript, end="\r\n")


def on_terminated(self: Type[StreamingClient], event: TerminationEvent):
    "This function is called when the session has been terminated."
    print(
        f"Session terminated: {event.audio_duration_seconds} seconds of audio processed"
    )


def on_error(self: Type[StreamingClient], error: StreamingError):
    "This function is called when an error occurs."
    print(f"Error occurred: {error}")


# Create the streaming client
transcriber = StreamingClient(
    StreamingClientOptions(
        api_key="YOUR_API_KEY",
    )
)

transcriber.on(StreamingEvents.Begin, on_begin)
transcriber.on(StreamingEvents.Turn, on_turn)
transcriber.on(StreamingEvents.Termination, on_terminated)
transcriber.on(StreamingEvents.Error, on_error)

# Start the connection
transcriber.connect(
    StreamingParameters(
        sample_rate=16_000,
        formatted_finals=True,
    )
)

# Open a microphone stream
microphone_stream = aai.extras.MicrophoneStream()

# Press CTRL+C to abort
transcriber.stream(microphone_stream)

transcriber.disconnect()
```
Transcribe a local audio file in real-time
```python
# Only WAV/PCM16 single channel supported for now
file_stream = aai.extras.stream_file(
    filepath="audio.wav",
    sample_rate=44_100,
)

transcriber.stream(file_stream)
```

Change the default settings

You'll find the `Settings` class with all default values in types.py.

Change the default timeout and polling interval
```python
import assemblyai as aai

# The HTTP timeout in seconds for general requests, default is 30.0
aai.settings.http_timeout = 60.0

# The polling interval in seconds for long-running requests, default is 3.0
aai.settings.polling_interval = 10.0
```

Playground

Visit our Playground to try out all of our Speech AI models and LeMUR for free.

Advanced

How the SDK handles Default Configurations

Defining Defaults

When no `TranscriptionConfig` is passed to the `Transcriber` or its methods, it will use a default instance of a `TranscriptionConfig`.

If you would like to re-use the same `TranscriptionConfig` for all your transcriptions, you can set it on the `Transcriber` directly:

```python
config = aai.TranscriptionConfig(punctuate=False, format_text=False)
transcriber = aai.Transcriber(config=config)

# will use the same config for all `.transcribe*(...)` operations
transcriber.transcribe("https://example.org/audio.wav")
```

Overriding Defaults

You can override the default configuration later via the `.config` property of the `Transcriber`:

```python
transcriber = aai.Transcriber()

# override the `Transcriber`'s config with a new config
transcriber.config = aai.TranscriptionConfig(punctuate=False, format_text=False)
```

In case you want to override the `Transcriber`'s configuration for a specific operation with a different one, you can do so via the `config` parameter of a `.transcribe*(...)` method:

```python
config = aai.TranscriptionConfig(punctuate=False, format_text=False)

# set a default configuration
transcriber = aai.Transcriber(config=config)

transcriber.transcribe(
    "https://example.com/audio.mp3",
    # overrides the above configuration on the `Transcriber` with the following
    config=aai.TranscriptionConfig(dual_channel=True, disfluencies=True),
)
```

Synchronous vs Asynchronous

Currently, the SDK provides two ways to transcribe audio files.

The synchronous approach halts the application's flow until the transcription has been completed.

The asynchronous approach allows the application to continue running while the transcription is being processed. The caller receives a `concurrent.futures.Future` object which can be used to check the status of the transcription at a later time.

You can identify those two approaches by the `_async` suffix in the `Transcriber`'s method name (e.g. `transcribe` vs `transcribe_async`).
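The Future returned by the asynchronous methods behaves like any `concurrent.futures.Future`. The following sketch illustrates the pattern without an API key by simulating the long-running transcription with a thread pool; `fake_transcribe` is a hypothetical stand-in, not part of the SDK:

```python
import concurrent.futures
import time


def fake_transcribe(audio_url: str) -> str:
    # Stand-in for the long-running remote transcription job
    time.sleep(0.1)
    return f"transcript of {audio_url}"


executor = concurrent.futures.ThreadPoolExecutor()
future = executor.submit(fake_transcribe, "https://example.org/audio.mp3")

# The application keeps running; check progress or block when the result is needed
print(future.done())     # may still be False right after submission
text = future.result()   # blocks until the job has completed
print(text)

executor.shutdown()
```

With the real SDK, `transcriber.transcribe_async(...)` would take the place of `executor.submit(...)`, and `future.result()` would yield the completed `Transcript`.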

Getting the HTTP status code

There are two ways of accessing the HTTP status code:

  • All custom AssemblyAI Error classes have a `status_code` attribute.
  • The latest HTTP response is stored in `aai.Client.get_default().last_response` after every API call. This approach also works if no exception is thrown.
```python
transcriber = aai.Transcriber()

# Option 1: Catch the error
try:
    transcript = transcriber.submit("./example.mp3")
except aai.AssemblyAIError as e:
    print(e.status_code)

# Option 2: Access the latest response through the client
client = aai.Client.get_default()

try:
    transcript = transcriber.submit("./example.mp3")
except:
    print(client.last_response)
    print(client.last_response.status_code)
```

Polling Intervals

By default, we poll the `Transcript`'s status every 3 seconds. In case you would like to adjust that interval:

```python
import assemblyai as aai

aai.settings.polling_interval = 1.0
```

Retrieving Existing Transcripts

Retrieving a Single Transcript

If you previously created a transcript, you can use its ID to retrieve it later.

```python
import assemblyai as aai

transcript = aai.Transcript.get_by_id("<TRANSCRIPT_ID>")

print(transcript.id)
print(transcript.text)
```

Retrieving Multiple Transcripts as a Group

You can also retrieve multiple existing transcripts and combine them into a single `TranscriptGroup` object. This allows you to perform operations on the transcript group as a single unit, such as querying the combined transcripts with LeMUR.

```python
import assemblyai as aai

transcript_group = aai.TranscriptGroup.get_by_ids(["<TRANSCRIPT_ID_1>", "<TRANSCRIPT_ID_2>"])

summary = transcript_group.lemur.summarize(
    context="Customers asking for cars",
    answer_format="TLDR",
)

print(summary)
```

Retrieving Transcripts Asynchronously

Both `Transcript.get_by_id` and `TranscriptGroup.get_by_ids` have asynchronous counterparts, `Transcript.get_by_id_async` and `TranscriptGroup.get_by_ids_async`, respectively. These functions immediately return a `Future` object, rather than blocking until the transcript(s) are retrieved.
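Waiting on several such Futures at once follows the standard `concurrent.futures` pattern. This runnable sketch stands in for the SDK with a thread pool; `fetch_transcript` is a hypothetical placeholder, not an SDK function:

```python
import concurrent.futures


def fetch_transcript(transcript_id: str) -> dict:
    # Stand-in for the remote lookup a get_by_id_async call would perform
    return {"id": transcript_id, "status": "completed"}


with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = [
        executor.submit(fetch_transcript, tid)
        for tid in ("<TRANSCRIPT_ID_1>", "<TRANSCRIPT_ID_2>")
    ]
    # Block until every retrieval finishes, then collect the results
    done, _ = concurrent.futures.wait(futures)
    transcripts = [f.result() for f in done]

for t in transcripts:
    print(t["id"], t["status"])
```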

See the above section on Synchronous vs Asynchronous for more information.

