magentic

Seamlessly integrate LLMs as Python functions

Seamlessly integrate Large Language Models into Python code. Use the @prompt and @chatprompt decorators to create functions that return structured output from an LLM. Combine LLM queries and tool use with traditional Python code to build complex agentic systems.

Features

  • Structured outputs: prompt-functions can return pydantic models and other Python types
  • Chat prompting with @chatprompt, including few-shot prompting
  • Function calling and prompt chains for tool use
  • Streaming of text and structured outputs
  • Asyncio support for concurrent LLM queries
  • Multiple LLM backends: OpenAI, Anthropic, LiteLLM, Mistral, and any OpenAI-compatible API

Installation

pip install magentic

or using uv

uv add magentic

Configure your OpenAI API key by setting the OPENAI_API_KEY environment variable. To configure a different LLM provider see Configuration for more.
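
For example, a minimal sketch of setting the key from Python before any prompt-functions are called (exporting it in your shell works just as well; the key value here is a placeholder):

import os

# The openai package reads this environment variable for authentication
os.environ["OPENAI_API_KEY"] = "sk-..."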

Usage

@prompt

The @prompt decorator allows you to define a template for a Large Language Model (LLM) prompt as a Python function. When this function is called, the arguments are inserted into the template, then this prompt is sent to an LLM which generates the function output.

from magentic import prompt

@prompt('Add more "dude"ness to: {phrase}')
def dudeify(phrase: str) -> str: ...  # No function body as this is never executed

dudeify("Hello, how are you?")
# "Hey, dude! What's up? How's it going, my man?"

The @prompt decorator will respect the return type annotation of the decorated function. This can be any type supported by pydantic, including a pydantic model.

from magentic import prompt
from pydantic import BaseModel

class Superhero(BaseModel):
    name: str
    age: int
    power: str
    enemies: list[str]

@prompt("Create a Superhero named {name}.")
def create_superhero(name: str) -> Superhero: ...

create_superhero("Garden Man")
# Superhero(name='Garden Man', age=30, power='Control over plants', enemies=['Pollution Man', 'Concrete Woman'])

See Structured Outputs for more.

@chatprompt

The @chatprompt decorator works just like @prompt but allows you to pass chat messages as a template rather than a single text prompt. This can be used to provide a system message or for few-shot prompting where you provide example responses to guide the model's output. Format fields denoted by curly braces {example} will be filled in all messages (except FunctionResultMessage).

from magentic import chatprompt, AssistantMessage, SystemMessage, UserMessage
from pydantic import BaseModel

class Quote(BaseModel):
    quote: str
    character: str

@chatprompt(
    SystemMessage("You are a movie buff."),
    UserMessage("What is your favorite quote from Harry Potter?"),
    AssistantMessage(
        Quote(
            quote="It does not do to dwell on dreams and forget to live.",
            character="Albus Dumbledore",
        )
    ),
    UserMessage("What is your favorite quote from {movie}?"),
)
def get_movie_quote(movie: str) -> Quote: ...

get_movie_quote("Iron Man")
# Quote(quote='I am Iron Man.', character='Tony Stark')

See Chat Prompting for more.

FunctionCall

An LLM can also decide to call functions. In this case the @prompt-decorated function returns a FunctionCall object which can be called to execute the function using the arguments provided by the LLM.

from typing import Literal

from magentic import prompt, FunctionCall

def search_twitter(query: str, category: Literal["latest", "people"]) -> str:
    """Searches Twitter for a query."""
    print(f"Searching Twitter for {query!r} in category {category!r}")
    return "<twitter results>"

def search_youtube(query: str, channel: str = "all") -> str:
    """Searches YouTube for a query."""
    print(f"Searching YouTube for {query!r} in channel {channel!r}")
    return "<youtube results>"

@prompt(
    "Use the appropriate search function to answer: {question}",
    functions=[search_twitter, search_youtube],
)
def perform_search(question: str) -> FunctionCall[str]: ...

output = perform_search("What is the latest news on LLMs?")
print(output)
# > FunctionCall(<function search_twitter at 0x10c367d00>, 'LLMs', 'latest')

output()
# > Searching Twitter for 'LLMs' in category 'latest'
# '<twitter results>'

See Function Calling for more.

@prompt_chain

Sometimes the LLM needs to make one or more function calls to generate a final answer. The @prompt_chain decorator will resolve FunctionCall objects automatically and pass the output back to the LLM to continue until the final answer is reached.

In the following example, when describe_weather is called the LLM first calls the get_current_weather function, then uses the result to formulate its final answer, which is returned.

from magentic import prompt_chain

def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    # Pretend to query an API
    return {"temperature": "72", "forecast": ["sunny", "windy"]}

@prompt_chain(
    "What's the weather like in {city}?",
    functions=[get_current_weather],
)
def describe_weather(city: str) -> str: ...

describe_weather("Boston")
# 'The current weather in Boston is 72°F and it is sunny and windy.'

LLM-powered functions created using @prompt, @chatprompt and @prompt_chain can be supplied as functions to other @prompt/@prompt_chain decorators, just like regular Python functions. This enables increasingly complex LLM-powered functionality, while allowing individual components to be tested and improved in isolation.
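
For example, a prompt-function can itself be supplied as a tool to a prompt-chain. This is a minimal sketch with illustrative function names that are not part of magentic:

from magentic import prompt, prompt_chain

@prompt("Write a catchy tagline for the product: {product}")
def generate_tagline(product: str) -> str: ...

@prompt_chain(
    "Write a two-sentence ad for {product}. Use the available function to get a tagline to include.",
    functions=[generate_tagline],
)
def write_ad(product: str) -> str: ...

write_ad("a solar-powered kettle")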

Streaming

The StreamedStr (and AsyncStreamedStr) class can be used to stream the output of the LLM. This allows you to process the text while it is being generated, rather than receiving the whole output at once.

from magentic import prompt, StreamedStr

@prompt("Tell me about {country}")
def describe_country(country: str) -> StreamedStr: ...

# Print the chunks while they are being received
for chunk in describe_country("Brazil"):
    print(chunk, end="")
# 'Brazil, officially known as the Federative Republic of Brazil, is ...'

Multiple StreamedStr can be created at the same time to stream LLM outputs concurrently. In the below example, generating the description for multiple countries takes approximately the same amount of time as for a single country.

from time import time

countries = ["Australia", "Brazil", "Chile"]

# Generate the descriptions one at a time
start_time = time()
for country in countries:
    # Converting `StreamedStr` to `str` blocks until the LLM output is fully generated
    description = str(describe_country(country))
    print(f"{time() - start_time:.2f}s : {country} - {len(description)} chars")
# 22.72s : Australia - 2130 chars
# 41.63s : Brazil - 1884 chars
# 74.31s : Chile - 2968 chars

# Generate the descriptions concurrently by creating the StreamedStrs at the same time
start_time = time()
streamed_strs = [describe_country(country) for country in countries]
for country, streamed_str in zip(countries, streamed_strs):
    description = str(streamed_str)
    print(f"{time() - start_time:.2f}s : {country} - {len(description)} chars")
# 22.79s : Australia - 2147 chars
# 23.64s : Brazil - 2202 chars
# 24.67s : Chile - 2186 chars

Object Streaming

Structured outputs can also be streamed from the LLM by using the return type annotation Iterable (or AsyncIterable). This allows each item to be processed while the next one is being generated.

from collections.abc import Iterable
from time import time

from magentic import prompt
from pydantic import BaseModel

class Superhero(BaseModel):
    name: str
    age: int
    power: str
    enemies: list[str]

@prompt("Create a Superhero team named {name}.")
def create_superhero_team(name: str) -> Iterable[Superhero]: ...

start_time = time()
for hero in create_superhero_team("The Food Dudes"):
    print(f"{time() - start_time:.2f}s : {hero}")
# 2.23s : name='Pizza Man' age=30 power='Can shoot pizza slices from his hands' enemies=['The Hungry Horde', 'The Junk Food Gang']
# 4.03s : name='Captain Carrot' age=35 power='Super strength and agility from eating carrots' enemies=['The Sugar Squad', 'The Greasy Gang']
# 6.05s : name='Ice Cream Girl' age=25 power='Can create ice cream out of thin air' enemies=['The Hot Sauce Squad', 'The Healthy Eaters']

See Streaming for more.

Asyncio

Asynchronous functions / coroutines can be used to concurrently query the LLM. This can greatly increase the overall speed of generation, and also allow other asynchronous code to run while waiting on LLM output. In the below example, the LLM generates a description for each US president while it is waiting on the next one in the list. Measuring the characters generated per second shows that this example achieves a 7x speedup over serial processing.

import asyncio
from time import time
from typing import AsyncIterable

from magentic import prompt

@prompt("List ten presidents of the United States")
async def iter_presidents() -> AsyncIterable[str]: ...

@prompt("Tell me more about {topic}")
async def tell_me_more_about(topic: str) -> str: ...

# For each president listed, generate a description concurrently
start_time = time()
tasks = []
async for president in await iter_presidents():
    # Use asyncio.create_task to schedule the coroutine for execution before awaiting it
    # This way descriptions will start being generated while the list of presidents is still being generated
    task = asyncio.create_task(tell_me_more_about(president))
    tasks.append(task)

descriptions = await asyncio.gather(*tasks)

# Measure the characters per second
total_chars = sum(len(desc) for desc in descriptions)
time_elapsed = time() - start_time
print(total_chars, time_elapsed, total_chars / time_elapsed)
# 24575 28.70 856.07

# Measure the characters per second to describe a single president
start_time = time()
out = await tell_me_more_about("George Washington")
time_elapsed = time() - start_time
print(len(out), time_elapsed, len(out) / time_elapsed)
# 2206 18.72 117.78

See Asyncio for more.

Additional Features

  • The functions argument to @prompt can contain async/coroutine functions. When the corresponding FunctionCall objects are called the result must be awaited.
  • The Annotated type annotation can be used to provide descriptions and other metadata for function parameters. See the pydantic documentation on using Field to describe function arguments; a sketch follows this list.
  • The @prompt and @prompt_chain decorators also accept a model argument. You can pass an instance of OpenaiChatModel to use GPT-4 or configure a different temperature. See below.
  • Register other types to use as return type annotations in @prompt functions by following the example notebook for a Pandas DataFrame.
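
A minimal sketch of the Annotated approach (the get_weather function here is illustrative, not part of magentic):

from typing import Annotated

from pydantic import Field

from magentic import prompt, FunctionCall

def get_weather(
    city: Annotated[str, Field(description="Name of the city, e.g. 'Boston'")],
) -> str:
    """Get the current weather in a city."""
    return "72°F and sunny"  # Pretend to query a weather API

@prompt(
    "Answer this question about the weather: {question}",
    functions=[get_weather],
)
def answer_weather_question(question: str) -> FunctionCall[str]: ...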

Backend/LLM Configuration

Magentic supports multiple LLM providers or "backends". This roughly refers to which Python package is used to interact with the LLM API. The following backends are supported.

OpenAI

The default backend, using the openai Python package. It supports all features of magentic.

No additional installation is required. Just import the OpenaiChatModel class from magentic.

from magentic import OpenaiChatModel

model = OpenaiChatModel("gpt-4o")

Ollama via OpenAI

Ollama supports an OpenAI-compatible API, which allows you to use Ollama models via the OpenAI backend.

First, install ollama from ollama.com. Then, pull the model you want to use.

ollama pull llama3.2

Then, specify the model name and base_url when creating the OpenaiChatModel instance.

from magentic import OpenaiChatModel

model = OpenaiChatModel("llama3.2", base_url="http://localhost:11434/v1/")

Other OpenAI-compatible APIs

When using the openai backend, setting the MAGENTIC_OPENAI_BASE_URL environment variable or using OpenaiChatModel(..., base_url="http://localhost:8080") in code allows you to use magentic with any OpenAI-compatible API, e.g. Azure OpenAI Service, LiteLLM OpenAI Proxy Server, LocalAI. Note that if the API does not support tool calls then you will not be able to create prompt-functions that return Python objects, but other features of magentic will still work.

To use Azure with the openai backend you will need to set the MAGENTIC_OPENAI_API_TYPE environment variable to "azure" or use OpenaiChatModel(..., api_type="azure"), and also set the environment variables needed by the openai package to access Azure. See https://github.com/openai/openai-python#microsoft-azure-openai
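
For example, a minimal sketch of the in-code option; the deployment name is illustrative, and the Azure credentials are assumed to already be configured via the environment variables that the openai package expects:

from magentic import OpenaiChatModel, prompt

# Assumes the Azure endpoint, API key, and API version needed by the openai
# package are already set as environment variables
model = OpenaiChatModel("my-gpt-4o-deployment", api_type="azure")

@prompt("Say hello", model=model)
def say_hello_azure() -> str: ...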

Anthropic

This uses the anthropic Python package and supports all features of magentic.

Install the magentic package with the anthropic extra, or install the anthropic package directly.

pip install"magentic[anthropic]"

Then import the AnthropicChatModel class.

from magentic.chat_model.anthropic_chat_model import AnthropicChatModel

model = AnthropicChatModel("claude-3-5-sonnet-latest")

LiteLLM

This uses the litellm Python package to enable querying LLMs from many different providers. Note: some models may not support all features of magentic, e.g. function calling/structured output and streaming.

Install the magentic package with the litellm extra, or install the litellm package directly.

pip install"magentic[litellm]"

Then import the LitellmChatModel class.

from magentic.chat_model.litellm_chat_model import LitellmChatModel

model = LitellmChatModel("gpt-4o")

Mistral

This uses the openai Python package with some small modifications to make the API queries compatible with the Mistral API. It supports all features of magentic. However, tool calls (including structured outputs) are not streamed, so they are received all at once.

Note: a future version of magentic might switch to using the mistral Python package.

No additional installation is required. Just import the MistralChatModel class.

from magentic.chat_model.mistral_chat_model import MistralChatModel

model = MistralChatModel("mistral-large-latest")

Configure a Backend

The default ChatModel used by magentic (in @prompt, @chatprompt, etc.) can be configured in several ways. When a prompt-function or chatprompt-function is called, the ChatModel to use follows this order of preference:

  1. The ChatModel instance provided as the model argument to the magentic decorator
  2. The current chat model context, created using with MyChatModel:
  3. The global ChatModel created from environment variables and the default settings in src/magentic/settings.py

The following code snippet demonstrates this behavior:

from magentic import OpenaiChatModel, prompt
from magentic.chat_model.anthropic_chat_model import AnthropicChatModel

@prompt("Say hello")
def say_hello() -> str: ...

@prompt(
    "Say hello",
    model=AnthropicChatModel("claude-3-5-sonnet-latest"),
)
def say_hello_anthropic() -> str: ...

say_hello()  # Uses env vars or default settings

with OpenaiChatModel("gpt-4o-mini", temperature=1):
    say_hello()  # Uses openai with gpt-4o-mini and temperature=1 due to context manager
    say_hello_anthropic()  # Uses Anthropic claude-3-5-sonnet-latest because explicitly configured

The following environment variables can be set.

| Environment Variable | Description | Example |
| --- | --- | --- |
| MAGENTIC_BACKEND | The package to use as the LLM backend | anthropic / openai / litellm |
| MAGENTIC_ANTHROPIC_MODEL | Anthropic model | claude-3-haiku-20240307 |
| MAGENTIC_ANTHROPIC_API_KEY | Anthropic API key to be used by magentic | sk-... |
| MAGENTIC_ANTHROPIC_BASE_URL | Base URL for an Anthropic-compatible API | http://localhost:8080 |
| MAGENTIC_ANTHROPIC_MAX_TOKENS | Max number of generated tokens | 1024 |
| MAGENTIC_ANTHROPIC_TEMPERATURE | Temperature | 0.5 |
| MAGENTIC_LITELLM_MODEL | LiteLLM model | claude-2 |
| MAGENTIC_LITELLM_API_BASE | The base URL to query | http://localhost:11434 |
| MAGENTIC_LITELLM_MAX_TOKENS | LiteLLM max number of generated tokens | 1024 |
| MAGENTIC_LITELLM_TEMPERATURE | LiteLLM temperature | 0.5 |
| MAGENTIC_MISTRAL_MODEL | Mistral model | mistral-large-latest |
| MAGENTIC_MISTRAL_API_KEY | Mistral API key to be used by magentic | XEG... |
| MAGENTIC_MISTRAL_BASE_URL | Base URL for a Mistral-compatible API | http://localhost:8080 |
| MAGENTIC_MISTRAL_MAX_TOKENS | Max number of generated tokens | 1024 |
| MAGENTIC_MISTRAL_SEED | Seed for deterministic sampling | 42 |
| MAGENTIC_MISTRAL_TEMPERATURE | Temperature | 0.5 |
| MAGENTIC_OPENAI_MODEL | OpenAI model | gpt-4 |
| MAGENTIC_OPENAI_API_KEY | OpenAI API key to be used by magentic | sk-... |
| MAGENTIC_OPENAI_API_TYPE | Allowed options: "openai", "azure" | azure |
| MAGENTIC_OPENAI_BASE_URL | Base URL for an OpenAI-compatible API | http://localhost:8080 |
| MAGENTIC_OPENAI_MAX_TOKENS | OpenAI max number of generated tokens | 1024 |
| MAGENTIC_OPENAI_SEED | Seed for deterministic sampling | 42 |
| MAGENTIC_OPENAI_TEMPERATURE | OpenAI temperature | 0.5 |

Type Checking

Many type checkers will raise warnings or errors for functions with the @prompt decorator due to the function having no body or return value. There are several ways to deal with these.

  1. Disable the check globally for the type checker. For example in mypy by disabling error code empty-body.

     # pyproject.toml
     [tool.mypy]
     disable_error_code = ["empty-body"]

  2. Make the function body ... (this does not satisfy mypy) or raise.

     @prompt("Choose a color")
     def random_color() -> str: ...

  3. Use comment # type: ignore[empty-body] on each function. In this case you can add a docstring instead of ....

     @prompt("Choose a color")
     def random_color() -> str:  # type: ignore[empty-body]
         """Returns a random color."""
