lmnr-ai/lmnr-python

Python SDK for Laminar.

Laminar is an open-source platform for engineering LLM products. Trace, evaluate, annotate, and analyze LLM data. Bring LLM applications to production with confidence.

Check out our open-source repo and don't forget to star it ⭐


Quickstart

First, install the package, specifying the instrumentations you want to use.

For example, to install the package with OpenAI and Anthropic instrumentations:

pip install 'lmnr[anthropic,openai]'

To install all possible instrumentations, use the following command:

pip install 'lmnr[all]'

Initialize Laminar in your code:

fromlmnrimportLaminarLaminar.initialize(project_api_key="<PROJECT_API_KEY>")

You can also skip passing the project_api_key, in which case it will be looked up in the environment (or a local .env file) under the key LMNR_PROJECT_API_KEY.
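
For example, assuming LMNR_PROJECT_API_KEY is exported in your shell (or set in a local .env file), initialization reduces to:

from lmnr import Laminar

# The key is read from the LMNR_PROJECT_API_KEY environment variable
# (or a local .env file), so no argument is needed here.
Laminar.initialize()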

Note that you only need to initialize Laminar once in your application. You should do this as early as possible, e.g. at server startup.

Set-up for self-hosting

If you self-host a Laminar instance, the default connection settings are http://localhost:8000 for HTTP and http://localhost:8001 for gRPC. Initialize the SDK accordingly:

from lmnr import Laminar

Laminar.initialize(
    project_api_key="<PROJECT_API_KEY>",
    base_url="http://localhost",
    http_port=8000,
    grpc_port=8001,
)

Instrumentation

Manual instrumentation

To instrument any function in your code, we provide a simple @observe() decorator. This can be useful if you want to trace a request handler or a function which combines multiple LLM calls.

import os

from openai import OpenAI

from lmnr import Laminar, observe

Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def poem_writer(topic: str):
    prompt = f"write a poem about {topic}"
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ]

    # OpenAI calls are still automatically instrumented
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
    )
    poem = response.choices[0].message.content

    return poem


@observe()
def generate_poems():
    poem1 = poem_writer(topic="laminar flow")
    poem2 = poem_writer(topic="turbulence")
    poems = f"{poem1}\n\n---\n\n{poem2}"
    return poems

Also, you can use Laminar.start_as_current_span if you want to record a chunk of your code using a with statement.

def handle_user_request(topic: str):
    with Laminar.start_as_current_span(name="poem_writer", input=topic):
        poem = poem_writer(topic=topic)

        # Use set_span_output to record the output of the span
        Laminar.set_span_output(poem)

Automatic instrumentation

Laminar allows you to automatically instrument the majority of the most popular LLM, Vector DB, database, requests, and other libraries.

If you want to automatically instrument a default set of libraries, then simply do NOT pass the instruments argument to .initialize(). See the full list of available instrumentations in the Instruments enum.
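
For example, a minimal sketch of the default setup (same initialization as above, just without instruments):

import os

from lmnr import Laminar

# No `instruments` argument: the default set of autoinstrumentations is enabled
Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])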

If you want to automatically instrument only specific LLM, Vector DB, or other calls with OpenTelemetry-compatible instrumentation, then pass the appropriate instruments to .initialize(). For example, if you want to only instrument OpenAI and Anthropic, then do the following:

import os

from lmnr import Laminar, Instruments

Laminar.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
    instruments={Instruments.OPENAI, Instruments.ANTHROPIC},
)

If you want to fully disable any kind of autoinstrumentation, pass an empty set as instruments=set() to .initialize().
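
For example:

import os

from lmnr import Laminar

# An empty set disables all autoinstrumentation; only manual spans
# (e.g. via @observe or start_as_current_span) are recorded.
Laminar.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
    instruments=set(),
)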

Autoinstrumentations are provided by Traceloop's OpenLLMetry.

Evaluations

Quickstart

Install the package:

pip install lmnr

Create a file named my_first_eval.py with the following code:

from lmnr import evaluate


def write_poem(data):
    return f"This is a good poem about {data['topic']}"


def contains_poem(output, target):
    return 1 if output in target['poem'] else 0


# Evaluation data
data = [
    {"data": {"topic": "flowers"}, "target": {"poem": "This is a good poem about flowers"}},
    {"data": {"topic": "cars"}, "target": {"poem": "I like cars"}},
]

evaluate(
    data=data,
    executor=write_poem,
    evaluators={
        "containsPoem": contains_poem
    },
    group_id="my_first_feature",
)

Run the following commands:

export LMNR_PROJECT_API_KEY=<YOUR_PROJECT_API_KEY>  # get from Laminar project settings
lmnr eval my_first_eval.py  # run in the virtual environment where lmnr is installed

Visit the URL printed in the console to see the results.

Overview

Bring rigor to the development of your LLM applications with evaluations.

You can run evaluations locally by providing an executor (part of the logic used in your application) and evaluators (numeric scoring functions) to the evaluate function.

evaluate takes in the following parameters:

  • data – an array of EvaluationDatapoint objects, where each EvaluationDatapoint has two keys: target and data, each containing a key-value object. Alternatively, you can pass in dictionaries, and we will instantiate EvaluationDatapoints with pydantic if possible.
  • executor – the logic you want to evaluate. This function must take data as the first argument and may produce any output. It can be either a regular or an async function.
  • evaluators – a dictionary which maps evaluator names to evaluators. Evaluators are functions that take the output of the executor as the first argument and target as the second argument, and produce a numeric score. Each function can produce either a single number or a dict[str, int|float] of scores. Each evaluator can be either a regular or an async function.
  • name – optional name for the evaluation. Automatically generated if not provided.
  • group_id – optional group name for the evaluation. Evaluations within the same group can be compared visually side-by-side.

* If you already have the outputs of the executor you want to evaluate, you can specify the executor as an identity function that takes in data and returns only the needed value(s) from it, as in the sketch below.
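
A minimal sketch of that identity-executor pattern (the field names "output" and "expected" are illustrative, not part of the SDK):

from lmnr import evaluate


def identity_executor(data):
    # Outputs were produced elsewhere; just pass them through for scoring
    return data["output"]


def exact_match(output, target):
    return 1 if output == target["expected"] else 0


evaluate(
    data=[
        {"data": {"output": "precomputed answer"}, "target": {"expected": "precomputed answer"}},
    ],
    executor=identity_executor,
    evaluators={"exactMatch": exact_match},
)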

Read the docs to learn more about evaluations.

Client for HTTP operations

Various interactions with the Laminar API are available in LaminarClient and its asynchronous version AsyncLaminarClient.

Agent

To run the Laminar agent, you can invoke client.agent.run:

from lmnr import LaminarClient

client = LaminarClient(project_api_key="<YOUR_PROJECT_API_KEY>")

response = client.agent.run(
    prompt="What is the weather in London today?"
)

print(response.result.content)

Streaming

Agent run supports streaming as well.

from lmnr import LaminarClient

client = LaminarClient(project_api_key="<YOUR_PROJECT_API_KEY>")

for chunk in client.agent.run(
    prompt="What is the weather in London today?",
    stream=True,
):
    if chunk.chunk_type == 'step':
        print(chunk.summary)
    elif chunk.chunk_type == 'finalOutput':
        print(chunk.content.result.content)

Async mode

from lmnr import AsyncLaminarClient

client = AsyncLaminarClient(project_api_key="<YOUR_PROJECT_API_KEY>")

response = await client.agent.run(
    prompt="What is the weather in London today?"
)

print(response.result.content)

Async mode with streaming

from lmnr import AsyncLaminarClient

client = AsyncLaminarClient(project_api_key="<YOUR_PROJECT_API_KEY>")

# Note that you need to await the operation even though we use `async for` below
response = await client.agent.run(
    prompt="What is the weather in London today?",
    stream=True,
)
async for chunk in response:
    if chunk.chunk_type == 'step':
        print(chunk.summary)
    elif chunk.chunk_type == 'finalOutput':
        print(chunk.content.result.content)
