
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with OpenTelemetry, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23


langfuse/langfuse


Langfuse uses GitHub Discussions for Support and Feature Requests.
We're hiring. Join us in product engineering and technical go-to-market roles.


README in English | 简体中文 | 日本語 | 한국어

Langfuse is an open source LLM engineering platform. It helps teams collaboratively develop, monitor, evaluate, and debug AI applications. Langfuse can be self-hosted in minutes and is battle-tested.

Langfuse Overview Video

✨ Core Features

Langfuse Overview
  • LLM Application Observability: Instrument your app and start ingesting traces to Langfuse, thereby tracking LLM calls and other relevant logic in your app such as retrieval, embedding, or agent actions. Inspect and debug complex logs and user sessions. Try the interactive demo to see this in action.

  • Prompt Management helps you centrally manage, version control, and collaboratively iterate on your prompts. Thanks to strong caching on server and client side, you can iterate on prompts without adding latency to your application.

  • Evaluations are key to the LLM application development workflow, and Langfuse adapts to your needs. It supports LLM-as-a-judge, user feedback collection, manual labeling, and custom evaluation pipelines via APIs/SDKs.

  • Datasets enable test sets and benchmarks for evaluating your LLM application. They support continuous improvement, pre-deployment testing, structured experiments, flexible evaluation, and seamless integration with frameworks like LangChain and LlamaIndex.

  • LLM Playground is a tool for testing and iterating on your prompts and model configurations, shortening the feedback loop and accelerating development. When you see a bad result in tracing, you can directly jump to the playground to iterate on it.

  • Comprehensive API: Langfuse is frequently used to power bespoke LLMOps workflows while using the building blocks provided by Langfuse via the API. OpenAPI spec, Postman collection, and typed SDKs for Python, JS/TS are available.
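The prompt management flow described above (fetch a versioned prompt, fill its `{{variable}}` placeholders at request time) can be sketched with a small stub. `PromptStub` and its fields are hypothetical stand-ins for the object a prompt-management client would return, not the real Langfuse SDK; only the mustache-style placeholder syntax mirrors Langfuse's prompt templates:

```python
# Illustrative sketch, not the Langfuse API: a versioned prompt template
# whose {{variable}} placeholders are resolved when the prompt is used.
import re
from dataclasses import dataclass


@dataclass
class PromptStub:
    name: str
    version: int
    template: str

    def compile(self, **variables: str) -> str:
        # Replace each {{key}} placeholder with the supplied value;
        # unknown keys are left in place so missing variables are visible.
        def sub(match: re.Match) -> str:
            key = match.group(1).strip()
            return variables.get(key, match.group(0))

        return re.sub(r"\{\{(.*?)\}\}", sub, self.template)


prompt = PromptStub(
    name="movie-critic",
    version=3,
    template="As a {{criticLevel}} movie critic, rate {{movie}} out of 10.",
)
print(prompt.compile(criticLevel="expert", movie="Dune"))
# → As a expert movie critic, rate Dune out of 10.
```

Because the template is versioned and resolved client-side, prompt edits can roll out without redeploying the application, which is the point of the strong caching mentioned above.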

📦 Deploy Langfuse

Langfuse Deployment Options

Langfuse Cloud

Managed deployment by the Langfuse team, generous free-tier, no credit card required.


Self-Host Langfuse

Run Langfuse on your own infrastructure:

  • Local (docker compose): Run Langfuse on your own machine in 5 minutes using Docker Compose.

    ```bash
    # Get a copy of the latest Langfuse repository
    git clone https://github.com/langfuse/langfuse.git
    cd langfuse

    # Run the langfuse docker compose
    docker compose up
    ```
  • VM: Run Langfuse on a single Virtual Machine using Docker Compose.

  • Kubernetes (Helm): Run Langfuse on a Kubernetes cluster using Helm. This is the preferred production deployment.

  • Terraform Templates: AWS, Azure, GCP

See the self-hosting documentation to learn more about architecture and configuration options.

🔌 Integrations


Main Integrations:

| Integration | Supports | Description |
| --- | --- | --- |
| SDK | Python, JS/TS | Manual instrumentation using the SDKs for full flexibility. |
| OpenAI | Python, JS/TS | Automated instrumentation using a drop-in replacement of the OpenAI SDK. |
| Langchain | Python, JS/TS | Automated instrumentation by passing a callback handler to your Langchain application. |
| LlamaIndex | Python | Automated instrumentation via the LlamaIndex callback system. |
| Haystack | Python | Automated instrumentation via the Haystack content tracing system. |
| LiteLLM | Python, JS/TS (proxy only) | Use any LLM as a drop-in replacement for GPT. Use Azure, OpenAI, Cohere, Anthropic, Ollama, VLLM, Sagemaker, HuggingFace, Replicate (100+ LLMs). |
| Vercel AI SDK | JS/TS | TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js. |
| API | | Directly call the public API. OpenAPI spec available. |
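The callback-based integrations above (Langchain, LlamaIndex, Haystack) all follow the same pattern: the framework invokes handler hooks around each model call, and the handler turns those hooks into ordered trace events. The sketch below is a hypothetical stand-in for that mechanism; `TracingHandler` and `run_chain` are illustrative names, not the real Langfuse `CallbackHandler` or a real framework API:

```python
# Illustrative sketch of a callback-handler integration: the "framework"
# calls hooks before and after the LLM call, and the handler records
# timestamped trace events it could later ship to an observability backend.
import time
from typing import Any


class TracingHandler:
    def __init__(self) -> None:
        self.events: list[dict[str, Any]] = []

    def on_llm_start(self, prompt: str) -> None:
        self.events.append({"type": "llm-start", "prompt": prompt, "ts": time.time()})

    def on_llm_end(self, output: str) -> None:
        self.events.append({"type": "llm-end", "output": output, "ts": time.time()})


def run_chain(prompt: str, handler: TracingHandler) -> str:
    # In a real framework, this wrapping happens inside the library; the
    # application only passes the handler in.
    handler.on_llm_start(prompt)
    output = f"echo: {prompt}"  # stand-in for the actual model call
    handler.on_llm_end(output)
    return output


handler = TracingHandler()
run_chain("What is Langfuse?", handler)
print([e["type"] for e in handler.events])  # → ['llm-start', 'llm-end']
```

This is why these integrations require no changes to application logic: the handler object is the only thing the application supplies.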

Packages integrated with Langfuse:

| Name | Type | Description |
| --- | --- | --- |
| Instructor | Library | Library to get structured LLM outputs (JSON, Pydantic). |
| DSPy | Library | Framework that systematically optimizes language model prompts and weights. |
| Mirascope | Library | Python toolkit for building LLM applications. |
| Ollama | Model (local) | Easily run open source LLMs on your own machine. |
| Amazon Bedrock | Model | Run foundation and fine-tuned models on AWS. |
| AutoGen | Agent Framework | Open source LLM platform for building distributed agents. |
| Flowise | Chat/Agent UI | JS/TS no-code builder for customized LLM flows. |
| Langflow | Chat/Agent UI | Python-based UI for LangChain, designed with react-flow to provide an effortless way to experiment and prototype flows. |
| Dify | Chat/Agent UI | Open source LLM app development platform with no-code builder. |
| OpenWebUI | Chat/Agent UI | Self-hosted LLM chat web UI supporting various LLM runners, including self-hosted and local models. |
| Promptfoo | Tool | Open source LLM testing platform. |
| LobeChat | Chat/Agent UI | Open source chatbot platform. |
| Vapi | Platform | Open source voice AI platform. |
| Inferable | Agents | Open source LLM platform for building distributed agents. |
| Gradio | Chat/Agent UI | Open source Python library to build web interfaces like Chat UI. |
| Goose | Agents | Open source LLM platform for building distributed agents. |
| smolagents | Agents | Open source AI agents framework. |
| CrewAI | Agents | Multi-agent framework for agent collaboration and tool use. |

🚀 Quickstart

Instrument your app and start ingesting traces to Langfuse, thereby tracking LLM calls and other relevant logic in your app such as retrieval, embedding, or agent actions. Inspect and debug complex logs and user sessions.

1️⃣ Create new project

  1. Create a Langfuse account or self-host
  2. Create a new project
  3. Create new API credentials in the project settings

2️⃣ Log your first LLM call

The @observe() decorator makes it easy to trace any Python LLM application. In this quickstart we also use the Langfuse OpenAI integration to automatically capture all model parameters.

Tip

Not using OpenAI? Visit our documentation to learn how to log other models and frameworks.

```bash
pip install langfuse openai
```

```bash
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_BASE_URL="https://cloud.langfuse.com"  # 🇪🇺 EU region
# LANGFUSE_BASE_URL="https://us.cloud.langfuse.com"  # 🇺🇸 US region
```

```python
from langfuse import observe
from langfuse.openai import openai  # OpenAI integration


@observe()
def story():
    return openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What is Langfuse?"}],
    ).choices[0].message.content


@observe()
def main():
    return story()


main()
```

3️⃣ See traces in Langfuse

See your language model calls and other application logic in Langfuse.

Example trace in Langfuse

Public example trace in Langfuse

Tip

Learn more about tracing in Langfuse or play with the interactive demo.

⭐️ Star Us


💭 Support

Finding an answer to your question:

  • Our documentation is the best place to start looking for answers. It is comprehensive, and we invest significant time into maintaining it. You can also suggest edits to the docs via GitHub.
  • The Langfuse FAQs, where the most common questions are answered.
  • Use "Ask AI" to get instant answers to your questions.

Support Channels:

  • Ask any question in our public Q&A on GitHub Discussions. Please include as much detail as possible (e.g. code snippets, screenshots, background information) to help us understand your question.
  • Request a feature on GitHub Discussions.
  • Report a Bug on GitHub Issues.
  • For time-sensitive queries, ping us via the in-app chat widget.

🤝 Contributing

Your contributions are welcome!

  • Vote on Ideas in GitHub Discussions.
  • Raise and comment on Issues.
  • Open a PR - see CONTRIBUTING.md for details on how to set up a development environment.

🥇 License

This repository is MIT licensed, except for the `ee` folders. See LICENSE and docs for more details.

⭐️ Star History

Star History Chart

❤️ Open Source Projects Using Langfuse

Top open-source Python projects that use Langfuse, ranked by stars (Source):

| Repository | Stars |
| --- | --- |
| langflow-ai/langflow | 116251 |
| open-webui/open-webui | 109642 |
| abi/screenshot-to-code | 70877 |
| lobehub/lobe-chat | 65454 |
| infiniflow/ragflow | 64118 |
| firecrawl/firecrawl | 56713 |
| run-llama/llama_index | 44203 |
| FlowiseAI/Flowise | 43547 |
| QuivrHQ/quivr | 38415 |
| microsoft/ai-agents-for-beginners | 38012 |
| chatchat-space/Langchain-Chatchat | 36071 |
| mindsdb/mindsdb | 35669 |
| BerriAI/litellm | 28726 |
| onlook-dev/onlook | 22447 |
| NixOS/nixpkgs | 21748 |
| kortix-ai/suna | 17976 |
| anthropics/courses | 17057 |
| mastra-ai/mastra | 16484 |
| langfuse/langfuse | 16054 |
| Canner/WrenAI | 11868 |
| promptfoo/promptfoo | 8350 |
| The-Pocket/PocketFlow | 8313 |
| OpenPipe/ART | 7093 |
| topoteretes/cognee | 7011 |
| awslabs/agent-squad | 6785 |
| BasedHardware/omi | 6231 |
| hatchet-dev/hatchet | 6019 |
| zenml-io/zenml | 4873 |
| refly-ai/refly | 4654 |
| coleam00/ottomator-agents | 4165 |
| JoshuaC215/agent-service-toolkit | 3557 |
| colanode/colanode | 3517 |
| VoltAgent/voltagent | 3210 |
| bragai/bRAG-langchain | 3010 |
| pingcap/autoflow | 2651 |
| sourcebot-dev/sourcebot | 2570 |
| open-webui/pipelines | 2055 |
| YFGaia/dify-plus | 1734 |
| TheSpaghettiDetective/obico-server | 1687 |
| MLSysOps/MLE-agent | 1387 |
| TIGER-AI-Lab/TheoremExplainAgent | 1385 |
| trailofbits/buttercup | 1223 |
| wassim249/fastapi-langgraph-agent-production-ready-template | 1200 |
| alishobeiri/thread | 1098 |
| dmayboroda/minima | 1010 |
| zstar1003/ragflow-plus | 993 |
| openops-cloud/openops | 939 |
| dynamiq-ai/dynamiq | 927 |
| xataio/agent | 857 |
| plastic-labs/tutor-gpt | 845 |
| trendy-design/llmchat | 829 |
| hotovo/aider-desk | 781 |
| opslane/opslane | 719 |
| wrtnlabs/autoview | 688 |
| andysingal/llm-course | 643 |
| theopenconversationkit/tock | 587 |
| sentient-engineering/agent-q | 487 |
| NicholasGoh/fastapi-mcp-langgraph-template | 481 |
| i-am-alice/3rd-devs | 472 |
| AIDotNet/koala-ai | 470 |
| phospho-app/text-analytics-legacy | 439 |
| inferablehq/inferable | 403 |
| duoyang666/ai_novel | 397 |
| strands-agents/samples | 385 |
| FranciscoMoretti/sparka | 380 |
| RobotecAI/rai | 373 |
| ElectricCodeGuy/SupabaseAuthWithSSR | 370 |
| LibreChat-AI/librechat.ai | 339 |
| souzatharsis/tamingLLMs | 323 |
| aws-samples/aws-ai-ml-workshop-kr | 295 |
| weizxfree/KnowFlow | 285 |
| zenml-io/zenml-projects | 276 |
| wxai-space/LightAgent | 275 |
| Ozamatash/deep-research-mcp | 269 |
| sql-agi/DB-GPT | 241 |
| guyernest/advanced-rag | 238 |
| bklieger-groq/mathtutor-on-groq | 233 |
| plastic-labs/honcho | 224 |
| OVINC-CN/OpenWebUI | 202 |
| zhutoutoutousan/worldquant-miner | 202 |
| iceener/ai | 186 |
| giselles-ai/giselle | 181 |
| ai-shifu/ai-shifu | 181 |
| aws-samples/sample-serverless-mcp-servers | 175 |
| celerforge/freenote | 171 |
| babelcloud/LLM-RGB | 164 |
| 8090-inc/xrx-sample-apps | 163 |
| deepset-ai/haystack-core-integrations | 163 |
| codecentric/c4-genai-suite | 152 |
| XSpoonAi/spoon-core | 150 |
| chatchat-space/LangGraph-Chatchat | 144 |
| langfuse/langfuse-docs | 139 |
| piyushgarg-dev/genai-cohort | 135 |
| i-dot-ai/redbox | 132 |
| bmd1905/ChatOpsLLM | 127 |
| Fintech-Dreamer/FinSynth | 121 |
| kenshiro-o/nagato-ai | 119 |

🔒 Security & Privacy

We take data security and privacy seriously. Please refer to our Security and Privacy page for more information.

Telemetry

By default, Langfuse automatically reports basic usage statistics of self-hosted instances to a centralized server (PostHog).

This helps us to:

  1. Understand how Langfuse is used and improve the most relevant features.
  2. Track overall usage for internal and external (e.g. fundraising) reporting.

No data is shared with third parties, and it does not include any sensitive information. We want to be fully transparent about this; you can find the exact data we collect here.

You can opt out by setting `TELEMETRY_ENABLED=false`.
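For the docker-compose based local setup from the deployment section above, the flag can simply be exported before starting the stack. Only the `TELEMETRY_ENABLED` variable name comes from the docs; the exact invocation below is one of several equivalent ways to set it:

```shell
# Disable anonymous usage telemetry for a self-hosted instance,
# then start Langfuse in the background.
export TELEMETRY_ENABLED=false
docker compose up -d
```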
