Improve docs index and README #2058


Open

DouweM wants to merge 1 commit into main from docs-readme-index

Conversation

DouweM (Contributor)

No description provided.

DouweM added the documentation (Improvements or additions to documentation) label on Jun 23, 2025
Hyperlint AI (Contributor)

PR Change Summary

Enhanced documentation for Pydantic AI, improving clarity and structure throughout the README and related files.

  • Updated README to improve clarity and structure, including renaming sections and enhancing descriptions.
  • Revised terminology from 'system prompts' to 'instructions' for consistency across documentation.
  • Added new features and examples to demonstrate the capabilities of Pydantic AI more effectively.
  • Improved integration details with Pydantic Logfire for observability and performance monitoring.

Modified Files

  • README.md
  • docs/agents.md
  • docs/index.md
  • docs/models/index.md

How can I customize these reviews?

Check out the Hyperlint AI Reviewer docs for more information on how to customize the review.

If you just want to ignore it on this PR, you can add the hyperlint-ignore label to the PR. Future changes won't trigger a Hyperlint review.

Note that for link checks specifically, we only check the first 30 links in a file and cache the results for several hours (so if you just added a page, you might run into this). Our recommendation is to add hyperlint-ignore to the PR to ignore the link check for this PR.

GitHub Actions

Docs Preview

commit: 7505a22
Preview URL: https://a5651a5e-pydantic-ai-previews.pydantic.workers.dev

DouweM assigned DouweM and samuelcolvin and unassigned DouweM on Jun 24, 2025
@@ -9,10 +9,10 @@ The [`Agent`][pydantic_ai.Agent] class has full API documentation, but conceptua

| **Component** | **Description** |
|-----------------------------------------------|-----------------------------------------------------------------------------------------------------------|
| [System prompt(s)](#system-prompts) | A set of instructions for the LLM written by the developer. |
DouweM (Contributor, Author)

I also made a start at replacing older features with newer ones (system prompts -> instructions, output validator functions -> output functions), I will continue that in a new PR and can pull this out of here if you prefer
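For context, a minimal sketch of the two styles being swapped here (system prompts vs. instructions), assuming the `Agent` constructor's `system_prompt` and `instructions` parameters as they existed around this PR; exact parameter and attribute names may differ between releases:

```python
from pydantic_ai import Agent

# Older style: a static system prompt set at construction time.
legacy_agent = Agent('openai:gpt-4o', system_prompt='Be concise; reply with one sentence.')

# Newer style referenced in this PR: the `instructions` parameter.
agent = Agent('openai:gpt-4o', instructions='Be concise; reply with one sentence.')

result = agent.run_sync('Where does "hello world" come from?')
print(result.output)  # may be `result.data` on older releases
```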

samuelcolvin (Member) left a comment

I think we should put this on hold except for changing to "Pydantic AI".

I think the rest is not clearly better, and/or requires a longer conversation, so I think we should put it on hold until we can have a longer conversation about the main points we want to get across.

DouweM (Contributor, Author)

@samuelcolvin OK, sounds like it may be best for you to have a crack at it yourself!

DouweM (Contributor, Author) left a comment

@samuelcolvin I think there are at least a few things in here that are strictly better than what we have currently, so I've left comments to explain why I made certain changes, and would appreciate it if you could point out the specific things you dislike or would prefer to consider as part of a bigger marketing conversation, instead of shelving the entire thing.

</picture>
</a>
</div>
<div align="center">
<em>Agent Framework / shim to use Pydantic with LLMs</em>
DouweM (Contributor, Author)

I think calling it a shim is seriously underselling it, and it's not differentiated from all the other frameworks that use Pydantic, so I feel we should drop that asap.

I went with "the Pydantic way" because it carries so many positive associations, and the way we and FastAPI do Python libs/frameworks really is the main reason people would come to us.

I don't really care what we put here, I just don't like calling it a shim :)

@@ -24,76 +24,81 @@

---

PydanticAI is a Python agent framework designed to make it less painful to build production grade applications with Generative AI.
### <em>Pydantic AI is a Python agent framework designed to help you quickly, confidently, and painlessly build production grade applications and workflows with Generative AI.</em>
DouweM (Contributor, Author)

Besides the styling to make the README look more like the docs index, the goal here was to hit the same "speeds the way" point from the deck with "quickly", and "confidently" to hint at Logfire. I noticed other AI frameworks (like ADK) mention workflows pretty prominently, so I thought that was worth a mention as well (with anecdotal evidence from the sales call we just had that they're building a workflow, not an app).


Similarly, virtually every agent framework and LLM library in Python uses Pydantic, yet when we began to use LLMs in [Pydantic Logfire](https://pydantic.dev/logfire), we couldn't find anything that gave us the same feeling.
Yet despite the fact that virtually every agent framework and LLM library in Python uses Pydantic, when we began to use LLMs in [Pydantic Logfire](https://pydantic.dev/logfire), we couldn't find anything that gave us the same feeling.
DouweM (Contributor, Author)

I rephrased this from "Similarly" as on first read that made the sentence seem like it was going to say something positive about the other agent frameworks, while the point is really the "yet".

* __Model-agnostic__:
Supports OpenAI, Anthropic, Gemini, Deepseek, Ollama, Groq, Cohere, and Mistral, and there is a simple interface to implement support for [other models](models/index.md).
2. __Model-agnostic__:
Supports virtually every [model](models/index.md) and provider under the sun: OpenAI, Anthropic, Gemini, DeepSeek, Ollama, Grok, Cohere, and Mistral; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Groq, Together AI, Fireworks AI, OpenRouter, and Heroku. If your favorite model or provider is not listed, you can easily implement a [custom model](models/index.md#custom-models).
DouweM (Contributor, Author)

I thought this list could benefit from being more complete and pointing out providers as well, as we want to reassure someone ASAP that the thing they want to use is supported without having to dive into another page. Especially the enterprise AI providers, so we don't lose anyone "serious" too early.
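To illustrate the model-agnostic point, a minimal sketch assuming pydantic-ai's string-based model identifiers (the specific model names below are illustrative examples):

```python
from pydantic_ai import Agent

# The same agent code can target different providers by swapping the model string.
agent = Agent('openai:gpt-4o', instructions='Answer in one sentence.')
# agent = Agent('anthropic:claude-3-5-sonnet-latest', instructions='Answer in one sentence.')
# agent = Agent('groq:llama-3.3-70b-versatile', instructions='Answer in one sentence.')

print(agent.run_sync('What is Pydantic?').output)
```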

* __Pydantic Logfire Integration__:
Seamlessly [integrates](logfire.md) with [Pydantic Logfire](https://pydantic.dev/logfire) for real-time debugging, performance monitoring, and behavior tracking of your LLM-powered applications.
3. __Seamless Observability__:
Tightly [integrates](logfire.md) with [Pydantic Logfire](https://pydantic.dev/logfire), our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and behavior and cost tracking. If you already have an observability platform that supports OTel, you can use that too.
DouweM (Contributor, Author)

I thought it worth mentioning evals and cost tracking, and reassuring people we're not locking them into our platform.
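For reference, a minimal sketch of the Logfire integration being described, assuming `logfire.configure()` and the `Agent(..., instrument=True)` flag available around this PR; another OTel-compatible backend can be substituted via standard OpenTelemetry environment variables:

```python
import logfire
from pydantic_ai import Agent

# Configure Logfire (or point OTel at another backend via environment variables).
logfire.configure()

# instrument=True emits OpenTelemetry spans for each model request and tool call.
agent = Agent('openai:gpt-4o', instrument=True)
result = agent.run_sync('Tell me a one-line joke.')
print(result.output)
```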

As of today, these files *cannot* be natively leveraged by LLM frameworks or IDEs. Alternatively,
an [MCP server](https://modelcontextprotocol.io/) can be implemented to properly parse the `llms.txt`
file.
As of today, these files are not automatically leveraged by IDEs or coding agents, but they will use it if you provide a link or the full text.
DouweM (Contributor, Author)

"*cannot*" was too strong



## Next Steps

To try PydanticAI yourself, follow the instructions [in the examples](examples/index.md).
To try Pydantic AI for yourself, [install it](install.md) and follow the instructions [in the examples](examples/index.md).
DouweM (Contributor, Author)

Not everyone's gonna want to read examples; those who want to dive in will want to install it and go from there. There wasn't a link to installation at all, which is odd (especially in the README version of this).


Read the [API Reference](api/agent.md) to understand PydanticAI's interface.
Join [Slack](https://logfire.pydantic.dev/docs/join-slack/) or file an issue on [GitHub](https://github.com/pydantic/pydantic-ai/issues) if you have any questions.
DouweM (Contributor, Author)

In my experience, pushing people to join Slack ASAP massively increases the chances of them seeing it through when they inevitably get stuck on something

@@ -59,14 +59,16 @@ If you want to use a different provider or profile, you can instantiate a model

## Custom Models

!!! note
    If a model API is compatible with the OpenAI API, you do not need a custom model class and can provide your own [custom provider](openai.md#openai-compatible-models) instead.
DouweM (Contributor, Author)

The vast majority of users should not need to implement their own model class, so we should mention this upfront.
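As a sketch of what that looks like in practice, assuming the `OpenAIModel`/`OpenAIProvider` classes as they existed around this PR (the endpoint URL and model name below are hypothetical placeholders):

```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

# Point the OpenAI model class at any OpenAI-compatible endpoint instead of
# implementing a custom model class.
model = OpenAIModel(
    'my-hosted-model',  # hypothetical model name served by the endpoint
    provider=OpenAIProvider(base_url='http://localhost:8000/v1', api_key='not-needed'),
)
agent = Agent(model)
print(agent.run_sync('Hello!').output)
```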


* __Graph Support__:
[Pydantic Graph](graph.md) provides a powerful way to define graphs using typing hints, this is useful in complex applications where standard control flow can degrade to spaghetti code.
9. __Dependency Injection__:
DouweM (Contributor, Author)

I moved this down as I think it's less compelling than streamed outputs
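For readers unfamiliar with the feature being reordered here, a minimal dependency-injection sketch assuming the `deps_type`/`RunContext` API as it existed around this PR:

```python
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext


@dataclass
class Deps:
    customer_name: str


agent = Agent('openai:gpt-4o', deps_type=Deps)


@agent.system_prompt
def add_customer_name(ctx: RunContext[Deps]) -> str:
    # Dependencies are injected into system prompts, tools, and validators via RunContext.
    return f"The customer's name is {ctx.deps.customer_name!r}."


result = agent.run_sync('Write a one-line greeting.', deps=Deps(customer_name='Ada'))
print(result.output)
```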

Reviewers

samuelcolvin (awaiting requested review)

Assignees

samuelcolvin

Labels

documentation (Improvements or additions to documentation)

2 participants

DouweM, samuelcolvin
