
Ai migration doc #146


Open
jherr wants to merge 1 commit into feat/splitting-adapters-up from ai-migration-doc

Conversation

@jherr (Contributor) commented Dec 13, 2025 (edited by coderabbitai bot)

🎯 Changes

Adds a migration doc to help folks migrate from the first version of the API to the proposed newer version.

✅ Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Summary by CodeRabbit

Release Notes

  • New Features

    • Unified ai() API for all AI interactions, replacing chat()
    • Tree-shakeable modular adapters for optimized bundle sizes
    • Image generation, text-to-speech, transcription, and experimental video generation capabilities
    • Structured output support with Zod schema validation
    • Non-streaming mode support
  • Breaking Changes

    • chat() renamed to ai()
    • Adapter naming updated: openai() → openaiText(), anthropic() → anthropicText(), etc.
    • Type renames: ChatOptions → TextOptions (a minimal before/after sketch follows this list)
  • Documentation

    • New migration guide for upgrading to the updated API
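
A minimal before/after sketch of these renames (the model id and message contents are illustrative placeholders, not code from this PR):

```ts
// Before: monolithic adapter with chat()
import { chat } from '@tanstack/ai'
import { openai } from '@tanstack/ai-openai'

const legacyStream = chat({
  adapter: openai(),
  model: 'gpt-4o', // illustrative model id
  messages: [{ role: 'user', content: 'Hello!' }],
})

// After: tree-shakeable text adapter with the unified ai() entry point
import { ai } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

const stream = ai({
  adapter: openaiText(),
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
})
```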

✏️ Tip: You can customize this high-level summary in your review settings.

jherr requested a review from a team on December 13, 2025 17:39
@coderabbitai bot commented Dec 13, 2025 (edited)

Walkthrough

This pull request refactors TanStack AI from monolithic adapters to tree-shakeable modular adapters, renaming the primary entry point from chat() to ai(), renaming types from Chat* to Text*, and introducing specialized adapters for embeddings, image generation, text-to-speech, transcription, video, and summarization across OpenAI, Anthropic, Gemini, and Ollama providers. Comprehensive documentation and examples are updated to reflect the new unified API.
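
A rough sketch of the unified server-side flow this enables (the route shape and model id are illustrative; ai, openaiText, and toStreamResponse are the exports named in this PR):

```ts
import { ai, toStreamResponse } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

// Illustrative request handler: stream a text completion back to the client.
export async function POST(request: Request): Promise<Response> {
  const { messages } = await request.json()
  const abortController = new AbortController()

  const stream = ai({
    adapter: openaiText(),
    model: 'gpt-4o', // illustrative model id
    messages,
  })

  return toStreamResponse(stream, { abortController })
}
```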

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Changeset & Core Type Renames**<br>`.changeset/two-bikes-kneel.md`, `docs/config.json` | Records adapter refactoring for tree shaking; updates navigation to include migration guide and new guides; renames public API references (`chat` → `text`, `ChatOptions` → `TextOptions`, `ChatCompletionChunk` → `TextCompletionChunk`, `ChatStreamOptions*` → `TextStreamOptions*`) |
| **Primary API & Documentation**<br>`docs/api/ai.md`, `README.md`, `docs/getting-started/*`, `docs/protocol/*` | Renames primary entry point from `chat()` to `ai()`; updates imports and usage across all documentation; introduces `toServerSentEventsStream` and `toStreamResponse` as public exports; documents unified `ai()` workflow with adapter-specific variants |
| **Adapter Documentation**<br>`docs/adapters/anthropic.md`, `docs/adapters/gemini.md`, `docs/adapters/ollama.md`, `docs/adapters/openai.md` | Replaces monolithic adapter patterns with tree-shakeable variants (`anthropicText`, `anthropicSummarize`, etc.); documents new model identifiers, provider options, and specialized adapters (embed, image, TTS, summarization); expands API references and configuration examples |
| **New Guides**<br>`docs/guides/image-generation.md`, `docs/guides/text-to-speech.md`, `docs/guides/transcription.md`, `docs/guides/video-generation.md`, `docs/guides/tree-shakeable-adapters.md`, `docs/guides/migration-guide.md` | New documentation covering image generation, TTS, transcription, and video generation capabilities; comprehensive migration guide from legacy `chat()`/`embedding()`/`summarize()` to unified `ai()` with tree-shakeable adapters; tree-shakeable adapters overview |
| **Guide Updates**<br>`docs/guides/agentic-cycle.md`, `docs/guides/client-tools.md`, `docs/guides/multimodal-content.md`, `docs/guides/per-model-type-safety.md`, `docs/guides/server-tools.md`, `docs/guides/streaming.md`, `docs/guides/tool-*.md` | All guide examples updated to use `ai()` instead of `chat()`; adapter imports changed to new text variants (`openaiText`, `anthropicText`, etc.); multimodal examples refactored to use adapter factories |
| **Reference Documentation**<br>`docs/reference/classes/BaseAdapter.md`, `docs/reference/functions/text.md`, `docs/reference/functions/textOptions.md`, `docs/reference/interfaces/AIAdapter.md`, `docs/reference/interfaces/TextCompletionChunk.md`, `docs/reference/interfaces/TextOptions.md`, `docs/reference/type-aliases/TextStreamOptions*.md`, `docs/reference/index.md` | Updates parameter/return types from `ChatOptions` to `TextOptions`; renames interface and type references; documents new unified reference surface |
| **OpenAI Adapters**<br>`packages/typescript/ai-openai/src/adapters/text.ts`, `packages/typescript/ai-openai/src/adapters/embed.ts`, `packages/typescript/ai-openai/src/adapters/image.ts`, `packages/typescript/ai-openai/src/adapters/summarize.ts`, `packages/typescript/ai-openai/src/adapters/tts.ts`, `packages/typescript/ai-openai/src/adapters/transcription.ts`, `packages/typescript/ai-openai/src/adapters/video.ts` | New modular adapters for text, embedding, image generation, summarization, TTS, transcription, and video with corresponding factory functions (`openaiText`, `createOpenaiText`, `openaiEmbed`, etc.) |
| **OpenAI Utilities & Configuration**<br>`packages/typescript/ai-openai/src/utils/client.ts`, `packages/typescript/ai-openai/src/model-meta.ts`, `packages/typescript/ai-openai/src/image/image-provider-options.ts`, `packages/typescript/ai-openai/src/audio/tts-provider-options.ts`, `packages/typescript/ai-openai/src/audio/transcription-provider-options.ts`, `packages/typescript/ai-openai/src/tools/function-tool.ts`, `packages/typescript/ai-openai/src/openai-adapter.ts` | Adds client factory, model metadata, comprehensive provider options for image/TTS/transcription; updates legacy adapter to use `TextOptions`; enables new video/transcription/TTS/image models |
| **OpenAI Exports & Tests**<br>`packages/typescript/ai-openai/src/index.ts`, `packages/typescript/ai-openai/package.json` | Exports all new adapters and types as tree-shakeable modules; adds zod to peerDependencies; deprecates legacy monolithic adapter |
| **Anthropic Adapters**<br>`packages/typescript/ai-anthropic/src/adapters/text.ts`, `packages/typescript/ai-anthropic/src/adapters/summarize.ts` | New text and summarization adapters with streaming, structured output, and tool support; comprehensive message and content conversion utilities |
| **Anthropic Utilities & Configuration**<br>`packages/typescript/ai-anthropic/src/utils/client.ts`, `packages/typescript/ai-anthropic/src/utils/schema-converter.ts`, `packages/typescript/ai-anthropic/src/tools/custom-tool.ts`, `packages/typescript/ai-anthropic/src/anthropic-adapter.ts` | Adds Anthropic client factory and schema converter; updates tool converter to use Anthropic-specific schema; legacy adapter now uses `TextOptions` |
| **Anthropic Exports**<br>`packages/typescript/ai-anthropic/src/index.ts`, `packages/typescript/ai-anthropic/package.json` | Exports new text and summarize adapters as tree-shakeable modules; adds zod to peerDependencies |
| **Gemini Adapters**<br>`packages/typescript/ai-gemini/src/adapters/text.ts`, `packages/typescript/ai-gemini/src/adapters/embed.ts`, `packages/typescript/ai-gemini/src/adapters/image.ts`, `packages/typescript/ai-gemini/src/adapters/summarize.ts`, `packages/typescript/ai-gemini/src/adapters/tts.ts` | New modular adapters for text, embedding, image generation, summarization, and TTS with corresponding factories |
| **Gemini Utilities & Configuration**<br>`packages/typescript/ai-gemini/src/utils/client.ts`, `packages/typescript/ai-gemini/src/utils/schema-converter.ts`, `packages/typescript/ai-gemini/src/image/image-provider-options.ts`, `packages/typescript/ai-gemini/src/tools/tool-converter.ts`, `packages/typescript/ai-gemini/src/model-meta.ts`, `packages/typescript/ai-gemini/src/gemini-adapter.ts` | Adds Gemini client factory, schema converter, comprehensive image provider options; enables image/TTS models; legacy adapter uses `TextOptions` |
| **Gemini Exports & Tests**<br>`packages/typescript/ai-gemini/src/index.ts`, `packages/typescript/ai-gemini/package.json`, `packages/typescript/ai-gemini/tests/*` | Exports all new adapters as tree-shakeable modules; adds zod to peerDependencies; tests updated to use new adapters and `ai()` |
| **Ollama Adapters**<br>`packages/typescript/ai-ollama/src/adapters/text.ts`, `packages/typescript/ai-ollama/src/adapters/embed.ts`, `packages/typescript/ai-ollama/src/adapters/summarize.ts` | New modular text, embedding, and summarization adapters with corresponding factories |
| **Ollama Utilities & Configuration**<br>`packages/typescript/ai-ollama/src/utils/client.ts`, `packages/typescript/ai-ollama/src/utils/schema-converter.ts`, `packages/typescript/ai-ollama/src/ollama-adapter.ts` | Adds Ollama client factory and schema converter; legacy adapter uses `TextOptions`; uses Ollama-specific schema conversion |
| **Ollama Exports**<br>`packages/typescript/ai-ollama/src/index.ts`, `packages/typescript/ai-ollama/package.json` | Exports new text, embed, and summarize adapters as tree-shakeable modules; moves zod from dependencies to peerDependencies |
| **Client & DevTools**<br>`packages/typescript/ai-client/src/chat-client.ts`, `packages/typescript/ai-devtools/src/store/ai-context.tsx`, `packages/typescript/ai-devtools/vite.config.ts` | Adds post-stream action queue to chat client; updates event channels from `chat:*` to `text:*` in devtools; adds type cast for Solid plugin |
| **Example Applications**<br>`examples/ts-group-chat/chat-server/capnweb-rpc.ts`, `examples/ts-group-chat/chat-server/chat-logic.ts`, `examples/ts-group-chat/chat-server/claude-service.ts`, `examples/ts-react-chat/src/routes/api.tanchat.ts`, `examples/ts-solid-chat/src/routes/api.chat.ts`, `examples/ts-svelte-chat/src/routes/api/chat/+server.ts`, `examples/ts-vue-chat/vite.config.ts`, `examples/README.md` | All examples updated to use new adapter factories (`openaiText`, `anthropicText`, etc.) and `ai()` instead of `chat()`; imports and configurations refactored; type normalizations applied |
| **Example Dependency Updates**<br>`examples/*/package.json`, `packages/typescript/ai-client/package.json`, `packages/typescript/ai-devtools/package.json`, `package.json` | Minor version bumps for vite (^7.2.4 → ^7.2.7), TanStack packages, React, and UI libraries across all example applications |

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Areas requiring extra attention:

  • New adapter implementations (packages/typescript/ai-*/src/adapters/*.ts): Complex streaming logic, schema conversion, message formatting, and provider option handling across OpenAI, Anthropic, Gemini, and Ollama
  • Type system changes across core and all adapter packages (migration from ChatOptions → TextOptions and related type renames)
  • Schema converter utilities for each provider (convertZodTo*Schema functions) and their integration into tool conversion and structured output flows
  • Post-stream action queue in ai-client for managing delayed continuation logic
  • Event channel rename in devtools from chat:* to text:* and impact on tracking/state
  • Documentation migration completeness: Ensure all code examples and API references accurately reflect the new unified entry point and adapter exports
  • Deprecation handling: Legacy monolithic adapter exports remain but marked for deprecation; verify backward compatibility where appropriate
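
For the deprecation point above, a minimal sketch of what a marked legacy export could look like (the JSDoc wording and export shape are assumptions; only the fact that the monolithic adapter is deprecated comes from this PR):

```ts
// packages/typescript/ai-openai/src/index.ts (illustrative)

/**
 * @deprecated Prefer the tree-shakeable `openaiText()` adapter together with `ai()`.
 */
export { openai } from './openai-adapter'

// The new tree-shakeable adapters remain the recommended entry points.
export { openaiText, createOpenAIText } from './adapters/text'
```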

Poem

🐇Tree-shakeable whiskers and modular hops,
One ai() to rule them, no more adapter crops!
Text, embed, image, voice in separate bundles so tight,
Hopping through schemas, each provider just right. 🌳✨

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Title check | ❓ Inconclusive | The title "Ai migration doc" is vague and generic, lacking specificity about the migration scope, nature, or main change details. | Improve title clarity: use "Add migration guide for API v1 to v2 transition" or similar to clearly convey the purpose and scope. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description check | ✅ Passed | The PR description follows the template with checked items and provides context, though the "Changes" section is brief; the checklist is complete and release impact is clear. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 97.22%, which is sufficient. The required threshold is 80.00%. |
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch ai-migration-doc

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@nx-cloud bot commented Dec 13, 2025 (edited)

View your CI Pipeline Execution ↗ for commit a0c983c

| Command | Status | Duration | Result |
| --- | --- | --- | --- |
| `nx affected --targets=test:sherif,test:knip,tes...` | ✅ Succeeded | 7s | View ↗ |
| `nx run-many --targets=build --exclude=examples/**` | ✅ Succeeded | 1s | View ↗ |

☁️ Nx Cloud last updated this comment at 2025-12-13 17:40:53 UTC

@pkg-pr-new

Open in StackBlitz

@tanstack/ai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai@146

@tanstack/ai-anthropic

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-anthropic@146

@tanstack/ai-client

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-client@146

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-devtools-core@146

@tanstack/ai-gemini

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-gemini@146

@tanstack/ai-ollama

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-ollama@146

@tanstack/ai-openai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openai@146

@tanstack/ai-react

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react@146

@tanstack/ai-react-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react-ui@146

@tanstack/ai-solid

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid@146

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid-ui@146

@tanstack/ai-svelte

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-svelte@146

@tanstack/ai-vue

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue@146

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue-ui@146

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/react-ai-devtools@146

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/solid-ai-devtools@146

commit: a0c983c

@coderabbitai bot left a comment


Actionable comments posted: 5

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (12)
packages/typescript/ai-devtools/src/store/ai-context.tsx (1)

1312-1384: Address remaining chat:* event references across the codebase.

The migration from chat:* to text:* events is incomplete. While the code in this file has been correctly migrated to use text:started, text:completed, and text:iteration, the following files still reference the old chat:* namespace and should be updated for consistency:

  • testing/panel/src/lib/recording.ts: Lines 161, 322 use chat:started and chat:completed
  • packages/typescript/ai/tests/ai-text.test.ts: References to chat:started and chat:iteration in test cases
  • docs/guides/observability.md: Line 25 example uses chat:started

Update these references to use the text:* event namespace to maintain consistency across the codebase.
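
For reference, the rename amounts to this mapping (a plain lookup for illustration; not code from the PR):

```ts
// Old devtools event channel → new channel
const renamedEventChannels = {
  'chat:started': 'text:started',
  'chat:completed': 'text:completed',
  'chat:iteration': 'text:iteration',
} as const
```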

examples/ts-vue-chat/vite.config.ts (1)

232-251: Abort upstream work on client disconnect to avoid leaking in-flight AI requests.

The AbortController is created and passed to ai() and toStreamResponse(), but it's never aborted when the HTTP client disconnects. This allows the AI provider call to continue running unnecessarily, wasting resources.

```diff
             const abortController = new AbortController()
+            req.on('close', () => abortController.abort())
+            req.on('abort', () => abortController.abort())
             const stream = ai({
               adapter: adapter as any,
```

Also remove the as any type casts for adapter and selectedModel. The TanStack AI ai() function supports proper generic type inference without requiring type assertions:

```diff
             const stream = ai({
-              adapter: adapter as any,
-              model: selectedModel as any,
+              adapter: adapter,
+              model: selectedModel,
               tools: [
```

If the conditionally-assigned adapter makes inference difficult, use a type union or proper type guards instead of as any.

examples/ts-react-chat/src/routes/api.tanchat.ts (1)

100-115: Refactor to enable compile-time type safety for adapter and model selection.

The as any casts on lines 101–102 completely bypass TypeScript's type checking. The ai() function requires concrete generic types—specifically a literal model string type paired with the specific adapter type—which cannot be satisfied when both vary at runtime.

Move the ai() call into each switch case so that the adapter and model are hardcoded per provider, allowing TypeScript to properly infer AIOptionsFor and enforce type safety:

```ts
switch (provider) {
  case 'anthropic':
    return toStreamResponse(
      ai({
        adapter: anthropicText(),
        model: 'claude-sonnet-4-5',
        tools: [...],
        // ... rest of options
      }),
      { abortController },
    )
  case 'gemini':
    return toStreamResponse(
      ai({
        adapter: geminiText(),
        model: 'gemini-2.0-flash-exp',
        tools: [...],
        // ... rest of options
      }),
      { abortController },
    )
  // ... other cases
}
```

This pattern provides per-model type safety as required by the coding guidelines: "Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety."

packages/typescript/ai-gemini/src/model-meta.ts (1)

224-606: Remove IMAGEN_3 model definition—it was deprecated and retired on November 10, 2025.

Model identifiers verified against current Gemini API documentation (December 2025):

  • Gemini 2.5 Flash variants: gemini-2.5-flash-image, gemini-2.5-flash-lite, gemini-2.5-flash-lite-preview-09-2025 ✓
  • Gemini 2.0 Flash variants: gemini-2.0-flash, gemini-2.0-flash-preview-image-generation, gemini-2.0-flash-lite ✓
  • Imagen 4 variants: imagen-4.0-generate-001, imagen-4.0-ultra-generate-001, imagen-4.0-fast-generate-001 ✓

However, IMAGEN_3 (imagen-3.0-generate-002) is no longer available—Google deprecated it and recommends using Imagen 4 instead. The model definition should be removed from active code.

examples/README.md (1)

385-390: Inconsistent API usage - complete the migration.

This code example still uses chat({ instead of ai({, which is inconsistent with the rest of the migration in this file (lines 311, 314, 363) and the PR's objectives.

Apply this diff to complete the migration:

```diff
-const stream = chat({
+const stream = ai({
   adapter: openai(),
   model: 'gpt-4o',
   messages,
   tools: [weatherTool], // SDK executes these automatically
 })
```
docs/guides/tools.md (1)

282-288: Fix invalid snippet: duplicate tools key in object literal.
As written, this won’t compile and readers won’t know which form to use.

```diff
 ai({
   adapter: openaiText(),
   messages,
-  tools: [addToCartDef], // Client will execute, or
-  tools: [addToCartServer], // Server will execute
+  // Client will execute (definition-only):
+  // tools: [addToCartDef],
+
+  // Server will execute (server impl):
+  tools: [addToCartServer],
 });
```
docs/getting-started/quick-start.md (1)

181-194: Incomplete ai() call in tool example may confuse users.

The example shows ai({ tools: [getProducts] }) without the required adapter, model, and messages parameters. Consider showing a complete call or adding a comment indicating this is a partial snippet.

```diff
 const getProducts = getProductsDef.server(async ({ query }) => {
   return await db.products.search(query)
 })
-ai({ tools: [getProducts] })
+// Use with the ai() function
+ai({
+  adapter: openaiText(),
+  model: 'gpt-4o',
+  messages: [{ role: 'user', content: 'Find products' }],
+  tools: [getProducts],
+})
```
packages/typescript/ai-anthropic/src/anthropic-adapter.ts (1)

83-96: Wire TextOptions.abortController into the SDK request signal (or document precedence).

After switching to TextOptions, consumers will reasonably expect abortController to work. Consider:

```diff
-      const stream = await this.client.beta.messages.create(
+      const signal =
+        options.abortController?.signal ??
+        (options.request instanceof Request ? options.request.signal : options.request?.signal)
+
+      const stream = await this.client.beta.messages.create(
         { ...requestParams, stream: true },
         {
-          signal: options.request?.signal,
+          signal,
           headers: options.request?.headers,
         },
       )
```
docs/adapters/anthropic.md (1)

89-117: Docs: messages is undefined in the “With Tools” example.
The snippet won’t run as-is; add a minimal const messages = [...] (or show request parsing like the earlier POST example).

docs/api/ai.md (3)

18-30: Docs: missing maxIterations import in the main ai() example.
Copy/paste will fail. Add it to the import line.

```diff
-import { ai } from "@tanstack/ai";
+import { ai, maxIterations } from "@tanstack/ai";
```

64-75: Docs: summarization style values don’t match the actual type.
SummarizationOptions.style is 'bullet-points' | 'paragraph' | 'concise' (no "detailed"). (Based on the provided type snippet.)

```diff
-- `style?` - Summary style ("concise" | "detailed")
+- `style?` - Summary style ("concise" | "bullet-points" | "paragraph")
```

101-144: Align toolDefinition examples to a consistent Zod + .server()/.client() pattern.

The getWeather example at lines 334–359 uses { parameters, execute }, contradicting the toolDefinition() pattern shown at lines 101–144 (inputSchema, outputSchema, .server()/.client()). Update the weatherTool example to match the documented pattern:

```diff
-const weatherTool = toolDefinition({
-  name: "getWeather",
-  description: "Get the current weather for a city",
-  parameters: {
-    city: { type: "string", description: "City name" },
-  },
-  async execute({ city }) {
-    return { temperature: 72, condition: "Sunny" };
-  },
-});
+import { z } from "zod";
+
+const weatherToolDef = toolDefinition({
+  name: "getWeather",
+  description: "Get the current weather for a city",
+  inputSchema: z.object({
+    city: z.string(),
+  }),
+  outputSchema: z.object({
+    temperature: z.number(),
+    condition: z.string(),
+  }),
+});
+
+const weatherTool = weatherToolDef.server(async ({ city }) => {
+  return { temperature: 72, condition: "Sunny" };
+});
```

Also clarify the distinction between passing a ToolDefinition (schema only) versus toolDef.server(...) (executable on server).

♻️ Duplicate comments (2)
packages/typescript/ai-openai/package.json (1)

43-52: Verify dual listing of @tanstack/ai in both dependencies and peerDependencies.

Similar to the Anthropic adapter package, this package lists @tanstack/ai in both dependencies (line 43) and peerDependencies (line 51). Clarify whether this dual listing is intentional or if @tanstack/ai should be exclusively in one section.

packages/typescript/ai-ollama/package.json (1)

44-53: Verify dual listing of @tanstack/ai in both dependencies and peerDependencies.

Consistent with the Anthropic and OpenAI adapter packages, this package lists @tanstack/ai in both dependencies (line 44) and peerDependencies (line 52). This pattern appears across all adapter packages in this PR, suggesting it may be intentional, but it warrants verification.

🟠 Major comments (25)
examples/ts-react-chat/src/routes/api.tanchat.ts-116-116 (1)

116-116: Stream type cast to any is unnecessary with proper adapter typing.

The stream returned from ai() is cast to any before passing to toStreamResponse(). When the text adapter is properly typed, ai() returns AsyncIterable<StreamChunk> by default (matching toStreamResponse()'s parameter type exactly). This cast is a workaround for the adapter being cast to any on line 102 and would be eliminated by fixing the root adapter type issue.

packages/typescript/ai-ollama/src/adapters/embed.ts-40-73 (1)

40-73: OllamaEmbedAdapterOptions.host is unused (misleading public API)

Right now host in OllamaEmbedAdapterOptions is never read; only hostOrClient affects the client. Either remove host from the options type, or honor it when hostOrClient isn’t provided.

Proposed fix (honor options.host):

```diff
 export class OllamaEmbedAdapter implements EmbeddingAdapter<
   typeof OllamaEmbeddingModels,
   OllamaEmbedProviderOptions
 > {
@@
   constructor(
     hostOrClient?: string | Ollama,
     options: OllamaEmbedAdapterOptions = {},
   ) {
     if (typeof hostOrClient === 'string' || hostOrClient === undefined) {
-      this.client = createOllamaClient({ host: hostOrClient })
+      const host = typeof hostOrClient === 'string' ? hostOrClient : options.host
+      this.client = createOllamaClient({ host })
     } else {
       this.client = hostOrClient
     }
     this.defaultModel = options.model ?? 'nomic-embed-text'
   }
@@
 export function createOllamaEmbed(
   host?: string,
   options?: OllamaEmbedAdapterOptions,
 ): OllamaEmbedAdapter {
   return new OllamaEmbedAdapter(host, options)
 }
```

Also applies to: 114-129

packages/typescript/ai-ollama/src/adapters/text.ts-100-131 (1)

100-131: OllamaTextAdapterOptions.host is unused (misleading public API)
Same as the other adapters: either remove host from options or honor it when hostOrClient is undefined.

Also applies to: 391-406

packages/typescript/ai-ollama/src/adapters/summarize.ts-55-88 (1)

55-88: OllamaSummarizeAdapterOptions.host is unused (misleading public API)
Same issue as embed.ts: either remove host from options or honor it when hostOrClient is undefined.

Also applies to: 152-167

packages/typescript/ai-ollama/src/adapters/text.ts-189-281 (1)

189-281: Tool call index is not provided by ollama's streamed responses; derive it deterministically from array position

Ollama's OpenAI-compatible streamed tool-call responses do not include the index field on function, making the unsafe cast to { index: number } unreliable. This breaks consumers that depend on ordering or reconciliation. Use the loop index when iterating tool calls instead.

```diff
-      const handleToolCall = (toolCall: ToolCall): StreamChunk => {
+      const handleToolCall = (toolCall: ToolCall, index: number): StreamChunk => {
         const actualToolCall = toolCall as ToolCall & {
           id: string
-          function: { index: number }
+          function: { index?: number }
         }
         return {
@@
-          index: actualToolCall.function.index,
+          index,
         }
       }
@@
         if (chunk.message.tool_calls && chunk.message.tool_calls.length > 0) {
-          for (const toolCall of chunk.message.tool_calls) {
-            yield handleToolCall(toolCall)
+          for (const [i, toolCall] of chunk.message.tool_calls.entries()) {
+            yield handleToolCall(toolCall, i)
             hasEmittedToolCalls = true
           }
@@
       if (chunk.message.tool_calls && chunk.message.tool_calls.length > 0) {
-        for (const toolCall of chunk.message.tool_calls) {
-          yield handleToolCall(toolCall)
+        for (const [i, toolCall] of chunk.message.tool_calls.entries()) {
+          yield handleToolCall(toolCall, i)
           hasEmittedToolCalls = true
         }
       }
```
packages/typescript/ai-ollama/src/adapters/summarize.ts-90-118 (1)

90-118: Provider options interface is unused; SummarizationOptions framework limitation prevents implementation

OllamaSummarizeProviderOptions is defined but cannot be applied because SummarizationOptions (from @tanstack/ai) does not include a providerOptions field—unlike TextOptions which supports generic, type-safe provider options. The summarize() method hardcodes temperature: 0.3 and num_predict, leaving model tuning parameters like num_ctx, top_p, and repeat_penalty inaccessible to callers.

To fix this, extend SummarizationOptions to include providerOptions similar to TextOptions, then update the adapter to extract and merge them:

```diff
   async summarize(options: SummarizationOptions): Promise<SummarizationResult> {
     const model = options.model || this.defaultModel
     const prompt = this.buildSummarizationPrompt(options)
+    const providerOptions = options.providerOptions as
+      | OllamaSummarizeProviderOptions
+      | undefined
     const response = await this.client.generate({
       model,
       prompt,
       options: {
         temperature: 0.3,
         num_predict: options.maxLength ?? 500,
+        ...providerOptions,
       },
       stream: false,
     })
```

Committable suggestion skipped: line range outside the PR's diff.

packages/typescript/ai-ollama/src/adapters/text.ts-147-187 (1)

147-187: structuredOutput() should validate with Zod after JSON parsing

The method converts the outputSchema to JSON Schema and passes it to Ollama's format parameter (correct), but then only calls JSON.parse() without validating the result against the original schema. This means callers can receive structurally-invalid data even though they provided a schema.

Add outputSchema.safeParse() after JSON parsing to ensure the response matches the intended schema:

```diff
       let parsed: unknown
       try {
         parsed = JSON.parse(rawText)
       } catch {
         throw new Error(
           `Failed to parse structured output as JSON. Content: ${rawText.slice(0, 200)}${rawText.length > 200 ? '...' : ''}`,
         )
       }
+      // Runtime validation with Zod
+      const validated = outputSchema.safeParse(parsed)
+      if (!validated.success) {
+        throw new Error(
+          `Structured output did not match schema: ${validated.error.message}`,
+        )
+      }
       return {
-        data: parsed,
+        data: validated.data,
         rawText,
       }
```
packages/typescript/ai-ollama/src/adapters/embed.ts-75-108 (1)

75-108: Pass provider options through embeddings request + use generateId for IDs

The createEmbeddings method ignores options.providerOptions and uses Date.now() for ID generation, creating two issues:

  1. Callers cannot pass provider-specific options (e.g., num_gpu, num_thread) since they're not forwarded to the Ollama embeddings request.
  2. id: `embed-${Date.now()}` can collide under concurrency; other adapters use generateId() instead.

Import generateId and apply:

```diff
 import {
   createOllamaClient,
   estimateTokens,
   getOllamaHostFromEnv,
+  generateId,
 } from '../utils'
```

```diff
   async createEmbeddings(options: EmbeddingOptions): Promise<EmbeddingResult> {
     const model = options.model || this.defaultModel
+    const providerOptions = options.providerOptions as
+      | OllamaEmbedProviderOptions
+      | undefined

     for (const input of inputs) {
       const response = await this.client.embeddings({
         model,
         prompt: input,
+        options: providerOptions,
       })
```

```diff
     return {
-      id: `embed-${Date.now()}`,
+      id: generateId('emb'),
       model,
       embeddings,
```

Committable suggestion skipped: line range outside the PR's diff.

packages/typescript/ai-openai/src/audio/tts-provider-options.ts-1-31 (1)

1-31: Remove "verse" voice option and clarify the instructions parameter limitation.

The voice "verse" is not available in the current OpenAI TTS API and should be removed. The instructions parameter comment is incomplete—while it correctly notes that instructions do not work with tts-1 or tts-1-hd, it should clarify that instructions are supported by gpt-4o-mini-tts. The audio formats listed (mp3, opus, aac, flac, wav, pcm) are correct.

packages/typescript/ai-gemini/src/utils/schema-converter.ts-8-15 (1)

8-15: Remove reliance on internal Zod API schema.def.type for object detection; use JSON Schema structure instead.
The schema.def.type property is not part of Zod's public API and may be undefined or removed in future versions, making this converter fragile at runtime. The JSON Schema structure-based checks (checking for the properties key, empty objects) are more stable alternatives.

```diff
-    const isZodObject =
-      typeof schema === 'object' &&
-      'def' in schema &&
-      schema.def.type === 'object'
-
-    if (isZodObject && !result.type) {
-      result.type = 'object'
-    }
-
-    if (Object.keys(result).length === 0) {
+    if (Object.keys(result).length === 0) {
       result.type = 'object'
     }

     if ('properties' in result && !result.type) {
       result.type = 'object'
     }
```

Also applies to: packages/typescript/ai-ollama/src/utils/schema-converter.ts, packages/typescript/ai-anthropic/src/utils/schema-converter.ts, packages/typescript/ai-openai/src/utils/schema-converter.ts

packages/typescript/ai-gemini/src/utils/client.ts-38-43 (1)

38-43: Extract duplicated generateId to a shared utility.

The generateId function is duplicated across multiple providers (Anthropic, OpenAI, Ollama, and now Gemini). This violates the DRY principle and creates maintenance overhead.

Consider extracting this to a shared utility package (e.g., @tanstack/ai-utils or within the core @tanstack/ai package) that all provider packages can import. This ensures consistent ID generation and reduces code duplication.

Example location: packages/typescript/ai-utils/src/id-generator.ts or packages/typescript/ai/src/utils/id-generator.ts

```ts
// Shared utility
export function generateId(prefix: string): string {
  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
}
```

Then import in each provider:

```diff
+import { generateId } from '@tanstack/ai-utils'
-export function generateId(prefix: string): string {
-  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
-}
```

Based on learnings, tree-shakeable architecture is preferred, so ensure the shared utility is properly exported.

packages/typescript/ai-openai/src/adapters/image.ts-59-92 (1)

59-92: Don’t let providerOptions override validated request fields (n, size, etc.).

Right now ...providerOptions is spread last, so a caller can bypass validation by supplying conflicting values inside providerOptions. Prefer applying providerOptions first, then writing the normalized/validated fields last.

```diff
   private buildRequest(
     options: ImageGenerationOptions<OpenAIImageProviderOptions>,
   ): OpenAI_SDK.Images.ImageGenerateParams {
     const { model, prompt, numberOfImages, size, providerOptions } = options
     return {
-      model,
-      prompt,
-      n: numberOfImages ?? 1,
-      size: size as OpenAI_SDK.Images.ImageGenerateParams['size'],
-      ...providerOptions,
+      ...providerOptions,
+      model,
+      prompt,
+      n: numberOfImages ?? 1,
+      size: size as OpenAI_SDK.Images.ImageGenerateParams['size'],
     }
   }
```
packages/typescript/ai-gemini/src/adapters/tts.ts-73-85 (1)

73-85: Either support or explicitly reject voice / format / speed from TTSOptions.
Right now they’re silently ignored, which is confusing. Consider throwing if provided, or document “Gemini TTS ignores these” in JSDoc.

packages/typescript/ai-anthropic/src/adapters/text.ts-130-167 (1)

130-167: Use client.beta.messages.create() for structured output instead of non-beta client.messages.create().

Tool-based structured outputs and tool_choice are beta-only features in the Anthropic SDK. The streaming implementation correctly uses client.beta.messages.create(), but the structuredOutput() method calls the non-beta client.messages.create(), which does not have full tool support. Change lines 155–162 to use this.client.beta.messages.create() to ensure both endpoints use the same compatible API contract.

packages/typescript/ai-anthropic/src/adapters/text.ts-341-426 (1)

341-426: Add explicit guard for role: 'tool' messages without toolCallId.

Currently, a role: 'tool' message without toolCallId falls through to the generic fallback and is converted to { role: 'user', ... }, corrupting the conversation. Although the framework never creates such messages, there is no explicit validation. Add an error throw or explicit log to reject this invalid state rather than silently mishandling it.

packages/typescript/ai-anthropic/src/adapters/text.ts-504-522 (1)

504-522: Add block type check to content_block_stop handling to prevent duplicate empty-args emissions.

The current code yields empty arguments when a tool call has no input, but doesn't verify that the stopped block is actually a tool_use block. A subsequent content_block_stop event (for text or thinking blocks) will reference the same currentToolIndex, potentially re-emitting the empty args for the last tool block. Use the event's block type or index to ensure only actual tool_use block completions trigger this emission.

packages/typescript/ai-anthropic/src/adapters/text.ts-482-502 (1)

482-502: Emit accumulated tool arguments instead of delta fragments to match content/thinking pattern and prevent data loss.

During input_json_delta events, you accumulate into existing.input but yield only event.delta.partial_json. This inconsistency causes consumers to lose non-empty tool call arguments—unlike content and thinking streams which emit both delta and accumulated state. The diff changes arguments to use existing.input for consistency and completeness.

```diff
           } else if (event.delta.type === 'input_json_delta') {
             const existing = toolCallsMap.get(currentToolIndex)
             if (existing) {
               existing.input += event.delta.partial_json
               yield {
                 type: 'tool_call',
                 id: genId(),
                 model: model,
                 timestamp,
                 toolCall: {
                   id: existing.id,
                   type: 'function',
                   function: {
                     name: existing.name,
-                    arguments: event.delta.partial_json,
+                    arguments: existing.input,
                   },
                 },
                 index: currentToolIndex,
               }
             }
           }
```
packages/typescript/ai-openai/src/adapters/embed.ts-22-27 (1)

22-27: OpenAIEmbedProviderOptions are defined but never forwarded to the OpenAI API.
The encodingFormat and user options have no effect unless extracted from the options parameter and passed to the embeddings request as encoding_format and user. Additionally, consider using OPENAI_EMBEDDING_MODELS[0] instead of hardcoding 'text-embedding-ada-002' for consistency with the model definitions.

```diff
 export class OpenAIEmbedAdapter extends BaseEmbeddingAdapter<
   typeof OPENAI_EMBEDDING_MODELS,
   OpenAIEmbedProviderOptions
 > {
   async createEmbeddings(options: EmbeddingOptions): Promise<EmbeddingResult> {
+    const providerOptions = (options as EmbeddingOptions & {
+      providerOptions?: OpenAIEmbedProviderOptions
+    }).providerOptions
+
     const response = await this.client.embeddings.create({
-      model: options.model || 'text-embedding-ada-002',
+      model: options.model || OPENAI_EMBEDDING_MODELS[0],
       input: options.input,
       dimensions: options.dimensions,
+      encoding_format: providerOptions?.encodingFormat,
+      user: providerOptions?.user,
     })
```

Also applies to lines 50–55.

packages/typescript/ai-gemini/src/adapters/embed.ts-20-29 (1)

20-29: Wire GeminiEmbedProviderOptions into the embedContent request and throw on missing embeddings.

The adapter defines GeminiEmbedProviderOptions with taskType, title, and outputDimensionality but never uses them in the API request. Additionally, if response.embeddings?.[0]?.values is missing, the embedding is silently dropped, causing the returned embeddings array to have fewer entries than the input array (index misalignment).

Fix this by:

  1. Extracting providerOptions from options
  2. Passing taskType, title, and outputDimensionality to the embedContent config
  3. Throwing an error instead of silently dropping missing embeddings

```diff
   async createEmbeddings(options: EmbeddingOptions): Promise<EmbeddingResult> {
     const model = options.model || this.defaultModel
+    const providerOptions = (options as any).providerOptions as
+      | GeminiEmbedProviderOptions
+      | undefined
     // Ensure input is an array
     const inputs = Array.isArray(options.input)
       ? options.input
       : [options.input]
     const embeddings: Array<Array<number>> = []
-    for (const input of inputs) {
+    for (const [idx, input] of inputs.entries()) {
       const response = await this.client.models.embedContent({
         model,
         contents: [{ role: 'user', parts: [{ text: input }] }],
         config: {
-          outputDimensionality: options.dimensions,
+          outputDimensionality:
+            providerOptions?.outputDimensionality ?? (options as any).dimensions,
+          taskType: providerOptions?.taskType,
+          title: providerOptions?.title,
         },
       })
-      if (response.embeddings?.[0]?.values) {
-        embeddings.push(response.embeddings[0].values)
-      }
+      const values = response.embeddings?.[0]?.values
+      if (!values) {
+        throw new Error(
+          `Gemini embedContent returned no embedding values (model=${model}, index=${idx})`,
+        )
+      }
+      embeddings.push(values)
     }
```

Committable suggestion skipped: line range outside the PR's diff.

packages/typescript/ai-openai/src/adapters/video.ts-78-112 (1)

78-112: Validate the actual size you send (providerOptions.size can bypass validation).

createVideoJob() validates validateVideoSize(model, size) where size = options.size, but buildRequest() will fall back to providerOptions.size when options.size is undefined. That means an invalid providerOptions.size can slip through and hit the API.

Suggested fix:

```diff
- const { model, size, duration, providerOptions } = options
+ const { model, size, duration, providerOptions } = options
+ const resolvedSize = size ?? providerOptions?.size
- validateVideoSize(model, size)
+ validateVideoSize(model, resolvedSize)
```

(And use resolvedSize consistently when building the request.)

Also applies to: 275-301

packages/typescript/ai-gemini/src/adapters/text.ts-205-255 (1)

205-255: Use nextToolIndex to generate unique tool call IDs.

When the same function is called multiple times in a stream, const toolCallId = functionCall.name creates collisions in toolCallMap, causing separate tool calls to merge their arguments. Use a stable synthetic ID such as const toolCallId = functionCall.name || `${++callIndex}` (or any monotonic counter approach) to ensure each call gets a unique identifier and preserves its own arguments.

packages/typescript/ai-openai/src/adapters/video.ts-92-110 (1)

92-110: Heavy reliance on any for the OpenAI “videos” API needs a compatibility gate and clearer failure modes.

Right now the adapter assumes client.videos.create/retrieve/... exist and have certain response shapes. If the SDK differs, this becomes runtime-unsafe. Consider feature-detecting client.videos once (in the constructor) and throwing a clear “SDK does not support videos in this version” error before runtime calls.

Also applies to: 132-154, 171-272

packages/typescript/ai-gemini/src/adapters/summarize.ts-26-40 (1)

26-40: Provider options interface defined but non-functional due to missing framework support.

GeminiSummarizeProviderOptions defines generationConfig and safetySettings, but SummarizationOptions (the method parameter type) has no providerOptions field to pass them through. Additionally, the generateContent() call at lines 86-99 only applies systemInstruction to config, ignoring temperature, topP, topK, maxOutputTokens, stopSequences, and safety settings entirely.

Requires a two-part fix: (1) extend SummarizationOptions in the framework to support providerOptions, and (2) map options.providerOptions.generationConfig.* and safetySettings into the SDK config object.

packages/typescript/ai-gemini/src/adapters/text.ts-382-439 (1)

382-439: Store the function name with tool call results for proper Gemini API attribution.

functionResponse: { name: msg.toolCallId, ... } violates Gemini's API contract. According to the Gemini API specification, functionResponse.name must be the function's declared name (e.g., "get_current_weather"), not the toolCallId. Tool results won't be correctly attributed to their function calls. The ModelMessage structure for tool role messages needs to preserve the function name from the original tool call, or the adapter must have a way to map toolCallId back to the function name.

packages/typescript/ai-gemini/src/image/image-provider-options.ts-160-174 (1)

160-174: Aspect ratio mappings appear incorrect for some sizes.

Several mappings don't match the actual aspect ratios:

  • '1536x1024': '3:4' — 1536/1024 ≈ 1.5, which is closer to 3:2 (landscape), not 3:4 (portrait).
  • '1024x1536': '4:3' — 1024/1536 ≈ 0.67, which is 2:3 (portrait), not 4:3 (landscape).

The comments acknowledge these are "approximate mappings," but users relying on these mappings may get unexpected results.

```diff
 export const GEMINI_SIZE_TO_ASPECT_RATIO: Record<string, GeminiAspectRatio> = {
   // Square
   '1024x1024': '1:1',
   '512x512': '1:1',
   // Landscape
   '1024x768': '4:3',
-  '1536x1024': '3:4', // Actually this is portrait, but matching common dimensions
+  '1536x1024': '3:4', // Note: 1536x1024 is actually ~3:2, mapped to closest supported ratio
   '1792x1024': '16:9',
   '1920x1080': '16:9',
   // Portrait
   '768x1024': '3:4',
-  '1024x1536': '4:3', // Inverted
+  '1024x1536': '3:4', // Portrait dimensions, using 3:4 portrait ratio
   '1024x1792': '9:16',
   '1080x1920': '9:16',
 }
```
🟡 Minor comments (15)
packages/typescript/ai-devtools/src/store/ai-context.tsx-1309-1309 (1)

1309-1309: Update comment to reflect the new event namespace.

The comment references "Chat Events" but the event listeners below use the text:* namespace. Update the comment for consistency.

Apply this diff:

```diff
-    // ============= Chat Events (for usage tracking) =============
+    // ============= Text Events (for usage tracking) =============
```
examples/ts-vue-chat/vite.config.ts-5-10 (1)

5-10: Consolidate the duplicate @tanstack/ai import.

The @tanstack/ai package is imported twice—once on line 5 and again on line 10. Move toolDefinition to the first import statement:

```diff
-import { ai, maxIterations, toStreamResponse } from '@tanstack/ai'
+import { ai, maxIterations, toStreamResponse, toolDefinition } from '@tanstack/ai'
 import { openaiText } from '@tanstack/ai-openai'
 import { anthropicText } from '@tanstack/ai-anthropic'
 import { geminiText } from '@tanstack/ai-gemini'
 import { ollamaText } from '@tanstack/ai-ollama'
-import { toolDefinition } from '@tanstack/ai'
```
packages/typescript/ai-ollama/src/utils/client.ts-39-41 (1)

39-41: Potential for a short/empty random suffix in ID generation.

Math.random().toString(36).substring(7) can produce strings of varying length (0-6 characters) depending on the random value. For more consistent IDs, consider using substring(2, 9) or a fixed-length approach.

```diff
 export function generateId(prefix: string = 'msg'): string {
-  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
+  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(2, 9)}`
 }
```
packages/typescript/ai-ollama/src/adapters/summarize.ts-120-146 (1)

120-146: maxLength mixes “words” in the prompt with “tokens” in num_predict
The prompt says “under N words”, but you treat maxLength as tokens for num_predict. Either (a) rename semantics in the prompt to “tokens”, or (b) keep it as words and translate to an approximate token budget.

packages/typescript/ai-ollama/src/adapters/text.ts-283-386 (1)

283-386: Add explanatory comments for image source handling or implement explicit format conversion

The current code treats data and non-data image sources identically by pushing source.value directly. While the ollama-js SDK accepts multiple image formats (base64, URLs, file paths, bytes) and handles conversion internally, the previous implementation included clarifying comments that "URL-based images [are] not directly supported" and "Ollama may need the image to be fetched externally." These comments should be restored to document the current behavior and any limitations, or the code should explicitly handle format conversion for consistency with how other adapters differentiate source types.

docs/adapters/ollama.md-126-126 (1)

126-126: Fix hyphenation in compound modifier.

The phrase should use a hyphen: "tool-calling support" instead of "tool calling support" when used as a compound adjective modifying "support."

Apply this diff:

```diff
-**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool calling support.
+**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool-calling support.
```
packages/typescript/ai-gemini/src/tools/tool-converter.ts-79-82 (1)

79-82: Wrap variable declaration in a block to prevent leakage.

The jsonSchema variable is declared in a switch case without a surrounding block. This can cause the variable to be accessible in other case clauses, leading to unexpected behavior.

Apply this diff to wrap the declaration in a block:

```diff
       default:
+        {
         // Collect function declarations to group together
         // Description is required for Gemini function declarations
         if (!tool.description) {
           throw new Error(
             `Tool ${tool.name} requires a description for Gemini adapter`,
           )
         }
         // Convert Zod schema to Gemini-compatible JSON Schema
         const jsonSchema = tool.inputSchema
           ? convertZodToGeminiSchema(tool.inputSchema)
           : { type: 'object', properties: {}, required: [] }
         functionDeclarations.push({
           name: tool.name,
           description: tool.description,
           parameters: jsonSchema,
         })
         break
+        }
```

Based on static analysis hints.

docs/reference/type-aliases/TextStreamOptionsForModel.md-14-16 (1)

14-16: Update comment to reflect the renamed type.

Line 14 still refers to "Chat options" but should be "Text options" to match the renamed type TextStreamOptionsForModel.

Apply this diff:

```diff
-Chat options constrained by a specific model's capabilities.
+Text options constrained by a specific model's capabilities.
```
docs/config.json-189-196 (1)

189-196: Consider adding redirects/aliases for renamed function reference routes (chat → text).
If the docs site supports redirects, keeping legacy routes working avoids breaking deep links from older blog posts / READMEs.

docs/config.json-484-490 (1)

484-490: Same redirect/alias concern for renamed type-alias pages (ChatStream → TextStream).
Worth preserving legacy URLs if feasible.

packages/typescript/ai-openai/src/adapters/tts.ts-71-79 (1)

71-79: Ensure instructions is only passed to compatible models.
The instructions field is supported in SpeechCreateParams, but only works with gpt-4o-mini-tts; passing it to tts-1 or tts-1-hd will fail at runtime. Validate the model before spreading providerOptions or document this constraint.
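
A minimal sketch of the guard suggested here (variable names follow this comment; the exact provider option shape is an assumption):

```ts
// Only forward `instructions` when the selected model actually supports it.
const { instructions, ...rest } = providerOptions ?? {}
const speechParams = {
  ...rest,
  ...(model === 'gpt-4o-mini-tts' && instructions ? { instructions } : {}),
}
```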

packages/typescript/ai-openai/src/adapters/summarize.ts-57-66 (1)

57-66: Handle empty choices explicitly (avoid silently returning '').
Returning '' can mask upstream failures; consider throwing when choices[0] is missing.

packages/typescript/ai-anthropic/src/adapters/summarize.ts-55-68 (1)

55-68: Guard against missing response.usage / unexpected content blocks.
If the SDK can omit usage (or include non-text blocks), consider null-safe access + clearer erroring when the summary ends up empty unexpectedly.

packages/typescript/ai-gemini/src/image/image-provider-options.ts-208-226 (1)

208-226: Comment/implementation mismatch for image count limits.

The comment states "Imagen models support 1-8 images per request (varies by model)" but the implementation hardcodes maxImages = 4. The model parameter is accepted but not used for model-specific validation.

Consider either:

  1. Using the model parameter to apply model-specific limits, or
  2. Updating the comment to reflect the actual hardcoded limit.

```diff
 export function validateNumberOfImages(
   model: string,
   numberOfImages: number | undefined,
 ): void {
   if (numberOfImages === undefined) return
-  // Most Imagen models support 1-4 images, some support up to 8
-  const maxImages = 4
+  // All Imagen models currently support up to 4 images per request
+  const maxImages = 4 // TODO: Implement model-specific limits if needed
   if (numberOfImages < 1 || numberOfImages > maxImages) {
```
packages/typescript/ai-openai/src/adapters/text.ts-226-226 (1)

226-226: Dead code: the eventTypeCounts map is never populated.

The eventTypeCounts map is declared at line 226 but never updated during stream processing. The error handler at lines 479-486 logs it, but it will always be empty.

Either populate the map during processing:

```diff
       for await (const chunk of stream) {
         chunkCount++
+        eventTypeCounts.set(chunk.type, (eventTypeCounts.get(chunk.type) || 0) + 1)
```

Or remove the unused declaration:

```diff
-    const eventTypeCounts = new Map<string, number>()
```

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines 58 to 67
## Available Models

### Chat Models

- `claude-sonnet-4-5-20250929` - Claude Sonnet 4.5 (balanced)
- `claude-opus-4-5-20251101` - Claude Opus 4.5 (most capable)
- `claude-haiku-4-0-20250514` - Claude Haiku 4.0 (fastest)
- `claude-3-5-sonnet-20241022` - Claude 3.5 Sonnet
- `claude-3-opus-20240229` - Claude 3 Opus


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

fd model-meta.ts

Repository: TanStack/ai

Length of output: 201


🏁 Script executed:

cat packages/typescript/ai-anthropic/src/model-meta.ts

Repository: TanStack/ai

Length of output: 12900


🏁 Script executed:

rg"claude-sonnet-4-5-20250929|claude-opus-4-5-20251101|claude-haiku-4-0-20250514|claude-3-5-sonnet-20241022|claude-3-opus-20240229" --type ts --type md

Repository: TanStack/ai

Length of output: 4378


Update ANTHROPIC_MODELS in packages/typescript/ai-anthropic/src/model-meta.ts to match the versioned model IDs used throughout the codebase.

The model IDs in docs/adapters/anthropic.md are correct and used consistently across examples and testing, but the model-meta.ts constants have incorrect IDs that don't match actual Anthropic API model identifiers:

  • claude-sonnet-4-5 should be claude-sonnet-4-5-20250929
  • claude-opus-4-5 should be claude-opus-4-5-20251101
  • claude-haiku-4-5 should be claude-haiku-4-0-20250514
  • claude-3-7-sonnet should be claude-3-5-sonnet-20241022
  • claude-opus-4 should be claude-3-opus-20240229

Correct the model ID constants to match the actual versioned IDs used in the codebase.

🤖 Prompt for AI Agents
In packages/typescript/ai-anthropic/src/model-meta.ts around lines 58 to 67, the ANTHROPIC_MODELS constants use unversioned/incorrect IDs; update each constant to the versioned model IDs used in the repo/docs: replace claude-sonnet-4-5 → claude-sonnet-4-5-20250929, claude-opus-4-5 → claude-opus-4-5-20251101, claude-haiku-4-5 → claude-haiku-4-0-20250514, claude-3-7-sonnet → claude-3-5-sonnet-20241022, and claude-opus-4 → claude-3-opus-20240229 so the constants match the documented and tested model identifiers.

Comment on lines 86 to 102
```ts
import { openaiText } from '@tanstack/ai-openai'

const textAdapter = openaiText()

```

2. **Class constructor** (for more control):

```ts
import { createOpenAIText } from '@tanstack/ai-openai/adapters'

const textAdapter = createOpenAIText({
  apiKey: 'your-api-key',
  // additional configuration...
})
```


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat packages/typescript/ai-openai/package.json | jq '.exports'

Repository: TanStack/ai

Length of output: 145


🏁 Script executed:

```bash
#!/bin/bash
# Check if adapters directory exists in ai-openai package
ls -la packages/typescript/ai-openai/src/ | head -20
# Also check what's exported from the main index.ts
echo "=== Main index.ts exports ==="
head -50 packages/typescript/ai-openai/src/index.ts
```

Repository: TanStack/ai

Length of output: 2228


🏁 Script executed:

```bash
#!/bin/bash
# Search for actual imports of adapters in the codebase
echo "=== Imports from @tanstack/ai-openai ==="
rg "from ['\"]@tanstack/ai-openai['\"]" --type ts --type tsx -A 2 | head -30
echo -e "\n=== Imports from @tanstack/ai-openai/adapters ==="
rg "from ['\"]@tanstack/ai-openai/adapters['\"]" --type ts --type tsx -A 2 | head -30
# Check adapters directory index.ts if it exists
echo -e "\n=== Checking adapters/index.ts ==="
if [ -f "packages/typescript/ai-openai/src/adapters/index.ts" ]; then
  cat packages/typescript/ai-openai/src/adapters/index.ts
else
  echo "No adapters/index.ts found"
fi
```

Repository: TanStack/ai

Length of output: 282


🏁 Script executed:

```bash
#!/bin/bash
# List contents of adapters directory
ls -la packages/typescript/ai-openai/src/adapters/

# Check one of the adapter files to see what it exports
echo -e "\n=== Contents of text adapter ==="
head -30 packages/typescript/ai-openai/src/adapters/text.ts
```

Repository: TanStack/ai

Length of output: 1480


Incorrect import path - use `@tanstack/ai-openai` instead of the `/adapters` subpath.

Line 96 shows importing from `'@tanstack/ai-openai/adapters'`, but this path is not properly exported in the package configuration. The package.json only exports from the root (`.`), and there is no `adapters/index.ts` file to support the subpath import.

The correct import should be from `'@tanstack/ai-openai'`, which re-exports all adapter functions (as shown in lines 19-21 and 165). Update line 96 to:

```ts
import { createOpenAIText } from '@tanstack/ai-openai'
```

This ensures consistency throughout the document and uses the actual exported API.

🤖 Prompt for AI Agents
In docs/guides/tree-shakeable-adapters.md around lines 86 to 102, the example imports from the non-exported subpath '@tanstack/ai-openai/adapters' (line ~96); change that import to use the package root: import { createOpenAIText } from '@tanstack/ai-openai' so it matches the package.json exports and other examples in the doc.
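As an illustrative sketch (assuming `createOpenAIText` accepts the same config object shown in the reviewed excerpt), both entry points can be imported from the package root:

```ts
import { openaiText, createOpenAIText } from '@tanstack/ai-openai'

// Factory helper
const defaultAdapter = openaiText()

// Explicit construction, mirroring the reviewed example
const configuredAdapter = createOpenAIText({
  apiKey: 'your-api-key',
})
```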

Comment on lines 1 to 22
```ts
// ============================================================================
// New Tree-Shakeable Adapters (Recommended)
// ============================================================================

// Text (Chat) adapter - for chat/text completion
export {
  AnthropicTextAdapter,
  anthropicText,
  createAnthropicText,
  type AnthropicTextConfig,
  type AnthropicTextProviderOptions,
} from './adapters/text'

// Summarize adapter - for text summarization
export {
  AnthropicSummarizeAdapter,
  anthropicSummarize,
  createAnthropicSummarize,
  type AnthropicSummarizeConfig,
  type AnthropicSummarizeProviderOptions,
} from './adapters/summarize'
```


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail

# Find the ai-anthropic package.json and print its exports (if any).
fd -a '^package\.json$' packages/typescript/ai-anthropic | while read -r f; do
  echo "==> $f"
  python - <<'PY' "$f"
import json,sys
p=sys.argv[1]
d=json.load(open(p))
print(json.dumps(d.get("exports", {}), indent=2))
PY
done
```

Repository: TanStack/ai

Length of output: 141


🏁 Script executed:

cat packages/typescript/ai-anthropic/package.json

Repository: TanStack/ai

Length of output: 1232


🏁 Script executed:

```bash
fd -a 'package\.json$' packages/typescript | head -10 | while read -r f; do
  echo "==> $f"
  cat "$f" | python3 -c "import json, sys; d=json.load(sys.stdin); print('exports:', json.dumps(d.get('exports', {}), indent=2)[:500])"
done
```

Repository: TanStack/ai

Length of output: 1941


Add subpath exports to package.json to enable tree-shaking of adapters. The package.json exports only the root `.` entry point. To support tree-shakeable imports as shown in the code (e.g., `from './adapters/text'`), add subpath exports like:

```json
"exports": {
  ".": {
    "types": "./dist/esm/index.d.ts",
    "import": "./dist/esm/index.js"
  },
  "./adapters": {
    "types": "./dist/esm/adapters/index.d.ts",
    "import": "./dist/esm/adapters/index.js"
  }
}
```

This allows consumers to use `import { anthropicText } from '@tanstack/ai-anthropic/adapters'` and properly tree-shake unused adapters.

🤖 Prompt for AI Agents
In packages/typescript/ai-anthropic/src/index.ts around lines 1–22 the module re-exports adapter modules directly, which requires subpath exports so consumers can import adapters for tree-shaking; update this package's package.json "exports" to include a subpath entry for "./adapters" (and keep the root "." export) that maps "types" to the adapters types file in dist/esm (e.g., ./dist/esm/adapters/index.d.ts) and "import" to the adapters JS entry (e.g., ./dist/esm/adapters/index.js), ensuring consumers can import from "@tanstack/ai-anthropic/adapters" and tree-shake unused adapters.
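If the subpath is added, a minimal sketch of the pieces involved might look like this; the barrel file and its re-export list are assumptions, not existing code:

```ts
// packages/typescript/ai-anthropic/src/adapters/index.ts (hypothetical barrel)
export * from './text'
export * from './summarize'

// Consumer code: only the text adapter should end up in the bundle.
// import { anthropicText } from '@tanstack/ai-anthropic/adapters'
```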

Comment on lines 654 to 668
```ts
case 'image': {
  const imageMetadata = part.metadata
  if (part.source.type === 'url') {
    return {
      type: 'input_image',
      image_url: part.source.value,
      detail: imageMetadata?.detail || 'auto',
    }
  }
  // For base64 data, construct a data URI
  return {
    type: 'input_image',
    image_url: part.source.value,
    detail: imageMetadata?.detail || 'auto',
  }
```


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

rg -nA10"base64" --type=ts packages/typescript/ai-openai/

Repository: TanStack/ai

Length of output: 13450


🏁 Script executed:

rg -n"source.*value|image.*url" --type=ts packages/typescript/ai-openai/src/adapters/text.ts| head -30

Repository: TanStack/ai

Length of output: 233


🏁 Script executed:

```bash
# Check how image sources are created and formatted
rg -nB5A5 "type.*base64|source.*.type.*base64" --type=ts packages/typescript/ai-openai/
```

Repository: TanStack/ai

Length of output: 136


🏁 Script executed:

rg -nA10"source.*type.*base64|type.*base64.*source" --type=ts packages/typescript/ai-openai/

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```bash
# Look at how image parts are structured when created
rg -nB5 "image.*base64" --type=ts packages/typescript/
```

Repository: TanStack/ai

Length of output: 2691


🏁 Script executed:

```bash
# Check if there's any test data or documentation about image format
find packages/typescript/ai-openai/tests -name "*.test.ts" -exec grep -l "image" {} \; | head -5
```

Repository: TanStack/ai

Length of output: 167


🏁 Script executed:

```bash
# Look at the part structure and how images are passed in
cat packages/typescript/ai-openai/tests/text.test.ts | head -100
```

Repository: TanStack/ai

Length of output: 135


🏁 Script executed:

find packages/typescript/ai-openai/tests -name "*.test.ts" | xargs grep -l "base64\|image" | head -5

Repository: TanStack/ai

Length of output: 167


🏁 Script executed:

```bash
# Look at the part/content structure definition
rg -n "type.*base64|part.*source" --type=ts packages/typescript/ai/src/types.ts -A3 -B3 | head -60
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```bash
# Check for any image content handling tests
find packages/typescript -name "*.test.ts" -exec grep -l "source.*base64" {} \;
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```bash
# Find the content/part type definitions
rg -n "type.*Part|interface.*Part|ContentPart" --type=ts packages/typescript/ai/src/types.ts | head -20
```

Repository: TanStack/ai

Length of output: 966


🏁 Script executed:

```bash
# Look at the complete type definitions for content
cat packages/typescript/ai/src/types.ts | grep -A 20 "interface.*Content\|type.*Content" | head -60
```

Repository: TanStack/ai

Length of output: 1645


🏁 Script executed:

```bash
# Check the model-meta tests which might show usage
cat packages/typescript/ai-openai/tests/model-meta.test.ts | head -150
```

Repository: TanStack/ai

Length of output: 6016


🏁 Script executed:

```bash
# Check if ImagePart metadata or structure includes mime type information
rg -n "ImagePart|image.*metadata|mime" --type=ts packages/typescript/ai/src/types.ts -A5 | head -40
```

Repository: TanStack/ai

Length of output: 1030


🏁 Script executed:

```bash
# Look at the actual code to see if mime type is available anywhere
cat packages/typescript/ai-openai/src/adapters/text.ts | sed -n '640,690p'
```

Repository: TanStack/ai

Length of output: 1341


🏁 Script executed:

```bash
# Check OpenAI documentation or any comments about image_url format requirements
rg -n "image_url|data:image" --type=ts packages/typescript/ai-openai/ -B2 -A2 | head -50
```

Repository: TanStack/ai

Length of output: 3124


Fix base64 image handling to construct a proper data URI.

The comment at line 663 correctly states "For base64 data, construct a data URI," but the code incorrectly passes `part.source.value` directly to `image_url`. When `source.type` is `'data'`, the value is raw base64 (not a data URI), which OpenAI's API will reject. Construct a proper data URI by prefixing the base64 value: `data:image/jpeg;base64,${part.source.value}` (or use `image/png` as appropriate).

🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/text.ts around lines 654 to 668, the image branch returns raw base64 for parts with source.type === 'data' instead of a data URI; update the logic so when part.source.type === 'url' you keep using the URL, but when part.source.type === 'data' you build a proper data URI (use part.metadata?.mime if available, otherwise default to image/jpeg or image/png) and prefix the base64 with "data:{mime};base64," before assigning to image_url; preserve the detail fallback to imageMetadata?.detail || 'auto'.
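A sketch of the corrected handling, written as a standalone helper for illustration; the `mimeType` metadata field is an assumption, not a confirmed part of the type:

```ts
// Hypothetical helper illustrating the fix; field names (mimeType, detail) are assumptions.
function toOpenAIImagePart(part: {
  source: { type: 'url' | 'data'; value: string }
  metadata?: { detail?: string; mimeType?: string }
}) {
  const detail = part.metadata?.detail || 'auto'

  if (part.source.type === 'url') {
    return { type: 'input_image', image_url: part.source.value, detail }
  }

  // Raw base64 must be wrapped in a data URI before OpenAI will accept it.
  const mimeType = part.metadata?.mimeType || 'image/jpeg'
  return {
    type: 'input_image',
    image_url: `data:${mimeType};base64,${part.source.value}`,
    detail,
  }
}
```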

Comment on lines 199 to 252
```ts
// Last resort: The /content endpoint returns raw binary video data, not JSON.
// We need to construct a URL that the client can use to fetch the video.
// The URL needs to include auth, so we'll create a signed URL or return
// a proxy endpoint.

// For now, return a URL that goes through our API to proxy the request
// since the raw endpoint requires auth headers that browsers can't send.
// The video element can't add Authorization headers, so we need a workaround.

// Option 1: Return the direct URL (only works if OpenAI supports query param auth)
// Option 2: Return a blob URL after fetching (memory intensive)
// Option 3: Return a proxy URL through our server

// Let's try fetching and returning a data URL for now
const baseUrl = this.config.baseUrl || 'https://api.openai.com/v1'
const apiKey = this.config.apiKey

const contentResponse = await fetch(
  `${baseUrl}/videos/${jobId}/content`,
  {
    method: 'GET',
    headers: {
      Authorization: `Bearer ${apiKey}`,
    },
  },
)

if (!contentResponse.ok) {
  // Try to parse error as JSON, but it might be binary
  const contentType = contentResponse.headers.get('content-type')
  if (contentType?.includes('application/json')) {
    const errorData = await contentResponse.json().catch(() => ({}))
    throw new Error(
      errorData.error?.message ||
        `Failed to get video content: ${contentResponse.status}`,
    )
  }
  throw new Error(
    `Failed to get video content: ${contentResponse.status}`,
  )
}

// The response is the raw video file - convert to base64 data URL
const videoBlob = await contentResponse.blob()
const buffer = await videoBlob.arrayBuffer()
const base64 = Buffer.from(buffer).toString('base64')
const mimeType =
  contentResponse.headers.get('content-type') || 'video/mp4'

return {
  jobId,
  url: `data:${mimeType};base64,${base64}`,
  expiresAt: undefined, // Data URLs don't expire
}
```


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat -n packages/typescript/ai-openai/src/adapters/video.ts | head -50

Repository: TanStack/ai

Length of output: 1918


🏁 Script executed:

wc -l packages/typescript/ai-openai/src/adapters/video.ts

Repository: TanStack/ai

Length of output: 110


🏁 Script executed:

cat -n packages/typescript/ai-openai/src/adapters/video.ts | tail -100

Repository: TanStack/ai

Length of output: 3190


🏁 Script executed:

rg -A 5 -B 5"\.server\(\)|\.client\(\)" packages/typescript/ai-openai/src/adapters/

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

rg"getVideoUrl" packages/typescript/ai-openai/src/ -A 3 -B 3

Repository: TanStack/ai

Length of output: 1065


🏁 Script executed:

sed -n '199,252p' packages/typescript/ai-openai/src/adapters/video.ts

Repository: TanStack/ai

Length of output: 2327


🏁 Script executed:

rg"class BaseVideoAdapter" packages/typescript/ -A 20

Repository: TanStack/ai

Length of output: 1867


🏁 Script executed:

fd"adapters" packages/typescript/ -type d

Repository: TanStack/ai

Length of output: 225


🏁 Script executed:

ls -la packages/typescript/ai/src/adapters/ | head -30

Repository: TanStack/ai

Length of output: 138


🏁 Script executed:

rg"getVideoUrl" packages/typescript/ -B 5 -A 5

Repository: TanStack/ai

Length of output: 4293


🏁 Script executed:

rg"\.server\(\)|\.client\(\)" packages/typescript/ -B 2 -A 2| head -80

Repository: TanStack/ai

Length of output: 2795


🏁 Script executed:

find packages/typescript -name "*.ts" -path "*/adapters/*" -type f | head -20

Repository: TanStack/ai

Length of output: 1158


🏁 Script executed:

rg"typeof window|typeof document|'use server'|'use client'" packages/typescript/ai-openai/src/adapters/ -A 2 -B 2

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

cat packages/typescript/ai/src/activities/video/index.ts | head -100

Repository: TanStack/ai

Length of output: 2996


🏁 Script executed:

rg"apiKey" packages/typescript/ai-openai/src/adapters/ -B 2 -A 2

Repository: TanStack/ai

Length of output: 16304


🏁 Script executed:

cat packages/typescript/ai-openai/src/adapters/text.ts | grep -A 50 "class OpenAITextAdapter"

Repository: TanStack/ai

Length of output: 1754


🏁 Script executed:

grep -r"typeof window" packages/typescript/ai-openai/src/||echo"No typeof window checks found"

Repository: TanStack/ai

Length of output: 84


`getVideoUrl()` fallback exposes the API key and creates problematic data URLs.

This fallback path uses `this.config.apiKey` directly in a fetch call and constructs base64 data URLs for potentially large video files. Risks:

- The API key can be exposed if the adapter is instantiated in client-side environments
- Base64 data URLs for large videos consume enormous memory and can crash browser tabs
- Uses `Buffer.from()`, which isn't available in browsers without polyfills

Add an environment guard to prevent client-side execution:

```diff
+ if (typeof window !== 'undefined') {
+   throw new Error('Fetching video content requires a server environment to protect API keys.')
+ }
```

Avoid converting to data URLs by default. Consider returning a server-side retrievable URL or instructing users to implement proper proxying instead.

🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/video.ts around lines 199 to 252, the fallback in getVideoUrl() fetches the video using this.config.apiKey and builds a base64 data: URL (using Buffer), which can leak the API key in client environments, crash browsers with large videos, and relies on Node Buffer; change this by adding a runtime environment guard that throws or rejects if executed in a browser (e.g., detect window or navigator and refuse to use the API key client-side), stop returning base64 data URLs for large media and instead return a server-side retrievable URL (either require a configured proxy/signed URL endpoint on your server or return an instruction/error that the caller must request the proxied URL from the server), ensure any server-only code uses Node-safe APIs (avoid Buffer in code paths that could run in the browser), and validate that this.config.apiKey is only used when running server-side.
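As one possible direction (a sketch only), the fallback could refuse to run in the browser and hand back a proxy URL instead of a data URL; the `proxyVideoUrl` option is hypothetical and not part of the current adapter:

```ts
// Hypothetical sketch of a safer fallback; `proxyVideoUrl` is an assumed config option.
function resolveVideoUrl(jobId: string, config: { proxyVideoUrl?: string }) {
  if (typeof window !== 'undefined') {
    throw new Error(
      'Fetching video content requires a server environment to protect the API key.',
    )
  }

  if (config.proxyVideoUrl) {
    // Let the caller's server proxy the authenticated /videos/{id}/content request.
    return { jobId, url: `${config.proxyVideoUrl}/${jobId}/content`, expiresAt: undefined }
  }

  throw new Error(
    'No proxy endpoint configured; add a server route that forwards the authenticated request.',
  )
}
```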

@AlemTuzlak changed the base branch from main to feat/splitting-adapters-up on December 14, 2025 08:17

Reviewers

coderabbitai[bot] left review comments

AlemTuzlak approved these changes

