feat: split up adapters into functionalities and add generate function #137


Open

AlemTuzlak wants to merge 15 commits into main from feat/splitting-adapters-up

Conversation

@AlemTuzlak (Contributor) commented Dec 10, 2025 · edited by coderabbitai bot

🎯 Changes

Split adapters into:

  • text
  • embed
  • summarize

Added the generate function. How do we handle this from here? Do we want to deprecate the old openai() approach?
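
For illustration, the split roughly changes usage like this (a sketch; adapter names per this PR's exports):

```ts
// Before: one adapter instance carried every capability
import { openai } from '@tanstack/ai-openai'

// After: each capability is its own tree-shakeable adapter
import { openaiText, openaiEmbed, openaiSummarize } from '@tanstack/ai-openai'
```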

✅ Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Summary by CodeRabbit

  • New Features

    • Split up adapters for better tree shaking into separate functionalities
    • Added embeddings, summarization, image generation, text-to-speech, and transcription support across providers
    • Added experimental video generation support (OpenAI)
  • Bug Fixes & Improvements

    • Enhanced type safety with improved model-specific options
    • Improved streaming and tool handling
  • Documentation

    • Updated guides for new adapters and features
    • Added tree-shaking guide and examples


@nx-cloud bot commented Dec 10, 2025 · edited

View your CI Pipeline Execution ↗ for commit 2f5d2dc

Command                                              Status        Duration  Result
nx run-many --targets=build --exclude=examples/**    ✅ Succeeded   3s        View ↗
nx affected --targets=test:sherif,test:knip,tes...   ✅ Succeeded   6s        View ↗

☁️ Nx Cloud last updated this comment at 2025-12-17 10:33:00 UTC

@pkg-pr-new bot commented Dec 10, 2025 · edited

Open in StackBlitz

@tanstack/ai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai@137

@tanstack/ai-anthropic

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-anthropic@137

@tanstack/ai-client

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-client@137

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-devtools-core@137

@tanstack/ai-gemini

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-gemini@137

@tanstack/ai-ollama

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-ollama@137

@tanstack/ai-openai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openai@137

@tanstack/ai-react

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react@137

@tanstack/ai-react-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react-ui@137

@tanstack/ai-solid

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid@137

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid-ui@137

@tanstack/ai-svelte

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-svelte@137

@tanstack/ai-vue

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue@137

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue-ui@137

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/react-ai-devtools@137

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/solid-ai-devtools@137

commit: 2f5d2dc

@coderabbitai bot commented Dec 11, 2025 · edited

Walkthrough

This pull request refactors the TanStack AI framework to support tree-shakeable adapters. The primary change splits monolithic provider adapters (e.g., openai, anthropic) into separate, modular adapters for each capability (text, embedding, summarization, image generation, TTS, video, transcription). Renamed core types (ChatOptions → TextOptions), updated example imports and usage across the codebase, and added comprehensive activity-based modules to support the new modular architecture. The framework now supports selective imports for smaller bundles, improved type safety per adapter and model, and a cleaner, more extensible API surface.
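
For orientation, a minimal before/after sketch of the import change described above (adapter names are taken from the summary table below):

```ts
// Before: one monolithic adapter pulls in every capability
import { openai } from '@tanstack/ai-openai'

// After: import only the capabilities you need (tree-shakeable)
import { openaiText, openaiEmbed } from '@tanstack/ai-openai'
```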

Changes

Cohort / File(s) | Summary

Changelog & Versioning
.changeset/two-bikes-kneel.md
Bumps minor versions for all AI provider packages and core AI package with changelog note about tree-shakeable adapter split.

Core Type Renames & Infrastructure
packages/typescript/ai/src/types.ts, packages/typescript/ai/src/base-adapter.ts, packages/typescript/ai/src/event-client.ts
Renamed ChatOptions → TextOptions, ChatCompletionChunk → TextCompletionChunk, and related stream option types; updated event client keys from chat:* to text:*.

Core Adapter Abstractions (Activities)
packages/typescript/ai/src/activities/chat/adapter.ts, packages/typescript/ai/src/activities/embedding/adapter.ts, packages/typescript/ai/src/activities/summarize/adapter.ts, packages/typescript/ai/src/activities/generateImage/adapter.ts, packages/typescript/ai/src/activities/generateVideo/adapter.ts, packages/typescript/ai/src/activities/generateSpeech/adapter.ts, packages/typescript/ai/src/activities/generateTranscription/adapter.ts
Added base adapter classes and interfaces for each activity type (text, embedding, summarize, image, video, TTS, transcription) with config and inheritance support.

Core Activity Implementations
packages/typescript/ai/src/activities/chat/index.ts, packages/typescript/ai/src/activities/chat/messages.ts, packages/typescript/ai/src/activities/embedding/index.ts, packages/typescript/ai/src/activities/summarize/index.ts, packages/typescript/ai/src/activities/generateImage/index.ts, packages/typescript/ai/src/activities/generateVideo/index.ts, packages/typescript/ai/src/activities/generateSpeech/index.ts, packages/typescript/ai/src/activities/generateTranscription/index.ts
Implemented full activity workflows with event emission, type-safe option handling, streaming support (where applicable), and adapter integration.

Activity Option Factory
packages/typescript/ai/src/activity-options.ts, packages/typescript/ai/src/activities/index.ts
Added factory functions (createChatOptions, createEmbeddingOptions, etc.) for typed option construction and centralized activity map/exports.

Stream & Message Utilities
packages/typescript/ai/src/stream-to-response.ts, packages/typescript/ai/src/activities/chat/stream/*, packages/typescript/ai/src/activities/chat/tools/*
Added streamToText() utility; reorganized stream processing, message updating, and tool-related code into activities structure.

OpenAI Adapters (Text, Embed, Summarize, Image, Video, TTS, Transcription)
packages/typescript/ai-openai/src/adapters/{text,embed,summarize,image,video,tts,transcription}.ts, packages/typescript/ai-openai/src/image/image-provider-options.ts, packages/typescript/ai-openai/src/video/video-provider-options.ts, packages/typescript/ai-openai/src/audio/{tts,transcription}-provider-options.ts, packages/typescript/ai-openai/src/utils/{client,schema-converter}.ts
Implemented separate tree-shakeable adapters for each OpenAI capability with provider-specific options, model validation, streaming support, and factory helpers.

OpenAI Index & Exports
packages/typescript/ai-openai/src/index.ts, packages/typescript/ai-openai/src/model-meta.ts
Replaced monolithic export with modular adapter exports for text, embed, summarize, image, video, TTS, transcription; added model lists (OPENAI_IMAGE_MODELS, OPENAI_VIDEO_MODELS, OPENAI_TTS_MODELS, OPENAI_TRANSCRIPTION_MODELS).

Anthropic Adapters (Text, Summarize)
packages/typescript/ai-anthropic/src/adapters/{text,summarize}.ts, packages/typescript/ai-anthropic/src/utils/client.ts
Refactored monolithic adapter into separate text and summarize adapters with new naming (AnthropicTextAdapter, anthropicText); added shared client utilities.

Anthropic Index & Exports
packages/typescript/ai-anthropic/src/index.ts
Replaced Anthropic class export with AnthropicTextAdapter and AnthropicSummarizeAdapter exports; preserved model metadata and tool conversion utilities.

Gemini Adapters (Text, Embed, Summarize, Image, TTS)
packages/typescript/ai-gemini/src/adapters/{text,embed,summarize,image,tts}.ts, packages/typescript/ai-gemini/src/image/image-provider-options.ts, packages/typescript/ai-gemini/src/utils/client.ts
Implemented separate Gemini adapters for text, embedding, summarization, image generation, and TTS with provider-specific options and factory helpers.

Gemini Index & Model Meta
packages/typescript/ai-gemini/src/index.ts, packages/typescript/ai-gemini/src/model-meta.ts
Replaced monolithic GeminiAdapter with modular exports; exposed image and TTS model lists; added comprehensive provider option type exports.

Ollama Adapters (Text, Embed, Summarize)
packages/typescript/ai-ollama/src/adapters/{text,embed,summarize}.ts, packages/typescript/ai-ollama/src/utils/client.ts
Implemented tree-shakeable text, embed, and summarize adapters for Ollama with host-aware client creation and token estimation.

Ollama Index & Adapter Removal
packages/typescript/ai-ollama/src/index.ts, packages/typescript/ai-ollama/src/ollama-adapter.ts (deleted)
Replaced monolithic ollama-adapter.ts with modular adapter exports; removed legacy Ollama class.

Core Package Re-exports
packages/typescript/ai/src/index.ts
Reorganized exports to expose new activity functions (text, embedding, summarize, image, video, TTS, transcription), tool definitions, and deprecated legacy chat/embedding/summarize exports for compatibility.

Client-Side Hooks (React, Solid, Vue, Svelte)
packages/typescript/ai-react/src/use-chat.ts, packages/typescript/ai-solid/src/use-chat.ts, packages/typescript/ai-vue/src/use-chat.ts, packages/typescript/ai-svelte/src/create-chat.svelte.ts, packages/typescript/ai-client/src/chat-client.ts
Updated cleanup logic to unconditionally call stop() on unmount; added post-stream action queuing in chat-client for deferred processing.

Documentation: Guides
docs/guides/{tree-shaking,text-to-speech,transcription,image-generation,video-generation,runtime-adapter-switching}.md, docs/guides/agentic-cycle.md, docs/getting-started/{overview,quick-start}.md
Added new guides for tree-shaking, TTS, transcription, image/video generation, and runtime adapter switching; updated existing guides to reflect new adapter naming.

Documentation: Adapter Pages
docs/adapters/{anthropic,gemini,ollama,openai}.md
Updated adapter documentation with new factory names (createAnthropicChat → createAnthropicText, etc.), new adapters (summarize, embedding, image, TTS), and revised examples.

Documentation: API & Reference
docs/api/{ai,ai-client,ai-react,ai-solid}.md, docs/reference/{functions,interfaces,type-aliases}/*, docs/config.json
Renamed API references (ChatOptions → TextOptions, chat → text, etc.); added new guide entries; updated configuration metadata with ordering and new pages.

Examples (Multi-Framework)
examples/{ts-react-chat,ts-group-chat,ts-solid-chat,ts-svelte-chat,ts-vue-chat}/*, examples/README.md
Updated all example files to use new adapter naming (openaiText, anthropicText, etc.); refactored provider selection logic to use typed option factories instead of switch statements (see the sketch after this table).

Package.json Updates
Multiple package.json files across packages and examples
Updated devDependency vite (^7.2.4 → ^7.2.7); added zod to peerDependencies in anthropic/gemini/ollama/openai packages; added new export path "./adapters" in core AI package.

Removed Legacy Files
packages/typescript/ai/src/{core/chat-common-options.ts,core/embedding.ts,core/summarize.ts,utilities/chat-options.ts,tools/zod-converter.ts}
Deleted legacy standalone utilities; functionality moved into activities modules.

README Updates
README.md, packages/typescript/ai-solid/README.md
Updated with tree-shakeable adapters feature highlight and example code reflecting new adapter API.
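
The "typed option factories" mentioned in the Examples row replace untyped switch statements; a hypothetical sketch (createChatOptions' exact signature is an assumption based on this summary):

```ts
import { createChatOptions } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

// Hypothetical: the factory ties adapter, model, and options together so
// TypeScript can check the combination at compile time, instead of the
// examples assembling it in an untyped switch statement.
const chatOptions = createChatOptions({
  adapter: openaiText(),
  model: 'gpt-4o',
})
```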

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60–90 minutes

Areas requiring extra attention during review:

  • New adapter implementations across all providers (OpenAI, Anthropic, Gemini, Ollama): verify correct client initialization, API call mappings, error handling, and consistent patterns for streaming and structured output.
  • Type system changes: ensure TextOptions and related renamed types are correctly constrained with generics, and that type inference works as intended across adapter/model combinations.
  • Activity orchestration (text/chat, embedding, summarize, etc.): validate event emission ordering, error propagation, state management, and compatibility with client-side hooks (React, Solid, Vue, Svelte).
  • Tool call and structured output handling: review logic for tool routing, partial JSON parsing, and schema validation in the new architecture.
  • Post-stream action queuing in chat-client: confirm deferred processing works correctly when streams are active and doesn't introduce timing-related bugs.
  • Example and documentation consistency: spot-check that all example code and guide snippets correctly reflect the new adapter and option naming conventions.

🐰Whiskers twitches with delight at this grand refactor—

Tree-shaking adapters, bundled tighter, branching bolder,

Each provider a module, users build what they hold, sir!

Smaller bytes, type safety, a framework much lighter—

Hip hop hooray for this architectural exciter! 🌳✨

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
Check name: Description check
Status: ❓ Inconclusive
Explanation: The description follows the required template with 🎯 Changes and ✅ Checklist sections. However, the Changes section lacks detail about the specific implementation and impact. The Release Impact section has unchecked boxes indicating no changeset was generated, despite the question about deprecating the old approach, which suggests release documentation may be incomplete.
Resolution: Complete the description by adding more detail about the implementation approach for the generate function and the migration strategy, and generate a changeset if this change affects published code.
✅ Passed checks (2 passed)
Check name: Title check
Status: ✅ Passed
Explanation: The title clearly and concisely describes the main changes: splitting adapters into separate functionalities (text, embed, summarize) and adding a generate function.

Check name: Docstring Coverage
Status: ✅ Passed
Explanation: Docstring coverage is 81.31%, which is sufficient. The required threshold is 80.00%.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/splitting-adapters-up


@coderabbitai bot left a comment

Actionable comments posted: 16

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
README.md (1)

41-73: Fix README example imports to match actual package exports

The new tree-shakeable adapters section has two import path issues that will prevent the example from compiling:

  1. Function name mismatch: The example imports import { generate } from '@tanstack/ai', but the root package only exports ai (not generate). Should be import { ai } from '@tanstack/ai'.

  2. Adapter path mismatch: The example shows import { openaiText } from '@tanstack/ai-openai/adapters', but adapters are re-exported at the package root. Should be import { openaiText } from '@tanstack/ai-openai'.

Update the code snippet to use:

import { openaiText } from '@tanstack/ai-openai'
import { ai } from '@tanstack/ai'
♻️ Duplicate comments (1)
packages/typescript/ai-openai/src/utils/client.ts (1)

45-47: Duplicate: generateId duplication.

This function is identical to implementations in the Gemini, Anthropic, and Ollama adapters. See the comment on packages/typescript/ai-gemini/src/utils/client.ts (lines 41-43) for consolidation recommendations.

🧹 Nitpick comments (25)
packages/typescript/ai-anthropic/src/utils/client.ts (1)

43-45: Consider centralizing generateId to eliminate duplication.

This exact implementation is duplicated across multiple provider utilities (OpenAI, Gemini, Ollama) and the base adapters in @tanstack/ai. Consider extracting it to a shared location in @tanstack/ai and re-exporting from there.

Additionally, substring(7) may return fewer than 4 characters for small random values. Using substring(2, 9) or padEnd would ensure consistent length.

-export function generateId(prefix: string): string {
-  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
+export function generateId(prefix: string): string {
+  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(2, 9)}`
 }
packages/typescript/ai-anthropic/src/adapters/summarize.ts (3)

32-34: Redundant kind declaration.

BaseSummarizeAdapter already sets kind = 'summarize' as const. This redeclaration is unnecessary unless you intend to override with a different value.

 export class AnthropicSummarizeAdapter extends BaseSummarizeAdapter<
   typeof ANTHROPIC_MODELS,
   AnthropicSummarizeProviderOptions
 > {
-  readonly kind = 'summarize' as const
   readonly name = 'anthropic' as const
   readonly models = ANTHROPIC_MODELS

71-97: buildSummarizationPrompt is duplicated across providers.

This method is identical to the implementation in OpenAI.buildSummarizationPrompt (see openai-adapter.ts lines 171-197). Consider extracting it to a shared utility in @tanstack/ai to reduce duplication and ensure consistent behavior across providers.


38-41: Consider passing config to base class.

The constructor passes an empty object to super({}) while BaseSummarizeAdapter accepts SummarizeAdapterConfig with fields like timeout, maxRetries, and headers. If AnthropicSummarizeConfig expands to include these fields, they won't propagate to the base class.

 constructor(config: AnthropicSummarizeConfig) {
-  super({})
+  super(config)
   this.client = createAnthropicClient(config)
 }
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)

11-121: Solid base chat adapter design; consider wiring TModels into ChatOptions for extra safety

The ChatAdapterConfig, ChatAdapter generics, and BaseChatAdapter shape look coherent and extensible: the kind: 'chat' discriminator, type-only _providerOptions/modalities/metadata fields, and chatStream contract are all aligned with how a unified generate/ai API will discriminate adapters.

If you ever want stricter coupling between supported models and chatStream calls, you could consider tightening the signature to something like:

chatStream: (
  options: ChatOptions<TModels[number], TProviderOptions>,
) => AsyncIterable<StreamChunk>

so options.model can't drift from models. Not required, but it's a low-cost type-safety win if it doesn't fight ChatOptions' current generic constraints.

packages/typescript/ai/src/adapters/index.ts (1)

1-33: Adapter index looks good; consider deriving AdapterKind from AnyAdapter

The base adapter re-exports and AnyAdapter union are a nice central place to hang adapter-level types.

To reduce the chance of drift if you add another adapter kind later, you could derive AdapterKind from AnyAdapter instead of hard-coding the union, e.g.:

export type AnyAdapter =
  | ChatAdapter<any, any, any, any, any>
  | EmbeddingAdapter<any, any>
  | SummarizeAdapter<any, any>

export type AdapterKind = AnyAdapter['kind']

Not required, but it keeps the discriminator and the union in sync automatically.

packages/typescript/ai-gemini/src/utils/client.ts (1)

41-43: Consider consolidating generateId across adapter packages.

This identical function is duplicated across the Gemini, OpenAI, Anthropic, and Ollama packages and the base adapter classes. Consider extracting it to a shared utility module (e.g., @tanstack/ai/utils) to reduce duplication and ease maintenance.

Example consolidation in packages/typescript/ai/src/utils/id.ts:

export function generateId(prefix: string): string {
  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
}

Then import from the shared location in each adapter package.

packages/typescript/ai-ollama/src/utils/client.ts (1)

39-41: Consider removing the default parameter for consistency.

This generateId implementation has a default parameter 'msg', while the identical function in the OpenAI, Anthropic, and Gemini adapters doesn't have a default. For consistency across the codebase, consider removing the default or applying it uniformly.

Note: This also relates to the code duplication mentioned in the Gemini utils review; consolidating generateId would naturally resolve this inconsistency.

packages/typescript/ai-ollama/src/adapters/text.ts (2)

254-260: Duplicate code in image source handling.

Both branches of the conditional (lines 255-259) execute identical code. The distinction between data and URL sources appears intended but isn't implemented.

Simplify by removing the redundant conditional:

           } else if (part.type === 'image') {
-            if (part.source.type === 'data') {
-              images.push(part.source.value)
-            } else {
-              images.push(part.source.value)
-            }
+            images.push(part.source.value)
           }

Or, if different handling was intended (e.g., prefixing data URIs), implement the distinction.


280-292: Silent failure on JSON parse error.

The empty catch block silently converts invalid JSON arguments to an empty object, which could mask malformed tool call data and cause downstream failures.

Consider logging or preserving the original string:

                   try {
                     parsedArguments = JSON.parse(
                       toolCall.function.arguments,
                     ) as Record<string, unknown>
                   } catch {
-                    parsedArguments = {}
+                    // Log or handle malformed arguments
+                    console.warn(`Failed to parse tool call arguments for ${toolCall.function.name}`)
+                    parsedArguments = { _raw: toolCall.function.arguments }
                   }
packages/typescript/ai-gemini/src/adapters/embed.ts (2)

74-86: Sequential API calls for multiple inputs may be inefficient.

Each input triggers a separate API call in sequence. For batches of inputs, this adds latency. Consider parallelizing with Promise.all if the Gemini API supports concurrent requests, or check if batch embedding is available.

-    for (const input of inputs) {
-      const response = await this.client.models.embedContent({
-        model,
-        contents: [{ role: 'user', parts: [{ text: input }] }],
-        config: {
-          outputDimensionality: options.dimensions,
-        },
-      })
-      if (response.embeddings?.[0]?.values) {
-        embeddings.push(response.embeddings[0].values)
+    const responses = await Promise.all(
+      inputs.map((input) =>
+        this.client.models.embedContent({
+          model,
+          contents: [{ role: 'user', parts: [{ text: input }] }],
+          config: {
+            outputDimensionality: options.dimensions,
+          },
+        }),
+      ),
+    )
+    for (const response of responses) {
+      if (response.embeddings?.[0]?.values) {
+        embeddings.push(response.embeddings[0].values)
       }
     }

88-96: Hardcoded usage tokens and inconsistent ID generation.

Usage tokens are hardcoded to 0, which doesn't reflect actual API consumption. Also, Date.now() is used directly instead of the generateId utility used in other adapters (e.g., the Ollama summarize adapter uses generateId('sum')).

For consistency with other adapters:

+import { generateId } from '../utils'
+
     return {
-      id: `embed-${Date.now()}`,
+      id: generateId('embed'),
       model,
       embeddings,
       usage: {
-        promptTokens: 0,
-        totalTokens: 0,
+        promptTokens: 0, // TODO: Gemini API may provide token counts in response
+        totalTokens: 0,
       },
     }
packages/typescript/ai-openai/src/adapters/summarize.ts (1)

15-20: Provider options are defined but never used.

OpenAISummarizeProviderOptions defines temperature and maxTokens, but the summarize() method hardcodes temperature: 0.3 (line 53) and uses options.maxLength for max_tokens. Consider either removing these unused options or applying them in the API call.

packages/typescript/ai-gemini/src/adapters/summarize.ts (2)

50-53: Inconsistent inheritance pattern.

This adapter implements SummarizeAdapter directly, while OpenAISummarizeAdapter extends BaseSummarizeAdapter. For consistency and to leverage shared functionality (like the base generateId), consider extending BaseSummarizeAdapter instead.

26-40: Provider options are defined but not applied.

GeminiSummarizeProviderOptions defines generationConfig and safetySettings, but these are not passed to the generateContent call. Either use these options in the API request or remove them from the interface.

     const response = await this.client.models.generateContent({
       model,
       contents: [
         {
           role: 'user',
           parts: [
             { text: `Please summarize the following:\n\n${options.text}` },
           ],
         },
       ],
       config: {
         systemInstruction: systemPrompt,
+        ...options.providerOptions?.generationConfig,
       },
+      safetySettings: options.providerOptions?.safetySettings,
     })
packages/typescript/ai-openai/src/adapters/text.ts (3)

100-107: Remove or replace debug logging.

These console.error statements appear to be debugging artifacts. Consider removing them or using a proper logging abstraction that can be configured for different environments.

     } catch (error: unknown) {
       const err = error as Error
-      console.error('>>> chatStream: Fatal error during response creation <<<')
-      console.error('>>> Error message:', err.message)
-      console.error('>>> Error stack:', err.stack)
-      console.error('>>> Full error:', err)
       throw error
     }

380-400: Debug logging in catch block.

Similar to the earlier comment, the console.log should be removed or replaced with a configurable logging mechanism. Also note the unused eventTypeCounts referenced here.


348-351: Consider a more robust type check.

The type assertion (item as { type: string }).type works but is fragile. Consider using a type guard or checking for the property's existence first.

-          const hasFunctionCalls = chunk.response.output.some(
-            (item: unknown) =>
-              (item as { type: string }).type === 'function_call',
-          )
+          const hasFunctionCalls = chunk.response.output.some(
+            (item) =>
+              item &&
+              typeof item === 'object' &&
+              'type' in item &&
+              item.type === 'function_call',
+          )
packages/typescript/ai-ollama/src/adapters/embed.ts (2)

85-92: Sequential API calls may impact performance for batch embeddings.

Each input triggers a separate client.embeddings() call. If the Ollama API supports batching multiple prompts in a single request, consider leveraging that to reduce latency and overhead. Otherwise, for large input arrays, consider using Promise.all for concurrent requests (with appropriate rate limiting if needed).


100-100: Non-unique ID generation.

Date.now() can produce duplicate IDs for rapid successive calls. Consider using a UUID library or combining it with a random suffix:

-      id: `embed-${Date.now()}`,
+      id: `embed-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`,
packages/typescript/ai/src/types.ts (1)

994-1037: Duplication with ChatStreamOptionsForModel.

ChatOptionsForChatAdapter (lines 994-1037) is structurally very similar to ChatStreamOptionsForModel (lines 887-937). The main difference is the adapter base type (ChatAdapter vs AIAdapter). Consider whether these could share a common base type or be unified to reduce maintenance overhead as the API evolves.

packages/typescript/ai-gemini/src/adapters/text.ts (1)

157-162: Silent error swallowing in argument merging.

The catch block silently replaces the existing args with the new args when JSON parsing fails, which could mask data corruption issues. Consider logging a warning:

             } catch {
+              // Log or track parse failure for debugging
               toolCallData.args =
                 typeof functionArgs === 'string'
                   ? functionArgs
                   : JSON.stringify(functionArgs)
             }
packages/typescript/ai/src/core/generate.ts (3)

495-497: Dead code - outputSchema is guaranteed to be defined.

This function is only called when chatOptions.outputSchema is truthy (line 347), and outputSchema is destructured from options. This check will never trigger.

-  // Validate against the schema
-  if (!outputSchema) {
-    throw new Error('outputSchema is required for structured output')
-  }
-
   const result = outputSchema.safeParse(parsed)

223-224: Missing newline between type definitions.

There's a missing newline separator before the GenerateReturnType comment, which reduces readability:

-      : never /** Infer the return type based on adapter kind, schema, and stream */
-type GenerateReturnType<
+      : never
+
+/** Infer the return type based on adapter kind, schema, and stream */
+type GenerateReturnType<

550-589: generateSummaryStream doesn't actually stream.

This function awaits the full summarize() result before yielding any chunks, so it provides a streaming interface but not streaming behavior. This may be intentional for API consistency, but consider documenting the limitation or implementing true streaming if the underlying adapter supports it.
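
To make the observation concrete, a minimal generic illustration (not the actual implementation) of a "streaming interface, non-streaming behavior" wrapper:

```ts
// Nothing is yielded until the entire promise resolves, so callers get an
// AsyncIterable-shaped API without any incremental delivery.
async function* pseudoStream<T>(work: Promise<T>) {
  const result = await work
  yield result
}
```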

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 049eb8a and 86fa305.

📒 Files selected for processing (40)
  • .changeset/two-bikes-kneel.md (1 hunks)
  • README.md (2 hunks)
  • docs/guides/tree-shakeable-adapters.md (1 hunks)
  • examples/ts-react-chat/src/routes/api.tanchat.ts (2 hunks)
  • packages/typescript/ai-anthropic/src/adapters/summarize.ts (1 hunks)
  • packages/typescript/ai-anthropic/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-anthropic/src/index.ts (1 hunks)
  • packages/typescript/ai-anthropic/src/utils/client.ts (1 hunks)
  • packages/typescript/ai-anthropic/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-gemini/src/adapters/embed.ts (1 hunks)
  • packages/typescript/ai-gemini/src/adapters/summarize.ts (1 hunks)
  • packages/typescript/ai-gemini/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-gemini/src/index.ts (1 hunks)
  • packages/typescript/ai-gemini/src/model-meta.ts (1 hunks)
  • packages/typescript/ai-gemini/src/utils/client.ts (1 hunks)
  • packages/typescript/ai-gemini/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-ollama/src/adapters/embed.ts (1 hunks)
  • packages/typescript/ai-ollama/src/adapters/summarize.ts (1 hunks)
  • packages/typescript/ai-ollama/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-ollama/src/index.ts (1 hunks)
  • packages/typescript/ai-ollama/src/utils/client.ts (1 hunks)
  • packages/typescript/ai-ollama/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/embed.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/summarize.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-openai/src/index.ts (1 hunks)
  • packages/typescript/ai-openai/src/openai-adapter.ts (1 hunks)
  • packages/typescript/ai-openai/src/utils/client.ts (1 hunks)
  • packages/typescript/ai-openai/src/utils/index.ts (1 hunks)
  • packages/typescript/ai/package.json (1 hunks)
  • packages/typescript/ai/src/adapters/base-chat-adapter.ts (1 hunks)
  • packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1 hunks)
  • packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1 hunks)
  • packages/typescript/ai/src/adapters/index.ts (1 hunks)
  • packages/typescript/ai/src/core/generate.ts (1 hunks)
  • packages/typescript/ai/src/index.ts (1 hunks)
  • packages/typescript/ai/src/types.ts (2 hunks)
  • packages/typescript/ai/tests/generate-types.test-d.ts (1 hunks)
  • packages/typescript/ai/tests/generate.test.ts (1 hunks)
  • packages/typescript/ai/vite.config.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (16)
packages/typescript/ai-gemini/src/model-meta.ts (1)
packages/typescript/ai-gemini/src/index.ts (2)
  • GeminiModels (38-38)
  • GEMINI_MODELS (37-37)
packages/typescript/ai/src/adapters/index.ts (3)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • ChatAdapter (30-70)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • EmbeddingAdapter (22-41)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • SummarizeAdapter (22-41)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
packages/typescript/ai/src/types.ts (2)
  • SummarizationOptions (703-709)
  • SummarizationResult (711-720)
packages/typescript/ai/tests/generate-types.test-d.ts (3)
packages/typescript/ai/src/adapters/index.ts (3)
  • BaseChatAdapter (7-7)
  • BaseEmbeddingAdapter (14-14)
  • BaseSummarizeAdapter (21-21)
packages/typescript/ai/src/types.ts (6)
  • ChatOptions (554-588)
  • StreamChunk (678-686)
  • EmbeddingOptions (722-726)
  • EmbeddingResult (728-736)
  • SummarizationOptions (703-709)
  • SummarizationResult (711-720)
packages/typescript/ai/src/core/generate.ts (1)
  • ai (315-393)
packages/typescript/ai-gemini/src/utils/client.ts (7)
packages/typescript/ai-gemini/src/utils/index.ts (4)
  • GeminiClientConfig (5-5)
  • createGeminiClient (2-2)
  • getGeminiApiKeyFromEnv (4-4)
  • generateId (3-3)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • generateId (119-121)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • generateId (66-68)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • generateId (68-70)
packages/typescript/ai-anthropic/src/utils/client.ts (1)
  • generateId (43-45)
packages/typescript/ai-ollama/src/utils/client.ts (1)
  • generateId (39-41)
packages/typescript/ai-openai/src/utils/client.ts (1)
  • generateId (45-47)
packages/typescript/ai-anthropic/src/utils/client.ts (6)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • generateId (119-121)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • generateId (66-68)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • generateId (68-70)
packages/typescript/ai-gemini/src/utils/client.ts (1)
  • generateId (41-43)
packages/typescript/ai-ollama/src/utils/client.ts (1)
  • generateId (39-41)
packages/typescript/ai-openai/src/utils/client.ts (1)
  • generateId (45-47)
packages/typescript/ai-openai/src/adapters/embed.ts (6)
packages/typescript/ai-openai/src/utils/client.ts (4)
  • OpenAIClientConfig (3-7)
  • createOpenAIClient (12-18)
  • generateId (45-47)
  • getOpenAIApiKeyFromEnv (24-40)
packages/typescript/ai-openai/src/model-meta.ts (1)
  • OPENAI_EMBEDDING_MODELS (1664-1668)
packages/typescript/ai/src/types.ts (2)
  • EmbeddingOptions (722-726)
  • EmbeddingResult (728-736)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • generateId (119-121)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • generateId (66-68)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • generateId (68-70)
packages/typescript/ai-openai/src/openai-adapter.ts (2)
packages/typescript/ai-anthropic/src/text/text-provider-options.ts (1)
  • validateTextProviderOptions (198-204)
packages/typescript/ai-openai/src/text/text-provider-options.ts (1)
  • validateTextProviderOptions (320-325)
packages/typescript/ai-ollama/src/adapters/text.ts (4)
packages/typescript/ai-ollama/src/index.ts (5)
  • OllamaTextModels (8-8)
  • OllamaTextModel (12-12)
  • OllamaTextProviderOptions (13-13)
  • OllamaTextAdapter (7-7)
  • Ollama (49-49)
packages/typescript/ai-ollama/src/utils/index.ts (3)
  • createOllamaClient (2-2)
  • generateId (4-4)
  • getOllamaHostFromEnv (5-5)
packages/typescript/ai/src/types.ts (3)
  • StreamChunk (678-686)
  • ToolCall (68-75)
  • Tool (326-427)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • generateId (119-121)
packages/typescript/ai-gemini/src/adapters/summarize.ts (4)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (2)
  • SummarizeAdapter (22-41)
  • generateId (68-70)
packages/typescript/ai/src/types.ts (3)
  • SummarizeAdapter (1040-1040)
  • SummarizationOptions (703-709)
  • SummarizationResult (711-720)
packages/typescript/ai-gemini/src/utils/client.ts (3)
  • createGeminiClient (10-14)
  • generateId (41-43)
  • getGeminiApiKeyFromEnv (20-36)
packages/typescript/ai-gemini/src/utils/index.ts (3)
  • createGeminiClient (2-2)
  • generateId (3-3)
  • getGeminiApiKeyFromEnv (4-4)
packages/typescript/ai-openai/src/utils/client.ts (6)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • generateId (119-121)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • generateId (66-68)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • generateId (68-70)
packages/typescript/ai-anthropic/src/utils/client.ts (1)
  • generateId (43-45)
packages/typescript/ai-gemini/src/utils/client.ts (1)
  • generateId (41-43)
packages/typescript/ai-ollama/src/utils/client.ts (1)
  • generateId (39-41)
packages/typescript/ai-ollama/src/utils/client.ts (3)
packages/typescript/ai-anthropic/src/utils/client.ts (1)
  • generateId (43-45)
packages/typescript/ai-gemini/src/utils/client.ts (1)
  • generateId (41-43)
packages/typescript/ai-openai/src/utils/client.ts (1)
  • generateId (45-47)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (2)
packages/typescript/ai/src/adapters/index.ts (3)
  • EmbeddingAdapterConfig (16-16)
  • EmbeddingAdapter (15-15)
  • BaseEmbeddingAdapter (14-14)
packages/typescript/ai/src/types.ts (3)
  • EmbeddingAdapter (1040-1040)
  • EmbeddingOptions (722-726)
  • EmbeddingResult (728-736)
packages/typescript/ai-anthropic/src/adapters/summarize.ts (4)
packages/typescript/ai-anthropic/src/utils/client.ts (3)
  • AnthropicClientConfig (3-5)
  • createAnthropicClient (10-16)
  • getAnthropicApiKeyFromEnv (22-38)
packages/typescript/ai-anthropic/src/utils/index.ts (3)
  • AnthropicClientConfig (5-5)
  • createAnthropicClient (2-2)
  • getAnthropicApiKeyFromEnv (4-4)
packages/typescript/ai-anthropic/src/model-meta.ts (1)
  • ANTHROPIC_MODELS (363-373)
packages/typescript/ai/src/types.ts (2)
  • SummarizationOptions (703-709)
  • SummarizationResult (711-720)
packages/typescript/ai/src/types.ts (4)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • ChatAdapter (30-70)
packages/typescript/ai/src/adapters/index.ts (3)
  • ChatAdapter (8-8)
  • EmbeddingAdapter (15-15)
  • SummarizeAdapter (22-22)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • EmbeddingAdapter (22-41)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • SummarizeAdapter (22-41)
packages/typescript/ai-openai/src/adapters/text.ts (1)
packages/typescript/ai-openai/src/utils/index.ts (4)
  • OpenAIClientConfig (5-5)
  • createOpenAIClient (2-2)
  • generateId (4-4)
  • getOpenAIApiKeyFromEnv (3-3)
🪛 LanguageTool
docs/guides/tree-shakeable-adapters.md

[style] ~173-~173: This phrase is redundant. Consider writing “eliminated”.
Context: ... your bundle - Unused adapter types are completely eliminated - Smaller bundles lead to faster load t...

(COMPLETELY_ANNIHILATE)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Test
🔇 Additional comments (53)
packages/typescript/ai/package.json (1)

20-23: No changes needed. The "./adapters" export path correctly re-exports all three adapter types (chat, embedding, summarize) through named exports in adapters/index.ts, enabling proper tree-shaking. The structure follows the established pattern and does not require separate sub-export paths.

Likely an incorrect or invalid review comment.

packages/typescript/ai/vite.config.ts (1)

32-36: New adapter entry point correctly configured for tree-shaking.

The addition of './src/adapters/index.ts' as a separate entry point is properly set up. The file exists and exports the three adapter types (chat, embedding, summarize) as intended. The package.json exports field correctly exposes the adapters entry point at "./adapters", enabling independent imports and tree-shaking of adapter functionality.

packages/typescript/ai-openai/src/openai-adapter.ts (1)

508-512: LGTM - validation now includes model context.

The expansion of validateTextProviderOptions to include model: options.model alongside input and providerOptions is appropriate. This enables model-specific validation rules (e.g., checking if certain provider options are compatible with the selected model).

packages/typescript/ai-openai/src/utils/index.ts (1)

1-6: LGTM - clean barrel export for OpenAI utilities.

The re-export pattern is appropriate for consolidating client utilities and enabling tree-shakeable imports.

packages/typescript/ai-anthropic/src/utils/index.ts (1)

1-6: LGTM - consistent with OpenAI utility barrel pattern.

The re-export structure mirrors the OpenAI package, maintaining consistency across provider adapters.

packages/typescript/ai-anthropic/src/utils/client.ts (2)

22-38: LGTM - environment detection for API key retrieval.

The fallback chain from globalThis.window.env to process.env handles both browser and Node.js environments appropriately. The error message is clear and actionable.

3-16: LGTM - clean client factory implementation.

The AnthropicClientConfig interface and createAnthropicClient factory follow the established pattern from other provider adapters.

packages/typescript/ai/src/adapters/base-summarize-adapter.ts (2)

22-41: LGTM - well-designed adapter interface with discriminator.

The kind: 'summarize' discriminator enables type-safe routing in the generate() function. The generic parameters for TModels and TProviderOptions provide good type inference for concrete implementations.

47-71: LGTM - abstract base class follows established adapter patterns.

The use of declare for the type-only _providerOptions is correct. The structure mirrors BaseChatAdapter and BaseEmbeddingAdapter, maintaining consistency across adapter types.

packages/typescript/ai-anthropic/src/adapters/summarize.ts (1)

103-118: LGTM - factory functions follow established patterns.

The createAnthropicSummarize and anthropicSummarize factories mirror the conventions used in other provider adapters, providing both explicit and environment-based API key options.

.changeset/two-bikes-kneel.md (1)

1-9: Changeset scope and message look appropriate

Minor bumps for the affected @tanstack/ai-* packages and the short description accurately reflect the adapter split for tree-shaking. No adjustments needed.

packages/typescript/ai-gemini/src/model-meta.ts (1)

771-783: GeminiModels alias correctly captures the model name union

GeminiModels = (typeof GEMINI_MODELS)[number] is the right way to derive the string-literal union from the model list and lines up with the GeminiTextModel re-export pattern in index.ts. This should improve ergonomics without changing behavior.

packages/typescript/ai-ollama/src/utils/index.ts (1)

1-7: Ollama utils barrel export is clear and consistent

The utils index cleanly re-exports the client helpers and config type from ./client, matching the pattern used in other provider packages and making the public surface easier to consume. Looks good as-is.

packages/typescript/ai/src/index.ts (1)

4-12: No action required. The review comment's premise is incorrect.

The codebase consistently exports and documents the ai function. All JSDoc examples in core/generate.ts use import { ai }, and the root index.ts correctly exports ai from ./core/generate. There is no documentation (README or guide) claiming users should import generate: no such guide exists in the package, and the README contains no code examples. The API is already consistent.

Likely an incorrect or invalid review comment.

packages/typescript/ai-gemini/src/utils/index.ts (1)

1-6: LGTM!

Clean barrel export consolidating Gemini utilities for easy import.

examples/ts-react-chat/src/routes/api.tanchat.ts (1)

2-3: LGTM!

Import changes align with the new tree-shakeable adapter pattern.

packages/typescript/ai-gemini/src/utils/client.ts (3)

1-5: LGTM!

Clean interface definition for Gemini client configuration.

10-14: LGTM!

Clean factory function for creating Gemini client instances.

20-36: LGTM!

Good environment-aware API key retrieval with clear error messaging and support for both GOOGLE_API_KEY and GEMINI_API_KEY.

packages/typescript/ai/tests/generate.test.ts (1)

1-287: LGTM!

Comprehensive test suite covering all three adapter types (chat, embedding, summarize) with proper mocking and type safety checks. Well-structured and thorough.

packages/typescript/ai-ollama/src/utils/client.ts (4)

1-5: LGTM!

Clean configuration interface for Ollama client.

10-14: LGTM!

Good default host configuration for local Ollama instances.

20-34: LGTM!

Good environment-aware host resolution with sensible localhost fallback.

46-49: LGTM!

Reasonable token estimation heuristic for Ollama. The approximation is clearly documented in the comment.
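
For reference, the common chars/4 heuristic looks roughly like this (a hypothetical sketch; the helper's name and exact formula in packages/typescript/ai-ollama/src/utils/client.ts are assumptions):

```ts
// Rough rule of thumb: about 4 characters per token for English-like text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}
```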

packages/typescript/ai-openai/src/adapters/embed.ts (5)

1-27: LGTM!

Well-documented interfaces with clear API reference. Good use of base adapter patterns.

35-48: LGTM!

Clean adapter class structure following the base adapter pattern with proper typing.

50-66: LGTM!

Clean implementation of the embeddings method with proper response mapping and sensible model fallback.

81-86: LGTM!

Well-documented factory function with clear example usage.

111-116: LGTM!

Well-documented factory with automatic environment-based API key detection. The example clearly shows the intended usage pattern.

packages/typescript/ai-openai/src/utils/client.ts (3)

1-7: LGTM!

Clean configuration interface with appropriate optional fields for OpenAI SDK configuration.

12-18: LGTM!

Clean factory function properly initializing the OpenAI SDK client.

24-40: LGTM!

Good environment-aware API key retrieval with clear error messaging.

packages/typescript/ai/tests/generate-types.test-d.ts (1)

1-529: LGTM!

Excellent comprehensive type-level test suite. The tests thoroughly cover:

  • Return type inference for all adapter types
  • Model validation and constraints
  • Strict providerOptions typing
  • outputSchema handling with complex nested types
  • Summarize streaming behavior

The use of @ts-expect-error for negative test cases is appropriate and ensures type safety. Well-structured and thorough.

packages/typescript/ai-ollama/src/index.ts (1)

1-53: Well-structured tree-shakeable exports with proper deprecation notice.

The organization is clear with new adapters at the top and deprecated legacy exports clearly marked. The deprecation JSDoc provides helpful migration guidance.

packages/typescript/ai-anthropic/src/index.ts (1)

1-63: Clean export structure with appropriate provider-specific note.

The note on line 23 about Anthropic not supporting embeddings natively is helpful for users. The deprecation pattern is consistent with other adapters.

packages/typescript/ai-ollama/src/adapters/summarize.ts (2)

64-88: Adapter class structure looks good.

The class properly implements the SummarizeAdapter interface with appropriate readonly properties and a flexible constructor accepting either a host string or a client instance.

152-167: Factory functions follow the established pattern.

createOllamaSummarize for an explicit host and ollamaSummarize for environment-based configuration align with the other adapters in the codebase.

packages/typescript/ai-ollama/src/adapters/text.ts (2)

125-132: Streaming implementation looks correct.

The chatStream method properly delegates to the Ollama client with stream: true and processes chunks via the generator.

309-328: Options mapping is clean.

Common options are correctly mapped to Ollama's expected format, with provider-specific options spread appropriately.

packages/typescript/ai-gemini/src/adapters/embed.ts (1)

39-62: Adapter class structure follows the established pattern.

The class properly implements EmbeddingAdapter with readonly properties and a flexible constructor accepting either an API key or a client instance.

packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)

1-69: LGTM! Clean base abstraction for embedding adapters.

The interface and abstract class provide a well-structured foundation for provider-specific embedding adapters. The use of declare for _providerOptions correctly marks it as a type-only property without runtime overhead.

packages/typescript/ai-openai/src/adapters/summarize.ts (2)

90-92: Clarify maxLength semantics.

maxLength is used both as the API's max_tokens parameter (line 52) and in the prompt as a hint. The prompt says "tokens" but users may expect word or character counts. Consider clarifying the expected unit in the documentation or interface definition.

43-67: Good defensive handling of API response.

The optional chaining on response.choices[0]?.message.content and fallback values for usage tokens handle potential undefined responses gracefully.

packages/typescript/ai-openai/src/index.ts (1)

1-62: Well-organized export structure with clear deprecation path.

The separation between new tree-shakeable adapters and deprecated legacy exports is clear. The deprecation notice with migration guidance is helpful for consumers.

packages/typescript/ai-gemini/src/adapters/summarize.ts (1)

75-115: Summarize method implementation is correct.

The method properly builds the prompt, calls the Gemini API, extracts usage metadata, and returns a well-formed SummarizationResult.

packages/typescript/ai-openai/src/adapters/text.ts (2)

110-401: Comprehensive streaming implementation.

The stream processing handles various OpenAI response event types correctly, including content deltas, reasoning, tool calls, and errors. The deduplication logic using hasStreamedContentDeltas and hasStreamedReasoningDeltas flags appropriately prevents duplicate content emission; a rough sketch of the pattern follows.
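
A minimal, self-contained sketch of that deduplication-flag pattern (the event shape here is an assumption, not the adapter's actual types):

```ts
// Yield streamed deltas as they arrive; only fall back to the consolidated
// 'done' payload when no deltas were ever streamed.
async function* dedupeContent(
  events: AsyncIterable<{ type: 'delta' | 'done'; text: string }>,
) {
  let hasStreamedContentDeltas = false
  for await (const event of events) {
    if (event.type === 'delta') {
      hasStreamedContentDeltas = true
      yield event
    } else if (!hasStreamedContentDeltas) {
      yield event
    }
  }
}
```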


450-537: Message conversion logic is well-structured.

The convertMessagesToInput method correctly handles different message roles (tool, assistant, user) and converts them to OpenAI's expected format. The tool call serialization and multimodal content handling are properly implemented.

packages/typescript/ai-ollama/src/adapters/embed.ts (1)

63-73: LGTM!

The constructor correctly handles both string host and pre-configured client patterns, providing flexibility for different use cases.

packages/typescript/ai-gemini/src/index.ts (1)

1-51: LGTM!

Clean organization with clear separation between new tree-shakeable adapters and deprecated legacy exports. The deprecation JSDoc provides helpful migration guidance to users.

packages/typescript/ai/src/types.ts (1)

970-989: LGTM!

Well-structured model extraction utilities that correctly infer model unions from each adapter kind, with a unified ExtractModelsFromAnyAdapter for polymorphic usage.
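
For intuition, a hypothetical sketch of how such a model union can be inferred from an adapter type (the real utilities in packages/typescript/ai/src/types.ts are more specific):

```ts
// Infers the union of model name literals from an adapter's readonly
// models list, e.g. 'gpt-4o' | 'gpt-4o-mini' | ...
type ExtractModels<TAdapter> = TAdapter extends {
  models: ReadonlyArray<infer M>
}
  ? M
  : never
```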

packages/typescript/ai-gemini/src/adapters/text.ts (1)

74-99: LGTM!

Good error handling pattern that catches exceptions and yields a structured error chunk with appropriate metadata.

packages/typescript/ai/src/core/generate.ts (1)

305-393: LGTM!

Well-designed unified ai() function with clean discriminated union routing. The type inference through generics ensures compile-time safety while the runtime switch handles dispatch correctly; a rough sketch of the dispatch shape follows.
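
A minimal sketch (an assumed shape, not the actual implementation in core/generate.ts) of kind-based dispatch over the adapter union:

```ts
type AnyAdapter =
  | { kind: 'chat' }
  | { kind: 'embedding' }
  | { kind: 'summarize' }

// The discriminant narrows each branch, so each case can call the
// capability-specific method with full type safety.
function dispatch(adapter: AnyAdapter): string {
  switch (adapter.kind) {
    case 'chat':
      return 'stream text chunks'
    case 'embedding':
      return 'create embeddings'
    case 'summarize':
      return 'produce a summary'
  }
}
```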

packages/typescript/ai-anthropic/src/adapters/text.ts (1)

520-528: The implementation of getAnthropicApiKeyFromEnv() already handles this correctly. The function explicitly throws a clear error with actionable guidance ("ANTHROPIC_API_KEY is required. Please set it in your environment variables or use the factory function with an explicit API key.") when the environment variable is missing, rather than passing undefined to the adapter. The JSDoc also documents this behavior with @throws. No changes needed.
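
The behavior described is roughly this shape (a sketch assembled from the comments above; the actual code lives in packages/typescript/ai-anthropic/src/utils/client.ts):

```ts
// Falls back from a browser-style window.env to Node's process.env, and
// throws a descriptive error instead of silently passing undefined onward.
export function getAnthropicApiKeyFromEnv(): string {
  const key =
    (globalThis as { window?: { env?: Record<string, string> } }).window?.env
      ?.ANTHROPIC_API_KEY ?? process.env.ANTHROPIC_API_KEY
  if (!key) {
    throw new Error(
      'ANTHROPIC_API_KEY is required. Please set it in your environment variables or use the factory function with an explicit API key.',
    )
  }
  return key
}
```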

Comment on lines 1 to 209
# Tree-Shakeable Adapters

TanStack AI provides tree-shakeable adapters that allow you to import only the functionality you need, resulting in smaller bundle sizes.

## Overview

Instead of importing a monolithic adapter that includes chat, embedding, and summarization capabilities all at once, you can now import only the specific functionality you need:

- **Text Adapters** - For chat and text generation
- **Embed Adapters** - For creating embeddings
- **Summarize Adapters** - For text summarization

## Installation

Each provider package (e.g., `@tanstack/ai-openai`, `@tanstack/ai-anthropic`) exports tree-shakeable adapters:

```ts
// Import only what you need
import { openaiText } from '@tanstack/ai-openai'
import { openaiEmbed } from '@tanstack/ai-openai'
import { openaiSummarize } from '@tanstack/ai-openai'
```

## Available Adapters

### OpenAI

```ts
import {
  openaiText, // Chat/text generation
  openaiEmbed, // Embeddings
  openaiSummarize, // Summarization
  createOpenAIText,
  createOpenAIEmbed,
  createOpenAISummarize,
} from '@tanstack/ai-openai'
```

### Anthropic

```ts
import {
  anthropicText, // Chat/text generation
  anthropicSummarize, // Summarization
  createAnthropicText,
  createAnthropicSummarize,
} from '@tanstack/ai-anthropic'
```

> Note: Anthropic does not support embeddings natively.

### Gemini

```ts
import {
  geminiText, // Chat/text generation
  geminiEmbed, // Embeddings
  geminiSummarize, // Summarization
  createGeminiText,
  createGeminiEmbed,
  createGeminiSummarize,
} from '@tanstack/ai-gemini'
```

### Ollama

```ts
import {
  ollamaText, // Chat/text generation
  ollamaEmbed, // Embeddings
  ollamaSummarize, // Summarization
  createOllamaText,
  createOllamaEmbed,
  createOllamaSummarize,
} from '@tanstack/ai-ollama'
```

## Usage

### Basic Usage

Each adapter type has two ways to create instances:

1. **Factory function** (recommended for quick setup):

```ts
import { openaiText } from '@tanstack/ai-openai'

const textAdapter = openaiText()
```

2. **Configured factory** (for more control):

```ts
import { createOpenAIText } from '@tanstack/ai-openai/adapters'

const textAdapter = createOpenAIText({
  apiKey: 'your-api-key',
  // additional configuration...
})
```

### Using the `generate` Function

The `generate` function provides a unified API that adapts based on the adapter type:

```ts
import { generate } from '@tanstack/ai'
import { openaiText, openaiEmbed, openaiSummarize } from '@tanstack/ai-openai/adapters'

// Chat generation - returns AsyncIterable<StreamChunk>
const chatResult = generate({
  adapter: openaiText(),
  model: 'gpt-4o',
  messages: [{ role: 'user', content: [{ type: 'text', content: 'Hello!' }] }],
})

for await (const chunk of chatResult) {
  console.log(chunk)
}

// Embeddings - returns Promise<EmbeddingResult>
const embedResult = await generate({
  adapter: openaiEmbed(),
  model: 'text-embedding-3-small',
  input: ['Hello, world!'],
})

console.log(embedResult.embeddings)

// Summarization - returns Promise<SummarizationResult>
const summarizeResult = await generate({
  adapter: openaiSummarize(),
  model: 'gpt-4o-mini',
  text: 'Long text to summarize...',
})

console.log(summarizeResult.summary)
```

### Type Safety

Each adapter provides full type safety for its supported models and options:

```ts
import { openaiText, type OpenAITextModel } from '@tanstack/ai-openai'

const adapter = openaiText()

// TypeScript knows the exact models supported
const model: OpenAITextModel = 'gpt-4o' // ✓ Valid
const model2: OpenAITextModel = 'invalid' // ✗ Type error
```

## Migration from Monolithic Adapters

The legacy monolithic adapters are still available but deprecated:

```ts
// Legacy (deprecated)
import { openai } from '@tanstack/ai-openai'

// New tree-shakeable approach
import { openaiText, openaiEmbed } from '@tanstack/ai-openai/adapters'
```

## Bundle Size Benefits

Using tree-shakeable adapters means:

- Only the code you use is included in your bundle
- Unused adapter types are completely eliminated
- Smaller bundles lead to faster load times

For example, if you only need chat functionality:

```ts
// Only chat code is bundled
import { openaiText } from '@tanstack/ai-openai'
```

vs.

```ts
// All functionality is bundled (chat, embed, summarize)
import { openai } from '@tanstack/ai-openai'
```

## Adapter Types

Each adapter type implements a specific interface (sketched below this list):

- `ChatAdapter` - Provides `chatStream()` method for streaming chat responses
- `EmbeddingAdapter` - Provides `createEmbeddings()` method for vector embeddings
- `SummarizeAdapter` - Provides `summarize()` method for text summarization
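The following is an approximate, self-contained sketch of those interfaces. The real declarations live in `@tanstack/ai`; the option and result shapes below are assumptions based on the examples in this guide, not the published types:

```ts
// Placeholder result types for the sketch (the library exports richer ones)
type StreamChunk = { type: string; [key: string]: unknown }
type EmbeddingResult = { embeddings: Array<Array<number>> }
type SummarizationResult = { summary: string }

interface ChatAdapter {
  kind: 'chat'
  chatStream(options: {
    model: string
    messages: Array<unknown>
  }): AsyncIterable<StreamChunk>
}

interface EmbeddingAdapter {
  kind: 'embedding'
  createEmbeddings(options: {
    model: string
    input: Array<string>
  }): Promise<EmbeddingResult>
}

interface SummarizeAdapter {
  kind: 'summarize'
  summarize(options: {
    model: string
    text: string
    maxLength?: number
  }): Promise<SummarizationResult>
}
```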

All adapters have a `kind` property that indicates their type:

```ts
const textAdapter = openaiText()
console.log(textAdapter.kind) // 'chat'

const embedAdapter = openaiEmbed()
console.log(embedAdapter.kind) // 'embedding'

const summarizeAdapter = openaiSummarize()
console.log(summarizeAdapter.kind) // 'summarize'
```
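Because `kind` is a string literal, it can also serve as a discriminant when your own code accepts any adapter. A minimal sketch (the local `AnyAdapterKind` union is illustrative only; the library exports its own, richer union types):

```ts
import { openaiText } from '@tanstack/ai-openai'

// Adapters modeled only by their discriminant for this sketch
type AnyAdapterKind =
  | { kind: 'chat' }
  | { kind: 'embedding' }
  | { kind: 'summarize' }

function describeAdapter(adapter: AnyAdapterKind): string {
  switch (adapter.kind) {
    case 'chat':
      return 'streams chat responses'
    case 'embedding':
      return 'creates vector embeddings'
    case 'summarize':
      return 'summarizes text'
  }
}

console.log(describeAdapter(openaiText())) // 'streams chat responses'
```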


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
# Find the main @tanstack/ai package entry point
find . -name "package.json" -path "*/packages/ai/*" | head -5
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```bash
# Check the root index exports for @tanstack/ai
fd -e "index.ts" -e "index.js" . --path "*/packages/ai/*" | grep -E "(^|/)index\.(ts|js)$" | head -10
```

Repository: TanStack/ai

Length of output: 288


🏁 Script executed:

```bash
# Look at what's exported from the main package
rg "export.*generate|export.*ai" --type ts --type js --max-count=20 | grep -E "(packages/ai|core/generate)" | head -20
```

Repository: TanStack/ai

Length of output: 182


🏁 Script executed:

```bash
# Check package.json for @tanstack/ai to understand the entry point
cat packages/ai/package.json 2>/dev/null || find . -name "package.json" -path "*/@tanstack/ai*" -type f | head -3 | xargs cat
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```bash
# Find the main index file for @tanstack/ai
find packages/typescript/ai/src -name "index.ts" | head -5
```

Repository: TanStack/ai

Length of output: 178


🏁 Script executed:

```bash
# Read the main index file to see exports
cat packages/typescript/ai/src/index.ts
```

Repository: TanStack/ai

Length of output: 1967


🏁 Script executed:

```bash
# Also check for any re-export of generate or ai
rg "export.*from.*generate" packages/typescript/ai/src/index.ts
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```bash
# Check the OpenAI package structure
find packages/typescript/ai-openai/src -name "index.ts" | head -5
```

Repository: TanStack/ai

Length of output: 195


🏁 Script executed:

```bash
# Read the main OpenAI index
cat packages/typescript/ai-openai/src/index.ts
```

Repository: TanStack/ai

Length of output: 1840


🏁 Script executed:

```bash
# Check if there's an adapters index
cat packages/typescript/ai-openai/src/adapters/index.ts 2>/dev/null || echo "No adapters index found"
```

Repository: TanStack/ai

Length of output: 78


🏁 Script executed:

```bash
# Check package.json exports for ai-openai
cat packages/typescript/ai-openai/package.json | grep -A 20 '"exports"'
```

Repository: TanStack/ai

Length of output: 521


🏁 Script executed:

```bash
# Find and read the tree-shakeable-adapters guide
find . -name "tree-shakeable-adapters.md" -type f
```

Repository: TanStack/ai

Length of output: 95


🏁 Script executed:

```bash
# Read the guide file
cat docs/guides/tree-shakeable-adapters.md 2>/dev/null | head -250
```

Repository: TanStack/ai

Length of output: 5084


Fix `generate` import, clarify adapter import paths, and correct misleading terminology

The guide has several issues that will prevent code samples from working correctly:

  1. `generate` is not exported from `@tanstack/ai` (see the corrected snippet after this list)

    The code shows `import { generate } from '@tanstack/ai'`, but the actual export is `ai`. Change to `import { ai } from '@tanstack/ai'` (or align the docs with whatever becomes the canonical function name).

  2. Adapter import paths are inconsistent and partially unsupported

    • "Installation" and "Available Adapters" sections import from `@tanstack/ai-openai`
    • "Basic Usage" (class constructor), "Using the `generate` Function", and "Migration from Monolithic Adapters" import from `@tanstack/ai-openai/adapters`

    The root package already re-exports all adapters, so the canonical path is `@tanstack/ai-openai`. The `/adapters` entrypoint is not officially exported in package.json. Use the root path consistently throughout, or if `/adapters` is intentional, update package.json and add documentation explaining the distinction.

  3. "Class constructor" is misleading

    In "Basic Usage", option 2 is labeled "Class constructor" but the example shows `createOpenAIText({ /* config */ })`, which is a factory function. Relabel as "Configured factory" or similar.

  4. Minor: tighten "completely eliminated"

    In "Bundle Size Benefits", change "Unused adapter types are completely eliminated" to "Unused adapter types are eliminated" for more direct phrasing.

🧰 Tools
🪛 LanguageTool

[style] ~173-~173: This phrase is redundant. Consider writing “eliminated”.
Context: ... your bundle - Unused adapter types are completely eliminated - Smaller bundles lead to faster load t...

(COMPLETELY_ANNIHILATE)

🤖 Prompt for AI Agents
docs/guides/tree-shakeable-adapters.md lines 1-209: update the docs so examples use the correct top-level export and consistent adapter import path, relabel the misleading "Class constructor" example, and tighten one phrase: change the `generate` import to the actual exported name (`ai`) or whatever the library exposes, replace all `/adapters` imports with the canonical root package import (`@tanstack/ai-openai`) (or document and export `/adapters` if you intend to keep it), rename "Class constructor" to "Configured factory" (or similar) since `createOpenAIText` is a factory, and change "Unused adapter types are completely eliminated" to "Unused adapter types are eliminated."

Comment on lines +15 to +20
```ts
export interface AnthropicSummarizeProviderOptions {
  /** Temperature for response generation (0-1) */
  temperature?: number
  /** Maximum tokens in the response */
  maxTokens?: number
}
```


⚠️ Potential issue | 🟡 Minor

`AnthropicSummarizeProviderOptions` is defined but not used.

The `temperature` and `maxTokens` fields are declared but never consumed in the `summarize` method. Line 51 hardcodes `temperature: 0.3` instead of using provider options.

Either remove these unused options or wire them into the `summarize` implementation:

```diff
 async summarize(options: SummarizationOptions): Promise<SummarizationResult> {
   const systemPrompt = this.buildSummarizationPrompt(options)
+  const providerOpts = options.providerOptions as AnthropicSummarizeProviderOptions | undefined
   const response = await this.client.messages.create({
     model: options.model,
     messages: [{ role: 'user', content: options.text }],
     system: systemPrompt,
-    max_tokens: options.maxLength || 500,
-    temperature: 0.3,
+    max_tokens: providerOpts?.maxTokens ?? options.maxLength ?? 500,
+    temperature: providerOpts?.temperature ?? 0.3,
     stream: false,
   })
```

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/typescript/ai-anthropic/src/adapters/summarize.ts around lines 15 to 20 and usage at ~51, the AnthropicSummarizeProviderOptions interface declares temperature and maxTokens but those options are not used; the summarize call hardcodes temperature: 0.3. To fix, accept and pass the options through: update the adapter/provider to accept an AnthropicSummarizeProviderOptions parameter (or read them from the instance config), use options.temperature with a sensible default (e.g., 0.3) and options.maxTokens when constructing the request to the Anthropic client, or remove the unused interface if you don't intend to support those parameters.

Comment on lines 105 to 117
```ts
} catch (error: unknown) {
  const err = error as Error & { status?: number; code?: string }
  yield {
    type: 'error',
    id: generateId(this.name),
    model: options.model,
    timestamp: Date.now(),
    error: {
      message: err.message || 'Unknown error occurred',
      code: err.code || String(err.status),
    },
  }
}
```


⚠️ Potential issue | 🟡 Minor

Error code may result in string "undefined".

If both `err.code` and `err.status` are undefined, `String(err.status)` produces `"undefined"` as the error code, which isn't meaningful for consumers.

```diff
       yield {
         type: 'error',
         id: generateId(this.name),
         model: options.model,
         timestamp: Date.now(),
         error: {
           message: err.message || 'Unknown error occurred',
-          code: err.code || String(err.status),
+          code: err.code ?? (err.status != null ? String(err.status) : 'unknown'),
         },
       }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
 } catch (error: unknown) {
   const err = error as Error & { status?: number; code?: string }
   yield {
     type: 'error',
     id: generateId(this.name),
     model: options.model,
     timestamp: Date.now(),
     error: {
       message: err.message || 'Unknown error occurred',
-      code: err.code || String(err.status),
+      code: err.code ?? (err.status != null ? String(err.status) : 'unknown'),
     },
   }
 }
```

Comment on lines 349 to 358
```ts
if (msg.role === 'tool' && msg.toolCallId) {
  parts.push({
    functionResponse: {
      name: msg.toolCallId,
      response: {
        content: msg.content || '',
      },
    },
  })
}
```


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Gemini API functionResponse name field format tool calling

💡 Result:

Short answer: the functionResponse "name" is the functionDeclaration.name string (the tool/function identifier) and must match exactly the declared function/tool name (no spaces or special characters); the functionResponse payload is a JSON/Struct under "response". Example patterns from Gemini docs:

  • When sending a function result back to the model (Python SDK): types.Part.from_function_response(name=tool_call.name, response={"result": result}). [1]
  • JavaScript example: { name: tool_call.name, response: { result } } included as a user part: { role: 'user', parts: [{ functionResponse: function_response_part }] }. [1]
  • API/Vertex docs: functionResponse has fields name (string) and response (Struct JSON object) — the name must be the functionDeclaration.name and the response holds the function output. Also note thought_signature handling when the model provided one for the functionCall; you must return that signature in the same part when echoing results. [3][2]

References: [1] Google Gemini function-calling docs; [2] Gemini thought-signatures page; [3] Vertex AI GenerateContent functionCall/functionResponse reference.


🏁 Script executed:

```bash
# Get context around the code in question to understand the data structures
cd packages/typescript/ai-gemini/src/adapters && sed -n '340,370p' text.ts
```

Repository: TanStack/ai

Length of output: 777


🏁 Script executed:

```bash
# Search for tool-related definitions and patterns in the codebase
rg "toolCallId|functionName|toolCall" packages/typescript/ai-gemini/src --type ts -A 2 -B 2
```

Repository: TanStack/ai

Length of output: 16228


Use the actual function name, not `toolCallId`, for `functionResponse.name`.

According to the Gemini API specification, the `functionResponse.name` field must be the `functionDeclaration.name` (the tool/function identifier), not a call ID. The current code uses `msg.toolCallId`, which violates the API contract and may cause tool result correlation failures. Ensure the tool message includes the actual function name so it can be passed to `functionResponse.name`.
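An illustrative sketch of the corrected mapping, assuming the tool message is extended to carry the declared function name as a hypothetical `msg.toolName` field (it does not exist on the current message type):

```ts
if (msg.role === 'tool' && msg.toolCallId) {
  parts.push({
    functionResponse: {
      // Gemini requires the functionDeclaration.name here, not the call ID
      name: msg.toolName, // hypothetical field; must be added to the tool message type
      response: {
        content: msg.content || '',
      },
    },
  })
}
```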

🤖 Prompt for AI Agents
In packages/typescript/ai-gemini/src/adapters/text.ts around lines 349 to 358, the code sets functionResponse.name to msg.toolCallId but must use the actual function/tool identifier; change the assignment to use the tool/function name (e.g., msg.toolName or msg.tool?.name or msg.functionDeclaration?.name) rather than the call ID, preserving toolCallId separately if needed; ensure the message construction passes that function name into functionResponse.name so it matches the Gemini API contract.

Comment on lines +98 to +101
```ts
options: {
  temperature: 0.3,
  num_predict: options.maxLength ?? 500,
},
```


⚠️ Potential issue | 🟡 Minor

Semantic mismatch: `maxLength` used as both token limit and word limit.

`num_predict` limits output tokens (line 100), but the prompt instructs the model to keep the summary under `maxLength` words (line 136). Tokens and words are not equivalent; tokens are typically ~0.75 words on average. This inconsistency may cause unexpected truncation or overlong summaries.

Consider clarifying the semantics or using separate parameters:

```diff
       options: {
         temperature: 0.3,
-        num_predict: options.maxLength ?? 500,
+        // Rough estimate: ~1.5 tokens per word for safety margin
+        num_predict: options.maxLength ? Math.ceil(options.maxLength * 1.5) : 500,
       },
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
 options: {
   temperature: 0.3,
-  num_predict: options.maxLength ?? 500,
+  // Rough estimate: ~1.5 tokens per word for safety margin
+  num_predict: options.maxLength ? Math.ceil(options.maxLength * 1.5) : 500,
 },
```

🤖 Prompt for AI Agents
In packages/typescript/ai-ollama/src/adapters/summarize.ts around lines 98 to 101, the code uses options.maxLength as num_predict (token limit) while the prompt asks for a word limit, causing a mismatch; either convert the word limit to an estimated token limit before assigning num_predict (e.g., tokens = Math.ceil((options.maxLength ?? 500) / 0.75)) or add a separate options.maxTokens param and use that for num_predict, and update the prompt to reference the same unit (tokens or words) so both the prompt and num_predict use the same semantic unit.

Comment on lines +144 to +167
```ts
const handleToolCall = (toolCall: ToolCall): StreamChunk => {
  const actualToolCall = toolCall as ToolCall & {
    id: string
    function: { index: number }
  }
  return {
    type: 'tool_call',
    id: responseId,
    model: chunk.model,
    timestamp,
    toolCall: {
      type: 'function',
      id: actualToolCall.id,
      function: {
        name: actualToolCall.function.name || '',
        arguments:
          typeof actualToolCall.function.arguments === 'string'
            ? actualToolCall.function.arguments
            : JSON.stringify(actualToolCall.function.arguments),
      },
    },
    index: actualToolCall.function.index,
  }
}
```


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# First, find the ToolCall type definition in the codebase
rg -n "interface ToolCall|type ToolCall" --type=ts -B2 -A10
```

Repository: TanStack/ai

Length of output: 17652


🏁 Script executed:

```bash
#!/bin/bash
# Check imports at the top of the file
head -30 packages/typescript/ai-ollama/src/adapters/text.ts
```

Repository: TanStack/ai

Length of output: 703


🏁 Script executed:

```bash
#!/bin/bash
# Search for where ToolCall comes from
rg -n "from.*ToolCall|import.*ToolCall" --type=ts packages/typescript/ai-ollama/
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```bash
#!/bin/bash
# Check the actual context around lines 144-167
sed -n '140,170p' packages/typescript/ai-ollama/src/adapters/text.ts
```

Repository: TanStack/ai

Length of output: 1066


🏁 Script executed:

```bash
#!/bin/bash
# Search for Ollama package type definitions in node_modules or check online docs
rg "interface ToolCall|type ToolCall" packages/typescript/ai-ollama/ -B3 -A8
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```bash
#!/bin/bash
# Check if there are any Ollama type definitions in the repo
find . -name "*.d.ts" -o -name "ollama*" | grep -E "(ollama|types)" | head -20
```

Repository: TanStack/ai

Length of output: 134


🏁 Script executed:

```bash
#!/bin/bash
# Search for how Ollama ToolCall is used elsewhere in the codebase
rg "toolCall\." packages/typescript/ai-ollama/src/ -B2 -A2
```

Repository: TanStack/ai

Length of output: 3675


🏁 Script executed:

```bash
#!/bin/bash
# Let's verify the Ollama package version and check if ToolCall has id
grep -A 5 "ollama" packages/typescript/ai-ollama/package.json
```

Repository: TanStack/ai

Length of output: 591


🏁 Script executed:

```bash
#!/bin/bash
# Check the complete type assertion and how it's being used
sed -n '144,170p' packages/typescript/ai-ollama/src/adapters/text.ts
```

Repository: TanStack/ai

Length of output: 955


🏁 Script executed:

```bash
#!/bin/bash
# Search for Ollama ToolCall type definition online or in types
# Let's check if there are any type stubs or documentation
rg -i "interface ToolCall|export type ToolCall" --type=ts -B5 -A10 | head -100
```

Repository: TanStack/ai

Length of output: 83


🏁 Script executed:

```bash
#!/bin/bash
# Check if Ollama's ToolCall is actually optional for id by looking at how it's used
# If the code needs to assert id: string, it might mean Ollama's type doesn't guarantee it
rg "ToolCall" packages/typescript/ai-ollama/src/adapters/text.ts -B3 -A3
```

Repository: TanStack/ai

Length of output: 2128


🏁 Script executed:

```bash
#!/bin/bash
# Let's check the Ollama npm package types by examining package.json and looking for type hints
cd packages/typescript/ai-ollama && npm ls ollama 2>/dev/null | head -20
```

Repository: TanStack/ai

Length of output: 142


🌐 Web query:

Ollama ToolCall type definition JavaScript npm package v0.6.3

💡 Result:

ToolCall (from ollama v0.6.3 — src/interfaces.ts):

```ts
export interface ToolCall {
  function: {
    name: string;
    arguments: { [key: string]: any };
  };
}
```

Source: ollama-js v0.6.3 interfaces.ts. [1]

References
[1] ollama/ollama-js@v0.6.3 — src/interfaces.ts (raw).


Ollama's `ToolCall` type lacks an `id` property; the type assertion masks a real issue.

Ollama v0.6.3's `ToolCall` interface only provides `function.name` and `function.arguments`, with no `id` or `type` properties. The type assertion on lines 145–148 bypasses TypeScript's type safety but doesn't guarantee `id` exists at runtime. Line 156 uses `actualToolCall.id` without a fallback, risking undefined access.

Add a guard or use `responseId` as the tool call identifier:

```diff
 const handleToolCall = (toolCall: ToolCall): StreamChunk => {
   return {
     type: 'tool_call',
     id: responseId,
     model: chunk.model,
     timestamp,
     toolCall: {
       type: 'function',
-      id: actualToolCall.id,
+      id: responseId, // Use response ID instead of undefined toolCall.id
```

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/typescript/ai-ollama/src/adapters/text.ts around lines 144 to 167, the code unsafely asserts a ToolCall has an id and type and then uses actualToolCall.id which may be undefined at runtime; replace the assertion with runtime guards and fallbacks: check whether toolCall has an id and a function.index before using them, use responseId as the toolCall id fallback when actualToolCall.id is missing, ensure the returned toolCall.type is set to 'function' explicitly, and serialize function.arguments safely (with JSON.stringify fallback) so no undefined properties are accessed.

```ts
let responseId: string | null = null
let model: string = options.model

const eventTypeCounts = new Map<string, number>()
```


⚠️ Potential issue | 🟡 Minor

Unused variable `eventTypeCounts`.

This map is declared but never populated with event type counts. Either implement the counting logic or remove this variable and its reference in the error logging (line 386).

```diff
-    const eventTypeCounts = new Map<string, number>()
```

And update the catch block accordingly:

```diff
       console.log(
         '[OpenAI Adapter] Stream ended with error. Event type summary:',
         {
           totalChunks: chunkCount,
-          eventTypes: Object.fromEntries(eventTypeCounts),
           error: err.message,
         },
       )
```

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/text.ts around line 129, the Map<string, number> eventTypeCounts is declared but never populated or used; either implement event counting where events are processed (increment counts for each event.type before logging/processing) so the map reflects actual counts, or remove the declaration and any references to it (including its interpolation in the error log at line 386). Also update the catch block at/around line 386 to stop referencing eventTypeCounts (use a fallback message or include only data actually available) or ensure it reads from the populated map; keep the logging consistent and avoid accessing an empty/uninitialized structure.

Comment on lines 566 to 571
```ts
// For base64 data, construct a data URI
return {
  type: 'input_image',
  image_url: part.source.value,
  detail: imageMetadata?.detail || 'auto',
}
```


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```bash
# Search for base64 source creation patterns
rg -n "type.*base64" --type ts -B5 -A5

# Look for ContentPart or image source definitions
rg -n "source.*value" packages/typescript/ai-openai/src --type ts -B3 -A3

# Find where this image_url is being used and how sources are constructed
fd -e ts packages/typescript/ai-openai/src | head -20
```

Repository: TanStack/ai

Length of output: 8948


🏁 Script executed:

```bash
# Find ContentPart type definition
rg -n "type.*data.*source" packages/typescript --type ts -B2 -A2

# Search for where sources are created with type 'data'
rg -n "source.*type.*data\|type.*data" packages/typescript --type ts -B3 -A3 | head -100

# Look for the ContentPart interface/type definition
rg -n "ContentPart\|ImageSource" packages/typescript --type ts | grep -E "(type|interface)" | head -20
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```bash
# Search more broadly for ContentPart definition
rg -n "ContentPart" packages/typescript --type ts | head -20

# Look for type definitions in core packages
find packages -name "*.ts" -type f | xargs grep -l "type ContentPart\|interface ContentPart" 2>/dev/null

# Search for how image sources with 'data' type are created
rg -n "source.*type.*:.*['\"]data['\"]" packages/typescript --type ts -B3 -A3 | head -80
```

Repository: TanStack/ai

Length of output: 1898


🏁 Script executed:

```bash
# Read the ContentPartSource definition
cat -n packages/typescript/ai/src/types.ts | sed -n '95,120p'

# Also read the full context around ContentPart definition
cat -n packages/typescript/ai/src/types.ts | sed -n '95,175p'

# Look at message-converters.ts
cat -n packages/typescript/ai/src/message-converters.ts | head -100
```

Repository: TanStack/ai

Length of output: 7311


🏁 Script executed:

```bash
# Check image metadata type to see if media_type is available
rg -n "ImageMetadata\|imageMetadata" packages/typescript/ai-openai/src/adapters/text.ts -B5 -A5 | head -80

# Look at the full image handling function
cat -n packages/typescript/ai-openai/src/adapters/text.ts | sed -n '550,590p'

# Check OpenAI package to see if there are any hints about data URI format
rg -n "data:image\|base64" packages/typescript/ai-openai/src --type ts -B2 -A2
```

Repository: TanStack/ai

Length of output: 1466


🏁 Script executed:

```bash
# Check what metadata types are available for images
rg -n "OpenAIImageMetadata\|imageMetadata" packages/typescript/ai-openai/src --type ts -B3 -A3

# Look for type definitions of image metadata in OpenAI
rg -n "ImageMetadata\|image.*metadata" packages/typescript/ai-openai/src/types.ts --type ts -B2 -A2

# Check if there's any type definition file
find packages/typescript/ai-openai/src -name "*types*" -o -name "*type*" | head -10
```

Repository: TanStack/ai

Length of output: 205


🏁 Script executed:

```bash
# Read the message types file
cat -n packages/typescript/ai-openai/src/message-types.ts | head -200

# Also search for any media type references
rg -n "media.*type\|mediaType" packages/typescript/ai-openai/src --type ts -B2 -A2
```

Repository: TanStack/ai

Length of output: 2551


🏁 Script executed:

```bash
# Search for test cases related to image handling
find packages/typescript/ai-openai -name "*.test.ts" -o -name "*.spec.ts" | xargs grep -l "image\|base64" 2>/dev/null

# Look for any usage examples in the codebase
rg -n "input_image" packages/typescript --type ts -B5 -A5

# Check if there are integration tests or examples
find . -name "*example*" -o -name "*test*" | grep -i openai | head -10
```

Repository: TanStack/ai

Length of output: 6555


🏁 Script executed:

```bash
# Read the test file to see how images are handled
cat packages/typescript/ai-openai/tests/openai-adapter.test.ts | head -300

# Also check for any base64 test data
rg -n "base64\|data:image" packages/typescript/ai-openai/tests --type ts -B3 -A3
```

Repository: TanStack/ai

Length of output: 3633


🏁 Script executed:

```bash
# Search for any live tests or examples with images
find packages/typescript/ai-openai/live-tests -name "*.ts" | xargs cat 2>/dev/null | head -200

# Look at what OpenAI Responses API expects for images
rg -n "ResponseInputContent\|input_image" packages/typescript/ai-openai/src --type ts -B2 -A5 | head -150

# Search for any conversion or handling logic that might give hints about the format
rg -n "image_url\|file_data\|base64" packages/typescript/ai-openai/src/adapters/text.ts --type ts -B3 -A3
```

Repository: TanStack/ai

Length of output: 5895


🏁 Script executed:

```bash
# Search for data URI construction or similar patterns
rg -n "data:image\|data:audio\|data:video" packages/typescript --type ts

# Check how other adapters handle base64 data
rg -n "type.*base64" packages/typescript/ai-anthropic/src/adapters/text.ts -B5 -A10

# Look for any media type metadata that might be passed
rg -n "mediaType\|media_type" packages/typescript --type ts -B2 -A2 | grep -E "(image|audio|video)" -A2 -B2
```

Repository: TanStack/ai

Length of output: 1293


🏁 Script executed:

```bash
# Check the OpenAI SDK import and what types it uses
rg -n "import.*OpenAI\|from.*openai" packages/typescript/ai-openai/src --type ts | head -10

# Look at the response types being used
rg -n "Responses\|ResponseInputContent" packages/typescript/ai-openai/src --type ts -B2 -A2 | head -80

# Check if there's any documentation or comments about image format
rg -n "image_url\|ResponseInputImage" packages/typescript/ai-openai/src --type ts -B5 -A5
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```bash
# Read the imports from openai-adapter.ts
head -50 packages/typescript/ai-openai/src/openai-adapter.ts

# Also check adapters/text.ts imports
head -50 packages/typescript/ai-openai/src/adapters/text.ts
```

Repository: TanStack/ai

Length of output: 2890


🏁 Script executed:

```bash
# Search for any OpenAI API documentation references or links
rg -n "platform.openai.com\|docs.*vision\|image.*format" packages/typescript/ai-openai/src --type ts -B2 -A2

# Check the actual OpenAI SDK package.json to understand version
cat packages/typescript/ai-openai/package.json | grep -A5 -B5 '"openai"'

# Look for any comments or documentation about the image format requirement
rg -n "image_url.*format\|base64.*data.*uri\|data.*uri.*image" packages/typescript --type ts -i
```

Repository: TanStack/ai

Length of output: 383


🏁 Script executed:

```bash
# Search for how image metadata is created/populated in the framework
rg -n "ImagePart\|image.*metadata\|mediaType" packages/typescript/ai/src --type ts -B3 -A3 | head -150

# Check if there's any way media type is determined
rg -n "image/\|media.*type\|jpeg\|png\|webp" packages/typescript/ai/src --type ts -B2 -A2
```

Repository: TanStack/ai

Length of output: 37


Base64 images need data URI format for OpenAI API.

The code passes `part.source.value` directly to `image_url`, but for `source.type === 'data'` this is raw base64 (per the ContentPartSource type definition). OpenAI's Responses API expects `image_url` to be a URL string or data URI like `data:image/jpeg;base64,...`, not raw base64. Without media type metadata available in `OpenAIImageMetadata`, construct a data URI with a sensible default format (e.g., `data:image/jpeg;base64,${part.source.value}`).
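A sketch of the shape of the fix, mirroring the hunk quoted above; the `image/jpeg` default is an assumption, since no media type is currently carried on the metadata:

```ts
// Wrap raw base64 in a data URI before handing it to the API;
// URL sources pass through unchanged.
const imageUrl =
  part.source.type === 'data'
    ? `data:image/jpeg;base64,${part.source.value}` // default media type is an assumption
    : part.source.value

return {
  type: 'input_image',
  image_url: imageUrl,
  detail: imageMetadata?.detail || 'auto',
}
```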

🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/text.ts around lines 566 to 571, the code returns raw base64 in image_url when part.source.type === 'data' but OpenAI expects a URL or a data URI; detect when source.type === 'data' and, if so, prepend a sensible default media type to form a data URI (e.g., "data:image/jpeg;base64," + part.source.value) unless imageMetadata contains an explicit media type to use; leave non-data sources unchanged and ensure the final returned object uses that constructed data URI for image_url.

@coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
examples/ts-react-chat/src/routes/api.tanchat.ts (1)

2-8: Remove top-level `test = ai(openaiImage(...))` and associated imports

This `const test = ai({ adapter: openaiImage(), ... })` is:

  • Unused in the route logic, and
  • For image adapters, side‑effectful: the `ai()` implementation immediately calls `adapter.generateImages`, so this will generate images at module load time for every server start / hot reload.

A very similar unused `test` block was flagged previously; this is effectively the same issue with a valid model instead of `'gpt-5'`.

You can safely drop it and the now‑unused imports (`ai`, `openaiImage`, `openaiSummarize`, `openaiText`, `z`), e.g.:

```diff
-import { ai, chat, maxIterations, toStreamResponse } from '@tanstack/ai'
-import {
-  openai,
-  openaiImage,
-  openaiSummarize,
-  openaiText,
-} from '@tanstack/ai-openai'
+import { chat, maxIterations, toStreamResponse } from '@tanstack/ai'
+import { openai } from '@tanstack/ai-openai'
@@
-import z from 'zod'
@@
-const test = ai({
-  adapter: openaiImage(),
-  model: 'gpt-image-1',
-  prompt:
-    'A cute baby sea otter wearing a beret and glasses, sitting at a small cafe table, sipping a cappuccino, with a croissant on a plate. The background shows a picturesque Parisian street with the Eiffel Tower in the distance. The scene is bathed in the warm, golden light of a late afternoon.',
-  numberOfImages: 2,
-  size: 'auto',
-  providerOptions: {
-    quality: 'high',
-  },
-})
```

That keeps the example route focused on the chat flow without unexpected image API calls.

Also applies to: 12-12, 21-31

🧹 Nitpick comments (9)
packages/typescript/ai-gemini/src/model-meta.ts (1)

223-253: Gemini image model metadata and exports look consistent

The new image model entries (GEMINI_2_5_FLASH_IMAGE, GEMINI_2_FLASH_IMAGE, IMAGEN_* variants) and the `GEMINI_IMAGE_MODELS` / `GeminiImageModels` exports line up correctly and give the image adapter a clean, typed model surface. Only nit: you might consider moving `GeminiImageModels` below `GEMINI_IMAGE_MODELS` to avoid the forward reference and mirror the pattern used for `GeminiModels`, but that's purely cosmetic.

Also applies to: 421-450, 517-607, 782-793

packages/typescript/ai/src/core/generate.ts (1)

24-281: Unified `ai()` routing and typing look correct

The `GenerateAdapter`/`AnyGenerateAdapter` unions, `GenerateOptionsFor`/`GenerateReturnType`, and the `ai()` overload cleanly discriminate on `adapter.kind` and produce the expected API shapes:

  • chat → stream or structured `Promise` based on `outputSchema`
  • embedding → `Promise<EmbeddingResult>`
  • summarize → streaming vs non‑streaming via `stream`
  • image → `Promise<ImageGenerationResult>`

The model/providerOptions extractor helpers (`ChatProviderOptionsForModel`, `ImageProviderOptionsForModel`, etc.) also give nice per‑model typing without leaking index‑signature maps. Aside from the summarize `providerOptions` wiring noted separately, this core looks well‑structured.

Also applies to: 365-400, 472-598, 664-677, 685-704

packages/typescript/ai-gemini/tests/image-adapter.test.ts (1)

115-160: Relax `generateImages` call expectation to tolerate extra config fields

The `generateImages` test asserts:

```ts
expect(mockGenerateImages).toHaveBeenCalledWith({
  model: 'imagen-3.0-generate-002',
  prompt: 'A cat wearing a hat',
  config: {
    numberOfImages: 1,
    aspectRatio: '1:1',
  },
})
```

But the adapter's `buildConfig` currently also includes `imageSize` (and may gain more fields over time). With strict `toHaveBeenCalledWith`, any additional keys on `config` will cause this test to fail even though the call is still correct.

Consider making this more robust, e.g.:

```ts
expect(mockGenerateImages).toHaveBeenCalledWith({
  model: 'imagen-3.0-generate-002',
  prompt: 'A cat wearing a hat',
  config: expect.objectContaining({
    numberOfImages: 1,
    aspectRatio: '1:1',
  }),
})
```

This keeps the important contract assertions while allowing harmless additions to the request config.

packages/typescript/ai-openai/tests/image-adapter.test.ts (1)

135-187: Make `generateImages` request expectations less brittle

The tests currently assert the full `images.generate` call shape:

```ts
expect(mockGenerate).toHaveBeenCalledWith({
  model: 'gpt-image-1',
  prompt: 'A cat wearing a hat',
  n: 1,
  size: '1024x1024',
  stream: false,
})
```

This will start failing as soon as `buildRequest` adds legitimate fields (e.g., `response_format`, quality/style options).

You can keep the important guarantees while future‑proofing the test by loosening it slightly:

```ts
expect(mockGenerate).toHaveBeenCalledWith(
  expect.objectContaining({
    model: 'gpt-image-1',
    prompt: 'A cat wearing a hat',
    n: 1,
    size: '1024x1024',
    stream: false,
  }),
)
```

Same idea applies to the second test that only checks ID generation; the call shape doesn’t need to be pinned exactly there either.

Also applies to: 189-218

packages/typescript/ai-gemini/src/adapters/image.ts (1)

90-121: Minor cleanup in config/result shaping

Two small nits you might consider:

  • In `buildConfig`, `imageSize: size` will serialize away when `size` is `undefined`, so it's harmless but could be omitted entirely when not set for slightly cleaner requests.
  • In `transformResponse`, you don't need to explicitly set `usage: undefined`; omitting the property keeps the result shape cleaner and avoids consumers checking for an explicitly `undefined` field.

Example tweak:

```diff
   private buildConfig(
     options: ImageGenerationOptions<GeminiImageProviderOptions>,
   ): GenerateImagesConfig {
     const { size, numberOfImages, providerOptions } = options
-    return {
-      numberOfImages: numberOfImages ?? 1,
-      imageSize: size,
-      aspectRatio: size ? sizeToAspectRatio(size) : undefined,
-      ...providerOptions,
-    }
+    return {
+      numberOfImages: numberOfImages ?? 1,
+      ...(size && { imageSize: size, aspectRatio: sizeToAspectRatio(size) }),
+      ...providerOptions,
+    }
   }

   private transformResponse(
     model: string,
     response: GenerateImagesResponse,
   ): ImageGenerationResult {
     const images: Array<GeneratedImage> = (response.generatedImages ?? []).map(
       (item) => ({
         b64Json: item.image?.imageBytes,
         revisedPrompt: item.enhancedPrompt,
       }),
     )
     return {
       id: generateId(this.name),
       model,
       images,
-      usage: undefined,
     }
   }
```
packages/typescript/ai-openai/src/adapters/image.ts (1)

59-92: Double‑check `size: 'auto'` semantics across models

Right now `generateImages` relies on `validateImageSize`, which short‑circuits for `size === 'auto'`, so `'auto'` is effectively allowed for any model. If DALL·E 2/3 do not support `'auto'` for `size`, these calls will only fail at the OpenAI API level instead of being caught by your validator.

A tighter approach (if the API requires it) would be to only treat `'auto'` as special for the GPT‑Image models and run validation (and/or throw) for `'auto'` on DALL·E:

```diff
 export function validateImageSize(
   model: string,
   size: string | undefined,
 ): void {
-  if (!size || size === 'auto') return
+  if (!size) return
+
+  const isGptImageModel =
+    model === 'gpt-image-1' || model === 'gpt-image-1-mini'
+  if (size === 'auto' && isGptImageModel) return
```

Please verify this against the current OpenAI image docs before changing behavior.

packages/typescript/ai-openai/src/image/image-provider-options.ts (2)

213-237: Clarify `validateImageSize` behavior for `'auto'` and per‑model sizes

The validator short‑circuits for `size === 'auto'`, which means `'auto'` is effectively accepted for all models even though only the GPT‑Image models include `'auto'` in their documented size sets. If DALL·E 2/3 don't support `'auto'`, this defeats the purpose of early validation.

Consider:

  • Restricting the `'auto'` fast‑path to GPT‑Image models only, and
  • Potentially using `OpenAIImageModelSizeByName` (or a typed helper) instead of a raw `Record<string, Array<string>>` to keep the table aligned with the type aliases.

This keeps runtime validation in sync with the declared capabilities.


264-297: Optional: enforce documented `background`/`output_format` constraint

The JSDoc for `background` says that when `'transparent'`, `output_format` must be `'png'` or `'webp'`, but `validateBackground` currently only guards that `background` is used with GPT‑Image models, not that the output format is compatible.

If you want runtime parity with the docs (especially for JS consumers or `any` usage), consider extending the validation input to include `output_format` and adding a simple check:

```diff
-interface ImageValidationOptions {
-  prompt: string
-  model: string
-  background?: 'transparent' | 'opaque' | 'auto' | null
-}
+interface ImageValidationOptions {
+  prompt: string
+  model: string
+  background?: 'transparent' | 'opaque' | 'auto' | null
+  output_format?: GptImageOutputFormat
+}

 export const validateBackground = (options: ImageValidationOptions) => {
   if (options.background) {
     const supportedModels = ['gpt-image-1', 'gpt-image-1-mini']
     if (!supportedModels.includes(options.model)) {
       throw new Error(
         `The model ${options.model} does not support background option.`,
       )
     }
+
+    if (
+      options.background === 'transparent' &&
+      options.output_format &&
+      options.output_format !== 'png' &&
+      options.output_format !== 'webp'
+    ) {
+      throw new Error(
+        `Transparent background requires output_format 'png' or 'webp', got '${options.output_format}'.`,
+      )
+    }
   }
 }
```
packages/typescript/ai/src/types.ts (1)

1044-1093: `ChatOptionsForChatAdapter` mirrors `ChatStreamOptionsForModel`; minor Omit nit

The `ChatOptionsForChatAdapter` helper matches the structure of `ChatStreamOptionsForModel`, constraining `providerOptions` by model and `messages` by model input modalities and metadata, which is exactly what you want for the new `ChatAdapter` abstraction.

One tiny nit: the `Omit<ChatOptions, 'model' | 'providerOptions' | 'responseFormat' | 'messages'>` still refers to `'responseFormat'`, while the current options use `output?: ResponseFormat<...>`. This is harmless (the extra key is ignored in `Omit`), but you could replace `'responseFormat'` with `'output'` for accuracy:

```diff
-    ? Omit<
-        ChatOptions,
-        'model' | 'providerOptions' | 'responseFormat' | 'messages'
-      > & {
+    ? Omit<ChatOptions, 'model' | 'providerOptions' | 'output' | 'messages'> & {
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 86fa305 and 5c9703c.

📒 Files selected for processing (18)
  • README.md (1 hunks)
  • docs/guides/image-generation.md (1 hunks)
  • examples/ts-react-chat/src/routes/api.tanchat.ts (2 hunks)
  • packages/typescript/ai-gemini/src/adapters/image.ts (1 hunks)
  • packages/typescript/ai-gemini/src/image/image-provider-options.ts (1 hunks)
  • packages/typescript/ai-gemini/src/index.ts (1 hunks)
  • packages/typescript/ai-gemini/src/model-meta.ts (10 hunks)
  • packages/typescript/ai-gemini/tests/image-adapter.test.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/image.ts (1 hunks)
  • packages/typescript/ai-openai/src/image/image-provider-options.ts (2 hunks)
  • packages/typescript/ai-openai/src/index.ts (1 hunks)
  • packages/typescript/ai-openai/src/model-meta.ts (5 hunks)
  • packages/typescript/ai-openai/tests/image-adapter.test.ts (1 hunks)
  • packages/typescript/ai/src/adapters/base-image-adapter.ts (1 hunks)
  • packages/typescript/ai/src/adapters/index.ts (1 hunks)
  • packages/typescript/ai/src/core/generate.ts (1 hunks)
  • packages/typescript/ai/src/index.ts (1 hunks)
  • packages/typescript/ai/src/types.ts (3 hunks)
✅ Files skipped from review due to trivial changes (1)
  • docs/guides/image-generation.md
🚧 Files skipped from review as they are similar to previous changes (2)
  • README.md
  • packages/typescript/ai/src/adapters/index.ts
🧰 Additional context used
🧬 Code graph analysis (5)
packages/typescript/ai-openai/tests/image-adapter.test.ts (3)
packages/typescript/ai-openai/src/adapters/image.ts (2)
  • createOpenaiImage (137-142)
  • OpenAIImageAdapter (42-117)
packages/typescript/ai-openai/src/index.ts (2)
  • createOpenaiImage (35-35)
  • OpenAIImageAdapter (34-34)
packages/typescript/ai-openai/src/image/image-provider-options.ts (3)
  • validateImageSize (213-237)
  • validateNumberOfImages (242-262)
  • validatePrompt (275-297)
packages/typescript/ai-gemini/tests/image-adapter.test.ts (3)
packages/typescript/ai-gemini/src/adapters/image.ts (2)
  • createGeminiImage (142-147)
  • GeminiImageAdapter (48-122)
packages/typescript/ai-gemini/src/index.ts (2)
  • createGeminiImage (39-39)
  • GeminiImageAdapter (38-38)
packages/typescript/ai-gemini/src/image/image-provider-options.ts (4)
  • sizeToAspectRatio (180-185)
  • validateImageSize (191-206)
  • validateNumberOfImages (212-226)
  • validatePrompt (231-239)
packages/typescript/ai-openai/src/image/image-provider-options.ts (1)
packages/typescript/ai-openai/src/index.ts (2)
  • OpenAIImageProviderOptions (40-40)
  • OpenAIImageModelProviderOptionsByName (41-41)
examples/ts-react-chat/src/routes/api.tanchat.ts (4)
packages/typescript/ai/src/core/generate.ts (1)
  • ai (375-466)
packages/typescript/ai/src/index.ts (1)
  • ai (5-5)
packages/typescript/ai-openai/src/adapters/image.ts (1)
  • openaiImage (167-172)
packages/typescript/ai-openai/src/index.ts (1)
  • openaiImage (36-36)
packages/typescript/ai-openai/src/adapters/image.ts (5)
packages/typescript/ai/src/adapters/index.ts (1)
  • BaseImageAdapter (29-29)
packages/typescript/ai-openai/src/model-meta.ts (1)
  • OPENAI_IMAGE_MODELS (1656-1661)
packages/typescript/ai-openai/src/image/image-provider-options.ts (6)
  • OpenAIImageProviderOptions (173-177)
  • OpenAIImageModelProviderOptionsByName (183-188)
  • OpenAIImageModelSizeByName (193-198)
  • validatePrompt (275-297)
  • validateImageSize (213-237)
  • validateNumberOfImages (242-262)
packages/typescript/ai/src/types.ts (3)
  • ImageGenerationOptions (746-759)
  • ImageGenerationResult (776-789)
  • GeneratedImage (764-771)
packages/typescript/ai/src/adapters/base-image-adapter.ts (1)
  • generateId (88-90)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Test
🔇 Additional comments (15)
packages/typescript/ai/src/adapters/base-image-adapter.ts (1)

1-91: `BaseImageAdapter` contract and generics look solid

The shared `ImageAdapter` interface and `BaseImageAdapter` implementation are well-shaped for the OpenAI/Gemini adapters (models, providerOptions, size maps, config, and `kind: 'image'`). The type-only `declare` fields are a good pattern for inference without runtime cost; no issues spotted.

packages/typescript/ai/src/index.ts (1)

4-13: Re-exporting `ai` and generate option types from the root is appropriate

The new export block cleanly exposes `ai`, `AnyAdapter`, `GenerateAdapter`, `GenerateOptions`, and the per-kind option aliases from `./core/generate` without disturbing existing exports. This is the right place for the new unified entrypoint.

packages/typescript/ai-openai/src/model-meta.ts (1)

1250-1290: Image model metadata and `OPENAI_IMAGE_MODELS` export are aligned

Defining `DALL_E_2` and `DALL_E_3` as proper `ModelMeta` entries and exporting `OPENAI_IMAGE_MODELS` with `GPT_IMAGE_1`, `GPT_IMAGE_1_MINI`, `dall-e-3`, and `dall-e-2` matches the adapter/tests expectations and cleanly separates image models from chat/embedding lists. No issues spotted here.

Also applies to: 1656-1661

packages/typescript/ai-gemini/src/adapters/image.ts (2)

31-60: Adapter generics and type integration look correct

`GeminiImageAdapter` correctly threads `GEMINI_IMAGE_MODELS`, `GeminiImageProviderOptions`, and the model→options/size maps into `BaseImageAdapter`, and the declared `_modelProviderOptionsByName` / `_modelSizeByName` fields give good type narrowing for consumers.


124-177: Factory helpers are consistent and ergonomic

`createGeminiImage` and `geminiImage` mirror the other adapters' factory patterns (explicit key vs env lookup) and correctly compose `GeminiImageConfig` by omitting `apiKey` from the external config surface.

packages/typescript/ai-openai/src/adapters/image.ts (3)

31-58: Adapter wiring and client construction look solid

`OpenAIImageAdapter` cleanly wires `OPENAI_IMAGE_MODELS`, the provider options types, and the `createOpenAIClient`-based client. Class shape matches the other tree‑shakeable adapters, which keeps the surface consistent.


94-116: Response mapping matches the OpenAI images response shape

Mapping `data` entries to `GeneratedImage` (b64, url, revisedPrompt) and normalizing `response.usage` into the generic `{ inputTokens, outputTokens, totalTokens }` shape looks correct and matches the rest of the SDK's conventions.


119-172: Factory functions follow the established OpenAI adapter pattern

`createOpenaiImage` and `openaiImage` mirror the text/embed/summarize factories (explicit key vs env lookup, `Omit<..., 'apiKey'>`), which should make migration straightforward for users.

packages/typescript/ai-openai/src/index.ts (1)

5-75: Export surface for tree‑shakeable adapters and legacy adapter looks well‑structured

The index cleanly groups the new text/embed/summarize/image adapters with their configs and provider options, and clearly marks the monolithic `OpenAI` export as deprecated. This should make the migration path obvious without breaking existing users.

packages/typescript/ai-gemini/src/index.ts (1)

5-91: Gemini index exports align well with the new adapter architecture

The file cleanly exposes the new text/embed/summarize/image adapters and model aliases, while keeping the monolithic `GeminiAdapter` under a clearly documented deprecated section. The symmetry with the OpenAI index is helpful for users switching providers.

packages/typescript/ai/src/types.ts (3)

738-789: Image generation core types match adapter usage

`ImageGenerationOptions`, `GeneratedImage`, and `ImageGenerationResult` line up with how the OpenAI and Gemini image adapters use them (model/prompt/numberOfImages/size/providerOptions in, `images` + optional `usage` out). Centralizing these here should make it easy to add other providers without duplicating shapes.


1016-1043: Adapter model‑extraction helpers are consistent with existing patterns

`ExtractModelsFromChatAdapter`, `ExtractModelsFromEmbeddingAdapter`, and `ExtractModelsFromSummarizeAdapter` (plus `ExtractModelsFromAnyAdapter`) mirror the existing `ExtractModelsFromAdapter` style and correctly pull `M[number]` out of each adapter's first generic. This will be handy for strongly‑typed higher‑level helpers that accept any of the new adapter types.


1092-1093: Re‑exporting adapter types from `./adapters` is useful for consumers

The final `export type { ChatAdapter, EmbeddingAdapter, SummarizeAdapter, AnyAdapter }` makes these new abstractions discoverable from the central `@tanstack/ai` types module, which is convenient for library users.

packages/typescript/ai-gemini/src/image/image-provider-options.ts (2)

29-153: Gemini image provider options and model maps look well‑designed

`GeminiImageProviderOptions` mirrors the `GenerateImagesConfig` surface (aspect ratio, safety, watermarking, compression, etc.), and the model→options/size maps (`GeminiImageModelProviderOptionsByName`, `GeminiImageModelSizeByName`) give you nice type‑level narrowing for adapters without over‑committing to specific models in this file.


191-239: Size, count, and prompt validators are reasonable defaults

`validateImageSize`'s use of `sizeToAspectRatio` with a clear error message, `validateNumberOfImages` clamping to 1–4, and `validatePrompt`'s trimmed non‑empty check give good early feedback before hitting the Gemini API. The comments about model‑specific max‑images provide context for the conservative `maxImages = 4` choice.

Comment on lines +155 to +185
```ts
/**
 * Valid sizes for Gemini Imagen models
 * Gemini uses aspect ratios, but we map common WIDTHxHEIGHT formats to aspect ratios
 * These are approximate mappings based on common image dimensions
 */
export const GEMINI_SIZE_TO_ASPECT_RATIO: Record<string, GeminiAspectRatio> = {
  // Square
  '1024x1024': '1:1',
  '512x512': '1:1',
  // Landscape
  '1024x768': '4:3',
  '1536x1024': '3:4', // Actually this is portrait, but matching common dimensions
  '1792x1024': '16:9',
  '1920x1080': '16:9',
  // Portrait
  '768x1024': '3:4',
  '1024x1536': '4:3', // Inverted
  '1024x1792': '9:16',
  '1080x1920': '9:16',
}

/**
 * Maps a WIDTHxHEIGHT size string to a Gemini aspect ratio
 * Returns undefined if the size cannot be mapped
 */
export function sizeToAspectRatio(
  size: string | undefined,
): GeminiAspectRatio | undefined {
  if (!size) return undefined
  return GEMINI_SIZE_TO_ASPECT_RATIO[size]
}
```


⚠️ Potential issue | 🟠 Major

Fix swapped aspect‑ratio mappings for 1536x1024 and 1024x1536

In `GEMINI_SIZE_TO_ASPECT_RATIO`, the mappings for `1536x1024` and `1024x1536` appear inverted:

  • 1536x1024 is a landscape resolution, but is currently mapped to `'3:4'` (portrait).
  • 1024x1536 is a portrait resolution, but is mapped to `'4:3'` (landscape).

This inversion will cause `sizeToAspectRatio` (and thus `validateImageSize`/`buildConfig`) to feed a mismatched aspect ratio to the API for those sizes.

A straightforward fix is to swap them:

```diff
   // Landscape
   '1024x768': '4:3',
-  '1536x1024': '3:4', // Actually this is portrait, but matching common dimensions
+  '1536x1024': '4:3',
   '1792x1024': '16:9',
   '1920x1080': '16:9',
   // Portrait
   '768x1024': '3:4',
-  '1024x1536': '4:3', // Inverted
+  '1024x1536': '3:4',
```

You may also want to adjust the comments to reflect the corrected orientation.

🤖 Prompt for AI Agents
In packages/typescript/ai-gemini/src/image/image-provider-options.ts around lines 155 to 185, the aspect-ratio mappings for '1536x1024' and '1024x1536' are inverted (landscape mapped as portrait and vice versa); swap the values so '1536x1024' maps to a landscape ratio ('4:3') and '1024x1536' maps to the portrait ratio ('3:4'), and update the inline comments to reflect the correct orientation for each entry.

Comment on lines 185 to 225
````ts
/** Options for summarize generation */
export interface GenerateSummarizeOptions<
  TAdapter extends SummarizeAdapter<ReadonlyArray<string>, object>,
  TModel extends SummarizeModels<TAdapter>,
  TStream extends boolean = false,
> extends GenerateBaseOptions<TAdapter & { kind: 'summarize' }, TModel> {
  text: string
  maxLength?: number
  style?: 'bullet-points' | 'paragraph' | 'concise'
  focus?: Array<string>
  providerOptions?: SummarizeProviderOptions<TAdapter>
  /**
   * Whether to stream the summarization result.
   * When true, returns an AsyncIterable<StreamChunk> for streaming output.
   * When false or not provided, returns a Promise<SummarizationResult>.
   *
   * @default false
   *
   * @example
   * ```ts
   * // Non-streaming (default)
   * const result = await ai({
   *   adapter: summarizeAdapter,
   *   model: 'summarize-v1',
   *   text: 'Long text to summarize...'
   * })
   * console.log(result.summary)
   *
   * // Streaming
   * for await (const chunk of ai({
   *   adapter: summarizeAdapter,
   *   model: 'summarize-v1',
   *   text: 'Long text to summarize...',
   *   stream: true
   * })) {
   *   console.log(chunk)
   * }
   * ```
   */
  stream?: TStream
}
````


⚠️ Potential issue | 🟠 Major

Summarize `providerOptions` are defined but never forwarded to adapters

`GenerateSummarizeOptions` exposes `providerOptions?: SummarizeProviderOptions<TAdapter>`, but both `generateSummary` and `generateSummaryStream` drop it when building `SummarizationOptions`. That means any provider‑specific summarize options passed to `ai({...})` are ignored.

You likely want to plumb these through, e.g.:

```diff
 async function generateSummary(
   options: GenerateSummarizeOptions<
     SummarizeAdapter<ReadonlyArray<string>, object>,
     string,
     boolean
   >,
 ): Promise<SummarizationResult> {
-  const { adapter, model, text, maxLength, style, focus } = options
+  const { adapter, model, text, maxLength, style, focus, providerOptions } =
+    options

   const summarizeOptions: SummarizationOptions = {
     model,
     text,
     maxLength,
     style,
     focus,
+    providerOptions,
   }

   return adapter.summarize(summarizeOptions)
 }

 async function* generateSummaryStream(
   options: GenerateSummarizeOptions<
     SummarizeAdapter<ReadonlyArray<string>, object>,
     string,
     true
   >,
 ): AsyncIterable<StreamChunk> {
-  const { adapter, model, text, maxLength, style, focus } = options
+  const { adapter, model, text, maxLength, style, focus, providerOptions } =
+    options

   const summarizeOptions: SummarizationOptions = {
     model,
     text,
     maxLength,
     style,
     focus,
+    providerOptions,
   }
```

Without this, summarize adapters can't receive their configured provider options through the new `ai()` entrypoint.

Also applies to: 599-617, 623-641

🤖 Prompt for AI Agents
In packages/typescript/ai/src/core/generate.ts around lines 185 to 225 (and also apply the same fix at the other places noted: ~599-617 and ~623-641), the GenerateSummarizeOptions includes providerOptions but those options are not forwarded into the SummarizationOptions passed to adapters, so provider-specific settings are dropped; update the code that builds the SummarizationOptions in both generateSummary and generateSummaryStream to include the incoming options.providerOptions (typed as SummarizeProviderOptions<TAdapter>) and pass it through to the adapter calls, preserving the generic types and ensuring the property name matches the adapter API so summarize adapters receive their providerOptions.

@coderabbitai bot left a comment

Actionable comments posted: 9

♻️ Duplicate comments (11)
packages/typescript/ai-gemini/src/adapters/text.ts (3)

325-332: Incorrect token count mapping; use `candidatesTokenCount` instead of `thoughtsTokenCount`.

This issue was flagged in a previous review and remains unresolved. `thoughtsTokenCount` represents internal reasoning tokens (chain-of-thought), not completion tokens. Use `candidatesTokenCount` for actual generated response tokens.

Apply this diff:

```diff
           usage: chunk.usageMetadata
             ? {
                 promptTokens: chunk.usageMetadata.promptTokenCount ?? 0,
-                completionTokens: chunk.usageMetadata.thoughtsTokenCount ?? 0,
+                completionTokens: chunk.usageMetadata.candidatesTokenCount ?? 0,
                 totalTokens: chunk.usageMetadata.totalTokenCount ?? 0,
               }
             : undefined,
```

341-365: Incorrect default MIME type for non-image content.

This issue was flagged in a previous review and remains unresolved. The fallback `'image/jpeg'` is applied to audio, video, and document types, which will cause processing failures for non-image content.

Apply type-appropriate defaults for each media type:

```diff
       case 'image':
+      case 'audio':
+      case 'video':
+      case 'document': {
-      case 'audio':
-      case 'video':
-      case 'document': {
-        const metadata = part.metadata as
-          | GeminiDocumentMetadata
-          | GeminiImageMetadata
-          | GeminiVideoMetadata
-          | GeminiAudioMetadata
-          | undefined
+        let defaultMimeType: string
+        let metadata:
+          | GeminiDocumentMetadata
+          | GeminiImageMetadata
+          | GeminiVideoMetadata
+          | GeminiAudioMetadata
+          | undefined
+
+        switch (part.type) {
+          case 'image':
+            defaultMimeType = 'image/jpeg'
+            metadata = part.metadata as GeminiImageMetadata | undefined
+            break
+          case 'audio':
+            defaultMimeType = 'audio/mp3'
+            metadata = part.metadata as GeminiAudioMetadata | undefined
+            break
+          case 'video':
+            defaultMimeType = 'video/mp4'
+            metadata = part.metadata as GeminiVideoMetadata | undefined
+            break
+          case 'document':
+            defaultMimeType = 'application/pdf'
+            metadata = part.metadata as GeminiDocumentMetadata | undefined
+            break
+        }
+
         if (part.source.type === 'data') {
           return {
             inlineData: {
               data: part.source.value,
-              mimeType: metadata?.mimeType ?? 'image/jpeg',
+              mimeType: metadata?.mimeType ?? defaultMimeType,
             },
           }
         } else {
           return {
             fileData: {
               fileUri: part.source.value,
-              mimeType: metadata?.mimeType ?? 'image/jpeg',
+              mimeType: metadata?.mimeType ?? defaultMimeType,
             },
           }
         }
       }
```

417-426: Use function name instead of toolCallId for functionResponse.name.

This issue was flagged in a previous review and remains unresolved. Per the Gemini API specification, functionResponse.name must be the function/tool identifier (the declaration name), not a call ID.

The ModelMessage type for tool messages needs to include the function name. Assuming the message structure includes it (e.g., as msg.toolName or similar), apply this diff:

      if (msg.role === 'tool' && msg.toolCallId) {
        parts.push({
          functionResponse: {
-           name: msg.toolCallId,
+           name: msg.toolName, // or the appropriate property containing the function name
            response: {
              content: msg.content || '',
            },
          },
        })
      }

If the tool message doesn't currently include the function name, the message type definition must be updated to include it.
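A hedged sketch of what that extension could look like; every field besides toolName mirrors the review discussion above, and the exact ModelMessage union shape is an assumption:

// Hypothetical tool-message variant carrying the declaration name
type ToolModelMessage = {
  role: 'tool'
  toolCallId: string
  toolName: string // what Gemini's functionResponse.name actually expects
  content: string
}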

packages/typescript/ai/src/core/generate.ts (1)

542-560: Summarize providerOptions not forwarded - previously flagged.

This issue was already identified in a previous review. The providerOptions from GenerateSummarizeOptions is not being passed through to SummarizationOptions in both generateSummary and generateSummaryStream.

packages/typescript/ai-anthropic/src/adapters/text.ts (4)

109-121: Error code may produce string "undefined" - previously flagged.

This issue was already identified in a previous review. When both err.code and err.status are undefined, String(err.status) produces "undefined".


230-239: Spread of validProviderOptions can override systemPrompts - previously flagged.

This issue was already identified in a previous review. The 'system' key is in validKeys, so spreading validProviderOptions after setting system from systemPrompts can override it.


473-491: content_block_stop handler doesn't use event.index - previously flagged.

This issue was already identified in a previous review. The handler checks toolCallsMap.get(currentToolIndex) without verifying the stopped block is actually a tool_use block using event.index.


554-567: Duplicate error code issue - previously flagged.

This issue was already identified in a previous review. Same pattern as line 118 where err.code || String(err.status) can produce "undefined".

packages/typescript/ai-ollama/src/adapters/text.ts (1)

190-213: OllamaToolCall lacks id property - previously flagged.

This issue was already identified in a previous review. Ollama v0.6.3's ToolCall interface doesn't include id or function.index properties. The type assertion masks this, but actualToolCall.id and actualToolCall.function.index will be undefined at runtime.
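One possible workaround until the upstream types gain these fields, sketched with a synthesized identifier; the chunk shape here is a simplified stand-in, not the adapter's real one:

// Sketch: synthesize ids for Ollama tool calls, which carry no id of their own
let ollamaToolCallIndex = 0

function toToolCallChunkSketch(call: {
  function: { name: string; arguments: Record<string, unknown> }
}) {
  const index = ollamaToolCallIndex++
  return {
    id: `ollama-call-${Date.now()}-${index}`, // no upstream id, so generate one
    name: call.function.name,
    arguments: JSON.stringify(call.function.arguments),
    index, // no upstream function.index either; use a local counter
  }
}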

packages/typescript/ai-openai/src/adapters/text.ts (2)

210-210: Unused eventTypeCounts variable.

The eventTypeCounts map is declared but never populated. It's referenced in the error logging at line 467 but will always be empty. Either implement the counting logic or remove this variable and its reference.
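If the counting is kept, a sketch of what populating the map could look like; the event type and loop here are simplified stand-ins for the adapter's real stream handling:

// Sketch: tally event types as they stream so the error log has real data
async function countEventTypes(
  events: AsyncIterable<{ type: string }>,
): Promise<Map<string, number>> {
  const eventTypeCounts = new Map<string, number>()
  for await (const event of events) {
    eventTypeCounts.set(event.type, (eventTypeCounts.get(event.type) ?? 0) + 1)
    // ...existing per-event handling would go here...
  }
  return eventTypeCounts
}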


647-652: Base64 images still need data URI format.

The comment at line 647 states "For base64 data, construct a data URI" but the code returns part.source.value directly without the data URI prefix. OpenAI's API expects image_url to be a data URI like data:image/jpeg;base64,... for base64 data.

       // For base64 data, construct a data URI
+      const mediaType = part.metadata?.mediaType || 'image/jpeg'
       return {
         type: 'input_image',
-        image_url: part.source.value,
+        image_url: `data:${mediaType};base64,${part.source.value}`,
         detail: imageMetadata?.detail || 'auto',
       }
🧹 Nitpick comments (5)
packages/typescript/ai/src/core/generate.ts (2)

283-300: Missing newlines before type/comment blocks.

Lines 283 and 300 are missing newlines before the comment/type definition, which reduces readability. The never type ends directly before the next type definition without separation.

-        : never /** Infer the return type based on adapter kind, schema, and stream */
-type GenerateReturnType<
+        : never
+
+/** Infer the return type based on adapter kind, schema, and stream */
+type GenerateReturnType<

And similarly at line 300:

-        : never /**
+        : never
+
+/**

481-482: Avoid as any cast - loses type safety.

The chat(options as any) cast bypasses TypeScript's type checking entirely. Consider defining a more specific type or creating a proper mapping function.

 async function* generateChat(
   options: GenerateChatOptions<
     ChatAdapter<ReadonlyArray<string>, object, any, any, any>,
     string,
     z.ZodType | undefined
   >,
 ): AsyncIterable<StreamChunk> {
-  yield* chat(options as any)
+  yield* options.adapter.chatStream({
+    model: options.model,
+    messages: options.messages,
+    systemPrompts: options.systemPrompts,
+    tools: options.tools,
+    options: options.options,
+    providerOptions: options.providerOptions,
+    abortController: options.abortController,
+    agentLoopStrategy: options.agentLoopStrategy,
+    conversationId: options.conversationId,
+  })
 }
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)

159-161: Consider using crypto.randomUUID() for more robust ID generation.

Math.random() has limited entropy and can produce collisions in high-volume scenarios. For a base adapter that may be used across many requests, crypto.randomUUID() provides better uniqueness guarantees.

   protected generateId(): string {
-    return `${this.name}-${Date.now()}-${Math.random().toString(36).substring(7)}`
+    return `${this.name}-${Date.now()}-${crypto.randomUUID().slice(0, 8)}`
   }

Note: If browser compatibility without crypto API is needed, the current implementation is acceptable.

packages/typescript/ai-ollama/src/adapters/text.ts (1)

300-307: Redundant conditional branches for image source handling.

Both branches of the conditional do the same thing - push part.source.value to images. This can be simplified.

           } else if (part.type === 'image') {
-            if (part.source.type === 'data') {
-              images.push(part.source.value)
-            } else {
-              images.push(part.source.value)
-            }
+            images.push(part.source.value)
           }
packages/typescript/ai-openai/src/adapters/text.ts (1)

104-111: Consider reducing verbose error logging in production.

The error logging outputs full stack traces and error objects to console.error. While useful for debugging, this could be noisy in production environments. Consider using a logging abstraction with configurable levels or reducing the verbosity.

     } catch (error: unknown) {
       const err = error as Error
-      console.error('>>> chatStream: Fatal error during response creation <<<')
-      console.error('>>> Error message:', err.message)
-      console.error('>>> Error stack:', err.stack)
-      console.error('>>> Full error:', err)
+      // Log error details for debugging
+      console.error('[OpenAI Adapter] chatStream error:', err.message)
       throw error
     }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5c9703c and 14c4297.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (11)
  • packages/typescript/ai-anthropic/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-gemini/src/adapters/image.ts (1 hunks)
  • packages/typescript/ai-gemini/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-ollama/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai/src/adapters/base-chat-adapter.ts (1 hunks)
  • packages/typescript/ai/src/adapters/index.ts (1 hunks)
  • packages/typescript/ai/src/core/generate.ts (1 hunks)
  • packages/typescript/ai/src/types.ts (4 hunks)
  • packages/typescript/ai/tests/generate-types.test-d.ts (1 hunks)
  • packages/typescript/ai/tests/generate.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/typescript/ai/tests/generate.test.ts
🧰 Additional context used
🧬 Code graph analysis (2)
packages/typescript/ai-gemini/src/adapters/text.ts (2)
packages/typescript/ai-gemini/src/utils/client.ts (4)
  • GeminiClientConfig (3-5)
  • createGeminiClient (10-14)
  • generateId (41-43)
  • getGeminiApiKeyFromEnv (20-36)
packages/typescript/ai-gemini/src/model-meta.ts (3)
  • GEMINI_MODELS (771-780)
  • GeminiChatModelProviderOptionsByName (815-864)
  • GeminiModelInputModalitiesByName (879-891)
packages/typescript/ai/src/core/generate.ts (6)
packages/typescript/ai/src/index.ts (5)
  • GenerateAdapter (7-7)
  • ai (5-5)
  • chatOptions (39-39)
  • chat (1-1)
  • convertZodToJsonSchema (26-26)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • ChatAdapter (50-102)
packages/typescript/ai/src/types.ts (11)
  • ChatAdapter (1100-1100)
  • EmbeddingAdapter (1100-1100)
  • SummarizeAdapter (1100-1100)
  • ModelMessage (226-237)
  • ChatOptions (554-595)
  • StreamChunk (685-693)
  • EmbeddingResult (735-743)
  • SummarizationResult (718-727)
  • ImageGenerationResult (783-796)
  • EmbeddingOptions (729-733)
  • SummarizationOptions (710-716)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • EmbeddingAdapter (22-41)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • SummarizeAdapter (22-41)
packages/typescript/ai/src/adapters/base-image-adapter.ts (1)
  • ImageAdapter (25-52)
🔇 Additional comments (21)
packages/typescript/ai-gemini/src/adapters/image.ts (5)

1-29: LGTM!

Imports are well-organized with proper separation of type imports using import type. Dependencies are correctly structured.


31-66: Clean adapter structure.

The class definition follows the established adapter pattern correctly. The empty GeminiImageConfig interface extending GeminiClientConfig provides a good extension point for future Gemini-specific configuration. The use of declare for type-narrowing properties is an appropriate TypeScript pattern.


68-88: Well-structured method with proper validation flow.

The validate-build-call-transform pattern is clean and maintainable. Input validation before the API call correctly prevents unnecessary network requests for invalid inputs.


141-176: LGTM!

The factory functions provide a clean API with both explicit key and environment-based instantiation patterns. The JSDoc comments with examples are helpful for consumers.


103-120: No changes needed. The GeneratedImage interface already defines b64Json as an optional property (b64Json?: string), which correctly allows the potentially undefined value from item.image?.imageBytes. The type is already type-safe.

packages/typescript/ai-gemini/src/adapters/text.ts (1)

435-454: Verify config spread order allows intentional overrides.

The providerOpts.generationConfig is spread after individual temperature, topP, and maxOutputTokens settings, allowing provider options to override common options. This may be intentional, but ensure this precedence aligns with the expected behavior.

If common options should take precedence, apply this diff:

      config: {
-       ...providerOpts,
-       temperature: options.options?.temperature,
-       topP: options.options?.topP,
-       maxOutputTokens: options.options?.maxTokens,
-       systemInstruction: options.systemPrompts?.join('\n'),
        ...((providerOpts as Record<string, unknown>).generationConfig as
          | Record<string, unknown>
          | undefined),
+       ...providerOpts,
+       temperature: options.options?.temperature,
+       topP: options.options?.topP,
+       maxOutputTokens: options.options?.maxTokens,
+       systemInstruction: options.systemPrompts?.join('\n'),
        tools: convertToolsToProviderFormat(options.tools),
      },
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)

1-102: Well-structured adapter interface with comprehensive generics.

The ChatAdapter interface provides excellent type-safety with its five generic parameters covering models, provider options, model-specific options, input modalities, and message metadata. The use of declare for type-only properties is a clean pattern for inference without runtime overhead.

packages/typescript/ai/src/adapters/index.ts (1)

1-44: Clean barrel export structure.

The index file provides a well-organized export surface with clear section comments. The AdapterKind and AnyAdapter union types offer convenient aggregations for consumers.

packages/typescript/ai-ollama/src/adapters/text.ts (1)

105-127: Solid adapter initialization with flexible client injection.

The constructor accepts either a host string or a pre-configured Ollama client, which is good for testing and flexibility. The default model fallback to 'llama3' is reasonable.

packages/typescript/ai/tests/generate-types.test-d.ts (4)

48-117: Well-structured mock adapters for type testing.

The mock adapter implementations correctly extend the base adapters with strict generics and provide minimal implementations sufficient for type-level testing. The empty async generator for chatStream is acceptable since vitest's type tests (test-d.ts files) only perform type checking without runtime execution.


119-334: Comprehensive type inference test coverage.

The tests thoroughly validate type inference across all adapter categories, including model validation, strict provider options enforcement, and cross-adapter option restrictions. The use of @ts-expect-error annotations correctly verifies that TypeScript rejects invalid configurations.


336-340: Acceptable workaround for type-test module imports.

The require('zod') pattern with the typeof import('zod') cast is a valid workaround for type-level tests to avoid module hoisting issues while maintaining proper type inference. The ESLint disable comment appropriately acknowledges this deviation.


483-541: Complete streaming behavior type coverage.

The tests correctly validate that the return type changes based on the stream option for summarize adapters, and that the stream option is properly rejected for chat and embedding adapters.

packages/typescript/ai-openai/src/adapters/text.ts (5)

53-74: Well-structured adapter class definition.

The adapter properly extends BaseChatAdapter with comprehensive generic parameters for model constraints, provider options, and metadata typing. The constructor correctly delegates client creation.


118-168: Solid structured output implementation.

The method correctly uses OpenAI's native JSON Schema response format with strict: true for reliable structured output. Error handling appropriately truncates long content in error messages for readability.


256-288: Good handling of response lifecycle events.

The code properly handles response created/incomplete/failed events, resets streaming state appropriately, and yields error chunks when the response contains errors or incomplete details.


461-481: Error chunk properly yielded after stream failure.

The catch block correctly yields an error chunk to notify consumers of stream failures. Note: The eventTypeCounts reference at line 467 is part of the unused variable issue noted earlier.


710-757: Well-documented factory functions.

The factory functions provide clear APIs for both explicit API key usage and environment variable detection. The JSDoc documentation with examples is helpful for developers.

packages/typescript/ai/src/types.ts (3)

1030-1097: LGTM! Tree-shakeable adapter type utilities added.

The new type utilities (ExtractModelsFromChatAdapter, ExtractModelsFromEmbeddingAdapter, ExtractModelsFromSummarizeAdapter, ExtractModelsFromAnyAdapter, and ChatOptionsForChatAdapter) provide proper type inference for the new tree-shakeable adapter architecture.

Note: There's intentional duplication with existing types like ChatStreamOptionsForModel (lines 947-997), which makes sense for migration purposes. Consider adding a comment explaining this is part of the adapter architecture evolution.


1100-1100: LGTM! Adapter types re-exported for public API.

Re-exporting the adapter types from the ./adapters module makes them available as part of the public API surface, which aligns with the tree-shakeable architecture goals mentioned in the PR objectives.


4-9: All four adapter types (AnyAdapter, ChatAdapter, EmbeddingAdapter, SummarizeAdapter) are properly exported from the ./adapters module. The import statement is correct and valid.

Comment on lines 201 to 202
          const toolCallId =
            functionCall.name || `call_${Date.now()}_${nextToolIndex}`


⚠️ Potential issue | 🔴 Critical

Tool call ID collision: function name is not unique across multiple calls.

Using functionCall.name directly as the toolCallId causes collisions when the same function is called multiple times in one stream. Each tool call must have a unique identifier.

Apply this diff to always generate unique IDs:

            const toolCallId =
-             functionCall.name || `call_${Date.now()}_${nextToolIndex}`
+             `call_${Date.now()}_${nextToolIndex}`

Make the same change at line 272 for the UNEXPECTED_TOOL_CALL branch:

                const toolCallId =
-                 functionCall.name || `call_${Date.now()}_${nextToolIndex}`
+                 `call_${Date.now()}_${nextToolIndex}`

Also applies to: 271-272

🤖 Prompt for AI Agents
In packages/typescript/ai-gemini/src/adapters/text.ts around lines 201-202 (and also update the UNEXPECTED_TOOL_CALL branch at lines 271-272), the current assignment const toolCallId = functionCall.name || `call_${Date.now()}_${nextToolIndex}` can collide because functionCall.name is reused; change it to always produce a unique id by appending a timestamp and index to the name (e.g. use the name if present but concat `_${Date.now()}_${nextToolIndex}`, or if no name use the existing `call_` prefix), and apply the exact same unique-id construction at the UNEXPECTED_TOOL_CALL location so every tool call id is guaranteed unique.

Comment on lines 250 to 261
        } else if (chunk.data) {
          accumulatedContent += chunk.data
          yield {
            type: 'content',
            id: generateId(this.name),
            model,
            timestamp,
            delta: chunk.data,
            content: accumulatedContent,
            role: 'assistant',
          }
        }


🛠️ Refactor suggestion | 🟠 Major

Remove dead code handling non-existent chunk.data property.

The chunk.data property does not exist on GenerateContentResponse in the @google/genai types. This branch will never execute.

Apply this diff to remove the dead code:

        }
-     } else if (chunk.data) {
-       accumulatedContent += chunk.data
-       yield {
-         type: 'content',
-         id: generateId(this.name),
-         model,
-         timestamp,
-         delta: chunk.data,
-         content: accumulatedContent,
-         role: 'assistant',
-       }
      }

      if (chunk.candidates?.[0]?.finishReason) {
🤖 Prompt for AI Agents
In packages/typescript/ai-gemini/src/adapters/text.ts around lines 250 to 261, there's a dead code branch checking for chunk.data which does not exist on GenerateContentResponse; remove that entire else-if block (the branch that checks chunk.data and yields content) so the function only handles valid chunk shapes returned by @google/genai, and ensure surrounding formatting/flow remains correct after deletion.

Comment on lines 393 to 406
          let parsedArgs: Record<string, unknown> = {}
          try {
            parsedArgs = toolCall.function.arguments
              ? (JSON.parse(toolCall.function.arguments) as Record<
                  string,
                  unknown
                >)
              : {}
          } catch {
            parsedArgs = toolCall.function.arguments as unknown as Record<
              string,
              unknown
            >
          }


⚠️ Potential issue | 🟠 Major

Unsafe type casting after failed JSON parse.

When JSON parsing fails, the code casts toolCall.function.arguments to Record<string, unknown>, but if arguments is a string (which it likely is if parsing failed), this cast is incorrect and could cause runtime errors when Gemini processes the malformed args object.

Consider logging a warning or throwing an error instead:

          let parsedArgs: Record<string, unknown> = {}
          try {
            parsedArgs = toolCall.function.arguments
              ? (JSON.parse(toolCall.function.arguments) as Record<
                  string,
                  unknown
                >)
              : {}
          } catch {
-           parsedArgs = toolCall.function.arguments as unknown as Record<
-             string,
-             unknown
-           >
+           console.warn(
+             `Failed to parse tool call arguments for ${toolCall.function.name}:`,
+             toolCall.function.arguments,
+           )
+           // Skip this tool call or use empty args
+           parsedArgs = {}
          }
🤖 Prompt for AI Agents
In packages/typescript/ai-gemini/src/adapters/text.ts around lines 393 to 406, the catch block unsafely casts toolCall.function.arguments to Record<string, unknown> after a JSON.parse failure (likely leaving a string) which can break downstream code; change the catch to handle the parse failure safely by: detect the runtime type of toolCall.function.arguments, if it's a string log a warning (or record a structured error) and either leave parsedArgs as an empty object or attempt a safer recovery (e.g., wrap the raw string under a named field like { raw: "<original-string>" }); alternatively throw a descriptive error if malformed args must be fatal; ensure the chosen behavior is logged and does not perform an invalid type cast.

Comment on lines 355 to 374
  private mapCommonOptionsToOllama(options: ChatOptions): ChatRequest {
    const model = options.model || this.defaultModel
    const providerOptions = options.providerOptions as
      | OllamaTextProviderOptions
      | undefined

    const ollamaOptions = {
      temperature: options.options?.temperature,
      top_p: options.options?.topP,
      num_predict: options.options?.maxTokens,
      ...providerOptions,
    }

    return {
      model,
      options: ollamaOptions,
      messages: this.formatMessages(options.messages),
      tools: this.convertToolsToOllamaFormat(options.tools),
    }
  }


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check Ollama Message type for system role
rg -n "role.*system" --type=ts packages/typescript/ai-ollama/

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check the full text adapter implementation and related types
fd "text.ts" packages/typescript/ai-ollama/src/adapters/ | head -20

Repository: TanStack/ai

Length of output: 105


🏁 Script executed:

#!/bin/bash
# Search for systemPrompts usage in adapters to compare with other adapters
rg -n "systemPrompts" --type=ts packages/typescript/ai-*/src/adapters/

Repository: TanStack/ai

Length of output: 388


🏁 Script executed:

#!/bin/bash
# Check the ChatRequest and ChatOptions interfaces
rg -n "interface ChatRequest|type ChatRequest|interface ChatOptions|type ChatOptions" --type=ts packages/typescript/ai-ollama/

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Search for Ollama message and chat type definitions
rg -n "type.*Message|interface.*Message" --type=ts packages/typescript/ai-ollama/src/

Repository: TanStack/ai

Length of output: 159


🏁 Script executed:

#!/bin/bash
# Check the full text.ts file to see how messages are formatted and what Ollama expects
wc -l packages/typescript/ai-ollama/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 109


🏁 Script executed:

#!/bin/bash
# Search for formatMessages implementation
rg -n "formatMessages" --type=ts packages/typescript/ai-ollama/src/adapters/text.ts -A 10

Repository: TanStack/ai

Length of output: 819


🏁 Script executed:

#!/bin/bash
# Check Ollama's ChatRequest and related types
rg -n "ChatRequest|OllamaMessage" --type=ts packages/typescript/ai-ollama/src/

Repository: TanStack/ai

Length of output: 633


🏁 Script executed:

#!/bin/bash
# Check Message type definition
rg -n "export.*Message|type Message" --type=ts packages/typescript/ai-ollama/src/message-types.ts -A 5

Repository: TanStack/ai

Length of output: 247


🏁 Script executed:

#!/bin/bash
# Get the Message type/interface definition more thoroughly
sed -n '1,100p' packages/typescript/ai-ollama/src/message-types.ts

Repository: TanStack/ai

Length of output: 1611


🏁 Script executed:

#!/bin/bash
# Check ChatRequest type definition - look in message-types.ts
rg -n "ChatRequest" --type=ts packages/typescript/ai-ollama/src/message-types.ts -B 2 -A 10

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check the ollama-adapter.ts mapCommonOptionsToOllama for comparison
sed -n '456,490p' packages/typescript/ai-ollama/src/ollama-adapter.ts

Repository: TanStack/ai

Length of output: 1039


🏁 Script executed:

#!/bin/bash
# Check imports in text.ts to see where ChatRequest and Message come from
head -30 packages/typescript/ai-ollama/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 756


🏁 Script executed:

#!/bin/bash
# Search for ChatRequest type definition across all ai packages
rg -n "type ChatRequest|interface ChatRequest" --type=ts packages/typescript/

Repository: TanStack/ai

Length of output: 136


🏁 Script executed:

#!/bin/bash
# Search for Message type definition (likely in ai-core or ollama)
rg -n "export.*type Message|export.*interface Message" --type=ts packages/typescript/ai-ollama/src/

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Search for ollama package information - check if it's in package.json and what version
rg -n "ollama" packages/typescript/ai-ollama/package.json

Repository: TanStack/ai

Length of output: 185


🏁 Script executed:

#!/bin/bash
# Try to find if there are type definitions or documentation about ollama Message type
find packages/typescript/ai-ollama -name "*.d.ts" -o -name "*.ts" | xargs grep -l "Message.*role" 2>/dev/null | head -5

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check how other adapters (like openai, anthropic) handle system prompts vs what Ollama does
rg -n "system" --type=ts packages/typescript/ai-anthropic/src/adapters/text.ts -B 2 -A 2 | head -40

Repository: TanStack/ai

Length of output: 379


🌐 Web query:

Ollama API system message role support chat request

💡 Result:

Yes — Ollama's chat API accepts a "system" role in the messages array. The message object’s role can be "system", "user", "assistant" (and "tool" for tool-enabled flows). See the API docs for the POST /api/chat endpoint. [1][2]

Sources:

  • Ollama docs — Generate a chat message (POST /api/chat). [1]
  • Ollama API overview (messages/role description). [2]

🏁 Script executed:

#!/bin/bash
# Check if there are other adapters that prepend system messages to the messages array
rg -n "role.*system|'system'" --type=ts packages/typescript/ai-*/src/adapters/text.ts -B 3 -A 3

Repository: TanStack/ai

Length of output: 611


🏁 Script executed:

#!/bin/bash
# Look for Ollama types more carefully - check if it's defined elsewhere or in the ollama package
rg -n "system.*role|role.*system" --type=ts packages/typescript/ai-ollama/src/ -i

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check the exact structure being returned in mapCommonOptionsToOllama - the full function including return
sed -n '355,374p' packages/typescript/ai-ollama/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 662


🏁 Script executed:

#!/bin/bash
# Search for any existing system prompt handling in the Ollama adapter
rg -n "systemPrompts" --type=ts packages/typescript/ai-ollama/src/

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Look at the full formatMessages implementation to understand message structure
sed -n '291,320p' packages/typescript/ai-ollama/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 1046


🏁 Script executed:

#!/bin/bash
# Search for how messages are constructed elsewhere in the codebase
rg -n "role:" --type=ts packages/typescript/ai-ollama/src/ -B 1 -A 1

Repository: TanStack/ai

Length of output: 1075


🏁 Script executed:

#!/bin/bash
# Check the ChatResponse or message examples in the adapter
rg -n "role\|content" --type=ts packages/typescript/ai-ollama/src/adapters/text.ts | head -20

Repository: TanStack/ai

Length of output: 37


systemPrompts not forwarded to Ollama.

The mapCommonOptionsToOllama function doesn't include options.systemPrompts in the returned ChatRequest. Unlike other adapters (OpenAI, Anthropic, Gemini) that handle system prompts, this adapter drops them entirely. Ollama's API supports system role messages, so system prompts should be prepended to the messages array.

   private mapCommonOptionsToOllama(options: ChatOptions): ChatRequest {
     const model = options.model || this.defaultModel
     const providerOptions = options.providerOptions as
       | OllamaTextProviderOptions
       | undefined

     const ollamaOptions = {
       temperature: options.options?.temperature,
       top_p: options.options?.topP,
       num_predict: options.options?.maxTokens,
       ...providerOptions,
     }

+    // Prepend system message if systemPrompts provided
+    const messagesWithSystem = options.systemPrompts?.length
+      ? [
+          { role: 'system' as const, content: options.systemPrompts.join('\n') },
+          ...this.formatMessages(options.messages),
+        ]
+      : this.formatMessages(options.messages)
+
     return {
       model,
       options: ollamaOptions,
-      messages: this.formatMessages(options.messages),
+      messages: messagesWithSystem,
       tools: this.convertToolsToOllamaFormat(options.tools),
     }
   }
🤖 Prompt for AI Agents
In packages/typescript/ai-ollama/src/adapters/text.ts around lines 355 to 374, the function mapCommonOptionsToOllama drops options.systemPrompts; update it to prepend any provided systemPrompts (properly formatted) to the messages array before assigning to the ChatRequest. Specifically, call this.formatMessages on options.systemPrompts (if present) and concat/prepend those entries to this.formatMessages(options.messages), ensuring null/undefined checks so messages is always an array, then return the combined messages in the ChatRequest.

Comment on lines 39 to 44
// Union type of all adapters
export type AnyAdapter =
  | ChatAdapter<any, any, any, any, any>
  | EmbeddingAdapter<any, any>
  | SummarizeAdapter<any, any>
  | ImageAdapter<any, any, any>


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

rg -n"export interface ImageAdapter<" --type=ts -A 10 packages/typescript/ai/src/adapters/base-image-adapter.ts

Repository: TanStack/ai

Length of output: 614


Update ImageAdapter in AnyAdapter union to include missing 4th generic parameter.

ImageAdapter has 4 generic type parameters (TModels, TProviderOptions, TModelProviderOptionsByName, TModelSizeByName), but is currently defined in the AnyAdapter union with only 3 type arguments. Update line 44 to: ImageAdapter<any, any, any, any>

🤖 Prompt for AI Agents
In packages/typescript/ai/src/adapters/index.ts around lines 39 to 44, the AnyAdapter union lists ImageAdapter with only three generic parameters; update that union member to provide the missing fourth generic (use ImageAdapter<any, any, any, any>) so the type signature matches the ImageAdapter definition and resolves the generic arity mismatch.

Comment on lines 525 to 540
async function generateEmbedding(
  options: GenerateEmbeddingOptions<
    EmbeddingAdapter<ReadonlyArray<string>, object>,
    string
  >,
): Promise<EmbeddingResult> {
  const { adapter, model, input, dimensions } = options

  const embeddingOptions: EmbeddingOptions = {
    model,
    input,
    dimensions,
  }

  return adapter.createEmbeddings(embeddingOptions)
}


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if EmbeddingOptions includes providerOptions
rg -n "interface EmbeddingOptions" --type=ts -A 10 packages/typescript/ai/src/types.ts

Repository: TanStack/ai

Length of output: 317


🏁 Script executed:

#!/bin/bash
# Check GenerateEmbeddingOptions structure
rg -n "interface GenerateEmbeddingOptions" --type=ts -A 15 packages/typescript/ai/src/types.ts

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Search for summarize functions to see the pattern mentioned
rg -n "function generateSummarize|async function.*Summarize" --type=ts -A 15 packages/typescript/ai/src/core/generate.ts | head -60

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Search for GenerateEmbeddingOptions more broadly
rg "GenerateEmbeddingOptions" --type=ts -B 2 -A 10 packages/typescript/ai/src/

Repository: TanStack/ai

Length of output: 7226


🏁 Script executed:

#!/bin/bash
# Check the structure of generate.ts to understand the function and types
rg -n "generateEmbedding|GenerateSummarize" --type=ts packages/typescript/ai/src/core/generate.ts | head -20

Repository: TanStack/ai

Length of output: 578


🏁 Script executed:

#!/bin/bash
# Find the generateSummarize function and see how it handles providerOptions
rg -n "async function generateSummarize" --type=ts -A 20 packages/typescript/ai/src/core/generate.ts

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check GenerateSummarizeOptions to see if it has providerOptions
rg -n "interface GenerateSummarizeOptions" --type=ts -A 15 packages/typescript/ai/src/core/generate.ts

Repository: TanStack/ai

Length of output: 778


🏁 Script executed:

#!/bin/bash
# Search for the summarize function implementation
rg -n "function generateSummarize" --type=ts -A 20 packages/typescript/ai/src/core/generate.ts

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check what SummarizationOptions looks like in types.ts
rg -n "interface SummarizationOptions" --type=ts -A 10 packages/typescript/ai/src/types.ts

Repository: TanStack/ai

Length of output: 337


🏁 Script executed:

#!/bin/bash
# Get the full generateEmbedding function
sed -n '525,540p' packages/typescript/ai/src/core/generate.ts

Repository: TanStack/ai

Length of output: 425


🏁 Script executed:

#!/bin/bash
# Search more broadly for summarize function with different pattern
rg -n "generateSummarize" --type=ts -B 2 -A 20 packages/typescript/ai/src/core/generate.ts | grep -A 20 "^[0-9]*-async function"

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check the full EmbeddingOptions and EmbeddingAdapter definitions
rg -n "interface EmbeddingOptions|type EmbeddingOptions" --type=ts -A 15 packages/typescript/ai/src/types.ts

Repository: TanStack/ai

Length of output: 393


🏁 Script executed:

#!/bin/bash
# Check EmbeddingAdapter to understand what parameters createEmbeddings accepts
rg -n "interface EmbeddingAdapter|type EmbeddingAdapter" --type=ts -A 20 packages/typescript/ai/src/types.ts

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Search for EmbeddingAdapter definition more broadly
rg "interface EmbeddingAdapter|type EmbeddingAdapter" --type=ts -B 2 -A 15 packages/typescript/ai/src/

Repository: TanStack/ai

Length of output: 4453


🏁 Script executed:

#!/bin/bash
# Get the complete EmbeddingAdapter interface including the createEmbeddings method
rg -n "interface EmbeddingAdapter" --type=ts -A 40 packages/typescript/ai/src/adapters/base-embedding-adapter.ts

Repository: TanStack/ai

Length of output: 2024


providerOptions not forwarded in generateEmbedding and EmbeddingOptions interface incomplete.

GenerateEmbeddingOptions exposes providerOptions but it's not passed to the adapter. The proposed forwarding alone is insufficient: the EmbeddingOptions interface must also be updated to include a providerOptions field, as the createEmbeddings method currently only accepts EmbeddingOptions, which lacks this field.

-interface EmbeddingOptions {
+interface EmbeddingOptions {
   model: string
   input: string | Array<string>
   dimensions?: number
+  providerOptions?: object
 }

Then update generateEmbedding:

 async function generateEmbedding(
   options: GenerateEmbeddingOptions<
     EmbeddingAdapter<ReadonlyArray<string>, object>,
     string
   >,
 ): Promise<EmbeddingResult> {
-  const { adapter, model, input, dimensions } = options
+  const { adapter, model, input, dimensions, providerOptions } = options

   const embeddingOptions: EmbeddingOptions = {
     model,
     input,
     dimensions,
+    providerOptions,
   }

   return adapter.createEmbeddings(embeddingOptions)
 }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/typescript/ai/src/core/generate.ts around lines 525-540, GenerateEmbeddingOptions exposes providerOptions but the code doesn't forward it and the EmbeddingOptions interface lacks a providerOptions field; update the EmbeddingOptions interface to include providerOptions with the appropriate type (matching GenerateEmbeddingOptions['providerOptions'] or a shared ProviderOptions type) and then modify generateEmbedding to include providerOptions when constructing embeddingOptions so adapter.createEmbeddings receives providerOptions as part of EmbeddingOptions.

Comment on lines +569 to +575
/**
* Zod schema for structured output.
* When provided, the adapter should use the provider's native structured output API
* to ensure the response conforms to this schema.
* The schema will be converted to JSON Schema format before being sent to the provider.
*/
outputSchema?: z.ZodType


⚠️ Potential issue | 🟠 Major

Clarify relationship between outputSchema and output fields.

The ChatOptions interface now has both output?: TOutput (line 568) and outputSchema?: z.ZodType (line 575). This creates ambiguity:

  • output already supports structured output via ResponseFormat with JSON schema
  • outputSchema provides a direct Zod schema approach
  • No clear precedence or validation if both are provided

Consider one of these approaches:

  1. Consolidate into a single field that accepts both formats
  2. Document clear precedence rules (e.g., outputSchema overrides output)
  3. Add runtime validation to prevent both being set simultaneously

Apply this diff to add a JSDoc note clarifying the relationship:

   output?: TOutput
   /**
    * Zod schema for structured output.
    * When provided, the adapter should use the provider's native structured output API
    * to ensure the response conforms to this schema.
    * The schema will be converted to JSON Schema format before being sent to the provider.
+   *
+   * Note: If both 'output' and 'outputSchema' are provided, 'outputSchema' takes precedence.
+   * Consider using only one approach to avoid confusion.
    */
   outputSchema?: z.ZodType
🤖 Prompt for AI Agents
In packages/typescript/ai/src/types.ts around lines 569 to 575, the relationship between the existing output?: TOutput field and the new outputSchema?: z.ZodType is ambiguous; update the JSDoc above outputSchema to clearly state the precedence and allowed combinations (e.g., that outputSchema is an alternative to output and will take precedence if both are provided) and add a note that callers should not supply both simultaneously; additionally, implement a runtime guard where ChatOptions are constructed/validated to throw or return an error if both output and outputSchema are set to prevent ambiguous behavior.
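A sketch of the runtime guard mentioned above; the helper name is hypothetical, and where it is invoked depends on how ChatOptions are constructed:

// Hypothetical validation helper for the output/outputSchema ambiguity
function assertSingleOutputMode(options: {
  output?: unknown
  outputSchema?: unknown
}): void {
  if (options.output !== undefined && options.outputSchema !== undefined) {
    throw new Error(
      "Provide either 'output' or 'outputSchema', not both; 'outputSchema' would otherwise take precedence.",
    )
  }
}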

Comment on lines +791 to +795
  usage?: {
    inputTokens?: number
    outputTokens?: number
    totalTokens?: number
  }


⚠️ Potential issue | 🟡 Minor

Standardize token usage field naming.

The usage object in ImageGenerationResult uses inputTokens and outputTokens, while other result types use different naming:

  • DoneStreamChunk (line 644): uses promptTokens and completionTokens
  • EmbeddingResult (line 740): uses promptTokens

Consider standardizing to one naming convention across all usage types for consistency.

Apply this diff to align with existing conventions:

   /** Token usage information (if available) */
   usage?: {
-    inputTokens?: number
-    outputTokens?: number
+    promptTokens?: number
+    completionTokens?: number
     totalTokens?: number
   }
🤖 Prompt for AI Agents
packages/typescript/ai/src/types.ts around lines 791 to 795: the ImageGenerationResult usage object currently uses inputTokens/outputTokens which is inconsistent with other types that use promptTokens and completionTokens; rename usage.inputTokens -> usage.promptTokens and usage.outputTokens -> usage.completionTokens (keep totalTokens) and update any related type references and consumers/serializers/deserializers to use the new field names to maintain consistency across result types.


@coderabbitai (coderabbitai bot) left a comment


Actionable comments posted: 0

♻️ Duplicate comments (5)
packages/typescript/ai-anthropic/src/adapters/text.ts (5)

92-121: Avoid emitting error.code as the literal string "undefined" in chatStream

If both err.code and err.status are absent, String(err.status) becomes "undefined", which is not useful to consumers.

Consider a safer fallback:

       yield {
         type: 'error',
         id: generateId(this.name),
         model: options.model,
         timestamp: Date.now(),
         error: {
           message: err.message || 'Unknown error occurred',
-          code: err.code || String(err.status),
+          code:
+            err.code ??
+            (err.status != null ? String(err.status) : 'unknown'),
         },
       }

129-178: Structured output silently returns data: null on JSON parse failure

In structuredOutput, a JSON.parse error is caught and coerced to parsed = null, so callers just see data: null with no indication that parsing failed. This can mask real schema/format issues.

Suggest failing fast with a descriptive error (or otherwise surfacing parse failure):

-      let parsed: unknown = null
-      try {
-        parsed = JSON.parse(text)
-      } catch {
-        parsed = null
-      }
+      let parsed: unknown
+      try {
+        parsed = JSON.parse(text)
+      } catch (err) {
+        throw new Error(
+          `Failed to parse structured output as JSON: ${
+            err instanceof Error ? err.message : String(err)
+          }`,
+        )
+      }

This will then be wrapped by the outer catch and surfaced as a structured output generation error instead of a silent null.


181-242: Clarify precedence between systemPrompts and providerOptions.system

validProviderOptions includes 'system' and is spread after systemPrompts:

      system: options.systemPrompts?.join('\n'),
      tools: tools,
      ...validProviderOptions,

This means a providerOptions.system value will override the higher-level systemPrompts, which is easy to miss and currently undocumented.

If you want systemPrompts to always win (which is what most consumers will expect), either:

  • Remove 'system' from validKeys, or
  • Spread validProviderOptions before the explicit system field:
-    const requestParams: InternalTextProviderOptions = {
-      model: options.model,
-      max_tokens: maxTokens,
-      temperature: options.options?.temperature,
-      top_p: options.options?.topP,
-      messages: formattedMessages,
-      system: options.systemPrompts?.join('\n'),
-      tools: tools,
-      ...validProviderOptions,
-    }
+    const requestParams: InternalTextProviderOptions = {
+      model: options.model,
+      max_tokens: maxTokens,
+      temperature: options.options?.temperature,
+      top_p: options.options?.topP,
+      messages: formattedMessages,
+      ...validProviderOptions,
+      system: options.systemPrompts?.join('\n'),
+      tools: tools,
+    }

If provider‑level override is intentional, it would help to document and add a test capturing that precedence.


408-424: content_block_* handling should use the event index instead of a global currentToolIndex

processAnthropicStream tracks tool calls via a currentToolIndex counter that increments on each content_block_start of type tool_use and is then reused for input_json_delta and content_block_stop. However, Anthropic's streaming events already include an index field on content_block_start, content_block_delta, and content_block_stop.

With mixed content blocks (e.g., text, thinking, tool_use) the current approach can misassociate input_json_delta/content_block_stop events with the wrong tool, leading to spurious or mis-indexed tool_call chunks.

You can avoid this by keying off event.index instead of a mutable currentToolIndex:

-    const toolCallsMap = new Map<
-      number,
-      { id: string; name: string; input: string }
-    >()
-    let currentToolIndex = -1
+    const toolCallsMap = new Map<
+      number,
+      { id: string; name: string; input: string }
+    >()
@@
-        if (event.type === 'content_block_start') {
-          if (event.content_block.type === 'tool_use') {
-            currentToolIndex++
-            toolCallsMap.set(currentToolIndex, {
+        if (event.type === 'content_block_start') {
+          if (event.content_block.type === 'tool_use') {
+            toolCallsMap.set(event.index, {
               id: event.content_block.id,
               name: event.content_block.name,
               input: '',
             })
@@
-          } else if (event.delta.type === 'input_json_delta') {
-            const existing = toolCallsMap.get(currentToolIndex)
+          } else if (event.delta.type === 'input_json_delta') {
+            const existing = toolCallsMap.get(event.index)
             if (existing) {
               existing.input += event.delta.partial_json
@@
-              index: currentToolIndex,
+              index: event.index,
             }
           }
         }
-        } else if (event.type === 'content_block_stop') {
-          const existing = toolCallsMap.get(currentToolIndex)
+        } else if (event.type === 'content_block_stop') {
+          const existing = toolCallsMap.get(event.index)
           if (existing && existing.input === '') {
@@
-              index: currentToolIndex,
+              index: event.index,
             }
           }
         }

This keeps tool-call tracking aligned with Anthropic’s own block indices and avoids cross‑contamination between different content blocks.

Also applies to: 451-471, 473-491


554-565: Duplicate error.code "undefined" issue in stream error handler

Same concern as in chatStream: code: err.code || String(err.status) can yield "undefined" when both are missing. Recommend aligning the fallback with the earlier suggestion:

       yield {
         type: 'error',
         id: genId(),
         model: model,
         timestamp,
         error: {
           message: err.message || 'Unknown error occurred',
-          code: err.code || String(err.status),
+          code:
+            err.code ??
+            (err.status != null ? String(err.status) : 'unknown'),
         },
       }
🧹 Nitpick comments (1)
packages/typescript/ai-anthropic/src/adapters/text.ts (1)

492-552: Make usage handling more defensive and avoid potential double done events

Two small robustness points in the message_delta / message_stop handling:

  1. Guard event.usage access:
    If Anthropic ever omits usage on the final message_delta (even with stop_reason present), the current code will throw on event.usage.input_tokens. A cheap defensive improvement:

-                  usage: {
-                    promptTokens: event.usage.input_tokens || 0,
-                    completionTokens: event.usage.output_tokens || 0,
-                    totalTokens:
-                      (event.usage.input_tokens || 0) +
-                      (event.usage.output_tokens || 0),
-                  },
+                  usage: {
+                    promptTokens: event.usage?.input_tokens ?? 0,
+                    completionTokens: event.usage?.output_tokens ?? 0,
+                    totalTokens:
+                      (event.usage?.input_tokens ?? 0) +
+                      (event.usage?.output_tokens ?? 0),
+                  },

  2. Consider emitting a single done event per stream:
    Depending on Anthropic's exact event sequence, you might get both a message_delta with stop_reason and a later message_stop, which will currently yield two done events. If the consumer expects a single terminal chunk, you could track a let doneEmitted = false and skip subsequent done emissions once set (see the sketch after this comment).

This is more of a resiliency/clarity tweak than a correctness blocker.
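For the second point, a sketch of the single-done guard; the event and chunk shapes are simplified stand-ins for the adapter's real stream types:

// Sketch: guard so at most one terminal 'done' chunk is yielded per stream
function makeDoneGuard() {
  let doneEmitted = false
  return function shouldEmitDone(): boolean {
    if (doneEmitted) return false
    doneEmitted = true
    return true
  }
}

// Usage inside the stream loop (sketch):
//   const shouldEmitDone = makeDoneGuard()
//   message_delta with stop_reason -> if (shouldEmitDone()) yield doneChunk
//   message_stop                   -> if (shouldEmitDone()) yield doneChunk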

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 14c4297 and 935111d.

📒 Files selected for processing (1)
  • packages/typescript/ai-anthropic/src/adapters/text.ts (1 hunks)
🔇 Additional comments (4)
packages/typescript/ai-anthropic/src/adapters/text.ts (4)

47-90: Adapter config and client wiring look solid

AnthropicTextConfig as a thin alias plus AnthropicTextAdapter extending BaseChatAdapter and initializing this.client via createAnthropicClient is straightforward and consistent with the rest of the adapters. No changes requested here.


244-308: Content-part conversion to Anthropic blocks looks consistent

The handling of text, image, and document parts (with metadata mapping and safe stripping of mediaType for images) and the explicit rejection of audio/video is clear and matches Anthropic's capabilities. The exhaustive never check on the default branch is a nice type-safety touch.


310-398: Message formatting for tools and multi-part user content looks reasonable

The mapping logic:

  • Converts role: 'tool' + toolCallId into Anthropic tool_result blocks on a user message.
  • Packages assistant tool calls as tool_use blocks (with best-effort JSON parsing of function.arguments).
  • Properly converts multi-part user content via convertContentPartToAnthropic.
  • Falls back to string or converted content array in other cases.

This matches the usual TanStack AI message‑to‑provider mapping; I don’t see any correctness issues here.


574-589: Factory helpers are thin and ergonomic

createAnthropicText and anthropicText are simple, typed helpers that mirror the other adapters' factory pattern and keep env-key detection nicely encapsulated. No issues from my side.

* fix: refactoring ai for more activities
* smoke tests passing
* woot, all the test stuff is working
* dev panel updates for images, summarization, one shot and structured
* enhancing smoke tests
* fixing tests
* adding grok
* last minute tests
* Refactor imports in documentation and examples to use named imports for `ai`
  - Updated all instances of `import ai from "@tanstack/ai"` to `import { ai } from "@tanstack/ai"` across various documentation files, guides, and examples.
  - Ensured consistency in import statements for better clarity and adherence to best practices.
* ci: apply automated fixes
* fix typesafety on ai
* ci: apply automated fixes
* cleanup types
* ci: apply automated fixes
* remove grok
* ci: apply automated fixes
* fix provenence?
* update deps
* fix tests
---------
Co-authored-by: Alem Tuzlak <t.zlak@hotmail.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

@coderabbitai (coderabbitai bot) left a comment


Actionable comments posted: 19

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (14)
docs/reference/functions/textOptions.md (1)

6-13: Fix incorrect "Defined in" link: the function is in activities/text/index.ts:1211, not utilities/chat-options.ts.

The documented source location is wrong. textOptions is exported from packages/typescript/ai/src/activities/text/index.ts starting at line 1211, not from a non-existent utilities/chat-options.ts file. Update the link accordingly.

docs/guides/tool-approval.md (1)

58-74: Fix the adapter function name in the code snippet: use openai() instead of openaiText().

The TanStack AI package exports openai from @tanstack/ai-openai, not openaiText(). The corrected snippet should be:

import { ai, toStreamResponse } from "@tanstack/ai";
import { openai } from "@tanstack/ai-openai";
import { sendEmail } from "./tools";

export async function POST(request: Request) {
  const { messages } = await request.json();

  const stream = ai({
    adapter: openai(),
    messages,
    model: "gpt-4o",
    tools: [sendEmail],
  });

  return toStreamResponse(stream);
}

The openai() adapter defaults to reading OPENAI_API_KEY from environment variables (server-side only). If docs need to clarify auth setup, consider adding a note that the environment variable must be set before deployment.

examples/ts-group-chat/chat-server/capnweb-rpc.ts (1)

154-199: Bug risk: joinChat() registers clients, but leaveChat() doesn't unregister, so stale "connected users" are likely.
This can keep usernames in clients (and queues) after a user leaves without closing the socket. Consider cleaning up in leaveChat() (and possibly also handling duplicate usernames in joinChat()).

   async leaveChat() {
     if (!this.currentUsername) return
-    console.log(`${this.currentUsername} is leaving the chat`)
-    await globalChat.removeUser(this.currentUsername)
-    this.currentUsername = null
+    const username = this.currentUsername
+    console.log(`${username} is leaving the chat`)
+    await globalChat.removeUser(username)
+    clients.delete(username)
+    userMessageQueues.delete(username)
+    this.currentUsername = null

     return {
       message: 'Successfully left the chat',
     }
   }
examples/ts-svelte-chat/src/routes/api/chat/+server.ts (1)

105-119: Remove `as any` type casts to preserve type safety.

The migration to `ai()` is correct, but lines 106-107 use `as any` to bypass type checking. This defeats TypeScript's type safety and could mask incompatibilities between the text adapters and the `ai()` function.

Consider investigating the type mismatch. Possible solutions:

  1. If the adapter types are narrower than `ai()` expects, update the `ai()` function signature to accept the union of text adapter types
  2. If the types should already be compatible, refine the switch statement's type inference or use explicit type annotations
  3. If types are still being refined in this PR, add a TODO comment indicating this is temporary

Apply this approach to eliminate the type casts:

```diff
 const stream = ai({
-  adapter: adapter as any,
-  model: selectedModel as any,
+  adapter,
+  model: selectedModel,
   tools: [
     getGuitars,
     recommendGuitarToolDef,
```

If type errors arise, they will guide you to the proper type definitions needed.

docs/adapters/ollama.md (1)

98-124: Docs examples use `messages` without defining it
In "With Tools" and "Provider Options" examples, `messages` isn't declared in-snippet, which makes copy/paste fail.

```diff
 import { ai, toolDefinition } from "@tanstack/ai";
 import { ollamaText } from "@tanstack/ai-ollama";
 import { z } from "zod";

 const adapter = ollamaText();
+const messages = [{ role: "user", content: "Hello!" }];

 const getLocalDataDef = toolDefinition({
   name: "get_local_data",
   description: "Get data from local storage",
   inputSchema: z.object({
     key: z.string(),
   }),
 });
```

Also applies to: 132-147

examples/README.md (2)

311-321: Inconsistent adapter import - should use `openaiText` for consistency.

The import on line 312 uses the legacy `openai` adapter, but per the PR objectives, the tree-shakeable `openaiText` adapter should be used instead to align with the new API surface demonstrated elsewhere.

```diff
 import { ai, toStreamResponse } from '@tanstack/ai'
-import { openai } from '@tanstack/ai-openai'
+import { openaiText } from '@tanstack/ai-openai'

 const stream = ai({
-  adapter: openai(),
+  adapter: openaiText(),
   model: 'gpt-4o',
   messages,
 })
```

385-391: Inconsistent API call - should use `ai()` instead of `chat()`.

This example still uses the old `chat({...})` function call, but should be updated to `ai({...})` to match the PR's migration pattern.

```diff
-const stream = chat({
+const stream = ai({
   adapter: openai(),
   model: 'gpt-4o',
   messages,
   tools: [weatherTool], // SDK executes these automatically
 })
```
docs/reference/type-aliases/TextStreamOptionsUnion.md (1)

1-18: Fix incorrect source file line reference.

The documentation references line 823, but `TextStreamOptionsUnion` is actually defined at line 890 in `packages/typescript/ai/src/types.ts`. Update the link to point to the correct line.

docs/getting-started/quick-start.md (1)

179-194: Tools snippet calls `ai(...)` without importing it.

```diff
-import { toolDefinition } from '@tanstack/ai'
+import { ai, toolDefinition } from '@tanstack/ai'
```
docs/guides/client-tools.md (1)

235-239: Tool state name `completed` may be inconsistent with other docs/examples (`complete`) — please align to the actual enum values.

docs/guides/tools.md (1)

282-288: Hybrid tools example uses two `tools:` keys in one object literal (invalid / misleading). Split into two separate examples.

```diff
-ai({
-  adapter: openaiText(),
-  messages,
-  tools: [addToCartDef], // Client will execute, or
-  tools: [addToCartServer], // Server will execute
-});
+// Client will execute (pass definition)
+ai({ adapter: openaiText(), messages, tools: [addToCartDef] });
+
+// Server will execute (pass server implementation)
+ai({ adapter: openaiText(), messages, tools: [addToCartServer] });
```
docs/api/ai.md (2)

47-61: Inconsistent section header with actual API usage.

The section header says `summarize(options)` but the code example uses `ai({ adapter: openaiSummarize(), ... })`. Either update the header to reflect the unified `ai()` entry point or clarify that `summarize()` is a separate function.

```diff
-## `summarize(options)`
+## Summarization with `ai()`

-Creates a text summarization.
+Creates a text summarization using an adapter.
```

76-89: Inconsistent section header with actual API usage.

Similar to the summarize section, the header says `embedding(options)` but the code uses `ai({ adapter: openaiEmbed(), ... })`.

```diff
-## `embedding(options)`
+## Embeddings with `ai()`

-Creates embeddings for text input.
+Creates embeddings for text input using an adapter.
```
packages/typescript/ai/src/index.ts (1)

74-98: Leaky public types: exporting `InternalToolCallState` from the root likely locks you into it.
Given the "Internal…" name, either (a) don't export it from `src/index.ts`, or (b) mark it `/** @internal */` and ensure the build actually strips/flags internal types consistently.
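A minimal sketch of option (b); the union members here are illustrative, not the package's actual states, and `stripInternal` is the standard TypeScript compiler option for dropping `@internal` declarations from emitted `.d.ts` files:

```typescript
/**
 * Lifecycle state for an in-flight tool call.
 * @internal Implementation detail; may change without a major version bump.
 */
export type InternalToolCallState = 'pending' | 'executing' | 'complete'
```

With `"stripInternal": true` in `compilerOptions`, the alias disappears from the published type definitions, so consumers cannot come to depend on it.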

♻️ Duplicate comments (17)
examples/ts-group-chat/chat-server/chat-logic.ts (1)

108-108: Same array syntax consideration applies to method return types.

The same `Type[]` → `Array<Type>` conversion appears in these method signatures, consistent with the interface changes above.

Also applies to: 112-112

packages/typescript/ai-ollama/src/adapters/text.ts (2)

199-222: Critical: Ollama's `ToolCall` lacks `id` property — type assertion hides runtime issue.

According to Ollama's v0.6.3 interface definition, `ToolCall` only contains `function.name` and `function.arguments`. The type assertion on lines 200-203 bypasses TypeScript's type checking but doesn't create the `id` property at runtime. Line 211 will access an undefined `actualToolCall.id`.

Use `responseId` as the tool call identifier instead:

```diff
 const handleToolCall = (toolCall: ToolCall): StreamChunk => {
-  const actualToolCall = toolCall as ToolCall & {
-    id: string
-    function: { index: number }
-  }
   return {
     type: 'tool_call',
     id: responseId,
     model: chunk.model,
     timestamp,
     toolCall: {
       type: 'function',
-      id: actualToolCall.id,
+      id: `${responseId}-tool-${Math.random().toString(36).substring(7)}`,
       function: {
-        name: actualToolCall.function.name || '',
+        name: toolCall.function.name || '',
         arguments:
-          typeof actualToolCall.function.arguments === 'string'
-            ? actualToolCall.function.arguments
-            : JSON.stringify(actualToolCall.function.arguments),
+          typeof toolCall.function.arguments === 'string'
+            ? toolCall.function.arguments
+            : JSON.stringify(toolCall.function.arguments),
       },
     },
-    index: actualToolCall.function.index,
+    index: 0,
   }
 }
```

366-385: `systemPrompts` not forwarded to Ollama.

The `mapCommonOptionsToOllama` function doesn't include `options.systemPrompts` in the returned `ChatRequest`. Other adapters (OpenAI, Anthropic, Gemini) prepend system prompts to the messages array. Ollama's API supports system role messages, so system prompts should be handled.

Prepend system messages when `systemPrompts` is provided:

```diff
 private mapCommonOptionsToOllama(options: TextOptions): ChatRequest {
   const model = options.model || this.defaultModel
   const providerOptions = options.providerOptions as
     | OllamaTextProviderOptions
     | undefined

   const ollamaOptions = {
     temperature: options.options?.temperature,
     top_p: options.options?.topP,
     num_predict: options.options?.maxTokens,
     ...providerOptions,
   }

+  // Prepend system message if systemPrompts provided
+  const messagesWithSystem = options.systemPrompts?.length
+    ? [
+        { role: 'system' as const, content: options.systemPrompts.join('\n') },
+        ...this.formatMessages(options.messages),
+      ]
+    : this.formatMessages(options.messages)
+
   return {
     model,
     options: ollamaOptions,
-    messages: this.formatMessages(options.messages),
+    messages: messagesWithSystem,
     tools: this.convertToolsToOllamaFormat(options.tools),
   }
 }
```
packages/typescript/ai-gemini/src/adapters/text.ts (6)

256-267: Remove dead `chunk.data` handling (likely not a real field on streamed Gemini responses).
This branch will either never run or mask future shape mismatches.


205-255: Tool call IDs can collide; always generate a unique `toolCallId` (don't use `functionCall.name` as the id).
Same function can be called multiple times in a run; using the name as the id breaks correlation.

```diff
-            const toolCallId =
-              functionCall.name || `call_${Date.now()}_${nextToolIndex}`
+            const toolCallId = `call_${Date.now()}_${nextToolIndex}`
             const functionArgs = functionCall.args || {}
@@
-                const toolCallId =
-                  functionCall.name || `call_${Date.now()}_${nextToolIndex}`
+                const toolCallId = `call_${Date.now()}_${nextToolIndex}`
                 const functionArgs = functionCall.args || {}
```

Also applies to: 272-311


325-338: Token accounting: `completionTokens` should not come from `thoughtsTokenCount`.
`thoughtsTokenCount` counts reasoning/thought tokens; completion text tokens should come from `candidatesTokenCount` on the response's `usageMetadata`.

```diff
-                completionTokens: chunk.usageMetadata.thoughtsTokenCount ?? 0,
+                completionTokens: chunk.usageMetadata.candidatesTokenCount ?? 0,
```

343-372: MIME type fallback is incorrect for non-image modalities.
Defaulting everything to `'image/jpeg'` will break audio/video/document uploads.

```diff
-              mimeType: metadata?.mimeType ?? 'image/jpeg',
+              mimeType: metadata?.mimeType ?? 'application/octet-stream',
@@
-              mimeType: metadata?.mimeType ?? 'image/jpeg',
+              mimeType: metadata?.mimeType ?? 'application/octet-stream',
```

397-412: Don't cast to `Record<string, unknown>` after JSON parse failure — recover safely.
A failed parse almost certainly means you have a string; the current cast risks runtime errors downstream.

```diff
           } catch {
-            parsedArgs = toolCall.function.arguments as unknown as Record<
-              string,
-              unknown
-            >
+            parsedArgs = {}
           }
```

423-432: `functionResponse.name` must be the tool/function name, not `toolCallId`.
Gemini expects the declared function name here; using an ID can break tool-result correlation.

```diff
           functionResponse: {
-            name: msg.toolCallId,
+            name: msg.name ?? '',
             response: {
               content: msg.content || '',
             },
           },
```
packages/typescript/ai-openai/src/adapters/text.ts (2)

226-227: Either populate `eventTypeCounts` or remove it (currently it always logs empty counts).
Right now the log implies useful diagnostics, but nothing ever increments the map.

```diff
     const eventTypeCounts = new Map<string, number>()
@@
       for await (const chunk of stream) {
+        eventTypeCounts.set(chunk.type, (eventTypeCounts.get(chunk.type) ?? 0) + 1)
         chunkCount++
```

Also applies to: 479-486


654-668: Base64 image inputs must be converted to a data URI for `image_url` (currently passing raw base64).
`part.source.value` for `source.type === 'data'` is base64, not a URL.

```diff
       case 'image': {
         const imageMetadata = part.metadata
         if (part.source.type === 'url') {
           return {
             type: 'input_image',
             image_url: part.source.value,
             detail: imageMetadata?.detail || 'auto',
           }
         }
         // For base64 data, construct a data URI
         return {
           type: 'input_image',
-          image_url: part.source.value,
+          image_url: `data:image/jpeg;base64,${part.source.value}`,
           detail: imageMetadata?.detail || 'auto',
         }
       }
```
packages/typescript/ai/src/types.ts (2)

572-578: Clarify relationship between `outputSchema` and `output` fields.

This issue was flagged in a previous review. The `TextOptions` interface now has both `output?: TOutput` (line 571) and `outputSchema?: z.ZodType` (line 578). Consider adding documentation clarifying precedence rules or runtime validation.
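One way to make the relationship explicit is a runtime guard at the entry point. A hedged sketch (the field names come from the comment above; the error text and the choice to reject rather than prefer one field are illustrative):

```typescript
// Hypothetical validation: refuse ambiguous input instead of silently
// preferring one of the two fields.
function assertSingleOutputConfig(options: {
  output?: unknown
  outputSchema?: unknown
}): void {
  if (options.output !== undefined && options.outputSchema !== undefined) {
    throw new Error(
      'TextOptions: pass either `output` or `outputSchema`, not both',
    )
  }
}
```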


794-798: Token usage field naming inconsistency.

This issue was flagged in a previous review. The `usage` object in `ImageGenerationResult` uses `inputTokens`/`outputTokens`, while other result types use `promptTokens`/`completionTokens`. Consider standardizing the naming convention.

packages/typescript/ai-anthropic/src/adapters/text.ts (4)

117-120: Error code may result in string "undefined".

This issue was flagged in a previous review. If both `err.code` and `err.status` are undefined, `String(err.status)` produces `"undefined"` as the error code.
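A small sketch of a safer fallback, assuming `err` carries the `code`/`status` fields the adapter already reads (the `'unknown_error'` literal is illustrative; pick whatever the SDK standardizes on):

```typescript
// Avoids String(undefined) === 'undefined' leaking into error payloads.
function errorCode(err: { code?: string; status?: number }): string {
  if (err.code) return err.code
  if (err.status != null) return String(err.status)
  return 'unknown_error' // illustrative fallback
}
```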


261-270: Spread of `validProviderOptions` can override `systemPrompts`.

This issue was flagged in a previous review. The `system` field is set explicitly at line 267, but since `'system'` is included in `validKeys` (line 232), it can be extracted from `providerOptions` and will override the joined `systemPrompts` when spread at line 269.
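The usual fix is to strip the conflicting key before spreading. A sketch using the names from the comment above (the helper itself is hypothetical):

```typescript
// Destructure `system` out so the explicit assignment below always wins.
function mergeProviderOptions(
  providerOptions: Record<string, unknown>,
  systemPrompts: Array<string>,
): Record<string, unknown> {
  const { system: _ignored, ...rest } = providerOptions
  return {
    ...rest,
    system: systemPrompts.join('\n'), // can no longer be clobbered by the spread
  }
}
```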


504-522: Use the `index` field from `content_block_stop` to identify which block actually stopped.

This issue was flagged in a previous review. The handler checks `toolCallsMap.get(currentToolIndex)` without verifying that the stopped block is actually a tool_use block using the event's `index` field.
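A sketch of the suggested shape: Anthropic's streaming events (`content_block_start`/`content_block_delta`/`content_block_stop`) each carry an `index` identifying the block, so the lookup can key on the event itself. The helper and the map's value type are illustrative:

```typescript
function handleContentBlockStop(
  event: { type: 'content_block_stop'; index: number },
  toolCallsMap: Map<number, { id: string; name: string }>,
): void {
  // Key the lookup on the event's own index, not the last-seen tool index.
  const pending = toolCallsMap.get(event.index)
  if (pending) {
    // ...emit/finalize the tool_call chunk for exactly this block...
    toolCallsMap.delete(event.index)
  }
}
```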


593-596: Duplicate error code issue.

This issue was flagged in a previous review. Same issue as in `chatStream`: if both `err.code` and `err.status` are undefined, the code will be `"undefined"`.

🧹 Nitpick comments (33)
docs/reference/functions/textOptions.md (1)

8-10: Doc generics: `TModel` appears unused in the shown signature — either show its role or drop it.
The snippet renders `function textOptions<TAdapter, TModel>(options): ...TextStreamOptionsUnion<TAdapter>...` (Line 9), but `TModel` isn't referenced anywhere in the displayed types (Line 9, 20-22). If `TModel` is actually part of the real signature/constraints, consider reflecting that in the snippet/parameter type; otherwise, removing it would reduce confusion.

Also applies to: 14-23

docs/guides/tool-approval.md (1)

63-74: Consider a one-line note about auth/config expectations for `openaiText()` in server environments.
Right now the example will fail silently for readers unless they already know where the OpenAI API key is sourced from; a short note like "ensure your OpenAI API key is configured (e.g., env var) before calling `openaiText()`" would reduce support churn.
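The note could be paired with a fail-fast guard, as in this sketch (it assumes, per the other comments in this review, that `openaiText()` reads `OPENAI_API_KEY` from the environment):

```typescript
import { openaiText } from '@tanstack/ai-openai'

// Fail fast with an actionable message instead of a failed first request.
if (!process.env.OPENAI_API_KEY) {
  throw new Error('Set OPENAI_API_KEY before calling openaiText() on the server')
}

const adapter = openaiText()
```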

examples/ts-group-chat/chat-server/capnweb-rpc.ts (2)

8-13: Prefer `unknown` (or a real ToolCall shape) over `any` for `ModelMessage.toolCalls`.
If this is meant to model tool calls, `Array<unknown>` is a safer default than `Array<any>` until the structure is known.
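A hedged sketch of both options (the `ToolCallShape` interface is illustrative, not the example's actual type):

```typescript
// Option 1: unknown forces narrowing at the use site.
interface ModelMessageLoose {
  toolCalls?: Array<unknown>
}

// Option 2: model the known structure explicitly.
interface ToolCallShape {
  id: string
  function: { name: string; arguments: string }
}
interface ModelMessageTyped {
  toolCalls?: Array<ToolCallShape>
}
```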


61-62: `clients` and `broadcastToAll()` look inconsistent: callbacks are registered, but "broadcast" only queues messages.
If callbacks are intentional, consider invoking them (and only then counting "successful"); otherwise, consider renaming `clients`/removing the callback map to avoid confusion.

Also applies to: 89-141

packages/typescript/ai/src/activities/text/tools/zod-converter.ts (2)

9-16: Consider the fragility of internal API usage.

The `_zod` property is an internal Zod implementation detail that could change across versions. While this type guard may be necessary, be aware that it couples this code to Zod's internals and could break in future Zod updates.

Consider:

  • Adding a comment noting this is intentionally accessing internals
  • Checking if Zod provides a public type guard or utility (e.g., checking for common Zod methods like `parse`, `safeParse`), as sketched below
  • Wrapping this in a try-catch when used, if version compatibility becomes an issue
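For the second bullet, a duck-typing guard against Zod's public surface might look like this (a sketch; it trades the internal `_zod` marker for a structural check):

```typescript
import type { z } from 'zod'

// Structural check against Zod's public API; avoids depending on `_zod`.
function isZodSchema(value: unknown): value is z.ZodType {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as { parse?: unknown }).parse === 'function' &&
    typeof (value as { safeParse?: unknown }).safeParse === 'function'
  )
}
```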

198-201: Internal Zod API usage for object detection.

Similar to `isZodSchema`, this code accesses internal Zod structure (`def.type`). While necessary for the edge case handling, it creates maintenance risk if Zod's internal structure changes.

This is acceptable for now but consider consolidating all internal API checks into a single utility module with clear documentation about the version compatibility assumptions.

packages/typescript/ai-devtools/src/store/ai-context.tsx (1)

1309-1384: Consider subscribing to both `chat:*` and `text:*` events for backward compatibility.
Right now, devtools will miss usage events if an app/core emits the older `chat:*` names. A small compatibility shim keeps things resilient during the transition.

```diff
-    // ============= Chat Events (for usage tracking) =============
+    // ============= Text Events (for usage tracking) =============

-    cleanupFns.push(
-      aiEventClient.on('text:started', (e) => {
+    const onTextStarted = (e: any) => {
         const streamId = e.payload.streamId
         const model = e.payload.model
         const provider = e.payload.provider
         const clientId = e.payload.clientId
@@
-      }),
-    )
+    }
+
+    cleanupFns.push(aiEventClient.on('text:started', onTextStarted))
+    cleanupFns.push(aiEventClient.on('chat:started', onTextStarted))

-    cleanupFns.push(
-      aiEventClient.on('text:completed', (e) => {
+    const onTextCompleted = (e: any) => {
         const { requestId, usage } = e.payload
@@
-      }),
-    )
+    }
+
+    cleanupFns.push(aiEventClient.on('text:completed', onTextCompleted))
+    cleanupFns.push(aiEventClient.on('chat:completed', onTextCompleted))

-    cleanupFns.push(
-      aiEventClient.on('text:iteration', (e) => {
+    const onTextIteration = (e: any) => {
         const { requestId, iterationNumber } = e.payload
@@
-      }),
-    )
+    }
+
+    cleanupFns.push(aiEventClient.on('text:iteration', onTextIteration))
+    cleanupFns.push(aiEventClient.on('chat:iteration', onTextIteration))
```
examples/ts-group-chat/chat-server/claude-service.ts (1)

152-158: Consider removing the `as any` cast on messages.

The `as any` cast on line 155 bypasses type checking and may hide type mismatches between the conversation history and the expected message format. If there's a genuine type incompatibility, it would be better to address it explicitly.

```diff
-        messages: [...conversationHistory] as any,
+        messages: [...conversationHistory],
```

If this causes a type error, the underlying type mismatch should be resolved rather than suppressed.

examples/ts-group-chat/package.json (1)

17-20: Minor version inconsistency between router packages.

`@tanstack/react-router` is updated to `^1.141.1` while `@tanstack/react-router-devtools` remains at `^1.139.7`. This may be intentional, but consider aligning versions if devtools compatibility is desired.

examples/ts-solid-chat/src/routes/api.chat.ts (1)

46-54: Consider propagating the request signal to abort controller.

`requestSignal` is captured on line 47 but only used for the early abort check. Consider linking it to `abortController` so that if the client disconnects mid-stream, the AI request is aborted.

```diff
        const abortController = new AbortController()
+
+       // Abort the AI request if the client disconnects
+       requestSignal?.addEventListener('abort', () => abortController.abort())
```
packages/typescript/ai-openai/src/utils/schema-converter.ts (1)

64-126: Nested object schemas may be skipped if `type: 'object'` is omitted
Inside `makeOpenAIStructuredOutputCompatible`, recursion is gated on `prop.type === 'object'`, but some JSON Schemas represent objects via `properties` without an explicit `type`. Consider treating "has properties" as object-like.

```diff
 function makeOpenAIStructuredOutputCompatible(
   schema: Record<string, any>,
   originalRequired: Array<string> = [],
 ): Record<string, any> {
   const result = { ...schema }

   // Handle object types
-  if (result.type === 'object' && result.properties) {
+  const isObjectLike = result.type === 'object' || (!!result.properties && result.type == null)
+  if (isObjectLike && result.properties) {
+    result.type = 'object'
     const properties = { ...result.properties }
     const allPropertyNames = Object.keys(properties)

     // Transform each property
     for (const propName of allPropertyNames) {
       const prop = properties[propName]
       const wasOptional = !originalRequired.includes(propName)

       // Recursively transform nested objects/arrays
-      if (prop.type === 'object' && prop.properties) {
+      const propIsObjectLike =
+        prop?.type === 'object' || (!!prop?.properties && prop?.type == null)
+      if (propIsObjectLike && prop.properties) {
         properties[propName] = makeOpenAIStructuredOutputCompatible(
           prop,
           prop.required || [],
         )
       } else if (prop.type === 'array' && prop.items) {
         properties[propName] = {
           ...prop,
           items: makeOpenAIStructuredOutputCompatible(
             prop.items,
             prop.items.required || [],
           ),
         }
       } else if (wasOptional) {
         // Make optional fields nullable by adding null to the type
         if (prop.type && !Array.isArray(prop.type)) {
           properties[propName] = {
             ...prop,
             type: [prop.type, 'null'],
           }
         } else if (Array.isArray(prop.type) && !prop.type.includes('null')) {
           properties[propName] = {
             ...prop,
             type: [...prop.type, 'null'],
           }
         }
       }
     }
```
packages/typescript/ai/src/activities/embedding/adapter.ts (1)

66-68: `generateId()` randomness slice is inconsistent/low-quality vs other modules
`substring(7)` can yield very short strings. Consider matching the fixed-length pattern used elsewhere (e.g. `slice(2, 9)`) or using a shared helper.

```diff
   protected generateId(): string {
-    return `${this.name}-${Date.now()}-${Math.random().toString(36).substring(7)}`
+    return `${this.name}-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`
   }
```
packages/typescript/ai/src/activities/summarize/index.ts (2)

155-181: Dispatch logic is fine, but the casts are a bit heavy
Not blocking, but consider overloads to avoid the `as unknown as ...` casts (keeps the public API nicer to maintain); a rough sketch follows.
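Roughly, the overload shape could look like this (signatures are illustrative; `SummarizeOptions` stands in for the real generic options type, and `runSummarize`/`runStreamingSummarize` are this module's existing helpers):

```typescript
interface SummarizeOptions {
  model: string
  text: string
  stream?: boolean
}

// Streaming call: caller gets chunks.
export function summarize(
  options: SummarizeOptions & { stream: true },
): AsyncIterable<StreamChunk>
// Non-streaming call: caller gets the final result.
export function summarize(
  options: SummarizeOptions & { stream?: false },
): Promise<SummarizationResult>
// One implementation; no `as unknown as` needed in the branches.
export function summarize(options: SummarizeOptions) {
  return options.stream ? runStreamingSummarize(options) : runSummarize(options)
}
```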


93-95: Consider reusing a shared `createId()` helper
This duplicates the same logic already present in the text activity (and now embedding has its own ID generation too).

packages/typescript/ai-ollama/src/utils/schema-converter.ts (1)

8-15: Note: `isZodSchema` relies on internal Zod API.

The `_zod` property check is pragmatic but relies on Zod's internal implementation. While this pattern is commonly used in the ecosystem, it could break if Zod changes its internals in future versions.

Consider adding a comment in the function documentation noting this dependency on Zod internals, or adding a runtime test to verify the check still works as expected.

packages/typescript/ai-gemini/src/utils/schema-converter.ts (1)

59-84: Schema normalization uses internal Zod properties.

The check at lines 60-63 accesses `schema.def.type`, which is another internal Zod property. While the defensive normalization logic ensures Gemini API compatibility, consider consolidating these checks:

```diff
   // Ensure object schemas always have type: "object"
   if (typeof result === 'object') {
-    const isZodObject =
-      typeof schema === 'object' &&
-      'def' in schema &&
-      schema.def.type === 'object'
-
-    if (isZodObject && !result.type) {
-      result.type = 'object'
-    }
-
-    if (Object.keys(result).length === 0) {
-      result.type = 'object'
-    }
-
-    if ('properties' in result && !result.type) {
+    // Infer object type from properties or empty schema
+    if (
+      !result.type &&
+      ('properties' in result || Object.keys(result).length === 0)
+    ) {
       result.type = 'object'
     }

     if (result.type === 'object' && !('properties' in result)) {
       result.properties = {}
     }

     if (result.type === 'object' && !('required' in result)) {
       result.required = []
     }
   }
```
examples/ts-vue-chat/vite.config.ts (1)

234-248: Type casts (`as any`) are pragmatic for dynamic provider selection.

The `adapter as any` and `selectedModel as any` casts at Lines 235-236 lose type safety but are understandable given the runtime provider switching. For a production codebase, consider a discriminated union approach, but this is acceptable for an example.

A more type-safe approach could use a helper that returns properly typed options:

```typescript
function createAIOptions(provider: Provider, model: string | undefined, ...rest) {
  switch (provider) {
    case 'openai':
      return { adapter: openaiText(), model: model ?? 'gpt-4o', ...rest }
    // ... other cases
  }
}
```
packages/typescript/ai-ollama/src/ollama-adapter.ts (1)

398-405: URL-based images may not work reliably with Ollama.

The comment at Line 402-404 notes that URL-based images aren't directly supported by Ollama. Consider logging a warning or throwing an error for unsupported image sources rather than silently passing them through.

```diff
           } else {
-            // URL-based images not directly supported, but we pass the URL
-            // Ollama may need the image to be fetched externally
-            images.push(part.source.value)
+            // URL-based images require external fetching - warn user
+            console.warn(
+              `Ollama: URL-based images require external fetching. Consider using base64 data instead.`
+            )
+            images.push(part.source.value)
           }
```
packages/typescript/ai/src/ai.ts (4)

38-47: `AnyAdapter` naming is confusing next to `AnyAIAdapter`; consider renaming or removing the alias.
Exporting both `AnyAdapter` and `AnyAIAdapter` invites accidental imports and ambiguous docs/examples. If you need backwards compat, a more explicit alias name (e.g. `AnyGenerateAdapter`) is safer.


39-44: Reduce `any` in `GenerateAdapter` to preserve inference.
Using `any` for provider/model mappings in the union makes `ai()` inference weaker than it needs to be; consider plumbing the generic parameters through (or using `unknown` + constraints) so adapter-specific option typing is retained.


62-119: Type-level routing looks solid, but consider extracting per-kind helpers to keep this maintainable.
`AIOptionsFor`/`AIReturnFor` are correct-looking but will be hard to evolve as new adapter kinds/options land; splitting into `TextAIOptionsFor`, `EmbeddingAIOptionsFor`, etc. (sketched below) would reduce churn and improve readability.
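The extraction might look roughly like this (a sketch only; `kind` is the discriminant the runtime dispatcher already uses per the `ActivityHandler` comment below, and the helper bodies are elided):

```typescript
// Per-kind helpers keep each branch small and independently testable.
type TextAIOptionsFor<TAdapter> = { adapter: TAdapter /* plus text options */ }
type EmbeddingAIOptionsFor<TAdapter> = { adapter: TAdapter /* plus embed options */ }

type AIOptionsFor<TAdapter extends { kind: string }> =
  TAdapter extends { kind: 'text' }
    ? TextAIOptionsFor<TAdapter>
    : TAdapter extends { kind: 'embedding' }
      ? EmbeddingAIOptionsFor<TAdapter>
      : never
```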


252-265: Release hygiene: this is a major public API surface — please add a changeset and deprecation plan.
Given the new unified entrypoint + adapter split, shipping without a changeset will make downstream upgrades painful to track (example below).
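For reference, a changeset is just a markdown file under `.changeset/`; the bump levels and wording below are a suggestion, not a prescription:

```md
---
'@tanstack/ai': major
'@tanstack/ai-openai': major
---

Split adapters into per-activity factories (text/embed/summarize) and add the
unified `ai()` entry point; legacy single-adapter factories are deprecated.
```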

docs/adapters/openai.md (1)

59-79: Avoid hard-coding long model lists in docs (they'll go stale fast).
Prefer linking to a "supported models" reference generated from `model-meta` (or a short "examples" list) to reduce ongoing maintenance.

packages/typescript/ai/src/activities/text/adapter.ts (1)

23-28: Consider renaming `chatOptions` → `textOptions` (or similar) for consistency with "Text" terminology.
This PR broadly renames "Chat" → "Text"; keeping "chatOptions" here is a small but pervasive inconsistency.

packages/typescript/ai/src/activities/embedding/index.ts (1)

68-70: Consider extracting `createId` to a shared utility.

This helper function is duplicated from `packages/typescript/ai/src/activities/text/index.ts` (lines 951-953). To maintain DRY principles and ensure consistent ID generation across activities, consider extracting this to a shared utility module.

```diff
-function createId(prefix: string): string {
-  return `${prefix}-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`
-}
+import { createId } from '../../utils/id.js'
```

Then create `packages/typescript/ai/src/utils/id.ts`:

```typescript
export function createId(prefix: string): string {
  return `${prefix}-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`
}
```
packages/typescript/ai/src/activities/index.ts (2)

160-161: Consider stronger typing for `ActivityHandler`.

The `any` types here lose type safety. While this flexibility is needed for the runtime map, consider documenting why or adding a comment explaining this trade-off.

```diff
-/** Type for activity handler functions */
-type ActivityHandler = (options: any) => any
+/**
+ * Type for activity handler functions.
+ * Uses `any` intentionally as this is a runtime dispatch mechanism
+ * where type narrowing happens at the call site via adapter.kind.
+ */
+type ActivityHandler = (options: unknown) => unknown
```

490-515: Duplicate deprecated type aliases.

These type aliases (lines 490-515) duplicate the functionality of the aliases at lines 301-327 with slightly different naming. Consider consolidating to one set to reduce maintenance overhead.

Lines 301-327: `GenerateTextOptions`, `GenerateEmbeddingOptions`, etc.
Lines 490-515: `TextGenerateOptions`, `EmbeddingGenerateOptions`, etc.

Both point to the same underlying `*ActivityOptions` types.

Either remove one set or have one reference the other:

```diff
 /** @deprecated Use TextActivityOptions */
 export type TextGenerateOptions<
   TAdapter extends TextAdapter<ReadonlyArray<string>, object, any, any, any>,
   TModel extends TextModels<TAdapter>,
   TSchema extends z.ZodType | undefined = undefined,
   TStream extends boolean = true,
-> = TextActivityOptions<TAdapter, TModel, TSchema, TStream>
+> = GenerateTextOptions<TAdapter, TModel, TSchema, TStream>
```
packages/typescript/ai/src/activities/text/index.ts (3)

1037-1068: Heavy type assertions may mask type errors.

The multiple `as unknown as` casts bypass TypeScript's type checking. While these may be necessary due to complex generic constraints, they could mask type errors if the runtime values don't match expectations.

Consider adding runtime assertions or restructuring the generics to reduce the need for double-casting:

```typescript
// Example: Add runtime assertion
if (outputSchema) {
  if (typeof adapter.structuredOutput !== 'function') {
    throw new Error('Adapter does not support structured output')
  }
  return runAgenticStructuredOutput(options)
}
```

1155-1158: Stream consumption for side effects.

The empty loop body consumes the stream purely for its side effects (executing the agentic loop). This is intentional and correct, but the comment could be more explicit about why this is necessary.

```diff
   // Consume the stream to run the agentic loop
   for await (const _chunk of engine.run()) {
-    // Just consume the stream to execute the agentic loop
+    // Intentionally discarding chunks - we only need the side effects
+    // (tool execution, message accumulation) for structured output
   }
```

1211-1212: Consider using `TextAdapter` instead of `AIAdapter`.

The `textOptions` helper uses `AIAdapter` in its generic constraint while the rest of the text activity module uses `TextAdapter`. For consistency and to ensure only text-capable adapters are accepted, consider using `TextAdapter`.

```diff
 export function textOptions<
-  TAdapter extends AIAdapter<any, any, any, any, any>,
+  TAdapter extends TextAdapter<any, any, any, any, any>,
   const TModel extends TAdapter extends AIAdapter<
```

This would also require updating the subsequent type references to use `TextAdapter` patterns.

packages/typescript/ai/src/index.ts (3)

1-16: Public API naming is getting redundant/confusing (`GenerateOptions` vs `TextGenerateOptions` vs `GenerateTextOptions`).
Consider consolidating to one naming scheme (and/or adding `/** @deprecated */` aliases) to reduce long-term API entropy.


19-32: Avoid exporting from deep "activities/…" paths if these are meant to be stable public APIs.
Re-exporting `./activities/text/tools/...` bakes internal folder structure into the public surface; prefer a public barrel like `./tools` (or `./text/tools`) to keep refactors non-breaking, as sketched below.
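A sketch of the barrel approach (the re-exported file paths exist in this PR's file list; using `export *` avoids guessing at individual symbol names):

```typescript
// packages/typescript/ai/src/tools.ts: a stable public barrel.
export * from './activities/text/tools/tool-definition.js'
export * from './activities/text/tools/tool-calls.js'

// Consumers then import from a path that survives internal refactors:
// import { toolDefinition } from '@tanstack/ai/tools'
```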


44-45: `ToolCallManager` being public: double-check this is intended "API surface", not an implementation detail.
If it's internal plumbing, consider not exporting it from the package root (or clearly documenting it as advanced/unstable).

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 935111d and 96a08b1.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (107)
  • docs/adapters/anthropic.md (6 hunks)
  • docs/adapters/gemini.md (5 hunks)
  • docs/adapters/ollama.md (5 hunks)
  • docs/adapters/openai.md (9 hunks)
  • docs/api/ai.md (10 hunks)
  • docs/config.json (3 hunks)
  • docs/getting-started/overview.md (3 hunks)
  • docs/getting-started/quick-start.md (4 hunks)
  • docs/guides/agentic-cycle.md (2 hunks)
  • docs/guides/client-tools.md (3 hunks)
  • docs/guides/multimodal-content.md (6 hunks)
  • docs/guides/per-model-type-safety.md (2 hunks)
  • docs/guides/server-tools.md (2 hunks)
  • docs/guides/streaming.md (2 hunks)
  • docs/guides/tool-approval.md (1 hunks)
  • docs/guides/tool-architecture.md (1 hunks)
  • docs/guides/tools.md (4 hunks)
  • docs/protocol/http-stream-protocol.md (3 hunks)
  • docs/protocol/sse-protocol.md (2 hunks)
  • docs/reference/classes/BaseAdapter.md (1 hunks)
  • docs/reference/functions/text.md (1 hunks)
  • docs/reference/functions/textOptions.md (2 hunks)
  • docs/reference/index.md (3 hunks)
  • docs/reference/interfaces/AIAdapter.md (1 hunks)
  • docs/reference/interfaces/TextCompletionChunk.md (1 hunks)
  • docs/reference/interfaces/TextOptions.md (1 hunks)
  • docs/reference/type-aliases/TextStreamOptionsForModel.md (1 hunks)
  • docs/reference/type-aliases/TextStreamOptionsUnion.md (1 hunks)
  • examples/README.md (2 hunks)
  • examples/ts-group-chat/chat-server/capnweb-rpc.ts (7 hunks)
  • examples/ts-group-chat/chat-server/chat-logic.ts (2 hunks)
  • examples/ts-group-chat/chat-server/claude-service.ts (3 hunks)
  • examples/ts-group-chat/package.json (2 hunks)
  • examples/ts-react-chat/package.json (3 hunks)
  • examples/ts-react-chat/src/routes/api.tanchat.ts (4 hunks)
  • examples/ts-solid-chat/package.json (4 hunks)
  • examples/ts-solid-chat/src/routes/api.chat.ts (2 hunks)
  • examples/ts-svelte-chat/package.json (1 hunks)
  • examples/ts-svelte-chat/src/routes/api/chat/+server.ts (2 hunks)
  • examples/ts-vue-chat/package.json (1 hunks)
  • examples/ts-vue-chat/vite.config.ts (3 hunks)
  • examples/vanilla-chat/package.json (1 hunks)
  • package.json (1 hunks)
  • packages/typescript/ai-anthropic/package.json (1 hunks)
  • packages/typescript/ai-anthropic/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-anthropic/src/anthropic-adapter.ts (3 hunks)
  • packages/typescript/ai-anthropic/src/tools/custom-tool.ts (2 hunks)
  • packages/typescript/ai-anthropic/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-anthropic/src/utils/schema-converter.ts (1 hunks)
  • packages/typescript/ai-anthropic/tests/anthropic-adapter.test.ts (3 hunks)
  • packages/typescript/ai-client/package.json (1 hunks)
  • packages/typescript/ai-devtools/package.json (1 hunks)
  • packages/typescript/ai-devtools/src/store/ai-context.tsx (3 hunks)
  • packages/typescript/ai-devtools/vite.config.ts (1 hunks)
  • packages/typescript/ai-gemini/package.json (1 hunks)
  • packages/typescript/ai-gemini/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-gemini/src/gemini-adapter.ts (3 hunks)
  • packages/typescript/ai-gemini/src/tools/tool-converter.ts (2 hunks)
  • packages/typescript/ai-gemini/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-gemini/src/utils/schema-converter.ts (1 hunks)
  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts (6 hunks)
  • packages/typescript/ai-ollama/package.json (1 hunks)
  • packages/typescript/ai-ollama/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-ollama/src/ollama-adapter.ts (5 hunks)
  • packages/typescript/ai-ollama/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-ollama/src/utils/schema-converter.ts (1 hunks)
  • packages/typescript/ai-openai/package.json (1 hunks)
  • packages/typescript/ai-openai/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-openai/src/openai-adapter.ts (5 hunks)
  • packages/typescript/ai-openai/src/tools/function-tool.ts (1 hunks)
  • packages/typescript/ai-openai/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-openai/src/utils/schema-converter.ts (1 hunks)
  • packages/typescript/ai-openai/tests/openai-adapter.test.ts (2 hunks)
  • packages/typescript/ai-react-ui/package.json (1 hunks)
  • packages/typescript/ai-react/package.json (1 hunks)
  • packages/typescript/ai-solid-ui/package.json (1 hunks)
  • packages/typescript/ai-svelte/package.json (1 hunks)
  • packages/typescript/ai-vue-ui/package.json (1 hunks)
  • packages/typescript/ai/package.json (1 hunks)
  • packages/typescript/ai/src/activities/embedding/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/embedding/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/image/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/image/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/summarize/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/summarize/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/agent-loop-strategies.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/index.ts (10 hunks)
  • packages/typescript/ai/src/activities/text/messages.ts (4 hunks)
  • packages/typescript/ai/src/activities/text/stream/message-updaters.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/stream/processor.ts (2 hunks)
  • packages/typescript/ai/src/activities/text/stream/types.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/tools/tool-calls.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/tools/tool-definition.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/tools/zod-converter.ts (1 hunks)
  • packages/typescript/ai/src/ai.ts (1 hunks)
  • packages/typescript/ai/src/base-adapter.ts (2 hunks)
  • packages/typescript/ai/src/core/chat-common-options.ts (0 hunks)
  • packages/typescript/ai/src/core/embedding.ts (0 hunks)
  • packages/typescript/ai/src/core/summarize.ts (0 hunks)
  • packages/typescript/ai/src/event-client.ts (4 hunks)
  • packages/typescript/ai/src/index.ts (4 hunks)
  • packages/typescript/ai/src/stream-to-response.ts (4 hunks)
  • packages/typescript/ai/src/tools/zod-converter.ts (0 hunks)
  • packages/typescript/ai/src/types.ts (11 hunks)
  • packages/typescript/ai/src/utilities/chat-options.ts (0 hunks)
⛔ Files not processed due to max files limit (36)
  • packages/typescript/ai/src/utilities/messages.ts
  • packages/typescript/ai/tests/agent-loop-strategies.test.ts
  • packages/typescript/ai/tests/ai-abort.test.ts
  • packages/typescript/ai/tests/ai-text.test.ts
  • packages/typescript/ai/tests/generate-types.test-d.ts
  • packages/typescript/ai/tests/generate.test.ts
  • packages/typescript/ai/tests/message-updaters.test.ts
  • packages/typescript/ai/tests/strategies.test.ts
  • packages/typescript/ai/tests/stream-processor-edge-cases.test.ts
  • packages/typescript/ai/tests/stream-processor-replay.test.ts
  • packages/typescript/ai/tests/stream-processor.test.ts
  • packages/typescript/ai/tests/stream-to-response.test.ts
  • packages/typescript/ai/tests/tool-call-manager.test.ts
  • packages/typescript/ai/tests/tool-definition.test.ts
  • packages/typescript/ai/tests/zod-converter.test.ts
  • packages/typescript/ai/vite.config.ts
  • packages/typescript/react-ai-devtools/package.json
  • packages/typescript/smoke-tests/adapters/.env.example
  • packages/typescript/smoke-tests/adapters/package.json
  • packages/typescript/smoke-tests/adapters/src/adapters/index.ts
  • packages/typescript/smoke-tests/adapters/src/cli.ts
  • packages/typescript/smoke-tests/adapters/src/harness.ts
  • packages/typescript/smoke-tests/adapters/src/index.ts
  • packages/typescript/smoke-tests/adapters/src/tests/ags-agentic-structured.ts
  • packages/typescript/smoke-tests/adapters/src/tests/apr-approval-flow.ts
  • packages/typescript/smoke-tests/adapters/src/tests/cst-chat-stream.ts
  • packages/typescript/smoke-tests/adapters/src/tests/emb-embedding.ts
  • packages/typescript/smoke-tests/adapters/src/tests/img-image-generation.ts
  • packages/typescript/smoke-tests/adapters/src/tests/index.ts
  • packages/typescript/smoke-tests/adapters/src/tests/ost-one-shot-text.ts
  • packages/typescript/smoke-tests/adapters/src/tests/str-structured-output.ts
  • packages/typescript/smoke-tests/adapters/src/tests/sum-summarize.ts
  • packages/typescript/smoke-tests/adapters/src/tests/tls-tool-server.ts
  • packages/typescript/smoke-tests/e2e/package.json
  • packages/typescript/smoke-tests/e2e/src/routes/api.tanchat.ts
  • packages/typescript/solid-ai-devtools/package.json
💤 Files with no reviewable changes (5)
  • packages/typescript/ai/src/core/chat-common-options.ts
  • packages/typescript/ai/src/utilities/chat-options.ts
  • packages/typescript/ai/src/tools/zod-converter.ts
  • packages/typescript/ai/src/core/summarize.ts
  • packages/typescript/ai/src/core/embedding.ts
✅ Files skipped from review due to trivial changes (5)
  • packages/typescript/ai-react-ui/package.json
  • docs/reference/functions/text.md
  • packages/typescript/ai/src/activities/text/tools/tool-calls.ts
  • docs/reference/interfaces/TextCompletionChunk.md
  • examples/ts-vue-chat/package.json
🚧 Files skipped from review as they are similar to previous changes (4)
  • packages/typescript/ai-anthropic/src/utils/index.ts
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-openai/src/utils/index.ts
  • packages/typescript/ai-gemini/src/utils/index.ts
🧰 Additional context used
🧬 Code graph analysis (19)
packages/typescript/ai/src/activities/image/index.ts (1)
packages/typescript/ai/src/types.ts (1)
  • ImageGenerationResult (786-799)
packages/typescript/ai/src/base-adapter.ts (1)
packages/typescript/ai/src/types.ts (2)
  • TextOptions (557-598)
  • StreamChunk (688-696)
packages/typescript/ai-gemini/src/gemini-adapter.ts (1)
packages/typescript/ai/src/types.ts (1)
  • TextOptions (557-598)
examples/ts-group-chat/chat-server/chat-logic.ts (1)
examples/ts-group-chat/src/hooks/useChatMessages.ts (1)
  • ChatMessage (4-10)
packages/typescript/ai-openai/src/openai-adapter.ts (1)
packages/typescript/ai-openai/src/text/text-provider-options.ts (1)
  • validateTextProviderOptions (320-325)
packages/typescript/ai/src/activities/text/messages.ts (1)
packages/typescript/ai/src/types.ts (2)
  • Modality (92-92)
  • ConstrainedModelMessage (300-316)
packages/typescript/ai-devtools/src/store/ai-context.tsx (2)
packages/typescript/ai/src/event-client.ts (1)
  • aiEventClient (309-309)
packages/typescript/ai/src/index.ts (1)
  • aiEventClient (61-61)
packages/typescript/ai-ollama/src/adapters/text.ts (2)
packages/typescript/ai-ollama/src/utils/index.ts (4)
  • createOllamaClient (2-2)
  • convertZodToOllamaSchema (8-8)
  • generateId (4-4)
  • getOllamaHostFromEnv (5-5)
packages/typescript/ai-ollama/src/utils/client.ts (3)
  • createOllamaClient (10-14)
  • generateId (39-41)
  • getOllamaHostFromEnv (20-34)
packages/typescript/ai/src/activities/summarize/index.ts (2)
packages/typescript/ai/src/types.ts (1)
  • StreamChunk (688-696)
packages/typescript/ai/src/activities/text/index.ts (1)
  • createId (952-954)
packages/typescript/ai-anthropic/src/anthropic-adapter.ts (1)
packages/typescript/ai/src/types.ts (1)
  • TextOptions (557-598)
packages/typescript/ai/src/types.ts (1)
packages/typescript/ai/src/ai.ts (3)
  • TextAdapter (239-239)
  • EmbeddingAdapter (240-240)
  • SummarizeAdapter (241-241)
packages/typescript/ai/src/activities/embedding/index.ts (1)
packages/typescript/ai/src/activities/text/index.ts (1)
  • createId (952-954)
packages/typescript/ai/src/activities/text/adapter.ts (1)
packages/typescript/ai/src/types.ts (4)
  • TextOptions (557-598)
  • Modality (92-92)
  • DefaultMessageMetadataByModality (805-811)
  • StreamChunk (688-696)
packages/typescript/ai-openai/tests/openai-adapter.test.ts (2)
packages/typescript/ai/src/ai.ts (1)
  • ai (223-232)
packages/typescript/ai/src/index.ts (1)
  • ai (3-3)
packages/typescript/ai/src/activities/text/tools/zod-converter.ts (1)
packages/typescript/ai/src/types.ts (1)
  • SchemaInput (62-62)
packages/typescript/ai-anthropic/src/adapters/text.ts (4)
packages/typescript/ai-anthropic/src/utils/index.ts (4)
  • AnthropicClientConfig (5-5)
  • createAnthropicClient (2-2)
  • generateId (3-3)
  • getAnthropicApiKeyFromEnv (4-4)
packages/typescript/ai-anthropic/src/utils/client.ts (4)
  • AnthropicClientConfig (3-5)
  • createAnthropicClient (10-16)
  • generateId (43-45)
  • getAnthropicApiKeyFromEnv (22-38)
packages/typescript/ai-anthropic/src/model-meta.ts (1)
  • ANTHROPIC_MODELS (363-373)
packages/typescript/ai/src/types.ts (1)
  • ModelMessage (229-240)
packages/typescript/ai-ollama/src/ollama-adapter.ts (1)
packages/typescript/ai/src/types.ts (2)
  • TextOptions (557-598)
  • StreamChunk (688-696)
packages/typescript/ai-openai/src/adapters/text.ts (3)
packages/typescript/ai-openai/src/utils/index.ts (4)
  • OpenAIClientConfig (5-5)
  • createOpenAIClient (2-2)
  • generateId (4-4)
  • getOpenAIApiKeyFromEnv (3-3)
packages/typescript/ai-openai/src/model-meta.ts (1)
  • OPENAI_CHAT_MODELS (1608-1653)
packages/typescript/ai/src/types.ts (4)
  • TextOptions (557-598)
  • StreamChunk (688-696)
  • ModelMessage (229-240)
  • ContentPart (168-179)
packages/typescript/ai/src/activities/index.ts (1)
packages/typescript/ai/src/types.ts (4)
  • StreamChunk (688-696)
  • ImageGenerationResult (786-799)
  • ConstrainedModelMessage (300-316)
  • TextOptions (557-598)
🪛 Biome (2.1.2)
packages/typescript/ai-gemini/src/tools/tool-converter.ts

[error] 80-82: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Safe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)

🪛 LanguageTool
docs/adapters/ollama.md

[grammar] ~126-~126: Use a hyphen to join words.
Context: ...l, andqwen2` generally have good tool calling support. ## Provider Options O...

(QB_NEW_EN_HYPHEN)

```bash
ollama serve
```
**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool calling support.


⚠️ Potential issue | 🟡 Minor

Hyphenate “tool-calling” (LanguageTool)

```diff
-**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool calling support.
+**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool-calling support.
```

🤖 Prompt for AI Agents

In docs/adapters/ollama.md around line 126, the phrase "tool calling support" should be hyphenated to "tool-calling support" to match recommended compound-adjective style; update the sentence so models like `llama3`, `mistral`, and `qwen2` generally have good tool-calling support.

Comment on lines 135 to 149
```diff
-const stream = chat({
-  adapter: openai(),
+const stream = ai({
+  adapter: openaiText(),
   messages,
   model: "gpt-4o",
   providerOptions: {
     temperature: 0.7,
-    maxTokens: 1000,
-    topP: 0.9,
-    frequencyPenalty: 0.5,
-    presencePenalty: 0.5,
+    max_tokens: 1000,
+    top_p: 0.9,
+    frequency_penalty: 0.5,
+    presence_penalty: 0.5,
     stop: ["END"],
   },
 });
```


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Search for openaiText adapter definition
fd -t f -e ts -e js | xargs rg -l "openaiText" | head -20
```

Repository: TanStack/ai

Length of output: 564


🏁 Script executed:

```bash
#!/bin/bash
# Find adapter files in the codebase
fd -t d "adapter" | head -20
```

Repository: TanStack/ai

Length of output: 345


🏁 Script executed:

```bash
#!/bin/bash
# Search for providerOptions type definitions
rg "providerOptions" -A 5 -B 5 | head -100
```

Repository: TanStack/ai

Length of output: 7346


🌐 Web query:

OpenAI API parameter names: max_tokens vs max_output_tokens difference between Chat Completions and Responses API

💡 Result:

Short answer

  • Chat Completions API: historically uses max_tokens (now deprecated); newer chat reasoning models use max_completion_tokens to cap generated tokens. [1][3]
  • Responses API: uses max_output_tokens to cap generated tokens (this is the Responses API parameter for GPT-5 and o‑series models). [2][3]

Key practical differences

  • Name and API: max_tokens / max_completion_tokens → Chat Completions; max_output_tokens → Responses API. [1][2]
  • Model compatibility: some newer “o‑series” / GPT‑5 models expect max_output_tokens (Responses API); older chat endpoints accept max_completion_tokens (and legacy max_tokens). [1][2]
  • Behavior: all these parameters limit only the output tokens the model may generate (input + output must still fit model context). Chat Completions historically supports multiple completions (n); Responses API does not. [1][2][4]

Sources

  • OpenAI Chat/Completions reference (max_tokens deprecated, max_completion_tokens present). [1]
  • OpenAI Help Center “Controlling length” (Responses API uses max_output_tokens; notes about chat vs responses differences). [2]
  • OpenAI parameter details / token counting (input+output count toward context). [4]

References
[1] OpenAI API Reference — Chat Completions.
[2] OpenAI Help Center — Controlling the length of OpenAI model responses.
[3] OpenAI API Reference — Completions / Advanced parameter details.


🏁 Script executed:

```bash
#!/bin/bash
# Read the openaiText adapter implementation
cat -n packages/typescript/ai-openai/src/adapters/text.ts | head -100
```

Repository: TanStack/ai

Length of output: 3809


🏁 Script executed:

```bash
#!/bin/bash
# Search for provider options type definitions in OpenAI adapter
rg -A 20 "providerOptions" packages/typescript/ai-openai/src/adapters/text.ts
```

Repository: TanStack/ai

Length of output: 1570


🏁 Script executed:

```bash
#!/bin/bash
# Check the OpenAI adapter index to understand exports
cat -n packages/typescript/ai-openai/src/index.ts | head -80
```

Repository: TanStack/ai

Length of output: 2701


🏁 Script executed:

```bash
#!/bin/bash
# Search for max_tokens or max_output_tokens in OpenAI adapter
rg "max_tokens|max_output_tokens|max_completion_tokens" packages/typescript/ai-openai/
```

Repository: TanStack/ai

Length of output: 4450


🏁 Script executed:

```bash
#!/bin/bash
# Read the text provider options file to see all valid parameters
cat -n packages/typescript/ai-openai/src/text/text-provider-options.ts
```

Repository: TanStack/ai

Length of output: 18555


🏁 Script executed:

```bash
#!/bin/bash
# Check the Responses API documentation comment or types in the adapter
rg -B 5 -A 15 "frequency_penalty|presence_penalty" packages/typescript/ai-openai/src/text/text-provider-options.ts
```

Repository: TanStack/ai

Length of output: 37


Fix `providerOptions` parameters to match OpenAI Responses API (not Chat Completions).

The example uses Chat Completions API parameters that are invalid for the Responses API:

  • `max_tokens` → should be `max_output_tokens`
  • `frequency_penalty`, `presence_penalty`, `stop` → not supported by Responses API

The underlying adapter calls `client.responses.create()`, which expects Responses API parameter names. Valid parameters include: `temperature`, `top_p`, `max_output_tokens`, `tool_choice`, `parallel_tool_calls`, and others, but NOT the Chat Completions parameters shown.

🤖 Prompt for AI Agents

In docs/adapters/openai.md around lines 135 to 149, the sample providerOptions uses Chat Completions parameters that are invalid for the OpenAI Responses API; replace max_tokens with max_output_tokens and remove unsupported keys (frequency_penalty, presence_penalty, stop), and if needed add valid Responses API options such as temperature, top_p, max_output_tokens, tool_choice, or parallel_tool_calls to match client.responses.create() parameter names; update the example providerOptions to only include supported Responses API fields.

Comment on lines 176 to 178
```typescript
import { ai } from '@tanstack/ai';
import { openaiText } from '@tanstack/ai-openai';
```


⚠️ Potential issue | 🟠 Major

NDJSON error chunk shape is inconsistent with the document's own examples.

Earlier examples include `id`, `model`, `timestamp` on chunks, but the server snippets' `errorChunk` only includes `{ type, error }`. Either update the snippet to include the same envelope fields, or explicitly state that error chunks are allowed to be minimal.

Suggested doc-snippet adjustment:

```diff
-        const errorChunk = {
-          type: 'error',
-          error: {
-            message: error.message || 'Unknown error',
-            code: error.code,
-          },
-        };
+        const errorChunk = {
+          type: 'error',
+          id: 'msg_1',
+          model: 'gpt-4o',
+          timestamp: Date.now(),
+          error: {
+            message: error?.message || 'Unknown error',
+            code: error?.code,
+          },
+        };
```

Also applies to: 183-187, 225-227, 239-243

🤖 Prompt for AI Agents

In docs/protocol/http-stream-protocol.md around lines 176-178 (and also apply the same change to 183-187, 225-227, 239-243): the NDJSON error chunk examples are inconsistent with the earlier chunk envelope (which includes id, model, timestamp). Either change each server-side errorChunk snippet to include the same envelope fields (e.g., id, model, timestamp plus type and error) so the shape matches the document's examples, or add a brief note above the snippets explicitly stating that error chunks may be minimal and do not include the envelope fields; make the chosen approach consistent across all listed ranges.

Comment on lines +2 to 16
```diff
 id: TextStreamOptionsForModel
 title: TextStreamOptionsForModel
 ---

-# Type Alias: ChatStreamOptionsForModel\<TAdapter, TModel\>
+# Type Alias: TextStreamOptionsForModel\<TAdapter, TModel\>

-type ChatStreamOptionsForModel<TAdapter, TModel> = TAdapter extends AIAdapter<any, any, any, any, infer ModelProviderOptions, infer ModelInputModalities, infer MessageMetadata> ? Omit<ChatOptions, "model" | "providerOptions" | "responseFormat" | "messages"> & object : never;
+type TextStreamOptionsForModel<TAdapter, TModel> = TAdapter extends AIAdapter<any, any, any, any, infer ModelProviderOptions, infer ModelInputModalities, infer MessageMetadata> ? Omit<TextOptions, "model" | "providerOptions" | "responseFormat" | "messages"> & object : never;

 Defined in: [types.ts:883](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L883)

 Chat options constrained by a specific model's capabilities.
-Unlike ChatStreamOptionsUnion which creates a union over all models,
+Unlike TextStreamOptionsUnion which creates a union over all models,
 this type takes a specific model and constrains messages accordingly.
```


⚠️ Potential issue | 🟡 Minor

Description still says "Chat options" after rename to `TextStreamOptionsForModel`.

```diff
-Chat options constrained by a specific model's capabilities.
+Text options constrained by a specific model's capabilities.
```
🤖 Prompt for AI Agents
docs/reference/type-aliases/TextStreamOptionsForModel.md around lines 2 to 16: the file header and paragraph still refer to "Chat options" after the type was renamed; update the descriptive text to say "Text stream options" (or "Text options") and adjust the comparison sentence to refer to TextStreamOptionsUnion (keep the same intent: this type constrains text stream/messages for a specific model rather than creating a union over all models). Ensure any leftover “chat” wording is replaced and the description matches the type name.

Comment on lines 186 to 228
```ts
async function runSummarize(
  options: SummarizeActivityOptions<
    SummarizeAdapter<ReadonlyArray<string>, object>,
    string,
    false
  >,
): Promise<SummarizationResult> {
  const { adapter, model, text, maxLength, style, focus } = options
  const requestId = createId('summarize')
  const inputLength = text.length
  const startTime = Date.now()

  aiEventClient.emit('summarize:started', {
    requestId,
    model,
    inputLength,
    timestamp: startTime,
  })

  const summarizeOptions: SummarizationOptions = {
    model,
    text,
    maxLength,
    style,
    focus,
  }

  const result = await adapter.summarize(summarizeOptions)

  const duration = Date.now() - startTime
  const outputLength = result.summary.length

  aiEventClient.emit('summarize:completed', {
    requestId,
    model,
    inputLength,
    outputLength,
    duration,
    timestamp: Date.now(),
  })

  return result
}
```


⚠️ Potential issue |🟠 Major

Streaming path skips summarize:* telemetry; add parity with non-streaming.
Today, stream: true won’t emit summarize:started / summarize:completed, which makes metrics incomplete.

```diff
 async function* runStreamingSummarize(
   options: SummarizeActivityOptions<
     SummarizeAdapter<ReadonlyArray<string>, object>,
     string,
     true
   >,
 ): AsyncIterable<StreamChunk> {
   const { adapter, model, text, maxLength, style, focus } = options
+  const requestId = createId('summarize')
+  const inputLength = text.length
+  const startTime = Date.now()
+
+  aiEventClient.emit('summarize:started', {
+    requestId,
+    model,
+    inputLength,
+    timestamp: startTime,
+  })
   const summarizeOptions: SummarizationOptions = {
     model,
     text,
     maxLength,
     style,
     focus,
   }
   const result = await adapter.summarize(summarizeOptions)
+  const duration = Date.now() - startTime
+  const outputLength = result.summary.length
+
+  aiEventClient.emit('summarize:completed', {
+    requestId,
+    model,
+    inputLength,
+    outputLength,
+    duration,
+    timestamp: Date.now(),
+  })
   // Yield content chunk with the summary
   yield {
     type: 'content',
     id: result.id,
     model: result.model,
     timestamp: Date.now(),
     delta: result.summary,
     content: result.summary,
     role: 'assistant',
   }
```

Also applies to: 234-273

🤖 Prompt for AI Agents
In packages/typescript/ai/src/activities/summarize/index.ts around lines 186-228 (and similarly for the streaming implementation at 234-273), the streaming path does not emit the summarize:started and summarize:completed telemetry events; update the streaming branch to mirror the non-streaming flow by: emit summarize:started before processing (including requestId, model, inputLength, timestamp), record startTime, call the adapter stream API, capture outputLength/duration when the stream completes (or on the final chunk), then emit summarize:completed with requestId, model, inputLength, outputLength, duration, timestamp, and finally return the same SummarizationResult shape; ensure requestId is created the same way as non-streaming and that errors also emit/handle telemetry consistently.

Comment on lines +109 to +163
export abstract class BaseTextAdapter<
TModels extends ReadonlyArray<string> = ReadonlyArray<string>,
TProviderOptions extends object = Record<string, unknown>,
TModelProviderOptionsByName extends Record<string, unknown> = Record<
string,
unknown
>,
TModelInputModalitiesByName extends Record<string, ReadonlyArray<Modality>> =
Record<string, ReadonlyArray<Modality>>,
TMessageMetadataByModality extends {
text: unknown
image: unknown
audio: unknown
video: unknown
document: unknown
} = DefaultMessageMetadataByModality,
> implements TextAdapter<
TModels,
TProviderOptions,
TModelProviderOptionsByName,
TModelInputModalitiesByName,
TMessageMetadataByModality
> {
readonly kind = 'text' as const
abstract readonly name: string
abstract readonly models: TModels

// Type-only properties - never assigned at runtime
declare _providerOptions?: TProviderOptions
declare _modelProviderOptionsByName?: TModelProviderOptionsByName
declare _modelInputModalitiesByName?: TModelInputModalitiesByName
declare _messageMetadataByModality?: TMessageMetadataByModality

protected config: TextAdapterConfig

constructor(config: TextAdapterConfig = {}) {
this.config = config
}

abstract chatStream(
options: TextOptions<string, TProviderOptions>,
): AsyncIterable<StreamChunk>

/**
* Generate structured output using the provider's native structured output API.
* Concrete implementations should override this to use provider-specific structured output.
*/
abstract structuredOutput(
options: StructuredOutputOptions<TProviderOptions>,
): Promise<StructuredOutputResult<unknown>>

protected generateId(): string {
return `${this.name}-${Date.now()}-${Math.random().toString(36).substring(7)}`
}
}


⚠️ Potential issue |🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```bash
# First, let's find and examine TextAdapterConfig
fd -e ts -e js | xargs rg "TextAdapterConfig" -A 5 -B 2
```

Repository: TanStack/ai

Length of output: 2577


🏁 Script executed:

```bash
# Let's also search for concrete implementations of BaseTextAdapter
rg "extends BaseTextAdapter|implements TextAdapter" -A 10 -B 2
```

Repository: TanStack/ai

Length of output: 9430


🏁 Script executed:

```bash
# Check the file structure to understand the context better
git ls-files | grep -E "(adapter|config)" | head -20
```

Repository: TanStack/ai

Length of output: 713


🏁 Script executed:

```bash
# Look at OpenAITextAdapter implementation
cat -n packages/typescript/ai-openai/src/adapters/text.ts | head -100
```

Repository: TanStack/ai

Length of output: 3809


🏁 Script executed:

```bash
# Look at GeminiTextAdapter implementation
cat -n packages/typescript/ai-gemini/src/adapters/text.ts | head -100
```

Repository: TanStack/ai

Length of output: 3603


🏁 Script executed:

```bash
# Check the chatStream method in OpenAITextAdapter
rg "chatStream" packages/typescript/ai-openai/src/adapters/text.ts -A 30
```

Repository: TanStack/ai

Length of output: 2275


🏁 Script executed:

```bash
# Check the chatStream method in GeminiTextAdapter
rg "chatStream" packages/typescript/ai-gemini/src/adapters/text.ts -A 30
```

Repository: TanStack/ai

Length of output: 1029


🏁 Script executed:

```bash
# Find createOpenAIClient function
rg "export.*createOpenAIClient|function createOpenAIClient" packages/typescript/ai-openai -A 20
```

Repository: TanStack/ai

Length of output: 1709


🏁 Script executed:

```bash
# Find createGeminiClient function
rg "export.*createGeminiClient|function createGeminiClient" packages/typescript/ai-gemini -A 20
```

Repository: TanStack/ai

Length of output: 1719


🏁 Script executed:

```bash
# Check OpenAIClientConfig interface
rg "interface OpenAIClientConfig|type OpenAIClientConfig" packages/typescript/ai-openai -B 2 -A 10
```

Repository: TanStack/ai

Length of output: 1564


🏁 Script executed:

```bash
# Check GeminiClientConfig interface
rg "interface GeminiClientConfig|type GeminiClientConfig" packages/typescript/ai-gemini -B 2 -A 10
```

Repository: TanStack/ai

Length of output: 1397


🌐 Web query:

OpenAI Node.js SDK configuration timeout retries headers per-request

💡 Result:

Short answer — how to configure these in the official OpenAI Node.js SDK:

  • Client-level:

    • timeout (ms): pass to the client constructor. Default = 10 minutes (600000 ms). Example: new OpenAI({ timeout: 20 * 1000, maxRetries: 0 }). [1][2]
    • maxRetries: pass to the client constructor. Default = 2. Example above. [1][2]
    • fetchOptions: RequestInit to customize fetch behavior (proxy/agent, headers, etc.) for all requests. [1]
  • Per-request overrides:

    • Pass a second options object to the API call (same keys as client-level). Examples:
      • timeout override: await client.chat.completions.create(params, { timeout: 5 * 1000 });
      • maxRetries override: await client.chat.completions.create(params, { maxRetries: 5 });
      • per-request headers: use fetchOptions.headers in the second arg: await client.chat.completions.create(params, { fetchOptions: { headers: { 'X-My-Header': 'value' } } });. [1][2]

Notes:

  • Requests that time out are retried by default (2 attempts) unless you change maxRetries. [1][2]
  • You can also pass a custom fetch implementation via new OpenAI({ fetch }) or tweak lower-level fetch options via fetchOptions. [1]

Sources: OpenAI Node.js README (official repo) and package README. [1][2]

References
[1] openai/openai-node README (GitHub).
[2] openai package README (unpkg/npm).
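To make the two layers concrete, a minimal sketch based on the options named above (the params object here is illustrative, not from this repo):

```ts
import OpenAI from 'openai'

// Client-level defaults: 20s timeout, no automatic retries.
const client = new OpenAI({ timeout: 20 * 1000, maxRetries: 0 })

// Per-request overrides via the second options argument.
const completion = await client.chat.completions.create(
  { model: 'gpt-4o-mini', messages: [{ role: 'user', content: 'Hi' }] },
  {
    timeout: 5 * 1000,
    maxRetries: 5,
    fetchOptions: { headers: { 'X-My-Header': 'value' } },
  },
)
```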


🌐 Web query:

Google GenAI JavaScript SDK configuration timeout retries headers per-request

💡 Result:

Short answer — yes. In the Google GenAI JavaScript/TypeScript SDK you can configure timeout, retries and per-request headers either globally when you create the client or per-request via the request/config/http options.

How to set them (concise):

  • Client-level (constructor) and per-request fields

    • Use the SDK’s http/httpOptions (or per-call config) to set timeout and headers. timeout is in milliseconds; headers override/merge per-request. Example pattern: create client with default http options, then pass per-call options to override. (googleapis.github.io)
  • Timeout

    • Set timeout (ms) in the http/httpOptions (or RequestOptions.timeout for Vertex AI). Per-request timeout overrides the client default. (googleapis.github.io)
  • Retries

    • The SDK exposes retry configuration (retry / retry_options) you can set in http options or RequestOptions (Vertex AI uses google api_core retry objects in some implementations). Configure attempts, initial delay, backoff, jitter and which status codes to retry. Per-request retry options override constructor defaults. (googleapis.github.io)
  • Headers (per-request)

    • Use headers / customHeaders (or httpOptions.headers / RequestOptions.customHeaders) to pass per-request HTTP headers; they merge/override client defaults. There’s also apiClient / x-goog-api-client support for SDK identifiers. (cloud.google.com)

Minimal example (illustrative):

  • Client default:
const ai = new GoogleGenAI({ /* ... */, httpOptions: { timeout: 10000, headers: { 'X-App': 'me' }, retry_options: { attempts: 4, initial_delay: 1 } } });
  • Per-call override:
    await ai.models.generateContent({ model:'gemini-2.5', contents:'...', httpOptions: { timeout: 2000, headers: { Authorization: 'Bearer ...' } } });

References:

  1. Google Gen AI JS SDK docs (client & examples). (googleapis.github.io)
  2. Vertex AI RequestOptions (customHeaders, timeout). (cloud.google.com)
  3. GenAI SDK http/retry options (httpOptions / retry_options, merging behavior). (googleapis.github.io)

If you want, tell me which API (Gemini Developer API vs Vertex AI), which SDK version, and I’ll give an exact code snippet for your case.


TextAdapterConfig properties (timeout/maxRetries/headers) are defined but not applied; either remove them or pass them to the SDK clients. Both the OpenAI and Google GenAI SDKs support timeout, retries, and headers at client-level initialization and as per-request overrides. Currently, OpenAIClientConfig and GeminiClientConfig only apply apiKey and baseURL, ignoring timeout/maxRetries/headers from TextAdapterConfig. Either remove these unused properties or apply them: at client creation in createOpenAIClient/createGeminiClient, and at per-request time in chatStream/structuredOutput.
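One possible fix, as a sketch only (the TextAdapterConfig field names follow the description above; defaultHeaders is the openai-node constructor option for static headers — the repo's actual config shape may differ):

```ts
import OpenAI from 'openai'

// Assumed shape of the shared adapter config discussed above.
interface TextAdapterConfig {
  apiKey?: string
  baseURL?: string
  timeout?: number
  maxRetries?: number
  headers?: Record<string, string>
}

// Sketch: thread timeout/maxRetries/headers into client creation
// instead of silently dropping them.
function createOpenAIClient(config: TextAdapterConfig): OpenAI {
  return new OpenAI({
    apiKey: config.apiKey,
    baseURL: config.baseURL,
    timeout: config.timeout,
    maxRetries: config.maxRetries,
    defaultHeaders: config.headers,
  })
}
```

The same pattern would apply on the Gemini side via httpOptions at client construction, with per-request overrides in chatStream/structuredOutput.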

Comment on lines 14 to 69
/**
* Type-safe helper to create a messages array constrained by a model's supported modalities.
*
* This function provides compile-time checking that your messages only contain
* content types supported by the specified model. It's particularly useful when
* combining typed messages with untyped data (like from request.json()).
*
* @example
* ```typescript
* import { messages, chat } from '@tanstack/ai'
* import { openai } from '@tanstack/ai-openai'
*
* const adapter = openai()
*
* // This will error at compile time because gpt-4o only supports text+image
* const msgs = messages({ adapter, model: 'gpt-4o' }, [
* {
* role: 'user',
* content: [
* { type: 'video', source: { type: 'url', value: '...' } } // Error!
* ]
* }
* ])
* ```
*/
export function messages<
TAdapter extends AIAdapter<any, any, any, any, any, any>,
const TModel extends TAdapter extends AIAdapter<
infer Models,
any,
any,
any,
any,
any
>
? Models[number]
: string,
>(
_options: { adapter: TAdapter; model: TModel },
msgs: TAdapter extends AIAdapter<
any,
any,
any,
any,
any,
infer ModelInputModalities
>
? TModel extends keyof ModelInputModalities
? ModelInputModalities[TModel] extends ReadonlyArray<Modality>
? Array<ConstrainedModelMessage<ModelInputModalities[TModel]>>
: Array<ModelMessage>
: Array<ModelMessage>
: Array<ModelMessage>,
): typeof msgs {
return msgs
}


⚠️ Potential issue |🟡 Minor

TSDoc example likely out-of-date + consider ReadonlyArray + silence unused _options.

  • The example references chat and openai(); PR context suggests new entry points (e.g. ai() and split adapters). Update the snippet to match the actual exported API to avoid misleading public docs.
  • msgs should accept readonly arrays (common with as const) since the helper doesn’t mutate.
  • If noUnusedParameters/ESLint is enabled, _options may still error; void _options keeps runtime effectively unchanged.
```diff
 export function messages<
   TAdapter extends AIAdapter<any, any, any, any, any, any>,
   const TModel extends TAdapter extends AIAdapter<
     infer Models,
     any,
     any,
     any,
     any,
     any
   >
     ? Models[number]
     : string,
 >(
   _options: { adapter: TAdapter; model: TModel },
   msgs: TAdapter extends AIAdapter<
     any,
     any,
     any,
     any,
     any,
     infer ModelInputModalities
   >
     ? TModel extends keyof ModelInputModalities
       ? ModelInputModalities[TModel] extends ReadonlyArray<Modality>
-        ? Array<ConstrainedModelMessage<ModelInputModalities[TModel]>>
-        : Array<ModelMessage>
-      : Array<ModelMessage>
-    : Array<ModelMessage>,
+        ? ReadonlyArray<ConstrainedModelMessage<ModelInputModalities[TModel]>>
+        : ReadonlyArray<ModelMessage>
+      : ReadonlyArray<ModelMessage>
+    : ReadonlyArray<ModelMessage>,
 ): typeof msgs {
+  void _options
   return msgs
 }
```
📝 Committable suggestion

‼️IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
````ts
/**
 * Type-safe helper to create a messages array constrained by a model's supported modalities.
 *
 * This function provides compile-time checking that your messages only contain
 * content types supported by the specified model. It's particularly useful when
 * combining typed messages with untyped data (like from request.json()).
 *
 * @example
 * ```typescript
 * import { messages, chat } from '@tanstack/ai'
 * import { openai } from '@tanstack/ai-openai'
 *
 * const adapter = openai()
 *
 * // This will error at compile time because gpt-4o only supports text+image
 * const msgs = messages({ adapter, model: 'gpt-4o' }, [
 *   {
 *     role: 'user',
 *     content: [
 *       { type: 'video', source: { type: 'url', value: '...' } } // Error!
 *     ]
 *   }
 * ])
 * ```
 */
export function messages<
  TAdapter extends AIAdapter<any, any, any, any, any, any>,
  const TModel extends TAdapter extends AIAdapter<
    infer Models,
    any,
    any,
    any,
    any,
    any
  >
    ? Models[number]
    : string,
>(
  _options: { adapter: TAdapter; model: TModel },
  msgs: TAdapter extends AIAdapter<
    any,
    any,
    any,
    any,
    any,
    infer ModelInputModalities
  >
    ? TModel extends keyof ModelInputModalities
      ? ModelInputModalities[TModel] extends ReadonlyArray<Modality>
        ? ReadonlyArray<ConstrainedModelMessage<ModelInputModalities[TModel]>>
        : ReadonlyArray<ModelMessage>
      : ReadonlyArray<ModelMessage>
    : ReadonlyArray<ModelMessage>,
): typeof msgs {
  void _options
  return msgs
}
````
🤖 Prompt for AI Agents
packages/typescript/ai/src/activities/text/messages.ts lines 14-69: the TSDoc example is outdated (references chat and openai()), the msgs parameter should accept ReadonlyArray so callers can pass as const, and the unused _options parameter should be silenced; update the doc example to use the current public API (e.g., ai() and the new adapter entry points) to match exports, change the msgs type to also accept ReadonlyArray<...> variants wherever arrays are returned so readonly inputs compile, and add a runtime noop reference like void _options at the top of the function body to avoid noUnusedParameters/ESLint errors while preserving behavior.

@@ -0,0 +1,235 @@
import { toJSONSchema } from 'zod'


⚠️ Potential issue |🟡 Minor

🧩 Analysis chain

🏁 Script executed:

```bash
# Find and inspect package.json files to check zod version
find . -name "package.json" -type f | head -20
```

Repository: TanStack/ai

Length of output: 924


🏁 Script executed:

```bash
# Check the specific file's line 181 to see the full comment
sed -n '175,190p' packages/typescript/ai/src/activities/text/tools/zod-converter.ts
```

Repository: TanStack/ai

Length of output: 477


🏁 Script executed:

```bash
# Check package.json in the project root and relevant workspace directories
if [ -f "package.json" ]; then
  echo "=== Root package.json zod dependency ==="
  jq '.dependencies.zod // .devDependencies.zod // "not found"' package.json
fi

# Check in the typescript/ai package
if [ -f "packages/typescript/ai/package.json" ]; then
  echo "=== packages/typescript/ai/package.json zod dependency ==="
  jq '.dependencies.zod // .devDependencies.zod // "not found"' packages/typescript/ai/package.json
fi
```

Repository: TanStack/ai

Length of output: 176


🌐 Web query:

zod v4 toJSONSchema function API documentation

💡 Result:

z.toJSONSchema(schema, params?) — converts a Zod v4 schema to JSON Schema. Key details:

  • Signature: z.toJSONSchema(schema, ToJSONSchemaParams?). (zod.dev)
  • Important ToJSONSchemaParams:
    • target?: "draft-4" | "draft-7" | "draft-2020-12" | "openapi-3.0" (default: draft-2020-12). (zod.dev)
    • metadata?: $ZodRegistry<Record<string, any>> (use z.globalRegistry / .meta()). (zod.dev)
    • unrepresentable?: "throw" | "any" (default "throw") — controls handling of Zod types that have no JSON Schema equivalent. (zod.dev)
    • cycles?: "ref" | "throw" (default "ref") — how to handle cyclic schemas. (zod.dev)
    • reused?: "ref" | "inline" (default "inline") — how to handle reused schemas. (zod.dev)
    • uri?: (id: string) => string — mapper for external $ref URIs. (zod.dev)
  • Notes:
    • Some Zod types are unrepresentable (bigint, date, map, set, transform, etc.); default behavior is to throw unless unrepresentable: "any" is set. (zod.dev)
    • Converts many Zod checks to formats, patterns, numeric constraints, object additionalProperties behavior, etc.; see the JSON Schema docs for detailed mapping and examples. (zod.dev)

Official docs (full reference and examples): Zod JSON Schema docs. (zod.dev)

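For orientation, a minimal usage sketch of the v4 API described above (schema and option values are illustrative):

```ts
import { z, toJSONSchema } from 'zod'

const Person = z.object({
  name: z.string(),
  age: z.number().optional(),
})

// Target OpenAPI 3.0 output; represent unrepresentable Zod types
// permissively instead of throwing.
const jsonSchema = toJSONSchema(Person, {
  target: 'openapi-3.0',
  unrepresentable: 'any',
})
```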

🏁 Script executed:

```bash
# Check if zod package is installed and examine its exports
if [ -d "node_modules/zod" ]; then
  echo "=== Checking zod package exports ==="
  cat node_modules/zod/package.json | jq '.exports' | head -30
  echo -e "\n=== Checking zod lib/types.d.ts for toJSONSchema ==="
  grep -n "toJSONSchema" node_modules/zod/lib/types.d.ts | head -5
fi
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```bash
# Search the codebase for any other references to Alcyone Labs or toJSONSchema to understand usage pattern
rg -i "alcyone|toJSONSchema" --type ts --type js -B 2 -A 2
```

Repository: TanStack/ai

Length of output: 22211


Update comment on line 181 to remove outdated fork reference.

The import import { toJSONSchema } from 'zod' is correct. However, the comment "Use Alcyone Labs fork which is compatible with Zod v4" is outdated. toJSONSchema is now part of Zod v4's standard API. Update the comment to reflect this: // Use Zod's built-in toJSONSchema for OpenAI 3.0 compatibility or similar.

🤖 Prompt for AI Agents
In packages/typescript/ai/src/activities/text/tools/zod-converter.ts around line 181, update the outdated comment that references the Alcyone Labs fork; replace it with a current comment noting that toJSONSchema is provided by Zod v4 (e.g. "// Use Zod's built-in toJSONSchema for OpenAI 3.0 compatibility") so the comment accurately reflects that toJSONSchema is part of Zod v4's standard API.

Comment on lines +59 to +72
} else if (wasOptional) {
// Make optional fields nullable by adding null to the type
if (prop.type && !Array.isArray(prop.type)) {
properties[propName] = {
...prop,
type: [prop.type, 'null'],
}
} else if (Array.isArray(prop.type) && !prop.type.includes('null')) {
properties[propName] = {
...prop,
type: [...prop.type, 'null'],
}
}
}


⚠️ Potential issue |🟡 Minor

Handle edge case: optional fields without a type property.

The logic for making optional fields nullable assumes prop.type exists. However, JSON Schema properties can omit type when using oneOf, anyOf, or other combinators. If an optional field lacks a type property, it won't be made nullable, which could violate OpenAI's structured output requirements.

Consider adding a fallback for properties without type:

```diff
       } else if (wasOptional) {
         // Make optional fields nullable by adding null to the type
         if (prop.type && !Array.isArray(prop.type)) {
           properties[propName] = {
             ...prop,
             type: [prop.type, 'null'],
           }
         } else if (Array.isArray(prop.type) && !prop.type.includes('null')) {
           properties[propName] = {
             ...prop,
             type: [...prop.type, 'null'],
           }
+        } else if (!prop.type && (prop.oneOf || prop.anyOf || prop.allOf)) {
+          // For schema composition, wrap in anyOf with null
+          properties[propName] = {
+            ...prop,
+            anyOf: [...(prop.anyOf || [prop]), { type: 'null' }],
+          }
         }
       }
```
📝 Committable suggestion

‼️IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```ts
} else if (wasOptional) {
  // Make optional fields nullable by adding null to the type
  if (prop.type && !Array.isArray(prop.type)) {
    properties[propName] = {
      ...prop,
      type: [prop.type, 'null'],
    }
  } else if (Array.isArray(prop.type) && !prop.type.includes('null')) {
    properties[propName] = {
      ...prop,
      type: [...prop.type, 'null'],
    }
  } else if (!prop.type && (prop.oneOf || prop.anyOf || prop.allOf)) {
    // For schema composition, wrap in anyOf with null
    properties[propName] = {
      ...prop,
      anyOf: [...(prop.anyOf || [prop]), { type: 'null' }],
    }
  }
}
```
🤖 Prompt for AI Agents
In packages/typescript/ai/src/activities/text/tools/zod-converter.ts around lines 59 to 72, the optional-to-nullable logic assumes prop.type exists; add a fallback branch for when prop.type is undefined so optional fields without an explicit type become nullable. Specifically, inside the wasOptional branch, detect if prop.type is missing and set properties[propName] = { ...prop, type: ['null'] } (so the schema allows null); keep the existing branches for single-type and array-type cases unchanged.

tannerlinsley previously approved these changes Dec 12, 2025
@tannerlinsley dismissed their stale review December 12, 2025 21:15

I guess I hit "approve" instead of "start review" and now all of my queued comments are gone from the window...

* video generation
* text to speech and speech to text
* adding some cool audio UI to the dev panel
* small fixups
* ci: apply automated fixes
* client fixes on tool calls
* few more client fixups
* one last test fix

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

@coderabbitai bot left a comment


Actionable comments posted: 11

♻️ Duplicate comments (3)
packages/typescript/ai/src/types.ts (2)

572-578: Past concern: Clarify relationship between outputSchema and output fields.

This issue was previously flagged. The ambiguity between output?: TOutput (line 571) and outputSchema?: z.ZodType (line 578) still exists, with no clear precedence or validation when both are provided.


794-798: Past concern: Standardize token usage field naming.

This issue was previously flagged. The usage object in ImageGenerationResult uses inputTokens and outputTokens, while other result types use promptTokens and completionTokens. This naming inconsistency should be addressed.

docs/adapters/openai.md (1)

147-161: Verify providerOptions parameters match the underlying API.

This issue was flagged in a previous review. The example uses max_tokens, frequency_penalty, presence_penalty, and stop, which are Chat Completions API parameters. If the underlying adapter uses the Responses API (client.responses.create()), these should be updated to valid Responses API parameters like max_output_tokens.

```bash
#!/bin/bash
# Check which OpenAI API endpoint the text adapter uses
rg -n "responses\.create|chat\.completions\.create" packages/typescript/ai-openai/src/adapters/text.ts
```
🧹 Nitpick comments (15)
packages/typescript/ai/src/activities/transcription/adapter.ts (2)

61-74: Make ID generation robust (avoid Math.random() / collisions).

generateId() is collision-prone and not ideal for tracing. Prefer crypto.randomUUID() (with an isomorphic fallback) and consider keeping a stable format for logs/telemetry.

```diff
 export abstract class BaseTranscriptionAdapter<
   TModels extends ReadonlyArray<string> = ReadonlyArray<string>,
   TProviderOptions extends object = Record<string, unknown>,
 > implements TranscriptionAdapter<TModels, TProviderOptions> {
@@
-  protected generateId(): string {
-    return `${this.name}-${Date.now()}-${Math.random().toString(36).substring(7)}`
-  }
+  protected generateId(): string {
+    const uuid =
+      typeof crypto !== 'undefined' && 'randomUUID' in crypto
+        ? crypto.randomUUID()
+        : `${Date.now()}-${Math.random().toString(16).slice(2)}`
+    return `${this.name}-${uuid}`
+  }
 }
```

61-65: Consider readonly config to prevent accidental mutation.

```diff
-  protected config: TranscriptionAdapterConfig
+  protected readonly config: TranscriptionAdapterConfig
```
packages/typescript/ai/src/activities/transcription/index.ts (2)

44-62: Avoid duplicating the TranscriptionOptions shape (drift risk).

TranscriptionActivityOptions re-specifies fields that already exist in TranscriptionOptions (audio, language, prompt, responseFormat, providerOptions). Reuse the shared type so future changes don’t silently diverge.

```diff
-import type { TranscriptionResult } from '../../types'
+import type { TranscriptionOptions, TranscriptionResult } from '../../types'
@@
 export interface TranscriptionActivityOptions<
   TAdapter extends TranscriptionAdapter<ReadonlyArray<string>, object>,
   TModel extends TranscriptionModels<TAdapter>,
-> {
+> extends Omit<
+    TranscriptionOptions<TranscriptionProviderOptions<TAdapter>>,
+    'model'
+  > {
   /** The transcription adapter to use */
   adapter: TAdapter & { kind: typeof kind }
   /** The model name (autocompletes based on adapter) */
   model: TModel
-  /** The audio data to transcribe - can be base64 string, File, Blob, or Buffer */
-  audio: string | File | Blob | ArrayBuffer
-  /** The language of the audio in ISO-639-1 format (e.g., 'en') */
-  language?: string
-  /** An optional prompt to guide the transcription */
-  prompt?: string
-  /** The format of the transcription output */
-  responseFormat?: 'json' | 'text' | 'srt' | 'verbose_json' | 'vtt'
-  /** Provider-specific options for transcription */
-  providerOptions?: TranscriptionProviderOptions<TAdapter>
 }
```

109-118: Optional: add a runtime kind assert for better DX on mis-wired adapters.

Type checks can be bypassed with any; a simple runtime guard would produce a clearer error than “transcribe is not a function”.
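For example, a guard along these lines (a sketch; it assumes the adapter's kind discriminant is the literal 'transcription'):

```ts
// Fails fast with a descriptive error instead of the opaque
// "adapter.transcribe is not a function".
function assertTranscriptionAdapter(adapter: {
  kind?: string
  name?: string
}): void {
  if (adapter.kind !== 'transcription') {
    throw new Error(
      `Expected a transcription adapter, got kind "${adapter.kind}"` +
        (adapter.name ? ` from adapter "${adapter.name}"` : ''),
    )
  }
}
```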

packages/typescript/ai-client/src/chat-client.ts (1)

472-476: Consider adding safeguards for continuation chains.

The current design allows indefinite continuation: stream → drain → checkForContinuation → new stream → repeat. While this enables multi-turn tool calling, it could theoretically loop indefinitely if tool execution patterns create cycles.

Consider adding:

  • A maximum continuation depth counter
  • A circuit breaker if the same tool fails repeatedly
  • Logging/telemetry to track continuation chain length

This is not an immediate issue but may help prevent edge-case runaway scenarios in production.
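A sketch of the first safeguard (names are hypothetical; startStream stands in for the client's internal stream entry point):

```ts
const MAX_CONTINUATION_DEPTH = 8

async function runWithContinuations(
  startStream: () => Promise<{ needsContinuation: boolean }>,
): Promise<void> {
  // Bound the stream -> drain -> continuation loop so a cyclic tool
  // pattern cannot spin forever.
  for (let depth = 0; depth < MAX_CONTINUATION_DEPTH; depth++) {
    const { needsContinuation } = await startStream()
    if (!needsContinuation) return
  }
  throw new Error(
    `Continuation chain exceeded ${MAX_CONTINUATION_DEPTH} turns; aborting`,
  )
}
```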

packages/typescript/ai/src/types.ts (2)

1054-1056: Consider renaming chatStream to align with Text-based terminology.

The AIAdapter.chatStream method retains "chat" terminology while the rest of the codebase has been renamed to use "Text" (e.g., TextOptions, TextCompletionChunk, TextStreamOptionsUnion).

If AIAdapter is the legacy interface being maintained for backwards compatibility, this is acceptable. However, if it's part of the new API surface, consider renaming to textStream for consistency.


1240-1283: Consider refactoring to reduce code duplication.

The TextOptionsForTextAdapter type (lines 1240-1283) duplicates the logic from TextStreamOptionsForModel (lines 1128-1183). Both types follow the same pattern:

  • Omit the same fields from TextOptions
  • Apply identical providerOptions narrowing
  • Use the same messages constraining logic based on ModelInputModalities

Consider extracting this pattern into a shared utility type to improve maintainability:

```ts
type BaseTextOptionsForAdapter<
  TAdapter,
  TModel extends string,
  ModelProviderOptions,
  ModelInputModalities,
  MessageMetadata,
> = Omit<
  TextOptions,
  'model' | 'providerOptions' | 'responseFormat' | 'messages'
> & {
  adapter: TAdapter
  model: TModel
  providerOptions?: TModel extends keyof ModelProviderOptions
    ? ModelProviderOptions[TModel]
    : never
  messages: // ... shared logic
}
```

Then both types could be defined more concisely by extracting the appropriate generics from their respective adapter interfaces.

packages/typescript/ai-gemini/src/model-meta.ts (1)

782-793: Consider reordering the type alias after the const declaration for consistency.

The GeminiImageModels type alias (line 784) is defined before the GEMINI_IMAGE_MODELS const (line 786), while GeminiModels (line 782) follows the pattern of being defined after GEMINI_MODELS (line 771). While TypeScript allows forward references in type positions, consistent ordering improves readability.

```diff
 export type GeminiModels = (typeof GEMINI_MODELS)[number]
-export type GeminiImageModels = (typeof GEMINI_IMAGE_MODELS)[number]
-
 export const GEMINI_IMAGE_MODELS = [
   GEMINI_2_5_FLASH_IMAGE.name,
   GEMINI_2_FLASH_IMAGE.name,
   IMAGEN_3.name,
   IMAGEN_4_GENERATE.name,
   IMAGEN_4_GENERATE_FAST.name,
   IMAGEN_4_GENERATE_ULTRA.name,
 ] as const
+
+export type GeminiImageModels = (typeof GEMINI_IMAGE_MODELS)[number]
```
packages/typescript/ai/src/activities/tts/index.ts (1)

55-57: Consider making the format type more flexible for provider extensibility.

The format option is hardcoded to OpenAI-specific formats ('mp3' | 'opus' | 'aac' | 'flac' | 'wav' | 'pcm'). Gemini TTS may support different formats. Consider extracting this from the adapter type or using a more generic string type with provider-specific constraints.

```diff
   /** The output audio format */
-  format?: 'mp3' | 'opus' | 'aac' | 'flac' | 'wav' | 'pcm'
+  format?: string
```

Alternatively, extract format types from the adapter similar to how models are extracted, allowing each provider to define their supported formats.

packages/typescript/ai-openai/src/audio/transcription-provider-options.ts (2)

12-17: Naming convention: timestamp_granularities uses snake_case.

Per coding guidelines, camelCase should be used for function and variable names. However, timestamp_granularities uses snake_case to match OpenAI's API. This is a reasonable trade-off for API consistency, but consider documenting this decision or using camelCase (timestampGranularities) with internal conversion.

If maintaining snake_case for API parity is intentional (common pattern when matching external APIs), consider adding a brief comment explaining this choice.
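If a camelCase public surface is preferred instead, a thin mapping layer keeps wire-format parity (a sketch; the public option name is hypothetical):

```ts
interface PublicTranscriptionOptions {
  temperature?: number
  timestampGranularities?: Array<'word' | 'segment'>
}

// Translate the camelCase public option to OpenAI's snake_case field.
function toWireOptions(options: PublicTranscriptionOptions) {
  return {
    temperature: options.temperature,
    timestamp_granularities: options.timestampGranularities,
  }
}
```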


1-18: Consider expanding provider options for parity with the OpenAI API.

The interface only exposes temperature and timestamp_granularities. The OpenAI transcription API supports additional useful options like language, prompt, and response_format (referenced in the relevant snippet from transcribe-provider-options.ts). Consider whether these should be exposed for full API coverage, or document that this is an intentionally minimal interface.

Would you like me to help expand this interface with additional OpenAI transcription options?

packages/typescript/ai-gemini/src/index.ts (1)

5-71: Export parity: consider adding a GeminiTTSModel type re-export (missing today).
You export GeminiTTSModels (value) but don’t appear to export the corresponding model-name type (parity with GeminiTextModel / GeminiImageModel). This is a small DX gap for consumers wanting model autocomplete for TTS.

Proposed patch:

```diff
 export { GEMINI_TTS_MODELS as GeminiTTSModels } from './model-meta'
 export type { GeminiModels as GeminiTextModel } from './model-meta'
 export type { GeminiImageModels as GeminiImageModel } from './model-meta'
+export type { GeminiTTSModels as GeminiTTSModel } from './model-meta'
```
packages/typescript/ai/src/activities/video/index.ts (1)

114-226: Return type isn’t wired to request (you define VideoActivityResult but don’t use it).
Right now videoActivity always returns Promise<VideoJobResult | VideoStatusResult | VideoUrlResult>, so callers don’t get a type-safe result keyed by request.

One way to connect the pieces:

```diff
-export async function videoActivity<
+export async function videoActivity<
   TAdapter extends VideoAdapter<ReadonlyArray<string>, object>,
   TModel extends VideoModels<TAdapter>,
+  TRequest extends 'create' | 'status' | 'url' = 'create',
 >(
-  options:
-    | VideoCreateOptions<TAdapter, TModel>
-    | VideoStatusOptions<TAdapter, TModel>
-    | VideoUrlOptions<TAdapter, TModel>,
-): Promise<VideoJobResult | VideoStatusResult | VideoUrlResult> {
+  options: VideoActivityOptions<TAdapter, TModel, TRequest>,
+): VideoActivityResult<TRequest> {
   const { adapter, request = 'create' } = options
   switch (request) {
     case 'status': {
-      const statusOptions = options as VideoStatusOptions<TAdapter, TModel>
-      return adapter.getVideoStatus(statusOptions.jobId)
+      return adapter.getVideoStatus((options as VideoStatusOptions<TAdapter, TModel>).jobId) as any
     }
     case 'url': {
-      const urlOptions = options as VideoUrlOptions<TAdapter, TModel>
-      return adapter.getVideoUrl(urlOptions.jobId)
+      return adapter.getVideoUrl((options as VideoUrlOptions<TAdapter, TModel>).jobId) as any
     }
     case 'create':
     default: {
       const createOptions = options as VideoCreateOptions<TAdapter, TModel>
       return adapter.createVideoJob({
         model: createOptions.model,
         prompt: createOptions.prompt,
         size: createOptions.size,
         duration: createOptions.duration,
         providerOptions: createOptions.providerOptions,
-      })
+      }) as any
     }
   }
 }
```
packages/typescript/ai-openai/src/adapters/tts.ts (1)

55-79: Avoid as OpenAITTSVoice / as OpenAITTSFormat casts; validate or type the public API.
Today voice/format can be any string at the @tanstack/ai layer, and the cast defeats type safety. If you want OpenAI-specific typing, consider exposing OpenAI-specific option types (or runtime guards) so invalid values fail predictably before the API call.
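One option is a runtime guard in front of the cast (a sketch; the voice list here is the original tts-1 set and may not match the package's actual OpenAITTSVoice union):

```ts
const OPENAI_TTS_VOICES = [
  'alloy',
  'echo',
  'fable',
  'onyx',
  'nova',
  'shimmer',
] as const

type KnownVoice = (typeof OPENAI_TTS_VOICES)[number]

// Reject unknown voices before the API call instead of casting blindly.
function assertVoice(voice: string): asserts voice is KnownVoice {
  if (!(OPENAI_TTS_VOICES as ReadonlyArray<string>).includes(voice)) {
    throw new Error(
      `Unsupported OpenAI TTS voice "${voice}"; expected one of ${OPENAI_TTS_VOICES.join(', ')}`,
    )
  }
}
```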

packages/typescript/ai-openai/src/video/video-provider-options.ts (1)

88-118: toApiSeconds can produce an invalid OpenAIVideoSeconds unless callers validate first.
Right now toApiSeconds(5) returns '5' as OpenAIVideoSeconds. Consider either (a) folding validation into conversion, or (b) making the validation+conversion helper the “one true” path.

Example option:

```diff
-export function toApiSeconds(
-  seconds: number | string | undefined,
-): OpenAIVideoSeconds | undefined {
-  if (seconds === undefined) return undefined
-  return String(seconds) as OpenAIVideoSeconds
-}
+export function toApiSeconds(
+  model: string,
+  seconds: number | string | undefined,
+): OpenAIVideoSeconds | undefined {
+  validateVideoSeconds(model, seconds)
+  if (seconds === undefined) return undefined
+  return String(seconds) as OpenAIVideoSeconds
+}
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 96a08b1 and 2b56f66.

⛔ Files ignored due to path filters (1)
  • packages/typescript/smoke-tests/adapters/fixtures/test-audio.mp3 is excluded by !**/*.mp3
📒 Files selected for processing (31)
  • docs/adapters/gemini.md (4 hunks)
  • docs/adapters/openai.md (9 hunks)
  • docs/config.json (4 hunks)
  • docs/guides/text-to-speech.md (1 hunks)
  • docs/guides/transcription.md (1 hunks)
  • docs/guides/video-generation.md (1 hunks)
  • packages/typescript/ai-client/src/chat-client.ts (4 hunks)
  • packages/typescript/ai-gemini/src/adapters/tts.ts (1 hunks)
  • packages/typescript/ai-gemini/src/index.ts (1 hunks)
  • packages/typescript/ai-gemini/src/model-meta.ts (10 hunks)
  • packages/typescript/ai-openai/src/adapters/transcription.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/tts.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/video.ts (1 hunks)
  • packages/typescript/ai-openai/src/audio/transcription-provider-options.ts (1 hunks)
  • packages/typescript/ai-openai/src/audio/tts-provider-options.ts (1 hunks)
  • packages/typescript/ai-openai/src/index.ts (1 hunks)
  • packages/typescript/ai-openai/src/model-meta.ts (7 hunks)
  • packages/typescript/ai-openai/src/video/video-provider-options.ts (1 hunks)
  • packages/typescript/ai-react/src/use-chat.ts (1 hunks)
  • packages/typescript/ai-solid/src/use-chat.ts (1 hunks)
  • packages/typescript/ai-svelte/src/create-chat.svelte.ts (1 hunks)
  • packages/typescript/ai-vue/src/use-chat.ts (1 hunks)
  • packages/typescript/ai/src/activities/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/transcription/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/transcription/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/tts/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/tts/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/video/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/video/index.ts (1 hunks)
  • packages/typescript/ai/src/ai.ts (1 hunks)
  • packages/typescript/ai/src/types.ts (11 hunks)
✅ Files skipped from review due to trivial changes (3)
  • docs/guides/transcription.md
  • docs/guides/video-generation.md
  • packages/typescript/ai-svelte/src/create-chat.svelte.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/typescript/ai-openai/src/model-meta.ts
  • packages/typescript/ai/src/activities/index.ts
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from the /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Files:

  • packages/typescript/ai-vue/src/use-chat.ts
  • packages/typescript/ai/src/activities/video/index.ts
  • packages/typescript/ai-openai/src/audio/tts-provider-options.ts
  • packages/typescript/ai-openai/src/audio/transcription-provider-options.ts
  • packages/typescript/ai/src/activities/tts/index.ts
  • packages/typescript/ai-react/src/use-chat.ts
  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-openai/src/video/video-provider-options.ts
  • packages/typescript/ai-client/src/chat-client.ts
  • packages/typescript/ai-gemini/src/adapters/tts.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
  • packages/typescript/ai-openai/src/adapters/transcription.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/activities/transcription/index.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai-openai/src/adapters/tts.ts
  • packages/typescript/ai/src/activities/video/adapter.ts
  • packages/typescript/ai-openai/src/adapters/video.ts
  • packages/typescript/ai-solid/src/use-chat.ts
  • packages/typescript/ai/src/ai.ts
  • packages/typescript/ai-gemini/src/index.ts
  • packages/typescript/ai-openai/src/index.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use camelCase for function and variable names throughout the codebase

Files:

  • packages/typescript/ai-vue/src/use-chat.ts
  • packages/typescript/ai/src/activities/video/index.ts
  • packages/typescript/ai-openai/src/audio/tts-provider-options.ts
  • packages/typescript/ai-openai/src/audio/transcription-provider-options.ts
  • packages/typescript/ai/src/activities/tts/index.ts
  • packages/typescript/ai-react/src/use-chat.ts
  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-openai/src/video/video-provider-options.ts
  • packages/typescript/ai-client/src/chat-client.ts
  • packages/typescript/ai-gemini/src/adapters/tts.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
  • packages/typescript/ai-openai/src/adapters/transcription.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/activities/transcription/index.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai-openai/src/adapters/tts.ts
  • packages/typescript/ai/src/activities/video/adapter.ts
  • packages/typescript/ai-openai/src/adapters/video.ts
  • packages/typescript/ai-solid/src/use-chat.ts
  • packages/typescript/ai/src/ai.ts
  • packages/typescript/ai-gemini/src/index.ts
  • packages/typescript/ai-openai/src/index.ts
packages/typescript/*/src/adapters/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Files:

  • packages/typescript/ai-gemini/src/adapters/tts.ts
  • packages/typescript/ai-openai/src/adapters/transcription.ts
  • packages/typescript/ai-openai/src/adapters/tts.ts
  • packages/typescript/ai-openai/src/adapters/video.ts
packages/typescript/*/src/model-meta.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Files:

  • packages/typescript/ai-gemini/src/model-meta.ts
packages/typescript/*/src/index.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Export tree-shakeable adapters with clear subpath exports in package.json (e.g., @tanstack/ai/adapters, @tanstack/ai-openai/adapters) to minimize bundle size

Files:

  • packages/typescript/ai-gemini/src/index.ts
  • packages/typescript/ai-openai/src/index.ts
🧠 Learnings (10)
📓 Common learnings
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.783Z | Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.783Z | Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.783Z | Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.784Z | Learning: Implement framework integrations using the headless `tanstack/ai-client` for state management with framework-specific hooks (useChat) on top
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.784Z | Learning: Ensure provider-agnostic design where the core AI SDK remains independent of any specific AI provider implementation
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.783Z | Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Applied to files:

  • packages/typescript/ai/src/activities/video/index.ts
  • packages/typescript/ai-openai/src/audio/tts-provider-options.ts
  • packages/typescript/ai/src/activities/tts/index.ts
  • docs/guides/text-to-speech.md
  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-gemini/src/adapters/tts.ts
  • docs/adapters/openai.md
  • packages/typescript/ai-openai/src/adapters/transcription.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/activities/transcription/index.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai-openai/src/adapters/tts.ts
  • packages/typescript/ai/src/activities/video/adapter.ts
  • packages/typescript/ai-openai/src/adapters/video.ts
  • packages/typescript/ai/src/ai.ts
  • packages/typescript/ai-gemini/src/index.ts
  • docs/adapters/gemini.md
  • packages/typescript/ai-openai/src/index.ts
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.783Z | Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size

Applied to files:

  • packages/typescript/ai/src/activities/video/index.ts
  • packages/typescript/ai/src/activities/tts/index.ts
  • packages/typescript/ai-react/src/use-chat.ts
  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-gemini/src/adapters/tts.ts
  • docs/adapters/openai.md
  • packages/typescript/ai-openai/src/adapters/transcription.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/activities/transcription/index.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai-openai/src/adapters/tts.ts
  • packages/typescript/ai/src/activities/video/adapter.ts
  • packages/typescript/ai-openai/src/adapters/video.ts
  • packages/typescript/ai/src/ai.ts
  • packages/typescript/ai-gemini/src/index.ts
  • docs/adapters/gemini.md
  • packages/typescript/ai-openai/src/index.ts
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.783Z | Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters

Applied to files:

  • packages/typescript/ai/src/activities/video/index.ts
  • packages/typescript/ai/src/activities/tts/index.ts
  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-gemini/src/adapters/tts.ts
  • docs/adapters/openai.md
  • packages/typescript/ai-openai/src/adapters/transcription.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/activities/transcription/index.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai-openai/src/adapters/tts.ts
  • packages/typescript/ai/src/activities/video/adapter.ts
  • packages/typescript/ai-openai/src/adapters/video.ts
  • packages/typescript/ai/src/ai.ts
  • packages/typescript/ai-gemini/src/index.ts
  • docs/adapters/gemini.md
  • packages/typescript/ai-openai/src/index.ts
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.783Z | Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Applied to files:

  • packages/typescript/ai-openai/src/audio/tts-provider-options.ts
  • packages/typescript/ai-openai/src/audio/transcription-provider-options.ts
  • packages/typescript/ai/src/activities/tts/index.ts
  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-openai/src/video/video-provider-options.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai-gemini/src/index.ts
  • packages/typescript/ai-openai/src/index.ts
📚 Learning: 2025-12-13T17:09:09.784Z
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.784Z | Learning: Implement framework integrations using the headless `tanstack/ai-client` for state management with framework-specific hooks (useChat) on top

Applied to files:

  • packages/typescript/ai-react/src/use-chat.ts
  • docs/adapters/openai.md
  • docs/adapters/gemini.md
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.783Z | Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-openai/src/video/video-provider-options.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai/src/activities/video/adapter.ts
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.783Z | Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Applied to files:

  • packages/typescript/ai-client/src/chat-client.ts
  • packages/typescript/ai/src/types.ts
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.783Z | Learning: Applies to **/*.{ts,tsx} : Implement isomorphic tool system using `toolDefinition()` with `.server()` and `.client()` implementations for dual-environment execution

Applied to files:

  • docs/adapters/openai.md
  • packages/typescript/ai/src/types.ts
  • docs/adapters/gemini.md
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR | Repo: TanStack/ai | PR: 0 | File: CLAUDE.md:0-0 | Timestamp: 2025-12-13T17:09:09.783Z | Learning: Applies to **/*.{ts,tsx} : Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with `toolDefinition()` and Zod schema inference

Applied to files:

  • docs/adapters/openai.md
  • packages/typescript/ai/src/types.ts
  • docs/adapters/gemini.md
🧬 Code graph analysis (9)
packages/typescript/ai-openai/src/audio/transcription-provider-options.ts (2)
packages/typescript/ai-openai/src/index.ts (1)
  • OpenAITranscriptionProviderOptions (81-81)
packages/typescript/ai-openai/src/audio/transcribe-provider-options.ts (1)
  • TranscribeProviderOptions (1-77)
packages/typescript/ai/src/activities/tts/adapter.ts (1)
packages/typescript/ai/src/types.ts (2)
  • TTSOptions (876-889)
  • TTSResult (894-907)
packages/typescript/ai-openai/src/video/video-provider-options.ts (1)
packages/typescript/ai-openai/src/index.ts (4)
  • OpenAIVideoSize (57-57)
  • OpenAIVideoProviderOptions (55-55)
  • OpenAIVideoModelProviderOptionsByName (56-56)
  • OpenAIVideoDuration (58-58)
packages/typescript/ai-gemini/src/model-meta.ts (2)
packages/typescript/ai-gemini/src/text/text-provider-options.ts (3)
  • GeminiSafetyOptions (16-23)
  • GeminiGenerationConfigOptions (25-168)
  • GeminiCachedContentOptions (170-175)
packages/typescript/ai-gemini/src/index.ts (5)
  • GeminiModels (69-69)
  • GEMINI_MODELS (66-66)
  • GeminiImageModels (70-70)
  • GEMINI_IMAGE_MODELS (67-67)
  • GEMINI_TTS_MODELS (68-68)
packages/typescript/ai-openai/src/adapters/transcription.ts (6)
packages/typescript/ai/src/activities/index.ts (1)
  • BaseTranscriptionAdapter (247-247)
packages/typescript/ai/src/activities/transcription/index.ts (1)
  • BaseTranscriptionAdapter (125-125)
packages/typescript/ai-openai/src/model-meta.ts (1)
  • OPENAI_TRANSCRIPTION_MODELS (1720-1725)
packages/typescript/ai-openai/src/audio/transcription-provider-options.ts (1)
  • OpenAITranscriptionProviderOptions (4-18)
packages/typescript/ai/src/types.ts (3)
  • TranscriptionOptions (917-932)
  • TranscriptionResult (967-982)
  • TranscriptionSegment (937-950)
packages/typescript/ai/src/activities/transcription/adapter.ts (1)
  • generateId (71-73)
packages/typescript/ai/src/activities/transcription/adapter.ts (1)
packages/typescript/ai/src/types.ts (2)
  • TranscriptionOptions (917-932)
  • TranscriptionResult (967-982)
packages/typescript/ai-openai/src/adapters/tts.ts (4)
packages/typescript/ai-openai/src/model-meta.ts (1)
  • OPENAI_TTS_MODELS (1711-1715)
packages/typescript/ai-openai/src/audio/tts-provider-options.ts (2)
  • OpenAITTSVoice (4-15)
  • OpenAITTSFormat (20-20)
packages/typescript/ai/src/types.ts (2)
  • TTSOptions (876-889)
  • TTSResult (894-907)
packages/typescript/ai-openai/src/audio/audio-provider-options.ts (3)
  • validateAudioInput (71-75)
  • validateSpeed (56-62)
  • validateInstructions (64-69)
packages/typescript/ai/src/activities/video/adapter.ts (4)
packages/typescript/ai/src/activities/index.ts (3)
  • VideoAdapterConfig (211-211)
  • VideoAdapter (210-210)
  • BaseVideoAdapter (209-209)
packages/typescript/ai/src/activities/video/index.ts (3)
  • VideoAdapterConfig (229-229)
  • VideoAdapter (229-229)
  • BaseVideoAdapter (230-230)
packages/typescript/ai/src/ai.ts (1)
  • VideoAdapter (313-313)
packages/typescript/ai/src/types.ts (4)
  • VideoGenerationOptions (811-824)
  • VideoJobResult (831-836)
  • VideoStatusResult (843-852)
  • VideoUrlResult (859-866)
packages/typescript/ai/src/ai.ts (4)
packages/typescript/ai/src/activities/index.ts (13)
  • GenerateAdapter (292-292)
  • TextAdapter (128-128)
  • EmbeddingAdapter (149-149)
  • SummarizeAdapter (168-168)
  • ImageAdapter (188-188)
  • VideoAdapter (210-210)
  • AnyAdapter (295-302)
  • TextModels (120-120)
  • EmbeddingModels (143-143)
  • SummarizeModels (162-162)
  • ImageModels (181-181)
  • VideoModels (201-201)
  • AnyAIAdapter (319-330)
packages/typescript/ai/src/index.ts (3)
  • GenerateAdapter (6-6)
  • AnyAdapter (5-5)
  • ai (3-3)
packages/typescript/ai/src/types.ts (7)
  • TextAdapter (1286-1286)
  • EmbeddingAdapter (1286-1286)
  • SummarizeAdapter (1286-1286)
  • AnyAdapter (1286-1286)
  • VideoStatusResult (843-852)
  • VideoUrlResult (859-866)
  • VideoJobResult (831-836)
packages/typescript/ai/src/activities/video/adapter.ts (1)
  • VideoAdapter (32-65)
🔇 Additional comments (34)
packages/typescript/ai-react/src/use-chat.ts (1)

83-91: LGTM! Behavioral improvement prevents premature request abortion.

Removing `isLoading` from the effect dependencies is correct and prevents the cleanup from running whenever loading state toggles. This ensures that continuation requests and streaming responses aren't aborted mid-flight. The cleanup now properly runs only on unmount or when the client instance changes, which aligns with React best practices for resource lifecycle management.

The comments clearly document the reasoning and warn against reintroducing the dependency.
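
A minimal sketch of the React pattern being endorsed (hypothetical hook name; the real code lives in `use-chat.ts`):

```tsx
import { useEffect } from 'react'

// Sketch only: `client` stands in for the chat client instance the hook creates.
function useClientCleanup(client: { stop: () => void }) {
  useEffect(() => {
    // Cleanup runs on unmount or when the client instance changes.
    // Do NOT add isLoading to the deps: re-running the effect mid-stream
    // would fire this cleanup and abort an in-flight request.
    return () => {
      client.stop()
    }
  }, [client])
}
```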

packages/typescript/ai/src/activities/transcription/index.ts (1)

81-90: [Rewritten review comment]
[Classification tag]

packages/typescript/ai-client/src/chat-client.ts (3)

29-29: LGTM! Clean queue declaration.

The post-stream action queue is properly typed and initialized.


401-407: LGTM! Deferred continuation prevents nested streams.

The conditional queueing correctly prevents `checkForContinuation` from triggering a new stream while one is in progress.


443-449: LGTM! Consistent deferred continuation pattern.

The queueing logic mirrors the approach in `addToolResult`, maintaining consistency across tool-related operations.

packages/typescript/ai-vue/src/use-chat.ts (1)

42-44: LGTM! Unconditional cleanup is more robust.

Removing the `isLoading` guard and unconditionally calling `client.stop()` on disposal is a good practice. This prevents potential resource leaks from edge cases where streaming activity exists but `isLoading` might not accurately reflect it. The `stop()` method is idempotent: it checks if an abort controller exists before aborting and safely handles multiple calls.

Vue, React, and Solid all follow this same unconditional cleanup pattern. Note that Svelte intentionally handles cleanup differently (manual `stop()` calls only) due to Svelte 5 runes limitations, as documented in the implementation.

packages/typescript/ai/src/types.ts (3)

748-982: LGTM - Well-structured multimodal types.

The new Image Generation (lines 748-799), Video Generation (lines 801-867), TTS (lines 868-907), and Transcription (lines 909-982) types are well-structured with comprehensive JSDoc documentation. The experimental markers on Video types appropriately signal the feature's status.


1209-1286: LGTM - Consistent adapter extraction types.

The new adapter extraction types (`ExtractModelsFromTextAdapter`, `ExtractModelsFromEmbeddingAdapter`, `ExtractModelsFromSummarizeAdapter`, `ExtractModelsFromAnyAdapter`) follow consistent patterns and properly support the tree-shakeable adapter architecture outlined in the coding guidelines. The type-only re-exports are appropriately structured.

Based on learnings, these extraction types enable tree-shakeable adapter architecture by allowing model extraction from specialized adapters.


7-12: The adapter types are properly implemented and exported. All four adapter types (TextAdapter, EmbeddingAdapter, SummarizeAdapter, AnyAdapter) are correctly defined in their respective adapter modules and exported from `./activities/index.ts`, matching the import statement in types.ts. The tree-shakeable adapter architecture is correctly structured.

packages/typescript/ai-solid/src/use-chat.ts (1)

65-73: Cleanup improvement looks good; cleanup behavior is effectively consistent across all framework adapters.

Removing the `isLoading` guard ensures that `stop()` is always called on component unmount, which is the correct behavior since the `stop()` method should be idempotent. Any in-flight requests should be cancelled when the component disposes, regardless of the current loading state. The added comments accurately describe the SolidJS disposal lifecycle.

Cleanup patterns across frameworks are functionally equivalent:

  • React: `useEffect` with `[client]` dependency (cleanup on unmount or when `client` changes, though `client` only changes when `clientId` changes, which is rare)
  • SolidJS: `createEffect` without dependencies (cleanup on unmount only)
  • Vue: `onScopeDispose` (cleanup on scope disposal)
  • Svelte: Manual cleanup required due to framework constraints

The slight differences in dependency arrays reflect framework-specific lifecycle patterns but result in the same practical behavior: cleanup runs on unmount, and `stop()` is reliably called to cancel in-flight requests. A sketch of the Solid variant follows.
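
A minimal sketch of the SolidJS disposal pattern described above (hypothetical helper name; the real logic sits inside the hook's `createEffect`):

```ts
import { onCleanup } from 'solid-js'

// Sketch only: registers an unmount-time stop() with no dependency tracking,
// so it runs exactly once, when the owning reactive root is disposed.
function registerUnmountCleanup(client: { stop: () => void }) {
  onCleanup(() => {
    // stop() is assumed idempotent, so calling it on an idle client is safe.
    client.stop()
  })
}
```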

packages/typescript/ai-openai/src/audio/tts-provider-options.ts (1)

1-31: LGTM!

The TTS provider options type definitions are well-structured with appropriate JSDoc documentation. The types align with OpenAI's TTS API and follow the tree-shakeable adapter pattern established in the codebase.

docs/guides/text-to-speech.md (1)

1-248: LGTM!

Comprehensive and well-structured TTS documentation that covers both OpenAI and Gemini providers. The code examples are consistent with the new tree-shakeable adapter architecture, and the best practices section provides valuable guidance.

packages/typescript/ai-gemini/src/model-meta.ts (1)

797-801: Experimental TTS models appropriately documented.

The hardcoded model string is acceptable here given that the full model metadata constant (`GEMINI_2_5_FLASH_TTS`) is commented out as experimental. The `@experimental` JSDoc tag clearly communicates this status.

packages/typescript/ai/src/activities/tts/index.ts (1)

107-118: LGTM!

Clean activity implementation with proper delegation to the adapter and appropriate type re-exports for the tree-shakeable architecture.

docs/config.json (2)

72-80: LGTM!

Navigation entries for Text-to-Speech and Transcription guides are correctly added and follow the existing structure.


174-179: The reference targets exist: `docs/reference/functions/text.md` and `docs/reference/functions/textOptions.md` are both present in the repository.

packages/typescript/ai/src/activities/video/adapter.ts (2)

32-65: LGTM!

Well-designed video adapter interface with proper support for async job-based workflows. The generic parameters provide good type safety, and the `@experimental` tags appropriately communicate the feature's status.


73-101: LGTM!

The `BaseVideoAdapter` abstract class follows established patterns with proper use of `declare` for type-only properties and a useful `generateId` utility. The implementation provides a solid foundation for provider-specific adapters.

packages/typescript/ai-gemini/src/index.ts (1)

76-83: Deprecation messaging looks clear + keeps back-compat.
The `@deprecated` block points users to the new split adapters while preserving `GeminiAdapter` exports for existing users. As per coding guidelines, this aligns with the tree-shakeable adapter architecture.

packages/typescript/ai-openai/src/adapters/tts.ts (1)

132-169: Factories match the established pattern.
`createOpenaiTTS` + env-based `openaiTTS` are consistent with other adapters and keep the tree-shakeable surface clean.

packages/typescript/ai/src/activities/tts/adapter.ts (1)

1-72: Nice, minimal base contract for TTS adapters.
Clear discriminator (`kind: 'tts'`) + typed provider options hook.

packages/typescript/ai-gemini/src/adapters/tts.ts (2)

155-192: Factory helpers look consistent with other Gemini adapters.
Explicit-key + env-key factory pair is clean and tree-shakeable. As per coding guidelines, this matches the split adapter architecture.


86-101: TTS wiring is correct and matches current @google/genai SDK behavior.

The `responseModalities: ['AUDIO']` and `speechConfig.voiceConfig.prebuiltVoiceConfig.voiceName` structure conform to the official SDK documentation. Response handling correctly extracts audio via `candidates[0].content.parts` and processes `inlineData` for audio content.

packages/typescript/ai/src/activities/video/index.ts (1)

194-226: No action required; field names match the VideoAdapter contract.

The `videoActivity` function correctly passes `duration` to `adapter.createVideoJob()`, which aligns with the `VideoGenerationOptions` interface definition. The field is properly documented as "Video duration in seconds" in the type definition, so there is no naming inconsistency or risk of drifting from provider quirks.

packages/typescript/ai-openai/src/adapters/transcription.ts (1)

49-125: The `verbose_json` cast is necessary and correct; the issue is incomplete OpenAI SDK typings, not incompatibility.

The OpenAI SDK v6.9.1 typings only define `Transcription { text: string }`, but the actual `verbose_json` response includes `duration`, `language`, `segments` (with `id`, `start`, `end`, `text`, `avg_logprob`), and `words` (with `word`, `start`, `end`) per the official OpenAI API specification. The inline cast is a reasonable workaround for this known typings gap. The field mappings (`avg_logprob` to `confidence` via `Math.exp()`) align with the API schema. The real brittleness risk is if OpenAI's API response shape changes, not SDK type drift; consider extracting the cast to a dedicated type file or adding a comment explaining why the cast exists.
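
A sketch of the dedicated type the comment suggests extracting; the field names come from the review text above, not from the OpenAI SDK itself:

```ts
// Fields present in OpenAI's verbose_json transcription response that the
// SDK's `Transcription { text: string }` typing omits (per the review above).
interface VerboseTranscriptionResponse {
  text: string
  duration?: number
  language?: string
  segments?: Array<{
    id: number
    start: number
    end: number
    text: string
    avg_logprob: number
  }>
  words?: Array<{ word: string; start: number; end: number }>
}

// Cast once at the call site, documenting that the SDK typings lag the
// verbose_json response shape:
// const verbose = response as unknown as VerboseTranscriptionResponse
// const confidences = verbose.segments?.map((s) => Math.exp(s.avg_logprob))
```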

packages/typescript/ai-openai/src/index.ts (1)

1-112: Well-structured tree-shakeable exports.

The organization with clear section comments and consistent export patterns aligns well with the coding guidelines for tree-shakeable adapter architecture. The `@experimental` JSDoc tag on the Video adapter appropriately signals API instability.

docs/adapters/openai.md (2)

59-91: Comprehensive model documentation.

The categorized model lists provide clear guidance for users selecting models for each capability.


178-245: Well-documented new adapter capabilities.

The Embeddings and Summarization sections provide clear examples including batch operations and provider options. The API patterns are consistent with the text adapter examples.

packages/typescript/ai/src/ai.ts (3)

72-157: Type mappings would need updates if TTS/Transcription are added.

If the `GenerateAdapter` union is extended to include TTS and Transcription adapters, the `AIOptionsFor` and `AIReturnFor` conditional types will also need corresponding branches to handle these adapter kinds, otherwise they'll resolve to `never`.
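
A self-contained sketch of the failure mode (all names hypothetical; the real `AIOptionsFor`/`AIReturnFor` conditionals live in `ai.ts`):

```ts
// Each adapter kind needs its own branch in the conditional type.
// A kind added to the union without a branch silently resolves to `never`.
type AdapterLike = { kind: 'text' } | { kind: 'tts' }

type OptionsFor<A extends AdapterLike> = A extends { kind: 'text' }
  ? { messages: Array<string> }
  : A extends { kind: 'tts' }
    ? { text: string; voice?: string }
    : never // hit only when a kind lacks a branch

type TtsOptions = OptionsFor<{ kind: 'tts' }> // { text: string; voice?: string }
```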


293-302: Clean routing implementation.

The delegation pattern using `activityMap` is simple and extensible. The error message for unknown adapter kinds is helpful for debugging.


308-339: Comprehensive re-exports for public API surface.

The re-exports provide a clean public API, including both current adapter types and legacy aliases for backwards compatibility.

packages/typescript/ai-openai/src/adapters/video.ts (3)

144-153: Graceful 404 handling in status check.

Returning a `failed` status instead of throwing on 404 is appropriate for polling scenarios where the job may have been cleaned up.


303-323: Safe status mapping with sensible default.

Defaulting unknown statuses to `'processing'` is a reasonable choice for forward compatibility when OpenAI adds new status values.


346-400: Consistent factory function pattern.

The `createOpenaiVideo` and `openaiVideo` factory functions follow the same pattern established by other adapters (text, embed, image), maintaining API consistency across the package.

Comment on lines 62 to 66
- `gemini-2.0-flash-exp` - Gemini 2.0 Flash (fast, efficient)
- `gemini-2.0-flash-lite` - Gemini 2.0 Flash Lite (fastest)
- `gemini-2.5-pro` - Gemini 2.5 Pro (most capable)
- `gemini-2.5-flash` - Gemini 2.5 Flash
- `gemini-exp-1206` - Experimental Pro model


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check Gemini model names in model-meta.ts
echo "=== Model definitions in model-meta.ts ==="
rg -n "name:" packages/typescript/ai-gemini/src/model-meta.ts | grep -i "gemini-2"
echo ""
echo "=== Full context for gemini-2.0-flash models ==="
rg -B2 -A2 "gemini-2\.0-flash" packages/typescript/ai-gemini/src/model-meta.ts
```

Repository: TanStack/ai

Length of output: 1214


🏁 Script executed:

```bash
#!/bin/bash
# Check model names used in the documentation
echo "=== Model references in gemini.md ==="
rg -n "gemini-2\.[0-9]" docs/adapters/gemini.md | head -30
```

Repository: TanStack/ai

Length of output: 712


Update model name from `gemini-2.0-flash-exp` to `gemini-2.0-flash`.

The documented model `gemini-2.0-flash-exp` does not exist in model-meta.ts. The actual model name is `gemini-2.0-flash`. This incorrect name appears in the model list (line 62) and throughout code examples (lines 25, 42, 96, 128, 141, 232), which will cause failures when users run these examples.

🤖 Prompt for AI Agents
In docs/adapters/gemini.md around lines 62 to 66 and at the other mentioned locations (lines 25, 42, 96, 128, 141, 232), the model name `gemini-2.0-flash-exp` is incorrect; replace every occurrence with the correct model name `gemini-2.0-flash` so the examples and list match model-meta.ts and will run successfully; update the model list entry and all code examples/snippets to use `gemini-2.0-flash` consistently.

Comment on lines 68 to 71
### Embedding Models

- `gemini-embedding-001` - Text embedding model
- `text-embedding-004` - Latest embedding model


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

rg -n"GEMINI_EMBEDDING" packages/typescript/ai-gemini/src/model-meta.ts -A 10

Repository: TanStack/ai

Length of output: 1017


🏁 Script executed:

rg"text-embedding-004" packages/typescript/ai-gemini/

Repository: TanStack/ai

Length of output: 338


🏁 Script executed:

rg -B 5 -A 5"text-embedding-004" packages/typescript/ai-gemini/src/adapters/embed.ts

Repository: TanStack/ai

Length of output: 620


🏁 Script executed:

```bash
cat -n packages/typescript/ai-gemini/src/model-meta.ts | sed -n '720,800p'
```

Repository: TanStack/ai

Length of output: 2989


GEMINI_EMBEDDING_MODELS registry is incomplete.

The embedding model `text-embedding-004` is documented, used as the default model in the embed adapter, and listed in `GeminiEmbeddingModels`, but it's missing from `GEMINI_EMBEDDING_MODELS` in model-meta.ts. Additionally, the model list in embed.ts (`['text-embedding-004', 'embedding-001']`) diverges from model-meta.ts (`['gemini-embedding-001']`), indicating an inconsistency between the two model registries. Both lists should be consolidated and kept in sync.

🤖 Prompt for AI Agents
In docs/adapters/gemini.md around lines 68 to 71, the embedding model registry is inconsistent: the docs and embed adapter reference `text-embedding-004` while model-meta.ts only lists `gemini-embedding-001`. Update model-meta.ts to include `text-embedding-004` in the GEMINI_EMBEDDING_MODELS array (and ensure GeminiEmbeddingModels enum/type includes it), and modify embed.ts to derive its supported models from that centralized registry (or import the same constants) so both places reference the single source of truth and remain in sync.

Comment on lines +327 to +329

```ts
// Drain any actions that were queued while the stream was in progress
await this.drainPostStreamActions()
```


⚠️ Potential issue | 🟡 Minor

Consider clearing the queue when `stop()` is called.

Currently, if a user calls `stop()` while tool results were queued, those actions will still execute when the finally block drains the queue. This means `checkForContinuation` could trigger a new stream immediately after the user explicitly stopped the current one.

Consider this addition to the `stop()` method (line 359):

```ts
stop(): void {
  if (this.abortController) {
    this.abortController.abort()
    this.abortController = null
  }
  this.postStreamActions = [] // Clear queued actions
  this.setIsLoading(false)
  this.events.stopped()
}
```

Similarly, consider clearing the queue in `clear()` (line 371) to prevent stale actions from executing after messages are cleared.

🤖 Prompt for AI Agents
In packages/typescript/ai-client/src/chat-client.ts around lines 327-329 (and specifically in the stop() implementation at ~line 359 and clear() at ~line 371), queued postStreamActions are not cleared when a stream is aborted or messages are cleared, allowing those actions (and checkForContinuation) to run after the user stopped the stream; update stop() to reset this.postStreamActions = [] immediately after aborting the controller (and before firing stopped/setIsLoading), and likewise clear this.postStreamActions = [] in clear(), ensuring no stale queued actions execute after stop() or clear().

Comment on lines +462 to +467
```ts
private async drainPostStreamActions(): Promise<void> {
  while (this.postStreamActions.length > 0) {
    const action = this.postStreamActions.shift()!
    await action()
  }
}
```


⚠️ Potential issue | 🟠 Major

Add error handling in the drain loop.

If a queued action throws an error, the exception will propagate and prevent subsequent actions from executing. This could leave the system in an inconsistent state where some continuations run but others don't.

Apply this diff to handle errors gracefully:

```diff
 private async drainPostStreamActions(): Promise<void> {
   while (this.postStreamActions.length > 0) {
     const action = this.postStreamActions.shift()!
-    await action()
+    try {
+      await action()
+    } catch (error) {
+      // Log but don't block remaining actions
+      console.error('Post-stream action failed:', error)
+      this.setError(error instanceof Error ? error : new Error(String(error)))
+    }
   }
 }
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```diff
 private async drainPostStreamActions(): Promise<void> {
   while (this.postStreamActions.length > 0) {
     const action = this.postStreamActions.shift()!
-    await action()
+    try {
+      await action()
+    } catch (error) {
+      console.error('Post-stream action failed:', error)
+    }
   }
 }
```
🤖 Prompt for AI Agents
In packages/typescript/ai-client/src/chat-client.ts around lines 462 to 467, the drainPostStreamActions loop currently awaits each queued action directly, which lets an exception stop the loop and skip remaining actions; wrap the await action() call in a try/catch so errors from a single action are caught, logged (use this.logger.error if available, otherwise console.error) and the loop continues to the next action; optionally collect failures into an array if callers need a summary, but do not let one action throw and abort the rest.

Comment on lines 78 to 132
```ts
// Use Gemini's multimodal content generation with audio output
// Note: This requires the model to support audio output
const voiceConfig = providerOptions?.voiceConfig || {
  prebuiltVoiceConfig: {
    voiceName: 'Kore', // Default Gemini voice
  },
}

const response = await this.client.models.generateContent({
  model,
  contents: [
    {
      role: 'user',
      parts: [{ text: `Please speak the following text: ${text}` }],
    },
  ],
  config: {
    // Configure for audio output
    responseModalities: ['AUDIO'],
    speechConfig: {
      voiceConfig,
    },
  },
})

// Extract audio data from response
const candidate = response.candidates?.[0]
const parts = candidate?.content?.parts

if (!parts || parts.length === 0) {
  throw new Error('No audio output received from Gemini TTS')
}

// Look for inline data (audio)
const audioPart = parts.find((part: any) =>
  part.inlineData?.mimeType?.startsWith('audio/'),
)

if (!audioPart || !('inlineData' in audioPart)) {
  throw new Error('No audio data in Gemini TTS response')
}

const inlineData = (audioPart as any).inlineData
const audioBase64 = inlineData.data
const mimeType = inlineData.mimeType || 'audio/wav'
const format = mimeType.split('/')[1] || 'wav'

return {
  id: generateId(this.name),
  model,
  audio: audioBase64,
  format,
  contentType: mimeType,
}
}
```


⚠️ Potential issue | 🟡 Minor

Default voice can be skipped when `voiceConfig` is provided without `voiceName`.
Right now `voiceName: 'Kore'` is only set when `providerOptions?.voiceConfig` is falsy. If callers pass `{ voiceConfig: { prebuiltVoiceConfig: {} } }`, `voiceName` stays undefined.

Possible adjustment:

```diff
-    const voiceConfig = providerOptions?.voiceConfig || {
-      prebuiltVoiceConfig: {
-        voiceName: 'Kore', // Default Gemini voice
-      },
-    }
+    const voiceConfig = {
+      prebuiltVoiceConfig: {
+        voiceName:
+          providerOptions?.voiceConfig?.prebuiltVoiceConfig?.voiceName ?? 'Kore',
+      },
+      ...providerOptions?.voiceConfig,
+    }
```

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/typescript/ai-gemini/src/adapters/tts.ts around lines 78 to 132, the default voiceName 'Kore' is only applied when providerOptions?.voiceConfig is falsy, so if callers pass providerOptions.voiceConfig.prebuiltVoiceConfig but omit voiceName it remains undefined; update the voiceConfig construction to merge providerOptions.voiceConfig with a default prebuiltVoiceConfig.voiceName = 'Kore' (i.e., if prebuiltVoiceConfig exists but has no voiceName, set it to 'Kore'), ensuring you preserve any other provided fields.

Comment on lines 52 to 110
```ts
async generateSpeech(
  options: TTSOptions<OpenAITTSProviderOptions>,
): Promise<TTSResult> {
  const { model, text, voice, format, speed, providerOptions } = options

  // Validate inputs using existing validators
  const audioOptions = {
    input: text,
    model,
    voice: voice as OpenAITTSVoice,
    speed,
    response_format: format as OpenAITTSFormat,
    ...providerOptions,
  }

  validateAudioInput(audioOptions)
  validateSpeed(audioOptions)
  validateInstructions(audioOptions)

  // Build request
  const request: OpenAI_SDK.Audio.SpeechCreateParams = {
    model,
    input: text,
    voice: voice || 'alloy',
    response_format: format,
    speed,
    ...providerOptions,
  }

  // Call OpenAI API
  const response = await this.client.audio.speech.create(request)

  // Convert response to base64
  const arrayBuffer = await response.arrayBuffer()
  const base64 = Buffer.from(arrayBuffer).toString('base64')

  const outputFormat = format || 'mp3'
  const contentType = this.getContentType(outputFormat)

  return {
    id: generateId(this.name),
    model,
    audio: base64,
    format: outputFormat,
    contentType,
  }
}

private getContentType(format: string): string {
  const contentTypes: Record<string, string> = {
    mp3: 'audio/mpeg',
    opus: 'audio/opus',
    aac: 'audio/aac',
    flac: 'audio/flac',
    wav: 'audio/wav',
    pcm: 'audio/pcm',
  }
  return contentTypes[format] || 'audio/mpeg'
}
```


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
# Check package.json to see if this is an isomorphic package
cat packages/typescript/ai-openai/package.json | head -50
```

Repository: TanStack/ai

Length of output: 1163


🏁 Script executed:

```bash
# Search for existing isomorphic base64 utilities in the repo
rg "arrayBuffer.*[Bb]ase64|[Bb]ase64.*arrayBuffer" --type ts --type tsx
```

Repository: TanStack/ai

Length of output: 82


🏁 Script executed:

```bash
# Check if there are other Buffer usages in adapters that might indicate the pattern already established
rg "Buffer\.from" --type ts --type tsx
```

Repository: TanStack/ai

Length of output: 82


🏁 Script executed:

```bash
# Check for any isomorphic utility files or helpers
fd -e ts -e tsx | xargs grep -l "typeof Buffer\|btoa\|atob" | head -20
```

Repository: TanStack/ai

Length of output: 181


🏁 Script executed:

```bash
# Look at the actual file to see context
cat -n packages/typescript/ai-openai/src/adapters/tts.ts | sed -n '45,120p'
```

Repository: TanStack/ai

Length of output: 2486


🏁 Script executed:

```bash
# Check transcription adapter for Buffer usage
cat -n packages/typescript/ai-openai/src/adapters/transcription.ts | grep -A 3 -B 3 "Buffer"
```

Repository: TanStack/ai

Length of output: 509


🏁 Script executed:

```bash
# Look for utilities or helpers in the core @tanstack/ai package
find packages/typescript/ai/src -type f -name "*util*" -o -name "*helper*" | head -20
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```bash
# Search for any existing base64 or arrayBuffer handling in core package
rg "base64|arrayBuffer|Buffer" packages/typescript/ai/src --type ts -A 2 -B 2 | head -50
```

Repository: TanStack/ai

Length of output: 3500


🏁 Script executed:

```bash
# Check if there's a shared utilities folder or isomorphic utils
fd -e ts "util|helper|iso" packages/typescript/ai/src | head -20
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```bash
# Check the build/vite config to see if there are any isomorphic/browser targets
cat packages/typescript/ai-openai/vite.config.ts 2>/dev/null || cat packages/typescript/ai-openai/vite.config.js 2>/dev/null || echo "No vite config found"
```

Repository: TanStack/ai

Length of output: 827


🏁 Script executed:

```bash
# Check testing/panel useTTS hook
cat -n testing/panel/src/hooks/useTTS.ts
```

Repository: TanStack/ai

Length of output: 2571


🏁 Script executed:

```bash
# Check the tts route component
cat -n testing/panel/src/routes/tts.tsx | head -100
```

Repository: TanStack/ai

Length of output: 3505


Use an isomorphic base64 conversion instead of `Buffer.from()`.

The adapter calls `Buffer.from(arrayBuffer).toString('base64')` at line 86, which will fail in browser environments. The testing panel's `useTTS` hook (line 39) expects base64-encoded audio from the TTS API endpoint, and while the current code runs server-side, this pattern prevents future isomorphic usage or client-side execution (e.g., in RSC or edge runtimes). Replace with an isomorphic helper:

```diff
-    const arrayBuffer = await response.arrayBuffer()
-    const base64 = Buffer.from(arrayBuffer).toString('base64')
+    const arrayBuffer = await response.arrayBuffer()
+    const base64 = arrayBufferToBase64(arrayBuffer)
```

Add the helper function:

```diff
+function arrayBufferToBase64(arrayBuffer: ArrayBuffer): string {
+  if (typeof Buffer !== 'undefined') {
+    return Buffer.from(arrayBuffer).toString('base64')
+  }
+  let binary = ''
+  const bytes = new Uint8Array(arrayBuffer)
+  for (let i = 0; i < bytes.length; i++) binary += String.fromCharCode(bytes[i]!)
+  return btoa(binary)
+}
```
🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/tts.ts around lines 52 to 110, the code uses Buffer.from(arrayBuffer).toString('base64') which breaks in browser/edge runtimes; replace that call with an isomorphic helper that converts an ArrayBuffer/Uint8Array to base64 (e.g., a small function that checks for globalThis.btoa and TextDecoder/Blob usage or falls back to Buffer in Node), add the helper in this file (or import from a shared utils) and call it like const base64 = arrayBufferToBase64(arrayBuffer); update the generateSpeech return to use the helper and ensure typing accepts ArrayBuffer | Uint8Array for robust conversion.

Comment on lines +213 to +214
```ts
const baseUrl = this.config.baseUrl || 'https://api.openai.com/v1'
const apiKey = this.config.apiKey
```


⚠️ Potential issue | 🟡 Minor

Verify config property name: `baseUrl` vs `baseURL`.

Line 213 accesses `this.config.baseUrl` but OpenAI SDK configuration typically uses `baseURL` (uppercase URL). This could result in the fallback URL always defaulting to `'https://api.openai.com/v1'` even when a custom base URL is configured.

```bash
#!/bin/bash
# Check the OpenAIClientConfig interface for the correct property name
rg -n "baseUrl|baseURL" packages/typescript/ai-openai/src/utils.ts
```
🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/video.ts around lines 213 to 214, the code reads this.config.baseUrl which likely mismatches the OpenAIClientConfig property name (baseURL); update the access to use this.config.baseURL (or check the config interface and normalize both names) so a provided custom base URL is respected instead of always falling back to the hardcoded 'https://api.openai.com/v1'; ensure the change aligns with the OpenAIClientConfig definition in utils.ts and add a safe fallback if neither property is present.
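
One possible fix, sketched under the assumption that the config may carry either spelling (the real `OpenAIClientConfig` should be checked first):

```ts
// Hypothetical config shape: accept both spellings and normalize once.
interface ClientConfigLike {
  apiKey?: string
  baseUrl?: string
  baseURL?: string
}

function resolveBaseUrl(config: ClientConfigLike): string {
  // Prefer whichever property is actually set before using the default.
  return config.baseURL ?? config.baseUrl ?? 'https://api.openai.com/v1'
}
```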

Comment on lines +241 to +252
```ts
// The response is the raw video file - convert to base64 data URL
const videoBlob = await contentResponse.blob()
const buffer = await videoBlob.arrayBuffer()
const base64 = Buffer.from(buffer).toString('base64')
const mimeType =
  contentResponse.headers.get('content-type') || 'video/mp4'

return {
  jobId,
  url: `data:${mimeType};base64,${base64}`,
  expiresAt: undefined, // Data URLs don't expire
}
```


⚠️ Potential issue | 🟠 Major

Buffer is not available in browser environments.

`Buffer.from()` is a Node.js API and will throw `ReferenceError: Buffer is not defined` in browser environments. For isomorphic code, consider using a cross-platform approach.

```diff
-        const base64 = Buffer.from(buffer).toString('base64')
+        // Cross-platform base64 encoding
+        const base64 = typeof Buffer !== 'undefined'
+          ? Buffer.from(buffer).toString('base64')
+          : btoa(String.fromCharCode(...new Uint8Array(buffer)))
```

Alternatively, consider using a utility like `uint8array-extras` or checking if video URL retrieval should be server-only.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```diff
 // The response is the raw video file - convert to base64 data URL
 const videoBlob = await contentResponse.blob()
 const buffer = await videoBlob.arrayBuffer()
-const base64 = Buffer.from(buffer).toString('base64')
+// Cross-platform base64 encoding
+const base64 = typeof Buffer !== 'undefined'
+  ? Buffer.from(buffer).toString('base64')
+  : btoa(String.fromCharCode(...new Uint8Array(buffer)))
 const mimeType =
   contentResponse.headers.get('content-type') || 'video/mp4'

 return {
   jobId,
   url: `data:${mimeType};base64,${base64}`,
   expiresAt: undefined, // Data URLs don't expire
 }
```
🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/video.ts around lines 241 to 252, the code uses Buffer.from(...) which breaks in browser environments; replace that Node-only call with an isomorphic conversion: after getting the ArrayBuffer, branch on whether globalThis.Buffer exists and use Buffer.from(buffer).toString('base64') for Node, otherwise convert the Uint8Array to a binary string (e.g. by iterating or chunking to avoid stack issues) and call btoa(binaryString) to produce the base64 string (or use FileReader.readAsDataURL to get a data URL in browsers), keep the existing mimeType handling and return shape, and ensure no Buffer references remain when running in the browser.

Comment on lines +52 to +60
```ts
/** The audio data to transcribe - can be base64 string, File, Blob, or Buffer */
audio: string | File | Blob | ArrayBuffer
/** The language of the audio in ISO-639-1 format (e.g., 'en') */
language?: string
/** An optional prompt to guide the transcription */
prompt?: string
/** The format of the transcription output */
responseFormat?: 'json' | 'text' | 'srt' | 'verbose_json' | 'vtt'
/** Provider-specific options for transcription */
```


⚠️ Potential issue | 🟡 Minor

Fix JSDoc/type mismatch for `audio` ("Buffer" isn't in the type).

Either add `Buffer` to the actual `audio` type (if supported), or remove "Buffer" from JSDoc/examples to match `string | File | Blob | ArrayBuffer`.

Also applies to: 80-90

🤖 Prompt for AI Agents
packages/typescript/ai/src/activities/transcription/index.ts around lines 52-60 (and similarly lines 80-90): the JSDoc mentions "Buffer" but the TypeScript type is string | File | Blob | ArrayBuffer, causing a mismatch; update the TypeScript signature to include Node's Buffer (e.g., string | File | Blob | ArrayBuffer | Buffer) if Buffer is intended to be supported, or remove "Buffer" from the JSDoc/examples to match the existing type; ensure both JSDoc and the TS type are consistent across the file (and update any related overloads or tests accordingly).
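
If Node's `Buffer` is meant to be accepted, a sketch of keeping the two in sync (whether `Buffer` belongs in the union is the open question the comment raises; the `Buffer` type assumes `@types/node`):

```ts
/** Audio data to transcribe: base64 string, File, Blob, ArrayBuffer, or Node Buffer. */
type TranscriptionAudioInput = string | File | Blob | ArrayBuffer | Buffer

// If Buffer is NOT intended, drop it from both the union and the JSDoc instead:
// type TranscriptionAudioInput = string | File | Blob | ArrayBuffer
```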

Comment on lines 47 to 52
```ts
export type GenerateAdapter =
  | TextAdapter<ReadonlyArray<string>, object, any, any, any>
  | EmbeddingAdapter<ReadonlyArray<string>, object>
  | SummarizeAdapter<ReadonlyArray<string>, object>
  | ImageAdapter<ReadonlyArray<string>, object, any, any>
  | VideoAdapter<ReadonlyArray<string>, object>
```


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
# First, find all adapter definitions in the activities directory
find packages/typescript/ai/src/activities -name "adapter.ts" -o -name "adapter.d.ts" | head -20
```

Repository: TanStack/ai

Length of output: 452


🏁 Script executed:

```bash
# Check the structure of activities directory
fd -t d "activities" packages/typescript/ai/src/ --exec find {} -type f -name "*.ts" \;
```

Repository: TanStack/ai

Length of output: 1583


🏁 Script executed:

```bash
# Search for TTS and Transcription adapter definitions
rg "export.*(?:TTS|Transcription)Adapter" packages/typescript/ai/src/ -A 2
```

Repository: TanStack/ai

Length of output: 2254


🏁 Script executed:

```bash
# Check the current GenerateAdapter definition and imports
cat -n packages/typescript/ai/src/ai.ts | head -60
```

Repository: TanStack/ai

Length of output: 2173


🏁 Script executed:

```bash
# Look for AnyAIAdapter definition to see what it includes
rg "type AnyAIAdapter|export type.*AnyAIAdapter" packages/typescript/ai/src/ -A 5
```

Repository: TanStack/ai

Length of output: 612


🏁 Script executed:

```bash
# Get full AnyAIAdapter definition
rg "export type AnyAIAdapter" packages/typescript/ai/src/activities/index.ts -A 20
```

Repository: TanStack/ai

Length of output: 988


🏁 Script executed:

```bash
# Check the activityMap in activities/index.ts
rg "const activityMap|export.*activityMap" packages/typescript/ai/src/activities/index.ts -A 10
```

Repository: TanStack/ai

Length of output: 422


🏁 Script executed:

```bash
# Check what activities are defined and exported
cat packages/typescript/ai/src/activities/index.ts | head -100
```

Repository: TanStack/ai

Length of output: 2948


🏁 Script executed:

```bash
# Check the ai() function signature
rg "export.*function ai|export const ai" packages/typescript/ai/src/ai.ts -A 15
```

Repository: TanStack/ai

Length of output: 833


🏁 Script executed:

```bash
# Check what imports are in ai.ts from activities
rg "^import.*from.*activities" packages/typescript/ai/src/ai.ts
```

Repository: TanStack/ai

Length of output: 426


Add missing TTS and Transcription adapters to the `GenerateAdapter` union type.

The `GenerateAdapter` union (lines 47-52) is missing `TTSAdapter` and `TranscriptionAdapter`, even though the `ai()` function is typed to accept `AnyAIAdapter` which includes these adapters, and the `activityMap` supports routing to both `ttsActivity` and `transcriptionActivity`. This creates a type consistency gap where the public `GenerateAdapter` export doesn't reflect all adapter types the function actually handles.

```diff
+import type { TTSAdapter } from './activities/tts/adapter'
+import type { TranscriptionAdapter } from './activities/transcription/adapter'
 /** Union of all adapter types that can be passed to ai() */
 export type GenerateAdapter =
   | TextAdapter<ReadonlyArray<string>, object, any, any, any>
   | EmbeddingAdapter<ReadonlyArray<string>, object>
   | SummarizeAdapter<ReadonlyArray<string>, object>
   | ImageAdapter<ReadonlyArray<string>, object, any, any>
   | VideoAdapter<ReadonlyArray<string>, object>
+  | TTSAdapter<ReadonlyArray<string>, object>
+  | TranscriptionAdapter<ReadonlyArray<string>, object>
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```ts
export type GenerateAdapter =
  | TextAdapter<ReadonlyArray<string>, object, any, any, any>
  | EmbeddingAdapter<ReadonlyArray<string>, object>
  | SummarizeAdapter<ReadonlyArray<string>, object>
  | ImageAdapter<ReadonlyArray<string>, object, any, any>
  | VideoAdapter<ReadonlyArray<string>, object>
```

```ts
import type { TextAdapter } from './activities/text/adapter'
import type { EmbeddingAdapter } from './activities/embedding/adapter'
import type { SummarizeAdapter } from './activities/summarize/adapter'
import type { ImageAdapter } from './activities/image/adapter'
import type { VideoAdapter } from './activities/video/adapter'
import type { TTSAdapter } from './activities/tts/adapter'
import type { TranscriptionAdapter } from './activities/transcription/adapter'

/** Union of all adapter types that can be passed to ai() */
export type GenerateAdapter =
  | TextAdapter<ReadonlyArray<string>, object, any, any, any>
  | EmbeddingAdapter<ReadonlyArray<string>, object>
  | SummarizeAdapter<ReadonlyArray<string>, object>
  | ImageAdapter<ReadonlyArray<string>, object, any, any>
  | VideoAdapter<ReadonlyArray<string>, object>
  | TTSAdapter<ReadonlyArray<string>, object>
  | TranscriptionAdapter<ReadonlyArray<string>, object>
```
🤖 Prompt for AI Agents
In packages/typescript/ai/src/ai.ts around lines 47 to 52, the GenerateAdapter union is missing TTSAdapter and TranscriptionAdapter which causes a type gap; update the exported GenerateAdapter union to include the TTSAdapter and TranscriptionAdapter variants with the correct generic parameter shapes used elsewhere (match the existing TTSAdapter<TInput, TOptions, TStream?> and TranscriptionAdapter<TInput, TOptions> signatures in the codebase), and add or adjust imports if necessary so those adapter types are referenced; ensure the final union mirrors AnyAIAdapter's supported adapter kinds so ai() consumers see consistent types.

* fixing JSON-Schema conversion
* summarize now really supports streaming
* ci: apply automated fixes
* linting fixes
* feat: add createOptions helper and improve summarization streaming
  - Add createOptions() function for type-safe adapter option creation
  - Refactor OpenAI summarize adapter to use text adapter for streaming
  - Deprecate textOptions() in favor of createOptions()
  - Update examples to use createOptions pattern
  - Add runtime adapter switching documentation guide
* massive overhaul to remove ai
* ci: apply automated fixes
* Going to openaiText
* ci: apply automated fixes
* doc fixes and removal of legacy exports
* adding more options to gemini audio
* doc updates
* ci: apply automated fixes
* add docs script

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Alem Tuzlak <t.zlak@hotmail.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 18

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/typescript/ai-openai/live-tests/tool-test-empty-object.ts (1)

67-72: Use the public `chat()` function instead of calling `adapter.chatStream()` directly.

The `@ts-ignore` comment indicates a typing issue. While `chatStream` is technically a public method on the adapter, the documented and recommended pattern is to use the `chat()` function from `@tanstack/ai`. Replace the direct adapter method call with:

```ts
const stream = chat({
  adapter,
  model: 'gpt-4o-mini',
  messages,
  tools: [getGuitarsTool],
})
```

This approach follows the published API pattern and eliminates the need for type suppression.

packages/typescript/ai-solid/README.md (1)

1-3: Incorrect package name in README title.

The title says `@tanstack/ai-react` but this file is located in the `ai-solid` package directory. This appears to be a copy-paste error that should be corrected.

```diff
-# @tanstack/ai-react
+# @tanstack/ai-solid

-React hooks for building AI chat interfaces with TanStack AI.
+Solid hooks for building AI chat interfaces with TanStack AI.
```
♻️ Duplicate comments (32)
packages/typescript/ai/src/activities/chat/messages.ts (1)

14-69: Previous review feedback not yet addressed.

The issues raised in the prior review remain (see the sketch after this list):

  1. Line 23: TSDoc example imports `chat` but doesn't use it; remove the unused import from the example
  2. Lines 63-66: `msgs` parameter should accept `ReadonlyArray<...>` to support `as const` inputs
  3. Lines 67-68: Add `void _options` to silence potential unused parameter warnings
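
A sketch of the second and third fixes together (the surrounding types are hypothetical, not the package's actual definitions):

```ts
type Message = { role: 'user' | 'assistant'; content: string }

// ReadonlyArray accepts both mutable arrays and `as const` tuples.
function messages(
  msgs: ReadonlyArray<Message>,
  _options?: { validate?: boolean },
): Array<Message> {
  void _options // silences unused-parameter lints until options are consumed
  return [...msgs]
}
```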
packages/typescript/ai/src/activities/chat/tools/zod-converter.ts (2)

59-72: Handle edge case: optional fields without a `type` property.

The logic for making optional fields nullable assumes `prop.type` exists. However, JSON Schema properties can omit `type` when using `oneOf`, `anyOf`, or other combinators. If an optional field lacks a `type` property, it won't be made nullable, which could violate OpenAI's structured output requirements.

Consider adding a fallback for properties without `type`:

```diff
       } else if (wasOptional) {
         // Make optional fields nullable by adding null to the type
         if (prop.type && !Array.isArray(prop.type)) {
           properties[propName] = {
             ...prop,
             type: [prop.type, 'null'],
           }
         } else if (Array.isArray(prop.type) && !prop.type.includes('null')) {
           properties[propName] = {
             ...prop,
             type: [...prop.type, 'null'],
           }
+        } else if (!prop.type && (prop.oneOf || prop.anyOf || prop.allOf)) {
+          // For schema composition, wrap in anyOf with null
+          properties[propName] = {
+            ...prop,
+            anyOf: [...(prop.anyOf || [prop]), { type: 'null' }],
+          }
         }
       }
```

181-181: Update comment to reflect that `toJSONSchema` is part of Zod v4's standard API.

The import `import { toJSONSchema } from 'zod'` is correct. However, the comment "Use Alcyone Labs fork which is compatible with Zod v4" is outdated. `toJSONSchema` is now part of Zod v4's standard API.

Update the comment to reflect current reality:

```diff
-  // Use Alcyone Labs fork which is compatible with Zod v4
+  // Use Zod's built-in toJSONSchema for OpenAPI 3.0 compatibility
   const jsonSchema = toJSONSchema(schema, {
```
packages/typescript/ai/src/activities/chat/adapter.ts (1)

12-18: `TextAdapterConfig` properties remain unused (known issue).

As noted in a previous review, the `timeout`, `maxRetries`, and `headers` properties in `TextAdapterConfig` are defined but not applied to SDK clients. This is a known issue that should be addressed in a follow-up.

packages/typescript/ai/src/activities/embedding/index.ts (1)

131-156: Error handling concern already raised in previous review.

The unpaired lifecycle events issue (if `adapter.createEmbeddings` throws) has been flagged in a previous review comment. Wrapping the adapter call in try-catch with proper error event emission remains necessary for consistent event lifecycle tracking.

docs/adapters/ollama.md (1)

127-127: Hyphenate "tool-calling".

This issue was already flagged in a previous review comment.

```diff
-**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool calling support.
+**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool-calling support.
```
packages/typescript/ai-openai/src/adapters/transcription.ts (1)

127-173: Critical: `instanceof File`/`Blob` can throw in environments without these globals.

This issue was flagged in a previous review. The code uses `instanceof File` and `instanceof Blob` which will throw `ReferenceError` in Node.js environments where these globals don't exist. Additionally, `atob` may not be available.

Guard with `typeof` checks before using `instanceof`, and provide fallback decoding:

```diff
   private prepareAudioFile(audio: string | File | Blob | ArrayBuffer): File {
     // If already a File, return it
-    if (audio instanceof File) {
+    if (typeof File !== 'undefined' && audio instanceof File) {
       return audio
     }
     // If Blob, convert to File
-    if (audio instanceof Blob) {
+    if (typeof Blob !== 'undefined' && audio instanceof Blob) {
       return new File([audio], 'audio.mp3', {
         type: audio.type || 'audio/mpeg',
       })
     }
```

For `atob`, consider using `Buffer.from(base64, 'base64')` as a fallback in Node.js environments.

packages/typescript/ai/src/activities/generateTranscription/index.ts (1)

44-62: JSDoc/type mismatch: "Buffer" mentioned but not in type.

The JSDoc at line 52 mentions "Buffer" as a valid audio input type, but the TypeScript type is `string | File | Blob | ArrayBuffer`. Either add `Buffer` to the type (if Node.js Buffer is supported) or remove "Buffer" from the JSDoc.

packages/typescript/ai-ollama/src/adapters/summarize.ts (1)

99-102: Existing review comment addresses the token/word semantic mismatch here.

The past review already flagged that `num_predict` (tokens) and the prompt instruction (words) use `maxLength` inconsistently. Please address that feedback.
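
A sketch of one way to reconcile the two units, assuming a rough tokens-per-word ratio (the exact ratio varies by model and tokenizer):

```ts
// maxLength is expressed in words, while Ollama's num_predict counts tokens.
// Converting with an explicit heuristic keeps the two options consistent.
const TOKENS_PER_WORD = 1.4 // assumption, not a measured constant

function wordBudgetToTokens(maxWords: number): number {
  return Math.ceil(maxWords * TOKENS_PER_WORD)
}

// e.g. request.options.num_predict = wordBudgetToTokens(maxLength ?? 200)
```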

docs/adapters/gemini.md (1)

156-169: Verify embedding model name matches the adapter's supported models.

The documentation uses `gemini-embedding-001`. A past review flagged inconsistencies between the embedding model lists in different files. Please ensure this model name matches what's defined in the Gemini embed adapter.

```bash
#!/bin/bash
# Check embedding model definitions
echo "=== Gemini embedding models in adapter ==="
rg -n "GeminiEmbeddingModels|GEMINI_EMBEDDING" packages/typescript/ai-gemini/src/
echo ""
echo "=== Model meta definitions ==="
rg -B2 -A5 "GEMINI_EMBEDDING_MODELS" packages/typescript/ai-gemini/src/model-meta.ts
```
packages/typescript/ai-gemini/src/adapters/tts.ts (1)

103-107: Default voice can be skipped when `voiceConfig` is provided without `voiceName`.

The current logic only sets the default `voiceName: 'Kore'` when `modelOptions?.voiceConfig` is entirely falsy. If a caller provides `{ voiceConfig: { prebuiltVoiceConfig: {} } }`, `voiceName` will be `undefined`.

Consider merging defaults more defensively:

```diff
-    const voiceConfig = modelOptions?.voiceConfig || {
-      prebuiltVoiceConfig: {
-        voiceName: 'Kore' as const,
-      },
-    }
+    const voiceConfig = {
+      prebuiltVoiceConfig: {
+        voiceName:
+          modelOptions?.voiceConfig?.prebuiltVoiceConfig?.voiceName ?? ('Kore' as const),
+      },
+      ...modelOptions?.voiceConfig,
+    }
```
docs/adapters/openai.md (1)

115-129: Verify `modelOptions` parameter names match the underlying API.

The example uses `max_tokens`, `frequency_penalty`, `presence_penalty`, and `stop`, which are Chat Completions API parameters. If the adapter uses the Responses API internally, these should be `max_output_tokens` (and some parameters may not be supported).

```bash
#!/bin/bash
# Check which API the OpenAI text adapter uses and what parameters it accepts
rg -n "max_tokens|max_output_tokens|frequency_penalty" packages/typescript/ai-openai/src/adapters/text.ts -B 2 -A 2
rg -n "responses\.create|chat\.completions" packages/typescript/ai-openai/src/adapters/text.ts
```
packages/typescript/ai-ollama/src/adapters/text.ts (2)

189-212: Ollama's `ToolCall` type lacks an `id` property; runtime risk.

The type assertion on lines 190-193 assumes `ToolCall` has `id` and `function.index`, but Ollama v0.6.3's `ToolCall` only provides `function.name` and `function.arguments`. Line 201 uses `actualToolCall.id` without a fallback, risking undefined access.

Consider using `responseId` as a fallback:

```diff
-            id: actualToolCall.id,
+            id: actualToolCall.id ?? `${responseId}-${chunk.message.tool_calls?.indexOf(toolCall) ?? 0}`,
```

359-378: `systemPrompts` not forwarded to Ollama.

The `mapCommonOptionsToOllama` function doesn't include `options.systemPrompts` in the returned `ChatRequest`. Unlike other adapters, system prompts are silently dropped. Ollama's API supports system role messages.

```diff
+    // Prepend system messages if systemPrompts provided
+    const formattedMessages = this.formatMessages(options.messages)
+    const messagesWithSystem = options.systemPrompts?.length
+      ? [
+          { role: 'system' as const, content: options.systemPrompts.join('\n') },
+          ...formattedMessages,
+        ]
+      : formattedMessages
     return {
       model,
       options: ollamaOptions,
-      messages: this.formatMessages(options.messages),
+      messages: messagesWithSystem,
       tools: this.convertToolsToOllamaFormat(options.tools),
     }
```
packages/typescript/ai-openai/src/adapters/video.ts (2)

213-214: Verify config property name: `baseUrl` vs `baseURL`.

Line 213 accesses `this.config.baseUrl` but OpenAI SDK configuration typically uses `baseURL` (uppercase URL). This could result in the fallback URL always being used even when a custom base URL is configured.

```bash
#!/bin/bash
# Check the OpenAIClientConfig interface for the correct property name
rg -n "baseUrl|baseURL" packages/typescript/ai-openai/src/utils/client.ts
```

241-252: Buffer is not available in browser environments.

`Buffer.from()` is a Node.js API and will throw `ReferenceError: Buffer is not defined` in browsers. For isomorphic code, use a cross-platform approach.

```diff
-        const base64 = Buffer.from(buffer).toString('base64')
+        // Cross-platform base64 encoding
+        const uint8Array = new Uint8Array(buffer)
+        const base64 = typeof Buffer !== 'undefined'
+          ? Buffer.from(buffer).toString('base64')
+          : btoa(Array.from(uint8Array, byte => String.fromCharCode(byte)).join(''))
```

Alternatively, consider making this method server-only or using a utility library for consistent cross-platform encoding.

packages/typescript/ai-anthropic/src/adapters/summarize.ts (1)

51-61: `AnthropicSummarizeProviderOptions` is defined but not used.

The interface defines `temperature` and `maxTokens` (lines 23-28), but the `summarize` method hardcodes `temperature: 0.3` and ignores `modelOptions`. Users cannot customize these values.

Wire the provider options into the implementation:

```diff
 async summarize(options: SummarizationOptions): Promise<SummarizationResult> {
   const systemPrompt = this.buildSummarizationPrompt(options)
+  const providerOpts = (options as any).modelOptions as AnthropicSummarizeProviderOptions | undefined
   const response = await this.client.messages.create({
     model: options.model,
     messages: [{ role: 'user', content: options.text }],
     system: systemPrompt,
-    max_tokens: options.maxLength || 500,
-    temperature: 0.3,
+    max_tokens: providerOpts?.maxTokens ?? options.maxLength ?? 500,
+    temperature: providerOpts?.temperature ?? 0.3,
     stream: false,
   })
```
packages/typescript/ai-gemini/src/adapters/text.ts (6)

203-204: Tool call ID collision: function name is not unique across multiple calls.

Using `functionCall.name` directly as the `toolCallId` causes collisions when the same function is called multiple times in one stream. Each tool call must have a unique identifier.

Also applies to: 271-272


252-263: Remove dead code handling the non-existent `chunk.data` property.

The `chunk.data` property does not exist on `GenerateContentResponse` in the `@google/genai` types. This branch will never execute.


327-334: Incorrect token count mapping: use `candidatesTokenCount` instead of `thoughtsTokenCount`.

`thoughtsTokenCount` represents the model's internal reasoning tokens (chain-of-thought), not completion tokens. Completion tokens should come from `candidatesTokenCount`, which represents the actual generated response.


353-367: Incorrect default MIME type for non-image content.

The fallback `'image/jpeg'` is used for all media types including audio, video, and documents. This will cause issues when processing non-image content without explicit metadata.


419-428: Use the actual function name, not `toolCallId`, for `functionResponse.name`.

According to the Gemini API specification, the `functionResponse.name` field must be the `functionDeclaration.name` (the tool/function identifier), not a call ID.


395-408: Unsafe type casting after failed JSON parse.

When JSON parsing fails, the code casts `toolCall.function.arguments` to `Record<string, unknown>`, but if `arguments` is a string (which it likely is if parsing failed), this cast is incorrect and could cause runtime errors.

packages/typescript/ai-openai/src/adapters/text.ts (5)

88-97: Normalize headers to a plain object before passing to the OpenAI SDK, and use `abortController.signal`.

`TextOptions.request` can contain a `Headers` object, but the OpenAI Node SDK expects `headers` to be a plain `Record<string, string>`. Additionally, `options.abortController?.signal` should be prioritized over `options.request?.signal`.

Also applies to: 137-155


230-230: Unused variable `eventTypeCounts`.

This map is declared but never populated with event type counts. Either implement the counting logic or remove this variable and its reference in error logging at line 386.


667-672: Base64 images need data URI format for the OpenAI API.

The code passes `part.source.value` directly to `image_url`, but for `source.type === 'data'`, this is raw base64. OpenAI's Responses API expects a data URI like `data:image/jpeg;base64,...`.


674-687: Remove audio handling from the Responses API adapter; not supported.

The OpenAI Responses API does not support audio inputs at all. Audio must use the dedicated Audio API endpoints (transcriptions/translations/realtime). This code will fail at runtime.


106-113: `chatStream()` should yield an error chunk instead of throwing.

Throwing from the generator forces consumers to wrap the iteration in try/catch and prevents unified stream error handling. Other adapters yield error chunks instead.

packages/typescript/ai-anthropic/src/adapters/text.ts (3)

257-266: Spread of `validProviderOptions` can override `systemPrompts`.

The `system` field is set explicitly from `options.systemPrompts?.join('\n')` at line 263, but since `'system'` is included in `validKeys` (line 228), it can be extracted from `providerOptions` and will override the joined `systemPrompts` when spread at line 265.


500-518: Use the `index` field from `content_block_stop` to identify which block stopped.

The handler checks `toolCallsMap.get(currentToolIndex)` without verifying that the stopped block is actually a tool_use. Since `content_block_stop` events include an `index` field, use that to confirm the stopped block type.


109-121: Error code may result in the string "undefined".

If both `err.code` and `err.status` are undefined, `String(err.status)` produces `"undefined"` as the error code, which isn't meaningful for consumers.

Also applies to: 581-594

packages/typescript/ai-openai/src/adapters/tts.ts (1)

52-98: Use an isomorphic base64 conversion instead of `Buffer.from()`.

Line 86 uses `Buffer.from(arrayBuffer).toString('base64')`, which will fail in browser and edge runtime environments. This prevents the adapter from being truly isomorphic.

🧹 Nitpick comments (23)
docs/reference/functions/toStreamResponse.md (1)

39-50: Example correctly demonstrates the new tree-shakeable adapter API.

The update from `openai()` to `openaiText()` properly reflects the refactoring to modular, provider-specific adapters. The example clearly shows how to use the text adapter within the `chat()` function.

Consider adding an import statement for full clarity, especially if this guide is the primary reference for users learning the new API pattern:

````diff
 ```typescript
+import { openaiText } from '@tanstack/ai-openai/adapters';
+
 export async function POST(request: Request) {
````

This helps users immediately understand where `openaiText()` is imported from and makes the example fully self-contained. Based on learnings, adapters should be exported with clear subpath exports.

docs/reference/functions/combineStrategies.md (1)

33-43: Add import statement to the example for clarity.

The example uses `openaiText()` without showing the import path. Users need to know where to import this adapter from to use the example.

Consider adding the import statement at the top of the code block:

````diff
 ```typescript
+import { openaiText } from '@tanstack/ai-openai/adapters';
+
 const stream = chat({
   adapter: openaiText(),
````

Please verify: Is this the correct import path for the new tree-shakeable adapter exports? If the path differs, this should be updated to match the documented pattern.

packages/typescript/ai/src/activities/generateVideo/adapter.ts (1)

98-100: Consider using `crypto.randomUUID()` for more robust ID generation.

The current implementation using `Math.random().toString(36).substring(7)` produces IDs with limited entropy and inconsistent length due to the `substring(7)` behavior (which may produce fewer than 6 characters if the random number is small).

```diff
 protected generateId(): string {
-  return `${this.name}-${Date.now()}-${Math.random().toString(36).substring(7)}`
+  return `${this.name}-${Date.now()}-${Math.random().toString(36).slice(2, 10)}`
 }
```

Alternatively, if targeting modern runtimes, consider `crypto.randomUUID()` for standards-compliant unique IDs.

packages/typescript/ai/src/activities/generateVideo/index.ts (2)

83-106: The `model` parameter appears unused in status/URL operations.

`VideoStatusOptions` and `VideoUrlOptions` extend `VideoActivityBaseOptions`, which requires a `model` field, but the underlying adapter methods `getVideoStatus(jobId)` and `getVideoUrl(jobId)` only accept `jobId`. The `model` parameter becomes dead weight for these operations.

Consider either:

1. Creating a simpler base interface for status/URL operations without `model`:

```diff
+interface VideoJobOptions<TAdapter extends VideoAdapter<ReadonlyArray<string>, object>> {
+  adapter: TAdapter & { kind: typeof kind }
+  jobId: string
+}
+
 export interface VideoStatusOptions<
   TAdapter extends VideoAdapter<ReadonlyArray<string>, object>,
-  TModel extends VideoModels<TAdapter>,
-> extends VideoActivityBaseOptions<TAdapter, TModel> {
+> extends VideoJobOptions<TAdapter> {
   request: 'status'
-  jobId: string
 }
```

2. Or documenting that `model` is required for API consistency even if unused.

234-242: Silent error swallowing may mask issues.

When the video is `completed` but the URL fetch fails, the function returns `error` but still reports `status: 'completed'`. This could confuse consumers who check `status === 'completed'` and expect a URL to be available.

Consider returning a distinct status or throwing to make the failure explicit:

```diff
     } catch (error) {
-      // If URL fetch fails, still return status
       return {
-        status: statusResult.status,
+        status: 'failed' as const,
         progress: statusResult.progress,
         error:
           error instanceof Error ? error.message : 'Failed to get video URL',
       }
     }
```

Alternatively, document the behavior clearly in JSDoc: `status: 'completed'` with `error` set indicates a URL retrieval failure.

packages/typescript/ai/src/activities/chat/tools/zod-converter.ts (1)

161-164: Consider renaming the function to reflect dual input support.

The function name `convertZodToJsonSchema` suggests it only handles Zod schemas, but it also accepts and passes through JSONSchema objects (lines 170-178). While the documentation clarifies this, a more accurate name like `convertSchemaToJsonSchema` or `normalizeSchema` would better reflect the function's actual behavior.

examples/ts-group-chat/chat-server/claude-service.ts (1)

7-43: Consider using Zod for tool definitions per coding guidelines.

While the JSONSchema approach is valid, the coding guidelines recommend using Zod for tool input/output definitions to leverage runtime validation and type inference. As an example file, demonstrating the recommended pattern would be beneficial.

As per coding guidelines, "Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with `toolDefinition()` and Zod schema inference."

Example refactor using Zod:

```typescript
import { z } from 'zod'

const getWeatherTool = toolDefinition({
  name: 'getWeather',
  description:
    'Get the current weather for a location. Returns temperature, conditions, and humidity.',
  inputSchema: z.object({
    location: z.string().describe('The city or location to get weather for'),
    unit: z
      .enum(['celsius', 'fahrenheit'])
      .optional()
      .describe('Temperature unit (defaults to celsius)'),
  }),
  outputSchema: z.object({
    location: z.string(),
    temperature: z.number(),
    unit: z.string(),
    conditions: z.string(),
    humidity: z.number().optional(),
  }),
}).server((args) => {
  // Implementation remains the same
  // ...
})
```
packages/typescript/ai/src/activities/index.ts (2)

27-34: Consider consolidating duplicate imports from `./chat/index`.

There are two separate import statements from `'./chat/index'` requiring `eslint-disable` comments. These could be merged into a single import block.

```diff
 // Import model types for use in local type definitions
 import type {
   InputModalitiesForModel,
   MessageMetadataForAdapter,
   TextModels,
   TextProviderOptionsForModel,
-  // eslint-disable-next-line import/no-duplicates
+  TextActivityOptions,
+  TextActivityResult,
 } from './chat/index'
 ...
-// eslint-disable-next-line import/no-duplicates
-import type { TextActivityOptions, TextActivityResult } from './chat/index'
```

Also applies to: 84-85


251-252: Consider stronger typing for `ActivityHandler` if this type will be retained.

The `any` types lose type safety. If this type remains in use after deprecation cleanup, consider using generics or a more specific union type.
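
For illustration, a generic signature that keeps the options/result pair linked instead of collapsing to `any` (a hypothetical shape, not the current definition):

```typescript
// Callers pin TOptions and TResult per activity instead of accepting any.
type ActivityHandler<TOptions, TResult> = (
  options: TOptions,
) => Promise<TResult> | AsyncIterable<TResult>
```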

packages/typescript/ai-openai/live-tests/tool-test-empty-object.ts (1)

1-1: Import is correct; consider adding subpath exports for full tree-shaking benefits.

The `createOpenaiChat` export is properly defined in `packages/typescript/ai-openai/src/adapters/text.ts` (line 743) and correctly re-exported from the package index. The function is typed to return `OpenAITextAdapter` and integrates well with the existing tree-shakeable adapter architecture.

However, the adapter architecture could be optimized by configuring subpath exports in `package.json` (e.g., `@tanstack/ai-openai/adapters`) to allow consumers to import directly from adapter modules, further reducing bundle size for those only using specific adapters. Currently, all adapters are exported from the main entry point, which limits tree-shaking effectiveness.

packages/typescript/ai/src/activities/generateSpeech/index.ts (1)

44-62: Consider reusing TTSOptions fields to reduce duplication.

The `voice`, `format`, and `speed` fields are duplicated from `TTSOptions` in `types.ts`. While the current approach works, consider using `Pick` or `Omit` to derive these from the base type for easier maintenance.

```typescript
// Alternative approach using Pick:
export interface TTSActivityOptions<
  TAdapter extends TTSAdapter<ReadonlyArray<string>, object>,
  TModel extends TTSModels<TAdapter>,
> extends Pick<
    TTSOptions<TTSProviderOptions<TAdapter>>,
    'text' | 'voice' | 'format' | 'speed' | 'modelOptions'
  > {
  adapter: TAdapter & { kind: typeof kind }
  model: TModel
}
```
docs/guides/streaming.md (1)

20-20: Consider preserving trailing commas for consistency.

The trailing comma after `messages` was removed, which is inconsistent with common TypeScript style guides that recommend trailing commas for better diffs and easier reordering.

Apply this diff to restore the trailing comma:

```diff
   model: "gpt-4o",
-  messages
+  messages,
 });
```

This pattern also applies to line 43.

docs/guides/runtime-adapter-switching.md (1)

82-82: Consider simplifying the wording.

The phrase "exact same" could be replaced with "identical" or "same" for more concise writing.

```diff
-`createChatOptions` is a simple identity function with the **exact same type signature** as `chat()`.
+`createChatOptions` is a simple identity function with the **same type signature** as `chat()`.
```
packages/typescript/ai-openai/src/tools/function-tool.ts (1)

27-30: Potential issue with `required` property access.

On line 29, `inputSchema.required` is accessed, but the `JSONSchema` type may not guarantee this property exists. While the default schema on lines 21-25 includes `required: []`, if `tool.inputSchema` is provided but lacks a `required` property, this could pass `undefined` to `makeOpenAIStructuredOutputCompatible`.

Consider using the fallback consistently:

```diff
   const jsonSchema = makeOpenAIStructuredOutputCompatible(
     inputSchema,
-    inputSchema.required || [],
+    (inputSchema.required as string[] | undefined) ?? [],
   )
```
examples/ts-react-chat/src/routes/api.tanchat.ts (2)

22-45: Consider extracting model defaults to a configuration object.

The `adapterConfig` pattern with inline `createChatOptions` calls is functional but couples model defaults to the adapter configuration. For better maintainability, consider extracting model defaults:

```typescript
const DEFAULT_MODELS = {
  anthropic: 'claude-sonnet-4-5',
  gemini: 'gemini-2.0-flash-exp',
  ollama: 'mistral:7b',
  openai: 'gpt-4o',
} as const

const adapterConfig = {
  anthropic: () =>
    createChatOptions({
      adapter: anthropicText(),
      model: DEFAULT_MODELS.anthropic,
    }),
  // ... etc
}
```

This makes it easier to update models without touching the adapter creation logic.


105-121: The cast comment is misleading about union types.

The comment states that TypeScript sees "a union of all possible return types", but `chat()` with properly typed adapters should return a well-typed stream. The real issue is likely that `chat()` is overloaded or has multiple signatures.

Consider removing or clarifying the comment to explain more accurately why the cast is needed. If the cast is truly necessary, it might indicate a type definition issue in the core library that should be addressed.

packages/typescript/ai/src/activities/generateTranscription/adapter.ts (1)

50-74: Consider using `crypto.randomUUID()` for more robust ID generation.

The `generateId` helper at lines 71-73 uses `Date.now()` combined with a 5-character random suffix. While functional, this could produce collisions under high-frequency calls (same millisecond plus roughly a 1-in-60-million collision chance per call).

For a library that may be used in high-throughput scenarios, consider using `crypto.randomUUID()` if available, or at minimum increasing the random suffix length.

```diff
 protected generateId(): string {
-  return `${this.name}-${Date.now()}-${Math.random().toString(36).substring(7)}`
+  return `${this.name}-${Date.now()}-${Math.random().toString(36).substring(2, 11)}`
 }
```
packages/typescript/ai/src/activities/generateImage/index.ts (1)

22-55: Consider replacing `any` with more specific type constraints in conditional types.

The type helpers use `any` in `infer` positions (e.g., `ImageAdapter<infer M, any, any, any>`). While this works, using `object` or `unknown` would be more type-safe and consistent with the constraints used elsewhere in the file.

```diff
 export type ImageModels<TAdapter> =
-  TAdapter extends ImageAdapter<infer M, any, any, any> ? M[number] : string
+  TAdapter extends ImageAdapter<infer M, object, object, object> ? M[number] : string
```
packages/typescript/ai/src/activities/summarize/adapter.ts (1)

60-91: Same `generateId` pattern as the transcription adapter.

The `generateId` implementation has the same potential collision concern noted in the transcription adapter. Consider standardizing ID generation across all adapters, possibly extracting it to a shared utility.

```diff
 protected generateId(): string {
-  return `${this.name}-${Date.now()}-${Math.random().toString(36).substring(7)}`
+  return `${this.name}-${Date.now()}-${Math.random().toString(36).substring(2, 11)}`
 }
```
packages/typescript/ai-openai/src/adapters/embed.ts (1)

119-131: Consider adding a deprecation notice or JSDoc to the alias functions.

`openaiEmbed` and `createOpenaiEmbed` are aliases for `openaiEmbedding` and `createOpenaiEmbedding`. If these are deprecated aliases, consider adding `@deprecated` JSDoc annotations. If they're intentional alternatives, brief JSDoc would clarify the relationship.

```diff
+/**
+ * @deprecated Use openaiEmbedding() instead
+ */
 export function openaiEmbed(
   config?: Omit<OpenAIEmbedConfig, 'apiKey'>,
 ): OpenAIEmbedAdapter {
   const apiKey = getOpenAIApiKeyFromEnv()
   return createOpenaiEmbedding(apiKey, config)
 }

+/**
+ * @deprecated Use createOpenaiEmbedding() instead
+ */
 export function createOpenaiEmbed(
```
packages/typescript/ai-ollama/src/adapters/summarize.ts (1)

80-89: Unused constructor parameter `_options`.

The `_options` parameter is accepted but never used. Either wire it through to configure default model/behavior, or remove it to avoid confusion.

```diff
   constructor(
     hostOrClient?: string | Ollama,
-    _options: OllamaSummarizeAdapterOptions = {},
   ) {
```

Or use it:

```diff
   constructor(
     hostOrClient?: string | Ollama,
-    _options: OllamaSummarizeAdapterOptions = {},
+    private options: OllamaSummarizeAdapterOptions = {},
   ) {
```
packages/typescript/ai-gemini/src/adapters/tts.ts (1)

140-151: Consider narrowing types instead of using `any` casts.

The `any` casts work around SDK type limitations but reduce type safety. Consider defining a local interface for the expected response shape:

```typescript
interface GeminiAudioPart {
  inlineData?: {
    mimeType?: string
    data?: string
  }
}
```

This makes the expected structure explicit and catches regressions if the SDK changes.

packages/typescript/ai/src/activities/chat/index.ts (1)

769-794: Consider improving type safety in `collectClientState`.

Lines 771-772 use `(message as any).parts`, which bypasses type checking. If `parts` is a valid property for certain message types, consider extending the `ModelMessage` type or using a type guard.

```typescript
// Type guard approach
function hasToolCallParts(
  message: ModelMessage,
): message is ModelMessage & { parts: Array<any> } {
  return message.role === 'assistant' && 'parts' in message
}
```

Comment on lines +149 to +171
```typescript
const stream = chat({
  adapter: anthropicText(),
  model: "claude-sonnet-4-5",
  messages: [
    {
      role: "user",
      content: [
        {
          type: "text",
          content: "What is the capital of France?",
          metadata: {
            cache_control: {
              type: "ephemeral",
            },
          },
        },
      ],
    },
  ],
  model: "claude-sonnet-4-5",
});
```


⚠️ Potential issue | 🟡 Minor

Duplicate `model` property in prompt caching example.

The `model` property is specified twice in this code example—once on line 152 and again on line 169. Remove the duplicate to fix the syntax error.

```diff
         },
       ],
     },
   ],
-  model: "claude-sonnet-4-5",
 });
```

🤖 Prompt for AI Agents
docs/adapters/anthropic.md around lines 149 to 171: the example contains a duplicate model property which causes a syntax/logic error; remove the second occurrence so the object only declares model once (keep the intended model value "claude-sonnet-4-5" in the remaining property) and ensure the example compiles and accurately demonstrates prompt caching.

Comment on lines +260 to +273
```typescript
import { generateSpeech } from "@tanstack/ai";
import { geminiSpeech } from "@tanstack/ai-gemini";

const adapter = geminiSpeech();

const result = await generateSpeech({
  adapter,
  model: "gemini-2.5-flash-preview-tts",
  text: "Hello from Gemini TTS!",
});

console.log(result.audio); // Base64 encoded audio
```


⚠️ Potential issue | 🟡 Minor

Inconsistent naming: `geminiSpeech` in examples vs `geminiTTS` in API reference.

The example imports `geminiSpeech` (line 262) but the API reference documents `geminiTTS` (line 346). Align these for consistency.

🤖 Prompt for AI Agents
In docs/adapters/gemini.md around lines 260 to 273, the example imports and uses geminiSpeech while the API reference documents geminiTTS, causing inconsistent naming; update the example to use the same exported name as the API reference (replace geminiSpeech with geminiTTS in the import and variable name) or alternatively change the API reference to geminiSpeech so both sections match, and ensure any example usage and import path match the actual exported symbol from the package.

Comment on lines +310 to +320
### `geminiEmbed(config?)`

Creates a Gemini embedding adapter using environment variables.

**Returns:** A Gemini embed adapter instance.

### `createGeminiEmbed(apiKey, config?)`

Creates a Gemini embedding adapter with an explicit API key.

**Returns:** A Gemini embed adapter instance.


⚠️ Potential issue | 🟡 Minor

Inconsistency between example imports and API reference.

The examples use `geminiEmbedding()` (line 158) but the API reference documents `geminiEmbed()` (line 310). These should be consistent. Either both are valid aliases (document both), or pick one canonical name.

🤖 Prompt for AI Agents
In docs/adapters/gemini.md around lines 310 to 320, the API reference uses the name geminiEmbed() while examples use geminiEmbedding(); make these consistent by choosing a single canonical function name (e.g., geminiEmbedding) and either rename the API reference to match the examples or explicitly document both names as aliases (geminiEmbedding and geminiEmbed) with one marked as preferred; update all example import lines and the API reference text to use the chosen canonical name (or add an alias note) so the docs are consistent.

Comment on lines +34 to +36
```typescript
import { createOllamaChat } from "@tanstack/ai-ollama";

const adapter = createOllamaChat("http://your-server:11434");
```


⚠️ Potential issue | 🟡 Minor

Inconsistent function naming in documentation.

Line 34 uses `createOllamaChat` but the API Reference section (lines 309-318) documents `createOllamaText`. The examples should use consistent naming throughout:

```diff
-import { createOllamaChat } from "@tanstack/ai-ollama";
+import { createOllamaText } from "@tanstack/ai-ollama";

-const adapter = createOllamaChat("http://your-server:11434");
+const adapter = createOllamaText("http://your-server:11434");
```

Also apply this fix to line 48 and line 280.

🤖 Prompt for AI Agents
docs/adapters/ollama.md around lines 34-36, 48, and 280: the documented examples use createOllamaChat but the API Reference documents createOllamaText; update the examples on the specified lines to call createOllamaText (and adjust any import usage if needed) so the function name is consistent with the API Reference throughout the file.

Comment on lines +278 to +287
```typescript
const result = await generateSpeech({
  adapter: openaiSpeech(),
  model: "tts-1-hd",
  text: "High quality speech",
  modelOptions: {
    speed: 1.0, // 0.25 to 4.0
  },
});
```


⚠️ Potential issue | 🟡 Minor

Inconsistent adapter factory name in TTS example.

Line 258 uses `openaiTTS()` but line 280 uses `openaiSpeech()`. These should be consistent. Based on the API Reference section (line 391), `openaiTTS` appears to be the correct name.

```diff
 const result = await generateSpeech({
-  adapter: openaiSpeech(),
+  adapter: openaiTTS(),
   model: "tts-1-hd",
```
🤖 Prompt for AI Agents
In docs/adapters/openai.md around lines 278 to 287, the TTS example uses the adapter factory name openaiSpeech() which is inconsistent with openaiTTS() used earlier and with the API reference; replace openaiSpeech() with openaiTTS() so the example uses the correct adapter factory name and matches the rest of the doc.

Comment on lines +79 to +94
```typescript
async *summarizeStream(
  options: SummarizationOptions,
): AsyncIterable<StreamChunk> {
  const systemPrompt = this.buildSummarizationPrompt(options)

  // Delegate directly to the text adapter's streaming
  yield* this.textAdapter.chatStream({
    model: options.model,
    messages: [{ role: 'user', content: options.text }],
    systemPrompts: [systemPrompt],
    options: {
      maxTokens: options.maxLength,
      temperature: 0.3,
    },
  })
}
```


⚠️ Potential issue | 🟡 Minor

Same provider options issue applies to streaming.

The `summarizeStream` method also ignores `options.modelOptions`. Apply the same fix to use provider options when available.

```diff
   async *summarizeStream(
     options: SummarizationOptions,
   ): AsyncIterable<StreamChunk> {
     const systemPrompt = this.buildSummarizationPrompt(options)

     yield* this.textAdapter.chatStream({
       model: options.model,
       messages: [{ role: 'user', content: options.text }],
       systemPrompts: [systemPrompt],
       options: {
-        maxTokens: options.maxLength,
-        temperature: 0.3,
+        maxTokens: options.modelOptions?.maxTokens ?? options.maxLength,
+        temperature: options.modelOptions?.temperature ?? 0.3,
       },
     })
   }
```
🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/summarize.ts around lines 79 to 94, the summarizeStream method currently ignores options.modelOptions (provider-specific options) when delegating to textAdapter.chatStream; update the call to include/merge provider options by spreading options.modelOptions (or options.providerOptions if that naming is used) into the options passed to chatStream while keeping maxTokens and temperature, and ensure the model argument still falls back to options.model; this will pass provider-specific streaming settings through to the text adapter.

Comment on lines +182 to +214

/**
 * Creates an OpenAI Transcription adapter with explicit API key
 *
 * @param apiKey - Your OpenAI API key
 * @param config - Optional additional configuration
 * @returns Configured OpenAI Transcription adapter instance
 *
 * @example
 * ```typescript
 * const adapter = createOpenaiTranscription("sk-...");
 *
 * const result = await generateTranscription({
 *   adapter,
 *   model: 'whisper-1',
 *   audio: audioFile,
 *   language: 'en'
 * });
 * ```
 */
/**
 * Creates an OpenAI transcription adapter with explicit API key
 *
 * @param model - The model name (e.g., 'whisper-1')
 * @param apiKey - Your OpenAI API key
 * @param config - Optional additional configuration
 * @returns Configured OpenAI transcription adapter instance
 *
 * @example
 * ```typescript
 * const adapter = createOpenaiTranscription('whisper-1', "sk-...");
 * ```
 */


⚠️ Potential issue | 🟡 Minor

Remove duplicate JSDoc comment block.

There are two consecutive JSDoc blocks for `createOpenaiTranscription`. The first block (lines 183-201) and the second block (lines 202-214) describe the same function but with different parameter descriptions. Keep only one and ensure it's accurate:

```diff
-/**
- * Creates an OpenAI Transcription adapter with explicit API key
- *
- * @param apiKey - Your OpenAI API key
- * @param config - Optional additional configuration
- * @returns Configured OpenAI Transcription adapter instance
- *
- * @example
- * ```typescript
- * const adapter = createOpenaiTranscription("sk-...");
- *
- * const result = await generateTranscription({
- *   adapter,
- *   model: 'whisper-1',
- *   audio: audioFile,
- *   language: 'en'
- * });
- * ```
- */
 /**
  * Creates an OpenAI transcription adapter with explicit API key
  *
- * @param model - The model name (e.g., 'whisper-1')
  * @param apiKey - Your OpenAI API key
  * @param config - Optional additional configuration
  * @returns Configured OpenAI transcription adapter instance
  *
  * @example
  * ```typescript
- * const adapter = createOpenaiTranscription('whisper-1', "sk-...");
+ * const adapter = createOpenaiTranscription("sk-...");
+ *
+ * const result = await generateTranscription({
+ *   adapter,
+ *   model: 'whisper-1',
+ *   audio: audioFile,
+ *   language: 'en'
+ * });
  * ```
  */
```

Note: the second JSDoc incorrectly mentions `@param model`, which isn't a parameter of this function.

🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/transcription.ts around lines 182 to 214, there are two duplicate JSDoc blocks for createOpenaiTranscription; remove the redundant block and keep a single, accurate JSDoc that documents the actual function parameters (apiKey and optional config), removes the incorrect @param model entry, and retains the correct example showing createOpenaiTranscription("sk-...") followed by generateTranscription usage; ensure the final docblock has @param apiKey, @param config (optional), @returns and the example matching the function signature.

Comment on lines +6 to +12
```typescript
export interface TTSAdapterConfig {
  apiKey?: string
  baseUrl?: string
  timeout?: number
  maxRetries?: number
  headers?: Record<string, string>
}
```


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

The following verification scripts were run against the repository (TanStack/ai) to confirm how the config is used:

```bash
# Find all TTS adapter files in the repository
find . -type f \( -name "*tts*" -o -name "*speech*" \) | grep -E "\.(ts|tsx)$" | head -20

# Search for TTSAdapterConfig usage and inheritance
rg "TTSAdapterConfig|TTSAdapter|BaseTTSAdapter" -t ts --max-count 50

# Inspect the core adapter and the provider implementations
cat packages/typescript/ai/src/activities/generateSpeech/adapter.ts
cat packages/typescript/ai-openai/src/adapters/tts.ts
cat packages/typescript/ai-gemini/src/adapters/tts.ts

# Check the client config definitions and factory implementations
rg "interface OpenAIClientConfig|type OpenAIClientConfig" -t ts -A 10 packages/typescript/ai-openai/src/
rg "interface GeminiClientConfig|type GeminiClientConfig" -t ts -A 10 packages/typescript/ai-gemini/src/
cat packages/typescript/ai-openai/src/utils/client.ts
cat packages/typescript/ai-gemini/src/utils/client.ts

# Confirm whether timeout/maxRetries/headers are applied anywhere in the TTS adapters
rg "timeout|maxRetries|headers" packages/typescript/ai-openai/src/adapters/tts.ts packages/typescript/ai-gemini/src/adapters/tts.ts
```
Confirmed: the TTSAdapterConfig properties (`timeout`, `maxRetries`, `headers`) are declared but never applied to the SDK clients.

The config interface declares these properties, but they are never extracted or passed to the underlying OpenAI or Gemini SDK clients. `OpenAIClientConfig` only supports `apiKey`, `organization`, and `baseURL`, while `GeminiClientConfig` only supports `apiKey`. Either implement these properties in the client initialization or remove them from the config interface to avoid confusion.
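
If the properties are kept, a sketch of wiring them through on the OpenAI side; the openai-node v4 client constructor does accept `timeout`, `maxRetries`, and `defaultHeaders` (the Gemini SDK would need its own equivalent):

```typescript
import OpenAI from 'openai'

interface TTSAdapterConfig {
  apiKey?: string
  baseUrl?: string
  timeout?: number
  maxRetries?: number
  headers?: Record<string, string>
}

// Pass every declared config field through to the SDK client.
function createOpenAITTSClient(config: TTSAdapterConfig): OpenAI {
  return new OpenAI({
    apiKey: config.apiKey,
    baseURL: config.baseUrl,
    timeout: config.timeout,
    maxRetries: config.maxRetries,
    defaultHeaders: config.headers,
  })
}
```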

Comment on lines +207 to +251
```typescript
export async function getVideoJobStatus<
  TAdapter extends VideoAdapter<ReadonlyArray<string>, object>,
  TModel extends VideoModels<TAdapter>,
>(options: {
  adapter: TAdapter & { kind: typeof kind }
  model: TModel
  jobId: string
}): Promise<{
  status: 'pending' | 'processing' | 'completed' | 'failed'
  progress?: number
  url?: string
  error?: string
}> {
  const { adapter, jobId } = options

  // Get status first
  const statusResult = await adapter.getVideoStatus(jobId)

  // If completed, also get the URL
  if (statusResult.status === 'completed') {
    try {
      const urlResult = await adapter.getVideoUrl(jobId)
      return {
        status: statusResult.status,
        progress: statusResult.progress,
        url: urlResult.url,
      }
    } catch (error) {
      // If URL fetch fails, still return status
      return {
        status: statusResult.status,
        progress: statusResult.progress,
        error:
          error instanceof Error ? error.message : 'Failed to get video URL',
      }
    }
  }

  // Return status for non-completed jobs
  return {
    status: statusResult.status,
    progress: statusResult.progress,
    error: statusResult.error,
  }
}
```


⚠️ Potential issue | 🟡 Minor

The `model` parameter is required in options but never used.

The function signature requires `model: TModel` in the options object (line 212), but the implementation only uses `adapter` and `jobId` (line 220). This creates a confusing API where callers must provide a model that has no effect.

Either remove the unused `model` parameter:

```diff
 export async function getVideoJobStatus<
   TAdapter extends VideoAdapter<ReadonlyArray<string>, object>,
-  TModel extends VideoModels<TAdapter>,
 >(options: {
   adapter: TAdapter & { kind: typeof kind }
-  model: TModel
   jobId: string
 }): Promise<{
```

Or use it (e.g., for validation/logging) if it serves a purpose not shown in this implementation.


Comment on lines +291 to +298
```typescript
export type AnyAdapter =
  | TextAdapter<any, any, any, any, any>
  | EmbeddingAdapter<any, any>
  | SummarizeAdapter<any, any>
  | ImageAdapter<any, any, any>
  | VideoAdapter<any, any>
  | TTSAdapter<any, any>
  | TranscriptionAdapter<any, any>
```


⚠️ Potential issue | 🟡 Minor

Missing type parameter in `ImageAdapter` within the `AnyAdapter` union.

`ImageAdapter` on line 295 has 3 type parameters but should have 4 to match its definition and its usage in `AIAdapter` (line 282).

```diff
 export type AnyAdapter =
   | TextAdapter<any, any, any, any, any>
   | EmbeddingAdapter<any, any>
   | SummarizeAdapter<any, any>
-  | ImageAdapter<any, any, any>
+  | ImageAdapter<any, any, any, any>
   | VideoAdapter<any, any>
   | TTSAdapter<any, any>
   | TranscriptionAdapter<any, any>
```
🤖 Prompt for AI Agents
In packages/typescript/ai/src/activities/index.ts around lines 291 to 298, the AnyAdapter union uses ImageAdapter with only 3 type parameters but ImageAdapter is defined/used elsewhere expecting 4; update the ImageAdapter entry in the union to include the missing fourth type parameter (use the appropriate generic type from the ImageAdapter definition or substitute a generic like any if uncertain) so the union signature matches the adapter definition and fixes the type error.
