AI migration doc #146
base: feat/splitting-adapters-up
Conversation
coderabbitai bot commented Dec 13, 2025 • edited
Walkthrough: This pull request refactors TanStack AI from monolithic adapters to tree-shakeable modular adapters, renaming the primary entry point from `chat()` to `ai()`.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Pre-merge checks and finishing touches: ❌ 1 check inconclusive, ✅ 2 checks passed.
nx-cloud bot commented Dec 13, 2025 • edited
View your CI Pipeline Execution ↗ for commit a0c983c
@tanstack/ai, @tanstack/ai-anthropic, @tanstack/ai-client, @tanstack/ai-devtools-core, @tanstack/ai-gemini, @tanstack/ai-ollama, @tanstack/ai-openai, @tanstack/ai-react, @tanstack/ai-react-ui, @tanstack/ai-solid, @tanstack/ai-solid-ui, @tanstack/ai-svelte, @tanstack/ai-vue, @tanstack/ai-vue-ui, @tanstack/react-ai-devtools, @tanstack/solid-ai-devtools
Actionable comments posted: 5
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (12)
packages/typescript/ai-devtools/src/store/ai-context.tsx (1)
1312-1384: Address remaining `chat:*` event references across the codebase.
The migration from `chat:*` to `text:*` events is incomplete. While the code in this file has been correctly migrated to use `text:started`, `text:completed`, and `text:iteration`, the following files still reference the old `chat:*` namespace and should be updated for consistency:

- `testing/panel/src/lib/recording.ts`: lines 161 and 322 use `chat:started` and `chat:completed`
- `packages/typescript/ai/tests/ai-text.test.ts`: references to `chat:started` and `chat:iteration` in test cases
- `docs/guides/observability.md`: the line 25 example uses `chat:started`

Update these references to use the `text:*` event namespace to maintain consistency across the codebase.

examples/ts-vue-chat/vite.config.ts (1)
232-251: Abort upstream work on client disconnect to avoid leaking in-flight AI requests.
The `AbortController` is created and passed to `ai()` and `toStreamResponse()`, but it's never aborted when the HTTP client disconnects. This allows the AI provider call to continue running unnecessarily, wasting resources.

```diff
   const abortController = new AbortController()
+  req.on('close', () => abortController.abort())
+  req.on('abort', () => abortController.abort())

   const stream = ai({
     adapter: adapter as any,
```

Also remove the `as any` type casts for `adapter` and `selectedModel`. The TanStack AI `ai()` function supports proper generic type inference without requiring type assertions:

```diff
   const stream = ai({
-    adapter: adapter as any,
-    model: selectedModel as any,
+    adapter: adapter,
+    model: selectedModel,
     tools: [
```

If the conditionally-assigned `adapter` makes inference difficult, use a type union or proper type guards instead of `as any`.

examples/ts-react-chat/src/routes/api.tanchat.ts (1)
100-115: Refactor to enable compile-time type safety for adapter and model selection.
The `as any` casts on lines 101–102 completely bypass TypeScript's type checking. The `ai()` function requires concrete generic types—specifically a literal model string type paired with the specific adapter type—which cannot be satisfied when both vary at runtime.

Move the `ai()` call into each switch case so that the adapter and model are hardcoded per provider, allowing TypeScript to properly infer `AIOptionsFor` and enforce type safety:

```ts
switch (provider) {
  case 'anthropic':
    return toStreamResponse(
      ai({
        adapter: anthropicText(),
        model: 'claude-sonnet-4-5',
        tools: [...],
        // ... rest of options
      }),
      { abortController },
    )
  case 'gemini':
    return toStreamResponse(
      ai({
        adapter: geminiText(),
        model: 'gemini-2.0-flash-exp',
        tools: [...],
        // ... rest of options
      }),
      { abortController },
    )
  // ... other cases
}
```

This pattern provides per-model type safety as required by the coding guidelines: "Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety."
packages/typescript/ai-gemini/src/model-meta.ts (1)
224-606: Remove IMAGEN_3 model definition—it was deprecated and retired on November 10, 2025. Model identifiers verified against current Gemini API documentation (December 2025):
- Gemini 2.5 Flash variants: gemini-2.5-flash-image, gemini-2.5-flash-lite, gemini-2.5-flash-lite-preview-09-2025 ✓
- Gemini 2.0 Flash variants: gemini-2.0-flash, gemini-2.0-flash-preview-image-generation, gemini-2.0-flash-lite ✓
- Imagen 4 variants: imagen-4.0-generate-001, imagen-4.0-ultra-generate-001, imagen-4.0-fast-generate-001 ✓
However, IMAGEN_3 (imagen-3.0-generate-002) is no longer available—Google deprecated it and recommends using Imagen 4 instead. The model definition should be removed from active code.
examples/README.md (1)
385-390: Inconsistent API usage - complete the migration.
This code example still uses `chat({` instead of `ai({`, which is inconsistent with the rest of the migration in this file (lines 311, 314, 363) and the PR's objectives.

Apply this diff to complete the migration:

```diff
-const stream = chat({
+const stream = ai({
   adapter: openai(),
   model: 'gpt-4o',
   messages,
   tools: [weatherTool], // SDK executes these automatically
 })
```

docs/guides/tools.md (1)
282-288: Fix invalid snippet: duplicate `tools` key in object literal.
As written, this won't compile and readers won't know which form to use.

```diff
 ai({
   adapter: openaiText(),
   messages,
-  tools: [addToCartDef], // Client will execute, or
-  tools: [addToCartServer], // Server will execute
+  // Client will execute (definition-only):
+  // tools: [addToCartDef],
+
+  // Server will execute (server impl):
+  tools: [addToCartServer],
 });
```

docs/getting-started/quick-start.md (1)
181-194: Incomplete `ai()` call in tool example may confuse users.
The example shows `ai({ tools: [getProducts] })` without the required `adapter`, `model`, and `messages` parameters. Consider showing a complete call or adding a comment indicating this is a partial snippet.

```diff
 const getProducts = getProductsDef.server(async ({ query }) => {
   return await db.products.search(query)
 })

-ai({ tools: [getProducts] })
+// Use with the ai() function
+ai({
+  adapter: openaiText(),
+  model: 'gpt-4o',
+  messages: [{ role: 'user', content: 'Find products' }],
+  tools: [getProducts],
+})
```

packages/typescript/ai-anthropic/src/anthropic-adapter.ts (1)
83-96: Wire `TextOptions.abortController` into the SDK request signal (or document precedence).
After switching to `TextOptions`, consumers will reasonably expect `abortController` to work. Consider:

```diff
-  const stream = await this.client.beta.messages.create(
+  const signal =
+    options.abortController?.signal ??
+    (options.request instanceof Request ? options.request.signal : options.request?.signal)
+
+  const stream = await this.client.beta.messages.create(
     { ...requestParams, stream: true },
     {
-      signal: options.request?.signal,
+      signal,
       headers: options.request?.headers,
     },
   )
```

docs/adapters/anthropic.md (1)
89-117: Docs: `messages` is undefined in the "With Tools" example.
The snippet won't run as-is; add a minimal `const messages = [...]` (or show request parsing like the earlier POST example).

docs/api/ai.md (3)
18-30: Docs: missing `maxIterations` import in the main `ai()` example.
Copy/paste will fail. Add it to the import line.

```diff
-import { ai } from "@tanstack/ai";
+import { ai, maxIterations } from "@tanstack/ai";
```
64-75: Docs: summarization `style` values don't match the actual type.
`SummarizationOptions.style` is `'bullet-points' | 'paragraph' | 'concise'` (no `"detailed"`). (Based on the provided type snippet.)

```diff
-- `style?` - Summary style ("concise" | "detailed")
+- `style?` - Summary style ("concise" | "bullet-points" | "paragraph")
```
101-144: Align toolDefinition examples to a consistent Zod + `.server()`/`.client()` pattern.
The `getWeather` example at lines 334–359 uses `{ parameters, execute }`, contradicting the `toolDefinition()` pattern shown at lines 101–144 (`inputSchema`, `outputSchema`, `.server()`/`.client()`). Update the weatherTool example to match the documented pattern:

```diff
-const weatherTool = toolDefinition({
-  name: "getWeather",
-  description: "Get the current weather for a city",
-  parameters: {
-    city: { type: "string", description: "City name" },
-  },
-  async execute({ city }) {
-    return { temperature: 72, condition: "Sunny" };
-  },
-});
+import { z } from "zod";
+
+const weatherToolDef = toolDefinition({
+  name: "getWeather",
+  description: "Get the current weather for a city",
+  inputSchema: z.object({
+    city: z.string(),
+  }),
+  outputSchema: z.object({
+    temperature: z.number(),
+    condition: z.string(),
+  }),
+});
+
+const weatherTool = weatherToolDef.server(async ({ city }) => {
+  return { temperature: 72, condition: "Sunny" };
+});
```

Also clarify the distinction between passing a `ToolDefinition` (schema only) versus `toolDef.server(...)` (executable on server).
♻️ Duplicate comments (2)
packages/typescript/ai-openai/package.json (1)
43-52: Verify dual listing of @tanstack/ai in both dependencies and peerDependencies.
Similar to the Anthropic adapter package, this package lists `@tanstack/ai` in both `dependencies` (line 43) and `peerDependencies` (line 51). Clarify whether this dual listing is intentional or if `@tanstack/ai` should be exclusively in one section.

packages/typescript/ai-ollama/package.json (1)
44-53: Verify dual listing of @tanstack/ai in both dependencies and peerDependencies.
Consistent with the Anthropic and OpenAI adapter packages, this package lists `@tanstack/ai` in both `dependencies` (line 44) and `peerDependencies` (line 52). This pattern appears across all adapter packages in this PR, suggesting it may be intentional, but it warrants verification.
🟠 Major comments (25)
examples/ts-react-chat/src/routes/api.tanchat.ts-116-116 (1)
116-116: Stream type cast to `any` is unnecessary with proper adapter typing.
The stream returned from `ai()` is cast to `any` before passing to `toStreamResponse()`. When the text adapter is properly typed, `ai()` returns `AsyncIterable<StreamChunk>` by default (matching `toStreamResponse()`'s parameter type exactly). This cast is a workaround for the adapter being cast to `any` on line 102 and would be eliminated by fixing the root adapter type issue.

packages/typescript/ai-ollama/src/adapters/embed.ts-40-73 (1)
40-73: `OllamaEmbedAdapterOptions.host` is unused (misleading public API)
Right now `host` in `OllamaEmbedAdapterOptions` is never read; only `hostOrClient` affects the client. Either remove `host` from the options type, or honor it when `hostOrClient` isn't provided.

Proposed fix (honor `options.host`):

```diff
 export class OllamaEmbedAdapter implements EmbeddingAdapter<
   typeof OllamaEmbeddingModels,
   OllamaEmbedProviderOptions
 > {
@@
   constructor(
     hostOrClient?: string | Ollama,
     options: OllamaEmbedAdapterOptions = {},
   ) {
     if (typeof hostOrClient === 'string' || hostOrClient === undefined) {
-      this.client = createOllamaClient({ host: hostOrClient })
+      const host = typeof hostOrClient === 'string' ? hostOrClient : options.host
+      this.client = createOllamaClient({ host })
     } else {
       this.client = hostOrClient
     }
     this.defaultModel = options.model ?? 'nomic-embed-text'
   }
@@
 export function createOllamaEmbed(
   host?: string,
   options?: OllamaEmbedAdapterOptions,
 ): OllamaEmbedAdapter {
   return new OllamaEmbedAdapter(host, options)
 }
```

Also applies to: 114-129
packages/typescript/ai-ollama/src/adapters/text.ts-100-131 (1)
100-131: `OllamaTextAdapterOptions.host` is unused (misleading public API)
Same as the other adapters: either remove `host` from options or honor it when `hostOrClient` is undefined.

Also applies to: 391-406
packages/typescript/ai-ollama/src/adapters/summarize.ts-55-88 (1)
55-88: `OllamaSummarizeAdapterOptions.host` is unused (misleading public API)
Same issue as `embed.ts`: either remove `host` from options or honor it when `hostOrClient` is undefined.

Also applies to: 152-167
packages/typescript/ai-ollama/src/adapters/text.ts-189-281 (1)
189-281: Tool call `index` is not provided by ollama's streamed responses; derive it deterministically from array position
Ollama's OpenAI-compatible streamed tool-call responses do not include the `index` field on `function`, making the unsafe cast to `{ index: number }` unreliable. This breaks consumers that depend on ordering or reconciliation. Use the loop index when iterating tool calls instead.

```diff
-  const handleToolCall = (toolCall: ToolCall): StreamChunk => {
+  const handleToolCall = (toolCall: ToolCall, index: number): StreamChunk => {
     const actualToolCall = toolCall as ToolCall & {
       id: string
-      function: { index: number }
+      function: { index?: number }
     }

     return {
@@
-      index: actualToolCall.function.index,
+      index,
     }
   }
@@
   if (chunk.message.tool_calls && chunk.message.tool_calls.length > 0) {
-    for (const toolCall of chunk.message.tool_calls) {
-      yield handleToolCall(toolCall)
+    for (const [i, toolCall] of chunk.message.tool_calls.entries()) {
+      yield handleToolCall(toolCall, i)
       hasEmittedToolCalls = true
     }
@@
   if (chunk.message.tool_calls && chunk.message.tool_calls.length > 0) {
-    for (const toolCall of chunk.message.tool_calls) {
-      yield handleToolCall(toolCall)
+    for (const [i, toolCall] of chunk.message.tool_calls.entries()) {
+      yield handleToolCall(toolCall, i)
       hasEmittedToolCalls = true
     }
   }
```

packages/typescript/ai-ollama/src/adapters/summarize.ts-90-118 (1)
90-118: Provider options interface is unused; SummarizationOptions framework limitation prevents implementation
`OllamaSummarizeProviderOptions` is defined but cannot be applied because `SummarizationOptions` (from `@tanstack/ai`) does not include a `providerOptions` field—unlike `TextOptions`, which supports generic, type-safe provider options. The `summarize()` method hardcodes `temperature: 0.3` and `num_predict`, leaving model tuning parameters like `num_ctx`, `top_p`, and `repeat_penalty` inaccessible to callers.

To fix this, extend `SummarizationOptions` to include `providerOptions` similar to `TextOptions`, then update the adapter to extract and merge them:

```diff
 async summarize(options: SummarizationOptions): Promise<SummarizationResult> {
   const model = options.model || this.defaultModel
   const prompt = this.buildSummarizationPrompt(options)
+  const providerOptions = options.providerOptions as
+    | OllamaSummarizeProviderOptions
+    | undefined

   const response = await this.client.generate({
     model,
     prompt,
     options: {
       temperature: 0.3,
       num_predict: options.maxLength ?? 500,
+      ...providerOptions,
     },
     stream: false,
   })
```

Committable suggestion skipped: line range outside the PR's diff.
packages/typescript/ai-ollama/src/adapters/text.ts-147-187 (1)
147-187: `structuredOutput()` should validate with Zod after JSON parsing
The method converts the `outputSchema` to JSON Schema and passes it to Ollama's `format` parameter (correct), but then only calls `JSON.parse()` without validating the result against the original schema. This means callers can receive structurally-invalid data even though they provided a schema.

Add `outputSchema.safeParse()` after JSON parsing to ensure the response matches the intended schema:

```diff
   let parsed: unknown
   try {
     parsed = JSON.parse(rawText)
   } catch {
     throw new Error(
       `Failed to parse structured output as JSON. Content: ${rawText.slice(0, 200)}${rawText.length > 200 ? '...' : ''}`,
     )
   }

+  // Runtime validation with Zod
+  const validated = outputSchema.safeParse(parsed)
+  if (!validated.success) {
+    throw new Error(
+      `Structured output did not match schema: ${validated.error.message}`,
+    )
+  }

   return {
-    data: parsed,
+    data: validated.data,
     rawText,
   }
```

packages/typescript/ai-ollama/src/adapters/embed.ts-75-108 (1)
75-108: Pass provider options through embeddings request + use generateId for IDs
The `createEmbeddings` method ignores `options.providerOptions` and uses `Date.now()` for ID generation, creating two issues:

- Callers cannot pass provider-specific options (e.g., `num_gpu`, `num_thread`) since they're not forwarded to the Ollama embeddings request.
- `id: embed-${Date.now()}` can collide under concurrency; other adapters use `generateId()` instead.

Import `generateId` and apply:

```diff
 import {
   createOllamaClient,
   estimateTokens,
   getOllamaHostFromEnv,
+  generateId,
 } from '../utils'

 async createEmbeddings(options: EmbeddingOptions): Promise<EmbeddingResult> {
   const model = options.model || this.defaultModel
+  const providerOptions = options.providerOptions as
+    | OllamaEmbedProviderOptions
+    | undefined

   for (const input of inputs) {
     const response = await this.client.embeddings({
       model,
       prompt: input,
+      options: providerOptions,
     })

   return {
-    id: `embed-${Date.now()}`,
+    id: generateId('emb'),
     model,
     embeddings,
```

Committable suggestion skipped: line range outside the PR's diff.
packages/typescript/ai-openai/src/audio/tts-provider-options.ts-1-31 (1)
1-31:Remove "verse" voice option and clarify theinstructionsparameter limitation.The voice "verse" is not available in the current OpenAI TTS API and should be removed. The
instructionsparameter comment is incomplete—while it correctly notes that instructions do not work withtts-1ortts-1-hd, it should clarify that instructions are supported bygpt-4o-mini-tts. The audio formats listed (mp3, opus, aac, flac, wav, pcm) are correct.packages/typescript/ai-gemini/src/utils/schema-converter.ts-8-15 (1)
8-15: Remove reliance on internal Zod API `schema.def.type` for object detection; use JSON Schema structure instead.
The `schema.def.type` property is not part of Zod's public API and may be undefined or removed in future versions, making this converter fragile at runtime. The JSON Schema structure-based checks (checking for the `properties` key, empty objects) are more stable alternatives.

```diff
-  const isZodObject =
-    typeof schema === 'object' &&
-    'def' in schema &&
-    schema.def.type === 'object'
-
-  if (isZodObject && !result.type) {
-    result.type = 'object'
-  }
-
-  if (Object.keys(result).length === 0) {
+  if (Object.keys(result).length === 0) {
     result.type = 'object'
   }

   if ('properties' in result && !result.type) {
     result.type = 'object'
   }
```

Also applies to: packages/typescript/ai-ollama/src/utils/schema-converter.ts, packages/typescript/ai-anthropic/src/utils/schema-converter.ts, packages/typescript/ai-openai/src/utils/schema-converter.ts
packages/typescript/ai-gemini/src/utils/client.ts-38-43 (1)
38-43: Extract duplicated `generateId` to a shared utility.
The `generateId` function is duplicated across multiple providers (Anthropic, OpenAI, Ollama, and now Gemini). This violates the DRY principle and creates maintenance overhead.

Consider extracting this to a shared utility package (e.g., `@tanstack/ai-utils` or within the core `@tanstack/ai` package) that all provider packages can import. This ensures consistent ID generation and reduces code duplication.

Example location: `packages/typescript/ai-utils/src/id-generator.ts` or `packages/typescript/ai/src/utils/id-generator.ts`

```ts
// Shared utility
export function generateId(prefix: string): string {
  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
}
```

Then import in each provider:

```diff
+import { generateId } from '@tanstack/ai-utils'

-export function generateId(prefix: string): string {
-  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
-}
```

Based on learnings, tree-shakeable architecture is preferred, so ensure the shared utility is properly exported.
packages/typescript/ai-openai/src/adapters/image.ts-59-92 (1)
59-92: Don't let `providerOptions` override validated request fields (`n`, `size`, etc.).
Right now `...providerOptions` is last, so a caller can bypass validation by supplying conflicting values inside `providerOptions`. Prefer applying `providerOptions` first, then writing the normalized/validated fields last.

```diff
 private buildRequest(
   options: ImageGenerationOptions<OpenAIImageProviderOptions>,
 ): OpenAI_SDK.Images.ImageGenerateParams {
   const { model, prompt, numberOfImages, size, providerOptions } = options

   return {
-    model,
-    prompt,
-    n: numberOfImages ?? 1,
-    size: size as OpenAI_SDK.Images.ImageGenerateParams['size'],
-    ...providerOptions,
+    ...providerOptions,
+    model,
+    prompt,
+    n: numberOfImages ?? 1,
+    size: size as OpenAI_SDK.Images.ImageGenerateParams['size'],
   }
 }
```

packages/typescript/ai-gemini/src/adapters/tts.ts-73-85 (1)
73-85: Either support or explicitly reject `voice`/`format`/`speed` from TTSOptions.
Right now they're silently ignored, which is confusing. Consider throwing if provided, or document "Gemini TTS ignores these" in JSDoc.
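A minimal sketch of the explicit-rejection option (the option names come from the comment above; the function name and error wording are illustrative, not the adapter's actual internals):

```ts
// Hypothetical guard for the Gemini TTS adapter: fail loudly instead of
// silently ignoring options the provider cannot honor.
function assertNoUnsupportedTtsOptions(options: {
  voice?: string
  format?: string
  speed?: number
}): void {
  const unsupported = (['voice', 'format', 'speed'] as const).filter(
    (key) => options[key] !== undefined,
  )
  if (unsupported.length > 0) {
    throw new Error(
      `Gemini TTS does not support: ${unsupported.join(', ')}. ` +
        'Remove these options or use an adapter that supports them.',
    )
  }
}
```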
packages/typescript/ai-anthropic/src/adapters/text.ts-130-167 (1)

130-167: Use `client.beta.messages.create()` for structured output instead of non-beta `client.messages.create()`.
Tool-based structured outputs and `tool_choice` are beta-only features in the Anthropic SDK. The streaming implementation correctly uses `client.beta.messages.create()`, but the `structuredOutput()` method calls the non-beta `client.messages.create()`, which does not have full tool support. Change lines 155–162 to use `this.client.beta.messages.create()` to ensure both endpoints use the same compatible API contract.

packages/typescript/ai-anthropic/src/adapters/text.ts-341-426 (1)
341-426: Add explicit guard for `role: 'tool'` messages without `toolCallId`.
Currently, a `role: 'tool'` message without `toolCallId` falls through to the generic fallback and is converted to `{ role: 'user', ... }`, corrupting the conversation. Although the framework never creates such messages, there is no explicit validation. Add an error throw or explicit log to reject this invalid state rather than silently mishandling it.
packages/typescript/ai-anthropic/src/adapters/text.ts-504-522 (1)

504-522: Add block type check to `content_block_stop` handling to prevent duplicate empty-args emissions.
The current code yields empty arguments when a tool call has no input, but doesn't verify that the stopped block is actually a `tool_use` block. A subsequent `content_block_stop` event (for text or thinking blocks) will reference the same `currentToolIndex`, potentially re-emitting the empty args for the last tool block. Use the event's block type or index to ensure only actual tool_use block completions trigger this emission.
packages/typescript/ai-anthropic/src/adapters/text.ts-482-502 (1)

482-502: Emit accumulated tool arguments instead of delta fragments to match content/thinking pattern and prevent data loss.
During `input_json_delta` events, you accumulate into `existing.input` but yield only `event.delta.partial_json`. This inconsistency causes consumers to lose non-empty tool call arguments—unlike content and thinking streams which emit both delta and accumulated state. The diff correctly changes `arguments` to use `existing.input` for consistency and completeness.

```diff
 } else if (event.delta.type === 'input_json_delta') {
   const existing = toolCallsMap.get(currentToolIndex)
   if (existing) {
     existing.input += event.delta.partial_json
     yield {
       type: 'tool_call',
       id: genId(),
       model: model,
       timestamp,
       toolCall: {
         id: existing.id,
         type: 'function',
         function: {
           name: existing.name,
-          arguments: event.delta.partial_json,
+          arguments: existing.input,
         },
       },
       index: currentToolIndex,
     }
   }
 }
```

packages/typescript/ai-openai/src/adapters/embed.ts-22-27 (1)
22-27: OpenAIEmbedProviderOptions are defined but never forwarded to the OpenAI API.
The `encodingFormat` and `user` options have no effect unless extracted from the options parameter and passed to the embeddings request as `encoding_format` and `user`. Additionally, consider using `OPENAI_EMBEDDING_MODELS[0]` instead of hardcoding `'text-embedding-ada-002'` for consistency with the model definitions.

```diff
 export class OpenAIEmbedAdapter extends BaseEmbeddingAdapter<
   typeof OPENAI_EMBEDDING_MODELS,
   OpenAIEmbedProviderOptions
 > {
   async createEmbeddings(options: EmbeddingOptions): Promise<EmbeddingResult> {
+    const providerOptions = (options as EmbeddingOptions & {
+      providerOptions?: OpenAIEmbedProviderOptions
+    }).providerOptions
+
     const response = await this.client.embeddings.create({
-      model: options.model || 'text-embedding-ada-002',
+      model: options.model || OPENAI_EMBEDDING_MODELS[0],
       input: options.input,
       dimensions: options.dimensions,
+      encoding_format: providerOptions?.encodingFormat,
+      user: providerOptions?.user,
     })
```

Also applies to lines 50–55.
packages/typescript/ai-gemini/src/adapters/embed.ts-20-29 (1)
20-29: Wire GeminiEmbedProviderOptions into embedContent request and throw on missing embeddings.
The adapter defines `GeminiEmbedProviderOptions` with `taskType`, `title`, and `outputDimensionality` but never uses them in the API request. Additionally, if `response.embeddings?.[0]?.values` is missing, the embedding is silently dropped, causing the returned embeddings array to have fewer entries than the input array (index misalignment).

Fix this by:

- Extracting `providerOptions` from options
- Passing `taskType`, `title`, and `outputDimensionality` to the embedContent config
- Throwing an error instead of silently dropping missing embeddings

```diff
 async createEmbeddings(options: EmbeddingOptions): Promise<EmbeddingResult> {
   const model = options.model || this.defaultModel
+  const providerOptions = (options as any).providerOptions as
+    | GeminiEmbedProviderOptions
+    | undefined

   // Ensure input is an array
   const inputs = Array.isArray(options.input)
     ? options.input
     : [options.input]

   const embeddings: Array<Array<number>> = []

-  for (const input of inputs) {
+  for (const [idx, input] of inputs.entries()) {
     const response = await this.client.models.embedContent({
       model,
       contents: [{ role: 'user', parts: [{ text: input }] }],
       config: {
-        outputDimensionality: options.dimensions,
+        outputDimensionality:
+          providerOptions?.outputDimensionality ?? (options as any).dimensions,
+        taskType: providerOptions?.taskType,
+        title: providerOptions?.title,
       },
     })

-    if (response.embeddings?.[0]?.values) {
-      embeddings.push(response.embeddings[0].values)
-    }
+    const values = response.embeddings?.[0]?.values
+    if (!values) {
+      throw new Error(
+        `Gemini embedContent returned no embedding values (model=${model}, index=${idx})`,
+      )
+    }
+    embeddings.push(values)
   }
```

Committable suggestion skipped: line range outside the PR's diff.
packages/typescript/ai-openai/src/adapters/video.ts-78-112 (1)
78-112: Validate the actual size you send (providerOptions.size can bypass validation).
`createVideoJob()` validates `validateVideoSize(model, size)` where `size = options.size`, but `buildRequest()` will fall back to `providerOptions.size` when `options.size` is undefined. That means an invalid `providerOptions.size` can slip through and hit the API.

Suggested fix:

```diff
- const { model, size, duration, providerOptions } = options
+ const { model, size, duration, providerOptions } = options
+ const resolvedSize = size ?? providerOptions?.size

- validateVideoSize(model, size)
+ validateVideoSize(model, resolvedSize)
```

(And use `resolvedSize` consistently when building the request.)

Also applies to: 275-301
packages/typescript/ai-gemini/src/adapters/text.ts-205-255 (1)
205-255: Use `nextToolIndex` to generate unique tool call IDs.
When the same function is called multiple times in a stream, `const toolCallId = functionCall.name` creates collisions in `toolCallMap`, causing separate tool calls to merge their arguments. Use a stable synthetic ID (for example, combine `functionCall.name` with a monotonic counter such as `${++callIndex}`) so each call gets a unique identifier and preserves its own arguments.
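A sketch of the counter-based approach (the `functionCall` shape is simplified, not the SDK's actual type):

```ts
let nextToolIndex = 0
const toolCallMap = new Map<string, { name: string; args: string }>()

function registerToolCall(functionCall: { name?: string; args?: string }): string {
  // Combining the (possibly repeated) function name with a monotonic counter
  // guarantees each streamed call gets its own map entry.
  const toolCallId = `${functionCall.name ?? 'tool'}-${nextToolIndex++}`
  toolCallMap.set(toolCallId, {
    name: functionCall.name ?? '',
    args: functionCall.args ?? '',
  })
  return toolCallId
}
```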
packages/typescript/ai-openai/src/adapters/video.ts-92-110 (1)

92-110: Heavy reliance on `any` for OpenAI "videos" API needs a compatibility gate and clearer failure modes.
Right now the adapter assumes `client.videos.create/retrieve/...` exist and have certain response shapes. If the SDK differs, this becomes runtime-unsafe. Consider feature-detecting `client.videos` once (constructor) and throwing a clear "SDK does not support videos in this version" error before runtime calls.

Also applies to: 132-154, 171-272
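A sketch of the one-time compatibility gate; whether the installed SDK exposes `client.videos` is exactly the assumption being checked:

```ts
function assertVideosApiAvailable(client: unknown): void {
  const videos = (client as { videos?: { create?: unknown } }).videos
  if (!videos || typeof videos.create !== 'function') {
    throw new Error(
      'The installed openai SDK does not expose a videos API; upgrade the SDK before using the video adapter.',
    )
  }
}
```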
packages/typescript/ai-gemini/src/adapters/summarize.ts-26-40 (1)
26-40: Provider options interface defined but non-functional due to missing framework support.
`GeminiSummarizeProviderOptions` defines `generationConfig` and `safetySettings`, but `SummarizationOptions` (the method parameter type) has no `providerOptions` field to pass them through. Additionally, the `generateContent()` call at lines 86-99 only applies `systemInstruction` to config, ignoring temperature, topP, topK, maxOutputTokens, stopSequences, and safety settings entirely.

Requires a two-part fix: (1) extend `SummarizationOptions` in the framework to support `providerOptions`, and (2) map `options.providerOptions.generationConfig.*` and `safetySettings` into the SDK config object.
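A sketch of part (2), assuming provider options shaped as described above (the actual SDK config type may differ):

```ts
interface GeminiSummarizeProviderOptions {
  generationConfig?: {
    temperature?: number
    topP?: number
    topK?: number
    maxOutputTokens?: number
    stopSequences?: Array<string>
  }
  safetySettings?: Array<{ category: string; threshold: string }>
}

function buildSummarizeConfig(
  systemInstruction: string,
  providerOptions?: GeminiSummarizeProviderOptions,
) {
  // Spread the tuning parameters alongside the existing systemInstruction.
  return {
    systemInstruction,
    ...providerOptions?.generationConfig,
    safetySettings: providerOptions?.safetySettings,
  }
}
```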
packages/typescript/ai-gemini/src/adapters/text.ts-382-439 (1)

382-439: Store function name with tool call results for proper Gemini API attribution.
`functionResponse: { name: msg.toolCallId, ... }` violates Gemini's API contract. According to the Gemini API specification, `functionResponse.name` must be the function's declared name (e.g., "get_current_weather"), not the toolCallId. Tool results won't be correctly attributed to their function calls. The `ModelMessage` structure for tool role messages needs to preserve the function name from the original tool call, or the adapter must have a way to map `toolCallId` back to the function name.
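A sketch of the mapping approach, with simplified message shapes (the framework's actual types differ):

```ts
type SimpleMessage =
  | { role: 'assistant'; toolCalls?: Array<{ id: string; name: string }> }
  | { role: 'tool'; toolCallId: string; content: string }

function toFunctionResponses(messages: Array<SimpleMessage>) {
  // Build toolCallId -> declared function name from prior assistant turns.
  const namesById = new Map<string, string>()
  for (const msg of messages) {
    if (msg.role === 'assistant') {
      for (const call of msg.toolCalls ?? []) namesById.set(call.id, call.name)
    }
  }
  return messages
    .filter((m): m is Extract<SimpleMessage, { role: 'tool' }> => m.role === 'tool')
    .map((m) => ({
      functionResponse: {
        // Use the declared function name, falling back only if it is unknown.
        name: namesById.get(m.toolCallId) ?? m.toolCallId,
        response: { content: m.content },
      },
    }))
}
```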
packages/typescript/ai-gemini/src/image/image-provider-options.ts-160-174 (1)

160-174: Aspect ratio mappings appear incorrect for some sizes.
Several mappings don't match the actual aspect ratios:

- `'1536x1024': '3:4'` — 1536/1024 ≈ 1.5, which is closer to 3:2 (landscape), not 3:4 (portrait).
- `'1024x1536': '4:3'` — 1024/1536 ≈ 0.67, which is 2:3 (portrait), not 4:3 (landscape).

The comments acknowledge these are "approximate mappings," but users relying on these mappings may get unexpected results.

```diff
 export const GEMINI_SIZE_TO_ASPECT_RATIO: Record<string, GeminiAspectRatio> = {
   // Square
   '1024x1024': '1:1',
   '512x512': '1:1',
   // Landscape
   '1024x768': '4:3',
-  '1536x1024': '3:4', // Actually this is portrait, but matching common dimensions
+  '1536x1024': '3:4', // Note: 1536x1024 is actually ~3:2, mapped to closest supported ratio
   '1792x1024': '16:9',
   '1920x1080': '16:9',
   // Portrait
   '768x1024': '3:4',
-  '1024x1536': '4:3', // Inverted
+  '1024x1536': '3:4', // Portrait dimensions, using 3:4 portrait ratio
   '1024x1792': '9:16',
   '1080x1920': '9:16',
 }
```
🟡 Minor comments (15)
packages/typescript/ai-devtools/src/store/ai-context.tsx-1309-1309 (1)
1309-1309: Update comment to reflect new event namespace.
The comment references "Chat Events" but the event listeners below use the `text:*` namespace. Update the comment for consistency.

Apply this diff:

```diff
- // ============= Chat Events (for usage tracking) =============
+ // ============= Text Events (for usage tracking) =============
```

examples/ts-vue-chat/vite.config.ts-5-10 (1)
5-10: Consolidate the duplicate `@tanstack/ai` import.
The `@tanstack/ai` package is imported twice—once on line 5 and again on line 10. Move `toolDefinition` to the first import statement:

```diff
-import { ai, maxIterations, toStreamResponse } from '@tanstack/ai'
+import { ai, maxIterations, toStreamResponse, toolDefinition } from '@tanstack/ai'
 import { openaiText } from '@tanstack/ai-openai'
 import { anthropicText } from '@tanstack/ai-anthropic'
 import { geminiText } from '@tanstack/ai-gemini'
 import { ollamaText } from '@tanstack/ai-ollama'
-import { toolDefinition } from '@tanstack/ai'
```

packages/typescript/ai-ollama/src/utils/client.ts-39-41 (1)
39-41: Potential for short/empty random suffix in ID generation.
`Math.random().toString(36).substring(7)` can produce strings of varying length (0-6 characters) depending on the random value. For more consistent IDs, consider using `substring(2, 9)` or a fixed-length approach.

```diff
 export function generateId(prefix: string = 'msg'): string {
-  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
+  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(2, 9)}`
 }
```

packages/typescript/ai-ollama/src/adapters/summarize.ts-120-146 (1)
120-146: `maxLength` mixes "words" in the prompt with "tokens" in `num_predict`
The prompt says "under N words", but you treat `maxLength` as tokens for `num_predict`. Either (a) rename semantics in the prompt to "tokens", or (b) keep it as words and translate to an approximate token budget.
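A sketch of option (b); the tokens-per-word ratio below is a rough heuristic, not a measured constant:

```ts
function wordsToTokenBudget(maxWords: number): number {
  const TOKENS_PER_WORD = 1.3 // approximation for English text
  return Math.ceil(maxWords * TOKENS_PER_WORD)
}

// e.g. num_predict: wordsToTokenBudget(options.maxLength ?? 400)
```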
packages/typescript/ai-ollama/src/adapters/text.ts-283-386 (1)

283-386: Add explanatory comments for image source handling or implement explicit format conversion
The current code treats `data` and non-`data` image sources identically by pushing `source.value` directly. While the ollama-js SDK accepts multiple image formats (base64, URLs, file paths, bytes) and handles conversion internally, the previous implementation included clarifying comments that "URL-based images [are] not directly supported" and "Ollama may need the image to be fetched externally." These comments should be restored to document the current behavior and any limitations, or the code should explicitly handle format conversion for consistency with how other adapters differentiate source types.

docs/adapters/ollama.md-126-126 (1)
126-126: Fix hyphenation in compound modifier.
The phrase should use a hyphen: "tool-calling support" instead of "tool calling support" when used as a compound adjective modifying "support."

Apply this diff:

```diff
-**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool calling support.
+**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool-calling support.
```

packages/typescript/ai-gemini/src/tools/tool-converter.ts-79-82 (1)
79-82: Wrap variable declaration in block to prevent leakage.
The `jsonSchema` variable is declared in a switch case without a surrounding block. This can cause the variable to be accessible in other case clauses, leading to unexpected behavior.

Apply this diff to wrap the declaration in a block:

```diff
 default:
+  {
   // Collect function declarations to group together
   // Description is required for Gemini function declarations
   if (!tool.description) {
     throw new Error(
       `Tool ${tool.name} requires a description for Gemini adapter`,
     )
   }

   // Convert Zod schema to Gemini-compatible JSON Schema
   const jsonSchema = tool.inputSchema
     ? convertZodToGeminiSchema(tool.inputSchema)
     : { type: 'object', properties: {}, required: [] }

   functionDeclarations.push({
     name: tool.name,
     description: tool.description,
     parameters: jsonSchema,
   })
   break
+  }
```

Based on static analysis hints.
docs/reference/type-aliases/TextStreamOptionsForModel.md-14-16 (1)
14-16: Update comment to reflect renamed type.
Line 14 still refers to "Chat options" but should be "Text options" to match the renamed type `TextStreamOptionsForModel`.

Apply this diff:

```diff
-Chat options constrained by a specific model's capabilities.
+Text options constrained by a specific model's capabilities.
```

docs/config.json-189-196 (1)
189-196: Consider adding redirects/aliases for renamed function reference routes (chat→text).
If the docs site supports redirects, keeping legacy routes working avoids breaking deep links from older blog posts / READMEs.

docs/config.json-484-490 (1)
484-490: Same redirect/alias concern for renamed type-alias pages (ChatStream→TextStream).
Worth preserving legacy URLs if feasible.

packages/typescript/ai-openai/src/adapters/tts.ts-71-79 (1)
71-79: Ensure `instructions` is only passed to compatible models.
The `instructions` field is supported in `SpeechCreateParams`, but only works with `gpt-4o-mini-tts`; passing it to `tts-1` or `tts-1-hd` will fail at runtime. Validate the model before spreading `providerOptions` or document this constraint.
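A sketch of the validation, assuming the option is simply dropped (with a warning) for incompatible models:

```ts
function sanitizeTtsProviderOptions(
  model: string,
  providerOptions: { instructions?: string; [key: string]: unknown } = {},
) {
  if (providerOptions.instructions && model !== 'gpt-4o-mini-tts') {
    const { instructions: _ignored, ...rest } = providerOptions
    console.warn(`'instructions' is not supported by ${model} and was removed`)
    return rest
  }
  return providerOptions
}
```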
packages/typescript/ai-openai/src/adapters/summarize.ts-57-66 (1)

57-66: Handle empty `choices` explicitly (avoid silently returning '').
Returning `''` can mask upstream failures; consider throwing when `choices[0]` is missing.
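A minimal sketch of the explicit failure path (the response shape follows the chat completions API):

```ts
function extractSummary(response: {
  choices?: Array<{ message?: { content?: string | null } }>
}): string {
  const content = response.choices?.[0]?.message?.content
  if (!content) {
    throw new Error('Summarization returned no choices/content from the provider')
  }
  return content
}
```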
packages/typescript/ai-anthropic/src/adapters/summarize.ts-55-68 (1)

55-68: Guard against missing `response.usage` / unexpected content blocks.
If the SDK can omit `usage` (or include non-text blocks), consider null-safe access + clearer erroring when summary ends up empty unexpectedly.

packages/typescript/ai-gemini/src/image/image-provider-options.ts-208-226 (1)
208-226: Comment/implementation mismatch for image count limits.
The comment states "Imagen models support 1-8 images per request (varies by model)" but the implementation hardcodes `maxImages = 4`. The `model` parameter is accepted but not used for model-specific validation.

Consider either:

- Using the `model` parameter to apply model-specific limits, or
- Updating the comment to reflect the actual hardcoded limit.

```diff
 export function validateNumberOfImages(
   model: string,
   numberOfImages: number | undefined,
 ): void {
   if (numberOfImages === undefined) return

-  // Most Imagen models support 1-4 images, some support up to 8
-  const maxImages = 4
+  // All Imagen models currently support up to 4 images per request
+  const maxImages = 4 // TODO: Implement model-specific limits if needed

   if (numberOfImages < 1 || numberOfImages > maxImages) {
```

packages/typescript/ai-openai/src/adapters/text.ts-226-226 (1)
226-226: Dead code: `eventTypeCounts` map is never populated.
The `eventTypeCounts` map is declared at line 226 but never updated during stream processing. The error handler at lines 479-486 logs it, but it will always be empty.

Either populate the map during processing:

```diff
 for await (const chunk of stream) {
   chunkCount++
+  eventTypeCounts.set(chunk.type, (eventTypeCounts.get(chunk.type) || 0) + 1)
```

Or remove the unused declaration:

```diff
- const eventTypeCounts = new Map<string, number>()
```

Committable suggestion skipped: line range outside the PR's diff.
```md
## Available Models

### Chat Models

- `claude-sonnet-4-5-20250929` - Claude Sonnet 4.5 (balanced)
- `claude-opus-4-5-20251101` - Claude Opus 4.5 (most capable)
- `claude-haiku-4-0-20250514` - Claude Haiku 4.0 (fastest)
- `claude-3-5-sonnet-20241022` - Claude 3.5 Sonnet
- `claude-3-opus-20240229` - Claude 3 Opus
```
🧩 Analysis chain: verification scripts (`fd model-meta.ts`, inspecting `packages/typescript/ai-anthropic/src/model-meta.ts`, and a ripgrep for the versioned Claude model IDs across `*.ts` and `*.md`) were run against the TanStack/ai repository to confirm the finding below.
Update ANTHROPIC_MODELS in packages/typescript/ai-anthropic/src/model-meta.ts to match versioned model IDs used throughout the codebase.
The model IDs in docs/adapters/anthropic.md are correct and used consistently across examples and testing, but the model-meta.ts constants have incorrect IDs that don't match actual Anthropic API model identifiers:

- `claude-sonnet-4-5` should be `claude-sonnet-4-5-20250929`
- `claude-opus-4-5` should be `claude-opus-4-5-20251101`
- `claude-haiku-4-5` should be `claude-haiku-4-0-20250514`
- `claude-3-7-sonnet` should be `claude-3-5-sonnet-20241022`
- `claude-opus-4` should be `claude-3-opus-20240229`
Correct the model ID constants to match the actual versioned IDs used in the codebase.
🤖 Prompt for AI Agents
In packages/typescript/ai-anthropic/src/model-meta.ts around lines 58 to 67, the ANTHROPIC_MODELS constants use unversioned/incorrect IDs; update each constant to the versioned model IDs used in the repo/docs: replace claude-sonnet-4-5 → claude-sonnet-4-5-20250929, claude-opus-4-5 → claude-opus-4-5-20251101, claude-haiku-4-5 → claude-haiku-4-0-20250514, claude-3-7-sonnet → claude-3-5-sonnet-20241022, and claude-opus-4 → claude-3-opus-20240229 so the constants match the documented and tested model identifiers.

```ts
import { openaiText } from '@tanstack/ai-openai'

const textAdapter = openaiText()
```

2. **Class constructor** (for more control):

```ts
import { createOpenAIText } from '@tanstack/ai-openai/adapters'

const textAdapter = createOpenAIText({
  apiKey: 'your-api-key',
  // additional configuration...
})
```
🧩 Analysis chain: scripts inspected the `exports` field of packages/typescript/ai-openai/package.json, listed `src/` and `src/adapters/`, and searched for imports of `@tanstack/ai-openai` and `@tanstack/ai-openai/adapters` to confirm the finding below.
Incorrect import path - use @tanstack/ai-openai instead of the /adapters subpath.
Line 96 shows importing from `'@tanstack/ai-openai/adapters'`, but this path is not properly exported in the package configuration. The package.json only exports from the root (`.`), and there is no `adapters/index.ts` file to support the subpath import.

The correct import should be from `'@tanstack/ai-openai'`, which re-exports all adapter functions (as shown in lines 19-21 and 165). Update line 96 to:

```ts
import { createOpenAIText } from '@tanstack/ai-openai'
```
This ensures consistency throughout the document and uses the actual exported API.
🤖 Prompt for AI Agents
In docs/guides/tree-shakeable-adapters.md around lines 86 to 102, the example imports from the non-exported subpath '@tanstack/ai-openai/adapters' (line ~96); change that import to use the package root: import { createOpenAIText } from '@tanstack/ai-openai' so it matches the package.json exports and other examples in the doc.

```ts
// ============================================================================
// New Tree-Shakeable Adapters (Recommended)
// ============================================================================

// Text (Chat) adapter - for chat/text completion
export {
  AnthropicTextAdapter,
  anthropicText,
  createAnthropicText,
  type AnthropicTextConfig,
  type AnthropicTextProviderOptions,
} from './adapters/text'

// Summarize adapter - for text summarization
export {
  AnthropicSummarizeAdapter,
  anthropicSummarize,
  createAnthropicSummarize,
  type AnthropicSummarizeConfig,
  type AnthropicSummarizeProviderOptions,
} from './adapters/summarize'
```
🧩 Analysis chain: scripts dumped the `exports` fields of packages/typescript/ai-anthropic/package.json and sibling adapter packages to confirm that only the root "." entry point is exported.
Add subpath exports to package.json to enable tree-shaking of adapters. The package.json exports only the root "." entry point. To support tree-shakeable imports as shown in the code (e.g., `from './adapters/text'`), add subpath exports like:

```json
"exports": {
  ".": {
    "types": "./dist/esm/index.d.ts",
    "import": "./dist/esm/index.js"
  },
  "./adapters": {
    "types": "./dist/esm/adapters/index.d.ts",
    "import": "./dist/esm/adapters/index.js"
  }
}
```

This allows consumers to use `import { anthropicText } from '@tanstack/ai-anthropic/adapters'` and properly tree-shake unused adapters.
🤖 Prompt for AI Agents
In packages/typescript/ai-anthropic/src/index.ts around lines 1–22 the module re-exports adapter modules directly, which requires subpath exports so consumers can import adapters for tree-shaking; update this package's package.json "exports" to include a subpath entry for "./adapters" (and keep the root "." export) that maps "types" to the adapters types file in dist/esm (e.g., ./dist/esm/adapters/index.d.ts) and "import" to the adapters JS entry (e.g., ./dist/esm/adapters/index.js), ensuring consumers can import from "@tanstack/ai-anthropic/adapters" and tree-shake unused adapters.

```ts
case 'image': {
  const imageMetadata = part.metadata
  if (part.source.type === 'url') {
    return {
      type: 'input_image',
      image_url: part.source.value,
      detail: imageMetadata?.detail || 'auto',
    }
  }
  // For base64 data, construct a data URI
  return {
    type: 'input_image',
    image_url: part.source.value,
    detail: imageMetadata?.detail || 'auto',
  }
```
🧩 Analysis chain: ripgrep/find/cat scripts were run over packages/typescript/ai-openai and packages/typescript/ai (base64 handling, `image_url` construction, `ImagePart`/content-part type definitions, and related tests) to confirm the finding below.
Length of output: 3124
Fix base64 image handling to construct proper data URI.
The comment at line 663 correctly states "For base64 data, construct a data URI," but the code incorrectly passes `part.source.value` directly to `image_url`. When `source.type` is `'data'`, the value is raw base64 (not a data URI), which OpenAI's API will reject. Construct a proper data URI by prefixing the base64 value: `data:image/jpeg;base64,${part.source.value}` (or use `image/png` as appropriate).
🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/text.ts around lines 654 to 668,the image branch returns raw base64 for parts with source.type === 'data'instead of a data URI; update the logic so when part.source.type === 'url' youkeep using the URL, but when part.source.type === 'data' you build a proper dataURI (use part.metadata?.mime if available, otherwise default to image/jpeg orimage/png) and prefix the base64 with "data:{mime};base64," before assigning toimage_url; preserve the detail fallback to imageMetadata?.detail || 'auto'.| // Last resort: The /content endpoint returns raw binary video data, not JSON. | ||
| // We need to construct a URL that the client can use to fetch the video. | ||
| // The URL needs to include auth, so we'll create a signed URL or return | ||
| // a proxy endpoint. | ||
| // For now, return a URL that goes through our API to proxy the request | ||
| // since the raw endpoint requires auth headers that browsers can't send. | ||
| // The video element can't add Authorization headers, so we need a workaround. | ||
| // Option 1: Return the direct URL (only works if OpenAI supports query param auth) | ||
| // Option 2: Return a blob URL after fetching (memory intensive) | ||
| // Option 3: Return a proxy URL through our server | ||
| // Let's try fetching and returning a data URL for now | ||
| constbaseUrl=this.config.baseUrl||'https://api.openai.com/v1' | ||
| constapiKey=this.config.apiKey | ||
| constcontentResponse=awaitfetch( | ||
| `${baseUrl}/videos/${jobId}/content`, | ||
| { | ||
| method:'GET', | ||
| headers:{ | ||
| Authorization:`Bearer${apiKey}`, | ||
| }, | ||
| }, | ||
| ) | ||
| if(!contentResponse.ok){ | ||
| // Try to parse error as JSON, but it might be binary | ||
| constcontentType=contentResponse.headers.get('content-type') | ||
| if(contentType?.includes('application/json')){ | ||
| consterrorData=awaitcontentResponse.json().catch(()=>({})) | ||
| thrownewError( | ||
| errorData.error?.message|| | ||
| `Failed to get video content:${contentResponse.status}`, | ||
| ) | ||
| } | ||
| thrownewError( | ||
| `Failed to get video content:${contentResponse.status}`, | ||
| ) | ||
| } | ||
| // The response is the raw video file - convert to base64 data URL | ||
| constvideoBlob=awaitcontentResponse.blob() | ||
| constbuffer=awaitvideoBlob.arrayBuffer() | ||
| constbase64=Buffer.from(buffer).toString('base64') | ||
| constmimeType= | ||
| contentResponse.headers.get('content-type')||'video/mp4' | ||
| return{ | ||
| jobId, | ||
| url:`data:${mimeType};base64,${base64}`, | ||
| expiresAt:undefined,// Data URLs don't expire | ||
| } |
🧩 Analysis chain: scripts examined packages/typescript/ai-openai/src/adapters/video.ts (including `getVideoUrl`), the `BaseVideoAdapter` base class, the video activity in packages/typescript/ai, and searched for `typeof window` guards and `apiKey` usage to confirm the finding below.
Length of output: 84
getVideoUrl() fallback exposes API key and creates problematic data URLs.
This fallback path uses `this.config.apiKey` directly in a fetch call and constructs base64 data URLs for potentially large video files. Risks:

- API key can be exposed if adapter is instantiated in client-side environments
- Base64 data URLs for large videos consume enormous memory and can crash browser tabs
- Uses `Buffer.from()`, which isn't available in browsers without polyfills

Add an environment guard to prevent client-side execution:

```diff
+ if (typeof window !== 'undefined') {
+   throw new Error('Fetching video content requires a server environment to protect API keys.')
+ }
```

Avoid converting to data URLs by default. Consider returning a server-side retrievable URL or instructing users to implement proper proxying instead.
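A sketch of the proxying alternative using a hypothetical server route (framework-agnostic Request/Response handler; the route shape and `OPENAI_API_KEY` env var name are assumptions):

```ts
export async function GET(request: Request): Promise<Response> {
  const jobId = new URL(request.url).searchParams.get('jobId')
  if (!jobId) return new Response('Missing jobId', { status: 400 })

  // The API key stays on the server; the browser only ever sees this route.
  const upstream = await fetch(
    `https://api.openai.com/v1/videos/${jobId}/content`,
    { headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` } },
  )
  if (!upstream.ok || !upstream.body) {
    return new Response('Failed to fetch video content', { status: 502 })
  }
  // Stream the bytes through instead of buffering into a data URL.
  return new Response(upstream.body, {
    headers: {
      'Content-Type': upstream.headers.get('content-type') ?? 'video/mp4',
    },
  })
}
```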
🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/video.ts around lines 199 to 252, the fallback in getVideoUrl() fetches the video using this.config.apiKey and builds a base64 data: URL (using Buffer), which can leak the API key in client environments, crash browsers with large videos, and relies on Node Buffer; change this by adding a runtime environment guard that throws or rejects if executed in a browser (e.g., detect window or navigator and refuse to use the API key client-side), stop returning base64 data URLs for large media and instead return a server-side retrievable URL (either require a configured proxy/signed URL endpoint on your server or return an instruction/error that the caller must request the proxied URL from the server), ensure any server-only code uses Node-safe APIs (avoid Buffer in code paths that could run in the browser), and validate that this.config.apiKey is only used when running server-side.
🎯 Changes
Adds a migration doc to help folks migrate from the first version of the API to the proposed newer version.
✅ Checklist
pnpm run test:pr.

🚀 Release Impact
Summary by CodeRabbit
Release Notes
New Features
- `ai()` API for all AI interactions, replacing `chat()`

Breaking Changes

- `chat()` renamed to `ai()`
- `openai()` → `openaiText()`, `anthropic()` → `anthropicText()`, etc.
- `ChatOptions` → `TextOptions`
✏️ Tip: You can customize this high-level summary in your review settings.