[inference provider] Add wavespeed.ai as an inference provider #1424
base: main
Conversation
Hello, thank you for your contribution
The code is of great quality overall - I left a few comments regarding our code style.
Please make sure the client can be used to query your API for all supported tasks, and that the payloads match your API.
Thanks again!
packages/inference/README.md Outdated
- [HF Inference API (serverless)](https://huggingface.co/models?inference=warm&sort=trending)
I have deleted it.
	hfModelId: "wavespeed-ai/wan-2.1/i2v-480p",
	providerId: "wavespeed-ai/wan-2.1/i2v-480p",
	status: "live",
	task: "image-to-video",
this task is not supported in the client code - let's remove it for now
I have deleted it.
import { InferenceOutputError } from "../lib/InferenceOutputError";
import { ImageToImageArgs } from "../tasks";
import type { BodyParams, HeaderParams, RequestArgs, UrlParams } from "../types";
import { delay } from "../utils/delay";
import { omit } from "../utils/omit";
import { base64FromBytes } from "../utils/base64FromBytes";
import {
	TaskProviderHelper,
	TextToImageTaskHelper,
	TextToVideoTaskHelper,
	ImageToImageTaskHelper,
} from "./providerHelper";
We use `import type` when the import is only used as a type.
Suggested change:
import { InferenceOutputError } from "../lib/InferenceOutputError";
import type { ImageToImageArgs } from "../tasks";
import type { BodyParams, HeaderParams, RequestArgs, UrlParams } from "../types";
import { delay } from "../utils/delay";
import { omit } from "../utils/omit";
import { base64FromBytes } from "../utils/base64FromBytes";
import type {
	TaskProviderHelper,
	TextToImageTaskHelper,
	TextToVideoTaskHelper,
	ImageToImageTaskHelper,
} from "./providerHelper";
Modified as suggested.
	};
}
type WaveSpeedAIResponse<T = WaveSpeedAITaskResponse> = WaveSpeedAICommonResponse<T>;
I'm not sure this type alias is needed, can we remove it?
type WaveSpeedAIResponse<T = WaveSpeedAITaskResponse> = WaveSpeedAICommonResponse<T>;
`WaveSpeedAICommonResponse` can be renamed to `WaveSpeedAIResponse`.
This type is needed and will be used in two places; it's uncertain whether it will be used again in the future.
- It follows the DRY (Don't Repeat Yourself) principle
- It provides better type safety (through default generic parameters)
- It makes the code more readable and maintainable
			case "completed": {
				// Get the video data from the first output URL
				if (!taskResult.outputs?.[0]) {
					throw new InferenceOutputError("No video URL in completed response");
				}
				const videoResponse = await fetch(taskResult.outputs[0]);
				if (!videoResponse.ok) {
					throw new InferenceOutputError("Failed to fetch video data");
				}
				return await videoResponse.blob();
From what I understand, the payload can be something other than a video (e.g. an image).
Let's update the error message to reflect that.
Yes, I revised it.
		if (!args.parameters) {
			return {
				...args,
				model: args.model,
				data: args.inputs,
			};
		} else {
			return {
				...args,
				inputs: base64FromBytes(
					new Uint8Array(args.inputs instanceof ArrayBuffer ? args.inputs : await (args.inputs as Blob).arrayBuffer())
				),
			};
		}
	}
	override preparePayload(params: BodyParams): Record<string, unknown> {
		return {
			...omit(params.args, ["inputs", "parameters"]),
			...(params.args.parameters as Record<string, unknown>),
			image: params.args.inputs,
		};
	}
I think only one of the two (`preparePayload` or `preparePayloadAsync`) should be responsible for building the payload, meaning, I'd rather move the rename of `inputs` to `image` into `preparePayloadAsync` and have `preparePayload` as dumb as possible.
cc @hanouticelina - would love your opinion on that specific point
I only kept the `preparePayloadAsync` func.
> I think only one of the two (`preparePayload` or `preparePayloadAsync`) should be responsible for building the payload, meaning, I'd rather move the rename of `inputs` to `image` into `preparePayloadAsync` and have `preparePayload` as dumb as possible

Yes, agree!
				inputs: base64FromBytes(
					new Uint8Array(args.inputs instanceof ArrayBuffer ? args.inputs : await (args.inputs as Blob).arrayBuffer())
				),
Does the wavespeed API support base64-encoded images as inputs?
yes
thank you @arabot777 for the PR! I left some minor comments. I tested the 3 tasks supported by Wavespeed.ai and they work as expected with the changes I suggested.
Co-authored-by: célina <hanouticelina@gmail.com>
Second round of code review, thank you! We're getting there.
Note: make sure you run `pnpm format` and `pnpm lint` to conform to our code style.
/**
 * Common response structure for all WaveSpeed AI API responses
 */
interface WaveSpeedAICommonResponse<T> {
	code: number;
	message: string;
	data: T;
}
This abstraction is not necessary IMO, let's remove it (see my other comment)
Suggested change: delete the `WaveSpeedAICommonResponse` interface.
It has been modified as suggested
type WaveSpeedAIResponse<T = WaveSpeedAITaskResponse> = WaveSpeedAICommonResponse<T>;
Following the previous comment - let's remove one level of abstraction
Suggested change (replace the type alias):
interface WaveSpeedAIResponse {
	code: number;
	message: string;
	data: WaveSpeedAITaskResponse;
}
It has been modified as suggested
	preparePayload(params: BodyParams): Record<string, unknown> {
		const payload: Record<string, unknown> = {
			...omit(params.args, ["inputs", "parameters"]),
			...(params.args.parameters as Record<string, unknown>),
			prompt: params.args.inputs,
		};
		// Add LoRA support if adapter is specified in the mapping
We don't need to cast into `Record<string, unknown>` if the `params` have the proper type. `ImageToImageArgs`, `TextToImageArgs`, and `TextToVideoArgs` need to be imported from `"../tasks"`.
Suggested change:
	preparePayload(params: BodyParams<ImageToImageArgs | TextToImageArgs | TextToVideoArgs>): Record<string, unknown> {
		const payload: Record<string, unknown> = {
			...omit(params.args, ["inputs", "parameters"]),
			...params.args.parameters,
			prompt: params.args.inputs,
		};
		// Add LoRA support if adapter is specified in the mapping
It has been modified as suggested
		if (params.mapping?.adapter === "lora" && params.mapping.adapterWeightsPath) {
			payload.loras = [
				{
					path: params.mapping.adapterWeightsPath,
For reference, `adapterWeightsPath` is the path to the LoRA weights inside the associated HF repo.
E.g., for nerijs/pixel-art-xl, it will be "pixel-art-xl.safetensors".
Let's make sure that is indeed what your API is expecting when running LoRAs.
Here I see that for fal, the endpoint is concatenated with the HF path.
Can I directly set the `adapterWeightsPath` to a LoRA HTTP address? Or any other address?
In the test cases, I tested it this way: the `adapterWeightsPath` was passed directly as the LoRA input parameter.

"wavespeed-ai/flux-dev-lora": {
	hfModelId: "wavespeed-ai/flux-dev-lora",
	providerId: "wavespeed-ai/flux-dev-lora",
	status: "live",
	task: "text-to-image",
	adapter: "lora",
	adapterWeightsPath: "https://d32s1zkpjdc4b1.cloudfront.net/predictions/599f3739f5354afc8a76a12042736bfd/1.safetensors",
},
"wavespeed-ai/flux-dev-lora-ultra-fast": {
	hfModelId: "wavespeed-ai/flux-dev-lora-ultra-fast",
	providerId: "wavespeed-ai/flux-dev-lora-ultra-fast",
	status: "live",
	task: "text-to-image",
	adapter: "lora",
	adapterWeightsPath: "linoyts/yarn_art_Flux_LoRA",
},

However, I'm not sure whether the LoRA parameter that HF submits must be the short file path inside the HF model repo, to be concatenated with the HF address in the code. If that is the specification, I can implement it in the same format as fal.
I think your API can just take the HF model id as the loras path, right?
Suggested change:
	path: params.mapping.hfModelId,
As mentioned by @SBrandeis, this part depends on what your API is expecting as inputs when using LoRA weights.
Yes, you're correct.
In the example, `linoyts/yarn_art_Flux_LoRA` is the HF LoRA model id. We will automatically match and download the HF model.
I completed the modification and ran the use case successfully
	override prepareHeaders(params: HeaderParams, isBinary: boolean): Record<string, string> {
		this.accessToken = params.accessToken;
		const headers: Record<string, string> = { Authorization: `Bearer ${params.accessToken}` };
		if (!isBinary) {
			headers["Content-Type"] = "application/json";
		}
		return headers;
	}
This is the same behavior as the blanket implementation here:
https://github.com/arabot777/huggingface.js/blob/f706e02d6128f559bd5551072344ff6e31b9c4be/packages/inference/src/providers/providerHelper.ts#L114-L124
No need for an override IMO
Suggested change: remove this `prepareHeaders` override.
arabot777 commented May 22, 2025 (edited)
I removed this part of the logic at the beginning. However, the `getResponse` method of `imageToImage.ts` did not pass in header information.
I had to override `prepareHeaders` here and assign `this.accessToken = params.accessToken;` to ensure the complete access-token information in the header can be passed along when calling `getResponse`.
I'd rather update `imageToImage` to be able to pass headers to `getResponse`:

export async function imageToImage(args: ImageToImageArgs, options?: Options): Promise<Blob> {
	const provider = await resolveProvider(args.provider, args.model, args.endpointUrl);
	const providerHelper = getProviderHelper(provider, "image-to-image");
	const payload = await providerHelper.preparePayloadAsync(args);
	const { data: res } = await innerRequest<Blob>(payload, providerHelper, {
		...options,
		task: "image-to-image",
	});
	const { url, info } = await makeRequestOptions(args, providerHelper, { ...options, task: "image-to-image" });
	return providerHelper.getResponse(res, url, info.headers as Record<string, string>);
}

rather than overriding `prepareHeaders` and doing `this.accessToken = params.accessToken`.
Your suggestion makes sense. Initially, this was a common/public function, so I took a minimalistic approach and didn't modify it. Now, let me try making some changes here.
I completed the modification and ran the use case successfully
thanks @arabot777 for the iteration. I left a few comments, but we're almost at something merge-ready!
	override prepareHeaders(params: HeaderParams, isBinary: boolean): Record<string, string> {
		this.accessToken = params.accessToken;
		const headers: Record<string, string> = { Authorization: `Bearer ${params.accessToken}` };
		if (!isBinary) {
			headers["Content-Type"] = "application/json";
		}
		return headers;
	}
I'd rather update `imageToImage` to be able to pass headers to `getResponse`:

export async function imageToImage(args: ImageToImageArgs, options?: Options): Promise<Blob> {
	const provider = await resolveProvider(args.provider, args.model, args.endpointUrl);
	const providerHelper = getProviderHelper(provider, "image-to-image");
	const payload = await providerHelper.preparePayloadAsync(args);
	const { data: res } = await innerRequest<Blob>(payload, providerHelper, {
		...options,
		task: "image-to-image",
	});
	const { url, info } = await makeRequestOptions(args, providerHelper, { ...options, task: "image-to-image" });
	return providerHelper.getResponse(res, url, info.headers as Record<string, string>);
}

rather than overriding `prepareHeaders` and doing `this.accessToken = params.accessToken`.
@hanouticelina Thank you for your suggestion. I completed the modification and ran the use case successfully.
arabot777 commented May 29, 2025 (edited)
Hi, the code has been updated per the review comments. I'd appreciate it if you could verify the changes and point out any remaining concerns for us to address. Thanks @SBrandeis
HuggingFaceDocBuilderDev commented Jun 3, 2025
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Hi @arabot777, we recently merged an improvement of error handling for inference (PR: #1504). I've added suggestions on how to incorporate it into the WaveSpeed AI inference provider implementation.
Other than that, the PR looks good to me, but let's wait for @SBrandeis' final review!
		headers?: Record<string, string>
	): Promise<Blob> {
		if (!headers) {
			throw new InferenceOutputError("Headers are required for WaveSpeed AI API calls");
Suggested change:
			throw new InferenceClientInputError("Headers are required for WaveSpeed AI API calls");
		const resultResponse = await fetch(resultUrl, { headers });
		if (!resultResponse.ok) {
			throw new InferenceOutputError(`Failed to get result: ${resultResponse.statusText}`);
Suggested change:
			throw new InferenceClientProviderApiError(
				"Failed to fetch response status from WaveSpeed AI API",
				{ url: resultUrl, method: "GET" },
				{
					requestId: resultResponse.headers.get("x-request-id") ?? "",
					body: await resultResponse.text(),
				}
			);
		const result: WaveSpeedAIResponse = await resultResponse.json();
		if (result.code !== 200) {
			throw new InferenceOutputError(`API request failed with code ${result.code}: ${result.message}`);
Suggested change:
			throw new InferenceClientProviderOutputError(`API request to WaveSpeed AI API failed with code ${result.code}: ${result.message}`);
			case "completed": {
				// Get the media data from the first output URL
				if (!taskResult.outputs?.[0]) {
					throw new InferenceOutputError("No output URL in completed response");
Suggested change:
					throw new InferenceClientProviderOutputError("Received malformed response from WaveSpeed AI API: No output URL in completed response");
				}
				const mediaResponse = await fetch(taskResult.outputs[0]);
				if (!mediaResponse.ok) {
					throw new InferenceOutputError("Failed to fetch output data");
Suggested change:
					throw new InferenceClientProviderApiError(
						"Failed to fetch response status from WaveSpeed AI API",
						{ url: taskResult.outputs[0], method: "GET" },
						{
							requestId: mediaResponse.headers.get("x-request-id") ?? "",
							body: await mediaResponse.text(),
						}
					);
				return await mediaResponse.blob();
			}
			case "failed": {
				throw new InferenceOutputError(taskResult.error || "Task failed");
Suggested change:
				throw new InferenceClientProviderOutputError(taskResult.error || "Task failed");
				continue;
			default: {
				throw new InferenceOutputError(`Unknown status: ${taskResult.status}`);
Suggested change:
				throw new InferenceClientProviderOutputError(`Unknown status: ${taskResult.status}`);
Thank you for the reminder. I have completed the new error handling.
Hi @SBrandeis, just checking in: is there anything I can do to help move this PR forward? Let me know if you'd like any changes or have questions. Thanks for your time!
Looks good - just a few minor comments to address.
Let's merge soon and proceed with the next steps: https://huggingface.co/docs/inference-providers/register-as-a-provider
		const result: WaveSpeedAIResponse = await resultResponse.json();
		if (result.code !== 200) {
			throw new InferenceClientProviderOutputError(
				`API request to WaveSpeed AI API failed with code ${result.code}: ${result.message}`
			);
		}
Already covered by the previous check on `resultResponse`.
Ref: https://developer.mozilla.org/en-US/docs/Web/API/Response/ok
Suggested change: remove this check.
				const mediaResponse = await fetch(taskResult.outputs[0]);
				if (!mediaResponse.ok) {
					throw new InferenceClientProviderApiError(
						"Failed to fetch response status from WaveSpeed AI API",
Suggested change:
						"Failed to fetch generation output from WaveSpeed AI API",
HARDCODED_MODEL_INFERENCE_MAPPING["wavespeed-ai"] = {
	"wavespeed-ai/flux-schnell": {
		hfModelId: "wavespeed-ai/flux-schnell",
		providerId: "wavespeed-ai/flux-schnell",
		status: "live",
		task: "text-to-image",
	},
	"wavespeed-ai/wan-2.1/t2v-480p": {
		hfModelId: "wavespeed-ai/wan-2.1/t2v-480p",
		providerId: "wavespeed-ai/wan-2.1/t2v-480p",
		status: "live",
		task: "text-to-video",
	},
	"wavespeed-ai/hidream-e1-full": {
		hfModelId: "wavespeed-ai/hidream-e1-full",
		providerId: "wavespeed-ai/hidream-e1-full",
		status: "live",
		task: "image-to-image",
	},
	"openfree/flux-chatgpt-ghibli-lora": {
		hfModelId: "openfree/flux-chatgpt-ghibli-lora",
		providerId: "wavespeed-ai/flux-dev-lora",
		status: "live",
		task: "text-to-image",
		adapter: "lora",
		adapterWeightsPath: "openfree/flux-chatgpt-ghibli-lora",
	},
	"linoyts/yarn_art_Flux_LoRA": {
		hfModelId: "linoyts/yarn_art_Flux_LoRA",
		providerId: "wavespeed-ai/flux-dev-lora-ultra-fast",
		status: "live",
		task: "text-to-image",
		adapter: "lora",
		adapterWeightsPath: "linoyts/yarn_art_Flux_LoRA",
	},
};
In order to reflect how mappings will work when deployed live, you need to:
- add a `provider` field to the mapping
- use the HF model IDs as keys
Suggested change:
HARDCODED_MODEL_INFERENCE_MAPPING["wavespeed-ai"] = {
	"black-forest-labs/FLUX.1-schnell": {
		provider: "wavespeed-ai",
		hfModelId: "wavespeed-ai/flux-schnell",
		providerId: "wavespeed-ai/flux-schnell",
		status: "live",
		task: "text-to-image",
	},
	"Wan-AI/Wan2.1-T2V-14B": {
		provider: "wavespeed-ai",
		hfModelId: "wavespeed-ai/wan-2.1/t2v-480p",
		providerId: "wavespeed-ai/wan-2.1/t2v-480p",
		status: "live",
		task: "text-to-video",
	},
	"HiDream-ai/HiDream-E1-Full": {
		provider: "wavespeed-ai",
		hfModelId: "wavespeed-ai/hidream-e1-full",
		providerId: "wavespeed-ai/hidream-e1-full",
		status: "live",
		task: "image-to-image",
	},
	"openfree/flux-chatgpt-ghibli-lora": {
		provider: "wavespeed-ai",
		hfModelId: "openfree/flux-chatgpt-ghibli-lora",
		providerId: "wavespeed-ai/flux-dev-lora",
		status: "live",
		task: "text-to-image",
		adapter: "lora",
		adapterWeightsPath: "openfree/flux-chatgpt-ghibli-lora",
	},
	"linoyts/yarn_art_Flux_LoRA": {
		provider: "wavespeed-ai",
		hfModelId: "linoyts/yarn_art_Flux_LoRA",
		providerId: "wavespeed-ai/flux-dev-lora-ultra-fast",
		status: "live",
		task: "text-to-image",
		adapter: "lora",
		adapterWeightsPath: "linoyts/yarn_art_Flux_LoRA",
	},
};
		it(`textToImage - wavespeed-ai/flux-schnell`, async () => {
			const res = await client.textToImage({
				model: "wavespeed-ai/flux-schnell",
Following my previous comment, model IDs used here need to match the keys in the `HARDCODED_MODEL_INFERENCE_MAPPING` (which are the HF model IDs).
Suggested change:
				model: "black-forest-labs/FLUX.1-schnell",
@SBrandeis Thanks for your review and help with the PR! I've addressed all the comments. Could you take a look and merge when you get a chance? Really appreciate your time and insights.
@SBrandeis @hanouticelina Hello, it's been a few weeks. Could you follow up on the progress?
What’s in this PR
WaveSpeed AI is a high-performance AI image and video generation service platform offering industry-leading generation speeds, and it now wants to be listed as an Inference Provider on the Hugging Face Hub.
The JS client integration was completed based on the inference-providers help documentation and has passed the tests. I am submitting the PR now and look forward to further communication with you.
Test