The official TypeScript library for Openlayer, the Evaluation Platform for AI. 📈
This library provides convenient access to the Openlayer REST API from server-side TypeScript or JavaScript.
The REST API documentation can be found on openlayer.com. The full API of this library can be found in api.md.

It is generated with Stainless.
```sh
npm install openlayer
```
The full API of this library can be found in api.md.
```ts
import Openlayer from 'openlayer';

const client = new Openlayer({
  apiKey: process.env['OPENLAYER_API_KEY'], // This is the default and can be omitted
});

const response = await client.inferencePipelines.data.stream('182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e', {
  config: {
    inputVariableNames: ['user_query'],
    outputColumnName: 'output',
    numOfTokenColumnName: 'tokens',
    costColumnName: 'cost',
    timestampColumnName: 'timestamp',
  },
  rows: [
    {
      user_query: 'what is the meaning of life?',
      output: '42',
      tokens: 7,
      cost: 0.02,
      timestamp: 1610000000,
    },
  ],
});

console.log(response.success);
```
This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:
```ts
import Openlayer from 'openlayer';

const client = new Openlayer({
  apiKey: process.env['OPENLAYER_API_KEY'], // This is the default and can be omitted
});

const params: Openlayer.InferencePipelines.DataStreamParams = {
  config: {
    inputVariableNames: ['user_query'],
    outputColumnName: 'output',
    numOfTokenColumnName: 'tokens',
    costColumnName: 'cost',
    timestampColumnName: 'timestamp',
  },
  rows: [
    {
      user_query: 'what is the meaning of life?',
      output: '42',
      tokens: 7,
      cost: 0.02,
      timestamp: 1610000000,
    },
  ],
};

const response: Openlayer.InferencePipelines.DataStreamResponse = await client.inferencePipelines.data.stream(
  '182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e',
  params,
);
```
Documentation for each method, request param, and response field is available in docstrings and will appear on hover in most modern editors.
When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., 4xx or 5xx response), a subclass of `APIError` will be thrown:
```ts
const response = await client.inferencePipelines.data.stream('182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e', {
  config: {
    inputVariableNames: ['user_query'],
    outputColumnName: 'output',
    numOfTokenColumnName: 'tokens',
    costColumnName: 'cost',
    timestampColumnName: 'timestamp',
  },
  rows: [
    {
      user_query: 'what is the meaning of life?',
      output: '42',
      tokens: 7,
      cost: 0.02,
      timestamp: 1610000000,
    },
  ],
}).catch(async (err) => {
  if (err instanceof Openlayer.APIError) {
    console.log(err.status); // 400
    console.log(err.name); // BadRequestError
    console.log(err.headers); // {server: 'nginx', ...}
  } else {
    throw err;
  }
});
```
Error codes are as follows:
| Status Code | Error Type |
|---|---|
| 400 | BadRequestError |
| 401 | AuthenticationError |
| 403 | PermissionDeniedError |
| 404 | NotFoundError |
| 422 | UnprocessableEntityError |
| 429 | RateLimitError |
| >=500 | InternalServerError |
| N/A | APIConnectionError |
Certain errors will be automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default.

You can use the `maxRetries` option to configure or disable this:
```ts
// Configure the default for all requests:
const client = new Openlayer({
  maxRetries: 0, // default is 2
});

// Or, configure per-request:
await client.inferencePipelines.data.stream(
  '182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e',
  {
    config: {
      inputVariableNames: ['user_query'],
      outputColumnName: 'output',
      numOfTokenColumnName: 'tokens',
      costColumnName: 'cost',
      timestampColumnName: 'timestamp',
    },
    rows: [
      { user_query: 'what is the meaning of life?', output: '42', tokens: 7, cost: 0.02, timestamp: 1610000000 },
    ],
  },
  {
    maxRetries: 5,
  },
);
```
Requests time out after 1 minute by default. You can configure this with a `timeout` option:
```ts
// Configure the default for all requests:
const client = new Openlayer({
  timeout: 20 * 1000, // 20 seconds (default is 1 minute)
});

// Override per-request:
await client.inferencePipelines.data.stream(
  '182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e',
  {
    config: {
      inputVariableNames: ['user_query'],
      outputColumnName: 'output',
      numOfTokenColumnName: 'tokens',
      costColumnName: 'cost',
      timestampColumnName: 'timestamp',
    },
    rows: [
      { user_query: 'what is the meaning of life?', output: '42', tokens: 7, cost: 0.02, timestamp: 1610000000 },
    ],
  },
  {
    timeout: 5 * 1000,
  },
);
```
On timeout, an `APIConnectionTimeoutError` is thrown.

Note that requests which time out will be retried twice by default.
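If you want to handle timeouts separately from other failures, you can catch this error type explicitly. A minimal sketch, assuming `APIConnectionTimeoutError` is exported on the client namespace alongside `APIError` (as shown in the error-handling section above):

```ts
import Openlayer from 'openlayer';

const client = new Openlayer();

try {
  await client.inferencePipelines.data.stream(
    '182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e',
    {
      config: {
        inputVariableNames: ['user_query'],
        outputColumnName: 'output',
        numOfTokenColumnName: 'tokens',
        costColumnName: 'cost',
        timestampColumnName: 'timestamp',
      },
      rows: [
        { user_query: 'what is the meaning of life?', output: '42', tokens: 7, cost: 0.02, timestamp: 1610000000 },
      ],
    },
    { timeout: 5 * 1000 },
  );
} catch (err) {
  if (err instanceof Openlayer.APIConnectionTimeoutError) {
    // The request (including its automatic retries) exceeded the timeout
    console.log('Request timed out');
  } else {
    throw err;
  }
}
```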
The "raw" `Response` returned by `fetch()` can be accessed through the `.asResponse()` method on the `APIPromise` type that all methods return. This method returns as soon as the headers for a successful response are received and does not consume the response body, so you are free to write custom parsing or streaming logic.

You can also use the `.withResponse()` method to get the raw `Response` along with the parsed data. Unlike `.asResponse()`, this method consumes the body, returning once it is parsed.
```ts
const client = new Openlayer();

const response = await client.inferencePipelines.data.stream('182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e', {
  config: {
    inputVariableNames: ['user_query'],
    outputColumnName: 'output',
    numOfTokenColumnName: 'tokens',
    costColumnName: 'cost',
    timestampColumnName: 'timestamp',
  },
  rows: [
    { user_query: 'what is the meaning of life?', output: '42', tokens: 7, cost: 0.02, timestamp: 1610000000 },
  ],
}).asResponse();
console.log(response.headers.get('X-My-Header'));
console.log(response.statusText); // access the underlying Response object

const { data, response: raw } = await client.inferencePipelines.data.stream('182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e', {
  config: {
    inputVariableNames: ['user_query'],
    outputColumnName: 'output',
    numOfTokenColumnName: 'tokens',
    costColumnName: 'cost',
    timestampColumnName: 'timestamp',
  },
  rows: [
    { user_query: 'what is the meaning of life?', output: '42', tokens: 7, cost: 0.02, timestamp: 1610000000 },
  ],
}).withResponse();
console.log(raw.headers.get('X-My-Header'));
console.log(data.success);
```
> [!IMPORTANT]
> All log messages are intended for debugging only. The format and content of log messages may change between releases.
The log level can be configured in two ways:
- Via the `OPENLAYER_LOG` environment variable
- Using the `logLevel` client option (overrides the environment variable if set)
```ts
import Openlayer from 'openlayer';

const client = new Openlayer({
  logLevel: 'debug', // Show all log messages
});
```
Available log levels, from most to least verbose:
- `'debug'` - Show debug messages, info, warnings, and errors
- `'info'` - Show info messages, warnings, and errors
- `'warn'` - Show warnings and errors (default)
- `'error'` - Show only errors
- `'off'` - Disable all logging
At the `'debug'` level, all HTTP requests and responses are logged, including headers and bodies. Some authentication-related headers are redacted, but sensitive data in request and response bodies may still be visible.

By default, this library logs to `globalThis.console`. You can also provide a custom logger. Most logging libraries are supported, including pino, winston, bunyan, consola, signale, and @std/log. If your logger doesn't work, please open an issue.
When providing a custom logger, the `logLevel` option still controls which messages are emitted; messages below the configured level will not be sent to your logger.
```ts
import Openlayer from 'openlayer';
import pino from 'pino';

const logger = pino();

const client = new Openlayer({
  logger: logger.child({ name: 'Openlayer' }),
  logLevel: 'debug', // Send all messages to pino, allowing it to filter
});
```
This library is typed for convenient access to the documented API. If you need to access undocumented endpoints, params, or response properties, the library can still be used.

To make requests to undocumented endpoints, you can use `client.get`, `client.post`, and other HTTP verbs. Options on the client, such as retries, will be respected when making these requests.
```ts
await client.post('/some/path', {
  body: { some_prop: 'foo' },
  query: { some_query_arg: 'bar' },
});
```
To make requests using undocumented parameters, you may use `// @ts-expect-error` on the undocumented parameter. This library doesn't validate at runtime that the request matches the type, so any extra values you send will be sent as-is.
```ts
client.inferencePipelines.data.stream({
  // ...
  // @ts-expect-error baz is not yet public
  baz: 'undocumented option',
});
```
For requests with the `GET` verb, any extra params will be in the query; all other requests will send the extra param in the body.

If you want to explicitly send an extra argument, you can do so with the `query`, `body`, and `headers` request options.
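For instance, a minimal sketch combining all three (the path, property names, and header name below are placeholders, not documented Openlayer endpoints or fields):

```ts
// `/some/path`, `extra_prop`, `extra_arg`, and `X-Custom-Header` are
// hypothetical; substitute the undocumented values you actually need.
await client.post('/some/path', {
  body: { extra_prop: 'foo' },
  query: { extra_arg: 'bar' },
  headers: { 'X-Custom-Header': 'baz' },
});
```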
To access undocumented response properties, you may access the response object with `// @ts-expect-error` on the response object, or cast the response object to the requisite type. Like the request params, we do not validate or strip extra properties from the response from the API.
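As a sketch, both approaches look like this (the `undocumented_prop` field is hypothetical):

```ts
const response = await client.inferencePipelines.data.stream('182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e', {
  config: {
    inputVariableNames: ['user_query'],
    outputColumnName: 'output',
    numOfTokenColumnName: 'tokens',
    costColumnName: 'cost',
    timestampColumnName: 'timestamp',
  },
  rows: [
    { user_query: 'what is the meaning of life?', output: '42', tokens: 7, cost: 0.02, timestamp: 1610000000 },
  ],
});

// @ts-expect-error undocumented_prop is not part of the typed response
console.log(response.undocumented_prop);

// Or cast the response to a wider type instead:
const loose = response as unknown as Record<string, unknown>;
console.log(loose['undocumented_prop']);
```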
By default, this library expects a global `fetch` function to be defined.

If you want to use a different `fetch` function, you can either polyfill the global:
```ts
import fetch from 'my-fetch';

globalThis.fetch = fetch;
```
Or pass it to the client:
```ts
import Openlayer from 'openlayer';
import fetch from 'my-fetch';

const client = new Openlayer({ fetch });
```
If you want to set custom `fetch` options without overriding the `fetch` function, you can provide a `fetchOptions` object when instantiating the client or making a request. (Request-specific options override client options.)
```ts
import Openlayer from 'openlayer';

const client = new Openlayer({
  fetchOptions: {
    // `RequestInit` options
  },
});
```
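A sketch of a per-request override, assuming the same request-options parameter used for `maxRetries` and `timeout` above also accepts `fetchOptions`:

```ts
import Openlayer from 'openlayer';

const client = new Openlayer({
  fetchOptions: { keepalive: true }, // client-level default (illustrative)
});

await client.inferencePipelines.data.stream(
  '182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e',
  {
    config: {
      inputVariableNames: ['user_query'],
      outputColumnName: 'output',
      numOfTokenColumnName: 'tokens',
      costColumnName: 'cost',
      timestampColumnName: 'timestamp',
    },
    rows: [
      { user_query: 'what is the meaning of life?', output: '42', tokens: 7, cost: 0.02, timestamp: 1610000000 },
    ],
  },
  {
    // Request-level `RequestInit` options take precedence over the
    // client-level `fetchOptions` for this call only
    fetchOptions: { keepalive: false },
  },
);
```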
To modify proxy behavior, you can provide custom `fetchOptions` that add runtime-specific proxy options to requests:

Node [docs]
```ts
import Openlayer from 'openlayer';
import * as undici from 'undici';

const proxyAgent = new undici.ProxyAgent('http://localhost:8888');
const client = new Openlayer({
  fetchOptions: {
    dispatcher: proxyAgent,
  },
});
```
Bun [docs]
```ts
import Openlayer from 'openlayer';

const client = new Openlayer({
  fetchOptions: {
    proxy: 'http://localhost:8888',
  },
});
```
Deno [docs]
```ts
import Openlayer from 'npm:openlayer';

const httpClient = Deno.createHttpClient({ proxy: { url: 'http://localhost:8888' } });
const client = new Openlayer({
  fetchOptions: {
    client: httpClient,
  },
});
```
This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:
- Changes that only affect static types, without breaking runtime behavior.
- Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals.)
- Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an issue with questions, bugs, or suggestions.
TypeScript >= 4.9 is supported.
The following runtimes are supported:
- Web browsers (Up-to-date Chrome, Firefox, Safari, Edge, and more)
- Node.js 20 LTS or later (non-EOL) versions.
- Deno v1.28.0 or higher.
- Bun 1.0 or later.
- Cloudflare Workers.
- Vercel Edge Runtime.
- Jest 28 or greater with the `"node"` environment (`"jsdom"` is not supported at this time).
- Nitro v2.6 or greater.
Note that React Native is not supported at this time.
If you are interested in other runtime environments, please open or upvote an issue on GitHub.