openlayer-ai/openlayer-ruby

The official Ruby library for Openlayer, the Evaluation Platform for AI. 📈


The Openlayer Ruby library provides convenient access to the Openlayer REST API from any Ruby 3.2.0+ application. It ships with comprehensive types & docstrings in Yard, RBS, and RBI – see below for usage with Sorbet. The standard library's `net/http` is used as the HTTP transport, with connection pooling via the `connection_pool` gem.

It is generated with Stainless.

Documentation

Documentation for releases of this gem can be found on RubyDoc.

The REST API documentation can be found on openlayer.com.

Installation

To use this gem, install via Bundler by adding the following to your application's `Gemfile`:

```ruby
gem "openlayer", "~> 0.7.0"
```

Usage

```ruby
require "bundler/setup"
require "openlayer"

openlayer = Openlayer::Client.new(
  api_key: ENV["OPENLAYER_API_KEY"] # This is the default and can be omitted
)

response = openlayer.inference_pipelines.data.stream(
  "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
  config: {
    inputVariableNames: ["user_query"],
    outputColumnName: "output",
    numOfTokenColumnName: "tokens",
    costColumnName: "cost",
    timestampColumnName: "timestamp"
  },
  rows: [
    {
      user_query: "what is the meaning of life?",
      output: "42",
      tokens: 7,
      cost: 0.02,
      timestamp: 1610000000
    }
  ]
)

puts(response.success)
```

Handling errors

When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., a 4xx or 5xx response), a subclass of `Openlayer::Errors::APIError` is raised:

```ruby
begin
  data = openlayer.inference_pipelines.data.stream(
    "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    config: {
      inputVariableNames: ["user_query"],
      outputColumnName: "output",
      numOfTokenColumnName: "tokens",
      costColumnName: "cost",
      timestampColumnName: "timestamp"
    },
    rows: [
      {
        user_query: "what is the meaning of life?",
        output: "42",
        tokens: 7,
        cost: 0.02,
        timestamp: 1610000000
      }
    ]
  )
rescue Openlayer::Errors::APIConnectionError => e
  puts("The server could not be reached")
  puts(e.cause) # an underlying Exception, likely raised within `net/http`
rescue Openlayer::Errors::RateLimitError => e
  puts("A 429 status code was received; we should back off a bit.")
rescue Openlayer::Errors::APIStatusError => e
  puts("Another non-200-range status code was received")
  puts(e.status)
end
```

Error codes are as follows:

| Cause            | Error Type                 |
| ---------------- | -------------------------- |
| HTTP 400         | `BadRequestError`          |
| HTTP 401         | `AuthenticationError`      |
| HTTP 403         | `PermissionDeniedError`    |
| HTTP 404         | `NotFoundError`            |
| HTTP 409         | `ConflictError`            |
| HTTP 422         | `UnprocessableEntityError` |
| HTTP 429         | `RateLimitError`           |
| HTTP >= 500      | `InternalServerError`      |
| Other HTTP error | `APIStatusError`           |
| Timeout          | `APITimeoutError`          |
| Network error    | `APIConnectionError`       |

Retries

Certain errors will be automatically retried 2 times by default, with a short exponential backoff.

Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, >=500 Internal errors, and timeouts will all be retried by default.
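The default behavior can be pictured roughly as follows. This is an illustrative sketch in plain Ruby, not the library's actual implementation: the `with_retries` helper, its `base_delay`, and the simplified rescue clause are all hypothetical, and the real client additionally inspects status codes (408/409/429/5xx) to decide retryability.

```ruby
# Illustrative sketch of retry-with-exponential-backoff (hypothetical helper;
# the SDK's real retry logic lives inside the client).
def with_retries(max_retries: 2, base_delay: 0.01)
  attempts = 0
  begin
    yield
  rescue StandardError
    raise if attempts >= max_retries        # budget exhausted: propagate
    sleep(base_delay * (2**attempts))       # short exponential backoff
    attempts += 1
    retry
  end
end

# A call that fails twice and then succeeds stays within the default
# budget of 2 retries (3 attempts total):
calls = 0
result = with_retries do
  calls += 1
  raise "transient network error" if calls < 3
  :ok
end
# result is :ok after 3 attempts
```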

You can use the `max_retries` option to configure or disable this:

```ruby
# Configure the default for all requests:
openlayer = Openlayer::Client.new(
  max_retries: 0 # default is 2
)

# Or, configure per-request:
openlayer.inference_pipelines.data.stream(
  "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
  config: {
    inputVariableNames: ["user_query"],
    outputColumnName: "output",
    numOfTokenColumnName: "tokens",
    costColumnName: "cost",
    timestampColumnName: "timestamp"
  },
  rows: [
    {
      user_query: "what is the meaning of life?",
      output: "42",
      tokens: 7,
      cost: 0.02,
      timestamp: 1610000000
    }
  ],
  request_options: {max_retries: 5}
)
```

Timeouts

By default, requests will time out after 60 seconds. You can use the `timeout` option to configure or disable this:

```ruby
# Configure the default for all requests:
openlayer = Openlayer::Client.new(
  timeout: nil # default is 60
)

# Or, configure per-request:
openlayer.inference_pipelines.data.stream(
  "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
  config: {
    inputVariableNames: ["user_query"],
    outputColumnName: "output",
    numOfTokenColumnName: "tokens",
    costColumnName: "cost",
    timestampColumnName: "timestamp"
  },
  rows: [
    {
      user_query: "what is the meaning of life?",
      output: "42",
      tokens: 7,
      cost: 0.02,
      timestamp: 1610000000
    }
  ],
  request_options: {timeout: 5}
)
```

On timeout, `Openlayer::Errors::APITimeoutError` is raised.

Note that requests that time out are retried by default.

Advanced concepts

BaseModel

All parameter and response objects inherit from `Openlayer::Internal::Type::BaseModel`, which provides several conveniences, including:

  1. All fields, including unknown ones, are accessible with `obj[:prop]` syntax, and can be destructured with `obj => {prop: prop}` or pattern-matching syntax.

  2. Structural equivalence for equality; if two API calls return the same values, comparing the responses with `==` will return `true`.

  3. Both instances and the classes themselves can be pretty-printed.

  4. Helpers such as `#to_h`, `#deep_to_h`, `#to_json`, and `#to_yaml`.
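The access and destructuring forms from item 1 are standard Ruby syntax, illustrated below on a plain `Hash` (the `response` value and its keys here are made up for illustration); per the list above, `BaseModel` instances respond to the same forms.

```ruby
# Stand-in for a response object; BaseModel supports the same syntax.
response = {success: true, my_undocumented_property: "surprise"}

# Indexing works for known and unknown fields alike:
value = response[:my_undocumented_property]

# Rightward-assignment destructuring (Ruby 3.0+):
response => {success: success}

# Full pattern matching:
label =
  case response
  in {success: true} then "ok"
  in {success: false} then "failed"
  end
```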

Making custom or undocumented requests

Undocumented properties

You can send undocumented parameters to any endpoint, and read undocumented response properties, like so:

Note: an `extra_` parameter overrides a documented parameter of the same name.

```ruby
response = openlayer.inference_pipelines.data.stream(
  "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
  config: {
    inputVariableNames: ["user_query"],
    outputColumnName: "output",
    numOfTokenColumnName: "tokens",
    costColumnName: "cost",
    timestampColumnName: "timestamp"
  },
  rows: [
    {
      user_query: "what is the meaning of life?",
      output: "42",
      tokens: 7,
      cost: 0.02,
      timestamp: 1610000000
    }
  ],
  request_options: {
    extra_query: {my_query_parameter: value},
    extra_body: {my_body_parameter: value},
    extra_headers: {"my-header": value}
  }
)

puts(response[:my_undocumented_property])
```

Undocumented request params

If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` options under the `request_options:` parameter when making a request, as seen in the examples above.

Undocumented endpoints

To make requests to undocumented endpoints while retaining the benefit of auth, retries, and so on, you can make requests using `client.request`, like so:

```ruby
response = client.request(
  method: :post,
  path: '/undocumented/endpoint',
  query: {"dog": "woof"},
  headers: {"useful-header": "interesting-value"},
  body: {"hello": "world"}
)
```

Concurrency & connection pooling

`Openlayer::Client` instances are thread-safe, but are only fork-safe when there are no in-flight HTTP requests.

Each instance of `Openlayer::Client` has its own HTTP connection pool with a default size of 99. As such, we recommend instantiating the client once per application in most settings.

When all available connections from the pool are checked out, requests wait for a new connection to become available, with queue time counting towards the request timeout.

Unless otherwise specified, other classes in the SDK do not have locks protecting their underlying data structures.

Sorbet

This library provides comprehensive RBI definitions, and has no dependency on `sorbet-runtime`.

You can provide typesafe request parameters like so:

```ruby
openlayer.inference_pipelines.data.stream(
  "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
  config: Openlayer::InferencePipelines::DataStreamParams::Config::LlmData.new(
    input_variable_names: ["user_query"],
    output_column_name: "output",
    num_of_token_column_name: "tokens",
    cost_column_name: "cost",
    timestamp_column_name: "timestamp"
  ),
  rows: [
    {
      user_query: "what is the meaning of life?",
      output: "42",
      tokens: 7,
      cost: 0.02,
      timestamp: 1610000000
    }
  ]
)
```

Or, equivalently:

```ruby
# Hashes work, but are not typesafe:
openlayer.inference_pipelines.data.stream(
  "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
  config: {
    inputVariableNames: ["user_query"],
    outputColumnName: "output",
    numOfTokenColumnName: "tokens",
    costColumnName: "cost",
    timestampColumnName: "timestamp"
  },
  rows: [
    {
      user_query: "what is the meaning of life?",
      output: "42",
      tokens: 7,
      cost: 0.02,
      timestamp: 1610000000
    }
  ]
)

# You can also splat a full Params class:
params = Openlayer::InferencePipelines::DataStreamParams.new(
  config: Openlayer::InferencePipelines::DataStreamParams::Config::LlmData.new(
    input_variable_names: ["user_query"],
    output_column_name: "output",
    num_of_token_column_name: "tokens",
    cost_column_name: "cost",
    timestamp_column_name: "timestamp"
  ),
  rows: [
    {
      user_query: "what is the meaning of life?",
      output: "42",
      tokens: 7,
      cost: 0.02,
      timestamp: 1610000000
    }
  ]
)
openlayer.inference_pipelines.data.stream("182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e", **params)
```

Enums

Since this library does not depend on `sorbet-runtime`, it cannot provide `T::Enum` instances. Instead, we provide "tagged symbols", which are always primitives at runtime:

```ruby
# :"llm-base"
puts(Openlayer::ProjectCreateParams::TaskType::LLM_BASE)

# Revealed type: `T.all(Openlayer::ProjectCreateParams::TaskType, Symbol)`
T.reveal_type(Openlayer::ProjectCreateParams::TaskType::LLM_BASE)
```

Enum parameters have a "relaxed" type, so you can either pass in enum constants or their literal value:

```ruby
# Using the enum constants preserves the tagged type information:
openlayer.projects.create(
  task_type: Openlayer::ProjectCreateParams::TaskType::LLM_BASE,
  # …
)

# Literal values are also permissible:
openlayer.projects.create(
  task_type: :"llm-base",
  # …
)
```

Versioning

This package follows SemVer conventions. As the library is in initial development and has a major version of 0, APIs may change at any time.

This package considers improvements to the (non-runtime) `*.rbi` and `*.rbs` type definitions to be non-breaking changes.

Requirements

Ruby 3.2.0 or higher.

Contributing

See the contributing documentation.
