Notice: The highest tagged major version is v9.

package inference

v8.19.1
Warning

This package is not in the latest version of its module.

Published: Dec 12, 2025 · License: Apache-2.0 · Imports: 12 · Imported by: 2

Details

Repository

github.com/elastic/go-elasticsearch

Links

Documentation

Overview

Perform inference on the service.

This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that has been defined when the endpoint was created with the create inference API.

For details about using this API with a service, such as Amazon Bedrock, Anthropic, or HuggingFace, refer to the service-specific documentation.

> info
> The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

Index

Constants

This section is empty.

Variables

View Source
var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")

ErrBuildPath is returned in case of missing parameters within the build of the request.

Functions

This section is empty.

Types

type Inference

type Inference struct {
	// contains filtered or unexported fields
}

func New

func New(tp elastictransport.Interface) *Inference

Perform inference on the service; see the package overview for the full description.

https://www.elastic.co/guide/en/elasticsearch/reference/current/post-inference-api.html

func (Inference) Do

func (r Inference) Do(providedCtx context.Context) (*Response, error)

Do runs the request through the transport, handles the response, and returns an inference.Response.

func (*Inference) ErrorTrace (added in v8.14.0)

func (r *Inference) ErrorTrace(errortrace bool) *Inference

ErrorTrace When set to `true`, Elasticsearch will include the full stack trace of errors when they occur.

API name: error_trace

func (*Inference) FilterPath (added in v8.14.0)

func (r *Inference) FilterPath(filterpaths ...string) *Inference

FilterPath Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

API name: filter_path

func (*Inference) Header

func (r *Inference) Header(key, value string) *Inference

Header sets a key/value pair in the Inference headers map.

func (*Inference) HttpRequest

func (r *Inference) HttpRequest(ctx context.Context) (*http.Request, error)

HttpRequest returns the http.Request object built from the given parameters.

func (*Inference) Human (added in v8.14.0)

func (r *Inference) Human(human bool) *Inference

Human When set to `true`, returns statistics in a format suitable for humans. For example, `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values are omitted. This makes sense for responses consumed only by machines.

API name: human

func (*Inference) Input

func (r *Inference) Input(inputs ...string) *Inference

Input The text on which you want to perform the inference task. It can be a single string or an array.

> info
> Inference endpoints for the `completion` task type currently only support a single string as input.

API name: input
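Since the input is serialized as a JSON array even when a single string is given, the wire shape can be illustrated with a local stand-in struct (not the package's own `Request` type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// body mirrors the wire shape of the inference request's input field;
// it is a local stand-in, not the package's Request type.
type body struct {
	Input []string `json:"input"`
}

// marshalInput builds the JSON body for one or more inputs.
func marshalInput(inputs ...string) string {
	b, err := json.Marshal(body{Input: inputs})
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	// A text_embedding endpoint can take several inputs at once...
	fmt.Println(marshalInput("first text", "second text"))
	// ...while a completion endpoint currently expects exactly one.
	fmt.Println(marshalInput("summarize this document"))
}
```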

func (*Inference) InputType (added in v8.19.0)

func (r *Inference) InputType(inputtype string) *Inference

InputType Specifies the input data type for the text embedding model. The `input_type` parameter only applies to Inference Endpoints with the `text_embedding` task type. Possible values include:

* `SEARCH`
* `INGEST`
* `CLASSIFICATION`
* `CLUSTERING`

Not all services support all values; unsupported values will trigger a validation exception. Accepted values depend on the configured inference service; refer to the relevant service-specific documentation for more info.

> info
> The `input_type` parameter specified on the root level of the request body will take precedence over the `input_type` parameter specified in `task_settings`.

API name: input_type

func (Inference) Perform

func (r Inference) Perform(providedCtx context.Context) (*http.Response, error)

Perform runs the http.Request through the provided transport and returns an http.Response.

func (*Inference) Pretty (added in v8.14.0)

func (r *Inference) Pretty(pretty bool) *Inference

Pretty If set to `true`, the returned JSON will be "pretty-formatted". Use this option for debugging only.

API name: pretty

func (*Inference) Query (added in v8.14.0)

func (r *Inference) Query(query string) *Inference

Query The query input, which is required only for the `rerank` task. It is not required for other tasks.

API name: query

func (*Inference) Raw

func (r *Inference) Raw(raw io.Reader) *Inference

Raw takes a JSON payload as input, which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.

func (*Inference) Request

func (r *Inference) Request(req *Request) *Inference

Request allows setting the request property with the appropriate payload.

func (*Inference) TaskSettings

func (r *Inference) TaskSettings(tasksettings json.RawMessage) *Inference

TaskSettings Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service.

API name: task_settings

func (*Inference) TaskType (added in v8.13.0)

func (r *Inference) TaskType(tasktype string) *Inference

TaskType The type of inference task that the model performs.

API Name: tasktype

func (*Inference) Timeout (added in v8.14.0)

func (r *Inference) Timeout(duration string) *Inference

Timeout The amount of time to wait for the inference request to complete.

API name: timeout

type NewInference

type NewInference func(inferenceid string) *Inference

NewInference is a type alias for the constructor function, used in the library index.

func NewInferenceFunc

func NewInferenceFunc(tp elastictransport.Interface) NewInference

NewInferenceFunc returns a new instance of Inference with the provided transport. Used in the index of the library, this allows every API to be retrieved in one place.

type Request

type Request struct {

	// Input The text on which you want to perform the inference task.
	// It can be a single string or an array.
	//
	// > info
	// > Inference endpoints for the `completion` task type currently only support a
	// single string as input.
	Input []string `json:"input"`

	// InputType Specifies the input data type for the text embedding model. The `input_type`
	// parameter only applies to Inference Endpoints with the `text_embedding` task
	// type. Possible values include:
	// * `SEARCH`
	// * `INGEST`
	// * `CLASSIFICATION`
	// * `CLUSTERING`
	// Not all services support all values. Unsupported values will trigger a
	// validation exception.
	// Accepted values depend on the configured inference service, refer to the
	// relevant service-specific documentation for more info.
	//
	// > info
	// > The `input_type` parameter specified on the root level of the request body
	// will take precedence over the `input_type` parameter specified in
	// `task_settings`.
	InputType *string `json:"input_type,omitempty"`

	// Query The query input, which is required only for the `rerank` task.
	// It is not required for other tasks.
	Query *string `json:"query,omitempty"`

	// TaskSettings Task settings for the individual inference request.
	// These settings are specific to the task type you specified and override the
	// task settings specified when initializing the service.
	TaskSettings json.RawMessage `json:"task_settings,omitempty"`
}

Request holds the request body struct for the package inference

https://github.com/elastic/elasticsearch-specification/blob/470b4b9aaaa25cae633ec690e54b725c6fc939c7/specification/inference/inference/InferenceRequest.ts#L26-L104

func NewRequest

func NewRequest() *Request

NewRequest returns a Request

func (*Request) FromJSON

func (r *Request) FromJSON(data string) (*Request, error)

FromJSON allows loading arbitrary JSON into the request structure.
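FromJSON is essentially a typed wrapper over JSON unmarshaling; its behavior can be mimicked with the standard library, using a simplified stand-in for Request:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// miniRequest is a simplified stand-in for the package's Request type.
type miniRequest struct {
	Input []string `json:"input"`
	Query *string  `json:"query,omitempty"`
}

// fromJSON mimics the FromJSON helper: it loads an arbitrary JSON
// document into the request structure, reporting malformed input.
func fromJSON(data string) (*miniRequest, error) {
	var r miniRequest
	if err := json.Unmarshal([]byte(data), &r); err != nil {
		return nil, err
	}
	return &r, nil
}

func main() {
	r, err := fromJSON(`{"input":["rank me"],"query":"search terms"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(r.Input[0], *r.Query)
}
```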

func (*Request) UnmarshalJSON (added in v8.12.1)

func (s *Request) UnmarshalJSON(data []byte) error

type Response

type Response struct {
	AdditionalInferenceResultProperty map[string]json.RawMessage      `json:"-"`
	Completion                        []types.CompletionResult        `json:"completion,omitempty"`
	Rerank                            []types.RankedDocument          `json:"rerank,omitempty"`
	SparseEmbedding                   []types.SparseEmbeddingResult   `json:"sparse_embedding,omitempty"`
	TextEmbedding                     []types.TextEmbeddingResult     `json:"text_embedding,omitempty"`
	TextEmbeddingBits                 []types.TextEmbeddingByteResult `json:"text_embedding_bits,omitempty"`
	TextEmbeddingBytes                []types.TextEmbeddingByteResult `json:"text_embedding_bytes,omitempty"`
}

Response holds the response body struct for the package inference

https://github.com/elastic/elasticsearch-specification/blob/470b4b9aaaa25cae633ec690e54b725c6fc939c7/specification/inference/inference/InferenceResponse.ts#L22-L25
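Each task type reports its results under its own key, so a caller typically inspects only the slice matching the endpoint's task. A simplified decoding sketch with a local stand-in type; the `result` field shape is assumed from the `completion` task's documented response:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// miniResponse is a local stand-in for the package's Response type: each
// task type reports its results under its own key, so for any given call
// only one of these slices is populated.
type miniResponse struct {
	Completion []struct {
		Result string `json:"result"`
	} `json:"completion,omitempty"`
}

// decodeCompletion extracts the first completion result from a raw
// response body, or an error if none is present.
func decodeCompletion(raw string) (string, error) {
	var resp miniResponse
	if err := json.Unmarshal([]byte(raw), &resp); err != nil {
		return "", err
	}
	if len(resp.Completion) == 0 {
		return "", fmt.Errorf("no completion results in response")
	}
	return resp.Completion[0].Result, nil
}

func main() {
	raw := `{"completion":[{"result":"Paris is the capital of France."}]}`
	got, err := decodeCompletion(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(got)
}
```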

func NewResponse

func NewResponse() *Response

NewResponse returns a Response


