Notice: The highest tagged major version is v9.

reindex

package v8.19.1

Warning: This package is not in the latest version of its module.

Published: Dec 12, 2025  License: Apache-2.0  Imports: 13  Imported by: 4

Details

Repository

github.com/elastic/go-elasticsearch

Links

Documentation

Overview

Reindex documents.

Copy documents from a source to a destination. You can copy all documents to the destination index or reindex a subset of the documents. The source can be any existing index, alias, or data stream. The destination must differ from the source. For example, you cannot reindex a data stream into itself.
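As a usage illustration, here is a minimal sketch of a basic reindex with this package's builder methods. It is not the package's official example: the cluster address and the index names `my-index-000001` / `my-new-index-000001` are placeholders, and it assumes `types.ReindexSource.Index` accepts a slice of index names.

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/typedapi/types"
)

func main() {
	// Placeholder address; adjust for your cluster.
	es, err := elasticsearch.NewTypedClient(elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Copy every document from the source index into the destination index.
	res, err := es.Reindex().
		Source(&types.ReindexSource{Index: []string{"my-index-000001"}}).
		Dest(&types.ReindexDestination{Index: "my-new-index-000001"}).
		Do(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	if res.Created != nil && res.Total != nil {
		fmt.Printf("reindexed %d of %d documents\n", *res.Created, *res.Total)
	}
}
```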

IMPORTANT: Reindex requires `_source` to be enabled for all documents in the source. The destination should be configured as wanted before calling the reindex API. Reindex does not copy the settings from the source or its associated template. Mappings, shard counts, and replicas, for example, must be configured ahead of time.

If the Elasticsearch security features are enabled, you must have the following security privileges:

* The `read` index privilege for the source data stream, index, or alias.
* The `write` index privilege for the destination data stream, index, or index alias.
* To automatically create a data stream or index with a reindex API request, you must have the `auto_configure`, `create_index`, or `manage` index privilege for the destination data stream, index, or alias.
* If reindexing from a remote cluster, the `source.remote.user` must have the `monitor` cluster privilege and the `read` index privilege for the source data stream, index, or alias.

If reindexing from a remote cluster, you must explicitly allow the remote host in the `reindex.remote.whitelist` setting. Automatic data stream creation requires a matching index template with data stream enabled.

The `dest` element can be configured like the index API to control optimistic concurrency control. Omitting `version_type` or setting it to `internal` causes Elasticsearch to blindly dump documents into the destination, overwriting any that happen to have the same ID.

Setting `version_type` to `external` causes Elasticsearch to preserve the `version` from the source, create any documents that are missing, and update any documents that have an older version in the destination than they do in the source.

Setting `op_type` to `create` causes the reindex API to create only missing documents in the destination. All existing documents will cause a version conflict.

IMPORTANT: Because data streams are append-only, any reindex request to a destination data stream must have an `op_type` of `create`. A reindex can only add new documents to a destination data stream. It cannot update existing documents in a destination data stream.
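A hedged sketch of setting `op_type: create` through the typed builders follows. The `OpType` field on `types.ReindexDestination`, the `optype.Create` enum value, its import path, and the index/data-stream names are assumptions about the typedapi types and enums packages; verify them against your client version.

```
package reindexexample

import (
	"context"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/typedapi/types"
	"github.com/elastic/go-elasticsearch/v8/typedapi/types/enums/optype"
)

// reindexIntoDataStream copies documents into a destination data stream.
// Data streams are append-only, so op_type must be "create".
// Names are placeholders; OpType/optype.Create are assumed typedapi identifiers.
func reindexIntoDataStream(ctx context.Context, es *elasticsearch.TypedClient) error {
	_, err := es.Reindex().
		Source(&types.ReindexSource{Index: []string{"my-old-index"}}).
		Dest(&types.ReindexDestination{
			Index:  "my-data-stream",
			OpType: &optype.Create, // only create documents that are missing
		}).
		Do(ctx)
	return err
}
```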

By default, version conflicts abort the reindex process. To continue reindexing if there are conflicts, set the `conflicts` request body property to `proceed`. In this case, the response includes a count of the version conflicts that were encountered. Note that the handling of other error types is unaffected by the `conflicts` property. Additionally, if you opt to count version conflicts, the operation could attempt to reindex more documents from the source than `max_docs` until it has successfully indexed `max_docs` documents into the target or it has gone through every document in the source query.
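The `Conflicts` builder documented below maps to this request property. A minimal sketch, assuming the `conflicts.Proceed` enum value and placeholder index names:

```
package reindexexample

import (
	"context"
	"fmt"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/typedapi/types"
	"github.com/elastic/go-elasticsearch/v8/typedapi/types/enums/conflicts"
)

// reindexProceedOnConflicts keeps going when version conflicts occur and
// reports how many were counted instead of aborting the whole operation.
func reindexProceedOnConflicts(ctx context.Context, es *elasticsearch.TypedClient) error {
	res, err := es.Reindex().
		Conflicts(conflicts.Proceed). // count version conflicts instead of aborting
		Source(&types.ReindexSource{Index: []string{"my-index-000001"}}).
		Dest(&types.ReindexDestination{Index: "my-new-index-000001"}).
		Do(ctx)
	if err != nil {
		return err
	}
	if res.VersionConflicts != nil {
		fmt.Printf("version conflicts skipped: %d\n", *res.VersionConflicts)
	}
	return nil
}
```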

NOTE: The reindex API makes no effort to handle ID collisions. The last document written will "win" but the order isn't usually predictable, so it is not a good idea to rely on this behavior. Instead, make sure that IDs are unique by using a script.

**Running reindex asynchronously**

If the request contains `wait_for_completion=false`, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at `_tasks/<task_id>`.
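With this package, `WaitForCompletion(false)` (documented below) returns immediately and the task identifier is available on the response. A sketch with placeholder index names:

```
package reindexexample

import (
	"context"
	"fmt"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/typedapi/types"
)

// startReindexTask launches a reindex without blocking and prints the task
// identifier, which can be used with the tasks API to monitor or cancel it.
func startReindexTask(ctx context.Context, es *elasticsearch.TypedClient) error {
	res, err := es.Reindex().
		WaitForCompletion(false). // return a task instead of waiting
		Source(&types.ReindexSource{Index: []string{"my-index-000001"}}).
		Dest(&types.ReindexDestination{Index: "my-new-index-000001"}).
		Do(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("reindex running as task %v\n", res.Task)
	return nil
}
```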

**Reindex from multiple sources**

If you have many sources to reindex it is generally better to reindex them one at a time rather than using a glob pattern to pick up multiple sources. That way you can resume the process if there are any errors by removing the partially completed source and starting over. It also makes parallelizing the process fairly simple: split the list of sources to reindex and run each list in parallel.

For example, you can use a bash script like this:

```
for index in i1 i2 i3 i4 i5; do
  curl -HContent-Type:application/json -XPOST localhost:9200/_reindex?pretty -d'{
    "source": {
      "index": "'$index'"
    },
    "dest": {
      "index": "'$index'-reindexed"
    }
  }'
done
```
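The same one-source-at-a-time loop can be written against this package instead of curl. A sketch, with client creation omitted and the list of source indices supplied by the caller:

```
package reindexexample

import (
	"context"
	"fmt"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/typedapi/types"
)

// reindexOneAtATime copies each source index into "<name>-reindexed", one at a
// time, so a failed source can be redone without touching the others.
func reindexOneAtATime(ctx context.Context, es *elasticsearch.TypedClient, indices []string) error {
	for _, idx := range indices {
		_, err := es.Reindex().
			Source(&types.ReindexSource{Index: []string{idx}}).
			Dest(&types.ReindexDestination{Index: idx + "-reindexed"}).
			Do(ctx)
		if err != nil {
			return fmt.Errorf("reindex %s: %w", idx, err)
		}
	}
	return nil
}
```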

**Throttling**

Set `requests_per_second` to any positive decimal number (`1.4`, `6`, `1000`, for example) to throttle the rate at which reindex issues batches of index operations. Requests are throttled by padding each batch with a wait time. To turn off throttling, set `requests_per_second` to `-1`.

The throttling is done by waiting between batches so that the scroll that reindex uses internally can be given a timeout that takes into account the padding. The padding time is the difference between the batch size divided by the `requests_per_second` and the time spent writing. By default the batch size is `1000`, so if `requests_per_second` is set to `500`:

```
target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds
```

Since the batch is issued as a single bulk request, large batch sizes cause Elasticsearch to create many requests and then wait for a while before starting the next set. This is "bursty" instead of "smooth".
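The `RequestsPerSecond` builder documented below takes the value as a string. A short sketch with placeholder index names; `"-1"` would disable throttling:

```
package reindexexample

import (
	"context"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/typedapi/types"
)

// throttledReindex limits the reindex to roughly 500 sub-requests per second.
func throttledReindex(ctx context.Context, es *elasticsearch.TypedClient) error {
	_, err := es.Reindex().
		RequestsPerSecond("500").
		Source(&types.ReindexSource{Index: []string{"my-index-000001"}}).
		Dest(&types.ReindexDestination{Index: "my-new-index-000001"}).
		Do(ctx)
	return err
}
```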

**Slicing**

Reindex supports sliced scroll to parallelize the reindexing process. This parallelization can improve efficiency and provide a convenient way to break the request down into smaller parts.

NOTE: Reindexing from remote clusters does not support manual or automatic slicing.

You can slice a reindex request manually by providing a slice ID and total number of slices to each request. You can also let reindex automatically parallelize by using sliced scroll to slice on `_id`. The `slices` parameter specifies the number of slices to use.

Adding `slices` to the reindex request just automates the manual process, creating sub-requests, which means it has some quirks:

* You can see these requests in the tasks API. These sub-requests are "child" tasks of the task for the request with `slices`.
* Fetching the status of the task for the request with `slices` only contains the status of completed slices.
* These sub-requests are individually addressable for things like cancellation and rethrottling.
* Rethrottling the request with `slices` will rethrottle the unfinished sub-request proportionally.
* Canceling the request with `slices` will cancel each sub-request.
* Due to the nature of `slices`, each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
* Parameters like `requests_per_second` and `max_docs` on a request with `slices` are distributed proportionally to each sub-request. Combine that with the previous point about distribution being uneven and you should conclude that using `max_docs` with `slices` might not result in exactly `max_docs` documents being reindexed.
* Each sub-request gets a slightly different snapshot of the source, though these are all taken at approximately the same time.

If slicing automatically, setting `slices` to `auto` will choose a reasonable number for most indices. If slicing manually or otherwise tuning automatic slicing, use the following guidelines.

Query performance is most efficient when the number of slices is equal to the number of shards in the index. If that number is large (for example, `500`), choose a lower number as too many slices will hurt performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.

Indexing performance scales linearly across available resources with the number of slices.

Whether query or indexing performance dominates the runtime depends on the documents being reindexed and cluster resources.
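The `Slices` builder documented below takes the value as a string, so `"auto"` and explicit counts use the same call. A minimal sketch with placeholder index names:

```
package reindexexample

import (
	"context"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/typedapi/types"
)

// slicedReindex lets Elasticsearch pick the number of slices (roughly one per
// shard, up to a limit). Pass a number such as "5" instead of "auto" to tune it.
func slicedReindex(ctx context.Context, es *elasticsearch.TypedClient) error {
	_, err := es.Reindex().
		Slices("auto").
		Source(&types.ReindexSource{Index: []string{"my-index-000001"}}).
		Dest(&types.ReindexDestination{Index: "my-new-index-000001"}).
		Do(ctx)
	return err
}
```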

**Modify documents during reindexing**

Like `_update_by_query`, reindex operations support a script that modifies the document. Unlike `_update_by_query`, the script is allowed to modify the document's metadata.

Just as in `_update_by_query`, you can set `ctx.op` to change the operation that is run on the destination. For example, set `ctx.op` to `noop` if your script decides that the document doesn't have to be indexed in the destination. This "no operation" will be reported in the `noop` counter in the response body. Set `ctx.op` to `delete` if your script decides that the document must be deleted from the destination. The deletion will be reported in the `deleted` counter in the response body. Setting `ctx.op` to anything else will return an error, as will setting any other field in `ctx`.

Think of the possibilities! Just be careful; you are able to change:

* `_id`
* `_index`
* `_version`
* `_routing`

Setting `_version` to `null` or clearing it from the `ctx` map is just like not sending the version in an indexing request. It will cause the document to be overwritten in the destination regardless of the version on the target or the version type you use in the reindex API.
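One way to attach a script without depending on the exact shape of `types.Script` is to build the body from raw JSON with `Request.FromJSON` (documented below). A sketch with placeholder index and field names, using a simple Painless script that sets `ctx.op` to `noop`:

```
package reindexexample

import (
	"context"
	"fmt"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/typedapi/core/reindex"
)

// reindexWithScript skips documents that have no "user" field by setting
// ctx.op to "noop"; the skips show up in the response's Noops counter.
func reindexWithScript(ctx context.Context, es *elasticsearch.TypedClient) error {
	req, err := reindex.NewRequest().FromJSON(`{
	  "source": { "index": "my-index-000001" },
	  "dest":   { "index": "my-new-index-000001" },
	  "script": {
	    "lang": "painless",
	    "source": "if (ctx._source.user == null) { ctx.op = 'noop' }"
	  }
	}`)
	if err != nil {
		return err
	}
	res, err := es.Reindex().Request(req).Do(ctx)
	if err != nil {
		return err
	}
	if res.Noops != nil {
		fmt.Printf("documents skipped as noop: %d\n", *res.Noops)
	}
	return nil
}
```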

**Reindex from remote**

Reindex supports reindexing from a remote Elasticsearch cluster. The `host` parameter must contain a scheme, host, port, and optional path. The `username` and `password` parameters are optional and when they are present the reindex operation will connect to the remote Elasticsearch node using basic authentication. Be sure to use HTTPS when using basic authentication or the password will be sent in plain text. There are a range of settings available to configure the behavior of the HTTPS connection.

When using Elastic Cloud, it is also possible to authenticate against the remote cluster through the use of a valid API key. Remote hosts must be explicitly allowed with the `reindex.remote.whitelist` setting. It can be set to a comma-delimited list of allowed remote host and port combinations. Scheme is ignored; only the host and port are used. For example:

```
reindex.remote.whitelist: [otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*]
```

The list of allowed hosts must be configured on any nodes that will coordinate the reindex. This feature should work with remote clusters of any version of Elasticsearch. This should enable you to upgrade from any version of Elasticsearch to the current version by reindexing from a cluster of the old version.

WARNING: Elasticsearch does not support forward compatibility across major versions. For example, you cannot reindex from a 7.x cluster into a 6.x cluster.

To enable queries sent to older versions of Elasticsearch, the `query` parameter is sent directly to the remote host without validation or modification.

NOTE: Reindexing from remote clusters does not support manual or automatic slicing.

Reindexing from a remote server uses an on-heap buffer that defaults to a maximum size of 100mb. If the remote index includes very large documents you'll need to use a smaller batch size. It is also possible to set the socket read timeout on the remote connection with the `socket_timeout` field and the connection timeout with the `connect_timeout` field. Both default to 30 seconds.
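A hedged sketch of a remote reindex from this package, again using `FromJSON` so the remote block matches the Elasticsearch request body exactly. The remote host, credentials, index names, and batch size are placeholders, and the remote host must also appear in `reindex.remote.whitelist` on the coordinating nodes:

```
package reindexexample

import (
	"context"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/typedapi/core/reindex"
)

// reindexFromRemote pulls documents from another cluster with basic auth,
// a reduced batch size for large documents, and explicit timeouts.
func reindexFromRemote(ctx context.Context, es *elasticsearch.TypedClient) error {
	req, err := reindex.NewRequest().FromJSON(`{
	  "source": {
	    "remote": {
	      "host": "https://otherhost:9200",
	      "username": "user",
	      "password": "pass",
	      "socket_timeout": "1m",
	      "connect_timeout": "10s"
	    },
	    "index": "my-remote-index",
	    "size": 100
	  },
	  "dest": { "index": "my-local-index" }
	}`)
	if err != nil {
		return err
	}
	_, err = es.Reindex().Request(req).Do(ctx)
	return err
}
```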

**Configuring SSL parameters**

Reindex from remote supports configurable SSL settings. These must be specified in the `elasticsearch.yml` file, with the exception of the secure settings, which you add in the Elasticsearch keystore. It is not possible to configure SSL in the body of the reindex request.

Index

Constants

This section is empty.

Variables

var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")

ErrBuildPath is returned in case of missing parameters within the build of the request.

Functions

This section is empty.

Types

type NewReindex

type NewReindex func() *Reindex

NewReindex is a type alias for the Reindex constructor function.

func NewReindexFunc

func NewReindexFunc(tp elastictransport.Interface) NewReindex

NewReindexFunc returns a new instance of Reindex with the provided transport. Used in the index of the library, this allows retrieving every API in one place.

type Reindex

type Reindex struct {
	// contains filtered or unexported fields
}

func New

func New(tp elastictransport.Interface) *Reindex

Reindex documents.

The documentation for New repeats the package Overview above: it covers reindex behavior, required privileges, conflicts handling, throttling, slicing, scripted document modification, and reindexing from remote clusters.

https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html

func (*Reindex) Conflicts (added in v8.9.0)

func (r *Reindex) Conflicts(conflicts conflicts.Conflicts) *Reindex

Conflicts Indicates whether to continue reindexing even when there are conflicts.
API name: conflicts

func (*Reindex) Dest (added in v8.9.0)

func (r *Reindex) Dest(dest *types.ReindexDestination) *Reindex

Dest The destination you are copying to.
API name: dest

func (Reindex) Do

func (r Reindex) Do(providedCtx context.Context) (*Response, error)

Do runs the request through the transport, handles the response, and returns a reindex.Response.

func (*Reindex) ErrorTrace (added in v8.14.0)

func (r *Reindex) ErrorTrace(errortrace bool) *Reindex

ErrorTrace When set to `true` Elasticsearch will include the full stack trace of errors when they occur.
API name: error_trace

func (*Reindex) FilterPath (added in v8.14.0)

func (r *Reindex) FilterPath(filterpaths ...string) *Reindex

FilterPath Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.
API name: filter_path

func (*Reindex) Header

func (r *Reindex) Header(key, value string) *Reindex

Header sets a key, value pair in the Reindex headers map.

func (*Reindex) HttpRequest

func (r *Reindex) HttpRequest(ctx context.Context) (*http.Request, error)

HttpRequest returns the http.Request object built from the given parameters.

func (*Reindex) Human (added in v8.14.0)

func (r *Reindex) Human(human bool) *Reindex

Human When set to `true` will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.
API name: human

func (*Reindex) MaxDocs (added in v8.9.0)

func (r *Reindex) MaxDocs(maxdocs int64) *Reindex

MaxDocs The maximum number of documents to reindex. By default, all documents are reindexed. If it is a value less than or equal to `scroll_size`, a scroll will not be used to retrieve the results for the operation.

If `conflicts` is set to `proceed`, the reindex operation could attempt to reindex more documents from the source than `max_docs` until it has successfully indexed `max_docs` documents into the target or it has gone through every document in the source query.
API name: max_docs

func (Reindex) Perform (added in v8.7.0)

func (r Reindex) Perform(providedCtx context.Context) (*http.Response, error)

Perform runs the http.Request through the provided transport and returns an http.Response.

func (*Reindex) Pretty (added in v8.14.0)

func (r *Reindex) Pretty(pretty bool) *Reindex

Pretty If set to `true` the returned JSON will be "pretty-formatted". Use this option for debugging only.
API name: pretty

func (*Reindex) Raw

func (r *Reindex) Raw(raw io.Reader) *Reindex

Raw takes a JSON payload as input which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.

func (*Reindex) Refresh

func (r *Reindex) Refresh(refresh bool) *Reindex

Refresh If `true`, the request refreshes affected shards to make this operation visible to search.
API name: refresh

func (*Reindex) Request

func (r *Reindex) Request(req *Request) *Reindex

Request allows setting the request property with the appropriate payload.

func (*Reindex) RequestsPerSecond

func (r *Reindex) RequestsPerSecond(requestspersecond string) *Reindex

RequestsPerSecond The throttle for this request in sub-requests per second. By default, there is no throttle.
API name: requests_per_second

func (*Reindex) RequireAlias

func (r *Reindex) RequireAlias(requirealias bool) *Reindex

RequireAlias If `true`, the destination must be an index alias.
API name: require_alias

func (*Reindex) Script (added in v8.9.0)

func (r *Reindex) Script(script *types.Script) *Reindex

Script The script to run to update the document source or metadata when reindexing.
API name: script

func (*Reindex) Scroll

func (r *Reindex) Scroll(duration string) *Reindex

Scroll The period of time that a consistent view of the index should be maintained for scrolled search.
API name: scroll

func (*Reindex) Size (added in v8.9.0)

func (r *Reindex) Size(size int64) *Reindex

API name: size

func (*Reindex) Slices

func (r *Reindex) Slices(slices string) *Reindex

Slices The number of slices this task should be divided into. It defaults to one slice, which means the task isn't sliced into subtasks.

Reindex supports sliced scroll to parallelize the reindexing process. This parallelization can improve efficiency and provide a convenient way to break the request down into smaller parts.

NOTE: Reindexing from remote clusters does not support manual or automatic slicing.

If set to `auto`, Elasticsearch chooses the number of slices to use. This setting will use one slice per shard, up to a certain limit. If there are multiple sources, it will choose the number of slices based on the index or backing index with the smallest number of shards.
API name: slices

func (*Reindex) Source (added in v8.9.0)

func (r *Reindex) Source(source *types.ReindexSource) *Reindex

Source The source you are copying from.
API name: source

func (*Reindex) Timeout

func (r *Reindex) Timeout(duration string) *Reindex

Timeout The period each indexing operation waits for automatic index creation, dynamic mapping updates, and waiting for active shards. By default, Elasticsearch waits for at least one minute before failing. The actual wait time could be longer, particularly when multiple waits occur.
API name: timeout

func (*Reindex) WaitForActiveShards

func (r *Reindex) WaitForActiveShards(waitforactiveshards string) *Reindex

WaitForActiveShards The number of shard copies that must be active before proceeding with the operation. Set it to `all` or any positive integer up to the total number of shards in the index (`number_of_replicas+1`). The default value is one, which means it waits for each primary shard to be active.
API name: wait_for_active_shards

func (*Reindex) WaitForCompletion

func (r *Reindex) WaitForCompletion(waitforcompletion bool) *Reindex

WaitForCompletion If `true`, the request blocks until the operation is complete.
API name: wait_for_completion

type Request

type Request struct {

	// Conflicts Indicates whether to continue reindexing even when there are conflicts.
	Conflicts *conflicts.Conflicts `json:"conflicts,omitempty"`
	// Dest The destination you are copying to.
	Dest types.ReindexDestination `json:"dest"`
	// MaxDocs The maximum number of documents to reindex.
	// By default, all documents are reindexed.
	// If it is a value less than or equal to `scroll_size`, a scroll will not be
	// used to retrieve the results for the operation.
	//
	// If `conflicts` is set to `proceed`, the reindex operation could attempt to
	// reindex more documents from the source than `max_docs` until it has
	// successfully indexed `max_docs` documents into the target or it has gone
	// through every document in the source query.
	MaxDocs *int64 `json:"max_docs,omitempty"`
	// Script The script to run to update the document source or metadata when reindexing.
	Script *types.Script `json:"script,omitempty"`
	Size   *int64        `json:"size,omitempty"`
	// Source The source you are copying from.
	Source types.ReindexSource `json:"source"`
}

Request holds the request body struct for the package reindex

https://github.com/elastic/elasticsearch-specification/blob/470b4b9aaaa25cae633ec690e54b725c6fc939c7/specification/_global/reindex/ReindexRequest.ts#L27-L317
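As an alternative to the Source/Dest builder methods, the Request struct can be populated directly and passed to the builder with Request(). A minimal sketch with placeholder index names; the Index field values are assumptions about the types package:

```
package reindexexample

import (
	"context"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/typedapi/core/reindex"
	"github.com/elastic/go-elasticsearch/v8/typedapi/types"
)

// reindexWithRequestStruct builds the request body as a struct and sends it.
func reindexWithRequestStruct(ctx context.Context, es *elasticsearch.TypedClient) error {
	maxDocs := int64(10000)
	req := &reindex.Request{
		Source:  types.ReindexSource{Index: []string{"my-index-000001"}},
		Dest:    types.ReindexDestination{Index: "my-new-index-000001"},
		MaxDocs: &maxDocs, // stop after roughly this many documents
	}
	_, err := es.Reindex().Request(req).Do(ctx)
	return err
}
```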

func NewRequest (added in v8.5.0)

func NewRequest() *Request

NewRequest returns a Request.

func (*Request) FromJSON (added in v8.5.0)

func (r *Request) FromJSON(data string) (*Request, error)

FromJSON loads an arbitrary JSON payload into the request structure.

type Response (added in v8.7.0)

type Response struct {

	// Batches The number of scroll responses that were pulled back by the reindex.
	Batches *int64 `json:"batches,omitempty"`
	// Created The number of documents that were successfully created.
	Created *int64 `json:"created,omitempty"`
	// Deleted The number of documents that were successfully deleted.
	Deleted *int64 `json:"deleted,omitempty"`
	// Failures If there were any unrecoverable errors during the process, it is an array of
	// those failures.
	// If this array is not empty, the request ended because of those failures.
	// Reindex is implemented using batches and any failure causes the entire
	// process to end but all failures in the current batch are collected into the
	// array.
	// You can use the `conflicts` option to prevent the reindex from ending on
	// version conflicts.
	Failures []types.BulkIndexByScrollFailure `json:"failures,omitempty"`
	// Noops The number of documents that were ignored because the script used for the
	// reindex returned a `noop` value for `ctx.op`.
	Noops *int64 `json:"noops,omitempty"`
	// RequestsPerSecond The number of requests per second effectively run during the reindex.
	RequestsPerSecond *float32 `json:"requests_per_second,omitempty"`
	// Retries The number of retries attempted by reindex.
	Retries *types.Retries `json:"retries,omitempty"`
	SliceId *int           `json:"slice_id,omitempty"`
	Task    types.TaskId   `json:"task,omitempty"`
	// ThrottledMillis The number of milliseconds the request slept to conform to
	// `requests_per_second`.
	ThrottledMillis *int64 `json:"throttled_millis,omitempty"`
	// ThrottledUntilMillis This field should always be equal to zero in a reindex response.
	// It has meaning only when using the task API, where it indicates the next time
	// (in milliseconds since epoch) that a throttled request will be run again in
	// order to conform to `requests_per_second`.
	ThrottledUntilMillis *int64 `json:"throttled_until_millis,omitempty"`
	// TimedOut If any of the requests that ran during the reindex timed out, it is `true`.
	TimedOut *bool `json:"timed_out,omitempty"`
	// Took The total milliseconds the entire operation took.
	Took *int64 `json:"took,omitempty"`
	// Total The number of documents that were successfully processed.
	Total *int64 `json:"total,omitempty"`
	// Updated The number of documents that were successfully updated.
	// That is to say, a document with the same ID already existed before the
	// reindex updated it.
	Updated *int64 `json:"updated,omitempty"`
	// VersionConflicts The number of version conflicts that occurred.
	VersionConflicts *int64 `json:"version_conflicts,omitempty"`
}

Response holds the response body struct for the package reindex

https://github.com/elastic/elasticsearch-specification/blob/470b4b9aaaa25cae633ec690e54b725c6fc939c7/specification/_global/reindex/ReindexResponse.ts#L26-L92

func NewResponse (added in v8.7.0)

func NewResponse() *Response

NewResponse returns a Response.

