Elasticsearch API conventions

The Elasticsearch REST APIs are exposed over HTTP. Except where noted, the following conventions apply across all APIs.

The type of the content sent in a request body must be specified using the Content-Type header. The value of this header must map to one of the formats that the API supports. Most APIs support JSON, YAML, CBOR, and SMILE. The bulk and multi-search APIs support NDJSON, JSON, and SMILE; other types will result in an error response.

When using the source query string parameter, the content type must be specified using the source_content_type query string parameter.

Elasticsearch only supports UTF-8-encoded JSON. Elasticsearch ignores any other encoding headers sent with a request. Responses are also UTF-8 encoded.

You can pass an X-Opaque-Id HTTP header to track the origin of a request in Elasticsearch logs and tasks. If provided, Elasticsearch surfaces the X-Opaque-Id value in its logs and in task information.

For the deprecation logs, Elasticsearch also uses the X-Opaque-Id value to throttle and deduplicate deprecation warnings. See Deprecation logs throttling.

The X-Opaque-Id header accepts any arbitrary value. However, we recommend you limit these values to a finite set, such as an ID per client. Don’t generate a unique X-Opaque-Id header for every request. Too many unique X-Opaque-Id values can prevent Elasticsearch from deduplicating warnings in the deprecation logs.

Elasticsearch also supports a traceparent HTTP header, using the official W3C trace context spec. You can use the traceparent header to trace requests across Elastic products and other services. Because it’s only used for traces, you can safely generate a unique traceparent header for each request.

If provided, Elasticsearch surfaces the header’s trace-id value as trace.id in its logs.

For example, the following traceparent value would produce the following trace.id value in those logs.

`traceparent`: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
`trace.id`: 0af7651916cd43dd8448eb211c80319c
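Since it’s safe to generate a unique traceparent per request, a client can build one directly. The following is a hedged sketch (not an Elasticsearch client API) that just assembles the W3C trace context fields shown in the example:

```python
import secrets

# Sketch only: build a W3C trace context `traceparent` header value.
# The trace-id portion is what Elasticsearch surfaces as `trace.id`.
def make_traceparent() -> str:
    version = "00"
    trace_id = secrets.token_hex(16)   # 16 random bytes -> 32 hex chars
    parent_id = secrets.token_hex(8)   # 8 random bytes -> 16 hex chars
    flags = "01"                       # sampled
    return f"{version}-{trace_id}-{parent_id}-{flags}"

header = make_traceparent()
trace_id = header.split("-")[1]        # the value surfaced as trace.id
```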

A number of Elasticsearch GET APIs, most notably the search API, support a request body. While the GET action makes sense in the context of retrieving information, GET requests with a body are not supported by all HTTP libraries. All Elasticsearch GET APIs that require a body can also be submitted as POST requests. Alternatively, you can pass the request body as the source query string parameter when using GET.

A cron expression is a string of the following form:

<seconds> <minutes> <hours> <day_of_month> <month> <day_of_week> [year]

Elasticsearch uses the cron parser from the Quartz Job Scheduler. For more information about writing Quartz cron expressions, see the Quartz CronTrigger Tutorial.

All schedule times are in coordinated universal time (UTC); other timezones are not supported.

Tip

You can use the elasticsearch-croneval command line tool to validate your cron expressions.

All elements are required except for year. See Cron special characters for information about the allowed special characters.

<seconds>
(Required) Valid values: 0-59 and the special characters , - * /
<minutes>
(Required) Valid values: 0-59 and the special characters , - * /
<hours>
(Required) Valid values: 0-23 and the special characters , - * /
<day_of_month>
(Required) Valid values: 1-31 and the special characters , - * / ? L W
<month>
(Required) Valid values: 1-12, JAN-DEC, jan-dec, and the special characters , - * /
<day_of_week>
(Required) Valid values: 1-7, SUN-SAT, sun-sat, and the special characters , - * / ? L #
<year>
(Optional) Valid values: 1970-2099 and the special characters , - * /
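As a quick structural check of these rules, a cron expression must have the six required fields plus an optional year. The helper below is hypothetical and checks only the field count, not field contents; use elasticsearch-croneval for real validation:

```python
# Hypothetical helper (not part of Elasticsearch): checks that a cron
# expression has the six required fields plus an optional <year> field.
def has_valid_field_count(expression: str) -> bool:
    fields = expression.split()
    return len(fields) in (6, 7)

print(has_valid_field_count("0 5 9 * * ?"))        # True: six required fields
print(has_valid_field_count("0 5 9 * * ? 2020"))   # True: optional year
print(has_valid_field_count("5 9 * * ?"))          # False: too few fields
```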
*
Selects every possible value for a field. For example, * in the hours field means "every hour".
?
No specific value. Use when you don’t care what the value is. For example, if you want the schedule to trigger on a particular day of the month, but don’t care what day of the week that happens to be, you can specify ? in the day_of_week field.
-
A range of values (inclusive). Use to separate a minimum and maximum value. For example, if you want the schedule to trigger every hour between 9:00 a.m. and 5:00 p.m., you could specify 9-17 in the hours field.
,
Multiple values. Use to separate multiple values for a field. For example, if you want the schedule to trigger every Tuesday and Thursday, you could specify TUE,THU in the day_of_week field.
/
Increment. Use to separate values when specifying a time increment. The first value represents the starting point, and the second value represents the interval. For example, if you want the schedule to trigger every 20 minutes starting at the top of the hour, you could specify 0/20 in the minutes field. Similarly, specifying 1/5 in the day_of_month field will trigger every 5 days starting on the first day of the month.
L
Last. Use in the day_of_month field to mean the last day of the month, such as day 31 for January, day 28 for February in non-leap years, or day 30 for April. Use alone in the day_of_week field in place of 7 or SAT, or after a particular day of the week to select the last day of that type in the month. For example, 6L means the last Friday of the month. You can specify LW in the day_of_month field to specify the last weekday of the month. Avoid using the L option when specifying lists or ranges of values, as the results likely won’t be what you expect.
W
Weekday. Use to specify the weekday (Monday-Friday) nearest the given day. As an example, if you specify 15W in the day_of_month field and the 15th is a Saturday, the schedule will trigger on the 14th. If the 15th is a Sunday, the schedule will trigger on Monday the 16th. If the 15th is a Tuesday, the schedule will trigger on Tuesday the 15th. However, if you specify 1W as the value for day_of_month, and the 1st is a Saturday, the schedule will trigger on Monday the 3rd; it won’t jump over the month boundary. You can specify LW in the day_of_month field to specify the last weekday of the month. You can only use the W option when the day_of_month is a single day; it is not valid when specifying a range or list of days.
#
Nth XXX day in a month. Use in the day_of_week field to specify the nth XXX day of the month. For example, if you specify 6#1, the schedule will trigger on the first Friday of the month. Note that if you specify 3#5 and there are not 5 Tuesdays in a particular month, the schedule won’t trigger that month.
0 5 9 * * ?
Trigger at 9:05 a.m. UTC every day.
0 5 9 * * ? 2020
Trigger at 9:05 a.m. UTC every day during the year 2020.
0 5 9 ? * MON-FRI
Trigger at 9:05 a.m. UTC Monday through Friday.
0 0-5 9 * * ?
Trigger every minute starting at 9:00 a.m. UTC and ending at 9:05 a.m. UTC every day.
0 0/15 9 * * ?
Trigger every 15 minutes starting at 9:00 a.m. UTC and ending at 9:45 a.m. UTC every day.
0 5 9 1/3 * ?
Trigger at 9:05 a.m. UTC every 3 days every month, starting on the first day of the month.
0 1 4 1 4 ?
Trigger every April 1st at 4:01 a.m. UTC.
0 0,30 9 ? 4 WED
Trigger at 9:00 a.m. UTC and at 9:30 a.m. UTC every Wednesday in the month of April.
0 5 9 15 * ?
Trigger at 9:05 a.m. UTC on the 15th day of every month.
0 5 9 15W * ?
Trigger at 9:05 a.m. UTC on the nearest weekday to the 15th of every month.
0 5 9 ? * 6#1
Trigger at 9:05 a.m. UTC on the first Friday of every month.
0 5 9 L * ?
Trigger at 9:05 a.m. UTC on the last day of every month.
0 5 9 ? * 2L
Trigger at 9:05 a.m. UTC on the last Monday of every month.
0 5 9 LW * ?
Trigger at 9:05 a.m. UTC on the last weekday of every month.

Date math name resolution lets you search a range of time series indices or index aliases rather than searching all of your indices and filtering the results. Limiting the number of searched indices reduces cluster load and improves search performance. For example, if you are searching for errors in your daily logs, you can use a date math name template to restrict the search to the past two days.

Most APIs that accept an index or index alias argument support date math. A date math name takes the following form:

<static_name{date_math_expr{date_format|time_zone}}>

Where:

static_name
Static text.
date_math_expr
Dynamic date math expression that computes the date dynamically.
date_format
Optional format in which the computed date should be rendered. Defaults to yyyy.MM.dd. The format should be compatible with java-time (https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html).
time_zone
Optional time zone. Defaults to UTC.
Note

Pay attention to the usage of lowercase vs. capital letters in the date_format. For example: mm denotes minute of hour, while MM denotes month of year. Similarly, hh denotes the hour in the 1-12 range in combination with AM/PM, while HH denotes the hour in the 0-23 24-hour range.

Date math expressions are resolved independently of locale. Consequently, it is not possible to use any calendar other than the Gregorian calendar.

You must enclose date math names in angle brackets. If you use the name in a request path, special characters must be URI encoded. For example:

# PUT /<my-index-{now/d}>
PUT /%3Cmy-index-%7Bnow%2Fd%7D%3E
Percent encoding of date math characters

The special characters used for date rounding must be URI encoded as follows:

<    %3C
>    %3E
/    %2F
{    %7B
}    %7D
|    %7C
+    %2B
:    %3A
,    %2C
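Python’s standard library can produce these encodings; passing safe="" ensures the / inside the date math expression is encoded too:

```python
from urllib.parse import quote

# Percent-encode a date math index name for use in a request path.
# safe="" forces '/' (normally left alone in paths) to become %2F.
encoded = quote("<my-index-{now/d}>", safe="")
print(encoded)  # %3Cmy-index-%7Bnow%2Fd%7D%3E
```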

The following example shows different forms of date math names and the final names they resolve to given the current time is 22nd March 2024 noon UTC.

Expression                                 Resolves to
<logstash-{now/d}>                         logstash-2024.03.22
<logstash-{now/M}>                         logstash-2024.03.01
<logstash-{now/M{yyyy.MM}}>                logstash-2024.03
<logstash-{now/M-1M{yyyy.MM}}>             logstash-2024.02
<logstash-{now/d{yyyy.MM.dd|+12:00}}>      logstash-2024.03.23

To use the characters { and } in the static part of a name template, escape them with a backslash \, for example:

  • <elastic\{ON\}-{now/M}> resolves to elastic{ON}-2024.03.01

The following example shows a search request that searches the Logstash indices for the past three days, assuming the indices use the default Logstash index name format, logstash-YYYY.MM.dd.

# GET /<logstash-{now/d-2d}>,<logstash-{now/d-1d}>,<logstash-{now/d}>/_search
GET /%3Clogstash-%7Bnow%2Fd-2d%7D%3E%2C%3Clogstash-%7Bnow%2Fd-1d%7D%3E%2C%3Clogstash-%7Bnow%2Fd%7D%3E/_search
{
  "query" : {
    "match": {
      "test": "data"
    }
  }
}
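The resolution of the simple <logstash-{now/d-Nd}> forms in this example can be sketched in Python. The helper below is hypothetical and only handles whole-day offsets with the default yyyy.MM.dd format, not the full date math grammar:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical helper: resolve <static-{now/d-Nd}> with the default
# yyyy.MM.dd format. Elasticsearch's real parser supports much more.
def resolve(static_name: str, days_back: int, now: datetime) -> str:
    day = (now - timedelta(days=days_back)).strftime("%Y.%m.%d")
    return f"{static_name}-{day}"

now = datetime(2024, 3, 22, 12, 0, tzinfo=timezone.utc)
print(",".join(resolve("logstash", d, now) for d in (2, 1, 0)))
# logstash-2024.03.20,logstash-2024.03.21,logstash-2024.03.22
```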

Most APIs that accept a <data-stream>, <index>, or <target> request path parameter also support multi-target syntax.

In multi-target syntax, you can use a comma-separated list to run a request on multiple resources, such as data streams, indices, or aliases: test1,test2,test3. You can also use glob-like wildcard (*) expressions to target resources that match a pattern: test*, *test, te*t, or *test*.

Targets can be excluded by prefixing with the - character. This applies to both concrete names and wildcard patterns. For example, test*,-test3 resolves to all resources that start with test except for the resource named test3. It is possible for an exclusion to exclude all resources: for example, test*,-test* resolves to an empty set. An exclusion affects targets listed before it and has no impact on targets listed after it. For example, test3*,-test3,test* resolves to all resources that start with test, including test3, because it is included by the last test* pattern.

A dash-prefixed (-) expression is always treated as an exclusion. The dash character must be followed by a concrete name or wildcard pattern. It is invalid to use the dash character on its own.
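The ordered add/remove behavior can be sketched with Python’s fnmatch. This is a hypothetical re-implementation for illustration, not how Elasticsearch implements it:

```python
from fnmatch import fnmatch

# Hypothetical sketch of multi-target resolution order: each expression
# adds matches; a '-' prefix removes matches chosen by earlier expressions.
def resolve_targets(expressions: str, existing: list[str]) -> list[str]:
    chosen: list[str] = []
    for expr in expressions.split(","):
        if expr.startswith("-"):
            chosen = [n for n in chosen if not fnmatch(n, expr[1:])]
        else:
            chosen += [n for n in existing if fnmatch(n, expr) and n not in chosen]
    return chosen

indices = ["test1", "test2", "test3"]
print(resolve_targets("test*,-test3", indices))          # ['test1', 'test2']
# An exclusion only affects targets listed before it:
print(resolve_targets("test3*,-test3,test*", indices))   # ['test1', 'test2', 'test3']
```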

In previous versions, dash-prefixed expressions were sometimes not treated as exclusions due to a bug. This bug is fixed in 9.3.

You can also exclude clusters from a list of clusters to search using the - character: remote*:*,-remote1:*,-remote4:* will search all clusters with an alias that starts with "remote" except for "remote1" and "remote4". Note that to exclude a cluster with this notation you must exclude all of its indices. Excluding a subset of indices on a remote cluster is currently not supported. For example, this will throw an exception: remote*:*,-remote1:logs*.

Multi-target APIs that can target indices support the following query string parameters:

ignore_unavailable
(Optional, Boolean) If false, the request returns an error if it targets a missing or closed index. Defaults to false.
allow_no_indices
(Optional, Boolean) If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
expand_wildcards
(Optional, string) Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are:
all
Match any data stream or index, including hidden ones.
open
Match open, non-hidden indices. Also matches any non-hidden data stream.
closed
Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
hidden
Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
none
Wildcard patterns are not accepted.

The default settings for the above parameters depend on the API being used.

Some multi-target APIs that can target indices also support the following query string parameter:

ignore_throttled

(Optional, Boolean) If true, concrete, expanded, or aliased indices are ignored when frozen. Defaults to true.

Deprecated in 7.16.0

Note

APIs with a single target, such as the get document API, do not support multi-target syntax.

For most APIs, wildcard expressions do not match hidden data streams and indices by default. To match hidden data streams and indices using a wildcard expression, you must specify the expand_wildcards query parameter.

Alternatively, querying an index pattern starting with a dot, such as .watcher_hist*, will match hidden indices by default. This is intended to mirror Unix file-globbing behavior and provide a smoother transition path to hidden indices.

You can create hidden data streams by setting data_stream.hidden to true in the stream’s matching index template. You can hide indices using the index.hidden index setting.

The backing indices for data streams are hidden automatically. Some features, such as machine learning, store information in hidden indices.

Global index templates that match all indices are not applied to hidden indices.

Elasticsearch modules and plugins can store configuration and state information in internal system indices. You should not directly access or modify system indices as they contain data essential to the operation of the system.

Important

Direct access to system indices is deprecated and will no longer be allowed in a future major version.

To view system indices within a cluster:

GET _cluster/state/metadata?filter_path=metadata.indices.*.system
Warning

When overwriting current cluster state, system indices should be restored as part of their feature state.

Some cluster-level APIs may operate on a subset of the nodes, which can be specified with node filters. For example, the task management, node stats, and node info APIs can all report results from a filtered set of nodes rather than from all nodes.

Node filters are written as a comma-separated list of individual filters, each of which adds or removes nodes from the chosen subset. Each filter can be one of the following:

  • _all, to add all nodes to the subset.
  • _local, to add the local node to the subset.
  • _master, to add the currently-elected master node to the subset.
  • a node ID or name, to add this node to the subset.
  • an IP address or hostname, to add all matching nodes to the subset.
  • a pattern, using * wildcards, which adds all nodes to the subset whose name, address, or hostname matches the pattern.
  • master:true, data:true, ingest:true, voting_only:true, ml:true, or coordinating_only:true, which respectively add to the subset all master-eligible nodes, all data nodes, all ingest nodes, all voting-only nodes, all machine learning nodes, and all coordinating-only nodes.
  • master:false, data:false, ingest:false, voting_only:false, ml:false, or coordinating_only:false, which respectively remove from the subset all master-eligible nodes, all data nodes, all ingest nodes, all voting-only nodes, all machine learning nodes, and all coordinating-only nodes.
  • a pair of patterns, using * wildcards, of the form attrname:attrvalue, which adds to the subset all nodes with a custom node attribute whose name and value match the respective patterns. Custom node attributes are configured by setting properties in the configuration file of the form node.attr.attrname: attrvalue.

Node filters run in the order in which they are given, which is important if using filters that remove nodes from the set. For example, _all,master:false means all the nodes except the master-eligible ones. master:false,_all means the same as _all because the _all filter runs after the master:false filter.

If no filters are given, the default is to select all nodes. If any filters are specified, they run starting with an empty chosen subset. This means that filters such as master:false, which remove nodes from the chosen subset, are only useful if they come after some other filters. When used on its own, master:false selects no nodes.
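This ordered evaluation can be sketched as follows. The helper below is hypothetical and handles only the _all and master filters for illustration; the real implementation supports the full filter list:

```python
# Hypothetical sketch of ordered node-filter evaluation: starting from an
# empty subset, each filter adds or removes nodes in the order given.
def select_nodes(filters: str, nodes: dict[str, bool]) -> set[str]:
    # `nodes` maps node name -> whether it is master-eligible
    chosen: set[str] = set()
    for f in filters.split(","):
        if f == "_all":
            chosen |= set(nodes)
        elif f == "master:true":
            chosen |= {n for n, m in nodes.items() if m}
        elif f == "master:false":
            chosen -= {n for n, m in nodes.items() if m}
    return chosen

cluster = {"node-1": True, "node-2": False, "node-3": False}
print(select_nodes("_all,master:false", cluster))  # {'node-2', 'node-3'}
print(select_nodes("master:false", cluster))       # set(): selects no nodes
```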

Here are some examples of the use of node filters with some cluster APIs:

# If no filters are given, the default is to select all nodes
GET /_nodes
# Explicitly select all nodes
GET /_nodes/_all
# Select just the local node
GET /_nodes/_local
# Select the elected master node
GET /_nodes/_master
# Select nodes by name, which can include wildcards
GET /_nodes/node_name_goes_here
GET /_nodes/node_name_goes_*
# Select nodes by address, which can include wildcards
GET /_nodes/10.0.0.3,10.0.0.4
GET /_nodes/10.0.0.*
# Select nodes by role
GET /_nodes/_all,master:false
GET /_nodes/data:true,ingest:true
GET /_nodes/coordinating_only:true
GET /_nodes/master:true,voting_only:false
# Select nodes by custom attribute
# (for example, with something like `node.attr.rack: 2` in the configuration file)
GET /_nodes/rack:2
GET /_nodes/ra*:2
GET /_nodes/ra*:2*

A data stream component is a logical grouping of indices that helps organize data inside a data stream. All data streams contain a data component by default. The data component comprises the data stream’s backing indices. When searching, managing, or indexing into a data stream, the data component is what you are interacting with by default.

Some data stream features are exposed as additional components alongside the data component. These other components consist of separate sets of backing indices, which store supplemental data independent of the data stream’s regular backing indices. An example is the failures component exposed by the data stream failure store feature, which captures documents that fail to be ingested in a separate set of backing indices on the data stream.

Some APIs that accept a <data-stream>, <index>, or <target> request path parameter also support selector syntax, which defines which component on a data stream the API should operate on. To use a selector, append it to the index or data stream name. Selectors can be combined with other index pattern syntax like date math and wildcards.

There are two selector suffixes supported by Elasticsearch APIs:

::data
Refers to a data stream’s backing indices containing regular data. Data streams always contain a data component.
::failures
Refers to the internal indices used for a data stream’s failure store.

As an example, the search, field capabilities, and index stats APIs can all report results from a different component rather than from the default data component.

# Search a data stream normally
GET my-data-stream/_search
# Search a data stream's failure data if present
GET my-data-stream::failures/_search
# Syntax can be combined with other index pattern syntax (wildcards, multi-target, date math, cross cluster search, etc)
GET logs-*::failures/_search
GET logs-*::data,logs-*::failures/_count
GET remote-cluster:logs-*-*::failures/_search
GET *::data,*::failures,-logs-rdbms-*::failures/_stats
GET <logs-{now/d}>::failures/_search

REST parameters (which, when using HTTP, map to HTTP URL parameters) follow the convention of using underscore casing.

For libraries that don’t accept a request body for non-POST requests, you can pass the request body as the source query string parameter instead. When using this method, the source_content_type parameter should also be passed with a media type value that indicates the format of the source, such as application/json.

Major version upgrades often include a number of breaking changes that impact how you interact with Elasticsearch. While we recommend that you monitor the deprecation logs and update applications before upgrading Elasticsearch, having to coordinate the necessary changes can be an impediment to upgrading.

You can enable an existing application to function without modification after an upgrade by including API compatibility headers, which tell Elasticsearch you are still using the previous version of the REST API. Using these headers allows the structure of requests and responses to remain the same; it does not guarantee the same behavior.

You set version compatibility on a per-request basis in the Content-Type and Accept headers. Setting compatible-with to the same major version as the version you’re running has no impact, but ensures that the request will still work after Elasticsearch is upgraded.

To tell Elasticsearch 8.0 you are using the 7.x request and response format, set compatible-with=7:

Content-Type: application/vnd.elasticsearch+json; compatible-with=7
Accept: application/vnd.elasticsearch+json; compatible-with=7
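In a client, these headers might be assembled like this. This is a sketch; the media type string is taken verbatim from the example above, and the helper name is illustrative:

```python
# Sketch: build compatibility headers asking the cluster to accept the
# previous major version's request format and respond in that format.
def compatibility_headers(major: int) -> dict[str, str]:
    media_type = f"application/vnd.elasticsearch+json; compatible-with={major}"
    return {"Content-Type": media_type, "Accept": media_type}

headers = compatibility_headers(7)
print(headers["Accept"])  # application/vnd.elasticsearch+json; compatible-with=7
```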

Elasticsearch APIs may respond with the HTTP 429 Too Many Requests status code, indicating that the cluster is too busy to handle the request. When this happens, consider retrying after a short delay. If the retry also receives a 429 Too Many Requests response, extend the delay by backing off exponentially before each subsequent retry.
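One way to implement such a policy is to precompute an exponentially growing delay schedule with a little jitter. The base delay, growth factor, and retry count below are illustrative choices, not values prescribed by Elasticsearch:

```python
import random

# Sketch of an exponential backoff schedule for retrying 429 responses.
def backoff_delays(base: float = 0.5, factor: float = 2.0, retries: int = 5):
    """Yield the delay (in seconds) to wait before each retry."""
    delay = base
    for _ in range(retries):
        # A little jitter avoids synchronized retries from many clients.
        yield delay + random.uniform(0, delay / 10)
        delay *= factor

delays = list(backoff_delays())  # roughly 0.5s, 1s, 2s, 4s, 8s plus jitter
```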

Many users use a proxy with URL-based access control to secure access to Elasticsearch data streams and indices. For multi-search, multi-get, and bulk requests, the user has the choice of specifying a data stream or index in the URL and on each individual request within the request body. This can make URL-based access control challenging.

To prevent the user from overriding the data stream or index specified in the URL, set rest.action.multi.allow_explicit_index to false in elasticsearch.yml.

This causes Elasticsearch to reject requests that explicitly specify a data stream or index in the request body.

All REST API parameters (both request parameters and JSON body) support providing boolean "false" as the value false and boolean "true" as the value true. All other values will raise an error.

When passing a numeric parameter in a request body, you may use a string containing the number instead of the native numeric type. For example:

POST /_search
{
  "size": "1000"
}

Integer-valued fields in a response body are described as integer (or occasionally long) in this manual, but there are generally no explicit bounds on such values. JSON, SMILE, CBOR, and YAML all permit arbitrarily large integer values. Do not assume that integer fields in a response body will always fit into a 32-bit signed integer.

Whenever the byte size of data needs to be specified, e.g. when setting a buffer size parameter, the value must specify the unit, like 10kb for 10 kilobytes. Note that these units use powers of 1024, so 1kb means 1024 bytes. The supported units are:

b
Bytes
kb
Kilobytes
mb
Megabytes
gb
Gigabytes
tb
Terabytes
pb
Petabytes
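A parser for these byte-size strings can be sketched as follows. This is a hypothetical helper, not an Elasticsearch API; note the powers-of-1024 semantics:

```python
# Hypothetical parser for Elasticsearch-style byte size values.
# Units are powers of 1024, so "1kb" is 1024 bytes.
BYTE_UNITS = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3,
              "tb": 1024**4, "pb": 1024**5}

def parse_byte_size(value: str) -> int:
    # Check longer suffixes first so "kb" is not misread as "b".
    for suffix in sorted(BYTE_UNITS, key=len, reverse=True):
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * BYTE_UNITS[suffix]
    raise ValueError(f"missing byte unit in {value!r}")

print(parse_byte_size("10kb"))  # 10240
```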

Wherever distances need to be specified, such as the distance parameter in the Geo-distance query, the default unit is meters if none is specified. Distances can be specified in other units, such as "1km" or "2mi" (2 miles).

The full list of units is:

Mile
mi or miles
Yard
yd or yards
Feet
ft or feet
Inch
in or inch
Kilometer
km or kilometers
Meter
m or meters
Centimeter
cm or centimeters
Millimeter
mm or millimeters
Nautical mile
NM, nmi, or nauticalmiles

Whenever durations need to be specified, e.g. for a timeout parameter, the duration must specify the unit, like 2d for 2 days. The supported units are:

d
Days
h
Hours
m
Minutes
s
Seconds
ms
Milliseconds
micros
Microseconds
nanos
Nanoseconds
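Durations can be parsed along the same lines. This hypothetical helper checks longer suffixes first so that "ms" is not misread as minutes:

```python
from datetime import timedelta

# Hypothetical parser for Elasticsearch-style time values such as "2d".
TIME_UNITS = {"d": 86400, "h": 3600, "m": 60, "s": 1,
              "ms": 1e-3, "micros": 1e-6, "nanos": 1e-9}

def parse_duration(value: str) -> timedelta:
    # Check longer suffixes ("micros", "nanos", "ms") before "m" and "s".
    for suffix in sorted(TIME_UNITS, key=len, reverse=True):
        if value.endswith(suffix):
            return timedelta(seconds=int(value[: -len(suffix)]) * TIME_UNITS[suffix])
    raise ValueError(f"missing time unit in {value!r}")

print(parse_duration("2d"))     # 2 days, 0:00:00
print(parse_duration("500ms"))  # 0:00:00.500000
```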

Unit-less quantities are those that don’t have a "unit" like "bytes" or "Hertz" or "meter" or "long tonne".

If one of these quantities is large, we’ll print it out like 10m for 10,000,000 or 7k for 7,000. We’ll still print 87 when we mean 87, though. These are the supported multipliers:

k
Kilo
m
Mega
g
Giga
t
Tera
p
Peta

