
Elasticsearch filter plugin

For other versions, see the Versioned plugin docs.

For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.

Search Elasticsearch for a previous log event and copy some fields from it into the current event. Below are two complete examples of how this filter might be used.

The first example uses the legacy query parameter where the user is limited to an Elasticsearch query_string. Whenever logstash receives an "end" event, it uses this elasticsearch filter to find the matching "start" event based on some operation identifier. Then it copies the @timestamp field from the "start" event into a new field on the "end" event. Finally, using a combination of the "date" filter and the "ruby" filter, we calculate the time duration in hours between the two events.

if [type] == "end" {   elasticsearch {      hosts => ["es-server"]      query => "type:start AND operation:%{[opid]}"      fields => { "@timestamp" => "started" }   }   date {      match => ["[started]", "ISO8601"]      target => "[started]"   }   ruby {      code => "event.set('duration_hrs', (event.get('@timestamp') - event.get('started')) / 3600)"   }}

The example below reproduces the above example but utilises the query_template. This query_template represents a full Elasticsearch query DSL and supports the standard Logstash field substitution syntax. The example below issues the same query as the first example but uses the template shown.

if [type] == "end" {      elasticsearch {         hosts => ["es-server"]         query_template => "template.json"         fields => { "@timestamp" => "started" }      }      date {         match => ["[started]", "ISO8601"]         target => "[started]"      }      ruby {         code => "event.set('duration_hrs', (event.get('@timestamp') - event.get('started')) / 3600)"      }}

template.json:

{  "size": 1,  "sort" : [ { "@timestamp" : "desc" } ],  "query": {    "query_string": {      "query": "type:start AND operation:%{[opid]}"    }  },  "_source": ["@timestamp"]}

As illustrated above, through the use of opid, fields from the Logstash events can be referenced within the template. The template will be populated per event prior to being used to query Elasticsearch.

Notice also that when you use query_template, the Logstash attributes result_size and sort will be ignored. They should be specified directly in the JSON template, as shown in the example above.

Authentication to a secure Elasticsearch cluster is possible using one of the following options:

  • basic authentication, using the user and password options
  • cloud authentication, using the cloud_auth option
  • API key authentication, using the api_key option

Authorization to a secure Elasticsearch cluster requires read permission at index level and monitoring permissions at cluster level. The monitoring permission at cluster level is necessary to perform periodic connectivity checks.
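
For example, a minimal lookup using basic authentication might look like the sketch below; the host, username, and password are placeholders, and the user is assumed to hold the privileges described above.

filter {
  elasticsearch {
    hosts    => ["https://es-server:9200"]
    user     => "logstash_lookup"   # placeholder username; needs index-level read and cluster-level monitoring privileges
    password => "changeme"          # placeholder password
    query    => "type:start AND operation:%{[opid]}"
    fields   => { "@timestamp" => "started" }
  }
}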

Elasticsearch Filter Configuration Options

This plugin supports the following configuration options plus the Common options described later.

Note

As of version 4.0.0 of this plugin, a number of previously deprecated settings related to SSL have been removed. Please see the Elasticsearch Filter Obsolete Configuration Options for more details.

Also see Common options for a list of options supported by all filter plugins.

aggregation_fields

  • Value type is hash
  • Default value is {}

Hash of aggregation names to copy from the Elasticsearch response into Logstash event fields.

Example:

filter {
  elasticsearch {
    aggregation_fields => {
      "my_agg_name" => "my_ls_field"
    }
  }
}

api_key

  • Value type is password
  • There is no default value for this setting.

Authenticate using Elasticsearch API key. Note that this option also requires enabling the ssl_enabled option.

Format is id:api_key, where id and api_key are as returned by the Elasticsearch Create API key API.
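
A sketch of API key authentication, assuming TLS is already configured; the api_key value shown is an illustrative id:api_key pair, not a real credential.

filter {
  elasticsearch {
    hosts       => ["https://es-server:9200"]
    ssl_enabled => true
    api_key     => "VuaCfGcBCdbkQm-e5aOx:ui2lp2axTNmsyakw9tvNnw"   # illustrative id:api_key value
    query       => "type:start AND operation:%{[opid]}"
  }
}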

ca_trusted_fingerprint

  • Value type is string, and must contain exactly 64 hexadecimal characters.
  • There is no default value for this setting.
  • Use of this option requires Logstash 8.3+

The SHA-256 fingerprint of an SSL Certificate Authority to trust, such as the autogenerated self-signed CA for an Elasticsearch cluster.

cloud_auth

  • Value type is password
  • There is no default value for this setting.

Cloud authentication string ("<username>:<password>" format) is an alternative for the user/password pair.

For more info, check out the Logstash-to-Cloud documentation.

cloud_id

  • Value type is string
  • There is no default value for this setting.

Cloud ID, from the Elastic Cloud web console. If set, hosts should not be used.

For more info, check out the Logstash-to-Cloud documentation.
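
Together with cloud_auth, a lookup against an Elastic Cloud deployment might be configured as in the sketch below; both values are placeholders.

filter {
  elasticsearch {
    cloud_id   => "my_deployment:abcdefghijklmnopqrstuvwxyz"   # placeholder Cloud ID from the Elastic Cloud console
    cloud_auth => "elastic:changeme"                           # placeholder <username>:<password>
    query      => "type:start AND operation:%{[opid]}"
  }
}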

docinfo_fields

  • Value type is hash
  • Default value is {}

Hash of docinfo fields to copy from the old event (found via Elasticsearch) into the new event.

Example:

filter {
  elasticsearch {
    docinfo_fields => {
      "_id" => "document_id"
      "_index" => "document_index"
    }
  }
}

enable_sort

  • Value type is boolean
  • Default value is true

Whether results should be sorted or not

fields

  • Value type is array
  • Default value is {}

An array of fields to copy from the old event (found via Elasticsearch) into the new event currently being processed.

In the following example, the values of @timestamp and event_id on the event found via Elasticsearch are copied to the current event’s started and start_id fields, respectively:

fields => {  "@timestamp" => "started"  "event_id" => "start_id"}

hosts

  • Value type is array
  • Default value is ["localhost:9200"]

List of elasticsearch hosts to use for querying.

index

  • Value type is string
  • Default value is ""

Comma-delimited list of index names to search; use _all or an empty string to perform the operation on all indices. Field substitution (e.g. index-name-%{date_field}) is available.
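
For example, the lookup could be restricted to a per-service index derived from the event; the index pattern and the [service] field here are purely illustrative.

filter {
  elasticsearch {
    hosts => ["es-server"]
    index => "logs-%{[service]}"   # illustrative: resolves to e.g. "logs-billing" for an event with [service] == "billing"
    query => "type:start AND operation:%{[opid]}"
  }
}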

password

  • Value type is password
  • There is no default value for this setting.

Basic Auth - password

proxy

  • Value type is uri
  • There is no default value for this setting.

Set the address of a forward HTTP proxy. An empty string is treated as if the proxy was not set, and is useful when using environment variables, e.g. proxy => '${LS_PROXY:}'.

query

  • Value type is string
  • There is no default value for this setting.

Elasticsearch query string. More information is available in the Elasticsearch query string documentation. Use either query or query_template.

query_template

  • Value type is string
  • There is no default value for this setting.

File path to an Elasticsearch query in DSL format. More information is available in the Elasticsearch query documentation. Use either query or query_template.

result_size

  • Value type is number
  • Default value is 1

How many results to return

retry_on_failure

  • Value type is number
  • Default value is 0 (retries disabled)

How many times to retry an individual failed request.

When enabled, retry requests that result in connection errors or an HTTP status code included in retry_on_status.

retry_on_status

  • Value type is array
  • Default value is an empty list []

Which HTTP status codes to consider for retries (in addition to connection errors) when using retry_on_failure.
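
A sketch combining both retry settings: up to three retries on connection errors and on the listed HTTP status codes (the codes are illustrative).

filter {
  elasticsearch {
    hosts            => ["es-server"]
    query            => "type:start AND operation:%{[opid]}"
    retry_on_failure => 3
    retry_on_status  => [500, 502, 503, 504]   # illustrative status codes to retry in addition to connection errors
  }
}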

sort

  • Value type is string
  • Default value is "@timestamp:desc"

Comma-delimited list of <field>:<direction> pairs that define the sort order.
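
For example, to pick the most recent match by a hypothetical sequence field rather than by @timestamp (remember that sort is ignored when query_template is used):

filter {
  elasticsearch {
    hosts => ["es-server"]
    query => "type:start AND operation:%{[opid]}"
    sort  => "sequence:desc"   # hypothetical field; the default is "@timestamp:desc"
  }
}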

ssl_certificate

  • Value type is path
  • There is no default value for this setting.

SSL certificate to use to authenticate the client. This certificate should be an OpenSSL-style X.509 certificate file.

Note

This setting can be used only if ssl_key is set.

ssl_certificate_authorities

  • Value type is a list of path
  • There is no default value for this setting

The .cer or .pem files to validate the server’s certificate.

Note

You cannot use this setting and ssl_truststore_path at the same time.

ssl_cipher_suites

  • Value type is a list of string
  • There is no default value for this setting

The list of cipher suites to use, listed by priorities. Supported cipher suites vary depending on the Java and protocol versions.

ssl_enabled

  • Value type is boolean
  • There is no default value for this setting.

Enable SSL/TLS secured communication to the Elasticsearch cluster. Leaving this unspecified will use whatever scheme is specified in the URLs listed in hosts or extracted from the cloud_id. If no explicit protocol is specified, plain HTTP will be used.
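
A minimal TLS setup might look like the sketch below; the CA file path is illustrative and assumes the cluster's certificate was signed by that CA.

filter {
  elasticsearch {
    hosts                       => ["https://es-server:9200"]
    ssl_enabled                 => true
    ssl_certificate_authorities => ["/etc/logstash/certs/ca.pem"]   # illustrative path to a PEM-encoded CA
    query                       => "type:start AND operation:%{[opid]}"
  }
}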

ssl_key

  • Value type is path
  • There is no default value for this setting.

OpenSSL-style RSA private key that corresponds to the ssl_certificate.

Note

This setting can be used only if ssl_certificate is set.
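
Used together, ssl_certificate and ssl_key enable client-certificate (mutual TLS) authentication; the paths below are illustrative.

filter {
  elasticsearch {
    hosts           => ["https://es-server:9200"]
    ssl_enabled     => true
    ssl_certificate => "/etc/logstash/certs/client.crt"   # illustrative client certificate path
    ssl_key         => "/etc/logstash/certs/client.key"   # illustrative private key path
    query           => "type:start AND operation:%{[opid]}"
  }
}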

ssl_keystore_password

  • Value type is password
  • There is no default value for this setting.

Set the keystore password

ssl_keystore_path

  • Value type is path
  • There is no default value for this setting.

The keystore used to present a certificate to the server. It can be either .jks or .p12.

Note

You cannot use this setting and ssl_certificate at the same time.

ssl_keystore_type

  • Value can be any of: jks, pkcs12
  • If not provided, the value will be inferred from the keystore filename.

The format of the keystore file. It must be either jks or pkcs12.
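
The keystore settings are the JKS/PKCS#12 alternative to ssl_certificate and ssl_key; the path and password below are illustrative.

filter {
  elasticsearch {
    hosts                 => ["https://es-server:9200"]
    ssl_enabled           => true
    ssl_keystore_path     => "/etc/logstash/certs/client-keystore.p12"   # illustrative keystore path
    ssl_keystore_password => "changeme"                                  # placeholder password
    ssl_keystore_type     => "pkcs12"                                    # optional; normally inferred from the filename
    query                 => "type:start AND operation:%{[opid]}"
  }
}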

ssl_supported_protocols

  • Value type is string
  • Allowed values are: 'TLSv1.1', 'TLSv1.2', 'TLSv1.3'
  • Default depends on the JDK being used. With up-to-date Logstash, the default is ['TLSv1.2', 'TLSv1.3']. 'TLSv1.1' is not considered secure and is only provided for legacy applications.

List of allowed SSL/TLS versions to use when establishing a connection to the Elasticsearch cluster.

For Java 8, 'TLSv1.3' is supported only since 8u262 (AdoptOpenJDK), but requires that you set the LS_JAVA_OPTS="-Djdk.tls.client.protocols=TLSv1.3" system property in Logstash.

Note

If you configure the plugin to use 'TLSv1.1' on any recent JVM, such as the one packaged with Logstash, the protocol is disabled by default and needs to be enabled manually by changing jdk.tls.disabledAlgorithms in the $JDK_HOME/conf/security/java.security configuration file. That is, TLSv1.1 needs to be removed from the list.
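
For instance, restricting the connection to TLS 1.3 only could be sketched as:

filter {
  elasticsearch {
    hosts                   => ["https://es-server:9200"]
    ssl_enabled             => true
    ssl_supported_protocols => ["TLSv1.3"]
    query                   => "type:start AND operation:%{[opid]}"
  }
}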

ssl_truststore_password

  • Value type is password
  • There is no default value for this setting.

Set the truststore password

ssl_truststore_path

  • Value type is path
  • There is no default value for this setting.

The truststore to validate the server’s certificate. It can be either .jks or .p12.

Note

You cannot use this setting and ssl_certificate_authorities at the same time.

ssl_truststore_type

  • Value can be any of: jks, pkcs12
  • If not provided, the value will be inferred from the truststore filename.

The format of the truststore file. It must be either jks or pkcs12.
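
The truststore settings are the JKS/PKCS#12 alternative to ssl_certificate_authorities; the path and password below are illustrative.

filter {
  elasticsearch {
    hosts                   => ["https://es-server:9200"]
    ssl_enabled             => true
    ssl_truststore_path     => "/etc/logstash/certs/truststore.jks"   # illustrative truststore path
    ssl_truststore_password => "changeme"                             # placeholder password
    query                   => "type:start AND operation:%{[opid]}"
  }
}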

ssl_verification_mode

  • Value can be any of: full, none
  • Default value is full

Defines how to verify the certificates presented by another party in the TLS connection:

full validates that the server certificate has an issue date that’s within the not_before and not_after dates; chains to a trusted Certificate Authority (CA); and has a hostname or IP address that matches the names within the certificate.

none performs no certificate validation.

Warning

Setting certificate verification to none disables many security benefits of SSL/TLS, which is very dangerous. For more information on disabling certificate verification please read https://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf

tag_on_failure

  • Value type is array
  • Default value is ["_elasticsearch_lookup_failure"]

Tags the event on failure to look up previous log event information. This can be used in later analysis.
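
Later stages can branch on that tag; for example, a sketch that drops events whose lookup failed, using the default tag value:

filter {
  elasticsearch {
    hosts  => ["es-server"]
    query  => "type:start AND operation:%{[opid]}"
    fields => { "@timestamp" => "started" }
  }
  if "_elasticsearch_lookup_failure" in [tags] {
    drop { }   # discard events that found no matching "start" document
  }
}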

user

  • Value type is string
  • There is no default value for this setting.

Basic Auth - username

Elasticsearch Filter Obsolete Configuration Options

Warning

As of version 4.0.0 of this plugin, some configuration options have been replaced. The plugin will fail to start if the configuration contains any of these obsolete options.

Setting              Replaced by
ca_file              ssl_certificate_authorities
keystore             ssl_keystore_path
keystore_password    ssl_keystore_password
ssl                  ssl_enabled

Common options

These configuration options are supported by all filter plugins:

add_field

  • Value type is hash
  • Default value is {}

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the %{field} syntax.

Example:

filter {
  elasticsearch {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}

# You can also add multiple fields at once:
filter {
  elasticsearch {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}

If the event has field "somefield" == "hello" this filter, on success, would add field foo_hello if it is present, with the value above and the %{host} piece replaced with that value from the event. The second example would also add a hardcoded field.

add_tag

  • Value type is array
  • Default value is []

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the %{field} syntax.

Example:

filter {
  elasticsearch {
    add_tag => [ "foo_%{somefield}" ]
  }
}

# You can also add multiple tags at once:
filter {
  elasticsearch {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}

If the event has field "somefield" == "hello" this filter, on success, would add a tag foo_hello (and the second example would of course add a taggedy_tag tag).

enable_metric

  • Value type is boolean
  • Default value is true

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

id

  • Value type is string
  • There is no default value for this setting.

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 elasticsearch filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

filter {
  elasticsearch {
    id => "ABC"
  }
}
Note

Variable substitution in the id field only supports environment variables and does not support the use of values from the secret store.

periodic_flush

  • Value type is boolean
  • Default value is false

Call the filter flush method at a regular interval. Optional.

remove_field

  • Value type is array
  • Default value is []

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

filter {
  elasticsearch {
    remove_field => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple fields at once:
filter {
  elasticsearch {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}

If the event has field "somefield" == "hello" this filter, on success, would remove the field with name foo_hello if it is present. The second example would remove an additional, non-dynamic field.

remove_tag

  • Value type is array
  • Default value is []

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the %{field} syntax.

Example:

filter {
  elasticsearch {
    remove_tag => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple tags at once:
filter {
  elasticsearch {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}

If the event has field "somefield" == "hello" this filter, on success, would remove the tag foo_hello if it is present. The second example would remove a sad, unwanted tag as well.


