
Elastic Integration filter plugin

For other versions, see the Versioned plugin docs.

For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.

Elastic Enterprise License

Use of this plugin requires an active Elastic Enterprise subscription.

Use this filter to process Elastic integrations powered by Elasticsearch Ingest Node in Logstash.

Extending Elastic integrations with Logstash

This plugin can help you take advantage of the extensive, built-in capabilities of Elastic Integrations—such as managing data collection, transformation, and visualization—and then use Logstash for additional data processing and output options. For more info about extending Elastic integrations with Logstash, check out Using Logstash with Elastic Integrations.

When you configure this filter to point to an Elasticsearch cluster, it detects which ingest pipeline (if any) should be executed for each event, using an explicitly-defined pipeline_name or auto-detecting the event's data-stream and its default pipeline.

It then loads that pipeline's definition from Elasticsearch and runs that pipeline inside Logstash without transmitting the event to Elasticsearch. Events that are successfully handled by their ingest pipeline will have [@metadata][target_ingest_pipeline] set to _none so that any downstream Elasticsearch output in the Logstash pipeline will avoid running the event's default pipeline again in Elasticsearch.

Note

Some multi-pipeline configurations such as logstash-to-logstash over http(s) do not maintain the state of [@metadata] fields. In these setups, you may need to explicitly configure your downstream pipeline's Elasticsearch output with pipeline => "_none" to avoid re-running the default pipeline.
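For illustration, a minimal sketch of such a downstream pipeline's output (all other settings omitted):

output {
  elasticsearch {
    # Skip the default ingest pipeline; this plugin already ran it upstream.
    pipeline => "_none"
  }
}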

Events that fail ingest pipeline processing will be tagged with _ingest_pipeline_failure, and their [@metadata][_ingest_pipeline_failure] will be populated with details as a key/value map.

  • This plugin requires Java 17 minimum with Logstash 8.x versions and Java 21 minimum with Logstash 9.x versions.

  • When you upgrade the Elastic Stack, upgrade Logstash (or this plugin specifically) before you upgrade Kibana. (Note that this requirement is a departure from the typical Elastic Stack installation order.)

    The Elasticsearch-Logstash-Kibana installation order ensures the best experience with Elastic Agent-managed pipelines, because each release of this plugin embeds functionality from a version of Elasticsearch Ingest Node that is compatible with the plugin version (major.minor).

Elastic Integrations are designed to work with data streams and ECS-compatible output. Be sure that these features are enabled in the output-elasticsearch plugin.

Check out the output-elasticsearch plugin docs for additional settings.

You will need to configure this plugin to connect to Elasticsearch, and may also need to provide local GeoIp databases.

filter {
  elastic_integration {
    cloud_id   => "YOUR_CLOUD_ID_HERE"
    cloud_auth => "YOUR_CLOUD_AUTH_HERE"
    geoip_database_directory => "/etc/your/geoip-databases"
  }
}

Read on for a guide to configuration, or jump to the complete list of configuration options.

This plugin communicates with Elasticsearch to identify which ingest pipeline should be run for a given event, and to retrieve the ingest pipeline definitions themselves. You must configure this plugin to point to Elasticsearch using exactly one of:

  • A Cloud Id (see cloud_id)
  • A list of one or more host URLs (see hosts)

Communication will be made securely over SSL unless you explicitly configure this plugin otherwise.

You may need to configure how this plugin establishes trust of the server that responds, and will likely need to configure how this plugin presents its own identity or credentials.

When communicating over SSL, this plugin fully-validates the proof-of-identity presented by Elasticsearch using the system trust store. You can provide an alternate source of trust with one of:

  • One or more PEM-formatted certificate authorities (see ssl_certificate_authorities)
  • A JKS- or PKCS12-formatted truststore (see ssl_truststore_path)

You can also configure which aspects of the proof-of-identity are verified (see ssl_verification_mode).

When communicating over SSL, you can also configure this plugin to present a certificate-based proof-of-identity to the Elasticsearch cluster it connects to using one of:

  • A PEM-encoded certificate and key (see ssl_certificate and ssl_key)
  • A JKS- or PKCS12-formatted keystore (see ssl_keystore_path)

You can configure this plugin to present authentication credentials to Elasticsearch in one of several ways:

  • An API key (see api_key)
  • Cloud authentication (see cloud_auth)
  • HTTP Basic authentication (see username and password)
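For example, a minimal sketch (host and key values are placeholders) that connects over SSL and authenticates with an API key:

filter {
  elastic_integration {
    # Placeholder host; SSL is implied by the https protocol.
    hosts   => ["https://elasticsearch.example.com:9200"]
    api_key => "ENCODED_API_KEY_HERE"
  }
}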

Note

Your request credentials are only as secure as the connection they are being passed over. They provide neither privacy nor secrecy on their own, and can easily be recovered by an adversary when SSL is disabled.

This plugin communicates with Elasticsearch to resolve events into pipeline definitions and needs to be configured with credentials with appropriate privileges to read from the relevant APIs. At the startup phase, this plugin confirms that the current user has sufficient privileges, including:

| Privilege name | Description |
| --- | --- |
| monitor | A read-only privilege for cluster operations such as cluster health or state. The plugin requires it when checking the Elasticsearch license. |
| read_pipeline | Read-only get and simulate access to ingest pipelines. Required when the plugin reads Elasticsearch ingest pipeline definitions. |
| manage_index_templates | All operations on index templates. Required when the plugin resolves the default pipeline based on an event's data stream name. |
Note

This plugin cannot determine whether an anonymous user has the required privileges when it connects to an Elasticsearch cluster that has security features disabled or when the user does not provide credentials. In that case, the plugin starts in an unsafe mode, reports a runtime error indicating that API permissions are insufficient, and prevents events from being processed by the ingest pipeline.

To avoid these issues, set up user authentication and ensure that security in Elasticsearch is enabled (default).
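For reference, a sketch of a role granting these cluster privileges, created from the Kibana Dev Tools console (the role name is illustrative):

PUT _security/role/logstash_elastic_integration
{
  "cluster": ["monitor", "read_pipeline", "manage_index_templates"]
}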

This filter can run Elasticsearch Ingest Node pipelines that are wholly comprised of the supported subset of processors. It has access to the Painless and Mustache scripting engines where applicable:

| Source | Processor | Caveats |
| --- | --- | --- |
| Ingest Common | append | none |
| | bytes | none |
| | community_id | none |
| | convert | none |
| | csv | none |
| | date | none |
| | date_index_name | none |
| | dissect | none |
| | dot_expander | none |
| | drop | none |
| | fail | none |
| | fingerprint | none |
| | foreach | none |
| | grok | none |
| | gsub | none |
| | html_strip | none |
| | join | none |
| | json | none |
| | kv | none |
| | lowercase | none |
| | network_direction | none |
| | pipeline | resolved pipeline must be wholly-composed of supported processors |
| | registered_domain | none |
| | remove | none |
| | rename | none |
| | reroute | none |
| | script | lang must be painless (default) |
| | set | none |
| | sort | none |
| | split | none |
| | trim | none |
| | uppercase | none |
| | uri_parts | none |
| | urldecode | none |
| | user_agent | side-loading a custom regex file is not supported; the processor will use the default user agent definitions as specified in the Elasticsearch processor definition |
| Redact | redact | none |
| GeoIp | geoip | requires MaxMind GeoIP2 databases, which may be provided by Logstash's Geoip Database Management OR configured using geoip_database_directory |

During execution the ingest pipeline works with a temporary mutable view of the Logstash event called an ingest document. This view contains all of the as-structured fields from the event with minimal type conversions.

It also contains additional metadata fields as required by ingest pipeline processors:

  • _version: a long-value integer equivalent to the event's @version, or a sensible default value of 1.
  • _ingest.timestamp: a ZonedDateTime equivalent to the event's @timestamp field

After execution completes the event is sanitized to ensure that Logstash-reserved fields have the expected shape, providing sensible defaults for any missing required fields. When an ingest pipeline has set a reserved field to a value that cannot be coerced, the value is made available in an alternate location on the event as described below.

| Logstash field | Type | Value |
| --- | --- | --- |
| @timestamp | Timestamp | First coercible value of the ingest document's @timestamp, event.created, _ingest.timestamp, or _now fields; or the current timestamp. When the ingest document has a value for @timestamp that cannot be coerced, it will be available in the event's _@timestamp field. |
| @version | String-encoded integer | First coercible value of the ingest document's @version or _version fields; or the current timestamp. When the ingest document has a value for @version that cannot be coerced, it will be available in the event's _@version field. |
| @metadata | key/value map | The ingest document's @metadata; or an empty map. When the ingest document has a value for @metadata that cannot be coerced, it will be available in the event's _@metadata field. |
| tags | a String or a list of Strings | The ingest document's tags. When the ingest document has a value for tags that cannot be coerced, it will be available in the event's _tags field. |

Additionally, these Elasticsearch IngestDocument metadata fields are made available on the resulting event if-and-only-if they were set during pipeline execution:

| Elasticsearch document metadata | Logstash field |
| --- | --- |
| _id | [@metadata][_ingest_document][id] |
| _index | [@metadata][_ingest_document][index] |
| _routing | [@metadata][_ingest_document][routing] |
| _version | [@metadata][_ingest_document][version] |
| _version_type | [@metadata][_ingest_document][version_type] |
| _ingest.timestamp | [@metadata][_ingest_document][timestamp] |
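For example, a sketch of a downstream output that honors an index set by the ingest pipeline, falling back to default behavior when none was set:

output {
  if [@metadata][_ingest_document][index] {
    elasticsearch {
      # Route to the index chosen by the ingest pipeline.
      index => "%{[@metadata][_ingest_document][index]}"
    }
  } else {
    elasticsearch { }
  }
}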

This plugin uses Elasticsearch to resolve pipeline names into their pipeline definitions. When configured without an explicit pipeline_name, or when a pipeline uses the Reroute Processor, it also uses Elasticsearch to establish mappings of data stream names to their respective default pipeline names.

It uses hit/miss caches to avoid querying Elasticsearch for every single event. It also works to update these cached mappings before they expire. The result is that when Elasticsearch is responsive this plugin is able to pick up changes quickly without impacting its own performance, and it can survive periods of Elasticsearch issues without interruption by continuing to use potentially-stale mappings or definitions.

To achieve this, mappings are cached for a maximum of 24 hours, and cached values are reloaded every 1 minute with the following effect:

  • when a reloaded mapping is non-empty and is the same as its already-cached value, its time-to-live is reset to ensure that subsequent events can continue using the confirmed-unchanged value
  • when a reloaded mapping is non-empty and is different from its previously-cached value, the entry is updated so that subsequent events will use the new value
  • when a reloaded mapping is newly empty, the previous non-empty mapping is replaced with a new empty entry so that subsequent events will use the empty value
  • when the reload of a mapping fails, this plugin emits a log warning but the existing cache entry is unchanged and gets closer to its expiry.

Troubleshooting ingest pipelines associated with data streams requires a pragmatic approach, involving thorough analysis and debugging techniques. To identify the root cause of issues with pipeline execution, you need to enable debug-level logging. The debug logs allow monitoring the plugin's behavior and help to detect issues. The plugin operates through the following phases: pipeline resolution, ingest pipeline creation, and pipeline execution.

Plugin does not resolve ingest pipeline associated with data stream

If you encounter No pipeline resolved for event ... messages in the debug logs, the error indicates that the plugin is unable to resolve the ingest pipeline from the data stream. To further diagnose and resolve the issue, verify whether the data stream's index settings include a default_pipeline or final_pipeline configuration. You can inspect the index settings by running a POST _index_template/_simulate_index/{type}-{dataset}-{namespace} query in the Kibana Dev Tools. Make sure to replace {type}-{dataset}-{namespace} with values corresponding to your data stream.
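For example, for a hypothetical logs-nginx.access-default data stream, the simulation request would be:

POST _index_template/_simulate_index/logs-nginx.access-default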

For further guidance, we recommend exploring Manage Elastic Agent Integrations, Ingest pipelines for Fleet, and Elastic Integrations topics.

Ingest pipeline does not exist

If you notice pipeline not found: ... messages in the debug logs or Pipeline {pipeline-name} could not be loaded warning messages, it indicates that the plugin has successfully resolved the ingest pipeline from default_pipeline or final_pipeline, but the specified pipeline does not exist. To confirm whether a pipeline exists, run a GET _ingest/pipeline/{ingest-pipeline-name} query in the Kibana Dev Tools console.
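For example, with a hypothetical pipeline name:

GET _ingest/pipeline/logs-my.custom-1.0.0

A 404 response confirms that the resolved pipeline is missing.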

For further guidance, we recommend exploring Manage Elastic Agent Integrations, Ingest pipelines for Fleet, and Elastic Integrations topics.

Plugin cannot create an ingest pipeline from the resolved configuration

If you encounter failed to create ingest pipeline {pipeline-name} from pipeline configuration error messages, it indicates that the plugin is unable to create an ingest pipeline from the resolved pipeline configuration. This issue typically arises when the pipeline configuration contains unsupported or invalid processor(s) that the plugin cannot execute. In such situations, the log output includes information about the issue. For example, the following error message indicates an inference processor in the pipeline configuration, which is not a supported processor type.

[2025-01-21T20:29:13,986][ERROR][co.elastic.logstash.filters.elasticintegration.IngestPipelineFactory][main] failed to create ingest pipeline logs-my.custom-1.0.0 from pipeline configuration
org.elasticsearch.ElasticsearchParseException: No processor type exists with name [inference]
    at org.elasticsearch.ingest.ConfigurationUtils.newConfigurationException(ConfigurationUtils.java:470) ~[logstash-filter-elastic_integration-0.1.16.jar:?]
    at org.elasticsearch.ingest.ConfigurationUtils.readProcessor(ConfigurationUtils.java:635)

For further guidance, we recommend exploring Manage Elastic Agent Integrations and Handling pipeline failures topics.

Errors during ingest pipeline execution

These errors typically fall into two main categories, each requiring specific investigation and resolution steps:

Logstash catches issues while running ingest pipelines

When errors occur during the execution of ingest pipelines, Logstash attaches the _ingest_pipeline_failure tag to the event, making it easier to identify and investigate problematic events. The detailed logs are available in the Logstash logs for your investigation. The root cause may depend on the configuration, environment, or integration you are running.
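As a sketch (the file destination is illustrative), failed events can be separated downstream by checking for this tag:

output {
  if "_ingest_pipeline_failure" in [tags] {
    # Keep failed events apart for later inspection.
    file { path => "/var/log/logstash/ingest-failures.log" }
  } else {
    elasticsearch { }
  }
}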

For further guidance, we recommend exploring Manage Elastic Agent Integrations and Handling pipeline failures topics.

Errors that occur internally in the ingest pipeline

If an ingest pipeline is configured with on_failure conditions, failures during pipeline execution are handled internally by the ingest pipeline itself and are not visible to Logstash. This means that errors are captured and processed within the pipeline, rather than being passed to Logstash for logging or tagging. To identify and analyze such cases, go to Kibana -> Stack Management -> Ingest pipelines and find the ingest pipeline you are using. Click on it and navigate to the Failure processors section. If processors are configured, they may specify which field contains the failure details. For example, the pipeline might store error information in an error.message field or a custom field defined in the Failure processors configuration. Go to the Kibana Dev Tools, search the data (GET {index-ingest-pipeline-is-writing}/_search), and look for the fields mentioned in the failure processors. Those fields contain the error details that help you analyze the root cause.

For further guidance, we recommend exploring Manage Elastic Agent Integrations and Handling pipeline failures topics.

This plugin supports the following configuration options plus the Common options described later.

api_key

  • Value type is password
  • There is no default value for this setting.

The encoded form of an API key that is used to authenticate this plugin to Elasticsearch.

cloud_auth

  • Value type is password
  • There is no default value for this setting.

Cloud authentication string ("<username>:<password>" format) is an alternative to the username/password pair, and can be obtained from the Elastic Cloud web console.

cloud_id

  • Value type is string
  • There is no default value for this setting.
  • Cannot be combined with ssl_enabled => false.

Cloud Id, from the Elastic Cloud web console.

When connecting with a Cloud Id, communication to Elasticsearch is secured with SSL.

For more details, check out the Logstash-to-Cloud documentation.

geoip_database_directory

  • Value type is path
  • There is no default value for this setting.

When running in a Logstash process that has Geoip Database Management enabled, integrations that use the Geoip Processor will use managed MaxMind databases by default. By using managed databases you accept and agree to the MaxMind EULA.

You may instead configure this plugin with the path to a local directory containing database files.

This plugin will discover all regular files with the .mmdb suffix in the provided directory, and make each available by its file name to the GeoIp processors in integration pipelines. It expects the files it finds to be in the MaxMind DB format with one of the following database types:

  • AnonymousIp
  • ASN
  • City
  • Country
  • ConnectionType
  • Domain
  • Enterprise
  • Isp
Note

Most integrations rely on databases being present named exactly:

  • GeoLite2-ASN.mmdb,
  • GeoLite2-City.mmdb, or
  • GeoLite2-Country.mmdb
hosts

  • Value type is a list of uris

  • There is no default value for this setting.

  • Constraints:

    • When any URL contains a protocol component, all URLs must have the same protocol as each other.
    • https-protocol hosts use HTTPS and cannot be combined with ssl_enabled => false.
    • http-protocol hosts use unsecured HTTP and cannot be combined with ssl_enabled => true.
    • When any URL omits a port component, the default 9200 is used.
    • When any URL contains a path component, all URLs must have the same path as each other.

A non-empty list of Elasticsearch hosts to connect to.

Examples:

  • "127.0.0.1"
  • ["127.0.0.1:9200","127.0.0.2:9200"]
  • ["http://127.0.0.1"]
  • ["https://127.0.0.1:9200"]
  • ["https://127.0.0.1:9200/subpath"] (If using a proxy on a subpath)

When connecting with a list of hosts, communication to Elasticsearch is secured with SSL unless configured otherwise.
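For example, a sketch with placeholder host, credentials, and certificate path, trusting a custom certificate authority:

filter {
  elastic_integration {
    hosts    => ["https://es1.example.com:9200"]
    username => "logstash_user"
    # Resolved from the environment at startup.
    password => "${ES_PASSWORD}"
    ssl_certificate_authorities => ["/etc/logstash/certs/ca.pem"]
  }
}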

Disabling SSL is dangerous

The security of this plugin relies on SSL to avoid leaking credentials and to avoid running illegitimate ingest pipeline definitions.

There are two ways to disable SSL:

  • explicitly, with ssl_enabled => false
  • implicitly, by providing http-protocol hosts

password

  • Value type is password
  • There is no default value for this setting.
  • Required when request auth is configured with username

A password when using HTTP Basic Authentication to connect to Elasticsearch.

pipeline_name

  • Value type is string
  • There is no default value for this setting.
  • When present, the event's initial pipeline will not be auto-detected from the event's data stream fields.
  • Value may be a sprintf-style template (see the sketch below); if any referenced fields cannot be resolved the event will not be routed to an ingest pipeline.
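For illustration only (the field reference is hypothetical):

filter {
  elastic_integration {
    cloud_id      => "YOUR_CLOUD_ID_HERE"
    cloud_auth    => "YOUR_CLOUD_AUTH_HERE"
    # Events without [@metadata][target_pipeline] will not be routed
    # to an ingest pipeline.
    pipeline_name => "%{[@metadata][target_pipeline]}"
  }
}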
ssl_certificate

  • Value type is path
  • There is no default value for this setting.
  • When present, ssl_key and ssl_key_passphrase are also required.
  • Cannot be combined with configurations that disable SSL

Path to a PEM-encoded certificate or certificate chain with which to identify this plugin to Elasticsearch.

ssl_certificate_authorities

  • Value type is a list of paths
  • There is no default value for this setting.
  • Cannot be combined with configurations that disable SSL
  • Cannot be combined with ssl_verification_mode => none.

One or more PEM-formatted files defining certificate authorities.

This setting can be used to override the system trust store for verifying the SSL certificate presented by Elasticsearch.

ssl_enabled

  • Value type is boolean
  • There is no default value for this setting.

Secure SSL communication to Elasticsearch is enabled unless:

  • it is explicitly disabled with ssl_enabled => false; OR
  • it is implicitly disabled by providing http-protocol hosts.

Specifying ssl_enabled => true can be a helpful redundant safeguard to ensure this plugin cannot be configured to use non-SSL communication.

ssl_key

  • Value type is path
  • There is no default value for this setting.
  • Required when connection identity is configured with ssl_certificate
  • Cannot be combined with configurations that disable SSL

A path to a PKCS8-formatted SSL certificate key.

ssl_keystore_password

  • Value type is password
  • There is no default value for this setting.
  • Required when connection identity is configured with ssl_keystore_path
  • Cannot be combined with configurations that disable SSL

Password for the ssl_keystore_path.

ssl_keystore_path

  • Value type is path
  • There is no default value for this setting.
  • When present, ssl_keystore_password is also required.
  • Cannot be combined with configurations that disable SSL

A path to a JKS- or PKCS12-formatted keystore with which to identify this plugin to Elasticsearch.

ssl_key_passphrase

  • Value type is password
  • There is no default value for this setting.
  • Required when connection identity is configured with ssl_certificate
  • Cannot be combined with configurations that disable SSL

A password or passphrase of the ssl_key.

ssl_truststore_path

  • Value type is path
  • There is no default value for this setting.
  • When present, ssl_truststore_password is required.
  • Cannot be combined with configurations that disable SSL
  • Cannot be combined with ssl_verification_mode => none.

A path to a JKS- or PKCS12-formatted truststore where trusted certificates are located.

This setting can be used to override the system trust store for verifying the SSL certificate presented by Elasticsearch.

ssl_truststore_password

  • Value type is password
  • There is no default value for this setting.
  • Required when connection trust is configured with ssl_truststore_path
  • Cannot be combined with configurations that disable SSL

Password for the ssl_truststore_path.

ssl_verification_mode

  • Value type is string
  • There is no default value for this setting.
  • Cannot be combined with configurations that disable SSL

Level of verification of the certificate provided by Elasticsearch.

SSL certificates presented by Elasticsearch are fully-validated by default.

  • Available modes:

    • none: performs no validation, implicitly trusting any server that this plugin connects to (insecure)
    • certificate: validates the server-provided certificate is signed by a trusted certificate authority and that the server can prove possession of its associated private key (less secure)
    • full (default): performs the same validations as certificate and also verifies that the provided certificate has an identity claim matching the server we are attempting to connect to (most secure)
username

  • Value type is string
  • There is no default value for this setting.
  • When present, password is also required.

A user name when using HTTP Basic Authentication to connect to Elasticsearch.

Common options

These configuration options are supported by all filter plugins:

add_field

  • Value type is hash
  • Default value is {}

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the %{field} syntax.

Example:

filter {
  elastic_integration {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}

# You can also add multiple fields at once:
filter {
  elastic_integration {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}

If the event has field "somefield" == "hello", this filter, on success, would add field foo_hello if it is present, with the value above and the %{host} piece replaced with that value from the event. The second example would also add a hardcoded field.

add_tag

  • Value type is array
  • Default value is []

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the %{field} syntax.

Example:

filter {
  elastic_integration {
    add_tag => [ "foo_%{somefield}" ]
  }
}

# You can also add multiple tags at once:
filter {
  elastic_integration {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}

If the event has field "somefield" == "hello", this filter, on success, would add a tag foo_hello (and the second example would of course add a taggedy_tag tag).

enable_metric

  • Value type is boolean
  • Default value is true

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

id

  • Value type is string
  • There is no default value for this setting.

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 elastic_integration filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

filter {
  elastic_integration {
    id => "ABC"
  }
}
Note

Variable substitution in the id field only supports environment variables and does not support the use of values from the secret store.
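For example (the environment variable name is illustrative):

filter {
  elastic_integration {
    # Substituted from the environment at startup.
    id => "elastic_integration_${PIPELINE_ENV}"
  }
}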

periodic_flush

  • Value type is boolean
  • Default value is false

Call the filter flush method at regular interval. Optional.

remove_field

  • Value type is array
  • Default value is []

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

filter {
  elastic_integration {
    remove_field => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple fields at once:
filter {
  elastic_integration {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}

If the event has field "somefield" == "hello", this filter, on success, would remove the field with name foo_hello if it is present. The second example would remove an additional, non-dynamic field.

remove_tag

  • Value type is array
  • Default value is []

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the %{field} syntax.

Example:

filter {
  elastic_integration {
    remove_tag => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple tags at once:
filter {
  elastic_integration {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}

If the event has field "somefield" == "hello", this filter, on success, would remove the tag foo_hello if it is present. The second example would remove a sad, unwanted tag as well.
