# amazon-kinesis-firehose-for-fluent-bit
NOTE: A new higher performance Fluent Bit Firehose plugin has been released. Check out our official guidance.

A Fluent Bit output plugin for Amazon Kinesis Data Firehose.
If you think you've found a potential security issue, please do not post it in the Issues. Instead, please follow the instructions here or email AWS security directly at aws-security@amazon.com.
Run `make` to build `./bin/firehose.so`. Then use with Fluent Bit:

```
./fluent-bit -e ./firehose.so -i cpu \
    -o firehose \
    -p "region=us-west-2" \
    -p "delivery_stream=example-stream"
```
For building Windows binaries, we need to install `mingw-w64` for cross-compilation. This can be done with:

```
sudo apt-get install -y gcc-multilib gcc-mingw-w64
```

After this step, run `make windows-release` to build `./bin/firehose.dll`. Then use with Fluent Bit on Windows:

```
./fluent-bit.exe -e ./firehose.dll -i dummy `
    -o firehose `
    -p "region=us-west-2" `
    -p "delivery_stream=example-stream"
```
* `region`: The region in which your Firehose delivery stream(s) is/are located.
* `delivery_stream`: The name of the delivery stream that you want log records sent to.
* `data_keys`: By default, the whole log record will be sent to Kinesis. If you specify a key name(s) with this option, then only those keys and values will be sent to Kinesis. For example, if you are using the Fluentd Docker log driver, you can specify `data_keys log` and only the log message will be sent to Kinesis. If you specify multiple keys, they should be comma delimited.
* `log_key`: By default, the whole log record will be sent to Firehose. If you specify a key name with this option, then only the value of that key will be sent to Firehose. For example, if you are using the Fluentd Docker log driver, you can specify `log_key log` and only the log message will be sent to Firehose.
* `role_arn`: ARN of an IAM role to assume (for cross account access).
* `endpoint`: Specify a custom endpoint for the Kinesis Firehose API.
* `sts_endpoint`: Specify a custom endpoint for the STS API; used to assume the custom role provided with `role_arn`.
* `time_key`: Add the timestamp to the record under this key. By default the timestamp from Fluent Bit will not be added to records sent to Kinesis. The timestamp inserted comes from the timestamp that Fluent Bit associates with the log record, which is set by the input that collected it. For example, if you are reading a log file with the `tail` input, then the timestamp for each log line/record can be obtained/parsed by using a Fluent Bit parser on the log line.
* `time_key_format`: `strftime` compliant format string for the timestamp; for example, `%Y-%m-%dT%H:%M:%S%z`. This option is used with `time_key`. You can also use `%L` for milliseconds and `%f` for microseconds. Remember that the `time_key` option only inserts the timestamp Fluent Bit has for each record into the record. So the record must have been collected with a sufficiently precise timestamp in order to use sub-second precision formatters. If you are using ECS FireLens, make sure you are running Amazon ECS Container Agent v1.42.0 or later, otherwise the timestamps associated with your stdout & stderr container logs will only have second precision.
* `replace_dots`: Replace dot characters in key names with the value of this option. For example, if you add `replace_dots _` in your config then all occurrences of `.` will be replaced with an underscore. By default, dots will not be replaced.
* `simple_aggregation`: Option to allow the plugin to send multiple log events in the same record, as long as the record does not exceed the maximum record size of 1 MiB. It joins together as many log records as possible into a single Firehose record and delimits them with a newline. This is good to enable if your destination supports aggregation, like S3. Defaults to `false`; set to `true` to enable this option.
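As an illustration of how several of these options fit together, here is a hypothetical output section (the region, stream name, and key names are placeholders for your own values):

```
[OUTPUT]
    Name firehose
    Match *
    region us-west-2
    delivery_stream example-stream
    data_keys log
    time_key timestamp
    time_key_format %Y-%m-%dT%H:%M:%S%z
```

With this configuration, only the `log` key of each record is forwarded, and the Fluent Bit timestamp is added under `timestamp` in the given `strftime` format.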
The plugin requires `firehose:PutRecordBatch` permissions.
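A minimal IAM policy granting that permission might look like the following sketch; the account ID and delivery stream ARN are placeholders, and you should scope the `Resource` to your own stream:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "firehose:PutRecordBatch",
      "Resource": "arn:aws:firehose:us-west-2:111122223333:deliverystream/example-stream"
    }
  ]
}
```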
This plugin uses the AWS SDK for Go and its default credential provider chain. If you are using the plugin on Amazon EC2, Amazon ECS, or Amazon EKS, the plugin will use your EC2 instance role, ECS Task role permissions, or EKS IAM Roles for Service Accounts for pods. The plugin can also retrieve credentials from a shared credentials file, or from the standard `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` environment variables.
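For example, static credentials can be supplied through the environment before launching Fluent Bit. The values below are obviously placeholders; on EC2, ECS, or EKS you should prefer the role-based mechanisms above:

```shell
# Placeholder credentials for illustration only; never commit real keys.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="exampleSecretKey"
# AWS_SESSION_TOKEN is only needed for temporary credentials:
export AWS_SESSION_TOKEN="exampleSessionToken"

# Fluent Bit, launched from this shell, inherits these variables, e.g.:
# ./fluent-bit -e ./firehose.so -i cpu -o firehose \
#     -p "region=us-west-2" -p "delivery_stream=example-stream"
```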
* `FLB_LOG_LEVEL`: Set the log level for the plugin. Valid values are: `debug`, `info`, and `error` (case insensitive). Default is `info`. Note: Setting the log level in the Fluent Bit configuration file using the Service key will not affect the plugin log level (because the plugin is external).
* `SEND_FAILURE_TIMEOUT`: Allows you to configure a timeout if the plugin cannot send logs to Firehose. The timeout is specified as a Golang duration, for example: `5m30s`. If the plugin has failed to make any progress for the given period of time, then it will exit and kill Fluent Bit. This is useful in scenarios where you want your logging solution to fail fast if it has been misconfigured (i.e. network or credentials have not been set up to allow it to send to Firehose).
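Both of these are plain environment variables; a sketch of setting them before launching Fluent Bit:

```shell
# Verbose plugin logging (valid values: debug, info, error; case insensitive).
export FLB_LOG_LEVEL=debug

# Fail fast: exit (and kill Fluent Bit) if no progress is made for
# 5 minutes 30 seconds. The value is a Golang duration string,
# so forms like "300s", "5m", or "5m30s" are all valid.
export SEND_FAILURE_TIMEOUT=5m30s
```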
In the summer of 2020, we released a new higher performance Kinesis Firehose plugin named `kinesis_firehose`. That plugin has almost all of the features of this older, lower performance, and less efficient plugin. Check out its documentation.
This plugin will continue to be supported. However, we are pausing development on it and will focus on the high performance version instead.
If the features of the higher performance plugin are sufficient for your use cases, please use it. It can achieve higher throughput and will consume less CPU and memory.
As time goes on we expect new features to be added to the C plugin only; however, this is determined on a case by case basis. There is a small feature gap between the two plugins. Please consult the C plugin documentation and this document for the features offered by each plugin.
For many users, you can simply replace the plugin name `firehose` with the new name `kinesis_firehose`. At the time of writing, the only feature missing from the high performance version is the `replace_dots` option. Check out its documentation.
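As a sketch of such a migration, assuming a stream that does not need `replace_dots`, the output section only changes in its `Name` line:

```
[OUTPUT]
    Name kinesis_firehose
    Match *
    region us-west-2
    delivery_stream my-stream
```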
Yes. The high performance plugin is written in C, and this plugin is written in Golang. We understand that Go is an easier language for amateur contributors to write code in; that is the primary reason we are continuing to maintain this repo. However, if you can write code in C, please consider contributing new features to the higher performance plugin.
This plugin has been tested with Fluent Bit 1.2.0+. It may not work with older Fluent Bit versions. We recommend using the latest version of Fluent Bit as it will contain the newest features and bug fixes.
```
[INPUT]
    Name forward
    Listen 0.0.0.0
    Port 24224

[OUTPUT]
    Name firehose
    Match *
    region us-west-2
    delivery_stream my-stream
    replace_dots _
```
We distribute a container image with Fluent Bit and these plugins: github.com/aws/aws-for-fluent-bit.
Our images are available in the Amazon ECR Public Gallery. You can download images with different tags with the following command:

```
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:<tag>
```
For example, you can pull the image with the latest version with:

```
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:latest
```
If you see errors for image pull limits, try logging into public ECR with your AWS credentials:

```
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
```
You can check the Amazon ECR Public official doc for more details.
You can use our SSM public parameters to find the Amazon ECR image URI in your region:

```
aws ssm get-parameters-by-path --path /aws/service/aws-for-fluent-bit/
```

For more information, see our docs.
This library is licensed under the Apache 2.0 License.