ECS Logging Python installation
Install the package with pip:

```shell
$ python -m pip install ecs-logging
```

ecs-logging-python has formatters for the standard library `logging` module and the `structlog` package.
```python
import logging
import ecs_logging

# Get the Logger
logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)

# Add an ECS formatter to the Handler
handler = logging.StreamHandler()
handler.setFormatter(ecs_logging.StdlibFormatter())
logger.addHandler(handler)

# Emit a log!
logger.debug("Example message!", extra={"http.request.method": "get"})
```

```json
{
    "@timestamp": "2020-03-20T18:11:37.895Z",
    "log.level": "debug",
    "message": "Example message!",
    "ecs": {
        "version": "1.6.0"
    },
    "http": {
        "request": {
            "method": "get"
        }
    },
    "log": {
        "logger": "app",
        "origin": {
            "file": {
                "line": 14,
                "name": "test.py"
            },
            "function": "func"
        },
        "original": "Example message!"
    }
}
```

You can exclude fields from being collected by using the `exclude_fields` option in the `StdlibFormatter` constructor:
```python
from ecs_logging import StdlibFormatter

formatter = StdlibFormatter(
    exclude_fields=[
        # You can specify individual fields to ignore:
        "log.original",
        # or you can also use prefixes to ignore
        # whole categories of fields:
        "process",
        "log.origin",
    ]
)
```

The `StdlibFormatter` automatically gathers `exc_info` into ECS `error.*` fields. If you'd like to control the number of stack frames that are included in `error.stack_trace`, you can use the `stack_trace_limit` parameter (by default, all frames are collected):
```python
from ecs_logging import StdlibFormatter

# Only collects 3 stack frames
formatter = StdlibFormatter(stack_trace_limit=3)

# Disable stack trace collection
formatter = StdlibFormatter(stack_trace_limit=0)
```

Added in 2.3.0
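To see roughly what this collection involves, here is a stdlib-only sketch; the `exc_to_ecs` helper is illustrative and is not the library's actual implementation:

```python
import traceback

def exc_to_ecs(exc, stack_trace_limit=None):
    """Collect an exception into ECS-style error.* fields --
    a rough stand-in for what StdlibFormatter does with exc_info."""
    return {
        "error": {
            "type": type(exc).__name__,
            "message": str(exc),
            # limit caps the number of stack frames, like stack_trace_limit
            "stack_trace": "".join(
                traceback.format_tb(exc.__traceback__, limit=stack_trace_limit)
            ),
        }
    }

try:
    1 / 0
except ZeroDivisionError as err:
    doc = exc_to_ecs(err, stack_trace_limit=3)

print(doc["error"]["type"], "-", doc["error"]["message"])
# ZeroDivisionError - division by zero
```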
By default, the `StdlibFormatter` escapes non-ASCII characters in the JSON output using Unicode escape sequences. If you want to preserve non-ASCII characters (such as Chinese, Japanese, emoji, etc.) in their original form, you can use the `ensure_ascii` parameter:
```python
from ecs_logging import StdlibFormatter

# Default behavior - non-ASCII characters are escaped
formatter = StdlibFormatter()
# Output: {"message":"Hello \u4e16\u754c"}

# Preserve non-ASCII characters
formatter = StdlibFormatter(ensure_ascii=False)
# Output: {"message":"Hello 世界"}
```

This is particularly useful for internationalized applications, or when you need logs containing non-ASCII characters to stay readable.
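This mirrors the `ensure_ascii` flag of the standard library's `json.dumps`, which you can try directly:

```python
import json

# Default: non-ASCII characters are escaped to \uXXXX sequences
escaped = json.dumps({"message": "Hello 世界"})
print(escaped)  # {"message": "Hello \u4e16\u754c"}

# ensure_ascii=False keeps the characters as-is (UTF-8 output)
raw = json.dumps({"message": "Hello 世界"}, ensure_ascii=False)
print(raw)  # {"message": "Hello 世界"}
```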
ecs-logging-python also ships a `StructlogFormatter` for the `structlog` package. Note that it should be the last processor in the list, as it handles both the conversion to JSON and the ECS field enrichment.
```python
import structlog
import ecs_logging

# Configure Structlog
structlog.configure(
    processors=[ecs_logging.StructlogFormatter()],
    wrapper_class=structlog.BoundLogger,
    context_class=dict,
    logger_factory=structlog.PrintLoggerFactory(),
)

# Get the Logger
logger = structlog.get_logger("app")

# Add additional context
logger = logger.bind(**{
    "http": {
        "version": "2",
        "request": {
            "method": "get",
            "bytes": 1337,
        },
    },
    "url": {
        "domain": "example.com",
        "path": "/",
        "port": 443,
        "scheme": "https",
        "registered_domain": "example.com",
        "top_level_domain": "com",
        "original": "https://example.com",
    }
})

# Emit a log!
logger.debug("Example message!")
```

Added in 2.3.0
Similar to `StdlibFormatter`, the `StructlogFormatter` also supports the `ensure_ascii` parameter to control whether non-ASCII characters are escaped:
```python
import structlog
import ecs_logging

# Configure Structlog with ensure_ascii=False to preserve non-ASCII characters
structlog.configure(
    processors=[ecs_logging.StructlogFormatter(ensure_ascii=False)],
    wrapper_class=structlog.BoundLogger,
    context_class=dict,
    logger_factory=structlog.PrintLoggerFactory(),
)

logger = structlog.get_logger("app")
logger.info("你好世界")  # Non-ASCII characters will be preserved in the output
```
The structlog example above emits:

```json
{
    "@timestamp": "2020-03-26T13:08:11.728Z",
    "ecs": {
        "version": "1.6.0"
    },
    "http": {
        "request": {
            "bytes": 1337,
            "method": "get"
        },
        "version": "2"
    },
    "log": {
        "level": "debug"
    },
    "message": "Example message!",
    "url": {
        "domain": "example.com",
        "original": "https://example.com",
        "path": "/",
        "port": 443,
        "registered_domain": "example.com",
        "scheme": "https",
        "top_level_domain": "com"
    }
}
```

ecs-logging-python supports automatically collecting ECS tracing fields from the Elastic APM Python agent in order to correlate logs to spans, transactions, and traces in Elastic APM.
You can also quickly turn on ECS-formatted logs in your Python app by setting `LOG_ECS_REFORMATTING=override` in the Elastic APM Python agent.
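As a sketch, the option can be supplied through an environment variable, assuming the `ELASTIC_APM_` prefix the agent uses for configuration settings:

```shell
# Agent config options are read from ELASTIC_APM_*-prefixed
# environment variables; this enables ECS reformatting of logs.
export ELASTIC_APM_LOG_ECS_REFORMATTING=override
```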
The best way to collect the logs once they are ECS-formatted is with Filebeat:
- Follow the Filebeat quick start.
- Add the following configuration to your `filebeat.yml` file.
For Filebeat 7.16+
```yaml
filebeat.inputs:
- type: filestream
  paths: /path/to/logs.json
  parsers:
    - ndjson:
        overwrite_keys: true
        add_error_key: true
        expand_keys: true

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```

- Use the filestream input to read lines from active log files.
- Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
- Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
- Filebeat will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure.
- Processors enhance your data. See processors to learn more.
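To illustrate what the de-dotting and expansion does to decoded JSON, here is a Python sketch; the `expand_keys` helper is illustrative, not Filebeat's implementation:

```python
def expand_keys(flat):
    """Expand dotted keys into a hierarchical object structure,
    roughly what Filebeat's expand_keys option does."""
    out = {}
    for key, value in flat.items():
        *parents, leaf = key.split(".")
        node = out
        for part in parents:
            # Create intermediate objects for each dotted segment
            node = node.setdefault(part, {})
        node[leaf] = value
    return out

print(expand_keys({"http.request.method": "get", "log.level": "debug"}))
# {'http': {'request': {'method': 'get'}}, 'log': {'level': 'debug'}}
```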
For Filebeat < 7.16
```yaml
filebeat.inputs:
- type: log
  paths: /path/to/logs.json
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  json.expand_keys: true

processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
- add_docker_metadata: ~
- add_kubernetes_metadata: ~
```

For Kubernetes

- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Kubernetes guide.
- Enable hints-based autodiscover (uncomment the corresponding section in `filebeat-kubernetes.yaml`).
- Add these annotations to the pods that log using ECS loggers. This will make sure the logs are parsed appropriately.
```yaml
annotations:
  co.elastic.logs/json.overwrite_keys: true
  co.elastic.logs/json.add_error_key: true
  co.elastic.logs/json.expand_keys: true
```

- Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
- Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
- Filebeat will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure.
For Docker

- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Docker guide.
- Enable hints-based autodiscover.
- Add these labels to your containers that log using ECS loggers. This will make sure the logs are parsed appropriately.
```yaml
labels:
  co.elastic.logs/json.overwrite_keys: true
  co.elastic.logs/json.add_error_key: true
  co.elastic.logs/json.expand_keys: true
```

- Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
- Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
- Filebeat will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure.
For more information, see the Filebeat reference.