Extreme cardinality protection doesn't take effect. #20493

Unanswered
celiawa123 asked this question in Q&A

Because pods are frequently created and deleted, the netdata-meta.db file grew to several GB after a period of time, and Netdata's memory usage became very large, around 10 GB.

So we tried the new [extreme cardinality protection](https://github.com/netdata/netdata/blob/eca9a8f25ccc968e54ee4f9b9117da8e042270e7/docs/extreme-cardinality-protection.md?plain=1) feature, hoping it would help decrease the metadata size and memory usage.

We upgraded Netdata to v2.5.0 with the configuration below in netdata.conf. After running

for i in {1..1000}; do kubectl apply -f deploy-nopv2.yml; sleep 5; kubectl delete -f deploy-nopv2.yml; done

there is no EXTREME CARDINALITY PROTECTION log, the metadata file keeps growing, and so does Netdata's memory.

# journalctl --namespace=netdata MESSAGE_ID=d1f59606dd4d41e3b217a0cfcae8e632
-- No entries --

[global]
    run as user = netdata
    # the default database size - 1 hour
    history = 3600
    # some defaults to run netdata with least priority
    process scheduling policy = idle
    OOM score = 1000
    stock config directory = /usr/lib/netdata/conf.d

[db]
    db = dbengine
    extreme cardinality protection = yes
    extreme cardinality keep instances = 100
    extreme cardinality min ephemerality = 10

[web]
    tls version = 1.3
    tls ciphers = TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256
    web files owner = root
    web files group = netdata
    # by default do not expose the netdata port
    bind to = *=dashboard|registry|badges|management|streaming|netdata.conf^SSL=optional
    ssl key = /etc/netdata/netdata.key
    ssl certificate = /etc/netdata/netdata.crt

[registry]
    enabled = no

[plugins]
    ebpf = no

[plugin:proc:diskspace]
    exclude space metrics on filesystems = nfs ceph

Replies: 2 comments 1 reply


Hi @ktsaou, could you help with this? Thank you very much.


ilyam8
Jun 16, 2025
Collaborator

Hi @celiawa123.

> We upgraded Netdata to v2.5.0 with the configuration below in netdata.conf. After running
> for i in {1..1000}; do kubectl apply -f deploy-nopv2.yml; sleep 5; kubectl delete -f deploy-nopv2.yml; done
> there is no EXTREME CARDINALITY PROTECTION log, the metadata file keeps growing, and so does Netdata's memory.

That is not how the protection works.

The mechanism kicks in during tier0 (high-resolution) database rotations (i.e., when the oldest tier0 samples are deleted).

@celiawa123

Hi @ilyam8,
Thanks a lot for your answer. After I set dbengine tier 0 retention time = 30s, I can now see the 'EXTREME CARDINALITY PROTECTION' log appearing.
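
For reference, a minimal sketch of how those settings could look together in the [db] section of netdata.conf, assuming dbengine mode; the 30s retention is only meant to force tier0 rotations quickly while testing, not as a production value:

```
[db]
    db = dbengine
    # test-only value: forces very frequent tier0 rotations so the protection can trigger;
    # keep the normal tier0 retention in production
    dbengine tier 0 retention time = 30s
    extreme cardinality protection = yes
    extreme cardinality keep instances = 100
    extreme cardinality min ephemerality = 10
```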

We're running Netdata in standalone mode across thousands of nodes. Currently, our db mode is set to RAM, keeping 1 hour of data in memory on each Netdata node, while saving long-term data to VictoriaMetrics.
As our Kubernetes clusters continue running, we're observing many nodes where Netdata consumes excessive memory.
Do you think we should switch to dbengine mode and leverage the EXTREME CARDINALITY PROTECTION mechanism to reduce Netdata's memory usage? Alternatively, would it be better to run a scheduled job that deletes the netdata-metadata.db file and restarts Netdata regularly?
I'd really appreciate your suggestions on the best approach. Thanks!
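
Purely to illustrate the second alternative described above (a scheduled cleanup plus restart), not as a recommendation, a hypothetical /etc/cron.d entry; the metadata file path and the use of systemd are assumptions that vary per install:

```
# hypothetical sketch only: weekly stop, metadata cleanup, restart
# /var/cache/netdata is assumed as the cache directory; adjust for your install
0 3 * * 0  root  systemctl stop netdata && rm -f /var/cache/netdata/netdata-meta.db && systemctl start netdata
```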

Category: Q&A
Labels: None yet
2 participants: @celiawa123, @ilyam8
