[DEBIAN] Disk and memory usage in percents #21253

Unanswered

sudo-nitz asked this question in Q&A

Hi!

I'm using Debian 12.

I installed Netdata directly from APT.

I want to measure three metrics, all of them in percent:

  1. RAM usage as:
    (total - available) / total * 100
    where `total` is `MemTotal` and `available` is `MemAvailable` from `/proc/meminfo`.
  2. Disk usage of root (`/`).
  3. Disk usage of a mounted device (`/dev/sda1`). Is this possible using labels or a PARTUUID?

How can I do that? Please help.
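For reference, the three numbers can be computed directly from the same kernel interfaces Netdata reads. A minimal Python sketch; the PARTUUID helper and its `/dev/disk/by-partuuid` lookup are illustrative assumptions, not something the original post provides:

```python
import os
import shutil

def ram_used_percent(meminfo_path="/proc/meminfo"):
    """(MemTotal - MemAvailable) / MemTotal * 100, as in item 1."""
    fields = {}
    with open(meminfo_path) as f:
        for line in f:
            key, value = line.split(":", 1)
            fields[key] = int(value.split()[0])  # values are in kB
    total = fields["MemTotal"]
    available = fields["MemAvailable"]
    return (total - available) / total * 100

def disk_used_percent(path="/"):
    """Space usage of the filesystem mounted at `path`, in percent."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def mount_point_of_partuuid(partuuid):
    """Resolve a PARTUUID to its device node, then find where that
    device is mounted by scanning /proc/self/mounts."""
    device = os.path.realpath(f"/dev/disk/by-partuuid/{partuuid}")
    with open("/proc/self/mounts") as f:
        for line in f:
            dev, mnt = line.split()[:2]
            if dev == device:
                return mnt
    return None
```

`ram_used_percent()` follows the exact formula from item 1; for item 3, resolving the PARTUUID symlink yields the device node (e.g. `/dev/sda1`), whose mount point can then be passed to `disk_used_percent()`.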

My configuration file:

```
# netdata configuration
#
# You can download the latest version of this file, using:
#
#  wget -O /etc/netdata/netdata.conf http://localhost:19999/netdata.conf
# or
#  curl -o /etc/netdata/netdata.conf http://localhost:19999/netdata.conf
#
# You can uncomment and change any of the options below.
# The value shown in the commented settings, is the default value.
#
# global netdata configuration

[global]
        run as user = netdata
        # option 'web files owner' is not used.
        web files owner = root
        # option 'web files group' is not used.
        web files group = root
        # glibc malloc arena max for plugins = 1
        # glibc malloc arena max for netdata = 1
        # libuv worker threads = 16
        hostname = GITHUB
        # host access prefix =
        # enable metric correlations = yes
        # metric correlations method = ks2
        timezone = Europe/Warsaw
        # OOM score = -900
        # process scheduling policy = batch
        # process nice level = 19
        # pthread stack size = 8388608

[db]
        update every = 1
        mode = dbengine
        dbengine page cache with malloc = yes
        dbengine page cache size MB = 32
        dbengine disk space MB = 512
        # dbengine multihost disk space MB = 256
        memory deduplication (ksm) = yes
        cleanup obsolete charts after secs = 3600
        gap when lost iterations above = 1
        # enable replication = yes
        # seconds to replicate = 86400
        # seconds per replication step = 600
        cleanup orphan hosts after secs = 3600
        storage tiers = 2
        # dbengine page fetch timeout secs = 3
        # dbengine page fetch retries = 3
        # dbengine page descriptors in file mapped memory = no
        dbengine tier 1 page cache size MB = 16
        dbengine tier 1 multihost disk space MB = 128
        dbengine tier 1 update every iterations = 60
        dbengine tier 1 backfill = new
        dbengine tier 2 page cache size MB = 8
        dbengine tier 2 multihost disk space MB = 384
        dbengine tier 2 update every iterations = 600
        dbengine tier 2 backfill = new
        delete obsolete charts files = yes
        delete orphan hosts files = yes
        enable zero metrics = no
        # replication threads = 1
        dbengine pages per extent = 64

[directories]
        # config = /etc/netdata
        # stock config = /usr/lib/netdata/conf.d
        # log = /var/log/netdata
        # web = /usr/share/netdata/web
        # cache = /var/cache/netdata
        # lib = /var/lib/netdata
        # home = /var/lib/netdata
        # lock = /var/lib/netdata/lock
        # plugins = "/usr/lib/netdata/plugins.d" "/etc/netdata/custom-plugins.d"
        # registry = /var/lib/netdata/registry
        # health config = /etc/netdata/health.d
        # stock health config = /usr/lib/netdata/conf.d/health.d

[logs]
        # debug flags = 0x0000000000000000
        # debug = /var/log/netdata/debug.log
        # error = /var/log/netdata/error.log
        # access = /var/log/netdata/access.log
        # health = /var/log/netdata/health.log
        # facility = daemon
        # errors flood protection period = 1200
        # errors to trigger flood protection = 200

[environment variables]
        # PATH = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin
        # PYTHONPATH =
        # TZ = :/etc/localtime

[host labels]
        # name = value

[sqlite]
        # auto vacuum = INCREMENTAL
        # synchronous = NORMAL
        # journal mode = WAL
        # temp store = MEMORY
        # journal size limit = 16777216
        # cache size = -2000

[health]
        # silencers file = /var/lib/netdata/health.silencers.json
        # enabled = yes
        # default repeat warning = never
        # default repeat critical = never
        # in memory max health log entries = 1000
        # script to execute on alarm = /usr/lib/netdata/plugins.d/alarm-notify.sh
        # enable stock health configuration = yes
        # run at least every seconds = 10
        # postpone alarms during hibernation for seconds = 60
        # rotate log every lines = 2000
        # is ephemeral = no
        # has unstable connection = no

[web]
        bind to = 0.0.0.0
        # ssl key = /etc/netdata/ssl/key.pem
        # ssl certificate = /etc/netdata/ssl/cert.pem
        # tls version = 1.3
        # tls ciphers = none
        # ses max window = 15
        # des max window = 15
        # mode = static-threaded
        # listen backlog = 4096
        # default port = 19999
        # disconnect idle clients after seconds = 60
        # timeout for first request = 60
        # accept a streaming request every seconds = 0
        # respect do not track policy = no
        # x-frame-options response header =
        # allow connections from = localhost *
        # allow connections by dns = heuristic
        # allow dashboard from = localhost *
        # allow dashboard by dns = heuristic
        # allow badges from = *
        # allow badges by dns = heuristic
        # allow streaming from = *
        # allow streaming by dns = heuristic
        # allow netdata.conf from = localhost fd* 10.* 192.168.* 172.16.* 172.17.* 172.18.* 172.19.* 172.20.* 172.21.* 172.22.* 172.23.* 172.24.* 172.25.* 172.26.* 172.27.* 172.28.* 172.29.* 172.30.* 172.31.* UNKNOWN
        # allow netdata.conf by dns = no
        # allow management from = localhost
        # allow management by dns = heuristic
        # enable gzip compression = yes
        # gzip compression strategy = default
        # gzip compression level = 3
        # web server threads = 4
        # web server max sockets = 16384
        # custom dashboard_info.js =

[registry]
        # enabled = no
        # netdata unique id file = /var/lib/netdata/registry/netdata.public.unique.id
        # registry db file = /var/lib/netdata/registry/registry.db
        # registry log file = /var/lib/netdata/registry/registry-log.db
        # registry save db every new entries = 1000000
        # registry expire idle persons days = 365
        # registry domain =
        # registry to announce = https://registry.my-netdata.io
        # registry hostname = LIPA-WIREHOLE
        # verify browser cookies support = yes
        # enable cookies SameSite and Secure = yes
        # max URL length = 1024
        # max URL name length = 50
        # netdata management api key file = /var/lib/netdata/netdata.api.key
        # allow from = *
        # allow by dns = heuristic

[global statistics]
        # update every = 1

[plugins]
        timex = no
        idlejitter = no
        netdata monitoring = no
        tc = no
        diskspace = no
        # proc = yes
        cgroups = no
        # enable running new plugins = yes
        # check for new plugins every = 60
        slabinfo = no
        apps = no
        statsd = no
        cups = no
        perf = no
        nfacct = no
        python.d = no
        charts.d = no
        fping = no
        ioping = no

[statsd]
        # update every (flushInterval) = 1
        # udp messages to process at once = 10
        # create private charts for metrics matching = *
        # max private charts hard limit = 1000
        # private charts history = 3600
        # decimal detail = 1000
        # disconnect idle tcp clients after seconds = 600
        # private charts hidden = no
        # histograms and timers percentile (percentThreshold) = 95.00000
        # dictionaries max unique dimensions = 200
        # add dimension for number of events received = no
        # gaps on gauges (deleteGauges) = no
        # gaps on counters (deleteCounters) = no
        # gaps on meters (deleteMeters) = no
        # gaps on sets (deleteSets) = no
        # gaps on histograms (deleteHistograms) = no
        # gaps on timers (deleteTimers) = no
        # gaps on dictionaries (deleteDictionaries) = no
        # statsd server max TCP sockets = 16384
        # listen backlog = 4096
        # default port = 8125
        # bind to = udp:localhost tcp:localhost

[plugin:timex]
        # update every = 10
        # clock synchronization state = yes
        # time offset = yes

[plugin:idlejitter]
        # loop time in ms = 20

[plugin:apps]
        # update every = 1
        # command options =

[plugin:cups]
        # update every = 1
        # command options =

[plugin:perf]
        # update every = 1
        # command options =

[plugin:nfacct]
        # update every = 1
        # command options =

[plugin:python.d]
        # update every = 1
        # command options =

[plugin:charts.d]
        # update every = 1
        # command options =

[plugin:fping]
        # update every = 1
        # command options =

[plugin:ioping]
        # update every = 1
        # command options =

[plugin:tc]
        # script to run to get tc values = /usr/lib/netdata/plugins.d/tc-qos-helper.sh
        # enable new interfaces detected at runtime = yes
        # enable traffic charts for all interfaces = auto
        # enable packets charts for all interfaces = auto
        # enable dropped charts for all interfaces = auto
        # enable tokens charts for all interfaces = no
        # enable ctokens charts for all interfaces = no
        # enable show all classes and qdiscs for all interfaces = no
        # qos for eth0 = yes
        # traffic chart for eth0 = auto
        # packets chart for eth0 = auto
        # dropped packets chart for eth0 = auto
        # tokens chart for eth0 = no
        # ctokens chart for eth0 = no
        # show all classes for eth0 = no
        # cleanup unused classes every = 120

[plugin:cgroups]
        # update every = 1
        # check for new cgroups every = 10
        # use unified cgroups = auto
        # containers priority = 40000
        # enable cpuacct stat (total CPU) = auto
        # enable cpuacct usage (per core CPU) = auto
        # enable cpuacct cpu throttling = yes
        # enable cpuacct cpu shares = no
        # enable memory = auto
        # enable detailed memory = auto
        # enable memory limits fail count = auto
        # enable swap memory = auto
        # enable blkio bandwidth = auto
        # enable blkio operations = auto
        # enable blkio throttle bandwidth = auto
        # enable blkio throttle operations = auto
        # enable blkio queued operations = auto
        # enable blkio merged operations = auto
        # enable cpu pressure = auto
        # enable io some pressure = auto
        # enable io full pressure = auto
        # enable memory some pressure = auto
        # enable memory full pressure = auto
        # recheck zero blkio every iterations = 10
        # recheck zero memory failcnt every iterations = 10
        # recheck zero detailed memory every iterations = 10
        # enable systemd services = yes
        # enable systemd services detailed memory = no
        # report used memory = yes
        # path to unified cgroups = /sys/fs/cgroup
        # max cgroups to allow = 1000
        # max cgroups depth to monitor = 0
        # enable by default cgroups matching =  !*/init.scope  !/system.slice/run-*.scope  *.scope  /machine.slice/*.service  */kubepods/pod*/*  */kubepods/*/pod*/*  */*-kubepods-pod*/*  */*-kubepods-*-pod*/*  !*kubepods* !*kubelet*  !*/vcpu*  !*/emulator  !*.mount  !*.partition  !*.service  !*.socket  !*.slice  !*.swap  !*.user  !/  !/docker  !*/libvirt  !/lxc  !/lxc/*/*  !/lxc.monitor*  !/lxc.pivot  !/lxc.payload  !/machine  !/qemu  !/system  !/systemd  !/user  *
        # enable by default cgroups names matching =  *
        # search for cgroups in subpaths matching =  !*/init.scope  !*-qemu  !*.libvirt-qemu  !/init.scope  !/system  !/systemd  !/user  !/user.slice  !/lxc/*/*  !/lxc.monitor  !/lxc.payload/*/*  !/lxc.payload.*  *
        # script to get cgroup names = /usr/lib/netdata/plugins.d/cgroup-name.sh
        # script to get cgroup network interfaces = /usr/lib/netdata/plugins.d/cgroup-network
        # run script to rename cgroups matching =  !/  !*.mount  !*.socket  !*.partition  /machine.slice/*.service  !*.service  !*.slice  !*.swap  !*.user  !init.scope  !*.scope/vcpu*  !*.scope/emulator  *.scope  *docker*  *lxc*  *qemu*  */kubepods/pod*/*  */kubepods/*/pod*/*  */*-kubepods-pod*/*  */*-kubepods-*-pod*/*  !*kubepods* !*kubelet*  *.libvirt-qemu  *
        # cgroups to match as systemd services =  !/system.slice/*/*.service  /system.slice/*.service

[plugin:proc]
        # /proc/net/dev = yes
        /proc/pagetypeinfo = no
        # /proc/stat = yes
        /proc/uptime = no
        /proc/loadavg = no
        /proc/sys/kernel/random/entropy_avail = no
        /proc/pressure = no
        /proc/interrupts = no
        /proc/softirqs = no
        /proc/vmstat = no
        /proc/meminfo = no
        /sys/kernel/mm/ksm = no
        /sys/block/zram = no
        /sys/devices/system/edac/mc = no
        /sys/devices/system/node = no
        /proc/net/wireless = no
        /proc/net/sockstat = no
        /proc/net/sockstat6 = no
        /proc/net/netstat = no
        /proc/net/sctp/snmp = no
        /proc/net/softnet_stat = no
        /proc/net/ip_vs/stats = no
        /sys/class/infiniband = no
        /proc/net/stat/conntrack = no
        /proc/net/stat/synproxy = no
        /proc/diskstats = no
        /proc/mdstat = no
        /proc/net/rpc/nfsd = no
        /proc/net/rpc/nfs = no
        /proc/spl/kstat/zfs/arcstats = no
        /proc/spl/kstat/zfs/pool/state = no
        /sys/fs/btrfs = no
        ipc = no
        /sys/class/power_supply = no

[plugin:proc:diskspace]
        # remove charts of unmounted disks = yes
        # update every = 1
        # check for new mount points every = 15
        # exclude space metrics on paths = /proc/* /sys/* /var/run/user/* /run/user/* /snap/* /var/lib/docker/*
        # exclude space metrics on filesystems = *gvfs *gluster* *s3fs *ipfs *davfs2 *httpfs *sshfs *gdfs *moosefs fusectl autofs
        # space usage for all disks = auto
        # inodes usage for all disks = auto

[plugin:proc:/proc/stat]
        # cpu utilization = yes
        per cpu core utilization = no
        cpu interrupts = no
        context switches = no
        processes started = no
        processes running = no
        keep per core files open = no
        keep cpuidle files open = no
        core_throttle_count = no
        package_throttle_count = no
        cpu frequency = no
        cpu idle states = no
        # core_throttle_count filename to monitor = /sys/devices/system/cpu/%s/thermal_throttle/core_throttle_count
        # package_throttle_count filename to monitor = /sys/devices/system/cpu/%s/thermal_throttle/package_throttle_count
        # scaling_cur_freq filename to monitor = /sys/devices/system/cpu/%s/cpufreq/scaling_cur_freq
        # time_in_state filename to monitor = /sys/devices/system/cpu/%s/cpufreq/stats/time_in_state
        # schedstat filename to monitor = /proc/schedstat
        # cpuidle name filename to monitor = /sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/name
        # cpuidle time filename to monitor = /sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/time
        # filename to monitor = /proc/stat

[plugin:proc:/proc/uptime]
        # filename to monitor = /proc/uptime

[plugin:proc:/proc/loadavg]
        # filename to monitor = /proc/loadavg
        # enable load average = yes
        # enable total processes = yes

[plugin:proc:/proc/sys/kernel/random/entropy_avail]
        # filename to monitor = /proc/sys/kernel/random/entropy_avail

[plugin:proc:/proc/pressure]
        # base path of pressure metrics = /proc/pressure
        # enable cpu some pressure = yes
        # enable cpu full pressure = yes
        # enable memory some pressure = yes
        # enable memory full pressure = yes
        # enable io some pressure = yes
        # enable io full pressure = yes

[plugin:proc:/proc/interrupts]
        # interrupts per core = auto
        # filename to monitor = /proc/interrupts

[plugin:proc:/proc/softirqs]
        # interrupts per core = auto
        # filename to monitor = /proc/softirqs

[plugin:proc:/proc/vmstat]
        # filename to monitor = /proc/vmstat
        # swap i/o = auto
        # disk i/o = yes
        # memory page faults = yes
        # out of memory kills = yes
        # system-wide numa metric summary = auto

[plugin:proc:/sys/devices/system/node]
        # directory to monitor = /sys/devices/system/node

[plugin:proc:/proc/meminfo]
        # system ram = yes
        # system swap = auto
        # hardware corrupted ECC = auto
        # committed memory = yes
        # writeback memory = yes
        # kernel memory = yes
        # slab memory = yes
        # hugepages = auto
        # transparent hugepages = auto
        # filename to monitor = /proc/meminfo

[plugin:proc:/sys/kernel/mm/ksm]
        # /sys/kernel/mm/ksm/pages_shared = /sys/kernel/mm/ksm/pages_shared
        # /sys/kernel/mm/ksm/pages_sharing = /sys/kernel/mm/ksm/pages_sharing
        # /sys/kernel/mm/ksm/pages_unshared = /sys/kernel/mm/ksm/pages_unshared
        # /sys/kernel/mm/ksm/pages_volatile = /sys/kernel/mm/ksm/pages_volatile

[plugin:proc:/sys/devices/system/edac/mc]
        # directory to monitor = /sys/devices/system/edac/mc

[plugin:proc:/proc/net/wireless]
        # filename to monitor = /proc/net/wireless
        # status for all interfaces = auto
        # quality for all interfaces = auto
        # discarded packets for all interfaces = auto
        # missed beacon for all interface = auto

[plugin:proc:/proc/net/sockstat]
        # ipv4 sockets = auto
        # ipv4 TCP sockets = auto
        # ipv4 TCP memory = auto
        # ipv4 UDP sockets = auto
        # ipv4 UDP memory = auto
        # ipv4 UDPLITE sockets = auto
        # ipv4 RAW sockets = auto
        # ipv4 FRAG sockets = auto
        # ipv4 FRAG memory = auto
        # update constants every = 60
        # filename to monitor = /proc/net/sockstat

[plugin:proc:/proc/net/sockstat6]
        # ipv6 TCP sockets = auto
        # ipv6 UDP sockets = auto
        # ipv6 UDPLITE sockets = auto
        # ipv6 RAW sockets = auto
        # ipv6 FRAG sockets = auto
        # filename to monitor = /proc/net/sockstat6

[plugin:proc:/proc/net/netstat]
        # bandwidth = auto
        # input errors = auto
        # multicast bandwidth = auto
        # broadcast bandwidth = auto
        # multicast packets = auto
        # broadcast packets = auto
        # ECN packets = auto
        # TCP reorders = auto
        # TCP SYN cookies = auto
        # TCP out-of-order queue = auto
        # TCP connection aborts = auto
        # TCP memory pressures = auto
        # TCP SYN queue = auto
        # TCP accept queue = auto
        # filename to monitor = /proc/net/netstat

[plugin:proc:/proc/net/snmp]
        # ipv4 packets = auto
        # ipv4 fragments sent = auto
        # ipv4 fragments assembly = auto
        # ipv4 errors = auto
        # ipv4 TCP connections = auto
        # ipv4 TCP packets = auto
        # ipv4 TCP errors = auto
        # ipv4 TCP opens = auto
        # ipv4 TCP handshake issues = auto
        # ipv4 UDP packets = auto
        # ipv4 UDP errors = auto
        # ipv4 ICMP packets = auto
        # ipv4 ICMP messages = auto
        # ipv4 UDPLite packets = auto
        # filename to monitor = /proc/net/snmp

[plugin:proc:/proc/net/snmp6]
        # ipv6 packets = auto
        # ipv6 fragments sent = auto
        # ipv6 fragments assembly = auto
        # ipv6 errors = auto
        # ipv6 UDP packets = auto
        # ipv6 UDP errors = auto
        # ipv6 UDPlite packets = auto
        # ipv6 UDPlite errors = auto
        # bandwidth = auto
        # multicast bandwidth = auto
        # broadcast bandwidth = auto
        # multicast packets = auto
        # icmp = auto
        # icmp redirects = auto
        # icmp errors = auto
        # icmp echos = auto
        # icmp group membership = auto
        # icmp router = auto
        # icmp neighbor = auto
        # icmp mldv2 = auto
        # icmp types = auto
        # ect = auto
        # filename to monitor = /proc/net/snmp6

[plugin:proc:/proc/net/sctp/snmp]
        # established associations = auto
        # association transitions = auto
        # fragmentation = auto
        # packets = auto
        # packet errors = auto
        # chunk types = auto
        # filename to monitor = /proc/net/sctp/snmp

[plugin:proc:/proc/net/softnet_stat]
        # softnet_stat per core = yes
        # filename to monitor = /proc/net/softnet_stat

[plugin:proc:/proc/net/ip_vs_stats]
        # IPVS bandwidth = yes
        # IPVS connections = yes
        # IPVS packets = yes
        # filename to monitor = /proc/net/ip_vs_stats

[plugin:proc:/sys/class/infiniband]
        # dirname to monitor = /sys/class/infiniband
        # bandwidth counters = yes
        # packets counters = yes
        # errors counters = yes
        # hardware packets counters = auto
        # hardware errors counters = auto
        # monitor only active ports = auto
        # disable by default interfaces matching =
        # refresh ports state every seconds = 30

[plugin:proc:/proc/net/stat/nf_conntrack]
        # filename to monitor = /proc/net/stat/nf_conntrack
        # netfilter new connections = no
        # netfilter connection changes = no
        # netfilter connection expectations = no
        # netfilter connection searches = no
        # netfilter errors = no
        # netfilter connections = yes

[plugin:proc:/proc/sys/net/netfilter/nf_conntrack_max]
        # filename to monitor = /proc/sys/net/netfilter/nf_conntrack_max
        # read every seconds = 10

[plugin:proc:/proc/sys/net/netfilter/nf_conntrack_count]
        # filename to monitor = /proc/sys/net/netfilter/nf_conntrack_count

[plugin:proc:/proc/net/stat/synproxy]
        # SYNPROXY cookies = auto
        # SYNPROXY SYN received = auto
        # SYNPROXY connections reopened = auto
        # filename to monitor = /proc/net/stat/synproxy

[plugin:proc:/proc/diskstats]
        # enable new disks detected at runtime = yes
        # performance metrics for physical disks = auto
        # performance metrics for virtual disks = auto
        # performance metrics for partitions = no
        # bandwidth for all disks = auto
        # operations for all disks = auto
        # merged operations for all disks = auto
        # i/o time for all disks = auto
        # queued operations for all disks = auto
        # utilization percentage for all disks = auto
        # extended operations for all disks = auto
        # backlog for all disks = auto
        # bcache for all disks = auto
        # bcache priority stats update every = 0
        # remove charts of removed disks = yes
        # path to get block device = /sys/block/%s
        # path to get block device bcache = /sys/block/%s/bcache
        # path to get virtual block device = /sys/devices/virtual/block/%s
        # path to get block device infos = /sys/dev/block/%lu:%lu/%s
        # path to device mapper = /dev/mapper
        # path to /dev/disk/by-label = /dev/disk/by-label
        # path to /dev/disk/by-id = /dev/disk/by-id
        # path to /dev/vx/dsk = /dev/vx/dsk
        # name disks by id = no
        # preferred disk ids = *
        # exclude disks = loop* ram*
        # filename to monitor = /proc/diskstats
        # performance metrics for disks with major 179 = yes
        # performance metrics for disks with major 8 = yes

[plugin:proc:/proc/diskstats:ram0]
        # enable = no

[plugin:proc:/proc/diskstats:ram1]
        # enable = no

[plugin:proc:/proc/diskstats:ram2]
        # enable = no

[plugin:proc:/proc/diskstats:ram3]
        # enable = no

[plugin:proc:/proc/diskstats:ram4]
        # enable = no

[plugin:proc:/proc/diskstats:ram5]
        # enable = no

[plugin:proc:/proc/diskstats:ram6]
        # enable = no

[plugin:proc:/proc/diskstats:ram7]
        # enable = no

[plugin:proc:/proc/diskstats:ram8]
        # enable = no

[plugin:proc:/proc/diskstats:ram9]
        # enable = no

[plugin:proc:/proc/diskstats:ram10]
        # enable = no

[plugin:proc:/proc/diskstats:ram11]
        # enable = no

[plugin:proc:/proc/diskstats:ram12]
        # enable = no

[plugin:proc:/proc/diskstats:ram13]
        # enable = no

[plugin:proc:/proc/diskstats:ram14]
        # enable = no

[plugin:proc:/proc/diskstats:ram15]
        # enable = no

[plugin:proc:/proc/diskstats:loop0]
        # enable = no

[plugin:proc:/proc/diskstats:loop1]
        # enable = no

[plugin:proc:/proc/diskstats:loop2]
        # enable = no

[plugin:proc:/proc/diskstats:loop3]
        # enable = no

[plugin:proc:/proc/diskstats:loop4]
        # enable = no

[plugin:proc:/proc/diskstats:loop5]
        # enable = no

[plugin:proc:/proc/diskstats:loop6]
        # enable = no

[plugin:proc:/proc/diskstats:loop7]
        # enable = no

[plugin:proc:/proc/diskstats:mmcblk0]
        # enable = yes
        # enable performance metrics = yes
        # bandwidth = auto
        # operations = auto
        # merged operations = auto
        # i/o time = auto
        # queued operations = auto
        # utilization percentage = auto
        # extended operations = auto
        # backlog = auto

[plugin:proc:/proc/diskstats:bootfs]
        # enable = yes
        # enable performance metrics = no
        # bandwidth = no
        # operations = no
        # merged operations = no
        # i/o time = no
        # queued operations = no
        # utilization percentage = no
        # extended operations = no
        # backlog = no

[plugin:proc:/proc/diskstats:rootfs]
        # enable = yes
        # enable performance metrics = no
        # bandwidth = no
        # operations = no
        # merged operations = no
        # i/o time = no
        # queued operations = no
        # utilization percentage = no
        # extended operations = no
        # backlog = no

[plugin:proc:/proc/diskstats:sda]
        # enable = yes
        # enable performance metrics = yes
        # bandwidth = auto
        # operations = auto
        # merged operations = auto
        # i/o time = auto
        # queued operations = auto
        # utilization percentage = auto
        # extended operations = auto
        # backlog = auto

[plugin:proc:/proc/diskstats:sdb]
        # enable = yes
        # enable performance metrics = yes
        # bandwidth = auto
        # operations = auto
        # merged operations = auto
        # i/o time = auto
        # queued operations = auto
        # utilization percentage = auto
        # extended operations = auto
        # backlog = auto

[plugin:proc:/proc/diskstats:sdc]
        # enable = yes
        # enable performance metrics = yes
        # bandwidth = auto
        # operations = auto
        # merged operations = auto
        # i/o time = auto
        # queued operations = auto
        # utilization percentage = auto
        # extended operations = auto
        # backlog = auto

[plugin:proc:/proc/diskstats:backup]
        # enable = yes
        # enable performance metrics = no
        # bandwidth = no
        # operations = no
        # merged operations = no
        # i/o time = no
        # queued operations = no
        # utilization percentage = no
        # extended operations = no
        # backlog = no

[plugin:proc:/proc/diskstats:backup2]
        # enable = yes
        # enable performance metrics = no
        # bandwidth = no
        # operations = no
        # merged operations = no
        # i/o time = no
        # queued operations = no
        # utilization percentage = no
        # extended operations = no
        # backlog = no

[plugin:proc:/proc/diskstats:backup3]
        # enable = yes
        # enable performance metrics = no
        # bandwidth = no
        # operations = no
        # merged operations = no
        # i/o time = no
        # queued operations = no
        # utilization percentage = no
        # extended operations = no
        # backlog = no

[plugin:proc:/proc/mdstat]
        # faulty devices = yes
        # nonredundant arrays availability = yes
        # mismatch count = auto
        # disk stats = yes
        # operation status = yes
        # make charts obsolete = yes
        # filename to monitor = /proc/mdstat
        # mismatch_cnt filename to monitor = /sys/block/%s/md/mismatch_cnt

[plugin:proc:/proc/net/rpc/nfsd]
        # filename to monitor = /proc/net/rpc/nfsd
        # read cache = yes
        # file handles = yes
        # I/O = yes
        # threads = yes
        # network = yes
        # rpc = yes
        # NFS v2 procedures = yes
        # NFS v3 procedures = yes
        # NFS v4 procedures = yes
        # NFS v4 operations = yes

[plugin:proc:/proc/net/rpc/nfs]
        # filename to monitor = /proc/net/rpc/nfs
        # network = yes
        # rpc = yes
        # NFS v2 procedures = yes
        # NFS v3 procedures = yes
        # NFS v4 procedures = yes

[plugin:proc:/proc/spl/kstat/zfs/arcstats]
        # filename to monitor = /proc/spl/kstat/zfs/arcstats

[plugin:proc:/proc/spl/kstat/zfs]
        # directory to monitor = /proc/spl/kstat/zfs

[plugin:proc:/sys/fs/btrfs]
        # path to monitor = /sys/fs/btrfs
        # check for btrfs changes every = 60
        # physical disks allocation = auto
        # data allocation = auto
        # metadata allocation = auto
        # system allocation = auto

[plugin:proc:ipc]
        # message queues = yes
        # semaphore totals = yes
        # shared memory totals = yes
        # msg filename to monitor = /proc/sysvipc/msg
        # shm filename to monitor = /proc/sysvipc/shm
        # max dimensions in memory allowed = 50

[plugin:proc:/sys/class/power_supply]
        # battery capacity = yes
        # battery charge = no
        # battery energy = no
        # power supply voltage = no
        # keep files open = auto
        # directory to monitor = /sys/class/power_supply

[plugin:proc:/proc/net/dev]
        # filename to monitor = /proc/net/dev
        # path to get virtual interfaces = /sys/devices/virtual/net/%s
        # path to get net device speed = /sys/class/net/%s/speed
        # path to get net device duplex = /sys/class/net/%s/duplex
        # path to get net device operstate = /sys/class/net/%s/operstate
        # path to get net device carrier = /sys/class/net/%s/carrier
        # path to get net device mtu = /sys/class/net/%s/mtu
        # enable new interfaces detected at runtime = auto
        # bandwidth for all interfaces = auto
        # packets for all interfaces = auto
        # errors for all interfaces = auto
        # drops for all interfaces = auto
        # fifo for all interfaces = auto
        # compressed packets for all interfaces = auto
        # frames, collisions, carrier counters for all interfaces = auto
        # speed for all interfaces = auto
        # duplex for all interfaces = auto
        # operstate for all interfaces = auto
        # carrier for all interfaces = auto
        # mtu for all interfaces = auto
        # disable by default interfaces matching = lo fireqos* *-ifb fwpr* fwbr* fwln*

[plugin:proc:/proc/net/dev:lo]
        # enabled = no
        # virtual = yes

[plugin:proc:/proc/net/dev:eth0]
        # enabled = yes
        # virtual = no
        # bandwidth = auto
        packets = no
        errors = no
        drops = no
        fifo = no
        compressed = no
        events = no
        speed = no
        duplex = no
        operstate = no
        carrier = no
        mtu = no

[plugin:proc:/proc/net/dev:wlan0]
        # enabled = yes
        # virtual = no
        # bandwidth = auto
        packets = no
        errors = no
        drops = no
        fifo = no
        compressed = no
        events = no
        speed = no
        duplex = no
        operstate = no
        carrier = no
        mtu = no

[plugin:proc:/proc/net/dev:wg0]
        # enabled = yes
        # virtual = yes
        # bandwidth = auto
        packets = no
        errors = no
        drops = no
        fifo = no
        compressed = no
        events = no
        speed = no
        duplex = no
        operstate = no
        carrier = no
        mtu = no

[plugin:proc:diskspace:/]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/dev]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/dev/shm]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/run]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/run/lock]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/run/user/1000]
        # space usage = no
        # inodes usage = no

[plugin:proc:diskspace:/sys/kernel/security]
        # space usage = no
        # inodes usage = no

[plugin:proc:diskspace:/sys/fs/cgroup]
        # space usage = no
        # inodes usage = no

[plugin:proc:diskspace:/sys/fs/pstore]
        # space usage = no
        # inodes usage = no

[plugin:proc:diskspace:/sys/fs/bpf]
        # space usage = no
        # inodes usage = no

[plugin:proc:diskspace:/sys/kernel/tracing]
        # space usage = no
        # inodes usage = no

[plugin:proc:diskspace:/sys/kernel/config]
        # space usage = no
        # inodes usage = no

[plugin:proc:diskspace:/proc/fs/nfsd]
        # space usage = no
        # inodes usage = no

[plugin:proc:diskspace:/boot/firmware]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/media/BACKUP]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/media/BACKUP2]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/media/BACKUP3]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/mnt/LIPA-NAS]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/run/credentials]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/run/netdata]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/run/systemd/incoming]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/run/user]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/tmp]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/var/cache/netdata]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/var/lib/netdata]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/var/log]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/var/spool]
        # space usage = auto
        # inodes usage = auto

[plugin:proc:diskspace:/var/tmp]
        # space usage = auto
        # inodes usage = auto
```
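One observation about the file itself: it currently switches off the two collectors that would produce these charts. `[plugins]` sets `diskspace = no` (per-mount-point space usage) and `[plugin:proc]` sets `/proc/meminfo = no` (the system RAM chart). A minimal fragment to re-enable them, using only option names that already appear in the file, might look like:

```
[plugins]
        diskspace = yes

[plugin:proc]
        /proc/meminfo = yes
```

As the `[plugin:proc:diskspace:/...]` sections in the file suggest, the diskspace collector keys its charts by mount point, so a device such as `/dev/sda1` would likely be addressed through the path it is mounted on rather than by label or PARTUUID.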

Replies: 0 comments

Category
Q&A
Labels
None yet
1 participant
@sudo-nitz
