Create graph for CPU frequency grouped by processors/cores using Grafana, InfluxDB and collectd

This article shows how to make a graph of a Linux machine’s CPU frequency changes. The cpufreq plugin gathers the CPU frequency of all the virtual processors, a.k.a. cores. In general, this module collects the same simple data the Linux kernel exposes in /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq. The purpose of this article is to make a graph showing CPU frequency changes, which may be a hint about the CPU load on the system.

[Figure: example CPU cores frequency chart]

The Linux machine is using collectd to gather CPU frequency statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and organize the graphs in panels and dashboards. Check out the previous articles on the subject for how to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Create graph for Linux CPU usage using Grafana, InfluxDB and collectd.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the cpufreq collectd plugin

  • The collectd plugin cpufreq official page – https://collectd.org/wiki/index.php/Plugin:CPUFreq
  • The CPUFreq plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_cpufreq There are no options for this plugin, at present.
  • to enable the CPUFreq plugin, load the plugin with the load directive in /etc/collectd.conf
    LoadPlugin cpufreq
    
  • The CPUFreq plugin collects data every 10 seconds.
  • cpufreq_value – a single Gauge value – a metric whose value can go up and down. It is used to store the current CPU (or core) frequency, so there are multiple gauge values with different tags for the different cores (processors). See the query sketch after this list.
    tag key  | tag value            | description
    host     | server hostname      | The name of the source from which this measurement was recorded.
    type     | cpufreq              | The current frequency of the current processor or core.
    instance | processors/cores ids | The processors (or cores), numbered from 0 to N.
  • A Gauge value – a metric whose value can go up and down. More on the topic – Data sources.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures or the amount of memory used, frequencies, etc.

  • To cross-check the value, the user can read /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq, replacing the * with an integer number like 0, 1, 2, etc. (or keeping the glob to list all of them):
    [root@srv ~]# cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq
    4161945
    4184149
    4062907
    4044231
    4183620
    4107467
    4187644
    4167952
    

    The values read from scaling_cur_freq are in kHz, one line for each virtual processor shown in /proc/cpuinfo under a Linux system.
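
With the tags above, a simple InfluxQL query is enough to chart the frequency per core. Here is a minimal sketch using the influx command-line client; the database name collectd and the host tag value srv are assumptions of this example and should be adjusted to the actual setup:

[root@srv ~]# influx -database 'collectd'
> SELECT mean("value") FROM "cpufreq_value" WHERE "host" = 'srv' AND time > now() - 1h GROUP BY time(10s), "instance"

Grouping by the instance tag produces one series per processor (core), which is exactly what the per-core frequency graph needs.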

Keep on reading!

Create graph for swap usage using Grafana, InfluxDB and collectd

This article shows how to make a graph showing a Linux machine’s swap memory usage. The swap plugin gathers swap memory utilization – cached, free, and used. In general, this module collects simple data for the swap memory like the Linux command free does. The purpose of this article is to make a graph showing swap memory usage and consumption.

[Figure: example swap usage]

The Linux machine is using collectd to gather the swap memory statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and organize the graphs in panels and dashboards. Check out the previous articles on the subject for how to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Create graph for Linux CPU usage using Grafana, InfluxDB and collectd.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the Swap collectd plugin

  • The collectd plugin Swap official page – https://collectd.org/wiki/index.php/Plugin:Swap
  • The Swap plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_swap This article relies on the plugin’s default options:
    <Plugin swap>
    #       ReportByDevice false
    #       ReportBytes true
    #       ValuesAbsolute true
    #       ValuesPercentage false
    #       ReportIO true
    </Plugin>
    

    With the default options, all swap devices are reported aggregated as a single device (not per device), and absolute values in bytes are used rather than percentages.

  • to enable the Swap plugin, load the plugin with the load directive in /etc/collectd.conf
    LoadPlugin swap
    
  • The Swap plugin collects data every 10 seconds.
  • swap_value – includes a single Gauge value under the swap type – a metric whose value can go up and down. It is used to track the swap occupancy for the different categories (the category is saved in a tag value of each record, and the categories are free, used, etc.), so there are multiple gauge values with different tags for the different swap categories at a given time. There is also a second, DERIVE (counter) value under the swap_io type. See the query sketch after this list.
    tag key       | tag value       | description
    host          | server hostname | The name of the source from which this measurement was recorded.
    type          | swap / swap_io  | swap is the type, which groups the swap usage categories (cached, free, used). swap_io groups the swap IO usage – how many IO operations are executed (in, out).
    type_instance | swap categories | The categories are cached, free, used.
  • A Gauge value – a metric whose value can go up and down. More on the topic – Data sources.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures or the amount of memory used.

  • A DERIVE value – a metric in which the change of the value is interesting. For example, it can go up indefinitely and what matters is how fast it goes up; there are functions and queries which will give the user the derivative value.

    These data sources assume that the change of the value is interesting, i.e. the derivative. Such data sources are very common with events that can be counted, for example, the number of emails that have been received per second by an MTA since it was started. The total number of emails is not interesting.

  • To cross-check the values, the user can use /proc/swaps, /proc/meminfo and /proc/vmstat:
    [root@srv ~]# cat /proc/swaps
    Filename                                Type            Size    Used    Priority
    /dev/zram0                              partition       16777212        2533856 -1
    [root@srv ~]# cat /proc/meminfo |egrep -e "^(SwapTotal:|SwapFree:|SwapCached:)"
    SwapCached:       175416 kB
    SwapTotal:      16777212 kB
    SwapFree:       14202396 kB
    [root@srv ~]# cat /proc/vmstat |egrep -e "^(pswpout|pswpin)"
    pswpin 877611681
    pswpout 376878365
    

    To convert the swap_io values to bytes, they must be multiplied by the page size of the current system – typically 4K under Linux. Note that, by default, the ReportBytes collectd option is not enabled, so the swap_io measurement is in pages since the last reboot. The swap_io counter is read from pswpout and pswpin (which also count pages since the last reboot). In fact, these two values are really important to track, because they show how much the system touches the swap device(s) and they could point out a problem with physical memory shortage.
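
Based on the tags above, here is a minimal InfluxQL sketch for the swap usage and swap IO series, typed in the influx command-line client. The database name collectd and the host tag value srv are assumptions of this example:

[root@srv ~]# influx -database 'collectd'
> SELECT mean("value") FROM "swap_value" WHERE "type" = 'swap' AND "host" = 'srv' AND time > now() - 1h GROUP BY time(10s), "type_instance"
> SELECT non_negative_derivative(mean("value"), 10s) FROM "swap_value" WHERE "type" = 'swap_io' AND "host" = 'srv' AND time > now() - 1h GROUP BY time(10s), "type_instance"

The first query groups the gauge values by category (cached, free, used); the second one turns the ever-growing swap_io counters into a rate per 10 seconds with non_negative_derivative().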

Keep on reading!

Run podman/docker InfluxDB 1.8 container to collect statistics from collectd

Yet another article on the topic of InfluxDB 1.8 and collectd gathering statistics, in continuation of the articles Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9. This time, InfluxDB runs in a container created with the podman or docker software.

[Figure: podman run and show databases]

Here are the important points to mind when running InfluxDB 1.8 in a docker/podman container:
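
As a quick preview, here is a minimal, assumed sketch of such a container start with podman (docker accepts the same arguments). The mounted host paths, the ports, the database name and the environment variables are example values to adjust; the details are discussed below:

# a minimal, assumed example – adjust paths, ports and the database name
podman run -d --name influxdb \
  -p 8086:8086 -p 25826:25826/udp \
  -v /srv/influxdb:/var/lib/influxdb \
  -v /srv/influxdb/types.db:/usr/share/collectd/types.db:ro \
  -e INFLUXDB_COLLECTD_ENABLED=true \
  -e INFLUXDB_COLLECTD_DATABASE=collectd \
  -e INFLUXDB_COLLECTD_TYPESDB=/usr/share/collectd/types.db \
  influxdb:1.8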
Keep on reading!

Create graph for Physical Memory grouped by states using Grafana, InfluxDB and collectd

This article shows how to make a graph showing a Linux machine’s memory usage. The memory plugin gathers physical memory utilization – used, buffered, cached, and free. In general, this module collects simple data for the physical memory like the Linux commands free and top do. The purpose of this article is to make a graph showing memory usage and consumption.

[Figure: example physical memory usage]

The Linux machine is using collectd to gather the memory statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and organize the graphs in panels and dashboards. Check out the previous articles on the subject for how to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Create graph for Linux CPU usage using Grafana, InfluxDB and collectd.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the Memory collectd plugin

  • The collectd plugin Memory official page – https://collectd.org/wiki/index.php/Plugin:Memory
  • The Memory plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_memory
  • to enable the Memory plugin, load the plugin with the load directive in /etc/collectd.conf
    LoadPlugin memory
    
  • The Memory plugin collects data every 10 seconds.
  • memory_value – a single Gauge value – a metric whose value can go up and down. It is used to track the memory occupancy for the different categories (the category is saved in a tag value of each record, and the categories are used, free, etc.), so there are multiple gauge values with different tags for the different memory categories at a given time. See the query sketch after this list.
    tag key       | tag value         | description
    host          | server hostname   | The name of the source from which this measurement was recorded.
    type          | memory            | memory is the type, which groups the memory categories.
    type_instance | memory categories | The categories are buffered, cached, free, slab_recl, slab_unrecl, used.
  • A Gauge value – a metric whose value can go up and down. More on the topic – Data sources.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures or the amount of memory used.

  • To cross-check the values, the user can use /proc/meminfo:
    [root@srv ~]# cat /proc/meminfo |egrep -e "^(MemTotal|MemFree|Buffers|Cached|Slab|SReclaimable|SUnreclaim)"
    MemTotal:        3726476 kB
    MemFree:         2869736 kB
    Buffers:            5248 kB
    Cached:           400740 kB
    Slab:              67700 kB
    SReclaimable:      29200 kB
    SUnreclaim:        38500 kB
    

    Some of the lines are pretty self-explanatory – “MemTotal”, “MemFree”, “Buffers” and so on.
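
Based on the tags above, a minimal InfluxQL sketch for the memory graph, typed in the influx command-line client (the database name collectd and the host tag value srv are assumptions of this example):

[root@srv ~]# influx -database 'collectd'
> SELECT mean("value") FROM "memory_value" WHERE "host" = 'srv' AND time > now() - 1h GROUP BY time(10s), "type_instance"

Grouping by type_instance produces one series per memory category – buffered, cached, free, slab_recl, slab_unrecl, used.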

Keep on reading!

Create graph for Linux Processes grouped by states using Grafana, InfluxDB and collectd

This article shows how to make a graph showing a Linux machine’s process states. The processes plugin can gather either the number of processes grouped by their state, or detailed metadata per selected process defined in the configuration (the metadata includes the process state, the resident set size (RSS), system/user time used, and so on). The purpose of this article is to make a graph with all the processes grouped by their state. Graphs for per-process data are not included here.

[Figure: process states of a live web server]

The Linux machine is using collectd to gather the processes statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and organize the graphs in panels and dashboards. Check out the previous articles on the subject for how to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Create graph for Linux CPU usage using Grafana, InfluxDB and collectd.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the Processes collectd plugin

  • The collectd plugin Processes official page – https://collectd.org/wiki/index.php/Plugin:Processes
  • The Processes plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_processes
  • to enable the Processes plugin, load the plugin with the load directive in /etc/collectd.conf
    LoadPlugin processes
    
  • The Processes plugin collects data every 10 seconds.
  • processes_value – a single Gauge value – a metric whose value can go up and down. It is used to count the number of processes in the different states (the state is saved in a tag value of each record), so there are multiple gauge values with different tags for the different process states at a given time. See the query sketch after this list.
    tag key       | tag value         | description
    host          | server hostname   | The name of the source from which this measurement was recorded.
    type          | ps_state          | ps_state is the type, which groups the processes by their states.
    type_instance | processes’ states | The states are blocked, paging, running, sleeping, stopped, zombies.
  • A Gauge value – a metric whose value can go up and down. More on the topic – Data sources.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures or the amount of memory used.

  • To cross-check the values, the user can use /proc/stat:
    [root@srv ~]# cat /proc/stat 
    cpu  804 0 732 6240 198 106 25 0 0 0
    cpu0 444 0 345 3092 121 44 14 0 0 0
    cpu1 359 0 387 3147 76 62 11 0 0 0
    intr 72376 117 9 0 0 0 0 0 0 1 2 0 0 156 0 187 187 0 0 188 273 0 0 0 0 0 0 6574 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    ctxt 216350
    btime 1667997331
    processes 1359
    procs_running 2
    procs_blocked 0
    softirq 38704 2 5003 5 290 6565 0 74 5796 0 20969
    

    Some of the lines are pretty self-explanatory – “procs_running”, “processes”, “procs_blocked” and so on.
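
Based on the tags above, a minimal InfluxQL sketch for the process states graph, typed in the influx command-line client (the database name collectd and the host tag value srv are assumptions of this example):

[root@srv ~]# influx -database 'collectd'
> SELECT mean("value") FROM "processes_value" WHERE "host" = 'srv' AND time > now() - 1h GROUP BY time(10s), "type_instance"

Grouping by type_instance gives one series per process state – blocked, paging, running, sleeping, stopped, zombies.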

Keep on reading!

Create graph for Linux CPU usage using Grafana, InfluxDB and collectd

This article shows how to make a graph showing a Linux machine’s CPU Usage.

[Figure: example CPU usage]

The Linux machine is using collectd to gather the CPU usage statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and organize the graphs in panels and dashboards. Check out the previous articles on the subject to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9 and Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the cpu collectd plugin

  • The collectd plugin CPU official page – https://collectd.org/wiki/index.php/Plugin:CPU
  • The CPU plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_cpu
  • to enable the CPU plugin, load the plugin with the load directive in /etc/collectd.conf
    LoadPlugin cpu
    
  • The CPU plugin collects data every 10 seconds.
  • cpu_value – one DERIVE value is saved in the database. All values are in jiffies – the kernel unit of time. Showing raw jiffies is not practical, which is why all CPU graphs convert jiffies to a CPU usage percentage. See the query sketch after this list.
    tag key       | tag value             | description
    host          | server hostname       | The name of the source from which this measurement was recorded.
    instance      | execution unit number | The execution unit on which this measurement was recorded. For example, a system with 8 cores has 8 different execution units, so instances from 0 to 7. A graph representing the usage of a single CPU core is possible.
    type          | cpu                   | The only type available is cpu.
    type_instance | CPU usage metrics     | CPU metrics – idle, interrupt, nice, softirq, steal, system, user, wait.
  • A DERIVE value – a metric in which the change of the value is interesting. For example, it can go up indefinitely and what matters is how fast it goes up; there are functions and queries which will give the user the derivative value.

    These data sources assume that the change of the value is interesting, i.e. the derivative. Such data sources are very common with events that can be counted, for example, the number of emails that have been received per second by an MTA since it was started. The total number of emails is not interesting.

  • To cross-check the values, the user can use /proc/stat:
    [root@srv ~]# cat /proc/stat 
    cpu  939 0 988 51486 200 261 56 0 0 0
    cpu0 483 0 473 25796 89 114 25 0 0 0
    cpu1 455 0 514 25690 110 147 31 0 0 0
    intr 123072 118 9 0 0 0 0 0 0 1 6 0 0 156 0 409 409 0 0 1184 501 0 0 0 0 0 0 6823 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    ctxt 279137
    btime 1666874114
    processes 1373
    procs_running 1
    procs_blocked 0
    softirq 64069 2 13685 7 544 6967 0 77 15801 0 26986
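
Because cpu_value is a DERIVE metric in jiffies, the graph query takes the per-second derivative of the counter. With the usual USER_HZ of 100 jiffies per second, the per-second rate of a state’s jiffies is already the percentage of one core spent in that state. A minimal InfluxQL sketch in the influx command-line client (the database name collectd, the host tag value srv and the USER_HZ of 100 are assumptions of this example):

[root@srv ~]# influx -database 'collectd'
> SELECT non_negative_derivative(mean("value"), 1s) FROM "cpu_value" WHERE "type_instance" = 'user' AND "host" = 'srv' AND time > now() - 1h GROUP BY time(10s), "instance"

Repeat the query (or use a Grafana variable) for the other type_instance values – system, wait, idle, etc. – and the GROUP BY on the instance tag gives one series per core.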
    

Keep on reading!

Create graph for Linux Load Average using Grafana, InfluxDB and collectd

This article shows how to make a graph showing a Linux machine’s load average.

[Figure: a real load average graph of a web server]

The Linux machine is using collectd to gather the load average and send it to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and organize the graphs in panels and dashboards. Check out the previous articles on the subject to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9 and Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the load collectd plugin

  • The collectd plugin Load official page – https://collectd.org/wiki/index.php/Plugin:Load
  • The Load plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_load
  • to enable the load plugin, load the plugin with the load directive in /etc/collectd.conf
    LoadPlugin load
    
  • The Load plugin collects data every 10 seconds.
  • load_longterm, load_midterm, load_shortterm – 3 gauge values are saved in the database (see the query sketch after this list).
  • A Gauge value – a metric whose value can go up and down.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures or the amount of memory used.

  • To cross-check the value, the user can use the uptime command under Linux or /proc/loadavg:
    [root@srv ~]# uptime
     23:08:09 up 52 min,  2 users,  load average: 1.00, 0.77, 0.38
    [root@srv ~]# cat /proc/loadavg 
    1.00 0.77 0.38 2/176 1900
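
Since the three load values are plain gauges, the graph query is a simple mean over time. A minimal InfluxQL sketch in the influx command-line client (the database name collectd and the host tag value srv are assumptions of this example):

[root@srv ~]# influx -database 'collectd'
> SELECT mean("value") FROM "load_shortterm", "load_midterm", "load_longterm" WHERE "host" = 'srv' AND time > now() - 1h GROUP BY time(10s)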
    

Keep on reading!

How to install collectd in Ubuntu 22.04 LTS and in general under Ubuntu

It appears Ubuntu 22.04 LTS still does not include in its package base one of the best pieces of server software for gathering metrics from different sources. collectd is a small and fast daemon, which can gather metrics from more than 80 different sources.
In fact, Ubuntu 22.04 LTS does not include it, but the new non-LTS Ubuntu 22.10 provides the package in the universe repository – https://packages.ubuntu.com/kinetic/collectd-core. At least one more package, collectd, should be installed from https://packages.ubuntu.com/kinetic/collectd. The package names are collectd and collectd-core, and there are 4 more packages of interest – collectd-dev, collectd-utils, libcollectdclient-dev, libcollectdclient1.
Check out the pool folder of an Ubuntu mirror, for example, the mirror – http://mirrors.kernel.org/ubuntu/pool/universe/c/collectd/ and download the latest file.
Now, the latest files are http://mirrors.kernel.org/ubuntu/pool/universe/c/collectd/collectd-core_5.12.0-11_amd64.deb and http://mirrors.kernel.org/ubuntu/pool/universe/c/collectd/collectd_5.12.0-11_amd64.deb. Download them and install them with apt as usual, but pointing to the downloaded files:
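
For example, a sketch of the download-and-install commands with the two files above (the version in the file names will change over time):

wget http://mirrors.kernel.org/ubuntu/pool/universe/c/collectd/collectd-core_5.12.0-11_amd64.deb
wget http://mirrors.kernel.org/ubuntu/pool/universe/c/collectd/collectd_5.12.0-11_amd64.deb
apt install ./collectd-core_5.12.0-11_amd64.deb ./collectd_5.12.0-11_amd64.deb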
Keep on reading!

Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9

This article describes how to build a modern analytics and monitoring solution for system and application performance metrics – a solution which may host all the servers’ metrics, with a sophisticated application that allows easy analysis of the data and powerful graphs to visualize it.
A brief introduction to the three main pieces of software used to build the proposed solution:

  1. Grafana – an analytics and a web visualization tool. It supports dashboards, charts, graphs, alerts, and many more.
  2. influxdb – a time series database. Bleeding fast reads and writes, optimized for time series data.
  3. collectd – a data collection daemon, which obtains metrics from the host it is started on and sends them to the database (i.e. influxdb). It has around 170 plugins to collect metrics.

What is the task of each tool:

  1. collectd – gathers metrics and statistics using its plugins every 10 seconds on the host it runs on and then sends the data over UDP to influxdb using the collectd network protocol.
  2. influxdb – listens on an open UDP port for data coming from multiple collectd instances installed on many different devices. In this case, a Linux server running CentOS Stream 9.
  3. Grafana – an analytics and a web visualization tool. A web application, which connects to the InfluxDB and visualizes the time series metrics in graphs organized in dashboards. Graphs for CPU, memory, network, storage usage, and many more.
  4. nginx – to enable SSL and to proxy requests in front of Grafana.

The whole solution uses the CentOS Stream 9 Linux distro. Installing CentOS Stream 9 is a mandatory step to proceed further with this article – Network installation of CentOS Stream 9 (20220606.0) – minimal server installation.
The InfluxDB UDP port should be opened on a per-IP basis, and whether the web port of the web server (nginx) is kept behind a VPN or is openly accessible from the Internet depends on the purpose of the solution.
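
Before the installation steps, here is a minimal sketch of the UDP wiring between collectd and InfluxDB described above. The IP address, the port and the database name are example values, and the exact options may differ from the configuration used later in the article:

# /etc/collectd.conf on every monitored host – send the metrics over UDP to the InfluxDB server
LoadPlugin network
<Plugin network>
        Server "192.168.0.10" "25826"
</Plugin>

# /etc/influxdb/influxdb.conf on the InfluxDB server – accept the collectd protocol on UDP port 25826
[[collectd]]
  enabled = true
  bind-address = ":25826"
  database = "collectd"
  typesdb = "/usr/share/collectd/types.db"

With this wiring, every metric gathered by collectd ends up in the collectd database in InfluxDB, ready to be queried by Grafana.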

STEP 1) Install additional repositories for Grafana, influxdb and collectd.

Install CentOS official EPEL and OpsTools repositories. EPEL provides additional packages to the base CentOS packages and OpsTools provides collectd and more collectd plugins than the ones included in the built-in repositories.

dnf install -y epel-release centos-release-opstools

Add the InfluxDB repository by creating a file in /etc/yum.repos.d/influxdb.repo

[influxdb]
name = InfluxDB Repository - RHEL $releasever
baseurl = https://repos.influxdata.com/centos/$releasever/$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key

Finally, add the Grafana repository in file /etc/yum.repos.d/grafana.repo

[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt

Keep on reading!

Install and use collectd-ping under CentOS 8 to monitor latency

Tracking the network latency of the servers’ network is not an easy job. Most monitoring software is capable of monitoring the state of the server, but how to monitor the state of the connectivity and the network latency, and even the Internet connectivity against some well-known addresses like 1.1.1.1 or 8.8.8.8? It should be easy to do with ICMP and the ping command, but using the collectd daemon and one of its plugins – collectd-ping from https://collectd.org/wiki/index.php/Plugin:Ping – offers saving all the history in a time series back-end and using Grafana – https://grafana.com/ (or other graphing/histogram software) to make graphs.
Using the collectd-ping plugin in conjunction with Grafana may achieve an effect similar to the old and gold smokeping.
CentOS 7 included the collectd-ping plugin in its official repository, but in CentOS 8 the plugin is missing! Under CentOS 8, the CentOS SIG OpsTools https://wiki.centos.org/SpecialInterestGroup/OpsTools includes the collectd-ping plugin in its repository. More on SIG and OpsTools may be found on the linked page. In general, it is safe to use this repository; it would not break the user’s system.
Here is how to install and configure it. Real Grafana examples are also included at the end.

The example here assumes there is a Grafana server installed with an InfluxDB back-end.

STEP 1) Add OpsTools repository and install the collectd and collectd-ping.

The OpsTools repository is installed with the centos-release-opstools package.
Here is what is going to be installed:

dnf install -y centos-release-opstools
dnf install -y collectd collectd-ping
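
After the installation, the plugin needs to be loaded and pointed at the hosts to ping. A minimal sketch of the configuration (in /etc/collectd.conf or in a file under /etc/collectd.d/); the target hosts and the interval are example values:

LoadPlugin ping
<Plugin ping>
        Host "1.1.1.1"
        Host "8.8.8.8"
        Interval 1.0
</Plugin>

Restart collectd afterwards (systemctl restart collectd) so the plugin starts gathering latency history.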

Keep on reading!