Create graph for CPU frequency grouped by processors/cores using Grafana, InfluxDB and collectd

This article shows how to make a graph showing a Linux machine's CPU frequency changes. The cpufreq plugin gathers the CPU frequency of all the virtual processors, a.k.a. cores. In general, this module collects the same simple data the Linux kernel exposes in /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq. The purpose of this article is to make a graph showing CPU frequency changes, which may be a hint about the CPU load on the system.

(Figure: example CPU cores frequency chart)

The Linux machine is using collectd to gather CPU frequency statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and organize the graphs in panels and dashboards. Check out the previous articles on the subject to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Create graph for Linux CPU usage using Grafana, InfluxDB and collectd.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the cpufreq collectd plugin

  • The collectd plugin cpufreq official page – https://collectd.org/wiki/index.php/Plugin:CPUFreq
  • The CPUFreq plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_cpufreq There are no options for this plugin, at present.
  • To enable the CPUFreq plugin, load it with the LoadPlugin directive in /etc/collectd.conf
    LoadPlugin cpufreq
    
  • The CPUFreq plugin collects data every 10 seconds.
  • cpufreq_value – a single Gauge value – a metric whose value can go up and down. It is used to store the current CPU (or core) frequency, so there are multiple gauge values with different tags for the different cores (processors).
    tag key   | tag value          | description
    host      | server hostname    | The hostname of the source where this measurement was recorded.
    type      | cpufreq            | The current frequency of the processor or core.
    instance  | processor/core IDs | The processors (or cores), numbered from 0 to N.
  • A Gauge value is a metric whose value can go up and down. More on the topic – Data sources.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures, the amount of memory used, frequencies, etc.

  • To cross-check the value, the user can read /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq, replacing the * with an integer like 0, 1, 2, etc.
    [root@srv ~]# cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq
    4161945
    4184149
    4062907
    4044231
    4183620
    4107467
    4187644
    4167952
    

    The values are in kHz for each virtual processor shown in /proc/cpuinfo under a Linux system.
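
With this scheme, a Grafana panel can draw one line per core by grouping on the instance tag. A minimal InfluxQL sketch for such a panel could look like the query below – the host name srv is only a placeholder, and $timeFilter and $__interval are the usual Grafana macros for the selected time range and interval:

    -- average frequency per display interval, one series per processor/core
    SELECT mean("value") AS "frequency"
    FROM "cpufreq_value"
    WHERE "host" = 'srv' AND $timeFilter
    GROUP BY time($__interval), "instance" fill(null)

If the stored values are in Hz, setting the panel Y-axis unit to Hertz will render the lines in GHz automatically.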

Keep on reading!

Create graph for swap usage using Grafana, InfluxDB and collectd

This article shows how to make a graph showing a Linux machine's swap memory usage. The swap plugin gathers swap space utilization – cached, free, and used. In general, this module collects simple data for the swap memory, similar to the Linux command free. The purpose of this article is to make a graph showing swap memory usage and consumption.

(Figure: example swap usage)

The Linux machine is using collectd to gather the swap memory statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and organize the graphs in panels and dashboards. Check out the previous articles on the subject to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Create graph for Linux CPU usage using Grafana, InfluxDB and collectd.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the Swap collectd plugin

  • The collectd plugin Swap official page – https://collectd.org/wiki/index.php/Plugin:Swap
  • The Swap plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_swap This article relies on the default plugin’s options
    <Plugin swap>
    #       ReportByDevice false
    #       ReportBytes true
    #       ValuesAbsolute true
    #       ValuesPercentage false
    #       ReportIO true
    </Plugin>
    

    With these defaults, all swap devices are reported as a single aggregated value (not per device), and absolute values are used rather than percentages.

  • To enable the Swap plugin, load it with the LoadPlugin directive in /etc/collectd.conf
    LoadPlugin swap
    
  • The Swap plugin collects data every 10 seconds.
  • swap_value – includes Gauge values under the swap type – a metric whose value can go up and down. It is used to track the swap occupancy for the different categories (the category is saved in a tag value of one record; the categories are cached, free, and used), so there are multiple gauge values with different tags for the different swap categories at a given time. There is also a second, DERIVE counter under the swap_io type.
    tag key       | tag value        | description
    host          | server hostname  | The hostname of the source where this measurement was recorded.
    type          | swap or swap_io  | swap groups the swap usage categories (cached, free, used); swap_io groups the swap IO usage – how many IO operations are executed (in, out).
    type_instance | swap categories  | The categories are cached, free and used for swap, and in and out for swap_io.
  • A Gauge value is a metric whose value can go up and down. More on the topic – Data sources.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures or the amount of memory used.

  • A DERIVE value is a metric for which the change of the value is interesting. For example, it can go up indefinitely, and what matters is how fast it goes up; there are functions and queries which will give the user the derivative value.

    These data sources assume that the change of the value is interesting, i.e. the derivative. Such data sources are very common with events that can be counted, for example, the number of emails that have been received per second by an MTA since it was started. The total number of emails is not interesting.

  • To cross-check the values, the user can use /proc/swaps, /proc/meminfo and /proc/vmstat
    [root@srv ~]# cat /proc/swaps
    Filename                                Type            Size    Used    Priority
    /dev/zram0                              partition       16777212        2533856 -1
    [root@srv ~]# cat /proc/meminfo |egrep -e "^(SwapTotal:|SwapFree:|SwapCached:)"
    SwapCached:       175416 kB
    SwapTotal:      16777212 kB
    SwapFree:       14202396 kB
    [root@srv ~]# cat /proc/vmstat |egrep -e "^(pswpout|pswpin)"
    pswpin 877611681
    pswpout 376878365
    

    If the ReportBytes option is enabled, the swap_io values are multiplied by the page size of the current system, which is 4K for most Linux systems. Note that, by default, the ReportBytes collectd option is not enabled, so the swap_io measurement is in pages since the last reboot. The swap_io counters are read from pswpout and pswpin (i.e. they also represent pages since the last reboot). In fact, these two values are really important to track, because they tell how much the system touches the swap device(s), which could point out a physical memory shortage.
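
To visualize both types in Grafana, two InfluxQL queries can be used – a plain gauge query for the swap occupancy and a derivative query for the swap IO counters. A minimal sketch, where srv is a placeholder host name and $timeFilter/$__interval are Grafana macros:

    -- swap occupancy, one series per category (cached, free, used)
    SELECT mean("value") FROM "swap_value"
    WHERE "type" = 'swap' AND "host" = 'srv' AND $timeFilter
    GROUP BY time($__interval), "type_instance"

    -- swap IO rate (pages per second by default) from the DERIVE counters (in, out)
    SELECT non_negative_derivative(mean("value"), 1s) FROM "swap_value"
    WHERE "type" = 'swap_io' AND "host" = 'srv' AND $timeFilter
    GROUP BY time($__interval), "type_instance"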

Keep on reading!

Run podman/docker InfluxDB 1.8 container to collect statistics from collectd

Yet another article on the topic of the InfluxDB 1.8 and collectd gathering statistics, in continuation of the articles Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9. This time, the InfluxDB runs in a container created with podman or docker software.

(Figure: podman run and show databases)

Here are the important points to keep in mind when running InfluxDB 1.8 in a docker/podman container:
Keep on reading!

Create graph for Physical Memory grouped by states using Grafana, InfluxDB and collectd

This article shows how to make a graph showing a Linux machine's memory usage. The memory plugin gathers physical memory utilization – used, buffered, cached, and free. In general, this module collects simple data for the physical memory, similar to the Linux commands free or top. The purpose of this article is to make a graph showing memory usage and consumption.

(Figure: example usage of physical memory)

The Linux machine is using collectd to gather the memory statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and organize the graphs in panels and dashboards. Check out the previous articles on the subject to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Create graph for Linux CPU usage using Grafana, InfluxDB and collectd.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the Memory collectd plugin

  • The collectd plugin Memory official page – https://collectd.org/wiki/index.php/Plugin:Memory
  • The Memory plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_memory
  • To enable the Memory plugin, load it with the LoadPlugin directive in /etc/collectd.conf
    LoadPlugin memory
    
  • The Memory plugin collects data every 10 seconds.
  • memory_value – a single Gauge value – a metric whose value can go up and down. It is used to track the memory occupancy for the different categories (the category is saved in a tag value of one record; the categories are used, free, etc.), so there are multiple gauge values with different tags for the different memory categories at a given time.
    tag key       | tag value         | description
    host          | server hostname   | The hostname of the source where this measurement was recorded.
    type          | memory            | memory is the type, which groups the memory categories.
    type_instance | memory categories | The categories are buffered, cached, free, slab_recl, slab_unrecl, used.
  • A Gauge value is a metric whose value can go up and down. More on the topic – Data sources.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures or the amount of memory used.

  • To cross-check the values, the user can use /proc/meminfo
    [root@srv ~]# cat /proc/meminfo |egrep -e "^(MemTotal|MemFree|Buffers|Cached|Slab|SReclaimable|SUnreclaim)"
    MemTotal:        3726476 kB
    MemFree:         2869736 kB
    Buffers:            5248 kB
    Cached:           400740 kB
    Slab:              67700 kB
    SReclaimable:      29200 kB
    SUnreclaim:        38500 kB
    

    Some of the lines are self-explanatory – "MemTotal", "MemFree", "Buffers" and so on.
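
To build the memory graph in Grafana, a single InfluxQL query grouped by the type_instance tag gives one series per category. A minimal sketch, where srv is a placeholder host name and $timeFilter/$__interval are Grafana macros:

    -- physical memory occupancy, one series per category
    SELECT mean("value") FROM "memory_value"
    WHERE "host" = 'srv' AND $timeFilter
    GROUP BY time($__interval), "type_instance"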

Keep on reading!

Move or backup all database measurements for a single host to another Influxdb server

This article demonstrates how to move part of the data from one InfluxDB server to another InfluxDB server. In effect, the data is split by a chosen criterion between the two servers. The InfluxDB server is version 1.8 and the InfluxQL language is used. All useful InfluxQL queries will be included. All queries are executed in the influx command-line tool, which connects to the default InfluxDB location – http://localhost:8086. It is important to be able to connect to the InfluxDB using the influx command-line tool. Unfortunately, it is not possible to use the influxd backup command to select only certain data from a database, despite the data being easily selectable by a unique tag value such as the hostname of the reporting server. The whole setup follows this article: Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9.

(Figure: Show series)

The initial setup – getting to know the database schema

Here is the initial setup of the first InfluxDB server. Multiple servers (i.e. hosts) report data to this InfluxDB server, and the target is to move all measurement data of a single reporting server to another InfluxDB server, which has already been accepting the new data. By moving the old data from the first InfluxDB server to the other InfluxDB server, the historical data is preserved for this reporting server (i.e. host).

  • InfluxDB database with name collectd.
    [root@srv ~]# influx
    Connected to http://localhost:8086 version 1.8.10
    InfluxDB shell version: 1.8.10
    > SHOW DATABASES
    name: databases
    name
    ----
    _internal
    collectd
    >
    

    It is important to show the retention policy, too. The retention policy is used to build the queries.

    [root@srv ~]# influx
    Connected to http://localhost:8086 version 1.8.10
    InfluxDB shell version: 1.8.10
    > SHOW RETENTION POLICIES ON "collectd"
    name    duration shardGroupDuration replicaN default
    ----    -------- ------------------ -------- -------
    default 0s       168h0m0s           1        true
    

    The retention policy name of the database "collectd" is "default". Always check the retention policy, because it might have a different name. For example, creating a database without specifying a retention policy will add a retention policy with the default name "autogen".

  • There are multiple measurements in the collectd database. Show all measurements associated with this database (i.e. collectd)
    [root@srv ~]# influx
    Connected to http://localhost:8086 version 1.8.10
    InfluxDB shell version: 1.8.10
    > SHOW MEASUREMENTS LIMIT 10
    name: measurements
    name
    ----
    clickhouse_value
    conntrack_value
    cpu_value
    dbi_value
    df_value
    disk_io_time
    disk_read
    disk_value
    disk_weighted_io_time
    disk_write
    

    There is a limit clause – "LIMIT 10" – to show only the first 10 measurements, because the whole list may be too big. The LIMIT clause can be omitted to show the whole list of measurements associated with the collectd database.
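
    Before moving anything, it is useful to confirm which series belong to the reporting server in question. A quick check in the influx shell could look like the following (srv1 is a placeholder host name):

    > SHOW SERIES ON "collectd" FROM "cpu_value" WHERE "host" = 'srv1' LIMIT 5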
    Keep on reading!

Create graph for Linux Processes grouped by states using Grafana, InfluxDB and collectd

This article shows how to make a graph showing a Linux machine's process states. The processes plugin can gather the number of processes grouped by their state, or metadata per selected process defined in the configuration (the metadata includes process state, resident segment size (RSS), system/user time used, and so on). The purpose of this article is to make a graph with all the processes grouped by their state. Graphs of the per-process data are not included here.

(Figure: process states of a live web server)

The Linux machine is using collectd to gather the processes statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and organize the graphs in panels and dashboards. Check out the previous articles on the subject to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Create graph for Linux CPU usage using Grafana, InfluxDB and collectd.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the Processes collectd plugin

  • The collectd plugin Processes official page – https://collectd.org/wiki/index.php/Plugin:Processes
  • The Processes plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_processes
  • To enable the Processes plugin, load it with the LoadPlugin directive in /etc/collectd.conf
    LoadPlugin processes
    
  • The Processes plugin collects data every 10 seconds.
  • processes_value – a single Gauge value – a metric whose value can go up and down. It is used to count the number of processes in the different states (the state is saved in a tag value of one record), so there are multiple gauge values with different tags for the different process states at a given time.
    tag key       | tag value         | description
    host          | server hostname   | The hostname of the source where this measurement was recorded.
    type          | ps_state          | ps_state is the type, which groups the processes by their states.
    type_instance | processes' states | The states are blocked, paging, running, sleeping, stopped, zombies.
  • A Gauge value is a metric whose value can go up and down. More on the topic – Data sources.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures or the amount of memory used.

  • To cross-check the value, the user can use /proc/stat
    [root@srv ~]# cat /proc/stat 
    cpu  804 0 732 6240 198 106 25 0 0 0
    cpu0 444 0 345 3092 121 44 14 0 0 0
    cpu1 359 0 387 3147 76 62 11 0 0 0
    intr 72376 117 9 0 0 0 0 0 0 1 2 0 0 156 0 187 187 0 0 188 273 0 0 0 0 0 0 6574 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    ctxt 216350
    btime 1667997331
    processes 1359
    procs_running 2
    procs_blocked 0
    softirq 38704 2 5003 5 290 6565 0 74 5796 0 20969
    

    Some of the lines are self-explanatory – "procs_running", "processes", "procs_blocked" and so on.
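
For the graph of processes grouped by state, a single InfluxQL query grouped by the type_instance tag gives one series per state. A minimal sketch, where srv is a placeholder host name and $timeFilter/$__interval are Grafana macros:

    -- number of processes in each state (blocked, paging, running, sleeping, stopped, zombies)
    SELECT mean("value") FROM "processes_value"
    WHERE "host" = 'srv' AND $timeFilter
    GROUP BY time($__interval), "type_instance"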

Keep on reading!

Create graph for Linux CPU usage using Grafana, InfluxDB and collectd

This article shows how to make a graph showing a Linux machine’s CPU Usage.

(Figure: example CPU usage)

The Linux machine is using collectd to gather the CPU usage statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and organize the graphs in panels and dashboards. Check out the previous articles on the subject to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9 and Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the cpu collectd plugin

  • The collectd plugin CPU official page – https://collectd.org/wiki/index.php/Plugin:CPU
  • The CPU plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_cpu
  • To enable the CPU plugin, load it with the LoadPlugin directive in /etc/collectd.conf
    LoadPlugin cpu
    
  • The CPU plugin collects data every 10 seconds.
  • cpu_value – 1 DERIVE value is saved in the database. All values are in jiffies – the kernel unit of time. Showing raw jiffies is not practical, that's why all CPU graphs convert jiffies to CPU percentage usage.
    tag key       | tag value             | description
    host          | server hostname       | The hostname of the source where this measurement was recorded.
    instance      | execution unit number | The execution unit (virtual CPU) on which this measurement was recorded. For example, a system with 8 cores has 8 execution units, so instances from 0 to 7. A graph representing the usage of a single CPU core is possible.
    type          | cpu                   | The only type available is cpu.
    type_instance | CPU usage metrics     | The CPU metrics – idle, interrupt, nice, softirq, steal, system, user, wait.
  • A DERIVE value is a metric for which the change of the value is interesting. For example, it can go up indefinitely, and what matters is how fast it goes up; there are functions and queries which will give the user the derivative value.

    These data sources assume that the change of the value is interesting, i.e. the derivative. Such data sources are very common with events that can be counted, for example, the number of emails that have been received per second by an MTA since it was started. The total number of emails is not interesting.

  • To cross-check the value, the user can use /proc/stat
    [root@srv ~]# cat /proc/stat 
    cpu  939 0 988 51486 200 261 56 0 0 0
    cpu0 483 0 473 25796 89 114 25 0 0 0
    cpu1 455 0 514 25690 110 147 31 0 0 0
    intr 123072 118 9 0 0 0 0 0 0 1 6 0 0 156 0 409 409 0 0 1184 501 0 0 0 0 0 0 6823 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    ctxt 279137
    btime 1666874114
    processes 1373
    procs_running 1
    procs_blocked 0
    softirq 64069 2 13685 7 544 6967 0 77 15801 0 26986
    
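Because the values are DERIVE counters in jiffies, the Grafana query takes a per-second derivative. A minimal InfluxQL sketch for the user metric, where srv is a placeholder host name and $timeFilter/$__interval are Grafana macros; with the common USER_HZ of 100 jiffies per second, the per-second rate roughly corresponds to percent of one execution unit:

    -- per-core rate of user CPU time (jiffies per second)
    SELECT non_negative_derivative(mean("value"), 1s) FROM "cpu_value"
    WHERE "host" = 'srv' AND "type_instance" = 'user' AND $timeFilter
    GROUP BY time($__interval), "instance"

The same query is repeated (or templated) for the other metrics – system, wait, softirq, and so on – to build the full CPU usage panel.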

Keep on reading!

Create graph for Linux Load Average using Grafana, InfluxDB and collectd

This article shows how to make a graph showing a Linux machine’s load average.

(Figure: a real load average graph of a web server)

The Linux machine is using collectd to gather the load average and send it to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and organize the graphs in panels and dashboards. Check out the previous articles on the subject to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9 and Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the load collectd plugin

  • The collectd plugin Load official page – https://collectd.org/wiki/index.php/Plugin:Load
  • The Load plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_load
  • To enable the Load plugin, load it with the LoadPlugin directive in /etc/collectd.conf
    LoadPlugin load
    
  • The Load plugin collects data every 10 seconds.
  • load_longterm, load_midterm, load_shortterm – 3 gauge values are saved in the database.
  • A Gauge value is a metric whose value can go up and down.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures or the amount of memory used.

  • To cross-check the values, the user can use the uptime command under Linux or /proc/loadavg
    [root@srv ~]# uptime
     23:08:09 up 52 min,  2 users,  load average: 1.00, 0.77, 0.38
    [root@srv ~]# cat /proc/loadavg 
    1.00 0.77 0.38 2/176 1900
    
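Graphing the three values in Grafana then comes down to selecting the three measurements in one InfluxQL query. A minimal sketch, where srv is a placeholder host name and $timeFilter/$__interval are Grafana macros:

    -- the three load average values as separate series
    SELECT mean("value") FROM "load_shortterm", "load_midterm", "load_longterm"
    WHERE "host" = 'srv' AND $timeFilter
    GROUP BY time($__interval)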

Keep on reading!

Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS

This is an updated version of the previous article on this topic – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, but this time for Ubuntu 22.04 LTS. The article describes how to build a modern analytics and monitoring solution for system and application performance metrics – a solution which may host all the servers' metrics in a sophisticated application that allows easy analysis of the data and powerful graphs to visualize it.
A brief introduction to the three main pieces of software used to build the proposed solution:

  1. Grafana – an analytics and a web visualization tool. It supports dashboards, charts, graphs, alerts, and many more.
  2. influxdb – a time series database. Very fast reads and writes, optimized for time series data.
  3. collectd – a data collection daemon, which obtains metrics from the host it runs on and sends them to the database (i.e. influxdb). It has around 170 plugins to collect metrics.

What is the task of each tool:

  1. collectd – gathers metrics and statistics using its plugins every 10 seconds on the host it runs on and then sends the data over UDP to influxdb using the collectd network protocol.
  2. influxdb – listens on an open UDP port for data coming from multiple collectd instances installed on many different devices. In this case, a Linux server running Ubuntu 22.04 LTS.
  3. Grafana – an analytics and a web visualization tool. A web application, which connects to the InfluxDB and visualizes the time series metrics in graphs organized in dashboards. Graphs for CPU, memory, network, storage usage, and many more.
  4. nginx to enable SSL and to proxy requests in front of Grafana.

The whole solution uses the Ubuntu 22.04 LTS server edition distro. Installing Ubuntu 22.04 LTS is a mandatory step to proceed further with this article – Installation of base Ubuntu server 22.04 LTS.
The InfluxDB UDP port should be opened on a per-IP basis, and whether the web server (nginx) port is exposed depends on the purpose of the solution – it can be behind a VPN or openly accessible from the Internet.

STEP 1) Install additional repositories for Grafana, InfluxDB and collectd.

collectd is part of the Ubuntu official repositories. Grafana and InfluxDB maintain their official repositories. Here is how to install them.
Add the InfluxDB repository by first importing the key of the InfluxDB repository and then adding the URL of the repository to /etc/apt/sources.list.d/influxdata.list.

myuser@srv:~$ sudo curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
myuser@srv:~$ echo 'deb https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list

Then, repeat the same procedure with the Grafana repository:

myuser@srv:~$ sudo curl -sL https://packages.grafana.com/gpg.key | sudo apt-key add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
myuser@srv:~$ echo 'deb https://packages.grafana.com/oss/deb stable main' | sudo tee /etc/apt/sources.list.d/grafana.list
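
The apt-key warnings above are harmless for now, but on Ubuntu 22.04 the keys can instead be stored in dedicated keyrings and referenced with signed-by. A possible sketch – the keyring file names are only illustrative:

    # convert the vendor keys to binary keyrings
    curl -sL https://repos.influxdata.com/influxdb.key | sudo gpg --dearmor -o /usr/share/keyrings/influxdata-archive.gpg
    curl -sL https://packages.grafana.com/gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/grafana-archive.gpg
    # reference the keyrings in the repository definitions
    echo 'deb [signed-by=/usr/share/keyrings/influxdata-archive.gpg] https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list
    echo 'deb [signed-by=/usr/share/keyrings/grafana-archive.gpg] https://packages.grafana.com/oss/deb stable main' | sudo tee /etc/apt/sources.list.d/grafana.list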

Execute apt update to refresh the available packages from all repositories, including the newly added ones:

apt update

Keep on reading!

Add source InfluxDB 1.8 with basic authentication in Grafana using the web interface

This article shows how to add a new source in Grafana with screenshots. The source is InfluxDB 1.8 with basic authentication enabled. The main purpose of this article is to give the user knowledge of how to:

  • Enable basic authentication in InfluxDB
  • Create users in InfluxDB – an administrative one and an ordinary one – and grant permissions on the database.
  • Add the InfluxDB source in Grafana using the web interface, with basic authentication enabled, using the credentials created in this article.

It is assumed that InfluxDB is installed and running on the loopback 127.0.0.1, at least. If the InfluxDB service is not local to the Grafana service, replace 127.0.0.1 with the appropriate IP and adjust the firewall so that it accepts connections from the Grafana server IP. For installing InfluxDB with detailed information, including firewall modifications, there is another article here – Monitor and analyze with Grafana, InfluxDB 1.8 and collectd under CentOS Stream 9.
No installation instructions for InfluxDB or Grafana are included in this article; if they are needed, check out the article above.

STEP 1) Create users in InfluxDB.

By default, InfluxDB authentication is disabled and no users are required to access and manage the service and the databases. That's why the first thing to do is to create an administrative user, which will manage the databases once basic authentication is enabled. At the same time, when creating the administrative user, ordinary users may be created, too.
To connect to InfluxDB and manage the service, the InfluxDB command-line tool influx will be used. influx connects to http://127.0.0.1:8086 – the HTTP interface of the InfluxDB service.

[root@srv ~]# influx
Connected to http://localhost:8086 version 1.8.10
InfluxDB shell version: 1.8.10
> CREATE USER admin WITH PASSWORD 'aiqu8ohth9Cheeshai]c' WITH ALL PRIVILEGES
> SHOW USERS
user  admin
----  -----
admin true
> CREATE USER collectd WITH PASSWORD 'ohg|ahTh9Sa|quoh8zoh'
> GRANT READ ON "collectd" TO "collectd"
> SHOW USERS
user     admin
----     -----
admin    true
collectd false
>

First, the administrative user named admin is created, and then the ordinary user named collectd. For the ordinary user, access privileges are granted only for READ on the collectd database. It is typical to name the database and the user accessing it with the same name. The format of the GRANT command is the following:

GRANT [READ|WRITE|ALL] ON "<database_name>" TO "<user_name>"

READ privileges are enough for Grafana to access the data.
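
To verify the result, the privileges granted to a user can be listed in the influx shell with SHOW GRANTS:

    > SHOW GRANTS FOR "collectd"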
Keep on reading!