Create graph for CPU frequency grouped by processors/cores using Grafana, InfluxDB and collectd

This article shows how to make a graph of a Linux machine’s CPU frequency changes. The cpufreq collectd plugin gathers the CPU frequency of all the virtual processors, also known as cores. In general, this module collects the same simple per-processor frequency numbers that Linux exposes in /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq. The purpose of this article is to make a graph showing CPU frequency changes, which may be a hint about the CPU load on the system.

[Screenshot: example CPU cores frequency chart]

The Linux machine uses collectd to gather CPU frequency statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and to organize the graphs in panels and dashboards. Check out the previous articles on the subject on how to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Create graph for Linux CPU usage using Grafana, InfluxDB and collectd.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the cpufreq collectd plugin

  • The collectd plugin cpufreq official page – https://collectd.org/wiki/index.php/Plugin:CPUFreq
  • The CPUFreq plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_cpufreq (there are no options for this plugin at present).
  • To enable the CPUFreq plugin, load it with the LoadPlugin directive in /etc/collectd.conf:
    LoadPlugin cpufreq
    
  • The CPUFreq plugin collects data every 10 seconds.
  • cpufreq_value – a single Gauge value – a metric whose value can go up and down. It is used to store the current CPU (or core) frequency. So there are multiple gauge values with different tags for the different cores (processors).
    tag key       tag value               description
    host          server hostname         The name of the source this measurement was recorded from.
    type          cpufreq                 The current frequency of the current processor or core.
    instance      processors/cores ids    The processors (or cores), numbered from 0 to N.
  • A Gauge value – a metric whose value can go up and down. More on the topic – Data sources.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures or the amount of memory used, frequencies, etc.

  • To cross-check the values, the user can read /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq, replacing the * with an integer number like 0, 1, 2, etc. (or using the shell glob directly, as below):
    [root@srv ~]# cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq
    4161945
    4184149
    4062907
    4044231
    4183620
    4107467
    4187644
    4167952
    

    The values are in kHz, one line for each virtual processor shown in /proc/cpuinfo under a Linux system.
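
With the data in InfluxDB, a Grafana panel can draw one line per core (processor) by grouping on the instance tag. Here is a minimal InfluxQL sketch for the panel query – it assumes the cpufreq_value measurement described above and Grafana’s $timeFilter and $__interval macros, so treat it as a starting point rather than the article’s exact query:

    SELECT mean("value") FROM "cpufreq_value" WHERE $timeFilter GROUP BY time($__interval), "instance" fill(null)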

Keep on reading!

Create graph for swap usage using Grafana, InfluxDB and collectd

This article shows how to make a graph of a Linux machine’s swap memory usage. The Swap collectd plugin gathers physical swap memory utilization – cached, free, and used. In general, this module collects simple data about the swap memory, much like the Linux command free. The purpose of this article is to make a graph showing swap memory usage and consumption.

[Screenshot: example of swap usage]

The Linux machine uses collectd to gather the swap memory statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and to organize the graphs in panels and dashboards. Check out the previous articles on the subject on how to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Create graph for Linux CPU usage using Grafana, InfluxDB and collectd.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the Swap collectd plugin

  • The collectd plugin Swap official page – https://collectd.org/wiki/index.php/Plugin:Swap
  • The Swap plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_swap – this article relies on the default plugin options:
    <Plugin swap>
    #       ReportByDevice false
    #       ReportBytes true
    #       ValuesAbsolute true
    #       ValuesPercentage false
    #       ReportIO true
    </Plugin>
    

    With these defaults, all swap devices are reported aggregated as one value rather than per device, and absolute values are used rather than percentages.

  • To enable the Swap plugin, load it with the LoadPlugin directive in /etc/collectd.conf:
    LoadPlugin swap
    
  • The Swap plugin collects data every 10 seconds.
  • swap_value – includes a single Gauge value under the swap type – a metric whose value can go up and down. It is used to track the swap occupancy for the different categories (the category is saved in a tag value of each record; the categories are cached, free, used, etc.). So there are multiple gauge values with different tags for the different swap categories at a given time. There is also a second, counter-like value under the swap_io type.
    tag key         tag value          description
    host            server hostname    The name of the source this measurement was recorded from.
    type            swap | swap_io     swap is the type, which groups the swap usage categories (cached, free, used). swap_io groups the swap IO usage – how many IO operations are executed (in, out).
    type_instance   swap categories    The categories are cached, free, used (for swap) and in, out (for swap_io).
  • A Gauge value – a metric whose value can go up and down. More on the topic – Data sources.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures or the amount of memory used.

  • A DERIVE value – a metric for which the change of the value is interesting. For example, it can grow indefinitely, and what matters is how fast it grows; there are functions and queries which will give the user the derivative value.

    These data sources assume that the change of the value is interesting, i.e. the derivative. Such data sources are very common with events that can be counted, for example, the number of emails that have been received per second by an MTA since it was started. The total number of emails is not interesting.

  • To cross-check the values, the user can read /proc/swaps, /proc/meminfo and /proc/vmstat:
    [root@srv ~]# cat /proc/swaps
    Filename                                Type            Size    Used    Priority
    /dev/zram0                              partition       16777212        2533856 -1
    [root@srv ~]# cat /proc/meminfo |egrep -e "^(SwapTotal:|SwapFree:|SwapCached:)"
    SwapCached:       175416 kB
    SwapTotal:      16777212 kB
    SwapFree:       14202396 kB
    [root@srv ~]# cat /proc/vmstat |egrep -e "^(pswpout|pswpin)"
    pswpin 877611681
    pswpout 376878365
    

    The swap_io values must be multiplied by the page size of the current system to get bytes; for Linux on x86, the page size is typically 4 KiB. Note that, by default, the ReportBytes collectd option is not enabled, so the swap_io measurement is in pages since the last reboot. The swap_io counter is read from pswpout and pswpin (i.e. they also represent pages since the last reboot). In fact, these two values are really important to track, because they tell how much the system touches the swap device(s), which could point to physical memory shortages.
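
    As a minimal sketch of this conversion (assuming a standard shell where getconf is available), the pswpin/pswpout counters shown above translate to bytes like this – the printed numbers are simply the values above multiplied by a 4096-byte page:

    [root@srv ~]# awk -v ps="$(getconf PAGESIZE)" '/^(pswpin|pswpout)/ {printf "%s: %.0f bytes\n", $1, $2*ps}' /proc/vmstat
    pswpin: 3594697445376 bytes
    pswpout: 1543693783040 bytes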

Keep on reading!

Run podman/docker InfluxDB 1.8 container to collect statistics from collectd

Yet another article on the topic of InfluxDB 1.8 and collectd gathering statistics, in continuation of the articles Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9. This time, InfluxDB runs in a container created with the podman or docker software.

[Screenshot: podman run and show databases]

Here are the important points to mind when running InfluxDB 1.8 in a docker/podman container (a quick sketch of such a run follows):
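
As a teaser, here is a minimal sketch of such a run – the image is the official influxdb:1.8 from Docker Hub, while the volume name and the collectd UDP port mapping are assumptions for illustration, not the article’s exact command:

    # HTTP API on 8086; 25826/udp is collectd's default network port (assumed mapping)
    podman run -d --name influxdb \
        -p 8086:8086 \
        -p 25826:25826/udp \
        -v influxdb-data:/var/lib/influxdb \
        docker.io/library/influxdb:1.8
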
Keep on reading!

Create graph for Physical Memory grouped by states using Grafana, InfluxDB and collectd

This article shows how to make a graph of a Linux machine’s memory usage. The Memory collectd plugin gathers physical memory utilization – used, buffered, cached, and free. In general, this module collects simple data about the physical memory, much like the Linux commands free or top. The purpose of this article is to make a graph showing memory usage and consumption.

[Screenshot: example usage of physical memory]

The Linux machine uses collectd to gather the memory statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and to organize the graphs in panels and dashboards. Check out the previous articles on the subject on how to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Create graph for Linux CPU usage using Grafana, InfluxDB and collectd.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the Memory collectd plugin

  • The collectd plugin Memory official page – https://collectd.org/wiki/index.php/Plugin:Memory
  • The Memory plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_memory
  • To enable the Memory plugin, load it with the LoadPlugin directive in /etc/collectd.conf:
    LoadPlugin memory
    
  • The Memory plugin collects data every 10 seconds.
  • memory_value – a single Gauge value – a metric whose value can go up and down. It is used to track the memory occupancy for the different categories (the category is saved in a tag value of each record; the categories are used, free, etc.). So there are multiple gauge values with different tags for the different memory categories at a given time.
    tag key         tag value            description
    host            server hostname      The name of the source this measurement was recorded from.
    type            memory               memory is the type, which groups the memory categories.
    type_instance   memory categories    The categories are buffered, cached, free, slab_recl, slab_unrecl, used.
  • A Gauge value – a metric whose value can go up and down. More on the topic – Data sources.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures or the amount of memory used.

  • To cross-check the values, the user can read /proc/meminfo:
    [root@srv ~]# cat /proc/meminfo |egrep -e "^(MemTotal|MemFree|Buffers|Cached|Slab|SReclaimable|SUnreclaim)"
    MemTotal:        3726476 kB
    MemFree:         2869736 kB
    Buffers:            5248 kB
    Cached:           400740 kB
    Slab:              67700 kB
    SReclaimable:      29200 kB
    SUnreclaim:        38500 kB
    

    Some of the lines are pretty clear about what they mean – “MemTotal”, “MemFree”, “Buffers” and so on.
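
With these categories in place, a Grafana panel can stack one series per memory state by grouping on the type_instance tag. Here is a minimal InfluxQL sketch for the panel query – it assumes the memory_value measurement described above, a placeholder hostname srv and Grafana’s $timeFilter and $__interval macros:

    SELECT mean("value") FROM "memory_value" WHERE "host" = 'srv' AND $timeFilter GROUP BY time($__interval), "type_instance" fill(null)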

Keep on reading!

Install and create a GlusterFS 11 replica cluster under CentOS Stream 9

At present, the latest version of GlusterFS is 11 and the latest version of CentOS is CentOS Stream 9.

[Screenshot: create force start and mount volume]

This article will present how to build a 3-node file replica cluster using the latest version of GlusterFS and CentOS Stream 9. There are older articles on this topic here – Create and export a GlusterFS volume with NFS-Ganesha in CentOS 8 and glusterfs with localhost (127.0.0.1) nodes on different servers – glusterfs volume with 3 replicas.

Summary

Here is what the 3-node replica cluster consists of:

STEP 1) Install the additional repositories.

Three additional repositories should be installed – all of them are official, from the CentOS community or the Fedora community, so they tend to be really stable and do not break the package integrity.
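
A minimal sketch of enabling them – the CentOS Storage SIG release package name for GlusterFS 11 and the EPEL release package are assumptions here; the full article lists the exact three repositories:

    # CentOS Storage SIG repository carrying the GlusterFS 11 packages (assumed name)
    dnf install -y centos-release-gluster11
    # EPEL repository for additional dependencies (assumed)
    dnf install -y epel-release
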
Keep on reading!

Migrate from NFS Kernel Server to NFS-Ganesha server under CentOS Stream 9

This article shows how to migrate from the NFS kernel server to the NFS-Ganesha server under CentOS Stream 9. The most important things when migrating from one program to another are how long the downtime will be and what the clients are expected to do. In this case, what do the clients need to do when NFS-Ganesha is used for the server?

[Screenshot: install nfs ganesha]

Here are the main points when migrating from NFS Kernel Server to the NFS-Ganesha:

  • The nfs-utils and nfs-ganesha packages – and, in general, the two pieces of software – are perfectly fine installed on the same system. There are no conflicts when the NFS Kernel Server and the NFS-Ganesha server are installed at the same time on the same system.
  • The clients do not need to do anything, except remount the NFS mounts.
  • A new community repository should be installed via the centos-release-nfs-ganesha5 package. The repository is maintained by a Special Interest Group (SIG) within the CentOS community.
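
A minimal installation sketch – the repository package is the one named above, while the other package names (for the server and the VFS FSAL back-end used for plain directory exports) are assumptions for a typical setup:

    dnf install -y centos-release-nfs-ganesha5
    # nfs-ganesha-vfs provides the VFS FSAL for local file system exports (assumed)
    dnf install -y nfs-ganesha nfs-ganesha-vfs
    systemctl enable --now nfs-ganesha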

For the installation of NFS-Ganesha and detailed information, check out the older articles on the subject – Simple export of a ext4 directory with NFS Ganesha 3.5 server in CentOS 8 with SELinux enforcing, Simple export of a ext4 directory with NFS Ganesha 3.5 server in CentOS 8 without SELinux and Create and export a GlusterFS volume with NFS-Ganesha in CentOS 8.

Prerequisite – NFS Kernel Configuration

NFS Kernel Server is installed with the nfs-utils package (and its dependencies) and has the following simple configuration:

[root@srv ~]# cat /etc/exports
/mnt/storage           192.168.0.0/24(rw,sync,no_root_squash,no_subtree_check)

And here are the NFS services on the system:

[root@srv ~]# systemctl |grep nfs
  proc-fs-nfsd.mount                                         loaded active mounted   NFSD configuration filesystem
  var-lib-nfs-rpc_pipefs.mount                               loaded active mounted   RPC Pipe File System
  nfs-idmapd.service                                         loaded active running   NFSv4 ID-name mapping service
  nfs-mountd.service                                         loaded active running   NFS Mount Daemon
  nfs-server.service                                         loaded active exited    NFS server and services
  nfsdcld.service                                            loaded active running   NFSv4 Client Tracking Daemon
  nfs-client.target                                          loaded active active    NFS client services

The server’s firewall has already been tuned for the NFS kernel server, so there is no need to edit anything in the firewall for the NFS-Ganesha server.
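
For comparison, the /etc/exports line above would translate to roughly the following NFS-Ganesha export block – a sketch assuming the VFS FSAL and /etc/ganesha/ganesha.conf as the configuration file; the Export_Id is arbitrary:

    EXPORT {
        Export_Id = 1;                   # arbitrary unique export id
        Path = /mnt/storage;             # the directory exported above
        Pseudo = /mnt/storage;           # NFSv4 pseudo-fs path
        Access_Type = RW;
        Squash = No_Root_Squash;         # matches no_root_squash in /etc/exports
        FSAL {
            Name = VFS;                  # plain local file system back-end
        }
        CLIENT {
            Clients = 192.168.0.0/24;    # same network as in /etc/exports
        }
    }
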
Keep on reading!

Switch to a new master (primary) in MySQL InnoDB Cluster 8

Switching to a new master (or new primary, to use the new naming) in a MySQL 8 InnoDB Cluster is simple with the MySQL Shell console and the cluster object’s setPrimaryInstance function.

[Screenshot: MySQL Shell with setPrimaryInstance]

Why would someone need to do it manually? One of the reasons may be that one of the nodes is on the same physical server as the application and thus promises lower latency.

First, get a cluster object of the cluster by connecting to the cluster API with MySQL Shell:

[root@db-cluster-1 ~]# mysqlsh
MySQL Shell 8.0.28

Copyright (c) 2016, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
 MySQL  JS > \connect clusteradmin@db-cluster-1
Creating a session to 'clusteradmin@db-cluster-1'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 166928419 (X protocol)
Server version: 8.0.28 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
 MySQL  db-cluster-1:33060+ ssl  JS > var cluster = dba.getCluster()

Second, show the status of the cluster to get the cluster topology and the exact node names, which will be used as the argument to setPrimaryInstance. Still in the MySQL Shell console:

 MySQL  db-cluster-1:33060+ ssl  JS > cluster.status()
{
    "clusterName": "mycluster1", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "db-cluster-2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "db-cluster-1:3306": {
                "address": "db-cluster-1:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": null, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }, 
            "db-cluster-2:3306": {
                "address": "db-cluster-2:3306", 
                "memberRole": "PRIMARY", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "replicationLag": null, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }, 
            "db-cluster-3:3306": {
                "address": "db-cluster-3:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": null, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "db-cluster-2:3306"
}
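
From the topology above, switching the primary to, for example, db-cluster-1 comes down to a single call in the same MySQL Shell session (a sketch – the full article walks through the output and the checks around it):

 MySQL  db-cluster-1:33060+ ssl  JS > cluster.setPrimaryInstance("db-cluster-1:3306")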

Keep on reading!

Viewing the progress of MySQL 8 Cluster InnoDB recovery

This article will show several handy MySQL commands for viewing the progress of a MySQL 8 Cluster recovery and how administrators may keep track of how much time a MySQL InnoDB Cluster node will need to complete the recovery procedure.

[Screenshot: SHOW REPLICA STATUS CHANNEL ‘group_replication_recovery’]

If the reader needs to recover from a node failure, there is another article – Recovery of MySQL 8 Cluster instance after server crash and corrupted data in log event. In this article, the MySQL commands are executed on a CentOS Stream 8 with a MySQL InnoDB 8 Cluster (here is how it is installed – Install and deploy MySQL 8 InnoDB Cluster with 3 nodes under CentOS 8 and MySQL Router for HA), in which one node had been offline for several weeks. The node was powered off normally, so the MySQL instance on the server was shut down gracefully.
Initially, after the power down, the cluster state was two active nodes with one missing.
Use MySQL Shell Console to view the MySQL InnoDB 8 Cluster status:

[root@db-cluster-1 ~]# mysqlsh
MySQL Shell 8.0.28

Copyright (c) 2016, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
 MySQL  JS > \connect clusteradmin@db-cluster-1
Creating a session to 'clusteradmin@db-cluster-1'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 158633505 (X protocol)
Server version: 8.0.28 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
 MySQL  db-cluster-1:33060+ ssl  JS > var cluster = dba.getCluster()
 MySQL  db-cluster-1:33060+ ssl  JS > cluster.status()
{
    "clusterName": "mycluster1", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "db-cluster-2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active.", 
        "topology": {
            "db-cluster-1:3306": {
                "address": "db-cluster-1:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": null, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }, 
            "db-cluster-2:3306": {
                "address": "db-cluster-2:3306", 
                "memberRole": "PRIMARY", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "replicationLag": null, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }, 
            "db-cluster-3:3306": {
                "address": "db-cluster-3:3306", 
                "memberRole": "SECONDARY", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2003: Could not open connection to 'db-cluster-3:3306': Can't connect to MySQL server on 'db-cluster-3:3306' (111)", 
                "status": "(MISSING)"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "db-cluster-2:3306"
}
 MySQL  db-cluster-1:33060+ ssl  JS >

When the third server in the cluster is started (after a clean shutdown), the third node will be in a recovery state. Here is the same status command with the third node in recovery, still in the MySQL Shell console:
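
Besides cluster.status(), the recovery progress can also be followed with plain SQL on the recovering node, as the screenshot caption above hints. A minimal sketch, using the SHOW REPLICA STATUS syntax available since MySQL 8.0.22:

mysql> SHOW REPLICA STATUS FOR CHANNEL 'group_replication_recovery'\G
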
Keep on reading!

Move or backup all database measurements for a single host to another Influxdb server

This article demonstrates how to move part of the data from one InfluxDB server to another InfluxDB server – in effect, splitting the data between two servers by a given criterion. The InfluxDB servers are version 1.8 and the InfluxQL language is used. All useful InfluxQL queries will be included. All queries are executed in the influx command-line tool, which connects to the default InfluxDB location – http://localhost:8086, so it is important to be able to connect to InfluxDB using the influx command-line tool. Unfortunately, it is not possible to use the influxd backup command to select only certain data from a database, despite the data being easily selectable by a unique tag value such as the hostname of the reporting server. The whole setup follows this article – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9.

[Screenshot: Show series]

The initial setup – getting to know the database schema

Here is the initial setup of the first InfluxDB server. Multiple servers (i.e. hosts) report data to this InfluxDB server, and the target is to move all measurement data of a single reporting server to another InfluxDB server, which has already been accepting the new data. By moving the old data from the first InfluxDB server to the other InfluxDB server, the historical data for this reporting server (i.e. host) is preserved.

  • InfluxDB database with name collectd.
    [root@srv ~]# influx
    Connected to http://localhost:8086 version 1.8.10
    InfluxDB shell version: 1.8.10
    > SHOW DATABASES
    name: databases
    name
    ----
    _internal
    collectd
    >
    

    It is important to show the retention policy, too. The retention policy is used to build the queries.

    [root@srv ~]# influx
    Connected to http://localhost:8086 version 1.8.10
    InfluxDB shell version: 1.8.10
    > SHOW RETENTION POLICIES ON "collectd"
    name    duration shardGroupDuration replicaN default
    ----    -------- ------------------ -------- -------
    default 0s       168h0m0s           1        true
    

    The retention policy name of the database “collectd” is “default”. Always check the retention policy, because it might have a different name. For example, creating a database without specifying a retention policy will add a retention policy with the default name “autogen”.
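
    For reference, the same database and retention policy can be created explicitly on the destination server, so the queries work with identical names on both sides (a sketch, assuming the names and values shown above):

    > CREATE DATABASE "collectd" WITH DURATION INF REPLICATION 1 SHARD DURATION 168h NAME "default"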

  • There are multiple measurements in the collectd database. Show all measurements associated with this database (i.e. collectd)
    [root@srv ~]# influx
    Connected to http://localhost:8086 version 1.8.10
    InfluxDB shell version: 1.8.10
    > SHOW MEASUREMENTS LIMIT 10
    name: measurements
    name
    ----
    clickhouse_value
    conntrack_value
    cpu_value
    dbi_value
    df_value
    disk_io_time
    disk_read
    disk_value
    disk_weighted_io_time
    disk_write
    

    There is a limit clause – “LIMIT 10” – to show only the first 10 measurements, because the whole list may be too big. The limit clause can be omitted to show the whole list of measurements associated with the database collectd.
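
    The core idea of the split is then to select only one reporting server’s points by the host tag. A minimal sketch (the hostname srv1 is a placeholder):

    > SELECT * FROM "collectd"."default"."cpu_value" WHERE "host" = 'srv1' LIMIT 5
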
    Keep on reading!

Create graph for Linux Processes grouped by states using Grafana, InfluxDB and collectd

This article shows how to make a graph of a Linux machine’s process states. The Processes collectd plugin can gather the number of processes grouped by their state, or metadata per selected process defined in the configuration (the metadata includes process state, resident set size (RSS), system/user time used, and so on). The purpose of this article is to make a graph with all the processes grouped by their state. Graphs of per-process data are not included here.

[Screenshot: Processes states of a live web server]

The Linux machine uses collectd to gather the processes statistics and send them to the time series back-end – InfluxDB. Grafana is used to visualize the data stored in the time series back-end InfluxDB and to organize the graphs in panels and dashboards. Check out the previous articles on the subject on how to install and configure such software to collect, store and visualize data – Monitor and analyze with Grafana, influxdb 1.8 and collectd under CentOS Stream 9, Monitor and analyze with Grafana, influxdb 1.8 and collectd under Ubuntu 22.04 LTS and Create graph for Linux CPU usage using Grafana, InfluxDB and collectd.
The collectd daemon is used to gather data on the Linux system and to send it to the back-end InfluxDB.

Key knowledge for the Processes collectd plugin

  • The collectd plugin Processes official page – https://collectd.org/wiki/index.php/Plugin:Processes
  • The Processes plugin options – https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_processes
  • To enable the Processes plugin, load it with the LoadPlugin directive in /etc/collectd.conf:
    LoadPlugin processes
    
  • The Processes plugin collects data every 10 seconds.
  • processes_value – a single Gauge value – a metric whose value can go up and down. It is used to count the number of processes in the different states (the state is saved in a tag value of each record). So there are multiple gauge values with different tags for the different process states at a given time.
    tag key         tag value            description
    host            server hostname      The name of the source this measurement was recorded from.
    type            ps_state             ps_state is the type, which groups the processes by state.
    type_instance   processes’ states    The states are blocked, paging, running, sleeping, stopped, zombies.
  • A Gauge value – a metric whose value can go up and down. More on the topic – Data sources.

    A GAUGE value is simply stored as-is. This is the right choice for values which may increase as well as decrease, such as temperatures or the amount of memory used.

  • To cross-check the values, the user can read /proc/stat:
    [root@srv ~]# cat /proc/stat 
    cpu  804 0 732 6240 198 106 25 0 0 0
    cpu0 444 0 345 3092 121 44 14 0 0 0
    cpu1 359 0 387 3147 76 62 11 0 0 0
    intr 72376 117 9 0 0 0 0 0 0 1 2 0 0 156 0 187 187 0 0 188 273 0 0 0 0 0 0 6574 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    ctxt 216350
    btime 1667997331
    processes 1359
    procs_running 2
    procs_blocked 0
    softirq 38704 2 5003 5 290 6565 0 74 5796 0 20969
    

    Some of the lines are pretty clear about what they mean – “procs_running”, “processes”, “procs_blocked” and so on.
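
    An additional quick sanity check, not from the original article – counting the current processes by the first letter of their STAT column with ps (R running, S sleeping, D blocked, Z zombie, T stopped):

    [root@srv ~]# ps -e -o stat= | cut -c1 | sort | uniq -c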

Keep on reading!