Living in the era of cloud and big data, there comes a time when the admin may need to save all their logs in a central place, and Elasticsearch and Kibana look good for the job! But after months of hassle-free work, Elasticsearch just stopped working, and after a restart and an upgrade (of both Elasticsearch and Kibana), Kibana showed an error message:
Kibana server is not ready yet
And if you have tried stopping and starting Kibana and Elasticsearch, and Kibana still shows the above message, here is what you should do:
- Check whether the two services, Kibana and Elasticsearch, are running! If one of them is missing, start it.
- Search the logs, especially the Kibana logs. The first place to check is the systemd journal with the journalctl program (systemctl status will also point out the problem by showing the last lines of the logs).
- Look at the last lines, and if they include
Another Kibana instance appears to be migrating the index
This article is probably the right place to solve the issue and get your setup running successfully again.
STEP 1) Running services and analyzing the logs.
If Kibana and Elasticsearch are managed by systemd, it is easy to access the logs with systemctl and journalctl.
Check whether Kibana and Elasticsearch are running with:
[root@loganalyzer ~]# ps ax|grep elasticsearch|grep -v grep
  258 ?        Ssl  836:31 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.locale.providers=COMPAT -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.io.tmpdir=/tmp/elasticsearch-13303119363353782625 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m -XX:MaxDirectMemorySize=536870912 -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -Des.bundled_jdk=true -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
  360 ?        Sl     0:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
[root@loganalyzer ~]# ps ax|grep kibana|grep -v grep
 1284 ?        Ssl    4:32 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
If one of the two services is missing, you must start it! And second, each service should have only one instance (i.e. one process)!
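For example, a missing service can be started (and enabled on boot) with systemd. A minimal sketch, assuming the standard unit names kibana.service and elasticsearch.service shipped by the RPM packages:

# Start the missing service(s)
systemctl start elasticsearch.service
systemctl start kibana.service
# Optionally, make sure both come back after a reboot
systemctl enable elasticsearch.service kibana.service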
Then check the Kibana logs with journalctl:
[root@loganalyzer ~]# journalctl -u kibana.service
.....
.....
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","plugins","bfetch"],"pid":1219,"message":"Setting up plugin"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","savedobjects-service"],"pid":1219,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","savedobjects-service"],"pid":1219,"message":"Starting saved objects migrations"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","savedobjects-service"],"pid":1219,"message":"Creating index .kibana_task_manager_2."}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["warning","savedobjects-service"],"pid":1219,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_2/O070AunfSyG6hwd6_pqqRA] already exists, with { index_uuid=\"O070AunfSyG6hwd6_pqqRA\" & index=\".kibana_task_manager_2\" }"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["warning","savedobjects-service"],"pid":1219,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_2 and restarting Kibana."}
The systemctl status command may be used, too. The error and the index name are shown in the last lines of the status output – see below.
The problem here is that there was a migration of the index .kibana_task_manager_2, but it was abandoned for an unknown reason, and now we should delete it to be able to use our Kibana service. The index in your logs might have a different name, but it is the same problem.
STEP 2) Delete the Kibana index
Delete the Kibana index in the Elasticsearch backend using curl and an HTTP/HTTPS request such as:
[root@loganalyzer kibana]# curl -XDELETE http://192.168.0.2:9200/.kibana_task_manager_2
{"acknowledged":true}
Change the IP to the IP or domain name that Elasticsearch is bound to. You must receive a JSON response acknowledging the deletion, like the {"acknowledged":true} above.
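If the Elasticsearch HTTP endpoint is served over HTTPS with authentication enabled (X-Pack security, for example), the same delete request needs credentials and the CA certificate. A sketch, assuming a hypothetical user elastic and a certificate path that you must adjust to your setup:

# Hypothetical credentials and certificate path - replace with your own
curl -XDELETE -u elastic:yourpassword --cacert /etc/elasticsearch/certs/ca.crt https://192.168.0.2:9200/.kibana_task_manager_2
# -k (insecure) may be used instead of --cacert for self-signed certificates, at your own risk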
STEP 3) Start (or restart) the Kibana service
If it still shows the error message, you may want to check the logs again for another index name. In general, this should fix the problem, and the Kibana service should start successfully.
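Restarting with systemd and following the startup in the journal (the same unit name as above):

systemctl restart kibana.service
# Follow the Kibana log live while the saved objects migration runs
journalctl -u kibana.service -f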
Bear in mind the Elasticsearch index may have a different name, so the first thing to do is to check the logs for the name of the index! If you search the net, you may discover that the indexes commonly involved in such trouble situations are:
curl -XDELETE http://localhost:9200/.kibana
curl -XDELETE 'http://localhost:9200/.kibana*'
curl -XDELETE http://localhost:9200/.kibana_2
curl -XDELETE http://localhost:9200/.kibana_1
Still, the best way is just to check the logs to find the troublesome Kibana index in the Elasticsearch backend.
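Before deleting anything blindly, you can also list the Kibana-related indices and their state with the Elasticsearch _cat API (adjust the host and port to your setup):

# List the .kibana* indices; ?v adds a header line (health, status, index, uuid, sizes)
curl 'http://192.168.0.2:9200/_cat/indices/.kibana*?v'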
Troubleshooting
If you receive connection refused:
[root@loganalyzer kibana]# curl -XDELETE http://localhost:9200/.kibana_task_manager_2
curl: (7) Failed connect to localhost:9200; Connection refused
Either Elasticsearch is not working and you must start the service, or the Elasticsearch service is not bound to that IP (or domain name), or the port is wrong (i.e. it was changed in the configuration). You may check the firewall settings, too.
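A quick way to verify what is actually listening and what Elasticsearch is configured to bind to. A sketch, assuming the default port 9200 and the standard RPM configuration path (matching the -Des.path.conf=/etc/elasticsearch seen in the process list above):

# Is anything listening on port 9200?
ss -tlnp | grep 9200
# Which bind address and port are configured?
grep -E 'network.host|http.port' /etc/elasticsearch/elasticsearch.yml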
Systemd status
Just for completeness, the “systemctl status” command could be used, too:
[root@loganalyzer elasticsearch]# systemctl status kibana -l
● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-04-24 23:09:24 UTC; 3h 44min ago
 Main PID: 1219 (node)
   CGroup: /system.slice/lxc.service/system.slice/kibana.service
           └─1219 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","plugins","spaces"],"pid":1219,"message":"Setting up plugin"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","plugins","cloud"],"pid":1219,"message":"Setting up plugin"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","plugins","apm"],"pid":1219,"message":"Setting up plugin"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","plugins","graph"],"pid":1219,"message":"Setting up plugin"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","plugins","bfetch"],"pid":1219,"message":"Setting up plugin"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","savedobjects-service"],"pid":1219,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","savedobjects-service"],"pid":1219,"message":"Starting saved objects migrations"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["info","savedobjects-service"],"pid":1219,"message":"Creating index .kibana_task_manager_2."}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["warning","savedobjects-service"],"pid":1219,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_2/O070AunfSyG6hwd6_pqqRA] already exists, with { index_uuid=\"O070AunfSyG6hwd6_pqqRA\" & index=\".kibana_task_manager_2\" }"}
Apr 24 23:09:31 loganalyzer kibana[1219]: {"type":"log","@timestamp":"2020-04-24T23:09:31Z","tags":["warning","savedobjects-service"],"pid":1219,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_2 and restarting Kibana."}