Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 / RHEL 7

The ELK stack, also known as the Elastic stack, consists of Elasticsearch, Logstash, and Kibana. It helps you store all of your logs in one place and analyze issues by correlating events at a particular time.

This guide helps you install the ELK stack on CentOS 7 / RHEL 7.


Logstash – Processes incoming logs sent by Beats (forwarders): it collects and enriches them, then sends them to Elasticsearch.

Elasticsearch – Stores the incoming logs from Logstash and provides the ability to search the logs/data in real time.

Kibana – Provides visualization of the logs.

Beats – Installed on client machines; ships logs to Logstash over the Beats protocol.


To have a full-featured ELK stack, we need two machines to test the collection of logs.

ELK Stack:

Operating system : CentOS 7 Minimal
IP Address       :
HostName         : server.itzgeek.local


Client:

Operating System : CentOS 7 Minimal
IP Address       :
HostName         : client.itzgeek.local


Since Elasticsearch is based on Java, make sure you have either OpenJDK or Oracle JDK installed on your machine. Verify the installed version:

# java -version

openjdk version "1.8.0_101"
OpenJDK Runtime Environment (build 1.8.0_101-b13)
OpenJDK 64-Bit Server VM (build 25.101-b13, mixed mode)
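If Java is not installed yet, you can install OpenJDK 1.8 from the base repository (shown here as one option; the Oracle JDK works as well).

# yum -y install java-1.8.0-openjdk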

Install Elasticsearch:

Elasticsearch is an open-source search engine that offers real-time distributed search and analytics through a RESTful interface. Elasticsearch stores all the data sent by Logstash and displays it through the web interface (Kibana) on the user's request.

Set up the Elasticsearch repository. Import the GPG key, then create the repository file (the baseurl below is the standard Elastic package location for the 2.x series):

# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
# vi /etc/yum.repos.d/elasticsearch.repo

[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

Install Elasticsearch.

# yum install -y elasticsearch

Configure Elasticsearch to start during system startup.

# systemctl daemon-reload
# systemctl enable elasticsearch.service && systemctl start elasticsearch.service

Use curl to check whether Elasticsearch is responding to queries.

# curl -X GET http://localhost:9200
{
  "name" : "Marvel Boy",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.5",
    "build_hash" : "90f439ff60a3c0f497f91663701e64ccd01edbb4",
    "build_timestamp" : "2016-07-27T10:36:52Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

Install Logstash:

Logstash is an open-source tool for managing events and logs: it collects logs, parses them, and stores them in Elasticsearch for searching. More than 160 plugins are available for Logstash, providing the capability to process different types of events with no extra work.

Let's add the Logstash repository (the baseurl below is the standard Elastic package location for Logstash 2.3).

# vi /etc/yum.repos.d/logstash.repo

[logstash-2.3]
name=Logstash repository for 2.3.x packages
baseurl=https://packages.elastic.co/logstash/2.3/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

Install the Logstash package.

# yum -y install logstash

Create SSL certificate:

Filebeat (the Logstash forwarder) is normally installed on client servers, and it uses an SSL certificate to validate the identity of the Logstash server for secure communication.

Create the SSL certificate with either the hostname or an IP SAN.

Option 1: (Hostname FQDN)

If you use the Logstash server's hostname in the Beats (forwarder) configuration, make sure you have an A record for the Logstash server, and ensure that the client machine can resolve its hostname.

Set up a host entry on the client machine in case your environment does not have a name server. Add a line with the IP address of your Logstash server (left as a placeholder here):

# vi /etc/hosts

<logstash-server-ip> server.itzgeek.local server
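Verify that the client can now resolve the Logstash server:

# ping -c 2 server.itzgeek.local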

Go to the OpenSSL directory.

# cd /etc/ssl/

Now, create the SSL certificate. Replace server.itzgeek.local with the hostname of your real Logstash server.

# openssl req -x509 -nodes -newkey rsa:2048 -days 365 -keyout logstash-forwarder.key -out logstash-forwarder.crt -subj /CN=server.itzgeek.local
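Optionally, verify that the certificate subject matches your Logstash server's hostname:

# openssl x509 -in logstash-forwarder.crt -noout -subject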

Option 2: (IP Address)

If you are planning to use an IP address instead of a hostname, follow these steps to create an SSL certificate for an IP SAN.

To create an IP SAN certificate, you need to add the IP address of the Logstash server to the subjectAltName entry in the OpenSSL config file.

# vi /etc/ssl/openssl.cnf

Look for the "[ v3_ca ]" section and add the IP address of your Logstash server (left as a placeholder here):

subjectAltName = IP:<logstash-server-ip>

Go to the OpenSSL directory.

# cd /etc/ssl/

Now, create an SSL certificate by running the following command.

# openssl req -x509 -days 365 -batch -nodes -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt
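Optionally, confirm that the IP SAN made it into the certificate:

# openssl x509 -in logstash-forwarder.crt -noout -text | grep -A 1 "Subject Alternative Name"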

This logstash-forwarder.crt must be copied to every client server that sends logs to the Logstash server.

Configure Logstash:

Logstash configuration files live in /etc/logstash/conf.d/. If no file exists, create a new one. A Logstash configuration consists of three sections: input, filter, and output. All three sections can live in a single file, or each section can have its own file ending in .conf.

I recommend using a single file containing the input, filter, and output sections.

# vi /etc/logstash/conf.d/logstash.conf

In the first section, we will put the input configuration. The following configuration sets Logstash to listen on port 5044 for incoming logs from the Beats (forwarders) that sit on client machines. It also adds the SSL certificate details to the input section for secure communication.

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/ssl/logstash-forwarder.key"
    congestion_threshold => "40"
  }
}

In the filter section, we will use grok to parse the logs before sending them to Elasticsearch. The following grok filter looks for logs labeled "syslog" and tries to parse them into a structured index.

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

For more filter patterns, take a look at the Grok Debugger page.

In the output section, we define where the logs get stored; obviously, it should be Elasticsearch.

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
Now, start and enable Logstash.

# systemctl start logstash
# systemctl enable logstash

You can troubleshoot any issues by looking at the log below.

# cat /var/log/logstash/logstash.log
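You can also confirm that Logstash is listening on the Beats port (it may take a minute after startup):

# ss -lntp | grep 5044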

Next, we will configure Beats to ship logs to the Logstash server.

Install Filebeat:

There are four Beats clients available:

  1. Packetbeat – Analyze network packet data.
  2. Filebeat – Real-time insight into log data.
  3. Topbeat – Get insights from infrastructure data.
  4. Metricbeat – Ship metrics to Elasticsearch.

To analyze the system logs, we will be using Filebeat here. You can download Filebeat from the official website, or use the following commands to install it (the baseurl below is the standard Elastic Beats package location):

# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

# vi /etc/yum.repos.d/beats.repo

[beats]
name=Elastic Beats Repository
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1

# yum -y install filebeat

Filebeat uses the SSL certificate to validate the Logstash server's identity, so copy the logstash-forwarder.crt from the Logstash server to the client.

$ scp -pr root@server.itzgeek.local:/etc/ssl/logstash-forwarder.crt /etc/ssl

Configure Filebeat:

Now it's time to connect Filebeat with Logstash; follow the steps below to get Filebeat configured with the ELK stack.

The Filebeat configuration file is in YAML format, which means indentation is very important. Make sure you use the same number of spaces as in this guide.

Open up the filebeat configuration file.

# vi /etc/filebeat/filebeat.yml

At the top, you will see the prospectors section, which specifies which log files should be sent to Logstash and how they should be handled. Each prospector entry begins with a - (dash) character.

For testing purposes, we will configure Filebeat to send /var/log/messages to the Logstash server. To do that, modify the existing prospector under the paths section. Comment out - /var/log/*.log to avoid sending every .log file in that directory to Logstash.

.  .  .

      paths:
        - /var/log/messages
        # - /var/log/*.log

.  .  .

Find the line below, uncomment it, and set the value to "syslog". It defines the value of the "_type" field in the Elasticsearch output, meaning the logs from this prospector are of type syslog.

.  .  .

      document_type: syslog

.  .  .

In the "output:" section, comment out the elasticsearch: section, as we are not going to store logs directly in Elasticsearch.

Now, find the line "logstash:" and modify the entries as below. This section configures Filebeat to send logs to the Logstash server "server.itzgeek.local" on port "5044", and sets the path where the copied SSL certificate is placed.

Note: Replace "server.itzgeek.local" with the IP address if you are using an IP SAN certificate.

.   .   .

  logstash:
    hosts: ["server.itzgeek.local:5044"]

    tls:
      certificate_authorities: ["/etc/ssl/logstash-forwarder.crt"]

.   .   .

Restart the service.

# systemctl restart filebeat

Filebeat logs are typically found in the syslog file.

# cat /var/log/messages
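On the ELK server, you can also query Elasticsearch directly to confirm that a Filebeat index is being populated (the index name comes from the index pattern set in the Logstash output section):

# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty&size=1'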


Configure Firewall:

Configure a firewall on the ELK stack node to receive logs from the client machines. Open the following ports:

5044 – for Logstash to receive logs.

5601 – to access the Kibana interface from an external machine.

# firewall-cmd --permanent --zone=public --add-port=5044/tcp
# firewall-cmd --permanent --zone=public --add-port=5601/tcp
# firewall-cmd --reload
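Verify that both ports were added:

# firewall-cmd --list-ports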

Configure Kibana 4:

Kibana provides visualization of the logs stored in Elasticsearch. Download it from the official website, or use the following commands to set up the repository on the server node (the baseurl below is the standard Elastic package location for Kibana 4.5).

# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
# vi /etc/yum.repos.d/kibana.repo

[kibana-4.5]
name=Kibana repository for 4.5.x packages
baseurl=https://packages.elastic.co/kibana/4.5/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

Install Kibana using the following command.

# yum -y install kibana

Start Kibana and enable it on system startup.

# systemctl start kibana
# systemctl enable kibana
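Confirm that Kibana is listening on port 5601 before opening it in a browser (it can take a few seconds to start):

# ss -lntp | grep 5601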

Access Kibana using the following URL (replace the hostname with your server's IP address if needed).

http://server.itzgeek.local:5601/

On your first login, you have to map the Filebeat index.

Type the following in the Index name or pattern box (it matches the index pattern set in the Logstash output section):

filebeat-*

Select @timestamp and then click Create.

[Screenshot: Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Mapping Index]

Now it will redirect you to the Kibana main page. Here, you can run search queries and view the incoming logs.

[Screenshot: Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Search Kibana]

That's all; you now have an ELK stack running on CentOS 7 / RHEL 7.

