How to Set Up the ELK Stack on Debian 9 / Debian 8


The ELK stack is a full-featured data analytics platform consisting of Elasticsearch, Logstash, and Kibana. It helps you store and manage logs centrally and lets you analyze issues by correlating events at a particular time.

This article helps you install the ELK stack on Debian 9 / Debian 8.


Listed below are the components of the ELK stack and their purposes.

Elasticsearch – Stores incoming logs from Logstash and provides the ability to search the logs/data in real time.

Logstash – Processes incoming logs sent by Beats (forwarders): collects them, enriches them, and sends them to Elasticsearch.

Kibana – Provides visualization of events and logs.

Beats – Installed on client machines; sends logs to Logstash or Elasticsearch over the Beats protocol.


Switch to the root user.

su -

Or, if you are using a sudo-enabled account:

sudo su -

Elasticsearch requires either OpenJDK or Oracle JDK to be available on your machine. For this demo, I am using OpenJDK.

apt-get update
apt-get install -y openjdk-8-jdk

Check the Java version.

java -version


openjdk version "1.8.0_141"
OpenJDK Runtime Environment (build 1.8.0_141-8u141-b15-1~deb9u1-b15)
OpenJDK 64-Bit Server VM (build 25.141-b15, mixed mode)

If you want to use OracleJDK, then read:

READ: How to install Oracle Java 8 on Debian 9 / Debian 8

Install wget and HTTPS support for apt

apt-get install -y wget apt-transport-https

Install Elasticsearch

To begin with, we will install the Elasticsearch server, an open-source search engine based on Lucene. It provides a real-time, distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.

Elasticsearch stores the data sent by Logstash and displays it through Kibana on user request. The ELK stack packages can be easily obtained from Elastic by setting up its official repository.

wget -qO - | sudo apt-key add -
echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/elk.list

Install Elasticsearch using the following command (v5.5.2 at the time of writing this article).

apt-get update
apt-get install -y elasticsearch

Start the Elasticsearch service.

systemctl enable elasticsearch
systemctl start elasticsearch

Wait a few minutes, then run the following command to check the status of the Elasticsearch REST interface.

curl -X GET http://localhost:9200


{
  "name" : "deHukIE",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "wg9dVw4pQaG8Pag8LSPcIQ",
  "version" : {
    "number" : "5.5.2",
    "build_hash" : "b2f0c09",
    "build_date" : "2017-08-14T12:33:14.154Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}

The above output confirms that Elasticsearch is up and running fine.
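Optionally, you can also query the cluster health endpoint; a status of green or yellow indicates a healthy single-node setup:

```shell
# Query the cluster health endpoint; "status" should be green or yellow
curl -s 'http://localhost:9200/_cluster/health?pretty'
```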

Install Logstash

Logstash is an open-source data collection and log-parsing engine. It collects logs, parses them, and stores them in Elasticsearch for searching. Over 160 plugins are available for Logstash, which provide the capability to process different types of events with no extra work.

apt-get install -y logstash

Create SSL certificate for Logstash

The forwarder (Filebeat) that we install on client machines uses an SSL certificate to validate the identity of the Logstash server for secure transmission of logs.

Create the SSL certificate either with the hostname or IP SAN.

Option 1: (Hostname or FQDN)

If you plan to use the hostname in the Beats (forwarder) configuration, make sure the client machines can reach the Logstash server using that hostname.

Go to the OpenSSL directory.

cd /etc/ssl/

Now, create the SSL certificate with OpenSSL. Replace server.itzgeek.local with the hostname of your Logstash server.

openssl req -x509 -nodes -newkey rsa:2048 -days 365 -keyout logstash-forwarder.key -out logstash-forwarder.crt -subj /CN=server.itzgeek.local
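To verify the certificate was created as expected, you can optionally inspect it; the subject should show the CN you passed (server.itzgeek.local in this example):

```shell
# Print the certificate subject and validity dates to confirm the CN is correct
openssl x509 -in logstash-forwarder.crt -noout -subject -dates
```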

Option 2: (IP Address)

Use the below steps to create an SSL certificate for IP SAN.

As a prerequisite, we need to add the IP address of the Logstash server to the SubjectAltName in the OpenSSL configuration file.

nano /etc/ssl/openssl.cnf

Look for the “[ v3_ca ]” section and update subjectAltName with the IP address of your Logstash server.

subjectAltName = IP:

Go to the OpenSSL directory.

cd /etc/ssl/

Now, create the SSL certificate by running the following command.

openssl req -x509 -days 365 -batch -nodes -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt

The private key should be in the PKCS8 format, so convert it using the following command.

openssl pkcs8 -in logstash-forwarder.key  -topk8 -nocrypt -out logstash-forwarder.key.pem
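To confirm the conversion succeeded, you can optionally check the new key; a PKCS#8 key carries a generic "BEGIN PRIVATE KEY" header:

```shell
# A PKCS#8 key starts with a generic "BEGIN PRIVATE KEY" header
head -1 logstash-forwarder.key.pem

# Ask OpenSSL to parse the key; it exits non-zero if the key is invalid
openssl pkey -in logstash-forwarder.key.pem -noout
```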

Change the file permissions.

chmod 644 /etc/ssl/logstash-forwarder.key.pem

Configure Logstash

The Logstash configuration file consists of three sections: input, filter, and output. You can put all three sections in a single file, or in a separate file for each section, each ending with .conf.

Here, we use a single file for the input, filter, and output sections. Create a configuration file under the /etc/logstash/conf.d/ directory.

vi /etc/logstash/conf.d/logstash.conf

In the input section, we will configure Logstash to listen on port 5044 for incoming logs from the Beats (forwarders) that sit on client machines.

Also, add the SSL certificate details in the input section for secure communication.

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/ssl/logstash-forwarder.key.pem"
    congestion_threshold => "40"
  }
}

In the filter section, we will use grok to parse the logs before sending them to Elasticsearch for storage.

The following grok filter looks for logs labeled “syslog” and tries to parse them into a structured index.

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}


For more grok filter patterns, take a look here.
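As an illustration (field names here follow the stock SYSLOGLINE pattern; the sample line is hypothetical), this is roughly what the filter extracts from a syslog entry:

```
# sample input line
Sep  1 10:15:32 client1 sshd[1234]: Accepted password for root from 10.0.0.5

# fields extracted by %{SYSLOGLINE} (illustrative)
timestamp => "Sep  1 10:15:32"
logsource => "client1"
program   => "sshd"
pid       => "1234"
message   => "Accepted password for root from 10.0.0.5"
```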

In the output section, we will define where the logs are to be stored: obviously Elasticsearch, in the case of the ELK stack.

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
Now start and enable the logstash service.

systemctl start logstash
systemctl enable logstash

If you face any issues, take a look at the logstash-plain.log file.

cat /var/log/logstash/logstash-plain.log

Install and Configure Kibana

Kibana provides visualization of the data stored in Elasticsearch. Install Kibana using the following command.

apt-get install -y kibana

By default, Kibana listens on localhost, which means you cannot access the Kibana web interface from another machine. To allow remote access, edit the /etc/kibana/kibana.yml file.

nano /etc/kibana/kibana.yml

Update the server.host line with your server's IP address.

Also, in some cases Elasticsearch and Kibana run on different machines, so update the line below with the IP address of the Elasticsearch server.

elasticsearch.url: "http://localhost:9200"
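After both edits, the relevant lines of /etc/kibana/kibana.yml would look something like this (0.0.0.0 is an assumption that makes Kibana listen on all interfaces; use a specific IP if you prefer):

```yaml
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"
```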

Start Kibana and enable it on machine startup.

systemctl restart kibana
systemctl enable kibana


There are four Beats clients available.

Packetbeat – Analyzes network packet data.
Filebeat – Provides real-time insight into log data.
Topbeat – Gets insights from infrastructure data.
Metricbeat – Ships metrics to Elasticsearch.

Install Filebeat

Filebeat is an agent that runs on the client machine. It sends logs to the Logstash server for parsing or to Elasticsearch for storage, depending on the configuration.

Install HTTPS support for apt.

apt-get update
apt-get install -y apt-transport-https

Filebeat is available in the Elastic repository, so you need to set it up before installing Filebeat.

wget -qO - | sudo apt-key add -
echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/beat.list

Install Filebeat using the following command.

apt-get update
apt-get install -y filebeat

Configure Filebeat

The main configuration file of filebeat is /etc/filebeat/filebeat.yml.

nano  /etc/filebeat/filebeat.yml

We need to edit this file to send logs to the Logstash server. The configuration below sends syslog (/var/log/syslog) to the Logstash server.

For this demo, I have commented out /var/log/*.log to avoid sending all logs to the Logstash server.

.  .  .

- input_type: log
  paths:
    - /var/log/syslog
    #- /var/log/*.log
.  .  .

In the Outputs section, comment out the output.elasticsearch: section, as we are not going to store logs directly in Elasticsearch.

Now, go to the line “output.logstash:” and modify the entries to send logs to Logstash, and mention the path to the copied SSL certificate.

Note: Replace “server.itzgeek.local” with the IP address of the Logstash server if you are using an IP SAN.

.   .   .


output.logstash:
    hosts: ["server.itzgeek.local:5044"]
    ssl.certificate_authorities: ["/etc/ssl/logstash-forwarder.crt"]

.   .   .

You need to copy the logstash-forwarder.crt file to all of the client servers that send logs to the Logstash server. Also ensure that client machines can resolve the hostname of the Logstash server.

scp -pr root@server.itzgeek.local:/etc/ssl/logstash-forwarder.crt /etc/ssl

If you do not have a DNS server in your environment, you need to add a host entry for the Logstash server on all of your client machines.

nano /etc/hosts

Add a line with the Logstash server's IP address followed by server.itzgeek.local.
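For example, the entry can be appended directly (the IP address below is a placeholder):

```shell
# 192.168.1.10 is a placeholder -- replace it with your Logstash server's IP
echo "192.168.1.10 server.itzgeek.local" >> /etc/hosts
```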

Restart the Filebeat service.

systemctl restart filebeat
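To verify that Filebeat can actually reach Logstash over TLS, you can probe port 5044 with openssl s_client. This is an optional check, assuming the certificate paths used earlier in this article; a verify return code of 0 means the certificate was accepted:

```shell
# Replace server.itzgeek.local with your Logstash host (or its IP for IP SAN);
# "Verify return code: 0 (ok)" means the TLS handshake and cert check succeeded
echo | openssl s_client -connect server.itzgeek.local:5044 \
  -CAfile /etc/ssl/logstash-forwarder.crt 2>/dev/null | grep "Verify return code"
```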

Filebeat writes its own log file; check it if you face any issues.

cat /var/log/filebeat/filebeat

Access Kibana Interface

You can access the Kibana web interface on its default port, 5601; replace your-server-ip with the IP address of your Kibana server.

http://your-server-ip:5601

On your first login, you need to map the filebeat index.

Type the following in the Index name or pattern box.

filebeat-*
Select @timestamp and then click Create.

Setup ELK Stack on Debian 9 - Configure Index Pattern

Go through the index patterns and their mappings.

Setup ELK Stack on Debian 9 - Index Patterns Mappings

Click Discover in the left navigation to view the incoming logs from a client machine.

Setup ELK Stack on Debian 9 - Client Logs

That’s All.
