Threat Hunting Lab (Part I): Setting up Elastic Stack 7.2.1

17th February 2020   |   by hilo21

“In the midst of chaos, there is also opportunity”

Sun Tzu, The Art of War


In this tutorial series I will show you how to set up a simple virtual lab environment for testing and studying attack TTPs. The general setup is something like this

PART I : Setting Up Elastic Stack.

We will be using a CentOS 7 server to set up Elastic Stack version 7.2.1

Step 1 : Defining hostnames

We will explicitly define the hostnames of the other servers that we will be using in our small lab. We're talking here about pfSense and the Domain Controller, defined as you can see in the next figure :
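The figure itself is not reproduced here; as a sketch, the /etc/hosts entries might look like the following, where every IP address and hostname is a placeholder to replace with your own lab values:

```
# /etc/hosts -- example lab entries (all IPs and hostnames are placeholders)
192.168.1.1     pfsense
192.168.1.10    dc01
192.168.1.20    elasticsiem
```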

Step 2 : Create an Elastic user

We will use this user to set up Elasticsearch, Kibana and Logstash.
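The exact commands are not shown in the post; as root on CentOS 7, creating the user might look like this sketch (the username elastic matches the shell prompts used later in the post):

```
# useradd -m elastic
# passwd elastic
```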

Step 3 : Configuring ulimits

UNIX/Linux operating systems have the ability to limit the amount of various system resources available to a user process. These limitations include how many files a process can have open, how large of a file the user can create, and how much memory can be used by the different components of the process such as the stack, data and text segments. ulimit is the command used to accomplish this. [reference]

For the ulimits to persist across reboots, we need to set the ulimit values in the configuration file /etc/security/limits.conf

According to the Elasticsearch documentation, the ulimit should be set to 65536.

The nofile option sets the maximum number of open file descriptors a process may have.
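As a sketch, the corresponding /etc/security/limits.conf entries for the elastic user would be (the 65536 value comes from the step above):

```
# /etc/security/limits.conf -- raise the open-file limit for the elastic user
elastic    soft    nofile    65536
elastic    hard    nofile    65536
```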

Step 4 : Virtual Memory

Elasticsearch uses a mmapfs directory by default to store its indices. The default operating system limits on mmap counts is likely to be too low, which may result in out of memory exceptions. [Reference]

On Linux, you can increase the limits by running the following command as root:

sysctl -w vm.max_map_count=262144

To set this value permanently, update the vm.max_map_count setting in /etc/sysctl.conf :

Then reload the settings from the configuration file with sysctl -p :
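A sketch of the permanent setting and the reload step (sysctl -p re-reads /etc/sysctl.conf):

```
# /etc/sysctl.conf
vm.max_map_count=262144
```

```
# then, as root, reload the file:
sysctl -p
```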

Step 5 : Setting up Elasticsearch all-in-one node

  • su into the elastic user :
  • Download the elasticsearch 7.2.1 archive and extract its contents (an archive install is not recommended for production environments)
  • Remove the archive and rename the extracted folder to elasticsearch/ :
$ rm elasticsearch-7.2.1-linux-x86_64.tar.gz
$ mv elasticsearch-7.2.1/ elasticsearch
$ ll
  • Java Virtual Machine (JVM) heap size options :

In the config/jvm.options file, set the JVM configuration to -Xms2g for both values. I did this because my VM has 4 GB of RAM, and as a rule of thumb it is recommended to set the JVM heap to half of your available RAM; hence the 2g :
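The relevant lines of config/jvm.options would then read:

```
# config/jvm.options -- heap set to half of the VM's 4 GB of RAM
-Xms2g
-Xmx2g
```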

  • Setting up the elasticsearch configuration file :

We will create a backup of the original config/elasticsearch.yml configuration file, then define our own node specifications :

$ cd config/
$ cp elasticsearch.yml elasticsearch.yml.backup
$ rm elasticsearch.yml
$ touch elasticsearch.yml

We will then define the following properties :
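The original figure listing the properties is not reproduced; a minimal single-node sketch for 7.2.1 might look like the following, where the cluster and node names are assumptions to adapt to your lab:

```
# config/elasticsearch.yml -- hypothetical single-node lab settings
cluster.name: siem-lab          # placeholder name
node.name: elasticsiem          # matches the shell prompt used in this post
network.host: 0.0.0.0           # listen on all interfaces (lab only)
http.port: 9200
discovery.type: single-node     # skip cluster bootstrapping for a one-node lab
```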

Starting Elasticsearch :

[elastic@elasticsiem elasticsearch]$ ./bin/elasticsearch

The node has successfully started, so let's test its response :

And it is the only node in our cluster :
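The response checks are shown as figures in the original; with curl they might look like this transcript, where the first call returns the cluster and version JSON and the second lists the node:

```
[elastic@elasticsiem ~]$ curl http://localhost:9200
[elastic@elasticsiem ~]$ curl http://localhost:9200/_cat/nodes?v
```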

Step 6 : Setting up Kibana

  • cd into the elastic user's home directory and download the archive from the Elastic website :
curl -O
  • Extract the content :
tar -xzf kibana-7.2.1-linux-x86_64.tar.gz
rm kibana-7.2.1-linux-x86_64.tar.gz
mv kibana-7.2.1-linux-x86_64/ kibana/
  • Configuring Kibana :

Set up the listening port and server host in the config/kibana.yml file like in the next figure:
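A sketch of those config/kibana.yml settings (the values are assumptions for a lab where Kibana is reached from other hosts):

```
# config/kibana.yml -- lab settings (adapt to your environment)
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
```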

  • From the root directory, launch Kibana with the --allow-root option like the following :
  • Kibana should run successfully :
  • You will probably need to allow the http service to be accessed from other internal IPs :
$ sudo firewall-cmd --add-service=http --permanent
$ sudo firewall-cmd --reload
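For completeness, the launch from the bullet above might look like this transcript (--allow-root is only needed when Kibana runs as the root user):

```
[root@elasticsiem kibana]# ./bin/kibana --allow-root
```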

Step 7 : Setting up Logstash

  • Download Logstash :
curl -O
  • Extract its content :
$ tar -xzf logstash-7.2.1.tar.gz
$ rm logstash-7.2.1.tar.gz
$ mv logstash-7.2.1/ logstash
  • From the logstash directory, test that it is working properly by using this command :
[elastic@elasticsiem logstash]$ bin/logstash -e 'input { stdin { } } output { stdout {} }'
  • Create a pipelines folder to hold the conf files for parsing
[elastic@elasticsiem logstash]$ mkdir pipelines

We will create two files, one for syslog input capturing and another one for sending data to elasticsearch :

[elastic@elasticsiem pipelines]$ cat 001-input.conf
input {
  tcp {
    type => "syslog"
    port => 5140
  }
}
# udp syslog stream via 5140
input {
  udp {
    type => "syslog"
    port => 5140
  }
}
[elastic@elasticsiem pipelines]$ cat 009-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
[elastic@elasticsiem pipelines]$
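Logstash's -f option accepts a directory and reads the config files in it in lexicographical order, which is why the 001-/009- prefixes matter; launching with our pipelines folder would look like:

```
[elastic@elasticsiem logstash]$ bin/logstash -f pipelines/
```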

In Part II we will set up NetFlow data collection from pfSense to the Elastic Stack.
