
Building off my previous post, Introduction to ELK, I figured it would be great to begin to discuss how to create a “stack.” I have created multiple different stacks in the past couple months, each with their own specific purpose. While the services within an ELK stack are meant to be spread across different nodes, building a “single-node” stack can be a great and easy way to familiarize yourself first-hand with the functionality of Elasticsearch, Logstash, and Kibana.
NOTE: This stack is meant for educational purposes only, and I would caution against taking a “single-node” stack from Proof-of-Concept (POC) to Production (PROD).
I have a post called Creating a Multi-Node ELK Stack for creating a production version.
The steps below are heavily inspired by and adapted from the work of Roberto Rodriguez, @Cyb3rWard0g. Roberto’s dedication to DFIR and Threat Hunting, as well as his generously detailed GitHub page, have taught me almost all of the fundamentals I needed to learn when starting with ELK. If you have time, I would highly recommend checking out his work.
Requirements
- ESXI Server OR VMware Workstation
- Ubuntu Server 18.04.1 LTS – ISO Download
- OR Ubuntu Server 16.04.5 LTS – ISO Download
- Minimum of 3GB storage
- Minimum 4GB RAM
Setting up Elasticsearch
The base installation of Ubuntu does not come with Java, which is necessary for both Elasticsearch and Logstash to run, so you are going to have to install it. At the time of this post, both Java 9 AND Java 10 are unsupported (yeah – I have no idea why) so we will be installing the Java 8 package.
If you are using a previously created VM, you should first check to see if Java is installed.
$ java -version
If Oracle’s Java is not installed, or is not version 8, install it by first adding its repository:
$ sudo add-apt-repository ppa:webupd8team/java
Grab and Install the Java 8 package using this new repository:
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
Check your Java version again. You should now see that you are running Java 8.
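The check should come back with output along these lines; the important part is the leading 1.8, as your exact build number will likely differ:

$ java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)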
With Java installed, we can now turn our attention to installing Elasticsearch. To install any of Elastic’s products, we first need to add their PGP signing key to our VM.
$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
$ sudo apt-get install apt-transport-https
Make sure your system is fully up-to-date and install Elasticsearch’s Debian package. At the time of this post, the most recent version of Elastic’s services is 6.3.2.
NOTE: To my knowledge, with the introduction of 6.x it became a requirement that all Elastic services working in tandem with one another in a stack be of the same version. If you install Elasticsearch 6.3.2, it is important that you also install Logstash 6.3.2 down the road, and so on and so forth.
$ echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
$ sudo apt-get update && sudo apt-get install elasticsearch
When Elasticsearch is done installing, you will then need to navigate to its directory to modify its configuration file, elasticsearch.yml. Occasionally when I attempt to navigate to this directory, I am denied permission. We will also need elevated privileges later in the post, so starting here I am going to switch to the superuser, root.
$ sudo su
# cd /etc/elasticsearch
# nano /etc/elasticsearch/elasticsearch.yml
You should now be presented with Elasticsearch’s configuration file. Since I am a big fan of Mac, there may be some differences GUI-wise; I am using SSH in Mac Terminal while also using VMware Fusion for my Ubuntu 18.04.1 virtual environment.

From here you have a couple of steps, some of which are optional while others are not:
- If you wish to name your cluster, remove the “#” from the beginning of the line with cluster.name and add your own custom name (see the sketch after this list)
- Take note of the path.logs variable — this is the path where all your log files associated with Elasticsearch will reside
- Navigate to the Network section, and look for network.host
- Remove the “#” from the beginning of the line
- If your IP is static, you can put your current IP here. However, for the purpose of this practice stack, type “localhost”
- Exit your text editor (for nano, press CTRL+X, then Y to save)
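For reference, here is a minimal sketch of the lines I changed in elasticsearch.yml (the cluster name is just an example, so use whatever you like):

cluster.name: my-practice-stack
network.host: localhost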
Now that Elasticsearch is configured how you want, start the service and confirm that it is running!
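On Ubuntu 16.04 and 18.04, both of which use systemd, that looks like the following; the curl check assumes you set network.host to “localhost” as above, and note that Elasticsearch may take a few seconds to come up:

# systemctl daemon-reload
# systemctl enable elasticsearch
# systemctl start elasticsearch
# systemctl status elasticsearch
# curl http://localhost:9200

The curl command should return a small block of JSON that includes your cluster name and the version number 6.3.2.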

Setting up Kibana
Kibana is one of the quickest of Elastic’s services to set up:
# apt-get update && apt-get install kibana
# nano /etc/kibana/kibana.yml
From here, find “server.host” and remove the “#” at the beginning of the line. Then, add “localhost” as your address. The relevant line in kibana.yml should end up looking like this:

server.host: "localhost"

Start the Kibana service and check to ensure it is running. It’s as simple as that!
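Again, with systemd that is:

# systemctl enable kibana
# systemctl start kibana
# systemctl status kibana

Kibana listens on port 5601 by default, so once it is up you can browse to http://localhost:5601 from the VM.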

Setting up Logstash
Logstash, in my opinion, is one of the more complex of Elastic’s services. This is due to how much the service does, and the delicate role it plays in filtering the logs it receives from endpoints.
# apt-get update && apt-get install logstash
NOTE: Normally at this step you would begin to implement SSL capability, but for the purpose of this stack we will refrain from diving into the beast for now.
Logstash’s main purpose is to filter the logs that you are ingesting. That may be a little hard without rules for these filters though, right? So let’s add a quick one right now.
Navigate to Logstash’s directory for filters, “conf.d,” and create a new file called “02-beats-input.conf” (thanks to @Cyb3rWard0g for the naming scheme). This will be your input filter for all logs that Logstash sees from Beats data shippers.
# cd /etc/logstash/conf.d
# nano 02-beats-input.conf
Enter the following into your new input filter, and be sure to save the file as you exit.
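This is a minimal sketch of the standard Beats input plugin; port 5044 is the conventional port for Beats traffic, and ssl stays “false” since we have not implemented it:

input {
  beats {
    port => 5044
    ssl => false
  }
}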

Next, we need to create an output filter to correspond to our new input filter. Create a new file called “50-beats-output.conf” in the same directory as your input filter.
# nano 50-beats-output.conf
Ensure that your output filter resembles the sketch below, and that you save the file when you exit.
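Here is a minimal sketch that ships everything to our local Elasticsearch instance; the %{[@metadata][beat]} prefix names each index after the Beat that shipped the data, which is one common convention (adjust it if you prefer fixed index names):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM}"
  }
}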

NOTE: In the line starting with “index,” I chose to sort my indices by month (YYYY.MM). This is entirely your choice, and you can adjust the output filter to sort by day (YYYY.MM.dd) as well. I also adopted @Cyb3rWard0g‘s naming scheme, as the numbers that precede the filenames make it easy to keep input and output filters separated. This is why our first input filter is “02” while our first output filter is “50”; in bigger environments you would normally have many different filters in between.
Now it is time to start up Logstash. If Logstash is already running, restart the service to ensure that the filters kick into place.
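With systemd, that is simply:

# systemctl restart logstash
# systemctl status logstash

Logstash reads every .conf file in /etc/logstash/conf.d when it starts, so both of our new filters will be picked up together.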

Congrats!
You now have a fully functional, basic ELK Stack waiting for logs to be sent over! You may have noticed that I mentioned Beats data shippers; however, we never configured any. Those are set up on the endpoints you wish to monitor, and I hope to cover that setup process in a later post.
Enough of listening to me though, go have fun! Be adventurous, look around at all your newly installed directories and play around to see what does what; that is what helps me to better understand how services interact and function on my servers. If you have any questions, or think I missed something (as I often do), comment below or contact me! 🙂