Using Default Filebeat Index Templates with Logstash

In one of my prior posts, Monitoring CentOS Endpoints with Filebeat + ELK, I described the process of installing and configuring the Beats data shipper Filebeat on CentOS boxes. That process relied on custom Logstash filters, which you have to add to your Logstash pipeline manually and then use to filter all of your Filebeat logs. But what if you don’t want that customization? Luckily, Filebeat has built-in index templates you can use.

Index Templates?

Yes, index templates. Elasticsearch uses these templates to define settings and mappings that determine how fields should be analyzed and displayed in Kibana. Templates are ONLY applied at index creation – changing a template will not affect pre-existing indices that were created from it.
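
If you are curious which templates are already loaded into your cluster, you can ask Elasticsearch directly. A quick sketch, assuming Elasticsearch is reachable on port 9200 of the node you run this from:

curl -XGET 'http://localhost:9200/_template/filebeat*?pretty'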

Oh, neat. How do I use them?

Based on my understanding, index templates are normally applied in the background, which may be why many people have never dealt with them before. For example, every index that comes from Logstash SHOULD have an index template attached to it known as “logstash,” unless one of your Logstash filters specifies otherwise. Templates can also be a neat way to apply Index Lifecycle Policies to groups of indices, which I hope to better understand and write a post on soon.

To use the default ones for Filebeat, you first need to upload its module templates and ingest pipelines to Elasticsearch. You then need to configure Logstash to point to them whenever it recognizes a Filebeat module. This guide will explain how to do just that.

What you need

  • Working Single or Multi-Node ELK Stack
    • As always, Multi-Node Stack is recommended for production
  • Current Filebeat Implementation OR Ability to Install Filebeat
  • Ubuntu or CentOS ELK Nodes

Install Filebeat (If Needed)

If you already have Filebeat installed, you can skip this step. For the Filebeat-newbies, use the following commands to add the Elastic repo (if not already configured) and install Filebeat.

Download and install the Public Signing Key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Grab dependencies if not already installed:

sudo apt-get install apt-transport-https

Add repository link to /etc/apt/sources.list.d:

echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

Update your repositories and install Filebeat:

sudo apt-get update && sudo apt-get install filebeat

Enable Filebeat to run on system startup:

sudo systemctl enable filebeat

Load Default Index Templates into Elasticsearch

Now that we have Filebeat installed, we need to point it at your pre-existing Elasticsearch cluster and upload the templates and pipelines for Elasticsearch to use.

Elevate to root if you haven’t done so already:

sudo su

Go to the Filebeat configuration directory:

cd /etc/filebeat

Open its configuration file and add your Elasticsearch nodes to the Elasticsearch output section:

nano /etc/filebeat/filebeat.yml
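
The edited section should end up looking roughly like the sketch below – make sure to replace these placeholders with your ES IPs:

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to
  hosts: ["<es01-IP>:9200", "<es02-IP>:9200", "<es03-IP>:9200"]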

Once saved, run the following command to upload the ingest pipelines for every module you want to use (these pipelines are what the Logstash output will reference later). You only need to do this ONCE from any node; once they are uploaded, Elasticsearch can handle the rest. For my purposes, I know I will be utilizing the System module to grab authentication and system logs from my ELK nodes. I will also use the NGINX module to monitor NGINX connections to my Kibana instance:

filebeat setup --pipelines --modules system,nginx
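
NOTE: The --pipelines flag loads the modules’ ingest pipelines. If the default Filebeat index template has never been loaded into your cluster, you should be able to load it now as well, while the Elasticsearch output is still configured (exact setup flags can vary slightly between versions):

filebeat setup --template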

NOTE: A full list of modules within Filebeat can be found here. You can also look through /etc/filebeat/modules.d to see which modules are available and how they can be configured. By default, the files within this directory are all disabled, ending in “.yml.disabled”. When enabled, they become active “.yml” files.
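
You can also list the available modules, and whether they are enabled, straight from the Filebeat binary:

filebeat modules list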

Configure Filebeat

Now that the templates and pipelines are uploaded, you need to re-edit Filebeat’s configuration file to point it back at Logstash.

Reopen the configuration file and comment out the entire Elasticsearch section you just edited. Further down the file you will see a Logstash section – uncomment it and add in your Logstash node:
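
Roughly, it should look like the sketch below – make sure to replace the placeholder with your LS IP (5044 is Logstash’s default Beats port):

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash host
  hosts: ["<logstash-IP>:5044"]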

NOTE: You can optionally configure SSL, which would also require you to uncomment the “ssl.certificate_authorities” line; however, you will need a pre-configured certificate and key for this to work. For the purpose of this guide, we will leave SSL out of it. An earlier post of mine, ELK + Beats: Securing Communication with Logstash by using SSL, has more information about this process for those interested.

You can also optionally add the following lines at the end of your configuration file to enable Beats monitoring. Essentially, this just allows you to monitor which Beats you have in your environment and the versions they are running, which I utilize when it is time to upgrade:
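
A sketch of those lines, assuming a 6.x Beat (newer versions renamed these settings to monitoring.*) – point the hosts entry at one of your ES nodes:

# Ship Filebeat's own health data to Elasticsearch for the Monitoring UI
xpack.monitoring:
  enabled: true
  elasticsearch:
    hosts: ["<es01-IP>:9200"]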

Once enabled, the Monitoring section of Kibana will show each Beat in your environment along with its version.

Save the file and exit. Repeat this process for all nodes you wish to configure for Filebeat.

Enable Desired Filebeat Modules

Now that you have Filebeat configured, you need to enable the modules you wish to utilize on every node that will run Filebeat. Since I utilized both the System and NGINX modules in this guide, I will enable System on every node and additionally NGINX only on the Kibana node.

To enable modules, run this command:

filebeat modules enable <moduleName>
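
For example, on my Kibana node I would enable both modules at once:

filebeat modules enable system nginx

On every other node, enabling the System module alone is enough.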

Do this on every node, ensuring that every module you enable is something you actually want gathered from that specific host.

Configure Logstash

You now need to tell Logstash what to do when it sees these Filebeat logs and how to point them at the ingest pipelines we uploaded to Elasticsearch.

If you haven’t done so already, stop Logstash as a service:

sudo systemctl stop logstash

On your Logstash node, navigate to your pipeline directory and create a new .conf file. You can name this file whatever you want:

cd /etc/logstash/conf.d

nano 9956-filebeat-modules-output.conf

Add the following to your new .conf file:

# Filebeat Modules Logstash Output Conf File
# Zachary Burnham (@zmbf0r3ns1cs)

output {
  if [fileset][module] == "system" {
    elasticsearch {
      hosts => ["<es01-IP>:9200", "<es02-IP>:9200", "<es03-IP>:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-system-%{+YYYY.MM}"
      pipeline => "%{[@metadata][pipeline]}"
      #user => "elastic"
      #password => "secret"
    }
  }
}
output {
  if [fileset][module] == "nginx" {
    elasticsearch {
      hosts => ["<es01-IP>:9200", "<es02-IP>:9200", "<es03-IP>:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-nginx-%{+YYYY.MM}"
      pipeline => "%{[@metadata][pipeline]}"
      #user => "elastic"
      #password => "secret"
    }
  }
}
What is going on here?
  • This output filter is looking for any logs associated with Filebeat modules
  • If a log is part of the System module, it is sent to an index called “filebeat-system-YYYY.MM”
  • If a log is part of the NGINX module, it is sent to an index called “filebeat-nginx-YYYY.MM”
  • pipeline => "%{[@metadata][pipeline]}" uses variables to autofill the name of the Filebeat ingest pipeline we uploaded to Elasticsearch earlier

The above filter was inspired by examples on Elastic’s website and is now located in my newly created GitHub repository, which holds all of the files I use within my posts pertaining to ELK.

NOTE: I created this repository to provide ideas and properly formatted examples of alerts and configuration files that I use. It can be found here.

Save the file and exit. You should now be able to start Logstash back up again.
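
Before starting it, you can optionally have Logstash validate your pipeline configuration first (assuming the default package install paths):

sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit

Once it reports the configuration is OK, start the service:

sudo systemctl start logstash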

Test Filebeat Implementation

Now that you have enabled your modules, uploaded their templates and pipelines to Elasticsearch, and configured both Filebeat and Logstash to push the logs through, you can turn Filebeat on and test.

Start Filebeat as a service on all your desired nodes:

sudo systemctl start filebeat

After waiting a couple of minutes, you should start to see your new indices (filebeat-system-* and filebeat-nginx-*) populate in the Index Management section of Kibana.
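
You can also confirm this from the command line against any Elasticsearch node:

curl -XGET 'http://<es01-IP>:9200/_cat/indices/filebeat-*?v'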

They should be organized by month

If you see these indices, congrats! After adding them as index patterns, you can navigate to the Discover section of Kibana and look through them. If you left the module configuration files at their defaults, you should see both system and auth/secure logs in the System index, along with various NGINX activity in the NGINX index.

If not, don’t get discouraged. There are lots of moving parts here, and it can be very tricky to set up the first time. Tail the logs for your services (Logstash, Elasticsearch, Filebeat) and see if you notice anything wrong. I usually just Google like crazy until something clicks – luckily, ELK services’ logs are very detailed.
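
For example, assuming the default package log locations:

tail -f /var/log/logstash/logstash-plain.log
journalctl -u filebeat -f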

As always, if you have any questions or if you think there is an easier way to do this, leave a comment below or contact me.
