Monitoring CentOS Endpoints with Filebeat + ELK

In some of my previous posts regarding ELK, we have touched upon numerous ways of sending data from Windows endpoints – however not from much else. In the real world, thankfully, not everything runs on Microsoft’s operating system. No knock on Microsoft, but anyone who has experienced Windows systems administration knows that headaches are usually not far away. There will be instances where you may wish to monitor sudo interactions and SSH logins on, say, a remote DHCP server running CentOS – something that can’t be done using Winlogbeat. Luckily Elastic, the company behind the Elastic Stack (ELK), has another Beats data shipper for the job: Filebeat.

Below we will be walking through the process of setting up Linux monitoring in ELK using Filebeat, with the main focus being on CentOS. The process for monitoring an Ubuntu Server is extremely similar as well, with only some syntax differences in the commands used to get there.

Prerequisites:

  • Root access on an accessible CentOS endpoint (CentOS 7)
  • A functional single- or multi-node ELK stack

NOTE: Filebeat can be used to grab log files such as Syslog which, depending on the specific logs you set to grab, can be very taxing on your ELK cluster. Make sure you ingest responsibly during this configuration or adequately allocate resources to your cluster before beginning.

Install Java 8

As with the rest of the Elastic Stack at this version, you will want Oracle’s Java 8 (not 9 or later) on the box. It can be downloaded on your desired CentOS endpoint with the following wget command:

wget --no-cookies --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" ""

Since you are downloading an rpm package locally, you need to install it manually:

rpm -ivh jdk-8u171-linux-x64.rpm

Checking your Java Version should show a successful installation:

java -version

NOTE: Java is always updating/refining itself, which may result in the version shown here not matching the version you are seeing. The commands above are specific to the time of this post.

To get the LATEST version of Java 8 you will need to go to Oracle’s Java 8 JDK Downloads Page, check the box to accept the license agreement, then copy the download link of the appropriate Linux rpm package. You can then replace the link at the end of the wget command with your newly copied download link.

Set up Filebeat Repository

Before you can download Filebeat, you need to add its repository so yum knows where to grab it from. To do this on CentOS, you can grab Elastic’s public signing key and create the repository file manually.

Download and Install the Public Signing Key:

sudo rpm --import

Create “elastic.repo” in /etc/yum.repos.d/ and add the following lines:

name=Elastic repository for 6.x packages
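For reference, a complete elastic.repo for the 6.x packages looks like the following – the baseurl and gpgkey values are Elastic’s published settings for their 6.x yum repository:

```ini
[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```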

Install Filebeat

With the repository all set up, you should be able to install with yum:

sudo yum install filebeat

Enable Filebeat to run at system start:

sudo systemctl enable filebeat

Since we will be ingesting system logs, enable the System module for Filebeat:

sudo filebeat modules enable system

Configure Filebeat

For the purpose of this guide, we will be ingesting two different log files found on CentOS – Secure (auth) and Messages. Navigate to Filebeat’s configuration directory, /etc/filebeat, and make the following changes to “filebeat.yml” to add the paths to the log files and specify the “type” as syslog:

Type set to “Syslog” and paths to Secure and Messages logs added
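The change in the screenshot amounts to an input section along these lines – the paths are the two CentOS logs used in this guide, and tagging the type via a custom field is one common approach (Filebeat 6.3+ calls this section filebeat.inputs; earlier 6.x releases use filebeat.prospectors):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/secure
    - /var/log/messages
  fields:
    type: syslog
```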

Comment out the settings for Elasticsearch and configure Filebeat to send to Logstash instead:

Elasticsearch settings commented out with Logstash Hosts w/optional SSL
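A sketch of what that output section of filebeat.yml looks like – the hostname, port, and certificate path here are placeholders for your own values:

```yaml
# Elasticsearch output disabled in favor of Logstash:
#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["logstash.example.local:5044"]   # placeholder Logstash host
  # Optional – only if your Logstash beats input uses SSL:
  #ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]
```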

NOTE: You may notice that my above configuration specifies an SSL certificate. You do NOT need to have SSL enabled – if you do not have a certificate you can comment this line out and only specify your Logstash host. If you would like to learn more about setting up Logstash-Endpoint Communication with SSL, I have a post about that here.

Hold off on starting Filebeat as a service for now – starting it before Logstash is configured can cause errors and sadness. We will configure Logstash next.

Configure Logstash

Now that you have Filebeat set up, we can pivot to configuring Logstash so it knows what to do with this new information it will be receiving.

We previously activated the System module for Filebeat, which has a default way of ingesting these logs. For more advanced analysis, we will be utilizing Logstash filters to make it prettier in Kibana. You should already have a universal input configuration, such as mine shown below, that allows Logstash to listen for communications over a specified port (with OPTIONAL SSL):

Don’t hack me.
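For reference, a minimal beats input of that shape looks like the following – the port and certificate paths are placeholders, and the ssl lines are optional:

```conf
input {
  beats {
    port => 5044
    # Optional SSL – must match the certificate Filebeat is told to trust:
    #ssl => true
    #ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    #ssl_key => "/etc/pki/tls/private/logstash.key"
  }
}
```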

Add a new file in your Logstash filter directory (default is /etc/logstash/conf.d) named something to help you know what it is. For example, I named mine 1011-syslog-filter.conf.

Now Logstash filters can be very complicated, normally requiring you to know exactly what you want parsed and to compose a filter accordingly. Luckily, Elastic has some great example pipelines for parsing Filebeat data on their site, covering Apache2, MySQL, Nginx, and System logs. For the purpose of this post, we will be borrowing their filter for System logs:

filter {
  if [fileset][module] == "system" {
    if [fileset][name] == "secure" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:system.auth.groupadd.name}, GID=%{NUMBER:system.auth.groupadd.gid}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][user][add][name]}, UID=%{NUMBER:[system][auth][user][add][uid]}, GID=%{NUMBER:[system][auth][user][add][gid]}, home=%{DATA:[system][auth][user][add][home]}, shell=%{DATA:[system][auth][user][add][shell]}$",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
        pattern_definitions => {
          "GREEDYMULTILINE"=> "(.|\n)*"
        }
        remove_field => "message"
      }
      date {
        match => [ "[system][auth][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
      geoip {
        source => "[system][auth][ssh][ip]"
        target => "[system][auth][ssh][geoip]"
      }
    }
    else if [fileset][name] == "syslog" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYMULTILINE:[system][syslog][message]}"] }
        pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
        remove_field => "message"
      }
      date {
        match => [ "[system][syslog][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["ES01", "ES02", "ES03"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM}"
  }
}
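To make the grok patterns above less abstract, here is a rough Python equivalent of the sudo pattern – a simplified regex illustration of the fields grok pulls out of a /var/log/secure sudo entry (the sample log line is made up, and this regex only approximates the real grok semantics):

```python
import re

# Simplified stand-in for the sudo grok pattern above: it captures the
# invoking user, TTY, working directory, target user, and command.
SUDO_RE = re.compile(
    r"(?P<timestamp>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<hostname>\S+) sudo(?:\[\d+\])?:\s*"
    r"(?P<user>\S+) : TTY=(?P<tty>\S+) ; PWD=(?P<pwd>\S+) ; "
    r"USER=(?P<target_user>\S+) ; COMMAND=(?P<command>.*)"
)

# Hypothetical sudo line of the kind found in /var/log/secure:
line = ("Apr  3 10:15:42 centos7 sudo[2712]: jsmith : TTY=pts/0 ; "
        "PWD=/home/jsmith ; USER=root ; COMMAND=/bin/cat /etc/shadow")

fields = SUDO_RE.match(line).groupdict()
print(fields["user"], fields["command"])
```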

For this file above, make sure you address the following changes:

  • I changed if [fileset][name] == “auth” to if [fileset][name] == “secure” to reflect CentOS’s log name
  • I removed the beginning input section found in Elastic’s copy from my version as I have log inputs already specified (having more than one can cause Logstash errors)
    • ADD this BACK IN if you plan on ingesting this data over a different port
  • Adjust the “hosts” section at the bottom of the file to match the IP addresses of your Elasticsearch node(s)

NOTE: You can optionally have the bottom section of this file (Output) as its own file in the same Logstash directory, I just chose to lump them together for the sake of this guide and my own sanity.

Once all of this checks out, save the file and exit.
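Before restarting anything, both tools can sanity-check their own configuration files. These flags are standard in the 6.x releases, though install paths can vary between systems:

```shell
# On the CentOS endpoint: validate filebeat.yml
sudo filebeat test config

# On the Logstash node: parse-check the pipeline without starting it
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit

# Optionally, once Logstash is up, verify Filebeat can reach it:
#sudo filebeat test output
```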

Test configuration

Now that everything is in place, restart Logstash on your Logstash node:

sudo systemctl restart logstash

Then start Filebeat on your CentOS endpoint:

sudo systemctl start filebeat

Under Management –> Index Patterns in Kibana you should see your new index, most likely prefixed with filebeat if you kept the defaults in your new Logstash filter:

Ignore my whatever-that-is index name above.

Create your index pattern, then navigate to the Discover page and view it. If you see logs, you have log flow from Filebeat:

There is also a pretty neat Logs tab in Kibana’s sidebar for viewing these logs in a more familiar, Linux-like format. It is particularly cool because this view acts almost like a mass tail -f of ALL the Linux logs you have configured to come through from any of your endpoints, and the filter bar at the top acts like grep, letting you search for keywords of interest across everything at once:

(\/) (•,,,•) (\/)

All set!

You have successfully configured a CentOS endpoint to send logs to ELK using Filebeat. This process should be the same for all CentOS machines, and similar enough to the process for monitoring Ubuntu Server to help you along there as well. If you have any comments or questions, let me know below or contact me here.

