Set Up Centralised Logging Using the ELK Stack
Install ElasticSearch, Logstash and Kibana on Ubuntu
In this tutorial, we will install the ELK stack, that is ElasticSearch 5.2.x, Logstash 5.2.x, and Kibana 5.2.x, on Ubuntu. We will also configure the whole stack together so that your logs can be visualized in a single place using Filebeat.
What is ELK stack?
So, the ELK stack is a combination of three services: ElasticSearch, Logstash, and Kibana. ElasticSearch is an open source, distributed, RESTful search engine. In the ELK stack, it is used to store logs so that they can be easily searched and retrieved. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that can be used to search and view the logs that Logstash has indexed.
Why ELK stack?
Centralised logging is useful when you have a number of servers working together and need to identify a problem across them. Manually searching logs on each server takes a lot of time. The ELK stack lets you search through all server logs in one place, making debugging easier and faster. With the ELK stack you can also identify issues that span multiple servers by correlating their logs during a specific time frame.
Working Strategy of ELK stack in this Tutorial
- ElasticSearch: Stores all the logs.
- Logstash: Processes incoming logs from client servers. Here we will only parse system logs.
- Kibana: Web interface for searching and visualizing logs, which will be proxied through Nginx.
- Filebeat: Log shipping agent that will be installed on client servers to send logs to Logstash.
Terms:
- ELK Server: Server on which the ELK stack (ElasticSearch, Logstash, and Kibana) will be installed.
- Client Server: Server from which we want to gather logs and on which Filebeat will be installed.
Prerequisites:
- An Ubuntu server with sudo privileges.
- Server with at least 4 GB RAM and 2 CPUs.
- One or more client servers
Now, let's begin with the setup procedure:
1. Install JAVA 8
ElasticSearch and Logstash require Java, so we will install it first. ElasticSearch 5.x works best with Java 8, which is why we will install Java 8 specifically. For this, add the Oracle Java PPA to apt:
sudo add-apt-repository -y ppa:webupd8team/java
Update the apt package index:
sudo apt-get update
Now, install the Java 8:
sudo apt-get -y install oracle-java8-installer
2. Install ElasticSearch 5.2.x
To install ElasticSearch, first update the apt package index:
sudo apt-get update
Now, download the ElasticSearch Debian Package:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.0.deb
Now, install the downloaded Debian package:
sudo dpkg -i elasticsearch-5.2.0.deb
Elasticsearch is now installed in
/usr/share/elasticsearch/
with its configuration files placed in /etc/elasticsearch
and its init script added in /etc/init.d/elasticsearch.
Now, edit ElasticSearch's configuration file to bind it to localhost, so that strangers cannot read or tamper with your data. First open the configuration file:
sudo nano /etc/elasticsearch/elasticsearch.yml
Find the line that specifies network.host, uncomment it, and replace its value with "localhost" so it looks like this:
network.host: localhost
Now, save and close this file. To start ElasticSearch automatically when the server boots up, run:
sudo systemctl enable elasticsearch.service
ElasticSearch is now set up.
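A quick way to confirm ElasticSearch is up is to query port 9200, e.g. curl -XGET 'http://localhost:9200/?pretty'. As a sketch, here is how you could pull the version number out of such a response in a script; the JSON below is an illustrative sample, not live output:

```shell
# Sample of the JSON banner ElasticSearch returns on port 9200
# (values here are illustrative, not from a live node):
banner='{"name":"node-1","cluster_name":"elasticsearch","version":{"number":"5.2.0"}}'

# Extract the reported version number from the response.
version=$(echo "$banner" | grep -oP '"number":"\K[^"]+')
echo "$version"
# prints 5.2.0
```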
3. Install Kibana
First, download and install the public signing key:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Install the apt-transport-https package:
sudo apt-get install apt-transport-https
Now, add the Elastic repository definition:
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
Now, install Kibana after updating the apt package index:
sudo apt-get update && sudo apt-get install kibana
Now, Kibana is installed. Let's configure it. For this, open the configuration file:
sudo nano /etc/kibana/kibana.yml
Find the line that specifies server.host, and replace the IP address ("0.0.0.0" by default) with "localhost". This setting makes Kibana accessible from localhost only, which is fine because we will use an Nginx reverse proxy to allow external access:
server.host: "localhost"
Now, start the Kibana service and set it up to start automatically whenever the server boots up:
sudo systemctl daemon-reload
sudo systemctl enable kibana
sudo systemctl start kibana
4. Install Nginx
Since we configured Kibana to listen on localhost, we will set up a reverse proxy via Nginx to allow external access to it. Install it:
sudo apt-get -y install nginx
For security purposes, protect Nginx with password authentication. For this, run:
sudo -v
echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users
Here kibanaadmin is a username; you can change it as you like. This command will ask for a password. Do remember the password you enter here, as it will be required to access your Kibana interface.
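For reference, openssl passwd -apr1 produces an Apache-style MD5 hash, and each line of htpasswd.users simply pairs a username with such a hash. A small sketch with a fixed salt and a throwaway password (both hypothetical examples) so the output is reproducible:

```shell
# Generate an apr1 hash with a fixed salt so the result is deterministic.
# "abcdefgh" and "secret123" are throwaway examples, not real credentials.
hash=$(openssl passwd -apr1 -salt abcdefgh secret123)

# An htpasswd.users entry is just username:hash on one line.
entry="kibanaadmin:${hash}"
echo "$entry"
```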
Now, edit the Nginx default configuration file. Open it first:
sudo nano /etc/nginx/sites-available/default
Delete the file's contents, and paste the following code block into the file. Be sure to update the server_name to match your server's name or public IP address:
server {
    listen 80;
    server_name example.com;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Save and exit, then restart the Nginx server:
sudo systemctl restart nginx
Now, access your server's domain name or public IP address and enter the credentials; you will see the Kibana web interface.
5. Install Logstash
To install Logstash, run:
sudo apt-get install logstash
Now, Logstash is installed and we need to configure it. Logstash configuration files use a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
Let's create a configuration file called 02-beats-input.conf and set up our "filebeat" input:
sudo nano /etc/logstash/conf.d/02-beats-input.conf
Insert the following code:
input {
  beats {
    port => 5044
  }
}
Save and exit. Now let's create a configuration file called 10-syslog-filter.conf, where we will add a filter for syslog messages. Here we filter logs that are tagged as "syslog" by Filebeat, then make these logs structured and queryable using the grok parser:
sudo nano /etc/logstash/conf.d/10-syslog-filter.conf
Enter the following filter configuration code:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
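To see roughly what this grok pattern pulls out of a syslog line, here is a sketch that mimics the hostname and program extraction with plain shell tools on a hypothetical log line (Logstash itself does this with the pattern above):

```shell
# A hypothetical syslog line of the shape the grok pattern expects:
line='Feb  7 12:34:56 web01 sshd[1234]: Accepted publickey for deploy'

# %{SYSLOGHOST:syslog_hostname} corresponds to the field after the timestamp.
hostname=$(echo "$line" | awk '{print $4}')

# %{DATA:syslog_program} is the program name before the optional [pid].
program=$(echo "$line" | awk '{print $5}' | sed 's/\[.*//')

echo "$hostname $program"
# prints web01 sshd
```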
Save and exit. Now we will create an output file named 30-elasticsearch-output.conf:
sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf
Insert the following output configuration code:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
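The index setting above expands to one index per day, e.g. filebeat-2017.02.07 for logs shipped by Filebeat. A sketch of the naming, assuming the beat name is "filebeat" and using a fixed example date:

```shell
# "%{[@metadata][beat]}-%{+YYYY.MM.dd}" resolves to the beat name plus the
# event date, giving one ElasticSearch index per day.
beat=filebeat
day=$(date -u -d '2017-02-07' +%Y.%m.%d)
index="${beat}-${day}"
echo "$index"
# prints filebeat-2017.02.07
```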
Save and exit. This code tells Logstash to store the parsed logs in ElasticSearch. Now, test your Logstash configuration with:
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/
It should output "Configuration OK"; if not, correct the error it displays and continue.
Now, restart Logstash so that all the configurations we have added take effect:
sudo systemctl restart logstash
sudo systemctl enable logstash
Logstash configuration is done here.
6. Load Kibana dashboard
Here, we will load the Filebeat index pattern into the Kibana dashboard. For this, download the dashboards package into your home directory:
cd ~
curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.2.2.zip
Install the unzip package to extract the downloaded archive:
sudo apt-get -y install unzip
Now, extract the content.
unzip beats-dashboards-*.zip
And load the sample dashboards, visualizations and Beats index patterns into Elasticsearch with these commands:
cd beats-dashboards-*
./load.sh
It will load four index patterns, as follows:
- packetbeat-*
- topbeat-*
- filebeat-*
- winlogbeat-*
When we start using Kibana, we will select the Filebeat index pattern as our default.
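An index pattern like filebeat-* simply glob-matches the daily indices created by the Logstash output we configured earlier. A sketch with hypothetical index names:

```shell
# filebeat-* matches only the Filebeat daily indices, not other beats.
matches=''
for idx in filebeat-2017.02.06 filebeat-2017.02.07 topbeat-2017.02.07; do
  case "$idx" in
    filebeat-*) matches="${matches}${idx} " ;;  # keep matching names
  esac
done
echo "$matches"
# prints filebeat-2017.02.06 filebeat-2017.02.07
```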
7. Load Filebeat index template in ElasticSearch
Since we will be using Filebeat to ship logs to ElasticSearch, we need to load the Filebeat index template. For this, download the index template into your home directory:
cd ~
curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
Then load this template:
curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
If everything is fine, you will see the output "acknowledged" : true.
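If you want to check that flag in a script rather than by eye, here is a sketch of testing a sample response (the JSON below is illustrative, not live output):

```shell
# Sample of the response body the template PUT returns on success:
resp='{"acknowledged":true}'

# Status becomes "loaded" only when ElasticSearch acknowledged the template.
status=fail
echo "$resp" | grep -q '"acknowledged"[[:space:]]*:[[:space:]]*true' && status=loaded
echo "$status"
# prints loaded
```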
Here, your ELK server is all set up. Now we need to set up the client servers to send logs to the ELK server. Let's do that.
8. Configure Filebeat on Client Server
First, install the Filebeat package on the client server. For this, update the source list:
echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list
Install the GPG key:
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Now, install the Filebeat package:
sudo apt-get update
sudo apt-get install filebeat
Now Filebeat is installed; let's configure it. For this, create/edit the configuration file:
sudo nano /etc/filebeat/filebeat.yml
Add the following code:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
      #  - /var/log/*.log
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["elk_server_private_ip:5044"]
    bulk_max_size: 1024
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
Replace elk_server_private_ip with your ELK server's private IP address. Now restart Filebeat to put our changes into place:
sudo systemctl restart filebeat
sudo systemctl enable filebeat
Now Filebeat is sending syslog and auth.log to Logstash on your ELK server! Repeat this section for all of the other servers that you wish to gather logs from.
To test our Filebeat installation, run this command on the ELK server:
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
Since Filebeat on the client server is sending logs to our ELK server, you should get log data in the output. If your output shows 0 total hits, then there is something wrong with your configuration; check and correct it, then continue to the next step.
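The _search response reports the number of matching documents under hits.total. A sketch of extracting that count from a sample response (the JSON here is illustrative; a live response carries many more fields):

```shell
# Sample of the shape returned by the filebeat-* _search query:
response='{"took":3,"hits":{"total":42,"max_score":1.0,"hits":[]}}'

# Pull out hits.total; a value of 0 would indicate a shipping problem.
total=$(echo "$response" | grep -oP '"total":\K[0-9]+')
echo "$total"
# prints 42
```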
9. Set up Kibana dashboard
Browse to your ELK server's IP in your favourite browser and enter the credentials. You will see the Kibana dashboard, prompting you to select a default index pattern.
Go ahead and select filebeat-* from the Index Patterns menu (left side), then click the Star (Set as default index) button to set the Filebeat index as the default.
Now click the Discover link in the top navigation bar. By default, this will show you all of the log data over the last 15 minutes. You should see a histogram of log events along with individual log messages. Now you have all your logs in one place. Congrats, you have successfully set up the ELK 5 stack!