mocal ™ | 2014® — Elasticsearch, Kibana & Logstash setup and log tracking


Greetings,

It has been a long time since my last post; I am breaking the silence with Elasticsearch. First, a brief introduction to what Elasticsearch is.

Elasticsearch is a search engine based on Lucene. It provides a distributed, multitenant-capable, full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java and released as open source under the terms of the Apache License.

With the introduction done, let's move quickly to the installation steps. The setup was performed with the "deb" package on an Ubuntu 16.10 x86_64 operating system.

Open https://www.elastic.co/ in a browser and click the "Downloads" link in the upper right corner to reach the list of downloadable content.

From that list, we proceed by clicking the "Download" link in the Elasticsearch section.

On the page that opens, click the "DEB" link to download the Elasticsearch deb package required for installation.

We complete the Elasticsearch installation with the command "sudo dpkg -i elasticsearch-5.3.1.deb".

After that, we edit the "/etc/elasticsearch/elasticsearch.yml" file (e.g. with "sudo nano /etc/elasticsearch/elasticsearch.yml") as in the example below.




# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# For further information on configuration options, please consult the documentation:
# https://ift.tt/2wPoQKX
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: CEYHUN
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: OCAL
#
# Add custom attributes to the node:
#
# node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to the directory where the data will be stored (separate multiple locations with commas):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
# network.host: 192.168.0.1
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
# http.port: 2222
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when a new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent "split brain" by configuring a majority of nodes (total number of master-eligible nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true


After editing the "elasticsearch.yml" file, we have to enable the Elasticsearch "init.d" service with the following command.


sudo systemctl enable elasticsearch

To control the resulting service, we can use the commands "/etc/init.d/elasticsearch status" to check it, "/etc/init.d/elasticsearch start" to start it, and "/etc/init.d/elasticsearch stop" to stop it.




As additional information: "service elasticsearch status", "service elasticsearch start", or "service elasticsearch stop" can also be used.


Now it's time to check Elasticsearch. We can use the commands below for that.


curl -XGET localhost:9200
curl -XGET "localhost:9200/_cluster/health?pretty"




Great, our Elastic server is running 🙂 Let's add data to the elastic server manually. We can use the command below to do this.


Sample structure:

curl -XPOST "http://localhost:9200/indexname/typename/optionalUniqueId" -d '{"field":"value"}'

As a command:

curl -XPOST "http://localhost:9200/index_mocal/type_ceyhun/1" -d '{"car":"diesel"}'

To check:

curl -XGET localhost:9200/_mapping

A list should appear as in the example screenshot below. Since I have Kibana installed, my list looks crowded; you should see "index_mocal", or whatever name you specified, in the list.
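For readers who prefer scripting these REST calls instead of using curl, here is a minimal Python sketch of the same indexing request, using only the standard library. The `build_index_request` helper is my own illustrative name, and the sketch assumes the Elasticsearch instance set up above is listening on localhost:9200.

```python
import json
import urllib.request

def build_index_request(host, index, doc_type, doc_id, document):
    """Build the same POST request as the curl example above."""
    url = "http://%s:9200/%s/%s/%s" % (host, index, doc_type, doc_id)
    data = json.dumps(document).encode("utf-8")
    return urllib.request.Request(url, data=data, method="POST")

req = build_index_request("localhost", "index_mocal", "type_ceyhun", "1",
                          {"car": "diesel"})
print(req.full_url)  # http://localhost:9200/index_mocal/type_ceyhun/1

# To actually send it (requires the server from this post to be running):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

Building the request separately from sending it makes the URL and JSON body easy to inspect before anything touches the server.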

Speaking of Kibana, let me quickly explain what it does: Kibana is an application that allows us to query Elasticsearch through a web browser. With that said, I proceed to the installation steps.


After opening https://www.elastic.co/ in a browser, we reach the list of downloadable content by clicking the "Downloads" link in the upper right corner. Then we click the "Download" link in the Kibana section and download the "kibana-5.3.1-amd64.deb" package via the "DEB" link.






For the installation we use the command "sudo dpkg -i kibana-5.3.1-amd64.deb". When the installation is finished, we complete it by editing the file with "sudo nano /etc/kibana/kibana.yml" as in the example below, not forgetting to set up the Kibana "init.d" service.


To activate the Kibana service, after running the command "sudo systemctl enable kibana", we can start and stop it and check its status with the commands "sudo /etc/init.d/kibana status|start|stop" or "sudo service kibana status|start|stop".


Sample kibana.yml file:


# Kibana is served by a back end server. This setting specifies the port to use.
# server.port: 5601
# server.port: 81

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
# server.host: "localhost"
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana; your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
# server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
# server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
# server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"

# When this setting's value is true, Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
# elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
# kibana.index: ".kibana"

# The default application to load.
# kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
# elasticsearch.username: "user"
# elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
# server.ssl.enabled: false
# server.ssl.certificate: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
# elasticsearch.ssl.certificate: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
# elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
# elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
# elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
# elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
# elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
# elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
# elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
# elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
# pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
# logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
# logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
# logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
# logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. The minimum is 100 ms. Defaults to 5000.
# ops.interval: 5000

After changing the "kibana.yml" file as in the example above, we start the application with the "/etc/init.d/kibana start" command and display the Kibana screen by typing "http://localhost:5601" (locally) or "http://ip-or-hostname:5601" into a web browser.

Great, Kibana is working too 🙂 Let it run on its own for a while. Now I move on to "Logstash", the final package of the installation. In its simplest form, Logstash can be described as an application that takes the data it listens for on a TCP or UDP port, or reads from a log file, and converts it into the "JSON" format that Elasticsearch understands before sending it to the elastic server. After this brief information about the application, I proceed to its installation.
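As a sketch of what such a pipeline can look like, the following hypothetical Logstash configuration (e.g. saved as "/etc/logstash/conf.d/tcp-to-elastic.conf") listens on a TCP port and writes JSON documents to the local Elasticsearch. The port number and index pattern are illustrative assumptions, not values taken from this post.

```
# Hypothetical Logstash pipeline: listen on TCP 5000 and index into Elasticsearch.
input {
  tcp {
    port => 5000       # illustrative port; pick any free port
    codec => json      # parse each incoming line as a JSON event
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]          # the Elasticsearch instance set up above
    index => "logstash-%{+YYYY.MM.dd}"   # Logstash's default daily index pattern
  }
}
```

With a pipeline like this running, the resulting "logstash-*" indices would show up in the _mapping listing and can be added to Kibana as an index pattern.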

An example dashboard.

Thank you. I am planning to post about Docker soon. See you.
