Build a powerful log-analysis platform with ELK. The topology is as follows:

Here we will deploy kafka + filebeat + ELK 5.4.
Software versions:
jdk-8u131-linux-i586.tar.gz
filebeat-5.4.0-linux-x86_64.tar.gz
elasticsearch-5.4.0.tar.gz
kibana-5.4.0-linux-x86_64.tar.gz
logstash-5.4.0.tar.gz
kafka_2.11-0.10.0.0.tgz
1. JDK installation and configuration (omitted)
2. ELK installation and configuration
Create an elk user and extract the archives.
1. elasticsearch configuration
[elk@localhost elasticsearch-5.4.0]$ vi config/elasticsearch.yml
.....
network.host: 192.168.12.109
#
# Set a custom port for HTTP:
#
http.port: 9200
..........
Save and start:
[elk@localhost elasticsearch-5.4.0]$ nohup bin/elasticsearch &
Verify:
[elk@localhost elasticsearch-5.4.0]$ curl http://192.168.12.109:9200
{
"name" : "aCA2ApK",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "Ea4_9kXZSaeDL1fYt4lUUQ",
"version" : {
"number" : "5.4.0",
"build_hash" : "780f8c4",
"build_date" : "2017-04-28T17:43:27.229Z",
"build_snapshot" : false,
"lucene_version" : "6.5.0"
},
"tagline" : "You Know, for Search"
}
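If you script this verification step, the same response can be checked programmatically. A minimal sketch that parses the JSON shown above offline (no request is made; the body is pasted verbatim):

```python
import json

# The JSON body returned by `curl http://192.168.12.109:9200` above,
# pasted verbatim for an offline sanity check.
response = '''
{
  "name" : "aCA2ApK",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "Ea4_9kXZSaeDL1fYt4lUUQ",
  "version" : {
    "number" : "5.4.0",
    "build_hash" : "780f8c4",
    "build_date" : "2017-04-28T17:43:27.229Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.0"
  },
  "tagline" : "You Know, for Search"
}
'''

info = json.loads(response)
# A mismatched version here usually means you are talking to the wrong node.
assert info["version"]["number"] == "5.4.0"
print(info["cluster_name"], info["version"]["number"])
```

In a real health check you would feed the output of the curl call into the same parsing logic instead of a pasted string.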
2. kibana installation and configuration
[elk@localhost kibana-5.4.0-linux-x86_64]$ vi config/kibana.yml
## Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.12.109"
..........
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://192.168.12.109:9200"
..........
[elk@localhost kibana-5.4.0-linux-x86_64]$ nohup bin/kibana &
Open http://192.168.12.109:5601 (the host and port configured above) in a browser; if the page loads, Kibana is running.
3. kafka installation and configuration
Here we deploy only a single kafka node on 192.168.12.105; see "centos kafka single-node deployment" for the steps.
4. logstash installation and configuration
Create a new configuration file:
[elk@localhost logstash-5.4.0]$ vi nginx.conf
input {
  kafka {
    codec => "json"
    topics_pattern => "logstash-.*"
    bootstrap_servers => "192.168.12.105:9092"
    auto_offset_reset => "latest"
    group_id => "logstash-g1"
  }
}
filter {
  if "nginx-accesslog" in [tags] {
    grok {
      match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float} %{GREEDYDATA:traceID}"}
    }
    mutate {
      # convert the fields the grok pattern above actually captures
      convert => ["response","integer"]
      convert => ["bytes_read","integer"]
    }
    geoip {
      # the client IP field captured by the grok pattern
      source => "clientip"
    }
    date {
      match => [ "timestamp","dd/MMM/YYYY:HH:mm:ss Z" ]
    }
    useragent {
      # the user-agent field captured by the grok pattern
      source => "agent"
    }
  }
  if "tomcat-accesslog" in [tags] {
    grok {
      match => { "message" => "%{IPORHOST:clientip} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{NUMBER:request_time:float} %{GREEDYDATA:traceID}"}
    }
    date {
      match => [ "timestamp","dd/MMM/YYYY:HH:mm:ss Z" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.12.109:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    document_type => "%{type}"
  }
  #stdout { codec => rubydebug }
}
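Before starting logstash, the nginx grok pattern can be sanity-checked offline. Below is a simplified Python equivalent of that pattern, plus the `strptime` format corresponding to the date filter's `dd/MMM/YYYY:HH:mm:ss Z`; the sample log line is made up for illustration:

```python
import re
from datetime import datetime

# Simplified Python equivalent of the nginx grok pattern above;
# the (?:...) named-group structure mirrors the grok field names.
pattern = re.compile(
    r'(?P<http_host>\S+) (?P<clientip>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<response>\d+) (?P<bytes_read>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)" "(?P<xforwardedfor>[^"]*)" '
    r'(?P<request_time>[\d.]+) (?P<traceID>.*)'
)

# Hypothetical access-log line in the format the pattern expects.
line = ('example.com 192.168.12.1 - - [28/Apr/2017:17:43:27 +0800] '
        '"GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0" "-" 0.005 abc123')

m = pattern.match(line)
assert m, "line did not match the pattern"
fields = m.groupdict()

# The date filter's "dd/MMM/YYYY:HH:mm:ss Z" corresponds to this strptime format.
ts = datetime.strptime(fields["timestamp"], "%d/%b/%Y:%H:%M:%S %z")
print(fields["response"], fields["bytes_read"], ts.date())
```

If a production log line fails to match here, it will very likely produce a `_grokparsefailure` tag in logstash as well.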
Save and start:
[elk@localhost logstash-5.4.0]$ nohup bin/logstash -f nginx.conf &
5. filebeat installation and configuration
Copy filebeat to each server whose logs are to be collected and extract it. Here we collect Nginx and tomcat logs.
Nginx server
[user@localhost filebeat-5.4.0-linux-x86_64]$ vi filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /data/programs/nginx/logs/access.log
  # YAML keys must be unique; merge the two tags entries into one list
  tags: ["nginx-accesslog", "nginx-test-194"]
  document_type: nginxaccess
output.kafka:
  enabled: true
  hosts: ["192.168.12.105:9092"]
  topic: logstash-%{[type]}
[user@localhost filebeat-5.4.0-linux-x86_64]$ nohup filebeat -c filebeat.yml &
tomcat server
[user@localhost filebeat-5.4.0-linux-x86_64]$ vi filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /data/tomcat/logs/localhost_access_log*
  # YAML keys must be unique; merge the two tags entries into one list
  tags: ["tomcat-accesslog", "tomcat103"]
  document_type: tomcataccess
output.kafka:
  enabled: true
  hosts: ["192.168.12.105:9092"]
  topic: logstash-%{[type]}
[user@localhost filebeat-5.4.0-linux-x86_64]$ nohup filebeat -c filebeat.yml &
With all of the above in place, the platform is ready. Next, create the index patterns in Kibana.
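For reference, the `logstash-%{type}-%{+YYYY.MM.dd}` index setting in the logstash output creates one index per day and per `document_type`. A small sketch of the resulting names (`index_name` is a hypothetical helper and the date is just an example):

```python
from datetime import date

# Preview of the daily index names produced by the logstash output's
# "logstash-%{type}-%{+YYYY.MM.dd}" pattern for the two document_type
# values set in the filebeat configs. index_name is a helper made up
# for this illustration.
def index_name(doc_type, day):
    return "logstash-{}-{}".format(doc_type, day.strftime("%Y.%m.%d"))

d = date(2017, 5, 20)
print(index_name("nginxaccess", d))    # logstash-nginxaccess-2017.05.20
print(index_name("tomcataccess", d))   # logstash-tomcataccess-2017.05.20
```

This is why the Kibana index patterns below end in `*`: they match every daily index for that type.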
Enter: logstash-nginxaccess*

Enter: logstash-tomcataccess*

The data flows from filebeat through kafka into ELK and is displayed successfully.

Finally, a dashboard screenshot:

Title: filebeat+kafka+ELK5.4 installation and deployment
Source URL: http://chinadenli.net/article18/ggjggp.html