Once a Docker cluster is up and running, the next problem is how to collect its logs. ELK provides a complete solution for exactly this. This article describes how to build an ELK stack with Docker and use it to collect the logs of a Docker cluster.
Introduction to ELK
ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana.
Elasticsearch is an open-source, distributed search engine. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.
Logstash is a fully open-source tool that collects and filters your logs and stores them for later use.
Kibana is also a free, open-source tool. It provides a friendly web interface for analyzing the logs handled by Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
Building an ELK Platform with Docker
First, edit the Logstash configuration file, logstash.conf:
input {
  udp {
    port => 5000
    type => json
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"   # Send Logstash output to Elasticsearch; change this to your own host
  }
}
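With this configuration, Logstash listens on UDP port 5000 for JSON-encoded messages and forwards the parsed events to Elasticsearch. As a quick sanity check you can push a test event through the pipeline by hand. The commands below are only a minimal sketch; they assume the Logstash container publishes port 5000/udp and Elasticsearch publishes port 9200 on the local host, which is not shown in the original setup:

# Send a hand-written JSON log line to Logstash over UDP
echo '{"level":"info","msg":"hello from docker"}' | nc -u -w1 localhost 5000

# A moment later the event should be searchable in the default logstash-* indices
curl 'http://localhost:9200/logstash-*/_search?q=msg:hello&pretty'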
Next, we also need to sort out how Kibana is started. Write a startup script that waits for Elasticsearch to come up before launching Kibana:
#!/usr/bin/env bash
# Wait for the Elasticsearch container to be ready before starting Kibana.
echo "Stalling for Elasticsearch"
while true; do
    nc -q 1 elasticsearch 9200 2>/dev/null && break
done
echo "Starting Kibana"
exec kibana
Then modify the Dockerfile to build a custom Kibana image:
FROM kibana:latest
RUN apt-get update && apt-get install -y netcat
COPY entrypoint.sh /tmp/entrypoint.sh
RUN chmod +x /tmp/entrypoint.sh
RUN kibana plugin --install elastic/sense
CMD ["/tmp/entrypoint.sh"]
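With the entrypoint script and the Dockerfile in the same directory, the custom image can be built as below. The image name my-kibana is only an illustrative assumption, not a name used in the original article:

# Build the custom Kibana image (Dockerfile and entrypoint.sh in the current directory)
docker build -t my-kibana .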
You can also adjust Kibana's configuration file and select the plugins you need:
# Kibana is served by a back end server. This controls which port to use.
port: 5601

# The host to bind the server to.
host: "0.0.0.0"

# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://elasticsearch:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch_preserve_host: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"

# If your Elasticsearch is protected with basic auth, this is the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# kibana_elasticsearch_username: user
# kibana_elasticsearch_password: pass

# If your Elasticsearch requires client certificate and key
# kibana_elasticsearch_client_crt: /path/to/your/client.crt
# kibana_elasticsearch_client_key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# ca: /path/to/your/CA.pem

# The default application to load.
default_app_id: "discover"

# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# ping_timeout: 1500

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
request_timeout: 300000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# startup_timeout: 5000

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
verify_ssl: true

# SSL for outgoing requests from the Kibana Server (PEM formatted)
# ssl_key_file: /path/to/your/server.key
# ssl_cert_file: /path/to/your/server.crt

# Set the path to where you would like the process id file to be created.
# pid_file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# This will also turn off the STDOUT log output.
log_file: ./kibana.log

# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
 - plugins/dashboard/index
 - plugins/discover/index
 - plugins/doc/index
 - plugins/kibana/index
 - plugins/markdown_vis/index
 - plugins/metric_vis/index
 - plugins/settings/index
 - plugins/table_vis/index
 - plugins/vis_types/index
 - plugins/visualize/index
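To put the pieces together, the three containers must be started so that Logstash and Kibana can resolve the Elasticsearch container under the hostname elasticsearch used in the configuration above. The following is a minimal sketch using plain docker run with legacy --link; the container names, port mappings, and config mount path are illustrative assumptions and are not specified in the original article:

# Start Elasticsearch first; the other containers resolve it as "elasticsearch"
docker run -d --name elasticsearch elasticsearch:latest

# Start Logstash with the logstash.conf from above mounted into the container
docker run -d --name logstash --link elasticsearch:elasticsearch \
    -p 5000:5000/udp \
    -v "$PWD/logstash.conf:/config/logstash.conf" \
    logstash:latest logstash -f /config/logstash.conf

# Start the custom Kibana image built earlier and expose the web UI on port 5601
docker run -d --name kibana --link elasticsearch:elasticsearch \
    -p 5601:5601 my-kibana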