ELK是当前比较主流的分布式日志收集处理工具。
常用日志采集方式: Filebeat→Kafka集群→Logstash→ES→Kibana
在 Grafana（可视化监控工具）上配置 ES 数据源，即可进行实时监控。
实施步骤:
Filebeat 部署在应用服务器上，只负责日志的读取和转发（到 Kafka/Logstash），CPU 负载消耗低，确保不会抢占应用资源；Logstash、ES、Kibana 部署在同一台服务器上。此处的 Logstash 负责日志的过滤，会消耗一定的 CPU，可以通过优化过滤语法来降低负载。
架构图:
方案一: filebeat 直接输出到 es，kibana 做搜索展示。
方案二:
filebeat n 台，输出到 kafka 集群
logstash 1~3 台，接收 kafka 日志并输出到 es 集群；logstash 挂 1 台不影响 kafka 的日志接收
kibana 做搜索展示
通过增加消息队列中间件来避免数据丢失: 当 Logstash 出现故障时，日志仍暂存在中间件中；Logstash 恢复启动后，会继续消费中间件中积压的日志。
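下面是一个最小操作示例（假设 Kafka 2.2+ 安装在 /opt/kafka，topic 名 kafkaTopic 与后文 logstash 配置一致，消费组名取 logstash kafka 输入插件的默认值 logstash）:

# 为日志缓冲创建 topic，分区数、副本数按集群规模调整
/opt/kafka/bin/kafka-topics.sh --create \
  --bootstrap-server 192.168.1.100:9092 \
  --topic kafkaTopic --partitions 3 --replication-factor 2

# 查看消费组延迟(LAG)，确认 Logstash 恢复后是否在追积压日志
/opt/kafka/bin/kafka-consumer-groups.sh \
  --bootstrap-server 192.168.1.100:9092 \
  --describe --group logstash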
注意: filebeat 版本一定要和 es 版本一致，否则可能不兼容。
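部署前可以先核对两边版本（示例命令，ES 地址按实际部署调整）:

# 查看 ES 版本
curl -s http://10.13.177.206:9201 | grep '"number"'
# 查看 Filebeat 版本（容器启动后执行）
docker exec filebeat filebeat version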
docker 部署 filebeat（采集的日志格式为 json）:
docker run --privileged --name filebeat --net=host -d -m 1000M \
  --log-driver json-file --log-opt max-size=1024m \
  -v /data0/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
  -v /data0/filebeat/logs:/root \
  -v /data0/filebeat/data:/data \
  -v /data0:/home/logs \
  registry.api.ww/bop_ci/filebeat:6.6.0
filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/logs/bop-fms-account-info/logs/*.log
    - /home/logs/bop-fms-advertiser-info/logs/*.log
    - /home/logs/bop-fms-agent-web/logs/*.log
    - /home/logs/bop-fms-api/logs/*.log
    - /home/logs/bop-fms-config/logs/*.log
    - /home/logs/bop-fms-vip-api/logs/*.log
  ignore_older: 12h
  clean_inactive: 14h
  tags: ["fms-log"]
- type: log
  enabled: true
  paths:
    - /home/logs/bop-cmc-strategy/logs/*.log
    - /home/logs/qualification/logs/*.log
    - /home/logs/bop-cmc-customer/logs/*.log
    - /home/logs/bop-mdm-cmc-diplomat/logs/*.log
    - /home/logs/bop-asm-api/logs/*.log
    - /home/logs/bop-asm-message/logs/*.log
    - /home/logs/bop-asm-notice/logs/*.log
  ignore_older: 12h
  clean_inactive: 14h
  tags: ["others-log"]
json.keys_under_root: true
json.overwrite_keys: true
setup.dashboards.enabled: false
setup.template.name: "bop-log"
setup.template.pattern: "bop-log-*"
setup.template.enabled: false
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 0
  index.codec: best_compression
output.elasticsearch:
  hosts: ["10.13.177.206:9201"]
  #index: "bop-log-%{+yyyy.MM.dd}"
  pipeline: "test-news-server-online"
  indices:
    - index: "bop-log-fms-%{+yyyy.MM.dd}"
      when.contains:
        tags: "fms-log"
    - index: "bop-log-others-%{+yyyy.MM.dd}"
      when.contains:
        tags: "others-log"
processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true
  - rename:
      fields:
        - from: "error"
          to: "run_error"
  - drop_fields:
      fields: ["input_type", "log.offset", "log.file.path", "beat.version", "input.type", "beat.name", "host.name", "agent.type", "agent.hostname"]
      #ignore_missing: false
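配置好后可以用 filebeat 自带的 test 子命令做连通性自检:

# 校验 filebeat.yml 语法
docker exec filebeat filebeat test config
# 测试到 ES 输出的连通性
docker exec filebeat filebeat test output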
输出到 es 示例2:
# 输出到es
output.elasticsearch:
  #username: "elastic"
  #password: "xxxxxxxxxxx"
  #worker: 1
  #bulk_max_size: 1500
  #pipeline: "timestamp-pipeline-id"   # @timestamp 处理
  hosts: ["elasticsearch1:9200"]
  index: "pb-%{[fields.index_name]}-*"
  indices:
    - index: "pb-nginx-%{+yyyy.MM.dd}"
      when.equals:
        fields.index_name: "nginx_log"
    - index: "pb-log4j-%{+yyyy.MM.dd}"
      when.equals:
        fields.index_name: "log4j_log"
    - index: "pb-biz-%{+yyyy.MM.dd}"
      when.equals:
        fields.index_name: "biz_log"
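示例2 中的 when.equals 条件依赖采集端打上的 fields.index_name 字段。下面是对应的 input 配置示意（日志路径为假设值）:

filebeat.inputs:
  - type: log
    paths:
      - /var/log/nginx/*.log
    fields:
      index_name: "nginx_log"   # 与上面 when.equals 的条件对应
    # fields_under_root 默认为 false，该字段会落在 fields.index_name 下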
异常堆栈的多行合并问题
在 - type: log 下面增加 multiline 属性:
multiline:
  # pattern for error log, if start with space or cause by
  pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
  negate: false
  match: after
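该正则会把以"空白 + at/..."开头或以 Caused by: 开头的行合并到上一条日志。例如下面这种 Java 堆栈（示意数据），后四行都会被并入第一行所在的事件:

2024-01-01 12:00:00.123 ERROR DemoService - java.lang.NullPointerException
    at com.example.DemoService.run(DemoService.java:42)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.IllegalStateException: connection closed
    at com.example.Dao.query(Dao.java:17)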
自定义索引模板，在 filebeat.yml 中增加:
setup.template.json.enabled: true
setup.template.json.path: "/usr/share/filebeat/logs_template.json"
setup.template.json.name: "logs_template"
docker 启动命令增加参数
-v /data0/filebeat/logs_template.json:/usr/share/filebeat/logs_template.json
新建logs_template.json
{"index_patterns": ["bop-log-*"],"mappings": {"doc": {"dynamic_templates": [{"strings_as_keyword": {"mapping": {"type": "text","analyzer": "standard","fields": {"keyword": {"type": "keyword"}}},"match_mapping_type": "string","match": "*"}}],"properties": {"httpmethod": {"type": "text","fields": {"keyword": {"ignore_above": 256,"type": "keyword"}}},"responseheader": {"type": "text","fields": {"keyword": {"ignore_above": 256,"type": "keyword"}}},"function": {"type": "text","fields": {"keyword": {"ignore_above": 256,"type": "keyword"}}},"servicename": {"type": "text","fields": {"keyword": {"ignore_above": 256,"type": "keyword"}}},"serviceuri": {"type": "text","fields": {"keyword": {"ignore_above": 256,"type": "keyword"}}},"serviceurl": {"type": "text","fields": {"keyword": {"ignore_above": 256,"type": "keyword"}}},"responsebody": {"type": "text","fields": {"keyword": {"ignore_above": 256,"type": "keyword"}}},"args": {"type": "text","fields": {"keyword": {"ignore_above": 256,"type": "keyword"}}},"requestheader": {"type": "text","fields": {"keyword": {"ignore_above": 256,"type": "keyword"}}}}}}
}
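filebeat 启动后，可以确认模板是否已写入 ES（地址按实际部署调整）:

curl -s "http://10.13.177.206:9201/_template/logs_template?pretty"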
在 Kibana 的 Management → Advanced Settings 中搜索 Date format，将格式设置成: yyyy-MM-dd HH:mm:ss.SSS
或者
在 Kibana 中的 Devtools 界面中编写如下 pipeline 并执行
查询 GET _ingest/pipeline/timestamp-pipeline-id
PUT _ingest/pipeline/timestamp-pipeline-id
{"description": "timestamp-pipeline-id","processors": [{"grok": {"field": "message","patterns": ["%{TIMESTAMP_ISO8601:timestamp}"],"ignore_failure": true},"date": {"field": "timestamp","timezone": "Asia/Shanghai","formats": ["yyyy-MM-dd HH:mm:ss.SSS"],"ignore_failure": true}}]
}
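写入后可以用 _simulate 接口验证 pipeline 效果（示例文档为假设数据）:

POST _ingest/pipeline/timestamp-pipeline-id/_simulate
{
  "docs": [
    { "_source": { "message": "2024-01-01 12:00:00.123 INFO demo - hello" } }
  ]
}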
docker 部署logstash
mkdir /data0/logstash/log -p
cd /data0/logstash
vi logstash.conf
input {
  kafka {
    topics => "kafkaTopic"                        # kafka 的 topic
    bootstrap_servers => ["192.168.1.100:9092"]   # 服务器地址
    codec => "json"                               # 以 json 格式取数据
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.110:9009"]   # ES 地址
    index => "errorlog"               # ES index，必须使用小写字母
    user => "elastic"                 # 这里建议使用 elastic 用户
    password => "**********"
  }
}
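启动前可以先校验管道配置语法（示例，假设官方镜像入口会把参数透传给 logstash 进程）:

docker run --rm \
  -v /data0/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
  logstash:6.7.0 \
  logstash -f /usr/share/logstash/pipeline/logstash.conf --config.test_and_exit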
logstash.yml:
http.host: "0.0.0.0"
#ES地址
xpack.monitoring.elasticsearch.hosts: ["192.168.1.110:9009"]
xpack.monitoring.enabled: true
#ES中的内置账户和密码，在ES中配置
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: *****************
docker pull logstash:6.7.0
docker run --name logstash --privileged=true -p 9007:9600 -d \
  -v /data0/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
  -v /data0/logstash/log/:/home/public/ \
  -v /data0/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml \
  logstash:6.7.0
注意,kibana要和es的版本一致,否则版本不兼容
cd /data0
mkdir kibana
cd kibana
docker run --name kibana -p 5601:5601 -d kibana:6.6.0
docker cp kibana:/usr/share/kibana/config/kibana.yml .
将如下内容写到 kibana.yml 中，然后保存退出(:wq)
server.name: kibana
server.host: "0"
#elasticsearch.hosts: [ "elasticsearch:9200" ]
elasticsearch.hosts: [ "自己的elasticsearch的IP:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
#设置kibana中文显示
i18n.locale: zh-CN
重新启动
docker rm -f kibana
docker run --name kibana -p 5601:5601 -d -v /data0/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:6.6.0
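容器起来后可以请求 Kibana 的状态接口，确认服务正常:

curl -s http://localhost:5601/api/status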
每天自动创建/切换 Kibana 索引模式的脚本:
vi auto_add_index.sh
#!/bin/bash
today=`date +%Y.%m.%d`
yestoday=`date -d "1 days ago" +%Y.%m.%d`
pattern='bop-log-'${today}
old_pattern='bop-log-'${yestoday}
index='bop-log-'${today}
echo ${pattern} ${old_pattern}

# 新增索引模式
curl -f -XPOST -H 'Content-Type: application/json' -H 'kbn-xsrf: anything' \
  "localhost:5601/api/saved_objects/index-pattern/${pattern}" \
  -d "{\"attributes\":{\"title\":\"${index}\",\"timeFieldName\":\"@timestamp\"}}"

# 设置默认索引
curl -f -XPOST -H 'Content-Type: application/json' -H 'kbn-xsrf: anything' \
  localhost:5601/api/kibana/settings/defaultIndex \
  -d "{\"value\":\"${pattern}\"}"

# 删除昨天的索引模式
curl -XDELETE "localhost:5601/api/saved_objects/index-pattern/${old_pattern}" -H 'kbn-xsrf: true'
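脚本可配合 crontab 每天定时执行（计划任务示例，脚本路径为假设值）:

chmod +x auto_add_index.sh
# crontab -e 中追加: 每天 00:05 运行
5 0 * * * /bin/bash /data0/kibana/auto_add_index.sh >> /data0/kibana/auto_add_index.log 2>&1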
在 Kibana DevTools 中执行以下命令，让日志索引在 30 天后自动删除:
PUT _ilm/policy/logs_policy
{"policy": {"phases": {"delete": {"min_age": "30d","actions": {"delete": {}}}}}
}
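ILM 策略需要关联到索引才会生效，一种做法是通过索引模板给新建索引统一设置（模板名为假设值）:

PUT _template/bop-log-ilm
{
  "index_patterns": ["bop-log-*"],
  "settings": {
    "index.lifecycle.name": "logs_policy"
  }
}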