ELK Cluster Management and Kibana Usage

elk-cluster

  • Clusters are distinguished by cluster name.
  • Coordinating node: the node that handles a client request. Every node acts as a coordinating node by default.
    Every node stores a copy of the cluster state, but only the master node can modify it.

ES cluster status

Green: healthy. All primary and replica shards are allocated.
Yellow: degraded. All primary shards are allocated, but some replica shards are not.
Red: unhealthy. Some primary shards are not allocated.
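The three states above are reported in the `status` field of the `_cluster/health` API. A minimal sketch that extracts and interprets that field; the JSON below is a hypothetical sample response (in practice it would come from `curl -s http://es-node1:9200/_cluster/health`):

```shell
# Hypothetical /_cluster/health response (values are made up for the demo)
health='{"cluster_name":"es-cluster","status":"yellow","unassigned_shards":2}'

# Extract the "status" field with sed
status=$(printf '%s' "$health" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')

case "$status" in
  green)  echo "all primary and replica shards allocated" ;;
  yellow) echo "all primaries allocated, some replicas missing" ;;
  red)    echo "some primary shards unallocated" ;;
esac
```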

The lifecycle of ES indices (ILM) can be managed through Kibana.
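Besides the Kibana UI, ILM policies can also be created through the ILM API. A hypothetical policy sketch; the name logs-policy and the rollover/delete thresholds are assumptions, not part of this setup:

```
PUT _ilm/policy/logs-policy
{
  "policy": {
    "phases": {
      "hot":    { "actions": { "rollover": { "max_size": "50gb", "max_age": "7d" } } },
      "delete": { "min_age": "30d", "actions": { "delete": {} } }
    }
  }
}
```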

ES migration

ES data migration reference

Software versions

jdk-8u202
elasticsearch-7.17.5
filebeat-7.17.5
kibana-7.17.5
logstash-7.17.5

System configuration

#Add a dedicated system user
useradd elastic

#Raise the max open-file limit
echo "elastic soft nofile 655350" >>/etc/security/limits.conf
echo "elastic hard nofile 655350" >>/etc/security/limits.conf

#Raise the max number of virtual memory areas
echo "vm.max_map_count = 655360" >>/etc/sysctl.conf
sysctl -p >/dev/null 2>&1

#Disable swap for the current session
sudo swapoff -a

To disable swap permanently, edit /etc/fstab and comment out every line containing the word swap.
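The fstab edit can be scripted with sed, commenting out every uncommented line that mentions swap. A sketch run against a demo copy (verify the result before applying the same sed to the real /etc/fstab):

```shell
# Demo fstab with one swap entry
cat > /tmp/fstab.demo <<'EOF'
/dev/sda1 / ext4 defaults 0 1
/dev/sda2 swap swap defaults 0 0
EOF

# Prefix every active line containing "swap" with '#'
sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /tmp/fstab.demo

grep '^#' /tmp/fstab.demo
```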

Create the data directories

mkdir -p /data/apps/elastic/{data,tmp}

chown -R elastic:elastic /data/apps/elastic

#JVM heap settings
config/jvm.options
-Xms2g
-Xmx2g

#JVM temporary directory
bin/elasticsearch-env
ES_TMPDIR="/data/apps/elastic/tmp"
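The 2g heap above is a fixed choice; a common rule of thumb is half of available RAM, capped below 32 GB so the JVM keeps compressed object pointers. A sketch of that calculation; the 8 GiB value is a stand-in for reading /proc/meminfo on the real host:

```shell
# Pretend the host has 8 GiB of RAM; on a real host use:
#   mem_kb=$(awk '/MemTotal/{print $2}' /proc/meminfo)
mem_kb=8388608

# Half of RAM in MB, capped at 31744 MB (31 GB) for compressed oops
heap_mb=$(( mem_kb / 1024 / 2 ))
[ "$heap_mb" -gt 31744 ] && heap_mb=31744

# Emit matching -Xms/-Xmx lines for config/jvm.options
printf -- '-Xms%sm\n-Xmx%sm\n' "$heap_mb" "$heap_mb"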

Repeat the same steps on the other nodes, changing node.name in the configuration file.

elasticsearch configuration reference

ES cluster configuration file reference
cluster.name: es-cluster
node.name: es-node1
node.master: true
node.data: true
path.data: /data/apps/elastic/data
path.logs: /data/apps/elastic/logs

network.host: 0.0.0.0
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["es-node1:9300","es-node2:9300","es-node3:9300"]
cluster.initial_master_nodes: ["es-node1"]

http.cors.enabled: true
http.cors.allow-origin: "*"

bootstrap.memory_lock: true
xpack.security.enabled: false

node.attr.rack_id: rack_one
cluster.routing.allocation.awareness.attributes: rack_id
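Since the per-node configuration differs only in node.name, the file can be templated for the other nodes with sed. A sketch using demo paths under /tmp (the real files would live in each node's config directory):

```shell
# Demo base config for es-node1 (trimmed to the relevant lines)
cat > /tmp/elasticsearch.yml <<'EOF'
cluster.name: es-cluster
node.name: es-node1
EOF

# Generate per-node copies, replacing only node.name
for n in es-node2 es-node3; do
  sed "s/^node\.name: .*/node.name: $n/" /tmp/elasticsearch.yml > "/tmp/elasticsearch-$n.yml"
done

grep '^node.name' /tmp/elasticsearch-es-node2.yml
```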

elasticsearch.service reference

Configuration reference: https://github.com/elastic/elasticsearch/blob/main/distribution/packages/src/common/systemd/elasticsearch.service
/usr/lib/systemd/system/elasticsearch.service

[Unit]
Description=Elasticsearch
Documentation=https://www.elastic.co
Wants=network-online.target
After=network-online.target

[Service]
WorkingDirectory=/data/apps/elastic
ExecStart=/data/apps/elastic/bin/elasticsearch
PIDFile=/var/run/elasticsearch.pid
User=elastic
Group=elastic
Type=simple
Restart=always

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=655350
# Specifies the maximum number of processes
LimitNPROC=4096
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0
# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM rather than its control group
KillMode=process
# Java process is never killed
SendSIGKILL=no
# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target

Start the service and watch the logs

systemctl daemon-reload
#other services are started the same way
systemctl start elasticsearch

journalctl -f -u elasticsearch

http://es-node1:9200/_cat/health?v

Adding nodes to the cluster

To add a node, keep the existing cluster configuration unchanged; on the new node, list the existing nodes in discovery.seed_hosts and the cluster will discover the new node automatically.

Note: the cluster name must be identical on all nodes.

Install cerebro

An ES cluster management tool.

/data/apps/cerebro

ln -s cerebro-0.9.4 cerebro

cerebro.service reference

/usr/lib/systemd/system/cerebro.service
[Unit]
Description=Cerebro
After=network.target

[Service]
WorkingDirectory=/data/apps/cerebro
ExecStart=/data/apps/cerebro/bin/cerebro
Type=simple
PIDFile=/var/run/cerebro.pid
Restart=always
#User=elastic
#Group=elastic

[Install]
WantedBy=default.target

Install Kibana


tar -zxvf kibana-7.17.5-linux-x86_64.tar.gz
ln  -s kibana-7.17.5-linux-x86_64 kibana

chown -R elastic:elastic /data/apps/kibana/

kibana.yml reference

server.port: 5601
server.host: "0.0.0.0"
server.name: "kibana-cluster"
elasticsearch.hosts: ["http://es-node1:9200","http://es-node2:9200","http://es-node3:9200"]
elasticsearch.requestTimeout: 99999
#Use the Chinese locale
i18n.locale: "zh-CN"

kibana.service reference

/usr/lib/systemd/system/kibana.service
[Unit]
Description=Kibana
After=network.target

[Service]
ExecStart=/data/apps/kibana/bin/kibana
Type=simple
PIDFile=/var/run/kibana.pid
Restart=always
User=elastic
Group=elastic

[Install]
WantedBy=default.target

Install the IK analyzer plugin

Online installation
cd /data/apps/elastic/
./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.17.5/elasticsearch-analysis-ik-7.17.5.zip
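After installation (and a node restart), the analyzer can be verified with an _analyze request, for example from Kibana Dev Tools. A sketch; the sample text is arbitrary and the tokens returned depend on the IK dictionaries:

```
GET _analyze
{
  "analyzer": "ik_max_word",
  "text": "中华人民共和国"
}
```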

Deploying a test ELK environment with Docker

docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-v $PWD/elastic/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-d elasticsearch:7.17.5

Install the IK analyzer inside the container

docker exec -it elasticsearch sh
cd /usr/share/elasticsearch/plugins/
elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.17.5/elasticsearch-analysis-ik-7.17.5.zip
exit
docker restart elasticsearch 

Start Kibana with Docker

docker run --name kibana --link=elasticsearch:elk-1  -p 5601:5601 -d kibana:7.17.5
docker start kibana

Troubleshooting

Problem: elasticsearch-5.4.3 fails to start with "bootstrap checks failed: system call filters failed to install".

Fix: add bootstrap.system_call_filter: false to the configuration.

https://blog.csdn.net/RUIMENG061511332/article/details/90548390

Problem:

[INFO ][logstash.outputs.elasticsearch][main] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/8/index write (api)];"})
[INFO ][logstash.outputs.elasticsearch][main] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}

Fix: the cluster's disk usage reached the watermark threshold (indices switched to read-only), or an ILM policy put the index into a write-blocked/frozen state.
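The FORBIDDEN/8/index write error corresponds to the index.blocks.write setting. Once disk space is freed or the ILM issue is fixed, the blocks can be cleared through the settings API; a sketch, where my-index is a placeholder name:

```
# Clear a write block (FORBIDDEN/8), e.g. one set by an ILM policy:
PUT my-index/_settings
{ "index.blocks.write": null }

# Clear the disk-watermark read-only block (FORBIDDEN/12), after freeing space:
PUT _all/_settings
{ "index.blocks.read_only_allow_delete": null }
```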

Dual writes via Logstash during ES migration

Logstash dual-write configuration reference

output {
#   stdout {
#       codec => "rubydebug"
#   }
    elasticsearch {
        hosts => ["192.168.1.111:9200"]
        index => "logstash-old-test-%{+YYYY.MM.dd}"
    }
    elasticsearch {
        hosts => ["192.168.1.168:9200"]
        index => "logstash-new-test-%{+yyyy.MM}"
        user => "username"
        password => "password"
    }
}

Logstash: matching the log timestamp
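Logstash can parse an application timestamp into @timestamp with the date filter. A sketch; the field name timestamp and the patterns are assumptions about the log format:

```
filter {
  date {
    match => ["timestamp", "yyyy-MM-dd HH:mm:ss", "ISO8601"]
    target => "@timestamp"
  }
}
```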