Elasticsearch Setup Tutorial

1. Create a user account
groupadd es
useradd es -g es -p es
(Note: useradd -p expects an already-encrypted string, so "-p es" does not set a usable login password; run passwd es afterwards to set one properly.)
2. Create directories and grant ownership
mkdir -p /usr/local/apps/elasticsearch
mkdir -p /srv/elasticsearch/log
mkdir -p /srv/elasticsearch/data
chown -R es:es /usr/local/apps/elasticsearch
chown -R es:es /srv/elasticsearch
cd /usr/local/apps/elasticsearch
su es
3. Installation
If the server is short on memory, reduce the JVM heap size:
vi bin/elasticsearch
Find the Xms setting:
ES_JAVA_OPTS="-Xms500m -Xmx500m"
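On most recent Elasticsearch releases the heap is configured in config/jvm.options rather than in the launch script; a minimal sketch, mirroring the 500m value above:

```
# config/jvm.options -- heap sizing (set Xms and Xmx to the same value)
-Xms500m
-Xmx500m
```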
Startup. The startup script accepts the following options:
Option Description
------ -----------
-E Configure a setting
-V, --version Prints Elasticsearch version information and exits
-d, --daemonize Starts Elasticsearch in the background
-h, --help Show help
-p, --pidfile Creates a pid file in the specified path on start
-q, --quiet Turns off standard output/error streams logging in console
-s, --silent Show minimal output
-v, --verbose Show verbose output
We start it in daemonized (background) mode.
Next we need to create a user, set passwords, and configure X-Pack security.
[elastic@console bin]$ vi ../config/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
vi config/elasticsearch.yml
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.license.self_generated.type: basic
Cluster configuration
cluster.name: search-center-es-cluster
# (different on each machine: .36 is slave-node-1, .12 is slave-node-2, .35 is master-node-1)
node.name: slave-node-1
# false on slave (non-master-eligible) nodes
node.master: true
node.data: true
# (the discovery.zen.* settings below are ignored on 7.x and kept only for compatibility)
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping_timeout: 120s
bootstrap.system_call_filter: false
path.data: /srv/elasticsearch/data
path.logs: /srv/elasticsearch/logs
bootstrap.memory_lock: true
# change this to the node's own IP address
network.host: 172.20.3.35
http.port: 9200
discovery.seed_hosts: ["172.20.3.35:9300","172.20.3.12:9300"]
cluster.initial_master_nodes: ["master-node-1","master-node-2"]
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: /usr/local/apps/elasticsearch/config/ssl/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/local/apps/elasticsearch/config/ssl/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /usr/local/apps/elasticsearch/config/http.p12
bin/elasticsearch-certutil ca
Accept the default output path and filename, then enter a password.
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
Accept the default output path and filename, then enter a password.
bin/elasticsearch-certutil http
At the first prompt (generate a CSR?) answer N; at the second (use an existing CA?) answer y, then provide the CA path:
/usr/local/apps/elasticsearch1/elastic-stack-ca.p12
Generate a CSR? [y/N]n
Use an existing CA? [y/N]y
For how long should your certificate be valid? [5y]5y
Enter all the IP addresses that you need, one per line.
When you are done, press Enter once more to move on to the next step.
172.20.3.35
172.20.3.12
172.20.3.36
You entered the following IP addresses.
- 172.20.3.35
- 172.20.3.12
- 172.20.3.36
Is this correct [Y/n]y
Do you wish to change any of these options? [y/N]n
Provide a password for the "http.p12" file: [<ENTER> for none]
What filename should be used for the output zip file? [/usr/local/apps/elasticsearch1/elasticsearch-ssl-http.zip]
unzip elasticsearch-ssl-http.zip
[rd@localhost elasticsearch1]$ unzip elasticsearch-ssl-http.zip
Archive: elasticsearch-ssl-http.zip
creating: elasticsearch/
inflating: elasticsearch/README.txt
inflating: elasticsearch/http.p12
inflating: elasticsearch/sample-elasticsearch.yml
creating: kibana/
inflating: kibana/README.txt
inflating: kibana/elasticsearch-ca.pem
inflating: kibana/sample-kibana.yml
cp elasticsearch/http.p12 config/
Distribute the HTTPS certificate and credential files
Archive the whole directory and copy it to every node:
tar cvf elasticsearch1.tar elasticsearch1
rsync elasticsearch1.tar rd@172.20.3.36:/usr/local/apps/
rsync elasticsearch1.tar rd@172.20.3.12:/usr/local/apps/
rsync elasticsearch/http.p12 rd@172.20.3.12:/usr/local/apps/elasticsearch1/config/
rsync config/http.p12 rd@172.20.3.12:/usr/local/apps/elasticsearch1/config/
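The per-node copies above can be collapsed into one loop; a sketch (the host list and remote user come from the commands above; the leading `echo` makes it a dry run):

```shell
# Dry run: print the rsync command for each node; delete `echo` to actually copy.
NODES="172.20.3.36 172.20.3.12"
for host in $NODES; do
  echo rsync config/http.p12 "rd@${host}:/usr/local/apps/elasticsearch1/config/"
done
```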
Store the certificate passwords in the Elasticsearch keystore (run this on every node; you will be prompted for the passwords chosen when the certificates were generated):
bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
tail -fn 200 /srv/elasticsearch/logs/search-center-es-cluster.log
The most common problem: the primary node starts fine, but the two nodes that received the copied files fail with the error below — typically because the keystore passwords were never added on those nodes (re-run the elasticsearch-keystore commands above on each node):
ElasticsearchSecurityException[failed to load SSL configuration [xpack.security.transport.ssl]]; nested: ElasticsearchException[failed to initialize SSL TrustManager]; nested: IOException[parseAlgParameters failed: ObjectIdentifier() -- data isn't an object ID (tag = 48)]; nested: IOException[ObjectIdentifier() -- data isn't an object ID (tag = 48)];
Likely root cause: java.io.IOException: ObjectIdentifier() -- data isn't an object ID (tag = 48)
at sun.security.util.ObjectIdentifier.(ObjectIdentifier.java:285)
at sun.security.util.DerInputStream.getOID(DerInputStream.java:321)
at com.sun.crypto.provider.PBES2Parameters.engineInit(PBES2Parameters.java:267)
at java.security.AlgorithmParameters.init(AlgorithmParameters.java:293)
at sun.security.pkcs12.PKCS12KeyStore.parseAlgParameters(PKCS12KeyStore.java:815)
at sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:2027)
at java.security.KeyStore.load(KeyStore.java:1445)
at org.elasticsearch.xpack.core.ssl.TrustConfig.getStore(TrustConfig.java:98)
at org.elasticsearch.xpack.core.ssl.StoreTrustConfig.createTrustManager(StoreTrustConfig.java:66)
at org.elasticsearch.xpack.core.ssl.SSLService.createSslContext(SSLService.java:439)
at java.util.HashMap.computeIfAbsent(HashMap.java:1127)
at org.elasticsearch.xpack.core.ssl.SSLService.lambda$loadSSLConfigurations$5(SSLService.java:528)
at java.util.HashMap.forEach(HashMap.java:1289)
at java.util.Collections$UnmodifiableMap.forEach(Collections.java:1507)
at org.elasticsearch.xpack.core.ssl.SSLService.loadSSLConfigurations(SSLService.java:526)
at org.elasticsearch.xpack.core.ssl.SSLService.(SSLService.java:144)
at org.elasticsearch.xpack.core.XPackPlugin.createSSLService(XPackPlugin.java:462)
at org.elasticsearch.xpack.core.XPackPlugin.createComponents(XPackPlugin.java:292)
at org.elasticsearch.node.Node.lambda$new$17(Node.java:567)
at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.elasticsearch.node.Node.(Node.java:571)
at org.elasticsearch.node.Node.(Node.java:278)
at org.elasticsearch.bootstrap.Bootstrap$5.(Bootstrap.java:217)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:217)
<<>>
For complete error details, refer to the log at /srv/elasticsearch/logs/search-center-es-cluster.log
Start the service
./elasticsearch -d
Set passwords
[elastic@console bin]$ ./elasticsearch-setup-passwords interactive
future versions of Elasticsearch will require Java 11; your Java version from [/usr/java/jdk1.8.0_181/jre] does not meet this requirement
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Passwords do not match.
Try again.
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
Set the log level
Machine Learning / X-Pack is not supported on ARM:
ElasticsearchException[X-Pack is not supported and Machine Learning is not available for [linux-arm]; you can use the other X-Pack features (unsupported) by setting xpack.ml.enabled: false in elasticsearch.yml]
at org.elasticsearch.xpack.ml.MachineLearningFeatureSet.isRunningOnMlPlatform(MachineLearningFeatureSet.java:125)
at org.elasticsearch.xpack.ml.MachineLearningFeatureSet.isRunningOnMlPlatform(MachineLearningFeatureSet.java:116)
at org.elasticsearch.xpack.ml.MachineLearning.createComponents(MachineLearning.java:666)
at org.elasticsearch.node.Node.lambda$new$17(Node.java:567)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1654)
at jav
The fix, as the message says, is to add to elasticsearch.yml:
xpack.ml.enabled: false
Change the data and log directories
path.data: /srv/elasticsearch/data
#
# Path to log files:
#
path.logs: /srv/elasticsearch/log
Set network.host to 0.0.0.0 to listen on all interfaces.
Startup errors
ERROR: [3] bootstrap checks failed. You must address the points described in the following [3] lines before starting Elasticsearch.
bootstrap check failure [1] of [3]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
bootstrap check failure [2] of [3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
bootstrap check failure [3] of [3]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /srv/elasticsearch/log/elasticsearch.log
Edit the limits configuration:
vi /etc/security/limits.conf
es soft nofile 65535
es hard nofile 65537
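After logging back in as the es user, the new limit can be verified; a minimal check (65535 is the threshold from the bootstrap message below):

```shell
# Print this shell's open-file limit; after the limits.conf change and a fresh
# login it should be at least 65535.
ulimit -n
```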
max file descriptors [4096] for elasticsearch process is too low,
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Errors encountered when starting elasticsearch
In plain terms: the elasticsearch user's limit on memory-map areas is too small; at least 262144 is required.

Fix:
Switch to the root user and run:
sysctl -w vm.max_map_count=262144
Verify the result:
sysctl -a | grep vm.max_map_count
which should show:
vm.max_map_count = 262144

This change is lost when the machine reboots, so to make it permanent, append the following line to /etc/sysctl.conf:
vm.max_map_count=262144
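A quick way to check the current value without root (a Linux-only sketch; 262144 is the minimum required by the bootstrap check):

```shell
#!/bin/sh
# Compare the live vm.max_map_count against Elasticsearch's minimum requirement.
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge "$required" ]; then
  echo "vm.max_map_count=$current OK"
else
  echo "vm.max_map_count=$current too low, need >= $required"
fi
```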
ERROR: [1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch.
bootstrap check failure [1] of [1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /srv/elasticsearch/log/elasticsearch.log
ps -ef | grep elasticsearch
Check whether it started normally.
/usr/local/apps/elasticsearch/bin/elasticsearch   (starts the service)
bootstrap check failure [2] of [2]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
Edit the configuration:
vi /usr/local/apps/elasticsearch/config/elasticsearch.yml
Uncomment cluster.initial_master_nodes: ["node-1", "node-2"]
ERROR: [1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch.
bootstrap check failure [1] of [1]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
Fix:
CentOS 6 does not support SecComp, while ES 5.2.0 and later default bootstrap.system_call_filter to true.
Disable it: set bootstrap.system_call_filter to false in elasticsearch.yml, below the Memory section:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
It finally starts successfully.
Adding SSL and username/password authentication
Run the following command from bin under the elasticsearch home directory:
elasticsearch-certgen
Let's get started...
Please enter the desired output file [certificate-bundle.zip]: cert.zip   (where the generated file will go)
Enter instance name: bigdata
Enter name for directories and files [bigdata]: bigdata
Enter IP Addresses for instance (comma-separated if more than one) []: 192.168.211.117,192.168.211.118,192.168.211.119
Enter DNS names for instance (comma-separated if more than one) []: 192.168.211.117,192.168.211.118,192.168.211.119
Would you like to specify another instance? Press 'y' to continue entering instance information: n
Certificates written to /usr/local/apps/elasticsearch/elasticsearch-7.12.1/cert.zip   (this tells you where the bundle was generated)
This file should be properly secured as it contains the private keys for all
instances and the certificate authority.
The instance name entered here determines the names of the generated directories and files, and the DNS names entered are embedded in the certificate.
Starting elasticsearch then fails — it seems the 6.x and 7.x settings differ:
rd@hadoop-server-001 bin]$ uncaught exception in thread [main]
java.lang.IllegalArgumentException: unknown setting [xpack.ssl.key] did you mean [xpack.http.ssl.key]?
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:533)
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:478)
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:449)
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:420)
at org.elasticsearch.common.settings.SettingsModule.(SettingsModule.java:138)
at org.elasticsearch.node.Node.(Node.java:396)
at org.elasticsearch.node.Node.(Node.java:278)
at org.elasticsearch.bootstrap.Bootstrap$5.(Bootstrap.java:217)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:217)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:397)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:116)
at org.elasticsearch.cli.Command.main(Command.java:79)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:81)
For complete error details, refer to the log at /usr/local/apps/elasticsearch/elasticsearch-7.12.1/logs/elasticsearch.log
(In 7.x the old xpack.ssl.* settings were removed; use the xpack.security.transport.ssl.* and xpack.security.http.ssl.* settings shown earlier instead.)
[3] bootstrap checks failed. You must address the points described in the following [3] lines before starting Elasticsearch.
bootstrap check failure [1] of [3]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
Run the following as root:
ulimit -n 65535
Non-root users need to log out and back in for the limits.conf change to take effect.
ERROR: [2] bootstrap checks failed. You must address the points described in the following [2] lines before starting Elasticsearch.
bootstrap check failure [1] of [2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
bootstrap check failure [2] of [2]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /usr/local/apps/elasticsearch/elasticsearch-7.12.1/logs/elasticsearch.log
This is the same vm.max_map_count problem as before: run sysctl -w vm.max_map_count=262144 as root, and append vm.max_map_count=262144 to /etc/sysctl.conf to make it permanent.
Edit elasticsearch.yml, find the discovery section, and change:
cluster.initial_master_nodes: ["node-1","node-2"]
to:
cluster.initial_master_nodes: ["node-1"]
Configuring Kibana to access Elasticsearch with the certificate
elasticsearch.ssl.certificateAuthorities: [ "config/elasticsearch-ca.pem" ]
cd /usr/local/apps/elasticsearch/awifi@123/kibana
docker cp elasticsearch-ca.pem 958d0d769f38:/usr/share/kibana/config
docker restart 958d0d769f38
The correct kibana.yml configuration is:
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "https://elasticsearch:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.ssl.certificateAuthorities: [ "config/elasticsearch-ca.pem" ]
elasticsearch.username: "kibana"
elasticsearch.password: "YxSUMkHUj2aCjUKKRmcH"
If Kibana was started via docker, then:
docker stop xxxxx
service docker stop
Edit the file:
vi /var/lib/docker/containers/958d0d769f381aa4091b86fb76238013644f5df2e2acfa6474ee875b965da3ad/config.v2.json
changing the elasticsearch connection from http to https, then:
service docker start
docker start xxxxx
1. Stop the container
docker stop <container id>
2. Stop the docker service
service docker stop
3. Edit the parameters in /var/lib/docker/containers/<ID>/config.v2.json
4. Restart the docker service
service docker start
5. Start the container again
docker start <container id>
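Step 3 is just a string substitution inside config.v2.json; a sketch demonstrated on a throwaway copy (the ELASTICSEARCH_HOSTS key and the sample JSON are assumptions for illustration — the real file lives under /var/lib/docker/containers/<ID>/):

```shell
# Demonstrate the http -> https edit on a scratch copy of the file.
printf '{"Env":["ELASTICSEARCH_HOSTS=http://elasticsearch:9200"]}' > /tmp/config.v2.json
sed -i 's#http://elasticsearch:9200#https://elasticsearch:9200#' /tmp/config.v2.json
cat /tmp/config.v2.json
```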
Open the Kibana home page and log in with username/password:
elastic
tAbX96lfmM0YAzBGn67J
Elasticsearch's inverted index
In truth, we do not yet know what happens when data that used to live in MySQL is moved into Elasticsearch. In our past experience, Elasticsearch mainly solved two things: storing large text fields, and making LIKE-style queries fast:
select * from a where a.title like '%xxxx%'
What problems appear if we use Elasticsearch as a bigger MySQL?
Does the inverted index grow too large?
What is a .tim file?
What are .fdx and .fdt files?
How is the data stored, including both the index structure and the data itself on disk?
When data is written it first goes into Lucene's in-memory buffer; after a refresh interval it is flushed to a segment, and as segments accumulate they are merged into larger segments.
A search has to visit every segment. In addition, a translog is recorded (Elasticsearch's translog can be disabled).
A doc is a single record inside a segment.
A field is one field of a doc.
A term is the smallest unit of the index: after tokenization, each token of a field's content is a term.
The inverted index is the mapping from each term to its list of docs (the postings list).
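The term-to-doc-list mapping can be illustrated with a toy example (a sketch in awk; the two sample docs are made up):

```shell
# Build a tiny inverted index: each output line is "term: doc-list" (the postings list).
printf 'doc1 quick brown fox\ndoc2 quick red dog\n' |
awk '{ for (i = 2; i <= NF; i++) postings[$i] = postings[$i] " " $1 }
     END { for (t in postings) print t ":" postings[t] }' | sort
```

Here "quick" maps to both docs (doc1 doc2), while every other term maps to a single doc.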
This is an index-level setting, meaning it can be applied independently to a single index. It is persistent: once set it survives cluster restarts. To turn logging off for a threshold, set it to -1 (for example: "index.search.slowlog.threshold.query.warn": -1).
Search slow log configuration
curl -u username:password -X PUT "192.168.1.3:9200/<index-name>/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.query.debug": "2s",
  "index.search.slowlog.threshold.query.trace": "500ms",
  "index.search.slowlog.threshold.fetch.warn": "1s",
  "index.search.slowlog.threshold.fetch.info": "800ms",
  "index.search.slowlog.threshold.fetch.debug": "500ms",
  "index.search.slowlog.threshold.fetch.trace": "200ms",
  "index.search.slowlog.level": "info"
}'
Index slow log configuration

curl -u username:password -X PUT "192.168.1.3:9200/<index-name>/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index.indexing.slowlog.threshold.index.warn": "10s",
  "index.indexing.slowlog.threshold.index.info": "5s",
  "index.indexing.slowlog.threshold.index.debug": "2s",
  "index.indexing.slowlog.threshold.index.trace": "500ms",
  "index.indexing.slowlog.level": "info",
  "index.indexing.slowlog.source": "1000"
}'
张大成
