Root user with Elasticsearch 2.4.0 in a Docker container

I am running an ELK stack with Docker for log management; the current setup is ES 1.7, Logstash 1.5.4 and Kibana 4.1.4. Now I am trying to upgrade Elasticsearch to 2.4.0, using the tar.gz tarball from https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.0/elasticsearch-2.4.0.tar.gz. Since ES 2.x does not allow running as the root user, I have used the

-Des.insecure.allow.root=true

option when starting the elasticsearch service, but my container fails to start. The logs do not point to any problem.
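For reference, this is a minimal sketch of how that option is passed on startup (the path matches the install location used in the Dockerfile further down):

    # Start Elasticsearch 2.4.0 in the foreground, overriding the root-user check
    /opt/log-management/elasticsearch/bin/elasticsearch -Des.insecure.allow.root=true

The container startup log follows.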

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   874  100   874    0     0   874k      0 --:--:-- --:--:-- --:--:--  853k
//opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found

> Scheduler@0.0.0 start /opt/log-management/Scheduler
> node scheduler-app.js

> ESExportWrapper@0.0.0 start /opt/log-management/ESExportWrapper
> node app.js
Jobs are registered
[2016-09-28 09:04:24,646][INFO ][bootstrap ] max_open_files [1048576]
[2016-09-28 09:04:24,686][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
Native thread-sleep not available.
This will result in much slower performance,but it will still work.
You should re-install spawn-sync or upgrade to the lastest version of node if possible.
Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
[2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] version[2.4.0],pid[1],build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] initializing ...
Wed,28 Sep 2016 09:04:24 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5
Wed,28 Sep 2016 09:04:24 GMT connect deprecated multipart: use parser (multiparty,busboy,formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
Wed,28 Sep 2016 09:04:24 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
[2016-09-28 09:04:25,399][INFO ][plugins ] [Kismet Deadly] modules [reindex,lang-expression,lang-groovy],plugins [],sites []
[2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] using [1] data paths,mounts [[/data (/dev/mapper/platform-data)]],net usable_space [1tb],net total_space [1tb],spins? [possibly],types [xfs]
[2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] heap size [7.8gb],compressed ordinary object pointers [true]
[2016-09-28 09:04:25,455][WARN ][threadpool ] [Kismet Deadly] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
[2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] initialized
[2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] starting ...
[2016-09-28 09:04:27,695][INFO ][transport ] [Kismet Deadly] publish_address {10.240.118.68:9300},bound_addresses {[::1]:9300},{127.0.0.1:9300}
[2016-09-28 09:04:27,700][INFO ][discovery ] [Kismet Deadly] ccs-elasticsearch/q2Sv4FUFROGIdIWJrNENVA

Any clues would be much appreciated.

EDIT 1: Since "//opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found" is an error and the docker image has no hostname utility, I tried using the uname -n command to obtain the HOSTNAME for ES. It no longer throws the hostname error, but the problem persists: it still does not start.
Is that the correct usage?
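For context, this is roughly the substitution made in bin/elasticsearch (a sketch; assuming the script derives HOSTNAME by calling the hostname binary, which the image lacks):

    # Line in bin/elasticsearch that fails in the container (original form assumed):
    # export HOSTNAME=`hostname -s`
    # Replacement using uname, which the thin base image does ship:
    export HOSTNAME=`uname -n`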

One more doubt: the ES 1.7 that is currently running does not have the hostname utility either, yet it runs without any problem. Very confusing.
Logs after switching to uname -n:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1083  100  1083    0     0  1093k      0 --:--:-- --:--:-- --:--:-- 1057k

> ESExportWrapper@0.0.0 start /opt/log-management/ESExportWrapper
> node app.js


> Scheduler@0.0.0 start /opt/log-management/Scheduler
> node scheduler-app.js

Jobs are registered
[2016-09-30 10:10:37,785][INFO ][bootstrap                ] max_open_files [1048576]
[2016-09-30 10:10:37,822][WARN ][bootstrap                ] running as ROOT user. this is a bad idea!
Native thread-sleep not available.
This will result in much slower performance,but it will still work.
You should re-install spawn-sync or upgrade to the lastest version of node if possible.
Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
[2016-09-30 10:10:37,993][INFO ][node                     ] [Helleyes] version[2.4.0],pid[1],build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-09-30 10:10:37,993][INFO ][node                     ] [Helleyes] initializing ...
Fri,30 Sep 2016 10:10:38 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5
Fri,30 Sep 2016 10:10:38 GMT connect deprecated multipart: use parser (multiparty,busboy,formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
Fri,30 Sep 2016 10:10:38 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
[2016-09-30 10:10:38,435][INFO ][plugins                  ] [Helleyes] modules [reindex,lang-expression,lang-groovy],plugins [],sites []
[2016-09-30 10:10:38,455][INFO ][env                      ] [Helleyes] using [1] data paths,mounts [[/data (/dev/mapper/platform-data)]],net usable_space [1tb],net total_space [1tb],spins? [possibly],types [xfs]
[2016-09-30 10:10:38,456][INFO ][env                      ] [Helleyes] heap size [7.8gb],compressed ordinary object pointers [true]
[2016-09-30 10:10:38,483][WARN ][threadpool               ] [Helleyes] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
[2016-09-30 10:10:40,151][INFO ][node                     ] [Helleyes] initialized
[2016-09-30 10:10:40,152][INFO ][node                     ] [Helleyes] starting ...
[2016-09-30 10:10:40,278][INFO ][transport                ] [Helleyes] publish_address {10.240.118.68:9300},bound_addresses {[::1]:9300},{127.0.0.1:9300}
[2016-09-30 10:10:40,283][INFO ][discovery                ] [Helleyes] ccs-elasticsearch/wvVGkhxnTqaa_wS5GGjZBQ
[2016-09-30 10:10:40,360][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0x329b2977,/172.17.0.15:53388 => /10.240.118.69:9300]],closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:40,360][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0xdf31e5e6,/172.17.0.15:46846 => /10.240.118.70:9300]],closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:41,798][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0xcff0b2b6,/172.17.0.15:46958 => /10.240.118.70:9300]],closing connection
[2016-09-30 10:10:41,800][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0xb47caaf6,/172.17.0.15:53501 => /10.240.118.69:9300]],closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:43,302][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0x6247aa3f,/172.17.0.15:47057 => /10.240.118.70:9300]],closing connection
[2016-09-30 10:10:43,303][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0x1d266aa0,/172.17.0.15:53598 => /10.240.118.69:9300]],closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:44,807][INFO ][cluster.service          ] [Helleyes] new_master {Helleyes}{wvVGkhxnTqaa_wS5GGjZBQ}{10.240.118.68}{10.240.118.68:9300},reason: zen-disco-join(elected_as_master,[0] joins received)
[2016-09-30 10:10:44,852][INFO ][http                     ] [Helleyes] publish_address {10.240.118.68:9200},bound_addresses {[::1]:9200},{127.0.0.1:9200}
[2016-09-30 10:10:44,852][INFO ][node                     ] [Helleyes] started
[2016-09-30 10:10:44,984][INFO ][gateway                  ] [Helleyes] recovered [32] indices into cluster_state

Error after the failed deployment:

failed: [10.240.118.68] (item={u'url': u'http://10.240.118.68:9200'}) => {"content": "","failed": true,"item": {"url": "http://10.240.118.68:9200"},"msg": "Status code was not [200]: Request failed: 

EDIT 2: Even with the hostname utility installed and working correctly, the container fails to start. The logs are the same as in EDIT 1.

EDIT 3: The container does start, but it is not reachable at http://nodeip:9200. Of the 3 nodes, only 1 has 2.4; the other 2 still have 1.7, and the 2.4 node is not part of the cluster. Inside the container running 2.4, curl to localhost:9200 returns the elasticsearch response, but from outside the container it is unreachable.
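One plausible explanation (my assumption, consistent with the bound_addresses {[::1]:9300},{127.0.0.1:9300} lines in the logs above): since 2.0, Elasticsearch binds to loopback only by default, so a published port still refuses connections from outside the container. A sketch of the corresponding fix, in the same echo-into-yml style used for clustering below (ES_CONFIG_PATH as defined in the Dockerfile):

    # Bind on all interfaces so the published port is reachable from outside
    # the container; ES 2.x binds to loopback only by default
    echo 'network.host: 0.0.0.0' >> ${ES_CONFIG_PATH}/elasticsearch.yml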

EDIT 4: I tried running a basic installation of ES 2.4 on the cluster, on the same setup where ES 1.7 works fine. I have also run the ES migration plugin to check whether the cluster can run ES 2.4, and it gave me green. The basic installation details are below.

Dockerfile

#Pulling SLES12 thin base image
FROM private-registry-1

#Author
MAINTAINER XYZ

# Pre-requisite - Adding repositories
RUN zypper ar private-registry-2

RUN zypper --no-gpg-checks -n refresh

#Install required packages and dependencies
RUN zypper -n in  net-tools-1.60-764.185 wget-1.14-7.1 python-2.7.9-14.1 python-base-2.7.9-14.1 tar-1.27.1-7.1 

#Downloading elasticsearch executable
ENV ES_VERSION=2.4.0
ENV ES_DIR="//opt//log-management//elasticsearch"
ENV ES_CONFIG_PATH="${ES_DIR}//config"
ENV ES_REST_PORT=9200
ENV ES_INTERNAL_COM_PORT=9300

WORKDIR /opt/log-management
RUN wget private-registry-3/elasticsearch/elasticsearch/${ES_VERSION}.tar/elasticsearch-${ES_VERSION}.tar.gz --no-check-certificate
RUN tar -xzvf ${ES_DIR}-${ES_VERSION}.tar.gz \
&& rm ${ES_DIR}-${ES_VERSION}.tar.gz \
&& mv ${ES_DIR}-${ES_VERSION} ${ES_DIR} 

#Exposing elasticsearch server container port to the HOST
EXPOSE ${ES_REST_PORT} ${ES_INTERNAL_COM_PORT}

#Removing binary files which are not needed
RUN zypper -n rm wget

# Removing zypper repos
RUN zypper rr caspiancs_common

#Running elasticsearch executable
WORKDIR ${ES_DIR}
ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true

Built with

docker build -t es-test .

1) When using docker run -d --name elasticsearch --net=host -p 9200:9200 -p 9300:9300 es-test, as mentioned in one of the comments, and curling localhost:9200 inside the container or on the node running the container, I get the correct response. I still cannot reach the other nodes of the cluster on port 9200.

2) When using docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 es-test and curling localhost:9200 inside the container, it works fine, but on the node it gives me the error

curl: (56) Recv failure: Connection reset by peer

I still cannot reach the other nodes of the cluster on port 9200.
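To narrow down where the connection is being dropped, something like the following might help (a sketch; the container name matches the docker run commands above, and net-tools is installed in the image):

    # On the node running the container: confirm the publish mapping exists
    docker port elasticsearch 9200
    # Inside the container: confirm ES is listening beyond loopback
    docker exec elasticsearch netstat -nlt | grep 9200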

EDIT 5: Using this answer on this question, I got ES 2.4 running in all three of the three containers. But ES fails to form a cluster across the three containers. The network configuration is as follows:

network.host: 0.0.0.0
http.port: 9200

#configure elasticsearch.yml for clustering
echo 'discovery.zen.ping.unicast.hosts: [ELASTICSEARCH_IPS] ' >> ${ES_CONFIG_PATH}/elasticsearch.yml
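With the three node addresses of this setup substituted for the ELASTICSEARCH_IPS placeholder, the rendered line would presumably look like:

    discovery.zen.ping.unicast.hosts: ["10.240.118.68", "10.240.118.69", "10.240.118.70"]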

Logs fetched with docker logs contain the following:

[2016-10-06 12:31:28,887][WARN ][bootstrap                ] running as ROOT user. this is a bad idea!
[2016-10-06 12:31:29,080][INFO ][node                     ] [Screech] version[2.4.0],pid[1],build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-10-06 12:31:29,081][INFO ][node                     ] [Screech] initializing ...
[2016-10-06 12:31:29,652][INFO ][plugins                  ] [Screech] modules [reindex,lang-expression,lang-groovy],plugins [],sites []
[2016-10-06 12:31:29,684][INFO ][env                      ] [Screech] using [1] data paths,mounts [[/ (rootfs)]],net usable_space [8.7gb],net total_space [9.7gb],spins? [unknown],types [rootfs]
[2016-10-06 12:31:29,684][INFO ][env                      ] [Screech] heap size [989.8mb],compressed ordinary object pointers [true]
[2016-10-06 12:31:29,720][WARN ][threadpool               ] [Screech] requested thread pool size [60] for [index] is too large; setting to maximum [5] instead
[2016-10-06 12:31:31,387][INFO ][node                     ] [Screech] initialized
[2016-10-06 12:31:31,387][INFO ][node                     ] [Screech] starting ...
[2016-10-06 12:31:31,456][INFO ][transport                ] [Screech] publish_address {172.17.0.16:9300},bound_addresses {[::]:9300}
[2016-10-06 12:31:31,465][INFO ][discovery                ] [Screech] ccs-elasticsearch/YeO41MBIR3uqzZzISwalmw
[2016-10-06 12:31:34,500][WARN ][discovery.zen            ] [Screech] failed to connect to master [{Bobster}{Gh-6yBggRIypr7OuW1tXhA}{172.17.0.15}{172.17.0.15:9300}],retrying...
ConnectTransportException[[Bobster][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
    at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
    at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
    at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
    at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

Whenever I set network.host to the IP address of the host running the container, I end up back in the old situation, i.e. only one container running ES 2.4 and the other two running 1.7.

Just noticed that docker-proxy is listening on 9300, or at least I think it is:

elasticsearch-server/src/main/docker # netstat -nlp | grep 9300
tcp        0      0 :::9300                 :::*                    LISTEN      6656/docker-proxy   

Any clues on this?

Best Answer
I was able to form the cluster using the following settings:

network.publish_host = CONTAINER_HOST_ADDRESS, i.e. the address of the node the container is running on.
network.bind_host = 0.0.0.0
transport.publish_port = 9300
transport.publish_host = CONTAINER_HOST_ADDRESS

transport.publish_port is important when you run ES behind a proxy/load balancer such as nginx or haproxy.
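Expressed in elasticsearch.yml form, the working configuration would look roughly like this (a sketch; CONTAINER_HOST_ADDRESS stands for the real node address, e.g. 10.240.118.68):

    # Bind inside the container on all interfaces, but advertise the host's
    # address and port to the rest of the cluster
    network.bind_host: 0.0.0.0
    network.publish_host: CONTAINER_HOST_ADDRESS
    transport.publish_host: CONTAINER_HOST_ADDRESS
    transport.publish_port: 9300

With the publish settings in place, the other nodes dial back to the host address and mapped port instead of the container-internal 172.17.x.x address, which is why the cluster can finally form.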
