
java – Primary shard is not active or isn't assigned is a known node?

I am running Elasticsearch version 4.1 on Windows 8. I try to index a document through Java, and when I run a JUnit test the error shown below appears.
org.elasticsearch.action.UnavailableShardsException: [wms][3] Primary shard is not active or isn't assigned is a known node. Timeout: [1m], request: index {[wms][AUpdb-bMQ3rfSDgdctGY], source[{
    "fleetNumber": "45",
    "timestamp": "1245657888",
    "geoTag": "73.0012312,-123.00909",
    "videoName": "timestamp.mjpeg",
    "content": "ASD123124NMMM"
}]}
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.retryBecauseUnavailable(TransportShardReplicationOperationAction.java:784)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.doStart(TransportShardReplicationOperationAction.java:402)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$3.onTimeout(TransportShardReplicationOperationAction.java:500)
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:239)
    at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:497)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)

I cannot figure out why this error occurs. When I delete data or an index, it works fine. What could be the possible cause?
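For context, the exception means the index request could not find an active primary shard for the [wms] index within the one-minute timeout. Below is a minimal sketch of how one might surface that state from the ES 1.x Java TransportClient before indexing; the host, port, and wait timeout are illustrative assumptions, not part of the original question.

    import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
    import org.elasticsearch.client.Client;
    import org.elasticsearch.client.transport.TransportClient;
    import org.elasticsearch.common.transport.InetSocketTransportAddress;
    import org.elasticsearch.common.unit.TimeValue;

    public class WmsHealthCheck {
        public static void main(String[] args) {
            // Host and port are assumptions for illustration (9300 is the default ES 1.x transport port).
            Client client = new TransportClient()
                    .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
            try {
                // Wait until every primary shard of "wms" is active (YELLOW or better),
                // instead of letting the index request time out after one minute.
                ClusterHealthResponse health = client.admin().cluster()
                        .prepareHealth("wms")
                        .setWaitForYellowStatus()
                        .setTimeout(TimeValue.timeValueSeconds(30))
                        .execute().actionGet();
                System.out.println("status=" + health.getStatus()
                        + ", unassigned shards=" + health.getUnassignedShards());
            } finally {
                client.close();
            }
        }
    }

If this reports a RED status with unassigned shards, allocation is being blocked cluster-side, which is what the accepted answer below addresses.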

Solution

You should have a look at this link:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-allocation.html

In particular, this part:

cluster.routing.allocation.disk.watermark.low controls the low watermark for disk usage. It defaults to 85%, meaning ES will not allocate new shards to nodes once they have more than 85% disk used. It can also be set to an absolute byte value (like 500mb) to prevent ES from allocating shards if less than the configured amount of space is available.

cluster.routing.allocation.disk.watermark.high controls the high watermark. It defaults to 90%, meaning ES will attempt to relocate shards to another node if the node disk usage rises above 90%. It can also be set to an absolute byte value (similar to the low watermark) to relocate shards once less than the configured amount of space is available on the node.
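If freeing disk space is not immediately possible, these watermarks can also be raised at runtime through the cluster settings API. The following is a minimal sketch against the ES 1.x Java TransportClient, assuming a node at localhost:9300; the 90%/95% values are placeholder assumptions, not recommendations.

    import org.elasticsearch.client.Client;
    import org.elasticsearch.client.transport.TransportClient;
    import org.elasticsearch.common.settings.ImmutableSettings;
    import org.elasticsearch.common.transport.InetSocketTransportAddress;

    public class RaiseDiskWatermarks {
        public static void main(String[] args) {
            Client client = new TransportClient()
                    .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
            try {
                // Transient settings take effect immediately but are lost on a full cluster restart.
                client.admin().cluster().prepareUpdateSettings()
                        .setTransientSettings(ImmutableSettings.settingsBuilder()
                                .put("cluster.routing.allocation.disk.watermark.low", "90%")
                                .put("cluster.routing.allocation.disk.watermark.high", "95%")
                                .build())
                        .execute().actionGet();
            } finally {
                client.close();
            }
        }
    }

Since transient settings do not survive a restart, the durable fix is still to free disk space (or add nodes) so that usage drops back below the low watermark.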


