Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try
Published: 2020-12-13 22:30:02 · Category: Encyclopedia · Source: compiled from the web
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[192.168.2.1:50010], original=[192.168.2.1:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

This error typically shows up on small clusters (for example, three or fewer datanodes), where HDFS has no spare datanode to substitute into the write pipeline after one fails. One workaround is to disable the replacement attempt by adding the following to Hadoop's hdfs-site.xml:
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>

(Editor: Li Datong)
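As the exception text itself notes, a client can also set these properties programmatically instead of editing hdfs-site.xml. A minimal sketch, assuming the Hadoop client libraries are on the classpath; the namenode URI `hdfs://namenode:9000` and the output path are placeholders, not values from the original article:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PipelineConfigExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Keep the feature switch on, but never attempt to replace a
        // failed datanode in the write pipeline (suitable for small
        // clusters where no spare datanode exists).
        conf.setBoolean(
            "dfs.client.block.write.replace-datanode-on-failure.enable", true);
        conf.set(
            "dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

        // "hdfs://namenode:9000" stands in for your actual namenode URI.
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);
             FSDataOutputStream out = fs.create(new Path("/tmp/example.txt"))) {
            out.writeBytes("hello\n");
        }
    }
}
```

Note that NEVER means a failed datanode is simply dropped from the pipeline and the write continues on the remaining nodes, which reduces the durability of that write; on larger clusters the DEFAULT policy is usually the better choice.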