
scala – Error when trying to write to HDFS: Server IPC version 9 cannot communicate with client version 4

I am trying to write a file to HDFS using Scala, and I keep getting the following error:

Caused by: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
    at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
    at bcomposes.twitter.Util$.<init>(TwitterStream.scala:39)
    at bcomposes.twitter.Util$.<clinit>(TwitterStream.scala)
    at bcomposes.twitter.StatusStreamer$.main(TwitterStream.scala:17)
    at bcomposes.twitter.StatusStreamer.main(TwitterStream.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)

I installed Hadoop following this tutorial. The code below is what I use to insert a sample file into HDFS.

import java.io.{BufferedWriter, OutputStreamWriter}
import java.net.URI

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val configuration = new Configuration()
// Connect to the namenode and (re)create the target file
val hdfs = FileSystem.get(new URI("hdfs://192.168.11.153:54310"), configuration)
val file = new Path("hdfs://192.168.11.153:54310/s2013/batch/table.html")
if (hdfs.exists(file)) { hdfs.delete(file, true) }
val os = hdfs.create(file)
val br = new BufferedWriter(new OutputStreamWriter(os, "UTF-8"))
br.write("Hello World")
br.close()
hdfs.close()

The Hadoop version is 2.4.0, but the hadoop library version I am using is 1.2.1. What should I change to make this work?

Solution

The Hadoop and Spark versions should be in sync. (In my case, I am using spark-1.2.0 and hadoop 2.2.0.) The error itself is a protocol mismatch: Hadoop 1.x clients speak IPC version 4, while Hadoop 2.x namenodes speak IPC version 9, so the hadoop 1.2.1 library on your classpath cannot talk to your 2.4.0 cluster.
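
To confirm which Hadoop client actually ends up on your classpath, you can print its version at runtime. Below is a minimal sketch using org.apache.hadoop.util.VersionInfo, which is available in both the 1.x and 2.x client libraries:

import org.apache.hadoop.util.VersionInfo

// Prints the version of the Hadoop client libraries on the classpath.
// If this prints 1.2.1 while the cluster runs 2.4.0, the IPC mismatch above is expected.
object HadoopVersionCheck {
  def main(args: Array[String]): Unit = {
    println("Hadoop client version: " + VersionInfo.getVersion)
  }
}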

Step 1 – Go to $SPARK_HOME

Step 2 – Build Spark with the version of the Hadoop client you want:

mvn -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package
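
Since the cluster in the question runs Hadoop 2.4.0, the matching build would presumably use the hadoop-2.4 profile instead (profile names vary between Spark releases, so check the build documentation for your version):

mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package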

Step 3 – The Spark project's build should also declare the matching Spark version:

name := "smartad-spark-songplaycount"

version := "1.0"

scalaVersion := "2.10.4"

//libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.1"
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.2.0"

libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.2.0"

libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.2.0"

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
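
Once the client libraries match the cluster, the write code from the question works as-is. As a variation, here is a minimal sketch that picks up the namenode address from the cluster's own core-site.xml rather than hardcoding the URI (the /usr/local/hadoop path is an assumption; adjust it to your installation):

import java.io.{BufferedWriter, OutputStreamWriter}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object HdfsWriteExample {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // Assumed install path: picks up fs.defaultFS from the cluster configuration
    conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"))

    val fs = FileSystem.get(conf)
    val file = new Path("/s2013/batch/table.html")
    if (fs.exists(file)) fs.delete(file, true)

    // Write a small UTF-8 text file, closing the stream even on failure
    val br = new BufferedWriter(new OutputStreamWriter(fs.create(file), "UTF-8"))
    try br.write("Hello World") finally br.close()
    fs.close()
  }
}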

Reference

Building apache spark with mvn
