scala – Incomplete HDFS URI when loading Spark data locally
I am having trouble loading a local CSV file when running through SBT. Basically, I wrote a Spark program in Scala Eclipse that reads the following file:
    val searches = sc.textFile("hdfs:///data/searches")

This works fine against HDFS, but for debugging I would like to load the file from a local directory, which I have set to the project directory. So I tried the following:

    val searches = sc.textFile("file:///data/searches")
    val searches = sc.textFile("./data/searches")
    val searches = sc.textFile("/data/searches")

None of these let me read the file locally, and all of them return this error under SBT:

    Exception in thread "main" java.io.IOException: Incomplete HDFS URI, no host: hdfs:/data/pages
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:143)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
        at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256)
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:304)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:179)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.FlatMappedRDD.getPartitions(FlatMappedRDD.scala:30)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1135)
        at org.apache.spark.rdd.RDD.count(RDD.scala:904)
        at com.user.Result$.get(SparkData.scala:200)
        at com.user.StreamingApp$.main(SprayHerokuExample.scala:35)
        at com.user.StreamingApp.main(SprayHerokuExample.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

In the error report, com.user.Result$.get(SparkData.scala:200) is the line that calls sc.textFile. It seems to default to running against the Hadoop environment. Is there any way I can read this file locally?
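One way to see why bare paths end up being resolved against HDFS is to print the default filesystem from the Hadoop configuration that the SparkContext carries. This is a small debugging sketch, not part of the original program; both property names are standard Hadoop keys:

    // Debugging sketch: show which default filesystem Spark's Hadoop
    // configuration uses when a path has no explicit scheme.
    println(sc.hadoopConfiguration.get("fs.defaultFS"))     // current key
    println(sc.hadoopConfiguration.get("fs.default.name"))  // older, deprecated key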
EDIT: For running locally, I have reconfigured the submit task in build.sbt:

    submit <<= inputTask { (argTask: TaskKey[Seq[String]]) => {
      (argTask, mainClass in Compile, assemblyOutputPath in assembly, sparkHome) map {
        (args, main, jar, sparkHome) => {
          args match {
            case List(output) => {
              val sparkCmd = sparkHome + "/bin/spark-submit"
              Process(sparkCmd :: "--class" :: main.get :: "--master" :: "local[4]" ::
                jar.getPath :: "local[4]" :: output :: Nil) !
            }
            case _ => Process("echo" :: "Usage" :: Nil) !
          }
        }
      }
    }}

The submit command is what I use to run the code.

Solution found: it turns out file:/// is indeed the right approach, but in my case only the full path worked, i.e. file:///home/projects/data/searches. Using just data/searches did not, even though I was working under the home/projects directory.

Solution
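Putting the fix from the edit above back into the program, the read looks like this (the home/projects path is the asker's project location as described, shown purely for illustration):

    // Works: absolute path with an explicit file:// scheme.
    val searches = sc.textFile("file:///home/projects/data/searches")

    // Fails in this setup: a bare relative path is resolved against the
    // default (HDFS) filesystem from the Hadoop configuration, not the
    // current working directory.
    // val searches = sc.textFile("data/searches")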
This should work:
sc.textFile("file:///data/searches") 从你的错误似乎火花正在加载Hadoop配置,这可以确定你有一个Hadoop配置文件或Hadoop环境变量集(如HADOOP_CONF_DIR) (编辑:李大同) 【声明】本站内容均来自网络,其相关言论仅代表作者个人观点,不代表本站立场。若无意侵犯到您的权利,请及时与联系站长删除相关内容! |