
Accessing HDFS from docker-hadoop-spark-workbench via Zeppelin

Published: 2020-12-16 03:38:32 | Category: Security | Source: compiled from the web

I have installed https://github.com/big-data-europe/docker-hadoop-spark-workbench and started it with docker-compose. I navigated to the various URLs mentioned in the git readme, and they all appear to come up.

Then I started a local Apache Zeppelin:

./bin/zeppelin.sh start

In the Zeppelin interpreter settings, I navigated to the Spark interpreter and updated `master` to point at the local cluster installed with Docker:

master: updated from local[*] to spark://localhost:8080
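One thing worth noting here: for a standalone Spark cluster, 8080 is by default the master's web UI port, not the port applications submit to; drivers normally connect on 7077. The exact host and port depend on how the docker-compose file maps them, so the value below is a hedged guess to cross-check against your own setup, not a confirmed fix:

```shell
# Zeppelin Spark interpreter setting: point "master" at the standalone
# master's RPC port (7077 by default), not the web UI port (8080).
# Verify the actual published port in your docker-compose file first.
master=spark://localhost:7077
```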

然后我在笔记本中运行以下代码:

import org.apache.hadoop.fs.{FileSystem, Path}

FileSystem.get(sc.hadoopConfiguration).listStatus(new Path("hdfs:///")).foreach(x => println(x.getPath))
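As a sanity check independent of Zeppelin, the same listing can be attempted with the HDFS CLI inside the namenode container. The container name `namenode` is an assumption here; check `docker ps` for the actual name your compose setup uses:

```shell
# List the HDFS root from inside the namenode container.
# "namenode" is a guessed container name - confirm it with `docker ps`.
docker exec -it namenode hdfs dfs -ls hdfs:///
```

If this works but the Zeppelin paragraph does not, the problem is on the Zeppelin/Spark side rather than in HDFS itself.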

and got this exception in the Zeppelin logs:

 INFO [2017-12-15 18:06:35,704] ({pool-2-thread-2} Paragraph.java[jobRun]:362) - run paragraph 20171212-200101_1553252595 using null org.apache.zeppelin.interpreter.LazyOpenInterpreter@32d09a20
 WARN [2017-12-15 18:07:37,717] ({pool-2-thread-2} NotebookServer.java[afterStatusChange]:2064) - Job 20171212-200101_1553252595 is finished, status: ERROR, exception: null, result: %text java.lang.NullPointerException
    at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
    at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
    at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_2(SparkInterpreter.java:398)
    at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:387)
    at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146)
    at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:843)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)

How can I access HDFS from Zeppelin and from Java/Spark code?

Best answer

The exception occurs because, in Zeppelin, the sparkSession object is null for some reason.

Reference:
https://github.com/apache/zeppelin/blob/master/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java

private SparkContext createSparkContext_2() {
    return (SparkContext) Utils.invokeMethod(sparkSession, "sparkContext");
}

This is most likely a configuration problem. Cross-check your interpreter settings against your Spark cluster setup, and make sure Spark itself is working correctly.
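One hedged way to do that cross-check from the machine running Zeppelin is to confirm which ports docker-compose actually publishes and what master URL the cluster advertises. The port numbers below are standalone-Spark defaults and may differ in your compose file:

```shell
# Show which container ports docker-compose actually publishes.
docker-compose ps

# The standalone master web UI (default port 8080) advertises the URL
# that drivers should use, typically spark://<host>:7077. Extract it:
curl -s http://localhost:8080 | grep -o 'spark://[^<" ]*'
```

Whatever URL the master UI reports is what the Zeppelin `master` setting should match.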

Reference: https://zeppelin.apache.org/docs/latest/interpreter/spark.html

Hope this helps.
