python – pyspark: shipping jar dependencies with spark-submit
Published: 2020-12-20 11:41:58 | Category: Python | Source: web
I wrote a pyspark script that reads two JSON files, coGroups them, and sends the result to an Elasticsearch cluster. When I run it locally everything works as expected (mostly): I downloaded the elasticsearch-hadoop jar for the org.elasticsearch.hadoop.mr.EsOutputFormat and org.elasticsearch.hadoop.mr.LinkedMapWritable classes, ran my pyspark job with the --jars argument, and could see the documents appear in the Elasticsearch cluster.
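For context, the write step described above typically looks something like the sketch below. This is a hedged reconstruction, not the poster's actual script: the helper names (make_es_write_conf, save_to_es), the host/port values, and the "test/docs" resource are illustrative assumptions; only the EsOutputFormat/LinkedMapWritable class names and the es_write_conf variable appear in the original post.

```python
# Sketch of writing an RDD to Elasticsearch via elasticsearch-hadoop.
# Assumptions (not from the original post): localhost:9200, index "test/docs",
# and the helper function names below.

def make_es_write_conf(host, port, resource):
    """Build the Hadoop output configuration consumed by EsOutputFormat."""
    return {
        "es.nodes": host,
        "es.port": str(port),        # elasticsearch-hadoop expects string values
        "es.resource": resource,     # "index/type" to write into
    }

def save_to_es(rdd, es_write_conf):
    """Write an RDD of (key, dict) pairs to Elasticsearch.

    This is the call that triggers ClassNotFoundException on the workers
    when the elasticsearch-hadoop jar is missing from their classpath.
    """
    rdd.saveAsNewAPIHadoopFile(
        path="-",  # ignored by EsOutputFormat, but the API requires a path
        outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
        keyClass="org.apache.hadoop.io.NullWritable",
        valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
        conf=es_write_conf,
    )
```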
However, when I try to run it on a Spark cluster, I get this error:

Traceback (most recent call last):
  File "/root/spark/spark_test.py", line 141, in <module>
    conf=es_write_conf
  File "/root/spark/python/pyspark/rdd.py", line 1302, in saveAsNewAPIHadoopFile
    keyConverter, valueConverter, jconf)
  File "/root/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/root/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile.
: java.lang.ClassNotFoundException: org.elasticsearch.hadoop.mr.LinkedMapWritable
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:274)
    at org.apache.spark.util.Utils$.classForName(Utils.scala:157)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1$$anonfun$apply$9.apply(PythonRDD.scala:611)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1$$anonfun$apply$9.apply(PythonRDD.scala:610)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1.apply(PythonRDD.scala:610)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1.apply(PythonRDD.scala:609)
    at scala.Option.flatMap(Option.scala:170)
    at org.apache.spark.api.python.PythonRDD$.getKeyValueTypes(PythonRDD.scala:609)
    at org.apache.spark.api.python.PythonRDD$.saveAsNewAPIHadoopFile(PythonRDD.scala:701)
    at org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:745)

To me this seems clear enough: the elasticsearch-hadoop jar is missing on the workers. So the question is: how do I ship it along with my application? I could use sc.addPyFile for a Python dependency, but that doesn't work for jars, and using the --jars argument of spark-submit didn't help either.

Solution
--jars works fine; the problem was how I was launching the spark-submit job in the first place. The correct way to run it is:
./bin/spark-submit <options> scriptname

So the --jars option must be placed before the script:

./bin/spark-submit --jars /path/to/my.jar myscript.py

This becomes obvious once you realize that this is also the only way to pass arguments to the script itself, since everything after the script name is used as the script's input arguments:

./bin/spark-submit --jars /path/to/my.jar myscript.py --do-magic=true

(Editor: Li Datong)
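The argument-splitting rule above can be illustrated with a small sketch. This is a hypothetical helper for demonstration only (the function name script_args is not part of spark-submit or pyspark): it mimics how everything after the script name on the command line reaches the script as its own arguments, while tokens before it (such as --jars) are consumed by spark-submit.

```python
def script_args(cmdline):
    """Return the tokens spark-submit would pass on to the script itself.

    Illustrative model: spark-submit consumes options up to and including
    the application file (here, the first token ending in ".py"); every
    token after it becomes the script's sys.argv[1:].
    """
    for i, tok in enumerate(cmdline):
        if tok.endswith(".py"):
            return cmdline[i + 1:]
    return []

# For `spark-submit --jars /path/to/my.jar myscript.py --do-magic=true`,
# only `--do-magic=true` reaches myscript.py; `--jars` is handled by spark-submit.
```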