scala – How do I write a JDBC sink for Spark Structured Streaming? [SparkException: Task not serializable]
I need a JDBC sink for my Spark Structured Streaming DataFrame. Currently, as far as I know, the DataFrame API lacks a writeStream implementation for JDBC (both in PySpark and in Scala, as of the current Spark version 2.2.0). The only suggestion I have found is to write my own ForeachWriter Scala class, based on this article.
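For contrast, the batch DataFrame API does ship a JDBC writer, so the gap is specific to streaming. A minimal batch-mode sketch (the sample DataFrame, connection details, and table name are placeholders, not part of the question):

```scala
import java.util.Properties
import org.apache.spark.sql.SparkSession

// Batch mode only: DataFrameWriter.jdbc exists in Spark 2.x,
// but there is no writeStream counterpart for JDBC.
val spark = SparkSession.builder.master("local[*]").appName("BatchJdbcSketch").getOrCreate()
val df = spark.createDataFrame(Seq(("apple", 3L), ("pear", 5L))).toDF("col1", "col2")

val props = new Properties()
props.setProperty("user", "<user name>")            // placeholder
props.setProperty("password", "<pass>")             // placeholder
props.setProperty("driver", "org.postgresql.Driver")

df.write
  .mode("append")
  .jdbc("jdbc:postgresql://<host>:<port>/<mydb>", "public.test", props)
```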
So I modified a simple word-count example by adding a custom ForeachWriter class, and tried to writeStream it to PostgreSQL. The stream of words is generated manually from the console (using NetCat: nc -lk -p 9999) and read by Spark from a socket.

Unfortunately, I get a "Task not serializable" error.

APACHE_SPARK_VERSION = 2.1.0

My Scala code:

```scala
// Spark context available as 'sc' (master = local[*], app id = local-1501242382770).
// Spark session available as 'spark'.
import java.sql._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder
  .master("local[*]")
  .appName("StructuredNetworkWordCountToJDBC")
  .config("spark.jars", "/tmp/data/postgresql-42.1.1.jar")
  .getOrCreate()

import spark.implicits._

val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

val words = lines.as[String].flatMap(_.split(" "))

val wordCounts = words.groupBy("value").count()

class JDBCSink(url: String, user: String, pwd: String)
    extends org.apache.spark.sql.ForeachWriter[org.apache.spark.sql.Row] {
  val driver = "org.postgresql.Driver"
  var connection: java.sql.Connection = _
  var statement: java.sql.Statement = _

  def open(partitionId: Long, version: Long): Boolean = {
    Class.forName(driver)
    connection = java.sql.DriverManager.getConnection(url, user, pwd)
    statement = connection.createStatement
    true
  }

  def process(value: org.apache.spark.sql.Row): Unit = {
    statement.executeUpdate("INSERT INTO public.test(col1, col2) " +
      "VALUES ('" + value(0) + "'," + value(1) + ");")
  }

  def close(errorOrNull: Throwable): Unit = {
    connection.close
  }
}

val url = "jdbc:postgresql://<mypostgreserver>:<port>/<mydb>"
val user = "<user name>"
val pwd = "<pass>"

val writer = new JDBCSink(url, user, pwd)

import org.apache.spark.sql.streaming.ProcessingTime

val query = wordCounts
  .writeStream
  .foreach(writer)
  .outputMode("complete")
  .trigger(ProcessingTime("25 seconds"))
  .start()

query.awaitTermination()
```
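One environmental detail the snippet above assumes: the target table public.test must already exist in PostgreSQL. A sketch of creating it over plain JDBC, where the column types are an assumption inferred from the INSERT in process() and the connection details are the same placeholders as in the question:

```scala
import java.sql.DriverManager

// Run once against PostgreSQL before starting the streaming query.
val conn = DriverManager.getConnection(
  "jdbc:postgresql://<mypostgreserver>:<port>/<mydb>", "<user name>", "<pass>")
try {
  val stmt = conn.createStatement()
  // col1 holds the word, col2 its count -- types are assumed, adjust as needed.
  stmt.executeUpdate("CREATE TABLE IF NOT EXISTS public.test(col1 TEXT, col2 BIGINT)")
  stmt.close()
} finally conn.close()
```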
Error message:

```
ERROR StreamExecution: Query [id = ef2e7a4c-0d64-4cad-ad4f-91d349f8575b, runId = a86902e6-d168-49d1-b7e7-084ce503ea68] terminated with error
org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2094)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:924)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:923)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:923)
    at org.apache.spark.sql.execution.streaming.ForeachSink.addBatch(ForeachSink.scala:49)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch$1.apply$mcV$sp(StreamExecution.scala:503)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch$1.apply(StreamExecution.scala:503)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch$1.apply(StreamExecution.scala:503)
    at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:262)
    at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:46)
    at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch(StreamExecution.scala:502)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$1.apply$mcV$sp(StreamExecution.scala:255)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$1.apply(StreamExecution.scala:244)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$1.apply(StreamExecution.scala:244)
    at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:262)
    at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:46)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1.apply$mcZ$sp(StreamExecution.scala:244)
    at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:43)
    at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:239)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:177)
Caused by: java.io.NotSerializableException: org.apache.spark.sql.execution.streaming.StreamExecution
Serialization stack:
    - object not serializable (class: org.apache.spark.sql.execution.streaming.StreamExecution, value: Streaming Query [id = 9b01db99-9120-4047-b779-2e2e0b289f65, runId = e20beefa-146a-4139-96f9-de3d64ce048a] [state = TERMINATED])
    - field (class: $line21.$read$$iw$$iw, name: query, type: interface org.apache.spark.sql.streaming.StreamingQuery)
    - object (class $line21.$read$$iw$$iw, $line21.$read$$iw$$iw@24747e0f)
    - field (class: $line21.$read$$iw, name: $iw, type: class $line21.$read$$iw$$iw)
    - object (class $line21.$read$$iw, $line21.$read$$iw@1814ed19)
    - field (class: $line21.$read, name: $iw, type: class $line21.$read$$iw)
    - object (class $line21.$read, $line21.$read@13e62f5d)
    - field (class: $line25.$read$$iw, name: $line21$read, type: class $line21.$read)
    - object (class $line25.$read$$iw, $line25.$read$$iw@14240e5c)
    - field (class: $line25.$read$$iw$$iw, name: $outer, type: class $line25.$read$$iw)
    - object (class $line25.$read$$iw$$iw, $line25.$read$$iw$$iw@11e4c6f5)
    - field (class: $line25.$read$$iw$$iw$JDBCSink, name: $outer, type: class $line25.$read$$iw$$iw)
    - object (class $line25.$read$$iw$$iw$JDBCSink, $line25.$read$$iw$$iw$JDBCSink@6c096c84)
    - field (class: org.apache.spark.sql.execution.streaming.ForeachSink, name: org$apache$spark$sql$execution$streaming$ForeachSink$$writer, type: class org.apache.spark.sql.ForeachWriter)
    - object (class org.apache.spark.sql.execution.streaming.ForeachSink, org.apache.spark.sql.execution.streaming.ForeachSink@6feccb75)
    - field (class: org.apache.spark.sql.execution.streaming.ForeachSink$$anonfun$addBatch$1, name: $outer, type: class org.apache.spark.sql.execution.streaming.ForeachSink)
    - object (class org.apache.spark.sql.execution.streaming.ForeachSink$$anonfun$addBatch$1, <function1>)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
    ... 25 more
```

How can I make it work?

UPDATE (thanks to everyone, and special thanks to @zsxwing for the simple solution):

> Save the JDBCSink class to a file.

Solution
Just define JDBCSink in a separate file, rather than defining it as an inner class that may capture a reference to its enclosing object.
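A minimal sketch of that arrangement, assuming the spark-shell workflow from the question (the file path and the `:load` step are illustrative; the class body is the one from the question, just lifted to the top level so it has no pointer to a non-serializable REPL wrapper):

```scala
// JDBCSink.scala -- a top-level class in its own file.
// Because it is not nested inside anything, the compiler adds no
// $outer field, so serializing it no longer drags in the REPL session.
import java.sql.{Connection, DriverManager, Statement}
import org.apache.spark.sql.{ForeachWriter, Row}

class JDBCSink(url: String, user: String, pwd: String) extends ForeachWriter[Row] {
  val driver = "org.postgresql.Driver"
  var connection: Connection = _
  var statement: Statement = _

  def open(partitionId: Long, version: Long): Boolean = {
    Class.forName(driver)
    connection = DriverManager.getConnection(url, user, pwd)
    statement = connection.createStatement()
    true
  }

  def process(value: Row): Unit = {
    statement.executeUpdate("INSERT INTO public.test(col1, col2) " +
      "VALUES ('" + value(0) + "'," + value(1) + ");")
  }

  def close(errorOrNull: Throwable): Unit = {
    connection.close()
  }
}
```

In the spark-shell the file can then be brought in with `:load /path/to/JDBCSink.scala` (the path is a placeholder) before `val writer = new JDBCSink(url, user, pwd)`; the rest of the session stays exactly as in the question. As a side note on the class itself, building the INSERT by string concatenation breaks on words containing a quote character; binding the values through a java.sql.PreparedStatement in process() would be more robust.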