scala – How can I reduce the verbosity of Spark's runtime output?
How do I reduce the amount of trace information the Spark runtime produces?
The default is far too verbose. How can I turn it off, and turn it back on when I need it? Thanks.

Verbose mode:

scala> val la = sc.parallelize(List(12,4,5,3,6,781))
scala> la.collect
15/01/28 09:57:24 INFO SparkContext: Starting job: collect at <console>:15
15/01/28 09:57:24 INFO DAGScheduler: Got job 3 (collect at <console>:15) with 1 output ...
15/01/28 09:57:24 INFO Executor: Running task 0.0 in stage 3.0 (TID 3)
15/01/28 09:57:24 INFO Executor: Finished task 0.0 in stage 3.0 (TID 3). 626 bytes result sent to driver
15/01/28 09:57:24 INFO DAGScheduler: Stage 3 (collect at <console>:15) finished in 0.002 s
15/01/28 09:57:24 INFO DAGScheduler: Job 3 finished: collect at <console>:15, took 0.020061 s
res5: Array[Int] = Array(12, 4, 5, 3, 6, 781)

Silent mode (desired):

scala> val la = sc.parallelize(List(12,781))
scala> la.collect
res5: Array[Int] = Array(12, 781)

Solution
Quoting the "Learning Spark" book.
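One commonly cited fix, which matches the approach Learning Spark and the Spark documentation describe, is to lower the log4j root logger level in the shipped configuration file. A sketch, assuming a standard Spark install where `conf/log4j.properties.template` exists:

```properties
# Copy the bundled template first, e.g.:
#   cp conf/log4j.properties.template conf/log4j.properties
# then change the root category from INFO to WARN
# (use ERROR to be quieter still):
log4j.rootCategory=WARN, console
```

The INFO-level DAGScheduler/Executor chatter then disappears from the shell, while `res5: ...` results still print because they go to stdout, not the logger. To toggle verbosity at runtime instead of editing files, newer Spark versions (1.4+) also expose `sc.setLogLevel("WARN")` and `sc.setLogLevel("INFO")` on the SparkContext.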