scala – Spark: is `count` on grouped data a transformation or an action?
Published: 2020-12-16 09:04:43 | Category: Security | Source: compiled from the web
I know that `count` called on an RDD or a DataFrame is an action. But while playing around in spark-shell, I observed the following:

```scala
scala> val empDF = Seq((1,"James Gordon",30,"Homicide"),
     |   (2,"Harvey Bullock",35,"Homicide"),
     |   (3,"Kristen Kringle",28,"Records"),
     |   (4,"Edward Nygma",30,"Forensics"),
     |   (5,"Leslie Thompkins",31,"Forensics")).toDF("id","name","age","department")
empDF: org.apache.spark.sql.DataFrame = [id: int, name: string, age: int, department: string]

scala> empDF.show
+---+----------------+---+----------+
| id|            name|age|department|
+---+----------------+---+----------+
|  1|    James Gordon| 30|  Homicide|
|  2|  Harvey Bullock| 35|  Homicide|
|  3| Kristen Kringle| 28|   Records|
|  4|    Edward Nygma| 30| Forensics|
|  5|Leslie Thompkins| 31| Forensics|
+---+----------------+---+----------+

scala> empDF.groupBy("department").count  // count returned a DataFrame
res1: org.apache.spark.sql.DataFrame = [department: string, count: bigint]

scala> res1.show
+----------+-----+
|department|count|
+----------+-----+
|  Homicide|    2|
|   Records|    1|
| Forensics|    2|
+----------+-----+
```

When I call `count` on GroupedData (`empDF.groupBy("department")`), I get another DataFrame as the result (`res1`). This leads me to believe that `count` is a transformation in this case. That is further supported by the fact that no computation is triggered when I call `count`; instead, the computation starts when I run `res1.show`.

I have not been able to find any documentation suggesting that `count` can also be a transformation. Could someone shed some light on this?

Solution
The `.count()` you are using in your code is defined on `RelationalGroupedDataset`; it creates a new column holding the number of elements in each group of the grouped dataset. That is a transformation. Reference:
https://spark.apache.org/docs/1.6.0/api/scala/index.html#org.apache.spark.sql.GroupedDataset

The `.count()` you would normally call on an RDD/DataFrame/Dataset is something entirely different: that `.count()` is an action. See: https://spark.apache.org/docs/1.6.0/api/scala/index.html#org.apache.spark.rdd.RDD

EDIT:

To avoid confusion in the future, when counting on a grouped dataset, prefer the `count` aggregate function inside `.agg()` over the bare `.count()`:

```scala
empDF.groupBy($"department").agg(count($"department") as "countDepartment").show
```
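The underlying distinction can be illustrated without Spark at all. Below is a deliberately tiny toy model (none of these names are Spark's API): transformations merely compose a deferred plan and return a new dataset object, while actions force the plan and return a plain value. This is why `groupBy(...).count` can hand back a new DataFrame instantly, and why nothing runs until `show` (an action) is called.

```scala
// Toy model of Spark's lazy evaluation -- NOT Spark's API.
// A "plan" is just a thunk that produces the data when forced.
case class ToyDataset[A](plan: () => Seq[A]) {

  // Transformation: wraps the plan in a new one; nothing is computed here.
  def map[B](f: A => B): ToyDataset[B] =
    ToyDataset(() => plan().map(f))

  // Transformation analogous to groupBy(...).count: returns a NEW dataset
  // of (key, count) pairs, still unevaluated.
  def groupByCount[K](key: A => K): ToyDataset[(K, Int)] =
    ToyDataset(() => plan().groupBy(key).map { case (k, vs) => (k, vs.size) }.toSeq)

  // Action analogous to RDD.count: forces the plan, returns a plain value.
  def count(): Int = plan().size

  // Action analogous to show/collect: forces the plan.
  def collect(): Seq[A] = plan()
}

val depts = ToyDataset(() =>
  Seq("Homicide", "Homicide", "Records", "Forensics", "Forensics"))

val grouped = depts.groupByCount(identity) // transformation: nothing evaluated yet
val rows    = grouped.collect()            // action: evaluation happens here
```

In the toy model, `depts.count()` returns an `Int` (an action), while `depts.groupByCount(...)` returns another `ToyDataset` (a transformation) -- mirroring how `Dataset.count` returns a `Long` but `RelationalGroupedDataset.count` returns a `DataFrame`.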