scala – Spark: remove duplicate rows from a DataFrame
See also the English answer: How to select the first row of each group? (8 answers)
Suppose I have a DataFrame like:

    val json = sc.parallelize(Seq(
      """{"a":1,"b":2,"c":22,"d":34}""",
      """{"a":3,"b":9,"d":12}""",
      """{"a":1,"b":4,"c":23,"d":12}"""))
    val df = sqlContext.read.json(json)

I want to remove rows that are duplicates on column "a", based on the value in column "b". That is, if column "a" has duplicate rows, I want to keep the row with the larger "b" value. For the example above, after processing I need only:

    {"a":3,"b":9,"d":12}

and

    {"a":1,"b":4,"c":23,"d":12}
The Spark DataFrame dropDuplicates API does not seem to support this. With the RDD approach I could do a map().reduceByKey(), but what DataFrame-specific operation does this? Would appreciate some help, thanks.
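For comparison, the RDD route hinted at in the question can be written directly against df.rdd. This is a minimal sketch, assuming the df defined above and that Spark's JSON reader inferred the numeric columns as Long:

    // Key each Row by column "a", keep the Row with the larger "b" per key,
    // then rebuild a DataFrame from the surviving Rows.
    val dedupedRows = df.rdd
      .keyBy(_.getAs[Long]("a"))
      .reduceByKey((r1, r2) => if (r1.getAs[Long]("b") >= r2.getAs[Long]("b")) r1 else r2)
      .values
    val dedupedDf = sqlContext.createDataFrame(dedupedRows, df.schema)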
Solution

You can use a window function in Spark SQL to achieve this:
    df.registerTempTable("x")

    sqlContext.sql("""
      SELECT a, b, c, d
      FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY a ORDER BY b DESC) rn FROM x) y
      WHERE rn = 1
    """).collect()

The inner query numbers the rows within each partition of "a" in descending order of "b", so filtering on rn = 1 keeps exactly the row with the largest "b" for each "a". This will achieve what you need.
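The same result can be had without raw SQL through the DataFrame API. A minimal sketch, assuming the df from the question and Spark 1.6+ where org.apache.spark.sql.functions.row_number is available:

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.row_number

    // Number the rows inside each "a" partition from the largest "b" down,
    // keep only the first row per partition, then drop the helper column.
    val w = Window.partitionBy("a").orderBy(df("b").desc)
    val deduped = df.withColumn("rn", row_number().over(w))
      .where("rn = 1")
      .drop("rn")

Either way, ties on "b" keep an arbitrary one of the tied rows; add a secondary sort key to the ordering if the choice needs to be deterministic.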