How to split a timestamp interval based on hours in Spark
Splitting timestamps into hour-based buckets in Spark
1,2019-04-01 04:00:21,12
1,2019-04-01 06:01:22,34
1,2019-04-01 09:21:23,10
1,2019-04-01 11:23:09,15
1,2019-04-01 12:02:10,15
1,2019-04-01 15:00:21,10
1,2019-04-01 18:00:22,10
1,2019-04-01 19:30:22,30
1,2019-04-01 20:22:30,30
1,2019-04-01 22:20:30,30
1,2019-04-01 23:59:00,10
Break the hour-based timestamps into four 6-hour parts of the day and sum the counts within each part. Here I split as 0-6AM, 6AM-12PM, and so on. Expected output:
1,2019-04-01,12
1,2019-04-01,59
1,2019-04-01,25
1,2019-04-01,110
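To make the expected sums concrete: each row lands in bucket hour/6, and the counts inside a bucket are summed. A quick plain-Scala check of the numbers above (illustration only, not Spark code):

// (hour-of-day, count) pairs taken from the sample rows above
val rows = Seq(
  (4, 12), (6, 34), (9, 10), (11, 15), (12, 15), (15, 10),
  (18, 10), (19, 30), (20, 30), (22, 30), (23, 10))

rows.groupBy { case (h, _) => h / 6 }              // bucket 0 = 0-6AM, 1 = 6AM-12PM, ...
  .map { case (b, xs) => (b, xs.map(_._2).sum) }
  .toSeq.sortBy(_._1)
  .foreach(println)
// prints (0,12) (1,59) (2,25) (3,110)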
Solution
Try this -
Load the test data
// assumes a spark-shell style SparkSession named `spark`
import org.apache.spark.sql.functions._
import spark.implicits._

spark.conf.set("spark.sql.session.timeZone","UTC")
val data =
"""
|c1,c2,c3
|1,2019-04-01 04:00:21,12
|1,2019-04-01 06:01:22,34
|1,2019-04-01 09:21:23,10
|1,2019-04-01 11:23:09,15
|1,2019-04-01 12:02:10,15
|1,2019-04-01 15:00:21,10
|1,2019-04-01 18:00:22,10
|1,2019-04-01 19:30:22,30
|1,2019-04-01 20:22:30,30
|1,2019-04-01 22:20:30,30
|1,2019-04-01 23:59:00,10
""".stripMargin
val stringDS2 = data.split(System.lineSeparator())
.map(_.split("\\,").map(_.replaceAll("""^[ \t]+|[ \t]+$""","")).mkString(","))
.toSeq.toDS()
val df2 = spark.read
.option("sep",",")
.option("inferSchema","true")
.option("header","true")
.option("nullValue","null")
.csv(stringDS2)
df2.show(false)
df2.printSchema()
/**
* +---+-------------------+---+
* |c1 |c2 |c3 |
* +---+-------------------+---+
* |1 |2019-03-31 22:30:21|12 |
* |1 |2019-04-01 00:31:22|34 |
* |1 |2019-04-01 03:51:23|10 |
* |1 |2019-04-01 05:53:09|15 |
* |1 |2019-04-01 06:32:10|15 |
* |1 |2019-04-01 09:30:21|10 |
* |1 |2019-04-01 12:30:22|10 |
* |1 |2019-04-01 14:00:22|30 |
* |1 |2019-04-01 14:52:30|30 |
* |1 |2019-04-01 16:50:30|30 |
* |1 |2019-04-01 18:29:00|10 |
* +---+-------------------+---+
*
* root
* |-- c1: integer (nullable = true)
* |-- c2: timestamp (nullable = true)
* |-- c3: integer (nullable = true)
*/
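Note that the timestamps above are shifted by -5:30 relative to the input strings: the original run apparently had a JVM default timezone of Asia/Kolkata while the session timezone was set to UTC, so the CSV parser and the display disagree. A minimal sketch (my assumption, not part of the original answer) to keep both in sync so timestamps show exactly as written:

// pin the JVM default timezone to UTC as well, before reading the data
java.util.TimeZone.setDefault(java.util.TimeZone.getTimeZone("UTC"))
spark.conf.set("spark.sql.session.timeZone", "UTC")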
Truncate each timestamp down to its 6-hour bucket, then groupBy().sum
val seconds = 21600 // 6 hrs
df2.withColumn("c2_long",expr(s"floor(cast(c2 as long) / $seconds) * $seconds")) // epoch seconds floored to the bucket start
.groupBy("c1","c2_long")
.agg(sum($"c3").as("c3"))
.withColumn("c2",to_date(to_timestamp($"c2_long")))
.withColumn("c2_time",to_timestamp($"c2_long"))
.orderBy("c2")
.show(false)
/**
* +---+----------+---+----------+-------------------+
* |c1 |c2_long |c3 |c2 |c2_time |
* +---+----------+---+----------+-------------------+
* |1 |1554055200|12 |2019-03-31|2019-03-31 18:00:00|
* |1 |1554120000|100|2019-04-01|2019-04-01 12:00:00|
* |1 |1554076800|59 |2019-04-01|2019-04-01 00:00:00|
* |1 |1554141600|10 |2019-04-01|2019-04-01 18:00:00|
* |1 |1554098400|25 |2019-04-01|2019-04-01 06:00:00|
* +---+----------+---+----------+-------------------+
*/
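To see why flooring the epoch seconds to a multiple of 21600 lands on the bucket start, here is the arithmetic for one input row in plain Scala (illustration only):

val seconds = 21600L                                        // 6 hrs
val epoch = java.time.Instant.parse("2019-04-01T06:01:22Z")
  .getEpochSecond                                           // 1554098482
val bucketStart = (epoch / seconds) * seconds               // 1554098400
println(java.time.Instant.ofEpochSecond(bucketStart))       // 2019-04-01T06:00:00Z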
SCALA: The answer from the post I mentioned in the comments works well. (Here df holds the same data with the columns renamed to id, time, and count.)
df.groupBy($"id",window($"time","6 hours").as("time"))
.agg(sum("count").as("count"))
.orderBy("time.start")
.select($"id",to_date($"time.start").as("time"),$"count")
.show(false)
+---+----------+-----+
|id |time |count|
+---+----------+-----+
|1 |2019-04-01|12 |
|1 |2019-04-01|59 |
|1 |2019-04-01|25 |
|1 |2019-04-01|110 |
+---+----------+-----+
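window($"time", "6 hours") creates tumbling windows aligned to the epoch, so with a 6-hour width they start at 00:00, 06:00, 12:00 and 18:00, exactly the buckets the question asks for. If you also want the human-readable part-of-day labels from the question, a minimal sketch (assumes Spark 2.4+ for element_at; the label list and the hour-division arithmetic are my additions, not from the original answers):

val labels = typedLit(Seq("0-6AM", "6AM-12PM", "12PM-6PM", "6PM-12AM"))
df.withColumn("bucket", (hour($"time") / 6).cast("int"))          // 0..3
  .withColumn("part", element_at(labels, $"bucket" + 1))          // element_at is 1-based
  .groupBy($"id", to_date($"time").as("date"), $"bucket", $"part")
  .agg(sum($"count").as("count"))
  .orderBy("date", "bucket")
  .show(false)

Grouping on the numeric bucket keeps the output in chronological order; ordering on the label strings alone would sort them alphabetically.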