Splitting a timestamp interval by hours in Spark

How can I split timestamp records into hour-based buckets in Spark? Sample input (id, timestamp, count):

1,2019-04-01 04:00:21,12
1,2019-04-01 06:01:22,34
1,2019-04-01 09:21:23,10
1,2019-04-01 11:23:09,15
1,2019-04-01 12:02:10,15
1,2019-04-01 15:00:21,10
1,2019-04-01 18:00:22,10
1,2019-04-01 19:30:22,30
1,2019-04-01 20:22:30,30
1,2019-04-01 22:20:30,30
1,2019-04-01 23:59:00,10

Break the hour-based timestamps into four parts of the day (one bucket every 6 hours) and sum the counts within each bucket. Here I split at 0-6 AM, 6 AM-12 PM, and so on. Expected output:

1,2019-04-01,59
1,2019-04-01,25
1,2019-04-01,110
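
For orientation, the four day-parts can be derived straight from the hour of the timestamp with integer division. A minimal sketch, assuming a DataFrame df that already holds the sample columns c1 (id), c2 (timestamp) and c3 (count):

    import org.apache.spark.sql.functions._

    // hour(c2) is 0..23, so dividing by 6 and flooring yields bucket 0..3
    // (0 = 00-06, 1 = 06-12, 2 = 12-18, 3 = 18-24)
    val byDayPart = df
      .withColumn("bucket", floor(hour(col("c2")) / 6))
      .groupBy(col("c1"), to_date(col("c2")).as("day"), col("bucket"))
      .agg(sum(col("c3")).as("c3"))
      .orderBy("day", "bucket")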

Solution

Try this -

Load the test data:

    import org.apache.spark.sql.functions._
    import spark.implicits._          // needed for .toDS() and the $-column syntax

    spark.conf.set("spark.sql.session.timeZone", "UTC")
    val data =
      """
        |c1,c2,c3
        |1,2019-04-01 04:00:21,12
        |1,2019-04-01 06:01:22,34
        |1,2019-04-01 09:21:23,10
        |1,2019-04-01 11:23:09,15
        |1,2019-04-01 12:02:10,15
        |1,2019-04-01 15:00:21,10
        |1,2019-04-01 18:00:22,10
        |1,2019-04-01 19:30:22,30
        |1,2019-04-01 20:22:30,30
        |1,2019-04-01 22:20:30,30
        |1,2019-04-01 23:59:00,10
      """.stripMargin
    // Trim each field and build a Dataset[String] that spark.read.csv can consume directly
    val stringDS2 = data.split(System.lineSeparator())
      .map(_.split("\\,").map(_.replaceAll("""^[ \t]+|[ \t]+$""", "")).mkString(","))
      .toSeq.toDS()
    val df2 = spark.read
      .option("sep", ",")
      .option("inferSchema", "true")   // lets Spark detect c2 as timestamp, c1/c3 as int
      .option("header", "true")
      .option("nullValue", "null")
      .csv(stringDS2)
    df2.show(false)
    df2.printSchema()
    /**
      * +---+-------------------+---+
      * |c1 |c2                 |c3 |
      * +---+-------------------+---+
      * |1  |2019-03-31 22:30:21|12 |
      * |1  |2019-04-01 00:31:22|34 |
      * |1  |2019-04-01 03:51:23|10 |
      * |1  |2019-04-01 05:53:09|15 |
      * |1  |2019-04-01 06:32:10|15 |
      * |1  |2019-04-01 09:30:21|10 |
      * |1  |2019-04-01 12:30:22|10 |
      * |1  |2019-04-01 14:00:22|30 |
      * |1  |2019-04-01 14:52:30|30 |
      * |1  |2019-04-01 16:50:30|30 |
      * |1  |2019-04-01 18:29:00|10 |
      * +---+-------------------+---+
      *
      * root
      * |-- c1: integer (nullable = true)
      * |-- c2: timestamp (nullable = true)
      * |-- c3: integer (nullable = true)
      */

Truncate the timestamp down to 6-hour buckets, then groupBy().sum:


    val seconds = 21600 // 6 hrs = 6 * 60 * 60 seconds

    // Floor the epoch seconds to a multiple of 21600, snapping each row to its 6-hour bucket
    df2.withColumn("c2_long", expr(s"floor(cast(c2 as long) / $seconds) * $seconds"))
      .groupBy("c1", "c2_long")
      .agg(sum($"c3").as("c3"))
      .withColumn("c2", to_date(to_timestamp($"c2_long")))
      .withColumn("c2_time", to_timestamp($"c2_long"))
      .orderBy("c2")
      .show(false)

    /**
      * +---+----------+---+----------+-------------------+
      * |c1 |c2_long   |c3 |c2        |c2_time            |
      * +---+----------+---+----------+-------------------+
      * |1  |1554055200|12 |2019-03-31|2019-03-31 18:00:00|
      * |1  |1554120000|100|2019-04-01|2019-04-01 12:00:00|
      * |1  |1554076800|59 |2019-04-01|2019-04-01 00:00:00|
      * |1  |1554141600|10 |2019-04-01|2019-04-01 18:00:00|
      * |1  |1554098400|25 |2019-04-01|2019-04-01 06:00:00|
      * +---+----------+---+----------+-------------------+
      */
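
The floor trick works because the Unix epoch (1970-01-01 00:00:00 UTC) is itself aligned on a 6-hour boundary, so flooring the epoch seconds to a multiple of 21600 snaps every timestamp to 00:00, 06:00, 12:00 or 18:00 UTC. A quick sanity check against one of the rows above:

    // 2019-04-01 09:30:21 UTC is 1554111021 epoch seconds
    val ts = 1554111021L
    val bucket = ts / 21600 * 21600 // integer division floors positive values
    // bucket == 1554098400, i.e. 2019-04-01 06:00:00 UTC -- matching c2_long above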

SCALA: The answer from the post I commented on works well:

df.groupBy($"id",window($"time","6 hours").as("time"))
  .agg(sum("count").as("count"))
  .orderBy("time.start")
  .select($"id",to_date($"time.start").as("time"),$"count")
  .show(false)

+---+----------+-----+
|id |time      |count|
+---+----------+-----+
|1  |2019-04-01|12   |
|1  |2019-04-01|59   |
|1  |2019-04-01|25   |
|1  |2019-04-01|110  |
+---+----------+-----+
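
window($"time", "6 hours") assigns each row to a fixed tumbling window and returns a struct column with start and end fields. A variant of the same query that keeps the full window bounds instead of collapsing the start to a date (same assumed column names id/time/count):

df.groupBy($"id", window($"time", "6 hours").as("w"))
  .agg(sum("count").as("count"))
  .select($"id", $"w.start".as("bucket_start"), $"w.end".as("bucket_end"), $"count")
  .orderBy("bucket_start")
  .show(false)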
