In MySQL, I can write a query like this:
select
    cast(from_unixtime(t.time, '%Y-%m-%d %H:00') as datetime) as timeHour
  , ...
from
    some_table t
group by
    timeHour, ...
order by
    timeHour, ...
Here, timeHour in the GROUP BY refers to the alias of a SELECT expression.
But when I tried a similar query in Spark SQL, I got an error:
Error: org.apache.spark.sql.AnalysisException:
cannot resolve '`timeHour`' given input columns: ...
My Spark SQL query looks like this:
select
    cast(t.unixTime as timestamp) as timeHour
  , ...
from
    another_table as t
group by
    timeHour, ...
order by
    timeHour, ...
Is this construct possible in Spark SQL?
Solution:
Is this construct possible in Spark SQL?
Yes, it is. There are two ways to make this work in Spark SQL, so that a column defined in the SELECT list can be used in the GROUP BY and ORDER BY clauses. The error occurs because GROUP BY is resolved before the SELECT aliases exist (MySQL allows aliases there as an extension); wrapping the query makes timeHour a real column that the outer clauses can see.
Approach 1: use a subquery:
SELECT timeHour, sum(...) AS someThing
FROM (
    SELECT
        from_unixtime(starttime / 1000) AS timeHour
      , starttime
      , ...
    FROM some_table
)
WHERE
      starttime >= 1000 * unix_timestamp('2017-09-16 00:00:00')
  AND starttime <= 1000 * unix_timestamp('2017-09-16 04:00:00')
GROUP BY
      timeHour
ORDER BY
      timeHour
LIMIT 10;
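
To make this concrete, here is a minimal, self-contained sketch (mine, not part of the original answer). It assumes a local SparkSession and a toy table in which starttime holds epoch milliseconds and sum(value) stands in for the elided sum(...):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("alias-demo").getOrCreate()
import spark.implicits._

// Toy rows: epoch millis for 2017-09-16 00:00 and 01:00 UTC.
Seq(
  (1505520000000L, 1.0),
  (1505523600000L, 2.0)
).toDF("starttime", "value").createOrReplaceTempView("some_table")

// The alias becomes a real column of the inner result, so the outer
// GROUP BY and ORDER BY can resolve it.
spark.sql("""
  SELECT timeHour, sum(value) AS someThing
  FROM (SELECT from_unixtime(starttime / 1000) AS timeHour, value
        FROM some_table)
  GROUP BY timeHour
  ORDER BY timeHour
""").show()

Note that from_unixtime formats in the session time zone, so the printed hours depend on your local settings.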
Approach 2: use WITH (the more elegant way):
-- define the derived column once in a named subquery (CTE)
WITH table_alias AS (
    SELECT
        from_unixtime(starttime / 1000) AS timeHour
      , starttime
      , ...
    FROM some_table
)
-- then query the alias as if it were a table
SELECT timeHour, sum(...) AS someThing
FROM table_alias
WHERE
      starttime >= 1000 * unix_timestamp('2017-09-16 00:00:00')
  AND starttime <= 1000 * unix_timestamp('2017-09-16 04:00:00')
GROUP BY
      timeHour
ORDER BY
      timeHour
LIMIT 10;
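
For completeness, the alias restriction can also be sidestepped without any subquery by repeating the expression in the GROUP BY clause, which is the SQL analogue of the second DataFrame variant shown below; ORDER BY can still use the alias, since it is resolved after the SELECT list. A sketch, reusing the toy table from the snippet above:

// Group by the expression itself; the alias is only used for ORDER BY.
spark.sql("""
  SELECT from_unixtime(starttime / 1000) AS timeHour, sum(value) AS someThing
  FROM some_table
  GROUP BY from_unixtime(starttime / 1000)
  ORDER BY timeHour
""").show()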
Alternatively, with the Spark DataFrame API in Scala (no SQL):
import org.apache.spark.sql.functions._
import spark.implicits._   // needed for the $"..." column syntax

val df = ....   // load the actual table as a DataFrame

// derive the column first, then group and order by it
df.withColumn("timeHour", from_unixtime($"starttime" / 1000))
  .groupBy($"timeHour")
  .agg(sum("...").as("someThing"))
  .orderBy($"timeHour")
  .show()

// another way, as per eliasah's comment: alias the expression directly in groupBy
df.groupBy(from_unixtime($"starttime" / 1000).as("timeHour"))
  .agg(sum("...").as("someThing"))
  .orderBy($"timeHour")
  .show()
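
One last aside (again mine, not from the original answer): if I recall correctly, recent Spark releases can resolve SELECT aliases in GROUP BY directly when the spark.sql.groupByAliases setting is enabled, which is its default, so the original query may work unchanged there; the workarounds above remain the safe choice for older versions. A quick check against the toy table:

// Assumption: spark.sql.groupByAliases defaults to true in recent Spark.
spark.conf.set("spark.sql.groupByAliases", true)
spark.sql("""
  SELECT from_unixtime(starttime / 1000) AS timeHour, sum(value) AS someThing
  FROM some_table
  GROUP BY timeHour
  ORDER BY timeHour
""").show()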