How to optimize a PostgreSQL VIEW
I have some code that works well in every respect but one: it takes too long to execute. My suspicion is that the long execution time is caused by an excessive number of JOINs (11, to be precise). Here is what I am trying to do:
Setup
I have 3 tables:
- v_na_measurement_status
id  start_measurement                 stop_measurement
0   2020-09-01 22:19:48.480668+00:00  2020-09-01 22:29:29.713952+00:00
1   2020-09-01 22:17:10.392089+00:00  2020-09-01 22:18:08.139668+00:00
2   2020-09-01 22:12:49.795404+00:00  2020-09-01 22:15:14.564597+00:00
...
- sensor_double_precision
id     sensor_name            timestamp                        value_cal
40548  "curved_mirror_steps"  "2020-09-01 22:29:23.526468+00"  432131
40547  "na_average_enable"    "2020-09-01 22:29:23.410416+00"  1
40546  "na_averages"          "2020-09-01 22:29:23.295404+00"  16
40545  "na_power"             "2020-09-01 22:29:23.174255+00"  -5
40544  "na_stop_freq"         "2020-09-01 22:29:23.05868+00"   18000000000
40543  "na_start_freq"        "2020-09-01 22:29:22.944205+00"  15000000000
...
- sensor_double_precision_arr
id    sensor_name              timestamp                        value_cal
3831  "na_s11_iq_data_trace2"  "2020-09-01 22:29:27.456345+00"  [array with ~2000 points]
3830  "na_s21_iq_data"         "2020-09-01 22:29:27.389617+00"  [array with ~2000 points]
3829  "na_s11_iq_data_trace2"  "2020-09-01 22:29:20.543466+00"  [array with ~2000 points]
3828  "na_s21_iq_data"         "2020-09-01 22:29:20.443416+00"  [array with ~2000 points]
Goal
Using these 3 tables, I want to create a VIEW named v_na_log that looks like this:
start_measurement                stop_measurement                 curved_mirror_steps  na_averages  na_start_freq  na_stop_freq  na_s21_iq_data             na_s11_iq_data
"2020-09-01 22:29:22.913366+00"  "2020-09-01 22:29:27.478287+00"  432131               16           15000000000    18000000000   [array with ~2000 points]  [array with ~2000 points]
...
...
Basically, I want to transpose sensor_double_precision and sensor_double_precision_arr so that the values in the sensor_name column become columns themselves.
A solution
Here is the (long) code I use to achieve this:
DROP VIEW IF EXISTS v_na_log;
CREATE VIEW v_na_log AS
SELECT
    final_view.id,
    vnms.start_measurement,
    vnms.stop_measurement,
    final_view.curved_mirror_steps,
    final_view.na_averages,
    final_view.na_start_freq,
    final_view.na_stop_freq,
    iq_data.freq_resolution,
    iq_data.na_s21_iq_data,
    iq_data.na_s11_iq_data
FROM crosstab(
    'WITH sorted_data AS (
        WITH current_data AS (
            SELECT DISTINCT ON (id, sensor_name)
                id, sensor_name, value_cal, timestamp
            FROM sensor_double_precision
            WHERE sensor_name IN (''na_start_freq'', ''na_stop_freq'', ''na_averages'', ''curved_mirror_steps'')
            ORDER BY id, timestamp ASC)
        SELECT m.id, s.sensor_name, s.value_cal
        FROM v_na_measurement_status m
        INNER JOIN current_data s
            ON s.timestamp BETWEEN m.start_measurement AND m.stop_measurement),
    log_ids AS (SELECT DISTINCT id FROM sorted_data),
    sensor_names AS (SELECT DISTINCT sensor_name FROM sorted_data)
    SELECT log_ids.id, sensor_names.sensor_name, sorted_data.value_cal
    FROM log_ids
    CROSS JOIN sensor_names
    LEFT JOIN sorted_data
        ON (log_ids.id = sorted_data.id AND sensor_names.sensor_name = sorted_data.sensor_name)')
final_view(id bigint, curved_mirror_steps double precision, na_averages double precision, na_start_freq double precision, na_stop_freq double precision)
LEFT JOIN v_na_measurement_status vnms ON vnms.id = final_view.id
LEFT JOIN (
    SELECT final_view.id,
           array_length(final_view.na_s21_iq_data, 1) AS freq_resolution,
           final_view.na_s11_iq_data,
           final_view.na_s21_iq_data
    FROM crosstab(
        'WITH sorted_data AS (
            WITH current_data AS (
                SELECT DISTINCT ON (id, sensor_name)
                    id, sensor_name, value_cal, timestamp
                FROM sensor_double_precision_arr
                WHERE sensor_name LIKE ''%na_s21_iq_data%'' OR sensor_name LIKE ''%na_s11_iq_data%''
                ORDER BY id, timestamp ASC)
            SELECT m.id, s.sensor_name, s.value_cal
            FROM v_na_measurement_status m
            INNER JOIN current_data s
                ON s.timestamp BETWEEN m.start_measurement AND m.stop_measurement),
        log_ids AS (SELECT DISTINCT id FROM sorted_data),
        sensor_names AS (SELECT DISTINCT sensor_name FROM sorted_data)
        SELECT log_ids.id, sensor_names.sensor_name, sorted_data.value_cal
        FROM log_ids
        CROSS JOIN sensor_names
        LEFT JOIN sorted_data
            ON (log_ids.id = sorted_data.id AND sensor_names.sensor_name = sorted_data.sensor_name)')
    final_view(id bigint, na_s11_iq_data double precision[], na_s21_iq_data double precision[])
    LEFT JOIN v_na_measurement_status vnms ON vnms.id = final_view.id) iq_data
    ON iq_data.id = final_view.id
ORDER BY vnms.start_measurement ASC;
The problem
I use the code above to do this, and it works correctly; so far I have not run into any problems, even in edge cases. However, it takes a very long time to run.
For example, if I run:
SELECT * FROM v_na_log LIMIT 10
execution takes about 13 seconds. If I remove the LIMIT clause, it takes even longer. I usually have to work with far more than 10 rows, so the longer this takes, the worse it is for my data analysis.
As I said, I suspect this has to do with the many JOINs. However, I do not see a better way to do it. I would like to know if there is a better solution, since I suspect this problem will only get worse as the tables grow.
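One way to check that suspicion (a diagnostic sketch, not part of the view itself) is to ask PostgreSQL for the actual execution plan and per-node timings:

```sql
-- ANALYZE runs the query and reports real timings per plan node;
-- BUFFERS adds I/O statistics. Both are standard EXPLAIN options.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM v_na_log LIMIT 10;
```

The nodes with the largest "actual time" show whether the cost really comes from the JOINs or from something else, such as repeated sequential scans inside the crosstab subqueries.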
I previously posted a simplified version of a similar task and received valuable advice, which I have incorporated into my code. Here is a link to my related question: Crosstab using data from 2 different tables
Any suggestions on how to better organize the tables in the database itself are also welcome.
Solution
Based on the posted data and desired output, and since the pivot values are known in advance, consider conditional aggregation, i.e. Postgres's selective aggregation with aggregate functions restricted per column (the FILTER clause), instead of crosstab.
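A minimal sketch of that idea for the scalar sensors, reusing the table and column names from the question (the view name v_na_log_ca and the choice of MAX as the aggregate are illustrative assumptions, not from the original):

```sql
-- Pivot via conditional aggregation instead of crosstab:
-- one scan of sensor_double_precision joined to the measurement
-- windows, with FILTER picking out each sensor's rows per column.
-- (v_na_log_ca is a hypothetical name, to avoid clobbering v_na_log.)
CREATE VIEW v_na_log_ca AS
SELECT m.id,
       m.start_measurement,
       m.stop_measurement,
       MAX(s.value_cal) FILTER (WHERE s.sensor_name = 'curved_mirror_steps') AS curved_mirror_steps,
       MAX(s.value_cal) FILTER (WHERE s.sensor_name = 'na_averages')         AS na_averages,
       MAX(s.value_cal) FILTER (WHERE s.sensor_name = 'na_start_freq')       AS na_start_freq,
       MAX(s.value_cal) FILTER (WHERE s.sensor_name = 'na_stop_freq')        AS na_stop_freq
FROM v_na_measurement_status m
LEFT JOIN sensor_double_precision s
       ON s.timestamp BETWEEN m.start_measurement AND m.stop_measurement
GROUP BY m.id, m.start_measurement, m.stop_measurement;
```

This replaces the crosstab plus CROSS JOIN scaffolding with a single grouped pass. The array-valued sensors in sensor_double_precision_arr would need a similar grouped join of their own, since FILTER pivots most naturally over scalar aggregates.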