Google Cloud Pub/Sub :: google.api_core.exceptions.DeadlineExceeded: 504 Deadline Exceeded

How do I resolve Google Cloud Pub/Sub :: google.api_core.exceptions.DeadlineExceeded: 504 Deadline Exceeded?

I am testing stream processing with Google Cloud Pub/Sub. Messages are forwarded from a publisher to a topic, read from Pub/Sub in Apache Beam, and checked with beam.Map(print).

Reading the messages from Pub/Sub works fine. However, after all the messages have been read, the error above occurs.

Code that publishes messages from the publisher to the topic:

from google.cloud import pubsub_v1
from google.cloud import bigquery
import time

# TODO(developer)
project_id = "[your-project-id]"
topic_id = "[your-topic-id]"

# Construct a BigQuery client object.
client = bigquery.Client()

# Configure the batch to publish as soon as there are ten messages,
# one kilobyte of data, or one second has passed.
batch_settings = pubsub_v1.types.BatchSettings(
    max_messages=10,  # default 100
    max_bytes=1024,   # default 1 MB
    max_latency=1,    # default 10 ms
)
publisher = pubsub_v1.PublisherClient(batch_settings)
topic_path = publisher.topic_path(project_id, topic_id)

query = """
    SELECT *
    FROM `[bigquery-schema.bigquery-dataset.bigquery-tablename]`
    LIMIT 20
"""
query_job = client.query(query)

# Resolve the publish future in a separate thread.
def callback(topic_message):
    message_id = topic_message.result()
    print(message_id)

print("The query data:")
for row in query_job:
    data = u"category={},language={},count={}".format(row[0], row[1], row[2])
    print(data)
    data = data.encode("utf-8")
    time.sleep(1)
    topic_message = publisher.publish(topic_path, data=data)
    topic_message.add_done_callback(callback)

print("Published messages with batch settings.")

Apache Beam code (for reading and processing the data from Pub/Sub):

# Copyright 2019 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# [START pubsub_to_gcs]
import argparse
import datetime
import json
import logging
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
import apache_beam.transforms.window as window

# NOTE: `return_immediately` and `initial_rpc_timeout_millis` are my temporary
# workaround for the error (see the note below the code); they are not
# standard pipeline options.
pipeline_options = PipelineOptions(
    streaming=True,
    save_main_session=True,
    runner='DirectRunner',
    return_immediately=True,
    initial_rpc_timeout_millis=25000,
)

class GroupWindowsIntoBatches(beam.PTransform):
    """A composite transform that groups Pub/Sub messages based on publish
    time and outputs a list of dictionaries, where each contains one message
    and its publish timestamp.
    """

    def __init__(self, window_size):
        # Convert minutes into seconds.
        self.window_size = int(window_size * 60)

    def expand(self, pcoll):
        return (
            pcoll
            # Assigns window info to each Pub/Sub message based on its
            # publish timestamp.
            | "Window into Fixed Intervals"
            >> beam.WindowInto(window.FixedWindows(self.window_size))
            | "Add timestamps to messages" >> beam.ParDo(AddTimestamps())
            # Use a dummy key to group the elements in the same window.
            # Note that all the elements in one window must fit into memory
            # for this. If the windowed elements do not fit into memory,
            # please consider using `beam.util.BatchElements`.
            # https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.util.html#apache_beam.transforms.util.BatchElements
            | "Add Dummy Key" >> beam.Map(lambda elem: (None, elem))
            | "Groupby" >> beam.GroupByKey()
            | "Abandon Dummy Key" >> beam.MapTuple(lambda _, val: val)
        )


class AddTimestamps(beam.DoFn):
    def process(self, element, publish_time=beam.DoFn.TimestampParam):
        """Processes each incoming windowed element by extracting the Pub/Sub
        message and its publish timestamp into a dictionary. `publish_time`
        defaults to the publish timestamp returned by the Pub/Sub server. It
        is bound to each element by Beam at runtime.
        """

        yield {
            "message_body": element.decode("utf-8"),
            "publish_time": datetime.datetime.utcfromtimestamp(
                float(publish_time)
            ).strftime("%Y-%m-%d %H:%M:%S.%f"),
        }

class WriteBatchesToGCS(beam.DoFn):
    def __init__(self, output_path):
        self.output_path = output_path

    def process(self, batch, window=beam.DoFn.WindowParam):
        """Write one batch per file to a Google Cloud Storage bucket."""

        ts_format = "%H:%M"
        window_start = window.start.to_utc_datetime().strftime(ts_format)
        window_end = window.end.to_utc_datetime().strftime(ts_format)
        filename = "-".join([self.output_path, window_start, window_end])

        with beam.io.gcp.gcsio.GcsIO().open(filename=filename, mode="w") as f:
            for element in batch:
                f.write("{}\n".format(json.dumps(element)).encode("utf-8"))

class test_func(beam.DoFn):
    def __init__(self, delimiter=','):
        self.delimiter = delimiter

    def process(self, topic_message):
        print(topic_message)
def run(input_topic, output_path, window_size=1.0, pipeline_args=None):
    # `save_main_session` is set to true because some DoFn's rely on
    # globally imported modules.
    pipeline_options = PipelineOptions(
        pipeline_args, streaming=True, save_main_session=True
    )

    with beam.Pipeline(options=pipeline_options) as pipeline:
        (
            pipeline
            | "Read PubSub Messages"
            >> beam.io.ReadFromPubSub(topic=input_topic)
            | "Pardo" >> beam.ParDo(test_func(','))
        )

if __name__ == "__main__":  # noqa
    input_topic = 'projects/[project-id]/topics/[pub/sub-name]'
    output_path = 'gs://[bucket-name]/[file-directory]'
    run(input_topic, output_path, 2)
# [END pubsub_to_gcs]

As a temporary measure I set return_immediately=True, but this is not a fundamental solution either. Thank you for reading.

Solution

This seems to be a known issue of the PubSub library reported in another SO thread, and it looks like it was recently fixed in version 1.4.2; however, that version is not yet included in the Beam dependencies, which pin google-cloud-pubsub>=0.39.0,<1.1.0.
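To see which client library version Beam actually pulled into your environment, you can query the installed distribution; a minimal sketch (the package name is the real PyPI name, the rest is plain introspection):

import pkg_resources

# Print the google-cloud-pubsub version installed alongside apache-beam.
# If it is below 1.4.2, the fix mentioned above is not in your environment.
print(pkg_resources.get_distribution("google-cloud-pubsub").version)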

I did some research and found that the DataflowRunner appears to cope with this error better than the DirectRunner, which is maintained by the Apache Beam team. The issue has been reported on the beam site, but it has not been resolved yet.
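If you want to try that, switching runners is only a matter of pipeline options; a minimal sketch, assuming a hypothetical project, region, and staging bucket (replace them with your own):

from apache_beam.options.pipeline_options import PipelineOptions

# Same pipeline as above, but executed on Dataflow instead of the
# local DirectRunner.
pipeline_options = PipelineOptions(
    streaming=True,
    save_main_session=True,
    runner="DataflowRunner",
    project="your-project-id",              # hypothetical placeholder
    region="us-central1",                   # hypothetical placeholder
    temp_location="gs://your-bucket/temp",  # hypothetical placeholder
)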

Also note that a troubleshooting guide for DEADLINE_EXCEEDED errors can be found here. You can check whether the suggestions made there help you, such as upgrading to the latest version of the client library.
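One of the knobs that guide points at is the RPC deadline itself. Outside of Beam, when pulling directly with the client library, you can pass a longer timeout to the pull call; a minimal sketch, assuming a hypothetical subscription (the timeout value is illustrative):

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
# Hypothetical subscription; replace with your own.
subscription_path = subscriber.subscription_path(
    "your-project-id", "your-subscription-id"
)

# Raise the per-RPC deadline so that slow responses are not immediately
# surfaced as DEADLINE_EXCEEDED.
response = subscriber.pull(
    subscription=subscription_path,
    max_messages=10,
    timeout=60.0,  # seconds
)

# Messages are not acknowledged here, so they will be redelivered.
for received in response.received_messages:
    print(received.message.data)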
