Dataflow streaming job - error when writing to BigQuery

How to fix a Dataflow streaming job that errors when writing to BigQuery

Writing to BigQuery from an Apache Beam Dataflow job using the 'FILE_LOADS' method fails. The Streaming Insert path (the else block) runs fine and as expected. The 'FILE_LOADS' path (the if block) fails with the error shown after the code. The temporary files the job writes to the GCS bucket are valid JSON objects.

Sample raw events from Pub/Sub:

"{'event': 'test','entityId': 13615316690,'eventTime': '2020-08-12T15:56:07.130899+00:00','targetEntityId': 8947793,'targetEntityType': 'item','entityType': 'guest','properties': {}}" 
 
"{'event': 'test','properties': {‘action’: ‘delete’}}"  
from __future__ import absolute_import

import logging
import sys
import traceback
import argparse
import ast
import json
import datetime
import dateutil.parser as date_parser

import apache_beam as beam
import apache_beam.pvalue as pvalue
from google.cloud.bigquery import CreateDisposition, WriteDisposition
from apache_beam.io.gcp.bigquery_tools import RetryStrategy

def get_values(element):
    # convert properties from a dict to an array of dicts to form a repeated BQ table record
    prop_list = [{'property_name': k, 'property_value': v} for k, v in element['properties'].items()]
    date_parsed = date_parser.parse(element.get('eventTime'))
    event_time = date_parsed.strftime('%Y-%m-%d %H:%M:00')

    raw_value = {'event': element.get('event'),
                 'entity_type': element.get('entityType'),
                 'entity_id': element.get('entityId'),
                 'target_entity_type': element.get('targetEntityType'),
                 'target_entity_id': element.get('targetEntityId'),
                 'event_time': event_time,
                 'properties': prop_list
                 }

    return raw_value

def stream_to_bq(c: dict):
    argv = [
        f'--project={c["PROJECT"]}',
        '--runner=DataflowRunner',
        f'--job_name={c["JOBNAME"]}',
        '--save_main_session',
        f'--staging_location=gs://{c["BUCKET_NAME"]}/{c["STAGING_LOCATION"]}',
        f'--temp_location=gs://{c["BUCKET_NAME"]}/{c["TEMP_LOCATION"]}',
        f'--network={c["NETWORKPATH"]}',
        f'--subnetwork={c["SUBNETWORKPATH"]}',
        f'--region={c["REGION"]}',
        f'--service_account_email={c["SERVICE_ACCOUNT"]}',
        # f'--setup_file=./setup.py',
        # f'--autoscaling_algorithm=THROUGHPUT_BASED',
        # f'--maxWorkers=15',
        # f'--experiments=shuffle_mode=service',
        '--no_use_public_ips',
        '--streaming'
    ]

    if c['FILE_LOAD']:
        argv.append('--experiments=allow_non_updatable_job')
        argv.append('--experiments=use_beam_bq_sink')

    p = beam.Pipeline(argv=argv)
    valid_msgs = (p
                          | 'Read from Pubsub' >>
                          beam.io.ReadFromPubSub(subscription=c['SUBSCRIPTION']).with_output_types(bytes)
                          )

    # ReadFromPubSub yields bytes; decode and parse the Python-literal payload
    # into a dict before building the BQ row
    records = (valid_msgs
               | 'Parse Payload' >> beam.Map(lambda msg: ast.literal_eval(msg.decode('utf-8')))
               | 'Event Parser(BQ Row) ' >> beam.Map(get_values)
               )

    # Load data to BigQuery using 'Load Jobs' or 'Streaming Inserts'; the choice depends on latency expectations.
    if c['FILE_LOAD']:
        records | 'Write Result to BQ' >> beam.io.WriteToBigQuery(c["RAW_TABLE"],
                                                                  project=c["PROJECT"],
                                                                  dataset=c["DATASET_NAME"],
                                                                  method='FILE_LOADS',
                                                                  triggering_frequency=c['FILE_LOAD_FREQUENCY'],
                                                                  create_disposition=CreateDisposition.CREATE_NEVER,
                                                                  write_disposition=WriteDisposition.WRITE_APPEND
                                                                  )
    else:
        records | 'Write Result to BQ' >> beam.io.WriteToBigQuery(c["RAW_TABLE"],
                                                                  write_disposition=WriteDisposition.WRITE_APPEND,
                                                                  insert_retry_strategy=RetryStrategy.RETRY_ON_TRANSIENT_ERROR
                                                                  )

    

    p.run()

Error from the Dataflow job:

message: 'Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1; errors: 1. Please look into the errors[] collection for more details.' reason: 'invalid'> [while running 'generatedPtransform-1801'] java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) org.apache.beam.sdk.util.MoreFutures.get(MoreFutures.java:57)

Solution

The issue looks like a bad load job into BigQuery. My recommendation would be to try a test load job outside Dataflow, to make sure your schema and data structure are fine. You can follow this BQ documentation.
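A minimal sketch of such a standalone test with the google-cloud-bigquery client, pointing it at one of the temporary files the job left in the bucket (the project, bucket, and table names below are placeholders):

    from google.cloud import bigquery

    client = bigquery.Client(project='YOUR_PROJECT')

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )

    # point at one of the temporary files the Dataflow job wrote to GCS
    load_job = client.load_table_from_uri(
        'gs://YOUR_BUCKET/path/to/temp_file',
        'YOUR_PROJECT.YOUR_DATASET.YOUR_TABLE',
        job_config=job_config,
    )

    try:
        load_job.result()  # waits for completion and raises on failure
    except Exception:
        print(load_job.errors)  # the errors[] collection usually names the offending field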

Also, I noticed you are not specifying the schema nor SCHEMA_AUTODETECT. I'd suggest you specify one of them.
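For example, with a hard-coded schema dict (a sketch only; the field list below is inferred from the get_values output, not from your actual table, so adjust names and types as needed):

    table_schema = {
        'fields': [
            {'name': 'event', 'type': 'STRING'},
            {'name': 'entity_type', 'type': 'STRING'},
            {'name': 'entity_id', 'type': 'INTEGER'},
            {'name': 'target_entity_type', 'type': 'STRING'},
            {'name': 'target_entity_id', 'type': 'INTEGER'},
            {'name': 'event_time', 'type': 'TIMESTAMP'},
            {'name': 'properties', 'type': 'RECORD', 'mode': 'REPEATED', 'fields': [
                {'name': 'property_name', 'type': 'STRING'},
                {'name': 'property_value', 'type': 'STRING'}
            ]}
        ]
    }

    records | 'Write Result to BQ' >> beam.io.WriteToBigQuery(
        c["RAW_TABLE"],
        project=c["PROJECT"],
        dataset=c["DATASET_NAME"],
        schema=table_schema,  # or beam.io.WriteToBigQuery.SCHEMA_AUTODETECT (FILE_LOADS only)
        method='FILE_LOADS',
        triggering_frequency=c['FILE_LOAD_FREQUENCY'],
        create_disposition=CreateDisposition.CREATE_NEVER,
        write_disposition=WriteDisposition.WRITE_APPEND)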

To understand the errors, try inspecting the Dataflow job logs, which likely contain plenty of information. If your load jobs fail, you can inspect those jobs in BigQuery, and they will give you more information about why they failed. You can find the BQ load job IDs with this StackDriver log filter:

resource.type="dataflow_step"
resource.labels.job_id= < YOUR DF JOB ID >
jsonPayload.message:("Triggering job" OR "beam_load")

I'm quite sure the issue is happening either with the repeated field properties or with the schema; the schema seems more likely, considering it only fails with load jobs (maybe the schema of the table is wrong). In any case, here you have a working pipeline that I tested on my side, and both BQ inserts worked:

        schema = {
            "fields": [
                {"name": "name", "type": "STRING"},
                {"name": "repeated", "type": "RECORD", "mode": "REPEATED", "fields": [
                    {"name": "spent", "type": "INTEGER"},
                    {"name": "ts", "type": "TIMESTAMP"}
                ]}
            ]
        }

        def fake_parsing(element):
            # Using a fake parse so it's easier to reproduce
            # (assumes: import random; from datetime import datetime)
            properties = []

            rnd = random.random()
            if rnd < 0.25:
                # one repeated record
                dict_prop = {"spent": random.randint(0, 100),
                             "ts": datetime.now().strftime('%Y-%m-%d %H:%M:00')}
                properties.append(dict_prop)
            elif rnd > 0.75:
                # two repeated records
                dict_prop = {"spent": random.randint(0, 100),
                             "ts": datetime.now().strftime('%Y-%m-%d %H:%M:00')}
                properties += [dict_prop, dict_prop]
            elif 0.75 > rnd > 0.5:
                # record with only the "ts" field set
                properties.append({"ts": datetime.now().strftime('%Y-%m-%d %H:%M:00')})

            return {"name": 'inigo', "repeated": properties}

        pubsub = (p | "Read Topic" >> ReadFromPubSub(topic=known_args.topic)
                    | "To Dict" >> beam.Map(fake_parsing))

        pubsub | "Stream To BQ" >> WriteToBigQuery(
            table=f"{known_args.table}_streaming_insert",schema=schema,write_disposition=BigQueryDisposition.WRITE_APPEND,method="STREAMING_INSERTS")

        pubsub | "Load To BQ" >> WriteToBigQuery(
            table=f"{known_args.table}_load_job",method=WriteToBigQuery.Method.FILE_LOADS,triggering_frequency=known_args.triggering,insert_retry_strategy="RETRY_ON_TRANSIENT_ERROR")

My recommendation would be to try parts of the pipeline instead of everything at once, i.e., get the load jobs working first; if they fail, inspect why (in the Dataflow logs, BigQuery logs, or the BigQuery UI). Once that's done, add the Streaming inserts (or the other way around), as sketched below.
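For instance, the FILE_LOADS sink can be exercised in isolation on the DirectRunner with a handful of hand-built rows, before involving Pub/Sub at all (a sketch under assumed names: sample_rows mirrors the get_values output, the table URI is a placeholder, and custom_gcs_temp_location is needed because file loads stage through GCS):

    import apache_beam as beam

    sample_rows = [{
        'event': 'test', 'entity_type': 'guest', 'entity_id': 13615316690,
        'target_entity_type': 'item', 'target_entity_id': 8947793,
        'event_time': '2020-08-12 15:56:00',
        'properties': [{'property_name': 'action', 'property_value': 'delete'}]
    }]

    with beam.Pipeline() as p:
        (p
         | beam.Create(sample_rows)
         | beam.io.WriteToBigQuery(
               'YOUR_PROJECT:YOUR_DATASET.YOUR_TABLE',
               method='FILE_LOADS',
               custom_gcs_temp_location='gs://YOUR_BUCKET/tmp',
               create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
               write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))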


I'm facing the same issue. If I run the Dataflow job from my local machine, it runs as expected, but when I run it in the Cloud Dataflow environment, it shows this -

error message: JSON table encountered too many errors, giving up. Rows: 9; errors: 1. Please look into the errors[] collection for more details.' reason: 'invalid'> [while running 'WriteTable/BigQueryBatchFileLoads/WaitForDestinationLoadJobs/WaitForDestinationLoadJobs']

I have made sure that both the local and cloud SDKs are on the same apache-beam version, 2.27.
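For reference, the version the submitting machine will launch with can be checked directly; the Dataflow UI's job info panel shows the SDK version the workers actually ran, so the two can be compared:

    import apache_beam

    print(apache_beam.__version__)  # expect '2.27.0' on both sides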

