Reading data from MySQL with the Apache Beam Python API and writing it to a GCP bucket

How to read data from MySQL using the Apache Beam Python API and write it to a GCP bucket

I am trying to read data from a MySQL database (hosted in GCP) and write it to a GCP storage bucket, and I want to do this with the Python SDK. Below is the code I have written.

from __future__ import generators 
import apache_beam as beam
import time
import jaydebeapi 
import os
import argparse
from google.cloud import bigquery
import logging
import sys
from string import lower
from google.cloud import storage as gstorage
import pandas as  pd
from oauth2client.client import GoogleCredentials

print("Import Successful")

class setenv(beam.DoFn): 
      def process(self,context):
          os.system('gsutil cp gs://<MY_BUCKET>/mysql-connector-java-5.1.45.jar /tmp/')
          logging.info('Enviornment Variable set.')
        
class readfromdatabase(beam.DoFn): 
      def process(self,context):
          logging.info('inside readfromdatabase')
          database_user='root'
          database_password='<DB_PASSWORD>'
          database_host='<DB_HOST>'
          database_port='<DB_PORT>'
          database_db='<DB_NAME>'
          logging.info('reached readfromdatabase 1')        
          jclassname = "com.mysql.jdbc.Driver"
          url = ("jdbc:mysql://{0}:{1}/{2}?user={3}&password={4}".format(database_host,database_port,database_db,database_user,database_password))
          jars = ["/tmp/mysql-connector-java-5.1.45.jar"]
          libs = None
          cnx = jaydebeapi.connect(jclassname,url,jars=jars,libs=libs)   
          logging.info('Connection Successful..') 
          cursor = cnx.cursor()
          logging.info('Reading Sql Query from the file...')
          query = 'select * from employees.employee_details'
          logging.info('Query is %s',query)
          logging.info('Query submitted to Database..')

          for chunk in pd.read_sql(query,cnx,coerce_float=True,params=None,parse_dates=None,columns=None,chunksize=500000):
                 chunk.apply(lambda x: x.replace(u'\r',u' ').replace(u'\n',u' ') if isinstance(x,str) or isinstance(x,unicode) else x).WriteToText('gs://first-bucket-arpan/output2/')
    
          logging.info("Load completed...")
          return list("1")      
          
def run():  
    try:
        
        pcoll = beam.Pipeline()
        dummy= pcoll | 'Initializing..' >> beam.Create(['1'])
        logging.info('inside run 1')
        dummy_env = dummy | 'Setting up Instance..' >> beam.ParDo(setenv())
        logging.info('inside run 2')
        readrecords=(dummy_env | 'Processing' >>  beam.ParDo(readfromdatabase()))
        logging.info('inside run 3')
        p=pcoll.run()
        logging.info('inside run 4')
        p.wait_until_finish()
    except:
        logging.exception('Failed to launch datapipeline')
        raise 

def main():
    logging.getLogger().setLevel(logging.INFO)
    GOOGLE_APPLICATION_CREDENTIALS="gs://<MY_BUCKET>/My First Project-800a97e1fe65.json"
    run()

if __name__ == '__main__':
  logging.getLogger().setLevel(logging.INFO)
  
  main()

I am running it with the following command: python /home/aarpan_roy/readfromdatabase.py --region $REGION --runner DataflowRunner --project $PROJECT --temp_location gs://$BUCKET/tmp

When it runs, I get the output below and no Dataflow job is created:

INFO:root:inside run 2
INFO:root:inside run 3
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function annotate_downstream_side_inputs at 0x7fc1ea8a27d0> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function fix_side_input_pcoll_coders at 0x7fc1ea8a28c0> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function lift_combiners at 0x7fc1ea8a2938> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function expand_sdf at 0x7fc1ea8a29b0> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function expand_gbk at 0x7fc1ea8a2a28> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function sink_flattens at 0x7fc1ea8a2b18> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function greedily_fuse at 0x7fc1ea8a2b90> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function read_to_impulse at 0x7fc1ea8a2c08> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function impulse_to_input at 0x7fc1ea8a2c80> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function sort_stages at 0x7fc1ea8a2e60> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function setup_timer_mapping at 0x7fc1ea8a2de8> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function populate_data_channel_coders at 0x7fc1ea8a2ed8> ====================
INFO:apache_beam.runners.worker.statecache:Creating state cache with size 100
INFO:apache_beam.runners.portability.fn_api_runner.worker_handlers:Created Worker handler <apache_beam.runners.portability.fn_api_runner.worker_handlers.EmbeddedWorkerHandler object at 0x7fc1ea356ad0> for environment ref_Environment_default_environment_1 (beam:env:embedded_python:v1,'')
INFO:apache_beam.runners.portability.fn_api_runner.fn_runner:Running (ref_AppliedPTransform_Initializing../Impulse_3)+((ref_AppliedPTransform_Initializing../FlatMap(<lambda at core.py:2632>)_4)+((ref_AppliedPTransform_Initializing../Map(decode)_6)+((ref_AppliedPTransform_Setting up Instance.._7)+(ref_AppliedPTransform_Processing_8))))
Copying gs://<MY_BUCKET>/mysql-connector-java-5.1.45.jar...
- [1 files][976.4 KiB/976.4 KiB]
Operation completed over 1 objects/976.4 KiB.
INFO:root:Enviornment Variable set.
INFO:root:inside run 4

Please help me resolve this issue and advise how to extract data from MySQL into a GCP bucket using the Apache Beam Python API.

Thank you.

=================================================================================

Hello all, after exporting GOOGLE_APPLICATION_CREDENTIALS I made some changes to the code and ran it from a shell script. However, during execution I get the following error:

WARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: File /home/aarpan_roy/script/dataflowerviceaccount.json (pointed by GOOGLE_APPLICATION_CREDENTIALS environment variable) does not exist!
Connecting anonymously.
WARNING:apache_beam.options.pipeline_options:Discarding unparseable args: ['True','--service_account_name','dataflowserviceaccount','--service_account_key_file','/home/aarpan_roy/script/dataflowserviceaccount.json']
WARNING:apache_beam.options.pipeline_options:Discarding unparseable args: ['True','/home/aarpan_roy/script/dataflowserviceaccount.json']
ERROR:root:Failed to launch datapipeline

Below is the full log file:


    aarpan_roy@my-dataproc-cluster-m:~/script/util$ sh -x dataflow_runner.sh
    + export GOOGLE_APPLICATION_CREDENTIALS=gs://<MY_BUCKET>/My First Project-800a97e1fe65.json
    + python /home/aarpan_roy/script/util/loadfromdatabase.py --config config.properties --productconfig cts.properties --env dev --sourcetable employee_details --sqlquery /home/aarpan_roy/script/sql/employee_details.sql --connectionprefix d3 --incrementaldate 1900-01-01
    /home/aarpan_roy/.local/lib/python2.7/site-packages/apache_beam/__init__.py:82: UserWarning: You are using Apache Beam with Python 2. New releases of Apache Beam will soon support Python 3 only.
      'You are using Apache Beam with Python 2. '
    /home/aarpan_roy/script/util/
    INFO:root:Job Run Id is 22152
    INFO:root:Job Name is load-employee-details-20200820
    SELECT * FROM EMPLOYEE_DETAILS
    WARNING:apache_beam.options.pipeline_options_validator:Option --zone is deprecated. Please use --worker_zone instead.
    INFO:apache_beam.runners.portability.stager:Downloading source distribution of the SDK from PyPi
    INFO:apache_beam.runners.portability.stager:Executing command: ['/usr/bin/python','-m','pip','download','--dest','/tmp/tmpEarTQ0','apache-beam==2.23.0','--no-deps','--no-binary',':all:']
    INFO:apache_beam.runners.portability.stager:Staging SDK sources from PyPI: dataflow_python_sdk.tar
    INFO:apache_beam.runners.portability.stager:Downloading binary distribution of the SDK from PyPi
    INFO:apache_beam.runners.portability.stager:Executing command: ['/usr/bin/python','--only-binary',':all:','--python-version','27','--implementation','cp','--abi','cp27mu','--platform','manylinux1_x86_64']
    INFO:apache_beam.runners.portability.stager:Staging binary distribution of the SDK from PyPI: apache_beam-2.23.0-cp27-cp27mu-manylinux1_x86_64.whl
    WARNING:root:Make sure that locally built Python SDK docker image has Python 2.7 interpreter.
    INFO:root:Using Python SDK docker image: apache/beam_python2.7_sdk:2.23.0. If the image is not available at local,we will try to pull from hub.docker.com
    INFO:apache_beam.internal.gcp.auth:Setting socket default timeout to 60 seconds.
    INFO:apache_beam.internal.gcp.auth:socket default timeout is 60.0 seconds.
    WARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: File /home/aarpan_roy/script/dataflowerviceaccount.json (pointed by GOOGLE_APPLICATION_CREDENTIALS environment variable) does not exist!
    Connecting anonymously.
    INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://<MY_BUCKET>/load-employee-details-20200820.1597916566.568419/pipeline.pb...
    INFO:oauth2client.transport:Attempting refresh to obtain initial access_token
    INFO:oauth2client.transport:Attempting refresh to obtain initial access_token
    INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://<MY_BUCKET>/load-employee-details-20200820.1597916566.568419/pipeline.pb in 0 seconds.
    INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://<MY_BUCKET>/load-employee-details-20200820.1597916566.568419/pickled_main_session...
    INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://<MY_BUCKET>/load-employee-details-20200820.1597916566.568419/pickled_main_session in 0 seconds.
    INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://<MY_BUCKET>/load-employee-details-20200820.1597916566.568419/dataflow_python_sdk.tar...
    INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://<MY_BUCKET>/load-employee-details-20200820.1597916566.568419/dataflow_python_sdk.tar in 0 seconds.
    INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://<MY_BUCKET>/load-employee-details-20200820.1597916566.568419/apache_beam-2.23.0-cp27-cp27mu-manylinux1_x86_64.whl...
    INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://<MY_BUCKET>/load-employee-details-20200820.1597916566.568419/apache_beam-2.23.0-cp27-cp27mu-manylinux1_x86_64.whl in 0 seconds.
    WARNING:apache_beam.options.pipeline_options:Discarding unparseable args: ['True','/home/aarpan_roy/script/dataflowserviceaccount.json']
    WARNING:apache_beam.options.pipeline_options:Discarding unparseable args: ['True','/home/aarpan_roy/script/dataflowserviceaccount.json']
    ERROR:root:Failed to launch datapipeline
    Traceback (most recent call last)
    File "/home/aarpan_roy/script/util/loadfromdatabase.py",line 105,in run
        p=pcoll.run()
      File "/home/aarpan_roy/.local/lib/python2.7/site-packages/apache_beam/pipeline.py",line 521,in run
        allow_proto_holders=True).run(False)
      File "/home/aarpan_roy/.local/lib/python2.7/site-packages/apache_beam/pipeline.py",line 534,in run
        return self.runner.run_pipeline(self,self._options)
      File "/home/aarpan_roy/.local/lib/python2.7/site-packages/apache_beam/runners/dataflow/dataflow_runner.py",line 586,in run_pipeline
        self.dataflow_client.create_job(self.job),self)
      File "/home/aarpan_roy/.local/lib/python2.7/site-packages/apache_beam/utils/retry.py",line 236,in wrapper
        return fun(*args,**kwargs)
      File "/home/aarpan_roy/.local/lib/python2.7/site-packages/apache_beam/runners/dataflow/internal/apiclient.py",line 681,in create_job
        return self.submit_job_description(job)
      File "/home/aarpan_roy/.local/lib/python2.7/site-packages/apache_beam/utils/retry.py",line 748,in submit_job_description
        response = self._client.projects_locations_jobs.Create(request)
      File "/home/aarpan_roy/.local/lib/python2.7/site-packages/apache_beam/runners/dataflow/internal/clients/dataflow/dataflow_v1b3_client.py",line 667,in Create
        config,request,global_params=global_params)
      File "/home/aarpan_roy/.local/lib/python2.7/site-packages/apitools/base/py/base_api.py",line 731,in _RunMethod
        return self.ProcessHttpResponse(method_config,http_response,request)
      File "/home/aarpan_roy/.local/lib/python2.7/site-packages/apitools/base/py/base_api.py",line 737,in ProcessHttpResponse
        self.__ProcessHttpResponse(method_config,request))
      File "/home/aarpan_roy/.local/lib/python2.7/site-packages/apitools/base/py/base_api.py",line 604,in __ProcessHttpResponse
        http_response,method_config=method_config,request=request)
    HttpForbiddenError: HttpError accessing <https://dataflow.googleapis.com/v1b3/projects/turing-thought-277215/locations/asia-southeast1/jobs?alt=json>: response: <{'status': '403','content-length': '138','x-xss-protection': '0','x-content-type-options': 'nosniff','transfer-encoding': 'chunked','vary': 'Origin,X-Origin,Referer','server': 'ESF','-content-encoding': 'gzip','cache-control': 'private','date': 'Thu,20 Aug 2020 09:42:47 GMT','x-frame-options': 'SAMEORIGIN','content-type': 'application/json; charset=UTF-8','www-authenticate': 'Bearer realm="https://accounts.google.com/",error="insufficient_scope",scope="https://www.googleapis.com/auth/compute.readonly https://www.googleapis.com/auth/compute https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/userinfo.email email https://www.googleapis.com/auth/userinfo#email"'}>,content <{
      "error": {
        "code": 403,"message": "Request had insufficient authentication scopes.","status": "PERMISSION_DENIED"
      }
    }
    >
    Traceback (most recent call last):
      File "/home/aarpan_roy/script/util/loadfromdatabase.py",line 194,in <module>
        args.sourcetable)
      File "/home/aarpan_roy/script/util/loadfromdatabase.py",line 142,in main
        run()  
      File "/home/aarpan_roy/script/util/loadfromdatabase.py",request=request)
    apitools.base.py.exceptions.HttpForbiddenError: HttpError accessing <https://dataflow.googleapis.com/v1b3/projects/turing-thought-277215/locations/asia-southeast1/jobs?alt=json>: response: <{'status': '403',"status": "PERMISSION_DENIED"
      }
    }
    >
    + echo 1
    1

I cannot figure out where I am going wrong. Please help me resolve this. Thanks in advance.

Solution

I think there may be a typo in your account credentials path:

/home/aarpan_roy/script/dataflowerviceaccount.json should be /home/aarpan_roy/script/dataflowserviceaccount.json. Can you check that?
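A quick way to rule this out (not part of the original answer, just a minimal sketch) is to fail fast before calling run() if the path in GOOGLE_APPLICATION_CREDENTIALS does not exist locally; note that this variable must point to a local JSON key file rather than a gs:// URI:

import os

# Hypothetical pre-flight check: abort before launching the pipeline if the
# service-account key file the environment variable points to is missing.
creds_path = os.environ.get('GOOGLE_APPLICATION_CREDENTIALS', '')
if not os.path.isfile(creds_path):
    raise RuntimeError(
        'GOOGLE_APPLICATION_CREDENTIALS does not point to an existing local '
        'file: %r' % creds_path)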


There seem to be two issues here:

  1. On one hand there is an authentication problem. Run the check suggested by @Jayadeep to verify whether it is just the file-name typo in the credentials path.

     Along the same lines, the handling of the credentials may itself be an issue because you are running the code on a Dataproc instance, so I suggest you also test the code from Cloud Shell.

  2. On the other hand, I found another post mentioning that connecting to Cloud SQL from Python does not seem to be as straightforward as from Java (which can use JdbcIO).

     I also found another post that mentions a workaround: connecting to Cloud SQL with psycopg2 instead of jaydebeapi. I suggest you give it a try; a rough sketch of how it could be wired into a Beam pipeline follows the snippet below.

import psycopg2

connection = psycopg2.connect(
    host=host,
    hostaddr=hostaddr,
    dbname=dbname,
    user=user,
    password=password,
    sslmode=sslmode,
    sslrootcert=sslrootcert,
    sslcert=sslcert,
    sslkey=sslkey
)
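Building on that, here is a rough, unverified sketch of how such a DB-API connection could be plugged into the original pipeline shape: the DoFn yields one text line per row, and the write to the bucket is done by Beam's WriteToText transform at the pipeline level. All connection parameters, the query, and the output path are placeholders, and for a MySQL instance a DB-API driver such as PyMySQL could be substituted for psycopg2 with the same structure. Passing PipelineOptions(argv) instead of a bare beam.Pipeline() is also what lets command-line flags like --runner DataflowRunner take effect, which may explain why the first run never created a Dataflow job.

import apache_beam as beam
import psycopg2  # placeholder driver; e.g. PyMySQL could be used for MySQL
from apache_beam.options.pipeline_options import PipelineOptions


class ReadTableDoFn(beam.DoFn):
    # Hypothetical DoFn: runs one query and emits each row as a delimited line.
    def __init__(self, query, **connect_kwargs):
        self.query = query
        self.connect_kwargs = connect_kwargs

    def process(self, element):
        connection = psycopg2.connect(**self.connect_kwargs)
        try:
            cursor = connection.cursor()
            cursor.execute(self.query)
            for row in cursor:
                yield ','.join('' if col is None else str(col) for col in row)
        finally:
            connection.close()


def run(argv=None):
    # PipelineOptions(argv) picks up --runner/--project/--temp_location etc.
    options = PipelineOptions(argv)
    with beam.Pipeline(options=options) as p:
        (p
         | 'Start' >> beam.Create([None])
         | 'Read rows' >> beam.ParDo(ReadTableDoFn(
             'SELECT * FROM employees.employee_details',
             host='<DB_HOST>', dbname='<DB_NAME>',
             user='root', password='<DB_PASSWORD>'))
         | 'Write to bucket' >> beam.io.WriteToText(
             'gs://<MY_BUCKET>/output2/employee_details'))


if __name__ == '__main__':
    run()

The angle-bracket placeholders mirror the ones used elsewhere in the question and would need to be replaced with real values.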

