How to fix Dataproc cluster creation failures
I am trying to create a Dataproc cluster through both Airflow and the Google Cloud UI, but cluster creation always fails at the end. Below is the Airflow code I use to create the cluster:
# STEP 1: Libraries needed
from datetime import timedelta, datetime
from airflow import models
from airflow.operators.bash_operator import BashOperator
from airflow.contrib.operators import dataproc_operator
from airflow.utils import trigger_rule
from poc.utils.transform import main
from airflow.contrib.hooks.gcp_dataproc_hook import DataProcHook
from airflow.operators.python_operator import BranchPythonOperator
import os

YESTERDAY = datetime.combine(
    datetime.today() - timedelta(1), datetime.min.time())
project_name = os.environ['GCP_PROJECT']

# Can pull in spark code from a gcs bucket
# SPARK_CODE = ('gs://us-central1-cl-composer-tes-fa29d311-bucket/spark_files/transformation.py')
dataproc_job_name = 'spark_job_dataproc'

default_dag_args = {
    'depends_on_past': False,
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'start_date': YESTERDAY,
    'retry_delay': timedelta(minutes=5),
    'project_id': project_name,
    'owner': 'DataProc',
}

with models.DAG(
        'dataproc-poc',
        description='Dag to run a simple dataproc job',
        schedule_interval=timedelta(days=1),
        default_args=default_dag_args) as dag:

    CLUSTER_NAME = 'dataproc-cluster'

    def ensure_cluster_exists(ds, **kwargs):
        # Ask the Dataproc API whether the cluster already exists
        cluster = DataProcHook().get_conn().projects().regions().clusters().get(
            projectId=project_name,
            region='us-east1',
            clusterName=CLUSTER_NAME
        ).execute(num_retries=5)
        print(cluster)
        if cluster is None or len(cluster) == 0 or 'clusterName' not in cluster:
            return 'create_dataproc'
        else:
            return 'run_spark'

    # start = BranchPythonOperator(
    #     task_id='start',
    #     provide_context=True,
    #     python_callable=ensure_cluster_exists,
    # )

    print_date = BashOperator(
        task_id='print_date',
        bash_command='date'
    )

    # Create the Dataproc cluster
    create_dataproc = dataproc_operator.DataprocClusterCreateOperator(
        task_id='create_dataproc',
        cluster_name=CLUSTER_NAME,
        num_workers=2,
        use_if_exists='true',
        zone='us-east1-b',
        master_machine_type='n1-standard-1',
        worker_machine_type='n1-standard-1'
    )

    # Run the PySpark job
    run_spark = dataproc_operator.DataProcPySparkOperator(
        task_id='run_spark',
        main=main,
        job_name=dataproc_job_name
    )

    # Delete Cloud Dataproc cluster.
    # delete_dataproc = dataproc_operator.DataprocClusterDeleteOperator(
    #     task_id='delete_dataproc',
    #     cluster_name='dataproc-cluster-demo-{{ ds_nodash }}',
    #     trigger_rule=trigger_rule.TriggerRule.ALL_DONE)

    # STEP 6: Set DAG dependencies
    # Each task runs after the one before it has finished.
    print_date >> create_dataproc >> run_spark
    # print_date >> start >> create_dataproc >> run_spark
    # start >> run_spark
I checked the cluster logs and saw the following errors:
- Failed to store master key 1
- Failed to store master key 2
- Initialization failed. Exiting 125 to prevent restart
- Cannot start master: Timed out waiting for 2 datanodes and nodemanagers. Operation timed out: Only 0 out of 2 minimum required datanodes running. Operation timed out: Only 0 out of 2 minimum required node managers running.
Solution
Cannot start master: Timed out waiting for 2 datanodes and nodemanagers. Operation timed out: Only 0 out of 2 minimum required datanodes running. Operation timed out: Only 0 out of 2 minimum required node managers running.
This error indicates that the worker nodes were unable to communicate with the master node. When the workers cannot report back to the master within the allotted time, cluster creation fails. You can start by inspecting the firewall rules on the cluster's network, as shown below.
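A quick way to inspect them is the gcloud CLI. The command below is a diagnostic sketch that assumes the cluster runs on the default VPC network; substitute your own network name if you use a custom one.

# List the firewall rules attached to the "default" network
gcloud compute firewall-rules list --filter="network:default"

For the workers to report in, there must be a rule that allows internal TCP, UDP and ICMP traffic between the cluster's VMs.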
Check that you have set up the correct firewall rules to allow communication between the VMs; a sketch of such a rule follows the link below.
You can refer to the network configuration best practices here: https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/network#overview
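For illustration, the rule below mirrors the default network's built-in default-allow-internal rule, which permits all internal traffic between VMs on that network. This is a minimal sketch rather than a drop-in fix: the rule name allow-dataproc-internal is a placeholder, and 10.128.0.0/9 is the default network's auto-mode IP range, so adjust both for a custom VPC.

# Recreate the standard allow-internal behaviour (adjust name, network and ranges)
gcloud compute firewall-rules create allow-dataproc-internal \
    --network=default \
    --allow=tcp,udp,icmp \
    --source-ranges=10.128.0.0/9

If the cluster should run on a custom network instead, the DataprocClusterCreateOperator used above also accepts network_uri and subnetwork_uri parameters, so you can create the cluster on the exact network whose rules you have verified.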