How to resolve an AWS Glue ConcurrentModificationException
I have an AWS Glue job that reads newline-delimited JSON data from S3 and then splits it into separate buckets based on the value of one of its fields.
For example, given these two input documents:
{"category":"foo",...other fields}
{"category":"bar",...other fields}
the job should write the first document to a bucket named something like prefix-foo and the second one to prefix-bar. I'm doing the filtering with the Filter transform.
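Conceptually, the split is just one Filter pass per category; here is a minimal sketch of what I mean (names are illustrative, the real code is further down):

    # Rough sketch only: one Filter pass per category over the same DynamicFrame
    from awsglue.transforms import Filter

    foo_records = Filter.apply(frame=deliveryDF, f=lambda rec: rec['category'] == 'foo')
    bar_records = Filter.apply(frame=deliveryDF, f=lambda rec: rec['category'] == 'bar')
    # ...each filtered frame is then written out to its own "prefix-<category>" bucket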
I expected this job to be fairly straightforward, but recently, as the input dataset has grown, it has started failing. Sometimes it fails with this ConcurrentModificationException, and I'm not sure whether that is a red herring for some other underlying condition or the actual error:
20/10/10 09:43:30 ERROR Executor: Exception in task 18.1 in stage 2.0 (TID 883)
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
at java.util.HashMap$EntryIterator.next(HashMap.java:1479)
at java.util.HashMap$EntryIterator.next(HashMap.java:1477)
at net.razorvine.pickle.Pickler.getCustomPickler(Pickler.java:343)
at net.razorvine.pickle.Pickler.dispatch(Pickler.java:251)
at net.razorvine.pickle.Pickler.save(Pickler.java:141)
at net.razorvine.pickle.Pickler.put_map(Pickler.java:367)
at net.razorvine.pickle.Pickler.dispatch(Pickler.java:315)
at net.razorvine.pickle.Pickler.save(Pickler.java:141)
at net.razorvine.pickle.Pickler.put_map(Pickler.java:368)
at net.razorvine.pickle.Pickler.dispatch(Pickler.java:315)
at net.razorvine.pickle.Pickler.save(Pickler.java:141)
at net.razorvine.pickle.Pickler.put_arrayOfObjects(Pickler.java:535)
at net.razorvine.pickle.Pickler.dispatch(Pickler.java:210)
at net.razorvine.pickle.Pickler.save(Pickler.java:141)
at net.razorvine.pickle.Pickler.dump(Pickler.java:111)
at net.razorvine.pickle.Pickler.dumps(Pickler.java:96)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:159)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:148)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:148)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:224)
at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:557)
at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:345)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:194)
Other runs fail with a plain out-of-memory error:
Aborted due to stage failure: Task 16 in stage 2.0 failed 4 times, most recent failure: Lost task 16.3 in stage 2.0 (TID 927, <ip>, executor 106): ExecutorLostFailure (executor 106 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 5.5 GB of 5.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
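The message itself suggests boosting spark.yarn.executor.memoryOverhead. The only way I can think of to try that from inside the script is something like the untested sketch below (replacing the plain SparkContext() call in my code), and I don't even know whether Glue honours Spark conf set this way:

    # Untested idea: try to raise the executor memory overhead the error message mentions.
    # I'm not sure AWS Glue actually applies conf values set inside the job script.
    from pyspark import SparkConf
    from pyspark.context import SparkContext

    conf = SparkConf().set("spark.yarn.executor.memoryOverhead", "1024")  # in MB, value picked arbitrarily
    sc = SparkContext(conf=conf)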
The work this job is doing (this filtering) seems simple enough to me, so I don't understand why Spark can't handle it. Can anyone offer advice about these errors, or suggest another way I could do the filtering I want? My code is below (note: it also includes some logic for partitioning the output data by day/hour):
import sys
import dateutil.parser

from awsglue.transforms import Filter, ResolveChoice, DropNullFields
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql.functions import udf

@udf
def generate_partition_key(timestamp):
    # Day-level partition value ("YYYY-MM-DD"), or "unknown" if the timestamp can't be parsed
    if not timestamp:
        return "unknown"
    try:
        dt = dateutil.parser.parse(timestamp)
    except Exception:
        return "unknown"
    return dt.strftime("%Y-%m-%d")

@udf
def generate_partition_hour(timestamp):
    # Hour-level partition value ("HH"), or "unknown" if the timestamp can't be parsed
    if not timestamp:
        return "unknown"
    try:
        dt = dateutil.parser.parse(timestamp)
    except Exception:
        return "unknown"
    return dt.strftime("%H")

## @params: [TempDir, JOB_NAME]
args = getResolvedOptions(sys.argv, ['TempDir', 'JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the newline-delimited JSON input, grouping small files within each partition
deliveryDF = glueContext.create_dynamic_frame.from_options(
    's3',
    {'paths': ['s3://input-bucket'], 'recurse': True, 'groupFiles': 'inPartition'},
    'json',
    transformation_ctx="deliveryDF"
)

def bucket_name_from_category(category):
    return "special-prefix-" + category

SPLIT_CATEGORIES = ['foo', 'bar']  # Categories to split out

for category in SPLIT_CATEGORIES:
    # Keep only the records belonging to this category
    categoryDF = Filter.apply(
        frame=deliveryDF, f=lambda x: x['category'] == category
    )
    categoryDF = ResolveChoice.apply(
        categoryDF, specs=[(name, "cast:double") for name in ['numeric_field1', 'numeric_field2']]
    )
    categoryDF = ResolveChoice.apply(categoryDF, choice="make_cols")
    categoryDF = DropNullFields.apply(frame=categoryDF)

    # Convert to a DataFrame and add the partition key columns
    dataframe = categoryDF.toDF()
    dataframe = dataframe.withColumn("partition_key", generate_partition_key(dataframe["@timestamp"]))
    dataframe = dataframe.withColumn("partition_hour", generate_partition_hour(dataframe["@timestamp"]))
    dynamicframe = DynamicFrame.fromDF(dataframe, glueContext, "nested_" + category)

    # Write this category's records to its own bucket, partitioned by day and hour
    bucket_name = bucket_name_from_category(category)
    glueContext.write_dynamic_frame.from_options(
        frame=dynamicframe,
        connection_type="s3",
        connection_options={
            "path": "s3://" + bucket_name + "/data",
            "compression": "gzip",
            "partitionKeys": ["partition_key", "partition_hour"]
        },
        format="json"
    )

job.commit()
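For example, would something along these lines be a better approach? It's only an untested sketch, but it keeps the per-category filter in the DataFrame API (a column comparison instead of a Python lambda), which I assume would avoid pushing every record through the Python pickler that appears in the stack trace above:

    # Untested alternative sketch: filter with a DataFrame column expression instead of Filter.apply
    from pyspark.sql.functions import col

    base_df = deliveryDF.toDF()  # convert once, before the per-category loop
    for category in SPLIT_CATEGORIES:
        # Note: the ResolveChoice/DropNullFields steps from my real code are omitted here for brevity
        category_df = base_df.filter(col("category") == category)
        category_df = category_df.withColumn("partition_key", generate_partition_key(category_df["@timestamp"]))
        category_df = category_df.withColumn("partition_hour", generate_partition_hour(category_df["@timestamp"]))
        dynamicframe = DynamicFrame.fromDF(category_df, glueContext, "nested_" + category)
        # ...then the same write_dynamic_frame.from_options call as above...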
Thanks in advance to anyone who can help!