How to fix a Spark script that fails to write a Spark Dataset to HBase
I am trying to write to an HBase table with Spark, following the HBase-Spark connector example from the link. I launch spark-shell with
$ spark-shell --jars /opt/cloudera/parcels/CDH/jars/hbase-spark-2.1.0-cdh6.2.1.jar,/opt/cloudera/parcels/CDH/jars/hbase-client-2.1.0-cdh6.2.1.jar
The code:
val sql = spark.sqlContext

import java.sql.Date

case class Person(name: String, email: String, birthDate: Date, height: Float)

val personDS = Seq(
  Person("alice", "alice@alice.com", Date.valueOf("2000-01-01"), 4.5f),
  Person("bob", "bob@bob.com", Date.valueOf("2001-10-17"), 5.1f)
).toDS

personDS.write.format("org.apache.hadoop.hbase.spark")
  .option("hbase.columns.mapping", "name STRING :key,email STRING c:email,birthDate DATE p:birthDate,height FLOAT p:height")
  .option("hbase.table", "test")
  .option("hbase.spark.use.hbasecontext", false)
  .option("spark.hadoop.validateOutputSpecs", false)
  .save()
The exception is:
java.lang.NullPointerException
at org.apache.hadoop.hbase.spark.HBaseRelation.<init>(DefaultSource.scala:139)
at org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:79)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:668)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:276)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:270)
... 49 elided
What is causing this exception, and how can it be avoided?
Solution
I suspect the NPE happens here because the HBaseContext should be properly initialized before the HBase-Spark connector can look up the table you are referencing in hbase:meta and create the data source. That is, follow the customizing-HBase-configuration section in the link, e.g.:
import org.apache.hadoop.hbase.spark.HBaseContext
import org.apache.hadoop.hbase.HBaseConfiguration
new HBaseContext(spark.sparkContext, new HBaseConfiguration())
...
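Note that new HBaseConfiguration() only picks up hbase-default.xml/hbase-site.xml if they can be found on the driver classpath, so this variant assumes the HBase client configuration is already visible there.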
There is also another way to initialize the HBaseContext, with an explicit path to hbase-site.xml:
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.spark.HBaseContext
val conf = HBaseConfiguration.create()
// use your actual path to hbase-site.xml
conf.addResource(new Path("/etc/hbase/conf.cloudera.hbase/hbase-site.xml"))
new HBaseContext(sc, conf)
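Putting the pieces together, here is a minimal end-to-end sketch of the fixed write path. It assumes the CDH path to hbase-site.xml shown above, that the table test already exists with the column families c and p referenced by the mapping, and that with the context registered up front, hbase.spark.use.hbasecontext can be left at its default (true, as far as I can tell) so the connector reuses it instead of hitting the NPE:

import java.sql.Date
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.spark.HBaseContext

// Register the HBaseContext first, so the connector can find it later.
val conf = HBaseConfiguration.create()
conf.addResource(new Path("/etc/hbase/conf.cloudera.hbase/hbase-site.xml")) // adjust to your path
new HBaseContext(spark.sparkContext, conf)

case class Person(name: String, email: String, birthDate: Date, height: Float)

// In spark-shell the implicits for toDS are imported automatically;
// in a compiled application, add: import spark.implicits._
val personDS = Seq(
  Person("alice", "alice@alice.com", Date.valueOf("2000-01-01"), 4.5f),
  Person("bob", "bob@bob.com", Date.valueOf("2001-10-17"), 5.1f)
).toDS

// The same write as in the question; hbase.spark.use.hbasecontext is left
// at its default so the connector picks up the context created above.
personDS.write.format("org.apache.hadoop.hbase.spark")
  .option("hbase.columns.mapping", "name STRING :key,email STRING c:email,birthDate DATE p:birthDate,height FLOAT p:height")
  .option("hbase.table", "test")
  .save()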