Why does this Sqoop command throw an exception? Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

I have a question about Sqoop, and I would really appreciate your help.

I am running a Sqoop command from my local machine to export data from HDFS to an Oracle database. On the local machine I am using Hadoop 3.3.0 and Sqoop 1.4.7.

The error is:

Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

The Sqoop command:

sqoop export --connect "jdbc:oracle:thin:@(description=(address=(protocol=tcp)(host=172.16.49.30)(port=1521))(connect_data=(service_name=stgdb)))" --table CORE_ETL.DEPOSIT_TURNOVER --username username --password password  --export-dir /tmp/merged_deposit_turnover/sqoop/ --input-fields-terminated-by "," --input-lines-terminated-by '\n'

yarn-site.xml:

<configuration>
  <property>
    <name>yarn.acl.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.admin.acl</name>
    <value>*</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>cluster.com:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>cluster.com:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>cluster.com:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>cluster.com:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>cluster.com:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address</name>
    <value>cluster.com:8090</value>
  </property>
  <property>
    <name>yarn.resourcemanager.client.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.client.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.client.thread-count</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.scheduler.increment-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.increment-allocation-vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.amliveliness-monitor.interval-ms</name>
    <value>1000</value>
  </property>
  <property>
    <name>yarn.am.liveness-monitor.expiry-interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.am.max-attempts</name>
    <value>2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.container.liveness-monitor.interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.nm.liveness-monitor.interval-ms</name>
    <value>1000</value>
  </property>
  <property>
    <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.client.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <name>yarn.application.classpath</name>
    <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
  </property>
  <property>
    <name>yarn.resourcemanager.max-completed-applications</name>
    <value>10000</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
    <value>logs</value>
  </property>
</configuration>

Environment variables:

export HADOOP_HOME=/etc/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export YARN_CONF_DIR=/etc/hadoop/etc/hadoop
export HADOOP_CONF_DIR=/etc/hadoop/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/dfs/nn</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address</name>
    <value>cluster.com:8022</value>
  </property>
  <property>
    <name>dfs.https.address</name>
    <value>cluster.com:9871</value>
  </property>
  <property>
    <name>dfs.https.port</name>
    <value>9871</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>cluster.com:9870</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>67108864</value>
  </property>
  <property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>false</value>
  </property>
  <property>
    <name>fs.permissions.umask-mode</name>
    <value>022</value>
  </property>
  <property>
    <name>dfs.client.block.write.locateFollowingBlock.retries</name>
    <value>7</value>
  </property>
  <property>
    <name>dfs.namenode.acls.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/run/hdfs-sockets/dn</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit.skip.checksum</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.client.domain.socket.data.traffic</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
</configuration>

mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/etc/hadoop</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/etc/hadoop</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/etc/hadoop</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/common/*,$HADOOP_MAPRED_HOME/share/hadoop/common/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/yarn/*,$HADOOP_MAPRED_HOME/share/hadoop/yarn/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/hdfs/*,$HADOOP_MAPRED_HOME/share/hadoop/hdfs/lib/*</value>
  </property>
</configuration>

The Sqoop error output:

Warning: /usr/lib/sqoop/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /usr/lib/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/lib/sqoop/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
2020-08-22 17:56:24,879 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
2020-08-22 17:56:25,173 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
2020-08-22 17:56:25,492 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
2020-08-22 17:56:25,579 INFO manager.SqlManager: Using default fetchSize of 1000
2020-08-22 17:56:25,579 INFO tool.CodeGenTool: Beginning code generation
2020-08-22 17:56:27,694 INFO manager.OracleManager: Time zone has been set to GMT
2020-08-22 17:56:27,883 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CORE_ETL.DEPOSIT_TURNOVER t WHERE 1=0
2020-08-22 17:56:28,188 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /etc/hadoop
Note: /tmp/sqoop-hatef/compile/dc629ada72d032251eb72d68f8f68c85/CORE_ETL_DEPOSIT_TURNOVER.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
2020-08-22 17:56:33,829 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hatef/compile/dc629ada72d032251eb72d68f8f68c85/CORE_ETL.DEPOSIT_TURNOVER.jar
2020-08-22 17:56:33,902 INFO mapreduce.ExportJobBase: Beginning export of CORE_ETL.DEPOSIT_TURNOVER
2020-08-22 17:56:33,902 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2020-08-22 17:56:34,381 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
2020-08-22 17:56:36,685 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2020-08-22 17:56:38,545 INFO manager.OracleManager: Time zone has been set to GMT
2020-08-22 17:56:38,638 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
2020-08-22 17:56:38,645 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
2020-08-22 17:56:38,647 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
2020-08-22 17:56:38,996 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at hdp-name1-esxi12.sdb247.com/172.16.49.10:8032
2020-08-22 17:56:40,130 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/airflow/.staging/job_1597060731030_0459
2020-08-22 18:01:01,798 INFO input.FileInputFormat: Total input files to process : 1
2020-08-22 18:01:01,885 INFO input.FileInputFormat: Total input files to process : 1
2020-08-22 18:01:02,817 INFO mapreduce.JobSubmitter: number of splits:4
2020-08-22 18:01:02,999 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
2020-08-22 18:01:05,962 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1597060731030_0459
2020-08-22 18:01:05,962 INFO mapreduce.JobSubmitter: Executing with tokens: []
2020-08-22 18:01:08,561 INFO conf.Configuration: resource-types.xml not found
2020-08-22 18:01:08,562 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2020-08-22 18:01:08,901 INFO impl.YarnClientImpl: Submitted application application_1597060731030_0459
2020-08-22 18:01:09,086 INFO mapreduce.Job: The url to track the job: http://hdp-name1-esxi12.sdb247.com:8088/proxy/application_1597060731030_0459/
2020-08-22 18:01:09,088 INFO mapreduce.Job: Running job: job_1597060731030_0459
2020-08-22 18:01:11,442 INFO mapreduce.Job: Job job_1597060731030_0459 running in uber mode : false
2020-08-22 18:01:11,444 INFO mapreduce.Job:  map 0% reduce 0%
2020-08-22 18:01:11,671 INFO mapreduce.Job: Job job_1597060731030_0459 failed with state FAILED due to: Application application_1597060731030_0459 failed 2 times due to AM Container for appattempt_1597060731030_0459_000002 exited with  exitCode: 1
Failing this attempt.Diagnostics: [2020-08-22 18:03:19.337]Exception from container-launch.
Container id: container_1597060731030_0459_02_000001
Exit code: 1

[2020-08-22 18:03:19.338]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Please check whether your etc/hadoop/mapred-site.xml contains the below configuration:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>

[2020-08-22 18:03:19.339]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Please check whether your etc/hadoop/mapred-site.xml contains the below configuration:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>

For more detailed output, check the application tracking page: http://cluster.com:8088/cluster/app/application_1597060731030_0459 Then click on links to logs of each attempt.
. Failing the application.
2020-08-22 18:01:11,780 INFO mapreduce.Job: Counters: 0
2020-08-22 18:01:11,916 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2020-08-22 18:01:11,921 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 273.1812 seconds (0 bytes/sec)
2020-08-22 18:01:12,013 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2020-08-22 18:01:12,015 INFO mapreduce.ExportJobBase: Exported 0 records.
2020-08-22 18:01:12,015 ERROR mapreduce.ExportJobBase: Export job failed!
2020-08-22 18:01:12,016 ERROR tool.ExportTool: Error during export: 
Export job failed!
    at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:445)
    at org.apache.sqoop.manager.OracleManager.exportTable(OracleManager.java:465)
    at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
    at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:252)

Solution

You mention that you have a cluster installed with Cloudera, but it is not clear where Sqoop itself is running or where it is picking up these XML files from.

If you have a full Cloudera cluster installed, Sqoop should already be installed and configured on it, so it should run without much trouble (you may need an extra JDBC driver, but that should be all).
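For an Oracle export like this one, the "extra JDBC driver" concretely means placing Oracle's JDBC jar in Sqoop's lib directory. A minimal sketch, assuming Sqoop is installed under /usr/lib/sqoop (as the warnings at the top of the log suggest) and that you have already downloaded ojdbc8.jar from Oracle, since it is not shipped with Sqoop:

# ojdbc8.jar is a hypothetical file name; use the jar matching your Oracle and JDK version
cp ojdbc8.jar /usr/lib/sqoop/lib/
ls /usr/lib/sqoop/lib/ | grep -i ojdbc   # verify the driver is in place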

Otherwise, if you are trying to set up Sqoop (and Hadoop) externally, you need to take a copy of the $HADOOP_HOME/conf folder from a worker node of the Hadoop cluster, to make sure all client configurations are identical.
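The diagnostics above also point at the likely root cause: yarn.app.mapreduce.am.env sets HADOOP_MAPRED_HOME=/etc/hadoop, and that value is resolved on the cluster node that launches the ApplicationMaster container, not on the machine that submits the job. MRAppMaster is packaged under share/hadoop/mapreduce/ inside the Hadoop distribution, so if /etc/hadoop is not the distribution root on the cluster nodes, the AM starts with no MapReduce jars on its classpath and fails exactly like this. A minimal sketch of the corrected mapred-site.xml entries, assuming (hypothetically) that the cluster's Hadoop 3.3.0 distribution is unpacked at /opt/hadoop-3.3.0; substitute the real path taken from a worker node:

<!-- Point HADOOP_MAPRED_HOME at the distribution root (assumed path), not a conf dir -->
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/opt/hadoop-3.3.0</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=/opt/hadoop-3.3.0</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=/opt/hadoop-3.3.0</value>
</property>

On the client side, a quick sanity check under the same assumed path is to confirm the MapReduce jars actually appear on the Hadoop classpath before resubmitting the export:

# Assumed install path; adjust to your environment
export HADOOP_HOME=/opt/hadoop-3.3.0
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
# MRAppMaster lives in hadoop-mapreduce-client-app-*.jar; it must show up here
hadoop classpath | tr ':' '\n' | grep mapreduce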
