How to Solve Common Hadoop Problems
- Error description
# After starting the Hadoop cluster, the DataNode does not show up
# The command hdfs dfs -ls -R / does not work
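# Before reformatting anything, it is worth confirming the symptom with the
# standard Hadoop tools (a generic check; the log path assumes the default
# log directory under this Hadoop install):
jps                      # is a DataNode JVM running on the worker node?
hdfs dfsadmin -report    # does the NameNode report any live DataNodes?
tail -n 50 /opt/software/hadoop-2.9.2/logs/hadoop-*-datanode-*.log   # why did registration fail?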
- Solution
# First restart the cluster and reformat the NameNode
hdfs namenode -format
# If that does not work,
# delete the data directory and create a new one
cd /opt/software/hadoop-2.9.2/dfs
rm -rf data
mkdir data
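# Why this works: hdfs namenode -format generates a new clusterID, while the
# DataNode's old data directory still records the previous one, so the DataNode
# refuses to register and "does not show". Clearing the data directory removes
# the stale ID. A minimal restart-and-verify sequence, assuming the stock sbin
# scripts of this Hadoop 2.9.2 install are on the PATH:
stop-dfs.sh              # stop HDFS before formatting
hdfs namenode -format    # reformat, then clear the data directory as shown above
start-dfs.sh             # start the NameNode and DataNodes again
hdfs dfsadmin -report    # should now list the DataNode as a live node
hdfs dfs -ls -R /        # the original command should work again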
- Error 2
# After starting Hive, an error is reported when inserting one row into a table
# View http://192.168.128.103:8088/cluster in a browser
# Error details
Application application_1705164916779_0001 failed 1 times (global limit =2; local limit is =1) due to AM Container for appattempt_1705164916779_0001_000001 exited with exitCode: 1
Failing this attempt.Diagnostics: [2024-01-14 00:57:25.540]Exception from container-launch.
Container id: container_1705164916779_0001_01_000001
Exit code: 1
[2024-01-14 00:57:25.550]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.spark.deploy.yarn.ApplicationMaster
[2024-01-14 00:57:25.551]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.spark.deploy.yarn.ApplicationMaster
For more detailed output, check the application tracking page: http://slave3:8088/cluster/app/application_1705164916779_0001 Then click on links to logs of each attempt.
. Failing the application.
- Solution
# Cause: this environment is configured with Hive on Spark, and the Spark jars stored in HDFS were deleted
# Recreate the directory in HDFS and copy the jars from spark/jars into it
hdfs dfs -mkdir -p /spark/hive-jars
hadoop fs -put /usr/local/software/spark-2.2.0/jars/* /spark/hive-jars
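# For Hive on Spark, the Spark property spark.yarn.jars is what normally points
# YARN at these jars, so the path used above must match whatever the existing
# hive-site.xml / spark-defaults.conf declares (the URI below is an assumption
# about this cluster, not taken from it). A quick verification sketch:
hadoop fs -ls /spark/hive-jars | head   # confirm the jars are back in HDFS
#   spark.yarn.jars = hdfs:///spark/hive-jars/*
# then re-run the insert in Hive; the ApplicationMaster class should now be found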