Installing Hadoop on CentOS 6

Hadoop is a distributed-system infrastructure developed under the Apache Software Foundation.

It lets users develop distributed programs without knowing the low-level details of distribution, harnessing the power of a cluster for high-speed computation and storage.

Hadoop implements a distributed file system, the Hadoop Distributed File System (HDFS). HDFS is highly fault-tolerant, is designed to run on low-cost hardware, and provides high-throughput access to application data, which suits applications with very large data sets. HDFS relaxes some POSIX requirements to allow streaming access to the data in the file system.

The core of the Hadoop framework is HDFS plus MapReduce: HDFS provides storage for massive data, and MapReduce provides computation over it.


namenode 192.168.31.243

datanode 192.168.31.165


Lab environment

centos6_x64


Lab software

jdk-6u31-linux-i586.bin

hadoop-1.0.0.tar.gz


Software installation

yum install -y rsync* openssh*

yum install -y ld-linux.so.2    # 32-bit runtime loader, needed by the i586 JDK on x86_64

groupadd hadoop

useradd hadoop -g hadoop

mkdir /usr/local/hadoop

mkdir -p /usr/local/java

service iptables stop
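
service iptables stop only lasts until the next reboot; on CentOS 6 the firewall service can also be disabled at boot with chkconfig:

chkconfig iptables off    # do not start the firewall at boot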

ssh-keygen -t rsa    # on 192.168.31.243 (repeat the same steps on 192.168.31.165)

Enter file in which to save the key (/root/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.


scp -r /root/.ssh/id_rsa.pub 192.168.31.165:/root/.ssh/authorized_keys    # run on 192.168.31.243

scp -r /root/.ssh/id_rsa.pub 192.168.31.243:/root/.ssh/authorized_keys    # run on 192.168.31.165
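
These two copies authorize each node on the other, but start-all.sh also sshes to localhost, and the startup log further down still prompts for passwords there. A minimal sketch for authorizing the key locally on each node as well:

cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
ssh localhost hostname    # should return without asking for a password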

scp -r jdk-6u31-linux-i586.bin hadoop-1.0.0.tar.gz 192.168.31.165:/root/


mv jdk-6u31-linux-i586.bin /usr/local/java/

cd /usr/local/java/

chmod +x jdk-6u31-linux-i586.bin

./jdk-6u31-linux-i586.bin
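
The self-extracting installer unpacks into jdk1.6.0_31 under the current directory; that is the path JAVA_HOME assumes below, so it is worth confirming:

ls -d /usr/local/java/jdk1.6.0_31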


vim /etc/profile    # append the following at the end of the file

# set java environment

export JAVA_HOME=/usr/local/java/jdk1.6.0_31

export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

# set hadoop path

export HADOOP_HOME=/usr/local/hadoop

export PATH=$PATH:$HADOOP_HOME/bin

source /etc/profile


java -version

java version "1.6.0_31"

Java(TM) SE Runtime Environment (build 1.6.0_31-b04)

Java HotSpot(TM) Client VM (build 20.6-b01, mixed mode, sharing)


tar zxvf hadoop-1.0.0.tar.gz

mv hadoop-1.0.0 /usr/local/hadoop

chown -R hadoop:hadoop /usr/local/hadoop

ll /usr/local/hadoop/

drwxr-xr-x 14 hadoop hadoop 4096 Dec 16 2011 hadoop-1.0.0
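
Note the listing above: because /usr/local/hadoop already existed, mv placed the distribution at /usr/local/hadoop/hadoop-1.0.0, while HADOOP_HOME and every path used below assume the contents sit directly under /usr/local/hadoop. One way to reconcile the two (sketch):

mv /usr/local/hadoop/hadoop-1.0.0/* /usr/local/hadoop/
rmdir /usr/local/hadoop/hadoop-1.0.0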


cp /usr/local/hadoop/conf/hadoop-env.sh /usr/local/hadoop/conf/hadoop-env.sh.bak

vim /usr/local/hadoop/conf/hadoop-env.sh

# export JAVA_HOME=/usr/lib/j2sdk1.5-sun

export JAVA_HOME=/usr/local/java/jdk1.6.0_31    # change the commented-out default above to this


cd /usr/local/hadoop/conf

cp core-site.xml core-site.xml.bak

cp hdfs-site.xml hdfs-site.xml.bak

cp mapred-site.xml mapred-site.xml.bak    # back up all three files before editing


vim core-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>

<name>hadoop.tmp.dir</name>

<value>/usr/local/hadoop/tmp</value>

<description>A base for other temporary directories.</description>

</property>

<!-- file system properties -->

<property>

<name>fs.default.name</name>

<value>hdfs://192.168.31.243:9000</value>

</property>

</configuration>


vim hdfs-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>

<name>dfs.replication</name>

<value>3</value>

</property>

</configuration>
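
One caveat on dfs.replication: with fewer datanodes than the replication factor, every block stays permanently under-replicated, so on a cluster this small a value of 1 would match the hardware. Once HDFS is up, fsck (part of Hadoop 1.0) reports the actual replication state:

/usr/local/hadoop/bin/hadoop fsck / -files -blocks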


vim mapred-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>

<name>mapred.job.tracker</name>

<value>192.168.31.243:9001</value>

</property>

</configuration>
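
A malformed tag in any of these three files only surfaces as a cryptic parse error at daemon startup, so a quick well-formedness check is cheap (xmllint comes with libxml2 and may need installing via yum):

xmllint --noout core-site.xml hdfs-site.xml mapred-site.xml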


cp masters masters.bak

vim masters

localhost

192.168.31.243


cp slaves slaves.bak    # slaves holds the datanode list (192.168.31.165)

vim /usr/local/hadoop/conf/slaves

localhost

192.168.31.165


scp -r core-site.xml hdfs-site.xml mapred-site.xml 192.168.31.165:/usr/local/hadoop/conf/
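
Only the three XML files are pushed to the datanode here; if hadoop-env.sh (with its JAVA_HOME edit), masters, and slaves were edited on the namenode only, they likely need the same treatment (sketch):

scp hadoop-env.sh masters slaves 192.168.31.165:/usr/local/hadoop/conf/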

/usr/local/hadoop/bin/hadoop namenode -format

Warning: $HADOOP_HOME is deprecated.

16/09/21 22:51:13 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG: host = java.net.UnknownHostException: centos6: centos6

STARTUP_MSG: args = [-format]

STARTUP_MSG: version = 1.0.0

STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1214675; compiled by 'hortonfo' on Thu Dec 15 16:36:35 UTC 2011

************************************************************/

16/09/21 22:51:14 INFO util.GSet: VM type = 32-bit

16/09/21 22:51:14 INFO util.GSet: 2% max memory = 19.33375 MB

16/09/21 22:51:14 INFO util.GSet: capacity = 2^22 = 4194304 entries

16/09/21 22:51:14 INFO util.GSet: recommended=4194304, actual=4194304

16/09/21 22:51:14 INFO namenode.FSNamesystem: fsOwner=root

16/09/21 22:51:14 INFO namenode.FSNamesystem: supergroup=supergroup

16/09/21 22:51:14 INFO namenode.FSNamesystem: isPermissionEnabled=true

16/09/21 22:51:14 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100

16/09/21 22:51:14 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)

16/09/21 22:51:14 INFO namenode.NameNode: Caching file names occuring more than 10 times

16/09/21 22:51:14 INFO common.Storage: Image file of size 110 saved in 0 seconds.

16/09/21 22:51:14 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.

16/09/21 22:51:14 INFO namenode.NameNode: SHUTDOWN_MSG:

SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: centos6: centos6
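
The java.net.UnknownHostException in the banner means the machine's hostname, centos6, does not resolve; the format still succeeded, but the cleanest fix is a hosts entry on each node (sketch; substitute each machine's actual hostname and IP):

echo "192.168.31.243 centos6" >> /etc/hosts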


/usr/local/hadoop/bin/start-all.sh

starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-centos6.out

The authenticity of host 'localhost (::1)' can't be established.

RSA key fingerprint is 81:d9:c6:54:a9:99:27:c0:f7:5f:c3:15:d5:84:a0:99.

Are you sure you want to continue connecting (yes/no)? yes

localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.

root@localhost's password:

localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-datanode-centos6.out

The authenticity of host '192.168.31.243 (192.168.31.243)' can't be established.

192.168.31.243: Warning: Permanently added '192.168.31.243' (RSA) to the list of known hosts.

root@192.168.31.243's password:

192.168.31.243: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-secondarynamenode-centos6.out

starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-jobtracker-centos6.out

localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-tasktracker-centos6.out


ll /usr/local/hadoop/tmp/

drwxr-xr-x 5 root root 4096 Sep 21 22:53 dfs

drwxr-xr-x 3 root root 4096 Sep 21 22:53 mapred    # seeing these two directories confirms the daemons started without errors


jps

3237 SecondaryNameNode

3011 NameNode

3467 Jps
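
JobTracker, DataNode, and TaskTracker are missing from this jps listing even though start-all.sh reported starting them; the daemon logs under /usr/local/hadoop/logs/ (file names follow the pattern shown in the startup output) are the first place to look, and the datanode side should be checked as well:

tail -n 50 /usr/local/hadoop/logs/hadoop-root-jobtracker-centos6.log
ssh 192.168.31.165 jps    # should list DataNode and TaskTracker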

netstat -tuplna | grep 500

tcp 0 0 :::50070 :::* LISTEN 3011/java

tcp 0 0 :::50090 :::* LISTEN 3237/java

http://192.168.31.243:50070/dfshealth.jsp
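
The same health information is available from the shell; dfsadmin -report ships with Hadoop 1.0:

/usr/local/hadoop/bin/hadoop dfsadmin -report    # live datanodes, capacity, usage
curl -s http://192.168.31.243:50070/dfshealth.jsp | head    # headless check of the web UI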

(screenshot: the NameNode web UI at dfshealth.jsp)

