Hadoop
Reference: http://hadoop.apache.org/docs/r2.6.4/hadoop-project-dist/hadoop-common/SingleCluster.html

1. Download and install
> cd /tmp
> wget http://mirrors.noc.im/apache/hadoop/common/hadoop-2.6.4/hadoop-2.6.4.tar.gz
> tar -xf hadoop-2.6.4.tar.gz -C /home/bigdata
2. Configure passwordless SSH login
> yum install -y openssh-clients
> ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
> cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
> ssh localhost #if no password prompt appears, the setup succeeded
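The `cat >> authorized_keys` step above appends unconditionally, so re-running the setup duplicates the key. A minimal sketch of an idempotent variant, using a temporary directory and a placeholder key line as stand-ins for ~/.ssh and a real public key:

```shell
# Stand-in for ~/.ssh; the key line below is a placeholder, not a real key.
SSH_DIR=$(mktemp -d)
PUB="ssh-dss AAAAB3placeholder user@localhost"
echo "$PUB" > "$SSH_DIR/id_dsa.pub"

# Append the public key only if it is not already authorized (safe to re-run):
grep -qxF "$PUB" "$SSH_DIR/authorized_keys" 2>/dev/null || cat "$SSH_DIR/id_dsa.pub" >> "$SSH_DIR/authorized_keys"
grep -qxF "$PUB" "$SSH_DIR/authorized_keys" 2>/dev/null || cat "$SSH_DIR/id_dsa.pub" >> "$SSH_DIR/authorized_keys"

wc -l < "$SSH_DIR/authorized_keys"   # still 1 line after two runs
```

Against the real ~/.ssh, also make sure the directory is mode 700 and authorized_keys is mode 600, or sshd will refuse the key.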
3. Configure the Hadoop runtime environment
> cd /home/bigdata/hadoop-2.6.4
> vi etc/hadoop/hadoop-env.sh #change ${JAVA_HOME} to /usr/lib/jvm/java-1.7.0-openjdk.x86_64/
> bin/hadoop #if Hadoop's usage output appears, Hadoop is able to run
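Instead of editing hadoop-env.sh by hand, the JAVA_HOME line can be set with sed for scripted installs. A sketch operating on a temporary copy (the real target would be etc/hadoop/hadoop-env.sh; the single line written here mimics the stock file):

```shell
# Temp stand-in for etc/hadoop/hadoop-env.sh with the stock JAVA_HOME line.
ENV_FILE=$(mktemp)
echo 'export JAVA_HOME=${JAVA_HOME}' > "$ENV_FILE"

# Replace the whole line with the concrete JDK path from this guide.
# '|' as the sed delimiter avoids escaping the slashes in the path.
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64/|' "$ENV_FILE"

grep JAVA_HOME "$ENV_FILE"
```

Note that `sed -i` with no backup suffix is GNU sed syntax; on BSD/macOS use `sed -i ''`.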
4. Configure and start HDFS
> vi etc/hadoop/core-site.xml
Configure it as follows:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/bigdata/hdfs</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
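For repeatable installs, the same configuration can be written from a script with a heredoc instead of vi. A sketch that writes to a temporary directory (in practice the target would be etc/hadoop/core-site.xml under the Hadoop install):

```shell
# Temp directory stands in for etc/hadoop/ in this sketch.
CONF_DIR=$(mktemp -d)

# Quoted 'EOF' prevents any shell expansion inside the XML.
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/bigdata/hdfs</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

grep -c '<property>' "$CONF_DIR/core-site.xml"   # prints 2
```

Setting hadoop.tmp.dir explicitly matters here: the default lives under /tmp, so NameNode metadata would be wiped on reboot without it.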
> vi etc/hadoop/hdfs-site.xml
Configure it as follows:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
> bin/hdfs namenode -format
> sbin/start-dfs.sh #run jps to quickly confirm the HDFS daemons (NameNode, DataNode) are up
> bin/hdfs dfs -help #list all HDFS shell commands
> bin/hdfs dfs -mkdir /tmp #try creating a tmp directory in HDFS
> bin/hdfs dfs -ls / #list the HDFS root to verify tmp was created