
Ubuntu 12
Hadoop 1.1.2
First, make sure Hadoop itself is installed and configured correctly.
1. The WordCount.java source file can be found in the unpacked Hadoop directory at src/examples/org/apache/hadoop/examples/WordCount.java.
Create a wordcount folder and copy WordCount.java into dev/wordcount.
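A minimal sketch of this step, assuming the current directory is the Hadoop install root and the dev/wordcount folder mentioned above:
mkdir -p dev/wordcount
cp src/examples/org/apache/hadoop/examples/WordCount.java dev/wordcount/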
2. Compile WordCount.java.
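With Hadoop 1.1.2 the compile step typically looks like the sketch below; the jar names (hadoop-core-1.1.2.jar, lib/commons-cli-1.2.jar) are assumptions and may differ slightly in your distribution:
mkdir -p wordcount_classes
javac -classpath hadoop-core-1.1.2.jar:lib/commons-cli-1.2.jar -d wordcount_classes dev/wordcount/WordCount.java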
3. Package the generated class files into a jar.
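For example, assuming the classes were compiled into wordcount_classes as above:
jar -cvf wordcount/wordcount.jar -C wordcount_classes/ .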
4. Create two input files, file01 and file02, under the wordcount folder.
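The file contents are not shown in the original; the lines below are an assumption that is consistent with the word counts printed in step 7:
echo "Hello World Bye World" > wordcount/file01
echo "Hello Hadoop Goodbye Hadoop" > wordcount/file02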
5. Start Hadoop, create an input folder on HDFS, and upload the two input files into it.
root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop dfs -ls
ls: Cannot access .: No such file or directory.
root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop dfs -mkdir input
root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop dfs -ls
Found 1 items
drwxr-xr-x - root supergroup 0 2014-03-04 17:48 /user/root/input
root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -put /home/zcf/桌面/file01 input
root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -put /home/zcf/桌面/file02 input
root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -ls input
Found 2 items
-rw-r--r-- 1 root supergroup 22 2014-03-04 17:50 /user/root/input/file01
-rw-r--r-- 1 root supergroup 28 2014-03-04 17:50 /user/root/input/file02
6. Run wordcount.jar.
root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop jar wordcount/wordcount.jar org.apache.hadoop.examples.WordCount input output
14/03/04 17:58:14 INFO input.FileInputFormat: Total input paths to process : 2
14/03/04 17:58:14 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/04 17:58:14 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/04 17:58:15 INFO mapred.JobClient: Running job: job_201403041744_0001
14/03/04 17:58:16 INFO mapred.JobClient: map 0% reduce 0%
14/03/04 17:58:21 INFO mapred.JobClient: map 50% reduce 0%
14/03/04 17:58:22 INFO mapred.JobClient: map 100% reduce 0%
14/03/04 17:58:29 INFO mapred.JobClient: map 100% reduce 33%
14/03/04 17:58:31 INFO mapred.JobClient: map 100% reduce 100%
14/03/04 17:58:32 INFO mapred.JobClient: Job complete: job_201403041744_0001
14/03/04 17:58:32 INFO mapred.JobClient: Counters: 29
14/03/04 17:58:32 INFO mapred.JobClient: Job Counters
14/03/04 17:58:32 INFO mapred.JobClient: Launched reduce tasks=1
14/03/04 17:58:32 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=8421
14/03/04 17:58:32 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/03/04 17:58:32 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/03/04 17:58:32 INFO mapred.JobClient: Launched map tasks=2
14/03/04 17:58:32 INFO mapred.JobClient: Data-local map tasks=2
14/03/04 17:58:32 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=9155
14/03/04 17:58:32 INFO mapred.JobClient: File Output Format Counters
14/03/04 17:58:32 INFO mapred.JobClient: Bytes Written=41
14/03/04 17:58:32 INFO mapred.JobClient: FileSystemCounters
14/03/04 17:58:32 INFO mapred.JobClient: FILE_BYTES_READ=79
14/03/04 17:58:32 INFO mapred.JobClient: HDFS_BYTES_READ=268
14/03/04 17:58:32 INFO mapred.JobClient: FILE_BYTES_WRITTEN=152857
14/03/04 17:58:32 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=41
14/03/04 17:58:32 INFO mapred.JobClient: File Input Format Counters
14/03/04 17:58:32 INFO mapred.JobClient: Bytes Read=50
14/03/04 17:58:32 INFO mapred.JobClient: Map-Reduce Framework
14/03/04 17:58:32 INFO mapred.JobClient: Map output materialized bytes=85
14/03/04 17:58:32 INFO mapred.JobClient: Map input records=2
14/03/04 17:58:32 INFO mapred.JobClient: Reduce shuffle bytes=85
14/03/04 17:58:32 INFO mapred.JobClient: Spilled Records=12
14/03/04 17:58:32 INFO mapred.JobClient: Map output bytes=82
14/03/04 17:58:32 INFO mapred.JobClient: CPU time spent (ms)=2840
14/03/04 17:58:32 INFO mapred.JobClient: Total committed heap usage (bytes)=306511872
14/03/04 17:58:32 INFO mapred.JobClient: Combine input records=8
14/03/04 17:58:32 INFO mapred.JobClient: SPLIT_RAW_BYTES=218
14/03/04 17:58:32 INFO mapred.JobClient: Reduce input records=6
14/03/04 17:58:32 INFO mapred.JobClient: Reduce input groups=5
14/03/04 17:58:32 INFO mapred.JobClient: Combine output records=6
14/03/04 17:58:32 INFO mapred.JobClient: Physical memory (bytes) snapshot=382898176
14/03/04 17:58:32 INFO mapred.JobClient: Reduce output records=5
14/03/04 17:58:32 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1164251136
14/03/04 17:58:32 INFO mapred.JobClient: Map output records=8
7. Check the results.
root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -ls
Found 2 items
drwxr-xr-x - root supergroup 0 2014-03-04 17:50 /user/root/input
drwxr-xr-x - root supergroup 0 2014-03-04 17:58 /user/root/output
root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -ls output
Found 3 items
-rw-r--r-- 1 root supergroup 0 2014-03-04 17:58 /user/root/output/_SUCCESS
drwxr-xr-x - root supergroup 0 2014-03-04 17:58 /user/root/output/_logs
-rw-r--r-- 1 root supergroup 41 2014-03-04 17:58 /user/root/output/part-r-00000
root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -cat /output/part-r-00000
cat: File does not exist: /output/part-r-00000
root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -cat output/part-r-00000
Bye 1
Goodbye 1
Hadoop 2
Hello 2
World 2
This completes the WordCount example on Hadoop. If you want to run it again, you must first delete the output folder on HDFS: to guarantee correct results, Hadoop throws an exception if the output directory already exists, such as:
ERROR security.UserGroupInformation: PriviledgedActionException as:
root cause:org.apache.hadoop.mapred.FileAlreadyExistsException:
Output directory output already exists
Delete the output directory on HDFS:
root@zcf-K42JZ:/usr/local/hadoop# bin/hadoop fs -rmr output
Deleted hdfs://localhost:9000/user/root/output
Steps to configure a single-node Hadoop installation on Ubuntu:
1. Create a hadoop user group
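The command for this step is not shown in the original; on Ubuntu it is typically:
sudo addgroup hadoop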
2. Create the hadoop user
sudo adduser -ingroup hadoop hadoop
After pressing Enter you will be prompted for a new UNIX password; this is the password for the new hadoop user, so type it and press Enter.
If you leave the password empty, you will be prompted again: the password cannot be blank.
Finally, confirm that the account details are correct; if they are, type Y and press Enter.
3. Grant the hadoop user sudo privileges
Open the sudoers file with: sudo gedit /etc/sudoers
Give the hadoop user the same privileges as the root user.
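For example, an entry like the following (standard Ubuntu sudoers syntax; adjust to your own policy) grants those rights:
hadoop ALL=(ALL:ALL) ALL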
II. Log in to Ubuntu as the newly created hadoop user
III. Install SSH
sudo apt-get install openssh-server
After the installation finishes, start the service:
sudo /etc/init.d/ssh start
Check that the service started correctly: ps -e | grep ssh
Set up passwordless login by generating a private/public key pair:
ssh-keygen -t rsa -P ""
This creates two files under /home/hadoop/.ssh: id_rsa (the private key) and id_rsa.pub (the public key).
Next, append the public key to authorized_keys, which stores the public keys of every client allowed to log in over SSH as the current user:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Log in via SSH:
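The command is not shown above; for a single-node setup it is typically:
ssh localhost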
Log out:
exit
IV. Install the Java environment
sudo apt-get install openjdk-7-jdk
Verify the installation by running java -version; output showing the Java version means the installation succeeded.
V. Install Hadoop 2.4.0
1. Download the release from the official Apache website.
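One way to fetch the release (the mirror path below is an assumption; verify it against the Apache archive):
wget http://archive.apache.org/dist/hadoop/common/hadoop-2.4.0/hadoop-2.4.0.tar.gz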
2. Install
Unpack the archive:
sudo tar xzf hadoop-2.4.0.tar.gz
Suppose we want to install Hadoop under /usr/local.
Move the unpacked folder to /usr/local and rename it to hadoop:
sudo mv hadoop-2.4.0 /usr/local/hadoop
Give the user read/write permission on the folder:
sudo chmod 774 /usr/local/hadoop
3. Configure
1) Configure ~/.bashrc
Before editing this file you need the Java installation path, which is used to set the JAVA_HOME environment variable. You can find it with:
update-alternatives --config java
The full path is:
/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
We take only the prefix: /usr/lib/jvm/java-7-openjdk-amd64
Now edit the .bashrc file:
sudo gedit ~/.bashrc
This opens the file in an editor; append the following lines at the end, save, and close the editor.
#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END
Run the following command to make the new environment variables take effect:
source ~/.bashrc
2) Edit /usr/local/hadoop/etc/hadoop/hadoop-env.sh
Run the following command to open the file in an editor:
sudo gedit /usr/local/hadoop/etc/hadoop/hadoop-env.sh
Find the JAVA_HOME variable and change it as follows:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
The modified hadoop-env.sh should now contain the JAVA_HOME line above.
VI. WordCount test
The standalone installation is now complete; verify it by running the WordCount example that ships with Hadoop.
Create an input folder under /usr/local/hadoop:
mkdir input
Copy README.txt into input:
cp README.txt input
Run WordCount:
bin/hadoop jar share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.4.0-sources.jar org.apache.hadoop.examples.WordCount input output
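Note that the command above points at the sources jar. If it fails with a class-not-found error, the compiled examples jar shipped with the 2.4.0 binary distribution can be used instead (path assumed):
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar wordcount input output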
After the job finishes, run cat output/* to view the word-count results.