Some Problems Encountered While Testing Hadoop

Notes arising from some work done after installing Hadoop a while back.

Problem: the Hadoop namenode cannot be formatted

Description: running hadoop namenode -format on the namenode produces the following output:

12/05/28 15:34:25 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = u33/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /home/hadoop/hadoop20/name ? (Y or N) y
Format aborted in /home/hadoop/hadoop20/name
12/05/28 15:34:29 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at u33/127.0.1.1
************************************************************/

Cause: the directory configured as dfs.name.dir (in hdfs-site.xml) was not deleted before formatting. When that directory already exists, Hadoop asks for confirmation to guard against accidentally reformatting a filesystem that has already been formatted, and in 0.20 the prompt accepts only a capital 'Y'; answering with the lowercase 'y' shown above is what aborted the format. (Likewise, delete the datanodes' data directories before reformatting.)
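The recipe above can be sketched as follows. The paths are the ones from this cluster's configuration (dfs.name.dir under /home/hadoop/hadoop20); substitute your own directories, and note this must run against a live cluster:

```shell
# Stop the cluster before touching any storage directories.
stop-all.sh

# Remove the old namenode metadata directory (dfs.name.dir)...
rm -rf /home/hadoop/hadoop20/name

# ...and the data directory (dfs.data.dir) on every datanode,
# e.g.: ssh u32 'rm -rf /home/hadoop/hadoop20/data'

# With the old directory gone, format runs without a confirmation prompt.
hadoop namenode -format
```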


Problem: Hadoop fails to start the datanode

Description: after starting the cluster with ./start-all.sh, only the two tasktrackers came up; the datanode did not start. Checking the corresponding datanode's log file:

/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = u32/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2012-05-29 17:09:40,186 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.UnknownHostException: Invalid hostname for server: u33
    at org.apache.hadoop.ipc.Server.bind(Server.java:198)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:253)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1026)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:488)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:450)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:191)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

2012-05-29 17:09:40,187 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at u32/127.0.1.1
************************************************************/

Cause: the log shows that the datanode cannot resolve the namenode's hostname. Adding a mapping from the namenode's hostname to its IP address in /etc/hosts fixes this.
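For example, the datanode's /etc/hosts would need entries like the following (the addresses and hostnames are the ones appearing in this post's logs; adjust to your own cluster):

```
192.168.1.33    u33    # namenode
192.168.1.32    u32    # this datanode
```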


Problem: the datanode cannot connect to the namenode

Description: running the test job "hadoop jar hadoop-0.20.2-examples.jar randomwriter random-data" produces:

12/05/29 19:45:59 INFO ipc.Client: Retrying connect to server: /192.168.1.33:9000. Already tried 0 time(s).
12/05/29 19:46:00 INFO ipc.Client: Retrying connect to server: /192.168.1.33:9000. Already tried 1 time(s).
12/05/29 19:46:01 INFO ipc.Client: Retrying connect to server: /192.168.1.33:9000. Already tried 2 time(s).

Checking the log file:

2012-05-29 19:58:34,752 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = u32/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2012-05-29 19:58:35,932 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.33:9000. Already tried 0 time(s).
2012-05-29 19:58:36,933 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.33:9000. Already tried 1 time(s).
2012-05-29 19:58:37,933 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.33:9000. Already tried 2 time(s).
2012-05-29 19:58:38,934 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.33:9000. Already tried 3 time(s).
2012-05-29 19:58:39,935 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.33:9000. Already tried 4 time(s).
2012-05-29 19:58:40,936 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.33:9000. Already tried 5 time(s).
2012-05-29 19:58:41,937 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.33:9000. Already tried 6 time(s).
2012-05-29 19:58:42,938 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.33:9000. Already tried 7 time(s).
2012-05-29 19:58:43,938 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.33:9000. Already tried 8 time(s).
2012-05-29 19:58:44,939 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.33:9000. Already tried 9 time(s).
2012-05-29 19:58:44,941 INFO org.apache.hadoop.ipc.RPC: Server at /192.168.1.33:9000 not available yet, Zzzzz…

Cause: in /etc/hosts, fill in the IP-to-hostname mappings correctly and remove the extra "127.0.0.1    u33" line (u33 is the namenode's hostname). When the hostname resolves to a loopback address, the namenode binds its RPC port (9000) to 127.0.0.1, so other machines cannot connect to it.
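A quick way to sanity-check the hosts file is to look up what address the namenode's hostname actually resolves to there. The file below is a hypothetical corrected example using this post's addresses:

```shell
# Hypothetical corrected hosts file (u33 = namenode, u32 = datanode).
# The problematic "127.0.0.1 u33" line must NOT be present.
cat > /tmp/hosts.example <<'EOF'
127.0.0.1       localhost
192.168.1.33    u33
192.168.1.32    u32
EOF

# The namenode hostname must map to its LAN address, not a loopback
# address, or the namenode will bind port 9000 to 127.0.0.1.
awk '$2 == "u33" { print $1 }' /tmp/hosts.example
# prints 192.168.1.33
```

The same check against the real /etc/hosts on each node will show whether a stray loopback mapping is still present.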


Problem: Hadoop MapReduce is extremely slow

Description: every test runs very slowly; even a word count over a dozen or so words took more than ten minutes. I/O is also slow, yet disk utilization turns out to be very low.

Cause: unknown; the preliminary suspicion is network congestion.
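One way to test the network-congestion hypothesis is a raw bulk transfer between two of the nodes, outside of Hadoop. A rough sketch, using this cluster's hostnames (run from a datanode to the namenode u33):

```shell
# Create a 100 MB test file of zeros.
dd if=/dev/zero of=/tmp/100mb bs=1M count=100

# Time a copy to the namenode; a healthy 100 Mb/s LAN should
# sustain roughly 10 MB/s, i.e. about 10 seconds for this file.
time scp /tmp/100mb u33:/tmp/
```

If the transfer is far slower than the link speed allows, the network rather than Hadoop itself is the bottleneck.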
