The first step when debugging issues in a running Hadoop environment is to take a look at the stack traces. They are easily accessible via jobtracker/stacks and show all running threads in a jstack-like view. There you see what the running processes are doing; as an example I discuss a lab testing scenario, see below.
http://jobtracker:50030/stacks
Process Thread Dump:
43 active threads
Thread 3203101 (IPC Client (47) connection to NAMENODE/IP:9000 from hdfs):
State: TIMED_WAITING
Blocked count: 6
Waited count: 7
Stack:
java.lang.Object.wait(Native Method)
org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:676)
org.apache.hadoop.ipc.Client$Connection.run(Client.java:719)
In this case the RPC connection is in state TIMED_WAITING and shows a blocked count and a waited count. That means the namenode could not answer the RPC requests fast enough. The problem usually lies in the setup, as I often see in production environments.
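A quick way to pull the same information from the shell is a minimal sketch like the following (hostname and port are the defaults from this lab setup, adjust them to your cluster):
curl -s http://jobtracker:50030/stacks > /tmp/jt-stacks.txt
# count threads per state to spot pile-ups of BLOCKED or TIMED_WAITING threads
grep "State:" /tmp/jt-stacks.txt | sort | uniq -c | sort -rn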
For the demonstration I use an ESX cluster with a VM for the namenode. The ESX network abstraction layer isn't performant enough and blocks the requests. It is always a good idea to use physical servers for the core infrastructure services.
Another problem I ran into concerns the Nortel switches in HP BladeCenter chassis: a newer firmware update enables a hidden "dos-filter" setting; disable it. The switch blocks all traffic that looks like a DoS attack. That's a serious bug, and I wonder why such parameters are shipped enabled by default.
Now we take a closer look at the namenode:
With "jps" you can list all running java-processes:
jps
24158 SecondaryNameNode
31684 FlumeMaster
7898 JobTracker
18613 NameNode
16631 Jps
31653 FlumeWatchdog
We check the logs with "tail -f /var/log/hadoop-0.20/*.log | grep -i error". If you are sure that everything is fine there, you should look at the Java threads on the jobtracker. With "top -Hc" you'll see the threads and the command they run:
Mem: 4043792k total, 3916788k used, 127004k free, 352684k buffers
Swap: 5242864k total, 1448628k used, 3794236k free, 653296k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
18448 hdfs 17 0 2345m 102m 12m S 55.1 1.3 0:01.66 /usr/java/jdk1.6.0_23/bin/java -Dproc_jar -Xmx2000m -Dhadoo
18458 hdfs 16 0 2345m 102m 12m S 31.5 1.3 0:00.95 /usr/java/jdk1.6.0_23/bin/java -Dproc_jar -Xmx2000m -Dhadoo
18457 hdfs 15 0 2345m 102m 12m R 30.9 1.3 0:00.93 /usr/java/jdk1.6.0_23/bin/java -Dproc_jar -Xmx2000m -Dhadoo
18356 hdfs 15 0 2510m 544m 11m S 1.3 6.8 0:00.06 jsvc.exec -Dproc_datanode -pidfile /usr/lib/hadoop-0.20/pid
18366 hdfs 18 0 2345m 102m 12m S 0.7 1.3 0:00.02 /usr/java/jdk1.6.0_23/bin/java -Dproc_jar -Xmx2000m -Dhadoo
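To find out what a busy thread from "top -Hc" is actually doing, a common trick (a sketch, not part of the original walkthrough) is to convert the thread ID to hex and look it up in a jstack dump of the owning Java process:
# thread 18457 from the top output above, converted to hex
printf "%x\n" 18457          # prints 4819
# <java-pid> is a placeholder for the PID of the owning Java process
sudo -u hdfs jstack <java-pid> | grep -A 10 "nid=0x4819"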
To check the I/O load during a running job you can use vmstat (procps package):
vmstat -n 2
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 1707244 368124 296528 482624 0 0 0 276 1626 1336 14 17 68 1 0
0 0 1707244 367752 296544 482724 0 0 0 144 1362 716 2 1 97 0 0
0 0 1707244 366884 296564 482728 0 0 8 66 1268 750 2 3 94 1 0
1 0 1707244 366216 296588 482736 0 0 8 260 1269 672 2 4 94 1 0
Here we see a snippet taken while a Hive job was running. The system has no problems with I/O, but the interrupt and context-switch rates look a bit high. That depends on the setup; it is always a bad idea to run all services on a single server instance.
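If the interrupt and context-switch rates need a closer look, sysstat and /proc offer more detail; a small sketch (mpstat ships with sysstat, the interval/count values are arbitrary):
# per-CPU breakdown, including time spent in hard and soft interrupts
mpstat -P ALL 2 5
# raw interrupt counters, useful to spot a single device causing most IRQs
cat /proc/interrupts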
Example solution:
The namenode is going deep into swap, so the first measure should be to add more RAM (this was a test cluster with 5 nodes and a VM as namenode, built to reproduce some timing errors).
Remember, each HDFS block takes about 4 KB of RAM on the namenode, so the memory footprint grows with the number of blocks. The best approach here is to split the services onto separate instances (jobtracker, namenode and secondary namenode should each get their own box).
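If the namenode heap itself becomes the limit as the block count grows, it can be raised in hadoop-env.sh; a minimal sketch (the path and the 4 GB value are assumptions and must fit the physical RAM of the box):
# /etc/hadoop-0.20/conf/hadoop-env.sh
# give the namenode and secondary namenode a larger dedicated heap
export HADOOP_NAMENODE_OPTS="-Xmx4g ${HADOOP_NAMENODE_OPTS}"
export HADOOP_SECONDARYNAMENODE_OPTS="-Xmx4g ${HADOOP_SECONDARYNAMENODE_OPTS}"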
Another effective way is to write a debug script [1] and use the setStatus() and incrCounter() methods on Reporter. Run your debug script in streaming mode with the command-line options "-mapdebug" and "-reducedebug", for example as sketched below.
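A minimal sketch of such a debug script and its invocation, following the pattern from the tutorial in [1] (the streaming jar path, input/output paths and mapper/reducer are placeholders):
#!/bin/sh
# debug.sh - Hadoop calls this for a failed task and passes the task's
# stdout, stderr, syslog and jobconf files as arguments
echo "=== stdout of failed task ===" ; cat "$1"
echo "=== stderr of failed task ===" ; cat "$2"
echo "=== syslog of failed task ===" ; cat "$3"

hadoop jar /usr/lib/hadoop-0.20/contrib/streaming/hadoop-streaming-*.jar \
  -input myInput -output myOutput \
  -mapper /bin/cat -reducer /usr/bin/wc \
  -file debug.sh -mapdebug debug.sh -reducedebug debug.sh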
[1] http://hadoop.apache.org/common/docs/current/mapred_tutorial.html#Debugging