Listen:
Yes, that happens if your installation is not configured properly. I have received a few mails from our customers about exactly this problem.
The secondary namenode's hadoop.tmp.dir has to be redefined in core-site.xml to a directory outside of /tmp, because most Linux servers clean up /tmp on reboot. That wipes out the latest edit logs and the fsimage, so the namenode metadata cannot be replayed after a server crash. Simply add a new property to core-site.xml:
<property>
<name>hadoop.tmp.dir</name>
<value>/path/for/node/${user.name}</value>
</property>
Restart the secondary namenode (see the sketch below) and you'll be safe. You should do the same in your HBase configuration (hbase-site.xml):
<property>
<name>hbase.tmp.dir</name>
<value>/path/for/node/${user.name}</value>
</property>
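For the restart itself, a minimal sketch, assuming a Hadoop 1.x-style installation where the hadoop-daemon.sh script ships under bin/ and HADOOP_HOME points at your install directory:

# Stop and start the secondary namenode so it picks up the new hadoop.tmp.dir
$HADOOP_HOME/bin/hadoop-daemon.sh stop secondarynamenode
$HADOOP_HOME/bin/hadoop-daemon.sh start secondarynamenode

After the restart, the checkpoint data should appear under the new directory instead of /tmp.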