Showing posts from August, 2011

Using MySQL as a Hive backend database

Hive lets us use a SQL-like language (HiveQL) to analyse large datasets with ad-hoc queries, and comes as a service on top of HDFS. It is easy to use, and most SQL programmers can instantly write queries. The drawback of the default installation is the embedded Derby DB, which runs locally on a single node; because of that, Hive is not really multiuser-capable. To use Hive with more than one user you have to set up a backend database. The database will hold all metainformation regarding your tables, partitions, splits and rows, so it should be safe (maybe replicated) or an HA installation. I use 2 MySQL servers in an ESX cluster environment with binary logs enabled (active/standby). Set up a server and install mysql-server version 5.1 or higher. To be absolutely safe you can set up a MySQL Cluster ;) Let us configure the MySQL database:

# cat /etc/my.cnf
[mysqld_safe]
socket      = /var/lib/mysql/mysql.sock
[mysqld]
user        = mysql
pid-file ...
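The excerpt cuts off above, but the remaining steps follow the usual pattern: create a metastore database and user in MySQL, then point Hive at it in hive-site.xml. A minimal sketch; the database name metastore, user hive, password hivepassword and host metastore-host are all hypothetical placeholders:

-- on the MySQL server: create the metastore database and a Hive user
CREATE DATABASE metastore;
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%' IDENTIFIED BY 'hivepassword';
FLUSH PRIVILEGES;

<!-- hive-site.xml: point the Hive metastore at MySQL instead of Derby -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-host:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepassword</value>
</property>

The MySQL JDBC driver (mysql-connector-java) also has to be on Hive's classpath, e.g. dropped into $HIVE_HOME/lib.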

Secondary namenode data loss

Yes, that happens if you don't configure your installation well. I got some mails from our customers regarding that problem. The secondary namenode's hadoop.tmp.dir has to be redefined in core-site.xml to a directory outside of /tmp, because most Linux servers clean up /tmp when the server reboots. That causes a loss of the last edit logs and fsimage, so the namenode cannot be replayed after a server crash. Simply add a new property to core-site.xml:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/path/for/node/${user.name}</value>
</property>

Restart the secondary namenode and you'll be safe. You should do the same in your HBase configuration (hbase-site.xml):

<property>
  <name>hbase.tmp.dir</name>
  <value>/path/for/node/${user.name}</value>
</property>
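To apply the change, restart the secondary namenode daemon. A short sketch, assuming a stock Hadoop 0.20-style layout with $HADOOP_HOME set; the verification path reflects that fs.checkpoint.dir defaults to ${hadoop.tmp.dir}/dfs/namesecondary:

# restart the secondary namenode so it picks up the new hadoop.tmp.dir
$HADOOP_HOME/bin/hadoop-daemon.sh stop secondarynamenode
$HADOOP_HOME/bin/hadoop-daemon.sh start secondarynamenode

# verify the checkpoint data now lands outside /tmp
ls /path/for/node/$USER/dfs/namesecondary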