Listen:
For various reasons it can be a good idea to make an HDFS filesystem available across the network as an exported share. Here I describe a working scenario with Linux and Hadoop, using only tools both have on board.
I used FUSE and libhdfs to mount an HDFS filesystem. Change namenode.local and <PORT> to fit your environment.
Install:
yum install hadoop-0.20-fuse.x86_64 hadoop-0.20-libhdfs.x86_64
Create a mountpoint:
mkdir /hdfs-mount
Mount your HDFS (testing):
hadoop-fuse-dfs dfs://namenode.local:<PORT> /hdfs-mount -d
You should see output like this:
INFO fuse_options.c:162 Adding FUSE arg /hdfs-mount
INFO fuse_options.c:110 Ignoring option -d
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.10
flags=0x0000000b
max_readahead=0x00020000
INFO fuse_init.c:101 Mounting namenode.local:<PORT>
INIT: 7.8
flags=0x00000001
max_readahead=0x00020000
max_write=0x00020000
unique: 1, error: 0 (Success), outsize: 40
Hit Ctrl-C after you see "Success".
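Before you set up the permanent mount, unmount the test mount again; as root, a plain umount should do:
umount /hdfs-mount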
Make the mount available at boot time:
echo "hadoop-fuse-dfs#dfs://namenode.local:<PORT> /hdfs-mount fuse usetrash,rw 0 0" >> /etc/fstab
Test:
#> mount -a
#> mount
[..]
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
fuse on /hdfs-mount type fuse (rw,nosuid,nodev,allow_other,default_permissions)
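A quick sanity check never hurts; the path below assumes your HDFS already has a /user directory, as most installations do:
#> ls -l /hdfs-mount/user
#> df -h /hdfs-mount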
To tune the memory for each JVM process, take a look into /etc/default/hadoop-0.20-fuse and adjust the settings there.
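As a sketch, capping the heap of the libhdfs JVM could look like the line below; LIBHDFS_OPTS is the variable I would expect in that file, but verify the name in your copy before relying on it:
export LIBHDFS_OPTS="-Xmx128m"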
Export via NFS (insecure):
First we have to decide which user to use; I assume the user hdfs here. Check it with "id hdfs":
uid=104(hdfs) gid=105(hdfs) groups=105(hdfs),104(hadoop) context=root:staff_r:staff_t:SystemLow-SystemHigh
Create an exports-file:
cat /etc/exports
/hdfs-mount/user (fsid=111,rw,wdelay,anonuid=104,anongid=105,sync,insecure,no_subtree_check,no_root_squash)
Explanation: read-write, fsid=an unused ID (see man 5 exports), write delay, anonymous access mapped to the hdfs user/group, synchronous writes.
Exporting only the user directory from HDFS protects you from unwanted changes in system-relevant directories (mapred, for example).
Restart your NFS Server (service nfs restart).
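Alternatively, you can re-read the exports without a full restart and verify the result:
#> exportfs -ra
#> showmount -e localhost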
Now you can use your HDFS as a "local" filesystem, which makes some tasks easier. Note that the accessing user is mapped to the local user, so using root is a bad idea.
Mount the exported NFS share on your machine and simply create or copy your job definitions or files there, for example:
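A client-side mount could look like this; nfs-server.local, /mnt/hdfs and the copied file are placeholders for your environment:
#> mkdir /mnt/hdfs
#> mount -t nfs nfs-server.local:/hdfs-mount/user /mnt/hdfs
#> cp jobdefinition.xml /mnt/hdfs/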
PS: this works only with kernel 2.6.27 or later.
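You can check the running kernel with:
#> uname -r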
What kind of throughput do you observe with this setup?
Does rsync work?
Does NFS reorder writes?
Oops, I only saw your post just now, sorry Ted. Of course I agree with you, performance is another matter. It's only a PoC and my private playground.
I tested with scp:
scp /tmp/10GB hdfs@dn-node:/123/hdfs/user/
10GB 100% 10GB 45.7MB/s 03:44
- alex
I have followed your steps and it works fine on CentOS (although there are still some issues). But when I export the mount point to Windows 7 via NFS, some problems arise. I can mount the NFS share on Win7, but I can't see anything in the directory. Can Win7 access HDFS via NFS? Or should I use Samba 3?
Did you install the Unix Support Tools? In my experience, I use Samba 3/4 for that, since it also gives us xattr as well as Kerberos support.
Thanks for the reply! Do you mean SFU or Cygwin or something else? I have no idea about the Unix Support Tools. Have you ever accessed HDFS via NFS on Windows?