Listen:
Facebook's Scribe was the first available service for managing a huge amount of logfiles. We're not talking about 2 GB per day or so, I mean more than 1 TB per day. Compressed.
Now there is a new Apache Incubator project: Flume [1]. It is a pretty nice piece of software and I love it. It is reliable, fast, safe and has no proprietary stack inside. And you can build really cool logging setups with it.
If you use Cloudera's Distribution you get Flume easily with a "yum install flume-master" on the master and a "yum install flume-node" on each node. Check [2] for more information.
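A minimal install and startup could look like this (package and init script names as shipped with CDH3; they may differ in other versions):

on the master:
# yum install flume-master
# /etc/init.d/flume-master start

on every node:
# yum install flume-node
# /etc/init.d/flume-node start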
Flume offers a lot of sources to get logfiles from (a quick sketch follows after this list):
- from a text file
- as a tail (one or more files)
- syslog (UDP or TCP)
- synthetic sources
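As a quick sketch of how a source gets wired to a node (the node name and logfile path here are made up, and the shell syntax is the Flume 0.9.x one, so adjust it to your version):

# flume shell -c flume-master.local
exec config webnode1.local 'tail("/var/log/httpd/access_log")' 'autoE2EChain'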
Flume is designed for large-scale logfile distribution. Let's assume we have a 100-node webcluster with incoming traffic of around 3 GB/s. The farm produces 700 MB of raw weblogs per minute.
By processing the logs with Flume we can compress the files, sort them into the buckets we need and deliver them quickly into our HDFS. Here is a working example:
cat /flume-zk/running.cfg
collector1.local : autoCollectorSource | collectorSink( "hdfs://namenode.local:9000/user/flume/weblogs/%Y-%m-%d/%H00/%M/", "%{host}-" );
collector2.local : autoCollectorSource | collectorSink( "hdfs://namenode.local:9000/user/flume/weblogs/%Y-%m-%d/%H00/%M/", "%{host}-" );
collector3.local : autoCollectorSource | collectorSink( "hdfs://namenode.local:9000/user/flume/weblogs/%Y-%m-%d/%H00/%M/", "%{host}-" );
collector4.local : autoCollectorSource | collectorSink( "hdfs://namenode.local:9000/user/flume/weblogs/%Y-%m-%d/%H00/%M/", "%{host}-" );
agent1.local : syslogTcp( "19800" ) | autoE2EChain;
agent2.local : syslogTcp( "19800" ) | autoE2EChain;
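For reference, the same mapping can also be pushed from the flume shell instead of editing the spec by hand; a sketch for one agent and one collector (again 0.9.x syntax, shortened):

# flume shell -c flume-master.local
exec config agent1.local 'syslogTcp("19800")' 'autoE2EChain'
exec config collector1.local 'autoCollectorSource' 'collectorSink("hdfs://namenode.local:9000/user/flume/weblogs/%Y-%m-%d/%H00/%M/","%{host}-")'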
The autoE2EChain describes a failover chain: if one of the collectors doesn't respond, it is moved to the end of the chain. You can see the logical mapping in the web interface (http://flume-master:35871/flumemaster.jsp).
Here we split the data into one-minute buckets and append the host we got the logs from as an identifier, which makes debugging easier. The webfarm logs via a load balancer to the agents (syslog input, port 19800). 19800 is just an example of a free, unprivileged port.
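On the webservers the forwarding itself can be a single rsyslog line (assuming rsyslog is used; the balancer hostname is made up, @@ means forward via TCP):

# /etc/rsyslog.d/flume.conf on every webserver
*.* @@loadbalancer.local:19800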
Let us check one of the agents:
# cd /tmp/flume/agent/agent1.local/
# ls
done logged sending sent writing
# ls -lah writing/
total 418M
drwxr-xr-x 2 flume flume 4.0K Sep 28 16:25 .
drwxr-xr-x 7 flume flume 4.0K Sep 28 10:21 ..
-rw-r--r-- 1 flume flume 418M Sep 28 16:25 log.00000019.20111011-162503316+0200.10204828793997118.seq
That is the logfile we are currently receiving from our nodes. After writing (the roll interval can be defined in flume.conf), the log is sent to the collectors, so let's connect to collector1:
# tail -f -n 10 /var/log/flume/*.log
<del>: Creating org.apache.hadoop.io.compress.BZip2Codec@3c9d9efb compressed HDFS file: hdfs://namenode.local:9000/user/flume/weblogs/2011-09-28/1600/25/agent1.local-log.00000019.20111011-162503316+0200.10204828793997118.seq.bzip2
<del>: Finishing checksum group called 'log.00000019.20111011-162503316+0200.10204828793997118.seq'
<del>: Checksum succeeded 1325c55440e
<del>: moved from partial to complete log.00000019.20111011-162503316+0200.10204828793997118.seq
<del>: Closing hdfs://namenode.local:9000/user/flume/weblogs/2011-09-28/1600/25/agent1.local-log.00000019.20111011-162503316+0200.10204828793997118.seq
<del>: Closing HDFS file: hdfs://namenode.local:9000/user/flume/weblogs/2011-09-28/1600/25/agent1.local-log.log.00000019.20111011-162503316+0200.10204828793997118.seq.bzip2
As you can see, the collector receives the events from the agent, opens a sink into HDFS, streams the file in and closes the sink after the configured time. Pretty nice! The logging mechanism works perfectly: the files are split into one-minute pieces, compressed as bzip2 and written into our HDFS. Remember to always use bzip2 as the compression codec, because it is splittable (it keeps block markers), which matters when you process the data later with MapReduce.
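The roll interval and the codec are set in flume-site.xml; a sketch (property names as in Flume 0.9.x, double-check them against the flume-conf.xml that ships with your version):

<property>
  <name>flume.collector.roll.millis</name>
  <value>60000</value> <!-- close and roll the HDFS file every minute -->
</property>
<property>
  <name>flume.collector.dfs.compress.codec</name>
  <value>BZip2Codec</value> <!-- write splittable .bzip2 files -->
</property>

To verify the one-minute buckets you can simply list them:

# hadoop fs -ls hdfs://namenode.local:9000/user/flume/weblogs/2011-09-28/1600/25/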
[1] https://cwiki.apache.org/FLUME/
[2] https://ccp.cloudera.com/display/CDHDOC/Flume+Installation