FlumeNG - the evolution

Flume, the decentralized log collector, is making great progress. Since the project reached the Apache Incubator, development on the next generation (NG) has picked up significantly.

Now, what's the advantage of a new Flume? Simply the architecture. FlumeNG no longer needs ZooKeeper and has no master/client concept and no nodes/collectors anymore. You simply run a bunch of agents, connect them to each other, and create your own flow. Flume can now run with a 20 MB heap size, uses an in-memory channel for flows, supports multiple flows and different sinks on one channel, and will support a few more sinks than Flume <= 1.0.0. But flumeNG will no longer support tail and tailDir; instead a general exec source is available, which gives the user the freedom to run any command.
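
As a replacement for the old tail source, the exec source runs an arbitrary command; here a minimal sketch (agent and channel names are placeholders, the config syntax is explained in the Configuration section below):

agent.sources = tail
agent.sources.tail.type = exec
agent.sources.tail.command = tail -F /var/log/syslog
agent.sources.tail.channels = MemoryChannel-1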

Requirements
On the build host we need a JDK 1.6.x, Maven 3.x, and git or svn.

Installation
To check out and build the code we use git and maven in a simple one-liner:
git clone git://git.apache.org/flume.git; cd flume; git checkout trunk && mvn clean && mvn package -DskipTests

After a few seconds the build should be done:

[INFO] Apache Flume ...................................... SUCCESS [7.276s]
[INFO] Flume NG Core ..................................... SUCCESS [3.043s]
[INFO] Flume NG Sinks .................................... SUCCESS [0.275s]
[INFO] Flume NG HDFS Sink ................................ SUCCESS [0.892s]
[INFO] Flume NG IRC Sink ................................. SUCCESS [0.515s]
[INFO] Flume NG Channels ................................. SUCCESS [0.214s]
[INFO] Flume NG JDBC channel ............................. SUCCESS [0.802s]
[INFO] Flume NG Agent .................................... SUCCESS [0.893s]
[INFO] Flume NG file-based channel ....................... SUCCESS [0.516s]
[INFO] Flume NG distribution ............................. SUCCESS [16.602s]
[INFO] Flume legacy Sources .............................. SUCCESS [0.143s]
[INFO] Flume legacy Thrift Source ........................ SUCCESS [0.599s]
[INFO] Flume legacy Avro source .......................... SUCCESS [0.458s]
[INFO] Flume NG Clients .................................. SUCCESS [0.133s]
[INFO] Flume NG Log4j Appender ........................... SUCCESS [0.385s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS

Now we find the build and sources in flume-ng-dist/target. Copy the desired distribution to the host you want to play with, unpack it, and start using Flume.
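
For example (a sketch; the exact tarball name depends on the build version, and user@labhost is a placeholder for your target host):

cd flume-ng-dist/target
scp flume-ng-dist-*.tar.gz user@labhost:
ssh user@labhost 'tar -xzf flume-ng-dist-*.tar.gz'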

What is a flow?
A flow in flumeNG describes the whole transport from a source to a sink. A sink can in turn feed a new source, so different streams can be collected into one sink. The process Flume starts is called an agent. A setup could look like this example:

source -             -> source => channel => sink
        \           /       
source - => channel => sink 
        /           \
source -             -> channel => source => channel => sink                  

Configuration
Before we can start to play with flumeNG, we need to configure it. The config is logical, but hard to grasp the first time. The scheme is always <identifier>.type.subtype.parameter = config, where <identifier> is the name of the agent we later call at startup. As the picture above shows, we can build in all the complexity we want. The config has to reflect that complexity, so we have to define a source, channel, and sink for every entry and end point.
Let me explain with an example (syslog-agent.cnf):

syslog-agent.sources = Syslog
syslog-agent.channels = MemoryChannel-1
syslog-agent.sinks = Console

syslog-agent.sources.Syslog.type = syslogTcp
syslog-agent.sources.Syslog.port = 5140

syslog-agent.sources.Syslog.channels = MemoryChannel-1
syslog-agent.sinks.Console.channel = MemoryChannel-1

syslog-agent.sinks.Console.type = logger
syslog-agent.channels.MemoryChannel-1.type = memory

In the configuration example above we define a simple syslog flow: the start point is the source Syslog, the endpoint the sink Console, and the transport channel MemoryChannel-1. We can name each component as we like; those names are the main identifiers that tie a valid flow together.

The example flow will listen on the configured port and send all events to the logger (logger is an internal sink for debugging which writes captured events to stdout). The same config with HDFS as a sink would look like this:

syslog-agent.sources = Syslog
syslog-agent.channels = MemoryChannel-1
syslog-agent.sinks = HDFS-LAB

syslog-agent.sources.Syslog.type = syslogTcp
syslog-agent.sources.Syslog.port = 5140

syslog-agent.sources.Syslog.channels = MemoryChannel-1
syslog-agent.sinks.HDFS-LAB.channel = MemoryChannel-1

syslog-agent.sinks.HDFS-LAB.type = hdfs

syslog-agent.sinks.HDFS-LAB.hdfs.path = hdfs://NN.URI:PORT/flumetest/%{host}
syslog-agent.sinks.HDFS-LAB.hdfs.filePrefix = syslogfiles
syslog-agent.sinks.HDFS-LAB.hdfs.rollInterval = 60
syslog-agent.sinks.HDFS-LAB.hdfs.fileType = SequenceFile
syslog-agent.channels.MemoryChannel-1.type = memory
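
Once events arrive, the sink writes SequenceFiles below the configured path; assuming a Hadoop client that points at the same namenode, a quick check could be:

hadoop fs -ls /flumetest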

At the moment Flume supports avro, syslog, and exec sources, as well as hdfs and logger sinks.
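
To chain agents as sketched in the flow picture above, the avro sink of one agent can point at the avro source of the next. A minimal sketch showing only the relevant lines (host name, port, and agent names are assumptions):

# agent "forwarder" ships its events onwards via avro
forwarder.sinks.AvroOut.type = avro
forwarder.sinks.AvroOut.hostname = collector.example.com
forwarder.sinks.AvroOut.port = 4141
forwarder.sinks.AvroOut.channel = MemoryChannel-1

# agent "collector" receives them on the other side
collector.sources.AvroIn.type = avro
collector.sources.AvroIn.bind = 0.0.0.0
collector.sources.AvroIn.port = 4141
collector.sources.AvroIn.channels = MemoryChannel-1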

Start the flow
Flume-ng starts a single flow per process. That's done with:
bin/flume-ng agent -n YOUR_IDENTIFIER -f YOUR_CONFIGFILE 
e.g.:
bin/flume-ng agent -n syslog-agent -f conf/syslog-agent.cnf
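
Once the agent is up, the syslog flow from above can be tested by pushing a line to the configured port, e.g. with netcat (assuming the agent runs on the local host):

echo '<13>Hello flumeNG' | nc localhost 5140

The event should then show up on stdout via the logger sink.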

Links: flumeNG Wiki


Comments

  1. Thank you so much Alo for your wonderful writeup. It was of great help. After reading it I tried the HDFS sink but I am getting an error like:

    12/06/10 06:36:16 ERROR hdfs.HDFSEventSink: process failed
    java.lang.NoSuchMethodError: org.apache.hadoop.io.SequenceFile$Writer.syncFs()V
    at org.apache.flume.sink.hdfs.HDFSSequenceFile.sync(HDFSSequenceFile.java:77)
    at org.apache.flume.sink.hdfs.BucketWriter.doFlush(BucketWriter.java:276)
    at org.apache.flume.sink.hdfs.BucketWriter.access$500(BucketWriter.java:46)
    at org.apache.flume.sink.hdfs.BucketWriter$4.run(BucketWriter.java:265)

    Could you please help me out with this? Many thanks.

    Replies
    1. Anonymous, 13 June 2012

      Hi,

      It looks like you have mixed different versions of Hadoop, could that be? Please check for an existing hadoop.jar in Flume's lib directory and delete it.

      cheers,
      Alex

    2. Thank you for the valuable response alo. Sorry, I was a bit out of touch for some time because of other commitments. I got it working, but I am not able to use any of the escape sequences. I have a small config file that looks like this:
      agent1.sources = tail
      agent1.channels = MemoryChannel-2
      agent1.sinks = HDFS

      agent1.sources.tail.type = exec
      agent1.sources.tail.command = tail -F /var/log/apache2/access.log.1
      agent1.sources.tail.channels = MemoryChannel-2

      agent1.sinks.HDFS.channel = MemoryChannel-2
      agent1.sinks.HDFS.type = hdfs
      agent1.sinks.HDFS.hdfs.path = hdfs://localhost:9000/flume/'%{host}'
      agent1.sinks.HDFS.hdfs.file.Type = DataStream

      agent1.channels.MemoryChannel-2.type = memory

      But instead of creating a directory with the hostname, it is creating a directory with name - ''

      Am I missing something here? Many thanks.

    3. Anonymous, 21 June 2012

      Hi Mohammad,

      please use:
      agent.sinks.hdfsSink.hdfs.filePrefix = %{host}

      instead of adding the escape sequence to the hdfs path (I know, in Flume 0.9x that was the correct way, but now we've moved them out).

      cheers,
      Alex

  2. Please let me know how to add custom serialization to support a custom appender (like in log4j).

    Any suggestions would be appreciated.


