Apache Flume 1.2.x and HBase

The newest (and first) HBase sink was committed into trunk one week ago and was the topic of my talk at the HBase workshop @ Berlin Buzzwords. The slides are available on my Slideshare channel.

Let me explain how it works and how you get an Apache Flume - HBase flow running. First, you have to check out trunk and build the project (you need git and Maven installed on your system):

git clone git://git.apache.org/flume.git && cd flume && git checkout trunk && mvn package -DskipTests && cd flume-ng-dist/target

Within trunk, the HBase sink is available in the sinks directory (ls -la flume-ng-sinks/flume-ng-hbase-sink/src/main/java/org/apache/flume/sink/hbase/).

Please note a few particulars:

- At the moment the sink only controls HBase flush(), transactions, and rollback.
- Apache Flume reads the $CLASSPATH variable and uses the first available hbase-site.xml. If you use different versions of HBase on your system, please keep that in mind.
- The HBase table, columns, and column family have to be created beforehand, as shown below.

That's all.
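
For example, the table and column family used in the configuration below can be created in the HBase shell (test3 and testing are simply the names from the example config):

hbase shell
create 'test3', 'testing'

A quick scan 'test3' in the same shell later on is an easy way to verify that events are arriving.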

Using the HBase sink is pretty simple; a valid configuration could look like this:

host1.sources = src1
host1.sinks = sink1 
host1.channels = ch1 
host1.sources.src1.type = seq 
host1.sources.src1.port = 25001
host1.sources.src1.bind = localhost
host1.sources.src1.channels = ch1
host1.sinks.sink1.type = org.apache.flume.sink.hbase.HBaseSink 
host1.sinks.sink1.channel = ch1
host1.sinks.sink1.table = test3
host1.sinks.sink1.columnFamily = testing
host1.sinks.sink1.column = foo
host1.sinks.sink1.serializer = org.apache.flume.sink.hbase.SimpleHbaseEventSerializer
host1.sinks.sink1.serializer.payloadColumn = pcol
host1.sinks.sink1.serializer.incrementColumn = icol 
host1.channels.ch1.type=memory

In this example we start a seq source on localhost with a listening port, point sink1 at the HBase sink class, and define the event serializer. Why? HBase needs the data in its own format, so we have to transform the input into an HBase-compliant format first. With the SimpleHbaseEventSerializer, payloadColumn names the column the event body is written to, and incrementColumn names a counter column that is incremented for every event. Apache Flume's HBase sink uses the synchronous / blocking client; asynchronous support will follow (FLUME-1252).
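
Once the configuration is saved, the agent can be started with the flume-ng script from the distribution. A minimal invocation might look like this (the file name hbase-agent.conf is just an assumption; the agent name has to match the host1 prefix used in the configuration):

bin/flume-ng agent --conf conf --conf-file conf/hbase-agent.conf --name host1 -Dflume.root.logger=INFO,console

After a few events, the payloads should show up in the pcol column and the counter in icol should increase.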

Comments

  1. Hi!
    Thanks for the information. I tried that and got the following result: http://pastebin.com/0YZBw8YL . Flume and ZK don't seem to communicate with each other :(
    Do you have an idea?

  2. Anonymous, 13 June 2012

    Hi,

    I guess that didn't work on a SASL ZooKeeper. Hmm, are the certificates readable? If yes, please file a JIRA about it.
    http://hbase.apache.org/configuration.html#zk.sasl.auth

    Thanks,
    Alex

  3. Sorry, but I don't use SASL ZK. :/

    Replies
    1. Anonymous, 14 June 2012

      The pastebin wasn't clear about it, so I jumped the gun. The NIO exception _mostly_ means a heavily used ZooKeeper cluster. Are HBase and Flume installed on the same host? I would suspect a problem there. If the error persists, please write a mail to the mailing list.

    2. I use HBase on 8 nodes, but it's just for benchmarking, so my cluster is never really used... In addition, ZooKeeper is deployed on 3 nodes, and I have the same problem on all of them.
      Which mailing list do you want me to notify?
      Thanks for your work and your help :)

    3. That works now! Thanks for your helpful information.
      Your work is fantastic :)

  4. Thank you so much, alo, for the wonderful work. Using this writeup I was able to get the hbase-sink working, but being a newbie I was left with some questions. Could you please tell me something about the following 2 lines:

    host1.sinks.sink1.serializer.payloadColumn = pcol
    host1.sinks.sink1.serializer.incrementColumn = icol

    Many thanks

