
Sqoop and Microsoft SQL Server

From Microsoft's TechNet:
With the SQL Server-Hadoop Connector [1], you can import data from:
Tables in SQL Server to delimited text files on HDFS
Tables in SQL Server to SequenceFiles on HDFS
Tables in SQL Server to tables in Hive*
Queries executed in SQL Server to delimited text files on HDFS
Queries executed in SQL Server to SequenceFiles on HDFS
Queries executed in SQL Server to tables in Hive*
 
With the SQL Server-Hadoop Connector, you can export data from:
Delimited text files on HDFS to SQL Server
SequenceFiles on HDFS to SQL Server
Hive tables* to tables in SQL Server
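For example, the Hive import from the list above could look like this — a minimal sketch with placeholder connection details and table name, using Sqoop's standard --hive-import flag:

# sqoop import --connect 'jdbc:sqlserver://<IP>;username=dbuser;password=dbpasswd;database=<DB>' --table <table> --hive-import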
But before it works, you have to set up the connector. First get the Microsoft JDBC driver [2]: download it, unpack the archive, and copy the driver (sqljdbc4.jar) into the $SQOOP_HOME/lib/ directory.
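A minimal sketch, assuming the driver archive was unpacked into /tmp/sqljdbc (the path is an assumption; adjust it to your download location):

# cp /tmp/sqljdbc/sqljdbc4.jar $SQOOP_HOME/lib/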
Now download the connector (.tar.gz) from [1], unpack it, and point MSSQL_CONNECTOR_HOME at that directory. Assuming you unpacked it into /usr/sqoop/connector/mssql, do:
# export MSSQL_CONNECTOR_HOME=/usr/sqoop/connector/mssql 

Verify the export:
# echo $MSSQL_CONNECTOR_HOME
/usr/sqoop/connector/mssql


Then run install.sh in the unpacked directory:
# sh ./install.sh

Tip: create a profile.d file:
# cat /etc/profile.d/mssql.sh
export MSSQL_CONNECTOR_HOME=/usr/sqoop/connector/mssql
and make it executable:
# chmod 755 /etc/profile.d/mssql.sh

An example:
Sqoop works well between MS SQL Server and Hadoop. For a larger PoC, just split the data into 3 map tasks:
# sqoop import --connect 'jdbc:sqlserver://<IP>;username=dbuser;password=dbpasswd;database=<DB>' --table <table> --target-dir /path/to/hdfs/dir --split-by <KEY> -m 3

=> the transfer of 1.3 GB of data took around one minute. After processing, just send the results back:
# sqoop export --connect 'jdbc:sqlserver://<IP>;username=dbuser;password=dbpasswd;database=<DB>' --table=<table> --direct --export-dir /path/from/hdfs/dir

You can use the same operations you know from Oracle or MySQL Sqoop scripts.
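Query-based imports from the capability list above follow the same pattern. A minimal sketch with placeholder columns, table, and split key — note that Sqoop requires the literal $CONDITIONS token in the WHERE clause of a free-form query, plus an explicit --target-dir:

# sqoop import --connect 'jdbc:sqlserver://<IP>;username=dbuser;password=dbpasswd;database=<DB>' --query 'SELECT col1, col2 FROM <table> WHERE $CONDITIONS' --split-by <KEY> --target-dir /path/to/hdfs/dir -m 3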

[1] http://www.microsoft.com/download/en/details.aspx?id=27584
[2] http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=21599

Comments

  1. Hi,
    I am trying to import from SQL Server into HDFS, but I am getting errors like:

    hadoop@ubuntu:~/sqoop-1.1.0/bin$ ./sqoop import --connect 'jdbc:sqlserver://192.168.230.1;username=xxx;password=xxxxx;database=HadoopTest' --table PersonInfo --target-dir /home/hadoop/hadoop-0.21.0/

    11/12/10 12:13:20 ERROR tool.BaseSqoopTool: Got error creating database manager: java.io.IOException: No manager for connect string: jdbc:sqlserver://192.168.230.1;username=xxx;password=xxxxx;database=HadoopTest
    at com.cloudera.sqoop.ConnFactory.getManager(ConnFactory.java:119)
    at com.cloudera.sqoop.tool.BaseSqoopTool.init(BaseSqoopTool.java:178)
    at com.cloudera.sqoop.tool.ImportTool.init(ImportTool.java:81)
    at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:411)
    at com.cloudera.sqoop.Sqoop.run(Sqoop.java:134)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:83)
    at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:170)
    at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:196)
    at com.cloudera.sqoop.Sqoop.main(Sqoop.java:205)

    What is the problem I am missing?
    My Hadoop version : hadoop-0.21.0
    Sqoop version : sqoop-1.1.0

    Please suggest a solution.
    Thanks.

  2. Is the driver installed, and can Sqoop find it? Did install.sh run without an error?

  3. I followed all the steps but can't get Sqoop running. I am getting this error. Can you please tell me what is wrong?

    [hduser@master bin]$ ./sqoop-help
    Warning: $HADOOP_HOME is deprecated.

    Exception in thread "main" java.lang.NoClassDefFoundError: com/cloudera/sqoop/Sqoop
    Caused by: java.lang.ClassNotFoundException: com.cloudera.sqoop.Sqoop
    at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    Could not find the main class: com.cloudera.sqoop.Sqoop. Program will exit.

  4. Anonymous, 18 July 2012:

    How did you install Sqoop? What does sqoop -version show?

  5. Untarred Sqoop to /usr/local/sqoop,
    downloaded the sqoop-sqlserver connector and copied it to the connectors folder,
    and ran install.sh.
    Copied hadoop-core-1.0.3.jar into the Sqoop lib directory.
    Copied sqoop-sqlserver-1.0.jar and mysql-connector-java-5.1.21-bin.jar into the Sqoop lib directory.
    Set the environment variables:

    MSSQL_CONNECTOR_HOME=/usr/local/sqoop/sqoop-sqlserver-1.0/
    HADOOP_HOME=/usr/local/hadoop
    SQOOP_CONF_DIR=/usr/local/sqoop/conf
    SQOOP_HOME=/usr/local/sqoop
    HBASE_HOME=/usr/local/hbase-0.92.1/
    HADOOP_CLASSPATH=:/usr/local/sqoop/sqoop-1.4.1-incubating.jar

  6. Laura,

    I am new to Hadoop and Sqoop.
    Can you tell me the steps to install Hadoop and Sqoop on Ubuntu 12.04? I installed Hadoop 1.0.3 but am unable to install Sqoop.

  7. When I export to SQL Server, it causes the exception below:
    SQLServerException: Incorrect syntax near ','

    What's wrong?

  8. Anonymous, 25 July 2012:

    @Andy: Follow the instructions here:
    http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

    After you've got them running, download the latest sqoop release from sqoop.apache.org

  9. How can we export to MSSQL using Sqoop with select statements?

