Using Sqoop

Sqoop provides an excellent way to import data in parallel from an existing RDBMS into HDFS. Because the import runs as multiple parallel map tasks, each table arrives as a set of files that together hold an exact copy of its rows. These files can contain text delimited by ',', '|', and so on. After the imported records have been manipulated using MapReduce or Hive, the resulting data set can be exported back to the RDBMS. Imports can be run on demand or as a scheduled batch process (for example, using a cron job); a sketch of both directions follows.
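The following is a minimal sketch of an import and an export, assuming a MySQL database named salesdb with a source table orders and a results table orders_summary; the host name, credentials, delimiter, and HDFS paths are placeholders, not values from this recipe:

  # Import the orders table into HDFS using 4 parallel mappers,
  # writing '|'-delimited text files under the target directory.
  sqoop import \
    --connect jdbc:mysql://dbhost:3306/salesdb \
    --username sqoopuser -P \
    --table orders \
    --fields-terminated-by '|' \
    --num-mappers 4 \
    --target-dir /user/sqoop/orders

  # Export a processed result set from HDFS back into the RDBMS,
  # parsing the input files with the same '|' delimiter.
  sqoop export \
    --connect jdbc:mysql://dbhost:3306/salesdb \
    --username sqoopuser -P \
    --table orders_summary \
    --input-fields-terminated-by '|' \
    --export-dir /user/sqoop/orders_summary

Each mapper writes its own part file under the target directory, which is what makes the import parallel. For a scheduled batch run, the import command can be wrapped in a script and driven from cron; for example, with a hypothetical wrapper script import-orders.sh, a nightly 02:00 crontab entry might look like this:

  0 2 * * * /u/HbaseB/scripts/import-orders.sh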

Getting ready

Prerequisites:

The HBase and Hadoop clusters must be up and running.
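One quick way to check this, assuming you can log in to the relevant nodes and a JDK is installed, is jps, which lists the running Java processes:

  jps
  # expect to see daemons such as NameNode, DataNode, HMaster, and HRegionServer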

You can download the tarball with wget from http://mirrors.gigenet.com/apache/sqoop/1.4.6/sqoop-1.4.6.tar.gz

Untar it to /u/HbaseB using tar -zxvf sqoop-1.4.6.tar.gz

It will create a /u/HbaseB/sqoop-1.4.6 folder.
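Putting the download and extraction steps together as one shell sequence (the mirror URL and the /u/HbaseB directory are the ones used in this recipe):

  cd /u/HbaseB
  wget http://mirrors.gigenet.com/apache/sqoop/1.4.6/sqoop-1.4.6.tar.gz
  tar -zxvf sqoop-1.4.6.tar.gz
  ls /u/HbaseB/sqoop-1.4.6    # confirm the folder was created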

A Sqoop user is created ...
