Using Sqoop incremental import as chunk

Hello everyone,
Is it possible to import data chunk-wise through Sqoop incremental import?
Say I have a table with ids 1, 2, 3, ..., N (here N is 100) and I want to import it in chunks, like:
1st import: 1, 2, 3, ..., 20
2nd import: 21, 22, 23, ..., 40
...
last import: 81, 82, 83, ..., 100
I have read about Sqoop jobs with incremental import and I know about the --last-value parameter, but I do not know how to pass the chunk size. For the above example, the chunk size is 20.
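
To illustrate what I am after, here is a rough sketch of a shell loop that emulates the chunking by passing each id range through --where instead of --last-value. The JDBC URL, credentials, table name, and id column below are placeholders, not my real setup:

#!/bin/bash
# Rough sketch: emulate chunk-wise import by looping over id ranges
# and passing each range to Sqoop through --where.
# Connection details (URL, user, password file, table, column) are placeholders.

CHUNK=20    # desired chunk size
LAST=0      # last id already imported (plays the role of --last-value)
MAX=100     # highest id in the table (N)

while [ "$LAST" -lt "$MAX" ]; do
  UPPER=$((LAST + CHUNK))
  sqoop import \
    --connect jdbc:mysql://dbhost/mydb \
    --username myuser \
    --password-file /user/me/db.password \
    --table mytable \
    --where "id > $LAST AND id <= $UPPER" \
    --target-dir /data/mytable/chunk_$UPPER \
    --num-mappers 1
  LAST=$UPPER
done

But I would prefer to do this with a saved Sqoop job using --incremental append, --check-column, and --last-value, if there is some way to cap the number of rows imported per run.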

Any information will be highly appreciated. Thanks in advance.