HDFS >> mail # user >> Reg: failures when writing to DB from map/reduce


Reg: failures when writing to DB from map/reduce
Hi All,

In Sqoop:
When exporting from HDFS to a database, if an export map task fails (for
example, because of a lost connection or a constraint violation), the
export job as a whole fails, and the results of a failed export are
undefined. Each export map task operates in a separate transaction, and
individual map tasks commit their current transaction periodically. If a
task fails, only its current transaction is rolled back; any
previously-committed transactions remain durable in the database, leaving
a partially-complete export.
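On the Sqoop side, one mitigation for this is Sqoop's staging-table support for exports, which writes all task output to an intermediate table and only moves it to the target table once the whole job succeeds. The sketch below is hypothetical: the connection string, table names, and export directory are placeholders, not values from this thread.

```shell
# Hypothetical Sqoop 1 export using a staging table, so the target table
# is populated in a single final step only after all map tasks succeed.
sqoop export \
  --connect jdbc:mysql://dbhost/mydb \
  --table mytable \
  --staging-table mytable_staging \
  --clear-staging-table \
  --export-dir /user/manoj/output
```

Note that the staging table must exist beforehand with the same schema as the target table.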

When using DBOutputFormat in a custom map/reduce program:
When multiple mappers or reducers emit to the database and any one of them
fails, only the failed task's transaction is rolled back. Any
previously-committed transactions remain durable in the database, again
leaving a partially-complete export.
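The failure mode described above can be sketched outside Hadoop. The following is a minimal illustration (using SQLite in place of a real target database, with simulated "tasks") of why per-task commits leave a partially-complete table when one task fails:

```python
import sqlite3

# Sketch: two "map tasks" each write in their own transaction.
# When one fails, only its transaction rolls back; the other task's
# committed rows remain, leaving a partially-complete export.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE export (id INTEGER PRIMARY KEY, val TEXT)")
conn.commit()

def run_task(task_rows, fail=False):
    """Simulate one task: insert rows in a transaction, commit on success."""
    try:
        with conn:  # commits on success, rolls back on exception
            for i, v in task_rows:
                conn.execute("INSERT INTO export VALUES (?, ?)", (i, v))
            if fail:
                raise RuntimeError("simulated task failure")
        return True
    except RuntimeError:
        return False  # this task's transaction was rolled back

run_task([(1, "a"), (2, "b")])             # task 1 succeeds and commits
run_task([(3, "c"), (4, "d")], fail=True)  # task 2 fails; its rows roll back

rows = conn.execute("SELECT id FROM export ORDER BY id").fetchall()
print(rows)  # only task 1's rows survive: a partially-complete export
```

Running this leaves only ids 1 and 2 in the table, mirroring the partially-complete export described above.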

Is there any way to commit only once, at the end, so a partially-complete
export is avoided?
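One common pattern for "commit at the end" (the same idea Sqoop's staging table implements) is to have every task write to a staging table, then promote staging to target in a single final transaction only after all tasks succeed. A minimal sketch of that pattern, again using SQLite as a stand-in database:

```python
import sqlite3

# Sketch of the staging-table pattern: tasks commit independently into a
# staging table; the target table is only touched in one final transaction.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE TABLE staging (id INTEGER PRIMARY KEY, val TEXT)")
conn.commit()

def task(task_rows):
    """Each task still commits on its own, but only into the staging table."""
    with conn:
        conn.executemany("INSERT INTO staging VALUES (?, ?)", task_rows)

task([(1, "a"), (2, "b")])
task([(3, "c"), (4, "d")])

# All tasks succeeded: promote staging to target atomically.
# If any task had failed, we would skip this step and the target table
# would remain untouched -- no partial export.
with conn:
    conn.execute("INSERT INTO target SELECT * FROM staging")
    conn.execute("DELETE FROM staging")

count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(count)  # prints 4
```

The trade-off is that the data is written twice and the staging table needs roughly as much space as the export itself, which is also why Sqoop requires the staging table to be provisioned up front.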

Cheers!
Manoj.