Re: importdirectory command gets stuck
Accumulo will always generate a warning for any Thrift call that takes
longer than 2 minutes (configurable).
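
For reference, that timeout can be raised through the client API.  This is
a minimal sketch, assuming the property involved is general.rpc.timeout
(120s by default in 1.5-era releases); verify the property name against
your version's documentation.

  import org.apache.accumulo.core.client.Connector;

  public class RaiseRpcTimeout {
      // Sets a system-wide property; the new value is distributed to the
      // servers through ZooKeeper, so no restart is needed.
      static void raise(Connector conn) throws Exception {
          conn.instanceOperations().setProperty("general.rpc.timeout", "240s");
      }
  }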

The actual delay is happening elsewhere.

Bulk import is a complicated process.  Your request is submitted to the
master, which moves the file into place and then hands the file over to a
random tablet server for analysis.  The tablet server examines the index of
the file and determines which tablets the file belongs to.  It then asks
each of those tablet servers to incorporate the file into its tablet.  Each
of these requests can time out, and they are retried.  Finally, the master
will retry the bulk import several times in case the random tablet server
has failed.  During this entire process a series of markers in the
!METADATA table ensures that the file is added to the appropriate tablets
only once.
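
On the client side all of this happens behind a single call.  A minimal
sketch against the 1.5-era Java API (the table name and paths here are
placeholders):

  import org.apache.accumulo.core.client.Connector;

  public class BulkLoad {
      static void load(Connector conn) throws Exception {
          // The failures directory must exist and be empty before the
          // call; files that cannot be assigned are moved there.
          conn.tableOperations().importDirectory(
              "mytable",         // target table
              "/bulk/files",     // directory of RFiles in HDFS
              "/bulk/failures",  // failure directory in HDFS
              false);            // setTime: stamp entries with server time
      }
  }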

I've included this long-winded explanation so that you can debug what is
going on.

If, for example, you are generating a file for bulk loading that needs to
be incorporated into a thousand tablets, it will probably take too long,
and the master will assume failure and retry.

For this reason, and many others, one should try to produce bulk files that
correspond to tablet boundaries.
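
The table's current split points can be fetched from the client API and fed
to whatever is generating the files, for example as the cut points of a
MapReduce range partitioner.  A sketch, again assuming the 1.5-era API
(listSplits was called getSplits in earlier releases):

  import java.util.Collection;
  import org.apache.accumulo.core.client.Connector;
  import org.apache.hadoop.io.Text;

  public class TabletBoundaries {
      // One output file per split range keeps each bulk file confined to
      // a single tablet, so no one file fans out to thousands of tablets.
      static Collection<Text> splits(Connector conn, String table)
              throws Exception {
          return conn.tableOperations().listSplits(table);
      }
  }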

-Eric

On Wed, Nov 6, 2013 at 9:04 PM, Korb, Michael [USA] <[EMAIL PROTECTED]> wrote:

> Sometimes when I try to run importdirectory on RFiles, the thread hangs
> and eventually fails. The shell says, "WARN : Thread 'shell' stuck on IO to
> …" and the Recent Logs in the UI say "Thread 'bulk import XX' stuck on IO"
> and "rpc failed server … org.apache.thrift.transport.TTransportException …"
>
> Sometimes it puts the RFiles in the failures directory, and sometimes it
> writes a text file failures.txt to the failures directory, where
> failures.txt contains the location of an RFile in HDFS under the Accumulo
> data directory.
>
> Is there any way to fix this Thrift error so I can complete the bulk
> ingest? Also, what does failures.txt mean? It looks like the RFile is in
> the right place. I would greatly appreciate any help with these issues.
>
>  Thanks,
> Mike
>