Accumulo, mail # user - Error while trying to initialize


Re: Error while trying to initialize
Supun Kamburugamuva 2013-04-01, 19:41
Thank you all for the help. I was able to get the latest release working.

Supun..
On Fri, Mar 29, 2013 at 4:18 PM, Eric Newton <[EMAIL PROTECTED]> wrote:

> How many tablet servers (and loggers in 1.4.x) are showing up in the
> monitor?
>
> If zero, check to make sure the write-ahead log directory exists on all
> slave nodes.  By default, this will be $ACCUMULO_HOME/walogs.
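
Eric's suggestion can be scripted. A minimal sketch, assuming a conf/slaves file listing the slave hosts and passwordless ssh to them (both assumptions about the install layout):

```shell
# Create the default write-ahead log directory on every slave node.
# ACCUMULO_HOME and conf/slaves are assumptions; adjust to your layout.
ACCUMULO_HOME=${ACCUMULO_HOME:-/opt/accumulo}
while read -r host; do
  [ -z "$host" ] && continue
  ssh "$host" "mkdir -p '$ACCUMULO_HOME/walogs'"
done < "$ACCUMULO_HOME/conf/slaves"
```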
>
> -Eric
>
>
>
> On Fri, Mar 29, 2013 at 3:59 PM, Supun Kamburugamuva <[EMAIL PROTECTED]> wrote:
>
>> Here is my jps -lm output. It seems everything is running. I've started
>> ZooKeeper in the foreground and I can see it is running.
>>
>> 27457 org.apache.hadoop.hdfs.server.namenode.NameNode
>> 8394 org.apache.accumulo.start.Main gc --address localhost
>> 10536 sun.tools.jps.Jps -lm
>> 8504 org.apache.accumulo.start.Main tracer --address localhost
>> 2142 com.intellij.idea.Main
>> 28109 org.apache.hadoop.mapred.JobTracker
>> 27732 org.apache.hadoop.hdfs.server.datanode.DataNode
>> 19888 org.jetbrains.idea.maven.server.RemoteMavenServer
>> 8304 org.apache.accumulo.start.Main master --address localhost
>> 28387 org.apache.hadoop.mapred.TaskTracker
>> 6590 org.apache.zookeeper.server.quorum.QuorumPeerMain /home/supun/dev/apache/zookeeper-3.4.5/bin/../conf/zoo.cfg
>> 28019 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
>> 7952 org.apache.accumulo.start.Main monitor --address localhost
>>
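
As a quick filter over output like the above, the Accumulo server processes can be picked out by their launcher class. Note the listing shows gc, tracer, master, and monitor, but no tserver (or logger, on 1.4.x):

```shell
# Show only Accumulo server processes from jps output;
# each line ends with the server role (master, gc, tserver, ...).
jps -lm | grep org.apache.accumulo.start.Main
```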
>> Supun..
>>
>> On Fri, Mar 29, 2013 at 3:47 PM, William Slacum <[EMAIL PROTECTED]> wrote:
>> > When you do a `jps -lm`, are all the hadoop DFS processes, zookeeper and
>> > accumulo processes running?
>> >
>> >
>> > On Fri, Mar 29, 2013 at 3:43 PM, Supun Kamburugamuva <[EMAIL PROTECTED]> wrote:
>> >>
>> >> I'm getting the following exception while starting Accumulo with
>> >> ./start-all.sh
>> >>
>> >> This error appears in monitor_supun-OptiPlex-960.debug.log. Similar
>> >> errors are shown in the other logs as well.
>> >>
>> >> 2013-03-29 15:41:43,993 [monitor.Monitor] DEBUG:  connecting to zookeepers localhost:2181
>> >> 2013-03-29 15:41:44,018 [impl.ThriftScanner] DEBUG:  Failed to locate tablet for table : !0 row : ~err_^@
>> >> 2013-03-29 15:41:47,025 [monitor.Monitor] INFO :  Failed to obtain problem reports
>> >> java.lang.RuntimeException: org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
>> >>         at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:174)
>> >>         at org.apache.accumulo.server.problems.ProblemReports$3.hasNext(ProblemReports.java:241)
>> >>         at org.apache.accumulo.server.problems.ProblemReports.summarize(ProblemReports.java:299)
>> >>         at org.apache.accumulo.server.monitor.Monitor.fetchData(Monitor.java:392)
>> >>         at org.apache.accumulo.server.monitor.Monitor$2.run(Monitor.java:504)
>> >>         at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>> >>         at java.lang.Thread.run(Thread.java:722)
>> >> Caused by: org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
>> >>         at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:244)
>> >>         at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:82)
>> >>         at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:164)
>> >>         ... 6 more
>> >>
>> >> Thanks,
>> >> Supun..
>> >>
>> >>
>> >> On Fri, Mar 29, 2013 at 12:07 PM, Supun Kamburugamuva <[EMAIL PROTECTED]> wrote:
>> >> > Thanks Eric. It appears my datanode is not running.
>> >> >
>> >> > Supun..
>> >> >
>> >> > On Fri, Mar 29, 2013 at 12:03 PM, Eric Newton <[EMAIL PROTECTED]> wrote:
>> >> >> HDFS is not up and working.  In particular, your data node(s) are not up.
>> >> >>
>> >> >> You can verify this without using accumulo:
>> >> >>
>> >> >> $ hadoop fs -put somefile .
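
Eric's put test above can be rounded out with a couple more checks. A sketch, assuming the Hadoop 1.x command line used elsewhere in this thread:

```shell
# Verify HDFS can store a block end-to-end, independently of Accumulo.
echo test > /tmp/somefile
hadoop fs -put /tmp/somefile .     # fails if no datanode can accept the block
hadoop fs -ls .                    # confirm the file landed
hadoop dfsadmin -report            # shows live vs. dead datanodes
```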
Supun Kamburugamuva
Member, Apache Software Foundation; http://www.apache.org
E-mail: [EMAIL PROTECTED];  Mobile: +1 812 369 6762
Blog: http://supunk.blogspot.com