Accumulo >> mail # user >> Error while trying to initialize


Re: Error while trying to initialize
Here is my jps -lm output. It seems they are all running. I've started ZooKeeper
in the foreground and I can see it is running.

27457 org.apache.hadoop.hdfs.server.namenode.NameNode
8394 org.apache.accumulo.start.Main gc --address localhost
10536 sun.tools.jps.Jps -lm
8504 org.apache.accumulo.start.Main tracer --address localhost
2142 com.intellij.idea.Main
28109 org.apache.hadoop.mapred.JobTracker
27732 org.apache.hadoop.hdfs.server.datanode.DataNode
19888 org.jetbrains.idea.maven.server.RemoteMavenServer
8304 org.apache.accumulo.start.Main master --address localhost
28387 org.apache.hadoop.mapred.TaskTracker
6590 org.apache.zookeeper.server.quorum.QuorumPeerMain
/home/supun/dev/apache/zookeeper-3.4.5/bin/../conf/zoo.cfg
28019 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
7952 org.apache.accumulo.start.Main monitor --address localhost

Supun..
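[Editor's note: a check like the one above can be made repeatable by grepping the `jps -lm` output for the expected daemon classes. This is only a sketch for a single-node setup like the one in the listing; the class-name fragments are taken from that listing, and the helper name `check_daemons` is hypothetical.]

```shell
# check_daemons: given the text of `jps -lm`, print a MISSING line for
# each expected daemon class name that does not appear in it.
check_daemons() {
  for d in NameNode DataNode QuorumPeerMain org.apache.accumulo.start.Main; do
    echo "$1" | grep -q "$d" || echo "MISSING: $d"
  done
}

# Usage against the live process list:
# check_daemons "$(jps -lm)"
```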

On Fri, Mar 29, 2013 at 3:47 PM, William Slacum
<[EMAIL PROTECTED]> wrote:
> When you do a `jps -lm`, are all the Hadoop DFS processes, ZooKeeper, and
> Accumulo processes running?
>
>
> On Fri, Mar 29, 2013 at 3:43 PM, Supun Kamburugamuva <[EMAIL PROTECTED]>
> wrote:
>>
>> I'm getting the following exception while starting Accumulo.
>>
>> ./start-all.sh
>>
>> This error is shown in monitor_supun-OptiPlex-960.debug.log. Similar
>> errors are shown in the other logs as well.
>>
>> 2013-03-29 15:41:43,993 [monitor.Monitor] DEBUG:  connecting to
>> zookeepers localhost:2181
>> 2013-03-29 15:41:44,018 [impl.ThriftScanner] DEBUG:  Failed to locate
>> tablet for table : !0 row : ~err_^@
>> 2013-03-29 15:41:47,025 [monitor.Monitor] INFO :  Failed to obtain
>> problem reports
>> java.lang.RuntimeException:
>> org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
>>         at
>> org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:174)
>>         at
>> org.apache.accumulo.server.problems.ProblemReports$3.hasNext(ProblemReports.java:241)
>>         at
>> org.apache.accumulo.server.problems.ProblemReports.summarize(ProblemReports.java:299)
>>         at
>> org.apache.accumulo.server.monitor.Monitor.fetchData(Monitor.java:392)
>>         at
>> org.apache.accumulo.server.monitor.Monitor$2.run(Monitor.java:504)
>>         at
>> org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>         at java.lang.Thread.run(Thread.java:722)
>> Caused by:
>> org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
>>         at
>> org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:244)
>>         at
>> org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:82)
>>         at
>> org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:164)
>>         ... 6 more
>>
>> Thanks,
>> Supun..
>>
>>
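[Editor's note: since the monitor log above shows the client connecting to ZooKeeper at localhost:2181 right before the scan times out, it is worth ruling ZooKeeper out. A healthy ZooKeeper server answers the built-in four-letter command "ruok" with "imok". The helper name `zk_ok` is hypothetical, and the usage line assumes `nc` is available.]

```shell
# ZooKeeper replies "imok" to the four-letter command "ruok" when it is
# serving requests; zk_ok succeeds only on that exact reply.
zk_ok() {
  [ "$1" = "imok" ]
}

# Usage (assumes ZooKeeper on localhost:2181):
# zk_ok "$(echo ruok | nc localhost 2181)" && echo "zookeeper is answering"
```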
>> On Fri, Mar 29, 2013 at 12:07 PM, Supun Kamburugamuva <[EMAIL PROTECTED]>
>> wrote:
>> > Thanks Eric. It appears my datanode is not running.
>> >
>> > Supun..
>> >
>> > On Fri, Mar 29, 2013 at 12:03 PM, Eric Newton <[EMAIL PROTECTED]>
>> > wrote:
>> >> HDFS is not up and working.  In particular, your data node(s) are not
>> >> up.
>> >>
>> >> You can verify this without using Accumulo:
>> >>
>> >> $ hadoop fs -put somefile .
>> >>
>> >> You will want to check your hadoop logs for errors.
>> >>
>> >> -Eric
>> >>
>> >>
>> >>
>> >> On Fri, Mar 29, 2013 at 11:57 AM, Supun Kamburugamuva
>> >> <[EMAIL PROTECTED]>
>> >> wrote:
>> >>>
>> >>> Hi All,
>> >>>
>> >>> I'm using a trunk build, and when I try to init Accumulo it gives the
>> >>> following exception.
>> >>>
>> >>> 2013-03-29 11:54:47,842 [util.NativeCodeLoader] INFO : Loaded the
>> >>> native-hadoop library
>> >>> 2013-03-29 11:54:47,884 [hdfs.DFSClient] WARN : DataStreamer
>> >>> Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> >>> File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be
>> >>> replicated to 0 nodes, instead of 1
>> >>>         at
>> >>>
>> >>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
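[Editor's note: "could only be replicated to 0 nodes, instead of 1" is the classic symptom of HDFS having no live DataNodes, which matches Eric's diagnosis earlier in the thread. In Hadoop 1.x, `hadoop dfsadmin -report` prints one section starting with "Name: host:port" per live datanode; `live_nodes` below is a hypothetical helper that counts those sections in the report text.]

```shell
# live_nodes: count live datanode sections (lines starting "Name:") in
# the text of `hadoop dfsadmin -report` passed as $1.
live_nodes() {
  echo "$1" | grep -c '^Name:'
}

# Usage:
# [ "$(live_nodes "$(hadoop dfsadmin -report)")" -gt 0 ] || echo "no live datanodes"
```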

Supun Kamburugamuva
Member, Apache Software Foundation; http://www.apache.org
E-mail: [EMAIL PROTECTED];  Mobile: +1 812 369 6762
Blog: http://supunk.blogspot.com