search-hadoop.com
Accumulo >> mail # user >> Error while trying to initialize


Supun Kamburugamuva 2013-03-29, 15:57
Eric Newton 2013-03-29, 16:03
Supun Kamburugamuva 2013-03-29, 16:07
Supun Kamburugamuva 2013-03-29, 19:43
Re: Error while trying to initialize
When you do a `jps -lm`, are all the hadoop DFS processes, zookeeper and
accumulo processes running?
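The `jps -lm` check above can be sketched as a small script that looks for the expected daemons. This is a hypothetical illustration, not from the thread: the sample output and the daemon class names (`NameNode`, `DataNode`, `QuorumPeerMain`, `accumulo.start.Main`) are assumed defaults for a Hadoop 1.x / ZooKeeper / Accumulo single-node setup of that era; on a real machine you would capture `jps_out="$(jps -lm)"` instead of using the canned sample.

```shell
# Sample of what `jps -lm` might print on a healthy single-node setup
# (PIDs and class names are illustrative assumptions, not thread output):
jps_out="12345 org.apache.hadoop.hdfs.server.namenode.NameNode
12346 org.apache.hadoop.hdfs.server.datanode.DataNode
12347 org.apache.zookeeper.server.quorum.QuorumPeerMain
12348 org.apache.accumulo.start.Main master
12349 org.apache.accumulo.start.Main tserver"

# Check that each required daemon appears in the jps listing.
missing=""
for daemon in NameNode DataNode QuorumPeerMain accumulo.start.Main; do
  echo "$jps_out" | grep -q "$daemon" || missing="$missing $daemon"
done

if [ -z "$missing" ]; then
  echo "all required daemons running"
else
  echo "missing:$missing"
fi
```

If `DataNode` is absent from the listing, that alone would explain the scan timeouts, since Accumulo cannot read or write any HDFS blocks.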

On Fri, Mar 29, 2013 at 3:43 PM, Supun Kamburugamuva <[EMAIL PROTECTED]>wrote:

> I'm getting the following exception while starting accumulo.
>
> ./start-all.sh
>
> This error is shown in monitor_supun-OptiPlex-960.debug.log. Similar
> errors are shown in other logs as well.
>
> 2013-03-29 15:41:43,993 [monitor.Monitor] DEBUG:  connecting to
> zookeepers localhost:2181
> 2013-03-29 15:41:44,018 [impl.ThriftScanner] DEBUG:  Failed to locate
> tablet for table : !0 row : ~err_^@
> 2013-03-29 15:41:47,025 [monitor.Monitor] INFO :  Failed to obtain
> problem reports
> java.lang.RuntimeException:
> org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
>         at
> org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:174)
>         at
> org.apache.accumulo.server.problems.ProblemReports$3.hasNext(ProblemReports.java:241)
>         at
> org.apache.accumulo.server.problems.ProblemReports.summarize(ProblemReports.java:299)
>         at
> org.apache.accumulo.server.monitor.Monitor.fetchData(Monitor.java:392)
>         at
> org.apache.accumulo.server.monitor.Monitor$2.run(Monitor.java:504)
>         at
> org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>         at java.lang.Thread.run(Thread.java:722)
> Caused by:
> org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
>         at
> org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:244)
>         at
> org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:82)
>         at
> org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:164)
>         ... 6 more
>
> Thanks,
> Supun..
>
>
> On Fri, Mar 29, 2013 at 12:07 PM, Supun Kamburugamuva <[EMAIL PROTECTED]>
> wrote:
> > Thanks Eric. It appears my datanode is not running.
> >
> > Supun..
> >
> > On Fri, Mar 29, 2013 at 12:03 PM, Eric Newton <[EMAIL PROTECTED]>
> wrote:
> >> HDFS is not up and working.  In particular, your data node(s) are not
> up.
> >>
> >> You can verify this without using accumulo:
> >>
> >> $ hadoop fs -put somefile .
> >>
> >> You will want to check your hadoop logs for errors.
> >>
> >> -Eric
> >>
> >>
> >>
> >> On Fri, Mar 29, 2013 at 11:57 AM, Supun Kamburugamuva <
> [EMAIL PROTECTED]>
> >> wrote:
> >>>
> >>> Hi All,
> >>>
> >>> I'm using a trunk build and when I try to init accumulo it gives the
> >>> following exception.
> >>>
> >>> 2013-03-29 11:54:47,842 [util.NativeCodeLoader] INFO : Loaded the
> >>> native-hadoop library
> >>> 2013-03-29 11:54:47,884 [hdfs.DFSClient] WARN : DataStreamer
> >>> Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> >>> File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be
> >>> replicated to 0 nodes, instead of 1
> >>>         at
> >>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
> >>>         at
> >>>
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
> >>>         at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
> >>>         at
> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>>         at java.lang.reflect.Method.invoke(Method.java:601)
> >>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> >>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> >>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> >>>         at java.security.AccessController.doPrivileged(Native Method)
> >>>         at javax.security.auth.Subject.doAs(Subject.java:415)
> >>>         at
> >>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> >>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> >>>
> >>>         at org.apache.hadoop.ipc.Client.call(Client.java:1070)
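Eric's HDFS check can be expanded into a short verification script. The commands and the `dfsadmin -report` wording below assume a Hadoop 1.x CLI; the sample report line is a hypothetical stand-in for real cluster output, used here only to show how to read off the live-datanode count:

```shell
# Verify HDFS independently of Accumulo (Hadoop 1.x era commands):
#
#   hadoop fs -put somefile .    # fails if no datanode can store the blocks
#   hadoop dfsadmin -report      # reports how many datanodes are live
#
# Hypothetical sample line from `hadoop dfsadmin -report`, matching the
# symptom in this thread (NOT actual output from the poster's cluster):
report="Datanodes available: 0 (1 total, 1 dead)"

# Extract the live-datanode count from the report line.
live="$(echo "$report" | sed 's/Datanodes available: \([0-9]*\).*/\1/')"

if [ "$live" -eq 0 ]; then
  echo "no live datanodes: explains 'replicated to 0 nodes, instead of 1'"
else
  echo "$live live datanode(s)"
fi
```

A zero here matches the `could only be replicated to 0 nodes, instead of 1` exception during `accumulo init`: the NameNode is up and answering RPCs, but there is no DataNode to hold the root tablet's file.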
Supun Kamburugamuva 2013-03-29, 19:59
Eric Newton 2013-03-29, 20:18
Supun Kamburugamuva 2013-04-01, 19:41