
Accumulo user mailing list: Re: [External] Re: accumulo init not working


Re: [External] Re: accumulo init not working
William Slacum 2012-06-19, 15:42
I'd suggest running `jps -lm` again to see if a TServer process has
started, and check the Accumulo tserver log to verify that no errors
occurred.
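
For example (the log path is illustrative; adjust for your install):

$ jps -lm
$ tail $ACCUMULO_HOME/logs/tserver_*.log

If no tablet server process shows up in `jps -lm`, the shell will just sit
there waiting for one, which would explain what you're seeing.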

On Tue, Jun 19, 2012 at 8:08 AM, Shrestha, Tejen [USA]
<[EMAIL PROTECTED]> wrote:
> Accumulo starts up fine now, but when I try to get to the Accumulo shell I
> get this:
>
> $ $ACCUMULO_HOME/bin/accumulo shell -u root
> 19 11:01:32,014 [impl.ServerClient] WARN : There are no tablet servers:
> check that zookeeper and accumulo are running.
>
> It prints that out and then doesn't do anything; it just sits there.  I
> started Hadoop and Zookeeper before starting Accumulo.  Am I missing a step
> here?
>
> From: John Vines <[EMAIL PROTECTED]>
> Reply-To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Date: Tuesday, June 19, 2012 12:12 AM
> To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Subject: Re: [External] Re: accumulo init not working
>
> If your hdfs is up and fine, as per Bill, check to make sure $HADOOP_HOME in
> accumulo-env.sh is pointing to the same configuration as the one that is
> running. Accumulo uses that environment variable not only to pick up the
> hadoop jars, but also the configuration so it can find hdfs.
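>
> For example, the relevant line in $ACCUMULO_HOME/conf/accumulo-env.sh should
> look something like this (the path is illustrative; point it at the Hadoop
> install you actually started):
>
> test -z "$HADOOP_HOME" && export HADOOP_HOME=/usr/local/hadoop
>
> You can also sanity-check that hdfs is reachable with that configuration:
>
> $ $HADOOP_HOME/bin/hadoop fs -ls /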
>
> John
>
> On Mon, Jun 18, 2012 at 11:31 PM, Shrestha, Tejen [USA]
> <[EMAIL PROTECTED]> wrote:
>>
>> Thank you for the quick reply.  You were right: I had downloaded the source
>> instead of the dist.
>> I ran `mvn package && mvn assembly:single -N` as per the Accumulo README.
>> I'm not getting the exception anymore, but now I can't get it to connect
>> for some reason.  Again, Hadoop and Zookeeper are running fine, and this is
>> the error that I get after $ACCUMULO_HOME/bin/accumulo init:
>>
>> 18 23:04:55,614 [ipc.Client] INFO : Retrying connect to server:
>> localhost/127.0.0.1:9000. Already tried 0 time(s).
>> 18 23:04:56,618 [ipc.Client] INFO : Retrying connect to server:
>> localhost/127.0.0.1:9000. Already tried 1 time(s).
>> 18 23:04:57,620 [ipc.Client] INFO : Retrying connect to server:
>> localhost/127.0.0.1:9000. Already tried 2 time(s).
>> 18 23:04:58,621 [ipc.Client] INFO : Retrying connect to server:
>> localhost/127.0.0.1:9000. Already tried 3 time(s).
>> 18 23:04:59,623 [ipc.Client] INFO : Retrying connect to server:
>> localhost/127.0.0.1:9000. Already tried 4 time(s).
>> 18 23:05:00,625 [ipc.Client] INFO : Retrying connect to server:
>> localhost/127.0.0.1:9000. Already tried 5 time(s).
>> 18 23:05:01,625 [ipc.Client] INFO : Retrying connect to server:
>> localhost/127.0.0.1:9000. Already tried 6 time(s).
>> 18 23:05:02,627 [ipc.Client] INFO : Retrying connect to server:
>> localhost/127.0.0.1:9000. Already tried 7 time(s).
>> 18 23:05:03,629 [ipc.Client] INFO : Retrying connect to server:
>> localhost/127.0.0.1:9000. Already tried 8 time(s).
>> 18 23:05:04,631 [ipc.Client] INFO : Retrying connect to server:
>> localhost/127.0.0.1:9000. Already tried 9 time(s).
>> 18 23:05:04,634 [util.Initialize] FATAL: java.net.ConnectException: Call
>> to localhost/127.0.0.1:9000 failed on connection exception:
>> java.net.ConnectException: Connection refused
>> java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on
>> connection exception: java.net.ConnectException: Connection refused
>> at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
>> at org.apache.hadoop.ipc.Client.call(Client.java:743)
>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>> at $Proxy0.getProtocolVersion(Unknown Source)
>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>> at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>> at
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)