Accumulo user mailing list: remote accumulo instance issue


Marc Reichman 2013-05-08, 15:09
Re: remote accumulo instance issue
On Wed, May 8, 2013 at 11:09 AM, Marc Reichman <[EMAIL PROTECTED]> wrote:

> I have seen this as ticket ACCUMULO-687, which has been marked resolved,
> but I still see this issue.
>
> I am connecting to a remote accumulo instance to query and to launch
> mapreduce jobs using AccumuloRowInputFormat, and I'm seeing an error like:
>
> 91 [main-SendThread(padres.home:2181)] INFO
> org.apache.zookeeper.ClientCnxn  - Socket connection established to
> padres.home/192.168.1.160:2181, initiating session
> 166 [main-SendThread(padres.home:2181)] INFO
> org.apache.zookeeper.ClientCnxn  - Session establishment complete on server
> padres.home/192.168.1.160:2181, sessionid = 0x13e7b48f9d17af7, negotiated
> timeout = 30000
> 1889 [main] WARN org.apache.accumulo.core.client.impl.ServerClient  -
> Failed to find an available server in the list of servers:
> [192.168.1.164:9997:9997 (120000), 192.168.1.192:9997:9997 (120000),
> 192.168.1.194:9997:9997 (120000), 192.168.1.162:9997:9997 (120000),
> 192.168.1.190:9997:9997 (120000), 192.168.1.166:9997:9997 (120000),
> 192.168.1.168:9997:9997 (120000), 192.168.1.196:9997:9997 (120000)]
>
> My zookeeper's "tservers" key looks like:
> [zk: localhost:2181(CONNECTED) 1] ls
> /accumulo/908a756e-1c81-4bea-a4de-675456499a10/tservers
> [192.168.1.164:9997, 192.168.1.192:9997, 192.168.1.194:9997,
> 192.168.1.162:9997, 192.168.1.190:9997, 192.168.1.166:9997,
> 192.168.1.168:9997, 192.168.1.196:9997]
>
> My masters and slaves files look like:
> [hadoop@padres conf]$ cat masters
> 192.168.1.160
> [hadoop@padres conf]$ cat slaves
> 192.168.1.162
> 192.168.1.164
> 192.168.1.166
> 192.168.1.168
> 192.168.1.190
> 192.168.1.192
> 192.168.1.194
> 192.168.1.196
>
> tracers, gc, and monitor are the same as masters.
>
> I have no issues executing on the master, but I would like to work from a
> remote host. The remote host is on a VPN, and its default resolver is NOT
> the resolver from the remote network. If I do a reverse lookup over the VPN
> *using* the remote resolver, it shows the proper hostnames.
>
> My concern is that something is taking the "host:port" entry and appending
> the port again, producing this concatenated host:port:port form, which is
> obviously not going to work.
>

The second port is nothing to worry about. It's created by concatenating
what came from zookeeper with the default tserver port. The location from
zookeeper can contain a port; if for some reason the location in zookeeper
did not have a port, the default would be used.

That second port should probably go away; it's being added by vestigial
code. We always expect what comes from zookeeper to have a port now.
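
A rough illustration of the behavior described above (this is not the actual
org.apache.accumulo.core.client.impl.ServerClient code; the values are taken
from the warning in the quoted log output):

public class TserverDisplayExample {
    public static void main(String[] args) {
        String zkLocation = "192.168.1.164:9997"; // location read from ZooKeeper, already includes a port
        int defaultTserverPort = 9997;            // default that would be used for a location without a port
        long timeoutMillis = 120000;

        // Appending the default port to a location that already carries one
        // produces the host:port:port form seen in the warning.
        String display = zkLocation + ":" + defaultTserverPort + " (" + timeoutMillis + ")";
        System.out.println(display); // prints 192.168.1.164:9997:9997 (120000)
    }
}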
>
> What else can I try? I previously had hostnames in the masters/slaves/etc.
> files but now have the IPs. Should I re-init the instance to see if it
> changes anything in zookeeper?
>
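
The original question describes querying a remote instance and launching
MapReduce jobs with AccumuloRowInputFormat. For context, a minimal sketch of
that kind of setup against the 1.5-era client API (the instance name, user,
password, and table name below are placeholders, and the static configuration
methods differ between Accumulo versions):

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.mapreduce.AccumuloRowInputFormat;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.hadoop.mapreduce.Job;

public class RemoteAccumuloJobSketch {
    public static void main(String[] args) throws Exception {
        String instanceName = "myInstance";       // placeholder
        String zookeepers = "padres.home:2181";   // quorum from the log output above

        // Direct queries: the client discovers tserver locations through
        // ZooKeeper, so those addresses must be reachable from the remote host.
        ZooKeeperInstance instance = new ZooKeeperInstance(instanceName, zookeepers);
        Connector conn = instance.getConnector("user", new PasswordToken("secret"));
        System.out.println("table exists: " + conn.tableOperations().exists("mytable"));

        // MapReduce: point AccumuloRowInputFormat at the same instance.
        Job job = Job.getInstance(); // Hadoop 2-style; older Hadoop uses new Job(conf)
        job.setInputFormatClass(AccumuloRowInputFormat.class);
        AccumuloRowInputFormat.setZooKeeperInstance(job, instanceName, zookeepers);
        AccumuloRowInputFormat.setConnectorInfo(job, "user", new PasswordToken("secret"));
        AccumuloRowInputFormat.setInputTableName(job, "mytable");
        // ...set mapper, reducer, and output format, then submit the job as usual.
    }
}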
Other messages in this thread:
Marc Reichman 2013-05-08, 16:38
John Vines 2013-05-08, 16:43
Marc Reichman 2013-05-08, 16:45
Marc Reichman 2013-05-08, 17:04
John Vines 2013-05-08, 15:25
Marc Reichman 2013-05-08, 15:37
Eric Newton 2013-05-08, 15:29