Accumulo, mail # user - remote accumulo instance issue


Re: remote accumulo instance issue
Eric Newton 2013-05-08, 15:29
That ip:port:port is just a problem with the log message.  It's trying the
correct host:port.

Verify that your client can connect to port 9997 on your tserver nodes.
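
One quick way to check is a plain TCP connect from the client host, for
example with something like the following (just a sketch, not an Accumulo
API call; the addresses are the tserver IPs from your zookeeper listing):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class TserverPortCheck {
    public static void main(String[] args) {
        // tserver IPs taken from the zookeeper listing in this thread
        String[] tservers = {"192.168.1.162", "192.168.1.164", "192.168.1.166",
                "192.168.1.168", "192.168.1.190", "192.168.1.192",
                "192.168.1.194", "192.168.1.196"};
        for (String host : tservers) {
            try (Socket s = new Socket()) {
                // plain socket connect with a 5 second timeout
                s.connect(new InetSocketAddress(host, 9997), 5000);
                System.out.println(host + ":9997 reachable");
            } catch (IOException e) {
                System.out.println(host + ":9997 NOT reachable: " + e.getMessage());
            }
        }
    }
}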

-Eric

On Wed, May 8, 2013 at 11:25 AM, John Vines <[EMAIL PROTECTED]> wrote:

> Is that remote instance behind a firewall or anything like that?
>
>
> On Wed, May 8, 2013 at 11:09 AM, Marc Reichman <
> [EMAIL PROTECTED]> wrote:
>
>> I have seen this as ticket ACCUMULO-687 which has been marked resolved,
>> but I still see this issue.
>>
>> I am connecting to a remote Accumulo instance to query and to launch
>> MapReduce jobs using AccumuloRowInputFormat, and I'm seeing an error like:
>>
>> 91 [main-SendThread(padres.home:2181)] INFO
>> org.apache.zookeeper.ClientCnxn  - Socket connection established to
>> padres.home/192.168.1.160:2181, initiating session
>> 166 [main-SendThread(padres.home:2181)] INFO
>> org.apache.zookeeper.ClientCnxn  - Session establishment complete on server
>> padres.home/192.168.1.160:2181, sessionid = 0x13e7b48f9d17af7,
>> negotiated timeout = 30000
>> 1889 [main] WARN org.apache.accumulo.core.client.impl.ServerClient  -
>> Failed to find an available server in the list of servers:
>> [192.168.1.164:9997:9997 (120000), 192.168.1.192:9997:9997 (120000),
>> 192.168.1.194:9997:9997 (120000), 192.168.1.162:9997:9997 (120000),
>> 192.168.1.190:9997:9997 (120000), 192.168.1.166:9997:9997 (120000),
>> 192.168.1.168:9997:9997 (120000), 192.168.1.196:9997:9997 (120000)]
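>>
>> For reference, the query side of my client is set up roughly like this (a
>> sketch; the instance name, user, password, and table name are placeholders):
>>
>> import java.util.Map;
>> import org.apache.accumulo.core.client.Connector;
>> import org.apache.accumulo.core.client.Instance;
>> import org.apache.accumulo.core.client.Scanner;
>> import org.apache.accumulo.core.client.ZooKeeperInstance;
>> import org.apache.accumulo.core.data.Key;
>> import org.apache.accumulo.core.data.Value;
>> import org.apache.accumulo.core.security.Authorizations;
>>
>> public class RemoteQuery {
>>     public static void main(String[] args) throws Exception {
>>         // instance name, user, password, and table are placeholders
>>         Instance inst = new ZooKeeperInstance("myInstance", "padres.home:2181");
>>         Connector conn = inst.getConnector("user", "password".getBytes());
>>         Scanner scan = conn.createScanner("mytable", new Authorizations());
>>         for (Map.Entry<Key, Value> e : scan) {
>>             System.out.println(e.getKey() + " -> " + e.getValue());
>>         }
>>     }
>> }
>>
>> The MapReduce jobs point AccumuloRowInputFormat at the same instance name
>> and ZooKeeper quorum.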
>>
>> My zookeeper's "tservers" key looks like:
>> [zk: localhost:2181(CONNECTED) 1] ls
>> /accumulo/908a756e-1c81-4bea-a4de-675456499a10/tservers
>> [192.168.1.164:9997, 192.168.1.192:9997, 192.168.1.194:9997,
>> 192.168.1.162:9997, 192.168.1.190:9997, 192.168.1.166:9997,
>> 192.168.1.168:9997, 192.168.1.196:9997]
>>
>> My masters and slaves files look like:
>> [hadoop@padres conf]$ cat masters
>> 192.168.1.160
>> [hadoop@padres conf]$ cat slaves
>> 192.168.1.162
>> 192.168.1.164
>> 192.168.1.166
>> 192.168.1.168
>> 192.168.1.190
>> 192.168.1.192
>> 192.168.1.194
>> 192.168.1.196
>>
>> tracers, gc, and monitor are the same as masters.
>>
>> I have no issues executing on the master, but I would like to work from a
>> remote host. The remote host is on a VPN, and its default resolver is NOT
>> the resolver from the remote network. If I do a reverse lookup over the VPN
>> *using* the remote network's resolver, it shows the proper hostnames.
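>>
>> To see what the client JVM itself resolves, something like this can help (a
>> sketch; one tserver IP and the master hostname from this setup are used as
>> examples):
>>
>> import java.net.InetAddress;
>>
>> public class ResolveCheck {
>>     public static void main(String[] args) throws Exception {
>>         // reverse lookup of one tserver IP, using whatever resolver the JVM sees
>>         InetAddress tserver = InetAddress.getByName("192.168.1.164");
>>         System.out.println("reverse: " + tserver.getCanonicalHostName());
>>         // forward lookup of the master hostname
>>         System.out.println("forward: "
>>                 + InetAddress.getByName("padres.home").getHostAddress());
>>     }
>> }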
>>
>> My concern is that something is appending the port to the "host:port"
>> entry, producing this concatenated host:port:port form, which is obviously
>> not going to work.
>>
>> What else can I try? I previously had hostnames in the
>> masters/slaves/etc. files but now have the IPs. Should I re-init the
>> instance to see if it changes anything in zookeeper?
>>
>
>