Re: remote accumulo instance issue
These are from the client machine:
(9997 on a tserver)
[mreichman@packers: ~]$ nmap -p 9997 192.168.1.162

Starting Nmap 5.51 ( http://nmap.org ) at 2013-05-08 16:35 ric
Nmap scan report for giants.home (192.168.1.162)
Host is up (0.0063s latency).
PORT     STATE SERVICE
9997/tcp open  unknown
MAC Address: 7A:79:C0:A8:01:A2 (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 0.60 seconds

(2181 zookeeper on the master)
[mreichman@packers: ~]$ nmap -p 2181 192.168.1.160

Starting Nmap 5.51 ( http://nmap.org ) at 2013-05-08 16:35 ric
Nmap scan report for padres.home (192.168.1.160)
Host is up (0.0071s latency).
PORT     STATE SERVICE
2181/tcp open  unknown
MAC Address: 7A:79:C0:A8:01:A0 (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 0.56 seconds

Any chance this could be related to DNS or reverse DNS?
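
As a quick check of what the client JVM itself resolves, something like the
following could be run from the client (a rough sketch; the hostname and IP
are just the ones from the nmap output above):

import java.net.InetAddress;

public class DnsCheck {
    public static void main(String[] args) throws Exception {
        // Forward lookup of a tserver hostname via the client's default resolver.
        InetAddress fwd = InetAddress.getByName("giants.home");
        System.out.println("giants.home -> " + fwd.getHostAddress());

        // Reverse lookup of a tserver IP; if the client's default resolver is
        // not the remote network's resolver, this may not give the expected name.
        InetAddress rev = InetAddress.getByName("192.168.1.162");
        System.out.println("192.168.1.162 -> " + rev.getCanonicalHostName());
    }
}
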
On Wed, May 8, 2013 at 10:25 AM, John Vines <[EMAIL PROTECTED]> wrote:

> Is that remote instance behind a firewall or anything like that?
>
>
> On Wed, May 8, 2013 at 11:09 AM, Marc Reichman <
> [EMAIL PROTECTED]> wrote:
>
>> I have seen this reported as ticket ACCUMULO-687, which has been marked
>> resolved, but I still see the issue.
>>
>> I am connecting to a remote accumulo instance to query and to launch
>> mapreduce jobs using AccumuloRowInputFormat, and I'm seeing an error like:
>>
>> 91 [main-SendThread(padres.home:2181)] INFO
>> org.apache.zookeeper.ClientCnxn  - Socket connection established to
>> padres.home/192.168.1.160:2181, initiating session
>> 166 [main-SendThread(padres.home:2181)] INFO
>> org.apache.zookeeper.ClientCnxn  - Session establishment complete on server
>> padres.home/192.168.1.160:2181, sessionid = 0x13e7b48f9d17af7,
>> negotiated timeout = 30000
>> 1889 [main] WARN org.apache.accumulo.core.client.impl.ServerClient  -
>> Failed to find an available server in the list of servers:
>> [192.168.1.164:9997:9997 (120000), 192.168.1.192:9997:9997 (120000),
>> 192.168.1.194:9997:9997 (120000), 192.168.1.162:9997:9997 (120000),
>> 192.168.1.190:9997:9997 (120000), 192.168.1.166:9997:9997 (120000),
>> 192.168.1.168:9997:9997 (120000), 192.168.1.196:9997:9997 (120000)]
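>>
>> For reference, the client connection is set up roughly like this (a rough
>> sketch against the 1.4/1.5-era API; instance name, user, and password are
>> placeholders, not the real ones):
>>
>> import org.apache.accumulo.core.client.Connector;
>> import org.apache.accumulo.core.client.Instance;
>> import org.apache.accumulo.core.client.ZooKeeperInstance;
>>
>> public class RemoteConnectTest {
>>     public static void main(String[] args) throws Exception {
>>         // ZooKeeper quorum is the master host from this setup.
>>         Instance instance = new ZooKeeperInstance("myinstance", "padres.home:2181");
>>         // getConnector() authenticates against a tserver picked from the
>>         // ZooKeeper list, so the "Failed to find an available server"
>>         // warning above surfaces at this point.
>>         Connector connector = instance.getConnector("root", "secret".getBytes());
>>         System.out.println("Connected to " + instance.getInstanceName());
>>     }
>> }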
>>
>> My zookeeper's "tservers" key looks like:
>> [zk: localhost:2181(CONNECTED) 1] ls
>> /accumulo/908a756e-1c81-4bea-a4de-675456499a10/tservers
>> [192.168.1.164:9997, 192.168.1.192:9997, 192.168.1.194:9997,
>> 192.168.1.162:9997, 192.168.1.190:9997, 192.168.1.166:9997,
>> 192.168.1.168:9997, 192.168.1.196:9997]
>>
>> My masters and slaves files look like:
>> [hadoop@padres conf]$ cat masters
>> 192.168.1.160
>> [hadoop@padres conf]$ cat slaves
>> 192.168.1.162
>> 192.168.1.164
>> 192.168.1.166
>> 192.168.1.168
>> 192.168.1.190
>> 192.168.1.192
>> 192.168.1.194
>> 192.168.1.196
>>
>> tracers, gc, and monitor are the same as masters.
>>
>> I have no issues executing on the master, but I would like to work from a
>> remote host. The remote host is on a VPN, and its default resolver is NOT
>> the remote network's resolver. If I do a reverse lookup over the VPN
>> *using* the remote resolver, it shows the proper hostnames.
>>
>> My concern is that something is taking the "host:port" entry and appending
>> the port to it again, producing this concatenated host:port:port form,
>> which is obviously not going to work.
>>
>> What else can I try? I previously had hostnames in the
>> masters/slaves/etc. files but now have the IPs. Should I re-init the
>> instance to see if it changes anything in zookeeper?
>>
>
>