Accumulo, mail # user - remote accumulo instance issue


Re: remote accumulo instance issue
John Vines 2013-05-08, 16:43
What version of Accumulo are you running?

Sent from my phone, please pardon the typos and brevity.
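
One quick way to answer that question from the client side is to print the version baked into the accumulo-core jar the application is built against. This is only a sketch: it assumes a 1.4/1.5-era jar where org.apache.accumulo.core.Constants exposes a VERSION field, and the class name here is made up for illustration.

// Hypothetical one-off check of the Accumulo version on the client classpath.
// Assumes a 1.4/1.5-era accumulo-core jar where Constants.VERSION is defined.
import org.apache.accumulo.core.Constants;

public class ClientVersionCheck {
    public static void main(String[] args) {
        // Prints the version string of the accumulo-core jar the client
        // application is compiled and run against.
        System.out.println("accumulo-core on the client classpath: " + Constants.VERSION);
    }
}

Comparing that value against the version the cluster reports (for example on the monitor page or in the tserver logs) shows whether the client jars match the servers.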
On May 8, 2013 12:38 PM, "Marc Reichman" <[EMAIL PROTECTED]>
wrote:

> I can't find anything wrong with the networking. Here is the whole error
> with stack trace:
> 2057 [main] WARN org.apache.accumulo.core.client.impl.ServerClient  -
> Failed to find an available server in the list of servers:
> [192.168.1.164:9997:9997 (120000), 192.168.1.192:9997:9997 (120000),
> 192.168.1.194:9997:9997 (120000), 192.168.1.162:9997:9997 (120000),
> 192.168.1.190:9997:9997 (120000), 192.168.1.166:9997:9997 (120000),
> 192.168.1.168:9997:9997 (120000), 192.168.1.196:9997:9997 (120000)]
> Exception in thread "main" java.lang.IncompatibleClassChangeError: Implementing class
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
> at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
> at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:146)
> at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:123)
> at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:105)
> at org.apache.accumulo.core.client.impl.ServerClient.execute(ServerClient.java:71)
> at org.apache.accumulo.core.client.impl.ConnectorImpl.<init>(ConnectorImpl.java:75)
> at org.apache.accumulo.core.client.ZooKeeperInstance.getConnector(ZooKeeperInstance.java:218)
> at org.apache.accumulo.core.client.ZooKeeperInstance.getConnector(ZooKeeperInstance.java:206)
>
> Running on JDK 1.6.0_27
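
For context on the bottom of that trace: the failure surfaces inside the ordinary client connection path through ZooKeeperInstance.getConnector. A minimal sketch of that path, assuming the 1.4-era byte[]-password API, is below; the instance name, user, and password are placeholders, and only the ZooKeeper address padres.home:2181 comes from the log lines quoted further down. An IncompatibleClassChangeError thrown while classes are being defined usually points to mismatched jar versions on the client classpath (for example Accumulo or Thrift jars that differ from the cluster's) rather than a networking problem, which is consistent with the version question above.

// Minimal sketch of the client connection path seen in the stack trace,
// assuming the 1.4-era ZooKeeperInstance / byte[]-password API.
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Instance;
import org.apache.accumulo.core.client.ZooKeeperInstance;

public class RemoteConnectSketch {
    public static void main(String[] args) throws Exception {
        // ZooKeeper quorum taken from the log output quoted further down;
        // the instance name is a placeholder.
        Instance instance = new ZooKeeperInstance("myInstance", "padres.home:2181");

        // getConnector() is the call at the bottom of the stack trace; the
        // IncompatibleClassChangeError is thrown while the implementation
        // classes it needs are being loaded. Credentials are placeholders.
        Connector connector = instance.getConnector("user", "password".getBytes());
        System.out.println(connector.tableOperations().list());
    }
}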
>
>
> On Wed, May 8, 2013 at 10:38 AM, Keith Turner <[EMAIL PROTECTED]> wrote:
>
>>
>>
>>
>> On Wed, May 8, 2013 at 11:09 AM, Marc Reichman <
>> [EMAIL PROTECTED]> wrote:
>>
>>> I have seen this reported as ticket ACCUMULO-687, which has been marked
>>> resolved, but I still see this issue.
>>>
>>> I am connecting to a remote Accumulo instance to query and to launch
>>> MapReduce jobs using AccumuloRowInputFormat, and I'm seeing an error like:
>>>
>>> 91 [main-SendThread(padres.home:2181)] INFO
>>> org.apache.zookeeper.ClientCnxn  - Socket connection established to
>>> padres.home/192.168.1.160:2181, initiating session
>>> 166 [main-SendThread(padres.home:2181)] INFO
>>> org.apache.zookeeper.ClientCnxn  - Session establishment complete on server
>>> padres.home/192.168.1.160:2181, sessionid = 0x13e7b48f9d17af7,
>>> negotiated timeout = 30000
>>> 1889 [main] WARN org.apache.accumulo.core.client.impl.ServerClient  -
>>> Failed to find an available server in the list of servers:
>>> [192.168.1.164:9997:9997 (120000), 192.168.1.192:9997:9997 (120000),
>>> 192.168.1.194:9997:9997 (120000), 192.168.1.162:9997:9997 (120000),
>>> 192.168.1.190:9997:9997 (120000), 192.168.1.166:9997:9997 (120000),
>>> 192.168.1.168:9997:9997 (120000), 192.168.1.196:9997:9997 (120000)]
>>>
>>> My ZooKeeper "tservers" node looks like:
>>> [zk: localhost:2181(CONNECTED) 1] ls
>>> /accumulo/908a756e-1c81-4bea-a4de-675456499a10/tservers
>>> [192.168.1.164:9997, 192.168.1.192:9997, 192.168.1.194:9997,
>>> 192.168.1.162:9997, 192.168.1.190:9997, 192.168.1.166:9997,
>>> 192.168.1.168:9997, 192.168.1.196:9997]
>>>
>>> My masters and slaves files look like:
>>> [hadoop@padres conf]$ cat masters
>>> 192.168.1.160
>>> [hadoop@padres conf]$ cat slaves
>>> 192.168.1.162
>>> 192.168.1.164