

David Medinets 2012-07-10, 21:19
Re: [jira] [Commented] (ACCUMULO-683) Accumulo ignores HDFS max replication configuration
+1

Sent from my iPhone

On Jul 16, 2012, at 5:14 PM, "John Vines (JIRA)" <[EMAIL PROTECTED]> wrote:

>
>    [ https://issues.apache.org/jira/browse/ACCUMULO-683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13415640#comment-13415640 ]
>
> John Vines commented on ACCUMULO-683:
> -------------------------------------
>
> Unless there are any objections, I'm going to implement this as an initialization-time prompt that will check both the max and the min and ask which setting the user wants iff the default of 5 is not within bounds.
>
>> Accumulo ignores HDFS max replication configuration
>> ---------------------------------------------------
>>
>>                Key: ACCUMULO-683
>>                URL: https://issues.apache.org/jira/browse/ACCUMULO-683
>>            Project: Accumulo
>>         Issue Type: Bug
>>         Components: tserver
>>   Affects Versions: 1.4.1
>>           Reporter: Jim Klucar
>>           Assignee: Keith Turner
>>           Priority: Minor
>>
>> I set up a new 1.4.1 instance running on top of a Hadoop installation that had the maximum block replication set to 3, and the following error showed up on the monitor page.
>> java.io.IOException: failed to create file /accumulo/tables/!0/table_info/F0000001.rf_tmp on client 127.0.0.1. Requested replication 5 exceeds maximum 3
>>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1220)
>>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1123)
>>        at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:551)
>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
>>        at java.security.AccessController.doPrivileged(Native Method)
>>        at javax.security.auth.Subject.doAs(Subject.java:396)
>>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
>> Tablet server error is:
>> 10 10:56:25,408 [tabletserver.MinorCompactor] WARN : MinC failed (java.io.IOException: failed to create file /accumulo/tables/!0/table_info/F0000001.rf_tmp on client 127.0.0.1. Requested replication 5 exceeds maximum 3
>>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1220)
>>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1123)
>>        at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:551)
>>        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
>>        at java.security.AccessController.doPrivileged(Native Method)
>>        at javax.security.auth.Subject.doAs(Subject.java:396)
>>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
>> ) to create /accumulo/tables/!0/table_info/F0000001.rf_tmp retrying ...
>
> --
> This message is automatically generated by JIRA.
> If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
> For more information on JIRA, see: http://www.atlassian.com/software/jira
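
A rough sketch of the initialization-time check proposed in the comment above, assuming the bounds are read from Hadoop's Configuration; the class name, helper method, and prompt wiring are hypothetical illustrations, not Accumulo's actual code:

    import java.io.Console;
    import org.apache.hadoop.conf.Configuration;

    public class ReplicationBoundsCheck {
        // Accumulo's default replication for metadata table files, per the report above.
        private static final int DEFAULT_REPLICATION = 5;

        /**
         * Returns a replication count HDFS will accept, prompting the user
         * if and only if the default of 5 falls outside [min, max].
         */
        public static int chooseReplication(Configuration hdfsConf) {
            // HDFS bounds; 512 and 1 are the stock defaults in this era of Hadoop.
            int max = hdfsConf.getInt("dfs.replication.max", 512);
            int min = hdfsConf.getInt("dfs.replication.min", 1);

            if (DEFAULT_REPLICATION >= min && DEFAULT_REPLICATION <= max) {
                return DEFAULT_REPLICATION;
            }

            // Default is out of bounds: ask the operator what to use instead.
            Console console = System.console();
            if (console == null) {
                throw new IllegalStateException("No console available to prompt for replication");
            }
            String answer = console.readLine(
                "Default replication %d is outside HDFS bounds [%d, %d]; enter a value to use: ",
                DEFAULT_REPLICATION, min, max);
            int chosen = Integer.parseInt(answer.trim());
            if (chosen < min || chosen > max) {
                throw new IllegalArgumentException(
                    "Replication " + chosen + " is still outside [" + min + ", " + max + "]");
            }
            return chosen;
        }
    }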
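
As an immediate workaround for the failure quoted in the issue description, assuming this 1.4 build exposes the per-table table.file.replication property, the metadata table's requested replication could be lowered to fit under the HDFS cap of 3, for example through the client API (the instance name, ZooKeeper host, and credentials below are placeholders):

    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.Instance;
    import org.apache.accumulo.core.client.ZooKeeperInstance;

    public class LowerMetadataReplication {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the affected cluster.
            Instance instance = new ZooKeeperInstance("myInstance", "zkhost:2181");
            Connector connector = instance.getConnector("root", "secret".getBytes());

            // Ask Accumulo to write !METADATA files with replication 3 instead of 5,
            // matching the dfs.replication.max reported in the stack trace above.
            connector.tableOperations().setProperty("!METADATA", "table.file.replication", "3");
        }
    }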