HDFS user mailing list: Re: Error for larger jobs


Azuryy Yu 2013-11-28, 00:10
Siddharth Tiwari 2013-11-28, 00:20
Vinayakumar B 2013-11-28, 01:08
Azuryy Yu 2013-11-28, 01:44
Siddharth Tiwari 2013-11-28, 01:59
Azuryy Yu 2013-11-28, 02:04
Re: Error for larger jobs
Siddharth:
Take a look at section 2.1.2.5, "ulimit and nproc", under
http://hbase.apache.org/book.html#os

Cheers
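
The ulimit/nproc advice above can be checked from a shell. A minimal sketch follows; the `hadoop` account name and the numeric values are assumptions to illustrate the idea, not prescriptions, so tune them for your cluster:

```shell
# Inspect the current per-user limits for the account that runs
# the Hadoop daemons (run this as that user):
ulimit -u   # max user processes (nproc)
ulimit -n   # max open files (nofile)

# To raise the limits persistently, an entry in
# /etc/security/limits.conf is the usual alternative to /etc/profile.
# Example entries (illustrative values for an assumed "hadoop" user):
#   hadoop  -  nofile  32768
#   hadoop  -  nproc   32000
```

After editing limits.conf, the user typically has to log in again (or the daemon be restarted) for the new limits to take effect.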
On Wed, Nov 27, 2013 at 6:04 PM, Azuryy Yu <[EMAIL PROTECTED]> wrote:

> Yes, you need to increase it. A simple way is to put it in your /etc/profile.
>
>
>
>
> On Thu, Nov 28, 2013 at 9:59 AM, Siddharth Tiwari <
> [EMAIL PROTECTED]> wrote:
>
>> Hi Vinay and Azuryy
>> Thanks for your responses.
>> I get these error when I just run a teragen.
>> Also, do you suggest I increase the nproc value? What should I increase
>> it to?
>>
>> Sent from my iPad
>>
>> On Nov 27, 2013, at 11:08 PM, "Vinayakumar B" <[EMAIL PROTECTED]>
>> wrote:
>>
>>  Hi Siddharth,
>>
>>
>>
>> Looks like the issue is with one of the machines. Or is it happening on
>> other machines as well?
>>
>>
>>
>> I don’t think it’s a problem with JVM heap memory.
>>
>>
>>
>> I suggest you check this:
>>
>> http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11
>>
>>
>>
>> Thanks and Regards,
>>
>> Vinayakumar B
>>
>>
>>
>> *From:* Siddharth Tiwari [mailto:[EMAIL PROTECTED]]
>>
>> *Sent:* 28 November 2013 05:50
>> *To:* USers Hadoop
>> *Subject:* RE: Error for larger jobs
>>
>>
>>
>> Hi Azuryy
>>
>>
>>
>> Thanks for the response. I have plenty of space on my disks, so that cannot
>> be the issue.
>>
>>
>> **------------------------**
>> *Cheers !!!*
>> *Siddharth Tiwari*
>> Have a refreshing day !!!
>> *"Every duty is holy, and devotion to duty is the highest form of worship
>> of God.” *
>> *"Maybe other people will try to limit me but I don't limit myself"*
>>
>>   ------------------------------
>>
>> Date: Thu, 28 Nov 2013 08:10:06 +0800
>> Subject: Re: Error for larger jobs
>> From: [EMAIL PROTECTED]
>> To: [EMAIL PROTECTED]
>>
>> Judging from the log, your disk is full.
>>
>> On 2013-11-28 5:27 AM, "Siddharth Tiwari" <[EMAIL PROTECTED]>
>> wrote:
>>
>> Hi Team
>>
>>
>>
>> I am getting the following strange error; can you point me to the possible
>> reason?
>>
>> I have set the heap size to 4GB but am still getting the error. Please help.
>>
>>
>>
>> *syslog logs*
>>
>> 2013-11-27 19:01:50,678 WARN org.apache.hadoop.util.NativeCodeLoader:
>> Unable to load native-hadoop library for your platform... using
>> builtin-java classes where applicable
>>
>> 2013-11-27 19:01:51,051 WARN mapreduce.Counters: Group
>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
>> org.apache.hadoop.mapreduce.TaskCounter instead
>>
>> 2013-11-27 19:01:51,539 WARN org.apache.hadoop.conf.Configuration:
>> session.id is deprecated. Instead, use dfs.metrics.session-id
>>
>> 2013-11-27 19:01:51,540 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>> Initializing JVM Metrics with processName=MAP, sessionId=
>>
>> 2013-11-27 19:01:51,867 INFO org.apache.hadoop.util.ProcessTree: setsid
>> exited with exit code 0
>>
>> 2013-11-27 19:01:51,870 INFO org.apache.hadoop.mapred.Task:  Using
>> ResourceCalculatorPlugin :
>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4a0bd13d
>>
>> 2013-11-27 19:01:52,217 INFO org.apache.hadoop.mapred.MapTask: Processing
>> split:
>> org.apache.hadoop.examples.terasort.TeraGen$RangeInputFormat$RangeInputSplit@6c30aec7
>>
>> 2013-11-27 19:01:52,222 WARN mapreduce.Counters: Counter name
>> MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name
>> and  BYTES_READ as counter name instead
>>
>> 2013-11-27 19:01:52,226 INFO org.apache.hadoop.mapred.MapTask:
>> numReduceTasks: 0
>>
>> 2013-11-27 19:01:52,250 ERROR
>> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
>> as:hadoop (auth:SIMPLE) cause:java.io.IOException: Cannot run program
>> "chmod": error=11, Resource temporarily unavailable
>>
>> 2013-11-27 19:01:52,250 WARN org.apache.hadoop.mapred.Child: Error
>> running child
>>
>> java.io.IOException: Cannot run program "chmod": error=11, Resource
>> temporarily unavailable
>>
>>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
>>
>>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
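
In the trace above, "error=11, Resource temporarily unavailable" is EAGAIN returned by fork() when the JVM tries to spawn "chmod", which points to an exhausted per-user process limit rather than a heap problem. A minimal sketch to compare the live process count against that limit (shown for the current user as an example; substitute the account the tasks actually run as):

```shell
# EAGAIN from fork() usually means the nproc limit is exhausted.
# Count the processes owned by a user and show that user's limit.
USER_NAME=$(whoami)   # substitute the task-runner account here
PROC_COUNT=$(ps -u "$USER_NAME" -o pid= | wc -l)
echo "processes for $USER_NAME: $PROC_COUNT"
echo "nproc limit: $(ulimit -u)"
```

If the count is at or near the limit during large jobs (each map/reduce task is its own JVM process plus any child shells), raising nproc as discussed earlier in the thread is the likely fix.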
Siddharth Tiwari 2013-11-28, 02:17