Hadoop >> mail # user


Thread:
  Lanati, Matteo (2013-06-01, 13:57)
  Harsh J (2013-06-01, 15:49)
  Shahab Yunus (2013-06-01, 16:35)
  Harsh J (2013-06-01, 16:43)
  Shahab Yunus (2013-06-01, 19:23)
  Azuryy Yu (2013-06-03, 13:02)
  Lanati, Matteo (2013-06-03, 22:33)
  Azuryy Yu (2013-06-04, 01:27)
  Harsh J (2013-06-04, 01:46)
  Azuryy Yu (2013-06-04, 01:51)
  Lanati, Matteo (2013-06-04, 15:34)
  Alexander Alten-Lorenz (2013-06-04, 15:42)
  Lanati, Matteo (2013-06-04, 16:11) - expanded below
Hi all,

I finally solved the problem. It was due to the cloud middleware I used to run the Hadoop VMs.
The domain type in the libvirt XML file was incorrectly set to 'qemu'. Once I changed it to 'kvm', everything started to work properly.
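For reference, the change amounts to editing the type attribute of the domain element in the libvirt domain XML. This is a minimal sketch; the rest of the domain definition (shown elided) is specific to each setup:

```xml
<!-- Before: 'qemu' means pure software emulation (very slow) -->
<domain type='qemu'>
  ...
</domain>

<!-- After: 'kvm' enables hardware-accelerated virtualization -->
<domain type='kvm'>
  ...
</domain>
```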
Thanks for the support.

Matteo
On Jun 4, 2013, at 5:43 PM, Alexander Alten-Lorenz <[EMAIL PROTECTED]> wrote:

> Hi Matteo,
>
> Are you able to add more space to your test machines? Also, what does the pi example say (hadoop jar hadoop-examples pi 10 10)?
>
> - Alex
>
> On Jun 4, 2013, at 4:34 PM, "Lanati, Matteo" <[EMAIL PROTECTED]> wrote:
>
>> Hi again,
>>
>> unfortunately my problem is not solved.
>> I downloaded Hadoop v. 1.1.2 and made a basic configuration as suggested in [1].
>> No security, no ACLs, default scheduler ... The files are attached.
>> I still have the same error message. I also tried another Java version (6u45 instead of 7u21).
>> How can I increase the debug level to have a deeper look?
>> Thanks,
>>
>> Matteo
>>
>>
>> [1] http://hadoop.apache.org/docs/r1.1.2/cluster_setup.html#Cluster+Restartability
>> On Jun 4, 2013, at 3:52 AM, Azuryy Yu <[EMAIL PROTECTED]> wrote:
>>
>>> Hi Harsh,
>>>
>>> I need to take care of my eyes; I misread 1.2.0 as 1.0.2, so I suggested an upgrade. Sorry.
>>>
>>>
>>> On Tue, Jun 4, 2013 at 9:46 AM, Harsh J <[EMAIL PROTECTED]> wrote:
>>> Azuryy,
>>>
>>> 1.1.2 < 1.2.0. It's not an upgrade you're suggesting there. If you feel
>>> there's been a regression, can you comment that on the JIRA?
>>>
>>> On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu <[EMAIL PROTECTED]> wrote:
>>>> yes. hadoop-1.1.2 was released on Jan. 31st. just download it.
>>>>
>>>>
>>>> On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>> Hi Azuryy,
>>>>>
>>>>> thanks for the update. Sorry for the silly question, but where can I
>>>>> download the patched version?
>>>>> If I look into the closest mirror (i.e.
>>>>> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the
>>>>> Hadoop 1.1.2 version was last updated on Jan. 31st.
>>>>> Thanks in advance,
>>>>>
>>>>> Matteo
>>>>>
>>>>> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
>>>>> any security, and the problem is there.
>>>>>
>>>>> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> Can you upgrade to 1.1.2? It is also a stable release, and it fixes the
>>>>>> bug you are facing now.
>>>>>>
>>>>>> --Send from my Sony mobile.
>>>>>>
>>>>>> On Jun 2, 2013 3:23 AM, "Shahab Yunus" <[EMAIL PROTECTED]> wrote:
>>>>>> Thanks, Harsh, for the reply. I was confused too about why security is
>>>>>> causing this.
>>>>>>
>>>>>> Regards,
>>>>>> Shahab
>>>>>>
>>>>>>
>>>>>> On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <[EMAIL PROTECTED]> wrote:
>>>>>> Shahab - I see he has mentioned generally that security is enabled
>>>>>> (but not that it happens iff security is enabled), and the issue here
>>>>>> doesn't have anything to do with security really.
>>>>>>
>>>>>> Azuryy - Let's discuss the code issues on the JIRA (instead of here) or
>>>>>> on the mapreduce-dev lists.
>>>>>>
>>>>>> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <[EMAIL PROTECTED]>
>>>>>> wrote:
>>>>>>> HI Harsh,
>>>>>>>
>>>>>>> Quick question though: why do you think it only happens if the OP
>>>>>>> 'uses
>>>>>>> security' as he mentioned?
>>>>>>>
>>>>>>> Regards,
>>>>>>> Shahab
>>>>>>>
>>>>>>>
>>>>>>> On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <[EMAIL PROTECTED]> wrote:
>>>>>>>>
>>>>>>>> Does smell like a bug, as the number you get is simply
>>>>>>>> Long.MAX_VALUE, or 8 exbibytes.
>>>>>>>>
>>>>>>>> Looking at the sources, this turns out to be a rather funny Java
>>>>>>>> issue
>>>>>>>> (there's a divide by zero happening, and [1] suggests a Long.MAX_VALUE
>>>>>>>> return in such a case). I've logged a bug report for this at
>>>>>>>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>>>>>>>> reproducible case.
>>>
Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748 Garching b. München (Germany)
Phone: +49 89 35831 8724
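The Java behavior Harsh describes can be sketched as follows: dividing a nonzero double by zero yields Infinity, and narrowing Infinity to long yields Long.MAX_VALUE. The estimate() helper below is a hypothetical illustration of that arithmetic, not Hadoop's actual ResourceEstimator code:

```java
// Sketch of the narrowing behavior behind the 8-exbibyte estimate.
// Hypothetical example; not code from Hadoop itself.
public class DivideByZeroDemo {

    // Naively scale total input by the output/input ratio seen so far.
    // When inputSoFar is 0, the double division yields Infinity, and
    // casting Infinity to long yields Long.MAX_VALUE.
    static long estimate(long totalInput, long outputSoFar, long inputSoFar) {
        return (long) (totalInput * ((double) outputSoFar / inputSoFar));
    }

    public static void main(String[] args) {
        System.out.println(estimate(100, 50, 10)); // prints 500 (normal case)
        System.out.println(estimate(100, 50, 0));  // prints 9223372036854775807, i.e. Long.MAX_VALUE
        System.out.println((long) (0.0 / 0.0));    // prints 0 (NaN narrows to 0)
    }
}
```

Note that this only happens with floating-point division; an integral division like 50L / 0L would throw ArithmeticException instead of silently producing a huge value.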
Other messages in this thread:
  Azuryy Yu (2013-06-01, 16:34)
  Azuryy Yu (2013-06-01, 16:35)
  Lanati, Matteo (2013-06-01, 16:11)
  Shahab Yunus (2013-06-01, 14:04)
  Lanati, Matteo (2013-06-01, 14:25)