Hadoop >> mail # user >>


Anit Alexander 2013-07-13, 02:51
Suresh Srinivas 2013-07-13, 05:38
闫昆 2013-07-16, 00:54
Anit Alexander 2013-07-16, 03:15
Mohammad Tariq 2013-07-17, 19:52
Anit Alexander 2013-07-19, 07:40
Glad to hear that :)

Warm Regards,
Tariq
cloudfront.blogspot.com
On Fri, Jul 19, 2013 at 1:10 PM, Anit Alexander <[EMAIL PROTECTED]> wrote:

> Hello Tariq,
> I solved the problem. There must have been some problem in the custom
> input format I created, so I took a sample custom input format that was
> working in the CDH4 environment and applied the changes as per my
> requirements. It is working now. But I haven't tested that code in an
> Apache Hadoop environment yet :)
>
> Regards,
> Anit
>
>
> On Thu, Jul 18, 2013 at 1:22 AM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>
>> Hello Anit,
>>
>> Could you show me the exact error log?
>>
>> Warm Regards,
>> Tariq
>> cloudfront.blogspot.com
>>
>>
>> On Tue, Jul 16, 2013 at 8:45 AM, Anit Alexander <[EMAIL PROTECTED]> wrote:
>>
>>> Yes, I did recompile, but I seem to face the same problem. I am running
>>> the MapReduce job with a custom input format. I am not sure if there is some
>>> change in the API needed to get the splits correct.
>>>
>>> Regards
>>>
>>>
>>> On Tue, Jul 16, 2013 at 6:24 AM, 闫昆 <[EMAIL PROTECTED]> wrote:
>>>
>>>> I think you should recompile the program before running it
>>>>
>>>>
>>>> 2013/7/13 Anit Alexander <[EMAIL PROTECTED]>
>>>>
>>>>> Hello,
>>>>>
>>>>> I am encountering a problem in a CDH4 environment.
>>>>> I can successfully run the MapReduce job in the Hadoop cluster. But
>>>>> when I migrated the same MapReduce job to my CDH4 environment, it raises an
>>>>> error stating that it cannot read the next block (each block is 64 MB). Why
>>>>> is that so?
>>>>>
>>>>> Hadoop environment: Hadoop 1.0.3
>>>>> Java version 1.6
>>>>>
>>>>> CDH4 environment: CDH4.2.0
>>>>> Java version 1.6
>>>>>
>>>>> Regards,
>>>>> Anit Alexander
>>>>>
>>>>
>>>>
>>>
>>
>