Hadoop >> mail # user >> Re: about replication


Re: about replication
Seriously?? You are planning to develop something using Hadoop on Windows.
Not a good idea. Anyway, could you please show me your log files? I also need
some additional info:
- The exact problem you are facing right now
- Your cluster summary (no. of nodes, etc.)
- Your latest configuration files
- Your /etc/hosts file

Warm Regards,
Tariq
cloudfront.blogspot.com
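Most of the info requested above can be gathered in one pass with a small script. This is only a sketch: the `HADOOP_HOME` default and the datanode log location are assumptions about a typical install, and `jps` may not be on the PATH.

```shell
# Sketch: collect the requested diagnostics into one file.
# HADOOP_HOME and the log directory are assumed defaults; adjust as needed.
collect_info() {
  out=$1
  {
    echo "== /etc/hosts =="
    cat /etc/hosts
    echo "== running Java processes =="
    if command -v jps >/dev/null 2>&1; then jps; else echo "jps not on PATH"; fi
    echo "== last datanode log lines =="
    tail -n 50 "${HADOOP_HOME:-/usr/local/hadoop}"/logs/*datanode*.log 2>/dev/null \
      || echo "no datanode log found"
  } > "$out"
}

collect_info /tmp/hadoop-diagnostics.txt
```

Attaching the resulting file to a reply is usually easier than pasting each piece separately.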
On Fri, Aug 23, 2013 at 1:42 AM, Irfan Sayed <[EMAIL PROTECTED]> wrote:

> ok, thanks.
> now I need to start with an all-Windows setup first, as our product will
> be based on Windows.
> so, please tell me how to resolve the issue.
>
> the datanode is not starting. please suggest.
>
> regards,
> irfan
>
>
>
> On Thu, Aug 22, 2013 at 7:56 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>
>> It is possible. Theoretically Hadoop doesn't stop you from doing that.
>> But it is not a very wise setup.
>>
>> Warm Regards,
>> Tariq
>> cloudfront.blogspot.com
>>
>>
>> On Thu, Aug 22, 2013 at 5:01 PM, Irfan Sayed <[EMAIL PROTECTED]> wrote:
>>
>>> please suggest
>>>
>>> regards
>>> irfan
>>>
>>>
>>>
>>> On Thu, Aug 22, 2013 at 11:49 AM, Irfan Sayed <[EMAIL PROTECTED]> wrote:
>>>
>>>> thanks.
>>>> can I have a setup like this:
>>>> the namenode on Linux (RHEL, CentOS, Ubuntu, etc.)
>>>> and the datanodes on any mix of operating systems (Windows, Linux,
>>>> UNIX, etc.)?
>>>>
>>>> however, my doubt is: since the file systems of the two (Windows and
>>>> Linux) are different, can datanodes on these systems not be part of a
>>>> single cluster? do I have to make a separate Windows cluster and a
>>>> separate UNIX cluster?
>>>>
>>>> regards
>>>>
>>>>
>>>>
>>>> On Thu, Aug 22, 2013 at 11:26 AM, Arpit Agarwal <
>>>> [EMAIL PROTECTED]> wrote:
>>>>
>>>>> I just noticed you are on Cygwin. IIRC Windows PIDs are not the same
>>>>> as Cygwin PIDs so that may be causing the discrepancy. I don't know how
>>>>> well Hadoop works in Cygwin as I have never tried it. Work is in progress
>>>>> for native Windows support however there are no official releases with
>>>>> Windows support yet. It may be easier to get familiar with a release
>>>>> <https://www.apache.org/dyn/closer.cgi/hadoop/common/> on Linux if you
>>>>> are new to it.
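On the Cygwin PID point: Cygwin's `ps -W` prints both identifiers side by side, the Cygwin PID in the first column and the Windows PID (WINPID) in the fourth. A small helper to map between them; this is a sketch that assumes Cygwin's column layout, and it reads the `ps -W` output from stdin so the parsing can be tried on any system.

```shell
# Sketch: map a Windows PID (WINPID) back to its Cygwin PID by parsing
# `ps -W` output. Column positions (PID=1, WINPID=4) follow Cygwin's ps.
winpid_to_pid() {
  awk -v w="$1" 'NR > 1 && $4 == w { print $1 }'
}

# On an actual Cygwin shell you would pipe live output:
#   ps -W | winpid_to_pid 2076
```

Comparing the two columns for the NameNode's java process would show whether the number in the pid file is a Cygwin PID, a Windows PID, or neither.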
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Aug 21, 2013 at 10:05 PM, Irfan Sayed <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> thanks.
>>>>>> here is what I did:
>>>>>> I stopped all the namenodes and datanodes using the ./stop-dfs.sh
>>>>>> command, then deleted all the pid files for the namenodes and datanodes,
>>>>>>
>>>>>> and started DFS again with the command "./start-dfs.sh".
>>>>>>
>>>>>> when I ran the "jps" command, it showed:
>>>>>>
>>>>>> Administrator@DFS-DC /cygdrive/c/Java/jdk1.7.0_25/bin
>>>>>> $ ./jps.exe
>>>>>> 4536 Jps
>>>>>> 2076 NameNode
>>>>>>
>>>>>> however, when I open the pid file for the namenode, it shows the pid
>>>>>> as 4560; on the contrary, it should show 2076.
>>>>>>
>>>>>> please suggest
>>>>>>
>>>>>> regards
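The mismatch described above, the pid file disagreeing with what `jps` reports, can be checked mechanically. A sketch of one way to do it; the helper takes the list of live pids as an argument rather than calling `jps` itself, so it works even where `jps` is unavailable.

```shell
# Sketch: compare the pid recorded in a Hadoop pid file against a
# space-separated list of live pids (e.g. the first column of `jps`).
check_pid() {
  recorded=$(cat "$1")
  case " $2 " in
    *" $recorded "*) echo "consistent: pid $recorded is live" ;;
    *)               echo "mismatch: pid $recorded is not in the live list" ;;
  esac
}

# Usage against real jps output might look like (path is hypothetical):
#   check_pid /tmp/hadoop-namenode.pid "$(jps | awk '{print $1}' | tr '\n' ' ')"
```

A "mismatch" result is the classic symptom of a stale pid file or, under Cygwin, of Windows and Cygwin PIDs being conflated.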
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Aug 22, 2013 at 9:59 AM, Arpit Agarwal <
>>>>>> [EMAIL PROTECTED]> wrote:
>>>>>>
>>>>>>> Most likely there is a stale pid file. Something like
>>>>>>> \tmp\hadoop-*datanode.pid. You could try deleting it and then restarting
>>>>>>> the datanode.
>>>>>>>
>>>>>>> I haven't read the entire thread so you may have looked at this
>>>>>>> already.
>>>>>>>
>>>>>>> -Arpit
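The stale-pid-file cleanup suggested above can be scripted. A minimal sketch: the pid-file path is hypothetical, and `kill -0` is used only to probe whether the recorded process still exists (it sends no actual signal).

```shell
# Sketch: remove a pid file only if the process it records is no longer
# running. `kill -0 PID` succeeds iff a process with that pid exists.
clear_stale_pid() {
  pid_file=$1
  [ -f "$pid_file" ] || return 0
  pid=$(cat "$pid_file")
  if kill -0 "$pid" 2>/dev/null; then
    echo "pid $pid still running; stop it before deleting $pid_file"
  else
    echo "removing stale pid file (pid $pid is dead)"
    rm -f "$pid_file"
  fi
}

# Hypothetical path matching the pattern mentioned above:
clear_stale_pid /tmp/hadoop-irfan-datanode.pid
```

After clearing the stale file, restarting the datanode with ./start-dfs.sh should no longer report "datanode running as process N. Stop it first."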
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Aug 21, 2013 at 9:22 PM, Irfan Sayed <[EMAIL PROTECTED]> wrote:
>>>>>>>
>>>>>>>> the datanode is trying to connect to the namenode continuously but fails.
>>>>>>>>
>>>>>>>> when I run the "jps" command it says:
>>>>>>>> $ ./jps.exe
>>>>>>>> 4584 NameNode
>>>>>>>> 4016 Jps
>>>>>>>>
>>>>>>>> and when I ran "./start-dfs.sh" it said:
>>>>>>>>
>>>>>>>> $ ./start-dfs.sh
>>>>>>>> namenode running as process 3544. Stop it first.
>>>>>>>> DFS-1: datanode running as process 4076. Stop it first.
>>>>>>>> localhost: secondarynamenode running as process 4792. Stop it first.
>>>>>>>>
>>>>>>>> the jps output and the start-dfs.sh output contradict each other.
>>>