Re: about replication
I just noticed you are on Cygwin. IIRC Windows PIDs are not the same as
Cygwin PIDs so that may be causing the discrepancy. I don't know how well
Hadoop works in Cygwin as I have never tried it. Work is in progress for
native Windows support, but there are no official releases with Windows
support yet. It may be easier to get familiar with a
release <https://www.apache.org/dyn/closer.cgi/hadoop/common/> on Linux
if you are new to it.
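
For reference, Cygwin's ps -W prints both the Cygwin PID and the Windows PID
(WINPID) for each process, which makes the mismatch visible. A minimal check,
assuming the daemons were started from a Cygwin shell:

$ ps -W | head -1        # column header includes PID and WINPID
$ ps -W | grep -i java   # compare the Cygwin PID with the WINPID for each Java daemon

jps is a native Windows JDK tool, so it reports Windows PIDs, while the .pid
files written by the Hadoop shell scripts typically record the Cygwin PID of
the launched process.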
On Wed, Aug 21, 2013 at 10:05 PM, Irfan Sayed <[EMAIL PROTECTED]> wrote:

> thanks
> here is what i did:
> i stopped all the namenodes and datanodes using the ./stop-dfs.sh command,
> then deleted all the pid files for the namenodes and datanodes
>
> started dfs again with the command: "./start-dfs.sh"
>
> when i ran the "jps" command, it shows
>
> Administrator@DFS-DC /cygdrive/c/Java/jdk1.7.0_25/bin
> $ ./jps.exe
> 4536 Jps
> 2076 NameNode
>
> however, when i open the pid file for the namenode, it shows the pid as
> 4560; it should instead show 2076
>
> please suggest
>
> regards
>
>
>
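
A quick way to compare the recorded pid with what jps reports, assuming the
default pid location of /tmp/hadoop-<user>-namenode.pid (the directory differs
if HADOOP_PID_DIR is set):

$ cat /tmp/hadoop-*-namenode.pid   # pid recorded when start-dfs.sh launched the namenode
$ ./jps.exe | grep NameNode        # pid as seen by jps

As noted above, on Cygwin the two can legitimately differ because the pid file
holds a Cygwin PID while jps reports a Windows PID.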
> On Thu, Aug 22, 2013 at 9:59 AM, Arpit Agarwal <[EMAIL PROTECTED]>wrote:
>
>> Most likely there is a stale pid file. Something like
>> \tmp\hadoop-*datanode.pid. You could try deleting it and then restarting
>> the datanode.
>>
>> I haven't read the entire thread so you may have looked at this already.
>>
>> -Arpit
>>
>>
>>
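
A minimal sketch of that suggestion, assuming a Hadoop 1.x layout, the default
/tmp pid directory, and that the commands are run from the Hadoop bin directory
on the affected node:

$ ls /tmp/hadoop-*-datanode.pid        # confirm the stale pid file exists
$ rm /tmp/hadoop-*-datanode.pid        # remove it
$ ./hadoop-daemon.sh start datanode    # restart only the datanode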
>> On Wed, Aug 21, 2013 at 9:22 PM, Irfan Sayed <[EMAIL PROTECTED]>wrote:
>>
>>> the datanode is continuously trying to connect to the namenode but fails
>>>
>>> when i try to run the "jps" command, it says:
>>> $ ./jps.exe
>>> 4584 NameNode
>>> 4016 Jps
>>>
>>> and when i ran "./start-dfs.sh", it says:
>>>
>>> $ ./start-dfs.sh
>>> namenode running as process 3544. Stop it first.
>>> DFS-1: datanode running as process 4076. Stop it first.
>>> localhost: secondarynamenode running as process 4792. Stop it first.
>>>
>>> these two outputs contradict each other
>>> please find the attached logs
>>>
>>> should i attach the conf files as well ?
>>>
>>> regards
>>>
>>>
>>>
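
One way to tell whether those "running as process N. Stop it first." messages
refer to live daemons or stale pid files is kill -0, which only tests for
process existence. A sketch, assuming the default pid file locations; note that
under Cygwin this checks Cygwin PIDs, not Windows PIDs:

$ for f in /tmp/hadoop-*.pid; do kill -0 "$(cat "$f")" 2>/dev/null && echo "$f: alive" || echo "$f: stale"; done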
>>> On Wed, Aug 21, 2013 at 5:28 PM, Mohammad Tariq <[EMAIL PROTECTED]>wrote:
>>>
>>>> Your DN is still not running. Showing me the logs would be helpful.
>>>>
>>>> Warm Regards,
>>>> Tariq
>>>> cloudfront.blogspot.com
>>>>
>>>>
>>>> On Wed, Aug 21, 2013 at 5:11 PM, Irfan Sayed <[EMAIL PROTECTED]>wrote:
>>>>
>>>>> i followed the url and did the steps mentioned in it. i have deployed
>>>>> on the windows platform
>>>>>
>>>>> now, i am able to browse the url http://localhost:50070 (name node),
>>>>> however i am not able to browse http://localhost:50030
>>>>>
>>>>> please refer below
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> i have modified all the config files as mentioned and formatted the
>>>>> hdfs file system as well
>>>>> please suggest
>>>>>
>>>>> regards
>>>>>
>>>>>
>>>>>
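
In Hadoop 1.x, port 50070 is the NameNode web UI and port 50030 is the
JobTracker web UI, so the second URL only responds once the MapReduce daemons
have been started. A minimal check, assuming a 1.x install, with
start-mapred.sh run from the Hadoop bin directory:

$ ./start-mapred.sh                           # starts the JobTracker and TaskTrackers
$ /cygdrive/c/Java/jdk1.7.0_25/bin/jps.exe    # should now also list JobTracker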
>>>>> On Tue, Aug 20, 2013 at 4:14 PM, Irfan Sayed <[EMAIL PROTECTED]>wrote:
>>>>>
>>>>>> thanks. i followed this url:
>>>>>> http://blog.sqltrainer.com/2012/01/installing-and-configuring-apache.html
>>>>>> let me follow the url you gave for the pseudo-distributed setup and
>>>>>> then i will switch to distributed mode
>>>>>>
>>>>>> regards
>>>>>> irfan
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Aug 20, 2013 at 3:23 PM, Mohammad Tariq <[EMAIL PROTECTED]>wrote:
>>>>>>
>>>>>>> You are welcome. Which link have you followed for the
>>>>>>> configuration? Your *core-site.xml* is empty. Remove the property
>>>>>>> *fs.default.name* from *hdfs-site.xml* and add it to *core-site.xml*.
>>>>>>> Remove *mapred.job.tracker* from there as well; it belongs in
>>>>>>> *mapred-site.xml*.
>>>>>>>
>>>>>>> I would suggest you do a pseudo-distributed setup first in order
>>>>>>> to get familiar with the process and then proceed to the
>>>>>>> distributed mode. You can visit this link <http://cloudfront.blogspot.in/2012/07/how-to-configure-hadoop.html#.UhM8d2T0-4I> if you need some help. Let me know if you face any issue.
>>>>>>>
>>>>>>> HTH
>>>>>>>
>>>>>>> Warm Regards,
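
A minimal sketch of the layout described above, assuming a Hadoop 1.x conf
directory and run from the Hadoop install directory; the localhost:9000 and
localhost:9001 values are illustrative placeholders, not values taken from
this thread:

$ cat > conf/core-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
$ cat > conf/mapred-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
EOF

hdfs-site.xml then keeps only HDFS-specific properties such as dfs.replication.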
