What is your replication factor?

When you removed node A as a datanode, did you first mark it for
decommissioning? If you simply took it out of service, then the blocks that
lived on that datanode are now missing, and the namenode checks for those
blocks when it starts up. Until the fraction of reported blocks reaches the
safe-mode threshold, the namenode will not let you write any more data to
your HDFS.
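As a rough sketch (assuming a Hadoop 1.x-style installation; the exact command names may differ in your version), you can check the namenode's safe-mode status, and only force it out if you are certain the missing blocks are acceptable:

```shell
# Check whether the namenode is currently in safe mode
hadoop dfsadmin -safemode get

# Block until the namenode leaves safe mode on its own
# (i.e. enough blocks have been reported by datanodes)
hadoop dfsadmin -safemode wait

# Force the namenode out of safe mode -- use with care:
# this does NOT restore blocks that are genuinely missing
hadoop dfsadmin -safemode leave
```

Note that leaving safe mode by force only unblocks writes; any blocks that were lost with node A stay lost unless the datanode comes back.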
I would suggest starting the datanode on A again, then marking it for
decommissioning. The namenode will then replicate its blocks to the new
datanode, and once that is done it will retire node A cleanly.
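The decommissioning steps above might look roughly like this (a sketch; the excludes-file path is hypothetical, and the `dfs.hosts.exclude` property is the usual convention, so check your own `hdfs-site.xml`):

```shell
# 1. Restart the datanode on A so its blocks are visible again
#    (run this on node A)
hadoop-daemon.sh start datanode

# 2. On the namenode, add A's hostname to the excludes file
#    referenced by dfs.hosts.exclude in hdfs-site.xml
#    (/path/to/excludes is a placeholder for your actual file)
echo "A" >> /path/to/excludes

# 3. Tell the namenode to re-read the include/exclude lists;
#    it will begin replicating A's blocks to the other datanodes
hadoop dfsadmin -refreshNodes

# 4. Watch the report until node A shows "Decommissioned",
#    then it is safe to stop the datanode on A for good
hadoop dfsadmin -report
```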
On Wed, Mar 6, 2013 at 3:21 PM, AMARNATH, Balachandar <
[EMAIL PROTECTED]> wrote:
> I have created a Hadoop cluster with two nodes (A and B). ‘A’ acts both as
> namenode and datanode, and ‘B’ acts as datanode only. With this setup, I
> could store and read files. Now, I added one more datanode ‘C’ and relieved
> ‘A’ of datanode duty. This means ‘A’ acts only as namenode, and both B
> and C act as datanodes. Now, when I tried to create a directory, it said
> ‘org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create
> directory Name node is in safe mode’.
> Can someone tell me why the namenode is now in safe mode?
> With thanks and regards