Right now HDFS has a trash feature that moves files removed with 'hadoop fs -rm' to an intermediate per-user directory (/user/<username>/.Trash). You can configure how much time a file spends in that directory before it is actually removed from the filesystem: look for 'fs.trash.interval' in your core-site.xml or in the configuration guide. You can force a trash cleanup with 'hdfs dfs -expunge'.
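For reference, enabling trash is a single property in core-site.xml; the value is the retention interval in minutes (the 1440-minute value below is just an illustrative choice, and 0 disables trash):

```xml
<!-- core-site.xml: keep deleted files in the per-user .Trash
     directory for 24 hours (1440 minutes) before permanent removal.
     A value of 0 disables the trash feature entirely. -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>
```

Note that 'hdfs dfs -rm -skipTrash <path>' bypasses the trash and deletes immediately, so users who want this safety net should avoid that flag.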
Regards, Ramón Pin
-----Original Message----- From: Artem Ervits [mailto:[EMAIL PROTECTED]] Sent: Monday, April 1, 2013 23:04 To: [EMAIL PROTECTED] Subject: Protect from accidental deletes
I'd like to know what users are doing to protect themselves from accidental deletes of files and directories in HDFS. Any suggestions are appreciated.