Re: HDFS system crashes when loading files bigger than local space left
On Jul 16, 2010, at 3:15 AM, Vitaliy Semochkin wrote:
> That is likely way too small.
> Would setting it to 512MB be better, given that the whole volume is only 190GB?
I'd recommend at least 5GB. I'm also assuming this same disk space isn't being used for MapReduce.
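For reference, the reserved-space setting under discussion is `dfs.datanode.du.reserved` in `hdfs-site.xml`, which takes a value in bytes per volume. A minimal sketch reserving 5GB (the exact figure is my example, not from the thread):

```xml
<!-- hdfs-site.xml: reserve 5 GB per volume for non-HDFS use,
     so the datanode stops writing blocks before the disk fills -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>5368709120</value>
</property>
```

The datanode needs a restart for the change to take effect.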
> Does Hadoop distinguish between a client uploading data from a datanode and one that is not a datanode?
> Let's say I execute
> hadoop fs -put someFile hdfs://namenode.mycompany.com/
> from namenode.mycompany.com and from some other PC. Will it make any difference to Hadoop, and will Hadoop organize the data in a more balanced way in the latter case?
Again, the namenode is irrelevant here. Do not do puts from a datanode if you want the data to be reasonably balanced: when the writing client is itself a datanode, HDFS places the first replica of every block on that local node, so the uploading machine fills up disproportionately.