The child tasks of our jobs are launched as user hdfs.
We got OOMs in some jobs when they tried to mmap() a big file.
I manually checked `ulimit -v` for hdfs on the slave nodes, and it is unlimited,
but when I launched a fake streaming job whose mapper is simply
`bash -c 'ulimit -a'`,
I found that it has a 1 GB virtual memory limit.
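For reference, the interactive-vs-child discrepancy can be demonstrated locally without a cluster. This is only a sketch of the symptom: the 1048576 KB cap below is an illustrative 1 GB limit set by hand, standing in for whatever the TaskTracker imposes on its children; it is not taken from our config.

```shell
#!/bin/sh
# An interactive login shell may report "unlimited" for ulimit -v,
# while a parent process (such as the TaskTracker launching a task)
# can impose a lower cap on its children before exec'ing them.
# ulimit -v takes kilobytes, so 1048576 KB = 1 GB.
echo "interactive: $(ulimit -v)"
echo "child:       $(bash -c 'ulimit -v 1048576; ulimit -v')"
```

Running the mapper `bash -c 'ulimit -a'` inside the streaming job is the same idea: it reports the limits of the actual task environment rather than those of a login shell.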
Why is this? It looks like a Cloudera bug.
We are using CDH3U3.