I've tried pushing a large number of messages into Kafka on Windows, and got the following error:
Caused by: java.io.IOException: The requested operation cannot be performed on a file with a user-mapped section open
	at java.io.RandomAccessFile.setLength(Native Method)
	at kafka.log.OffsetIndex.liftedTree2$1(OffsetIndex.scala:263)
	at kafka.log.OffsetIndex.resize(OffsetIndex.scala:262)
	at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:247)
	at kafka.log.Log.rollToOffset(Log.scala:518)
	at kafka.log.Log.roll(Log.scala:502)
	at kafka.log.Log.maybeRoll(Log.scala:484)
	at kafka.log.Log.append(Log.scala:297)
	... 19 more
I suspect that Windows is not releasing memory mapped file references soon enough.
I wonder if there are any good workarounds or solutions for this?
We've been running into this issue when running perf.Performance as per http://blog.liveramp.com/2013/04/08/kafka-0-8-producer-performance-2/. When running it with 100K messages, it works fine on Windows at about 20-30K msg/s. But when running it with 1M messages, the broker fails as per the message below. It does not appear that modifying the JVM memory configuration or running on SSDs has any effect. As for JVMs - no plugins, and we've tried both 1.6 and OpenJDK 1.7.
This looks like a JVM memory-map issue on Windows - perhaps running a System.gc() could prevent the failure on roll?
On 7/9/13 7:55 AM, "Jun Rao" <[EMAIL PROTECTED]> wrote:
The problem appears to be that we are resizing a memory mapped file, which it looks like Windows does not allow (which is kind of sucky).
The offending method is OffsetIndex.resize().
The most obvious fix would be to first unmap the file, then resize, then remap it. We can't do this though because Java doesn't actually support unmapping files (it does this lazily with garbage collection, which really sucks). In fact, as far as I know there is NO way to guarantee that an unmap occurs at a particular time, so if this is correct and Windows doesn't allow resizing a mapped file, then this combination of suckiness means there is no way to resize a file that has ever been mapped, short of closing the process.
I actually don't have access to a Windows machine, so it is a little hard for me to test this. The question is whether there is any workaround. I am happy to change that method, but we do need to be able to resize memory mapped files.
On Tue, Jul 9, 2013 at 9:04 PM, Jun Rao <[EMAIL PROTECTED]> wrote:
Thanks very much for digging in! I was a tad concerned about that approach, but am in the process of testing that idea out along with some other, more dramatic ideas ;). Will keep you updated - thanks again!
On 7/10/13 7:55 AM, "Jun Rao" <[EMAIL PROTECTED]> wrote:
Does anyone understand the discussion on that ticket Sriram posted? It sounds like they have an unmap call, but they appear to be concerned about protecting threads from one another--i.e. if one thread unmapped the file and another thread mapped a different file, it would show up in the old memory mapping. This is true, but in what universe are you trying to protect one thread from another thread in the same process? If they are in different processes then file permissions and memory protection should handle it, no? This seems like it would only be a problem in a situation where you are running untrusted threads in your JVM, which is not a concern we would care to address.
Assuming we understand the issue correctly I think it would be fine to just use reflection and force the unmap provided that this is a no-op if it fails.
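For concreteness, a minimal sketch of what "use reflection and force the unmap, no-op if it fails" could look like. This assumes the Sun JDK's non-public cleaner() method on the direct-buffer implementation class; that is an internal detail, not a public API, and may be absent or inaccessible on other JVMs (in which case the method below just returns false and nothing happens):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.lang.reflect.Method;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class UnmapSketch {
    // Try to force-unmap via the Sun JDK's internal cleaner; return false
    // (i.e. a no-op) on any failure, per the discussion above. The buffer
    // must never be touched again after a successful unmap.
    static boolean forceUnmap(MappedByteBuffer buffer) {
        try {
            Method cleanerMethod = buffer.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);
            Object cleaner = cleanerMethod.invoke(buffer);
            Method cleanMethod = cleaner.getClass().getMethod("clean");
            cleanMethod.setAccessible(true);
            cleanMethod.invoke(cleaner);
            return true;
        } catch (Throwable t) {
            return false; // unsupported JVM: fall back to doing nothing
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("index", ".tmp");
        f.deleteOnExit();
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        raf.setLength(4096);
        MappedByteBuffer buf =
            raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);
        boolean unmapped = forceUnmap(buf);
        // On Windows the resize below is exactly the call that fails while
        // the mapping is live; after a successful unmap it should go through.
        raf.setLength(1024);
        System.out.println("unmapped=" + unmapped + " length=" + raf.length());
        raf.close();
    }
}
```

Whether forceUnmap succeeds depends on the JDK; the point is only that a failure degrades to the current behavior rather than throwing.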
Needless to say we have not done any testing on windows, so this is good.
Another concern I have is whether the index file preallocation is working properly. We are using RandomAccessFile.setLength(xxx) to preallocate a sparse file. I believe NTFS does support sparse files, but I'm not sure if this method will actually do sparse allocation on Windows. If not, there could be some latency as the file is physically created, which would be a concern.
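For reference, the preallocation in question boils down to something like this minimal sketch. The file name here is made up for illustration; whether the result is actually a sparse file depends on the filesystem and OS, which is exactly the open question on NTFS:

```java
import java.io.File;
import java.io.RandomAccessFile;

public class PreallocateSketch {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("offset-index", ".tmp");
        f.deleteOnExit();
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        // Logically extend the file to 10 MB without writing any data.
        // On most Unix filesystems this yields a sparse file (no blocks
        // actually allocated); on NTFS the extension may end up being
        // physically zero-filled, which is the latency concern above.
        raf.setLength(10 * 1024 * 1024);
        System.out.println("logical length=" + raf.length());
        raf.close();
    }
}
```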
-Jay On Wed, Jul 10, 2013 at 7:59 AM, Denny Lee <[EMAIL PROTECTED]> wrote: