This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
Build: NetBeans IDE 6.1 RC1 (Build 200804100130)
VM: Java HotSpot(TM) Client VM, 1.5.0_14-b03, Java(TM) 2 Runtime Environment, Standard Edition, 1.5.0_14-b03
OS: Linux, 2.6.11-1.1369_FC4smp, i386
User comments: running tests.

STACKTRACE: (first 10 lines)
java.lang.OutOfMemoryError
        at java.util.zip.ZipFile.open(ZipFile.java:0)
        at java.util.zip.ZipFile.<init>(ZipFile.java:203)
        at java.util.jar.JarFile.<init>(JarFile.java:132)
        at java.util.jar.JarFile.<init>(JarFile.java:112)
        at org.netbeans.JarClassLoader$JarSource.getJarFile(JarClassLoader.java:379)
        at org.netbeans.JarClassLoader$JarSource.resource(JarClassLoader.java:407)
        at org.netbeans.Archive.getData(Archive.java:193)
        at org.netbeans.JarClassLoader$JarSource.doGetResource(JarClassLoader.java:395)
        at org.netbeans.JarClassLoader$Source.getResource(JarClassLoader.java:303)
        at org.netbeans.JarClassLoader.findResource(JarClassLoader.java:251)
Reassigning to "java".
Why was this assigned to java/*? There is no data that would suggest that java (much less java/compiler) is using an unreasonable amount of memory, or leaking. There is no stack trace that would point to java (it wouldn't mean anything anyway; the OOME can occur anywhere, not necessarily in the piece of code that uses too much memory or leaks).
The only reason I blamed it on the background compile was that the IDE was idle at the time (I was using a different application) and there were no documents open. The only thing the IDE appeared to be doing was the background compile.
Could you please create a histogram (at least; a full heap dump would be ideal) if you see the OOME happen again? jmap -histo <pid> (where <pid> is the ID of the Java process running NetBeans)
Created attachment 68499 [details] screen shot when out of memory
Created attachment 68500 [details] messages.log
Created attachment 68501 [details] jmap -histo pid
Logs and screen shots attached as requested. The full heap dump will be huge; not sure if that's something you want me to upload. PS: Do I remove the "incomplete" keyword?
The full heap dump is far too big to upload over the web.
[nigel@localhost ~]$ ls -l heap.bin
-rw-rw-r-- 1 nigel nigel 839503704 Aug 28 13:03 heap.bin
I'm getting the OOME while running tests as well (so *NOT* during the background compile). This is really odd and is causing issues all over the place. The memory log shows only ~400 MB of the 1024 MB allocated to the IDE in use, so I'm not sure why we are getting this error.
Well, _reduce_ your -Xmx to some reasonable value and you'll be fine.

Explanation: there are several distinct memory pools in a Java process: code segments, native heap, Java heap, and mmapped areas. But there is only so much virtual address space available to a 32-bit process, 2-3 GB depending on the setup of your kernel. All the memory areas have to fit in that space, and if you cut out 2 GB just for the heap, other requests may come up short. Since you're running on JDK 1.5, the OOME from ZipFile.open means there wasn't enough address space for mmapping the given JAR. It has nothing to do with the heap or lack thereof.

Out of curiosity, what led you to setting -Xmx2048? Was anybody anywhere suggesting this? With CMS on, the IDE might even work OK (as long as it has enough address space), but it will hardly need that much memory.

Now, the heap histogram looks really weird, but this might be caused by the fact that there was no pressure on the GC, so it left all the debris where it was. Pressing the memory bar (i.e. forcing GC) before generating the histogram/dump would help answer this question, but the top 5 or so lines are suspicious anyway (600,000 RP tasks were scheduled over that time!?)
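The point that mmapped JARs count against process address space rather than the Java heap can be illustrated with a minimal sketch (my own, not NetBeans code; MmapDemo and mapWholeFile are illustrative names):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapDemo {
    // Maps the whole file READ_WRITE. Each mapping reserves virtual address
    // space *outside* the Java heap: on a 32-bit JVM these reservations
    // compete with the -Xmx heap for the same 2-3 GB of process address
    // space, so map() can throw OutOfMemoryError even when the heap itself
    // is nearly empty -- exactly the ZipFile.open situation above.
    static MappedByteBuffer mapWholeFile(File f) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            return raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, raf.length());
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("mmap", ".bin");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.setLength(16 * 1024 * 1024); // 16 MB backing file
        }
        MappedByteBuffer buf = mapWholeFile(f);
        // The mapping stays reserved until the buffer object is garbage
        // collected; the public API offers no explicit unmap.
        System.out.println("mapped " + buf.capacity() + " bytes outside the heap");
    }
}
```

Raising -Xmx therefore shrinks the room left for such mappings instead of helping.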
This machine has 4 GB of RAM; the other machine that I work on has 32 GB. The machines themselves aren't running out of RAM. If we use the *default* values, NetBeans runs out of memory fairly quickly, as we have a large source base. The reason we are adjusting the memory settings is that we are having large performance issues; see #144131 for example. I'll set -Xmx1024m and -Xms512m and see if we get the error again.
This is working, THANK YOU.
Sorry, I'm getting the out of memory error again even after setting -Xmx1024m and -Xms512m
Created attachment 68785 [details] message log
Sorry... I must be on drugs. It was using the old settings again.
Sorry, I'm still getting the Out Of Memory issue when we run test cases that produce a fair amount of output. The memory settings are -Xmx1024m -Xms512m.
Just looking at the associated exception, there are over 110 duplicates from a number of different people. This problem seems to be occurring on a number of different systems.
While running the same test cases under NetBeans 6.1 using the same memory parameters, I DO NOT get the out of memory error.
Do you need more information on this? I've downloaded and tried the latest build, 200809021401. Memory settings are: -Xss2m -Xmx1024m -Xms512m -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:PermSize=64m -XX:MaxPermSize=200m -Xverify:none -XX:+CMSPermGenPrecleaningEnabled
Created attachment 68934 [details] message log from Build 200809021401
There are a lot of "Cannot allocate memory" failures in the output view; reassigning for evaluation.
There could be a problem in the output window if it repeatedly allocates memory-mapped space: there is no way in Java to explicitly free previously mapped space. BTW, how big is the printed output? I still can't believe it could be so big that it eats this much memory, even if it was allocated several times. You could also try making -Xmx even smaller: more memory would be available for mmapping, and there would also be a greater chance that the objects holding the mmapped space are garbage collected and the space freed.
Created attachment 68990 [details] message log after lowering memory
Same issue after lowering to -Xmx512m -Xms256m. This issue doesn't occur in 6.1 with -Xmx1024m -Xms1024m. The output isn't that huge, just a number of Log4j outputs from 16 test cases.
Could you please attach the output of "pmap <pid>" (or "cat /proc/<pid>/maps") just after the output window related OOME?
Created attachment 69113 [details] pmap
This is what I was afraid of. The output window backing file is mapped many times and fills the whole remaining virtual address space. How large was the output file (/tmp/output1220570513950)? (Too late for such a question, isn't it? ;-))

There might be a bug in the output window implementation, but generally, the problem is that unless there is heap space pressure (that is, something generating enough Java heap garbage), the old and already abandoned memory-mapped regions (a completely unrelated scarce resource: virtual address space) are not freed by the JVM. It is also possible that the output window buffer implementation remaps too frequently, and it may in fact benefit from exponential preallocation, but I don't recall the details of the output window implementation.

Note: the output window used to behave very well even for ridiculously huge outputs, as it was specifically designed to. Maybe the latest few changes Tomas made broke it...
I've run the test again (see below); it's only 2.9 MB. This problem did not occur in NB 6.1 regardless of the memory settings I used.
[nigel@dev1 tmp]$ ls -lt |more
total 119568
-rw-r--r-- 1 nigel developer 2937204 Sep 5 19:48 output1220607867797
4a95b819da11
Will that be in tomorrow's build ? I'll download and test if you like.
It should propagate; a message will be added here automatically once the changeset is in a build. Verification would be appreciated.
Tomas, what if you overallocated the backing file (by means of RandomAccessFile.setLength) in a (limited) exponential manner? Then you could memory-map the overallocated portion as well and wouldn't need to change the mapping after every additional line-to-be-displayed. And if done well, you would never need more than twice the size of the output in virtual address space, not even if you really displayed every line of the output and no GC ever ran.
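The exponential-preallocation idea above could be sketched roughly like this. This is an illustration of my own, not the actual output window code; GrowableMappedBuffer and its methods are hypothetical names, and the sketch assumes the file stays below 2 GB:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Grow the backing file in doubling steps so the region is remapped
// O(log n) times for n appended bytes instead of once per appended line.
public class GrowableMappedBuffer {
    private final RandomAccessFile raf;
    private MappedByteBuffer map;
    private long used = 0;

    public GrowableMappedBuffer(File backing, long initialCapacity) throws Exception {
        raf = new RandomAccessFile(backing, "rw");
        raf.setLength(initialCapacity);
        map = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, initialCapacity);
    }

    public void append(byte[] data) throws Exception {
        long capacity = raf.length();
        if (used + data.length > capacity) {
            // Double the file until the data fits, then remap once. The old
            // mapping becomes garbage; its address space is reclaimed only
            // when the GC collects the abandoned buffer object.
            while (used + data.length > capacity) capacity *= 2;
            raf.setLength(capacity);
            map = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, capacity);
        }
        map.position((int) used);
        map.put(data);
        used += data.length;
    }

    public long used() { return used; }
    public long capacity() throws Exception { return raf.length(); }
}
```

With doubling, total mapped-then-abandoned space is bounded by roughly the final file size, which is where the "never more than twice the output size" bound comes from.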
The OW mmaps only part of the backing file, so it works quite well up to the 2 GB limit (Swing components). If the user scrolls in the OW you still need to remap. However, the reported problem was due to frequent remapping at the end of the file (when the OW auto-scrolled), so I agree it might be a good idea to map a preallocated portion of the file to minimize remapping at the end of the file.
Integrated into 'main-golden', will be available in build *200809111401* on http://bits.netbeans.org/dev/nightly/ (upload may still be in progress) Changeset: http://hg.netbeans.org/main/rev/4a95b819da11 User: Tomas Holy <t_h@netbeans.org> Log: #145696 and #129099: using single RandomAccessFile for both writing and memory mapping. #145255 firing changes less often to avoid abusing AWT thread and reduce chance to run out of address space by mmaping
Fixing this issue also fixed issue 146914, which was the result of a JDK bug. You can monitor that bug in the Java Bug Database at http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6748784.
Cool... just downloaded the latest daily build. This issue is fixed.
*** Issue 150032 has been marked as a duplicate of this issue. ***
*** Issue 160318 has been marked as a duplicate of this issue. ***