This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.

Bug 69544 - Memory leak
Summary: Memory leak
Status: RESOLVED WONTFIX
Alias: None
Product: java
Classification: Unclassified
Component: Unsupported
Version: 5.x
Hardware: All
OS: All
Importance: P2 blocker
Assignee: _ rkubacki
URL:
Keywords: PERFORMANCE
Depends on:
Blocks:
 
Reported: 2005-11-29 16:49 UTC by _ gtzabari
Modified: 2007-09-26 09:14 UTC
CC List: 1 user

See Also:
Issue Type: DEFECT
Exception Reporter:


Attachments
Heap histogram (162.00 KB, application/x-compressed)
2005-11-29 22:51 UTC, _ gtzabari

Description _ gtzabari 2005-11-29 16:49:17 UTC
dev build 200511271900
Java 1.6.0-rc-b61

I've recently reported issue #68903 regarding a thread leak. Today I was able
to reproduce the original memory leak issue I was trying to report. I used jmap
to dump the heap (a 500 MB file!) but I don't know what to do next. I can't even
invoke jhat on the file because it runs out of memory even if I allocate it a
maximum heap of 1 GB. I don't think it is feasible for me to upload a 500 MB file
to you either :)
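
For the record, the commands involved look roughly like this (a sketch - <pid> and the
file name are placeholders, and this assumes the Mustang versions of the tools):

  jmap -dump:format=b,file=heap.bin <pid>
  jhat -J-Xmx1024m heap.bin

The -J option passes the maximum heap size down to jhat's own JVM, which is where it
runs out of memory.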

Any ideas?
Comment 1 _ gtzabari 2005-11-29 16:58:38 UTC
Seems we might have a bigger problem. The maximum heap Mustang seems to accept
is 1.4 GB. If you try passing a larger value using -Xmx, it gives:

Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.

It is unlikely that 1.4 GB is enough memory for jhat to process this file. So now I
have two questions:

1) Any idea why Mustang is limited to this amount of memory? I remember reading
a while back that it should be able to access *much* more. Is this a bug?

2) What do we do about this dump file? Is there a way for us to strip
information out of it so that maybe jhat can process the slimmed-down file?
Comment 2 _ gtzabari 2005-11-29 17:12:22 UTC
I've compressed the dump file to 54MB. Now that the size is more reasonable I
can try uploading it to you, or is there no point?
Comment 3 _ gtzabari 2005-11-29 22:47:56 UTC
Ok, I've now got a new heap dump which indicates the leak is coming from the
Java parser module. See the attached "heap histogram" HTML file.

The new dump file is 20MB compressed but IssueZilla still limits me to 1MB per
attachment. I could put the JHAT server online temporarily for one of your
engineers to access remotely if you want, please email me privately to arrange
this. If you have any other ideas or require any more information, please let me
know.
Comment 4 _ gtzabari 2005-11-29 22:51:28 UTC
Created attachment 27405 [details]
Heap histogram
Comment 5 _ gtzabari 2005-11-30 20:57:16 UTC
I suspect this issue is Mustang-specific. Under Mustang I cannot run NetBeans
for more than 10 minutes without it hanging (100% CPU usage, sometimes an OOME).

Upon switching back to Tiger I haven't run into this problem (yet). Is anyone
else on the NetBeans team using Mustang on a regular basis?
Comment 6 _ gtzabari 2005-11-30 21:06:07 UTC
No, it seems I was wrong. I can still reproduce an OOME under dev build 200511271900
and JDK 1.5.0_05. It seems to be somewhat worse under Mustang in that the entire
IDE hangs and I don't seem to get an OOME exception panel (I am forced to kill
the process after a few minutes), but Tiger still exhibits the problem.
Comment 7 _ rkubacki 2005-12-01 12:16:02 UTC
Looks like a problem with leaking Ant class loaders - is it OK that we have 7 instances of
org.apache.tools.ant.module.bridge.AuxClassLoader (and 2 of
org.apache.tools.ant.module.bridge.AntBridge$MaskedClassLoader, 1 of
org.apache.tools.ant.loader.AntClassLoader2)? These can hold javac data.
Comment 8 Jesse Glick 2005-12-01 23:06:26 UTC
Don't know what could cause that.
Comment 9 _ gtzabari 2005-12-02 21:12:23 UTC
Can you guys run a profiler on your end and find out what objects reference
org.netbeans.lib.java.parser.ScannerToken? I tried using jhat on the heap dump
to find this out (i.e. "list all references to this object excluding weak
references") but jhat runs out of memory (I gave it 1.4 GB, which is the limit of
the JVM), so I'm out of ideas.

With such a huge memory leak I suspect you will be able to find a problem very
quickly. As for repro steps, I've done two things very frequently recently:

1) Rename classes/packages using the refactor menu
2) CTRL-left-click on a class in the editor to jump to that class

Both of these things manipulate the Java model so maybe there is a bug in one of
them. Also, with your permission I'd like to increase the priority of this bug
to P2 because it is a serious memory leak.
Comment 10 _ rkubacki 2005-12-05 14:07:15 UTC
I am not able to open the dump. What version of JDK did you use? I am trying to
open it with JDK 1.6.0_b61 on my Linux machine. 
Comment 11 _ gtzabari 2005-12-05 16:16:06 UTC
I believe I used the same version as you, though under Windows, to produce the
dump. Some of the more recent dumps were probably taken with b62 which has
recently been released. What kind of errors do you get opening the dump?
Comment 12 _ rkubacki 2005-12-12 12:19:44 UTC
I will try to open them again. So far I have done a few tests on my machine and
filed several bugs that may be related - 70150, 70157, 70161, 70165, 70172 - and
also reopened 70052.
Comment 13 _ rkubacki 2005-12-13 17:51:21 UTC
At least the first one can be opened with Mustang b63.
Comment 14 _ gtzabari 2005-12-13 18:55:46 UTC
Are you saying that a dump file which refused to open prior to b63 now opens
properly? On the topic of the memory leaks, I am glad to see you filed many
related issues, and some of them have gotten fixed over the past couple of days.
I think that if we close issue #69576, the original problem I ran into (the major
memory leak) will go away. Ever since I fixed my namespace collisions (as discussed
in #69576) I haven't experienced any more OutOfMemoryErrors. I also downgraded
myself from Mustang to Tiger, but I don't think that was responsible (I remember
getting OOMEs in Tiger too).

If we ensure that MDR does not end up with cyclical dependencies or namespace
collisions then the remaining memory leaks should be small.
Comment 15 _ rkubacki 2006-01-03 13:21:28 UTC
I am looking at the second dump now. Most of the memory is occupied by the following
(class / instance count / total size in bytes):

class org.netbeans.lib.java.parser.ScannerToken 	1386166 	51288142
class [Lorg.netbeans.lib.java.parser.Token; 	1385342 	11098752
class [C 	116377 	10469368
class [Ljava.lang.Object; 	20223 	7181940
class [B 	12216 	5261669
class [I 	37006 	4953552
class java.lang.String 	120073 	2881752
class java.util.HashMap$Entry 	100130 	2403120
class [Lcom.sun.tools.javac.parser.Tokens; 	2 	2219488
class [Ljava.util.HashMap$Entry; 	15461 	1643784
class java.util.TreeMap$Entry 	49914 	1447506
class [S 	17837 	1022988
class java.lang.Class 	11354 	862904
class [J 	49704 	820184
class java.util.HashMap 	15032 	601280
class com.sun.tools.javac.util.List 	26999 	539980
class java.lang.Long 	20435 	326960
Comment 16 _ rkubacki 2006-01-03 15:31:11 UTC
According to the dump there is an instance of org.netbeans.lib.gjast.ASScanner that
holds an extremely large ArrayList of org.netbeans.lib.java.parser.ScannerToken
(almost all of them). This is held only as a Java local reference in a task
executed in a RequestProcessor (is the task running, or referenced to get its result
status?). I am not sure what caused this.

If I understand it correctly, the Java source being parsed is only ~50-60 kB, so the
size of the parsed data is disproportionate. Perhaps a few thread dumps (to confirm
that parsing is involved) and more information about the parsing activity could give
us more details. The Java module developers know some debugging flags that can reveal
what is parsed.

Re the 1.4 GB limit - this is related to the amount of memory that your system can
provide to the JVM process. I was able to open it on a server-class machine now.
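
For the thread dumps, something along these lines should do (a sketch - <pid> stands for
the IDE's process id and this assumes the Mustang jstack tool; jps can list the process ids):

  jstack <pid> > threads-1.txt
  jstack <pid> > threads-2.txt

Taking two or three of these a few seconds apart while the IDE is busy should show whether
the parser is involved.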
Comment 17 Jan Becicka 2006-01-03 15:41:32 UTC
Gtzabari, I'd like to ask you for another dump. Please run with
-J-Dperf.refactoring=true so we can see what files are being parsed.
Comment 18 _ rkubacki 2006-01-04 16:51:43 UTC
This problem can be fixed with the latest update of gjast.jar, done because of
#63594. This was integrated on Dec 8 (before the 5.0 branching). I will ask for a
waiver because so far this is the only report of this type.
Comment 19 _ rkubacki 2006-01-04 17:04:23 UTC
To conclude, here is what can help us move further:
- any reproducible test case would be best
- preferably run the IDE with Mustang using -J-XX:+HeapDumpOnOutOfMemoryError
-J-XX:HeapDumpPath=<path for storing dumps>, which produces a dump on the first OOME
(an example launcher invocation is at the end of this comment); an alternative way is jmap
- use the -J-Dperf.refactoring=true flag to get more details about the parsing activity
- if we could get some thread dumps before the OOME is thrown, that would be nice too

We still do not know why ASScanner holds such a big list of ScannerTokens.
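
Putting the flags above together, a launcher invocation would look roughly like this (a
sketch - the dump path is only a placeholder, and the launcher name depends on the
platform, e.g. netbeans.exe on Windows):

  netbeans -J-XX:+HeapDumpOnOutOfMemoryError -J-XX:HeapDumpPath=C:\dumps -J-Dperf.refactoring=true

The same -J options should also work if added to netbeans_default_options in
etc/netbeans.conf.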
Comment 20 _ rkubacki 2006-04-24 11:03:37 UTC
Does this problem persist? I've never seen anything similar from other users.
Comment 21 _ gtzabari 2006-04-24 15:05:42 UTC
I haven't seen it for a while. Then again, it might just be that I am no longer
doing the same things as a few months ago. Anyway, I'm fine with closing this
issue if you want.
Comment 22 _ rkubacki 2006-04-24 15:15:57 UTC
OK, it seems that we do not know how to reproduce this, so I am closing as WONTFIX.
Let us know if it reappears.