I have opened a few projects (~10, openide and core among them). I had just started the IDE (I had to close it before because of the same problem) with ~3 simple sources opened. I opened (AS-O) TreeView and followed the class hierarchy of ExplorerTree using A-G on some super. calls through JComponent and Component. Memory usage went up to over 100MB, but went down on gc(). I then also followed to the JTree class; the IDE froze for a while and then an OOME appeared in the log. After that, memory usage fell to ~40MB. I believe that exhausting the whole heap just because of opening a source file, while so much memory is not needed for subsequent reading of the file (note: it was r/o), is at least a P2 problem.
Petr, could you please investigate (using INSANE) what takes so much memory?
Did you open those files quickly, one after another? I tried to reproduce the OOME with netbeans/misc open, but I could not. Still, the memory does go above 100MB. I suspect this is due to the error checking.
The first two quite quickly, the last one after a short idle period. It was probably a combination of error checking and editor folds (why the hell does the editor trigger a deep parse/resolution of the source when it is only asking which parts of the source file are imports?). I tried running Insane on it but failed to catch the important time window. The memory peak is temporary but strongly held during that time. After the OOME, the memory is freed. I could try again (so far I've reproduced it twice) and start Insane shortly before the OOME.
I suspect the overrides/implements annotations too. These may be even more eager than the folds.
What do you mean by "deep parse/resolution"? Code folding should trigger normal parsing with attribution of feature headers (that's the lightest parse we have) - it should not be an issue, since the same parse is immediately required by the navigator, java nodes, synchronization support, override method annotations, and maybe some other clients.
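For illustration, here is a minimal Java sketch, using hypothetical names (HeaderParse, SharedParseCache - not the real NetBeans editor API), of how one lightweight header parse can be computed once and shared by all of these clients:

import java.util.List;
import java.util.function.Supplier;

class HeaderParse {
    final List<String> importLines;
    HeaderParse(List<String> importLines) { this.importLines = importLines; }
}

class SharedParseCache {
    private HeaderParse cached;
    private final Supplier<HeaderParse> parser;

    SharedParseCache(Supplier<HeaderParse> parser) { this.parser = parser; }

    // All clients (folding, navigator, annotations, ...) call this;
    // the lightweight parse itself runs at most once.
    synchronized HeaderParse get() {
        if (cached == null) {
            cached = parser.get();
        }
        return cached;
    }
}

public class FoldingDemo {
    public static void main(String[] args) {
        SharedParseCache cache = new SharedParseCache(
            () -> new HeaderParse(List.of("import javax.swing.*;")));
        System.out.println(cache.get().importLines);  // e.g. the folding client
        System.out.println(cache.get().importLines);  // navigator reuses the same parse
    }
}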
Tom, could you please try to help us with this? Tomas mentioned some time ago that he noticed the error checker seems to do a deep attribution of all classes a file depends on transitively, as a consequence of fixing issue 55769. If that's really the case, could it be fixed?
Sorry, I can't tell offhand whether it was a real deep parse; I just saw a very deep stack trace. I'll post a few stack traces I managed to capture before the OOME. These are mixed from both runs; folding is there, and error annotations as well.
Created attachment 20955: Several thread dumps and also one of the OOMEs
*** Issue 56645 has been marked as a duplicate of this issue. ***
My fix was to add javac's flow analysis pass (Flow) to the error checker, to catch any other errors javac might find in source code. Tomas's original version of the error checker already did attribution, since otherwise it could only find lexical and syntactic errors. Flow doesn't do any further attribution, but some of its checks may force more secondary class reading, since that is purposely done lazily to conserve memory.

Can this be fixed by not gathering attribution information? No, since that information is necessary to know whether a given source construct is correct. The only way to reduce attribution requirements is to reduce the set of errors you want found. Each pass we remove will reduce the number of error types that get checked. We have a saying that is appropriate here: "you get what you pay for."

That said, one area of memory consumption that has not been given any focus is the new ECRequestDescImpl.getReader(filename). This method returns the source code of a secondary file needed for attribution, from either an editor buffer or the filesystem. The current implementation creates a new ASTProvider and calls its getFileReader() method; this code looks like it shouldn't impact memory substantially or have leaks, but it's worth a profiler check to make sure.
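As an illustration of the point about secondary class reading, the following is a minimal Java sketch with hypothetical names (LazyClassReader, FlowPassDemo - not the real javac or NetBeans API) of how lazy class reading means a flow-style check can pull additional classes into memory even though it performs no further attribution:

import java.util.HashMap;
import java.util.Map;

class LazyClassReader {
    private final Map<String, String> loaded = new HashMap<>();
    int reads = 0;

    // Reads (and caches) a class description on first use only.
    String resolve(String className) {
        return loaded.computeIfAbsent(className, name -> {
            reads++;                       // this is the "secondary class reading"
            return "classfile:" + name;    // stand-in for a parsed class file
        });
    }
}

public class FlowPassDemo {
    public static void main(String[] args) {
        LazyClassReader reader = new LazyClassReader();

        // Attribution already resolved the directly referenced class:
        reader.resolve("javax.swing.JComponent");

        // A flow-style check may need the supertypes too,
        // forcing further lazy reads:
        reader.resolve("java.awt.Container");
        reader.resolve("java.awt.Component");

        System.out.println("classes read lazily: " + reader.reads);
    }
}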
Tom, the issue I was referring to had nothing to do with the flow analysis. It was about missing error annotations for semantic errors in "secondary" top-level classes in the same file - i.e., semantic errors in the second, third, etc. top-level class of the file we are asking for error annotations for. I remember Tomas warned me after you fixed that issue that you seemed to have turned on attribution not only for the secondary top-level classes in the same file, but also for all other classes visited during the attribution of those classes - so attribution is done transitively on everything touched, as in compilation. Until then, attribution was done only for a single class in a single file. That was a bug. The ideal state would be to do attribution just for the classes in a single file - all of them, but not classes from other files. I don't know if that's possible; maybe it is already the case and Tomas was wrong. Could you please check it? Thanks in advance.
I modified the error checker to only attribute classes which are defined in the source file being checked. This should reduce memory consumption during checking, while still preserving the fix for issue 55749.
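A minimal Java sketch of the shape of such a fix, assuming hypothetical names (ClassInfo, attribute - not the actual error-checker code): recurse into referenced classes only when they are defined in the file being checked, leaving everything else to lazy class reading:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class ClassInfo {
    final String name;
    final String sourceFile;                       // file that defines this class
    final List<ClassInfo> referenced = new ArrayList<>();

    ClassInfo(String name, String sourceFile) {
        this.name = name;
        this.sourceFile = sourceFile;
    }
}

public class BoundedAttribution {
    // Attribute a class and recurse into its references, but only descend
    // into classes defined in the file being checked.
    static void attribute(ClassInfo c, String checkedFile,
                          Set<ClassInfo> visited, List<String> log) {
        if (!visited.add(c)) {
            return;                                // already handled (guards cycles)
        }
        if (!c.sourceFile.equals(checkedFile)) {
            log.add("skip " + c.name + " (defined in " + c.sourceFile + ")");
            return;
        }
        log.add("attribute " + c.name);
        for (ClassInfo ref : c.referenced) {
            attribute(ref, checkedFile, visited, log);
        }
    }

    public static void main(String[] args) {
        ClassInfo a = new ClassInfo("A", "A.java");
        ClassInfo b = new ClassInfo("B", "A.java");            // secondary top-level class
        ClassInfo jc = new ClassInfo("JComponent", "JComponent.java");
        a.referenced.add(b);
        b.referenced.add(jc);   // previously this would be attributed transitively

        List<String> log = new ArrayList<>();
        attribute(a, "A.java", new HashSet<>(), log);
        log.forEach(System.out::println);
    }
}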
Petre, can you verify this issue, please? Thanks.
OK
Reorganization of java component