This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
[NB 3.3.1 RC1, #200201160331] jdk1.4 b91, rh71 ============================= I have checked out sources of the netbeans.org project (NOT ALL!) and mounted them using the Generic CVS FS. On top of this FS I performed a search for files with [Local] status. I must say I hadn't updated those sources for more than 2 months, and then updated them a few days ago, so there were some files with [Local] status that I wanted to find. The search service found more than 600 of them and then the IDE was almost "dead". I received an OOME. See details in the attachment.
Created attachment 4202 [details] stacktrace
This is a design problem of the search functionality. The problem is that it holds all the DataObjects it inspects, which can cause a very large memory hog. The DataObjects hold references to FileObjects, FileObjects in VCS carry some additional caching data, and there is also a Java parser which tries to parse all the created JavaDataObjects. Isn't it ugly?
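The design flaw described above can be sketched in a few lines. This is a hypothetical illustration, not the actual NetBeans search or openide API; the class names and sizes are invented to show why accumulating strong references to every inspected object during a scan inflates the live heap for the whole duration of the search:

```java
import java.util.ArrayList;
import java.util.List;

public class SearchSketch {

    /** Stand-in for a heavyweight DataObject with cached parse data. */
    static class DataObject {
        final byte[] cachedData = new byte[1024]; // simulated VCS/parser caches
        final String name;
        DataObject(String name) { this.name = name; }
    }

    /** Leaky variant: keeps a strong reference to every inspected object. */
    static List<DataObject> scanAndHold(int count) {
        List<DataObject> inspected = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            inspected.add(new DataObject("file" + i)); // nothing is collectable
        }
        return inspected; // O(count) live memory until the search ends
    }

    /** Fixed variant: keeps only lightweight match results. */
    static List<String> scanAndRelease(int count, String needle) {
        List<String> matches = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            DataObject d = new DataObject("file" + i);
            if (d.name.contains(needle)) {
                matches.add(d.name); // store the name, not the DataObject
            }
            // d goes out of scope here and becomes collectable immediately
        }
        return matches;
    }
}
```

With the second variant the peak live set is proportional to the number of matches rather than to the number of files inspected.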
*** Issue 19187 has been marked as a duplicate of this issue. ***
It will need architecture changes -- not planned for future releases. Workaround: do not search on whole big filesystems. :-)
*** Issue 24907 has been marked as a duplicate of this issue. ***
Target milestone was changed from not determined to TBD
Currently the search engine listens for changes on all found files -- live search results. This problem could be solved by not listening on any FileObject.
I have run across this too. Or a persistent major slowdown of the IDE generally after running and cancelling a search, which I suspect is due to large memory usage. The full text search type should probably either not listen for changes to files, or make this an option which is off by default. It is nice in some cases but impractical for big searches.
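The "make live updates optional" idea above can be sketched as follows. The types here are stand-ins (not the real openide FileObject/listener API) and only illustrate why a listener per found file is costly: each registration pins the result window's refresher, and transitively the whole result model, in memory:

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;
import java.util.ArrayList;
import java.util.List;

public class LiveResultSketch {

    /** Stand-in for a FileObject that can report changes. */
    static class FileStub {
        final PropertyChangeSupport changes = new PropertyChangeSupport(this);
    }

    static List<FileStub> search(int count, boolean liveUpdates,
                                 PropertyChangeListener resultRefresher) {
        List<FileStub> results = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            FileStub f = new FileStub();
            if (liveUpdates) {
                // one listener per found file; for 600+ results this keeps
                // a large object graph reachable until listeners are removed
                f.changes.addPropertyChangeListener(resultRefresher);
            }
            results.add(f);
        }
        return results;
    }
}
```

With `liveUpdates` off by default, a big search attaches no listeners at all and the results are plain snapshots.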
I think this problem is related to the exceptions I've been seeing with NB 3.5 RC1 (and jdk 1.4.2 beta). During a search of a large filesystem, I see the following exceptions, the search seems to stop, and the whole IDE slows down.

java.io.IOException: Too many open files
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:106)
        at org.openide.filesystems.LocalFileSystem.inputStream(LocalFileSystem.java:346)
        at org.openide.filesystems.LocalFileSystem$Impl.inputStream(LocalFileSystem.java:543)
        at org.openide.filesystems.AbstractFileObject.getInputStream(AbstractFileObject.java:164)
[catch] at org.netbeans.modules.search.types.FullTextType.testDataObject(FullTextType.java:112)
        at org.netbeans.modules.search.types.DataObjectType.testObject(DataObjectType.java:92)
        at org.openidex.search.SearchGroup.processSearchObject(SearchGroup.java:254)
        at org.openidex.search.DataObjectSearchGroup.scanContainer(DataObjectSearchGroup.java:153)
        at org.openidex.search.DataObjectSearchGroup.scanContainer(DataObjectSearchGroup.java:149)
        at org.openidex.search.DataObjectSearchGroup.scanContainer(DataObjectSearchGroup.java:149)
        at org.openidex.search.DataObjectSearchGroup.scanContainer(DataObjectSearchGroup.java:149)
        at org.openidex.search.DataObjectSearchGroup.doSearch(DataObjectSearchGroup.java:74)
        at org.openidex.search.SearchGroup.search(SearchGroup.java:156)
        at org.netbeans.modules.search.SearchTask.run(SearchTask.java:79)
        at java.lang.Thread.run(Thread.java:534)
I'm somewhat surprised that the fix for issue 30613 didn't fix this.
I'd guess the exception is related rather to issue 29306, but that should be fixed...
The exception reported by John Richardson may be a separate issue. The method FullTextType.testDataObject(DataObject) seems suspicious: it opens InputStreams and Readers but does not close any of them.
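The unclosed-stream pattern suspected above leaks one file descriptor per tested file until "Too many open files" is hit. A minimal sketch of the fix, with an invented method (the real FullTextType code may differ), using try-with-resources to guarantee close() on every path:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

public class FullTextSketch {

    /** Returns true if the content read from the reader contains the needle. */
    static boolean contains(Reader source, String needle) throws IOException {
        // try-with-resources closes the reader even if read() throws,
        // so no descriptor can leak per tested file
        try (BufferedReader in = new BufferedReader(source)) {
            StringBuilder sb = new StringBuilder();
            int c;
            while ((c = in.read()) != -1) {
                sb.append((char) c);
            }
            return sb.indexOf(needle) >= 0;
        }
    }
}
```

In the JDK 1.4 era the same effect was achieved with an explicit close() in a finally block; try-with-resources is the modern equivalent.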
The exception reported by John Richardson was caused by bug #33856 ("Out of IO descriptors after multiple Find in Files"). Searching should be much more reliable now for large filesystems.
John, please do not paste whole stack-traces or thread-dumps to the Comments field. Paste just the most important/suspicious/topmost snippet and create an attachment for the whole.
Sorry... though in this case, I thought the whole stack trace could be of interest due to it not coming from the EDT.
John, have you encountered an OOME in NetBeans 3.5 or a development version of NetBeans 3.6?
I don't know about 3.5, but I haven't seen this in the 3.6 development versions. In fact, the memory meter indicates that the heap hovers around 35-40M before and after a search (a far cry from the 80 to 90M in some previous versions).
Although the memory consumption is better now, the problem with OOME is still there if more files are searched. I tried a current dev build and searched through all NB sources (which is a rather extreme set of files) for "assert" - after an hour, the IDE went dead slow and stopped on an OOME. Only about half of the sources had actually been processed (ca. 15,000). So the architectural problem is still there, although it probably won't show up in typical use cases.
Tomas, thank you for the verification. It is too late for changes of architecture in the NetBeans 3.6 timeframe, so this bug will not be completely fixed in NB 3.6.
FWIW, I just had a reason to search a large source tree w/ dev build 20040219. Indeed, memory jumped from 35M to 85M+ and eventually got OOME.
*** Issue 41128 has been marked as a duplicate of this issue. ***
As bug #40504 ("memory leak in search") was fixed since the last comment to this bug report had been added, this bug needs to be checked again. Maybe the memory leak was the only remaining cause of this bug.
*** Issue 42231 has been marked as a duplicate of this issue. ***
*** Issue 42486 has been marked as a duplicate of this issue. ***
Huh, still not fixed?
No, it is not fixed yet. The bug is caused by multiple factors - at least two of the factors (holding references to all searched objects and memory leak in the search results window) were eliminated but the problem persists.
I removed the keyword ARCH. This is not an architecture problem, at least not at the platform level. The feature is simply badly designed. See Petr Nejedly's comment from 2002-01-16. I don't agree that searching over a large directory is an uncommon scenario. Quite the opposite: it is exactly the situation in which the user wants to use this feature. If we can't fix the OOME, then we'd better remove the feature; it's unusable. Given how old this bug report is, and how easily users can run into it, I request that the bug be finally fixed in promotion D. This actually qualifies as P1 (data loss): once the OOME happens, the IDE cannot recover.
SearchGroup.java new revision: 1.4 eliminates huge (cumulative) leak.
I have kicked out the java icon badging (which spawned the parser) and removed children for result nodes. It's two times better now (both memory consumption and the time it takes). ResultModel.java new revision: 1.29
ResultModel.java new revision: 1.31 batches setKeys() calls. It doubles the speed and increases the searchable dataset by 5%.
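The batching change above can be sketched roughly as follows. The `KeysSink` interface and method names are hypothetical stand-ins for the Nodes/Children setKeys() machinery; the point is that matches are buffered and published in chunks, so the expensive key-update runs once per batch instead of once per match:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedKeysSketch {

    /** Hypothetical stand-in for Children.Keys.setKeys(). */
    interface KeysSink { void setKeys(List<String> keys); }

    /** Publishes matches in batches; returns the number of setKeys() calls. */
    static int publishBatched(List<String> matches, int batchSize, KeysSink sink) {
        List<String> published = new ArrayList<>();
        int flushes = 0;
        for (int i = 0; i < matches.size(); i++) {
            published.add(matches.get(i));
            boolean last = i == matches.size() - 1;
            // flush once per full batch, plus once at the end for the remainder
            if (published.size() % batchSize == 0 || last) {
                sink.setKeys(new ArrayList<>(published));
                flushes++;
            }
        }
        return flushes;
    }
}
```

For 3421 found files and a batch size of, say, 50, this cuts the UI updates from thousands to fewer than a hundred.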
I added explicit cleanup logic, which means the search module now blocks the leak from spreading. The leak is eliminated, except for a small empty shell referenced by VisualizerNode; to be puristic, that should be eliminated too. The scalability issue still remains (just for bigger datasets than before). There are several possibilities for addressing it:
- avoid the memory-expensive Nodes & Children layer: currently blocked by the requirement that results behave as their source DataObject (getNodeDelegate)
- store the results model to a file and visualize it in steps: requires a nontrivial UI change that moreover worsens usability (directly exposing implementation problems)
- detect a low memory condition and stop the search in advance: requires reliable low-memory-condition detection logic
At this late development stage only the last option is deliverable, I'm afraid. And it has two dependencies:
- issue #42786 - low mem API
- defining the subset of supported JDKs that return correct values from Runtime.freeMemory, maxMemory etc.
I really do not want to take the TODOs approach of stopping the search after reaching some limit (300), because here users cannot work incrementally as they can with TODOs in all use cases (e.g. committing all locally changed files in a batch).
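The low-memory detection option chosen above can be sketched like this. The class and threshold are illustrative, not the actual low-mem API from issue #42786, and as the comment notes the approach only works on JDKs whose Runtime heap figures are trustworthy:

```java
public class LowMemoryGuard {

    private final double threshold; // e.g. 0.9 = consider 90% heap usage "low"

    LowMemoryGuard(double threshold) { this.threshold = threshold; }

    /** True if used heap exceeds the threshold fraction of the max heap. */
    boolean isLow() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return (double) used / rt.maxMemory() > threshold;
    }

    /** Hypothetical hook: the search loop would check this between files. */
    boolean shouldStopSearch() {
        if (!isLow()) return false;
        System.gc();        // last attempt to reclaim memory
        return isLow();     // still low after GC -> abort the search early
    }
}
```

Checking between files lets the search stop gracefully and present the partial results instead of dying on an OOME mid-scan.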
PS: I already eliminated Children layer a little bit. Hence the new UI.
Linked to issue #32708 Lightweight Children.
I compared to 3.6. Searched trunk/nb_all (939MB):

for fulltext "void":
        init mem   files found before OOME   time    memory after closing results
3.6     10MB       1789                      2:57    91MB
NOW     15MB       3421                      2:40    15MB

for fulltext "woid":
3.6     10MB       1                         8:09    95MB
NOW     15MB       1 (no OOME)               10:01   71MB

The second case still leaks and I will address it now.
With modified code for MFS (issue #42992) no major leak can be observed.
*** Issue 42669 has been marked as a duplicate of this issue. ***
I added low-memory-condition detection logic and a handler for that event. Resolving as LATER because there are still unresolved issues that cause a memory leak when provoked by this user action. Once issue #32708 is solved, we can improve search scalability by another grade.
We don't use RESOLVED/LATER.
That's what I wanted to say too, and had a mid-air collision w/ Jesse :-) We should never mark an issue with the LATER resolution - that's the rule, even though IZ allows us to do it. So either mark this as FIXED, or if you think it's not really FIXED but much less of a problem now, then lower the priority.
*** Issue 44078 has been marked as a duplicate of this issue. ***
Trying to verify, but unfortunately there is issue #46727, which prevents me from assuring that this fix is fine :-(
I'm glad I can verify this issue as successfully fixed now :-) Despite the fact that the fix is based on constraining results to 500 matches :-o