
Bug 19484 - OOME when performing search on larger FS
Summary: OOME when performing search on larger FS
Status: VERIFIED FIXED
Alias: None
Product: utilities
Classification: Unclassified
Component: Search
Version: 3.x
Hardware: All
OS: All
Importance: P2 blocker
Assignee: _ pkuzel
URL:
Keywords: PERFORMANCE
Duplicates: 19187 24907 41128 42231 42486 42669 44078
Depends on: 32708 42786 42992 43012
Blocks: 24907 41448
Reported: 2002-01-16 14:51 UTC by dmladek
Modified: 2004-08-13 12:12 UTC
CC List: 8 users

See Also:
Issue Type: DEFECT
Exception Reporter:


Attachments
stacktrace (1.40 KB, text/plain)
2002-01-16 15:28 UTC, dmladek

Description dmladek 2002-01-16 14:51:18 UTC
[NB 3.3.1 RC1, #200201160331]
jdk1.4 b91, rh71
=============================

I checked out the sources of the netbeans.org project (NOT ALL!)
and mounted them using the Generic CVS FS.
On top of this FS I performed a search action for files with [Local] status.
I must say I hadn't updated those sources for more than 2 months, and then updated them
a few days ago, so there were some files with [Local] status.
I wanted to find them... The search service found more than 600 of them and then the IDE
was almost "dead". I received an OOME.

see details in attachment
Comment 1 dmladek 2002-01-16 15:28:44 UTC
Created attachment 4202 [details]
stacktrace
Comment 2 Petr Nejedly 2002-01-16 17:44:38 UTC
This is a design problem of the search functionality.
The problem is that it holds all the DataObjects it
inspects, which can cause a very large memory hog.
The DataObjects hold references to FileObjects, FileObjects
in VCS have some additional caching data, and there is also
a Java parser which tries to parse all the created JavaDataObjects.
Isn't it ugly?
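
For illustration only, a minimal sketch (not the actual search code; the class and method
names here are hypothetical) of why keeping strong references to every inspected DataObject
pins its whole object graph, and how holding results weakly would let the collector reclaim
them once nothing else uses them:

    import java.lang.ref.WeakReference;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    import org.openide.loaders.DataObject;

    // Hypothetical result collector. A matched DataObject (and the FileObject,
    // VCS caching data and parser structures hanging off it) stays collectible
    // because only a WeakReference to it is stored.
    class WeakResultCollector {
        private final List results = new ArrayList();   // list of WeakReference

        void add(DataObject match) {
            // results.add(match) would pin every matched DataObject in memory
            // for the lifetime of the search.
            results.add(new WeakReference(match));
        }

        /** Returns the matches that are still alive, dropping cleared entries. */
        List liveMatches() {
            List alive = new ArrayList();
            for (Iterator it = results.iterator(); it.hasNext();) {
                DataObject d = (DataObject) ((WeakReference) it.next()).get();
                if (d != null) {
                    alive.add(d);
                } else {
                    it.remove();
                }
            }
            return alive;
        }
    }
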
Comment 3 Petr Nejedly 2002-02-04 08:57:53 UTC
*** Issue 19187 has been marked as a duplicate of this issue. ***
Comment 4 _ lkramolis 2002-06-13 11:44:32 UTC
It will need architecture changes -- not planned for future releases.

Workaround: do not search on whole big filesystems. :-)
Comment 5 _ lkramolis 2002-06-18 13:39:29 UTC
*** Issue 24907 has been marked as a duplicate of this issue. ***
Comment 6 Marek Grummich 2002-07-19 17:25:29 UTC
Target milestone was changed from not determined to TBD
Comment 7 _ lkramolis 2002-11-05 14:49:04 UTC
Currently the search engine listens on all files for changes -- live
search results.

This problem could be solved by not listening on any FileObject.
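
If the results must listen for file changes at all, one possible mitigation (not
necessarily what the module ended up doing) is to register weak listeners, assuming
org.openide.util.WeakListeners is available in the openide version in use. A sketch,
with ResultEntry and refresh() as hypothetical names:

    import org.openide.filesystems.FileChangeAdapter;
    import org.openide.filesystems.FileChangeListener;
    import org.openide.filesystems.FileEvent;
    import org.openide.filesystems.FileObject;
    import org.openide.util.WeakListeners;

    // A weak listener keeps the FileObject's listener list from holding the
    // result entry (and the whole result model) alive after the results
    // window is closed.
    class ResultEntry {
        private final FileChangeListener changes = new FileChangeAdapter() {
            public void fileChanged(FileEvent fe) {
                refresh();   // update the displayed match (hypothetical)
            }
        };

        ResultEntry(FileObject fo) {
            fo.addFileChangeListener((FileChangeListener) WeakListeners.create(
                    FileChangeListener.class, changes, fo));
        }

        void refresh() { /* update UI, omitted */ }
    }
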
Comment 8 Jesse Glick 2003-03-18 09:50:17 UTC
I have run across this too. Or a persistent major slowdown of the IDE
generally after running and cancelling a search, which I suspect is
due to large memory usage.

Probably the full-text search type should either not listen to changes
in files, or make this an option which is off by default. It is nice in
some cases but impractical for big searches.
Comment 9 _ jrichard 2003-05-03 02:18:34 UTC
I think this problem is related to the exceptions I've been seeing
with NB 3.5 RC1 (and jdk 1.4.2 beta).  During a search of a large
filesystem, I see the following exceptions, the search seems to stop,
and the whole IDE slows down.

java.io.IOException: Too many open files
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:106)
        at org.openide.filesystems.LocalFileSystem.inputStream(LocalFileSystem.java:346)
        at org.openide.filesystems.LocalFileSystem$Impl.inputStream(LocalFileSystem.java:543)
        at org.openide.filesystems.AbstractFileObject.getInputStream(AbstractFileObject.java:164)
[catch] at org.netbeans.modules.search.types.FullTextType.testDataObject(FullTextType.java:112)
        at org.netbeans.modules.search.types.DataObjectType.testObject(DataObjectType.java:92)
        at org.openidex.search.SearchGroup.processSearchObject(SearchGroup.java:254)
        at org.openidex.search.DataObjectSearchGroup.scanContainer(DataObjectSearchGroup.java:153)
        at org.openidex.search.DataObjectSearchGroup.scanContainer(DataObjectSearchGroup.java:149)
        at org.openidex.search.DataObjectSearchGroup.scanContainer(DataObjectSearchGroup.java:149)
        at org.openidex.search.DataObjectSearchGroup.scanContainer(DataObjectSearchGroup.java:149)
        at org.openidex.search.DataObjectSearchGroup.doSearch(DataObjectSearchGroup.java:74)
        at org.openidex.search.SearchGroup.search(SearchGroup.java:156)
        at org.netbeans.modules.search.SearchTask.run(SearchTask.java:79)
        at java.lang.Thread.run(Thread.java:534)


Comment 10 _ jrichard 2003-05-03 02:33:13 UTC
I'm somewhat surprised that the fix for issue 30613 didn't fix this.
Comment 11 Tomas Pavek 2003-05-05 09:51:30 UTC
I'd guess the exception is related rather to issue 29306, but that
should be fixed...
Comment 12 Marian Petras 2003-05-05 14:16:31 UTC
The exception reported by John Richardson may be a separate issue.
The method FullTextType.testDataObject(DataObject) seems
suspicious: it uses InputStreams and Readers but does not close any of
them.
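
A minimal Java 1.4-style sketch of the "always close the stream" pattern being suggested
here. Only the java.io and FileObject calls are real API; FullTextCheck, containsText and
searchFor are hypothetical names, and the matcher reads the whole file for simplicity:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.Reader;

    import org.openide.filesystems.FileObject;

    class FullTextCheck {
        boolean containsText(FileObject fo, String needle) throws IOException {
            Reader r = new BufferedReader(new InputStreamReader(fo.getInputStream()));
            try {
                return searchFor(r, needle);
            } finally {
                r.close();   // closing the Reader also closes the underlying stream,
                             // so the file descriptor is released on every code path
            }
        }

        private boolean searchFor(Reader r, String needle) throws IOException {
            // For illustration only: slurp the content and use indexOf. The real
            // search would stream the file instead of buffering it all.
            StringBuffer sb = new StringBuffer();
            char[] buf = new char[4096];
            for (int n; (n = r.read(buf)) != -1;) {
                sb.append(buf, 0, n);
            }
            return sb.toString().indexOf(needle) != -1;
        }
    }
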
Comment 13 Marian Petras 2003-06-13 14:01:29 UTC
The exception reported by John Richardson was caused by bug #33856
("Out of IO descriptors after multiple Find in Files").

Searching should be much more reliable now for large filesystems.
Comment 14 Marian Petras 2003-06-13 14:05:51 UTC
John, please do not paste whole stack-traces or thread-dumps to the
Comments field. Paste just the most important/suspicious/topmost
snippet and create an attachment for the whole.
Comment 15 _ jrichard 2003-06-14 03:22:36 UTC
Sorry... though in this case, I thought the whole stack trace could be
of interest due to it not coming from the EDT.
Comment 16 Marian Petras 2004-01-27 11:08:00 UTC
John, have you encountered an OOME in NetBeans 3.5 or a development
version of NetBeans 3.6?
Comment 17 _ jrichard 2004-01-28 19:47:08 UTC
I dunno about 3.5, but I haven't seen this in the 3.6 development
versions. In fact, the memory meter indicates that the heap hovers
around 35-40 MB before and after a search (a far cry from the 80 to
90 MB in some previous versions).

Comment 18 Tomas Pavek 2004-01-29 09:21:34 UTC
Although memory consumption is better now, the problem with the OOME is
still there if more files are searched. I tried a current dev
build and searched through all NB sources (which is a rather extreme set
of files) for "assert" - after an hour, the IDE went deadly slow and
stopped with an OOME. About half of the sources were actually gone through
(ca. 15,000). So the architecture problem is still there, though it
probably won't show up in typical use cases.
Comment 19 Marian Petras 2004-01-29 09:38:23 UTC
Tomas, thank you for the verification.

It is too late for architecture changes in the NetBeans 3.6
timeframe, so this bug will not be completely fixed in NB 3.6.
Comment 20 _ jrichard 2004-02-21 01:20:29 UTC
FWIW, I just had a reason to search a large source tree w/ dev build
20040219.  Indeed, memory jumped from 35 MB to 85+ MB and I eventually got an
OOME.
Comment 21 Marian Petras 2004-03-30 14:10:45 UTC
*** Issue 41128 has been marked as a duplicate of this issue. ***
Comment 22 Marian Petras 2004-03-30 14:59:32 UTC
*** Issue 41128 has been marked as a duplicate of this issue. ***
Comment 23 Marian Petras 2004-03-30 15:17:33 UTC
As bug #40504 ("memory leak in search") was fixed since the last
comment to this bug report had been added, this bug needs to be
checked again. Maybe the memory leak was the only remaining cause of
this bug.
Comment 24 Marian Petras 2004-04-21 08:28:09 UTC
*** Issue 42231 has been marked as a duplicate of this issue. ***
Comment 25 Petr Nejedly 2004-04-27 12:02:41 UTC
*** Issue 42486 has been marked as a duplicate of this issue. ***
Comment 26 Petr Nejedly 2004-04-27 12:03:32 UTC
Huh, still not fixed?
Comment 27 Marian Petras 2004-04-27 13:43:33 UTC
No, it is not fixed yet. The bug is caused by multiple factors - at
least two of them (holding references to all searched objects
and a memory leak in the search results window) have been eliminated, but the
problem persists.
Comment 28 _ ttran 2004-04-30 15:23:01 UTC
I removed the keyword ARCH.  This is not an architecture problem, at least not
at the platform level.  The feature is simply badly designed.  See
Petr Nejedly's comment from 2002-01-16.

I don't agree that searching over a large directory is an uncommon scenario.
Quite the opposite: it is exactly the situation in which the user wants to
use this feature.  If we can't fix the OOME then we'd better remove
the feature; it's unusable.

Given how old this bug report is, and how easily users can run into
it, I request that the bug be finally fixed in promotion D.  This
actually qualifies as a P1 (data loss): once the OOME happens the
IDE cannot recover.
Comment 29 _ pkuzel 2004-05-04 16:26:20 UTC
SearchGroup.java new revision: 1.4 eliminates huge (cumulative) leak.
Comment 30 _ pkuzel 2004-05-04 18:44:09 UTC
I have kicked out java icon badging (which spawned the parser) and removed
children for result nodes. It's two times better now (in both memory
consumption and the time it takes).

ResultModel.java new revision: 1.29
Comment 31 _ pkuzel 2004-05-05 13:08:45 UTC
ResultModel.java new revision: 1.31 batches setKeys() calls. This
doubles the speed and increases the searchable dataset by 5%.
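
A hedged sketch of what batching setKeys() amounts to, assuming the standard
org.openide.nodes Children.Keys API; ResultChildren, addMatch, flush and the batch
size are made up for illustration and are not the actual ResultModel code:

    import java.util.ArrayList;
    import java.util.List;

    import org.openide.nodes.AbstractNode;
    import org.openide.nodes.Children;
    import org.openide.nodes.Node;

    // Instead of calling setKeys() once per found object (forcing the node
    // machinery to recompute after every hit), matches are buffered and
    // published in larger chunks.
    class ResultChildren extends Children.Keys {
        private final List keys = new ArrayList();
        private int pending = 0;
        private static final int BATCH = 64;   // arbitrary batch size

        void addMatch(Object match) {
            keys.add(match);
            if (++pending >= BATCH) {
                flush();
            }
        }

        void flush() {
            pending = 0;
            setKeys(keys);   // one bulk update instead of many small ones
        }

        protected Node[] createNodes(Object key) {
            AbstractNode n = new AbstractNode(Children.LEAF);
            n.setName(key.toString());
            return new Node[] { n };
        }
    }
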
Comment 32 _ pkuzel 2004-05-07 18:30:01 UTC
I added explicit cleanup logic, which means the search module blocks
the leak from spreading.  The leak, except for a small empty shell referenced by
VisualizerNode, is eliminated. Well, to be puristic, that should be
eliminated too.

The scalability issue still remains (for bigger datasets than before).
There are several possibilities for how to address it:

  - avoid the memory-expensive Nodes & Children layer
    currently blocked by the requirement that results behave as
    their source DataObject (getNodeDelegate)

  - store the results model to a file and visualize it in steps
    requires a nontrivial UI change that moreover worsens usability
    (directly exposing implementation problems)

  - detect a low-memory condition and stop the search in advance
    requires reliable low-memory condition detection logic

In this late development stage only the last option is deliverable, I'm
afraid. And it has two dependencies:
  - the need for issue #42786 - low-mem API
  - defining the subset of supported JDKs that return
    correct values from Runtime.freeMemory, maxMemory & co.

I really do not want to take the TODOs approach of stopping the search
after reaching some limit (300) because here users cannot work
incrementally as with TODOs in all use cases (e.g. committing all locally
changed files in a batch).

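The third option above (stop the search when memory runs low) can be sketched roughly as
follows. This is only an illustration of the Runtime.freeMemory/maxMemory check the comment
mentions, not the code that was actually committed; LowMemoryGuard, the threshold and the
usage names are hypothetical, and the real implementation was to depend on the low-memory
API from issue #42786:

    // Periodically check how close the heap is to its limit and stop the
    // search before an OOME is thrown.
    class LowMemoryGuard {
        private static final double THRESHOLD = 0.9;   // stop at ~90% of max heap

        static boolean isMemoryLow() {
            Runtime rt = Runtime.getRuntime();
            long max = rt.maxMemory();                     // -Xmx limit (Long.MAX_VALUE if unknown)
            long used = rt.totalMemory() - rt.freeMemory();
            return max != Long.MAX_VALUE && used > max * THRESHOLD;
        }
    }

    // Usage inside a (hypothetical) search loop:
    //
    //     while (it.hasNext()) {
    //         if (LowMemoryGuard.isMemoryLow()) {
    //             notifyUserAndStop();   // hypothetical
    //             break;
    //         }
    //         test((DataObject) it.next());
    //     }
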
Comment 33 _ pkuzel 2004-05-07 18:32:44 UTC
PS: I already eliminated the Children layer a little bit. Hence the new UI.
Comment 34 _ pkuzel 2004-05-07 18:57:00 UTC
Linked to issue #32708 Lightweight Children.
Comment 35 _ pkuzel 2004-05-10 10:54:00 UTC
I compared with 3.6, searching trunk/nb_all (939 MB):

for fulltext "void":

      init mem    found files    time      memory after
                  before OOME              closing results
3.6   10MB        1789           2:57      91MB
NOW   15MB        3421           2:40      15MB

for fulltext "woid":

3.6   10MB        1              8:09      95MB
NOW   15MB        1 (no OOME)    10:01     71MB

The second case still leaks and I will address it now.
Comment 36 _ pkuzel 2004-05-10 17:58:23 UTC
With modified code for MFS (issue #42992) no major leak can be observed.
Comment 37 _ rkubacki 2004-05-11 08:59:59 UTC
*** Issue 42669 has been marked as a duplicate of this issue. ***
Comment 38 _ pkuzel 2004-05-11 14:45:57 UTC
I added low-memory condition detection logic and a handler for that event.

Resolving as LATER because there are still unresolved issues that
cause a memory leak if provoked by this user action.

Once issue #32708 is solved we can improve search scalability by
the next grade.
Comment 39 Jesse Glick 2004-05-11 16:59:53 UTC
We don't use RESOLVED/LATER.
Comment 40 _ ttran 2004-05-11 17:02:41 UTC
that's what I wanted to say too and had a mid-air collision w/ Jesse :-)

We should never mark an issue with the LATER resolution, that's the rule,
even though IZ allows us to do it.  So either mark this as FIXED, or,
if you think it's not really FIXED but much less of a problem now, then
lower the priority.
Comment 41 Marian Petras 2004-06-01 09:51:42 UTC
*** Issue 44078 has been marked as a duplicate of this issue. ***
Comment 42 dmladek 2004-07-29 12:20:58 UTC
Trying to verify, but unfortunately there is issue #46727, which prevents
me from making sure this fix is fine :-(
Comment 43 dmladek 2004-08-09 13:06:34 UTC
I'm glad I can verify this issue as successfully fixed now :-)
Despite the fact that the fix is based on constraining results to 500 matches :-o