This Bugzilla instance is a read-only archive of historic NetBeans bug reports.

Bug 34834 - Terrible performance and subsequent OutOfMemory Error
Summary: Terrible performance and subsequent OutOfMemory Error
Status: CLOSED WONTFIX
Alias: None
Product: ide
Classification: Unclassified
Component: Performance
Version: 3.x
Hardware: All / OS: All
Importance: P3 blocker with 1 vote
Assignee: Antonin Nebuzelsky
URL:
Keywords: PERFORMANCE
Depends on: 35656
Blocks:
Reported: 2003-07-09 16:18 UTC by iformanek
Modified: 2011-05-25 11:36 UTC
CC List: 5 users

See Also:
Issue Type: DEFECT
Exception Reporter:


Attachments

Description iformanek 2003-07-09 16:18:51 UTC
Starting from a clean install of NB 3.5:
- mounted core module, performed CVS Update on /src subfolder
- mounted openide module, performed CVS Update on /src subfolder
- mounted all of release35 as one filesystem, performed CVS Update
- ran nbbuild/build ANT script

Subsequent editing of Java source was dog slow (like a 20-second wait on each click or keystroke), and OutOfMemory errors started to appear. Windows Task Manager shows 177 MB of allocated memory; the Memory toolbar in NB shows 97000 KB.

HW Config: brand new Dell C840 laptop, 512 MB RAM, Pentium IV processor
SW Config: Win XP Pro, simultaneously running Mozilla, Outlook, Windows Commander
Comment 1 Jesse Glick 2003-07-09 16:56:11 UTC
Does the same problem occur if you mount an existing checkout of
release35 sources as a plain filesystem, without using the NB CVS
support at all?
Comment 2 iformanek 2003-07-10 15:43:08 UTC
Will try when I find a moment. This type of testing takes nontrivial time, so if someone from QA wants to jump on it, that might work better.
Comment 3 iformanek 2003-07-10 17:42:46 UTC
Some more ideas:
- the same happened to me today, when using the IDE with the same filesystems (CVS), but not using the CVS operations
- the OutOfMemory Error thrown has the same annotation as during the first occurrence:

-------------
Annotation: Parser error
[catch]java.lang.OutOfMemoryError
==>
-------------
with no more details.

Perhaps it is connected to background Parser database creation, and the sheer amount of data on the filesystems causes the problems?
Comment 4 iformanek 2003-07-10 17:43:58 UTC
Raising to P1, as this renders my NB 3.5 unusable and potential data loss can occur...
Comment 5 Jesse Glick 2003-07-10 18:49:23 UTC
"the same happened to me today, when using the IDE with the same
filesystems (CVS), but not using the CVS operations" - does this mean
you had NB CVS mounts and were just not using any actions on
them; or that you had plain Directory mounts? It is important, as the
VCS filesystems can do special things (refresh, update caches, ...)
even when you did not explicitly invoke any action. Also command-line
vs. Java-based CVS could be important, etc.

Re. background parser DB creation: assuming you correctly turned off
all capabilities on the topmost release35 mount after mounting it,
that should have cancelled the parser DB creation immediately. (Only
leave on caps if it is a Java source root.) Letting the parser DB run
on all of release35 sources will definitely kill performance. Doing it
with a VCS mount will probably make things even worse. IMHO you should
be asked when you are mounting a filesystem whether it is a source
root or not, but you are not - if it is not, you have to know to turn
off all caps right after mounting it.

In any event, auto creation of the parser database is only triggered
by mounting a FS - never by restarting the IDE (even if it did not
finish in the last session). I assume you have restarted NB - after an
OOME, generally the VM is fried and must be restarted.

"Parser error" could mean a lot of things, not just parser database,
e.g. XML parsing. OOME's never print stack traces; I guess there is
not enough memory left to even do that (and anyway a stack trace would
not tell you much in that case). The only way I know of to deal with
them is to be able to consistently reproduce the problem, then go
through the whole sequence running inside a profiler that will show
what is chewing up so much heap.

Again, a more detailed description of what it was you did is necessary
to know what is going on - why your NB session, not other people's, is
getting hosed. Of course if QA can help in reproducing this that would
be great, but as reporter you have a head start.
Comment 6 iformanek 2003-07-11 12:24:39 UTC
> "the same happened to me today, when using the IDE with 
the same
> filesystems (CVS), but not using the CVS operations" - 
does this you
> mean you had NB CVS mounts and were just not using any 
actions on
> them; or that you had plain Directory mounts? It is 
important, as the
> VCS filesystems can do special things (refresh, update 
caches, ...)
> even when you did not explicitly invoke any action. Also 
command-line
> vs. Java-based CVS could be important, etc.

As I said, it is on CVS filesystem, not local filesystem.

> Re. background parser DB creation: assuming you correctly 
turned off
> all capabilities on the topmost release35 mount after 
mounting it,
> that should have cancelled the parser DB creation 
immediately.

Yes, I actually did.

> Again, a more detailed description of what it was you did 
is necessary
> to know what is going on - why your NB session, not other 
people's, is
> getting hosed. Of course if QA can help in reproducing 
this that would
> be great, but as reporter you have a head start.

I did exactly what I described when I submitted the bug, 
not more not less, perhaps if somebody else (again, QA 
comes to mind as a logical candidate) tries to reproduce, 
we can find out if this is specific to my machine and then 
take it farther from there.
Comment 7 Jesse Glick 2003-07-11 16:51:41 UTC
I was able to get one OOME (during the build process) using similar
steps, though there are a lot of variables involved so the steps are
not exactly the same I think. Could not reproduce any particular
slowdown or failure to operate beyond that.

0. Linux 2.4.18, JDK 1.4.2 FCS. 1 Gb RAM, the usual other apps running
(Emacs, Mozilla, another NB...)

1. Started a std 3.5 dist on a fresh userdir (in a RAM disk).

2. I have a 3.5 source checkout in /space/src/r35. It includes more
modules than are in the std config (e.g. translatedfiles). It was not
built - cleaned - but had extra bins unscrambled already.

3. Mounted /space/src/r35/core as a CVS filesystem, using cmdline CVS
(I am behind SWAN), did not select a relative mount pt. Did not cancel
code completion update (too fast).

4. Ran update on core/src. Got some random exception which I reported
in vcscore, otherwise seemed to run.

5. Did #3 and #4 on openide. No exception.

6. Mounted /space/src/r35. Canceled code completion this time (turned
off caps). Updated all. It ran for a while, failed strangely in
translatedfiles - some bug in the CVS integration, I guess, did not
pursue.

7. Selected nbbuild/build.xml and clicked Execute.

8. It ran for a while with no problems. Somewhere around building
Tomcat, I got a single OOME reported in a dialog. No apparent printing
of OOME to console, log, or Ant output window.

Memory toolbar shows 82Mb out of 97Mb consumed after build. Goes down
to 55Mb on its own after a while. Manual GC takes it down to 45Mb.

Opening a Java file and working with it seems fine: no apparent problems, perfectly responsive.

I then ran a real-clean on build.xml; worked fine.

Maybe QA can do better.

I do once in a while see OOME's after running many complex Ant scripts
in a single session. But not often enough to reproduce in a profiler,
alas.
Comment 8 brightgreen2003 2003-09-10 11:01:45 UTC
I have NetBeans 3.5.1 on XP, Pentium III, 512 MB RAM. There are various ways of increasing the memory usage in NetBeans without actually doing anything apart from simple activities.

Take a simple project with a single directory mapped, and this directory has one Java file in it. Close NetBeans down and then restart it from scratch (so we have some initial conditions set).

If you bring up Windows Task Manager and look at the runide processes (there will be two) and choose Mem Usage as the column to view, you will see the largest value is set and stable at a specific value. In my case the value was

53,372K

Now with the mouse, click on the Filesystems window and then the code window, and do this multiple times. You will notice the memory increase slowly. It seems just by clicking somewhere you have increased the memory. You can easily add a megabyte just by repeating the clicking for a short while. Leaving the GUI and looking at Task Manager shows that the memory is then stable.

Now minimise NetBeans and the memory goes down a great deal (mine goes down to 14,004K); then maximise again, and the memory should go up again slightly (mine went to 20,100K). If you do the clicking thing again the memory goes up even faster than before.

When the memory goes above the 100 MB line NetBeans slows right down and it can be painful to do anything. There is definitely something up with the memory in NetBeans and this should really be addressed.
Comment 9 Jesse Glick 2003-09-10 18:18:25 UTC
Note: for Ant-related OOME's, please use issue #35974 which is
probably more specific.
Comment 10 _ ttran 2003-12-14 16:23:54 UTC
Ian, did the problem happen to you again?
Comment 11 iformanek 2004-01-30 16:48:23 UTC
I have not performed extensive tests on daily builds. Will do when 
3.6 Beta gets out.
Comment 12 _ ttran 2004-02-29 23:35:37 UTC
-> Tonda, please follow up w/ Ian 
Comment 13 Antonin Nebuzelsky 2004-03-01 17:10:33 UTC
Trying to reproduce. On my W2K machine the CVS updates finished and the heap consumption was not too large after them, so there does not seem to be a huge memory leak with CVS updating. Though after the updates finished, I executed the CLEAN target of nbbuild/build.xml and the whole IDE froze. Windows Task Manager freezes immediately when I invoke it. A thread dump on the console does not seem to contain any deadlock. Strange.

On Linux, I was able to finish CVS updates successfully as well. And
the clean target on nbbuild/build.xml finished successfully too. The
heap usage at the end (as shown by the memory meter on toolbar) is
<40MB. I started this adventure with >30MB of heap used.

I will continue by investigating the freeze on W2K tomorrow...
Comment 14 Antonin Nebuzelsky 2004-03-02 13:41:10 UTC
The freeze on W2K was caused by the jdk hack, which we use for
responsiveness measurements, on bootclasspath. For some reason ant's
output stream's writeLine locked with the patched EventQueue. So, this
is not a problem related in any way to this bug.

I then continued after restart of IDE (without the jdk hack this time
:) by running build-nozip on nbbuild/build.xml. It finished
successfully after 13:14. The memory meter shows 53/95 (the minimum
after several GCs).

Doing the same on Linux: the target build-nozip was running for some time until it wrote Out Of Memory in the Output window and finished. I was trying to make a snapshot of the heap from an attached JProfiler, but unfortunately the IDE's VM aborted with an exception in native code.

So my experiments show a behaviour similar to what Jesse described.
Comment 15 Antonin Nebuzelsky 2004-03-02 13:49:39 UTC
But I don't think this must be the IDE's fault. Running Ant on the whole
NetBeans code needs a lot of memory. As you can see on

   http://nbbuild.netbeans.org/ant-quick.html

it is highly recommended to use -Xmx200m when running ant on netbeans
sources. And the IDE is running with -Xmx96m so the internally running
ant IMHO simply does not have enough heap for its needs. I suggest
that we close this bug as WONTFIX.

What's your opinion, Jesse?
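
For what it's worth, the heap ceiling that an in-VM Ant inherits can be confirmed from any code running inside the same JVM. A trivial Java sketch, not a NetBeans API:

    // Prints the -Xmx ceiling of the JVM it runs in. Executed inside the IDE's VM
    // with the default settings it would report roughly 96 MB, well below the
    // ~200 MB recommended for building the NetBeans sources.
    public class MaxHeapCheck {
        public static void main(String[] args) {
            long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
            System.out.println("max heap (-Xmx) for this JVM: ~" + maxMb + " MB");
        }
    }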
Comment 16 iformanek 2004-03-02 13:53:33 UTC
Guess the same would be true when building any large project of a size comparable to NB, right? In such a case, this is still a bug, because a normal user has no way of knowing about those low-level tweaks.
Comment 17 Antonin Nebuzelsky 2004-03-02 15:34:01 UTC
IMO either the IDE must be run with more heap space, or Ant must be run externally from the IDE, not internally, and a user-specifiable amount of heap must be allocated for this external Ant.

I think that the only possible solution for NB36 is running the IDE
with more heap, but I don't think we want to increase the default
value in ide.cfg. Instead I suggest that we release-note this problem...
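
A minimal sketch of the "external Ant" option described above, assuming an ant launcher script on the PATH and using the standard ANT_OPTS environment variable to give the forked VM its own, user-specified heap; this only illustrates the idea and is not the actual IDE integration:

    // Illustration only: run Ant in a separate JVM so its heap limit is
    // independent of the IDE's -Xmx. Assumes an "ant" script on the PATH;
    // ANT_OPTS is the environment variable the Ant launcher passes to its JVM.
    import java.io.BufferedReader;
    import java.io.File;
    import java.io.InputStreamReader;

    public class ExternalAntRunner {
        public static int run(File buildXml, String target, String maxHeap) throws Exception {
            ProcessBuilder pb = new ProcessBuilder("ant", "-f", buildXml.getPath(), target);
            pb.environment().put("ANT_OPTS", "-Xmx" + maxHeap); // e.g. "200m", user-specifiable
            pb.redirectErrorStream(true);
            Process p = pb.start();
            BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
            for (String line; (line = out.readLine()) != null; ) {
                System.out.println(line); // would be routed to the Output window
            }
            return p.waitFor();
        }

        public static void main(String[] args) throws Exception {
            int code = run(new File("nbbuild/build.xml"), "build-nozip", "200m");
            System.out.println("ant exited with " + code);
        }
    }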
Comment 18 iformanek 2004-03-02 15:43:14 UTC
Agreed there is not much we can do for 3.6. Suggest noting it in the Release Notes and addressing this better in 4.0.

Also suggest not closing this bug, as it is valid and can be encountered by users. We can downgrade to P2, as it is not going to be hit frequently and the potential for data loss during a build operation is small.

Adding a dependency on the issue requesting the ability to run Ant in a separate VM.
Comment 19 Antonin Nebuzelsky 2004-03-02 17:23:47 UTC
A release note item suggestion for NB36:

If you execute a very complex Ant target of your Ant build script in the IDE, you may encounter an out of memory error. This is caused by Ant being run within the same JVM as the IDE and the default maximum heap size not being high enough for your complex Ant build script. In such a case you can modify the ide.cfg configuration file in the bin/ directory of your IDE installation, specifying a maximum heap size higher than the default, for example "-J-Xmx200m".

Patrick, John, feel free to change the wording of the text.
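
For reference, the edit described in the note amounts to adding a single launcher switch to bin/ide.cfg (the exact existing contents of that file vary by installation):

    -J-Xmx200m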
Comment 20 Patrick Keegan 2004-03-04 22:20:22 UTC
Added to release36 notes with only slight rewording:

Description: If you execute a very complex Ant target of your ant build script in the IDE, you may encounter an out of memory error. This is caused by Ant being run within the same JVM as the IDE is running in and the default maximum heap size not being high enough for your build script.
Comment 21 Antonin Nebuzelsky 2004-03-05 08:43:48 UTC
Patrick, do we want to include the part about increasing max heap size
by modifying ide.cfg?
Comment 22 Antonin Nebuzelsky 2004-03-05 08:46:44 UTC
Oh, I see you have it in the release notes. THX.
Comment 23 Jesse Glick 2004-03-07 23:07:42 UTC
Huh, for some reason I was not on CC here...

For 3.6, a release note saying to increase the heap size should be
fine. I am not convinced that is the end of the story; I run -Xmx128m
with command-line Ant to do full clean builds w/ complete tests all
the time and have never seen an OOME.

Won't speculate on what the best solution is until I know what the
problem is; need a reproducible case and a usable profiler. (Have
tried repeatedly to use OptimizeIt to analyze in-VM Ant memory usage
and have gotten nowhere.) If you have such a reproducible test case,
please reopen issue #35974 with full details, and maybe mark this one
a duplicate.
Comment 24 Antonin Nebuzelsky 2004-03-08 16:03:41 UTC
Decreasing to P3. Not a critical issue for NB36, though I don't want to close it as WONTFIX; rather, keeping it open as P3 to follow up on this for D.
Comment 25 julienetienne 2004-07-03 11:15:43 UTC
This issue seems to be a real problem. I use NetBeans 3.6 with large Ant scripts (more than 30 targets) and there appears to be a memory leak. I tried it on 24 machines (laptops and desktops) with 256 MB to 1 GB of RAM. Even with increased values in ide.cfg, it still crashes after 10 to 30 minutes. The more you build, the slower the computer gets.

I am not used to bug reporting: if you need help, I can provide a development package that will help you to reproduce this bug.

Don't hesitate to mail me.
Comment 26 _ alexlamsl 2005-08-15 00:58:27 UTC
Is this still a valid bug? If not, I suggest we can safely close it.
Comment 27 _ alexlamsl 2005-11-29 13:37:45 UTC
Close it as WONTFIX; it is outdated now.
Comment 28 Marian Mirilovic 2005-12-14 16:26:55 UTC
closing