This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
I don't know which module to choose for filing this issue. Since the issue shows up for me with the "Build All" command, the java module was selected :-)

OK, the issue.
Env: build from current development CVS trunk, j2se 1.4.1_02, win xp
Cfg: VCS FS, internal compiler, treefs, ...

Case 1:
- run the IDE
- run "Clean All" on some FS root (~1000 files)
- run "Build All" (the same FS root)
The "Build All" command takes about 40 seconds.

Case 2:
- run the IDE
- NEW: work with the IDE actively for 10-20 minutes (editing, compiling, running ant tasks, ...)
- run "Clean All"
- run "Build All"
The "Build All" command takes anywhere from a few minutes to a "kill process tree" after ~10 minutes of waiting at 100% CPU.

I'll attach two files (in one jar): one dump for the first case, a few dumps for the second one.
Created attachment 9435 [details] thread dumps for both cases
In the slow case, there is a "VCS Object Integrity Analyzer" thread working hard, rewriting file attributes over and over. BTW: Do you think it is reasonable to write data to disk in a ReferenceQueue handler? The handler should do fast cleanup, not back up "lost" data. What if the IDE exits without releasing (and thus saving) your data?
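The concern above (heavy work inside a ReferenceQueue drain) can be illustrated with a minimal, hypothetical sketch; the class and field names below are invented for illustration and are not the actual vcscore code. The idea is that the drain loop should only do quick bookkeeping and hand the expensive "save to disk" step off to a background worker:

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: drain a ReferenceQueue quickly and only *record*
// what needs saving, instead of performing disk I/O inside the drain
// loop itself. A background worker would later process pendingSaves.
public class DeferredCleanup {
    static final List<String> pendingSaves = new ArrayList<String>();

    // Fast path: just note what needs saving; no disk I/O here.
    static void drain(ReferenceQueue<Object> queue) {
        Reference<?> ref;
        while ((ref = queue.poll()) != null) {
            pendingSaves.add("attributes-of-" + ref.hashCode());
        }
    }

    public static void main(String[] args) {
        ReferenceQueue<Object> queue = new ReferenceQueue<Object>();
        PhantomReference<Object> ref =
                new PhantomReference<Object>(new Object(), queue);
        ref.enqueue();   // simulate the collector enqueuing the reference
        drain(queue);
        System.out.println("deferred saves: " + pendingSaves.size());
    }
}
```

This also makes the "IDE exits before the handler runs" risk explicit: anything still sitting in the deferred list at shutdown is lost unless it is flushed on exit.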
If this is in 4.0 dev, it must be in 3.5 as well; the codebase is identical.

> "VCS Object Integrity Analyzer" thread working hard on rewriting and
> rewriting file attributes

Yes. Unfortunately I thought this would be handled better by DefaultAttributes. I've already made some optimizations, but it seems they were not enough. I will add some further lazy scheduling of the writing process.

> BTW: Do you think it is reasonable to write data to disk in a
> ReferenceQueue handler? The handler should do fast cleanup,
> not back up "lost" data. What if the IDE exits without
> releasing (and thus saving) your data?

This is a different issue; it's not associated with the "VCS Object Integrity Analyzer". It's a problem in the caching architecture and it will probably need some more tuning. Basically you're right that when the JVM exits without finalizing these objects, they're lost, which is not good. I've filed issue #32089 for that problem.
Fixed in the main trunk. The storage of the attributes is now performed lazily. Please verify whether it helps. If yes, I'll merge it into the release35 branch.

/cvs/vcscore/src/org/netbeans/modules/vcscore/objectintegrity/IntegritySupportMaintainer.java,v <-- IntegritySupportMaintainer.java
new revision: 1.2; previous revision: 1.1
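The "stored lazily" fix can be sketched roughly as follows (a minimal illustration with invented names, not the actual IntegritySupportMaintainer code): many attribute changes are coalesced into a dirty flag, and a periodic flush performs at most one disk write per cycle, instead of one write per change.

```java
// Hypothetical sketch of coalescing attribute writes: repeated
// markDirty() calls within one flush cycle result in a single write.
// In the real module, flush() would be driven by a background timer
// and would serialize the attributes to disk.
public class LazyAttributeStore {
    private boolean dirty = false;
    private int diskWrites = 0;

    public synchronized void markDirty() { dirty = true; }

    // Writes at most once per cycle, no matter how many changes
    // accumulated since the last flush.
    public synchronized void flush() {
        if (dirty) {
            diskWrites++;   // stands in for the real file write
            dirty = false;
        }
    }

    public synchronized int getDiskWrites() { return diskWrites; }

    public static void main(String[] args) {
        LazyAttributeStore store = new LazyAttributeStore();
        for (int i = 0; i < 1000; i++) {
            store.markDirty();      // 1000 attribute changes...
        }
        store.flush();
        System.out.println("writes: " + store.getDiskWrites()); // ...one write
    }
}
```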
Martin, unfortunately the fix doesn't help. After ~1.5 hours of work, the build time went up from ~40 sec to ~400 sec :-(
And the stack traces? Are they similar (i.e. is it still a problem of frequent rewriting of attributes, or something else)? Radek may have something to tell us about the attributes; CCing him.
Created attachment 9460 [details] thread dumps after Martin's fix
I have uploaded thread dumps captured when the build took about 4x longer than a build right after the IDE started.
OK, I did some measurements using different compilers on different filesystems, rebuilding openide/src. The results are:

LocalFS + external compiler: 44s
LocalFS + internal compiler: 60s
JavaCVS + external compiler: 49s
JavaCVS + internal compiler: 130s

Generally, for larger builds, the external compiler is faster because the internal one has higher overhead when writing class files (folder.createData() causes a refresh). This is far more significant on the JavaCVS FS, because it has a somewhat more complicated way of refreshing.

Slowing down over time: the first invocation of the build on JavaCVS was faster, about 95s. The slowdown of subsequent builds can be caused by JavaCVS actively releasing some info during refresh.

User workaround: don't use the internal compiler.

Possible solution for JavaCVS (and maybe other VCS FSes): optimize the refresh caused by createData(). The FS knows exactly what has changed, so there is no need to do a full refresh.

I also tested it under a profiler, and a big portion of the time was spent in CvsCacheDir.repopulate() called from the refresh() caused by createData(). I'll attach the log.
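The proposed optimization (a targeted refresh instead of a full one) can be sketched with a toy model; the class and method names here are hypothetical, not the real filesystem API. The point is the cost difference: a full refresh rescans every entry in the folder, while createData() already knows the one name that changed.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch contrasting a full-folder refresh with a targeted
// one: when createData() already knows the name of the new file, the
// filesystem can register just that entry instead of rescanning everything.
public class FolderCache {
    private final Set<String> known = new HashSet<String>();
    private int entriesScanned = 0;

    // Full refresh: re-examine every entry the backing store reports.
    public void fullRefresh(Set<String> onDisk) {
        for (String name : onDisk) {
            entriesScanned++;
            known.add(name);
        }
    }

    // Targeted refresh: register only the entry we know was created.
    public void noteCreated(String name) {
        entriesScanned++;
        known.add(name);
    }

    public int getEntriesScanned() { return entriesScanned; }
    public int size() { return known.size(); }

    public static void main(String[] args) {
        Set<String> disk = new HashSet<String>();
        for (int i = 0; i < 1000; i++) disk.add("File" + i + ".class");

        FolderCache full = new FolderCache();
        full.fullRefresh(disk);              // 1000 entries scanned for 1 change

        FolderCache targeted = new FolderCache();
        targeted.noteCreated("File0.class"); // 1 entry scanned

        System.out.println(full.getEntriesScanned() + " vs "
                + targeted.getEntriesScanned());
    }
}
```

On a VCS filesystem, where examining each entry also consults the cache (the CvsCacheDir.repopulate() hotspot above), the per-entry cost is much higher, which is why the full-refresh path dominates the profile.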
Created attachment 9473 [details] Log from OptimizeIt
Thanks, Petr, for your analysis. This issue was however submitted for the command-line CVS, not JavaCVS. There are big differences in the implementation between the two. However, it looks like the original problem with the "VCS Object Integrity Analyzer" does not appear any more (at least I did not find it in the latest thread dump). So this part looks fixed; there might be other issues connected with the caching system.

PS: JavaCVS does not use the VCS Object Integrity Analyzer at all. Also, there is probably no reason to optimize JavaCVS if there are no big problems (from Petr's analysis it does not look that bad). We should concentrate on command-line VCS filesystems, because the JavaCVS filesystem will not exist in NB 4.0 (it will be replaced with a profile for the command-line VCS filesystem). The problem with the cache performance (CvsCacheDir.repopulate()) is probably the same or very similar in both JavaCVS and command-line VCS.
OK, there are probably no more problems with the VCS Object Integrity Analyzer now; the "only" remaining problem is that the internal compiler on a VCS-based FS is still 2x slower than on LocalFS due to expensive refreshes (they probably occur on LocalFS as well, they're just cheaper there), so Radek will look at it in AbstractFS.

Just for the record: phrebejk was complaining about one-file recompilation taking a long time on a VCS-based FS. This is the other side of the problem - using the external compiler. I did some measurements and the numbers were:

External compiler / LocalFS: ~7s
External compiler / JavaCVS: ~12s
Internal compiler / *: ~2s

In this case, from the profiler log, it seems the problem is the JavaCompiler refreshing too many filesystems, but the compiler doesn't know in advance what will change... I'll pass the one-file-recompilation problem to the Java guys.
I'm not sure it is important, but it seems to me I have found another place with an accumulating delay: after a VCS update command finishes (judging by the status string), the IDE eats 100% CPU for some period. This period grows while working with the IDE, from 4-5 sec to 15-20 sec. Would a thread dump be interesting for this case? You see - this case has nothing to do with Java compiling :-)
Radek sent me a testing patch for the unnecessary refresh, and the situation is much better with it: the speed of the internal compiler is similar to the speed of the external one; it is even faster on a local FS. I've filed issue 32243 for tracking the integration of this improvement.
So, to sum it up:
- After the fix in IntegritySupportMaintainer.java revision 1.2, the build time on a command-line CVS FS should be similar to a JavaCVS FS. So it would be good to have this in NB 3.5 as well.
- After issue #32243 is fixed, the build time on all CVS filesystems should be reduced by ~50% according to tests.
- The problem with 100% CPU after update is a quite separate issue (a refresh is performed on all updated files even though they may not actually need it). That fix is not simple; I've created issue #32345 for the problem.

Andrew, you've said that "After ~1.5 hour of work building time got up from ~40 sec to ~400 sec". Is the time still growing without limit as you work with the IDE? Or is there some fixed limit that the Build All action reaches every time? In case the time is still growing, I would appreciate it if you could attach thread dumps as the build time grows. Thanks.
Martin, it seems the build time stabilizes. Some successive measured periods are (min:sec): 7:50, 8:21, 7:16, 8:19. I don't know whether this fact is interesting or not, but the IDE status string has this dynamic: 1. very fast compilation of a set of files, 2. freezing on some file (up to ~20 sec, varies from file to file), 3. back to 1.
Interesting. I have some questions about the last measurement:
*) How many filesystems of what kind have you mounted?
*) Which compiler type are you using? (I guess still the internal one; you won't see compiled filenames otherwise. Please try the external compiler to see if it makes a difference.)
*) How big is the project you're rebuilding (# of sources)?
*) What is the memory behaviour? (Turn on the memory toolbar by right-clicking in the toolbar area and checking Memory.)

It can be caused by several things:
The internal compiler w/o the fix for #32243 is slow at accessing files, but for me it has stabilized the build time at less than 2x the first build-all.
The external compiler has to refresh all the code filesystems, which may be very slow (issue 32171), and I suspect the java module is doing it the wrong way, which may cause subsequent refreshes to be slower and slower (an off-by-one problem, refreshing one more tree level than necessary each time; the time stops growing once everything is already covered).
Refreshes on a VCS FS are several times slower than on LocalFS.
The pause may be caused by a big garbage collection under tight memory conditions; you'd see it on the memory toolbar.
Petr,
1. I have one VCS (win 2000 profile) FS and 6 treefs mounted over it.
2. I use the internal compiler. The external one takes unacceptably long when compiling a single file. I worked a few days with the external compiler and did not notice any unacceptable "Build All" time growth.
3. The project has about 1100 files, ~3.1MB.
4. I'll report on the memory indicator later. The Windows task manager shows that the memory consumed by the IDE is about 200MB, and this value is rather stable.
CCing Jarda; he told me about some TreeFS refresh problems, so he may have something more to say. For slow one-file recompilation using the external compiler, see issue 32171; I think I've solved it, the impact is _huge_, and Svata will integrate it ASAP. I'll try to play with treefs; it may make the difference. Could you try a newer dev build with issue 32243 fixed (e.g. today's daily build (still not on the web site...) or a build from current trunk sources)? If you still see longer compilation times, please post the stack traces as well. Thanks much for your time.
There are so many fixes of "depends on" issues ... :-) Well, just now I'm updating from the CVS trunk. I'll report any changes as I notice them. You see, it takes some time to measure something. I have not noticed anything interesting in the memory indicator's behaviour. The phases [1. very fast compilation of a set of files, 2. freezing on some file (up to ~20 sec, varies from file to file), 3. back to 1.] don't show any noticeable differences.
"There are so many fixes of "depends on" issues ... :-)" Yes, I took a broader look at the edit-compile loop. We were doing a lot of responsiveness work in the 3.5 timeframe and didn't watch throughput closely, so we have to fix it now... "You see, it takes some time to measure something." Yes, I know, I'm doing a lot of measurement ;-) "2. freezing on some file (up to ~20 sec, varies from file to file)" I thought it might be GC kicking in, but in fact it can be pretty normal; it may be a file with a lot of yet-uncompiled dependencies which are compiled underneath.
After all the related changes in the CVS trunk, there are two pieces of news from me now: 1. The build time doesn't grow more than 4x now (it was ~10x). 2. After 16 builds (well, appreciate that amount :-) during ~8 hours, I got a java.lang.OutOfMemoryError. I have reported my project's size (~1100 files, ~3.1MB). Should -Xmx=128m be sufficient for such a project?
> After 16 builds (well, appreciate that amount :-)
> during ~8 hours, I got a java.lang.OutOfMemoryError.
> I have reported my project's size (~1100 files, ~3.1MB).

Thanks very much. The OOME was likely caused by some memory leak that grew during each build. It would need a profiling tool to detect this.

> Should -Xmx=128m be sufficient for such a project?

If it's enough for one build, it should be enough for 16 builds. Unless there's a memory leak somewhere, which is likely to be true :-(( I'm working on issue #32385. After that is fixed, this issue can hopefully be resolved as fixed...
Martin, your last comment is very interesting to me: probably there is no issue #32065, I think :-) And a memory leak is the reason for the problems. What do you think? OK, if the memory leak exists, this leak has provoked a few NB developers into doing great work on performance (I cannot achieve growth of more than 4x; it used to be 10x). But I don't know what I can usefully do to determine the cause of the leak (if it exists). Can you point me somewhere? Thanks!
Issue #32385 is fixed, so I'm marking this as fixed as well. Hopefully all important performance problems that occur during compile/clean/build are fixed. According to Andrew's tests, there might be a memory leak. It seems that the memory leak, if it exists, does not have a direct impact on performance (only a secondary one through bigger memory consumption). To determine the memory leak, one would have to use a profiling tool (e.g. HAT, JProfiler) that shows the objects which live in the JVM, and find which objects' counts grow after every compilation and are not garbage-collected.
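The "counts grow after every compilation" signal described above can be illustrated with a toy sketch (invented names; a real investigation would use a heap profiler such as HAT or JProfiler, this just shows the pattern being looked for): identical build cycles that each leave one more live instance behind.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the leak signature: a per-class live-instance
// counter that grows monotonically across otherwise identical builds,
// because each build's record is kept reachable instead of disposed.
public class InstanceCensus {
    static int liveBuildRecords = 0;

    static class BuildRecord {
        BuildRecord() { liveBuildRecords++; }
        void dispose() { liveBuildRecords--; }  // must be called to "free"
    }

    public static void main(String[] args) {
        List<BuildRecord> forgotten = new ArrayList<BuildRecord>();
        for (int build = 1; build <= 5; build++) {
            BuildRecord rec = new BuildRecord();
            // Bug pattern: the record stays reachable (e.g. via a
            // listener or cache) and dispose() is never called.
            forgotten.add(rec);
            System.out.println("after build " + build
                    + ": live records = " + liveBuildRecords);
        }
        // Monotonic growth across identical builds is the leak signature.
    }
}
```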
Is it serious enough to be fixed in release35? Petr N? If so please raise prio to P1
Trung, yes, the problem in the integrity analyzer was quite serious. The rest is already covered (and mostly integrated in r3.5) by other issues. Andrew, I appreciate your effort! The memory leak seems like quite a serious problem; we should try to nail it down. It probably occurs only when using TreeFS, but one never knows. It can cause the slowdown, since leaking objects are often linked through listeners and make firing changes more and more expensive. I'll look at it later; if you wish (and have some time for it), you can look at the problem yourself in your environment. A little HOWTO is at http://performance.netbeans.org/articles/hat-howto but beware of the bug in the jdk 1.4 and 1.4.1 hprof implementation.
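The "leaking objects linked through listeners" mechanism mentioned above can be sketched as follows (hypothetical names, not the actual NetBeans listener code): each build registers a listener that is never removed, so the listener list both pins the leaked objects and makes every subsequent change event more expensive to fire.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the listener-leak pattern: each "build"
// registers a listener that is never removed, so the list grows and
// every subsequent change event does more and more work.
public class LeakyEventSource {
    interface ChangeListener { void changed(); }

    private final List<ChangeListener> listeners = new ArrayList<ChangeListener>();
    private int callbacksDelivered = 0;

    public void addListener(ChangeListener l) { listeners.add(l); }

    public void fireChange() {
        for (ChangeListener l : listeners) {
            l.changed();
            callbacksDelivered++;
        }
    }

    public int getListenerCount() { return listeners.size(); }
    public int getCallbacksDelivered() { return callbacksDelivered; }

    public static void main(String[] args) {
        LeakyEventSource source = new LeakyEventSource();
        for (int build = 1; build <= 16; build++) {
            // Simulates a per-build object that registers itself but
            // forgets to unregister when the build finishes.
            source.addListener(new ChangeListener() {
                public void changed() { /* react to a file change */ }
            });
            source.fireChange();
        }
        // After 16 builds, 16 listeners are still attached, so firing
        // one change now costs 16 callbacks instead of 1.
        System.out.println("listeners: " + source.getListenerCount());
    }
}
```

This matches both symptoms in the thread: memory grows (the anonymous listeners keep their enclosing per-build state alive) and firing changes slows down with each build.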
Created attachment 9608 [details] The text diff of the fix.
Created attachment 9609 [details] The binary patch for this issue (put it into <NB install>/modules/autoload/patches/org-netbeans-modules-vcscore)
The issue is verified from the reporter's point of view. Thanks to all! Petr, I have j2se 1.4.1_02 only, and my attempts to do anything useful with hprof have had no success. So, I decided to unmount all the treefs FSs and mount all my source roots as VCS FSs. And ... magic! Everything works fine and quickly. Moreover, the filesystem explorer became noticeably more responsive. On the other hand, I think the treefs module is a must-have module (before the new prj_40), as it is not handy to customize every source root as a VCS FS. Should I report a bug against treefs? The reason for my doubts is the absence of a clear and short way to reproduce the problem (measurements, measurements, measurements, ...).
Martin: I've reviewed the patch. Although it would be better to have this facility implemented directly in DefaultAttributes, the patch is OK for now. Andrew: Thanks to you! Now we'll have a faster IDE. "Petr, I have j2se 1.4.1_02 only, and my attempts to do anything useful with hprof have had no success." I know, see http://developer.java.sun.com/developer/bugParade/bugs/4719788.html "On the other hand, I think the treefs module is a must-have module" It is still not part of the official distribution, so it is not watched that closely, but I agree it is very useful. "Should I report a bug against treefs?" Surely. "The reason for my doubts is the absence of a clear and short way to reproduce the problem" I think there is one: mount something, mount several TreeFSes over it, and do "Build All" several times. A memory profiler will tell us ;-)
Can someone review the diff so that Martin can integrate it into release35? Thx
Petr Nejedly already reviewed it; see above. The patch looks OK to me too.
approved for 3.5 by rel coordinator
Thanks for the review and approval. The fix is integrated in the main trunk: /cvs/vcscore/src/org/netbeans/modules/vcscore/objectintegrity/IntegritySupportMaintainer.java,v <-- IntegritySupportMaintainer.java new revision: 1.1.2.2; previous revision: 1.1.2.1
Just to correct my last message: I should have said release35 branch instead of main trunk, of course. The fix is integrated in the release35 branch :-)
Verifying, since there is a big improvement in cleaning and compiling in NB35 #200306022350.