I have ~50 projects opened in the IDE (cnd* and dlight* from NB dev repository).
IDE has been opened for ~5 days.
Go to type works extremely slowly!
I have to wait 10-20 seconds for a class which is already opened!
All opened projects together contain ~5000 files; go to type should work instantly under such conditions.
>cnd* and dlight* from NB dev repository
You have the apisupport project for cnd and dlight open, right?
Can you attach the snapshot file(s)?
Go To Type provides an SPI where other modules (some of them even when you have no project of that type) add items.
I have a bigger setup than 5000 files and do not see such a problem. The problem may be related to insufficient memory, but without a snapshot it's hard to say.
Which OS are you using?
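To make the SPI remark above concrete, here is a minimal stand-in sketch; the interface and method names only loosely mirror the real jumpto SPI, and everything below is illustrative rather than actual NetBeans API:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the Go To Type provider SPI: every registered
// provider is asked to contribute matching type names, including providers
// for languages the user has no project of.
public class TypeProviderSketch {
    interface TypeProvider {
        // Adds type names matching the query text to the shared result list.
        void computeTypeNames(String query, List<String> result);
    }

    static List<String> runQuery(String query, List<TypeProvider> providers) {
        List<String> result = new ArrayList<>();
        for (TypeProvider p : providers) {
            // one slow provider delays the whole dialog when run sequentially
            p.computeTypeNames(query, result);
        }
        return result;
    }

    public static void main(String[] args) {
        TypeProvider javaP = (q, r) -> { if ("RfsListener".startsWith(q)) r.add("RfsListener"); };
        TypeProvider cppP  = (q, r) -> { if ("RfsSyncWorker".startsWith(q)) r.add("RfsSyncWorker"); };
        System.out.println(runQuery("Rfs", List.of(javaP, cppP)));
    }
}
```

Because all providers share one sequential pass, the dialog is only as fast as the slowest provider, which is why a single slow index can show up as a 10-20 second wait.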
I'm using OpenSolaris. Will create a snapshot.
What is an apisupport project?
Created attachment 91420 [details]
Some info from me as well.
I use a daily build, clean var/cache and start the IDE.
For developing CND I open all cnd* and dlight* modules in the IDE.
I have 8GB on my workstation and start the IDE with the flags:
-J-client -J-Xss2m -J-Xms152m -J-Xmx1500m -J-XX:PermSize=62m -J-XX:MaxPermSize=500m
The complete scan takes some time. If you need it, I can send the full log.
After that (I think all indexes are in memory) I invoke Go To Type for "Rfs". The first search takes about 5 sec; subsequent searches happen immediately and are fast enough.
But after working with the code for an hour, go to type takes 10 sec and more.
So I often cancel the activity and open files from the projects view instead.
Product Version = NetBeans IDE Dev (Build 091119-12d843db6d39) (#12d843db6d39)
Operating System = SunOS version 5.10 running on x86
Java; VM; Vendor = 1.6.0_14; Java HotSpot(TM) Client VM 14.0-b16; Sun Microsystems Inc.
Runtime = Java(TM) SE Runtime Environment 1.6.0_14-b08
Compiler: HotSpot Client Compiler
Heap memory usage: initial 152.0MB maximum 1493.6MB
Non heap memory usage: initial 64.0MB maximum 532.0MB
Garbage collector: ParNew (Collections=54 Total time spent=0s)
Garbage collector: ConcurrentMarkSweep (Collections=0 Total time spent=0s)
Classes: loaded=7066 total loaded=7066 unloaded 0
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Complete indexing of 279 source roots took: 247268 ms (New or modified files: 13397, Deleted files: 0)
I used default memory settings, machine has 4Gb mem
The apisupport project is the internal name of the project type for netbeans modules.
One more thing: because of the limited number of file descriptors on UNIX (Linux) we close the cache files once there are more than 400 open indexes (LRUCache). You can try to increase the size of the cache using the -J-DIndexCache.size=<n> command line option (e.g. -J-DIndexCache.size=1000), which may improve the performance.
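The close-on-evict behavior described here can be pictured with a small sketch; IndexHandle stands in for a Lucene IndexReader, and nothing below is the actual LRUCache implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the described behavior: at most `limit` index readers are kept
// open; when the limit is exceeded, the least recently used one is closed.
public class IndexLruCache {
    interface IndexHandle { void close(); }

    private final int limit;
    private final LinkedHashMap<String, IndexHandle> open;

    IndexLruCache(int limit) {
        this.limit = limit;
        // accessOrder=true turns the map into an LRU structure
        this.open = new LinkedHashMap<String, IndexHandle>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, IndexHandle> eldest) {
                if (size() > IndexLruCache.this.limit) {
                    eldest.getValue().close(); // frees the file descriptors
                    return true;
                }
                return false;
            }
        };
    }

    void put(String root, IndexHandle h) { open.put(root, h); }
    boolean isOpen(String root) { return open.containsKey(root); }
    int size() { return open.size(); }
}
```

Raising -J-DIndexCache.size corresponds to raising `limit` here: fewer evictions mean fewer expensive index re-opens on the next query.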
IMO the snapshot is created when the user comes back from another application. It is indicated by the MenuWarmUpTask$NbWindowsAdapter listener.
Egon, Vladimir: If you are using your own builds, you can try to apply the bug 175999 patch. It starts automated profiling when GoToType takes more than 3s. (You might set a higher number.) The patch is planned to be integrated into trunk immediately after the 6.8 branch is created.
ok, will wait until this integration is done
Anyway, -J-DIndexCache.size=1000 may help in your case, as Go To Type (File) are the most affected features. They have to iterate over all the indexes (LRU does not help) and opening a Lucene index may take some time.
Egor, Vladimir, how probable is it that this will happen to a regular user? Is this a stopper for 6.8? If you think it is, please make it P1; otherwise any possible solution will end up post 6.8 ...
David, it's difficult to consider it a showstopper if we cannot provide a repeatable test case. I'm trying to find out how to reproduce it, but cannot. On Monday I will take sources with your patch and will use it.
Created attachment 91432 [details]
snapshot when Go To ... "Rfs"
Vladimir's log seems quite interesting:
1) There are several requests to close the index => increasing the property described above will help.
2) There are several GlobFileBuildQuery calls doing IO (which will affect the index performance), which is strange.
Was the snapshot generated after IDE start, or after a build, hg update or some other operation which may affect source file time stamps?
1) The IDE wasn't restarted, but an external hg update was done
2) The IDE did its "scanning", and some minutes later Go To Rfs was called.
Created attachment 91434 [details]
Go To Rfs (after clean&&build)
one more snapshot:
1) The IDE wasn't restarted, but an external clean && build was done
2) I waited until "Scanning" had appeared and disappeared in the status bar about 15 times
3) When I had not seen "scanning" for a minute, I ran Go To Rfs again
Created attachment 91437 [details]
Go To Rfs when doing some I/O
1) trunk is clean && built
2) removed the var/cache dir
3) started the IDE with -J-DIndexCache.size=1000
4) waited till the full scan finished (Complete indexing of 279 source roots took: 245658 ms (New or modified files: 13397, Deleted files: 0))
-- prstat shows 0.1% CPU usage
5) from the command line I started a copy of a big folder
6) in the IDE called Go To Type "Rfs" => now it's quite slow
Analyzing Vladimir's second log:
The overall GTT time: 7.5s
The JavaIndexer: 4.7s; within this time:
QuerySupport.findRoots (ending in core.startup.URLArchiveMapper.getFS) - 1.5s
LuceneIndex.isValid - 0.7s - can probably be eliminated
LuceneIndex.getReader (opening the index) - 2.2s
The java search itself: 1.7 ms (NOT a mistake: under 2 MILLISECONDS)
The index open can be caused by 2 things:
a) The index was modified by the previous hg update and needs to be reopened to see the new data.
b) The file descriptor limit introduced by the performance team: we cannot have more than 400 concurrently open indexes on UNIX.
I believe that b) is the problem in your case, as the log contained several index closes. Please try the IDE with -J-DIndexCache.size=1000 and let me know if it helped.
Your comment #17 is about my #15, right?
What about my #16, where I used -J-DIndexCache.size=1000?
It was about the comment number 15 (GoToRfs2.nps).
I haven't looked at the next snapshots yet.
I will take a look at the #16.
The log #16 (copy in background).
The profiler data are nearly the same as in #15, including the apisupport part (creation of some apisupport projects).
It's quite natural that the first usage of GTT is slower, especially when a copy is running in the background. The indexes are not stored in memory (only some metadata are cached); the rest is left to the OS (kernel fs caching) - the indexes may be large. The IndexReaders are created on demand, not by the initial scan (the initial scan uses an IndexWriter to store data and does not open an IndexReader if it does not exist). The first time you call GTT it needs to create these readers, which is an IO operation and may take longer when there is background IO.
I will create separate issues for all the problems:
1) API Support queries (no need to create it, there are already several issues related to this)
2) ArchiveURLMapper is expensive
3) LuceneIndex.isValid should be faster
5) Document the file descriptor problem and the property to increase the cache size.
6) ENH: Try to merge indexes for different languages into single index file => decrease number of indexes.
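The on-demand reader creation described above can be sketched as lazy initialization (the names are illustrative; the real logic lives in LuceneIndex):

```java
// Sketch of on-demand reader creation: the scan writes with an IndexWriter
// and does not open a reader; the first query pays the (IO-heavy) open,
// later queries reuse the cached reader.
public class LazyIndex {
    interface Reader { }

    static int opens = 0;            // counts the expensive open operations
    private Reader reader;           // null until the first query

    Reader readerForQuery() {
        if (reader == null) {        // the first GTT after a scan lands here
            opens++;                 // in the IDE this is the slow disk IO
            reader = new Reader() {};
        }
        return reader;
    }
}
```

This is why the first GTT after a scan pays the IO cost while subsequent calls are fast, until the reader is closed again by the LRU limit.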
Created attachment 91615 [details]
No background disk activity
Tomas, I have reported two automatic snapshots. They are in bug
The second is with zero external activity and it took 19 sec :-(
*** Bug 177830 has been marked as a duplicate of this bug. ***
Build: NetBeans IDE Dev (Build 091127-1d2649d76544)
VM: Java HotSpot(TM) 64-Bit Server VM, 14.0-b16, Java(TM) SE Runtime Environment, 1.6.0_14-b08
OS: Linux, 188.8.131.52-170.2.113.fc10.x86_64, amd64
Maximum slowness yet reported was 47618 ms, average is 20034
Created attachment 91910 [details]
#2 - ArchiveURLMapper is expensive - is probably not that critical now after pushing http://hg.netbeans.org/jet-main/rev/d1d31024c803
> #2 - ArchiveURLMapper is expensive - is probably not that critical now after pushing http://hg.netbeans.org/jet-main/rev/d1d31024c803
Not true Vito; the main problem with AURLM is when nobody holds the archive open (there is ref counting in the ZipFile impl) and AURLM needs to make a JNI call to open the zip file.
The RepositoryUpdater.URLCache does not help at all.
*** Bug 178137 has been marked as a duplicate of this bug. ***
It seems that all the sub-defects except the JarFS and apisupport project ones are fixed.
Now we need to wait for new snapshots to identify other hot spots.
I'm using daily bits and will check the progress on Monday.
The last time I checked (on Friday), even increasing the open file limit to 2000 didn't help...
Thanks very much Vladimir.
I have uploaded a snapshot today, but I do not know where it was attached...
Not very good results: 11 sec even with -J-DIndexCache.size=2000 (ulimit -n shows 2000 => I have increased my system settings)
I will try to find it.
Created attachment 92594 [details]
Created attachment 92595 [details]
After discussion in the perf-team, changing to task. Many fixes and improvements have been made by Tomas and Vita. The bug should serve as an entry point (umbrella) for the individual problems reported.
Tomas, here you have 12 sec on today's build
Thanks Vladimir for the snapshot.
It's really interesting; I will attach a screenshot from the profiler.
Here is the description of the screenshot:
1) The GoToTypeAction.Worker.run() (the goto type search) took 9471 ms
2) Java contributed to this time via JavaTypeProvider.computeTypeNames: 9471 ms (all of the time)
3) Of this time, 9198 ms was spent in the SourceForBinaryQuery (in the SPI used by this query)
4) Nearly all of that time is spent in ProjectSourceForBinaryQuery (namely in FileOwnerQuery): 9053 ms
5) ClassIndex.getDeclaredTypes() took 181 ms, which shows that the fixes helped (there is no isValid time at all; it used to take most of getDeclaredTypes).
I will create a new issue for the apisupport project, although there are already several of them.
Created attachment 92684 [details]
Will at least the fixes with the most impact appear in patch 1?
6.8 is really slow!
Created attachment 92949 [details]
Yes, some of the fixes can be integrated into 6.8.1
Created attachment 93041 [details]
To ShurikAg: the problem is not in NetBeans; have a look at the screenshot
Tomas, any news on progress with this issue?
I'm using daily bits and it's really very slow.
I have increased the limit of file descriptors on all my systems (Ubuntu laptop and Solaris workstation) and Go To Type is still very far from being responsive (>12 sec in general).
I send snapshots via the reporting tool regularly, but don't know how it is going.
Jarda found another possible perf problem, report http://statistics.netbeans.org/analytics/exception.do?id=354184. The fix may speed up non-java index queries; I will try it.
Fixed Jarda's problem, jet-main 57b368802fe1
Another less IO => speed up of java index.
Integrated into 'main-golden', will be available in build *201003250201* on http://bits.netbeans.org/dev/nightly/ (upload may still be in progress)
User: Tomas Zezula <firstname.lastname@example.org>
Log: #177274:Go to type is very slow!
Why is this a TASK? Shouldn't it be a defect?
To pjiricka: This is an umbrella task and contains references to the defects. All the issues in parsing.api are fixed. There is only one defect left on the apisupport project and one enhancement with a very questionable outcome. If you have a concrete issue including a profiling dump, please create an issue and add it to the blockers.
Thomas, just uploaded http://statistics.netbeans.org/analytics/exception.do?id=375996
27 sec after one hour of ide idle.
From the snapshot I see a lot of time (13 sec) spent in lucene itself, which ends at
which calls java.io.RandomAccessFile.read()(13 sec)
Does it mean that the lucene technology is the blocker here?
(In reply to comment #51)
> which calls java.io.RandomAccessFile.read()(13 sec)
> Does it mean that lucene technology is blocker here?
I would very much like to do something about this in the version after 6.9. Until 6.9 is out we can at most gather data and brainstorm. The ideas that come to my mind range from little tweaks to a paradigm shift. We shall call a meeting once the 6.9 rush is behind us.
To your snapshot: yes, there is 13s spent in lucene. But look at the number of invocations. Rather than the lucene technology being poor, the blocker may be our way of using it. We have many lucene indexes (something like one per mime type times the number of source roots) and when accessing them on a cold system we have trouble opening all the files. The same issue happens after start.
A funny observation is that our system is optimized for adding and removing source roots! Given that this is the least common thing I do during development, we should have plenty of room for optimization. I'd like to brainstorm about the system creating one (temporary, mostly read-only) file with the information needed for "Go To Type" or "Go To File" in the current set of open projects.
In the above snapshot, reading one file would take 13s/80 (plus something, as the file would be bigger). Would that not be an improvement?
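The single read-mostly file proposed above could be modelled roughly like this: merge the per-root type names once after scanning, then answer each query with one binary search. This is a toy in-memory model; the real data would live in one file on disk:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the idea: instead of opening ~80 per-root indexes, merge the
// type names for the open projects into one sorted, mostly read-only list
// and answer "Go To Type" with a single binary search for the prefix.
public class MergedTypeList {
    static List<String> merge(List<List<String>> perRootNames) {
        List<String> all = new ArrayList<>();
        for (List<String> names : perRootNames) all.addAll(names);
        Collections.sort(all);       // done once, after scanning
        return all;
    }

    // Returns all names starting with `prefix` from the sorted list.
    static List<String> byPrefix(List<String> sorted, String prefix) {
        int i = Collections.binarySearch(sorted, prefix);
        if (i < 0) i = -i - 1;       // insertion point of the prefix
        List<String> hits = new ArrayList<>();
        while (i < sorted.size() && sorted.get(i).startsWith(prefix)) {
            hits.add(sorted.get(i++));
        }
        return hits;
    }
}
```

The trade-off is the one named in the comment: adding or removing a source root forces a re-merge, but that is the rare operation, while queries become a single read.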
Cnd switched from the default (lucene based) provider to its own Go To File provider, and for chromium (70K files) the search is instant as you type.
We maintain our own file index for all opened cnd projects (for 70K files it consumes 3MB).
I think Go To File can be improved even in the current release without any significant changes.
E.g. the provider for file names could use the current index of files (parsing.api/src/org/netbeans/modules/parsing/impl/indexing/TimeStamps.java) instead of the lucene index data.
>Does it mean that lucene technology is blocker here?
Looking at the snapshot: the 13 sec (readInternal) time is spent in java.io.RandomAccessFile.readBytes.
Also, 2.6 sec is spent in 55 calls of java.io.File.isDirectory(). It seems that the filesystem was overloaded.
>We support own file index for all opened cnd projects (for 70K files it consumes 3Mb)
The in-memory index does not scale. What should be kept in memory - everything, just types, symbols? Any other disk based format will have similar search performance (B-Tree). We can cache some more data, but only as a function of memory usage.
>I.e. provider for file names could be the current index of files: parsing.api/src/org/netbeans/modules/parsing/impl/indexing/TimeStamps.java instead of lucene index data.
Yes, this can be done. I will look at how the FileIndexer uses the lucene index; it was written by Vita. We have the files in memory (there is a bug filed not to do this) and it can be used for Go To File.
To Jarda: From the snapshots it's obvious that the problem is caused by IO intensive operations. Originally there was a single lucene index per java source root, and since it made sense to optimize for cheap add/remove, the indexes were kept open. This is not true anymore, as other languages (supports) are using it and we need to close the indexes (which adds IO overhead). Merging the indexes into a single file will definitely help, as it's a B-Tree (the number of IO ops is log(n)).
Tomas, I give the IDE at least 1GB exactly so it can use it => scaling is good, but I'd prefer the IDE to be fast as the first priority.
E.g. we use SoftReference based caches to let the JVM unload our caches when there is not enough RAM for other tasks.
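The SoftReference based caching Vladimir mentions might look like this minimal sketch (generic and illustrative, not the cnd implementation):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of SoftReference-based caching: entries stay in memory while the
// heap has room, and the JVM may clear them under memory pressure instead
// of failing other work with OutOfMemoryError.
public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    V get(K key, Supplier<V> loader) {
        SoftReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get();
        if (value == null) {              // missing or cleared by the GC
            value = loader.get();         // reload from disk (the slow path)
            map.put(key, new SoftReference<>(value));
        }
        return value;
    }
}
```

While the heap has room the entries survive; under memory pressure the GC may clear them, and the loader then runs the slow disk path again.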
snapshot after applying RAMDirectory patch:
What I expect from the IDE:
- since the initial scanning took 200 sec and I waited all this time, I expect the IDE to be very fast for Go To actions afterwards (=> keep everything in the RAM which I give the IDE)
- during a rescan (which is sometimes fast but sometimes slow) the old cache should stay in RAM and all Go To actions should use it, because 80% of the old data is still valid and that is what I'm interested in in 80% of cases
- as soon as scanning is finished => replace the RAM cache atomically
- I have a powerful workstation (4 cores, 8GB RAM) and working in the IDE is not the only task I'm doing on it
- I give enough memory to the IDE to keep all needed data in memory, so my other external work (e.g. building a release version of NB) should not affect my IDE work much
- a call to an external "hg status" takes time and occupies IO again; that's no reason for the IDE to become slow
General comment to your general comment:
The IDE should behave reasonably well for people with 2GB of memory and 2 cores. Your config is extreme in a sense - I am not saying that your use case (plenty of RAM + plenty of cores) should behave badly, but the priority should be users with different machines. Also, I seriously doubt that a normal user would have "hg update" and "make all world" running at the same time (s)he is working with the IDE (especially updating or building the files that the IDE knows about).
Ideally you should be able to tweak something so that we behave well even in your case. BTW I have tried to put everything I work on onto a RAM disk, and especially in your case it would speed things up, since there would be no waiting for IO (which there is, according to your snapshots).
Optionally enabling the in-memory index (or generally more caching) for your use case is a good start IMHO.
David, thanks for the hint.
I switched my NB home dir to /tmp, which is mapped to memory on Solaris => it should be very, very fast, right?
If the snapshot reveals what was happening on your machine, it should help. But please note that ideally you need your projects (sources) + the netbeans user dir (the NB caches) in RAM.
I also switched my NB home dir to /tmp and it works much, much faster now (as expected). Could this switch (to store everything/something in memory) be done automatically? On startup NetBeans detects how much memory the user's machine has; can this information change the behavior of the caches somehow?
It would look strange to advise users with a considerable amount of memory to store their user dirs in memory...
FYI the NB home dir size is now 286MB and NB itself consumes 364MB.
(In reply to comment #60)
> If the snapshot reveals what was happening on your machine it should help. But
> please note that ideally you need in RAM your projects (sources) + netbeans
> user dir (for NB caches).
David, it seems that the *main* issue is access to Lucene, not to the sources => moving only var/cache (the lucene caches) helped unbelievably! => we have to think about using memory caches (RAMDirectories?) where possible, to solve several issues:
- users working on machines with 2GB will have the default Xmx512m and they would have at least some caches in RAM
- users with a bigger heap (and faster workstations) will use their machine resources much better and will get speed for the money they spend on hardware
=> can we use RAM caches as much as practical and measure the results?
One more proof of the I/O issues of the lucene based algorithms:
- I performed Go To File "apt*.g" to find the text file "aptlexer.g" and it took... 2 secs!!!
My env (the full dlight cluster, the full cnd cluster, java.source and some other external Suites open; 511 source roots as I see from the log) is really huge.
=> moving only the lucene data into memory helped a lot.
Please think in this direction to make NB 6.9 a usable IDE :-)
(In reply to comment #63)
> One more proof of i/o issues of lucene based algorithms:
> - I performed Go To File "atp*.g" to find text file "aptlexer.g" and it took...
> 2 secs!!!
Previously it took about 15-20 sec
Is there a startup option to switch the cache directory to a different location (instead of switching the whole user dir) ?
Go to type / go to file is a very useful feature that I had gotten used to, and it is now completely unusable (it takes me less time to navigate to the right place in the projects view). So, I'm most definitely interested in using some extra RAM to get the feature back to a usable state.
It's the second line in the NetBeans/etc/netbeans.conf file; change it to something like:
(on *nix) and go to file/class will be super fast!
I am glad that the patch helped.
Here are my answers to the questions:
1st) Vladimir's snapshot analysis (with patch): overall GTT time 1540 ms, csl (not patched) 439 ms, java index 804 ms, the rest is source root setup. Seems OK.
>as soon as initial scanning took 200sec and I waited all this time => I expect after that IDE to be very fast for Go To actions
The patch loads the data into the memory cache during the first usage, as explained offline, but it can be changed to do so in CIF.scanFinished (during the scan). Then it will behave that way.
>- during rescan (which sometimes fast but sometimes slow) old cache should stay in RAM and all Go To actions should use it, because 80% of old data is still valid and I'm interested in it in 80% of cases
Works exactly in this way.
>- as soon as scanning is finished => replace RAM cache atomically
Currently the mem cache is replaced (only for the dirty root) on the next usage of the index for the invalidated root.
>On startup Netbeans detects how much memory user machine has. Can this information change the behavior of caches somehow?
I would rather add an option to enable the caching (the patch), or enable it automatically if there is enough memory (I'm not sure that automatic enabling is not dangerous for NB 6.9). As the cache is de facto a byte array, I know the exact cache size and can do some LRU.
>- users working on machines with 2Gb will have default Xmx512mb and they would have at least some caches in RAM
I think there is only a 256MB default heap on 2GB machines; at least Martin has that. Anyway, we can either add an option to enable the mem caching or do it automatically (e.g. cache_size < 0.2 * heap_size; 0.2 is just a first try and can be changed).
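The automatic-enable heuristic suggested here (cache_size < 0.2 * heap_size) is trivial to express; the 0.2 factor is the "just a try" value from the comment, and the 286MB figure is the cache size reported earlier in the thread:

```java
// Sketch of the automatic-enable heuristic from the comment: turn the memory
// cache on only when the index cache fits into a fraction of the heap.
public class MemCacheHeuristic {
    static boolean enableMemCache(long cacheSizeBytes, long maxHeapBytes) {
        return cacheSizeBytes < 0.2 * maxHeapBytes;   // 0.2 is the trial factor
    }

    public static void main(String[] args) {
        long heap = Runtime.getRuntime().maxMemory();
        long cache = 286L * 1024 * 1024;   // the ~286MB cache dir from the thread
        System.out.println("mem cache: " + enableMemCache(cache, heap));
    }
}
```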
>- users with bigger heap (ant faster workstations) will use their machine resources much better and will get speed for money they spend on hardware => can we use RAM caches as practical as possible and measure results?
Same as the previous answer. Yes, we should do a measurement, but the measurement will be a bit tricky: it has to flush the fs caches, otherwise GTT will be fast even without the mem cache - the memory manager will do the caching for you.
>Is there a startup option to switch the cache directory to a different location (instead of switching the whole user dir) ?
If the RAMDirectory patch is integrated into 6.9 (even as an option), you will not need to switch the user dir. Switching the user dir to a ramfs does not work on Windows, and also don't forget that /tmp is not very persistent.
Tomas, can you share where this patch is, to try it out?
Created attachment 98160 [details]
The prototype patch, needs to be cleaned up and a bit extended.
(In reply to comment #67)
> Switching user dir to ramfs does not work
> on Windows and also don't forget that /tmp is not very persistent.
Can we parametrize only the lucene cache location:
parsing.api/src/org/netbeans/modules/parsing/impl/indexing/CacheFolder.java:69: private static final String INDEX_DIR = "var"+File.separatorChar+"cache"+File.separatorChar+"index"; //NOI18N
so that it is taken from a system property?
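What Vladimir asks for would amount to something like the following; the property name netbeans.index.cache.dir is purely hypothetical, not a real option:

```java
import java.io.File;

// Sketch: resolve the index cache directory from a system property, falling
// back to the hard-coded var/cache/index path quoted above.
public class CacheFolderSketch {
    static final String DEFAULT_INDEX_DIR =
        "var" + File.separatorChar + "cache" + File.separatorChar + "index";

    static String indexDir() {
        // the property name here is illustrative only
        return System.getProperty("netbeans.index.cache.dir", DEFAULT_INDEX_DIR);
    }
}
```

With such a property, only the lucene data could be pointed at a tmpfs while the rest of the user dir stays on persistent storage.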
I've added memory caches for the java index (jet-main 0c26cae7f0d4). The caches are not enabled by default; to enable them, start netbeans with the -J-Djava.index.useMemCache=true command line option.
I've also decreased the size of the index (jet-main 550df3bb6e0c)
Tomas, in order to support memory caches for the groovy index, would parallel changes (similar to your patch) have to be made in the Groovy support or would this change also cover that (somehow)?
I've integrated the cache as an option and tried it. After a few returns back to the IDE, GoToType was slow anyway, so I created snapshots (I will attach them) and found that much of the time is spent in QuerySupport.findRoots, which spends the time in URLMapper, namely in JarFileSystem.init(). The situation is even worse as the URLMapper competes with "Checking External Changes" (fs refresh).
Jardo, I thought you had fixed the expensive JarFS constructor, or not?
Times from evaluation:
Java Index (with cache) 370 ms
Jardo, can you look at snapshots?
Created attachment 98305 [details]
Slow URLMapper Snapshot
Created attachment 98306 [details]
Slow URLMapper Snapshot 2
I forgot to mention that there was no change in the classpath roots, and the jars must have been used before, as I did several GTTs and some editing before I switched away from the IDE.
To akochnev: The patch cannot be copied to Groovy as it does not maintain the index. Currently it's java-only. If it works well, it can also be moved into parsing.api.
(In reply to comment #75)
> Created an attachment (id=98306) [details]
> Slow URLMapper Snapshot 2
I guess the fix for bug 177461 was not absolutely correct. I'll provide an improved one.
I use -J-Djava.index.useMemCache=true,
and after the IDE had been idle for some hours Go To Class took 14 sec (there was zero external activity during the Go To Class operation; the IDE was just idle, so some OS caches could have been flushed)
Looking at Vladimir's snapshot:
The JavaTypeProvider, which is controlled by java.index.useMemCache, does not even show up in the snapshot (thanks to -J-Djava.index.useMemCache=true).
Tomas, can we improve GTT to run providers in parallel and show results as soon as they are available?
I definitely don't have any JavaScript files in the opened cnd* or dlight* modules, so I expect that pure J2SE users can be affected the same way as me => a bad impression of J2SE support (they don't know our internals about the slow JavaScript provider)
> approximately 13 calls of File.list() took 8 sec.
and 30 invocations of java.io.File.exists() took another 3 seconds. Added together, that gives the 11 seconds of slowdown. Are you absolutely sure that your OS is OK?
>Can we improve GTT to run providers in parallel and show results as soon as they are available?
In the code, no problem; we can even order them dynamically according to speed to avoid IO contention.
The only problem we found while discussing this with Jarda is the UI. We cannot add items later when some are already shown, as the existing items would move (forbidden by the UI). Jarda had an idea which seems nice to me: the types will be ordered by language type, and within each language type alphabetically.
I would also like to prefer the language you navigate from (if it exists); if you are in a java class and you call GTT, you probably don't want to go to a php or ruby class.
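Running the providers in parallel, as discussed, might be sketched with plain java.util.concurrent (the real GTT plumbing and its UI constraints are not modelled here):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Sketch: each provider computes on its own thread and results are collected
// as they arrive, so one slow provider no longer blocks the others.
public class ParallelGtt {
    static List<String> query(List<Callable<List<String>>> providers)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(providers.size());
        try {
            CompletionService<List<String>> cs = new ExecutorCompletionService<>(pool);
            for (Callable<List<String>> p : providers) cs.submit(p);
            List<String> result = new ArrayList<>();
            for (int i = 0; i < providers.size(); i++) {
                result.addAll(cs.take().get());   // fastest providers land first
            }
            return result;
        } finally {
            pool.shutdown();
        }
    }

    // Hypothetical demo: a slow and a fast provider queried together.
    static List<String> demo() throws Exception {
        return query(List.of(
            () -> { Thread.sleep(50); return List.of("slow"); },
            () -> List.of("fast")));
    }
}
```

Results arrive in completion order, so grouping them per language, as Jarda suggests, would keep the visible list stable while later providers finish.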
>I definetely don't have any java script files in cnd* or dlight* opened modules, so I can expect that pure >j2se users can be affected same way as me =>bad impression about j2se support (they don't know our >internals about slow java script provider)
(In reply to comment #83)
> > approximately 13 calls of File.list() took 8 sec.
> and 30 invocations of java.io.File.exists() took another 3 seconds. That added
> together gives your 11 seconds that was the slowdown. Are you absolutely sure
> that your OS is ok?
I ran into it on Ubuntu 8.10. The laptop had been idle for some time (no sleeping, no hibernating), then I opened the laptop and went to NB to continue my work.
I don't think the HDD was stopped :-)
Btw, can slowness in File.exists and File.list be somehow related to the I/O monitor implementation in the platform?
>Btw, can slowness in file.exists and file.list be somehow related to i/o monitor implementation in platform?
Previously yes, it was one of the biggest slowdowns of GTT - you returned to the IDE and invoked GTT or even code completion, but the IDE was busy with refresh. On Monday Jarda added a new friend API, priorityIO, used by both the parsing API and java to prevent this - issue #185526. But your last snapshot already contains this friend API, and it seems that the "refresh" correctly waits in FileChangedManager.waitIOLoadLowerThan.
Created attachment 99557 [details]
reported Go to Type slowness
I tried the memory switch in build 20100521 and got this slowness reported. Another project, just opened, was being scanned at that moment.
Added memory cache for GTT, jet-main f571d3f43fa9.
-J-Djava.index.useMemCache=true still works (everything in memory), -J-Djava.index.useMemCache=false means nothing in memory, and -J-Djava.index.useMemCache=dynamic uses SoftRefs to control what stays in memory. The default, if nothing is given, is dynamic.
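The three modes of the option can be summarized in a tiny parser sketch (the enum and method are illustrative, not NetBeans code):

```java
// Sketch of the tri-state -J-Djava.index.useMemCache option described above:
// "true" keeps everything in memory, "false" nothing, anything else
// (including the unset default) behaves as "dynamic", where SoftReferences
// decide what stays cached.
public class MemCacheMode {
    enum Mode { ALL, NONE, DYNAMIC }

    static Mode fromProperty(String value) {
        if ("true".equals(value)) return Mode.ALL;
        if ("false".equals(value)) return Mode.NONE;
        return Mode.DYNAMIC;   // covers "dynamic" and the unset default
    }
}
```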
Integrated into 'main-golden', will be available in build *201008130001* on http://bits.netbeans.org/dev/nightly/ (upload may still be in progress)
User: Tomas Zezula <email@example.com>
Log: #177274:Go to type is very slow!
For NB 7.0 the indexes were rewritten with a memory based cache, and the java indexes are also available directly after projects are opened. This improved the speed significantly. Now the performance problem happens only when the IDE is swapped out by the OS, but that is not a GTT problem, as the whole IDE is slow while the OS fetches data pages back from the swap file.
Solaris by default keeps /tmp mounted in memory.
To create a RAM disk on Linux:
$ sudo mkdir -p /mount/ramdisk
$ sudo mount -t tmpfs tmpfs /mount/ramdisk
then make your cache dir a link to it:
.netbeans/dev/var/cache -> /mount/ramdisk/cache
and Go To will be several times faster
In reply to comment #93:
I don't know the details of Linux memory management, but I think a RAM disk is swapped out before process pages. So for NB 7.0, where indexes are held in memory by SoftRefs, it should not make much difference.
The exception report already has 936 duplicates!
According to my latest report (http://statistics.netbeans.org/analytics/exception.do?id=556567) it seems that lucene preloads the whole index (or a large chunk of it) into memory before trying to run the query. As my indices are pretty large - 744MB - it takes quite a long time to read them and populate the lucene buffer.
>It seems that lucene preloads the whole index (or a large chunk of it) into memory before trying to run the query.
Lucene loads just a part of the B-Tree (configurable on the IndexReader). NB preloads some hot indexes into an LRU RAM cache (bounded by RAM, configurable on LuceneIndex). No query uses all the indexes, as there are indexes for unrelated roots and unrelated languages; the "global queries" use just the indexes for open projects and the relevant technology or language.
(In reply to comment #97)
> >It seems that lucene preloads the whole index (or a large chunk of it) into memory before trying to run the query.
> Lucene loads just a part of the BTree (can be set on IndexReader). The NB
> preloads some hot indexes into LRU RAM cache (bounded by RAM, can be set on
> LuceneIndex). No query uses all indexes as there are indexes of unrelated roots
> and unrelated languages. In the "global queries" just indexes for opened
> projects and relevant technology or language are used.
This still leaves more than 11s of non-responsive go-to-type dialog on a machine with no other load. Anyway, it shows that this issue may not be related exclusively to swapping.
Was it after start or during IDE usage?
(In reply to comment #99)
> Was it after start or during IDE usage?
It was the first usage of go-to-type after the scanning had finished. No scanning was in progress and no other disk or CPU heavy activity took place on the system.
I've tried to reproduce the problem with fewer projects open, and it seems that the culprit is that while I was experiencing the problem, the transitive dependencies of the projects covered more or less the whole netbeans source base. When I leave open just a few projects with relatively small dependency sets, go-to-type is ready in under a second.
The number of logs is not important in this case, as the limit is 3s, which is low for huge projects or after the IDE was swapped out.
After start we could do index preloading; this will slow down some other tasks, but we can try.
If the limit is low, increase it. Btw, right now I waited 40s after start without getting any reply. I was hoping 7.2 would have some improvements...
>I was hoping 7.2 to have some improvements...
Unfortunately nothing planned for NB 7.2.
There are 2 main problems:
1st) The cache of indexes is small compared to the number of indexes used; it should be hierarchical.
2nd) There is a race, introduced in 6.8 by the performance team, which causes queries to be done several times instead of once.
(In reply to comment #103)
> >I was hoping 7.2 to have some improvements...
> Unfortunately nothing planned for NB 7.2.
Surprising. I thought 7.2 was supposed to be the Java editor performance release. It is unfortunate if nothing has been planned to improve the poor goto type experience.
> There are 2 main problems:
> 1st) The cache should be hierarchical.
Really? This is the first time I hear this. Do you have a link where I could learn more details?
> 2nd) There is a race introduced in 6.8
6.8 was released in 2009. Now we are working on the fourth subsequent major release. It looks to me like there was a lot of time to plan mitigation of any such problems, especially if they were considered severe.