This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
Build: NetBeans IDE Dev (Build 201210110002)
VM: Java HotSpot(TM) 64-Bit Server VM, 23.3-b01, Java(TM) SE Runtime Environment, 1.7.0_07-b11
OS: Windows 7

User Comments:
Chiana: This is the result of trying a new package for the first time. One thing I did notice is that none of the reported errors actually exist; my guess is that this is caused by the debugger. It behaved similarly to a previous error I reported, where the debugger gradually went into some uncertain state while expanding a few tables: the system went into really high gear, requiring it to actually use its cooling fan, and it was also using an excessive amount of RAM (approx. 4 GB). Will now reboot and then try this again.

Stacktrace:
java.lang.OutOfMemoryError: GC overhead limit exceeded
  at com.sun.tools.javac.code.Scope.<init>(Scope.java:113)
  at com.sun.tools.javac.code.Scope$ErrorScope.<init>(Scope.java:737)
  at com.sun.tools.javac.jvm.ClassReader.complete(ClassReader.java:2212)
  at com.sun.tools.javac.code.Symbol.complete(Symbol.java:422)
  at com.sun.tools.javac.code.Symbol$ClassSymbol.complete(Symbol.java:833)
  at com.sun.tools.javac.jvm.ClassReader.loadClass(ClassReader.java:2413)
Created attachment 125821 [details] stacktrace
So, I did find the cause of this. It was actually an assignment through a null array element. Examine the following:

private ThreadData[] pool = new ThreadData[INITIAL_POOL_SIZE];

and then, in the constructor:

for (int i = 0; i < INITIAL_POOL_SIZE; i++) {
    pool[i].thread = new WorkerThread(i);
    pool[i].thread.start();
}

As you can see, I forgot to create "pool[i]". That is an error on my part, but it still should not cause 5-6 seemingly unrelated exceptions from the debugger.
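For reference, a minimal self-contained sketch of the corrected loop is below. The names ThreadData, WorkerThread, and INITIAL_POOL_SIZE are the reporter's; their bodies are reconstructed here as plausible stand-ins, since the report only shows the fragment above.

```java
// Sketch of the fix: each pool slot must be allocated before its
// fields are assigned. In the original code, pool[i] was null, so
// "pool[i].thread = ..." threw a NullPointerException.
public class Pool {
    static final int INITIAL_POOL_SIZE = 4; // value assumed for the sketch

    static class WorkerThread extends Thread {
        WorkerThread(int id) {
            super("worker-" + id);
        }
        @Override
        public void run() {
            // the real worker loop would go here
        }
    }

    static class ThreadData {
        WorkerThread thread;
    }

    private final ThreadData[] pool = new ThreadData[INITIAL_POOL_SIZE];

    Pool() {
        for (int i = 0; i < INITIAL_POOL_SIZE; i++) {
            pool[i] = new ThreadData();           // the line that was missing
            pool[i].thread = new WorkerThread(i);
            pool[i].thread.start();
        }
    }

    ThreadData[] pool() {
        return pool;
    }
}
```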
Could you please post the generated heap dump? Thanks.
I tried to reproduce this with the following class, which is similar in structure but somewhat smaller than the one that fails:

public static class Debug {
    static Debug d = new Debug();

    public static void test() {
    }

    private class LInt {
        Thread a;
    }

    private LInt[] I = new LInt[10];

    Debug() {
        for (int i = 0; i < 10; i++) {
            I[i].a = new Thread(new Runnable() {
                @Override
                public void run() {
                }
            });
            I[i].a.start();
        }
    }
}

I then called Debug.test() to start it, but it works as it should. The heap dumps are on their way, currently being compressed as they are a big chunk of data.
Heapdump delayed. Will post link as soon as available.
Dumps uploaded to hudson...
Unfortunately, it seems that your upload was not successful :-(

Started by user Chiana
Building remotely on upload-node
Copying file to ./heap_dump
FATAL: channel is already closed
hudson.remoting.ChannelClosedException: channel is already closed
  at hudson.remoting.Channel.send(Channel.java:483)
  at hudson.remoting.ProxyOutputStream.doClose(ProxyOutputStream.java:183)
  at hudson.remoting.ProxyOutputStream.close(ProxyOutputStream.java:147)
  at hudson.remoting.RemoteOutputStream.close(RemoteOutputStream.java:118)
  at hudson.FilePath.copyFrom(FilePath.java:707)
  at hudson.model.FileParameterValue$1.setUp(FileParameterValue.java:109)
  at hudson.model.Build$RunnerImpl.doRun(Build.java:131)
  at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:429)
  at hudson.model.Run.run(Run.java:1367)
  at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
  at hudson.model.ResourceController.execute(ResourceController.java:88)
  at hudson.model.Executor.run(Executor.java:145)
Caused by: java.io.IOException: Unexpected termination of the channel
  at hudson.remoting.Channel$ReaderThread.run(Channel.java:1030)
Caused by: java.io.EOFException
  at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2553)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1296)
  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
  at hudson.remoting.Channel$ReaderThread.run(Channel.java:1024)

Is it possible to upload it to some alternative location? Thanks, and sorry for the inconvenience.
I think it succeeded this time, build #79 in Hudson.

Started by user anonymous
Building remotely on upload-node
Copying file to ./heap_dump
[upload] $ /bin/sh -xe /tmp/hudson1020646864153618562.sh
+ mv heap_dump heap_dump.zip
+ set +e
+ zip -qT heap_dump.zip
+ ERROR_CODE=3
+ set -e
+ '[' 3 '!=' 0 ']'
+ mv heap_dump.zip heap_dump
+ zip -m heap_dump.zip heap_dump
  adding: heap_dump (deflated 0%)
++ md5sum heap_dump.zip
++ cut -d ' ' -f1
+ MD5PRINT=e73c135f4bb0b8221beebdc20189ff5d
+ mv heap_dump.zip 219962_2012-10-24_04-06-40_e73c135f4bb0b8221beebdc20189ff5d.zip
[DEBUG] Skipping watched dependency update; build not configured with trigger: upload #79
Finished: SUCCESS
Heap dump is available at <http://netbeans.org/projects/profiler/downloads/download/Heapdumps/heapdump-219962.zip>
*** Bug 218984 has been marked as a duplicate of this bug. ***
There are 8 instances of SymTab. Instances #1, #2, #3, #4, #6, and #7 are referenced from org.netbeans.modules.debugger.jpda.projects.EditorContextImpl. See the attached path to GC root. Re-assigning to debugger.
Created attachment 126882 [details] Path to GC root
There are three stale entries in the WeakHashMap; the length of the associated ReferenceQueue is 3. We need to expunge the stale entries somehow, since WeakHashMap only does so when the map itself is accessed, not automatically :-(
Fixed by changeset: 239791:9bb864527e30 http://hg.netbeans.org/core-main/rev/9bb864527e30
Integrated into 'main-golden', will be available in build *201211150001* on http://bits.netbeans.org/dev/nightly/ (upload may still be in progress) Changeset: http://hg.netbeans.org/main-golden/rev/9bb864527e30 User: mentlicher@netbeans.org Log: #219962: An active implementation of weak hash map is used to hold the source handles. The entries are removed automatically as soon as they are freed by GC.
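The changeset log above describes "an active implementation of weak hash map" whose entries are removed as soon as GC frees them. As a rough illustration of that idea (not the actual NetBeans code), such a map can drain its ReferenceQueue on a daemon thread instead of waiting for the next map access, the way java.util.WeakHashMap does:

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an "active" weak map: a background thread blocks on the
// ReferenceQueue and evicts entries promptly after GC clears the key,
// so stale entries never pile up between map accesses.
public class ActiveWeakMap<K, V> {
    private final Map<WeakKey<K>, V> backing = new ConcurrentHashMap<>();
    private final ReferenceQueue<K> queue = new ReferenceQueue<>();

    public ActiveWeakMap() {
        Thread cleaner = new Thread(() -> {
            try {
                while (true) {
                    // remove() blocks until GC enqueues a cleared key
                    Object stale = queue.remove();
                    backing.remove(stale); // identity match on the stored key
                }
            } catch (InterruptedException ignored) {
                // shut down quietly
            }
        }, "ActiveWeakMap-cleaner");
        cleaner.setDaemon(true);
        cleaner.start();
    }

    public void put(K key, V value) {
        backing.put(new WeakKey<>(key, queue), value);
    }

    public V get(K key) {
        return backing.get(new WeakKey<>(key, null));
    }

    public int size() {
        return backing.size();
    }

    // Weak key whose equals/hashCode delegate to the referent so lookups
    // with a fresh wrapper still find the stored entry.
    private static final class WeakKey<K> extends WeakReference<K> {
        private final int hash;

        WeakKey(K key, ReferenceQueue<K> q) {
            super(key, q);
            this.hash = key.hashCode();
        }

        @Override public int hashCode() { return hash; }

        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof WeakKey)) return false;
            Object a = this.get(), b = ((WeakKey<?>) o).get();
            return a != null && a.equals(b);
        }
    }
}
```

The trade-off versus plain WeakHashMap is one extra thread per map; the NetBeans fix presumably made a similar choice so that EditorContextImpl's source handles stop pinning the debugger's SymTab instances.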
*** Bug 196492 has been marked as a duplicate of this bug. ***