This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
This issue was reported manually by sj-nb. It already has 1 duplicate.

Build: NetBeans IDE 7.4 RC1 (Build 201309112301)
VM: Java HotSpot(TM) 64-Bit Server VM, 24.0-b56, Java(TM) SE Runtime Environment, 1.7.0_40-b43
OS: Windows 7

User Comments:
aquaglia: Profiling java application

Stacktrace:
java.lang.OutOfMemoryError: Java heap space
  at java.awt.image.DataBufferInt.<init>(DataBufferInt.java:75)
  at java.awt.image.Raster.createPackedRaster(Raster.java:467)
  at java.awt.image.DirectColorModel.createCompatibleWritableRaster(DirectColorModel.java:1032)
  at java.awt.image.BufferedImage.<init>(BufferedImage.java:340)
  at com.sun.java.swing.plaf.windows.XPStyle$SkinPainter.createImage(XPStyle.java:673)
  at sun.swing.CachedPainter.paint0(CachedPainter.java:139)
Created attachment 140225 [details] stacktrace
The heap dump shows over 11 million instances of org.netbeans.lib.profiler.results.memory.RuntimeMemoryCCTNode; reassigning.
The OOME is caused by the accumulation of more than 400 MB of profiler-related data, but the data is not held erroneously; it is needed. The solution is to raise the heap size via the -Xmx parameter when there is a need to gather a lot of data (very long sessions, especially for memory profiling).
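For reference, a minimal sketch of what that change looks like in netbeans.conf (the surrounding options and the 1500m value are only illustrative, not a recommended configuration):

```shell
# netbeans.conf (in the IDE's etc/ directory) -- append a -J-Xmx entry
# to the existing netbeans_default_options line; 1500m is only an example.
netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-Xmx1500m"
```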
I understand your point. However, the debugging session had just started. It was not a long session. I am unable to profile the memory with stack allocations turned on. I run NB 64-bit on Windows 7 with 24 GB of RAM. I start NB with its default options.
(In reply to aquaglia from comment #4)
> I understand your point.
> However, the debugging session had just started. It was not a long session.
> I am unable to profile the memory with stack allocations turned on.

According to the heap dump, there was profiling, not debugging, going on. The clear consequence of having allocation stack traces turned on is big memory consumption (for storing the allocation stack traces). Especially when the stack traces are long, it is possible to consume a lot of memory very fast.

> I have run NB 64bit on Windows 7 with 24GB of RAM.

It is irrelevant that the system has 24 GB - the JVM only sees the memory allocated to it, in this case by the -Xmx setting (so in this case the JVM only had 768 MB of available memory).

> I start NB with its default options.

If you want to use memory-intensive functions, it is easy to change -Xmx: http://wiki.netbeans.org/FaqSettingHeapSize
Of course, I meant profiling and not debugging. I am aware of the method to increase the memory allocated to NB, but I wanted to use the default options. Can you suggest a good value for the max memory? The profiling session had really just started, and it seems to me that profiling with allocation stack traces is utterly unusable the way it works now.
Created attachment 142515 [details]
stacktrace

don't know
(In reply to aquaglia from comment #6)
> Of course, I meant profiling and not debugging.
>
> I am aware of the method to increase the memory allocated to NB but I wanted
> to use the default options.

There are situations where default settings need to be changed, and this looks like one of them.

> Can you suggest a good value for the max memory? The profiling session had
> really just started and it seems to me that profiling with stack allocation
> traces is utterly unusable the way it works now.

This is not true. A lot of people are able to use memory profiling with allocation stack traces. The memory consumption depends on a lot of factors: how many instances the profiled application allocates, and how deep and how varied the stack traces are. You need to do some experiments. Start with 1500M and see if it helps.
I like NetBeans very much, I use it every day, and I appreciate the work of the Development Team. I am sure that the profiler works fine for demo or small projects, but it does not work for my project whenever I ask for allocation stack traces. At all.

But when I say that the functionality has become unusable, that is what I mean. I launched it before lunch and now NB has gone black and unresponsive.

The snapshot is here: https://dl.dropboxusercontent.com/u/4422938/bug236118/application-1386681425649.zip

I have now killed it. Upon restart, the "Report Problem" dialog box appeared. The Summary field said "OutOfMemoryError: GC overhead limit exceeded". The log file is being uploaded.

If you are unwilling to help on this, I kindly ask you to hand this bug over to someone else in your team.
How big is your project? (Number of classes, parallel threads) What kind of project is it? (Web application, desktop, ...)
It is not such a big project. The project INSPIREGeoportalLibrary is a Java Maven class library and has 207 source files. It has a Maven dependency on another one that has 1081 source files.
https://netbeans.org/bugzilla/show_bug.cgi?id=239318
Please run the project with increased heap space. This will help triage if there is a real bug (memory leak or something). Are you using the embedded maven or an external binary?
(In reply to aquaglia from comment #9)
> I am sure that the profiler works fine for demo or small projects, but it
> does not work for my project whenever I ask for allocation stack traces. At
> all.
>
> But when I say that the functionality has become unusable, that is what I
> mean.

No, you said that 'it seems to me that profiling with stack allocation traces is utterly unusable the way it works now' - and this is not true. I understand that it does not work __for you__, but that does not mean that it is unusable for others.

> I launched it before lunch and now NB has gone black and unresponsive.
>
> The snapshot is here:
> https://dl.dropboxusercontent.com/u/4422938/bug236118/application-1386681425649.zip
>
> I have now killed it.
>
> Upon restart, the "Report Problem" dialog box appeared. The Summary field
> said "OutOfMemoryError: GC overhead limit exceeded".
> The log file is being uploaded.

... and the messages.log file and VisualVM snapshot show that you did not change the default Xmx and you are still running with the default 768M.

> If you are unwilling to help on this, I kindly ask you to hand this bug over
> to someone else in your team.

I am helping you the best I can. It is sad that you fail to take our advice and increase Xmx. Without your cooperation, we will get nowhere.
*** Bug 239318 has been marked as a duplicate of this bug. ***
I read at http://wiki.netbeans.org/FaqSettingHeapSize what follows:

"NetBeans 6.x+ note: Since version 6.0, NetBeans defaults to dynamically setting its Xmx heap size limit to something like 1/3 or 1/4 of the RAM installed on the system. For that reason there's no -J-Xmx value set in the default netbeans.conf. If you find that the automatically selected limit is too little/too much, you can of course still add an appropriate -J-Xmx... option to your netbeans.conf to override the automatically selected limit."

Here is what I have in my netbeans.conf:

netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true -J-Dsun.java2d.dpiaware=true -J-Dsun.zip.disableMemoryMapping=true"

I have 24 GB installed on my system (Win 7 64-bit). So, why are you asking me to change the default options?
This is a good point: I always set the Xmx manually as well; I don't think NetBeans chooses correctly here. The documentation could be improved in that regard, IMHO - from https://performance.netbeans.org/howto/jvmswitches/index.html

"-J-Xmx256m - this setting tells the Java virtual machine the maximum amount of memory it should use for the heap. Placing a hard upper limit on this number means that the Java process cannot consume more memory than physical RAM available. This limit can be raised on systems with more memory. Current default value is 128MB."

while your citation of the FAQ page states that it is automatically determined. I believe neither is correct :)

On the other hand: depending on how you launch your project, a different setting will be used since an external JVM may be started. I usually have that with Maven projects and set the option in the project properties.
Shall I open a new bug for this other issue?
(In reply to aquaglia from comment #16)
> So, why are you asking me to change the default options?

You should change your Xmx setting because you want the feature in a working state for your use case. The default options are there for the common use case; for this use case more memory is needed.

(In reply to everflux from comment #18)
> while your citation of the FAQ page states that it is automatically determined.
> I believe neither is correct :)

The Xmx value used is currently max(96, min(COMPUTERS_RAM / 5, 768)). BTW, in 8.0 the max will be raised to 1024, but this is just the default - there is a quiet assumption that the user should change this when using something more memory-intensive, like opening thousands of projects or, in our case, memory profiling with allocation stack traces on something with a big number of allocations, deep stack traces, or very varied stack traces.
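As a sketch, the default-heap heuristic quoted above can be expressed like this (the class and method names are hypothetical illustrations, not NetBeans source identifiers; all values are in megabytes):

```java
public class DefaultXmx {

    // Hypothetical illustration of the default -Xmx heuristic quoted above:
    // max(96, min(COMPUTERS_RAM / 5, 768)), all values in MB.
    static long defaultXmxMb(long installedRamMb) {
        return Math.max(96, Math.min(installedRamMb / 5, 768));
    }

    public static void main(String[] args) {
        // A 24 GB machine (as in this report) is still capped at 768 MB.
        System.out.println(defaultXmxMb(24 * 1024)); // prints 768
        // A very small machine gets the 96 MB floor.
        System.out.println(defaultXmxMb(400));       // prints 96
    }
}
```

This matches the behaviour reported here: with 24 GB of RAM the IDE still starts with only 768 MB of heap unless -J-Xmx is set explicitly.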
Many thanks for your explanation.

I set -J-Xmx1500m in netbeans.conf as per your suggestion and started profiling. When I right-click on the first class showing and I click on "Take Snapshot and show allocation stack traces", nothing happens. If I right-click again I get the Windows spinning wheel. I cannot stop the profiling. I have to kill NB.

I then increased to the following values (extremely high in my humble opinion for just a few seconds of execution): -J-Xmx4500m -J-XX:PermSize=1200m. Now I get a stack allocation, but only after I choose "stop profiling this class". After that I cannot stop the profiling session unless I kill NB again.

I then set -J-Xmx10000m -J-XX:PermSize=3000m. Same behaviour, but the UI froze again. I have to kill NB again.

For me, profiling with allocation stack traces is unusable at the moment.

P.S.: In addition, I think that the hard-coded limit you mentioned on the max memory used by NB needs to be included in the documentation. If, as you say, "there is a quiet assumption", that assumption needs to be stated clearly. I have not "opened thousands of projects". I chose to profile with allocation stack traces.
The latest snapshot is here: https://dl.dropboxusercontent.com/u/4422938/bug236118/application-1386697231240.zip
So, as I said, this is really a big issue for me, as I really would like to use the NetBeans profiler to detect possible memory leaks in my application.
Can you provide the project (sources) and exact steps to reproduce the problem?
(In reply to aquaglia from comment #21)
> Many thanks for your explanation.
>
> I set -J-Xmx1500m in netbeans.conf as per your suggestion.
>
> I started profiling.
>
> When I right-click on the first class showing and I click on "Take Snapshot
> and show allocation stack traces" nothing happens.
> If I right-click again I get the Windows spinning wheel.
> I cannot stop the profiling. I have to kill NB.
>
> I then increase to the following values (extremely high in my humble opinion
> for just a few seconds of execution):
>
> -J-Xmx4500m -J-XX:PermSize=1200m
>
> Now I get a stack allocation but only after I choose "stop profiling this
> class".
> After that I cannot stop the profiling session unless I kill NB again.
>
> I then set
> -J-Xmx10000m -J-XX:PermSize=3000m
>
> Same behaviour but the UI froze again. I have to kill NB again.
>
> For me, the profiling with stack allocation traces is unusable at the moment.

Thanks for trying it with different Xmx values. Note that you don't need to change the perm size, so you don't need to touch the -J-XX:PermSize parameter.

This definitely looks strange. However, to investigate it, we need the heap dump from the OutOfMemoryError which happened when you changed the Xmx. So start NetBeans with -J-Xmx1500m. When the OOME happens, zip the heap dump together with messages.log from the same session and upload it on Dropbox, or you can use <http://deadlock.netbeans.org/job/upload/build> to upload it.

If you can provide exact steps to reproduce this issue, that would be great and it will save us a lot of time when investigating it.
(In reply to aquaglia from comment #22)
> The latest snapshot is here:
> https://dl.dropboxusercontent.com/u/4422938/bug236118/application-1386697231240.zip

This snapshot shows that you are running with Xmx 10000m, the maximum allocated memory was around 1600M, and there is no evidence of an OutOfMemoryError. The thread dump shows that NetBeans is waiting for data from the profiled application. It looks like the profiled application does not send any data and therefore the NetBeans UI is frozen. It is not clear what is going on in the profiled application and why it does not send any data.
If the application is launched with a separate VM, it may have a different heap/permgen setting.

aquaglia: How do you launch the application?

I offer to verify your bug report, given that you provide me with:
- SVN location (I guess it is an EU project with a corresponding licence)
- instructions how I can reproduce your problem, i.e. how to set up the project and launch it
(In reply to Tomas Hurka from comment #26)
> This snapshot shows that your are running with Xmx 10000m, the maximum
> allocated memory was around 1600M and there is no evidence of
> Out-of-memory-error. The thread dump shows that NetBeans is waiting for data
> from profiled application. It looks like profiled application does not send
> any data and therefore NetBeans UI is frozen. It is not clear what is going
> on in profiled application and why it does not send any data.

I have just tried with the following settings:

netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-Xmx1500m -J-XX:PermSize=32m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true -J-Dsun.java2d.dpiaware=true -J-Dsun.zip.disableMemoryMapping=true"

With -J-Xmx1500m I get: java.lang.OutOfMemoryError: GC overhead limit exceeded
(In reply to everflux from comment #27)
> aquaglia: How do you launch the application?
>
> I offer to verify your bug report, given that you provide me with:
> - SVN location (I guess it is an EU project with according licence)
> - instructions how I can reproduce your problem, i.e. how to setup the
> project and launch it

Thanks everflux@netbeans.org! It will be published as an EU project, but it hasn't been yet, so I cannot send the source code around. The publication is planned to happen soon; that's why I would like to check for memory leaks. If you can give me additional steps to diagnose this further, I will do it. Otherwise, we will need to postpone this until the official publication of the code.

I created a Main class in my code with something to test in it, and I launch the profiler by right-clicking on the project node. It is a Maven project, so this is how it is executed:

--- exec-maven-plugin:1.2.1:exec (default-cli) @ INSPIREGeoportalLibrary ---
Profiler Agent: Waiting for connection on port 5140, timeout 10 seconds (Protocol version: 14)
Profiler Agent: Established connection with the tool
Profiler Agent: Local accelerated session
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
*** Profiler engine warning: class sun.reflect.GeneratedConstructorAccessor1 that should be instrumented is not loaded by target VM
*** Requested classloader: sun.reflect.DelegatingClassLoader@36ac1596, its class = class sun.reflect.DelegatingClassLoader, index = 3, hashcode = 917247382
*** Profiler engine warning: target VM cannot load class to instrument sun.reflect.GeneratedConstructorAccessor1
*** probably it has been unloaded recently
*** Profiler engine warning: class sun.reflect.GeneratedConstructorAccessor2 that should be instrumented is not loaded by target VM
*** Requested classloader: sun.reflect.DelegatingClassLoader@240c5895, its class = class sun.reflect.DelegatingClassLoader, index = 4, hashcode = 604788885
*** Profiler engine warning: target VM cannot load class to instrument sun.reflect.GeneratedConstructorAccessor2
*** probably it has been unloaded recently
*** Profiler engine warning: class sun.reflect.GeneratedConstructorAccessor3 that should be instrumented is not loaded by target VM
...

Angelo
The exception reporter is now uploading
I assume it is a problem with not enough memory in your child JVM, not NetBeans itself. Can you try to increase the VM heap size for the project by going to the project's Properties and, under VM Options in Run, putting in -Xmx2048m? You could otherwise set a global environment variable MAVEN_OPTS=-Xmx2048m.
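As a sketch, the two alternatives suggested above would look like this (the 2048m value is just the one proposed in this comment; the Unix shell form is shown for the environment variable):

```shell
# Option 1 (per project): Project Properties > Run > VM Options
#   -Xmx2048m
# Option 2 (global, affects every Maven-launched JVM):
export MAVEN_OPTS=-Xmx2048m
```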
netbeans.conf set to -J-Xmx2000m -J-XX:PermSize=32m

I have increased the VM heap size for the project by going to the project's Properties and, under VM Options in Run, put in -Xmx2048m.

I right-click and choose "Take Snapshot and show allocation stack traces".

I get a new exception: ArrayIndexOutOfBoundsException: 4532
http://statistics.netbeans.org/analytics/exception.do?id=703970
https://dl.dropboxusercontent.com/u/4422938/bug236118/application-1386862828022.zip
(In reply to aquaglia from comment #30)
> The exception reporter is now uploading

I examined the heap dump uploaded in issue #239405: the OOME happened when the snapshot was taken. There is a massive amount of profiled data, so there is no straightforward way to decrease the memory consumption. Can you please try it with Xmx 2G? Please also provide exact steps, including any custom profiler settings. Thanks.
*** Bug 239405 has been marked as a duplicate of this bug. ***
(In reply to everflux from comment #31)
> I assume it is a problem with not enough memory in your child-JVM, not
> Netbeans itself.

The OOME is from NetBeans, so this problem is in NetBeans.
(In reply to aquaglia from comment #32)
> netbeans.conf set to -J-Xmx2000m -J-XX:PermSize=32m

OK, but next time please do not change the perm size parameter. I never asked you to change it.

> I have increased the VM heap size for the project by going to the project's
> Properties and under VM Options in Run, and put in -Xmx2048m.
>
> I right click and choose "Take Snapshot and show allocation stack traces"
>
> I get a new exception:
> ArrayIndexOutOfBoundsException: 4532
> http://statistics.netbeans.org/analytics/exception.do?id=703970

Did you get an OOME? Does the ArrayIndexOutOfBoundsException happen every time?
OK, PermSize is at its default; I put it back once you told me.

"Did you get OOME?" Not this time.

"Does ArrayIndexOutOfBoundsException happen every time?" It has happened twice.
(In reply to aquaglia from comment #38)
> OK, PermSize is as per default I put it back once you told me.
>
> "Did you get OOME?" Not this time.

Fine.

> "ArrayIndexOutOfBoundsException happen every time?" It has happened twice.

I saw both reports. It looks like the snapshot was taken, and the ArrayIndexOutOfBoundsException happens when the profiler tries to open it. If you are able to reproduce it with the already saved snapshot, please attach the snapshot to the ArrayIndexOutOfBoundsException bug.
The ArrayIndexOutOfBoundsException is being discussed in bug: https://netbeans.org/bugzilla/show_bug.cgi?id=239331
*** Bug 245210 has been marked as a duplicate of this bug. ***
*** Bug 245849 has been marked as a duplicate of this bug. ***
This issue should be solved in the current dev build, thanks to substantial changes in the profiler infrastructure and workflow.
*** Bug 253347 has been marked as a duplicate of this bug. ***
*** Bug 256883 has been marked as a duplicate of this bug. ***
Using NetBeans 8.1, I experience the same Java heap problem while DEBUGGING a Symfony project. I increased the heap size but I cannot use gigabytes (didn't know I needed NASA computers to make NetBeans work). This bug should be solved.