
Bug 254391 - Java memory consumption 4X what is configured with -J-Xmx
Status: RESOLVED WORKSFORME
Alias: None
Product: ide
Classification: Unclassified
Component: Performance
Version: 8.1
Hardware: PC Linux
Priority: P1 normal
Assignee: Tomas Hurka
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2015-08-15 17:27 UTC by tbrunhoff
Modified: 2015-09-04 15:17 UTC
CC List: 2 users

See Also:
Issue Type: DEFECT
Exception Reporter:


Attachments
current messages.log at startup. (957.65 KB, text/x-log)
2015-08-15 17:27 UTC, tbrunhoff
output from jmap -histo:live <pid> (618.72 KB, application/octet-stream)
2015-08-15 17:30 UTC, tbrunhoff

Description tbrunhoff 2015-08-15 17:27:33 UTC
Created attachment 155366 [details]
current messages.log at startup.

Briefly put: if I set the memory limit to 2GB, Linux shows a VM size of 8-9GB.

I don't know that this is a leak, but it is certainly unexpected. A few facts:
 - I am running Dev (Build 201507280002) on Fedora 19. The system has an i7 with 32GB physical memory.
 - I am doing C++ development with 30 projects open.
 - NetBeans is running with -J-Xmx2000m.
 - The IDE memory monitor in the title bar at startup shows numbers like 717.3/1245MB, but will rise toward the 2GB limit, depending on what I do.

However, looking at the process status with top, or 'cat /proc/<pid>/status', the virtual size runs at 8GB with the resident set size at 1.75GB or higher. This isn't a problem as long as Java operates within the resident set size. But after long editing sessions, or stepping with GDB in code with a deep stack, performance can become glacial.
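
For illustration, the gap between the heap the JVM reports for itself and the process-wide numbers the kernel reports can be made visible from inside the JVM. A minimal sketch, assuming Linux and its /proc filesystem (the class name HeapVsProc is illustrative):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class HeapVsProc {
    public static void main(String[] args) throws IOException {
        // Heap numbers as the JVM sees them; max corresponds to -Xmx.
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.printf("JVM heap: used=%dMB committed=%dMB max=%dMB%n",
                (rt.totalMemory() - rt.freeMemory()) / mb,
                rt.totalMemory() / mb,
                rt.maxMemory() / mb);

        // VmSize/VmRSS cover the whole process: JVM code, thread stacks,
        // mapped JARs and native libraries, not just the Java heap.
        List<String> status = Files.readAllLines(Paths.get("/proc/self/status"));
        for (String line : status) {
            if (line.startsWith("VmSize") || line.startsWith("VmRSS")) {
                System.out.println(line);
            }
        }
    }
}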

So below is a snapshot from 'top' while the IDE shows 717.3/1245MB. Note that the java process shows VIRT=8148992(K) and RES=1.795g. And /proc/<pid>/status shows the following:

VmPeak:  8279388 kB
VmSize:  8148992 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:   1899068 kB
VmRSS:   1881996 kB
VmData:  7937616 kB

-----------------------------------------------------------------------------
top - 10:21:11 up 10 days,  1:41, 13 users,  load average: 0.07, 0.21, 0.65
Tasks: 301 total,   2 running, 298 sleeping,   1 stopped,   0 zombie
%Cpu(s):  0.7 us,  0.2 sy,  0.0 ni, 98.9 id,  0.2 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:  32903764 total, 31133128 used,  1770636 free,   288628 buffers
KiB Swap: 65535992 total,   286832 used, 65249160 free, 25540600 cached

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                
 1898 toddb     20   0 1572476 287916  30204 S   2.0  0.9  50:44.78 thunderbird            
17066 toddb     20   0 1099312 110532  24840 S   1.3  0.3   4:31.11 chrome                 
 1438 root      20   0  735552 563232 275776 S   1.0  1.7   2398:41 X                      
11453 toddb     20   0 1122764 236076  86596 S   0.7  0.7  11:04.34 chrome                 
24845 toddb     20   0 8148992 1.795g  29280 S   0.7  5.7  13:58.55 java                   
26772 toddb     20   0  971436 116536  36408 S   0.7  0.4  10:15.62 chrome                 
 1804 toddb     20   0 3580640 117596  38760 S   0.3  0.4 748:01.03 kwin                   
 1861 toddb     20   0  601708  38552  27084 S   0.3  0.1   8:24.79 konsole                
 1870 toddb     20   0  580352  15272  11780 S   0.3  0.0   3:38.14 konsole                
 1874 toddb     20   0  601572  34280  24036 S   0.3  0.1  18:50.41 konsole                
 2554 toddb     20   0  123768   1724   1124 R   0.3  0.0   0:00.99 top                    
12047 toddb     20   0 1026892 132696  35744 S   0.3  0.4   1:16.37 chrome                 
22244 dovenull  20   0   43088   2856   2048 S   0.3  0.0   0:01.63 imap-login
Comment 1 tbrunhoff 2015-08-15 17:30:27 UTC
Created attachment 155367 [details]
output from jmap -histo:live <pid>

This histogram memory dump (attached) shows numbers that agree with the IDE's presentation.
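
The ':live' option forces a full GC before the histogram is taken, so its totals should roughly match what the JVM reports for its own heap after a collection. A minimal in-process cross-check via the standard MemoryMXBean (the class name HeapCheck is illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        // Request a collection first, mirroring jmap's :live semantics.
        mem.gc();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.printf("heap used=%dMB committed=%dMB max=%dMB%n",
                heap.getUsed() >> 20,
                heap.getCommitted() >> 20,
                heap.getMax() >> 20);
    }
}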
Comment 2 tbrunhoff 2015-08-15 17:35:40 UTC
I should emphasize that most of the time, performance is good. But if you do anything that causes Java to execute code or access data outside the resident set, then memory speed drops to disk speed as data/text is paged in. In my case, this occurs after the IDE has been open for long periods (days), or when stepping through code where the stack is >10 frames deep and the frames contain lots of local variables or large structures.
Comment 3 Tomas Zezula 2015-08-17 13:02:23 UTC
The -Xmx limit is the Java heap size.
There is a significant difference between heap size, physical memory, and virtual memory.
The heap size is the size of the Java heap on which Java objects are allocated; in other words, -Xmx1G means there is a 1GB limit on the total size of Java objects. On JDK < 8 there is, in addition, a separate PermGen space, which holds interned strings and loaded classes. In JDK 8 PermGen was removed and replaced by Metaspace, which is allocated from native memory outside the heap.
Virtual memory is larger than the heap because it also contains the Java virtual machine itself, memory-mapped files (JAR files, libraries the JVM depends on, NIO FileChannel.map() files), and the JIT compiler's code cache - the hot methods compiled from bytecode to native code.
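
A minimal sketch showing these non-heap consumers by enumerating the JVM's memory pools (pool names vary by JDK version and collector; the class name MemoryPools is illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPools {
    public static void main(String[] args) {
        // Prints heap pools (e.g. Eden, Old Gen) next to non-heap pools
        // such as the JIT Code Cache and PermGen/Metaspace.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-30s %-15s used=%dKB committed=%dKB%n",
                    pool.getName(), pool.getType(),
                    pool.getUsage().getUsed() >> 10,
                    pool.getUsage().getCommitted() >> 10);
        }
    }
}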

Regarding the paging: the JVM causes more page-in/page-out activity than native applications. The first reason is the garbage collector, which touches pages that have been swapped out and forces them to be loaded back in. Also, a Java application has no read-only text segment of its own, since classes are bytecode that is interpreted and compiled by the JIT compiler. Even the compiled code is not a pure text segment, because the JVM sometimes needs to deoptimize it, so it has to be backed by swap. For swap-in/swap-out problems it may be better to decrease the heap size or to tune the garbage collector. To verify that the hiccups are really caused by paging, vmstat can be used: if the si/so columns are high while the IDE is "blocked", paging is the cause. Another possible cause is the JVM's stop-the-world pauses, mostly triggered by GC.
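
To tell the two causes apart, the cumulative GC counters exposed by the JVM can be watched alongside vmstat: if the pauses line up with jumps in collection time rather than with si/so activity, the stalls are GC, not paging. A minimal sketch (the class name GcStats is illustrative):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // Cumulative collection counts and accumulated collection time
        // per collector since JVM start.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: collections=%d time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}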
Comment 4 Antonin Nebuzelsky 2015-09-04 15:17:25 UTC
Closing as non-issue.