Migrating a Java Application to Kubernetes. JVM Optimization
This is the third part of the k8s migration series. It covers JVM performance monitoring and optimization. In k8s, Java applications usually run without any JVM configuration, or with just something like this:
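java -Xmx512m -jar application.jar

(an illustrative minimal startup line; the -Xmx value here is arbitrary)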
In this case, JVM 10: uses Serial GC if the machine has fewer than two available CPUs or less than 2 GB of RAM; reserves a large amount of memory for the code cache; uses a bigger thread stack size than necessary, etc.
JVM memory consumption can be calculated using this formula:
JVM memory = Heap + Metaspace + CodeCache + (ThreadStackSize * Number of Threads) + DirectByteBuffers + JVM native overhead
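For example, with illustrative values: a 256m heap, 64m Metaspace, 48m code cache, 200 threads with 1m stacks each, and 32m of direct buffers already add up to 256 + 64 + 48 + 200 + 32 = 600m, before any JVM native overhead.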
Let’s see how we can change the default settings of JVM.
We can start our application with additional JVM params:
-Djava.security.egd=file:/dev/./urandom uses /dev/urandom as the entropy source for SecureRandom, which avoids blocking on startup while the kernel gathers entropy.
-Xms parameter sets the initial heap size.
-Xmx sets the maximum heap size.
-Xss defines the thread stack size.
-XX:ReservedCodeCacheSize sets the maximum code cache size, used by the JIT compiler.
-XX:CodeCacheMinimumFreeSpace sets the minimum free space in the code cache. When less than this amount remains, compilation stops. This space is reserved for code other than compiled methods, for example, native adapters.
-XX:CodeCacheExpansionSize sets code cache expansion size.
-XX:+UseG1GC (-XX:+UseParallelGC , -XX:+UseConcMarkSweepGC, -XX:+UseZGC , -XX:+UseShenandoahGC) enables G1 (Parallel, CMS, Z, Shenandoah) GC instead of the default.
-XX:MaxGCPauseMillis sets the target for the maximum GC pause time. The JVM may exceed this target.
-XX:ParallelGCThreads sets the number of threads used for stop-the-world phases.
-XX:ConcGCThreads sets the number of threads used for concurrent phases.
-XX:InitiatingHeapOccupancyPercent sets the percentage of the heap occupancy to start a concurrent GC cycle.
-XX:MetaspaceSize sets the threshold: when the space committed for class metadata reaches this value, a Full GC is triggered.
-XX:MaxMetaspaceSize defines the maximum Metaspace size.
-XX:MinMetaspaceExpansion sets the minimum growth increment for the Metaspace.
-XX:MaxMetaspaceExpansion sets the maximum growth increment for the Metaspace.
-XX:+PerfDisableSharedMem disables writing hsperfdata to a memory-mapped file on disk, keeping the performance counters in memory instead.
-XX:MaxDirectMemorySize limits the total amount of memory that can be reserved for all direct byte buffers.
-XX:+AlwaysActAsServerClassMachine forces server-class ergonomics, so the JVM does not fall back to Serial GC on machines with a small amount of memory and few CPUs.
Pay attention that some of these arguments are deprecated/removed in Java 9+!
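For example, a combined startup line for a small container might look like this (a sketch with illustrative values, not a universal recommendation):

java -Xms256m -Xmx256m -Xss512k \
     -XX:MaxMetaspaceSize=128m -XX:ReservedCodeCacheSize=64m \
     -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
     -XX:MaxDirectMemorySize=64m -XX:+PerfDisableSharedMem \
     -jar application.jar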
Let’s run our application with the default JVM settings and put it under a simple workload. We could create a Postman collection or send requests to the application manually; however, JMeter is a widespread solution for load testing.
Run JMeter and create a thread group for our workload:
Set 10 parallel threads with an infinite loop count and a 10-minute duration.
Then, create a new HTTP request.
Provide the protocol, host, port, and path, and choose the request method.
To visualize the results of the workload, add a Graph Results listener.
Then, run the test plan. Check results and Grafana metrics for JVM and k8s.
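The same plan can also be run without the GUI in JMeter's CLI mode (assuming the plan is saved as workload.jmx, a file name chosen here for illustration):

jmeter -n -t workload.jmx -l results.jtl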
The key metric here is the throughput: 42.9/minute.
We can see that the application reserves 15 times more heap than it consumes.
You can reduce Compressed Class Space down to 32m and slightly increase Metaspace.
The k8s container consumes around 500 MB, but as the workload increases, consumption grows to almost 900 MB!
We can try to reduce these memory parameters, apply the new settings, and rerun the test plan.
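For example, the reduced settings might look like this (a sketch; the exact values should come from your own metrics):

java -Xms128m -Xmx128m -Xss256k \
     -XX:MaxMetaspaceSize=64m -XX:CompressedClassSpaceSize=16m \
     -XX:ReservedCodeCacheSize=32m \
     -jar application.jar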
With a strictly defined heap size, the JVM starts to use all of the allocated space.
In the non-heap area, we can reduce the Compressed Class Space down to 16m and the Code Cache down to 32m.
There are no more memory consumption spikes.
Throughput increases to 43.6/minute with this memory limit reduction.
Unfortunately, when you track resource usage in Grafana or similar tools, you'll see that the JVM consumes more RAM than you set via the JVM params.
Container memory consumption is 473 MB, but the JVM heap + non-heap size is only 284 MB.
This gap can push the container over its memory limit and, as a consequence, cause an OOM kill.
To dive deeper into JVM memory usage, we can use Native Memory Tracking (NMT).
In a few words, we should run our application with these params:
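-XX:NativeMemoryTracking=summary -XX:+UnlockDiagnosticVMOptions -XX:+PrintNMTStatistics

(summary can be replaced with detail for a per-call-site breakdown, at the cost of extra overhead)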
The PrintNMTStatistics flag means that NMT statistics will be printed to stdout when the JVM exits (for example, after a SIGTERM). Alternatively, you can connect to the pod console, get the PID of the Java process, and query NMT at runtime:
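jps
jcmd <PID> VM.native_memory summary

(jps prints the PIDs of running Java processes; substitute the real PID into the jcmd call)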
The result looks something like this:
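Native Memory Tracking:

Total: reserved=428943KB, committed=288110KB
-       Java Heap (reserved=131072KB, committed=131072KB)
-           Class (reserved=81477KB, committed=55109KB)
-          Thread (reserved=22654KB, committed=22654KB)
-            Code (reserved=49990KB, committed=14214KB)
-              GC (reserved=72712KB, committed=35712KB)
...

(an abbreviated, illustrative summary; the real numbers depend on your application and settings)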
The Java Heap reserved and committed sizes are controlled by -Xms and -Xmx. The Class size depends on the Metaspace settings. The Thread size can be reduced via the stack size (-Xss). The GC size can be changed by switching the GC itself.
Use NMT and the JVM params to reduce memory consumption and increase application performance. Do not forget to disable NMT in production, since it adds a 5-10% performance overhead, and set the k8s requests and limits based on the NMT results.
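Based on the NMT totals, the requests and limits can be set with some headroom above the committed size, for example (a sketch; the deployment name and values are illustrative):

kubectl set resources deployment/application \
  --requests=memory=320Mi,cpu=500m --limits=memory=384Mi,cpu=1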