Run Jira server in a virtualized environment
Virtual machine (VM) environments have become a common way for system administrators to manage and deploy their IT systems, and many JIRA customers already run their instances successfully on VMware. However, running an Enterprise Java application in a virtual environment requires proper configuration and tuning to maintain high performance. This document summarizes the most important practices for configuring and tuning VMware to work with a Java application like JIRA. We also invite you to share your own best practices and to raise comments and questions below.
While this guide provides a high-level overview of the VMware configuration required for JIRA, we are unfortunately unable to provide support for VMware itself. Please contact VMware support if you need assistance with the installation, configuration, or troubleshooting of a VMware instance.
JIRA behaves on a virtual machine like comparable Enterprise Java applications, and no JIRA-specific VMware configuration changes are necessary. That said, there are a number of VMware preferences that should be optimized for running an Enterprise Java application. These configurations are elaborated in VMware's Enterprise Java Applications on VMware Best Practices Guide, on which this guide is based. Below is a summary of the most important configuration considerations.
Note: This guide assumes that JIRA runs on a dedicated virtual machine that is not shared with another Java application or other resource-demanding software. If that is not the case, please factor the additional resource needs into your sizing and tuning measures.
As you install VMware, the guest operating system, and the Java Virtual Machine (JVM), make sure you patch the JVM to the latest version. Simply using a newer, patched JVM version can often have a significant impact on the performance of your JIRA instance. Our tests show that a 32-bit JVM is slightly faster than a 64-bit JVM under some workloads, but you may need a 64-bit JVM for a larger heap. We recommend choosing based on your needs rather than on performance, as the difference is negligible. If unsure, we recommend using a 64-bit JVM.
Unfortunately, there are no specific, quantified best values for memory size, number of vCPUs, and so on. The best practice for determining the correct sizing is to perform load and performance testing, using scripts that mimic your production workload, against different VM configurations. For this activity, Atlassian has produced a manual on how to conduct JIRA Performance Testing with Grinder. In addition to your own testing, please also see the general recommendations on sizing below:
The best way to size your VM's memory is to understand the requirements of your OS and your Java application, including their specific memory requirements. Allow adequate space for each component and sum them up as follows:
The total VM memory is therefore determined by the RAM requirements of the guest OS, plus the JVM's maximum heap (-Xmx), the JVM permanent generation (-XX:MaxPermSize), and the Java stacks (NumberOfConcurrentThreads * -Xss). Consider the large-scale system example from the JIRA Sizing Guide: for this class of production workload, the recommended JVM heap size is 4 GB. If we assume that the remaining JVM memory requirements add another 0.5 GB, and that the guest OS requires 1 GB, then the total VM memory size to start with is 5.5 GB for JIRA alone. If more applications are to run on the VM, for example a self-contained application stack (JIRA + Confluence + database + mail server + Apache), then in general the combined maximum heap settings of all Java applications should not exceed 50 percent of the overall memory. For example, if JIRA and Confluence each have a maximum heap setting of 512 MB (1024 MB total), the overall memory allocation for the VM should be no less than 2 GB.
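The sizing arithmetic above can be sketched as a small calculation. This is a rough model only; the component values below are the assumptions from the example (4 GB heap, roughly 0.5 GB of remaining JVM memory, 1 GB guest OS), not universal constants:

```python
def vm_memory_gb(guest_os_gb, max_heap_gb, perm_gen_gb, threads, stack_mb):
    """Sum the memory components described above (all inputs are estimates)."""
    java_stacks_gb = threads * stack_mb / 1024  # NumberOfConcurrentThreads * -Xss
    return guest_os_gb + max_heap_gb + perm_gen_gb + java_stacks_gb

# Large-scale example: 4 GB heap (-Xmx), 0.25 GB perm gen, 256 threads with
# a 1 MB stack each (0.25 GB), and 1 GB for the guest OS -> 5.5 GB total.
total = vm_memory_gb(guest_os_gb=1.0, max_heap_gb=4.0,
                     perm_gen_gb=0.25, threads=256, stack_mb=1)
print(round(total, 2))  # 5.5
```

The thread count and stack size are illustrative; substitute the values you measure for your own workload.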
In addition to planning your memory requirements based on the method above, VMware recommends the following additional measures:
- Memory Reservation: Whatever VM Memory size you have determined to be adequate for your JIRA production workload, make sure to reserve this memory size in your VMware configuration.
- Ballooning, Swapping & Bursting: While VMware has features that allow VMs to expand their resource allocation beyond their limits, this should be prevented as it will impact the performance of your system. It is recommended instead to revisit the VM Memory sizing and reserve this corresponding size in your VMware configuration.
- Large Pages: VMware recommends enabling large memory pages in both the guest operating system and the JVM. See also VMware's guide on Large Page Performance.
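As a sketch, large pages are typically enabled on the JVM side with the standard HotSpot flag shown below; the guest-OS side (for example, hugepages on Linux) must be configured separately per your OS documentation:

```shell
# HotSpot JVM flag requesting large pages. This only takes effect if the
# guest OS has large/huge pages configured and the JIRA user may use them.
JAVA_OPTS="$JAVA_OPTS -XX:+UseLargePages"
```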
Virtual CPUs (vCPU)
VMware recommends the following measures for vCPUs for Enterprise Java applications:
- Make sure that the total vCPU load does not cause more than 80% CPU load on your host machine as vCPU overcommitment can significantly impact your system's performance.
- On the other hand, you also don't want to waste CPU cycles on unused vCPUs. The best practice VMware recommends is to start with a smaller number of vCPUs and add more as necessary.
To further optimize VM performance and to ensure consistent timestamps throughout the Enterprise environment, VMware recommends installing improved time-synchronization features between the host and the virtual machine. Please refer to the Timekeeping in Virtual Machines guide for further instructions.
It is recommended to enable the hot add feature for the VM that JIRA runs on. It allows memory and vCPUs to be added at runtime without shutting down the virtual machine. Note, however, that enabling this feature later requires shutting the machine down.
Garbage collection tuning on a virtual machine does not differ from that on a physical machine. Refer to the JIRA Garbage Collection Guide for tips and troubleshooting help. When managing GC for a VM-hosted JIRA instance, it is recommended to align the number of GC threads with the number of vCPUs.
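For a parallel collector on a HotSpot JVM, this alignment is typically done with the standard -XX:ParallelGCThreads flag, sketched below; the value 4 is an assumption for a VM with four vCPUs:

```shell
# Match the parallel GC thread count to the VM's vCPU count (4 assumed here)
JAVA_OPTS="$JAVA_OPTS -XX:ParallelGCThreads=4"
```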
JIRA Specific Performance Tuning (not specific to VMware)
You might also want to refer to the general JIRA Performance Tuning guide, once your VMware configuration has been adjusted and tuned correctly.
Hardware performance metrics with esxtop
Esxtop is a performance-metrics tool that ships with VMware. It reports speed and capacity statistics for your CPU, memory, network, and hard disks. Please also refer to Performance Troubleshooting for VMware vSphere 4 and ESX 4.0 and Interpreting esxtop 4.1 Statistics. Additionally, we can recommend Yellow Bricks' take on esxtop.
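To capture the metrics listed below over time rather than watching them interactively, esxtop can be run in batch mode; the sampling interval and iteration count below are arbitrary example values:

```shell
# Record 12 samples at 5-second intervals in batch (CSV) mode on the ESX
# host, for later analysis in perfmon or a spreadsheet.
esxtop -b -d 5 -n 12 > esxtop-stats.csv
```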
For an interpretation of the various metrics, please see the table below:
|Display|Metric|Threshold|Explanation|
|---|---|---|---|
|CPU|%RDY|10|Overprovisioning of vCPUs, excessive usage of vSMP, or a limit has been set (check %MLMTD). See Jason's explanation for vSMP VMs.|
|CPU|%CSTP|3|Excessive usage of vSMP. Decrease the number of vCPUs for this particular VM; this should lead to increased scheduling opportunities.|
|CPU|%SYS|20|The percentage of time spent by system services on behalf of the world. Most likely caused by a high-I/O VM. Check other metrics and VMs for a possible root cause.|
|CPU|%MLMTD|0|The percentage of time the vCPU was ready to run but was deliberately not scheduled because that would violate the "CPU limit" settings. If larger than 0, the world is being throttled due to the limit on CPU.|
|CPU|%SWPWT|5|VM waiting on swapped pages to be read from disk. Possible cause: memory overcommitment.|
|MEM|MCTLSZ|1|If larger than 0, the host is forcing VMs to inflate the balloon driver to reclaim memory because the host is overcommitted.|
|MEM|SWCUR|1|If larger than 0, the host has swapped memory pages in the past. Possible cause: overcommitment.|
|MEM|SWR/s|1|If larger than 0, the host is actively reading from swap (vswp). Possible cause: excessive memory overcommitment.|
|MEM|SWW/s|1|If larger than 0, the host is actively writing to swap (vswp). Possible cause: excessive memory overcommitment.|
|MEM|CACHEUSD|0|If larger than 0, the host has compressed memory. Possible cause: memory overcommitment.|
|MEM|ZIP/s|0|If larger than 0, the host is actively compressing memory. Possible cause: memory overcommitment.|
|MEM|UNZIP/s|0|If larger than 0, the host is accessing compressed memory. Possible cause: the host was previously overcommitted on memory.|
|MEM|N%L|80|If less than 80, the VM experiences poor NUMA locality. If a VM has a memory size greater than the amount of memory local to each processor, the ESX scheduler does not attempt to use NUMA optimizations for that VM and "remotely" uses memory via the "interconnect". Check "GST_ND(X)" to find out which NUMA nodes are used.|
|NETWORK|%DRPTX|1|Dropped transmitted packets; hardware overworked. Possible cause: very high network utilization.|
|NETWORK|%DRPRX|1|Dropped received packets; hardware overworked. Possible cause: very high network utilization.|
|DISK|GAVG|25|Look at "DAVG" and "KAVG", as GAVG is the sum of both.|
|DISK|DAVG|25|Disk latency, most likely caused by the array.|
|DISK|KAVG|2|Disk latency caused by the VMkernel; high KAVG usually means queuing. Check "QUED".|
|DISK|QUED|1|Queue maxed out. Possibly the queue depth is set too low. Check with the array vendor for the optimal queue depth value.|
|DISK|ABRTS/s|1|Aborts issued by the guest (VM) because storage is not responding. For Windows VMs this happens after 60 seconds by default. Can be caused, for instance, when paths fail or the array stops accepting I/O for whatever reason.|
|DISK|RESETS/s|1|The number of commands reset per second.|
|DISK|CONS/s|20|SCSI reservation conflicts per second. Many SCSI reservation conflicts can degrade performance due to the lock on the VMFS.|
Open a Support ticket with VMware
Work with an Atlassian Partner
We have a global network of partners who provide services, implementation, consulting, and unique solutions. Click here to find a partner.
Comment on this Guide
Please note that this is a living document and the best practices here are not set in stone. Please contribute your best practices, questions, and comments about running JIRA on VMs to the Atlassian Community.