To our knowledge, JIRA does not have any memory leaks. We know of various public high-usage JIRA instances (eg. 40k issues, 100+ new issues/day, 22 pages/min in 750Mb of memory) that run for months without problems. When memory problems do occur, the following checklist can help you identify the cause.
Check the System Info page (see Increasing JIRA memory) after a period of sustained JIRA usage to determine how much memory is allocated.
When increasing Java's memory allocation with -Xmx, please ensure that your system actually has the allocated amount of memory free. For example, if you have a server with 1Gb of RAM, most of it is probably taken up by the operating system, database and other processes. Setting -Xmx1g for a Java process would be a very bad idea: Java would claim most of this memory from swap (disk), which would dramatically slow down everything on the server. If the system ran out of swap, you would get OutOfMemoryErrors.
If the server does not have much memory free, it is better to set -Xmx conservatively (eg. -Xmx256m), and only increase -Xmx when you actually see OutOfMemoryErrors. Java's memory management will work to keep within the limit, which is better than going into swap.
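For example, on JIRA Standalone a conservative heap could be set in bin/setenv.sh (a sketch only; the file and variable depend on your app server, and the figures are just starting points):
# bin/setenv.sh -- sketch; start small and raise -Xmx only if OutOfMemoryErrors actually occur
export CATALINA_OPTS="$CATALINA_OPTS -Xms128m -Xmx256m"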
Please make sure you are using the latest version of JIRA; memory leaks are regularly found and fixed in new releases.
People running multiple JSP-based web applications (eg. JIRA and Confluence) in one Java server are likely to see this error:
java.lang.OutOfMemoryError: PermGen space
Java reserves a fixed 64Mb block for loading class files, and with more than one webapp this is often exceeded. You can fix this by setting the -XX:MaxPermSize=128m property. See the Increasing JIRA memory page for details.
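For example, on JIRA Standalone the flag could be added in bin/setenv.sh (a sketch; the file and variable depend on your app server):
# bin/setenv.sh -- sketch, assuming JIRA Standalone (Tomcat)
export CATALINA_OPTS="$CATALINA_OPTS -XX:MaxPermSize=128m"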
Tomcat caches JSP content. If JIRA is generating huge responses (eg. multi-megabyte Excel or RSS views), then these cached responses will quickly fill up memory and result in OutOfMemoryErrors.
In Tomcat 5.5.15+ there is a workaround: set the org.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true property (see the example below). For earlier Tomcat versions, including that used in JIRA Standalone 3.6.x and earlier, there is no workaround. Please upgrade Tomcat, or switch to another app server.
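On Tomcat 5.5.15 or later, the property can be passed as a JVM system property, for example in JIRA Standalone's bin/setenv.sh (a sketch; adjust for your app server):
# bin/setenv.sh -- sketch; stops Jasper caching large response body buffers
export CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true"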
We strongly recommend running JIRA in its own JVM (app server instance), so that web applications cannot affect each other, and each can be restarted/upgraded separately. Usually this is achieved by running app servers behind Apache or IIS.
If you are getting OutOfMemoryErrors, separating the webapps should be your first action. It is virtually impossible to work out retroactively which webapp is consuming all the memory.
In order to correctly 'thread' email notifications in mail browsers, JIRA tracks the Message-Id header of mails it sends. In heavily used systems, the notificationinstance table can become huge, with millions of records. This can cause OutOfMemoryErrors in the JDBC driver when it is asked to generate an XML export of the data (see JRA-11725).
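To check whether the table has grown to this size, count its rows directly in the database. A sketch assuming a MySQL backend (jirauser and jiradb are placeholders for your own credentials and database name):
# count notificationinstance rows; jirauser and jiradb are placeholders
mysql -u jirauser -p jiradb -e "SELECT COUNT(*) FROM notificationinstance;"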
Occasionally people write their own services, which can cause memory problems if (as is often the case) they iterate over large numbers of issues. If you have any custom services, please try disabling them for a while to eliminate them as a cause of problems.
The CVS service sometimes causes memory problems, if used with a huge CVS repository (in this case, simply increase the allocated memory).
A symptom of a CVS (or general services-related) problem is that JIRA will run out of memory just minutes after startup.
Do you have hundreds of thousands of issues? Is JIRA's built-in backup service running frequently? If so, please switch to a native backup tool and disable the JIRA backup service, which will be taking a lot of CPU and memory to generate backups that are unreliable anyway (due to lack of locking). See the JIRA backups documentation for details.
Does a user have an e-mail address that is the same as one of the mail accounts in your mail handler services? This can cause a comment loop: a notification is sent out and appended to the issue as a comment, which then triggers another notification, and so forth. If a user then views that issue, it can consume a lot of memory. The following query will show you issues with more than 50 comments. It can be normal for an issue to have 50 comments; what you want to spot is an irregular pattern in the comments themselves, such as repeating notifications.
SELECT count(*) as commentcount, issueid from jiraaction group by issueid having commentcount > 50 order by commentcount desc
The SOAP getProjects call loads a huge object graph, particularly when there are many users in JIRA, and thus can cause OutOfMemoryErrors. Please always use getProjectsNoSchemes instead.
If your developers use the Eclipse Mylyn plugin, make sure they are using the latest version. The Mylyn bundled with Eclipse 3.3 (2.0.0.v20070627-1400) uses the getProjects method, causing problems as described above.
This applies particularly to publicly visible JIRAs. Sometimes a crawler can slow down JIRA by making multiple huge requests. Every now and then someone misconfigures their RSS reader to request XML for every issue in the system, and sets it running once a minute. Similarly, people sometimes write SOAP clients without consideration of the performance impact, and set it running automatically. JIRA might survive these (although be oddly slow), but then run out of memory when a legitimate user's large Excel view pushes it over the limit.
The best way to diagnose unusual requests is to enable Tomcat access logging (on by default in JIRA Standalone), and look for requests that take a long time.
In JIRA 3.10 there is a jira.search.views.max.limit property you can set in WEB-INF/classes/jira-application.properties, which is a hard limit on the number of search results returned. It is a good idea to enable this for sites subject to crawler traffic.
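For example (the value 1000 is only an illustration; pick a limit appropriate for your site):
# WEB-INF/classes/jira-application.properties -- cap the number of search results returned
jira.search.views.max.limit = 1000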
Every now and then someone reports memory problems, and after much investigation we discover they have 3,000 custom fields, or are parsing 100Mb emails, or have in some other way used JIRA in unexpected ways. Please be aware of where your JIRA installation deviates from typical usage.
If you have been through the list above, there are a few further diagnostics which may provide clues.
By far the most powerful and effective way of identifying memory problems is to have JIRA dump the contents of its memory when it exits due to an OutOfMemoryError. This has no noticeable performance impact. To do this, start JIRA with the -XX:+HeapDumpOnOutOfMemoryError option. If JIRA runs out of memory, it will create a jira_pid*.hprof file containing the memory dump in the directory you started JIRA from. Please reduce your maximum heap size (-Xmx) to 750m or so, so that the generated heap dump is of a manageable size. You can turn -Xmx back up once a heap dump has been taken.
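For example, on JIRA Standalone the option could be added in bin/setenv.sh (a sketch; -XX:HeapDumpPath is optional and simply redirects the dump to a known directory):
# bin/setenv.sh -- sketch; write a heap dump when an OutOfMemoryError occurs
export CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=${CATALINA_BASE}/logs"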
Garbage collection logging looks like this:
0.000: [GC [PSYoungGen: 3072K->501K(3584K)] 3072K->609K(4992K), 0.0054580 secs]
0.785: [GC [PSYoungGen: 3573K->503K(3584K)] 3681K->883K(4992K), 0.0050140 secs]
1.211: [GC [PSYoungGen: 3575K->511K(3584K)] 3955K->1196K(4992K), 0.0043800 secs]
1.734: [GC [PSYoungGen: 3583K->496K(3584K)] 4268K->1450K(4992K), 0.0045770 secs]
2.437: [GC [PSYoungGen: 3568K->499K(3520K)] 4522K->1770K(4928K), 0.0042520 secs]
2.442: [Full GC [PSYoungGen: 499K->181K(3520K)] [PSOldGen: 1270K->1407K(4224K)] 1770K->1589K(7744K) [PSPermGen: 6658K->6658K(16384K)], 0.0480810 secs]
3.046: [GC [PSYoungGen: 3008K->535K(3968K)] 4415K->1943K(8192K), 0.0103590 secs]
3.466: [GC [PSYoungGen: 3543K->874K(3968K)] 4951K->2282K(8192K), 0.0051330 secs]
3.856: [GC [PSYoungGen: 3882K->1011K(5248K)] 5290K->2507K(9472K), 0.0094050 secs]
This can be parsed with tools like gcviewer to get an overall picture of memory use.
To enable GC logging, start JIRA with the options -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:gc.log. Replace gc.log with an absolute path to a gc.log file.
For example, for the Windows service, run:
tomcat5 //US//JIRA ++JvmOptions="-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:c:\jira\logs\gc.log"
Alternatively, set the following in bin/setenv.sh:
export CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:${CATALINA_BASE}/logs/gc.log"
If you modify bin/setenv.sh, you will need to restart JIRA for the changes to take effect.
It is important to know what requests are being made, so unusual usage can be identified. For instance, perhaps someone has configured their RSS reader to request a 10Mb RSS file once a minute, and this is killing JIRA.
If you are using Tomcat, access logging can be enabled by adding the following to conf/server.xml, below the </Host> tag:
<Valve className="org.apache.catalina.valves.AccessLogValve" pattern="%h %l %u %t &quot;%r&quot; %s %b %T %S %D" resolveHosts="false" />
The %S logs the session ID, allowing requests from distinct users to be grouped. The %D logs the request time in milliseconds. Logs will appear in logs/access_log.<date>, and look like this:
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /secure/Dashboard.jspa HTTP/1.1" 200 15287 2.835 A2CF5618100BFC43A867261F9054FCB0 2835
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /styles/combined-printable.css HTTP/1.1" 200 111 0.030 A2CF5618100BFC43A867261F9054FCB0 30
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /styles/combined.css HTTP/1.1" 200 38142 0.136 A2CF5618100BFC43A867261F9054FCB0 136
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /styles/global.css HTTP/1.1" 200 548 0.046 A2CF5618100BFC43A867261F9054FCB0 46
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/combined-javascript.js HTTP/1.1" 200 65508 0.281 A2CF5618100BFC43A867261F9054FCB0 281
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/calendar/calendar.js HTTP/1.1" 200 49414 0.004 A2CF5618100BFC43A867261F9054FCB0 4
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/calendar/lang/calendar-en.js HTTP/1.1" 200 3600 0.000 A2CF5618100BFC43A867261F9054FCB0 0
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/calendar/calendar-setup.js HTTP/1.1" 200 8851 0.002 A2CF5618100BFC43A867261F9054FCB0 2
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/cookieUtil.js HTTP/1.1" 200 1506 0.001 A2CF5618100BFC43A867261F9054FCB0 1
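Since %D is the last field on each line, a quick way to surface the slowest requests is to sort on it. A sketch (the log file name is a placeholder; substitute the date of the log you want to inspect):
# list the 20 slowest requests: prints the trailing %D (milliseconds) field and the request path
awk '{print $NF, $7}' logs/access_log.2006-11-23 | sort -rn | head -20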
Alternatively, or if you are not using Tomcat or cannot modify the app server config, JIRA has built-in user access logging which can be enabled from the admin section, and produces terser logs like:
2006-09-27 10:35:50,561 INFO [jira.web.filters.AccessLogFilter] bob http://localhost:8080/secure/IssueNavigator.jspa 102065-4979 1266
2006-09-27 10:35:58,002 INFO [jira.web.filters.AccessLogFilter] bob http://localhost:8080/secure/IssueNavigator.jspa 102806-4402 1035
2006-09-27 10:36:05,774 INFO [jira.web.filters.AccessLogFilter] bob http://localhost:8080/browse/EAO-2 97058+3717 1730
If JIRA has hung with an OutOfMemoryError, the currently running threads often point to the culprit. Please take a thread dump of the JVM, and send us the logs containing it.
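For example, on Linux or Solaris a thread dump can be triggered by sending SIGQUIT to the JVM (a sketch; the pgrep pattern assumes JIRA Standalone, and the dump appears in the console log, typically logs/catalina.out):
# find the JIRA JVM's process id, then ask it for a thread dump
JIRA_PID=$(pgrep -f catalina)   # the pattern is an assumption; adjust to match your JIRA process
kill -3 "$JIRA_PID"             # SIGQUIT writes a thread dump to the console log
# with a Sun JDK, "jstack $JIRA_PID" prints the dump to stdout instead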