Jira runs into OutOfMemoryError, CPU spikes, or crashes due to large comments or descriptions in an issue

Platform notice: Server and Data Center only. This article only applies to Atlassian products on the Server and Data Center platforms.

Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Summary

Jira may exhibit any of the following symptoms:

  • Runs into an OutOfMemoryError (seen in the logs)
  • Becomes very slow or unresponsive, sometimes forcing you to restart Jira manually
  • The server's CPU usage spikes drastically


Diagnosis

Any of the following may be seen:

  • An OutOfMemoryError may be thrown by Jira. For example, the following may appear in the atlassian-jira.log:

    2014-07-02 16:33:58,772 http-bio-6974-exec-23 ERROR username 993x628x2 bf7s1 127.0.0.1 /rest/issueNav/1/issueTable [common.error.jersey.ThrowableExceptionMapper] Uncaught exception thrown by REST service: Java heap space
    java.lang.OutOfMemoryError: Java heap space
    	at java.util.Arrays.copyOf(Arrays.java:2367)
    	at java.lang.StringCoding.safeTrim(StringCoding.java:89)
    	at java.lang.StringCoding.access$100(StringCoding.java:50)
    	at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:154)
    	at java.lang.StringCoding.decode(StringCoding.java:193)
    	at java.lang.String.<init>(String.java:416)
    	at org.apache.lucene.store.DataInput.readString(DataInput.java:182)
    	at org.apache.lucene.index.FieldsReader.addField(FieldsReader.java:431)
    	at org.apache.lucene.index.FieldsReader.doc(FieldsReader.java:261)
    	at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:471)
    	at org.apache.lucene.index.DirectoryReader.document(DirectoryReader.java:564)
    	at org.apache.lucene.index.IndexReader.document(IndexReader.java:844)
    	at org.apache.lucene.search.IndexSearcher.doc(IndexSearcher.java:242)
    	at com.atlassian.jira.index.DelegateSearcher.doc(DelegateSearcher.java:93)
    	at com.atlassian.jira.plugin.issuenav.service.issuetable.IssueDocumentAndIdCollector.addMatch(IssueDocumentAndIdCollector.java:207)
    	at com.atlassian.jira.plugin.issuenav.service.issuetable.IssueDocumentAndIdCollector.computeResult(IssueDocumentAndIdCollector.java:186)
    	at com.atlassian.jira.plugin.issuenav.service.issuetable.AbstractIssueTableCreator.executeNormalSearch(AbstractIssueTableCreator.java:237)
    	at com.atlassian.jira.plugin.issuenav.service.issuetable.AbstractIssueTableCreator.create(AbstractIssueTableCreator.java:202)
    	at com.atlassian.jira.plugin.issuenav.service.issuetable.DefaultIssueTableService.createIssueTableFromCreator(DefaultIssueTableService.java:188)
    	at com.atlassian.jira.plugin.issuenav.service.issuetable.DefaultIssueTableService.getIssueTable(DefaultIssueTableService.java:302)
    	at com.atlassian.jira.plugin.issuenav.service.issuetable.DefaultIssueTableService.getIssueTableFromFilterWithJql(DefaultIssueTableService.java:124)
    	at com.atlassian.jira.plugin.issuenav.rest.IssueTableResource.getIssueTableHtml(IssueTableResource.java:99)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  • When generating thread dumps with CPU usage (see Generating a thread dump), the threads with high CPU usage are in the RUNNABLE state and are performing the following:

    "JIRA-INFORM-Thread-0" #738 daemon prio=5 cpu=2178855.46ms elapsed=1124913.25s tid=0x00007f9c298ed000 nid=0x1ae2 runnable  [0x00007f9a25fc2000]
       java.lang.Thread.State: RUNNABLE
    	at com.atlassian.renderer.v2.components.MacroTag.makeMacroTag(MacroTag.java:53)
    	at com.atlassian.renderer.v2.WikiMarkupParser.setEndTagIfPresent(WikiMarkupParser.java:136)
    	at com.atlassian.renderer.v2.WikiMarkupParser.handlePotentialMacro(WikiMarkupParser.java:84)
    	at com.atlassian.renderer.v2.WikiMarkupParser.parse(WikiMarkupParser.java:62)
    	at com.atlassian.renderer.v2.components.MacroRendererComponent.render(MacroRendererComponent.java:49)
    	at com.atlassian.renderer.v2.V2Renderer.render(V2Renderer.java:45)
    	at com.atlassian.renderer.v2.TokenEscapingV2Renderer.render(TokenEscapingV2Renderer.java:28)
    	at com.atlassian.renderer.v2.V2RendererFacade.convertWikiToXHtml(V2RendererFacade.java:87)
    	at com.atlassian.jira.issue.fields.renderer.wiki.AtlassianWikiRenderer.render(AtlassianWikiRenderer.java:58)
    	at com.atlassian.jira.issue.managers.DefaultRendererManager.getRenderedContent(DefaultRendererManager.java:95)
    	...
  • Run the query below to check all comments by character length (this SQL is written for PostgreSQL and may need to be modified depending on the DBMS):

    SELECT ( p.pkey || '-' || i.issuenum ) AS issue,
           a.id,
           char_length(a.actionbody) AS size
    FROM   jiraaction a,
           jiraissue i,
           project p
    WHERE  i.project = p.id
           AND i.id = a.issueid
           AND a.actiontype = 'comment'
    ORDER  BY size DESC;

    (info) This gives the number of characters as the size. Anything over 300,000 characters may need to be reviewed (a combined query filtered to this threshold is sketched after this list).

  • Do the same for large descriptions:

    SELECT ( p.pkey || '-' || i.issuenum ) AS issue,
           i.id,
           char_length(i.description) AS size
    FROM   jiraissue i,
           project p
    WHERE  i.project = p.id
           AND i.description IS NOT NULL
    ORDER  BY size DESC;  
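
As a convenience, the two queries above can be combined and filtered to the 300,000-character threshold mentioned in the note. This is a non-authoritative sketch for PostgreSQL only; the threshold and the 'source' label column are illustrative and can be adjusted as needed:

    -- Sketch: comments and descriptions over ~300,000 characters (PostgreSQL).
    -- Adjust the 300000 threshold to suit your review.
    SELECT ( p.pkey || '-' || i.issuenum ) AS issue,
           'comment' AS source,
           a.id,
           char_length(a.actionbody) AS size
    FROM   jiraaction a
           JOIN jiraissue i ON i.id = a.issueid
           JOIN project p ON p.id = i.project
    WHERE  a.actiontype = 'comment'
           AND char_length(a.actionbody) > 300000
    UNION ALL
    SELECT ( p.pkey || '-' || i.issuenum ) AS issue,
           'description' AS source,
           i.id,
           char_length(i.description) AS size
    FROM   jiraissue i
           JOIN project p ON p.id = i.project
    WHERE  i.description IS NOT NULL
           AND char_length(i.description) > 300000
    ORDER  BY size DESC;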

Cause

This is caused by very large descriptions or comments in issues. The descriptions or comments in these issues are so large that when Jira attempts to read them, it runs out of memory.

Some details on this behaviour (see JRA-28519):

  • Since Jira 5.0.3, comment length can be limited by using jira.text.field.character.limit. However, there is no default value (i.e. no limit on characters), so administrators have to explicitly set a value if they want to limit the number of characters in a text field.
  • Since Jira 7.0, the default value has been set to 32,767 characters.

Solution

  1. If these issues are particularly large, deleting them will resolve the problem. Get the issue ID from the queries above and delete the issue by accessing http://<BASE-URL>/secure/DeleteIssue!default.jspa?id=<Issue ID>. The issue must be deleted through the GUI; otherwise, orphaned records can be left in the database that may continue to cause problems.
  2. If deletion is not possible, the next recommendation would be to delete the individual comments instead. You can extract the comment from the DB and attach it to the ticket as a file to retain the data.
  3. Identify how these issues are being created or how these comments are being entered, and address this to prevent it from happening again.
  4. Set jira.text.field.character.limit to a sensible value. The default is 32,767 characters. You can set it higher if you wish, but keep in mind that the higher it is, the greater the chance of running into this problem. You can set the value using either of two methods:
    1. Via the UI
      1. Navigate to (Settings > System > General Configuration > Advanced Settings)
      2. Search for jira.text.field.character.limit
      3. Modify the value accordingly
        1. There should be a 'Revert' button if this is a custom value. Clicking it will reset the value to the default of 32,767.
    2. Via the jira-config.properties file
      1. Refer to the following article for more information on how to configure this: Edit the jira-config.properties file in Jira server (a sketch of the property entry follows this list).
        (warning) For mail, this limit is only applied in Jira 6.3.12 and higher, as per JRA-38357.
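
As referenced in step 4 above, a minimal sketch of the jira-config.properties entry is shown below. It assumes the file sits in the Jira home directory, as described in the linked article, and uses the 32,767-character default mentioned earlier; choose a value that suits your instance:

    # <jira-home>/jira-config.properties
    # Caps the length of text fields such as comments and descriptions.
    # 32767 is the Jira 7.0+ default; Jira typically needs a restart to pick up changes to this file.
    jira.text.field.character.limit = 32767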

Last modified on November 25, 2022
