Jira is slow or unresponsive with many DEBUG messages in the log

Platform notice: Server and Data Center only. This article only applies to Atlassian products on the Server and Data Center platforms.

Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Summary

Jira becomes unresponsive or very slow, and there are far too many DEBUG messages in one or more of the log files.


Environment

All versions of Jira Core/Software 7 and 8.

All versions of Jira Service Management 3 and 4.


Diagnosis

Check the main log files (primarily access_log, atlassian-jira.log, atlassian-jira-http-dump.log, and atlassian-jira-sql.log) to see if there are out-of-order log messages:

$ cat atlassian-jira.log | cut -d" " -f-2
2020-11-12 10:50:54,684-0300
2020-11-12 10:50:54,684-0300
2020-11-12 10:50:54,705-0300
2020-11-12 10:50:54,684-0300
2020-11-12 10:50:54,705-0300
2020-11-12 10:50:54,705-0300
2020-11-12 10:50:54,705-0300
2020-11-12 10:50:54,705-0300
2020-11-12 10:50:54,705-0300
2020-11-12 10:50:54,684-0300

Notice that entries from millisecond 684 are written after entries from millisecond 705. Timestamps should always appear in ascending order.
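
As a quick yes/no check, you can also pipe the extracted timestamps through sort -c, which reports the first entry that breaks ascending order and prints nothing if the file is already ordered. This is a sketch assuming a GNU/Linux environment; the grep keeps only lines that begin with a timestamp, since stack traces span several lines:

$ grep -E -a "^20[0-9]{2}-[0-9]{2}" atlassian-jira.log | cut -d" " -f-2 | sort -c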


The following command, for Linux or other Unix-like systems, prints the out-of-order lines and the total count:

$ grep -E -a -h "^20[0-9]{2}-[0-9]{2}" application-logs/atlassian-jira.log | awk 'BEGIN {prevdate=$1" "$2; prevline=$0; ordercount=0}; {currdate=$1" "$2; currline=$0; if (prevdate > currdate) {{printf "%s\n%s\n\n", substr(prevline, 1, 100)"...", substr(currline, 1, 100)"..."}; ordercount++}; prevdate=currdate; prevline=currline}; END {print "Number of out-of-order entries: "ordercount}'

Sample output

2024-03-20 14:56:06,070+0000 https-jsse-nio-8443-exec-47 url: /browse/JIRA-123456, /secure/ProjectIs...
2024-03-20 14:56:06,069+0000 Caesium-1-4 DEBUG ServiceRunner     [c.o.scriptrunner.runner.ScriptBind...

2024-03-20 15:46:19,573+0000 https-jsse-nio-8443-exec-34 url: /rest/scriptrunner/behaviours/latest/v...
2024-03-20 15:46:19,572+0000 https-jsse-nio-8443-exec-122 url: /rest/scriptrunner/behaviours/latest/...

2024-03-20 15:58:50,748+0000 https-jsse-nio-8443-exec-123 url: /rest/wrm/2.0/resources; user: admini...
2024-03-20 15:58:50,747+0000 https-jsse-nio-8443-exec-146 url: /rest/scriptrunner/behaviours/latest/...

Number of out-of-order entries: 3


The following command groups log entries by verbosity level and package, then prints the top 10:

$ grep -a -E " (DEBUG|INFO|WARN|ERROR|SEVERE|FATAL) " -h atlassian-jira.log* | awk '{match($0, / (DEBUG|INFO|WARN|ERROR|SEVERE|FATAL) /); my_level=substr($0, RSTART+1, RLENGTH-2); match($0, /\[[a-zA-Z0-9\-_ \.]+\]/); my_package=substr($0, RSTART, RLENGTH); print my_level, my_package}' | sort | uniq -c | sort -nr | column -t | head -10

86455  DEBUG  [c.o.scriptrunner.runner.ScriptBindingsManager]
46103  DEBUG  [c.o.jira.groovy.Behaviour]
43116  WARN   [c.o.scriptrunner.runner.ScriptBindingsManager]
28296  WARN   [c.a.j.plugin.editor.RteDataProvider]
23922  DEBUG  [c.o.jira.behaviours.BehaviourManagerImpl]
4557   DEBUG  [c.o.j.groovy.jql.ScriptedFunctionClauseFactory]
3034   ERROR  [c.o.scriptrunner.customfield.GroovyCustomField]
2459   INFO   [c.a.j.p.webhooks.matcher.JqlEventMatcher_SLOW]
704    WARN   [c.a.j.issue.index.DefaultIssueIndexer]
672    INFO   [c.a.jira.index.QueueingIndexStats]

You can increase the number in head -10 to show more top entries.

In this example, we see ScriptRunner packages (com.onresolve) logging at DEBUG level.
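
To gauge how much of the overall log volume is made up of DEBUG entries, a simple per-file count is often enough. This is a sketch; adjust the file glob to the logs you collected, and note that grep prints one file:count pair per file when given several files:

$ grep -a -c " DEBUG " atlassian-jira.log*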


Cause

I/O operations like writing to files are exclusively locked, so only one thread at a time can write to a given file. When something causes too much concurrency on writes to a log file, threads may block too often, and the application hangs or runs slower than usual.
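
One way to confirm this contention is to take a thread dump while Jira is slow and count the threads sitting inside the logging framework. The sketch below assumes jstack is available on the server, that there is a single Jira JVM on the host, and that Jira 7/8 is running log4j 1.x, whose synchronized Category.callAppenders method is a well-known choke point; many BLOCKED threads in that frame point at excessive logging:

# Take a thread dump of the Jira JVM (the PID lookup is an assumption; adjust to your setup)
$ JIRA_PID=$(pgrep -f jira | head -1)
$ jstack "$JIRA_PID" > /tmp/jira-threaddump.txt
# Count threads currently inside log4j's synchronized callAppenders method
$ grep -c "org.apache.log4j.Category.callAppenders" /tmp/jira-threaddump.txt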

This is most often because DEBUG logging was enabled for some package in Jira, or because some process started logging far too much information, in which case the flood may consist of INFO or WARN messages instead of DEBUG.

The timestamp written to each log entry is captured when the message is generated, normally just before it is written to the file. When a message with an earlier timestamp appears after one with a later timestamp, it means threads were blocked for a noticeable time waiting to write to the log, which is a sign of the contention that makes the system hang or slow down.


Solution

If you have recently enabled DEBUG logging or profiling on the Logging and Profiling admin page, revert the change or restart Jira so any logging changes made through the UI are reverted.

If you have recently changed the log4j configuration file to increase the verbosity of any package, revert that edit as well and restart Jira.

If no recent changes were made, or a restart didn't fix the problem, lower the log level for the specific package that is logging so many messages and check whether that resolves the issue.
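
For example, in Jira 7 and 8 (which ship log4j 1.x) a persistent change is made in atlassian-jira/WEB-INF/classes/log4j.properties under the Jira installation directory. The snippet below is a sketch that assumes the noisy package is com.onresolve, as in the diagnosis above; reuse whatever appender names (such as console and filelog) your existing file already references, and restart Jira afterwards:

# Lower the ScriptRunner packages from DEBUG to WARN
log4j.logger.com.onresolve = WARN, console, filelog
log4j.additivity.com.onresolve = false

For a temporary change that does not require a restart, use Administration > System > Logging and Profiling in the UI instead; as noted above, changes made there do not survive a restart.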
