Jira Server crashes due to a GC overhead limit exceeded error


Platform notice: Server and Data Center only. This article only applies to Atlassian products on the Server and Data Center platforms.

Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Problem

The JIRA application starts performing very slowly, or hangs completely.

The following appears in the atlassian-jira.log:

Exception in thread "Thread-142" java.lang.OutOfMemoryError: GC overhead limit exceeded

Cause

This error indicates that the JVM took too long to free up memory during its GC process, and it can be thrown from the Serial, Parallel or Concurrent collectors. It is often accompanied by high CPU usage, as the JVM constantly attempts to garbage collect, which is resource-intensive. This can lead to JIRA applications becoming unresponsive, and in the worst cases the entire server can become unresponsive (affecting all applications on that server).
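To confirm that the CPU time is going to garbage collection rather than application work, you can sample the collectors with the JDK's jstat tool. A minimal sketch, assuming the JDK binaries are on the PATH and that <jira-pid> is a placeholder for the process ID of the Jira JVM:

jstat -gcutil <jira-pid> 5000

This prints one line of collector statistics every 5 seconds. If the old-generation occupancy (the O column) stays near 100% while the full-GC count and total GC time (the FGC and GCT columns) climb steadily, the JVM is spending most of its time collecting, which matches this error.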

The parallel collector will throw an OutOfMemoryError (OOME) if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.
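As an illustration, on a Linux installation this flag is typically added to the JVM arguments in <jira-install>/bin/setenv.sh. The variable name below is the one used in Jira's stock setenv.sh, but check your own file before editing, and merge the flag with any arguments already set there; treat this as a sketch, not a drop-in change:

# In <jira-install>/bin/setenv.sh — disable the GC overhead limit check
JVM_SUPPORT_RECOMMENDED_ARGS="-XX:-UseGCOverheadLimit"

Keep in mind that disabling the check does not fix the underlying memory shortage; it only trades the early OutOfMemoryError for a JVM that keeps collecting at full CPU.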

This kind of OutOfMemoryError can also occur when user requests exhaust the available resources in the JVM. When this happens, performance degrades severely; the application may eventually recover on its own, but a restart is usually required.

Solution

  1. Enable garbage collection logging, as described in Troubleshoot Jira Server performance with GC logs (see the setenv.sh sketch after this list).
  2. Restart the application as soon as possible. This is mandatory, as the JVM is in an unexpected state after an OOME is thrown.
  3. Monitor the application's memory usage during peak periods and increase the memory as needed, as described in our Increasing JIRA application memory documentation (the sketch after this list shows where the heap is set).
  4. Verify that the instance has enough memory (total heap) to operate.
    1. When allocating memory to a JVM, more memory does not always equate to a better experience. If a third-party plugin is misbehaving, or JIRA applications are hitting behavior that causes memory use to increase drastically, adding memory to the JVM can be detrimental and often addresses the symptom rather than the root cause. A larger heap also leads to longer GC times, which can leave instances "frozen" while GC runs, sometimes for up to 10 seconds (or longer in certain cases).
  5. Ensure all the JIRA application plugins are up to date. It is often worth disabling them all with Safe Mode and testing the stability of the JIRA application instance, as it is quite possible that one or more of them is causing the memory issues.
  6. Disable the default, scheduled XML backup job and move to a native backup strategy.
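As a combined illustration of steps 1 and 3 above, the sketch below shows where GC logging and the heap size are usually configured on a Linux installation. The variable names are the ones found in Jira's stock <jira-install>/bin/setenv.sh, the log path and the 2 GB heap are placeholder values, and the -Xlog syntax assumes Java 11 or later; adapt all of these to your environment:

# In <jira-install>/bin/setenv.sh — illustrative values only
JVM_MINIMUM_MEMORY="2048m"    # becomes -Xms (initial heap)
JVM_MAXIMUM_MEMORY="2048m"    # becomes -Xmx (maximum heap)
# Rotating GC log using Java 11+ unified logging
JVM_SUPPORT_RECOMMENDED_ARGS="-Xlog:gc*:file=/opt/atlassian/jira/logs/gc.log:time,uptime:filecount=5,filesize=20M"

Restart Jira after editing setenv.sh, then compare the GC pause times in the log before and after any heap change, keeping the warning in step 4 about oversized heaps in mind.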


If you are running multiple applications in one Tomcat container, split them into separate Tomcat instances. Please see our Deploying Multiple Atlassian Applications in a Single Tomcat Container documentation for further information.

Last modified on November 15, 2024
