Jira server throws OutOfMemoryError: unable to create new native thread


The JIRA application occasionally crashes and throws an OutOfMemoryError such as the following in catalina.out, stdout, or atlassian-jira.log:

Exception in thread "http-bio-8506-exec-106" java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:691)
	at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:943)
	at java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:992)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at java.lang.Thread.run(Thread.java:722)


The above error message will be present in the logs. Several different error messages are associated with an OutOfMemoryError; this one can be identified by the "unable to create new native thread" message. Other variants are covered in separate knowledge base articles.


To provide concurrency (the ability to do multiple things at once), Java spawns operating system threads and uses them to perform tasks. The operating system can impose hard limits on the number of threads, so if the application requests more threads than the OS is willing to provide, the above error is thrown. This occurs in the following way:

  1. A new Java thread is requested by the JIRA application. Any part of the application can trigger this.
  2. The JVM's native code proxies the request to create a new native thread to the operating system.
  3. The operating system attempts to create a new native thread, which requires native memory to be allocated (for example, for the thread's stack).
  4. The operating system refuses the native memory allocation.
  5. The java.lang.OutOfMemoryError: unable to create new native thread error is thrown.

This can also happen if the operating system has no native memory left for new threads (for example, the 32-bit Java process address space has been exhausted, or the OS virtual memory is fully depleted), or if the maximum number of open files has been reached.
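Since the error reflects an OS-level resource being exhausted, a first diagnostic step is to compare the process's thread count against the current limits. A minimal sketch for Linux (the current shell's PID, $$, stands in for the Jira JVM's process ID):

```shell
# Number of threads (lightweight processes) used by a process.
# Substitute the Jira JVM's process ID for $$.
ps -o nlwp= -p $$

# Per-user limits for the current shell; threads count against -u:
ulimit -u   # max user processes
ulimit -n   # max open file descriptors

# System-wide ceiling on the number of threads (Linux):
cat /proc/sys/kernel/threads-max
```

If the thread count is close to `ulimit -u` (or to the system-wide ceiling), the workarounds below apply.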

Another possible cause is that the systemd parameter DefaultTasksMax has been set at the system level, as it limits the number of threads a service can create.

(warning) In certain cases on Solaris it appears that this can also cause the Java application to completely crash and generate a core dump.

Workaround 1:

  1. Edit the $JIRA_INSTALL/bin/setenv.sh file and add the below to the top of the file:

    ulimit -u 16384
    ulimit -n 16384

    (info) These values may be different depending upon the operating system used, consult with your System Administrator / hosting provider about the most appropriate limits to set.

  2. You can check the current DefaultTasksMax setting by running:

    systemctl show --property DefaultTasksMax
  3. If your DefaultTasksMax value is not 65535, uncomment the DefaultTasksMax line in /etc/systemd/system.conf and set the value to 65535.
  4. Restart Jira.
  5. You can verify the change by checking /proc/<pid>/limits, where <pid> is the application's process ID.

It's recommended to set these values permanently, as described in the resolution below.
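The verification in step 5 can be sketched as follows (Linux; the current shell's PID, $$, is used as a stand-in for the Jira process ID):

```shell
# Show the effective process and open-file limits of a running process.
# Substitute the Jira JVM's process ID for $$.
grep -E "Max (processes|open files)" /proc/$$/limits
```

The "Soft Limit" column should reflect the ulimit values set in setenv.sh.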

Workaround 2:

If you're running Jira as a systemd service:

  1. Stop the Jira Service
  2. Increase the LimitNOFILE value in the service definition file

  3. Reload the systemd daemon:
    systemctl daemon-reload
  4. Start the Jira Service
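A sketch of such a service definition, assuming a unit file such as /etc/systemd/system/jira.service (the path and the values shown are illustrative; adjust them to your installation):

```
[Service]
# Raise the per-service limits; these match the ulimit values used in Workaround 1.
LimitNOFILE=16384
LimitNPROC=16384
# If DefaultTasksMax is constraining the service, TasksMax can also be raised per unit.
TasksMax=65535
```

Per-unit settings such as these take precedence over the DefaultTasksMax value in /etc/systemd/system.conf.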


Setting the maximum number of user processes and open files permanently is recommended; how to do this is operating-system-specific. On Debian / Ubuntu these can be set in /etc/security/limits.conf by adding the below:

jira      soft   nofile  16384
jira      hard   nofile  32768
jira      soft   nproc   16384       
jira      hard   nproc   32768 

(info) Replace jira with the user that runs JIRA.
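To confirm the limits.conf changes took effect, start a fresh login session as the service user and inspect the effective limits; a minimal sketch:

```shell
# Run these in a new login session for the user that runs Jira.
ulimit -Sn; ulimit -Hn   # open files (soft, hard)
ulimit -Su; ulimit -Hu   # user processes (soft, hard)
```

The soft and hard values should match the nofile and nproc entries added to limits.conf.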

Similarly, if your DefaultTasksMax value is not 65535, uncomment the DefaultTasksMax line in /etc/systemd/system.conf and set the value to 65535.

Last modified on Jun 27, 2022
