SSH Fails Intermittently due to upload-pack: Resource temporarily unavailable

About the platform: Server and Data Center only. This article applies only to Atlassian products on the Server and Data Center platforms.

Support for Server* products ended on February 15, 2024. If you are running a Server product, visit the Atlassian Server end of support announcement to review your migration options.

*Excluding Fisheye and Crucible


  • SSH git clone fails intermittently while HTTP(S) works as normal.
  • This is noticed after an upgrade to Stash 3.6.0 - 3.10.x.

The following appears in the catalina.log

30-Jun-2015 04:38:43.453 SEVERE [ajp-nio-]$SocketProcessor.doRun 
 java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(

The following appears in the atlassian-stash.log

2015-06-29 05:17:36,139 ERROR [ajp-nio-] xxxx @xxxx "GET /scm/something/something.git/info/refs HTTP/1.1" c.a.s.i.s.g.p.h.GitSmartExitHandler something/something_shared[292]: Read request from failed: com.atlassian.utils.process.ProcessException: Non-zero exit code: 255
The following was written to stderr:
error: cannot fork() for git-http-backend: Resource temporarily unavailable 



  • Large enterprise server with an environment similar to the following:


Diagnostic Steps

  • Enabling git debug as below will show the debug logging getting stuck while git tries the .ssh identity file/cert:

    export GIT_TRACE_PACKET=1
    export GIT_TRACE=1
    git clone ssh://admin@<>/something/something.git
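Since the root cause is the JVM failing to create native threads, it can help to compare the Stash JVM's live thread count against the OS limits that gate thread creation. A minimal sketch (the `pgrep -f stash` pattern is an assumption; adjust it to match your process name):

```shell
# System-wide thread ceiling (Linux only)
if [ -r /proc/sys/kernel/threads-max ]; then
    cat /proc/sys/kernel/threads-max
fi

# Per-user process/thread limit for the current shell
ulimit -u

# Native thread count of the Stash JVM, if one is running
# (process pattern "stash" is an assumption for this environment)
STASH_PID=$(pgrep -f stash | head -n 1)
if [ -n "$STASH_PID" ]; then
    ps -o nlwp= -p "$STASH_PID"
fi
```

If the JVM's thread count is close to either limit, the "unable to create new native thread" errors above are expected.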


The default maximum thread count on Stash 3.10.x is too high for an enterprise server with many hyper-threaded cores. This change was made in STASH-6868 to fix insufficient threads. However, it becomes an issue when the internal thread pool can grow up to 100 + 20 * 64 = 1380 threads (on a 64-CPU machine), at which point the server can no longer create threads because it has run out of (virtual) memory.

The following are the default set per Stash version:

3.11.0: executor.max.threads=${scaling.concurrency}
3.6.0 - 3.10.x: executor.max.threads=100+20*${scaling.concurrency} (where ${scaling.concurrency} is the number of CPUs)
2.11.x: executor.max.threads=100
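The 3.6.0 - 3.10.x formula can be checked with simple arithmetic; for the 64-CPU example above:

```shell
# 100 + 20 * ${scaling.concurrency}, with scaling.concurrency = 64 CPUs
echo $((100 + 20 * 64))   # prints 1380
```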



This is a known bug; as a workaround, reconfigure the default value of executor.max.threads:

  • Shutdown Stash
  • Add the following in

  • Startup Stash
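The exact file to edit was elided above; as an assumption commonly used for Stash, the property can be set in stash-config.properties under the Stash home directory. A minimal sketch:

```
# <STASH_HOME>/stash-config.properties (path is an assumption; adjust as needed)
# Cap the thread pool at the pre-3.6 default of 100
executor.max.threads=100
```

The value 100 matches the 2.11.x default listed above; any fixed value well below your OS thread limit would serve the same purpose.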


This bug has been fixed in Stash 3.11.0; see STASH-7616.

Last modified on Mar 30, 2016

