SSH Fails Intermittently due to upload-pack: Resource temporarily unavailable
Platform notice: Server and Data Center only. This article only applies to Atlassian products on the Server and Data Center platforms.
Support for Server* products ended on February 15, 2024. If you are using a Server product, visit the Atlassian Server end of support announcement page to review your migration options.
*Except Fisheye and Crucible
Problem
- SSH git clone operations fail intermittently, while HTTP(S) works as normal.
- This is noticed after an upgrade to Stash 3.6.0 - 3.10.x.
The following appears in catalina.log:
30-Jun-2015 04:38:43.453 SEVERE [ajp-nio-127.0.0.1-8009-exec-11] org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
The following appears in atlassian-stash.log:
2015-06-29 05:17:36,139 ERROR [ajp-nio-127.0.0.1-8009-exec-31] xxxx @xxxx 127.0.0.1 "GET /scm/something/something.git/info/refs HTTP/1.1" c.a.s.i.s.g.p.h.GitSmartExitHandler something/something_shared[292]: Read request from 127.0.0.1 failed: com.atlassian.utils.process.ProcessException: Non-zero exit code: 255
The following was written to stderr:
error: cannot fork() for git-http-backend: Resource temporarily unavailable
Diagnosis
Environment
A large enterprise server with an environment similar to the following:
<cpus>64</cpus>
Diagnostic Steps
Enabling Git debug logging as below shows the clone stuck while trying to read the identity file (SSH key):
export GIT_TRACE_PACKET=1
export GIT_TRACE=1
git clone ssh://admin@<mystash.com>/something/something.git
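When the OutOfMemoryError above appears, it is also worth checking how close the Stash JVM is to the OS per-user thread limit. A minimal sketch on Linux, assuming the Stash process can be matched by name (the pgrep pattern is an assumption; adjust it for your install):

```shell
# Count the native threads held by the Stash JVM and compare against the
# per-user process/thread limit; "unable to create new native thread"
# means one of these limits, or virtual memory, is exhausted.
STASH_PID=$(pgrep -f 'stash' | head -n 1)   # hypothetical match pattern
ps -o nlwp= -p "$STASH_PID"                 # current native thread count
ulimit -u                                   # max user processes/threads
```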
Cause
The default maximum thread count on Stash 3.6.0 - 3.10.x is too high for an enterprise server with many hyper-threaded cores. This change was made in BSERV-6868 (Default executor.max.threads may be too small for larger instances; status: Closed) to fix insufficient threads. However, it becomes a problem on large hosts: on the 64-CPU server above, the internal thread pool can grow to 100 + 20 * 64 = 1380 threads, at which point the server can no longer create new threads because it has run out of (virtual) memory.
The following are the defaults per Stash version, where ${scaling.concurrency} is the number of CPUs:
3.11.0: executor.max.threads=${scaling.concurrency}
3.6.0 - 3.10.x: executor.max.threads=100+20*${scaling.concurrency}
2.11.x: executor.max.threads=100
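To see why the 3.6.0 - 3.10.x formula explodes on large hosts, the defaults can be evaluated for the 64-CPU server in this article (a sketch; the CPU count is hard-coded here for illustration):

```shell
cpus=64   # scaling.concurrency defaults to the number of CPUs
# Stash 3.6.0 - 3.10.x default: 100 + 20 * cpus
echo "3.6.0-3.10.x: $((100 + 20 * cpus)) threads"   # prints 1380 threads
# Stash 3.11.0 default: just the CPU count
echo "3.11.0:       $cpus threads"                  # prints 64 threads
```

At 1380 threads, each thread's stack alone (typically 1 MB) consumes well over a gigabyte of virtual memory before any heap is counted.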
Workaround
This is a known bug; the workaround is to reconfigure executor.max.threads away from its default:
- Shut down Stash
- Add the following to stash-config.properties, setting the value to twice the number of CPUs (for example, 2 * 64 = 128 on the server above):
executor.max.threads=128
- Start up Stash
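The workaround steps above can be sketched as a shell snippet. The STASH_HOME path and the getconf query are assumptions for your environment, and Stash must be stopped before editing the file:

```shell
STASH_HOME=/var/atlassian/application-data/stash   # assumed home directory
cpus=$(getconf _NPROCESSORS_ONLN)                  # number of online CPUs
# Cap the executor pool at twice the CPU count (e.g. 128 on a 64-CPU host)
echo "executor.max.threads=$((2 * cpus))" >> "$STASH_HOME/stash-config.properties"
```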
Resolution
This bug has been fixed in Stash 3.11.0, where the default was reduced to executor.max.threads=${scaling.concurrency} (see the version defaults above).