Bitbucket Data Center node hangs on startup - Initializing spring context




Platform notice: Server, Data Center, and Cloud (as applicable) - This article is written for the Atlassian Server and Data Center platforms, but Atlassian Cloud customers may also find its content applicable. If you believe the steps described in this article would help you, please contact Atlassian Support and reference this article.


Bitbucket Data Center node(s) do not start up, with startup stuck on the "Initializing spring context" step. Neither atlassian-bitbucket.log nor catalina.out shows any particular errors.

(info) As of Bitbucket Server 5.x, catalina.out no longer exists; its contents are written to atlassian-bitbucket.log instead. 



  • Bitbucket Data Center using shared NFS storage.
  • File creation/modification on the NFS storage works as expected.


  • While startup is stuck, take thread dumps to identify why the startup hangs.
  • If the following thread is seen among the long-running threads, you may be encountering the problem outlined below: 

    "spring-startup" #18 daemon prio=5 tid=0x00007f4511e75000 nid=0x5afe runnable [0x00007f45a13dd000]
       java.lang.Thread.State: RUNNABLE
    	at Method)
    	at com.atlassian.stash.internal.home.HomeLock.acquireLock(
    	at com.atlassian.stash.internal.home.HomeLock.lock(
    	at com.atlassian.stash.internal.home.HomeLockAcquirer.lock(
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
    	at java.lang.reflect.Method.invoke(
  • You can use a small C program to validate NFS locking. Put the C code outlined below into a file and compile it, then move the resulting binary to the NFS share on the client (application node) and run it. It should finish in less than a second and create testfile, which it uses to test file locking. If this fails, locking almost certainly doesn't work.


      Put the following contents into a file named nfs_lock.c and compile it with either make nfs_lock or gcc -o nfs_lock nfs_lock.c, then copy the resulting nfs_lock binary to the NFS share on the client (application node) and run it:

      #include <stdio.h>
      #include <fcntl.h>
      #include <unistd.h>

      int main(void) {
          int fd, ret;
          struct flock strlock;

          fd = open("testfile", O_CREAT | O_RDWR, 0666);
          if (fd == -1) {
              printf("FATAL ERROR: Could not open file\n");
              return 1;
          }
          /* Set a write lock on the first 1024 bytes of the file.
             It doesn't matter that they don't actually exist yet. */
          strlock.l_type = F_WRLCK;
          strlock.l_whence = SEEK_SET;
          strlock.l_start = 0L;
          strlock.l_len = 1024L;
          ret = fcntl(fd, F_SETLK, &strlock);
          if (ret == -1) {
              printf("FATAL ERROR: Could not lock file\n");
              return 1;
          }
          ret = close(fd);
          if (ret == -1) {
              printf("FATAL ERROR: Could not close file\n");
              return 1;
          }
          return 0;
      }



The above thread indicates that Bitbucket fails to acquire a lock on the shared storage (NFS). The causes include, but are not limited to, the following:

  • NFS daemons (lockd, nfsd, mountd, etc.) are not running on either the client or the server side
  • NFS daemons are hung or have problems communicating
  • Permission problems on the NFS share
  • A firewall is blocking connections for any of the daemons listed above. Check which ports the required daemons run on by executing rpcinfo -p on the NFS server, and make sure that the relevant TCP and UDP ports are accepting connections.


Make sure that all the NFS daemons are running and accessible from the client's perspective. As a troubleshooting step, try disabling the firewall on both the NFS server and the client(s), then test NFS locking manually and ensure it works as expected. Please note that this error is very likely caused by an environmental issue rather than by Bitbucket Data Center not operating as expected.



Last modified on November 2, 2018

