Stash is now known as Bitbucket Server.
See the Bitbucket Server version of this page, or visit the Bitbucket Server documentation home page.


This page...

... describes how to migrate an existing instance of Stash to Stash Data Center.

For an overview, see Bitbucket Data Center resources.

If you are installing Stash server...

... go straight to Getting started, instead.

We also recommend reading Using Stash in the enterprise.

If you just want to add another node...

... we suggest you take a look at Adding cluster nodes to Stash Data Center.

This guide assumes that you already have a production instance of Stash, and that you are aiming to migrate that to a Stash Data Center instance. 

We recommend that you:

  • Initiate the purchase of a Stash Data Center license by contacting us at https://www.atlassian.com/enterprise/contact.
  • Set up and test Stash Data Center in your staging environment, before deploying to a production environment.
  • Upgrade Stash, and then make a backup of your production instance of Stash. 
  • Restore a copy of this backup into your clustered staging environment.  
  • Test Stash Data Center with identical data (repositories, users, add-ons) to your production instance. 

Regardless of the process you use, please smoke test your Stash Data Center instance every step of the way. 


Overview and requirements

It's worth getting a clear understanding of what you're aiming to achieve, before starting to provision your Stash Data Center.

A Stash Data Center instance consists of a cluster of dedicated machines connected like this:

Stash Data Center Architecture

The URL of the Stash Data Center instance will be the URL of the load balancer, so this is the machine to which you will need to assign the name of your Stash instance in DNS. 

The remaining machines (Stash cluster nodes, shared database, and shared file system) do not need to be publicly accessible to your users. 

Stash cluster nodes

The Stash cluster nodes all run the Stash Data Center web application. 

  • Each Stash cluster node must be a dedicated machine.
  • The machines may be physical or virtual. 
  • The cluster nodes must be connected in a high speed LAN (that is, they must be physically in the same data center). 
  • The usual Stash supported platforms requirements, including those for Java and Git, apply to each cluster node.
  • The cluster nodes do not all need to be identical, but for consistent performance we recommend they be as homogeneous as possible.
  • All cluster nodes must run the same version of Stash Data Center.
  • All cluster nodes must have synchronized clocks (for example, using NTP) and be configured with the identical timezone.

Load balancer

You can use the load balancer of your choice. Stash Data Center does not bundle a load balancer.

  • Your load balancer should run on a dedicated machine.
  • Your load balancer must have a high-speed LAN connection to the Stash cluster nodes (that is, it must be physically in the same data center). 
  • Your load balancer must support both HTTP mode (for web traffic) and TCP mode (for SSH traffic). 
  • Terminating SSL (HTTPS) at your load balancer and running plain HTTP from the load balancer to Stash is highly recommended for performance. 
  • Your load balancer must support session affinity ("sticky sessions").

If you don't have a preference for your load balancer, we provide instructions for HAProxy, a popular open source software load balancer. 

Shared database

You must run Stash Data Center on an external database. You cannot use Stash's internal HSQL database with Stash Data Center. 

  • The shared database must run on a dedicated machine. 
  • The shared database must be reachable from all cluster nodes via a high-speed LAN (it must be in the same physical data center). 
  • All the usual database vendors in Stash's supported platforms are supported by Stash Data Center, with one exception: we do not recommend MySQL at this time due to inherent deadlocks that can occur in this database engine at high load. 

Shared file system

Stash Data Center requires a high performance shared file system such as a SAN, NAS, RAID server, or high-performance file server optimized for I/O. 

  • The shared file system must run on a dedicated machine. 
  • The file system must be reachable from all cluster nodes via a high-speed LAN (it must be in the same physical data center). 
  • The shared file system must be accessible via NFS as a single mount point. 

What is stored on the shared file system?

  • configuration files
  • the data directory, which includes:
    • repositories
    • attachments
    • avatars
  • plugins

What is stored locally on each node?

  • caches
  • logs
  • temporary files

 

1. Upgrade your existing production instance of Stash

Begin by upgrading your production Stash Server instance to the latest public release. This is necessary for several reasons:

  • The Stash database and home directory layout often change in each release of Stash. Upgrading first will ensure that your production Stash Server instance and your Stash Data Center instance share identical data format, and you can switch between them at will. 
  • Any add-ons in your production instance can be verified as compatible with the latest release of Stash (or updated if not). 
  • Any performance or other comparisons between single-node Stash Server and multi-node Stash Data Center will be more meaningful. 

Upgrade your Stash Server by following the instructions in the Stash upgrade guide.

 

2. Back up your production instance

Now, take a backup of your production Stash instance's database and home directory. For this you can:

  • use the Stash backup client,
  • use your own DIY backup solution, or
  • just stop Stash and manually dump your database, and zip up the home directory. 
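
For example, a minimal manual approach on Linux, assuming a PostgreSQL database named stash and the default home directory location (adjust the scripts, database name, and paths for your environment), might look like this:

<Stash installation directory>/bin/stop-stash.sh                                # stop Stash so the backup is consistent
pg_dump -U stash -Fc stash > /backups/stash-db.dump                            # dump the Stash database
tar -czf /backups/stash-home.tar.gz -C /var/atlassian/application-data stash   # archive the Stash home directory
<Stash installation directory>/bin/start-stash.sh                              # start Stash again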

 

3. Provision your shared database

Set up your shared database server. Note that clustered databases are not yet supported.

See Connecting Stash to an external database for more information.

You must ensure your database is configured to allow enough concurrent connections. Stash by default uses up to 80 connections per cluster node, which can exceed the default connection limit of some databases.

For example, in PostgreSQL the default limit is usually 100 connections. If you use PostgreSQL, you may need to edit your postgresql.conf file, to increase the value of max_connections, and restart Postgres.
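
For instance, for a three-node cluster (3 × 80 = 240 connections, plus some headroom) a setting along these lines would be a reasonable starting point; the exact value depends on your node count and on anything else that uses the same database:

postgresql.conf
max_connections = 300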

We do not support MySQL for Stash Data Center at this time due to inherent deadlocks that can occur in this database engine at high load. If you currently use MySQL, you should migrate your data to another supported database (such as PostgreSQL) before upgrading your Stash Server instance to Stash Data Center. You can migrate databases (on a standalone Stash instance) using the Migrate database feature in Stash's Administration pages, or by using the Stash backup client.

 

 

4. Provision your shared file system

Set up your shared file server.

See Stash Data Center FAQ for performance guidelines when using NFS.

You must ensure your shared file system server is configured with enough NFS server processes.

For example, some versions of RedHat Enterprise Linux and CentOS have a default of 8 server processes. If you use one of these systems, you may need to edit your /etc/sysconfig/nfs file, increase the value of RPCNFSDCOUNT, and restart the nfs service.
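
For instance, the relevant line in /etc/sysconfig/nfs could be raised as follows (32 is an illustrative value only; size it to your workload), followed by a restart of the nfs service:

/etc/sysconfig/nfs
RPCNFSDCOUNT=32

sudo service nfs restart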

You must ensure your shared file system server has the NFS lock service enabled. For example:

  • In some versions of Ubuntu Linux you must ensure that the portmap and dbus services are enabled for the NFS lockd to function.
  • In some versions of RedHat Enterprise Linux and CentOS, you must install the nfs-utils and nfs-utils-lib packages, and ensure the rpcbind and nfslock services are running.
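
For example, on RedHat Enterprise Linux or CentOS, a rough sketch of installing the packages, starting the services, and exporting the shared home directory to your cluster nodes might look like the following (the /stash-shared path, network range, and export options are illustrative only):

sudo yum install nfs-utils nfs-utils-lib     # NFS server packages
sudo service rpcbind start                   # required by the NFS lock service
sudo service nfslock start
sudo service nfs start

/etc/exports
/stash-shared   192.168.0.0/24(rw,sync,no_subtree_check)

sudo exportfs -ra                            # apply the export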

Create a Stash user account (recommended name atlstash) on the shared file system server to own everything in the Stash shared home directory.  This user account must have the same UID on all cluster nodes and the shared file system server.  In a fresh Linux install the UID of a newly created account is typically 1001, but in general there is no guarantee that this UID will be free on every Linux system.  Choose a UID for atlstash that's free on all your cluster nodes and the shared file system server, and substitute this for 1001 in the following command:

sudo useradd -c "Atlassian STASH" -u 1001 atlstash

You must ensure that the atlstash user has the same UID on all cluster nodes and the shared file system server.
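
A quick way to confirm this is to run the following command on the shared file system server and on every cluster node, and check that the output is identical everywhere:

id -u atlstash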

Then restore the backup you have taken in step 2 into the new shared database and shared home directory.

Only the shared directory in the Stash home directory needs to be restored into the shared home directory. The remaining directories (bin, caches, export, lib, log, plugins, and tmp) contain only caches and temporary files, and do not need to be restored. 
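
For example, if the backup of your old home directory is unpacked at /backups/stash-home and the NFS export is mounted at /stash-shared on the file server, a copy along these lines restores just the shared part (both paths are purely illustrative):

sudo rsync -a /backups/stash-home/shared/ /stash-shared/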

You must ensure that the user running Stash (usually atlstash) is able to read and write everything in the Stash home directory, both the node-local part and the shared part (in NFS). The easiest way to do this is to ensure that:

  1. atlstash owns all files and directories in the Stash home directory,
  2. atlstash has the recommended umask of 0027, and
  3. atlstash has the same UID on all machines.
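
A minimal sketch of how the first two points might be put in place on the file server (the /stash-shared path is illustrative, and where you set the umask depends on how you launch Stash):

sudo chown -R atlstash:atlstash /stash-shared     # atlstash owns everything in the shared home directory
echo "umask 0027" >> ~atlstash/.profile           # recommended umask for the Stash user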

Do not run Stash as root. Many NFS servers squash accesses by root to another user.

 

 

5. Provision your cluster nodes

  1. We highly recommend that you provision your cluster nodes using an automated configuration management tool such as Chef, Puppet, or Vagrant, or by spinning up identical virtual machine snapshots. 

  2. On each cluster node, mount the shared home directory as ${STASH_HOME}/shared. For example, suppose your Stash home directory is /var/atlassian/application-data/stash, and your shared home directory is available as an NFS export called stash-san:/stash-shared. Add the following line to /etc/fstab on each cluster node:

    /etc/fstab
    stash-san:/stash-shared /var/atlassian/application-data/stash/shared nfs nfsvers=3,lookupcache=pos,noatime,intr,rsize=32768,wsize=32768 0 0

    Only the ${STASH_HOME}/shared directory should be shared between cluster nodes. All other directories, including ${STASH_HOME}, should be node-local (that is, private to each node).

    Stash Data Center checks during startup that ${STASH_HOME} is node local and ${STASH_HOME}/shared is shared, and will fail to form a cluster if this is not true.

    Your shared file system must provide sufficient consistency for Stash and Git.

    Linux NFS clients require the lookupcache=pos mount option to be specified for proper consistency.

    NFSv4 may have issues in Linux kernels from about version 3.2 to 3.8 inclusive. The issues may cause very high load average, processes hanging in "uninterruptible sleep", and in some cases may require rebooting the machine. We recommend using NFSv3 unless you are 100% sure that you know what you're doing and your operating system is free from such issues.

    Linux NFS clients should use the nfsvers=3 mount option to force NFSv3.

    Then mount it:

    mkdir -p /var/atlassian/application-data/stash/shared
    sudo mount -a
  3. Ensure all your cluster nodes have synchronized clocks and identical timezone configuration. For example, in RedHat Enterprise Linux or CentOS:

    sudo yum install ntp
    sudo service ntpd start
    sudo tzselect

    In Ubuntu Linux:

    sudo apt-get install ntp
    sudo service ntp start
    sudo dpkg-reconfigure tzdata

    For other operating systems, consult your system documentation. 

    The system clocks on your cluster nodes must remain reasonably synchronized (say, to within a few seconds or less). If your system clocks drift excessively or undergo abrupt "jumps" of minutes or more, then cluster nodes may log warnings, become slow, or in extreme cases become unresponsive and require restarting. You should run the NTP service on all your cluster nodes with identical configuration, and never manually tamper with the system clock on a cluster node while Stash Data Center is running.

  4. Download the latest Stash Data Center distribution from https://www.atlassian.com/software/stash/download, and install Stash as normal on all the cluster nodes. See Getting started.

 

6. Start the first cluster node

Edit the file ${STASH_HOME}/shared/stash-config.properties, and add the following lines:

# Use multicast to discover cluster nodes (recommended).
hazelcast.network.multicast=true

# If your network does not support multicast, you may uncomment the following lines and substitute
# the IP addresses of some or all of your cluster nodes. (Not all of the cluster nodes have to be
# listed here but at least one of them has to be active when a new node joins.) 
#hazelcast.network.tcpip=true
#hazelcast.network.tcpip.members=192.168.0.1:5701,192.168.0.2:5701,192.168.0.3:5701

# The following should uniquely identify your cluster on the LAN. 
hazelcast.group.name=your-stash-cluster
hazelcast.group.password=your-stash-cluster

Using multicast to discover cluster nodes (hazelcast.network.multicast=true) is recommended, but requires all your cluster nodes to be accessible to each other via a multicast-enabled network. If your network does not support multicast then you can set hazelcast.network.multicast=false, hazelcast.network.tcpip=true, and hazelcast.network.tcpip.members to a comma-separated list of cluster nodes instead. Only enable one of hazelcast.network.tcpip or hazelcast.network.multicast, not both!

Choose values for hazelcast.group.name and hazelcast.group.password that uniquely identify the cluster on your LAN. If you have more than one cluster on the same LAN (for example, other Stash Data Center instances or other products based on similar technology such as Confluence Data Center) then you must assign each cluster a distinct name, to prevent them from attempting to join together into a "super cluster". 

Then start Stash. See Starting and stopping Stash.

Then go to http://<stash>:7990/admin/license, and install the Stash Data Center license you were issued. Restart Stash for the change to take effect. If you need a Stash Data Center license, please contact us!

 

7. Install and configure your load balancer 

You can use the load balancer of your choice, either hardware or software. Stash Data Center does not bundle a load balancer. 

Your load balancer must proxy three protocols:

Protocol | Typical port on the load balancer | Typical port on the Stash cluster nodes | Notes
HTTP | 80 | 7990 | HTTP mode. Session affinity ("sticky sessions") should be enabled using the 52-character JSESSIONID cookie.
HTTPS | 443 | 7990 | HTTP mode. Terminating SSL at the load balancer and running plain HTTP to the Stash cluster nodes is highly recommended.
SSH | 7999 | 7999 | TCP mode.

For best performance, your load balancer should support session affinity ("sticky sessions") using the  JSESSIONID cookie. By default, Stash Data Center assumes that your load balancer always directs each user's requests to the same cluster node. If it does not, users may be unexpectedly logged out or lose other information that may be stored in their HTTP session.

Stash Data Center also provides a property hazelcast.http.sessions that can be set in ${STASH_HOME}/shared/stash-config.properties that provides finer control over HTTP session management. This property can be set to one of the following values:

  • local (the default): HTTP sessions are managed per node. When used in a cluster, the load balancer must have session affinity ("sticky sessions") enabled. If a node fails or is shut down, users that were assigned to that node may need to log in again.
  • sticky: HTTP sessions are distributed across the cluster with a load balancer configured to use session affinity ("sticky sessions"). If a node fails or is shut down, users should not have to log in again. In this configuration, session management is optimized for sticky sessions and will not perform certain cleanup tasks for better performance.
  • replicated: HTTP sessions are distributed across the cluster. If a node fails or is shut down, users should not have to log in again. The load balancer does not need to be configured for session affinity ("sticky sessions"), but performance is likely to be better if it is.

Both the sticky and replicated options come with some performance penalty, which can be substantial if session data is used heavily (for example, in custom plugins). For best performance, local (the default) is recommended.
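
For example, to replicate sessions across the cluster you would add the following line to ${STASH_HOME}/shared/stash-config.properties (shown here only to illustrate the syntax; local, the default, gives the best performance):

${STASH_HOME}/shared/stash-config.properties
hazelcast.http.sessions=replicated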

Whichever load balancer you choose, it must support the HTTP, HTTPS, and TCP protocols. Note that:

  • Apache does not support TCP mode load balancing
  • HAProxy versions older than 1.5.0 do not support HTTPS

If your load balancer supports health checks of the cluster nodes, configure it to perform a periodic HTTP GET of http://<stash>:7990/status, where <stash> is the cluster node's name or IP address. This returns one of two HTTP status codes:

  • 200 OK
  • 500 Internal Server Error

If a cluster node does not return 200 OK within a reasonable amount of time, the load balancer should not direct any traffic to it. 
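
For example, you can check a node by hand with curl (192.168.0.1 stands in for one of your cluster node addresses):

curl -i http://192.168.0.1:7990/status

A healthy node responds with 200 OK and a short JSON body describing its state.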

You should then be able to navigate to http://<load-balancer>/, where <load-balancer> is your load balancer's name or IP address. This should take you to your Stash front page. 

Example: HAProxy load balancer

If you don't have a particular load balancer in mind, or don't have a policy that dictates one, you can use HAProxy, a popular open source software load balancer.

If you choose HAProxy, you must use version 1.5.0 or later. Earlier versions of HAProxy do not support HTTPS.

Here is an example haproxy.cfg configuration file (typically found at /etc/haproxy/haproxy.cfg). This assumes:

  • Your Stash cluster node is at address 192.168.0.1, listening on the default ports 7990 (HTTP) and 7999 (SSH). 
  • You have a valid SSL certificate at /etc/cert.pem.
haproxy.cfg
global
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    tune.ssl.default-dh-param 1024

defaults
    log                     global
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
    errorfile               408 /dev/null	# Workaround for Chrome 35-36 bug.  See http://blog.haproxy.com/2014/05/26/haproxy-and-http-errors-408-in-chrome/

frontend stash_http_frontend
    bind *:80
    bind *:443 ssl crt /etc/cert.pem ciphers RC4-SHA:AES128-SHA:AES256-SHA
    default_backend stash_http_backend

backend stash_http_backend
    mode http
    option httplog
    option httpchk GET /status
    option forwardfor
    option http-server-close
    balance roundrobin
    stick-table type string len 52 size 5M expire 30m
    stick store-response set-cookie(JSESSIONID)
    stick on cookie(JSESSIONID)
    server stash01 192.168.0.1:7990 check inter 10000 rise 2 fall 5
    #server stash02 192.168.0.2:7990 check inter 10000 rise 2 fall 5
    # The following "backup" servers are just here to show the startup page when all nodes are starting up
    server backup01 192.168.0.1:7990 backup
    #server backup02 192.168.0.2:7990 backup
 
frontend stash_ssh_frontend
    bind *:7999
    default_backend stash_ssh_backend
    timeout client 15m
    maxconn 50

backend stash_ssh_backend
    mode tcp
    balance roundrobin
    server stash01 192.168.0.1:7999 check port 7999
    #server stash02 192.168.0.2:7999 check port 7999
    timeout server 15m
 
listen admin
    mode http
    bind *:8090
    stats enable
    stats uri /

Review the contents of the haproxy.cfg file carefully, and customize it for your environment. See http://www.haproxy.org/ for more information about installing and configuring haproxy.

Once you have configured the haproxy.cfg file, start the haproxy service.

sudo service haproxy start

You can also monitor the health of your cluster by navigating to HAProxy's statistics page at http://<load-balancer>:8090/. You should see a page similar to this:

8. Configure Tomcat/Stash for HAProxy

Stash needs to be configured to work with HAProxy. For example:

<Stash home directory>/shared/server.xml
<Connector port="7990"
     protocol="HTTP/1.1"
     connectionTimeout="20000"
     useBodyEncodingForURI="true"
     redirectPort="443"
     compression="on"
     compressableMimeType="text/html,text/xml,text/plain,text/css,application/json,application/javascript,application/x-javascript"
     secure="true"
     scheme="https"
     proxyName="<load-balancer>"
     proxyPort="443" />

See Securing Stash behind HAProxy using SSL for more details.

9. Add a new Stash cluster node to the cluster

Go to a new cluster node, and start Stash. See Starting and stopping Stash.

Once Stash has started, go to http://<load-balancer>/admin/clustering. You should see a page similar to this:

Verify that the new node you have started up has successfully joined the cluster. If it does not, please check your network configuration and the ${STASH_HOME}/log/atlassian-stash.log files on all nodes. If you are unable to find a reason for the node failing to join successfully, please contact Atlassian Support.
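
For example, the following shows recent cluster-related messages in a node's log; the exact wording of these messages varies between Stash versions, so treat it only as a starting point:

grep -i hazelcast ${STASH_HOME}/log/atlassian-stash.log | tail -20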

10. Connect the new Stash cluster node to the load balancer

If you are using your own hardware or software load balancer, consult your vendor's documentation on how to add the new Stash cluster node to the load balancer.

If you are using HAProxy, just uncomment the lines

server stash02 192.168.0.2:7990 check inter 10000 rise 2 fall 5
server stash02 192.168.0.2:7999 check port 7999

in your haproxy.cfg file and restart haproxy:

sudo service haproxy restart

Verify that the new node is in the cluster and is receiving requests. To do this, examine the logs on each node to check that both are receiving traffic, and also check that an update made on one node is visible on the other. 
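
For example, you can watch the access log on each node while making requests through the load balancer and confirm that every node records traffic (the access log file name may differ slightly between Stash versions):

tail -f ${STASH_HOME}/log/atlassian-stash-access.log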

11. Repeat steps 9 and 10 for each additional cluster node

12. Congratulations!

You have now set up a clustered instance of Stash Data Center!  We are very interested in hearing your feedback on this process – please contact us!

For any issues please raise a support ticket and mention that you are following the 'Installing Stash Data Center' page.

Please see Using Stash in the enterprise for information about using Stash in a production environment.
