Stash is now known as Bitbucket Server.
See the Bitbucket Server version of this page, or visit the Bitbucket Server documentation home page.


To get the best performance out of your Stash deployment in AWS, it's important not to under-provision your instance's CPU, memory, or I/O resources. Note that the very smallest instance types provided by AWS do not meet Stash's minimum hardware requirements and are not recommended in production environments. If you do not provision sufficient resources for your workload, Stash may exhibit slow response times, display a "Stash is reaching resource limits" banner, or fail to start altogether.

Recommended EC2 and EBS instance sizes

The following table lists the recommended EC2 and EBS configurations for Stash Server in AWS under typical workloads. 


Active users    EC2 instance type    EBS optimized    EBS volume type           IOPS
0 – 250         c3.large             No               General Purpose (SSD)     N/A
250 – 500       c3.xlarge            Yes              General Purpose (SSD)     N/A
500 – 1000      c3.2xlarge           Yes              Provisioned IOPS (SSD)    500 – 1000
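
As an illustration, the following AWS CLI sketch launches an instance matching the 250 – 500 user row. The AMI ID, key pair name, and root device name are placeholders that depend on your region and chosen AMI; adjust them before use.

# Sketch only: launch a c3.xlarge with EBS optimization and a 100 GiB General Purpose (SSD) root volume.
aws ec2 run-instances --image-id ami-xxxxxxxx \
                      --instance-type c3.xlarge \
                      --ebs-optimized \
                      --key-name my-key-pair \
                      --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":100,"VolumeType":"gp2"}}]'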

In Stash instances with a high hosting workload, I/O performance is often the limiting factor. Pay particular attention to EBS volume options, especially the following:

  • The size of an EBS volume also affects I/O performance. Larger EBS volumes generally get a larger slice of the available bandwidth and I/O operations per second. A minimum of 100 GiB is recommended in production environments.
  • The IOPS that can be sustained by General Purpose (SSD) volumes is limited by Amazon's I/O credits. If you exhaust your I/O credit balance, your IOPS will be limited to the baseline level. You should consider using a larger General Purpose (SSD) volume or switching to a Provisioned IOPS (SSD) volume (a CLI sketch follows this list). See Amazon EBS Volume Types for more information.
  • New EBS volumes in particular have reduced performance the first time each block is accessed. See Pre-Warming Amazon EBS Volumes for more information.
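
For example, a minimal sketch of creating and attaching a Provisioned IOPS (SSD) volume with the AWS CLI might look like the following; the availability zone, size, IOPS value, IDs, and device name are placeholders to adjust for your environment.

# Sketch only: create a 100 GiB Provisioned IOPS (SSD) volume with 500 IOPS and attach it to an instance.
aws ec2 create-volume --availability-zone us-east-1a \
                      --size 100 \
                      --volume-type io1 \
                      --iops 500
aws ec2 attach-volume --volume-id vol-xxxxxxxx \
                      --instance-id i-xxxxxxxx \
                      --device /dev/sdf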

The above recommendations are based on a typical workload with the specified number of active users. The resource requirements of an actual Stash instance may vary with a number of factors, including:

  • The number of continuous integration servers cloning or fetching from Stash: Stash will use more resources if you have many build servers set to clone or fetch frequently from Stash.
  • Whether continuous integration servers are using push mode notifications or polling repositories regularly to watch for updates.
  • Whether continuous integration servers are set to do full clones or shallow clones.
  • Whether the majority of traffic to Stash is over HTTP, HTTPS, or SSH, and the encryption ciphers used. 
  • The number and size of repositories: Stash will use more resources when you work on many very large repositories.
  • The activity of your users: Stash will use more resources if your users are actively using the Stash web interface to browse, clone and push, and manipulate Pull Requests. 
  • The number of open Pull Requests: Stash will use more resources when there are many open Pull Requests, especially if they all target the same branch in a large, busy repository.

See Scaling Stash and Scaling Stash for Continuous Integration performance for more detailed information on Stash resource requirements.

Other supported instance sizes

The following Amazon EC2 instances also meet or exceed Stash's minimum hardware requirements. These instances provide different balances of CPU, memory, and I/O performance, and can cater for workloads that are more CPU-, memory-, or I/O-intensive than typical.

Model          vCPU    Memory (GiB)    Instance Store (GB)    EBS optimized available    Dedicated EBS throughput (Mbps)
c3.large       2       3.75            2 x 16 SSD             -                          -
c3.xlarge      4       7.5             2 x 40 SSD             Yes                        -
c3.2xlarge     8       15              2 x 80 SSD             Yes                        -
c3.4xlarge     16      30              2 x 160 SSD            Yes                        -
c3.8xlarge     32      60              2 x 320 SSD            -                          -
c4.large       2       3.75            -                      Yes                        500
c4.xlarge      4       7.5             -                      Yes                        750
c4.2xlarge     8       15              -                      Yes                        1,000
c4.4xlarge     16      30              -                      Yes                        2,000
c4.8xlarge     36      60              -                      Yes                        4,000
hs1.8xlarge    16      117             24 x 2000              -                          -
i2.xlarge      4       30.5            1 x 800 SSD            Yes                        -
i2.2xlarge     8       61              2 x 800 SSD            Yes                        -
i2.4xlarge     16      122             4 x 800 SSD            Yes                        -
i2.8xlarge     32      244             8 x 800 SSD            -                          -
m3.large       2       7.5             1 x 32 SSD             -                          -
m3.xlarge      4       15              2 x 40 SSD             Yes                        -
m3.2xlarge     8       30              2 x 80 SSD             Yes                        -
r3.large       2       15.25           1 x 32 SSD             -                          -
r3.xlarge      4       30.5            1 x 80 SSD             Yes                        -
r3.2xlarge     8       61              1 x 160 SSD            Yes                        -
r3.4xlarge     16      122             1 x 320 SSD            Yes                        -
r3.8xlarge     32      244             2 x 320 SSD            -                          -

In all AWS instance types, Stash only supports "large" and higher instances.  "Micro", "small", and "medium" sized instances do not meet Stash's minimum hardware requirements and are not recommended in production environments. 

Stash does not support D2 instances, Burstable Performance (T2) instances, or Previous Generation instances.

In any instance type with available Instance Store device(s), a Stash instance launched from the Atlassian Stash AMI will configure one Instance Store to contain Stash's temporary files and caches. Instance Store can be faster than an EBS volume, but the data does not persist if the instance is stopped or terminated. Use of Instance Store can improve performance and reduce the load on EBS volumes. See Amazon EC2 Instance Store for more information.
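
On an instance launched from the Atlassian Stash AMI this configuration happens automatically. For a manually configured instance, a rough sketch of the equivalent steps is shown below; the device name and mount point are assumptions that vary by instance type and setup.

# Sketch only: format and mount the first instance store device for temporary files and caches.
# Remember that anything stored here is lost when the instance is stopped or terminated.
sudo mkfs.ext4 /dev/xvdb                 # device name varies by instance type and virtualization
sudo mkdir -p /mnt/stash-temp            # hypothetical mount point
sudo mount /dev/xvdb /mnt/stash-temp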

Advanced: Monitoring your Stash instance to tune instance sizing

This section is for advanced users who wish to monitor the resource consumption of their instance and use this information to guide instance sizing.

The above recommendations provide guidance for typical workloads. The resource consumption of every Stash instance, though, will vary with the mix of workload. The most reliable way to determine if your Stash instance is under- or over-provisioned in AWS is to monitor its resource usage regularly with Amazon CloudWatch. This provides statistics on the actual amount of CPU, I/O, and network resources consumed by your Stash instance. 

The simple Bash script example below uses the AWS Command Line Interface, jq, and gnuplot to gather CPU, I/O, and network statistics and display them in simple charts that can help guide instance sizing decisions.

#!/bin/bash
# Example AWS CloudWatch monitoring script
# Usage:
#   (1) Install gnuplot and jq (minimum version 1.4)
#   (2) Install AWS CLI (http://docs.aws.amazon.com/cli/latest/userguide/installing.html) and configure it with
#       credentials allowing cloudwatch get-metric-statistics
#   (3) Replace "xxxxxxxx" in volume_ids and instance_ids below with the IDs of your real EBS volume and EC2 instance
#   (4) Run this script

export start_time=$(date -v-14d +%Y-%m-%dT%H:%M:%S)    # last 14 days (BSD/macOS date syntax; on GNU/Linux use: date -d '-14 days' +%Y-%m-%dT%H:%M:%S)
export end_time=$(date +%Y-%m-%dT%H:%M:%S)
export period=1800
export volume_ids="vol-xxxxxxxx"    # REPLACE THIS WITH THE VOLUME ID OF YOUR REAL EBS VOLUME
export instance_ids="i-xxxxxxxx"    # REPLACE THIS WITH THE INSTANCE ID OF YOUR REAL EC2 INSTANCE

# Build lists of metrics and datafiles that we're interested in
ebs_metrics=""
ec2_metrics=""
cpu_datafiles=""
iops_datafiles=""
queue_datafiles=""
net_datafiles=""
for volume_id in ${volume_ids}; do
  for metric in VolumeWriteOps VolumeReadOps; do
    ebs_metrics="${ebs_metrics} ${metric}"
    iops_datafiles="${iops_datafiles} ${volume_id}-${metric}"
  done
done
for volume_id in ${volume_ids}; do
  for metric in VolumeQueueLength; do
    ebs_metrics="${ebs_metrics} ${metric}"
    queue_datafiles="${queue_datafiles} ${volume_id}-${metric}"
  done
done
for instance_id in ${instance_ids}; do
  for metric in DiskWriteOps DiskReadOps; do
    ec2_metrics="${ec2_metrics} ${metric}"
    iops_datafiles="${iops_datafiles} ${instance_id}-${metric}"
  done
done
for instance_id in ${instance_ids}; do
  for metric in CPUUtilization; do
    ec2_metrics="${ec2_metrics} ${metric}"
    cpu_datafiles="${cpu_datafiles} ${instance_id}-${metric}"
  done
done
for instance_id in ${instance_ids}; do
  for metric in NetworkIn NetworkOut; do
    ec2_metrics="${ec2_metrics} ${metric}"
    net_datafiles="${net_datafiles} ${instance_id}-${metric}"
  done
done

# Gather the metrics using AWS CLI
for volume_id in ${volume_ids}; do
  for metric in ${ebs_metrics}; do
    aws cloudwatch get-metric-statistics --metric-name ${metric} \
                                         --start-time ${start_time} \
                                         --end-time ${end_time} \
                                         --period ${period} \
                                         --namespace AWS/EBS \
                                         --statistics Sum \
                                         --dimensions Name=VolumeId,Value=${volume_id} | \
      jq -r '.Datapoints | sort_by(.Timestamp) | map(.Timestamp + " " + (.Sum | tostring)) | join("\n")' >${volume_id}-${metric}.data
  done
done

for metric in ${ec2_metrics}; do
  for instance_id in ${instance_ids}; do
    aws cloudwatch get-metric-statistics --metric-name ${metric} \
                                         --start-time ${start_time} \
                                         --end-time ${end_time} \
                                         --period ${period} \
                                         --namespace AWS/EC2 \
                                         --statistics Sum \
                                         --dimensions Name=InstanceId,Value=${instance_id} | \
      jq -r '.Datapoints | sort_by(.Timestamp) | map(.Timestamp + " " + (.Sum | tostring)) | join("\n")' >${instance_id}-${metric}.data
  done
done

cat >aws-monitor.gnuplot <<EOF
set term pngcairo font "Arial,30" size 1600,900
set title "IOPS usage"
set datafile separator whitespace
set xdata time
set timefmt "%Y-%m-%dT%H:%M:%SZ"
set grid
set ylabel "IOPS"
set xrange ["${start_time}Z":"${end_time}Z"]
set xtics "${start_time}Z",86400*2 format "%d-%b"
set output "aws-monitor-iops.png"
plot \\
EOF
for datafile in ${iops_datafiles}; do
  echo "  \"${datafile}.data\" using 1:(\$2/${period}) with lines title \"${datafile}\", \\" >>aws-monitor.gnuplot
done

cat >>aws-monitor.gnuplot <<EOF

set term pngcairo font "Arial,30" size 1600,900
set title "IO Queue Length"
set datafile separator whitespace
set xdata time
set timefmt "%Y-%m-%dT%H:%M:%SZ"
set grid
set ylabel "Queue Length"
set xrange ["${start_time}Z":"${end_time}Z"]
set xtics "${start_time}Z",86400*2 format "%d-%b"
set output "aws-monitor-queue.png"
plot \\
EOF
for datafile in ${queue_datafiles}; do
  echo "  \"${datafile}.data\" using 1:2 with lines title \"${datafile}\", \\" >>aws-monitor.gnuplot
done

cat >>aws-monitor.gnuplot <<EOF

set term pngcairo font "Arial,30" size 1600,900
set title "CPU Utilization"
set datafile separator whitespace
set xdata time
set timefmt "%Y-%m-%dT%H:%M:%SZ"
set grid
set ylabel "%"
set xrange ["${start_time}Z":"${end_time}Z"]
set xtics "${start_time}Z",86400*2 format "%d-%b"
set output "aws-monitor-cpu.png"
plot \\
EOF
for datafile in $cpu_datafiles; do
  echo "  \"${datafile}.data\" using 1:2 with lines title \"${datafile}\", \\" >>aws-monitor.gnuplot
done

cat >>aws-monitor.gnuplot <<EOF

set term pngcairo font "Arial,30" size 1600,900
set title "Network traffic"
set datafile separator whitespace
set xdata time
set timefmt "%Y-%m-%dT%H:%M:%SZ"
set grid
set ylabel "MBytes/s"
set xrange ["${start_time}Z":"${end_time}Z"]
set xtics "${start_time}Z",86400*2 format "%d-%b"
set output "aws-monitor-net.png"
plot \\
EOF
for datafile in $net_datafiles; do
  echo "  \"${datafile}.data\" using 1:(\$2/${period}/1000000) with lines title \"${datafile}\", \\" >>aws-monitor.gnuplot
done

gnuplot <aws-monitor.gnuplot

When run on a typical Stash instance, this script produces charts such as the following:

You can use the information in charts like these to determine whether your instance's CPU, network, or I/O resources are over- or under-provisioned.

If your instance frequently reaches the maximum available CPU utilization (taking into account the number of cores in your instance size), this may indicate that you need an EC2 instance with more CPUs. (Note that the CPU utilization reported by Amazon CloudWatch for smaller EC2 instance sizes may be subject to some "noisy neighbor" effects, if other tenants of the Amazon environment consume CPU cycles on the same physical hardware your instance is running on.)

If your instance is frequently exceeding the IOPS available to your EBS volume and/or is frequently queuing I/O requests, then this may indicate you need to upgrade to an EBS optimized instance and/or increase the Provisioned IOPS on your EBS volume. See EBS Volume Types for more information. 
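
With a reasonably recent AWS CLI, the IOPS of an existing volume can be raised in place. A minimal sketch, with a placeholder volume ID and IOPS value:

# Sketch only: convert the volume to Provisioned IOPS (SSD) and raise its IOPS.
aws ec2 modify-volume --volume-id vol-xxxxxxxx \
                      --volume-type io1 \
                      --iops 1000
# Check the progress of the modification:
aws ec2 describe-volumes-modifications --volume-ids vol-xxxxxxxx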

If your instance is frequently limited by network traffic, this may indicate that you need to choose an EC2 instance with a larger slice of the available network bandwidth.
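
Whether the bottleneck is CPU or network, moving to a larger EC2 instance type does not require recreating the instance. A minimal sketch follows, with a placeholder instance ID and an assumed target type; note that anything on Instance Store is lost when the instance stops.

# Sketch only: stop the instance, change its type, and start it again.
aws ec2 stop-instances --instance-ids i-xxxxxxxx
aws ec2 wait instance-stopped --instance-ids i-xxxxxxxx
aws ec2 modify-instance-attribute --instance-id i-xxxxxxxx --instance-type "{\"Value\": \"c3.2xlarge\"}"
aws ec2 start-instances --instance-ids i-xxxxxxxx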