Performance and scale testing
We’re committed to continuously evolving and enhancing the quality of our performance reports. This report presents a comprehensive overview of the performance metrics for the most recent 10.3 Long Term Support (LTS) release of Jira Software Data Center.
About Long Term Support releases
We recommend upgrading Jira Software regularly, but if your organization's process means you only upgrade about once a year, upgrading to a Long Term Support release is a good option. It provides continued access to critical security, stability, data integrity, and performance fixes until this version reaches end of life.
TL;DR
In Jira Software Data Center 10.3 LTS, we've introduced a new performance testing framework that captures performance more accurately by using NFR thresholds, mimicking the end user experience more closely.
Our tests are conducted on datasets that mimic an extra-large customer cohort, running on the minimal recommended hardware, to provide performance thresholds achievable for all of our customers.
We routinely carry out internal performance testing to ensure that all actions remain within the NFR thresholds. Our continuous monitoring enables us to optimize performance and effectively address any regressions.
We conducted tests with the new framework on versions 9.12 and 10.3 using a comparable setup and found no regressions.
The report below presents the results of tests conducted on a non-optimal data shape and hardware, so it doesn’t represent Jira’s peak performance. Instead, it showcases the performance of a Jira instance typical of customers in the extra-large cohort.
However, we also provide guidance on optimizing your instance to achieve performance that surpasses the results shown in this test. More on performance and scaling
Summary
In the Jira 10.3 release, we’re showcasing the results of our performance testing through a new framework that focuses on Non-Functional Requirements (NFR) thresholds for essential user actions. These thresholds serve as benchmarks for reliability, responsiveness, and scalability, ensuring that our system adheres to the expected performance standards. We've successfully delivered features that significantly impact the product and consistently addressed performance regressions to pass all NFR tests. You can be confident that as the product evolves and you scale, we’ll remain dedicated to optimizing and maintaining performance.
Testing overview
Testing methodology
The test scenarios were all run simultaneously using scripted browsers. Each browser was scripted to execute one of the scenarios listed below repeatedly, without think time, until at least 1000 results were gathered per scenario.
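As an illustration of what such a scripted-browser loop can look like (a minimal sketch only, not the actual test harness; the `runScenario` helper and the sample target parameter are assumptions), a Playwright script in TypeScript might be structured roughly as follows:

```typescript
import { chromium, type Page } from 'playwright';

// Hypothetical helper: runs one user scenario and returns its duration in milliseconds.
async function runScenario(page: Page, scenario: (page: Page) => Promise<void>): Promise<number> {
  const start = Date.now();
  await scenario(page);
  return Date.now() - start;
}

// Repeats a single scenario back to back (no think time) until enough samples are gathered.
async function collectSamples(
  scenario: (page: Page) => Promise<void>,
  minSamples = 1000,
): Promise<number[]> {
  const browser = await chromium.launch({ headless: true }); // headless Chrome, as listed under the load generator
  const page = await browser.newPage();
  const samples: number[] = [];

  while (samples.length < minSamples) {
    samples.push(await runScenario(page, scenario));
  }

  await browser.close();
  return samples;
}
```

In practice, one such loop would run per scenario, with all loops executing simultaneously against the same Jira instance.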
Testing environment
The performance tests were all run on a set of AWS EC2 instances deployed in the eu-west-1 region. The tested Jira instance was a fresh, out-of-the-box Jira Data Center installation set up as a single node, without any additional configuration. This approach allowed us to verify satisfactory performance in a basic setup, with additional nodes providing further performance gains if necessary.
Below, you can check the details of the environments used for Jira Software Data Center and the specifications of the EC2 instances. With the introduction of the new testing framework, hardware sizes were either reduced or kept unchanged.
Hardware |  | Software |  |
---|---|---|---|
EC2 type | c5d.9xlarge | Operating system | Ubuntu 20.04.6 LTS |
Nodes | 1 node | Java platform | Java 17.0.11 |
Database
Hardware |  | Software |  |
---|---|---|---|
EC2 type | db.m5.4xlarge | Database | Postgres 14 |
 |  | Operating system | Ubuntu 20.04.6 LTS |
Load generator
Hardware |  | Software |  |
---|---|---|---|
CPU cores | 2 | Browser | Headless Chrome |
Memory | 8 GB | Automation scripts | Playwright |
Testing dataset
Before we started testing, we needed to determine what size and shape of the dataset represents a typical large Jira Software instance. To achieve that, we created a new dataset that more accurately matches the instance profiles of Jira Software in very large organizations.
The data was collected from anonymized statistics received from real customer instances. A machine-learning (clustering) algorithm was used to group the data into small, medium, large, and extra-large instance data shapes. For our tests, we decided to use the median values gathered from extra-large instances.
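To make the "median of the extra-large cohort" step concrete, here's a minimal sketch (the clustering itself is omitted and the field names are hypothetical) of how per-dimension medians could be derived from labelled instance statistics:

```typescript
// One anonymized statistics record per customer instance (field names are illustrative).
interface InstanceStats {
  cohort: 'small' | 'medium' | 'large' | 'extra-large'; // label assigned by the clustering step
  issues: number;
  comments: number;
  projects: number;
}

// Median of a list of numbers.
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Per-dimension medians across all instances labelled 'extra-large'.
function extraLargeMedians(stats: InstanceStats[]) {
  const xl = stats.filter((s) => s.cohort === 'extra-large');
  return {
    issues: median(xl.map((s) => s.issues)),
    comments: median(xl.map((s) => s.comments)),
    projects: median(xl.map((s) => s.projects)),
  };
}
```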
The new dataset significantly increases some dimensions, such as the number of issues, comments, attachments, projects, agile boards, and workflows, compared to the dataset previously used to create the release performance reports. Some dimensions have slightly decreased, but the new values better reflect real-life customer statistics or match our guardrails.
In Jira 10.1, we introduced a new feature allowing instances to send a basic set of data that we use to refine the size and shape of our testing datasets. Learn more about how you can send your instance's anonymized info to improve our dataset
The following table presents the dataset we used in our tests. It simulates a data shape typical in our extra-large customer cohort.
Data | Value |
---|---|
Agile boards | 2,861 |
Attachments | 2,100,000 |
Comments | 8,408,998 |
Custom fields | 1,200 |
Groups | 20,006 |
Issues | 5,407,147 |
Permissions | 200 |
Projects | 4,299 |
Security levels | 170 |
Users | 82,725 |
Workflows | 4,299 |
Testing results
NFR tests
This year, we've introduced a set of test scenarios based on a framework that focuses on Non-Functional Requirements (NFR) thresholds for key user actions.
We've established a target threshold for each measured action. These thresholds, set according to the action type, serve as benchmarks for reliability, responsiveness, and scalability, ensuring that our product meets the expected performance standards. We're committed to ensuring that we don’t deliver performance regressions, maintaining the quality and reliability of our product.
It’s important to clarify that the thresholds outlined in this report aren't the targets we strive to achieve; rather, they represent the lower bound of accepted performance for extra-large instances. Performance for smaller customers can and should be significantly better.
Action type | Response time (50th percentile) | Response time (90th percentile) |
---|---|---|
Page load | 3 s | 5 s |
Page transition | 2.5 s | 3 s |
The measured performance of an action was defined as the time from the beginning of the action (for example, initiating browser navigation for View actions or submitting a form) until the action is performed and the crucial information is visible (for example, the issue summary, description, and activity are displayed for the View Issue action). This approach lets us measure performance closer to how it's perceived by the end user.
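As a rough sketch of this measurement style, assuming a Playwright page object and placeholder selectors for the issue summary, description, and activity panel (these are not the actual scripts or selectors used in the tests), timing a View Issue action could look like this:

```typescript
import type { Page } from 'playwright';

// Times a "View Issue" load: from starting browser navigation until the crucial
// content (summary, description, activity) is visible. Selectors are placeholders.
async function measureViewIssue(page: Page, baseUrl: string, issueKey: string): Promise<number> {
  const start = Date.now();
  await page.goto(`${baseUrl}/browse/${issueKey}`);                        // begin navigation
  await page.locator('#summary-val').waitFor({ state: 'visible' });        // issue summary
  await page.locator('#descriptionmodule').waitFor({ state: 'visible' });  // description
  await page.locator('#activitymodule').waitFor({ state: 'visible' });     // activity feed
  return Date.now() - start;
}
```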
The following interpretations of the response times apply:
50th percentile - Gives a clear understanding of the typical performance for most users in extra-large instances. It's less affected by extreme outliers, so it shows the central tendency of response times.
90th percentile - Highlights performance for worst-case scenarios, which may affect a smaller, but still noteworthy, portion of users in the extra-large instances.
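For illustration, assuming raw per-action samples in milliseconds and that an action must meet both its 50th- and 90th-percentile thresholds to pass (an assumed pass criterion), the reported figures and the PASS/FAIL verdict could be computed roughly like this:

```typescript
// Nearest-rank percentile over raw samples in milliseconds.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

interface NfrThresholds {
  p50Ms: number; // threshold for the 50th percentile
  p90Ms: number; // threshold for the 90th percentile
}

// An action passes only if both percentiles stay within their thresholds.
function nfrVerdict(samples: number[], t: NfrThresholds): 'PASS' | 'FAIL' {
  const p50 = percentile(samples, 50);
  const p90 = percentile(samples, 90);
  return p50 <= t.p50Ms && p90 <= t.p90Ms ? 'PASS' : 'FAIL';
}

// Example: a page-load action is held to 3 s at the 50th and 5 s at the 90th percentile.
// nfrVerdict(viewIssueSamples, { p50Ms: 3000, p90Ms: 5000 });
```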
Note:
We routinely carry out internal performance testing to ensure that all actions remain within the NFR thresholds. Our continuous monitoring enables us to optimize performance and effectively address any regressions.
All actions had a 0% error rate, demonstrating strong system reliability and stability. For the list of bugs resolved since the previous 9.12 LTS, refer to the Jira Software 10.3 LTS release notes.
The results overall are as follows:
All the actions below achieved a PASS status within the 10.3 LTS.
Action | Response time | Target threshold | Achieved performance |
---|---|---|---|
Add comment | 90th percentile | 5000 ms | 789 ms |
 | 50th percentile | 3000 ms | 630 ms |
Advanced JQL - search by assignee | 90th percentile | 5000 ms | 1081 ms |
 | 50th percentile | 3000 ms | 983 ms |
Advanced JQL - search by priority | 90th percentile | 5000 ms | 2321 ms |
 | 50th percentile | 3000 ms | 2125 ms |
Advanced JQL - search by project | 90th percentile | 5000 ms | 1026 ms |
 | 50th percentile | 3000 ms | 902 ms |
Advanced JQL - search by project and user | 90th percentile | 5000 ms | 1213 ms |
 | 50th percentile | 3000 ms | 732 ms |
Advanced JQL - search by reporter | 90th percentile | 5000 ms | 899 ms |
 | 50th percentile | 3000 ms | 825 ms |
Advanced JQL - search by resolution | 90th percentile | 5000 ms | 822 ms |
 | 50th percentile | 3000 ms | 749 ms |
Advanced JQL - search by word | 90th percentile | 5000 ms | 2415 ms |
 | 50th percentile | 3000 ms | 2231 ms |
Browse boards | 90th percentile | 5000 ms | 608 ms |
 | 50th percentile | 3000 ms | 572 ms |
Browse projects | 90th percentile | 5000 ms | 669 ms |
 | 50th percentile | 3000 ms | 634 ms |
Create issue | 90th percentile | 3000 ms | 838 ms |
 | 50th percentile | 2500 ms | 828 ms |
Edit issue | 90th percentile | 3000 ms | 718 ms |
 | 50th percentile | 2500 ms | 595 ms |
Edit sprint on backlog | 90th percentile | 3000 ms | 425 ms |
 | 50th percentile | 2500 ms | 403 ms |
Basic search - search by assignee | 90th percentile | 5000 ms | 1127 ms |
 | 50th percentile | 3000 ms | 1037 ms |
Sidebar (on Issue view) | 90th percentile | 5000 ms | 1285 ms |
 | 50th percentile | 3000 ms | 1205 ms |
View backlog | 90th percentile | 5000 ms | 1065 ms |
 | 50th percentile | 3000 ms | 1018 ms |
View board | 90th percentile | 5000 ms | 1083 ms |
 | 50th percentile | 3000 ms | 1039 ms |
View dashboard | 90th percentile | 5000 ms | 436 ms |
 | 50th percentile | 3000 ms | 414 ms |
View project summary | 90th percentile | 5000 ms | 496 ms |
 | 50th percentile | 3000 ms | 469 ms |
View issue | 90th percentile | 5000 ms | 821 ms |
 | 50th percentile | 3000 ms | 710 ms |
Verifying there are no regressions since 9.12 LTS
Comparing results between the old and new testing frameworks is challenging due to variations in their testing approaches, including differences in the datasets used and implementation details of the scenarios.
We conducted a sample batch of the same test scenarios in an identical environment, utilizing a new, more demanding dataset on Jira version 9.12.16 with Java 17. This approach allowed us to make the results comparable within the new testing framework.
When tested in the same manner, no regressions were observed in Jira Software version 10.3.0 compared to version 9.12.16.
Further resources for scaling and optimizing
If you want to learn more about scaling and optimizing Jira, check out these additional resources.
Archiving issues
The number of issues affects Jira's performance, so you might want to archive issues that are no longer needed. A large number of issues can also clutter views in Jira, which is another reason to archive outdated issues from your instance. Learn more about archiving projects
Jira Software guardrails
Product guardrails are data-type recommendations designed to help you identify potential risks and aid you in making decisions about the next steps in your instance optimization journey. Learn more about Jira Software guardrails
Jira knowledge base
For detailed guidelines on performance-related topics, see the article on troubleshooting Jira server performance issues in the Jira knowledge base.
Jira Enterprise Services
To learn how experienced Atlassians can work with you directly on scaling Jira within your organization, check out our additional support services.
Solution Partners
The Atlassian Experts in your local area can also help you scale Jira in your own environment.