How to enable Agent dependency caching in Bamboo Data Center

About the platform: Data Center - This article applies to Atlassian products on the Data Center platform.

This knowledge base article was created for the Data Center version of the product. Data Center knowledge base articles for features that are not Data Center specific may also work for the Server version of the product, but they have not been tested. Support for Server* products ended on February 15, 2024. If you are using a Server product, please see the Atlassian Server end of support announcement page for migration options.

*Except Fisheye and Crucible

Summary

This article explains how to configure your reverse proxy to cache content for your Bamboo Agents, enhancing performance and reducing the load on your Bamboo server.

Environment

  • Bamboo Data Center
  • Bamboo Agents (Remote, Ephemeral)
  • Reverse Proxy or Load Balancer with content caching

Diagnosis

Bamboo Data Center supports proxy functionality for caching Agent dependencies. This feature can reduce agent startup time by enabling content caching on your reverse proxy. Once you activate caching on the proxy side, Bamboo confirms the change by writing a message to the agent log file.

You'll see the following INFO message in the <BAMBOO_AGENT_HOME>/atlassian-bamboo-agent.log file when content caching is enabled:

INFO [WrapperSimpleAppMain] [ClasspathBuilder] Content caching is enabled for high performance bootstrap.

In case content caching is disabled, you'll see the following WARN message:

WARN [WrapperSimpleAppMain] [ClasspathBuilder] Content caching is not enabled. This can cause performance impact on servers with high-tier licenses.
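
To quickly check which of these messages your agent has logged, you can search the agent log. This is a minimal sketch assuming the default log location shown above:

$ grep -i "content caching" <BAMBOO_AGENT_HOME>/atlassian-bamboo-agent.log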

The dependencies in question are the JAR files that the Agent must download from the server during startup. To speed up agent startup, this plugin content can be cached at the proxy level, so no changes are needed on the Bamboo side; all adjustments are made on the reverse proxy. You also don't need to specify which items should be cached; Bamboo handles that for you once content caching is enabled.

This should work with most reverse proxies that support content caching. In this article, you'll find instructions to enable agent dependency caching specifically for Apache HTTP Server and NGINX.

ソリューション

Configuring Apache HTTP Server

Since Apache HTTP Server is not an Atlassian product, Atlassian does not guarantee to provide support for its configuration. You should consider the material on this page to be for your information only; use it at your own risk. If you encounter problems with configuring Apache HTTP Server, we recommend that you refer to the Apache HTTP Server Support page.

Note that any changes you make to the httpd.conf file take effect only after you start or restart Apache HTTP Server.

Step 1: Enable mod_cache and mod_cache_disk in Apache HTTP Server

Load mod_cache and mod_cache_disk dynamically using the LoadModule directive, which means uncommenting the following lines in the httpd.conf file:

LoadModule cache_module libexec/apache2/mod_cache.so
LoadModule cache_disk_module libexec/apache2/mod_cache_disk.so
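
After restarting Apache, you can confirm that both modules are loaded. The check below is a minimal sketch; depending on your distribution, the control binary may be apachectl, apache2ctl, or httpd:

# List the loaded modules and filter for the cache modules
$ apachectl -M | grep -i cache

Both cache_module and cache_disk_module should appear in the output.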

Step 2: Configure Apache HTTP Server's caching features

Include the following Cache directives in the context (protocol type, virtual server, or location) for which you want to cache server responses:

<VirtualHost *:80>
	...
	
	CacheRoot  /etc/apache2/cache/
	CacheEnable disk /
	CacheDirLevels 2
	CacheDirLength 1
	CacheHeader on

</VirtualHost>
  • The CacheRoot directive defines the name of the directory on the disk to contain cache files. You must ensure that the user running the Apache service has read/write access to this directory.
  • The CacheEnable directive instructs mod_cache to cache URLs at or below the given url-string (/ in this example), and the disk cache type tells mod_cache to use the disk-based storage manager implemented by mod_cache_disk.
  • The CacheDirLevels directive sets the number of subdirectory levels in the cache. Cached data will be saved this many directory levels below the CacheRoot directory.
  • The CacheDirLength directive sets the number of characters for each subdirectory name in the cache hierarchy. It can be used in conjunction with CacheDirLevels to determine the approximate structure of your cache hierarchy.
  • When the CacheHeader directive is switched on, an X-Cache header will be added to the response with the cache status of this response.
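
After saving the configuration, you can validate it, apply it, and check whether responses are being cached. The commands below are a minimal sketch that assumes the apachectl control script and reuses the Bamboo test endpoint described later in this article; with CacheHeader on, cached responses carry an X-Cache header:

# Check the configuration syntax before applying it
$ apachectl configtest

# Apply the changes with a graceful restart
$ apachectl graceful

# Inspect the X-Cache header; the first request is typically a MISS, repeat requests should be a HIT
$ curl -sI http://<BAMBOO_BASE_URL>/agentServer/bootstrap/content-cache-test | grep -i 'x-cache'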

Configuring NGINX

Since NGINX is not an Atlassian product, Atlassian does not guarantee to provide support for its configuration. You should consider the material on this page to be for your information only; use it at your own risk. If you encounter problems with configuring NGINX, we recommend that you refer to the NGINX Support page.

Step 1: Add proxy_cache_path to the top‑level http context

Include the proxy_cache_path directive in the top‑level http context; that means adding the following to the nginx.conf file:

http {
    ...
    proxy_cache_path /data/nginx/cache levels=1 keys_zone=bamboo_server:10m max_size=1024m inactive=20m use_temp_path=off;
}

The mandatory first parameter is the local filesystem path for cached content, and the mandatory keys_zone parameter defines the name and size of the shared memory zone that is used to store metadata about cached items.

  • You must ensure that the user running the NGINX service has read/write access to this directory.
  • In this example we'll name the cache bamboo_server and limit the amount of cached response data to 1024m.
  • In this example, inactive is set to 20 minutes: cached data that is not accessed for 20 minutes is removed from the cache regardless of its freshness.
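
Before reloading NGINX, make sure the cache directory exists and is writable by the NGINX worker user. The commands below are a minimal sketch; the nginx user name is an assumption and may be www-data or another account on your system:

# Create the cache directory referenced by proxy_cache_path
$ sudo mkdir -p /data/nginx/cache

# Give the NGINX worker user ownership of it (user name may differ on your system)
$ sudo chown -R nginx:nginx /data/nginx/cache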

Step 2: Include add_header and proxy_cache to start caching responses

Include the add_header directive with the name 'X-Cache-Status' and the value '$upstream_cache_status', so that an X-Cache-Status header is added to each response with its cache status. Then include the proxy_cache directive in the context (protocol type, virtual server, or location) for which you want to cache server responses, specifying the zone name defined by the keys_zone parameter of the proxy_cache_path directive (in this case, bamboo_server):

server {
    ...
    add_header     'X-Cache-Status' '$upstream_cache_status';
    location / {
        ...
        proxy_cache    bamboo_server;
    }
}

There are several other directives you can add to tune cache behavior, as described in the NGINX Content Caching documentation, so this is highly customizable. However, as mentioned earlier in this article, for agent content caching you only need to enable content caching on the proxy; Bamboo tells the cache what to store.
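
Before testing, you can validate the new configuration and reload NGINX so the changes take effect. This is a minimal sketch; the service can also be reloaded through your init system (for example, systemctl reload nginx):

# Check the configuration syntax
$ sudo nginx -t

# Reload NGINX without dropping existing connections
$ sudo nginx -s reload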

Testing Agent dependency caching

The following cURL command will help you test whether caching is successful even before starting up an agent. You don't need to have an agent installed for the purpose of this test.

$ curl -I --http1.1 http://<BAMBOO_BASE_URL>/agentServer/bootstrap/content-cache-test

You should see X-Cache-Status: HIT, similar to the following:

HTTP/1.1 200 
Server: nginx
Date: Wed, 18 Dec 2024 06:35:18 GMT
Content-Type: text/plain;charset=UTF-8
Content-Length: 5002
Connection: keep-alive
Strict-Transport-Security: max-age=31536000
Referrer-Policy: no-referrer-when-downgrade
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-Seraph-LoginReason: AUTHENTICATED_FAILED
Last-Modified: Wed, 18 Dec 2024 06:25:22 GMT
Cache-Control: public, max-age=30
X-Cache-Status: HIT

This means the entity was fresh and was served from the cache. If you see MISS, the entity was fetched from the upstream server and not served from the cache.
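
The first request after enabling caching (or after a cached entry expires; the response above allows caching for 30 seconds via Cache-Control: max-age=30) will typically return MISS while the cache is being populated, and repeating the request should then return HIT. A minimal sketch of that check, filtering the output down to the cache status header:

# Run the test request twice and print only the cache status header
$ for i in 1 2; do curl -sI --http1.1 http://<BAMBOO_BASE_URL>/agentServer/bootstrap/content-cache-test | grep -i 'x-cache'; done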


Last modified on December 18, 2024
