Configuration properties


This page describes the configuration properties that can be used to control behavior in Bitbucket Data Center. Create the bitbucket.properties file in the shared folder of your home directory and add the properties you need, using the standard format for Java properties files.

Note that bitbucket.properties is created automatically when you perform a database migration.

Bitbucket must be restarted for changes to become effective.

Default values for system properties, where applicable, are specified in the tables below.
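As a sketch of the format, a bitbucket.properties file is a plain Java properties file of key=value pairs; the property names below come from the tables on this page, and the values here are purely illustrative:

```properties
# Illustrative bitbucket.properties fragment.
# Property names are documented in the tables below; values are examples only.
attachment.upload.max.size=20971520
auth.remember-me.enabled=optional
hazelcast.port=5701
```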


Analytics

Default | Description
analytics.aws.enabled
true

Controls whether AWS instance analytics events are published. This setting only has an effect if analytics is enabled.

Application mode

Default | Description
application.mode
default

Controls what mode Bitbucket is in. Currently "mirror" and "default" are supported.

Attachments

Default | Description
attachment.upload.max.size
20971520

Defines the largest single attachment the system will allow. Attachments larger than this will be rejected without ever being stored locally.

This setting applies to all uploaded data, including avatars. Some types may apply a further restriction of their own on top of this limit. However, for such types, the file will be stored locally before that secondary limit is applied. If the secondary limit is higher than this limit, this limit is the one that will be applied.

This value is in bytes.

Audit

These properties control the auditing feature, determining the number of audit entries logged or stored in the database, and the size of those entries. Changing these settings will only affect new audit entries.

Increasing the amount of auditing done may have an adverse effect on performance.

Default | Description
audit.legacy.events.logging.forced
false

This property controls whether ADVANCED or FULL level logging is enforced for actions that were audited prior to application version 7.0. If this property is set to true, those actions will be audited regardless of coverage settings.

plugin.audit.search.max.concurrent.nontext.requests
10

The maximum number of concurrent non-FREETEXT search requests. Defaults to 10 per node.

plugin.audit.search.max.concurrent.text.requests
5

The maximum number of concurrent FREETEXT search requests. Defaults to 5 per node.

plugin.audit.search.query.timeout
30

The timeout, in seconds, for queued search requests. Defaults to 30 seconds.

plugin.audit.db.limit.rows
10000000

The maximum number of audit event rows stored in the database. When the limit is exceeded, the oldest events are deleted first. Defaults to 10,000,000, checked once every hour.

plugin.audit.db.limit.buffer.rows
1000

The buffer for accepting new audit events. Defaults to 1000 rows.

plugin.audit.db.delete.batch.limit
10000

The maximum number of events deleted per database transaction when enforcing the retention limit. Defaults to 10,000 rows.

plugin.audit.log.view.sysadmin.only
false

Only allows system admins (but not admins) to see the global audit log.

plugin.audit.schedule.db.limiter.interval.mins
60

Database size check, run every 60 minutes.

plugin.audit.broker.exception.loggedCount
3

The maximum number of audit events written to the system log file when an error occurs. Defaults to 3.

plugin.audit.retention.interval.hours
23

Database retention check, which deletes events that exceed the retention period. It runs every day at midnight, and only if the last run was more than 23 hours ago.

plugin.audit.file.max.file.size
100

The size limit, in megabytes, for an individual audit file. When the limit is reached, the file is rolled over. Defaults to 100 MB.

plugin.audit.file.max.file.count
100

The maximum number of audit files. When the limit is reached, the oldest file is deleted. Defaults to 100.

plugin.audit.consumer.buffer.size
10000

The maximum number of audit events held in the buffer while waiting to be consumed. Defaults to 10,000.

plugin.audit.broker.default.batch.size
3000

The maximum number of audit events dispatched to a consumer. Defaults to 3,000 per batch.

plugin.audit.coverage.cache.read.expiration.seconds
30

The time-to-live of the coverage cache. Defaults to 30 seconds.

plugin.audit.distinct.categories.and.actions.cache.refresh.seconds
900

How frequently the distinct categories and actions cache is refreshed. Defaults to 900 seconds.

plugin.audit.retention.file.configuration.expiration.seconds
300

How frequently the retention file configuration cache (i.e. count) is refreshed. Defaults to 300 seconds.

Authentication

See also Connecting Bitbucket to Crowd.

Default | Description
auth.cache.tti
5

Controls the time-to-idle for entries in the authentication cache. A short TTI (5 to 10 seconds) helps narrow the window for malicious users authenticating with outdated credentials and is recommended. Setting this to a value less than 1 will default it to the configured TTL (auth.cache.ttl).

This value is in seconds.

auth.cache.ttl
30

Controls the time-to-live for entries in the authentication cache. Setting this to a value less than 1 will disable the cache. The maximum allowed value is 300 seconds (5 minutes). Longer TTLs pose a security risk, as potentially out-of-date credentials can still be used while they remain in the cache. A short TTL, like the default, can reduce burst load on remote authentication systems (Crowd, LDAP) while keeping potential exposure to outdated credentials low, especially when paired with a shorter TTI.

This value is in seconds.
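As a sketch, an environment where credential revocation must take effect quickly might tighten both cache settings (the values here are illustrative, not recommendations):

```properties
# Shorter idle and live times narrow the window in which outdated
# credentials remain usable, at the cost of more load on Crowd/LDAP.
auth.cache.tti=5
auth.cache.ttl=15
```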

auth.remember-me.enabled
optional

Controls whether remember-me authentication is disabled, always performed or only performed when a checkbox is checked on the login form. The 'Remember my login' checkbox is only displayed when set to 'optional'. Possible values are:

  • always

    No checkbox, remember-me cookie is always generated on successful login.

  • optional

    Checkbox is displayed on login form. Remember-me cookie is only generated when checkbox is checked.

  • never

    Remember-me authentication is disabled completely.
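For example, to always issue a remember-me cookie on successful login and shorten its lifetime (illustrative values):

```properties
# No checkbox is shown; a remember-me cookie is always generated.
auth.remember-me.enabled=always
# Tokens expire after 7 days instead of the default 30.
auth.remember-me.token.expiry=7
```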

auth.remember-me.cookie.name
_atl_bitbucket_remember_me

Defines the cookie name used for the remember-me authentication

auth.remember-me.token.expiry
30

How long remember-me tokens are valid. Note that once a remember-me token is used for authentication, the token is invalidated and a new remember-me token is returned.

This value is in days.

auth.remember-me.token.grace.period
60

How long a token can be re-used for authentication after it has been used to authenticate. This grace period allows parallel authentication attempts to succeed, which commonly happens when a browser is started and opens multiple tabs at once.

This value is in seconds.

auth.remember-me.token.cleanup.interval
300

Controls how frequently expired remember-me tokens are cleaned up.

This value is in minutes.

plugin.auth-crowd.sso.enabled
false

Whether SSO support should be enabled or not. Regardless of this setting SSO authentication will only be activated when the system is connected to a Crowd directory that is configured for SSO.

plugin.auth-crowd.sso.config.ttl
15

The auth plugin caches the SSO configuration that is retrieved from the remote Crowd server. This setting controls the time to live of that cache.

This value is in minutes.

plugin.auth-crowd.sso.config.error.wait
1

If an error occurs while retrieving the SSO configuration from the remote Crowd server, the system will wait this long before retrying. The wait time between subsequent attempts is incremented exponentially (1s -> 1.5s -> 2.3s -> 3.4s, etc). The wait time is capped at the configured TTL.

This value is in seconds.

plugin.auth-crowd.sso.http.max.connections
20

The maximum number of HTTP connections in the connection pool for communication with the Crowd server.

plugin.auth-crowd.sso.http.proxy.host

The name of the proxy server used to transport SOAP traffic to the Crowd server.

plugin.auth-crowd.sso.http.proxy.port

The connection port of the proxy server (must be specified if a proxy host is specified).

plugin.auth-crowd.sso.http.proxy.username

The username used to authenticate with the proxy server (if the proxy server requires authentication).

plugin.auth-crowd.sso.http.proxy.password

The password used to authenticate with the proxy server (if the proxy server requires authentication).

plugin.auth-crowd.sso.http.timeout
5000

The HTTP connection timeout used for communication with the Crowd server. A value of zero indicates that there is no connection timeout.

This value is in milliseconds.

plugin.auth-crowd.sso.socket.timeout
20000

The socket timeout. You may wish to override the default value if the latency to the Crowd server is high.

This value is in milliseconds.

plugin.auth-crowd.sso.session.validationinterval
3

The number of minutes to cache authentication validation in the session. If this value is set to 0, each HTTP request will be authenticated with the Crowd server.

plugin.auth-crowd.sso.session.lastvalidation
atl.crowd.sso.lastvalidation

The session key to use when storing a Date value of the user's last authentication.

plugin.auth-crowd.sso.session.tokenkey
atl.crowd.sso.tokenkey

The session key to use when storing a String value of the user's authentication token.

Avatars

Default | Description
avatar.anonymous.access
false

Controls whether user and project avatars can be accessed anonymously. When disabled, anonymous requests receive the default avatar, the same one returned when no avatar is set.

avatar.max.dimension
1024

Controls the max height and width for an avatar image. Even if the avatar is within the acceptable file size, if its dimensions exceed this value for height or width, it will be rejected. When an avatar is loaded by the server for processing, images with large dimensions may expand from as small as a few kilobytes on disk to consume a substantially larger amount of memory, depending on how well the image data was compressed. Increasing this limit can substantially increase the amount of heap used while processing avatars and may result in OutOfMemoryErrors.

This value is in pixels.

avatar.max.size
1048576

Controls how large an avatar is allowed to be. Avatars larger than this are rejected and cannot be uploaded to the server, to prevent excessive disk usage.

This value is in bytes.

avatar.url.default
mm

Defines the fallback URL to be formatted into the "avatar.url.format.http" or "avatar.url.format.https" URL format for use when a user does not have an acceptable avatar configured. This value may be a URL or, if using Gravatar, it may be the identifier for one of Gravatar's default avatars.

avatar.url.format.http
http://www.gravatar.com/avatar/%1$s.jpg?s=%2$d&d=%3$s

Defines the default URL format for retrieving user avatars over HTTP. This default uses any G-rated avatar provided by the Gravatar service.

The following format parameters are available:

  • %1$s

    The user's e-mail address, MD5 hashed, or "00000000000000000000000000000000" if the user has no e-mail.

  • %2$d

    The requested avatar size.

  • %3$s

    The fallback URL, URL-encoded, which may be defined using "avatar.url.default".

  • %4$s

    The user's e-mail address, not hashed, or an empty string if the user has no e-mail.

avatar.url.format.https
https://secure.gravatar.com/avatar/%1$s.jpg?s=%2$d&d=%3$s

Defines the default URL format for retrieving user avatars over HTTPS. This default uses any G-rated avatar provided by the Gravatar service.

The following format parameters are available:

  • %1$s

    The user's e-mail address, MD5 hashed, or "00000000000000000000000000000000" if the user has no e-mail.

  • %2$d

    The requested avatar size.

  • %3$s

    The fallback URL, URL-encoded, which may be defined using "avatar.url.default".

  • %4$s

    The user's e-mail address, not hashed, or an empty string if the user has no e-mail.
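For instance, to point avatars at a self-hosted, Gravatar-compatible service (the host name below is hypothetical), the same format parameters can be reused:

```properties
# Hypothetical internal avatar service; %1$s, %2$d and %3$s are
# substituted as described in the parameter list above.
avatar.url.format.https=https://avatars.example.com/avatar/%1$s.jpg?s=%2$d&d=%3$s
avatar.url.default=mm
```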

Backup

Default | Description
backup.drain.scm.timeout
60

Controls how long the backup should wait for outstanding SCM operations to complete before failing.

This value is in seconds.

backup.drain.db.timeout
90

Draining database connections during backup happens in two stages. Stage 1 passively waits a set amount of time for all connections to be returned to the pool. If connections are still leased when backup.drain.db.timeout seconds has elapsed then stage 2 begins and will interrupt the owning threads, wait backup.drain.db.force.timeout seconds and finally attempt to roll back and close any remaining connections.

In stage 1 of draining connections during backup, this setting controls how long the backup should wait for outstanding database operations to complete before moving to stage 2. See backup.drain.db.force.timeout

This value is in seconds.

backup.drain.db.force.timeout
30

In stage 2 of draining connections during backup, this property controls how long the backup process should wait (after interrupting the owning threads) for those threads to release the connections before forcibly rolling back and closing them. Note if all connections have been returned to the pool stage 2 is skipped and so this property has no effect. A negative value will skip stage 2 completely. See backup.drain.db.timeout

This value is in seconds.

Branch Information

Default | Description
plugin.bitbucket-branch-information.timeout
5

Controls timeouts for retrieving branch information, which for large repositories can be quite slow and consume a single Git process.

This value is in seconds.

plugin.bitbucket-branch-information.max.branches
1000

Controls the maximum number of branches that are shown to the user.

Branch Utils

Default | Description
plugin.bitbucket-auto-merge.limit
30

This setting is deprecated for removal in 9.0. Use plugin.bitbucket-cascading-merge.limit instead.

plugin.bitbucket-auto-merge.timeout
300

This setting is deprecated for removal in 9.0. Use plugin.bitbucket-cascading-merge.timeout instead.

plugin.bitbucket-branch-model.version.separator
[_\\-.+]

The version component separator for branch model. This value is a regular expression pattern that is used to split branch names parsed as version strings into separate components for comparison. The default value results in any of the following 4 characters being used for comparison: _ - + .

plugin.bitbucket-branch-model.validation.prefix.length
30

The maximum allowed length of branch prefixes in branch model

plugin.bitbucket-cascading-merge.limit
${plugin.bitbucket-auto-merge.limit:30}

Defines the maximum number of merges allowed in a single cascading merge chain. If the number of merges included in the chain exceeds this limit, the entire chain will be skipped. Setting this limit to 0 means cascading merging is effectively disabled, since the merge chain must always be empty.

plugin.bitbucket-cascading-merge.timeout
${plugin.bitbucket-auto-merge.timeout:300}

Defines the maximum amount of time any command used to perform a single cascading merge from the chain is allowed to execute or idle. Since a cascading merge may require a series of different commands at the SCM level, this timeout does not define the upper bound for how long the overall merge process might take; it only defines the duration allotted to any single command.

This value is in seconds.

Build

Default | Description
build.actions.threads.max
1*${scaling.concurrency}

When performing actions on a build status, this sets the upper limit to the number of concurrent threads that can be performing actions at the same time.

There is limited support for mathematical expressions; +,-,*,/ and () are supported.
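As an illustration of the expression support mentioned above, the limit can be derived from the scaling factor (the multiplier here is an example only):

```properties
# Allow twice as many concurrent build-action threads as the scaling base.
build.actions.threads.max=2*${scaling.concurrency}
```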

build.actions.cache.expiry
5

The maximum number of minutes to wait for a response from a remote CI system.

This value is in minutes.

build.actions.cache.max.pending
5000

The maximum number of concurrent requests to a remote CI system.

build.status.reject-untrusted
false

Reject build status that can't be verified to have been sent from a trusted build server.

Bitbucket, via the PluginBuildServerProvider SPI, has the ability to verify the build server that POSTed a build status. This is used to establish the concept of a trusted build status. Bitbucket will always prevent certain actions on untrusted builds, for example it will never provide a list of download URLs for artifacts for an untrusted build. This property allows Bitbucket to outright reject untrusted builds, returning a 400 status code.

Clustering

Default | Description
hazelcast.node.authentication.enabled
true

Enables node authentication. When this is enabled, the group name and password are verified before any other checks are run.

hazelcast.enterprise.license

Specifies a Hazelcast enterprise license. An enterprise license is not required, but supplying one will unlock additional Hazelcast functionality, such as the ability to use the Hazelcast Management Center with clusters containing more than 2 nodes, which may be useful in production environments.

hazelcast.managementcenter.url

Specifies the URL where the Hazelcast Management Center is running. When a URL is configured each node in the cluster will connect to the Management Center at that URL and broadcast its status. This setting is deprecated and will be removed in 9.0

hazelcast.group.name
${user.name}

Specifies the cluster group the instance should join. This can be used, for example, to partition development and production clusters.

hazelcast.group.password
${user.name}

The password required to join the specified cluster group.

hazelcast.http.sessions
local

Specifies how HTTP sessions should be managed.

The following values are supported:

  • local

    HTTP sessions are managed per node. When used in a cluster, the load balancer MUST have sticky sessions enabled. If a node fails or is shut down, users that were assigned to that node need to log in again.

  • sticky

    HTTP sessions are distributed across the cluster with a load balancer configured to use sticky sessions. If a node fails or is shut down, users do not have to log in again. In this configuration, session management is optimized for sticky sessions and will not perform certain cleanup tasks for better performance.

  • replicated

    HTTP sessions are distributed across the cluster. The load balancer does not need to be configured for sticky sessions.

local is the recommended setting for standalone installations. For clustered installations local is the most performant option, followed by sticky and replicated.

hazelcast.local.public.address

In most environments it should not be necessary to configure this explicitly. However, when using NAT, such as when starting cluster nodes using Docker, it may be necessary to configure the public address explicitly to avoid request binding errors.

hazelcast.network.autodetect
false

A boolean flag to indicate whether Hazelcast should auto-detect the appropriate network discovery mechanism. No other properties are required for this configuration. Note that this configuration is not recommended for production environments.

hazelcast.network.aws
false

A boolean flag to indicate whether Hazelcast has AWS EC2 Auto Discovery enabled. When setting this property to true, either hazelcast.network.aws.iam.role or (hazelcast.network.aws.access.key and hazelcast.network.aws.secret.key) become required properties.

hazelcast.network.aws.iam.role

If hazelcast.network.aws is true, then you must either set this property to your AWS role or set hazelcast.network.aws.access.key and hazelcast.network.aws.secret.key in order to discover your cluster node instances via the AWS EC2 API.

hazelcast.network.aws.access.key

If hazelcast.network.aws is true and hazelcast.network.aws.iam.role is not set, then you must set this property to your AWS account access key, in order to discover your cluster node instances via the AWS EC2 API.

hazelcast.network.aws.secret.key

If hazelcast.network.aws is true and hazelcast.network.aws.iam.role is not set, then you must set this property to your AWS account secret key, in order to discover your cluster node instances via the AWS EC2 API.

hazelcast.network.aws.region

The AWS region to query. If empty, Hazelcast's default ("us-east-1") is used. If set, it will override any value for "hazelcast.network.aws.host.header" (see below).

hazelcast.network.aws.host.header

Make sure this property is set to ec2 if using an AWS ECS environment with EC2 Discovery. The Host: header to use when querying the AWS EC2 API. If empty, Hazelcast's default ("ec2.amazonaws.com") is used. If set, hazelcast.network.aws.region shouldn't be set, as it will override this property.

hazelcast.network.aws.port.range

Port to connect to on instances found via the AWS discovery mechanism. Can be a port range in the format 5701-5703 or single port. Providing a range with more than 3 ports may impact the application startup time. By default this is set to the value of the hazelcast.port property

hazelcast.network.aws.security.group.name

There are 2 mechanisms for filtering out AWS instances and these mechanisms can be combined (AND).

  • If hazelcast.network.aws.security.group.name is set, only instances within that security group will be selected.

  • If hazelcast.network.aws.tag.key and hazelcast.network.aws.tag.value are set, only instances with that tag key/value will be selected.

hazelcast.network.aws.tag.key

The AWS tag key to use to filter instances to form a cluster with.

hazelcast.network.aws.tag.value

The AWS tag value to use to filter instances to form a cluster with.
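Putting the AWS discovery properties together, a cluster node might be configured as follows (the role name, region, and tag key/value are placeholders):

```properties
# Discover cluster members via the AWS EC2 API using an IAM role,
# filtered to instances carrying a specific tag.
hazelcast.network.aws=true
hazelcast.network.aws.iam.role=my-bitbucket-role
hazelcast.network.aws.region=us-east-1
hazelcast.network.aws.tag.key=bitbucket-cluster
hazelcast.network.aws.tag.value=production
```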

hazelcast.network.azure
false

A boolean flag to indicate whether Hazelcast has Azure Auto Discovery enabled.

hazelcast.network.azure.instance.metadata.available
false

A boolean flag to indicate whether to use the Azure Instance Metadata service when retrieving current configuration parameters. Set this to false when running outside the Azure environment or when the Azure Instance Metadata service is not available. When setting this property to true, none of hazelcast.network.azure.tenant.id, hazelcast.network.azure.client.id, hazelcast.network.azure.client.secret, hazelcast.network.azure.subscription.id or hazelcast.network.azure.resource.group should be configured. When setting this property to false, all of the properties above should be configured. Note: hazelcast.network.azure.tag should always be configured.

hazelcast.network.azure.tag

If hazelcast.network.azure is true, then you must set this property to the name of the tag on the Hazelcast VM or scale set resources; the value of the tag should be the port that Hazelcast will use to communicate. This property should have the format key\=value; note that the \ in front of the = is needed to ensure the values are read properly.

hazelcast.network.azure.group.name

Note: this has been completely replaced by hazelcast.network.azure.resource.group as part of Hazelcast 5 upgrade.

hazelcast.network.azure.resource.group
${hazelcast.network.azure.group.name}

If hazelcast.network.azure is true and hazelcast.network.azure.instance.metadata.available is false, then you must set this property to the Azure resource group name of the cluster.

hazelcast.network.azure.subscription.id

If hazelcast.network.azure is true and hazelcast.network.azure.instance.metadata.available is false, then you must set this property to your Azure subscription ID.

hazelcast.network.azure.client.id

If hazelcast.network.azure is true and hazelcast.network.azure.instance.metadata.available is false, then you must set this property to your Azure Active Directory Service Principal client ID.

hazelcast.network.azure.client.secret

If hazelcast.network.azure is true and hazelcast.network.azure.instance.metadata.available is false, then you must set this property to your Azure Active Directory Service Principal client secret.

hazelcast.network.azure.tenant.id

If hazelcast.network.azure is true and hazelcast.network.azure.instance.metadata.available is false, then you must set this property to your Azure Active Directory tenant ID.

hazelcast.network.kubernetes
false

A boolean flag to indicate whether Hazelcast has Kubernetes Auto Discovery enabled. No other properties are required for configuration of Kubernetes.

hazelcast.network.multicast
false

A boolean flag to indicate whether Hazelcast has multicasting enabled.

hazelcast.network.multicast.address

The multicast address for the cluster, used to locate members when multicast discovery is enabled. If no value is set here, Hazelcast's default (224.2.2.3) is used. This value should not need to be configured explicitly on most networks.

hazelcast.network.multicast.port
${hazelcast.port}

The multicast port to bind to. By default, this will be the hazelcast.port (5701, unless configured otherwise), and updating that setting will also update this one unless this port is configured explicitly.

hazelcast.network.tcpip
false

A boolean flag to indicate whether Hazelcast has TCP/IP enabled.

hazelcast.network.tcpip.members
localhost:5701,localhost:5702

List of members that Hazelcast nodes should connect to when TCP/IP is enabled. These nodes function as root nodes, allowing cluster nodes to discover each other. This comma-separated list does not need to include every node in the cluster. When new nodes join they will use the connected node to find the other cluster nodes.

hazelcast.port
5701

The network port where Hazelcast will listen for cluster members. If multiple instances are run on the same server Hazelcast will automatically increment this value for additional nodes.
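A minimal two-node cluster using TCP/IP discovery might look like the fragment below (host names, group name, and password are placeholders):

```properties
# Example two-node cluster using static TCP/IP member discovery.
hazelcast.network.tcpip=true
hazelcast.network.tcpip.members=node1.example.com:5701,node2.example.com:5701
hazelcast.port=5701
# Both nodes must share the same group name and password to form a cluster.
hazelcast.group.name=bitbucket-prod
hazelcast.group.password=changeme
```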

Code Insights

Default | Description
plugin.bitbucket-code-insights.reports.expiry.days
60

Controls how long code insight cards are kept in the database.

This value is in days.

plugin.bitbucket-code-insights.pullrequest.changedlines.cache.max
500

Controls the number of pull request diffs kept in the insights diff cache

plugin.bitbucket-code-insights.pullrequest.changedlines.cache.ttl
7200

Controls the number of seconds for which the diff cache is kept.

This value is in seconds.

Code owners

Default | Description
code.owners.file.path
.bitbucket/CODEOWNERS

Controls where Bitbucket will look for the CODEOWNERS file in the repository. The file path should be relative to the root of the repository.

code.owners.maximum.users
100

Controls the maximum number of code owners suggested when creating a new pull request.

code.owners.maximum.filesize
524288

Controls the maximum file size of a CODEOWNERS file, in bytes. Bitbucket will not attempt to read CODEOWNERS files larger than this size.

Comment Likes and Comment Reactions

Default | Description
plugin.bitbucket-comment-likes.max.resources
500

The maximum number of comment likes associated with a single comment. The system will allow more likes than this number to be created, but the results of API calls to get likes will be strictly limited to the value configured here, with appropriate warnings recorded in the application logs.

plugin.bitbucket-comment-likes.max.page
100

The maximum size of a page of comment likes

plugin.bitbucket-comment-likes.max.reactions
50

The maximum number of specific comment reactions for a single comment. The system will not allow you to add more of the same reaction.

Commit Indexing

These properties control how commits are indexed after pushes or pull request merges.

Default | Description
indexing.max.threads
4

Controls the maximum number of threads which are used to perform indexing. The resource limits configured below are not applied to these threads, so using a high number may negatively impact server performance.

indexing.import.max.threads
1

Controls the maximum number of threads which are used to perform indexing during Data Center migration imports. The resource limits configured below are not applied to these threads, so using a high number may negatively impact server performance.

indexing.job.batch.size
250

Defines the number of commits which will be indexed in a single database transaction.

indexing.job.queue.size
150

Defines the maximum number of pending indexing requests. When this limit is reached, attempts to queue another indexing operation will be rejected.

indexing.queue.timeout.poll
60

Controls how long indexing processes are allowed to wait for the next commit to be made available in the commit queue before assuming the process that retrieves the commits is stuck and giving up.

This value is in seconds.

indexing.queue.size
1024

Defines the size of the queue that will be used for indexing. When the limit is reached the program will block until there is space in the queue to add any required new items.

indexing.process.timeout.execution
3600

Controls how long indexing processes are allowed to execute before they are interrupted, even if they are producing output or consuming input.

This value is in seconds.

indexing.snapshot.timeout.execution
120

Controls how long snapshot generation, which captures the state of a repository's branches and tags, is allowed to execute before it is interrupted. This timeout is applied whether the process is producing output or not.

This value is in seconds.

indexing.snapshot.timeout.idle
${indexing.snapshot.timeout.execution}

Controls how long snapshot generation, which captures the state of a repository's branches and tags, is allowed to run without producing output before it is interrupted.

This value is in seconds.

Commit graph cache

Default | Description
commit.graph.cache.min.free.space
1073741824

Controls how much space needs to be available on disk (specifically under <BITBUCKET_HOME>/caches) for caching to be enabled. This setting ensures that the cache plugin does not fill up the disk.

This value is in bytes.

commit.graph.cache.max.threads
2

Defines the number of threads that will be used to create commit graph cache entries.

commit.graph.cache.max.job.queue
1000

Defines the maximum number of pending cache creation jobs.

Commits

Default | Description
commit.diff.context
10

Defines the number of context lines to include around diff segments in commit diffs.

commit.lastmodified.timeout
120

Defines the timeout for streaming last modifications for files under a given path, applying a limit to how long the traversal can run before the process is canceled. This timeout is applied as both the execution and idle timeout.

This value is in seconds.

commit.list.follow.renames
true

Defines whether file history commands in the UI should follow renames by default. Setting this to false may reduce load for repositories with long commit logs, as users will have to manually enable the 'follow renames' toggle in the UI in order to perform the more expensive --follow command.

commit.message.max
262144

Controls the maximum length of the commit message to be loaded when retrieving one or more commits from the SCM. Commit messages longer than this limit will be truncated. The default limit is high enough to not affect processing for the general case, but protects the system from consuming too much memory in exceptional cases.

commit.message.bulk.max
16384

Controls the maximum length of the commit message to be loaded when bulk retrieving commits from the SCM. Commit messages longer than this limit will be truncated. The default limit is high enough to not affect processing for the common case, but protects the system from consuming too much memory when many commits have long messages.

Content

Default | Description
content.archive.timeout
1800

Defines the timeout for archive processes, applying a limit to how long it can take to stream a repository's archive before the process is canceled. This timeout is applied as both the execution and idle timeout.

This value is in seconds.

content.patch.timeout
1800

Defines the timeout for patch processes, applying a limit to how long it can take to stream a diff's patch before the process is canceled. This timeout is applied as both the execution and idle timeout.

This value is in seconds.

Data Center Migration

Default | Description
migration.threadpool.size
2

Maximum size of the thread pool, that is, the maximum number of concurrent migrations.

Database

Database properties allow explicitly configuring the database the system should use. They may be configured directly in bitbucket.properties, or they may be specified during setup. Existing systems may be migrated to a new database using the in-app migration feature.

If no database is explicitly configured, an internal database will be used automatically. Which internal database is used is not guaranteed.

If the jdbc.driver, jdbc.url, jdbc.password and jdbc.user properties are specified in bitbucket.properties when the Setup Wizard runs after installation, then those values will be used, and the Setup Wizard will not display the database configuration screen.

Warning: jdbc.driver and jdbc.url are available to plugins via the ApplicationPropertiesService. Some JDBC drivers allow the username and password to be defined in the URL. Because that property is available throughout the system (and will be included in support requests), that approach should not be used. The jdbc.user and jdbc.password properties should be used for these values instead.

Default value / Description
jdbc.driver
org.h2.Driver

The JDBC driver class that should be used to connect to the database.

The system uses an internal database by default, and stores its data in the home directory.

The following JDBC drivers are bundled with the distribution:

jdbc.url
jdbc:h2:async:${bitbucket.shared.home}/data/db;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE;FILE_LOCK=FILE

This is the JDBC URL that will be used to connect to the database. This URL varies depending on the database you are connecting to. Please consult the documentation for your JDBC driver to determine the correct URL.

jdbc.user
sa

This is the user that will be used to authenticate with the database. This user must have full DDL rights. It must be able to create, alter and drop tables, indexes, constraints, and other SQL objects, as well as being able to create and destroy temporary tables.

jdbc.password

The password that the user defined by jdbc.user will connect with.
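Putting the four jdbc.* properties together, a typical explicit configuration in bitbucket.properties might look like the following (the driver, host, database name and credentials are placeholders; substitute your own). Note that the credentials go in jdbc.user and jdbc.password rather than in the URL, per the warning above:

    jdbc.driver=org.postgresql.Driver
    jdbc.url=jdbc:postgresql://localhost:5432/bitbucket
    jdbc.user=bitbucket
    jdbc.password=changeit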

Database Pool

These properties control the database pool. The pool implementation used is HikariCP. Documentation for these settings can be found in the HikariCP configuration documentation.

To get a feel for how these settings really work in practice, the most relevant classes in HikariCP are:

  • com.zaxxer.hikari.HikariConfig

    Holds the configuration for the database pool and has documentation for the available settings.

  • com.zaxxer.hikari.pool.HikariPool

    Provides the database pool and manages connections.

  • com.zaxxer.hikari.util.ConcurrentBag

    Holds references to open connections, whether in-use or idle.

Default value / Description
db.pool.rejectioncooldown
10

When a connection cannot be leased because the pool is exhausted, the stack traces of all threads which are holding a connection will be logged. This defines the cooldown that is applied to that logging to prevent spamming stacks in the logs on every failed connection request.

This value is in minutes

db.pool.size.idle
5

Defines the number of connections the pool tries to keep idle. The system can have more idle connections than the value configured here. As connections are borrowed from the pool, this value is used to control whether the pool will eagerly open new connections to try and keep some number idle, which can help smooth ramp-up for load spikes.

db.pool.size.max
80

Defines the maximum number of connections the pool can have open at once.

db.pool.timeout.connect
15

Defines the amount of time the system will wait when attempting to open a new connection before throwing an exception. The system may hang, during startup, for the configured number of seconds if the database is unavailable. As a result, the timeout configured here should not be generous.

This value is in seconds.

db.pool.timeout.idle
1750

Defines the maximum period of time a connection may be idle before it is closed. In general, generous values should be used here to prevent creating and destroying many short-lived database connections (which defeats the purpose of pooling).

Note: If an aggressive timeout is configured on the database server, a more aggressive timeout must be used here to avoid issues caused by the database server closing connections from its end. The value applied here should ensure the system closes idle connections before the database server does. This value needs to be less than db.pool.timeout.lifetime otherwise the idle timeout will be ignored.

This value is in seconds.

db.pool.timeout.leak
0

Defines the maximum period of time a connection may be checked out before it is reported as a potential leak. By default, leak detection is not enabled. Long-running tasks, such as taking a backup or migrating databases, can easily exceed this threshold and trigger a false positive detection.

This value is in minutes.

db.pool.timeout.lifetime
30

Defines the maximum lifetime for a connection. Connections which exceed this threshold are closed the first time they become idle and fresh connections are opened.

This value is in minutes.
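As an illustration, an instance backed by a database server that aggressively closes idle connections might combine these settings as follows (illustrative values; note that db.pool.timeout.idle is in seconds while db.pool.timeout.lifetime is in minutes, so the 600 second idle timeout stays safely below the 20 minute lifetime):

    db.pool.size.max=100
    db.pool.size.idle=10
    db.pool.timeout.idle=600
    db.pool.timeout.lifetime=20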

Database Schema

These properties control aspects of how the database schema is managed.

Default value / Description
db.schema.lock.maxWait
5

Defines the maximum amount of time the system can wait to acquire the schema lock. Shorter values will prevent long delays on server startup when the lock is held by another instance or, more likely, when the lock was not released properly because a previous start was interrupted while holding the lock. This can happen when the system is killed while it is attempting to update its schema.

This value is in minutes.

db.schema.lock.pollInterval
5

Defines the amount of time to wait between attempts to acquire the schema lock. Slower polling produces less load, but may delay acquiring the lock.

This value is in seconds.

Deployments

Default value / Description
deployments.commits.max
10000

Limits the number of commits that can be part of a deployment. This ensures Bitbucket doesn't process too many commits at once when receiving a deployment notification and should only be triggered in the rare case where subsequent deployments to an environment have lots of commits between them.

If this limit is reached the deployment will still be accepted and recent commits up to this number will be indexed. However, the remaining commits will not be indexed and therefore not appear as being part of the deployment.

deployments.indexing.threads.max
1*${scaling.concurrency}

When indexing commits in a deployment, this sets the upper limit to the number of concurrent threads that can be performing indexing at the same time.

There is limited support for mathematical expressions; +,-,*,/ and () are supported.

deployments.indexing.timeout
300

Configures a hard upper limit on how long the commit indexing command is allowed to run when indexing commits in a deployment.

This value is in seconds. Using 0, or a negative value, disables the timeout completely.
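For example, to double the indexing concurrency using the supported expression syntax and extend the command timeout (illustrative values):

    deployments.indexing.threads.max=2*${scaling.concurrency}
    deployments.indexing.timeout=600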

Diagnostics

Default value / Description
diagnostics.alert.dispatcher.max.threads
5

The maximum number of alert dispatcher threads. The number of dispatcher threads will only be increased when the alert queue is full and this configured limit has not been reached.

diagnostics.alert.dispatcher.queue.size
250

The number of events that can be queued. When the queue is full and no more threads can be created to handle the events, events will be discarded.

diagnostics.issues.event.dropped.threaddump.cooldown.seconds
60

Configures how often thread dumps should be generated for alerts relating to dropped events. Taking thread dumps can be computationally expensive and may produce a large amount of data when run frequently.

This value is in seconds

diagnostics.issues.event.slow.listener.time.millis
15000

Configures when an alert is raised for a slow event listener. If an event listener takes longer than the configured time to process an event, a warning alert is raised and made visible on the System Health page.

This value is in milliseconds

diagnostics.issues.event.slow.listener.overrides

Configures overrides for specific event listeners and/or specific plugins. This setting can be used to suppress 'slow event listener detected' alerts for specific event listeners or plugins. The value should be a comma-separated list of individual trigger configurations, where a trigger is either the plugin key, or the plugin key followed by the event listener class name.

Overrides are only considered if they specify more tolerant limits than the value specified in the diagnostics.issues.event.slow.listener.time.millis value. Setting a shorter override (e.g. 1000 when the default is 15000) will have no effect.

The following example sets the trigger for the com.company.example-plugin to 60s and sets the limit for the com.company.RepositoryCreatedListener event listener in the same plugin to 30s.

com.company.example-plugin:60000, com.company.example-plugin.com.company.RepositoryCreatedListener:30000

Configured values are in milliseconds

diagnostics.issues.hookscript.slow.time.seconds
30

Defines the maximum amount of time an individual hook script is allowed to execute or idle before a warning would be logged in the diagnostics plugin.

This value is in seconds. The default is 30 seconds, with a 10 second minimum

diagnostics.alert.retention.time.minutes
43200

Configures the minimum amount of time alerts are kept in the database before being periodically truncated. The default (43200 minutes) is 30 days.

This value is in minutes

diagnostics.alert.truncation.interval.minutes
1440

Configures the interval at which alerts are truncated from the database; in case of a fresh instance (or full cluster) (re-)start, this is also the initial offset until the truncation is executed for the first time. The default (1440 minutes) is 24 hours.

This value is in minutes

diagnostics.ipd.monitoring.poll.interval.seconds
60

Configures the interval at which In-product diagnostics jobs emit IPD metrics to JMX and log to a log file.

This value is in seconds. The default is 60 seconds.

diagnostics.ipd.monitoring.poll.storage.interval.minutes
10

Configures the interval at which In-product diagnostics jobs poll storage-related metrics. This value is in minutes. The default is 10 minutes. Setting a value of less than 10 will result in a warning at start-up and the default value of 10 will be used.

Display

These properties control maximum limits of what will be displayed in the web UI.

Default value / Description
display.max.source.lines
20000

Controls how many lines of a source file will be retrieved before a warning banner is shown and the user is encouraged to download the raw file for further inspection. This property relates to page.max.source.lines in that up to (display.max.source.lines/page.max.source.lines) requests will be made to view the page.

display.max.jupyter.notebook.size
2097152

Controls the size of the largest Jupyter notebook that will be automatically loaded and rendered in the source view. Users will be prompted to manually trigger loading the notebook for files larger than this.

This value is in bytes.
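For example, to show fewer source lines before the download banner is displayed, while auto-rendering notebooks of up to 5 MiB (illustrative values):

    display.max.source.lines=10000
    display.max.jupyter.notebook.size=5242880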

Downloads

Default value / Description
http.download.raw.policy
Smart

Controls the download policy for raw content.

Possible values are:

  • Insecure

    Allows all file types to be viewed in the browser.

  • Secure

    Requires all file types to be downloaded rather than viewed in the browser.

  • Smart

    Forces "dangerous" file types to be downloaded, rather than allowing them to be viewed in the browser.

These options are case-sensitive and defined in com.atlassian.http.mime.DownloadPolicy
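For example, to require every raw file type to be downloaded rather than viewed in the browser:

    http.download.raw.policy=Secure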

Encrypted properties

Default value / Description
encrypted-property.cipher.classname

Fully qualified name of the class that's used for decrypting encrypted application properties. An application property will be considered encrypted if the value is prefixed with {ENC}.

The specified class must implement the com.atlassian.db.config.password.Cipher interface, and must be available on the classpath. If this parameter is not specified, any property identified as encrypted will have the identifying prefix {ENC} stripped off and the remaining value treated as unencrypted plain text.

Example usage using basic base64 obfuscation on the SSL key password:

  • Set the cipher class

    encrypted-property.cipher.classname=com.atlassian.db.config.password.ciphers.base64.Base64Cipher

  • Use the encrypted value with the identifying prefix

    server.ssl.key-password={ENC}Y2hhbmdlaXQ=

Deprecated in 9.2, to be removed in 10.0. Use secrets.secured-properties instead. Note that setting both secrets.secured-properties and this property will result in an error. Please use one or the other.

Events

These properties control the number of threads that are used for dispatching asynchronous events. Setting this number too high can decrease overall throughput when the system is under high load because of the additional overhead of context switching. Configuring too few threads for event dispatching can lead to events being queued up, thereby reducing throughput. These defaults scale the number of dispatcher threads with the number of available CPU cores.

Default value / Description
event.dispatcher.core.threads
0.8*${scaling.concurrency}

The minimum number of threads available to the event dispatcher. The ${scaling.concurrency} variable resolves to the number of available CPUs.

event.dispatcher.max.threads
${scaling.concurrency}

The maximum number of event dispatcher threads. The number of dispatcher threads will only be increased when the event queue is full and this configured limit has not been reached.

event.dispatcher.queue.size
4096

The number of events that can be queued. When the queue is full and no more threads can be created to handle the events, events will be discarded.

event.dispatcher.queue.rejectioncooldown
10

When an event cannot be dispatched because the queue is full, the stack traces of all busy event processing threads will be logged. This defines the cooldown that is applied to that logging to prevent spamming the stacks in the logs on every rejected event.

This value is in minutes

event.dispatcher.keepAlive
60

The time a dispatcher thread will be kept alive when the queue is empty and more than core.threads threads are running.

This value is in seconds.
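As an illustration, a host where events are frequently queued under load might raise the dispatcher limits like this (illustrative values; the expressions follow the same ${scaling.concurrency} syntax used by the defaults):

    event.dispatcher.core.threads=${scaling.concurrency}
    event.dispatcher.max.threads=2*${scaling.concurrency}
    event.dispatcher.queue.size=8192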

Executor

Controls the thread pool that is made available to plugins for asynchronous processing.

Default value / Description
executor.max.threads
${scaling.concurrency}

Controls the maximum number of threads allowed in the common ExecutorService. This ExecutorService is used for background tasks, and is also available for plugin developers to use. When more threads are required than the configured maximum, the thread attempting to schedule an asynchronous task to be executed will block until a thread in the pool becomes available. By default, the pool size scales with the number of reported CPU cores. Note: A minimum of 4 is enforced for this property. Setting a lower value will result in the default of 4 threads being used.

Features

These properties control high-level system features, allowing them to be disabled for the entire instance. Features that are disabled at this level are disabled completely. This means instance-level configuration for a feature is overridden. It also means a user's permissions are irrelevant; a feature is still disabled even if the user has the system admin permission.

Default value / Description
feature.attachments
true

Controls whether users are allowed to upload attachments to repositories they have access to. If this feature is enabled and later disabled, attachments which have already been uploaded are not automatically removed.

feature.auth.captcha
true

Controls whether to require CAPTCHA verification when the number of failed logins is exceeded. If enabled, any client who has exceeded a set limit for failed logins using either the web interface or Git hosting will be required to authenticate in the web interface and successfully submit a CAPTCHA before continuing. Disabling this will remove this restriction and allow users to incorrectly authenticate as many times as they like without penalty.

Warning: It is strongly recommended to keep this setting enabled. Disabling it has the following ramifications:

  • Users may lock themselves out of any underlying user directory service (LDAP, Active Directory etc) because the system will pass through all authentication requests (regardless of the number of previous failures) to the underlying directory service.

  • For installations where Bitbucket is used for user management or a directory service with no limit on failed login attempts is used, the system will be vulnerable to brute-force password attacks.

feature.bidi.character.highlighting
true

Controls whether Unicode bidirectional characters are highlighted in code contexts (source view, pull requests, code blocks in comments, etc.). If enabled, these characters will be expanded (e.g. <U+2066>) so they can be easily seen by reviewers.

feature.code.owners
true

Controls whether code owners will be added as suggested reviewers when creating pull requests

feature.commit.graph
true

Controls whether a commit graph is displayed to the left of the commits on the repository commits page.

feature.commit.show.signatures
true

Controls whether commit signature information is indexed, and whether signature verification details are displayed on the commits page and when selecting specific commit hashes.

feature.deployments
true

Controls whether the deployments feature is available in the UI or via REST.

feature.diagnostics
true

Controls whether diagnostics is enabled. Diagnostics looks for known anti-patterns such as long-running operations being executed on event processing threads and identifies the responsible plugins. Diagnostics adds a little bit of overhead, and can be disabled if necessary.

feature.enforce.project.settings
true

Controls whether project admins can enforce restrictions on all the repository settings in a project.

feature.file.editor
true

Controls whether users can edit repository files in the browser and via REST.

When set to false, the edit UI and REST interface are disabled globally.

feature.forks
true

Controls whether repositories can be forked. This setting supersedes and overrides instance-level configuration. If this is set to false, even repositories which are marked as forkable cannot be forked.

feature.getting.started.page
true

Controls whether new users are redirected to a getting started page after their first login.

feature.hook.scripts
false

Controls whether support for uploading, configuring and running hook scripts is enabled.

This feature permits the SYS_ADMIN user to upload scripts that will be executed by the operating system user (i.e. the user that runs the Bitbucket application) and as such may present a security risk in some environments. This feature should be enabled with caution.

feature.git.rebase.workflows
true

Controls whether rebase workflows are enabled for Git repositories. This can be used to fully disable all of the Git SCM's built-in rebase support, including:

  • "Rebase and fast-forward" and "Rebase and merge" merge strategies

  • "Rebase" action for pull requests

  • "Rebase" action for ref sync

When this feature is disabled, repository administrators and individual users cannot override it. However, third-party add-ons can still use the Java API to rebase branches or perform rebase "merges".

feature.data.center.migration.export
true

Controls whether Data Center migration archives can be generated on this instance.

feature.data.center.migration.import
true

Controls whether Data Center migration archives can be imported into the instance.

feature.jira.cloud.devinfo
true

Controls whether the system can send development information to Jira Cloud

feature.jira.commit.checker
true

Controls whether the Jira commit checker feature is enabled.

feature.personal.repos
true

Controls whether personal repositories can be created.

When set to false, personal repository creation is disabled globally.

feature.project.repo.access.tokens
true

Controls whether HTTP access tokens at project and repository level are enabled.

feature.public.access
false

Public access allows anonymous users to be granted access to projects and repositories for read operations, including cloning and browsing repositories. When this feature is enabled, repositories are not globally made public. Rather, it permits public access to be enabled on a per repository or per project basis in the repository and project administration settings respectively.

feature.pull.request.auto.decline
true

Controls whether the process of automatically declining inactive pull requests is available for the system.

When this feature is available, all pull requests that are inactive (no recent comments, pushes etc.) are able to be automatically declined, based on the configured auto decline settings. By default this is turned on for all repositories, but individual projects or repositories are still able to opt-out or configure a different inactivity period.

When this feature is unavailable (by setting this property to false), it is completely inaccessible by the system.

To have the feature still be available, but change the default to off for all repositories (meaning individual projects or repositories have to opt-in), this property should be true and pullrequest.auto.decline.settings.global.enabled should be set to false.
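That opt-in configuration (the feature remains available, but is off by default for every repository) is expressed in bitbucket.properties as:

    feature.pull.request.auto.decline=true
    pullrequest.auto.decline.settings.global.enabled=false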

feature.pull.request.auto.merge
true

Controls whether admins can enable the auto-merge feature at the project and/or repository level.

When this feature is enabled, admins can control whether the auto-merge setting is enabled or disabled for a particular project or repository. When the auto-merge setting is enabled for a repository, users can request that the system automatically merge a pull request targeting that repository on their behalf, once all the merge checks pass.

feature.pull.request.deletion
true

Controls whether the system allows pull requests to be deleted via the UI and REST. Disabling this feature will prevent pull request deletion in all repositories, including by admins and sysadmins, and will override any settings applied to individual repositories.

feature.pull.request.drafts
true

Controls whether draft pull requests are enabled

feature.pull.request.suggestions
true

Controls whether the system allows users to add pull request suggestions through inline comments via the UI.

feature.pull.request.templates
true

Controls whether the system allows users to create and manage pull request templates via REST or UI

feature.rate.limiting
true

Controls whether HTTP requests will be rate limited per user. If this is enabled, repeated HTTP requests from the same user in a short time period may be rate limited. If this is disabled, no requests will be rate limited.

feature.repository.archiving
true

Controls whether a user can archive a repository in the UI or the repository update endpoint in REST. This also allows permission policy setting for repository archiving.

feature.repository.delete.policy
true

Controls whether a user can delete a repository by checking their permission level against the repository delete policy.

feature.repository.management
true

Controls whether admins have access to the Repositories view in the UI or the repository-management endpoint in REST.

feature.required.builds
true

Controls whether admins have access to the Required Builds view in the UI or the required-builds endpoint in REST. Note that this feature is only available for Data Center installations.

feature.reviewer.groups
true

Controls whether a user can manage reviewer groups.

feature.rolling.upgrade
true

Controls whether rolling upgrade can be performed for bug-fix versions.

feature.secret.scanning
true

Controls whether secret scanning is enabled for the system.

feature.system.signed.git.objects
true

Controls whether system created Git objects (such as pull request merge commits) are signed.

When this feature is enabled the application will sign and verify system created Git objects using an automatically generated signing GPG key pair.

feature.ssh.keys.for.code.signing
true

Controls whether SSH keys can be used to sign commits.

feature.smart.mirrors
true

Controls whether mirrors can be connected to the instance. Note that this feature is only available for Data Center installations.

feature.suggest.reviewers
true

Controls whether the suggest reviewers feature is enabled or not.

feature.whats.new
true

Controls whether the what's new feature is enabled or not.

feature.user.time.zone.onboarding
true

Controls whether users with mismatching time zones are shown an alert prompting them to change their user time zone.

feature.websudo
true

Controls whether the web sudo feature is enabled or not.

feature.x509.certificate.signing
true

Controls whether signed commits and tags are verified with X.509 certificates and whether trusted X.509 certificates can be managed by the system.

File editor

Default value / Description
content.upload.max.size
5242880

Controls the maximum allowed file size when editing a file through the browser or file edit REST endpoint

This value is in bytes. Default is 5 MiB

Footer

Default value / Description
footer.links.contact.support

Controls whether the Contact Support link is displayed in the footer. If this is not set, then the link is not displayed. Otherwise, the link will redirect to the URL or email that is provided.

Example formats: a URL (e.g. https://support.example.com) or an email address (e.g. support@example.com).

Fork Syncing (Ref Syncing)

Default value / Description
plugin.bitbucket-repository-ref-sync.fetch.timeout
300

Defines the maximum amount of time fetch commands used to synchronize branches in bulk are allowed to execute or idle. Because fetch commands generally produce the majority of their output upon completion, there is no separate idle timeout. The default value is 5 minutes.

This value is in seconds.

plugin.bitbucket-repository-ref-sync.merge.timeout
300

Defines the maximum amount of time any command used to merge upstream changes into the equivalent branch in a fork is allowed to execute or idle. Since merging branches may require a series of different commands at the SCM level, this timeout does not define the upper bound for how long the overall merge process might take; it only defines the duration allotted to any single command.

This value is in seconds.

plugin.bitbucket-repository-ref-sync.rebase.timeout
300

Defines the maximum amount of time any command used to rebase a fork branch against upstream changes is allowed to execute or idle. Since merging branches may require a series of different commands at the SCM level, this timeout does not define the upper bound for how long the overall rebase process might take; it only defines the duration allotted to any single command.

This value is in seconds.

plugin.bitbucket-repository-ref-sync.threads
3

Controls the number of threads used for ref synchronization. Higher values here will help synchronization keep up with upstream updates, but may produce noticeable additional server load.

Graceful shutdown

Default value / Description
server.shutdown
graceful

Controls whether Tomcat, the SSH server and the job scheduler shut down gracefully or immediately. When graceful shutdown is enabled, the components behave as follows after shutdown is initiated:

  • Tomcat stops accepting new HTTP connections, but active connections are not interrupted.

  • The SSH server stops accepting new SSH connections, but active connections are not interrupted.

  • The job scheduler does not start any new jobs, but currently running jobs are not cancelled.

The system allows these components to shut down gracefully for a period of time, controlled by the graceful.shutdown.timeout property. If this property is set to immediate, in-flight HTTP/SSH requests and scheduled jobs are not waited on and are terminated abruptly in their usual shutdown lifecycle.

graceful.shutdown.timeout
30

This is the minimum time the system is guaranteed to wait for currently active HTTP/SSH requests and scheduled jobs to finish before terminating them. If you are using stop-bitbucket.sh to stop Bitbucket, this property is only effective if its value is less than or equal to the BITBUCKET_SHUTDOWN_TIMEOUT environment variable. It is recommended to set this property at least 10 seconds lower than BITBUCKET_SHUTDOWN_TIMEOUT.

This value is in seconds
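For example, to wait up to 60 seconds for active requests and jobs while keeping the recommended headroom below the shutdown script's own timeout (illustrative values):

    server.shutdown=graceful
    graceful.shutdown.timeout=60

with BITBUCKET_SHUTDOWN_TIMEOUT set to 70 or more in the environment that runs stop-bitbucket.sh (it is an environment variable, not a property in bitbucket.properties).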

Hibernate

Default value / Description
hibernate.format_sql
${hibernate.show_sql}

When hibernate.show_sql is enabled, this flag controls whether Hibernate will format the output SQL to make it easier to read.

hibernate.jdbc.batch_size
20

Controls Hibernate's JDBC batching limit, which is used to make bulk processing more efficient (both for processing and for memory usage).

hibernate.show_sql
false

Used to enable Hibernate SQL logging, which may be useful in debugging database issues. This value should generally only be set by developers, not by customers.

Hook Scripts

Default value / Description
hookscripts.output.max
32768

Applies a limit to how many bytes a single hook script can write to stderr or stdout. The limit is enforced separately for each, so a limit of 32768 allows for a maximum of 65536 bytes of combined output from a single script. Output beyond the configured limit is truncated, and a message is included to indicate so.

This value is in bytes. The default is 32K, with a 16K minimum.

hookscripts.path.shell

Defines the location of bash.exe, which is used when invoking hook scripts on Windows. This property is ignored on other platforms; only Windows requires using a shell to invoke hook scripts.

hookscripts.size.max
10485760

The maximum size of a hook script that can be uploaded.

This value is in bytes.

hookscripts.timeout
120

Defines the maximum amount of time an individual hook script is allowed to execute or idle. Since multiple hook scripts can be registered, this timeout does not define an upper bound for how long overall execution can take; it only defines the duration allotted to any single hook script.

This value is in seconds. The default is 120 seconds, with a 30 second minimum

Importer

Default value / Description
plugin.importer.external.source.request.socket.timeout
30

Controls how long requests to external repository source servers can continue without producing any data before the importer gives up.

This value is in seconds.

plugin.importer.external.source.request.timeout
30

Controls how long requests to external repository source servers can proceed before the importer gives up.

This value is in seconds.

plugin.importer.import.repository.thread.max
8

Maximum size of the thread pool, i.e. the maximum number of concurrent repository imports.

plugin.importer.repository.fetch.timeout.execution
360

Defines the execution timeout for fetch processes, applying a hard limit to how long the operation is allowed to run even if it is producing output or reading input. The default value is 6 hours.

This value is in minutes.

plugin.importer.repository.fetch.timeout.idle
60

Defines the idle timeout for fetch processes, applying a limit to how long the operation is allowed to execute without either producing output or consuming input. The default value is 60 minutes.

This value is in minutes.

JMX

See Enabling JMX counters for performance monitoring.

Default value / Description
jmx.enabled
false

Controls whether JMX management interfaces for the system and its libraries are registered.

Note: Some libraries may register their JMX management interfaces regardless of this setting.

Jira

Default value / Description
plugin.jira-integration.circuitbreaker.enabled
true

Defines whether the circuit breaker is enabled for all Jira sites. If set to true, calls to Jira are allowed. If set to false, calls to all Jira sites will fail and no development information will be sent.

plugin.jira-integration.circuitbreaker.failurethreshold
50

Defines the threshold for when the circuit breaker should switch to OPEN (and not let calls through). When the failure rate is equal to or greater than the threshold, the circuit breaker transitions to OPEN.

This value is a percentage, must be between 0 inclusive and 100 exclusive.

plugin.jira-integration.circuitbreaker.callsinhalfopen
10

Defines the maximum number of calls allowed through when the circuit breaker is HALF OPEN.

plugin.jira-integration.circuitbreaker.slowcallthreshold
2

Threshold for when a call is considered slow. Too many slow calls will transition the circuit breaker to OPEN state.

This value is in seconds.

plugin.jira-integration.circuitbreaker.waitinopenstate
60

Defines the time to spend in OPEN state before transitioning over to HALF OPEN (and eventually CLOSED).

This value is in seconds.
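Taken together, a more tolerant circuit breaker configuration might look like this (illustrative values within the documented bounds):

    plugin.jira-integration.circuitbreaker.enabled=true
    plugin.jira-integration.circuitbreaker.failurethreshold=75
    plugin.jira-integration.circuitbreaker.slowcallthreshold=5
    plugin.jira-integration.circuitbreaker.waitinopenstate=120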

plugin.jira-integration.comment.issues.max
500

Controls the maximum result size allowed when retrieving Jira issues linked to comments.

plugin.jira-integration.pullrequest.attribute.commits.max
100

Controls the maximum number of commits to retrieve when retrieving attributes associated with commits of a pull-request. This value must be between 50 and 1000, which are imposed as lower and upper bounds on any value specified here.

plugin.jira-integration.remote.page.max.issues
20

Controls the maximum number of issues to request from Jira. This value must be between 5 and 50, which are imposed as lower and upper bounds on any value specified here.

plugin.jira-integration.remote.timeout.connection
5000

The connection timeout duration in milliseconds for requests to Jira. This timeout occurs if the Jira server does not answer (e.g. the server has been shut down). This value must be between 2000 and 60000, which are imposed as lower and upper bounds on any value specified here.

This value is in milliseconds.

plugin.jira-integration.remote.timeout.socket
10000

The socket timeout duration in milliseconds for requests to Jira. This timeout occurs if the connection to Jira has been stalled or broken. This value must be between 2000 and 60000, which are imposed as lower and upper bounds on any value specified here.

This value is in milliseconds.

plugin.jira-development-integration.reindex.pullrequests.commit.message.max
${commit.message.bulk.max}

Controls the maximum length of the commit message to be loaded when reindexing pull requests. Any commit messages longer than this limit will be truncated.

Setting this value less than or equal to 0 will prevent indexing commits when reindexing pull requests, which may be useful for sysadmins if including commits results in excessive load on the server.

plugin.jira-development-integration.reindex.pullrequests.command.timeout
300

Defines the timeout for streaming commits when reindexing pull requests. Pull requests are reindexed in batches, with a separate process for each batch. If a batch's process times out, subsequent batches will be skipped.

This value is in seconds, and is applied as both the execution and idle timeout.

plugin.jira-development-integration.threads.core
4

When responding to changes that may need to be sent to Jira, processing will be done by a separate thread pool. This property defines the core and max threads for this thread pool.

There is limited support for mathematical expressions; +,-,*,/ and () are supported.

plugin.jira-development-integration.threads.keepAliveSeconds
60

When responding to changes that may need to be sent to Jira, processing will be done by a separate thread pool. This property defines how long, in seconds, idle threads in this thread pool are kept alive.

There is limited support for mathematical expressions; +,-,*,/ and () are supported.

plugin.jira-development-integration.threads.queue-capacity
10000

When responding to changes that may need to be sent to Jira, processing will be done by a separate thread pool. This property defines the queue capacity for this thread pool.

There is limited support for mathematical expressions; +,-,*,/ and () are supported.
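Since these thread pool properties accept simple mathematical expressions, they could be set like the following sketch (the values are illustrative only):

```properties
# Expressions using +, -, *, / and () are evaluated when read
plugin.jira-development-integration.threads.core=2*4
plugin.jira-development-integration.threads.queue-capacity=5000+5000
```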

plugin.jira-commit-checker.issue-key-cache.expiry
360

Defines the time-to-live of an issue key cache entry after it has been written. This cache holds the valid issue keys for each user, to improve the performance of the commit checker when the same user pushes a new commit with the same issue key.

This value is in minutes.

plugin.jira-commit-checker.issue-key-cache.max
10000

Defines the maximum number of entries that will be stored in the issue key cache.

plugin.jira-commit-checker.issue-key-cache.enabled
true

Defines whether the issue key cache is enabled or disabled.

plugin.jira-commit-checker.circuitbreaker.default.enabled
true

Defines whether the circuit breaker is enabled or disabled for all Jira instances. If set to true, calls to Jira are allowed. If set to false, calls to Jira will fail, no issues will be validated, and the push will be rejected.

plugin.jira-commit-checker.circuitbreaker.default.failurethreshold
50

Defines the threshold at which the circuit breaker should switch to OPEN (and stop letting calls through). When the failure rate is equal to or greater than the threshold, the circuit breaker transitions to OPEN.

This value is a percentage and must be between 0 (inclusive) and 100 (exclusive).

plugin.jira-commit-checker.circuitbreaker.default.callsinhalfopen
10

Defines the maximum number of calls allowed through when the circuit breaker is HALF OPEN.

plugin.jira-commit-checker.circuitbreaker.default.slowcallthreshold
2

Threshold for when a call is considered slow. Too many slow calls will transition the circuit breaker to the OPEN state.

This value is in seconds.

plugin.jira-commit-checker.circuitbreaker.default.waitinopenstate
60

Defines the time to spend in OPEN state before transitioning over to HALF OPEN (and eventually CLOSED).

This value is in seconds.

plugin.jira-commit-checker.jira-validation.timeout
5

The timeout duration for the maximum allowed time for the hook to validate issues in Jira. If the timeout is reached, the push is rejected.

This value is in seconds.

plugin.jira-commit-checker.project.key.ignore
UTC,GMT,ISO,SHA,AES,UTF,RFC

The following project keys will be ignored when validating commit messages. The main use case for this is keys that look like Jira keys but are not in fact Jira keys (e.g. UTF-8).

This value is a comma-separated list and is case-sensitive.
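For example, the default ignore list could be extended with additional non-Jira keys. The extra keys below (MD5, CVE) are hypothetical; note that setting the property replaces the default list, so the defaults are repeated here:

```properties
# Comma-separated and case-sensitive
plugin.jira-commit-checker.project.key.ignore=UTC,GMT,ISO,SHA,AES,UTF,RFC,MD5,CVE
```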

Jira Automatic Transition Trigger Events

These properties control whether events should be converted into Remote Events so they can trigger automatic issue transitions in Jira.

Default value / Description
plugin.dev-summary.pr.commits.threshold
${plugin.jira-integration.pullrequest.attribute.commits.max}

Limit the number of commits to be scanned per pull request

plugin.dev-summary.pr.events.enabled
true

Controls whether pull request events should be published

plugin.dev-summary.branch.events.threshold
10

Limit the number of branch events sent per synchronization. If set to zero, no branch events will be published.

plugin.dev-summary.commit.events.threshold
100

Limit the number of commit events sent per synchronization. If set to zero, no commit events will be published.

plugin.dev-summary.issuechanged.events.threshold
100000

Limit the number of issue changed events sent per synchronization. Issue changed events are based on the issue keys within the commit messages for commit events, or within branch names for branch events. For branches, this value is the maximum number of branches considered. For commits, this value is the maximum number of commits considered. If set to zero, no issue changed events will be published.

plugin.dev-summary.issue.commits.threshold
100

Limit the number of commits to be returned per issue

plugin.dev-summary.repository.trigger.settings.enabled
false

Controls whether the Jira triggers page is visible on repository settings

Jira cloud development information events

These properties control what type of information is sent to Jira Cloud.

Default value / Description
plugin.jira-development-integration.cloud.build-status.send.enabled
true

Controls whether build status information received from third-party CI tools is sent to Jira Cloud.

plugin.jira-development-integration.cloud.deployment.send.enabled
true

Controls whether deployment information received from third-party deployment tools is sent to Jira Cloud.

plugin.jira-development-integration.deployment-environment-type.keywords.dev
dev,review,development,trunk

Keywords used to map a development environment type

plugin.jira-development-integration.deployment-environment-type.keywords.prod
prod,production,prd,live

Keywords used to map a production environment type

plugin.jira-development-integration.deployment-environment-type.keywords.staging
staging,stage,stg,preprod,pre-prod,model,internal

Keywords used to map a staging environment type

plugin.jira-development-integration.deployment-environment-type.keywords.test
test,testing,tests,tst,integration,integ,intg,int,acceptance,accept,acpt,qa,qc,control,quality

Keywords used to map a testing environment type

plugin.jira-development-integration.cloud.deployment.association.values.max
500

Limit the number of values (usually a set of issue keys) for each deployment association entity. The API documentation specifies this should not be set above 500.

plugin.jira-development-integration.cloud.devinfo.commit.files.max
10

Limit the number of files on each Commit object. The API documentation specifies this should not be set above 10.

plugin.jira-development-integration.cloud.devinfo.entity.issuekeys.max
100

Limit the number of issue keys per entity (commit, branch, pull request). The API documentation specifies this should not be set above 100.

plugin.jira-development-integration.cloud.devinfo.event.branch.max
100

Limit the number of branches we process per event. This ensures Bitbucket doesn't process too many branches at once, and should only be triggered on the initial push of a large, existing repository.

plugin.jira-development-integration.cloud.devinfo.event.commit.max
200

Limit the number of commits we process per event. This ensures Bitbucket doesn't process too many commits at once, and should only be triggered on the initial push of a large, existing repository.

plugin.jira-development-integration.cloud.devinfo.repository.entity.max
100

Limit the number of entities (branches, commits, pull requests) we put on each repository object. The API documentation specifies this should not be set above 400.

plugin.jira-development-integration.cloud.permissions.cache.ttl
3600

Controls how long to cache the permissions associated with a Jira Cloud site. Builds, deployments and dev information are not sent to Jira Cloud if the corresponding permission is not present for the registered Jira site. The permission check is done each time before sending the information to Jira Cloud.

This value is in seconds.

Job Scheduler

Controls the scheduler that processes background jobs submitted by the system and by plugins.

Default value / Description
scheduler.history.expiry.days
30

Controls how long job history is remembered after it is last updated before it expires. Job history in the "RunDetails" class available to plugins from atlassian-scheduler is only valid for the last run of the job on the same cluster node. Calls to retrieve RunDetails for the same job on different cluster nodes may return different results.

This value is in days. The value cannot be negative.

scheduler.refresh.interval.minutes
1

Controls the frequency at which the scheduler will automatically poll for changes to clustered jobs. If the value is positive, then the scheduler will refresh its queue of clustered jobs at the specified interval in minutes. If the value is 0 or negative, then jobs submitted on one cluster node will never be scheduled on other cluster nodes. It is not recommended to modify this setting unless recommended by Atlassian Support.

This value is in minutes. Using 0, or a negative value, disables clustering of background jobs completely.

scheduler.worker.threads
4

Controls the number of worker threads that will accept jobs from the queue on each cluster node. If the value is 0 or negative, then the scheduler's default of 4 threads will be used.
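For instance, on a node that runs many background jobs, the worker pool could be enlarged (the value is illustrative):

```properties
# Accept jobs from the queue with 8 worker threads on this node
scheduler.worker.threads=8
```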

Liquibase

Default value / Description
liquibase.commit.block.size
10000

The maximum number of changes executed against a particular Liquibase database before a commit operation is performed. Very large values may cause DBMS to use excessive amounts of memory when operating within transaction boundaries. If the value of this property is less than one, then changes will not be committed until the end of the change set.

Logging

Logging levels for any number of loggers can be set in bitbucket.properties using the following format:

logging.logger.{name}={level}

To configure all classes in the com.atlassian.bitbucket package to DEBUG level:

logging.logger.com.atlassian.bitbucket=DEBUG

To adjust the ROOT logger, you use the special name ROOT (case-sensitive):

logging.logger.ROOT=INFO

Migration

Draining database connections during database migration happens in two stages. Stage 1 passively waits a set amount of time for all connections to be returned to the pool. If connections are still leased when migration.drain.db.timeout seconds has elapsed then stage 2 begins and will interrupt the owning threads, wait migration.drain.db.force.timeout seconds and finally attempt to roll back and close any remaining connections.

Default value / Description
migration.drain.db.timeout
${backup.drain.db.timeout}

In stage 1 of draining connections during migration, this setting controls how long the migration should wait for outstanding database operations to complete before moving to stage 2. See migration.drain.db.force.timeout

This value is in seconds.

migration.drain.db.force.timeout
${backup.drain.db.force.timeout}

In stage 2 of draining connections during migration, this property controls how long the migration process should wait (after interrupting the owning threads) for those threads to release the connections before forcibly rolling back and closing them. Note if all connections have been returned to the pool stage 2 is skipped and so this property has no effect. A negative value will skip stage 2 completely. See migration.drain.db.timeout

This value is in seconds.
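A sketch of overriding both drain stages, assuming longer-running database operations need more time to complete (the values are illustrative):

```properties
# Stage 1: passively wait up to 5 minutes for connections to return to the pool
migration.drain.db.timeout=300
# Stage 2: after interrupting the owning threads, wait another 60 seconds
# before rolling back and closing any remaining connections
migration.drain.db.force.timeout=60
```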

Mirroring

Default value / Description
plugin.mirroring.farm.max.ref.change.queue.dump.size
1024

Defines the maximum number of items that will be returned when querying the contents of the ref changes queue

plugin.mirroring.farm.operation.callback.timeout
300

Defines how long a distributed operation in a mirror farm will wait for responses from other farm members before timing out.

This value is in seconds.

plugin.mirroring.farm.operation.max.inflight
1000

Defines the maximum number of in-flight operations to keep track of per operation type before evicting the eldest.

plugin.mirroring.farm.operation.refchange.chunk.timeout
14400

Defines how long to wait before timing out an operation to distribute and fetch refs. This value is in seconds.

plugin.mirroring.farm.operation.update.refs.timeout
1800

Defines how long to wait before timing out an update-refs command. This value is in seconds.

plugin.mirroring.farm.max.lock.acquisition.attempts
1800

Defines the number of times an attempt is made to acquire a lock on a repository during initial synchronization before giving up.

plugin.mirroring.farm.max.chunk.size
5000

Defines the maximum number of ref-changes before they will be split into a new chunk.

plugin.mirroring.farm.operation.threads
5

Defines the maximum thread pool size for executing distributed operations.

plugin.mirroring.farm.operation.initial.retry.delay
1

Defines how long the system should wait before retrying an operation that has failed the first time.

This value is in seconds.

plugin.mirroring.farm.operation.retry.attempts
5

Defines the maximum number of times an operation is attempted before giving up and the failure is propagated.

plugin.mirroring.farm.vet.threads
5

Defines the maximum thread pool size for executing operations to fix repository inconsistencies.

plugin.mirroring.http.write.enabled
true

For mirror instances, this controls whether SCM write operations should be allowed for HTTP requests.

If enabled, write requests are redirected to the upstream server.

If disabled, an error message will be displayed when attempting a push, etc.

plugin.mirroring.lfs.download.upstream
false

For mirror instances, this controls whether Large File Support (LFS) downloads are always requested from the upstream.

If enabled, LFS downloads are always requested from the upstream

If disabled, LFS downloads are served from the mirror

plugin.mirroring.repository.diagnostics.sync.enabled
false

Defines if reporting of inconsistent repositories on mirrors is enabled.

plugin.mirroring.repository.diagnostics.sync.tolerance
300

Defines how long a repository is allowed to be out of sync, and hence not reported, after it is updated on the upstream or on the mirror, before differing hashes for the repository on the upstream and mirror are first detected. For accurate reporting, this value is strongly recommended to be greater than the value of plugin.mirroring.hash.content.delay.

This value is in seconds.

plugin.mirroring.ssh.upstream.proxying.enabled
true

For upstream/primary instances, this controls whether SCM commands proxied by a mirror over SSH should be allowed.

If enabled, such proxied commands are performed on the upstream with the same user identity, rights and privileges as the user executing the command on the mirror.

If disabled, an error message will be sent when a user attempts to execute a command that must be proxied on the upstream (such as a push).

plugin.mirroring.ssh.proxy.enabled
true

For mirror instances, this controls whether compatible SCM commands are proxied to the upstream on behalf of the user.

If enabled, such commands are performed on the upstream by the mirror acting with the same user identity, rights and privileges as the user executing the command on the mirror.

If disabled, an error message will be sent to the user unless the command can be safely executed on the mirror (e.g. whoami).

Note that SSH push proxying is not supported for mirrors of bitbucket.org and this flag is ignored.

plugin.mirroring.ssh.proxy.parseconfig
false

This controls whether the SSH client created to proxy SCM commands is configured with the runtime user's SSH config file (usually found at ~/.ssh/config)

This is disabled by default but may be enabled if features like host aliasing are required.

plugin.mirroring.ssh.proxy.upstream.timeout.execution
86400

Defines a hard limit for the amount of time that a proxied push to an upstream over SSH can run for even if it is producing input/output

This value is in seconds.

plugin.mirroring.ssh.proxy.upstream.timeout.idle
1800

Defines an idle timeout when proxying pushes over SSH to an upstream server. If no data is sent or received in this amount of time the connection will be terminated.

This value is in seconds.

plugin.mirroring.strict.hosting.status
false

Controls whether the mirror HTTP status endpoint will return a 200 status code only when the mirror is synchronized. Note: use with caution, as at least one mirror node must always be accessible from the upstream server. This is intended for deployments using advanced load balancer configurations.

plugin.mirroring.upstream.url

Defines where the mirror server should be mirroring from; this is also referred to as the base URL of the upstream server. Only define this property for mirror servers.
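A mirror's bitbucket.properties would therefore contain something like the following (the URL is a placeholder):

```properties
# Base URL of the upstream server; set on mirror servers only
plugin.mirroring.upstream.url=https://bitbucket.example.com
```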

plugin.mirroring.capabilities.refresh.interval
10

Controls how frequently the mirror will attempt to refresh the capabilities from the upstream server.

This value is in minutes

plugin.mirroring.local.command.timeout.idle
180

Defines the idle timeout for local commands, applying a limit to how long the operation is allowed to execute without either producing output or consuming input. The default value is 3 minutes.

This value is in seconds.

plugin.mirroring.remote.command.timeout.idle
3660

Defines the idle timeout for remote commands, applying a limit to how long the operation is allowed to execute without either producing output or consuming input. The default value is 61 minutes. This value should be greater than the throttle.resource.mirror-hosting.timeout value set on the upstream.

This value is in seconds.

plugin.mirroring.state.refresh.interval
1

Controls how frequently the mirror will attempt to refresh its state from the upstream server.

This value is in minutes

plugin.mirroring.synchronization.delay.initial
15

Controls how long after startup the first full synchronization should be attempted.

This value is in seconds

plugin.mirroring.synchronization.fetch.timeout.execution
172800

Defines the execution timeout for fetch processes, applying a hard limit to how long the operation is allowed to run even if it is producing output or reading input. The default value is 48 hours.

This value is in seconds.

plugin.mirroring.synchronization.interval
3

Controls how frequently a full synchronization with the upstream server should run

This value is in minutes

plugin.mirroring.synchronization.ls-remote.timeout
15

Controls how long to wait for an ls-remote command to the upstream server before it times out.

This value is in minutes

plugin.mirroring.synchronization.max.failures
3

Controls the number of consecutive failed synchronizations on a repository before the mirror gives up.

plugin.mirroring.synchronization.repository.page.size.max
100

Controls how large the pages should be when requesting upstream repositories

plugin.mirroring.upstream.auth.cache.ttl
300

Controls how long to cache the result of authentication requests made against the primary server. If the same credentials are provided to the mirror within that period, the cached result is used to calculate the outcome of the authentication, avoiding a remote (network) call for the authentication request.

Note: this caches both authentication successes (valid username/password, SSH key registered to a user) and failures (incorrect username/password pairs, unknown SSH keys).

Also note that cache entries expire after they are inserted, not after the last access. Repeatedly making authentication requests to the mirror with the same credentials will not prevent expiry of the result from the cache.

This value is in seconds.

A positive value will cache an authentication result for the configured period. A non-positive value will disable the cache and cause all authenticated requests to be executed remotely every time. Defaults to 5 minutes.

plugin.mirroring.upstream.auth.cache.max
-1

Configures the maximum number of authentication cache entries. Cache entries for Bitbucket mirrors can be heavy for certain permission schemes so this setting can control how much memory is devoted to this cache.

For Bitbucket mirrors, there is one cache entry consumed for each combination of {user credentials, authentication method (HTTP basic/SSH)}

For bitbucket.org mirrors, there is one cache entry consumed for each combination of {user credentials, authentication method (HTTP basic/SSH), repository}

A negative value indicates an unlimited cache. A zero value indicates the cache should be disabled. A positive value indicates a specific cache limit. Defaults to unlimited.

plugin.mirroring.upstream.auth.cache.fallback.ttl
1800

Controls the expiry of values in a secondary cache (separate from plugin.mirroring.upstream.auth.cache.expiry) which caches the result of authentication requests, to be used to recover when a remote authentication request fails for environmental or connectivity reasons. The types of problems this cache will try to overcome are network partitions, request timeouts, socket timeouts, thread interruptions, and invalid HTTP response codes or entities from the primary.

Note: this caches both authentication successes (valid username/password, SSH key registered to a user) and failures (incorrect username/password pairs, unknown SSH keys).

Also note that cache entries expire after they are inserted, not after the last access. Repeatedly making authentication requests to the mirror with the same credentials will not prevent expiry of the result from the cache.

This value is in seconds.

A positive value will cache the authentication results for the configured period. If the value is positive and smaller than plugin.mirroring.upstream.auth.cache.expiry, it will be adjusted upwards. A non-positive value will disable the cache and cause any failing authentication requests to not attempt recovery using a previously cached result, thus failing any related authentication requests by clients. Defaults to half an hour.

plugin.mirroring.upstream.auth.cache.fallback.max
-1

Configures the maximum number of fallback authentication cache entries. Cache entries for Bitbucket mirrors can be heavy for certain permission schemes so this setting can control how much memory is devoted to this cache.

For Bitbucket mirrors, there is one cache entry consumed for each combination of {user credentials, authentication method (HTTP basic/SSH)}

For bitbucket.org mirrors, there is one cache entry consumed for each combination of {user credentials, authentication method (HTTP basic/SSH), repository}

A negative value indicates an unlimited cache. A zero value indicates the cache should be disabled. A positive value indicates a specific cache limit. Defaults to unlimited.
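A sketch combining the authentication cache properties above, assuming a memory-constrained mirror where unbounded caches are undesirable (the values are illustrative):

```properties
# Cache authentication results for 10 minutes after insertion
plugin.mirroring.upstream.auth.cache.ttl=600
# Bound the primary and fallback caches instead of leaving them unlimited
plugin.mirroring.upstream.auth.cache.max=5000
plugin.mirroring.upstream.auth.cache.fallback.max=5000
```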

plugin.mirroring.upstream.event.ref.change.max.count
25

Defines the maximum number of ref-changes that will be published to the upstream server as part of the RepositorySynchronizedEvent. This prevents event objects from occupying unbounded amounts of memory while they are queued to be published or processed.

plugin.mirroring.upstream.request.socket.timeout
15

Controls how long active requests to upstream servers can continue without producing any data before the mirror gives up. This must be smaller than plugin.ssh.auth.timeout by a decent margin if auth fallback caching is to be effective for socket timeouts.

This value is in seconds.

plugin.mirroring.upstream.request.timeout
15

Controls how long requests to upstream servers can proceed before the mirror gives up. This must be smaller than plugin.ssh.auth.timeout by a decent margin if auth fallback caching is to be effective for request timeouts.

This value is in seconds.

plugin.mirroring.delete.on.startup
false

Controls whether the system will delete mirrors upon startup. Note: deleting mirrors in this way does not put them into a "deleted" state and they will continue to function as normal if they are still connected to a different upstream instance. This property can be used when setting up clones of a production instance.

Notifications

Default value / Description
plugin.bitbucket-notification.mail.max.comment.size
2048

Controls the maximum allowed size of a single comment in characters (not bytes). Extra characters will be truncated.

plugin.bitbucket-notification.mail.max.description.size
2048

Controls the maximum allowed size of a single description in characters (not bytes). Extra characters will be truncated.

plugin.bitbucket-notification.mentions.enabled
true

Controls whether mentions are enabled

plugin.bitbucket-notification.max.mentions
200

Controls the maximum number of allowed mentions in a single comment

plugin.bitbucket-notification.sendmode.default
BATCHED

Default mode for sending notifications for users who have not set an explicit preference.

This value is either BATCHED or IMMEDIATE.

plugin.bitbucket-notification.batch.min.wait.minutes
10

The minimum time to wait for new notifications in a batch before sending it (inactivity timeout).

This value is in minutes.

plugin.bitbucket-notification.batch.max.wait.minutes
30

The maximum time to wait since the first notification of a batch before sending it (staleness avoidance timeout).

This value is in minutes.

plugin.bitbucket-notification.batch.notification.flush.limit
40

The maximum number of notifications collected in a batch before the batch is sent automatically. Once this number of notifications is reached, the batch will be sent regardless of the time settings.
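The batching behaviour above could be tuned as follows (the values are illustrative):

```properties
# Default users to immediate notifications rather than batches
plugin.bitbucket-notification.sendmode.default=IMMEDIATE
# For users who opt into batching, flush after 5 minutes of inactivity
# and never hold a batch longer than 15 minutes
plugin.bitbucket-notification.batch.min.wait.minutes=5
plugin.bitbucket-notification.batch.max.wait.minutes=15
```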

Paging

These properties control the maximum number of objects which may be returned on a page, regardless of how many were actually requested by the user. For example, if a user requests Integer.MAX_INT branches on a page, their request will be limited to the value set for page.max.branches.

This is intended as a safeguard to prevent enormous requests from tying up the server for extended periods of time and then generating responses whose payload is prohibitively large. The defaults configured here represent a sane baseline, but may be overridden by customers if necessary.

Default value / Description
page.max.attachments
500

Maximum number of attachments per page.

page.max.branches
1000

Maximum number of branches per page.
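For example, this limit could be lowered by overriding the property in bitbucket.properties (the value is illustrative):

```properties
# Cap branch listings at 500 entries per page instead of 1000
page.max.branches=500
```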

page.max.changes
1000

Maximum number of changes per page. Unlike other page limits, this is a hard limit; subsequent pages cannot be requested when the number of changes in a commit exceeds this size.

page.max.commits
100

Maximum number of commits per page.

page.max.diff.lines
10000

Maximum number of segment lines (of any type, total) which may be returned for a single diff. Unlike other page limits, this is a hard limit; subsequent pages cannot be requested when a diff exceeds this size.

page.max.directory.children
1000

Maximum number of directory entries which may be returned for a given directory.

page.max.directory.recursive.children
100000

Maximum number of file entries which may be returned for a recursive listing of a directory. A relatively high number as this is used by the file finder which needs to load the tree of files upfront.

page.max.groups
1000

Maximum number of groups per page.

page.max.granted.permissions
1000

Maximum number of granted permissions per page.

page.max.hookscripts
100

Maximum number of hook scripts per page.

page.max.index.results
50

Maximum number of commits which may be returned from the index when querying by an indexed attribute. For example, this limits the number of commits which may be returned when looking up commits against a Jira issue.

page.max.projects
1000

Maximum number of projects per page.

page.max.pullrequest.activities
500

Maximum number of pull request activities per page.

page.max.pullrequests
1000

Maximum number of pull requests per page.

page.max.repositories
1000

Maximum number of repositories per page.

page.max.reviewergroups
100

Maximum number of reviewer groups per page.

page.max.source.length
5000

Maximum length for any line returned from a given file when viewing source. This value truncates long lines. There is no mechanism for retrieving the truncated part short of downloading the entire file.

page.max.source.lines
5000

Maximum number of lines which may be returned from a given file when viewing source. This value breaks large files into multiple pages. This property relates to display.max.source.lines in that up to (display.max.source.lines/page.max.source.lines) requests will be made to view the page.

page.max.tags
1000

Maximum number of tags per page.

page.max.users
1000

Maximum number of users per page.

page.scan.pullrequest.activity.size
500

The page size to use when searching activities to find a specific one.

page.scan.pullrequest.activity.count
4

The number of pages of activities to scan, when searching for a given activity, before giving up.

Password Reset

Default value / Description
password.reset.validity.period
4320

Controls how long a password reset token remains valid. The default period is 72 hours.

This value is in minutes.

Process execution

Controls timeouts for external processes, such as git.

Default value / Description
process.timeout.execution
120

Configures a hard upper limit on how long the command is allowed to run even if it is producing output.

This value is in seconds. Using 0, or a negative value, disables the timeout completely.

process.timeout.idle
60

The idle timeout configures how long the command is allowed to run without producing any output.

This value is in seconds. Using 0, or a negative value, disables the timeout completely.
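A sketch of relaxing both process timeouts, e.g. for very large repositories where git commands legitimately run longer (the values are illustrative):

```properties
# Allow external commands such as git to run for up to 10 minutes total
process.timeout.execution=600
# Terminate a command after 2 minutes with no output
process.timeout.idle=120
```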

Profiling

Default value / Description
atlassian.profile.maxframenamelength
300

Controls the maximum character length of the frame names that are written to the profiler log. Any characters beyond this maximum are truncated. Ex: [83.0ms] - nio: /usr/bin/git log --format=commit %H%n%H%x02%P%x02%aN%x02%aE%x02%at%x02%cN%x02%cE%x02%ct%n%B%n%x03END%x04 -2 --no-min-parents ...

This value is in number of characters.

atlassian.profile.mintime
1

Controls the threshold time below which a profiled section should not be reported.

This value is in milliseconds.

atlassian.profile.mintotaltime
0

Controls the threshold time below which a profiled request should not be reported.

This value is in milliseconds.

Pull Request - Reviewer Groups

既定値説明
pullrequest.reviewergroups.max.size
100

Defines the maximum number of users a reviewer group may contain

Pull Request - Suggestions

既定値説明
pullrequest.suggestions.commit-author
AUTHOR

Defines whether the author of the commit that results when applying a suggestion should be the suggestion author or the user who applied the suggestion. Valid values are:

  • AUTHOR - the suggestion commit author should be the suggestion author

  • ACTOR - the suggestion commit author should be the user who applied the suggestion

Regardless of this value, the merge commit committer will always be the user who merged the pull request.

pullrequest.suggestions.drift.timeout
30

Defines the maximum amount of time SCM commands used to drift suggestions back to the source branch are allowed to execute before they are terminated. In most cases a user will be waiting whilst this happens, so we need to balance between applying the suggestion and user experience. If this timeout is exceeded, the user is advised to apply the suggestion manually.

This value is in seconds.

Pull Request Commit Indexing

Controls how commits are associated with pull requests.

既定値説明
pullrequest.commit.indexing.maximum.commits
1000

Defines the maximum number of commits that will be associated with a pull request when a pull request is created or when new commits are pushed to a pull request. A larger number will impact pull request creation and rescoping time. Pull requests that have more than this number of commits may not appear linked in the commit screen.

pullrequest.commit.indexing.backfill.maximum.processed
5000

Defines the maximum number of pull requests to process at once while backfilling pull request commits. A lower limit will mean less data needs to be stored in memory at once, but processing will take longer as a result.

pullrequest.commit.indexing.backfill.batch.size
250

Defines the maximum number of pull request commit relationships to commit to the database in a single transaction.

pullrequest.commit.indexing.backfill.process.timeout
600

Defines the maximum amount of time SCM commands used to backfill pull request commits are allowed to execute before they are terminated.

This value is in seconds.

Pull Requests

既定値説明
pullrequest.auto.decline.settings.global.enabled
true

Controls whether or not automatically declining inactive pull requests is on by default for all repositories. This setting applies when a repository and its project do not have any explicit auto decline configuration.

By default this is set to true and individual projects and repositories are able to opt-out. Setting this to false means automatically declining inactive pull requests is instead off by default, and individual projects and repositories are able to opt-in.

pullrequest.auto.decline.settings.global.inactivityWeeks
4

Controls the default inactivity period for all repositories when automatically declining inactive pull requests. This setting applies when a repository and its project do not have any explicit auto decline configuration. Individual projects and repositories are able to override this value by configuring a different inactivity period.

This value is in weeks, and must be set to a valid inactivity weeks value (1, 2, 4, 8, or 12). If the provided inactivity weeks value is invalid, the system will default to 4 weeks.
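For example, the two global auto-decline properties might be combined to enable the feature with an 8-week inactivity window (individual projects and repositories can still override this); the values are illustrative only:

```properties
pullrequest.auto.decline.settings.global.enabled=true
# Must be one of 1, 2, 4, 8 or 12; invalid values fall back to 4
pullrequest.auto.decline.settings.global.inactivityWeeks=8
```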

pullrequest.bulk.rescope.timeout
300

Defines the maximum amount of time any command used to analyse the effects of ref changes to open pull requests (rescoping) is allowed to execute. For these commands, the standard idle timeout for processes applies.

This value is in seconds

pullrequest.deletion.role
AUTHOR

Determines which users can delete pull requests. Valid values are:

  • AUTHOR - pull request authors and repository admins can delete pull requests

  • REPO_ADMIN - repository admins can delete pull requests

pullrequest.diff.context
10

Defines the number of context lines to include around diff segments in pull request diffs. By default, git only includes 3 lines. The default is 10, to try and include a bit more useful context around changes, until the ability to "expand" the context is implemented.

pullrequest.diff.context.expand.size
25

Defines the number of context lines to show at a time when clicking the expand buttons around segments in the pull request diff.

pullrequest.merge.commit-author
AUTHOR

Defines whether the author of the merge commit that results when merging a pull request should be the pull request author or the user who merged the pull request. Valid values are:

  • AUTHOR - the merge commit author should be the pull request author

  • ACTOR - the merge commit author should be the user who merged the pull request

Regardless of this value, the merge commit committer will always be the user who merged the pull request.

pullrequest.merge.timeout
300

Defines the maximum amount of time any command used to merge a pull request is allowed to execute or idle. Since merging a pull request may require a series of different commands at the SCM level, this timeout does not define the upper bound for how long the overall merge process might take; it only defines the duration allotted to any single command.

This value is in seconds.

pullrequest.ref.cleanup.max.attempts
10

Defines the maximum number of attempts to be made by the system to clean up the pull request refs when the pull request is merged/declined/deleted.

pullrequest.ref.cleanup.retry.interval
10

Defines the amount of time between retry attempts made by the system to clean up the pull request refs, after it fails to do so for a merged/declined/deleted pull request.

This value is in seconds.

pullrequest.rescope.commits.display
5

Defines the maximum number of commits per type (either added or removed) to display in a rescope activity.

pullrequest.rescope.commits.max
1000

Defines the absolute maximum number of commits that will be evaluated when attempting to determine, for a given rescope activity, which commits were added to or removed from a pull request. Adjusting this setting can have significant memory footprint impact on the system. It is not recommended to be changed, but the option is provided here to support unique use cases.

pullrequest.rescope.cleanup.interval
30

Controls how frequently empty rescope activities are cleaned up. Because pull requests rescope very frequently it is important to remove empty rescopes from the database to keep activity queries performant.

This value is in minutes.

pullrequest.rescope.detail.threads
2

Defines the maximum number of threads to use for precalculating rescope details. These threads perform the requisite processing to determine the commits added and removed when a pull request is rescoped, where most rescopes do not add or remove any commits. Such "dead" rescopes are deleted during processing. The primary goal is to ensure all details have already been calculated when users try to view a pull request's overview.

pullrequest.rescope.drift.commandtimeout
180

Defines the maximum amount of time SCM commands used to perform comment drift are allowed to execute before they are terminated. Aggressive timeouts should not be used here. Commands which timeout may result in comments which are still present in the diff being orphaned or drifted incorrectly.

This value is in seconds.

pullrequest.rescope.drift.maxattempts
5

Controls how many times comment drift will be retried when it fails. Certain failure types are considered to be unrecoverable and will not be retried regardless of this setting. Unrecoverable failures are logged.

pullrequest.rescope.drift.threads
4

Defines the maximum number of threads to use when processing comment drift for a pull request during rescope. Higher numbers here do not necessarily mean higher throughput! Performing comment drift will usually force a new merge to be created, which can be very I/O intensive. Having a substantial number of merges running at the same time can significantly reduce the speed of performing comment drift.

pullrequest.rescope.ref-resolve.max.threads
20

Defines the maximum number of threads to use for resolving pull request refs during rescoping. Higher numbers may improve the performance of rescoping for large fork-based workflows.

A value of 0 disables concurrent resolving of pull request refs.

pullrequest.rescope.threads
1

Defines the maximum number of threads to use for rescoping pull requests.

Pull request auto-merge

既定値説明
pullrequest.auto.merge.job.interval
15m

Controls how often the pull request auto-merge job runs. This job processes all pending auto-merge requests across all pull requests.

This value is in minutes by default but certain suffixes (s,m,h,d for seconds, minutes, hours, or days) can be used to make the value more readable. Standard ISO-8601 format used by java.time.Duration (PT1H, for example) is also supported.

pullrequest.auto.merge.max.queue.size
1000

Controls the maximum size of the queue that holds auto-merge requests which were ready to merge, but whose source and/or target branch was not up to date with the corresponding SCM refs.

pullrequest.auto.merge.max.threads
1

Controls the maximum number of threads allowed per node to process auto-merge requests.

pullrequest.auto.merge.request.max.lifetime
14d

Controls how long the system will wait for the pull request to be ready for merging since the time auto-merge was requested. If the pull request was not merged after this time, the auto-merge request will be cancelled by the system and the user has to either manually merge the pull request when it is ready to merge or request auto-merge again.

This value is in days by default but certain suffixes (s,m,h,d for seconds, minutes, hours, or days) can be used to make the value more readable. Standard ISO-8601 format used by java.time.Duration (PT1H, for example) is also supported.

pullrequest.auto.merge.request.mergeabilityCheck.timeout
30s

Controls the maximum time allowed to check the mergeability of a pull request while trying to auto-merge it. If the time taken exceeds this time, the auto-merge request will be cancelled for the corresponding pull request.

This value is in seconds by default but certain suffixes (s,m,h,d for seconds, minutes, hours, or days) can be used to make the value more readable. Standard ISO-8601 format used by java.time.Duration (PT1H, for example) is also supported.
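Since these auto-merge properties all accept the same duration syntax, the following sketch shows the three equivalent ways a value can be written (plain number in the property's default unit, unit suffix, or ISO-8601); the values themselves are illustrative only:

```properties
# Plain number: interpreted in the property's default unit (minutes for this property)
pullrequest.auto.merge.job.interval=5
# Suffix form: 7 days
pullrequest.auto.merge.request.max.lifetime=7d
# ISO-8601 form (java.time.Duration): 45 seconds
pullrequest.auto.merge.request.mergeabilityCheck.timeout=PT45S
```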

Ref Restrictions (Branch permissions)

既定値説明
plugin.bitbucket-ref-restriction.case.insensitive
true

Controls whether refs are matched case insensitively for ref restrictions.

plugin.ref-restriction.feature.ascii.art
true

Controls whether ASCII art is displayed when a push is rejected.

plugin.ref-restriction.feature.splash
true

Controls whether new users are shown a splash page when first viewing ref restrictions.

plugin.bitbucket-ref-restriction.max.resources
100

The maximum number of ref restrictions per repository. This count refers to each permission item, not each branch. High numbers of branch permissions will adversely impact the speed of pushes to the repository, so increasing this limit is not recommended.

plugin.bitbucket-ref-restriction.max.resource.entities
50

The maximum number of access grants per ref restriction.

Ref metadata

既定値説明
ref.metadata.timeout
2

Controls timeouts for retrieving metadata associated with a collection of refs from all metadata providers collectively.

This value is in seconds.

ref.metadata.max.request.count
100

Maximum number of refs that can be used in a metadata query.

Ref searching

既定値説明
ref.search.boost.branches.max
1000

Maximum number of refs for which exact and prefix matching can be guaranteed to return results in the correct order.

Repository shortcuts

既定値説明
plugin.repository.shortcut.url.scheme.extended.whitelist

The extended whitelist for allowed URL schemes. The URL for a repository shortcut must begin with one of the schemes in the default whitelist or a scheme in this property. Schemes should be comma separated, e.g. 'scheme1:,scheme2:'.
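A sketch of extending the whitelist; the scheme names are the placeholders from the description above, not real schemes:

```properties
# Comma-separated; each entry includes the trailing colon
plugin.repository.shortcut.url.scheme.extended.whitelist=scheme1:,scheme2:
```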

Resource Throttling

These properties define concurrent task limits for the ThrottleService, limiting the number of concurrent operations of a given type that may be run at once. This is intended to help prevent overwhelming the server hardware with running processes. Two settings are used to control the number of processes that are allowed to process in parallel: one for the web UI and one for 'hosting' operations (pushing and pulling commits, cloning repositories).

When the limit is reached for the given resource, the request will wait until a currently running request has completed. If no request completes within a configurable timeout, the request will be rejected.

When requests to the UI are rejected, users will see either a 501 error page indicating the server is under load, or a popup indicating part of the current page failed to load.

When SCM hosting commands (pull/push/clone) are rejected, it is messaged in multiple ways:

  • An error message is returned to the client, which the user will see on the command line: "Bitbucket is currently under heavy load and is not able to service your request. Please wait briefly and try your request again"

  • A warning message is logged for every time a request is rejected due to the resource limits, using the following format: "A [scm-hosting] ticket could not be acquired (12/12)"

  • For five minutes after a request is rejected, a red banner will be displayed in the UI to warn that the server is reaching resource limits.

The underlying machine-level limits these are intended to prevent hitting are very OS- and hardware-dependent, so you may need to tune them for your instance. When hyperthreading is enabled for the server's CPUs, for example, it is likely that the default settings will allow sufficient concurrent operations to saturate the I/O on the machine. In such cases, we recommend starting off with a less aggressive default on multi-cored machines; the value can be increased later if hosting operations begin to back up.

Additional resource types may be configured by defining a key with the format 'throttle.resource.<resource-name>'. When adding new types, it is strongly recommended to configure their ticket counts explicitly using this approach.
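A minimal sketch of defining an additional resource type; the resource name my-resource and its limit are hypothetical placeholders:

```properties
# Allow at most 10 concurrent operations of this custom resource type
throttle.resource.my-resource=10
```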

既定値説明
scaling.concurrency
cpu

Allows adjusting the scaling factor used. Various other CPU/throttling dependent properties are defined in terms of this setting, so adjusting this value implicitly adjusts those. Some examples:

  • event.dispatcher.core.threads

  • event.dispatcher.max.threads

  • executor.max.threads

  • throttle.resource.scm-hosting

The default value, cpu, resolves to the number of detected CPU cores. On hyperthreaded machines, this will be double the amount of physical cores.

throttle.resource.git-lfs
80

Limits the number of Git LFS file transfer operations which may be running concurrently. This is primarily intended to prevent Git LFS requests from consuming all available connections thereby degrading UI and Git hosting operation. This should be a fraction of the maximum number of concurrent connections permitted by Tomcat.

throttle.resource.git-lfs.timeout
0

Controls how long threads will wait for Git LFS uploads/downloads to complete when the system is already running the maximum number of concurrent transfers. It is recommended this be set to zero (i.e. don't block) or a few seconds at most. Since waiters still hold a connection, a non-zero wait time defeats the purpose of this throttle.

This value is in seconds.

throttle.resource.mirror-hosting
2*${scaling.concurrency}

Limits the number of SCM hosting operations served to mirrors which may be running concurrently. This limit is intended to protect the system's CPU and memory from being consumed excessively by mirror operations.

Note that dynamic throttling is not supported for mirror hosting operations.

throttle.resource.mirror-hosting.timeout
3600

Controls how long threads will wait for SCM mirror hosting operations to complete when the system is already running the maximum number of SCM commands.

This value is in seconds.

throttle.resource.scm-command
50

Limits the number of SCM commands, such as: git diff, git blame, or git rev-list, which may be running concurrently. This limit is intended to prevent the operations which support the UI from preventing push/pull operations from being run.

throttle.resource.scm-command.timeout
2

Controls how long threads will wait for SCM commands to complete when the system is already running the maximum number of SCM commands.

This value is in seconds.

throttle.resource.scm-hosting.timeout
300

Controls how long threads will wait for SCM hosting operations to complete when the system is already running the maximum number of SCM hosting operations.

This value is in seconds.

throttle.resource.scm-hosting.strategy
adaptive

Specifies the strategy for throttling SCM hosting operations. Possible values are 'adaptive' and 'fixed'.

If 'fixed' is specified, throttle.resource.scm-hosting.fixed.limit is used as the fixed upper limit on the number of concurrent hosting operations.

If 'adaptive' is specified, the maximum number of hosting operations will vary between throttle.resource.scm-hosting.adaptive.limit.min and throttle.resource.scm-hosting.adaptive.limit.max based on how many hosting operations the system believes the machine can support in its current state and given past performance.

If any configured adaptive throttling setting is invalid and reverts to a default, but this conflicts with other correctly configured or default settings, the throttling strategy will revert to 'fixed'. For example, this will occur if throttle.resource.scm-hosting.adaptive.limit.min is set to the same value as throttle.resource.scm-hosting.adaptive.limit.max.

throttle.resource.scm-hosting.adaptive.limit.min
1*${scaling.concurrency}

When the adaptive strategy is enabled for throttling SCM hosting operations, this sets the lower limit on the number of SCM hosting operations, meaning pushes and pulls over HTTP or SSH, which may be running concurrently.

Setting a lower limit provides a way to specify a minimum service level for SCM hosting operations regardless of what the adaptive throttling heuristic believes the machine can handle.

There is limited support for mathematical expressions; +,-,*,/ and () are supported.

throttle.resource.scm-hosting.adaptive.limit.max
4*${scaling.concurrency}

When the adaptive strategy is enabled for throttling SCM hosting operations, this sets the upper limit on the number of SCM hosting operations, meaning pushes and pulls over HTTP or SSH, which may be running concurrently. This is intended primarily to prevent pulls, which can be very memory-intensive, from exhausting a server's resources. Note that if the machine does not have sufficient memory to support this default value or an explicitly configured value, a smaller value will be chosen on startup.

Adaptive throttling will vary the total number of tickets between the configured minimum and maximum limits. There is limited support for mathematical expressions; +,-,*,/ and () are supported.

throttle.resource.scm-hosting.adaptive.cpu.target
0.75

When the adaptive strategy is enabled for throttling SCM hosting operations, this sets the target CPU utilisation for the machine (across all processors) which the system takes into consideration when calculating the current throttling limit.

This value represents a trade-off: higher numbers may boost raw throughput of hosting operations, but at the expense of overall system responsiveness for all users. Increasing the target too high or too low brings diminishing returns.

This must be a value between 0.0 and 1.0, representing a proportion of the total available CPU power across all cores.

throttle.resource.scm-hosting.fixed.limit
1.5*${scaling.concurrency}

When the fixed strategy is enabled for throttling SCM hosting operations, this limits the number of SCM hosting operations, meaning pushes and pulls over HTTP or SSH, which may be running concurrently. This is intended primarily to prevent pulls, which can be very memory-intensive, from exhausting a server's resources. There is limited support for mathematical expressions; +,-,*,/ and () are supported.
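The expression support means limits can be written relative to scaling.concurrency rather than as absolute numbers; a sketch with illustrative values only:

```properties
# Pin the scaling factor to 8 tickets regardless of detected cores
scaling.concurrency=8
# Use the fixed strategy with a limit of 2 * 8 = 16 concurrent hosting operations
throttle.resource.scm-hosting.strategy=fixed
throttle.resource.scm-hosting.fixed.limit=2*${scaling.concurrency}
```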

throttle.resource.scm-refs
8*${scaling.concurrency}

Limits the number of ref advertisement operations which may be running concurrently. These are throttled separately from clone operations because they are much more lightweight and much shorter, so many more of them can safely be allowed to run concurrently.

throttle.resource.scm-refs.timeout
120

Controls how long threads will wait for ref. advertisement operations to complete when the system is already running the maximum number of ref. advertisement operations.

This value is in seconds.

Rest

既定値説明
plugin.rest.raw.content.markup.max.size
${plugin.bitbucket-readme.max.size:5242880}

Controls the maximum allowed file size when downloading a marked-up file through the raw content REST endpoint

This value is in bytes. Default is 5 MiB

SCM - Cache

See Scaling for Continuous Integration performance for more information about configuring the SCM Cache.

Note: The settings controlled by these properties can be configured via REST. The REST configuration takes precedence over the configuration in bitbucket.properties.

既定値説明
plugin.bitbucket-scm-cache.expiry.check.interval
30

Controls how frequently expired cache entries are invalidated and removed from the cache.

This value is in seconds.

plugin.bitbucket-scm-cache.eviction.hysteresis
1073741824

When eviction is triggered the amount of disk space requested for eviction is calculated as (eviction.hysteresis + eviction.trigger.free.space - free space under <Bitbucket home directory>/caches)

This value is in bytes.

plugin.bitbucket-scm-cache.eviction.trigger.free.space
6442450944

Controls the threshold at which eviction is triggered in terms of free space available on disk (specifically under <Bitbucket home directory>/caches)

This value is in bytes.

plugin.bitbucket-scm-cache.minimum.free.space
5368709120

Controls how much space needs to be available on disk (specifically under <Bitbucket home directory>/caches) for caching to be enabled. This setting ensures that the cache plugin does not fill up the disk.

This value is in bytes.

plugin.bitbucket-scm-cache.protocols
HTTP,SSH

Controls which protocols caching is applied to. The HTTP value includes both http and https.

This property value is a comma-separated list. Valid values are: HTTP and SSH.

plugin.bitbucket-scm-cache.capabilities.enabled
true

Controls whether git v2 capabilities advertisement is cached.

plugin.bitbucket-scm-cache.capabilities.maxCount
1

The maximum number of git v2 capabilities advertisements to retain per repository. If there are more than this configured limit, the least recently accessed entry will be invalidated.

plugin.bitbucket-scm-cache.capabilities.ttl
3600

Controls the 'time to live' for git v2 capability advertisement caches.

This value is in seconds.

plugin.bitbucket-scm-cache.upload-pack.enabled
true

Controls whether caching is enabled for git-upload-pack (clone operations).

plugin.bitbucket-scm-cache.upload-pack.maxCount
20

The maximum number of upload-pack cache entries to retain per repository. If there are more than this configured limit, the least recently accessed entry will be invalidated.

plugin.bitbucket-scm-cache.upload-pack.ttl
14400

Controls how long the caches for clone operations are kept around when there are no changes to the repository.

Caches are automatically invalidated when someone pushes to a repository or when a pull request is merged.

This value is in seconds.

SCM - Mesh

The properties in this section are used by Bitbucket to configure its Mesh nodes. If any of these settings are present in <bitbucket-shared-home>/bitbucket.properties, they will be sent to all registered Mesh nodes and override Mesh's own default settings.

Mesh logging levels for any number of loggers can be set in bitbucket.properties using the following format:

mesh.logging.logger.{name}={level}

To configure all classes in the com.atlassian.bitbucket.mesh package to DEBUG level:

mesh.logging.logger.com.atlassian.bitbucket.mesh=DEBUG

To adjust the ROOT logger, you use the special name ROOT (case-sensitive):

mesh.logging.logger.ROOT=INFO

既定値説明
mesh.profiling.enabled
true

Controls whether profiling should be enabled or not.

This property controls the profiling.enabled property on Mesh nodes.

mesh.profiling.max-frame-name-length
300

Controls the maximum character length of the frame names that are written to the profiler log. Any characters beyond this maximum are truncated. Ex: [83.0ms] - nio: /usr/bin/git log --format=commit %H%n%H%x02%P%x02%aN%x02%aE%x02%at%x02%cN%x02%cE%x02%ct%n%B%n%x03END%x04 -2 --no-min-parents ...

This value is in number of characters.

This property controls the profiling.max-frame-name-length property on Mesh nodes.

mesh.profiling.min-frame-time
0

Defines the threshold time (in milliseconds) below which a profiled event should not be reported.

This property controls the profiling.min-frame-time property on Mesh nodes.

mesh.profiling.min-trace-time
0

Defines the threshold time (in milliseconds) below which an entire stack of profiled events should not be reported.

This property controls the profiling.min-trace-time property on Mesh nodes.

mesh.scaling.concurrency
cpu

Allows adjusting the scaling factor used by Mesh. Various of Mesh's CPU/throttling dependent properties are defined in terms of this setting, so adjusting this value implicitly adjusts those. Some examples:

  • throttling of git hosting operations - the size of the thread pool for asynchronous processing

The default value, cpu, resolves to the number of detected CPU cores. On hyperthreaded machines, this will be double the amount of physical cores.

This property controls the scaling.concurrency property on Mesh nodes.

plugin.bitbucket-git.mesh.authentication.allowed-clock-skew
2m

Defines the clock skew allowed when validating expiry for signed tokens during authentication. This accounts for clock drift between Bitbucket and Mesh nodes. This value is in SECONDS unless a unit (s, m, h, d) is specified.

This property controls the authentication.allowed-clock-skew property on Mesh nodes.

plugin.bitbucket-git.mesh.authentication.expiry-interval
30s

Defines the amount of time a signed token is valid after it's issued. This only affects outbound requests, such as replication requests from one Mesh node to another. Since tokens are generated right before the RPC they will be used to authenticate, this interval should generally be short. This value is in SECONDS unless a unit (s, m, h, d) is specified.

This property controls the authentication.expiry-interval property on Mesh nodes.
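Both Mesh authentication intervals accept the same unit suffixes; a sketch with illustrative values only:

```properties
# Tolerate up to 5 minutes of clock drift between Bitbucket and Mesh nodes
plugin.bitbucket-git.mesh.authentication.allowed-clock-skew=5m
# Outbound signed tokens expire 60 seconds after they are issued
plugin.bitbucket-git.mesh.authentication.expiry-interval=60s
```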

SCM - Misc

既定値説明
http.cloneurl.includeusername
false

Controls whether the HTTP clone URL should contain the currently authenticated user's username

http.scmrequest.async.enabled
true

Controls whether asynchronous processing is enabled for HTTP SCM requests. Asynchronous processing can significantly increase the server's ability to service UI users while under heavy HTTP hosting load.

http.scmrequest.async.keepalive
300

Controls how long asynchronous request threads are allowed to idle before they're terminated. Aggressive timeouts may reduce the number of idle threads, but may also reduce the server's ability to respond to load spikes.

This value is in seconds.

http.scmrequest.async.queue
0

Do not use. This property is deprecated and, as of 7.4.2, 7.5.1 and 7.6.0 (or newer), it no longer does anything.

http.scmrequest.async.threads
250

Controls how many threads are used to process HTTP SCM requests asynchronously. Asynchronous processing frees up servlet container threads to allow them to handle other requests, like page requests for UI users.

http.scmrequest.process.error.response.status.override
true

Controls whether the status code for Git HTTP requests is overridden to be 200, or whether the "real" status code is sent. When set to true, a status code of 200 is always sent. When set to false, a more accurate status code is sent, for example a 404 if the repository does not exist.

This is useful because the standard Git client's behaviour will only process the content of HTTP responses if the status code is 200 OK. This means we can send meaningful error messages to Git, at the expense of sending a more accurate response code.

SCM - git

既定値説明
plugin.bitbucket-git.path.executable
git

Defines the default path to the git executable. On Windows machines, the .exe suffix will be added to the configured value automatically if it is not present. In general, "git" should be an acceptable default on every platform, assuming it is available on the runtime user's PATH.

With the new path searching performed by DefaultGitBinaryHelper, setting a default value here is unnecessary, as the plugin will quickly discard the value. This is left here purely for documenting how to set an explicit path.

plugin.bitbucket-git.path.libexec

Defines the path to the git libexec directory (containing the git-core directory). This path is hard-coded into the git executable and is used for forking processes like git-http-backend. If this value is set, those processes will be forked directly. This eliminates an unnecessary fork (git -> git-http-backend) and may improve scalability.

plugin.bitbucket-git.author.name.type
displayname

Defines whether commits created by the system should use the username (jdoe) or the display name (John Doe) to specify the Git author/committer. By default, the display name will be used.

This value can be either displayname or username.

plugin.bitbucket-git.diff.renames
copies

Defines whether copy and/or rename detection should be performed. By default, both rename and copy detection are performed. Only files modified in the same commit are considered as rename or copy origins, to minimize overhead.

The possible settings are:

  • "copy" or "copies"

    Applies --find-copies.

  • "rename" or "renames"

    Applies --find-renames.

  • "off"

    Disables rename and copy detection.

When using "copy" or "copies", the value may optionally be suffixed with a "+" to use --find-copies-harder. This setting should be used with caution, as it can be very expensive. It considers every file in the repository, even files not modified in the same commit, as possible origins for copies. When copy and/or rename detection is enabled, plugin.bitbucket-git.diff.renames.threshold can be used to control the similarity index required for a change to be identified as a copy or rename. This configuration can also be applied at repository level with plugin.bitbucket-git.diff.renames.KEY.slug, or at project level with plugin.bitbucket-git.diff.renames.KEY.
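Putting the scoped forms together, a hypothetical project key PROJ and repository slug my-repo (both placeholders) could be configured as:

```properties
# Use copy detection, with --find-copies-harder, for one specific repository
plugin.bitbucket-git.diff.renames.PROJ.my-repo=copies+
# Disable rename and copy detection for every other repository in the project
plugin.bitbucket-git.diff.renames.PROJ=off
```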

plugin.bitbucket-git.diff.renames.threshold
50

Defines the threshold, as a percentage, for a file to be detected as a rename or a copy. This setting is only applied if copy and/or rename detection is enabled. The default threshold applied is 50% similarity (defined in git itself).

This configuration can also be applied at repository level with plugin.bitbucket-git.diff.renames.threshold.KEY.slug or at project level with plugin.bitbucket-git.diff.renames.threshold.KEY

plugin.bitbucket-git.environment.commandsize
32000

Defines the maximum number of characters that can be added to a single command. Different operating systems (and even different versions of the same operating system) have different hard limitations they apply to command line lengths. This default is intended to be low enough to work on all supported platforms out of the box, but still high enough to be usable. It is configurable in case it proves to be too high on some platform. This default is based on Windows, which has a limit of 32768 characters. Testing on Linux (Ubuntu 12.04 running a 3.2 kernel) found that its limit is at least 32x the limit imposed by Windows.

plugin.bitbucket-git.environment.variablesize
16000

Defines the maximum number of characters that can be added to a single environment variable. Different operating systems (and even different versions of the same operating system) have different hard limitations they apply to environment variables. This default is intended to be low enough to work on all supported platforms out of the box, but still high enough to be usable. It is configurable in case it proves to be too high on some platform.

plugin.bitbucket-git.hosting.allow-filter
true

Controls whether partial clones, using --filter, are allowed. Partial clones are not cached, and some of the filters offered by Git can be very resource-intensive for the server to apply, so it can sometimes be more efficient to use a normal clone (or a shallow one) instead. Partial clones are enabled by default.

plugin.bitbucket-git.hosting.http.buffersize
8192

Defines the buffer size in bytes which is used when marshaling data between the git process and the HTTP socket. The default is 8K, with a 1K minimum.

plugin.bitbucket-git.hosting.ssh.buffersize
4096

Defines the buffer size in bytes which is used when marshaling data between the git process and the SSH socket. The default is 4K, with a 1K minimum.

plugin.bitbucket-git.hosting.timeout.execution
86400

Defines the execution timeout for push/pull processes, applying a hard limit to how long the operation is allowed to run even if it is producing output or reading input. The default value is 1 day, with a 5 minute minimum.

This value is in seconds.

plugin.bitbucket-git.hosting.timeout.idle
1800

Defines the idle timeout for push/pull processes, applying a limit to how long the operation is allowed to execute without either producing output or consuming input. The default value is 30 minutes, with a 2 minute minimum.

This value is in seconds.

plugin.bitbucket-git.pullrequest.merge.auto.timeout
${pullrequest.merge.timeout}

Defines the maximum amount of time any command used to calculate a pull request's effective diff, or check for conflicts, is allowed to execute or idle. Because the commands used generally do not produce output there is no separate idle timeout.

This setting is deprecated. Use plugin.bitbucket-git.pullrequest.operation.timeout instead. See the documentation for that property for additional details about what operations this timeout applies to.

This value is in seconds.

plugin.bitbucket-git.pullrequest.operation.timeout
${plugin.bitbucket-git.pullrequest.merge.auto.timeout}

Defines the maximum amount of time any command used to calculate a pull request's effective diff, or check for conflicts, is allowed to execute or idle. Because the commands used generally do not produce output there is no separate idle timeout.

This timeout applies to operations that are run in the background. "Foreground" operations, like showing the file tree or an individual file's diff, do not use this timeout. However, in certain cases, such foreground operations may be blocked if a relevant background operation has not yet completed. For example, if a pull request's effective diff has not been calculated, displaying the file tree will block while the effective diff is calculated.

Using an aggressive timeout here may result in pull requests becoming unviewable. For example, if the system times out calculating a pull request's effective diff, it will not be possible to load the file tree and the pull request cannot be reviewed.

This value is in seconds.

plugin.bitbucket-git.ssh.binary
ssh

Defines the SSH binary to use for outgoing SSH commands, i.e. git commands that fetch from or push to external repositories over SSH. This setting does not affect incoming SSH requests.

plugin.bitbucket-git.worktree.expiry
30

Defines the amount of time after which a temporary work tree expires and can be cleaned up. A minimum value of 2 minutes and a maximum value of 24 hours are enforced.

This value is in minutes. The default is 30 minutes.

SMTP

Default value / Description
mail.timeout.connect
60

Controls the timeout for establishing an SMTP connection.

This value is in seconds.

mail.timeout.send
60

Controls the timeout for sending an e-mail.

This value is in seconds.

mail.test.timeout.connect
30

Controls the timeout for establishing a test SMTP connection. A shorter timeout should be applied when sending test e-mails, as the test occurs in user time.

This value is in seconds.

mail.test.timeout.send
30

Controls the timeout for sending a test e-mail. A shorter timeout should be applied when sending test e-mails, as the test occurs in user time.

This value is in seconds.

mail.error.pause.log
300

Controls how frequently messages will go to the standard log file about mail sending errors. All errors are logged to atlassian-bitbucket-mail.log, but warnings will be added to the standard log periodically if there are errors sending messages.

This value is in seconds.

mail.error.pause.retry
5

Controls how long to wait before retrying to send a message if an error occurs.

This value is in seconds.

mail.threads
1

Controls the number of threads to use for sending emails. Higher values may result in higher mail server load, but may also allow the system to work through its internal queue faster.

mail.max.message.size
1048576

Controls the maximum allowed size of a single mail message in bytes, which is the sum of the subject and body sizes.

This value is in bytes.

mail.max.queue.size
157286400

Controls the maximum allowed size for the mail queue in bytes (any new message will be rejected if the mail queue reaches that size)

This value is in bytes.

mail.max.shutdown.wait
5

Controls the maximum time to wait for the mail queue to empty on shutdown. Once this time elapses, any mail remaining in the queue will be logged as rejected in the mail log and dropped. A value of 0 or less means no waiting will occur, and all mail in the queue will be dropped on shutdown.

This value is in seconds.

mail.crypto.protocols
TLSv1 TLSv1.1 TLSv1.2

Space-separated list of crypto protocols to use when sending encrypted email. This default value is POODLE-safe. Order does not matter - JavaMail always tries the latest supported protocol first. An empty value causes the product to use all protocols supported by the shipped version of JavaMail, which may not be POODLE-safe.

mail.crypto.ciphers

Space-separated list of ciphers to use when connecting via SSL or TLS. An empty value causes the product to use all ciphers supported by the JVM

SSH

Default value / Description
plugin.ssh.address

Sets the address where the application will listen for SSH connections. By default the application will accept SSH connections on all of its network interfaces. This property can be used to restrict that to specific interfaces, with multiple addresses separated by commas (without spaces).
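For instance, to restrict SSH to two specific interfaces (the addresses below are hypothetical):

```properties
# Comma-separated, no spaces
plugin.ssh.address=192.168.1.10,10.0.0.5
```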

plugin.ssh.port

Sets the port where the application will listen for SSH connections, 7999 by default. This value and the SSH base URL's (plugin.ssh.baseurl) port don't need to match, but they often should.

If the SSH base URL and SSH port configurations are modified in the global Server settings page, the configurations specified in the properties file will no longer be used.

plugin.ssh.baseurl

Sets the URL on which SSH is accessible; this is used as the base for SSH clone URLs. If SSH is running on a non-standard port, the base URL must start with ssh:// or clones will fail, because the port syntax is ambiguous when paired with a path (e.g. host:port/path vs. host:path) and Git does not apply the port.

If the SSH base URL and SSH port configurations are modified in the global Server settings page, the configurations specified in the properties file will no longer be used.
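Putting the two properties together, with SSH on the default non-standard port 7999 behind a hypothetical host bitbucket.example.com:

```properties
plugin.ssh.port=7999
# The ssh:// scheme is required because 7999 is a non-standard port
plugin.ssh.baseurl=ssh://bitbucket.example.com:7999
```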

SSH command execution

Default value / Description
plugin.ssh.command.timeout.idle
7200

Controls timeouts for all SSH commands, such as those that service git and hg operations over SSH. The idle timeout configures how long the command is allowed to run without writing any output to the client. For SCM commands, the plugin.*.hosting.timeout.idle properties defined above will be applied to the underlying process. The default value is 2 hours.

This value is in seconds.

plugin.ssh.last.authenticated.interval
60

Controls how often the lastAuthenticationTimestamp attribute in SSH_PUBLIC_KEY should be updated. The minimum value applied is 5 seconds. However, a value higher than 5 is strongly recommended to avoid continuously updating the timestamp for sequences of git requests that use SSH authentication on each request. The default value is 60 seconds.

This value is in seconds.

plugin.ssh.nio.workers
-1

Controls the maximum number of NIO worker threads to process incoming SSH commands. The default value is -1, which will use a dynamic maximum based on the number of CPU cores + 1 (the default for the Apache SSHD library).

plugin.ssh.auth.timeout
30

Controls the timeout when trying to authenticate an incoming SSH command. If authentication does not complete within this period, the SSH command will fail. Using a shorter timeout hastens the rejection of SSH commands under heavy load and can help keep the open socket/file count down.

This value is in seconds.

plugin.ssh.session.pending-writes.max
3840

Controls the number of pending writes the throttling mechanism will allow for each SSH session. Throttling works as a flow control mechanism to prevent the system from writing too much data to Apache MINA's WriteRequestQueue. This is particularly helpful when clients (such as TortoiseGit) set up session channels with extremely large window sizes (2 GB), which means Apache MINA's own flow control mechanism will not stop the command from writing data.

The default value of 3840 means that at any given time the system will only be responsible for, at most, about 30 MB per session (some more data may be written to the queue as part of Apache MINA's own handling of the SSH protocol).

Rate limiting will be applied to any SSH session which establishes a channel with a remote window size larger than ${plugin.ssh.session.pending-writes.max} * 8197 (packet size optimized for Git).

A value of 0 or less effectively disables session I/O rate limiting.

plugin.ssh.session.max
250

Controls the maximum number of concurrent SSH sessions allowed. If this property is removed the system will set the limit at 250. If this property is configured below 100 the system will set the limit at 100. Increasing this will result in additional memory usage during peak load and can lead to out-of-memory errors.

plugin.ssh.haproxy.proxy-enabled
true

Controls whether the system will detect and parse HAProxy's PROXY protocol to expose real client IP addresses in addition to the connecting proxy's IP address.

SSH security

Default value / Description
plugin.ssh.disabled.ciphers
arcfour128, arcfour256, aes128-cbc, aes192-cbc, aes256-cbc, 3des-cbc, blowfish-cbc

Controls which default ciphers are disabled when executing all SSH commands. Non-existent ciphers are ignored. Names are case-sensitive. If you override this property, the default values will NOT be appended, and should be included explicitly.

Example value: arcfour128,3des-cbc

To disable additional ciphers see the KB article Disable default SSH algorithms.

plugin.ssh.disabled.key.exchanges

Controls which default key exchange algorithms are disabled when executing all SSH commands. Non-existent key exchange algorithms are ignored. Names are case-sensitive. If you override this property, the default values will NOT be appended, and should be included explicitly.

Example value: ecdh-sha2-nistp256,ecdh-sha2-nistp384

To disable additional key exchange algorithms see the KB article Disable default SSH algorithms.

plugin.ssh.disabled.macs
hmac-md5, hmac-sha1-96, hmac-md5-96

Controls which default macs are disabled when executing all SSH commands. Non-existent macs are ignored. Names are case-sensitive. If you override this property, the default values will NOT be appended, and should be included explicitly.

Example value: hmac-sha1-96,hmac-md5-96,hmac-md5

To disable additional macs see the KB article Disable default SSH algorithms.

plugin.ssh.disabled.signatures

Controls which default signature algorithms are disabled when executing all SSH commands. Non-existent signatures are ignored. Names are case-sensitive. If you override this property, the default values will NOT be appended, and should be included explicitly.

Example value: ssh-dss,ssh-rsa

To disable additional signature algorithms see the KB article Disable default SSH algorithms.

plugin.ssh.dhgex.allow-sha1
false

Allows the usage of Diffie-Hellman SHA1 key exchange algorithms. If set to true, then the Diffie-Hellman SHA1 algorithms are enabled and can be used. Because SHA-1 is considered insecure, this property will have a default value of false but will remain configurable in 8.0.

plugin.ssh.dhgex-keysize.min
1024

Controls the minimum supported Diffie-Hellman Group Exchange key size. If unset, 1024 is used as the default value. If set to a negative value, Diffie-Hellman Group Exchange is disabled. This corresponds to Apache SSHD's property, org.apache.sshd.minDHGexKeySize.

Note: The value of 1024 is only being set to avoid breaking compatibility with existing clients in a minor release. The default will be increased to 2048 in 8.0 since 1024 is considered insecure.

Search

Bitbucket 4.6+ ships with a bundled search service.

These properties enable admins to configure Bitbucket search.

Warning: If a search parameter is set in the properties file, it cannot be edited later from the admin UI. Any changes that need to be made to the search configuration must be made within the bitbucket.properties file.

Default value / Description
plugin.search.codesearch.indexing.enabled
true

Controls whether code indexing is enabled. Setting this to false prevents repository content from being indexed going forward. Existing repository content will not be un-indexed automatically.

plugin.search.indexing.max.batch.size

Maximum size of indexing batches sent to the search server. This value should be less than the maximum content length for HTTP request payloads supported by the search server. The default value is 15728640 (15 MB). (Note that for search server instances running on AWS, this value must be less than the Network Limit size of the search server instance.)

This value is in bytes.

plugin.search.codesearch.indexing.exclude

Allows configuring strategies for excluding certain repositories from code search indexing. This can be used to reduce disk space requirements for the search index.

Valid values are:

  • all-forks excludes all forks from code search indexing

  • personal-forks excludes personal forks from code search indexing

  • synced-forks excludes forks which have ref synchronization enabled from code search indexing

  • undiverged-forks excludes forks from code search indexing which have ref synchronization enabled, and have not had their default branch updated
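For example, to keep personal forks out of the code search index:

```properties
plugin.search.codesearch.indexing.exclude=personal-forks
```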

plugin.search.config.aws.region

AWS region Bitbucket is running in. When set, enables request signing for Amazon OpenSearch Service.

plugin.search.config.baseurl

Sets the base URL of a search server instance.

plugin.search.config.password

Password for connecting to a search server instance.

plugin.search.config.username

Username for connecting to a search server instance.

plugin.search.elasticsearch.aws.region

This setting is deprecated. Use plugin.search.config.aws.region instead

plugin.search.elasticsearch.baseurl

This setting is deprecated. Use plugin.search.config.baseurl instead

plugin.search.elasticsearch.password

This setting is deprecated. Use plugin.search.config.password instead

plugin.search.elasticsearch.username

This setting is deprecated. Use plugin.search.config.username instead

plugin.search.indexing.event.shutdown.timeout
2

Controls how long a search indexing process is given to complete when Bitbucket is shutting down, before it is stopped forcefully. This value is in seconds.

plugin.search.pageSize.primary
25

Controls the page size of primary search results (e.g. code for code search)

plugin.search.pageSize.secondary
10

Controls the page size of secondary search results (e.g. sidebar PRs/repos/etc.)

Secret Scanning

Default value / Description
secretscanning.max.threads
${scaling.concurrency}

Controls the maximum number of threads allowed per node to perform secret scanning

secretscanning.scan.timeout
1000

Controls the timeout for streaming the diff for a commit and detecting any secrets. This value is in milliseconds.

secretscanning.scan.batch.size
200

Controls the number of commits sent to the executor to be scanned for secrets. If the number of commits pushed exceeds the batch size, then multiple batches will be sent.

secretscanning.scan.commit.limit
10000

Controls the maximum number of commits to be scanned in a single push.

secretscanning.email.maxsecrets
100

The maximum number of detected secrets that a single email can contain.

This exists purely to limit the size of the emails Bitbucket sends out.

Secret scanning

Default value / Description
secretscanning.email.enabled
true

Controls whether secret scanning will send emails to users who push secrets to Bitbucket

Secrets Management

Default value / Description
secrets.secured-properties

This setting specifies which properties in the bitbucket.properties file must be secured and stored in the secure storage backend, as configured in the $BITBUCKET_HOME/shared/secrets-config.yaml file. The values should be comma-separated.

When Bitbucket starts up, the properties listed here will automatically be redacted if they are specified in the bitbucket.properties file. Each property will have its value replaced with {ATL_SECURED}, indicating that the original value has been securely transferred to the underlying secret storage.

For instance, given the bitbucket.properties file includes:

  • jdbc.password=password

  • secrets.secured-properties=jdbc.password, server.ssl.key-password

Once Bitbucket restarts, jdbc.password will be securely moved to the secret storage, and the bitbucket.properties file will be modified to reflect:

  • jdbc.password={ATL_SECURED}

  • secrets.secured-properties=jdbc.password, server.ssl.key-password

If a property previously marked as {ATL_SECURED} in the bitbucket.properties file is later modified to a different value, Bitbucket interprets this new value as unsecured. It will then proceed to secure this value and update the secret storage with the newly secured value. This approach facilitates the updating or rotation of bitbucket.properties settings that have already been secured in storage.

Note: This property has an empty value by default to avoid any conflicts when upgrading from a previous version of Bitbucket that has been configured with the deprecated encrypted-property.cipher.classname property. This will change in Bitbucket 10.0 when encrypted-property.cipher.classname will be removed. When that happens, the default setting will include known protected properties such as the jdbc.password property.

secrets.value.max-length
32768

This setting specifies the maximum length (in characters) allowed for a secret to be processed by the secret service. Any secret exceeding this limit will trigger an exception, as it cannot be secured by the service.

This limit is configurable to ensure compatibility with various secret storage backends that can be made available via the secrets-config.yaml file.

The valid range for this setting is a minimum of 1 character to a maximum of 512000 characters. Attempting to configure the maximum length outside of these bounds will result in an exception.

Server

Default value / Description
server.context-path
/

Controls the context path where the application should be published, / by default.

server.display-name
Atlassian Bitbucket

Controls the application's display name.

server.hsts.enabled
false

Controls whether HTTP Strict Transport Security (HSTS) headers are added to HTTPS responses.

server.hsts.max-age
31536000

Controls the max-age directive on HTTP Strict Transport Security (HSTS) headers.

This is the period of time, after the reception of the HSTS header field, during which the web browser regards the server as a known HSTS host. A value of 0 instructs the web browser to stop regarding the server as a known HSTS host.

This value is in seconds and the default value of 31536000 represents 1 year.

server.session.cookie.http-only
true

Controls whether the session cookie should include an HttpOnly restriction, for those browsers that honor it. HttpOnly is enabled by default.

server.session.cookie.max-age
1209600

Controls the value of the "Max-Age" attribute in the session cookie, for those browsers that honor it. This value is in seconds and the default value is 1209600 seconds, i.e. 2 weeks. The "Expires" attribute is also set based on this value; it is the absolute date and time at which the cookie will expire.

server.session.cookie.name
BITBUCKETSESSIONID

Controls the name used for the session cookie, which is "BITBUCKETSESSIONID" by default. Note that most servlet containers use "JSESSIONID" by default. That is not used here to facilitate installations where multiple applications have the same hostname (typically via reverse proxying) with distinct context paths. In such a setup, using "JSESSIONID" can result in unexpected behavior where logging into one application logs users out of others.

server.session.timeout
1800

Controls the session's timeout, which defaults to 30 minutes. A session timeout may not result in a user having to log in again; they may automatically receive a new session via a separate remember-me token.

This value is in seconds.

server.session.tracking-modes
cookie

Controls which mechanisms may be used for tracking sessions. By default, only "cookie" tracking is enabled. Other options are "ssl", to use the SSL session ID, and "url", to append the session ID to the URL after a semi-colon. Multiple values may be specified separated by commas (e.g. "cookie,url")

Server Connectors

These properties control the primary server connector. Additional connectors can be configured using the prefix server.additional-connector.#, where # is the connector number. For example, to set a port on the first additional connector, the property would be server.additional-connector.1.port=7991. Numbers 1 to 5 are supported, allowing for 5 connectors in addition to the primary connector.
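A minimal sketch of the numbering scheme, assuming additional connectors accept the same sub-properties as the primary server.* connector properties (the second port is arbitrary):

```properties
# Primary connector (default port)
server.port=7990
# First additional connector, listening on a second port
server.additional-connector.1.port=7991
```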

Default value / Description
server.ajp.packet-size
0

Controls the AJP packet size. When using the default value (0), Tomcat's default packet size is applied. The minimum value is 8192, and the maximum is 65536. If this property is overridden, the value set should match the max_packet_size directive for mod_jk.

Note: Packet size can only be configured when server.connector-protocol is set to AJP/1.3; otherwise it will be ignored.

This value is in bytes.

server.ajp.secret

Controls the secret that is used to secure the AJP connector. By default no secret is used or required. If a secret is configured, by default it will be required.

server.ajp.secret-required

Controls whether the configured secret is required when connecting via AJP. If no secret is configured, any value configured here is ignored and the property is always set to false. If a secret is configured and this value is not set, it defaults to true.

server.address

Controls the network address the application will bind to, 0.0.0.0 by default.

server.compression.enabled
true

Controls whether the server will attempt to compress data to reduce network costs. By default, compression is enabled on all HTTP/1.1 connectors.

Note: Compression cannot be used when server.connector-protocol is set to AJP/1.3, and it will be ignored if enabled.

server.compression.excluded-user-agents

Controls the list of user-agents to exclude from compression.

server.compression.mime-types
text/css,text/html,text/javascript,text/json,text/plain,text/xml,text/x-javascript,application/javascript,application/json,application/x-javascript,application/vnd.git-lfs+json

Controls which MIME types are compressed, when compression is enabled. CSS, HTML, JavaScript and JSON are compressed by default.

server.compression.min-response-size

Controls the minimum response length for the compression to be performed. Only the mime-types specified as part of the server.compression.mime-types will be compressed.

This value is in bytes.

server.connection-timeout
20000

Controls the connection timeout, which defines how long the server will wait for a request after a connection is established.

This value is in milliseconds.

server.connector-protocol
HTTP/1.1

Controls the wire protocol used by the primary connector, which is HTTP/1.1 by default.

The following values are supported:

  • HTTP/1.1 or org.apache.coyote.http11.Http11NioProtocol (Default)

    A standard HTTP 1.1 connector, which can be configured with or without SSL. All of the various settings are supported for HTTP 1.1.

  • AJP/1.3 or org.apache.coyote.ajp.AjpNioProtocol

    An AJP connector. Several settings, such as max-http-header-size and all of the compression and SSL settings, are not honored when using AJP.

Apache Portable Runtime (APR) and NIO2-based connectors are not supported and may not be used. Attempting to configure either type will result in the system failing during startup.

server.max-http-header-size
0

Controls the maximum size of the HTTP message header. When using the default value (0), Tomcat's default limit is applied.

Note: The max HTTP header size cannot be configured when server.connector-protocol is set to AJP/1.3, and it will be ignored if set; server.packet-size may be the property you're looking for instead.

This value is in bytes.

server.max-http-post-size
0

Maximum size of the HTTP post or put content. When using the default value (0), Tomcat's default limit is honored.

This value is in bytes.

server.packet-size
0

Controls the AJP packet size. When using the default value (0), Tomcat's default packet size is applied. The minimum value is 8192, and the maximum is 65536. If this property is overridden, the value set should match the max_packet_size directive for mod_jk.

Note: Packet size can only be configured when server.connector-protocol is set to AJP/1.3; otherwise it will be ignored. Deprecated in 7.14. Use server.ajp.packet-size instead.

This value is in bytes.

server.port
7990

Controls the port where the application will listen for connections, 7990 by default.

server.proxy-name

The proxy name, used to construct proper redirect URLs when a reverse proxy is in use.

server.proxy-port

The proxy port, used to construct proper redirect URLs when a reverse proxy is in use. If a proxy port is not set, but a name is, the connector's port will be used as the default.

server.redirect-port

The redirect port to use when redirecting from non-SSL to SSL. Defaults to 443, the standard SSL port.

server.require-ssl
false

Require an SSL connection when connecting to the server. Setting this to "true" will automatically redirect insecure connections to the configured "redirect-port" for their connectors.

server.scheme
http

The connector scheme, either "http" or "https". Defaults to "http" unless "secure" is set to "true"; then it defaults to "https". In general, this property should not need to be set.

Note: The scheme cannot be configured when server.connector-protocol is set to AJP/1.3, and it will be ignored if set.

server.secure
false

Whether the connector is secured. Note that setting this to "true" does not enable SSL; SSL is enabled using server.ssl.enabled instead. One use case for setting this to "true" is when the system is behind a reverse proxy and SSL is terminated at the proxy.

Note: The secure flag cannot be configured when server.connector-protocol is set to AJP/1.3, and it will be ignored if set.

server.server-header

Controls the value to use for the Server response header.

If empty, which is the default, no header is sent.

server.ssl.ciphers

Controls the supported SSL ciphers for the connector.

This property value is a comma-separated list.

server.ssl.client-auth

Controls whether client authentication is wanted ("want") or needed ("need"). This setting directly relates to the clientAuth option for Tomcat and requires a configured trust store.

The default is empty, which corresponds to Tomcat's default value of false.

server.ssl.enabled
false

Controls whether SSL is enabled. By default, SSL is disabled and the primary connector uses unsecured HTTP.

Note: SSL cannot be used when the server.connector-protocol is set to AJP/1.3, and attempting to enable it will prevent the server from starting.

server.ssl.key-alias
tomcat

Controls the alias used when selecting the key to use in the keystore. For compatibility with keys set up for Bitbucket 4.x and earlier, the default is "tomcat".

server.ssl.key-password
changeit

Controls the password used to access the key within the keystore. (server.ssl.key-store-password is used to control the password for accessing the keystore itself.) For compatibility with keys set up for Bitbucket Server 4.x and earlier, the default is "changeit".

server.ssl.key-store
${bitbucket.shared.home}/config/ssl-keystore

Controls the location of the keystore, "$BITBUCKET_HOME/shared/config/ssl-keystore" by default.

server.ssl.key-store-password
changeit

Controls the password used to access the keystore. (server.ssl.key-password is used to control the password for accessing the specific key within the keystore.) For compatibility with keys set up for Bitbucket 4.x and earlier, the default is "changeit".

server.ssl.key-store-type
${keystore.type:jks}

Controls the keystore type. The JVM's default keystore type as returned by KeyStore.getDefaultType(), typically "jks", is used by default.

server.ssl.enabled-protocols
all

Controls which SSL protocols the connector will allow when communicating with clients. This value can be a comma-separated list (without spaces) to allow multiple protocols.

Possible values include:

  • SSLv2Hello

  • SSLv3

  • TLSv1

  • TLSv1.1

  • TLSv1.2

  • TLSv1.3 (Requires a supporting JVM)

  • all (Default; equivalent to "SSLv2Hello,TLSv1,TLSv1.1,TLSv1.2,TLSv1.3")

Note that SSLv2 and SSLv3 are inherently unsafe and should not be used.
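For example, to restrict the connector to modern TLS versions only (a common hardening choice; verify your clients support these versions first):

```properties
server.ssl.enabled-protocols=TLSv1.2,TLSv1.3
```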

server.ssl.protocol
TLS

Controls the SSL protocol the connector will use by default.

To control which protocols are supported, set server.ssl.enabled-protocols instead. Even if the connector defaults to a given protocol, clients can use any enabled protocol.

server.ssl.trust-store

Controls which trust store holds the SSL certificates

server.ssl.trust-store-password

Controls the password to be used to access the trust store.

server.ssl.trust-store-provider

Controls the provider for the trust store.

server.ssl.trust-store-type

Controls the type for the trust store.

Server busy banners

Default value / Description
server.busy.on.ticket.rejected.within
5

Controls how long a warning banner is displayed in the UI after a request is rejected due to excessive load.

This value is in minutes. Using 0, or a negative value, disables displaying the banner.

server.busy.on.queue.time
60

Controls how long requests need to be queued before they cause a warning banner to appear.

This value is in seconds. Using 0, or a negative value, disables displaying the banner.

Setup automation

If these properties are specified in bitbucket.properties, when the Setup Wizard runs after installing Bitbucket Server, their values will be used and the Setup Wizard will not display the corresponding configuration screens.

You can use these properties to automate setup and remove the need to interact with the Setup Wizard when provisioning a new server.

See our automated setup documentation for more details.

Default value / Description
setup.displayName
Bitbucket

The display name for the instance.

setup.baseUrl

The base URL for the instance.

setup.license
AAABa1evaA4N...

The license.

setup.sysadmin.username
admin

The username for the system admin account.

setup.sysadmin.password
password

The password for the system admin account.

setup.sysadmin.displayName
John Doe

The display name for the system admin account.

setup.sysadmin.emailAddress
sysadmin@yourcompany.com

The email address for the system admin account.
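
Putting these together, a minimal bitbucket.properties for unattended setup might look like the following (the base URL, license, and credentials shown here are placeholders for your own values):

```properties
setup.displayName=Bitbucket
setup.baseUrl=https://bitbucket.example.com
setup.license=AAABa1evaA4N...
setup.sysadmin.username=admin
setup.sysadmin.password=change-me
setup.sysadmin.displayName=John Doe
setup.sysadmin.emailAddress=sysadmin@yourcompany.com
```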

Signed commit

Default value / Description
plugin.signed-commit.batch-size
300

Sets the number of commits processed in a single thread when indexing commits for their signature state. Increasing this number can make indexing faster, but setting it too high can cause timeouts during indexing.

Sizing Hints

Default value / Description
sizing-hint.cache.users
4000

Sizing hint for a variety of caches that contain per user entries.

A reasonable starting point is 50–100% of the licensed user count.

Increasing this beyond the default may improve performance at the cost of increased Java heap memory usage. Decreasing it below the default is neither necessary nor recommended.

sizing-hint.cache.groups
2500

Sizing hint for a variety of caches that contain per group entries.

A reasonable starting point is 50–100% of the count of distinct groups in the group memberships of the set of licensed users.

Increasing this beyond the default may improve performance at the cost of increased Java heap memory usage. Decreasing it below the default is neither necessary nor recommended.
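
As an illustration, for an instance licensed for 10,000 users spread across roughly 4,000 distinct groups, starting at about 80% of each count would give:

```properties
# ~80% of 10,000 licensed users
sizing-hint.cache.users=8000
# ~80% of 4,000 distinct groups
sizing-hint.cache.groups=3200
```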

Syntax highlighting

See Configuring syntax highlighting for file extensions for more information.

Default value / Description
syntax.highlighter.<MIME type>.executables
exe1,exe2

Controls the language highlighter used for a given set of hashbang executables.

The '<MIME type>' refers to the MIME type CodeMirror uses.

syntax.highlighter.<MIME type>.extensions
ext1,ext2

Controls the language highlighter used for a given set of file extensions.

The '<MIME type>' refers to the MIME type CodeMirror uses.

syntax.highlighter.application/json.extensions
ipynb
syntax.highlighter.text/x-sh.executables
sh,bash,zsh
syntax.highlighter.text/x-erlang.executables
escript
syntax.highlighter.text/javascript.executables
node
syntax.highlighter.text/x-perl.executables
perl
syntax.highlighter.text/x-python.executables
python
syntax.highlighter.text/x-ruby.executables
ruby
syntax.highlighter.text/x-sh.extensions
makefile,Makefile
syntax.highlighter.text/velocity.extensions
vm
syntax.highlighter.text/x-objectivec.extensions
m
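
For example, to also highlight files with a hypothetical .tpl extension as Velocity, and scripts whose hashbang executable is groovy as Groovy (assuming CodeMirror's text/x-groovy MIME type), you could add:

```properties
# Highlight *.vm and *.tpl files with the Velocity highlighter
syntax.highlighter.text/velocity.extensions=vm,tpl
# Highlight scripts with a "groovy" hashbang executable as Groovy
syntax.highlighter.text/x-groovy.executables=groovy
```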

Tasks

Default value / Description
task.max.anchors.per.request
500

Maximum number of anchors that can be used when counting or searching tasks

task.max.contexts.per.request
100

Maximum number of contexts that can be used when counting or searching tasks

task.max.tasks.per.request
500

Maximum number of tasks that can be retrieved when searching tasks

task.query.disjunction.size
100

Sets the maximum size of the disjunction clause when querying tasks by anchors or contexts

Topic

Default value / Description
topic.default.message.max.queue
1024

Controls the default size of a topic's message queue. Messages that arrive for the topic when the message queue is full will be dropped (and an error logged). This default will only apply when the topic is created (by a plugin) without specifying the message queue size.

topic.dispatch.core.threads
1

Controls the minimum number of threads to keep alive for dispatching topic messages to topic subscribers

topic.dispatch.max.threads
3

Controls the maximum number of threads to dispatch topic messages to the topic subscribers

Universal Plugin Manager

Default value / Description
upm.plugin.upload.enabled
false

Controls whether the "Upload App" button is available in UPM, and for the REST API that it uses if plugins can be uploaded from sources other than Atlassian Marketplace. If this is set to true then: - The "Upload App" button will be displayed in the web user interface - Uploading a plugin JAR file will be allowed - Installing a plugin from a non-marketplace URL will be allowed

WARNING: Set this to true with caution. Allowing the system administrator to install arbitrary plugins permits the execution of code in the context of the Java virtual machine, which essentially escalates their privileges to those of the operating system user running the application.

When this property is unset it effectively defaults to false, meaning the above will be disallowed and the "Upload App" button will not be displayed.

Web Sudo

Default value / Description
websudo.allowlist.patterns

Defines the IP address allowlist for web sudo. By default, web sudo requests from ALL IP addresses will be permitted. Configuring an allowlist will result in web sudo being denied for clients not originating from an IP address on the allowlist.

Patterns can be IPv4/IPv6 addresses or subnets in either asterisk or CIDR notation. Examples of valid patterns:
- 192.168.1.10
- ::10
- 192.168.1.*
- 192.168.5.128/26
- 0:0:0:7b::/64

Multiple patterns can be specified as a comma-separated list, for example: 192.168.1.10,192.168.2.*,192.168.5.128/26

The default value (empty) indicates no allow-listing is applied and web sudo requests from ALL IP addresses will be permitted.

websudo.session.timeout
10

Controls the duration that a user can hold a web sudo session before it expires. By default, the session will expire after 10 minutes.

This value is in minutes by default but certain suffixes (s,m,h,d for seconds, minutes, hours, or days) can be used to make the value more readable. Standard ISO-8601 format used by java.time.Duration (PT1H, for example) is also supported.
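
Combining the two properties, a configuration that restricts web sudo to a few office subnets and shortens the session to five minutes might look like this (the addresses are illustrative):

```properties
# Only permit web sudo from these addresses/subnets
websudo.allowlist.patterns=192.168.1.10,192.168.2.*,192.168.5.128/26
# Expire web sudo sessions after 5 minutes (equivalently: 300s or PT5M)
websudo.session.timeout=5m
```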

Webhook

Default value / Description
plugin.webhooks.http.backoff.interval.exponent
1.2

In the event that a webhook is considered unhealthy and is marked for backoff, it will start by backing off at the initial backoff delay. If the webhook continues to fail, the delay grows at a rate of

(initial backoff interval) * (backoff exponent ^ (number of failures - backoff trigger count))

up to the configured maximum.
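
With the default values (initial interval 10 s, exponent 1.2, trigger count 5, maximum 7200 s), the formula above can be sketched as follows. This is an illustration of the documented formula, not the actual implementation:

```python
def webhook_backoff_delay(failures: int,
                          initial: float = 10.0,     # ...backoff.interval.initial (seconds)
                          exponent: float = 1.2,     # ...backoff.interval.exponent
                          trigger_count: int = 5,    # ...backoff.trigger.count
                          maximum: float = 7200.0) -> float:  # ...backoff.interval.max
    """Sketch of the documented webhook backoff formula."""
    if failures < trigger_count:
        return 0.0  # webhook not yet considered unhealthy; no backoff
    # Delay grows geometrically with each failure past the trigger count,
    # capped at the configured maximum.
    return min(initial * exponent ** (failures - trigger_count), maximum)

# After 5 consecutive failures, the webhook backs off at the initial delay:
print(webhook_backoff_delay(5))   # 10.0
# Each further failure multiplies the delay by 1.2, up to the 7200 s cap:
print(webhook_backoff_delay(10))  # ~24.88
```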

plugin.webhooks.http.backoff.interval.initial
10

The initial delay given to a webhook when it fails more than the configured maximum number of times. While in this backoff state, the specific webhook will be skipped

This value is in seconds

plugin.webhooks.http.backoff.interval.max
7200

The maximum delay given to a webhook when it continually fails. While in this backoff state, the specific webhook will be skipped.

This value is in seconds; the default of 7200 corresponds to 2 hours.

plugin.webhooks.http.ip.blacklist
169.254.169.254

Controls a banned set of IP addresses that webhooks cannot connect to; attempting to connect to one of them will fail with an exception. By default the AWS metadata endpoint is banned to avoid leaking data about the machine that the application is running on.

This value is a comma separated list of IPv4/6 addresses or CIDR ranges.
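
Building on the default, an operator could extend the banned list to cover the whole IPv4 link-local range and an internal host (the extra addresses here are purely illustrative):

```properties
# AWS metadata endpoint, IPv4 link-local range, and one internal host
plugin.webhooks.http.ip.blacklist=169.254.169.254,169.254.0.0/16,10.0.0.5
```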

plugin.webhooks.http.backoff.trigger.count
5

The maximum number of failures in a row before a webhook is considered unhealthy and will be skipped. The time skipped will continue to rise as per the delay exponent until the webhook is healthy once again. A single success will clear all instances of failures from the count.

plugin.webhooks.signature.algorithm
sha256

The algorithm to use for signing the outgoing request body, e.g. sha1 or sha256. The resulting signature will be sent in the X-Hub-Signature header for webhooks that have a secret configured.
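
A receiver can verify a delivery by recomputing the HMAC of the raw request body with the shared secret and comparing it to the header. The sketch below assumes the common X-Hub-Signature convention of "algorithm=hexdigest"; check the header your instance actually sends before relying on the exact format:

```python
import hashlib
import hmac

def hub_signature(secret: bytes, body: bytes, algorithm: str = "sha256") -> str:
    """Compute an X-Hub-Signature-style value: "<algorithm>=<hex HMAC of body>"."""
    digest = hmac.new(secret, body, getattr(hashlib, algorithm)).hexdigest()
    return f"{algorithm}={digest}"

# On the receiving end, compare with hmac.compare_digest to avoid timing attacks:
sig = hub_signature(b"my-webhook-secret", b'{"eventKey":"repo:refs_changed"}')
print(sig)  # "sha256=" followed by 64 hex characters
```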

plugin.webhooks.connection.timeout
20000

The timeout given for a single webhook request to receive a TCP connection

This value is in milliseconds

plugin.webhooks.socket.timeout
20000

Controls how long requests to external services can continue without producing any data before webhooks gives up.

This value is in milliseconds

plugin.webhooks.dispatch.queue.size
250

The maximum size of the queue used to transform webhook publishing events into HTTP events.

plugin.webhooks.dispatch.queue.timeout
250

The maximum amount of time a webhook event will wait to be placed on the dispatch queue. This timeout only applies when the dispatch queue is full; the application will wait up to this long for space to become available on the queue.

If unsuccessful, the webhook will be skipped.

This value is in milliseconds

plugin.webhooks.io.threads
3

Number of threads to be used to process HTTP IO.

plugin.webhooks.callback.thread.count
10

Number of threads used to process callbacks to Bitbucket with results of the HTTP requests.

plugin.webhooks.http.connection.max
200

Maximum number of concurrent outgoing connections. Further connections will be placed upon the dispatch queue until they are able to be processed.

plugin.webhooks.http.connection.host.max
5

Maximum number of connections to the same HTTP host.

plugin.webhooks.statistics.flush.interval
30

The interval at which webhook statistics are written to the database. A longer interval is more efficient, but results in slightly outdated invocation details on the webhooks administration pages. This value should not be 0.

This value is in seconds.

plugin.webhooks.response.header.line.size.max
8192

Maximum allowed size for response header lines in bytes. Responses with headers that have a line size exceeding this limit will fail with an exception.

This value is in bytes

plugin.webhooks.response.http.body.size.max
16384

Maximum size in bytes for a webhook response body. Any larger bodies will be truncated at this length before being provided to callbacks and/or persisted for historical tracking.

This value is in bytes

plugin.webhooks.dispatch.inflight.max
500

Maximum number of webhook requests that can be either:
- currently in progress
- waiting for an HTTP connection

Any further requests will be skipped.

plugin.webhooks.push.event.commits.limit
5

Maximum number of commits to be included in the payload of a webhook push event. If the number of commits pushed is greater than the limit, the payload will include only the most recent commits. Significantly increasing this number may negatively impact server performance.

X.509 Certificate Signing

Default value / Description
x509.certificate.remote.connection.timeout
60000

The HTTP connection timeout used for communication with external servers that provide certificate revocation lists. This value must be between 2000 and 60000, which are imposed as lower and upper bounds on any value specified here.

This value is in milliseconds.

x509.certificate.remote.socket.timeout
60000

The socket timeout. This value must be between 2000 and 60000, which are imposed as lower and upper bounds on any value specified here.

This value is in milliseconds.

x509.certificate.signing.certificate.matching.issuer.cache.max
25

Controls the maximum number of signing certificates matching an issuer certificate known to the system based on their given fingerprint for the duration of a push. Note that this is a request level cache. Setting this number too low increases the overhead to X.509 certificate signature validation. Setting it too high increases memory consumption.

x509.certificate.signing.certificate.revoked.cache.max
250

Controls the maximum number of signing certificates that we know are revoked based on their given fingerprint for the duration of a push. Note that this is a request level cache. Setting this number too low increases the overhead to X.509 certificate signature validation. Setting it too high increases memory consumption.

x509.certificate.signing.certificate.user.cache.max
150

Controls the maximum number of signing certificates matching the application user known to the system based on their given email address for the duration of a push. Note that this is a request level cache. Setting this number too low increases the overhead to X.509 certificate signature validation. Setting it too high increases memory consumption.

Zero downtime backup/disaster recovery

Default value / Description
disaster.recovery
false

When set to true, repair jobs are triggered on startup.

integrity.check.pullRequest.batchSize
1000

Maximum number of results returned in each database call used by integrity checking tasks

integrity.check.pullRequest.updatedSinceDays
7

Controls the date range used to filter merged pull requests for integrity checking. This property defines the number of days to go back since the most recent pull request update

integrity.check.repository.batchSize
1000

Maximum number of results returned in each database call used by repository check tasks

Last modified: October 1, 2024
