Documentation for JIRA 5.0.
The robots.txt protocol is used to tell search engines (Google, MSN, etc) which parts of a website should not be crawled.
For JIRA instances where non-logged-in users are able to view issues, a robots.txt file is useful for preventing unnecessary crawling of the Issue Navigator views (and unnecessary load on your JIRA server).
The information on this page does not apply to JIRA OnDemand.
JIRA (version 3.7 and later) installs the following robots.txt file at the root of the JIRA webapp:
# robots.txt for JIRA
# You may specify URLs in this file that will not be crawled by search engines (Google, MSN, etc)
#
# By default, all SearchRequestViews in the IssueNavigator (e.g.: Word, XML, RSS, etc) and all IssueViews
# (XML, Printable and Word) are excluded by the /sr/ and /si/ directives below.

User-agent: *
Disallow: /sr/
Disallow: /si/
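To confirm that these directives behave as intended, you can parse the file with Python's standard urllib.robotparser module. A minimal sketch; the sample issue and export paths below are illustrative, not taken from this page:

```python
from urllib.robotparser import RobotFileParser

# The default robots.txt that JIRA (3.7+) ships with.
robots_txt = """\
User-agent: *
Disallow: /sr/
Disallow: /si/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Issue Navigator export views under /sr/ and /si/ are blocked for all crawlers.
print(parser.can_fetch("*", "/sr/SearchRequest.xml"))  # False
print(parser.can_fetch("*", "/si/SomeIssue.xml"))      # False

# Ordinary issue pages (a hypothetical path) remain crawlable.
print(parser.can_fetch("*", "/browse/TEST-1"))         # True
```

This is the same matching logic crawlers apply: any URL whose path starts with a disallowed prefix is skipped, everything else is fetched normally.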
Alternatively, if you already have a robots.txt file, edit it and add the lines Disallow: /sr/ and Disallow: /si/.
The robots.txt file needs to be published at the root of your JIRA internet domain, e.g. jira.mycompany.com/robots.txt.
If your JIRA instance is published at jira.mycompany.com/jira, change the contents of the file to Disallow: /jira/sr/ and Disallow: /jira/si/. However, you still need to put the robots.txt file in the root directory, i.e. jira.mycompany.com/robots.txt (not jira.mycompany.com/jira/robots.txt).
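For example, assuming the jira.mycompany.com/jira context path described above, the complete edited file would look like this (a sketch of the adjusted directives):

```
# robots.txt for a JIRA instance served under the /jira context path
User-agent: *
Disallow: /jira/sr/
Disallow: /jira/si/
```

Note that the paths are matched against the full URL path as seen by the crawler, which is why the context path prefix must appear in each Disallow line even though the file itself lives at the domain root.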