Too many open files error

This article applies only to Atlassian server products. For the differences between cloud and server products, see here.

Problem

The following message appears in atlassian-confluence.log:

Caused by: net.sf.hibernate.HibernateException: I/O errors during LOB access
	at org.springframework.orm.hibernate.support.AbstractLobType.nullSafeSet(AbstractLobType.java:163)
	at net.sf.hibernate.type.CustomType.nullSafeSet(CustomType.java:118)
	at net.sf.hibernate.persister.EntityPersister.dehydrate(EntityPersister.java:387)
	at net.sf.hibernate.persister.EntityPersister.insert(EntityPersister.java:460)
	at net.sf.hibernate.persister.EntityPersister.insert(EntityPersister.java:436)
	at net.sf.hibernate.impl.ScheduledInsertion.execute(ScheduledInsertion.java:37)
	at net.sf.hibernate.impl.SessionImpl.execute(SessionImpl.java:2464)
	at net.sf.hibernate.impl.SessionImpl.executeAll(SessionImpl.java:2450)
	at net.sf.hibernate.impl.SessionImpl.execute(SessionImpl.java:2407)
	at net.sf.hibernate.impl.SessionImpl.flush(SessionImpl.java:2276)
	at com.atlassian.confluence.pages.persistence.dao.hibernate.HibernateAttachmentDataDao.save(HibernateAttachmentDataDao.java:63)
	... 17 more
Caused by: java.io.IOException: Too many open files

Cause

Lucene, the indexing system used by Confluence, does not support NFS mounts. Using an NFS mount is known to cause this behaviour; further information can be found in the Lucene documentation.

Confluence has too many open files and has reached the maximum limit set by the system. UNIX systems limit the number of files that any one process can have open concurrently. The default for most distributions is only 1024 files, and for certain Confluence configurations this is too small. When that limit is hit, the above exception is generated and Confluence can fail to function because it cannot open the files required to complete its current operation.
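As a quick diagnostic, you can count how many file descriptors a process currently has open by listing its /proc entry. This is a Linux-only sketch; `$$` (the current shell's PID) is used here as a stand-in, so substitute the Confluence JVM's actual PID:

```shell
# Count the file descriptors currently open by a process. $$ is the
# current shell's PID; replace it with the Confluence JVM's PID.
ls /proc/$$/fd | wc -l
```

Comparing this count against the limit reported by `ulimit -n` shows how close the process is to the ceiling.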

Solution

To resolve this, you will need to increase the maximum open file limit:

  1. Shut down Confluence.
  2. Run the following command in your terminal to check the open file limit on your system:

    ulimit -aS | grep open
  3. To raise the open file limit, add the following line to the <confluence-install>/bin/setenv.sh file. You can adjust the number based on your application's needs:

    ulimit -n 32768

    All limit settings are applied per login session; they are neither global nor permanent, and last only for the duration of the session. Adding this line sets the value each time Confluence is started, but it will need to be manually migrated when you upgrade Confluence. See below for a permanent resolution.

  4. Restart Confluence for the modification to take effect.
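To double-check that the limit from setenv.sh actually reached the running process, you can read it back from /proc. A sketch; the `pgrep` pattern below is an assumption and may need adjusting for your installation:

```shell
# Locate the Confluence JVM (the "confluence" pattern is an assumed
# match; adjust it if your process name differs). Falls back to the
# current shell's PID so the command still runs if no match is found.
CONF_PID=$(pgrep -f confluence | head -n 1)

# Show the open-file limits the process actually inherited:
grep "Max open files" /proc/${CONF_PID:-$$}/limits
```

The soft-limit column should report the value you set in setenv.sh.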

Resolution based on the limits.conf file

The steps below are suggested as a permanent solution and are based on Too many open files error in Jira server.

  1. As root, edit the /etc/security/limits.conf file and add the entries below. The value in the first column is the OS user running the Confluence application; in this example the user is named confluence.

    confluence      soft    nofile  32768
    confluence      hard    nofile  32768

     Note that the value 32768 is recommended for large instances and may vary depending on your installation.

  2. For Debian-based Linux distributions (such as Ubuntu), as root, edit /etc/pam.d/common-session and add the entry below.

    session required pam_limits.so
  3. Reboot the server.

  4. To ensure the new value is being used by the application, take a Support Zip and search for the max-file-descriptor attribute in the application-properties/application.xml file.

    <max-file-descriptor>32,768</max-file-descriptor>
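After the reboot, you can also confirm from a fresh login session that PAM applied the values from limits.conf. Run these as the confluence user; the numbers should match the soft and hard values you configured:

```shell
# Print the soft and hard open-file limits for the current login
# session; run as the confluence user after the reboot:
ulimit -Sn   # soft limit (should report the configured value, e.g. 32768)
ulimit -Hn   # hard limit (should report the configured value, e.g. 32768)
```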

Resolution when Confluence is installed as a systemd service

When Confluence is installed as a systemd service you may need to update the service unit file as described below.
If you are sure Confluence is running as a systemd service, go straight to Step 2.

  1. Check if Confluence is configured as a systemd service. This helps you confirm whether that is the case and identify the service name, in case you are not sure.

    grep -i confluence /etc/systemd/system/*.service /lib/systemd/system/*.service

    In this example, the name of our service is confluence.service, which is the name of the unit file itself and is located in the standard folder /etc/systemd/system.


  2. Edit the service unit file (/etc/systemd/system/confluence.service in our example) and add the following line.

    LimitNOFILE=32768

    Note that the value 32768 is recommended for large instances and may vary depending on your installation.

  3. Reboot the server.

  4. To ensure the new value is being used by the application, take a Support Zip and search for the max-file-descriptor attribute in the application-properties/application.xml file.

    <max-file-descriptor>32,768</max-file-descriptor>
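As an alternative to a full reboot, systemd can pick up the edited unit file with a daemon reload followed by a service restart. A sketch, assuming the unit is named confluence.service as in step 1:

```shell
# Make systemd re-read the edited unit file:
sudo systemctl daemon-reload

# Restart the service so the new limit applies to the JVM:
sudo systemctl restart confluence.service

# Confirm the limit systemd will enforce for the unit:
systemctl show confluence.service --property=LimitNOFILE
```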


Description: Lucene, the indexing system used by Confluence, does not support NFS mounts. Using an NFS mount is known to cause this behavior.
Product: Confluence
Platform: Server
Last modified: November 13, 2019
