JVM crashes after Fisheye Crucible upgrade - Native memory allocation mmap


Problem

After upgrading Fisheye/Crucible to version 3.8.0 or higher, the JVM crashes. See the hs_err_pid<PID>.log file:

#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12288 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (os_linux.cpp:2673), pid=19269, tid=139984742496000
#
# JRE version: Java(TM) SE Runtime Environment (8.0_51-b16) (build 1.8.0_51-b16)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.51-b03 mixed mode linux-amd64 compressed oops)
# Core dump written. Default location: /atlassian/fecru/data/core or core.19269

Diagnosis

Environment

Server

  • VMWare ESX
  • Memory ballooning drivers enabled
  • EM4J

Instance

  • ~2500 git repos

Diagnostic Steps

  • Free memory observed in the hs_err_pid<PID>.log file:

    /proc/meminfo:
    MemTotal:       20470980 kB
    MemFree:          836420 kB
    Buffers:         3088332 kB
    Cached:          6591540 kB
    SwapCached:         8452 kB
  • The number of memory map areas used by the process was observed to grow toward the max_map_count limit (default: 65536) prior to the JVM crash.

  • What is the current per-process mmap limit on my system?

    $ sysctl vm.max_map_count
    vm.max_map_count = 65530
  • How many memory-mapped regions is Fisheye/Crucible currently using?

    # Find the PID of the FishEye/Crucible process first:
    $ jps -l
    1968 sun.tools.jps.Jps
    1605 /opt/fecru-4.0.3/fisheyeboot.jar
    
    # Check the number of memory mapped regions using pmap on the PID:
    $ pmap 1605 | wc -l
    984
     
    # Alternatively:
    $ cat /proc/1605/maps | wc -l
    982
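The two checks above can be combined into a quick one-shot comparison of a process's map count against the system-wide limit. This is a sketch, not part of the original article; it uses the shell's own PID ($$) so it can be run as-is. In practice, substitute the Fisheye/Crucible PID found with `jps -l`.

```shell
# Compare a process's current memory map count with the system limit.
# PID defaults to the current shell for demonstration; replace it with
# the Fisheye/Crucible PID (1605 in the example above).
PID=$$
MAPS=$(wc -l < "/proc/$PID/maps")
LIMIT=$(cat /proc/sys/vm/max_map_count)
echo "maps=$MAPS limit=$LIMIT"
```

If the map count is close to the limit, the process is at risk of the mmap failure shown in the crash log.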

Cause

Fisheye/Crucible version 3.8.0 included a platform upgrade of Apache Lucene to version 3.6.2, which memory-maps index files via mmap so that the operating system can maximize memory utilization. Each segment of each repository index, as well as each segment of the cross-repository index, requires its own memory map area when accessed. A large number of active repositories therefore leads to a large number of open Lucene index files in memory, which can readily exceed the default mmap count. In addition, repositories with high activity cause the Lucene indexes to fragment and merge more often, which increases the number of segments being accessed.

max_map_count:

This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling malloc, directly by mmap and mprotect, and also when loading shared libraries.

While most applications need less than a thousand maps, certain programs, particularly malloc debuggers, may consume lots of them, e.g., up to one or two maps per allocation.

The default value is 65536.
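To confirm that memory-mapped files (rather than, say, thread stacks or malloc arenas) account for most of the map areas, the per-file breakdown of /proc/<pid>/maps can be inspected. A minimal sketch, not from the original article; it runs against the current shell's PID for illustration — substitute the Fisheye/Crucible PID, where Lucene index files under the instance directory would be expected to dominate.

```shell
# Count memory-mapped regions per backing file; column 6 of
# /proc/<pid>/maps names the mapped file (absent for anonymous maps).
# Replace PID with the Fisheye/Crucible PID (1605 in the example above).
PID=$$
awk 'NF >= 6 { count[$6]++ } END { for (f in count) print count[f], f }' \
    "/proc/$PID/maps" | sort -rn | head -5
```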

Solution

Increase the value of vm.max_map_count (as root) and restart the application:

sysctl -w vm.max_map_count=131072
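Note that `sysctl -w` changes the value only until the next reboot. To make the change permanent, the setting can also be added to /etc/sysctl.conf (a standard Linux mechanism, not specific to this article) and reloaded:

```shell
# Persist the setting across reboots, then apply it (run as root):
echo "vm.max_map_count = 131072" >> /etc/sysctl.conf
sysctl -p
```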

Last modified on Jul 31, 2018
