Workflow/Dashboard modification or plugin action leads to error "Violation of PRIMARY KEY constraint 'PK_clusterlockstatus'"

Platform notice: Data Center only - This article only applies to Atlassian products on the Data Center platform.

This KB was created for the Data Center version of the product. Data Center KBs for functionality that is not Data Center specific may also work for Server versions of the product, but they have not been tested. Support for Server* products ended on February 15, 2024. If you are running a Server product, you can check your migration options on the Atlassian Server end of support announcement page.

*Except Fisheye and Crucible

Summary

When trying to:

  • update or delete a workflow
  • update or delete a dashboard
  • perform other workflow scheme actions
  • perform some plugin-related actions

you receive an error message stating that the Jira server has returned an error response.


Checking the atlassian-jira.log shows entries similar to the following:

2023-03-20 07:47:41,049-0400 JiraTaskExecutionThread-6 ERROR test@example.com 461x43499x1 y7y3d9 10.88.248.155,172.25.254.132 /secure/project/SelectProjectWorkflowSchemeStep2.jspa [c.a.jira.task.TaskManagerImpl] Task 'Publishing workflow scheme 'My updated workflow'' failed.
com.querydsl.core.QueryException: Caught SQLServerException for insert into dbo.clusterlockstatus (lock_name, update_time, id)
values (?, ?, ?)
...
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Violation of PRIMARY KEY constraint 'PK_clusterlockstatus'. Cannot insert duplicate key in object 'dbo.clusterlockstatus'. The duplicate key value is (1816843).


Diagnosis

Checking the clusterlockstatus table using the following query

select count(*) from clusterlockstatus where locked_by_node is NULL;

reveals a high number of rows (>1000).
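
To see which lock names are accumulating, you can also sample the stale rows. The following is a sketch in SQL Server syntax, using only the lock_name, locked_by_node, and update_time columns visible in the error above:

-- List the 20 most recently written stale locks (rows no longer held
-- by any node) to see which names are piling up.
select top 20 lock_name, update_time
from clusterlockstatus
where locked_by_node is NULL
order by update_time desc;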

In a Data Center environment, Jira uses the Beehive library to manage locks. Locks are held internally in the JVM and are also implemented through the database table clusterlockstatus, which is shared between the nodes.

The current cluster lock mechanism was designed to work with statically named locks, and the implementation stores each lock in the database permanently. Using dynamically generated lock names, such as "lock_for_task_" + taskId, therefore causes rows to pile up in large numbers in the clusterlockstatus table, and there is no mechanism to prune them.
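
To confirm that dynamically generated names are the cause, you can group the stale rows by a name prefix. This is a sketch in SQL Server syntax; the 20-character prefix length is an arbitrary choice:

-- Group stale rows by lock-name prefix to spot dynamically generated
-- families of names such as "lock_for_task_<taskId>".
select left(lock_name, 20) as name_prefix, count(*) as stale_rows
from clusterlockstatus
where locked_by_node is NULL
group by left(lock_name, 20)
order by stale_rows desc;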

Suggestion JRASERVER-69114 has been opened to modify this behaviour.

Solution

To prune the clusterlockstatus table and get out of this situation, you can use one of the following options:

Workaround #1

  1. Shut down the whole cluster (all nodes).
  2. Remove all the locks using the following database query (a transactional variant is sketched after this list):


    delete from clusterlockstatus;


    Warning: there is no WHERE clause in the query above. Cluster locks do not survive a cluster shutdown, so all rows can be safely removed while the cluster is down.
  3. Start the nodes one by one (as usual).
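
If you want to sanity-check the operation before it becomes permanent, one possible variant (a sketch for SQL Server, not part of the original workaround) is to run the delete inside a transaction:

-- Run the delete in a transaction so the affected row count can be
-- inspected before committing.
begin transaction;
delete from clusterlockstatus;
select @@ROWCOUNT as rows_deleted; -- rows removed by the DELETE above
commit transaction; -- or: rollback transaction;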

Workaround #2

You can also prune the clusterlockstatus table without downtime:

  1. Remove only the locks that are not currently held (a verification query is sketched after this list):


    delete from clusterlockstatus where locked_by_node is NULL;
  2. At this point the pruned locks are unacquirable, so you need to:
  3. Do a rolling restart of all nodes.
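
After the delete in step 1, every remaining row should belong to a lock currently held by a node. A quick verification query (assuming only the columns shown in this article):

-- Any rows that survive the delete should be held by a live node.
select locked_by_node, count(*) as held_locks
from clusterlockstatus
group by locked_by_node;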


