After database migration to Amazon Aurora Postgres, running builds throws internal server errors

Platform notice: Server and Data Center only. This article only applies to Atlassian products on the Server and Data Center platforms.

Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Problem

After migrating the database to Amazon Aurora PostgreSQL, an internal server error is thrown when running a build plan.

The following message appears in atlassian-bamboo.log:

com.atlassian.activeobjects.internal.ActiveObjectsInitException: bundle [com.atlassian.bamboo.plugins.brokenbuildtracker.atlassian-bamboo-plugin-brokenbuildtracker]
	at com.atlassian.activeobjects.osgi.TenantAwareActiveObjects$1$1$1.call(TenantAwareActiveObjects.java:95)
	at com.atlassian.activeobjects.osgi.TenantAwareActiveObjects$1$1$1.call(TenantAwareActiveObjects.java:86)
	at com.atlassian.sal.core.executor.ThreadLocalDelegateCallable.call(ThreadLocalDelegateCallable.java:38)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: TABLE: AO_7A45FB_AOTRACKING_ENTRY: ACTIVE - PLAN_ID - TRACKING_ID - can't find type 5 (precision=5) in field ACTIVE
	at net.java.ao.schema.helper.DatabaseMetaDataReaderImpl.getFields(DatabaseMetaDataReaderImpl.java:86)
	at net.java.ao.schema.ddl.SchemaReader.readFields(SchemaReader.java:122)
	at net.java.ao.schema.ddl.SchemaReader.readTable(SchemaReader.java:107)
	at net.java.ao.schema.ddl.SchemaReader.access$000(SchemaReader.java:59)
	at net.java.ao.schema.ddl.SchemaReader$1.apply(SchemaReader.java:96)
	at net.java.ao.schema.ddl.SchemaReader$1.apply(SchemaReader.java:94)
	at com.google.common.collect.Iterators$8.transform(Iterators.java:799)
	at com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
	at com.google.common.collect.Iterators.addAll(Iterators.java:332)
	at com.google.common.collect.Lists.newArrayList(Lists.java:160)
	at com.google.common.collect.Lists.newArrayList(Lists.java:144)
	at net.java.ao.schema.ddl.SchemaReader.readSchema(SchemaReader.java:94)
	at net.java.ao.schema.ddl.SchemaReader.readSchema(SchemaReader.java:88)
	at net.java.ao.schema.ddl.SchemaReader.readSchema(SchemaReader.java:78)
	at net.java.ao.schema.SchemaGenerator.generateImpl(SchemaGenerator.java:107)
	at net.java.ao.schema.SchemaGenerator.migrate(SchemaGenerator.java:84)
	at net.java.ao.EntityManager.migrate(EntityManager.java:128)
	at com.atlassian.activeobjects.internal.EntityManagedActiveObjects.migrate(EntityManagedActiveObjects.java:45)
	at com.atlassian.activeobjects.internal.AbstractActiveObjectsFactory$1.doInTransaction(AbstractActiveObjectsFactory.java:77)
	at com.atlassian.activeobjects.internal.AbstractActiveObjectsFactory$1.doInTransaction(AbstractActiveObjectsFactory.java:72)
	at com.atlassian.sal.core.transaction.HostContextTransactionTemplate$1.doInTransaction(HostContextTransactionTemplate.java:21)
	at com.atlassian.sal.spring.component.SpringHostContextAccessor$1.doInTransaction(SpringHostContextAccessor.java:71)
	at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:133)
	at com.atlassian.sal.spring.component.SpringHostContextAccessor.doInTransaction(SpringHostContextAccessor.java:68)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.atlassian.plugin.util.ContextClassLoaderSettingInvocationHandler.invoke(ContextClassLoaderSettingInvocationHandler.java:26)

Cause

After the migration to Amazon Aurora PostgreSQL the database contains two schemas: the public schema, which holds uppercase copies of the tables, and Bamboo's own schema, which holds the lowercase tables. When Bamboo's Active Objects layer validates its tables it picks up the duplicate uppercase copies and fails with the IllegalStateException shown above.
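To confirm that both copies exist, you can query information_schema for the table named in the error. This is a minimal diagnostic sketch: run it with psql (or any SQL client) against the Bamboo database; the table name is taken from the log above, and the name of Bamboo's own schema depends on your datasource configuration.

-- List every schema that contains a copy of the table from the error.
-- Two rows (an uppercase copy in "public" and a lowercase copy in
-- Bamboo's schema) confirm the duplicate-schema problem described above.
SELECT table_schema, table_name
FROM information_schema.tables
WHERE lower(table_name) = lower('AO_7A45FB_AOTRACKING_ENTRY');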

Solution

Drop the public schema (the one containing the uppercase tables) from the Amazon Aurora database, then restart Bamboo. A sketch of the commands follows.
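This is a minimal sketch, assuming the uppercase copies live only in the public schema and Bamboo's own data sits in a separate, lowercase schema. Back up the database first (for example with pg_dump) and verify what the public schema actually contains before dropping anything.

-- Run in psql against the Bamboo database. DROP SCHEMA ... CASCADE is
-- destructive, so confirm the uppercase tables are the only contents first.
BEGIN;
DROP SCHEMA public CASCADE;            -- removes the duplicate uppercase tables
CREATE SCHEMA public;                  -- optional: recreate an empty public schema
GRANT ALL ON SCHEMA public TO public;  -- optional: restore the default grants
COMMIT;
-- Restart Bamboo afterwards so Active Objects re-reads the table metadata.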

 

Last modified on Mar 9, 2017
