How to connect to an external database with Confluence Server in local development?

Hi,

Looks like migrations with the default H2 database are still not working properly, so I’m trying to configure an external PostgreSQL database for the development server. I have PostgreSQL running as a Docker container with port 5432 exposed, and it works fine.

However, I’m having trouble configuring Confluence Server to use PostgreSQL instead of the embedded H2 database.

Browsing the forums, I found this kind of example that is supposed to work:

<dataSources>
    <dataSource>
        <jndi>jdbc/confluence</jndi>
        <driver>org.postgresql.Driver</driver>
        <libArtifacts>
            <libArtifact>
                <groupId>org.postgresql</groupId>
                <artifactId>postgresql</artifactId>
                <version>42.3.3</version>
            </libArtifact>
        </libArtifacts>
        <defaultDatabase>jdbc:postgresql://localhost:5432/postgres</defaultDatabase>
        <systemUsername>root</systemUsername>
        <systemPassword>root_pass</systemPassword>
        <url>jdbc:postgresql://localhost:5432/confdb</url>
        <schema>public</schema>
        <username>conf_user</username>
        <password>conf_pass</password>
    </dataSource>
</dataSources>

So I added the above section inside the <product> block in the amps-maven-plugin configuration in the pom.xml.

But when starting the server, this configuration seems to be ignored: the server always initialises just the H2 database, and of course then fails to run migrations on it.

This is the log entry that I always see, and which I would expect NOT to see with the above configuration:

[core.persistence.hibernate.ConfluenceHibernateConfig] getHibernateProperties 
STARTED H2 database server at URL jdbc:h2:tcp://localhost:9092//Users/petri.riipinen/
src/confluence-plugin/packages/server/target/confluence/home/database/h2db

I’ve examined the decompiled source, and it seems to me that this piece of code causes the H2 server to be started:

protected boolean shouldRunH2Server(Properties prop) {
    return this.isH2() && prop.getProperty("hibernate.connection.url") == null;
}

And isH2() returns true if the configured hibernate.dialect contains H2 in it. So… should I set some Hibernate-related property somewhere in order to force Confluence Server to use the external dataSource configuration instead of spinning up H2?
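For context, those Hibernate properties normally come from confluence.cfg.xml in the Confluence home directory (which AMPS regenerates from the product data ZIP), so presumably the check passes because my generated confluence.cfg.xml still points at H2. A sketch of what the PostgreSQL entries there might look like; the property keys are the standard Confluence ones, while the values and the exact dialect class for 7.15.0 are my assumptions:

<!-- inside <properties> in <confluence-home>/confluence.cfg.xml -->
<property name="hibernate.connection.driver_class">org.postgresql.Driver</property>
<property name="hibernate.connection.url">jdbc:postgresql://localhost:5432/confdb</property>
<property name="hibernate.connection.username">conf_user</property>
<property name="hibernate.connection.password">conf_pass</property>
<!-- dialect class name may vary by Confluence version -->
<property name="hibernate.dialect">com.atlassian.confluence.impl.hibernate.dialect.PostgreSQLDialect</property>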

Any ideas how to make this work properly?

Hi @PetriJuhaniRiipinen,

Are you referring to the H2 issues documented in this issue?

I have a workaround there that I used a lot with a locally installed Confluence/Jira and database. I used atlas-install-plugin to deploy the app. It’s not as nice as QuickReload, but it gives me a lot more control over the environment.

Regards,
James.

Hi @jrichards

I don’t think it’s that issue. It’s some other issue that kicks in when the migration agent tries to run migrations on the H2 database, so it’s not related to QuickReload. When Confluence Server starts and the migration agent tries to run migrations, I get lots and lots of errors executing the SQL for the migrations.

I’ve written about this issue before: Database migrations fail to run on Server 7.15.0 when starting in dev-mode. There is also this ticket that describes the problem: [CONFSERVER-60949] Confluence 7.11 EAP development instance keeps throwing “Failed to update database schema” caused by migration assistant app.

So far I’ve disabled the migration agent to get around this. The problem is that I now need to update to the new CCMA in order to test whether MIG-905 is actually fixed, as a comment on the ticket claims. The new CCMA version seems to expect some database changes and wants to run the migration agent, but of course that won’t run because I’ve disabled it, and thus CCMA fails to initialise with all sorts of SQL errors (column not found, etc.).

So I would need one of the following:

  • Migrations working on H2 as expected
  • An external database where migrations work, since this appears to be an H2-specific problem

Either one of those would let me finally proceed with the server-to-cloud migration implementation, and especially with verifying whether the space-restriction issue is finally fixed. But unfortunately right now I’m unable to proceed.

EDIT: This has been my workaround so far: -Datlassian.plugins.startup.options=disable-addons=com.atlassian.migration.agent. With the new CCMA version that no longer works.
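For anyone who prefers wiring that flag into the build instead of the command line, a minimal sketch of passing it through the AMPS <jvmArgs> configuration element; the flag is the one above, the placement is my assumption, and note that overriding <jvmArgs> may replace the default memory settings:

<configuration>
    <!-- disables the Migration Agent add-on at startup (the workaround described above) -->
    <jvmArgs>-Datlassian.plugins.startup.options=disable-addons=com.atlassian.migration.agent</jvmArgs>
</configuration>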

EDIT2: I don’t need QuickReload because JRebel (a product from our company) works fine with Confluence Server in development; it hot-reloads any code changes into the running JVM, so I can modify the plugin and have the changes picked up almost immediately.

Hi @PetriJuhaniRiipinen,

I’ve been having a look at this, and so far I’m unable to reproduce the error. From what I understand, you’re running atlas-debug with a pom.xml configuration something like:

            <plugin>
                <groupId>com.atlassian.maven.plugins</groupId>
                <artifactId>confluence-maven-plugin</artifactId>
                <version>8.1.2</version>
                <configuration>
                    <productVersion>7.15.0</productVersion>
                    <productDataVersion>7.15.0</productDataVersion>
...

I think there must be something in your configuration that’s causing an issue. When a new Confluence 7.15.0 starts, version 3.1.2 of the Migration Agent also starts; it will create the database tables if they don’t exist and will only run the upgrade statements if a very old version of the tables exists. For example, the code that drops the SCHEDULEDATE column dates from Feb 2019, so it shouldn’t be run.

You mentioned that you don’t have a <productDataPath/> setting either that could be bringing in old data.

Can you post your confluence-maven-plugin entry, with its settings and <configuration/> section?

Also, please try mvn clean and then atlas-debug for me.

Regards,
James.

Hi @jrichards

OK, I finally had time to return to this issue.

We do have a productData ZIP file (with a bunch of test spaces/pages plus a license), which I prepared roughly as follows:

  • Exported a backup from an older Confluence Server version (I think 7.7.4) that worked fine with the product data initialisation.
  • Imported said backup file into 7.15.0.
  • Used atlas-create-home-zip to generate the ZIP file.

Basically this ZIP file goes back years; it has been regularly exported/imported, with a new ZIP generated whenever the migration seemed to fail. We do need it, as it holds a huge bunch of ready-made testing data and the licenses, so it isn’t feasible to start local dev testing without any initialisation data.

But here is our configuration, which may well contain something that’s no longer up to date; I “inherited” it from other devs who left the company, so I haven’t dared to touch it too much. If you have suggestions to improve it, they are greatly appreciated.

<configuration>
    <compressResources>false</compressResources>
    <containerId>tomcat9x</containerId>
    <instanceId>confluence</instanceId>
    <products>
        <product>
            <id>confluence</id>
            <instanceId>confluence</instanceId>
            <version>7.15.0</version>
            <dataVersion>7.15.0</dataVersion>
        </product>
    </products>
    <productDataPath>${productDataPath.name}</productDataPath>
    <allowGoogleTracking>false</allowGoogleTracking>
    <extractDependencies>false</extractDependencies>
    <instructions>
        <DynamicImport-Package>
            com.atlassian.migration.app
        </DynamicImport-Package>
        <Import-Package>
            com.gliffy.transform*;version="${gliffy.transform.version}",
            javax.xml.bind.*;version=${jaxb.version},
            com.google.code.gson*;version="${google.gson.version}",
            org.apache.batik*;version="[${batik.version},${batik.version}]",
            org.apache.commons.codec*;version="${apache.commons.codec.version}",
            org.apache.commons.io*;version="${apache.commons.io.version}",
            org.apache.commons.lang*;version="${apache.commons.lang.version}",
            org.springframework.*;version="${springframework.version}",
            com.fasterxml.jackson*;version="${jackson.version}",
            javax.servlet.*;version="${javax.servlet.version}",
            org.slf4j*;version="${slf4j.version}",
            com.google.common*;version="${google.guava.version}",
            com.atlassian.confluence.plugins.index.api;version="${confluence-extractor-api-plugin.version}",
            com.atlassian.sal*;version="${atlassian.sal.api.version}",
            com.atlassian.soy*;version="${atlassian.templaterenderer.version}",
            org.joda.time*;version="0.0.0";resolution:=optional,
            javax.xml.parsers,
            org.apache.xerces*;version="${apache.xerces.version}",
            com.atlassian.activeobjects*;version="0.0.0",
            !com.atlassian.migration.app,
            org.springframework.osgi.*;resolution:="optional",
            org.eclipse.gemini.blueprint.*;resolution:="optional",
            *;resolution:=optional
        </Import-Package>
        <Export-Package>org.apache.batik*;version="${batik.version}"</Export-Package>
        <Require-Bundle>
            com.atlassian.confluence.plugins.confluence-extractor-api-plugin;bundle-version="${confluence-extractor-api-plugin.version}",
            com.atlassian.labs.lucene-compat-plugin;bundle-version="${lucene-compat-plugin.version}"
        </Require-Bundle>
        <Spring-Context>*;timeout:=300</Spring-Context>
    </instructions>
    <bundledArtifacts>
        <bundledArtifact>
            <groupId>com.atlassian.confluence.plugins</groupId>
            <artifactId>confluence-extractor-api-plugin</artifactId>
            <version>${confluence-extractor-api-plugin.version}</version>
        </bundledArtifact>
        <bundledArtifact>
            <groupId>com.atlassian.labs</groupId>
            <artifactId>lucene-compat-plugin</artifactId>
            <version>${lucene-compat-plugin.version}</version>
        </bundledArtifact>
    </bundledArtifacts>
    <pluginDependencies>
        <pluginDependency>
            <!-- required for Cloud / OnDemand -->
            <groupId>com.atlassian.confluence.plugins</groupId>
            <artifactId>confluence-extractor-api-plugin</artifactId>
        </pluginDependency>
        <pluginDependency>
            <groupId>com.atlassian.labs</groupId>
            <artifactId>lucene-compat-plugin</artifactId>
        </pluginDependency>
    </pluginDependencies>
</configuration>
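(For clarity: when I was experimenting with the external database, the <dataSources> block from my first post was nested inside the <product> element above, roughly like this sketch:)

<product>
    <id>confluence</id>
    <instanceId>confluence</instanceId>
    <version>7.15.0</version>
    <dataVersion>7.15.0</dataVersion>
    <!-- the <dataSources> block from my first post went here -->
</product>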

Regards,
Petri

Hi @PetriJuhaniRiipinen,

Can I assume from this post that you got the connection working?

I was going to say: in cases where code is handed over like this, my preference is to start again. Create a new Hello World app and try to rebuild the AMPS entry using the data in the current pom.xml, or copy all your code over to the Hello World app and see what you don’t need.

Regards,
James.

Hi @jrichards

Well, I didn’t get the external database connection working regardless of what I tried, but I resolved the issue by starting with an empty database, rather than seeding H2 with the old productData and forcing a migration. I created new content for testing migration and then generated a new product-data ZIP file from that as a seed for the latest Confluence Server, thus avoiding the need for migration altogether. This was a good enough solution for my migration-testing needs, and I was able to verify the migration processing in my local environment.

But the keyword search still doesn’t work. I do add all the keywords in Cloud during migration post-processing, as per the documentation, but after the migration those words still won’t find the attachments related to them. Not sure how to proceed with that; should I open a new support ticket, or what would you suggest?

Regards,
Petri

Hi @PetriJuhaniRiipinen,

Maybe post a new topic in Developer Community first with all the details of the search issue, and then we can take a look.

Regards,
James.

For anyone looking for the whole pom.xml: I had no idea where the <product> tag goes, and this page shows the full structure:
https://developer.atlassian.com/server/framework/atlassian-sdk/declaring-jndi-datasources-in-amps/
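A condensed sketch of that structure, based on the example earlier in this thread and the linked page; element names follow the AMPS docs, while versions and credentials are illustrative:

<plugin>
    <groupId>com.atlassian.maven.plugins</groupId>
    <artifactId>amps-maven-plugin</artifactId>
    <version>8.1.2</version>
    <configuration>
        <products>
            <product>
                <id>confluence</id>
                <instanceId>confluence</instanceId>
                <version>7.15.0</version>
                <dataVersion>7.15.0</dataVersion>
                <!-- the datasource declaration nests inside <product> -->
                <dataSources>
                    <dataSource>
                        <jndi>jdbc/confluence</jndi>
                        <url>jdbc:postgresql://localhost:5432/confdb</url>
                        <driver>org.postgresql.Driver</driver>
                        <username>conf_user</username>
                        <password>conf_pass</password>
                        <libArtifacts>
                            <libArtifact>
                                <groupId>org.postgresql</groupId>
                                <artifactId>postgresql</artifactId>
                                <version>42.3.3</version>
                            </libArtifact>
                        </libArtifacts>
                    </dataSource>
                </dataSources>
            </product>
        </products>
    </configuration>
</plugin>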
