RFC-80: App migration - Multi transfers

RFCs are a way for Atlassian to share what we’re working on with our valued developer community.

It’s a document for building shared understanding of a topic. It expresses a technical solution, but can also communicate how it should be built or even document standards. The most important aspect of an RFC is that a written specification facilitates feedback and drives consensus. It is not a tool for approving or committing to ideas, but more so a collaborative practice to shape an idea and to find serious flaws early.

Please respect our community guidelines: keep it welcoming and safe by commenting on the idea not the people (especially the author); keep it tidy by keeping on topic; empower the community by keeping comments constructive. Thanks!

Project Summary:

“App migration - Multi transfers” aims to provide marketplace partners the ability to decompose their monolithic app migration transfer into several independent transfers.

Publish: 23rd December 2024
Discuss: 20th January 2025
Resolve: 3rd February 2025

Problem:

App migration is currently modelled as a single monolithic transfer, which means all app data must be migrated within the migration downtime of the customer’s cloud instance. This has proven particularly challenging for customer instances with substantial app data. It presents an opportunity to develop a platform feature that enables marketplace partners to decompose their single monolithic app migration transfer into multiple smaller transfers, allowing only the critical, user-blocking data to be migrated during the migration downtime.

Who are we solving for:

For marketplace partners:

“App migration - Multi transfers” offers a feature for marketplace partners to decompose their app migration process as follows:

  • Critical data that must be migrated prior to customers being able to utilise their application on the cloud - executed through one or more blocking transfers
  • Non-critical data that can be migrated after the migration downtime - executed through one or more non-blocking transfers

For customers:

“App migration - Multi transfers” aims to minimise app migration downtime for customers.

Proposed Solution

Marketplace partners can segment their single app migration transfer into multiple independent transfers by utilising a new migrations listener and implementing the following methods:

  • getTransferDefinitions - to declare a set of multiple independent transfers, where each transfer can be blocking or non-blocking
    • The app migration platform will initially support a maximum of 3 transfers in a multi transfer app migration
  • onStartAppMigration - to execute the migration operations of each transfer

The app migration platform will:

  • Notify server apps when to start each transfer, by calling onStartAppMigration for each transfer
  • Notify cloud apps about app migration events for each transfer, including additional fields in event payloads to identify a specific transfer.
  • Support all existing app migration platform APIs for each transfer
  • Calculate the overall status of an app migration by aggregating the statuses of all transfers
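The RFC does not spell out the aggregation rules for the overall status, so the sketch below illustrates one plausible semantics under stated assumptions: the status names and the precedence (any failed transfer fails the migration, any unfinished transfer keeps it in progress, success only when all transfers succeed) are illustrative, not the platform’s actual behaviour.

```java
import java.util.List;

// Illustrative status values; the platform's real status enum may differ.
enum TransferStatus { SUCCESS, IN_PROGRESS, FAILED }

class MigrationStatusAggregator {
    // One plausible aggregation: any failed transfer fails the migration,
    // any unfinished transfer keeps it in progress, otherwise it succeeded.
    static TransferStatus overallStatus(List<TransferStatus> transferStatuses) {
        if (transferStatuses.contains(TransferStatus.FAILED)) {
            return TransferStatus.FAILED;
        }
        if (transferStatuses.contains(TransferStatus.IN_PROGRESS)) {
            return TransferStatus.IN_PROGRESS;
        }
        return TransferStatus.SUCCESS;
    }
}
```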

In your server app:

Implement the new listener in your server app:

Interface for migrating to a Forge app:

interface MultiTransferDiscoverableForgeListener {
    // New methods
    Set<TransferDefinition> getTransferDefinitions();
    void onStartAppMigration(AppCloudForgeMigrationGateway gateway, TransferInfo transferInfo, MigrationDetailsV1 migrationDetails);
    // Existing Forge listener methods
    UUID getForgeAppId();
    Set<AccessScope> getDataAccessScopes();
    String getForgeEnvironmentName(); // Optional
    String getServerAppKey(); // Optional
    String getCloudAppKey(); // Optional
}

This interface will include some existing Forge listener methods, which serve the same purpose as described here.

Server apps will have access to all APIs of AppCloudForgeMigrationGateway during the execution of each transfer. When these APIs are invoked on the AppCloudForgeMigrationGateway instance passed to onStartAppMigration, the app migration platform will internally use the transfer ID of the respective transfer.

Interface for migrating to a Connect app:

interface MultiTransferDiscoverableListener {
    // New methods
    Set<TransferDefinition> getTransferDefinitions();
    void onStartAppMigration(AppCloudMigrationGateway gateway, TransferInfo transferInfo, MigrationDetailsV1 migrationDetails);
    // Existing Connect listener methods
    String getCloudAppKey();
    Set<AccessScope> getDataAccessScopes();
    String getServerAppKey(); // Optional
}

This interface will include some existing Connect listener methods, which serve the same purpose as described here.

Server apps will have access to all APIs of AppCloudMigrationGateway during the execution of each transfer. Server apps should pass transferInfo.transferId as the transferId in calls to AppCloudMigrationGateway.

Models used in the above interfaces:

class TransferDefinition {
    final String transferName; // Unique name for a transfer
    final boolean blocking; // Indicates whether a transfer is blocking or non-blocking
}
class TransferInfo {
    final String transferName; // Unique name for a transfer
    final UUID transferId; // Transfer ID
}
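As a rough sketch of how a getTransferDefinitions implementation might use this model, the example below declares one blocking and two non-blocking transfers. The transfer names and the split are hypothetical, and TransferDefinition is re-declared locally (mirroring the model above) only so the sketch is self-contained; in a real app the class comes from the app migration SDK.

```java
import java.util.Set;

// Local stand-in for the SDK's TransferDefinition model shown above,
// re-declared here only so this sketch compiles on its own.
class TransferDefinition {
    final String transferName;
    final boolean blocking;

    TransferDefinition(String transferName, boolean blocking) {
        this.transferName = transferName;
        this.blocking = blocking;
    }
}

class ExampleTransferDeclarations {
    // Hypothetical split: one blocking transfer for user-blocking data,
    // two non-blocking transfers for data that can arrive after downtime.
    // The platform initially allows at most 3 transfers per app migration.
    static Set<TransferDefinition> getTransferDefinitions() {
        return Set.of(
            new TransferDefinition("core-entities", true),
            new TransferDefinition("change-history", false),
            new TransferDefinition("analytics", false)
        );
    }
}
```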

In your cloud app:

Cloud apps will receive notifications of app migration events for each transfer.

Events for multi transfer app migrations will include additional fields in the event payloads to distinctly identify a transfer as follows:

  • transferName - unique name of the transfer
  • blocking
    • true - if the transfer is blocking
    • false - if the transfer is not blocking

For Forge apps:

Forge apps can utilise all Forge app migration APIs with the respective transfer ID of each transfer.

Sample Forge Event Payload:

{
  "eventType": "avi:ecosystem.migration:uploaded:app_data",
  "transferId": "6c01ac6f-c512-4ef7-8b48-25c1803fe305",
  "transferName": "nameForYourChildTransferStep",
  "blocking": true,
  "migrationDetails": {
    "migrationId": "403c4f71-a0d1-4a63-97a8-487d18691c46",
    "migrationScopeId": "0ba07dd9-3804-4600-9102-fa6e1efeab08",
    "createdAt": 1723111376499,
    "cloudUrl": "https://your-customer-cloud-site.atlassian.net",
    "name": "Migration Plan Name"
  },
  "key": "e094ca53-3747-4541-b263-0bf7b56a5bca",
  "label": "file-label-you-used",
  "serverAppVersion": "1.0",
  "messageId": "53f88ea7-a2d2-4dd2-9f36-2d8c43401b11"
}

For Connect apps:

Connect apps can utilise all Connect app migration APIs with the respective transfer ID of each transfer.

Sample Connect Event Payload (for a Jira migration):

{
    "eventType": "app-data-uploaded",
    "cloudAppKey": "my-cloud-app-key",
    "transferId": "6c01ac6f-c512-4ef7-8b48-25c1803fe305",
    "transferName": "nameForYourChildTransferStep",
    "blocking": true,
    "migrationDetails": {
        "migrationId": "403c4f71-a0d1-4a63-97a8-487d18691c46",
        "migrationScopeId": "0ba07dd9-3804-4600-9102-fa6e1efeab08",
        "createdAt": 1723111376499,
        "cloudUrl": "https://your-customer-cloud-site.atlassian.net",
        "name": "Migration Plan Name",
        "jiraClientKey": "acb711b8-a878-356e-abbf-1ae1730308a2",
        "confluenceClientKey": "unknown"
    },
    "s3Key": "e094ca53-3747-4541-b263-0bf7b56a5bca",
    "label": "file-label-you-used",
    "serverAppVersion": "1.0",
    "messageId": "53f88ea7-a2d2-4dd2-9f36-2d8c43401b11"
} 

Lifetime of multi transfers:

The lifetime of each transfer in a multi transfer app migration will be 14 days, aligned with the existing lifetime of app migration transfers. While the lifetime of each transfer is generous at this point, we envision that marketplace partners can reduce the migration downtime for their apps by moving the migration of non-critical data to non-blocking transfers.

Progress for app migration with parent and child transfers:

App migration platform will calculate the overall status for an app migration by aggregating the statuses of all transfers in a multi transfer app migration.

For Forge apps:

Forge apps are required to send a messageProcessed acknowledgment for each app data upload per transfer, using the respective transfer ID. The app migration platform will automatically compute the status of each transfer, similar to the automatic progress calculation explained here.

For Connect apps:

Connect apps should individually set the status for each transfer by using the existing progress endpoint along with the respective transfer ID.
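As an illustration only: the exact progress endpoint URL and payload are not shown in this RFC, so the URL and status body below are placeholders to be replaced with the actual endpoint from the Connect app migration documentation. The point of the sketch is simply that the transfer ID of the specific transfer is part of the call.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.UUID;

class TransferProgressReporter {
    // Placeholder URL: substitute the actual app migration progress endpoint
    // from the Connect app migration documentation. The transfer ID of the
    // specific transfer identifies which transfer's status is being set.
    static HttpRequest buildProgressRequest(UUID transferId, String statusJson) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://example.atlassian.net/migration/progress/" + transferId))
                .header("Content-Type", "application/json")
                // The body carries the status for this specific transfer.
                .PUT(HttpRequest.BodyPublishers.ofString(statusJson))
                .build();
    }
}
```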

Extensions:

The app migration platform will also support:

  • Cancellation of each transfer in a multi transfer app migration
  • Rerun of each transfer in a multi transfer app migration

Asks:

While we would appreciate any feedback you may have for this RFC (even if it is simply a supportive acknowledgment such as “Agree, no serious flaws”), we are particularly interested in gaining further insights on the following points:

  • Will you be able to divide your app migration transfer into multiple blocking or non-blocking independent transfers? If this is not feasible, could you please provide details about your use cases?
  • Do you have any potential extension ideas for the solution proposed in this RFC? If so, could you please share details regarding your ideas or use cases?

Hi! I’m not quite sure I understand what the distinction is between blocking and non-blocking transfers. Our app just migrates its data as it gets it (oversimplifying a bit), and it becomes available as it is added to the Cloud. Can you provide some examples of things that could/should/must be blocking and non-blocking in practice?

Also, could we get a guarantee that the event payloads will not change unless the server app implements the new interfaces? (E.g., no null or empty/defaulted transferName properties and such received by the cloud app until the server app is actually upgraded to send them.)

Also also, what and how much of this is (supposed to be) visible to the customer and how?

Now that I think about it, the main pain-point for our large customers is that they cannot pick what part of the app’s data will be migrated. They often have thousands of dashboards and app-specific config items on the server. But they try to migrate partially (first a test project or two, then more, and so on). They need to pick between just not migrating our app until they finish the migration (and thus are unable to test it until the end), or migrating all of it from the beginning (where most of the migrated data will be incomplete because of missing projects, fields, and other Jira data).

Some kind of API whereby the users could configure what app data they want to migrate (the way they can pick projects now) would be nice.

It might also help if the app had some info about what things will be available on the cloud (such as projects, filters and dashboards), both via the current migration and from past migrations. That way the app could potentially migrate only things that are relevant, or maybe warn the user what app data can’t be migrated yet and why.

Hi @BogdanButnaru

To answer some of your questions:
Blocking vs non blocking

  • blocking transfers - should contain critical data from your app that customers cannot function without, or data that such critical data depends on. If we take Confluence as an example, this could be pages, spaces, users.
  • non-blocking transfers - this is auxiliary data that customers don’t see as part of the core experience for your app. If we take Confluence as an example, this could be page analytics, page history.

Backwards compatibility
We can guarantee that unless the new interface (MultiTransferDiscoverableListener) is implemented on the server-side, event payloads will remain the same. Out of curiosity, what would be the impact of new properties (that are either empty or null)?

Customer visibility
Customers should be able to see progress updates for each transferName; this will be visible when they kick off an app migration in the cloud migration assistants. There is a plan to break down app migrations into more granular bits (think running an app migration for a particular project, filter, dashboard, or space), however it is currently too early to share more details.

Happy Holidays!
Max

As a general policy we treat unknown properties as errors for (almost all) ingested messages. The idea is that an unknown property might say something critical, and ignoring it could cause the app to do something bad. E.g., the new property could be processOnlyIf: "EU", and our servers are in the US, and we get sued or something. We’d rather the app error out until a dev actually confirms that the property can be safely ignored.

Not adding new properties to existing payloads unless the recipient is known to be aware of them (even if it then ignores them) would be the dual of this policy.

How about dependencies/pre-requisites? For example, using your Confluence use case, history data should only be migrated after the transfer of pages and spaces has finished successfully.

So, perhaps a TransferDefinition should include a dependsOn attribute:

class TransferDefinition {
    final String transferName; // Unique name for a transfer
    final boolean blocking; // Indicates whether a transfer is blocking or non-blocking
    final String dependsOn; // Name of the transfer which this transfer depends on
}

Hi @VitorPelizza ,

For the use case you’ve described - if blocking transfers are executed before non-blocking transfers (effectively making non-blocking transfers dependent on blocking transfers) will that be sufficient?

This RFC aims to break down existing monolithic transfers into multiple independent transfers. We may introduce transfer dependencies as an extension to this feature.

Another question from us, can you foresee any situations where app data exports for one transfer is dependent on other transfers?

Hey Max,

Yes, I think that’s a good start. Honestly, I don’t see us breaking down the migration so much that we’ll have a big chain of dependencies, so two levels (non-blocking depends on blocking) should be fine.

Our use case would be:

  • First migrate all test cases/plans/cycles/executions with their dependencies (project, status, priority, etc) - blocking
  • Then migrate change history data - non-blocking

To your question, it’s not a problem to export both in parallel, but the import has to execute in the right order - change history can’t be imported before test cases, for example.

Thank you everyone for the engagement in this RFC, helping us shape the App migration - Multi transfers feature.

We have taken your feedback onboard regarding:

  • Further clarification around blocking vs non-blocking transfers
  • Concerns on new fields breaking backwards compatibility
  • Requiring dependencies for transfers

We are closing this RFC for discussion now; please stay tuned for our resolution on or before 2025-02-02T13:00:00Z.

If you have further questions you can open a discussion in Cloud Migrations - The Atlassian Developer Community with the app-migrations tag.

Thanks,
David

Thank you all again for your feedback here are our takeaways.

What did we hear?

There were no major flaws identified for multi-transfer app migrations; the key points were:

  • Transfer dependencies: some transfers cannot run before others. Running blocking transfers first then non-blocking transfers after is a good start.
  • Field validation: some marketplace partners deliberately fail on new fields to ensure they are validated to avoid ignoring critical fields.
  • Data filtering: enabling customers to filter app data can help improve their migration experience.

We have taken your feedback into account and will consider it when developing upcoming features.

What did we change?

We plan to also introduce the capability to declare transfer dependencies, ensuring that a transfer will only execute once its dependent transfer has settled successfully.
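As a sketch of those semantics, the example below orders transfers so that each one starts only after the transfer it depends on has completed. The transferName/dependsOn shape follows the proposal in the discussion above; the scheduling logic itself is illustrative, not the platform’s implementation.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustration of the planned dependency semantics: a transfer may only
// start once the transfer it depends on has completed successfully.
// Input maps transferName -> dependsOn (null when there is no dependency),
// following the dependsOn proposal in the discussion above.
class TransferScheduler {
    static List<String> executionOrder(Map<String, String> dependsOn) {
        List<String> order = new ArrayList<>();
        Set<String> done = new HashSet<>();
        // Repeatedly start every transfer whose dependency (if any) is done.
        while (done.size() < dependsOn.size()) {
            boolean progressed = false;
            for (Map.Entry<String, String> e : dependsOn.entrySet()) {
                String name = e.getKey();
                String dep = e.getValue();
                if (!done.contains(name) && (dep == null || done.contains(dep))) {
                    order.add(name);
                    done.add(name);
                    progressed = true;
                }
            }
            if (!progressed) {
                throw new IllegalStateException("Cyclic or unsatisfiable dependency");
            }
        }
        return order;
    }
}
```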

Enabling customers to filter app data is out of scope for this RFC and will be considered in the future.

What is coming next?

We are currently developing multi-transfers and expect to make this feature generally available in the coming weeks. Please keep an eye on our documentation page for the change log announcement.