RFC 118: Cloud-to-cloud identifier mapping

Thank you for your detailed input @osiebenmarck.

The list would be consistent with that used for Data Center to Cloud migrations - which can be viewed here: https://developer.atlassian.com/platform/app-migration/mappings/#mappings-namespaces-and-entities

Not every identifier would change in every migration.

I also do not understand how the mapping would happen in Solution A: Why would the vendor provide the regex to find an identifier – wouldn’t Atlassian know which identifiers need to be replaced? Or is this meant to help Atlassian find a place where the identifier is that needs replacement?

It is the latter - in Solution A, partners just tell us where the identifiers are (and what type of identifier), and we take it from there.

We would obviously prefer not to implement either of these two solutions. If we had to, I would likely opt for Solution B, as that one seems easier to test. The downside is that we would incur costs – potentially a lot of costs, as you already said. But since we do not know which identifiers might change, or what the API we would need to work with looks like, it is impossible for us to state a preference.

That is a fair point - I assume particularly you’d be interested in being able to write local unit / integration tests for migrations?

For manual testing, you could do a Commercial Cloud to Commercial Cloud migration.

For local testing, would having some kind of downloadable Atlassian provided test harness that can read the manifest file, and transform identifiers locally using mock mappings help you?

Putting mappings in the manifest is not really convenient, I’d prefer it if we could put those mappings in a separate file, like we can already do with Rovo agent prompts. Also, as @AndreasEbert already mentioned, since when do we have JSON manifests in Forge?

Sorry about saying JSON, I’ve updated it to say YAML!

So just to understand further: would you ideally prefer to have those files referenced from the manifest and deployed with it, just kept in a separate file – or as a completely separate thing?

To wrap up, I am worried that this would introduce risks, instability and bugs into our products; the RFC does not even mention a way to test the migration. And as @remie pointed out already: We will need to test whatever we implement before any code goes live. This goes for either solution.
If this went live today as is, we would likely opt-out and ask our customers to reconfigure on the new instance.

And just to understand further, is that an option because you don’t have a lot of data beyond simple configuration in any of your apps? Or perhaps the data you have beyond configuration data is only ‘cache’ data where deleting and recreating doesn’t have a big impact on customers? Would having an explicit way to signal that to upgrading customers, and perhaps even purge unwanted cache data on migration simplify things for you?

Thank you @AaronMorris1 for providing detailed input on this RFC.

Definitely there are, in the short term, going to be far more migrations from Server/DC to the various clouds. That said, there are real Atlassian customers, with apps, who are already on Commercial Cloud and are seriously thinking about moving to Isolated Cloud. In the long term, as the number of customers remaining on DC decreases, I’d expect cloud-to-cloud migrations to become the main kind of migration.

Note that cloud-to-cloud can also include migrations between cloud sites on Commercial Cloud.

This means Atlassian is asking us to support a feature that won’t be exercised very often (relatively speaking), but when it is exercised, it had better work right, or it will be a big customer that gets pissed off.

So, if I were to support this use case, I would not only need the ability to test the migration mappings, but I would also need the ability to test them repeatedly as part of my automated regression tests. Otherwise, it will be too risky to support.

This is a very good point - I think having a way to test this is an important requirement.

Therefore, I would expect:

  1. The ability to provision, seed, and tear down an isolated cloud environment as part of automated testing. (It can be a simulated isolated cloud…just needs to provide valid testing.)

I would expect a Commercial Cloud to Commercial Cloud migration would be sufficient for this, since for an app partner, it should be indistinguishable (except by the URLs of the cloud sites).

  2. The ability to trigger a migration from a long-lived test environment to the temporary isolated cloud environment.

Would you do this manually from Admin Hub, or prefer an API? I think triggering it manually from Admin Hub would be possible once the feature becomes GA. I don’t believe there is a plan for a partner-callable API to trigger migrations on our backlog yet, but if it is high value for partners, we could consider it.

  3. The ability to force the isolated cloud to change IDs during the migration. (Otherwise, what’s the point?)

For entities like Jira Work Items (Issues), you could force this by ensuring there is already a different issue with a clashing identifier in the destination cloud site.

  4. The ability to do all that for free. (I won’t pay for yet another cloud site to support testing a fringe case.)

Using Commercial Cloud, you might be able to do this using an existing cloud site, or depending on your test, use a free plan.

All that being said:

PLEASE DON’T DO THIS!

I’m investing in the Runs on Atlassian platform to avoid this sort of complexity. Atlassian is adding unnecessary risk, which is precisely what ROA is intended to prevent.

I acknowledge that migrations can bring complexity. However, I’m not sure I understand your suggestion here. Customers are going to want to do cloud-to-cloud migrations, and some of those customers will have apps installed. Are you suggesting that we don’t support cloud-to-cloud migrations at all for apps (a bad customer experience)? That we provide an easy way for app partners to opt out of their apps being migrateable (this would probably be the default if partners do nothing – but it might increase the risk of customers churning away from your app)? Or that we implement some other solution? If the latter, do you have a suggestion for what an ideal solution would look like for you?

Just on this off-topic point, YAML is a strict superset of JSON, so it’s not untrue to say that the manifest is a JSON file. If you want to use JSON (God knows why), it should certainly work!

4 Likes

Thanks for sharing the RFC @AndrewMiller1 . A lot of thoughts that came to my mind while reading it have already been addressed in other comments. So I’ll only focus on the following question:

Do you expect your Forge app will store identifiers to Atlassian objects in any vendor controlled / free text place apart from Forge SQL, Forge KVP, and Entity Properties? If so, where else would you store them?

Another place where our app stores such identifiers is Confluence macros. An example is a macro that can show data from another page. The page can be selected in the macro’s configuration dialog, and its ID is saved in a macro configuration parameter. We also have more complex cases where the stored macro configuration parameter contains an array of differing objects, some of which can contain Atlassian identifiers. I’m not sure whether Solution A would be powerful enough to handle such cases.

In your question you mentioned Entity Properties. To make sure we are having the same properties in mind: from Confluence app perspective I’d include Space Properties, Content Properties (any kind of Confluence content that supports properties according to REST API), User Properties and App Properties. Would this match your definition of Entity Properties?
In our case the values of these properties can also be complex JSON objects and some members have random names which we can’t hard code in the manifest. So I think for approach A to be suitable, the jsonpath would have to support RegEx patterns.

6 Likes

Hi @AndrewMiller1 – Thanks for all your follow-ups and for keeping this discussion moving. I know it’s a lot of work, and it’s much appreciated.

Before I answer your latest questions, I’d appreciate some clarification on two points, just to ensure I understand the RFC.

First, can you please clarify the intent behind this feature proposal?

Many of us were surprised that Atlassian couldn’t guarantee ID preservation during a cloud-to-cloud migration. You mentioned potential ID conflicts when merging two cloud sites. That makes sense.

But is merging sites the only scenario this feature addresses? Or are there other cases where IDs might not be preserved?

Second, can you please clarify this comment?

That raised a concern for me. It sounds like Solution A might handle ID conversions as partial transactions.

I had assumed the only difference between the two solutions was who performs the processing:

  • Solution A: Atlassian migrates an entity from ID X → Y, then executes all manifest mappings to complete the conversion.
  • Solution B: Atlassian migrates X → Y, then invokes a vendor-registered function to complete the conversion.

In both cases, I assumed each ID conversion (X → Y) would be fully executed as a single operation to avoid data inconsistencies. Is that assumption incorrect?

(Sorry if I’m reading too deeply into the comment. I just don’t understand how Solution A could support incremental migrations better than Solution B, unless it’s scheduling the map executions incrementally.)

Thank you!

2 Likes

Thank you very much for your detailed input @RonnyWinkler.

Noted - and to confirm understanding, I think you are saying that you place IDs in the macro custom config object.

With jsonpath, you can write selectors that define a criterion to select a node (e.g. to decide which objects in your array you want to select), so you might be okay as long as we make sure we support transforming macro configs.

In your question you mentioned Entity Properties. To make sure we are having the same properties in mind: from Confluence app perspective I’d include Space Properties, Content Properties (any kind of Confluence content that supports properties according to REST API), User Properties and App Properties. Would this match your definition of Entity Properties?

We were treating Jira Entity Properties and Confluence Content Properties as equivalent (I should have been explicit in the RFC) - but I think this is a good call out that you might be using Space / User / App Properties as well.

In our case the values of these properties can also be complex JSON objects and some members have random names which we can’t hard code in the manifest. So I think for approach A to be suitable, the jsonpath would have to support RegEx patterns.

Standard jsonpath does support regex, as well as wildcards. Noted that if we want to make Solution A work for your apps, we’d need to make sure we implement the full jsonpath spec and not a cut back version.
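As a rough sketch of the selection semantics a jsonpath filter such as `$.items[?(@.type == 'pageRef')].pageId` would express when remapping a macro config – the mapping table, config shape, and the hand-rolled walker below are all invented for illustration, not a confirmed mechanism:

```javascript
// Hypothetical sketch: select macro-config nodes by a predicate (the role a
// jsonpath filter would play) and rewrite the matched identifier field.
const idMap = new Map([['12345', '98765']]); // old page ID -> new page ID

// Walk every object in `node`; where `predicate` matches, rewrite `field`.
function remap(node, predicate, field) {
  if (Array.isArray(node)) {
    node.forEach((child) => remap(child, predicate, field));
  } else if (node && typeof node === 'object') {
    if (predicate(node) && idMap.has(node[field])) {
      node[field] = idMap.get(node[field]);
    }
    Object.values(node).forEach((child) => remap(child, predicate, field));
  }
  return node;
}

const macroConfig = {
  items: [
    { type: 'pageRef', pageId: '12345' },
    { type: 'label', value: 'unchanged' },
  ],
};

// Equivalent in spirit to the jsonpath filter above: only pageRef objects
// have their pageId rewritten; everything else is untouched.
remap(macroConfig, (n) => n.type === 'pageRef', 'pageId');
console.log(macroConfig.items[0].pageId); // '98765'
```

The point is that the selector only needs to describe *where* the IDs live; the mapping values themselves would come from Atlassian at migration time.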

1 Like

Thank you for your detailed input on this RFC @scott.dudley

I am hoping that within-perimeter moves (including merges) will be covered by this. However, for the case of merges where both sides of the merge involve different installations of the same app, there are more complex considerations for the app partner which are not discussed in the RFC - so it is a good callout. We’d probably need to write some guidelines for how to make an app safe for merges (for example, using UUIDs as primary keys, and responding to a post-migration event to fix up any app-generated sequential secondary identifiers that might be duplicated - or if possible to avoid allocating such identifiers).

Even if you are targeting only RoA for phase 1, it would also be helpful if you could think about Forge Remote when designing this feature.

One approach to minimise future complexity would be to not egress any Atlassian-generated identifiers, and to store the link between Atlassian-generated and remote-generated identifiers within Forge Storage. Then the only extra migration process step for a remote would be about transitioning access to the Forge Remote.

I think Solution B would be necessary for any partners who do need to store Atlassian identifiers externally.

As others have pointed out, option B is effectively mandatory for more complex data types. Option A might be helpful for many apps, but it can never be 100% sufficient.

Understood - it sounds like some apps will need an escape hatch like Solution B. Given I think we can build a better customer experience with Solution A, my hope is that Solution A could be sufficient for 99% (or 99.9%) of all data migration, and the escape hatch would be for the rare cases when it is needed.

For large migrations, this could mean a lot of function calls, which might get expensive for app partners (once compute is monetised).

Atlassian controls the entire platform, so how about thinking outside the box here too? Why not make function invocations free for calls generated by migration of customers to another cloud site? It does not seem right to saddle vendors with the cost for events that are outside of their control and for which they will have little visibility (due to the closed nature of Isolated Cloud and the inability to access logs without customer approval). I would guess that migrations can be invoked an arbitrary number of times by the customer when staging a migration, so this could easily lead to extraordinary costs with no notice to the vendor.

I have shared the link to a similar piece of partner feedback internally - but note that solution B does involve general purpose compute through Forge, and the current plan is for that to be paid beyond the free tier. That’s one reason why, if we go for supporting both Solution A & Solution B, we’d want to make sure only very fringe use cases need Solution B. Apps that don’t need Solution B would therefore avoid risks to the partner from unexpected high compute usage during migrations.

An example of such a use case would be where Atlassian identifiers are used in combination with other identifiers to create a new identifier, i.e.:

const { createHash } = require('crypto');
createHash('sha256').update(`${Jira.IssueId}-${SomeOtherId}-${YetAnotherId}-${ThisIsAlsoAnId}`).digest('hex');

Or, as others also mentioned, Atlassian identifiers used in macro properties.

Oh sweet summer child :joy: Please don’t take this personally, but it is statements like these that cause the biggest friction between Atlassian and partners. Atlassian approaches these types of projects from a purely theoretical point of view, limited by massive blinders. Atlassian can’t create solutions for problems that it doesn’t know about. The utter lack of real-life experience with Marketplace Partner solutions within Atlassian hurts the relationship more than any UI change Atlassian comes up with. Like I said, partners are more creative than Atlassian realises.

This is also why this is somewhat unacceptable:

I understand that this is beyond the scope of your team, but it is not beyond the scope of this RFC. Any solution that Atlassian wants to move forward should include pricing considerations. Otherwise Atlassian will lose partners well before it has even started.

Atlassian needs to learn that working with Marketplace Partners requires cross-team collaboration, instead of throwing stuff over the fence just so that your team can move on with its project and meet your KPIs. Otherwise, this will just be another project where partners are reinforced in their belief that the only goal for Atlassian engineers is to chase yet another promotion instead of actually delivering value to customers.

1 Like

I would guess that this ratio will be as low as 30%. Among our little green field of apps, none will work with that.

Just create a poll for it and watch the numbers grow.

1 Like

Hi @AaronMorris1,

The intent is to find a general solution for all types of customer operations that involve moving data between (and within) cloud sites on Atlassian’s clouds - the aim is to solve the general problem so we don’t need to go back to partners multiple times.

The most immediate use case for this is customers who are already on Commercial Cloud, but would like the additional security and control Isolated Cloud offers. This is an active scenario for large customer(s) who were already able to move from Data Center or Server to Commercial Cloud for the extra functionality, but also consider the extra isolation of Isolated Cloud to be worth the cost. However, we expect many other use cases in the future - for example, customers with new requirements moving into Atlassian Government Cloud, and customers merging cloud sites following an acquisition (either within cloud, or cross-cloud - e.g. if a company on Isolated Cloud acquires a company on Commercial Cloud).

Note that moving into a new cloud site is really just a special case of merging - the customer creates the new cloud site, probably populates it with a few admin users and test projects first, and then merges their existing cloud site into the new cloud site.

Second, can you please clarify this comment?

You are correct that the difference is in who performs the transformation. When Atlassian performs the transformation, we can do so in our Extract, Transform, Load (ETL) pipeline while the data is being moved between cloud sites.

By incremental migration, what I really mean is that we are aiming to reduce the downtime on the source cloud site. We do that by having a phase when we pre-copy all customer and app data from the source to the destination, doing the transformation as we move the data. During the pre-copy, the destination site is unavailable, and no invocations of your app installation on the destination site occur, but the source cloud site and Forge apps operate as normal. Next, we close off access to the source cloud site (and stop app invocations), and repeat the process, but only for records that changed or were created during the pre-copy (meaning it is much faster). Records that were deleted would be synchronised as a deletion. After that, the destination cloud site is firstly re-enabled for Forge invocations, and then opened up for users.

By the time the first Forge invocation happens on the destination cloud site, the state is an exact replica of the source site at the time access was shut off, and so apps cannot tell any difference compared to a non-incremental migration. But from a customer perspective, the source site is shut off for a much shorter period of time (and so hopefully they are happier to take a data heavy app with them as they migrate). Most of the mapping work happens while the data is being moved during the pre-copy, before the outage window.
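The pre-copy plus delta phases described above can be sketched with a simplified in-memory model; all structures and the `transform` step are invented for illustration:

```javascript
// Simplified model of the two-phase migration: a bulk pre-copy with
// per-record ID transformation, then a short delta pass over records
// touched while the pre-copy was running.
const transform = (rec) => ({ ...rec, id: `new-${rec.id}` }); // ID remapping

function migrate(source) {
  const dest = new Map();
  // Phase 1: pre-copy while the source site stays live and serving traffic.
  for (const [id, rec] of source.records) dest.set(id, transform(rec));
  // (Users keep editing; source.changed collects the touched record IDs.)
  // Phase 2: source is frozen; re-copy only the delta, which is fast.
  for (const id of source.changed) {
    const rec = source.records.get(id);
    if (rec === undefined) dest.delete(id); // deletions are synchronised too
    else dest.set(id, transform(rec));
  }
  return dest; // exact transformed replica at the moment access was shut off
}

const source = {
  records: new Map([
    ['iss-1', { id: 'iss-1', summary: 'Bug' }],
    ['iss-2', { id: 'iss-2', summary: 'Task' }],
  ]),
  changed: new Set(['iss-2']), // edited during the pre-copy
};
const dest = migrate(source);
console.log(dest.get('iss-1').id); // 'new-iss-1'
```

The design point is that the expensive transformation work happens in phase 1, outside the outage window; phase 2 only has to re-process the (much smaller) changed set.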

3 Likes

Hi @AndrewMiller1 ,

Thank you for the prompt reply, much appreciated!

[…] - I assume particularly you’d be interested in being able to write local unit / integration tests for migrations?

Yes, that would be the goal. Ideally, this could become part of an automated regression test.

For local testing, would having some kind of downloadable Atlassian provided test harness that can read the manifest file, and transform identifiers locally using mock mappings help you?

Yes, that would indeed be helpful. I am not sure what would be transformed locally though – can you elaborate a bit? Would this be running locally but affecting the storage in my dev instance? Or would it purely be mock data on mock systems?

So just to understand further: would you ideally prefer to have those files referenced from the manifest and deployed with it, just kept in a separate file – or as a completely separate thing?

A separate file that we reference in the manifest. The rovo:agent module already does this with the prompt and it looks like this:

modules:
  rovo:agent: 
    - key: risk-agent
      name: "My Risk Register Assistant"
      description: A Rovo Agent that helps you manage your project risks

      prompt: resource:agent-resource;prompts/agent-prompt.txt

That would keep the manifest.yml manageable, especially when there’s a lot of mappings.

If this went live today as is, we would likely opt-out and ask our customers to reconfigure on the new instance.

And just to understand further, is that an option because you don’t have a lot of data beyond simple configuration in any of your apps? Or perhaps the data you have beyond configuration data is only ‘cache’ data where deleting and recreating doesn’t have a big impact on customers? Would having an explicit way to signal that to upgrading customers, and perhaps even purge unwanted cache data on migration simplify things for you?

Asking customers to reconfigure is not a preferable option. It’s just the least bad and least risky option I can think of for now. The amount of configuration our customers have ranges from really small, maybe just a couple of kB to hundreds of kB with lots of different settings – there will very much be a demand to have this migrated automatically, at least for some of our customers. However, if there is an automated migration, the expectation will be that it is flawless. And, at least at this point, I am not confident that we’ll be able to provide this. And if something goes wrong in production, we have no way to fix it. (Hence all the emphasis on testing in this thread.)

1 Like

Hi @AndrewMiller1 - Thanks for the clarifications. That helped a lot!

Before I share my expectations, let me start with what I most want to avoid:

Suppose I implement ID mapping and it appears to work correctly. Six months later, a customer using my ROA app runs a Cloud-to-Cloud migration with Atlassian. The migration completes successfully, but three weeks later, an end user finds that Feature X in my app is broken.

I get a support ticket and only then learn about the migration. After troubleshooting, we discover that a small code change five weeks ago invalidated one of the regular expressions in the manifest. At that point, the data is unrecoverable, the customer is frustrated, and both Atlassian and I have a problem.

With that in mind, here’s what I would expect from this feature:

Short-term expectations

  • Atlassian should minimize ID changes wherever possible. ID conflicts during a merge of two production sites are understandable, but merging a production site into a test site should be discouraged. (i.e., Atlassian should overwrite the test data with the production data.)

  • Migration map testing should be fully automatable. I’d prefer an API trigger rather than a manual process in the developer console. Automated unit tests and functional post-migration checks are critical.

  • Automated tests must be deterministic. Developers need a way to provision environments and trigger migrations (via automation) that reliably cause relevant IDs to change.

  • I’d also expect a partnership mindset. Customers moving from Commercial Cloud to Isolated or Government Cloud will perform extensive validation before migrating. It would be ideal if Atlassian involved affected app partners in that process so that we can test the migration mapping against the customer’s actual data (via the customer’s test migration). This would be much better than learning about migrations after problems occur.

Long-term expectation:

  • If Atlassian anticipates this being a recurring use case, I’d expect investment in a long-term solution that either prevents or mitigates these issues.

Regarding test environments: I agree that Commercial Cloud is a suitable test environment, as long as Atlassian maintains functional equivalence between Commercial-to-Commercial and Commercial-to-Isolated migrations.

Thank you!

3 Likes

Hi @AndrewMiller1 – thanks for the reply!

My suggestion to support Forge Remote was not related to the egress of Atlassian identifiers, but simply to be able to support processing of “solution B”-style remapping within a Forge Remote rather than a native function. I realize that remotes will not be possible in IC, but if the same framework is used for commercial-to-commercial migrations, it would be great if Atlassian could design the framework in a way that Remote support could be built in the future.

Here is another use case that will not work with solution A: host app content properties are generally mutable by users, so they are not secure. To prevent tampering, these properties can be signed by the Marketplace app. If you have a signed content property that contains a content identifier, if your “solution A” tries to remap it, the signature will become broken. There is no choice here except to use solution B.

I love the internal optimism about solution A meeting the needs of the many, but I also agree with others that a significant number of apps will probably not be able to use it.

I would, however, love to see similar optimism from Atlassian channeled into the idea of not forcing vendors to pay for customer migrations:

To channel Remie, can “free compute for solution B” not be made an integral part of the RFC, rather than incidental? If this requires another team, can you bring that team into the fold?

For the types and scale of customers that will be presumably migrating to the IC, the existing Forge “free tier” provides budget for (in relative terms) almost nothing, so I do not believe that part will help vendors much.

What I am suggesting is a switch of thinking: instead of vendors having to argue here why we should not have to pay for it, can Atlassian make a case for why vendors should pay for it?

Is there any real worry that vendors will launch a side job to, say, mine bitcoin every time a customer does a migration to IC? With Atlassian raking in millions of dollars annually for a single IC site, is it truly a budget issue? IC sites will likely be the biggest and the most complicated of migrations, and this also implies a larger number of migration test runs. Vendors will have almost no visibility or control, and no advance insight into costs generated.

Yes, Atlassian runs its own ETL pipeline where it will do it for “free” for a subset of data types. Atlassian also has total control over this, meaning that Atlassian can easily revise its transformations, look at costs in real time, and decide to stop/pause/abort each migration if it turns out to be costly. Vendors have no such control. We just release a version of our app, and if it just happens to take $1k of compute for each test migration, we cannot do anything about it, other than hope the customer stops mashing the “migrate” button (before we have to go get a second mortgage).

I could understand Atlassian being concerned by having to foot a potentially unlimited bill…but this is exactly the same concern that we have as vendors too! Except that we have no control, as noted above.

One alternative would be to grant apps a per-migration allocation of “free” compute plus storage operations, all of which is proportional to instance size, and which is consumable via the custom Forge function that handles migrations.

You should also give vendors a toggle where they can opt to either kill the migration if the “free” limit is exceeded, or else consent to pay for additional migration usage at standard Forge rates. Atlassian already built the tooling to limit Forge compute resources through the end of this year, so perhaps some of this logic can be reused here.

5 Likes

My background is more enterprise, and less aligned to the very product-led vision of Atlassian, so the attempt to migrate a customer between hosting environments automatically is a little antithetical to my thinking.

From what is described, this cloud migration will be an uncommon activity, with each attempt a one-shot, but very high-value. (i.e., a large enterprise customer spending six or seven figures moving from commercial to isolated cloud). In my experience, this would be part of a large project/programme, because the customer will likely need to rebuild integrations and automations. Further, it is often an opportunity to rethink existing practices, so customers often don’t want data copied as is.

So whilst I applaud the aim to make it as easy as possible for the customer, I’d advocate for the option of human involvement. Whether that is the customer, a solution partner, or the app vendor, I’d like to suggest having the option to mark the migration as manual, to take advantage of the inevitable accompanying project.

In terms of specifics, some of our apps need no migration (they will just work), and another may be best served by an automated migration process (it reads the content in the instance and reconfigures itself after a button press). And for the others that may need a migration, my gut feel is that the effort to build an automated solution (especially with the necessary ongoing testing) would outweigh the cost of us getting involved each time. As others have said here, testing is so important because if the migration fails, it sounds like there is no second chance. The old instance will be gone. So we’d have to be confident the migration will work 100% of the time.

I was looking at the DC/server migration mechanism recently (Migration path readiness checklist | App migration platform). That takes a different approach which I’m sure you are aware of, but one useful aspect is the opportunity to provide documentation and indicate a manual approach.

All of this is predicated on the assumption that this activity may happen once or twice in the lifetime of an enterprise customer, and essentially never for a smaller (sub-1000 seat) customer. So I’d imagine seeing maybe 2 of these a year max for our apps. If that’s wrong, then the effort/value balance will change for me.

@AndrewMiller1 perhaps you could give us an estimate of volume to help us understand? What percentage of customers will migrate each year? (Perhaps broken down by user tier at a high level.) What is valuable for Atlassian to automate (due to scale) may not be for us.

1 Like

Hi @AndrewMiller1 ,

I’d like to highlight @scott.dudley’s remark regarding backups again. Taking a backup and restoring it is very similar, likely also needing a remapping of identifiers. Can’t you look at a migration as taking a backup in e.g. Commercial Cloud and restoring it in Isolated Cloud?

If the backup features are handled separately, vendors will also need to put backup identifier mappings into their app manifest, and most likely a different Atlassian team will choose a different solution for those mappings…

1 Like

Hi @AndrewMiller1 ,

A lot of interesting things have already been said on this thread, and I agree with many of them – mainly that Solution A seems too basic to cover most real-life cases.
Also, if the same solution will cover instance merges, migrations, imports, and so on, that would be excellent.
And please design something that can easily be extended later, because we will need this for non-RoA apps soon after: my Forge Remote backend needs to be aware that my app has been re-installed on a new instance after an import, aware of instance merges, and aware of a migration from cloudId xxx to yyy…

Now, one thing that doesn’t seem to be covered and that will greatly impact us: what about custom fields? Our app uses Forge custom fields and references both Jira custom fields and our own custom fields.

More precisely: the settings of our app contain complex queries that reference customfield_11288, which could be a Jira custom field or one of our own custom fields.
Currently, this is stored outside of Forge, but we will migrate that to either Forge KVS or Forge SQL in 2026.

Can you confirm that our Forge custom fields will be migrated too? With their values? And their context config?
Will references like customfield_11288 stay the same? If they change, this will break a lot of things, as they could be used by other tools, like Automation scripts. How are we and customers supposed to handle that?
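If such references did change, rewriting them inside stored free-text queries might look something like this sketch – the field IDs and the old-to-new mapping are invented, and this is not a confirmed Atlassian mechanism:

```javascript
// Hypothetical sketch: remap customfield references embedded in a stored
// query string, assuming an old->new field-ID map were provided at
// migration time. Unknown fields are left untouched.
const fieldMap = new Map([['customfield_11288', 'customfield_20417']]);

const query = 'project = X AND customfield_11288 ~ "risk"';
const remapped = query.replace(/customfield_\d+/g, (m) => fieldMap.get(m) ?? m);

console.log(remapped); // 'project = X AND customfield_20417 ~ "risk"'
```

This only helps text the app itself controls; references held in customer-owned Automation rules would still break unless the field IDs are preserved.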

Thanks for your quick reply.


Thanks for the input on the RFC @piotr.bojko. Would the reasons why your apps would need a more complex migration solution be similar to those already given on this RFC (identifier is being hashed, or identifier is in a signed block of content), or are there other reasons?

Thank you for the further input @scott.dudley .

I note your points about the value of allowing interaction with Forge remotes when this is applied to commercial-to-commercial migrations, and about cryptographically signed properties, as use cases that would need solution B.

Regarding the cost model, cost observability & cost control, I appreciate your input. I’m not personally in a position to comment on that, but the relevant team are aware of partner concerns in this area, and the comments on this RFC.

Thanks @MartinWood for your detailed input on the RFC!

It is certainly true that large customers doing such a migration will contact Atlassian and app partners to negotiate contractual terms and so forth.

However, keep in mind that with Runs on Atlassian apps, as in Isolated Cloud, as an app partner your only access to Forge Storage is through deployed code (as the data may be highly sensitive). The customer also does not have direct access to app-specific data except through the app, and Atlassian also does not build tools for their staff to directly access the data. So the way you’d achieve a migration would need to be through code.
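To make that concrete: for a Runs on Atlassian app that keeps identifiers in key-value storage, the remapping would have to run as app code. The following is a rough sketch against a generic key-value interface — none of these names are real Atlassian or Forge APIs, and the old-ID → new-ID mapping is assumed to be supplied by whatever mechanism this RFC lands on:

```typescript
// Hypothetical sketch of an in-app migration pass over key-value storage.
// KVStore stands in for whatever storage API the app uses (e.g. Forge KVS);
// the identifier mapping is assumed to come from the migration platform.

interface KVStore {
  keys(): Promise<string[]>;
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

// Minimal in-memory implementation, useful for local testing of the pass.
class MemoryStore implements KVStore {
  private data = new Map<string, string>();
  async keys(): Promise<string[]> { return [...this.data.keys()]; }
  async get(key: string): Promise<string | undefined> { return this.data.get(key); }
  async set(key: string, value: string): Promise<void> { this.data.set(key, value); }
}

// Walk every stored value and rewrite those that appear in the mapping;
// returns the number of values rewritten.
async function migrateStoredIds(
  store: KVStore,
  mapping: Record<string, string>, // old identifier -> new identifier
): Promise<number> {
  let rewritten = 0;
  for (const key of await store.keys()) {
    const value = await store.get(key);
    if (value !== undefined && mapping[value] !== undefined) {
      await store.set(key, mapping[value]);
      rewritten++;
    }
  }
  return rewritten;
}
```

In practice the stored values would be richer objects with identifiers embedded inside them, so the rewrite step would be type-specific; the point is only that, without direct data access for either the partner or Atlassian, the remapping has to execute as deployed app code.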

In terms of specifics, some of our apps need no migration (they will just work), another may be best served with an automated migration process (it reads the content in the instance and reconfigures itself after a button press). For the others that may need a migration, my gut feeling is that the effort to build an automated solution (especially with the necessary ongoing testing) would outweigh the cost of us getting involved each time. As others have said here, testing is so important because if the migration fails, it sounds like there is no second chance: the old instance will be gone. So we’d have to be confident the migration will work 100% of the time.

Note that the pattern we’ve seen with DC to Cloud migrations is that admins at customers do multiple test runs of the migration (without shutting down the source site), test the destination (with only admins having access), and, when they are happy it is working, declare an outage window (and shut down access), migrate the source site, do user acceptance testing on the destination site, and make a go/no-go decision. If it’s a go, they open up the destination to users and tell them to use it instead of the source. If it’s a no-go, they tell users to keep using the source.

If your app fails in a test run, app partners would expect contact from a customer, and in most cases a chance to fix it - but if they decide your app is too much hassle, you’d be at risk of losing the customer. If your app causes a no-go decision for the final migration, you could expect an angry customer - but by that stage, there would have been testing already done.

However, it would be a good idea to have a way for app partners to flag custom migration instructions (e.g. “we recommend contacting the vendor”).
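For illustration only, such a flag might look something like the following manifest fragment. The key names here are invented; nothing like this exists in the real Forge manifest schema today:

```yaml
# Hypothetical manifest fragment -- key names are illustrative only,
# not part of the real Forge manifest schema.
app:
  migration:
    mode: manual
    instructions: "Please contact the vendor before migrating this app."
```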

@AndrewMiller1 perhaps you could give us an estimate of volume to help us understand? What percentage of customers will migrate each year? (Perhaps broken down by user tier at a high level). What is perhaps valuable for Atlassian to automate (due to scale) is not for us.

I don’t have any estimates at the moment unfortunately - we’d expect an initial group of customers (which may or may not have one of your apps installed) moving from Cloud to IC as it becomes available (Early Access or General Availability), and then a slow ongoing trickle of customers who acquire new requirements to support regulated industries organically. We’d also expect to see occasional customers with needs to consolidate following mergers & acquisitions, or otherwise to restructure their cloud site presence.

Thank you for your input on the RFC @SilvreLestang.

Now, one thing that doesn’t seem to be covered and that will greatly impact us: what about custom fields? Our app uses Forge custom fields and references both Jira custom fields and our own custom fields.

More precisely: the settings of our app contain complex queries that reference customfield_11288, which could be a Jira custom field or one of our own custom fields.
Currently, this is stored outside of Forge, but we will migrate it to either Forge KVS or Forge SQL in 2026.

Can you confirm that our Forge custom fields will be migrated too? With their values? And their context configuration?
Will references like customfield_11288 stay the same? If they change, this will break a lot of things, as they could be used by other tools, like Automation scripts. How are we and customers supposed to handle that?

I think there are a few things to consider about Custom Fields:

  • Are custom fields migrated at all? Yes, they would need to be!
  • Do Forge custom field type identifiers change? Since these are defined by app partners, they would not.
  • Do non-app custom fields have changing identifiers? Yes, this can happen, as they may clash.
  • Custom field identifiers do change, but through the APIs you would refer to custom fields by type and issue, so you probably wouldn’t be using that ID.
  • Will identifiers placed in custom fields be mapped? Custom fields can explicitly be typed as an identifier (user, issue), in which case the base migration would handle them. If the identifier is embedded in text, number or string fields by your app, the app partner would need to identify this.
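For that last embedded-in-text case, a partner-side remap could look roughly like this sketch. The mapping format and the new field ID are invented for illustration; the real mapping source would be whatever API this RFC ends up providing:

```typescript
// Hypothetical sketch: remapping custom field IDs embedded in free text.
// The mapping and the field IDs here are illustrative values, not the
// output of any real Atlassian API.

function remapCustomFieldIds(
  text: string,
  mapping: Record<string, string>, // old customfield_* ID -> new ID
): string {
  // Replace each customfield_<digits> token that has a mapping entry;
  // IDs without a mapping entry are left untouched.
  return text.replace(/customfield_\d+/g, (id) => mapping[id] ?? id);
}

// Example: a stored query whose field ID changed during migration.
const remapped = remapCustomFieldIds(
  'project = APP AND customfield_11288 = "done"',
  { customfield_11288: "customfield_20417" },
);
```

The hard part, as the bullet above notes, is not the replacement itself but knowing which stored values contain embedded identifiers in the first place — which only the app partner can declare.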

Does your app have any Atlassian-issued IDs in text, string or object typed fields? If there are apps doing this, we’d need to ensure that is considered.