Thanks @AdamMoore, even though I am a bit surprised by this statement. Not having a clear and timely roadmap for residency support in Forge will be a major blocker for platform adoption.
Why would I migrate a Connect application to Forge if I lose this functionality, which is especially requested by large customers? The same goes for starting new apps: a Connect competitor will always be able to support those crucial features.
Forge is about "secure, reliable, and scalable" apps - having no plan for residency and realms is, at the very least, not a scalable approach.
For Forge Remotes: can Atlassian make sure that not every request (e.g. for macro rendering) is proxied through the US-based Forge backend to the vendor's remote? This would result in a really slow experience for users, especially when the vendor's remote is EU-based.
Thanks @AdamMoore, sounds great overall, and we agree with Milestones 1-3. As for "Let's see how the other responses go": I indeed disagree with Remie's proposal to promote Milestone 4; on the contrary, I think Sven captured things well here:
I would go even further and propose to demote Milestone 4 to be the last one, given you can indeed access Forge storage via Milestone 5 anyway?
Or, put another way: dedicated support for Forge storage (or any other Forge capability) only seems to be a DX improvement rather than delivering additional capabilities, whereas Milestone 5 unlocks all of them. Or am I missing something here?
Does this mean that we would have to configure a proxy function? Or will this be a service that the Forge platform provides? Either way, I'm concerned about the latency that Atlassian might introduce…
If it is a service in between - any chance that it would be able to do proper caching based on the content headers returned?
If you are already on Forge, like yourself, and you are only looking to use Forge Remote to offload highly specific tasks it might not be very important to have access to Forge Storage from the Forge Remote. You already have access to Forge Storage from your Forge functions, and you either pass the data on or call a web trigger to retrieve the data (if even required).
However, if you are coming from Connect, the lack of secure storage within Atlassian infrastructure is crippling.
We will probably be using Forge Remote to keep 95% of our app on our own infrastructure and will only begin with static hosting on Forge. Having access to Forge Storage from our own infrastructure will allow us to move more quickly and achieve better trust signals with our customers as their data will remain on the Atlassian infrastructure whilst we can steadily migrate to Forge.
So yeah, I would say it’s just different perspectives.
Hi everyone, regarding Forge data residency: this is currently a work in progress. I mentioned in the November update that one of the key dependencies for rolling out Forge data residency is enabling multi-region compute. We are currently doing some heavy lifting to get multi-region compute out, so we can ship data residency for hosted storage. I would like to emphasise that Forge data residency is one of our top priorities. As we are still early in the project, keep an eye on the public Trello card for updates. Thanks!
I think ScriptRunner would need milestones 2 and 3 first so we can do ingress and egress (maybe including some of the event filtering that @SvenSchatter mentioned), then 1 and 5 so our UI can be native Forge and any business logic that fits Forge functions better than our Connect implementation can be migrated to Forge while still being callable from the remaining Connect services, finally followed by 4.
Other use cases:
Like @SvenSchatter said (again) “It would be great if from the Forge backend you could call your remote backend, using the Atlassian-controlled authentication that is part of the Milestone 3 install hook.”
Preference for receipt of long-lived API creds:
Install hook is good, even if we have to refresh them with some frequency
Not that I can think of
For milestone 2 (event delivery) I’d expect the same retry logic and rate limiting of outbound requests from Atlassian to apply to this implementation as it currently does to the Connect implementation.
For milestone 3 (product API access) again I’d be expecting the same cost budget + concurrency rate limiting logic that is currently in place.
For milestone 4 & 5 (Forge storage and function access) I’d want to know what the rate limits would be to help identify whether moving data/compute out of Connect would help or hinder the app.
Anything that can be consumed by a Connect app I'd expect to have a long-lived identifier or URL for - i.e. don't change it when a new version of the Forge app is deployed, because that would be a nightmare to keep in sync with Connect app deployments.
Anything that the Forge platform is doing to communicate externally (e.g. milestones 1 and 2) I'd expect to have well-thought-through error messages with plenty of metadata, and for those to be visible both in the Forge logs and also available somehow for consumption by external log/metric processing tools. The existing Forge observability tooling just doesn't match the capabilities we have with other tooling in our existing Connect app hosting setup.
In addition to @SushantBista's post, it's worth mentioning that the service responsible for making requests to remote backends is also the one that invokes Forge functions. This means that as we introduce multi-region compute support in Forge, we'll be able to extend it to Forge Remote as well.
Once both projects make further progress, we will have a better understanding of the exact timeline for when we can provide support.
What do you think about the order of priorities? Are we building the most useful functionality first?
For us - Milestone 4 will probably remove the most blockers (in combination with Milestone 1) - so we would prefer it as the second priority. However, for us - it would not actually be useful until either milestone 2 or 3 is completed. So perhaps the overall priorities are correct (for our apps at least)
Would you have a preferred way to receive long-lived API credentials?
We have use cases for our apps where we would like to call Atlassian API’s on a schedule - that could be days or even months apart. However, we do not want to store very long lived credentials if we can avoid it.
For example, a timed batch job executing on the remote or some other external, non-Atlassian event-driven application interaction.
So, regarding the point mentioned about scheduled delivery of access tokens: this would be ideal for us if we could control the schedule. It would allow us to avoid storing the access token beyond the scope of the actual scheduled task we are executing for the user.
Any initial thoughts you might have on the finer details of running a remote backend e.g.
As far as I can tell a lot of the ideas around Milestone 3 are similar to what Connect-on-Forge already does today. So it would be great if we could work together to minimize migration efforts between Connect-on-Forge and Forge Remote.
What I perceive is that this will result in quite a confusing landscape for developing cloud apps for the Atlassian platform. Developers will have the choice between Connect, Connect on Forge, Forge, Forge with remote backend and OAuth…
This does not feel sustainable. Perhaps some clear outline of which developer platforms are here for long term and which should be viewed as a step in migration might be very helpful.
Webhooks/events: Yeah we’re definitely going to take the learnings from Connect webhooks and apply the same thinking to Forge Remote. It makes a lot of sense to keep them the same.
API rate limits: Yep, it makes total sense to keep them the same.
Storage/Webhooks rate limits: We haven't thought this far ahead yet, but perhaps we'll reach out for feedback when we're further down the road.
Long-lived URLs: Yep, makes sense.
Observability/traceability: Good timing, we're actually discussing this at the moment. The plan would be to capture remote invocations in the Forge logs so you'd be able to see errors, view metrics in the developer console, etc. The outbound requests also include a unique traceId you can use to trace requests from Forge to your remote.
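To make the traceId idea concrete, a remote could pick the trace id off the incoming request and attach it to its own log lines. This is only a sketch: the header name used below is a guess, since the post only says a unique traceId is included in outbound requests.

```typescript
// Minimal sketch: correlate remote-side logs with a Forge invocation.
// The header name "x-forge-trace-id" is hypothetical -- the actual
// header used by Forge Remote may differ.
function extractTraceId(headers: Record<string, string | undefined>): string {
  return headers["x-forge-trace-id"] ?? "unknown";
}

// Prefix a log message with the trace id so it can be matched against
// the Forge logs / developer console entries for the same invocation.
function logWithTrace(
  headers: Record<string, string | undefined>,
  msg: string
): string {
  return `[trace=${extractTraceId(headers)}] ${msg}`;
}
```

Feeding that prefix into an external log/metric pipeline would address the observability concern raised earlier in the thread.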
@AdamMoore and @SushantBista, I don’t follow the reasoning here. Are you saying that customers’ needs for data residency would be satisfied when Forge’s data is resident in the customer regions? How will the “remotes” know where to process data?
Developers will be able to specify different remote URLs for each region they support as they do in Connect. The invocation service (which will be running in the customer’s region) will route requests directly to the appropriate URL for that region.
So, if the app is only doing remote compute (and not storage) then it could meet a customer’s need for data residency at that point.
For full data residency support (including remote storage) we’ll need a similar suite of capabilities as Connect (realm migration hooks etc). Probably something worthy of its own RFC at some point.
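A rough sketch of the per-region routing idea described above, assuming a simple region-to-URL map. The mapping shape, region names, and URLs are all assumptions for illustration; Connect expresses something similar with region-specific base URLs in its descriptor.

```typescript
// Hypothetical per-region remote URLs, as a developer might declare them.
const remoteUrls: Record<string, string> = {
  us: "https://us.example-app.com",
  eu: "https://eu.example-app.com",
};

// The invocation service (running in the customer's region) would route
// to the matching URL, falling back to a default when no region-specific
// remote is declared.
function resolveRemoteUrl(
  region: string,
  fallback: string = remoteUrls["us"]
): string {
  return remoteUrls[region] ?? fallback;
}
```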
Thanks for the update. I cannot find a public Jira ticket for Forge data residency in your ecosystem.atlassian.net Forge project. It would be nice to have a Jira ticket that we can watch in order to follow up on your progress. Could you create one, or give us a link if there is an existing ticket?
We have use cases for our apps where we would like to call Atlassian API's on a schedule - that could be days or even months apart. However, we do not want to store very long-lived credentials if we can avoid it.
Just thinking out loud a bit… I wonder if this is a use case for remote Forge scheduled triggers? Perhaps you could set up a daily scheduled trigger that makes a call to your remote with short-lived access tokens (similar to product events in milestone 2). If you don't need it on a particular day you could ignore it, but you'd be able to call Atlassian APIs at least once a day without storing any credentials.
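A sketch of what the remote side of that might look like: the short-lived token arrives with the scheduled invocation, is used within that single request, and is never persisted. The payload shape and the `/rest/api/3/myself` endpoint here are illustrative assumptions, not a defined Forge Remote contract.

```typescript
// Shape of a hypothetical scheduled-trigger payload delivered to the remote.
interface ScheduledInvocation {
  apiBaseUrl: string;
  token: string; // short-lived, scoped to this invocation only
}

// The token never leaves this function's scope and is never written to
// storage, so the remote holds no long-lived credentials. `callApi` is
// injected so the sketch stays testable without a real network call.
function handleScheduledTrigger(
  invocation: ScheduledInvocation,
  callApi: (url: string, token: string) => number
): number {
  return callApi(`${invocation.apiBaseUrl}/rest/api/3/myself`, invocation.token);
}
```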
And yeah, milestone 3 is probably the biggest overlap between Connect-on-Forge and Forge Remote so we’ll need to provide more clarity and guidance here.
Perhaps you could set up a daily scheduled trigger that makes a call to your remote with short-lived access tokens
Yes - I think this would work for us. In general, being able to make authenticated calls to the remote "on demand" from our Forge backend functions would mean we would never need to store credentials outside the scope of a single event / request.
I agree with everybody here; this is awesome and will unblock a lot of things, including making it a lot easier to progressively migrate to Forge from Connect.
What do you think about the order of priorities?
For us the right order is: 1, 2, 3, 5 (I don’t see any usage for 4 right now)
Are there any use cases you don’t think we’ve covered?
It is unclear to me where the ability to call our remote backend from the Forge backend sits in this. Did I miss something, or is it not in this roadmap?
Would you have a preferred way to receive long-lived API credentials?
The current Connect mechanism, i.e. an installation trigger that sends us an app/instance-specific long-lived token used to request a shorter-lived token, seems the right way to me.
I have a question about the context_token, what info could we extract from it?
CloudId? AppId? environment id? instanceUrl/baseUrl/siteUrl? userId?
Documenting this early on and making sure it is coherent between the different types of calls/triggers will simplify things.
Behind this question is the need to correlate data between the Connect part, identified by the clientKey, and the Forge part, identified by … the cloudId, I suppose.
Right now, we use the siteUrl in the install trigger to call our backend with a hardcoded token to make the link between the clientKey and the cloudId based on the URL… A better mechanism would be good.
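For illustration: if the context_token turns out to be a JWT (an assumption, based on how Connect tokens work), its claims could be inspected like this. Note this is an unverified decode for debugging only, and the claim names (cloudId, appId) are guesses.

```typescript
// Unverified JWT payload decode -- for inspecting claims only.
// In production the token's signature must be verified before
// trusting anything inside it.
function decodeContextToken(token: string): Record<string, unknown> {
  const payload = token.split(".")[1];
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}
```

Being able to read a cloudId straight out of the install-trigger token would let us record the clientKey/cloudId pairing directly instead of correlating via siteUrl.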
It is unclear to me where the ability to call our remote backend from the Forge backend sits in this?
Yes, a few commenters have mentioned this so it’s another milestone to add to the mix. Compared to the others, I think it would be relatively low effort so we’ll look to include early in the roadmap if we can.
I have a question about the context_token , what info could we extract from it?
You’ve touched on a really important point here. This is probably one of the main things I want to collaborate on and get feedback on during the EAP. The various ids will be similar to what is currently available in Forge’s app context. One thing we want to introduce is a single id (installationId maybe) that you can key data to without having to build or decompose other ids like you need to currently in Forge.
The relationship with clientId is also something we need to solve but we’re not sure exactly what that looks like yet.
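To make the keying point concrete: today an app typically composes its own storage key from several context ids, whereas a single installationId (the name is tentative in the post above) would collapse that into one stable identifier. A hypothetical sketch:

```typescript
// Today: compose a key from several context ids (names illustrative of
// what Forge's app context currently exposes).
function compositeKey(
  cloudId: string,
  appId: string,
  environmentId: string
): string {
  return `${cloudId}:${appId}:${environmentId}`;
}

// With a dedicated installation id, the key is the id itself -- no
// building or decomposing of other ids required.
function installationKey(installationId: string): string {
  return installationId;
}
```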
For us the important parts are milestones 1 and probably 4, though only transitionally. Our use case is an app that was built on Connect because Forge didn’t exist yet, but that we think can be migrated completely to Forge. (We’d like to do that so we can stop worrying about hosting for the backend.)
The blocker that is relevant to this RFC is that we already have a lot of users who have a lot of configuration data in our backend. Anything other than seamless automatic migration is implausible from a UX point of view.
In our case, the (aspirational) plan is to first move most of the app to Forge using something like milestone 1 to keep loading the configuration data from our backend. Then, we would use something like milestone 4 to migrate the existing data into Forge storage. Finally, we would just get rid of the legacy backend and the Forge Remote part will no longer be used at all.