I understand if this is not necessarily part of this RFC, but can you please elaborate on #2 (Tenant Isolated access credentials):
Where does this security control come from?
Why has Atlassian chosen to apply this control when it is not an industry best practice? It is very common for OAuth apps to have access to all accounts using the same client_id and client_secret.
It seems like this security control overcomplicates engineering efforts (making them error prone) without any validation of its necessity within the industry. Is Atlassian trying to say that the rest of the world is doing it wrong?
Can you please confirm that this is a change of behaviour compared to the OP of this RFC?
Apologies, that sequence diagram certainly could have been clearer. I have updated it now to mark the boundaries more clearly. Both the Remote Endpoint and the Auth Endpoint would be hosted on partner infrastructure.
The updated diagram together with the code sample does make it clearer. However, I’m still concerned by the number of moving parts, each a potential point of failure, that this flow requires.
The process that was triggered will have to wait for a double round trip (remote resource request to authTriggerUrl → Forge request to authTriggerRemoteEndpoint → response from authTriggerRemoteEndpoint → response from authTriggerUrl).
There is a lot that can go wrong (timeouts, network failures, etc.)
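To make those failure modes concrete: in practice each leg of that chain needs its own timeout guard, something like this hypothetical TypeScript helper (not part of the RFC’s proposed API):

```typescript
// Hypothetical helper (not from the RFC): guard one leg of the double
// round trip with a timeout, so a hung hop fails fast instead of
// blocking the in-flight request indefinitely.
async function withTimeout<T>(step: () => Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  try {
    // Whichever settles first wins; the loser is discarded.
    return await Promise.race([step(), timeout]);
  } finally {
    clearTimeout(timer);
  }
}
```

And a failure in any one leg forces the caller to retry the whole chain from the top.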
I hear that preference across multiple posts. What might not be as clear for the Forge team is what going against that preference might imply as a trade-off. Could you and others who have voiced this standards-based preference help elaborate on this kind of statement:
If we go with this more proprietary flow, what does this do to your app development efforts? How does it affect your operations, including concerns like performance, reliability, and security? How will this impact customers & users?
I ask because I raised the same objection internally. But for me, the preference is just “expert opinion” and doesn’t really help the Forge team make an informed trade-off between the developer experience (for you) and the trust experience for customers.
Writing this sort of glue code is (almost) never an impossible challenge, nor a super difficult one.
The first problem I see is that people coming from other ecosystems will have this additional barrier to entry: one more thing to learn in an environment that already has a learning curve.
The other issue is that this will require extra code that normally wouldn’t be necessary: with a standard flow we’d just plug in a library and never think about it again. Every vendor’s implementation can have bugs, and to be honest I haven’t had to implement a custom authentication flow in years; I’d probably make mistakes myself.
IMO this is not a huge deal, but if we can have a say, it’s pretty much expected that devs will prefer sticking to industry standards.
At least, having a reference implementation would be a good starting point if it turns out that your goals for security cannot be achieved with OAuth.
That being said, I agree with the comments in this thread: the world easily gets by with OAuth, I don’t know whether all this is actually necessary.
A proprietary flow is more work to understand (it’s unique, versus something off the shelf that people have worked with before), and that takes longer to internalize, design, build, integrate, get working correctly, and test. It also raises the maintenance cost because, again, this is unique and needs to be understood as unique when changing it.
I’m a little concerned about performance and reliability, because you’ve got this abnormal “nested” HTTP request situation, and we run at a scale where things fail all the time and we have to handle failure well. When things fail, we need to retry the request to get a token, and Atlassian needs to retry sending the token to us a few times if things go wrong before responding to the request for a token.
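To make that retry handling concrete, here is a minimal sketch of the kind of backoff loop both sides would need around their leg of the flow (the helper is hypothetical, not part of any proposed API):

```typescript
// Hypothetical retry helper: both sides of the flow need something like
// this, since a failed token delivery or token fetch has to be retried.
async function retry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff between attempts: 100ms, 200ms, 400ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

Straightforward in isolation, but it has to be written, tuned, and monitored on both ends of the nested request.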
I’m uncomfortable with the unrotated, persistent authTriggerUrl; I’d like credentials, tokens, and the like to be rotated as a matter of course. That avoids a persistent “backdoor” if anyone does manage to get hold of it and decrypt it. It’s not “secure by design”, for want of a better phrase.
It might take us longer to build this integration vs a “standard” one. That’s then a delay for delivering customer value. In this specific context, the customer value is hard to justify because this would be done as part of moving a big Connect app onto the Forge platform to operate as a hybrid Connect/Forge app, or Forge+Remote app which falls outside of the “hosted solely on Atlassian infrastructure” customer value stream. The customer may not get any value until we adopt some Forge specific integration that we wouldn’t be able to unless we went down this path.
So: we’d prefer a standardised way, but we can (and have, and will) work with what’s been proposed.
First of all, thanks a lot Adam for digging into this and sharing the information with all of us.
We are in a similar situation: trying to move a Connect app that has been in the Marketplace for years to Forge. As we need some modules to place items in Jira that are not accessible in Forge, we are evaluating our options using a new architecture based on the Connect in Forge approach.
We have some big concerns with the RFC:
I would prefer something more standard, as others have said. At the moment we were also opting for OAuth Connect tokens.
I know that you are still defining the solution, but do you have a tentative timeframe for when it might be ready?
Do you have any plans to provide a user token too? We will need that as well.
To give you some more context about my concerns: we want to have a public API in our app. At the moment, building it natively in Forge using webtriggers does not seem to be a good option. So building it in Connect would force us to choose one of the following:
Storing data outside Forge, so we can get to it from Forge (calling Connect) or directly from Connect.
Storing data inside Forge (it would be great to have data residency by default) and forgetting about the API for the moment. To be able to have an API in Connect with the data stored in Forge, we would need to get these Forge authentication tokens, but also asUser() to take permissions into account.
Maybe I’m missing something (any kind of feedback here is really appreciated!), but at the moment it is complicated to define an app architecture that follows the Forge Remote or Connect in Forge approaches. Authentication is one of the pain points, so this RFC may be very important for us in unlocking different possibilities here.
Currently, if we want to move a Connect app to Forge, we try to put as much as we can on Forge infrastructure, but unfortunately we find that most things have to stay out of Forge.
I want to highlight again what marc has said here, as I did not see it get answered:
Why not require separate OAuth client credentials for each combination of app and clientKey? This would allow the use of standard OAuth.
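As a rough sketch of how that might look on the partner side (all names here are illustrative, not a proposed Atlassian API), tenant isolation would simply fall out of the credential lookup key, while the token fetch itself stays plain OAuth 2.0 client_credentials:

```typescript
// Illustrative only: one client_id/client_secret pair per
// (app, clientKey) combination, so a leaked secret exposes one tenant.
interface ClientCredentials {
  clientId: string;
  clientSecret: string;
}

// In-memory for illustration; a real app would keep this in a database.
const credentialStore = new Map<string, ClientCredentials>();

// Tenant isolation comes from the lookup key; everything downstream
// is a standard client_credentials token request.
function credentialsFor(
  appId: string,
  clientKey: string,
): ClientCredentials | undefined {
  return credentialStore.get(`${appId}:${clientKey}`);
}
```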
I also want to underscore what @alvaro.aranda said: this will again be one of those areas where existing Connect capabilities are missed and replaced by a solution that is inferior from a vendor’s perspective, keeping us invested in Connect.
I just want to give a quick update with where we’re at.
The feedback is pretty clear: everyone would prefer a solution where you store a client secret and/or refresh token on your servers and use some combination of those to synchronously fetch access tokens as required.
There have also been some interesting suggestions (like Remie’s) and options to layer on some extra security (IP allow-listing etc) to make things more secure and protect against secrets falling into the wrong hands.
We’re taking our time to re-evaluate some of those options to make sure we go with the best solution. To be clear, we’re not abandoning the current proposal yet, but we will dig deeper into alternatives.
Unfortunately none of those alternatives are particularly straightforward, just for context:
Refresh tokens are part of the authorization code flow, which doesn’t fit into the Forge installation flow. To offer refresh tokens with something that works more like a client credentials grant type, we would need to create a custom grant type and implement it in Atlassian’s identity service. This might be doable, but it would be a heavy lift. Also, at the end of the day, while the concepts will feel more familiar, I’m not sure how well it will work with standard OAuth libraries out of the box.
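For context, OAuth 2.0 (RFC 6749, section 4.5) does allow for URN-named extension grants, so on the wire a custom grant might look something like the sketch below. The grant_type value here is invented purely for illustration:

```typescript
// Hypothetical request body for a custom extension grant.
// The URN is invented; RFC 6749 section 4.5 lets servers define
// extension grant types identified by absolute URIs like this.
function buildCustomGrantRequest(
  clientId: string,
  clientSecret: string,
): string {
  return new URLSearchParams({
    grant_type: "urn:atlassian:params:oauth:grant-type:forge-install", // invented
    client_id: clientId,
    client_secret: clientSecret,
  }).toString();
}
```

A generic OAuth client library can usually be pointed at an extension grant, but as the post notes, it would not work out of the box without that extra configuration.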
Putting controls on traffic coming out of Forge is much simpler than controlling traffic coming in, because the surface area is much broader. Again, we might find a good solution for this, but it’s just more complicated.
Which is all a long way of saying I don’t expect we’ll have this RFC closed out on March 1st as originally planned, but we’ll take the time to make sure we find a good solution.
Thanks again to everyone for their feedback and patience.
Which obviously also includes adding the authorization step to the installation flow. It would make sense to do this for apps that indicate they need/support offline access, which should be added to the manifest. This should also allow for the more traditional approach of having an allow list of callback URLs, making it easier to control token delivery.
It also meets the requirement of having tenant isolated access credentials and allows for greater data access control as the end-user will have the ability to revoke offline access at any time without having to uninstall the app.
I understand that this would involve more work from your part, and probably also make this project span multiple teams within Atlassian, but I think moving forward, embracing the industry best practice of OAuth using a flow that both developers and end-users are familiar with will benefit Atlassian greatly. The time & effort you will have to put into engineering now will pay off greatly in terms of security and trust in the future.
Making the OAuth authorization part of the installation step allows Atlassian to inform customers of the Forge remote backend and get the admins authorization at install time.
This would also make it simple to base the authorization on a combined token of clientKey and app key.
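To sketch the allow-list part of this idea (a hypothetical check, assuming callback URLs were registered in the manifest at install time):

```typescript
// Hypothetical sketch: token delivery is only permitted to callback
// URLs whose origin was registered ahead of time, over https only.
function isAllowedCallback(url: string, allowList: string[]): boolean {
  try {
    const parsed = new URL(url);
    return (
      parsed.protocol === "https:" &&
      // Exact origin match, so prefix tricks like
      // https://app.example.com.evil.net don't pass.
      allowList.some((allowed) => new URL(allowed).origin === parsed.origin)
    );
  } catch {
    return false; // unparsable URLs are rejected outright
  }
}
```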
Hi all. I’ll chip in my 2 cents as well. First off, I will concede that the proposed mechanism does remove the need to store any long-term credentials. If that is the main aspect of security we are concerned about, and it trumps utility, then fine, let’s crack on.
That being said, I think keys are still generated and exchanged, and I am not at all confident that the resulting implementation on both sides will actually produce a more secure system overall. Not to build a strawman here, but it seems to me we are going out of our way to design and implement a system more secure than the industry standard, even though we are not in the security business. Assume we have a breach and clients come to our door asking “why did you not use OAuth 2.0”: do we have a good answer?
On a separate note, regarding the solution itself: we say this can act as a sync operation, but for that to happen we have to wait until all moving parts confirm both write and read capability. The in-flight request now needs to:
→ call the authTriggerUrl. This runs on Forge; it needs to work in a blocking I/O fashion, write to the source of truth (not a queue), and ensure any read mechanisms are up to date (i.e. replicas caught up, caches flushed);
→ then send the token to the authTriggerRemoteEndpoint, which needs to work in exactly the same fashion, and then return a 200 to confirm not merely that the token was received, but that it is actually ready to be read;
→ then the initial request needs to make a new call to the tokenStore to get the token.
And I cannot help but wonder: why all the “requesting” at all? Why not just have Forge proactively send a new key to the callback every so often? It removes 90% of the complexity, has the exact same effect from a security point of view, and acts like Connect in the sense that we can just pick up the key and make a request. All we have to do is implement an endpoint that receives a key and updates a db.
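The push model being described would reduce the partner side to roughly the sketch below (an in-memory store for illustration; a real app would persist to a database, and the handler shape is framework-agnostic rather than any proposed API):

```typescript
// Illustrative push-model receiver: Forge would periodically POST a
// fresh token, and the partner side only has to persist it.
const tokenStore = new Map<string, { token: string; receivedAt: number }>();

// Returns an HTTP-style status code: 400 for a malformed delivery,
// 200 only after the write has completed.
function receiveToken(tenantKey: string, token: string): number {
  if (!tenantKey || !token) return 400;
  tokenStore.set(tenantKey, { token, receivedAt: Date.now() });
  return 200;
}
```

Subsequent outbound requests then just read the latest token for the tenant, much like Connect’s shared secret today.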
Apologies, I never officially closed this post out.
As I said above, we’ve heard your feedback loud and clear. There is a very strong preference for long-lived credentials that you can use synchronously to fetch new access tokens. There is also a strong preference for it to be as close as possible to standard OAuth.
There are still a number of variations on how a mechanism like that could work, and we’re still working through what the best alternate proposal would look like. Stay tuned for a future RFC to discuss this in more detail.