Every time we renew a site using Atlassian credits, we have to remove all apps due to how the billing system works. In the last renewal cycle we removed the app and then couldn’t re-add it because of this. We had forgotten about this issue, which wasted dev time trying to analyze the problem.
Yes, changing app keys would work around this issue. It would also break other contracts we have with Atlassian like what Remie mentioned.
I would add that this is one of several security changes where it feels like no one is representing the needs of vendors to push back on changes requested by the security team.
This is what we are using during development on our personal instances. There I have the production and dev versions installed at the same time so I can compare things.
The issue starts when you want to test the release version of an app before a release; this may also be the instance where you try to reproduce support tickets. This instance mostly runs the marketplace version, but it needs the staging version for the last tests right before release.
In addition, it’s currently not possible to give a customer a pre-release (private) version without creating a support ticket. AMKT-24206 was initially fixed but has been broken again for a few months.
We do this already: dev versions of our prod app have keys like com.example.app-fred and com.example.app-bob, but the ability to upgrade/install on a site eventually breaks due to this “previously installed from marketplace” security error for no apparent reason. Then we have to contact Atlassian to reset something on their side. By all means secure public paid apps, but this should not be something we have to deal with for private dev apps.
Thanks for the additional context @boris, @remie, @m.herrmann, @mike1 — I’ll follow up with the Connect, Marketplace, and Security teams internally and get back to you.
The original change in behaviour was related to an undisclosed security vulnerability, and while the impact to the developer experience was weighed, that analysis did not incorporate all of the negative impacts you have all colourfully provided above.
There have been some major evolutions to both the platform and the way that we think about Ecosystem security and trust since that time. We don’t have a firm decision yet, but the initial investigation looks promising, and the team is scheduling a deeper exploration in the next couple of sprints to carefully evaluate whether and how we can gracefully revert this change in behaviour.
Thanks for bringing this thread to my attention @boris — I’ll provide a further update once we have it!
Appreciate you raising this and getting a review of this going.
Any chance the process for reviewing security issues for vendor impact could be updated to include a long-tenured vendor or two? Given that many of us have been doing this for longer than Atlassian employees have been Atlassian employees, we have a body of tribal knowledge and an understanding of ecosystem tech debt that may be missing internally and could help guide architectural decisions on how a given security fix should be addressed.
I love this idea. While we have broad analytics for app usage of platform modules and APIs, they can’t always give us a great picture of how badly a particular change will impact the ecosystem (such as the thread above).
I could certainly see a sister mechanism to RFCs with a slightly different workflow to cater for the needed urgency—and secrecy, when necessary for sensitive security issues—being effective at reducing the risk of these sorts of changes.
I’ll look into this with some of the teams that are on the critical path for security and incident management. Cheers!
Thanks for the update, Tim, and thanks for diving deep into this old issue to ensure the signal reaches the teams for us.
I am just chiming in to add that this does not impact us at Easy Agile as we consider our app-key to be “production”. Our dev flow looks like this.
A developer creates a feature branch and submits a PR; that branch is deployed to our staging cloud instance. Seeing as we have multiple features in flight simultaneously, it is imperative that we can have multiple versions of our apps installed on the same Cloud instance at the same time, meaning they have to have different app keys. We suffix the app key with the PR number. The licensing check is disabled for these builds to avoid needing to generate/recycle install tokens.
On the staging instance, the “production” app-key install is the main branch pre-production build (installed with an app token) where we do our staging checks and smoke tests. Licensing is enabled for this build. In our main Cloud instance, the app is installed from the marketplace.
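The per-PR key scheme described above can be sketched as a small build-step helper. This is a minimal illustration, not Easy Agile’s actual tooling: the base key, the `PR_NUMBER` environment variable, and the descriptor shape are all assumptions.

```typescript
// Hypothetical build step: derive a per-PR Connect app key so several
// in-flight builds can be installed on one staging instance at once.
// Base key, descriptor shape, and PR_NUMBER env var are illustrative.

interface Descriptor {
  key: string;
  name: string;
}

function withPrSuffix(descriptor: Descriptor, prNumber?: string): Descriptor {
  // Main-branch ("production" key) builds keep the base key;
  // PR builds get a unique suffix so their keys don't collide.
  return prNumber
    ? { ...descriptor, key: `${descriptor.key}-pr-${prNumber}` }
    : descriptor;
}

const base: Descriptor = { key: "com.example.app", name: "Example App" };

// In CI this would patch atlassian-connect.json before deploying.
console.log(withPrSuffix(base, process.env.PR_NUMBER).key);
```

The important property is that the production key is only ever produced by main-branch builds, so the marketplace-installed app and the PR builds never share a key on the same instance.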
I wanted to provide a follow-up and update to @tpettersen’s message above. We have re-assessed this change and the decisions which led to it, and have subsequently opted to partially roll it back. This means that you should no longer experience the “could not be installed as a local app” error when installing a local app.
Thank you so much! This will save us tons of time.
But I also have to ask: what changed in terms of architecture that this is now no longer a security concern?
The reason I’m asking is because the question to either fix or roll back this change has been asked by the community since this change was implemented in August '19 (4 years ago) and it would be really messed up if the only reason why this can now be reverted is because we added @tpettersen to the conversation.
Fair question - it’s more a case of gradually gathering data and sentiment (from partners and ourselves) about the upsides and downsides of this control over time, and finding that on balance the downsides outweigh the upsides. Adding Tim to the conversation was something of a catalyst to act, but by no means the only reason. I can see how it looks that way, and I see what you’re getting at: ideally we could always spot and address these sorts of things without needing nudges or mentions, but in practice attention is a limited resource.
Although I understand how this works within large organisations, especially those going through hyper-growth, I hope you do realise that I have some reservations about this.
This topic has run for over 4 years, has 53 comments, is mentioned 6 times in other threads asking the same question, has been viewed 7.7K times, has had 38 users interact with it, and has gathered over 138 likes. The corresponding issue has 56 votes.
From an Ecosystem community engagement perspective, these numbers are huge.
I’ve said it before, and I will say it again: Atlassian has a blind spot with regard to understanding the impact of A) partner fatigue and B) partner scale.
There aren’t that many partners who actively voice their concern with anything Atlassian related, and that number becomes vanishingly small for issues they preemptively decide will never be fixed by Atlassian. It’s basically not worth the ammo, as we have been trained to only start raising our voices a little when there are extinction-level events in the Ecosystem. We’ve come to a point where Atlassian’s attention is only achieved by hyperbole, mass-tagging people, or shitposting on X or LinkedIn.
I think it would be wise for Atlassian to start creating a more documented (and community validated) process as to what data and sentiment warrant action, and also how to gauge that sentiment correctly.