@PhilipGrove Directionally, this sounds much better.
There are, of course, many details and follow-up questions a vendor may have about implementation specifics, but I’d assume there will be supporting documentation alongside the high-level requirements, and room for adjustments as things evolve.
As with other areas, there’s a level of “trust” required here, absent some kind of formal audit by Atlassian or a third party. Once a partner has a single egress domain (no wildcard necessary), data could freely flow anywhere.
We, along with other vendors, have worked alongside Atlassian in the hope that the Forge platform can better accommodate these customer requirements. That said, for the foreseeable future, Forge Remote will remain a key pillar of our architecture, and we need to ensure we can accommodate trust signals without being “Runs on Atlassian”.
4.1 SOC 2 Type 2 or ISO27001
Just to confirm: this is no longer only a requirement for Platinum and Gold Marketplace Partners, but now a requirement for the A4A badge entirely, is that correct?
If so, this alone is more work than everything else put together. We’re just a 10-person team, and that is already quite large for a lot of apps on the Marketplace (most leading products are probably built by fewer than 25 developers). Initializing SOC2 compliance is probably a full-time equivalent for a manager for 6 months. I can’t say it’s impossible, but it comes at the cost of features and overall customer experience.
Here are responses to your other asks:
For the new Architected for Atlassian program, do you agree, in principle, with the requirements detailed above?
We’re generally happy with the layout of the program (except for the SOC2 requirement).
We’re happy that Atlassian will provide a solution for the pentests. We perform them, and we love performing them because it’s reassuring. Whitebox testing, with security consultants sitting with developers to analyze the code and practices, was fruitful. But most pentesters were out of their depth with JWT tokens and the Atlassian platform, so an Atlassian-provided program is very useful, both for customers and for us.
Are there any requirements you think shouldn’t be included?
In our opinion SOC2 / ISO27001 should be a separate badge since you’ll advertise it separately anyway.
In addition, for Gold partners, Atlassian accepted pending SOC2 applications, so can you be flexible with pending SOC2 applications when applying to the A4A program too?
What tooling or platform support would you expect from Atlassian to successfully participate in the A4A program?
- We haven’t looked into accessibility yet, so I assume we can find some automated accessibility scanners,
- Performance testing on authorized Confluence Cloud instances, which is currently forbidden,
- Forge is underperforming. Everyone in the Marketplace is hitting rate limits, customers complain it is extremely slow, and we can see it on our own instances. It really feels like the new platform is architected in such a way that apps won’t be part of a native experience,
Do you believe the timeframe for retiring the Cloud Security Participant badge (end of March, 2026) is reasonable?
For us, its retirement should be synchronized with Fortified, in June.
If so, where do you usually publish your pen testing reports?
We publish them in our documentation (“Trust center”), together with the CAIQ Lite, the privacy policy, etc.
I think it’s excellent that your team is investing in security and supplementing with AI enabled testing tooling.
The intent of creating a Penetration Testing program was to streamline testing for app-specific use cases; we’ve put together a large team of security researchers with Atlassian- and Marketplace-specific testing experience.
We feel it offers partners a unique proposition: experienced testers at a competitive price, providing higher-value vulnerability results than any other third party. That being said, we wanted to allow partners to use alternative vendors by providing a large, industry-recognized, pre-approved CREST list, minimizing the need for manual, one-off vendor approvals.
AI pen testing was a consideration, but it won’t be recognized for this program for a couple of reasons:
- Enterprise customers won’t accept automated tooling as valid independently completed PenTests.
- We see more value from experienced pentesters with Forge platform specific experience.
My opinion as a security professional, which you can take with a grain of salt: AI pen-test products are non-deterministic. They’re typically built on a collection of agents designed for specific common vulnerability classes, and these tend to hook into headless open-source and commercial security tools.
They do a great job of detecting, in a deterministic way, the common vulnerabilities that existing vulnerability scanners can already find. They’re a nice addition to your vulnerability management program, but they aren’t going to yield the same results as an experienced tester with domain experience of our platform.
Since these tools are designed around specific vulnerability classes, there are entire categories of vulnerabilities or security misconfigurations that can be built into Forge apps that these tools aren’t designed to look for.
This is entirely off topic, but thank you @ZacharyEchouafni for this thoughtful answer.
It is an absolute pleasure to read a post by an Atlassian that is authentic, that includes personal and professional deliberations and that both recognises the validity of alternative solutions whilst also providing a clear reasoning as to why Atlassian made certain choices.
I hope to read many posts like these in the future, and may you be an example to your peers
@ZacharyEchouafni @PhilipGrove can we drop the requirement for automated migration from DC? One of our apps can have tens of millions of records, and there is no way we can migrate that, automatically or manually. Forge just can’t handle large volumes of data.
In regard to 1.15 (app must default to asUser): does that mean that if we have an API that requires admin privileges, we must call it with asUser, let it fail, and then call it with asApp? Also, some of our apps allow admins to grant privileges to less-authorised users (by user group) to perform a task. Would this need to try asUser first and then asApp? That ends up being two calls, which moves us closer to rate-limiting issues.
Can I suggest that this is broken into pieces, so that a vendor can work towards full A4A certification incrementally? That way vendors won’t have to do one massive application; they can complete pieces until they have the whole thing. It also means vendors can be partially compliant, rather than not compliant at all. We don’t have SOC2 compliance, so, as it currently stands, we won’t bother with anything else. But if we could get 85% compliant, with an exclusion for SOC2, it would be worth our while doing most of it. That would mean more security on more apps, rather than full security on fewer apps.
Regards
Paul
Hey @markrekveld, thank you for your detailed feedback. It is very much appreciated.
@ZacharyEchouafni has addressed your AI pen test questions above, and I wanted to follow up on your other questions.
How do you envision this being verified? Is Atlassian going to believe the “checkbox-checker” on the colour of his/her eyes? Will Atlassian engineers need access to app source to verify this? Or will the Developer Console metrics be used for this?
It is very important to us that the partner community has input into the requirements for A4A. As I mentioned to @Anja above, this RFC is one of the first steps we have taken to determine the program’s requirements, so verification of these new requirements (which don’t already exist for other active programs) is still TBD. However, we will share more as we finalise the requirements.
Respects rate-limiting headers
Same as above, how is Atlassian going to verify this?
We haven’t yet explored the extent to which we can verify rate-limiting best practices, though analysing API traffic logs may help determine whether rate-limiting guidance is being followed.
How does this impact the use of current Forge packages that use deprecated packages?
Good point, perhaps we were getting a bit carried away.
- Suggested change to current requirement: “The app must not use end-of-life NodeJS runtimes”
- And a new, more relaxed requirement for libraries: “The app is expected to follow best practices by avoiding unmaintained or deprecated libraries, or by mitigating their risks where avoidance is not yet feasible.”
What is the difference between 1.5 All data at rest must be encrypted and 1.9 Any Atlassian end user data stored outside the Atlassian product or users’ browsers must use full disk encryption at rest?
If 1.5 relates to the app vendor infrastructure and 1.9 related to the users system, then how can an app vendor ensure full disk encryption?
Good catch, this is a duplicate. I’ll strikethrough 1.5.
Looks very ambitious to me. With all the other changes coming to partners, in Q1 of 2026, this looks to be a hard one to include in that timeframe.
To clarify, the end-of-March 2026 timeline applies only to the retirement of the CSP badge. It is not the timeline for retiring CFA or launching A4A.
Hey @marc, thanks for your comment.
It would be much simpler for both Atlassian and vendors if respecting rate-limiting headers were built into Forge. We as vendors already use the Forge functions requestConfluence or requestJira (or similar), and adding rate-limit handling there once would automatically make Forge apps compliant. It would also save each vendor a ton of work implementing this independently.
I’m not sure how practical automatic rate-limit handling would be. Best practice would be to trap the error response, schedule a retry after the recommended retry period has elapsed, and warn the user that the app is currently under load and operating slowly.
Encapsulating back-off retries within requestJira might not be a great idea, as the client wouldn’t have a chance to provide feedback to the user, and the developer would have to reason about a requestJira call taking a potentially lengthy period of time. I can see it being valuable in some circumstances (e.g. headless jobs where there’s no user) but it’s not a silver bullet.
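As a rough illustration of that trap-and-retry approach, here is a minimal sketch. The names requestWithBackoff and doRequest are hypothetical, not part of the Forge SDK; a real app would pass its own requestJira call in as doRequest:

```javascript
// Minimal sketch of trapping 429 responses and honouring Retry-After.
// `doRequest` stands in for any HTTP call (e.g. a wrapped requestJira);
// all names here are illustrative, not part of any real Atlassian API.
async function requestWithBackoff(doRequest, maxRetries = 3) {
  for (let attempt = 0; ; attempt += 1) {
    const res = await doRequest();
    if (res.status !== 429 || attempt >= maxRetries) return res;
    // Respect the server's Retry-After hint (in seconds); default to 1s.
    const seconds = Number(res.headers.get('Retry-After') ?? 1);
    await new Promise((resolve) => setTimeout(resolve, seconds * 1000));
  }
}
```

A real implementation would also cap the total wait time and surface a “running slowly” warning to the user rather than retrying silently forever.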
We do have support for automatically retrying trigger & async events (with an error code specifically for rate limiting); you can read more here:
Hey @lexek-92, thanks for your feedback.
Since A4A is a new program, this RFC is focused on learning from partner feedback as we develop it. We really appreciate suggestions on the name. Is there something you feel would work better, especially one that speaks to both where customer data is processed and stored and how the app is built and run?
Hey @UlrichKuhnhardtIzym1, thanks for your feedback.
Although we will be retiring the Cloud Security Participant badge (logically, the naming doesn’t make sense for DC apps anyway), we have no plans to remove the callout to the app’s participation in the Bug Bounty program that is displayed in the “Privacy and Security” section at the bottom of the overview tab on the app listing.
Hey Oliver,
I agree with the intent, but the timing is challenging.
Some Forge LLM capabilities required for production-grade user experience (e.g. streaming responses) are still missing. Today, external AI providers offer more mature APIs for certain use cases. Forcing a migration to Forge LLMs at this stage would lead to a worse experience despite responsible and secure AI usage.
Suggestion:
A4A should focus on responsible AI controls and outcomes, not on provider choice. Forge LLM preference should be revisited once feature parity is reached.
Thanks for the feedback. You’re right, the timing is a bit awkward. Perhaps it could be more like “Uses Forge LLMs or has publicly documented Responsible AI policies and controls that they can link to”.
But FYI Forge LLMs do support streaming now
https://developer.atlassian.com/platform/forge/changelog/#CHANGE-2978
Hey @oliver.straesser, we really appreciate your feedback. Thank you for taking the time to respond.
I strongly support the overall direction of this RFC. Simplifying trust programs, separating WHERE (RoA) from HOW (A4A), and moving toward verified trust signals aligns well with how enterprise customers evaluate risk and procurement today.
Thank you! This is great to hear!
I support accessibility, but the cost and scope impact are unclear:
- What level of testing is expected?
- What is the minimum scope?
Clear guidance is needed to make this predictable and plannable.
As mentioned in some other replies above, we are still building out the requirements for A4A and want to ensure the partner community is involved in defining them. The exact details are still TBD.
It would be very helpful to understand your perspective: what level of accessibility testing do you believe is appropriate, and what types of accessibility-related requests are you receiving from customers?
Whether all private programs must become public - or was this a miswording?
Yes, all private programs must become public. This will not be an immediate change, though; it will be gradual to ensure that each individual program is ready. Our goal is to have all programs be public-facing.
Data Residency
I’d appreciate clarification on:
- Whether Forge-only apps automatically qualify
If your app uses only Forge Storage, then yes, it will qualify, since DaRe is included “out of the box” with Forge Storage.
- Expectations for Connect-on-Forge or hybrid apps
Requirement 1.3 states that “an app does not use any Connect modules”, so a Connect-on-Forge app would not be eligible for A4A.
- The required number of supported regions, as it has an impact on costs
We are not mandating which regions your app supports, but, as part of the A4A requirements, we would require that the app allows migration between the regions you do support.
- Will apps using such customer-approved egress be considered compliant with A4A requirements?
Customer-managed egress can be used to support circumstances where the app is unaware of the remote or egress locations the app needs to communicate with, requiring customer input to establish these.
Because customer-managed egress is under the customers’ full control, these apps would be eligible for A4A.
Hey @scott.dudley, thank you for your comprehensive feedback. I really appreciate the time you took to share such detailed thoughts.
To start with, the name is objectively “wrong” and it fails to address the problem stated above. I imagine that this has already gone through a bunch of committees and it will be impossible to change, but…
A4A is a new program, and this RFC is very much about shaping it together with partner input. The name is definitely not locked in and is something we’re open to feedback on. Do you have any suggestions you think might work better? Ideally, we’re looking for something that speaks to both the WHERE (where the customer’s data is processed and stored) and the HOW (how the app is built and run).
1.3 The app does not use any Connect modules
This should not be present in the current version of A4A. Atlassian is unnecessarily mixing its definitions of security posture with different, organizational goals to move people off Connect, when these two are not related.
A4A is intended to cover more than security; it’s designed to reflect our view of what a well‑architected app looks like. As we continue investing in Forge as the future of our developer platform, our current expectation is that A4A will apply to Forge‑built apps.
4.2 The app supports Data Residency (pinning and migration)
This should be clarified to indicate that apps are also eligible if they store data only within the host product.
Good call-out, I will update the requirement to reflect this.
6.1 The app has an automated migration path from Data Center to cloud
I also do not see why this should be present. This is again conflating Atlassian organizational goals with the security/trust posture. At first glance, this has nothing to do with trust or security. Certain apps may have good reasons for having manual migration processes.
This requirement is tied to how we’re thinking about well‑architected apps today and what customers consider most important when discovering and procuring apps.
Supporting customers on their journey from Data Center to Cloud is a major focus right now, and an automated migration path makes a big difference to that experience. Over time, as migrations become less central, we expect this requirement to evolve or potentially be retired. For now, it remains one of the signals we use when we think about an app’s architecture.
I will answer your more technical question in another reply. I am just waiting on some internal responses from my peers who are more technical than I am!
If API utilization is influenced by the customer and not directly in the app’s control (except for obeying rate limits and such), how is this going to work? How does Atlassian define “efficient”?
Some examples of API efficiency would be:
- The right API being used for the task (e.g. bulk APIs are used for bulk operations, rather than calling a single object API in a loop)
- API calls are tuned effectively (e.g. fields or expand parameters are used to constrain the data in the response or minimise the total number of requests needed)
- Push methods (e.g. event triggers) are preferred over polling methods for detecting changes to data
- Effective caching strategies to prevent the need to refetch data from the API constantly
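As a small, hypothetical illustration of the second point (tuning calls so only needed data comes back), the sketch below builds a Jira search URL constrained to specific fields. buildSearchUrl is a toy helper, not part of any Atlassian SDK; the fields parameter itself is a real feature of the Jira search API:

```javascript
// Toy helper: build a Jira search URL that requests only the fields the
// app actually needs, instead of pulling back full issue payloads.
function buildSearchUrl(jql, fields) {
  const params = new URLSearchParams({ jql, fields: fields.join(',') });
  return `/rest/api/3/search?${params.toString()}`;
}

// e.g. buildSearchUrl('project = TEST', ['summary', 'status'])
// yields a URL whose response omits every other issue field.
```

The same idea applies to expand parameters and pagination sizes: each request should carry only what the app will actually use.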
How is this measured in practice? Is it a box tick, or are there metrics apps will be evaluated against?
This is TBD. Defining the A4A requirements is the first step; we will then begin building the verification and validation processes. More details on this to come.
Thanks for your feedback @paul
You have summarised, perhaps more eloquently than I did in the initial RFC, how we see trust investments being surfaced on the app listing. We believe each of the “trust signals” defined in Section 2 above should be seen as a progressive journey towards A4A. Each investment a partner makes in trust will be highlighted on their app listing.
The A4A program can help customers quickly identify apps with the highest trust posture, or they can also use the new trust filters to find apps that meet their specific needs (e.g., SOC 2 or Bug Bounty Program participation).
Hey @aragot, thanks so much for taking the time to share your feedback.
If so, this alone is more work than everything else put together. We’re just a 10-person team, and that is already quite large for a lot of apps on the Marketplace (most leading products are probably built by fewer than 25 developers). Initializing SOC2 compliance is probably a full-time equivalent for a manager for 6 months. I can’t say it’s impossible, but it comes at the cost of features and overall customer experience.
The main objective of the A4A program is to help streamline the procurement process for customers. We’re consistently hearing from customers that industry‑standard compliance, such as SOC 2 Type 2 and ISO 27001, has moved from “nice‑to‑have” to “table stakes” in their procurement process.
Because of that, A4A is intentionally setting a high bar. The proposed requirements are designed to make it easier for IT admins to get the documentation and evidence they need during procurement.
That said, we absolutely want to recognise the incremental investments partners are making in trust as they work towards A4A. One of our goals is to surface more of these trust signals directly on the app listing and through additional trust‑related filters on the Marketplace, so partners can showcase their investment in trust even before they meet the full A4A bar.
In addition, for Gold partners, Atlassian accepted pending SOC2 applications, so can you be flexible with pending SOC2 applications when applying to the A4A program too?
Given that A4A is focused on helping customers get the concrete documentation they need to move through procurement, a pending certification on its own wouldn’t meet the requirements in this case.
For us, its retirement should be synchronized with Fortified, in June.
I’d love to understand more about your thinking here. As I mentioned to @UlrichKuhnhardtIzym1 above, we will continue to highlight an app’s participation in the Bug Bounty Program in the Privacy and Security section at the bottom of the Overview tab on the app listing, and there are no plans to remove that signal.
Also, for clarity, the retirement date for the Cloud Fortified Apps program has not yet been set. As per the milestones above, we aim to start adding more trust signals to the app listing page, including those from the CFA program, by June, but this does not necessarily mean the CFA program will be retired in June.
Hi @paul,
I thought I’d just chime in regarding your question about 1.15:
The cloud security guidelines read:
- An application must default to using asUser() when performing an operation on behalf of the user.
- Before making calls using asApp(), you must verify the expected permissions (for example, from product context) with the permissions REST APIs.

…
The first guideline is intended to prevent developers from accidentally allowing end-users to escalate their own privileges by authenticating as the app user. Thus you should “default to” using asUser() over asApp() when architecting your app, as asUser will automatically respect the product’s permission schemes.
However, there may be some situations where you need to call an admin API when processing the end-user’s request, even if the end-user is not an administrator. Provided you have authorised the request in a way that’s consistent with your app’s use-case, calling asApp() is still consistent with the cloud security requirements. I wouldn’t interpret “default to using asUser()” as literally requiring a try { asUser(..) } catch { asApp(..) } — there’s no security benefit, since your app would just auto-escalate the user’s privileges whenever they don’t have them.
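To make the “verify, then escalate” idea concrete, here is a hedged sketch. The permission check and admin action are passed in as plain functions; in a real Forge app the check might hit the permissions REST API via asUser(), and the action would run via asApp(). All names are illustrative, not part of any Atlassian API:

```javascript
// Sketch of explicit authorisation before acting with app-level privileges.
// Never blindly fall back from asUser() to asApp(); check first, then act.
async function runPrivilegedTask(accountId, checkPermission, adminAction) {
  // Explicit, app-defined authorisation check (e.g. permissions REST API,
  // or an admin-configured user-group allowlist).
  const allowed = await checkPermission(accountId);
  if (!allowed) {
    throw new Error('User is not authorised for this operation');
  }
  // Only after the check passes does the app use its elevated privileges.
  return adminAction();
}
```

This also answers the rate-limiting concern above: the pattern is one permission check plus one privileged call, not a failed asUser attempt followed by an asApp retry.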
If you’re able to share a bit more about the API and use-case you’re solving for here I’m happy to chat in more detail about whether asUser or asApp is appropriate.
cheers,
Tim
Maybe instead of an “A4A” badge, have a “Trust-o-Meter” with a red-to-green bar graph.
Thanks for sharing your own professional take. AI gets sold as the solution for everything, and while in most cases that’s just BS, I do think it can provide value in the scope of coding and testing.
Having said that, I also recognize that AI on its own may not be the best approach for the pen test requirement. I didn’t expect this, however, given the AI messaging Atlassian has been sharing.
Good thing BugCrowd is listed! Already being part of the Bug Bounty Program this will likely help kickstart the process.
I understand, and am looking forward to the verification method RFC
I would add my vote for actual API traffic as the metric used for this requirement. In that case, however, those metrics would also need to be made available to partners through the developer console.
Not using end-of-life runtimes should be easy to do; as far as I can tell, Atlassian has been quick enough to support new runtimes, giving partners time to upgrade before older runtimes go EOL.
Will the Forge-provided libraries and their dependencies be excluded from the more relaxed requirement for libraries? We can try to upgrade dependencies to fix vulnerability issues, but that is it. Other issues around unmaintained or deprecated libraries are out of our control. Sometimes even overriding a dependency to fix a vulnerability is not possible.
Thanks for that confirmation. I think that would limit the impact for partners, as they would only need to update their messaging/marketing where the badges are used.
@PhilipGrove – Thank you for this RFC. Like many here, I am 110% in favor of improving the trust signals in the Marketplace. And it’s in that frame that I’d like to share the following feedback:
- The new badge names are likely to confuse customers.
- Tiered trust badges would be helpful to everyone.
- Badge criteria should be objectively verifiable.
I’ll elaborate on each of these:
Badge Names are Confusing:
You stated the naming goals very clearly:
“Ideally, we’re looking for something that speaks to both the WHERE (where the customer’s data is processed and stored) and the HOW (how the app is built and run).”
Yet, a reasonable customer might assume that “RUNS on Atlassian” is the badge that describes “how the app is built and RUN”. But it doesn’t. That’s what A4A is for.
And “Architected for Atlassian” communicates “designed with Atlassian in mind” rather than “meets verifiable trust signals”. Just like a good political slogan, it will mean whatever the reader wants it to.
As a result, customers will be left guessing what real-world assurances the badge provides, which is precisely the problem the new trust signals are meant to solve.
My advice is to go with “clear” over “clever”.
For example, “Atlassian Trust Badge”, “Atlassian Trust Certified”, or “Atlassian Seal of Trust” would much more clearly describe the purpose of the new badge. Sometimes boring names are okay. In this case, avoiding confusion and ambiguity should be the primary goal.
(Note: I’m assuming it’s too late to change the name of Runs on Atlassian.)
Tiered trust badges would be helpful to everyone.
Tiered badges were already proposed, and you have already responded to those proposals. I would just like to add this:
Customers understand tiers. They don’t understand filters that indicate progress towards a badge. (At least not nearly as well as they understand tiers.)
And Atlassian already knows this. That’s why the Marketplace has Silver, Gold, and Platinum Partners. Atlassian could have only Platinum Partners with filters that indicate progress towards Platinum status. But that wouldn’t work nearly as well, and we all understand that.
Please consider the same principle when implementing the new trust badge(s).
That would promote clarity to customers, and it would provide a stronger motivation for vendors to invest in trust signals. Everybody wins!
Badge criteria should be objectively verifiable.
Please consider passing all trust requirements through an objectivity filter before finalizing the program.
Customers expect trust signals to be objective. That’s why SOC 2 is one of the strongest trust signals. Presenting objective criteria is the #1 way to create strong trust signals.
However, this RFC contains many subjective requirements. For example,
- “The app only requests access to the data it needs (least privileged access).”
- “The app is expected to follow best practices by avoiding unmaintained or deprecated libraries, or by mitigating their risks where avoidance is not yet feasible.”
- “Respects rate-limiting headers”
These examples are inarguably best practices, but they are difficult (or impossible) to verify.
When criteria are subjective, they tend to create misaligned incentives: honest vendors are penalized for nuance, while less rigorous responses can appear compliant. Over time, that weakens trust signals rather than strengthening them.
The badge requirements should be as objective as possible, and all the other genuinely good ideas belong in a policy, an agreement, or something similar.
Again, I’m fully in favor of this initiative. Thank you for collaborating with the community on it!