We support the move to simplify and consolidate Marketplace trust programs to provide clearer, more meaningful signals to current and prospective customers. Reducing overlapping badges is a positive step toward making trust easier to understand and evaluate.
Clarity of intent
On first reading, however, the intent of the proposal was not entirely clear. The RFC opens with goals of “simplifying trust signals” and “uplifting security requirements”, which suggests a primarily security-focused change. As the proposal unfolds, it becomes clear that A4A is significantly broader in scope, covering operational maturity, reliability, performance, platform alignment, and responsible AI — with security being an important component, but not the sole focus.
We’d strongly encourage future communication of A4A — to both partners and customers — to be explicit about its broader intent upfront, positioning it as a holistic app quality and operational excellence standard.
Verification and assessment of requirements
Many of the proposed A4A requirements are reasonable in intent but are currently described using vague or subjective language (for example, “consumes APIs efficiently”). For a program of this importance, requirements need to be precise, measurable, and consistently verifiable.
At this stage, it’s difficult to provide detailed feedback without clearer information on how requirements will be measured and assessed — whether through automated tooling, defined thresholds, or partner-provided evidence.
Timeline expectations
We understand that concrete timelines may not yet be available. However, some A4A requirements could require significant investment and lead time, depending on how expectations are ultimately defined (for example, SOC2 compliance or accessibility).
If possible, even a high-level indication of the expected timeframe for A4A to replace Cloud Fortified (e.g. months vs. years) would help partners prioritise work and plan investment more effectively.
New Penetration Testing Program — alignment with SOC2 evidence requirements
We’d appreciate some clarification on whether the requirements defined for the new Atlassian-provided penetration testing program are intended to be sufficient for use as evidence in SOC2 compliance assessments.
For partners maintaining SOC2 certification, penetration testing is a recurring compliance requirement. If the scope or outputs of this program do not meet auditor expectations, partners may still need to commission additional third-party testing, resulting in duplicated effort and increased cost with limited additional security benefit.
Requirement 1.14 — “Respects rate-limiting headers”
Partners are currently unable to meet this requirement due to known limitations in the Forge ecosystem. When Jira and Confluence APIs are called from the frontend, the headers needed to detect rate limiting and implement backoff behaviour are obfuscated, making compliance impractical.
This is tracked under FRGE-1923 (https://ecosystem.atlassian.net/browse/FRGE-1923). Until it is resolved, we recommend deferring enforcement of this requirement or clarifying what interim behaviour is considered acceptable.
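For concreteness, the sketch below shows the kind of backoff behaviour we understand Requirement 1.14 to describe, written against the Forge backend API, where the relevant headers are visible. The function name and retry parameters are illustrative only, not something prescribed by the RFC; the same pattern cannot currently be implemented for frontend requestJira calls because of the header obfuscation described above.

```typescript
// Illustrative sketch only: honouring rate-limit headers with backoff from a
// Forge backend function, where response headers are not obfuscated.
import api, { route } from '@forge/api';

async function fetchWithBackoff(maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await api.asApp().requestJira(route`/rest/api/3/myself`);
    if (res.status !== 429) {
      return res;
    }
    // Honour Retry-After when the platform provides it; otherwise fall back
    // to exponential backoff. From the frontend, this header is currently
    // unreadable (FRGE-1923), which is the crux of our concern.
    const retryAfter = Number(res.headers.get('Retry-After'));
    const delayMs = retryAfter > 0 ? retryAfter * 1000 : 2 ** attempt * 1000;
    await new Promise<void>((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error('Rate limited: retries exhausted');
}
```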
Requirement 2.2 — “Publicly defined SLI/SLO reliability metrics”
This requirement is noted as being “partially required for the Cloud Fortified Apps program”. However, the Cloud Fortified documentation on reliability also states that “we don’t yet have SLIs and SLOs defined for Forge Marketplace apps”.
Given that A4A is explicitly limited to Forge applications (see Requirement 1.3: “The app does not use any Connect modules”), this implies that there are currently no defined platform SLIs or SLOs for partners to build upon.
Until Forge Marketplace app SLIs and SLOs are clearly defined and documented, partners are left to make assumptions — or, at best, define metrics only for their remote services (where applicable), which may not reflect overall app reliability. We recommend deferring enforcement of this requirement until Forge-specific SLIs and SLOs are available.
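To make the ambiguity concrete, the sketch below shows the only kind of reliability metric a partner can define unilaterally today: an availability SLI/SLO computed over a partner-hosted remote service. The names, target, and window are illustrative assumptions on our part, not values from the RFC.

```typescript
// Illustrative sketch only: an availability SLI/SLO for a partner-hosted
// remote service, assuming request counts are already being collected.
type RequestStats = {
  total: number;      // requests received in the measurement window
  successful: number; // requests completed without a server-side error
};

// SLI: fraction of successful requests over the window (1 when idle).
function availabilitySli({ total, successful }: RequestStats): number {
  return total === 0 ? 1 : successful / total;
}

// SLO: 99.9% availability over a rolling 30-day window (illustrative target).
const SLO_TARGET = 0.999;

function meetsSlo(stats: RequestStats): boolean {
  return availabilitySli(stats) >= SLO_TARGET;
}
```

Even a fully specified metric like this covers only the remote service; without platform-level SLIs for Forge compute and storage, overall app reliability remains undefined.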
Requirement 4.2 — “App supports Data Residency (pinning and migration)”
The requirement does not currently specify a minimum expectation for the number of realms that remote data stores must support.
As written, it’s unclear whether existing realm coverage is sufficient or whether additional regional investment is expected. Clear guidance on what constitutes adequate coverage at launch (and how this may evolve) would help partners assess gaps and plan investment with confidence.
Requirement 7.1 — “The app conforms to Atlassian’s Responsible Technology Principles”
There appears to be overlap between this requirement and Requirement 5.1 (Accessibility).
Requirement 5.1 requires accessibility testing and a publicly available VPAT, while Requirement 7.1 introduces additional accessibility expectations, including conformance with WCAG 2.2 AA. If these standards are required under A4A, they should be made explicit and consolidated, ideally by reflecting those targets directly in Requirement 5.1 to avoid ambiguity.
It’s also important to note that WCAG 2.2 AA conformance represents a significant upfront and ongoing investment, particularly for existing applications that were not originally designed with accessibility as a primary concern. Achieving compliance often requires substantial design, engineering, and testing effort, followed by continuous validation to ensure new features and changes do not introduce regressions. The amount of time and resources required to meet these expectations could make A4A alignment untenable for many partners, reducing participation in the program rather than increasing trust across the Marketplace.
Additionally, many partners rely heavily on Atlassian-provided libraries (such as UI Kit and the Atlassian Design System) to deliver a seamless and consistent user experience. If partners are expected to meet these accessibility standards, clarity on Atlassian’s ongoing support, ownership, and SLAs for addressing accessibility issues identified in these libraries would be important. Without this, partners may be held accountable for gaps outside their direct control.
Requirement 7.2 — “Uses Forge LLMs where practical”
As written, it’s unclear who determines what is considered practical, or what process will be used to assess or validate this.
Practicality in this context is inherently subjective and use-case dependent. For example, Forge LLMs may not currently be practical for partners who require robust analytics, observability, and quality monitoring to ensure AI-generated outputs meet internal standards and customer expectations.
Without clearer criteria or an explicit evaluation process, partners may be left uncertain as to whether their AI architecture choices will be considered compliant. Clarifying how “practical” will be defined, evaluated, and enforced — and whether there will be an exception or review mechanism — would help partners make informed decisions without risking retroactive non-compliance.
We support the direction of this proposal and appreciate the effort to simplify trust signals while raising the bar for Marketplace apps. The feedback above is intended to help ensure A4A is clear, achievable, and consistently applied, particularly given the investment some requirements may entail.