Action Required: Update your Apps to comply with Jira Cloud Burst API rate limits

As part of Atlassian’s ongoing commitment to reliability, performance, and fair usage, burst API rate limits are now being introduced for all Jira Cloud customers. These limits, first announced in the changelog in August 2025, are designed to prevent service disruptions caused by sudden spikes and sustained high load in API traffic. By proactively managing high-volume requests, we help ensure a stable and seamless experience for all customers.

What does this mean for you?

  • If your app exceeds the allowed number of API requests in a short period, you will receive a “429: Too Many Requests” error.

  • These limits are enforced on a per-tenant and per-API basis.

  • Rate limits are enforced using industry-standard token bucket algorithms, which allow short bursts of activity while maintaining a sustainable average rate over time.
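The token bucket model can be sketched as follows. Atlassian does not publish the exact bucket parameters, so the capacity and refill rate below are illustrative assumptions only:

```javascript
// Minimal token-bucket sketch. The capacity and refill rate are
// illustrative assumptions, not Atlassian's actual limits.
class TokenBucket {
  constructor(capacity, refillPerSecond) {
    this.capacity = capacity;           // maximum burst size
    this.tokens = capacity;             // bucket starts full
    this.refillPerSecond = refillPerSecond;
    this.lastRefill = Date.now();
  }

  // Returns true if a request may proceed, false if it would exceed the limit.
  tryRemoveToken(now = Date.now()) {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    // Refill continuously, but never above capacity.
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A full bucket permits a short burst up to its capacity; after that, requests are only admitted at the steady refill rate, which is exactly the "burst now, sustainable average later" behaviour described above.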

What action is required?

  • Review and update your apps and integrations to ensure compliance with the new burst rate limits.

  • Implement robust error handling, including exponential backoff and retry logic, to gracefully manage 429 responses.
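A minimal sketch of such 429 handling, assuming a fetch-compatible client is injected; the retry cap and base delay are arbitrary illustrative values, not Atlassian guidance:

```javascript
// Retry-with-backoff sketch around any fetch-compatible function.
// maxRetries and baseDelayMs are illustrative defaults.
async function fetchWithBackoff(doFetch, url, { maxRetries = 4, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    const response = await doFetch(url);
    if (response.status !== 429 || attempt >= maxRetries) {
      return response; // success, a non-rate-limit error, or out of retries
    }
    // Honour the server's Retry-After hint when present; otherwise
    // back off exponentially with jitter to avoid synchronized retries.
    const retryAfter = response.headers.get('Retry-After');
    const delayMs = retryAfter != null
      ? Number(retryAfter) * 1000
      : baseDelayMs * 2 ** attempt * (0.5 + Math.random());
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```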

Why is this important?

  • These limits are based on industry best practices and are essential for maintaining a reliable and high-performing Jira Cloud platform for all users and partners.

  • Adhering to these limits helps prevent service disruptions and ensures a positive experience for your customers.

Need help or more information?

Thank you for your cooperation in adopting these changes. Respecting burst limits is critical to delivering a seamless Jira Cloud experience for all customers.

Thank you for sharing some details about changes to the rate limits, @Prashanth_M. I have two requests:

  1. Please share more details on how the burst rate limits work
  2. How should we test our apps?

More details about burst rate limits

Since rate limiting is quite an essential factor for our app (and, I believe, for many others), I’d appreciate some more details on how the burst rate limiting mechanism works.

I have read your post, the changelog, and the documentation, and I think I understood the different buckets (or Cost Budgets, as they are called in the docs). However, the explanation of burst API rate limiting is quite vague:

Quota and burst rate limiting is implemented as a set of rules that consider the app sending the request, the tenant receiving that request, the number of requests, the edition of the product being queried, and its number of users.

The rules are applied independently to burst (per 1 second) and quota (per 1 hour) periods to determine an appropriate maximum amount of requests. Request processing above the threshold is blocked, including downstream processing.

A good example is the documentation about rate limiting in Jira DC. There it’s clearly explained that each user has a bucket of X requests per time interval, and unused capacity accumulates up to a certain threshold.

Testing

The rate limit docs have stated for a long time already that we shouldn’t perform rate limit testing against Atlassian Cloud tenants. Can you please tell me how we should implement a robust rate limit mechanism without testing it?

We already have rate limiting mechanisms implemented, and I want to ensure they also work properly in the future. I want to make sure that we build a reliable ecosystem together, where apps and the base product can perform well without getting overloaded.

5 Likes

Hi @Prashanth_M,

could you kindly provide an update on https://ecosystem.atlassian.net/browse/FRGE-1923?

I understand that Atlassian needs to have control over the amount of traffic it receives, but it is imperative that consumers can meaningfully respond to such situations.

8 Likes

Just a note that we have multiple large customers on DC that cannot move to Cloud because Atlassian Cloud has API throughput limits that are three or more orders of magnitude lower in RPS than their on-premise systems.

5 Likes

@Prashanth_M any news on the matter? Thank you.

@Prashanth_M, friendly reminder that the Ecosystem is waiting on an update regarding https://ecosystem.atlassian.net/browse/FRGE-1923. Can you please have someone provide an update on this?

4 Likes

Hi Matthias,

Apologies for the delay in replying to your comment. My name is Suyash and I am a Principal Product Manager at Atlassian responsible for API & Extensibility for Jira. I will work with you to ensure this initiative lands well and all your concerns are addressed.

Since rate limiting is quite an essential factor for our app (and, I believe, for many others), I’d appreciate some more details on how the burst rate limiting mechanism works.

Regarding your concerns about how burst rate limiting works, I’d ask you to review the Burst rate limits section on this page. If you still have unanswered questions, feel free to reply to this thread and I will be happy to address them.

The rate limit docs have stated for a long time already that we shouldn’t perform rate limit testing against Atlassian Cloud tenants. Can you please tell me how we should implement a robust rate limit mechanism without testing it?

Thanks for calling this out – it’s a very fair question.

The line in our docs about “Do not perform rate limit testing against Atlassian cloud tenants” is specifically intended to discourage aggressive load / stress testing. Atlassian Cloud is a multi‑tenant SaaS environment, and deliberate stress tests can put unnecessary pressure on the platform and trigger protective controls, without giving you better signal than testing against the documented contract.

It is still absolutely possible to build and validate robust rate‑limit handling without full‑scale load tests:

  • Rely on the documented behavior: when limits are hit, Jira Cloud returns 429 Too Many Requests along with rate‑limit headers such as X-RateLimit-Limit, X-RateLimit-Remaining, Retry-After, and RateLimit-Reason:
    Rate limiting

  • Simulate 429s and headers in your own unit/integration tests (mocks/fakes) to verify backoff, retry, and logging behavior.

  • Run controlled tests on your own non‑production tenants (not end‑user production sites) to observe real 429s under realistic but not extreme load.

  • Optionally introduce a local “test quota” layer in staging that starts returning synthetic 429s earlier than Atlassian would, so you can see how your system behaves under sustained throttling without trying to push Atlassian Cloud to its limits.
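The last point can be sketched as a thin wrapper around the HTTP client. The per-second budget and the synthetic response shape are illustrative assumptions, not Atlassian's actual values:

```javascript
// "Test quota" wrapper for staging environments: returns synthetic 429
// responses once a per-second budget is used up, so backoff logic can be
// exercised without load-testing Atlassian Cloud itself.
function withSyntheticQuota(doFetch, budgetPerSecond = 10) {
  let windowStart = Date.now();
  let used = 0;
  return async (url, options) => {
    const now = Date.now();
    if (now - windowStart >= 1000) {
      windowStart = now; // start a fresh one-second window
      used = 0;
    }
    if (++used > budgetPerSecond) {
      // Mimic the documented contract: a 429 with a Retry-After hint.
      return { status: 429, headers: { get: (h) => (h === 'Retry-After' ? '1' : null) } };
    }
    return doFetch(url, options);
  };
}
```

Because the wrapper has the same call shape as the real client, the rest of the app (including its retry logic) runs unchanged against it in staging.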

Separately, Atlassian Cloud includes automated protections to maintain platform reliability. In extreme cases where traffic is clearly abusive or misconfigured and causing reliability concerns, that traffic may be subject to additional throttling or temporary blocking, as covered by our cloud terms and platform policies.

We’re reviewing the wording in the official docs so it better reflects this intent and points to recommended testing approaches, rather than only stating what not to do.

For full details on Jira Cloud rate limits and responses, see:
Rate limiting

2 Likes

Hi Hannes,
Apologies for the delay in providing this update. This issue is being debugged from our side, and we have identified the problem and the fix. We will soon apply the fix and update the ticket. Thanks for your patience.

3 Likes

In my view, with the never-ending changes and requirements around rate limiting, Atlassian must implement implicit rate limit handling inside @forge/api and other applicable libraries, with options to disable that handling and to specify a maximum retry time window for each request.

Outside of a few very narrow use cases, there is no reason for each vendor to implement rate limit handling itself when Atlassian controls the platform and frameworks end-to-end.

2 Likes

If you’re debugging from your side and so on and so forth, why is the ticket still in status Needs Triage?

Hi @BogdanButnaru - I have updated the ticket. Thanks for following up. You will hear from me soon on the final outcome. Appreciate your patience.

1 Like

Hi @SuyashKumarTiwari, it’s been a while since the last update; are there any news on the matter?

Hi Hannes,

Thanks for your patience. Here is the response from the Atlassian engineering team; I have also updated the ticket with this information.

Recommended solution for Forge apps (use backend resolver, not frontend headers)

For Forge apps implementing rate‑limit awareness, the supported and reliable way to read rate‑limit headers today is via a backend resolver using @forge/api, not directly from frontend requestJira calls.

This aligns with Forge’s documented execution model:

  • @forge/api runs on Atlassian infrastructure in the Forge function runtime, with full access to the HTTP Response (including headers) and no browser CORS restrictions.

  • @forge/bridge.requestJira() runs in the browser (Custom UI iframe) and is therefore subject to CORS and Access-Control-Expose-Headers behaviour, as described earlier on this page and in the “Forge FrontEnd Rate Limiting Headers Requirements” doc.

A backend resolver can reliably read the rate‑limit headers and return only the data the UI needs:

// Forge function (backend) – canonical place to read rate-limit headers
import api, { route } from '@forge/api';
import Resolver from '@forge/resolver';

const resolver = new Resolver();

resolver.define('getIssueWithRateLimitInfo', async () => {
  const response = await api.asApp().requestJira(
    route`/rest/api/3/issue/KAN-1`
  );
  // Read the rate-limit headers while we still have the full Response.
  const rateLimitInfo = {
    limit: response.headers.get('X-RateLimit-Limit'),
    remaining: response.headers.get('X-RateLimit-Remaining'),
    reset: response.headers.get('X-RateLimit-Reset'),
    retryAfter: response.headers.get('Retry-After') ??
      response.headers.get('Beta-Retry-After'),
  };
  const body = await response.json();
  return { body, rateLimitInfo };
});

export const handler = resolver.getDefinitions();

The Custom UI frontend then calls this resolver via @forge/bridge.invoke and does not need to read or interpret raw HTTP headers itself.

Why rate‑limit handling should not live in the frontend

Even once Access-Control-Expose-Headers is correctly configured on Jira/Confluence, two concerns remain:

  1. Browser/CORS limitations remain a moving part
    Frontend access to headers always depends on CORS configuration and browser behaviour. Backend @forge/api calls are not constrained by this, so rate‑limit logic there is more robust.

  2. Security and platform alignment
    Forge documentation and internal guidance consistently model heavy integration logic and product REST API calls as backend responsibilities implemented with @forge/api, with Custom UI frontends calling backend resolvers via @forge/bridge.invoke. This pattern also appears in the public Forge app REST APIs (Preview) documentation, where app logic (including product API calls) is implemented in Forge functions and exposed via apiRoute, and clients—whether UIs or external systems—call those backend endpoints rather than interacting with product REST APIs directly.

Let me know if this explanation sounds reasonable. Feel free to reach out for any doubts.

1 Like

This can only be a workaround and not a real solution. Here is why:

  • Using a resolver adds costs, especially with the upcoming (temporarily reverted) change to include REST API wait time in the pricing.
  • The last time I checked, using a resolver increases the request time.
  • I don’t see why this would be good for overall performance (imagine what would happen to your architecture if all current apps used a resolver to access every REST API).
3 Likes

No, it does not.

You’re asking app developers to rearchitect their entire frontend API layer because Atlassian won’t add a few header names to Access-Control-Expose-Headers. That’s not a reasonable explanation, that’s shifting the burden.

3 Likes

@VickyHu @SeanBourke bringing this to your attention

Hi @SuyashKumarTiwari

This does not, in fact, sound reasonable. It also contradicts what Atlassian has communicated so far and it is quite strange to read. Let me elaborate why:

This guidance is news to me, and I have been developing Forge apps for a while. If Atlassian sees REST API calls as a backend responsibility, why would @forge/bridge even ship with methods like requestJira() and friends? And if this is indeed a pattern that Atlassian wants to discourage, why promote it:

Even the Forge architecture diagram shows the use of @forge/bridge APIs to call the host product as normal usage:

I’ve tried, but I cannot find ANYTHING in the Forge app REST APIs (Preview) documentation about invoking a resolver to make an API call instead of making the call directly from the frontend. Can you maybe enlighten me as to where that is?

Safety concerns

Documentation and guidance aside, it makes no sense to use an invocation to call a product REST API when the same call can be made safely, even more safely, from the UI. Using an invocation:

  1. Adds overhead to the request, i.e. it takes longer for the UI to receive a response and react to user input.
  2. Greatly increases the amount of backend invocations, which pushes against the platform limits and creates compute costs.
  3. Forces API calls from an environment where they can only be made asUser into a runtime where it is possible to make them either asUser or asApp – making it more likely that bugs introduce data leakage or permission escalations.

Missing headers are not a new problem

It’s been three years since I opened FRGE-1147, a bug about how the absence of header information in @forge/bridge made a part of the Confluence API unusable. And while that issue was fixed quickly and competently, at no point did Atlassian suggest that making these calls directly, without an intermediate invocation, was the problem.

App developers by and large want to build secure and well-performing products. Being able to respect rate-limiting information without rewriting all of our frontends to stop using an essential feature that was introduced five years ago with @forge/bridge 2.0 should not be too much to ask.

6 Likes

Hi @SuyashKumarTiwari,

As others have already described, this explanation seems anything but reasonable.

In what way are CORS limitations a “moving part”? The CORS protocol is a web standard, and the solution to the problem is well-known.

How does the Forge REST API feature relate to the issue discussed here? Obviously, a Forge REST API is implemented as a back-end function, but there are legitimate reasons for using the bridge functions in application UIs. I also cannot see how the Forge documentation prefers REST API calls in the back-end. Atlassian’s own “reference architecture” uses the requestJira bridge function instead of a resolver.


As a side note: If I understand you correctly, you received the response from your engineering team and just forwarded it here. However, I find it very frustrating to have to respond to what I believe is an (at least partially) AI-generated text. I invested time debugging the problem and creating the bug report, so I find it all the more disappointing to receive such a response.

Best regards,
Christian

4 Likes

This further highlights that Forge needs an option for implicit rate limit handling. If any request may be rate limited, you are basically saying that all interaction with Atlassian APIs needs to move to the backend, with significant RTT overheads, which plays into Atlassian’s hands as Atlassian becomes more and more involved as a competitor to marketplace apps.

Here is one way it might look:
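As a rough sketch of what implicit handling could look like if it lived inside the client library. The function name, the rateLimitHandling option, and its fields are my invention, not an existing @forge/api feature:

```javascript
// Hypothetical sketch only: rateLimitHandling, disabled, and maxRetryWindowMs
// are invented names, not an actual Forge API. The idea: the library retries
// 429s implicitly, and callers can opt out or bound the total retry time.
async function requestWithImplicitHandling(doFetch, url, { rateLimitHandling = {} } = {}) {
  const { disabled = false, maxRetryWindowMs = 30000 } = rateLimitHandling;
  const deadline = Date.now() + maxRetryWindowMs;
  for (;;) {
    const response = await doFetch(url);
    if (disabled || response.status !== 429 || Date.now() >= deadline) {
      return response; // caller opted out, request succeeded, or retry window elapsed
    }
    // Wait as the server asks, but never past the caller's retry window.
    const retryAfterSec = Number(response.headers.get('Retry-After') ?? 1);
    const delayMs = Math.min(retryAfterSec * 1000, Math.max(0, deadline - Date.now()));
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```

With something like this built in, most apps would never write a retry loop, while the options still cover the narrow cases where vendors need direct control.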

And this also highlights how disconnected Atlassian engineers are from the third-party cloud development platform, which is a direct result of the refusal to adopt Forge as a first-class citizen in the platform to the same extent that P2 was adopted in the Server and DC platforms. Unlike P2, which was extensively used for actual core product development, Forge remains a second-class citizen and an afterthought that is not used by Atlassian engineers to build any serious core product features.