Action Required: Update your Apps to comply with Jira Cloud Burst API rate limits

As part of Atlassian’s ongoing commitment to reliability, performance, and fair usage, burst API rate limits are being introduced for all Jira Cloud customers. These limits, first announced in the changelog in August 2025, are designed to prevent service disruptions caused by sudden spikes and sustained high load in API traffic. By proactively managing high-volume requests, we help ensure a stable and seamless experience for all customers.

What does this mean for you?

  • If your app exceeds the allowed number of API requests in a short period, you will receive a “429: Too Many Requests” error.

  • These limits are enforced on a per-tenant and per-API basis.

  • Rate limits are enforced using industry-standard token bucket algorithms, allowing for short bursts of activity but maintaining a sustainable average rate over time.

What action is required?

  • Review and update your apps and integrations to ensure compliance with the new burst rate limits.

  • Implement robust error handling, including exponential backoff and retry logic, to gracefully manage 429 responses.
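As an illustration, here is a minimal sketch of such handling; it is not Atlassian-provided code. The site URL, endpoint, credentials, and backoff parameters are placeholders and examples rather than recommended values.

```typescript
// Minimal sketch: retry a Jira Cloud REST call on HTTP 429 with exponential
// backoff, honoring the Retry-After header when present. The URL and auth are
// placeholders -- adapt to your app's HTTP client and auth model.
async function fetchWithBackoff(url: string, init: RequestInit = {}, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, init);
    if (response.status !== 429) {
      return response; // success or a non-rate-limit error: let the caller handle it
    }
    if (attempt === maxRetries) {
      throw new Error(`Still rate limited after ${maxRetries} retries`);
    }
    // Prefer the server's Retry-After hint; otherwise back off exponentially with jitter.
    const retryAfter = Number(response.headers.get("Retry-After"));
    const delayMs = Number.isFinite(retryAfter) && retryAfter > 0
      ? retryAfter * 1000
      : Math.min(60_000, 2 ** attempt * 1000) + Math.random() * 250;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("unreachable");
}

// Hypothetical usage against a placeholder tenant and endpoint:
// const res = await fetchWithBackoff("https://your-site.atlassian.net/rest/api/3/myself", {
//   headers: { Authorization: `Basic ${credentials}` },
// });
```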

Why is this important?

  • These limits are based on industry best practices and are essential for maintaining a reliable and high-performing Jira Cloud platform for all users and partners.

  • Adhering to these limits helps prevent service disruptions and ensures a positive experience for your customers.

Need help or more information?

Thank you for your cooperation in adopting these changes. Respecting burst limits is critical to delivering a seamless Jira Cloud experience for all customers.

Thank you for sharing some details about changes to the rate limits, @Prashanth_M. I have two requests:

  1. Please share more details about how the burst rate limits work
  2. How should we test our apps?

More details about burst rate limits

Since rate limiting is quite an essential factor for our app (and I believe many others), I’d appreciate some more details about how the burst rate limiting mechanism works.

I have read your post, the changelog, and the documentation, and I think I understood the different buckets (or Cost Budgets, as they are called in the docs). However, the explanation of burst API rate limiting is quite vague:

Quota and burst rate limiting is implemented as a set of rules that consider the app sending the request, the tenant receiving that request, the number of requests, the edition of the product being queried, and its number of users.

The rules are applied independently to burst (per 1 second) and quota (per 1 hour) periods to determine an appropriate maximum amount of requests. Request processing above the threshold is blocked, including downstream processing.

A good example is the documentation about rate limiting in Jira DC. There it’s clearly explained that each user gets a bucket of X requests per time interval, and unused requests accumulate up to a certain threshold.
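To make that mental model concrete, here is a rough token-bucket sketch. This is my own illustration of the DC-style behaviour described above, not Atlassian’s actual implementation, and the capacity and refill rate are made-up numbers.

```typescript
// Illustrative token bucket (not Atlassian's implementation): each user gets a
// bucket that refills at a fixed rate and accumulates unused capacity up to a cap.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(
    private readonly capacity: number,       // burst ceiling, e.g. 10 requests
    private readonly refillPerSecond: number // sustained rate, e.g. 2 requests/s
  ) {
    this.tokens = capacity;
  }

  /** Returns true if the request may proceed, false if it should be throttled (429). */
  tryConsume(cost = 1): boolean {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    // Unused capacity accumulates, but never beyond the bucket's cap.
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens < cost) return false;
    this.tokens -= cost;
    return true;
  }
}

// Example: allow bursts of up to 10 requests, with a sustained 2 requests/second per user.
const bucket = new TokenBucket(10, 2);
console.log(bucket.tryConsume()); // true while tokens remain
```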

Testing

The rate limit docs have stated for a long time that we shouldn’t perform rate limit testing against Atlassian Cloud tenants. Can you please tell me how we should implement a robust rate limit mechanism without being able to test it?

We already have rate limiting mechanisms implemented, and I want to ensure they continue to work properly in the future. I want to make sure that we build a reliable ecosystem together, where apps and the base product can perform well without getting overloaded.

5 Likes

Hi @Prashanth_M,

Could you kindly provide an update on https://ecosystem.atlassian.net/browse/FRGE-1923?

I understand that Atlassian needs to have control over the amount of traffic it receives, but it is imperative that consumers can meaningfully respond to such situations.

8 Likes

Just a note that we have multiple large customers on DC that cannot move to Cloud because Atlassian Cloud’s API throughput limits are three or more orders of magnitude lower, in requests per second, than their on-premise systems.

5 Likes

@Prashanth_M any news on the matter? Thank you.

@Prashanth_M, friendly reminder that the Ecosystem is waiting on an update regarding https://ecosystem.atlassian.net/browse/FRGE-1923. Can you please have someone provide an update on this?

4 Likes

Hi Matthias,

Apologies for the delay in replying to your comment. My name is Suyash and I am a Principal Product Manager at Atlassian responsible for API & Extensibility for Jira. I will work with you to ensure this initiative lands well and all your concerns are addressed.

Since rate limiting is quite an essential factor for our app (and I believe many others), I’d appreciate some more details about how the burst rate limiting mechanism works.

Regarding your concerns around how Burst rate limiting works, I request you to review the Burst rate limits section on this page. If you still have questions that are unanswered, feel free to reply to this thread and I will be happy to address your queries.

The rate limit docs have stated for a long time that we shouldn’t perform rate limit testing against Atlassian Cloud tenants. Can you please tell me how we should implement a robust rate limit mechanism without being able to test it?

Thanks for calling this out – it’s a very fair question.

The line in our docs about “Do not perform rate limit testing against Atlassian cloud tenants” is specifically intended to discourage aggressive load / stress testing. Atlassian Cloud is a multi‑tenant SaaS environment, and deliberate stress tests can put unnecessary pressure on the platform and trigger protective controls, without giving you better signal than testing against the documented contract.

It is still absolutely possible to build and validate robust rate‑limit handling without full‑scale load tests:

  • Rely on the documented behavior: when limits are hit, Jira Cloud returns 429 Too Many Requests along with rate‑limit headers such as X-RateLimit-Limit, X-RateLimit-Remaining, Retry-After, and RateLimit-Reason (see the Rate limiting documentation).

  • Simulate 429s and headers in your own unit/integration tests (mocks/fakes) to verify backoff, retry, and logging behavior.

  • Run controlled tests on your own non‑production tenants (not end‑user production sites) to observe real 429s under realistic but not extreme load.

  • Optionally introduce a local “test quota” layer in staging that starts returning synthetic 429s earlier than Atlassian would, so you can see how your system behaves under sustained throttling without trying to push Atlassian Cloud to its limits.
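To illustrate the mock-based approach from the list above, here is a minimal sketch using Node’s built-in test runner. The createClient wrapper, retry counts, and shortened delays are hypothetical stand-ins for whatever HTTP client your app actually uses.

```typescript
// Sketch of a unit test that simulates Jira Cloud's 429 behavior with a fake
// fetch, so retry/backoff logic can be verified without touching a real tenant.
import assert from "node:assert";
import { test } from "node:test";

type FetchLike = (url: string) => Promise<{ status: number; headers: Map<string, string> }>;

// Hypothetical wrapper standing in for your app's real HTTP client.
function createClient(fetchImpl: FetchLike) {
  return async (url: string) => {
    for (let attempt = 0; attempt < 3; attempt++) {
      const res = await fetchImpl(url);
      if (res.status !== 429) return res;
      const retryAfter = Number(res.headers.get("Retry-After") ?? "1");
      await new Promise((r) => setTimeout(r, retryAfter * 10)); // delay shortened for tests
    }
    throw new Error("gave up after repeated 429s");
  };
}

test("client retries after synthetic 429s and eventually succeeds", async () => {
  let calls = 0;
  const fakeFetch: FetchLike = async () => {
    calls++;
    return calls < 3
      ? { status: 429, headers: new Map([["Retry-After", "1"]]) }
      : { status: 200, headers: new Map() };
  };

  const res = await createClient(fakeFetch)("https://example.invalid/rest/api/3/myself");
  assert.strictEqual(res.status, 200);
  assert.strictEqual(calls, 3); // two synthetic 429s, then success
});
```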

Separately, Atlassian Cloud includes automated protections to maintain platform reliability. In extreme cases where traffic is clearly abusive or misconfigured and causing reliability concerns, that traffic may be subject to additional throttling or temporary blocking, as covered by our cloud terms and platform policies.

We’re reviewing the wording in the official docs so it better reflects this intent and points to recommended testing approaches, rather than only stating what not to do.

For full details on Jira Cloud rate limits and responses, see the Rate limiting documentation.

2 Likes

Hi Hannes,
Apologies for the delay in providing this update. This issue is being debugged from our side, and we have identified the problem and the fix. We will apply the fix and update the ticket soon. Thanks for your patience.

3 Likes

In my view, with the never-ending changes and requirements around rate limiting, Atlassian must implement implicit rate limit handling inside @forge/api and other applicable Forge libraries, with options to disable that handling and to specify a maximum time window for retries for each request.

When Atlassian controls the platform and frameworks end-to-end, there is no reason for each vendor to implement rate limit handling, outside of a few very narrow use cases.
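As a purely hypothetical illustration of what that could look like (these options do not exist in @forge/api today; the names are made up):

```typescript
import api, { route } from "@forge/api";

// Hypothetical: the platform retries on 429 transparently, within a caller-defined window.
async function getMyself() {
  const response = await api.asApp().requestJira(route`/rest/api/3/myself`, {
    headers: { Accept: "application/json" },
    // Imagined options, per the suggestion above (not real @forge/api parameters):
    // retryOn429: true,          // let the platform handle 429s and backoff
    // maxRetryWindowMs: 30_000,  // give up if retries would exceed this window
  });
  return response.json();
}
```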

2 Likes

If you’re debugging from your side and so on and so forth, why is the ticket still in status Needs Triage?