2026 point-based rate limits

Thank you all for continuing to engage here. I know this thread has gotten tense, and I want to acknowledge the frustration some of you are feeling about both the changes and our communication around them. You’re running real businesses on top of our platform, and you’re right to push for clarity.

Several of you have also raised concerns about the timeline between the announcement in December and the original enforcement date. While we conducted extensive internal due diligence to ensure these changes would not cause widespread breakage, it’s clear that the timeframe we provided hasn’t been sufficient for you to fully digest the changes, ask questions, and, where necessary, speak with us directly.

To support a smoother rollout and give you additional time to prepare, we are moving the start date of the phased enforcement for the new point-based rate limits to Mar 2, 2026.

Our goal in doing this is not to reopen the core design or to suggest that we lack confidence in the underlying approach. It’s to:

  • Give you more space to understand what this means for your specific apps

  • Allow time for 1:1 conversations where a public thread isn’t the right place to go into detail

  • Help ensure the transition is as smooth and predictable as possible for you and your customers

Several questions in this thread have requested very specific details about the data, models, and thresholds underlying the new rate limits and tiering. I want to be upfront about the tension we’re working within:

  • We do want to give you enough information to operate and plan with confidence.

  • We are constrained in what we can publish in a public forum about our internal modeling, data sets, and the way we allocate shared capacity.

Because of that second point, there are some things we’re not able to do in this thread, given security, abuse, and fairness constraints:

  • We’re not going to publish the full underlying data sets or models used to set tiers and limits.

  • We’re not going to share exact internal thresholds or “guarantees” around how other apps will behave under specific scenarios.

  • We’re not going to turn those internals into public, customer-facing SLAs.

That said, we are committed to partnering with you one‑on‑one: we can walk through how these limits apply to your specific app, review your traffic patterns, and discuss options or adjustments through the support and engagement channels we’ve outlined.

What we have done, and will continue to do:

  • Ensure that these changes are guided by clear governance principles, including:

    • Protecting customers from “noisy neighbor” patterns, where one customer can degrade another customer’s experience

    • Providing enough headroom in Tier 1’s global pool for normal organic growth for the vast majority of apps, including free apps

    • Proactively moving apps whose traffic profiles indicate they and their customers are better served by per-tenant limits into Tier 2

  • Offer qualitative guidance about usage patterns that are likely to work well within each tier

  • Engage directly about your specific app:

    • If you’re concerned that your app’s limits or tiering aren’t appropriate given your real traffic, please raise a ticket here: Increase Marketplace App Rate limits.

    • Our team will review your actual profile, and in more complex cases, we’re happy to schedule a call to walk through your situation in more detail than is possible here.

The move to Mar 2, 2026 is intended to provide sufficient time for these conversations to occur and, where applicable, to make targeted adjustments based on your actual usage patterns.

We acknowledge that we could have done better, both in the amount of detail we provided upfront and in the time available for you to respond. We’ll apply this feedback to improve how we design and communicate future changes, particularly by providing earlier examples and more concrete guidance, and by giving you greater confidence and predictability as changes roll out.

To make sure we can give you the depth you’re asking for, we’ll handle app‑specific questions via support tickets and reserve this thread for net-new, broadly applicable themes. I’ll keep monitoring for new themes we haven’t addressed, but:

  • For questions about specific apps, please open a support ticket so we can investigate your case in more detail.

  • For requests for internal models, datasets, or exact thresholds, the answer will remain that we will not publish those.

We do appreciate the time and thought you’ve put into this feedback, even when it’s uncomfortable to read, and we’re committed to making this transition as smooth as possible for you and your customers.

— Alan

17 Likes

Hi @AlanBraun ,

Thank you for adjusting the deadline.

Given that Atlassian cannot share its internal reasoning and data points with us, I would like to ask for the following commitments:

  1. Atlassian will update the developer console with API usage statistics for all apps using the new point-based system, allowing developers to monitor in real time the impact of the changes we make to accommodate the new system.

  2. Atlassian will allow developers at least 1 month to test against the API usage statistics in the developer console. If the developer console changes are not live before Feb 2, 2026, the Mar 2, 2026 deadline should be postponed accordingly.

If the goal is to instil trust in the new system, Atlassian needs to allow us to get our hands dirty and tweak our systems, similar to what Atlassian did with Forge pricing, where we were given several months to see the impact of usage costs in the developer console before the new pricing went into effect. That allowed us to optimize our apps for cost efficiency. It also gave Atlassian a lot of input on bugs in the price calculations, which has helped avert a possible catastrophe when Forge pricing goes live.

Given the fear within the partner community that the new point-based rate-limiting system will hurt customer experience, our ask is that Atlassian extends us the same courtesy: allow us to optimise our apps for API usage efficiency.

We share a common goal, and we can help Atlassian achieve it if Atlassian actually includes us in the process.

25 Likes

No SLA because you know that a global pool rate limiting model obviously enables DDoS attacks on all Tier 1 apps?

1 Like

Thanks for the update, @AlanBraun. The one thing I do not understand is why you are doing this backwards.

  1. Atlassian announces the change, considers it a non-breaking change, so the thinking is that partners are not affected (not following the regular deprecation period).
  2. Partners have no visibility into the change. So partners scramble to make estimates and form their own judgments, with little insight into the actual system behavior or numbers.
  3. Atlassian says they will update the Developer Console with actual numbers in the future (I am still not clear when that will be - before or after the rollout?).

Why not:

  1. Soft-launch the system in the background
  2. Display point usage in the Developer Console
  3. Announce it with a cut-off date and give us time to make our own assessment and iron out issues with Atlassian, based on real data.

16 Likes

Thank you for the response @AlanBraun .

In addition to the Developer Console updates Remie mentioned, the number one thing that would alleviate our concerns would be to get the new headers as soon as possible. Only by observing how our apps behave under the new rate limits in practice can we be confident that they won’t break.
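Even something as simple as logging the headers per response would let us build that picture. A minimal sketch, assuming the header names from the current docs (the final or beta-prefixed names may differ):

```typescript
// Minimal observation sketch: record whatever rate-limit headers come back
// so real point consumption can be charted per endpoint. The header names
// here are assumptions until the final (or beta-prefixed) headers ship.
function logRateLimitHeaders(response: Response, endpoint: string): void {
  const names = ['X-RateLimit-Limit', 'X-RateLimit-Remaining', 'X-RateLimit-NearLimit'];
  for (const name of names) {
    const value = response.headers.get(name);
    if (value !== null) {
      console.log(`${endpoint}: ${name}=${value}`);
    }
  }
}
```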

4 Likes

One thing I would like to add to this thread, as a new topic:

One of the ways to limit API calls is aggressive caching; however, Atlassian has discontinued the Forge Cache EAP and will not offer a caching solution.

Is there an option for Atlassian to allow data egress for caching purposes, in the same way it allows analytics, whilst still preserving RoA compliance?
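To make the ask concrete: the shape we would want is roughly a cache-aside wrapper over an external store. A hypothetical sketch, since an external store is precisely the egress that RoA currently rules out:

```typescript
// Hypothetical cache-aside wrapper. The CacheStore would have to live
// outside the stateless Forge runtime (i.e. an external store), which is
// precisely the data egress that Runs on Atlassian currently prohibits.
interface CacheStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

async function cachedFetch(
  cache: CacheStore,
  key: string,
  ttlSeconds: number,
  fetchFresh: () => Promise<string>,
): Promise<string> {
  const hit = await cache.get(key);
  if (hit !== null) {
    return hit; // served from cache: no API call, no points spent
  }
  const fresh = await fetchFresh();
  await cache.set(key, fresh, ttlSeconds);
  return fresh;
}
```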

5 Likes

To add to that, while the headers are still WIP: can we also receive the point cost of each request as a response header? It would help us analyze our traffic and identify areas we could optimize, or open discussions with our customers about optimizing their processes.

4 Likes

Hi @AlanBraun

I certainly appreciate the delay of enforcement by a month, although, as with Remie, this would seem more “fair” to vendors if enforcement were delayed until after vendors have had access to appropriate statistics for a reasonable length of time.

In terms of Atlassian not “shar[ing] guarantees around how other apps will behave under specific scenarios”, is it correct to interpret this as meaning that Atlassian does not intend to disclose whether JCMA/CCMA-sourced API calls will be counted towards the points limits?

Here is one more new theme: regarding the statistics needed in the developer console, the focus so far has been on Tier 1 apps. Tier 2 apps will still need statistics, and their needs are different from those of Tier 1 apps.

The problem vendors need to solve is alerting when clients are getting close to breaching their limits, before the limits are actually breached and damage is caused. For example, if I have a number of customers who are at 70% of max points usage, I need to be careful and possibly redesign my app (especially if this usage is growing). This problem can easily get exacerbated with large clients, because the points-per-user ratio starts to decline once the customer reaches the 500k-point cap.

For Tier 2 apps, in addition to the overall per-app points calculations needed for Tier 1, vendors would need to understand what points usage looks like on a per-tenant basis.

For example, for a given tenant, what is the median point usage per hour? What do the 95th, 99th and 100th percentiles look like for points/hour? How are all of these percentiles broken down across front end and back end requests?

Perhaps displaying these statistics is not necessary for every single tenant, but having this data for at least (say) the top 10 outliers would be instrumental for Marketplace partners to properly manage their products.
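To make the shape of these statistics concrete, here is a sketch of the per-tenant aggregation we mean (hypothetical data shapes; nearest-rank percentiles):

```typescript
// Sketch of the per-tenant aggregation we'd want surfaced in the console.
// `hourlyPoints` is one sample per hour of a tenant's point usage; the
// data shape is hypothetical. Assumes a non-empty sample.
function percentile(sortedAscending: number[], p: number): number {
  // Nearest-rank percentile on an ascending-sorted sample.
  const rank = Math.ceil((p / 100) * sortedAscending.length);
  return sortedAscending[Math.max(0, rank - 1)];
}

function tenantStats(hourlyPoints: number[]) {
  const sorted = [...hourlyPoints].sort((a, b) => a - b);
  return {
    medianPerHour: percentile(sorted, 50),
    p95PerHour: percentile(sorted, 95),
    p99PerHour: percentile(sorted, 99),
    maxPerHour: sorted[sorted.length - 1],
  };
}
```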

As a reminder, vendors cannot do this without Atlassian’s help because the only other source of data for this, the points-based rate limit header, is:

  1. seemingly not even provided as of today’s date (!),
  2. not accessible from the Forge front end due to headers being stripped (even if they were otherwise available), and
  3. never going to be accessible to the vendor for Runs on Atlassian apps under any circumstances.
7 Likes

One of my worries is the migration. We have a customer who is about to migrate 75,000 records. This isn’t through JCMA (we had too many problems with that); this is through an import. Won’t these changes mean an end to those sorts of imports? If they do, how do we cater for customers still on the Forge version that allows the import?

4 Likes

How do different types of rate limits interact with each other? For example: if a request gets blocked by burst rate limit, would this request count towards the other rate limits and the global rate limit specifically?

Atlassian keeps pointing to the “vast majority of the apps” in this thread, but how does that apply to the top 10% of apps by revenue and/or by active users/installations?

2 Likes

Hi @MaheshPopudesi

We’ve done an analysis of our main apps, and we have discovered the following:

Most of our API calls are in response to a request coming from Confluence, as our apps mainly render static content macros in Confluence.

What surprised us is that Confluence sends 2 identical requests in short sequence to render a single macro. This means we do the API calls twice, once for each request.

If Confluence sent only a single request, we would save about 40% of our API calls overall.

Can Atlassian please call our apps just once for a single render request? This would reduce the API traffic substantially.

I assume other vendors’ apps are affected as well, so a global reduction of the API call pressure for Atlassian could be achieved.
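For what it’s worth, collapsing the duplicates app-side would look roughly like the hypothetical sketch below, but it only helps when both requests reach the same process, which Forge’s stateless invocation model doesn’t guarantee; hence the ask for a platform-side fix:

```typescript
// Hypothetical in-process dedupe: collapse identical render requests that
// arrive within the lifetime of one process. Forge's invocation model does
// not guarantee both requests reach the same process, so this is at best a
// partial mitigation, not a substitute for a platform-side fix.
const inFlight = new Map<string, Promise<string>>();

function renderOnce(requestKey: string, render: () => Promise<string>): Promise<string> {
  const existing = inFlight.get(requestKey);
  if (existing) {
    return existing; // second identical request reuses the first render
  }
  const pending = render().finally(() => inFlight.delete(requestKey));
  inFlight.set(requestKey, pending);
  return pending;
}
```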

7 Likes

See this @marc: https://jira.atlassian.com/browse/CONFCLOUD-69108

Something I reported back in 2020.

Checked today, and I can confirm it is still the same behaviour: the static macro is asked to render twice (2 identical requests).

9 Likes

I can confirm we’ve seen this as well for Confluence macro exports, for years.

1 Like

Hi,

With the current rate limits, I don’t think our app can complete a DC-to-Cloud migration in a timely manner. We have a timesheet app, and we have to migrate worklog attributes for each worklog. The number of worklogs can be in the order of millions in a large instance. There are no bulk APIs for setting worklog properties or getting the worklogs of an issue. If an instance has 2 million worklogs, then at 1 API call per worklog it would take around 17 hours to migrate, even if we don’t use any additional APIs. Is this expected behaviour? Will it cause the migration to time out?
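For reference, the back-of-envelope behind that 17-hour figure; the sustained request rate is my assumption, and the actual allowed rate under the new points system is the open question:

```typescript
// Back-of-envelope behind the 17-hour figure. The sustained request rate
// is an assumption; the real allowed rate under the new limits is unknown.
const worklogs = 2_000_000;     // worklogs to migrate, 1 API call each
const callsPerSecond = 33;      // hypothetical sustained throughput
const hours = worklogs / callsPerSecond / 3600;
console.log(`${hours.toFixed(1)} hours`); // ≈ 16.8 hours
```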

OK, putting all concerns aside, we are looking into handling rate limits in our Jira app, especially the quota-based one (hourly reset).

Our situation: our app’s remote has 2 modes, issue-event-based sync and background sync. The issue-event-based sync has priority over background sync, in that a timely response matters more there. So we want to avoid situations where the background sync breaches the hourly limit and thereby blocks the issue-event-based sync from responding immediately. To achieve that, we want to throttle the background sync once it reaches 50% of the limit; the 50% was chosen arbitrarily by us (a sketch of the intended logic is further down).

My question: how can we detect that we are at 50% of the quota-based limit (or any other percentage, for that matter)? The docs mention the special case of >80% usage (i.e. <20% left), indicated by the special header X-RateLimit-NearLimit. But what about percentages below 80%? How can we compute that? It seems the headers X-RateLimit-Limit and X-RateLimit-Remaining could be used for this, but in my tests their data is not for the quota-based limit, but for some endpoint-specific limit. Example data for /rest/api/3/field/search:

  • X-Ratelimit-Limit: 350
  • X-Ratelimit-Remaining: 349

350 cannot be the quota-based limit; that’s way too small.
Or do those headers not reflect quota-based data yet, but will do so after March 2nd? If so, how can we test that now? The docs mention “beta-prefixed headers”, but I don’t see those in my tests. Do I have to enable them somehow?
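For concreteness, this is the kind of logic we are trying to build. A minimal sketch, assuming the X-RateLimit-Limit / X-RateLimit-Remaining headers will eventually report the hourly quota (which is exactly what I can’t confirm from my tests):

```typescript
// Sketch of the throttling we want: pause background sync once 50% of the
// hourly quota is used, keeping headroom for issue-event-based sync.
// Assumes X-RateLimit-Limit / X-RateLimit-Remaining will one day reflect
// the quota-based limit, which (per the tests above) they currently do not.

const BACKGROUND_SYNC_BUDGET = 0.5; // our arbitrary 50% threshold

interface QuotaSnapshot {
  limit: number;
  remaining: number;
}

function readQuota(response: Response): QuotaSnapshot | null {
  const limitHeader = response.headers.get('X-RateLimit-Limit');
  const remainingHeader = response.headers.get('X-RateLimit-Remaining');
  if (limitHeader === null || remainingHeader === null) {
    return null; // headers not present on this response
  }
  const limit = Number(limitHeader);
  const remaining = Number(remainingHeader);
  if (!Number.isFinite(limit) || limit <= 0 || !Number.isFinite(remaining)) {
    return null; // malformed values
  }
  return { limit, remaining };
}

function shouldThrottleBackgroundSync(quota: QuotaSnapshot): boolean {
  const usedFraction = (quota.limit - quota.remaining) / quota.limit;
  return usedFraction >= BACKGROUND_SYNC_BUDGET;
}
```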

Any advice from @Atlassian, @AlanBraun?

6 Likes

Hi @AndreasEbert ,

I think these are the burst limits, and they have been in Jira Cloud for a few years.

Hi @MaheshPopudesi ,

For me, the main issue with Tier 1 (Global Pool) is that even a small customer on a free tier or generating minimal revenue can consume all global points and effectively block major customers who drive most of our income.

While we do have caching in place and may introduce more aggressive caching for entities that are not yet cached, nothing currently prevents a small Jira customer with, say, 10 users from retrieving several hundred thousand issues.

3 Likes

@MaheshPopudesi, would you be able to reply to my last post? I’m still eager to test how my app handles rate limits, and am at a loss as to how to do that.

2 Likes

Quick note, thanks for your patience. We’ve been reviewing the latest requests and will circle back shortly.