2026 point-based rate limits

Hi Atlassian,

I saw that you announced a new points-based rate limit system in CHANGE-2958, which is going live in 48 days.

This announcement leaves a number of things ambiguous and I am hoping that you can provide some clarifications:

  • Previously, rate limits were documented as applying only to back-end calls. With the new points-based system, will rate limits also be applied to front-end calls?

  • Forge is not currently capable of providing rate-limit headers to front-end code (ECO-899, FRGE-1923). If the rate limits will also apply to front-end code, can the implementation date of this change please be pushed back to give the Forge team time to fix that issue, plus at least a 60-day buffer on top of that so that vendors have time to implement and deploy changes on their end?

  • The timing of a refreshed-once-per-hour token bucket is far too harsh. With one bucket shared across tenants, whenever any tenant causes the app to exceed its token limit (for whatever reason), this effectively causes a DoS for all other customers for up to one hour. Given that most customers are paying for Marketplace apps, this length of outage is not acceptable. Can Atlassian propose an enforcement system that is less destructive to the vendor/customer relationship? Ways to work around this might include using a smaller token window (instead of 65k/hour, what about 1083/minute, with some burst ability? — see the sketch after this list), or providing degraded response times rather than completely stopping service. Even switching all apps to Tier 2 is not a cure, because although that limits the blast radius to just one customer, you are still putting the entire app’s functionality out of service for an extended period of time.

  • How are migrations (DC to Cloud, Cloud to Cloud) treated in this new scheme? DC->Cloud migrations typically require a specific header for API calls. Are migration requests included in the new token bucket or not?

  • You wrote: “Planned updates to the developer console to show which tier your app belongs to ahead of the enforcement”. Can Atlassian be more precise about when this is going to be made available? We already do not have a lot of time (48 days) and vendors need time to pivot and react.

  • The Confluence page states: “Each request must stay within the maximum allowed cost per request”. What does this mean? What is the “maximum allowed cost per request”?

  • Do requests that return multiple objects of a type incur a point-based cost for every single object? For example, if I request a group object that contains 15 users, am I billed 15*2=30 points for this request? (If an app inadvertently requests an enterprise group that contains 30,000 users, does that exhaust the app’s token bucket with one request?)

  • The changelog says “If your app requires additional capacity in the future, rest assured we will provide clear pathways to request for this”. These clear pathways seem to be missing. What is the escalation path for getting an app moved into Tier 2? Can there please be a specific contact point, an ECOHELP ticket type, or something else public that gives vendors a visible path to request this? And, pretty please, can Atlassian establish and publish an SLA for resolving this type of request that is proportional to how critical it is (hours or less)? When an app hits the Tier 1 limit, this creates a critical issue where users cannot use the app, so vendors need a quick way to resolve it. Again, this is going live in less than two months and this would ideally not be a “we’ll post the details eventually” item.

  • Is Atlassian planning to proactively reach out to all existing vendors who are already above (say) 50% of the tier 1 limits to let them know about these changes and where the vendors’ apps currently stand?

  • Do the current rate limit headers exposed in production allow vendors to already see their current points-based consumption? (I admit that I have not checked.) If not, when will this be shipped?

  • The Confluence page says: “We plan to expand our catalog in the future to provide more detail on object costs. Most requests are dominated by object costs”. When will this detail be delivered? Again, this is going live in less than two months and vendors need to better understand object cost for planning purposes.
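To illustrate the kind of smaller-window enforcement I mean in the bucket-size bullet above, here is a rough sketch. The class, refill rate, and burst cap are numbers I made up for illustration, not anything Atlassian has proposed:

```typescript
// Illustrative only: a bucket that refills continuously at the same average
// rate as 65,000 points/hour, but with a small burst cap so a spike never
// locks an app out for a full hour.
class SmallWindowBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(
    private readonly refillPerSecond = 65_000 / 3600, // ~18 points/sec average
    private readonly burstCapacity = 2_000            // brief spikes allowed
  ) {
    this.tokens = burstCapacity;
  }

  /** Returns true if the request may proceed, false if it should be throttled. */
  tryConsume(cost: number): boolean {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.lastRefill = now;
    this.tokens = Math.min(
      this.burstCapacity,
      this.tokens + elapsedSeconds * this.refillPerSecond
    );
    if (this.tokens >= cost) {
      this.tokens -= cost;
      return true;
    }
    return false; // worst case, the caller waits seconds or minutes, not a full hour
  }
}
```

With something like this, a burst that exceeds the budget is throttled for seconds or minutes rather than being locked out until the top of the hour.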

Thanks!
Scott

81 Likes

Adding to Scott’s remarks above:

For apps that facilitate planning across different teams, we typically have the case that:

  • many users are working in parallel at the same time during some very specific periods of the year (e.g. quarterly planning)
  • apps are consolidating data across 100s or even 1000s of work items at once.
    This generates high peaks, but still a relatively low average consumption over the year.

We’ve heavily optimized the app to cache data across routes in the app (within a session), apply jitter to parallel requests, and only request the data (fields) that is actually used, when it is used.
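For context, the jitter we apply to parallel requests is conceptually nothing more than the following (a simplified sketch; the helper names and the 500 ms window are illustrative):

```typescript
// Spread a batch of parallel requests over a small random window so they
// don't all hit the API in the same instant.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function fetchWithJitter<T>(
  tasks: Array<() => Promise<T>>,
  maxJitterMs = 500
): Promise<T[]> {
  return Promise.all(
    tasks.map(async (task) => {
      await sleep(Math.random() * maxJitterMs); // random delay per request
      return task();
    })
  );
}
```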

  • Will jqlSearch also be charged based on the number of objects returned? E.g. will a request for 2,000 work items with only a limited set of data fields (counted in bits) cost 2,000 points? During peak periods, we are then guaranteed to hit the limits quite fast.
  • Can we move to another bucket system that allows for low averages with high peaks instead of the hourly system? I can’t tell our enterprise users to stop planning together.
  • Is there anything we can do once limits are hit, before the hour is over? Retries and exponential backoff won’t work: a user will not wait 30 minutes for the screen to load.
7 Likes

For those who have access, I also just found the QRG here.

3 Likes

Thanks for initiating this conversation @scott.dudley

I believe that all Marketplace vendors share Atlassian’s goal of system stability. If counting points instead of requests helps with that, then I think we would all support it.

The REST API contract includes rate limiting, and many vendors (including ourselves) have written custom handling into our apps to prevent/handle 429s. To me, this rate limit change is a global REST API change that affects every single API in Jira and Confluence.
How does this not require a 6 month deprecation window like any other API change?

We see bursts of traffic to our apps due to large customers running bulk operations or transitions that trigger webhooks. We see hundreds of these requests per minute; each may need 6 to 10 callbacks to the REST API for data retrieval and write-back. This is on top of the interactive use where users are waiting in real time for the app to contact the Jira API and respond.

Our experience using the existing rate-limit-remaining headers shows that they usually start at 200 or 300 capacity remaining per tenant and count down with every REST API call, resetting after about 1 minute. To avoid 429s and still be (somewhat) responsive to interactive users, we use dynamic smoothing to slow down outbound REST calls once we have used up more than 50% of the capacity. This works because the buckets are small, so adding a 500 to 1500 ms pause to outbound calls lets the system keep running and still respond to customer clicks.
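In simplified form, the smoothing looks roughly like this (a sketch only; the header names are placeholders for whatever rate limit headers the response actually carries, and the thresholds match what I described above):

```typescript
// Once more than half of the (small, roughly per-minute) tenant bucket is
// spent, add a short pause before returning so that subsequent outbound calls
// from this worker are spaced out.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function throttledFetch(url: string, init?: RequestInit): Promise<Response> {
  const response = await fetch(url, init);

  // Placeholder header names; substitute whatever the platform returns.
  const limit = Number(response.headers.get("X-RateLimit-Limit") ?? "0");
  const remaining = Number(response.headers.get("X-RateLimit-Remaining") ?? "0");

  if (limit > 0 && remaining / limit < 0.5) {
    // A 500-1500 ms pause keeps interactive users served while slowing the burn rate.
    await sleep(500 + Math.random() * 1000);
  }
  return response;
}
```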

A 60-minute bucket is not feasible to smooth this way. We will absolutely have a dead application across hundreds of customers when one customer sends a few thousand bulk operations our way, we hit 429s hard, and every user is locked out for an hour.

But changing rate limit buckets from 1 minute per tenant (observed) to a new system where limits may be global or per tenant, with various tiers, is much more complex and increases the risk of outages for customers using Marketplace apps.
Please reconsider this granularity change; do not create the risk of 1 hour app lockouts.

Sincerely,
Chris Cairns
Digital Rose

20 Likes

The global pool model will be an unmitigated disaster…

  1. Install any free app trial
  2. Fire a measly ~32,500 requests at it (at most!)
  3. Every customer with that app installed gets an hour downtime

Um, what?!

The attack surface here is so wide that anyone could feasibly crash the entire Marketplace.

33 Likes

The already extremely tight timeline is made even more difficult by the lack of visibility into what kind of changes are required of us to stay within the new API limits. Not only do we not yet know whether our app falls under Tier 1 or Tier 2, but more crucially, we lack visibility into how many points our requests are actually consuming.

Please consider adding a points-used header to all API responses as soon as possible. This would allow vendors to understand and adjust their usage proactively, rather than guessing and hoping we stay within limits.
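To make the ask concrete, something as simple as the sketch below would already let us do our own accounting. The header name is purely hypothetical — no such header exists today:

```typescript
// Hypothetical: if every response carried a points-used header, vendors could
// aggregate consumption per endpoint and spot expensive call patterns early.
const pointsByEndpoint = new Map<string, number>();

async function trackedFetch(url: string, init?: RequestInit): Promise<Response> {
  const response = await fetch(url, init);
  // "X-Points-Consumed" is an illustrative name, not a real Atlassian header.
  const points = Number(response.headers.get("X-Points-Consumed") ?? "0");
  const endpoint = new URL(url).pathname;
  pointsByEndpoint.set(endpoint, (pointsByEndpoint.get(endpoint) ?? 0) + points);
  return response;
}
```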

18 Likes

I agree. This proposed global quota pool will break the multi-tenant model that’s at the core of Atlassian Cloud and the Forge platform. It’s very dangerous and IMO unacceptable in any modern cloud platform.

15 Likes

This is likely one of the most damaging approaches Atlassian could take to reduce the number of API requests made by app providers. It puts us in an untenable position with our customers. How are we expected to explain that our app is unavailable for an hour because a completely different customer triggered an export operation? That is not reasonable.

Our customers are paying for our apps, and we are contractually bound by SLAs. We now have 48 days to design and implement a global, cross-tenant mechanism to determine peak API usage and mitigate it, possibly by restricting functionality or imposing our own limits on app usage.

There are many better ways to do this, for example:

  1. Create the points system but do not block requests
  2. Inform partners that are breaching the limit and give a deadline
  3. Let partners appeal and explain why they need more capacity
  4. Evaluate and recalibrate the points system
  5. Start blocking apps

This is on top of an already substantial list of unilateral changes introduced by Atlassian that we are required to absorb, like Forge migration, additional rate limits, Java migrations for Data Center, deprecations, etc.

18 Likes

The best part of Forge – especially with Forge hosted compute – is tenant isolation, which can sometimes be annoying and somewhat limiting, but is certainly useful for avoiding the noisy-neighbour problem.
This change, at least for Tier 1 apps, introduces the worst possible kind of neighbours: as @nathanwaters mentioned, anyone can set up a trial and effectively DoS an app.

Marketplace apps are only useful if they can call Atlassian APIs (otherwise they’d be standalone products). As long as they’re not compromising platform stability or clearly abusing the limits, 429s shouldn’t even exist, IMHO. If we’re paying for compute on Forge, I don’t want to pay for sleep(1000) to honor a Retry-After header.

Hourly quotas are reasonable on paper, but there are bursty workloads that can cause problems: OP mentioned migrations, but even a simple bulk transition, or closing a large sprint… can trigger a large number of concurrent events.

We already have cases in which customers ask why they have to adapt their behaviour so that our apps can fit within the current rate limits, so I welcome changes that can improve the user experience; unfortunately, I don’t think this is the correct solution.

20 Likes

eazyBI, as a data-intensive app, would be significantly impacted even under the proposed Tier 2 limits, which would make eazyBI unusable for larger enterprise customers.

Larger enterprise customers have millions of Jira issues (work items) and JSM assets, and from time to time we need to perform full data scanning and synchronization in our background jobs. With the proposed new limits, if points are counted for each accessed Jira issue or JSM asset, then scanning all data for millions of issues and assets would consume millions of points. But the currently proposed maximum limit is just 150,000 + 30 × users points/hour.
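To put illustrative numbers on it (the tenant size is assumed, not a specific customer): a tenant with 10,000 users would get at most 150,000 + 30 × 10,000 = 450,000 points per hour under the proposed Tier 2 limits. Even if each scanned issue or asset cost only a single point, a full scan of 2 million issues would need more than four hours of the entire quota, leaving nothing for any other requests during that time.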

Currently, many enterprises are migrating from DC to Cloud, and Cloud is presented as enterprise-ready. In DC, there are no such rate limits, and it is possible to process millions of issues and assets within an hour. With these new limits, that will become impossible after migrating to Cloud, which will critically affect DC-to-Cloud migrations for these large enterprises.

We are one of the largest consumers of Jira and Confluence REST APIs and have previously discussed performance improvements with Atlassian. We have already implemented all the suggested best practices for many years. I have given two talks on the efficient use of Atlassian REST APIs at Atlas Camp conferences. But this time nobody has discussed these new planned rate limits with us. There was no RFC regarding this change, and all Marketplace partners are unpleasantly surprised by the planned change, which is expected to take effect in about a month (excluding the holiday season). We don’t even have a way to calculate the current point usage, as the REST API headers do not include it yet.

For all Marketplace partners, our current priority is the Connect to Forge migration, which requires significant effort. We currently do not have the capacity to work on totally unplanned rate limit changes. If these rate limit changes are implemented, then it will result in a damaged experience for our shared customers, as many apps will start to break.

Please postpone the implementation of these new rate limit changes. Please start with the RFC so that partners can provide their concerns and provide real usage data. And please make Atlassian Cloud enterprise-ready with the capability to process large data volumes, as promised to customers.

Kind regards,
Raimonds Simanovskis
CEO of eazyBI

44 Likes

Hi all,

Let me add my 200 cents here as well, as I’m shocked by the process behind such a change - announced without an RFC.

From our perspective, the timing of this change is challenging, especially given the Christmas period. Many teams are already at capacity and currently blocked or heavily occupied with Forge migrations, which leaves very limited bandwidth for additional architectural changes or deep impact analysis.

Lack of concrete data

At the moment, we lack concrete data from Atlassian (real usage metrics, representative simulations, or calculation examples) to perform a verified and reliable risk assessment. Without this, it is very difficult to assess whether mitigations are required, where the real thresholds are, and which customers will be affected.

Enterprise customer frustration

We also see a high potential for customer frustration, particularly among enterprise customers, where large-scale and coordinated usage patterns are normal and expected.

For at least one of our apps, we already know that it is API-intensive and will likely hit these limits for large customers. A concrete concern is long-running, non-interactive operations (e.g. exports or consolidations) that already take multiple minutes today. Under the new model, there is a real risk that such operations could take significantly longer - or become infeasible altogether - for very large tenants, even though they are not directly user-driven.

Global quota concerns - a new attack vector

In addition, the introduction of a global quota is concerning. A shared global bucket significantly increases the blast radius, as misuse, misconfiguration, or unexpected load from a single tenant could negatively impact unrelated customers. This is not something vendors can reliably control, isolate, or fully mitigate at the application level.

From our perspective, this also creates a new attack vector: a malicious or compromised tenant could intentionally exhaust the global quota and cause denial-of-service–like effects for other customers using the same app, despite normal behavior on their side.

Why no RFC?

Overall, we are currently very unsure how to rate the impact and risk of this change due to the lack of transparent data, examples, and clear guidance on expected behavior at scale.

Finally, the short notice and overall timing amplify these concerns. A change of this magnitude - affecting virtually all REST API usage - would normally benefit from being handled through a formal RFC-style process, allowing vendors to provide early feedback, validate assumptions, and assess impact based on real-world workloads before commitments are made.

Cheers
Oli

18 Likes

Another illustrative example:

  1. A customer decides to move more than 65,000 issues from one project to another (we have already seen cases of customers moving millions of issues in a short space of time).

  2. This action generates one update webhook per issue.

  3. If an app performs a single API call in response to each update webhook, BAM: the app is effectively taken offline for an hour.

Basically, a global app rate limit penalizes only successful apps with large-scale enterprise customers.

9 Likes

We understand the motivation for moving to a points-based model and appreciate the additional transparency compared to the previous capacity system.

That said, as a Marketplace partner operating a data-heavy analytics product, we’re concerned about both the practical impact of this change and the timeline, and we’re currently missing some critical information needed to properly assess and mitigate risk.

Tier assignment and visibility

The documentation references multiple quota tiers (including a global/shared pool and tenant-based quotas), but it is currently unclear:

  • which tier a given app is in today (are we Tier 1 or Tier 2),

  • when partners will be informed of their tier,

  • and whether there is a process or criteria for moving between tiers.

Given that Tier 1 uses a single global hourly points pool shared across all tenants, this distinction materially affects how apps must be architected and throttled. And since we don’t know whether we’re Tier 2, we have to assume Tier 1.

Can Atlassian clarify:

  • the exact date by which partners can determine which tier they are currently in?

  • if they’re in Tier 1 and believe that they’re going to be impacted, how can they escalate?

Front-end vs back-end traffic

Many Marketplace apps make REST calls both:

  • from back-end services (batch data loads, scheduled syncs), and

  • from front-end code in response to user interactions.

We would appreciate explicit confirmation on:

  • whether front-end initiated REST calls are counted toward the same rate-limit pools as back-end traffic, and

  • whether there is any differentiation in how these are accounted for under the new points system.

Impact on analytics and bulk-read use cases

Analytics apps often perform bursty, read-heavy operations (e.g. /rest/api/3/search over thousands of issues, followed by changelog reads). Even when individual tenants are relatively small, the aggregate effect across many tenants can be significant under a global points pool.
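For concreteness, the read pattern in question is roughly the following (a simplified sketch; the base URL, auth header, and page size are placeholders, and error handling is omitted):

```typescript
// Page through a JQL search, then fetch the changelog for each returned issue.
// Endpoint paths as named above; everything else here is illustrative.
const BASE_URL = "https://<your-site>.atlassian.net";
const AUTH_HEADER = { Authorization: "Basic <redacted>" };

async function loadIssuesWithChangelogs(jql: string): Promise<void> {
  let startAt = 0;
  const maxResults = 100;

  while (true) {
    const searchUrl =
      `${BASE_URL}/rest/api/3/search?jql=${encodeURIComponent(jql)}` +
      `&startAt=${startAt}&maxResults=${maxResults}&fields=key`;
    const page = await (await fetch(searchUrl, { headers: AUTH_HEADER })).json();

    for (const issue of page.issues) {
      // One extra request per issue: this is where point costs multiply.
      await fetch(`${BASE_URL}/rest/api/3/issue/${issue.key}/changelog`, {
        headers: AUTH_HEADER,
      });
    }

    startAt += page.issues.length;
    if (page.issues.length === 0 || startAt >= page.total) break;
  }
}
```

Even a modest tenant can generate thousands of such calls in a single analytics refresh, which is why the aggregate effect under a shared pool worries us.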

Without clear per-app or per-tenant visibility into expected point consumption, it is difficult to validate that existing, well-behaved apps will remain within limits.

Are there plans to provide:

  • tooling, dashboards, or reports to help partners understand real-world point usage? (We’d rather not build this ourselves.)

  • guidance or examples specifically tailored to analytics or bulk-read workloads, especially under heavy load or for large tenants?

Risk of cross-tenant impact from a single noisy tenant or user

Under the Tier 1 global quota model, all tenants served by an app draw from the same hourly points pool.

This creates a risk scenario where:

  • a single noisy tenant or user (intentional or accidental), or

  • a burst of legitimate activity from one customer,

can consume a disproportionate share of the global quota and cause rate limiting for unrelated tenants that are otherwise behaving normally.

From an app vendor perspective, this is particularly challenging:

  • the behavior may be customer-driven and not easily preventable without degrading the experience,

  • the impact is cross-tenant, making incidents harder to diagnose and explain,

  • and it increases the likelihood of customer-visible failures outside the originating tenant.

We would appreciate guidance on:

  • whether Atlassian considers this an acceptable risk under the new model,

  • whether additional safeguards or per-tenant isolation are planned,

  • or whether high-read Marketplace apps should expect to remain in a global shared pool long-term.

This represents a materially different failure mode compared to per-tenant limits and has implications for both customer trust and Marketplace app reliability.

Timeline concerns (February 2 enforcement)

Finally, we want to raise a serious concern about the announced enforcement timeline.

This change was announced in mid-December, when many Marketplace partners are in:

  • year-end change freezes,

  • reduced staffing,

  • and limited ability to deploy architectural changes.

For our team specifically, we are in code freeze until early January, leaving only a short window to design, implement, test, and safely roll out changes that affect core data-loading behavior.

We respectfully ask Atlassian to consider:

  • extending the enforcement timeline beyond February 2, 2026, or

  • providing an explicit grace period for existing Marketplace apps that are actively working toward compliance.

We’re committed to being good platform citizens and adapting to the new model, but we need sufficient time and clarity to do so without risking customer-visible outages.

We appreciate any additional detail Atlassian can share and are happy to engage further if more information from partners would be helpful.

9 Likes

We have the same questions and problems as Scott and would greatly appreciate the answers and clarifications from Atlassian.

We are especially concerned about the global bucket instead of a per-tenant one. If this goes live, simply having one user perform a migration from DC to Cloud or run a bulk update is enough to break the app for hundreds of instances.

We would also like to know how webhooks are counted toward the limit. Are they not counted at all, or only the API calls they trigger?

And 2 months is not enough to verify all our apps, change core features (which I am sure will break under the mentioned limits), and test them.

Kind regards,
Kamil Parzyjagla
Soldevelo

6 Likes

I think the more technical folks from Marketplace partners have given great feedback on the specifics of this change. As a Marketplace vendor CEO who is trying to grow a business, I see this as another clear example of Atlassian rushing to solve a problem without considering the impact on the ecosystem that we keep being told is so important. If the ecosystem really is that important, why aren’t our needs prioritized? Why aren’t the impacts of these serious changes considered before these pronouncements? We are so tired of having to be on alert for the next play we have to defend our businesses against.

This is not only mentally draining on owners of marketplace businesses, but it is costly as well. When does this become toxic to us? We just want Atlassian’s actions to match their words when it comes to the ecosystem. If we are indeed important to the future of Atlassian, start showing it.

What other solutions have been explored that could have less negative impact on our customer experience? Have you identified ways to punish bad actors while still allowing the rest of us to thrive (even if it costs us more)? Can you show us your work on the investigation of the impact to Marketplace vendors? If not, then you have no business announcing this.

One final thing: for the love of god, stop announcing business-endangering changes in December - you know that the holidays will eat into your “notice period”, as it were.

When can Atlassian add a value called “Don’t f@*k our partners”?

28 Likes

Chiming in with concerns about some use cases that we currently have and that have already been presented, but I want to emphasize them again. I also want to focus on a constructive path forward for Atlassian, us, and our common customers.

Bulk Operations

A couple of our apps process issues via event triggers (work item created / updated, comment added etc.). This kind of work is susceptible to customers running bulk operations, which does happen occasionally. Not often, but it would not be great if that were to make the app inoperable for all other customers for the remainder of the hour.

I think what could mitigate this is an allowance based on the number of triggers received by the app. For example, if the rate limit pool allowed a certain number of operations per trigger the app is sent, that would work great.

Or maybe the architecture of the triggers for these could be changed. I could see a “bulk work items changed” trigger being something app vendors could opt into for scenarios like these, to optimize their API usage.

Periodic large-scale operations (e.g. syncs)

Another one of our apps periodically syncs data from the API into its own database to facilitate faster lookups. Feel free to contact us about the exact use case, but the gist is that we do this because the APIs aren’t flexible enough w.r.t. filtering data and there are no triggers available for the data we’re watching. To keep the UX good enough, we have to periodically query all data and index it ourselves.

If we could work with Atlassian to get those cases accounted for, we wouldn’t have to do this. But frankly, our past experiences asking for API changes have not typically resulted in changes. Which is fair, don’t get me wrong. We know Atlassian developers have a lot on their plate and prioritize different interests, and we are not the biggest fish in the sea by any stretch of the imagination. I’m just trying to explain that cases like these, rare as they may be, cause us to use the API a lot.

Noisy neighbor issues

Yeah, that would be concerning :grimacing:

In my opinion, apps listed on the Marketplace or where distribution is enabled should not fall into Tier 1 exactly because of this.

Arguably, we could and probably should then implement rate limiting ourselves as a way to counter one tenant abusing the app. But a motivated attacker could still perform the attack across, say, 10 tenants. And since there is no good mechanism that I am aware of to communicate cross-tenant in the app (on purpose, I would guess!), I don’t see how we could prevent this effectively. But we would be able to mitigate it a bit. Which leads me to the next point:

Guidance for implementing rate limiting on our apps

Might I suggest a guide article or two about how to implement rate limiting of user requests on our side? Maybe even building it into the platform and making it configurable in the manifest for our functions.
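To illustrate what I suspect each of us would otherwise hand-roll separately, here is a rough sketch of a per-tenant hourly budget using Forge storage. The limit, key layout, and function name are all made up, and the read-modify-write is not atomic, so treat it as a soft guard rather than a hard guarantee:

```typescript
// Rough per-tenant budget: cap how many points a single tenant may spend per
// hour so one noisy tenant can't drain a shared pool on its own.
import { storage } from "@forge/api";

const TENANT_POINTS_PER_HOUR = 5_000; // illustrative number only

export async function tryConsumeTenantBudget(
  cloudId: string,
  cost: number
): Promise<boolean> {
  const hourWindow = Math.floor(Date.now() / 3_600_000);
  const key = `rl:${cloudId}:${hourWindow}`; // made-up key layout

  const used = ((await storage.get(key)) as number | undefined) ?? 0;
  if (used + cost > TENANT_POINTS_PER_HOUR) {
    return false; // ask this tenant to retry later instead of hurting everyone
  }
  await storage.set(key, used + cost); // not atomic: a soft guard, not a guarantee
  return true;
}
```

Having something like this built into the platform, configurable in the manifest, would save every vendor from reinventing it.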

Or maybe a function in the developer console that shows APIs that are particularly heavy for Atlassian and that the app might be misusing, like fetching all fields of a work item, for example. I could see there being several endpoints with pitfalls like this that make an API call particularly expensive, and that a developer might just have to be educated about.

Timing

Please understand that a lot of us are currently migrating our apps to Forge. These changes to the rate limiting make this already complex and time-costly migration more difficult. I would appreciate if enforcement of this could be moved 6-12 months into the future.

Enforcement

I would appreciate a warning when our apps start running into the new rate limits, with a sufficient time frame (1-2 months) for solving it before an app stops working for customers for an hour at a time.


All in all, I think if those changes were implemented, it would make the new rate limits much more workable for us.

Thanks for taking the time to read and consider this :slight_smile:

Cheers,
Tobi

11 Likes

No one wants to be Tier 1. It just doesn’t scale.

Where is the visibility for this? How are we supposed to know (with plenty of advance warning) if the app is about to breach these point-based quotas? Will any of this be visible in the developer console?

9 Likes

This sounds nasty, and communication from Atlassian’s side is very lacking. I’m not a Marketplace partner, but I do have a customer with a huge Jira: 500K issues and 100K users. Do I understand correctly that if we have a few different apps (owned by a single account), then rate limits are separate for each app?

As noted by others, it would be good if the new rate limit headers were present in the current responses, so we can also adapt our solutions before the changes actually come into effect.

9 Likes

One has to assume that CHANGE-2958 is also read by customers, not only partners. Is this a public warning that DC → Cloud migration should be postponed if one has concerns about Cloud being enterprise-ready? I can’t imagine that current DC enterprise admins would install Jira DC 12 if the release notes said “starting from this version, marketplace apps are capped at a processing speed of 139 issues per second (a 500,000-point hard cap divided by 3600 seconds in one hour), or 70 issues or less per second if they need anything more than 1 read operation per issue. You can’t mitigate it with node count or other hardware scaling.”

Edit: I see that I may be mistaken. Could it be that these T2 limits are per user? So the Standard pool in one hour would have 100k for the app user itself, and then 100k for each user that the app makes impersonated requests as? In that case it may be workable, but we will likely have to introduce some sort of pooling where we offer users a button “agree to batch your allocated requests to this job”, and then add a “share” link which they can send to their coworkers if we still run short, so they can all band together to overcome the limit. Not the cleanest design, but what can you do.

9 Likes

I agree with the concerns already raised. Also, when trying out the new headers on Jira, I noticed that they are either not working as documented or the documentation is confusing (a snippet showing how I tested follows the list):

  • I send 3 requests to one site (GET /rest/api/3/issue/[issueKey]) within 10 seconds. For all 3 requests, I get the headers X-Beta-Ratelimit-Limit: 400 and X-Beta-Ratelimit-Remaining: 399. The “Remaining” value isn’t decremented as expected. Also, it seems the limit is set to 400 on this site and resets after every request, whereas the limit communicated in the documentation should be at least 65,000 and should not reset twice within 10 seconds.
  • I send 3 similar requests to another site within 10 seconds. For all 3 requests, I get the headers X-Beta-Ratelimit-Limit: 200 and X-Beta-Ratelimit-Remaining: 199. Same concerns as above.
  • The X-Beta-RateLimit-Reset header is missing in all responses.
  • There is no header indicating how many points were used by the current request, which would enable us to count how many points our apps are currently using. This makes it much harder to debug current usage.
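For reference, this is roughly how I tested (site URL, credentials, and issue key are placeholders):

```typescript
// Send a few GET requests and print the beta rate limit headers described above.
const SITE = "https://<your-site>.atlassian.net";
const AUTH = "Basic <redacted>";

async function checkBetaHeaders(issueKey: string): Promise<void> {
  for (let i = 0; i < 3; i++) {
    const response = await fetch(`${SITE}/rest/api/3/issue/${issueKey}`, {
      headers: { Authorization: AUTH },
    });
    console.log(
      i + 1,
      response.headers.get("X-Beta-Ratelimit-Limit"),
      response.headers.get("X-Beta-Ratelimit-Remaining"),
      response.headers.get("X-Beta-RateLimit-Reset") // missing in my tests
    );
  }
}
```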

IMHO the introduction of rate limits needs to be postponed until:

  • Rate Limit headers work as expected.
  • Partners have an easy way to verify how many points they’re currently using (e.g. via another header) or - better - Atlassian shows partners, e.g. in the Developer Console, how many points they’re currently consuming, grouped by site.
  • Atlassian communicates to all partners which rate limiting tier their apps belong to or responds to Tier 2 requests.
  • Afterwards, partners are given at the very least 3 months (preferably the usual 6-month deprecation notice) to adjust their apps to match the new limits.

It was mentioned recently in the !MPACt Session that Atlassian is listening to the partner community’s feedback. As such, I’m hopeful that adjustments will be made, and that @AlanBraun and @ChrisHemphill1 will listen and find a path forward that is more suitable for us partners, without putting many Marketplace businesses at existential risk. Thank you!

12 Likes