2026 point-based rate limits

We have not received any information about whether our app will be moved to Tier 2. And developers of the most-used apps are not concerned about the “vast majority” of apps; we are concerned about our own apps. This lack of communication and data from Atlassian before announcing the change is the most frustrating part for us.

This is not communicated in the change description. We cannot rely on the hope that “the model will forgive us”.

I also do not like the attitude that apps are considered “violators” that disturb Atlassian with annoying API requests. We are not using APIs to trick Atlassian or to extract some unfair benefit. We use APIs in the most efficient way we can to serve our common customers.

The current X-RateLimit-Limit and X-RateLimit-Remaining headers that we receive do not reflect the new rate limits. They show some old, undocumented limits (they appear to be based on the number of requests; the limit is typically returned as 350). We are not able to test the behavior of the new rate limits.
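For anyone else probing the current behavior: until the new limits are actually reflected in headers, a defensive client can read whatever rate-limit headers are present and fall back to capped exponential backoff on 429 responses that carry no Retry-After. A minimal sketch; the header names are the ones observed above, and nothing here is an official Atlassian contract:

```typescript
type RateLimitInfo = { limit?: number; remaining?: number; retryAfterMs?: number };

// Parse whatever rate-limit headers are present on a response.
// Retry-After is the standard HTTP header (in seconds) often sent with 429s.
function parseRateLimitHeaders(headers: Record<string, string>): RateLimitInfo {
  const num = (name: string): number | undefined => {
    const v = headers[name.toLowerCase()];
    return v === undefined || v === "" ? undefined : Number(v);
  };
  const retryAfterSec = num("Retry-After");
  return {
    limit: num("X-RateLimit-Limit"),
    remaining: num("X-RateLimit-Remaining"),
    retryAfterMs: retryAfterSec === undefined ? undefined : retryAfterSec * 1000,
  };
}

// Fallback when a 429 carries no Retry-After: capped exponential backoff.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 60000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```

This at least keeps a client well-behaved while the documented semantics remain unclear.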

Enforcing the current proposed limits is not creating a scalable platform for large enterprise customers who have millions of Jira issues and JSM assets that apps need to process in a timely manner. The Atlassian platform should be improved to handle these enterprise-scale payloads before forcing enterprise customers to migrate to it.

As many of us have written here, please postpone the proposed February deadline for implementing this. As all partners are saying, this has not been discussed with us, we are still not able to test the impact, and there is not enough time to do so. We are currently focused on Forge migrations.

Kind regards,
Raimonds Simanovskis

13 Likes

You mention that most apps are expected not to be impacted, but from a vendor perspective it’s still unclear whether our app is one of those covered by that expectation. Expectation alone doesn’t give us enough confidence to plan.

Without clear, verifiable signals (beyond reactive shadow headers), it’s hard to know:

  • Are we clearly within safe limits?

  • Are we close to thresholds?

  • Or are we one of the apps that should already prepare mitigation or Tier-2 escalation?

Given the timeline toward enforcement and ongoing Forge migration work, having a way to know where we stand—not just after limits are exceeded—would be critical.

Is there a recommended way today for vendors to confidently determine whether they are at risk, or will Atlassian provide clearer guidance or visibility on this?
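In the absence of official signals, one stopgap is to classify your own measured hourly point usage against the Global Pool figure quoted elsewhere in this thread. A sketch, assuming the 65,000-points/hour pool mentioned by other posters and purely illustrative thresholds:

```typescript
type RiskLevel = "safe" | "near-threshold" | "at-risk";

// Classify hourly usage against an assumed Global Pool budget.
// The 65,000-points/hour default and both thresholds are illustrative only.
function classifyUsage(pointsUsedPerHour: number, poolPerHour = 65000): RiskLevel {
  const ratio = pointsUsedPerHour / poolPerHour;
  if (ratio < 0.5) return "safe"; // comfortably within the pool
  if (ratio < 0.9) return "near-threshold"; // time to plan mitigation
  return "at-risk"; // prepare Tier-2 escalation
}
```

This is exactly the kind of judgment call that would be better made by Atlassian-provided telemetry than by each vendor guessing at thresholds.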

Thanks.

6 Likes

We have some concerns regarding the approach and timeline for introducing such a significant change for vendors. Below we have collected a few of our key observations:

Timeline
The proposed timeline appears challenging. A 2-month window is sufficient for monitoring and learning, but not for implementing optimizations or architectural changes that may affect core app functionality. A 6-month formal notice period should be maintained for changes of this magnitude.

Global Tier limits
We would appreciate clarification on whether traffic from test or staging environments will be isolated from production. Without such separation, routine activities such as performance testing or CI pipelines (e2e tests) could unintentionally impact customer-facing workloads.

Additionally, all Connect apps must perform frequent permission checks, as JWT tokens do not include user permissions (we must comply with Atlassian Security requirements to authorize every request). Even with reasonable caching, these authorization-related calls can consume a significant portion of lower tiers before accounting for feature-level API usage.
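To illustrate the “reasonable caching” mentioned above: a small TTL cache in front of the permission check reduces authorization calls to one per user per TTL window. A sketch only; `check` stands in for whatever permissions endpoint the app actually calls, and the TTL must be short enough to satisfy the security requirements:

```typescript
type PermissionChecker = (accountId: string) => Promise<boolean>;

// Wrap a permission check with an in-memory TTL cache so repeated requests
// from the same user within the window cost no extra API calls.
function cachedPermissionChecker(
  check: PermissionChecker,
  ttlMs = 5 * 60 * 1000,
): PermissionChecker {
  const cache = new Map<string, { allowed: boolean; expiresAt: number }>();
  return async (accountId: string) => {
    const hit = cache.get(accountId);
    if (hit && hit.expiresAt > Date.now()) return hit.allowed;
    const allowed = await check(accountId); // one real API call per TTL window
    cache.set(accountId, { allowed, expiresAt: Date.now() + ttlMs });
    return allowed;
  };
}
```

Even with such a cache, the first request from every user in every window still costs a call, which is why these checks eat into the lower tiers.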

We would also like to ask whether Solution Partner Jira sites (provided via Partner Benefits) could be excluded from the new rate limits. We value being able to offer free app licenses to partners, and a shared rate-limit pool between partners and customers could negatively impact customer traffic, especially in Global Tier scenarios.

Caching and platform trade-offs
To adapt, many vendors will likely introduce additional caching backed by Forge Storage (KVS); we are already implementing such KVS-based caching in our Forge apps. However, in the absence of a true caching solution on Forge, this shifts pressure from API rate limits to storage read/write limits, potentially introducing new constraints rather than resolving the original issue. Forge currently does not offer a purpose-built solution for request optimization through caching (Forge Containers may change this in the future).

User scope clarification
Finally, clarification would be helpful on whether rate limits account only for licensed Jira users or also include portal/customer users (e.g. in JSM), as this distinction has a material impact on usage patterns and tier calculations.

4 Likes

That’s irrelevant. The other 5% is probably making > 50% of Marketplace revenue and are the apps you should be concerned about.

20 Likes

If these policy violations are significant, why are those apps not shut down? All API requests are authenticated, so it should be simple to pinpoint the violators.

6 Likes

@MaheshPopudesi if you do proceed with the global pool model I’d like to see you attempt to explain in this thread in technical detail how you think developers should handle this.

eg…

  1. Ok 65,000 points per hour globally…
  2. So I need to know how many tenants/installs there are via Marketplace API.
  3. Oh ok that’ll be a new fetch permission triggering a new major Forge version, hmm.
  4. I guess 65,000 points / total installs = per tenant limit? Not very flexible, hmm.
  5. Would be better to track the usage globally and set dynamic per-tenant limits.
  6. Oh but Forge storage is scoped per tenant so I’d need a remote database, hmm.
  7. And that will be yet another external permission and new major Forge version.
  8. And none of this will qualify for Runs on Atlassian, hmm.
  9. etc…
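
The naive arithmetic in steps 3–4 looks like this (the 65,000-point figure is the one quoted in this thread). The code itself demonstrates the inflexibility being pointed out: idle tenants strand capacity that busy tenants cannot use.

```typescript
// Sketch of step 4 above: a static per-tenant budget carved out of a shared
// global pool. Every tenant gets the same slice regardless of actual usage.
function staticPerTenantBudget(globalPoolPerHour: number, installCount: number): number {
  if (installCount <= 0) throw new Error("installCount must be positive");
  return Math.floor(globalPoolPerHour / installCount);
}
```

With 1,000 installs that is 65 points per tenant per hour, whether the tenant is a five-user site or a 50,000-user enterprise.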
7 Likes

I’m just confused why the marketplace vendors are having to talk the Atlassian architects out of a global pool.

With the regional data centers and the required guarantees of data residency and of customer data not leaving the tenant, I don’t understand why anyone wants a single “global” rate-limiting firewall that collects and intercepts API calls, aggregates usage points, and decides when to block (429) APIs across the entire world.

In practical terms, how does a hyperactive tenant in Australia that spends 65,000 points on queries against the Australian Atlassian region have any impact on tenants in the German, Canadian, or US regions?

I am not a network architect… maybe some people with more expertise could explain the logic here…

Sincerely
Chris

13 Likes

I think everything has already been said. But it’s worth considering the consequences of the proposal, as this will be the death of free apps in the Marketplace.

Global, cross-tenant rate limits turn noisy neighbors into app-wide outages: one tenant’s burst can burn the hourly budget and lock out everyone else. The more popular a free app is, the bigger the blast radius.

The economics don’t hold. You’re expected to meet SLAs and keep the UX snappy. But what it really means is that the extremely popular free apps will have to shut down or move to paid.

11 Likes

This is already in motion with the Forge Pricing coming into effect. The economics for free apps no longer work, unless you have a very strong freemium pipeline.

4 Likes

100% agree to all the points already raised. I just want to say that global limits are insane.

You completely ignored the partners. But maybe have some decency and go ask your 20 largest customers this question:

You are using app xyz, we are going to introduce API rate limits for that app which is going to be pooled with 10k other instances who are also using the same app. What do you think?

That’s all. I assure you that you will understand why this is not a good idea.

16 Likes

I second all of the concerns already voiced, and want to ask a specific question about how points are calculated:
Situation: Our Forge app provides a Forge-specific custom field, and the app uses the API PUT /rest/api/2/app/field/{fieldIdOrKey}/value to set the field’s value on work items. That API can set a value on multiple work items in one request (i.e. a batch operation).
Question: What is the cost of 1 request that updates, say, 10 work items?

  • Is it 2, because it is 1 point for the request + 1 point for all work items combined?
  • Is it 11, because 1 point for the request + 10 points (1 point * 10 work items)?

If it is 11 and not 2, then I question the benefit of using such a batch operation. Sure, we make fewer requests, but we are still “penalized” for touching multiple work items. A further incentive for API callers would be to charge fewer points for such batch operations.
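To make the difference concrete, here is the arithmetic for the two interpretations (the point costs are the ones hypothesized in the question, not confirmed Atlassian numbers):

```typescript
// Interpretation A: each request costs 2 points
// (1 for the request + 1 for all touched work items combined).
function costFlatPerRequest(items: number, batchSize: number): number {
  return Math.ceil(items / batchSize) * 2;
}

// Interpretation B: 1 point per request + 1 point per touched work item.
function costPerItem(items: number, batchSize: number): number {
  return Math.ceil(items / batchSize) + items;
}
```

For 100 work items in batches of 10, interpretation A costs 20 points while interpretation B costs 110, so under B batching only saves the per-request point and the incentive to batch mostly disappears.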

5 Likes

I agree with all that has been said about global rate limits, but I also want to add one more ask.
I appreciate your goal of keeping the platform stable and limiting bad actors. That’s fair for everyone, and I’m willing to invest to make sure we are a good citizen of the platform. I think this is a shared responsibility.

But if I invest, I’d like to see some technical solutions from your side as well. Which endpoints are truly painful for you? Identify the top 10 API endpoints that cause the majority of traffic and make them more performant, provide more flexibility, or offer a different interface that matches the actual usage. Many API calls happen not because of “a bad app” but sometimes because of a “bad API”.

As a concrete example, we work a lot with issue-created and issue-updated webhooks.
With Connect, we get the full issue and do not make any API call to process the webhook.
With Forge, we get a fixed list of fields with the product event. If just one field is missing, we have to fetch the issue (e.g. FRGE-1599, FRGE-1801).

If we switch to Forge product events, we will have to make 2 million additional API calls per day for feature parity, because of badly designed features.
I think there is a lot of homework Atlassian could do too to ensure a stable API and let us reduce API usage without sacrificing user experience.
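The parity gap can be sketched as a simple payload check: whenever the fixed event payload lacks a field the app needs, a follow-up issue fetch is unavoidable, and that fetch is what multiplies the API usage. Field names here are illustrative:

```typescript
// Decide whether a Forge product-event payload already contains everything
// the handler needs, or whether a follow-up issue fetch is required.
function needsIssueFetch(
  eventFields: Record<string, unknown>,
  requiredFields: string[],
): boolean {
  return requiredFields.some((field) => !(field in eventFields));
}
```

If even one required field (say, “resolution”) is missing from every event, every event turns into an extra API call, which is exactly how the 2-million-calls-per-day figure arises.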

16 Likes

@andreas.schmidt Oh, we are affected by the very same problem! In Connect, all relevant issue data was included in the webhook payload; in Forge it is not → we have to make multiple extra requests to Jira to fetch the missing data. (Specifically: all custom fields + the “Resolution” field, and the accountType of user fields, like Assignee.)

4 Likes

Hi @MaheshPopudesi

The hourly quota is intended as an accounting boundary, not as a guarantee that an app will be unavailable for a full hour if a spike occurs.

The model will forgive occasional hourly spikes for the app; however, when the app consistently crosses the Global Pool thresholds, please explore optimizing API usage patterns. Additionally, please apply for the Tier 2 quota. We recognize that apps can experience short, high-intensity bursts, so that short-lived spikes don’t translate into prolonged service disruptions for all other tenants.

A design that results in hour-long lockouts for apps would not be acceptable, and this is one of the key scenarios we’re pressure-testing during this period before enforcement.

I am glad we both agree that an hour-long lockout is not acceptable. If this new design truly cannot lock apps out for an hour and the hour is just an accounting boundary, then the original description was not quite right. Could Atlassian please describe exactly what the time window is (since it is apparently not one hour), explain which parameters influence any lockout period, and describe the worst-case expectations for recovery: the conditions required to get there, the ability or inability to use any APIs, the potential for slowed responses or other impacts, and the maximum period over which an outage could be expected?

All of these things need to be understood in order to run our business and plan around this change.

I mentioned this earlier, but it is also really important to understand really soon if front-end calls are included in the new rate limits or not.

We understand the question about why this doesn’t follow a longer changeover period. Over the past year, we’ve seen a significant increase in overall API usage, which drives important business outcomes. We have also observed an increase in policy violations that can affect the experience of all other apps in our ecosystem.

I recognize that all of the above is important to Atlassian. It is not clear to me, though, why anything about “driving business outcomes” justifies not following the standard six-month notice period. One-off violators can presumably be dealt with individually, so it feels like this is mostly a question of cost optimization. Or maybe the internal team was hoping to move on to a different project? Neither of these tends to give warm and fuzzy feelings to vendors, especially given the holiday timing. Although nothing obligates Atlassian to describe its internal thinking, if you are looking for warm and fuzzy, being more transparent about why this has to happen right now (and not in June) would go a long way.

Along with introducing the point-based rate limits, our goal is to support a smooth transition for all apps and minimize impact. Based on our traffic analysis for the past year, ~95% of the apps never cross boundaries of the Global Pool; of those that did, we proactively moved qualifying apps to Tier 2. We don’t expect the vast majority of apps to experience any impact from this change.

95% of apps (right now) do not cross the global pool. What about planning for growth? The more successful an app is, the more customers it has, and the more usage is generated.

Atlassian is deliberately creating what amounts to a trap that will cause apps to go out of service once they hit a magic threshold (and ironically, only for successful apps). What happens with this project in two years when traffic increases threefold, no one on this original rate limit team is still even in their current role at Atlassian, and mostly chatbots are handling the responses to “increase my rate tier” tickets?

To ask a more pointed question, why does tier 1 even exist? Can all apps not just be tier 2? Why do vendors need to be the ones to raise a ticket with a dozen detailed data fields to justify why they need to keep their apps working? If Atlassian insists on having a tier 1 and a tier 2, why can Atlassian not just transition apps for us automatically (or at least alert us) before they start breaking? Can this not be part of the MVP? Requiring vendors to take proactive actions based on response headers is literally impossible for Runs on Atlassian apps. It seems like Atlassian is saying that vendors now have to manually log into a dashboard and monitor some metric on a periodic basis just to ensure that their app is not inadvertently taken out of service?

Vendors are paying Atlassian millions in revenue share in order to work with its products. We all pay the same percentage. It does not seem objectively fair that some apps get X calls to APIs, whereas other apps get 5000X calls.

Looking at the questions in the “apply for tier 2” support ticket, it seems that Atlassian wants to curtail misuse and sloppy API calls.

To handle high API usage transparently and fairly, there is also the nuclear option: charge vendors for API usage.

You can do this fairly by reducing the existing Marketplace cut so that Atlassian ends up net-neutral in revenue (where, in aggregate, new_vendor_share_dollars + api_usage_dollars = old_vendor_share_dollars). Customers and apps that make more calls will be billed more, and those with lower demands will be billed less. You can then presumably dispense with most arbitrary API cutoffs and tiers, except whatever burst limits you need to keep your infrastructure healthy.

This would still need a few safety controls (allow vendors to specify their own per-tenant and global limits, with both warning and hard cutoff thresholds) in order to provide cost control and visibility, but it is at least transparent, it does not arbitrarily create app failures, and it puts control in the hands of app vendors. There will be winners and losers, but I would say that this is arguably fair for everyone, and it drives down Atlassian infrastructure costs (because vendors are directly incentivized to optimize their API calls).
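The net-neutral condition above pins down the reduced Marketplace cut directly. A sketch of the arithmetic, with all percentages and dollar figures purely illustrative:

```typescript
// Solve the net-neutral condition for the reduced Marketplace cut:
//   oldCutPct% of revenue = newCutPct% of revenue + aggregate API usage billing
// so the cut shrinks by exactly the share of revenue recovered via usage fees.
function reducedCutPct(
  oldCutPct: number,
  totalRevenueDollars: number,
  aggregateApiUsageDollars: number,
): number {
  return oldCutPct - (aggregateApiUsageDollars * 100) / totalRevenueDollars;
}
```

For example, with $1M in Marketplace revenue, a 15% cut, and $50K billed for API usage, the cut would drop to 10% to keep Atlassian whole in aggregate.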

17 Likes

To add to the topic of frontend calls and monitoring API metrics: it appears that frontend calls are not counted in the Developer Console’s Monitor API metrics but are included in the rate limits; see the CDAC thread Forge Monitor API metrics don't consider frontend requests for details. Please correct me if I’m wrong here.

8 Likes

Thanks for calling this out. We definitely don’t want successful or widely used free apps to be disadvantaged.

One important clarification: free apps can also be in Tier 2. We’ve already reviewed traffic across the ecosystem and proactively moved hundreds of free apps into Tier 2, where their usage profile benefits from per‑tenant capacity.

Thank you for raising these concerns and outlining the regulated use cases. To clarify, federated and other regulated environments (including those subject to US FDA and EU EMA requirements) are not part of this rollout.

We’ll approach any future changes for those environments separately and with additional safeguards and consultation.

Hi @MaheshPopudesi ,

This seems simply not true. By default, Atlassian has no information about whether a customer is subject to US FDA and/or EU EMA requirements. Those customers are mostly on the “normal” Atlassian cloud.

2 Likes

As Marc already stated, this is simply not true. Atlassian has many FDA-regulated customers in Commercial Cloud. Their editions range from Standard to Enterprise, but many of them use Commercial Cloud. The ones that aren’t simply haven’t migrated yet from Data Center.

In fact, Atlassian has enough FDA-regulated customers to support a small ecosystem of partners that support them. Frankly, the fact that the decision-makers behind this change are unaware of this is deeply concerning! :scream:

Edit: I should clarify that I am in no way insinuating that Atlassian has any regulatory obligations to these customers. It most certainly does not. Atlassian’s only “obligation” is to continue to provide a stable platform, as it has for 20+ years. But a platform where one customer’s uptime can be impacted by another customer is not a stable platform.

4 Likes

Thank you for the correction; you’re right, and I should clarify. My earlier comment was imprecise, and I apologize for that. I meant to indicate that IC and FedRAMP environments are not part of this rollout.

3 Likes