How best to monitor rate limiting from within a Runs on Atlassian (RoA) app?

Atlassian is implementing a new point-based rate-limiting system on February 2nd. I have a Tier 1 RoA app that must be designed to handle the Global Pool.

I understand the X-RateLimit-* headers will (eventually?!?) be updated to reflect global consumption of the app’s hourly quota. That will be helpful for 1) detecting an approaching global threshold, 2) enforcing progressive throttling from within each app instance, and 3) displaying performance warnings to end users.

(Edit 2026-01-07: This is assuming the use of @forge/api instead of @forge/bridge.)
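As a sketch of how the header-based detection could work, something like the function below could run after each `@forge/api` call. To be clear, the header names and semantics here are assumptions on my part; Atlassian has not finalized what the updated headers will report for the Global Pool.

```javascript
// Sketch only: parse hypothetical X-RateLimit-* headers into a consumption
// ratio. The exact header names and semantics are assumptions; Atlassian
// has not finalized what the updated headers will report globally.
function parseRateLimitHeaders(headers) {
  const limit = Number(headers.get('x-ratelimit-limit'));
  const remaining = Number(headers.get('x-ratelimit-remaining'));
  if (!Number.isFinite(limit) || !Number.isFinite(remaining) || limit <= 0) {
    return null; // headers absent or malformed; caller should not throttle
  }
  return {
    limit,
    remaining,
    consumed: (limit - remaining) / limit, // 0.0 = idle, 1.0 = quota exhausted
  };
}
```

A resolver would call this with `response.headers` after each `requestJira`/`requestConfluence` call and feed the `consumed` ratio into its throttling logic.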

However, the X-RateLimit-* headers seem insufficient for monitoring rate limiting at the app level, particularly from within an RoA app. Specifically, I need the ability to:

  1. Monitor the effectiveness of the app’s throttling implementation.
  2. Request an upgrade to Tier 2 as my app’s usage increases in accordance with this advice from Atlassian:

So, what is the best way to monitor the app’s global API consumption? Any advice on how to implement a reliable monitoring solution will be appreciated.

Here are some ideas I have considered:


:light_bulb: Use the Developer Console.

:no_entry: Unsupported:

  • API points are not (yet) a supported metric.
  • Metrics are not global.
  • Alerts can only be configured from Production sites, but the Global Pool applies across all environments. (Including test environments where automated tests might exceed the global threshold.)

:light_bulb: Implement “homegrown” alerts.

:no_entry: Not allowed. This is prohibited by the Runs on Atlassian data egress policy.


:light_bulb: Leverage an analytics API.

:warning: Unreliable. Using the X-RateLimit-* headers as a metric that is then transmitted to a permitted analytics API (e.g., Google) could work. However, customers can disable analytics egress, which means this solution could not be trusted.


Again, any advice on how to implement a reliable monitoring solution will be appreciated.

Note: Atlassian has insinuated that the Developer Console might be updated to meet the necessary use cases:

However, there is no mention of this in any of the official change announcements or documentation, and the above comment is (understandably) light on details.

5 Likes

Hi @AaronMorris1, there is an Export app metrics API that supports exporting API and invocation metrics today - https://developer.atlassian.com/platform/forge/export-app-metrics/#export-app-metrics.

This shouldn’t affect the RoA status.

1 Like

Hi @ChandanaMeka - Does that Export App Metrics API include API point usage?

According to the documentation, the supported metrics are limited to: FORGE_API_REQUEST_COUNT, FORGE_API_REQUEST_LATENCY, FORGE_BACKEND_INVOCATION_COUNT, FORGE_BACKEND_INVOCATION_ERRORS, and FORGE_BACKEND_INVOCATION_LATENCY.

I asked them in that thread too and got no response. If you look up all the internal decision makers in that thread, it doesn’t appear that any of them write code.

Practically, you’d need both Marketplace API calls and a remote database to manage rate limiting for a global pool. And of course both would invalidate RoA status :upside_down_face:

3 Likes

Hi @ChandanaMeka – I know from other conversations that you work closely with the Developer Console. Do you have any insight into the timing and nature of the updates that will be made to support the API point system? If not, is there someone I can ask?

All controversy aside, I’m just looking for help to design a technically reliable and responsible solution.

Hi @ChandanaMeka ,

Thank you for this information.

However, we need an automatic way for our RoA apps to react to global rate limits.

If we go to the developer console, download the metrics and determine (retroactively) that our app has a problem, we might need to change the behaviour of our app.

This requires a new deployment for RoA, and an upgrade for all customer instances.

This feedback loop is on the order of days or weeks, whereas our apps need to react within seconds to increased load caused by other tenants.

I hope there is another solution.

I think the X-RateLimit-* headers will give me the ability to react “in the moment” to increased load. My current plan (not yet implemented) is to

  1. Monitor the headers on every API call.
  2. Then, enforce progressively more aggressive throttles as different arbitrary thresholds are reached.
  3. Display a warning to users when a perceptibly aggressive throttle is in effect.
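Steps 2 and 3 could be sketched as a simple tier table, mapping global consumption (derived from the headers) to a delay before the next call. The tier boundaries and delays below are arbitrary placeholders, not Atlassian guidance:

```javascript
// Sketch: map global quota consumption (0.0–1.0) to a progressively longer
// delay before the next API call. Tier boundaries and delays are arbitrary
// placeholders, not Atlassian recommendations.
const THROTTLE_TIERS = [
  { above: 0.95, delayMs: 5000, warnUser: true },  // near exhaustion
  { above: 0.85, delayMs: 1000, warnUser: true },  // perceptible throttle
  { above: 0.70, delayMs: 250,  warnUser: false }, // gentle back-off
];

function throttleFor(consumed) {
  // Tiers are ordered from most to least aggressive; first match wins.
  return THROTTLE_TIERS.find((t) => consumed >= t.above)
    ?? { above: 0, delayMs: 0, warnUser: false };
}
```

A resolver would then `await new Promise((r) => setTimeout(r, throttleFor(consumed).delayMs))` before its next request, and surface a UI warning whenever `warnUser` is set.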

Edit (2026-01-07): I failed to clarify I’m using @forge/api rather than @forge/bridge. This strategy won’t work for @forge/bridge, as was fairly called out by Scott Dudley in a comment below. And of course, the Developer Console and metrics APIs are entirely inappropriate for in-app handling of rate limiting.


This should, in theory, handle the cross-tenant impact scenarios. For example, if Installation A triggers an increased load, then Installation B will detect it on its first API call.

(To be clear, the fact that cross-tenant impact is possible is :poop: in my opinion. But that is what it is for now. :frowning:)

What I’m looking for is the ability to monitor the app enough that I can:

  1. Ensure the “in-the-moment” load handling is working properly.
  2. Know when I need to request Tier 2 status.

With sufficient upgrades, the Developer Console could provide that functionality. But it’s entirely unclear what Atlassian is planning in this regard.

Likewise, the Metrics API suggested by @ChandanaMeka could work through automated periodic polling, but according to the documentation it would need to be upgraded first. And again, there is no mention of such an upgrade in any of the official change announcements.

To clarify what I mean by “sufficient upgrades”, I’m looking for the abilities to:

  • Monitor API Points as a metric, rather than just API Requests.
  • Report on API Points globally, in addition to per environment and per site.
  • Configure automated alerts for the global API points metric based on configurable threshold values.
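As a rough interim sketch, the periodic-polling idea could reduce the exported metrics to a threshold check like the one below. The sample shape, the polling mechanism (e.g., a scheduled trigger), and the use of request counts as a stand-in for API points are all assumptions on my part, not part of the documented API:

```javascript
// Sketch: decide whether exported metrics have crossed an alert threshold.
// Assumes the Export app metrics API is polled elsewhere on a schedule, and
// that its FORGE_API_REQUEST_COUNT output has been reduced to the simplified
// [{ hourStart, count }] shape below. Request count is only a rough proxy,
// since API points are not an exported metric today.
function crossedThreshold(samples, threshold) {
  if (samples.length === 0) return false;
  // Compare only the most recent hourly sample against the threshold.
  const latest = samples.reduce((a, b) => (a.hourStart > b.hourStart ? a : b));
  return latest.count >= threshold;
}
```

The obvious gap remains: without a points metric, a check like this can only approximate global consumption, which is exactly why the upgrades listed above matter.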
2 Likes

Hi @AaronMorris1, got it, thanks for sharing the feedback. I mainly look into the console aspects, but @MaheshPopudesi has been looking into this initiative; I will pass the feedback to him to plan the console changes accordingly.

1 Like

@ChandanaMeka – Thank you!

@MaheshPopudesi – I am trying to follow your suggestions about monitoring a Tier 1 app’s API Point usage in order to 1) monitor application stability, and 2) proactively request a Tier 2 upgrade at the appropriate time.

As you can see in my original post, above, that seems technically infeasible from within a Runs on Atlassian app.

Can you please confirm:

  1. What is the expected technical implementation for monitoring API points at the application-level from within RoA? Will it only be through the Developer Console (as you suggested) or is another solution planned, too?
  2. If it is only through the Developer Console, what specifically is the planned functionality and when will it be available?
    1. Note: I’m concerned that no current metrics can be reported, monitored, or alerted on in a global manner. So it seems like it will be quite a significant change to implement.
  3. Will there be a significant gap (> 90 days) between enforcement (February 2nd) and the availability of monitoring? If so, what is the recommended interim solution?

Without wanting to be the bearer of bad news, I wanted to point out that this plan is not really feasible either: Atlassian has stated that front end requests are included in the new points calculation, but Atlassian also strips the rate-limit headers on the front end, making them inaccessible to your app (FRGE-1923).

It seems like the best plan of action for all RoA apps is simply to apply for tier 2 immediately, regardless of points usage.

5 Likes

Yeah, there is a lot about this entire situation that is bad. Regardless, I’ve already isolated all API calls into the Forge resolvers for this and other reasons. So, API calls are made through @forge/api instead of @forge/bridge. I’m just trying to deal with the reality of the situation while we all pray that Atlassian reverses course.

(Edit: I’ve updated the original post to clarify this distinction. Thank you for calling it out.)

I would do this if I thought it wouldn’t just be a waste of my own time. The new AI Support Agents that will deny me don’t get paid by the hour. :joy:

4 Likes

What will be the user experience when an app is rate-limit blocked? Will Jira detect that the app is blocked and automatically show a warning message inside the app’s panel? Since Atlassian has this information, instead of trying to render the app’s UI, it could just render a standard “This app is blocked due to rate limiting” warning. This would provide a unified experience for all apps and users.

1 Like

While I understand the intended convenience, I’m not sure I understand how Atlassian could effectively do this in practice. In terms of user experience, there isn’t much difference between an HTTP 429 error and an HTTP 401, an HTTP 500 error, or any other error code. Meaning, the API knows about the error condition, but it doesn’t know anything about the context of the API request and the best resulting user experience of the error condition.

That is a good example of what I’m trying to describe. Sometimes it might be appropriate to replace the whole app UI with an error message. (For example, you can’t retrieve enough data to even load the app.) Other times, you might not even need to alert the user. (For example, you’re auto-saving data while the user is working, but it’s okay to leave it cached in the UI until the 429 clears.)

There’s no way for the Forge platform to know the importance of the API request or what the best experience for the user is, right?
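To illustrate that point in code: a per-call-site policy could let each request declare how a 429 should surface. None of these names come from Forge; they only sketch the idea that the app, not the platform, knows the right experience for each request.

```javascript
// Illustrative only: decide how a rate-limited response should surface,
// based on a policy chosen by the call site. The policy names are made up
// for this sketch; Forge has no such API.
function handleRateLimited(res, policy) {
  if (res.status !== 429) return { rateLimited: false, showError: false };
  switch (policy) {
    case 'block-ui': // e.g. initial data load: replace the app UI with an error
      return { rateLimited: true, showError: true };
    case 'defer-silently': // e.g. auto-save: keep data cached, retry once the 429 clears
      return { rateLimited: true, showError: false };
    default:
      return { rateLimited: true, showError: true };
  }
}
```

Only the call site knows which policy applies, which is why a platform-level blanket error page seems hard to get right.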

@AaronMorris1 I can’t think of any scenario in which our apps will be able to show something useful when rate blocked. Even a request like getting user preferences should fail immediately if it is rate blocked; otherwise the app will display the wrong units on screen and generate additional support requests. Usually 500/401 errors occur in a specific place in the app, but this will make the app unusable at every extension point. This could be an opt-in feature declared in the app’s manifest.
There will be a lot of support requests due to this rate limit blocking. The issue is who will need to deal with them.

2 Likes