Project Summary

- Publish: April 14, 2026
- Discuss: April 28, 2026
- Resolve: May 25, 2026
Problem
Context
- Forge apps are subject to both Forge platform and Atlassian app limits, which are in place to ensure fair usage and reliability.
- REST API calls from Forge apps to Atlassian apps (such as Jira and Confluence) are impacted by Atlassian app limits (e.g. points-based rate limiting). This RFC does not relate to these limits.
- The Forge platform applies limits on the rates of invocations, static resources, and storage consumption. The limits are documented here.
- Different rate-limiting models are applied to different types of invocations to cater to their different use cases. This RFC seeks feedback on user-led invocations only. The table below outlines the different types of invocations and their rate limits:
| Type of invocation | How is rate limiting handled? | In scope? |
|---|---|---|
| Interactive invocations (i.e. resulting from user interaction with a specific Forge module and/or app), including: usage of the app via the UI, web triggers, Jira workflow modules, Confluence adfExport, and JQL functions | Limited on a fixed, per-minute window and not automatically retried. The current limits are documented here and are as follows: 30k RPM per environment, 5k RPM per installation, 1.2k RPM per user | Yes |
| Scheduled triggers | Separate limits apply to the number of scheduled triggers per app, so they are not restricted by invocation rate limits | No |
| Atlassian app events and async events (via the async events API) | Rate-limiting events are handled gracefully by the Forge platform and invocations are eventually consistent | No |
Areas of concern
As the Forge platform continues to grow, rate limits and how they are handled will need to adapt to maintain reliability and support larger-scale apps. There are two key issues that this RFC addresses:
- The existing model of global, app-level rate limits doesn’t scale for apps with large numbers of users and installations.
- Retry logic for user-led invocations is difficult to implement, as waiting through the minute-long rate-limiting window can be quite disruptive.
Proposed Solution
1. Flatten the rate-limiting model by removing the app-level and per-user limits. Apps will only be rate-limited on a per-installation basis.
   - This ensures that limits scale fairly for apps with large numbers of users and installations.
   - Per-user invocation limits will no longer be enforced at the platform level and can instead be implemented within each Forge app as needed. Point (2) will support this change.
2. Rate-limited invocations will return information about the rate-limiting event to support in-app retry logic.
   - Rate-limiting errors will contain the field rateLimitProperties, which contains rateLimitValue, rateLimitRemaining, and rateLimitReset. These can be used to implement retry behaviour.
3. Migrate rate limits from a per-minute to a per-second, sliding window.
   - This will support platform reliability with the removal of global limits.
   - This will also provide an opportunity for graceful retry methods.
The following table describes the proposed changes to user-initiated invocation limits (via the UI or web triggers). Our analysis of current invocation traffic indicates that a very small number of installations would experience rate limiting under these proposed limits. We will contact you directly if your app will be impacted to discuss next steps, but you are also welcome to reach out to us if you have questions or concerns about your app.
| Level | Current | Proposed | Change |
|---|---|---|---|
| Per user | 1.2k RPM | Removed | |
| Per app installation | 5k RPM | 300 RPS or 7k RPM, whichever is hit first (the per-second limit protects against spiky traffic) | 1.4x increase over a one-minute window |
| Per environment | 30k RPM | Removed | Removal of the global limit provides more flexibility, especially for apps with large numbers of installs |
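To illustrate the difference between the current fixed window and the proposed sliding window: a sliding window counts requests in the trailing interval ending "now", so bursts cannot straddle a window boundary. The sketch below is a purely illustrative client-side model (the class name and parameters are our own); the real limits are enforced server-side by the Forge platform.

```javascript
// Minimal sliding-window counter: allows at most `limit` calls within any
// trailing `windowMs` interval. Illustrative only, not a Forge API.
class SlidingWindowLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true if a call at time `now` is within the limit, and records it.
  allow(now = Date.now()) {
    // Drop timestamps that have aged out of the trailing window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) {
      return false;
    }
    this.timestamps.push(now);
    return true;
  }
}
```

A limiter like this could also be used inside an app to reintroduce per-user throttling once the platform-level per-user limit is removed.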
Handling invocation rate limits
In preparation for these rate limits, we recommend implementing appropriate rate-limiting retry mechanisms in your Forge apps. Some examples of this are demonstrated below:
Retrying user-initiated UI invocations (i.e. invoke via resolver)
A back-off with retry method is suitable for user-initiated invocations as the invocation response is returned directly to the invoking user.
```javascript
import { invoke } from '@forge/bridge';

// `setData` is assumed to be a React state setter from the surrounding component.
async function fetchTextWithRetry() {
  const maxRetries = 3;
  let retries = 0;
  while (retries < maxRetries) {
    try {
      const data = await invoke('getText', { example: 'my-invoke-variable' });
      setData(data);
      return;
    } catch (error) {
      // Do not retry if the error is not related to rate limiting
      if (error.status !== 429) {
        throw error;
      }
      // Give up once the invocation has been retried the maximum number of times
      retries++;
      if (retries === maxRetries) {
        throw error;
      }
      // Wait until the rate-limit window resets before retrying.
      // With a per-second window, this value will always be less than 1 second.
      const delayInMilliseconds = Math.floor(error.rateLimitProperties.rateLimitReset - Date.now());
      if (delayInMilliseconds > 0) {
        await new Promise(resolve => setTimeout(resolve, delayInMilliseconds));
      }
    }
  }
}
```
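Where the reset time is unavailable or unreliable, a capped exponential backoff with full jitter is a common fallback strategy. The helper below is a sketch, not part of the Forge API; `baseMs` and `capMs` are illustrative values you would tune for your app.

```javascript
// Capped exponential backoff with full jitter: the delay is a random value
// between 0 and min(capMs, baseMs * 2^attempt). Sketch only; the parameter
// defaults are illustrative, not Forge-defined constants.
function backoffDelayMs(attempt, baseMs = 100, capMs = 2000) {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * ceiling);
}
```

Jitter spreads retries from many clients across time, which avoids the "thundering herd" of synchronized retries hitting the limit again at the same instant.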
Retrying web trigger invocations
```javascript
async function callWebtriggerWithRetry() {
  const maxRetries = 3;
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch('http://webtrigger.example.com', {
      method: 'POST',
    });
    // Retry if the rate limit is exceeded
    if (response.status === 429) {
      if (attempt === maxRetries - 1) {
        throw new Error(`Failed to call webtrigger after ${maxRetries} attempts`);
      }
      // Wait until the rate limit resets before retrying.
      // With a per-second window, this value will always be less than 1 second.
      // Header values are strings, so convert before doing arithmetic.
      const resetAt = Number(response.headers.get('rateLimitReset'));
      const delayInMilliseconds = Math.floor(resetAt - Date.now());
      if (delayInMilliseconds > 0) {
        await new Promise(resolve => setTimeout(resolve, delayInMilliseconds));
      }
      continue;
    }
    // Throw non-rate-limit errors
    if (response.status !== 200) {
      throw new Error(`Webtrigger returned status: ${response.status}`);
    }
    // Return the successful response
    return response.json();
  }
}
```
Invocations not initiated by users (i.e. product events and async events)
These are retried under the hood and will eventually succeed. If a rate-limiting event does happen, developers should consider making requests in batches to distribute the load.
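One simple way to batch is to split the workload into fixed-size chunks and submit each chunk as a single event payload. The chunking helper below is plain JavaScript; the chunk size is an app-specific choice, and how each chunk is then submitted (e.g. via the async events API's queue) depends on your setup.

```javascript
// Split `items` into consecutive chunks of at most `size` elements.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}
```

Processing one chunk per event, instead of one event per item, reduces the number of invocations and distributes load more evenly.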
Asks
While we would appreciate any reactions you have to this RFC (even if it’s simply giving it a supportive “Agree, no serious flaws”), we’re especially interested in learning more about:
- Whether installation-level, per-second limits provide enough flexibility for your apps, especially those with large numbers of users
  - If not, what processes and/or changes can we put in place to best support your use case?
- How else we can provide support in the period leading up to this change
- Other limits that may be impacting your apps which may also require review