Can we talk about Forge platform quotas and limits?

Hej @AdamMoore / @SeanBourke / @danielwinterw,

There are a couple of Forge platform quotas and limits that are surprising in the context of modern software development. Since many vendors are currently spending a lot of brain cycles (and engineering time) working around these limitations, would any of you (or someone else from the Forge team) be able to shed more light on why these quotas are imposed?

Perhaps if we understand why we are spending resources on these limitations, it might be less frustrating.

Resource update quotas

There is currently a hard limit on how many custom UI resource files (HTML/JS/CSS/assets) can be uploaded per week (500 files for paid apps / 250 for free apps), as well as a limit on size (150MB / 75MB).

Modern front-end development best practice tells us to split bundles into chunks of <250KB. A complex web application can easily generate 100-250 files. Our Figma for Jira app generates 214 files. That means we can only update the application twice a week, because it is a paid app. If it were a free app, it would only be once :scream:

This seems antithetical to best practices in both software development lifecycle / delivery management as well as security, which tell us to deliver value to our customers as often as possible and provide security fixes as soon as possible. The only option would be to increase resource file sizes and limit code splitting, but this comes at a performance cost for our mutual customers.
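For context on the trade-off described above: the file count is largely a function of the bundler's chunking configuration. A minimal webpack sketch (illustrative only; the `maxSize` value is an assumption, not a Forge requirement):

```javascript
// webpack.config.js (excerpt) -- illustrative sketch.
// Raising maxSize reduces the number of emitted chunks, which helps stay
// under the weekly upload quota, but produces larger files and worse
// cache granularity for end users.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      // ~250KB per chunk follows the best practice mentioned above,
      // but multiplies the file count counted against the quota.
      maxSize: 250 * 1024,
    },
  },
};
```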

Can you please tell us why this resource limitation is imposed?

Invocation limits

There are several invocation limits that raise eyebrows, specifically with regard to runtimes, but these have been discussed previously and seem to mostly stem from limitations of the underlying AWS architecture. For now I would like to focus on the following limitations that do not seem to have an architectural reason:

Available memory per invocation
It seems that we cannot consume more than 1024MB of memory per invocation. This is very limiting: many of our functions require 4-8GB of memory because we store a lot of intermediate data during processing. AWS Lambda supports up to 10GB of memory allocation. What is the reason that memory allocation is not configurable?

Front-end invocation request payload size
There is currently a limitation on how much data one can send using invoke and invokeRemote. The payload is limited to 500KB. That is not a lot of data, especially when dealing with user-generated data. Multi-part messages, which may include metadata or binary data, can easily exceed this limit. For more complex apps, a 500KB limit quickly becomes an issue, and it comes without an easy workaround. Having to split data into 500KB chunks and stitch them together in a stateless backend can be a challenging engineering feat.
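To illustrate why the chunk-and-stitch workaround is awkward, here is a rough sketch in plain JavaScript (the helper names, the `requestId` envelope, and the byte budget are all hypothetical; the Forge `invoke` plumbing and the KVS persistence a stateless backend would need between invocations are deliberately omitted):

```javascript
// Split a JSON-serialisable payload into pieces that fit under the
// ~500KB invocation payload cap, leaving headroom for envelope metadata.
// Note: string .length counts UTF-16 code units, not bytes, so this
// budget is approximate for non-ASCII data.
const CHUNK_BYTES = 450 * 1024;

function toChunks(payload, requestId) {
  const serialized = JSON.stringify(payload);
  const total = Math.ceil(serialized.length / CHUNK_BYTES);
  const chunks = [];
  for (let i = 0; i < serialized.length; i += CHUNK_BYTES) {
    chunks.push({
      requestId,              // correlates the chunks of one logical request
      index: chunks.length,   // ordering for reassembly
      total,
      data: serialized.slice(i, i + CHUNK_BYTES),
    });
  }
  return chunks;
}

// Backend side: reassemble once every chunk for a requestId has arrived.
// In a stateless Forge function, partial chunks would have to be persisted
// (e.g. in KVS) between invocations -- that is the hard part.
function reassemble(chunks) {
  const ordered = [...chunks].sort((a, b) => a.index - b.index);
  return JSON.parse(ordered.map((c) => c.data).join(''));
}
```

Each chunk would then go out as a separate invoke call; the sketch only shows the splitting and reassembly, not the state management that makes this painful in practice.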

Now obviously, this also ties in to the available memory (1GB), KVS storage size (240 KiB) and Forge SQL request (1MB) limits, meaning that even if the payload limit is increased, it remains difficult to process a larger payload.

So the question really is: why does Forge have such limited support for data processing and storage?

Looking forward to hearing your thoughts!

Cheers,

Remie


Hey Remie, thanks for raising this.

This is timely with Forge pricing approaching and more large, complex apps moving to Forge.

We plan to refresh the limits and quotas page, which has remained mostly unchanged since Forge went GA in 2021. With Forge pricing starting on Jan 1, we have more flexibility to adjust many quotas and limits, though some remain constrained by downstream limits and will take a bit more effort to adjust.

For example, the function-as-a-service quotas will likely be removed once functions become a paid feature. We will announce further changes before Jan 1.

Let’s review the specific limits you mentioned. I’m happy to discuss others if people join the thread.

Custom UI resource upload quotas

We are getting an increasing amount of feedback that these quotas are causing headaches, especially for those migrating apps from Connect. It's something we're actively looking into, and we'll report back on what we can do on this one.

Invocation limits and payload sizes

Available memory per invocation

We started preparing for this earlier this year by making function memory configurable and raising the max to 1024MB (from 512MB).

After Forge pricing starts, we will increase the maximum again, giving developers more control over their function’s resources. We have added this to the public roadmap: https://ecosystem.atlassian.net/browse/ROADMAP-184 (thanks Joe!)
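For reference, the configurable memory mentioned above is set in the app manifest. A minimal excerpt (field names as I understand the current Forge manifest; confirm against the manifest reference before relying on them):

```yaml
# manifest.yml (excerpt) -- illustrative; 1024 is the current maximum
app:
  runtime:
    name: nodejs20.x
    memoryMB: 1024
```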

Front-end invocation request payload size

This is more complex due to limits in Atlassian’s architecture that are hard to change.

To ease the pressure on some use cases, we are looking at providing a direct connection from a Forge UI module to a remote: https://ecosystem.atlassian.net/browse/FRGE-1856. This would bypass the 500KB limit and allow binary payloads.

For other architectures we’re still investigating how we might solve this.

For storage, we are developing an object store to better support large-file use cases. Other limits on SQL and KVS are also being reviewed.

Thanks again @Remie for kicking off the conversation. If anyone else has questions/feedback/comments please drop them below.


Thanks for the details and explanation!

Since you've mentioned Forge pricing: will it be possible to set quotas and limits in-app, e.g. to limit (costly) actions for small customers (free, < 10 users)?

Hi @AdamMoore

With customers having variable usage/traffic volumes, Forge 'hard limits' can be problematic: they can abruptly cut off an app's service for higher-volume (i.e. larger) customers, with no way out for either customers or vendors. Does Atlassian plan to address this, perhaps by letting customers purchase additional capacity to get past such limits?

Relatedly, vendors currently have to foot the bill for any and all Forge usage across customers of any size, which doesn't seem fair. Yes, per-user pricing exists, but it still doesn't help: one 100-user instance can cause load X, while another could cause load 100 * X. Usage beyond a vendor-set 'normal' limit should be funded by the customer that needs that load rather than by the vendor, and such purchases would need Marketplace support.
