Hej @AdamMoore / @SeanBourke / @danielwinterw,
There are a couple of Forge platform quotas and limits that are rather surprising in the context of modern software development. As many vendors are currently spending a lot of brain cycles (and engineering time) working around these limitations, would any of you (or someone else from the Forge team) be able to shed more light on why these quotas are imposed?
Perhaps if we understand why these limitations exist, spending resources on working around them will be less frustrating.
Resource update quotas
There is currently a hard limit on how many Custom UI resource files (HTML/JS/CSS/assets) can be uploaded per week (500 files for paid apps / 250 for free apps), including a limit on total size (150 MB / 75 MB).
Modern front-end development best practices tell us to split bundles into chunks of less than 250 KB. A complex web application can easily generate 100-250 files; our Figma for Jira app generates 214. That means we can only update the application twice a week, because it is a paid app. If it were a free app, it would be only once.
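For context, this is not an exotic setup; it is the default guidance of most bundlers. A minimal sketch of the kind of configuration that produces these file counts, assuming webpack (the entry point and values are illustrative, not our actual config):

```ts
// webpack.config.ts — illustrative only
import type { Configuration } from 'webpack';

const config: Configuration = {
  entry: './src/index.tsx', // placeholder entry point
  optimization: {
    splitChunks: {
      chunks: 'all',
      // Keep emitted chunks below ~250 KB, in line with webpack's own
      // performance hints. A complex app then easily emits 100+ resource
      // files per build, each counting toward the weekly upload quota.
      maxSize: 250 * 1024,
    },
  },
};

export default config;
```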
This weekly quota seems antithetical to best practices in both the software development lifecycle / delivery management and security, which tell us to deliver value to our customers as often as possible and to ship security fixes as soon as possible. The only alternative would be to increase resource file sizes by limiting code splitting, but that comes at a performance cost for our mutual customers.
Can you please tell us why this resource limitation is imposed?
Invocation limits
There are several invocation limits that raise eyebrows, specifically with regard to runtimes, but these have been discussed previously and seem to stem mostly from limitations of the underlying AWS architecture. For now, I would like to focus on the following limitations, which do not seem to have an architectural reason:
Available memory per invocation
It seems that we cannot consume more than 1024 MB of memory per invocation. This is very limiting: a lot of our functions require 4-8 GB of memory, as we hold a lot of intermediate data during processing. AWS Lambda itself supports up to 10 GB of memory per function. What is the reason that memory allocation is not configurable?
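For comparison, on plain Lambda this is a single configurable parameter per function. A minimal sketch using the AWS CDK (stack name, runtime, and asset path are placeholders):

```ts
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';

class ProcessingStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    new lambda.Function(this, 'ProcessingFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist'), // placeholder path
      // Plain Lambda allows up to 10,240 MB; Forge pins this at 1024 MB.
      memorySize: 8192, // the kind of 8 GB allocation our processing needs
    });
  }
}
```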
Front-end invocation request payload size
There is currently a limitation on how much data one can send using invoke and invokeRemote: the payload is capped at 500 KB. That is not a lot of data, especially when dealing with user-generated content. Multi-part messages, which may include metadata or binary data, can easily exceed this limit. For more complex apps, a 500 KB limit quickly becomes an issue, and it comes without an easy workaround: having to split data into chunks and stitch them back together in a stateless backend is a challenging engineering feat.
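To illustrate what that workaround entails, here is a minimal sketch of the front-end half, assuming a hypothetical upload-chunk resolver key; the 200 KiB chunk size keeps each invocation payload well under the 500 KB limit:

```ts
// Front-end sketch using @forge/bridge: split a large string into chunks
// and send each chunk as a separate invocation. 'upload-chunk' is a
// hypothetical resolver key, not a Forge built-in.
import { invoke } from '@forge/bridge';

const CHUNK_SIZE = 200 * 1024;

export async function uploadInChunks(uploadId: string, data: string): Promise<void> {
  const total = Math.ceil(data.length / CHUNK_SIZE);
  for (let index = 0; index < total; index++) {
    const chunk = data.slice(index * CHUNK_SIZE, (index + 1) * CHUNK_SIZE);
    await invoke('upload-chunk', { uploadId, index, total, chunk });
  }
}
```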
Now obviously, this also ties into the available memory (1 GB), KVS value size (240 KiB) and Forge SQL request (1 MB) limits, meaning that even if the payload limit were increased, processing a larger payload would remain difficult.
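The backend half of that workaround would then look something like the sketch below, assuming the storage API from @forge/api. Each chunk has to be persisted as its own KVS entry to stay under the 240 KiB value limit, and the stitched-together payload still has to fit in invocation memory:

```ts
// Resolver-side sketch: persist chunks individually, reassemble on the
// last one. Key names and the processing step are illustrative.
import Resolver from '@forge/resolver';
import { storage } from '@forge/api';

const resolver = new Resolver();

resolver.define('upload-chunk', async ({ payload }) => {
  const { uploadId, index, total, chunk } = payload;
  await storage.set(`upload-${uploadId}-${index}`, chunk);

  if (index === total - 1) {
    // Stitch the chunks back together; the reassembled payload must
    // still fit within the 1 GB invocation memory limit.
    let data = '';
    for (let i = 0; i < total; i++) {
      data += await storage.get(`upload-${uploadId}-${i}`);
    }
    // ...process `data` and clean up the chunk entries...
  }
  return { received: index };
});

export const handler = resolver.getDefinitions();
```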
So the question really is: why does Forge have such limited support for data processing and storage?
Looking forward to hearing your thoughts!
Cheers,
Remie