Long-running, infrequent tasks against larger data sets are a key use case for one of our DC apps, and this is perceived as a key blocker to implementing it via Forge.
-
Bulk calling of external and product APIs on a regular schedule, and displaying the results to the end user. The tasks are not time sensitive and are on the order of ~1,000 to ~10,000 API calls per execution. Being able to build via Forge would be highly advantageous, as the app requires third-party authentication tokens, which the customer is much more likely to trust within Atlassian's infrastructure.
-
Given our app's use case, this is obviously a blocker even before we start prototyping. We have prototyped various hacky ways to work within Forge's limits, but the result is much more complicated and less reliable than it needs to be.
-
Our approach: create a tree of async events, create a storage record for each batch, and aggregate the storage records to display the results. We ran into limits at every point of this process: the limited number of async events in a consecutive queue; the 25-second runtime limit (which caps our batch size); the storage size limit (each batch ended up limited not by the number of requests but by the likely size of the storage entity created); and the storage read limits (when trying to aggregate too many items at once).
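To make the workaround concrete, here is a minimal sketch of the split/fan-out/aggregate shape described above. The Forge-specific pieces (the async event queue and the storage API) are replaced with plain in-memory stand-ins, and the function names (`chunk`, `processBatch`, `aggregateResults`) are hypothetical, not real Forge APIs.

```javascript
// Split a large list of pending API calls into batches small enough
// to finish within one invocation's runtime limit.
function chunk(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// One invocation processes one batch and writes one storage record.
// In a real Forge app this would be an async-event consumer writing
// to app storage; here `store` is just a Map standing in for it.
async function processBatch(batch, store, batchId) {
  const results = await Promise.all(
    batch.map(async (call) => ({ id: call, ok: true }))
  );
  store.set(`batch-${batchId}`, results);
}

// A final invocation reads each per-batch record back and merges them
// for display — this is where storage read limits bite when there are
// too many batches to aggregate at once.
function aggregateResults(store, batchCount) {
  const all = [];
  for (let i = 0; i < batchCount; i++) {
    all.push(...(store.get(`batch-${i}`) || []));
  }
  return all;
}
```

Every one of the three steps above has its own limit to stay under, which is why the real implementation is considerably messier than this sketch.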
-
Key things that would improve the situation for us:
a: Increased storage limits (payload size and/or number of reads). Such tasks may only run once per day, week, or month per client, so our overall usage is quite low, but we still hit these limits because each batch tends to be quite large.
b: A common integration pattern for large batch tasks. Part of our problem is that we feel we are reinventing the wheel: splitting the work into small tasks, splitting storage per batch, and aggregating the results. We would love a simple, supported pattern from Atlassian for large batch use cases.
c: Increased invocation timeout limits. Any meaningful increase would make a huge difference for us. Our overall runtime is low (due to the low number of tasks and low frequency), but having to split the work across many invocations complicates things considerably. Our tasks range from about 1 minute to 1 hour overall, so even a 1- or 5-minute limit would be a huge benefit, and 15 minutes would be an amazing improvement.
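A rough back-of-envelope calculation shows why the timeout matters so much. The per-call latency and safety margin below are assumed figures for illustration, not measurements:

```javascript
// Estimate how many invocations a job must be split into, given the
// invocation time limit. A safety factor leaves headroom for cold
// starts and bookkeeping overhead (0.8 is an assumed figure).
function invocationsNeeded(totalCalls, msPerCall, limitMs, safetyFactor = 0.8) {
  const usableMs = limitMs * safetyFactor;
  const callsPerInvocation = Math.floor(usableMs / msPerCall);
  return Math.ceil(totalCalls / callsPerInvocation);
}

// 10,000 sequential calls at ~200 ms each under a 25 s limit fit
// ~100 calls per invocation, so ~100 invocations must be chained.
// Under a 15-minute limit the same job needs only 3 invocations.
```

Under these assumptions, raising the limit from 25 seconds to 15 minutes would shrink the orchestration from ~100 chained invocations to 3, which is the difference the point above is describing.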