Improving support for long-running tasks in Forge

Long-running, infrequent tasks against large data sets are a key use case for one of our DC apps, and the current limits are perceived as a key blocker to implementing it via Forge.

  1. Bulk calling of external and product APIs on a regular schedule, and displaying the results to the end user. The tasks are not time sensitive and are on the order of ~1,000–10,000 API calls per execution. Being able to build this via Forge would be highly advantageous, as the app requires third-party authentication tokens, which customers are much more likely to trust within Atlassian's infrastructure.

  2. Given our app's use case, this is obviously a blocker before prototyping even starts. We have prototyped various hacky workarounds within the Forge limits, but they are much more complicated and less reliable than they need to be.

  3. Our approach was to create a tree of async events, create a storage record for each batch, and aggregate the storage records to display the results. We ran into limits at every point of this process: the limited number of async events in a consecutive queue, the 25-second runtime limit (which caps our batch size), the storage size limit (each batch ended up being limited not by the number of requests but by the likely size of the storage entity created), and the storage read limits (when trying to aggregate too many items at once).

  4. Key things that would improve the situation for us:

a: Increased storage limits (payload size and/or number of reads). The use case for our app is that such tasks may only happen once per day, week, or month per client, so overall usage is quite low, but we still hit these limits because each batch tends to be quite large.

b: A common integration pattern for large batch tasks. Part of our issue is that we feel we are reinventing the wheel: splitting work into small tasks, splitting the storage per batch, and aggregating the results. We would love a simple, supported pattern from Atlassian for large batch use cases.

c: Increased invocation timeout limits. Any meaningful increase would make a huge difference for us. Our overall runtime is low (due to the low number of tasks and low frequency), but having to split work across many invocations makes things much more complicated. Our tasks range from about 1 minute to 1 hour overall, but even 1- or 5-minute limits would be a huge benefit for us (15 minutes would be an amazing improvement).
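For reference, the fan-out / store-per-batch / aggregate pattern described in point 3 can be sketched roughly like this. This is a minimal in-memory simulation, not our actual implementation: the `store` map and the synchronous batch handler stand in for the Forge Storage API and an Async Events consumer, and all names and the batch size are hypothetical:

```javascript
const BATCH_SIZE = 50; // sized so one batch fits within a single invocation

// Split the full work list into batches small enough for one invocation.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const store = new Map(); // stands in for the Forge Storage API

// One queue-consumer invocation: process a batch and persist its result
// under a per-batch key (in Forge this would be a storage.set call).
function handleBatch(jobId, batchIndex, batch) {
  const results = batch.map((item) => ({ item, ok: true })); // the API calls
  store.set(`${jobId}:batch:${batchIndex}`, results);
}

// Aggregation step: read every batch record back and merge the results.
function aggregate(jobId, batchCount) {
  let merged = [];
  for (let i = 0; i < batchCount; i++) {
    merged = merged.concat(store.get(`${jobId}:batch:${i}`) || []);
  }
  return merged;
}
```

Note that each per-batch record still has to fit under the storage entity size limit, and the aggregation step consumes storage reads, which is exactly where the limits above bite.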


Another use case I’ve noticed since this was originally posted.

I’ve noticed that OpenAI endpoints take double-digit seconds to respond; in my testing, response times average around 30 seconds depending on the use case. I know Atlassian has its own AI features being added to the platform, but there are countless interesting new add-ons that could be built around AI. I haven’t tried other services, but I imagine it’s the same.

My assumption was that at some point Atlassian would charge for Forge usage and remove the limits. Is that not going to be the case?

Looking forward to this. We need a way to make more than 100 network requests to sync large projects from Jira to another platform (a full sync during onboarding, then a two-way sync with much less data). The 100-network-request maximum and the invocation time limit just don’t work for us, and we are already batching our uploads.

We first tried a regular resolver and hit the network request limit. From there I read that Async Events could run for up to 15 minutes and implemented a queue, only to find out that was a lie (please update the docs).

Related issue: Async Events Queue times out after 25sec - #3 by IlyaRadchenko
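In case it helps others hitting the same cap, one workaround is a self-requeuing sync: each invocation spends a fixed request budget paging through the source, then pushes a follow-up event carrying a cursor. This is only a sketch under assumed names (`fetchPage`, `enqueue`, and the budget are stand-ins; in Forge the enqueue would be a `Queue.push` from `@forge/events`):

```javascript
const REQUEST_BUDGET = 90; // keep headroom under the 100-request cap

// One queue-consumer invocation: page through the source API until the
// request budget is spent, then re-enqueue a follow-up event with the cursor.
async function syncStep(event, fetchPage, enqueue, budget = REQUEST_BUDGET) {
  let cursor = event.cursor;
  let done = false;
  let requests = 0;
  while (requests < budget && !done) {
    const page = await fetchPage(cursor); // one network request
    requests += 1;
    cursor = page.nextCursor;
    done = page.nextCursor == null; // no more pages to fetch
  }
  if (!done) {
    await enqueue({ cursor }); // in Forge: push an event with the cursor
  }
  return { cursor, done, requests };
}
```

The sync then progresses across as many invocations as it needs, each staying under both the request cap and the runtime limit.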


Hi everyone,

We’ve announced a small enhancement in this space today by increasing the runtime limit for functions invoked via the Async Events API from 25 seconds to 55 seconds.

We intend to spend more time in this problem space in the future. This is an incremental improvement that we felt was valuable enough to launch on its own.

Please let me know if this increased runtime limit has been useful for you!
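For anyone adopting the longer window: a handler still needs to stop cleanly before the hard cutoff. A simple guard might look like the following sketch (the 55-second figure is from the announcement above; the safety margin and all names are assumptions):

```javascript
const RUNTIME_LIMIT_MS = 55_000; // announced Async Events runtime limit
const SAFETY_MARGIN_MS = 5_000;  // assumed headroom for cleanup and requeue

// Process items until the work runs out or the deadline approaches.
// Returns the leftover items so the caller can push them back onto the queue.
function processUntilDeadline(items, processItem, startedAt = Date.now()) {
  const deadline = startedAt + RUNTIME_LIMIT_MS - SAFETY_MARGIN_MS;
  let i = 0;
  while (i < items.length && Date.now() < deadline) {
    processItem(items[i]);
    i += 1;
  }
  return items.slice(i); // leftover work for a follow-up event
}
```

Returning the unprocessed tail lets the consumer re-enqueue it rather than being killed mid-batch at the limit.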


My use case is copying an entire space: https://marketplace.atlassian.com/apps/1227319/simple-space-manager?hosting=cloud&tab=overview

Now that the limit has been increased, this seems to work for larger spaces, thanks for that, but it still does not guarantee that copying will succeed for a space with an arbitrary number of pages.

Best Regards