Questions about the new concurrency limits (async events)

I would like to use the new concurrency option of the async event queue, but I have a few questions about how the events are processed. I’ve created a simple app that pushes five events to a queue with a concurrency limit of one:

import { Queue } from "@forge/events";

const queue = new Queue({ key: "queue" });

for (let i = 0; i < 5; i += 1) {
  queue.push({
    body: { message: `Event ${i}` },
    concurrency: { key: "event", limit: 1 }
  });
}
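
For reference, the consumer side that produces the log lines below looks roughly like this (a minimal sketch; I’m assuming a consumer module in the manifest that routes this queue to a resolver method named event-listener):

import Resolver from "@forge/resolver";

const resolver = new Resolver();

// Each delivered event invokes this handler; the log timestamp below is
// effectively the delivery time, which is what makes the gaps visible.
resolver.define("event-listener", async ({ payload }) => {
  console.log(payload.message);
});

export const handler = resolver.getDefinitions();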

I assumed that the events would be processed immediately, one after the other, but there seems to be a random (?) delay between them:

2025-07-15T07:09:41.494Z Event 4
2025-07-15T07:09:57.753Z Event 1 // 16 second delay
2025-07-15T07:10:01.740Z Event 0 // 4 second delay
2025-07-15T07:10:05.763Z Event 3 // 4 second delay
2025-07-15T07:10:10.432Z Event 2 // 5 second delay
  • Why is there a delay and is it configurable somehow?
  • What’s the maximum possible delay?
  • Is there a limit to how many unprocessed events can be in a queue?

Hi @klaussner,
When Async Events cannot be delivered (in other words, when the function cannot be invoked), we retry with exponential backoff. There are several retry scenarios, and rate limiting due to the configured concurrency is one of them. Not all of them are visible to the app (e.g. when the invocation limits are reached we don’t attempt the invocation at all, or there may be an internal error, etc.). The delays you’re seeing are likely a result of that: when you push all 5 events to the queue, all 5 attempt delivery at the same time, but only 1 can be delivered; the rest are postponed and retried internally after some time. The stricter the concurrency limit, the larger the time differences can get when you push many events at once. The function’s processing time, the synchronization of the concurrency counters, and other factors also play a role.
The maximum delay between retries is 15 minutes. So, if you push thousands of events with limit=1, the events at the back of the batch will keep backing off and will likely hit that maximum delay.
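
As a rough mental model only (an illustrative sketch, not the actual Forge scheduling code), capped exponential backoff with jitter behaves like this:

// Illustrative sketch only — not the actual Forge scheduling code.
const MAX_DELAY_MS = 15 * 60 * 1000; // the 15-minute cap mentioned above

function retryDelayMs(attempt, baseMs = 1000) {
  // The delay grows exponentially per attempt and is capped at 15 minutes.
  const capped = Math.min(baseMs * 2 ** attempt, MAX_DELAY_MS);
  // Jitter spreads retries out, which is why the gaps between events look random.
  return capped / 2 + Math.random() * (capped / 2);
}
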
You may have expected us to poll the Async Events invocations per installation+queue, which would synchronize nicely when the limit is 1, but at Atlassian/Forge scale that isn’t a viable, scalable solution at the moment, so we go for proactive invocation attempts and back off when needed.
There’s currently no limit on the number of unprocessed events that use the concurrency setting. The one thing you may run into when pushing too many events with limit=1 is that some events from the batch may eventually fall outside the retention window; reaching the invocation limits does not extend it.


Thank you for the detailed answer, @KamilKozlowski!

Hi @KamilKozlowski,
I also have a question regarding the async event API; I thought I’d just post it here to keep the questions together. Maybe you can help me with this one as well 🙂

I’m logging the processing of the events I push to the queue. In the logs I saw that events are sometimes processed several times (the same job UUID occurs over and over again in the logs); some events were processed more than 15 times. I’m not throwing any errors from the event handler, so it shouldn’t be the built-in retry mechanism. For now, this has only occurred when pushing more than 100 events at a time to the queue.
Later I noticed that I was not awaiting the queue.push calls (just like in the example above) but simply pushed 200 events one directly after another. I don’t know if this could be the reason for the strange behavior. I changed that part of the code to await the calls but haven’t had time to retest with that many events so far.
For now, I could only think of 2 potential causes:

  • Events are not correctly removed from/added to the queue by Forge and are therefore re-processed again and again (maybe a bug because I pushed so many events at the same time?)
  • UUIDs for the event IDs are being reused, resulting in events with the same event IDs, so it only looked as if events were re-processed.

Since await jobProgress.cancel(); did not stop the processing of the events for me, I finally had to delete my app completely (uninstalling it from the instance also didn’t stop the execution). That was a bit ugly; in this case it was just my development app, which I could easily delete, but what if this happened in production?
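
For reference, my cancellation attempt looked roughly like this (a sketch; I’m assuming the job ID returned by queue.push is the right handle to pass to queue.getJob):

// Sketch of what I tried; jobId came from an earlier queue.push call.
const jobId = await queue.push({ body: { message: "Event 0" } });

const jobProgress = queue.getJob(jobId);
const response = await jobProgress.getStats();
const { success, inProgress, failed } = await response.json();
console.log({ success, inProgress, failed });

await jobProgress.cancel(); // this did not stop the processing for me
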
Do you maybe have some insights into how the queues behave in general, and do you have an idea what could be the reason for the behavior I observed?
Thank you!

Cheers
Jona

Hey @JonaIttermannDecadis,

For now, this has only occurred when pushing more than 100 events at a time to the queue.

I assume you’re pushing them one by one, given that the maximum number of events per request is 50?

Later I noticed that I was not awaiting the queue.push calls (just like in the example above) but simply pushed 200 events one directly after another. I don’t know if this could be the reason for the strange behavior. I changed that part of the code to await the calls but haven’t had time to retest with that many events so far.

We’ve seen cases before where an unresolved promise led to function timeouts, triggering retries. This could be happening in your case as well.
Also, are you using forge tunnel when testing? We’ve seen cases under higher load where the tunnel failed to respond, and an event that seemed to be processed successfully was retried because of that.
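
If you want to rule both out, a pattern like the following may help (a sketch only; it chunks the payloads to respect the 50-events-per-request limit and awaits every push so no promise is left unresolved when the function returns):

import { Queue } from "@forge/events";

const queue = new Queue({ key: "queue" });

// Inside an async Forge function:
const payloads = [];
for (let i = 0; i < 200; i += 1) {
  payloads.push({ body: { message: `Event ${i}` } });
}

// Push at most 50 events per request and await each call, so the
// function does not return while pushes are still in flight.
for (let start = 0; start < payloads.length; start += 50) {
  await queue.push(payloads.slice(start, start + 50));
}
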

Anyway, it’s a bit hard to give a good answer here. If you believe you’re running into unwanted behavior, I recommend raising a support ticket where you can share your app and job IDs; the team can then look closer into the logs, try to reproduce your case, and assist with a fix or raise a public bug.


Alright, thank you very much for the quick response @KamilKozlowski!
We will keep a close eye on our implementation and the queue behavior and will raise a ticket if we observe strange behavior again 😉

Cheers!