Feedback and improvement suggestions : Cache EAP

Hi everyone!
We will be using this topic to collect any feedback and improvement suggestions that you wish to share with us related to the Forge Cache EAP (Early Access Program).
Thank you!

Update: You can now use this form to submit feedback, in addition to this topic.


Thank you for adding me to the EAP, @SunandanGokhroo.

I tested the new cache API to see if it could replace our current implementation based on Forge storage, but we would need more control over the TTL to make it work. I would like to implement a system like the following:

A scheduled trigger starts a background processing task where the work is split up into many shorter function invocations using async events. After doing some computations in each step, an intermediate result is stored in the cache and the next function is invoked. After all computations are completed, the cache entries are merged and processed in a final step.

The computations can take several hours, depending on how much content the customer has in their Confluence instance. This means that the one hour maximum TTL is too short because the first cache results could already be deleted before reaching the final step. Ideally, we would have a way to dynamically reset or extend the TTL of previously cached entries if needed.
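One possible workaround under the current one-hour cap is to re-write an entry with the same value before it expires, which effectively resets its TTL. Below is a minimal sketch of that "touch" pattern. The `makeCacheClient` stand-in is an in-memory Map, not the real Forge cache EAP client (which is only available inside the platform); the `get`/`set`-with-`ttlSeconds` shape is an assumption for illustration.

```javascript
// Stand-in for a cache client: an in-memory Map with per-key expiry.
// In a real Forge app this would be the EAP cache client instead.
function makeCacheClient() {
  const store = new Map();
  return {
    async get(key) {
      const entry = store.get(key);
      if (!entry || entry.expiresAt <= Date.now()) return undefined;
      return entry.value;
    },
    async set(key, value, { ttlSeconds }) {
      store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
    },
  };
}

// "Extend" a TTL by reading the entry and writing it back with a fresh TTL.
// This only works if the entry has not already expired.
async function touch(cacheClient, key, ttlSeconds) {
  const value = await cacheClient.get(key);
  if (value === undefined) return false; // too late, entry already evicted
  await cacheClient.set(key, value, { ttlSeconds });
  return true;
}
```

Each async-event step could `touch` the earlier intermediate keys it still needs, though there is an inherent race if an entry expires between steps, which is why a first-class extend/reset API would be preferable.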


Hi @klaussner, thank you so much for sharing the feedback; this is quite valuable. We would love to have a more detailed discussion about this use case.

We already have a way to request a TTL increase for the EAP; more info here:

Would you be willing to raise a ticket above? I would like our engineers to be able to discuss possible solutions/alternatives with you and then look into the possibility of increasing the TTL.
Thank you!


I just wanted to say that this is working really well in testing, and it’s a great feature that will let us avoid implementing a makeshift queue using scheduled triggers.

Our use case is fairly simple, and backout and retry are working well, but the dream would be to have a semaphore-type API on top of the cache storage. That is, a mechanism that invokes an enqueued job once a lock is released. The API as-is gives us the tools to implement it ourselves, though. An extension to the queue module that runs jobs one at a time would also work.
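The semaphore idea described above can be sketched with a TTL-backed lock key: acquire by writing the key if absent, release by deleting it, and have queued jobs poll until the lock frees up. This is only a sketch with an in-memory Map standing in for the cache; a production version would need an atomic set-if-absent operation from the platform to avoid acquisition races, which the cache EAP may not expose.

```javascript
// In-memory stand-in for a cache client. A real implementation would need
// an atomic "set if absent" from the platform to make acquisition race-free.
const store = new Map();

function acquireLock(key, holder, ttlSeconds) {
  const entry = store.get(key);
  const now = Date.now();
  if (entry && entry.expiresAt > now) return false; // lock currently held
  store.set(key, { holder, expiresAt: now + ttlSeconds * 1000 });
  return true;
}

function releaseLock(key, holder) {
  const entry = store.get(key);
  // Only the current holder may release, so a stale caller can't
  // delete a lock that has since been re-acquired by someone else.
  if (entry && entry.holder === holder) store.delete(key);
}

// Run a job under the lock, retrying with a short backoff until it is
// free, so competing jobs effectively execute one at a time.
async function runExclusive(key, holder, job) {
  while (!acquireLock(key, holder, 60)) {
    await new Promise((r) => setTimeout(r, 100)); // back off and retry
  }
  try {
    return await job();
  } finally {
    releaseLock(key, holder);
  }
}
```

The TTL on the lock key doubles as a safety valve: if a function invocation dies without releasing, the lock expires on its own instead of wedging the queue.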


Hi @SunandanGokhroo,
Thank you for adding me to the EAP,

The functionality works well when called from the Forge backend for the following use case: caching API results for a few minutes instead of constantly making API calls.
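That read-through pattern — return the cached result if fresh, otherwise call the API and cache the response for a few minutes — can be sketched as a small wrapper. The names here are hypothetical, and the Map stands in for the Forge cache client; `fetchFn` would be the real backend API call.

```javascript
// Read-through cache: return the cached value when still fresh, otherwise
// invoke the underlying API call and store its result with a short TTL.
// The Map stands in for the Forge cache EAP client.
const apiCache = new Map();

async function cachedFetch(key, fetchFn, ttlSeconds = 300) {
  const hit = apiCache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
  const value = await fetchFn(); // cache miss: call the API
  apiCache.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  return value;
}
```

In a resolver this would wrap the outbound request, e.g. `cachedFetch('issue:KEY-1', () => fetchIssue('KEY-1'))`, so repeated invocations within the TTL window skip the API call entirely.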

However, we have implemented Forge Remote for Jira custom fields, and we cannot combine a resolver function with a resolver endpoint.

Is there a planned mechanism to set and retrieve cached values from the Forge frontend?
Or do we need to wait for the implementation (Trello) in order to use both functionalities simultaneously?
Thank you!


Hi @AlexandreBOREL, thank you so much for the feedback. Happy to hear that it is working well for you.
Let me take a look into this and get back.
I would also love to hear any other feedback or improvement suggestions you might have. We are actively working on improving this to address partner pain points.

Thank you so much!
