Are expanded developer responsibilities for non-sandbox runtimes compatible with pushing customer data to an "@forge/events" queue as payload?

Hi,

We’re currently migrating our Forge apps to Node 20.x runtimes, as the sandbox runtime has been deprecated for some time. The new runtime brings new developer responsibilities, specifically:

  • Your app must not persist customer data or sensitive content in a global state, in memory or on disk, between subsequent invocations.

(as stated in the Legacy Runtime Migration documentation)

We’re using the Forge Async Events API, which provides queues for handling long-running tasks in our app. However, these queues need to be defined globally.

We’re unsure about the implications of submitting a job with customer data as payload to a queue. From our understanding, this is equivalent to storing the data in memory, as it may not be processed immediately. We’re wondering:

  1. Should we avoid sending customer data as the queue payload and instead persist the data (e.g., in Forge storage), sending only an identifier as the payload?
  2. Is it acceptable to have customer data as payload in the queue, as Atlassian takes care of preventing data leakage (similar to Forge storage)?

A related question is: Does Atlassian ensure that queue events are customer-specific, even though the runtime is no longer sandboxed?

Best regards,
Jason

Hello @JasonMarx,
Here is the response on this topic from the appropriate team:

While queues are defined globally in the manifest, they are isolated per tenant under the covers—each tenant has its own instance of the queue. So, data put into the async queue from tenant A’s invocation context will result in async events being executed in tenant A’s context. Your responsibility is to not fetch tenant B’s data (e.g., from disk or memory) while processing an invocation for tenant A, as that data would then cross the tenant boundary.
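The isolation described above can be pictured with a small stand-in model: one logical queue key, but a separate queue instance per tenant. All names here (`tenantQueues`, `pushFromContext`, `processTenant`) are illustrative, not Forge APIs.

```javascript
// One logical queue key in the manifest, but a separate queue instance
// per tenant: tenantId -> array of queued events.
const tenantQueues = new Map();

// Pushing always happens inside some tenant's invocation context, so the
// event lands only in that tenant's queue instance.
function pushFromContext(tenantId, event) {
  if (!tenantQueues.has(tenantId)) tenantQueues.set(tenantId, []);
  tenantQueues.get(tenantId).push(event);
}

// Async events are later executed in the same tenant's context; a consumer
// for tenant A never sees tenant B's events.
function processTenant(tenantId) {
  return (tenantQueues.get(tenantId) || []).splice(0);
}

pushFromContext("tenant-a", { task: "reindex" });
pushFromContext("tenant-b", { task: "export" });

console.log(processTenant("tenant-a")); // -> [ { task: 'reindex' } ]
```

The cross-tenant risk the reply warns about would only arise if the consumer itself reached outside its invocation context (e.g., read tenant B's data from a shared global while handling tenant A's event).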

Best regards,
Damian