How to acquire an exclusive or pessimistic lock in Forge RoA

Hi, hopefully a quick question for the Forge experts here.
We are migrating a legacy connect app to Forge RoA and are having trouble figuring out how to get a safe exclusive/pessimistic lock on objects.

For example, the app may need to work on a Jira issue/work item by loading some KVS data attached to that issue, updating it, and then saving it back to KVS.

If multiple users, functions, or threads happen to run concurrently, it is critical that the load, update, and save steps happen under a lock to avoid corrupting or overwriting data.

In Connect we do this with a simple Redis mutex lock. What is the equivalent inside Forge?

Thank you
Chris

Hi @Chris_at_DigitalRose

Forge doesn’t provide a Redis-style distributed mutex or atomic operations (such as CAS) in the standard Storage API, so implementing reliable locking on top of @forge/kvs is very hard in practice, as you can’t guarantee atomicity between the read and the write steps.

For cases where correctness matters (read → modify → write), one workable approach in Forge is optimistic locking using @forge/sql.

This is natively supported in SQL via conditional updates (for example, UPDATE … WHERE version = :currentVersion). If the row was modified concurrently, the update affects zero rows, allowing you to detect the conflict and apply retry logic safely.
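To illustrate the conditional-update semantics without any Forge dependencies, here is a small in-memory JavaScript sketch of the pattern (the table and function names are invented for illustration, not part of any Forge API):

```javascript
// In-memory stand-in for a SQL table with a `version` column.
const table = new Map();

// INSERT: create the row with version 1.
function insertRow(id, data) {
  table.set(id, { data, version: 1 });
}

// Mimics: UPDATE t SET data = :newData, version = version + 1
//         WHERE id = :id AND version = :expectedVersion
// Returns the number of "rows affected" (0 or 1).
function updateIfVersion(id, newData, expectedVersion) {
  const row = table.get(id);
  if (!row || row.version !== expectedVersion) {
    return 0; // conflict: someone else updated the row first
  }
  table.set(id, { data: newData, version: row.version + 1 });
  return 1;
}
```

A caller that gets 0 back re-reads the row, reapplies its change on the fresh data, and retries with the new version.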

I’ve written a detailed guide on this pattern here, in case it’s useful:

Hope this helps, and good luck with the migration from Connect!

3 Likes

Thank you @vzakharchenko for the suggestion and write-up on using the new Forge SQL. We have not used SQL yet, but your guidance looks promising.

Regards,
Chris

fyi – If you’re moving to optimistic locking, I think you can build that in the Forge Custom Entity Store in the same way as Forge SQL. The Custom Entity Store is more similar to KVS than SQL is, so it might be a smaller jump.

Thanks @AaronMorris1 , we'll look at the Custom Entity Store too. That said, we have 1ms exclusive locking working perfectly with Redis (in our remote), so none of the Forge-hosted stores are going to compete with that latency.

We are assessing whether RoA is suitable for complex apps, and locking has now joined webhooks, email/notifications, caching, and background report/PDF generation on our list of gaps. Until those are all addressed in Forge, we will have to stay with remotes for our full-featured apps.

Chris

1 Like

Hi @vzakharchenko ,

Thank you very much for the link to the article about optimistic locking using Forge SQL. It is well written!

However, as I understand it, this lets you lock a row in the Forge SQL database; it does not let you directly lock an operation on, e.g., an issue or content property. You can work around this limitation by acquiring a lock or mutex through Forge SQL's optimistic locking, but then you also have to take care of lock expiry and cleanup (e.g. when your process hits a rate limit or your function throws).

Hi @marc,

Yes, I agree with you - this does not lock the Jira issue itself, but rather provides application-level coordination, and it does require explicit expiry and cleanup handling.

This kind of coordination is possible in Forge using optimistic locking semantics in SQL.

A simplified “distributed lease” pattern can be implemented as follows:

1. Initial attempt (no version in the browser state):
Attempt to create the lock record:

INSERT IGNORE INTO issue_lock (issue_id, version, expiry, status)
VALUES (:issueId, 0, NOW() + INTERVAL 30 SECOND, 'LOCKED');

If 0 rows are affected, the lock already exists. In this case, the current record can be read and, if it is still locked (status = 'LOCKED'), its version can be returned to the browser for a subsequent retry.

2. Atomic Takeover / Renewal (using version from browser state):
When the browser retries, it passes the version it received back to the resolver. The lock can be acquired only if it was explicitly unlocked (matching that version) or has already expired:

UPDATE issue_lock
SET version = version + 1,
    expiry = NOW() + INTERVAL 30 SECOND,
    status = 'LOCKED'
WHERE issue_id = :issueId
  AND (
    (status = 'UNLOCKED' AND version = :versionFromBrowser)
    OR expiry <= NOW()
  );

If this update affects 1 row, the caller has successfully acquired the lease.
If 0 rows are affected, the lease was not acquired. In this case, the current record can be read and, if it is still locked (status = 'LOCKED'), its version can be returned to the browser for a subsequent retry.

3. Release:
On completion (success or failure), release the lease. Guarding the release with the version ensures a stale holder (whose lease already expired and was taken over) cannot unlock someone else's lease:

UPDATE issue_lock
SET status = 'UNLOCKED', expiry = NOW()
WHERE issue_id = :issueId
  AND version = :versionFromBrowser;

This approach handles the “zombie lock” problem: if an invocation crashes, times out, or is rate-limited, the lease naturally expires and can be reclaimed by the next invocation.

So yes — this is a workaround rather than true pessimistic locking, but in this model, the SQL row acts as a semaphore coordinating operations on the Jira entity.
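As a sanity check of the three lease steps above, here is an in-memory JavaScript sketch (a stand-in for the SQL table, not actual Forge code; the release here additionally checks the version so a stale holder cannot unlock a newer lease):

```javascript
// In-memory model of the issue_lock table: issueId -> { version, expiry, status }
const locks = new Map();
const LEASE_MS = 30000; // mirrors the 30-second expiry in the SQL above

// Step 1: INSERT IGNORE — only succeeds if no record exists yet.
function tryAcquire(issueId, now = Date.now()) {
  if (!locks.has(issueId)) {
    locks.set(issueId, { version: 0, expiry: now + LEASE_MS, status: 'LOCKED' });
    return { acquired: true, version: 0 };
  }
  // 0 rows affected: return the current version so the caller can retry.
  return { acquired: false, version: locks.get(issueId).version };
}

// Step 2: conditional UPDATE — acquire only if explicitly unlocked with a
// matching version, or if the previous lease has already expired.
function takeover(issueId, versionFromBrowser, now = Date.now()) {
  const row = locks.get(issueId);
  if (!row) return { acquired: false };
  const eligible =
    (row.status === 'UNLOCKED' && row.version === versionFromBrowser) ||
    row.expiry <= now;
  if (!eligible) return { acquired: false, version: row.version };
  row.version += 1;
  row.expiry = now + LEASE_MS;
  row.status = 'LOCKED';
  return { acquired: true, version: row.version };
}

// Step 3: release — only the holder of the current lease may unlock it.
function release(issueId, versionFromBrowser, now = Date.now()) {
  const row = locks.get(issueId);
  if (row && row.version === versionFromBrowser) {
    row.status = 'UNLOCKED';
    row.expiry = now;
  }
}
```

The expiry check in `takeover` is what reclaims zombie locks: a crashed invocation simply stops renewing, and the next caller takes over once `expiry` passes.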


PS: @AaronMorris1 ,

One small clarification: Forge Custom Entity Store (and KVS) does not provide the guarantees needed for optimistic locking.

While individual writes are atomic, the Storage API uses a “last write wins” conflict resolution strategy and does not support conditional writes (compare-and-set). This means concurrent updates can still overwrite each other.
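A quick plain-JavaScript illustration of the lost-update problem under last-write-wins (not the actual Storage API, just the read/modify/write race):

```javascript
// Two clients both read, both increment locally, both write back
// unconditionally; the second write silently overwrites the first.
let stored = { value: 0 };

function read() { return { ...stored }; }
function write(obj) { stored = { ...obj }; } // no version check

const a = read(); // client A reads { value: 0 }
const b = read(); // client B reads the same snapshot

a.value += 1; write(a); // A writes { value: 1 }
b.value += 1; write(b); // B also writes { value: 1 }, clobbering A's update

// stored.value is 1, not 2: one increment was lost.
```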

From the docs:

Because of this, optimistic locking patterns (read → check version → update if unchanged) cannot be implemented safely on top of KVS or Custom Entity Store. SQL works differently here, as conditional UPDATE … WHERE provides the required semantics.

1 Like

@vzakharchenko – KVS supports conditional updates through custom entity transactions:

Here’s a basic example. If you use the kvs.transact().set() feature, you can provide the set() method with a conditional filter that checks the current version of the property against the expected version. If the check fails, the transaction is aborted.

# manifest.yaml

modules:
  jira:issuePanel:
    - key: issue-panel-1
      resource: main
      resolver:
        function: resolver
      render: native
      title: Optimistic Locking - Panel 1
      icon: https://developer.atlassian.com/platform/forge/images/icons/issue-panel-icon.svg
    - key: issue-panel-2
      resource: main
      resolver:
        function: resolver
      render: native
      title: Optimistic Locking - Panel 2
      icon: https://developer.atlassian.com/platform/forge/images/icons/issue-panel-icon.svg
  function:
    - key: resolver
      handler: index.handler
resources:
  - key: main
    path: src/frontend/index.jsx
permissions:
  scopes:
    - storage:app
app:
  runtime:
    name: nodejs24.x
    memoryMB: 256
    architecture: arm64
  id: [redacted]
  storage:
    entities:
      - name: my-counter
        attributes:
          value:
            type: integer
          version:
            type: integer
        indexes:
          - version
// src/index.js (referenced by the manifest's "index.handler")

import Resolver from '@forge/resolver';
import { kvs, Filter, FilterConditions } from '@forge/kvs';

const resolver = new Resolver();

const CUSTOM_ENTITY_NAME = 'my-counter';
const SHARED_COUNTER_KEY = "shared-counter-key";

const NEW_COUNTER = { value: 0, version: 1 };

async function getCurrentCounter(){
    const result = await kvs.entity(CUSTOM_ENTITY_NAME).get(SHARED_COUNTER_KEY);
    if (!result) {
        await kvs.entity(CUSTOM_ENTITY_NAME).set(SHARED_COUNTER_KEY, NEW_COUNTER);
        return NEW_COUNTER;
    }
    return result;
}

resolver.define('getCounter', async (req) => {
    return await getCurrentCounter();
});

resolver.define('setCounter', async ({payload}) => {
  const expectedCurrentVersion = payload.version;
  const newVersion = expectedCurrentVersion + 1;

  const updatedCounter = { value: payload.value, version: newVersion };

  try {
    await kvs
        .transact()
        .set(
            SHARED_COUNTER_KEY,
            updatedCounter,
            {
              entityName: CUSTOM_ENTITY_NAME,
              conditions: new Filter().and('version', FilterConditions.equalTo(expectedCurrentVersion))
            })
        .execute();
    return { success: true, currentCounter: updatedCounter}
  } catch (e) {
      console.error(e);
      const currentCounter = await getCurrentCounter();
      return {success: false, currentCounter: currentCounter, error: "Optimistic locking failed."}
  }
});

export const handler = resolver.getDefinitions();

// src/frontend/index.jsx

import React, {useEffect, useState} from 'react';
import ForgeReconciler, {Box, Button, Inline, Text} from '@forge/react';
import {invoke} from '@forge/bridge';

const App = () => {
    const [counter, setCounter] = useState(null);
    const [errorMessage, setErrorMessage] = useState(null);

    useEffect(() => {
        invoke('getCounter').then(setCounter);
    }, []);

    const handleSave = async () => {
        setErrorMessage(null);
        try {
            const response = await invoke('setCounter', counter);
            setCounter(response.currentCounter);
            if (!response.success) {
                setErrorMessage(response.error);
            }
        } catch (err) {
            setErrorMessage("An unexpected error occurred.");
        }
    };

    return (
        <>
            {!counter && <Text>Loading...</Text>}
            {counter && (
                <Box>
                    <Text>Counter: {counter?.value}</Text>
                    <Inline space="space.100">
                        <Button onClick={() => setCounter({ ...counter, value: counter.value + 1 })}>+1</Button>
                        <Button onClick={handleSave}>Save</Button>
                    </Inline>
                    <Text>Current version: {counter.version}</Text>
                    {errorMessage && <Text appearance="error">{errorMessage}</Text>}
                </Box>
            )}
        </>
    );
};

ForgeReconciler.render(
    <React.StrictMode>
        <App/>
    </React.StrictMode>
);
2 Likes

Hi @AaronMorris1,

Great catch — you’re absolutely right. I overlooked that kvs.transact() supports conditional updates.

Your approach using Custom Entity Store is indeed cleaner and more Forge-native than the SQL-based workaround.

One trade-off worth keeping in mind is quotas and rate limits. Locking patterns often involve polling (periodically checking whether the lock is free). With heavy polling, the Storage API may hit rate limits sooner than a simple SQL SELECT–based check.
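One common mitigation is to poll with capped exponential backoff and jitter rather than a tight loop. A hypothetical sketch (helper names are ours, not a Forge API):

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retry an acquire attempt with capped exponential backoff plus jitter,
// so polling for a free lock does not hammer the storage quota.
async function acquireWithBackoff(tryAcquireOnce, { maxAttempts = 6, baseMs = 100, capMs = 2000 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    if (await tryAcquireOnce()) return true; // got the lock
    // Exponential backoff, capped, with up to 50% random jitter to
    // de-synchronize competing pollers.
    const backoff = Math.min(capMs, baseMs * 2 ** attempt);
    await sleep(backoff + Math.random() * backoff * 0.5);
  }
  return false; // exhausted attempts; surface a "try again later" error
}
```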

That said, for most standard apps, your approach is likely the better and more idiomatic choice. Thanks for sharing!

3 Likes