Hi @marc,
Yes, I agree with you: this does not lock the Jira issue itself, but rather provides application-level coordination, and it does require explicit expiry and cleanup handling.
This kind of coordination is possible in Forge using optimistic locking semantics in SQL.
A simplified “distributed lease” pattern can be implemented as follows:
1. Initial attempt (no version in the browser state):
Attempt to create the lock record:
INSERT IGNORE INTO issue_lock (issue_id, version, expiry, status)
VALUES (:issueId, 0, NOW() + INTERVAL 30 SECOND, 'LOCKED');
If 0 rows are affected, a lock record already exists. In this case, read the current record and return its version to the browser so that a subsequent retry can attempt the takeover described in step 2.
2. Atomic Takeover / Renewal (using version from browser state):
When the browser retries, it passes the version it received back to the resolver. The lock can be acquired only if it was explicitly unlocked (matching that version) or has already expired:
UPDATE issue_lock
SET version = version + 1,
expiry = NOW() + INTERVAL 30 SECOND,
status = 'LOCKED'
WHERE issue_id = :issueId
AND (
(status = 'UNLOCKED' AND version = :versionFromBrowser)
OR expiry <= NOW()
);
If this update affects 1 row, the caller has successfully acquired the lease.
If 0 rows are affected, the lease was not acquired. In this case, the current record can be read and, if it is still locked (status = 'LOCKED'), its version can be returned to the browser for a subsequent retry.
3. Release:
On completion (success or failure), release the lease. A version guard is worth adding here so that a stale caller (whose lease has already expired and been taken over) cannot unlock the new holder's lease; this assumes the resolver returns the current version to the caller whenever it acquires the lease:
UPDATE issue_lock
SET status = 'UNLOCKED', expiry = NOW()
WHERE issue_id = :issueId
AND version = :heldVersion;
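The three steps above can be sketched as an in-memory state machine. This is illustrative TypeScript, not the Forge SQL API: in a real resolver each method would map to one of the SQL statements above, and the method names here are my own.

```typescript
// In-memory model of the lease semantics. "Rows affected" in SQL
// corresponds to the `acquired` flag returned here.
type Lock = { version: number; expiry: number; status: "LOCKED" | "UNLOCKED" };

class LeaseTable {
  private rows = new Map<string, Lock>();

  // Step 1: INSERT IGNORE — create the lock only if no row exists yet.
  tryCreate(issueId: string, now: number, ttlMs = 30_000): { acquired: boolean; version: number } {
    const existing = this.rows.get(issueId);
    if (existing) return { acquired: false, version: existing.version };
    this.rows.set(issueId, { version: 0, expiry: now + ttlMs, status: "LOCKED" });
    return { acquired: true, version: 0 };
  }

  // Step 2: conditional UPDATE — take over only if the lock was explicitly
  // released at the expected version, or the lease has expired.
  tryTakeover(issueId: string, versionFromBrowser: number, now: number, ttlMs = 30_000): { acquired: boolean; version: number } {
    const row = this.rows.get(issueId);
    if (!row) return { acquired: false, version: -1 };
    const canTake =
      (row.status === "UNLOCKED" && row.version === versionFromBrowser) ||
      row.expiry <= now;
    if (!canTake) return { acquired: false, version: row.version };
    row.version += 1;
    row.expiry = now + ttlMs;
    row.status = "LOCKED";
    return { acquired: true, version: row.version };
  }

  // Step 3: release — unlock so the next caller can take over immediately.
  release(issueId: string, now: number): void {
    const row = this.rows.get(issueId);
    if (!row) return;
    row.status = "UNLOCKED";
    row.expiry = now;
  }
}
```

Note that a takeover succeeds both after an explicit release and after the 30-second expiry, which is exactly the zombie-lock recovery path.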
This approach handles the “zombie lock” problem: if an invocation crashes, times out, or is rate-limited, the lease naturally expires and can be reclaimed by the next invocation.
So yes — this is a workaround rather than true pessimistic locking, but in this model, the SQL row acts as a semaphore coordinating operations on the Jira entity.
PS: @AaronMorris1,
One small clarification: Forge Custom Entity Store (and KVS) does not provide the guarantees needed for optimistic locking.
While individual writes are atomic, the Storage API uses a “last write wins” conflict resolution strategy and does not support conditional writes (compare-and-set). This means concurrent updates can still overwrite each other.
This last-write-wins behavior is documented for the Storage API. Because of this, optimistic locking patterns (read → check version → update if unchanged) cannot be implemented safely on top of KVS or the Custom Entity Store. SQL works differently here: a conditional UPDATE … WHERE provides the required compare-and-set semantics.
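To make the difference concrete, here is a minimal sketch (the function names are illustrative, not the actual Forge API) contrasting a last-write-wins store with a conditional, compare-and-set write:

```typescript
// Two clients both read version 1 of a record, then both try to write.
type Doc = { version: number; value: string };
const store = new Map<string, Doc>();

// KVS-style write: unconditionally replaces the record (last write wins).
function putLastWriteWins(key: string, doc: Doc): void {
  store.set(key, doc);
}

// SQL-style conditional write: succeeds only if the stored version still
// matches what the caller read (UPDATE ... WHERE version = :expected).
function putIfVersionMatches(key: string, expected: number, doc: Doc): boolean {
  const current = store.get(key);
  if (!current || current.version !== expected) return false;
  store.set(key, doc);
  return true;
}

// Last write wins: client B silently overwrites client A's update.
store.set("k", { version: 1, value: "base" });
putLastWriteWins("k", { version: 2, value: "client A" });
putLastWriteWins("k", { version: 2, value: "client B" });

// Compare-and-set: the second write is rejected, so the conflict is visible.
store.set("k", { version: 1, value: "base" });
const a = putIfVersionMatches("k", 1, { version: 2, value: "client A" });
const b = putIfVersionMatches("k", 1, { version: 2, value: "client B" });
```

In the first case the lost update goes unnoticed; in the second, client B gets `false` back and can re-read and retry, which is the property the locking pattern above depends on.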