Pull request locking

I’m wondering: when should we use com.atlassian.bitbucket.concurrent.LockService#getPullRequestLock?

From the documentation and source code it’s not entirely clear under what circumstances we should use it. I assume it’s for when we want to update a pull request that multiple threads could be updating concurrently?

One example I may have found is updating a pull request in an asynchronous event handler, for example an event handler registered for the PullRequestOpenedEvent where we add comments, set reviewers or update the PR in any other way.

Is my assumption of this correct?

What other cases would one use a pull request lock in?

I think the pull request lock is used for clustered instances, for cases where one node/thread needs to make a modification to a pull request object.

Thanks. However, the documentation seems a bit misleading on this point. It says:

Locking a pull request in one instance does not lock it in any other.

But I see a ClusterPullRequestLock implementation is returned, so what you’ve said makes sense.

Just wondering: if, for example, you have an event handler for an asynchronous event, and multiple handlers for that event (perhaps in other plugins) will update the PR, should we also use the pull request lock?

I thought the API may handle this, but I don’t think it does.

@adam.markham,

Locking a pull request in one instance does not lock it in any other.

That takes the documentation a bit out of context. The sentence before that says “Separate instances of this interface are not related.” In other words, if you create a PullRequestLock using getPullRequestLock("a") and create another with getPullRequestLock("b"), locking a PullRequest with the first will not lock it for the second–they are separate instances of locks and may be acquired concurrently.
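To illustrate (a minimal sketch, assuming a LockService called lockService and a PullRequest called pullRequest are in scope, and reusing the UncheckedOperation style from the snippet later in this thread): holding the lock named "a" for a pull request does not stop another thread from locking the same pull request through the lock named "b".

    PullRequestLock lockA = lockService.getPullRequestLock("a");
    PullRequestLock lockB = lockService.getPullRequestLock("b");

    lockA.withLock(pullRequest, new UncheckedOperation<Void>() {
        @Override
        public Void perform() {
            // While this runs, another thread can still acquire lockB for the
            // same pull request; only other holders of lockA ("a") are excluded.
            return null;
        }
    });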

For an event like PullRequestOpenedEvent, which is raised exactly once for a given PullRequest, and only on a single node in a Data Center installation, and then never raised again, you wouldn’t need locking. There can’t possibly be any concurrency.

If, on the other hand, you were listening for PullRequestRescopedEvent, or some other event that can happen multiple times for the same PullRequest, you may need to lock. It’s up to you to decide whether you need to, though, based on what you’re doing. Should your code be allowed to run concurrently if the event you’re listening to is raised multiple times (potentially on multiple nodes) in a “short” period of time? (Note that, if you’re using PullRequestLock, you should never do so directly on an event thread. Ensure your @EventListener submits the task to an ExecutorService or otherwise moves the processing off the event thread.)
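A rough sketch of that pattern (the listener class, lock name and injected ExecutorService are placeholders I’ve made up, not anything prescribed by the API): the @EventListener method only hands the work off to an executor, and the lock is acquired on the executor thread rather than the event thread.

    import java.util.concurrent.ExecutorService;

    import com.atlassian.bitbucket.concurrent.LockService;
    import com.atlassian.bitbucket.event.pull.PullRequestRescopedEvent;
    import com.atlassian.bitbucket.pull.PullRequest;
    import com.atlassian.bitbucket.util.UncheckedOperation;
    import com.atlassian.event.api.EventListener;

    public class RescopeListener {

        private final ExecutorService executorService;
        private final LockService lockService;

        public RescopeListener(ExecutorService executorService, LockService lockService) {
            this.executorService = executorService;
            this.lockService = lockService;
        }

        @EventListener
        public void onRescoped(PullRequestRescopedEvent event) {
            PullRequest pullRequest = event.getPullRequest();
            // Hand off immediately so the cluster lock is never acquired on the event thread
            executorService.submit(() -> {
                lockService.getPullRequestLock("my-plugin:rescope").withLock(pullRequest,
                        new UncheckedOperation<Void>() {
                            @Override
                            public Void perform() {
                                // Re-retrieve the pull request and do the real processing here
                                return null;
                            }
                        });
            });
        }
    }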

Another possibility would be where your app performs some form of background processing which is allowed to run on multiple nodes concurrently in a Data Center installation. You could use a PullRequestLock to ensure the processors on each node never modify the same PullRequest concurrently if doing so would be problematic.

Where possible, it’s best to design your app so that it doesn’t need cluster-wide locks. However, that’s not always possible.

I thought the API may handle this, but I don’t think it does.

Our API absolutely handles concurrency “correctly” (barring any bugs I don’t know about 🙂). That handling may take the form of concurrent calls working without issue (adding comments, for example), or throwing an exception (calling PullRequestService.update, where concurrent updates would result in nondeterministic final state).

Our API can’t really know the big picture for how you’re using it, though. Only you, the app developer, know whether you want your logic to be able to run concurrently, or whether it might produce unexpected output or other, more subtle issues. PullRequestLock is one tool in your toolbox for controlling concurrency to meet your needs.

Hope this helps,
Bryan Turner
Atlassian Bitbucket

When updating the same pull request almost simultaneously from two separate sources:

  1. the installed app (Workzone): an executorService thread, started from a pull request opened event listener, adding the configured reviewers (via Workzone)
  2. an external app triggered via a webhook (pull request created message), modifying the pull request description via the REST API

we see a DB error:

2018-12-10 10:34:53,662 ERROR [threadpool:thread-45] [xxx-username] [xxx-token-1] [xxx-token-2] [xxx-IP-address],[xxx-IP-address] "POST /projects/[xxx-project]/repos/[xxx-repo]/pull-requests HTTP/1.1" o.h.e.j.batch.internal.BatchingBatch HHH000315: Exception executing batch [org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1], SQL: update sta_pull_request set closed_timestamp=?, description=?, from_hash=?, locked_timestamp=?, rescoped_timestamp=?, pr_state=?, title=?, to_branch_name=?, to_hash=?, to_branch_fqn=?, updated_timestamp=?, entity_version=? where id=? and entity_version=?
2018-12-10 10:34:53,664 ERROR [threadpool:thread-45] [xxx-username] [xxx-token-1] [xxx-token-2] [xxx-IP-address],[xxx-IP-address] "POST /projects/[xxx-project]/repos/[xxx-repo]/pull-requests HTTP/1.1" c.i.w.listener.RepoEventListener can't add designated reviewers
com.atlassian.bitbucket.DataStoreException: A database error has occurred.
	at com.atlassian.stash.internal.aop.ExceptionRewriteAdvice.afterThrowing(ExceptionRewriteAdvice.java:45)
	at com.atlassian.stash.internal.pull.InternalPullRequestParticipantHelper.addParticipant(InternalPullRequestParticipantHelper.java:293)
	at com.atlassian.stash.internal.pull.InternalPullRequestParticipantHelper.lambda$updateParticipants$12(InternalPullRequestParticipantHelper.java:563)
	at java.lang.Iterable.forEach(Iterable.java:75)
	at com.atlassian.stash.internal.pull.InternalPullRequestParticipantHelper.updateParticipants(InternalPullRequestParticipantHelper.java:563)
	at com.atlassian.stash.internal.pull.InternalPullRequestParticipantHelper.setReviewers(InternalPullRequestParticipantHelper.java:252)
	at com.atlassian.stash.internal.pull.DefaultPullRequestService.update(DefaultPullRequestService.java:1102)
	at com.atlassian.plugin.util.ContextClassLoaderSettingInvocationHandler.invoke(ContextClassLoaderSettingInvocationHandler.java:26)
	at com.izymes.workzone.listener.RepoEventListener$6.perform(RepoEventListener.java:364)
	at com.izymes.workzone.listener.RepoEventListener$6.perform(RepoEventListener.java:354)
	at com.atlassian.stash.internal.concurrent.ClusterMappedLock.withLock(ClusterMappedLock.java:49)
	at com.atlassian.stash.internal.concurrent.ClusterPullRequestLock.withLock(ClusterPullRequestLock.java:25)
	at com.atlassian.stash.internal.concurrent.ClusterPullRequestLock.withLock(ClusterPullRequestLock.java:30)
	at com.izymes.workzone.listener.RepoEventListener.updatePullRequestReviewers(RepoEventListener.java:354)
	at com.izymes.workzone.listener.RepoEventListener.access$800(RepoEventListener.java:99)
	at com.izymes.workzone.listener.RepoEventListener$2.call(RepoEventListener.java:204)
Caused by: org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
	at org.hibernate.jdbc.Expectations$BasicExpectation.checkBatched(Expectations.java:67)
	at org.hibernate.jdbc.Expectations$BasicExpectation.verifyOutcome(Expectations.java:54)
	at org.hibernate.engine.jdbc.batch.internal.BatchingBatch.checkRowCounts(BatchingBatch.java:141)
	at org.hibernate.engine.jdbc.batch.internal.BatchingBatch.performExecution(BatchingBatch.java:116)
	at org.hibernate.engine.jdbc.batch.internal.BatchingBatch.doExecuteBatch(BatchingBatch.java:97)
	at org.hibernate.engine.jdbc.batch.internal.AbstractBatchImpl.execute(AbstractBatchImpl.java:147)
	at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.executeBatch(JdbcCoordinatorImpl.java:212)
	at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:618)
	at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:463)
	at org.hibernate.event.internal.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:337)
	at org.hibernate.event.internal.DefaultAutoFlushEventListener.onAutoFlush(DefaultAutoFlushEventListener.java:50)
	at org.hibernate.internal.SessionImpl.autoFlushIfRequired(SessionImpl.java:1398)
	at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1483)
	at org.hibernate.query.internal.AbstractProducedQuery.doList(AbstractProducedQuery.java:1445)
	at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1414)
	at org.hibernate.query.internal.AbstractProducedQuery.uniqueResult(AbstractProducedQuery.java:1457)
	at com.atlassian.stash.internal.watcher.HibernateWatcherMappingDao.findByWatchableAndUser(HibernateWatcherMappingDao.java:106)
	at com.atlassian.stash.internal.watcher.DefaultWatcherService.addUserAsWatcher(DefaultWatcherService.java:74)

When the webhook is disabled and the external app is not notified, the problem goes away.

Workzone uses the pull request lock, but that doesn’t seem to help.

It’s difficult to diagnose/debug without additional details, especially since I don’t have access to your code to see the context of what you’re doing, but my guess would be that you’ve got stale objects.

  • Do you start your database transaction inside the lock, or outside it?
  • Do you re-retrieve the pull request once you hold the lock?

Hi Bryan,

Thanks for getting back so quickly! Please find my answers inline and the code snippet below.

  • Do you start your database transaction inside the lock, or outside it? >> I would say inside.
  • Do you re-retrieve the pull request once you hold the lock? >> Yes, we retrieve the latest version before updating:
        lockService.getPullRequestLock(RepoEventListener.class.getName()).withLock(pullRequest, new UncheckedOperation<PullRequest>() {
            @Override
            public PullRequest perform() {
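                // Re-retrieve the pull request inside the lock so the update uses its latest version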
                PullRequest updatedPullRequest = pullRequestService.getById(toRepository.getId(), pullRequest.getId());
                PullRequestUpdateRequest updateRequest = new PullRequestUpdateRequest.Builder(toRepository.getId()
                        , updatedPullRequest.getId()
                        , updatedPullRequest.getVersion())
                        .title(updatedPullRequest.getTitle())
                        .description(updatedPullRequest.getDescription())
                        .reviewers(existingReviewers).build();
                return pullRequestService.update(updateRequest);
            }
        });

I found the following on the PullRequestLock page of the Bitbucket Server API docs:

Note: When used by plugin developers, no instance of this lock can ever prevent the host application from performing any of its own processing on any aspect of a pull request.

@bturner I would like to add that the problem occurs in Stash 5.6.5. Moreover, Ulrich has answered your questions. Do you need more information?

@izymesdev

So, just to make sure I understand:

  • One of the updates is being done using Bitbucket Server’s built-in REST endpoint
  • The other is being done from inside a PullRequestLock with RepoEventListener.class.getName() for its name

The REST API functionality won’t be acquiring any pull request lock and will not be blocked by any pull request lock acquired somewhere else. That’s documented on PullRequestLock explicitly:

Note: When used by plugin developers, no instance of this lock can ever prevent the host application from performing any of its own processing on any aspect of a pull request.

Your locking will prevent Workzone from attempting to update the same pull request on two threads concurrently, but it won’t prevent any updates performed by the host app.

Assuming I have the situation right, the only answer I can offer is that you need to build in handling for optimistic locking failures like these, because there’s (intentionally) no way for the Workzone plugin to prevent the host application from updating a pull request.
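For illustration only (a rough sketch, not Workzone’s actual code; the class, method and retry count are invented, and it catches the DataStoreException seen in the stack trace above, though depending on where the conflict is detected a version “out of date” exception may be thrown instead): re-fetch the pull request on every attempt so the update request always carries the current version.

    import com.atlassian.bitbucket.DataStoreException;
    import com.atlassian.bitbucket.pull.PullRequest;
    import com.atlassian.bitbucket.pull.PullRequestService;
    import com.atlassian.bitbucket.pull.PullRequestUpdateRequest;

    class RetryingPullRequestUpdater {

        private final PullRequestService pullRequestService;

        RetryingPullRequestUpdater(PullRequestService pullRequestService) {
            this.pullRequestService = pullRequestService;
        }

        PullRequest updateWithRetry(int repositoryId, long pullRequestId) {
            DataStoreException lastFailure = null;
            for (int attempt = 0; attempt < 3; attempt++) {
                // Re-fetch so the builder gets the pull request's current version
                PullRequest latest = pullRequestService.getById(repositoryId, pullRequestId);
                try {
                    return pullRequestService.update(new PullRequestUpdateRequest.Builder(
                            repositoryId, latest.getId(), latest.getVersion())
                            .title(latest.getTitle())
                            .description(latest.getDescription())
                            // .reviewers(...) would go here, as in the snippet above
                            .build());
                } catch (DataStoreException e) {
                    // The host app (or another node) updated the pull request concurrently;
                    // loop and try again against the new version
                    lastFailure = e;
                }
            }
            throw lastFailure;
        }
    }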

Best regards,
Bryan Turner
Atlassian Bitbucket
