Avoid polluting caches on bulk operations

Hi!

I need to implement a migration task for a macro, and the task will potentially go through all the pages on a Confluence instance.

I have had a look at https://developer.atlassian.com/server/confluence/hibernate-session-and-transaction-management-for-bulk-operations/ and https://developer.atlassian.com/server/confluence/hibernate-sessions-and-transaction-management-guidelines/ , and I think I know how to avoid problems with overly large transactions.
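To give some context, the batching I have in mind looks roughly like the sketch below. It is only an outline: each small batch of pages is processed in its own transaction via SAL's TransactionTemplate so that no single transaction grows too large. The batch size, the source of the page IDs, and the migrateMacro() helper are placeholders of mine, not real APIs.

import java.util.List;

import com.atlassian.confluence.pages.Page;
import com.atlassian.confluence.pages.PageManager;
import com.atlassian.sal.api.transaction.TransactionTemplate;

public class MacroMigrationTask {

    private static final int BATCH_SIZE = 50; // arbitrary; would be tuned for the instance

    private final PageManager pageManager;
    private final TransactionTemplate transactionTemplate;

    public MacroMigrationTask(PageManager pageManager, TransactionTemplate transactionTemplate) {
        this.pageManager = pageManager;
        this.transactionTemplate = transactionTemplate;
    }

    public void migrateAll(List<Long> pageIds) {
        // One transaction per batch keeps individual transactions small,
        // as the bulk-operation guidelines recommend.
        for (int from = 0; from < pageIds.size(); from += BATCH_SIZE) {
            List<Long> batch = pageIds.subList(from, Math.min(from + BATCH_SIZE, pageIds.size()));
            transactionTemplate.execute(() -> {
                for (Long pageId : batch) {
                    Page page = pageManager.getPage(pageId);
                    if (page != null) {
                        migrateMacro(page); // placeholder for the actual macro rewrite
                    }
                }
                return null;
            });
        }
    }

    private void migrateMacro(Page page) {
        // the actual migration of the macro body would go here
    }
}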

However, I would also like to avoid polluting e.g. the Page cache by reading content just for migration purposes. The migration task will use each content entity only once and move on, so the cached items should be the ones actually used by users of the instance. There is also no hurry to complete the migration: all the data remains usable while the task is running, and the only change that matters from a user's point of view is that a space can be correctly exported and imported only after the migration has finished.

I see that, for example, DefaultImportExportManager.exportAs() has this code snippet:

try (Cleanup cleanup = temporarilySetCacheMode(CacheMode.IGNORE)) {
    return doExport(context, progress);
}

CacheMode.IGNORE seems appropriate for my case as well, and it should keep the second-level / query caches more effective for the actual users.
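Concretely, I am thinking of copying that pattern around each batch of the migration, something like the sketch below. Imports are omitted because I am not sure which packages expose Cleanup and SessionCacheModeThreadLocal to plugin code, and processBatch() is just a placeholder for the per-batch work shown above.

try (Cleanup cleanup = SessionCacheModeThreadLocal.temporarilySetCacheMode(CacheMode.IGNORE)) {
    // load and migrate one batch of pages without touching the
    // second-level / query caches
    processBatch(batch);
}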

So my question is: does anybody know whether calling SessionCacheModeThreadLocal.temporarilySetCacheMode() from a plugin is recommended when running a bulk operation like this?