How to implement a cache in a Data Center plugin

I am trying to use the atlassian-cache-api in a Jira 8.5.4 Data Center plugin. I don't know what I am doing wrong, but the cache is not replicating between the nodes. Below is my cache class.

package com.custom.jira.cache;

import com.atlassian.cache.Cache;
import com.atlassian.cache.CacheLoader;
import com.atlassian.cache.CacheSettings;
import com.atlassian.cache.CacheSettingsBuilder;
import com.atlassian.plugin.spring.scanner.annotation.component.JiraComponent;
import com.atlassian.plugin.spring.scanner.annotation.imports.ComponentImport;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.annotation.Nonnull;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.inject.Inject;
import java.util.concurrent.TimeUnit;

@JiraComponent
public class CacheManagerImpl implements CacheManager {

    private static final Logger log = LoggerFactory.getLogger(CacheManagerImpl.class);
    private static final String CACHE_NAME = "myCache";
    private Cache<CacheKeys, String> cache;
    private final com.atlassian.cache.CacheManager cacheManager;

    @Inject
    public CacheManagerImpl(@ComponentImport com.atlassian.cache.CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    @PostConstruct
    public void afterPropertiesSet() {
        log.info("Enabling {}", this.getClass().getName());
        init();
    }

    @PreDestroy
    public void destroy() {
        log.info("Disabling {}", this.getClass().getName());
        save();
    }

    @Override
    public void init() {
        // remote() marks the cache as cluster-wide; replicateViaCopy() asks for the
        // cached values themselves to be replicated to the other nodes.
        CacheSettings cacheSettings = new CacheSettingsBuilder()
                .remote()
                .replicateViaCopy()
                .expireAfterWrite(2, TimeUnit.MINUTES)
                .build();
        CacheLoader<CacheKeys, String> cacheLoader = new CacheLoader<CacheKeys, String>() {
            @Nonnull
            @Override
            public String load(@Nonnull CacheKeys keys) {
                return "value for" + keys.getDefaultValue();
            }
        };
        cache = cacheManager.getCache(CACHE_NAME, cacheLoader, cacheSettings);
    }

    @Override
    public void save() {

    }

    @Override
    public void delete() {

    }

    @Override
    public boolean isInit() {
        return cache != null;
    }

    @Override
    public String get(CacheKeys key) throws CacheNotInitializedException {
        if (cache == null) {
            throw new CacheNotInitializedException();
        }
        if (cache.containsKey(key)) {
            return cache.get(key);
        }
        return null;
    }

    @Override
    public void set(CacheKeys key, String value) {
        if (cache == null) {
            throw new CacheNotInitializedException();
        }
        cache.put(key, value);
    }
}

The set method updates the cache on only one node, not on the others. I have already gone through the links below:
https://developer.atlassian.com/server/confluence/atlassian-cache-2-overview/
https://developer.atlassian.com/server/jira/platform/developing-for-high-availability-and-clustering/
One question I have based on those links: am I supposed to be the one to invalidate the cache so that replication occurs between the nodes?
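
For reference, these appear to be the two replication modes exposed by CacheSettingsBuilder in the atlassian-cache 2.x API (my class above uses the copy variant):

CacheSettings copySettings = new CacheSettingsBuilder()
        .remote()                      // cache is cluster-wide
        .replicateViaCopy()            // the values themselves are sent to the other nodes
        .expireAfterWrite(2, TimeUnit.MINUTES)
        .build();

CacheSettings invalidationSettings = new CacheSettingsBuilder()
        .remote()
        .replicateViaInvalidation()    // only "this key changed" messages are sent;
                                       // each node rebuilds the value via its CacheLoader
        .expireAfterWrite(2, TimeUnit.MINUTES)
        .build();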

I solved my problem by using the links below to understand the concept and implement it.

https://labs.consol.de/java-caches/part-3-2-peer-to-peer-with-ehcache/index.html
https://bitbucket.org/cfuller/jira-clustering-compat-example (the code is a little old, but the concept is still the same)
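
In case it helps anyone else, here is a rough sketch of the invalidation-based approach (not my exact code; persistValue() is a hypothetical placeholder for wherever the real data is stored): configure the cache with replicateViaInvalidation(), write the new value to the backing store on a change, and then remove the cached entry so the other nodes rebuild it lazily through the CacheLoader.

// Settings: replicate invalidation messages instead of copying values across the cluster.
CacheSettings settings = new CacheSettingsBuilder()
        .remote()
        .replicateViaInvalidation()
        .expireAfterWrite(2, TimeUnit.MINUTES)
        .build();
cache = cacheManager.getCache(CACHE_NAME, cacheLoader, settings);

public void set(CacheKeys key, String value) {
    persistValue(key, value);   // hypothetical: write the new value to the real backing store
    cache.remove(key);          // the removal is replicated; other nodes reload via the CacheLoader on their next get()
}

With invalidation-based replication the value itself never has to cross the cluster; each node pulls it through the loader the first time it needs it.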

Thanks