How would you architect a storage-heavy Forge app for Jira/JSM (managing dynamic project configuration, routing rules, automation workflows, and audit history) that currently uses the legacy Storage API with "block" keys like data-block, data-block-1, data-block-2 (each storing arrays of ~20 objects), in order to avoid hitting storage and invocation limits as the data grows? Specifically, would you recommend moving to a one-entity-per-key pattern (e.g. config:{id}, rule:{id}, workflow:{id}) with index keys and issue/project snapshots (e.g. snapshot:issue:{issueKey} / snapshot:project:{projectKey}), or migrating to Custom Entities / KVS? And how would you approach that migration in a live app that already has a lot of data and active customers?
Hi @PedroMorocho,
We recommend migrating to Custom Entities in this case. You can use a scheduled trigger that runs periodically to migrate data over in batches, tracking the migrated keys in KVS so reruns don't process them again. If a batch needs more execution time than the usual 25s invocation limit, you can use the Async events API to split the work up. To keep live data consistent during the migration, implement dual writes (write to both the existing storage and Custom Entities) in the app.
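A minimal sketch of the dual-write plus batch-migration pattern described above. The `Map`/`Set` stores here are hypothetical in-memory stand-ins for the legacy Storage API, the new one-entity-per-key layout, and the KVS tracking keys; a real app would replace them with the actual Forge storage calls:

```typescript
// Hypothetical in-memory stand-ins for the real storage layers.
type Entity = { id: string; [k: string]: unknown };

const legacyStore = new Map<string, Entity[]>(); // block keys -> arrays of objects
const entityStore = new Map<string, Entity>();   // one entity per key (Custom Entities)
const migratedBlocks = new Set<string>();        // migrated-key tracking (KVS in the real app)

// Dual write: every save goes to both the legacy block and the new
// layout, so reads stay correct for the whole migration window.
function saveRule(blockKey: string, rule: Entity): void {
  const block = legacyStore.get(blockKey) ?? [];
  const i = block.findIndex((r) => r.id === rule.id);
  if (i >= 0) block[i] = rule;
  else block.push(rule);
  legacyStore.set(blockKey, block);
  entityStore.set(`rule:${rule.id}`, rule); // new one-entity-per-key write
}

// One scheduled-trigger run: migrate up to `limit` unmigrated blocks,
// recording each finished block so the next run skips it. Returns the
// number of blocks migrated; 0 means this tenant is done.
function migrateBatch(limit: number): number {
  let done = 0;
  for (const [blockKey, block] of legacyStore) {
    if (done >= limit) break;
    if (migratedBlocks.has(blockKey)) continue;
    for (const rule of block) entityStore.set(`rule:${rule.id}`, rule);
    migratedBlocks.add(blockKey);
    done++;
  }
  return done;
}
```

Keeping the per-run `limit` small is what lets each invocation stay under the time budget; if a single block is still too large, that's where pushing the work onto the Async events queue comes in.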
Migrate-on-read is what I would do: it guarantees the rest of the app logic only ever deals with migrated data, and it spreads the migration workload out over normal traffic.
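The migrate-on-read path can be sketched like this; as before, the maps are hypothetical stand-ins for the legacy block storage and the new per-entity keys, and the key names mirror the `rule:{id}` scheme from the question:

```typescript
type Rule = { id: string; enabled: boolean };

// Hypothetical stand-ins for legacy block storage and Custom Entities.
const legacyBlocks = new Map<string, Rule[]>();
const entities = new Map<string, Rule>();

// Read path: prefer the new key. On a miss, load the legacy block,
// migrate every rule in it in one pass, then serve the request from
// the new layout. After this, callers never see unmigrated data.
function getRule(blockKey: string, id: string): Rule | undefined {
  const hit = entities.get(`rule:${id}`);
  if (hit) return hit;
  const block = legacyBlocks.get(blockKey);
  if (!block) return undefined;
  for (const r of block) entities.set(`rule:${r.id}`, r);
  legacyBlocks.delete(blockKey); // or keep it until a later cleanup pass
  return entities.get(`rule:${id}`);
}
```

Migrating the whole block on a single miss (rather than one object at a time) keeps the number of legacy reads bounded by the number of blocks, not the number of objects.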
If you have a reliable way to collect data from your app installations (logs aren't, as customers can turn them off), I'd use a scheduled trigger to scan the old keys just to determine whether the migration has completed for all customers, so you know when you can safely remove the migration code.
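A sketch of the completion check such a scheduled trigger could run. `listLegacyKeys` is a hypothetical helper that filters a key listing by the `data-block` prefix, standing in for however the real app enumerates its legacy keys:

```typescript
// Hypothetical prefix scan over a tenant's storage keys; a real Forge
// app would page through its key listing instead of taking an array.
function* listLegacyKeys(allKeys: Iterable<string>): Generator<string> {
  for (const k of allKeys) {
    if (k.startsWith('data-block')) yield k;
  }
}

// The scheduled trigger runs this per installation: if any legacy
// block key survives, the migration there isn't finished yet.
function migrationComplete(allKeys: Iterable<string>): boolean {
  for (const _ of listLegacyKeys(allKeys)) return false;
  return true;
}
```

The trigger would then report the per-tenant result through whatever reliable channel you have (an analytics event, a status key your backend can query), and once every installation reports complete you can drop the dual writes and the migrate-on-read path.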