RFC-128: Local development mocks for Forge Storage


RFCs are a way for Atlassian to share what we’re working on with our valued developer community.

It’s a document for building shared understanding of a topic. It expresses a technical solution, but can also communicate how it should be built or even document standards. The most important aspect of an RFC is that a written specification facilitates feedback and drives consensus. It is not a tool for approving or committing to ideas, but more so a collaborative practice to shape an idea and to find serious flaws early.

Please respect our community guidelines: keep it welcoming and safe by commenting on the idea not the people (especially the author); keep it tidy by keeping on topic; empower the community by keeping comments constructive. Thanks!


Project Summary

This RFC presents our plan for Forge Storage local development mocks.

  • Publish: 24th February 2026

  • Discuss: 10th March 2026

  • Resolve: 10th March 2026


Problem

With the introduction of monetisation for Forge, all traffic to Forge Storage services (tunnelled or otherwise) will be charged. Given the implications of this change for the partner and developer community and their development costs, we’re exploring tooling to avoid incurring costs during app development and CI when real environments aren’t required, as well as general improvements to the local development loop.


Proposed Plan

As a first step, we are looking to provide development mocks for our different storage products, in particular the KVS and SQL offerings. Below, we outline the capabilities we intend to provide:

Capabilities

  • Avoid incurring compute usage costs - For test scenarios that don’t require a real environment, local mocks allow you to bypass monetised resource usage.

  • Faster test execution - Give developers a faster development loop for testing app functionality, ensuring the tool’s usability in both local and CI environments for large-scale testing.

  • Bulk seeding data - Enable developers to quickly set up and pack down the underlying state of a data store to support different test scenarios.
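To make the bulk seeding capability concrete, here is a rough sketch of the kind of seed/reset helper such a mock could enable. All names here (`InMemoryKVS`, `seed`, `reset`) are illustrative assumptions, not a committed API:

```javascript
// Illustrative sketch only: an in-memory stand-in for a KVS-style store,
// with bulk seed and pack-down helpers. Not the actual tool's API.
class InMemoryKVS {
  constructor() { this.data = new Map(); }
  async set(key, value) { this.data.set(key, value); }
  async get(key) { return this.data.get(key); }
  async delete(key) { this.data.delete(key); }
}

// Bulk-load a fixture object in one call, e.g. at the start of a test scenario.
async function seed(store, fixture) {
  for (const [key, value] of Object.entries(fixture)) {
    await store.set(key, value);
  }
}

// Pack the store back down to a known-empty state between scenarios.
function reset(store) { store.data.clear(); }
```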


What we are not doing

Complete parity with real environments - Local development mocks won’t include all features and characteristics of our real services; for example, request throughput limits, connection limits, and some platform-level constraints won’t be replicated.

Open source - At the moment, we are not planning to open-source these tools, but we will consider it in the future, based on demand.


Asks

We understand that there will be a wide variety of use cases. In light of this, we are reaching out to gather feedback and ensure that what we build aligns with the base needs of the developer community and business partners.

Engage with us on the following questions:

  1. What is the most important Forge storage offering you work with (KVS, SQL, OS)?

  2. Which of the proposed capabilities would add the most value to your workflow?

  3. Are there any capabilities not listed that would be important to your specific needs?



Hey @EthanFernandez

Thank you for the RFC, this is a very welcome improvement.

The most important service for us would be FSQL.

The biggest value we hope to get out of this is having full control over the DB. Being able to seed and update data, and to connect to the DB with a client of our choice.

Can you please elaborate on what features will be missing? Will it still be a TiDB-based solution with HTTP access (for the application)? And will it have the same limitations production FSQL has (no transactions and no foreign keys)?

And please confirm that we will be able to open a direct connection to the database.


@EthanFernandez the proposal for local development mocks for Forge Storage sounds great! We’re currently using DynamoDB and make heavy use of DynamoDB local for unit and integration tests running locally and on CI. Migrating to Forge KVS or Forge SQL, it would be a great help if we could also migrate our test cases without having to use Forge Storage on the actual Forge environment.

From our experience, parity with features is more important than parity with environment specific limits. Specifically, please make sure it supports all features, e.g. kvs.set options like TTL and transactions. Request throughput and other limits that can be considered environment specific are usually not relevant when it comes to unit and integration tests.
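As a sketch of what TTL parity in a mock could look like: the snippet below lazily expires entries on read using an injectable clock, so tests can fast-forward time. The option name `ttlSeconds` and the class are invented for illustration; the real `kvs.set` option shape may differ.

```javascript
// Illustrative sketch: TTL-aware set/get for a KVS mock.
class TtlKVS {
  constructor(now = () => Date.now()) {
    this.now = now;          // injectable clock so tests can fast-forward
    this.data = new Map();   // key -> { value, expiresAt }
  }
  async set(key, value, options = {}) {
    const expiresAt = options.ttlSeconds != null
      ? this.now() + options.ttlSeconds * 1000
      : Infinity;
    this.data.set(key, { value, expiresAt });
  }
  async get(key) {
    const entry = this.data.get(key);
    if (!entry) return undefined;
    if (this.now() >= entry.expiresAt) {  // lazily expire on read
      this.data.delete(key);
      return undefined;
    }
    return entry.value;
  }
}
```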

Is support for Forge Custom Entity Store planned as well?


This is great. We currently have our own pure JS mock of KVS storage that we use to implement our tests.

But what our mock is still missing:

  • getMany pagination with cursor, limit, and so on
  • startsWith/beginsWith query mocks that actually work
  • Actual last-write-wins logic for KVS

That would be awesome :rocket:
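For illustration, the cursor pagination and startsWith behaviour described above could be mocked along these lines. The `query` method and its option names are assumptions for this sketch, not the real `@forge/kvs` API:

```javascript
// Illustrative sketch: cursor-based listing with a startsWith filter.
class QueryableKVS {
  constructor() { this.data = new Map(); }
  async set(key, value) { this.data.set(key, value); }
  // Returns { results, nextCursor }; pass nextCursor back in to page onward.
  async query({ startsWith = '', limit = 10, cursor = 0 } = {}) {
    const keys = [...this.data.keys()]
      .filter((k) => k.startsWith(startsWith))
      .sort();  // stable key order so paging is deterministic
    const page = keys.slice(cursor, cursor + limit);
    return {
      results: page.map((key) => ({ key, value: this.data.get(key) })),
      nextCursor: cursor + limit < keys.length ? cursor + limit : undefined,
    };
  }
}
```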

Hi,

Thanks for sharing this RFC — this is a very welcome initiative, especially with the introduction of monetisation for Forge Storage.

I primarily work with KVS alongside Forge Compute. I store key/value data (sometimes with relatively large payloads) and run end-to-end tests using Playwright. Because of this, local KVS mocks would bring significant value, particularly in CI.

Most valuable capabilities

• Avoiding compute and storage costs in CI
This is the biggest benefit for my workflow. E2E tests currently interact with real KVS storage, which will become costly. A reliable local mock would allow us to run full regression suites without incurring unnecessary usage.

• Faster feedback loop
Removing dependency on real Forge environments would speed up debugging and automated test runs.

• Bulk seeding and reset
Being able to seed and reset storage state between scenarios is especially useful for E2E testing.


Additional capabilities that would be important

To make the mock truly useful for integration and E2E testing, the following would be highly valuable:

  • Simulation of rate limits

  • Support for error injection (timeouts, quota exceeded, internal errors)

  • Basic permission simulation

  • Configurable latency simulation

  • Optional enforcement of storage size constraints

While full production parity can be hard to simulate, configurable “realistic” behaviour would significantly increase confidence when testing KVS-heavy applications locally.
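As a sketch of what configurable "realistic" behaviour could look like, the wrapper below adds latency and probabilistic error injection on top of any mock store. The options `latencyMs` and `failureRate` are invented for illustration, not a proposed API:

```javascript
// Illustrative sketch: wrap a mock store with configurable latency and
// error injection for more realistic integration tests.
function withFaults(store, { latencyMs = 0, failureRate = 0, random = Math.random } = {}) {
  const delay = () => new Promise((resolve) => setTimeout(resolve, latencyMs));
  const maybeFail = () => {
    if (random() < failureRate) throw new Error('Injected storage failure');
  };
  return {
    async get(key) {
      await delay();
      maybeFail();
      return store.get(key);
    },
    async set(key, value) {
      await delay();
      maybeFail();
      return store.set(key, value);
    },
  };
}
```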

Looking forward to seeing this evolve.


Hi,

and thanks for the RFC. While I like the idea of being able to mock storage locally (also without paying for it), I am concerned about the performance aspect, specifically, the mock implementation being faster than the actual implementation.

We use both KVS and SQL quite heavily and have run into several performance problems. Had we started with mock data, we would have discovered these issues much later than we did.

Because of this, an option to make the mock behave like the actual implementation (in terms of speed, API limits, and reliability) would be very helpful.

Something like a “realistic mode” that simulates real-world latency and constraints. Otherwise, I see a real danger of developers building an app that works great locally and then finding out it’s too slow in production.

Best regards,
Christopher


Hi, thanks for publishing this RFC.

We build a Forge-native Confluence app with heavy reliance on Forge Storage. Here’s our feedback.

What is the most important Forge storage offering you work with?

Forge SQL; it’s our primary persistence layer.

Which of the proposed capabilities would add the most value?

All three are valuable, but in priority order:

  1. Avoiding compute costs. Our CI runs storage-heavy test suites in parallel. With monetised compute, those costs will add up.
  2. Bulk seeding data. Our integration tests currently lack a direct way to seed or inspect Forge SQL state. Being able to populate and tear down tables per test scenario would significantly improve our testing workflow.
  3. Faster test execution. Storage interactions during integration tests involve significant network overhead.

Are there any capabilities not listed that would be important?

  • TiDB dialect fidelity, as our SQL targets the TiDB dialect that Forge SQL provides. If the mock uses a different engine under the hood, query compatibility issues would undermine its value.
  • Drop-in compatibility. The mock should work as a drop-in replacement for the @forge/sql package, so existing abstraction layers work without modification.
  • Being able to simulate rate limiting for stress testing will be valuable.
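On the drop-in compatibility point: one way module substitution already works in unit tests today is aliasing, e.g. via Jest’s `moduleNameMapper`. The mock module path below is hypothetical; only the Jest config option itself is real:

```javascript
// jest.config.js - redirect imports of @forge/sql to a local mock module,
// so app code and abstraction layers run unmodified in tests.
// './test/mocks/forge-sql.js' is a hypothetical local mock, not published tooling.
module.exports = {
  moduleNameMapper: {
    '^@forge/sql$': '<rootDir>/test/mocks/forge-sql.js',
  },
};
```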

Thank you for this RFC, looks great! For us Forge KVS is the most important storage offering, and the capabilities for avoiding compute usage costs and bulk seeding data are most valuable. Are you able to share any information on how we would set up an app to work with local mocks (do you expect code changes to be required or would storage requests be redirected to a local service via an environment variable etc.)?

All three storage engines are important. If forced to stack-rank, I’d say KVS, OS, SQL (which clearly doesn’t align well with others).

All three of the proposed capabilities are important. If forced to stack-rank, I’d say faster test execution and bulk seeding data (these are both enabling moving testing closer to the change, which is important), then avoiding cost.

I do want to note that, as others have said, the ability to inject failures / faults / latency is important for complete testing. Do the mocks have value without this ability? Yes. Do they have a lot more value with it? Also yes.

:clap:

That would be amazing and even better if error and rate limit event probability distribution was configurable.

This would be a good improvement. Here are some details of what our current e2e setup is and how this RFC would help.

Current e2e setup:

  • Feature branch runs CI (Bitbucket job)
  • Creates a new Forge environment for an app
  • Deploys the app to the new environment (app uses Forge Containers)
  • Installs the environment to a Confluence instance
  • Creates a web trigger that allows running SQL commands from outside of the app against the FSQL database
  • Runs end-to-end tests with Playwright targeting the new installation
  • Each test uses the web trigger to set up the state in the FSQL database

Challenges with current setup

  • No direct database access from outside of the app → Need to use web triggers
  • Tests have to run sequentially so that FSQL is not rate limited
  • The test performance is generally slow since FSQL queries are quite slow in the development environments (at least when used with Forge Containers)

What I would like to see

  • Mock database for Forge SQL without rate limits
  • Access to that database from outside of the app
  • Related to “Bulk seeding data”: preferably have a way to do different seeding for different test scenarios
  • To run e2e tests in parallel you generally need to be careful about what data gets wiped during a test case, so the seeding should also allow fine-grained control (it should not just wipe the entire database)
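One way the fine-grained wiping concern could be handled is scoping each scenario’s data under its own prefix, so parallel tests only tear down their own rows. Everything named below (`ScopedStore`, `seed`, `teardown`) is invented for this sketch:

```javascript
// Illustrative sketch: scenario-scoped seeding so parallel tests only wipe
// their own keys, never the whole store.
class ScopedStore {
  constructor(store, scope) {
    this.store = store;  // shared Map standing in for the database
    this.scope = scope;  // unique per test scenario, e.g. the scenario name
  }
  key(k) { return `${this.scope}:${k}`; }
  seed(fixture) {
    for (const [k, v] of Object.entries(fixture)) this.store.set(this.key(k), v);
  }
  get(k) { return this.store.get(this.key(k)); }
  // Wipe only this scenario's rows, leaving parallel scenarios untouched.
  teardown() {
    for (const k of [...this.store.keys()]) {
      if (k.startsWith(`${this.scope}:`)) this.store.delete(k);
    }
  }
}
```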

Thanks for the feedback @ErkkiLepre

Will it still be a TiDB-based solution with HTTP access (for the application)?

Regarding the solution approach for SQL, while it’s not yet finalised we are prioritising HTTP access for the app in addition to direct access for seeding test data as part of a first release. We’ve noted your preference for direct access supporting a client of your choice. Could you tell us a bit more about how you would use this client as part of your test setup and execution?

Can you please elaborate on what features will be missing?

Our first release will be prioritising API parity and improving the local dev loop / test execution experience while avoiding unnecessary costs. Certain platform behaviours and constraints won’t be included as part of this first release, such as:

  • rate limits and DB connection limits

  • full replication of all error scenarios

  • permissions / auth checks

We are however still in the design phase, and we will be considering additional features like these for future releases based on feedback and demand.


Thank you for the feedback and questions @BenRomberg .

Is support for Forge Custom Entity Store planned as well?

Yes, we also plan to support Custom Entity Store, though the first release likely won’t have the same level of validation on entity properties and indexes that currently exists during app deployment events.

Thank you!


Thanks for the feedback @clouless! We do intend to prioritise API parity in a first release including the points you’ve mentioned above. Cheers.

Thanks for the detailed feedback @FlixAndre.
Regarding the additional capabilities you’ve mentioned, it’s likely these won’t be included in a first release but we are considering capabilities like these for later releases based on demand.
As a follow up, would like-for-like rate limits be suitable, or would you want configurable limits similar to what you’ve raised about latency simulation? At the very least you would be able to toggle these on or off.

Thanks @chrschommer! We definitely understand the concerns. Initially we would be focusing on API parity and improving the local dev loop / test execution experience while avoiding unnecessary costs. It won’t be a complete replacement for testing in real environments, but we will clearly document limitations, and over time we hope to address many of those limitations based on demand.

On that note, I have some follow up questions if you have a moment.

  • Would latency modelling be the most important feature to you, and would you want it configurable or simply as accurate as possible?

  • Would rate limiting and / or error injection be valuable for you?

Thanks again!

Thanks for taking the time to provide this feedback @jonlopezdeguerena, it’s really valuable.

Bulk seeding data is definitely a priority for us across each of the storage offerings. Given SQL is your primary concern, would you want direct access to the local database, or would a convenience API be preferred / just as valuable?

TiDB dialect fidelity, as our SQL targets the TiDB dialect that Forge SQL provides.

API parity is another big priority for us for the first release and we consider it especially important for SQL.

Drop-in compatibility

This aligns with our current plans.

Being able to simulate rate limiting for stress testing will be valuable

While I don’t expect this will be part of a first release, it is something we’ve already highlighted internally as a valuable addition and based on the feedback so far it appears the community agrees. We’ll keep this in mind.

Thanks again.

Hi @ac-tom thank you for taking the time to comment.

Are you able to share any information on how we would set up an app to work with local mocks (do you expect code changes to be required or would storage requests be redirected to a local service via an environment variable etc.)?

While we’re still in the process of deciding on the final solution, we do not expect any code changes to be required. We hope to share more on this soon.

Thank you!

Hi @billjamison thank you for taking the time to reply.

I do want to note that the ability to inject failures / faults / latency is important for completing testing - as others have said, this is important. Do the mocks have value without this ability? yes. Do they have a lot more value with it? yes.

While we do not have plans to support error injection or latency simulation on first release, we can see that these are popular requests from developers and would like to prioritise them for a future release.

Thank you.


Direct access to the DB is very useful:

  • For seeding test-specific data in E2E test automation
  • When troubleshooting and debugging. The SQL interface in Dev Console is slow, doesn’t show TEXT columns and doesn’t allow modifying data.