If you build non-trivial apps on Atlassian Forge SQL, sooner or later you run into the same limitation: TiDB in Forge does not support materialized views.
This becomes painful as soon as your application needs read-heavy features built on complex joins, aggregations, or denormalized relational data.
In practice, this usually leaves you with two unattractive options: recompute expensive queries on every resolver call and risk hitting strict Forge SQL limits (like the 5s connection timeout or 16MB memory usage), or manually maintain precomputed tables and synchronization logic, which quickly turns into brittle application code.
In forge-sql-orm, I’ve implemented a two-level caching approach that acts as a virtual materialized view at the application layer.
- The first level is an invocation-scoped in-memory cache that eliminates duplicate queries during a single resolver execution.
- The second level is a persistent cross-invocation cache backed by @forge/kvs, allowing expensive results to be reused across requests.
The library provides cache-aware write operations and automatic invalidation of affected queries after data changes. This means you can effectively “materialize” the results of expensive read queries and keep your relational features responsive without building a manual synchronization subsystem.
In this article, I’ll show how this pattern works, why a single invocation cache layer is usually not enough for Forge apps, and how this “virtual materialized view” approach helps reduce repeated expensive SQL execution and keeps cacheable read paths fast within Forge platform limits.
The “Why”: Understanding the 5-Second Timeout and the 16MB Memory Limit
When you start building simple apps on Forge SQL, TiDB feels surprisingly capable. Basic queries are fast, relational modeling is straightforward, and everything works well on small datasets. But as the application grows, the query layer changes too. Instead of simple lookups, you start building read-heavy features based on joins, aggregations, sorting, and denormalized relational views.
That is usually the point where you hit the real platform boundaries.
The 5-Second Timeout
Forge SQL enforces a strict 5-second connection timeout for SELECT queries. Even if the Forge function itself still has time left to run (up to 25s), the database query can be terminated before the resolver finishes.
This creates a common and dangerous pattern: a query that performs well on a small development dataset can become unreliable or fail completely on a large production tenant. Managing this is difficult for two main reasons:
- Small datasets hide the problem: A query that looks acceptable during development can cross the timeout boundary once the tables contain hundreds of thousands or millions of rows.
- Tenant size is unpredictable: Forge apps run in a multi-tenant environment where different customers have radically different data volumes. You cannot reliably predict query behavior from your local setup alone.
The 16MB Memory Limit
Execution time is not the only constraint. Forge SQL also enforces a strict per-query memory limit of 16MB.
This means a query can still fail even when it is not particularly slow. Complex joins or memory-heavy execution strategies, such as HashJoin, may exceed the memory budget and get cancelled with an “Exceeding the allowed memory limit” error. In practice, this often shows up only on enterprise-scale tenants, making these failures especially painful to reproduce locally.
Observability, Optimization, and the Caching Alternative
Once you hit these limits, you need a clear strategy.
- Observability: You need to know which queries are slow or failing. I explored how to detect and surface these cases in my work on Practical SQL Observability for Forge Apps.
- Optimization: Some queries can be improved with better indexing or rewriting logic using EXPLAIN ANALYZE. I covered this in detail in Optimizing Forge SQL on a 600K database with TiDB EXPLAIN.
- The Caching Alternative: However, optimization is not always enough. Some queries are expensive by nature due to business requirements. As discussed in my AtlasCamp 2026 session, Making Forge SQL observable, caching becomes an architectural necessity.
The “Virtual Materialized View” Approach
The most practical way to bypass these limits is to mimic the behavior of a Materialized View at the application layer. Since TiDB in Forge does not support native materialized views, forge-sql-orm implements a Virtual Materialized View pattern using two-level caching:
- Materialized Read Results: Expensive query results are stored in @forge/kvs, creating a persistent snapshot that survives across invocations.
- Transparent Integration: Methods like selectCacheable() and executeCacheable() allow caching to happen behind the scenes, with cache keys derived from the SQL and parameters.
- Automatic Invalidation: The ORM tracks affected tables and automatically invalidates cached entries after INSERT, UPDATE, or DELETE operations.
- Batch-based Eviction: To stay within Forge platform limits, invalidation is performed in controlled batches (up to 25 per transaction), maintaining consistency without manual synchronization code.
The result is a practical alternative to native materialized views: expensive relational queries are no longer recomputed on every request, keeping your application responsive within strict platform limits.
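The batch-based eviction mentioned above comes down to splitting the set of affected cache keys into fixed-size groups before deleting them. A minimal sketch of that chunking step, with 25 as the batch size cited above (the `chunk` helper is illustrative, not the library's internal API):

```typescript
// Split cache keys into batches of at most `size` entries so that each
// eviction pass stays within per-transaction limits.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Example: 60 stale cache keys are evicted in passes of 25, 25, and 10.
const staleKeys = Array.from({ length: 60 }, (_, i) => `cache:key:${i}`);
const evictionBatches = chunk(staleKeys, 25);
```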
Deep Dive: The Two-Level Caching Architecture
To improve performance while staying within Forge platform limits, forge-sql-orm uses a hybrid two-level caching strategy. The idea is simple: combine the low-latency benefits of in-memory caching inside a single invocation with the cross-invocation persistence of the Forge Key-Value Store (KVS).
Level 1: Local Invocation Cache
The first cache layer lives entirely in memory and is scoped to a single resolver invocation. It is implemented using Node.js AsyncLocalStorage.
This layer solves a common problem in Forge applications: the same data may be requested multiple times during one execution path. A resolver might call helper functions, perform permission checks, or fetch related entities - all of which can trigger repeated reads for the same query.
With L1 cache, the database is queried only once for a given cacheable read during that invocation. Any subsequent access returns the result directly from memory in <1ms, avoiding additional SQL execution and unnecessary work. Because it is invocation-scoped, it is automatically discarded when the resolver finishes, ensuring no state is carried across unrelated requests.
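Conceptually, an invocation-scoped cache built on `AsyncLocalStorage` can be sketched as follows. The names `withInvocationCache` and `cachedRead` are illustrative, not the library's internals; the point is that the store lives only for the duration of one `run()` call:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// One fresh Map per resolver invocation; discarded when the run() ends.
const invocationCache = new AsyncLocalStorage<Map<string, unknown>>();

// Wrap a resolver body so it gets its own L1 cache.
function withInvocationCache<T>(fn: () => Promise<T>): Promise<T> {
  return invocationCache.run(new Map(), fn);
}

// Memoize an expensive read for the duration of the current invocation.
async function cachedRead<T>(key: string, load: () => Promise<T>): Promise<T> {
  const store = invocationCache.getStore();
  if (!store) return load(); // outside an invocation context: just read
  if (store.has(key)) return store.get(key) as T;
  const value = await load();
  store.set(key, value);
  return value;
}
```

Because the Map is bound to the async context rather than to module scope, two concurrent invocations never see each other's entries.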
Level 2: Persistent Cross-Invocation Cache
The second layer extends caching beyond a single invocation by storing results in @forge/kvs.
Unlike in-memory state, KVS-backed entries survive across cold starts and new lambda initializations until their TTL expires or they are explicitly invalidated. This makes L2 caching ideal for expensive read results that are stable enough to be reused across requests. Instead of recalculating a heavy join for every user, the ORM reuses the stored result.
To make this deterministic, the ORM generates a unique cache key from a hash of the SQL query text and its parameters. KVS is significantly faster than Forge SQL; while a SQL query might take 500ms (including latency), KVS typically returns results in 40–50ms.
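A minimal sketch of that key derivation, hashing the SQL text together with its serialized parameters (the exact hashing scheme and key prefix inside forge-sql-orm may differ):

```typescript
import { createHash } from "node:crypto";

// Derive a stable cache key from the SQL text and its bound parameters.
// JSON.stringify preserves parameter order, so the same query with the
// same arguments always maps to the same key.
function cacheKey(sqlText: string, params: unknown[]): string {
  const hash = createHash("sha256")
    .update(sqlText)
    .update(JSON.stringify(params))
    .digest("hex");
  return `sqlcache:${hash}`;
}
```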
Why This Architecture Fits Forge Well
This two-level design addresses two distinct inefficiencies:
- Reduces repeated SQL work within an invocation: It stops duplicate reads triggered by nested service calls or helper functions.
- Reduces repeated computation across invocations: Since local process memory is not persistent in serverless environments, @forge/kvs provides the necessary persistence across execution boundaries.
This architecture creates a practical optimization path. While KVS doesn’t “solve” a 5-second SQL timeout for a query that is inherently too slow to run even once, it allows you to decompose complex queries into stable, reusable parts. By caching these intermediate or final results, you significantly reduce pressure on Forge SQL limits.
Finally, reusing cached results also reduces repeated processing work in the application layer.
Implementing the Virtual Materialized View Pattern
To make the two-level caching model practical, forge-sql-orm provides a set of cache-aware query methods that let you treat expensive relational reads as reusable, cacheable views. Instead of manually working with @forge/kvs, you use an API that stays close to regular Drizzle ORM query building while adding transparent caching support for joins, distinct queries, and raw SQL.
The library includes several cache-aware methods:
- selectCacheable() - the main method for custom selections, joins, and derived read models.
- selectCacheableFrom() - shorthand for selecting and caching all columns with automatic field aliasing.
- selectDistinctCacheable() and selectDistinctCacheableFrom() - variants for queries that require distinct results.
- executeCacheable() - caching support for raw SQL queries.
The Core Method: selectCacheable()
The most flexible entry point is selectCacheable(). It allows you to define exactly which columns should be included in the cached result and how the underlying relational query should be composed.
In practice, this method handles the full read lifecycle:
- Check the invocation-scoped memory cache (L1).
- Check the persistent KVS cache (L2).
- Fall back to Forge SQL if no cached result exists.
In the following example, a joined query over users and orders is treated as a virtual materialized view. To make the behavior easier to observe, the query includes SLEEP(0.5) to simulate an expensive read path.
const SQL_CACHE_QUERY = FORGE_SQL_ORM
  .selectCacheable({
    userId: demoUsers.id,
    userName: demoUsers.name,
    product: demoOrders.product,
    productId: demoOrders.id,
    sleep: sql`SLEEP(0.5)`,
  })
  .from(demoUsers)
  .leftJoin(demoOrders, eq(demoOrders.userId, demoUsers.id));
What Happens at Runtime
When this cacheable query is executed, the ORM performs a deterministic sequence of steps:
- Cache key generation: A unique key is derived from the generated SQL and its bound parameters.
- L1 cache lookup: If the same query was already executed during the current resolver invocation, the result is returned directly from memory in <1ms.
- L2 cache lookup: If no in-memory entry exists, the ORM checks @forge/kvs for a persisted result. KVS typically responds in 40–50ms, which is much faster than a full SQL execution.
- Database execution on cache miss: If both layers miss, the ORM executes the SQL, stores the result in the cache layers, and reuses it for subsequent reads.
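The fallthrough sequence above can be sketched with plain Maps standing in for the invocation cache and @forge/kvs. The `readThrough` name and the Map stand-ins are illustrative, not the library's internals:

```typescript
// Stand-ins: l1 models the invocation-scoped memory cache, l2 models @forge/kvs.
const l1 = new Map<string, unknown>();
const l2 = new Map<string, unknown>();

async function readThrough<T>(key: string, runSql: () => Promise<T>): Promise<T> {
  if (l1.has(key)) return l1.get(key) as T; // L1 hit: in-memory, <1ms
  if (l2.has(key)) {                        // L2 hit: KVS, ~40-50ms
    const hit = l2.get(key) as T;
    l1.set(key, hit);                       // promote to L1 for this invocation
    return hit;
  }
  const fresh = await runSql();             // both layers missed: run the SQL
  l1.set(key, fresh);                       // populate both layers for reuse
  l2.set(key, fresh);
  return fresh;
}
```

Clearing `l1` between calls simulates a new invocation: the in-memory layer is gone, but the KVS-backed layer still answers without touching SQL.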
Using Virtual Materialized Views in Subsequent Queries
Once you have “materialized” your complex query results into a cached array, you can use that data to drive further relational queries with high efficiency. A common pattern is to use the IDs or keys from your materialized view to filter a primary table, essentially performing a “join” between a cached snapshot and live SQL data.
Because the materialized result is already in memory (L1) or quickly retrieved from KVS (L2), you can use it to build highly targeted WHERE ... IN (...) clauses. This avoids re-executing the original complex logic while ensuring you only fetch the specific records relevant to your materialized view.
Example: Filtering by Materialized IDs
In this example, we take the results of our joined SQL_CACHE_QUERY and use the resulting user IDs to fetch full profiles from the database.
// 1. Fetch the materialized view results (from cache or SQL fallback)
const materializedViewResult = await SQL_CACHE_QUERY;

// 2. Use the cached results to drive a targeted SQL query.
// This is significantly faster than re-joining inside the database.
const detailedUsers = await FORGE_SQL_ORM
  .selectFrom(demoUsers)
  .where(
    inArray(
      demoUsers.id,
      materializedViewResult.map((m) => m.userId)
    )
  );
The Hard Part: Cache Invalidation and Consistency
A Virtual Materialized View is only as good as its invalidation strategy. In Forge, the real challenge is not storing expensive query results, but keeping them consistent after the underlying relational data changes.
forge-sql-orm addresses this by automating the synchronization between Forge SQL tables and cached entries stored in @forge/kvs.
Automatic Invalidation
The library tracks table-level dependencies for cacheable queries. When data is modified through the ORM, any cached query associated with the affected tables is invalidated automatically.
This applies to write operations such as INSERT, UPDATE, and DELETE, but only when they are executed through the cache-aware ORM APIs.
Immediate Eviction for Single Writes
For standalone write operations, the library provides methods that execute the SQL statement and immediately evict related cache entries:
- insertAndEvictCache()
- updateAndEvictCache()
- deleteAndEvictCache()
These methods are essential when a resolver performs a single modification and consistency must be preserved immediately.
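The write-then-evict contract behind these methods can be sketched with in-memory stand-ins. The table-to-key tracking below mirrors what the ORM automates; `registerCachedQuery` and `writeAndEvict` are illustrative names, not the library's API:

```typescript
const kvs = new Map<string, unknown>();             // stand-in for @forge/kvs
const keysByTable = new Map<string, Set<string>>(); // table -> dependent cache keys

// Record that a cached query result depends on a given table.
function registerCachedQuery(table: string, key: string, value: unknown): void {
  kvs.set(key, value);
  if (!keysByTable.has(table)) keysByTable.set(table, new Set());
  keysByTable.get(table)!.add(key);
}

// Analogue of insertAndEvictCache(): perform the write, then immediately
// drop every cached query that depends on the modified table.
function writeAndEvict(table: string, write: () => void): void {
  write(); // the actual INSERT/UPDATE/DELETE would run here
  for (const key of keysByTable.get(table) ?? []) kvs.delete(key);
  keysByTable.delete(table);
}
```

Entries tied to untouched tables survive, which is exactly what keeps invalidation cheap: only queries that could have become stale are evicted.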
Consolidated Eviction with Cache Context
When a resolver modifies multiple tables in one logical flow, evicting cache entries after each individual write is inefficient.
To solve this, forge-sql-orm provides executeWithCacheContext(). Instead of triggering eviction immediately for every operation, this wrapper collects all affected tables during execution and performs one consolidated invalidation step at the end.
This makes multi-step write flows both faster and cheaper, which is critical in the Forge environment where execution time and KVS operations are metered.
Example: Batched Invalidation Across Multiple Tables
In the resolver below, both demo_users and demo_orders may be modified inside the same cache context. Instead of purging related cache entries multiple times, the ORM aggregates the affected tables and performs one batch invalidation pass after the block completes.
resolver.define("insertUserOrOrder", async (req: Request) => {
  await FORGE_SQL_ORM.executeWithCacheContext(async () => {
    let userId = req.payload.userId;
    const userExists = Boolean(userId);
    const newOrder = Boolean(req.payload.product);
    if (!userExists) {
      // Registers demoUsers for consolidated eviction
      const result = await FORGE_SQL_ORM.insert(demoUsers).values({
        name: req.payload.userName,
      });
      userId = result[0].insertId;
    }
    if (newOrder) {
      // Registers demoOrders for consolidated eviction
      await FORGE_SQL_ORM.insert(demoOrders).values({
        userId,
        product: req.payload.product,
        createdAt: new Date(),
      });
    }
    // After the block finishes, the ORM invalidates cache entries
    // associated with the affected tables in one consolidated pass.
  });
});
Optional Scheduled Cleanup
As cache usage grows, invalidation-related operations can become more expensive, especially if many expired cache entries remain in storage.
Although Forge KVS supports TTL, expired entries are not removed immediately; physical cleanup is asynchronous and can take up to 48 hours. Over time, this can increase the amount of stale metadata the invalidation logic needs to process.
To reduce this overhead, forge-sql-orm provides an optional scheduled cleanup trigger. It periodically removes expired cache entries using an expiration index, keeping the cache footprint small and eviction performance predictable.
// index.ts
import { clearCacheSchedulerTrigger } from "forge-sql-orm";

export const clearCache = () => {
  return clearCacheSchedulerTrigger({
    cacheEntityName: "cache",
  });
};

# manifest.yml
scheduledTrigger:
  - key: clear-cache-trigger
    function: clearCache
    interval: fiveMinute # Proactive cleanup every 5 minutes
function:
  - key: clearCache
    handler: index.clearCache
This scheduler is particularly useful in high-traffic apps with heavy cache churn, preventing the accumulation of expired entries from slowing down your INSERT and UPDATE operations.
Performance Benchmarks and Conclusion
To see the practical effect of the virtual materialized view pattern, let’s compare the execution of the same joined query in two modes: direct SQL execution and cached execution through forge-sql-orm.
For this example, the query joins demo_users and demo_orders. To make the difference easier to observe, the SQL includes an artificial SLEEP(0.5) delay that simulates an expensive read path.
const SQL_CACHE_QUERY = FORGE_SQL_ORM
  .selectCacheable({
    userId: demoUsers.id,
    userName: demoUsers.name,
    product: demoOrders.product,
    productId: demoOrders.id,
    sleep: sql`SLEEP(0.5)`,
  })
  .from(demoUsers)
  .leftJoin(demoOrders, eq(demoOrders.userId, demoUsers.id));
Scenario A: Direct SQL Execution
Without caching, every request forces Forge SQL to execute the full joined query again. That means the database must repeat the join work and consume execution resources for every invocation.
The Execution Plan (EXPLAIN):
SQL: SELECT … FROM demo_users LEFT JOIN demo_orders … | Time: 4014 ms
Plan:
Projection_6 | task:root | … | [estRows:8.75, actRows:8] | execution info:time:4.01s
└─IndexHashJoin_13 | task:root | left outer join, inner:IndexLookUp_10 …
  ├─TableReader_30(Build) | task:root | data:TableFullScan_29 | [estRows:7.00, actRows:7]
  │ └─TableFullScan_29 | task:cop[tikv] | access object:table:demo_users
  └─IndexLookUp_10(Probe) | task:root | [estRows:8.75, actRows:8]
    ├─IndexRangeScan_8(Build) | task:cop[tikv] | access object:table:demo_orders, index:user_id
    └─TableRowIDScan_9(Probe) | task:cop[tikv] | access object:table:demo_orders
In the benchmark environment, this query completed in 4014 ms. From a platform perspective, that is already uncomfortably close to the 5-second SELECT timeout. On a larger tenant, this kind of query can easily cross the boundary and fail.
Scenario B: L2 Cache Hit
Once the query result has been stored in the persistent KVS-backed cache, subsequent executions no longer need to hit Forge SQL for the same read.
In the benchmark environment, the cached result was returned in 20 ms. The important difference is not only lower latency, but also a different failure profile: the application avoids repeating the expensive SQL execution path entirely.
Benchmark Summary
| Metric | Direct SQL | Cache Hit | Effect |
|---|---|---|---|
| Response time | 4014 ms | 20 ms | Significantly lower latency |
| SQL timeout exposure | High | None | Safer read path |
| Database execution cost | Paid on every call | Avoided | Less repeated SQL work |
When This Pattern Makes Sense
The virtual materialized view approach is especially useful for read-heavy features where the underlying data changes less frequently than it is read. Typical examples include:
- Dashboards and reporting queries.
- Derived read models or aggregated relational views.
- Permission-related lookups and complex configurations.
It also serves as a late-stage optimization strategy. When a query remains expensive even after better indexes or query rewrites, caching the materialized result at the application layer becomes a practical alternative.
Best Practices
- Use this pattern selectively. Small, cheap, and highly dynamic reads are often better executed directly in SQL to avoid KVS overhead.
- Mind Platform Limits. This pattern works best for filtered, aggregated, or derived results, not for copying large tables into the cache.
- Consistency Matters. If a resolver modifies multiple tables, wrapping the operation in executeWithCacheContext() helps keep invalidation efficient and predictable.
Final Thoughts
Forge SQL gives developers a powerful relational model, but platform limits mean some read-heavy workloads need more than query tuning alone. When native materialized views are not available, moving that behavior into the application layer keeps complex features responsive.
That is the role of the virtual materialized view pattern in forge-sql-orm: combining two-level caching, deterministic cache keys, and automated invalidation to make expensive relational reads reusable without building a separate synchronization subsystem by hand.
Resources & Implementation Examples
If you want to dive deeper into the code or see how this pattern is implemented in production-grade apps, explore the following resources:
- Core Library: forge-sql-orm on GitHub - the complete source code, documentation, and advanced features.
- Official Cache Example: Advanced Caching Capabilities Example - a dedicated example showcasing performance monitoring and tiered caching.
- Real-World Production App: Forge Secure Notes for Jira - a full implementation of an AI-powered security app using the Virtual Materialized View pattern for secure analytics.
- Atlassian Marketplace: Secure Notes for Jira - see the final product in action on the marketplace.




