Before we discuss monitoring, it is important to understand the limits applied to Forge SQL. Atlassian enforces strict per-install restrictions to ensure fair resource usage:
- Memory usage per query: 16 MiB
- Response size per query: 4 MiB
- Request size per query: 1 MiB
- Per-connection timeout: 5 seconds for SELECT, 10 seconds for INSERT/UPDATE/DELETE, 20 seconds for DDL
- Query time per minute: 62.5 seconds of total query execution time across all invocations, within each minute
- Number of tables: 200
- Total stored data: up to 1 GiB (production installs), 256 MiB (staging), 128 MiB (development)
These constraints mean that slow or memory-heavy queries can easily break your app if left unnoticed.
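As one illustration of working within these limits, a resolver can check the serialized size of a result against the 4 MiB response cap before returning it. A minimal sketch, assuming a Node runtime; the helper name and the JSON-size heuristic are my own, not part of Forge:

```typescript
// Per-query response cap enforced by Forge SQL: 4 MiB
const RESPONSE_LIMIT_BYTES = 4 * 1024 * 1024;

// Hypothetical helper: check whether a result set will fit the response limit.
function fitsResponseLimit(rows: unknown[]): boolean {
  // The UTF-8 byte length of the JSON payload is a rough proxy
  // for the serialized response size.
  const bytes = Buffer.byteLength(JSON.stringify(rows), "utf8");
  return bytes <= RESPONSE_LIMIT_BYTES;
}
```

A resolver could use this to paginate or truncate results instead of failing at the platform limit.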
Currently, developers can analyze query performance only through what the Forge Developer Console provides:
- Query latency (execution time)
- Slow query detection
While useful, these metrics have important limitations:
- Per-installation visibility for slow queries: slow query metrics are scoped to each app installation. If your app is installed across multiple sites, you need to open each installation manually to review them, which becomes inconvenient and time-consuming at scale.
- Lack of execution plans for slow queries: while you can see which queries are slow, the console does not provide the execution plan behind them. Without the plan, it is harder to tell whether the bottleneck comes from missing indexes, table scans, or inefficient joins.
- No visibility into memory-heavy queries: there is no way to see which queries consume large amounts of memory, or on which instance this happens. This makes it difficult to correlate performance problems with specific tenants or installations.
A new approach
To address these limitations, I built a solution into my project forge-sql-orm:
- Every hour, a performance monitoring trigger runs in the background.
- It automatically logs queries that exceed configured thresholds:
  - the single slowest query per instance in the last hour (above a latency threshold), or
  - the most resource-intensive query per instance (above a memory threshold).
- If multiple instances exceed thresholds, the console will show information for each affected instance.
- Optionally, it can also output the execution plan for further analysis.
The results are written directly into the Atlassian Developer Console logs, giving developers a lightweight way to monitor SQL performance without manually browsing per-installation metrics. In addition, it becomes possible to spot memory-hungry queries before the 16 MiB per-query limit is reached, allowing proactive optimization.
Example output
When the trigger detects a slow or memory-heavy query, it writes detailed information into the logs, including execution statistics and even the query plan if enabled. Developers can immediately see which queries are problematic and on which instance they occurred.
Installation guide
- Install dependencies:

```shell
npm install forge-sql-orm @forge/sql drizzle-orm -S
```
- Create a Scheduler Function:

```typescript
import ForgeSQL, { topSlowestStatementLastHourTrigger } from "forge-sql-orm";

// Initialize ForgeSQL ORM instance
const forgeSQL = new ForgeSQL();

// Basic usage with default thresholds (300ms latency, 8MB memory warning)
export const performanceMonitorTrigger = () =>
  topSlowestStatementLastHourTrigger(forgeSQL);
```
- Configure in manifest.yml as a Scheduler Trigger (Recommended for Production):

```yaml
scheduledTrigger:
  - key: performance-monitor-trigger
    function: perfMonTrigger
    interval: hour
function:
  - key: perfMonTrigger
    handler: index.performanceMonitorTrigger
```
At this point, the trigger is fully configured and ready to use in your Forge app.
Configuration examples
```typescript
// Basic usage with default thresholds (300ms latency, 8MB memory warning)
export const performanceTrigger = () =>
  topSlowestStatementLastHourTrigger(forgeSQL);

// Conservative performance monitoring: 4MB warning (well below the 16MB limit)
export const conservativeMemoryTrigger = () =>
  topSlowestStatementLastHourTrigger(forgeSQL, { memoryThresholdBytes: 4 * 1024 * 1024 });

// Aggressive performance monitoring: 12MB warning (75% of the 16MB limit)
export const aggressiveMemoryTrigger = () =>
  topSlowestStatementLastHourTrigger(forgeSQL, { memoryThresholdBytes: 12 * 1024 * 1024 });

// Memory-only monitoring: latency effectively disabled via a high warnThresholdMs
export const memoryOnlyTrigger = () =>
  topSlowestStatementLastHourTrigger(forgeSQL, { warnThresholdMs: 10000, memoryThresholdBytes: 4 * 1024 * 1024 });

// Latency-only monitoring: memory effectively disabled via a threshold at the 16MB limit
export const latencyOnlyTrigger = () =>
  topSlowestStatementLastHourTrigger(forgeSQL, { warnThresholdMs: 500, memoryThresholdBytes: 16 * 1024 * 1024 });

// With execution plan in logs
export const performanceWithPlanTrigger = () =>
  topSlowestStatementLastHourTrigger(forgeSQL, { showPlan: true });
```
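Rather than hard-coding byte counts when tuning these options, the memory threshold can be derived as a fraction of the 16 MiB per-query limit. A small sketch; the helper is hypothetical, not part of forge-sql-orm:

```typescript
// 16 MiB per-query memory limit enforced by Forge SQL
const QUERY_MEMORY_LIMIT_BYTES = 16 * 1024 * 1024;

// Hypothetical helper: express memoryThresholdBytes as a fraction of the limit.
function memoryThreshold(fractionOfLimit: number): number {
  return Math.floor(QUERY_MEMORY_LIMIT_BYTES * fractionOfLimit);
}

// e.g. warn at 75% of the limit (12 MiB):
// topSlowestStatementLastHourTrigger(forgeSQL, {
//   memoryThresholdBytes: memoryThreshold(0.75),
// });
```

This keeps the intent ("warn at 75% of the limit") visible in the code instead of a raw byte count.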
How it works under the hood
This solution is based on the system tables INFORMATION_SCHEMA.CLUSTER_STATEMENTS_SUMMARY_HISTORY and INFORMATION_SCHEMA.CLUSTER_STATEMENTS_SUMMARY. With the configuration used by @forge/sql, these statistics are kept in memory for up to 24 hours, but can be lost if TiDB is restarted. Running the scheduled trigger every hour is therefore a practical approach.
If you don’t want to wait for the hourly trigger in development, you can also use a webtrigger. Keep in mind, however, that webtriggers break the eligibility requirements for Runs on Atlassian, so they are not recommended for production.
As Web Trigger (Development Only):
```yaml
webtrigger:
  - key: print-slowest-queries
    function: perfTrigger
function:
  - key: perfTrigger
    handler: index.performanceTrigger
```
