Introducing forge-sql-orm: Type-Safe ORM for @forge/sql Using Drizzle

:light_bulb: forge-sql-orm: Type-safe ORM for @forge/sql based on Drizzle

Are you working with @forge/sql and tired of writing raw SQL with manual typings?

forge-sql-orm brings a structured, type-safe, and developer-friendly experience using Drizzle ORM — fully compatible with Forge’s infrastructure.

Hey everyone! :waving_hand:

I’d like to introduce forge-sql-orm — a type-safe ORM designed for working with the Atlassian Forge platform using @forge/sql, built on top of Drizzle ORM.


:magnifying_glass_tilted_left: Why forge-sql-orm?

Atlassian provides @forge/sql, but you still have to write SQL queries manually and manage the typings yourself:

import sql from '@forge/sql';

interface City {
  name: string;
  state?: string;
  country: string;
}

const results = await sql
  .prepare<City>(`SELECT * FROM cities WHERE name = ?`)
  .bindParams('New York')
  .execute();

console.log(results.rows[0].country);

:no_entry: You must manually define interfaces, manage aliasing, and ensure column matching for joins.

:warning: Also, prepare<City>() does not guarantee that the returned SQL data actually matches the City interface — it’s just a TypeScript hint, not runtime-validated.

:memo: While this works fine for simple queries, it becomes problematic when:

  • Fields are selected dynamically (e.g., from frontend input),
  • The structure of the result changes conditionally,
  • You join multiple tables that may share the same column names:

SELECT users.name, company.name ... -- which 'name' is which?

You end up needing:

SELECT users.name AS users_name, company.name AS company_name

And then you must manually write:

.prepare<{ users_name: string; company_name: string }>()

:roll_eyes: It’s verbose and hard to maintain.


:white_check_mark: forge-sql-orm solves this in two ways:

1. Automatically via forgeSQL.select({...})

import ForgeSQL from "forge-sql-orm";
const forgeSQL = new ForgeSQL();

const result = await forgeSQL
  .select({ user: users, order: orders })
  .from(orders)
  .innerJoin(users, eq(orders.userId, users.id));

:glowing_star: Columns are aliased by object keys (user.name, order.id) and typed safely.
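The aliasing rule itself is conceptually simple. Here is a plain-TypeScript sketch of the naming scheme, purely as an illustration — it is not the library's internal code, and the helper name is hypothetical:

```typescript
// Hypothetical illustration of the per-key aliasing rule described above.
// Not the library's internal code; it only demonstrates the naming scheme.

type TableColumns = Record<string, string[]>; // object key -> column names

function aliasSelect(tables: TableColumns): string[] {
  // Each column is prefixed with its object key, so a column like "name"
  // from two joined tables can no longer collide in the result set.
  const out: string[] = [];
  for (const [alias, cols] of Object.entries(tables)) {
    for (const col of cols) {
      out.push(`${alias}.${col} AS ${alias}_${col}`);
    }
  }
  return out;
}

console.log(aliasSelect({ user: ["id", "name"], order: ["id", "userId"] }));
// [ 'user.id AS user_id', 'user.name AS user_name',
//   'order.id AS order_id', 'order.userId AS order_userId' ]
```

The ORM then maps the aliased columns back into nested objects, which is why `result[0].user.name` and `result[0].order.id` are both typed correctly.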

2. Manually via selectAliased(...)

import { drizzle } from "drizzle-orm/mysql-proxy";
import { forgeDriver, patchDbWithSelectAliased } from "forge-sql-orm";

const db = patchDbWithSelectAliased(drizzle(forgeDriver()));

const result = await db
  .selectAliased({ user: users, order: orders })
  .from(orders)
  .innerJoin(users, eq(orders.userId, users.id));

:pushpin: Ideal for those working directly with Drizzle’s low-level API.


:white_check_mark: Built 100% on Official Forge SQL API

forge-sql-orm is built entirely on the official, documented Forge SQL API:
:link: Atlassian Forge SQL Reference

  • No undocumented features or internal hacks are used.
  • All queries run through @forge/sql’s prepare(...) => bindParams(...) => execute() pipeline.
  • Compatible with the current Forge runtime and safe to use within your app boundaries.

You can safely adopt this tool in your Forge project, knowing it relies only on what Atlassian supports.


:triangular_ruler: Schema First Migration Strategy

forge-sql-orm and forge-sql-orm-cli follow a Schema First approach — where your database schema is the single source of truth, rather than relying on manually written model definitions.

This means you have two paths to get started:

  • If you already have a working Forge SQL database, you can extract the schema directly from it.
  • Or, if you’re starting fresh, you can create a brand-new local or shared MySQL database and define your schema manually.

:brain: What does that mean?

  • You define your tables using SQL or existing Forge migrations.
  • You can extract your current schema from Forge SQL and apply it to a local environment.
  • forge-sql-orm-cli then generates Drizzle models based on the actual schema — not assumptions.

:white_check_mark: Bonus: it works with any MySQL-compatible database, not just Forge SQL. That means you can use a shared or local dev DB as your source of truth.

This is especially useful if:

  • You already have an existing Forge SQL project with migrations.
  • You want to migrate to a type-safe ORM layer.
  • You work in a team and want consistent, reliable database structure.

:globe_with_meridians: Extracting schema from a Forge app

import { fetchSchemaWebTrigger } from "forge-sql-orm";

export const fetchSchema = async () => {
  return fetchSchemaWebTrigger();
};

manifest.yml configuration:

modules:
  webtrigger:
    - key: fetch-schema
      function: fetchSchema
  sql:
    - key: main
      engine: mysql
  function:
    - key: fetchSchema
      handler: index.fetchSchema

Example output:

SET foreign_key_checks = 0;
CREATE TABLE IF NOT EXISTS users (...);
CREATE TABLE IF NOT EXISTS orders (...);
SET foreign_key_checks = 1;

:test_tube: You can apply this script to a local database and run:

npx forge-sql-orm-cli generate:model

:building_construction: Defining a Schema from Scratch

Alternatively, you don’t need to start with a Forge SQL database at all:

  • You can create a brand-new local (or shared) MySQL database,
  • Define your schema from scratch or apply an existing SQL script,
  • Then use it as your reference schema for generating models and migrations.

This makes forge-sql-orm ideal for greenfield projects or building outside-in from an existing backend architecture.

To create an initial migration:

npx forge-sql-orm-cli migrations:create

To generate follow-up migrations after schema changes:

npx forge-sql-orm-cli migrations:update
npx forge-sql-orm-cli generate:model

:spouting_whale: Docker Example for Local Database

export MYSQL_ROOT_PASSWORD=admin

docker run -d \
  --name forge-sql-orm-example-db \
  -p 3366:3306 \
  -e MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} \
  --security-opt seccomp=unconfined \
  --restart=always \
  mysql

# Wait 30 seconds, then:
docker exec forge-sql-orm-example-db \
  mysql -uroot -padmin -e "create database forgesqlorm"

:dna: UUID as Primary Key (with VARBINARY(16))

Although Atlassian’s examples often use AUTO_INCREMENT for primary keys, the TiDB documentation recommends UUID-based primary keys instead.

:white_check_mark: Why UUID?

  • Better support for distributed systems.
  • Lower chance of key conflicts.
  • More scalable for indexing.
  • When using UUID_TO_BIN, binary UUIDs reduce storage size and improve index performance.
  • Using UUID_TO_BIN(uuid, 1) reorders the bytes so UUIDs sort by timestamp, making inserts roughly sequential and avoiding random I/O bottlenecks.

Recommended pattern:

id VARBINARY(16) PRIMARY KEY DEFAULT (UUID_TO_BIN(UUID()))

To implement UUIDs with varbinary(16) in Drizzle:

import { sql } from "drizzle-orm";
import { customType } from "drizzle-orm/mysql-core";
import { stringify as uuidStringify } from "uuid";

export const uuidBinary = customType<{
  data: string;
  driverData: { type: "Buffer"; data: number[] };
  config: [];
}>({
  dataType() {
    return "varbinary(16)";
  },
  toDriver(value) {
    return sql`UUID_TO_BIN(${value})`;
  },
  fromDriver(value) {
    const buffer = Buffer.from(value.data);
    return uuidStringify(new Uint8Array(buffer));
  },
});

Usage:

import { mysqlTable, varchar } from "drizzle-orm/mysql-core";

export const userStatus = mysqlTable("user_status", {
  id: uuidBinary().notNull(),
  name: varchar({ length: 100 }).notNull(),
});

:brain: TiDB also supports UUID_TO_BIN(uuid, 1) to reorder bits and improve clustering/indexing.
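To see what VARBINARY(16) storage buys you, here is a plain-Node sketch of the round trip that UUID_TO_BIN / BIN_TO_UUID perform on the database side. The swap-flag variant UUID_TO_BIN(uuid, 1) additionally reorders the timestamp bytes, which is not shown here:

```typescript
// Plain-Node sketch of the UUID <-> VARBINARY(16) round trip that
// UUID_TO_BIN / BIN_TO_UUID perform on the database side (no swap flag).

function uuidToBin(uuid: string): Buffer {
  return Buffer.from(uuid.replace(/-/g, ""), "hex"); // 36 chars -> 16 bytes
}

function binToUuid(buf: Buffer): string {
  const hex = buf.toString("hex");
  return [
    hex.slice(0, 8),
    hex.slice(8, 12),
    hex.slice(12, 16),
    hex.slice(16, 20),
    hex.slice(20),
  ].join("-");
}

const id = "123e4567-e89b-12d3-a456-426614174000";
const bin = uuidToBin(id);
console.log(bin.length); // 16 (less than half of a CHAR(36) text UUID)
console.log(binToUuid(bin) === id); // true
```

This is exactly the conversion the custom Drizzle type above delegates to the database via UUID_TO_BIN and reverses in fromDriver.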


:locked: Optimistic Locking Support

To prevent race conditions when multiple users edit the same record, forge-sql-orm supports optimistic locking using a version column.

  • On crud().insert(), if a version field exists:

    • If it’s a number, it defaults to 1.
    • If it’s a date/time, it’s set to current timestamp.
  • On crud().updateById():

    • The version is incremented or updated.
    • The version is included in the WHERE clause to ensure no conflicts.

This helps avoid overwriting someone else’s changes by accident.

:warning: This logic is not available if you use .getDrizzleQueryBuilder().insert(...) — that API is lower-level and does not handle version automatically.
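The versioning rules above can be sketched in plain TypeScript. This is only an illustration of the described behavior, not the library's actual implementation:

```typescript
// Illustrative sketch of the optimistic-locking rules described above.
// Not the library's internal code; function names are hypothetical.

function initialVersion(fieldType: "number" | "datetime"): number | Date {
  // On crud().insert(): numeric versions start at 1, date versions at "now".
  return fieldType === "number" ? 1 : new Date();
}

function nextVersion(current: number | Date): number | Date {
  // On crud().updateById(): increment numbers, refresh timestamps.
  return typeof current === "number" ? current + 1 : new Date();
}

// The update also filters on the *old* version, conceptually:
//   UPDATE users SET name = ?, version = ? WHERE id = ? AND version = ?
// If zero rows match, another writer updated the record first.

console.log(initialVersion("number")); // 1
console.log(nextVersion(1)); // 2
```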


:test_tube: Insert Examples

// Single insert
const userId = await forgeSQL.crud().insert(users, [{ id: 1, name: "Smith" }]);

// Bulk insert
await forgeSQL.crud().insert(users, [
  { id: 2, name: "Smith" },
  { id: 3, name: "Vasyl" },
]);

:package: Quick Start

npm install forge-sql-orm @forge/sql drizzle-orm moment -S
npm install forge-sql-orm-cli -D

:small_blue_diamond: Generate migration:

npx forge-sql-orm-cli migrations:create

:small_blue_diamond: Generate models:

npx forge-sql-orm-cli generate:model

:small_blue_diamond: Query example:

import ForgeSQL from "forge-sql-orm";

const forgeSQL = new ForgeSQL();

const orderWithUser = await forgeSQL
  .select({ user: users, order: orders })
  .from(orders)
  .innerJoin(users, eq(orders.userId, users.id));

:speech_balloon: Feedback

Give it a try and let me know what you think!

If you’re already using @forge/sql and want to level up your developer experience with type safety and migrations, I’d love to hear your feedback.

:star: Star the repo on GitHub: vzakharchenko/forge-sql-orm

:technologist: PRs, issues, and questions are welcome — or just drop a comment below!


:light_bulb: Pagination with COUNT(*) OVER() and Window Functions in Forge SQL

Since Forge SQL is built on TiDB, which supports MySQL-compatible window functions, you can simplify pagination logic with a single query instead of two.

Instead of doing this:

  • Query 1: SELECT * FROM users LIMIT 10 OFFSET 20
  • Query 2: SELECT COUNT(*) FROM users

You can do:

SELECT 
  id,
  name,
  COUNT(*) OVER() AS total_count
FROM users
ORDER BY name
LIMIT 10 OFFSET 20;

:brain: This returns a paginated result set where each row includes total_count, useful for frontend pagination components.


:white_check_mark: Example in forge-sql-orm

import { sql } from "drizzle-orm";
import ForgeSQL from "forge-sql-orm";
import { users } from "./schema";
// formatLimitOffset is the helper defined in the caveat section below

const forgeSQL = new ForgeSQL();

const result = await forgeSQL
  .select({
    id: users.id,
    name: users.name,
    totalCount: sql<number>`COUNT(*) OVER()`,
  })
  .from(users)
  .orderBy(users.name)
  .offset(formatLimitOffset(offset))
  .limit(formatLimitOffset(limit));

:warning: Caveat: Forge SQL does not support parameterized LIMIT/OFFSET

By default, Drizzle uses parameterized queries like:

SELECT ... FROM users ORDER BY name LIMIT ? OFFSET ?

But Forge SQL does not support that — it throws an error.

So you must pass values inline. To do it safely and avoid SQL injection:

import { sql } from "drizzle-orm";

export function formatLimitOffset(limitOrOffset: number): number {
  if (typeof limitOrOffset !== "number" || Number.isNaN(limitOrOffset)) {
    throw new Error("limitOrOffset must be a valid number");
  }
  // Returns a raw SQL fragment; the cast keeps Drizzle's .limit()/.offset() typings happy.
  return sql.raw(`${limitOrOffset}`) as unknown as number;
}

Then use:

.offset(formatLimitOffset(offset))
.limit(formatLimitOffset(limit))

:locked_with_key: Important — SQL Injection Risk with @forge/sql.prepare(...)

Even if you’re using @forge/sql with .prepare(...), like this:

const result = await sql
  .prepare<User>(`SELECT * FROM users LIMIT ${limit} OFFSET ${offset}`)
  .execute();

:stop_sign: Be careful — this usage does not use bind parameters for LIMIT and OFFSET.
When you write ${limit} and ${offset} inside the template string, they are directly inlined into the SQL query — not passed as safe bind values.

This means:

  • If limit or offset come from user input (e.g. frontend query params),
  • And you don’t validate them properly,
  • You’re at risk of a SQL injection, even when using .prepare().

Also:
Even if TypeScript says limit: number, that doesn’t mean it’s a number at runtime. A malicious user could still pass a string or invalid input.

:white_check_mark: Always sanitize values like this:

function safeLimitOffset(value: unknown): number {
  if (typeof value !== "number" || !Number.isInteger(value) || value < 0) {
    throw new Error("Invalid limit or offset");
  }
  return value;
}

const result = await sql
  .prepare<User>(
    `SELECT * FROM users LIMIT ${safeLimitOffset(limit)} OFFSET ${safeLimitOffset(offset)}`
  )
  .execute();

This is excellent work. Thank you for authoring and sharing this.

The API has been designed to work with HTTP-based drivers and we are glad to see this use case :slight_smile:

Until recently, Drizzle had a peer dependency on React that caused incompatibility errors with Forge. This has since been resolved, so we invite you (and the broader community) to share feedback with the ORM.

We are continuously evolving our SQL (and other storage) offerings. Please let us know if you have any specific requests/feedback. We’d love to hear more from the community!


Varun


@varun, I appreciate your comment and involvement in the dev community. Could you clarify, regarding Forge SQL: Enable sql.storageApi again for ORM usage, whether the React dependency had anything to do with the removal of storageApi? I’d like to understand whether this was intentional and whether there were any announcements about the change. Thanks!


Hi Varun, thanks a lot for the kind words and support — it really means a lot!

I wasn’t affected by the Drizzle + Forge issue myself, since I started using Drizzle after it was already resolved, but it’s great to know the path is clearer for everyone else now.


Feedback from working with Forge SQL:

  1. Timeout-sensitive migrations
    If a migration contains many steps or involves heavier ALTER TABLE operations, there’s a risk of hitting Forge web/scheduler timeouts.
    This can lead to ALTER TABLE being successfully applied, but the corresponding insert into the migration log never completing — causing the next run to retry the same schema change and fail with an error like “column already exists.”
  2. Concurrent migration execution
    There’s currently no way to guarantee that only one migration process runs at a time.
    If multiple web triggers, event triggers, or scheduled tasks execute concurrently, they may all try to apply pending migrations — leading to race conditions or conflicting DDL operations.
    One potential direction could be support for SQL-level locking via GET_LOCK(...) / RELEASE_LOCK(...) (which TiDB supports),
    or perhaps another mechanism for coordinating safe execution — like using app.storage as a lightweight lock.

At the moment, these types of coordination issues (timeouts, concurrent execution) are left for developers to handle themselves at the application level.
It would be great to see Forge offer some built-in primitives or patterns for safe, idempotent orchestration across triggers and scheduler functions.
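To make the app.storage-as-lock idea concrete, here is a rough sketch. An in-memory Map stands in for Forge storage, the function names are hypothetical, and a real implementation would also need a TTL so a crashed invocation cannot hold the lock forever; without an atomic compare-and-set in the storage API, this remains best-effort:

```typescript
// Sketch of the "app storage as a lightweight lock" idea. An in-memory Map
// stands in for the Forge storage API here; names are hypothetical.

const store = new Map<string, string>();

async function tryAcquireLock(key: string, owner: string): Promise<boolean> {
  if (store.has(key)) return false; // another invocation is migrating
  store.set(key, owner);
  return true;
}

async function releaseLock(key: string, owner: string): Promise<void> {
  if (store.get(key) === owner) store.delete(key); // only the holder releases
}

async function runMigrationsOnce(owner: string): Promise<boolean> {
  if (!(await tryAcquireLock("migrations", owner))) return false;
  try {
    // ...apply pending migration steps idempotently here...
    return true;
  } finally {
    await releaseLock("migrations", owner);
  }
}

runMigrationsOnce("web-trigger-1").then((ran) => console.log(ran)); // true
```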

Thanks again for listening!

— Vasiliy


Hi! Just to clarify — @forge/sql works perfectly fine with Drizzle ORM without relying on the deprecated sql.storageApi.
You can check out how it’s implemented in my project — it works with the latest version of @forge/sql.

Here’s the custom Forge driver used for drizzle-orm/mysql-proxy:
:backhand_index_pointing_right: forge-sql-orm/src/utils/forgeDriver.ts

Feel free to explore or adapt it!


Thanks for sharing feedback on the limitations around timeouts and running migrations. There’s a community post/RFC coming soon on how we want to move away from triggers and, potentially, manage running migrations using industry-standard migration runners. This will alleviate the problems with timeouts and concurrent migration runs.

We may still provide the DDL endpoint; however we see most migration runs happening via developer actions or when the app is installed.

Apologies for the delayed response, but yes, as vzakharchenko mentions above, this is not affected by storageApi.

The library wraps the Forge SQL SDK methods (which in turn call REST endpoints) and executes the statements that Drizzle produces.


Thanks! We’re definitely looking forward to the upcoming support for managing migrations using industry-standard migration runners — that sounds like a great improvement, especially considering the current limitations with timeouts and concurrent migration runs.

I also wanted to share a use case where DDL access is essential:

Let’s say the app has entities created via migrations. At some point, end-users may want to customize those entities — for example, by adding their own fields — while keeping the original migration structure intact. And at the same time, they still want to query, filter, and sort based on both standard and custom fields.

One way to achieve this is by creating a separate table via CREATE TABLE, where the primary key matches the one from the original entity (1:1 relationship). As end-users define new fields, we can use ALTER TABLE ADD COLUMN. These tables can then be joined using LEFT JOIN on the primary key, enabling seamless access to both base and custom data.

An alternative would be to store custom fields in a JSON column within the main table. While this provides flexibility, it comes with several downsides:

  • You can’t add indexes on individual JSON fields, making queries involving filtering and sorting much slower.
  • Queries become harder to write, read, and maintain, since they rely on functions like json_extract.
  • You lose type safety and schema validation at the ORM level, which limits the benefits of using a type-safe ORM like Drizzle.

In contrast, the 1:1 table approach allows for:

  • Indexing on custom fields,
  • Better memory and performance efficiency compared to storing a large number of custom fields in a single JSON blob.

So while most use cases are covered by standard migrations, having runtime DDL access opens up powerful patterns for advanced customization that are otherwise hard to implement efficiently.
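The merge semantics of that 1:1 LEFT JOIN can be illustrated in plain TypeScript. The table shape and the "priority" custom field below are hypothetical, and the real join would of course happen in SQL, not in application code:

```typescript
// Pure-TypeScript illustration of the 1:1 LEFT JOIN merge described above.
// Table shape and the "priority" custom field are hypothetical examples.

interface BaseRow {
  id: number;
  name: string;
}

type CustomRow = { id: number } & Record<string, unknown>;

function leftJoinById(
  base: BaseRow[],
  custom: CustomRow[],
): Array<Record<string, unknown>> {
  const byId = new Map<number, CustomRow>();
  for (const c of custom) byId.set(c.id, c);
  // Every base row survives (LEFT JOIN); custom columns merge in when present.
  return base.map((b) => ({ ...(byId.get(b.id) ?? {}), ...b }));
}

const rows = leftJoinById(
  [
    { id: 1, name: "Smith" },
    { id: 2, name: "Vasyl" },
  ],
  [{ id: 1, priority: "high" }],
);
console.log(rows[0]); // { id: 1, priority: 'high', name: 'Smith' }
```

Because the custom columns live in a real table rather than a JSON blob, each one can be indexed and referenced in WHERE / ORDER BY clauses directly.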