Project Summary
We propose a new API/SDK that lets Forge apps access Atlassian-hosted LLMs from Forge functions and containers without data egress.
Publish: 29 October 2025
Discuss: 12 November 2025
Resolve: 19 November 2025
Problem
Currently, the only way to integrate LLMs with Forge apps is by egressing data to remote servers or third-party providers (e.g. OpenAI). Egressing data outside Atlassian can be a blocker to enterprise adoption and causes the app to lose eligibility for Runs on Atlassian.
Proposed Solution
Provide access to Atlassian-hosted LLMs from Forge functions (and containers in the future) via an API. Apps using these LLMs would be eligible for Runs on Atlassian.
Developer experience
We will provide an easy-to-use SDK, aligned with industry standards and other Forge APIs. The proposed approach is a simple function call pattern, similar to the Claude SDK, for example:
const response = await chat({
  maxCompletionTokens: 50,
  model: 'claude-sonnet',
  temperature: 0.7,
  topP: 0,
  messages: [
    { role: 'system', content: 'You are a text editing agent' },
    { role: 'user', content: 'Find any typo in the following text' },
    { role: 'user', content: 'Tis is an intereating artivle!' }
  ]
});
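The return shape is not pinned down in this RFC, but the response example in the tools section below suggests a message object. Assuming that shape carries over to plain chat (an assumption, not a confirmed contract), reading the reply might look like:

// Assumed response shape, mirroring the "message" object shown in the
// tool-calling example below; the final SDK may differ.
console.log(response.message.content); // e.g. the list of typos found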
If you require a structured response to use programmatically (e.g. function calling), you will use “tools” (see Tools) and get a defined JSON response back. It would look something like:
...
"messages": [
  {
    "content": "Your task is to assist",
    "role": "system"
  },
  {
    "role": "user",
    "content": "What is the weather like in Boston today?"
  }
],
"tools": [
  {
    "type": "function",
    "function": {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"]
          }
        },
        "required": ["location"]
      }
    }
  }
]
To get a response like:
"message": {
"role": "assistant",
"content": null,
"tool_calls": [
{
"id": "call_bkhLcH2zVrkSgMGqE5NPAKX4",
"type": "function",
"function": {
"name": "get_current_weather",
"arguments": "{\"location\":\"Boston, MA\"}"
}
}
]
}
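To close the loop, your app executes the tool call and (typically) feeds the result back to the model. The dispatch below is a minimal sketch under stated assumptions: the response shape shown above, a getCurrentWeather function you supply yourself, and a follow-up round trip whose exact mechanics are not defined in this RFC.

// Sketch only: executing the model's tool call locally.
// `response`, `getCurrentWeather`, and the tool-message round trip are
// assumptions; the RFC does not define these details.
const toolCall = response.message.tool_calls?.[0];
if (toolCall?.function.name === 'get_current_weather') {
  // Arguments arrive as a JSON string, so they must be parsed first.
  const args = JSON.parse(toolCall.function.arguments) as {
    location: string;
    unit?: 'celsius' | 'fahrenheit';
  };
  const weather = await getCurrentWeather(args.location, args.unit);
  // A follow-up request would typically include `weather` as a tool
  // message so the model can compose its final, natural-language answer.
}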
Manifest
You will need to add a new module in the manifest to enable LLM access. This will be at the model-family level, not a specific model/version (e.g. adding claude would give your app access to Opus, Sonnet and Haiku).
For example:
modules:
  llm:
    - key: my-ai-module
      model:
        - claude
Note: introducing an LLM module will trigger a new major version.
Model selection
We’re planning to launch with support for the three Claude 4 models: Sonnet, Opus and Haiku. You will specify the model you want to use on each request, which gives you the flexibility to choose the right model for each use case (see the sketch at the end of this section).
Opus
- Most capable
- Slowest (deep reasoning)
- Highest cost
Sonnet
- Balanced capability
- Moderate speed and cost
Haiku
- Fast & efficient
- Lowest cost
Note: AI moves quickly, so the specific models and versions may change before launch.
Initially, only text will be supported, but multimodal options will be considered in the future.
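As a hypothetical illustration of per-request model selection (the model identifiers below extrapolate from the 'claude-sonnet' string used earlier; actual names may differ):

// Cheap, fast model for a routine task.
const summary = await chat({
  model: 'claude-haiku', // assumed identifier
  maxCompletionTokens: 100,
  messages: [{ role: 'user', content: 'Summarise this issue in one sentence: ...' }]
});

// Most capable model, reserved for a task that needs deep reasoning.
const review = await chat({
  model: 'claude-opus', // assumed identifier
  maxCompletionTokens: 1000,
  messages: [{ role: 'user', content: 'Find contradictions across these requirements: ...' }]
});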
Admin Experience
We plan to be transparent about which apps have adopted Forge LLMs. Administrators will be informed via the Marketplace listing and at installation time if an app uses Forge LLMs.
Adding Forge LLMs (or a new model family) to your app will be a major version upgrade that requires admin approval.
Pricing
LLMs will become a paid feature of Forge as part of the pricing changes starting 1 Jan 2026. Usage will be visible in the developer console on the usage and costs page.
Usage will be measured by the volume of tokens sent and received by your app. Specific pricing/rates will be available before the capability goes to preview.
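If per-request token counts are surfaced on the response, as they are in most LLM APIs, apps could track their own consumption. The usage field below is hypothetical; this RFC does not define it:

const response = await chat({
  model: 'claude-sonnet',
  maxCompletionTokens: 50,
  messages: [{ role: 'user', content: 'Hello' }]
});
// Hypothetical: `usage` and its field names are assumptions; the RFC
// only states that billing is based on tokens sent and received.
const { inputTokens, outputTokens } = response.usage;
console.log(`Tokens sent: ${inputTokens}, tokens received: ${outputTokens}`);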
Responsible AI
Requests sent to LLMs on Forge will be subject to the same moderation checks as Atlassian’s first party AI/Rovo features. Messages that are flagged as high risk (according to Atlassian’s Acceptable Use Policy) will be blocked.
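Apps will presumably want to degrade gracefully when a request is blocked. The sketch below assumes blocked requests surface as a thrown error with an identifiable code; the actual error shape is not defined in this RFC:

try {
  const response = await chat({
    model: 'claude-sonnet',
    maxCompletionTokens: 100,
    messages: [{ role: 'user', content: userInput }] // userInput: untrusted text from your app
  });
  console.log(response.message.content);
} catch (err) {
  // 'MODERATION_BLOCKED' is a hypothetical error code for illustration;
  // the RFC does not define how blocked requests are reported.
  if ((err as { code?: string }).code === 'MODERATION_BLOCKED') {
    // Degrade gracefully, e.g. show a generic notice instead of failing.
  } else {
    throw err;
  }
}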
Asks
While we would appreciate any reactions you have to this RFC (even if it’s simply giving it a supportive “Agree, no serious flaws”), we’re especially interested in learning more about:
- What are your thoughts on the proposed SDK interface? Are there patterns or features you would expect that are missing?
- Are there use cases or requirements for LLMs in Forge apps that won’t be well supported in the proposed design?
- How important is model choice, and what additional models or capabilities would you like to see prioritised?
- Are there concerns about pricing, usage limits, or transparency that we should address?