Our path to our "disconnect" from Forge

Recently there have been discussions among the larger Atlassian Marketplace Partners about how to deal with the limitations of Forge in the context of more complex apps.

From a vendor perspective, there are two major topics that we have to reconcile when developing complex apps on the Forge platform:

  • the limitations of Forge, in terms of resource limits, versioning, speed and cost(!)
  • the expectations of customers that Forge apps have zero data egress

These two things are almost impossible to combine. The limitations of Forge require significant engineering investments to work around, and even with all these efforts the result is still a degraded customer experience due to performance issues. There is also strong vendor lock-in, with Forge pricing coming at a premium compared to other FaaS/PaaS platforms.

At the same time, we want to make sure that customers who value the security posture of Forge still have the option to benefit from the platform.

This has led us to explore the feasibility of a “best of both worlds” approach: we will use App Editions to create two different versions of our app:

  • Standard edition: 99% disconnected from Forge
  • Advanced edition: Forge native by default, Forge Remote where required

The Forge native implementation is rather straightforward, so I won’t go into the details of that; it just uses the Forge platform features that are already documented.

I will dive into the “disconnect” option and share our approach for those vendors who are also investigating this path.

Disconnecting the front-end

The first step is to disconnect the front-end. Our current architecture is already FaaS-based, meaning that our front-end is a static React app served from CDN which calls our REST API in the back-end.

This offers us some advantages, as it means that we can use the same front-end for both the Standard and the Advanced edition. We push the same code to our own infrastructure and to Forge.

There is one difference: because the app runs on Forge, the front-end needs to be loaded as a Forge resource. We use a single resource for all our modules and do the routing client-side.

resources:
  - key: frontend
    path: ./public

We also added the required remote and permissions to the Forge manifest:

remotes:
  - key: our-remote-infra
    baseUrl: ${REMOTE_URL}

permissions:
  external:
    scripts:
      - address: ${REMOTE_URL}
    fetch:
      client:
        - address: ${REMOTE_URL}
  content:
    scripts:
      - unsafe-inline
      - unsafe-eval
    styles:
      - unsafe-inline

So the entry point for all custom UI requests is the Forge version of the app. As we want to limit our exposure to Forge for the Standard edition, our Forge entry point is a single TypeScript file that only determines whether the customer is using the Standard or the Advanced edition of the app:

import { view } from '@forge/bridge';

view.getContext().then(async (context) => {
  if (context?.license?.capabilitySet !== 'capabilityadvanced') {
    // Load the front-end application from the remote
    // We use document.createElement('script') and document.body.appendChild()
  } else {
    const { App } = await import(/* webpackChunkName: "forgeNativeApp" */ './app');
    App();
  }
});
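The commented-out branch above can be fleshed out along these lines. This is a hedged sketch only: the base URL and the versioned bundle name are hypothetical placeholders, not our actual values.

```typescript
// Sketch of loading the Standard-edition front-end from our own infrastructure.
// bundleUrl() and the naming scheme are assumptions for illustration.
function bundleUrl(base: string, version: string): string {
  // Version the bundle name so customers pick up front-end updates immediately
  return `${base}/app.${version}.js`;
}

function loadRemoteFrontend(base: string, version: string): void {
  const script = document.createElement('script');
  script.src = bundleUrl(base, version);
  script.async = true;
  document.body.appendChild(script);
}
```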

The Standard edition of the app will now load the front-end code from our own infrastructure :tada:

Direct communications with the REST API

To improve performance, we do not want to rely on invokeRemote() for every request to our remote backend. But we need to know the tenant, and we also want secure communications. We might also need to access Atlassian REST APIs from our remote, which requires us to have a Forge Invocation Token (FIT) and the App System token (or even the User Token for impersonation).

So we established a token exchange in which we use invokeRemote() to replace the FIT with a custom JWT that is used when connecting to our own infrastructure.

Now this is a bit of opinionated code, so you will need to read through it. We use PassportJS for authenticating the FIT and putting the important stuff in the request User object (or the session). In addition, we use Inversify with Inversify-express-utils to support dependency injection within the context of an ExpressJS server. For those interested, I’m happy to share more details on that, but for now I’m just sharing this code snippet “as is” just for reference:

import { AbstractController } from '@collabsoft-net/controllers';
import { TokenExchangeDTO } from '@collabsoft-net/dto';
import { authenticate } from '@collabsoft-net/middleware';
import { CachingService } from '@collabsoft-net/types';
import Injectables from 'API/Injectables';
import { InstanceService } from 'API/services/InstanceService';
import { randomBytes, scryptSync } from 'crypto';
import StatusCodes from 'http-status-codes';
import { inject } from 'inversify';
import { controller, httpGet, results } from 'inversify-express-utils';
import jwt from 'jwt-simple';
import uniqid from 'uniqid';

@controller('/.well-known', authenticate('forge'))
export class WellKnownController extends AbstractController<ForgeSession> {

  constructor(
    @inject(Injectables.InstanceService) private instanceService: InstanceService,
    @inject(Injectables.CacheService) private cacheService: CachingService
  ) {
    super();
  }

  @httpGet('/token-exchange.json')
  async TokenExchangeHandler(): Promise<TokenExchangeDTO|results.StatusCodeResult> {
    try {
      const { instance: { app, principal, context }, appToken, userToken } = this.session;

      let instance = await this.instanceService.findByProperty('installationId', app.installation.id);
      if (!instance) {
        const baseUrlProductMatch = /^https:\/\/api\.atlassian\.com\/ex\/([^/]+)\//.exec(app.apiBaseUrl);
        const product: 'jira'|'confluence'|undefined = baseUrlProductMatch ? baseUrlProductMatch[1] as 'jira'|'confluence' : undefined;
        if (!product || (product !== 'jira' && product !== 'confluence')) throw new Error('Unable to determine host product, which is required for exchanging tokens');

        const salt = randomBytes(32).toString('hex');
        instance = await this.instanceService.save({
          id: uniqid(),
          salt,
          installationId: app.installation.id,
          apiBaseUrl: app.apiBaseUrl,
          region: 'EU',
          product
        });
      }

      const ttl = 15 * 60;
      const expires = new Date().getTime() + (ttl * 1000);

      // Store the appToken and userToken in cache (encrypted)
      const appTokenCacheKey = scryptSync(randomBytes(16).toString('hex'), instance.id, 16).toString('hex');
      const userTokenCacheKey = scryptSync(randomBytes(16).toString('hex'), instance.id, 16).toString('hex');

      if (this.cacheService) {
        if (appToken) {
          await this.cacheService.set(appTokenCacheKey, appToken, ttl, true);
        }
        if (userToken) {
          await this.cacheService.set(userTokenCacheKey, userToken, ttl, true);
        }
      }

      const payload: ForgeRemoteJWT = {
        iss: instance.id,
        sub: principal || context?.accountId as string,
        iat: new Date().getTime(),
        exp: expires,
        appToken: appTokenCacheKey,
        userToken: userTokenCacheKey,
        region: instance.region
      };

      const hash = this.instanceService.getHash(instance);
      const token = jwt.encode(payload, hash);

      // Assumption: TokenExchangeDTO carries the token and its expiry,
      // so the front-end can check `expires` before reusing it
      return { token, expires } as unknown as TokenExchangeDTO;
    } catch {
      return this.statusCode(StatusCodes.NOT_FOUND);
    }
  }

}

What you can see here is that we use the information from the FIT to retrieve the tenant from our own database. We put the app system token and user token in cache with a TTL of 15 minutes (the same TTL as the FIT), and we create a new signed JWT with custom claims, including the data residency region, which allows us to call the REST API that matches the customer’s region.
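On the receiving end, the remote backend has to validate this custom JWT on every request. Our actual implementation does this inside a PassportJS strategy, but the core check can be sketched with Node’s crypto alone. This sketch assumes HS256, elides the per-instance secret lookup, and keeps the millisecond-based `exp` used in the payload above:

```typescript
import { createHmac } from 'crypto';

// Base64url without padding, as used in JWTs
const b64url = (buf: Buffer): string =>
  buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');

function sign(payload: Record<string, any>, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: 'HS256', typ: 'JWT' })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac('sha256', secret).update(`${header}.${body}`).digest());
  return `${header}.${body}.${sig}`;
}

function verify(token: string, secret: string): Record<string, any> {
  const [header, body, sig] = token.split('.');
  // Recompute the signature with the per-instance secret and compare
  const expected = b64url(createHmac('sha256', secret).update(`${header}.${body}`).digest());
  if (sig !== expected) throw new Error('Invalid signature');
  const payload = JSON.parse(Buffer.from(body, 'base64url').toString());
  // `exp` is in milliseconds here, matching the payload created in the controller
  if (typeof payload.exp === 'number' && payload.exp <= Date.now()) {
    throw new Error('Token expired');
  }
  return payload;
}
```

In the real app the `iss` claim (our instance id) is used to look up the tenant and its secret before verifying.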

Re-using AP.context.getToken()

This JWT is used by the front-end to call the remote infrastructure securely. To do this, we basically recreated the concept of AP.context.getToken() from Atlassian Connect. This allowed us to do a quick migration from Connect to Forge, as the code didn’t need much adjustment. The only difference is that it now uses invokeRemote() to get the token:

  async getToken(): Promise<TokenExchangeDTO|undefined> {
    if (!this.token || this.token.expires <= new Date().getTime()) {
      const { data } = await invokeRemote({ path: '/.well-known/token-exchange.json' }).catch(() => ({ data: undefined }));
      this.token = data;
    }
    return this.token;
  }

The token can then be used with Authorization: JWT ${token} in a native fetch request on the front-end. The token will automatically be refreshed when it expires after 15 minutes.
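A call from the front-end could then look roughly like this. This is a sketch: the helper names and endpoint path are hypothetical, and the token is assumed to come from the getToken() method shown above.

```typescript
// Hypothetical helper for calling our remote REST API with the exchanged token
function authHeaders(token: string): Record<string, string> {
  // The remote expects the custom "JWT" scheme rather than the usual "Bearer"
  return { Authorization: `JWT ${token}` };
}

async function fetchFromRemote(baseUrl: string, path: string, token: string): Promise<unknown> {
  const response = await fetch(`${baseUrl}${path}`, { headers: authHeaders(token) });
  if (!response.ok) throw new Error(`Remote request failed: ${response.status}`);
  return response.json();
}
```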

Better versioning

One of the benefits of this approach is that for customers on the Standard edition, we can deploy as often as we want, and there is a lower chance of being impacted by major versions that require manual approval from administrators before they are installed. As Atlassian is still shaping the versioning policies, there are situations in which vendors can be caught by surprise when a deployment is labelled as a major version.

For the Standard Edition, customers will almost never have to deal with manual installations to receive updates, and when they do, we can guide them in the front-end as we can still publish updates to front-end code without requiring a new version to be installed.

Benefits & downsides of this approach

The benefit of this approach is that customers will have a choice: performance and lower pricing in exchange for data egress, or the security posture of Forge (with all the limitations, performance impact and higher costs that are associated with the Forge platform).

The biggest downside is that the app will never qualify for “Runs on Atlassian”, as we will always need to ask customers to approve our remote and permissions from the manifest, even when they are on the Advanced edition. It would be great if Atlassian would allow us to publish a different manifest for each edition, but I don’t expect that this will end up on the roadmap.

It is also possible that customers will not be convinced, as we technically cannot guarantee zero data egress in the Advanced edition once they have approved access to the remote resources.

But given that we currently believe 90%+ of our customers will go for the Standard edition anyway, this is a risk we are willing to take. In addition, almost all our apps will require data egress anyway, as they are either too complex to run completely Forge native, or are integration apps that need to connect to a 3rd party. On top of that, our customers have never made a big deal about our Atlassian Connect apps running on our infrastructure.

I hope this write-up has shed some light on the approach that we’ve taken. Special thanks go to @scott.dudley and @tobias.viehweger for also sharing their POCs, thoughts and input.

30 Likes

Thank you @remie this is really cool and fascinating!

I’d like to second the request/wish for different manifests for editions. We would love to have a no-egress standard edition and an advanced edition with advanced features that require egress. For now, we’ve decided to forgo editions instead.

1 Like

Thanks for this post.

It was a great inspiration and we might use parts of this idea. We ported some of our apps to Forge where it was ‘good enough’, and are now at the apps where it’s getting tricky.

One of the reasons we might use this is that it ‘works’ with the browser. It also opens the door to things like ‘signed’ URLs to dynamic images that are fetched via the browser (delivering images via the Forge bridge won’t be fun).

Another personal reason might be performance =). So far, Forge performance has been unimpressive. I would love to see snappier Atlassian products, so I want our apps to contribute to that :wink:

1 Like

This is also one of the reasons we are taking this approach. Our Figma for Jira and Figma for Confluence apps rely heavily on caching for performance reasons and to avoid Figma rate limiting. We have a 75GB Redis instance with 16K keys to help us deliver an optimal user experience. This is not feasible with Forge, and being able to deliver these images from cache is crucial for our app. Users that go Forge native will have a very degraded experience due to the lack of caching.

2 Likes

I understood almost nothing, so I asked ChatGPT5:

Is that a good idea? Why? Brief response in English

And copied & pasted the post.

This was the answer:

It’s a pragmatic workaround, but not an ideal solution. It trades true Forge security guarantees for performance and flexibility. This dual-edition model will likely confuse customers, complicate maintenance, and still won’t fully satisfy zero-egress expectations. It’s “good” only if your priority is performance and cost—not strict data residency purity.

Hmmm


Seems accurate, and it fits with our assumption that although Atlassian has made the top 1,000 Enterprise customers its priority, our customer base is the other 299,000 SMBs that value privacy & security less, and for which performance and cost are the top priorities.

We have successfully built a business on top of Atlassian Connect with a customer base that has never complained about data egress or the fact that we use our own infrastructure. We would like to continue to serve this group at the same performance and cost as we are today.

7 Likes

I think that this was already considered by the PMs when they were crafting the original App Editions work, but there are complexities.

For example, the one I remember is that if a customer moved from Standard to Advanced, would they expect to need to do a data migration? If an app stored data off Atlassian in the Standard edition then should it have to support data migration to the “Runs on Atlassian” Advanced edition? If so, will that mean downtime for the app when changing editions?

I would be interested to hear what everybody thinks about that flow.

1 Like

If an app stored data off Atlassian in the Standard edition then should it have to support data migration to the “Runs on Atlassian” Advanced edition? If so, will that mean downtime for the app when changing editions?

This is indeed a challenge, but no more challenging than it is to migrate data from Connect to Forge.

To be honest, the lack of support from Atlassian on how to bulk-migrate customer data from Connect to Forge is underwhelming considering the push to Forge. Many vendors have been scrambling to find a way to support this, especially because the migration from Connect to Forge is seamless from a customer perspective.

They press a button in the UPM and expect the app to continue to work without any downtime when switching from Connect to Forge.

If we manage to find a solution for that, I’m sure we can do the same for switching app editions :wink:

3 Likes