Forge invoke is very slow

I’ve been working on an app that makes use of TimescaleDB for some reporting queries. Running in development, I’m making a request using invoke, where the following happens.

Now granted this is a development setup, but seeing as both the Browser and the App Server are running on my local machine (using Tailscale to expose the App Server to Forge), I’d expect the Browser->Forge Backend Function and Forge Backend Function->App Server legs to have similar latency. Bear in mind that the 120ms for the fetch report includes all of the code inside the resolver function, so there’s no additional overhead to consider there.

The lag is significant enough that I’m pondering whether the architecture above is viable. I’m showing multiple filters in the reporting view, but waiting over a second every time a user changes one is going to be a lousy user experience. Are response times better when an app is deployed? Perhaps I should be going direct to the App Server rather than via a Forge backend function? (I was hoping to stick with just Forge’s auth system.)
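For anyone wanting to reproduce the measurement, one way to check whether the one-second round trips are typical rather than outliers is to time repeated invoke calls from the frontend. This is only a sketch: `timeCalls` is a hypothetical helper, and the `invoke('fetchReport', …)` call in the usage comment assumes a resolver named `fetchReport`.

```javascript
// Hypothetical helper: time repeated async calls (e.g. the Custom UI
// bridge's invoke) and report min/median/max, so a single slow sample
// isn't mistaken for the typical latency.
async function timeCalls(fn, runs = 5) {
  const samples = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await fn();
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  return {
    min: samples[0],
    median: samples[Math.floor(samples.length / 2)],
    max: samples[samples.length - 1],
  };
}

// Usage from a Custom UI frontend might look like (names assumed):
//   import { invoke } from '@forge/bridge';
//   const stats = await timeCalls(() => invoke('fetchReport', { filter }), 10);
//   console.log(stats);
```

Running 10 or so samples and looking at the median rules out cold starts and one-off network blips when comparing tunnel vs. deployed numbers.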

4 Likes

Just to understand your setup more.

If you use forge tunnel then Forge Backend Function would be running on your local machine as well. I’m assuming you’re not using this and have deployed your app at least to the development environment.

I’m curious about the numbers you’re receiving because they are highly unexpected. How many times did you measure this latency?

Is there much code in your resolvers?

Can I have your appId and a rough time range on this measurement so I can check the logs to see if something is unusual?

Hey @JoshuaHwang, this is just running on my local machine. My understanding was that forge tunnel was uploading the backend functions to run in Forge (it takes quite a while to update after each change to the backend code, and the requests go to an Atlassian domain). Having read around a bit, it turns out that they’re actually running in a Docker container on my local machine. This means there’s an extra hop, because each invocation has to go via an Atlassian server and then back to my local machine. This explains some of the additional overhead but still doesn’t explain 1 sec plus.

Hey @JingYuan, the app id is df1dbe2f-dc85-4a85-be1a-d81101f5dc86 and the time period would have been an hour or two before my original post.

Hi @opsb, I checked a few of your app’s invocation logs. The invocation overhead is between 100ms and 200ms (from when the Platform service received the invocation request to when it sent the request to your local backend service via the tunnel), which matches our SLO.

I’m not sure which part is adding the significant overhead. Can you please try to deploy your app to the Forge development environment and invoke from there? You can check the app’s logs using forge logs.

1 Like

We are seeing very similar Forge function performance in our app. I have created an empty resolver function for testing and called it through the custom UI bridge with the following results (request timing shown in the browser’s developer tools):

  • Without tunnel: ~700ms
  • With tunnel: ~1,100ms

The latency is pretty high even in the deployed version and the tunnel adds about 400ms. Part of it could be due to my location. I’m in Europe and the function is executed in the US.

That’s a good point. I’m on a 500 Mb fiber connection, but I’m in Spain so there’s a degree of latency there. In the end I decided to switch to using atlassian-connect. Since I need a database, it doesn’t make much sense to use Forge, as I’d need a separate app server for Forge to talk to the database anyway. There’s a bit to set up, but it’s nice to have more control and a simpler architecture.

Hi @opsb ,

I’m the Product Manager responsible for the Forge runtime, which includes the problem you’ve flagged here.

I suspect what you are seeing is geographic latency, because you’re in Europe but Forge apps are deployed in US West. Even when you are running Forge locally using the forge tunnel command, a central broker service sits between the browser and your Forge function to route the request to your local machine (and that service is in US West).

This problem can be further exacerbated if your app is also making REST API calls back to Atlassian product APIs and the Atlassian product is pinned to a different region.

We are planning to make some improvements to how Forge apps are deployed later in 2023. By supporting multi-region deployments, you’ll be able to have the end user, the Forge app and the Atlassian product DB all in the same region.

Thanks for the feedback!

3 Likes

Hi @HeyJoe, thanks for reaching out here. I’m finding the developer experience is far better using Connect, so for now I’m going to stick with that. That said, Forge definitely provides some nice conveniences, and I can imagine migrating back once it’s matured a little.

Thanks for the feedback. Other than the invocation speed, are there any other key parts of the experience that led you to choose Connect?

Support interaction this morning:

  • Customer: “it’s very slow”
  • Me: “nothing we can do sorry, it’s using Atlassian’s UI Kit and is served/hosted on their infrastructure”
  • Customer: emails Atlassian support
  • Atlassian support: “you’ll need to engage with the vendor”
  • Customer: “they sent me back to you”
  • Me: “yeah, again, nothing we can do, sorry. I can submit a ticket with them, but considering they haven’t fixed many core Forge problems in over two years now, it’s unlikely to be resolved any time soon. This is why our Forge apps are still free: we can’t justify charging customers.”
3 Likes

Same here. Seconds of waiting time are killing the user experience. This is a serious issue.

2 Likes

@yvedb,

Thank you for your confirmation of the problem.

What would really help is additional detail. Could you provide the kind of details given in the original post? Do you have any data to confirm or deny @HeyJoe’s hypothesis that geographic latency is the problem? Can you share customer anecdotes to help us better understand the problem?

@HeyJoe the other big thing was that I couldn’t access a database directly from Forge. This meant I had to introduce a separate API server to act as the backend, so I ended up in a situation where Forge was just an additional (unneeded) hop in each request. Once I’d been forced to introduce the API server, it just seemed far simpler to cut out Forge completely.

2 Likes

We are also seeing atrociously high latencies at every stage of the Forge request lifecycle. A simple invoke that does this (and only this, mind you) takes 3.56 seconds (!!!):

const response = await api.asApp().requestJira(route`/rest/api/3/field`);
const data = await response.json();

// Note: logging response.status here; JSON.stringify on a fetch Response
// just prints "{}".
console.debug(`JIRA: /rest/api/3/field GET returned ${response.ok ? `OK: ${data?.length} field(s)` : `ERROR: ${response.status}`}`);

Same with other endpoints that read data from Storage. An endpoint that simply reads a single 10-50KB record from storage (i.e. storage.get) and does very lightweight in-memory processing takes 3.17 seconds (!). We know it’s not region-related (at least not significantly) because customers from the Americas are facing similarly grueling response times from the same APIs.

One sample request’s timings:

  • Resource Scheduling: 5.12 ms
  • Connection Start: 25.50 ms
  • Request Sent: 0.89 ms
  • Waiting for server response: 3,080 ms
    (does 4 parallel GET requests to Storage and Jira and aggregates the results)
    • Jira (get field context): 483 ms
    • Jira (get field configurations): 393 ms
    • Storage (get): 779 ms
    • Storage (get): 801 ms
      Even at these extremely slow times, the parallel calls should amount to about 800 ms; the remaining ~2,200 ms (!) is some kind of request overhead.
  • Content Download: 2.3 ms

App ID: ari:cloud:ecosystem::app/46040245-5cda-487c-816e-73e2774387c6
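To sanity-check the arithmetic above: with parallel requests, the wall-clock wait should be close to the slowest call, not the sum of all four. A small simulation of that point (the helper name and the simulated latencies are illustrative only, not from any Forge API):

```javascript
// Simulate N parallel requests and measure wall-clock time. Promise.all
// resolves when the slowest call finishes, so latencies of 483/393/779/801 ms
// run in parallel should take roughly 800 ms of wall-clock time, not their
// ~2,450 ms sum. Anything much beyond the slowest call is overhead added
// outside the application code.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function parallelWallClock(latenciesMs) {
  const start = performance.now();
  await Promise.all(latenciesMs.map(delay));
  return performance.now() - start;
}

// Example: parallelWallClock([483, 393, 779, 801]) should come out near the
// slowest latency (~801 ms).
```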

3 Likes

Hi @opsb,
Since I am having similar performance issues with Forge, I am also thinking of switching to Connect… :wink:

May I ask you to share your experience with Connect? Are API calls from Connect now “fast enough” for your app?

1 Like

I see similar issues. I decided to use Storage to store field options, and I thought I should keep it simple: one option per entry. How disappointed I was when I found out that you can query only 20 items max, and each query took, in my case, ~800ms. For 200 options it took almost 10s. That’s insane.
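One workaround for this pattern (a sketch, not from the post): pack the options into a few chunked Storage entries instead of one entry per option, so reading 200 options takes one or two storage.get calls rather than ~10 paginated queries. The chunk size and key names below are assumptions.

```javascript
// Split a flat list of options into fixed-size chunks so each chunk can be
// stored as a single Storage value. The chunk size is an assumption chosen
// to stay well under Forge Storage's per-value size limit; adjust for the
// actual size of your option objects.
function chunkOptions(options, chunkSize = 100) {
  const chunks = [];
  for (let i = 0; i < options.length; i += chunkSize) {
    chunks.push(options.slice(i, i + chunkSize));
  }
  return chunks;
}

// Hypothetical write/read paths (key names are placeholders):
//   await Promise.all(chunkOptions(allOptions).map(
//     (chunk, i) => storage.set(`field-options-${i}`, chunk)));
//   const firstHundred = await storage.get('field-options-0');
```

The trade-off is that updating a single option now means rewriting its whole chunk, which is usually fine for rarely-changing field options.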

1 Like

Dear Atlassian Staff, do you have any updates on this one? This is a blocker for the European market.

2 Likes

Agree 100%! Customers in the EU are simply saying they won’t move to our latest (Forge) app because it’s awfully slow. So please share any details about where the Forge data residency program is at, or any other performance-related initiative.
I’ve seen customers in the States using the app and it’s more or less OK, but in the EU you cannot say it’s OK. Far from it.

1 Like