Auth.atlassian.io performance :-(

I don’t yet know if this is a legitimate gripe, or if what I’m seeing is expected and normal.

Having recently installed New Relic for our Jira Cloud add-on, Risk Register, I’m learning where the performance bottlenecks are. One that stands out: calls to auth.atlassian.io are relatively slow, averaging ~800ms over the last 12 hours.

You’ll recall that auth.atlassian.io is the authorization server involved in issuing OAuth 2.0 bearer tokens for apps.

In my view, the OAuth 2.0 JWT Bearer token authorization grant flow needs to be lightning fast, since it forms part of the infrastructure for add-ons across the board.
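For context, every one of those calls is a token exchange along these lines (a rough TypeScript sketch based on the RFC 7523 JWT bearer grant, not our production code; the claim values and helper names are placeholders):

```typescript
// Sketch of the OAuth 2.0 JWT bearer grant against auth.atlassian.io.
// clientKey, sharedSecret and userAccountId are illustrative placeholders.
import jwt from "jsonwebtoken";

async function fetchBearerToken(
  clientKey: string,
  sharedSecret: string,
  userAccountId: string
): Promise<{ access_token: string; expires_in: number }> {
  const now = Math.floor(Date.now() / 1000);

  // Assertion JWT signed with the add-on's shared secret.
  const assertion = jwt.sign(
    {
      iss: `urn:atlassian:connect:clientid:${clientKey}`,
      sub: `urn:atlassian:connect:useraccountid:${userAccountId}`,
      aud: "https://auth.atlassian.io",
      iat: now,
      exp: now + 60,
    },
    sharedSecret
  );

  // Exchange the assertion for a bearer token.
  const res = await fetch("https://auth.atlassian.io/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
      assertion,
    }),
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  return res.json();
}
```

At ~800ms per exchange, that overhead lands on every feature that needs to call back into Jira on behalf of a user.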

Right now, auth.atlassian.io is not listed on Atlassian’s Statuspage, and I think it should be because it’s an important piece of the add-on platform.

You could help by:

  • letting me know if you’ve seen similar performance from calls to auth.atlassian.io in your add-ons; and
  • commenting on the performance itself: is it acceptable, or is it something that needs to be addressed?

Hi David,

Looking at our internal monitoring, the median response time for auth.atlassian.io is around 200ms, and the 75th percentile usually around 300ms. We do have some spikes in the 75th percentile that go to around 800ms, but these could easily be caused by a handful of slow cloud instances dominating the response times (we do occasionally have to call back to the cloud instance to retrieve some information). We also retry in case we get timeouts or errors from the downstream cloud instance, so those could also lengthen the response time.

If you’re regularly seeing 800ms response times, it could be caused by the specific cloud instances your add-on is installed on. Also, just to make sure: are you reusing the tokens? They stay valid for 15 minutes, so it should not be necessary to get a new token for every single request.
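For example, a simple per-tenant cache is enough to avoid hitting auth.atlassian.io on every request (just a sketch; fetchBearerToken() stands in for however you currently request the token, and the names and margins are illustrative):

```typescript
// Reuse bearer tokens until shortly before they expire, instead of
// requesting a fresh one per request. fetchBearerToken() is assumed to
// return { access_token, expires_in } from auth.atlassian.io.
type CachedToken = { token: string; expiresAt: number };
const tokenCache = new Map<string, CachedToken>();

async function getBearerToken(
  clientKey: string,
  sharedSecret: string,
  userAccountId: string
): Promise<string> {
  const key = `${clientKey}:${userAccountId}`;
  const cached = tokenCache.get(key);
  const now = Date.now();

  // Reuse the cached token while it still has a safety margin left.
  if (cached && cached.expiresAt - now > 60_000) {
    return cached.token;
  }

  const { access_token, expires_in } = await fetchBearerToken(
    clientKey,
    sharedSecret,
    userAccountId
  );
  tokenCache.set(key, {
    token: access_token,
    expiresAt: now + expires_in * 1000,
  });
  return access_token;
}
```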

I’ll also see if I can get auth.atlassian.io listed on the Atlassian Statuspage. Just for your peace of mind, we have not had any downtime since launch (except for one ~2-minute blip caused by an AWS networking issue), and the service is monitored 24/7.

Cheers,
Eero


Thanks for your detailed reply, Eero. I’ve been following the advice under Token expiration:

We suggest tracking expiration time and requesting a new token before it expires, rather than handling a token expiration error. Refresh tokens at the 30-60 second mark.

I interpreted that as: refresh every token 60 seconds after it is obtained. I suppose it could also be interpreted as: refresh every token 60 seconds before it is due to expire, i.e. at the fourteen-minute mark.

Yes, so I guess the documentation could be clearer about this. The idea is that you should always check the exp claim for the actual expiration time (it’s 15 minutes at the moment, but that could change in the future), and then refresh the token 30-60 seconds before that expiration time, just to make sure you’re never using an expired token.
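In other words, something along these lines (rough sketch only; the 60-second margin and helper name are just examples):

```typescript
// Decide whether to refresh based on the token's exp claim rather than a
// hard-coded 15-minute lifetime. jwt.decode() does not verify the signature,
// which is fine here since we only need the expiry of our own token.
import jwt from "jsonwebtoken";

const REFRESH_MARGIN_SECONDS = 60; // refresh 30-60s before expiry

function needsRefresh(accessToken: string): boolean {
  const payload = jwt.decode(accessToken) as { exp?: number } | null;
  if (!payload?.exp) return true; // no exp claim -> treat as expired

  const now = Math.floor(Date.now() / 1000);
  return payload.exp - now <= REFRESH_MARGIN_SECONDS;
}
```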


Perfect! Thanks for the clarification.