Hi there,
What is the expiry time on the refresh tokens given with the “offline_access” scope on the Jira Cloud API? Is this documented anywhere that I have missed?
Hello @delliot,
AFAIK, refresh tokens do not expire. I’ll ping the team to see what other details I can provide here (ex: revocation).
Cheers,
Ian
I have a similar follow-up question: I see that API calls to refresh access tokens sometimes get a 403 Forbidden from POST https://auth.atlassian.com/oauth/token. Any idea what could cause the 403 errors? I haven’t managed to replicate this myself yet, but I see it happening in my production deployment from time to time.
The docs don’t really mention what the expected responses from this API are.
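For reference, the refresh call itself is just the standard OAuth 2.0 refresh_token grant. A simplified sketch of what my code does (not my exact code; CLIENT_ID and CLIENT_SECRET are read from the environment, and storedRefreshToken is the refresh token persisted from the previous token response):

// Rough sketch of the refresh call against the Atlassian token endpoint.
async function refreshAccessToken(storedRefreshToken: string): Promise<unknown> {
  const response = await fetch("https://auth.atlassian.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      grant_type: "refresh_token",
      client_id: process.env.CLIENT_ID,
      client_secret: process.env.CLIENT_SECRET,
      refresh_token: storedRefreshToken,
    }),
  });
  if (!response.ok) {
    // This is where the occasional 403 described above shows up.
    throw new Error(`Token refresh failed with status ${response.status}`);
  }
  return response.json();
}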
Hi @tbinna - do you have these 403s logged? If so, can you file a bug and include as many details as possible (e.g. request timestamp, error response from the API, client_id)? To be clear, don’t post those here, but rather in the bug ticket.
I’m also unable to reproduce… but if you’ve hit a bona fide 403, we should dig in.
@nmansilla I have them logged but I need to see if I can collect a few more details. Main thing I am missing is the exact error response body from the API (if there is actually one). I will try to get that logged as well and then file an issue - or comment back here if I found the issue is with my own code.
Hey @nmansilla, I just created DEVHELP-2517 with my findings regarding my previous question.
@tbinna - I’ve been chatting with the team members about this. So, while not awesome, when an intermittent 500 is encountered the current advice is to retry the request. I’ll inquire whether this is something we can expect to improve when 3LO is out of developer preview / GA.
Re: the 403s, still digging in.
Thanks @nmansilla, it shouldn’t be too difficult to retry once when we see 500 responses from the API. I don’t think it makes sense to retry several times, though (e.g. with a delay), because it’s not a rate limiting error or anything like that.
Looking forward to your findings on the 403s.
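For what it’s worth, the retry-once logic I have in mind is roughly this (callJiraApi is just a placeholder for whatever performs the authenticated 3LO request, not our actual code):

// Sketch only: retry a request exactly once if it comes back with a 500.
async function withSingleRetry(callJiraApi: () => Promise<Response>): Promise<Response> {
  const first = await callJiraApi();
  if (first.status !== 500) {
    return first;
  }
  // Intermittent 500s are expected to succeed on a second attempt.
  return callJiraApi();
}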
Hi @nmansilla - bumping this again. We still see 403 errors when refreshing access tokens on a fairly regular basis. The 403 error response payload from https://auth.atlassian.com/oauth/token looks something like this
{
"error":"invalid_grant",
"error_description":"Unknown or invalid refresh token."
}
I tried to google a bit on how to interpret this message, but there is not much coming up. It seems this might be the direct error response from Auth0 (which I believe Atlassian’s OAuth2 implementation is based on?). This thread suggests that this happens because the user revoked the token, but that’s not very clear from the responses, and I can’t find any other documentation to support it. If this really means someone has revoked the token, we should redirect the user to a re-auth flow.
Do you have any more details on this?
We’re seeing the same errors. Has anyone been able to figure out what the root cause of this is? I’ve not been able to access @tbinna’s DEVHELP-2517 ticket.
@anon12976859 unfortunately, this has not been resolved yet. According to @shraj the respective Atlassian team is looking into this but I don’t think there is any public issue that we could follow.
Hi @anon12976859 / @tbinna
Normally that error message would indicate that the refresh token is invalid, for example because the user has revoked access for the app or somebody has tampered with the token (unlikely). In those cases, sending the user back through the authorization flow again is the right thing to do. We should maybe clarify this in the documentation.
If you suspect the user did not revoke access and the error shouldn’t be happening, it would help us investigate if you could provide the OAuth client id for your app and timestamps on a DEVHELP ticket.
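As a rough illustration of what I mean (placeholder function names, not an actual implementation), app code could treat invalid_grant on refresh as the trigger to restart the authorization flow:

// Placeholders that would be implemented by the app itself.
declare function refreshTokens(userId: string): Promise<{ access_token: string; refresh_token: string }>;
declare function startAuthorizationFlow(userId: string): Promise<void>;

// If the refresh fails with invalid_grant, the refresh token is no longer
// usable (e.g. access was revoked), so send the user back through the
// OAuth 2.0 (3LO) authorization flow.
async function ensureValidTokens(userId: string) {
  try {
    return await refreshTokens(userId);
  } catch (err) {
    if ((err as { error?: string }).error === "invalid_grant") {
      await startAuthorizationFlow(userId);
      return undefined;
    }
    throw err;
  }
}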
Hi @ekaukonen, I’ve created the ticket DEVHELP-2517, which contains the request. I’ve updated the ticket with the client_id and timestamps.
I’m requesting a token against my dev account, which lasts about an hour before I need to re-request the token, at which point I get the error described above. I’ve not revoked or removed access to my app. I also get the error if I try to request a token immediately after the callback, even with a successful API call to https://api.atlassian.com/oauth/token/accessible-resources.
Thanks
Hey @ekaukonen, our customers are reporting issues with our app that accesses the Jira REST API via OAuth 3LO, and we are again seeing an increased number of 5xx errors for requests made to the API via OAuth 3LO. Status codes include 500, 502 and 503, all of them for /ex/jira/{siteId}/rest/api/2/search. We see 500 and 502 the most. 503s are not so common but still happen every now and then.
As suggested earlier in this thread, we have implemented a retry strategy, but the fact that we still see these errors regularly suggests that this is not a very effective way to fix them. Also, retrying requests that return these errors might just be contributing to the 503 errors (see example below).
Additionally, we still see regular 403 errors. The frequency of these errors makes it hard to believe that they all come from customers who have revoked the connection. However, I will try to work with some customers to get concrete examples of this in a separate post.
Here are some examples for 5xx errors:
500 returns the usual message, as reported before by others:
{
  "error": "Internal Server Error",
  "exception": "com.netflix.zuul.exception.ZuulException",
  "message": "Read timed out",
  "status": 500,
  "timestamp": "2019-11-20T09:44:29.781+0000"
}
Sample response headers, in case they contain any useful information for you:
{
  "connection": "close",
  "content-type": "application/json;charset=UTF-8",
  "date": "Wed, 20 Nov 2019 09:44:29 GMT",
  "strict-transport-security": "max-age=315360000; includeSubDomains; preload",
  "transfer-encoding": "chunked",
  "vary": "Accept-Encoding",
  "x-application-context": "Stargate:prod,prod-east:8080",
  "x-content-type-options": "nosniff",
  "x-failure-category": "SOCKET_TIMEOUT",
  "x-xss-protection": "1; mode=block"
}
502: Bad Gateway, without any extra details.
503 returns:
This Jira instance is currently under heavy load and is not able to process your request. Try again in a few seconds. If the problem persists, contact Jira support.
We sometimes see the message above exposed as 429 errors as well.
Do you have any insights from Atlassian’s side on what could be wrong here?
For those of you following this issue (in addition to @tbinna): there are engineers looking into it. If you’re making API calls against api.atlassian.com to a customer cloud instance, then this may affect you – that is, if your app is using OAuth 2.0 3LO. We’ll post more details once we have something definite to share.
Hi @nmansilla - I’m facing the same issue reported by @tbinna. Is there any update?
@nmansilla Any update on this? Bumping this because I seem to still be encountering this issue as well, and I’m wondering whether it has been fixed. Any information on this would be great.
Hello @ianRagudo
Can you please confirm this? I am also using the refresh token to create a new access token, since the access token expires in 1 hour, but we don’t know whether the refresh token expires and, if so, after how many days.
Thanks
@tbinna Regarding the 5XX errors, see https://developer.atlassian.com/cloud/jira/platform/rate-limiting/
Some transient 5XX errors are accompanied by a Retry-After header. For example, a 503 response may be returned when a resource limit has been reached. While these are not rate limit responses, they can be handled with similar logic as outlined below.
In my experience with other (non-Atlassian) cloud apps, I recommend handling all 5XX errors and 429 errors with retries up to some time limit. If the problem is persistent, the retries will reach the time limit and fail anyway.
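Something along these lines, just as a sketch (doRequest stands in for the actual API call, and the 30-second budget is an arbitrary choice):

// Sketch: retry 429 and 5XX responses, honouring Retry-After when present,
// until an overall time budget is exhausted.
async function requestWithRetries(
  doRequest: () => Promise<Response>,
  maxTotalMs: number = 30_000
): Promise<Response> {
  const deadline = Date.now() + maxTotalMs;
  while (true) {
    const res = await doRequest();
    const retryable = res.status === 429 || res.status >= 500;
    if (!retryable || Date.now() >= deadline) {
      return res; // success, non-retryable error, or time budget used up
    }
    // Prefer the Retry-After header (in seconds) if the server sent one.
    const retryAfterSeconds = Number(res.headers.get("retry-after")) || 2;
    await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
  }
}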
It doesn’t expire on a timeout; you get a new refresh token whenever you refresh the access token, so you must save both (rough sketch at the end of this post).
However, I am also getting a 403 like @tbinna after a few refreshes:
{
"error":"invalid_grant",
"error_description":"Unknown or invalid refresh token."
}
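For reference, the token handling I have in mind when I say “save both” is roughly this (refreshAccessToken and saveTokens are placeholders for your own HTTP client and storage, not my actual code):

interface TokenPair {
  access_token: string;
  refresh_token: string;
  expires_in: number;
}

// Placeholders for the actual refresh call and token storage.
declare function refreshAccessToken(refreshToken: string): Promise<TokenPair>;
declare function saveTokens(userId: string, tokens: TokenPair): Promise<void>;

// Sketch: after every refresh, store BOTH the new access token and the new
// refresh token. Keeping only the access token leaves a stale refresh token
// behind, and the next refresh would then fail with invalid_grant.
async function rotateTokens(userId: string, currentRefreshToken: string): Promise<string> {
  const tokens = await refreshAccessToken(currentRefreshToken);
  await saveTokens(userId, tokens);
  return tokens.access_token;
}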
@nmansilla was there any update on those 403s?