Just a thought: is there any chance to have no limit access as a paid service? I understand that Atlassian has to cut costs of running infrastructure with gazillion requests from apps. But I think we as vendors can come up with a balance between fees we have to pay for fast performance and our pricing.
Hi, thanks for sharing this initiative!
Could you please clarify if POST /rest/api/3/search is affected when posting a JQL in the body (as opposed to /rest/api/3/search/jql)?
Sorry for the naive question.
Thank you!
This is quite a significant change that will take our focus away from other initiatives, such as registering on and migrating to Forge. Which of these should we prioritize, from Atlassian's point of view?
I have an additional concern about external data synchronization in a pull-like model via JQL search, due to eventual consistency.
Let’s take an example with some base query, e.g. PROJECT = ASD.
With the current APIs, we can request PROJECT = ASD and then pull new changes with a query like PROJECT = ASD AND UPDATED >= PREV_UPDATE_TS AND UPDATED < CURRENT_UPDATE_TS.
CURRENT_UPDATE_TS will obviously be very close to the timestamp of “now”, so we cannot be sure that we will retrieve all issues updated within the specified timeframe due to eventual consistency.
Some services provide a field/parameter like nextSyncToken/syncToken in their APIs to allow incremental and consistent retrieval of the data updated since the previous search.
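For illustration, this is roughly the windowed pull we rely on today, rewritten against the proposed GET /rest/api/3/search/jql contract (a sketch only: the parameter names are taken from this RFC, and the site URL and credentials are placeholders):

```python
# Sketch of the windowed pull described above, against the proposed
# GET /rest/api/3/search/jql contract. Site URL and credentials are placeholders.
import requests

BASE = "https://your-site.atlassian.net"
AUTH = ("user@example.com", "api-token")

def pull_window(prev_update_ts: str, current_update_ts: str):
    """Yield issues in project ASD updated within [prev_update_ts, current_update_ts)."""
    jql = (
        f'project = ASD AND updated >= "{prev_update_ts}" '
        f'AND updated < "{current_update_ts}" ORDER BY updated ASC'
    )
    token = None
    while True:
        params = {"jql": jql, "fields": "id,updated", "maxResults": 100}
        if token:
            params["nextPageToken"] = token
        page = requests.get(f"{BASE}/rest/api/3/search/jql", params=params, auth=AUTH).json()
        yield from page.get("issues", [])
        token = page.get("nextPageToken")
        if not token:
            # Caveat: issues updated just before current_update_ts may not have been
            # indexed yet, so this window is not guaranteed to be complete at read time.
            break
```

The caveat in the last step is exactly the gap a syncToken-style parameter would close.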
We have concerns covered by your first ask:
Any challenges that the proposed search endpoint fails to address, especially when your application demands immediate search functionality following write consistency.
Our application makes edits to Jira configuration (at scale) as part of clean-up activities. We present the issue count associated with configuration items (e.g., custom fields) to indicate their usage. If a user performs a clean-up activity to a configuration item, then one of the ways they verify the change is to consult the issue count for that item’s new usage. With your proposed changes, we either lose this ability (because only an approximate count is provided and won’t be timely updated) or the experience is degraded (because the user will have to wait until the search becomes write consistent and the changes are reflected in the count). Overall, we are concerned that there will be a loss of accuracy in our JQL searches when this has been relied upon as authoritative until now.
We have the following clarification questions:
- What is the duration after a change (i.e., write) where search consistency is assured?
- Can we get an exact definition/better explanation of what ‘approximate’ means in the new approximate issue count endpoint?
- If you will not provide an endpoint to recover the exact issue count for a JQL query, then we will have to paginate as you have described to recover it. What is the maximum value for maxResults on /search/jql? But even this search endpoint will have approximate results, so won’t this be no different to just using the approximate issue count endpoint?
It seems there is no way to request strong consistency for the new expression API. If it uses the new search API and the list of issues is small, why is it not possible to reconcile them?
I feel this particular change will have a significant impact on a number of partners. It also seems likely to impact many customers who have written their own integrations with these APIs, or who use similar scripts written by solution partners.
I think you should consider a longer deprecation period with this in mind. It may also be worth giving admins the tools to understand whether they have any integrations or scripts using these APIs, or emailing customers that are using these endpoints, i.e. a page showing which users are calling the endpoints (or, for a given user, which API token is being used to call them).
To put this in context, I searched our Bitbucket for usages of the search endpoints and identified many different projects using them, some of which are deployed in customer environments and would be very time-consuming to change.
Thank you for your suggestion. I understand the changes are impactful. They are also necessary. We’ll consider how to improve the discoverability of affected users.
From what you have mentioned, it seems that from an app developer's POV, the fastest way to load this information is now likely not the new expression endpoint, but to:
- collect all issue IDs via /search/jql (100 at a time)
- make parallel requests to the bulk fetch endpoint for those same issue IDs and the same fields we were previously requesting via the expression API (sketched below)
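Roughly, that flow would look like the following (a sketch only: the /search/jql parameters follow this RFC, while the bulkfetch body shape is my assumption from the current docs; site URL and credentials are placeholders):

```python
# Sketch of the two-step flow: collect IDs via /search/jql, then hydrate them
# in parallel via /issue/bulkfetch. Body shapes are assumptions, not the final contract.
from concurrent.futures import ThreadPoolExecutor

import requests

BASE = "https://your-site.atlassian.net"
AUTH = ("user@example.com", "api-token")

def collect_issue_ids(jql: str) -> list[str]:
    """Page through /search/jql requesting IDs only."""
    ids, token = [], None
    while True:
        params = {"jql": jql, "fields": "id", "maxResults": 100}
        if token:
            params["nextPageToken"] = token
        page = requests.get(f"{BASE}/rest/api/3/search/jql", params=params, auth=AUTH).json()
        ids += [issue["id"] for issue in page.get("issues", [])]
        token = page.get("nextPageToken")
        if not token:
            return ids

def bulk_fetch(issue_ids: list[str], fields: list[str]) -> list[dict]:
    """Hydrate the collected IDs in parallel chunks of 100 (the bulkfetch limit mentioned in this thread)."""
    def fetch(chunk: list[str]) -> dict:
        body = {"issueIdsOrKeys": chunk, "fields": fields}
        return requests.post(f"{BASE}/rest/api/3/issue/bulkfetch", json=body, auth=AUTH).json()

    chunks = [issue_ids[i:i + 100] for i in range(0, len(issue_ids), 100)]
    with ThreadPoolExecutor(max_workers=5) as pool:
        return list(pool.map(fetch, chunks))
```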
This looks to me like it will be slower for the user (since we are waiting on sequential search requests, 100 issues at a time), and ultimately just as much of an impact on Atlassian infrastructure (bulk loading fields for 10k issues in parallel).
I have the feeling that these changes ignore the valid use case of wanting to return small amounts of data across many issues (10k, 100k, 1m) within a time frame where a user will not give up on the page.
Collecting only the issue IDs allows for a larger batch size, enabling you to then execute parallel requests to the bulk fetch endpoint in order to hydrate them efficiently. It’s important to note that we find it more efficient to manage multiple smaller requests rather than a single large one. Fetching data from 100K to 1 million issues does require significant computational power. The goal is to ensure that while large data sets are being fetched, we don’t compromise the experience for other use cases.
Can you please elaborate on how reconcileIssues works? For example, we have a JQL query with issue1 and issue2 in the response. My app reacts to webhooks and makes several subsequent search requests: the first one with reconcileIssues: ["issue1"] and the second one with reconcileIssues: ["issue2"]. Are there any guarantees of strong consistency for issue1 when the second request is executed?
When utilizing the reconcileIssues pattern in Jira, consistency is ensured only for the specified issues. To ensure strong consistency for both issue1 and issue2, you must include them both in the parameter, for example: reconcileIssues: [issue1, issue2].
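A minimal sketch of what such a request could look like, assuming the POST body of the new /rest/api/3/search/jql endpoint and numeric issue IDs in reconcileIssues (site, credentials, and the two IDs are placeholders for issue1 and issue2):

```python
# Sketch: both recently written issues are listed in reconcileIssues, so the
# search result is read-after-write consistent for both of them.
import requests

body = {
    "jql": "project = ASD ORDER BY updated DESC",
    "fields": ["summary", "status"],
    "maxResults": 100,
    # IDs of the issues your webhook handler just saw being written (placeholder IDs):
    "reconcileIssues": [10001, 10002],
}
response = requests.post(
    "https://your-site.atlassian.net/rest/api/3/search/jql",  # placeholder site
    json=body,
    auth=("user@example.com", "api-token"),                   # placeholder credentials
)
issues = response.json().get("issues", [])
```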
Just a thought: is there any chance to have no limit access as a paid service? I understand that Atlassian has to cut costs of running infrastructure with gazillion requests from apps. But I think we as vendors can come up with a balance between fees we have to pay for fast performance and our pricing.
Thanks for sharing your thoughts! I completely understand where you’re coming from regarding the desire for unlimited access. However, at this time, we’re unable to offer a no-limit access option as a paid service.
Hi, thanks for sharing this initiative!
Could you please clarify if POST /rest/api/3/search is affected when posting a JQL in the body (as opposed to /rest/api/3/search/jql)?
Sorry for the naive question.
Thank you!
Hey, thank you for bringing up this question. I’m not sure I fully grasp it, but both versions (GET and POST) of the current search endpoints are impacted.
I have an additional concern about external data synchronization in a pull-like model via JQL search, due to eventual consistency.
Let’s take an example with some base query, e.g. PROJECT = ASD.
With the current APIs, we can request PROJECT = ASD and then pull new changes with a query like PROJECT = ASD AND UPDATED >= PREV_UPDATE_TS AND UPDATED < CURRENT_UPDATE_TS.
CURRENT_UPDATE_TS will obviously be very close to the timestamp of “now”, so we cannot be sure that we will retrieve all issues updated within the specified timeframe due to eventual consistency.
Some services provide a field/parameter like nextSyncToken/syncToken in their APIs to allow incremental and consistent retrieval of the data updated since the previous search.
Thank you for your suggestion; we will review it. To reliably sync data from Jira, we recommend a combination of webhooks and periodic polling.
Webhooks require much more plumbing, have numerous limitations, and, as I understand it, offer no delivery guarantees.
So we are left with two ways to do one thing, and neither provides consistency guarantees. The redundancy of a combined approach might improve consistency, but there is still no “single source of truth”.
We heavily rely on these endpoints, so before migrating our app we would first need to check whether anything stops working.
As there is a high risk that we miss something in our app that is affected by these changes:
- It would be great to be able to flag our instance so that all of these deprecated endpoints are already disabled on it (not necessarily as soon as the new APIs are live, but at least a few weeks or months before they are removed for everyone)
Questions:
- Currently, the /expression API is quite a challenge to use when the requested fields are dynamic (depending on a configuration the user can define), because we can only guess how many issues we can process without hitting an error (e.g. max beans reached). Will the new API always return the maximum number of issues that comply with the restrictions?
Feedback
- The Bulk Fetch API seems like a nice idea, but the limit of 100 issues is too big a gap compared to expressions to make it viable for us
- I assume that bulkfetch is not based on the Lucene cache (Jira Lucene collectors in a Data Center environment), correct? The performance gain on DC from using these collectors (to collect information like issue type, story points, etc.) was so significant that it would be great to have something similar on Cloud
Hello, and thank you for sharing the RFC and giving developers a chance to share feedback, questions, and concerns.
Bulk Fetch Issues: /rest/api/{2|3|latest}/issue/bulkfetch
The RFC mentions that the versionedRepresentations and editmeta expand options will not be supported. However, at present, the bulkfetch API does return the editmeta and versionedRepresentations values.
We wanted to confirm: is this an interim mistake, or will the bulkfetch API support these expand parameters?
I see it mentioned in the RFC that, alternatively, editmeta and versionedRepresentations can be retrieved by calling each issue individually, but that sounds less performant.
Looking forward to your reply!
Thank you
Hi and many thanks for sharing the RFC.
I have similar thoughts to @richard.white. Can you ensure that asking for, e.g., 100k issue IDs using the new API on a large client Jira instance will be fast enough that we can still fetch the issue details in the time remaining (in comparison with the old issue search API)?
Additionally, some plugins running on smaller client instances have not encountered any performance issues, so maybe you could mark the old endpoints as deprecated but not remove them?
Looking forward to your reply,
Thank you
Hi, we have a few concerns about the proposed changes in the new JQL API:
- Removal of the startAt parameter: In the current setup, we use the startAt parameter for pagination. While the introduction of nextPageToken can replace this, the lack of a total count of issues creates challenges for handling pagination. Without knowing the total number of issues, we cannot accurately determine how many pages of results we need to fetch, which complicates the pagination logic (see the sketch at the end of this comment).
- Loss of editmeta in expand: Currently, we retrieve editable field metadata in a single request for 50 issues. With the new API, we would need to send a separate request for each issue to get the editmeta information. This would increase the number of requests from 1 to 51 for every 50 issues, creating considerable overhead and reducing performance.
- Additional questions: Is there any alternative approach planned for retrieving editable field information for bulk issue processing? This is critical for our use case, where we need to check whether specific fields are editable by the current user.
Any guidance on how to best address these issues would be appreciated.
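For reference, this is the token-based loop we understand we would have to adopt (a sketch only, assuming nextPageToken is omitted on the last page and no total is returned; site and credentials are placeholders):

```python
# Sketch of token-based pagination replacing startAt: the total is never known
# up front, so the number of pages can only be discovered by walking them.
import requests

BASE = "https://your-site.atlassian.net"
AUTH = ("user@example.com", "api-token")

def fetch_all(jql: str, fields: str):
    """Yield all issues for the query; the page count is unknown in advance."""
    token = None
    while True:
        params = {"jql": jql, "fields": fields, "maxResults": 50}
        if token:
            params["nextPageToken"] = token
        page = requests.get(f"{BASE}/rest/api/3/search/jql", params=params, auth=AUTH).json()
        yield from page.get("issues", [])
        token = page.get("nextPageToken")
        if not token:
            break
    # Progress reporting can only count pages already fetched, since the total
    # number of issues (and therefore pages) is not known in advance.
```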
One additional question:
How are we supposed to implement a paginated view with these changes?
Let’s say we have something like the Filter Results Gadget, where the user can jump to any page. From what I can see here, I would need to go through all tokens when the user wants to access the last page.
Did I miss something?
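Concretely, jumping straight to page N would mean walking every token before it, roughly like this (a sketch, under the same assumptions about /search/jql and nextPageToken as above):

```python
# Sketch: rendering page N of a filter-result gadget means walking every
# preceding token, because a token cannot be constructed from an offset.
import requests

BASE = "https://your-site.atlassian.net"
AUTH = ("user@example.com", "api-token")

def fetch_page(jql: str, page_number: int, page_size: int = 50) -> list[dict]:
    """Return the issues on the 1-indexed page_number, fetching every page before it."""
    token, page = None, {"issues": []}
    for _ in range(page_number):
        params = {"jql": jql, "fields": "summary", "maxResults": page_size}
        if token:
            params["nextPageToken"] = token
        page = requests.get(f"{BASE}/rest/api/3/search/jql", params=params, auth=AUTH).json()
        token = page.get("nextPageToken")
        if not token:
            break  # the query has fewer pages than requested; return what we have
    return page.get("issues", [])
```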
- What is the duration after a change (i.e., write) where search consistency is assured?
The eventual consistency delay depends on various factors. 99% of modifications are typically visible within seconds. However, operations that impact numerous issues, such as LexoRank rebalancing or bulk editing, may take more time to become visible.
- Can we get an exact definition/better explanation of what ‘approximate’ means in the new approximate issue count endpoint?
The approximation comes from eventual consistency. Jira returns a count for a specific index state, which may be off because recent modifications may not have been indexed yet.
- If you will not provide an endpoint to recover the exact issue count for a JQL query, then we will have to paginate as you have described to recover it. What is the maximum value for maxResults on /search/jql? But even this search endpoint will have approximate results, so won’t this be no different to just using the approximate issue count endpoint?
The maximum number of returned issues depends mostly on the number of fields Jira needs to populate. The API does not specify an exact limit, but the number typically falls within the range of 100 to 5000.
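If an exact count is required, one option is to page through IDs only and count the results, along these lines (a sketch: parameter names follow this RFC, site and credentials are placeholders, and the result is still only as accurate as the current index):

```python
# Sketch of deriving a count by paging through IDs only. maxResults uses the
# upper bound mentioned above; the service may return fewer per page, and the
# total remains subject to eventual consistency.
import requests

BASE = "https://your-site.atlassian.net"
AUTH = ("user@example.com", "api-token")

def count_issues(jql: str) -> int:
    total, token = 0, None
    while True:
        params = {"jql": jql, "fields": "id", "maxResults": 5000}
        if token:
            params["nextPageToken"] = token
        page = requests.get(f"{BASE}/rest/api/3/search/jql", params=params, auth=AUTH).json()
        total += len(page.get("issues", []))
        token = page.get("nextPageToken")
        if not token:
            return total
```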
Our application makes edits to Jira configuration (at scale) as part of clean-up activities. We present the issue count associated with configuration items (e.g., custom fields) to indicate their usage. If a user performs a clean-up activity to a configuration item, then one of the ways they verify the change is to consult the issue count for that item’s new usage. With your proposed changes, we either lose this ability (because only an approximate count is provided and won’t be timely updated) or the experience is degraded (because the user will have to wait until the search becomes write consistent and the changes are reflected in the count). Overall, we are concerned that there will be a loss of accuracy in our JQL searches when this has been relied upon as authoritative until now.
- If you will not provide an endpoint to recover the exact issue count for a JQL query, then we will have to paginate as you have described to recover it. What is the maximum value for maxResults on /search/jql? But even this search endpoint will have approximate results, so won’t this be no different to just using the approximate issue count endpoint?
Thank you for sharing your case with us. Changing a custom field configuration can potentially impact a large number of issues, and processing that amount of data takes time. If our interpretation of your use case is accurate, tracking the completion status should remain possible with the endpoints we are providing.
It seems there is no way to request strong consistency for the new expression API. If it uses the new search API and the list of issues is small, why is it not possible to reconcile them?
Hey, thank you for your comment. You are correct; the new API does not ensure strong consistency. If you want to reconcile issues that were recently updated in the search results, you need to provide their IDs. We haven’t considered adding the reconcileIssues pattern to the expression/evaluate endpoint. Can you share a scenario in which this pattern would be useful but the search endpoint would not be sufficient?
Thank you for your feedback.
- It would be great to be able to flag our instance so that all of these deprecated endpoints are already disabled on it (not necessarily as soon as the new APIs are live, but at least a few weeks or months before they are removed for everyone)
We will monitor usage of the old APIs. The new APIs are already available to use, but they are marked as EXPERIMENTAL: we can still introduce breaking changes to them, and we will lock the contract before deprecating the old APIs. If you would like an early opt-in to disable the old endpoints before the removal date, we can raise a feature request to collect interest in this option.
- Currently, the /expression API is quite a challenge to use when the requested fields are dynamic (depending on a configuration the user can define), because we can only guess how many issues we can process without hitting an error (e.g. max beans reached). Will the new API always return the maximum number of issues that comply with the restrictions?
Thank you for highlighting this topic. It appears to be an enhancement request concerning the functionality of Jira expressions. Our goal is to change how issues are loaded into Jira expressions via JQL, without changing Jira expressions themselves.
- I assume that bulkfetch is not based on the Lucene cache (Jira Lucene collectors in a Data Center environment), correct? The performance gain on DC from using these collectors (to collect information like issue type, story points, etc.) was so significant that it would be great to have something similar on Cloud
The Jira DC and Jira Cloud search architectures diverged a couple of years ago; unlike Jira DC, Jira Cloud does not use Lucene.