RFC-61: Evolving Search Capabilities: Addressing Scalability with a New, Enhanced Search API

Update

Thank you so much for your feedback! We really appreciate you taking the time to share your thoughts with us.
At this moment, we can’t offer larger batch sizes. We know that eventual consistency might be a bit confusing for some of you. We’ll put together some helpful tips on how to work with it. We’re also looking into the possible options for expand in the new APIs. Thanks for your understanding!
The deprecation announcement will be slightly delayed, and you can expect to hear from us about it in mid to late October. After a 6-month deprecation period, the old APIs will be phased out.

Summary of Project:

The use of JQL has grown in scale and complexity over the years, and we’ve seen increasing scalability and reliability issues. To address this, our team has been working on an architectural refresh, which will allow our JQL capabilities to scale and perform for our growing customer and vendor needs. This RFC is here to gather feedback on the proposed API changes that will give you access to the benefits of that architectural refresh.

The proposed API changes introduce a new pagination API pattern (via nextPageToken) and eventual consistency behaviours. We believe these two API pattern changes deserve your attention.

  • Publish : Sep 03, 2024
  • Discuss : Sep 17, 2024
  • Resolve : Sep 21, 2024

Problem

Over the years, Jira has grown its customer base, and its users are using more of Jira, at larger scale and in more complex ways. JQL is one of the core capabilities of Jira, and we’ve noticed that the JQL APIs, which used to work very well, are now being challenged by the scale and complexity of how users use them. JQL needs to evolve to stay reliable and performant, and to become more scalable to serve bigger, more complex use cases.

The problems we’re witnessing largely stem from the access patterns of the APIs.

Today, API users are able to use JQL to access paginated results randomly. To provide random-access pagination, the exact set of preceding issues must first be determined. For instance, if issues 10000 to 10025 are requested, JQL filtering and ordering need to be performed on the prior 10000 issues to accurately return the results. This is slow for the user, costly for us, and not scalable for the JQL result sizes that we’re seeing.
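For example, a request like the following against the existing search API (the offset values are illustrative) forces us to filter and order the 10000 preceding issues before the requested page can be returned:

curl --location 'https://example-jira.atlassian.net/rest/api/latest/search?jql=project%20%3D%20FOO&startAt=10000&maxResults=25' \
--header 'Accept: application/json'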

API users are also able to get the total count of issues matching a JQL query. Providing a total count of matched issues on every page requires us to repeatedly calculate the number of issues matching the JQL query. We’ve observed performance degradation for this use case, particularly when the JQL query is run across large data sets.

Because our APIs return much of an issue’s data by default, response payloads are getting bigger, to the point where we see OOM (out of memory) errors. The growing payloads returned by default are impacting the reliability of all of the systems involved.

The scope of API changes in this RFC is to address the reliability, performance, and scalability concerns outlined above. Please read on and give us feedback on the proposed API changes.

Proposed Solution

The proposal involves the following API changes:

  • New endpoints:
    • JQL Search:
      • GET /rest/api/{2|3|latest}/search/jql
      • POST /rest/api/{2|3|latest}/search/jql
      • POST /rest/api/{2|3|latest}/search/approximate-count
    • Jira Expressions:
      • POST /rest/api/{2|3|latest}/expression/evaluate
    • Bulk Fetch Issues: POST /rest/api/{2|3|latest}/issue/bulkfetch
  • Removal of these endpoints:
    • GET /rest/api/{2|3|latest}/search
    • POST /rest/api/{2|3|latest}/search
    • POST /rest/api/{2|3|latest}/search/id
    • POST /rest/api/{2|3|latest}/expression/eval
  • Graduation of the following API from “experimental” to “generally available”:
    • POST /rest/api/{2|3|latest}/jql/parse
  • Changes to the amount of data returned per issue when fetching multiple issues. Read more on this below.

We don’t plan any changes to:

Changes to Issue & field data returned in the new APIs

These changes will only impact the following new APIs which return Issue field data across numerous issues:

  • Bulk Fetch Issues /rest/api/{2|3|latest}/issue/bulkfetch
  • GET /rest/api/{2|3|latest}/search/jql
  • POST /rest/api/{2|3|latest}/search/jql

The existing Get Issue API (/rest/api/{2|3|latest}/issue/{issueIdOrKey}) will not be changed.

To improve the performance and stability of APIs which return data for many issues, we are proposing changes to the amount of data available & returned per issue for these new endpoints. These changes aim to encourage integrations to fetch only the data needed to power their experiences, and look to avoid excessive payload sizes when returning data from Jira Cloud.

The proposed changes are:

  • Changes to which fields are included by default
    • /rest/api/{2|3|latest}/issue/bulkfetch will return all navigable fields by default, instead of all fields as in the single issue Get Issue API.
    • /rest/api/{2|3|latest}/search/jql will return just the id field by default.
    • In both cases, fields can still be explicitly included and excluded, and both will also still accept *navigable & *all as options (see the sketch after this list). However, fewer issues may be returned in cases where request payload sizes become excessively large.
  • When the comment field is requested, we will return a maximum of 20 comments per issue.
  • When the changelog expand parameter is provided, we will return a maximum of 20 changelogs per issue.
  • Removal of API options which are less useful within the context of working with multiple issues and/or drastically increase response payload sizes when working across large numbers of issues with large numbers of fields.
    • The Bulk Fetch Issue API /rest/api/{2|3|latest}/issue/bulkfetch will not include the updateHistory parameter used in the single issue Get Issue API to flag whether the request should treat the user as “viewing” the issue for the purposes of the Last Viewed field. This is to match the behaviour of other search & multi-issue views of issues across Jira Cloud products.
    • The following expand options will not be provided on the new APIs
      • operations — Returns all possible operations per issue.
      • transitions — Returns all possible transitions per issue.
      • editmeta — Returns information about how each field can be edited.
      • versionedRepresentations — Returns a JSON array for each version of a field’s value, with the highest number representing the most recent version. Note: When included in the request, the fields parameter is ignored.
    • To use these parameters, please use the single issue Get Issue API.
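As an illustration of the field selection options above, a minimal Bulk Fetch request using the *navigable option might look like the following sketch (the issue keys are placeholders, and the response may be trimmed if the payload grows too large):

curl --location 'https://example-jira.atlassian.net/rest/api/latest/issue/bulkfetch' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "issueIdsOrKeys": ["FOO-1", "BAR-1"],
    "fields": ["*navigable"]
}'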

API definition

Search API

POST /rest/api/3/search/jql

Main changes relative to the current endpoint:

  • By default, the endpoint will only return Issue IDs. Unless requested, all other issue fields will be omitted in the Issue objects. You can request them by using the fields property.
  • The endpoint doesn’t provide immediate search-after-write consistency. Search results may not incorporate recent changes made by users. If such behaviour is important for your use case, please refer to the migration path at the end of this page.
  • The validationMode parameter is removed.
  • We’ll replace random page access with a continuation token API. This means you won’t be able to get multiple pages at the same time with parallel threads. The startAt parameter will be replaced with nextPageToken. You can find usage instructions at the end of this page.
  • We will only return a maximum of 20 comments and 20 changelog items. If you require more, please refer to the migration guide at the end of this page.
  • The endpoint won’t accept unbounded JQL queries and will return 400 if one is provided.
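For instance, a request whose JQL consists only of an ordering clause is considered unbounded and would now be rejected (a sketch; the exact error body isn’t specified in this RFC):

curl -i --location 'https://example-jira.atlassian.net/rest/api/latest/search/jql' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "jql": "order by createdDate"
}'

The response status would be 400, since the query has no search restriction.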

Documentation available here

Approximate count

POST /rest/api/3/search/approximate-count

Documentation available here

Expression Eval

POST /rest/api/3/expression/evaluate

The evaluate Jira expression endpoint is complex and it accepts many parameters. The changes we’re proposing are related to how Jira builds the issues.jql context. The main changes are:

  • The endpoint won’t accept unbounded JQL queries and will return 400 if one is provided.
  • The maximum number for maxResults is 5000. The actual number of returned issues may be lower, depending on the complexity of the request.
  • We’ll replace random page access with a continuation token API. This means you won’t be able to get multiple pages at the same time with parallel threads. The startAt parameter will be replaced with nextPageToken. You can find usage instructions at the end of this page.
  • No JQL validation will be available.
  • The endpoint won’t return the total count of matched issues.

Documentation available here

Bulk Fetch Issue

POST to /rest/api/{2|3|latest}/issue/bulkfetch

Documentation available here

Example scenarios & FAQ

We have identified a number of typical scenarios where the current APIs are used, and have come up with migration paths for the new endpoints.

I want to fetch data from lots of issues at once

This can be achieved by using two Jira endpoints:

  1. First get issue IDs via /rest/api/3/search/jql
curl --location 'https://example-jira.atlassian.net/rest/api/latest/search/jql' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "jql":"project in (FOO, BAR)"
}'

Response:

 {
    "issues": [
        {
            "id": "10068"
        },
        {
            "id": "10067"
        },
        {
            "id": "10066"
        }
    ],
    "nextPageToken": "CAEaAggD"
}
  2. Then hydrate the issues’ fields with the Bulk Fetch API: POST to /rest/api/{2|3|latest}/issue/bulkfetch
curl --location 'https://example-jira.atlassian.net/rest/api/latest/issue/bulkfetch' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "issueIdsOrKeys": ["FOO-1","10067", "BAR-1"],
    "fields": ["priority", "status", "summary"]
    
}'
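Putting the two steps together, a small shell sketch could look like this (assuming the jq utility is available for extracting the IDs; the JQL, page size, and field list are illustrative, and authentication is omitted as in the examples above):

BASE='https://example-jira.atlassian.net/rest/api/latest'

# Step 1: fetch one page of matching issue IDs (only IDs are returned by default).
ids=$(curl -s --location "$BASE/search/jql" \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --data '{"jql": "project in (FOO, BAR)", "maxResults": 100}' \
  | jq '[.issues[].id]')

# Step 2: hydrate only the fields we need for those IDs via Bulk Fetch.
curl -s --location "$BASE/issue/bulkfetch" \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --data "{\"issueIdsOrKeys\": $ids, \"fields\": [\"priority\", \"status\", \"summary\"]}"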

I need to hydrate data for known issue IDs

You can use the Bulk Fetch Issue endpoint:

POST to /rest/api/{2|3|latest}/issue/bulkfetch

Example request:

curl --location 'https://example-jira.atlassian.net/rest/api/latest/issue/bulkfetch' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "issueIdsOrKeys": ["FOO-1","10067", "BAR-1"],
    "fields": ["priority", "status", "summary"]
}'

I want to know how many issues match my JQL

You can use the approximate count endpoint:

POST /rest/api/3/search/approximate-count

curl --location 'https://example-jira.atlassian.net/rest/api/latest/search/approximate-count' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "jql":"project in (FOO, BAR)"
}'

Response

{
    "count": 3
}

I need to validate a query

You can use the JQL Parse endpoint:

POST /rest/api/3/jql/parse

curl --location 'https://example-jira.atlassian.net/rest/api/latest/jql/parse' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "queries":["project in (FOO, BAR)"]
}'

Response

{
    "queries": [
        {
            "query": "project in (FOO, BAR)",
            "structure": {
                "where": {
                    "field": {
                        "name": "project",
                        "encodedName": "project"
                    },
                    "operator": "in",
                    "operand": {
                        "values": [
                            {
                                "value": "FOO",
                                "encodedValue": "FOO"
                            },
                            {
                                "value": "BAR",
                                "encodedValue": "BAR"
                            }
                        ],
                        "encodedOperand": "(FOO, BAR)"
                    }
                }
            }
        }
    ]
}

I require immediate search-after-write consistency

You can use the /rest/api/3/search/jql endpoint. First, specify the issue IDs for which you require higher consistency guarantees. An example request would look like:

curl --location 'https://example-jira.atlassian.net/rest/api/latest/search/jql' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "jql":"project in (FOO, BAR)",
    "fields": "key, id",
    "reconcileIssues": [10068]
}'

Response:

{
    "issues": [
        {
            "expand": "operations,versionedRepresentations,editmeta,changelog,renderedFields",
            "id": "10068",
            "self": "https://example-jira.atlassian.net/rest/api/latest/issue/10068",
            "key": "FOO-1"
        },
        {
            "expand": "operations,versionedRepresentations,editmeta,changelog,renderedFields",
            "id": "10067",
            "self": "https://example-jira.atlassian.net/rest/api/latest/issue/10067",
            "key": "BAR-2"
        },
        {
            "expand": "operations,versionedRepresentations,editmeta,changelog,renderedFields",
            "id": "10066",
            "self": "https://example-jira.atlassian.net/rest/api/latest/issue/10066",
            "key": "BAR-1"
        }
    ],
    "nextPageToken": "CAEaAggD"
}
A typical scenario would involve an application listening to webhooks:

  1. The application subscribes to webhook events.

  2. A new issue event arrives:

{
    "issue": {
        "id": "10068",
        ...
        "fields": {
            ...
            "updated": "2024-08-19T11:40:30.010+0200"
        }
    }
}

  3. The application requests a JQL search with a higher consistency guarantee for issue 10068:

{
 "jql": "project in (FOO, BAR)",
 "reconcileIssues": [10068]
}

I need to paginate over a large set of results

POST /rest/api/3/search/jql

Use the next page token returned on each request to get further results:

curl --location 'https://example-jira.atlassian.net/rest/api/latest/search/jql' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "jql":"project in (FOO, BAR)",
    "maxResults": 1
}'

Response:

{
    "issues": [
        {
            "expand": "operations,versionedRepresentations,editmeta,changelog,renderedFields",
            "id": "10068",
            "self": "https://example-jira.atlassian.net/rest/api/latest/issue/10068",
            "key": "FOO-1"
        }
    ],
    "nextPageToken": "CAEaAggB"
}

Request

curl --location 'https://example-jira.atlassian.net/rest/api/latest/search/jql' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "jql":"project in (FOO, BAR)",
    "maxResults": 1,
    "nextPageToken": "CAEaAggB"
}'

Response

{
    "issues": [
        {
            "expand": "operations,versionedRepresentations,editmeta,changelog,renderedFields",
            "id": "10067",
            "self": "https://example-jira.atlassian.net/rest/api/latest/issue/10067",
            "key": "BAR-2"
        }
    ],
    "nextPageToken": "CAEaAggC"
}
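To walk through all pages, keep passing the token returned by each response until no token comes back. Here is a minimal shell sketch (assuming jq is available, and assuming the final page omits nextPageToken, as in the expression example later on this page):

BASE='https://example-jira.atlassian.net/rest/api/latest'
token=''
while : ; do
  # Only include nextPageToken after the first request.
  if [ -z "$token" ]; then
    body='{"jql": "project in (FOO, BAR)", "maxResults": 100}'
  else
    body="{\"jql\": \"project in (FOO, BAR)\", \"maxResults\": 100, \"nextPageToken\": \"$token\"}"
  fi
  resp=$(curl -s --location "$BASE/search/jql" \
    --header 'Content-Type: application/json' \
    --header 'Accept: application/json' \
    --data "$body")
  echo "$resp" | jq -r '.issues[].id'                 # process this page of issue IDs
  token=$(echo "$resp" | jq -r '.nextPageToken // empty')
  [ -z "$token" ] && break                            # stop when there is no further page
done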

I need more than 20 comments for an issue

Use the Get comments endpoint:

GET /rest/api/3/issue/{issueIdOrKey}/comment

curl --location 'https://example-jira.atlassian.net/rest/api/latest/issue/BAR-1/comment' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json'

Response

{
    "startAt": 0,
    "maxResults": 100,
    "total": 3,
    "comments": [
       ...
    ]
}

I need more than 20 changelog items for an issue

Use the Get changelogs endpoint:

GET /rest/api/3/issue/{issueIdOrKey}/changelog

curl --location 'https://example-jira.atlassian.net/rest/api/latest/issue/BAR-1/changelog' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json'

Response:

{
    "self": "https://example-jira.atlassian.net/rest/api/2/issue/BAR-1/changelog?maxResults=100&startAt=0",
    "maxResults": 100,
    "startAt": 0,
    "total": 3,
    "isLast": true,
    "values": [
        ...
    ]
}
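If total exceeds maxResults, the remaining entries can be requested with the startAt/maxResults parameters shown in the self URL above, for example (the values are illustrative):

curl --location 'https://example-jira.atlassian.net/rest/api/latest/issue/BAR-1/changelog?startAt=100&maxResults=100' \
--header 'Accept: application/json'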

I want to evaluate my Jira Expression

You can use the new evaluate expression endpoint: POST /rest/api/3/expression/evaluate

Example request for the first page

curl --request POST \
--url 'https://example-jira.atlassian.net/rest/api/3/expression/evaluate' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "context": {
        "issues": {
            "jql": {
                "maxResults": 5,
                "query": "summary ~ \"task\"",
            }
        }
    },
    "expression": "issues.map(i => i.summary)"
}'

Response:

{
    "value": [
        "task 7",
        "task 6",
        "task 5",
        "task 4",
        "task 3"
    ],
    "meta": {
        "issues": {
            "jql": {
                "nextPageToken": "CAEaAggF"
            }
        }
    }
}

Example request for subsequent pages

curl --request POST \
--url 'https://example-jira.atlassian.net/rest/api/3/expression/evaluate' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "context": {
        "issues": {
            "jql": {
                "maxResults": 5,
                "query": "summary ~ \"task\"",
                "nextPageToken":"CAEaAggF"
            }
        }
    },
    "expression": "issues.map(i => i.summary)"
}'

Response:

{
    "value": [
        "task 2",
        "task 1",
        "test task"
    ],
    "meta": {
        "issues": {
            "jql": {}
        }
    }
}

I need to run an unbounded JQL query

We do not recommend running unbounded JQL queries. These queries are slow for users and costly for every system involved. As our customers grow in number and size over time, we see customers with millions of Jira issues and this trend will continue.

We strongly recommend you bound your JQL queries to give our users the best performance and reliability. Consider bounding your queries like:

updated > -1m
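For example, an app that syncs periodically could restrict each poll to recently updated issues instead of re-reading everything (a sketch; the JQL bound and the one-hour window are illustrative):

curl --location 'https://example-jira.atlassian.net/rest/api/latest/search/jql' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "jql": "project in (FOO, BAR) AND updated > -1h order by updated",
    "maxResults": 100
}'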

How long is the next page token valid for?

The next page token will expire 7 days after it is issued.

Timeline

We’re sharing a timeline for these changes:

  • Sep 03, 2024 → We’re publishing this RFC seeking your feedback and making new experimental endpoints publicly available
  • Sep 17, 2024 → We’ll close this RFC for discussion
  • Sep 21, 2024 → We’ll resolve this RFC and share any planned updates with you based on your feedback
  • Sep 30, 2024 (tentative) → Depending on feedback, we’re targeting this date to launch the new APIs, and mark the existing JQL APIs as deprecated. Rework may push this date out.
  • After the deprecation notice ends → Deprecated JQL APIs are removed from service.

Asks

While we would appreciate any reaction you may have to this RFC, we’re especially interested in:

  • any challenges that the proposed search endpoint fails to address, especially where your application requires immediate search-after-write consistency.
  • any aspect of the search API that may have been overlooked and is not covered by any of the newly suggested endpoints
  • any feedback on the proposed changes to issue data returned in the new endpoints detailed on this page
4 Likes

Am I reading this correctly that we can no longer rely on search consistency unless we know the issues for which we need consistency in advance?

Given that this seems to be a reduction in functionality of search APIs for the sake of performance for Atlassian, has Atlassian performed any analysis on the expected increase of volume of requests we will have to make to other endpoints, and the performance impact that will have on other non-search Atlassian systems? See: https://www.amazon.com/Goal-Process-Ongoing-Improvement/dp/0884271951

2 Likes

@BorisBerenberg,

Thanks for your comment. Just wanted to let you know that I published this RFC early, and then used the “hide” feature. The RFC will go live on Monday so please be patient while the team finishes up drafting.

For others who happened to get a notification or see the RFC early, please hold comments for a few days. Sorry for the inconvenience.

3 Likes

Can you give us some examples of what you mean by this, to make sure we understand the same definition?

2 Likes

The links all link to the “Approximate count” API. I cannot find the documentation for /search/jql, /expression/evaluate and /issue/bulkfetch. Could you please fix the links?

Thanks!

Thanks! I fixed the links. Sorry for the inconvenience.

1 Like

Interesting proposal. We have an app that periodically syncs data to many issues, depending on which projects/issue types certain custom fields are available for, so searching for issues via JQL is central to us.

A couple of points are not quite clear to me:

  • Similar to @jbevan, I wonder how exactly “unbounded JQL queries” is defined. Specifically to our app: We use queries like so: cf[XXXXX] is not empty OR cf[YYYYY] is not empty OR .... Is that bounded or unbounded?
  • “Next Page Token will expire after 7 days of issuing”: Is that for each individual token? Or for the whole token chain, so to speak? In other words: when paging through results via tokens, can each individual page request be made up to 7 days later? Or must the whole process of paging through all results be done within 7 days?
  • Can you speak to roughly how much performance improvement this will bring, especially when paging through results? In our app we observe that for bigger customers (>1 million issues) paging through all relevant issues takes multiple days. How much faster could this new API be, if used in the most performant way?

Thanks.

@glewandowski thanks for the links! A few questions:

It says in the documentation:

What’s the maximum maxResults we can expect? Which search criteria determine the maximum maxResults we’ll be able to use?

Same question here, which kind of complexity determines the maximum maxResults we’ll be able to use?

I saw that the context.issues.jql.validation parameter is missing in the new API. How will the new API behave compared to the old one? Would it be strict, warn or none?

It says in the documentation:

Shouldn’t that be READ scope?

We’re currently able to fetch up to 5000 issues with a few fields using 5 requests (a single request checking the number of results, and then up to 4 parallel requests to fetch issues 1001-5000) using expression/eval. We’d expect to be able to do the same with the new APIs, however it seems that bulkfetch is limited to a meager 100 issues and it seems to be the only way to fetch issues in parallel in the future, even if we already know the correct 5000 issue IDs.

Could the number of issues being fetched with bulkfetch be increased to up to 1000, if only a few specific issue fields are being requested?

3 Likes

Hi, a few questions off the top of my head:

  • Is validation in /rest/api/3/jql/parse identical to the current search API validation?

  • Since these are new APIs, would it be possible to include count information about issues filtered out per app access rules? Or a boolean field to indicate whether any issues were filtered out, if there are security concerns around an exact count.

The following expand options will not be provided on the new APIs…
…editmeta Returns information about how each field can be edited.

I am not seeing a viable migration path for cases when we need to bulk-process issues on the basis of certain fields being editable, or when we want to display a list of issues with an editable/non-editable state, which depends on certain fields being editable by the current user. Requesting it for each issue individually via the “get issue” API would create a huge overhead in the number of requests. And requesting it on demand “on hover” for the “list” use case would greatly diminish the UX for end users.

After the deprecation notice ends → Deprecated JQL APIs are removed from service.

Do you have an estimate for how long the deprecation period might last?

3 Likes

Thank you for your comment. Atlassian consistently plans system capacity to ensure optimal performance. Although we acknowledge that transitioning to eventual consistency may pose challenges, it is a necessary step to reliably scale to meet our customers’ ever-increasing needs. We are eager to delve deeper into your use case, especially where there is a need for immediate strong consistency without knowing the tracked issues in advance.

Bounded queries require a search restriction. Here are some examples of bounded queries:

  • project = TEST order by key
  • reporter = currentUser()

On the other hand, unbounded JQLs do not have any restrictions. They can be an empty query or one that solely lists order by clauses, such as:

  • order by createdDate
  • order by key
2 Likes

@AndreasEbert, thank you for sharing your use case with us.

  • Similar to @jbevan, I wonder how exactly “unbounded JQL queries” is defined. Specifically to our app: We use queries like so: cf[XXXXX] is not empty OR cf[YYYYY] is not empty OR … Is that bounded or unbounded?

The queries you’ve provided are bounded. Unbounded JQLs do not have any restrictions. They can be an empty query or one that solely lists order by clauses, such as:

  • order by createdDate
  • order by key

“Next Page Token will expire after 7 days of issuing”: Is that for each individual token? Or for the whole token chain, so to speak? In other words: when paging through results via tokens, can each individual page request be made up to 7 days later? Or must the whole process of paging through all results be done within 7 days?

The expiration date applies to each individual token. Every request generates the next token to access additional pages, which will expire again in 7 days.

Can you speak to roughly how much performance improvement this will bring, especially when paging through results? In our app we observe that for bigger customers (>1 million issues) paging through all relevant issues takes multiple days. How much faster could this new API be, if used in the most performant way?

Our data shows that 99% of queries are completed within 500 ms on the new engine, compared to 94% of queries meeting this target on the old engine. The larger the instance, the more noticeable the discrepancy becomes.

1 Like
  • Is validation in /rest/api/3/jql/parse identical to the current search API validation?

Yes, the same validation as the current search endpoints is applied.

  • Since these are new APIs, would it be possible to include count information about issues filtered out per app access rules? Or a boolean field to indicate whether any issues were filtered out, if there are security concerns around an exact count.

As we delve deeper into the solution, we’ll weigh whether or not this is something that is in scope for this version. We’ll address it in the RFC Resolution.

The following expand options will not be provided on the new APIs…
…editmeta Returns information about how each field can be edited.

I am not seeing a viable migration path for cases when we need to bulk-process issues on the basis of certain fields being editable, or when we want to display a list of issues with an editable/non-editable state, which depends on certain fields being editable by the current user. Requesting it for each issue individually via the “get issue” API would create a huge overhead in the number of requests. And requesting it on demand “on hover” for the “list” use case would greatly diminish the UX for end users.

Thank you for sharing this concern with us. Could you tell us more about which other expand options are important to you? Additionally, it would be helpful if you could provide further insight into why you believe the UX might diminish.

Do you have an estimate for how long the deprecation period might last?

We expect all users to migrate to the new APIs within 6 months after the deprecation notice.

editmeta is the primary concern for us here. We don’t necessarily need the full editmeta content as it is right now. We need to know whether certain fields in each retrieved issue are editable by the current user. Something like an “editable” boolean indicator might suffice for the majority of use cases I can think of.

Additionally, it would be helpful if you could provide further insights into why you believe UX might diminish?

Imagine a list of issues that should be draggable from one list to another, or just sortable within a single list based on an issue field value. Currently, we have all the data to determine whether the user will be able to move an issue, and can display it with appropriate controls based on editmeta for each issue, immediately after the issues are retrieved.

Without editmeta, we would either need to fetch each issue individually on load, which wouldn’t work in the real world, since it would require making a request for each issue individually and won’t scale well.

Or we would need to lazily fetch editmeta for the issues the user is trying to interact with, which would create a noticeable delay before we can render the appropriate controls or an indication that a certain issue cannot be moved.

I can see that boards and backlog in NextGen JSW projects have moved onto this kind of solution, but it feels really clumsy. And I would assume that end users will get more frustrated as this approach is adopted in more places across Jira.

1 Like

Thanks for publishing the RFC.

I feel this particular change will have a significant impact on a number of partners. It also seems likely to impact many customers who have written their own integrations with these APIs, or similar scripts written by solution partners.

I think you should consider a potentially longer deprecation period with this in mind. It may also be worth giving admins the tools to understand whether they have any integrations or scripts using these APIs, or emailing customers that are using these endpoints, i.e. a page showing which users are using the endpoints (or, for a user, which API token is being used to call the endpoints).

To put this in context, I searched our Bitbucket for usages of the search endpoints and identified many different projects using them, some of which are deployed in customer environments and are very time-consuming to change.

I want to dive specifically into the expression evaluate endpoint. The expression/eval endpoint was the only method where we determined it was possible to generate a report in a reasonable time frame on Jira work logs across realistically sized Jira projects (say > 10,000 issues with work logs). A key part of making this work in a reasonable time was making the requests in parallel.

From what you have mentioned, it seems that, from an app developer’s POV, the fastest way to load this information now is likely not the new expression endpoint, but to:

  • collect all issue IDs via /search/jql (100 at a time)
  • make parallel requests to the bulk fetch endpoint for those same issue ids and same fields we were previously requesting via the expression API.

This looks to me like it will be slower for the user (since we’d be waiting on the sequential search requests, 100 at a time), and ultimately just as much of an impact on Atlassian infrastructure (bulk loading fields for 10k issues in parallel).

I have the feeling that these changes are ignoring the valid use case of wanting to return small amounts of data across many issues (10k, 100k, 1m) in a time frame where a user will not give up on the page.

6 Likes

Could you please elaborate on how reconcileIssues works? For example, we have a JQL with issue1 and issue2 in the response. My app reacts to webhooks and does several subsequent search requests: the first one with reconcileIssues: [ "issue1"] and the second one with reconcileIssues: [ "issue2"]. Are there any guarantees on strong consistency for issue1 when the second request is executed?

I now have a clearer understanding of the issue. Thank you for taking the time to provide us with the details.

What’s the maximum maxResults we can expect? Which search criteria determine the maximum maxResults we’ll be able to use?

The maximum number of returned issues is related mostly to the number of fields Jira needs to populate. The API does not specify an exact limit, but typically the number falls within the range of 100 to 5000.

I saw that the context.issues.jql.validation parameter is missing in the new API. How will the new API behave compared to the old one? Would it be strict, warn or none?

It will behave as if you provided none.

It says in the documentation:

Connect app scope required: WRITE

Shouldn’t that be READ scope?

Yes, it should be READ. We’ll investigate this.

We’re currently able to fetch up to 5000 issues with a few fields using 5 requests (a single request checking the number of results, and then up to 4 parallel requests to fetch issues 1001-5000) using expression/eval. We’d expect to be able to do the same with the new APIs, however it seems that bulkfetch is limited to a meager 100 issues and it seems to be the only way to fetch issues in parallel in the future, even if we already know the correct 5000 issue IDs.

Could the number of issues being fetched with bulkfetch be increased to up to 1000, if only a few specific issue fields are being requested?

Thank you for sharing the situation you’re facing. We will review this suggestion and address this issue in the RFC Resolution.

3 Likes