Handling a huge amount of data with the Python API

Hey guys,
I’m looking for an alternative way to handle my requests, but currently I’m out of ideas.

I have a database with almost 90K issues and need to retrieve every issue to add it to another database. Each request to Jira takes around 6 s. I'll probably have to do it again for other databases, so it is rather impractical to wait all that time.
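For reference, my sequential retrieval looks roughly like this (simplified; the base URL, credentials and project key are placeholders, not my real values):

```python
import requests

BASE_URL = "https://jira.example.com"  # placeholder
AUTH = ("user", "api_token")           # placeholder credentials

def fetch_all_issues(project_id, page_size=100):
    """Page through /rest/api/2/search until every issue is retrieved."""
    issues, start_at = [], 0
    while True:
        resp = requests.get(
            f"{BASE_URL}/rest/api/2/search",
            params={
                "jql": f'project = "{project_id}" ORDER BY key',
                "fields": "*all",
                "startAt": start_at,
                "maxResults": page_size,
            },
            auth=AUTH,
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        issues.extend(data["issues"])
        start_at += len(data["issues"])
        if start_at >= data["total"]:
            return issues
```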

The first solution I tried was parallel requests using multiprocessing. However, the server treated it as a DDoS attack and disconnected me, or returned:
“SSL ERROR. None: Max retries exceeded with url: /rest/api/2/search?startAt=0&fields=%2Aall&jql=project+%3D+%2211300%22+ORDER+BY+key (Caused by None)”

Any ideas of how to proceed?

@JooAugustoFernandes did you consider exporting the data to XML?

The documentation on Rate limiting should help you build a solution that, albeit slowly, retrieves all the data reliably. You could also have a look at the recently published JavaScript Rate Limiting Handling Library.
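In Python, that boils down to something like the sketch below (a minimal example, assuming the server signals rate limiting with HTTP 429 and an optional Retry-After header, as that documentation describes):

```python
import time
import requests

def get_with_backoff(session, url, params, max_retries=5):
    """Retry a GET on 429/5xx, waiting Retry-After (or an exponential delay)."""
    delay = 1
    for attempt in range(max_retries):
        resp = session.get(url, params=params, timeout=30)
        if resp.status_code not in (429, 500, 502, 503, 504):
            resp.raise_for_status()
            return resp
        # Honour the server's hint if present, otherwise back off exponentially.
        wait = int(resp.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay = min(delay * 2, 60)
    raise RuntimeError(f"Gave up after {max_retries} retries for {url}")
```

Keeping requests sequential and backing off on 429 responses is slower per page, but it avoids the disconnections you saw with multiprocessing.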

If you want to flag the response time of that API for your particular site, I suggest raising an issue in the support portal.


Thanks for your advice, I'll take this to the support portal and check your references.
:)))