REST API responding with 504 (Gateway Timeout) instead of 429 (Too Many Requests) since today

We run automated tests on a daily basis against our Trello board, and we respect and handle the 429 responses we occasionally see.
However, starting with today’s runs, after a period of time (without any 429 responses) we instead get 504 (Gateway Timeout) responses. This happens repeatedly (tried 6 times over the course of the day), and always roughly 10 minutes after the test run starts.
Has anything changed in the implementation around API rate limits that could cause this behavior?
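For context, our handling distinguishes retryable statuses: a 429 waits out the advertised rate-limit window, while a 504 gets exponential backoff. A minimal sketch of that policy (function name, header fallbacks, and retry caps are our own choices, not anything mandated by the Trello API):

```python
import random

def retry_delay(status, headers, attempt, max_attempts=5):
    """Return seconds to wait before retrying, or None to give up.

    429: wait out the rate-limit window advertised by the
         X-Rate-Limit-*-Interval-Ms headers (10 s on this board).
    5xx gateway errors (502/503/504): exponential backoff with jitter.
    """
    if attempt >= max_attempts:
        return None  # retry budget exhausted
    if status == 429:
        # Interval headers are in milliseconds; fall back to 10 s if absent.
        interval_ms = int(headers.get("X-Rate-Limit-Api-Token-Interval-Ms", 10000))
        return interval_ms / 1000.0
    if status in (502, 503, 504):
        # 1 s, 2 s, 4 s, ... capped at 60 s, plus up to 1 s of jitter.
        return min(60.0, 2.0 ** attempt) + random.uniform(0.0, 1.0)
    return None  # not a retryable status
```

The point is that a 504 never consumes the rate-limit budget the way a 429 does, which is why the switch in status codes broke our existing handling.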

Here are the headers I get back:

apache.http.headers - http-outgoing-175 << HTTP/1.1 504 Gateway Time-out
apache.http.headers - http-outgoing-175 << X-DNS-Prefetch-Control: off
apache.http.headers - http-outgoing-175 << X-Frame-Options: DENY
apache.http.headers - http-outgoing-175 << Strict-Transport-Security: max-age=15552000
apache.http.headers - http-outgoing-175 << X-Download-Options: noopen
apache.http.headers - http-outgoing-175 << Surrogate-Control: no-store
apache.http.headers - http-outgoing-175 << Cache-Control: max-age=0, must-revalidate, no-cache, no-store
apache.http.headers - http-outgoing-175 << Pragma: no-cache
apache.http.headers - http-outgoing-175 << Expires: Thu, 01 Jan 1970 00:00:00
apache.http.headers - http-outgoing-175 << X-Content-Type-Options: nosniff
apache.http.headers - http-outgoing-175 << Referrer-Policy: strict-origin-when-cross-origin
apache.http.headers - http-outgoing-175 << X-XSS-Protection: 1; mode=block
apache.http.headers - http-outgoing-175 << X-Trello-Version: 1.1989.0
apache.http.headers - http-outgoing-175 << X-Trello-Environment: Production
apache.http.headers - http-outgoing-175 << Access-Control-Allow-Origin: *
apache.http.headers - http-outgoing-175 << Access-Control-Allow-Methods: GET, PUT, POST, DELETE
apache.http.headers - http-outgoing-175 << Access-Control-Allow-Headers: Authorization, Accept, Content-Type
apache.http.headers - http-outgoing-175 << Access-Control-Expose-Headers: x-rate-limit-api-key-interval-ms, x-rate-limit-api-key-max, x-rate-limit-api-key-remaining, x-rate-limit-api-token-interval-ms, x-rate-limit-api-token-max, x-rate-limit-api-token-remaining
apache.http.headers - http-outgoing-175 << X-RATE-LIMIT-API-KEY-INTERVAL-MS: 10000
apache.http.headers - http-outgoing-175 << X-RATE-LIMIT-API-KEY-MAX: 300
apache.http.headers - http-outgoing-175 << X-RATE-LIMIT-API-KEY-REMAINING: 296
apache.http.headers - http-outgoing-175 << X-RATE-LIMIT-API-TOKEN-INTERVAL-MS: 10000
apache.http.headers - http-outgoing-175 << X-RATE-LIMIT-API-TOKEN-MAX: 100
apache.http.headers - http-outgoing-175 << X-RATE-LIMIT-API-TOKEN-REMAINING: 96
apache.http.headers - http-outgoing-175 << X-RATE-LIMIT-MEMBER-INTERVAL-MS: 10000
apache.http.headers - http-outgoing-175 << X-RATE-LIMIT-MEMBER-MAX: 200
apache.http.headers - http-outgoing-175 << X-RATE-LIMIT-MEMBER-REMAINING: 196
apache.http.headers - http-outgoing-175 << Content-Type: text/plain; charset=utf-8
apache.http.headers - http-outgoing-175 << Content-Length: 16
apache.http.headers - http-outgoing-175 << Date: Tue, 18 Feb 2020 19:02:51 GMT
apache.http.headers - http-outgoing-175 << Connection: keep-alive
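Note that the rate-limit headers above still show plenty of headroom (296/300 per key, 96/100 per token), which is how we rule out throttling on our side. A small sketch of the check we do against those headers (the threshold of 5 is our own choice, not a Trello-documented value):

```python
def pause_seconds(headers, threshold=5):
    """If the per-token budget is nearly spent, return how long to pause
    (one advertised interval); otherwise return 0.0."""
    remaining = int(headers.get("X-Rate-Limit-Api-Token-Remaining", 100))
    interval_ms = int(headers.get("X-Rate-Limit-Api-Token-Interval-Ms", 10000))
    if remaining <= threshold:
        return interval_ms / 1000.0
    return 0.0
```

With `Remaining: 96` out of a max of 100, this returns 0.0, so the test run was nowhere near the limit when the 504s started.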

We saw a spike in 504s on API calls that included organizations and have worked to bring those back down to normal levels. We’re still investigating what caused the spike; as soon as I know more I’ll share, but things should be back to normal now.

Thanks @bentley, I can confirm that our tests are passing again!