Seeking Best Practices for Data Ingestion with Cloud Enterprise

Current Situation
We are in the process of migrating from Data Center (DC) to Cloud Enterprise.

  • On DC, we had direct access to the database. Using JDBC connections, we could easily retrieve the data we needed—from Tempo approvals to Xray data—and load it into our data lake.

  • However, after moving to Cloud Enterprise, the approach has shifted to Delta Sharing, which limits the number of tables we can access. Additionally, data from apps such as Xray and Tempo now has to be pulled via their REST APIs (as advised in a support ticket we opened).
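For the REST API path, the main practical issue is pagination: an extraction that was one JDBC query on DC becomes many offset/limit calls. A minimal sketch of a generic paginated-ingestion helper is below; the response shape (`results` plus offset/limit paging) is an assumption modeled loosely on Tempo's Cloud API, so adapt it per endpoint.

```python
from typing import Callable, Dict, Iterator, List


def paged_records(fetch_page: Callable[[int, int], Dict],
                  page_size: int = 100) -> Iterator[Dict]:
    """Iterate over every record of an offset/limit-paginated REST endpoint.

    `fetch_page(offset, limit)` must return a dict with a "results" list,
    e.g. {"results": [{...}, ...]} -- an assumed shape; Tempo and Xray
    endpoints each have their own envelope, so adjust the key accordingly.
    """
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        results: List[Dict] = page.get("results", [])
        yield from results
        if len(results) < page_size:  # short page => last page
            break
        offset += page_size


# A real fetcher would wrap an HTTP call, for example (endpoint and token
# are placeholders, not verified against Tempo's current API):
#
#   import requests
#   def tempo_worklogs(offset: int, limit: int) -> Dict:
#       r = requests.get("https://api.tempo.io/4/worklogs",
#                        headers={"Authorization": f"Bearer {TOKEN}"},
#                        params={"offset": offset, "limit": limit},
#                        timeout=30)
#       r.raise_for_status()
#       return r.json()
```

Decoupling the paging loop from the HTTP call also makes it easy to add per-endpoint rate limiting and retries, which matter at data-lake volumes.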

Challenges
This transition has introduced significant complexity: extractions that used to be a single JDBC query now require many paginated REST calls per app, and the Delta Sharing table limit means we can no longer mirror everything we previously loaded into our data lake.

Request for Support
Since opening tickets with Atlassian Support has not yielded the expected results, I would like to present this situation to the Developer Community and seek guidance on best practices for:

  • Handling large-scale data ingestion (REST API or alternative solutions).

  • Optimizing or extending Delta Sharing for Cloud Enterprise.

  • Ingesting third-party app data (e.g., Xray, Tempo).

  • Any proven patterns or documentation for Databricks-to-Databricks sharing in this context (both Atlassian and my company use Databricks).
