Deprecation of the Confluence V1 API – major concerns

I’ve been attentively following the announcements regarding the deprecation of numerous V1 REST endpoints in Confluence. While I wholeheartedly understand and respect the pursuit of progress and innovation, this decision raises several pressing concerns:

1. Maturity of V2 and GraphQL: The current V2 API seems to lack essential features, and many critical GraphQL capabilities remain in beta. Furthermore, to my knowledge, the GraphQL API isn’t even accessible to Connect apps at this juncture. Transitioning to Forge isn’t an option for us, and many developers share this sentiment due to Forge’s existing limitations.

2. Dependence on Expands in V1: Our systems, along with many others, heavily depend on the “expands” feature in the V1 API (see the sketch after this list). With the proposed changes, this critical data access could be hampered, potentially rendering integrations and specific functions unviable. Neither V2 nor GraphQL offers a suitable replacement.

3. Partial API Transition: While I’m always open to embracing newer API versions, it’s baffling that the decision has been made to deprecate substantial sections of the old API, when its successor isn’t wholly prepared to assume its responsibilities. This situation leaves us oscillating awkwardly between two distinct APIs.

4. Relevance of Content IDs: Our application greatly values the adaptability of content IDs, which can signify a page, blog post, or comment. Eliminating this functionality would necessitate a significant overhaul of our app’s logic.

5. Export View: Our application uses the export view; to my understanding, the export view cannot be accessed at all with the V2 API.

6. Potential Workarounds: Given that the search API endpoint seems set to endure, as evidenced by this documentation, it raises the question: could this serve as a viable alternative for the “get content” API? If we frame CQL as ‘get content with id = 123’ (sketched below), it might work, and we might be able to adjust our app without too much work, but pushing vendors down this path seems counterproductive, especially if the objective is performance enhancement.
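
To make points 2 and 6 concrete, here is a rough sketch (not production code) of what we do today with a single V1 call plus expands, and what the CQL-based workaround might look like. The site URL, auth header, and expand list are placeholders for illustration only.

```typescript
// Rough sketch only; site URL, auth, and the expand list are placeholders.
const BASE_URL = "https://your-site.atlassian.net/wiki";
const headers = {
  Authorization: "Basic <base64 of email:api_token>", // placeholder credentials
  Accept: "application/json",
};

// Today (V1): one call returns the content plus everything we expand on it.
async function getContentV1(id: string) {
  const expand = "body.storage,version,space,metadata.properties";
  const res = await fetch(`${BASE_URL}/rest/api/content/${id}?expand=${expand}`, { headers });
  return res.json();
}

// The workaround floated in point 6: lean on the surviving search endpoint with
// a CQL clause of `id = <id>`. Whether this keeps working, and how it performs,
// is exactly the open question.
async function getContentViaCql(id: string) {
  const cql = encodeURIComponent(`id = ${id}`);
  const res = await fetch(
    `${BASE_URL}/rest/api/content/search?cql=${cql}&expand=body.storage,version`,
    { headers }
  );
  const data = await res.json();
  return data.results?.[0];
}
```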

Given the issues outlined, I respectfully request that this decision be revisited or, at a minimum, that a well-thought-out plan be put in place to ensure a smooth transition that doesn’t compromise key features and data access. The developer community deeply values stability and clarity, and we earnestly hope these concerns will be considered during this critical transition.

I must emphasize: I genuinely appreciate the direction of Forge and V2, and I’m not against change per se. However, it’s disheartening to see vendors continuously nudged towards adopting solutions that seem incomplete or not fully baked. This is very frustrating, and I’m worried about the stability of our app when we’re forced to move to APIs that are still in beta.

21 Likes

Well said, I share all the same feelings, and I’m sure other partners must as well.

The scope of these changes is enormous, the replacement APIs are still incomplete, and substantial amounts of legacy code with years of battle-hardening need ample time to be rewritten, retested, and, in some cases, gradually adopted by our shared customers.

Applying the standard six-month deprecation policy feels like an outrageous abuse of that policy; it fails to consider the impact of such a massive change, especially when the countdown to deprecation was started before reaching consensus that v2 is feature complete.

The problem is not only the number of APIs affected; it’s also a radical change in API design that will require substantial refactoring of apps, libraries, and integrations that have been solving customer needs for years.

This is an exceptional case that deserves a separate deprecation policy.

11 Likes

Let me throw a few other items onto the pile:

7. New Primary Keys for Space Keys and Content Properties. The new API transitions off space keys and content property names and instead uses unique IDs for each (which apps never previously needed to know). This is another paradigm shift that does not map 1:1 to the old API and that likely requires more architectural rework, for example to communicate and store spaceIds instead of spaceKeys in various places throughout the client app (see the sketch after this list).

8. Different Response Shapes. The shape of almost all of the API response objects has changed. This creates a higher testing burden on vendors because the old and new shapes are not 1:1. In addition, until 100% of the V.1 API is fully transitioned, apps need to translate between one shape and the other internally, which is awkward at best and error-inducing at worst.

9. Performance Parity. Atlassian has said that they are willing to trade off receiving more API calls in exchange for the increased permission granularity. While Atlassian may be able to handle the load, the problem is that this can translate into significant slowdowns for the end user, because one API call often turns into two or three. Even if Atlassian can bear the increased load, will all of our collective end users be able to bear most apps becoming noticeably slower? Achieving “API parity” needs to be measured not just in completeness of API calls, but also in speed. And by speed, I mean real-world use cases: this has to extend beyond the simplistic testing of one V.2 call vs. one V.1 call, because there is no real 1:1 mapping between the two, and what was one API call often becomes multiple. (At least one or two other vendors have already posted comparisons, and I would generously describe them as “not good”.)

10. Rate Limiting Increases and Transparency. There is talk of greatly expanding the number of API calls made by apps, but it is not mentioned anywhere how (or whether) the rate limits will be adjusted. In passing, these limits were already rather opaque to begin with.
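
To illustrate point 7, here is a minimal sketch of the extra lookup step that moving from space keys to space IDs implies. It assumes the V2 spaces endpoint accepts a `keys` query parameter; treat the exact parameter name and response shape as things to verify against the docs rather than a definitive implementation.

```typescript
// Minimal sketch of the extra indirection from point 7; URL/auth are placeholders,
// and the `keys` query parameter should be verified against the V2 documentation.
const BASE_URL = "https://your-site.atlassian.net/wiki";
const headers = { Authorization: "Basic <base64 of email:api_token>", Accept: "application/json" };

async function resolveSpaceId(spaceKey: string): Promise<string | undefined> {
  const res = await fetch(
    `${BASE_URL}/api/v2/spaces?keys=${encodeURIComponent(spaceKey)}`,
    { headers }
  );
  const data = await res.json();
  // The numeric id then has to be carried (or persisted) everywhere the app
  // previously stored only the space key.
  return data.results?.[0]?.id;
}
```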

With regards to the CQL search API as a workaround, as I may have mentioned elsewhere, this is not a suitable replacement for “fetch-content” because it depends on the search indexes being up-to-date. These indexes were apparently designed for eventual consistency, but if you do something like creating some content and then trying to fetch it immediately via search (from somewhere that happens to hit the wrong replica), you won’t necessarily find it. I also get the impression that the team would like to deprecate the V.1 search API, although they haven’t (yet) figured out how, so it has not been announced.
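
A contrived sketch of that read-after-write gap, under the same placeholder URL/auth assumptions as above: create a page via V2, then immediately look it up via CQL search, and the search can legitimately come back empty even though the page exists.

```typescript
// Contrived sketch of the read-after-write gap; URL, auth, and spaceId are placeholders.
const BASE = "https://your-site.atlassian.net/wiki";
const hdrs = {
  Authorization: "Basic <base64 of email:api_token>",
  "Content-Type": "application/json",
  Accept: "application/json",
};

async function demonstrateStaleSearch() {
  // 1. Create a page via the V2 API.
  const createRes = await fetch(`${BASE}/api/v2/pages`, {
    method: "POST",
    headers: hdrs,
    body: JSON.stringify({
      spaceId: "<space-id>",
      title: "Read-after-write test",
      body: { representation: "storage", value: "<p>hello</p>" },
    }),
  });
  const page = await createRes.json();

  // 2. Immediately search for it via CQL. Because the search index is eventually
  //    consistent, this can return zero results for a while even though the page exists.
  const cql = encodeURIComponent(`id = ${page.id}`);
  const searchRes = await fetch(`${BASE}/rest/api/content/search?cql=${cql}`, { headers: hdrs });
  const hits = (await searchRes.json()).results ?? [];
  console.log(hits.length === 0 ? "created but not indexed yet" : "found via search");
}
```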

In response to feedback, Atlassian did provide this new API to batch-convert a list of contentIds to their appropriate types, although this still means that you will need to rearchitect your app to convert and store old contentIds as new tuples of (contentType,contentId). This is another non-trivial task that most app developers will need time to implement and test.

9 Likes

Their current transition strategy also seems to be crowdsourcing discovery of missing parity by way of short and stressful deadlines. Every week there are multiple forum posts reporting missing parity.

It should not be the community’s job to find these gaps under the hammer of looming business risks mere months out.

Nor should the deadline be incrementally pushed out by three to six months at a time to keep pressure on devs to continue finding these gaps. The API team should have scripted parity discovery from the beginning.

8 Likes

Agree with all your points. I just thought I’d mention the body-format query parameter that allows retrieving the export-view. https://developer.atlassian.com/cloud/confluence/rest/v2/api-group-page/#api-pages-id-get-request-Query%20parameters

3 Likes

I just thought I’d mention the body-format query parameter that allows retrieving the export-view.

So I wasn’t actually intending to test this today, but I accidentally found myself having to in order to support a particular piece of formatting. From my experience, it looks like export_view is not currently supported on pages. view is, and that’s what I ultimately used - but even though the documentation says export_view is valid, in my experience it didn’t appear to be. Also, if you look at the shape of the example response object, storage, view, and atlas_doc_format are all shown as valid objects, but export_view (and others) are missing. I think only the three shown in the example response are valid currently.
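
For reference, roughly how I tested it (placeholder URL/auth); the body-format value is the only thing that changes between attempts, and only the formats shown in the documented example response worked for me.

```typescript
// Roughly how this was tested; URL and auth are placeholders. In my testing only
// the formats shown in the documented example response (storage, view,
// atlas_doc_format) came back successfully; export_view did not work on pages.
const BASE = "https://your-site.atlassian.net/wiki";
const hdrs = { Authorization: "Basic <base64 of email:api_token>", Accept: "application/json" };

async function getPageBody(
  pageId: string,
  format: "storage" | "view" | "export_view" | "atlas_doc_format"
) {
  const res = await fetch(`${BASE}/api/v2/pages/${pageId}?body-format=${format}`, { headers: hdrs });
  if (!res.ok) {
    throw new Error(`body-format=${format} failed with HTTP ${res.status}`);
  }
  const page = await res.json();
  return page.body?.[format]?.value;
}
```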

3 Likes

They just set the new deadline to Jan 31st: RFC-19: Deprecation of Confluence Cloud REST API v1 Endpoints - #25 by AshishShirode

Completely ignored the widespread concerns. And their post says they’ll be ignoring any posts about the incompleteness of the v2 API on the forums.

And they’re saying any endpoint with gaps will only be deprecated 6 months from when it reaches parity. We all know that is false as there are countless examples of reported gaps on the forums where the deadline has not changed.

Surely the community now needs to take this to Mike and Scott?

@nathanwaters I understand you are frustrated. However, please keep your feedback professional, respectful, and constructive, as the other contributors on this thread are managing to do.

We are escalating the concerns raised above with the engineering team, but inflammatory statements and name calling are extremely counter-productive, and detract from the otherwise valuable feedback being surfaced by yourself and other members of the community.

Please review our Participation Guidelines and Community Code of Conduct. I really do not enjoy having to penalise or ban members of this community for misconduct, but I will have to if you are unable to keep your communications more respectful and constructive.

To be crystal clear: My concern here is in the way that you are choosing to communicate. It’s fine to raise concerns here — I appreciate the continued efforts from the developer community to raise specific and actionable feedback relating to deficiencies in the Confluence REST v2 API and the manner and timing of the v1 deprecation. The list that @SebastianGellweiler & @scott.dudley have collated above is well-articulated and constructive, and constructive feedback and discourse that abides by our guidelines is welcome on the community.

Nothing inflammatory in my post. Just re-stating what the RFC reply said. They ignored my posts in the RFC with detailed concerns and questions. Let’s just pick one question I asked in the RFC (I’m not the first to ask it): the GraphQL API is unavailable for Connect despite the official v2 API blog post saying developers should use it to replace deprecated expansions. No response. Nothing in CONFCLOUD.

They’ve ignored this thread (you’re the first to say anything) and many others from both small and massive vendors. You’ve even got the ScriptRunner devs saying they can’t migrate because there’s no parity for the deprecated v1 endpoints they require, and that they would like extended deprecation periods. Their thread was ignored.

I’ll let others use their better composure to explain to you where and why that RFC response does not cover the concerns raised by the community…

@nathanwaters I appreciate you editing your previous post to moderate your language, but please do not pretend that the original content was not inflammatory. For the avoidance of doubt, it was the portion of your statement (prior to editing) that referred to Atlassian staff members as “jokers” and the recent update from the RFC author as a “lie” that were the reasons for censure, not your concerns about the Confluence REST API.

I do agree the most recent update from Atlassian on the RFC does not fully address the concerns that have been enumerated above. I do also understand and agree with the concern about GraphQL being the proposed replacement for expands, and the incompatibility between Connect & GraphQL. As I mentioned, we are re-escalating the concerns that have been enumerated above with engineering. However, please realise that some of these concerns are quite nuanced and will require some investigation and discussion. If you have clear, actionable feedback related to the v1 deprecation that is distinct from points 1-10 above, then please do share it and I will be happy to escalate it.

In the meantime, please do take the latest comment on the RFC about deprecations in good faith:

“If there are any gaps for a specific API endpoint, rest assured that we will only deprecate that endpoint six months after achieving parity for that endpoint.”

This does not cover all concerns relating to the rollout, but does seem to address the specific concerns from the ScriptRunner folks that you mentioned.

4 Likes

Dude this has been going on for about 12 months now. These issues have been raised numerous times.

At a bare minimum, I think the community wishes to see the v2 API fully completed and confirmed to have 100% parity before any deprecation date is set. That’s how it should have been done from the beginning. If the revised plan is to now do that per endpoint, sure, cool I guess, but it just complicates the matter.

Do we now need a lookup table to see on which date each endpoint deprecates? Did the RFC response link to such a thing? Nope. I can find dozens of posts in the past few months reporting parity issues in the forums. I have yet to see any of these endpoint deprecation dates reset to +6 months. Why is that RFC response saying otherwise?

There are recent comments from Atlassian staff saying that completely new v2 endpoints to assist migration are “upcoming”. Yet the deprecation date has barely moved, incremented (again) by just one month, which very much signals to the community a failure to acknowledge their concerns.

It’s a waste of our time and an incredible disrespect to the ecosystem to force us all to start migration with the plain knowledge that parity is not there. Particularly when the v1-to-v2 migration is going to require a complete rewrite of every call, given how much functionality has been removed (a huge unaddressed concern).

1 Like

To give another concrete example: Unlike the deprecated Get content endpoint, Get pages does not return the page owner.

If Atlassian follows their policy explained in the latest RFC comment, this feature gap alone should prevent a deprecation. Even if the gap were closed today, the endpoint would not be deprecated before March 2024:

If there are any gaps for a specific API endpoint, rest assured that we will only deprecate that endpoint six months after achieving parity for that endpoint.

7 Likes

I too am confused by this. There are known gaps today, such as CONFCLOUD-76343 and CONFCLOUD-76344. Does that mean that the deprecation of the wiki/rest/api/space endpoint is extended until delivery+6 months? If no, then the quote above is confusing me. If yes, then I feel like we need a granular table now to track each endpoint’s individual deprecation timeline so that we can efficiently plan our migration of API usage.

Separately, in the RFC I requested that special response headers be added to deprecated endpoints, which was at least acknowledged as a useful idea during the RFC discussion period. I was sad to see that only usage snapshots by app key are being offered in the latest RFC update. Not all uses of the Confluence Cloud APIs are made by apps. In our case, we have the Bob Swift CLI product, whose commands execute outside the context of an app, so there is no app key to request a snapshot for. A response header flagging deprecation status would be much more valuable as a general-purpose solution in our case, and would have the advantage for everyone of providing value beyond a one-time snapshot. Perhaps the snapshot info is also very useful for apps, so I don’t mean to downplay the value offered there; it’s just not universally helpful.

Finally, for our purposes we are now starting to think about using the GraphQL API (we do not have the Connect constraint) as a replacement for v1 rather than v2, given concerns about the performance penalties and potential rate-limiting impacts we would incur if we re-implemented all existing functionality by manually performing multiple calls to transparently mimic the previous behavior of v1’s expands. As pointed out repeatedly in these conversations, however, GraphQL still has many critical features in beta, and as far as I know there is no ETA for elevating them from that status. In fact, AFAIK there’s no guarantee that the beta features will even be elevated to GA status – maybe they will get chopped like the v1 endpoints in question?

All of this makes it exceptionally hard to move forward with any amount of clarity or confidence.

7 Likes

In terms of the deprecation deadlines, here’s another one:

The V.2 “get content properties for (contentType)” APIs have not yet shipped. The API that these components truly replace, as mentioned in more detail earlier, is in fact the generic GET /rest/api/content/{id}?expand=metadata.properties.xyz endpoint.

The RFC closing comment says that the deprecation deadline for everything is Jan 31, but that doesn’t seem like it can be quite right. For example, echoing what @klaussner wrote, we would expect a Mar 1 deprecation date for the above APIs, if the replacements were shipped today. The dozen tickets listed also imply that other areas of the V.2 API are not complete…so to which V.1 endpoints does that deprecation date apply?

If there are indeed going to be different deadlines for different endpoints, it gets confusing quickly with information scattered across forum posts. In this eventuality, as @bobbergman1 also suggested, could we please get a spreadsheet or whatever showing, for every endpoint in the V.1 API:

  • the projected or confirmed date to sunset that specific endpoint
  • the replacement V.2 endpoint(s)
  • the ticket(s) on which the work depends, if appropriate, as well as any estimation of delivery dates

It would also be helpful to include the entire set of V.1 endpoints in this list, regardless of whether or not they are currently marked for deprecation. My assumption is that Atlassian would eventually like to deprecate the V.1 API in its entirety and that everything will move to V.2 at some point or another, and knowing whether certain non-deprecated endpoints are in the short-term or long-term backlog, or not being presently considered, would be helpful.

Building a document like this also helps to convey to the vendor community that Atlassian has a master plan for how all of this will work out, and perhaps the work of putting it together on an endpoint-by-endpoint basis might even expose dependencies that were not otherwise evident. It could even be combined with some sort of parity analysis as Nathan suggested earlier, so that this cycle does not need to be repeated in six months.

If Atlassian cannot treat the entire V.1 API with a single monolithic deprecation date, please also do not forget the issue above of different response object shapes (#8) and the pain that this causes vendors. One way to reduce this burden would be to set the deprecation dates for groups of endpoints rather than individual endpoints. For example, pages/blogposts might be one logical group, comments might be another group, and so on. (Those are just off the top of my head, and somebody needs to actually look at the response shapes to see what makes sense.) The downside is that you need to continue supporting the entire deprecated V.1 API group until the very last tiny blocker on any one of the endpoints is fully migrated…but it seems like this is exactly what Atlassian should be doing.

As an aside, I also don’t see a ticket for the promised “get content properties for contenttype” endpoint in the suggested ticket queue, so I hope it is still in progress.

7 Likes

Is there interest in (and are there suggestions for) quickly and accurately writing a script to compile a real-time list of missing parity? It seems to be our job now, so we may as well just get it done.

I wonder if there’s a consumable list of every query param and variable or if we need to manually grab them from the docs.

Edit (thank you, sir):

https://dac-static.atlassian.com/cloud/confluence/swagger.v3.json
https://dac-static.atlassian.com/cloud/confluence/openapi-v2.v3.json
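
As a starting point, assuming both files are standard OpenAPI documents, something like the sketch below can pull both specs and dump their operations side by side; the actual V1-to-V2 mapping (which is not 1:1) would still have to be curated by hand.

```typescript
// Starting point for a parity audit: pull both published specs and dump their
// operations side by side. Assumes both files are standard OpenAPI documents;
// the V1 -> V2 mapping still has to be curated by hand because the designs differ.
const V1_SPEC = "https://dac-static.atlassian.com/cloud/confluence/swagger.v3.json";
const V2_SPEC = "https://dac-static.atlassian.com/cloud/confluence/openapi-v2.v3.json";
const HTTP_METHODS = new Set(["get", "post", "put", "patch", "delete"]);

type OpenApiSpec = {
  paths: Record<string, Record<string, { parameters?: unknown[] }>>;
};

async function loadSpec(url: string): Promise<OpenApiSpec> {
  const res = await fetch(url);
  return res.json();
}

async function dumpOperations() {
  const v1 = await loadSpec(V1_SPEC);
  const v2 = await loadSpec(V2_SPEC);

  for (const [label, spec] of Object.entries({ v1, v2 })) {
    console.log(`=== ${label}: ${Object.keys(spec.paths).length} paths ===`);
    for (const [path, item] of Object.entries(spec.paths)) {
      for (const [method, op] of Object.entries(item)) {
        if (!HTTP_METHODS.has(method)) continue; // skip path-level keys like "parameters"
        // Parameter counts are a rough proxy for where expands/filters may have been
        // dropped; a human still has to map each V1 operation to its V2 counterpart.
        const paramCount = op.parameters?.length ?? 0;
        console.log(`${label} ${method.toUpperCase()} ${path} (${paramCount} params)`);
      }
    }
  }
}

dumpOperations();
```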

Thank you for the further insights folks. The Confluence engineering team are reviewing the above and we’ll respond here once we have some updates.

@klaussner & @bobbergman1 regarding your comments:

If Atlassian follows their policy explained in the latest RFC comment, this feature gap alone should prevent a deprecation.

I too am confused by this. There are known gaps today, such as CONFCLOUD-76343 and CONFCLOUD-76344. Does that mean that the deprecation of the wiki/rest/api/space endpoint is extended until delivery+6 months?

Apologies, the language we’ve used here is a little confusing—I’m making a note to be more careful with our terminology. To avoid ambiguity in this context I’m going to use “Deprecated” in the traditional sense of meaning “Marked for removal at some future date”, and “Removed from service” to mean the API is no longer accessible.

The intent behind this segment on the final comment on the RFC:

“If there are any gaps for a specific API endpoint, rest assured that we will only deprecate that endpoint six months after achieving parity for that endpoint.”

…is that a specific endpoint in the v1 API will not be removed from service until six months after the v2 API has a suitable replacement for that endpoint.

It seems reasonable that this would apply to both the cases you mentioned (/rest/api/content/{id}?expand=metadata.properties.xyz and /rest/api/space), but I will confirm this with engineering.

I believe the intent behind the update was to add an additional month to the deprecation schedule for all deprecated v1 APIs as a gesture of goodwill, but also to signal that we’re willing to extend that further for the individual APIs which do not yet have suitable replacements (I also agree we need a better public-facing mechanism for tracking this).

2 Likes

The language used in all posts from Atlassian staff has been explicit that the removed-from-service date was Jan 1st. Now Jan 31st. They have not incremented the “deprecation” date by one month; they have incremented the “removed from service” date by one month (for ALL endpoints at this point in time, since there is no table anywhere listing cut-off dates per endpoint).

It’s right there in the first post of the RFC:

Public access for the endpoints will be removed on Jan 1, 2024, except for the Content Property endpoints & the “Update inline task given global ID” endpoint, for which public access will be removed on Feb 1, 2024. This means any apps still using these V1 APIs will no longer have access and will need to move to the newer V2 endpoints.

To avoid any further confusion: no v1 endpoints should be removed from service until 6 months after there is confirmed 100% v2 parity coverage across ALL endpoints.

Nobody wants to be migrating endpoint by endpoint, each with its own cut-off date. That’s a waste of the community’s time and a business risk if any vendor happens to miss just one of those dates.

Just give us a single access cut-off date.

And since there are dozens of known and unknown parity issues, there shouldn’t be any set cut-off or “removed from service” date at this point.

5 Likes

Hello @tpettersen,

Firstly, I want to acknowledge the complexities and challenges that come with managing such a major migration. I realize that we’re a relatively small vendor, yet I believe our sentiments echo the feelings of many others in the community. I’m genuinely thankful for your engagement with this thread and for addressing our concerns. The extension of the deadline and the resolution of the export view issue are positive steps.

That said, the ongoing process has been quite unsettling. Having to persistently reemphasize concerns before they are acknowledged is disheartening for the developer community. While I don’t endorse counterproductive language, I can relate to the frustration it stems from.

I’m particularly disconcerted about the locking of the original deprecation thread. It appears to signal a reluctance to maintain open communication. While Developer-Support and Issue-tickets serve a purpose, there should also be a space for open dialogue about the broader migration process and communication strategy.

Mutual respect and understanding are key to enduring, fruitful relationships. We, the developer community, have shown time and again our dedication to evolving alongside Atlassian. We recognize that any migration is bound to face hurdles. However, the current strategy feels opaque, adding unnecessary stress and threatening the operational consistency of many vendors.

Here’s what we believe can drive a more seamless transition:

Clear Documentation: A continually updated, centralized source detailing the parity status and deprecation timelines for each endpoint.

Open Channel for Feedback: Reopening the original thread or initiating another transparent official channel would be ideal. If concerns about the thread’s length and consistency prompt its closure, perhaps a moderated platform dedicated to migration discussions can be introduced to maintain constructive feedback.

Regular Updates: Whether through the original thread or another medium like a newsletter, regular updates on the migration’s progress are crucial.

Collaborative Approach: Consider establishing a group comprising Atlassian representatives and community members. This collaboration can address challenges, ideate solutions, and guide the transition more effectively.

Remember, our aim isn’t to resist change. We’re seeking a well-planned, transparent, and respectful migration process that appreciates the collective endeavors and interdependencies of this vibrant ecosystem. Together, with open dialogue and collaboration, we can ensure a seamless migration that benefits both Atlassian and its developer community. We are optimistic about growing alongside Atlassian and trust that our feedback will be genuinely considered.

Thank you for your understanding and support.

10 Likes

I see that Atlassian released one carrot for developers today (thanks, @bobbergman1): “Added Deprecation headers to deprecated V1 APIs”

4 Likes

This is good news. This at least helps with monitoring.

Is the plan to add these headers to all future deprecations (including V2 APIs, if it ever comes to that)? That would at least help a lot with setting up alerts and regular scans for deprecations.
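
For what it’s worth, a small sketch of the kind of check this enables, assuming the new headers follow the common IETF-style Deprecation/Sunset convention; the exact header names Atlassian emits should be verified first.

```typescript
// Small sketch of an alerting hook; assumes IETF-style "Deprecation" / "Sunset"
// response headers, so verify the exact header names the Confluence API emits.
async function fetchWithDeprecationCheck(url: string, init?: RequestInit): Promise<Response> {
  const res = await fetch(url, init);
  const deprecation = res.headers.get("Deprecation");
  const sunset = res.headers.get("Sunset");
  if (deprecation || sunset) {
    // Route this into whatever logging/alerting the app already has.
    console.warn(`Deprecated endpoint hit: ${url}`, { deprecation, sunset });
  }
  return res;
}
```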

I’m glad there is at least some action being taken towards more transparency and I hope that there will be more to come.

1 Like