Deprecation of the Confluence V1 API – Major concerns

Dude this has been going on for about 12 months now. These issues have been raised numerous times.

At a bare minimum, I think the community wishes to see the v2 API fully completed and confirmed to have 100% parity before any deprecation date is set. That’s how it should have worked from the beginning. If the revised plan is to now do that per endpoint, sure, cool I guess, but it just complicates the matter.

Do we now need a lookup table to see which date an endpoint deprecates? Did the RFC response link to such a thing? Nope. I can find dozens of posts in the past few months reporting parity issues in the forums. I have yet to see any of these endpoint deprecation dates reset to +6 months. Why is the RFC response saying otherwise?

There are recent comments from Atlassian staff saying that completely new v2 endpoints to assist migration are “upcoming”. Yet the deprecation date has barely moved, incrementing (again) by only +1 month, which very much signals to the community a failure to acknowledge their concerns.

It’s a waste of our time and an incredible disrespect to the ecosystem to force us all to start migration with the plain knowledge that parity is not there. Particularly when the v1 to v2 migration is going to require a complete rewrite on every call given how much functionality has been removed (a huge unaddressed concern).


To give another concrete example: Unlike the deprecated Get content endpoint, Get pages does not return the page owner.

If Atlassian follows their policy explained in the latest RFC comment, this feature gap alone should prevent a deprecation. Even if the gap were closed today, the endpoint would not be deprecated before March 2024:

If there are any gaps for a specific API endpoint, rest assured that we will only deprecate that endpoint six months after achieving parity for that endpoint.


I too am confused by this. There are known gaps today, such as CONFCLOUD-76343 and CONFCLOUD-76344. Does that mean that the deprecation of the wiki/rest/api/space endpoint is extended until delivery+6 months? If no, then the quote above is confusing me. If yes, then I feel like we need a granular table now to track each endpoint’s individual deprecation timeline so that we can efficiently plan our migration of API usage.

Separately, in the RFC I requested that special response headers be added to deprecated endpoints, which was at least acknowledged as a useful idea during the RFC discussion period. I was sad to see that only usage snapshots by app key are being offered in the latest RFC update. Not all uses of the Confluence Cloud APIs are made by apps. In our case, we have the Bob Swift CLI product, whose commands execute outside the context of an app, so there is no app key to request a snapshot for. A response header flagging deprecation status would be much more valuable as a general-purpose solution in our case, and would have the advantage for everyone of providing value beyond a one-time snapshot. Perhaps the snapshot info is also very useful for apps, so I don’t mean to downplay the value offered there; it’s just not universally helpful.
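For what it’s worth, such a header check is cheap to implement on the client side. A minimal sketch, assuming header names along the lines of the IETF Deprecation/Sunset header drafts (the exact names Atlassian would choose are an assumption here):

```python
# Sketch: client-side detection of a deprecation response header.
# Header names follow the IETF "Deprecation" / "Sunset" drafts; the
# exact names Atlassian would use are an assumption.

def deprecation_info(headers):
    """Return (deprecated, sunset_date) based on response headers."""
    # Normalize header names to lowercase for a case-insensitive lookup.
    lower = {k.lower(): v for k, v in headers.items()}
    deprecated = "deprecation" in lower
    sunset = lower.get("sunset")  # e.g. an HTTP-date for the removal date
    return deprecated, sunset
```

A CLI could run every response through a check like this and print a warning whenever a deprecated endpoint is hit, regardless of whether an app key exists.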

Finally, for our purposes we are now starting to think about using the GraphQL API (we do not have the Connect constraint) as a replacement for v1 rather than v2, given concerns about the performance penalties and potential rate-limiting impacts we would incur if we re-implemented all existing functionality by manually performing multiple calls to mimic the previous behavior of v1’s expands functionality. As pointed out repeatedly in these conversations, however, GraphQL still has many critical features in beta, and as far as I know there is no ETA for elevating them from that status. In fact, AFAIK there’s no guarantee that the beta features will even be elevated to GA status – maybe they will get chopped like the v1 endpoints in question?
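To make the expands concern concrete, here is a minimal sketch of the call-count difference (the base URL and the v2 sub-paths are illustrative placeholders, not confirmed endpoints):

```python
# Sketch of the call-count difference behind the "expands" concern.
# v1 lets one request pull related data via ?expand=...; a naive v2
# migration issues one request per piece of related data instead.
BASE = "https://example.atlassian.net/wiki"  # hypothetical site


def v1_calls(page_id, expands):
    # One round trip, however many expansions are requested.
    return [f"{BASE}/rest/api/content/{page_id}?expand={','.join(expands)}"]


def v2_calls(page_id, expands):
    # One round trip per related resource (paths here are illustrative).
    return [f"{BASE}/api/v2/pages/{page_id}"] + [
        f"{BASE}/api/v2/pages/{page_id}/{name}" for name in expands
    ]


print(len(v1_calls("123", ["version", "ancestors"])))  # 1
print(len(v2_calls("123", ["version", "ancestors"])))  # 3
```

Each extra round trip costs latency and counts against rate limits, which is why re-implementing expand behavior call-by-call is worrying.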

All of this makes it exceptionally hard to move forward with any amount of clarity or confidence.


In terms of the deprecation deadlines, here’s another one:

The V.2 “get content properties for (contentType)” APIs have not yet shipped. The API that these components truly replace, as mentioned in more detail earlier, is in fact the generic GET /rest/api/content/{id}? endpoint.

The RFC closing comment says that the deprecation deadline for everything is Jan 31, but that doesn’t seem like it can be quite right. For example, echoing what @klaussner wrote, we would expect a Mar 1 deprecation date for the above APIs, if the replacements were shipped today. The dozen tickets listed also imply that other areas of the V.2 API are not complete…so to which V.1 endpoints does that deprecation date apply?

If there are indeed going to be different deadlines for different endpoints, it gets confusing quickly with information scattered across forum posts. In this eventuality, as @bobbergman1 also suggested, could we please get a spreadsheet or whatever showing, for every endpoint in the V.1 API:

  • the projected or confirmed date to sunset that specific endpoint
  • the replacement V.2 endpoint(s)
  • the ticket(s) on which the work depends, if appropriate, as well as any estimation of delivery dates
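To make the ask concrete, one row of such a table might look like this (illustrative only: the mapping and date here are my guesses, not Atlassian’s actual schedule; the tickets are ones already raised in this thread):

  • wiki/rest/api/space → the V.2 spaces endpoints; blocked by CONFCLOUD-76343 and CONFCLOUD-76344; sunset TBD (delivery of those tickets + 6 months)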

It would also be helpful to include the entire set of V.1 endpoints in this list, regardless of whether or not they are currently marked for deprecation. My assumption is that Atlassian would eventually like to deprecate the V.1 API in its entirety and that everything will move to V.2 at some point or another, and knowing whether certain non-deprecated endpoints are in the short-term or long-term backlog, or not being presently considered, would be helpful.

Building a document like this also helps to convey to the vendor community that Atlassian has a master plan for how all of this will work out, and perhaps the work of putting it together on an endpoint-by-endpoint basis might even expose dependencies that were not otherwise evident. It could even be combined with some sort of parity analysis as Nathan suggested earlier, so that this cycle does not need to be repeated in six months.

If Atlassian cannot treat the entire V.1 API with a single monolithic deprecation date, please also do not forget the issue above of different response object shapes (#8) and the pain that this causes vendors. One way to reduce this burden would be to set the deprecation dates for groups of endpoints rather than individual endpoints. For example, pages/blogposts might be one logical group, comments might be another group, and so on. (Those are just off the top of my head, and somebody needs to actually look at the response shapes to see what makes sense.) The downside is that you need to continue supporting the entire deprecated V.1 API group until the very last tiny blocker on any one of the endpoints is fully migrated…but it seems like this is exactly what Atlassian should be doing.

As an aside, I also don’t see a ticket for the promised “get content properties for contenttype” endpoint in the suggested ticket queue, so I hope it is still in progress.


Is there interest in (and are there suggestions for) writing a script to quickly and accurately compile a real-time list of missing parity? It seems to be our job now, so we may as well just get it done.

I wonder if there’s a consumable list of every query param and variable or if we need to manually grab them from the docs.
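As a starting point, here is a rough sketch of the comparison logic, assuming we can obtain the v1 and v2 OpenAPI documents as parsed JSON (Atlassian publishes OpenAPI specs on developer.atlassian.com, but fetching them is left out here, and the v1→v2 path mapping has to be maintained by hand since the APIs are not 1:1):

```python
# Sketch: diff two OpenAPI documents to find v1 query/path parameters
# with no counterpart in v2. Ignores path-level parameter blocks and
# other OpenAPI subtleties; this is the comparison idea, not a full tool.

def params(spec):
    """Flatten an OpenAPI dict into {(method, path): {param names}}."""
    out = {}
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            names = {p["name"] for p in op.get("parameters", [])}
            out[(method, path)] = names
    return out


def missing_in_v2(v1_spec, v2_spec, mapping):
    """mapping: {(method, v1_path): (method, v2_path)}, maintained by
    hand since v1 -> v2 is not 1:1. Returns params present in v1 but
    absent from the mapped v2 operation."""
    v1, v2 = params(v1_spec), params(v2_spec)
    gaps = {}
    for v1_key, v2_key in mapping.items():
        diff = v1.get(v1_key, set()) - v2.get(v2_key, set())
        if diff:
            gaps[v1_key] = diff
    return gaps
```

Running something like this nightly against the published specs would at least give the community a live view of parameter-level gaps, even if response-shape differences still need manual review.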

Edit (thank you sir):

Thank you for the further insights folks. The Confluence engineering team are reviewing the above and we’ll respond here once we have some updates.

@klaussner & @bobbergman1 regarding your comments:

If Atlassian follows their policy explained in the latest RFC comment, this feature gap alone should prevent a deprecation.

I too am confused by this. There are known gaps today, such as CONFCLOUD-76343 and CONFCLOUD-76344. Does that mean that the deprecation of the wiki/rest/api/space endpoint is extended until delivery+6 months?

Apologies, the language we’ve used here is a little confusing—I’m making a note to be more careful with our terminology. To avoid ambiguity in this context I’m going to use “Deprecated” in the traditional sense of meaning “Marked for removal at some future date”, and “Removed from service” to mean the API is no longer accessible.

The intent behind this segment in the final comment on the RFC:

“If there are any gaps for a specific API endpoint, rest assured that we will only deprecate that endpoint six months after achieving parity for that endpoint.”

…is that specific endpoints in the v1 API will not be removed from service until 6 months after the v2 API has a suitable replacement for that endpoint.

It seems reasonable that this would apply to both the cases you mentioned (/rest/api/content/{id}? and /rest/api/space), but I will confirm this with engineering.

I believe the intent behind the update was to add an additional month to the deprecation schedule for all deprecated v1 APIs as a gesture of good will, but also signal that we’re willing to extend that further for the individual APIs which do not yet have suitable replacements (I also agree we need a better public-facing mechanism for tracking this).


The language used in all posts from Atlassian staff has been explicit that the removed-from-service date was Jan 1st. Now Jan 31st. They have not incremented the “deprecation” date by +1 month; they have incremented the “removed from service” date by +1 month (for ALL endpoints at this point in time, since there is no table anywhere listing cut-off dates per endpoint).

It’s right there in the first post of the RFC:

Public access for the endpoints will be removed on Jan 1, 2024, except for the Content Property endpoints & the “Update inline task given global ID” endpoint, for which public access will be removed on Feb 1, 2024. This means any apps still using these V1 APIs will no longer have access and will need to move to the newer V2 endpoints.

To avoid any further confusion: no v1 endpoints should be removed from service until 6 months after there is confirmed 100% v2 parity coverage across ALL endpoints.

Nobody wants to be migrating endpoint-by-endpoint each with their own cut-off date. That’s a waste of the community’s time and a business risk if any vendor happens to miss just one of those dates.

Just give us a single access cut-off date.

And since there are dozens of known and unknown parity issues, there shouldn’t be any set cut-off or “removed from service” date at this point.


Hello @tpettersen,

Firstly, I want to acknowledge the complexities and challenges that come with managing such a major migration. I realize that we’re a relatively small vendor, yet I believe our sentiments echo the feelings of many others in the community. I’m genuinely thankful for your engagement with this thread and for addressing our concerns. The extension of the deadline and the resolution of the export view issue are positive steps.

That said, the ongoing process has been quite unsettling. It is disheartening that the developer community must persistently reemphasize its concerns before they are acknowledged. While I don’t endorse counterproductive language, I can relate to the frustration it stems from.

I’m particularly disconcerted about the locking of the original deprecation thread. It appears to signal a reluctance to maintain open communication. While Developer-Support and Issue-tickets serve a purpose, there should also be a space for open dialogue about the broader migration process and communication strategy.

For enduring, fruitful relationships, mutual respect, and understanding are key. We, the developer community, have shown time and again our dedication to evolving alongside Atlassian. We recognize that any migration is bound to face hurdles. However, the current strategy feels opaque, adding unnecessary stress and threatening the operational consistency of many vendors.

Here’s what we believe can drive a more seamless transition:

Clear Documentation: A continually updated, centralized source detailing the parity status and deprecation timelines for each endpoint.

Open Channel for Feedback: Reopening the original thread or initiating another transparent official channel would be ideal. If concerns about the thread’s length and consistency prompt its closure, perhaps a moderated platform dedicated to migration discussions can be introduced to maintain constructive feedback.

Regular Updates: Whether through the original thread or another medium like a newsletter, regular updates on the migration’s progress are crucial.

Collaborative Approach: Consider establishing a group comprising Atlassian representatives and community members. This collaboration can address challenges, ideate solutions, and guide the transition more effectively.

Remember, our aim isn’t to resist change. We’re seeking a well-planned, transparent, and respectful migration process that appreciates the collective endeavors and interdependencies of this vibrant ecosystem. Together, with open dialogue and collaboration, we can ensure a seamless migration that benefits both Atlassian and its developer community. We are optimistic about growing alongside Atlassian and trust that our feedback will be genuinely considered.

Thank you for your understanding and support.


I see that Atlassian released one carrot for developers today (thanks, @bobbergman1): “Added Deprecation headers to deprecated V1 APIs”


This is good news. This at least helps with monitoring.

Is the plan to add these headers to all future deprecations (including V2 APIs, if it ever comes to that)? That would at least help a lot with setting up alerts/scanning regularly for deprecations.
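For anyone wiring up such alerts, a small sketch of the client-side bookkeeping (the header name “Deprecation” matches the changelog entry above; how you get at response headers depends on your HTTP client):

```python
# Sketch: accumulate deprecated-endpoint sightings for alerting.
# Call record() after each API response; it returns True only the
# first time a given URL is seen with a Deprecation header, so an
# alert fires once per endpoint rather than on every call.

class DeprecationMonitor:
    def __init__(self):
        self.seen = set()

    def record(self, url, headers):
        lower = {k.lower() for k in headers}
        if "deprecation" in lower and url not in self.seen:
            self.seen.add(url)
            return True
        return False
```

Feeding every response through a monitor like this turns the new headers into a cheap, continuous scan instead of a one-time audit.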

I’m glad there is at least some action being taken towards more transparency and I hope that there will be more to come.



Short answer, yes. To the extent that our extensibility standards team can propagate the learnings from this Confluence deprecation to all teams.


Good news. I really appreciate that. These little things make a lot of difference. Thanks for listening and taking the time to respond.

So they’re doubling-down on this unnecessarily complicated transition plan to bulk deprecate dozens of endpoints one-by-one on varying dates?

Rather than migrate an app in one go, we’re all going to be forced to migrate over 12-24 months as each endpoint slowly reaches parity. Miss one of those dates scattered throughout the calendar year and your app stops working.


Hi @tpettersen,

It’s been a couple of weeks now and I was wondering if any updates were available in general? Also, the previously-promised “get multiple content properties for contenttype” endpoint still has not shown up in the public API and I was wondering if this is still set to be released soon? (It’s a blocker on this side.)



@tpettersen If a vendor can ask Atlassian to provide the list of endpoints in use, why do we also have to open tickets to ask for the v2 API implementation of those endpoints? To me it looks like Atlassian already has the list of endpoints needed by vendors; they just need to go through those. Why is an extra round of back and forth required?



Jumping in since Tim is currently out-of-office.

The summary reason is that the change from v1 to v2 is not strictly 1-for-1. Some of what is being addressed in the change are fundamental problems in the design of the v1 API.

One side of the problem is that continued use of an endpoint is not a clear signal that there is a blocker. We’ve done our best to get the news out, but we know there are app developers who (to their own disadvantage) don’t read community posts or changelogs, and may miss the deprecation header.

Another problem is that knowing a v1 endpoint is used may not be sufficient to understand your technical design intent. Even though our logging tells us which endpoints are in use, that’s not a full picture. We intentionally remove user-generated content (mostly whatever is a variable in the OpenAPI spec), which can leave us with an incomplete picture of how you are using an endpoint.

It boils down to the deductive abilities of our Confluence developers. They have already used existing endpoints and the data from logs to project what is needed in the v2 API. Based on that analysis, they believe the coverage is already complete. So, for the Confluence team, the remaining gaps aren’t a simple matter of missing endpoints but a subtle set of surprising usages of the v1 API.

Hey @ibuchanan - thanks so much for the response here.

I just wanted to clarify something about endpoints versus functionality on those endpoints. When you say this:

Are we talking about whole endpoints only, here, or are we also talking about properties which those endpoints return?

As an example, one of the biggest blockers for me and my team is the fact that we’re unable to easily pull a list of all of the labels on a given space, which is one call in V1, but would be a completely manual process in V2. There’s an official issue for it already, and in theory it’s being worked on, so that’s fine - but to me I’d be surprised if that use-case was… surprising. I can’t speak for what others are blocked by, but this is the one that stands out to me.
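For context, the V.2 workaround for that use case has the classic N+1 shape. A sketch, with illustrative paths and a hypothetical fetch() helper standing in for a real HTTP GET:

```python
# Sketch of the N+1 shape of the v2 workaround for "all labels on a
# space": one call to list the pages, then one call per page for its
# labels. Paths are illustrative; fetch() is a hypothetical helper
# standing in for a real HTTP GET (pagination omitted).

def space_labels_v2(fetch, space_id):
    labels = set()
    pages = fetch(f"/wiki/api/v2/spaces/{space_id}/pages")  # 1 call
    for page in pages:                                      # + N calls
        labels.update(fetch(f"/wiki/api/v2/pages/{page['id']}/labels"))
    return labels
```

For a space with thousands of pages, that is thousands of requests to replace what was a single V.1 call, which is why this gap stands out.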

I can understand the move from 1-2 not being a complete 1:1 with endpoints - that makes sense. But when you say that the team believes coverage is more or less complete, are we talking about the list of endpoints in general, or does the team believe that their coverage of all of the properties/functionalities of those endpoints is essentially complete as well?


Yes, I meant the parameters (in the OpenAPI sense) which appear in paths and query strings and affect how the resource is rendered and used by apps.

RFC-19 was an assertion of both endpoint and functional coverage, and an ask to point out gaps, which are now being tracked as CONFCLOUD bugs.

Reading CONFCLOUD-76344, I can see your point. From the set I was more aware of, I had observed more subtle interactions, like where expand was used to cleverly organize content & metadata. In the redesign, those concepts are intended to have a cleaner split. Perhaps that’s why this straightforward query of labels was overlooked: it’s not exactly surprise (which was my choice of words) but maybe a complexity with an unclear solution in the new design paradigm?

In any case, I’d like to understand the underlying assertion that @JordanBurrows and @levente.szabo are making here. I was merely explaining our reasoning, but maybe we’re wrong? Are the gaps so numerous and apparent to you both that we should be looking closer at the “hold out” usage of v1 APIs? Are we talking about 1 or 2 endpoints, a dozen, more?

Hi @ibuchanan

Due to the new API not being a 1:1 replacement (particularly with different concepts, primary keys and object shapes), it is difficult to partially migrate to the V.2 API while leaving other calls with V.1.

From a shipping-in-production perspective, I am a 100% holdout on all API V.2 endpoints without exception. This is because not all of the APIs I need are available yet, and existing V.2 migration work will stay on a branch until everything can be migrated. If other people are following similar models, you aren’t necessarily going to find any smoking guns in current endpoint usage stats.

To put it more directly, it’s hard to swap out only one API call without also swapping out all of them. As mentioned above, doing a partial lift of the API means juggling between different, incompatible shapes and PKs at various points in the app. While this is not completely impossible to manage, it generates a lot of throwaway work, and it increases risk and testing burden. It would be so much cleaner if we could just do the migration when everything is ready.

Unfortunately, the clock is ticking while we are waiting for these remaining APIs to be implemented, without any clear guidance as to delivery dates for promised or in-progress endpoints. We don’t really have any updated guidance on how receptive Atlassian might be to extending that deadline either, or making it per-endpoint, although I am hoping that we will see more once Tim gets back from OoO.

As I think I have also mentioned before, the question of sufficient API coverage also needs to take performance into account. Even if the new V.2 APIs have theoretical 100% coverage of V.1 features, this doesn’t work in a practical sense if the V.2 equivalent of a V.1 paradigm takes 500+% longer to execute.


Yes. This confirms my point that using a v1 API is not a clear signal for that endpoint; hence, we could not open bugs just because they continue to be used.