Hey Atlassian, let's improve the RFC process!

Hey Atlassian,

This is a message to the extensibility and ecosystem team:

I am a big fan of the RFC process that was launched last year, and I think it has created a number of great outcomes. That notwithstanding, a number of RFCs have yielded less-than-desirable results. This is not the fault of the RFC process itself, but more (I think) an indirect result of the organizational culture, combined with a general lack of awareness that the ecosystem exists. I suggest a process improvement to produce better outcomes.

A recurring issue is that a lot of Atlassian product changes are not necessarily designed with the ecosystem in mind. It is fantastic that the RFCs are now baked into the internal design process, but I have seen issues such as:

  • RFCs being “too little, too late” in terms of when they are deployed in the design process
  • authors tossing their RFC onto CDAC but then seemingly forgetting that it exists (eliminating most opportunity for discussion)
  • RFCs that do not actually ask for feedback on design decisions, but which are seemingly announcements disguised as RFCs
  • community feedback being consistently ignored.

I understand that Atlassian is a data-driven company and that it is probably easy to forget about the ecosystem when Atlassians are focusing on their KPIs.

So, let’s fix it. I propose that RFC outcomes be included in your KPIs. This should certainly be done at the individual level for the RFC author (creating accountability for the CDAC posts), but the metrics should also be rolled up to team and BU level, providing visibility to leadership and to the ecosystem team.

How do you track RFC outcomes? There are many ways, but one is to create a poll after each RFC is closed and ask the ecosystem to vote. Maybe limit it to participants in the thread, maybe average all responses from a single ecosystem partner into one weighted vote, or whatever else you think you need.

You could ask questions like:

  • Was the RFC posted in a timely manner (before major decisions were committed, allowing the ecosystem voice to be heard before it is too late)?
  • Did the RFC include an appropriate set of asks for the ecosystem? Did the asks actually focus on how to drive the internal design process, rather than being a poll on the outcomes of a design that has already been decided upon?
  • Did the RFC author actively engage with the ecosystem during the discussion period?
  • Do you feel that comments relating to the “asks” were heard and acted upon by Atlassian?

Use all of this to calculate a score, remind Atlassians that they’re being evaluated on it, and weight the result internally so that Atlassians will actually care. Maybe even update the ecosystem every six months on how things are trending (similar to what the Marketplace team already does), and use the results to drive further process improvement.
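To make the aggregation concrete, here is a minimal sketch of one way the per-partner weighting could work. Everything here is hypothetical: the 1–5 rating scale, the function name, and the equal-weight-per-partner rule are illustrative assumptions, not anything Atlassian has committed to.

```python
from collections import defaultdict
from statistics import mean

def rfc_outcome_score(responses):
    """Aggregate poll responses into a single RFC outcome score.

    `responses` is a list of (partner, rating) pairs, where rating is
    on a hypothetical 1-5 scale. All responses from one partner are
    first averaged into a single vote, so a partner with many
    respondents cannot dominate the result.
    """
    by_partner = defaultdict(list)
    for partner, rating in responses:
        by_partner[partner].append(rating)
    # One weighted vote per partner, then a simple mean across partners.
    partner_votes = [mean(ratings) for ratings in by_partner.values()]
    return round(mean(partner_votes), 2)

# Example: partner A submits three ratings, partner B submits one.
score = rfc_outcome_score([("A", 4), ("A", 5), ("A", 3), ("B", 2)])
print(score)  # A's ratings average to 4.0, B's to 2.0, so the score is 3.0
```

Scores like this could then be averaged again per team or BU to produce the roll-up metrics described above.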

I do not know if this is even possible, but after all of the great work of Ian and team to create the RFC system, I hope that this would be an incremental improvement that would require fewer boulders to be moved.

58 Likes

@scott.dudley - Thank you for posting this. We have been talking about many of these items internally already, and I agree with you that we have an opportunity to make some changes to improve the process. I will be following this thread along with our team, and we will bring some of our ideas back here.

7 Likes

Be careful, though, that the goal doesn’t inadvertently become “maximize the number of RFCs”, because I think the number of major changes already in play from RFCs is far too overwhelming and destabilizing.

It’s clear that Atlassian faces market threats from more nimble competitors and the exponential pace of AI. One can imagine within the next ~2 years that AI will be capable of generating fully functional UIs where the only moat remaining is data lock-in.

In the face of that uncertainty the strategic advantage will be the unexpected emergence that arises from the 25,000+ app developers building on this platform. But if that platform is constantly being changed, deprecated and broken then it becomes a complicated and fragile machine instead of a complex and resilient living ecosystem.

3 Likes

Hi @scott.dudley,

I appreciate you taking the time and effort to post this thoughtful feedback and suggestions. As @ChrisHemphill1 mentioned, we have been making improvements to our process in response to recent events.

Our RFC process is defined in a DAC documentation set titled Extensibility Standards, which is only visible to Atlassian staff. These standards contain guides and rules covering API Design, API Change Management and API Collaboration, noting that we have a very broad definition of API (i.e., think dependency, not REST).

The standards cover the entire lifecycle of an API, from inception to retirement. The first two groups of standards are API First and RFCs. API First comprises (a) a guide explaining the benefits of creating APIs rather than directly implementing features, and (b) an assessment framework that helps teams gauge the Ecosystem impact of their proposed changes and decide whether an alternate approach of creating extensibility would unlock the desired features via our Ecosystem.

In response to recent events, we’ve improved the RFC practices to require an API First assessment before an RFC if the project meets certain conditions. We’ve also introduced additional rules to tighten up the RFC practice such as ensuring RFCs do not prescribe a fixed path forward. Further, we’ve made changes to highlight the need for the RFC author to be committed to providing timely responses to RFC feedback, although note that it may not always be practical to provide individual replies to each response to an RFC.

As you can imagine, there are a number of challenges in relation to ensuring our R&D teams adhere to these standards. These include:

  • There are numerous R&D teams within Atlassian to influence, yet our organisation structure is such that we do not have any authority over them;
  • Our R&D teams are always trying to move as swiftly as possible to deliver value to customers; and
  • Our R&D product teams are often focussed on the needs of customers over Ecosystem partners which sometimes results in a narrower view than we would like.

Your suggestions about KPIs and outcomes are great, although it’s probably worth decoupling them. In the past, we have explored measurement systems to hold R&D teams accountable, but we haven’t yet been able to land on a solution that strikes the right balance between Ecosystem and customer needs at a level that applies across Atlassian. The systems we’ve explored include a scorecarding solution and extensibility key result targets. Your suggestion about measuring the outcomes of RFCs seems to have a lot of merit, so I will take an action to explore it further.

FAQ

Q: Why aren’t the Extensibility Standards publicly available?
A: Many R&D teams within Atlassian are not yet aware of the standards, so there is a concern that it wouldn’t make sense to advertise the practices as Atlassian standards when they are not widely recognised as such.

Q: Will you make the Extensibility Standards publicly available?
A: This is being considered.

Regards,
Dugald

3 Likes


Wouldn’t that be exactly the solution @scott.dudley and the wider vendor ecosystem are asking for?
It’s time that Atlassian acknowledges the importance of the MP R&D ecosystem down to the Atlassian R&D team level, instead of creating a buffer zone (RFCs) that often serves as a “too little, too late” notification feed!

1 Like

Hi Dugald,

Thank you so much for your detailed response! I have a few points to add:

Without any authority at all, it does not seem like there is any real path to accountability. I believe the ecosystem in general would agree that this is in itself a problem. (This post is, incidentally, the first of mine to receive 50+ hearts.)

If an RFC author tosses an RFC over the wall and then (say) ignores it, it does not seem like there are any real consequences (at least from the outside).

The team will presumably still meet their KPIs and get whatever internal incentives are offered for doing so. Indeed, interacting with the ecosystem may even be perceived as a suboptimal use of time, because it gets in the way of work that is actually measured and that contributes directly to how their “success” is defined.

I also do not claim to know anything about the company structure, authority, evaluation or compensation schemes. I will just throw out the open question: is there any way to have “ecosystem” included as a measurable success component of [individuals, projects, teams, business units, or whatever seems appropriate]? Even if your department has no authority over the people, is there no way to fit the ecosystem “score” into official evaluations at some level? (Or can you get an ecosystem team “embed” in each product team?)

And even if this is not something that can currently be included in official evaluations due to political issues, can the ecosystem team collect data and publish it anyway (to both management and ecosystem partners, for transparency), with the hopes that someone can eventually be convinced to make it part of the official success criteria? Maybe this is what you meant by “decoupling”. (I think that making it part of the success criteria is critical for the ecosystem, but I infer that it may not be politically practical yet.)

For example, a “name-and-(gently)-shame” dashboard showing the relative extensibility “success” scores of the various product teams might be a useful incentive and a good future talking point, especially if it can be broken down on an RFC-by-RFC basis, so people can see trends and which projects are doing a better job.

The original ecosystem extensibility design (and particularly the Cloud/Connect side of things) was a joy to work with. With some limited exceptions, it covered the entire breadth of the usable product. The design was cohesive and its elements worked well together.

This is in danger of being destroyed by a thousand paper cuts, because new features are continually rolled out without considering the ecosystem. Too often, the bigger picture is not taken into account, and features ship with missing pieces that are detrimental to the ecosystem.

When you write “our R&D teams are always trying to move as swiftly as possible to deliver value to customers”, the unwritten part at the end seems to be “often by cutting corners on ecosystem interoperability”.

I would argue that shipping a feature that does not support the ecosystem is, by definition, not complete. You could also ship those same features even more swiftly (and deliver the feature to customers sooner) without building in security or reliability, but those factors are deemed critical. Why not the ecosystem too? Secure and reliable products need to be planned around those design tenets from the beginning. Same for the ecosystem.

The absence of this is leaving the ecosystem with a patchwork of interface points that partially (but not entirely) cover the products, designs that disrupt existing apps, make-work for vendors, and an overall standard that feels like it slips with each additional change that rolls out.

I recognize that I am preaching to the choir with all of the above, since you are part of the ecosystem team and you are already fighting for us. I just have trouble seeing how to make significant changes in those trends until accountability is built into the design process.

Cheers,
Scott

6 Likes

Hi @scott.dudley ,

I am quite aligned with your additional points, observations and tactics. I recognise the need to measure and track how teams deliver ecosystem opportunities and impact, and the need for stronger standards and increased accountability. I also understand the problems caused by our Cloud tech stacks not baking in extensibility for third-party developers the way our Data Center tech stack does. I believe that solving these issues will lead to better long-term outcomes for both our customers and partners.

Regards,
Dugald

1 Like

Who is it that does have authority and refuses to use it? That’s where accountability falls.