Major (upcoming?) UI change in how issue content is accessible to users?

Hi everyone,

I was wondering if this change was announced anywhere, because it seems it might have quite an impact on app discovery for end users. Currently, our app uses the issueContent module quite extensively, so up until today, for customers who use every feature of our app, the issue looked like this:

Now in some of our instances the UI changed to something like this, which is a lot less accessible:

As of this morning it seems to be back to the previous version, but we are still a bit concerned that this change will come after all at some point. Will there be any kind of RFC for this? We'd definitely need to look into educating users about this change…

Thanks
Tobias

8 Likes

And it’s live again… @ibuchanan Is it possible to get any more info about this change?

2 Likes

Wouldn’t it be possible to make the tooltip the actual button text? This is very inconvenient behavior…
image

1 Like

@tobias.viehweger,

Thanks for prompting. I have some information about this change. I’ll keep it brief here with the intent to work toward an RFC with the Product Manager, where the driver of the feature can better represent what’s happening.

As Atlassian was adding more quick actions to that bar, your apps were getting pushed to the right and into the ellipsis menu. Furthermore, our data showed that the more quick actions we added, the less users interacted with them.

This is currently running as an experiment, which you were lucky (or unlucky?) enough to see. That means there is currently more than one variation that customers might see. The expectation is that customers who object should open customer support tickets. So, both as an app vendor and a power user, I would recommend the same path until there is an RFC.

Experiments like this don’t have a clean fit with our existing processes like RFC. As such, we (developer experience) dropped the ball in navigating that concern when the Product Manager asked what to do. I’ll take this opportunity to update our internal RFC docs to better reflect how to represent experiments where multiple feature flags will run at the same time.

Please be patient with the Product Manager as we work to get that RFC out. I believe “better late than never” for engaging on an RFC, but it’s my fault that it’s late in the first place.

Edit: Removed reference to RFC-45. Different PMs, different projects, different UI changes. Sorry if the mention confused anyone.

3 Likes

@tobias.viehweger,

I’ve synced up with the Product Manager. The RFC will be coming in the next few weeks, before the change begins to roll out at large scale. The experiment is still running, so you or your customers may see it at random.

If the data comes back showing improvement, then there is a process to “productionize” a change. That’s what I expect to be part of the usual lifecycle.

One change that I’ll be proposing is to exclude the dev canary program (DCP) from experiments. My reasoning is that DCP should be a reproducible state, not subject to cohort randomization. Experiments, by contrast, may have multiple variations and are not yet a firm solution. Most variations, and many experiments broadly, may never reach production. So my assertion is that you shouldn’t have to worry about them until you see a changelog, which will be the signal that a real change (not an experiment) is on its way. Does that seem right?

Hi Ian,

thanks for investigating further! I’d like to touch on two things: one related to experiments/DCP, and one related to the actual change (I’ll cross-post the latter in the RFC once it’s live).

Experiments for DCP

That does not seem right to me, I’m afraid; I’d say it’s rather the opposite. Being enrolled is what allowed us to raise this change with you. It would have flown under the radar otherwise, and a broader rollout decision on your side would have caught us by surprise. I’d strongly advocate for keeping experiments running on DCP instances, and would even go so far as to say: enroll them in all the experiments. That’s what they are meant for, imho.

What I would appreciate is some kind of information about running experiments, or any way to figure out whether a change is permanent or only an experiment, so we know whether we need to adjust our documentation, for example. I understand there might be no interest in providing a way to see running experiments & EAPs for an instance, though I feel that would add a lot of value.

The UI change itself
I understand you are looking to declutter this area, and I guess that is fine (though I would have hoped for more customization options for users, e.g. pinning of buttons, so users can decide on their preferred “one-click” actions). What I would hope to get fixed, though, is the naming of the buttons, since that seems to have changed and could confuse users. Our issueContent module looks like this:

"jiraIssueContents": [{
    "key": "teams-conversation-content",
    "name": {
        "value": "Microsoft Teams",
        "i18n": "teams.issueContentTeamsQuickButton"
    },
    "tooltip": {
        "value": "Start conversation",
        "i18n": "teams.startConversation"
    }
}]

The current behavior is that the “name” attribute controls the label on the issue content area itself:
image

The “tooltip” attribute (though the naming is dumb) controls the “action” button above:
image

However, in the new UI (with the “Apps” dropdown), the buttons in the dropdown used the “name” attribute instead of the “tooltip” attribute. That would be our biggest ask: restore the labels so that existing users who are looking for “Start conversation” aren’t confused.
image
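To spell out the mismatch described above, here is a minimal sketch in TypeScript. The helper functions and interface are hypothetical (they model the observed behavior, not an actual Jira API); only the field values come from the descriptor shown earlier:

```typescript
// Hypothetical model of the two descriptor fields discussed above.
interface IssueContentModule {
  name: { value: string };    // labels the issue content panel itself
  tooltip: { value: string }; // labels the quick-action button (today)
}

const teamsModule: IssueContentModule = {
  name: { value: "Microsoft Teams" },
  tooltip: { value: "Start conversation" },
};

// Current UI: the quick-action button shows the "tooltip" text.
function quickActionLabel(m: IssueContentModule): string {
  return m.tooltip.value;
}

// New "Apps" dropdown, as observed in the experiment: entries
// show the "name" text instead, so the familiar label disappears.
function appsDropdownLabel(m: IssueContentModule): string {
  return m.name.value;
}
```

Under this model, a user trained to click “Start conversation” would only find “Microsoft Teams” in the new dropdown, which is the confusion being raised.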

Thanks
Tobias

1 Like

+1 for including dev instances in experiments. I would even go further: for changes that impact apps, developers should have explicit control over feature flags/variants, and experiments should be actively communicated.

Whilst marketplace partners will not be giving you feedback on customer usage, we can give you other valuable feedback, which you should take into consideration in your experiments. Especially if the outcome of your experiment is based on “will customers complain, yes or no”, because in many situations those complaints will reach partners before Atlassian (“I can’t find your app”).

4 Likes

@tobias.viehweger and @remie,

While I don’t agree that DCP should be a place for experiments, given both your feedback we can keep DCP enrolled as a target for experiments. If we need to change the strategy later, we can revisit. However, I cannot change what experiments are or how they work.

You both ask for information about running experiments, but that’s not something Atlassian will commit to. Experiments don’t belong in the changelog if they are not going through to full roll-out, and there isn’t any appetite to create “comms for experiments” generally.

Experiments are stochastic, which means targeting DCP will not guarantee seeing any variations. If you do see a variation without comms, I don’t see how you could know what kind of data the experimenters are looking for. As such, my broad guidance is that customer support will typically be considered a “guardrail”. You are always welcome to post for awareness here on the dev community, but please also use the official customer support channel to get the fastest result.

There is an interest, and a running program, to provide visibility & control over feature flags. However, we cannot turn attention to experiments until after we have solved DCP & Early Access Programs. Even then, I doubt we could offer control as fine-grained as per-variant, because of their stochastic nature. We’ll know more about what’s possible by implementing the other controls first.

1 Like

The problem is that sometimes customers will turn to app vendor support channels, who then cannot replicate the issue. This results in vendors spending a lot of time trying to find root causes when the issue is actually caused by a variation/experiment that we did not know about.

I really think A/B testing and validating ideas is very important, and I can see how Atlassian wants to limit the overhead of running those experiments. But as soon as this starts impacting 3rd parties, I do think Atlassian has a responsibility to get those 3rd parties involved.

As always, you can’t have your cake and eat it. Atlassian chose to open up their products to 3rd parties, for better (a multi-billion-dollar business) and for worse (responsible stewardship of your products).

3 Likes

For what it’s worth, I appreciate this initiative; it will be an improvement even if it does not include experiments at first, or ever. Thanks for the update!

1 Like