Marketplace benchmark Endpoints feedback

Hello,

After looking at all three benchmark endpoints, here is our feedback on the new data provided:

For the churn endpoint:

  • On top of the previous detailed feedback posted here, we would like to add that the isolated churn rate is a very interesting value for us.
    However, it is currently flawed and unusable on our side because annual licenses are considered potentially churned every month (which is not the case), which artificially inflates the churn values provided.
    If Atlassian considers this the normal way churn should be calculated, we would suggest adding a new field to the Licenses report endpoint that gives the reason for a license's cancellation (app cancellation, parent product cancellation, …), so that vendors could calculate churn themselves via this endpoint, the way they consider right: notably with or without annual licenses counted as potential churn every month, and with or without parent product churn included (see the sketch after this list).
  • Another point: the churn announcement post specifies that the app is compared with “all cloud apps on Marketplace”; does that include free apps? If so, that changes the perspective on the metric a lot, as free apps have less reason to be uninstalled and probably shouldn’t be compared with paid apps.
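
To make the suggestion concrete, here is a minimal sketch of how we could compute churn ourselves from the Licenses report, assuming a hypothetical cancellationReason field; all field names here are illustrative, not actual Marketplace API fields:

```python
# Minimal sketch: computing churn from Licenses report rows, assuming a
# hypothetical "cancellationReason" field. Field names are illustrative,
# not actual Marketplace API fields.

def monthly_churn_rate(rows, month, *, count_annual=False, count_parent_product=True):
    """Churn rate for `month` ("YYYY-MM"), configurable per vendor's own definition."""
    active = churned = 0
    for row in rows:
        if row["month"] != month:
            continue
        if row["billingPeriod"] == "Annual" and not count_annual:
            continue  # annual licenses can only churn at their end date, not every month
        active += 1
        reason = row.get("cancellationReason")  # the suggested new field
        if reason == "app cancellation" or (
            reason == "parent product cancellation" and count_parent_product
        ):
            churned += 1
    return churned / active if active else 0.0

# Invented data: two monthly licenses (one cancelled) and one annual license.
rows = [
    {"month": "2023-01", "billingPeriod": "Monthly", "cancellationReason": "app cancellation"},
    {"month": "2023-01", "billingPeriod": "Monthly", "cancellationReason": None},
    {"month": "2023-01", "billingPeriod": "Annual", "cancellationReason": None},
]
print(monthly_churn_rate(rows, "2023-01"))  # 0.5: the annual license is excluded
```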

For the evaluations and sales endpoints:

  • As with churn, it would be nice to be compared against more specific categories of apps rather than “everything” by default.
  • The evaluation comparison, if we understand your sales example correctly, is made against the global Marketplace progression for the parent product, which is problematic because the number of apps available for the parent product is not fixed. As more apps are released, a natural progression of the total evaluations/sales occurs. To keep the value comparable over time, it should be divided by, or made relative to, the number of apps available (see the sketch after this list).
  • Month-over-month metrics have little to no value on our side compared to year-to-date values, mainly because the metric is easily misread: you can have a very good month and still be negative on that metric if the previous month was just a little better. The data curve then plummets, which is counter-intuitive and requires internal explanations on how to read that metric properly compared to all the others. Overall, year-to-date values seem easier to comprehend and more relevant.
  • The sales endpoint has an “addon” parameter that is supposed to let us get benchmark values for a specific app; according to our tests it simply does not work and always returns the “All apps” benchmark.
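
To make the normalisation point concrete, here is a minimal sketch of what we have in mind (all numbers are invented):

```python
def normalised_benchmark(total_evaluations_by_month, app_count_by_month):
    """Divide the Marketplace-wide total by the number of listed apps so the
    benchmark stays comparable as new apps are released."""
    return {
        month: total / app_count_by_month[month]
        for month, total in total_evaluations_by_month.items()
    }

# Invented numbers: raw totals grow 10%, but per-app evaluations are flat,
# because the app count also grew 10%.
totals = {"2022-01": 1000, "2022-02": 1100}
apps = {"2022-01": 500, "2022-02": 550}
print(normalised_benchmark(totals, apps))  # {'2022-01': 2.0, '2022-02': 2.0}
```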

Additional feedback:

  • Inconsistencies between endpoints of the API are annoying to deal with and have no reason to exist. Specific examples for the benchmark metrics: the churn endpoint returns the app key as “appKey”, but the evaluations/sales endpoints return it as “addonKey”; the churn endpoint returns the date as two fields, “year” and “month” (we can’t plot data with that), but the evaluations/sales endpoints return a standard “date”. See the normalisation sketch after this list.
  • The lack of definitions for Marketplace metrics is a big pain point for data analysis. A big step up would be to add precise calculation methods/definitions for each field to the Marketplace API documentation. Currently there is not even a list of the fields returned by the endpoints, which should be basic information in any API documentation. It is probably very hard for new vendors (and still is for us) to maintain internal knowledge of the details of the Marketplace API. There are a lot of endpoints available right now, and bringing clarification to the data provided would be more helpful at this point than adding more and more endpoints.
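
To illustrate why this matters, here is roughly the shim a vendor ends up writing to merge the two response shapes (the appKey/addonKey and date/year/month names come from the responses described above; the value fields in the example rows are invented):

```python
from datetime import date

def normalise_benchmark_row(row):
    """Map a churn or evaluations/sales benchmark row onto one common shape."""
    # App key: "appKey" on the churn endpoint, "addonKey" on evaluations/sales.
    app_key = row.get("appKey") or row.get("addonKey")
    # Date: separate "year"/"month" integers on churn, an ISO "date" string
    # on evaluations/sales.
    if "date" in row:
        when = date.fromisoformat(row["date"])
    else:
        when = date(row["year"], row["month"], 1)
    rest = {k: v for k, v in row.items()
            if k not in ("appKey", "addonKey", "date", "year", "month")}
    return {"appKey": app_key, "date": when, **rest}

# Invented value fields (churnRate, evaluations) just for the example.
print(normalise_benchmark_row({"appKey": "com.example.app", "year": 2023, "month": 1, "churnRate": 0.02}))
print(normalise_benchmark_row({"addonKey": "com.example.app", "date": "2023-01-01", "evaluations": 42}))
```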

Hi @EliottAudry,

Thank you for the feedback. Sorry for the delayed response.

Regarding the churn endpoint:

  • Churn is calculated based on the license end date. Hence, annual licenses are considered to have churned only if they are not renewed beyond their license end date; they are not considered to have churned every month (a simplified sketch follows this list).
    E.g., say an annual license was renewed in Jan 2022 and is valid until Dec 2022. It is considered an active license across all months from Jan 2022 to Dec 2022. If this license is not renewed in Jan 2023, it is counted as a churned license.

  • Your point regarding free apps is valid; they have less reason to be uninstalled. For this reason, churn in the endpoint is calculated only for paid apps: ‘all cloud apps on marketplace’ includes only paid cloud apps.
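
In code terms, the end-date rule is roughly the following (a simplified sketch for illustration, not the production implementation):

```python
from datetime import date

def is_churned(license_end_date, renewed_end_date, month_start):
    """A license churns in the first month after its end date with no renewal.

    Simplified sketch of the end-date rule described above; not the
    production implementation.
    """
    if license_end_date >= month_start:
        return False  # still within its term: active, never "potentially churned"
    return renewed_end_date is None or renewed_end_date < month_start

# Annual license valid Jan-Dec 2022: active every month of 2022, and only
# counted as churned in Jan 2023 if it was not renewed.
assert not is_churned(date(2022, 12, 31), None, date(2022, 6, 1))
assert is_churned(date(2022, 12, 31), None, date(2023, 1, 1))
assert not is_churned(date(2022, 12, 31), date(2023, 12, 31), date(2023, 1, 1))
```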

Regarding the evaluations and sales endpoints:

  • Adding category-wise benchmarking (and more granular benchmarking wherever possible) to the endpoints is on our roadmap. We hope to build this in the near future and will keep you posted.

  • Our intention behind comparing against overall Marketplace progression is to help our partners understand whether a particular app’s growth is keeping pace with, or outgrowing, the overall Marketplace. That said, your feedback is valid, and we will try to incorporate a normalised benchmarking metric (based on the number of apps) in future iterations of the endpoints.

  • Glad to hear that you find year-to-date metrics useful. We added them for exactly that reason: addressing short-term volatility.

  • We have tested the sales endpoints with multiple scenarios and they are working as intended; an example call is shown below. Can you share more details if you are still facing this issue?
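
For reference, a request along these lines returned app-specific values in our tests (the path below is illustrative; please check the reporting API documentation for the exact route, and replace the vendor id, app key and credentials with your own):

```python
import requests

# Illustrative request only: the exact route and parameter casing are in the
# Marketplace reporting API documentation. Replace <vendorId>, <appKey> and
# the credentials with your own.
resp = requests.get(
    "https://marketplace.atlassian.com/rest/2/vendors/<vendorId>/reporting/benchmark/sales",
    params={"addon": "<appKey>"},  # requests the app-specific benchmark
    auth=("me@example.com", "<api-token>"),
)
resp.raise_for_status()
print(resp.json())  # should be the app's own benchmark, not "All apps"
```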

Regarding the additional feedback: acknowledged. It is very valuable, and we will work on it.

Thanks,
The Atlassian Marketplace Analytics Team


Greetings 🙂,

I am also having trouble finding definitions and explanations of the data available from the benchmark APIs.
My main issue is with the Get Cloud Evaluation Benchmark API: the evaluation counts I obtain from it are significantly higher than the evaluations in the Atlassian Marketplace Reports, or than simply counting distinct License IDs with status = Evaluation (roughly as in the sketch below).
Is there an explanation for this difference in values?
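
For reference, this is roughly how I count evaluations on my side from a licenses export (a sketch; the column names follow my CSV export and may differ from yours):

```python
import csv

# Sketch: count distinct evaluation licenses in a Marketplace licenses export.
# Column names ("License ID", "License Type") follow my CSV export and are
# assumptions; adjust them to your own export.
with open("licenses.csv", newline="") as f:
    evaluation_ids = {
        row["License ID"]
        for row in csv.DictReader(f)
        if row.get("License Type") == "Evaluation"
    }
print(len(evaluation_ids))
```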

Thank you in advance.
Kind regards,
Iheb Kharab