Hello,
After looking at all three benchmark Endpoints, here is our feedback on the new data provided:
For the churn Endpoint:
- On top of the previous detailed feedback posted here, we would like to add that the isolated churn rate is a very interesting value for us.
However, it is currently flawed and unusable on our side because annual licenses are considered potentially churned every month (which is not the case), which artificially inflates the churn values provided.
If Atlassian considers that the normal way churn should be calculated, we would like to suggest adding a new field to the Licenses report Endpoint giving the reason for a license cancellation (app cancellation, parent product cancellation, …) so that vendors could calculate churn themselves via this Endpoint, the way they consider right: notably with or without annual licenses considered as potential churn every month, and with or without parent product churn included (see the sketch below).
- Another point: the churn announcement post specifies that the app is compared with "all cloud apps on Marketplace". Does that include free apps? If so, that can change the perspective on the metric a lot, as free apps have less reason to be uninstalled and probably shouldn't be compared with paid apps.
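To illustrate the kind of vendor-side calculation we have in mind, here is a minimal sketch assuming such a cancellation-reason field existed. The `billingPeriod` and `cancellationReason` field names and their values are hypothetical, purely for illustration, and are not part of the current API:

```python
# Hypothetical vendor-side churn calculation, assuming the Licenses report
# exposed a cancellation reason. Field names/values are illustrative only.
EXCLUDED_REASONS = {"parent_product_cancellation"}

def monthly_churn_rate(licenses: list[dict]) -> float:
    # Only monthly licenses are at risk of churning in a given month;
    # annual licenses are excluded instead of being counted as
    # potentially churned every month.
    at_risk = [l for l in licenses if l.get("billingPeriod") == "monthly"]
    churned = [
        l for l in at_risk
        if l.get("cancellationReason")
        and l["cancellationReason"] not in EXCLUDED_REASONS
    ]
    return len(churned) / len(at_risk) if at_risk else 0.0
```

Each vendor could then include or exclude reasons (or annual licenses) according to their own definition of churn.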
For the evaluations and sales Endpoints:
- As with churn, it would be nice to be compared against more specific categories of apps rather than "everything" by default.
- If I understand your sales example correctly, the evaluation comparison is made against the global Marketplace progression for the parent product, which is problematic because the number of apps available for the parent product is not fixed. As more apps are released, a natural growth of total evaluations/sales occurs. To keep the value comparable over time, it should be divided by, or made relative to, the number of apps available (see the first sketch after this list).
- Month-over-month metrics have little to no value on our side compared to year-to-date values, mainly because the metric is easily misread: you can have a very good month and still be negative on that metric if the previous month was just a little bit better. The data curve then plummets, which is counter-intuitive and requires internal explanations on how to read that metric compared to all the others. Overall, year-to-date values seem easier to comprehend and more relevant (a short numeric illustration follows this list).
- The sales Endpoint has an "addon" parameter that is supposed to let us retrieve benchmark values for a specific app. According to our tests it simply does not work and always returns the "All apps" benchmark.
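On the comparability point above, a minimal sketch of the normalisation we are suggesting, assuming the app count per month were available (it currently is not exposed by the API):

```python
def relative_benchmark(total_evaluations: int, app_count: int) -> float:
    """Average evaluations per listed app for a given month.

    Dividing by the number of apps removes the natural growth that comes
    purely from more apps being released on the Marketplace.
    """
    return total_evaluations / app_count

# Illustrative figures: raw totals grow +10%, but per app the market is flat.
relative_benchmark(10_000, 1_000)  # 10.0
relative_benchmark(11_000, 1_100)  # 10.0
```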
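And a tiny illustration (invented figures) of why month-over-month is confusing while year-to-date is not:

```python
monthly_sales = [10_000, 12_000, 11_500]  # Jan, Feb, Mar (invented figures)

# Month over month: March is the second-best month ever, yet the metric
# goes negative simply because February was slightly better.
mom = (monthly_sales[-1] - monthly_sales[-2]) / monthly_sales[-2]  # -0.0417

# Year to date: cumulative, always moves in the intuitive direction.
ytd = sum(monthly_sales)  # 33_500
```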
Additional feedback:
- Inconsistencies between Endpoints of the API are annoying to deal with and have no reason to exist. Specific examples for the benchmark metrics: the churn Endpoint returns the app key as "appKey" but the evaluations/sales Endpoints return it as "addonKey"; the churn Endpoint returns the date as two separate fields, "year" and "month" (we can't plot data with that), while the evaluations/sales Endpoints return a standard "date". A sketch of the shim this forces on vendors follows below.
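For reference, this is the kind of normalisation shim every vendor currently has to write. It assumes the evaluations/sales "date" field is ISO-formatted (YYYY-MM-DD); the field names come straight from the responses described above:

```python
from datetime import date

def normalise(record: dict) -> dict:
    """Map either benchmark response shape onto one internal shape."""
    # Churn Endpoint: "appKey" plus separate "year"/"month" fields.
    # Evaluations/sales Endpoints: "addonKey" plus a single "date" string.
    key = record.get("appKey") or record.get("addonKey")
    if "date" in record:
        day = date.fromisoformat(record["date"])  # assumes ISO format
    else:
        day = date(int(record["year"]), int(record["month"]), 1)
    return {"appKey": key, "date": day.isoformat()}
```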
- The lack of definitions for Marketplace metrics is a big pain point for data analysis. A big step up would be to document the precise calculation method/definition of each field in the Marketplace API documentation. Currently there is not even a list of the fields returned by the Endpoints, which should be basic information in any API documentation. It is probably very hard for new vendors (and still is for us) to maintain internal knowledge of the Marketplace API's details. There are already a lot of Endpoints available; bringing clarity to the data provided would be more helpful at this point than adding more and more Endpoints.