Developer Preview of Referentiality within Confluence Released!

As a developer, I disagree with this statement. Having access to a working piece of source code can be helpful if the documentation is incomplete (or consists only of some animated GIFs), but proper documentation is always the better and more efficient way to understand an API.

Usage (Connect)

Here is a rough overview of how this functionality seems to be intended to be used in Connect:

Emitting data

To emit data, macros use AP.dataProvider.emitToDataProvider(). This has to be called whenever the data changes, and I strongly assume that it also has to be called when the macro initially renders. Here is an example:

AP.dataProvider.emitToDataProvider({
    eventData: {
        'table-json': [
            ["Year", "Web", "TV", "Print"],
            [2012, 8, 153, 121],
            [2013, 41, 75, 124]
        ]
    }
});

In atlassian-connect.json, macros that emit data have to define a property "refDataSchema": { "outputType": "table-json" }. The possible output types are table-json and table-adf. My guess is that the table-json format can be used for simple data, while the table-adf format has to be used for rich-text content. The data passed to AP.dataProvider.emitToDataProvider() has to match the output type defined for the macro.
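Based on that reading, the descriptor entry for an emitting macro might look something like the sketch below. Everything except the refDataSchema property is a placeholder from a typical dynamicContentMacros module; the key, name, and url are my own invented examples, not values from the preview:

```json
{
  "modules": {
    "dynamicContentMacros": [
      {
        "key": "my-table-macro",
        "name": { "value": "My Table Macro" },
        "url": "/table-macro",
        "refDataSchema": { "outputType": "table-json" }
      }
    ]
  }
}
```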

Listening to data

To start listening to data, a macro has to run AP.register({ data_provider: (data) => { } }). The data parameter will apparently have a slightly different shape than the emitted object: the emitted table will be available as data.eventData[0]['table-json'] rather than data.eventData['table-json']. I’m not sure whether the eventData array can contain multiple objects or always just one.
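To make that shape difference concrete, here is a small sketch. The extractTable helper is my own hypothetical function (not part of the Connect API) that normalizes the wrapped payload; the AP.register call only works inside a Connect iframe where the AP global exists, so it is guarded here:

```javascript
// Hypothetical helper (an assumption, not part of the Connect API):
// on the listener side the emitted table apparently arrives wrapped in
// an eventData ARRAY, so we read the first entry and look up the
// output type key on it.
function extractTable(data, outputType) {
  const entry = Array.isArray(data.eventData)
    ? data.eventData[0]
    : data.eventData;
  return entry ? entry[outputType] : undefined;
}

// Sketch of the registration call; AP is only available inside a
// Connect iframe, hence the guard.
if (typeof AP !== 'undefined') {
  AP.register({
    data_provider: (data) => {
      const table = extractTable(data, 'table-json');
      if (table) {
        // ...render the received table with your own code here...
      }
    }
  });
}
```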

To listen to data, the following has to be added to the macro definition in atlassian-connect.json: "refDataSchema": { "inputType": "table-json" }. My understanding is that macros can only be linked with other macros whose outputType matches their inputType.
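Mirroring the emitting side, the subscribing macro's descriptor entry might then look like this sketch. Again, only the refDataSchema property comes from the preview; key, name, and url are invented placeholders:

```json
{
  "key": "my-chart-macro",
  "name": { "value": "My Chart Macro" },
  "url": "/chart-macro",
  "refDataSchema": { "inputType": "table-json" }
}
```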

Open questions

I haven’t had the time to try this out yet and was just trying to understand how this is supposed to be used from the source code that you have provided. The following questions have come up for me:

  • What if I want to write a macro that supports both table-json and table-adf as input formats?
  • What if the emitting macro emits data before the subscribing macro is rendered? Will the subscribing macro receive the history of emitted events as soon as it is rendered?
  • Will this mechanism work both inside the editor and on the view page?
  • Is it also possible to access the emitted data from a macro editor? Since the macro always emits a whole table, I assume there will be many use cases where a user has to configure which rows/columns the subscribing macro should handle, and in what way.
  • How will this work in static render modes?
  • How is the relationship between the macros persisted in the storage format of the page?

Overall, this looks like a very useful mechanism that will enable many interesting features. The chart example you have given makes it easy to imagine how powerful this can become. However, I don’t understand how this is supposed to solve the nested-macro use case, or how it is even related to it.

Since I am currently working on a macro that renders complex tables, I have quite a good use case for this functionality. One thing that worries me a bit is the workload implied by the fact that this feature basically means implementing another render mode. Right now I already have to implement the dynamic render mode (a React app) and the static render mode (storage format), and now I would also have to implement an ADF render mode to support this. It would be much less work if I could either emit storage format here or use ADF for the static render mode.
