I’m making a Confluence app to transcribe video. I was thinking I would make a macro for it, with a flow like:
User runs “/transcribe” in a Confluence page
Macro receives the string “loading” from the backend and displays it until a file is provided
Macro receives the transcript and makes no further calls
Is there a way to implement this? Macros seem to constantly refresh their data; sending the full transcript on every refresh would be inefficient, and the same goes for constantly storing and retrieving it via macro storage.
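The flow above boils down to a small piece of decision logic on the macro side. As a minimal sketch (the `TranscriptState` shape is invented for illustration; a real app would read it asynchronously from Forge storage or its own backend):

```typescript
// Hypothetical state shape for the transcription job -- not a real Forge API.
type TranscriptState =
  | { status: "pending" }
  | { status: "done"; transcript: string };

// What the macro should display on each refresh.
function render(state: TranscriptState): string {
  // Until the transcription has finished, show a placeholder.
  if (state.status === "pending") return "loading";
  // Once the transcript exists, display it; ideally no further calls are made.
  return state.transcript;
}

console.log(render({ status: "pending" })); // "loading"
console.log(render({ status: "done", transcript: "hello world" })); // "hello world"
```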
Since nobody has replied, I’ll just describe the solution I’ve arrived at.
Macros don’t have a “stop running” function, and static macros aren’t the solution either.
A macro can’t update the page it is on to delete itself after the work has completed; that produces a conflict error.
Anything running on the Forge development platform is subject to runtime limitations that prevent implementing all of this on the frontend: we would hit the 25-second invocation limit, the 55-second async runtime limit, or likely the available-memory limit when uploading the videos. A URL needs to be passed instead, which won’t work for transcription APIs that expect a file upload. Even if a URL were accepted, the macro would time out waiting for the transcription to complete.
Content Actions seem more suited to this than a macro. The Content Action menu already has an “Import Word file” action, which is similar in spirit to an “Import Transcription” action, so that’s the route I’ll take.
Runtime limitations are still the same for Content Actions, so the content action will need to pass a URL to my own server, which runs the transcription calls against OpenAI. Whether that happens via a form in the modal or by passing the page body to a backend URL remains to be seen; either way, the Forge tutorial on OpenAI keyword extraction looks like it will be helpful here.
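To make the hand-off concrete, here is a sketch of the request the content action could send to my server. Everything in it (the `/transcribe` endpoint path, the payload field names) is a hypothetical shape I made up for illustration, not a real Forge or OpenAI API:

```typescript
// Shape of the request the content action hands to my own server.
// Endpoint and field names are assumptions, not an established API.
interface TranscriptionRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildTranscriptionRequest(
  serverBase: string,
  videoUrl: string,
  pageId: string
): TranscriptionRequest {
  return {
    url: `${serverBase}/transcribe`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      // URL of the video to transcribe, e.g. collected from the modal form.
      videoUrl,
      // The page the finished transcript should be written back to.
      pageId,
    }),
  };
}

const req = buildTranscriptionRequest("https://example.invalid", "https://example.invalid/talk.mp4", "12345");
console.log(req.url); // "https://example.invalid/transcribe"
```

The server then does the slow work (download, OpenAI transcription call) on its own time, so none of it counts against Forge’s invocation limits.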
Now I’m still researching how the backend server should connect to Confluence. Apparently Connect apps are meant for running your own infrastructure, rather than Forge’s FaaS approach, but they’re rumoured to eventually be sunset.
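Whichever app framework ends up hosting it, the server will have to write the transcript back through the Confluence REST API. A sketch of the update body, based on my reading of the Cloud v2 endpoint `PUT /wiki/api/v2/pages/{id}` (verify the exact shape against the official docs before relying on it):

```typescript
// Builds the JSON body for updating a Confluence page with the transcript.
// Field layout follows my reading of the Confluence Cloud v2 REST API;
// treat it as an assumption to check, not a confirmed contract.
function buildPageUpdate(
  pageId: string,
  title: string,
  currentVersion: number,
  transcriptStorageMarkup: string
): {
  id: string;
  status: string;
  title: string;
  body: { representation: string; value: string };
  version: { number: number; message: string };
} {
  return {
    id: pageId,
    status: "current",
    title,
    body: {
      // Transcript supplied as Confluence storage-format markup.
      representation: "storage",
      value: transcriptStorageMarkup,
    },
    // Confluence requires the version number to be incremented on update;
    // a stale number is exactly what produces the conflict error I hit
    // when a macro tried to edit its own page.
    version: { number: currentVersion + 1, message: "Add transcript" },
  };
}
```

Authenticating that call is the open question: with Connect the app gets its own credentials for your infra, while with Forge the remote server would need an API token or OAuth app of its own.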