Stop running a Confluence macro when output has been returned

Hello,

I’m making a Confluence app to transcribe video. I was thinking I would build it as a macro, with a flow like this:

  1. The user runs “/transcribe” on a Confluence page
  2. The macro receives the string “loading” from the backend and displays it until the transcript is ready
  3. The macro receives the transcript and makes no further calls

Is there a way to implement this? It seems like macros constantly refresh their data, so providing the full transcript on every refresh would be inefficient, and the same goes for constantly storing and retrieving it via macro storage.
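The behaviour I’m after, as a rough sketch — `fetchStatus` and `render` here are stand-ins for whatever the macro’s real data fetch and rendering would be, not Forge APIs:

```typescript
// Hypothetical status my backend would report; "loading" until the
// transcript exists, then "done" with the text attached.
type Status = { state: "loading" | "done"; transcript?: string };

// Poll the backend, showing "loading" until the transcript arrives,
// then render it once and stop calling the backend entirely (step 3).
async function renderUntilDone(
  fetchStatus: () => Promise<Status>,
  render: (text: string) => void,
  pollMs = 2000,
): Promise<void> {
  for (;;) {
    const status = await fetchStatus();
    if (status.state === "done") {
      render(status.transcript ?? "");
      return; // no further calls after this point
    }
    render("loading");
    await new Promise((r) => setTimeout(r, pollMs));
  }
}
```

Step 3 is exactly the part I can’t find in the macro model: once the final `render` has run, the macro should never call the backend again.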

Since nobody has replied, I’ll just describe the solution I’ve arrived at.

Macros don’t have any “stop running” function, and static macros are not the solution either.

A macro can’t update the page it is on to delete itself after its work has completed; attempting this returns a conflict error.

Running anything on the Forge development platform means there are runtime limitations that stop all of this being implemented on the frontend. We would hit the 25-second runtime limit, or the 55-second async runtime limit, or likely the available-memory limit when uploading the videos. A URL needs to be passed instead, which won’t work for transcription APIs. Even if a URL were possible, the macro would time out waiting for the transcription to complete.

Content Actions seem better suited to this than a macro. The Content Action menu already has an “Import Word file” action, which feels similar to an “Import Transcription” action, so that is the route I will take.

Runtime limitations are still the same for Content Actions, so the content action will need to pass a URL to my own server, which runs the transcription calls against OpenAI. Whether that URL comes from a form in the modal or from passing the page body to a backend endpoint remains to be seen; the Forge tutorial on OpenAI keyword extraction looks like it will be helpful here either way.
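As a hedged sketch of what the content action’s resolver would send — the `/transcribe` endpoint and the `TranscribeJob` shape are assumptions about my own server, not anything Forge provides:

```typescript
// Shape of the job my server would accept — an assumption, not a Forge type.
interface TranscribeJob {
  pageId: string;
  videoUrl: string;
}

interface JobRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

// Build the request for my own backend; kept pure so the resolver itself
// only has to fire it off and return immediately, well under the limit.
function buildJobRequest(base: string, job: TranscribeJob): JobRequest {
  return {
    url: `${base}/transcribe`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(job),
    },
  };
}
```

The resolver would send this request and return straight away, so the invocation never waits on the transcription itself.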

Now I’m still researching how the backend server should connect to Confluence. Apparently Connect apps are meant for running your own infrastructure rather than Forge’s FaaS approach, but they are rumoured to be sunset eventually.

If anyone is reading this, could you confirm whether 55 seconds is the absolute limit on a runtime?

I’m assuming that the runtime works like this:

  1. User inputs a URL into my content action, and the async 55-second timer begins.
  2. The URL is sent to my backend, which begins prep and ripping the audio; this takes more than 55 seconds, so we time out.

Rather than:

  1. User inputs a URL. The function begins, sends the URL to the backend, creates a results queue, and then ends.
  2. My backend does the processing, which may take an hour.
  3. The backend puts the transcript URL in the results queue created in step 1.
  4. The content action starts again with a fresh 55-second limit, takes the queue contents, and writes them.
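A toy in-memory model of that second flow — the real version would use a Forge async event or storage API for the queue and my remote server for the backend, which this does not attempt to reproduce; it only illustrates that each invocation stays short:

```typescript
// Stand-in for the results queue created in step 1.
const resultsQueue: string[] = [];

// Invocation 1: hand the URL off and return immediately (well under 55 s).
function startJob(videoUrl: string, backend: (url: string) => void): void {
  backend(videoUrl); // fire-and-forget to the remote server
}

// Backend (may take an hour in reality): pushes the transcript URL when done.
function backendFinishes(transcriptUrl: string): void {
  resultsQueue.push(transcriptUrl);
}

// Invocation 2: a fresh function run with its own fresh time limit
// drains the queue and writes the result.
function collectResult(): string | undefined {
  return resultsQueue.shift();
}
```

The key property is that no single function run ever spans the hour of backend work; the clock restarts with each invocation.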

The “long running API call” section of the docs on calling ChatGPT mentions chunking the work and invoking more async functions.
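The chunking idea, sketched generically — the chunk size and item type here are arbitrary; in practice each chunk would be pushed as its own async event so that no single run exceeds the limit:

```typescript
// Split a long piece of work into fixed-size chunks, each small enough
// to complete inside one async invocation's time limit.
function chunk<T>(items: T[], size: number): T[][] {
  if (size <= 0) throw new Error("size must be positive");
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

Each returned chunk would then become one async function invocation, fanning the hour of work out across many short runs.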

Forge Remote covers connecting to your own remote server to perform the calls off Confluence Cloud:
https://developer.atlassian.com/platform/forge/forge-remote-overview/