Hi there,
since Friday we have observed strange behaviour in our Rovo agent. What worked on Wednesday no longer worked before the weekend. Was there any update to the Rovo infrastructure?
As far as I understand it, the prompt of a Rovo agent is somehow wrapped in an Atlassian prompt and then sent to an LLM (ChatGPT?). If this is the case, changes in the wrapper prompt might break working Rovo agents, as they behave differently.
What I would love is to pin a “Wrapper-Prompt” version, to make sure my agent prompt keeps working until I actively switch to (and test) the new wrapper.
Thanks for any thoughts!
Hi @ppasler ,
Starting around last Friday there was a regression, which we fixed yesterday. The symptoms were pointed out in the thread about Rovo printing the action instead of calling it. Is your problem still occurring?
Regards,
Dugald
Hi @dmorrow ,
I saw that thread, but our symptoms were a bit different. I’ve spent some hours adjusting the prompt, and now it works a bit better.
To outline our problem (See demo here https://www.youtube.com/watch?v=N_seN0vgTyQ):
Our Rovo workflow contains three action steps. In the first one we ask for an issue key and the agent loads some issue information. Then Rovo asks some questions, which are loaded from the description. In the third step we write a comment on the same issue, but by then Rovo has forgotten the issue key. It looks to me like the longer the description (and therefore the more questions are posed), the more likely it is that Rovo forgets things or even makes questions up.
FTR: our prompt has around 65 lines.
Are there any best practices for improving this?
Thanks in advance,
paul
We observed that the issue worsens as the number of interactions increases. After three rounds of questions and answers, the initial information starts to fade.
It would be helpful to have a robust storage or context mechanism that persists throughout the dialogue.
Hi @ppasler ,
I’ve also observed similar behaviour to what you are describing.
If you are using the GET actionVerb, you can return information for Rovo to process. For example, your prompt could initially say “The key of the subject issue is currently not known”, and later your action could include a statement like “The key of the subject issue is ABC-123”. Also make sure the action input parameter for the issue key uses similar language, like “This is the key of the subject issue”.
Regards,
Dugald
Hi @dmorrow ,
thanks for the hints. Actually, we are already starting with the prompt to determine the issue context (by asking the user to provide an issue key). Every following action takes the issue key parameter and returns it again (in JSON style: `{content: "foo", issueKey: "BAR-17"}`). Still, we’re facing the fading behaviour. This might be caused by the question-and-answer part, where no action is invoked in between (as the whole questionnaire is loaded once).
Cheers,
paul