Recency bias in bigger prompts

Hey there,

Since an agent can support multiple actions, the single prompt for the Rovo Agent can get quite big. How do you deal with recency bias, where the model might forget instructions passed at the beginning of the prompt?

Thanks,
Vladimir


@VladimirNegacevschi,

I’ve seen some LLM research (the “lost in the middle” effect) indicating it’s the middle of the prompt that gets forgotten, more than the beginning. In any case, we have observed some agents “forgetting” instructions. The best I can recommend at this time is to keep prompts short by pushing more logic into actions.
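
To illustrate what I mean by pushing logic into actions, here’s a rough sketch of an action handler. It’s illustrative only: the action name, inputs, and handler shape are placeholders, and you’d still declare the action and its function in your manifest.yml as usual.

```typescript
// src/index.ts — illustrative sketch only; names and inputs are placeholders.
// The idea: instead of spelling out formatting/validation rules in the agent
// prompt, encode them in the action handler, so the prompt only needs to say
// *when* to call the action.

interface SummarizeIssuePayload {
  issueKey: string;       // input declared on the action in manifest.yml
  maxSentences?: number;  // optional input with a default applied here
}

export async function summarizeIssue(payload: SummarizeIssuePayload): Promise<string> {
  const { issueKey, maxSentences = 3 } = payload;

  // Validation that would otherwise be a prompt instruction ("only accept
  // valid issue keys") lives in code, where it can't be "forgotten".
  if (!/^[A-Z][A-Z0-9]+-\d+$/.test(issueKey)) {
    return `"${issueKey}" doesn't look like a valid issue key.`;
  }

  // ...fetch the issue and condense it here, e.g. via the product REST APIs...
  // Returning a plain string hands the result back to the agent to phrase.
  return `A summary of ${issueKey} in at most ${maxSentences} sentences.`;
}
```

With that split, the agent prompt shrinks to a sentence or two per action describing when to use it, which keeps the prompt well clear of the sizes where forgetting shows up.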

Do you have any data on the prompt size at which you started to see the problem? Or constraints that would prevent you from pushing logic into actions?


I think you may be right, @ibuchanan: it’s the middle that gets forgotten.

I’ve noticed this in a separate app using OpenAI’s GPT-4o, where the instructions were about 2,000 tokens and the payload somewhere around 3,000–4,000 tokens. I’ll need to retest it in Rovo Agents once I port the app.
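
For anyone who wants to compare numbers, token counts like these can be measured with something along these lines. This is a sketch assuming the js-tiktoken package (recent versions include GPT-4o’s o200k_base encoding); adjust the encoding for other models.

```typescript
// Sketch: count tokens the way GPT-4o sees them, using js-tiktoken (assumed).
import { getEncoding } from "js-tiktoken";

// o200k_base is the encoding GPT-4o uses; older GPT-4 models use cl100k_base.
const enc = getEncoding("o200k_base");

function countTokens(text: string): number {
  return enc.encode(text).length;
}

const instructions = "...system prompt here...";
const payload = "...request payload here...";

console.log(`instructions: ${countTokens(instructions)} tokens`);
console.log(`payload:      ${countTokens(payload)} tokens`);
```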
