The Power of Context in AI: A New Way to Boost Accuracy
Monday, May 26, 2025
The researchers created an automated system to label examples as having enough or not enough context. They found that Google's Gemini 1.5 Pro model did the best job at this, even in a one-shot setting, with just a single labeled example to learn from.
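As a rough illustration of what such an automated labeler might look like, here is a minimal sketch of a one-shot sufficiency rater. The prompt wording, label names, and the `call_llm` placeholder are all assumptions for illustration, not the paper's actual prompt or the Gemini API.

```python
# Hedged sketch of a one-shot "sufficient context" autorater.
# `call_llm` stands in for any LLM client (e.g. a Gemini 1.5 Pro call);
# the prompt template and labels below are illustrative assumptions.

ONE_SHOT_EXAMPLE = (
    "Question: When was the Eiffel Tower completed?\n"
    "Context: The Eiffel Tower was completed in 1889.\n"
    "Label: SUFFICIENT"
)

def build_autorater_prompt(question: str, context: str) -> str:
    """Build a one-shot prompt asking whether the context is enough
    to answer the question."""
    return (
        "Decide whether the context contains enough information to "
        "answer the question. Reply SUFFICIENT or INSUFFICIENT.\n\n"
        f"{ONE_SHOT_EXAMPLE}\n\n"
        f"Question: {question}\n"
        f"Context: {context}\n"
        "Label:"
    )

def label_sufficiency(question: str, context: str, call_llm) -> bool:
    """Return True if the rater model labels the context sufficient."""
    reply = call_llm(build_autorater_prompt(question, context))
    return reply.strip().upper().startswith("SUFFICIENT")
```

In practice the rater model would be called once per (question, context) pair, and its labels stored alongside the examples for later analysis.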
When they tested various models and datasets using this new approach, several patterns emerged. Models are usually more accurate when they have enough context. But even then, they can still make up answers instead of admitting they don't know. When the context is lacking, models may abstain from answering or, in some cases, make up answers even more often.
One surprising finding was that models can sometimes give correct answers even when the context isn't enough. This could be because the context helps clarify the question or fills in gaps in the model's knowledge.
To reduce these made-up answers, the researchers developed a selective generation method: a smaller model decides whether the main model should answer or abstain. They found that using the sufficient-context label as an extra signal in this decision led to more accurate answers among the responses the model did give.
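One simple way to combine a confidence score with the sufficiency signal is to apply a stricter confidence bar when the context was judged insufficient. This is a hedged sketch of that idea, not the researchers' actual intervention model; the threshold values are illustrative assumptions.

```python
# Hedged sketch: decide answer vs. abstain using both a confidence score
# (e.g. from the main model's self-assessment) and the sufficiency label.
# Thresholds are illustrative, not values from the paper.

def should_answer(confidence: float, context_sufficient: bool,
                  threshold_sufficient: float = 0.5,
                  threshold_insufficient: float = 0.8) -> bool:
    """Answer only when confidence clears the bar; require a higher
    bar when the retrieved context was judged insufficient."""
    threshold = threshold_sufficient if context_sufficient else threshold_insufficient
    return confidence >= threshold
```

The design choice here is that insufficient context makes hallucination more likely, so the system abstains unless the model is unusually confident.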
For teams working on their own RAG systems, the researchers suggest a practical approach. First, collect examples that the model will see in real use. Then, use the automated system to label each example as having enough or not enough context. This can help teams understand how well their model is performing and where it might need improvement.
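Once examples are labeled, a team can break accuracy down by sufficiency bucket to see where errors concentrate. The sketch below assumes a simple record format (`sufficient` and `correct` flags per example) that is not prescribed by the source.

```python
# Hedged sketch: summarize eval accuracy by context-sufficiency bucket.
# The input format (dicts with 'sufficient' and 'correct' booleans) is
# an assumption for illustration.

def summarize(examples):
    """Return per-bucket accuracy, keyed 'sufficient'/'insufficient',
    so teams can see whether errors cluster where context is lacking."""
    buckets = {}
    for ex in examples:
        key = "sufficient" if ex["sufficient"] else "insufficient"
        total, correct = buckets.get(key, (0, 0))
        buckets[key] = (total + 1, correct + int(ex["correct"]))
    return {k: correct / total for k, (total, correct) in buckets.items()}
```

A low accuracy in the "insufficient" bucket would point at retrieval quality, while a low "sufficient" accuracy would point at the model's answering behavior.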