
Generative AI can "hallucinate" when it does not know the answer to a question. Here's how to spot when that happens.

Researchers from the University of Oxford have devised a new way to help users tell when generative AI might be "hallucinating". This happens when an AI system is asked a question it doesn't know the answer to, causing it to make up an incorrect response.

Fortunately, there are ways to spot this when it happens, and even to prevent it from happening in the first place.

How to stop artificial intelligence hallucinations

A new study by a team from the University of Oxford has produced a statistical method that can determine when questions put to generative AI chatbots are likely to produce incorrect answers.

This is a real concern for generative AI models, as the fluent way they communicate means they can pass off false information as truth. This was highlighted when ChatGPT went rogue with wrong answers back in February.

As more and more people from all walks of life turn to AI tools to help them in school, work, and everyday life, AI experts like the researchers behind this study are calling for clearer ways for people to know when AI is making up its responses, especially when it comes to serious topics like health care and law.

Researchers at the University of Oxford claim that their method can differentiate between when a model is answering correctly and when it is just making something up.

“LLMs are very capable of saying the same thing in many different ways, which can make it difficult to know when they are sure of an answer and when they are making something up,” said study author Dr. Sebastian Farquhar while speaking to the Evening Standard. “With previous methods, it wasn’t possible to tell the difference between a model being unsure about what to say versus unsure about how to say it. But our new method gets around this.”
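To make that distinction concrete, here is a minimal, hypothetical sketch of the semantic-entropy idea behind the research: sample several answers to the same question, group together answers that mean the same thing, and measure how spread out the groups are. The `same_meaning` helper and the sample answers below are illustrative assumptions; the published method uses a language model to judge whether two answers entail each other.

```python
import math


def same_meaning(a: str, b: str) -> bool:
    """Crude stand-in for a semantic-equivalence check.

    The Oxford work judges whether two answers entail each other using a
    language model; here we only compare normalised strings, which is
    enough to demonstrate the idea.
    """
    return a.strip().lower() == b.strip().lower()


def semantic_entropy(answers: list[str]) -> float:
    """Group sampled answers by meaning and compute the entropy of the groups.

    High entropy means the samples disagree about *what* the answer is
    (a warning sign for confabulation); low entropy means the model keeps
    giving the same answer, even if it phrases it differently.
    """
    clusters: list[list[str]] = []
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:  # no existing cluster matched, so start a new one
            clusters.append([ans])

    total = len(answers)
    probabilities = [len(c) / total for c in clusters]
    return -sum(p * math.log(p) for p in probabilities)


# Five hypothetical samples for the same question: three agree, two do not.
samples = ["Paris", "paris", "PARIS", "Lyon", "Marseille"]
print(f"semantic entropy = {semantic_entropy(samples):.3f}")  # roughly 0.95
```

On samples like these, the entropy comes out high because the model's answers disagree in meaning, which is the kind of signal the researchers use to flag a likely hallucination.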

However, there is of course still more work to be done to correct the mistakes that AI models can make.

“Semantic uncertainty helps solve specific reliability problems, but that’s only part of the story,” he added. “If the LLM is constantly making mistakes, this new method won’t catch it. The most serious AI failures come when the system does something poorly but is confident and methodical.

“There is still a lot of work to be done.”

Featured image: Ideogram
