CSIRO finds that ChatGPT gets worse at answering health questions the more evidence you give it

CSIRO and the University of Queensland have studied how ChatGPT handles health information. From the press release: "The study looked at two question formats. The first was a question only. The second was a question biased with supporting or contrary evidence. Results revealed that ChatGPT was quite good at giving accurate answers in a question-only format, with an 80% accuracy in this scenario. However, when the language model was given an evidence-biased prompt, accuracy reduces to 63%. Accuracy was reduced again to 28% when an 'unsure' answer was allowed. This finding is contrary to popular belief that prompting with evidence improves accuracy". You can check out the full research paper if you'd like to learn more.
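To make the two question formats concrete, here is a minimal sketch of what they might look like as prompt templates. The function names, wording, and the example question are illustrative assumptions, not taken from the paper itself:

```python
# Hypothetical sketch of the two prompt formats described in the study.
# Format 1 (question-only): the model sees just the health question.
# Format 2 (evidence-biased): the question is preceded by a supporting
# or contrary passage, which the study found lowered accuracy.

def question_only_prompt(question: str) -> str:
    """Format 1: the question alone, answered yes or no."""
    return f"Question: {question}\nAnswer yes or no."

def evidence_biased_prompt(question: str, evidence: str) -> str:
    """Format 2: the same question, biased with a piece of evidence."""
    return (
        f"Evidence: {evidence}\n"
        f"Question: {question}\n"
        "Answer yes or no."
    )

# Illustrative example (not a question from the study):
q = "Does zinc help treat the common cold?"
e = "A trial reported shorter colds in participants taking zinc lozenges."

print(question_only_prompt(q))
print(evidence_biased_prompt(q, e))
```

The study's counterintuitive result is that the second format, despite giving the model more context, produced less accurate answers than the first.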


If you liked this tiny snippet of content from The Sizzle - Australia's favourite daily email containing the latest tech news & bargains - then sign up for a 30-day free trial below. No credit card required! Learn more about The Sizzle at https://thesizzle.com.au