While artificial intelligence, or AI, tools like ChatGPT might be great for helping you pick where to go for dinner or which TV show to binge-watch, would you trust them to make decisions about your medical care or finances?
AI tools like ChatGPT and Gemini include a disclaimer that the information they gather from the internet may not always be accurate. If someone were researching a topic they knew nothing about, how would they know whether the information was true? As AI tools grow smarter and see wider use in daily life, the stakes for the accuracy and dependability of this evolving technology rise as well.
Michigan State University researchers aim to make AI-generated information more reliable. To do this, they have developed a new method that acts like a trust meter, reporting the accuracy of information produced by AI large language models, or LLMs.
Reza Khan Mohammadi, a doctoral student in MSU’s College of Engineering, and Mohammad Ghassemi, an assistant professor in the Department of Computer Science and Engineering, collaborated with researchers from Henry Ford Health and JPMorganChase Artificial Intelligence Research on this work.