
Colorado Christian University Library

Writing

Hallucinations

What are hallucinations?

ChatGPT and other large language models (LLMs) often produce plausible-sounding but inaccurate information, known as "hallucinations."

"ChatGPT might give you articles by an author that usually writes about your topic, or even identify a journal that published on your topic, but the title, pages numbers [sic], and dates are completely fictional. This is because ChatGPT [3.5] is not connected to the internet, so has no way of identifying actual sources."

The videos and articles below explain what hallucinations are, why LLMs hallucinate, and how to reduce hallucinations through careful prompt engineering.

You will find more resources about prompt engineering and examples of good prompt engineering in this Guide under the tab "How to Write a Prompt for ChatGPT and other AI Large Language Models."

Attribution: The quotation was provided by the University of Arizona Libraries, licensed under a Creative Commons Attribution 4.0 International License.


* This page is a copy of the hallucination page provided by National American University.

Quick Links

Book a Librarian
Databases A–Z
Research Guides
Library FAQ
YouTube Tutorials
Suggest a Purchase
Borrow from Another Library
Online Catalog
Library Hours

Call us: 303-963-3250
Text-a-Librarian: 303-622-5333
Address: 8787 W. Alameda Ave., Lakewood, CO 80226
Email: cculibrary@ccu.edu