Note #3 to Sookie: Why AI Hallucinations Give Me Hope
Dear Sookie,
I used to see AI hallucinations as a problem—but now I believe they give us real hope. Let me explain…
Last week, I encouraged you to keep practicing prompt engineering. Stick with it—over time, you’ll become more confident and skilled at working alongside generative AI.
Inevitably, you’ll run into AI hallucinations.
What are AI hallucinations? I asked ChatGPT o4-mini for a simple definition, and it responded with:
An AI hallucination occurs when a model confidently generates information—facts, figures, quotes, or details—that are false, fabricated, or unsupported by its training data or the user’s prompt. It’s essentially the AI “making things up” rather than accurately recalling or inferring.
I can confirm that this definition is accurate. If you’d like more context, check out this clip of Lex Fridman and Marc Andreessen discussing it.
If you watched the clip, you learned that AI hallucinations can fuel creativity, but they can also amount to flat-out falsehoods. Okay, so this sounds bad… Right? What might make it worse is that a fair amount of research suggests AI will never stop hallucinating (see this paper, or this paper).
But here’s the non‑obvious twist: what seems like a flaw can actually be our greatest asset.
I recently read Rohit Bhargava and Ben DuPont’s Non‑Obvious Thinking: How to See What Others Miss. One of their concepts is to “see the other side” as a springboard for fresh ideas.
Because AI will likely keep hallucinating, humans have a mandate to “stay in the loop,” and that’s fantastic. It means we need to grow smarter, wiser, and more curious to catch and correct those falsehoods. AI hallucinations can push us to become “better humans.”
So here are some hopeful realities:
Myth: AI will dumb us down.
Reality: We must become true experts to spot its errors.
Myth: AI will steal every job.
Reality: It’ll reshape roles, forcing us to deepen our unique human skills.
Fact: AI thrives with high‑functioning humans—and we can become those humans with its help.
So don’t lose hope. Keep refining your prompts, deepen your expertise, and let AI’s surprises spark your next big idea.
As a final note, please know that a group of us is working to see how we can help humanity more deeply leverage AI to upgrade ourselves quickly.
Sincerely,
Dr. Joe, Your AI Doctor
As a fun side note, Rohit and Ben’s book is the first book I’ve read that explains the Korean concept of “noonchi”. Ask your generative AI tool about noonchi!
These notes are for 'Ken' and 'Sookie,' the American names my young Korean immigrant parents adopted while navigating profound change as they moved to the US in the 1970s. In the notes within this blog, I imagine them as young adults again, but now encountering change and uncertainty from today's AI shifts: Sookie with potential job uncertainty, Ken with business disruption.
Drawing inspiration from their historical resilience as young immigrants facing the unknown, I'm compelled to write with empathy and offer truly helpful thoughts for anyone navigating AI's rapid evolution. Remember, this isn't financial or direct strategic advice, but a perspective to encourage your own thoughtful consideration. I do not identify myself as their son in these notes, but in reality, I write with a son’s heart.
My notes to Sookie will always be free, as I understand employees often navigate workplace changes with fewer resources and support systems. My notes to Ken will be offered at an accessible price point designed to be a worthwhile investment for businesses of any size looking to adapt to AI changes.