Prompt, but Verify
The Thinking Human's Guide to Tools That Pretend to Think
It started innocently enough.
I asked ChatGPT to help me tighten an idea.
It did. Clean, fluent, borderline smug.
Then, out of some misguided curiosity, I let it rewrite a whole damn paragraph.
The result? Something that read like a TED talk delivered by someone who'd just discovered metaphors. Everything was technically "better," but also flatter, emptier. Like my voice had been washed on hot and ironed into something generic.
🔷 Original Paragraph (Before AI): I’ve been circling this piece for a while—trying to write something clear without draining the life out of it. Every time I get close to what I’m trying to say, the edges start to blur. Like trying to outline a puddle with a pen: it resists containment. Which is frustrating, but maybe also the point. Ambiguity isn’t failure; it’s just harder to phrase.
♦️ AI Rewrite (After AI): I've been wrestling with the development of this article for some time now. Clarity has been my North Star, yet I've found myself navigating through the fog of complexity. Each time I approach the heart of my message, it's like trying to capture lightning in a bottle—brilliantly visible for a moment, then gone. This journey of articulation has taught me that perhaps the greatest insights lie not in perfect clarity, but in embracing the beautiful tension between precision and nuance.
My favorite part(s)? "Clarity has been my North Star" … "the greatest insights lie not in perfect clarity" … what??
This is not a complaint. It's a use case.
Because the problem with generative AI isn't that it's wrong. It's that it sounds so right while being wrong—or worse, while being almost right.
So here is a short, direct, probably-too-dry guide to how to use these tools well.
Or at least, how to avoid getting duped by your own prompts.
Know When to Close the Tab
Not everything should start with a prompt.
If you’re grieving, making art, or trying to name a feeling that doesn’t have language yet—start with your own words.
The model might help later.
But sometimes reaching for ease too early erases what made the moment worth writing about in the first place.
This Is Not a Search Engine
Don't treat it like Google with better bedside manner.
Large language models don’t "look things up." They pattern-match.
They predict what words are likely to come next based on a probabilistic model trained on a lot of internet.
That’s it.
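If you want to see what that means up close, here's a minimal sketch using a small open model (GPT-2, via Hugging Face transformers). Nothing here is "looked up"; the model just scores every possible next token:

```python
# What "pattern-matching" looks like in practice: ask a small open
# model for its most likely next tokens. No lookup, no database --
# just probabilities over what word tends to come next.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the NEXT token
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")
# " Paris" will likely top the list -- not because the model knows
# geography, but because that continuation is statistically common.
```

That's the whole trick, scaled up.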
Use it like you would use a clever improv partner.
Great for riffs. Not great for factual references.
The Sources Are Made Up (But the Confidence Is Real)
Ask it for five sources and you’ll get three real ones, one ghost, and one TED Talk transcript pretending to be peer-reviewed.
If you need sources, use a tool that’s connected to them. Plain ChatGPT is not; neither are most standalone LLMs.
Ask it to cite something, and it will almost certainly do so with confidence—and without basis. The titles may sound real. The authors may sound prestigious. The links may even resolve. But the references themselves? Often fiction.
Even when citations are available—via plugins, retrieval-augmented generation (RAG), or built-in search—you still have to check the original source. Was that statistic from a peer-reviewed journal, or a Medium post from 2016? Was the quote accurate, or just repeated enough times to sound canonical?
If citations matter, do your own audit. Don’t let pattern-matching pass for proof.
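For the curious, here's roughly what a retrieval-augmented setup looks like under the hood. A sketch, not a recipe: `search_my_corpus` is a hypothetical stand-in for whatever retrieval you actually have (a vector database, a search API), and the model name is an assumption.

```python
# A minimal sketch of retrieval-augmented generation (RAG):
# fetch real passages first, then ask the model to answer FROM them.
from openai import OpenAI

client = OpenAI()

def search_my_corpus(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever -- replace with your own index or search API."""
    raise NotImplementedError

question = "What did the 2023 report say about remote work productivity?"
passages = search_my_corpus(question)

context = "\n\n".join(f"[{i+1}] {p}" for i, p in enumerate(passages))
response = client.chat.completions.create(
    model="gpt-4o",  # assumption -- use whichever model you have access to
    messages=[
        {"role": "system", "content": (
            "Answer using ONLY the numbered passages below. "
            "Cite passage numbers. If the answer isn't there, say so."
        )},
        {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
# Even now: open passage [2] yourself and confirm the quote is really there.
```

Notice what this setup prevents and what it doesn't: the passages are real, but you still have to read them.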
Hallucination Is Not a Bug. It’s the Medium.
"Hallucination" is the industry term for when an AI makes something up. The name is a bit of a deflection. It implies a momentary glitch, something out of character.
It isn’t.
The model isn’t confused. It’s doing exactly what it was designed to do: generate language that sounds good. It has no stake in whether it's true. It is fluent, not accurate. And if you're not careful, you’ll start to mistake one for the other.
When It Works (and When It Really Doesn’t)
Useful for:
Outlining ideas you already understand
Summarizing long text you don't feel like reading
Rewriting for tone, clarity, or audience
Sparring with a friendly ghostwriter who doesn’t take offense
Cracking open a stuck idea (but finishing is still on you)
Brainstorming variations on a theme—titles, taglines, metaphors, structures
Not useful for:
Anything that requires factual precision
Emotional nuance in real-time conversations
Writing about something you don't yet understand
Doing your thinking for you
If you wouldn’t trust a well-read stranger to perform the task, don’t trust the model either.
Prompt Like You Mean It
Let’s acknowledge the term first: "prompt engineering" is an absurdly grand phrase for something that mostly involves typing into a text box.
No hard hat required. No coding skills. No... other kinds of engineering skills.
But if you're going to use it well, you do need to think like a writer, not a wizard.
The biggest misunderstanding about generative AI tools is that they’re magic.
They’re not. They’re mirrors. And the clearer you are, the more useful the reflection.
Prompting is not the same as asking. It’s instructing.
Be specific. Be directional. Maybe include examples. Or a grading rubric (kidding, kind of).
Treat it like onboarding a very enthusiastic intern who has read every blog post ever written but has yet to articulate an original thought.
If your output is vague, your input probably was too. Garbage in, garbage lit.
Want useful output? Give useful constraints. Set the tone. Define the form. Give it context. The model will often over-accommodate, so keep your directions tight.
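Here's what that looks like if you're working through an API rather than a chat window. A minimal sketch using the OpenAI Python SDK; the model name is an assumption, and the constraints are the point:

```python
# "Useful constraints" made explicit: tone, form, and what to preserve.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption
    messages=[{
        "role": "user",
        "content": (
            "Rewrite the paragraph below for a newsletter audience.\n"
            "Tone: dry, first-person, no inspirational language.\n"
            "Form: one paragraph, under 80 words.\n"
            "Keep my metaphors; do not add new ones.\n\n"
            "Paragraph:\n"
            "<your paragraph here>"
        ),
    }],
)
print(response.choices[0].message.content)
```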
A decent way to start, if you're not sure how to nudge it: just talk to it like you would to a collaborator who’s smart, fast, and not fully house-trained.
The trick isn’t to hand over the wheel. It’s to ask better questions.
Sharper inputs give you sharper outputs.
But the thinking part—that’s still yours.
Prompt Engineering: Without the Boilerplate
You could enroll in a certification program and become a Certified Prompt Engineer. Isn't that delicious? Whole new revenue streams being born. It's like... capitalism.
Or you could just take it from an English & Writing major who spent years learning how to revise.
Let’s stay honest:
Most of what's called "prompt engineering" is just learning how to talk to the model like it’s a smart colleague with no sense of stakes.
You’re not handing it a to-do list. You’re working out loud.
The goal isn’t to get it to write for you—it’s to get it to help you think in language.
It’s a feedback tool, not a ghostwriter.
Here are a few useful levers to pull:
Be clear and specific. How? Use your words. Use action verbs, and lead with one; it signals the task right away. Define the output. What do you want it to generate? A list? A paragraph? A toned-down version of the email you just angrily hammered out but had enough presence of mind not to send?
Context is everything. For example: why did you write that angry email? Why were you angry? What was it in response to? Why do you need to tone it down? How do you want it to land? Give the model something to work with—background, goals, the situation you're in. You wouldn’t give a human collaborator a blank page and say “go.”
Shape the interaction. This isn’t “set it and forget it.” It’s an editing process. The model learns nothing between attempts, but you do. You’re not just writing a command—you’re shaping a conversation. And your voice gets clearer with each pass.
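Put together, the loop looks something like this. A sketch of the angry-email scenario above, again via the OpenAI SDK (model name assumed, draft text yours):

```python
# "Shaping the interaction": feed the model's last attempt back
# with a correction, rather than starting over.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "user", "content": (
        "Tone down this email. Context: a vendor missed a deadline for "
        "the third time; I'm angry but need the relationship to survive. "
        "I want it firm, not hostile.\n\nEMAIL:\n<angry draft here>"
    )},
]
first = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# Second pass: react to what came back, like an edit note.
history.append({"role": "user", "content": (
    "Closer, but it apologizes twice and I shouldn't be apologizing at all. "
    "Cut the apologies and end with a concrete next step."
)})
second = client.chat.completions.create(model="gpt-4o", messages=history)
print(second.choices[0].message.content)
```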
Things That Trip People Up
Examples can be helpful—but too many and the model might treat them as a strict format. Be mindful of how you're shaping the input: you're not just providing context; you may also be accidentally locking in structure.
The same goes for ordering—models have recency bias. The last thing you say often carries the most weight, whether you meant it to or not.
Repetition, on the other hand, rarely hurts. If an instruction matters, say it again at the end.
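All three gotchas, in one sketch (model name assumed):

```python
# Few-shot examples, recency bias, and repetition in one prompt.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a tagline for a community tool-lending library.\n\n"
    # Two examples set the register. More than a handful and the model
    # starts treating their exact shape as a required format.
    "Examples of the register I want:\n"
    "- Borrow the drill, keep the hole.\n"
    "- Own less. Build more.\n\n"
    # Recency bias: the last line carries the most weight, so the
    # non-negotiable constraint goes at the end -- repeated on purpose.
    "Important: no puns. Again: no puns."
)
response = client.chat.completions.create(
    model="gpt-4o",  # assumption
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```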
What About Images?
Same principles, slightly different knobs.
Instead of tone, you’re cueing style—“watercolor,” “film noir,” “early 2000s tech aesthetic.”
Instead of structure, it’s composition—“centered subject, negative space, soft light.”
Be literal. Models don’t intuit—they parse.
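In code, the knobs look like this. A sketch using the OpenAI Images API; the model name and size are assumptions:

```python
# Style and composition cues, stated literally.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",  # assumption
    prompt=(
        "Watercolor illustration. Centered subject, generous negative "
        "space, soft morning light. A lighthouse on a rocky coast."
    ),
    size="1024x1024",
    n=1,
)
print(result.data[0].url)
```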
And yes, you can pay for a certificate in this, too.
But if you’ve ever typed something like “an octopus wearing crocs and moderating a panel on polyamory at Davos” into anything, congratulations: you’re already prompting.
(I mean, I had to…)
This isn’t engineering. It’s editing in dialogue. You’re not extracting a result. You’re conditioning a possibility. The better the conditions, the more likely the model offers something worth reacting to.
You Still Have to Think
You cannot automate judgment. You can scaffold it. You can accelerate your process. But you still have to decide what you mean. You still have to know enough to know when something's off.
Use it. Rely on it, even. But don't abdicate to it. Your voice—your real one, not the smoothed-over version it learned from LinkedIn and Medium posts—is still the only thing that makes your work yours. And when those hallucinated sources finally make it to print, your byline won't mention which sentences came from a probabilistic text generator with unearned confidence and borrowed authority.
That’s Enough of That
This piece was written by a human.
You’ll know, because it doesn’t end with a motivational quote, and none of the sources are fake (though, to be fair, I didn’t use any…)
The comment box is below.
Post a prompt—ideally one you wouldn’t trust it to get right.
For this piece only, comments will be responded to by none other than... ChatGPT.
(I can't promise it'll make sense. Or be relevant. That might all depend on how clear and specific you are. Bonus points if it hallucinates your job title.)
Thanks for reading Opinions & Conditions May Apply—essays at the intersection of language, technology, and systemic dysfunction.
If it resonated, feel free to share or recommend. No pressure. Just signal.
I read an article last year that predicted the most valuable major in this new AI world would be English, because English majors know how to communicate effectively. I’m not sure the hunt for bright math minds is over, but AI is clearly changing how we value the skill sets different majors confer.