My friend was working on a research document. Like most people, she wanted to get it done fast, so she pasted the task she was given directly into ChatGPT and waited for the result. When it came back, she immediately made a face.
“This isn't what I wanted.”
That reaction made me curious. What did she actually want? So I asked her to show me a few example documents she considered good. That's when it became clear. Those documents had a very specific tone, style, and underlying intent. ChatGPT's response, while correct on the surface, didn't come close to matching that flavor.
And that's when it clicked: most non-technical users don't really understand how LLMs work. They try them once, get a generic output, decide it's useless, and move on. But the problem wasn't the model. It was the process.
In her case, several layers of context were missing.
When a manager says “write a research document,” they're not just asking for information. They assume you understand the broader goal, the target audience, the tone, and what “good” looks like. Humans infer this naturally. LLMs don't. They can't read your mind, sense dissatisfaction, or pick up on subtle cues. If you don't explicitly tell them what you want, they fall back to a safe, generic style, which is rarely what you actually need.
So the real takeaway is simple: break your work into smaller, explicit tasks. You don't even need to know how to do this perfectly.
Then ask the LLM to:
- Propose a breakdown of the task into concrete subtasks
- Tackle one subtask at a time
- Match the tone and style of the example documents you share
From there, iterate step by step. In my experience, this alone dramatically improves the output.
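To make this concrete, here's a minimal sketch of the decompose-and-iterate loop using the openai Python SDK. The model name, prompt wording, and the research-document scenario are illustrative assumptions, not a prescribed workflow.

```python
# Minimal sketch: have the model propose the breakdown, then work through
# it one subtask at a time. Assumes the openai SDK (pip install openai)
# and an OPENAI_API_KEY in the environment; the model name and prompts
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system",
            "content": "You are helping draft an internal research document."}]

def ask(prompt: str) -> str:
    """Send one message, keeping the running history so corrections stick."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    text = response.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

# Step 1: decompose instead of one-shotting the whole task.
print(ask("Break 'write a research document on <topic>' into 4-6 concrete subtasks."))

# Step 2: tackle one subtask, steering tone with an example.
print(ask("Do subtask 1. Match the tone of this excerpt:\n<paste a 'good' excerpt>"))
```

The specific API doesn't matter; what matters is that each turn carries the full history, so every clarification you add compounds instead of starting over from zero.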
This led me to a broader idea: “Please have some empathy for your LLMs.” Empathy here doesn't mean emotion. It means understanding constraints. They are not mind readers, they don't see context unless you provide it, and they don't understand intent unless you articulate it. If you treat them like a black box that should “just get it,” you'll be disappointed. To get good output, you need to give them context, clarity, examples, and direction.
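As a rough illustration (the project, audience, and instructions below are hypothetical placeholders), those four ingredients might look like this when assembled into a single prompt:

```python
# Illustrative only: context, clarity, examples, and direction assembled
# into one prompt. Every specific detail here is a made-up placeholder.
prompt = "\n\n".join([
    "Context: internal research doc for the product team on feature <Y>.",  # context
    "Task: summarize recent user feedback in roughly 500 words.",           # clarity
    "Tone: match this excerpt from a past doc that landed well:\n<excerpt>", # examples
    "Structure: lead with findings; end with two recommendations.",         # direction
])
```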
It also gave me a clearer view of how non-technical users interact with LLMs. The gap is massive. As developers, we instinctively decompose problems, iterate, and refine our inputs. Many others expect a one-shot answer, and when that fails, they conclude the tool doesn't work. I've even seen strong skeptics say, “LLMs can't do my job.” But if you look closely, the issue isn't capability. It's usage.
Thanks for reading!