Day 3 — Prompt Engineering Is Not What You Think
AI Engineering — Day by Day
My journey to becoming an AI Engineer
When I first heard about prompt engineering, I thought it was about:
- Writing “smart” prompts
- Using fancy tricks
- Memorizing templates
But after actually digging into it, I realized something important:
Prompt engineering is not about clever wording.
It’s about reducing ambiguity for a probabilistic system.
This shift changed everything for me.
🧠 What I Understood Today
At its core, an LLM:
- Doesn’t “understand” like humans
- Doesn’t “know” the right answer
It just predicts the next most probable token.
So, if your prompt is vague →
The model has too many possible directions →
And your output becomes inconsistent
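This can be sketched with a toy next-token model (the vocabulary and probabilities below are entirely made up for illustration): a vague prompt spreads probability over many continuations, while a specific prompt concentrates it on a few.

```python
import random

# Made-up "next-token" distributions for illustration only.
# A vague prompt leaves probability spread across many continuations;
# a specific prompt concentrates it on a few.
vague_prompt = {"stocks": 0.2, "crypto": 0.2, "forex": 0.2, "history": 0.2, "risks": 0.2}
specific_prompt = {"stocks": 0.9, "risks": 0.1}

def sample(dist, seed):
    """Sample one continuation from a token distribution."""
    rng = random.Random(seed)
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights)[0]

# Run each prompt many times and count the distinct continuations.
vague_outputs = {sample(vague_prompt, s) for s in range(100)}
specific_outputs = {sample(specific_prompt, s) for s in range(100)}

print(len(vague_outputs))     # many distinct directions
print(len(specific_outputs))  # few distinct directions
```

The point isn't the numbers themselves — it's that the flatter the distribution a prompt induces, the more the outputs wander.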
❌ Why Most Prompts Fail
Explain trading
What’s wrong here?
- No context
- No audience
- No structure
- No constraints
The model is forced to guess what you want.
✅ Improving the Same Prompt
Explain stock market trading in 3 bullet points, for a beginner, using simple language
Now:
- Clear audience
- Clear format
- Clear expectation
👉 Output becomes more predictable and useful
🧩 The Structure I Learned
A good prompt is not random. It usually has:
[ROLE] [TASK] [CONSTRAINTS] [OUTPUT FORMAT]
Example:
You are a financial analyst. Explain options trading.
- Keep it under 100 words
- Use simple language
- Give one real-world example
Return the answer in bullet points.
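The four-part structure can be wrapped in a small helper. This is just a sketch — the function name and fields are my own, not from any library:

```python
def build_prompt(role, task, constraints, output_format):
    """Assemble a prompt from [ROLE] [TASK] [CONSTRAINTS] [OUTPUT FORMAT]."""
    lines = [f"You are {role}.", task]
    lines += [f"- {c}" for c in constraints]
    lines.append(f"Return the answer {output_format}.")
    return "\n".join(lines)

prompt = build_prompt(
    role="a financial analyst",
    task="Explain options trading.",
    constraints=[
        "Keep it under 100 words",
        "Use simple language",
        "Give one real-world example",
    ],
    output_format="in bullet points",
)
print(prompt)
```

Keeping the parts as separate arguments makes it easy to vary one (say, the audience) while holding the rest fixed.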
🔥 The Biggest Insight
The quality of the output depends on how much you constrain the problem.
More freedom for the model = more randomness
More constraints = more control
💠 Questions That Came to My Mind (And What I Learned)
While learning this, I had a few doubts. Writing them down actually helped me understand better.
❓ 1. Why do constraints improve output quality?
At first, I thought constraints just reduce verbosity.
But the real reason is deeper:
Constraints reduce the solution space the model has to explore.
Without constraints:
- Too many possible outputs
- More randomness
With constraints:
- Narrower possibilities
- More focused predictions
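The "solution space" idea can be made concrete with a toy count. The attributes below are made-up stand-ins for the choices a model implicitly makes about a response:

```python
from itertools import product

# Made-up dimensions a response could vary along (illustration only).
lengths = ["3 bullets", "paragraph", "essay"]
tones = ["formal", "casual", "technical"]
audiences = ["beginner", "expert"]

# Without constraints, every combination is a plausible response shape.
all_outputs = list(product(lengths, tones, audiences))
print(len(all_outputs))  # 18 possible shapes

# A constraint like "3 bullet points for a beginner" filters the space.
constrained = [o for o in all_outputs if o[0] == "3 bullets" and o[2] == "beginner"]
print(len(constrained))  # 3 possible shapes
```

Each constraint you add cuts out whole regions of possibility, which is why the remaining outputs cluster more tightly.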
❓ 2. Is role prompting really necessary?
I wasn’t sure about this.
What I understood:
Role prompting is helpful, but not mandatory.
It works because:
- It biases the model toward a certain tone or domain
But:
- The model can still infer context from the task itself
So it’s more like a soft guide, not a requirement.
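In chat-style APIs, the role typically lives in a system message rather than in the question itself. A minimal sketch of that message layout (the helper name is mine):

```python
def with_role(role_description, user_task):
    """Put the role in a system message, separate from the user's task."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": user_task},
    ]

messages = with_role(
    "You are a financial analyst who explains concepts simply.",
    "Explain options trading in under 100 words.",
)
```

Separating the role this way makes it a reusable bias you can keep across many user turns — or drop entirely when the task speaks for itself.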
❓ 3. Why does breaking tasks into steps improve results?
Initially, I thought it just “adds clarity.”
But the actual reason is:
It reduces complexity by guiding the model through smaller steps.
Instead of solving everything at once:
- The model solves step-by-step
- Each step improves the next
This is why techniques like “think step by step” work.
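The step-by-step idea can be wired up as a simple prompt chain. Here `fake_llm` is a stand-in for a real model call, just to show the shape of the loop:

```python
def fake_llm(prompt):
    # Stand-in for a real model call; echoes the prompt to show the wiring.
    return f"[answer to: {prompt}]"

def solve_in_steps(task, steps):
    """Run each step as its own prompt, feeding earlier answers forward."""
    context = ""
    answers = []
    for step in steps:
        prompt = f"Task: {task}\nPrevious work:\n{context}\nNow: {step}"
        answer = fake_llm(prompt)
        answers.append(answer)
        context += answer + "\n"
    return answers

answers = solve_in_steps(
    "Explain options trading",
    ["Define an option", "Explain calls vs puts", "Give one real-world example"],
)
```

Because each step sees the previous answers, the model only has to solve one small problem at a time instead of the whole task at once.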
🧠 Final Mental Model
After today, this is how I think about prompts:
A prompt is not a question — it’s a system design problem
You are:
- Defining input structure
- Reducing ambiguity
- Controlling output behavior
🚀 What Changed for Me
Before:
- I wrote prompts randomly
- Blamed the model when output was bad
Now:
- I see prompts as interfaces to a probabilistic system
- If output is bad → input design is probably bad
💠 Final Thought
LLMs are not unpredictable.
They just follow rules most people don’t understand.
Once you start designing prompts instead of guessing them —
you stop struggling… and start controlling the output.
This is Day 3 of my AI engineering journey — and this was a big shift in thinking.