AI Engineering, JavaScript Solutions, Competitive programming in JavaScript, MCQ in JS

Monday, April 27, 2026

Day 4 - AI Engineering Journey - LLMs Are Not Magic — Understanding Their Limitations

Day 4 — LLM Limitations (Where Things Break)

AI Engineering — Day by Day
My journey to becoming an AI Engineer




When I started learning about AI, I was honestly impressed by how accurate LLMs felt. But after spending time understanding how they actually work, I realized something important:

LLMs are powerful — but they are NOT reliable by default.

Day 4 was all about understanding where things break — and why.


🧠 The Shift in Thinking

Earlier, I used to think:

  • If output is wrong → model is bad

Now I think:

  • If output is wrong → I need to understand the system better

⚠️ 1. Hallucination — The Biggest Risk

LLMs can generate answers that sound extremely confident… but are completely wrong.

Why does this happen?

Because an LLM predicts what sounds correct, not what is actually correct.

It has:

  • No real-time fact-checking
  • No connection to truth
  • No “I don’t know” mechanism by default

Key Insight:

Hallucination is not a bug — it’s a design limitation.
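
To make this concrete, here is a toy JavaScript sketch (not how a real LLM works internally, and the probabilities are made up) of next-token sampling. The point: the model always picks some plausible continuation, and "I don't know" is just another token, not a safeguard.

```javascript
// Toy illustration only: a next-token sampler over made-up probabilities.
// It always produces *something* plausible; there is no notion of truth.
const nextTokenProbs = {
  "The capital of Atlantis is": [
    { token: "Poseidonia", p: 0.41 },   // sounds right, but Atlantis is fictional
    { token: "Atlantis City", p: 0.33 },
    { token: "unknown", p: 0.26 },      // just another token, not a safety mechanism
  ],
};

function sampleNextToken(prompt) {
  const candidates = nextTokenProbs[prompt];
  const r = Math.random();
  let cumulative = 0;
  for (const { token, p } of candidates) {
    cumulative += p;
    if (r <= cumulative) return token;
  }
  return candidates[candidates.length - 1].token;
}

console.log(sampleNextToken("The capital of Atlantis is"));
// Most runs print a confident-sounding city name: a hallucination by design.
```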


📏 2. Context Loss — Memory Is Limited

LLMs have a limited context window. This means:

  • Too much input → the oldest information gets dropped
  • Even within the limit → attention to earlier tokens gets weaker

This is why:

  • Long chats become inconsistent
  • Answers about large documents become inaccurate

Key Insight:

Context window is not just a limit — it directly affects accuracy.
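
Here is a minimal JavaScript sketch of why older information disappears: it mimics the kind of trimming a chat app does to stay inside the window. The token estimate (roughly 4 characters per token) and the budget are assumptions for illustration, not a real tokenizer.

```javascript
// Sketch of context trimming: keep the newest messages that fit the budget,
// silently drop everything older. Token counting is a rough approximation.
const MAX_CONTEXT_TOKENS = 200; // assumed budget for illustration

const estimateTokens = (text) => Math.ceil(text.length / 4);

function fitToContext(messages) {
  const kept = [];
  let used = 0;
  // Walk backwards from the newest message; stop once the budget is full.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (used + cost > MAX_CONTEXT_TOKENS) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept; // everything older than kept[0] is simply gone
}

const history = [
  { role: "user", content: "My name is Priya." },
  { role: "assistant", content: "Nice to meet you, Priya!" },
  { role: "user", content: "Here is a long document: " + "lorem ipsum ".repeat(100) },
  { role: "user", content: "What's my name?" },
];

console.log(fitToContext(history).map((m) => m.content.slice(0, 30)));
// The long document blows the budget, the early turns get trimmed,
// and the model can no longer answer "Priya".
```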


🧾 3. Instructions Are Not Rules

You might say:

Explain in 2 lines

And the model gives you a paragraph 😄

Why?

  • Instructions are just part of the input
  • They are not enforced
  • They compete with everything else in the prompt for the model's attention

Key Insight:

Prompts guide behavior — they don’t enforce it.
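
If you actually need the "2 lines" rule, you have to enforce it outside the model. A rough JavaScript sketch, where callModel is a hypothetical stand-in for whatever LLM API you use: validate the output, retry a few times, and finally truncate in code.

```javascript
// Enforcement lives in code, not in the prompt.
// callModel is a hypothetical async (prompt) => string function.
async function answerInTwoLines(callModel, question) {
  const prompt = `Explain in 2 lines:\n${question}`;
  let output = "";
  for (let attempt = 0; attempt < 3; attempt++) {
    output = await callModel(prompt);
    const lines = output.trim().split("\n").filter(Boolean);
    if (lines.length <= 2) return output; // the instruction happened to win this time
  }
  // Still too long after retries: enforce the rule ourselves.
  return output.trim().split("\n").filter(Boolean).slice(0, 2).join("\n");
}
```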


📊 4. Overconfidence Problem

Even when the model is wrong… it sounds very confident.

That’s dangerous.

  • No uncertainty indicator
  • No validation mechanism
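
The fix is the same idea: add the validation yourself. A small sketch, again with a hypothetical callModel and a made-up trustedCapitals lookup, that cross-checks a factual answer before trusting it.

```javascript
// Cross-check a factual claim against a trusted source before showing it.
// callModel is hypothetical; trustedCapitals is an assumed lookup table.
const trustedCapitals = { France: "Paris", Japan: "Tokyo" };

async function capitalWithValidation(callModel, country) {
  const answer = (await callModel(`What is the capital of ${country}? One word.`)).trim();
  const known = trustedCapitals[country];
  if (known === undefined) return { answer, verified: false }; // cannot verify: flag it
  return { answer, verified: answer === known };
}
// The model sounds confident either way; the verified flag is ours, not the model's.
```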

🤔 Questions I Had While Learning

❓ Why is hallucination NOT a bug?

Because LLMs are designed to always predict the next probable token, even when they don’t have correct information. There is no built-in system to verify truth.

❓ Why can’t we fully trust outputs?

Because outputs are based on statistical patterns, not factual validation. The model generates what sounds correct, not what is confirmed.

❓ Why do instructions get ignored?

Because instructions are just part of the input. Their influence depends on clarity, position, and competition with other tokens.


🧠 Final Mental Model

LLM failures are predictable — if you understand how the system works.
  • Hallucination → probability, not truth
  • Context loss → limited memory
  • Ignored instructions → no strict enforcement

🚀 What Changed for Me

Before:

  • I trusted outputs blindly

Now:

  • I question outputs
  • I design better prompts
  • I think in systems

💭 Final Thought

LLMs are not broken.

They are just misunderstood.

Once you understand their limitations —
you stop trusting blindly…
and start building intelligently.


This is Day 4 of my AI Engineering journey.
