Day 6.1 — Why I Switched to Local LLMs
AI Engineering — Day by Day
My journey to becoming an AI Engineer
So far in my AI learning journey, I have mostly focused on:
- Understanding how LLMs work
- Learning prompt engineering
- Exploring limitations and evaluation
But when it came to actually building systems, I had to make an important decision:
Should I use paid APIs… or find another way?
At first, APIs felt like the obvious choice. They are easy, powerful, and everything just works.
But the more I thought about it, the more I realized:
If I rely only on APIs, I might learn usage… but not systems.
⚠️ The Problem with API-First Learning
Using APIs is great for building quickly, but for learning deeply, it has limitations:
- Everything feels “too perfect”
- You don’t see failure modes clearly
- You depend on external systems
- You don’t control the full pipeline
This creates an illusion:
“My system works”
But in reality:
The API is doing most of the heavy lifting.
🧠 The Shift in Thinking
At this point, I asked myself:
Do I want to be someone who uses AI… or someone who understands and builds AI systems?
That question changed my approach completely.
🚀 Why I Chose Local LLMs
Instead of relying on APIs, I decided to move to a local-first setup using tools like:
👉 Ollama (a local LLM runtime)
This allows me to:
- Run models directly on my machine
- Control parameters like temperature (see the sketch after this list)
- Experiment without cost concerns
- Understand system behavior deeply
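To make "control" concrete, here is a minimal sketch of what calling a local model can look like, assuming Ollama is installed and serving on its default local port (11434) with a model already pulled. The model name (`llama3.2`), the prompt, and the temperature value are placeholders for illustration, not my actual setup:

```python
# Minimal sketch: query a locally running Ollama server over its REST API.
# Assumes Ollama is serving on the default port 11434 and that a model
# (here "llama3.2", a placeholder) has already been pulled.
import requests  # third-party: pip install requests

response = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's local generate endpoint
    json={
        "model": "llama3.2",                 # any model you have pulled locally
        "prompt": "Explain RAG in one sentence.",
        "stream": False,                     # ask for one complete reply
        "options": {"temperature": 0.2},     # sampling parameters stay in my hands
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])           # the generated text
```

The `options` field is exactly where parameters like temperature live, and because everything runs on my machine, I can tweak and re-run freely with no cost per request.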
⚖️ Tradeoffs (Important to Acknowledge)
This decision is not perfect — and that’s important.
| Local LLMs | API Models |
|---|---|
| More control | Less control |
| No cost per request | Pay per request |
| Slower | Faster |
| Less powerful, less polished output | More powerful, better output quality |
| More setup required | Plug and play |
And honestly, that’s exactly why I chose local models.
Because:
Better learning happens when things don’t “just work.”
🤔 What Surprised Me
Even before building anything, I realized:
- Local models are less “polished”
- They hallucinate more
- They require better prompt design
And instead of seeing this as a limitation, I now see it as:
A learning opportunity.
🧠 How This Fits My Learning Goal
My goal is not just to call APIs.
My goal is to:
- Understand LLM behavior
- Build systems like RAG and agents
- Debug failures properly
And for that:
A local-first approach makes more sense.
🔄 What’s Next
Now that the direction is clear, the next step is:
Actually setting up and running a local LLM.
In the next post (Day 6.2), I’ll:
- Install Ollama
- Run my first model
- Test real prompts locally
💠 Final Thought
APIs make things easy.
But if you want to truly understand AI systems:
You need to get closer to the machine.
This is Day 6.1 of my AI engineering journey —
and this decision feels like a turning point.