AI Engineering, JavaScript Solutions, Competitive programming in JavaScript, MCQ in JS

Thursday, April 30, 2026

Day 6.2 - AI Engineering Journey - Running My First Local LLM


AI Engineering — Day by Day
My journey to becoming an AI Engineer




In my previous post, I made a decision:

Move away from API-first learning… and switch to local LLMs.

Today was about putting that decision into action.


💻 My Setup

I didn’t use a high-end machine. Instead, I used:

  • MacBook Air (2017)
  • Intel processor
  • Limited RAM

Honestly, I expected things to be slow… maybe even unusable.

But what happened next surprised me.


🛠 Step 1 — Installing Ollama

To run a local LLM, I used:

Ollama (Local LLM runtime)

Installation was straightforward:

Download → Install → Run

Once installed, I verified it using:

ollama --version

🧠 Step 2 — Running My First Model

I started with a lightweight model:

ollama run phi3

This was my first real interaction with a locally running LLM.


🧪 Step 3 — Testing Prompts

I tried a few simple prompts:

Explain AI in 2 lines  
Explain AI like I am 10 years old  
Explain AI step by step
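Typing prompts one by one works, but they can also be sent programmatically: while a model is running, Ollama serves a local HTTP API on port 11434. Here is a minimal Node.js (18+) sketch using Ollama's `/api/generate` endpoint — the endpoint and fields are Ollama's, but the helper names are my own:

```javascript
// Sketch: send the three test prompts to a locally running phi3 model
// through Ollama's HTTP API (default port 11434).
const prompts = [
  "Explain AI in 2 lines",
  "Explain AI like I am 10 years old",
  "Explain AI step by step",
];

// Build the JSON body for Ollama's /api/generate endpoint.
// stream: false returns one JSON object instead of a token stream.
function buildGenerateBody(model, prompt) {
  return JSON.stringify({ model, prompt, stream: false });
}

async function runAll() {
  for (const prompt of prompts) {
    const res = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: buildGenerateBody("phi3", prompt),
    });
    const { response } = await res.json();
    console.log(`> ${prompt}\n${response}\n`);
  }
}

// With `ollama run phi3` active, call: runAll().catch(console.error);
```

Scripting the prompts like this makes it much easier to compare how small wording changes affect the output.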

And I started observing the behavior carefully.


⚡ What Surprised Me

This was the most interesting part.

  • It was not as slow as I expected
  • Responses came back reasonably fast
  • It was usable for real experimentation

I initially assumed:

“Local models will be painfully slow”

But that wasn’t entirely true.

Yes, it’s slower than APIs — but not unusable.


⚠️ What I Noticed About the Output

While speed was better than expected, output quality showed clear differences:

  • Less polished compared to API models
  • More sensitive to prompt wording
  • Slightly higher chance of hallucination

And this actually made things more interesting.

Because now I could clearly see:

How prompt design affects output behavior.

🧠 What I Learned From This

This small experiment changed my perspective:

  • I don’t need expensive APIs to learn AI engineering
  • Local models are good enough for system-level understanding
  • Imperfections actually improve learning

🔄 How This Connects to My Goal

My goal is not just to generate responses.

My goal is to:

  • Understand how LLMs behave
  • Build systems like RAG and agents
  • Debug failures

And for that:

This setup feels perfect.

🚀 What’s Next

Now that I have a working local LLM, the next step is:

Building my own LLM Playground.

In the next post (Day 6.3), I’ll:

  • Create a UI for prompt input
  • Add controls like temperature
  • Run multiple outputs for experimentation
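As a starting point for that playground, the same local API accepts sampling settings through Ollama's `options` field (`temperature` is a real option); the helper below is a hypothetical sketch of how the playground might prepare its requests:

```javascript
// Hypothetical playground helper: build N identical request bodies for
// one prompt, so N independent samples can be compared side by side.
// `temperature` is passed through Ollama's `options` field.
function buildPlaygroundRequests(prompt, { temperature = 0.8, runs = 3 } = {}) {
  const body = {
    model: "phi3",
    prompt,
    stream: false,
    options: { temperature },
  };
  // One body per run; each POST samples a fresh completion.
  return Array.from({ length: runs }, () => JSON.stringify(body));
}

// Example: three sampled outputs for the same prompt at temperature 1.0.
// Each string can be POSTed to http://localhost:11434/api/generate.
const requests = buildPlaygroundRequests("Explain AI in 2 lines", {
  temperature: 1.0,
});
```

Higher temperatures should make the three sampled outputs diverge more, which is exactly the behavior the playground is meant to expose.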

💭 Final Thought

Before today, local LLMs felt like a limitation.

Now they feel like:

A playground for real learning.

This is Day 6.2 of my AI engineering journey —
and this was my first real step into running AI locally.
