MicroscaleLabs
Lab 07-A · 60–90 min · GPU · Mac · Colab · CPU

LoRA for Behavioral Fine-Tuning

Act VI · Making It Yours
the aha moment

Implement LoRA from scratch — the A×B low-rank decomposition, the α/r scaling, the zero-init — attach it to Qwen3-0.6B's query projection, and fine-tune on 20 cooking-instruction examples. 24,576 trainable params. A 2 MB adapter. A noticeably shifted voice after 200 steps. Merge, verify, keep the file.
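The three ingredients named above — the A×B low-rank decomposition, the α/r scaling, and the zero-init on B — can be sketched in a few lines of numpy. This is a toy illustration, not the lab's actual implementation: the 1024→2048 projection shape and r=8 are assumptions chosen so the trainable-parameter count lands on the 24,576 quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes: q_proj mapping 1024 -> 2048, rank r=8, alpha=16.
# With these, trainable params = r*d_in + d_out*r = 24,576.
d_in, d_out, r, alpha = 1024, 2048, 8, 16

W = rng.normal(size=(d_out, d_in)) * 0.02   # frozen base weight (not trained)
A = rng.normal(size=(r, d_in)) * 0.01       # down-projection: small random init
B = np.zeros((d_out, r))                    # up-projection: zero init

def lora_forward(x):
    # Base path plus low-rank update, scaled by alpha/r.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(4, d_in))

# Zero-init on B makes the adapter a no-op before training starts,
# so step 0 reproduces the base model exactly:
assert np.allclose(lora_forward(x), x @ W.T)

print(A.size + B.size)  # → 24576 trainable params under these assumed shapes
```

The zero-init is the important design choice: training begins from the unmodified base model, and the adapter only drifts away from identity as gradients flow into A and B.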

Open in Colab · View on GitHub
the facts
Time
60–90 min
Hardware
GPU · Mac · Colab · CPU
Act
VI · Making It Yours
Status
Live
Artifact
A trained LoRA adapter (~2 MB) and a before/after generation comparison.
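The merge step that produces that final artifact is a single rank-r weight update: fold (α/r)·B·A into the frozen weight and verify the merged matrix reproduces the adapter's outputs. A minimal numpy sketch, with the same assumed 1024→2048 / r=8 shapes as before and a randomly filled B standing in for a trained adapter:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, r, alpha = 1024, 2048, 8, 16   # assumed shapes, not the lab's exact config

W = rng.normal(size=(d_out, d_in)) * 0.02   # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01
B = rng.normal(size=(d_out, r)) * 0.01      # pretend training has moved B off zero

# Merge: fold the scaled low-rank product into the base weight.
W_merged = W + (alpha / r) * (B @ A)

# Verify: the merged weight matches base-plus-adapter on any input.
x = rng.normal(size=(4, d_in))
adapter_out = x @ W.T + (alpha / r) * (x @ A.T @ B.T)
assert np.allclose(x @ W_merged.T, adapter_out)
```

After merging, the adapter costs nothing at inference time; keeping the ~2 MB A/B file around is what lets you re-apply or remove the behavior later.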
run it locally

Clone the labs repo and run this lab as a script or open it as a notebook:

git clone https://github.com/iqbal-sk/Microscale-labs.git
cd Microscale-labs
just setup-auto      # auto-detects CPU / CUDA / Mac
just run 07-a
# or:  jupyter lab labs/07-a-lora-behavioral/lab.py

Full install options (uv, pip, or the platform-specific CUDA paths) are in the labs README.
