the aha moment
Same LoRA recipe as 07-A, but with a higher rank (r=16), two target projections (q_proj and v_proj), and training examples formatted with Qwen3's native tool-calling template. Teach the model to emit valid JSON for a 6-function kitchen-assistant API, then evaluate deterministically: does the JSON parse? Does the function name match? Does it generalise to held-out prompts?
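The adapter described above can be sketched roughly as follows, assuming the Hugging Face peft library; the lora_alpha and dropout values here are illustrative assumptions, not the lab's exact hyperparameters:

```python
from peft import LoraConfig

# Sketch of the lab's adapter setup: rank 16, targeting the
# query and value projections of each attention block.
lora_config = LoraConfig(
    r=16,                                 # higher rank than 07-A
    target_modules=["q_proj", "v_proj"],  # two target projections
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    task_type="CAUSAL_LM",
)
```

The config is then applied to the base model with `get_peft_model(model, lora_config)` before training.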
the facts
- Time: 60–90 min
- Hardware: GPU · Mac · Colab · CPU
- Act: VI · Making It Yours
- Status: Live
- Artifact: A LoRA adapter that produces valid tool-call JSON + a pass/fail eval harness.
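The pass/fail eval harness can be deterministic because both checks are mechanical: parse the JSON, then compare the function name. A minimal sketch, assuming the model emits a `{"name": ..., "arguments": ...}` object and using hypothetical names for the 6-function kitchen API:

```python
import json

# Hypothetical function names standing in for the 6-function kitchen API.
KITCHEN_FUNCTIONS = {
    "set_timer", "convert_units", "find_recipe",
    "add_to_shopping_list", "get_substitution", "scale_recipe",
}

def grade_tool_call(model_output: str, expected_name: str) -> dict:
    """Deterministic pass/fail grading: does the output parse as JSON,
    and does the called function match the expected one?"""
    result = {"parses": False, "name_match": False}
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return result
    result["parses"] = True
    name = call.get("name") if isinstance(call, dict) else None
    result["name_match"] = name in KITCHEN_FUNCTIONS and name == expected_name
    return result
```

Running the same harness over held-out prompts gives the generalisation number with no LLM-as-judge subjectivity.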
run it locally
Clone the labs repo and run this lab as a script or open it as a notebook:
```
git clone https://github.com/iqbal-sk/Microscale-labs.git
cd Microscale-labs
just setup-auto   # auto-detects CPU / CUDA / Mac
just run 07-b     # or: jupyter lab labs/07-b-lora-tool-calling/lab.py
```
Full install options (uv, pip, or the platform-specific CUDA paths) are in the labs README.
read alongside