My First Attempt at Fine-Tuning an SLM using Unsloth

A hands-on breakdown of turning a small language model into a focused information extraction system for NER-style tasks.

Key takeaways

  • Small-model fine-tuning becomes far more practical when the task is narrow and evaluation is explicit.
  • Dataset preparation and output consistency matter as much as the training run itself.
  • The article frames fine-tuning as an engineering workflow, not just a model experiment.

This portfolio page keeps a concise internal summary while the full article remains published externally on Medium.

This piece documents an early fine-tuning experiment centered on information extraction instead of open-ended chat. The goal was to see whether a small language model could be shaped into a focused assistant for structured NER-style tasks without taking on the cost profile of a larger model.
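A setup like this usually comes down to how each training record is shaped. The sketch below is a minimal, hypothetical example of one supervised record for NER-style extraction, where the target is strict JSON rather than free-form chat; the field names (`instruction`, `input`, `output`) and the label set are illustrative assumptions, not the article's actual schema.

```python
import json

# Hypothetical training record for NER-style extraction.
# The model is instructed to answer with JSON only, so the
# fine-tuned SLM learns a machine-parseable output contract.
def make_record(text, entities):
    """Build one instruction-tuning example with a JSON-only target."""
    return {
        "instruction": (
            "Extract all named entities from the text. "
            'Respond with JSON only: a list of {"text": ..., "label": ...} objects.'
        ),
        "input": text,
        "output": json.dumps(entities),
    }

record = make_record(
    "Ada Lovelace worked with Charles Babbage in London.",
    [
        {"text": "Ada Lovelace", "label": "PERSON"},
        {"text": "Charles Babbage", "label": "PERSON"},
        {"text": "London", "label": "LOC"},
    ],
)
print(record["output"])
```

Keeping the target as serialized JSON (rather than prose describing the entities) is what lets downstream code consume the model's answers directly.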

The article is valuable because it stays practical. It focuses on the parts of the workflow that usually determine whether a fine-tuning attempt becomes useful in production: data preparation, output reliability, and evaluation against the exact format the system is expected to return.
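Evaluating against the exact expected format can be made concrete with a small scorer. The sketch below is one possible approach, not the article's actual harness: a completion that fails to parse, or does not match the expected list-of-objects shape, simply contributes zero matches, so formatting failures are penalized the same way as wrong predictions.

```python
import json

# Hypothetical strict-format scorer: compare a model completion against
# gold entities, requiring the completion to be valid JSON in the exact
# expected shape (a list of {"text": ..., "label": ...} objects).
def score_completion(completion, gold):
    """Return (true_positives, num_predicted, num_gold) for one example."""
    try:
        predicted = json.loads(completion)
        if not isinstance(predicted, list):
            raise ValueError("expected a JSON list")
        pred_set = {(e["text"], e["label"]) for e in predicted}
    except (ValueError, KeyError, TypeError):
        # Unparseable or malformed output counts as zero predictions.
        pred_set = set()
    gold_set = {(e["text"], e["label"]) for e in gold}
    return len(pred_set & gold_set), len(pred_set), len(gold_set)

gold = [{"text": "London", "label": "LOC"}]
tp, n_pred, n_gold = score_completion('[{"text": "London", "label": "LOC"}]', gold)
print(tp, n_pred, n_gold)  # 1 1 1
```

Aggregating these counts across a held-out set gives precision and recall on the exact output contract, which is the number that matters if the system feeds a downstream parser.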