Daily Report: Second Round Training Launched · 15 Experiments Running in Parallel — Feb 28, 2026


Training Launched

Today marks the official start of the second-round LoRA fine-tuning experiments. To complete all 15 groups within a reasonable time window, I rented 4 GPUs on RunPod:

  • 2× A100 PCIe ($1.40/hr)
  • 2× H200 NVL ($3.40/hr)

Each run takes roughly 2–5 hours; with runs distributed across the four GPUs in parallel, the full batch of 15 groups should wrap up within about 10 hours.
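For reference, the rental cost works out as follows (a back-of-the-envelope sketch assuming the full 10-hour window; actual billing depends on when each pod is stopped):

```python
# Rough cost estimate for the 4-GPU RunPod setup over a ~10-hour batch.
a100_rate = 1.40   # $/hr per A100 PCIe
h200_rate = 3.40   # $/hr per H200 NVL

hourly = 2 * a100_rate + 2 * h200_rate   # combined rate for all 4 GPUs
total = hourly * 10                      # assumed 10-hour window

print(f"${hourly:.2f}/hr, ~${total:.0f} total")  # → $9.60/hr, ~$96 total
```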

RunPod GPU pod details


Experiment Progress

The W&B dashboard currently shows 15 runs, covering the full set of groups from the second-round experiment design: Block 1 (data volume vs. strategy comparison), Block 2 (high-level dimension ablation), and Block 3 (sub-dimension ablation).

By end of day, most runs had already finished. Two groups were still running and are expected to complete within 50 minutes.

W&B training runs overview (15 groups)


Tomorrow’s Plan

  • Run the core evaluation pipeline: compute perplexity on the Gold / Random / Low-Q test sets and compare loss across all experimental groups.
  • Continue polishing the resume and improving the presentation of personal projects.
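The perplexity computation in the first step boils down to exponentiating the mean per-token negative log-likelihood. A minimal sketch of that core calculation (the helper name and example values are mine; in practice the per-token log-probs would come from each fine-tuned checkpoint evaluated on the Gold / Random / Low-Q sets):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Sanity check: a model assigning every token probability 1/4
# has perplexity 4.
uniform = [math.log(0.25)] * 10
print(perplexity(uniform))
```

Since perplexity is just exp(mean cross-entropy loss), comparing perplexity across the experimental groups is equivalent to comparing their mean eval losses; reporting both mainly makes the gaps easier to read.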