# Adapters
LocoPilot abstracts every training framework behind the `TrainingAdapter` interface:

```ts
// src/training/types.ts
export interface TrainingAdapter {
  run(
    config: PartialTrainingConfig,
    logEmitter: EventEmitter,
  ): Promise<{ outputPath: string }>;
}
```
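A consumer attaches a log listener before calling `run`. A minimal, self-contained sketch — the `EchoAdapter` and the local type stand-ins here are hypothetical, for illustration only:

```typescript
import { EventEmitter } from 'events';

// Minimal stand-ins for the real types, so the sketch compiles alone.
type PartialTrainingConfig = { baseModel?: string };
interface TrainingAdapter {
  run(
    config: PartialTrainingConfig,
    logEmitter: EventEmitter,
  ): Promise<{ outputPath: string }>;
}

// Hypothetical adapter that just echoes a couple of log lines.
export class EchoAdapter implements TrainingAdapter {
  async run(
    config: PartialTrainingConfig,
    logEmitter: EventEmitter,
  ): Promise<{ outputPath: string }> {
    logEmitter.emit('log', `training ${config.baseModel ?? 'unknown'}`);
    logEmitter.emit('log', 'done');
    return { outputPath: '/tmp/echo-adapter' };
  }
}

// Wire up the listener first, then run: every 'log' event is captured.
export async function train(adapter: TrainingAdapter) {
  const logs = new EventEmitter();
  const lines: string[] = [];
  logs.on('log', (line: string) => lines.push(line));
  const { outputPath } = await adapter.run({ baseModel: 'demo' }, logs);
  return { outputPath, lines };
}
```

The API layer does the same thing, except the listener writes each line to an SSE response instead of an array.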
Three adapters ship in `src/training/adapters/`:
| Adapter | Platform | Python runner |
|---|---|---|
| Unsloth | Linux / Windows + NVIDIA | unsloth_runner.py |
| Axolotl | Linux / Windows + NVIDIA | axolotl_runner.py |
| MLX | macOS arm64 (Apple Silicon) | mlx_runner.py |
The TypeScript adapter spawns the Python runner as a subprocess, captures stdout/stderr line by line, and emits each line through `logEmitter` so the API can stream it to clients as Server-Sent Events (SSE).
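The line-by-line capture can be sketched as a small buffering helper. This is a sketch; the helper's name and exact location in the codebase are assumptions:

```typescript
// Splits an incoming chunk into complete lines, carrying any trailing
// partial line over to the next call via the returned `rest` buffer.
export function splitLines(
  rest: string,
  chunk: string,
): { lines: string[]; rest: string } {
  const parts = (rest + chunk).split('\n');
  return { lines: parts.slice(0, -1), rest: parts[parts.length - 1] };
}
```

An adapter would call this on every `data` event from the child process's stdout/stderr and `logEmitter.emit('log', line)` for each complete line, since a single `data` chunk can end mid-line.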
## Unsloth (recommended for Linux/Windows)
The default adapter. Roughly 2× faster than the vanilla Hugging Face Trainer, with lower VRAM usage and drop-in QLoRA support.
```sh
pip3 install unsloth trl transformers datasets
```
Best for:
- Single-GPU consumer hardware (RTX 3090, 4090, A6000)
- Llama / Mistral / Qwen / Phi family
- Quick QLoRA fine-tunes
```json
{ "framework": "unsloth", "baseModel": "...", ... }
```
## Axolotl (advanced)
A higher ceiling and more knobs. Good for multi-GPU setups, mixture-of-experts models, and unusual schedulers.
```sh
pip3 install axolotl
```
Best for:
- Multi-GPU training
- DeepSpeed / FSDP integration
- When you want full control over every hyper-parameter
```json
{ "framework": "axolotl", "baseModel": "...", ... }
```
## MLX (Apple Silicon)
MLX is Apple's own ML framework, built to run natively on Apple Silicon's unified memory architecture and Metal GPU.
```sh
pip3 install mlx-lm
```
The CLI auto-selects MLX on `darwin`/`arm64` regardless of what you put in `framework`. The same dataset and JSONL formats work; only the runner differs.
```json
{ "framework": "mlx", "baseModel": "mlx-community/Meta-Llama-3-8B-4bit", ... }
```
Use the `mlx-community` Hugging Face org's pre-quantised models for the fastest start on Apple Silicon.
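The darwin/arm64 override can be sketched as a pure function. The function name is an assumption; the real CLI logic may be structured differently:

```typescript
// Forces MLX on Apple Silicon regardless of the requested framework,
// mirroring the auto-selection behaviour described above.
export function resolveFramework(
  requested: string,
  platform: string = process.platform,
  arch: string = process.arch,
): string {
  return platform === 'darwin' && arch === 'arm64' ? 'mlx' : requested;
}
```

Keeping the platform and architecture as parameters (defaulting to the current process) makes the override trivially unit-testable.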
## Adding a new adapter
The interface is one method:
```ts
import { EventEmitter } from 'events';
import type {
  TrainingAdapter,
  PartialTrainingConfig,
} from '@infrarix/locopilot/training/types';

export class MyAdapter implements TrainingAdapter {
  async run(
    config: PartialTrainingConfig,
    logs: EventEmitter,
  ): Promise<{ outputPath: string }> {
    // 1. Spawn your runner.
    // 2. Forward stdout/stderr to logs.emit('log', line).
    // 3. Resolve with the absolute path of the saved adapter.
    return { outputPath: '/abs/path/to/adapter' };
  }
}
```
Register your adapter in `src/training/index.ts` (the framework → adapter map), open a PR, and you're done.
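Registration amounts to adding one entry to that map. A self-contained sketch — the map's name, the stub adapters, and the local interface stand-in are all assumptions made so the example compiles on its own:

```typescript
import { EventEmitter } from 'events';

// Local stand-in for the real interface, so the sketch compiles alone.
interface TrainingAdapter {
  run(
    config: object,
    logEmitter: EventEmitter,
  ): Promise<{ outputPath: string }>;
}

// Trivial stub used for every entry in this illustration.
const stub = (name: string): TrainingAdapter => ({
  async run() {
    return { outputPath: `/out/${name}` };
  },
});

// Hypothetical framework → adapter map, like the one in src/training/index.ts.
export const adapters: Record<string, TrainingAdapter> = {
  unsloth: stub('unsloth'),
  axolotl: stub('axolotl'),
  mlx: stub('mlx'),
  myframework: stub('myframework'), // your new adapter's key
};
```

The key you register is the value users put in the config's `framework` field.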