Model Configuration
Model configuration describes which foundation model you fine-tune, how you adapt it, and how you deploy it.
Supported Foundations
LLMTune syncs with IO.net inventory for:
- DeepSeek (reasoning-focused, tool-aware)
- Mistral (general-purpose, multilingual)
- Meta Llama series (assistant and coding variants)
- Intel and Qwen models (code and reasoning variants)
- Moonshot, Swiss-AI, BAAI, and others as they become available
Each model entry includes context length, parameter count, pricing, and recommended use.
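A catalog entry like the one above can be sketched as a plain record. The field names below mirror the attributes listed in this section; the model IDs, prices, and parameter counts are illustrative placeholders, not actual IO.net inventory values.

```python
# Hypothetical catalog entries; fields mirror the docs:
# context length, parameter count, pricing, recommended use.
CATALOG = [
    {"id": "deepseek-r1", "context_length": 128_000, "params_b": 671,
     "price_per_1m_tokens": 2.00, "recommended_use": "reasoning"},
    {"id": "mistral-7b", "context_length": 32_000, "params_b": 7,
     "price_per_1m_tokens": 0.20, "recommended_use": "general"},
]

def models_for(use: str, min_context: int = 0) -> list[dict]:
    """Return catalog entries matching a recommended use and a context floor."""
    return [m for m in CATALOG
            if m["recommended_use"] == use and m["context_length"] >= min_context]
```

Filtering on `recommended_use` plus a context floor is usually enough to narrow the catalog to one or two candidates before comparing price.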
Adaptation Strategies
- LoRA / QLoRA: Parameter-efficient fine-tuning suited for fast iteration.
- Full fine-tune: Available for select models when deeper changes are required.
- Training styles: Guided (recommended defaults) or Advanced (manual hyperparameters).
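The Guided/Advanced split above amounts to a set of recommended defaults that Advanced mode overrides field by field. A minimal sketch, assuming LoRA-style hyperparameters (the specific field names and default values here are assumptions, not the platform's actual schema):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class LoraConfig:
    rank: int = 16           # adapter rank; guided default (assumed)
    alpha: int = 32          # scaling factor, conventionally 2x rank
    dropout: float = 0.05
    quantized: bool = False  # True => QLoRA-style quantized base weights

# Guided: take the recommended defaults as-is.
GUIDED = LoraConfig()

# Advanced: override individual hyperparameters manually.
ADVANCED = replace(GUIDED, rank=64, alpha=128, quantized=True)
```

Keeping the config immutable (`frozen=True`) means every variant is a distinct object, which makes runs easier to hash and reproduce.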
Deployment Metadata
Each deployed model stores:
- Base model identifier
- Dataset references and weights
- Training configuration hash
- Endpoint URL and region
- Current status (active, paused, retired)
Use this metadata to reproduce runs or audit model lineage.
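One way the metadata above supports auditing is a stable hash of the training configuration: identical configs hash identically, so any drift in lineage is detectable. A minimal sketch, assuming a JSON-serializable config (the record fields and hashing scheme are illustrative, not the platform's internal format):

```python
import hashlib
import json

def config_hash(training_config: dict) -> str:
    """Stable short hash of a training configuration, for lineage audits."""
    # Canonical JSON (sorted keys, no whitespace) so equal configs hash equally.
    canonical = json.dumps(training_config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# Hypothetical deployment record mirroring the fields listed above.
record = {
    "base_model": "meta-llama-3-8b",                         # base model identifier
    "datasets": [{"ref": "support-tickets-v2", "weight": 1.0}],
    "endpoint": "https://example.invalid/v1",
    "region": "us-east",
    "status": "active",
    "training_config_hash": config_hash(
        {"adapter": "lora", "lr": 2e-4, "epochs": 3}
    ),
}
```

To verify a reproduction, recompute `config_hash` over the rerun's config and compare it with the stored value; a mismatch pinpoints a changed hyperparameter.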