Fine-tuning limitations

Model support

  • Only base models that the platform marks as fine-tunable can be used; jobs that reference any other model are rejected.
  • The list of supported models can change; always check the dashboard or the models API before starting a job.
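Since the list of fine-tunable models changes over time, it is worth filtering the models listing programmatically before submitting a job. The sketch below assumes a hypothetical response shape with a `fine_tunable` flag; the real field names and endpoint depend on your platform's API reference.

```python
# Hypothetical shape of a "list models" API response; the actual field
# names are an assumption -- check your platform's API reference.
models_response = [
    {"id": "base-small", "fine_tunable": True},
    {"id": "base-large", "fine_tunable": False},
    {"id": "base-chat", "fine_tunable": True},
]

def fine_tunable_ids(models):
    """Return the IDs of models the platform marks as fine-tunable."""
    return [m["id"] for m in models if m.get("fine_tunable")]

print(fine_tunable_ids(models_response))  # → ['base-small', 'base-chat']
```

Running this check right before job submission avoids a rejection caused by a model that was removed from the supported list since you last looked.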

Dataset

  • Format — Data must be in a supported format (e.g. JSONL with the expected schema). An invalid format can cause the job to be rejected or to fail.
  • Size — Minimum and maximum dataset sizes may apply; the API or UI reports the current limits.
  • Access — If you use a URL, the platform must be able to fetch it (e.g. public HTTPS). Private or local paths are not supported.
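Validating the dataset locally before upload catches format errors early, instead of after a queued job fails. This is a minimal sketch, assuming a JSONL schema with `prompt`/`completion` keys per line; substitute the schema your platform actually documents.

```python
import json

def validate_jsonl(text, required_keys=("prompt", "completion")):
    """Check that every non-blank line is valid JSON with the expected keys.

    The required keys here are an assumption for illustration; use the
    schema your platform documents for fine-tuning datasets.
    """
    errors = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if not line.strip():
            continue  # skip blank lines
        try:
            record = json.loads(line)
        except json.JSONDecodeError as exc:
            errors.append(f"line {lineno}: invalid JSON ({exc.msg})")
            continue
        missing = [k for k in required_keys if k not in record]
        if missing:
            errors.append(f"line {lineno}: missing keys {missing}")
    return errors

sample = '{"prompt": "Hi", "completion": "Hello"}\nnot json\n{"prompt": "x"}'
for err in validate_jsonl(sample):
    print(err)
```

An empty error list means the file is structurally sound; it does not guarantee the platform accepts it, since size and content limits still apply.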

Infrastructure

  • Training runs on platform infrastructure only. You cannot attach your own cluster or GPU fleet.
  • Queueing and runtime depend on platform capacity; there is no SLA unless stated in your agreement.

Billing

  • Jobs are billed from your balance. Insufficient balance can prevent the job from starting or cause it to fail.
  • Cost depends on model, dataset size, and hyperparameters. Use the cost estimate in the dashboard or API when available.
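When no official estimate is available, a back-of-the-envelope calculation helps you confirm your balance covers the job. The formula and price below are purely illustrative assumptions; real billing may add factors such as model size, validation passes, or minimum fees.

```python
def estimate_cost(dataset_tokens, epochs, price_per_million_tokens):
    """Rough fine-tuning cost: total tokens processed x per-token price.

    Illustrative only -- actual platform billing rules may differ.
    """
    trained_tokens = dataset_tokens * epochs
    return trained_tokens / 1_000_000 * price_per_million_tokens

# e.g. a 2M-token dataset, 3 epochs, at a hypothetical $8 per 1M tokens
print(f"${estimate_cost(2_000_000, 3, 8.0):.2f}")  # → $48.00
```

Comparing such an estimate against your balance before submission avoids the failure mode the bullet above describes, where an underfunded job never starts.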

After training

  • Deployment — Deploying the trained model for inference follows the same rules as other models (e.g. via the inference API and your API key).
  • Retention — Check the dashboard or docs for how long artifacts and model outputs are retained.

For full details on workflow and dataset format, see Workflow and Dataset format.