What file formats are supported for datasets?

JSONL with messages or conversations arrays is preferred. CSV and plain text (TXT) are also accepted. LLMTune will prompt you to map columns if needed. For specialized training methods (multimodal, audio), specific formats are required – see the Fine-Tuning Guide for details.
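For example, a single JSONL record in the messages format might look like the line below (one JSON object per line; field names follow the common chat schema, and the exact requirements are in the Fine-Tuning Guide):

```jsonl
{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is JSONL?"}, {"role": "assistant", "content": "JSON Lines: one JSON object per line."}]}
```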

How long does training take?

Training time depends on:
  • Dataset size – Larger datasets take longer
  • Model size – Larger models require more compute time
  • Training method – PPO and RLAIF typically take longer than SFT
  • Compute option – GPU Clusters are faster than Single Instance
Typical ranges:
  • SFT/DPO – 20–40 minutes for small to medium datasets
  • PPO/RLAIF – 1–3 hours depending on complexity
  • Full fine-tunes – Can take several hours; the studio provides estimates before launch

What training methods are available?

LLMTune supports 13 training methods:
  • SFT (Supervised Fine-Tuning)
  • DPO (Direct Preference Optimization)
  • PPO (Proximal Policy Optimization)
  • RLAIF (RL with AI Feedback)
  • CTO (Controlled Tuning Optimization)
  • Code Generation
  • Multimodal
  • Text-to-Embeddings
  • Audio Understanding
  • Audio-to-Text (ASR)
  • Text-to-Audio (TTS)
  • Reward Modeling
  • Video Understanding (coming soon)
See the Fine-Tuning Guide for details on each method.

What compute options are available?

LLMTune offers two compute modes:
  • Traditional Computing – Single location, predictable performance
    • Single Instance or GPU Cluster
  • Federated Computing – Distributed across global nodes
    • Privacy-preserving, unlimited scale, lower costs
    • Single Instance or GPU Cluster
You can switch between options anytime. See the Fine-Tuning Guide for details.

Can I bring my own model checkpoints?

Yes. Enterprise plans allow import of custom base models. Contact support at [email protected] to connect storage and complete compliance review.

How are API requests billed?

You are billed per input and output token processed by each deployment. Usage dashboards show daily and monthly totals. Training runs are billed per job based on GPU hours and compute type (Traditional vs Federated).
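As a sketch, per-token billing works like this. The rates below are hypothetical placeholders, not real LLMTune prices; check the Usage dashboard for your deployment's actual pricing:

```typescript
// Estimate the cost of a single request from its token counts.
// Rates are hypothetical placeholders in USD per 1M tokens.
interface Rates {
  inputPerMillion: number;  // USD per 1M input tokens
  outputPerMillion: number; // USD per 1M output tokens
}

function estimateCostUSD(
  inputTokens: number,
  outputTokens: number,
  rates: Rates,
): number {
  return (
    (inputTokens / 1_000_000) * rates.inputPerMillion +
    (outputTokens / 1_000_000) * rates.outputPerMillion
  );
}

// Example: 1,200 input and 300 output tokens at $0.50 / $1.50 per 1M
// comes to roughly $0.00105.
const cost = estimateCostUSD(1200, 300, {
  inputPerMillion: 0.5,
  outputPerMillion: 1.5,
});
```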

What is the API base URL?

The public LLMTune API is available at:
https://api.llmtune.io/v1
For in-app routes used by the web application, see https://llmtune.io/api/.... External integrations should use the api.llmtune.io base URL.

Can I export models?

You can download fine-tuned adapters or full-model weights (where licensing permits) from the deployment panel or training job details.

How do I monitor endpoints?

Use the Usage dashboards in the LLMTune web app, subscribe to webhooks, or query the /usage endpoints programmatically via the API.
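Programmatic monitoring might look like the sketch below. The /usage path comes from this FAQ, but the `period` query parameter and the response shape are assumptions; verify them against the API reference before relying on them:

```typescript
const BASE_URL = "https://api.llmtune.io/v1";

// Build a usage-query URL. The "period" query parameter is hypothetical.
function usageUrl(period: "daily" | "monthly"): string {
  const url = new URL(`${BASE_URL}/usage`);
  url.searchParams.set("period", period);
  return url.toString();
}

// Fetch usage totals for the authenticated workspace.
async function fetchUsage(period: "daily" | "monthly"): Promise<unknown> {
  const res = await fetch(usageUrl(period), {
    headers: { Authorization: `Bearer ${process.env.LLMTUNE_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Usage request failed: ${res.status}`);
  return res.json();
}
```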

What safety features exist?

LLMTune supports guardrails such as:
  • Automatic PII detection and masking (in Dataset Hub)
  • Quality scoring and validation
  • Safety classifiers (can be enabled per training configuration)
  • Custom filters (enforced at inference time)

How does the training queue work?

The training queue processes jobs sequentially to conserve GPU resources and keep the experience stable. When multiple users submit training jobs, they are queued and executed one at a time. Each job shows its queue position, status, and progress so you always know what’s happening.
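Conceptually, sequential processing behaves like the sketch below. This is purely illustrative of FIFO, one-at-a-time execution, not LLMTune's actual implementation:

```typescript
// A minimal FIFO job queue that runs one job at a time and can
// report a job's current queue position. Purely illustrative.
type Job = { id: string; run: () => Promise<void> };

class TrainingQueue {
  private jobs: Job[] = [];
  private running = false;

  // Position 0 means "next to run" (or currently running).
  position(id: string): number {
    return this.jobs.findIndex((j) => j.id === id);
  }

  submit(job: Job): void {
    this.jobs.push(job);
    void this.drain();
  }

  private async drain(): Promise<void> {
    if (this.running) return; // only one job executes at a time
    this.running = true;
    while (this.jobs.length > 0) {
      await this.jobs[0].run(); // execute sequentially
      this.jobs.shift();        // remove the completed job
    }
    this.running = false;
  }
}
```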

How do I report an incident?

Use the in-app support panel or email [email protected]. Include:
  • Workspace ID
  • Deployment ID or training job ID
  • Timestamps
  • Error messages or screenshots

What happens if I run out of balance?

Inference requests fail with 402 Payment Required errors. Add balance via the Stripe integration in Usage → Balance; requests succeed again once credit is restored.
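A client can treat 402 as a retryable condition once balance is restored. A minimal sketch (the assumption that a thrown error carries a numeric `status` field mirrors OpenAI-style SDKs, but verify against your client library):

```typescript
// Retry a request a few times, pausing when the API reports
// 402 Payment Required (i.e. the workspace balance is exhausted).
async function withBalanceRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  delayMs = 5_000,
): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err: any) {
      // Only retry on 402; other errors propagate immediately.
      if (err?.status !== 402 || i === attempts - 1) throw err;
      await new Promise((r) => setTimeout(r, delayMs));
    }
  }
  throw new Error("unreachable");
}
```

In practice you would top up the balance in Usage → Balance while the client waits, rather than retrying indefinitely.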

Can I use the OpenAI SDK with LLMTune?

Yes! The inference API is OpenAI-compatible. You can use the OpenAI SDK by pointing it to the LLMTune base URL:
import OpenAI from "openai";

// Point the OpenAI SDK at LLMTune's OpenAI-compatible endpoint.
const client = new OpenAI({
  baseURL: "https://api.llmtune.io/v1",
  apiKey: process.env.LLMTUNE_API_KEY, // set LLMTUNE_API_KEY in your environment
});
See the Inference API Guide for examples.
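Once configured, requests follow the standard OpenAI chat-completions shape. The sketch below uses plain fetch instead of the SDK, and "my-fine-tuned-model" is a placeholder, not a real deployment name:

```typescript
const API_BASE = "https://api.llmtune.io/v1";

// Build a request in the OpenAI chat-completions shape.
function chatRequest(model: string, userMessage: string) {
  return {
    url: `${API_BASE}/chat/completions`,
    body: {
      model,
      messages: [{ role: "user", content: userMessage }],
    },
  };
}

async function run() {
  const { url, body } = chatRequest("my-fine-tuned-model", "Hello!");
  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.LLMTUNE_API_KEY}`,
    },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  console.log(data.choices?.[0]?.message?.content);
}
```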

What models are supported?

LLMTune supports 17+ model families including:
  • LLaMA (3.3, 3.4, and variants)
  • Mistral (Nemo, 7B, and variants)
  • Qwen (Qwen3, Qwen-VL, Qwen2-Audio, and variants)
  • DeepSeek (R1, DeepSeek-Coder, and variants)
  • And more
See the Model Catalog for the full list.