Create a workspace, upload a dataset, fine-tune a foundation model, deploy it, call the inference API, and evaluate the results.
1. Create your workspace
- Sign in at https://llmtune.io/login.
- Select Create Workspace and follow the prompts.
- Invite teammates as Admin, Editor, or Viewer if collaboration is required.
2. Upload and prepare data
- Navigate to Datasets and choose Upload or Connect source.
- Bring data in by uploading files (JSONL, CSV, TXT), linking a managed LLMTune dataset, syncing from Hugging Face, or attaching cloud storage such as S3/GCS (a minimal JSONL sketch follows this list).
- Blend multiple sources with weights, tag records, and mask sensitive values before training.
- Versioning is automatic—every ingestion becomes a new snapshot you can roll back to.
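
If you are preparing a JSONL file by hand, the sketch below shows one common chat-style layout, written with Python's standard `json` module. The `messages` field names are an assumption for illustration; check the Datasets page for the exact schema your chosen training method expects.

```python
import json

# Hypothetical records in a chat-style layout; the exact field names LLMTune
# expects are not specified here, so treat this schema as an assumption.
records = [
    {"messages": [
        {"role": "user", "content": "Summarize our refund policy."},
        {"role": "assistant", "content": "Refunds are issued within 14 days of purchase."},
    ]},
    {"messages": [
        {"role": "user", "content": "What plans do you offer?"},
        {"role": "assistant", "content": "We offer Starter, Team, and Enterprise plans."},
    ]},
]

# One JSON object per line is what makes the file valid JSONL.
with open("support_examples.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```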
3. Launch a fine-tune
- Open Fine-Tune Studio.
- Choose a base model from the curated catalog—each card highlights latency, context length, and recommended use cases.
- Select your dataset or blend multiple sources with weighted ratios.
- Pick a training method (SFT, RW, PRO, DPO, CTO, or RLIF) and decide whether to run Guided presets or the Advanced strategy for full-control tuning.
- Dial in LoRA / QLoRA settings, learning rate, epochs, evaluation cadence, and guardrails (a sketch of typical values follows this list).
- Confirm the cost estimate and click Launch Training, then watch tokens/sec, loss, and webhook events stream in real time.
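
As a reference point for the knobs above, here is a hypothetical run configuration expressed as a plain Python dict. None of the field names or values are an official LLMTune schema; they only illustrate typical LoRA/QLoRA settings and common starting points for learning rate and epochs.

```python
# Hypothetical representation of the knobs exposed in Fine-Tune Studio.
# Every field name here is an assumption, shown only to give a sense of
# the orders of magnitude involved.
training_config = {
    "base_model": "llama-3.1-8b-instruct",  # assumed catalog entry
    "method": "SFT",                        # or RW, PRO, DPO, CTO, RLIF
    "dataset": "support_examples.jsonl",
    "adapter": {
        "type": "qlora",   # 4-bit quantized base weights + LoRA adapters
        "rank": 16,        # higher rank = more capacity, more memory
        "alpha": 32,       # scaling factor applied to the adapter output
        "dropout": 0.05,
    },
    "learning_rate": 2e-4,
    "epochs": 3,
    "eval_every_steps": 100,  # evaluation cadence
}
```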
4. Promote to an endpoint
- When the run completes, open Deployments.
- Click Promote to Endpoint, choose staging or production, and set autoscaling parameters (an illustrative configuration follows this list).
- LLMTune stores deployment metadata so you can pin versions, roll back, or diff configurations later.
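
The sketch below is a hypothetical Python dict showing the kind of choices made at promotion time. It is not an official LLMTune deployment schema; every field name is an assumption used only to illustrate staging versus production, autoscaling bounds, and version pinning.

```python
# Illustrative only: the names below are assumptions, not the LLMTune API.
deployment = {
    "environment": "staging",        # promote to staging first, then production
    "autoscaling": {
        "min_replicas": 1,           # keep one replica warm to avoid cold starts
        "max_replicas": 4,           # cap spend under bursty traffic
        "target_concurrency": 8,     # scale out when in-flight requests exceed this
    },
    "pinned_version": "v3",          # pin a version so rollbacks and diffs are deterministic
}
```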
5. Call the inference API
- Generate a scoped key under API Keys.
- Use the OpenAI-compatible endpoint (a request example follows this list).
- Track requests, tokens, and spend in Usage → Overview.
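
A minimal request sketch using the official `openai` Python SDK is shown below. The base URL and model name are assumptions for illustration; substitute the endpoint URL from your deployment and the scoped key you generated.

```python
from openai import OpenAI

# The base URL and model name are placeholders; copy the real values from
# your Deployments page and API Keys page.
client = OpenAI(
    base_url="https://api.llmtune.io/v1",  # hypothetical OpenAI-compatible endpoint
    api_key="YOUR_SCOPED_API_KEY",
)

response = client.chat.completions.create(
    model="my-finetuned-model",  # the endpoint you promoted
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
    max_tokens=200,
)

print(response.choices[0].message.content)
```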
6. Evaluate your model
- After training completes, click Evaluate on the training job.
- Test your model using:
  - Single Prompt: Quick individual tests
  - Compare with Base Model: Side-by-side comparison
  - Batch Evaluation: Test multiple prompts at once (a scripted sketch follows this list)
  - Results Dashboard: Comprehensive metrics and trends
- Use evaluation presets and prompt templates for quick testing.
- Export results for documentation.
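
For batch evaluation outside the UI, a small script can replay the same prompts against the base and fine-tuned models through the OpenAI-compatible endpoint. The sketch below reuses the hypothetical base URL from the inference example; both model identifiers are placeholders.

```python
from openai import OpenAI

# Same hypothetical base URL as the inference example; both model names are placeholders.
client = OpenAI(base_url="https://api.llmtune.io/v1", api_key="YOUR_SCOPED_API_KEY")

prompts = [
    "Summarize our refund policy.",
    "What plans do you offer?",
    "How do I rotate an API key?",
]

def answer(model: str, prompt: str) -> str:
    """Return a single completion so base and fine-tuned outputs can be compared."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=150,
    )
    return resp.choices[0].message.content

for prompt in prompts:
    print(f"PROMPT: {prompt}")
    print(f"  base      : {answer('base-model', prompt)}")
    print(f"  fine-tuned: {answer('my-finetuned-model', prompt)}")
```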
Continue with the Fine-Tuning Guide for advanced strategies, use the Evaluate Guide to test your models, explore the Playground for model comparisons, and wire up Webhooks for automation.