Create a workspace, upload a dataset, fine-tune a foundation model in FineTune Studio, deploy it, and call the LLMTune API.
Create your account and workspace
- Go to https://llmtune.io/login.
- Sign up or log in with your preferred method.
- The first time you sign in, a default workspace is created for you. You can create additional workspaces later from the dashboard.
Create an API key
- In the dashboard, open API Keys.
- Click Create API Key.
- Give the key a descriptive name (for example: production-backend or staging-testing).
- Copy the key and store it securely – you won’t be able to see it again.
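Once you have a key, you can attach it to API requests. A minimal sketch, assuming a standard bearer-token header scheme (the exact format LLMTune expects may differ – check the API reference) and an environment variable you set yourself:

```python
import os

# Assumed setup: the key is stored in an environment variable rather
# than hard-coded in source. "LLMTUNE_API_KEY" is an illustrative name.
api_key = os.environ.get("LLMTUNE_API_KEY", "sk-example-key")

def auth_headers(key: str) -> dict:
    """Build request headers using a bearer-token scheme (an assumption)."""
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }

headers = auth_headers(api_key)
```

Keeping the key in an environment variable (or a secrets manager) matters here because, as noted above, the dashboard will not show it again after creation.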
Upload and prepare data (Dataset Hub)
- Navigate to Dataset Hub from the main navigation.
- Choose Upload dataset or Connect source.
- Supported options today include:
  - Uploading files (JSONL, CSV, TXT)
  - Using preconfigured playground / demo datasets
- LLMTune validates the format and shows a preview.
- (Optional) Apply tags, notes, and quality checks. PII detection and cleaning are handled automatically where enabled.
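Before uploading, it helps to build and sanity-check the JSONL file locally so Dataset Hub's validation passes on the first try. A minimal sketch, assuming prompt/completion records (the field names your chosen training method expects may differ – confirm them against the format preview):

```python
import json
import tempfile
from pathlib import Path

# Illustrative records; the "prompt"/"completion" field names are an
# assumption about the expected schema, not a confirmed LLMTune format.
records = [
    {"prompt": "What formats does Dataset Hub accept?",
     "completion": "JSONL, CSV, and TXT uploads."},
    {"prompt": "What does FineTune Studio do?",
     "completion": "It configures and launches fine-tuning runs."},
]

path = Path(tempfile.mkdtemp()) / "train.jsonl"
with path.open("w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

def validate_jsonl(p: Path) -> int:
    """Check every line parses as JSON and carries both fields."""
    count = 0
    for line in p.read_text(encoding="utf-8").splitlines():
        rec = json.loads(line)
        assert "prompt" in rec and "completion" in rec
        count += 1
    return count

n_valid = validate_jsonl(path)  # number of well-formed records
```

This mirrors the check LLMTune performs on upload (format validation plus a preview), so malformed lines surface before the file ever leaves your machine.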
Launch a fine-tune (FineTune Studio)
- Open FineTune Studio from the product navigation.
- Pick a base model from the catalog. Cards highlight provider, context length, and recommended use cases.
- Select a dataset (or blend multiple datasets) from Dataset Hub.
- Choose a training method:
- SFT – Supervised Fine-Tuning
- DPO – Direct Preference Optimization
- PPO – Policy optimization with rewards
- RLAIF – RL with AI feedback
- CTO – Controlled Tuning Optimization
- Configure key hyperparameters:
- Learning rate
- Batch size
- Epochs
- Evaluation cadence
- Choose your compute model:
- Traditional (single instance or GPU cluster)
- Federated (distributed compute)
- Confirm the cost estimate and click Launch training.
- Watch live telemetry: tokens/sec, loss curves, queue position, and status.
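The configuration steps above can be sketched as a request body. Every field name below is illustrative – the real schema comes from the LLMTune API reference – but the structure maps one-to-one onto the choices in FineTune Studio:

```python
# Hypothetical training-job payload; field names and values are
# assumptions that mirror the Studio options, not a confirmed schema.
job = {
    "base_model": "example-base-7b",    # picked from the model catalog
    "dataset_ids": ["ds_demo"],         # one or more Dataset Hub datasets
    "method": "SFT",                    # SFT, DPO, PPO, RLAIF, or CTO
    "hyperparameters": {
        "learning_rate": 2e-5,
        "batch_size": 16,
        "epochs": 3,
        "eval_every_steps": 500,        # evaluation cadence
    },
    "compute": {"mode": "traditional", "gpus": 1},  # or "federated"
}
```

Capturing the run as a payload like this also makes it easy to version-control training configurations and reproduce runs later.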
Promote to an endpoint (Deploy)
- When a run completes, open Deploy (Deploy Control) from the product navigation or from the training run details.
- Click Promote to endpoint.
- Choose environment (staging or production).
- Optionally configure rollout strategy and traffic splitting.
- Save the deployment – LLMTune links it to the original dataset, training job, and workspace.
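Once promoted, the endpoint can be called over HTTPS. A minimal sketch of constructing such a request, assuming a bearer token and a JSON completions-style body – the endpoint URL, path, and payload schema here are placeholders, and the real values come from the deployment's details page:

```python
import json
import urllib.request

# Hypothetical endpoint URL and payload shape – substitute the values
# shown on your deployment's details page.
endpoint = "https://llmtune.io/v1/endpoints/ep_example/completions"
payload = {"prompt": "Hello from staging", "max_tokens": 64}

req = urllib.request.Request(
    endpoint,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer sk-example-key",  # your real API key
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted here so
# the sketch runs without a live deployment.
```

Pointing staging and production traffic at separate endpoints keeps the rollout and traffic-splitting options above meaningful: you can exercise a staging endpoint with requests like this before shifting production traffic.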
Evaluate your model (Evaluate Suite)
- From the training job or deployment, click Evaluate.
- Use the evaluation interface to:
- Run single prompt checks
- Compare with base model side-by-side
- Run batch evaluations over many prompts
- Inspect the results dashboard for quality trends
- Use evaluation results to decide whether to promote, adjust training data, or launch a new run.
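For batch evaluations, a simple local scorer can complement the dashboard. A minimal sketch using exact-match scoring over a handful of cases – Evaluate Suite's own metrics are richer, and the example data here is purely illustrative:

```python
# Illustrative evaluation cases: each pairs a model output ("candidate")
# with the expected answer ("reference").
cases = [
    {"prompt": "2+2?", "reference": "4", "candidate": "4"},
    {"prompt": "Capital of France?", "reference": "Paris", "candidate": "Paris"},
    {"prompt": "Largest planet?", "reference": "Jupiter", "candidate": "Saturn"},
]

def exact_match_rate(rows) -> float:
    """Fraction of cases where the candidate exactly matches the reference."""
    hits = sum(r["candidate"].strip() == r["reference"].strip() for r in rows)
    return hits / len(rows)

score = exact_match_rate(cases)  # 2 of the 3 cases match
```

Tracking a score like this across runs gives a concrete basis for the promote / adjust-data / retrain decision described above.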
Continue with the Fine-Tuning Guide for advanced strategies and wire up Webhooks for automation.