Create a workspace, upload a dataset, fine-tune a foundation model, deploy it, call the inference API, and evaluate the results.

Step 1: Create your workspace

  1. Sign in at https://llmtune.io/login.
  2. Select Create Workspace and follow the prompts.
  3. Invite teammates as Admin, Editor, or Viewer if collaboration is required.

Step 2: Upload and prepare data

  1. Navigate to Datasets and choose Upload or Connect source.
  2. Bring data in by uploading files (JSONL, CSV, TXT), linking a managed LLMTune dataset, syncing from Hugging Face, or attaching cloud storage such as S3/GCS; a sample JSONL record follows this list.
  3. Blend multiple sources with weights, tag records, and mask sensitive values before training.
  4. Versioning is automatic—every ingestion becomes a new snapshot you can roll back to.
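
For JSONL uploads, a minimal chat-style record might look like the sketch below, with one JSON object per line. The field names mirror the common messages format rather than a documented LLMTune schema, so treat them as an assumption and check the Datasets reference for the exact layout.

{"messages": [{"role": "system", "content": "You are a precise assistant."}, {"role": "user", "content": "Summarize this support ticket."}, {"role": "assistant", "content": "Customer reports a billing error; refund requested."}]}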

Step 3: Launch a fine-tune

  1. Open Fine-Tune Studio.
  2. Choose a base model from the curated catalog—each card highlights latency, context length, and recommended use cases.
  3. Select your dataset or blend multiple sources with weighted ratios.
  4. Pick a training method (SFT, RW, PRO, DPO, CTO, or RLIF) and decide whether to run Guided presets or the Advanced strategy for full-control tuning.
  5. Dial in LoRA / QLoRA settings, learning rate, epochs, evaluation cadence, and guardrails. Confirm the cost estimate and click Launch Training to watch tokens/sec, loss, and webhook events stream in real time.
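
To make the knobs in step 5 concrete, a Guided preset could resolve to values roughly like the sketch below. The field names and numbers are illustrative assumptions, not LLMTune's actual configuration schema; the authoritative options are the ones shown in Fine-Tune Studio.

{
  "method": "SFT",
  "adapter": "qlora",
  "lora_rank": 16,
  "lora_alpha": 32,
  "learning_rate": 0.0002,
  "epochs": 3,
  "eval_every_steps": 200
}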

Step 4: Promote to an endpoint

  1. When the run completes, open Deployments.
  2. Click Promote to Endpoint, choose staging or production, and set autoscaling parameters.
  3. LLMTune stores deployment metadata so you can pin versions, roll back, or diff configurations later.
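
The autoscaling parameters in step 2 generally amount to a replica range and an idle policy, as in the hedged sketch below; the field names are illustrative and may not match LLMTune's deployment schema exactly.

{
  "environment": "staging",
  "min_replicas": 1,
  "max_replicas": 4,
  "scale_to_zero_after_minutes": 15
}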

Step 5: Call the inference API

  1. Generate a scoped key under API Keys.
  2. Use the OpenAI-compatible endpoint:
curl https://llmtune.io/api/models/{deployment_id}/inference \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      { "role": "system", "content": "You are a precise assistant." },
      { "role": "user", "content": "Summarize this support ticket." }
    ],
    "temperature": 0.7,
    "max_tokens": 400
  }'
  3. Track requests, tokens, and spend in Usage → Overview.
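
Assuming the endpoint mirrors the OpenAI chat completions contract, the response to the request above should resemble the sketch below; the exact field set is an assumption, so verify it against the API reference.

{
  "id": "chatcmpl-example",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "The customer reports a failed password reset and requests escalation." },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 42, "completion_tokens": 18, "total_tokens": 60 }
}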

Step 6: Evaluate your model

  1. After training completes, click Evaluate on the training job.
  2. Test your model using:
    • Single Prompt: Quick individual tests
    • Compare with Base Model: Side-by-side comparison
    • Batch Evaluation: Test multiple prompts at once
    • Results Dashboard: Comprehensive metrics and trends
  3. Use evaluation presets and prompt templates for quick testing.
  4. Export results for documentation.
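
For Batch Evaluation, you generally supply a file of test prompts; a minimal JSONL sketch is shown below. The column names are an assumption for illustration, so check the Evaluate Guide for the exact upload format.

{"prompt": "Summarize: customer cannot reset their password after the latest release.", "expected": "Password reset failure"}
{"prompt": "Summarize: invoice totals do not match the order confirmation email.", "expected": "Billing discrepancy"}
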
Continue with the Fine-Tuning Guide for advanced strategies, use the Evaluate Guide to test your models, explore the Playground for model comparisons, and wire up Webhooks for automation.