Welcome to LLMTune Docs

LLMTune gives product, data, and operations teams one shared control center for fine-tuning, deploying, and governing custom language models. These docs guide you end to end, so every stakeholder knows how to get started, which decisions to make, and how to ship production-ready assistants with confidence. LLMTune operates as an independent product while orchestrating GPU capacity on IO.net compute; you retain ownership of your workflow, metadata, and roadmap.

Platform pillars

  • Guided lifecycle – Move from dataset preparation to deployment without context switching. Every stage is auditable and repeatable.
  • Operational clarity – Track spend, throughput, latency, and anomalies in real time so finance, security, and engineering stay aligned.
  • Enterprise controls – Workspaces, scoped keys, audit trails, and webhook events meet regulated-team requirements without slowing velocity.
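Webhook events are usually consumed by a small receiver that checks an integrity signature before acting on the payload. LLMTune's header names and signing scheme are not specified on this page, so the sketch below is a generic HMAC-SHA256 verifier; the X-LLMTune-Signature header and the environment variable holding the shared secret are assumptions for illustration only.

```python
# Hypothetical webhook verifier: header name, secret source, and payload shape
# are assumptions, not the documented LLMTune contract.
import hashlib
import hmac
import os

WEBHOOK_SECRET = os.environ["LLMTUNE_WEBHOOK_SECRET"]  # assumed env var


def verify_signature(raw_body: bytes, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 digest of the raw request body and compare it,
    in constant time, to the hex digest sent in the assumed signature header."""
    expected = hmac.new(WEBHOOK_SECRET.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Rejecting requests that fail this check keeps audit trails trustworthy, since only events signed with the workspace secret are acted upon.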

Architecture snapshot

  • Workspace – Access control, audit history, spend limits, team-level configuration.
  • Datasets – Uploads, managed sources, tagging, redaction, and blending across providers.
  • Training – LoRA/QLoRA configurations, live telemetry, resumable runs, webhook notifications.
  • Deployments – Endpoint promotion, version pinning, autoscaling policies, environment routing.
  • Runtime – OpenAI-compatible inference APIs, streaming, usage metering, SDK interoperability.
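Because the Runtime layer exposes OpenAI-compatible inference APIs, existing OpenAI SDKs can typically be pointed at an LLMTune endpoint by swapping the base URL and API key. The base URL, model ID, and environment variable below are placeholders rather than documented values; treat this as a minimal sketch.

```python
# Minimal sketch using the openai Python SDK (>=1.0) against an
# OpenAI-compatible endpoint. Base URL, model ID, and env var are
# placeholders, not documented LLMTune values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmtune.example/v1",   # assumed endpoint
    api_key=os.environ["LLMTUNE_API_KEY"],       # assumed scoped key
)

response = client.chat.completions.create(
    model="my-fine-tuned-model",                 # placeholder model ID
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
    stream=False,
)
print(response.choices[0].message.content)
```

Streaming works the same way: set stream=True and iterate over the returned chunks instead of reading a single response object.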

How to use this documentation

  1. Start with the essentials – Follow the Quickstart to create a workspace, ingest datasets, launch a fine-tune, evaluate your model, and call the inference API.
  2. Understand the primitives – Read the Core concepts to see how workspaces, datasets, model configuration, and usage metering fit together.
  3. Follow guided workflows – Use the How-to guides for detailed walkthroughs on tuning, evaluation, deployment, observability, and automation.
  4. Integrate programmatically – Reference the API documentation for REST endpoints, authentication scopes, and request examples across languages (a minimal request sketch follows this list).
  5. Plan ahead – Check the Roadmap and FAQ to align stakeholders on what’s shipping next and how LLMTune operates.
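As noted in step 4, the API documentation covers the full REST surface; the snippet below only sketches the shape of an authenticated request to launch a fine-tune. The path, payload fields (including the LoRA hyperparameters), and header are illustrative assumptions, not the documented schema.

```python
# Hypothetical example: endpoint path, payload fields, and auth header are
# illustrative assumptions; consult the API documentation for the real schema.
import os
import requests

API_BASE = "https://api.llmtune.example/v1"      # assumed base URL
headers = {"Authorization": f"Bearer {os.environ['LLMTUNE_API_KEY']}"}

payload = {
    "workspace_id": "ws_123",                    # placeholder workspace
    "dataset_id": "ds_456",                      # placeholder dataset
    "base_model": "llama-3-8b",                  # placeholder base model
    "method": "qlora",                           # LoRA/QLoRA per the Training layer
    "hyperparameters": {"lora_rank": 16, "learning_rate": 2e-4, "epochs": 3},
}

resp = requests.post(f"{API_BASE}/fine-tunes", headers=headers, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())
```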

Need help?