Welcome to LLMTune
LLMTune is a modern, full-stack platform that provides everything you need to fine-tune, deploy, and operate AI assistants at scale. Whether you're training custom models, comparing production-ready checkpoints, or serving inference endpoints, LLMTune brings it all together in one unified platform.
What is LLMTune?
LLMTune is a comprehensive AI platform designed to eliminate the complexity of building and managing AI infrastructure. With LLMTune, you can:
- Fine-tune custom models without writing code or managing infrastructure
- Compare and deploy production-ready models from our curated catalog
- Serve inference with OpenAI-compatible APIs and built-in observability
- Manage deployments with version control, traffic management, and rollback capabilities
- Evaluate quality with automated scorecards and human review workflows
- Connect datasets from multiple sources with quality scoring and PII detection
Platform Overview
LLMTune consists of six integrated products that work together seamlessly.
FineTune Studio
No-code fine-tuning for any model. Guided workflows, real-time monitoring, and support for every training method and modality.
Key Features:
- All training methods (SFT, DPO, PPO, RLAIF, CTO)
- All modalities (text, image, audio, video, code, multimodal, embeddings, TTS)
- Traditional or Federated compute options
- Single Instance or GPU Cluster deployment
- Real-time monitoring and telemetry
- Automatic checkpoints
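The options above combine into a job specification. Studio itself is no-code, so the `FineTuneJob` class below is purely an illustrative sketch of how the method, modality, compute, and deployment choices fit together; it is not an actual LLMTune API:

```python
from dataclasses import dataclass

# Allowed values are taken from the feature list above; the class is illustrative.
METHODS = {"SFT", "DPO", "PPO", "RLAIF", "CTO"}
MODALITIES = {"text", "image", "audio", "video", "code", "multimodal", "embeddings", "tts"}
COMPUTE = {"traditional", "federated"}
DEPLOYMENT = {"single_instance", "gpu_cluster"}

@dataclass
class FineTuneJob:
    base_model: str
    dataset_id: str
    method: str = "SFT"
    modality: str = "text"
    compute: str = "traditional"
    deployment: str = "single_instance"

    def validate(self) -> list[str]:
        """Return a list of human-readable problems; empty means the spec is valid."""
        problems = []
        if self.method not in METHODS:
            problems.append(f"unknown method {self.method!r}")
        if self.modality not in MODALITIES:
            problems.append(f"unknown modality {self.modality!r}")
        if self.compute not in COMPUTE:
            problems.append(f"unknown compute option {self.compute!r}")
        if self.deployment not in DEPLOYMENT:
            problems.append(f"unknown deployment option {self.deployment!r}")
        return problems
```

Any method can be paired with any modality and either compute option, which is the point of the guided workflow: the combinations are configuration choices, not separate pipelines.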
LLMTune Models
Production-ready model catalog with comparison tools, deployment notes, evaluation metrics, and usage guidance. Browse, compare, and deploy the best model for your use case.
Key Features:
- Curated catalog of open-source models
- Side-by-side model comparison
- Deployment notes and evaluation metrics
- Usage guidance and best practices
- One-click deployment
LLMTune API
OpenAI-compatible API platform with inference endpoints, streaming support, usage telemetry, and built-in governance. Serve your models with enterprise-grade observability.
Key Features:
- OpenAI-compatible inference endpoints
- Streaming support
- Usage telemetry and monitoring
- Workspace-scoped API keys
- Webhooks and events
- Built-in governance
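Because the endpoints are OpenAI-compatible, any OpenAI-style client can target them by swapping in a different base URL. A minimal standard-library sketch; the base URL `https://api.llmtune.io/v1` and the model name are assumptions for illustration, not documented values:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request without sending it."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # set True to use the streaming support listed above
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # workspace-scoped API key
            "Content-Type": "application/json",
        },
        method="POST",
    )

def send(req: urllib.request.Request) -> str:
    """Send the request; requires a live endpoint and a valid key."""
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The same request shape works with any OpenAI-compatible SDK, so existing client code typically needs only the base URL and key changed.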
LLMTune Deploy
Deployment control with version management, traffic routing, automated testing, and instant rollback. Ship new versions safely and scale confidently.
Key Features:
- Version control for models
- Traffic management (canary, shadow, blue/green)
- Automated testing and smoke tests
- Instant rollback
- Runtime observability
- Change logs and approvals
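Canary routing, one of the traffic-management strategies listed above, can be illustrated with deterministic hash-based bucketing: each request ID maps to a stable bucket, and buckets below the canary weight go to the new version. A conceptual sketch of the technique, not LLMTune's actual routing implementation:

```python
import hashlib

def route(request_id: str, canary_percent: int) -> str:
    """Deterministically route a request to 'canary' or 'stable'.

    The same request_id always lands in the same bucket, so a caller
    stays on one version while the canary percentage is held fixed.
    """
    digest = hashlib.sha256(request_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100  # bucket in [0, 100)
    return "canary" if bucket < canary_percent else "stable"

# Gradually raising canary_percent shifts traffic to the new version;
# instant rollback is just setting it back to 0.
```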
LLMTune Evaluate
Comprehensive evaluation suite with automated scorecards, human review workflows, safety checks, and quality dashboards. Keep your models on-brand, factual, and safe.
Key Features:
- Automated scorecards
- Human review workflows
- Safety and quality checks
- Regression testing
- Quality dashboards
- Integration with training pipeline
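An automated scorecard is, at its core, a battery of pass/fail checks aggregated into per-check pass rates. A minimal sketch of that idea; the example checks are illustrative stand-ins, not LLMTune's built-in safety and quality checks:

```python
from typing import Callable

# Each check maps a model output to pass/fail.
Check = Callable[[str], bool]

def scorecard(outputs: list[str], checks: dict[str, Check]) -> dict[str, float]:
    """Run every check on every output and return the pass rate per check."""
    return {
        name: sum(check(out) for out in outputs) / len(outputs)
        for name, check in checks.items()
    }

# Hypothetical example checks for illustration.
checks = {
    "non_empty": lambda out: bool(out.strip()),
    "under_500_chars": lambda out: len(out) <= 500,
    "no_placeholder": lambda out: "TODO" not in out,
}
```

Running the same scorecard before and after a fine-tune is what makes regression testing possible: a drop in any pass rate flags the new checkpoint for human review.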
Dataset Hub
Data intelligence platform with support for multiple sources (HuggingFace, S3, GCS, direct upload), quality scoring, PII detection, and automatic cleaning.
Key Features:
- Multiple data sources (HuggingFace, S3, GCS, direct upload)
- Quality scoring
- PII detection and masking
- Automatic data cleaning
- Version control
- Dataset preview and validation
Key Features
No-Code Interface
Fine-tune models, deploy endpoints, and manage your AI infrastructure through an intuitive visual interface. No infrastructure setup required.
All Training Methods
Support for Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), Proximal Policy Optimization (PPO), RL with AI Feedback (RLAIF), Controlled Tuning Optimization (CTO), and more.
All Modalities
Train models for text, image, audio, video, code, multimodal, embeddings, and TTS. Support for 17+ model families including LLaMA, Mistral, Qwen, DeepSeek, and more.
Flexible Compute
Choose between Traditional or Federated compute. Deploy on Single Instance or GPU Clusters. Switch anytime without changing your workflow.
Production-Ready
Built-in observability, version control, traffic management, and quality assurance. Everything you need to run AI in production.
Unified Platform
All products work together seamlessly. Fine-tune in Studio, deploy with Deploy, evaluate with Evaluate, and serve with API - all from one workspace.
Workflow Integration
All products work together seamlessly:
- Prepare Data → Use Dataset Hub to connect and prepare your datasets
- Train Models → Use FineTune Studio to train custom models
- Compare Models → Use LLMTune Models to compare and select the best model
- Evaluate Quality → Use LLMTune Evaluate to measure performance
- Deploy Safely → Use LLMTune Deploy to manage production deployments
- Serve at Scale → Use LLMTune API to serve models with observability
Unified Workspace
All products share:
- Single Authentication - One account, one workspace
- Unified API Keys - Same keys work across all products
- Shared Resources - Models, datasets, and deployments are shared
- Integrated Telemetry - Usage and metrics across all products
- Consistent Governance - Same policies and controls everywhere
How to Use This Documentation
- Start with the essentials → Follow the Quickstart to create an account, upload datasets, launch a fine-tune, evaluate your model, and call the inference API.
- Understand the products → Read about each product to understand its capabilities and use cases.
- Follow guided workflows → Use the How-to guides for detailed walkthroughs on tuning, evaluation, deployment, observability, and automation.
- Integrate programmatically → Reference the API documentation for REST endpoints, authentication scopes, and request examples across languages.
- Plan ahead → Check the Roadmap and FAQ to align stakeholders on what's shipping next and how LLMTune operates.
Need Help?
- General support: [email protected]
- Documentation feedback: [email protected]
- Visit llmtune.io for the marketing site
- Head directly to the dashboard to manage your workspace