Documentation Index

Fetch the complete documentation index at: https://docs.llmtune.io/llms.txt

Use this file to discover all available pages before exploring further.
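
For example, an agent or build script can pull the index with a plain HTTP GET and scan it for links. The sketch below is illustrative only; it assumes the Python requests library, and the URL comes from this page.

```python
# Illustrative sketch: fetch the documentation index and list the pages it references.
# Assumes the third-party "requests" library; the URL is taken from this page.
import requests

resp = requests.get("https://docs.llmtune.io/llms.txt", timeout=10)
resp.raise_for_status()

# llms.txt is plain text/Markdown, so a simple scan for link lines is enough here.
for line in resp.text.splitlines():
    if "](" in line or line.lstrip().startswith("http"):
        print(line)
```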

LLMTune publishes its upcoming functionality so teams can plan ahead. Timelines may shift, but the ordering below reflects the current product strategy.

Recently Shipped

  • FineTune Studio – No-code fine-tuning with guided workflows, real-time monitoring, and support for all training methods (SFT, DPO, PPO, RLAIF, CTO) and modalities
  • LLMTune Models – Production-ready model catalog with comparison tools and deployment notes
  • LLMTune API – OpenAI-compatible inference endpoints with streaming support and usage telemetry (see the client sketch after this list)
  • LLMTune Deploy – Deployment control with version management, traffic routing, and instant rollback
  • LLMTune Evaluate – Comprehensive evaluation suite with automated scorecards and human review workflows
  • Dataset Hub – Data intelligence platform with multiple sources, quality scoring, and PII detection
  • Training Queue System – Sequential job processing to conserve GPU resources and ensure stability
  • Federated & Traditional Compute – Flexible compute options (Single Instance or GPU Cluster) for all training methods
  • All Modalities Support – Text, image, audio, video, code, multimodal, embeddings, and TTS
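
Because the LLMTune API entry above is OpenAI-compatible, existing OpenAI client libraries should work once pointed at LLMTune's base URL. The sketch below is a hedged example rather than documented usage: the base URL, model ID, and LLMTUNE_API_KEY environment variable are assumptions; consult the LLMTune API docs for the real values.

```python
# Hedged sketch: call an OpenAI-compatible endpoint with the official openai SDK.
# The base_url, model ID, and LLMTUNE_API_KEY variable are illustrative assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmtune.io/v1",   # assumed endpoint; check the API docs
    api_key=os.environ["LLMTUNE_API_KEY"],  # assumed environment variable
)

# Streamed chat completion, since the entry above mentions streaming support.
stream = client.chat.completions.create(
    model="my-finetuned-model",             # placeholder for a deployed model ID
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```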

In Development

  • Enhanced Dataset Hub – Direct HuggingFace Hub integration, S3/GCS connectors, and advanced data blending
  • Improved Usage Dashboards – Deeper analytics for per-endpoint latency, error breakdowns, and budget alerts
  • Advanced Traffic Management – More sophisticated canary and shadow deployment patterns
  • Video Understanding Training – Full support for video-language model fine-tuning (currently in preview)
  • Enhanced Evaluation Suite – More automated evaluation metrics and tighter integration with the training pipeline

Planned

  • Team-level Permissions – Granular roles for dataset annotators versus deployment operators
  • Secrets Management – Store third-party API keys used by custom inference workflows
  • Self-hosted Agents – Export fine-tuned models with runtime configuration for on-prem orchestration
  • Advanced Model Catalog – A larger model selection, better filtering, and support for community-contributed models
  • Multi-workspace Management – Easier switching and management across multiple workspaces
  • Enhanced Webhooks – More event types and a better webhook management UI

Feature Requests

To request features or share feedback: