
Installation and Setup

This guide covers how to get access to LLMTune and configure your environment for API and dashboard use. For self-hosted or internal deployments, refer to your organization’s deployment guide.

Using the Hosted Platform

  1. Sign up – Go to https://llmtune.io and sign up or log in (email, GitHub, Google, or X).
  2. Workspace – On first sign-in, a default workspace is created. All resources (datasets, jobs, API keys) are scoped to this workspace.
  3. API key – In the dashboard, open API Keys, create a key, and copy it. Keys use the sk_... format. Store the key securely; it is not shown again.
  4. Base URL – For the hosted public API, use https://api.llmtune.io/v1 (or the URL provided by your deployment). For direct platform routes, the base is https://llmtune.io/api.

Environment Variables (Integrations)

When integrating with the API from your app or scripts:
# Required for API access
export LLMTUNE_API_KEY="sk_your_key_here"

# Optional: override base URL (e.g. for self-hosted)
export LLMTUNE_API_BASE="https://api.llmtune.io/v1"
Use the API key in the Authorization header:
Authorization: Bearer sk_your_key_here
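As a minimal sketch, the two variables above can be read in Python and turned into the headers every API call carries (the fallback values here are placeholders, not real credentials):

```python
import os

# Read the variables exported above; the base URL falls back to the
# hosted default when LLMTUNE_API_BASE is not set.
API_KEY = os.environ.get("LLMTUNE_API_KEY", "sk_your_key_here")  # placeholder fallback
API_BASE = os.environ.get("LLMTUNE_API_BASE", "https://api.llmtune.io/v1")

# Every request carries the key as a Bearer token.
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```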

SDK / Client Usage

There is no LLMTune-specific SDK in the repository. The API is compatible with OpenAI-style clients:
  • Chat completions – Use the OpenAI SDK (or similar) with baseURL set to https://api.llmtune.io/v1 and apiKey set to your LLMTune API key.
  • Custom inference – Use fetch or any HTTP client to call POST /models/{modelId}/inference and POST /batch/inference with the same Authorization header.
See the Inference API Guide for cURL, JavaScript, and Python examples.
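For the custom inference route, a stdlib-only sketch (no third-party client) might look like the following. The payload fields `prompt` and `max_tokens` are assumptions for illustration — check the Inference API Guide for the exact request schema:

```python
import json
import os
import urllib.request

API_KEY = os.environ.get("LLMTUNE_API_KEY", "sk_your_key_here")  # placeholder fallback
API_BASE = os.environ.get("LLMTUNE_API_BASE", "https://api.llmtune.io/v1")

def build_inference_request(model_id: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a POST to /models/{modelId}/inference."""
    # "prompt" and "max_tokens" are illustrative field names; consult the
    # Inference API Guide for the real payload shape.
    body = json.dumps({"prompt": prompt, "max_tokens": 100}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/models/{model_id}/inference",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_inference_request("my-finetuned-model", "Hello")
# urllib.request.urlopen(req) would send the request; omitted here.
```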

CLI

The repository does not include a dedicated CLI. You can drive the API from the shell using curl or a script. Example:
curl -X POST "https://api.llmtune.io/v1/chat/completions" \
  -H "Authorization: Bearer $LLMTUNE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"meta-llama/Llama-3.3-70B-Instruct","messages":[{"role":"user","content":"Hello"}],"max_tokens":100}'

Self-Hosted / Custom Deployments

For deployments that use the platform codebase (e.g. Next.js app + optional infra backend):
  • Platform (Next.js) – Runs the dashboard and API routes under /api/. Configure environment variables (database, Stripe, inference backend, etc.) as required by the deployment.
  • Backend (optional) – If using the separate infra/backend (e.g. for API key storage or worker coordination), set INFRA_API_URL / BACKEND_API_URL in the platform so it can forward requests.
  • Worker – Fine-tuning execution may use a worker service (see llmtune-infra/worker). Configure WORKER_API_URL and related secrets in the platform.
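As an illustrative sketch only, a self-hosted platform's environment might combine the variables above. Only INFRA_API_URL, BACKEND_API_URL, and WORKER_API_URL are named in this guide; the remaining variable names and all values are placeholders that will differ per deployment:

```shell
# Hypothetical .env sketch for a self-hosted platform instance.
INFRA_API_URL="https://infra.internal.example.com"
BACKEND_API_URL="https://backend.internal.example.com"
WORKER_API_URL="https://worker.internal.example.com"
DATABASE_URL="postgres://user:replace-me@db.internal.example.com:5432/llmtune"  # placeholder name
STRIPE_SECRET_KEY="replace-me"                                                  # placeholder name
```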
Exact setup steps depend on your hosting environment. Refer to the Deployment Guide and your internal runbooks. TODO: Add a dedicated self-hosted deployment runbook when the deployment path is finalized.

Next Steps