API Overview

LLMTune exposes REST endpoints for authentication, dataset ingestion, fine-tuning, deployment, inference, usage, and webhooks. All endpoints are versioned and require workspace-level API keys.

Base URL

https://llmtune.io/api
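
A minimal authenticated request against the base URL, sketched in Python with the requests library. The /v1/models path and the Bearer authorization scheme are assumptions for illustration only; consult the endpoint reference for actual paths.

  import os
  import requests

  BASE_URL = "https://llmtune.io/api"

  # Workspace-level API key; the Bearer scheme here is an assumption.
  headers = {"Authorization": f"Bearer {os.environ['LLMTUNE_API_KEY']}"}

  # Hypothetical versioned path, shown only to illustrate the base URL + key pattern.
  resp = requests.get(f"{BASE_URL}/v1/models", headers=headers)
  resp.raise_for_status()
  print(resp.json())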

Response Formats

  • JSON for both success and error responses.
  • Error responses include the error type, message, and optional details (see the sketch below).
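
A sketch of reading the documented error fields from a failed call. The request path, body, and exact JSON key names (error, type, message, details) are assumptions based on the description above, not confirmed response schemas.

  import requests

  resp = requests.post(
      "https://llmtune.io/api/v1/datasets",  # hypothetical path for illustration
      headers={"Authorization": "Bearer <LLMTUNE_API_KEY>"},
      json={"name": "support-chats"},
  )
  if not resp.ok:
      # Field names are assumptions; adjust to the actual error schema.
      err = resp.json().get("error", {})
      print(err.get("type"), err.get("message"), err.get("details"))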

Rate Limits

  • Rate limits vary per workspace and endpoint.
  • Exceeding limits returns HTTP 429 with a Retry-After header (see the retry sketch after this list).
  • Monitor limits in Usage → Rate Limits or via usage.threshold_reached webhooks.
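
One way to respect HTTP 429 responses, sketched as a small retry helper. Honoring Retry-After is taken from the list above; the fallback exponential backoff and the retry count are just example policy choices.

  import time
  import requests

  def request_with_retry(method, url, max_retries=5, **kwargs):
      """Retry on HTTP 429, honoring the Retry-After header when present."""
      for attempt in range(max_retries):
          resp = requests.request(method, url, **kwargs)
          if resp.status_code != 429:
              return resp
          # Fall back to exponential backoff if Retry-After is missing.
          delay = float(resp.headers.get("Retry-After", 2 ** attempt))
          time.sleep(delay)
      return resp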

Idempotency

  • Use an Idempotency-Key header when retrying operations (dataset uploads, training launches) to avoid duplicate jobs, as shown below.
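
A sketch of launching a training job with a stable Idempotency-Key so that retries are deduplicated server-side. The /v1/fine-tunes path and the request body fields are hypothetical, used only to show where the header goes.

  import uuid
  import requests

  # Generate the key once and reuse it across retries of the same launch.
  idempotency_key = str(uuid.uuid4())

  resp = requests.post(
      "https://llmtune.io/api/v1/fine-tunes",  # hypothetical path
      headers={
          "Authorization": "Bearer <LLMTUNE_API_KEY>",
          "Idempotency-Key": idempotency_key,
      },
      json={"dataset_id": "ds_123", "base_model": "llama-3-8b"},  # illustrative body
  )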

SDKs and Compatibility

The inference endpoint follows the OpenAI Chat Completions interface. You can use the OpenAI SDK by pointing it to the LLMTune base URL and setting the authorization header to your LLMTune API key.
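
A sketch using the official OpenAI Python SDK pointed at LLMTune. Whether the base URL needs a version suffix (the /v1 below) and the deployed model identifier are assumptions; only the general pattern of swapping base URL and API key comes from the compatibility note above.

  from openai import OpenAI

  client = OpenAI(
      base_url="https://llmtune.io/api/v1",  # version suffix is an assumption
      api_key="<LLMTUNE_API_KEY>",
  )

  completion = client.chat.completions.create(
      model="my-fine-tuned-model",  # hypothetical deployed model name
      messages=[{"role": "user", "content": "Summarize yesterday's tickets."}],
  )
  print(completion.choices[0].message.content)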