pro is available on paid plans only. Model selection will be simplified soon: you’ll pick default, fast, or pro instead of specific model IDs.
Use the Models API to see what text models are currently available.
curl https://api.llm7.io/v1/models
Example response (truncated):
[
  {
    "id": "gpt-4.1-nano-2025-04-14",
    "object": "model",
    "owned_by": "openai",
    "modalities": { "input": ["text"] },
    "tools_calling": true,
    "context_window": { "chars": 5000, "tokens": null }
  },
  {
    "id": "mistral-small-3.1-24b-instruct-2503",
    "object": "model",
    "owned_by": "mistral",
    "modalities": { "input": ["text"] },
    "tools_calling": true,
    "context_window": { "chars": null, "tokens": 32000 }
  }
]
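If you need to pick a model programmatically, you can filter this response by capability. The sketch below, in Python with the requests package, lists models that advertise tool calling; it assumes the endpoint needs no API key, which this section does not state.

import requests

resp = requests.get("https://api.llm7.io/v1/models", timeout=10)
resp.raise_for_status()
models = resp.json()  # JSON array of model objects, as shown above

# Keep only models that advertise tool calling, and show their context window.
for m in models:
    if m.get("tools_calling"):
        ctx = m.get("context_window") or {}
        print(m["id"], "tokens:", ctx.get("tokens"), "chars:", ctx.get("chars"))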
The planned selectors work as follows:
  • default: the first available free model (a good general choice)
  • fast: the lowest-latency option
  • pro: highest quality and longer reasoning (paid plans only)
Concrete model IDs will be phased out in favor of these selectors. Prefer default, fast, or pro in new integrations.
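As a rough sketch of what that could look like, the example below sends a request with the fast selector in the model field. It assumes an OpenAI-style /v1/chat/completions endpoint and that the selectors are accepted once the change lands; neither is confirmed in this section.

import requests

payload = {
    "model": "fast",  # selector instead of a concrete model ID (assumed to be accepted)
    "messages": [{"role": "user", "content": "Summarize the Models API in one sentence."}],
}

resp = requests.post("https://api.llm7.io/v1/chat/completions", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])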