Documentation Index

Fetch the complete documentation index at: https://docs.llm7.io/llms.txt

Use this file to discover all available pages before exploring further.

Get started in two steps

Spin up the OpenAI SDK against the LLM7.io endpoint and make your first requests.

Step 1: Set up

pip install openai

Configure the client

The OpenAI client requires an API key, but LLM7.io accepts any placeholder value (the examples below use "unused"). Get a free token at https://token.llm7.io/ for higher rate limits.
import openai

client = openai.OpenAI(
    base_url="https://api.llm7.io/v1",
    api_key="unused",  # Required. Get it for free at https://token.llm7.io/ for higher rate limits.
)

Step 2: Text generation

Chat completions (Python)

import openai

client = openai.OpenAI(
    base_url="https://api.llm7.io/v1",
    api_key="unused",  # Required. Get it for free at https://token.llm7.io/ for higher rate limits.
)

resp = client.chat.completions.create(
    model="default",
    messages=[
        {"role": "user", "content": "Tell me a short story about a brave squirrel."}
    ],
)

print(resp.choices[0].message.content)

The output will be a short story generated by the model, for example:
Once upon a time, in a lush green forest, there lived a brave squirrel named Sammy. Sammy was known for his adventurous spirit and his willingness to help others. One day, a fierce storm hit the forest, causing a massive tree to fall and block the entrance to the squirrel village. Without hesitation, Sammy gathered his friends and devised a plan to clear the path. With teamwork and determination, they managed to move the fallen tree and restore access to their home. The villagers celebrated Sammy's bravery, and he became a legend in the forest for his courageous act.

Next steps

Function calling

Bind tool calls to your own functions reliably.
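As a preview, here is a minimal sketch of the tool-call round trip, assuming the endpoint supports OpenAI-style `tools`. The `get_weather` function and its schema are illustrative, not part of the API; only the dispatch pattern is the point.

```python
import json


def get_weather(city: str) -> str:
    """Stand-in for a real weather lookup (hypothetical example function)."""
    return f"Sunny in {city}"


# JSON Schema the model uses to decide when and how to call the function.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Map tool names the model may emit to local callables.
DISPATCH = {"get_weather": get_weather}


def run_tool_call(tool_call) -> str:
    """Route one OpenAI-style tool call to the matching local function."""
    fn = DISPATCH[tool_call.function.name]
    args = json.loads(tool_call.function.arguments)  # arguments arrive as a JSON string
    return fn(**args)


def demo() -> None:
    """Ask the live endpoint a question it can only answer via the tool."""
    import openai  # imported here so the helpers above have no SDK dependency

    client = openai.OpenAI(base_url="https://api.llm7.io/v1", api_key="unused")
    resp = client.chat.completions.create(
        model="default",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=TOOLS,
    )
    msg = resp.choices[0].message
    if msg.tool_calls:
        print(run_tool_call(msg.tool_calls[0]))
    else:
        print(msg.content)  # the model answered directly instead of calling a tool
```

Calling `demo()` runs the round trip against the live endpoint; `run_tool_call` is the piece you would keep when binding your own functions.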

JSON mode

Get guaranteed JSON outputs for structured use cases.
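A minimal sketch, assuming the endpoint honors the OpenAI-style `response_format={"type": "json_object"}` parameter; the keys requested in the system prompt are illustrative.

```python
import json


def parse_json_reply(text: str) -> dict:
    """Parse a model reply that should be a single JSON object."""
    obj = json.loads(text)
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object, got " + type(obj).__name__)
    return obj


def demo() -> None:
    """Request JSON-only output from the live endpoint and parse it."""
    import openai  # imported here so the parser above has no SDK dependency

    client = openai.OpenAI(base_url="https://api.llm7.io/v1", api_key="unused")
    resp = client.chat.completions.create(
        model="default",
        response_format={"type": "json_object"},  # ask for JSON-only output
        messages=[
            {"role": "system",
             "content": "Reply in JSON with the keys 'name' and 'trait'."},
            {"role": "user", "content": "Describe a brave squirrel."},
        ],
    )
    print(parse_json_reply(resp.choices[0].message.content))
```

Validating the parsed object (here, just that it is a dict) is worth keeping even with JSON mode on, so malformed replies fail loudly instead of propagating.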

Streaming

Stream tokens for lower latency UIs.
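A minimal streaming sketch, assuming the endpoint supports the OpenAI-style `stream=True` flag; chunks carry incremental `delta.content` pieces that you render as they arrive.

```python
def join_deltas(deltas) -> str:
    """Concatenate streamed content deltas, skipping empty or None chunks."""
    return "".join(d for d in deltas if d)


def demo() -> str:
    """Stream a completion from the live endpoint, printing tokens as they arrive."""
    import openai  # imported here so join_deltas has no SDK dependency

    client = openai.OpenAI(base_url="https://api.llm7.io/v1", api_key="unused")
    stream = client.chat.completions.create(
        model="default",
        messages=[{"role": "user", "content": "Tell me a short story."}],
        stream=True,  # yields chunks instead of one final response
    )
    pieces = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)  # render incrementally for low latency
            pieces.append(delta)
    print()
    return join_deltas(pieces)  # the full reply, reassembled
```

Accumulating the deltas alongside the incremental printing lets you keep the complete reply for logging or follow-up turns.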

Available models

See model options.

Need help? Check the API reference above or join the community.