# LLM7.io Documentation

## Docs

- [Usage overview](https://docs.llm7.io/ai-tools/cursor.md): Quick instructions for text endpoints.
- [Function calling](https://docs.llm7.io/guides/function-calling.md): Let the model invoke your functions and return structured results.
- [Image recognition](https://docs.llm7.io/guides/image-recognition.md): Send images to chat models for captions, OCR, and visual Q&A.
- [JSON mode](https://docs.llm7.io/guides/json-mode.md): Force well-structured JSON outputs from the model.
- [Available models](https://docs.llm7.io/guides/models.md): List text models and choose default, fast, or pro.
- [Streaming](https://docs.llm7.io/guides/streaming.md): Stream chat tokens for lower-latency responses.
- [Introduction](https://docs.llm7.io/index.md): Start building with LLM7.io for text.
- [Limits](https://docs.llm7.io/limits.md): Rate limits for text endpoints by plan.
- [Quickstart](https://docs.llm7.io/quickstart.md): Build with LLM7.io in minutes: chat.

## OpenAPI Specs

- [openapi](https://docs.llm7.io/api-reference/openapi.json)