Langdock API
Integrate leading AI models and specialized agents directly into your existing applications with a single, enterprise-ready API.

Completion API
Access leading language models from OpenAI, Anthropic, Mistral, and Google for text generation, analysis, and reasoning tasks.
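A minimal sketch of a completion call in Python, assuming an OpenAI-compatible chat completions endpoint under api.langdock.com (the base URL, region path, and model name below are assumptions; see the API reference for the exact values):

```python
import requests

API_TOKEN = "your-langdock-api-token"  # generated in the workspace settings
# Assumed OpenAI-compatible endpoint; verify the exact base URL and region path in the docs.
URL = "https://api.langdock.com/openai/eu/v1/chat/completions"

response = requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "model": "gpt-4o",  # placeholder model name
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Summarize GDPR in one sentence."},
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```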
Embedding API
Generate high-quality embeddings for semantic search, similarity matching, and RAG applications.
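A minimal embedding request, again assuming an OpenAI-compatible endpoint; the path and embedding model name are assumptions:

```python
import requests

API_TOKEN = "your-langdock-api-token"
# Assumed OpenAI-compatible embeddings endpoint; confirm path and model name in the API reference.
URL = "https://api.langdock.com/openai/eu/v1/embeddings"

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "model": "text-embedding-3-small",  # placeholder embedding model
        "input": ["Invoice processing policy", "Travel expense guidelines"],
    },
    timeout=60,
)
resp.raise_for_status()
vectors = [item["embedding"] for item in resp.json()["data"]]
print(len(vectors), "embeddings of dimension", len(vectors[0]))
```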
Agent API
Create and manage custom AI agents programmatically with specialized knowledge and capabilities.
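A sketch of creating an agent programmatically; the endpoint path and payload fields below are hypothetical and only illustrate the shape of such a call:

```python
import requests

API_TOKEN = "your-langdock-api-token"
# Hypothetical agent-creation endpoint and payload; field names are illustrative only.
URL = "https://api.langdock.com/agent/v1/agents"

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "name": "Support Triage Agent",
        "instructions": "Classify incoming tickets and draft a first reply.",
        "model": "claude-sonnet",           # placeholder model configuration
        "knowledgeFolderIds": ["kf_123"],   # attach existing knowledge (illustrative ID)
    },
    timeout=60,
)
resp.raise_for_status()
print("Created agent:", resp.json())
```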
Knowledge folder API
Manage your organization’s knowledge base programmatically for RAG and document processing.
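A sketch of adding a document to a knowledge folder; the endpoint, folder ID, and multipart field name are hypothetical:

```python
import requests

API_TOKEN = "your-langdock-api-token"
FOLDER_ID = "kf_123"  # illustrative folder ID
# Hypothetical knowledge-folder upload endpoint; path and multipart field name are assumptions.
URL = f"https://api.langdock.com/knowledge/v1/folders/{FOLDER_ID}/documents"

with open("handbook.pdf", "rb") as f:
    resp = requests.post(
        URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        files={"file": ("handbook.pdf", f, "application/pdf")},
        timeout=120,
    )
resp.raise_for_status()
print("Uploaded document:", resp.json())
```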
Available models
One unified API for all models. GDPR-compliant, hosted in the EU.
Pricing applies only to our API product; Chat and Agents, when purchased with AI models included, have no usage-based cost component. Langdock charges 10% on top of the model provider's price. Model prices are based on the providers' list prices in USD. All prices excl. VAT.
[Per-model pricing: €1.65 / €13.17, €1.18 / €9.40, €2.35 / €14.11, €2.35 / €14.11 per 1M tokens]
Questions & answers
What is the Langdock API?
The Langdock API allows developers to integrate state‑of‑the‑art AI models, agents, and knowledge management features directly into their applications with enterprise‑grade security and GDPR compliance.
Which AI models can I access through the API?
You can access models from OpenAI (including GPT‑5.2), Anthropic (Claude), Mistral, and Google, all through a unified interface.
Does Langdock support embeddings and RAG workflows?
Yes. The API includes an Embedding API and a Knowledge Folder API, enabling semantic search, document retrieval, and full retrieval‑augmented generation pipelines.
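As a sketch of a minimal RAG-style retrieval step, the snippet below embeds a query and a few documents (reusing the assumed embeddings endpoint from above) and ranks the documents by cosine similarity; the top match would then be passed to the Completion API as context:

```python
import math
import requests

API_TOKEN = "your-langdock-api-token"
URL = "https://api.langdock.com/openai/eu/v1/embeddings"  # assumed endpoint, as above

def embed(texts):
    resp = requests.post(
        URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"model": "text-embedding-3-small", "input": texts},  # placeholder model
        timeout=60,
    )
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

documents = ["Refund policy: 30 days.", "Office hours: 9 to 5.", "VPN setup guide."]
doc_vectors = embed(documents)
query_vector = embed(["How do I get my money back?"])[0]

# Rank documents by similarity to the query; the best match would feed a completion prompt.
ranked = sorted(zip(documents, doc_vectors), key=lambda d: cosine(query_vector, d[1]), reverse=True)
print("Best match:", ranked[0][0])
```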
Can I create custom agents using the API?
Absolutely. The Agent API lets you create, update, and query AI agents with custom instructions, knowledge, attachments, and model configurations.
How do I authenticate with the Langdock API?
All requests require a bearer token sent in the Authorization header. You can generate an API token in the Langdock workspace settings.
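For illustration, the header looks like this in Python (the endpoint is the same assumed one as in the sketches above):

```python
import requests

API_TOKEN = "your-langdock-api-token"  # created in the Langdock workspace settings

# Every request carries the token as a bearer token in the Authorization header.
headers = {"Authorization": f"Bearer {API_TOKEN}"}

resp = requests.post(
    "https://api.langdock.com/openai/eu/v1/chat/completions",  # assumed endpoint
    headers=headers,
    json={"model": "gpt-4o", "messages": [{"role": "user", "content": "Ping"}]},
    timeout=60,
)
print(resp.status_code)
```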
What are the default rate limits?
The defaults are 500 requests per minute and 60,000 tokens per minute. Higher enterprise limits may be available on request.
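A common client-side pattern for staying within these limits is to back off and retry on HTTP 429 responses; the sketch below is a generic example, not a Langdock-specific feature, and reuses the assumed endpoint from above:

```python
import time
import requests

API_TOKEN = "your-langdock-api-token"
URL = "https://api.langdock.com/openai/eu/v1/chat/completions"  # assumed endpoint, as above

def post_with_backoff(payload, max_retries=5):
    """Retry on HTTP 429 with exponential backoff to stay within the rate limits."""
    delay = 1.0
    for attempt in range(max_retries):
        resp = requests.post(
            URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json=payload,
            timeout=60,
        )
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        time.sleep(delay)  # back off before retrying
        delay *= 2
    raise RuntimeError("Rate limit still exceeded after retries")

result = post_with_backoff(
    {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}
)
print(result["choices"][0]["message"]["content"])
```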

