Models

All models in one place.

Use all models in Chat, Assistants, and Agents, or via the API.

Pricing applies only to our API product. Chat and Assistants plans that include AI models have no usage-based cost component. Langdock charges 15% on top of the model provider's price. Model prices originate from the model providers and are denominated in USD.

OpenAI

GPT-4 Turbo
Input price (API): 10.48€ / 1M tokens
Output price (API): 31.44€ / 1M tokens
Region: EU, US

GPT-4o
Input price (API): 5.24€ / 1M tokens
Output price (API): 10.48€ / 1M tokens
Region: EU, US

GPT-4o mini
Input price (API): 0.16€ / 1M tokens
Output price (API): 0.63€ / 1M tokens
Region: EU

Anthropic

Claude 3.5 Sonnet
Input price (API): 3.14€ / 1M tokens
Output price (API): 15.72€ / 1M tokens
Region: EU, US

Claude 3 Haiku
Input price (API): 0.26€ / 1M tokens
Output price (API): 1.31€ / 1M tokens
Region: EU, US

Claude 3 Opus
Input price (API): 15.72€ / 1M tokens
Output price (API): 78.61€ / 1M tokens
Region: US

Google

Gemini 1.5 Pro
Input price (API): 7.86€ / 1M tokens
Output price (API): 22.01€ / 1M tokens
Region: EU

Gemini 1.5 Flash
Input price (API): 0.16€ / 1M tokens
Output price (API): 0.63€ / 1M tokens
Region: EU

Meta

Llama 3.1 70B
Input price (API): 2.81€ / 1M tokens
Output price (API): 3.71€ / 1M tokens
Region: EU

Llama 3.1 8B
Input price (API): 0.31€ / 1M tokens
Output price (API): 0.64€ / 1M tokens
Region: EU

Mistral

Mistral Large 2
Input price (API): 3.14€ / 1M tokens
Output price (API): 9.43€ / 1M tokens
Region: EU

Mistral Nemo
Input price (API): 3.14€ / 1M tokens
Output price (API): 3.14€ / 1M tokens
Region: EU
Current exchange rate is 1 USD = 0.911 EUR.
Models last updated: 28 September 2024.
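Given the 15% surcharge and the exchange rate stated above, the listed euro prices can be reproduced (up to small rounding differences) as provider USD price × 1.15 × 0.911. A minimal sketch; the $10 / 1M-token input price used in the example is an assumption based on OpenAI's public GPT-4 Turbo pricing:

```python
# Sketch: derive the listed EUR prices from provider USD prices.
# Langdock adds a 15% markup, then converts at 1 USD = 0.911 EUR
# (the rate quoted above). Rounding may differ slightly from the table.

USD_TO_EUR = 0.911   # exchange rate quoted above
MARKUP = 1.15        # Langdock's 15% surcharge

def eur_price(usd_per_million_tokens: float) -> float:
    """Convert a provider's USD price per 1M tokens to the Langdock EUR price."""
    return round(usd_per_million_tokens * MARKUP * USD_TO_EUR, 2)

def cost_eur(input_tokens: int, output_tokens: int,
             usd_in: float, usd_out: float) -> float:
    """Total EUR cost of one request, given provider USD prices per 1M tokens."""
    return (input_tokens / 1e6) * eur_price(usd_in) + \
           (output_tokens / 1e6) * eur_price(usd_out)

# Example: GPT-4 Turbo input, assumed at $10 / 1M tokens:
print(eur_price(10.0))  # → 10.48, matching the table above
```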

Tokenizer

Estimate your token consumption

Advanced language models process text using tokens, which are common sequences of characters in text. These models learn the statistical relationships between tokens to predict the next one in a sequence.

Tokenization is crucial for how these models interpret and generate text. It breaks down input text into smaller units (tokens) that the model can process.

The tokenization process can vary between different models. Newer models may use different tokenizers than older ones, potentially producing different tokens for the same input text. This can affect how the model processes text and impact token counts.

Understanding tokenization is helpful when working with these models, especially when considering input length limitations or optimizing text processing efficiency.


Note: This is a simplified tokenization method and may not accurately represent the exact token count used by language models. For precise tokenization, consider using model-specific tokenizers.

For typical English text, one token often equals about 4 characters or ¾ of a word. As a rough estimate, 100 tokens ≈ 75 words.
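The rule of thumb above (~4 characters or ~¾ of a word per token) can be turned into a quick estimator. A minimal sketch; real tokenizers can diverge noticeably, especially for code or non-English text:

```python
# Rough token estimate from the heuristics above:
# ~4 characters per token, ~3/4 of a word per token.

def estimate_tokens(text: str) -> int:
    by_chars = len(text) / 4              # ~4 characters per token
    by_words = len(text.split()) / 0.75   # ~3/4 of a word per token
    # Average the two heuristics and round to a whole token count.
    return round((by_chars + by_words) / 2)

print(estimate_tokens("The quick brown fox jumps over the lazy dog."))  # → 12
```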

For precise tokenization, developers can use programming libraries. In Python, the tiktoken package tokenizes text for OpenAI models; for JavaScript, the community-maintained @dqbd/tiktoken package supports the same encodings. These tools enable accurate token counting and text processing.