Changelog
We're constantly working on new features and improvements. Here's what's new with Langdock.
Web search mode
Langdock now offers an enhanced web search mode, providing quick, current answers along with links to the relevant sources on the internet.

If web search is enabled for your workspace, you can now turn it on in the newly redesigned chat input bar. This will force the model to search the web for up-to-date news and information regarding your query.
- Sidebar resizing: You can now resize the sidebar to your preference.
- Response copying: Users can now copy responses while they are still generating.
- Spending limits in API: We now support setting spending limits for API usage in the workspace settings.
- Prompt library: Adjusted prompt library layout to show more content from the saved prompts for a better experience.
- Long chat performance: We optimized the rendering of chat messages, resulting in smoother performance in bigger chats.
- Data Analyst: We've made some larger improvements to the Data Analyst, making it more reliable.
- Upload of Python, JS, HTML, CSS, PHP: You can now upload more file types to Langdock.
Assistant API
We launched the API for assistants. You can now access assistants, including their attached knowledge and connected tools, through an API.

To enable an assistant to be accessible through the API, admins need to create an API key in the API settings. Afterward, you can share the assistant with the API by inviting the key like a normal workspace member.
After configuring the API in your workflow (see our docs), you can send messages to the assistant through the API. The API also supports structured output and document uploads.
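To make the flow concrete, here is a minimal sketch in Python. The endpoint path, header names, and payload fields below are illustrative assumptions, not the documented contract; check our API docs for the real interface.

```python
# Hypothetical sketch of sending a message to an assistant through the
# Langdock API. Endpoint path and payload field names are assumptions.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # created by an admin in the API settings
BASE_URL = "https://api.langdock.com"  # assumed base URL

def build_payload(assistant_id: str, message: str) -> dict:
    """Assemble a minimal request body (field names assumed)."""
    return {
        "assistantId": assistant_id,
        "messages": [{"role": "user", "content": message}],
    }

def send_message(assistant_id: str, message: str) -> dict:
    """POST the message and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}/assistant/v1/chat/completions",  # path is an assumption
        data=json.dumps(build_payload(assistant_id, message)).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Remember that the API key must first be invited to the assistant like a normal workspace member before requests will succeed.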
- Formatting of copied output: We improved the formatting of output when you copy a response into a different tool.
- Share chats: When a user clicks on 'create link' in the assistant sharing menu, the URL is automatically copied to the clipboard.
- Data analyst: We improved the data analyst and its handling of CSV, PDF, and Excel files.
- Pinecone as Vector Database: We added support for Pinecone as a vector database.
Command Bar (Cmd+K)
You can now navigate Langdock and search chats directly from your keyboard with the new command bar feature. This allows for quick and easy access to the information you need, right at your fingertips.
Pressing Cmd + K (Ctrl + K on Windows) opens a menu to quickly perform different operations. Here are a few examples:
- Search through all your chats
- Search a specific assistant
- Quickly change settings, like switching to dark mode, opening the documentation, the changelog, or the support chat
We also added a search button in the top left corner which opens the command bar.
There are also new and updated shortcuts:
- Open a new chat: Cmd/Ctrl + Shift + O
- Open/close the sidebar: Cmd/Ctrl + Shift + S
- Copy the last response: Cmd/Ctrl + Shift + C
We hope these improvements make you even more productive when using Langdock.
- Long text in variables: The display of long text in variables has been improved.
- Actions improvement: Added support for multiple headers in actions.
- Redirect from sharing assistant page: Previously, sharing an assistant redirected users to the assistant overview. Users now stay in the assistant editor instead.
Prompt Variables
You can now incorporate variables directly into your prompts and create dynamic templates that can be easily reused across different contexts.

When creating a new prompt in the prompt library, wrap a word with {{ and }} or click on the variable button at the bottom to make it a variable. When using the prompt later, users can quickly fill out the variables in order to customize the prompt to their needs.
This makes it easy to reuse a prompt in different contexts without leaving your keyboard, and easier for others to use a prompt you share with them.
You can find more details in our section about the prompt library in our documentation.
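To illustrate the template idea (this is not Langdock code, just a sketch of how `{{variable}}` placeholders get filled):

```python
import re

# A saved prompt with two variables, wrapped in {{ and }}.
prompt = "Write a {{tone}} reply to the following email: {{email}}"

def fill(template: str, values: dict) -> str:
    """Replace each {{name}} placeholder with its value."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)

print(fill(prompt, {"tone": "friendly", "email": "Can we move our call?"}))
# -> Write a friendly reply to the following email: Can we move our call?
```

In Langdock itself, the variable values are collected through the prompt's input form rather than in code.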
- Web search: Improved web search speed and display of sources.
- Claude 3.5 Sonnet: Claude 3.5 Sonnet tended to go to the web more often than other models. We improved this behavior.
- Assistant feedback: This improvement allows users to submit feedback for assistants without free-text inputs.
- API improvement: We made our API compatible with n8n.
Assistant Analytics
Gain valuable insights into how your assistants are being used with the new assistant analytics feature in Langdock.

Users can now upvote or downvote responses and leave comments, providing direct feedback that can help you improve your assistant's configuration. This interaction enhances the user experience and offers concrete suggestions for improvement in the feedback tab.
In the analytics tab, editors and owners of an assistant can access quantitative usage data over specific timeframes. The number of messages, conversations, and users helps you understand user engagement and identify needs.
With these insights, assistant creators can assess performance and make informed improvements to the configuration, leading to a more effective and user-friendly assistant.
- Smaller improvements to Canvas: We improved the performance and responsiveness of Canvas on mobile, made the streaming animation run more smoothly, and made the model trigger Canvas more reliably.
- New file format: We added support for .eml files.
- Resize split screen: The Canvas split screen and the assistant split screen can now be resized horizontally by dragging the border in the middle.
- Increased image file size: You can now upload images up to 20MB.
Canvas
We're excited to launch Canvas, a new feature in Langdock that enhances your writing and coding tasks. Canvas offers an interactive window where you can edit text and code directly, receive AI suggestions, and collaborate more effectively.
Highlights:
- Inline editing: Easily make changes and get AI suggestions.
- Coding tools: Review and fix code seamlessly.
- Writing enhancements: Adjust text tone and style with ease.
Canvas is now available for all Langdock users.
- Custom embedding models for vector DBs: When connecting a custom vector database, users can now configure which embedding model is used to embed search queries.
- Added a Search endpoint for knowledge folders: Made our knowledge folder API more useful, with an easy way to search through uploaded content.
- Increased the maximum length of prompt inputs to 120,000 characters.
- Scrolling stability improvements for answer generations: Streaming and scrolling are now super smooth.
- Added a “Request access” capability for workspaces, improving the onboarding process for new users and admins.
- Added a consistent breadcrumb navigation to all subpages.
Actions
Assistants can now talk to other software tools like Jira or Salesforce via actions: they can perform API calls to external tools, opening up many integration possibilities with CRMs, ticket management tools, or internal APIs. Check out our Actions documentation for details, including specialized guides for Jira, Google Drive, and Salesforce.
- Quote parts of a response: With our new quote feature, you can now easily reference specific parts of previous model responses by selecting the text and clicking the quote button to include it in your next prompt.
- We renamed “data folders” to “knowledge folders”: You can upload up to 1,000 files in a knowledge folder and use the files in an assistant.
- API for knowledge folders: You can now programmatically upload files and set up daily jobs to keep your folders up to date. See our documentation.
- Support for .json, .xml, .vtt, .xls files: We added support for several new file formats to support more use cases within Langdock.
- Custom legal disclaimers: Admins can set up custom legal disclaimers for sharing assistants. See workspace settings here.
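As a concrete illustration of the knowledge folder API mentioned above, here is a minimal daily-sync sketch in Python. The endpoint path, headers, and request shape are assumptions for illustration; see the knowledge folder API documentation for the real interface.

```python
# Hypothetical sketch of a daily sync job for a Langdock knowledge
# folder. URL, headers, and request shape below are assumptions.
import pathlib
import urllib.request

API_KEY = "YOUR_API_KEY"      # created in the API settings
FOLDER_ID = "YOUR_FOLDER_ID"
UPLOAD_URL = f"https://api.langdock.com/knowledge/v1/folders/{FOLDER_ID}/files"  # assumed

def files_to_sync(directory: str) -> list:
    """Collect the local documents that should be mirrored into the folder."""
    return sorted(pathlib.Path(directory).glob("*.pdf"))

def upload(path: pathlib.Path) -> None:
    """Upload a single file (request shape is an assumption)."""
    req = urllib.request.Request(
        UPLOAD_URL,
        data=path.read_bytes(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "X-File-Name": path.name,  # hypothetical header
        },
    )
    urllib.request.urlopen(req)

# Run once a day (e.g. via cron) to keep the folder current:
#     for doc in files_to_sync("./docs"):
#         upload(doc)
```

Scheduling is left to your own infrastructure (cron, CI, or a workflow tool); the API itself only handles the uploads.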
OpenAI o1 Models

We added the latest OpenAI models, o1 and o1-mini. Admins can enable them in the model settings. As a heads-up, these are reasoning models, not replacements for all tasks: they outperform previous models at reasoning-heavy work like math, data analysis, or coding, but not at knowledge retrieval, text generation, or translation. They also take comparatively long to start writing their answer because they think in the background first. You can read more about this in our model guide.
Assistants Infocards

Users now have more insights into the assistants they use. When clicking on an assistant in the assistant list, users see a pop-up with high-level information about the assistant, such as whether web search is enabled or which model is used.
Changelog
We've added a changelog to the product and our website to inform you about new features and improvements. A pop-up will appear at the bottom left of the product whenever we launch a significant new feature. You can click on it to learn more and discover all the other features we launched since the last update.