We're constantly working on new features and improvements. Here's what's new with Langdock.
Feb 03, 2025
Memory offers deeper personal customization of model responses by saving information from past interactions in the application.
When using memory, you can tell the model to remember certain information about you, your work, or any preferences you have; it then saves that information for future conversations.
By default, Memory is disabled. To use it, head over to the preferences in your settings. There you can enable chat memory in the capabilities section.
All memories are stored in your account and are available in all your chats (but not in assistant chats). They are not accessible to others in your workspace.
Jan 29, 2025
We've added support for the new R1 model from the Chinese AI company DeepSeek. R1 has been receiving a lot of attention in the media recently for its strong performance. The model rivals OpenAI's o1-series and is open-sourced for commercial use.
The R1 model is available in multiple versions. We self-host the 32B version on our own servers in the EU and consume the full 671B version from Microsoft Azure in the US. Since the model is still early and focuses on reasoning, we have deactivated tools like document upload, web search, and data analysis for now.
Admins can enable the models in the settings.
Jan 26, 2025
We're excited to announce that you can now work with audio and video files in the chat.
Upload your recordings (up to 200MB) and our system will automatically transcribe them, allowing you to have natural conversations about the content.
You can work with all common formats including MP4, MP3, WAV, and MPEG files. Whether you need to review a team meeting, analyze a client call, or process a voice memo, simply upload your file and start asking questions about its content.
Jan 22, 2025
Langdock now offers an enhanced web search mode, providing quick and current answers along with links to the relevant sources on the internet.
If web search is enabled for your workspace, you can now turn it on in the newly redesigned chat input bar. This instructs the model to search the web for up-to-date news and information about your query.
Dec 11, 2024
We launched the API for assistants. You can now access assistants, including attached knowledge and connected tools, through an API.
To make an assistant accessible through the API, admins need to create an API key in the API settings. Afterward, you can share the assistant with the key by inviting it like a normal workspace member.
After configuring the API in your workflow (here are our docs), you can send messages to the assistant through the API. The API also supports structured output and document upload.
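For illustration, here is a minimal sketch of such a request. The endpoint path, payload fields, and response shape below are assumptions based on the pattern described above, not the definitive contract; the linked docs are authoritative.

```typescript
// Minimal sketch: sending a message to an assistant shared with an API key.
// Endpoint URL, payload fields, and response shape are ASSUMPTIONS for
// illustration; check the Langdock API docs for the actual contract.
const API_KEY = process.env.LANGDOCK_API_KEY; // key created in the API settings
const ASSISTANT_ID = "your-assistant-id";     // assistant shared with this key

async function askAssistant(question: string): Promise<string> {
  const res = await fetch("https://api.langdock.com/assistant/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      assistantId: ASSISTANT_ID,
      messages: [{ role: "user", content: question }],
    }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.result; // assumed response field
}

askAssistant("Summarize the attached onboarding documents.")
  .then(console.log)
  .catch(console.error);
```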
Dec 04, 2024
You can now share chats with users of your workspace by clicking on the button in the top right corner. It’s a great way to share your work with your colleagues directly in Langdock.
The sharing button appears after the first message has been sent in the chat. Once a chat is shared, new messages will also be shared. Others can only read the contents of that chat, but not interact with it. They will not have access to the documents attached, but can view answers based on the documents. You can unshare a chat anytime from the settings.
Nov 22, 2024
You can now navigate Langdock and search chats directly from your keyboard with the new command bar feature. This allows for quick and easy access to the information you need, right at your fingertips.
Pressing Cmd + K on your keyboard (Ctrl + K on Windows) opens a menu to quickly perform different operations.
We also added a search button in the top left corner which opens the command bar.
There are also new and updated shortcuts.
We hope these improvements make you even more productive when using Langdock.
Nov 20, 2024
You can now incorporate variables directly into your prompts and create dynamic templates that can be easily reused across different contexts.
When creating a new prompt in the prompt library, wrap a word with {{ and }} or click on the variable button at the bottom to make it a variable. When using the prompt later, users can quickly fill out the variables in order to customize the prompt to their needs.
This makes it easy to reuse a prompt in different contexts without leaving your keyboard, and easier for others to use the prompt when you share it.
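To illustrate how the placeholder substitution behaves, here is a small sketch. This is not Langdock's implementation (the substitution happens in the UI); the function and regex below are ours, purely for illustration.

```typescript
// Sketch of how {{variable}} placeholders in a prompt template resolve
// once a user fills them in. Illustrative only, not Langdock's code.
function fillTemplate(template: string, values: Record<string, string>): string {
  // Replace each {{name}} with its value; leave unknown placeholders intact.
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) => values[name] ?? match);
}

const template = "Write a {{tone}} summary of {{topic}} for {{audience}}.";
console.log(fillTemplate(template, {
  tone: "concise",
  topic: "our Q3 roadmap",
  audience: "the sales team",
}));
// -> "Write a concise summary of our Q3 roadmap for the sales team."
```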
You can find more details in the prompt library section of our documentation.
Nov 07, 2024
Gain valuable insights into how your assistants are being used with the new assistant usage insights feature available in Langdock.
Users can now upvote or downvote responses and leave comments. This feedback is collected in the feedback tab and gives you concrete suggestions for improving your assistant's configuration.
In the analytics tab, editors and owners of an assistant can access quantitative data about usage over specific timeframes. The number of messages, conversations, and users helps you understand user engagement and identify needs.
With these insights, assistant creators can assess performance and make informed improvements to the configuration, leading to a more effective and user-friendly assistant.
Nov 05, 2024
We're excited to launch Canvas, a new feature in Langdock that enhances your writing and coding tasks. Canvas offers an interactive window where you can edit text and code directly, receive AI suggestions, and collaborate more effectively.
Canvas is now available for all Langdock users.
Oct 30, 2024
Assistants can now talk to other software tools like Jira or Salesforce via actions.
Assistants can now perform API calls to external tools, opening up many integration possibilities with CRMs, ticket management tools, or internal APIs. Check out our Actions documentation for details, including specialized guides for Jira, Google Drive, and Salesforce.
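For a sense of what an action does under the hood, here is a sketch of the kind of HTTP call an action might perform, using Jira's create-issue endpoint as an example. Langdock configures actions for you; the domain, credentials, and field values below are placeholders, and this sketch simply shows the underlying request.

```typescript
// Sketch: the kind of HTTP request an assistant action performs when asked
// to create a Jira ticket. Domain, credentials, and project key are
// PLACEHOLDERS; see the Actions docs for how this is configured in Langdock.
const JIRA_BASE = "https://your-domain.atlassian.net";
const AUTH = Buffer.from("user@example.com:api-token").toString("base64");

async function createIssue(summary: string, description: string) {
  const res = await fetch(`${JIRA_BASE}/rest/api/2/issue`, {
    method: "POST",
    headers: {
      Authorization: `Basic ${AUTH}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      fields: {
        project: { key: "PROJ" },    // placeholder project key
        issuetype: { name: "Task" },
        summary,
        description,                 // plain string in Jira REST API v2
      },
    }),
  });
  if (!res.ok) throw new Error(`Jira returned ${res.status}`);
  return res.json(); // contains the new issue key, e.g. "PROJ-123"
}
```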
Oct 29, 2024
We added the latest OpenAI models, o1 and o1-mini. Admins can enable them in the model settings. As a heads up, these models are thinking models, not replacements for all tasks. The o1 models are better than previous models at reasoning and complex thinking tasks like math, data analysis, or coding, but not at knowledge retrieval, text generation, or translation. They also take comparatively long to start writing their answer, since they reason in the background first. You can read more about this in our model guide.
Oct 28, 2024
Users now have more insights into the assistants they use. When clicking on an assistant in the assistant list, users see a pop-up with high-level information about the assistant, such as whether web search is enabled or which model is used.
Oct 27, 2024
We've added a changelog to the product and our website to inform you about new features and improvements. A pop-up will appear at the bottom left of the product whenever we launch a significant new feature. You can click on it to learn more and also discover all the other features we launched since the last update.