We're constantly working on new features and improvements. Here's what's new with Langdock.
Mar 17, 2025
We launched our new integrations! It's now much easier to integrate other software tools into Langdock to retrieve data and take actions. The update consists of three main parts:
We made integrating external tools into your assistants easier and pre-built many new integrations for the tools our customers use. For example, you can now use the following integrations: Jira, HubSpot, Google Sheets, Excel, Outlook, Google Calendar, and Google Mail.
You can now easily add actions that your assistants can perform. Example actions are:
Here are more details on how to use them. If you’re missing an integration or specific action, please let us know!
The Langdock team will build integrations for all standard software tools in the coming weeks. If we don't have an integration (yet), or you want to connect an internal tool, you can build your own integrations.
We deprecated the previous OpenAPI-schema-based integrations in favor of a simpler integration builder that also allows you to write custom JavaScript to cover all kinds of edge cases. The integrations/actions now live outside of assistants, so you can share and reuse them in multiple assistants. You can follow this guide to set up your own REST API based integrations.
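To give a sense of what a custom action can look like, here is a minimal sketch in JavaScript. The handler shape, parameter names, and URL are illustrative assumptions rather than the builder's actual interface; follow the guide above for the real setup.

```javascript
// Minimal sketch of a custom action calling an internal REST API.
// NOTE: the handler signature and the `params`/`auth` objects are
// hypothetical — consult the integration builder guide for the
// actual interface Langdock provides.
async function getOpenTickets(params, auth) {
  const url =
    "https://tickets.internal.example.com/api/v1/tickets" +
    `?status=open&project=${encodeURIComponent(params.project)}`;

  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${auth.token}` },
  });

  if (!response.ok) {
    // Throw a readable error so the assistant can report the failure
    throw new Error(`Ticket API returned ${response.status}`);
  }

  // Return JSON for the assistant to use in its answer
  return response.json();
}
```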
We also improved the interface and experience of existing integrations. Here are the most significant changes:
Additional information for workspace admins:
This new integration framework will allow for many more use cases in Langdock, and it’s just the beginning. In the coming weeks, we’ll add many more functionalities to work with all kinds of data in Langdock. Stay tuned!
Mar 06, 2025
We just shipped massive speed improvements across our platform! While we are continuously working on model speed, you'll notice everything else is running much faster now. Plus, we released some much-requested improvements to our chat input and API.
Feb 26, 2025
We've just added three powerful new models to Langdock: Anthropic's Claude 3.7 Sonnet, OpenAI's o3 Mini, and Google's Gemini 2.0 Flash.
Claude 3.7 Sonnet is the successor to Claude 3.5 Sonnet, one of the most-used models among our users. Many users already rely on the 3.5 version for writing tasks, such as emails or translations, and for coding.
The new version introduces a dual-mode capability: a standard mode and a reasoning mode. We have added these as two separate models (Claude 3.7 Sonnet and Claude 3.7 Reasoning).
OpenAI's o3 Mini is the latest and most-efficient model of OpenAI's reasoning series.
Reasoning models, like o3 Mini, o1, DeepSeek's r1, or the Claude 3.7 Sonnet model mentioned above, use chain-of-thought reasoning to split a task into several steps. This makes them useful for complex tasks, like math, physics, coding, intricate instructions, or strategic work.
Compared to o1, the reasoning model with broader knowledge, o3 Mini is faster, balancing speed and accuracy. As o3 Mini allows control over its reasoning effort, we have added the standard mode as well as a high-effort reasoning mode as two separate models (o3 Mini and o3 Mini High).
We also added the new Gemini 2.0 Flash model, which is now available in the EU as well. The Flash model from the previous Gemini 1.5 generation was the faster, smaller model compared to the larger and more advanced Gemini 1.5 Pro. The new Gemini 2.0 Flash outperforms Gemini 1.5 Pro on key benchmarks and is twice as fast.
Feb 05, 2025
We are bringing a new way to interact with assistants in Langdock: Assistant forms. When building an assistant, editors can now choose to use the new form input method, where they can define the input fields shown to users.
You can build an interface that structures the inputs users need to enter to receive high-quality results, similar to survey forms. When users open an assistant with the new input method, they are presented with the form the editor built. You can use input types you know from other tools, like:
This gives assistant creators more flexibility and allows them to tailor the input structure to their specific needs, while making the assistant easier for other users to work with.
Feb 03, 2025
Memory offers deeper personalization of model responses by saving information from past interactions in the application.
When using memory, you can tell the model to remember certain information about you, your work, or your preferences. It will then save this information in the application. For example, you could have it:
By default, Memory is disabled. To use it, head over to the preferences in your settings. There you can enable chat memory in the capabilities section.
All memories are stored in your account and are available in all your regular chats (not in assistant chats). They are not accessible to others in your workspace.
Jan 29, 2025
We've added support for the new R1 model from the Chinese AI company DeepSeek. R1 has been receiving a lot of attention in the media recently for its strong performance. The model rivals OpenAI's o1-series and is open-sourced for commercial use.
The R1 model is available in multiple versions. We self-host the 32B version on our own servers in the EU and consume the full 671B version from Microsoft Azure in the US. Since the model is still early and focused on reasoning, we have deactivated tools like document upload, web search, and data analysis for now.
Admins can enable the models in the settings.
Jan 26, 2025
We're excited to announce that you can now work with audio and video files in the chat.
Upload your recordings (up to 200MB) and our system will automatically transcribe them, allowing you to have natural conversations about the content.
You can work with all common formats, including MP4, MP3, WAV, and MPEG files. Whether you need to review a team meeting, analyze a client call, or process a voice memo, simply upload your file and start asking questions about its content.
Jan 22, 2025
Langdock now offers an enhanced web search mode, providing quick, current answers along with links to relevant sources from the internet.
If web search is enabled for your workspace, you can now turn it on in the newly redesigned chat input bar. This will force the model to search the web for up-to-date news and information regarding your query.
Dec 11, 2024
We launched the API for assistants. You can now access assistants, including their attached knowledge and connected tools, through an API.
To make an assistant accessible through the API, admins need to create an API key in the API settings. Afterward, you can share the assistant with the API by inviting the key like a normal workspace member.
After configuring the API in your workflow (here are our docs), you can send messages to the assistant through the API. The API also supports structured output and document upload.
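As a rough illustration, a request to an assistant might look like the sketch below. The endpoint path, payload fields, and response shape are assumptions for illustration only; the docs linked above contain the actual API reference.

```javascript
// Minimal sketch of sending a message to an assistant via the API.
// NOTE: the endpoint path and payload fields are illustrative
// assumptions — see the API docs for the real schema.
// (Node 18+ for built-in fetch; ES module context for top-level await.)
const API_KEY = process.env.LANGDOCK_API_KEY; // key created in the API settings

const response = await fetch("https://api.langdock.com/assistant/v1/chat", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    assistantId: "YOUR_ASSISTANT_ID", // an assistant the key was invited to
    messages: [
      { role: "user", content: "Summarize the attached knowledge on Q4 goals." },
    ],
  }),
});

const data = await response.json();
console.log(data);
```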
Dec 04, 2024
You can now share chats with users of your workspace by clicking on the button in the top right corner. It’s a great way to share your work with your colleagues directly in Langdock.
The sharing button appears after the first message has been sent in the chat. Once a chat is shared, new messages will also be shared. Others can only read the contents of that chat, but not interact with it. They will not have access to the documents attached, but can view answers based on the documents. You can unshare a chat anytime from the settings.
Nov 22, 2024
You can now navigate Langdock and search chats directly from your keyboard with the new command bar feature. This allows for quick and easy access to the information you need, right at your fingertips.
Pressing Cmd + K (Ctrl + K on Windows) opens a menu to quickly perform different operations. Here are a few examples:
We also added a search button in the top left corner which opens the command bar.
There are also new and updated shortcuts:
We hope these improvements make you even more productive when using Langdock.
Nov 20, 2024
You can now incorporate variables directly into your prompts and create dynamic templates that can be easily reused across different contexts.
When creating a new prompt in the prompt library, wrap a word with {{ and }} or click the variable button at the bottom to make it a variable. When using the prompt later, users can quickly fill out the variables to customize the prompt to their needs.
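For example, a prompt template with two variables could look like this (the variable names here are just illustrative; you define your own):

```
Write a {{tone}} reply to the following customer email,
keeping it under {{word_limit}} words:

{{email}}
```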
This makes it easy to reuse a prompt in different contexts without leaving your keyboard, and makes the prompt easier for others to use when you share it.
You can find more details in the prompt library section of our documentation.
Nov 07, 2024
Gain valuable insights into how your assistants are being used with the new assistant usage insights feature available in Langdock.
Users can now upvote or downvote responses and leave comments, providing direct feedback that can help you improve your assistant's configuration. This feedback is collected in the feedback tab, giving you concrete suggestions for improvement.
In the analytics tab, editors and owners of an assistant can access quantitative data about usage over specific timeframes. The number of messages, conversations, and users helps you understand user engagement and identify needs.
With these insights, assistant creators can assess performance and make informed improvements to the configuration, leading to a more effective and user-friendly assistant.
Nov 05, 2024
We're excited to launch Canvas, a new feature in Langdock that enhances your writing and coding tasks. Canvas offers an interactive window where you can edit text and code directly, receive AI suggestions, and collaborate more effectively.
Highlights:
Canvas is now available for all Langdock users.
Oct 30, 2024
Assistants can now talk to other software tools like Jira or Salesforce via actions.
Assistants can now perform API calls to external tools, opening up many integration possibilities with CRMs, ticket management tools, or internal APIs. Check out our Actions documentation for details, including specialized guides for Jira, Google Drive, and Salesforce.
Oct 29, 2024
We added the latest OpenAI models, o1 and o1 mini. Admins can enable them in the model settings. As a heads-up, these are thinking models, not drop-in replacements for every task. The o1 models are better than previous models at reasoning and complex thinking tasks, like math, data analysis, or coding, but not at knowledge retrieval, text generation, or translation. They also take comparatively long to start writing their answer, since they think in the background first. You can read more about this in our model guide.
Oct 28, 2024
Users now have more insights into the assistants they use. When clicking on an assistant in the assistant list, users see a pop-up with high-level information about the assistant, such as whether web search is enabled or which model is used.
Oct 27, 2024
We've added a changelog to the product and our website to inform you about new features and improvements. A pop-up will appear at the bottom left of the product whenever we launch a significant new feature. You can click on it to learn more and discover all the other features we've launched since the last update.