Changelog

We're constantly working on new features and improvements. Here's what's new with Langdock.

Jul 14, 2025

New in Langdock

Deep Research

https://www.langdock.com/changelog#introducing-deep-research

We're excited to launch Deep Research, a new chat feature designed for comprehensive, long-form research reports that require in-depth analysis across multiple sources.

What is Deep Research?

Deep Research is built for complex research projects that demand thorough investigation rather than quick answers. It conducts multiple strategic web searches, examines findings from different sources, and synthesizes everything into a well-structured report.

When does it make sense to use Deep Research?

Deep Research is particularly powerful for background research, exploring industry trends, market analysis, competitive analysis, academic research, strategic planning, and any task requiring comprehensive information gathering from multiple online sources. The resulting report with citations can be downloaded as a PDF, saving you hours of manual research and compilation.

How does it work?

Deep Research intelligently plans its approach to gather insights from multiple angles. You can watch the search activity in real time and see sources as they are added. No matter which model you select in the chat, Deep Research always uses pre-configured models to ensure the best possible quality. There is currently a usage limit of 15 searches per user per month.

Deep Research is now available across all workspaces! 🚀

On another note: Microsoft Azure is currently experiencing some speed issues with GPT-4.1. While we still recommend it as the default model, you may also try GPT-4.1 mini as a faster alternative.

Jun 26, 2025

New models available!

o3 and GPT-4.1 mini

https://www.langdock.com/changelog#o3-and-gpt-41-mini

We are excited to announce that o3 and GPT-4.1 mini are now available in Langdock! 🚀

o3 is OpenAI's most powerful reasoning model that sets new standards across coding, math, science, and visual reasoning tasks. It excels at technical writing, instruction-following, and tackling complex multi-step problems. o3 is perfect for strategy, research, and advanced coding tasks that demand sophisticated problem-solving capabilities.

GPT-4.1 mini is the smaller, faster version of GPT-4.1, designed for everyday tasks with significantly faster responses. This efficient model delivers performance competitive with GPT-4o while reducing latency by nearly half. GPT-4.1 mini excels at high-volume tasks, real-time applications, and rapid content generation. This model was previously available as a global deployment and is now available in the EU.

Both models are now live. Choose o3 for advanced reasoning and intelligence, or GPT-4.1 mini for speed and efficiency!

Important Notice: We will deprecate GPT-4.5, o1 (Preview), o1 mini, and Gemini 1.5 Pro on July 11. These models will no longer be available on Langdock from that date forward. We recommend switching to the newer versions of these models.

Jun 19, 2025

Updated Canvas

https://www.langdock.com/changelog#canvas-leaving-beta

We are excited to announce that Canvas is now out of Beta and available to all Langdock users! 🎉

Canvas is our interactive editing environment to help you with your writing and coding tasks. With Canvas, you can edit text and code directly while receiving AI suggestions.

This release comes with a few significant updates based on your feedback since the launch of the beta. Thank you to everyone who contributed their thoughts and suggestions to help us improve the Canvas feature.

Highlights of this release:

  • Streamlined Interface: We have decluttered the interface to make your workflow smoother and more intuitive.
  • Integrated Coding Tools: You can now generate and run code in Canvas directly through the integrated terminal.
  • Download as File: You can now download your canvas as a PDF, Word, or Markdown document.
  • Simplified Model Selection: There is no longer a need to select a separate Canvas model; everything works seamlessly with the main models you already use every day.

You can ask the model to use Canvas, or alternatively toggle Canvas mode on by clicking the button in the chat field.

We aim to continually improve Canvas and look forward to seeing how you use it to power your work.

  • Improved “New chat” and “Search” buttons: We improved the design of the navbar buttons for opening a new chat and opening the command bar.
  • Action permissions: Admins can now define access permissions for individual actions by granting access to individual users and/or groups. Go to the integration you want to manage to find the new option for managing its actions.
  • New integrations: We added integrations for Monday and Google Meet (to retrieve transcripts).
  • File support: The file formats .dotx, .rtf, .kml, .gml, .dxf, .gpx, .shp, .shx, .dbf, and .prj are now supported.
  • Improved document uploads: We improved the uploading behavior of files, especially for larger amounts of files.
  • Mermaid diagrams: Mermaid flowcharts can now also be saved as images.
Jun 18, 2025

New model available!

Gemini 2.5 Flash and Gemini 2.5 Pro

https://www.langdock.com/changelog#gemini-25-flash-and-gemini-25-pro

We are excited to announce that Gemini 2.5 Flash and Gemini 2.5 Pro are now available in Langdock! 🚀

Gemini 2.5 Flash is the faster and more efficient model of the 2.5 generation, designed for real-time, high-volume tasks. It delivers fast responses, supports up to 1 million context tokens, and is ideal for instant writing, summarization, and Q&A. It is perfect for assistants and chats where speed and efficiency matter most.

Gemini 2.5 Pro is Google’s most advanced model for complex reasoning, coding, and multimodal tasks. Comparable to Claude's "Sonnet" and OpenAI’s “o” models, Gemini 2.5 Pro is ideal for strategy, research, and coding tasks requiring advanced problem solving.

Both models are now live: choose Flash for speed and efficiency, or Pro for intelligence and depth.

Workspace Admins can activate these models in the Workspace Settings under the "Models" tab.

Jun 2, 2025

GPT-4.1 and o4 Mini

https://www.langdock.com/changelog#gpt-41-and-o4-mini

We are excited to announce that GPT-4.1 and o4 Mini are now hosted in the EU and available in Langdock. 🙌

GPT-4.1 is the latest version of the GPT-4 series, performing better on both speed and quality than previous versions. We have set GPT-4.1 as the default model for all new workspaces on Langdock. Additionally, all workspaces previously set to GPT-4o, GPT-4o Mini, or GPT-4 have been migrated to GPT-4.1 as their default model.

We are also pleased to introduce o4 Mini as a new model option on Langdock. o4 Mini is designed for fast, efficient reasoning and excels at handling complex instructions, coding, and strategic tasks. Similar to o3 Mini, it offers a strong balance between speed and accuracy.

  • Sunsetting of GPT-4: As shared in the last changelog, by the end of this week, GPT-4 will be fully deprecated on Langdock, following the discontinuation of support by Microsoft Azure. Assistants previously using GPT-4 will now automatically use the workspace default model or GPT-4.1.
May 23, 2025

Claude Sonnet 4

https://www.langdock.com/changelog#claude-4-sonnet

We've integrated the new Claude version into Langdock! 🚀

Claude Sonnet 4 is the successor to Sonnet 3.7, a model favored by many of our users for text generation, coding and problem-solving tasks.

As with Sonnet 3.7, Sonnet 4 offers the option to use reasoning for complex tasks. This is why we have added the two modes as separate models: Claude Sonnet 4 (Preview) and Claude Sonnet 4 (Reasoning Preview).

The standout features of Claude Sonnet 4 include:

  • Text generation: Like previous Sonnet versions, Claude Sonnet 4 maintains its strength in producing natural, human-like text for writing tasks including emails, translations, and content creation.
  • Enhanced coding performance: Sonnet 4 excels in coding tasks, from generating solutions to navigating complex codebases with near-zero errors.
  • Improved steerability: The model offers greater control over its behavior, allowing you to tailor its responses to your specific needs.
  • Optimized efficiency: Designed to balance capability and practicality, Sonnet 4 is ideal for assistants requiring both speed and depth.
  • Sunsetting of GPT-4 Turbo: On June 8, GPT-4 Turbo in Langdock will be deprecated. Microsoft Azure is discontinuing support for this model, and the newer GPT-4o and GPT-4.1 models offer improved performance and speed. Assistants using GPT-4 Turbo will be set to the workspace default model.
  • Export assistant feedback: Assistant editors can now export the feedback from the usage insights as a CSV.
May 6, 2025

Android and iOS App

https://www.langdock.com/changelog#mobile-app

Today, we are excited to launch the Langdock mobile app! You can now download Langdock as a dedicated Android or iOS app and use AI from your phone.

You can now use Langdock wherever you are: choose between your models, use the chat, and access your assistants.

Additionally, we added one highly requested new functionality: you can now use your voice as input, and Langdock will transcribe it and submit it as a prompt. Look out for the microphone icon in the prompt input field!

To download the app, visit the Apple App Store or the Google Play Store.

  • Voice input in the browser: In addition to voice input in the mobile app, you can now also use this functionality when using Langdock in the browser.
  • Assistant management: Admins now have an improved way of managing assistants in the workspace. Admins can verify certain assistants to highlight them in the assistant list and re-assign the ownership of an assistant if the previous owner left the company. See workspace settings.
  • New integrations: We added new integrations including Salesforce, GitHub, Slack, Airtable, Zendesk, Snowflake and DeepL.
  • Increased character limit for text files: We increased the limit for text files from 2M to 4M characters.
  • Admin mode for integrations: Admins can now test all integrations before they enable them for the entire workspace. Access to integrations can be managed in the workspace settings.
Apr 3, 2025

New GPT-4o version

https://www.langdock.com/changelog#latest-version-of-gpt-4-o

We've integrated a new version of GPT-4o into our platform! 🚀
This powerful version delivers improved response quality and faster generation times.

The model was previously available as "GPT-4o (latest)". We have now merged the two models, and the standard "GPT-4o" is the newer version.

The GPT-4o image generation capability announced a few days ago is not yet available in a version hosted on EU servers. We will add it as soon as it becomes available in the EU.

We've also made several improvements to enhance your experience:

  • Text formatting preservation: When manually copying parts of responses in chats and assistants, the formatting is now preserved, just as with the copy-message button.
  • Native Mermaid diagram support: Create your own Mermaid diagrams directly in the chat, including flowcharts, sequence diagrams, and many more.
  • o3 mini via API: OpenAI's o3 mini reasoning model is now available via our API.
  • Table formatting: We improved the formatting of tables in the chat, including the appearance, the copying behavior, and the ability to download generated tables as CSV.
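As a rough illustration of the "o3 mini via API" item above, here is a minimal sketch of building such a request in Python. It assumes an OpenAI-compatible chat-completions endpoint; the base URL, auth header, and `o3-mini` model identifier are assumptions, so check the Langdock API reference for the actual values.

```python
import json
import urllib.request

# Hypothetical endpoint and key -- check the Langdock API docs for the real values.
API_URL = "https://api.langdock.com/v1/chat/completions"
API_KEY = "your-api-key"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completions request targeting the o3 mini reasoning model."""
    payload = {
        "model": "o3-mini",  # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

# Sending the request (requires a valid key):
# with urllib.request.urlopen(build_request("Summarize reasoning models.")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```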
Mar 17, 2025

New Integrations

https://www.langdock.com/changelog#new-integrations

We launched our new integrations! It’s much easier to integrate other software tools into Langdock now to retrieve data and take actions. The update consists of three main parts:

  • 20+ native integrations are now available in Langdock
  • An easier way to build integrations for your own tools
  • Improvements to existing integrations and knowledge folders
New integrations and actions

We made integrating external tools into your assistants easier and pre-built many new integrations for the tools our customers use. For example, you can now use the following integrations: Jira, HubSpot, Google Sheets, Excel, Outlook, Google Calendar, and Google Mail.

You can now easily add actions that your assistants can perform. Example actions are:

  • Write email drafts and send them to Google Mail or Outlook
  • Create or update deals in HubSpot
  • Write and update tickets in Jira
  • Add an entry to a Google Sheet or an Excel Sheet
  • Send a message in a Microsoft Teams chat
  • And many more...

Here are more details on how to use them. If you’re missing an integration or specific action, please let us know!

Integrate your own tools

The Langdock team will build integrations to all standard software tools in the coming weeks. If we don’t have an integration (yet), or you want to integrate an internal tool, you can build your own integrations.

We deprecated the previous OpenAPI-schema-based integrations in favor of a simpler integration builder that also allows you to write custom JavaScript to cover all kinds of edge cases. The integrations/actions now live outside of assistants, so you can share and reuse them in multiple assistants. You can follow this guide to set up your own REST API based integrations.

Improvements to existing integrations and knowledge sources

We also improved the interface and experience of existing integrations. Here are the most significant changes:

  • When you attach a document from an integration (e.g., SharePoint or Google Drive) as assistant knowledge, we now refresh the content of the document every 24 hours. This ensures that you always work with the latest version of the document in your Langdock assistant. You can also manually refresh a document at any time.
  • Knowledge folders can now be shared with users, groups, and the workspace (similar to assistants). The knowledge folders moved from the account settings into the integrations menu to make them more visible.
  • If you already built custom actions in an assistant, they are still available. We marked them as read-only, and they will be deprecated on April 30th. We recommend migrating your existing actions to our new, improved actions. If your action is not available out of the box yet, let us know if you need help migrating it.
  • Vector databases were also moved from individual assistants to the integrations menu to make it easier to reuse connections. Assistants with existing vector databases were migrated accordingly to ensure they worked as before.

Additional information for workspace admins:

  • By default, all integrations are enabled. You can configure which integrations should be enabled for your workspace in the workspace settings.
  • Workspace-wide integrations (Google Drive & Confluence via service accounts) are now deprecated in favor of the new integrations. Please let your users know so they can configure the integrations manually. The functionalities and permissions are covered completely by the new integrations.
  • The permissions per user role have changed to reflect the new integration framework: The permissions “Connect Vector Databases” and “Connect Actions” were deprecated, and the new permissions are “Share knowledge folder” and “Create integrations.”

This new integration framework will allow for many more use cases in Langdock, and it’s just the beginning. In the coming weeks, we’ll add many more functionalities to work with all kinds of data in Langdock. Stay tuned!

Mar 6, 2025

Platform Speed Improvements

https://www.langdock.com/changelog#speed-improvements

We just shipped massive speed improvements across our platform! While we are continuously working on model speed, you'll notice everything else is running much faster now. Plus, we released some much-requested improvements on our chat input and API.

  • Embedding Models in API: The OpenAI ada-002 embedding model is now available through our API and can be used for personalizing, recommending, and searching content
  • Knowledge Folders + Assistants API: Knowledge folders are now fully compatible with the assistants API for seamless integration
  • Character Count Indicator: The text input field now provides visual feedback, showing the character count and turning red when you exceed the limit.
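The embedding endpoint mentioned above is typically used by ranking content with cosine similarity over the returned vectors. Below is a minimal sketch, assuming an OpenAI-compatible `/v1/embeddings` endpoint; the base URL and auth scheme are assumptions, so consult the API documentation for the actual values.

```python
import json
import math
import urllib.request

# Hypothetical endpoint and key -- check the Langdock API docs for the real values.
API_URL = "https://api.langdock.com/v1/embeddings"
API_KEY = "your-api-key"

def embed(texts):
    """Request ada-002 embeddings for a list of strings (network call)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"model": "text-embedding-ada-002", "input": texts}).encode("utf-8"),
        headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return [item["embedding"] for item in json.load(resp)["data"]]

def cosine_similarity(a, b):
    """Score two embedding vectors; higher means more semantically similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Ranking documents by `cosine_similarity(query_vector, doc_vector)` gives a simple semantic search or recommendation over your content.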
Feb 26, 2025

Claude 3.7 Sonnet, o3 Mini and Gemini 2.0

https://www.langdock.com/changelog#claude-37-sonnet-gemini-20-flash

We've just added three powerful new models to Langdock: Claude 3.7 Sonnet, OpenAI's o3 Mini and Gemini 2.0 Flash.

Claude 3.7 Sonnet

Claude 3.7 Sonnet is the successor to 3.5, one of the most used models among our users. The previous 3.5 version was already used by many for writing tasks, such as emails or translations, and for coding.

The key upgrade in the new model version is a dual-mode capability:

  • The normal mode allows users to either use it as a regular LLM and immediately create an answer for simpler tasks (like email generation or translating a text).
  • The reasoning mode allows the model to self-reflect before answering, to provide a better, deeper answer for complex problems (like strategy, maths or science).

We have added the modes as two separate models (Claude 3.7 Sonnet and Claude 3.7 Reasoning).

o3 Mini

o3 Mini is the latest and most efficient model of OpenAI's reasoning series.

Reasoning models, like o3 Mini, o1, R1 from DeepSeek, or the Claude 3.7 Sonnet model mentioned above, use chain-of-thought reasoning to split a task into several steps. This makes them useful for complex tasks, like maths, physics, complex instructions, coding, or complex strategic tasks.

Compared to o1, the broader-knowledge reasoning model, o3 Mini is faster while balancing speed and accuracy. As o3 Mini allows for control over its reasoning effort, we have added the standard mode as well as a high-effort reasoning mode as two separate models (o3 Mini and o3 Mini High).

Gemini 2.0 Flash

We also added the new Gemini 2.0 Flash model, which is now available in the EU as well. The Flash model from the previous 1.5 Gemini generation was the faster, smaller model compared to the larger and more advanced Gemini 1.5 Pro. The new Gemini 2.0 Flash outperforms Gemini 1.5 Pro on key benchmarks and is twice as fast.

Feb 5, 2025

Assistant Forms

https://www.langdock.com/changelog#assistant-forms

We are bringing a new way to interact with assistants in Langdock: Assistant forms. When building an assistant, editors can now choose to use the new form input method, where they can define the input fields shown to users.

You can build an interface to structure the inputs users need to enter to receive high-quality results, similar to survey forms. When users open an assistant with the new input method, they are presented with the form the editor built. You can use input types you know from other tools, like:

  • Single-line text
  • Checkboxes
  • File upload
  • Single-select options
  • Number
  • Date

This gives assistant creators more flexibility and allows them to tailor the input structure to their specific needs, while making it easier for other users to use the assistant.

Feb 3, 2025

Memory

https://www.langdock.com/changelog#memory

Memory offers deeper personal customization of model responses by saving information from past interactions in the application.

When using memory, you can tell the model to remember certain information about you, your work or any preferences you have. It will then save the information in the application. For example, you could have it:

  • Remember certain details about your job
  • Remember a preference for a specific style of writing
  • Remember your name and other personal details

By default, Memory is disabled. To use it, head over to the preferences in your settings. There you can enable chat memory in the capabilities section.

All memories are stored in your account and are available to you in all your chats (but not in assistant chats). They are not accessible by others in your workspace.

  • OpenAI o3 mini: We added support for the new OpenAI o3 mini model. Admins can configure it in the settings. We consume the model from Microsoft Azure and it is available as a global deployment.
  • Increasing password security requirements: We increased the minimum number of characters a password needs to have. We recommend using a password manager, the magic email link login or login through SSO.
  • Langfuse integration: We added a Langfuse integration, which allows technical users to assess the performance of assistants.
  • Prompt variables: We added support for using the same input variable several times in a prompt. You fill out a variable once, and the value is applied to all of its occurrences.
Jan 29, 2025

DeepSeek-R1

https://www.langdock.com/changelog#deepseek-r1

We've added support for the new R1 model from the Chinese AI company DeepSeek. R1 has been receiving a lot of attention in the media recently for its strong performance. The model rivals OpenAI's o1-series and is open-sourced for commercial use.

The R1 model is available in multiple versions. We are self-hosting the 32B version of the model on our own servers in the EU and consume the full 671B version from Microsoft Azure in the US. Since the model is still early and focuses on reasoning, we have deactivated tools like document upload, web search and data analysis for now.

Admins can enable the models in the settings.

Jan 26, 2025

Audio & video upload in chat

https://www.langdock.com/changelog#transcribe-audio

We're excited to announce that you can now work with audio and video files in the chat.

Upload your recordings (up to 200MB) and our system will automatically transcribe them, allowing you to have natural conversations about the content.

You can work with all common formats including MP4, MP3, WAV, and MPEG files. Whether you need to review a team meeting, analyze a client call, or process a voice memo, simply upload your file and start asking questions about its content.

  • Llama 3.3 model: We added the newer Llama 3.3 70B model to the platform
  • OpenAI o1: We added the o1 model from OpenAI to the platform. However, it is only available as a global deployment, which means that servers could potentially be outside of the EU. The model is turned off by default, but admins can activate it in the settings.
  • Amazon Nova models: We added the Nova models from Amazon in the model settings. However, they are only available in the US at this point.
  • Gemini as backbone model: Admins can now set up Gemini models as the backbone model. The backbone model handles background tasks for certain features.