{"id":2205,"date":"2025-07-03T15:18:04","date_gmt":"2025-07-03T15:18:04","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/07\/03\/docker-desktop-4-43-expanded-model-runner-reimagined-mcp-catalog-mcp-server-submissions-and-smarter-gordon\/"},"modified":"2025-07-03T15:18:04","modified_gmt":"2025-07-03T15:18:04","slug":"docker-desktop-4-43-expanded-model-runner-reimagined-mcp-catalog-mcp-server-submissions-and-smarter-gordon","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/07\/03\/docker-desktop-4-43-expanded-model-runner-reimagined-mcp-catalog-mcp-server-submissions-and-smarter-gordon\/","title":{"rendered":"Docker Desktop 4.43: Expanded Model Runner, Reimagined MCP Catalog, MCP Server Submissions, and Smarter Gordon"},"content":{"rendered":"<p><a href=\"https:\/\/docs.docker.com\/desktop\/release-notes\/#4430\" target=\"_blank\">Docker Desktop 4.43<\/a> just rolled out a set of powerful updates that simplify how developers run, manage, and secure AI models and MCP tools.\u00a0<\/p>\n<p>Model Runner now includes better model management, expanded OpenAI API compatibility, and fine-grained controls over runtime behavior. The improved MCP Catalog makes it easier to discover and use MCP servers, and now supports submitting your own MCP servers! Meanwhile, the MCP Toolkit streamlines integration with VS Code and GitHub, including built-in OAuth support for secure authentication. Gordon, Docker\u2019s AI agent, now supports multi-threaded conversations with faster, more accurate responses. 
And with the new Compose Bridge, you can convert local compose.yaml files into Kubernetes configuration in a single command.\u00a0<\/p>\n\n<p>Together, these updates streamline the process of building agentic AI apps and offer a preview of Docker\u2019s ongoing efforts to make it easier to move from local development to production.<\/p>\n<div class=\"wp-block-ponyo-image\">\n<\/div>\n<h2 class=\"wp-block-heading\">New model management commands and expanded OpenAI API support in Model Runner<\/h2>\n<p>This release includes improvements to the user interface of the Docker Model Runner, the inference APIs, and the inference engine under the hood.<\/p>\n<p>Starting with the user interface, developers can now inspect models (including those already pulled from Docker Hub and those available remotely in the <a href=\"https:\/\/hub.docker.com\/u\/ai\" target=\"_blank\">AI catalog<\/a>) via model cards available directly in Docker Desktop. Below is a screenshot of what the model cards look like:<\/p>\n<div class=\"wp-block-ponyo-image\">\n<\/div>\n<p class=\"has-sm-font-size\"><strong>Figure 1: View model cards directly in Docker Desktop to get an instant overview of all variants in the model family and their key features.<\/strong><\/p>\n\n<p>In addition to the GUI changes, the <strong>docker model<\/strong> command adds three new subcommands to\u00a0 help developers inspect, monitor, and manage models more effectively:<\/p>\n<p><strong>docker model ps<\/strong>: Show which models are currently loaded into memory<\/p>\n<p><strong>docker model df<\/strong>: Check disk usage for models and inference engines<\/p>\n<p><strong>docker model unload<\/strong>: Manually unload a model from memory (before its idle timeout)<\/p>\n<p>For WSL2 users who enable Docker Desktop integration, all of the <strong>docker model<\/strong> commands are also now available from their WSL2 distros, making it easier to work with models without changing your Linux-based workflow.<\/p>\n<p>On the API 
side, Model Runner now offers additional OpenAI API compatibility and configurability. Specifically, tool calls are now supported with <strong>{\"stream\": true}<\/strong>, making agents built on Docker Model Runner more dynamic and responsive. Model Runner\u2019s API endpoints now support <strong>OPTIONS<\/strong> calls for better compatibility with existing tooling. Finally, developers can now configure CORS origins in the Model Runner settings pane, offering better compatibility and control over security.\u00a0<\/p>\n<div class=\"wp-block-ponyo-image\">\n<\/div>\n<p class=\"has-sm-font-size\"><strong>Figure 2: CORS Allowed Origins are now configurable in Docker Model Runner settings, giving developers greater flexibility and control.<\/strong><\/p>\n\n<p>For developers who need fine-grained control over model behavior, we\u2019re also introducing the ability to set a model\u2019s context size and even the runtime flags for the inference engine via Docker Compose, for example:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\nservices:<br \/>\n  mymodel:<br \/>\n    provider:<br \/>\n      type: model<br \/>\n      options:<br \/>\n        model: ai\/gemma3<br \/>\n        context-size: 8192<br \/>\n        runtime-flags: \"--no-prefill-assistant\"\n<\/div>\n<p>In this example, we\u2019re using the (optional) <strong>context-size<\/strong> and <strong>runtime-flags<\/strong> parameters to control the behavior of the inference engine underneath. In this case, the associated runtime is the default (<strong>llama.cpp<\/strong>), and you can find a list of flags <a href=\"https:\/\/github.com\/ggml-org\/llama.cpp\/tree\/master\/tools\/server#usage\" target=\"_blank\">here<\/a>. Certain flags may override the stable default configuration that we ship with Docker Desktop, but we want users to have full control over the inference backend. 
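<\/p>
<p>The streaming and endpoint support above can be exercised directly with curl. The sketch below makes two assumptions: Model Runner\u2019s host-side TCP access is enabled in its settings (default port 12434), and the <strong>ai\/gemma3<\/strong> model has already been pulled.<\/p>

```shell
# Request body: "stream" must be the JSON boolean true, not the string "true".
BODY='{"model": "ai/gemma3", "messages": [{"role": "user", "content": "Hello"}], "stream": true}'

# Endpoint path and port are assumptions based on Model Runner's documented TCP access.
curl -s http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "request failed: is host-side TCP access enabled in Model Runner settings?"
```

<p>With <strong>{\"stream\": true}<\/strong>, the reply arrives as a sequence of server-sent-event chunks rather than a single JSON object.<\/p>
<p>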
It\u2019s also worth noting that a particular model architecture may limit the maximum context size. You can find information about maximum context lengths on the <a href=\"https:\/\/hub.docker.com\/catalogs\/gen-ai\" target=\"_blank\">associated model cards on Docker Hub<\/a>.<\/p>\n<p>Under the hood, we\u2019ve focused on improving stability and usability. We now have better error reporting in the event that an inference process crashes, along with more aggressive eviction of crashed engine processes. We\u2019ve also enhanced the Docker CE Model Runner experience with better handling of concurrent usage and more robust support for model providers in Compose on Docker CE.<\/p>\n\n<h2 class=\"wp-block-heading\">MCP Catalog &amp; Toolkit: Secure, containerized AI tools at scale<\/h2>\n<h3 class=\"wp-block-heading\">New and redesigned MCP Catalog\u00a0<\/h3>\n<p><a href=\"http:\/\/hub.docker.com\/mcp\" target=\"_blank\">Docker\u2019s MCP Catalog<\/a> now features an improved experience, making it easier to search, discover, and identify the right MCP servers for your workflows. You can still access the catalog through Docker Hub or directly from the MCP Toolkit in Docker Desktop, and now, it\u2019s also available via a <a href=\"http:\/\/hub.docker.com\/mcp\" target=\"_blank\">dedicated web link<\/a> for even faster access.\u00a0<\/p>\n\n<div class=\"wp-block-ponyo-image\">\n<\/div>\n<p class=\"has-sm-font-size\"><strong>Figure 3: Quickly find the right MCP server for your agentic app and use the new Catalog to browse by specific use cases.<\/strong><\/p>\n\n<p>The MCP Catalog currently includes over 100 verified, containerized tools, with hundreds more on the way. Unlike traditional npx or uvx workflows that execute code directly on your host, every MCP server in the catalog runs inside an isolated Docker container. 
Each one includes cryptographic signatures, a Software Bill of Materials (SBOM), and provenance attestations.\u00a0<\/p>\n<p>This approach eliminates the risks of running unverified code and ensures consistent, reproducible environments across platforms. Whether you need database connectors, API integrations, or development tools, the MCP Catalog provides a trusted, scalable foundation for AI-powered development workflows, moving the entire ecosystem away from risky execution patterns toward production-ready, containerized solutions.<\/p>\n\n<h3 class=\"wp-block-heading\">Submit your MCP Server to the Docker MCP Catalog<\/h3>\n<p>We\u2019re launching a new submission process that gives developers flexible options to contribute; the full process is documented <a href=\"http:\/\/github.com\/docker\/mcp-registry\" target=\"_blank\">here<\/a>.\u00a0 Developers can choose between two options: Docker-Built and Community-Built servers.\u00a0<\/p>\n\n<h4 class=\"wp-block-heading\"><strong>Docker-Built Servers<\/strong>\u00a0<\/h4>\n<p>When you see \u201cBuilt by Docker,\u201d you\u2019re getting our complete security treatment. We control the entire build pipeline, providing cryptographic signatures, SBOMs, provenance attestations, and continuous vulnerability scanning.<\/p>\n\n<h4 class=\"wp-block-heading\"><strong>Community-Built Servers<\/strong>\u00a0<\/h4>\n<p>These servers are packaged as Docker images by their developers. While we don\u2019t control their build process, they still benefit from container isolation, which is a massive security improvement over direct execution.<\/p>\n<p>Docker-built servers demonstrate the gold standard for security, while community-built servers ensure we can scale rapidly to meet developer demand. Developers can change their minds after submitting a community-built server and opt to resubmit it as a Docker-built server.\u00a0<\/p>\n<p>Get your MCP server featured in the Docker MCP Catalog today and reach over 20 million developers. 
Learn more about our new MCP Catalog in our announcement <a href=\"http:\/\/www.docker.com\/blog\/docker-mcp-catalog-secure-way-to-discover-and-run-mcp-servers\/\">blog<\/a> and get insights on <a href=\"http:\/\/www.docker.com\/blog\/mcp-server-best-practices\/\">best practices<\/a> for building, running, and testing MCP servers.\u00a0 <a href=\"http:\/\/github.com\/docker\/mcp-registry\" target=\"_blank\">Join us<\/a> in building the largest library of secure, containerized MCP servers!<\/p>\n\n<h3 class=\"wp-block-heading\">MCP Toolkit adds OAuth support and streamlined integration with GitHub and VS Code<\/h3>\n<p>Many MCP servers\u2019 credentials are passed as plaintext environment variables, exposing sensitive data and increasing the risk of leaks. The MCP Toolkit eliminates that risk with secure credential storage, allowing clients to authenticate with MCP servers and third-party services without hardcoding secrets. We\u2019re taking it a step further with OAuth support, starting with the most widely used developer tool, GitHub. This will make it even easier to integrate secure authentication into your development workflow.<\/p>\n\n<div class=\"wp-block-ponyo-image\">\n<\/div>\n<p class=\"has-sm-font-size\"><strong>Figure 4: OAuth is now supported for the GitHub MCP server.<\/strong><\/p>\n\n<p>To set up your GitHub MCP server, go to the OAuth tab, connect your GitHub account, enable the server, and authorize OAuth for secure authentication.<\/p>\n<div class=\"wp-block-ponyo-image\">\n<\/div>\n<p class=\"has-sm-font-size\"><strong>Figure 5: Go to the configurations tab of the GitHub MCP server to enable OAuth for secure authentication.<\/strong><\/p>\n\n<p>The MCP Toolkit allows you to connect MCP servers to any MCP client, with one-click connection to popular ones such as Claude and Cursor. We are also making it easier for developers to connect to VS Code with the <strong>docker mcp client connect vscode<\/strong> command. 
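<\/p>
<p>Running it writes a small configuration for VS Code; the sketch below is an assumption about its shape, mirroring the gateway config used for the global setup, and the exact contents may vary by Docker Desktop version.<\/p>

```json
{
  "servers": {
    "MCP_DOCKER": {
      "command": "docker",
      "args": ["mcp", "gateway", "run"],
      "type": "stdio"
    }
  }
}
```

<p>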
When run in your project\u2019s root folder, it creates an mcp.json configuration file in your .vscode folder.\u00a0<\/p>\n<div class=\"wp-block-ponyo-image\">\n<\/div>\n<p class=\"has-sm-font-size\"><strong>Figure 6: Connect to VS Code via MCP commands in the CLI<\/strong>.<\/p>\n\n<p>You can also configure the MCP Toolkit as a global MCP server available to VS Code by adding the following config to your user settings. Check out <a href=\"https:\/\/code.visualstudio.com\/docs\/copilot\/chat\/mcp-servers#_add-an-mcp-server-to-your-user-settings\" target=\"_blank\">this doc<\/a> for more details. Once connected, you can leverage GitHub Copilot in agent mode with full access to your repositories, issues, and pull requests.<\/p>\n\n<div class=\"wp-block-syntaxhighlighter-code \">\n\"mcp\": {<br \/>\n  \"servers\": {<br \/>\n    \"MCP_DOCKER\": {<br \/>\n      \"command\": \"docker\",<br \/>\n      \"args\": [<br \/>\n        \"mcp\",<br \/>\n        \"gateway\",<br \/>\n        \"run\"<br \/>\n      ],<br \/>\n      \"type\": \"stdio\"<br \/>\n    }<br \/>\n  }<br \/>\n}\n<\/div>\n<h3 class=\"wp-block-heading\">Gordon gets smarter: Multi-threaded conversations and 5x faster performance<\/h3>\n<p>Docker\u2019s AI agent Gordon just got a major upgrade: multi-threaded conversation support. You can now run multiple distinct conversations in parallel and switch between topics like debugging a container issue in one thread and refining a Docker Compose setup in another, without losing context. Gordon keeps each thread organized, so you can pick up any conversation exactly where you left off.<\/p>\n<p>Gordon\u2019s new multi-threaded capabilities work hand-in-hand with MCP tools, creating a powerful boost for your development workflow. Use Gordon alongside your favorite MCP tools to get contextual help while keeping conversations organized by task. 
No more losing focus to context switching!<\/p>\n<div class=\"wp-block-ponyo-image\">\n<\/div>\n<p class=\"has-sm-font-size\"><strong>Figure 7: Gordon\u2019s new multi-threaded support cuts down on context switching and boosts productivity<\/strong>.<\/p>\n\n<p>We\u2019ve also rolled out major performance upgrades: Gordon now responds 5x faster and delivers more accurate, context-aware answers. With improved understanding of Docker-specific commands, configurations, and troubleshooting scenarios, Gordon is smarter and more helpful than ever!<\/p>\n<h2 class=\"wp-block-heading\">Compose Bridge: Seamlessly go from local Compose to Kubernetes\u00a0<\/h2>\n<p>We know that developers love Docker Compose for managing local environments\u2014it\u2019s simple and easy to understand. We\u2019re excited to introduce Compose Bridge to Docker Desktop. This powerful new feature helps you transform your local compose.yaml into Kubernetes configuration with a single command.<\/p>\n\n<h3 class=\"wp-block-heading\">Translate Compose to Kubernetes in seconds<\/h3>\n<p>Compose Bridge gives you a streamlined, flexible way to bring your Compose application to Kubernetes. 
With smart defaults and options for customization, it\u2019s designed to support both simple setups and complex microservice architectures.<\/p>\n<p>All it takes is:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\ndocker compose bridge convert\n<\/div>\n<p>And just like that, Compose Bridge generates the following Kubernetes resources from your Compose file:<\/p>\n<p>A<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/namespaces\/\" target=\"_blank\"> Namespace<\/a> to isolate your deployment<\/p>\n<p>A<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/configmap\/\" target=\"_blank\"> ConfigMap<\/a> for every Compose config entry<\/p>\n<p><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/\" target=\"_blank\">Deployments<\/a> for running and scaling your services<\/p>\n<p><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/\" target=\"_blank\">Services<\/a> for exposed and published ports\u2014including LoadBalancer services for host access<\/p>\n<p><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/\" target=\"_blank\">Secrets<\/a> for any secrets in your Compose file (encoded for local use)<\/p>\n<p><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/network-policies\/\" target=\"_blank\">NetworkPolicies<\/a> that reflect your Compose network topology<\/p>\n<p><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes\/\" target=\"_blank\">PersistentVolumeClaims<\/a> using Docker Desktop\u2019s hostpath storage<\/p>\n<p>This approach replicates your local dev environment in Kubernetes quickly and accurately, so you can test in production-like conditions, faster.<\/p>\n\n<h3 class=\"wp-block-heading\">Built-in flexibility and upcoming enhancements<\/h3>\n<p>Need something more customized? 
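<\/p>
<p>Before customizing, it helps to picture the default mapping. As a sketch, here is a minimal <strong>compose.yaml<\/strong> (hypothetical service and image names) annotated with the resources the conversion above would generate for it:<\/p>

```yaml
# Hypothetical minimal Compose file. Per the defaults above, `docker compose
# bridge convert` would produce a Namespace, a Deployment for the web service,
# and a LoadBalancer Service exposing published port 8080 on the host.
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
```

<p>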
Compose Bridge supports advanced transformation options so you can tweak how services are mapped or tailor the resulting configuration to your infrastructure.<\/p>\n<p>And we\u2019re not stopping here\u2014upcoming releases will allow Compose Bridge to generate Kubernetes config based on your existing cluster setup, helping teams align development with production without rewriting manifests from scratch.<\/p>\n<h4 class=\"wp-block-heading\">Get started<\/h4>\n<p>You can start using Compose Bridge today:<\/p>\n<p>Download or update Docker Desktop<\/p>\n<p>Open your terminal and run:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\ndocker compose bridge convert\n<\/div>\n<p>Review the<a href=\"https:\/\/docs.docker.com\/compose\/bridge\/usage\/\" target=\"_blank\"> documentation<\/a> to explore customization options<\/p>\n<h3 class=\"wp-block-heading\">Conclusion\u00a0<\/h3>\n<p>Docker Desktop 4.43 introduces practical updates for developers building at the intersection of AI and cloud-native apps. Whether you\u2019re running local models, finding and running secure MCP servers, using Gordon for multi-threaded AI assistance, or converting Compose files to Kubernetes, this release cuts down on complexity so you can focus on shipping. 
From agentic AI projects to scaling workflows from local to production, you\u2019ll get more control, smoother integration, and fewer manual steps throughout.<\/p>\n\n<h3 class=\"wp-block-heading\">Learn more<\/h3>\n<p><a href=\"http:\/\/www.docker.com\/blog\/docker-mcp-catalog-secure-way-to-discover-and-run-mcp-servers\/\">Learn more<\/a> about our new MCP Catalog.\u00a0<\/p>\n<p><a href=\"https:\/\/github.com\/docker\/mcp-registry\" target=\"_blank\">Submit<\/a> your MCP servers to the MCP Catalog.\u00a0<\/p>\n<p><a href=\"https:\/\/www.docker.com\/pricing\/\">Authenticate and update<\/a> today to receive your subscription level\u2019s newest Docker Desktop features.<\/p>\n<p>Subscribe to the <a href=\"https:\/\/www.docker.com\/newsletter-subscription\/\">Docker Navigator Newsletter<\/a>.<\/p>\n<p>Learn about our<a href=\"https:\/\/docs.docker.com\/security\/for-admins\/enforce-sign-in\/methods\/\" target=\"_blank\"> sign-in enforcement options<\/a>.<\/p>\n<p>New to Docker? <a href=\"https:\/\/hub.docker.com\/signup?_gl=1*452i3u*_ga*MjEzNzc3Njk5MC4xNjgzNjY3NDkw*_ga_XJWPQMJYHQ*MTcwODcxNjA4Ni4zNjguMS4xNzA4NzE2MzE2LjUzLjAuMA..\" target=\"_blank\">Create an account<\/a>.\u00a0<\/p>\n<p>Have questions? 
The<a href=\"https:\/\/www.docker.com\/community\/\"> Docker community is here to help<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>Docker Desktop 4.43 just rolled out a set of powerful updates that simplify how developers run, manage, and secure AI [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[4],"tags":[],"class_list":["post-2205","post","type-post","status-publish","format-standard","hentry","category-docker"],"_links":{"self":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/2205","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/comments?post=2205"}],"version-history":[{"count":0,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/2205\/revisions"}],"wp:attachment":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media?parent=2205"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/categories?post=2205"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/tags?post=2205"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}