{"id":3062,"date":"2025-12-16T13:02:21","date_gmt":"2025-12-16T13:02:21","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/12\/16\/docker-model-runner-now-included-with-the-universal-blue-family\/"},"modified":"2025-12-16T13:02:21","modified_gmt":"2025-12-16T13:02:21","slug":"docker-model-runner-now-included-with-the-universal-blue-family","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/12\/16\/docker-model-runner-now-included-with-the-universal-blue-family\/","title":{"rendered":"Docker Model Runner now included with the Universal Blue family"},"content":{"rendered":"<p>Running large language models (LLMs) and other generative AI models can be a complex, frustrating process of managing dependencies, drivers, and environments. At Docker, we believe this should be as simple as docker model run.<\/p>\n<p>That\u2019s why we built <strong>Docker Model Runner<\/strong>, and today, we\u2019re thrilled to announce a new collaboration with <a href=\"https:\/\/universal-blue.org\/\" rel=\"nofollow noopener\" target=\"_blank\">Universal Blue<\/a>. Thanks to the fantastic work of these contributors,\u00a0 Docker Model Runner is now included in OSes such as Aurora and Bluefin, giving developers a powerful, out-of-the-box AI development environment.<\/p>\n<h2 class=\"wp-block-heading\"><strong>What is Docker Model Runner?<\/strong><\/h2>\n<p>For those who haven\u2019t tried it yet, Docker Model Runner is our new \u201cit just works\u201d experience for running generative AI models.<\/p>\n<p>Our goal is to make running a model as simple as running a container.<\/p>\n<p>Here\u2019s what makes it great:<\/p>\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/www.docker.com\/blog\/docker-model-run-prompt\/\"><strong>Simple UX<\/strong><\/a><strong>:<\/strong> We\u2019ve streamlined the process down to a single, intuitive command: docker model run &lt;model-name&gt;.<\/li>\n<li><strong>Broad GPU Support:<\/strong> While we started with NVIDIA, we\u2019ve recently added <a href=\"https:\/\/www.docker.com\/blog\/docker-model-runner-vulkan-gpu-support\/\"><strong>Vulkan support<\/strong><\/a>. This is a big deal\u2014it means Model Runner works on pretty much <strong>any modern GPU<\/strong>, including AMD and Intel, making AI accessible to more developers than ever.<\/li>\n<li><a href=\"https:\/\/www.docker.com\/blog\/docker-model-runner-integrates-vllm\/\"><strong>vLLM<\/strong><\/a><strong>:<\/strong> Perform high-throughput inference with an NVIDIA GPU<\/li>\n<\/ul>\n<h2 class=\"wp-block-heading\"><strong>The Perfect Home for Model Runner<\/strong><\/h2>\n<p>If you\u2019re new to it, Universal Blue is a family of next-generation, developer-focused Linux desktops. They provide modern, atomic, and reliable environments that are perfect for \u201ccloud-native\u201d workflows.<\/p>\n<p>As Jorge Castro who leads developer relations at Cloud Native Computing Foundation explains, \u201cBluefin and Aurora are reference architectures for bootc, which is a CNCF Sandbox Project. They are just two examples showing how the same container pattern used by application containers can also apply to operating systems. Working with AI models is no different \u2013 one common set of tools, built around OCI standards.\u201d<\/p>\n<p>The team already ships Docker as a core part of its developer-ready experience. 
The Perfect Home for Model Runner

If you're new to it, Universal Blue is a family of next-generation, developer-focused Linux desktops. They provide modern, atomic, and reliable environments that are perfect for "cloud-native" workflows.

As Jorge Castro, who leads developer relations at the Cloud Native Computing Foundation, explains: "Bluefin and Aurora are reference architectures for bootc, which is a CNCF Sandbox Project. They are just two examples showing how the same container pattern used by application containers can also apply to operating systems. Working with AI models is no different – one common set of tools, built around OCI standards."

The team already ships Docker as a core part of its developer-ready experience. By adding Docker Model Runner to the default installation (specifically in the -dx mode for developers), they've created a complete, batteries-included AI development environment.

There's no setup and no config. If you're on Bluefin or Aurora, you just open a terminal and start running models.

Get Started Today

If you're running the latest Bluefin LTS, you're all set once you turn on developer mode (https://docs.projectbluefin.io/bluefin-dx). The Docker Engine and the Model Runner CLI are already installed and waiting for you. Aurora's enablement instructions are documented at https://docs.getaurora.dev/dx/aurora-dx-intro.

You can run your first model in seconds:

[Screenshot: DMR Aurora image 1]

This command will download the model (if it isn't already cached) and run it, ready for you to interact with.
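As a rough sketch of what that first run can look like from a Bluefin or Aurora terminal (the model name and prompt below are illustrative examples, not taken from the screenshot above):

```sh
# Optional: pull the model ahead of time so the first run starts immediately.
docker model pull ai/smollm2

# One-shot prompt: prints a single completion and exits.
docker model run ai/smollm2 "Explain Docker Model Runner in one sentence."

# Remove the model when you no longer need it.
docker model rm ai/smollm2
```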
If you're on another Linux distribution, you can get started just as easily: follow the instructions in our GitHub repository (https://github.com/docker/model-runner).

What's Next?

This collaboration is a fantastic example of community-driven innovation. We want to give a huge shoutout to the greater bootc enthusiast community for their forward-thinking approach and for integrating Docker Model Runner so quickly.

This is just the beginning. We're committed to making AI development accessible, powerful, and fun for all developers.

How You Can Get Involved

The strength of Docker Model Runner lies in its community, and there's always room to grow. We need your help to make this project the best it can be. To get involved, you can:

- Star the repository: Show your support and help us gain visibility by starring the Docker Model Runner repo (https://github.com/docker/model-runner).
- Contribute your ideas: Have an idea for a new feature or a bug fix? Create an issue to discuss it, or fork the repository, make your changes, and submit a pull request. We're excited to see what ideas you have!
- Spread the word: Tell your friends, colleagues, and anyone else who might be interested in running AI models with Docker.

We're incredibly excited about this new chapter for Docker Model Runner, and we can't wait to see what we can build together. Let's get to work!
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[4],"tags":[],"class_list":["post-3062","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-docker"],"_links":{"self":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3062","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/comments?post=3062"}],"version-history":[{"count":0,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3062\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media\/3063"}],"wp:attachment":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media?parent=3062"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/categories?post=3062"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/tags?post=3062"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}