{"id":2121,"date":"2025-06-11T12:38:46","date_gmt":"2025-06-11T12:38:46","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/06\/11\/publishing-ai-models-to-docker-hub\/"},"modified":"2025-06-11T12:38:46","modified_gmt":"2025-06-11T12:38:46","slug":"publishing-ai-models-to-docker-hub","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/06\/11\/publishing-ai-models-to-docker-hub\/","title":{"rendered":"Publishing AI models to Docker Hub"},"content":{"rendered":"<p>When we first released <a href=\"https:\/\/www.docker.com\/blog\/introducing-docker-model-runner\/\">Docker Model Runner<\/a>, it came with built-in support for running AI models published and maintained by Docker on Docker Hub. This made it simple to pull a model like llama3.2 or gemma3 and start using it locally with familiar Docker-style commands. <\/p>\n<p>Model Runner now supports three new commands: <strong>tag<\/strong>, <strong>push<\/strong>, and <strong>package<\/strong>. These enable you to share models with your team, your organization, or the wider community. 
Whether you're managing your own fine-tuned models or curating a set of open-source models, Model Runner now lets you <a href="https://www.docker.com/products/docker-hub/">publish them to Docker Hub</a> or any other OCI Artifact-compatible container registry. For teams using Docker Hub, enterprise features like Registry Access Management (RAM) provide policy-based controls and guardrails to help enforce secure, consistent access.</p>

<h2 class="wp-block-heading">Tagging and pushing to Docker Hub</h2>
<p>Let's start by republishing an existing model from Docker Hub under your own namespace.</p>
<div class="wp-block-syntaxhighlighter-code">
# Step 1: Pull the model from Docker Hub
$ docker model pull ai/smollm2

# Step 2: Tag it for your own organization
$ docker model tag ai/smollm2 myorg/smollm2

# Step 3: Push it to Docker Hub
$ docker model push myorg/smollm2
</div>
<p>That's it! Your model is now available at myorg/smollm2 and ready to be consumed with Model Runner by anyone with access.</p>

<h2 class="wp-block-heading">Pushing to other container registries</h2>
<p>Model Runner supports other container registries beyond Docker Hub, including GitHub Container Registry (GHCR).</p>
<div class="wp-block-syntaxhighlighter-code">
# Step 1: Tag for GHCR
$ docker model tag ai/smollm2 ghcr.io/myorg/smollm2

# Step 2: Push to GHCR
$ docker model push ghcr.io/myorg/smollm2
</div>
<p>Authentication and permissions on GHCR work just like they do for regular Docker images, so you can reuse your existing workflow for managing registry credentials.</p>

<h2 class="wp-block-heading">Packaging a custom GGUF file</h2>
<p>Want to publish your own model file?
You can use the <strong>package</strong> command to wrap a .gguf file into a Docker-compatible OCI artifact and push it directly to a container registry such as Docker Hub.</p>

<div class="wp-block-syntaxhighlighter-code">
# Step 1: Download a model, e.g. from Hugging Face
$ curl -L -o model.gguf https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF/resolve/main/mistral-7b-v0.1.Q4_K_M.gguf

# Step 2: Package and push it
$ docker model package --gguf "$(pwd)/model.gguf" --push myorg/mistral-7b-v0.1:Q4_K_M
</div>
<p>You've now turned a raw model file in GGUF format into a portable, versioned, and shareable artifact that works seamlessly with docker model run.</p>

<h2 class="wp-block-heading">Conclusion</h2>
<p>We've seen how easy it is to <a href="https://docs.docker.com/model-runner/" target="_blank">publish your own models</a> using Docker Model Runner's new tag, push, and package commands. These additions bring the familiar Docker developer experience to the world of AI model sharing. Teams and enterprises using Docker Hub can securely manage access and control for their models, just like with container images, making it easier to scale GenAI applications across teams.</p>
<p><a href="https://www.docker.com/newsletter-subscription/">Stay tuned for more improvements</a> to Model Runner that will make packaging and running models even more powerful and flexible.</p>

<h3 class="wp-block-heading">Learn more</h3>
<p>Read our quickstart guide to <a href="https://www.docker.com/blog/run-llms-locally/">Docker Model Runner</a>.</p>
<p>Find documentation for <a href="https://docs.docker.com/model-runner/" target="_blank">Model Runner</a>.</p>
<p>Subscribe to the <a href="https://www.docker.com/newsletter-subscription/">Docker Navigator Newsletter</a>.</p>
<p>New to Docker?
<a href=\"https:\/\/hub.docker.com\/signup?_gl=1*1v81gq1*_gcl_au*MTQxNjU3MjYxNS4xNzQyMjI1MTk2*_ga*MTMxODI0ODQ4LjE3NDE4MTI3NTA.*_ga_XJWPQMJYHQ*czE3NDg0NTYyNzIkbzI2JGcxJHQxNzQ4NDU2MzI2JGo2JGwwJGgw\" target=\"_blank\">Create an account<\/a>.\u00a0<\/p>\n<p>Have questions? The <a href=\"https:\/\/www.docker.com\/community\/\">Docker community is here to help<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>When we first released Docker Model Runner, it came with built-in support for running AI models published and maintained by [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[4],"tags":[],"class_list":["post-2121","post","type-post","status-publish","format-standard","hentry","category-docker"],"_links":{"self":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/2121","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/comments?post=2121"}],"version-history":[{"count":0,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/2121\/revisions"}],"wp:attachment":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media?parent=2121"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/categories?post=2121"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/tags?post=2121"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}