{"id":3310,"date":"2026-01-26T21:12:43","date_gmt":"2026-01-26T21:12:43","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2026\/01\/26\/clawdbot-with-docker-model-runner-a-private-personal-ai-assistant\/"},"modified":"2026-01-26T21:12:43","modified_gmt":"2026-01-26T21:12:43","slug":"clawdbot-with-docker-model-runner-a-private-personal-ai-assistant","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2026\/01\/26\/clawdbot-with-docker-model-runner-a-private-personal-ai-assistant\/","title":{"rendered":"Clawdbot with Docker Model Runner, a Private Personal AI Assistant"},"content":{"rendered":"<p>Personal AI assistants are transforming how we manage our daily lives\u2014from handling emails and calendars to automating smart homes. However, as these assistants gain more access to our private data, concerns about privacy, data residency, and long-term costs are at an all-time high.<\/p>\n<p>By combining <a href=\"https:\/\/clawd.bot\/\" rel=\"nofollow noopener\" target=\"_blank\"><strong>Clawdbot<\/strong><\/a> with <a href=\"https:\/\/docs.docker.com\/ai\/model-runner\/\" rel=\"nofollow noopener\" target=\"_blank\"><strong>Docker Model Runner<\/strong><\/a><strong> (DMR)<\/strong>, you can build a high-performance, agentic personal assistant while keeping full control over your data, infrastructure, and spending.<\/p>\n<p>This post walks through how to configure Clawdbot to utilize Docker Model Runner, enabling a privacy-first approach to personal intelligence.<\/p>\n<div class=\"wp-block-ponyo-image\">\n                <img data-opt-id=1256960008  fetchpriority=\"high\" decoding=\"async\" width=\"1000\" height=\"651\" src=\"https:\/\/www.docker.com\/app\/uploads\/2026\/01\/Clawdbot-figure-1.png\" class=\"fade-in attachment-full size-full\" alt=\"Clawdbot figure 1\" title=\"- Clawdbot figure 1\" \/>\n        <\/div>\n<h2 class=\"wp-block-heading\">What Are Clawdbot and Docker Model 
Runner?<\/h2>\n<p><strong>Clawdbot<\/strong> is a self-hosted AI assistant designed to live where you already are. Unlike browser-bound bots, Clawdbot integrates directly with messaging apps like <strong>Telegram, WhatsApp, Discord, and Signal<\/strong>. It acts as a proactive digital coworker capable of executing real-world actions across your devices and services.<\/p>\n<p><strong>Docker Model Runner (DMR)<\/strong> is Docker\u2019s native solution for running and managing large language models (LLMs) as OCI artifacts. It exposes an OpenAI-compatible API, allowing it to serve as the private \u201cbrain\u201d for any tool that supports standard AI endpoints.<\/p>\n<p>Together, they create a unified assistant that can browse the web, manage your files, and respond to your messages without ever sending your sensitive data to a third-party cloud.<\/p>\n<h2 class=\"wp-block-heading\">Benefits of the Clawdbot + DMR Stack<\/h2>\n<p><strong>Privacy by Design<\/strong><\/p>\n<p>In a \u201cPrivacy-First\u201d setup, your assistant\u2019s memory, message history, and files stay on your hardware. Docker Model Runner isolates model inference, meaning:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>No third-party training:<\/strong> Your personal emails and schedules aren\u2019t used to train future commercial models.<\/li>\n<\/ul>\n<ul class=\"wp-block-list\">\n<li><strong>Sandboxed execution:<\/strong> Models run in isolated environments, protecting your host system.<\/li>\n<\/ul>\n<ul class=\"wp-block-list\">\n<li><strong>Data Sovereignty:<\/strong> You decide exactly which \u201cSkills\u201d (web browsing, file access) the assistant can use.<\/li>\n<\/ul>\n<p><strong>Cost Control and Scaling<\/strong><\/p>\n<p>Cloud-based agents often become expensive when they use \u201clong-term memory\u201d or \u201cproactive searching,\u201d which consume massive amounts of tokens. With Docker Model Runner, inference runs on your own GPU\/CPU. 
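<\/p>\n<p>As a concrete sketch, you can pull a model once with the DMR CLI and then chat with it entirely on your own hardware (the model name matches the recommendations below; the prompt is illustrative):<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\n# One-time download of the model as an OCI artifact\ndocker model pull gpt-oss\n\n# Run a prompt against the local model\ndocker model run gpt-oss \"Summarize the trade-offs of local inference.\"\n<\/pre>\n<\/div>\n<p>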
Once a model is pulled, there are <strong>no per-token fees<\/strong>. You can let Clawdbot summarize thousands of unread emails or research complex topics for hours without worrying about a surprise API bill at the end of the month.<\/p>\n<h2 class=\"wp-block-heading\">Configuring Clawdbot with Docker Model Runner<\/h2>\n<h3 class=\"wp-block-heading\"><strong>Modifying the Clawdbot Configuration<\/strong><\/h3>\n<p>Clawdbot uses a flexible configuration system to define which models and providers drive its reasoning. While the onboarding wizard (clawdbot onboard) is the standard setup path, you can manually point Clawdbot to your private Docker infrastructure.<\/p>\n<p>You can define your provider configuration in:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Global configuration:<\/strong> ~\/.config\/clawdbot\/config.json<\/li>\n<li><strong>Workspace-specific configuration:<\/strong> clawdbot.json in your active workspace root.<\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\">Using Clawdbot with Docker Model Runner<\/h3>\n<p>To bridge the two, update your configuration to point to the DMR server. 
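<\/p>\n<p>Before editing anything, it can help to confirm the runner is reachable. Assuming the default address used below and that DMR exposes the standard OpenAI-compatible routes, you can list the served models:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\n# Returns a JSON list of models if the runner is up\ncurl http:\/\/localhost:12434\/v1\/models\n<\/pre>\n<\/div>\n<p>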
This assumes Docker Model Runner is listening at its default address, http:\/\/localhost:12434\/v1.<\/p>\n<p>Your config.json should be updated as follows:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; title: ; notranslate\">\n{\n  \"models\": {\n    \"providers\": {\n      \"dmr\": {\n        \"baseUrl\": \"http:\/\/localhost:12434\/v1\",\n        \"apiKey\": \"dmr-local\",\n        \"api\": \"openai-completions\",\n        \"models\": [\n          {\n            \"id\": \"gpt-oss:128K\",\n            \"name\": \"gpt-oss (128K context window)\",\n            \"contextWindow\": 128000,\n            \"maxTokens\": 128000\n          },\n          {\n            \"id\": \"glm-4.7-flash:128K\",\n            \"name\": \"glm-4.7-flash (128K context window)\",\n            \"contextWindow\": 128000,\n            \"maxTokens\": 128000\n          }\n        ]\n      }\n    }\n  },\n  \"agents\": {\n    \"defaults\": {\n      \"model\": {\n        \"primary\": \"dmr\/gpt-oss:128K\"\n      }\n    }\n  }\n}\n\n<\/pre>\n<\/div>\n<p>This configuration tells Clawdbot to bypass external APIs and route all \u201cthinking\u201d to your private models.<\/p>\n\n<div class=\"style-plain wp-block-ponyo-houston\">\n<p><strong>Note for Docker Desktop Users:<\/strong><br \/>Ensure TCP access is enabled so Clawdbot can communicate with the runner. 
Run the following command in your terminal:<br \/>docker desktop enable model-runner --tcp<\/p>\n<\/div>\n<h2 class=\"wp-block-heading\">Recommended Models for Personal Assistants<\/h2>\n<p>While coding models focus on logic, personal assistant models need a balance of instruction-following, tool-use capability, and long-term memory.<\/p>\n<div class=\"wp-block-ponyo-table style__default\">\n<table class=\"responsive-table\">\n<tbody class=\"wp-block-ponyo-table-body\">\n<tr class=\"wp-block-ponyo-table-header\">\n<th class=\"wp-block-ponyo-cell\" data-responsive-table-heading=\"Model\">\n<p><strong>Model<\/strong><\/p>\n<\/th>\n<th class=\"wp-block-ponyo-cell\" data-responsive-table-heading=\"Best For\">\n<p><strong>Best For<\/strong><\/p>\n<\/th>\n<th class=\"wp-block-ponyo-cell\" data-responsive-table-heading=\"DMR Pull Command\">\n<p><strong>DMR Pull Command<\/strong><\/p>\n<\/th>\n<\/tr>\n<tr class=\"wp-block-ponyo-table-row\">\n<td class=\"wp-block-ponyo-cell\">\n<p><a href=\"https:\/\/hub.docker.com\/r\/ai\/gpt-oss\" rel=\"nofollow noopener\" target=\"_blank\"><strong>gpt-oss<\/strong><\/a><\/p>\n<\/td>\n<td class=\"wp-block-ponyo-cell\">\n<p>Complex reasoning &amp; scheduling<\/p>\n<\/td>\n<td class=\"wp-block-ponyo-cell\">\n<p>docker model pull gpt-oss<\/p>\n<\/td>\n<\/tr>\n<tr class=\"wp-block-ponyo-table-row\">\n<td class=\"wp-block-ponyo-cell\">\n<p><a href=\"https:\/\/hub.docker.com\/r\/ai\/glm-4.7-flash\" rel=\"nofollow noopener\" target=\"_blank\"><strong>glm-4.7-flash<\/strong><\/a><\/p>\n<\/td>\n<td class=\"wp-block-ponyo-cell\">\n<p>Fast coding assistance and debugging<\/p>\n<\/td>\n<td class=\"wp-block-ponyo-cell\">\n<p>docker model pull glm-4.7-flash<\/p>\n<\/td>\n<\/tr>\n<tr class=\"wp-block-ponyo-table-row\">\n<td class=\"wp-block-ponyo-cell\">\n<p><a href=\"https:\/\/hub.docker.com\/r\/ai\/qwen3-coder\" rel=\"nofollow noopener\" target=\"_blank\"><strong>qwen3-coder<\/strong><\/a><\/p>\n<\/td>\n<td 
class=\"wp-block-ponyo-cell\">\n<p>Agentic coding workflows<\/p>\n<\/td>\n<td class=\"wp-block-ponyo-cell\">\n<p>docker model pull qwem3-coder<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<h3 class=\"wp-block-heading\">Pulling models from the ecosystem<\/h3>\n<p>DMR can pull models directly from <strong>Hugging Face<\/strong> and convert them into OCI artifacts automatically:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\ndocker model pull huggingface.co\/bartowski\/Llama-3.3-70B-Instruct-GGUF\n<\/pre>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Context Length and \u201cSoul\u201d<\/strong><\/h3>\n<p>For a personal assistant, context length is critical. Clawdbot relies on a <strong>SOUL.md<\/strong> file (which defines its personality) and a <strong>Memory Vault<\/strong> (which stores your preferences).<\/p>\n<p>If a model\u2019s default context is too small, it will \u201cforget\u201d your instructions mid-conversation. You can use DMR to repackage a model with a larger context window:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\ndocker model package --from llama3.3 --context-size 128000 llama-personal:128k\n<\/pre>\n<\/div>\n<p>Once packaged, reference llama-personal:128k in your Clawdbot config to ensure your assistant always remembers the full history of your requests.<\/p>\n<h2 class=\"wp-block-heading\">Putting Clawdbot to Work: Running Scheduled Tasks\u00a0<\/h2>\n<p>With Clawdbot and DMR running, you can move beyond simple chat. 
Let\u2019s set up a \u201cMorning Briefing\u201d task.<\/p>\n<ol class=\"wp-block-list\">\n<li><strong>Verify the Model:<\/strong> docker model ls (Ensure your model is active).<\/li>\n<li><strong>Initialize the Soul:<\/strong> Run clawdbot init-soul to define how the assistant should talk to you.<\/li>\n<li><strong>Assign a Task:<\/strong><strong><br \/><\/strong>\u201cClawdbot, every morning at 8:00 AM, check my unread emails, summarize the top 3 priorities, and message me the summary on Telegram.\u201d<\/li>\n<\/ol>\n<p>Because Clawdbot is connected to your private Docker Model Runner, it can parse those emails and reason about your schedule privately. No data leaves your machine; you simply receive a helpful notification on your phone via your chosen messaging app.<\/p>\n<h3 class=\"wp-block-heading\"><strong>How You Can Get Involved<\/strong><\/h3>\n<p>The Clawdbot and Docker Model Runner ecosystems are growing rapidly. Here\u2019s how you can help:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Share Model Artifacts:<\/strong> Push your optimized OCI model packages to <a href=\"https:\/\/hub.docker.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Docker Hub<\/a> for others to use.<\/li>\n<li><strong>Join the Community:<\/strong> Visit the <a href=\"https:\/\/github.com\/docker\/model-runner\" rel=\"nofollow noopener\" target=\"_blank\">Docker Model Runner GitHub repo<\/a>.<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>Personal AI assistants are transforming how we manage our daily lives\u2014from handling emails and calendars to automating smart homes. 
However, [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3311,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[4],"tags":[],"class_list":["post-3310","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-docker"],"_links":{"self":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3310","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/comments?post=3310"}],"version-history":[{"count":0,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3310\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media\/3311"}],"wp:attachment":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media?parent=3310"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/categories?post=3310"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/tags?post=3310"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}