{"id":3484,"date":"2026-02-23T14:13:50","date_gmt":"2026-02-23T14:13:50","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2026\/02\/23\/run-openclaw-securely-in-docker-sandboxes\/"},"modified":"2026-02-23T14:13:50","modified_gmt":"2026-02-23T14:13:50","slug":"run-openclaw-securely-in-docker-sandboxes","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2026\/02\/23\/run-openclaw-securely-in-docker-sandboxes\/","title":{"rendered":"Run OpenClaw Securely in Docker Sandboxes"},"content":{"rendered":"<p><a href=\"https:\/\/docs.docker.com\/ai\/sandboxes\/\" rel=\"nofollow noopener\" target=\"_blank\">Docker Sandboxes<\/a> is a new primitive in the Docker\u2019s ecosystem that allows you to run AI agents or any other workloads in isolated micro VMs. It provides strong isolation, convenient developer experience and a strong security boundary with a network proxy configurable to deny agents connecting to arbitrary internet hosts. The network proxy will also conveniently inject the API keys, like your <code>ANTHROPIC_API_KEY<\/code>, or <code>OPENAI_API_KEY<\/code> in the network proxy so the agent doesn\u2019t have access to them at all and cannot leak them.\u00a0<\/p>\n<p>In a <a href=\"https:\/\/open.substack.com\/pub\/olegselajev\/p\/building-custom-docker-sandboxes\" rel=\"nofollow noopener\" target=\"_blank\">previous article I showed how Docker Sandboxes<\/a> lets you install any tools an AI agent might need, like a JDK for Java projects or some custom CLIs, into a container that\u2019s isolated from the host. Today we\u2019re going a step further: we\u2019ll run <a href=\"https:\/\/openclaw.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">OpenClaw<\/a>, an open-source AI coding agent, on a local model via Docker Model Runner.<\/p>\n<p>No API keys, no cloud costs, fully private. 
And you can do it in a handful of commands.<\/p>\n<h2 class=\"wp-block-heading\">Quick Start<\/h2>\n<p>Make sure you have <a href=\"https:\/\/www.docker.com\/products\/docker-desktop\/\">Docker Desktop<\/a> installed and Docker Model Runner enabled (Settings \u2192 Docker Model Runner \u2192 Enable), then pull a model:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\ndocker model pull ai\/gpt-oss:20B-UD-Q4_K_XL\n<\/pre>\n<\/div>\n<p>Now create and run the sandbox:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\ndocker sandbox create --name openclaw -t olegselajev241\/openclaw-dmr:latest shell .\ndocker sandbox network proxy openclaw --allow-host localhost\ndocker sandbox run openclaw\n<\/pre>\n<\/div>\n<p>Inside the sandbox:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\n~\/start-openclaw.sh\n<\/pre>\n<\/div>\n<div class=\"wp-block-ponyo-image\">\n                <img data-opt-id=\"1782770139\" fetchpriority=\"high\" decoding=\"async\" width=\"1999\" height=\"850\" src=\"https:\/\/www.docker.com\/app\/uploads\/2026\/02\/image1-2.png\" class=\"fade-in attachment-full size-full\" alt=\"Running OpenClaw inside a Docker Sandbox\" \/>\n        <\/div>\n\n<p>And that\u2019s it. You\u2019re in OpenClaw\u2019s terminal UI, talking to a local gpt-oss model on your machine. The model runs in Docker Model Runner on your host, while OpenClaw runs completely isolated in the sandbox: it can only read and write files in the workspace you give it, and a network proxy denies connections to unwanted hosts.<\/p>\n<h2 class=\"wp-block-heading\">Cloud models work too<\/h2>\n<p>The sandbox proxy will automatically inject API keys from your host environment. 
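<\/p>\n<p>As a concrete sketch of what that means (the key value below is a hypothetical placeholder): a key exported on the host never appears inside the sandbox; the proxy attaches it to outgoing API requests on the agent\u2019s behalf:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\n# On the host, before creating the sandbox: the key lives only here\nexport ANTHROPIC_API_KEY=sk-ant-placeholder\n\n# Inside the sandbox, the raw key is never present in the environment\nprintenv ANTHROPIC_API_KEY || echo \"not visible in the sandbox\"\n<\/pre>\n<\/div>\n<p>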
If you have <code>ANTHROPIC_API_KEY<\/code> or <code>OPENAI_API_KEY<\/code> set, OpenClaw can run cloud models: just specify them in the OpenClaw settings. The proxy takes care of credential injection, so your keys are never exposed inside the sandbox.<\/p>\n<p>This means you can use free local models for experimentation, then switch to cloud models for serious work, all in the same sandbox. With cloud models you don\u2019t even need to allow the proxy to reach the host\u2019s localhost, so you can skip <code>docker sandbox network proxy openclaw --allow-host localhost<\/code>.<\/p>\n<h2 class=\"wp-block-heading\">Choose Your Model<\/h2>\n<p>The startup script automatically discovers the models available in your Docker Model Runner. List them:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\n~\/start-openclaw.sh list\n<\/pre>\n<\/div>\n<p>Use a specific model:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\n~\/start-openclaw.sh ai\/qwen2.5:7B-Q4_K_M\n<\/pre>\n<\/div>\n<p>Any model you\u2019ve pulled with <code>docker model pull<\/code> is available.<\/p>\n<h2 class=\"wp-block-heading\">How it works (a bit technical)<\/h2>\n<p>The pre-built image (<code>olegselajev241\/openclaw-dmr:latest<\/code>) is based on the <code>shell<\/code> sandbox template with three additions: Node.js 22, OpenClaw, and a tiny networking bridge.<\/p>\n<p>The bridge is needed because Docker Model Runner runs on your host and binds to <code>localhost:12434<\/code>. But <code>localhost<\/code> inside the sandbox means the sandbox itself, not your host. 
The sandbox <em>does<\/em> have an HTTP proxy, at <code>host.docker.internal:3128<\/code>, that can reach host services, and we allow it to reach <code>localhost<\/code> with <code>docker sandbox network proxy --allow-host localhost<\/code>.<\/p>\n<p>The problem is that OpenClaw runs on Node.js, and Node.js doesn\u2019t respect the <code>HTTP_PROXY<\/code> environment variable by default. So we wrote a ~20-line bridge script that OpenClaw connects to at <code>127.0.0.1:54321<\/code>, which explicitly forwards requests through the proxy to reach Docker Model Runner on the host:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\nOpenClaw \u2192 bridge (localhost:54321) \u2192 proxy (host.docker.internal:3128) \u2192 Model Runner (host localhost:12434)\n<\/pre>\n<\/div>\n<p>The <code>start-openclaw.sh<\/code> script starts the bridge, starts OpenClaw\u2019s gateway (with proxy vars cleared so it hits the bridge directly), and runs the TUI.<\/p>\n<h2 class=\"wp-block-heading\">Build Your Own<\/h2>\n<p>Want to customize the image or just see how it works? Here\u2019s the full build process.<\/p>\n<h3 class=\"wp-block-heading\">1. Create a base sandbox and install OpenClaw<\/h3>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\ndocker sandbox create --name my-openclaw shell .\ndocker sandbox network proxy my-openclaw --allow-host localhost\ndocker sandbox run my-openclaw\n<\/pre>\n<\/div>\n<p>Now let\u2019s install OpenClaw in the sandbox:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\n# Install Node 22 (OpenClaw requires it)\nnpm install -g n &amp;&amp; n 22\nhash -r\n\n# Install OpenClaw\nnpm install -g openclaw@latest\n\n# Run initial setup\nopenclaw setup\n<\/pre>\n<\/div>\n<h3 class=\"wp-block-heading\">2. 
Create the Model Runner bridge<\/h3>\n<p>This is the magic piece: a tiny Node.js server that forwards requests through the sandbox proxy to Docker Model Runner on your host:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\ncat &gt; ~\/model-runner-bridge.js &lt;&lt; 'EOF'\nconst http = require(\"http\");\nconst { URL } = require(\"url\");\n\n\/\/ The sandbox proxy, which is allowed to reach the host's localhost\nconst PROXY = new URL(process.env.HTTP_PROXY || \"http:\/\/host.docker.internal:3128\");\n\/\/ Docker Model Runner, listening on the host\nconst TARGET = \"localhost:12434\";\n\nhttp.createServer((req, res) =&gt; {\n  \/\/ Forward-proxy style request: the absolute target URL goes in the path\n  const proxyReq = http.request({\n    hostname: PROXY.hostname,\n    port: PROXY.port,\n    path: \"http:\/\/\" + TARGET + req.url,\n    method: req.method,\n    headers: { ...req.headers, host: TARGET }\n  }, proxyRes =&gt; {\n    \/\/ Stream the model's response straight back to the caller\n    res.writeHead(proxyRes.statusCode, proxyRes.headers);\n    proxyRes.pipe(res);\n  });\n  proxyReq.on(\"error\", e =&gt; { res.writeHead(502); res.end(e.message); });\n  req.pipe(proxyReq);\n}).listen(54321, \"127.0.0.1\");\nEOF\n<\/pre>\n<\/div>\n<h3 class=\"wp-block-heading\">3. 
Configure OpenClaw to use Docker Model Runner<\/h3>\n<p>Now merge the Docker Model Runner provider into OpenClaw\u2019s config:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\npython3 -c \"\nimport json\np = '$HOME\/.openclaw\/openclaw.json'\nwith open(p) as f: cfg = json.load(f)\ncfg['models'] = cfg.get('models', {})\ncfg['models']['mode'] = 'merge'\ncfg['models']['providers'] = cfg['models'].get('providers', {})\ncfg['models']['providers']['docker-model-runner'] = {\n    'baseUrl': 'http:\/\/127.0.0.1:54321\/engines\/llama.cpp\/v1',\n    'apiKey': 'not-needed',\n    'api': 'openai-completions',\n    'models': [{\n        'id': 'ai\/qwen2.5:7B-Q4_K_M',\n        'name': 'Qwen 2.5 7B (Docker Model Runner)',\n        'reasoning': False, 'input': ['text'],\n        'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0},\n        'contextWindow': 32768, 'maxTokens': 8192\n    }]\n}\ncfg['agents'] = cfg.get('agents', {})\ncfg['agents']['defaults'] = cfg['agents'].get('defaults', {})\ncfg['agents']['defaults']['model'] = {'primary': 'docker-model-runner\/ai\/qwen2.5:7B-Q4_K_M'}\ncfg['gateway'] = {'mode': 'local'}\nwith open(p, 'w') as f: json.dump(cfg, f, indent=2)\n\"\n<\/pre>\n<\/div>\n<h3 class=\"wp-block-heading\">4. 
Save and share<\/h3>\n<p>Exit the sandbox and save it as a reusable image:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\ndocker sandbox save my-openclaw my-openclaw-image:latest\n<\/pre>\n<\/div>\n<p>Push it to a registry so anyone can use it:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\ndocker tag my-openclaw-image:latest yourname\/my-openclaw:latest\ndocker push yourname\/my-openclaw:latest\n<\/pre>\n<\/div>\n<p>Anyone with a recent Docker Desktop that includes sandboxes can spin up the same environment with:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\ndocker sandbox create --name openclaw -t yourname\/my-openclaw:latest shell .\n<\/pre>\n<\/div>\n<h2 class=\"wp-block-heading\">What\u2019s next<\/h2>\n<p><a href=\"https:\/\/docs.docker.com\/ai\/sandboxes\/\" rel=\"nofollow noopener\" target=\"_blank\">Docker Sandboxes<\/a> make it easy to run any AI coding agent in an isolated, reproducible environment. 
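<\/p>\n<p>If the TUI can\u2019t reach a model, a quick sanity check of each hop (assuming the ports used in this setup) is:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\n# On the host: is Docker Model Runner answering?\ncurl http:\/\/localhost:12434\/engines\/llama.cpp\/v1\/models\n\n# Inside the sandbox: does the bridge reach it through the proxy?\ncurl http:\/\/127.0.0.1:54321\/engines\/llama.cpp\/v1\/models\n<\/pre>\n<\/div>\n<p>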
With Docker Model Runner, you get a fully local AI coding setup: no cloud dependencies, no API costs, and complete privacy.<\/p>\n<p>Try it out and let us know what you think.<\/p>","protected":false},"excerpt":{"rendered":"<p>Docker Sandboxes is a new primitive in the Docker\u2019s ecosystem that allows you to run AI agents or any other [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3485,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[4],"tags":[],"class_list":["post-3484","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-docker"],"_links":{"self":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3484","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/comments?post=3484"}],"version-history":[{"count":0,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3484\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media\/3485"}],"wp:attachment":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media?parent=3484"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/categories?post=3484"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/tags?post=3484"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}