{"id":3239,"date":"2026-01-15T14:11:39","date_gmt":"2026-01-15T14:11:39","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2026\/01\/15\/opencode-with-docker-model-runner-for-private-ai-coding\/"},"modified":"2026-01-15T14:11:39","modified_gmt":"2026-01-15T14:11:39","slug":"opencode-with-docker-model-runner-for-private-ai-coding","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2026\/01\/15\/opencode-with-docker-model-runner-for-private-ai-coding\/","title":{"rendered":"OpenCode with Docker Model Runner for Private AI Coding"},"content":{"rendered":"<p>AI-powered coding assistants are becoming a core part of modern development workflows. At the same time, many teams are increasingly concerned about where their code goes, how it\u2019s processed, and who has access to it.<\/p>\n<p>By combining <a href=\"https:\/\/opencode.ai\/\" rel=\"nofollow noopener\" target=\"_blank\"><strong>OpenCode<\/strong><\/a> with <a href=\"https:\/\/www.docker.com\/products\/model-runner\/\"><strong>Docker Model Runner<\/strong><\/a>, you can build a powerful AI-assisted coding experience while keeping full control over your data, infrastructure and spend.<\/p>\n<p>This post walks through how to configure OpenCode to use Docker Model Runner and explains why this setup enables a <strong>privacy-first<\/strong> and <strong>cost-aware<\/strong> approach to AI-assisted development.<\/p>\n<h2 class=\"wp-block-heading\">What Are OpenCode and Docker Model Runner?<\/h2>\n<p><strong>OpenCode<\/strong> is an open-source coding assistant designed to integrate directly into developer workflows. It supports multiple model providers and exposes a flexible configuration system that makes it easy to switch between them.<\/p>\n<p><strong>Docker Model Runner (DMR)<\/strong> allows you to run and manage large language models easily. It exposes an OpenAI-compatible API, making it straightforward to integrate with existing tools that already support OpenAI-style endpoints.<\/p>\n<p>Together, they provide a familiar developer experience backed by models running entirely within infrastructure you control.<\/p>\n<h2 class=\"wp-block-heading\">Modifying the OpenCode Configuration<\/h2>\n<p><a href=\"https:\/\/opencode.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">OpenCode<\/a> can be customized using a configuration file that controls how providers and models are defined.<\/p>\n<p>You can define this configuration in one of two places:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Global configuration<\/strong>: <em>~\/.config\/opencode\/opencode.json<\/em><\/li>\n<li><strong>Project-specific configuration<\/strong>: <em>opencode.json<\/em> in the root of your project<\/li>\n<\/ul>\n<p>When a project-level configuration is present, it takes precedence over the global one.<\/p>\n<h2 class=\"wp-block-heading\">Using OpenCode with Docker Model Runner<\/h2>\n<p>Docker Model Runner (DMR) exposes an OpenAI-compatible API, which makes integrating it with OpenCode straightforward. 
To enable this integration, you simply need to update your `opencode.json` file to point to the DMR server and declare the locally available models.

Assuming Docker Model Runner is running at `http://localhost:12434/engines/v1`, your `opencode.json` configuration could look like this:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "dmr": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Docker Model Runner",
      "options": {
        "baseURL": "http://localhost:12434/engines/v1"
      },
      "models": {
        "qwen3-coder": {
          "name": "qwen3-coder"
        },
        "devstral-small-2": {
          "name": "devstral-small-2"
        }
      }
    }
  }
}
```

This configuration allows OpenCode to use locally hosted models through DMR, providing a powerful and private coding assistant.

*Note for Docker Desktop users: if you are running Docker Model Runner via Docker Desktop, make sure TCP access is enabled. OpenCode connects to Docker Model Runner over HTTP, which requires the TCP port to be exposed:*

```bash
docker desktop enable model-runner --tcp
```

*Once enabled, Docker Model Runner will be accessible at `http://localhost:12434/engines/v1`.*

*Figure 1: Enabling OpenCode to utilize locally hosted models through Docker Model Runner*

*Figure 2: Models like qwen3-coder, devstral-small-2, and gpt-oss are good for coding use cases.*

## Benefits of Using OpenCode with Model Runner

### Privacy by Design

Using OpenCode with Docker Model Runner enables a privacy-first approach to AI-assisted development by keeping all model inference within the infrastructure you control.

Docker Model Runner runs models behind an OpenAI-compatible API endpoint. OpenCode sends prompts, source code, and context only to that endpoint, and nowhere else.

This means:

- No third-party AI providers are involved
- No external data sharing or vendor-side retention
- No training on your code by external services

From OpenCode's perspective, the provider is simply an API endpoint. Where that endpoint runs (on a developer machine, an internal server, or a private cloud) is entirely up to you.
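For example, pointing a whole team at a shared internal inference server is just a baseURL change; the rest of the provider definition stays the same (the hostname below is a hypothetical placeholder):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "dmr": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Docker Model Runner (team server)",
      "options": {
        "baseURL": "http://models.internal.example.com:12434/engines/v1"
      },
      "models": {
        "qwen3-coder": { "name": "qwen3-coder" }
      }
    }
  }
}
```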
### Cost Control

Beyond privacy, running models with Docker Model Runner provides a significant cost advantage over hosted AI APIs.

Cloud-hosted coding assistants can become expensive very quickly, especially when:

- Working with large repositories
- Passing long conversational or code context
- Running frequent iterative prompts during development

With Docker Model Runner, inference runs on your own hardware. Once the model is pulled, there are **no per-token fees, no request-based pricing, and no surprise bills**. Teams can scale usage freely without worrying about escalating API costs.

## Recommended Models for Coding

When using OpenCode with Docker Model Runner, model choice has a direct impact on both quality and developer experience. While many general-purpose models might work reasonably well, **coding-focused models are optimized for long context windows and code-aware reasoning**, which is especially important for real-world repositories.

The following models are well suited for use with OpenCode and Docker Model Runner:

- [**qwen3-coder**](https://hub.docker.com/r/ai/qwen3-coder)
- [**devstral-small-2**](https://hub.docker.com/r/ai/devstral-small-2)
- [**gpt-oss**](https://hub.docker.com/r/ai/gpt-oss)

Each of these models can be served through Docker Model Runner and exposed via its OpenAI-compatible API.

You can pull these models by simply running:

```bash
docker model pull qwen3-coder
```

## Pulling Models from Docker Hub and Hugging Face

Docker Model Runner can pull models not only from [Docker Hub](https://hub.docker.com/u/ai), but also directly from **Hugging Face**, automatically converting them into OCI artifacts that can be run and shared like any other Docker model.

For example, you can pull a model directly from Hugging Face with:

```bash
docker model pull huggingface.co/unsloth/Ministral-3-14B-Instruct-2512-GGUF
```

This gives teams access to the broader open model ecosystem without sacrificing consistency or operability.
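Once a model is pulled, you can sanity-check it from the CLI before wiring it into OpenCode. A quick sketch, assuming the `docker model run` one-shot form available in recent Model Runner releases:

```bash
# Confirm the model is available locally
docker model ls

# Run a single prompt against it (the model is loaded on first use)
docker model run qwen3-coder "Write a function that reverses a string."
```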
## Context Length Matters

For coding tasks, context length is often more important than raw parameter count. Large repositories, multi-file refactors, and long conversational histories all benefit from being able to pass more context to the model.

By default:

- **qwen3-coder** → 128K context
- **devstral-small-2** → 128K context
- **gpt-oss** → 4,096 tokens

The difference comes down to model intent.

[**qwen3-coder**](https://hub.docker.com/r/ai/qwen3-coder) and [**devstral-small-2**](https://hub.docker.com/r/ai/devstral-small-2) are coding-focused models, designed to ingest large amounts of source code, project structure, and related context in a single request. A large default context window is critical for these use cases.

[**gpt-oss**](https://hub.docker.com/r/ai/gpt-oss), on the other hand, is a general-purpose model. Its default context size reflects a broader optimization target, where extremely long inputs are less critical than they are for code-centric workflows.

## Increasing Context Size for GPT-OSS

If you want to use [**gpt-oss**](https://hub.docker.com/r/ai/gpt-oss) for coding tasks that benefit from a larger context window, Docker Model Runner makes it easy to repackage the model with an increased context size.

For example, to create a version of **gpt-oss** with a 128K context window, you can run:

```bash
docker model pull gpt-oss   # in case it's not already pulled
docker model package --from gpt-oss --context-size 128000 gpt-oss:128K
```

This creates a new model artifact with an expanded context length that can be served by Docker Model Runner like any other model. Once packaged, you can reference this model in your `opencode.json` configuration:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "dmr": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Docker Model Runner",
      "options": {
        "baseURL": "http://localhost:12434/engines/v1"
      },
      "models": {
        "gpt-oss:128K": {
          "name": "gpt-oss (128K)"
        }
      }
    }
  }
}
```

## Sharing Models Across Your Team

Packaging models as OCI artifacts has an additional benefit: **the resulting model can be pushed to Docker Hub or a private registry.**

This allows teams to:

- Standardize on specific model variants (including context size)
- Share models across developers without local reconfiguration
- Ensure consistent behavior across environments
- Version and roll back model changes explicitly

Instead of each developer tuning models independently, teams can treat models as first-class artifacts, built once and reused everywhere. An example push workflow is sketched below.
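Publishing a packaged model follows the familiar tag-and-push pattern. A sketch, assuming the `docker model tag` and `docker model push` subcommands and a hypothetical `myorg` registry namespace:

```bash
# Tag the repackaged model under a shared namespace (myorg is a placeholder)
docker model tag gpt-oss:128K myorg/gpt-oss:128K

# Push it so teammates can pull the exact same variant
docker model push myorg/gpt-oss:128K

# On another developer's machine:
docker model pull myorg/gpt-oss:128K
```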
## Putting It All Together: Using the Model from the CLI

With Docker Model Runner configured and the gpt-oss:128K model packaged, you can start using it immediately from OpenCode.

This section walks through selecting the model and using it to generate an [agents.md](http://agents.md/) file directly inside the Docker Model Runner project.

### Step 1: Verify the Model Is Available

First, confirm that the packaged model is available locally:

```bash
docker model ls
```

You should see gpt-oss:128K listed among the available models. If not, make sure the packaging step completed successfully.

### Step 2: Configure OpenCode to Use the Model

Ensure your project's `opencode.json` includes the packaged model. Note that the key is the model ID sent to DMR, so it must name the `gpt-oss:128K` variant, as in the earlier example:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "dmr": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Docker Model Runner",
      "options": {
        "baseURL": "http://localhost:12434/engines/v1"
      },
      "models": {
        "gpt-oss:128K": {
          "name": "gpt-oss (128K)"
        }
      }
    }
  }
}
```

This makes the model available to OpenCode under the `dmr` provider.

### Step 3: Start OpenCode in the Project

From the root of the Docker Model Runner project, start OpenCode:

```bash
opencode
```

[Select the model](https://opencode.ai/docs/models/#select-a-model) from the list by running:

```
/models
```

*Figure 3: Selecting the gpt-oss model powered by Docker Model Runner in OpenCode*
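OpenCode also supports non-interactive invocation, which is handy for scripting. A sketch, assuming the `opencode run` subcommand with a `--model` flag in `provider/model` format, as described in the OpenCode CLI docs:

```bash
# One-shot prompt against the DMR-backed model, no TUI session needed
opencode run --model dmr/gpt-oss:128K "Summarize the build and test workflow of this repository."
```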
### Step 4: Ask OpenCode to Generate agents.md

Once OpenCode is running, prompt the model to generate an `agents.md` file using the repository as context:

```text
Generate an agents.md file in the project root following the agents.md specification and examples.

Use this repository as context and include sections that help an AI agent work effectively with this project, including:
- Project overview
- Build and test commands
- Code style guidelines
- Testing instructions
- Security considerations

Base the content on the actual structure, tooling, and conventions used in this repository.
Keep the file concise, practical, and actionable for an AI agent contributing to the project.
```

Because OpenCode is connected to Docker Model Runner, it can safely pass repository structure and relevant files to the model without sending any data outside your infrastructure.

The expanded 128K context window allows the model to reason over a larger portion of the project, resulting in a more accurate and useful [agents.md](http://agents.md/).

*Figure 4: The resulting agents.md file*

### Step 5: Review and Contribute to Docker Model Runner

Once the file is generated, review it:

```bash
cat agents.md
```

Make any necessary adjustments so it accurately reflects the project, then commit it like any other project artifact:

```bash
git add agents.md
git commit -m "Add agents documentation"
```

At this point, you're ready to open your first Docker Model Runner pull request.
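If you use the GitHub CLI, turning that commit into a pull request takes only a couple more commands (the branch name and titles below are hypothetical placeholders):

```bash
# Put the commit on a feature branch and push it to your fork
git checkout -b add-agents-md
git push origin add-agents-md

# Open the pull request against the upstream repository
gh pr create --title "Add agents.md" \
  --body "agents.md generated with OpenCode backed by Docker Model Runner"
```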
Using OpenCode with Docker Model Runner makes it easy to contribute high-quality documentation and project artifacts, while keeping all model inference and repository context within the infrastructure you control.

## How You Can Get Involved

The strength of Docker Model Runner lies in its community, and there's always room to grow. We need your help to make this project the best it can be. To get involved, you can:

- **Star the repository:** Show your support and help us gain visibility by starring the [Docker Model Runner repo](https://github.com/docker/model-runner).
- **Contribute your ideas:** Have an idea for a new feature or a bug fix? Create an issue to discuss it, or fork the repository, make your changes, and submit a pull request. We're excited to see what ideas you have!
- **Spread the word:** Tell your friends, colleagues, and anyone else who might be interested in running AI models with Docker.

We're incredibly excited about this new chapter for Docker Model Runner, and we can't wait to see what we can build together. Let's get to work!

### Learn more

- Check out the Docker Model Runner General Availability [announcement](https://www.docker.com/blog/announcing-docker-model-runner-ga/)
- Visit our [Model Runner GitHub repo](https://github.com/docker/model-runner). Docker Model Runner is open source, and we welcome collaboration and contributions from the community!
- Get started with Docker Model Runner with a simple [hello GenAI application](https://github.com/docker/hello-genai)
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[4],"tags":[],"class_list":["post-3239","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-docker"],"_links":{"self":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3239","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/comments?post=3239"}],"version-history":[{"count":0,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3239\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media\/3240"}],"wp:attachment":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media?parent=3239"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/categories?post=3239"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/tags?post=3239"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}