{"id":2359,"date":"2025-08-12T16:33:45","date_gmt":"2025-08-12T16:33:45","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/08\/12\/building-ai-agents-made-easy-with-goose-and-docker\/"},"modified":"2025-08-12T16:33:45","modified_gmt":"2025-08-12T16:33:45","slug":"building-ai-agents-made-easy-with-goose-and-docker","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/08\/12\/building-ai-agents-made-easy-with-goose-and-docker\/","title":{"rendered":"Building AI agents made easy with Goose and Docker"},"content":{"rendered":"<p>Building AI agents can be a complex task. But it also can be a fairly simple combination of answers to the following questions:\u00a0<\/p>\n<p>What is the AI backend that powers my intelligent fuzzy computation?<\/p>\n<p>What tools do you need to give to the AI to access external systems or execute predefined software commands?<\/p>\n<p>What is the application that wraps these together and provides the business logic for the agent (like when you\u2019re building a marketing agent, what makes it know more about marketing or your particular use-cases than a generic chat-GPT model)?<\/p>\n<p>A very popular way to build agents currently is to extend AI assistants or chatbots with the business logic as \u201csystem prompts\u201d or configurable profile instructions (which we\u2019ll show later), and tools via the MCP protocol.\u00a0<\/p>\n<p>In this article, we will look at an example of how you can build an agent like this (with a toy functionality of summarizing YouTube videos)\u00a0 with open source tools. We\u2019re going to run everything in containers for isolation and repeatability. 
We\u2019re going to use Docker Model Runner for running LLMs locally, so your agent processes data privately.<\/p>\n<div class=\"wp-block-ponyo-image\"><\/div>\n<p>You can find the project in the repository on GitHub: <a href=\"https:\/\/github.com\/shelajev\/hani\" target=\"_blank\">https:\/\/github.com\/shelajev\/hani<\/a>.\u00a0<\/p>\n<p>We\u2019re going to use Goose as our agent and the Docker MCP gateway for accessing the MCP tools.\u00a0<\/p>\n<p>In general, hani (goose in Estonian, today you learned!) is a multi-component system defined and orchestrated by Docker Compose.<\/p>\n<p>Here is a brief description of the components used. All in all, this is a bit of a hack, but I feel it\u2019s a very interesting setup, and even if you don\u2019t use it for building agents, learning about the technologies used might come in useful one day.\u00a0<\/p>\n<div class=\"wp-block-ponyo-table style__default\">\n<p>Component<\/p>\n<p>Function<\/p>\n<p><a href=\"https:\/\/github.com\/block\/goose\" target=\"_blank\">Goose<\/a><\/p>\n<p>The AI agent responsible for task execution. It is configured to use the local LLM for reasoning and the MCP Gateway for tool access.<\/p>\n<p><a href=\"https:\/\/docs.docker.com\/ai\/model-runner\/\" target=\"_blank\">Docker Model Runner<\/a><\/p>\n<p>Runs a local LLM inference engine on the host. It exposes an OpenAI-compatible API endpoint (e.g., http:\/\/localhost:12434) that the Goose agent connects to.<\/p>\n<p><a href=\"https:\/\/docs.docker.com\/ai\/mcp-gateway\/\" target=\"_blank\">MCP Gateway<\/a><\/p>\n<p>A proxy that aggregates and isolates external MCP tools in their own containers. 
It provides a single, authenticated endpoint for the agent, mitigating security risks like command injection.<\/p>\n<p><a href=\"https:\/\/github.com\/tsl0922\/ttyd\" target=\"_blank\">ttyd<\/a><\/p>\n<p>A command-line utility that serves the container\u2019s terminal, running the Goose CLI, as a web application accessible via a browser.<\/p>\n<p><a href=\"https:\/\/developers.cloudflare.com\/cloudflare-one\/connections\/connect-networks\/do-more-with-tunnels\/trycloudflare\/\" target=\"_blank\">Cloudflare Quick Tunnel<\/a><\/p>\n<p>(Optional) Creates a secure public URL for the local ttyd service, enabling remote access or collaboration without firewall configuration.<\/p>\n<\/div>\n<h2 class=\"wp-block-heading\">Implementation Details<\/h2>\n<p>The environment is defined by two primary configuration files: a Dockerfile to build the agent\u2019s image and a compose.yml to orchestrate the services.<\/p>\n<p>Let\u2019s look at the Dockerfile first; it creates a container image for the hani service with all necessary dependencies and configures Goose for us.\u00a0<\/p>\n<p>After installing the dependencies, there are a few lines that I want to emphasize:\u00a0<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\nRUN wget -O \/tmp\/ttyd.x86_64 https:\/\/github.com\/tsl0922\/ttyd\/releases\/download\/1.7.7\/ttyd.x86_64 &amp;&amp; <br \/>\n   chmod +x \/tmp\/ttyd.x86_64 &amp;&amp; <br \/>\n   mv \/tmp\/ttyd.x86_64 \/usr\/local\/bin\/ttyd\n<\/div>\n<p>This installs ttyd. It\u2019s super convenient if you need a Docker image with a CLI application but want a browser-based experience.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\nRUN wget -qO- https:\/\/github.com\/block\/goose\/releases\/download\/stable\/download_cli.sh | CONFIGURE=false bash &amp;&amp; <br \/>\n   ls -la \/root\/.local\/bin\/goose &amp;&amp; <br \/>\n   \/root\/.local\/bin\/goose --version\n<\/div>\n<p>This snippet installs Goose. 
If you like to live on the edge, you can add <a href=\"https:\/\/github.com\/block\/goose\/blob\/main\/download_cli.sh#L21\" target=\"_blank\">CANARY=true<\/a> and get the unstable but latest and greatest version.\u00a0<\/p>\n<p>Note that we are also disabling CONFIGURE, because we\u2019ll configure Goose by supplying a pre-made configuration file with the next two lines in the Dockerfile:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\nCOPY config.yaml \/root\/.config\/goose\/config.yaml<br \/>\nRUN chmod u-w \/root\/.config\/goose\/config.yaml\n<\/div>\n<p>We do the same with .goosehints, a file whose instructions Goose reads and takes into account (when the developer extension is enabled). We use this to supply business logic to our agent.\u00a0<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\nCOPY .goosehints \/app\/.goosehints\n<\/div>\n<p>The rest is pretty straightforward; the only thing we need to remember is that we\u2019re running ttyd, which runs goose, rather than goose directly.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\nENTRYPOINT [\"ttyd\", \"-W\"]<br \/>\nCMD [\"goose\"]\n<\/div>\n<p>Now would be a great time to look at the config for Goose, but in order to glue the pieces together, we first need to define them, so let\u2019s explore the compose file.\u00a0<\/p>\n<p>The compose.yml file defines and connects the stack\u2019s services using Docker Compose.<\/p>\n<p>Let\u2019s look at the compose.yml file starting with the <a href=\"https:\/\/docs.docker.com\/ai\/compose\/models-and-compose\/#what-are-compose-models\" target=\"_blank\">models section<\/a>:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\nmodels:<br \/>\n qwen3:<br \/>\n   # pre-pull the model when starting Docker Model Runner<br \/>\n   model: hf.co\/unsloth\/qwen3-30b-a3b-instruct-2507-gguf:q5_k_m<br \/>\n   context_size: 16355\n<\/div>\n<p>First of all, we define the model we\u2019ll use as the brain of the 
operation. If the model is already available locally, Docker Model Runner will load it on demand to serve requests. If it\u2019s not a model you\u2019ve used before, it will be automatically pulled from <a href=\"https:\/\/hub.docker.com\/u\/ai\" target=\"_blank\">Docker Hub<\/a>, Hugging Face, or your OCI artifact registry. This can take a bit of time, as even the small models are sizeable downloads, so you can prepare beforehand by running:\u00a0<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\ndocker model pull $MODEL_NAME\n<\/div>\n<p>Now the tools part. The MCP gateway is a \u201cnormal\u201d application running in a container, so we pull it in by defining a \u201cservice\u201d and specifying the correct Docker image:\u00a0<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n  mcp-gateway:<br \/>\n   image: docker\/mcp-gateway:latest<br \/>\n   use_api_socket: true<br \/>\n   command:<br \/>\n     - --transport=sse<br \/>\n     - --servers=youtube_transcript\n<\/div>\n<p>We instruct it to be available as an SSE MCP server itself, and tell it which MCP servers to enable for the current deployment. The MCP toolkit catalog contains more than a hundred useful MCP servers. This is a toy example, so we enable a toy MCP server for pulling YouTube video transcripts.\u00a0<\/p>\n<p>Now, with the dependencies figured out, our main application is built from the local project context and sets the GOOSE_MODEL env variable to the actual model we load in the Docker Model Runner:\u00a0<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n  hani:<br \/>\n   build:<br \/>\n     context: .<br \/>\n   ports:<br \/>\n     - \"7681:7681\"<br \/>\n   depends_on:<br \/>\n     - mcp-gateway<br \/>\n   env_file:<br \/>\n     - .env<br \/>\n   models:<br \/>\n     qwen3:<br \/>\n       model_var: GOOSE_MODEL\n<\/div>\n<p>Simple enough, right? 
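<\/p>
<p>As a side note, the business logic in .goosehints can be as simple as plain-text instructions. Here is an illustrative sketch of what such a file might contain for this agent (hypothetical content, not the actual file from the repo):<\/p>

```text
You are an assistant that summarizes YouTube videos.
When the user shares a YouTube link, call the youtube_transcript tool
to fetch the transcript, then answer based only on that transcript.
Keep summaries to the length the user asks for.
```

<p>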
Now the trick is to also configure Goose in the container to use all these services. Remember we copied the config.yaml into the container? This is that file\u2019s job.\u00a0<\/p>\n<p>First, we configure the extensions:\u00a0<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\nextensions:<br \/>\n developer:<br \/>\n   display_name: null<br \/>\n   enabled: true<br \/>\n   name: developer<br \/>\n   timeout: null<br \/>\n   type: builtin<br \/>\n mcpgateway:<br \/>\n   bundled: false<br \/>\n   description: 'Docker MCP gateway'<br \/>\n   enabled: true<br \/>\n   name: mcpgateway<br \/>\n   timeout: 300<br \/>\n   type: sse<br \/>\n   uri: http:\/\/mcp-gateway:8811\/sse\n<\/div>\n<p>The mcpgateway extension will connect to the <em>mcp-gateway:8811\/sse<\/em> URL, which, according to the compose file, is where the MCP gateway will be running. The developer extension is built in with some useful tools, and it also enables .goosehints support for us.\u00a0<\/p>\n<p>The only thing left is to connect the brains:\u00a0<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\nGOOSE_PROVIDER: openai<br \/>\nOPENAI_BASE_PATH: engines\/llama.cpp\/v1\/chat\/completions<br \/>\nOPENAI_HOST: http:\/\/model-runner.docker.internal\n<\/div>\n<p>We configure Goose to connect to the OpenAI-compatible API endpoint that Docker Model Runner exposes. 
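<\/p>
<p>To make the wiring concrete, here is a minimal sketch (not part of the project) of the kind of request Goose ends up sending to that endpoint, using only the Python standard library. The helper names and the example prompt are made up for illustration:<\/p>

```python
import json
import urllib.request

# Build the JSON body for an OpenAI-compatible /chat/completions call,
# the same shape of request Goose sends to Docker Model Runner.
def build_chat_request(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def post_chat(url: str, payload: dict) -> dict:
    # From inside a container, url would be OPENAI_HOST plus
    # OPENAI_BASE_PATH from the config above (an assumption based on
    # that config; the host-side port differs).
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request(
    "hf.co/unsloth/qwen3-30b-a3b-instruct-2507-gguf:q5_k_m",
    "Summarize this transcript in 5 sentences.",
)
```

<p>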
Note that since we\u2019re running Goose in a container, we don\u2019t go via the host TCP connection (the localhost:12434 you may have seen in other tutorials), but via the Docker VM internal URL: model-runner.docker.internal.<\/p>\n<p>That\u2019s it!\u00a0<\/p>\n<p>Well, if you want to show off the cool agent you built to a friend, you can also include compose-cloudflare.yml in the setup, which will create a web tunnel from a random Cloudflare URL to port 7681 on your local hani container, where ttyd is running:\u00a0<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n  cloudflared:<br \/>\n   image: cloudflare\/cloudflared<br \/>\n   command: tunnel --url hani:7681<br \/>\n   depends_on:<br \/>\n    - hani\n<\/div>\n<p>If you have Docker Desktop with Docker Model Runner enabled, you can now run the whole setup with a single compose command.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\ndocker compose up --build\n<\/div>\n<p>or, if you want to include the tunnel and expose your Goose to the internet:\u00a0<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\ndocker compose -f compose.yml -f compose-cloudflare.yml up --build\n<\/div>\n<p>Now opening <a href=\"http:\/\/localhost:7681\/\" target=\"_blank\">http:\/\/localhost:7681<\/a> (or the Cloudflare URL the container prints in the logs) will give you the Goose session in the browser:\u00a0<\/p>\n<div class=\"wp-block-ponyo-image\"><\/div>\n<p>And it can use tools; for example, if you ask it something like:\u00a0<\/p>\n<p><em>what is this video about: https:\/\/youtu.be\/X0PaVrpFD14? 
answer in 5 sentences<\/em><\/p>\n<p>You can see a tool call, and a sensible answer based on the transcript of the video:\u00a0<\/p>\n<div class=\"wp-block-ponyo-image\"><\/div>\n<p>One of the best things about this setup is that the architecture is modular and designed for extension:<\/p>\n<p><strong>Model Swapping<\/strong>: The LLM can be changed by pointing the model definition in the compose.yml to any other GGUF model available on Docker Hub or Hugging Face.<\/p>\n<p><strong>Adding Tools<\/strong>: New capabilities can be added by defining additional servers for the MCP gateway, or by wiring up standalone MCP servers and editing the Goose config.\u00a0<\/p>\n<p><strong>Adding business logic<\/strong> is just a matter of editing the .goosehints file and rerunning the setup. Everything is in containers, so everything is contained and ephemeral.\u00a0<\/p>\n<p><strong>Agent framework<\/strong>: A similar setup can be reconfigured to run other agentic frameworks (e.g., LangGraph, CrewAI) that work with an OpenAI-compatible API, as the underlying platform (DMR, MCP Gateway, Compose) is framework-agnostic.<\/p>\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n<p>In this article, we looked at how to build a private AI agent running locally in Docker containers in the most straightforward way, integrating the Goose AI assistant, the Docker MCP Gateway, and local AI models served by Docker Model Runner.\u00a0<\/p>\n<p>All these technologies are open source, so the recipe can easily be reused for creating your own workflow agents. While the sample agent doesn\u2019t do anything particularly useful, and its functionality is limited to chatting and summarizing YouTube videos, it\u2019s a minimal enough starting point that you can take it in any direction.\u00a0<\/p>\n<p>Clone the repo, edit the .goosehints file, add your favorite MCP servers to the config, run docker compose up, and you\u2019re good to go.\u00a0<\/p>\n<p>Which tasks are you building agents for? 
Tell me, I\u2019d love to know: <a href=\"https:\/\/www.linkedin.com\/in\/shelajev\/\" target=\"_blank\">https:\/\/www.linkedin.com\/in\/shelajev\/<\/a>.\u00a0<\/p>","protected":false},"excerpt":{"rendered":"<p>Building AI agents can be a complex task. But it also can be a fairly simple combination of answers to [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[4],"tags":[],"class_list":["post-2359","post","type-post","status-publish","format-standard","hentry","category-docker"],"_links":{"self":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/2359","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/comments?post=2359"}],"version-history":[{"count":0,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/2359\/revisions"}],"wp:attachment":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media?parent=2359"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/categories?post=2359"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/tags?post=2359"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}