{"id":2242,"date":"2025-07-14T16:25:04","date_gmt":"2025-07-14T16:25:04","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/07\/14\/ai-powered-testing-using-docker-model-runner-with-microcks-for-dynamic-mock-apis\/"},"modified":"2025-07-14T16:25:04","modified_gmt":"2025-07-14T16:25:04","slug":"ai-powered-testing-using-docker-model-runner-with-microcks-for-dynamic-mock-apis","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/07\/14\/ai-powered-testing-using-docker-model-runner-with-microcks-for-dynamic-mock-apis\/","title":{"rendered":"AI-Powered Testing: Using Docker Model Runner with Microcks for Dynamic Mock APIs"},"content":{"rendered":"<p>The non-deterministic nature of LLMs makes them ideal for generating dynamic, rich test data, perfect for validating app behavior and ensuring consistent, high-quality user experiences. Today, we\u2019ll walk you through how to use <a href=\"https:\/\/www.docker.com\/products\/model-runner\/\">Docker\u2019s Model Runner<\/a> with Microcks to generate dynamic mock APIs for testing your applications.<\/p>\n<p><a href=\"https:\/\/microcks.io\/\" target=\"_blank\">Microcks is a powerful CNCF tool<\/a> that allows developers to quickly spin up mock services for development and testing. By providing predefined mock responses or generating them directly from an OpenAPI schema, you can point your applications to consume these mocks instead of hitting real APIs, enabling efficient and safe testing environments.<\/p>\n<p>Docker Model Runner is a convenient way to <a href=\"https:\/\/www.docker.com\/blog\/run-llms-locally\/\">run LLMs locally<\/a> within your Docker Desktop. It provides an OpenAI-compatible API, allowing you to integrate sophisticated AI capabilities into your projects seamlessly, using local hardware resources.<\/p>\n<p>By integrating Microcks with Docker Model Runner, you can enrich your mock APIs with AI-generated responses, creating realistic and varied data that is less rigid than static examples.<\/p>\n<p>In this guide, we\u2019ll explore how to set up these two tools together, giving you the benefits of dynamic mock generation powered by local AI.<\/p>\n<h2 class=\"wp-block-heading\"><strong>Setting up Docker Model Runner<\/strong><\/h2>\n<p>To start, ensure you\u2019ve enabled Docker Model Runner as described in our <a href=\"https:\/\/www.docker.com\/blog\/building-an-ai-assistant-with-goose-and-docker-model-runner\/\">previous blog<\/a> on configuring Goose for a local AI assistant setup. Next, select and pull your desired <a href=\"https:\/\/hub.docker.com\/catalogs\/models\" target=\"_blank\">LLM model<\/a> from Docker Hub. 
## Configuring Microcks with Docker Model Runner

First, clone the Microcks repository:

```
git clone https://github.com/microcks/microcks --depth 1
```

Navigate to the Docker Compose setup directory:

```
cd microcks/install/docker-compose
```

You'll need to adjust some configuration to enable the AI Copilot feature within Microcks. In the `/config/application.properties` file, configure the AI Copilot to use Docker Model Runner:

```
ai-copilot.enabled=true
ai-copilot.implementation=openai
ai-copilot.openai.api-key=irrelevant
ai-copilot.openai.api-url=http://model-runner.docker.internal:80/engines/llama.cpp/
ai-copilot.openai.timeout=600
ai-copilot.openai.maxTokens=10000
ai-copilot.openai.model=ai/qwen3:8B-Q4_0
```

We're using `model-runner.docker.internal:80` as the base URL for the OpenAI-compatible API. Docker Model Runner is available at this address from containers running in Docker Desktop. Using it ensures direct communication between the containers and the Model Runner, avoiding unnecessary round-trips through the host machine's ports.
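If you want to confirm that this internal endpoint is reachable from a container before starting Microcks, a quick check like the following should list the available models (a sketch using the `curlimages/curl` image; `/v1/models` is the standard OpenAI-compatible model-listing path):

```
docker run --rm curlimages/curl -s \
  http://model-runner.docker.internal/engines/llama.cpp/v1/models
```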
Next, enable the copilot feature itself by adding this line to the Microcks `config/features.properties` file:

```
features.feature.ai-copilot.enabled=true
```

## Running Microcks

Start Microcks with Docker Compose in development mode:

```
docker-compose -f docker-compose-devmode.yml up
```

Once up, access the Microcks UI at [http://localhost:8080](http://localhost:8080/).

Install the example API for testing by clicking through these buttons on the Microcks page: Microcks Hub → MicrocksIO Samples APIs → pastry-api-openapi v.2.0.0 → Install → Direct import → Go.

*Figure 1: Screenshot of the Pastry API 2.0 page on Microcks Hub with the option to install.*

## Using AI Copilot samples

Within the Microcks UI, navigate to the service page of the imported API and select an operation you'd like to enhance. Open the "AI Copilot Samples" dialog, which prompts Microcks to query the configured LLM via Docker Model Runner.

*Figure 2: The "AI Copilot Samples" dialog inside Microcks.*

You may notice increased GPU activity as the model processes your request.

After processing, the AI-generated mock responses are displayed, ready to be reviewed or added directly to your mocked operations.

*Figure 3: Mocked data generated within the AI Copilot Suggested Samples on Microcks.*

You can easily test the generated mocks with a simple curl command. For example:

```
curl -X PATCH 'http://localhost:8080/rest/API+Pastry+-+2.0/2.0.0/pastry/Chocolate+Cake' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"status":"out_of_stock"}'

{
  "name" : "Chocolate Cake",
  "description" : "Rich chocolate cake with vanilla frosting",
  "size" : "L",
  "price" : 12.99,
  "status" : "out_of_stock"
}
```

This returns a realistic, AI-generated response that enhances the quality and reliability of your test data.

You can now use this approach in your tests: for example, in a shopping cart application that depends on an inventory service. With realistic yet randomized mocked data, you can cover more application behaviors with the same set of tests (see the test sketch below). For better reproducibility, you can also specify the Docker Model Runner dependency and the chosen model explicitly in your compose.yml:

```
models:
  qwen3:
    model: ai/qwen3:8B-Q4_0
    context_size: 8096
```

Starting the Compose setup will then pull the model and wait for it to become available, the same way it does for containers.
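As an illustration of wiring the mock into a test suite, here's a minimal smoke-test sketch. It assumes the pastry mock URL from the curl example above and that `jq` is installed; the exact path depends on the service name and version in your Microcks instance:

```
#!/bin/sh
# Hypothetical smoke test: PATCH the mocked pastry endpoint and
# assert that the returned status matches what we sent.
MOCK_URL='http://localhost:8080/rest/API+Pastry+-+2.0/2.0.0/pastry/Chocolate+Cake'

status=$(curl -s -X PATCH "$MOCK_URL" \
  -H 'Content-Type: application/json' \
  -d '{"status":"out_of_stock"}' | jq -r '.status')

if [ "$status" = "out_of_stock" ]; then
  echo "PASS: mock returned expected status"
else
  echo "FAIL: got '$status'" >&2
  exit 1
fi
```

The same pattern scales to a CI pipeline: spin up the Compose setup, run assertions against the mock endpoints, and tear everything down afterward.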
## Conclusion

Docker Model Runner is an excellent [local resource for running LLMs](https://www.docker.com/products/model-runner/) and provides compatibility with OpenAI APIs, allowing for seamless integration into existing workflows. Tools like Microcks can leverage Model Runner to generate dynamic sample responses for mocked APIs, giving you richer, more realistic synthetic data for integration testing.

If you have local AI workflows or just run LLMs locally, please discuss with us in the [Docker Forum](https://forums.docker.com/)! We'd love to explore more local AI integrations with Docker.

### Learn more

- Get an inside look at the [design architecture of the Docker Model Runner](https://www.docker.com/blog/how-we-designed-model-runner-and-whats-next/).
- Explore the [story](https://www.docker.com/blog/oci-artifacts-for-ai-model-packaging/) behind our model distribution specification.
- Read our quickstart guide to [Docker Model Runner](https://www.docker.com/blog/run-llms-locally/).
- Find documentation for [Model Runner](https://docs.docker.com/model-runner/).
- Visit our new [AI solution page](https://www.docker.com/solutions/docker-ai/).
- Subscribe to the [Docker Navigator Newsletter](https://www.docker.com/newsletter-subscription/).
- New to Docker? [Create an account](https://hub.docker.com/signup).
- Have questions? The [Docker community is here to help](https://www.docker.com/community/).
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[4],"tags":[],"class_list":["post-2242","post","type-post","status-publish","format-standard","hentry","category-docker"],"_links":{"self":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/2242","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/comments?post=2242"}],"version-history":[{"count":0,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/2242\/revisions"}],"wp:attachment":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media?parent=2242"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/categories?post=2242"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/tags?post=2242"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}