{"id":2703,"date":"2025-11-03T13:46:32","date_gmt":"2025-11-03T13:46:32","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/11\/03\/how-to-use-multimodal-ai-models-with-docker-model-runner\/"},"modified":"2025-11-03T13:46:32","modified_gmt":"2025-11-03T13:46:32","slug":"how-to-use-multimodal-ai-models-with-docker-model-runner","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/11\/03\/how-to-use-multimodal-ai-models-with-docker-model-runner\/","title":{"rendered":"How to Use Multimodal AI Models With Docker Model Runner"},"content":{"rendered":"<p>One of the most exciting advances in modern AI is multimodal support, the ability for models to understand and generate multiple types of input, such as text, images, or audio.\u00a0<\/p>\n<p>With multimodal models, you\u2019re no longer limited to typing prompts; you can show an image or play a sound, and the model can understand it. This opens a world of new possibilities for developers building intelligent, local AI experiences.<br \/>In this post, we\u2019ll explore how to use multimodal models with <a href=\"https:\/\/docs.docker.com\/ai\/model-runner\/\" rel=\"nofollow noopener\" target=\"_blank\">Docker Model Runner<\/a>, walk through practical examples, and explain how it all works under the hood.<\/p>\n<h2 class=\"wp-block-heading\">What Is Multimodal AI?<\/h2>\n<p>Most language models only understand text, but multimodal models go further. They can analyze and combine text, image, and audio data. That means you can ask a model to:<\/p>\n<ul class=\"wp-block-list\">\n<li>Describe what\u2019s in an image<\/li>\n<li>Identify or reason about visual details<\/li>\n<li>Transcribe or summarize an audio clip<\/li>\n<\/ul>\n<p>This unlocks new ways to build AI applications that can see and listen, not just read.<\/p>\n<h2 class=\"wp-block-heading\">How to use multimodal models<\/h2>\n<p>Not every model supports multimodal inputs, so your first step is to choose one that does.<\/p>\n<p>In <a href=\"https:\/\/hub.docker.com\/u\/ai\" rel=\"nofollow noopener\" target=\"_blank\">Docker Hub<\/a> we indicate the inputs supported on each model on its model card, for example:<\/p>\n<p><a href=\"https:\/\/hub.docker.com\/r\/ai\/moondream2\" rel=\"nofollow noopener\" target=\"_blank\">Moondream2,<\/a> <a href=\"https:\/\/hub.docker.com\/r\/ai\/gemma3\" rel=\"nofollow noopener\" target=\"_blank\">Gemma3<\/a>, or <a href=\"https:\/\/hub.docker.com\/r\/ai\/smolvlm\" rel=\"nofollow noopener\" target=\"_blank\">Smolvlm<\/a> models supports text and image as input, while <a href=\"https:\/\/hub.docker.com\/r\/ai\/gpt-oss\" rel=\"nofollow noopener\" target=\"_blank\">GPT-OSS<\/a> supports text only<\/p>\n<p>The easiest way to start experimenting is with the CLI. Here\u2019s a simple example that asks a multimodal model to describe an image:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: bash; gutter: false; title: ; notranslate\">\ndocker model run gemma3 \"What's in this image? \/Users\/ilopezluna\/Documents\/something.jpg\"\nThe image shows the logo for **Docker**, a popular platform for containerization. 
<h3>Run Multimodal models from Hugging Face</h3>

<p>Thanks to our friends at Hugging Face (special shout-out to Adrien Carreria), you can also run multimodal models directly from Hugging Face in Docker Model Runner.</p>

<p>Here's an example using a model capable of audio transcription. The request embeds the audio clip as a base64-encoded string; the full payload is shortened here for readability:</p>

<pre>
curl --location 'http://localhost:12434/engines/llama.cpp/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "hf.co/ggml-org/ultravox-v0_5-llama-3_1-8b-gguf",
    "temperature": 0,
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "transcribe the audio, one word"
          },
          {
            "type": "input_audio",
            "input_audio": {
              "data": "<base64-encoded MP3 data, omitted for brevity>",
              "format": "mp3"
            }
          }
        ]
      }
    ]
  }'
</pre>
<p>This model can listen and respond, literally.</p>

<h2>Multimodal AI example: A real-time webcam vision model</h2>

<p>We have created a couple of demos in the <a href="https://github.com/docker/model-runner/tree/main/demos">Docker Model Runner repository</a>, and of course, we couldn't miss a demo based on <a href="https://github.com/ngxson/smolvlm-realtime-webcam">ngxson's example</a>:</p>

<img src="https://www.docker.com/app/uploads/2025/10/output.gif" width="766" height="980" alt="output" />

<p>You can run this same demo by following <a href="https://github.com/docker/model-runner/tree/main/demos/multimodal">the instructions</a>; spoiler alert: it's just Docker Model Runner and an HTML page.</p>

<h2>How multimodal AI works: Understanding audio and images</h2>

<p>How are these large language models able to understand images or audio? The key is something called a multimodal projector file.</p>

<p>This file acts as an adapter: a small neural network layer that converts non-text inputs (like pixels or sound waves) into a token representation the language model can understand. Think of it as a translator that turns visual or auditory information into the same kind of internal "language" used for text.</p>

<p>In simpler terms:</p>

<ul>
<li>The projector takes an image or audio input</li>
<li>It processes it into numerical embeddings (tokens)</li>
<li>The language model then interprets those tokens just like it would words in a sentence</li>
</ul>

<p>This extra layer allows a single model architecture to handle multiple input types without retraining the entire model.</p>
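<p>To make the idea concrete, here is a purely illustrative Python sketch of what a projector does conceptually. The dimensions, the random "weights", and the fake encoder outputs are all made up for illustration; the real mmproj file is a trained network used by the inference engine, not a toy linear map:</p>

<pre>
import numpy as np

rng = np.random.default_rng(0)

VISION_DIM = 1152  # size of each image-patch feature (illustrative)
MODEL_DIM = 2560   # the language model's embedding size (illustrative)

# The "projector": a learned mapping from vision features to LM embeddings.
W_proj = rng.normal(size=(VISION_DIM, MODEL_DIM)) * 0.02

def project_image(patch_features):
    """Turn N image-patch features into N pseudo-token embeddings."""
    return patch_features @ W_proj

# Pretend a vision encoder produced 64 patch features for one image...
image_patches = rng.normal(size=(64, VISION_DIM))
image_tokens = project_image(image_patches)

# ...and that the text prompt "describe the image" embeds to 4 tokens.
text_tokens = rng.normal(size=(4, MODEL_DIM))

# The language model simply sees one longer sequence of embeddings.
sequence = np.concatenate([image_tokens, text_tokens], axis=0)
print(sequence.shape)  # (68, 2560): image "tokens" followed by text tokens
</pre>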
<h3>Inspecting the Projector in OCI Artifacts</h3>

<p>In Docker Model Runner, models are <a href="https://www.docker.com/blog/oci-artifacts-for-ai-model-packaging/">packaged as OCI artifacts</a>, so everything needed to run the model locally (weights, configuration, and extra layers) is contained in a reproducible format.</p>

<p>You can actually see the multimodal projector file by inspecting the model's OCI layers. For example, take a look at <a href="https://oci.dag.dev/?image=ai%2Fgemma3">ai/gemma3</a>. You'll find a layer with the media type application/vnd.docker.ai.mmproj.</p>

<p>This layer is the multimodal projector, the component that makes the model multimodal-capable. It's what enables gemma3 to accept images as input in addition to text.</p>
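<p>If you prefer to check this from a script, the standard OCI registry HTTP API exposes the same information. The following is a sketch, assuming the requests package and anonymous pull access to Docker Hub; if the registry returns an image index rather than a manifest, one more request for the referenced manifest would be needed:</p>

<pre>
import requests

# List the layer media types of the ai/gemma3 OCI artifact on Docker Hub.
repo = "ai/gemma3"
tag = "latest"

# Docker Hub issues anonymous pull tokens via auth.docker.io.
token = requests.get(
    "https://auth.docker.io/token",
    params={"service": "registry.docker.io", "scope": f"repository:{repo}:pull"},
).json()["token"]

manifest = requests.get(
    f"https://registry-1.docker.io/v2/{repo}/manifests/{tag}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.manifest.v1+json",
    },
).json()

for layer in manifest.get("layers", []):
    print(layer.get("mediaType"), layer.get("size"))

# A multimodal model should show a layer whose media type is
# application/vnd.docker.ai.mmproj alongside the model weights layer.
</pre>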
<h3>We're Building This Together!</h3>

<p>Docker Model Runner is a community-friendly project at its core, and its future is shaped by contributors like you. If you find this tool useful, please head over to our <a href="https://github.com/docker/model-runner">GitHub repository</a>. Show your support by giving us a star, fork the project to experiment with your own ideas, and contribute. Whether it's improving documentation, fixing a bug, or adding a new feature, every contribution helps. Let's build the future of model deployment together!</p>

<h3>Learn more</h3>

<ul>
<li>Check out the Docker Model Runner General Availability <a href="https://www.docker.com/blog/announcing-docker-model-runner-ga/">announcement</a></li>
<li>Visit our <a href="https://github.com/docker/model-runner">Model Runner GitHub repo</a>! Docker Model Runner is open source, and we welcome collaboration and contributions from the community!</li>
<li>Get started with Docker Model Runner with a simple <a href="https://github.com/docker/hello-genai">hello GenAI application</a></li>
</ul>
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[4],"tags":[],"class_list":["post-2703","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-docker"],"_links":{"self":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/2703","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/comments?post=2703"}],"version-history":[{"count":0,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/2703\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media\/2704"}],"wp:attachment":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media?parent=2703"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/categories?post=2703"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/tags?post=2703"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}