{"id":3064,"date":"2025-12-16T14:03:55","date_gmt":"2025-12-16T14:03:55","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/12\/16\/develop-and-deploy-voice-ai-apps-using-docker\/"},"modified":"2025-12-16T14:03:55","modified_gmt":"2025-12-16T14:03:55","slug":"develop-and-deploy-voice-ai-apps-using-docker","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/12\/16\/develop-and-deploy-voice-ai-apps-using-docker\/","title":{"rendered":"Develop and deploy voice AI apps using Docker"},"content":{"rendered":"<p>Voice is the next frontier of conversational AI. It is the most natural modality for people to chat and interact with another intelligent being. However, the voice AI software stack is complex, with many moving parts. Docker has emerged as one of the most useful tools for AI agent deployment.<\/p>\n<p>In this article, we\u2019ll explore how to use open-source technologies and Docker to create voice AI agents that utilize your custom knowledge base, voice style, actions, fine-tuned AI models, and run on your own computer. It is based on a<a href=\"https:\/\/youtu.be\/Z3zBZemK7_s\" rel=\"nofollow noopener\" target=\"_blank\"> talk I recently gave at the Docker Captains Summit in Istanbul<\/a>.<\/p>\n<h2 class=\"wp-block-heading\"><strong>Docker and AI<\/strong><\/h2>\n<p>Most developers consider Docker the \u201ccontainer store\u201d for software. The Docker container provides a reliable and reproducible environment for developing software locally on your own machine and then shipping it to the cloud. It also provides a safe sandbox to isolate, run, and scale user-submitted software in the cloud. For complex AI applications, Docker provides a suite of tools that makes it easy for developers and platform engineers to build and deploy.<\/p>\n<ul class=\"wp-block-list\">\n<li>The Docker container is a great tool for running software components and functions in an AI agent system. 
It can run web servers, API servers, workflow orchestrators, LLM actions or tool calls, code interpreters, simulated web browsers, search engines, and vector databases.<\/li>\n<li>With the<a href=\"https:\/\/github.com\/NVIDIA\/nvidia-container-toolkit\" rel=\"nofollow noopener\" target=\"_blank\"> NVIDIA Container Toolkit<\/a>, you can access the host machine\u2019s GPU from inside Docker containers, enabling you to run inference applications such as<a href=\"https:\/\/llamaedge.com\/docs\/ai-models\/\" rel=\"nofollow noopener\" target=\"_blank\"> LlamaEdge<\/a> that serve open-source AI models inside the container.<\/li>\n<li>The<a href=\"https:\/\/docs.docker.com\/ai\/model-runner\/\" rel=\"nofollow noopener\" target=\"_blank\"> Docker Model Runner<\/a> runs OpenAI-compatible API servers for open-source LLMs locally on your own computer.<\/li>\n<li>The<a href=\"https:\/\/docs.docker.com\/ai\/mcp-catalog-and-toolkit\/toolkit\/\" rel=\"nofollow noopener\" target=\"_blank\"> Docker MCP Toolkit<\/a> provides an easy way to run MCP servers in containers and make them available to AI agents.<\/li>\n<\/ul>\n<p>The<a href=\"https:\/\/echokit.dev\/docs\/intro\/\" rel=\"nofollow noopener\" target=\"_blank\"> EchoKit platform<\/a> provides a set of Docker images and utilizes Docker tools to simplify the deployment of complex AI workflows.<\/p>\n<div class=\"wp-block-ponyo-image\">\n                <img data-opt-id=1279810236  fetchpriority=\"high\" decoding=\"async\" width=\"1021\" height=\"411\" src=\"https:\/\/www.docker.com\/app\/uploads\/2025\/12\/image2-3.png\" class=\"fade-in attachment-full size-full\" alt=\"image2 3\" title=\"- image2 3\" \/>\n        <\/div>\n<h2 class=\"wp-block-heading\"><strong>EchoKit<\/strong><\/h2>\n<p>The EchoKit consists of a<a href=\"https:\/\/github.com\/second-state\/echokit_server\" rel=\"nofollow noopener\" target=\"_blank\"> server<\/a> and a<a href=\"https:\/\/github.com\/second-state\/echokit_box\" rel=\"nofollow noopener\" 
target=\"_blank\"> client<\/a>. The client could be an ESP32-based hardware device that listens for the user\u2019s voice through a microphone, streams the voice data to the server, and receives and plays the server\u2019s voice response through a speaker. EchoKit provides the device hardware specifications and firmware under open-source licenses. To see it in action, check out the following video demos.<\/p>\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/youtu.be\/UiCBTA-C59w\" rel=\"nofollow noopener\" target=\"_blank\">EchoKit tells the story about the Diana exhibit at the MET museum<\/a><\/li>\n<li><a href=\"https:\/\/youtu.be\/XroT7a0DLkw\" rel=\"nofollow noopener\" target=\"_blank\">EchoKit recommends BBQ in a Texas accent<\/a><\/li>\n<li><a href=\"https:\/\/youtu.be\/Zy-rLT4EgZQ\" rel=\"nofollow noopener\" target=\"_blank\">EchoKit helps a user practice for the US Civics test<\/a><\/li>\n<\/ul>\n<p>You can check out the <a href=\"https:\/\/github.com\/second-state\/echokit_server\" rel=\"nofollow noopener\" target=\"_blank\">GitHub repo<\/a> for EchoKit.<\/p>\n<h2 class=\"wp-block-heading\"><strong>The AI agent orchestrator<\/strong><\/h2>\n<p>The<a href=\"https:\/\/github.com\/second-state\/echokit_server\" rel=\"nofollow noopener\" target=\"_blank\"> EchoKit server<\/a> is an open-source AI service orchestrator focused on real-time voice use cases. It starts up a WebSocket server that listens for streaming audio input and returns streaming audio responses. It ties together multiple AI models, including voice activity detection (VAD), automatic speech recognition (ASR), large language models (LLM), and text-to-speech (TTS), using one model\u2019s output as the input for the next model.<\/p>\n<p>You can start an EchoKit server on your local computer and configure the EchoKit device to access it over the local WiFi network. 
The \u201cedge server\u201d setup reduces network latency, which is crucial for voice AI applications.<\/p>\n<p>The EchoKit team publishes a<a href=\"https:\/\/github.com\/second-state\/echokit_server\/blob\/main\/docker\/server\/Dockerfile\" rel=\"nofollow noopener\" target=\"_blank\"> multi-platform Docker image<\/a> that you can use directly to start an EchoKit server. The following command starts the EchoKit server with your own <code>config.toml<\/code> file and runs it in the background.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\ndocker run --rm \\\n  -p 8080:8080 \\\n  -v $(pwd)\/config.toml:\/app\/config.toml \\\n  secondstate\/echokit:latest-server &amp;\n<\/pre>\n<\/div>\n<p>The <code>config.toml<\/code> file is mapped into the container to configure how the EchoKit server utilizes various AI services in its voice response workflow. The following is an example of <code>config.toml<\/code>. It starts the WebSocket server on port 8080, which is why the Docker command maps the container\u2019s port 8080 to the same port on the host. That allows the EchoKit server to be accessible through the host computer\u2019s IP address. 
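<\/p>
<p>If you are not sure what the host computer\u2019s LAN address is, the following Python sketch looks it up. This is an illustrative helper, not part of EchoKit; the UDP \u201cconnect\u201d sends no packets and only asks the OS which local interface it would route through.<\/p>

```python
import socket

def local_ip() -> str:
    """Best-effort LAN IP of this host (illustrative helper, not part of EchoKit)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Connecting a UDP socket sends no packets; it only selects a route.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        # No default route: fall back to resolving the hostname, then loopback.
        try:
            return socket.gethostbyname(socket.gethostname())
        except OSError:
            return "127.0.0.1"
    finally:
        s.close()

print(local_ip())  # the address to use when configuring the EchoKit device
```

<p>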
The rest of the <code>config.toml<\/code> specifies how to access the ASR, LLM, and TTS models to generate a voice response for the input voice data.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\naddr = \"0.0.0.0:8080\"\nhello_wav = \"hello.wav\"\n\n[asr]\nplatform = \"openai\"\nurl = \"https:\/\/api.groq.com\/openai\/v1\/audio\/transcriptions\"\napi_key = \"gsk_XYZ\"\nmodel = \"whisper-large-v3\"\nlang = \"en\"\nprompt = \"Hello\\n\u4f60\u597d\\n(noise)\\n(bgm)\\n(silence)\\n\"\n\n[llm]\nplatform = \"openai_chat\"\nurl = \"https:\/\/api.groq.com\/openai\/v1\/chat\/completions\"\napi_key = \"gsk_XYZ\"\nmodel = \"openai\/gpt-oss-20b\"\nhistory = 20\n\n[tts]\nplatform = \"elevenlabs\"\nurl = \"wss:\/\/api.elevenlabs.io\/v1\/text-to-speech\/\"\ntoken = \"sk_xyz\"\nvoice = \"VOICE-ID-ABCD\"\n\n[[llm.sys_prompts]]\nrole = \"system\"\ncontent = \"\"\"\nYou are a comedian. Engage in lighthearted and humorous conversation with the user. Tell jokes when appropriate.\n\n\"\"\"\n<\/pre>\n<\/div>\n<p>The AI services configured for the above EchoKit server are as follows.<\/p>\n<ul class=\"wp-block-list\">\n<li>It utilizes Groq for ASR (voice-to-text) and LLM tasks. You will need to fill in your own<a href=\"https:\/\/console.groq.com\/keys\" rel=\"nofollow noopener\" target=\"_blank\"> Groq API key<\/a>.<\/li>\n<li>It utilizes ElevenLabs for streaming TTS (text-to-speech). 
You will need to fill in your own<a href=\"https:\/\/elevenlabs.io\/app\/sign-in?redirect=%2Fapp%2Fdevelopers%2Fapi-keys\" rel=\"nofollow noopener\" target=\"_blank\"> ElevenLabs API key<\/a>.<\/li>\n<\/ul>\n<p>Then, in the<a href=\"https:\/\/echokit.dev\/docs\/quick-start\/\" rel=\"nofollow noopener\" target=\"_blank\"> EchoKit device setup<\/a>, you just need to point your device to the local EchoKit server.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\nws:\/\/local-network-ip.address:8080\/ws\/\n<\/pre>\n<\/div>\n<p>For more options on the EchoKit server configuration, please refer to our documentation!<\/p>\n<h2 class=\"wp-block-heading\"><strong>The VAD server<\/strong><\/h2>\n<p>The voice-to-text ASR is not sufficient by itself. It could hallucinate and generate nonsensical text if the input audio is not human speech (e.g., background noise, street noise, or music). Nor can it tell when the user has finished speaking, which is the moment the EchoKit server needs to ask the LLM to start generating a response.<\/p>\n<p>A VAD model is used to detect human voice and conversation turns in the voice stream. The EchoKit team has a<a href=\"https:\/\/github.com\/second-state\/echokit_server\/blob\/main\/docker\/server-vad\/Dockerfile\" rel=\"nofollow noopener\" target=\"_blank\"> multi-platform Docker image<\/a> that incorporates the open-source<a href=\"https:\/\/github.com\/second-state\/silero_vad_server\" rel=\"nofollow noopener\" target=\"_blank\"> Silero VAD model<\/a>. The image is much larger than the plain EchoKit server image, and it requires more CPU resources to run, but it delivers substantially better voice recognition results. 
Here is the Docker command to start the EchoKit server with VAD in the background.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\ndocker run --rm \\\n  -p 8080:8080 \\\n  -v $(pwd)\/config.toml:\/app\/config.toml \\\n  secondstate\/echokit:latest-server-vad &amp;\n<\/pre>\n<\/div>\n<p>The <code>config.toml<\/code> file for this Docker container also needs an additional line in the ASR section, so that the EchoKit server knows to stream incoming audio data to the local VAD service and act on the VAD signals. The Docker container runs the Silero VAD model as a separate service inside the container, and the EchoKit server reaches it over <code>localhost<\/code>, so there is no need to expose the VAD service\u2019s port to the host.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\naddr = \"0.0.0.0:8080\"\nhello_wav = \"hello.wav\"\n\n[asr]\nplatform = \"openai\"\nurl = \"https:\/\/api.groq.com\/openai\/v1\/audio\/transcriptions\"\napi_key = \"gsk_XYZ\"\nmodel = \"whisper-large-v3\"\nlang = \"en\"\nprompt = \"Hello\\n\u4f60\u597d\\n(noise)\\n(bgm)\\n(silence)\\n\"\nvad_url = \"http:\/\/localhost:9093\/v1\/audio\/vad\"\n\n[llm]\nplatform = \"openai_chat\"\nurl = \"https:\/\/api.groq.com\/openai\/v1\/chat\/completions\"\napi_key = \"gsk_XYZ\"\nmodel = \"openai\/gpt-oss-20b\"\nhistory = 20\n\n[tts]\nplatform = \"elevenlabs\"\nurl = \"wss:\/\/api.elevenlabs.io\/v1\/text-to-speech\/\"\ntoken = \"sk_xyz\"\nvoice = \"VOICE-ID-ABCD\"\n\n[[llm.sys_prompts]]\nrole = \"system\"\ncontent = \"\"\"\nYou are a comedian. Engage in lighthearted and humorous conversation with the user. Tell jokes when appropriate.\n\n\"\"\"\n<\/pre>\n<\/div>\n<p>We recommend using the VAD-enabled EchoKit server whenever possible.<\/p>\n<h2 class=\"wp-block-heading\"><strong>MCP services<\/strong><\/h2>\n<p>A key feature of AI agents is to perform actions, such as making web-based API calls, on behalf of LLMs. 
For example, the<a href=\"https:\/\/www.youtube.com\/watch?v=Zy-rLT4EgZQ\" rel=\"nofollow noopener\" target=\"_blank\"> \u201cUS civics test prep\u201d<\/a> example for EchoKit requires the agent to get exam questions from a database, and then generate responses that guide the user toward the official answer.<\/p>\n<p>The MCP protocol is the industry standard for providing tools (function calls) to LLM agents. For example, the<a href=\"https:\/\/hub.docker.com\/r\/mcp\/duckduckgo\" rel=\"nofollow noopener\" target=\"_blank\"> DuckDuckGo MCP server<\/a> provides a search tool for LLMs to search the internet if the user asks for current information that is not available in the LLM\u2019s pre-training data. The<a href=\"https:\/\/docs.docker.com\/ai\/mcp-catalog-and-toolkit\/toolkit\/\" rel=\"nofollow noopener\" target=\"_blank\"> Docker MCP Toolkit<\/a> provides a set of tools that make it easy to run MCP servers that can be utilized by EchoKit.<\/p>\n<div class=\"wp-block-ponyo-image\">\n                <img data-opt-id=1395288326  fetchpriority=\"high\" decoding=\"async\" width=\"1999\" height=\"979\" src=\"https:\/\/www.docker.com\/app\/uploads\/2025\/12\/image1-4.png\" class=\"fade-in attachment-full size-full\" alt=\"image1 4\" title=\"- image1 4\" \/>\n        <\/div>\n\n<p>The command below starts a Docker MCP gateway server. The MCP protocol defines several ways for agents or LLMs to access MCP tools. Our gateway server is accessible through the streaming HTTP protocol at port 8011.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\ndocker mcp gateway run --port 8011 --transport streaming\n<\/pre>\n<\/div>\n<p>Next, you can add the DuckDuckGo MCP server to the gateway. 
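<\/p>
<p>For the curious, the traffic on that gateway endpoint consists of JSON-RPC 2.0 messages, which is what the MCP specification is built on. The Python sketch below shows roughly what a <code>tools\/call<\/code> request looks like; the tool name and arguments are illustrative, and a real session also performs an initialization handshake first.<\/p>

```python
import json

def tool_call_request(call_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message (illustrative)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# A hypothetical search invocation, similar to what a current-events
# question would trigger once the DuckDuckGo server is enabled:
print(tool_call_request(1, "search", {"query": "latest Tesla stock price"}))
```

<p>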
The search tool provided by the DuckDuckGo MCP server is now available on HTTP port 8011.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\ndocker mcp server enable duckduckgo\n<\/pre>\n<\/div>\n<p>You can simply configure the EchoKit server to use the DuckDuckGo MCP tools in the <code>config.toml<\/code> file.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\n[[llm.mcp_server]]\nserver = \"http:\/\/localhost:8011\/mcp\"\ntype = \"http_streamable\"\ncall_mcp_message = \"Please hold on a few seconds while I am searching for an answer!\"\n\n<\/pre>\n<\/div>\n<p>Now, when you ask EchoKit a current event question, such as \u201cWhat is the latest Tesla stock price?\u201d, it will first call the DuckDuckGo MCP\u2019s search tool to retrieve this information and then respond to the user.<\/p>\n<p>The <code>call_mcp_message<\/code> field is a message the EchoKit device will read aloud when the server calls the MCP tool. It is needed since the MCP tool call could introduce significant latency in the response.<\/p>\n<h2 class=\"wp-block-heading\"><strong>Docker Model Runner<\/strong><\/h2>\n<p>The EchoKit server orchestrates multiple AI services. In the examples in this article so far, the EchoKit server is configured to use cloud-based AI services, such as Groq and ElevenLabs. However, many applications\u2014especially in the voice AI area\u2014require the AI models to run locally or on-premises for security, cost, and performance reasons.<\/p>\n<p><a href=\"https:\/\/docs.docker.com\/ai\/model-runner\/\" rel=\"nofollow noopener\" target=\"_blank\">Docker Model Runner<\/a> is Docker\u2019s solution to run LLMs locally. 
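<\/p>
<p>Because the endpoint is OpenAI-compatible, any OpenAI-style client code can talk to a locally served model without changes. Below is a minimal Python sketch; the URL and model name match the Model Runner defaults used in this article, while the helper functions themselves are hypothetical. The HTTP call itself only works once a model is being served.<\/p>

```python
import json
import urllib.request

MODEL_RUNNER_URL = "http://localhost:12434/engines/llama.cpp/v1/chat/completions"

def chat_payload(model: str, user_text: str) -> dict:
    """Build an OpenAI-style chat completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }

def ask(url: str, payload: dict) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# To query the model once it is served locally (e.g. via `docker model run ai/gpt-oss`):
#   print(ask(MODEL_RUNNER_URL, chat_payload("ai/gpt-oss", "Tell me a joke.")))
```

<p>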
For example, the following command downloads and starts OpenAI\u2019s open-source <code>gpt-oss-20b<\/code> model on your computer.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\ndocker model run ai\/gpt-oss\n<\/pre>\n<\/div>\n<p>The Docker Model Runner starts an OpenAI-compatible API server at port 12434. It can be used directly by the EchoKit server via <code>config.toml<\/code>.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n<pre class=\"brush: plain; gutter: false; title: ; notranslate\">\n[llm]\nplatform = \"openai_chat\"\nurl = \"http:\/\/localhost:12434\/engines\/llama.cpp\/v1\/chat\/completions\"\nmodel = \"ai\/gpt-oss\"\nhistory = 20\n<\/pre>\n<\/div>\n<p>At the time of this writing, the Docker Model Runner only supports LLMs. The EchoKit server still relies on cloud services, or local AI solutions such as<a href=\"https:\/\/llamaedge.com\/docs\/ai-models\/\" rel=\"nofollow noopener\" target=\"_blank\"> LlamaEdge<\/a>, for other types of AI services.<\/p>\n<h2 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h2>\n<p>The complexity of the AI agent software stack has created new challenges in software deployment and security. Docker is a proven and extremely reliable tool for delivering software to production. Docker images are repeatable and cross-platform deployment packages. 
The Docker container isolates software execution to eliminate large categories of security issues.<\/p>\n<p>With new AI tools, such as the Docker Model Runner and MCP Toolkit, Docker continues to address emerging challenges in AI portability, discoverability, and security.<\/p>\n<p>The easiest, most reliable, and most secure way to set up your own EchoKit servers is to use Docker.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Learn more<\/strong><\/h3>\n<ul class=\"wp-block-list\">\n<li>Check out the <a href=\"https:\/\/github.com\/second-state\/echokit_server\" rel=\"nofollow noopener\" target=\"_blank\">GitHub repo<\/a> for EchoKit<\/li>\n<li><a href=\"https:\/\/hub.docker.com\/mcp\" rel=\"nofollow noopener\" target=\"_blank\">Explore the MCP Catalog<\/a>: Discover containerized, security-hardened MCP servers.<\/li>\n<li><a href=\"https:\/\/hub.docker.com\/open-desktop?url=https:\/\/open.docker.com\/dashboard\/mcp\" rel=\"nofollow noopener\" target=\"_blank\">Get started with the MCP Toolkit<\/a>: Run MCP servers easily and securely.<\/li>\n<li>Visit our<a href=\"https:\/\/github.com\/docker\/model-runner\" rel=\"nofollow noopener\" target=\"_blank\"> Model Runner GitHub repo<\/a>! Docker Model Runner is open-source, and we welcome collaboration and contributions from the community!<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>Voice is the next frontier of conversational AI. 
It is the most natural modality for people to chat and interact [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3065,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[4],"tags":[],"class_list":["post-3064","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-docker"],"_links":{"self":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3064","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/comments?post=3064"}],"version-history":[{"count":0,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3064\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media\/3065"}],"wp:attachment":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media?parent=3064"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/categories?post=3064"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/tags?post=3064"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}