# Beyond the Chatbot: Event-Driven Agents in Action

*Published July 28, 2025*

Docker recently completed an internal 24-hour hackathon that had a fairly simple goal: create an agent that helps you be more productive.

As I thought about this topic, I realized I didn't want to spend more time in a chat interface. Why can't I create a fully automated agent that doesn't need a human to trigger the workflow? At the end of the day, **agents can be triggered by machine-generated input.**

In this post, we'll build an event-driven application with agentic AI. The agent will respond to GitHub webhooks and determine whether a PR should be automatically closed. I'll walk you through the entire process, from planning to coding, including why we're using the Gemma3 and Qwen3 models, hooking up the GitHub MCP server with the new [Docker MCP Gateway](https://www.docker.com/blog/docker-mcp-gateway-secure-infrastructure-for-agentic-ai/), and choosing the Mastra agentic framework.

## The problem space

Docker has a lot of repositories used for sample applications, tutorials, and workshops.
These are carefully crafted to help students learn various aspects of Docker, such as [writing their first Dockerfile](https://docs.docker.com/get-started/docker-concepts/building-images/writing-a-dockerfile/), [building agentic applications](https://docs.docker.com/guides/agentic-ai/), and more.

Occasionally, we'll get pull requests from new Docker users that include the new Dockerfile they've created or the application updates they've made.

*Sample pull request in which a user submitted the update they made to their website while completing the tutorial*

Although we're excited they've completed the tutorial and want to show off their work, we can't accept the pull request because it would impact the next person's ability to complete the work.

Recognizing that many of these PRs come from brand-new developers, we want to write a nice comment letting them know we can't accept the PR while encouraging them to keep learning.

While this doesn't take a significant amount of time, it does feel like a good candidate for automation. We can respond more quickly and keep the PR queues focused on actual improvements to the materials.

## The plan to automate

**The goal:** Use an agent to analyze the PR, detect whether it appears to be an "I completed the tutorial" submission, generate a comment, and auto-close the PR. And can we automate the entire process?

Fortunately, GitHub provides webhooks that fire when a new PR is opened.

As I broke down the problem, I identified three tasks that need to be completed:

**Analyze the PR** – look at the contents of the PR and possibly expand into the contents of the repo (what's the tutorial actually about?). Determine if the PR should be closed.

**Generate a comment** – generate a comment indicating the PR is going to be closed, provide encouragement, and thank them for their contribution.

**Post the comment and close the PR** – do the actual posting of the comment and closing of the PR.

With this breakdown, I needed an agentic application architecture that looked like this:

*Architecture diagram showing the flow of the app: a PR opened in GitHub triggers a webhook that is received by the agentic application, which delegates the work to three sub-agents*

## Building an event-driven application with agentic AI

The first thing I did was pick an agentic framework. I landed on [Mastra.ai](http://mastra.ai/), a TypeScript-based framework that supports multi-agent flows, conditional workflows, and more. I chose it because I'm most comfortable with JavaScript and was intrigued by the features the framework provides.

### 1. Select the right agent tools

After choosing the framework, I chose the tools the agents would need. Since this project involves analyzing and working with GitHub, I chose the [GitHub Official MCP server](https://hub.docker.com/mcp/server/github-official/).

The newly released [Docker MCP Gateway](https://github.com/docker/mcp-gateway) made it easy to plug it into my Compose file.
Since the GitHub MCP server has over 70 tools, I filtered the exposed tools down to only those I needed, reducing the required context size and increasing speed.

```yaml
services:
  mcp-gateway:
    image: docker/mcp-gateway:latest
    command:
      - --transport=sse
      - --servers=github-official
      - --tools=get_commit,get_pull_request,get_pull_request_diff,get_pull_request_files,get_file_contents,add_issue_comment,get_issue_comments,update_pull_request
    use_api_socket: true
    ports:
      - 8811:8811
    secrets:
      - mcp_secret
secrets:
  mcp_secret:
    file: .env
```

The `.env` file provides the GitHub Personal Access Token required to access the APIs:

```
github.personal_access_token=personal_access_token_here
```

### 2. Choose and add your AI models

Next, I needed to pick models. Since I had three agents, I could theoretically pick three different models. But I also wanted to reduce model swapping where possible while keeping performance as quick as possible.
I experimented with a few different approaches but landed on the following:

- **PR analyzer** – [ai/qwen3](https://hub.docker.com/r/ai/qwen3) – I wanted a model that could do more reasoning and perform multiple steps to gather the context it needed.
- **Comment generator** – [ai/gemma3](http://hub.docker.com/r/ai/gemma3) – the Gemma3 models are great at text generation and run quite quickly.
- **PR executor** – [ai/qwen3](https://hub.docker.com/r/ai/qwen3) – in my experiments, the Qwen models did best with the multiple steps needed to post the comment and close the PR.

I updated my Compose file with the following configuration to define the models. I gave the Qwen3 model an increased context size to leave more room for tool execution, retrieving additional details, and so on:

```yaml
models:
  gemma3:
    model: ai/gemma3
  qwen3:
    model: ai/qwen3:8B-Q4_0
    context_size: 131000
```

### 3. Write the application

With the models and tools chosen and configured, it was time to write the app itself! I wrote a small Dockerfile and updated the Compose file to connect the models and the MCP Gateway using environment variables.
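The Dockerfile itself isn't shown in the post, so here's a minimal sketch of what a `dev`-targeted Node image could look like. The base image, stage layout, and npm script are my assumptions, not the actual hackathon code; only the `dev` stage name, port, and `/usr/local/app` path are taken from the Compose configuration in the article.

```dockerfile
# Hypothetical sketch -- the actual Dockerfile isn't shown in the post.
# The `dev` stage matches the `target: dev` build option in the Compose file.
FROM node:22-slim AS base
WORKDIR /usr/local/app
COPY package*.json ./

FROM base AS dev
RUN npm install
COPY ./src ./src
EXPOSE 4111
CMD ["npm", "run", "dev"]
```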
I also added a [Compose Watch](https://docs.docker.com/compose/how-tos/file-watch/) config to sync file changes into the container.

```yaml
services:
  app:
    build:
      context: .
      target: dev
    ports:
      - 4111:4111
    environment:
      MCP_GATEWAY_URL: http://mcp-gateway:8811/sse
    depends_on:
      - mcp-gateway
    models:
      qwen3:
        endpoint_var: OPENAI_BASE_URL_ANALYZER
        model_var: OPENAI_MODEL_ANALYZER
      gemma3:
        endpoint_var: OPENAI_BASE_URL_COMMENT
        model_var: OPENAI_MODEL_COMMENT
    develop:
      watch:
        - path: ./src
          action: sync
          target: /usr/local/app/src
        - path: ./package-lock.json
          action: rebuild
```

The Mastra framework made it pretty easy to write an agent. The following snippet creates an MCP client, defines the model connection, and creates the agent with a system prompt (abbreviated for this blog post).

You'll notice the use of environment variables, which match those defined in the Compose file.
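That per-agent wiring can be factored into a small helper. This is a sketch with names of my own choosing (not part of Mastra or the hackathon code), showing how each agent role resolves its endpoint and model from the variables Compose injects, with the same local-development fallbacks used later in the agent snippet:

```typescript
// Hypothetical helper (my naming, not Mastra's): resolve an agent role's
// model settings from the env vars injected by Compose, falling back to
// local-development defaults when running outside Compose.
type Role = "ANALYZER" | "COMMENT";

interface ModelConfig {
  baseURL: string;
  model: string;
}

function resolveModelConfig(role: Role, env: Record<string, string | undefined>): ModelConfig {
  return {
    baseURL: env[`OPENAI_BASE_URL_${role}`] ?? "http://localhost:12434/engines/v1",
    model: env[`OPENAI_MODEL_${role}`] ?? "ai/qwen3:8B-Q4_0",
  };
}
```

Passing `process.env` keeps the agent modules free of scattered `process.env` lookups and makes the fallback behavior easy to test.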
This makes the app super easy to configure.

```typescript
import { Agent } from "@mastra/core/agent";
import { MCPClient } from "@mastra/mcp";
import { createOpenAI } from "@ai-sdk/openai";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const SYSTEM_PROMPT = `
You are a bot that will analyze a pull request for a repository and determine if it can be auto-closed or not.
...`;

// Connect to the MCP Gateway defined in the Compose file
const mcpGateway = new MCPClient({
  servers: {
    mcpGateway: {
      url: new URL(process.env.MCP_GATEWAY_URL || "http://localhost:8811/sse"),
    },
  },
});

const openai = createOpenAI({
  baseURL: process.env.OPENAI_BASE_URL_ANALYZER || "http://localhost:12434/engines/v1",
  apiKey: process.env.OPENAI_API_KEY || "not-set",
});

export const prAnalyzer = new Agent({
  name: "Pull request analyzer",
  instructions: SYSTEM_PROMPT,
  model: openai(process.env.OPENAI_MODEL_ANALYZER || "ai/qwen3:8B-Q4_0"),
  tools: await mcpGateway.getTools(),
  memory: new Memory({
    storage: new LibSQLStore({
      url: "file:/tmp/mastra.db",
    }),
  }),
});
```

I was quite impressed with the Mastra Playground, which lets you interact directly with each agent individually. This makes it easy to test different prompts, messages, and model settings.
Once I found a prompt that worked well, I would update my code to use it.

*The Mastra Playground showing the ability to interact directly with the "Pull request analyzer" agent, adjust settings, and more.*

Once the agents were defined, I was able to define steps and a workflow that connects them all. The following snippet shows the workflow and the conditional branch that runs after determining whether the PR should be closed:

```typescript
import { createWorkflow } from "@mastra/core/workflows";
import { z } from "zod";

// determineAutoClose, createCommentStep, and prExecuteStep are
// steps defined elsewhere in the project
const prAnalyzerWorkflow = createWorkflow({
    id: "prAnalyzerWorkflow",
    inputSchema: z.object({
      org: z.string().describe("The organization to analyze"),
      repo: z.string().describe("The repository to analyze"),
      prNumber: z.number().describe("The pull request number to analyze"),
      author: z.string().describe("The author of the pull request"),
      authorAssociation: z.string().describe("The association of the author with the repository"),
      prTitle: z.string().describe("The title of the pull request"),
      prDescription: z.string().describe("The description of the pull request"),
    }),
    outputSchema: z.object({
      autoClosed: z.boolean().describe("Whether the PR was auto-closed"),
      comment: z.string().describe("Comment to be posted on the PR"),
    }),
  })
  .then(determineAutoClose)
  .branch([
    [
      async ({ inputData }) => inputData.recommendedToClose,
      createCommentStep
    ]
  ])
  .then(prExecuteStep)
  .commit();
```

With the workflow defined, I could now add webhook support.
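Before wiring up the HTTP plumbing, the webhook-screening logic the handler needs can be isolated as a pure function, which keeps it easy to unit test. This is a sketch with hypothetical names of my own choosing; the payload fields follow GitHub's standard `pull_request` webhook shape:

```typescript
// Hypothetical helper (my naming): screen a GitHub pull_request webhook
// payload and extract the workflow's input data, or return null when the
// event isn't one we care about.
interface PrInitData {
  prNumber: number;
  org: string;
  repo: string;
  author: string;
  authorAssociation: string;
  prTitle: string;
  prDescription: string;
}

function extractPrInitData(payload: any): PrInitData | null {
  // Only PRs being opened or reopened are relevant
  if (!payload?.pull_request) return null;
  if (payload.action !== "opened" && payload.action !== "reopened") return null;

  const [org, repo] = payload.pull_request.base.repo.full_name.split("/");
  return {
    prNumber: payload.pull_request.number,
    org,
    repo,
    author: payload.pull_request.user.login,
    authorAssociation: payload.pull_request.author_association,
    prTitle: payload.pull_request.title,
    prDescription: payload.pull_request.body,
  };
}
```

A handler can then call this helper and bail out early on a `null` result, keeping the model-free screening separate from the agentic work.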
Since this was a simple hackathon project and I'm not yet planning to actually deploy it (maybe one day!), I used the [smee.io](http://smee.io/) service to register a webhook in the repo and the [smee-client](https://github.com/probot/smee-client) to receive the payload and forward it to an HTTP endpoint.

The following snippet is a simplified version in which I create a small Express app that handles the webhook from the smee-client, extracts data, and then invokes the Mastra workflow.

```typescript
import express from "express";
import SmeeClient from "smee-client";
import { mastra } from "./mastra";

const app = express();
app.use(express.json());

app.post("/webhook", async (req, res) => {
  const payload = JSON.parse(req.body.payload);

  if (!payload.pull_request)
    return res.status(400).send("Invalid payload");

  if (payload.action !== "opened" && payload.action !== "reopened")
    return res.status(200).send("Action not relevant, ignoring");

  const repoFullName = payload.pull_request.base.repo.full_name;

  const initData = {
    prNumber: payload.pull_request.number,
    org: repoFullName.split("/")[0],
    repo: repoFullName.split("/")[1],
    author: payload.pull_request.user.login,
    authorAssociation: payload.pull_request.author_association,
    prTitle: payload.pull_request.title,
    prDescription: payload.pull_request.body,
  };

  // Acknowledge the webhook right away; the workflow runs asynchronously
  res.status(200).send("Webhook received");

  const workflow = await mastra.getWorkflow("prAnalyzerWorkflow").createRunAsync();
  const result = await workflow.start({ inputData: initData });
  console.log("Result:", JSON.stringify(result));
});

const server = app.listen(3000, () => console.log("Server is running on port 3000"));

const smee = new SmeeClient({
  source: "https://smee.io/SMEE_ENDPOINT_ID",
  target: "http://localhost:3000/webhook",
  logger: console,
});
await smee.start();
console.log("Smee client started, listening for events now");
```

### 4. Test the app

At this point, I can start the full project (run `docker compose up`) and open a PR. I'll see the webhook get triggered and the workflow run. And, after a moment, the result is complete. It worked!

*Screenshot of a GitHub PR that was automatically closed by the agent, with the generated comment.*

If you'd like to view the project in its entirety, check it out on GitHub at [mikesir87/hackathon-july-2025](https://github.com/mikesir87/hackathon-july-2025).

## Lessons learned

Looking back after this hackathon, I learned a few things worth sharing as a recap of this post.

### 1. Yes, automating workflows is possible with agents.

Going beyond the chatbot opens up a lot of automation possibilities, and I'm excited to be thinking about this space more.

### 2. Prompt engineering is still tough.

It took *many* iterations to develop prompts that guided the models to do the right thing consistently. Tools and frameworks that let you iterate quickly help tremendously (thanks, Mastra Playground!).

### 3. Docker's tooling made it easy to try lots of models.

I experimented with quite a few models to find ones that could handle the tool calling, reasoning, and comment generation. I wanted the smallest model possible that would still work. It was easy to adjust the Compose file, let the environment variables update, and try out a new model.

### 4. It's possible to go overboard on agents. Split agentic/programmatic workflows are powerful.

I struggled to write a prompt that would get the final agent to reliably post a comment and close the PR; it would often post the comment multiple times or skip closing the PR. I found myself asking: "Does an agent need to do this step? This feels like something I could do programmatically, without a model, GPU usage, and so on. And it would be much faster, too." That's worth considering: how to build workflows where some steps use agents and some are simply programmatic (Mastra supports this, by the way).

### 5. Testing?

Due to the timing, I didn't get a chance to explore much on the testing front. All of my "testing" was manual verification, so I'd like to loop back on this in a future iteration. How do we test this type of workflow? Do we test agents in isolation or the entire flow? Do we mock results from the MCP servers? So many questions.

## Wrapping up

This internal hackathon was a great experience in building an event-driven agentic application. I'd encourage you to think about agentic applications that don't require a chat interface to start. How can you use event-driven agents to automate some part of your work or life?
I'd love to hear what you have in mind!

- View the hackathon project [on GitHub](https://github.com/mikesir87/hackathon-july-2025).
- Try [Docker Model Runner](https://www.docker.com/products/model-runner/) and the [MCP Gateway](https://github.com/docker/mcp-gateway).
- Sign up for [our Docker Offload beta program](https://www.docker.com/products/docker-offload/#earlyaccess) and get 300 free GPU minutes to boost your agent.
- Use [Docker Compose](https://github.com/docker/compose-for-agents) to build and run your AI agents.
- Discover trusted and secure MCP servers for your agent on the [Docker MCP Catalog](https://hub.docker.com/mcp).