{"id":3981,"date":"2026-05-04T17:51:19","date_gmt":"2026-05-04T17:51:19","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2026\/05\/04\/microsoft-agent-framework-building-blocks-for-ai-part-3\/"},"modified":"2026-05-04T17:51:19","modified_gmt":"2026-05-04T17:51:19","slug":"microsoft-agent-framework-building-blocks-for-ai-part-3","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2026\/05\/04\/microsoft-agent-framework-building-blocks-for-ai-part-3\/","title":{"rendered":"Microsoft Agent Framework \u2013 Building Blocks for AI Part 3"},"content":{"rendered":"<p>Welcome back to the building blocks for AI in .NET series! In <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/dotnet-ai-essentials-the-core-building-blocks-explained\/\">part one<\/a>, we explored Microsoft Extensions for AI (MEAI) and how it provides a unified interface for working with large language models. In <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/vector-data-in-dotnet--building-blocks-for-ai-part-2\/\">part two<\/a>, we dove into Microsoft.Extensions.VectorData and how it brings semantic search and RAG patterns to .NET. Today, we\u2019re exploring the third building block: the <strong>Microsoft Agent Framework<\/strong>.<\/p>\n<p>Up to this point, we\u2019ve been building the foundation. MEAI gave us a universal way to talk to models, and VectorData gave us the ability to store and search knowledge. But what if you want an AI that can <em>do things<\/em>? Not just answer questions, but take actions, use tools, remember context across conversations, and coordinate with other agents to solve complex problems? That\u2019s where agents come in.<\/p>\n<h2>What is an AI agent?<\/h2>\n<p>An AI agent is more than a chatbot. A chatbot receives input, passes it to a model, and returns the output. An agent, on the other hand, has <em>autonomy<\/em>. 
It can reason about a task, decide which tools to use, call those tools, evaluate the results, and decide what to do next. It can accomplish all this without forcing you to write explicit step-by-step instructions for every scenario.<\/p>\n<p>Think of it this way: if MEAI is like having a conversation with a colleague, an agent is like handing that colleague a to-do list and letting them figure out how to get it done. They might search for information, run calculations, check the weather, or query a database, using whatever tools you\u2019ve made available to them.<\/p>\n<p>The <a href=\"https:\/\/github.com\/microsoft\/agent-framework\">Microsoft Agent Framework<\/a> provides a production-ready SDK for building these intelligent agents in .NET (and other languages like Python, though we\u2019ll focus on C# here). It achieved its <strong>1.0 release<\/strong> in April 2026 and supports everything from simple single-agent scenarios to complex multi-agent workflows with graph-based orchestration.<\/p>\n<h2>Your first agent<\/h2>\n<p>Let\u2019s start simple. If you\u2019ve used MEAI and\/or read <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/dotnet-ai-essentials-the-core-building-blocks-explained\/\">Part 1<\/a>, the Agent Framework will feel familiar because it builds directly on top of <code>IChatClient<\/code>. Create a console app, then install the agent framework:<\/p>\n<pre><code class=\"language-bash\">dotnet add package Microsoft.Agents.AI<\/code><\/pre>\n<p>Here\u2019s all it takes to create an agent (<a href=\"https:\/\/github.com\/microsoft\/agent-framework\/blob\/main\/dotnet\/samples\/01-get-started\/01_hello_agent\/Program.cs\">01_hello_agent<\/a>):<\/p>\n<pre><code class=\"language-csharp\">using Azure.AI.OpenAI;\r\nusing Azure.Identity;\r\nusing Microsoft.Agents.AI;\r\n\r\nvar endpoint = Environment.GetEnvironmentVariable(\"AZURE_OPENAI_ENDPOINT\")\r\n    ?? 
throw new InvalidOperationException(\"AZURE_OPENAI_ENDPOINT is not set.\");\r\nvar deploymentName = Environment.GetEnvironmentVariable(\"AZURE_OPENAI_DEPLOYMENT_NAME\")\r\n    ?? \"gpt-5.4-mini\";\r\n\r\nAIAgent agent = new AzureOpenAIClient(\r\n    new Uri(endpoint),\r\n    new DefaultAzureCredential())\r\n    .GetChatClient(deploymentName)\r\n    .AsAIAgent(\r\n        instructions: \"You are good at telling jokes.\",\r\n        name: \"Joker\");\r\n\r\nConsole.WriteLine(await agent.RunAsync(\"Tell me a joke about a pirate.\"));<\/code><\/pre>\n<p>Notice the <code>.AsAIAgent()<\/code> extension method. Just like <code>.AsIChatClient()<\/code> bridges a provider\u2019s SDK to the MEAI abstraction, <code>.AsAIAgent()<\/code> takes that a step further and wraps it in an agent that can manage sessions, tools, and memory. This works with multiple providers, including Azure OpenAI, OpenAI, GitHub Models, Microsoft Foundry, or even local models via Foundry Local or Ollama.<\/p>\n<p>The agent also supports streaming out of the box:<\/p>\n<pre><code class=\"language-csharp\">await foreach (var update in agent.RunStreamingAsync(\"Tell me a joke about a pirate.\"))\r\n{\r\n    Console.Write(update);\r\n}<\/code><\/pre>\n<h2>Giving your agent tools<\/h2>\n<p>A joke-telling agent is fun, but agents become truly powerful when you give them tools. Tools are simply functions that the model can decide to call based on the user\u2019s request. 
The Agent Framework uses the same <code>AIFunctionFactory<\/code> from MEAI, so if you\u2019ve already defined tools for your chat client, they work here too.<\/p>\n<p>Here\u2019s an agent with a weather tool (<a href=\"https:\/\/github.com\/microsoft\/agent-framework\/blob\/main\/dotnet\/samples\/01-get-started\/02_add_tools\/Program.cs\">02_add_tools<\/a>):<\/p>\n<pre><code class=\"language-csharp\">using System.ComponentModel;\r\nusing Microsoft.Agents.AI;\r\nusing Microsoft.Extensions.AI;\r\n\r\n[Description(\"Get the weather for a given location.\")]\r\nstatic string GetWeather(\r\n    [Description(\"The location to get the weather for.\")] string location)\r\n    =&gt; $\"The weather in {location} is cloudy with a high of 15\u00b0C.\";\r\n\r\nAIAgent agent = new AzureOpenAIClient(\r\n    new Uri(endpoint),\r\n    new DefaultAzureCredential())\r\n    .GetChatClient(deploymentName)\r\n    .AsAIAgent(\r\n        instructions: \"You are a helpful assistant\",\r\n        tools: [AIFunctionFactory.Create(GetWeather)]);\r\n\r\nConsole.WriteLine(await agent.RunAsync(\"What is the weather like in Amsterdam?\"));<\/code><\/pre>\n<p>When the user asks about the weather, the agent doesn\u2019t just guess but recognizes that it has a <code>GetWeather<\/code> tool available, calls it with the appropriate parameter, and uses the result to formulate its response. You didn\u2019t have to write any \u201cif the user asks about weather, call this function\u201d logic. The model figures it out.<\/p>\n<p>The <code>Description<\/code> attributes are important. They tell the model what the tool does and what each parameter means, which helps it decide when and how to use the tool. Think of them as the tool\u2019s instruction manual for the AI.<\/p>\n<h2>Multi-turn conversations with sessions<\/h2>\n<p>Real conversations don\u2019t happen in a single exchange. Users ask follow-up questions, provide additional context, and expect the agent to remember what was discussed. 
The Agent Framework handles this with <code>AgentSession<\/code> (<a href=\"https:\/\/github.com\/microsoft\/agent-framework\/blob\/main\/dotnet\/samples\/01-get-started\/03_multi_turn\/Program.cs\">03_multi_turn<\/a>):<\/p>\n<pre><code class=\"language-csharp\">AgentSession session = await agent.CreateSessionAsync();\r\n\r\nConsole.WriteLine(\r\n    await agent.RunAsync(\"Tell me a joke about a pirate.\", session));\r\n\r\nConsole.WriteLine(\r\n    await agent.RunAsync(\r\n        \"Now add some emojis to the joke and tell it in the voice of a pirate's parrot.\",\r\n        session));<\/code><\/pre>\n<p>The session preserves the conversation history between calls. When the user asks to \u201cadd some emojis to the joke,\u201d the agent knows which joke is being referenced because the session maintains that context.<\/p>\n<p>Sessions can also be serialized and deserialized, which is essential for production scenarios where your agent runs in a stateless service:<\/p>\n<pre><code class=\"language-csharp\">\/\/ Save the session state\r\nJsonElement sessionState = await agent.SerializeSessionAsync(session);\r\n\r\n\/\/ Later, restore it\r\nvar restoredSession = await agent.DeserializeSessionAsync(sessionState);\r\nConsole.WriteLine(\r\n    await agent.RunAsync(\"What were we just talking about?\", restoredSession));<\/code><\/pre>\n<h2>Teaching your agent to remember<\/h2>\n<p>Sessions preserve conversation history, but what about longer-term memory? What if you want your agent to remember facts about the user across sessions \u2014 their name, preferences, or previous interactions?<\/p>\n<p>The Agent Framework provides <code>AIContextProvider<\/code>, a mechanism for injecting contextual information into the agent\u2019s workflow. 
Here\u2019s a simplified version of the memory sample (<a href=\"https:\/\/github.com\/microsoft\/agent-framework\/blob\/main\/dotnet\/samples\/01-get-started\/04_memory\/Program.cs\">04_memory<\/a>) that extracts and remembers user information:<\/p>\n<pre><code class=\"language-csharp\">internal sealed class UserInfoMemory : AIContextProvider\r\n{\r\n    private readonly ProviderSessionState&lt;UserInfo&gt; _sessionState;\r\n    private readonly IChatClient _chatClient;\r\n\r\n    public UserInfoMemory(IChatClient chatClient)\r\n    {\r\n        _sessionState = new ProviderSessionState&lt;UserInfo&gt;(\r\n            _ =&gt; new UserInfo(),\r\n            GetType().Name);\r\n        _chatClient = chatClient;\r\n    }\r\n\r\n    protected override async ValueTask StoreAIContextAsync(\r\n        InvokedContext context,\r\n        CancellationToken cancellationToken = default)\r\n    {\r\n        var userInfo = _sessionState.GetOrInitializeState(context.Session);\r\n\r\n        if (userInfo.UserName is null\r\n            &amp;&amp; context.RequestMessages.Any(x =&gt; x.Role == ChatRole.User))\r\n        {\r\n            var result = await _chatClient.GetResponseAsync&lt;UserInfo&gt;(\r\n                context.RequestMessages,\r\n                new ChatOptions()\r\n                {\r\n                    Instructions =\r\n                        \"Extract the user's name from the message if present.\"\r\n                },\r\n                cancellationToken: cancellationToken);\r\n\r\n            userInfo.UserName ??= result.Result.UserName;\r\n        }\r\n\r\n        _sessionState.SaveState(context.Session, userInfo);\r\n    }\r\n\r\n    protected override ValueTask&lt;AIContext&gt; ProvideAIContextAsync(\r\n        InvokingContext context,\r\n        CancellationToken cancellationToken = default)\r\n    {\r\n        var userInfo = _sessionState.GetOrInitializeState(context.Session);\r\n\r\n        var instructions = userInfo.UserName is null\r\n            ? 
\"Ask the user for their name.\"\r\n            : $\"The user's name is {userInfo.UserName}.\";\r\n\r\n        return new ValueTask&lt;AIContext&gt;(\r\n            new AIContext { Instructions = instructions });\r\n    }\r\n}<\/code><\/pre>\n<p>The <code>AIContextProvider<\/code> has two key methods:<\/p>\n<ul>\n<li><strong><code>StoreAIContextAsync<\/code><\/strong> runs <em>after<\/em> each interaction and is the agent\u2019s opportunity to learn from what just happened \u2014 in this case, extracting the user\u2019s name from the conversation.<\/li>\n<li><strong><code>ProvideAIContextAsync<\/code><\/strong> runs <em>before<\/em> each interaction and supplies additional context to the agent \u2014 here, either telling the agent the user\u2019s name or instructing it to ask for one.<\/li>\n<\/ul>\n<p>Wire it up when creating the agent:<\/p>\n<pre><code class=\"language-csharp\">AIAgent agent = chatClient.AsAIAgent(new ChatClientAgentOptions()\r\n{\r\n    ChatOptions = new()\r\n    {\r\n        Instructions = \"You are a friendly assistant. Always address the user by their name.\"\r\n    },\r\n    AIContextProviders = [new UserInfoMemory(chatClient.AsIChatClient())]\r\n});<\/code><\/pre>\n<p>This pattern is powerful because it separates <em>what the agent remembers<\/em> from <em>how the agent converses<\/em>. You can stack multiple context providers \u2014 one for user preferences, another for recent interactions, and a third that pulls relevant documents from your VectorData store.<\/p>\n<h2>Workflows: orchestrating multiple agents<\/h2>\n<p>Single agents are useful, but many real-world problems benefit from breaking the work across multiple specialized agents. 
The Agent Framework provides a graph-based workflow system where you connect <strong>executors<\/strong> (processing units) with <strong>edges<\/strong> (data flow paths).<\/p>\n<p>Here\u2019s a simple workflow that chains two text processors together (<a href=\"https:\/\/github.com\/microsoft\/agent-framework\/blob\/main\/dotnet\/samples\/01-get-started\/05_first_workflow\/Program.cs\">05_first_workflow<\/a>):<\/p>\n<pre><code class=\"language-csharp\">using Microsoft.Agents.AI.Workflows;\r\n\r\nFunc&lt;string, string&gt; uppercaseFunc = s =&gt; s.ToUpperInvariant();\r\nvar uppercase = uppercaseFunc.BindAsExecutor(\"UppercaseExecutor\");\r\nvar reverse = new ReverseTextExecutor();\r\n\r\nWorkflowBuilder builder = new(uppercase);\r\nbuilder.AddEdge(uppercase, reverse).WithOutputFrom(reverse);\r\nvar workflow = builder.Build();\r\n\r\nawait using Run run = await InProcessExecution.RunAsync(\r\n    workflow, \"Hello, World!\");\r\n\r\nforeach (WorkflowEvent evt in run.NewEvents)\r\n{\r\n    if (evt is ExecutorCompletedEvent executorComplete)\r\n    {\r\n        Console.WriteLine(\r\n            $\"{executorComplete.ExecutorId}: {executorComplete.Data}\");\r\n    }\r\n}<\/code><\/pre>\n<p>The text processing example keeps things simple, but the real power of workflows shines when you use agents as executors. 
Here are some of the patterns the framework supports:<\/p>\n<ul>\n<li><strong>Sequential workflows<\/strong> \u2014 agents process one after another, where each agent\u2019s output feeds the next<\/li>\n<li><strong>Concurrent workflows<\/strong> \u2014 fan-out to multiple agents in parallel, then fan-in the results<\/li>\n<li><strong>Conditional routing (\u201chand-off\u201d)<\/strong> \u2014 dynamically route work to different agents based on the output of a previous step<\/li>\n<li><strong>Feedback loops<\/strong> \u2014 a writer-critic pattern where one agent produces content and another evaluates it, looping until quality criteria are met<\/li>\n<li><strong>Sub-workflows<\/strong> \u2014 compose workflows hierarchically by embedding one workflow inside another<\/li>\n<\/ul>\n<h3>A writer-critic example<\/h3>\n<p>One of the most practical patterns is the writer-critic workflow. Imagine you have one agent that writes marketing copy and another that reviews it for quality:<\/p>\n<pre><code class=\"language-csharp\">WorkflowBuilder builder = new(writerAgent);\r\nbuilder\r\n    .AddEdge(writerAgent, criticAgent)\r\n    .AddEdge(criticAgent, writerAgent, condition: result =&gt; !result.IsApproved)\r\n    .WithOutputFrom(criticAgent, condition: result =&gt; result.IsApproved);\r\nvar workflow = builder.Build();<\/code><\/pre>\n<p>The writer produces a draft, the critic evaluates it, and if it isn\u2019t approved, the draft goes back to the writer for revision. This loop continues until the critic is satisfied. Of course, for safety, you probably want to provide a maximum iteration count.<\/p>\n<h2>Human-in-the-loop<\/h2>\n<p>AI doesn\u2019t replace humans, and agents often need human input. Think of agents as specialized workers directed by humans through code. The Agent Framework supports tool approval workflows where the agent proposes a tool call and waits for human approval before executing it. 
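A minimal sketch of that loop might look like this (the approval content types are real MEAI types, but the agent calls and member names around them are assumptions, so treat this as pseudocode for the shape of the flow rather than the exact 1.0 API):<\/p>\n<pre><code class=\"language-csharp\">\/\/ Run the agent; if a tool requires approval, the response\r\n\/\/ carries an approval request instead of a tool result.\r\nvar response = await agent.RunAsync(\"Archive the old records.\", session);\r\n\r\nvar approvalRequest = response.Messages\r\n    .SelectMany(m =&gt; m.Contents)\r\n    .OfType&lt;FunctionApprovalRequestContent&gt;()\r\n    .FirstOrDefault();\r\n\r\nif (approvalRequest is not null)\r\n{\r\n    Console.Write($\"Approve call to {approvalRequest.FunctionCall.Name}? (y\/n) \");\r\n    bool approved = Console.ReadLine()?.Trim()\r\n        .Equals(\"y\", StringComparison.OrdinalIgnoreCase) == true;\r\n\r\n    \/\/ Send the decision back; the tool only executes if approved.\r\n    response = await agent.RunAsync(\r\n        new ChatMessage(ChatRole.User, [approvalRequest.CreateResponse(approved)]),\r\n        session);\r\n}<\/code><\/pre>\n<p>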
This is critical for production scenarios involving sensitive operations like database writes, financial transactions, or sending communications.<\/p>\n<p>The approval mechanism is built on <code>FunctionApprovalRequestContent<\/code> and <code>FunctionApprovalResponseContent<\/code>, content types from the MEAI content model we introduced in Part 1. When the agent wants to call a tool that requires approval, it yields a request and waits. Your application code can present this to the user, and the response determines whether the tool call proceeds.<\/p>\n<h2>Bringing it all together<\/h2>\n<p>The beauty of the building blocks approach is that each piece composes naturally with the others. Here\u2019s how they fit:<\/p>\n<ol>\n<li><strong>MEAI<\/strong> (<code>IChatClient<\/code>) provides the foundation \u2014 the universal interface for talking to any model.<\/li>\n<li><strong>VectorData<\/strong> enables RAG patterns \u2014 your agents can search through your organization\u2019s knowledge base using semantic search to ground their responses in your data.<\/li>\n<li><strong>Agent Framework<\/strong> orchestrates everything \u2014 agents use <code>IChatClient<\/code> under the hood, can incorporate vector search through context providers, and coordinate through workflows.<\/li>\n<\/ol>\n<p>For example, you could build an <code>AIContextProvider<\/code> that searches your VectorData store before each agent invocation, providing relevant documents as additional context \u2014 exactly like the RAG pattern from Part 2, but now running automatically as part of every agent interaction.<\/p>\n<h2>Summary<\/h2>\n<p>The Microsoft Agent Framework transforms the primitives from Parts 1 and 2 into autonomous, tool-using, memory-aware agents that can work alone or together in sophisticated workflows. 
We covered:<\/p>\n<ul>\n<li><strong>Creating agents<\/strong> with <code>AsAIAgent()<\/code> and running them with <code>RunAsync()<\/code><\/li>\n<li><strong>Equipping agents with tools<\/strong> using <code>AIFunctionFactory<\/code> and <code>Description<\/code> attributes<\/li>\n<li><strong>Managing conversations<\/strong> across turns with <code>AgentSession<\/code><\/li>\n<li><strong>Building memory<\/strong> with <code>AIContextProvider<\/code> for persistent, cross-session knowledge<\/li>\n<li><strong>Orchestrating workflows<\/strong> with executors, edges, and patterns like writer-critic loops<\/li>\n<li><strong>Human-in-the-loop<\/strong> approval for sensitive operations<\/li>\n<\/ul>\n<p>In the next and final post, we\u2019ll explore the <strong>Model Context Protocol (MCP)<\/strong> and how it provides a standardized way for agents to discover and use external tools and resources \u2014 making your agents interoperable with the broader AI ecosystem.<\/p>\n<p>Until then, here are some resources to help you get started:<\/p>\n<ul>\n<li>Learn by code\n<ul>\n<li><a href=\"https:\/\/github.com\/microsoft\/agent-framework\/tree\/main\/dotnet\">Agent Framework repository<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/microsoft\/Agent-Framework-Samples\">Agent Framework samples<\/a><\/li>\n<\/ul>\n<\/li>\n<li>Learn by following tutorials\n<ul>\n<li><a href=\"https:\/\/learn.microsoft.com\/agent-framework\/\">Agent Framework documentation<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/agent-framework\/tutorials\/quick-start\">Quick start guide<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/agent-framework\/migration-guide\/from-semantic-kernel\">Migration from Semantic Kernel<\/a><\/li>\n<\/ul>\n<\/li>\n<li>Learn by watching videos\n<ul>\n<li><a href=\"https:\/\/www.youtube.com\/watch?v=AAgdMhftj8w\">Agent Framework introduction (30 min)<\/a><\/li>\n<li><a href=\"https:\/\/www.youtube.com\/watch?v=mOAaGY4WPvc\">DevUI in 
action<\/a><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>Happy coding!<\/p>\n<p>The post <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/microsoft-agent-framework-building-blocks-for-ai-part-3\/\">Microsoft Agent Framework \u2013 Building Blocks for AI Part 3<\/a> appeared first on <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\">.NET Blog<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>Welcome back to the building blocks for AI in .NET series! In part one, we explored Microsoft Extensions for AI [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":94,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[7],"tags":[],"class_list":["post-3981","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-dotnet"],"_links":{"self":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3981","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/comments?post=3981"}],"version-history":[{"count":0,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3981\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media\/94"}],"wp:attachment":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media?parent=3981"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/categories?post=3981"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/tags?post=3981"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}