{"id":3323,"date":"2026-01-28T17:58:26","date_gmt":"2026-01-28T17:58:26","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2026\/01\/28\/net-ai-essentials-the-core-building-blocks-explained\/"},"modified":"2026-01-28T17:58:26","modified_gmt":"2026-01-28T17:58:26","slug":"net-ai-essentials-the-core-building-blocks-explained","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2026\/01\/28\/net-ai-essentials-the-core-building-blocks-explained\/","title":{"rendered":".NET AI Essentials \u2013 The Core Building Blocks Explained"},"content":{"rendered":"<p>Artificial Intelligence (AI) is transforming how we build applications. The .NET team has prioritized keeping pace with the rapid changes in generative AI and continue to provide tools, libraries, and guidance for .NET developers building intelligent apps. With .NET, developers have a powerful ecosystem to integrate AI seamlessly into their apps. This post introduces the <strong>building blocks for AI in .NET<\/strong>.<\/p>\n<p>These include:<\/p>\n<ul>\n<li><strong>Microsoft.Extensions.AI<\/strong> for unified LLM access<\/li>\n<li><strong>Microsoft.Extensions.VectorData<\/strong> for semantic search and persisted embeddings<\/li>\n<li><strong>Microsoft Agent Framework<\/strong> for agentic workflows<\/li>\n<li><strong>Model Context Protocol (MCP)<\/strong> for interoperability<\/li>\n<\/ul>\n<p>We\u2019ll explore each library with practical examples and tips for getting started. This is the first of four posts which will cover each of the four areas mentioned.<\/p>\n<h2>Introducing Microsoft.Extensions.AI: one API, many providers<\/h2>\n<p>The foundational library for interfacing with generative AI in .NET is the Microsoft Extensions for AI, often abbreviated as \u201cMEAI.\u201d If you are familiar with Semantic Kernel, this library replaces the primitives and universal features and APIs that were introduced by the Semantic Kernel team. 
It also integrates successful patterns and practices that developers already know from web technologies like ASP.NET, minimal APIs, and Blazor: dependency injection, middleware, and the builder pattern.<\/p>\n<h2>A unified API for multiple providers<\/h2>\n<p>Instead of juggling multiple SDKs, you can use a single abstraction for OpenAI, Ollama (via OllamaSharp), Azure OpenAI, and more. Let\u2019s take a look at some simple examples. Here are the \u201cgetting started\u201d steps for OllamaSharp:<\/p>\n<pre><code class=\"language-csharp\">var uri = new Uri(\"http:\/\/localhost:11434\");\r\nvar ollama = new OllamaApiClient(uri)\r\n{\r\n    SelectedModel = \"mistral:latest\"\r\n};\r\nawait foreach (var stream in ollama.GenerateAsync(\"How are you today?\"))\r\n{\r\n    Console.Write(stream.Response);\r\n}<\/code><\/pre>\n<p>If, on the other hand, you are using OpenAI, it looks like this:<\/p>\n<pre><code class=\"language-csharp\">OpenAIResponseClient client = new(\"o3-mini\", Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\r\n\r\nOpenAIResponse response = await client.CreateResponseAsync(\r\n    [ResponseItem.CreateUserMessageItem(\"How are you today?\")]);\r\nforeach (ResponseItem outputItem in response.OutputItems)\r\n{\r\n    if (outputItem is MessageResponseItem message)\r\n    {\r\n        Console.WriteLine($\"{message.Content.FirstOrDefault()?.Text}\");\r\n    }\r\n}<\/code><\/pre>\n<p>To use the universal APIs, you configure your client the same way, but then you have a common API for making your requests. The OllamaSharp chat client already supports the universal <code>IChatClient<\/code> interface. 
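<\/p>\n<p>For example, because OllamaSharp\u2019s client implements <code>IChatClient<\/code>, you can use it through the universal interface directly. Here\u2019s a minimal sketch (the constructor overload taking a default model name is an assumption for illustration):<\/p>\n<pre><code class=\"language-csharp\">\/\/ OllamaApiClient supports IChatClient, so no adapter is needed\r\nIChatClient ollamaChat = new OllamaApiClient(\r\n    new Uri(\"http:\/\/localhost:11434\"), \"mistral:latest\");<\/code><\/pre>\n<p>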
The OpenAI client does not, so you can use the handy extension method available in the <a href=\"https:\/\/www.nuget.org\/packages\/Microsoft.Extensions.AI.OpenAI\/\">OpenAI adapter<\/a>.<\/p>\n<pre><code class=\"language-csharp\">IChatClient client =\r\n    new OpenAIClient(key).GetChatClient(\"o3-mini\").AsIChatClient();<\/code><\/pre>\n<p>Now you can use the same API to retrieve a response, regardless of which provider you use.<\/p>\n<pre><code class=\"language-csharp\">await foreach (ChatResponseUpdate update in client.GetStreamingResponseAsync(\"How are you today?\"))\r\n{\r\n    Console.Write(update);\r\n}<\/code><\/pre>\n<h2>Beyond convenience: what happens behind the scenes<\/h2>\n<p>In addition to handling delegation to and from the provider, the universal extensions also manage retries and token limits, integrate with dependency injection, and support middleware. I\u2019ll cover middleware later in this post. Let\u2019s look at an example of how the extensions simplify the code you have to write.<\/p>\n<h3>Structured output the super easy way<\/h3>\n<p>Structured output allows you to specify a schema for returned output. This not only lets you consume the output without manually parsing the response, but also gives the model more context about the expected output. Here is an example using the OpenAI SDK:<\/p>\n<pre><code class=\"language-csharp\">class Family\r\n{\r\n    public List&lt;Person&gt; Parents { get; set; }\r\n    public List&lt;Person&gt;? 
Children { get; set; }\r\n\r\n    public class Person\r\n    {\r\n        public string Name { get; set; }\r\n        public int Age { get; set; }\r\n    }\r\n}\r\n\r\nChatCompletionOptions options = new()\r\n{\r\n    ResponseFormat = StructuredOutputsExtensions.CreateJsonSchemaFormat&lt;Family&gt;(\"family\", jsonSchemaIsStrict: true),\r\n    MaxOutputTokenCount = 4096,\r\n    Temperature = 0.1f,\r\n    TopP = 0.1f\r\n};\r\n\r\nList&lt;ChatMessage&gt; messages =\r\n[\r\n    new SystemChatMessage(\"You are an AI assistant that creates families.\"),\r\n    new UserChatMessage(\"Create a family with 2 parents and 2 children.\")\r\n];\r\n\r\nParsedChatCompletion&lt;Family?&gt; completion = chatClient.CompleteChat(messages, options);\r\nFamily? family = completion.Parsed;<\/code><\/pre>\n<p>Let\u2019s do the same thing, only this time using the extensions for AI.<\/p>\n<pre><code class=\"language-csharp\">class Family\r\n{\r\n    public List&lt;Person&gt; Parents { get; set; }\r\n    public List&lt;Person&gt;? Children { get; set; }\r\n\r\n    public class Person\r\n    {\r\n        public string Name { get; set; }\r\n        public int Age { get; set; }\r\n    }\r\n}\r\n\r\nvar family = await client.GetResponseAsync&lt;Family&gt;(\r\n    [\r\n        new ChatMessage(\r\n            ChatRole.System,\r\n            \"You are an AI assistant that creates families.\"),\r\n        new ChatMessage(\r\n            ChatRole.User,\r\n            \"Create a family with 2 parents and 2 children.\")\r\n    ]);<\/code><\/pre>\n<p>The typed extension method uses the adapters to provide the appropriate schema and even parses and deserializes the response for you.<\/p>\n<h3>Standardized requests and responses<\/h3>\n<p>Have you ever wanted to change the <em>temperature<\/em> of a model? Some of you are asking, \u201cWhat\u2019s temperature?\u201d In the \u201creal world,\u201d temperature is a measure of the energy of a system. If particles are standing still, or frozen in place, it\u2019s cold. 
Heat means there is a lot of energy and a lot of movement, at least at the microscopic level. Temperature influences entropy, which is a measure of randomness. The same concept applies to models.<\/p>\n<p>If you remember from the introductory post, models are basically huge probability engines that roll the dice. Sort of. If you set the temperature low, the model will produce more predictable (and usually factual) responses, while a higher temperature introduces more randomness into the response. You can think of it as allowing lower-probability responses to have a bigger voice in the equation, which may lead to ungrounded responses (when the model provides information that is inaccurate or out of context) but can also lead to more \u201ccreative\u201d-sounding responses as well.<\/p>\n<p>More deterministic tasks like classification and summarization probably benefit from a lower temperature, while brainstorming ideas for a marketing campaign might benefit from a higher temperature. The point is, models allow you to tweak everything from temperature to a maximum token count, and it\u2019s all standardized as part of the <code>ChatOptions<\/code> class.<\/p>\n<p>On the flip side, when you receive a response, the response contains a <code>UsageDetails<\/code> instance. Use this to keep track of your token counts.<\/p>\n<h3>The middle is where?<\/h3>\n<p>.NET web developers are already familiar with the power of middleware. Think of it as a plugin model for your workflow. In this case, the workflow or pipeline is the interaction with the model. Middleware allows you to intercept the pipeline and do things like:<\/p>\n<ul>\n<li>Stop malicious content from making it to the model<\/li>\n<li>Block or throttle requests<\/li>\n<li>Provide services like telemetry and tracing<\/li>\n<\/ul>\n<p>MEAI provides middleware for telemetry and tracing out of the box. You can use the familiar <em>builder pattern<\/em> to apply middleware to an existing chat client. 
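<\/p>\n<p>In a hosted app, the same builder pattern also plugs into dependency injection. Here\u2019s a minimal sketch (the <code>AddChatClient<\/code> registration helper from the hosting integration is assumed here for illustration):<\/p>\n<pre><code class=\"language-csharp\">\/\/ Register a chat client and decorate it with middleware via the builder;\r\n\/\/ consumers then simply inject IChatClient and get the full pipeline\r\nbuilder.Services.AddChatClient(\r\n        new OllamaApiClient(new Uri(\"http:\/\/localhost:11434\")))\r\n    .UseLogging()\r\n    .UseOpenTelemetry();<\/code><\/pre>\n<p>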
Here\u2019s an example method that adds middleware to any existing client \u2013 regardless of provider \u2013 to handle both logging and the generation of OpenTelemetry (OTEL) events.<\/p>\n<pre><code class=\"language-csharp\">public IChatClient BuildEnhancedChatClient(\r\n    IChatClient innerClient,\r\n    ILoggerFactory? loggerFactory = null)\r\n{\r\n    var builder = new ChatClientBuilder(innerClient);\r\n\r\n    if (loggerFactory is not null)\r\n    {\r\n        builder.UseLogging(loggerFactory);\r\n    }\r\n\r\n    var sensitiveData = false; \/\/ true for debugging\r\n\r\n    builder.UseOpenTelemetry(\r\n        configure: options =&gt;\r\n            options.EnableSensitiveData = sensitiveData);\r\n    return builder.Build();\r\n}\r\n<\/code><\/pre>\n<p>The OTEL events can be sent to a cloud service like Application Insights or, if you are using Aspire, to your Aspire dashboard. Aspire has been updated to provide additional context for intelligent apps. Look for the \u201csparkles\u201d in the dashboard to see traces related to interactions with models.<\/p>\n<p><img data-opt-id=\"49142019\" fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2026\/01\/sparkle.webp\" alt=\"Example Aspire dashboard with sparkles indicating LLM interactions\" \/><\/p>\n<h2>DataContent for Multi-Modal Conversations<\/h2>\n<p>Models today do more than simply pass text back and forth. More and more multi-modal models are being released that can accept data in a variety of formats, including images and sounds, and return similar assets. 
Although most examples of MEAI focus on text-based interactions, which are represented as <code>TextContent<\/code> instances, there are several built-in content types available, all based on <code>AIContent<\/code>:<\/p>\n<ul>\n<li><code>ErrorContent<\/code> for detailed error information with error codes<\/li>\n<li><code>UserInputRequestContent<\/code> to request user input (includes <code>FunctionApprovalRequestContent<\/code> and <code>FunctionApprovalResponseContent<\/code>)<\/li>\n<li><code>FunctionCallContent<\/code> to represent a tool request<\/li>\n<li><code>HostedFileContent<\/code> to reference data hosted by an AI-specific service<\/li>\n<li><code>UriContent<\/code> for a web reference<\/li>\n<\/ul>\n<p>This is not a comprehensive list, but you get the idea. The one you will likely use the most, however, is <code>DataContent<\/code>, which can represent pretty much any media type. It is simply a byte array with a media type.<\/p>\n<p>For example, let\u2019s assume I want to pass my photograph to a model with instructions to describe it and provide a list of tags. Assuming my photo is stored as <code>c:\\photo.jpg<\/code> I can do this:<\/p>\n<pre><code class=\"language-csharp\">var instructions = \"You are a photo analyst able to extract the utmost detail from a photograph and provide a description so thorough and accurate that another LLM could generate almost the same image just from your description.\";\r\n\r\nvar prompt = new TextContent(\"What's this photo all about? 
Please provide a detailed description along with tags.\");\r\n\r\nvar image = new DataContent(File.ReadAllBytes(@\"c:\\photo.jpg\"), \"image\/jpeg\");\r\n\r\nvar messages = new List&lt;ChatMessage&gt;\r\n{\r\n    new(ChatRole.System, instructions),\r\n    new(ChatRole.User, [prompt, image])\r\n};\r\n\r\nvar analysis = await chatClient.GetResponseAsync&lt;ImageAnalysis&gt;(messages);\r\n\r\nrecord ImageAnalysis(string Description, string[] Tags);<\/code><\/pre>\n<h2>Other highlights<\/h2>\n<p>Although they are out of scope for this post, there are many other services the base extensions provide. Examples include:<\/p>\n<ul>\n<li>Cancellation tokens for responsive apps<\/li>\n<li>Built-in error handling and resilience<\/li>\n<li>Primitives to handle vectors and embeddings<\/li>\n<li>Image generation<\/li>\n<\/ul>\n<h2>Summary<\/h2>\n<p>In this post, we explored the foundational building block for intelligent apps in .NET: Microsoft Extensions for AI. 
In the next post, I\u2019ll walk you through the vector-related extensions and explain why they are not part of the core model, then we\u2019ll follow up with agent framework and MCP.<\/p>\n<p>Until then, I have a few options for you to learn more and get started building your intelligent apps.<\/p>\n<ul>\n<li>Learn by code\n<ul>\n<li><a href=\"https:\/\/github.com\/dotnet\/extensions\/blob\/main\/src\/Libraries\/Microsoft.Extensions.AI.Abstractions\/\">the MEAI repo<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/dotnet\/ai-samples\/tree\/main\/src\/microsoft-extensions-ai\">MEAI samples<\/a><\/li>\n<\/ul>\n<\/li>\n<li>Learn by following quickstarts and tutorials\n<ul>\n<li><a href=\"https:\/\/learn.microsoft.com\/dotnet\/ai\/microsoft-extensions-ai\">Microsoft Extensions for AI documentation<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/dotnet\/ai\/quickstarts\/build-chat-app?pivots=openai\">Build an AI chat app tutorial<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/dotnet\/ai\/quickstarts\/ai-templates?tabs=visual-studio%2Cconfigure-visual-studio%2Cconfigure-visual-studio-aspire&amp;pivots=azure-openai\">Create an intelligent .NET app with custom data using the AI chat template<\/a><\/li>\n<\/ul>\n<\/li>\n<li>Learn by watching videos\n<ul>\n<li><a href=\"https:\/\/youtu.be\/qcp6ufe_XYo\">AI building blocks<\/a><\/li>\n<li><a href=\"https:\/\/youtu.be\/N0DzWMkEnzk\">Building intelligent apps with .NET<\/a><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>Happy coding!<\/p>\n<p>The post <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/dotnet-ai-essentials-the-core-building-blocks-explained\/\">.NET AI Essentials \u2013 The Core Building Blocks Explained<\/a> appeared first on <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\">.NET Blog<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>Artificial Intelligence (AI) is transforming how we build applications. 
The .NET team has prioritized keeping pace with the rapid changes [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3324,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[7],"tags":[],"class_list":["post-3323","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-dotnet"],"_links":{"self":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3323","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/comments?post=3323"}],"version-history":[{"count":0,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3323\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media\/3324"}],"wp:attachment":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media?parent=3323"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/categories?post=3323"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/tags?post=3323"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}