{"id":3968,"date":"2026-04-30T17:14:32","date_gmt":"2026-04-30T17:14:32","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2026\/04\/30\/building-an-ai-powered-conference-app-with-nets-composable-ai-stack\/"},"modified":"2026-04-30T17:14:32","modified_gmt":"2026-04-30T17:14:32","slug":"building-an-ai-powered-conference-app-with-nets-composable-ai-stack","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2026\/04\/30\/building-an-ai-powered-conference-app-with-nets-composable-ai-stack\/","title":{"rendered":"Building an AI-Powered Conference App with .NET\u2019s Composable AI Stack"},"content":{"rendered":"<p>Building AI features into .NET applications often means stitching together models, vector databases, ingestion pipelines, and agent frameworks from different ecosystems. Each one has its own patterns, its own client libraries, and its own breaking changes when the next version ships. We\u2019ve been working on a set of composable, extensible building blocks that give you stable abstractions across all of these concerns.<\/p>\n<p>We\u2019re excited to walk you through how we used them together. For a session at MVP Summit, we built an interactive conference assistant called ConferencePulse. It runs live polls, answers audience questions in real time, generates insights from engagement data, and summarizes the session when it wraps up. We built the app using the exact technologies we were there to present: <code>Microsoft.Extensions.AI<\/code>, <code>Microsoft.Extensions.DataIngestion<\/code>, <code>Microsoft.Extensions.VectorData<\/code>, Model Context Protocol (MCP), and Microsoft Agent Framework.<\/p>\n<p>This post walks through the app and shows how each building block fits.<\/p>\n<h2>What we built<\/h2>\n<p>ConferencePulse is a Blazor Server app for live conference sessions. Attendees scan a QR code, join the session, and interact with the presenter through polls and Q&amp;A. 
On the backend, AI powers several features:<\/p>\n<ul>\n<li><strong>Live polls<\/strong> that the AI generates based on session content. Attendees vote and results appear in real time.<\/li>\n<li><strong>Audience Q&amp;A<\/strong> where AI answers questions using a RAG pipeline that pulls from the session knowledge base, Microsoft Learn docs, and GitHub wiki content.<\/li>\n<li><strong>Auto-generated insights<\/strong> that surface patterns in poll results and audience questions as they come in.<\/li>\n<li><strong>Session summary<\/strong> that runs when the presenter ends the session. Multiple AI agents analyze polls, questions, and insights concurrently, then merge their findings.<\/li>\n<\/ul>\n<p><img data-opt-id=560528647  fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2026\/04\/ai-assistant-presenter-view-scaled.webp\" alt=\"ConferencePulse presenter dashboard showing real-time poll results, audience questions, and AI-generated insights\" \/><\/p>\n<p>We wanted an interactive session, not a slide deck. We wanted polls and audience insights. And we wanted to automate the preparation: point the app at a GitHub repo, and it downloads the markdown, processes it through a pipeline, and builds a searchable knowledge base. Polls, talking points, and Q&amp;A answers are all grounded in that content.<\/p>\n<p>The app runs on .NET 10, Blazor Server, and Aspire. 
Five projects cover the stack:<\/p>\n<pre><code class=\"language-text\">src\/\r\n\u251c\u2500\u2500 ConferenceAssistant.Web\/          \u2190 Blazor Server (UI + orchestration)\r\n\u251c\u2500\u2500 ConferenceAssistant.Core\/         \u2190 Models, interfaces, session state\r\n\u251c\u2500\u2500 ConferenceAssistant.Ingestion\/    \u2190 Data ingestion pipeline + vector search\r\n\u251c\u2500\u2500 ConferenceAssistant.Agents\/       \u2190 AI agents, workflows, tools\r\n\u251c\u2500\u2500 ConferenceAssistant.Mcp\/          \u2190 MCP server tools + MCP client\r\n\u2514\u2500\u2500 ConferenceAssistant.AppHost\/      \u2190 .NET Aspire (Qdrant, PostgreSQL, Azure OpenAI)<\/code><\/pre>\n<p>Now let\u2019s walk through the building blocks.<\/p>\n<h2>Microsoft.Extensions.AI: one interface, any provider<\/h2>\n<p><code>Microsoft.Extensions.AI<\/code> gives you <code>IChatClient<\/code>, a unified abstraction that works with OpenAI, Azure OpenAI, Ollama, Foundry Local, and other providers. Every AI call in ConferencePulse goes through a single middleware pipeline.<\/p>\n<pre><code class=\"language-csharp\">var openaiBuilder = builder.AddAzureOpenAIClient(\"openai\");\r\n\r\nopenaiBuilder.AddChatClient(\"chat\")\r\n    .UseFunctionInvocation()\r\n    .UseOpenTelemetry()\r\n    .UseLogging();\r\n\r\nopenaiBuilder.AddEmbeddingGenerator(\"embedding\");<\/code><\/pre>\n<p>That\u2019s it. Six lines. If you\u2019ve worked with ASP.NET Core middleware, this pattern will feel familiar. Each <code>.Use*()<\/code> call wraps the inner client with additional behavior. <code>UseFunctionInvocation()<\/code> handles tool-call loops. <code>UseOpenTelemetry()<\/code> traces every call. <code>UseLogging()<\/code> captures request\/response pairs.<\/p>\n<p>Want to swap Azure OpenAI for Ollama? Change the inner client. The middleware stays the same.<\/p>\n<p>This matters because <code>IChatClient<\/code> shows up everywhere in the app. 
Poll generation, Q&amp;A, insights, ingestion enrichment, and multi-agent workflows all share this pipeline. You register it once and use it throughout.<\/p>\n<h2>DataIngestion + VectorData: the knowledge layer<\/h2>\n<p>AI models need context to give useful answers. <code>Microsoft.Extensions.DataIngestion<\/code> provides a pipeline for processing documents into searchable chunks. <code>Microsoft.Extensions.VectorData<\/code> provides a provider-agnostic abstraction over vector stores.<\/p>\n<p>When ConferencePulse imports content from a GitHub repo, it runs the files through an ingestion pipeline:<\/p>\n<pre><code class=\"language-csharp\">IngestionDocumentReader reader = new MarkdownReader();\r\n\r\nvar tokenizer = TiktokenTokenizer.CreateForModel(\"gpt-4o\");\r\nvar chunkerOptions = new IngestionChunkerOptions(tokenizer)\r\n{\r\n    MaxTokensPerChunk = 500,\r\n    OverlapTokens = 50\r\n};\r\nIngestionChunker&lt;string&gt; chunker = new HeaderChunker(chunkerOptions);\r\n\r\nvar enricherOptions = new EnricherOptions(_chatClient) { LoggerFactory = _loggerFactory };\r\n\r\nusing var writer = new VectorStoreWriter&lt;string&gt;(\r\n    _searchService.VectorStore,\r\n    dimensionCount: 1536,\r\n    new VectorStoreWriterOptions\r\n    {\r\n        CollectionName = \"conference_knowledge\",\r\n        IncrementalIngestion = true\r\n    });\r\n\r\nusing IngestionPipeline&lt;string&gt; pipeline = new(\r\n    reader, chunker, writer, new IngestionPipelineOptions(), _loggerFactory)\r\n{\r\n    ChunkProcessors =\r\n    {\r\n        new SummaryEnricher(enricherOptions),\r\n        new KeywordEnricher(enricherOptions, ReadOnlySpan&lt;string&gt;.Empty),\r\n        frontMatterProcessor\r\n    }\r\n};<\/code><\/pre>\n<p>The pipeline reads markdown, chunks it by headers, enriches each chunk with AI-generated summaries and keywords, then embeds and stores the results in Qdrant. Each step is a pluggable component. 
You can swap <code>MarkdownReader<\/code> for a PDF reader, <code>HeaderChunker<\/code> for a fixed-size chunker, or Qdrant for Azure AI Search. The pipeline composition stays the same.<\/p>\n<p>Notice that <code>SummaryEnricher<\/code> and <code>KeywordEnricher<\/code> both take <code>EnricherOptions(_chatClient)<\/code>. They use the same <code>IChatClient<\/code> from the previous section. AI enriching its own context. The summary enricher generates a concise description of each chunk, and the keyword enricher extracts searchable terms. Both improve retrieval quality later.<\/p>\n<p>On the query side, <code>Microsoft.Extensions.VectorData<\/code> gives you <code>VectorStoreCollection<\/code> for semantic search over any backend:<\/p>\n<pre><code class=\"language-csharp\">var results = collection.SearchAsync(query, topK);\r\n\r\nawait foreach (var result in results)\r\n{\r\n    var content = result.Record[\"content\"] as string;\r\n    \/\/ Use the content...\r\n}<\/code><\/pre>\n<p>Similar to how you can swap database providers in EF Core, you can swap vector store providers here. Qdrant today, Azure AI Search tomorrow. Same API.<\/p>\n<p>ConferencePulse also ingests data in real time as the session progresses. Poll responses, audience questions, Q&amp;A pairs, and AI-generated insights all go into the knowledge base:<\/p>\n<pre><code class=\"language-csharp\">public async Task&lt;int&gt; IngestResponseAsync(\r\n    string pollId, string topicId, string question,\r\n    Dictionary&lt;string, int&gt; results, List&lt;string&gt;? otherResponses = null)\r\n{\r\n    var sb = new StringBuilder();\r\n    sb.AppendLine($\"Poll: {question}\");\r\n    sb.AppendLine(\"Results:\");\r\n    var total = results.Values.Sum();\r\n    foreach (var (option, count) in results)\r\n    {\r\n        var percentage = total &gt; 0 ? 
(count * 100.0 \/ total).ToString(\"F1\") : \"0\";\r\n        sb.AppendLine($\"  - {option}: {count} votes ({percentage}%)\");\r\n    }\r\n\r\n    await _searchService.UpsertAsync(sb.ToString(), source: \"response\", documentId: $\"response-{pollId}\");\r\n    return 1;\r\n}<\/code><\/pre>\n<p>By the end of a session, the knowledge base contains the original imported content, every poll result, every audience question, and every AI-generated insight.<\/p>\n<p><img data-opt-id=1077440086  fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2026\/04\/ai-assistant-insights.webp\" alt=\"AI Assistant generating real-time insights from poll results and audience questions\" \/><\/p>\n<h2>IChatClient with tools: choosing the right level of complexity<\/h2>\n<p>One of the design principles we followed: use the simplest approach that gets the job done. <code>IChatClient<\/code> with tools handles a lot of scenarios before you need a dedicated agent framework. At the same time, when orchestration gets complex, a framework earns its place. The key is choosing the right tool.<\/p>\n<p>ConferencePulse has three AI-powered features at different levels of complexity. All three use the same <code>IChatClient<\/code>.<\/p>\n<h3>Insight generation: a single call<\/h3>\n<p>When a poll closes, ConferencePulse generates an insight. 
The implementation is a single <code>GetResponseAsync<\/code> call:<\/p>\n<pre><code class=\"language-csharp\">var response = await chatClient.GetResponseAsync(\r\n[\r\n    new(ChatRole.System,\r\n        \"You are a conference analytics assistant generating real-time insights from audience data.\"),\r\n    new(ChatRole.User, prompt)  \/\/ prompt contains the poll results\r\n]);\r\n\r\nvar content = response.Text?.Trim();\r\nif (!string.IsNullOrWhiteSpace(content))\r\n{\r\n    ctx.AddInsight(new Insight\r\n    {\r\n        TopicId = poll.TopicId,\r\n        PollId = pollId,\r\n        Content = content,\r\n        Type = InsightType.PollAnalysis\r\n    });\r\n}<\/code><\/pre>\n<p>No tools, no framework. A prompt with poll results as context, and the middleware pipeline handles telemetry and logging.<\/p>\n<h3>Poll generation: IChatClient with tools<\/h3>\n<p>Generating a poll needs more context. The AI checks the current topic, looks at what\u2019s been covered, and creates something relevant. That means tools:<\/p>\n<pre><code class=\"language-csharp\">public class PollGenerationWorkflow(IChatClient chatClient, AgentTools tools)\r\n{\r\n    public async Task&lt;string&gt; ExecuteAsync(string topicId)\r\n    {\r\n        var options = new ChatOptions\r\n        {\r\n            Tools = [tools.GetCurrentTopic, tools.SearchKnowledge,\r\n                     tools.GetAudienceQuestions, tools.GetAllPollResults,\r\n                     tools.GetAllInsights, tools.CreatePoll]\r\n        };\r\n\r\n        var messages = new List&lt;ChatMessage&gt;\r\n        {\r\n            new(ChatRole.System, AgentDefinitions.SurveyArchitectInstructions),\r\n            new(ChatRole.User, $\"Generate an engaging poll for topic: {topicId}...\")\r\n        };\r\n\r\n        var response = await chatClient.GetResponseAsync(messages, options);\r\n        return response.Text ?? 
\"Unable to generate poll.\";\r\n    }\r\n}<\/code><\/pre>\n<p>Each tool is a strongly-typed <code>AITool<\/code> property created from a C# method:<\/p>\n<pre><code class=\"language-csharp\">public class AgentTools\r\n{\r\n    public AITool SearchKnowledge { get; }\r\n    public AITool GetCurrentTopic { get; }\r\n    public AITool CreatePoll { get; }\r\n    \/\/ ...\r\n\r\n    public AgentTools(IPollService pollService, ISemanticSearchService searchService, ...)\r\n    {\r\n        SearchKnowledge = AIFunctionFactory.Create(SearchKnowledgeCore,\r\n            new AIFunctionFactoryOptions\r\n            {\r\n                Name = nameof(SearchKnowledge),\r\n                Description = \"Search the session knowledge base for content related to the query\"\r\n            });\r\n        \/\/ ...\r\n    }\r\n}<\/code><\/pre>\n<p>The model decides it needs context, calls <code>GetCurrentTopic<\/code> and <code>SearchKnowledge<\/code>, then generates a poll and calls <code>CreatePoll<\/code> to save it. The <code>UseFunctionInvocation()<\/code> middleware handles the tool loop automatically.<\/p>\n<p><img data-opt-id=445890900  data-opt-src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2026\/04\/ai-assistant-poll-room-view-scaled.webp\"  decoding=\"async\" src=\"data:image/svg+xml,%3Csvg%20viewBox%3D%220%200%20100%%20100%%22%20width%3D%22100%%22%20height%3D%22100%%22%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%3E%3Crect%20width%3D%22100%%22%20height%3D%22100%%22%20fill%3D%22transparent%22%2F%3E%3C%2Fsvg%3E\" alt=\"AI assistant conducting a poll in the conference room view\" \/><\/p>\n<h3>Q&amp;A answering: RAG across multiple sources<\/h3>\n<p>The Q&amp;A service brings multiple building blocks together. When an audience member asks a question, the app searches the local knowledge base, queries Microsoft Learn docs via MCP, and asks DeepWiki about relevant GitHub repos via MCP. 
Then it synthesizes an answer:<\/p>\n<pre><code class=\"language-csharp\">\/\/ 1. Search local knowledge base\r\nvar searchResults = await searchService.SearchAsync(questionText, topK: 5);\r\nvar localContext = string.Join(\"\\n\\n---\\n\\n\",\r\n    searchResults.Select(r =&gt; r.Content).Where(c =&gt; !string.IsNullOrWhiteSpace(c)));\r\n\r\n\/\/ 2. Search Microsoft Learn for documentation context (via MCP)\r\nvar docsContext = await mcpClient.SearchDocsAsync(questionText);\r\n\r\n\/\/ 3. Ask DeepWiki about relevant .NET repos (via MCP)\r\nvar deepWikiContext = await mcpClient.AskDeepWikiAsync(\"dotnet\/extensions\", questionText);<\/code><\/pre>\n<p>VectorData for local search, MCP for external context, <code>IChatClient<\/code> for generation.<\/p>\n<p><img data-opt-id=660387068  data-opt-src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2026\/04\/ai-assistant-qa.webp\"  decoding=\"async\" src=\"data:image/svg+xml,%3Csvg%20viewBox%3D%220%200%20100%%20100%%22%20width%3D%22100%%22%20height%3D%22100%%22%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%3E%3Crect%20width%3D%22100%%22%20height%3D%22100%%22%20fill%3D%22transparent%22%2F%3E%3C%2Fsvg%3E\" alt=\"AI Assistant QA Interface\" \/><\/p>\n<p>Now let\u2019s look at how MCP works.<\/p>\n<h2>MCP: consuming and providing context<\/h2>\n<p>Model Context Protocol is a standard for AI applications to discover and use external tools and context. 
Similar to how HTTP lets any client talk to any server, MCP lets any AI app connect to any context provider using the same protocol.<\/p>\n<p>ConferencePulse uses MCP in both directions.<\/p>\n<h3>As a consumer<\/h3>\n<p>The <code>McpContentClient<\/code> connects to two MCP servers at startup: Microsoft Learn and DeepWiki.<\/p>\n<pre><code class=\"language-csharp\">public async Task InitializeAsync(CancellationToken ct = default)\r\n{\r\n    var learnTransport = new HttpClientTransport(new HttpClientTransportOptions\r\n    {\r\n        Endpoint = new Uri(\"https:\/\/learn.microsoft.com\/api\/mcp\"),\r\n        TransportMode = HttpTransportMode.StreamableHttp\r\n    }, loggerFactory);\r\n    _learnClient = await McpClient.CreateAsync(learnTransport, null, loggerFactory, ct);\r\n\r\n    var deepWikiTransport = new HttpClientTransport(new HttpClientTransportOptions\r\n    {\r\n        Endpoint = new Uri(\"https:\/\/mcp.deepwiki.com\/mcp\"),\r\n        TransportMode = HttpTransportMode.StreamableHttp\r\n    }, loggerFactory);\r\n    _deepWikiClient = await McpClient.CreateAsync(deepWikiTransport, null, loggerFactory, ct);\r\n}<\/code><\/pre>\n<p>Once connected, calling a tool on any MCP server uses the same pattern:<\/p>\n<pre><code class=\"language-csharp\">var result = await _learnClient.CallToolAsync(\r\n    \"microsoft_docs_search\",\r\n    new Dictionary&lt;string, object?&gt; { [\"query\"] = query },\r\n    cancellationToken: ct);<\/code><\/pre>\n<p>Any server that speaks MCP works with this client code.<\/p>\n<h3>As a provider<\/h3>\n<p>ConferencePulse is also an MCP server. 
Any MCP-compatible client (GitHub Copilot, Claude, a custom tool) can connect and query session data.<\/p>\n<pre><code class=\"language-csharp\">[McpServerToolType]\r\npublic class ConferenceTools\r\n{\r\n    [McpServerTool(Name = \"get_session_status\", ReadOnly = true),\r\n     Description(\"Returns the current conference session status.\")]\r\n    public static string GetSessionStatus(ISessionService sessionService)\r\n    {\r\n        var session = sessionService.CurrentSession;\r\n        if (session is null) return \"No active conference session.\";\r\n        \/\/ ... build status string\r\n    }\r\n\r\n    [McpServerTool(Name = \"search_session_knowledge\", ReadOnly = true),\r\n     Description(\"Searches the session knowledge base for relevant content.\")]\r\n    public static async Task&lt;string&gt; SearchSessionKnowledge(\r\n        ISemanticSearchService searchService,\r\n        [Description(\"The search query.\")] string query,\r\n        [Description(\"Max results. Defaults to 5.\")] int maxResults = 5)\r\n    {\r\n        var results = await searchService.SearchAsync(query, maxResults);\r\n        \/\/ ... format results\r\n    }\r\n}<\/code><\/pre>\n<p>Registration takes a few lines in <code>Program.cs<\/code>:<\/p>\n<pre><code class=\"language-csharp\">builder.Services\r\n    .AddMcpServer(options =&gt; { options.ServerInfo = new() { Name = \"ConferencePulse\", Version = \"1.0.0\" }; })\r\n    .WithToolsFromAssembly(typeof(ConferenceTools).Assembly)\r\n    .WithHttpTransport();\r\n\r\napp.MapMcp(\"\/mcp\");<\/code><\/pre>\n<p>The app consumes external knowledge to answer questions and provides its own data for external tools. Same protocol in both directions.<\/p>\n<h2>Microsoft Agent Framework: multi-agent orchestration<\/h2>\n<p>For most of ConferencePulse\u2019s features, <code>IChatClient<\/code> with tools was the right choice. 
But the session summary needed something more: three specialized agents running concurrently, each with scoped tools, feeding their results into a synthesis step. That\u2019s where Microsoft Agent Framework comes in.<\/p>\n<pre><code class=\"language-csharp\">public class SessionSummaryWorkflow(IChatClient chatClient, AgentTools tools)\r\n{\r\n    public async Task&lt;string&gt; ExecuteAsync()\r\n    {\r\n        ChatClientAgent pollAnalyst = new(chatClient,\r\n            name: \"PollAnalyst\",\r\n            description: \"Analyzes poll results and trends\",\r\n            instructions: \"You are a poll analyst. Use GetAllPollResults to retrieve every poll...\",\r\n            tools: [tools.GetAllPollResults]);\r\n\r\n        ChatClientAgent questionAnalyst = new(chatClient,\r\n            name: \"QuestionAnalyst\",\r\n            description: \"Analyzes audience questions and themes\",\r\n            instructions: \"You are an audience question analyst...\",\r\n            tools: [tools.GetAudienceQuestions]);\r\n\r\n        ChatClientAgent insightAnalyst = new(chatClient,\r\n            name: \"InsightAnalyst\",\r\n            description: \"Analyzes generated insights and knowledge patterns\",\r\n            instructions: \"You are an insight analyst...\",\r\n            tools: [tools.GetAllInsights, tools.SearchKnowledge]);<\/code><\/pre>\n<p>Each <code>ChatClientAgent<\/code> wraps the same <code>IChatClient<\/code>. 
The agents get scoped tools (PollAnalyst only sees poll data, QuestionAnalyst only sees questions) and specialized instructions.<\/p>\n<p>The orchestration uses <code>AgentWorkflowBuilder.BuildConcurrent<\/code> for the fan-out, then <code>WorkflowBuilder<\/code> to compose the full pipeline:<\/p>\n<pre><code class=\"language-csharp\">        \/\/ Fan-out: three analysts run concurrently\r\n        var analysisWorkflow = AgentWorkflowBuilder.BuildConcurrent(\r\n            [pollAnalyst, questionAnalyst, insightAnalyst],\r\n            MergeAgentOutputs);\r\n\r\n        \/\/ Fan-in: synthesizer merges all findings\r\n        ChatClientAgent synthesizer = new(chatClient,\r\n            name: \"Synthesizer\",\r\n            instructions: \"Synthesize the analyses into one cohesive session summary...\");\r\n\r\n        \/\/ Compose: concurrent analysis \u2192 sequential synthesis\r\n        var analysisExec = new SubworkflowBinding(analysisWorkflow, \"Analysis\");\r\n        ExecutorBinding synthExec = synthesizer;\r\n\r\n        var composedWorkflow = new WorkflowBuilder(analysisExec)\r\n            .WithName(\"SessionSummaryPipeline\")\r\n            .BindExecutor(synthExec)\r\n            .AddEdge(analysisExec, synthExec)\r\n            .WithOutputFrom([synthExec])\r\n            .Build();\r\n\r\n        var run = await InProcessExecution.Default.RunAsync(\r\n            composedWorkflow,\r\n            \"Analyze the conference session data and provide your specialized findings.\");<\/code><\/pre>\n<p>Compare this with the poll generation workflow from earlier, which is about 10 lines using <code>IChatClient<\/code> and tools. The session summary is about 40 lines because it genuinely needs concurrent agents with scoped tools and a synthesis step.<\/p>\n<p>In ConferencePulse, the Agent Framework was the right choice for exactly one workflow. Everything else worked well with <code>IChatClient<\/code> directly. 
Both approaches use the same underlying abstraction.<\/p>\n<h2>How the building blocks fit together<\/h2>\n<p><img data-opt-id=973131131  data-opt-src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2026\/04\/ai-assistant-aspire-dashboard-scaled.webp\"  decoding=\"async\" src=\"data:image/svg+xml,%3Csvg%20viewBox%3D%220%200%20100%%20100%%22%20width%3D%22100%%22%20height%3D%22100%%22%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%3E%3Crect%20width%3D%22100%%22%20height%3D%22100%%22%20fill%3D%22transparent%22%2F%3E%3C%2Fsvg%3E\" alt=\"Aspire Dashboard showing ConferencePulse services: web app, Qdrant, PostgreSQL, and Azure OpenAI\" \/><\/p>\n<p>During the MVP Summit session, attendees interacted with features powered by different layers of the stack:<\/p>\n<table>\n<thead>\n<tr>\n<th>Feature<\/th>\n<th>Powered by<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Polls<\/td>\n<td><code>IChatClient<\/code> + tools (MEAI)<\/td>\n<\/tr>\n<tr>\n<td>Knowledge grounding<\/td>\n<td><code>IngestionPipeline<\/code> + <code>VectorStoreWriter<\/code><\/td>\n<\/tr>\n<tr>\n<td>Q&amp;A answers<\/td>\n<td><code>VectorData<\/code> + <code>IChatClient<\/code> + MCP<\/td>\n<\/tr>\n<tr>\n<td>Auto-generated insights<\/td>\n<td><code>IChatClient<\/code> (single call)<\/td>\n<\/tr>\n<tr>\n<td>Session summary<\/td>\n<td>Microsoft Agent Framework (fan-out\/fan-in)<\/td>\n<\/tr>\n<tr>\n<td>Observability<\/td>\n<td><code>UseOpenTelemetry()<\/code> + Aspire Dashboard<\/td>\n<\/tr>\n<tr>\n<td>Infrastructure<\/td>\n<td>Aspire: Qdrant + PostgreSQL + Azure OpenAI<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Each building block handles one concern and composes with the others. <code>IChatClient<\/code> shows up inside the ingestion enrichers, inside the agent tools, inside the MCP-augmented Q&amp;A, and inside the Agent Framework\u2019s <code>ChatClientAgent<\/code>. You learn it once and use it everywhere.<\/p>\n<p>Providers will change and models will evolve. 
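For example, swapping Azure OpenAI for a local Ollama model can be sketched like this (an illustrative sketch, assuming the OllamaSharp package, whose <code>OllamaApiClient<\/code> implements <code>IChatClient<\/code>; the endpoint and model name are placeholders):<\/p>\n<pre><code class=\"language-csharp\">\/\/ Sketch: swap the inner provider client; the middleware pipeline stays the same.\r\nIChatClient inner = new OllamaApiClient(new Uri(\"http:\/\/localhost:11434\"), \"llama3\");\r\n\r\nIChatClient chatClient = new ChatClientBuilder(inner)\r\n    .UseFunctionInvocation()\r\n    .UseOpenTelemetry()\r\n    .Build();<\/code><\/pre>\n<p>Code that depends on <code>IChatClient<\/code> never sees the difference. 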
The building blocks give you a stable layer to build on, and you swap implementations underneath without rewriting application code.<\/p>\n<h2>Get started<\/h2>\n<p>We\u2019re excited to see what you build with these building blocks.<\/p>\n<ul>\n<li><strong>Try ConferencePulse<\/strong>: <a href=\"https:\/\/github.com\/luisquintanilla\/dotnet-ai-conference-assistant\">the source is on GitHub<\/a>. Clone it, run <code>aspire run<\/code>, and see the full stack in action.<\/li>\n<li><strong>Learn more<\/strong> about the individual libraries:\n<ul>\n<li><a href=\"https:\/\/learn.microsoft.com\/dotnet\/ai\/ai-extensions\">Microsoft.Extensions.AI<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/dotnet\/ai\/vector-stores\/overview\">Microsoft.Extensions.VectorData<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/dotnet\/ai\/conceptual\/data-ingestion\">Microsoft.Extensions.DataIngestion<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/dotnet\/ai\/get-started-mcp\">Model Context Protocol in .NET<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/microsoft\/agent-framework\">Microsoft Agent Framework<\/a><\/li>\n<\/ul>\n<\/li>\n<li><strong>Give us feedback<\/strong>: file an issue in any of the repos or catch us on the <a href=\"https:\/\/dotnet.microsoft.com\/live\">.NET Community Standup<\/a>.<\/li>\n<\/ul>\n<p>Now that you\u2019ve seen how these building blocks compose, give them a try and let us know what you think.<\/p>\n<p>The post <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/building-ai-conference-app-dotnet-composable-stack\/\">Building an AI-Powered Conference App with .NET\u2019s Composable AI Stack<\/a> appeared first on <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\">.NET Blog<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>Building AI features into .NET applications often means stitching together models, vector databases, ingestion pipelines, and agent frameworks from different 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3969,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[7],"tags":[],"class_list":["post-3968","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-dotnet"],"_links":{"self":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3968","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/comments?post=3968"}],"version-history":[{"count":0,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/posts\/3968\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media\/3969"}],"wp:attachment":[{"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/media?parent=3968"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/categories?post=3968"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/wp-json\/wp\/v2\/tags?post=3968"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}