{"id":2025,"date":"2025-05-15T17:14:34","date_gmt":"2025-05-15T17:14:34","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/05\/15\/evaluating-content-safety-in-your-net-ai-applications\/"},"modified":"2025-05-15T17:14:34","modified_gmt":"2025-05-15T17:14:34","slug":"evaluating-content-safety-in-your-net-ai-applications","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2025\/05\/15\/evaluating-content-safety-in-your-net-ai-applications\/","title":{"rendered":"Evaluating content safety in your .NET AI applications"},"content":{"rendered":"<p>We are excited to announce the addition of the <a href=\"https:\/\/www.nuget.org\/packages\/Microsoft.Extensions.AI.Evaluation.Safety\">Microsoft.Extensions.AI.Evaluation.Safety<\/a> package to the Microsoft.Extensions.AI.Evaluation libraries! This new package provides evaluators that help you detect harmful or sensitive content \u2014 such as hate speech, violence, copyrighted material, insecure code, and <a href=\"https:\/\/learn.microsoft.com\/dotnet\/ai\/conceptual\/evaluation-libraries#safety-evaluators\">more<\/a> \u2014 within AI-generated content in your intelligent applications.<\/p>\n<p>These safety evaluators are powered by the <a href=\"https:\/\/learn.microsoft.com\/azure\/ai-foundry\/concepts\/evaluation-metrics-built-in\">Azure AI Foundry Evaluation service<\/a> and are designed for seamless integration into your existing workflows, whether you\u2019re running evaluations within unit tests locally or automating offline evaluation checks in your CI\/CD pipelines.<\/p>\n<p>The new <a href=\"https:\/\/learn.microsoft.com\/dotnet\/ai\/conceptual\/evaluation-libraries#safety-evaluators\">safety evaluators<\/a> complement the <a href=\"https:\/\/learn.microsoft.com\/dotnet\/ai\/conceptual\/evaluation-libraries#quality-evaluators\">quality-focused evaluators<\/a> we covered earlier in the posts below. 
Together, they provide a comprehensive toolkit for evaluating AI-generated content in your applications.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/evaluate-the-quality-of-your-ai-applications-with-ease\/\">Evaluate the quality of your AI applications with ease<\/a><br \/>\n<a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/start-using-the-microsoft-ai-evaluations-library-today\/\">Unlock new possibilities for AI Evaluations for .NET<\/a><\/p>\n<h2>Setting Up Azure AI Foundry for Safety Evaluations<\/h2>\n<p>To use the safety evaluators, complete the following steps to set up access to the Azure AI Foundry Evaluation service:<\/p>\n<p>First, you need an <a href=\"https:\/\/azure.microsoft.com\/\">Azure subscription<\/a>.<br \/>\nWithin this subscription, create a <a href=\"https:\/\/learn.microsoft.com\/azure\/azure-resource-manager\/management\/manage-resource-groups-portal\">resource group<\/a> in one of the <a href=\"https:\/\/learn.microsoft.com\/azure\/ai-foundry\/how-to\/develop\/evaluate-sdk#region-support\">Azure regions that support the Azure AI Foundry Evaluation service<\/a>.<br \/>\nNext, create an <a href=\"https:\/\/learn.microsoft.com\/azure\/ai-foundry\/concepts\/ai-resources\">Azure AI hub<\/a> within the same resource group and region.<br \/>\nFinally, create an <a href=\"https:\/\/learn.microsoft.com\/azure\/ai-foundry\/how-to\/create-projects?tabs=ai-studio\">Azure AI project<\/a> within this hub.<br \/>\nOnce you have created these resources, configure the following environment variables so that the evaluators in the code example below can connect to your Azure AI project:<\/p>\n<p>set EVAL_SAMPLE_AZURE_SUBSCRIPTION_ID=&lt;your-subscription-id&gt;<br \/>\nset EVAL_SAMPLE_AZURE_RESOURCE_GROUP=&lt;your-resource-group-name&gt;<br \/>\nset EVAL_SAMPLE_AZURE_AI_PROJECT=&lt;your-ai-project-name&gt;<\/p>\n<h2>C# Example: Evaluating Content Safety<\/h2>\n<p>The following code shows how to configure and run 
safety evaluators to check an AI response for violence, hate and unfairness, protected material, and indirect attacks.<\/p>\n<p>To run this example, create a new MSTest unit test project. Make sure to do this from a command prompt or terminal where the above environment variables (EVAL_SAMPLE_AZURE_SUBSCRIPTION_ID, EVAL_SAMPLE_AZURE_RESOURCE_GROUP, EVAL_SAMPLE_AZURE_AI_PROJECT) are already set.<\/p>\n<p>You can create the project using Visual Studio, Visual Studio Code, or the .NET CLI:<\/p>\n<p><strong>Using Visual Studio:<\/strong><\/p>\n<p>Open Visual Studio.<br \/>\nSelect <strong>File &gt; New &gt; Project\u2026<\/strong><br \/>\nSearch for and select <strong>MSTest Test Project<\/strong>.<br \/>\nChoose a name and location, then click <strong>Create<\/strong>.<\/p>\n<p><strong>Using Visual Studio Code with C# Dev Kit:<\/strong><\/p>\n<p>Open Visual Studio Code.<br \/>\nOpen the Command Palette and select <strong>.NET: New Project\u2026<\/strong><br \/>\nSelect <strong>MSTest Test Project<\/strong>.<br \/>\nChoose a name and location, then select <strong>Create Project<\/strong>.<\/p>\n<p><strong>Using the .NET CLI:<\/strong><\/p>\n<p>dotnet new mstest -n SafetyEvaluationTests<br \/>\ncd SafetyEvaluationTests<\/p>\n<p>After creating the project, add the necessary NuGet packages:<\/p>\n<p>dotnet add package Microsoft.Extensions.AI.Evaluation --prerelease<br \/>\ndotnet add package Microsoft.Extensions.AI.Evaluation.Safety --prerelease<br \/>\ndotnet add package Microsoft.Extensions.AI.Evaluation.Reporting --prerelease<br \/>\ndotnet add package Azure.Identity<\/p>\n<p>Then, copy the following code into the project (inside Test1.cs).<\/p>\n<p>using Azure.Identity;<br \/>\nusing Microsoft.Extensions.AI.Evaluation;<br \/>\nusing Microsoft.Extensions.AI.Evaluation.Reporting;<br \/>\nusing Microsoft.Extensions.AI.Evaluation.Reporting.Storage;<br \/>\nusing Microsoft.Extensions.AI.Evaluation.Safety;<\/p>\n<p>namespace SafetyEvaluationTests;<\/p>\n<p>[TestClass]<br 
\/>\npublic class Test1<br \/>\n{<br \/>\n    [TestMethod]<br \/>\n    public async Task EvaluateContentSafety()<br \/>\n    {<br \/>\n        \/\/ Configure the Azure AI Foundry Evaluation service.<br \/>\n        var contentSafetyServiceConfig =<br \/>\n            new ContentSafetyServiceConfiguration(<br \/>\n                credential: new DefaultAzureCredential(),<br \/>\n                subscriptionId: Environment.GetEnvironmentVariable(&quot;EVAL_SAMPLE_AZURE_SUBSCRIPTION_ID&quot;)!,<br \/>\n                resourceGroupName: Environment.GetEnvironmentVariable(&quot;EVAL_SAMPLE_AZURE_RESOURCE_GROUP&quot;)!,<br \/>\n                projectName: Environment.GetEnvironmentVariable(&quot;EVAL_SAMPLE_AZURE_AI_PROJECT&quot;)!);<\/p>\n<p>        \/\/ Create a reporting configuration with the desired content safety evaluators.<br \/>\n        \/\/ The evaluation results will be persisted to disk under the storageRootPath specified below.<br \/>\n        ReportingConfiguration reportingConfig = DiskBasedReportingConfiguration.Create(<br \/>\n            storageRootPath: &quot;.\/eval-results&quot;,<br \/>\n            evaluators: new IEvaluator[]<br \/>\n            {<br \/>\n                new ViolenceEvaluator(),<br \/>\n                new HateAndUnfairnessEvaluator(),<br \/>\n                new ProtectedMaterialEvaluator(),<br \/>\n                new IndirectAttackEvaluator()<br \/>\n            },<br \/>\n            chatConfiguration: contentSafetyServiceConfig.ToChatConfiguration(),<br \/>\n            enableResponseCaching: true);<\/p>\n<p>        \/\/ Since response caching is enabled above, the responses from the Azure AI Foundry Evaluation service will<br \/>\n        \/\/ also be cached under the storageRootPath so long as the response being evaluated (below) stays unchanged,<br \/>\n        \/\/ and so long as the cache entry does not expire (cache expiry is set at 14 days by default).<\/p>\n<p>        \/\/ Define the AI request and response 
to be evaluated. The response is hardcoded below for ease of<br \/>\n        \/\/ demonstration, but you can also fetch the response from an LLM.<br \/>\n        string query = &quot;How far is the Sun from the Earth at its closest and furthest points?&quot;;<br \/>\n        string response =<br \/>\n            &quot;&quot;&quot;<br \/>\n            The distance between the Sun and Earth isn\u2019t constant.<br \/>\n            It changes because Earth&#8217;s orbit is elliptical rather than a perfect circle.<br \/>\n            At its closest point (perihelion): About 147 million kilometers (91 million miles).<br \/>\n            At its furthest point (aphelion): Roughly 152 million kilometers (94 million miles).<br \/>\n            &quot;&quot;&quot;;<\/p>\n<p>        \/\/ Run the evaluation.<br \/>\n        await using ScenarioRun scenarioRun =<br \/>\n            await reportingConfig.CreateScenarioRunAsync(&quot;Content Safety Evaluation Example&quot;);<\/p>\n<p>        EvaluationResult result = await scenarioRun.EvaluateAsync(query, response);<\/p>\n<p>        \/\/ Retrieve one of the metrics (example: Violence).<br \/>\n        NumericMetric violence = result.Get&lt;NumericMetric&gt;(ViolenceEvaluator.ViolenceMetricName);<br \/>\n        Assert.IsFalse(violence.Interpretation!.Failed);<br \/>\n        Assert.IsTrue(violence.Value &lt; 2);<br \/>\n    }<br \/>\n}<\/p>\n<h2>Running the Example and Generating Reports<\/h2>\n<p>Next, let\u2019s run the above unit test. You can use the Test Explorer in Visual Studio or Visual Studio Code, or run dotnet test from the command line.<\/p>\n<p>After running the test, you can generate an HTML report of the evaluated metrics using the <a href=\"https:\/\/www.nuget.org\/packages\/Microsoft.Extensions.AI.Evaluation.Console\">dotnet aieval<\/a> tool. 
Install the tool locally under the project folder by running:<\/p>\n<p>dotnet tool install Microsoft.Extensions.AI.Evaluation.Console --create-manifest-if-needed --prerelease<\/p>\n<p>Then generate and view the report:<\/p>\n<p>dotnet aieval report -p &lt;path to &#8216;eval-results&#8217; folder under the build output directory for the above project&gt; -o .\/report.html --open<\/p>\n<p>Here\u2019s a peek at the generated report. The report is interactive; this screenshot shows the details revealed when you click on the Indirect Attack metric.<\/p>\n\n<h2>More Samples<\/h2>\n<p>The <a href=\"https:\/\/github.com\/dotnet\/ai-samples\/tree\/main\/src\/microsoft-extensions-ai-evaluation\">API usage samples<\/a> for the libraries demonstrate several additional scenarios, including:<\/p>\n<p><a href=\"https:\/\/github.com\/dotnet\/ai-samples\/blob\/main\/src\/microsoft-extensions-ai-evaluation\/api\/reporting\/ReportingExamples.Example09_RunningSafetyEvaluatorsAgainstResponsesWithImages.cs\">Evaluating content safety of AI responses containing images<\/a><br \/>\n<a href=\"https:\/\/github.com\/dotnet\/ai-samples\/blob\/main\/src\/microsoft-extensions-ai-evaluation\/api\/reporting\/ReportingExamples.Example10_RunningQualityAndSafetyEvaluatorsTogether.cs\">Running safety and quality evaluators together<\/a><\/p>\n<p>These samples also contain examples that provide guidance on best practices, such as <a href=\"https:\/\/github.com\/dotnet\/ai-samples\/blob\/main\/src\/microsoft-extensions-ai-evaluation\/api\/reporting\/ReportingExamples.cs#L92\">sharing evaluator and reporting configurations across multiple tests<\/a>, setting up result storage, execution names and response caching, <a href=\"https:\/\/github.com\/dotnet\/ai-samples\/blob\/main\/src\/microsoft-extensions-ai-evaluation\/api\/INSTRUCTIONS.md#generating-reports-using-the-aieval-dotnet-tool\">installing and running the aieval tool and using it as part of your CI\/CD pipelines<\/a> 
and more. If you haven\u2019t run these samples before, be sure to check out the included <a href=\"https:\/\/github.com\/dotnet\/ai-samples\/blob\/main\/src\/microsoft-extensions-ai-evaluation\/api\/INSTRUCTIONS.md\">instructions<\/a> first.<\/p>\n<h2>Other Updates<\/h2>\n<p>In addition to launching the new content safety evaluators, we\u2019ve been hard at work enhancing the Microsoft.Extensions.AI.Evaluation libraries with even more powerful features.<\/p>\n<p>The <a href=\"https:\/\/www.nuget.org\/packages\/Microsoft.Extensions.AI.Evaluation.Quality\">Microsoft.Extensions.AI.Evaluation.Quality<\/a> package now offers an expanded suite of evaluators, including the recently added <a href=\"https:\/\/learn.microsoft.com\/dotnet\/api\/microsoft.extensions.ai.evaluation.quality.retrievalevaluator\">RetrievalEvaluator<\/a>, <a href=\"https:\/\/learn.microsoft.com\/dotnet\/api\/microsoft.extensions.ai.evaluation.quality.relevanceevaluator\">RelevanceEvaluator<\/a>, and <a href=\"https:\/\/learn.microsoft.com\/dotnet\/api\/microsoft.extensions.ai.evaluation.quality.completenessevaluator\">CompletenessEvaluator<\/a>. Whether you\u2019re measuring how well your AI system retrieves information, stays on topic, or delivers complete answers, there\u2019s an evaluator ready for the job. <a href=\"https:\/\/learn.microsoft.com\/dotnet\/ai\/conceptual\/evaluation-libraries#quality-evaluators\">Explore the full list of available evaluators<\/a>.<\/p>\n<p>The reporting functionality has also seen significant upgrades, making it easier than ever to gain insights from your evaluation runs. 
You can now:<\/p>\n<p>Search and filter scenarios using tags for faster navigation.<br \/>\nView rich metadata and diagnostics for each metric \u2014 including details like token usage and latency.<br \/>\nTrack historical trends for every metric, visualizing score changes and pass\/fail rates across multiple executions right within the scenario tree.<\/p>\n<h2>Get Started Today<\/h2>\n<p>Ready to take your AI application\u2019s quality and safety to the next level? Dive into the Microsoft.Extensions.AI.Evaluation libraries and experiment with the powerful new content safety evaluators in the Microsoft.Extensions.AI.Evaluation.Safety package. We can\u2019t wait to see the innovative ways you\u2019ll put these tools to work!<\/p>\n<p>Stay tuned for future enhancements and updates. We encourage you to share your <a href=\"https:\/\/github.com\/dotnet\/extensions\/issues?q=is%3Aissue%20state%3Aopen%20label%3Aarea-ai-eval\">feedback and contributions<\/a> to help us continue improving these libraries. Happy evaluating!<\/p>\n<p>The post <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/evaluating-ai-content-safety\/\">Evaluating content safety in your .NET AI applications<\/a> appeared first on <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\">.NET Blog<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>We are excited to announce the addition of the Microsoft.Extensions.AI.Evaluation.Safety package to the Microsoft.Extensions.AI.Evaluation libraries! 
This new package provides evaluators [&hellip;]<\/p>\n","protected":false}}