Microsoft’s New AI Safety Tool Not Only Finds Errors, But Also Fixes Them

25/9/24

By:

Amitabh Srivastav

Introducing the "Correction" feature for Azure AI, designed to detect and rewrite inaccuracies before they reach users.
Microsoft is stepping up its AI safety game with a new feature called “Correction,” which promises not only to detect but also to rewrite inaccurate AI-generated outputs. Launched as part of Azure AI Studio’s suite of safety tools, this feature is now available in preview to companies using Microsoft Azure to power their AI systems.

The Correction feature is designed to identify errors in AI models, commonly referred to as “hallucinations,” and fix them in real time by aligning AI outputs with customer-provided source material. The system scans content for mistakes, flags them, and explains why they are incorrect. The key promise of the feature is that it rewrites the inaccurate sections before they are shown to the user, streamlining AI interactions and reducing the spread of misinformation.
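The scan-flag-rewrite loop described above can be sketched in a few lines of Python. This is an illustrative toy, not Microsoft's actual API or models: the `support_score` heuristic (token overlap with the grounding document) and the `0.6` threshold are assumptions chosen purely for the example.

```python
def _tokens(text):
    """Lowercase, punctuation-naive tokenization (toy heuristic)."""
    return set(text.lower().split())

def support_score(sentence, grounding_doc):
    """Fraction of a sentence's tokens that also appear in the grounding doc."""
    sent = _tokens(sentence)
    if not sent:
        return 1.0
    return len(sent & _tokens(grounding_doc)) / len(sent)

def correct_output(sentences, grounding_doc, threshold=0.6):
    """Flag sentences below the support threshold and rewrite them.

    Returns (corrected_sentences, flags), where flags explains each rewrite,
    mirroring the scan -> flag -> explain -> rewrite loop described above.
    """
    corrected, flags = [], []
    ground_sents = [s.strip() for s in grounding_doc.split(".") if s.strip()]
    for sentence in sentences:
        score = support_score(sentence, grounding_doc)
        if score >= threshold:
            corrected.append(sentence)  # sufficiently grounded; keep as-is
        else:
            # Rewrite: fall back to the grounding sentence with most overlap
            best = max(ground_sents, key=lambda g: support_score(sentence, g))
            corrected.append(best)
            flags.append(f"rewrote (support={score:.2f}): {sentence!r}")
    return corrected, flags
```

A real system would replace the overlap heuristic with a trained groundedness classifier and a generative rewriter, but the control flow, rewriting flagged content before it reaches the user, is the same.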

AI Safety: Microsoft’s New Focus

The addition of Correction to Microsoft’s growing AI safety portfolio highlights the company’s commitment to minimizing errors in generative AI. According to Microsoft, this tool leverages both small and large language models to compare AI outputs with “grounding documents,” which can include a company’s internal datasets or reference materials. This process of “grounding” ensures that the AI model stays aligned with real-world facts, making it more reliable for enterprises.
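Pairing small and large models in this way is a common cost-saving pattern: a cheap check runs on every output, and the expensive rewrite is invoked only when the check fails. The sketch below illustrates that dispatch pattern under stated assumptions; the `cheap_groundedness_check` and `expensive_rewrite` functions are stand-ins for the small classifier and large language model, not Microsoft's implementations.

```python
def cheap_groundedness_check(output, grounding_doc):
    """Stand-in for a small, fast classifier: pass only if every word of
    the output appears somewhere in the grounding document."""
    return all(word.lower() in grounding_doc.lower() for word in output.split())

def expensive_rewrite(output, grounding_doc):
    """Stand-in for a large-model rewrite: fall back to the grounded text."""
    return grounding_doc

def ground_output(output, grounding_doc):
    """Invoke the costly large-model rewrite only when the cheap check fails.

    Returns (final_text, was_rewritten).
    """
    if cheap_groundedness_check(output, grounding_doc):
        return output, False   # already grounded; no rewrite needed
    return expensive_rewrite(output, grounding_doc), True
```

The design choice here is economic as much as technical: most outputs pass the cheap check, so the large model runs only on the small fraction of content that is actually ungrounded.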

However, Microsoft is clear that Correction isn’t foolproof. “Groundedness detection does not solve for ‘accuracy,’ but helps to align generative AI outputs with grounding documents,” a company spokesperson told TechCrunch. In other words, while the tool can help reduce errors, it isn’t a magic fix for all the inaccuracies AI might produce.

How Correction Stacks Up Against Competitors

Microsoft’s rivals in the AI space, like Google, are also rolling out similar solutions. Google’s Vertex AI platform offers a feature that grounds AI models by checking outputs against Google Search, a company’s internal data, and third-party datasets. But Microsoft’s Correction tool aims to provide a more hands-on solution by not only detecting inaccuracies but also automatically rewriting them.

The Future of Safe AI Output

As generative AI continues to evolve, the need for safety mechanisms that reduce misinformation and hallucinations is becoming increasingly critical. By integrating tools like Correction, Microsoft is positioning itself at the forefront of this movement, providing businesses with more reliable AI systems.

With the growing reliance on AI in critical fields like healthcare, law, and finance, these innovations may soon become essential to ensuring the responsible use of AI technology.

While it’s still in the preview phase, Microsoft’s Correction tool represents a significant step forward in AI safety, helping companies not only find but also fix AI-generated errors before they reach the end-user.
