In Stanley Kubrick’s cinematic masterpiece “2001: A Space Odyssey,” an advanced onboard AI computer named HAL provides a spaceship crew with critical updates and essential information, ultimately shaping the mission’s trajectory as it journeys to Jupiter.

HAL is no ordinary computer. Indeed, the 1968 film introduces us to a fully sentient AI system that provides real-time updates to the crew while monitoring and maintaining the spacecraft’s systems and reporting anomalies and possible malfunctions. In one scene, HAL informs the crew that a radio communication device will soon fail. When asked for more information, HAL responds with complete clarity – and authority.

Why Am I Seeing This? Explain Yourself, Please

Today’s AI systems are nowhere near HAL’s sentience. But they are powerful and capable enough to monitor and assess enterprise activity and help detect financial fraud.

For example, employees tasked with detecting fraud increasingly rely on AI and machine learning to surface anomalies in email correspondence.

But the fraud alert in and of itself isn’t enough. When a reviewer is presented with a recommendation, she needs to understand how the AI reached that conclusion. In short, she needs complete assurance that the AI is accurate.

“The next revolution in machine learning and AI will be about explainability,” said Chris Stapenhurst, Senior Principal Product Manager at Veritas. “Reviewers need to understand why and how the AI engine arrives at its decisions,” he said.

When a reviewer faces red flags across dozens of emails the AI has flagged, she needs to know why. The system needs to provide transparency into its reasoning, Stapenhurst explained.

The AI detects certain keywords in the subject line or the content of an email that point toward relevance. But it goes beyond keywords: it also factors in metadata attributes such as message direction and participants, and draws on natural-language-processing signals such as sentiment analysis.
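To make the idea concrete, here is a minimal sketch of how keyword hits, metadata attributes, and a sentiment signal might be combined into one relevance score. The keyword list, weights, and function name are illustrative assumptions, not Veritas’ actual implementation.

```python
# Hypothetical relevance scoring sketch. Keywords, weights, and the
# "external.example" domain check are illustrative only.

FLAGGED_KEYWORDS = {"wire transfer", "offshore", "shell company"}

def relevance_score(subject, body, direction, participants, sentiment):
    """Return a 0-1 relevance score from simple weighted signals."""
    text = f"{subject} {body}".lower()
    keyword_hits = sum(kw in text for kw in FLAGGED_KEYWORDS)

    score = 0.0
    score += min(keyword_hits * 0.25, 0.5)  # keyword evidence, capped at 0.5
    if direction == "outbound":             # metadata: message direction
        score += 0.2
    if any(p.endswith("@external.example") for p in participants):
        score += 0.2                        # metadata: external participants
    score += 0.1 * max(0.0, -sentiment)     # negative sentiment adds weight
    return min(score, 1.0)
```

A real engine would learn these weights from labeled data rather than hard-coding them, but the shape of the calculation — several independent signals contributing to one score — is what makes per-item explanations possible later.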

“It could be designed with a classification engine that provides out-of-the-box, expert pre-trained models that can identify items like money laundering,” he said.

Stapenhurst continued: “It places a tag, which is then factored in as a contributing element by a machine learning engine to assist with predictions that support human input. This type of system is often considered augmented AI: AI plus human interaction.”

“It learns to predict which items may be relevant for the reviewer with specific tags like ‘suspected money laundering,’ and can even accept keywords and phrases submitted directly by reviewers to augment its learning as part of the item review process.”
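The augmented-AI loop Stapenhurst describes can be sketched as a model whose vocabulary grows with reviewer input. The class and method names below are hypothetical; a production system would use a trained classifier rather than simple substring matching.

```python
# Hypothetical sketch of the augmented-AI feedback loop: reviewer tags and
# submitted keywords feed back into the model. Names are illustrative.

class AugmentedReviewModel:
    def __init__(self):
        self.keywords = {"money laundering"}  # seed vocabulary
        self.tag_counts = {}                  # reviewer tag feedback

    def predict_relevant(self, text):
        """Flag an item when any learned keyword appears in its text."""
        text = text.lower()
        return any(kw in text for kw in self.keywords)

    def submit_review(self, tag, extra_keywords=()):
        """Reviewer confirms a tag and may contribute new keywords,
        which immediately widen what the model can flag."""
        self.tag_counts[tag] = self.tag_counts.get(tag, 0) + 1
        self.keywords.update(kw.lower() for kw in extra_keywords)
```

The point of the sketch is the feedback path: each review both records the human’s judgment (the tag) and, optionally, teaches the model new signals to watch for — the “human interaction” half of augmented AI.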

Transparency Creates Trust

When Sherlock Holmes famously says, “Elementary, my dear Watson,” he finally reveals the reasoning behind the conclusions that solved the crime. The reader feels satisfied. Watson is relieved of doubt. Trust in Holmes is reinforced once again.

Holmes was an uncanny genius who, through dramatic flourishes, kept the reader in the dark until the end of the story. Not that he was opaque. Far from it. He was the very portrait of transparency; he simply made us wait until the end of the story to understand his reasoning.

That’s precisely what a reviewer needs to do her job. Not the waiting, per se. She just needs to trust her AI in that moment.

Ask for proof. Get clear explanations. Transparency is the way.

Veritas AI: One Window Delivers Transparency You Can Trust

Veritas’ AI system provides a single-pane visual representation within its review platform. It assigns a relevance score to each item, along with a detailed explanation of why the AI deemed it relevant.

This includes the involvement of specific users, keywords in the subject and content, and classification tags.

The system also shows factors that contribute toward irrelevance, giving reviewers and managers insight into its decision-making.

That transparency also saves time. It not only builds trust in the AI system but also pinpoints precisely which elements of a message the reviewer should focus on.
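A per-item explanation like the one described above might be structured as a score bundled with its ranked contributing factors. The field names and helper below are hypothetical, not the actual Veritas payload.

```python
# Hypothetical sketch of a per-item explanation payload: a relevance score
# plus its contributing factors, ranked by weight. Field names are
# illustrative only.

def explain_item(item_id, score, factors):
    """Bundle a relevance score with its contributing factors,
    strongest factor first, so a reviewer sees the 'why' up front."""
    return {
        "item_id": item_id,
        "relevance": score,
        "contributing_factors": sorted(
            factors, key=lambda f: f["weight"], reverse=True
        ),
    }

explanation = explain_item(
    "msg-1042",
    0.87,
    [
        {"type": "keyword", "value": "offshore", "weight": 0.4},
        {"type": "participant", "value": "external domain", "weight": 0.3},
        {"type": "tag", "value": "suspected money laundering", "weight": 0.5},
    ],
)
```

Ranking factors by weight is what lets the review window surface the most important evidence first, which is exactly the time-saving behavior the article describes.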

Furthermore, Veritas’ approach extends beyond AI and ML to its other solutions.

It also offers transparent policies within its classification engine, allowing reviewers to openly access and modify decision logic and parameters.

Stapenhurst emphasized that the AI and ML tools offered by Veritas are not limited to email surveillance but can be applied to various forms of digitized data, including text messages, chat, audio, and video content.

The continuous active learning approach allows the system to adapt to new information and ensure accuracy across different data sources.

“Transparency leads to meaningful understanding and understanding leads to adoption,” Stapenhurst emphasized. “It’s always on. It’s always learning.”

We’re a long way from HAL. But with Veritas, we’re a lot closer to transparency.

Click here for a detailed white paper that explains how to optimize surveillance with machine learning.



from UC Today https://ift.tt/qurJ1YI