In a highly manual world, AI is the true transformer.

Indeed, many workflows, actions, and processes that have historically relied upon wholly human activation are now ripe for intelligent, labor-saving automation that can turbocharge efficiency.

In one area in particular – regulated communication surveillance – AI can punch way above its weight.

Here, in this most high-risk and resource-heavy of environments, Machine Learning, for example, can deliver disproportionate levels of cost savings whilst simultaneously mitigating many times more risk than the once-essential human equivalent.

These ‘reviewers’ – highly trained surveillance experts tasked with spotting potential compliance breaches in regulated communications – are expensive and often required in large numbers. Banks and law firms, for example, may employ hundreds – all reading thousands of emails, messages, and voice call transcriptions every day, looking for risk.

AI can now do much of that heavy lifting: automatically, 24/7, and without ever needing to blink.

To capitalize, firms (and/or their technology service providers) must partner with a vendor whose solutions come with all of that capability baked-in.

“In the past, regulated firms would have a rarely-updated lexicon – a dictionary, if you will, of words and phrases which, to their trained human reviewers, indicated potential non-compliance. Today, thanks to AI, those businesses can work so much smarter,” says Chris Stapenhurst, Senior Principal Product Manager at leading data management experts Veritas, whose discovery, surveillance, and file analysis products leverage AI to the max.

“Machine Learning, for example, combs through every communication automatically and turns potential regulatory violations into alerts so that human reviewers can scrutinize them more closely.

However, most importantly, the Machine Learning can be programmed to ignore millions of words or phrases in, say, junk emails, which may appear risky but which the firm has identified as innocuous, such as ‘Out of Office’, ‘prohibited’, or ‘unauthorized’. That can reduce the number of false positives by up to 98% – and support a significant reduction in reviewer headcount.”
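As a rough illustration of the mechanism Stapenhurst describes – lexicon matching with a suppression list for phrases the firm has already cleared as harmless – here is a minimal Python sketch. The phrase lists, names, and matching logic are invented for illustration and are not Veritas’s implementation.

```python
# Minimal sketch of lexicon flagging with an innocuous-phrase
# suppression list. All phrases and names here are illustrative,
# not drawn from any real surveillance product.

RISK_LEXICON = {"prohibited", "unauthorized", "guaranteed returns"}

# Boilerplate the firm has reviewed and cleared as innocuous.
INNOCUOUS_PHRASES = [
    "out of office",
    "unauthorized use of this email is prohibited",
]

def flag_message(text: str) -> bool:
    """Return True if the message should be escalated to a reviewer."""
    lowered = text.lower()
    # Strip known-innocuous boilerplate before matching risk terms,
    # so standard disclaimers do not trigger false positives.
    for phrase in INNOCUOUS_PHRASES:
        lowered = lowered.replace(phrase, " ")
    return any(term in lowered for term in RISK_LEXICON)

messages = [
    "Out of Office: I am away until Monday.",
    "FYI - unauthorized use of this email is prohibited.",
    "These trades are prohibited under the client mandate.",
]
alerts = [m for m in messages if flag_message(m)]
# only the third message is flagged; the first two are suppressed
```

In this toy version, the suppression list does the work Stapenhurst attributes to the Machine Learning: the risky-looking words in junk and disclaimer text never reach the reviewer.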

Of course, the best surveillance strategy is multi-layered: a mix of old-school, manually pre-built lexicons plus AI-powered add-ons. Firms and their service providers should ask whether vendors’ solutions go beyond the ability to search for just basic terms and phrases. Do they model behaviours? Do they leverage sentiment analysis?

Whilst ticking all those boxes, the Veritas solution also – uniquely – teaches and updates itself continuously based on human reviewer activity. Its Machine Learning engine sits inside customers’ UC platforms and, as reviewers mark and label items, it learns to distinguish an item marked relevant from one marked irrelevant, and the risky from the innocuous.

“It looks at a variety of different parameters and basically conducts its own classification,” says Stapenhurst.

“It examines the communication’s metadata: when it was sent, who it was sent between, what the subject line was, and what the content is.
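The parameters Stapenhurst lists can be pictured as a feature vector. The Python sketch below shows one plausible way to turn a message’s metadata into classification features; every field name, email domain, and rule here is an assumption made for illustration, not Veritas’s actual model.

```python
from dataclasses import dataclass

# Illustrative only: one plausible mapping from the metadata mentioned
# above (when sent, who between, subject, content) to features a
# classifier could consume. All names and rules are assumptions.

@dataclass
class Message:
    sent_hour: int      # hour of day the message was sent, 0-23
    sender: str
    recipients: list
    subject: str
    body: str

def extract_features(msg: Message, flagged_senders: set) -> dict:
    """Map a message's metadata to boolean classification features."""
    return {
        "sent_out_of_hours": msg.sent_hour < 7 or msg.sent_hour >= 20,
        "sender_previously_flagged": msg.sender in flagged_senders,
        "external_recipient": any(
            not r.endswith("@bank.example") for r in msg.recipients
        ),
        "empty_subject": not msg.subject.strip(),
    }

msg = Message(22, "j.doe@bank.example", ["x@rival.example"], "", "call me")
features = extract_features(msg, flagged_senders={"j.doe@bank.example"})
# all four feature flags are True for this late-night, external message
```

A learned classifier would weight such features from reviewer labels rather than hard-code them, but the shape of the input is the same.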

“Many of our competitors take a different approach. For example, in financial services, they may hire an expert in market manipulation and put them with an AI expert to create a surveillance model based on their joint expertise. They might work together for weeks or months to create a super-accurate lexicon but, the very next day, it’s out of date because it is a non-evolutionary entity which might only get revisited every quarter.

“New violations emerge: new ways of saying and doing things. Every day it gets less accurate, until you have to invest all over again in your two experts.”

Transparency, too, is an increasingly differentiating factor, and one on which the Veritas solution scores heavily.

When regulators ask firms how their AI deployments are identifying risk of non-compliance, they expect to receive a full answer.

“A firm may be asked what it is doing to identify insider trading, and it may say that it is using AI,” says Stapenhurst. “But the regulators are now demanding more of a response than that. They don’t expect firms or their service providers to get down into the algorithm and explain exactly how the system works, but they do need to demonstrate a level of transparency.

“So, when our artificial intelligence alerts a reviewer to a communication it thinks is a risk, it gives an overall relevancy score that is also broken down into what we call ‘contributing factors’. For example, a reviewer’s alert may inform them that an item is relevant because the email sender is a known abuser, or it has been previously classified as market abuse, or it contains lots of attachments.
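A score broken down into contributing factors can be sketched very simply. In the hypothetical Python example below, each factor carries a weight and the alert reports which factors fired; the factor names and weights are invented for illustration and are not Veritas’s actual scoring model.

```python
# Hypothetical sketch of an explainable relevancy score: each
# contributing factor carries a weight, and the alert reports
# which factors fired. Names and weights are invented.

FACTOR_WEIGHTS = {
    "sender_is_known_abuser": 0.5,
    "previously_classified_as_market_abuse": 0.3,
    "many_attachments": 0.1,
}

def score_item(factors: dict) -> tuple:
    """Return (relevancy score capped at 1.0, contributing factors)."""
    fired = [name for name, present in factors.items() if present]
    score = min(1.0, sum(FACTOR_WEIGHTS[name] for name in fired))
    return score, fired

score, contributing = score_item({
    "sender_is_known_abuser": True,
    "previously_classified_as_market_abuse": False,
    "many_attachments": True,
})
# score is roughly 0.6; 'contributing' names the two factors that fired
```

Returning the fired factors alongside the number is what lets a reviewer – or a regulator – see why the item was surfaced, rather than just how strongly.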

“Not only does that level of explainability help reviewers understand more about why they have been alerted to the item, it also helps the firm explain to the regulator how their AI works.”

Of course, as with everything AI-related, humans currently remain indispensable. In regulatory compliance, human reviewers remain an absolute necessity – just perhaps not in such large numbers per firm.

However, AI is also about improving the reviewer experience. When faced with reading 1,000 messages, for example, it is highly desirable for AI to be able to instantly remove 95% of them because it has learned that they are harmless junk.

“There’s no question that AI is now a ‘must have’ not a ‘nice to have’,” says Stapenhurst.

“Reviewers are highly-trained and do what they do all day every day, almost on instinct. Reviewing the odd piece of junk is fine, but if 500 emails are just pure junk, it’s extremely mentally tiring. That’s why firms need reviewers to yield more risk from smaller review sets, which is what AI can deliver.

“Every year, many of the big banks will be fined for non-compliance of one kind or another; it’s the cost of doing business. The question is: how much will they get hit? One big influencing factor is how defensible those breaches are and what steps they took to try and prevent them.

“If their surveillance regime – and the AI tools within it – can help demonstrate to the regulators that they are internally policing themselves effectively, they’ll be treated more favourably. And that can represent a high-value return on investment.”

To learn more about how Veritas can help your and your customers’ businesses leverage AI-powered compliance tools, click here.



from UC Today https://ift.tt/apMFLG3