How Regulatory-Grade AI Helps with Model Risk Management as GenAI Impacts Businesses

While AI is far from new, enterprises are viewing the latest wave, notably GenAI, with caution even as adoption accelerates. Currently, it is relatively straightforward to deploy regulatory-grade AI: applying model risk management (MRM) to machine learning or AI models allows banks and financial institutions to gain model governance approval for production deployment. This has been essential because it gives organisations the evidence they need to demonstrate their risk coverage.

As risk coverage has moved on from lexicon lists to machine learning use cases, standard classification models are used to identify whether a sentence contains language of interest, such as changing communication channels, spreading rumours or talking secretively.
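A minimal sketch of what such a sentence-level classifier can look like, assuming a standard scikit-learn pipeline; the categories, training phrases and model choice here are illustrative assumptions, not Smarsh's actual pipeline:

```python
# Illustrative sketch (assumed setup, not Smarsh's production model): a
# binary sentence classifier flagging "language of interest" such as
# channel changes, secrecy or rumours, trained on toy example phrases.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

TRAIN = [
    ("let's take this to my personal cell", 1),         # channel change
    ("message me on the other app instead", 1),         # channel change
    ("keep this between us, delete after reading", 1),  # secrecy
    ("i heard a rumour the deal is collapsing", 1),     # rumour
    ("please send the quarterly report by friday", 0),
    ("the meeting is moved to 3pm tomorrow", 0),
    ("thanks, the invoice has been approved", 0),
    ("can you review the attached slides", 0),
]

texts, labels = zip(*TRAIN)
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

def flag(sentence: str) -> bool:
    """Return True when the sentence likely contains language of interest."""
    return bool(clf.predict([sentence])[0])
```

Because the prediction is a per-sentence score over interpretable features, a reviewer can trace why a message was flagged, which is what makes MRM for this class of model comparatively straightforward.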

“Those classifications are reasonably self-explanatory and transparent from an interpretability perspective and that enables relatively straightforward MRM,” confirms Paul Taylor, the Vice President of Product Management at Smarsh.

“The challenge comes as we move away from these established classification models and the complexity of the underlying models becomes greater. When it comes to Generative AI, for example, the underlying models can support more than 100 different languages, which enables us to support multilingual offerings without building individual models, but the complexity is increasing significantly.”

To mitigate that complexity, Smarsh leverages transformer-based language models for multilingual coverage, which use a semantic approach rather than sentence-based classification. The size of these models is aligned with Smarsh’s responsible AI principles, which ensure the MRM needs of clients are addressed while also meeting their cost, throughput and quality requirements. The advantage is that these models avoid the implementation complexity and inherent costs of a large language model (LLM).
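The semantic idea can be sketched as follows, under stated assumptions: messages and policy concepts are embedded into a shared multilingual vector space, and a message is flagged when it sits close enough to a concept vector. The `embed` function below is a random stand-in for a real transformer encoder, and the concept names and threshold are hypothetical:

```python
# Sketch of semantic matching (illustrative only): flag a message when its
# embedding is close to a policy-concept embedding. `embed` is a placeholder
# for a multilingual transformer encoder, not a real model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in encoder: deterministic random unit vector per text (assumption)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(16)
    return v / np.linalg.norm(v)

CONCEPTS = {
    "channel_change": embed("move the conversation to a private channel"),
    "secrecy": embed("keep this secret and delete the message"),
}

def semantic_flags(message: str, threshold: float = 0.35) -> list[str]:
    """Return policy concepts whose embedding is near the message's embedding."""
    m = embed(message)
    return [name for name, c in CONCEPTS.items() if float(m @ c) >= threshold]
```

Because the matching happens in embedding space rather than on surface wording, the same concept vectors can cover many languages without a per-language model, which is the multilingual advantage described above.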

“We’ve built our models with a standard methodology with MRM at its base because we know the MRM requirements of our banking customers,” explains Taylor. “The methodology process automatically generates MRM documentation and is outcome based rather than input based.”

There are essentially four parts to MRM: transparency, to ensure factuality and interpretability; accountability, for versioning and change management; safety, for protection of privacy and security; and fairness, to ensure no hallucinations or harmful bias are introduced.
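One way the four pillars could feed automatically generated MRM documentation is as a structured record that a model-release pipeline emits alongside each version; the field names and values below are assumptions for illustration, not Smarsh's actual schema:

```python
# Illustrative sketch only: the four MRM pillars captured as a structured
# record emitted per model release. Field names are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class MRMRecord:
    # Transparency: factuality and interpretability evidence
    evaluation_summary: str
    interpretability_method: str
    # Accountability: versioning and change management
    model_version: str
    change_log: str
    # Safety: privacy and security protections
    privacy_controls: str
    # Fairness: hallucination and bias checks
    bias_assessment: str

record = MRMRecord(
    evaluation_summary="F1 and per-language accuracy on held-out data",
    interpretability_method="per-classification score with matched text span",
    model_version="2.3.1",
    change_log="retrained on refreshed multilingual corpus",
    privacy_controls="PII redaction before scoring; encrypted at rest",
    bias_assessment="false-positive rates compared across languages",
)

print(json.dumps(asdict(record), indent=2))
```

Emitting such a record from the build process, rather than writing it by hand, is one interpretation of an "outcome based rather than input based" documentation methodology.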

“As industries utilise GenAI, the business case needs to be taken into consideration as foundation models are so large, which means scoring on a sentence or token is just too expensive. Analysing tens of millions of messages per day would be highly cost-prohibitive for most use cases,” adds Taylor.
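A back-of-envelope calculation shows why per-message LLM scoring adds up at surveillance volumes; all figures below are assumptions for illustration, not quoted prices or Smarsh volumes:

```python
# Back-of-envelope cost illustration. Every number here is an assumption
# chosen for illustration, not a quoted price or real message volume.
messages_per_day = 50_000_000
tokens_per_message = 60
price_per_million_tokens = 1.00  # assumed USD rate for a hosted LLM

total_tokens = messages_per_day * tokens_per_message          # 3 billion tokens/day
daily_cost = total_tokens / 1_000_000 * price_per_million_tokens
print(f"${daily_cost:,.0f} per day")  # prints "$3,000 per day"
```

Even at a hypothetical $1 per million tokens, scoring every message lands around $3,000 a day, roughly a million dollars a year for a single surveillance channel, which is why smaller purpose-built models are favoured for bulk scoring.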

“GenAI therefore needs to be used strategically with focus on the specific use-cases where it makes sense.”

“GenAI is definitely an enabler for banks and the capabilities we’re seeing emerge and the solutions we’re building are in areas we haven’t been able to address before,” he says. “The challenge is whether banks and regulators are ready from an MRM perspective. Due to the size of the models, the only way to evaluate the model is to ask it itself how it generated the output. Hence Smarsh has developed an MRM prompt management framework which we believe will be the key to success. I see this as less of a technology barrier but a time-to-market issue that requires process change within banks and regulators.”

“Today, we’re in production with classification models and currently deploying new multilingual models based on medium language models with customers, while lots of research is being done on large language models (LLMs) to identify the most effective use-cases,” concludes Taylor.

“It’s extremely exciting that things we’ve talked about in the last five years are now absolutely reality.”

For more information, visit Smarsh.



from UC Today https://ift.tt/BaElHPM
