Generative artificial intelligence (AI) and large language models have enormous potential, which has led to the rushed rollout of products and systems that have not been fully thought through. The hype is such that many are jumping on the Generative AI bandwagon without first considering the downstream implications or waiting for more mature offerings with better performance and usefulness.
“Generative AI brings risks as well as benefits,” confirms Theo Hill, senior director of product management at Smarsh. “Archives will have to handle the increased volumes of content that these models will create, as well as identify new types of risks in that data. New techniques are needed to determine those risks and we are focused on how technology can be used to better identify risks against a landscape of radically increased data generation.”
Finding the data points that suggest compliance risks becomes increasingly challenging as data volumes grow and greater reliance on AI introduces more inaccuracies. “Generative AI can create egregious mistakes but, because those mistakes are nicely formatted and confidently created, they’re believable,” says Hill. “That means your teams have to check everything, which can negate the purpose of using AI in the first place.”
The risks don’t stop there. “Generative AI has been trained on general internet information, which contains bias. These biases can be amplified as AI is increasingly utilised,” he adds. “Biases can lead to unfair targeting of certain groups, which is a serious issue in and of itself. However, an often-overlooked issue in supervision and surveillance is that biasing risk detection toward these targeted groups can create blind spots elsewhere in your monitored population, thus degrading your surveillance effectiveness rather than improving it.”
Businesses therefore need to carefully assess how they will use Generative AI and Large Language Models (LLMs). “Some models are built by scraping the internet, and this practice is facing legal challenges from owners of copyrighted content, so the models may have to be updated over and over as these cases are resolved,” explains Hill.
“Companies should be very careful about moving too early to deploy Generative AI built on foundation models, especially those with unclear data provenance.”
Nevertheless, Smarsh sees enormous potential. “We think Generative AI is going to be transformative. Our research teams are working on ideas with high probabilities of success, while at the same time, using the right level of caution and due diligence that such disruptive technology warrants,” he says. “The big hurdle that firms, ourselves included, are going to have to overcome is ensuring that only rigorously validated solutions are brought to market. That validation comes where the AI is properly tested and its fail states are known, and we believe a critical component of that will be deep industry partnerships.”
For this reason, Hill sees careful, measured adoption of Generative AI and LLMs as the path most likely to deliver the greatest long-term benefits for his clients. “We see potential use cases across compliance; for example, advisory teams could use Generative AI to understand and reason about whether conflicts of interest exist during deal reviews, or you could use Generative AI to make supervision and surveillance more effective through the greater level of language understanding these models have over traditional surveillance technology.
“While the benefits could be enormous, there’s a danger that rushing in will result in large volumes of inaccurate information being repeatedly replicated. That repetition will result in a negative feedback loop as firms leverage these outputs to further ‘improve’ the model,” Hill concludes.
“The key is to avoid the increased volume of data obscuring the real value and to limit the potential for bias being amplified. Smarsh development teams are working hard to address these challenges and bring solutions to market that enable the real value of Generative AI and LLMs to be harnessed by businesses.”
To find out more, visit Smarsh.
from UC Today https://ift.tt/0XvawH1