ChatGPT and AI-enabled Bots Present New Compliance Challenges

With the increasing adoption of messaging tools augmented by artificial intelligence (AI), such as ChatGPT, it’s only a matter of time before this technology is commonplace in regulated communications. The automated, intelligent capabilities of tools like ChatGPT make them attractive. However, when a financial services customer interacts via chat, it becomes increasingly likely that the AI will suggest actions that a trader or adviser would not. The risk boils down to customers receiving regulated advice that no qualified adviser has provided.

Enterprises in regulated industries need to prepare for the new dangers that AI-enabled engines present. The reality is that even when advice or another interaction is generated by an AI tool, the burden of responsibility is likely to remain with the bank or financial services firm, depending on the jurisdiction. Carefully thought-out approaches are therefore needed to protect both customers and the businesses providing services through AI tools.

The increased use of AI in communications opens the door to new threats and risks as users come to trust tools such as chatbots to respond to their queries. While such tools can be useful for answering routine questions or handling mundane customer concerns, the biggest threat is that the AI tool offers investment or other advice based on what it has learned. A further risk is that a customer may assume the advice given to them has been carefully considered and presented by someone with the proper qualifications, such as an investment adviser, rather than by an unqualified AI. One common mitigation is to screen a bot's draft reply for advice-like language before it reaches the customer, as in the sketch below.
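The following is a minimal illustrative sketch, not any vendor's actual guardrail: the phrase list and function names (`ADVICE_PATTERNS`, `screen_reply`) are hypothetical, and a production lexicon would be far larger and jurisdiction-specific.

```python
import re

# Illustrative only: phrases that often signal regulated investment advice.
ADVICE_PATTERNS = [
    r"\byou should (buy|sell|invest)\b",
    r"\bi recommend\b",
    r"\bguaranteed return\b",
]

def looks_like_advice(draft_reply: str) -> bool:
    """Return True if the bot's draft reply resembles regulated advice."""
    text = draft_reply.lower()
    return any(re.search(pattern, text) for pattern in ADVICE_PATTERNS)

def screen_reply(draft_reply: str) -> str:
    """Block advice-like replies and escalate to a qualified adviser."""
    if looks_like_advice(draft_reply):
        return ("I can't provide investment advice. "
                "Let me connect you with a qualified adviser.")
    return draft_reply

print(screen_reply("You should buy more of this fund."))   # escalated
print(screen_reply("Your statement is ready in the app.")) # passes through
```

Simple pattern matching like this will miss paraphrased advice, which is why firms increasingly pair it with the large-scale surveillance and review described below.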

Organisations must protect themselves by storing all interactions so they can be monitored and searched when required. Any instance where non-compliant information or advice has been shared must be identified and mitigated. Given the massive number of interactions across multiple channels and media, huge volumes of data will need to be stored, and organisations must be able to analyse and search that data quickly.
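To make the record-and-search step concrete, here is a minimal sketch of an interaction archive. It assumes an in-memory store and a naive full-text scan purely for illustration; the names (`Interaction`, `InteractionArchive`, `record`, `search`) are hypothetical, and a real compliance archive would be write-once, durable and indexed.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Interaction:
    timestamp: datetime
    channel: str        # e.g. "chatbot", "email", "voice-transcript"
    participant: str
    text: str

class InteractionArchive:
    """In-memory stand-in for a write-once compliance archive."""

    def __init__(self) -> None:
        self._records: list[Interaction] = []

    def record(self, channel: str, participant: str, text: str) -> None:
        """Capture every message with a timestamp for later review."""
        self._records.append(
            Interaction(datetime.now(timezone.utc), channel, participant, text)
        )

    def search(self, term: str) -> list[Interaction]:
        """Naive full-text scan; production systems would use an index."""
        needle = term.lower()
        return [r for r in self._records if needle in r.text.lower()]

archive = InteractionArchive()
archive.record("chatbot", "customer-123", "Should I buy this bond?")
archive.record("chatbot", "bot", "You should buy it before Friday.")
for hit in archive.search("buy"):
    print(hit.timestamp.isoformat(), hit.channel, hit.text)
```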

Common regulations already make banks and other service providers responsible for handling this enormous volume of data, and it must be stored in a way that keeps it usable. The sheer scale means AI will be necessary to track AI-supported communications and interactions, so organisations need a more efficient system to store, search, analyse and act upon the volume of messages and the range of tools in use. Smarsh, with its heritage of supporting compliance in regulated industries, has foreseen the need to store and discover at a scale never previously seen across the varied platforms and communication types now in common use.

Being able to scale up and support this new era while minimising cost and complexity is essential to mitigating risk, ensuring compliance and preventing AI tools from attempting to answer highly technical, regulated customer queries.



from UC Today https://ift.tt/o9LbdXv