Smarsh Highlights the Need to Eliminate Bias in AI

As utilisation of artificial intelligence (AI) increases among enterprises, benefits are being seen in customer experience efficiency, with richer, more targeted responses presented to queries. However, the immaturity of such automated systems creates risks for enterprises: there is strong potential for AI to include bias in its responses, which depend heavily on the systems from which data is pulled, Smarsh has warned.

In many respects, it’s a re-run of the classic data management challenge of “rubbish in, rubbish out.” If the AI solution relies on data from a limited number of systems, or from systems with inherent bias, the responses it creates can only replicate that bias. The situation is exacerbated with AI because systems use their intelligence to pull data from whichever source appears to be the most authoritative. That appearance of authority may be determined simply by which source produces the most content, which is no guarantee of actual authority.

Responses also vary depending on how questions are presented to systems. For example, a user of an AI solution that manages cloud services is likely to get a different answer depending on where and how they ask a question, because the AI bases its response on the communication channel used in the interaction. That channel affects which data sources the AI system accesses, which of those it regards as authoritative, and whether it can reach the data or patterns used to answer similar questions in the past.
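To make the point concrete, here is a minimal sketch of channel-dependent source routing. It is not any vendor's actual implementation, and every channel and source name is hypothetical:

```python
# Hypothetical sketch: an assistant that consults different knowledge
# sources depending on the communication channel a query arrives on.
# All channel and source names below are invented for illustration.

CHANNEL_SOURCES = {
    "email":     ["crm_history", "product_docs"],
    "live_chat": ["faq_cache", "recent_chat_logs"],
    "voice":     ["ivr_scripts", "faq_cache"],
}

def sources_for(channel: str) -> list[str]:
    """Return the data sources consulted for queries from a given channel."""
    return CHANNEL_SOURCES.get(channel, ["product_docs"])

# The same question asked over two channels is grounded in different data,
# so the answers can diverge even though the query text is identical.
print(sources_for("email"))      # ['crm_history', 'product_docs']
print(sources_for("live_chat"))  # ['faq_cache', 'recent_chat_logs']
```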

In this scenario, there is a danger that sub-optimal responses are replicated simply because they have been presented many times, while the system lacks a feedback loop that could have highlighted a better alternative. Enterprises therefore need to ensure their systems are designed to avoid bias, poor-quality answers, and the siloing of responses according to where a query originated. Systems must be intelligent enough to gauge authority beyond simplistic indicators such as content volume or the frequency with which specific data has been used to formulate previous responses.
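As a rough illustration of why volume is a poor proxy for authority, the sketch below scores the same three hypothetical sources two ways: by content volume alone, and by a blend that also weighs citations and known error rates. All names, figures, and weights are invented:

```python
# Illustrative only: ranking sources purely by content volume rewards the
# noisiest source, not the most authoritative one. Blending in other
# (hypothetical) signals changes the ranking. All figures are made up.

sources = {
    # name: (documents_published, times_cited_elsewhere, known_error_rate)
    "high_volume_blog": (9_000, 40, 0.20),
    "vendor_docs":      (1_200, 950, 0.02),
    "support_wiki":     (3_500, 300, 0.08),
}

def volume_only(stats):
    docs, _, _ = stats
    return docs

def blended_authority(stats):
    docs, citations, error_rate = stats
    # Weight citations heavily and penalise known errors; weights are arbitrary.
    return 0.1 * docs + 1.0 * citations - 5_000 * error_rate

print(max(sources, key=lambda s: volume_only(sources[s])))        # high_volume_blog
print(max(sources, key=lambda s: blended_authority(sources[s])))  # vendor_docs
```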

Enterprises are well aware of the potential risks of introducing bias into their systems through utilising AI in this way, and are making efforts to avert issues. For example, OpenAI has suggested adopting ground rules around responses that lack varied sources: if an AI-generated response pulls heavily from a single source of information, it is regarded as non-neutral or false. Enterprises should also consider suppressing sources pulled from social platforms such as Twitter to avoid risk, and instead steer the query towards a more directed response.
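A minimal sketch of that kind of ground rule might look like the following; the 60% threshold and the data shapes are assumptions for illustration, not a documented OpenAI rule:

```python
# Illustrative only: flag an answer as non-neutral when one source
# dominates the retrieved evidence behind it. Threshold is an assumption.

from collections import Counter

def dominant_source_share(sources: list[str]) -> float:
    """Share of the retrieved evidence contributed by the biggest source."""
    counts = Counter(sources)
    return max(counts.values()) / len(sources)

def is_neutral(sources: list[str], max_share: float = 0.6) -> bool:
    return dominant_source_share(sources) <= max_share

evidence = ["blog_a", "blog_a", "blog_a", "blog_a", "vendor_docs"]
print(is_neutral(evidence))  # False: 80% of the evidence is from blog_a
```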

Other issues to consider revolve around suppressing responses that draw on racist or bullying sources. Situations in which the system’s learning model is based on a racist source, or a source that uses racist language, need to be eradicated.
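A pipeline might enforce this with a source filter applied before data ever reaches the learning model. The sketch below is illustrative only, with a placeholder blocklist and a stand-in for a real toxicity classifier:

```python
# Illustrative sketch: screen out sources known to contain racist or
# bullying content before ingestion. The blocklist, terms, and threshold
# are placeholders; a real pipeline would use a vetted toxicity classifier
# plus human review.

BLOCKED_SOURCES = {"known_hate_forum"}   # hypothetical source name
TOXICITY_THRESHOLD = 0.1                 # arbitrary placeholder value

def toxicity_score(text: str) -> float:
    """Placeholder for a real classifier: fraction of flagged terms."""
    flagged_terms = {"flagged_term_1", "flagged_term_2"}  # placeholders
    words = text.lower().split()
    return sum(w in flagged_terms for w in words) / max(len(words), 1)

def admit(source: str, text: str) -> bool:
    """True only if the document may enter the training/retrieval corpus."""
    return source not in BLOCKED_SOURCES and toxicity_score(text) < TOXICITY_THRESHOLD
```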

Ethical AI is therefore becoming a priority for enterprises as they rely more heavily on the technology. The need to avoid bias is fundamental, but systems will need to learn the relevant nuances and take preventative measures as they become more refined. There are similarities with early speech-to-text engines, which required hours of training to work accurately.

In many respects, the same is true of AI: the system needs to learn how to source balanced, accurate data to formulate optimal answers to user queries. With an ethical approach to educating AI, we will eventually reach the same point that speech-to-text has reached with voice transcription on our phones. However, systems need time to learn how to source and present responses without bias, and the risk for enterprises lies in carrying out this learning in public.



from UC Today https://ift.tt/kRyufTZ
