Artificial intelligence (AI) has become the latest technology shaking up and reshaping the unified communications (UC) space, and with it comes new questions and concerns. How can companies responsibly and ethically use AI in their platforms? How can we program AI to ensure privacy? Can AI be biased? These questions and more continue to drive the conversation, and they must be addressed by any business looking to use AI.

With that in mind, here are some of the top ethical considerations around AI in unified communications, the problems that arise, and how companies can address them.

Transparency: How is AI Being Used?

One of the first key issues is transparency. AI can be a useful tool for providing assistance during calls, taking notes and transcripts, and pulling up information at a moment’s notice, but even when AI feels like a seamless part of the communication experience, it still needs to be clear when it’s being used.

While many AI developers boast about how talking with their AI feels like talking with a human being, users still need to know when they’re talking with a bot and when they’re talking with a person. If an AI bot is being used for something like customer self-service, then it’s even more important that the customer understands they’re not speaking with a real person. 

Users need to understand when generative AI is being used, how it works, and what it’s used for. For instance, companies can include a sidebar explaining what their AI does, where its training data comes from, and how it can be used. In text chats, AI responses and suggestions can appear in a new, clearly labeled tab, or as an AI participant—as long as it’s clear that it’s a bot.
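
As a rough illustration, this kind of labeling can be enforced in the message data itself rather than left to each interface. The Python sketch below is hypothetical (the ChatMessage type and its fields are invented for this example), but it shows the idea: an AI-generated flag travels with every message, so no client can display bot output unlabeled.

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:
    sender: str
    text: str
    is_ai_generated: bool = False  # set by the platform, never by the sender

def display_name(msg: ChatMessage) -> str:
    """Return the name shown in the chat UI, with AI output always labeled."""
    if msg.is_ai_generated:
        return f"{msg.sender} (AI Assistant)"
    return msg.sender

# A meeting-notes bot posting into a channel:
note = ChatMessage("MeetingNotes", "Action item: send the Q3 report.", is_ai_generated=True)
print(display_name(note))  # -> "MeetingNotes (AI Assistant)"
```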

The important thing is that companies are transparent about their use of generative AI and clearly explain its uses. 

Privacy: What Does the AI Know?

One major concern about AI is where its data comes from. An AI model can only work with the data it’s fed, so it has to rely on massive amounts of training data. In many cases, this includes existing articles, websites, and other publicly available sources, as well as internal training data for specific companies.

However, AI also tends to learn as it is used, which means it’s constantly collecting new data. Companies need to make sure that their AI is not collecting or using personal data in ways that may violate privacy laws or otherwise be intrusive—especially if the AI is used in industries like healthcare or finance, where important personal information is shared and protected under strict regulations.

There needs to be a clear separation between the conversations and data an AI is allowed to learn from and the data it must delete to avoid compliance issues. If the AI learns typical company jargon, that’s one thing, but if it starts learning users’ identifying information, that can be a problem, especially if it shares it with other users (intentionally or otherwise). Not only does personal information need to be protected, but employees need to feel confident that the AI is used only for enhancing communication, not for monitoring them.
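
As a simplified sketch of that separation, a platform might scrub identifying details from transcripts before anything is retained for training. The Python below is illustrative only; the regex patterns are placeholders, and a real deployment would rely on a vetted PII-detection tool and rules matched to regulations such as GDPR or HIPAA.

```python
import re

# Hypothetical patterns for illustration; production systems should use a
# dedicated PII-detection library tuned to the relevant regulations.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(transcript: str) -> str:
    """Replace identifying details with placeholders before the text is
    retained for training, so the model can learn jargon but not identities."""
    for label, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(f"[{label} REDACTED]", transcript)
    return transcript

raw = "Call me at +1 (555) 010-2291 or jane.doe@example.com about the claim."
print(redact_pii(raw))
# -> "Call me at [PHONE REDACTED] or [EMAIL REDACTED] about the claim."
```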

Bias and Discrimination: Is the Data Diverse Enough?

Bias in AI is a major issue, one that companies need to be aware of every time they provide training data. Generative AI is only as good as the data it’s trained on, which means that if the data is biased, the AI will be too. Bias can creep in from several directions: social or systemic biases that discriminate against certain groups, data samples that don’t properly represent all groups, and implicit biases that skew the data, any of which can have huge unintended consequences.

For instance, Amazon made headlines in 2018 when it was revealed the company had scrapped an AI recruiting tool that showed bias against women. Because the tool was trained to identify patterns in a decade of submitted resumes, most of which came from men, it learned to favor male candidates, erroneously treating men as better qualified for tech jobs than women.

Errors like this are often caused by a lack of diverse and properly representative data. Companies need to make sure that the data they give the AI properly represents diverse groups, and that the model does not draw erroneous conclusions based on race, gender, or any other identity.

Generative AI must not be used to discriminate against any individuals or groups. That requires careful monitoring of both the data the model is fed and the conclusions it draws from it, to avoid biases that could result in discriminatory practices or behavior. Even unconscious biases can make their way into training data sets, so companies need to stay on the lookout to avoid a situation like the one caused by Amazon’s recruitment AI.
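
A simple first step is auditing how groups are actually represented in the training data before the model ever sees it. The Python sketch below is deliberately naive (it assumes an even split is the right baseline, which won’t hold for every dataset, and real audits use dedicated fairness toolkits), but it shows the kind of check that would have flagged a resume pool that was 80 percent male.

```python
from collections import Counter

def representation_report(records, group_key, tolerance=0.1):
    """Flag groups whose share of the data deviates from an even split by
    more than `tolerance`. A naive check; real audits use fairness toolkits
    and baselines appropriate to the domain."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    expected = 1 / len(counts)
    return {group: (n / total, abs(n / total - expected) > tolerance)
            for group, n in counts.items()}

# A resume pool skewed like the one behind Amazon's recruiting tool:
resumes = [{"gender": "male"}] * 800 + [{"gender": "female"}] * 200
for group, (share, skewed) in representation_report(resumes, "gender").items():
    print(f"{group}: {share:.0%}" + ("  <-- skewed" if skewed else ""))
# male: 80%  <-- skewed
# female: 20%  <-- skewed
```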

Responsibility: Don’t Just Blame the Bot

AI is a tool like any other, and despite being billed as “intelligent,” it’s still just an algorithm reproducing patterns in the data it’s been given.

As such, companies need to take responsibility for the decisions made by their generative AI systems. If there’s a misstep in the training, if the data is off, or if it hasn’t been tested enough to accurately carry out its tasks, that’s an issue that the company needs to address and prevent from happening again.

It’s easy to think of AI as an all-knowing robot that can sift through data and always uncover the right answer, but today’s technology is not even close to that level of intelligence. It’s up to the companies that use AI to make sure it’s trained fully on the right data and kept on the right track as it grows. If it makes a mistake that has consequences for the business, the responsibility still falls on the company, not the tool.

Human Oversight: Even AI Needs a Supervisor

On a related note, one of the most important ways to prevent AI from making catastrophic mistakes, displaying bias, or discriminating against anyone is to have a human provide oversight.

Human oversight and intervention should be included from day one, starting with the training and continuing throughout its usage. This adds an extra set of eyes to watch out for any inaccurate information (generative AI as we currently know it has a tendency to make up facts based on predicted patterns, rather than on any actual evidence or knowledge), make sure the AI is being used ethically, and catch any potential issues before they become problems.
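
One common pattern for building this in is a human-in-the-loop gate: AI output the system isn’t confident about goes to a reviewer instead of straight to the user. The sketch below is hypothetical; the confidence score is assumed to come from the model or a separate verifier, and the 0.8 threshold is illustrative rather than a recommended value.

```python
from dataclasses import dataclass

@dataclass
class AIDraft:
    text: str
    confidence: float  # assumed to come from the model or a separate verifier

def route(draft: AIDraft, threshold: float = 0.8) -> str:
    """Hold low-confidence drafts for a human to approve, edit, or reject."""
    if draft.confidence < threshold:
        return "HUMAN_REVIEW"
    return "SEND"

print(route(AIDraft("Our SLA guarantees 99.99% uptime.", 0.42)))  # HUMAN_REVIEW
print(route(AIDraft("The meeting starts at 3 PM.", 0.93)))        # SEND
```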

As the saying goes, “an ounce of prevention is worth a pound of cure,” and preventing AI from being used unethically or improperly can save a business a significant amount of trouble and money.

Potential Harm: Understanding the Worst that Can Happen

AI turning against humanity is a staple of science fiction, whether it’s Skynet from the “Terminator” movies or HAL 9000 from “2001: A Space Odyssey.” And while today’s AI is nowhere near that level of intelligence, and is therefore unlikely to turn evil any time soon, companies still need to understand the damage that modern AI can do, and create safety plans accordingly.

Companies need to consider the potential harm that generative AI could cause if it malfunctions or is used maliciously. While generative AI primarily produces text, there are still several ways it could be used to do damage: sharing proprietary or personal information, spreading false information or defamatory statements, or sending hate speech to users. Any one of these could be disastrous for a business.
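
As one small example of such a contingency, a platform could screen AI output for sensitive material before it ever reaches an external user. The Python below is a toy sketch; the denylist terms are invented for illustration, and a production system would pair this with classifier-based moderation rather than simple string matching.

```python
# Hypothetical denylist of internal code names and sensitive terms.
SENSITIVE_TERMS = {"project-atlas", "q3-forecast", "customer-ssn"}

def safe_to_send(ai_output: str) -> bool:
    """Block AI output that references proprietary or sensitive material
    before it is delivered outside the organization."""
    lowered = ai_output.lower()
    return not any(term in lowered for term in SENSITIVE_TERMS)

print(safe_to_send("Here's the public pricing page link."))       # True
print(safe_to_send("Per Project-Atlas, margins will rise 12%."))  # False
```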

While AI has a vast amount of potential to help businesses and employees, it also has the potential for harm. Generative AI should be used with safety in mind, and organizations need to have contingencies in place should anything go wrong.

Understanding the Ethics of AI in UC

Artificial intelligence is a powerful tool for business communications, and companies are finding new uses by the day. It can empower employees, act as a personal assistant on calls, assist customers, and much more, but as it is still a new technology, companies using AI need to understand its risks, limitations, and potential ethical issues.

Organizations need to consider the ethical implications of generative AI along with the impact it has on their collaboration and communication. Taking a responsible, ethical approach is key to using generative AI for enhancing communication without jeopardizing the privacy and rights of employees, customers, and other users.


