In today’s AI-driven world, UCaaS solutions, from Webex to Microsoft Teams, have been inundated with a raft of new AI features.
AI transcripts, AI Copilots, AI-enabled search—the list goes on.
Yet amid all the efficiencies and optimizations AI has brought to the UC sphere, it is important to understand that it has also brought new challenges.
This not only comes in terms of regulation or integration but in the form of new AI-enabled attacks.
Yes, AI has given attackers new ways into enterprises’ systems, and with more and more communication taking place over UCaaS solutions like Teams, those platforms are becoming an increasingly attractive target.
But what exactly are the new threats to look out for in the age of AI? To find out, UC Today spoke with Jeff Schumann, VP Collaboration Security and AI Strategist at cybersecurity company Mimecast, about the biggest cyber threats facing UCaaS users.
The Human Element: UCaaS’s Weakest Link
When it comes to cybersecurity vulnerabilities in UCaaS platforms, the greatest risk often lies with the users themselves.
“The human element is the primary weak point, and attackers are evolving to exploit it in new ways,” Schumann said.
This is evident in the evolution of the classic phishing attack. Previously, bad actors had to trick a user into opening an attachment or entering details via a fake link in an email or message that looked like it came from a trusted source.
However, closer inspection of the message, such as checking whether the sender’s email address matched the displayed name, would reveal a hint of foul play.
Yet AI has transformed these phishing attempts, leaving little for that kind of scrutiny to uncover.
“Voice cloning, visual cloning, and deepfake-driven impersonation are making it nearly impossible for users to differentiate between real and fraudulent interactions,” Schumann explained.
This means attackers no longer need to phish for credentials; they can manipulate individuals into handing over access willingly.
Think this is outlandish? A British engineering firm, Arup, last year confirmed it was the victim of a deepfake fraud after an employee was duped into sending £20 million to criminals using AI to replicate voices and images of senior officers of the company.
This highlights how sophisticated social engineering tactics have become, with attackers leveraging advanced technologies to create highly convincing impersonations.
What’s more, such attempts at gaining access can bypass even the most robust security controls, as they exploit people rather than technical vulnerabilities.
The challenge for organizations has therefore evolved: it is no longer just about implementing technical safeguards, but also about preparing the workforce to recognize these increasingly realistic deception techniques.
The stakes are particularly high given the central role UCaaS platforms play in modern business.
“UCaaS platforms are the modern workspace. This is where people do their jobs, hold critical business conversations, and exchange sensitive data. They’ve become the operating system of work, integrating at scale with other enterprise tools like CRMs and financial platforms,” Schumann noted.
“Gaining control of a UCaaS platform means gaining access to workflows, decisions, and proprietary business intelligence. Protecting UCaaS is about protecting the entire business, not just the communications layer.”
AI as Both Shield and Sword
As threats evolve, so too must defense mechanisms. AI is playing an increasingly crucial role in strengthening UCaaS security, offering capabilities beyond what traditional systems can provide.
Schumann believes that just as AI has brought new abilities to attackers, so too can it transform security defense.
“The power of AI in UCaaS security isn’t just in detection—it’s in real-time adaptation,” Schumann explained.
“Natural Language Processing allows AI to understand intent and sentiment, flagging anomalies before an attack succeeds. Imagine a security system that doesn’t just filter messages but dynamically adjusts security controls based on emerging patterns.”
This adaptive approach is particularly valuable when dealing with deepfakes and AI-generated content.
For instance, hackers with alleged links to the Russian government have been using social engineering tactics, such as impersonating officials from trusted organizations, to get users to hand over authentication codes on Teams.
With AI, the abnormalities in such approaches can be detected and relayed to the user, restoring that power to investigate an odd message.
“The more signals you can gather, the more accurate AI can be at detecting manipulation. Cross-referencing voice patterns, speech cadence, biometric verification, metadata analysis, and contextual behavioral monitoring is key,” explained Schumann.
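To make that concrete, below is a minimal sketch of how several such signals might be cross-referenced into a single impersonation risk score. The signal names, weights, and example values are illustrative assumptions for this article, not a description of Mimecast’s or any other vendor’s actual model.

```python
# Illustrative sketch only: combining multiple detection signals into one risk score.
# Signal names, weights, and values are assumptions, not any vendor's real model.
from dataclasses import dataclass

@dataclass
class CallSignals:
    """Per-interaction signals, each normalized to 0.0 (suspicious) .. 1.0 (trusted)."""
    voice_match: float        # similarity to the claimed speaker's voiceprint
    cadence_match: float      # consistency of speech rhythm with past calls
    metadata_trust: float     # known device, expected network, valid tenant, etc.
    behavior_typical: float   # does the request fit the user's normal workflow?

# Hypothetical weights for how much each signal contributes to overall trust.
WEIGHTS = {
    "voice_match": 0.35,
    "cadence_match": 0.15,
    "metadata_trust": 0.30,
    "behavior_typical": 0.20,
}

def impersonation_risk(s: CallSignals) -> float:
    """Return a 0.0-1.0 risk score; higher means more likely an impersonation."""
    trust = (
        WEIGHTS["voice_match"] * s.voice_match
        + WEIGHTS["cadence_match"] * s.cadence_match
        + WEIGHTS["metadata_trust"] * s.metadata_trust
        + WEIGHTS["behavior_typical"] * s.behavior_typical
    )
    return 1.0 - trust

# Example: the voice sounds right (a possible deepfake), but the metadata and
# behavioral context look wrong, so the overall risk is still elevated.
score = impersonation_risk(CallSignals(0.9, 0.8, 0.2, 0.3))
print(f"Impersonation risk: {score:.2f}")  # roughly 0.45, enough to flag to the user
```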
However, AI itself is not infallible. There will be situations where it is either overzealous or too forgiving, so having it act as an outright blocker might prove a hindrance to the business and a frustration for users.
Schumann argues this is why AI security for UCaaS should be seen as a copilot rather than the one in charge:
“A layered defense model combines both, allowing AI to stop high-confidence threats while educating users on lower-confidence risks.”
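As a rough illustration of that layered model, the following sketch maps a risk score like the one above onto three outcomes, blocking only high-confidence threats and turning lower-confidence ones into in-context user education. The thresholds and action labels are assumptions for the example, not a specific product’s behavior.

```python
# Illustrative sketch of a layered response policy: AI blocks only high-confidence
# threats and turns lower-confidence signals into user education rather than friction.
# The thresholds and action labels below are assumptions for this example.

BLOCK_THRESHOLD = 0.8   # high confidence the interaction is malicious
WARN_THRESHOLD = 0.4    # suspicious enough to coach the user, not enough to block

def respond(risk_score: float) -> str:
    """Map a 0.0-1.0 risk score to a layered-defense action."""
    if risk_score >= BLOCK_THRESHOLD:
        return "block"             # quarantine the message or call and alert security
    if risk_score >= WARN_THRESHOLD:
        return "warn-and-educate"  # deliver it with an in-context banner explaining the red flags
    return "allow"                 # no friction for routine, low-risk interactions

for score in (0.92, 0.45, 0.10):
    print(f"{score:.2f} -> {respond(score)}")
```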
Implementing AI in UCaaS security isn’t without challenges, however. Threats are not the only things that are fast-moving and constantly adapting; so is UCaaS itself.
“The real-time nature of UCaaS is what makes security uniquely challenging. Communication happens instantly, and as generational technology shifts increase bandwidth and capabilities, new attack surfaces emerge just as rapidly. Security solutions need to evolve in tandem,” Schumann stated.
The Future of UCaaS Security
Looking ahead, it’s clear AI will augment not only UCaaS systems’ offerings but also their security.
As with UC features such as Copilot, Schumann believes AI’s utility in security will lie in helping humans perform better.
“AI will become exponentially smarter, but human oversight will always be critical,” Schumann said. “Think of it like autopilot in aviation. AI will identify, assess, and mitigate most threats autonomously, but human judgment is irreplaceable when context matters.”
The real evolution, Schumann states, will be in this synergy of AI and human expertise.
2025 will see more AI advancements on UC platforms as companies race to gain a competitive edge.
Schumann argues it’ll also see an increased focus on AI for security: “Security will become more invisible, embedded into workflows without disrupting productivity.”
This will also allow users to take full advantage of the platform and its features, knowing they are secure in doing so.
“UCaaS providers that embed AI into both their security and user experience strategies will have a competitive edge,” Schumann explained. “Delivering platforms that are not only safer but also more intelligent and efficient for the people using them.”