How Will a Historic Week in AI Impact the UC and Collaboration Space?

This week represents a seismic week in artificial intelligence — not only for its ongoing technological evolution but for how political bodies and nations understand and approach AI.

From President Joe Biden’s executive order on AI to both the USA and the United Kingdom establishing AI institutes to the G7 outlining concrete guiding principles and a code of conduct for developing AI, it has been a whirlwind few days that will likely have significant repercussions on AI’s future development and application.

But what precisely has happened, and how will it impact the UC and collaboration world?

Why Has There Been This Whirlwind of Major AI Regulatory News?

Worries over the risks AI poses to everything from personal and business data in the cloud to the end of human civilisation have been prevalent since the concept of artificial intelligence was first suggested. It has been the purview of science and speculative fiction for decades, and as researchers and engineers began experimenting with primitive versions of AI during the latter half of the 20th century and the first couple of decades of the 21st, warnings about how its sentience could be dangerous were present in the background.

What has changed over the last 12 months, in particular, has been AI’s explosive and sudden entrance into the mainstream, as spearheaded by OpenAI’s ChatGPT generative AI chatbot.

ChatGPT uses user prompts to develop and improve its model unless users deliberately opt out. This has prompted concerns that employees might inadvertently include proprietary or confidential data in their prompts, which ChatGPT extracts to answer future queries, exposing data leak risks for organisations.

The likes of Verizon, Apple and Samsung have prohibited the use of ChatGPT in employee workflows. Investment banks — including Bank of America, Citi, Deutsche Bank, Goldman, Wells Fargo, and JPMorgan — have cracked down on their workers using generative AI, especially ChatGPT. This is because of the risks of sensitive financial data being exposed.

Beyond the security and privacy concerns are the broader, more alarming potential implications of AI’s accelerated development being left unchecked.

In June, a statement signed by AI industry leaders, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, was published warning of the risks of AI through the nonprofit organisation Center for AI Safety. The statement read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

That wasn’t the sole incident. Microsoft, who invested $13 billion in OpenAI this year, released a 40-page report stating AI regulation is necessary to address potential bad actors and risks. In March, an open letter signed by Tesla and X CEO Elon Musk, Apple co-founder Steve Wozniak and over one thousand other industry leaders demanded businesses stop production on AI projects for at least six months or until industry standards and protocols caught up.

It has clearly been a snowballing issue — although the last straw for President Biden, according to Deputy White House Chief of Staff Bruce Reed, was when the POTUS watched Mission Impossible: Dead Reckoning Part One, a film whose antagonist is a malevolent and renegade AI named the Entity. “If (Biden) hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” Reed commented to the Associated Press.

If Biden’s executive order is the catalyst for building an AI regulatory framework that prevents an extinction-level event, you might have Tom Cruise to thank.

What Exactly Has Happened?

There had been significant AI regulation progress prior to this seismic week. In 2021, the European Commission introduced the world’s first comprehensive AI law in the EU AI Act. In March, the British government revealed a white paper called “A pro-innovation approach to AI regulation,” signalling the issue has been in the minds of policy decision-makers for some time.

But the past few days have represented a landslide of announcements, launches and revelations.

On Monday, President Biden announced his executive order on the safe deployment of AI that he described as the most “significant” action that any government has taken on the issue.

What the executive order means in practice is that tech businesses developing AI products and systems that could pose a threat to national security, economic security, or health and safety will be required to share test results and other critical information for those products and systems with the US government before they can be released. That includes the likes of ChatGPT and Bing Chat and intends to guarantee AI businesses aren’t collecting private data from unaware users and using it without their consent.

The government has also established strict testing guidelines around AI development — especially around “red-team testing”, in which assessors replicate rogue actors in their test procedures — before AI products can be made commercially available. The ambition is to ensure that AI tools meet the required safety standards.

The executive order mandated the introduction of an advanced cybersecurity programme to design AI tools that can identify and mitigate vulnerabilities in critical software.

The executive order also addresses other key areas of contention around AI, including watermarking AI-made content, bias and discrimination, consumer protections, disruption in the jobs market and workers’ rights.

Vice-President Kamala Harris followed this up by announcing an AI Safety Institute on Tuesday, which will develop “guidelines, tools, benchmarks, and best practices” to help mitigate risks from AI, according to its announcement.

Across the pond, British Prime Minister Rishi Sunak announced the UK’s own AI Safety Institute a few days ago. Prime Minister Sunak stated the Institute would test new versions of AI for a variety of risks, from producing misinformation to its potential as an existential threat to humanity.

That news foreshadowed the UK’s hosting of a global AI Safety Summit this week, where the likes of Vice-President Harris, Musk and other thought and industry leaders have appeared to offer their perspectives. In a statement named the “Bletchley Declaration,” because the Summit took place at Bletchley Park, signatories, including the US, the UK, the EU, China, and dozens of other nations, stated they would seek to build:

Respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.”

Vera Jerouva, Vice-President of Values and Transparency at the European Commission, took to the stage during the Summit’s opening speeches to ask for independent scientific input to governing AI effectively and announced the European Commission was drafting new legislation on generative AI, which will be published on December 6.

This week, the G7 also agreed on guiding principles and a code of conduct for developing AI and will be announcing its signatories over the next few weeks.

How Might This Affect the UC and Collaboration Space?

Assuming government agencies introduce the regulations signalled in the executive order and/or Congress itself introduces concrete laws around AI that impact the private sector, there could be several ramifications for the US. Meanwhile, the EU will introduce more stringent legislation on generative AI next month, and the UK could potentially follow suit with both parties, albeit what that might look like is currently ambiguous.

However, if new, strict and enforceable AI regulation AI becomes the global norm, those UC and collaboration businesses releasing AI products might have to go through more stringent, government-approved testing to ensure that private business and personal data isn’t being unlawfully collected. Those businesses might need to invest in more secure data storage solutions and introduce better encryption methods to protect user privacy or establish clearer consent policies from users around data collection and AI usage.

Stricter testing guidelines will include security assessments by the government, potentially producing more secure AI applications for the customer. Enhanced security measures can help protect user data and communications.

UC and collaboration providers will have to be more transparent about the algorithms they use, including disclosing to the government how those AI algorithms make decisions. This also factors into the executive order’s focus on AI that doesn’t discriminate against certain groups of people.

More stringent testing guidelines could foster the development of industry standards for AI in UCC as well. Interoperability is already shaping up to be a major theme for UC in 2024, but government-encouraged standardization could catalyse greater interoperability between different UC and collaboration platforms and ensure a consistent user experience across different services.

This might introduce new problems for the UC and collaboration space — in theory, at least.

These new regulations on AI might result in greater compliance costs for UCC providers, including data security measures, staff training, and legal consultations. Additionally, more extensive AI testing processes can require extra resources, such as skilled testing professionals, testing tools, and infrastructure. This could increase overall development costs and, as a knock-on effect, influence the price of the UCC product as the cost is carried over to the customer.

Innovation and development will inevitably be affected too. Businesses might be more cautious about developing new AI-powered features because of the potential legal and financial risks associated with the regulations. With stricter testing standards in place, the development time of AI-powered features within UCC platforms will likely grow. Companies would need to invest more time and resources in comprehensive testing processes, potentially delaying the release of new features or products.

However, President Biden’s executive order and the Bletchley Declaration seek to securely and safely empower AI innovation and address the concerns of those worried about higher costs, poorer market competition and hindered innovation.

While the Bletchley Declaration’s promises to aid innovation are more hypothetical for now, being so recent, the US government is aware of how profitable AI might be to its economy. It announced a pilot of the National AI Research Resource, which is being built to improve AI research across the US. Funding is being provided for research into privacy-preserving technologies within AI so that businesses can continue to innovate in the area but with it being more standardised and secure.

So that businesses of any size can contribute to AI’s development, the US government will provide technical assistance and resources to ensure that every organisation is capable of making advances in the area, an umbrella under which UC and collaboration businesses fall. This will assuage at least some worries about higher costs and market competition, given stricter AI regulations would alienate those AI-experimenting UC and collaboration SMBs who might not be able to afford the greater cost of compliance.



from UC Today https://ift.tt/Sz4DPFi

Post a Comment

0 Comments