While Copilot, Microsoft’s premier AI-powered productivity tool, understandably took centre stage at this year’s Microsoft Inspire as its game-changing arrival into our work lives draws ever closer, it wasn’t the event’s only eye-catching announcement.

Bing Chat Enterprise was revealed, bringing its generative AI features into the business world. While its capabilities could prove seismic for businesses around the globe, Microsoft went to great pains during Bing Chat Enterprise’s announcement to stress how secure and safe its offering would be for companies with data protection anxieties.

Why was there so much focus on its security?

What is Bing Chat Enterprise?

Bing Chat’s generative AI capabilities were previously only available in Microsoft Edge for consumers. In March, Microsoft revolutionised its Bing search engine and Edge browser by implementing an AI-powered experience with ChatGPT features. The solution encompassed leveraging OpenAI’s model, Microsoft’s proprietary Prometheus model, AI within the search algorithm, and a reimagined user experience. Microsoft described it as a “copilot for the web”.

Bing Chat Enterprise is accessible “wherever Bing Chat is supported”, including Bing.com/chat and the Microsoft Edge sidebar. Its generative AI capabilities include answers to natural language questions (with citations) and visual responses, such as charts and images.

“Whether researching industry insights, analysing data, or looking for inspiration, Bing Chat Enterprise gives people access to better answers, greater efficiency and new ways to be creative,” Yusuf Mehdi, Corporate Vice President & Consumer Chief Marketing Officer, and Jared Spataro, CVP Modern Work & Business Applications, wrote in an accompanying blog post.

Bing Chat Enterprise is rolling out in preview for free today but will cost $5 per user per month when released as a standalone offering. It’ll be free for customers with existing Microsoft 365 E3, E5, Business Standard, and Business Premium subscriptions.

However, Microsoft emphasised that the solution is designed to provide greater data security for businesses concerned about privacy and data breaches. This heightened security could be a crucial differentiator in an increasingly crowded generative AI marketplace.

Why was Microsoft so Focused on its Security Features?

Bluntly, because the privacy and security of business data in the most widely available generative AI solutions have come under close scrutiny in recent weeks and months.

The issue is that OpenAI’s ChatGPT, the most widely used generative AI service, uses user prompts to develop and improve its model unless users deliberately opt out. This has fuelled concerns that employees might inadvertently include proprietary or confidential data in their prompts, which ChatGPT could then draw on to answer future queries.

There is no established, uniform safeguard against these prospective data breaches.

In May, The Wall Street Journal reported that Apple had restricted the use of OpenAI’s ChatGPT and Microsoft’s Copilot. Verizon and Samsung were among the tech companies to follow suit.

In February, Bloomberg reported that investment banks, including Bank of America, Citi, Deutsche Bank, Goldman, Wells Fargo, and JPMorgan, had started cracking down on their employees’ use of generative AI, especially ChatGPT, over worries that sensitive financial data could be at risk.

These worries are not without merit, either.

In March, OpenAI announced that a bug in ChatGPT had resulted in data leaks. Last month, OpenAI was subject to a class action lawsuit filed in California federal court, claiming it extracted “massive amounts of personal data from the internet”. The suit alleged that OpenAI stole and misappropriated millions of people’s data from across the internet to refine its AI models.

Only last week, it emerged that the US Federal Trade Commission (FTC) had opened an inquiry into OpenAI over the risks to consumers from ChatGPT producing false information or information partly compiled from leaked sensitive data. The FTC is also assessing OpenAI’s approach to data privacy and how it extracts data to train and develop the AI.

What Exactly Does Bing Chat Enterprise Do to Address These Concerns?

At Inspire, Microsoft was keen to underline that Bing Chat Enterprise addresses some, if not all, of these concerns.

When enterprise users use the service, their chat data is not saved and, consequently, not used to train AI models. No one else can view a user’s prompts and, therefore, their data.

Mehdi and Spataro explained:

“Bing Chat Enterprise gives your organization AI-powered chat for work with commercial data protection. With Bing Chat Enterprise, user and business data are protected and will not leak outside the organization. What goes in — and comes out — remains protected. Chat data is not saved, and Microsoft has no eyes-on access – which means no one can view your data. And, your data is not used to train the models.”

Notably, Microsoft’s standard Bing chatbot does not include this protection: it permits both automated and manual reviews of Bing user prompts.

There are other inherent issues around generative AI that Bing Chat Enterprise does not directly fix. Generative AI, still in its very early (and fallible) stages, can reproduce inaccurate information or present falsehoods to users. Lawyers experimenting with ChatGPT discovered it could fabricate legal precedents and past cases.

There is still a long way to go before generative AI is a flawless and perfectly secure productivity tool. Still, Microsoft’s privacy policy with Bing Chat Enterprise is one small but welcome step on that journey.



from UC Today https://ift.tt/VpOBqAY