OpenAI Launches ChatGPT Enterprise with Security a Key Focus

OpenAI has launched ChatGPT Enterprise, with security and privacy front and centre of its offering.

ChatGPT Enterprise offers a broad set of new features aimed at business needs, including enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, customisation options, longer context windows for processing more extended inputs, and advanced data analysis capabilities.

OpenAI wrote in its blog post:

“We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive. Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.”

OpenAI says it has received “unprecedented demand” for a version of ChatGPT that caters specifically to businesses, claiming that over 80 percent of Fortune 500 companies have teams that leverage the service in their workflows.

The company also mentions that several Fortune 500 businesses, including PwC, Block and Canva, have been using an early version of ChatGPT Enterprise to “craft clearer communications, accelerate coding tasks, rapidly explore answers to complex business questions, assist with creative work”.

ChatGPT Enterprise is available now, and OpenAI says it is “onboarding as many enterprises as we can over the next few weeks”.

What New Features Will ChatGPT Enterprise Offer?

Perhaps the most notable new feature is that OpenAI will not train its AI on a business’s data or conversations, and its models do not learn from employees’ usage of the service. OpenAI says that organisations “own and control” their business data. ChatGPT Enterprise is SOC 2-compliant, and conversations are encrypted in transit and at rest.

OpenAI’s new admin console enables admins to oversee team members and features domain verification, SSO, and usage insights, providing a more intuitive framework for large-scale enterprise deployment.

OpenAI also says that ChatGPT Enterprise is up to two times faster than standard ChatGPT, with usage limits removed. It also adds a 32k-token context window, allowing users to process inputs or files four times longer than before.

There is also “unlimited access” to advanced data analysis capabilities, formerly known as Code Interpreter. “This feature enables both technical and non-technical teams to analyse information in seconds, whether it’s for financial researchers crunching market data, marketers analysing survey results, or data scientists debugging an ETL script,” OpenAI wrote.

There are several customisation options to tailor ChatGPT Enterprise to a user’s business, including new shared chat templates to collaborate and build common workflows.

OpenAI also includes free API credits for businesses that want to extend ChatGPT Enterprise into a fully custom solution tailored to their organisation.

OpenAI also previewed future features and solutions currently in the works, such as the capability to securely integrate ChatGPT’s responses with a company’s data by integrating with the applications a business uses, a self-serve ChatGPT Business solution for smaller teams, and “even more powerful versions of Advanced Data Analysis and browsing that are optimised for work”.

Why Are Security and Privacy Such Contentious Topics in AI?

While ChatGPT was enormously popular when it launched last November, OpenAI was also criticised for the perceived laxness of its security and privacy practices.

The initial version of ChatGPT uses user prompts to develop and improve its model unless users deliberately opt out. This has triggered concerns that employees might accidentally include proprietary or confidential data in their prompts, which ChatGPT could then draw on to answer future queries.

There is no established, universal safeguard against such data leaks, and this concern was a primary factor behind Microsoft emphasising Bing Chat Enterprise’s security features when it was announced earlier this summer.

In May, The Wall Street Journal reported that Apple had restricted the use of ChatGPT and, preemptively, Microsoft-owned GitHub’s Copilot. Verizon and Samsung were among the tech businesses to do likewise.

In February, Bloomberg reported that investment banks — including Bank of America, Citi, Deutsche Bank, Goldman, Wells Fargo, and JP Morgan — had begun restricting employees’ use of generative AI, ChatGPT in particular, prompted by concerns over sensitive financial data being put at risk.

These worries are not without validity.

In March, OpenAI announced that a bug in ChatGPT had resulted in data leaks. In June, OpenAI was subject to a class action lawsuit filed in California federal court, alleging it extracted “massive amounts of personal data from the internet”. The case claimed that OpenAI stole and misappropriated millions of people’s data from the internet to train its AI models.

In July, it was announced that the US Federal Trade Commission (FTC) had opened an inquiry into OpenAI over the risks to consumers from ChatGPT generating false information or information partly compiled from leaked sensitive data. The FTC also examined OpenAI’s approach to data privacy and how it collects data to train and develop its AI.



from UC Today https://ift.tt/LNASG4W