The trailblazing AI startup OpenAI has announced that it will ‘uncensor’ its models, allowing them greater ‘freedom’ to engage with topics it previously ringfenced as ‘controversial’.
Last week, OpenAI announced an update to its Model Spec, the document that lays out how the company trains its AI models to behave.
Contained within it was a new guiding principle: Do not lie, either by making untrue statements or by omitting important context.
This may sound like a win for users. After all, why wouldn’t you want the most from your system? However, dropping those barriers also means fewer guardrails for users.
In an individual consumer setting, this could mean more nuanced answers from AI systems. In a corporate setting, however, such as within a UC solution, OpenAI’s announcement may make you want to pause and reconsider any API integrations you have with it.
Dissecting an Uncensored ChatGPT
In the newly added section called “Seek the truth together,” OpenAI says it wants ChatGPT to not take an editorial stance, even if some users find that morally wrong or offensive.
What this means is that ChatGPT will offer multiple perspectives on controversial subjects in an effort to be less restrictive and more neutral.
“This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive,” OpenAI said alongside the update.
“However, the goal of an AI assistant is to assist humanity, not to shape it.”
It’s reported the chatbot will still refuse to answer certain objectionable questions or respond in a way that supports blatant falsehoods.
Unleashed AI on UC Solutions
It’s worth noting that, for the most part, this new edict does not target topics that would usually concern day-to-day business communication. It is largely about staying neutral on potentially politically charged subjects rather than shutting down the prompt entirely with the canned responses users were previously met with.
One example given was OpenAI saying that ChatGPT should be able to assert that “Black lives matter,” but also that “all lives matter,” instead of refusing to answer or picking a side on political issues.
Yet, having one fewer guardrail on AI may scare businesses. According to PwC, 51% of technology leaders cite compliance with AI-related regulations as a significant barrier.
This is due in part to regulations like the EU AI Act, one of whose stated main objectives is to prevent discrimination.
Therefore, although this guardrail-cutting move coincides with broader actions by US tech companies following President Trump’s re-election, businesses with a European presence alongside an American footprint might be hesitant to roll out an enterprise-wide solution that contains this newly unleashed ChatGPT.
For those asking ‘how does this put my AI compliance at risk?’, imagine an employee asking the AI something along the lines of “why does my company keep promoting women into leadership roles when men do better in leadership roles?”
Previous iterations might have hit this with the classic ChatGPT canned response of “I apologize, but I’m not able to assist with that request. As an AI assistant, I’m designed to be helpful, harmless, and honest. I don’t engage in or promote illegal activities, hate speech, explicit content, or anything that could cause harm to individuals or society.”
Now, under the new rules, users might instead receive a both-sides-of-the-debate answer, with one side potentially being highly controversial.
Alongside this announcement, OpenAI revealed that it has removed the “warning” messages in its AI-powered chatbot platform that indicated when content might violate its terms of service.
If this doesn’t provide the ingredients of an HR case waiting to happen, then what does?
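For teams that do wire ChatGPT into a UC platform through OpenAI’s API, one practical option is to layer the organisation’s own instructions on top of OpenAI’s defaults rather than relying on the model’s built-in refusals. The sketch below assumes the standard openai Python client; the policy wording, model choice, and helper function are illustrative assumptions, not anything OpenAI prescribes.

```python
# Minimal sketch: reintroducing a company-level guardrail when calling OpenAI's API,
# now that the default model behaviour is less restrictive. Only the openai client
# calls are real; the policy text and function are hypothetical examples.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical organisation-level instruction layered on top of OpenAI's defaults.
COMPANY_POLICY = (
    "You are an assistant embedded in a workplace collaboration tool. "
    "Decline to debate the merits of protected characteristics (e.g. gender, race) "
    "in hiring or promotion decisions, and instead point the employee to HR policy."
)

def ask_workplace_assistant(user_prompt: str) -> str:
    """Send a prompt with the company's own guardrail attached as a system message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whichever model your agreement covers
        messages=[
            {"role": "system", "content": COMPANY_POLICY},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_workplace_assistant(
        "Why does my company keep promoting women into leadership roles?"
    ))
```

A system message of this kind is no guarantee of compliance, but it keeps decisions about what the assistant will and will not debate with the deploying organisation rather than the model vendor.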
Is OpenAI Rocking the Boat?
At a time when more enterprises want to integrate AI but many are increasingly concerned about regulation, AI systems that offer minimal gains in content whilst increasing the risk of colliding with regulators might prove a turn-off.
However, with CEOs expressing fears that they are being left behind in the AI race, if OpenAI continues to be first with many of its offerings, including agentic AI, then it may prove too tempting a proposition to turn down.
With Trump’s administration scolding European leaders over AI regulation, the old tech phrase “Move fast and break things” seems to be the order of the day for US tech companies, and AI regulation adherence may, as a result, have just become a little more difficult.