ChatGPT caused quite a stir when it entered public consciousness, and the conversational AI quickly became part of our communications technology. GPT-3 language models already power tools like Microsoft’s Azure OpenAI Service, but the technology has continued to advance.

Today, OpenAI announced GPT-4, the latest model in the GPT series behind ChatGPT, and it’s already making a splash.

What is GPT-4?

GPT-4 is a large multimodal model, capable of carrying out more nuanced and complex tasks than previous iterations of ChatGPT. Like older versions, it’s trained on publicly available and licensed data, developed using deep learning, and fine-tuned using reinforcement learning from human feedback (RLHF).

This latest version was trained using an entirely new deep learning stack, on infrastructure built together with the team at Azure. As a result, it was trained on more data and with more compute than previous models, making it more capable and advanced than the older iterations.

OpenAI tested GPT-4 against GPT-3.5 using several benchmarks, including AP free-response questions and practice exams, as well as traditional benchmarks for machine learning models. GPT-4 outperformed GPT-3.5 on virtually every metric, often scoring among the top percentiles of human test takers. This remained the case even when the tests were translated into other languages.

With that said, GPT-4 is still limited in its capabilities. Like previous versions of ChatGPT, the AI can make mistakes or invent facts when it doesn’t have accurate data available. It is still an improvement over previous versions, scoring 40% higher than GPT-3.5 on OpenAI’s internal adversarial factuality evaluations, and it is being trained to better distinguish factual statements from incorrect ones. Regardless, users should take care to fact-check and verify the statements GPT-4 makes.

One major focus for GPT-4 was improving its safety features. It is now more likely to refuse requests for disallowed content (such as instructions for carrying out illegal activities) and to respond to sensitive requests, like those for medical advice, in accordance with OpenAI’s policies.

Visual Prompts

One feature new to GPT-4 is the ability to accept image input; previous versions could only respond to text. When shown an image, it can identify objects, diagrams, and written text, and respond to prompts accordingly.

This feature is still being developed and tested, so it isn’t widely available yet. And while GPT-4 can interpret images, its responses are text-only.
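To make the image-in, text-out flow concrete, here is a minimal sketch of what a multimodal request could look like through OpenAI’s Python client. Because the feature isn’t generally available yet, the model identifier, message format, and image URL below are assumptions for illustration, not confirmed API details.

```python
# Illustrative sketch only: image input is not yet broadly available, so the
# model name and request shape here are assumptions about how access might
# look through OpenAI's Python client.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder/assumed model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What objects and written text appear in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)

# The reply is text-only, even though the prompt included an image.
print(response.choices[0].message.content)
```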

Who’s Using GPT-4?

Although GPT-4 was just announced to the public, the team at OpenAI has already been working with partners to integrate the AI into their apps, solutions, and business processes.

Shortly after the announcement, Microsoft confirmed that the new Bing uses GPT-4. This furthers Microsoft’s investment in and partnership with OpenAI (at the same time as the company laid off its ethical AI team).

OpenAI is currently working on GPT-4’s image recognition in collaboration with Be My Eyes, an app that connects visually impaired users with volunteers for help with vision-based tasks. Together they’ve created a Virtual Volunteer feature, in which GPT-4 receives images and provides instant identification and assistance based on what it sees.

Other partners include:

  • Duolingo, which uses GPT-4 for AI conversations in multiple languages and to explain grammatical rules (these features are currently only available for Duolingo Max subscribers)
  • Khan Academy, which uses GPT-4 to create an AI tutor (called “Khanmigo”) for its students
  • The government of Iceland, which is using GPT-4 to help preserve the native Icelandic language
  • Morgan Stanley, which uses GPT-4 to manage, search, and organize its vast content library

As the latest iteration of ChatGPT continues to roll out, more companies will find uses for it. We’re already seeing how conversational AI like ChatGPT can transform unified communications and the customer experience, so there will undoubtedly be new developments in the weeks and months to come.


