
Greg Brockman of OpenAI talks in an interview about how GPT-4 isn’t perfect, but neither are you

Yesterday, OpenAI released the eagerly anticipated text-generating AI model, GPT-4, and it’s an intriguing piece of work.

GPT-4 improves on its predecessor, GPT-3, by making more factually accurate claims and letting developers prescribe its style and behavior more easily. It is also multimodal, meaning it can understand images: it can caption photos and explain what they depict in considerable detail.

GPT-4, however, has significant drawbacks. Like GPT-3, the model “hallucinates” facts and makes elementary reasoning errors. In one example on OpenAI’s blog, GPT-4 describes Elvis Presley as the “son of an actor”; neither of his parents was an actor.

TechCrunch called Greg Brockman, one of the co-founders of OpenAI and its president, on Tuesday to obtain a better understanding of the development timeline, capabilities, and limitations of GPT-4.

Brockman’s response when asked to contrast GPT-4 with GPT-3 was simple: different.

He told TechCrunch, “It’s just different. There are still several issues and errors with the model… But, you can clearly see the improvement in subjects like calculus or law, where it moved from being incredibly awful in some areas to actually being fairly competent in comparison to humans.”

The test results back him up. GPT-4 scores a 4 (on the AP scale of 1 to 5) on the AP Calculus BC exam, versus a 1 for GPT-3.5, the model that sits between GPT-3 and GPT-4. Moreover, GPT-4 passes a simulated bar exam with a score in the top 10% of test takers, whereas GPT-3.5’s score lands in the bottom 10%.

Switching gears, the aforementioned multimodality is one of GPT-4’s most intriguing features. Unlike GPT-3 and GPT-3.5, which could only accept text prompts (such as “Write an essay about giraffes”), GPT-4 can take a prompt that mixes images and text and act on it (for example, an image of giraffes in the Serengeti with the prompt “How many giraffes are shown here?”).

This is because, in contrast to its predecessors, GPT-4 was trained on both text and image data. While OpenAI says the training data was drawn from “a combination of licensed, generated, and publicly available data sources, which may include publicly available personal information,” Brockman demurred when asked about the specifics. (OpenAI has previously run into legal trouble over its training data.)

GPT-4’s ability to understand images is quite remarkable. Fed a three-panel image showing a fake VGA cable plugged into an iPhone and asked, “What do you find hilarious about this image?”, GPT-4 breaks down each panel and correctly explains the joke (“The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port”).
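As the next paragraph notes, image input was not broadly available when GPT-4 launched, so the following is only a hypothetical sketch of what such a mixed image-and-text prompt might look like through OpenAI’s chat API; the vision-capable model name and the image URL are assumptions for illustration, not details from the interview.

```python
# Hypothetical sketch: an image + text prompt via the OpenAI chat API.
# Image input was not generally available at GPT-4's launch; the model
# name and URL below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed vision-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is funny about this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/vga-iphone.jpg"},
                },
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```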

For now, a single launch partner, Be My Eyes, an assistive app for blind and low-vision people, has access to GPT-4’s image analysis capabilities. Whenever the broader rollout comes, it will be “slow and thoughtful,” Brockman said, as OpenAI weighs the advantages and disadvantages.

There are policy challenges to discuss and resolve, Brockman said, such as facial recognition and how to treat images of people. OpenAI first has to figure out where the “red lines” and “danger zones” are before clarifying things further.

OpenAI wrestled with similar moral dilemmas around DALL-E 2, its text-to-image system. After initially disabling the capability, OpenAI allowed customers to upload people’s faces for editing with the AI-powered image generator, asserting at the time that improvements to its safety system had made the face-editing feature possible by “minimizing the possibility of harm” from deepfakes and attempts to produce violent, pornographic, and political content.

Another constant challenge is preventing GPT-4 from being used in unintended ways that could cause harm, whether psychological, financial, or otherwise. Within hours of the model’s release, Israeli cybersecurity startup Adversa AI published a blog post describing how to bypass OpenAI’s content filters and get GPT-4 to produce highly inflammatory text, phishing emails, and offensive depictions of LGBT people.

This is not a new phenomenon in the field of language models. Meta’s BlenderBot and OpenAI’s ChatGPT have both been prodded into saying wildly offensive things and even divulging details about their inner workings. But many people, including this reporter, had hoped that GPT-4 might deliver meaningful gains on the moderation front.

Brockman pointed out that GPT-4 had gone through six months of safety training and that, in internal tests, it was 40% more likely than GPT-3.5 to give “factual” answers and 82% less likely to answer requests for content that was against OpenAI’s usage policy.

Brockman added, “We spent a lot of time trying to understand what GPT-4 is capable of.” Getting it out into the real world is how OpenAI learns, he said; the company is constantly adding features and updating the model to make it far more adaptable to whatever personality or mode users want it to be in.

To be honest, the early real-world results aren’t that encouraging. Beyond the Adversa AI tests, Microsoft’s Bing Chat, the chatbot powered by GPT-4, has been shown to be quite vulnerable to jailbreaking. Using carefully chosen inputs, users have gotten the bot to declare love, threaten harm, defend the Holocaust, and invent conspiracy theories.

Brockman did not deny that GPT-4 falls short here. But he highlighted the model’s new mitigation tools for steerability, including an API-level feature called “system” messages. In essence, system messages are instructions that set the tone and boundaries of GPT-4’s interactions. One might read, for example: you are a tutor who always responds in the Socratic style; instead of giving the student the answer, you always try to ask just the right question to encourage independent thought.

The idea is that system messages act as guardrails that keep GPT-4 from veering off course.
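Here is a minimal sketch of what a system message looks like in practice, assuming the openai Python client and API access to a gpt-4 model; the tutor wording and the student question are illustrative, not taken from OpenAI’s documentation.

```python
# A minimal sketch of steering GPT-4 with a "system" message via the
# OpenAI chat completions API (assumes the openai Python client and an API key).
from openai import OpenAI

client = OpenAI()

SOCRATIC_TUTOR = (
    "You are a tutor who always responds in the Socratic style. "
    "Never give the student the answer; instead, ask the question most "
    "likely to help them think it through on their own."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SOCRATIC_TUTOR},  # sets tone and guardrails
        {"role": "user", "content": "How do I solve 3x + 5 = 14?"},
    ],
)
print(response.choices[0].message.content)  # ideally a guiding question, not the answer
```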

OpenAI has put a lot of effort into understanding GPT-4’s tone, style, and content, according to Brockman. “I think we’re starting to grasp the engineering a little bit more, about how to have a repeatable process that kind of leads you to predictable findings that are going to be extremely valuable to people,” he said.

In our talk, Brockman and I also discussed the GPT-4 context window, which is the text that the model can take into account prior to producing new text. A GPT-4 variant that OpenAI is testing has a memory capacity of about 50 pages, which is five times as much as the standard GPT-4 and eight times as much as GPT-3.
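To put the context window in concrete terms, here is a rough sketch of how a developer might check whether a long document fits within a model’s limit, using the tiktoken tokenizer; the token limits and file name below are assumptions for illustration, not figures from the interview.

```python
# A rough way to check whether a document fits a model's context window,
# using the tiktoken tokenizer. The token limits are illustrative assumptions.
import tiktoken

CONTEXT_LIMITS = {"gpt-4": 8_192, "gpt-4-32k": 32_768}  # assumed token limits

def fits_in_context(text: str, model: str = "gpt-4-32k") -> bool:
    # The GPT-4 family shares the cl100k_base encoding.
    enc = tiktoken.encoding_for_model("gpt-4")
    n_tokens = len(enc.encode(text))
    print(f"{n_tokens} tokens against a {CONTEXT_LIMITS[model]}-token limit")
    return n_tokens <= CONTEXT_LIMITS[model]

with open("contract.txt") as f:  # hypothetical ~50-page document
    print(fits_in_context(f.read()))
```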

Brockman believes the larger context window will unlock new, previously untapped applications, particularly in the enterprise. He imagines a company building an AI chatbot that draws on context and knowledge from different sources, including employees across departments, to give answers that are very smart yet approachable.
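As a rough illustration of that idea, the sketch below simply concatenates notes from several departments into one long prompt; the department names, file layout, and load_notes() helper are hypothetical, and a long-context model name is assumed.

```python
# A minimal sketch of the kind of chatbot Brockman describes: gather context
# from several internal sources and stuff it into a single long prompt.
from openai import OpenAI

client = OpenAI()

def load_notes(department: str) -> str:
    """Hypothetical helper that returns a department's internal notes."""
    with open(f"notes/{department}.txt") as f:
        return f.read()

departments = ["sales", "support", "engineering"]
context = "\n\n".join(f"[{d}]\n{load_notes(d)}" for d in departments)

response = client.chat.completions.create(
    model="gpt-4-32k",  # assumed long-context variant
    messages=[
        {"role": "system", "content": "Answer using only the provided company notes."},
        {"role": "user", "content": f"{context}\n\nQuestion: What is our refund policy?"},
    ],
)
print(response.choices[0].message.content)
```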

That idea is not brand new. Brockman, however, argues that GPT-4’s responses will be far more helpful than those provided by chatbots and search engines at this time.

Previously, Brockman says, the model had no knowledge of who you are, what you’re interested in, and so on. With a bigger context window and that kind of history, he argues, it will certainly be more capable, and it will amplify what people can do.
