OpenAI's Mira Murati, Mark Chen, and Barret Zoph onstage at a live event

The plinky-plunky music in the lead-up to the OpenAI livestream announcing GPT-4o and a desktop app was both anxiety-inducing and placating, complete with the sounds of raindrops and a ticking clock. It’s an apt representation of the dichotomy between what OpenAI does and how it presents itself to the world. 

And, from the choice of presenters to the friendly vibe — including the soothing female GPT-4o voice, which sounds like a kind kindergarten teacher — the message of the OpenAI event was, “We’re your friends; we’re not like other tech companies; let us help you.” All of it seems like a very intentional choice to position the company as the AI maker you can trust, despite (or perhaps in light of) ongoing concerns about copyright infringement, job replacement, and misinformation risks.

From the ground up, the event was designed to be nonthreatening and intimate. For starters, it was led by CTO Mira Murati instead of CEO Sam Altman. Murati has been largely spared from controversy: she navigated the Sam Altman coup by stepping in as interim CEO when briefly appointed, while continuing to back Altman, a steady voice amid the chaos.

Murati was a perfect choice to lead the event. In her casual jeans and perfect blowout, she exuded trust and confidence, breezing through the ethics and safety concerns with a reassuring yet vague disclaimer:

GPT-4o presents new challenges for us when it comes to safety, because we’re dealing with real-time audio, real-time vision and our team has been hard at work, figuring out how to build in mitigations against misuse. We continue to work with different stakeholders out there from government, media, entertainment, all industries, red-teamers, and civil society about how to best bring these technologies into the world.

Next came the live demos, led by research leads Mark Chen and Barret Zoph. Joining Murati in comfy chairs surrounded by wood paneling and plants, the three looked like friends having a casual conversation in a natural, organic environment. All of this was offset, of course, by the demonstration of a completely synthetic technology capable of recreating a human voice that could talk, emote, sing, and even be interrupted, all in real time.

Speaking of comfy chairs, the whole effect was very unlike your usual Big Tech event. At Google and Apple events, you’ll typically see a keynote speaker standing on a vast stage, talking in hyperbole and absolutisms. There was none of that from OpenAI today. Instead, the event represented the opposite of what we might expect at Tuesday’s Google I/O. All of it was to say, “We’re not like the other guys. You can trust us.”

Chen humanized himself by confessing he was nervous. GPT-4o, the nonhuman entity, guided him through breathing exercises to calm him down. It all seemed designed to assure the audience that the new technology is nothing to be afraid of, as if GPT-4o were there to calm all of us.

“Our initial conception when we started OpenAI was that we’d create AI and use it to create all sorts of benefits for the world,” said Altman in a blog post published after the event. “Instead, it now looks like we’ll create AI, and then other people will use it to create all sorts of amazing things that we all benefit from.”

But the dystopian optics of a reassuring, animated robot voice were not lost on those watching. GPT-4o’s voice quickly drew comparisons to Scarlett Johansson’s character in the film Her, a voice assistant that Joaquin Phoenix’s character falls in love with. Altman even seemed to get in on the joke by tweeting “her” during the presentation. So, as he posted his altruistic vision for OpenAI, he was jokingly comparing GPT-4o to a sci-fi technology that usurped human connection.

OpenAI has been saying one thing and doing another for some time. Its mission “is to ensure that artificial general intelligence benefits all of humanity,” but the company has been accused of training its AI models on content scraped from the web without credit or compensation. It announced its AI video generator, Sora, as a tool for making creative visions come to life, but it still hasn’t revealed what the model was trained on, although many suspect it was trained on videos scraped from YouTube and elsewhere on the web.

The company continues to release technology without transparency about how it was created, yet throughout all of it, Altman and OpenAI insist they support regulation and the safe deployment of generative AI.

The message is that the public should blindly trust what OpenAI is doing. And today’s event, with its warm wood tones and lightness and laughter, crystallized this approach. Whether we believe it or not is a different story.