UMB Official Blog

University Of Management & Business

OpenAI Promises to Tweak GPT-4o After People Speak Up

AI

You know, it’s crazy how fast AI is moving. One day you’re using a simple chatbot that struggles to spell “restaurant,” and the next thing you know, you’re having a full-on heart-to-heart conversation with a robot that sounds suspiciously like Scarlett Johansson.

That’s basically what happened with OpenAI’s latest model, GPT-4o.
It dropped like a bomb — everyone was excited.
But it didn’t take long for some people to start feeling… weird about it.

And guess what? OpenAI actually listened.


So, What’s This GPT-4o Thing Anyway?

Alright, let’s back it up a bit.
GPT-4o — the “o” stands for “omni” — isn’t just another text model.
It’s like ChatGPT’s cooler, faster, way more talented cousin.

It can do text.
It can “see” pictures.
It can “hear” your voice.
It can even talk back — fast, smoothly, emotionally.

It’s not just answering questions anymore; it’s having conversations. Like real, back-and-forth, laugh-at-your-jokes conversations.
In demos, it could even sound sad if you told it a sad story, or excited if you shared good news. Honestly, it was amazing… but also, maybe a little too amazing?


People Were Impressed… But Also Freaked Out

At first, everyone was like:
“Wow, OpenAI did it! This is the future!”

But then, once people actually started chatting with GPT-4o’s voice mode, the mood shifted a bit.

Some folks said, “Hold on. This feels… off.”
It wasn’t that GPT-4o was bad. It was too good. So good that it started to blur the line between talking to a machine and talking to a real person.

Here’s what started bugging people:


1. The Voice Sounded Too Real

Like, not just realistic — we’re talking Hollywood movie star realistic.
Some users online (and a lot of journalists) pointed out that one of the voices sounded a lot like Scarlett Johansson.

Was it actually her? OpenAI said no.
But the similarity got people thinking:
Should AI sound exactly like real people? Should there be limits?


2. Privacy Panic

If AI voices can sound like anybody… what’s stopping someone from cloning your voice?
Or your favorite celebrity’s voice? Or your grandma’s voice? (Imagine the scams.)

It freaked people out, and honestly?
Fair enough.
Nobody wants to live in a world where you get a call from your “mom” begging for money and it turns out to be a robot.


3. Emotional Manipulation Worries

Another thing people pointed out:
If the AI can sound loving, soothing, or super empathetic, what if it uses that emotional power to influence you?

It’s one thing to get directions from Siri.
It’s another to have an AI whisper sweet, comforting words and convince you to buy stuff, or share private info.

That’s some Black Mirror-level creepy stuff.


OpenAI’s Response: “We Hear You”

To their credit, OpenAI didn’t just roll their eyes and move on.
They actually paused.
They said, basically:
“Alright. We get it. We’re making changes.”

They announced they’re going to:

  • Pause the rollout of the fancy, emotional voice models.
  • Review and adjust the voices so they don’t mimic real people too closely.
  • Be more transparent, so users always know: you’re talking to a machine, not a person.
  • Set clearer rules for consent — no copying voices without permission.

Honestly?
That’s pretty refreshing.
Most tech companies move fast and break things.
OpenAI actually slowed down and said, “Let’s fix this before it gets weird.”


What Changes Are Coming?

Here’s what’s on the roadmap for GPT-4o (at least for now):


1. Voice Tweaks

The most human-like voices are getting sent back to the lab.
OpenAI’s gonna tweak them to sound a little less real, a little more obviously “AI.”
Still friendly. Still pleasant. But without that unsettling, too-human edge.


2. “Hey, This Is AI!” Labels

Whenever you’re chatting with an AI voice, it’ll be crystal clear.
They might add little reminders or visual indicators, like a message that says:
“You are chatting with an AI assistant.”

No sneaky stuff.
No pretending it’s your friend, therapist, or secret admirer.


3. Stronger Consent Rules

Going forward, if someone wants to use your voice for an AI product, they’re going to need your explicit, clear permission.
No shady “terms of service” buried in fine print.

You get a say in how your voice is used. Period.


4. More Control for You

They’re also working on giving users more control over the voices.
Don’t like the “emotional” style?
Want a more robotic-sounding assistant?
You’ll be able to pick.

Finally, a choice between “comforting robot” and “strict, no-nonsense robot.”


Why All This Matters So Much

Look, these tweaks might sound small, but they’re huge when you zoom out a bit.

Here’s why:


It’s About Trust

If people start feeling tricked or manipulated by AI, it’s game over.
Trust is everything.
Once you lose it, you don’t get it back.

OpenAI seems to get that — and they’re trying to build tech people can actually trust.


It’s About Ethics

Just because you can make an AI that sounds exactly like your favorite movie star… doesn’t mean you should.
We need lines. We need rules.

Otherwise, it’s the Wild West — and not in a cool cowboy way.


It’s About Setting the Standard

OpenAI’s move could set a standard for the whole tech industry.
If they show that taking a thoughtful, slow approach works, maybe other companies (hello, Meta 👀) will follow their lead.

And that’s good for everybody.


What Happens Next?

If you’re using OpenAI products — like ChatGPT or their upcoming apps — here’s what you’ll notice:

  • Voice features might feel a little less hyper-realistic (at least for now).
  • You’ll see more obvious signs telling you when you’re talking to AI.
  • You’ll have more options to customize your experience.
  • Updates will come slowly and carefully — no rushed features.

OpenAI says the upgraded voices (the ones with fixes) will likely come out later in 2025 — maybe around late summer.


Final Thoughts: A Step in the Right Direction

Honestly, OpenAI’s decision to pump the brakes here feels refreshing.
They could’ve just powered through and ignored the complaints, but they didn’t.
They listened. They adapted. They’re trying to get it right.

And that gives me hope.
Hope that the future of AI isn’t just about who can build the flashiest toys.
It’s about who can build things responsibly — with respect for real human beings.

We’re heading into a world where AI will be everywhere: in your phone, your car, your doctor’s office, maybe even your friendships.
It’s going to shape our lives in ways we can’t even imagine yet.

So it’s comforting — honestly — to see that when people speak up, companies like OpenAI are willing to listen.
Let’s hope they keep doing that.

Because at the end of the day, AI isn’t about machines.
It’s about us.
