ChatGPT is now better than ever at faking human emotion and behaviour


Earlier this week, OpenAI launched GPT-4o (“o” for “omni”), a new version of the AI system behind the popular ChatGPT chatbot. GPT-4o is designed to make talking to AI feel more natural. According to the demo video, it can chat with users in near real-time, showing a human-like personality.

OpenAI’s demos show GPT-4o being friendly, empathetic, and engaging. It tells jokes, giggles, flirts, and even sings. The AI also reacts to users’ body language and emotions.

With a new, simpler interface, GPT-4o aims to encourage user interaction and to support new applications built on its text, image, and audio capabilities.

This new AI is a big step forward. But the focus on making it seem human raises questions about whether it truly benefits users, and about the ethics of creating AI that can mimic human emotions.

The Personality Factor

OpenAI wants GPT-4o to be more fun and engaging to talk to, which could make conversations more effective and satisfying. Studies show people trust and cooperate better with chatbots that seem socially intelligent and have a personality. This could be useful in education, where AI chatbots can help students learn and stay motivated.

But some worry that people might grow too attached to AI with human-like personalities, or be harmed by the one-sided nature of human-computer interaction.

The Her Effect

GPT-4o has been compared to the 2013 sci-fi film Her, in which the main character, Theodore, falls in love with an AI. This comparison, made even by OpenAI boss Sam Altman, highlights the potential risks of human-AI relationships.


GPT-4o is a big leap for AI, with its focus on making AI feel more human. While this can make interactions better, it’s important to consider the effects and ethics of such advancements.
