
Is ChatGPT’s Caricature Trend Reflecting a Deeper Need to Be Seen and Understood? Experts Weigh In

As ChatGPT’s viral AI caricature trend spreads across social media, mental health experts question whether users are seeking creativity — or deeper validation and emotional connection.
UPDATED FEB 13, 2026
A 2025 Pew Research survey revealed that 34% of US adults had used ChatGPT, a figure that doubled from 2023

Imagine uploading a selfie to ChatGPT and asking it to “create a caricature of me and my job and everything you know about me,” only to receive a stylised cartoon highlighting your profession, quirks and even aspirations.

In early 2026, this trend went viral, with users sharing exaggerated AI-generated portraits that depict them as larger-than-life professionals surrounded by symbolic props — all created by artificial intelligence.

What appears playful on the surface is now prompting deeper psychological questions.


The Rise of the AI Caricature Trend

More than 150 million weekly active users interact with ChatGPT, a surge partly fueled by viral image-based trends — including the 2025 Ghibli-style AI transformation wave.

Experts say the growing intimacy between humans and AI reflects a phenomenon known as anthropomorphism — the tendency to assign human qualities to non-human systems. With conversational AI becoming more personalised and creative, that tendency is intensifying.

Behavioural science expert Dr Ravineet Singh Marwah explains:

“People are not asking AI to draw them. They are asking to be seen. The caricature trend shows a deeper hunger for validation and identity reflection.”

Is AI Becoming a Digital Mirror?

The caricature trend, which gained momentum in early 2026, typically involves users prompting AI systems to create exaggerated visual representations of their profession or personality based on prior interactions and uploaded photos.

Experts describe the phenomenon as a form of digital projection — a modern-day Rorschach test. AI reflects user data in a personalised, non-judgmental way.

Survey data support the broader psychological shift:

  • A 2025 Pew Research study found 34% of US adults had used ChatGPT, double the 2023 figure.

  • Among users under 30, adoption rates climbed to 58%.

  • A 2025 Kantar survey showed that over half of global AI users had used AI tools to support emotional well-being.

This dynamic mirrors themes explored in series like Black Mirror, particularly the episode “Be Right Back,” which examined emotional attachment to AI simulations.

Why Do People Confide in AI?

Psychologists argue that AI removes social risk. There is no judgment, no embarrassment, no rejection.

A 2025 Sentio survey found that nearly half of large language model users experiencing mental health concerns turned to AI platforms for emotional support.

Dr Gorav Gupta, senior psychiatrist and CEO of Tulasi Healthcare, notes that frequent AI engagement may influence social behaviour. According to him, some individuals begin relying on AI-generated interactions for companionship, leading to reduced real-world social engagement.

Clinicians warn that excessive reliance may:

  • Distort self-perception

  • Blur emotional boundaries

  • Reinforce isolation

  • Increase unrealistic expectations in human relationships

Is AI Therapy a Solution — or a Risk?

Rising youth mental health concerns have contributed to AI’s growing role as a source of advice. In 2025:

  • Approximately 18% of US adolescents experienced major depression.

  • Nearly 40% received no formal treatment.

  • A JAMA study reported that 13.1% of young people used generative AI tools for guidance.

However, research from Stanford in 2025 indicated that AI chatbots may lack empathy and do not meet established therapeutic standards. Currently, no AI system holds regulatory approval for mental health treatment.

Dr Astik Joshi warns that vulnerable individuals may struggle to separate creative AI outputs from their real identity, particularly if emotional attachment deepens.

How the Ghibli Trend Set the Stage

The 2025 Ghibli-style transformation wave — inspired by the aesthetic associated with filmmaker Hayao Miyazaki — paved the way for AI-based self-representation trends.

Hashtags such as #Ghiblified attracted millions of views globally, expanding AI-generated identity expression into mainstream culture.

The trend also raised copyright debates over artistic style replication and AI training datasets.

The Risks of Anthropomorphising AI

Beyond emotional dependency, experts flag privacy concerns. Creating AI caricatures often requires uploading photographs, potentially exposing users to biometric data risks or misuse.

Cybersecurity analyses in 2026 highlighted concerns around deepfake vulnerabilities and image-based data exploitation.

This dynamic resembles themes explored in Westworld, where human-like machines blur emotional and ethical boundaries.

When Does It Become Clinically Relevant?

Dr Anil Kumar, senior consultant psychiatrist, differentiates between harmless entertainment and concerning patterns:

“For most people, AI caricatures are simply light-hearted fun. It becomes clinically relevant when the interaction is no longer playful curiosity but something a person feels they need to manage their emotions.”

Experts emphasise a critical distinction: AI can assist creativity and reflection — but it cannot replace human relationships or professional mental health care.

Balancing Innovation and Emotional Health

Artificial intelligence is reshaping creativity, identity, expression and digital interaction. The caricature trend demonstrates AI’s ability to reflect personal narratives in visually compelling ways.

But experts agree on one principle: technology should complement, not substitute, authentic human connection.

The deeper question may not be what AI shows us — but why we are increasingly asking it to show us who we are.

