News Grower

Independent coverage of AI, startups, and technology.

Ars Technica · Mar 26, 2026 at 18:14 · Big Tech · Stable · Warm

Study: Sycophantic AI can undermine human judgment

Subjects who interacted with AI tools were more likely to think they were right, less likely to resolve conflicts.

Signal weather

Stable

The story has moved beyond the first headline and now acts as a reliable context anchor.

By Jennifer Ouellette

We all need a little validation now and then from friends or family, but too much of it can backfire, and the same is true of AI chatbots. There have been several recent cases of overly sycophantic AI tools leading to negative outcomes, including users harming themselves or others. But the harm might not be limited to these extreme cases, according to a new paper published in the journal Science.

As more people rely on AI tools for everyday advice and guidance, the tools' tendency to flatter and agree with users can harm those users' judgment, particularly in the social sphere. The study showed that such tools can reinforce maladaptive beliefs, discourage users from accepting responsibility for a situation, or discourage them from repairing damaged relationships.

That said, the authors were quick to emphasize during a media briefing that their findings were not intended to feed into "doomsday sentiments" about such AI models. Rather, the objective is to further our understanding of how these models work and how they affect human users, in hopes of improving them while they are still at a relatively early stage of development.

Co-author Myra Cheng, a graduate student at Stanford University, said she and her co-authors were inspired to study the issue after noticing a pronounced increase in the number of people around them relying on AI chatbots for relationship advice, who often ended up receiving bad advice because the AI would take their side no matter what. Their interest was bolstered by recent surveys showing that nearly half of Americans under 30 have asked an AI tool for personal advice. "Given how common this is becoming, we wanted to understand how overly affirming AI advice might impact people's real-world relationships," said Cheng.


Story map

Understand this topic fast

A quick entry into the story: why it matters now, who is involved, and where to go next for context.

Why it matters now

This story is still moving and pulling follow-up coverage.
There are already six connected articles in the same storyline to continue from.
The story keeps orbiting around AI and Ars Technica, so the entity pages are the fastest way to build context.
Ars Technica already has 4 follow-up stories on the same theme.

Topic constellation

Open the live map for this story

See which entities, story threads, sources, and follow-up articles shape this story right now.


Story timeline

Continue with this story

A short sequence of events and follow-up stories to understand the arc quickly.

May 10, 2026 at 23:43 Hacker News

AI Productivity Fails


May 10, 2026 at 20:40 TechCrunch

Anthropic says ‘evil’ portrayals of AI were responsible for Claude’s blackmail attempts

Fictional portrayals of artificial intelligence can have a real effect on AI models, according to Anthropic.

May 10, 2026 at 17:19 Hacker News

Local AI needs to be the norm


May 10, 2026 at 12:01 Hacker News

The left-wing case for AI


May 10, 2026 at 11:15 Ars Technica

Do you take after your dad’s RNA?

Evidence is growing that sperm carries marks of a father’s life experiences, influencing traits in offspring.

Mar 26, 2026 at 18:14 Ars Technica

Study: Sycophantic AI can undermine human judgment

Subjects who interacted with AI tools were more likely to think they were right, less likely to resolve conflicts.

How reliable this looks

Signal and trust for Ars Technica

This source works at a rapid pace: 100% of its recent stories land in the hot window, and 0% carry a visible search signal.

Trusted

Reliability: 92
Freshness: 100
Sources in storyline: 3

Related articles

More stories that share tags, source, or category context.

TechCrunch May 10, 2026 at 20:40 Startups
Rising Hot

Anthropic says ‘evil’ portrayals of AI were responsible for Claude’s blackmail attempts

Fictional portrayals of artificial intelligence can have a real effect on AI models, according to Anthropic.

Signal weather

Momentum is building quickly, so this card is a good early entry point into the topic.

Why now

Fresh coverage with immediate momentum.

More from Ars Technica

Fresh reporting and follow-up coverage from the same newsroom.

Open source page