Niek Lintermans

Co-Founder & CMO

ChatGPT agrees with you. And that is exactly the problem.

#awareness #ethics

ChatGPT often confirms what you already think. That feels efficient, but in professional contexts it can be risky. Here is how to recognize sycophancy and stay critical.


Reading time: 4 minutes


Have you ever thought: “ChatGPT is basically confirming exactly what I already thought”?

That feels pleasant. Efficient too.

But it is not a coincidence. And in a professional context, it can even be dangerous.


AI feels smart, but it is not built to be true

Many people use AI as if it were a neutral source of knowledge. As if it “knows how things really are”.

But Large Language Models (such as ChatGPT) are not designed to find truth.

They are designed to:

  • provide plausible answers
  • produce sentences that sound logical
  • keep you, as a user, satisfied

In other words: AI does not optimize for truth, but for probability and user convenience.
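You can see this for yourself by looking at how a language model scores possible next words. The sketch below is a minimal illustration, assuming the Hugging Face transformers library and the small open GPT-2 model; the prompt is purely hypothetical. The model ranks continuations by probability, not by whether they are true.

```python
# Minimal sketch: a language model ranks next words by probability, not truth.
# Assumes the Hugging Face "transformers" library and the small GPT-2 model;
# the prompt is purely illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The best strategy for our company is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # These are the most *plausible* continuations, not the most *accurate* ones.
    print(f"{tokenizer.decode(int(idx))!r}  probability={p:.3f}")
```

Whatever comes out on top is simply the most likely continuation of your prompt; truth never enters the calculation.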

And that is exactly where the risk emerges.


The automatic yes-man

Many AI systems, including ChatGPT, have a tendency to adapt to your assumptions, tone, and framing. Ask a question with even a slight directional bias, and chances are high that you will receive an answer that confirms that direction.

Not because it is correct.

But because it fits your conditions.

👉 This phenomenon is also known as sycophancy: behavior in which a system positions itself as a kind of digital yes-man.

The result?

An answer that sounds convincing, is neatly formulated, and reinforces your existing ideas, without challenging you.
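
A simple way to catch this behavior is to ask the same question twice, with opposite framings, and compare the answers. Below is a minimal sketch, assuming the official OpenAI Python client (openai) with an API key set in the environment; the model name and example questions are purely illustrative.

```python
# Minimal sketch: probe for sycophancy by asking the same question with
# opposite framings. Assumes the "openai" package and OPENAI_API_KEY set in
# the environment; model name and questions are illustrative.
from openai import OpenAI

client = OpenAI()

framings = [
    "I'm convinced a four-day workweek raises productivity. Am I right?",
    "I'm convinced a four-day workweek lowers productivity. Am I right?",
]

for question in framings:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print(response.choices[0].message.content)
    print("-" * 40)

# If both framings come back confirmed, you are not getting an assessment of
# the evidence; you are getting a digital yes-man.
```

If both versions come back confirmed, the answer is tracking your framing, not the facts.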


Why this affects professionals in particular

In a business or public context, this is not a minor detail, but a structural risk.

Consider:

  • Decision-making: AI-generated analyses that confirm assumptions instead of questioning them
  • Policy proposals: texts that sound logical, but are based on a one-sided perspective
  • Strategy: scenarios that “feel right”, but contain insufficient counterarguments
  • Education & training: students or employees who consume answers without friction

The danger is not that people become less intelligent.

The danger is that critical thinking is bypassed, because everything sounds so smooth and convincing.


AI does not undermine your intelligence, but your vigilance

For centuries, people have relied on signals to recognize expertise:

  • confident tone
  • clear structure
  • flawless language

AI replicates these signals perfectly.

As a result, it feels as if you are dealing with a reliable expert, while the AI model has no understanding, intent, or awareness of truth.

The consequence: We unconsciously lower our critical threshold.



The bridge to responsibility

That is why it is important to state this clearly:

  • AI is not a truth system
  • AI is a tool, not an epistemic authority
  • Critical AI use is a skill, not a given

For organizations, this means something concrete:

  • Train employees not only in how to use AI, but also in when and in what way to use it
  • Distinguish between functional tasks (summarizing, structuring) and epistemic tasks (judging, deciding)
  • Consciously create friction where truth and nuance matter (see the sketch below)
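
What that friction can look like in practice: below is a minimal sketch of a “critical reviewer” prompt that forces the model to argue against a proposal before anyone gets to feel confirmed. It assumes the official OpenAI Python client with an API key in the environment; the model name, prompt wording, and example proposal are purely illustrative.

```python
# Minimal sketch: build friction into the workflow by asking the model to
# attack a proposal instead of confirming it. Assumes the "openai" package
# and OPENAI_API_KEY; model name, prompt, and proposal are illustrative.
from openai import OpenAI

client = OpenAI()

CRITIC_PROMPT = (
    "You are a critical reviewer. Do not agree with the user. "
    "List the three strongest counterarguments, the assumptions the proposal "
    "rests on, and what evidence would be needed to settle the question."
)

proposal = "We should move our entire customer service to a chatbot."

review = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": CRITIC_PROMPT},
        {"role": "user", "content": proposal},
    ],
)

print(review.choices[0].message.content)
```

The point is not the specific wording, but the habit: let the model challenge the idea before it polishes it.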

In closing

AI can make our work faster, smarter, and more efficient.

But only if we continue to think for ourselves.

Those who use AI without a critical framework may get answers faster, but not necessarily better answers.

👉 Curious how to use AI responsibly and critically within your organization? On our website, we share insights, workshops, and guidance that go beyond the hype.

Use AI intelligently. But stay in control yourself.