
Aalborg University, August 8, 2025 –
How much do you trust ChatGPT or other AI chatbots? According to new research by students at Aalborg University, the answer depends largely on how the chatbot communicates.
Cecilie Ellegaard Jacobsen, Emma Holtegaard Hansen, and Tania Argot from the Department of Computer Science conducted a master’s thesis project exploring how people respond to chatbot language style and expressions of certainty.
In their study, 24 participants interacted with a specially developed chatbot built on ChatGPT and answered 20 yes/no questions spanning topics such as music, health, geography, and physics. The questions were identical for every participant; what varied was how the chatbot phrased its answers, using one of four communication styles:
- Confident and personal (“I am sure that…”)
- Uncertain and personal (“I think maybe…”)
- Confident and impersonal (“The system has found that…”)
- Uncertain and impersonal (“The system may have found that…”)
The results revealed that users placed more trust in confident answers, particularly regarding the chatbot’s perceived competence. However, overconfidence without justification sometimes caused users to lose trust.
Some participants appreciated when the chatbot acknowledged uncertainty, finding it more honest and human. Others preferred neutral, less personalized language.
Participants could also compare the chatbot’s answers with Google’s top search result. Many used this option to fact-check the chatbot, even when both sources gave the same answer, which highlights how trust in AI also depends on users’ habits and preconceptions.
Recommendations from the study include:
- Use uncertainty purposefully to help users better calibrate trust.
- Balance human-like tone with credibility; too much “personality” can reduce trust.
Professor Niels van Berkel, who supervised the project, emphasized the importance of this work:
“The study shows that both perceived and actual trust can be influenced by how AI presents its certainty and identity. These insights can help future AI systems better guide user trust and avoid overreliance on potentially flawed outputs.”
Project Title: Framing the Machine: The Effect of Uncertainty Expressions and Presentation of Self on Trust in AI
Participants: 24 individuals aged 20–59
Program: MSc in Digitalisation and Application Development
Institution: Aalborg University
Source: Aalborg University