Cognitive resilience in the age of AI: cognitive errors in interactions with large language models

20.10.2025
Dr Marcin Rządeczka
The next lecture in the series Encounters with Philosophy and Cognitive Science will take place on Tuesday, 28 October, at 17.00 in room 108A at the Faculty of Philosophy and Cognitive Science, UwB (Plac NZS 1). Our guest will be Dr Marcin Rządeczka (IF, UMCS).

The aim of the lecture is to analyse the limitations of large language models (LLMs) in terms of epistemic opacity, reverse alignment, façade cognitive humility and the ethical risk of providing the user with excessive cognitive comfort. Reverse alignment occurs when users align their expectations, attitudes and even mental states with the content generated by LLMs. Instead of serving a supportive function, large language models may, intentionally or unintentionally, reinforce maladaptive beliefs or cognitive distortions, thereby undermining the user's decision-making autonomy. These consequences are exacerbated by façade cognitive humility, where the expressions of uncertainty produced by LLMs stem not from genuine epistemic self-assessment but from statistically learned linguistic patterns. Such expressions create a superficial impression of reflectiveness while masking the models' inability to genuinely assess the epistemic value of the content they generate. As a result, users may place excessive trust in LLMs, falling prey to an illusion of expertise instead of seeing them as, for example, conversational tools optimised primarily for coherence and user satisfaction.

Designing LLMs with a focus on comfort risks locking users into dynamically shaped cognitive bubbles that prioritise short-term attachment and satisfaction over long-term psychological resilience. Such design choices, coupled with epistemic opacity, undermine users' cognitive autonomy and foster over-reliance on LLMs, which may in turn limit critical thinking and plurality of views. Moreover, reverse alignment can subtly shape user preferences, reinforcing passivity and an unwillingness to probe the limits of one's own beliefs.
To counter these phenomena, it is worth considering at least a gradual shift away from models based on providing cognitive comfort towards systems that introduce 'epistemic friction': systems that foster 'cognitive resilience', encourage critical engagement and cultivate virtues such as intellectual humility and epistemic courage. This approach rests on the philosophical imperative that one of the priorities of AI development should be to support the user's personal and cognitive development, including the formation of reflective and autonomous mental health agency.


Biography:
Dr Marcin Rządeczka is a cognitive scientist and philosopher specialising in applications of artificial intelligence in mental health, computational psychiatry, evolutionary psychology and the philosophy of psychiatry. Dr Rządeczka has presented his research at conferences including the Computational Psychiatry Conference at Trinity College Dublin, AI for Mental Health at Oxford University and ECAI 2025 (the 28th European Conference on Artificial Intelligence) at the University of Bologna. He completed a research internship at Centrum Wiskunde & Informatica (CWI) in Amsterdam, where he worked with the Human-Centered Data Analytics (HCDA) team. His postdoctoral fellowship at IDEAS NCBR (now IDEAS Research Institute), in the Psychiatry and Computational Phenomenology team, focused on interactions between humans and large language models in the context of mental health, cognitive errors and epistemic humility in the design of neuro-friendly artificial intelligence. He has published his work in journals such as JMIR Mental Health, Psychopathology, Frontiers in Public Health, Frontiers in Psychiatry and Community Mental Health. He is a member of the European Society for Philosophy and Psychology, the Polish Cognitive Science Society and the Polish Philosophical Society. He has completed an NCN grant entitled "Overcoming digital biases: exploring algorithmic justice in interactions with therapeutic chatbots".
