
What Artificial Intelligence will never be

When we discuss artificial intelligence, our language often betrays a theoretical assumption we have never really examined: we speak of systems as if they were someone, not something. We say they understand, choose, learn, make mistakes, and, taking it a step further, that they could even be morally responsible for their actions. But this linguistic shift is not harmless: it creates a conceptual illusion, because it confuses the ability to generate normatively plausible responses with what actually makes a being a moral agent.
By Alessio Salviato
01 Apr 2026

The point is not (or not only) that morality would be too complex to be codified. It is true that the idea of a morality fully translatable into ex ante rules is controversial: many theories – virtue ethics, particularism – insist that moral action requires sensitivity to context, attention to morally salient features, creativity, imagination, and empathy, and that this competence is formed from the ground up, through experience, rather than by the mechanical application of a repertoire of precepts.

But even if we granted, hypothetically, that there exists a set of formal procedures capable of guiding correct decisions, the more decisive question would remain: what does it mean to understand morally? Having moral knowledge by testimony or imitation is not enough; what distinguishes a good moral judge is a kind of internal understanding, a grasp of the relationship between a moral proposition and the reasons that make it true. This interiority is what makes flexibility of judgment possible in new contexts, when there is no ready-made rule and what matters is discernment of the particular case.

It is at this level that AI encounters a structural limit. Even assuming that a system can generalize, extrapolate, and produce good responses in unseen scenarios, it remains doubtful that it can inhabit the internal stance of a moral reasoner, the stance of someone who does not merely correlate input and output but sees reasons as reasons, as binding constraints that can be understood as such. If morality is, at least in part, contextual and not fully codifiable, then a further problem arises: how does AI know it has given the morally correct answer when there is no predefined target against which to measure itself? With reinforcement learning, in the typical case, one can reward progress toward a target because the target is known (coordinates, a score, a desired state). But if we often do not know in advance what the morally correct action is, then the very reference that would make this kind of training possible is missing.
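To make the point about training signals concrete, here is a minimal, hypothetical sketch (the function name and numbers are illustrative, not taken from the article) of a reward in the reinforcement-learning style just described: it presupposes a known target, and when no target can be specified in advance, the quantity the learner would optimize is simply undefined.

```python
# Minimal, hypothetical sketch: a reward signal presupposes a known target.

def reward(state: float, target: float | None) -> float:
    """Return a reward that grows as `state` approaches `target`."""
    if target is None:
        # Without a predefined target there is nothing to measure against:
        # the quantity the learner would optimize is undefined.
        raise ValueError("No target: the reward signal cannot be computed.")
    return -abs(state - target)  # closer to the target => higher reward

# A navigation-style task: the desired coordinate is known, so training works.
print(reward(state=3.0, target=5.0))   # -2.0
print(reward(state=4.5, target=5.0))   # -0.5 (closer, so higher reward)

# A moral dilemma: we often cannot specify the "correct" outcome in advance,
# so the very signal that would drive learning is missing.
try:
    reward(state=3.0, target=None)
except ValueError as err:
    print(err)
```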

Yet the deepest critique is not epistemic but existential. It lies, precisely, in the difference between a machine that receives rewards and penalties and a human being who experiences success and failure as events that affect their inner life. For us, reward and punishment certainly have an informative function, but not only that: they induce conative states, activate desires and frustrations, and above all operate as a counterweight to our constant possibility of deviating from what we know is right out of weakness, convenience, or attraction to advantage.

AI, by contrast, does not experience the pull of deviation: it does not know the tension between duty and temptation that, for better or worse, structures human moral life. For this reason, speaking of AI’s moral motivation is, at the very least, ambiguous: the reward is like a checkmark on a task; it signals that one is on the right path, but it is never an achievement that warms, nor a failure that burns. If this difference seems merely psychological, it is enough to shift the gaze to what makes morality a relational experience and not a mere exercise in calculation. An essential part of moral action consists in being able to understand the weight of what we do because we know, at least to some extent, what it means to be on the other side of our action. Someone who runs over an animal, for example, can read its pain and imagine what that pain means from the inside; without this capacity, it is difficult to speak of a robust understanding of the moral dimension of the act.

Now, even assuming that a system can record indicators of suffering and correct future behavior, the problem is that often those who have suffered a wrong do not only ask that the agent take note of the error; they ask that it understand it as a wrong, that is, that it grasp the offense and not just the deviation.

There remains a further step, perhaps the most important for clarifying what distinguishes us: the recognition of the other as a bearer of dignity, and not as a mere object of calculation. Indeed, the very core of moral law, namely the recognition that the other has dignity equal to mine, may require something beyond mere cognition. To make this “beyond” visible, it is useful to refer to what Darwall called recognition respect¹: it is not enough to be respectful as a prudential strategy (the deference of someone who fears the judge); one must recognize that the other deserves respect as a person, and this difference cannot be reduced to a simple behavioral disposition, because outward behavior can be identical in the two cases. If this is correct, then the most an artificial system can do is behave as if it recognizes; but simulation and the genuine phenomenon do not coincide, and in the moral domain that difference is not marginal.

At this point, putting the pieces together, it becomes clear why one cannot simply draw up a list of missing capacities. The most seductive methodological error is to treat moral agency as a modular assembly: a bit of judgment, a bit of motivation, a bit of empathy, a bit of recognition. One should rather maintain an organic vision: discerning morally in new contexts requires imagination; imagination requires care and recognition; care and recognition imply attitudes and feelings; and feelings imply qualia, that is, a lived dimension that gives motivational force and cognitive depth.

For this reason, we should distrust the idea that AI can fill in piece by piece what separates it from being a moral agent: what matters is the whole package, a way of life in which reasons are not only manipulated but experienced as reasons.

If there is something that will continue to distinguish us – and which, in a public reflection on AI, should remain front and center – it is not a vague human romanticism, nor a metaphysical residue to cling to out of fear of technology. It is, more soberly, the embodied and relational structure of normativity: the fact that for us good and evil are not merely problems to solve, but realities that oblige us from within, through vulnerability, recognition, temptation, remorse, and repair. AI can become increasingly competent at producing decisions that are acceptable, or even better than ours by certain criteria; but this progress does not amount to drawing closer to moral personhood. Precisely for this reason, there is a practical corollary we should not lose sight of: if AI is not and will not be a moral agent, then it can never be the place where we deposit responsibility.

  1. Darwall, S. L. (1977). Two kinds of respect. Ethics, 88(1), 36–49.