
Artificial Intelligence: responsible use in society

What happens when a decision is made by an automated system? Where does human responsibility lie? I discuss this with Massimo Chiriatti, technologist and IT executive, author of the book Incoscienza Artificiale
By Francesca Bandieri
01 Apr 2026

AI is often presented as a decision-making support tool. How does it change the relationship between people and systems?
AI in decision-making works only if it truly remains a support. When its suggestions become an implicit norm, the balance of the relationship shifts. The point is not whether AI is “right” more often than a human, but who has the authority to disagree. If the organization does not legitimize human dissent, AI does not support—it governs. A healthy relationship is triadic: human, system, and context. And cultural, organizational, and value-based context matters as much as the algorithm.

When a decision is made by an AI system, where does human responsibility end?
Responsibility cannot be delegated to a system. AI is not a moral agent: it is an artifact designed and used by humans within organizations that make specific choices. The current risk is confusing automation with inevitability. Rules are not meant to limit AI, but to make decisions explicit: who defines the objectives, selects the data, and determines when an output is acceptable. Governing AI means clarifying these steps, designing responsibility boundaries, and preventing automation from becoming a shortcut to evade judgment.

Algorithms work on what is measurable. But is there anyone or anything left out of the data?
AI systems rely on what they can measure, categorize, and make comparable. Some experiences, bodies, and contexts remain at the margins—not by explicit choice, but because they are not included in the reference models. When a reality is underrepresented, the system tends not to recognize it or oversimplifies it. This is not just a technical issue: data reflect priorities and assumptions inherent in the design. Without conscious attention, AI ends up reproducing existing structures rather than questioning them.

Is AI changing work through reskilling or creating new inequalities?
A distinction is emerging between those who can interact with systems, interpret them, and define their questions, and those who mainly engage with outputs. This difference is not just technical; it concerns access to language and decision-making margins. In this context, reskilling cannot be the only solution. Reducing the discussion to skill updates risks focusing attention on individuals while leaving organizational choices in the background: how work is redesigned, and which roles are valued or made invisible.

Generative AI simulates empathy and listening. Is it changing how we understand relationships?
To some extent, yes, because it accustoms us to forms of interaction that do not involve reciprocity. The interesting question then becomes: what does “relationship” mean when one party lacks experience or intentionality? Observing these new exchanges can help reflect on how we interpret complexity and the meaning of human relationships, and on the boundaries between technological support and authentic interaction.

More and more people are turning to AI for advice on health and personal life. What are the consequences?
The use of AI platforms for sensitive questions stems from real needs: limited access to services, the desire for rapid responses, and the need for guidance. In this sense, AI addresses a preexisting demand. The risk arises when these tools are perceived as substitutes for, rather than complements to, professional and caregiving relationships. In areas like health, a plausible answer is not equivalent to care or clinical responsibility: AI can support access to information but cannot replace judgment, relationships, or human responsibility. A medical professional does more than provide information: they interpret, assess the context, and assume responsibility. The challenge is therefore to maintain clear role boundaries.

AI governance in companies is increasingly urgent. Where should one start?
Governance is not just a set of rules, but a daily practice. Awareness of the implicit choices in data, models, and objectives is essential. Above all, it is necessary to create a culture in which decisions produced by systems can be interpreted and contested, so that AI remains a tool rather than a mechanism without visibility or accountability.

Registration with the Court of Bergamo under No. 04, 9 April 2018. Registered office: Via XXIV maggio 8, 24128 BG, VAT no. 03930140169. Layout and printing by Sestante Editore Srl. Copyright: all material by the editorial staff and our contributors is available under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 licence. It may be reproduced provided that you cite DIVERCITY magazine, share it under the same licence and do not use it for commercial purposes.