
Artificial intelligence as a new social infrastructure: promise, risk, and responsibility
At this moment in my life, I am (re)building many things from scratch. A new personal phase, marked by autonomy, redefinition, and profound change. And a new professional phase, in which I am launching a dedicated Diversity, Equity & Inclusion practice within Gruppo Risorse S.p.A., a company with established processes and a consolidated history, alongside professionals I have collaborated with for a decade. It is from this vantage point—not neutral, not theoretical—that I look at artificial intelligence.
AI is increasingly talked about as the infrastructure of the future. Honestly, I think a more accurate definition is this: AI is becoming a new social infrastructure. It can no longer be considered merely a tool, nor simply technology. It is a system that is already shaping relationships, opportunities, access, exclusion, and representation—and like any infrastructure, it is never neutral.
For a person with a disability like me, AI is an ambivalent promise. On one hand, it is potentially revolutionary: automatic captions, voice summaries, real-time translations, speech recognition, communication support tools. All of this can vastly increase autonomy, reduce barriers, and make the world more accessible.
On the other hand, AI risks replicating and amplifying what already exists: inequalities, stereotypes, systemic exclusions. AI is not born in a vacuum. It is trained on historical data, language, behavior, and social models. If the world is ableist, sexist, racist, heteronormative, or classist, AI learns it. And it reproduces it, often in ways that are invisible and harder to challenge. The greatest risk is not that AI makes mistakes. It is that it is perceived as objective.
When an algorithm decides who is eligible for a job, reliable for a loan, or compatible with a position, we are delegating profoundly human decisions to systems that do not understand context, complexity, or intersections. I am not just a person with a disability. I am also LGBTQIA+, a professional, someone undergoing a profound transformation. No dataset can truly capture all of this.
Empathy with other individuals, understanding beyond raw data, the ability to explore and contextualize situations, behaviors, and actions—these cannot be fully delegated to an algorithm because, even with the best training, it will never truly act like a human being. In the workplace, AI is already reshaping recruitment, performance evaluation, onboarding, and training. This can be an enormous opportunity if used wisely, but it can become a sophisticated exclusion machine if not designed with inclusivity in mind.
An algorithm that discards CVs with employment gaps does not see illness, rehabilitation, or caregiving—it sees only discontinuity. Only empathy and a human interlocutor can place that discontinuity in context.
A system that favors certain linguistic forms does not recognize communicative diversity. A tool analyzing voice tone does not account for someone like me, who experienced a sudden, significant hearing loss. DEI cannot come afterward. It cannot be an ethical afterthought; it must be a design principle.
If AI is truly a new social infrastructure, we must ask: who designs it? Who decides what is “normal” and “efficient”? And above all: who is left out? It is on this last question that we must act, because no one should be left out. If a system is even partially exclusionary, it cannot be adopted as the sole solution.
I do not want a “kind” AI; I want a just AI. I do not want an AI that simulates inclusion, but one that makes it structural. This means involving people with disabilities, minorities, and marginalized communities in development processes. It means ongoing ethical audits, transparency, accountability, channels for contesting decisions, and training for those who use these tools, because a system is never better than those who govern it.
The real question is not whether AI will be the future. It already is. The question is: for whom? If we do not do this work now, we risk building a world that is more efficient but less human. More automated but less accessible. And I, who am currently rebuilding pieces of my life and identity, do not want to inhabit a future that expects me to adapt to systems designed without me. I want a future that accounts for me—and for all the differences that make this world real, complex, and alive.