
Artificial intelligence is not neutral

From enthusiasm for innovation to awareness of a collective responsibility: an international experience becomes the spark for questioning how artificial intelligence reflects and amplifies inequalities already present in society. Between unbalanced data, non-neutral algorithms, and concrete impacts on work and visibility, Corrado Grosso, one of the founders of the Findomestic & Friends LGBT+ community, highlights the urgency of shifting the focus from AI as an abstract promise to AI as an organizational, political, and social justice issue.
By Corrado Grosso
01 Apr 2026

In May 2025, I was in Warsaw for the 4th BNP Paribas Global LGBT+ Business Conference. At that time, for me, artificial intelligence was mainly about innovation, speed, and the ability to do things better and faster. Interesting, of course. Certainly important. But still somewhat distant from my daily life. On the last day of the conference, however, something changed. The discussion focused on AI and inclusion—on how this technology was advancing much faster than expected and how, as a society, we didn’t really have a plan for it. It wasn’t an apocalyptic view, just an observation. That was the moment I realized I had long underestimated the impact of AI on our everyday lives.

If artificial intelligence learns from data, and data reflects society, then AI also reflects its inequalities. Dr. Jowita Michalska, an entrepreneur and digital educator actively involved in Europe’s digital transformation, shared figures that struck me deeply: only 18% of people working in AI development are women, and LGBTQIA+ individuals are practically invisible in the teams designing these systems. This led me to wonder: what happens when tools built by teams in which these groups are barely represented begin to influence selection processes, evaluations, and visibility?

After returning to Italy, I decided—together with the PRIDE members of BNL and Cardif, and with the support of Findomestic’s Diversity Officer—to organize an event in Rome on AI, as part of the Inclusion Days in October 2025. We had two exceptional speakers: Professor Paola Velardi, an AI expert at Sapienza University of Rome with decades of experience in NLP and machine learning, and Professor Tania Cerquitelli, a leading figure at the Polytechnic University of Turin in data science and transparent algorithms. Both helped me take a conceptual leap and give concrete shape to my reflections.

During the event, attended by over 500 colleagues from the three companies, we didn’t discuss abstract scenarios but real cases: facial recognition systems that are less accurate for people with darker skin tones; credit scoring algorithms that have inherited biases from historical banking data; language models that may reproduce gender stereotypes or exclude non-dominant forms of representation. This led us to a clear conclusion: algorithms are not evil, but neither are they neutral. They are probabilistic and reproduce patterns. If there is imbalance in the data, they amplify it; if there is bias in society, they risk making it systemic.
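To make that amplification concrete, here is a minimal, purely illustrative sketch in Python of the kind of audit our speakers advocated. It measures per-group approval rates of a hypothetical credit-scoring model; the synthetic data, the group names, and the four-fifths threshold are all assumptions made for this example, not figures from the event.

```python
# Minimal audit sketch: does a scoring model approve one group far less often?
# Everything here is synthetic and hypothetical; the 0.8 cut-off echoes the
# common "four-fifths" rule of thumb, not any specific regulation.
from collections import defaultdict

def approval_rates(decisions):
    """Approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Lowest group rate divided by highest (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

# Synthetic model outputs that have inherited a historical imbalance.
decisions = ([("group_a", 1)] * 80 + [("group_a", 0)] * 20
             + [("group_b", 1)] * 55 + [("group_b", 0)] * 45)

rates = approval_rates(decisions)
print(rates)                   # {'group_a': 0.8, 'group_b': 0.55}
print(disparity_ratio(rates))  # 0.6875 -> below the 0.8 rule of thumb
```

Nothing in the model needs to be malicious for the ratio to fall: the imbalance in the historical data is enough, which is exactly the point.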

We opened the event with a clip from The Matrix, evoking the fear of machines taking control over humans. But the transformation happening today is far more subtle and widespread. AI is entering business processes, filtering information, suggesting decisions, and shaping opportunities. It does not rebel against humans; it influences their lives and possibilities. The issue, therefore, becomes organizational, not just ethical. We need to move beyond the rhetoric of innovation and focus on governance, data auditing, model transparency, control mechanisms, and education. We need to adopt human-in-the-loop approaches, where AI supports human judgment rather than replacing it. We must assess the impact on already vulnerable groups before algorithms crystallize inequalities. Because responsible technology does not advance on its own, and without safeguards, the promise of fairness risks remaining abstract rather than becoming a tangible outcome.
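One way to picture the human-in-the-loop approach is a simple confidence gate: the model decides on its own only when it is confident in either direction, and everything uncertain is routed to a person. The sketch below is hypothetical; the function names and the 0.9 threshold are my own assumptions, not a description of any system in production.

```python
# Hypothetical human-in-the-loop gate: confident predictions pass through,
# uncertain ones are escalated to a human reviewer. The 0.9 threshold is
# an illustrative assumption, not a recommended value.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str            # "approved", "rejected", or "human_review"
    score: float            # the model's confidence score
    reviewed_by_human: bool

def decide(model_score: float, threshold: float = 0.9) -> Decision:
    """Let the model act only when confident; route the rest to a human."""
    if model_score >= threshold:
        return Decision("approved", model_score, reviewed_by_human=False)
    if model_score <= 1 - threshold:
        return Decision("rejected", model_score, reviewed_by_human=False)
    return Decision("human_review", model_score, reviewed_by_human=True)

for score in (0.95, 0.50, 0.05):
    print(decide(score))
```

The design choice is the point: the threshold is a governance decision, set by people and open to audit, not something the model determines for itself.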

At that moment, I felt the issue concerned me personally—not only as a professional, but as someone who believes in diversity, representation, and social cohesion. This is not just an LGBTQIA+ or gender issue. Nor is it only a technological one. It is a matter of justice: who is visible in the data and who is left out. We closed our event in Rome with another film clip: Baymax, the soft robot from Big Hero 6, designed to take care of people. It’s a powerful metaphor. Artificial intelligence can become a tool that improves our lives in terms of accessibility, language, services, and opportunities—but only if we design it consciously. The difference lies not in the technology itself, but in the choices we make every day as we build and train it.

For this reason, our journey didn’t stop in October. In May 2026, on the occasion of the International Day Against Homophobia, Biphobia and Transphobia, we plan to launch a cross-functional working group at Findomestic, involving all D&I communities and bringing together reflections on AI and gender, AI and disability, and AI and different generations. This idea is also inspired by initiatives such as “Algoritmi + Inclusivi,” which promote practical tools for designing more equitable and responsible technologies.

In less than a year, I moved from enthusiasm for innovation to a full awareness of a shared responsibility. I don’t believe AI is the problem. But it would be naive to treat it as a neutral tool. It is an amplified mirror of our choices. The real challenge is not how powerful it will become tomorrow, but how willing we are to govern it responsibly—with rules, values, and a plurality of perspectives—so that it serves humanity and cares for it, without leaving anyone behind.
