
Who is designing the future, and who risks being left out?
Let’s start with a brief introduction.
I approach AI both as a cultural crossroads and as a lever for efficiency: every technological revolution redefines power and access to opportunity. For over twenty-five years, I have supported CEOs, executives, and boards of directors through technological and organizational transformations. I am an Equity Partner at BIP Group and lead the Human Capital Center of Excellence, where we focus on the impact of artificial intelligence on work, skills, leadership, and inequality. Today, my work moves along a precise boundary: helping organizations integrate AI strategically and responsibly, so that innovation and inclusion grow together.
Your book Nessuna fuori dal codice was published on February 27. Where did the idea for it come from?
It started with a very simple and radical question we asked ourselves at the beginning: is there a need in Italy today for a book on AI and women? Even ChatGPT said yes. Globally, women represent about 22% of the AI workforce, and even fewer hold top positions. The book comes from the urgency of ensuring that the most transformative technology of our time does not develop within a partial imagination. Artificial intelligence is a game changer: it redefines work, power, language, and representation. If women remain at the margins of its design, the risk is that they will also remain at the margins of the future. We didn’t want to write a technical book, but a cultural one: a tool to spark conversations across schools, businesses, and research. A book that looks at gaps and tries to point a way forward—turning AI into an engine for equity.
What inequalities does AI perpetuate?
AI is not neutral. It learns from historical data that reflect an already unequal society. If women are underrepresented in STEM or leadership roles in the datasets, algorithms will tend to reproduce that frequency as the norm. The book highlights several levels of risk. The most common are biases in hiring processes: many recruiting systems favor male candidates because they were trained on male-dominated historical data. There’s the gender adoption gap, referring to the lower number of women using generative AI tools compared to men, which risks creating an augmentation gap—a growing divide in AI-enhanced skills. There’s also differential occupational exposure: a larger share of female-dominated roles (like administrative work) is potentially more exposed to automation than male-dominated managerial roles. Finally, there’s distorted symbolic representation: from virtual assistants with docile female voices to facial recognition algorithms that are less accurate for women with darker skin tones, as cited in the book.
How are the women you and Simona Rossitto feature in the book addressing these problems?
The women in the book act on three levels. The first concerns presence and representation: they bring technical skills to development teams, increasing diversity among data scientists and programming teams—a necessary condition for reducing bias. Next, they work on redefining the imagination: they don’t just ask for more women in AI, but challenge the paradigm itself. What kind of intelligence are we cultivating? An extractive intelligence that classifies and discards “noise,” or a relational intelligence capable of incorporating doubt and empathy? Finally, they take concrete actions: they work on more inclusive datasets, equity metrics, STEM education for girls, digital mentorship, and ethical governance. The book proposes a true feminist AI manifesto—not AI against men, but AI against discrimination.
Do you think it is realistically possible to “course-correct” and design a more inclusive technology from now on?
Yes, but it requires intentionality. AI can either amplify inequality or accelerate equality—it depends on how we design it. In the book, we imagine a utopian 2035 in which AI becomes a lever for equity, thanks to global governance, shared ethical principles, and greater female representation. This is not naive fantasy: it is a feasible direction. The tools already exist: balanced datasets, transparent corporate policies, and, of course, widespread education.