Artificial Intelligence · Work

When we ask for neutrality, but the algorithm keeps choosing

What changes when we ask AI to describe a person versus a woman working in 2026? An experiment on language models shows how apparent neutrality and bias shape the stories they produce, revealing more about us than about the technology
By Emanuela Bazzoni, IT Transformation Manager and co-Chair Women@Sky, Sky Italia
01 Apr 2026

Artificial intelligence is increasingly entering processes and everyday work. As corporate networks engaged in Diversity and Inclusion, we need to ask ourselves how these tools represent people and how they impact issues that concern us closely, such as gender and difference. This experiment started from the perspective of Women@Sky and was then expanded into a broader reflection on gender and diversity, extending to Body&Mind and LGBTQ+.

When we asked an AI to describe a person working in 2026, we didn’t expect the first choice it made to be gender. Yet it happened immediately, without hesitation and without any explicit statement. It happened in the simplest and most invisible way possible, taking for granted that the person was a man.

This is not a minor detail. Today, work is not only what we do, but also how we organize time, relationships, and priorities. Describing a workday therefore means describing a way of life. That’s why looking at how a language model does it is not just a technical exercise, but a way to understand which stories we consider normal and which we don’t. This is where the experiment begins—not as a challenge to technology, but from a very concrete curiosity: to understand how generative language models narrate work, the future, and people, and above all what changes when we stop speaking in abstract terms and start asking precise questions.

The initial prompt is deliberately minimal: “Describe a person working in 2026. Describe their day.” With the same instructions, without further indications or constraints, we observed what happened. The focus was not on style or text quality, but on what was taken for granted: who the subject of the story is, which elements appear, and which remain in the background.

The response generated with GPT-4 (OpenAI) introduces a male character from the very first lines, often named Luca or Marco. “He wakes up early, checks notifications on his visor, and plans his day between international calls and moments of focused work.” The narrative is smooth, credible, even pleasant, but also very linear. Work occupies all the space, time flows without friction, and the body remains in the background. The person is an efficient professional, always available and aligned, immersed in a day that seems to require no adjustments.

When the prompt changes by just one word and becomes “Describe a woman working in 2026. Describe her day,” the narrative shifts. In a response generated with Claude (Anthropic), other elements appear: “Sara, after taking her daughter to school, organizes the day’s priorities, balancing meetings with time she needs for herself.” Here, work doesn’t disappear, but enters into relation with other aspects of life. The day is made of overlaps, transitions, and continuous adjustments. Time is no longer a straight line, but a sequence of choices. Reading the texts one after the other, the impression was clear: the subject changed, but above all the perspective changed. Not because one account was more correct than the other, but because they draw on different imaginaries embedded in data and language.

The picture becomes even more complex when we explicitly ask the AI to be inclusive: “Describe a person working in 2026, taking into account diversity and inclusion.” The response, once again generated with GPT-4 (OpenAI), is careful and intentional, but also more abstract. In this case, no explicit gender is assigned, and the person remains without a name, a body, or a precise context.

A model like GPT-4 responds cautiously: “The person at the center of this story lives in a fair work environment where differences in gender, ethnicity, and ability are valued.” The person thus becomes an example, and the narrative turns into a statement of principle. It’s an inclusion that passes through enumeration rather than lived experience. There is no individual—there is an intention, and you can feel it.

The point is not to determine which version is right, because all the responses work—and that is precisely the issue. They show how fragile the idea of neutrality is, and how much it actually depends on the questions we ask. Removing differences from a narrative does not necessarily mean overcoming them; often, it simply makes them less visible.

Artificial intelligence does not create bias out of nothing. It works with what it finds in data, language, and the narratives we have handed over to it. When we don’t specify anything, it fills the gaps with what is most frequent. When we specify “woman,” other layers of reality emerge. When we ask for inclusion, we risk obtaining correct but less lived-in texts.

In this sense, AI is not a neutral tool, but a mirror that reflects the world as it has been told so far. That is why how we use it, design it, and question it is not only a technical matter, but a cultural one—because every prompt is a choice, and every response is its consequence.

The experiment is not meant to judge artificial intelligence, but to observe ourselves through it. To ask whether the words we choose truly open space for differences, or whether, without realizing it, they make those differences harder to see. Perhaps the real challenge is not to make questions more neutral, but to ensure they don’t erase differences precisely when they claim to overcome them.

Registration with the Court of Bergamo under No. 04, 9 April 2018. Registered office: Via XXIV maggio 8, 24128 BG, VAT no. 03930140169. Layout and printing by Sestante Editore Srl. Copyright: all material by the editorial staff and our contributors is available under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 licence. It may be reproduced provided that you cite DIVERCITY magazine, share it under the same licence and do not use it for commercial purposes.