Artificial intelligence | Technologies

If AI takes our jobs, the problem isn’t the machine

Shifting the debate on AI from the fear of automation to the structure of work itself is a way to rethink time, productivity, markets, and the cult of overwork. Between the stigma surrounding the use of artificial intelligence, cultural resistance, and barriers to access for those starting from marginalized positions, AI emerges as an ambivalent technology: it can reinforce inequalities, or it can help redesign work, welfare, and the social value of invisible labor. So argues Alberto Puliafito, director of Slow News and author of Artificiale for Internazionale
By Elisa Belotti
01 Apr 2026

The debate on AI often highlights a widespread fear of losing jobs or seeing creative and intellectual tasks automated. Yet the core issues seem mostly structural: the functioning of the labor market and the relentless pursuit of productivity. Could AI then become an opportunity to rethink what we consider “work”?
There is a tyranny of time under capitalism: historically, the introduction of new technologies—like the washing machine or email—has often raised expectations and accelerated work rhythms, rather than freeing up time for people. AI could offer a chance to reverse this trend if we stop using it to squeeze workers. If machines can handle tedious or repetitive tasks (so-called bullshit jobs, to quote David Graeber), this should translate into shorter working hours and wealth redistribution—not new obligations. AI only steals our time and labor if the economic system continues to extract value from every action; otherwise, it could give us “superpowers” to do more in less time—and to dedicate ourselves to other things. Care work, relationship maintenance, informal education, and widespread cultural production generate social value, even if not economic value. This should guide us: if automation makes certain tasks more efficient (cognitive or otherwise), we could take advantage of it to reduce working hours instead of increasing productivity—strengthening welfare rather than eroding it—and to rethink not just work, but the very structures that make it feel like slavery: the market.

There is also a sort of silent shame around the use of AI, especially in creative work. People use it daily but don’t talk about it for fear of moral judgment or contractual restrictions. What does this widespread silence say about our relationship with AI?
This silent shame is real and is the result of a media panic, similar to those seen in the past with the advent of the internet or computers. A revealing historical parallel: in 1988, when Umberto Eco wrote Foucault’s Pendulum using a computer, Asor Rosa criticized it as “mechanical,” noting that the final part, not written on a computer, felt better. Today, no one would criticize a novel for not being handwritten with a quill or typewriter. There is a cultural resistance to accepting that creativity has always been a collaborative, assisted process (dictionaries, frameworks, now chatbots). AI usage is often reduced to “AI slop” (mass-produced low-quality content) or copyright theft, causing professionals to hide their AI use—even when applied responsibly and creatively—to avoid accusations of cheating or being “inauthentic” artists. The puritanical cult of effort is also at play: if you don’t suffer, your work supposedly counts for nothing, as if struggle were universal and necessary for value.

Many criticisms of AI focus on bias or discrimination. Can it also be seen as an intersectional tool, offering access and expression to those starting from marginalized positions? What does it mean to use AI consciously?
We cannot ignore the problems. AI operates in a world already marked by discriminatory hierarchies, which makes it an inherently intersectional issue. Generative models are trained on digitized archives that reflect concentrations of symbolic power: dominant languages, mainstream aesthetics, hegemonic cultural centers. When we query a model, we are querying the digitized memory of cultural capitalism: AI reproduces linguistic, cultural, and economic asymmetries. These are resource-hungry, oligopolistic systems. Intersectionality shifts our perspective. It forces us to ask: who can turn these tools to their advantage? For non-native speakers, generative systems can accelerate access. For neurodivergent individuals, they can reduce cognitive load. For small editorial teams, they can mean survival. AI can act as a rebalancing tool, if access is not monopolized and the rules do not favor only those who already have infrastructure and capital (economic, social, and so on). Intersectionality helps us see who can claim rights and who cannot. There is no AI that is equal for all people, because there are no societies that are equal for all people. If we do not want AI to remain a multiplier of existing hierarchies, we must appropriate it: this is what conscious use means.

Photo credits: Fulvio Nebbia, IK Produzioni

Registration with the Court of Bergamo under No. 04, 9 April 2018. Registered office: Via XXIV maggio 8, 24128 BG, VAT no. 03930140169. Layout and printing by Sestante Editore Srl. Copyright: all material by the editorial staff and our contributors is available under the Creative Commons Attribution/Non-commercial-Share Alike 3.0/ licence. It may be reproduced provided that you cite DIVERCITY magazine, share it under the same licence and do not use it for commercial purposes.