Artificial intelligence is not merely a technology, but a set of socio-technical systems that learn from data, automate decisions, and shape behavior within society. It does not simply perform operational tasks: it affects how opportunities, resources, and rights are distributed, influencing work, access to services, information, relationships, and the collective imagination.
AI systems operate through algorithms trained on large amounts of data that reflect specific historical, cultural, and economic contexts. For this reason, they are never neutral: they can reproduce and amplify existing inequalities, exclusions, and biases, or help reduce them, depending on choices made in design, use, and governance.
In this issue, we explore AI as a social infrastructure, examining its ethical and political implications: from accountability in automated decision-making to system transparency, from its impact on work and human capabilities to new forms of dependency and marginalization. At the same time, we ask what characterizes human intelligence—or rather, intelligences—from the perspective of neuroinclusion and accessibility.