
Inclusive AI: when technology discovers hidden talent
But the real challenge isn’t how quickly we adopt AI; it’s how we adopt it. Today, many organizations are in a phase of intense experimentation, where enthusiasm and AI fatigue coexist. New tools emerge every month, often fragmented, and they must communicate with each other and integrate into existing business processes. Processes that, let’s be honest, cannot simply be modernized by bolting on a new AI tool; they need to be deeply rethought. We are facing something epochal, and every technological revolution requires a parallel cultural and managerial evolution.

At Indeed, when we talk about AI in the world of work, we certainly talk about productivity, efficiency, and new opportunities. But we talk with equal force about responsibility, because AI applied to recruiting affects people’s lives, their careers, their futures. This demands a higher level of attention.
The risk is not theoretical. Algorithms learn from the data we humans produce. And that data inevitably contains our imperfections: stereotypes, historical imbalances, systemic discrimination. Multiple studies have shown that inadequately designed systems can replicate—and sometimes amplify—biases related to gender, ethnicity, age, or socioeconomic background.
That’s why at Indeed we have defined and adopted solid AI principles that guide the development of our products and processes. We invest in multidisciplinary teams—engineers, ethics experts, legal advisors, data scientists—to assess risks, test models, and continuously monitor outputs. We use datasets as representative as possible and implement controls to reduce the risk of systemic discrimination.
There is also a less-discussed but equally relevant risk: career determinism. A system based solely on a candidate’s CV or past behavior risks pigeonholing them forever. A customer service operator might keep receiving only similar offers, even if they aspire to a role in marketing or technology. Responsible AI, by contrast, must broaden possibilities, not narrow them: it must look not only at what a person has done, but also at what they could do and want to do.
This issue is also central from a regulatory perspective. The European Union, with the AI Act, has classified AI systems used in employment—including CV screening—as high-risk. This choice recognizes the profound impact these technologies have on fairness and social cohesion. Clear rules encourage responsible innovation and strengthen trust.
The point, however, is primarily cultural. AI must not dehumanize recruiting; it must empower it. Recruiters cannot, and should not, delegate final judgment entirely to the machine. Total automation is neither desirable nor appropriate when it comes to decisions that affect people’s lives.

The future of recruiting will be increasingly oriented toward skills and preferences, and less tied to rigid frameworks. The traditional CV-and-cover-letter model is giving way to more dynamic formats that highlight what a person can do and what they want to become. If carefully designed, AI can become an extraordinary enabler of inclusion: discovering unconventional talent, making opportunities more transparent, reducing barriers to access. But this doesn’t happen automatically. It happens only if those who develop and use these tools put equity, transparency, and responsibility at the center.
At Indeed, we believe the most powerful innovation is not the one that replaces humans, but the one that enables them to make better choices—especially when it comes to work.