
Artificial Intelligence: balancing adoption and responsible use
The term “artificial intelligence” was coined in 1956 by John McCarthy, together with other founding fathers of this new branch of computer science. Despite a solid theoretical foundation (many of the algorithms underlying today’s AI models date from that era), the limited computational power of the 1960s and 1970s meant that practical applications remained scarce, and the resulting disillusionment led to what we now call the first AI winter. A major breakthrough came in 1997 when, for the first time, a rule-based AI system, IBM’s Deep Blue, beat the reigning world chess champion, Garry Kasparov. The news received enormous media attention, reigniting public and investor interest in AI.
Rule-based models work by applying a set of hand-coded rules to whatever input they receive. Take chess as an example: the possible piece configurations on the board are so numerous that not even a supercomputer could enumerate them all. However, given the rules of the game, piece values, analyses of famous openings and endgames, and some middlegame strategies, the system can estimate which candidate move is most likely to lead to a win. In this case, humans define the rules, and the model calculates the most promising response. It is an effective approach, but it cannot cope with situations where no clear rules exist.
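To make this concrete, here is a minimal sketch of a rule-based move chooser in Python. It assumes the third-party python-chess library for board representation and legal-move generation; the piece values and the one-ply search are the hand-coded “rules”, deliberately far simpler than anything Deep Blue actually used.

```python
# Toy rule-based chess evaluator: the rules (piece values, search)
# are written by hand, not learned from data.
# Requires the third-party python-chess library (pip install chess).
import chess

# Hand-coded rule: classical material values per piece type.
PIECE_VALUES = {
    chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0,
}

def material_score(board: chess.Board) -> int:
    """Material balance from White's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def best_move(board: chess.Board) -> chess.Move:
    """Pick the legal move with the best one-ply material outcome."""
    side = 1 if board.turn == chess.WHITE else -1

    def score_after(move: chess.Move) -> int:
        board.push(move)          # try the move
        score = side * material_score(board)
        board.pop()               # undo it
        return score

    return max(board.legal_moves, key=score_after)

board = chess.Board()
# In the starting position all moves score equally, so the first legal
# move is returned; after a blunder, captures start winning out.
print(best_move(board))
```

A real engine adds deeper search and positional heuristics, but the principle is the same: every criterion the program uses was put there explicitly by a human.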
The year 2010 marked another milestone in AI history: ImageNet, a database of more than 14 million manually labeled and annotated images, was made available to anyone wanting to train an AI model for automatic image recognition. Thanks to this enormous dataset, in 2012 the AlexNet network was able to recognize image contents with a top-5 error rate of just 15.3%. It was the breakthrough of machine learning and, in particular, of its multi-layered branch, deep learning.
The main difference from rule-based models is that the rules are not explicitly programmed. A machine learning model is fed a vast amount of data, and its underlying algorithms are asked to find the interconnections within it, in effect creating the rules themselves, so that the model can distinguish a human from a horse or a tree in a photo. One caveat: the parameters these models tune can run into the billions, making it practically impossible to know exactly which ones a model actually relies on when generating an answer, or by what criteria.
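As an illustrative contrast with the rule-based sketch above, the following snippet (assuming scikit-learn and its bundled iris dataset) trains a classifier without a single hand-written rule: the decision criteria are inferred entirely from labeled examples.

```python
# Minimal machine learning sketch: no rules are coded by hand.
# The model derives its own decision criteria from labeled data.
# Requires scikit-learn (pip install scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)      # flower measurements + species labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)            # the "rules" are learned here
print(f"accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```

With a small decision tree the learned rules can still be inspected; with deep networks holding millions or billions of parameters, that transparency is lost, which is exactly the caveat above.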
On November 30, 2022, OpenAI released ChatGPT, a generative AI model that has become a reference point for the sector and the conversational companion of many smartphone users. Generative AI models build on the same machine learning and deep learning algorithms, but use them to produce original content.
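Today, direct interaction with a generative model is often just an API call away. Below is a minimal sketch using the official openai Python SDK; the model name and prompt are illustrative, and the call assumes an OPENAI_API_KEY is set in the environment.

```python
# Minimal sketch of querying a generative model via the openai SDK
# (pip install openai). The model name "gpt-4o-mini" is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Summarize the EU AI Act in one sentence."}
    ],
)
print(response.choices[0].message.content)
```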
Many of us, when talking about AI, are really referring to tools built on generative models rather than to the entire AI ecosystem, which, to be clear, has been in production use since the late 1990s, though largely invisible to end users. Generative models brought direct interaction with AI-based tools and applications, fundamentally changing how ordinary users perceive AI.
With this context in mind, it is clear that we are already well beyond 2022. We now hear about AI agents, synthetic persons, digital twins, and personal intelligence, which raises important questions: who is responsible for the content these tools generate? Who ensures their ethical use? Who monitors responsible use within companies? What personal impact will these tools have?
As a European citizen, I am proud of the work being done to safeguard personal data and the rights attached to it. Starting with the GDPR, in force since 2018, every citizen has the right to decide how their personal data is used by companies and by European and non-European entities alike. In the same spirit, the European Artificial Intelligence Act, whose first obligations took effect in 2025, establishes rules defining lawful and unlawful uses of AI systems.
The EU AI Act has three main components: it defines risk levels for AI solutions and requires organizations to map their solutions against them, it introduces a governance model for AI solutions, and it mandates basic AI training (AI literacy) for everyone who works with these systems.
The first component classifies AI systems by the risk of the activity they are used for, ranging from minimal risk up to unacceptable risk: activities where AI systems may not be deployed regardless of the benefit. Unacceptable risks include, for example, subliminal, manipulative, or deceptive techniques, the exploitation of vulnerabilities such as age or disability to unduly influence decisions and behavior, social scoring that categorizes individuals, misuse of biometric identification, and so on.
Basic training is another key tool for enabling informed AI adoption. Both at the company and societal level, it is important to align knowledge and awareness about AI so that decisions are based on facts, not purely on commercial impulses. Finally, for all companies, and ideally for private citizens, governments, and public institutions as well, a preventive approach is needed to monitor and validate how AI solutions, both generative and traditional, are used. Introducing AI governance frameworks is the foundation for building an ethical and responsible AI ecosystem.
A core principle of any AI governance framework should be Human in the Loop: keeping humans involved in decision-making, at least at the most critical stages. While this may seem to slow AI adoption in companies and in daily life, it is the only way to handle a technology whose inner workings we do not fully understand (even though we programmed it ourselves, the heart of a generative AI model remains a black box). This matters all the more because generative AI is often remarkably fluent and persuasive, even when the information it provides is inaccurate or outright false.
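What a Human-in-the-Loop checkpoint might look like in practice is sketched below. Everything here is hypothetical and illustrative: the confidence field, the threshold, and the review step stand in for whatever escalation mechanism (review queue, approval ticket, four-eyes check) a real governance framework would define.

```python
# Hypothetical human-in-the-loop gate: model output below a confidence
# threshold is routed to a human reviewer before any action is taken.
# All names and values here are illustrative, not a product's API.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    answer: str
    confidence: float  # assumed to be reported by the upstream model

CONFIDENCE_THRESHOLD = 0.85  # illustrative policy value

def human_review(output: ModelOutput) -> str:
    """Placeholder for an escalation step (review queue, approval UI)."""
    decision = input(f"Approve '{output.answer}'? [y/n] ")
    return output.answer if decision.strip().lower() == "y" else "REJECTED"

def decide(output: ModelOutput) -> str:
    if output.confidence >= CONFIDENCE_THRESHOLD:
        return output.answer       # routine, high-confidence path: automated
    return human_review(output)    # uncertain or high-stakes path: a human decides

print(decide(ModelOutput(answer="Grant the loan", confidence=0.62)))
```

The design choice is the point: automation handles the routine cases, while anything uncertain or consequential is forced through a human decision before it takes effect.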
Maintaining human control over these systems is the best way to adopt them safely, ethically, and responsibly.