
The words that guide AI
Why does language matter so much when we use AI?
When we interact with AI, we are, in a sense, speaking as we would to another person: we choose words that become actions, words that point in a direction, define boundaries, and shape expectations. Choosing precise language when engaging with AI is an act of real responsibility, because its outcomes can influence decisions and behaviors. The language we use reflects who we are and how we interpret reality, and it carries the cultural and social context in which we live. Generative AI doesn't just read words: it absorbs intentions. It doesn't think; it recombines words. For this reason, language is ultimately a true personal imprint: it reveals our vision and our values.
What does it mean, in practice, to use inclusive language with AI?
Inclusive language is now a concept that encompasses much more than word choice: respect, awareness, attentiveness, transparency, clarity, care, and responsibility. Inclusive does not mean neutral; it means intentional. In practice, using inclusive language with AI means making explicit what we often take for granted: the target audience, the cultural context, and the interpretative boundaries. Applied to prompts, inclusive language becomes a tool for preventing algorithmic bias, not just a matter of social awareness.
How can distortions, bias, and hallucinations be reduced without falling into technical complexity?
Through three concrete habits: clear prompts, explicit constraints, and always including human verification—the so-called human-in-the-loop approach. Prompts are effective when they clarify purpose, audience, sources, and boundaries; human review remains essential and should not be seen as a final formal check, but as an integral part of the process. Another useful practice is to ask AI for self-verification, making its weaknesses and uncertainties visible and countering the illusion of algorithmic omniscience. Designing effective human–AI interaction means keeping decision-making responsibility with people.
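The first habit, clear prompts with explicit constraints, can be sketched as a tiny script. This is a minimal illustration, not part of any framework mentioned here: the field names (task, purpose, audience, sources, boundaries) and the closing self-verification request are assumptions about what a structured prompt might contain.

```python
# A minimal sketch of a structured prompt that makes purpose, audience,
# allowed sources, and boundaries explicit, and asks the model to flag
# its own uncertainty. All field names are illustrative assumptions.

def build_prompt(task, purpose, audience, sources, boundaries):
    """Assemble a structured prompt from explicitly named fields."""
    return (
        f"Task: {task}\n"
        f"Purpose: {purpose}\n"
        f"Audience: {audience}\n"
        f"Allowed sources: {sources}\n"
        f"Boundaries: {boundaries}\n"
        "Before answering, list any assumptions you are making and "
        "flag statements you are not certain about."
    )

prompt = build_prompt(
    task="Summarise our remote-work policy in 150 words",
    purpose="onboarding material for new hires",
    audience="non-specialist employees, plain language",
    sources="only the attached policy document",
    boundaries="do not infer details that are not in the document",
)
print(prompt)
```

Writing the fields out separately makes gaps visible before the prompt is sent: an empty audience or sources line is a reminder that the model will otherwise fill that gap with inferences.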
Transparency in language: how can it be practiced simply and clearly?
Transparency means communicating in a way that is clear and understandable for everyone: stating when content has been generated with the help of AI, indicating where information comes from, and explaining limitations in simple terms. It is not formalism but an act of respect toward the reader, a trust pact that keeps content open to discussion and improvement. In organizations, transparency also involves processes: documenting how texts are created, making criteria and choices explicit, and showing the path that leads to the final wording. It helps people understand, learn, and make better decisions together. Transparency is not a box to tick: it strengthens trust, enables accountability, and makes inclusion tangible by allowing readers to understand the content and, if needed, question it.
Privacy and data: what linguistic precautions are essential?
Protection begins before words, by deciding what not to ask or include. In prompts, this means avoiding unnecessary personal details, using public or authorized information, and clearly stating what must be excluded. Privacy acts as a selective gateway, letting through only what is necessary. For readers, it translates into clarity about which data has been used and why; for writers, it becomes the habit of removing the unnecessary and evaluating proportionality between purpose and information. This too is inclusive language: consciously deciding what to leave unsaid—a skill that begins when formulating prompts, even before reviewing the generated output.
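One way to picture this habit is a small filter that strips obvious personal details from a prompt before it is sent. The two patterns below are illustrative assumptions, not a real privacy tool: notice that a person's name passes through untouched, which is exactly why human judgment about proportionality stays in the loop.

```python
# A minimal sketch, stdlib only, of removing obvious personal details
# from a prompt before sending it. Two regular expressions cannot
# replace a privacy review; they only illustrate the habit of
# consciously deciding what to leave out.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text):
    """Replace e-mail addresses and phone-like numbers with placeholders."""
    text = EMAIL.sub("[email removed]", text)
    text = PHONE.sub("[phone removed]", text)
    return text

print(redact("Write to maria.rossi@example.com or call +39 02 1234 5678."))
# Note: the name would remain; pattern matching is no substitute for review.
```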
What new skills are needed for anyone working with AI?
The ability to ask questions that clearly define purpose, audience, tone, constraints, and quality criteria. This is not just for programmers: it applies to communicators, HR professionals, process designers, and policy writers. Often referred to as prompt literacy, this skill can be developed through examples, short templates, and simple checklists, lightweight tools that make a significant difference. Above all, it is a cultural competence: it encourages us to ask "For whom?" and "With what effects?", not just "How efficient is it?". It also reinforces accountability: the person who writes the prompt remains responsible for the outcomes, even when AI supports content creation.
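A checklist of this kind can be made concrete as a small script that flags which of the five elements a draft prompt never mentions. The keyword cues below are assumptions chosen for illustration; a real checklist would be tuned to an organization's own templates and vocabulary.

```python
# A hedged sketch of a "prompt literacy" checklist: before sending a
# prompt, flag which elements (purpose, audience, tone, constraints,
# quality criteria) the draft does not mention at all. The cue words
# are illustrative, not a validated list.

CHECKLIST = {
    "purpose": ("purpose", "goal", "why"),
    "audience": ("audience", "reader", "for whom"),
    "tone": ("tone", "style", "register"),
    "constraints": ("constraint", "limit", "boundar", "exclude"),
    "quality criteria": ("criteria", "format", "length", "sources"),
}

def missing_items(prompt):
    """Return the checklist items the prompt never touches on."""
    lower = prompt.lower()
    return [item for item, cues in CHECKLIST.items()
            if not any(cue in lower for cue in cues)]

draft = "Write a short post about our new benefits policy."
print(missing_items(draft))  # this draft states none of the five elements
```

The point is not the script but the reflex it encodes: a prompt that triggers every flag is the one most likely to be completed by the model's own inferences.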
What is the most common mistake you see in AI-related language?
Thinking that language is only about style and not structure. When context is missing, AI fills the gaps with inferences. Another mistake is delegating final responsibility to AI: no system can replace human care in choosing the right words for real people. Avoiding both mistakes requires a shift in mindset: from "do me a favor" to "let's build responsible content together."
And this is where the “Language Compass” project comes in: what is it and why can it help with AI use as well?
It is an internal project within our company, led by a group of volunteer colleagues together with an expert partner, Choralia. The outcome is a practical framework aligned with Brightstar values, using language as a cultural lever to better navigate day-to-day work. It reflects and translates into simple, concrete terms everything we have discussed, and it works through four directions, easy to remember and apply daily.
- Precision: clarify purpose, audience, tone, allowed sources, constraints, and format to reduce ambiguity and rework.
- Responsibility: choose words that respect people, explicitly request inclusive language, and avoid stereotypes and assumptions.
- Transparency: state when content is co-created with AI, indicate sources and limitations, and make visible the decisions behind the result.
- Risk awareness: know when human review is needed and when it is better to stop.
The compass does not replace individual judgment; it helps train it. It embeds inclusion in everyday practices, making it natural to do the right thing: less ambiguity, fewer biases, more clarity, higher information quality, and greater trust. And because it is so cross-functional, it also helps build the kind of AI we want: not just faster, but fairer. This aligns with our vision of leadership in shaping accurate linguistic patterns, because the quality of words also guides the quality of decisions.