AI ThoughtCon – The People (Day 2)
- Balázs Kis

The second day of the AI ThoughtCon, organized by the AI Localization Think Tank, had Humans as the central theme. The session began with a panel on AI Bias and AI Ethics, featuring Hilary Atkinson Normanha, Fabiano Cid, Michael Reid, and Sheriff Issaka, moderated by the Think Tank’s own Marina Pantcheva.
Marina opened by asking the panelists what they considered the most dangerous AI bias out there. Hilary started with cultural bias, which is difficult to detect. Fabiano added that it's the bias we don't talk about or take for granted. This evolved into a discussion of cultural bias, the English-centric nature of LLMs, and the role of humans in counteracting it. We learned a lot about how cultural and linguistic bias shows up (see, for example, Hilary's Substack post on semantic leakage), and how professionals in the language industry can and should mitigate it.
In a world where even the models reflect English (US)-centric, Protestant cultural values, it was refreshing to be introduced to Sheriff Issaka's African Languages Lab. But maybe the most important takeaway was that humans are essential to ensuring that AI output becomes, and stays, culturally appropriate and inclusive. Their expertise and experience make language professionals the best, if not the only, experts who can deal with this. To quote practically every panelist, language professionals, and the language industry as a whole, must reclaim their role, which should be more prominent than it is today. As Hilary pointed out, the language profession was one of the first exposed to the behavior of AI systems, so we have the longest experience dealing with it. As Fabiano put it, "the term 'language' [itself] puts humans back in the process."
Following the panel, Joss Moorkens gave an eye-opening talk about what translation training looks like in the age of AI. Three points stood out: (1) there is a much larger diversity of knowledge and skills that aspiring language professionals need to pick up, (2) AI and digital literacy are now the bare minimum, and (3) it is essential to teach ethics and critical thinking. It also matters how we depict AI; see https://betterimagesofai.org/.
Joss presented a framework that includes remarkably deep ethics education. Students are introduced to various schools of ethics and invited to examine ethical issues through case studies. They, and we all, are encouraged to always look for the bigger picture, following the social ecology of responsibility framework. Students also learn about the sustainability of various solutions: if two systems produce the same quality, we should choose the one with the lesser environmental impact.
To a question about the apparent gap between academia and business, Joss pointed out that interaction and communication already exist, and there are initiatives, such as the Simulated Translation Bureaus Network, where academia approximates business processes in the educational environment.
This was followed by an introduction to multilingual AI voice generation by Voiseed's Cristina Contini and Andrea Ballista. One key takeaway was that multilingual AI voice generation serves an unmet need: it makes voice communication and voice dubbing possible where human narrators or voice actors would not be feasible.
However, AI voices must also be culturally appropriate, using the tone and register that fit the content and the communication purpose. In this context, the human linguists who supervise the AI voice system act like voice engineers, fine-tuning and sculpting the system to produce the most fitting voice for the project. Unlike human voice dubbing, the AI-based process is not linear: it calls for iterations that gradually improve the voice output.
The day was closed by Daniel Sebesta, who explored how we could design work that humans will still want to do and maybe even enjoy. Daniel's premise was that, despite the constant pressure for speed and speed only, quality (or at least plausibility, if not correctness or fitness for purpose) still matters. And if it does, it is a valid question how the human role, the human cognitive process, changes with AI in the mix.
Creativity does not disappear but takes a different form. Humans no longer write; instead, they constantly evaluate the possibilities that AI models uncover. Humans might not build, but they get to sculpt systems and outcomes. Or, to use a different metaphor, it is the kind of gardening that decides which branches stay and which go. The cognitive effort does not disappear; it shifts. Control and agency are not lost, although AI systems, through their remarkable autonomy, take over some of both.
I think the main takeaway of the day is that the role of humans does not diminish but undergoes a significant change; the language industry and language professionals will do best if they are mindful and well informed about that change, and don't just let things happen to them. In this new world, they can and should occupy the space they deserve.
