AI Localization Think Tank Looking Back at 2025 | Part 1
- AI Localization Think Tank


As 2025 comes to a close, the AI Localization Think Tank paused to reflect on the year that passed. It was a year that changed workflows and professional identities across the language industry. What was once speculative became practical. What felt disruptive in theory demanded real decisions in practice.
We asked each member three questions:
What single development in 2025 had the strongest impact on localization and translation work?
Which trend or shift from 2025 do you believe will have a long-lasting impact?
What did you personally learn in 2025 that changed how you approach your work?
Because the insights were so rich and varied, we’re presenting them in two parts.

Aaron Bhugobaun
Technical Production Operations Manager, CDSA Studio Chair, and Creative Technologist
2025 has been a transitional year. In 2024, AI was a buzzword that many people feared; in 2025, people began their AI education journey and started understanding what AI means for them.
What single development in 2025 had the strongest impact on localization and translation work?
The maturation of LLMs. They are now at a stage where they can validate their answers and attempt to contextualize the meaning behind questions. You can see this in the improvement in translations.
Which trend or shift from 2025 do you believe will have a long-lasting impact?
AI education. We are now seeing parts of the industry upskilling in AI and companies pivoting to an AI-first strategy.
What did you personally learn in 2025 that changed how you approach your work?
AI is moving faster than many people realize, which makes it challenging to integrate into traditional pipelines. Fully agentic AI workflows are coming.
AI is moving faster than many people realize, which makes it challenging to integrate into traditional pipelines.

Andrea Ballista
Co-founder, CEO
What single development in 2025 had the strongest impact on localization and translation work?
Not a single development, I would say: the “speed” and the “wavefront size” of developments are mind-blowing.
Which trend or shift from 2025 do you believe will have a long-lasting impact?
2025, from my perspective, is the year in which voice took the stage: not only in AI dubbing, but voice agents, conferencing, real time with low latency, consumer and professional apps. A big shift from text to voice as the “main interface”, as “voice-first applications”.
What did you personally learn in 2025 that changed how you approach your work?
The possibility of LLMs becoming a real “working tool”: the new search tool, capturing and mapping information and facts on request, elaborating and reorganizing the output to get fresh and usable information.
2025 from my perspective is the year in which voice took the stage.

Balázs Kis
Co-founder, Chief Evangelist
To be honest, I don’t believe in a “single development” that makes the biggest impact. The ascent of LLMs was brought about over the course of 80 years by a convergence of many smaller factors. But if I have to pick one thing that meant the most to me, it’s clarity.
To be more precise, clarity about the difference between the intent of the makers of the largest models and the actual value of the models to humanity.
The AI that drives the most value for its users is the AI that is one, albeit very impactful, tool in the chest. For some time, I have been advocating for the thought of “AI as Normal Technology”, originating from Sayash Kapoor and Arvind Narayanan. AI providers who make money from mass engagement probably do not share my view.
Even if AI remains “normal technology”, one thing seems irreversible: that the work of localization has changed forever. No-one can be “only” a translator anymore: language professionals need to be prepared to perform, and get trained for, a much wider variety of tasks, and they will need to learn more mathematics than before, to accomplish them.
I have learnt many more things in 2025. I could say that 2025 was the year of extreme learning for me. But the one thing that stands out is this: I am much more conscious about what I “outsource” to the machine and what I insist on doing myself. For example, there is no point in asking AI to prepare a presentation for me if I am the one who needs to present it and take responsibility for it. The time I save on making the AI work will be lost when I need to internalize what I am to say. If I am the one who prepares the presentation, most of the internalization will happen in the process. We lose that if we just throw the task at our cybernetic companion.
No-one can be “only” a translator anymore: language professionals need to be prepared to perform, and get trained for, a much wider variety of tasks, and they will need to learn more mathematics than before, to accomplish them.

Belén Agulló García
Executive Consultant of Innovation
GenAI tools applied to localization have pushed us once again to the point of existential crisis and having to justify our existence as professionals and as an industry. It feels like we’ve been fighting this battle for years since the inception of language technology and its evolution over the years (translation memories, term bases, machine translation, and now AI for everything you can imagine). I love exploring the topic of the value of localization from theoretical and practical perspectives. This year, we’ve started to hear more about the shift from translation quality metrics to outcome-based quality metrics, and about the need for better storytelling in our industry to “sell” what we do at all levels.
Translation quality is a given in the mind of the purchaser (if you pay a professional for their services, you expect to get good results, right? Would you run a quality evaluation test on your hairdresser or contractor? Well, maybe, but…). MQM and similar metrics, while useful for our own internal quality management systems, are not relevant for people outside of our bubble. They don’t understand those numbers, and probably don’t care much about them, as they expect good quality for what they are paying. Now, AI enters the room.
Non-localization person: If we can use AI to do all this work, why should we keep using humans?
Localization person: Well, you know, we should keep doing it with humans because the quality is better…
Okay, so how do we define the quality of what we do in a way that actually matters to others? Two trends have emerged this year (not necessarily new, but stronger):
Outcome-based quality metrics: So what are we trying to achieve with the content? Do we want more traffic to our website? More client retention? Fewer support tickets? More clicks in our email campaign? More views on our social media post? To avoid lawsuits against our brand? To help a patient get their treatment? This is where things get interesting and more strategic for localization professionals and teams across the board.
Storytelling matters: If we ourselves can’t explain why localization matters in a simple and clear way, how can we expect outsiders to understand it and not buy into the hype of AI for translation? We’ve seen different initiatives and workshops focused on helping localization teams get better at storytelling and communicating with stakeholders, and we will likely see more of this.
What I’ve learned this year is that, in our industry, we are great at repurposing ourselves, and redefining what we do very quickly to stay relevant. I’ve never seen so many rebrandings in one year, including enterprise localization programs (global experiences and such), conferences (thinking about GALA now rebranded as WorldReady), and, of course, LSPs, all trying to find that edge that keeps them relevant (be it more focus on tech-driven workflows or more focus on the human element or something else). I believe our industry will continue showing resilience and adaptation in the next few years.
This year, we’ve started to hear more about the shift from translation quality metrics to outcome-based quality metrics.

Bridget Hylak
Head of ATA Language Technology Division, Localization Consultant
2025 was the year when no one could deny it anymore, and even the most passionate arguments “against” AI began to fall away like a layer of skin off a sunburned back.
The loudest of the filibustering voices began to fade into the sunset, their posts receiving fewer and fewer likes, while more people, whether willingly or begrudgingly, transitioned to the “other side,” as a new understanding emerged across industries and the collective consciousness:
AI, and specifically, genAI, is here to stay. It is changing our world as we know it. (So, what are we going to do about it?)
More loudly than before, the ethical, legal and economic implications began to take center stage. How will AI affect our culture, our children, who we are or believe we are as humans - and how much of this is already occurring without our knowledge or consent?
Linguists looked back in anguish over decades of painstakingly developing proprietary resources, TMs and TBs containing thousands upon thousands of entries that they suddenly realized might have been consumed into an LLM without their knowledge. And even for the most proactive, who somehow “protected” their IP, others unknowingly or willingly gave theirs away, thus diluting the value of their colleagues’ shares. Ah, the pain of a 1 million+ entry custom dictionary in a niche subject area suddenly proving to be nearly valueless, or sellable for a fraction of the blood, sweat and tears it consumed!
In short, 2025 was a year of reckoning. Many high-level professionals across our industry and others faced the existential question: after a lifetime of training and experience, do I stay, do I go, do I pivot, do I found… or do I run for the hills to that dream I have been dreaming, that artistic endeavor that has silently gnawed at me for decades…?
A surprising number in my own network chose the last option.
2026, I feel, will bring more of the same: mounting levels of anxiety and economic uncertainty interwoven with internal conflict and resolve. With times this uncertain and uncharted, returning to an interior anchor is often the best course of action, and that will likely take many of us to places, jobs and experiences we never imagined - for better and for worse.
The spiritual, artistic and creative will have their moment. Small cafes with familiar faces, our initials drawn in whipped cream across our latte or crepe, will provide more comfort than ever, especially if served by a caring, smiley individual who knows our name and looks us in the eye.
All this as we continue to chew on and agonize over daunting questions: how will this wild, untamed yet very proficient toddler called GenAI develop and shape the future? How is it affecting our children, our development, our brains…? How do we keep up with it? How do we rest from it? Is it even possible to get away? To make a living without it…?
The answer is yes and yes again, but we will have to start looking for solutions in our grandparents’ post-depression era diaries and journals. Ultimately, community and connection will be our salve and our strength. We can start today by looking around our office, our network and our dinner table. Teams need to be formed now. Many bullet-proof business solutions are out there, and will continue to sprout up; just think “beyond” AI (a loaded concept to explain, but trust that you already have much of what you need to figure it out. Cue: internal compass…).
2026, bring it on.
More loudly than before, the ethical, legal and economic implications began to take center stage.

Helena Moniz
President of the European Association for Machine Translation
No matter what AI can do, we do need to know who we are as humans and what makes us human. The answer must be a transdisciplinary one, focused on us as social beings and on our uniqueness. We are made of stardust; we have idiosyncratic DNA, fingerprints, irises and voices, yet we are accommodating ourselves to an AI-first world: everyone fast and the same, without uniqueness. Even with its advantages, we do not have the distance needed to understand the real impacts on human beings, and how we in the present, which someone once said is “pregnant with the future”, may affect several layers of our future societies.
Who we are as humans is also revealed through languages and through cultures, and in 2025 we still focused on translating words; it seems very hard to translate cultures and communication. We are fluent in technical jargon, we are accurate in so many domains, but we continue to miss the human aspects of our communication, and we still do not reach all humans, since the commercial value of languages is placed way above the cultural preservation of our more than 7,000 languages, each one corresponding to a distinct way of experiencing the world. We do not reach all humans in all their diversity!
2025 made me cautiously optimistic about the uses of AI: so many AI pipelines that were once fabrications, created from projections of the fast pace at which the entire future was supposedly being invented, are now real ones, with true impact on research and industry. Both industry and academia used experimental methods to see what would work better. We were never so alike in the attempts made! Did we learn more from each other? Are we more aligned?
A big challenge: how do we prepare new generations in all sectors of our society for these fast-moving changes, and how do we understand that this is not an isolated effort? It takes a village!
2025 made me cautiously optimistic about the uses of AI. The pipelines are now real ones, with true impact on research and industry.

Gabriel Karandysovsky
Researcher, Content Creator and Consultant
My favorite takeaway for 2025 has to be the industry finally turning the GenAI hype page — allegedly, seemingly, and if you squint hard enough. People came out of the woodwork with stories of successful implementations, obstacles overcome, and lessons learned. Now that the dust has settled and teams have real results to show for the great AI goldrush, what we’re seeing is a genuinely impactful spread of thoughtful, user-first implementations.
My least favorite part (because there’s no glass half-full without someone insisting it’s half-empty) is that this industry still struggles to market itself. That’s not new — but what worries me is how little traction we seem to have with the next generation. Where are the status-quo shakers? Are we doing enough to find them, support them, and give them space to thrive? I worry about who steps in when it’s their turn.
And what have I learned on a personal level? That good ol’ writing by a human for humans still delights (me first, readers/clients second). As long as I keep perfecting my craft and spilling my guts out on the page, I’ll be fine. Fellow creatives, you’ll be fine, too.
This industry still struggles to market itself. That’s not new — but what worries me is how little traction we seem to have with the next generation.

Johan Botha
African Language Solutions expert
The biggest shift was that AI stopped being a side tool and became the engine that drives most localisation work. It reshaped pricing, timelines and expectations almost overnight. The strongest impact was not a new model or tool, but the speed at which AI moved into the centre of the workflow. Everyone had to adjust their processes, their quality checks, and in many cases their business models. The industry has never had to move this fast and, unfortunately, for many, trepidations turned to reality.
The trend that will stay with us is the move toward smaller, purpose-built language models that actually fit the realities of different regions and domains. Instead of forcing giant systems onto every language problem, teams started looking for right-sized solutions. That shift opens the door for real linguistic diversity and for expertise that goes beyond simple text accuracy. It also strengthens the role of cultural intelligence in AI workflows, which is something machines still fail at.
This year taught me to be far more selective about where human effort adds real value. Automation handled more of the repetitive work, but the parts that needed human judgement needed it at a much higher level. I also let go of the idea that we must follow legacy paths simply because they are established. There is room to build new systems that fit our context rather than imitate others. And lastly, 2025 showed me that my optimism, while taking a beating, is still prevalent.
The trend that will stay with us is the move toward smaller, purpose-built language models that actually fit the realities of different regions and domains.

John Anthony O'Shea
Translator, Chairperson FIT Europe, independent researcher
What single development in 2025 had the strongest impact on localization and translation work?
From a broader industry perspective, 2025 was the year AI translation moved from experimental curiosity to widespread attempted adoption but also the year the limitations became undeniable at scale. From what I’m seeing and hearing across the sector, companies and clients who had invested in AI-first workflows discovered that the promised efficiency gains came with trade-offs they hadn't anticipated. For my work specifically, the single biggest development was the increase in high-stakes legal documents coming in for translation after clients had experimented with free AI tools and discovered the limits quickly. These weren't new clients. They were existing clients or referrals from colleagues who had tried the likes of ChatGPT for legal documents, assumed the fluent output was accurate, then hit a problem. Basically they got their fingers burned. Sometimes it was a colleague who read Greek noticing an error in the output. Sometimes it was opposing counsel pointing out a mistranslation in a filing. Sometimes it was just an uneasy feeling that something didn't sound right in a contract clause. So they came back. I'd say the pattern was pretty consistent. They'd used AI for speed or cost savings, the output looked professional, and they only discovered it was wrong when someone checked it or when the document failed to work as intended in practice.
I also began seeing a shift in the conversation. Clients weren't asking whether AI could handle their Greek legal translations; instead they were asking what specifically goes wrong when it tries to do that. They wanted to understand how it fails and why. I've been testing LLMs daily throughout 2025 to answer these questions with specificity. The tools are still majorly weak when it comes to specialised legal documents like the ones I work on. They miss jurisdiction-specific terms. None of this has changed my workflow because none of these tools perform reliably enough to integrate into legal translation work where accuracy matters. So I suppose the development that mattered most wasn't the technology "improving". It was clients returning after discovering firsthand that it doesn't really work well for what they need. That created space for a different kind of professional relationship, one where the conversation starts with "I need this to be defensible" rather than "I need this to be cheap."
Which trend or shift from 2025 do you believe will have a long-lasting impact?
The broader industry trend with lasting impact is a collapse of the "AI will replace translators" narrative and its replacement with a more nuanced understanding: AI creates risk that someone has to manage. Translation platforms and agencies that positioned themselves as AI-first discovered that clients in regulated industries (legal, medical, financial) won't accept the liability that comes with unverified machine output. The conversation has shifted from "how cheap can we make this" to "who is responsible when this goes wrong." That's probably a permanent change. The technology might improve, but the liability question isn't going away. In my niche specifically, it has to be the growing number of public cases where lawyers used AI tools incorrectly, resulting in fabricated citations and drafting errors that made it into court submissions, and the resulting recognition across the profession that these tools carry real risk. Professional bodies around the world have been quick to respond. Many have published detailed guidance on AI use. There is one consistent message that runs throughout these documents: responsibility for checking outputs stays with the lawyer. If you submit AI-assisted work, you own the consequences. This has shifted how lawyers approach conversations about translation. They're now more open to discussing accuracy, liability, and the limits of AI in legal documentation. The question isn't whether specialised legal translators remain necessary; lawyers now understand the risk of errors well enough to know the answer to that. The question is how to structure the legal translation work so that risk is managed appropriately.
This trend prompted me to pursue formal risk-management training in 2025. Clients' interest in defensibility and reliability has increased significantly. They want to know not just that their legal translation is accurate but that the process used to produce it can withstand scrutiny if challenged. That requires a different kind of professional competence, not just linguistic and legal expertise but the ability to document decision-making, explain methodology, and demonstrate that appropriate verification steps were followed. I expect this shift to be permanent. The law profession has seen what happens when AI tools are used in ways they shouldn't be. The cases were public enough and the consequences serious enough that the lesson won't be forgotten easily. Specialised legal translation will increasingly be understood as risk management, not just as a language service.
What did you personally learn in 2025 that changed how you approach your work?
I invested heavily in AI-related training in 2025 so I could explain clearly and specifically what these tools can and cannot handle in legal contexts. To be honest, this didn't change my translation process. I didn't remove any verification steps or integrate AI tools into my workflow. The training wasn't about adopting new technology. It was about understanding it well enough to communicate its failure modes in practical terms that lawyers can assess against their own risk tolerance.
What really changed was how I discuss risk with clients. I became more direct and more proactive. I don't wait for clients to ask whether AI could have done the work faster or cheaper. I address it upfront: here's what these tools do with Greek legal documents, here's where they fail predictably, here's why those failures matter for your specific use case. This approach works because clients in 2025 increasingly came to me after trying AI tools themselves. They'd already encountered the problem. They didn't need to be convinced that accuracy mattered. They needed someone who could explain what went wrong and what a reliable process looks like. The shift here is towards more transparent, client-focused communication about risk. Clients don't need me to be diplomatic about AI's limitations. They need me to be precise about what breaks, when, and why that matters for the legal work they're trying to do.
The development that mattered most wasn't the technology "improving". It was clients returning after discovering firsthand that it doesn't really work well for what they need.
