
Ben Hylak
Keynote
The Next 5 Years: What Will The World Look Like? (From The Builder's Perspective)
16:00 - 16:50
We stand at the precipice of change, where the decisions made today will ripple across generations. In this keynote, Ben Hylak charts the complex trajectories our world might take in the next 5 years and maps out a spectrum of scenarios, from the risks we must mitigate to the opportunities we must seize. Attendees will gain a clearer understanding of the forces driving change and the various future states that remain within our power to shape.
Talk #1
Scaling Without SaaS Lock-In: Practical Automation and the Boutique LSP Approach
17:00 - 17:30
Boutique LSPs face a paradox: they need to scale and modernize, but rarely have the spare capacity to redesign operations while simultaneously delivering projects and running the business. Traditional SaaS platforms offer structure but come with trade-offs — dependency, limited flexibility, and pricing that can penalize growth.
This session shares practical lessons from building a lightweight AI-driven workflow-orchestration layer that connects and standardizes operational steps without replacing human judgment. From automating intake, quoting, and file routing to maintaining QA checkpoints, the talk offers a simple framework for deciding what to automate first — and what to leave manual — so small teams can reduce coordination overhead, limit SaaS lock-in, and create more space for careful linguistic work.

Emily Diamantopoulou

Istvan Lengyel
Talk #2
Working with Lovable
17:30 - 18:00
BeLazy uses Lovable, an AI-powered development platform, to turn whiteboard ideas into working prototypes fast — clarifying thinking, aligning teams, and cutting down on lengthy discussions. From UI prototyping and API testing to one-off data migration tools, the key insight is that AI accelerates clarity but doesn't create it: the underlying concepts need to be well-defined first.
The session also draws an important distinction between prototyping, where Lovable shines, and production development, which requires tighter controls. It closes with a counterintuitive observation: service providers who can verify and replace AI output may actually be safer users of AI-generated tools than software companies, which must anticipate every edge case and bear the burden of building the clean APIs and solid conceptual foundations that make AI development work in the first place.
Talk #3
What “Good” Looks Like: Evaluating GenAI Content Quality
18:00 - 18:30
In this session, we will present a practitioner's approach to assessing the quality of GenAI-generated translations and multilingual content. We will cover commonly used evaluation metrics, their limitations, and how to design a scalable quality framework that spans both translation and broader multilingual use cases. We will also discuss how evolving expectations are reshaping tools, processes, and best practices, drawing on recent research and real-world quality management case studies.


