Making AI-Assisted Engineering Work in Practice

Thu 07 May 2026 · To be decided

Many organizations introduce AI coding tools expecting immediate productivity gains, only to discover inconsistent results, architectural friction, quality concerns, and team confusion.

I was part of the core group introducing AI-assisted engineering workflows in a 5,000+ engineer organization, working through adoption challenges, governance questions, and architectural constraints at scale. I later built an AI-native engineering setup from day one in a startup environment.

In this session, I’ll share the practical lessons learned from both environments:

- Why unclear architecture kills AI leverage
- What needs to change in codebase structure
- How team practices must evolve
- The common mistakes that create AI chaos instead of acceleration

By the end of the talk, you will have clear, actionable items that you can start embedding in your teams right away.

Schedule

  • 17:30 - 18:30 Walk-in & Dinner
  • 18:30 - 19:15 Part I
  • 19:15 - 19:30 Break
  • 19:30 - 20:15 Part II
  • 20:15 - 21:00 Drinks and socializing

Capacity

Maximum: 50

Sign up

You can sign up for or cancel this free event until Monday 4 May 2026, 15:00.


Event Host

.NET Zuid


Speaker

Dan Patrescu-Baba
Atherio

I build AI-native systems that operate reliably in production environments. My work focuses on two areas: implementing AI-assisted engineering in a way that actually increases organizational capability, and applying behavioral intelligence to leadership through real operational data.

As CTO of Atherio, I work at the intersection of AI, engineering discipline, and organizational behavior. We design systems that analyze communication patterns and align them against defined company values, transforming qualitative leadership signals into measurable insight.

Beyond product development, I help organizations move from AI experimentation to structured adoption. Most teams are not blocked by models or tooling. They are blocked by unclear reliability standards, missing evaluation frameworks, and operating models that were never designed for AI leverage. My work centers on defining those foundations and turning AI into infrastructure, not novelty.

I have spent years building production systems in regulated and risk-sensitive environments, where reliability, traceability, and precision are not optional. That experience shapes how I approach AI: as a capability that must be engineered responsibly and measured rigorously.
