Julien May


The Responsibility of Engineers in the Age of AI

Lately I’ve been thinking a lot about what it means to grow as an engineer, in a world where AI tools are becoming increasingly powerful.

At first glance, it’s tempting to think we can just lean into automation. Why spend time writing boilerplate or struggling through implementation details when an assistant can generate a near-complete solution in seconds?

However, what happens to learning when the struggle disappears?

Today, a Machine Can Write Code, But It Can’t Think For You

Let’s be honest: engineers today have access to tools that many of us could only dream of when we started. You can scaffold a service, debug a weird API response, or generate a working test in seconds. That’s incredible. But it can also be misleading, because completing a task doesn’t mean the tool taught you anything. In fact, these tools help you skip ahead.

But if you skip too many steps, you risk never building the intuition and problem-solving muscle that turn you from a coder into an engineer.

An AI tool doesn’t care about why a particular design decision was made. It doesn’t teach you trade-offs. It doesn’t guide you through the pain of debugging a wrong abstraction. But that’s where growth happens.

The Role of Senior Engineers Is Changing

There’s a flip side to this: as AI takes over more of the how, senior engineers are increasingly responsible for teaching the why. We’re no longer just reviewing syntax, formatting, or best practices. Our role is to help junior and less experienced engineers build judgment.

That means mentoring is more important than ever. We need to expose people to real-world problems, architectural decisions, constraints, and failures. We need to talk through trade-offs, not just best practices. AI can’t do that for us.

It also means, more than ever, giving juniors actual ownership, not just tickets. They need space to make decisions (and occasionally the wrong ones) in a safe environment. That’s how confidence, competence, and autonomy grow.

AI Is a Tool, Not a Mentor

I don’t want to sound like a purist. I really love these tools. I use them daily, by now for most of the code I write, and I find them very useful.

But they should amplify understanding, not replace it. Otherwise, we’re just creating a generation of engineers who can “prompt” their way through work but don’t really understand the systems they’re building. And that’s dangerous because learning depends on feedback. And when AI shortcuts the trial-and-error process, it often hides the why behind a solution. Without hitting walls or debugging mistakes, engineers miss the very moments where intuition is formed.

I’ve also seen a new pattern emerge: when engineers don’t fully understand what’s being generated, they sometimes stop thinking altogether. And instead of inspecting the code and making a simple one-line change, they throw another prompt at the problem, hoping the tool will fix what it just broke. This isn’t efficiency. It’s learned helplessness dressed as productivity.

We’ve seen in the recent past how a lack of understanding of technical implications, e.g. around security, data privacy, or performance, can lead to costly and even dangerous outcomes. You can’t catch what you don’t understand, and AI won’t warn you unless you know what to ask.

Yes, we can (and maybe should) ask an LLM to challenge our decisions. But ultimately, it’s humans who carry the responsibility to understand the domain, to reason through trade-offs, and to make the right calls.

It’s people who must become domain experts. And that means not just writing code, but understanding the real-world problems behind that code. Problems that are often messy, cross-cutting, and technology-agnostic.

This is where experienced engineers must take their role seriously. It’s not enough to just ship features or clean up code - we need to model what good looks like, help others build that lens for understanding, and ensure that technical work is always grounded in a clear understanding of the domain it serves.

Final Thoughts

As AI tools reshape the way we work, learn, and build, we need to step up, not just as individual contributors, but as mentors, guides, and system designers.

Things are changing fast. Far faster than universities, bootcamps, or traditional training programs can adapt. We can’t rely on formal institutions to prepare people for this new world. That responsibility now sits squarely within our teams and organizations.

We need to create environments where understanding is valued as much as output. Where people aren’t just told what to build, but are helped to understand why. Where fast tools don’t replace deep thinking but amplify it.

And yes, that means seniors, staff engineers, and leads must take mentoring seriously. Not as an optional “nice-to-have”, but as a core part of engineering practice in the AI era.

But it’s not just about individuals. Organizations must actively create the space for this kind of growth. If all incentives point to shipping fast, teams will optimize for that - even at the cost of long-term understanding. We need to make it safe (and expected) to slow down when needed, to reflect, to mentor, to learn.

In ten years, I don’t want us to look back and realize we trained prompt engineers who can generate solutions but can’t evaluate them. I want to work with people who care about solving the right problems, not just delivering fast answers they don’t understand.

That’s what makes engineering a craft. And it’s on us to protect that.