I spent six months turning my engineering team into AI power users. Here's exactly what worked, what didn't, and the playbook you can steal.
TL;DR: Six months ago, my team was skeptical about AI tools. Today, they're shipping features in days, not weeks, test coverage is above 80%, and new hires are productive from day one. Here's the exact playbook: the courses I used, the metrics I tracked, and the demo that got leadership buy-in.
I write about engineering leadership, AI adoption, and building high-performing teams. Follow me for more frameworks you can actually use.
Why I Had to Figure This Out
Like every engineering manager in 2025, I was fielding the same questions from leadership: "How can we leverage AI? Is it actually helping us move faster?"
We were using AI tools informally — some engineers loved them, others ignored them, and nobody was measuring anything.
But here's what made me act: I watched my best senior engineer spend 3 hours writing boilerplate tests that Claude could generate in 30 seconds. That's when I realized we weren't behind on AI adoption — we were hemorrhaging productivity.
I had to build a systematic approach. Here's what worked.
What Actually Works (Based on Real Usage Data)
After 6 months of tracking my team's AI usage, here's what the data shows:
Where AI is delivering massive results:
- Onboarding acceleration: New hires with no prior knowledge of our programming language (Kotlin) are shipping production code in their first week using Claude Opus. Previously, onboarding took 2+ months before a new engineer was productive.
- Test coverage explosion: Our test coverage skyrocketed above 80%. Engineers who previously skimped on tests under time pressure now generate comprehensive test suites in minutes.
- Internal tooling that actually gets built: One engineer built an internal tool we'd been wanting for months but could never justify in a sprint. AI made it a side project instead of a quarter-long initiative.
- Everything runs on Claude: We use Opus 4.5 (switching to 4.6) for virtually everything — writing, executing, testing, and launching code. It's not a helper; it's a core part of the development workflow.
Where AI falls short:
- Business logic: AI doesn't understand our domain. It writes code that compiles but implements the wrong business rules.
- Complex system integration: AI suggests patterns that work in isolation but break our existing architecture.
The insight: AI is incredible at mechanical tasks but terrible at judgment calls. The productivity gains come from freeing engineers to focus on the hard problems AI can't solve.

My 3-Phase Implementation (Steal This Playbook)
Phase 1: Skill Up the Team (Month 1)
I started by investing in education. I enrolled my entire team in comprehensive prompt engineering courses because I realized most engineers don't know how to use AI effectively.
The courses I enrolled my team in:
- ChatGPT Prompt Engineering for Developers — The foundation. Every engineer needed this.
- Vibe Coding 101 with Replit — Practical AI-assisted development workflows
- Building Applications with Vector Databases — For engineers building AI features
- MCP: Build Rich-Context AI Apps with Anthropic — Cutting-edge context management
- Long-term Agentic Memory with LangGraph — For our AI infrastructure work
All free short courses from DeepLearning.ai + Coursera. Zero budget needed to start.
Adoption metrics I tracked (a sketch of how to pull these numbers follows this list):
- Daily AI tool usage — Tracked weekly, aimed for 100% daily active usage
- PR cycle time — Measured reduction from first commit to merge
- PR size discipline — Pushed for 300 lines or fewer per PR. AI makes it easy to write more code, but smaller PRs = faster reviews = fewer bugs
- PR review depth — Watched review quality closely. AI-assisted code needs MORE careful review, not less
- Commit frequency — Encouraged breaking work into small, mergeable chunks and deploying frequently
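If you want to pull these numbers without buying a dashboard, here's a minimal sketch against the GitHub REST API. It assumes a GITHUB_TOKEN environment variable and a hypothetical your-org/your-repo slug, and it approximates cycle time as PR creation to merge (first-commit timestamps need an extra API call per PR):

```python
import os
from datetime import datetime

import requests  # pip install requests

GITHUB_API = "https://api.github.com"
REPO = "your-org/your-repo"  # hypothetical slug; replace with your repo
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def merged_prs(repo: str, per_page: int = 50) -> list[dict]:
    """Fetch recently updated closed PRs, keeping only the merged ones."""
    resp = requests.get(
        f"{GITHUB_API}/repos/{repo}/pulls",
        params={"state": "closed", "sort": "updated",
                "direction": "desc", "per_page": per_page},
        headers=HEADERS, timeout=30,
    )
    resp.raise_for_status()
    return [pr for pr in resp.json() if pr.get("merged_at")]

def hours_between(start: str, end: str) -> float:
    """Hours between two ISO-8601 timestamps as returned by the API."""
    parse = lambda s: datetime.fromisoformat(s.replace("Z", "+00:00"))
    return (parse(end) - parse(start)).total_seconds() / 3600

def pr_size(repo: str, number: int) -> int:
    """Lines changed; the list endpoint omits this, so fetch the PR itself."""
    resp = requests.get(f"{GITHUB_API}/repos/{repo}/pulls/{number}",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    pr = resp.json()
    return pr["additions"] + pr["deletions"]

prs = merged_prs(REPO)
if prs:
    cycle = sorted(hours_between(p["created_at"], p["merged_at"]) for p in prs)
    print(f"median cycle time over {len(prs)} PRs: {cycle[len(cycle) // 2]:.1f}h")
    oversized = [p["number"] for p in prs if pr_size(REPO, p["number"]) > 300]
    print(f"PRs over the 300-line budget: {oversized}")
```

One note: track the median, not the mean. A single PR that sat open for a month will swamp an average and hide the real trend.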
The breakthrough moment: I held cross-pollination sessions where engineers shared their best prompts and workflows. Someone showed a prompt that turned a 2-hour debugging session into 10 minutes. Another engineer shared how they use AI to write migration scripts that previously took half a day. When your peers show you what's working — with their actual prompts — adoption skyrockets faster than any mandate.
The Personal Experience That Changed Everything
Here's advice most AI strategy articles won't give you: build something yourself.
I personally used AI tools to build applications that would have taken 3 months of traditional development — in 3 days. Not toy projects. Real, production-quality web apps with databases, authentication, and deployment. I started with Cursor + Claude Sonnet, but lately my go-to is Claude Code + Claude Opus 4.6 — hard to beat that combo. I also use VS Code with Claude Code running in the terminal.
That experience fundamentally changed how I guide my team. You can't credibly coach engineers on AI-assisted development if you haven't felt the 10x speedup yourself. You won't know which tasks to push toward AI and which to protect. You won't understand the failure modes.
My recommendation to every EM: Block off a weekend. Pick a real problem you have. Build a solution using AI tools end-to-end. You'll learn more about AI's strengths and limitations in 48 hours than in any strategy meeting.
Phase 2: Get Leadership Buy-In (Month 2)
I didn't pitch with slides or metrics. I showed a video demo — no voiceover, just the tool in action.
The demo showed a more powerful AI tool going from writing code to executing, testing, and launching the solution. I compared it side by side with a more basic tool that just injected code and created more friction than it removed.
The video was enough. Leadership saw the difference immediately. When you can show a tool that takes a developer from problem to deployed solution in minutes vs one that creates more work, the conversation shifts from "should we invest?" to "how fast can we roll this out?"
The lesson: Don't pitch AI with abstract productivity metrics. Show the before and after. Let the demo do the talking.
Phase 3: Measure and Iterate (Month 3+)
Once adoption was rolling, I focused on tracking the real impact:
Team sentiment:
- Monthly pulse: "Do AI tools make you more productive?" (1-5 scale)
- "What tasks do you use AI for most?"
- "What AI-generated code have you had to rewrite?"
The data tells the story better than any pitch deck.
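The tracking itself can stay lightweight. A minimal sketch, assuming your survey tool exports a CSV with hypothetical month and score (1-5) columns:

```python
import csv
from collections import defaultdict
from statistics import mean

# Hypothetical export: one row per response, with "month" and "score" columns.
monthly: dict[str, list[int]] = defaultdict(list)
with open("pulse_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        monthly[row["month"]].append(int(row["score"]))

for month in sorted(monthly):
    scores = monthly[month]
    print(f"{month}: avg {mean(scores):.2f} ({len(scores)} responses)")
```

Watch the trend across months, not any single reading.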

The Hard Conversations
"Will AI Replace My Engineers?"
No. But it will change what they do. Engineers who resist AI tools will be less productive than those who use them well. Your job as a manager: make sure your team develops AI fluency as a skill, not a threat.
What's actually happening: The bar for what constitutes "engineering work" is rising. Tasks that used to take a full day (writing boilerplate, setting up CI/CD, creating test suites) now take an hour. That means your engineers should be spending more time on:
- System design and architecture
- Understanding user problems deeply
- Cross-team collaboration
- Mentoring and knowledge sharing
- The creative work that AI can't do
"How Do We Handle AI in Code Reviews?"
Two non-negotiable rules:
- Review AI code MORE carefully, not less. The "it looks right" problem is real. AI generates plausible-looking code that can have subtle logic errors. Don't let the clean syntax fool you.
- The engineer is responsible. If you put AI-generated code in a PR, you own it. You tested it. You understand it. "The AI wrote it" is not an excuse for bugs in production.
Recommendations for Junior Engineers
Junior engineers who over-rely on AI miss the struggle that builds real understanding. Debugging, reading documentation, tracing through code — these painful experiences create competence.
What I'd recommend:
- First 3 months: limited AI tool usage. Build fundamentals first.
- After 3 months: gradual introduction with mentorship on when to use AI vs. when to struggle through it.
- Regular pairing sessions where they explain AI-generated code line by line.
- Explicit expectations: "You should be able to write this code without AI. AI helps you go faster, not skip learning."

A Critical Warning: Don't Let AI Make You Worse
Here's something I don't see enough people talking about: AI can atrophy your engineering skills if you're not careful.
When AI writes most of your code, you stop exercising the mental muscles that made you a good engineer in the first place. Problem decomposition, debugging intuition, understanding how systems actually work under the hood — these skills erode if you outsource them entirely.
Two rules I enforce on myself and recommend to every engineer:
- Let AI interview you, not the other way around. When solving a problem, have AI ask YOU questions about the approach before it writes code. "What edge cases should we handle? What's the expected input format? How should errors propagate?" If you can't answer these clearly, you don't understand the problem well enough — and AI-generated code will reflect that. (A sample prompt follows these rules.)
- Code by hand regularly. Block time every week to write code without AI assistance. Solve LeetCode problems, contribute to open source, or build small utilities from scratch. The engineers who will thrive in 5 years are the ones who can code without AI but choose to use it for speed. The ones who can't code without it are in trouble.
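To make the first rule concrete, here's the kind of standing instruction I mean. This is a sketch; the wording is mine, so tune it to your tool:

```
Before you write any code for this task, interview me.
Ask me, one question at a time, about edge cases, input
formats, error handling, and performance constraints.
Only start generating code once I've answered everything.
```

If you can't answer the model's questions, that's your signal to go read the problem more closely before generating anything.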
AI is a force multiplier, not a replacement for understanding. Stay sharp.
What I'm Watching For 2026
Based on what I'm seeing with my team and the tools we're testing:
1. AI-powered performance tracking and team health monitoring — The tools for writing code are mature. The tools for understanding how your team is actually doing? Primitive. I'm watching this space closely because the managers who crack this will have an enormous advantage.
2. Codebase intelligence — AI that understands your specific architecture, not just generic programming. When AI knows your team's patterns, naming conventions, and architectural decisions, it becomes 10x more useful — think Claude.md files that encode those conventions and decisions (a sketch follows this list).
3. Human-AI collaboration workflows — Not AI replacing engineers, but new workflows where AI handles mechanical tasks while humans focus on creative problem-solving.
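On point 2, you don't have to wait: a conventions file gets you part of the way today. Here's a minimal sketch of what one can look like; the contents below are illustrative, not my team's actual rules:

```markdown
# CLAUDE.md (illustrative example)

## Conventions
- Kotlin, official style; ktlint must pass before commit
- New services follow the repository pattern under core/data/
- Every new endpoint ships with an integration test

## Architecture decisions
- Coroutines over RxJava for all new async code
- Feature flags go through the internal flags module, never env vars
```

Claude Code picks a file like this up automatically at the start of a session, so conventions stop living only in senior engineers' heads.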
The managers who build their own AI-assisted workflows now will have a significant edge.
The Bottom Line
In 6 months, a skeptical engineering team turned into AI power users. The key wasn't finding the perfect tools — it was building a systematic approach to adoption, measurement, and iteration.
The three things that made the difference:
- Invested in education first — enrolled the team in prompt engineering courses before rolling out tools
- Measured everything — tracked daily usage, PR cycle time, review depth, and quality impact
- Built my own tooling — used AI to become a better manager, not just help my engineers code faster
If you enjoyed this article, follow me for more content on AI.
The managers who figure this out now will lead the most productive teams in the industry. The ones who wait will spend the next two years catching up.
Start with education. Track what matters. Build your own tools. And for the love of shipping — try building something with AI yourself before telling your team how to use it.