AI coding assistants are everywhere: 84% of developers use or plan to use them. But only 33% trust the output. That gap tells you something important. AI is powerful, but it's not magic. In this balanced, demo-driven session, Brian cuts through the hype to show what AI actually does well, where it stumbles, and how to use it as a tool without becoming dependent or naive.
You'll see AI wins (generating boilerplate, writing tests, staying in flow) and AI fails (architecture decisions, edge cases, that "90% right" code that wastes your afternoon debugging). Brian will cover the current landscape, including GitHub Copilot and frontier models from OpenAI, Anthropic, and others, as well as self-hosted options for teams with privacy requirements. Whether you're a developer wondering if AI will take your job (spoiler: 64% of devs aren't worried) or a PM trying to set realistic expectations, you'll leave with practical patterns for getting real value from AI while keeping your judgment in the driver's seat.
You will learn:
- Where AI genuinely accelerates development and where it creates hidden tech debt, with live demos of both wins and failures
- The current AI landscape: frontier models, self-hosted options, and which tools excel at what tasks
- Practical validation patterns for AI-generated code: building appropriate trust without blind faith