Stop Agreeing With Me: How to Get Honest, Useful Feedback from AI

Presenter: Lori Kagebein
Description: You tried AI. It was polite. It said your lesson plan was great. It offered a few 'minor suggestions' that weren't really suggestions at all. It agreed with everything. And somewhere in the back of your mind, a quiet suspicion formed: this thing is just telling me what I want to hear.
You were right. And it doesn't have to be that way.
AI is not wired to be your critic — but you can wire it that way. This session is about flipping the dynamic entirely, from AI-as-yes-machine to AI-as-critical-friend. Educators already know the value of a critical friend: someone who respects you enough to tell you the truth. This session teaches you how to make AI play that role — and mean it.
We start by diagnosing the 'yes ma'am problem': why AI defaults to validation, how to recognize when it's hedging instead of helping, and the exact language that breaks the flattery loop. Then we spend the majority of the session on the most powerful technique in the workshop: strategic role framing.
Role framing means telling AI not just what to do, but who to be. When you ask AI to 'review your lesson plan,' you get pleasantries. When you ask AI to be a skeptical department head who has seen every edu-fad come and go and isn't impressed, you get something you can actually use. When you ask it to be a student who already knows the material and is bored, or a parent who doesn't trust the assignment, or a state auditor looking for standards alignment gaps — suddenly you have a sparring partner, not a cheerleader.
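For readers who script their AI interactions, the role-framing move above can be sketched as a reusable prompt template. This is a minimal illustration, not the session's actual prompt library; the role names and wording here are invented examples.

```python
# A minimal sketch of role framing: instead of asking AI to "review"
# a piece of work, the prompt assigns it a specific critical persona.
# The personas and wording below are illustrative, not the session's
# exact prompts.

ROLES = {
    "skeptical_department_head": (
        "You are a skeptical department head who has seen every edu-fad "
        "come and go and is not easily impressed."
    ),
    "bored_student": (
        "You are a student who already knows this material and is bored. "
        "Point out exactly where you would tune out and why."
    ),
    "standards_auditor": (
        "You are a state auditor checking for standards alignment gaps. "
        "Flag every objective that lacks a clear standard or assessment."
    ),
}

def role_framed_prompt(role: str, work: str) -> str:
    """Wrap a piece of work in a role-framed critique request."""
    persona = ROLES[role]
    return (
        f"{persona}\n\n"
        "Review the work below. Do not soften your feedback or open "
        "with praise. List the three biggest weaknesses first, then "
        "explain what you would change.\n\n"
        f"---\n{work}\n---"
    )

# Example: stress-test a lesson plan against the skeptical department head.
prompt = role_framed_prompt(
    "skeptical_department_head",
    "Day 1: Introduce fractions with a pizza-slicing activity...",
)
```

The key design choice is that the persona and the no-praise instruction travel together: assigning a role without explicitly forbidding flattery often still produces polite feedback.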
Participants will bring real work — a lesson plan, a unit overview, an email draft, a professional development proposal — and put it through a series of AI stress tests using role-based prompting. They will practice explicitly asking for criticism, requesting devil's advocate responses, and using AI as a real-time thought partner that pushes back instead of piling on praise.
This is the session for educators who already use AI but feel like they're only getting half of what it's capable of. It's also the session that changes how they think about their own prompting forever.
Participants will leave with:
- The exact prompt language that breaks AI out of validation mode and into honest feedback
- A library of role-framing prompts: resistant student, skeptical administrator, critical parent, standards auditor, and more
- A stress-tested version of something real they brought to the session
- The 'critical friend' prompting framework they can apply to any piece of work
- A fundamentally different mental model for what AI conversations can be
