Creating Success with Success Criteria

I love a good magic trick. Not the rabbit-in-a-hat kind, the kind where a room of thirty suddenly feels like fifteen and no one left. No smoke. No mirrors. Just clarity. When the task is crystal clear and kids can picture what success looks like, something wild happens: about three quarters of them take the baton and run. Now you’re freed up to lean into the quarter who needs you most, one-on-one, in the moment, without losing the rest of the class to “wait for me.” That’s not a fantasy; that’s design. And the design hinge is success criteria.

What Success Criteria Do for Teachers

They “reduce the class size.” Not literally, of course. But functionally, yes. When the success picture is visible (what to produce and what quality evidence looks like), roughly 75% of learners can move with independence. That frees you to rotate to the 25% who need targeted scaffolds, models, or re-teaching without stalling everyone else. It’s time triage made possible by clarity.

They turn feedback into a conversation, not a compliment. True feedback is anchored to something specific: Here’s where your work currently sits against the criteria; here’s the next, do-able move. Now your conferences have teeth: “You’ve got evidence, but it isn’t explained. Let’s add the ‘why’ statements from Level 3.” You’re not guessing; you’re pointing. And kids can point too.

They expose task problems early. If you can’t describe what success looks like beyond “finished it,” you don’t have a learning task, you have a compliance task. That’s the number one reason teachers stall out when writing success criteria: the task itself won’t produce observable evidence of the learning target. That’s a design issue, not a teacher issue. Fix the task; the criteria write themselves.

They supercharge teacher clarity. The research is strong here: when teachers make the learning intention and success criteria explicit, so both the teacher and learners can answer “What are we learning? Why? How will we know?”, the impact is large (effect size 0.85, roughly two years’ learning). That’s not a nudge; that’s acceleration.

Where it breaks for teachers (and how to fix it fast)

The task rewards completion, not learning.

Fix: Start with the target (“I can…”) and design a product that proves it. If you can’t picture observable evidence, redesign the task before writing criteria.

The criteria are fuzzy or bloated.

Fix: Write 3–4 observable, countable features anyone could verify (e.g., “Includes two labeled pieces of evidence,” “Uses a because/so statement”). Park the rest in a “not this time” box and post the criteria where kids work.

Feedback drifts to praise or points.

Fix: Use a 10-second micro-conference: Name the place (what’s met), Name the gap (which criterion isn’t), Name the next move (one do-able action). Deliver it mid-work, pointing to the posted criteria.

You’re doing all the checking.

Fix: Require a 1–2 minute self/peer check against a single criterion before they see you. Students highlight evidence, label which criterion it satisfies, and only then queue for a conference.

What Success Criteria Do for Learners

They grow assessment-capable learners. When learners can seek and spot evidence of learning in their own work and compare it to shared criteria, they become less dependent on your nod and more dependent on evidence. Hattie’s database labels this family of practices “self-reported grades” (often discussed as assessment-capable learners), and the current effect size is 0.96 (2+ years’ learning, which is huge). In plain speak: learners who can predict, check, and adjust against clear standards learn more, faster.

They give kids a yardstick, not a vibe. “I think it’s good” becomes “I’m at Level 2 because I have claims but no evidence, and the criteria say I need two pieces of specific evidence.” That shift, from impression to comparison, builds metacognition and honesty. Kids don’t need to guess what you want; they can see it. 

They help learners pick the right tool at the right time. When learners can name which criterion they’re missing, they can grab the scaffold that matches the gap: sentence frames for explanation, exemplars to calibrate quality, a rubric mini-lesson, a quick model-and-mimic. Criteria create purposeful tool use instead of random tool use.

They make feedback usable. With criteria on the table, feedback lands on the work, not the worth of the learner. Better yet, formative moves during learning, checking against criteria and adjusting right then, carry their own positive effect on achievement.

Where it breaks for learners (and how to fix it fast)

Nobody taught the how of criteria.

Fix: Model with a low-stakes sample. Think aloud as you highlight where a criterion is met or missed. Then have learners practice on a fresh sample before touching their own drafts.

They don’t know where the tools live.

Fix: Post a simple “When you’re here → use this” menu tied to each criterion (frames, checklists, exemplars, mini-videos). Train the routine. Point to it mid-lesson.

They don’t know how to accept and act on feedback.

Fix: Teach the ten-second cycle: Hear → Restate → Decide → Do → Show. Rehearse it with a partner, one criterion at a time, until it’s muscle memory.

The criteria are vague.

Fix: Make them observable and countable. Swap vibes for features: “Includes two labeled pieces of evidence” beats “be thorough.” If a learner can’t see it, they can’t self-check it.

Nuts-and-Bolts: From Target to Task to Criteria

  1. Start with the learning target. Learner-friendly, single-lesson scale: “I can explain how ___ affects ___.” (Clarity begins here—and again, that clarity is high-impact.)
  2. Design the learning task, don’t assign work. Ask: What product would naturally prove the target? If the target is “explain,” the product must contain claims + evidence + reasoning, not just “finish the worksheet.”
  3. Draft success criteria as observable features of the product. Example (Level 3 = meets):
     - Includes two accurate, labeled pieces of evidence
     - Uses because/so statements to connect evidence to the claim
     - Addresses a likely counter-example
  4. Teach kids to use the criteria. Model with a sample. Color-code evidence. Have them self-check, then peer-check, then revise.
  5. Feed back to the criteria during the work. Ten-second conferences: “You’ve got both pieces of evidence, nice. Add a ‘so’ statement to connect it back to your claim.”

Make Learning Visible, Make Teaching Lighter

Success criteria aren’t posters. They’re permission slips. They let most kids move without you and give you the time and data to move with the ones who need you most. They transform “Did I do it right?” into “Here’s where I am and what I’m doing next.” They take the heat off your personality and put it on the product. And yes, they make the room feel smaller without moving a single desk.

If you want one lever this week that helps both teachers and learners, pull this one: articulate the finish line, then teach kids how to run toward it and check their stride on the way. That’s how we quit teaching and start designing learning. That’s how we make learning the job.
