Adaptive Hint Sequence Designer

What This Skill Does

Designs a cascading hint sequence for a specific problem type — a series of progressively more revealing hints that help students move past sticking points without simply giving them the answer.

This is one of the most technically demanding aspects of intelligent tutoring system (ITS) design. The critical insight from VanLehn's (2011) meta-analysis is that the effectiveness of tutoring (human or AI) depends heavily on the quality of the scaffolding — and hint sequences are the primary scaffolding mechanism. Get the sequence wrong and you either frustrate students (hints too vague, too few) or steal their learning (hints too specific, too early).

The output includes the complete hint cascade (typically 3–5 levels, from general strategic guidance to a specific procedural nudge), a design rationale explaining the cognitive function of each level, trigger conditions (when each hint fires), and a bottom-out strategy (what happens when hints are exhausted). AI is specifically valuable here because designing effective hint sequences requires simultaneously anticipating student errors, calibrating hint specificity, and ensuring that each hint level provides just enough information to unstick the student without bypassing the cognitive work that produces learning.
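The structure described here (hint levels with cognitive functions and trigger conditions, plus a bottom-out strategy) could be modelled with a small data structure. A minimal Python sketch, where all class and field names are illustrative assumptions rather than any format the skill itself defines:

```python
from dataclasses import dataclass


@dataclass
class HintLevel:
    level: int               # 1 = most general strategic guidance
    text: str                # the hint shown to the student
    cognitive_function: str  # design rationale for this level
    trigger: str             # condition under which this hint fires


@dataclass
class HintCascade:
    problem_type: str
    levels: list[HintLevel]  # typically 3-5, general to specific
    bottom_out: str          # strategy once all hints are exhausted

    def next_hint(self, hints_used: int) -> str:
        """Return the next hint in the cascade, or the bottom-out
        strategy once every level has been delivered."""
        if hints_used < len(self.levels):
            return self.levels[hints_used].text
        return self.bottom_out
```

The key design choice in this sketch is that exhausting the cascade never leaves the student stranded: `next_hint` always falls through to the bottom-out strategy.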

Evidence Foundation

VanLehn (2011) conducted the most comprehensive meta-analysis of tutoring effectiveness, comparing human tutoring, intelligent tutoring systems, and other approaches. He found ITS effect sizes averaging 0.76 — remarkably close to human tutoring (0.79) and substantially higher than "no tutoring" conditions. Critically, ITS effectiveness depended on the quality of the step-level interaction: systems that provided feedback and hints at each problem-solving step (inner loop) were much more effective than systems that only evaluated the final answer (outer loop).

Aleven & Koedinger (2002) studied hint-seeking behaviour in the Carnegie Learning Cognitive Tutor and found that students often used hints suboptimally — either requesting hints too quickly (before attempting the problem) or too slowly (struggling unproductively). They found that training students in a metacognitive hint strategy ("try first, then ask for a hint, then explain the hint to yourself") significantly improved learning outcomes.

Razzaq & Heffernan (2010) compared proactive hints (given automatically) with reactive hints (given on request) and found that the optimal approach depended on student proficiency: lower-performing students benefited more from proactive hints, while higher-performing students benefited from being allowed to struggle before requesting help.

Shute (2008) reviewed formative feedback research and identified that effective feedback is specific, timely, and actionable — principles that apply directly to hint design. Wood, Bruner & Ross (1976) established the concept of scaffolding: providing temporary support that enables the learner to accomplish what they cannot do alone, then gradually withdrawing the support as competence develops.
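The Razzaq & Heffernan finding suggests a proficiency-adaptive delivery policy: proactive hints for lower-performing students, reactive hints gated behind a genuine attempt for higher-performing ones. A rough Python sketch of such a policy, in which the function name, the proficiency cutoff, and the timing threshold are invented for illustration and not empirically calibrated:

```python
def should_deliver_hint(proficiency: float, attempts: int,
                        seconds_stuck: float, hint_requested: bool) -> bool:
    """Decide whether to deliver the next hint now.

    proficiency: estimated mastery in [0, 1] (assumed scale).
    attempts: number of genuine attempts on the current step.
    """
    if proficiency < 0.4:
        # Proactive mode (Razzaq & Heffernan 2010): fire after a failed
        # attempt or prolonged inactivity, without waiting for a request.
        return attempts >= 1 or seconds_stuck > 60
    # Reactive mode: honour a request only after at least one attempt,
    # echoing Aleven & Koedinger's "try first, then ask" strategy.
    return hint_requested and attempts >= 1
```

A fuller implementation would also penalise rapid repeated requests (hint abuse) and prompt self-explanation after each hint, per the metacognitive strategy described above.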

Input Schema

The teacher must provide:

  • Problem type: What students are solving, e.g. "Solving linear equations with one unknown — e.g. 3x + 7 = 22" / "Writing a topic sentence for a persuasive paragraph" / "Balancing a chemical equation" / "Debugging a Python function that should return a sorted list but returns None"
  • Common sticking points: Where students get stuck, e.g. "Students forget to do the same operation to both sides of the equation" / "Students write topic sentences that are too vague or that state a fact rather than a claim" / "Students balance atoms randomly rather than systematically" / "Students don't understand that Python's sort() returns None — they expect it to return the sorted list"

Optional (injected by context engine if available):

  • Student level: Year group and proficiency
  • Subject area: The curriculum subject
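To illustrate how the two required inputs might map onto a finished cascade, here is a hypothetical four-level sequence for the linear-equation example above. The hint wording and the dictionary keys are invented for illustration, following the general-to-specific progression the skill describes:

```python
example_cascade = {
    "problem_type": "Solving linear equations with one unknown, e.g. 3x + 7 = 22",
    "sticking_point": "Forgetting to apply the same operation to both sides",
    "levels": [
        # Level 1: general strategic guidance
        "What is your overall goal here? Try to get x by itself on one side.",
        # Level 2: narrow attention to the relevant step
        "Which term could you remove first to move towards x on its own?",
        # Level 3: targeted conceptual prompt at the sticking point
        "If you subtract 7 from the left side, what must you do to the "
        "right side to keep the equation balanced?",
        # Level 4: specific procedural nudge
        "Subtract 7 from BOTH sides: 3x + 7 - 7 = 22 - 7, giving 3x = 15.",
    ],
    # Bottom-out: never leave the student stranded once hints run out
    "bottom_out": "Show the worked step and ask the student to explain "
                  "it back in their own words before continuing.",
}
```

Note how levels 3 and 4 are aimed squarely at the declared sticking point (operating on both sides) rather than at the problem in general, which is what the "common sticking points" input is for.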