
Teaching Kids to Direct AI: Four Mechanisms We Built Into Xyplor

A public technical description of novel mechanisms in Xyplor — AI Insight loops, Explorations, longitudinal child profiles, and active-time screen limits.

The Xyplor Team · 7 min read
AI · education · kids · learning-design

This post documents specific technical mechanisms we've built into Xyplor, an AI-assisted career and educational exploration platform for kids ages 6-18. It is published as a public technical description of our work and as prior art for the mechanisms described below.

Why we're publishing this

We're building a product for an audience — kids — where the real goal isn't "use AI to tutor them." It's: teach kids to direct AI thoughtfully, so they graduate as confident AI-native thinkers rather than AI-dependent ones.

That's a subtle pedagogical goal. It shaped four mechanisms we built that we haven't seen combined elsewhere. We're describing each in enough technical depth that (a) other builders can learn from what worked, (b) the descriptions serve as public prior art against anyone who might later attempt to patent them, and (c) our future selves remember what we did and why.


Mechanism 1: The AI Insight Loop

Problem

When a 9-year-old types "Make a website about my cat" into our Maker Studio and gets a complete, working HTML page in five seconds, they feel magic. But they don't learn anything transferable. Tomorrow they'll type the same prompt and still be dependent on the magic.

What we built

Immediately after the primary generation returns, we trigger a second, smaller LLM call (a fast model like Claude Haiku 4.5) with a different system prompt. This second call is a meta-reflection producing a structured JSON response:

{
  "title": "5-7 word title for the reflection",
  "whatAiDid": "1-3 sentences: specifically what the AI did in response to the prompt",
  "skill": "1 sentence: the transferable skill the kid just practiced",
  "tryNext": [
    "specific next-step prompt 1",
    "specific next-step prompt 2",
    "specific next-step prompt 3"
  ]
}

The reflection is rendered as a card immediately below the generated artifact. Critically, each item in tryNext is a clickable button that auto-populates the next iteration input. This closes the loop: the kid sees magic → sees what the magic was → has three specific next prompts ready to try → clicks one → iterates.
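Before rendering the card, the reflection payload needs to match the schema above. Here is a minimal sketch of that validation step; `Reflection` and `parseReflection` are illustrative names, not Xyplor's actual code:

```typescript
// Hypothetical shape matching the reflection schema above.
interface Reflection {
  title: string;
  whatAiDid: string;
  skill: string;
  tryNext: string[];
}

// Parse and validate the model's JSON output before rendering the card;
// returns null on any schema violation so the UI can simply skip the card.
function parseReflection(raw: string): Reflection | null {
  try {
    const data = JSON.parse(raw);
    if (
      typeof data.title !== "string" ||
      typeof data.whatAiDid !== "string" ||
      typeof data.skill !== "string" ||
      !Array.isArray(data.tryNext) ||
      data.tryNext.length !== 3 ||
      !data.tryNext.every((s: unknown) => typeof s === "string")
    ) {
      return null;
    }
    return data as Reflection;
  } catch {
    return null;
  }
}
```

Failing closed (no card rather than a broken card) matters here: a malformed reflection should never block the kid from seeing their generated artifact.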

Why this matters (and is non-obvious)

  • Most AI-in-education products treat AI as a one-way deliverer (tutor, grader, answerer).
  • Some teach "AI literacy" through decoupled curriculum (lessons about AI).
  • We know of no product that combines the raw generation with an immediate, contextual, age-adaptive meta-reflection that auto-populates the next prompt.

The auto-fill of tryNext into the iteration input is the specific detail that makes this sticky. Without it, the reflection is reading material. With it, it's a guided loop.

Prompt design notes

  • The second call costs roughly 1% of the primary call (about $0.002 per reflection using Haiku), so we can afford to run it on every generation.
  • Age-adaptive system prompt: younger kids get simpler vocabulary, no jargon; older kids get more conceptual framing.
  • The system prompt explicitly instructs the model to avoid empty praise (no "good job!"), always teach something real, and ground suggestions in what the kid could actually type next.
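The age-adaptive prompt can be as simple as a banded template. This is an illustrative sketch under assumed age bands and wording, not Xyplor's actual prompt:

```typescript
// Illustrative: an age-banded system prompt for the reflection call.
// The band boundaries and wording are assumptions for the sketch.
function reflectionSystemPrompt(age: number): string {
  const tone =
    age <= 9
      ? "Use short sentences and simple words a young reader knows. No jargon."
      : age <= 13
      ? "Use clear language; you may name one real concept per reflection."
      : "Use conceptual framing; name the underlying technique explicitly.";
  return [
    "You explain what an AI just did for a kid, in JSON.",
    tone,
    "Never give empty praise like 'good job!'. Always teach something real.",
    "Ground every tryNext suggestion in something the kid could actually type next.",
  ].join(" ");
}
```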

Mechanism 2: Explorations — AI-Generated Multi-Step Personalized Adventures

Problem

Kids' interests are specific ("I want to start a podcast"), but existing career-education platforms only offer generic, pre-authored content pathways. A 13-year-old typing "podcast" into most platforms gets either (a) generic articles about podcasting or (b) nothing.

What we built

An Exploration is a 3-6 step guided journey around any topic the kid provides. When a kid types a topic ("Podcasting"), we invoke an LLM with a system prompt that instructs it to design a personalized multi-step adventure. The response is structured JSON describing steps, each with:

{
  "kind": "learn" | "chat" | "make" | "project" | "reflect",
  "title": "short action title",
  "description": "what they'll do and why",
  "actionLabel": "button text (2-4 words)",
  "actionHref": "path using CHILD_ID token, resolved at runtime",
  "seedPrompt": "starter prompt to copy into Nova chat or Maker Studio"
}

The actionHref specifically links back into other features of the app:

  • /kid/CHILD_ID/chat → opens AI mentor
  • /kid/CHILD_ID/make → opens Maker Studio
  • /kid/CHILD_ID/explore/<slug> → opens a specific career field
  • /kid/CHILD_ID/projects/<slug> → opens a specific project
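Resolving the CHILD_ID token at runtime is a straightforward substitution. A minimal sketch (function name is illustrative):

```typescript
// Replace every CHILD_ID token in a generated actionHref with the real
// child id, URL-encoded so ids can never break the path.
function resolveActionHref(href: string, childId: string): string {
  return href.split("CHILD_ID").join(encodeURIComponent(childId));
}
```

For example, `resolveActionHref("/kid/CHILD_ID/make", "abc123")` yields `/kid/abc123/make`. Keeping the token symbolic in the LLM output means the model never sees or emits a real child identifier.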

The seedPrompt is a literal string the kid can click-to-copy and paste into Nova or Maker. This is meaningful: instead of saying "go think about podcasts with Nova," we hand them the exact first sentence to say — scaffolded prompt engineering through example.

Why this is novel

What's unusual here:

  1. The system prompt instructs the LLM to chain tools — step 1 is a learn+chat, step 2 is a make, step 3 is a reflect. The LLM orchestrates the tool chain based on the kid's topic and profile.
  2. The generated pathway is personalized — the LLM is given the kid's strengths, interests, and recently-captured ideas as context. For a kid whose ideas mentioned entrepreneurship, the podcast exploration gets a "pitch your show" step; for another kid who cares about storytelling, it gets a "write your pilot episode" step.
  3. Step progression is enforced — only the current step is unlockable. Completing a step auto-advances and logs to the child's longitudinal profile.

This isn't "AI generates a lesson plan." It's "AI orchestrates a journey across existing app primitives, seeded with transferable prompts the kid learns by copying."
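The step-gating described above reduces to a small state machine: only the current step is unlockable, and completing it advances the pointer. A minimal sketch with illustrative names:

```typescript
// Gated step progression: only the current step can be opened; completing
// it marks it done and auto-advances. Out-of-order completions are ignored.
interface ExplorationState {
  currentStep: number;   // 0-based index of the only unlockable step
  totalSteps: number;
  completed: boolean[];
}

function canOpen(state: ExplorationState, stepIndex: number): boolean {
  return stepIndex === state.currentStep;
}

function completeStep(state: ExplorationState, stepIndex: number): ExplorationState {
  if (!canOpen(state, stepIndex)) return state; // ignore out-of-order completions
  const completed = [...state.completed];
  completed[stepIndex] = true;
  return {
    ...state,
    completed,
    currentStep: Math.min(stepIndex + 1, state.totalSteps), // auto-advance
  };
}
```

In the real system the completion event would also log to the child's longitudinal profile; the sketch only shows the gating.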


Mechanism 3: Longitudinal Child Profile as Prompt Context

Problem

Most AI-for-kids systems treat each session as independent. A chatbot doesn't remember that three months ago the kid was obsessed with marine biology, mentioned wanting to start a podcast two weeks ago, and has consistently avoided math-heavy fields. That context is gold, and throwing it away at the start of every session is a waste.

What we built

A ChildProfile model that accretes data across sessions and years:

  • strengthScores: JSON — eight strength dimensions (Gardner's Multiple Intelligences adapted), updated after every assessment
  • interests: JSON — vector of topic interests that compounds from auto-extracted ideas
  • Related records: Idea (auto-captured from chat messages), Activity (every page view, completion, chat), FieldExploration (time spent on each career field), Creation (every website/story they've made), Exploration (every adventure they've started or completed)

Every AI call in the system receives a serialized slice of this profile as system context.
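Serializing a profile slice into system context might look like the following. The field names echo the model described above, but the selection and formatting here are assumptions for the sketch:

```typescript
// Illustrative: flatten a slice of the longitudinal profile into a compact
// system-context string for downstream AI calls.
interface ProfileSlice {
  age: number;
  strengthScores: Record<string, number>; // eight strength dimensions
  interests: Record<string, number>;      // topic → weight
  recentIdeas: string[];                  // latest auto-captured ideas
}

function profileContext(p: ProfileSlice, maxIdeas = 5): string {
  const topStrengths = Object.entries(p.strengthScores)
    .sort((a, b) => b[1] - a[1])
    .slice(0, 3)
    .map(([k]) => k);
  const topInterests = Object.entries(p.interests)
    .sort((a, b) => b[1] - a[1])
    .slice(0, 5)
    .map(([k]) => k);
  return [
    `Child age: ${p.age}.`,
    `Top strengths: ${topStrengths.join(", ")}.`,
    `Current interests: ${topInterests.join(", ")}.`,
    `Recent ideas: ${p.recentIdeas.slice(0, maxIdeas).join("; ")}.`,
  ].join("\n");
}
```

Keeping the slice small (top-k strengths and interests rather than the full record) keeps the per-call token cost flat even as the profile grows over years.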

Why this is novel

The AI itself is stock (we use Claude). The novelty is what we load into its context:

  1. Longitudinal accretion — the profile gets richer each week without the kid doing anything special. Ideas extracted from casual chat in April become context for a roadmap generated in July.
  2. Automatic extraction — we don't ask the kid to tell us what they're interested in. We run a structured-output LLM call on every user message and automatically file interests, aspirations, projects, questions, and goals into a typed Idea table. The kid's data structure improves silently as they use the app.
  3. Cross-tool context sharing — the roadmap generator, the exploration generator, and the daily spark generator all pull from the same evolving profile. One extraction, many downstream uses.

This matters because as the kid uses the app for more years, the personalization gets better — the opposite of most software, which degrades as data accumulates.
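The per-message extraction described in point 2 produces typed records. A sketch of what one extracted item might look like before filing, with the categories taken from the post and the validation function as an illustrative stand-in:

```typescript
// Categories mirror the post: interests, aspirations, projects, questions, goals.
type IdeaKind = "interest" | "aspiration" | "project" | "question" | "goal";

interface ExtractedIdea {
  kind: IdeaKind;
  text: string;           // short normalized phrase, e.g. "start a podcast"
  sourceMessageId: string;
}

// Validate one extracted item before filing it into the Idea table;
// structured-output LLM calls can still return malformed items.
function isValidIdea(x: any): x is ExtractedIdea {
  const kinds: IdeaKind[] = ["interest", "aspiration", "project", "question", "goal"];
  return (
    x != null &&
    kinds.includes(x.kind) &&
    typeof x.text === "string" && x.text.length > 0 &&
    typeof x.sourceMessageId === "string"
  );
}
```

Keeping the source message id on each idea preserves provenance, which matters when a parent later asks where a profile entry came from.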


Mechanism 4: Earned Screen Time with Active-Input Verification

Problem

Kids' apps that let users "earn more time" through activity face a gaming problem: a kid can leave the app open and still accrue time. The app thinks they're using it; they're actually watching TV.

What we built

A client-side sampling heartbeat that only increments the time counter when the kid is actively interacting:

// Active-time heartbeat: only count seconds while the kid is interacting.
let activeSeconds = 0;
let lastInputAt = Date.now();

// Every 10 seconds, check if the kid is "active right now"
setInterval(() => {
  if (isActiveNow()) {
    activeSeconds += 10;
  }
}, 10_000);

function isActiveNow(): boolean {
  if (document.hidden) return false;         // tab backgrounded → not active
  return Date.now() - lastInputAt <= 60_000; // idle if no input in last 60s
}

// Any input event refreshes lastInputAt
window.addEventListener("mousemove", () => lastInputAt = Date.now());
window.addEventListener("keydown", () => lastInputAt = Date.now());
window.addEventListener("touchstart", () => lastInputAt = Date.now());
// ... etc for scroll, pointerdown, etc.

Buffered seconds are posted to the server in batches every 60 seconds and on page unload.
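The batching itself is a small buffer-and-flush helper. This sketch uses an injectable sender (names illustrative); a real unload flush would use `navigator.sendBeacon` so the final request survives the page closing:

```typescript
// Buffer active seconds client-side and flush them in batches.
// The sender is injected so the flush path is testable; in the browser it
// would wrap fetch() or navigator.sendBeacon("/api/active-time", ...).
function makeFlusher(send: (seconds: number) => void) {
  let buffered = 0;
  return {
    add(seconds: number) { buffered += seconds; },
    flush() {
      if (buffered > 0) {
        send(buffered);
        buffered = 0;
      }
    },
  };
}
```

Wired up in the browser, this would be `setInterval(flusher.flush, 60_000)` plus a `pagehide` listener for the unload case.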

How this ties into earning

Certain activities grant bonus minutes (capped per day):

  • Complete a project: +15 min
  • Complete an assessment: +10 min
  • Complete an exploration step: +10 min
  • Save a Maker creation: +10 min
  • Add a manual idea: +2 min

Parents set the base limit (default 45 min) and the max earnable (default 60 min) independently, both tunable from 0 to 3 hours.
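The resulting daily allowance is simple arithmetic: base minutes plus earned bonuses, with earnings clamped to the parent-set cap. A minimal sketch (function name illustrative, defaults from the post):

```typescript
// Daily allowance = parent-set base + completion bonuses, with bonuses
// clamped to the parent-set maximum earnable (defaults: 45 and 60 min).
function allowedMinutesToday(
  baseLimit: number,    // parent-set base, 0–180
  earnedToday: number,  // sum of completion bonuses today
  maxEarnable: number   // parent-set cap on earnings, 0–180
): number {
  const earned = Math.min(Math.max(earnedToday, 0), maxEarnable);
  return baseLimit + earned;
}
```

For example, a kid who earned 80 bonus minutes against the defaults gets `allowedMinutesToday(45, 80, 60)` = 105 minutes, not 125: the cap binds.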


Mechanism 5 (bonus): The Portfolio Is the Moat

Instead of building a points/tokens/crypto system to give kids "something real" they accumulate over years, we built a Legacy Portfolio: a publicly shareable (with consent), printable record of every strength, creation, exploration, idea, and completed project. Years of use compound into a portfolio that:

  • Goes with the kid to college applications (verified extracurricular record)
  • Can't be lost, taken, or regulated away
  • Avoids the securities/COPPA/banking regulatory hellscape that a token system would invite
  • Is, ironically, more valuable to kids than tokens would be

What we're not claiming

We're not claiming to have invented:

  • AI chatbots for kids
  • Strength assessments
  • Career exploration tools
  • Screen time limits
  • Kids' personalized learning

All of these exist. What we think is novel is the specific combination and the specific loops we've built between them — the AI Insight auto-fill, the cross-tool Exploration orchestration, the longitudinal profile as shared context, the active-input heartbeat coupled with completion-earned time.


License

This post is released under CC BY 4.0. You're free to adapt, build on, or critique any of these ideas, with attribution.

If you're building something related, we'd love to hear about it.
