Back after 10 years. The problem never went away.

Nobody built this. So we did.

129 questions. 8 developmental domains. Ages 0 to 36 months. Every question weighted by clinical evidence, every result traceable to the exact logic that produced it. Open-source, runs locally, and will never send your child's data anywhere.

npm package · npm install @mychild/engine · Apache-2.0 (code) / CC BY-SA 4.0 (data)

The problem

The CDC checklists are free, but they're just checkboxes: no logic, no scoring, no follow-up. Validated instruments like the ASQ-3 and M-CHAT-R/F work well, but they're copyrighted and expensive. There is nothing in between.

MyChild Engine sits in that gap. It tracks milestones longitudinally, weights each observation by clinical evidence, and tells you exactly why it flagged something. It doesn't diagnose. It helps parents and caregivers notice patterns early enough to have the right conversation with a doctor.

Under the hood

Every question gets scored against the child's corrected age. Results aggregate across domains with evidence sufficiency gates, and every single decision is traceable. No black boxes.

129 weighted questions

Not all observations are equal. Each question carries a weight: Low, Medium, High, or Red-Flag. Low-weight items need corroboration before they matter. Red-flags skip the line.
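As a sketch of how tiered weights might feed a score, the tier names come from the description above, but the point values, types, and function names here are invented for illustration, not taken from the engine's actual code:

```typescript
// Illustrative only: the four weight tiers named above, with made-up points.
type Weight = "low" | "medium" | "high" | "red-flag";

const WEIGHT_POINTS: Record<Weight, number> = {
  low: 1,         // needs corroboration before it moves the needle
  medium: 2,
  high: 4,
  "red-flag": 10, // also escalates immediately, outside normal aggregation
};

interface Observation {
  questionId: string;
  weight: Weight;
  missed: boolean; // true if the milestone was not observed
}

// Sum points over missed milestones; flag separately if any red-flag missed.
function concernScore(
  observations: Observation[]
): { score: number; redFlag: boolean } {
  const missed = observations.filter((o) => o.missed);
  return {
    score: missed.reduce((sum, o) => sum + WEIGHT_POINTS[o.weight], 0),
    redFlag: missed.some((o) => o.weight === "red-flag"),
  };
}
```

The separate `redFlag` channel is what "skip the line" means here: a red-flag miss surfaces regardless of how the other points add up.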

8 independent domains

Gross Motor, Fine Motor, Receptive Language, Expressive Language, Social-Emotional, Cognitive, Self-Help, Vision/Hearing. Each one scored on its own. A delay in one doesn't contaminate the others.
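One way to picture that isolation, as a sketch rather than the engine's real data model: observations carry a domain tag and aggregate per domain only, so a missed Gross Motor milestone cannot inflate the Language score.

```typescript
// Hypothetical aggregation: totals are keyed by domain, never mixed.
function perDomainScores(
  observations: { domain: string; points: number }[]
): Map<string, number> {
  const totals = new Map<string, number>();
  for (const o of observations) {
    totals.set(o.domain, (totals.get(o.domain) ?? 0) + o.points);
  }
  return totals;
}
```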

10 age-appropriate bands

Birth through 36 months, plus universal red flags that apply at any age. The engine only asks what's developmentally relevant right now. No premature questions, no wasted anxiety.
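A minimal sketch of band filtering, assuming each question declares a month window and universal red flags use an all-ages window (the field names are invented for this example):

```typescript
interface BankQuestion {
  id: string;
  minMonths: number; // inclusive lower bound of the band
  maxMonths: number; // exclusive upper bound; Infinity = universal red flag
}

// Only questions whose band covers the (corrected) age are due right now.
function bandFilter(bank: BankQuestion[], ageMonths: number): BankQuestion[] {
  return bank.filter((q) => ageMonths >= q.minMonths && ageMonths < q.maxMonths);
}
```

A 5-month-old would get the 4-to-7-month questions plus any universal red flags, and nothing from the 10-month band.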

False alarm protection

The engine won't flag "high concern" from a single observation. It requires at least 2 independent data points before escalating. One bad day shouldn't send a parent into a spiral.
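The corroboration gate could look something like this sketch, where "independent" is modeled as distinct questions (how the engine actually defines independence is up to the real implementation):

```typescript
interface DataPoint {
  questionId: string;
  observedAt: string; // ISO date
}

// The same question answered twice on a single bad day never satisfies
// the gate on its own; it takes at least two distinct signals.
function canEscalate(points: DataPoint[], minIndependent = 2): boolean {
  const distinctQuestions = new Set(points.map((p) => p.questionId));
  return distinctQuestions.size >= minIndependent;
}
```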

Preterm correction built in

If your child was born before 37 weeks, the engine automatically adjusts expectations until 24 months. Parents of preemies don't need to do mental math to figure out what's actually on track.
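The arithmetic here is simple but easy to get wrong by hand. A hedged sketch of corrected-age logic, using the 37-week threshold and 24-month cutoff stated above (the constants and function name are illustrative, not the engine's actual code):

```typescript
const TERM_WEEKS = 40;
const PRETERM_THRESHOLD_WEEKS = 37;
const CORRECTION_CUTOFF_MONTHS = 24;
const WEEKS_PER_MONTH = 4.345; // average weeks in a month

function correctedAgeMonths(
  chronologicalMonths: number,
  gestationalWeeks: number
): number {
  // Full-term children, or children past the cutoff, use chronological age.
  if (
    gestationalWeeks >= PRETERM_THRESHOLD_WEEKS ||
    chronologicalMonths >= CORRECTION_CUTOFF_MONTHS
  ) {
    return chronologicalMonths;
  }
  // Subtract the weeks of prematurity, expressed in months.
  const prematurityMonths = (TERM_WEEKS - gestationalWeeks) / WEEKS_PER_MONTH;
  return Math.max(0, chronologicalMonths - prematurityMonths);
}
```

Under these assumptions, a baby born at 32 weeks who is 10 months old chronologically would be evaluated against expectations for roughly an 8-month-old.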

Show your work

Every result comes with a plain-English explanation of exactly what drove it. Parents deserve to know why. Clinicians need to verify the reasoning. Both get it.
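A sketch of what a traceable result might carry; the shape below is invented for illustration, and the actual result type lives in the API docs:

```typescript
// Every conclusion bundles the evidence that produced it.
interface TraceableResult {
  domain: string;
  level: "on-track" | "monitor" | "discuss-with-doctor";
  explanation: string[]; // plain-English reasons, one per contributing rule
}

function explain(result: TraceableResult): string {
  return `${result.domain}: ${result.level}\n` +
    result.explanation.map((line) => `  - ${line}`).join("\n");
}
```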

Rule simulator

Run synthetic child timelines against different threshold configurations. Change a weight, adjust a grace period, see exactly which alerts shift and by how much. We built this because no screening tool, open or proprietary, lets you actually test the rules before deploying them.
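In spirit, the simulator is a diff over alert sets. A toy sketch with invented names (the real simulator works on full synthetic timelines, not bare score arrays):

```typescript
interface ThresholdConfig {
  highConcern: number; // score at or above which an alert fires
}

// Which timeline steps would fire a high-concern alert under this config?
function alertIndices(scores: number[], cfg: ThresholdConfig): number[] {
  return scores.flatMap((s, i) => (s >= cfg.highConcern ? [i] : []));
}

// Compare two configs over the same synthetic timeline.
function diffAlerts(
  scores: number[],
  before: ThresholdConfig,
  after: ThresholdConfig
): { added: number[] } {
  const old = new Set(alertIndices(scores, before));
  return { added: alertIndices(scores, after).filter((i) => !old.has(i)) };
}
```

Lowering a threshold and seeing exactly which steps newly alert is the point: the rule change is measured before it ever reaches a parent.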

Google made a documentary about this

Back when MyChild App was live, Google filmed the story of how a kid with dyspraxia built a screening tool used in 100+ countries. This is where the engine came from.

Five minutes to your first screening

# Install
npm install @mychild/engine

# Use
import { evaluate, computeChildAge, getDueQuestions } from '@mychild/engine';

// Get age-appropriate questions for a 7-month-old
const questions = getDueQuestions({ dob: new Date('2025-09-01') }, []);
// Returns 12 questions across Motor, Language, Cognitive, Social

Full API docs, architecture walkthrough, and integration guide at /docs. Package page on npm.

Please read this

This is not a diagnostic tool. It tracks developmental milestones and surfaces patterns. It cannot and does not diagnose any medical condition, developmental disorder, or disability. If something concerns you about your child's development, talk to your pediatrician. That conversation is the whole point.

All scoring thresholds carry a "Ruleset v0.1 (hypothesis)" label because they haven't been validated through clinical trials yet. The question bank draws from publicly available CDC milestone checklists, not copyrighted instruments like ASQ-3, M-CHAT-R/F, or Denver. Everything runs locally on your device. No child data leaves your machine.