Engine Modules · Architecture

ENGINE.
MODULES.

THE CURRICULUM ENGINE IS COMPOSED OF FOUR PRODUCTION-HARDENED MODULES. EACH IS INDEPENDENTLY SCALABLE AND EXPOSED DIRECTLY VIA API.

CORE PIPELINE · 01

AI Orchestrator

4-gate quality pipeline on every generation

The central AI brain of the engine. Every curriculum generation passes through a sequential 4-gate validation pipeline before any output is written. The Planner scopes the architecture, the Logic Filter enforces prerequisite ordering, the Structural Critic validates module-lesson density, and the AI Critic runs hallucination detection.

Planner → Logic Filter → Structural Critic → AI Critic
No curriculum ships without passing all 4 gates
Zero hallucination guarantee on structured output
Asynchronous orchestration — non-blocking per tenant
Full audit log of every gate decision available via API
Configurable depth and complexity per request
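The 4-gate flow above can be sketched as a short-circuiting sequence of validators. This is a minimal illustration, not the production implementation: the gate names come from this page, but the `Draft` shape and the sample checks inside each gate are assumptions.

```typescript
// Illustrative sketch of the sequential 4-gate validation pipeline.
// Gate names are from the docs; interfaces and checks are assumed.

interface Draft {
  modules: { title: string; lessons: string[]; prerequisites: string[] }[];
}

interface GateResult {
  gate: string;
  passed: boolean;
}

type Gate = (draft: Draft) => GateResult;

// Planner: the curriculum must have a non-empty architecture.
const planner: Gate = (d) => ({
  gate: "Planner",
  passed: d.modules.length > 0,
});

// Logic Filter: a module may only depend on modules that precede it.
const logicFilter: Gate = (d) => ({
  gate: "Logic Filter",
  passed: d.modules.every((m, i) =>
    m.prerequisites.every((p) =>
      d.modules.slice(0, i).some((prev) => prev.title === p),
    ),
  ),
});

// Structural Critic: enforce the documented 4-5 lessons-per-module density.
const structuralCritic: Gate = (d) => ({
  gate: "Structural Critic",
  passed: d.modules.every(
    (m) => m.lessons.length >= 4 && m.lessons.length <= 5,
  ),
});

// AI Critic: stands in for model-based hallucination detection.
const aiCritic: Gate = (_d) => ({ gate: "AI Critic", passed: true });

// Run gates in order; stop at the first failure, logging every decision.
function runPipeline(draft: Draft): GateResult[] {
  const log: GateResult[] = [];
  for (const gate of [planner, logicFilter, structuralCritic, aiCritic]) {
    const result = gate(draft);
    log.push(result);
    if (!result.passed) break; // nothing ships without all 4 gates
  }
  return log;
}
```

Returning the full log rather than a boolean mirrors the audit-log guarantee: every gate decision is recorded, not just the final verdict.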
FIDELITY ENGINE · 02

Depth-Mapped Generator

4–5 lessons per module, scaled to your timeframe

Module and lesson count scales automatically based on the requested timeframe and complexity level. Each lesson includes a title, objective, estimated duration, difficulty calibration, and prerequisite dependencies. The generator never truncates — depth is always honored.

6-month curriculum generates 9–12 modules, ~47 lessons
4-week sprint generates 3–4 modules, ~16 lessons
Difficulty calibrated per lesson: beginner → expert
Estimated duration per lesson and module-level rollups
Topological ordering of lessons by knowledge dependency
Full structured JSON output — no prose summaries
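The scaling behaviour above can be sketched as a simple sizing function. The clamp bounds and the 4-5 lessons-per-module range are taken from this page; the exact scaling formula (`weeks / 2.6`) and the complexity mapping are illustrative assumptions, not the engine's actual heuristic.

```typescript
// Illustrative sketch of depth-mapped sizing. Only the documented ranges
// (9-12 modules for 6 months, 3-4 for a 4-week sprint, 4-5 lessons per
// module) come from the spec; the formula itself is an assumption.

interface CurriculumShape {
  moduleCount: number;
  lessonsPerModule: number;
  totalLessons: number;
}

function shapeFor(
  weeks: number,
  complexity: "beginner" | "intermediate" | "expert",
): CurriculumShape {
  // Roughly one module per ~2.5 weeks, clamped to the documented bounds.
  const moduleCount = Math.min(12, Math.max(3, Math.round(weeks / 2.6)));
  // Higher complexity packs the denser end of the 4-5 lesson range.
  const lessonsPerModule = complexity === "beginner" ? 4 : 5;
  return {
    moduleCount,
    lessonsPerModule,
    totalLessons: moduleCount * lessonsPerModule,
  };
}
```

A 26-week (6-month) request lands in the 9-12 module band, a 4-week sprint in the 3-4 band, consistent with the figures above.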
INFRASTRUCTURE · 03

BullMQ Async Queue

ACK in <200ms, generation in the background

Your POST request returns a job ID acknowledgement in under 200ms. The actual generation runs asynchronously via BullMQ workers, completely decoupled from your request thread. Workers are concurrency-controlled per API key to protect shared resources under load.

202 Accepted + job_id in <200ms guaranteed
BullMQ workers run generation fully async
Concurrency control: max workers per key enforced
Dead-letter queue + automatic retry on failure
Real-time progress polling via status endpoint
Webhook fires on completion — no polling required
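The ACK-then-process pattern can be sketched with an in-memory job store. In production this role is played by BullMQ workers backed by Redis; the map-based store, job-id format, and `submit`/`status` functions below are simplified stand-ins, not the engine's API.

```typescript
// In-memory sketch of the ACK-then-process pattern. Production uses
// BullMQ workers on Redis; everything here is a simplified stand-in.

type JobStatus = "queued" | "active" | "completed" | "failed";

interface Job {
  id: string;
  status: JobStatus;
  result?: unknown;
  done: Promise<void>; // settles when background work finishes
}

const jobs = new Map<string, Job>();
let nextId = 0;

// Returns a job id immediately (the "202 Accepted" path); the expensive
// generation runs detached from the caller's request thread.
function submit(generate: () => Promise<unknown>): string {
  const id = `job_${++nextId}`;
  const job: Job = { id, status: "queued", done: Promise.resolve() };
  job.done = (async () => {
    job.status = "active";
    try {
      job.result = await generate();
      job.status = "completed"; // the webhook would fire here
    } catch {
      job.status = "failed"; // dead-letter / retry in the real queue
    }
  })();
  jobs.set(id, job);
  return id;
}

// Status-endpoint analogue for real-time progress polling.
function status(id: string): JobStatus | undefined {
  return jobs.get(id)?.status;
}
```

Note that `submit` never awaits `generate`: the caller gets the id back synchronously while the work continues on the event loop, which is the same decoupling BullMQ provides across processes.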
CACHE · 04

Semantic Cache Layer

Identical requests served in <50ms — zero cost

Requests that are semantically equivalent — even if phrased differently — hit the Redis cache instead of triggering new AI generation. Cache keys are derived from a normalized semantic vector of your parameters. Zero tokens consumed, zero generation cost on cache hits.

Semantic vector normalization — not just exact string match
<50ms response time on cache hit
Zero AI tokens consumed on cache hit
7-day TTL with configurable invalidation via API
Cache hit/miss logged per API key for cost tracking
Per-tenant cache namespacing — no cross-contamination
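The key-derivation layers can be sketched as follows. The documented system normalizes a *semantic vector* of the parameters (embedding-based equivalence); this stand-in shows only the deterministic normalization and per-tenant namespacing layers that an embedding step would sit on top of, and the `RequestParams` shape is an assumption.

```typescript
import { createHash } from "node:crypto";

// Sketch of cache-key derivation: deterministic normalization plus
// per-tenant namespacing. The real system adds semantic (embedding)
// normalization on top, which is not reproduced here.

interface RequestParams {
  topic: string;
  timeframe: string;
  complexity: string;
}

function cacheKey(tenantId: string, params: RequestParams): string {
  // Normalize: sort keys, trim, lowercase, collapse internal whitespace,
  // so trivially different phrasings of the same request collide.
  const normalized = Object.keys(params)
    .sort()
    .map((k) => {
      const v = String(params[k as keyof RequestParams])
        .trim()
        .toLowerCase()
        .replace(/\s+/g, " ");
      return `${k}=${v}`;
    })
    .join("&");
  const digest = createHash("sha256").update(normalized).digest("hex");
  // Tenant id in the key prefix prevents cross-tenant cache hits.
  return `cache:${tenantId}:${digest}`;
}
```

In Redis, the digest-based key would be stored with the 7-day TTL mentioned above, so invalidation is just a keyed delete.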
Generation Pipeline
01

POST Request

topic, timeframe, complexity

02

Queue ACK

job_id returned in <200ms

03

Cache Check

Redis hit? Return in <50ms

04

4-Gate Pipeline

Planner → Logic Filter → Structural Critic → AI Critic

05

Webhook Fires

full JSON curriculum delivered
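From the client side, steps 02-05 reduce to: take the fast ACK, then wait for the result. A webhook (step 05) makes waiting passive, but the status endpoint supports polling as a fallback. The sketch below shows a generic polling loop; the `StatusResponse` shape and the `getStatus` signature are assumptions, injected so the loop stays transport-agnostic.

```typescript
// Client-side sketch of the poll-until-done fallback. Endpoint shapes
// are assumptions; with webhooks enabled, this loop is unnecessary.

interface StatusResponse {
  status: "queued" | "active" | "completed" | "failed";
  curriculum?: unknown; // full structured JSON on completion
}

async function pollUntilDone(
  getStatus: (jobId: string) => Promise<StatusResponse>,
  jobId: string,
  { intervalMs = 500, maxAttempts = 60 } = {},
): Promise<StatusResponse> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await getStatus(jobId);
    // Terminal states end the loop; queued/active keep polling.
    if (res.status === "completed" || res.status === "failed") return res;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`job ${jobId} did not finish within ${maxAttempts} polls`);
}
```

Injecting `getStatus` (rather than hard-coding a fetch call) also makes the loop trivial to exercise against a stubbed status endpoint.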