Engine Performance · Measured Output
ENGINE.
IMPACT.
REAL NUMBERS FROM THE PRODUCTION ENGINE. EVERY METRIC BELOW IS MEASURED FROM ACTUAL API PERFORMANCE, NOT MARKETING PROJECTIONS.
| Value | Metric | Notes |
| --- | --- | --- |
| <10s | Generation Time | Standard curriculum. Any domain. |
| <200ms | Queue Acknowledgement | POST → 202 Accepted. Always. (sketch below) |
| <50ms | Cache Hit Response | Redis. Zero tokens consumed. |
| 4 Gates | Validation Layers | Every job. No exceptions. |
| 99.9% | API Uptime Target | Production SLA. |
| $0 | Cost on Cache Hit | Semantic vector match. Free. (sketch below) |
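
The queue-acknowledgement figure describes an asynchronous flow: the POST returns 202 Accepted with a job reference immediately, and generation continues in the background. The sketch below shows what a client call could look like; the endpoint path, request fields, and response shape are illustrative assumptions, not the documented API.

```typescript
// Hypothetical client call: the endpoint, payload fields, and response shape are assumptions.
interface QueuedJob {
  jobId: string;
  status: "queued";
}

async function submitCurriculumJob(baseUrl: string, apiKey: string): Promise<QueuedJob> {
  const res = await fetch(`${baseUrl}/v1/curricula`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      topic: "Advanced Node.js Microservices", // matches the sample run below
      timeframe: "6 months",
    }),
  });

  // The engine acknowledges immediately; generation happens in the background.
  if (res.status !== 202) {
    throw new Error(`Expected 202 Accepted, got ${res.status}`);
  }
  return (await res.json()) as QueuedJob;
}
```

The sample runs below show end-to-end generation times by topic and timeframe.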
| Topic | Timeframe | Modules | Lessons | Gen Time |
| --- | --- | --- | --- | --- |
| Advanced Node.js Microservices | 6 months | 10 | 47 | 7.8s |
| Enterprise Sales Enablement | 3 months | 6 | 28 | 4.2s |
| Healthcare Protocol Compliance | 4 weeks | 4 | 18 | 3.1s |
| Data Science Fundamentals | 6 months | 11 | 52 | 8.3s |
| Financial Risk Management | 2 months | 5 | 23 | 3.9s |
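
The <50ms and $0 cache-hit figures rest on semantic matching: a repeat or near-repeat request is answered from Redis instead of being re-generated, so no model tokens are consumed. Below is a minimal sketch of one way such a lookup could work, using a linear scan and application-side cosine similarity; the key names, the 0.92 threshold, and the embedding callback are assumptions, and a production vector index (e.g. RediSearch KNN) would replace the scan.

```typescript
import { createClient } from "redis";

type Embedder = (text: string) => Promise<number[]>;

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Returns cached curriculum JSON on a close-enough semantic match, else null.
// Key names and the default threshold are illustrative assumptions.
export async function semanticCacheLookup(
  redisUrl: string,
  prompt: string,
  embed: Embedder,
  threshold = 0.92,
): Promise<string | null> {
  const client = createClient({ url: redisUrl });
  await client.connect();
  try {
    const queryVec = await embed(prompt);
    const keys = await client.sMembers("curricula:index");
    for (const key of keys) {
      const raw = await client.get(key);
      if (!raw) continue;
      const entry = JSON.parse(raw) as { embedding: number[]; curriculum: unknown };
      if (cosine(queryVec, entry.embedding) >= threshold) {
        // Cache hit: no tokens consumed, response served straight from Redis.
        return JSON.stringify(entry.curriculum);
      }
    }
    return null; // Cache miss: caller falls through to full generation.
  } finally {
    await client.quit();
  }
}
```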
| Dimension | Manual Authoring | Curriculum Engine |
| --- | --- | --- |
| Content Architecture Time | Days to weeks per course | <10 seconds via API |
| Prerequisite Ordering | Manual, error-prone | Topological sort — automatic (sketch below) |
| Module Depth Calibration | Subjective, inconsistent | Depth-mapped by timeframe + complexity |
| Output Format | Word docs, slides, PDFs | Structured JSON — production-ready |
| Hallucination Risk | High — no validation layer | 4-gate AI pipeline on every job |
| Scalability | Bottlenecked by headcount | Unlimited concurrent jobs via BullMQ |
| Cost on Repeat Requests | Full authoring cost each time | $0 — Redis semantic cache |
| Integration Surface | None — manual export | REST API + webhook delivery |
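
Automatic prerequisite ordering is a topological sort over the module dependency graph: a module is emitted only after everything it depends on has been placed. Below is a small sketch using Kahn's algorithm; the `ModuleNode` shape is an assumed input format, not the engine's actual schema.

```typescript
interface ModuleNode {
  id: string;
  prerequisites: string[]; // ids of modules that must come first
}

// Kahn's algorithm: repeatedly emit modules whose prerequisites are already placed.
function orderByPrerequisites(modules: ModuleNode[]): ModuleNode[] {
  const byId = new Map(modules.map((m) => [m.id, m]));
  const indegree = new Map(modules.map((m) => [m.id, m.prerequisites.length]));
  const dependents = new Map<string, string[]>();

  for (const m of modules) {
    for (const p of m.prerequisites) {
      dependents.set(p, [...(dependents.get(p) ?? []), m.id]);
    }
  }

  // Start with modules that have no prerequisites at all.
  const queue = modules.filter((m) => m.prerequisites.length === 0).map((m) => m.id);
  const ordered: ModuleNode[] = [];

  while (queue.length > 0) {
    const id = queue.shift()!;
    ordered.push(byId.get(id)!);
    for (const dep of dependents.get(id) ?? []) {
      const left = indegree.get(dep)! - 1;
      indegree.set(dep, left);
      if (left === 0) queue.push(dep);
    }
  }

  if (ordered.length !== modules.length) {
    throw new Error("Cyclic or missing prerequisites: no valid ordering exists");
  }
  return ordered;
}
```

If the declared prerequisites contain a cycle, no valid ordering exists, which is why the sketch throws instead of returning a partial order.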
The Core Trade-off
Why teams choose the engine
Manual curriculum authoring scales linearly with headcount. One instructional designer produces one curriculum per week. To produce 50 curricula, you need 50 person-weeks — or 50 people.
The engine generates 50 curricula in under 10 minutes. All structured JSON. All prerequisite-ordered. All validated through the 4-gate pipeline. Zero additional headcount.
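
That throughput follows from the queueing model: requests are enqueued rather than executed in-request, so fan-out is just a batch of job submissions drained by however many workers are running. Below is a sketch of what that fan-out could look like with BullMQ; the queue name, payload shape, and Redis connection details are assumptions.

```typescript
// Sketch: fan out 50 generation jobs onto a BullMQ queue.
import { Queue } from "bullmq";

interface CurriculumRequest {
  topic: string;
  timeframe: string;
}

async function enqueueBatch(requests: CurriculumRequest[]): Promise<void> {
  const queue = new Queue("curriculum-generation", {
    connection: { host: "localhost", port: 6379 },
  });

  // addBulk enqueues every request in one round trip; workers drain them concurrently.
  await queue.addBulk(
    requests.map((req) => ({ name: "generate", data: req })),
  );

  await queue.close();
}

// Example: 50 topics become 50 queued jobs, each acknowledged to its caller immediately.
const topics = Array.from({ length: 50 }, (_, i) => ({
  topic: `Course ${i + 1}`,
  timeframe: "3 months",
}));
enqueueBatch(topics).catch(console.error);
```

On the consuming side, a matching BullMQ `Worker` with a `concurrency` setting determines how many of those jobs run in parallel.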
