# Methodology
This page documents exactly how TypoKit benchmarks are collected, how results are stored and compared, and how you can reproduce them on your own hardware.
## CI Runner Specs

Published results on the docs site are generated by GitHub Actions runners on `ubuntu-latest`. Typical specs:
| Property | Value |
|---|---|
| OS | Ubuntu 22.04 (x86_64) |
| CPU | 4-core AMD EPYC (GitHub-hosted) |
| RAM | 16 GB |
| Node.js | 24.x |
| Bombardier | v2.0.2 |
The runner collects exact hardware and runtime information automatically (OS, CPU model, core count, RAM, Node/Bun/Deno/Rust versions, bombardier version) and writes it into each results file.
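The collection step above can be sketched with Node's built-in `os` module. The `SystemInfo` shape here is illustrative, not TypoKit's actual type:

```typescript
import * as os from "node:os";

// Illustrative shape only — TypoKit's results schema may name these fields differently.
interface SystemInfo {
  os: string;
  cpuModel: string;
  cores: number;
  ramGb: number;
  nodeVersion: string;
}

function collectSystemInfo(): SystemInfo {
  const cpus = os.cpus();
  return {
    os: `${os.type()} ${os.release()} (${os.arch()})`, // e.g. "Linux 6.5.0 (x64)"
    cpuModel: cpus[0]?.model ?? "unknown",
    cores: cpus.length,
    ramGb: Math.round(os.totalmem() / 1024 ** 3),
    nodeVersion: process.version,
  };
}
```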
## Bombardier Settings

All HTTP benchmarks use bombardier, a high-performance HTTP load-testing tool written in Go. It is auto-downloaded and cached on first run.
### Default Settings (Full Suite)

| Setting | Value |
|---|---|
| Connections | 100 concurrent |
| Duration | 30 seconds per scenario |
| Warmup | 5 seconds |
| Runs | 3 (results averaged) |
### CI Settings (PR Regression Check)

| Setting | Value |
|---|---|
| Connections | 50 concurrent |
| Duration | 10 seconds per scenario |
| Warmup | 5 seconds |
| Runs | 1 |
All apps use an in-memory SQLite database with identical schema and seed data. Bombardier outputs JSON with latency histograms (`-o json -l`), which is parsed into structured `BenchmarkResult` records.
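A sketch of that parsing step. Note that the `BombardierJson` shape below is an assumption about what `-o json -l` emits, not bombardier's documented schema — verify the field paths against real output before relying on them:

```typescript
// ASSUMED shape of bombardier's JSON output — check against the real tool.
interface BombardierJson {
  result: {
    rps: { mean: number };
    latencies: { mean: number; max: number; percentiles: Record<string, number> };
  };
}

// Maps one load-test run onto the latency fields used in the Results Format
// section (reqPerSec, latencyAvg, latencyP50, …).
function extractStats(raw: BombardierJson) {
  const { rps, latencies } = raw.result;
  return {
    reqPerSec: rps.mean,
    latencyAvg: latencies.mean,
    latencyP50: latencies.percentiles["50"],
    latencyP99: latencies.percentiles["99"],
    latencyMax: latencies.max,
  };
}
```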
## Scenarios

Each benchmark app is tested across multiple scenarios. Every scenario hits a specific HTTP endpoint:
### Core Scenarios (All Apps)

| Scenario | Method | Path | What It Measures |
|---|---|---|---|
| json | GET | /json | Pure routing + JSON serialization overhead. Returns a fixed JSON response. |
| validate | POST | /validate | Validation pipeline cost. Parses and validates a JSON request body against the schema. |
| db | GET | /db/1 | Realistic end-to-end latency. Reads a row from SQLite by ID. |
| middleware | GET | /middleware | Middleware dispatch overhead. Passes through a 5-layer middleware chain. |
| startup | GET | /startup | Cold start to first response. Measures framework initialization time. |
### TypoKit-Only Scenarios (Validation Overhead Isolation)

These two additional scenarios run only against TypoKit apps to isolate where validation overhead comes from:
| Scenario | Method | Path | What It Measures |
|---|---|---|---|
| validate-passthrough | POST | /validate-passthrough | No validators in the route table — body is passed through untouched. Measures the framework’s base overhead for POST handling. |
| validate-handwritten | POST | /validate-handwritten | Validation is done inline with if/typeof checks, bypassing TypoKit’s validator framework entirely. Measures the framework dispatch overhead vs. native checks. |
By comparing the three validate scenarios, you can see exactly how much overhead comes from: (1) TypoKit’s POST routing alone (`validate-passthrough`), (2) hand-written validation (`validate-handwritten`), and (3) the compiled validator framework (`validate`).
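As a worked example with made-up numbers: if passthrough sustains 100,000 req/s and validate sustains 80,000 req/s, per-request cost rose by a factor of 100000 / 80000 = 1.25, i.e. validation added roughly 25% overhead. A sketch of that arithmetic (illustrative only, not TypoKit's actual analysis code):

```typescript
// Per-request overhead of a scenario relative to the passthrough baseline,
// in percent, approximating per-request cost as 1 / throughput.
function overheadVsPassthroughPct(passthroughRps: number, scenarioRps: number): number {
  const baseCost = 1 / passthroughRps;     // seconds per request, baseline
  const scenarioCost = 1 / scenarioRps;    // seconds per request, with validation
  return ((scenarioCost - baseCost) / baseCost) * 100;
}
```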
### Non-HTTP Scenarios

| Scenario | What It Measures |
|---|---|
| serialization | Compares three JSON serializers in a tight loop (1M iterations): `JSON.stringify`, `fast-json-stringify` (schema-compiled), and TypoKit’s build-time compiled serializer. Results are reported as operations per second with nanosecond-precision timing. |
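A minimal ops/sec harness in the spirit of that microbenchmark, using Node's nanosecond-precision `process.hrtime.bigint()` clock (the real runner's structure and iteration counts may differ):

```typescript
// Runs fn in a tight loop and reports operations per second.
function opsPerSec(fn: () => void, iterations: number): number {
  const start = process.hrtime.bigint();       // nanosecond-precision timestamp
  for (let i = 0; i < iterations; i++) fn();
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return iterations / (elapsedNs / 1e9);
}

const payload = { id: 1, name: "bench", tags: ["a", "b"] };
const stringifyOps = opsPerSec(() => JSON.stringify(payload), 100_000);
```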
## Framework List

The benchmark suite tests 25 apps across 4 categories:
### TypoKit (13 Variants)

TypoKit is tested across every supported platform × server adapter combination, plus the Rust/Axum implementation:
| App | Platform | Server Adapter |
|---|---|---|
| typokit-node-native | Node.js | @typokit/server-native |
| typokit-node-fastify | Node.js | @typokit/server-fastify |
| typokit-node-hono | Node.js | @typokit/server-hono |
| typokit-node-express | Node.js | @typokit/server-express |
| typokit-bun-native | Bun | @typokit/server-native |
| typokit-bun-fastify | Bun | @typokit/server-fastify |
| typokit-bun-hono | Bun | @typokit/server-hono |
| typokit-bun-express | Bun | @typokit/server-express |
| typokit-deno-native | Deno | @typokit/server-native |
| typokit-deno-fastify | Deno | @typokit/server-fastify |
| typokit-deno-hono | Deno | @typokit/server-hono |
| typokit-deno-express | Deno | @typokit/server-express |
| typokit-rust-axum | Rust | Axum 0.8 |
### Raw Baselines (4 Apps)

Zero-framework HTTP servers — the theoretical performance ceiling for each platform:
| App | Platform | Server |
|---|---|---|
| raw-node | Node.js | node:http |
| raw-bun | Bun | Bun.serve() |
| raw-deno | Deno | Deno.serve() |
| raw-axum | Rust | Axum 0.8 |
### Competitor Frameworks (8 Apps)

Popular frameworks tested on their recommended platform with their own validation:
| App | Framework | Version | Platform |
|---|---|---|---|
| competitor-express | Express | 5.1.0 | Node.js |
| competitor-fastify | Fastify | 5.3.3 | Node.js |
| competitor-hono | Hono | 4.12.3 | Node.js |
| competitor-koa | Koa | 2.15.0 | Node.js |
| competitor-elysia | Elysia | 1.2.0 | Bun |
| competitor-trpc | tRPC | 10.45.0 | Node.js |
| competitor-nestjs | NestJS | 10.0.0 | Node.js |
| competitor-h3 | H3 | 1.13.0 | Node.js |
Each competitor uses its own recommended validation approach (JSON Schema for Fastify, TypeBox for Elysia, Zod for tRPC, manual validation for others).
## How to Reproduce Locally

### Prerequisites

- Node.js ≥ 24 — nodejs.org
- pnpm 10.x — pnpm.io
- (Optional) Bun ≥ 1.0 for Bun benchmarks — bun.sh
- (Optional) Deno ≥ 2.0 for Deno benchmarks — deno.land
- (Optional) Rust/Cargo for Axum benchmarks — rustup.rs
- bombardier is auto-downloaded and cached on first run
Clone, install, and build:

```sh
git clone https://github.com/AltScore/typokit.git
cd typokit
pnpm install
pnpm nx run-many -t build
```

### Run Benchmarks
```sh
# Full suite (default: 100 connections, 30s duration, 5s warmup, 3 runs)
pnpm nx bench benchmarks

# Quick CI mode (50 connections, 10s duration, 1 run)
pnpm nx bench-ci benchmarks

# Baselines only
pnpm nx bench-baseline benchmarks

# Custom settings
pnpm nx bench benchmarks -- --connections 100 --duration 30s --warmup 5s --runs 3

# Filter to specific apps
pnpm nx bench benchmarks -- --filter "typokit-*"

# Run only one scenario
pnpm nx bench benchmarks -- --scenario json
```

Expected runtime: the full suite with default settings takes approximately 45–60 minutes (25 apps × 7 HTTP scenarios × 3 runs × ~35 seconds each, plus startup measurements and the serialization microbenchmark).
### View Results

```sh
# Print system info as JSON
pnpm nx bench-info benchmarks

# Print step-by-step reproduction instructions
pnpm nx bench-reproduce benchmarks
```

Results are written to `packages/benchmarks/results/`:

- `latest.json` — cumulative latest results (updated each run via merge)
- `<timestamp>.json` — individual run snapshots with full data
## Nx Affected Scoping

The benchmark suite uses Nx’s affected project detection to avoid unnecessary work.
### How It Works

- On every PR and push, CI runs `pnpm nx show projects --affected` to check whether the `benchmarks` package (or any of its dependencies) was modified.
- If `benchmarks` is not affected, the entire benchmark workflow is skipped — no time wasted on unrelated changes.
- If `benchmarks` is affected, benchmarks run with the appropriate mode (PR or full suite).
### Partial Runs and Cumulative Merge

- You can run a subset of apps using `--filter` (e.g., `--filter "typokit-node-*"` to benchmark only the Node.js variants).
- Results are cumulatively merged into `latest.json` using the composite key `framework|platform|server|scenario`.
- New results overwrite matching entries; all other entries are preserved.
- This means you can run different subsets at different times, and `latest.json` always contains the most recent result for each combination.
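The cumulative merge can be sketched as follows; the `Entry` type is trimmed to the fields needed for the composite key plus one metric:

```typescript
type Entry = {
  framework: string;
  platform: string;
  server: string;
  scenario: string;
  reqPerSec: number;
};

// Composite key: framework|platform|server|scenario
const keyOf = (r: Entry) =>
  [r.framework, r.platform, r.server, r.scenario].join("|");

// New results overwrite matching entries; all other entries are preserved.
function mergeResults(existing: Entry[], fresh: Entry[]): Entry[] {
  const byKey = new Map(existing.map((r) => [keyOf(r), r] as const));
  for (const r of fresh) byKey.set(keyOf(r), r);
  return [...byKey.values()];
}
```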
### When Benchmarks Skip

Benchmarks are skipped when the only changes are in packages unrelated to `@typokit/benchmarks` or its upstream dependencies (`@typokit/core`, `@typokit/types`, `@typokit/errors`, server adapters, platform bindings, etc.). Changes to docs-only packages, the CLI, or client packages will not trigger benchmark runs.
## CI Regression Detection

### PR Workflow (`bench-pr`)

On every pull request that affects the benchmarks package:
- Build all packages
- Run a fast subset of benchmarks (CI mode: 50 connections, 10s, 1 run), filtered to 5 key apps
- Compare results against `packages/benchmarks/baseline.json` (the committed reference)
- Comment on the PR with a markdown table showing req/s changes for each framework × scenario
- Fail if any TypoKit combination has regressed by more than 10% in req/s
The comparison uses the formula:

```
changePct = ((current.reqPerSec - baseline.reqPerSec) / baseline.reqPerSec) × 100
```

Only TypoKit apps (names starting with `typokit-`) trigger failure on regression. Competitor framework fluctuations are reported but don’t block the PR.
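A sketch of the regression gate: the changePct formula, the 10% threshold, and the `typokit-` prefix rule come from the description above; everything else is illustrative:

```typescript
interface Measurement {
  framework: string;
  reqPerSec: number;
}

// Percentage change in throughput relative to the committed baseline.
function changePct(current: Measurement, baseline: Measurement): number {
  return ((current.reqPerSec - baseline.reqPerSec) / baseline.reqPerSec) * 100;
}

// Only TypoKit apps block the PR; competitor fluctuations are report-only.
function blocksPr(current: Measurement, baseline: Measurement): boolean {
  return (
    current.framework.startsWith("typokit-") &&
    changePct(current, baseline) < -10
  );
}
```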
PR comments use `## Benchmark Results vs Baseline` as a marker to find and update existing comments rather than creating duplicates.
### Main Branch Workflow (`bench-main`)

On every push to `main` that affects the benchmarks package:
- Build all packages (including the Rust toolchain)
- Run the full benchmark suite (default settings: 100 connections, 30s, 3 runs)
- Commit the updated `latest.json` if results changed (with `[skip ci]` to avoid loops)
- Trigger a docs site redeploy via `workflow_dispatch` to update the charts with fresh data
### Updating the Baseline

The baseline file (`packages/benchmarks/baseline.json`) is updated when:

- The `bench-baseline` Nx target is run, which clears `latest.json`, runs the bench suite with CI settings, and copies the results to `baseline.json`
- A maintainer manually copies `latest.json` to `baseline.json` after a full run
## Results Format

All results files use the `ResultsFile` format:

```json
{
  "version": 1,
  "generatedAt": "2026-03-09T08:00:00.000Z",
  "results": [
    {
      "framework": "typokit-node-native",
      "platform": "node",
      "server": "native",
      "scenario": "json",
      "reqPerSec": 85000,
      "latencyAvg": 1.17,
      "latencyP50": 1.05,
      "latencyP75": 1.32,
      "latencyP90": 1.65,
      "latencyP99": 3.21,
      "latencyMax": 12.5
    }
  ],
  "validationAnalysis": [],
  "microbenchmarks": []
}
```

- `results` — HTTP benchmark results, one entry per framework × scenario
- `validationAnalysis` — computed post-hoc from the three validate scenarios, showing overhead percentages
- `microbenchmarks` — serialization microbenchmark results (ops/sec, avg/min/max ns per operation)
When multiple runs are configured, the runner averages all runs before writing the final result.
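That averaging step can be sketched as follows (illustrative only; it assumes a non-empty array of runs whose entries share the same numeric fields):

```typescript
// Averages each numeric field across runs before writing the final record.
function averageRuns(runs: Array<Record<string, number>>): Record<string, number> {
  const out: Record<string, number> = {};
  for (const key of Object.keys(runs[0])) {
    out[key] = runs.reduce((sum, r) => sum + r[key], 0) / runs.length;
  }
  return out;
}
```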