
# Methodology

This page documents exactly how TypoKit benchmarks are collected, how results are stored and compared, and how you can reproduce them on your own hardware.

Published results on the docs site are generated by GitHub Actions runners on ubuntu-latest. Typical specs:

| Property | Value |
| --- | --- |
| OS | Ubuntu 22.04 (x86_64) |
| CPU | 4-core AMD EPYC (GitHub-hosted) |
| RAM | 16 GB |
| Node.js | 24.x |
| Bombardier | v2.0.2 |

The runner collects exact hardware and runtime information automatically (OS, CPU model, core count, RAM, Node/Bun/Deno/Rust versions, bombardier version) and writes it into each results file.
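The environment capture can be sketched with Node's built-in `os` module. This is a hedged approximation of what the runner records — the field names here (`cpuModel`, `ramGb`, etc.) are illustrative, not the suite's actual schema:

```typescript
// Sketch: capturing the host environment that gets written into each
// results file. Uses only Node's standard library; the exact fields the
// real runner records (Bun/Deno/Rust/bombardier versions, etc.) may differ.
import * as os from "node:os";

function systemInfo() {
  const cpus = os.cpus();
  return {
    os: `${os.type()} ${os.release()} (${os.arch()})`,
    cpuModel: cpus[0]?.model ?? "unknown",
    cores: cpus.length,
    ramGb: Math.round(os.totalmem() / 1024 ** 3),
    nodeVersion: process.version,
  };
}

console.log(JSON.stringify(systemInfo(), null, 2));
```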

All HTTP benchmarks use bombardier, a high-performance HTTP load-testing tool written in Go. It is auto-downloaded and cached on first run.

**Default mode** (full suite):

| Setting | Value |
| --- | --- |
| Connections | 100 concurrent |
| Duration | 30 seconds per scenario |
| Warmup | 5 seconds |
| Runs | 3 (results averaged) |

**CI mode** (quick PR runs):

| Setting | Value |
| --- | --- |
| Connections | 50 concurrent |
| Duration | 10 seconds per scenario |
| Warmup | 5 seconds |
| Runs | 1 |

All apps use an in-memory SQLite database with identical schema and seed data. Bombardier outputs JSON with latency histograms (`-o json -l`), which are parsed into structured `BenchmarkResult` records.
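The parsing step can be sketched as follows. The bombardier-side field names (`result.rps.mean`, `result.latency.percentiles`) reflect bombardier's v2.x JSON output, latencies are assumed to be reported in microseconds, and the `BenchmarkResult` shape mirrors the results-file entries documented later on this page — treat all of this as an assumption about the suite's internals, not its actual parser:

```typescript
// Sketch: converting bombardier `-o json -l` output into a BenchmarkResult.
// Assumptions: bombardier reports latencies in microseconds; the results
// file stores them in milliseconds; percentile keys are "50"/"75"/"90"/"99".

interface BenchmarkResult {
  framework: string;
  platform: string;
  server: string;
  scenario: string;
  reqPerSec: number;
  latencyAvg: number; // ms
  latencyP50: number;
  latencyP75: number;
  latencyP90: number;
  latencyP99: number;
  latencyMax: number;
}

interface BombardierOutput {
  result: {
    rps: { mean: number };
    latency: {
      mean: number; // microseconds
      max: number;
      percentiles: Record<string, number>;
    };
  };
}

const usToMs = (us: number) => us / 1000;

function parseBombardierResult(
  raw: BombardierOutput,
  meta: { framework: string; platform: string; server: string; scenario: string },
): BenchmarkResult {
  const { rps, latency } = raw.result;
  return {
    ...meta,
    reqPerSec: Math.round(rps.mean),
    latencyAvg: usToMs(latency.mean),
    latencyP50: usToMs(latency.percentiles["50"]),
    latencyP75: usToMs(latency.percentiles["75"]),
    latencyP90: usToMs(latency.percentiles["90"]),
    latencyP99: usToMs(latency.percentiles["99"]),
    latencyMax: usToMs(latency.max),
  };
}
```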

Each benchmark app is tested across multiple scenarios. Every scenario hits a specific HTTP endpoint:

| Scenario | Method | Path | What It Measures |
| --- | --- | --- | --- |
| `json` | GET | `/json` | Pure routing + JSON serialization overhead. Returns a fixed JSON response. |
| `validate` | POST | `/validate` | Validation pipeline cost. Parses and validates a JSON request body against the schema. |
| `db` | GET | `/db/1` | Realistic end-to-end latency. Reads a row from SQLite by ID. |
| `middleware` | GET | `/middleware` | Middleware dispatch overhead. Passes through a 5-layer middleware chain. |
| `startup` | GET | `/startup` | Cold start to first response. Measures framework initialization time. |

## TypoKit-Only Scenarios (Validation Overhead Isolation)

These two additional scenarios run only against TypoKit apps to isolate where validation overhead comes from:

| Scenario | Method | Path | What It Measures |
| --- | --- | --- | --- |
| `validate-passthrough` | POST | `/validate-passthrough` | No validators in the route table — the body is passed through untouched. Measures the framework’s base overhead for POST handling. |
| `validate-handwritten` | POST | `/validate-handwritten` | Validation is done inline with `if`/`typeof` checks, bypassing TypoKit’s validator framework entirely. Measures the framework dispatch overhead vs. native checks. |

By comparing the three validate scenarios, you can see exactly how much overhead comes from: (1) TypoKit’s POST routing alone (passthrough), (2) hand-written validation (handwritten), and (3) the compiled validator framework (validate).
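That comparison can be expressed as a small calculation. The formula below (throughput loss relative to the passthrough baseline) is an assumption about how the overhead percentages in `validationAnalysis` are derived, and the names are illustrative:

```typescript
// Sketch: deriving validation-overhead percentages from the three validate
// scenarios, measured as req/s lost relative to the passthrough baseline.

interface ValidateThroughput {
  passthrough: number; // req/s for validate-passthrough (no validation)
  handwritten: number; // req/s for validate-handwritten (inline if/typeof)
  validate: number;    // req/s for validate (compiled validator framework)
}

function validationOverhead({ passthrough, handwritten, validate }: ValidateThroughput) {
  // Percent of passthrough throughput given up by each approach.
  const lossPct = (reqPerSec: number) => ((passthrough - reqPerSec) / passthrough) * 100;
  return {
    handwrittenOverheadPct: lossPct(handwritten),
    validatorOverheadPct: lossPct(validate),
  };
}
```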

| Scenario | What It Measures |
| --- | --- |
| `serialization` | Compares three JSON serializers in a tight loop (1M iterations): `JSON.stringify`, `fast-json-stringify` (schema-compiled), and TypoKit’s build-time compiled serializer. Results are reported as operations per second with nanosecond-precision timing. |
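The tight-loop measurement can be sketched as below — a hedged approximation of the harness, not its actual code. It times N iterations with `process.hrtime.bigint()` for nanosecond precision and reports ops/sec; the payload and iteration counts are illustrative:

```typescript
// Sketch: ops/sec measurement of a serializer in a tight loop, timed with
// process.hrtime.bigint() (nanosecond-resolution monotonic clock).

function opsPerSec(fn: () => void, iterations: number): number {
  // Warm the function so the JIT settles before the timed loop.
  for (let i = 0; i < 10_000; i++) fn();
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return (iterations / elapsedNs) * 1e9;
}

const payload = { id: 1, name: "alice", active: true };
const rate = opsPerSec(() => JSON.stringify(payload), 100_000);
console.log(`JSON.stringify: ${Math.round(rate).toLocaleString()} ops/sec`);
```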

The benchmark suite tests 25 apps across three categories:

TypoKit is tested across every supported platform × server adapter combination:

| App | Platform | Server Adapter |
| --- | --- | --- |
| `typokit-node-native` | Node.js | `@typokit/server-native` |
| `typokit-node-fastify` | Node.js | `@typokit/server-fastify` |
| `typokit-node-hono` | Node.js | `@typokit/server-hono` |
| `typokit-node-express` | Node.js | `@typokit/server-express` |
| `typokit-bun-native` | Bun | `@typokit/server-native` |
| `typokit-bun-fastify` | Bun | `@typokit/server-fastify` |
| `typokit-bun-hono` | Bun | `@typokit/server-hono` |
| `typokit-bun-express` | Bun | `@typokit/server-express` |
| `typokit-deno-native` | Deno | `@typokit/server-native` |
| `typokit-deno-fastify` | Deno | `@typokit/server-fastify` |
| `typokit-deno-hono` | Deno | `@typokit/server-hono` |
| `typokit-deno-express` | Deno | `@typokit/server-express` |
| `typokit-rust-axum` | Rust | Axum 0.8 |

Zero-framework HTTP servers — the theoretical performance ceiling for each platform:

| App | Platform | Server |
| --- | --- | --- |
| `raw-node` | Node.js | `node:http` |
| `raw-bun` | Bun | `Bun.serve()` |
| `raw-deno` | Deno | `Deno.serve()` |
| `raw-axum` | Rust | Axum 0.8 |

Popular frameworks tested on their recommended platform with their own validation:

| App | Framework | Version | Platform |
| --- | --- | --- | --- |
| `competitor-express` | Express | 5.1.0 | Node.js |
| `competitor-fastify` | Fastify | 5.3.3 | Node.js |
| `competitor-hono` | Hono | 4.12.3 | Node.js |
| `competitor-koa` | Koa | 2.15.0 | Node.js |
| `competitor-elysia` | Elysia | 1.2.0 | Bun |
| `competitor-trpc` | tRPC | 10.45.0 | Node.js |
| `competitor-nestjs` | NestJS | 10.0.0 | Node.js |
| `competitor-h3` | H3 | 1.13.0 | Node.js |

Each competitor uses its own recommended validation approach (JSON Schema for Fastify, TypeBox for Elysia, Zod for tRPC, manual validation for others).

  1. Node.js ≥ 24 — nodejs.org
  2. pnpm 10.x — pnpm.io
  3. (Optional) Bun ≥ 1.0 for Bun benchmarks — bun.sh
  4. (Optional) Deno ≥ 2.0 for Deno benchmarks — deno.land
  5. (Optional) Rust/Cargo for Axum benchmarks — rustup.rs
  6. bombardier is auto-downloaded and cached on first run
```sh
git clone https://github.com/AltScore/typokit.git
cd typokit
pnpm install
pnpm nx run-many -t build
```
```sh
# Full suite (default: 100 connections, 30s duration, 5s warmup, 3 runs)
pnpm nx bench benchmarks

# Quick CI mode (50 connections, 10s duration, 1 run)
pnpm nx bench-ci benchmarks

# Baselines only
pnpm nx bench-baseline benchmarks

# Custom settings
pnpm nx bench benchmarks -- --connections 100 --duration 30s --warmup 5s --runs 3

# Filter to specific apps
pnpm nx bench benchmarks -- --filter "typokit-*"

# Run only one scenario
pnpm nx bench benchmarks -- --scenario json
```

Expected runtime: The full suite with default settings is a long job — 25 apps × 7 HTTP scenarios × 3 runs × ~35 seconds each works out to roughly 5 hours of load time, plus startup measurements and the serialization microbenchmark. Use CI mode or --filter for faster iteration.

```sh
# Print system info as JSON
pnpm nx bench-info benchmarks

# Print step-by-step reproduction instructions
pnpm nx bench-reproduce benchmarks
```

Results are written to packages/benchmarks/results/:

  • latest.json — cumulative latest results (updated each run via merge)
  • <timestamp>.json — individual run snapshots with full data

The benchmark suite uses Nx’s affected project detection to avoid unnecessary work:

  1. On every PR and push, CI runs pnpm nx show projects --affected to check whether the benchmarks package (or any of its dependencies) was modified.
  2. If benchmarks is not affected, the entire benchmark workflow is skipped — no time wasted on unrelated changes.
  3. If benchmarks is affected, benchmarks run in the appropriate mode (PR or full suite).

You can also run partial suites locally:

  • Run a subset of apps using --filter (e.g., --filter "typokit-node-*" to benchmark only Node.js variants).
  • Results are cumulatively merged into latest.json using a composite key: framework|platform|server|scenario.
  • New results overwrite matching entries; all other entries are preserved.
  • This means you can run different subsets at different times and latest.json always contains the most recent result for each combination.
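The merge step can be sketched as a map keyed by that composite key — a hedged approximation under the assumption that entries are plain records and last write wins:

```typescript
// Sketch: cumulative merge into latest.json, keyed by
// `framework|platform|server|scenario`. Incoming results overwrite
// matching entries; everything else is preserved.

interface ResultEntry {
  framework: string;
  platform: string;
  server: string;
  scenario: string;
  reqPerSec: number;
}

const keyOf = (r: ResultEntry) =>
  `${r.framework}|${r.platform}|${r.server}|${r.scenario}`;

function mergeResults(existing: ResultEntry[], incoming: ResultEntry[]): ResultEntry[] {
  const byKey = new Map<string, ResultEntry>();
  for (const r of existing) byKey.set(keyOf(r), r);
  for (const r of incoming) byKey.set(keyOf(r), r); // overwrite or append
  return [...byKey.values()];
}
```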

Benchmarks are skipped when the only changes are in packages unrelated to @typokit/benchmarks or its upstream dependencies (@typokit/core, @typokit/types, @typokit/errors, server adapters, platform bindings, etc.). Changes to docs-only packages, the CLI, or client packages will not trigger benchmark runs.

On every pull request that affects the benchmarks package:

  1. Build all packages
  2. Run a fast subset of benchmarks (CI mode: 50 connections, 10s, 1 run) filtering to 5 key apps
  3. Compare results against packages/benchmarks/baseline.json (committed reference)
  4. Comment on the PR with a markdown table showing req/s changes for each framework × scenario
  5. Fail if any TypoKit combination has regressed by more than 10% in req/s

The comparison uses the formula:

```
changePct = ((current.reqPerSec - baseline.reqPerSec) / baseline.reqPerSec) × 100
```

Only TypoKit apps (names starting with typokit-) trigger failure on regression. Competitor framework fluctuations are reported but don’t block the PR.
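The gate can be sketched as follows. The `changePct` formula and the 10% / `typokit-` prefix rules come straight from this page; the function and type names are illustrative:

```typescript
// Sketch: the PR regression gate. A TypoKit entry fails the check when
// req/s drops more than 10% versus baseline; competitor entries are
// reported but never block the PR.

interface Comparison {
  framework: string;
  scenario: string;
  baselineReqPerSec: number;
  currentReqPerSec: number;
}

const REGRESSION_THRESHOLD_PCT = -10;

function changePct(c: Comparison): number {
  return ((c.currentReqPerSec - c.baselineReqPerSec) / c.baselineReqPerSec) * 100;
}

function isBlockingRegression(c: Comparison): boolean {
  return c.framework.startsWith("typokit-") && changePct(c) < REGRESSION_THRESHOLD_PCT;
}
```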

PR comments use `## Benchmark Results vs Baseline` as a marker to find and update existing comments rather than creating duplicates.

On every push to main that affects the benchmarks package:

  1. Build all packages (including Rust toolchain)
  2. Run the full benchmark suite (default settings: 100 connections, 30s, 3 runs)
  3. Commit updated latest.json if results changed (with [skip ci] to avoid loops)
  4. Trigger a docs site redeploy via workflow_dispatch to update the charts with fresh data

The baseline file (packages/benchmarks/baseline.json) is updated when:

  • The bench-baseline Nx target is run, which clears latest.json, runs the bench suite with CI settings, and copies the results to baseline.json
  • A maintainer manually copies latest.json to baseline.json after a full run

All results files use the ResultsFile format:

```json
{
  "version": 1,
  "generatedAt": "2026-03-09T08:00:00.000Z",
  "results": [
    {
      "framework": "typokit-node-native",
      "platform": "node",
      "server": "native",
      "scenario": "json",
      "reqPerSec": 85000,
      "latencyAvg": 1.17,
      "latencyP50": 1.05,
      "latencyP75": 1.32,
      "latencyP90": 1.65,
      "latencyP99": 3.21,
      "latencyMax": 12.5
    }
  ],
  "validationAnalysis": [],
  "microbenchmarks": []
}
```
  • results — HTTP benchmark results, one entry per framework × scenario
  • validationAnalysis — computed post-hoc from the three validate scenarios, showing overhead percentages
  • microbenchmarks — serialization microbenchmark results (ops/sec, avg/min/max ns per operation)

When multiple runs are configured, the runner averages all runs before writing the final result.
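Assuming a plain arithmetic mean per field (the page does not specify the averaging method), that step can be sketched as:

```typescript
// Sketch: collapsing N runs of the same scenario into the single record
// that gets written, using a per-field arithmetic mean.

interface RunResult {
  reqPerSec: number;
  latencyAvg: number; // ms
}

function averageRuns(runs: RunResult[]): RunResult {
  const n = runs.length;
  const mean = (pick: (r: RunResult) => number) =>
    runs.reduce((acc, r) => acc + pick(r), 0) / n;
  return {
    reqPerSec: mean((r) => r.reqPerSec),
    latencyAvg: mean((r) => r.latencyAvg),
  };
}
```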