SQLPulse
Validate that a tuned SQL query holds up under sustained load, and produce
the artifacts a perf team can hand to leadership.
SQLPulse is a self-hosted load tester for parameterized SQL queries. You point it at
a database, supply a query and a CSV of parameter values, set a target TPS, and it
drives that exact workload through a pool of concurrent connections, recording
latency percentiles, throughput, and per-query timing into shareable PDF / Excel /
CSV reports with interpretive insights and multi-run comparisons.
Realistic load, not synthetic
Token-bucket pacing for even dispatch (no bursts), real connection pools, server-enforced statement timeouts, and per-query parameters from your CSV.
Shareable evidence
Branded PDF and Excel reports with cover summary, percentile chart, histogram, error breakdown, and a "vs prior run" delta strip, built from the same data the live dashboard shows.
Zero filesystem ceremony
Upload queries and parameter CSVs through the Library page; everything persists in Postgres, so the same run reproduces on a laptop, an Azure container, or an EC2 box without copying files.
Three front doors, one engine
Web UI, CLI, and Textual TUI all wrap the same runner. Mix interactive validation in the browser with scripted CI runs from the CLI; both flow into the same history view, tagged by source.
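The token-bucket pacing mentioned above can be pictured in a few lines. This is an illustrative model, not SQLPulse's actual pacer; the class and parameter names are assumptions:

```python
import time

class TokenBucketPacer:
    """Sketch of even-dispatch pacing. Tokens refill at `rate` per second;
    acquire() blocks until one is available, so with burst=1 the dispatch
    interval converges on 1/rate seconds with no bursts."""

    def __init__(self, rate: float, burst: int = 1):
        self.rate = rate                # tokens per second (target TPS)
        self.burst = burst              # max tokens held; 1 forbids bursts
        self.tokens = float(burst)
        self.last = time.monotonic()

    def acquire(self) -> None:
        while True:
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return
            # sleep just long enough for the next token to accrue
            time.sleep((1.0 - self.tokens) / self.rate)
```

At a target of 50 TPS this dispatches one call roughly every 20 ms rather than 50 calls in a burst each second.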
What SQLPulse does (and what it doesn't)
| Designed for | Not for |
| --- | --- |
| Validating one (or a small handful of) parameterized queries under sustained TPS | Whole-application load testing (use k6, Locust, JMeter) |
| Pre-prod regression checks against a release candidate | Production traffic replay (no schema-aware mutation generation) |
| Capacity planning for read-heavy DB workloads | Functional / correctness testing; SQLPulse drains and discards results, only timing matters |
| Generating evidence artifacts for a perf review or change board | Long-term trend monitoring (use Grafana / Datadog) |
Core concepts
- Run
  - One execution of a single query against a single database, paced at a target TPS, for either a fixed duration or a fixed number of executions. Produces a per-run CSV (one row per query), a text/JSON report, and on-demand PDF / Excel.
- Query & values
  - A `.sql` file with named placeholders (`:p1`, `:p2`, …) plus a CSV where each row supplies one value per placeholder, in order. SQLPulse rewrites the placeholders for the underlying driver before binding.
- Library asset
  - Queries and parameter CSVs uploaded through the web UI and persisted in Postgres. Decouples test inputs from the host filesystem; the same library works on local Linux and in cloud containers.
- Connection profile
  - A saved DB target (host / port / database / user / type). Passwords are never persisted; you supply them at run time. Used by both the form and the per-run database probe.
- Batch
  - A group of runs launched together (e.g. a TPS sweep across 10/50/100). The history view collapses them into one row that re-expands on click.
- Source
  - Where a run was launched from: `web`, `cli`, or `tui`. Shown as a chip in the history table; CLI / TUI runs reach the DB whenever `SQLPULSE_DATABASE_URL` is set.
- Baseline
  - Reference percentiles (p50/p95/p99) for a `(query, target_host, target_tps)` tuple. The insights engine compares fresh runs against the baseline to flag regressions on the report cover.
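As a concrete illustration of the "Query & values" rewrite, here is a minimal sketch of turning `:p1`-style placeholders into psycopg's pyformat style and pairing one CSV row with them. The function names are hypothetical; the real logic lives in the driver layer:

```python
import re

# Matches :p1, :p2, ... named placeholders
_PLACEHOLDER = re.compile(r":(p\d+)\b")

def rewrite_for_psycopg(sql: str) -> str:
    """:p1 -> %(p1)s (the pyformat style psycopg binds natively)."""
    return _PLACEHOLDER.sub(r"%(\1)s", sql)

def bind_row(row: list[str]) -> dict[str, str]:
    """Pair one CSV row with the positional placeholder names, in order."""
    return {f"p{i + 1}": v for i, v in enumerate(row)}

sql = "SELECT * FROM orders WHERE region = :p1 AND status = :p2"
print(rewrite_for_psycopg(sql))
# -> SELECT * FROM orders WHERE region = %(p1)s AND status = %(p2)s
# usage: cursor.execute(rewrite_for_psycopg(sql), bind_row(["emea", "open"]))
```

Because the values travel as bound parameters rather than spliced text, a hostile CSV cell cannot change the query's structure.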
Quick start
Easiest path: Docker compose.

```bash
git clone <repo-url>
cd sqlpulse
docker compose up -d
open http://localhost:8765
# In the UI: Library -> + New asset -> upload .sql + .csv -> Start.
```

Postgres comes up as a sidecar; SQLPulse's metadata (saved connections, run history, library) lives there. Reports land under `./web-runs/` on the host.
30-second tour after install:
- Open Settings and add a saved connection to the DB you want to load-test (no password; it's supplied at run time).
- Open Library, click + New asset, and upload your `query.sql`; repeat for `values.csv`.
- In Section 1 of the form, optionally pick one of the bundled `pg_catalog` samples; otherwise expand Section 3, Library asset, and pick the assets you uploaded.
- In Section 2, pick the saved connection. Set Target TPS plus Duration / Total runs.
- Click Start (or ⌘+Enter / Ctrl+Enter). Live KPIs update every 200 ms; when complete, the run lands in Run history on the right.
Supported databases

| Database | Driver | URL form | Notes |
| --- | --- | --- | --- |
| PostgreSQL 13+ | `psycopg[binary]` 3.x | `postgresql://user[:pw]@host:5432/db` | Default. Statement timeout enforced server-side; read-only sessions by default. |
| CockroachDB 22+ | (Postgres wire protocol) | `postgresql://user[:pw]@host:26257/db` | Wire-compatible with Postgres: same driver, only the connection-profile type badge differs. |
| Oracle 19c+ | `oracledb` 2.x (thin mode) | `oracle://user:pw@host:1521/service` | Password is required (no trust auth). Probes use `SELECT 1 FROM DUAL`. |

SQL Server, MySQL, Redshift, BigQuery, and Snowflake are not yet supported. The driver layer (`src/sqlpulse/driver/`) is small (`base.Driver` plus per-engine implementations); contributions welcome.
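A contribution for a new engine roughly means subclassing the driver base. The sketch below is hypothetical (the real `base.Driver` interface will differ in names and signatures); it only shows the shape of the seam, using the probe statements named in the table above:

```python
from abc import ABC, abstractmethod

class Driver(ABC):
    """Hypothetical stand-in for base.Driver; illustrative only."""
    placeholder_style: str  # e.g. "pyformat" (psycopg) or "named" (oracledb)

    @abstractmethod
    def probe_sql(self) -> str:
        """Cheap statement for the fail-fast connection probe."""

class PostgresDriver(Driver):
    placeholder_style = "pyformat"

    def probe_sql(self) -> str:
        return "SELECT 1"

class OracleDriver(Driver):
    placeholder_style = "named"

    def probe_sql(self) -> str:
        return "SELECT 1 FROM DUAL"
```

A MySQL or SQL Server contribution would add one more subclass plus its placeholder-rewrite rule.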
Reading the dashboard
9 KPI tiles refresh every 200 ms while a run is in flight:
- Achieved TPS: what the test actually produced. Below target means workers can't keep up; the badge turns yellow, then red.
- p50 / p95 / p99: the percentiles you usually quote. A big gap between p95 and p99 means unstable tail latency.
- Failed: error count (timeout, connection reset, SQL error). Drill in via the history modal for the message-level breakdown clustered by error.
Chart tabs:
- TPS × Latency: scatter of recent ticks; lets you spot whether p95 climbs as TPS rises.
- TPS / time + Latency / time: bucketed time series replayed from the per-query CSV.
- Histogram: distribution of successful query durations. Auto-shown after completion.
- Table: raw per-bucket aggregates with a per-run CSV download link.
- Insights: auto-generated diagnostics ("p99 is 5× the median; investigate slow outliers"), baseline comparisons, etc.
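If you want to sanity-check the percentile tiles offline, they can be recomputed from the duration column of the per-run CSV. A nearest-rank sketch (SQLPulse's own aggregation may compute them differently):

```python
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile over successful query durations (ms)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# one tail outlier is enough to pull p99 far above the median
durations = [12.0, 13.1, 12.4, 55.0, 12.9, 230.0, 12.2, 13.4, 12.7, 12.5]
p50 = percentile(durations, 50)   # 12.7
p99 = percentile(durations, 99)   # 230.0
```

The p50/p99 spread in this toy sample is exactly the "unstable tail latency" pattern the dashboard flags.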
Library & uploads
The Library button opens a CRUD modal for queries (`.sql`) and parameter files (`.csv`) stored in Postgres. Upload by paste, by file picker, or by dragging from your editor. The per-file cap is set by `SQLPULSE_MAX_UPLOAD_MB` (default 5 MB).
When a run is launched from a library asset, SQLPulse materializes the text into `web-runs/tmp/<run_id>/`, runs the test, then deletes the temp dir on completion. The DB row records both the asset id and the temp path actually used, so you can trace any past run back to the exact library version.
Why DB-backed? A fresh container has nothing on disk. After one upload, the same run reproduces verbatim on a laptop, an Azure container, or an EC2 box, with no shared filesystem needed.
Run history & drill-down
The Run history panel on the right lists every completed run, with a colored chip per source (WEB CLI TUI). Click a row to open the drill-down modal:
- Pacing: target / achieved TPS, connections, mode
- Outcome: total / successful / failed / warmup, duration
- Latency: min / mean / p50 / p90 / p95 / p99 / max / stdev
- Diagnostics: auto-generated hints + insights
- Latency distribution: full histogram of successful runs
- Errors: clustered message-level breakdown if any
- Downloads: per-query CSV, text report, JSON, PDF, Excel
Multi-run compare: shift-click rows in the history table, then export a comparison PDF or XLSX from the toolbar at the top of the dashboard.
Command-line interface
The CLI runs the same engine as the web app: useful for CI/CD, scripted regression checks, and headless servers. When `SQLPULSE_DATABASE_URL` is set, CLI runs land in the web history tagged CLI.
Install (already done if you used the venv):

```bash
python3.12 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
```
Run a test:

```bash
# From inside the sqlpulse repo
python -m sqlpulse \
  --query-file examples/catalog_lookup/query.sql \
  --values-file examples/catalog_lookup/values.csv \
  --target-tps 10 \
  --total-runs 30 \
  --connections 4 \
  --warmup-runs 2 \
  --reuse-values \
  --db-url 'postgresql://you@localhost/postgres'
```
Persist to the web history:

```bash
SQLPULSE_DATABASE_URL=postgresql+psycopg://sqlpulse:sqlpulse@localhost/sqlpulse \
python -m sqlpulse --query-file q.sql --values-file v.csv \
  --target-tps 50 --duration 60 \
  --db-url 'postgresql://you@host/db'
```
All flags:

| Flag | Description |
| --- | --- |
| `--query-file PATH` | `.sql` with `:p1`, `:p2`, … placeholders |
| `--values-file PATH` | CSV with one parameter set per line, no header |
| `--target-tps FLOAT` | Dispatch rate. Supports rates < 1 (e.g. 0.5) |
| `--duration SECONDS` | Mutually exclusive with `--total-runs` |
| `--total-runs N` | Mutually exclusive with `--duration` |
| `--connections N` | Pool size. Defaults to auto |
| `--warmup-runs N` | Leading runs excluded from stats |
| `--values-mode sequential\|random` | CSV consumption order |
| `--reuse-values` / `--no-reuse-values` | Wrap when CSV exhausted |
| `--allow-writes` / `--read-only` | Drop the read-only session guard |
| `--query-timeout-ms N` | Server-side statement timeout (default 30000) |
| `--db-url URL` | Connection string. Honors `$DATABASE_URL` |
| `--report-file PATH` | Text report destination (sibling JSON auto-written) |
| `--log-file PATH` | Per-query CSV log destination |
| `--runs-dir PATH` | Output dir for default report/log paths. Honors `$SQLPULSE_RUNS_DIR`. Default `./web-runs/` |
| `--config` / `-c FILE.toml` | Pre-populate from a TOML config (CLI flags override) |
| `--verbose` / `-v` | Debug-level logging |
TOML config (every flag has a kebab-case key):

```toml
target-tps = 50
duration = 60
connections = 8
warmup-runs = 5
reuse-values = true
query-file = "queries/catalog_lookup.sql"
values-file = "values/catalog_lookup.csv"
```

```bash
python -m sqlpulse -c sweep.toml --db-url "$DATABASE_URL"
```
Terminal UI
A Textual-based interactive form for one-off runs from a terminal, useful when SSH'd into a server with no browser:

```bash
python -m sqlpulse.tui   # or installed script: sqlpulse-tui
```

Key bindings: Ctrl+R start, Ctrl+S stop, Ctrl+Q quit. Like the CLI, runs land in the web history tagged TUI when `SQLPULSE_DATABASE_URL` is set.
Local deployment
Option 1: Docker compose (recommended). Brings up Postgres + the app together:

```bash
docker compose up -d
docker compose logs -f app   # follow startup
open http://localhost:8765
```

Defaults from `docker-compose.yml`:
- Postgres 16 sidecar with a named `pgdata` volume; survives container restarts.
- Auto-runs `alembic upgrade head` on every start (see `entrypoint.sh`).
- Reports written to `./web-runs/` on the host (volume-mounted to `/data/web-runs`).
- Listens on `0.0.0.0:8765`.

Stop / reset:

```bash
docker compose down      # stop, keep data
docker compose down -v   # stop + wipe Postgres volume
```
Option 2: Bare metal (no Docker). Useful for development:

```bash
# Postgres reachable somewhere
createdb sqlpulse
python3.12 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
export SQLPULSE_DATABASE_URL=postgresql+psycopg://you@localhost/sqlpulse
alembic upgrade head
sqlpulse-web   # listens on 127.0.0.1:8765
```
Behind a reverse proxy: the SSE endpoint at `/api/runs/{id}/stream` needs proxy buffering disabled. For nginx:

```nginx
location /api/runs/ {
    proxy_buffering off;
    proxy_read_timeout 3600s;
    proxy_pass http://sqlpulse-app:8765;
}
```
Cloud deployment (Azure / AWS)
SQLPulse's container is stateless apart from the report artifacts. Two pieces of persistent state to plan for:
- Metadata DB: any reachable Postgres pointed at by `SQLPULSE_DATABASE_URL`. Holds saved connections, run history, library, baselines.
- Artifact storage: the directory at `SQLPULSE_RUNS_DIR` (default `/data/web-runs` in Docker). Holds per-query CSVs and generated PDF / Excel reports.
Azure Container Apps:
- Provision Azure Database for PostgreSQL, Flexible Server (Burstable B1ms is plenty for metadata).
- Build & push the image; deploy as a Container App with min/max replicas = 1 (the in-memory live-runs dict is per-instance; multi-instance HA is on the roadmap).
- Mount an Azure Files share at `/data/web-runs` via the Volumes tab. Use Premium SMB storage if you produce many large reports.
- Set env vars: `SQLPULSE_DATABASE_URL`, optionally `SQLPULSE_MAX_UPLOAD_MB`.
- Expose port 8765 with HTTPS ingress; enable session affinity if you sit behind a CDN.
AWS ECS / Fargate:
- Provision RDS for PostgreSQL (or Aurora Postgres-compatible). A t4g.micro suffices for the metadata DB.
- Push the image to ECR; create an ECS service with desired count = 1.
- Mount an EFS access point at `/data/web-runs` via the task definition's `volumes` + `mountPoints`.
- Set env vars in the task definition; put the secret URL in Secrets Manager and reference it via `secrets`.
- Front with an ALB; target group health check = `GET /api/health` on port 8765.
Kubernetes (any cloud): a single Deployment with replicas = 1, a PersistentVolumeClaim for `/data/web-runs` (RWO is fine since replicas = 1), env vars from a Secret. A Helm chart is on the roadmap.
Why not multi-replica HA today? Live runs (in-flight, streaming SSE) live in an in-memory dict on the instance that started them. DB persistence ensures completed runs are always recoverable, but a load balancer routing two requests for the same in-flight run to different pods would surface as "run not found." Pin replicas = 1 for now.
Environment variables

| Variable | Default | Used by | Purpose |
| --- | --- | --- | --- |
| `SQLPULSE_DATABASE_URL` | (required) | Web, CLI, TUI | Metadata DB connection. CLI/TUI run history is recorded only when set. |
| `SQLPULSE_RUNS_DIR` | `./web-runs` | Web, CLI, TUI | Where per-run artifacts (CSV, TXT, JSON, PDF, XLSX) are written. |
| `SQLPULSE_HOST` | `127.0.0.1` | Web | uvicorn bind address (`0.0.0.0` in Docker). |
| `SQLPULSE_PORT` | `8765` | Web | uvicorn port. |
| `SQLPULSE_MAX_UPLOAD_MB` | `5` | Web | Per-file Library upload cap. Uploads above it are rejected with 413. |
| `SQLPULSE_RUN_MIGRATIONS` | (unset) | Web | Set to 1 to run alembic on uvicorn startup (Docker uses `entrypoint.sh` instead). |
| `SQLPULSE_RELOAD` | (unset) | Web | Set to 1 for uvicorn dev-mode reload. |
| `SQLPULSE_CONNECTIONS_FILE` | (unset) | Web | Path to a legacy `connections.json` imported once on first boot. |
| `DATABASE_URL` | (unset) | CLI | Fallback for `--db-url`. Combined with `$DB_PASSWORD` if the URL has no password. |
| `CADENCE_*` | (unset) | All | Legacy aliases for the `SQLPULSE_*` names. Honored for one release, then dropped. |
System shape
SQLPulse is a self-hosted three-tier app: a single-page UI in the browser, a FastAPI middle tier on Python, and a Postgres metadata DB. The middle tier also opens connections to whatever target databases the operator load-tests; those are the user's own DBs, not part of SQLPulse.

```
USER (browser / CLI)
  │ HTTPS
  ▼
REVERSE PROXY (operator-supplied: ALB+Cognito / nginx+OAuth2-Proxy /
               GCP IAP / Azure Easy Auth / Cloudflare Access)
  │ HTTP, loopback only
  ▼
MIDDLE TIER (FastAPI + uvicorn, Python 3.12)
  │
  ├──▶ METADATA DB (Postgres 16)   run history, saved
  │                                connections (no pw),
  │                                library queries / values
  │
  ├──▶ TARGET DBs (Postgres / Cockroach / Oracle ADB)   load-tested
  │
  └──▶ HOST FS   web-runs/ (CSV/PDF reports)
                 ~/.sqlpulse/wallets/<conn-id>/ (Oracle wallets, mode 0600)
```

Only the reverse proxy is public; the middle tier and metadata DB live on a private network. `SQLPULSE_HOST=127.0.0.1` binds uvicorn to loopback so only the local proxy can reach it.
UI · middle tier · DB
UI (browser)
- Single-page app, served by the middle tier as static `index.html` + `app.js`.
- Drives the dashboard via Server-Sent Events (live KPIs, log tail).
- Stores only cosmetic preferences in `localStorage` (theme, pane widths). No credentials, no tokens.
Middle tier (FastAPI on uvicorn)
- One Python process. REST handlers + a worker thread pool (one thread per concurrent run, each holding a fixed pool of DB connections to the target).
- In-memory ring buffer for the live log feed (last ~2000 lines).
- Stateful only for in-flight runs; once complete, all durable state lands in the metadata DB and on disk.
- Crash / restart sweeps zombie `running` rows to `error` on next startup.
Metadata DB (Postgres 16)
- Tables: `runs`, `batches`, `connections`, `query_assets`, `values_assets`, `baselines`, `run_tags`.
- Schema versioned by Alembic; migrations run automatically on container start.
- Does not store: target DB passwords, Oracle wallet passwords, or PDF / CSV report bodies (those live on the filesystem; the DB stores paths).
Run lifecycle
1. The UI POSTs `/api/runs` with the picked connection id, the per-run DB password, and the load-test parameters.
2. The middle tier resolves the connection (looks up host / port / username from the DB, combines with the per-request password), validates paths against the allowlist, and inserts a `status='running'` row stamped with the authenticated user.
3. The worker pool is opened: one probe connection first (fail-fast on bad creds / wrong DB), then the rest opened serially to avoid macOS Keychain races on Oracle.
4. Each worker loops: acquire pacing token, pick value row, `driver.execute_drain(query)`, write CSV log line, record in the live stats sink. The SSE stream pushes KPIs to the UI on each tick.
5. On completion: TXT + JSON reports are written; PDF / XLSX are generated lazily on first download. The DB row is updated with outcomes, error clusters, and latency / histogram data. The in-memory handle is removed.
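The per-worker step condenses to a small loop. A runnable sketch with the collaborators stubbed out as callables; every name here is an assumption, not the real API:

```python
import time

def worker_loop(acquire_token, next_values, execute_drain, log_csv, stats, stop):
    """One worker thread: pace, bind, execute, record - until told to stop."""
    while not stop():
        acquire_token()                     # token-bucket pacing gate
        params = next_values()              # one CSV row of bind values
        t0 = time.perf_counter()
        ok, error = execute_drain(params)   # run query, drain rows, discard
        duration_ms = (time.perf_counter() - t0) * 1000
        log_csv(params, duration_ms, ok, error)   # one CSV line per query
        stats.append((duration_ms, ok))           # feeds the live SSE KPIs
```

Because only timing and success are recorded, result sets never need to be retained; draining and discarding is deliberate.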
Security model: overview
- No DB passwords are ever stored. Saved connection rows only hold host / port / database / username; passwords are entered per-run and live in process memory for the duration of the request.
- Authentication is delegated to the corporate IdP via a reverse proxy that injects a trusted user header (`X-Forwarded-User` or an equivalent). SQLPulse never sees passwords, tokens, MFA codes, or SAML assertions.
- Strict mode (`SQLPULSE_REQUIRE_AUTH=1`) refuses any request lacking the trusted header: the deployment fail-safe if the proxy is misconfigured.
- CSRF guard: state-changing methods require a custom `X-Requested-With: sqlpulse-ui` header that browsers refuse to set on cross-origin requests.
- The container runs as non-root (uid 10001), with restrictive default CORS, a path allowlist on filesystem reads, a certifi-only CA bundle, and password-redaction filters on log output.
- SQL injection from values rows is defeated by driver-native parameter binding (`%(name)s` for psycopg, `:name` for oracledb). Operator-authored SQL is in scope by design: running it is the product's job.
Full finding-by-finding analysis lives in SECURITY.md in the repo.
Why no DB passwords are stored
Most enterprise security review questions start with this one. The short answer:
- The `connections` table has no password column. Look at the schema: there's nowhere it could live.
- The UI prompts for the password at run time (`<input type="password">`) and POSTs it as the `db_password` field of a single request, over the proxy's TLS.
- The middle tier holds the password in memory only as long as the request runs. The driver builds the connection URL, opens the pool, and drops the URL.
- The password is never logged. The log buffer redacts `password=...` patterns and URL-embedded passwords as a defence-in-depth backstop.
- For Oracle Autonomous DB the same posture applies twice: both the DB password and the wallet password (used to unwrap `ewallet.pem`) are per-request. Neither is persisted.
What is kept: who ran a load test against which saved connection, when, and what the outcome was. That's audit data, not a credential.
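The redaction backstop can be pictured as a `logging.Filter` that rewrites secrets before any handler sees them. An illustrative sketch; the real filter's patterns may be broader:

```python
import logging
import re

# Two shapes: "password=<secret>" and "scheme://user:<secret>@host"
_SECRET = re.compile(r"(password=)\S+|(://[^:/@\s]+:)[^@\s]+(@)")

class RedactingFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = _SECRET.sub(
            lambda m: (m.group(1) or m.group(2)) + "***" + (m.group(3) or ""),
            str(record.msg),
        )
        return True  # never drops records, only rewrites them
```

Attach it with `logger.addFilter(RedactingFilter())` so even accidental debug logging of a connection URL comes out as `postgresql://bob:***@host/db`.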
Authentication: the header-trust pattern
SQLPulse trusts a reverse proxy that has already authenticated the user against the corporate IdP. The proxy sets one of these headers on every request after auth:
- `X-Forwarded-User` (OAuth2-Proxy default)
- `X-Auth-Request-Email` (OAuth2-Proxy email pass-through)
- `X-MS-CLIENT-PRINCIPAL-NAME` (Azure App Service Easy Auth)
- `X-Goog-Authenticated-User-Email` (GCP IAP)
- `Cf-Access-Authenticated-User-Email` (Cloudflare Access)
- `X-Amzn-Oidc-Identity` (AWS ALB OIDC authoriser)
When `SQLPULSE_REQUIRE_AUTH=1`, the app rejects requests without one of these headers (HTTP 401). What makes header-trust safe is network isolation: port 8765 must only be reachable from the proxy. This is the same pattern used by Vault, Argo CD, Grafana, and most internal corp tools.
Oracle wallet hygiene
- Wallet zips are validated server-side: 10 MB total cap, 5 MB per file, filename allowlist (`cwallet.sso`, `ewallet.pem`, `ewallet.p12`, `tnsnames.ora`, `sqlnet.ora`, `ojdbc.properties`, `keystore.jks`, `truststore.jks`).
- Path-traversal segments (`..`, absolute paths) are rejected; extraction is flat into `~/.sqlpulse/wallets/<conn-id>/`.
- Files are written with mode `0600`, directories with `0700`, owned by uid 10001.
- The wallet password is never stored. Operators enter it at run time alongside the DB password.
- Deleting the saved connection removes its wallet directory.
Operational security
- Read-only by default. Every session opens with `SET default_transaction_read_only = on` (Postgres) so a misconfigured query can't accidentally write. Toggle Allow writes only for intentional DML load tests.
- Server-side timeouts. Each session sets `statement_timeout` to the configured value; slow queries are killed by the database, not held by the client.
- Passwords are never persisted. Saved connection profiles store host / port / user / database; passwords are typed at run time and held only for the lifetime of the request.
- No outbound calls. The container makes no network calls except to the configured target databases.
- SSRF surface. The DB URL field accepts any reachable Postgres / Oracle URL. If you expose SQLPulse to untrusted users, run it inside a network namespace that restricts egress to only the DBs you want load-tested.
- Authentication. SQLPulse has no built-in auth; it's designed to sit behind your existing reverse proxy / IdP (mTLS, OIDC, SSO). Don't expose it directly to the internet.
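The first two guards boil down to a couple of `SET` statements issued right after connect. A sketch of the Postgres-dialect session setup; the helper name is an assumption, and the real engine issues these through the driver:

```python
def session_setup_sql(read_only: bool, query_timeout_ms: int) -> list[str]:
    """Statements to run on a fresh Postgres session before any load."""
    stmts = [f"SET statement_timeout = {int(query_timeout_ms)}"]
    if read_only:
        # blocks writes unless --allow-writes was requested
        stmts.append("SET default_transaction_read_only = on")
    return stmts
```

Pushing both guards to the server means a hung client cannot leave a runaway query behind, and a typo'd UPDATE fails instead of mutating data.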
Artifacts & retention
Each run produces files under `SQLPULSE_RUNS_DIR`:

| File | When | Typical size |
| --- | --- | --- |
| `runs-<ts>-<run_id>.csv` | Always; one row per executed query | ~150 bytes/query (10K runs ≈ 1.5 MB) |
| `report-<ts>-<run_id>.txt` | Always; formatted summary | ~1 KB |
| `report-<ts>-<run_id>.json` | Always; structured payload | ~2 KB |
| `report-<ts>-<run_id>.pdf` | Lazy; on first PDF download | ~150 KB |
| `report-<ts>-<run_id>.xlsx` | Lazy; on first Excel download | ~30 KB |

Postgres holds aggregates + paths only, never blobs. Retention is currently manual: prune old files in `SQLPULSE_RUNS_DIR` and call `DELETE /api/runs?scope=complete` to drop the corresponding DB rows.
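Until built-in retention exists, a sweep like the following, paired with the `DELETE /api/runs?scope=complete` call, keeps the runs directory bounded. A hypothetical helper, not part of SQLPulse:

```python
import time
from pathlib import Path

def prune_artifacts(runs_dir: str, max_age_days: float) -> list[str]:
    """Delete report/log files older than max_age_days; return what was removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for f in Path(runs_dir).glob("*"):
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()
            removed.append(f.name)
    return sorted(removed)
```

Run it from cron (e.g. weekly with `max_age_days=30`), then issue the DELETE call so the history table doesn't point at missing files.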
Keyboard shortcuts

| Shortcut | Action |
| --- | --- |
| ⌘+Enter / Ctrl+Enter | Start a run (submits the form) |
| Esc | Stop the focused run |
| Ctrl+1 | Collapse / expand the form pane |
| Ctrl+3 | Collapse / expand the history pane |
Troubleshooting
- "too many clients already"
  - Postgres `max_connections` exhausted. Lower the Connections field, or raise `max_connections` in `postgresql.conf`.
- "statement timeout"
  - The query took longer than the Timeout (ms) field. Either tune the query or raise the timeout.
- Achieved TPS < target
  - Workers can't keep up. Try more connections, lower the target, or check whether the DB itself is the bottleneck (server CPU / IO).
- Big p95 / p99 gap
  - Unstable tail latency. Download the per-run CSV, sort by `duration_ms`, and inspect which params produced the slow runs.
- "values file exhausted (reuse disabled)"
  - The test outran your values file. Either grow the file, lower the target rate, or tick Reuse values.
- "SQLPULSE_DATABASE_URL is not set"
  - The web app needs a metadata DB. With Docker compose this is automatic; for bare metal, see Local deployment, Option 2.
- CLI run doesn't appear in web history
  - Set `SQLPULSE_DATABASE_URL` in the CLI shell to the same Postgres the web app uses. Without it the CLI runs file-only.
- "413: exceeds SQLPULSE_MAX_UPLOAD_MB cap"
  - Your library upload exceeds the per-file cap (default 5 MB). Bump the env var if you genuinely need larger CSVs, but consider whether the workload would be better expressed as a smaller file with Reuse values.
- 413 / 502 behind a reverse proxy
  - Some proxies cap request bodies (nginx defaults to 1 MB) or buffer SSE streams. See the Local deployment section for the nginx snippet.