Performance Benchmarks¶
Verified status as of March 28, 2026. Runtime note: FastFN resolves dependencies and build steps per function: Python uses `requirements.txt`, Node uses `package.json`, PHP installs from `composer.json` when present, and Rust handlers are built with `cargo`. Host runtimes and tools are required in `fastfn dev --native`, while `fastfn dev` depends on a running Docker daemon.
This page publishes reproducible benchmark snapshots for FastFN. The goal is to show real measurements, not broad claims.
Quick View¶
- Complexity: Intermediate
- Typical time: 10-25 minutes
- Use this when: you need a baseline before changing daemon counts, queue sizes, or deployment defaults
- Outcome: reproducible numbers and raw artifacts you can compare over time
Reporting rules¶
Each benchmark report should include:
- workload shape
- runtime mode (`docker` or `native`)
- concurrency and repeats
- status mix
- raw artifact path
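As a sketch of what one such report could look like as data, here is a small illustrative container. The field names are examples for this page, not a FastFN schema:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkReport:
    """Illustrative container for the reporting fields listed above."""
    workload_shape: str          # e.g. "4000 requests per point, GET /step-1"
    runtime_mode: str            # "docker" or "native"
    concurrency: list[int]       # the concurrency matrix that was swept
    repeats: int                 # measured repeats per point
    status_mix: dict[int, int]   # HTTP status -> count, e.g. {200: 4000}
    raw_artifact: str            # path under tests/stress/results/

report = BenchmarkReport(
    workload_shape="4000 requests per point",
    runtime_mode="native",
    concurrency=[1, 2, 4, 8, 16],
    repeats=3,
    status_mix={200: 4000},
    raw_artifact="tests/stress/results/example.json",
)
```

Storing the status mix as a full map rather than a single error count keeps the raw artifact comparable across snapshots.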
Fast-path snapshot¶
Snapshot: February 17, 2026.
Workload:
- Endpoints: `GET /step-1` (Node), `GET /step-2` (Python), `GET /step-3` (PHP), `GET /step-4` (Rust)
- Runner: `tests/stress/benchmark-fastpath.py`
- Requests per point: 4000
- Concurrency matrix: 1, 2, 4, 8, 16, 20, 24, 32
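Each point in the sweep reduces to one throughput number: requests divided by elapsed wall time at that concurrency. A minimal sketch of that derivation, with made-up timings:

```python
def rps(requests: int, elapsed_seconds: float) -> float:
    """Requests per second for one (concurrency, runtime) point."""
    return requests / elapsed_seconds

# Hypothetical sweep: elapsed wall time for 4000 requests at each concurrency.
matrix = {1: 8.0, 2: 4.4, 4: 2.6}  # concurrency -> seconds (illustrative)
points = {c: rps(4000, t) for c, t in matrix.items()}
best_c = max(points, key=points.get)
```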
Best clean point (200 only):
| Runtime | Endpoint | Best clean point |
|---|---|---|
| Node | `/step-1` | 1772.69 RPS (c=16) |
| Python | `/step-2` | 878.73 RPS (c=16) |
| PHP | `/step-3` | 562.90 RPS (c=20) |
| Rust | `/step-4` | 866.69 RPS (c=20) |
Raw artifact: `tests/stress/results/2026-02-17-fastpath-default.json`
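"Best clean point (200 only)" means the highest-throughput concurrency level at which every response was an HTTP 200. A sketch of that selection logic over hypothetical per-point results (the `points` shape is illustrative, not the runner's output format):

```python
def best_clean_point(points):
    """Pick the highest-RPS point whose status mix is 100% HTTP 200.

    `points` maps concurrency -> (rps, status_mix dict).
    """
    clean = {
        c: r for c, (r, statuses) in points.items()
        if set(statuses) == {200}
    }
    if not clean:
        return None
    c = max(clean, key=clean.get)
    return c, clean[c]

points = {
    8: (1500.0, {200: 4000}),
    16: (1772.69, {200: 4000}),
    32: (1900.0, {200: 3950, 503: 50}),  # faster, but not clean
}
```

Note that the fastest point overall (c=32 here) loses to a slower clean point: errors disqualify it.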
Runtime daemon routing snapshot¶
Snapshot: March 14, 2026.
Workload:
- Fixture:
tests/fixtures/worker-pool - Request pattern:
6concurrent requests,3measured repeats,2warmup requests per case - Handler cost:
sleep(200ms) - Compared modes:
nativedocker- Compared settings:
runtime-daemons = 1runtime-daemons = 3
Results:
| Runtime | Path | Native, 1 daemon | Native, 3 daemons | Docker, 1 daemon | Docker, 3 daemons | What this means |
|---|---|---|---|---|---|---|
| Node | `/slow-node` | 276.7 ms | 243.1 ms | 284.1 ms | 258.9 ms | modest gain in both modes |
| Python | `/slow-python` | 1283.3 ms | 451.6 ms | 1928.0 ms | 450.1 ms | strong gain in both modes |
| PHP | `/slow-php` | 872.9 ms | 953.0 ms | 368.0 ms | 268.6 ms | worse in native, better in Docker |
| Rust | `/slow-rust` | 529.2 ms | 423.3 ms | 329.5 ms | 314.7 ms | better in both modes, but modest in Docker |
Raw artifacts:
- `tests/stress/results/2026-03-14-runtime-daemon-scaling-native.json`
- `tests/stress/results/2026-03-14-runtime-daemon-scaling-docker.json`
Follow-up check after removing per-request PHP process spawning:
- PHP native quick check: `1 daemon = 802.2ms`, `3 daemons = 625.9ms`
- improvement: 22.0%
- artifact: `tests/stress/results/2026-03-14-php-persistent-check.json`
- practical meaning: the earlier native PHP regression no longer represents the current runtime path
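The 22.0% figure follows directly from the two averages; a quick check of that arithmetic:

```python
def improvement_pct(before_ms: float, after_ms: float) -> float:
    """Relative latency improvement, as a percentage of the baseline."""
    return (before_ms - after_ms) / before_ms * 100

# PHP native quick check: 1 daemon vs 3 daemons.
php_native = improvement_pct(802.2, 625.9)
```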
How to read these numbers¶
This benchmark is useful because it shows real tradeoffs instead of one blanket story:
- adding daemons helped Python strongly in both modes
- adding daemons helped Node a little in both modes
- PHP originally reacted differently between native and Docker, but the follow-up run improved after removing per-request spawning inside the PHP daemon
- Rust improved in both modes, but the Docker gain was small enough to treat as workload-dependent
The practical conclusion is simple:
- do not turn on `runtime-daemons > 1` for every runtime by default
- measure the workload you actually care about
- treat `worker_pool` and `runtime-daemons` as separate controls
One more operational point matters here:
- FastFN now exposes socket-level health in `/_fn/health`
- a runtime can remain `up=true` while one socket is `up=false`
- the remaining sockets continue serving traffic while the failed daemon restarts
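A monitoring script could flag that partially degraded state from the health payload. The JSON shape below is an assumption for illustration, not the documented `/_fn/health` schema:

```python
def degraded_runtimes(health: dict) -> dict[str, list[int]]:
    """Return runtime -> indexes of down sockets, for runtimes still up overall."""
    out = {}
    for runtime, info in health.items():
        if not info.get("up"):
            continue  # whole runtime down: that is a different alert
        down = [i for i, sock in enumerate(info.get("sockets", []))
                if not sock.get("up")]
        if down:
            out[runtime] = down
    return out

# Assumed payload shape: runtime -> {"up": bool, "sockets": [{"up": bool}, ...]}
health = {
    "python": {"up": True, "sockets": [{"up": True}, {"up": False}, {"up": True}]},
    "node": {"up": True, "sockets": [{"up": True}]},
}
```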
That behavior is covered by `tests/integration/test-runtime-daemon-failover.sh`.
`worker_pool.max_workers` is a per-function admission and queueing control. `runtime-daemons` is a per-runtime routing control. They can work together, but they answer different questions.
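The distinction can be made concrete with a toy model (names and behavior here are illustrative, not FastFN internals): admission decides whether a request gets in at all, while routing only decides which daemon socket an admitted request lands on.

```python
from itertools import cycle

class FunctionAdmission:
    """Per-function admission: reject once max_workers requests are in flight."""
    def __init__(self, max_workers: int):
        self.max_workers = max_workers
        self.in_flight = 0

    def try_admit(self) -> bool:
        if self.in_flight >= self.max_workers:
            return False
        self.in_flight += 1
        return True

class RuntimeRouter:
    """Per-runtime routing: spread admitted requests across daemon sockets."""
    def __init__(self, daemons: int):
        self._next = cycle(range(daemons))

    def route(self) -> int:
        return next(self._next)

admission = FunctionAdmission(max_workers=2)
router = RuntimeRouter(daemons=3)
decisions = []
for _ in range(4):  # 4 simultaneous requests, none completing yet
    decisions.append(router.route() if admission.try_admit() else None)
```

The third and fourth requests are rejected by admission before routing ever sees them, which is why raising `runtime-daemons` cannot substitute for raising `max_workers`, and vice versa.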
Reproduce the runtime-daemon benchmark¶
- Start from a clean stack.
- Run the benchmark in `native`, `docker`, or both.
- Keep the same request count, warmup, and concurrency.
- Save the raw result under `tests/stress/results/`.
Minimal example:
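The exact runner invocation is not shown here. As a stand-in, this sketch reproduces the measurement shape from the workload above (2 warmup calls, then 3 measured repeats of 6 concurrent calls) against any callable; transport and endpoint are left out:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure(handler, *, warmup=2, repeats=3, concurrency=6):
    """Average wall time (ms) per repeat of `concurrency` parallel calls."""
    for _ in range(warmup):
        handler()  # warmup calls are not recorded
    samples = []
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(repeats):
            start = time.perf_counter()
            list(pool.map(lambda _: handler(), range(concurrency)))
            samples.append((time.perf_counter() - start) * 1000)
    return sum(samples) / len(samples), samples

# Stand-in for an HTTP call to a sleep(200ms)-style handler, scaled down.
avg_ms, samples = measure(lambda: time.sleep(0.01))
```

In a real run `handler` would issue the HTTP request to the function under test, and `samples` is what belongs in the raw artifact.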
Validation check:
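As a sketch of what a validation pass could assert over a saved artifact (the JSON layout is an assumption for illustration):

```python
def validate_artifact(result: dict, max_error_rate: float = 0.01) -> list[str]:
    """Return a list of problems found in one benchmark result dict."""
    problems = []
    for case, data in result.get("cases", {}).items():
        statuses = data.get("status_mix", {})
        total = sum(statuses.values())
        errors = total - statuses.get(200, 0)
        if total == 0:
            problems.append(f"{case}: no samples recorded")
        elif errors / total > max_error_rate:
            problems.append(f"{case}: error rate {errors / total:.1%}")
        if "avg_ms" not in data:
            problems.append(f"{case}: missing avg_ms")
    return problems

result = {"cases": {"native-1": {"status_mix": {200: 18}, "avg_ms": 802.2}}}
```

An empty problem list means the numbers are comparable with earlier snapshots; anything else means the run should be repeated, not published.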
Notes¶
- Results depend on host CPU, background load, and runtime install/build state.
- Native and Docker modes can behave differently, so publish both if you care about both.
- A better average time is useful only if the error rate stays acceptable.
- Docker Python with one daemon showed the highest variance in this snapshot, so always inspect the raw samples, not only the average.
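Inspecting raw samples rather than averages can be as simple as computing the relative spread per case (assuming the artifact stores per-repeat samples):

```python
from statistics import mean, stdev

def sample_spread(samples: list[float]) -> tuple[float, float, float]:
    """Mean, standard deviation, and relative spread of raw timing samples."""
    m = mean(samples)
    s = stdev(samples)
    return m, s, s / m

# Two hypothetical cases with the same average but very different stability.
stable = [450.0, 451.0, 449.0]
noisy = [200.0, 450.0, 700.0]
```

Both cases average 450 ms, but only one of them is a number worth publishing; that is the Docker Python, one-daemon situation in miniature.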
Troubleshooting¶
- If one runtime looks much slower than expected, inspect `/_fn/health` first and confirm all sockets are up.
- If results vary too much between runs, increase warmup and repeats.
- If Rust gets slower with more daemons, verify that the extra process overhead is not larger than the handler cost.
- If PHP gets slower again, check first that the runtime is still using persistent PHP workers and not falling back to a one-shot execution path.
- If Node or Python do not improve, confirm that the extra daemon count is really active in `/_fn/health`.
Next step¶
Continue with Scale runtime daemons if you want to tune counts, or Run and test if you want to turn these checks into a repeatable validation flow.
Related links¶
- Architecture
- Function specification
- Global config
- Scale runtime daemons
- Run and test
- HTTP API reference
- Platform runtime plumbing