Performance
This page summarises the results of performance testing conducted on openFHIR Enterprise 2.0.5. All tests were run under sustained concurrent load using Apache JMeter with 20 threads, a 30-second ramp-up, and 100 iterations per operation type (toFHIR and toOpenEHR). The test dataset covers five mapping domains from the MII KDS profile set: Laborbericht, Fall, Procedure, Medikationsverabreichung, and a mixed set — 95 distinct request payloads per direction.
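The documented load profile (20 threads, 30-second ramp-up, 100 iterations) can be reproduced with a non-GUI JMeter run along these lines. This is a sketch: the test-plan file name and the `-J` property names are assumptions, not part of a published test kit, and a real plan would need to read them via `__P()` functions.

```shell
# Non-GUI JMeter run mirroring the documented profile:
# 20 threads, 30 s ramp-up, 100 iterations per operation type.
# mapping-load.jmx and the property names are placeholders.
jmeter -n \
  -t mapping-load.jmx \
  -Jthreads=20 -Jrampup=30 -Jloops=100 \
  -l results.jtl -e -o report/
```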
Note
A detailed breakdown of the performance tests is available at https://open-fhir.com/performance-results
Test matrix
Eight configurations were tested, varying CPU allocation, memory, and database backend:
| CPUs | Memory | DB | Mean (ms) | Median (ms) | P90 (ms) | P95 (ms) |
|---|---|---|---|---|---|---|
| 2 | 2 GB | MongoDB | 687 | 391 | 1 668 | 2 301 |
| 2 | 2 GB | PostgreSQL | 1 365 | 472 | 2 682 | 5 134 |
| 4 | 4 GB | MongoDB | 106 | 28 | 302 | 526 |
| 4 | 4 GB | PostgreSQL | 238 | 47 | 411 | 841 |
| 4 | 8 GB | MongoDB | 55 | 22 | 143 | 230 |
| 4 | 8 GB | PostgreSQL | 388 | 53 | 801 | 1 605 |
| 8 | 8 GB | MongoDB | 22 | 12 | 44 | 67 |
| 8 | 8 GB | PostgreSQL | 34 | 19 | 74 | 112 |
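For readers deriving these summary columns from raw JMeter samples, the computation looks roughly like this. A minimal sketch: the nearest-rank percentile convention used here is an assumption, and JMeter's own report may differ slightly at small sample sizes.

```python
import statistics

def summarize(latencies_ms):
    """Compute the summary statistics used in the table above."""
    s = sorted(latencies_ms)

    def pct(p):
        # Nearest-rank percentile: one common convention (an assumption;
        # other tools may interpolate between ranks instead).
        k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
        return s[k]

    return {
        "mean": statistics.fmean(s),
        "median": statistics.median(s),
        "p90": pct(90),
        "p95": pct(95),
    }

# Toy sample for illustration, not real measurement data.
sample = [12, 15, 18, 20, 22, 25, 30, 44, 60, 120]
print(summarize(sample))
```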
Throughput and peak load
All scenarios processed 20 000 requests (20 concurrent threads). The table below shows sustained throughput and the median latency observed during each run.
| CPUs | Memory | DB | Throughput (req/s) | Median (ms) |
|---|---|---|---|---|
| 2 | 2 GB | MongoDB | 114.6 | 391 |
| 2 | 2 GB | PostgreSQL | 91.1 | 472 |
| 4 | 4 GB | MongoDB | 123.0 | 28 |
| 4 | 4 GB | PostgreSQL | 107.4 | 47 |
| 4 | 8 GB | MongoDB | 128.3 | 22 |
| 4 | 8 GB | PostgreSQL | 105.7 | 53 |
| 8 | 8 GB | MongoDB | 127.5 | 12 |
| 8 | 8 GB | PostgreSQL | 124.5 | 19 |
The highest sustained throughput observed was 128 req/s at 4 CPUs / 8 GB / MongoDB. Adding a further 4 CPUs (8 CPUs total) did not materially increase throughput — it primarily reduced latency, with median dropping from 22 ms to 12 ms. This suggests the engine reaches throughput saturation around 125–128 req/s under the tested 20-thread load profile, and that additional CPUs beyond 4 are better justified by latency SLO requirements than raw throughput.
PostgreSQL configurations consistently cap around 105–107 req/s under 4 CPUs due to higher per-request DB overhead. At 8 CPUs PostgreSQL reaches 124 req/s with a median of 19 ms — on par with MongoDB.
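The saturation reading can be sanity-checked with Little's law, L = λ·W: the average number of in-flight requests equals throughput times mean latency. Applied to the 4 CPUs / 8 GB / MongoDB row:

```python
# Little's law: average concurrency L = throughput (lambda) x mean latency (W).
def in_flight(throughput_rps: float, mean_latency_ms: float) -> float:
    return throughput_rps * mean_latency_ms / 1000.0

# 4 CPUs / 8 GB / MongoDB: 128.3 req/s at a 55 ms mean latency.
concurrency = in_flight(128.3, 55)
print(round(concurrency, 1))  # ~7.1 requests in flight on average
```

One plausible reading: an average of roughly seven in-flight requests against 20 client threads suggests each thread spends much of its cycle outside the measured mapping call (client-side overhead, connection handling), consistent with throughput plateauing before the thread pool is fully utilised.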
toFHIR vs toOpenEHR breakdown
| CPUs | Memory | DB | toFHIR avg (ms) | toOpenEHR avg (ms) |
|---|---|---|---|---|
| 2 | 2 GB | MongoDB | 683 | 629 |
| 2 | 2 GB | PostgreSQL | 1 834 | 891 |
| 4 | 4 GB | MongoDB | 101 | 103 |
| 4 | 4 GB | PostgreSQL | 352 | 112 |
| 4 | 8 GB | MongoDB | 52 | 53 |
| 4 | 8 GB | PostgreSQL | 446 | 324 |
| 8 | 8 GB | MongoDB | 18 | 21 |
| 8 | 8 GB | PostgreSQL | 45 | 21 |
Analysis
CPU is the dominant factor. The step from 2 to 4 CPUs delivers the largest single improvement: mean latency drops by roughly 6–7× regardless of database or memory. Moving from 4 to 8 CPUs at the same memory level brings a further 2.5× reduction with MongoDB and over 10× with PostgreSQL. Mapping is a CPU-bound workload; additional cores sharply reduce latency, although sustained throughput plateaus around 125–128 req/s under the tested load profile.
MongoDB consistently outperforms PostgreSQL. The gap is most pronounced under resource pressure. At 2 CPUs / 2 GB, PostgreSQL mean latency is 2× MongoDB's, and peak latency reaches 30 s versus 8 s for MongoDB, reflecting connection contention under burst conditions. The gap narrows significantly at 8 CPUs / 8 GB, where both backends converge to similar throughput (~125 req/s) and P95 latencies under 200 ms.
toFHIR and toOpenEHR are comparable under sufficient resources. At 2 CPUs toFHIR runs slower (particularly with PostgreSQL, 1 834 ms vs 891 ms), reflecting its more complex query and transformation path. From 4 CPUs upward with MongoDB both directions converge to within a few milliseconds of each other, indicating the engine itself is no longer the bottleneck.
Recommendations
- **Minimum production configuration: 4 CPUs, 4 GB RAM, MongoDB.** Delivers 123 req/s sustained throughput with a 28 ms median latency.
- **Recommended production configuration: 4 CPUs, 8 GB RAM, MongoDB.** Adds ~5 req/s of throughput and cuts median latency to 22 ms compared with the 4 GB variant, at minimal cost.
- **High-throughput / low-latency deployments: 8 CPUs, 8 GB RAM, MongoDB.** P95 drops to 67 ms and median to 12 ms. PostgreSQL is a viable alternative at this resource level if operational constraints require it (median 19 ms, 124 req/s).
- **PostgreSQL requires more resources to reach latency parity with MongoDB.** Budget at least 8 CPUs; at 4 CPUs its P95 latencies ran 1.6–7× higher than MongoDB's in these tests.
- **Avoid 2 CPU deployments under sustained concurrent load.** Throughput drops to 91–115 req/s and tail latencies exceed 2 s at P95.
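The recommended resource profile can be pinned at the container level, for example with Docker's CPU and memory flags. A sketch only: the image name and port below are placeholders, not official values; check your deployment documentation for the actual ones.

```shell
# Cap the engine at the recommended 4 CPUs / 8 GB profile.
# Image name and port are hypothetical placeholders.
docker run -d \
  --cpus=4 --memory=8g \
  -p 8080:8080 \
  openfhir/openfhir-enterprise:2.0.5
```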