# Performance
This page summarises the results of performance testing conducted on openFHIR Enterprise. All tests were run under sustained concurrent load using Apache JMeter with 20 threads, a 30-second ramp-up, and 100 iterations per operation type (toFHIR and toOpenEHR). The test dataset covers five mapping domains from the MII KDS profile set (Laborbericht, Fall, Procedure, Medikationsverabreichung, and a mixed set), for a total of 95 distinct request payloads per direction.
> **Note:** A detailed breakdown of the performance tests is available at https://open-fhir.com/performance-results
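The load profile described above can be reproduced with a non-GUI JMeter run along these lines. This is a sketch, not a shipped artifact: `testplan.jmx` and `results.jtl` are placeholder file names, and the `-J` properties assume the test plan reads thread count, ramp-up, and loop count via `__P()` expressions.

```shell
# Non-GUI JMeter run: 20 threads, 30 s ramp-up, 100 iterations per operation type.
# testplan.jmx / results.jtl are placeholder names for illustration only.
jmeter -n -t testplan.jmx -l results.jtl \
  -Jthreads=20 -Jrampup=30 -Jloops=100
```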
## Test matrix
Six configurations were tested, varying CPU allocation, memory, and database backend:
| CPUs | Memory | DB | Mean (ms) | Median (ms) | P90 (ms) | P95 (ms) |
|---|---|---|---|---|---|---|
| 2 | 2 GB | MongoDB | 600 | 283 | 1 493 | 2 113 |
| 2 | 2 GB | PostgreSQL | 792 | 261 | 1 381 | 2 350 |
| 4 | 8 GB | MongoDB | 43 | 16 | 110 | 184 |
| 4 | 8 GB | PostgreSQL | 108 | 31 | 199 | 374 |
| 8 | 8 GB | MongoDB | 17 | 9 | 35 | 50 |
| 8 | 8 GB | PostgreSQL | 33 | 17 | 63 | 101 |
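As a sanity check on how the columns relate, the statistics in the table can be recomputed from raw per-request latencies. The sketch below uses illustrative sample values, not the measured openFHIR data, and a simple nearest-rank percentile (JMeter's own percentile calculation may differ in interpolation details).

```python
import math
from statistics import mean, median

def percentile(samples, p):
    """Nearest-rank percentile: smallest value such that at least
    p% of the samples lie at or below it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Illustrative per-request latencies in ms -- NOT the measured openFHIR data.
latencies = [12, 15, 16, 18, 21, 25, 40, 95, 110, 180]

print("mean:  ", mean(latencies))            # 53.2
print("median:", median(latencies))          # 23.0
print("P90:   ", percentile(latencies, 90))  # 110
print("P95:   ", percentile(latencies, 95))  # 180
```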
## Throughput and peak load
All scenarios processed 20 000 requests (20 concurrent threads). The table below shows sustained throughput and median latency observed during each run.
| CPUs | Memory | DB | Throughput (req/s) | Median (ms) |
|---|---|---|---|---|
| 2 | 2 GB | MongoDB | 131.2 | 283 |
| 2 | 2 GB | PostgreSQL | 113.2 | 261 |
| 4 | 8 GB | MongoDB | 145.5 | 16 |
| 4 | 8 GB | PostgreSQL | 144.5 | 31 |
| 8 | 8 GB | MongoDB | 142.3 | 9 |
| 8 | 8 GB | PostgreSQL | 145.5 | 17 |
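Sustained throughput is simply completed requests divided by wall-clock run time, so each row also implies a total run duration. A quick derivation, using throughput figures from the table (the durations are computed here, not measured):

```python
# Sustained throughput = completed requests / wall-clock run time,
# so run time = requests / throughput. Figures from the table above.
TOTAL_REQUESTS = 20_000

def run_seconds(throughput_req_s: float) -> float:
    return TOTAL_REQUESTS / throughput_req_s

for label, tput in [("2 CPUs / 2 GB / MongoDB", 131.2),
                    ("4 CPUs / 8 GB / MongoDB", 145.5)]:
    print(f"{label}: ~{run_seconds(tput):.0f} s per run")
```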
The highest sustained throughput observed was 145.5 req/s, reached both at 4 CPUs / 8 GB with MongoDB and at 8 CPUs / 8 GB with PostgreSQL; every configuration with at least 4 CPUs lands within about 1 req/s of that figure. At 8 CPUs throughput stays flat while latency drops sharply: the median falls to 9 ms for MongoDB and 17 ms for PostgreSQL. This suggests the engine reaches throughput saturation around 145 req/s under the tested 20-thread load profile, and that additional CPUs beyond 4 are better justified by latency SLO requirements than by raw throughput.
MongoDB and PostgreSQL perform comparably at 4 CPUs / 8 GB and above. The gap is more visible under resource pressure at 2 CPUs / 2 GB, where PostgreSQL tail latencies are higher.
## Analysis
CPU is the dominant factor. The step from 2 CPUs / 2 GB to 4 CPUs / 8 GB delivers the largest single improvement: mean latency drops roughly 14× with MongoDB (600 ms to 43 ms) and roughly 7× with PostgreSQL (792 ms to 108 ms), though note that this step raises memory as well as CPU. Moving from 4 to 8 CPUs roughly halves median latency again (16 ms to 9 ms for MongoDB, 31 ms to 17 ms for PostgreSQL). Mapping is a CPU-bound workload; within the tested range, adding cores keeps reducing latency even after throughput has saturated.
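These scaling factors can be recomputed directly from the mean-latency column of the test matrix:

```python
# Mean latencies (ms) from the test matrix above.
mean_ms = {("2cpu", "mongo"): 600, ("4cpu", "mongo"): 43, ("8cpu", "mongo"): 17,
           ("2cpu", "pg"):    792, ("4cpu", "pg"):    108, ("8cpu", "pg"):    33}

def speedup(db, frm, to):
    """Latency improvement factor when moving from one configuration to another."""
    return mean_ms[(frm, db)] / mean_ms[(to, db)]

print(f"2->4 CPUs, MongoDB:    {speedup('mongo', '2cpu', '4cpu'):.1f}x")  # 14.0x
print(f"2->4 CPUs, PostgreSQL: {speedup('pg',    '2cpu', '4cpu'):.1f}x")  # 7.3x
print(f"4->8 CPUs, MongoDB:    {speedup('mongo', '4cpu', '8cpu'):.1f}x")  # 2.5x
```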
MongoDB and PostgreSQL converge at sufficient resources. At 4 CPUs / 8 GB both backends deliver ~145 req/s with median latencies of 16 ms and 31 ms respectively. At 8 CPUs the gap narrows further (9 ms vs 17 ms). PostgreSQL is a fully viable backend at these resource levels.
Under resource pressure MongoDB has an advantage. At 2 CPUs / 2 GB, PostgreSQL P95 reaches 2 350 ms versus 2 113 ms for MongoDB, and mean latency is 32% higher. If deployment resources are constrained, MongoDB is the safer choice.
toFHIR and toOpenEHR are comparable under sufficient resources. Both directions converge to within a few milliseconds of each other from 4 CPUs upward with MongoDB, indicating the engine itself is not the bottleneck at that resource level.
## Recommendations
- **Minimum production configuration: 4 CPUs, 8 GB RAM, MongoDB.** Delivers 145 req/s sustained throughput with a 16 ms median and 184 ms P95.
- **Recommended production configuration: 4 CPUs, 8 GB RAM, MongoDB or PostgreSQL.** PostgreSQL at this tier delivers equivalent throughput (144 req/s) and acceptable latency (31 ms median, 374 ms P95).
- **High-throughput / low-latency deployments: 8 CPUs, 8 GB RAM.** P95 drops to 50 ms (MongoDB) or 101 ms (PostgreSQL), and median latency to 9 ms and 17 ms respectively.
- **Avoid 2 CPU deployments under sustained concurrent load.** Tail latencies exceed 2 s at P95 for both backends.