Author: Qi Zhao, Luc des Trois Maisons
Introduction
Atomic/Batch Transactions is a proposed amendment to the XRPL that enables the atomic execution of multiple transactions as a single unit. By ensuring that complex, multi-step operations either fully succeed or entirely revert, this feature introduces significant usability and flexibility improvements, enabling use cases such as atomic swaps, conditional token minting, and integrated fee management.
To evaluate the performance implications of Batch transactions, the RippleX performance team conducted targeted benchmarking tests. This report details our testing methodology, results, and observations, highlighting the impact of Batch transactions on ledger throughput, consensus latency, and resource utilization.
Testing Objectives
The primary objectives for Batch transaction performance testing are to:
- Assess ledger throughput and response times under varying Batch transaction scenarios, including “All or Nothing”, “Only One”, “Until Failure”, and “Independent” modes.
- Identify performance bottlenecks specifically related to Batch transaction processing.
- Ensure the network maintains a consensus latency of at most 5 seconds under high load.
Testing Methodology
Capacity Planning
To measure the maximum performance impact of integrating Batch transactions, we establish a deadline of 5 seconds for network ledger validation. This threshold ensures that the network remains in optimal condition while processing transactions. During testing, we monitor both ledger and consensus performance to detect any occurrence of the network failing to meet this deadline. Each discrete occurrence where the network fails to do so is noted, and is referred to as an “overvalidation” in the results below.
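As a rough illustration of how overvalidations are counted, the sketch below compares consecutive validated-ledger close times against the 5-second deadline. It is a minimal sketch with hypothetical inputs: the timestamps and the `count_overvalidations` helper are illustrative, not part of any rippled API.

```python
from datetime import datetime, timedelta

# Hypothetical validated-ledger close timestamps collected during a test run.
close_times = [
    datetime(2024, 8, 22, 12, 0, 0),
    datetime(2024, 8, 22, 12, 0, 4),
    datetime(2024, 8, 22, 12, 0, 10),  # 6-second gap -> one overvalidation
    datetime(2024, 8, 22, 12, 0, 14),
]

DEADLINE = timedelta(seconds=5)

def count_overvalidations(times, deadline=DEADLINE):
    """Count close-to-close intervals that exceed the validation deadline."""
    gaps = [later - earlier for earlier, later in zip(times, times[1:])]
    return sum(1 for gap in gaps if gap > deadline)

print(count_overvalidations(close_times))  # -> 1 (out of 3 intervals)
```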
Test Environment
Our testing infrastructure simulates a private XRPL environment with 9 nodes, matching Ripple's MainNet in terms of hardware specifications. The key features of this environment are:
- Network Setup:
- 5 nodes function as validators.
- 4 nodes operate in P2P mode, serving as non-validating client gateways, enabling our load generators to interface with the network.
- Uniform System Specifications: Each system mirrors the hardware specifications found in Ripple’s MainNet rippled infrastructure. Specifically, the nodes are hosted on AWS EC2’s z1d.2xlarge instance type, which provides 8 CPU cores, 64GB RAM, and 300GB of NVMe SSD storage dedicated to the rippled database.
- Network Configuration: All nodes operate within the same AWS region and are interconnected through a shared VPC.
- Load Submission: During testing, 4 load servers actively and continuously submit transactions to the 4 P2P nodes (see the submission sketch after this list).
- Account Setup: A total of 250K synthetic accounts were established, built atop a snapshot of the MainNet ledger captured on Aug 22nd, 2024. Provisions were made in the simulation logic to ensure the uniqueness of source and destination accounts for every consensus cycle.
- Configuration Consistency: The rippled configuration used for the test network was sourced from a UNL Ripple MainNet validator. For testing purposes, a minimal set of necessary changes is applied to segregate our network and align with the specific needs of our environment.
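For reference, the load servers submit pre-signed transaction blobs through rippled's JSON-RPC `submit` method on the P2P nodes. The sketch below is a minimal illustration under assumptions: the endpoint URL and the transaction blob are placeholders, and the real load generators sign and submit at far higher rates than this single call suggests.

```python
import json
import urllib.request

# Placeholder JSON-RPC endpoint of one of the P2P (non-validating) nodes.
RIPPLED_URL = "http://p2p-node-1:5005/"

def submit_tx_blob(tx_blob: str) -> dict:
    """Submit a pre-signed transaction blob via rippled's JSON-RPC `submit` method."""
    payload = {"method": "submit", "params": [{"tx_blob": tx_blob}]}
    request = urllib.request.Request(
        RIPPLED_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Example call; the blob below is a placeholder, not a valid signed transaction.
# result = submit_tx_blob("1200002280000000...")
```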
Test Data Setup
Four primary Batch transaction scenarios are tested (a payload sketch illustrating the corresponding mode flags follows the list):
- All or Nothing: For each Batch transaction submitted under this mode, either all inner transactions apply successfully, or they are all discarded with no effect on ledger state.
- Only One: In this mode, at most one inner transaction may be accepted and applied to the ledger state.
- Until Failure: Inner transactions are applied in the order submitted in the Batch, and may each be accepted in turn and update the ledger state. However, should an inner transaction fail to apply, all subsequent inner transactions must also be discarded.
- Independent: All inner transactions are considered independently, and may be accepted or rejected regardless of the outcome of the others submitted in the Batch.
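To make the four modes concrete, here is a minimal sketch of how a Batch payload might be assembled. The field names and flag values reflect our reading of the draft Batch (XLS-56) specification and should be treated as assumptions that may differ in the final amendment; signing, fees, and sequence handling are omitted.

```python
# Mode flags as we read them in the draft spec; treat these values as assumptions.
TF_ALL_OR_NOTHING = 0x00010000
TF_ONLY_ONE = 0x00020000
TF_UNTIL_FAILURE = 0x00040000
TF_INDEPENDENT = 0x00080000

def make_batch(account: str, inner_txs: list, mode_flag: int) -> dict:
    """Assemble an unsigned Batch payload; field names mirror the draft spec."""
    return {
        "TransactionType": "Batch",
        "Account": account,
        "Flags": mode_flag,
        # Each inner transaction is wrapped in a RawTransactions entry.
        "RawTransactions": [{"RawTransaction": tx} for tx in inner_txs],
    }

# Hypothetical accounts and amounts (in drops).
inner = [
    {"TransactionType": "Payment", "Account": "rSender", "Destination": "rDestA", "Amount": "1000000"},
    {"TransactionType": "Payment", "Account": "rSender", "Destination": "rDestB", "Amount": "2000000"},
]
batch = make_batch("rSender", inner, TF_ALL_OR_NOTHING)
```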
Load Modeling
Metrics of Interest
Each scenario's performance metrics included:
- Transaction throughput (TPS)
- Response times
- Consensus latency (overvalidation rate)
- CPU and memory utilization
Transaction Distribution
We integrated a realistic payment distribution into each Batch, encompassing various transaction types:
- XRP-to-XRP transaction
- IOU direct transaction
- AMM/LOB 1-path, 1-step transaction
- AMM/LOB 3-path, 3-step transaction
- AMM/LOB 6-path, 8-step transaction
Each Batch consisted of the maximum allowed 8 inner transactions. Batches were then submitted to evaluate ledger performance metrics.
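To illustrate how such a mixed Batch could be assembled, the sketch below samples 8 inner payment types for one Batch. The distribution weights are hypothetical (the report does not specify exact proportions), as is the sampling helper; the real load generator draws accounts and amounts from the MainNet-derived data set described earlier.

```python
import random

# Hypothetical weights for the payment mix; the exact proportions are not
# specified in this report.
PAYMENT_MIX = {
    "xrp_to_xrp": 0.40,
    "iou_direct": 0.30,
    "amm_lob_1path_1step": 0.15,
    "amm_lob_3path_3step": 0.10,
    "amm_lob_6path_8step": 0.05,
}

MAX_INNER_TXS = 8  # maximum number of inner transactions per Batch

def sample_batch_payment_types(rng: random.Random) -> list:
    """Pick 8 inner payment types according to the (assumed) distribution weights."""
    kinds = list(PAYMENT_MIX)
    weights = [PAYMENT_MIX[k] for k in kinds]
    return rng.choices(kinds, weights=weights, k=MAX_INNER_TXS)

print(sample_batch_payment_types(random.Random(42)))
```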
Error Case Simulation
To rigorously test the atomicity guarantees provided by Batch transactions, we deliberately introduced an error in the final inner transaction of each Batch. This transaction was crafted to pass preflight validation checks but to fail during the consensus phase, causing the entire Batch to revert and testing the system's ability to correctly handle such atomic failures.
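One way to construct such an inner transaction, shown here purely as an illustration since the report does not specify the exact error injected, is a payment whose amount exceeds the sender's spendable balance: it is structurally valid and passes preflight-style checks, but fails when applied. The helper below is hypothetical.

```python
def make_failing_inner_payment(sender: str, destination: str, balance_drops: int) -> dict:
    """Build a well-formed payment expected to fail on apply (insufficient funds).

    The amount is one drop more than the sender holds, so the transaction passes
    structural (preflight-style) checks but cannot be applied, which in
    "All or Nothing" mode should cause the whole Batch to revert.
    """
    return {
        "TransactionType": "Payment",
        "Account": sender,
        "Destination": destination,
        "Amount": str(balance_drops + 1),  # illustrative; ignores the account reserve
    }

failing_tx = make_failing_inner_payment("rSender", "rDest", balance_drops=1_000_000)
```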
Performance Results
Payment Baseline
Transaction Type | TPS (requests/s) | Ledger Throughput (txns/s) | Mean Ledger Publish Latency (s) | Response Time (ms) | Over Validation (count) | CPU Utilization (%) | Memory Usage (rippled RES, GB / total GB) |
---|---|---|---|---|---|---|---|
Payment Mixed load | 161.22 | 161.22 | 3.91 | 7.56 | 2 out of 936 ledgers | 10 | 16 / 64 |
System Statistics
Novel Batch Scenarios
Scenario | Batch TPS (requests/s) | Ledger Throughput (txns/s) | Mean Ledger Publish Latency (s) | Response Time (ms) | Over Validation (count) | CPU Utilization (%) | Memory Usage (rippled RES, GB / total GB) |
---|---|---|---|---|---|---|---|
All or Nothing w/ no expected failures | 29.45 | 266.61 | 4.029 | 5.81 | 4 out of 908 ledgers | 9 | 21 / 64 |
All or Nothing w/ trailing erroneous inner transaction | 75.63 | 75.41 | 3.959 | 6.26 | 4 out of 924 ledgers | 9 | 27.52 / 64 (43%) |
Only One | 51.46 | 102.58 | 3.589 | 5.57 | 2 out of 1020 ledgers | 8 | 20.6 / 64 |
Until Failure w/ no expected failures @ 150 TPS target throughput | 51.42 | 465.07 | 5.518 | 7.98 | 503 out of 663 ledgers | 14 | 23.7 / 64 (37%) |
Until Failure w/ no expected failures @ 60 TPS target throughput | 21.65 | 195.34 | 3.741 | 5.22 | 3 out of 978 ledgers | 7 | 16.6 / 64 (26%) |
Until Failure w/ trailing erroneous inner transaction @ 60 TPS target throughput | 21.67 | 174.09 | 3.532 | 5.38 | 2 out of 1036 ledgers | 7 | 17.9 / 64 (28%) |
Independent @ 150 TPS target throughput | 51.42 | 464.07 | 5.244 | 7.05 | 400 out of 698 ledgers | 11 | 20.5 / 64 (32%) |
Independent @ 100 TPS target throughput | 35.60 | 321.29 | 4.522 | 5.87 | 60 out of 809 ledgers | 9 | 18.6 / 64 (29%) |
Independent @ 83 TPS target throughput | 29.75 | 269.12 | 4.315 | 6.48 | 34 out of 848 ledgers | 12 | 24.3 / 64 (38%) |
Independent @ 60 TPS target throughput | 21.65 | 196.00 | 3.782 | 5.40 | 3 out of 968 ledgers | 7 | 16.0 / 64 (25%) |
Observations
- There appears to be a significant quantity of emergency throughput available, at the expense of missing our desired 5-second ledger publishing deadline.
- Each successfully applied inner transaction adds the Batch submission throughput to total ledger throughput (see the worked example after this list).
- Failure scenarios and rejected inner transactions can impose a significant throughput opportunity cost on the ledger.
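As a worked example of this leverage (assuming the outer Batch wrapper is itself counted in ledger throughput alongside its inner transactions): in the “All or Nothing w/ no expected failures” run, 29.45 Batches per second with 8 inner transactions each works out to roughly 29.45 × (8 + 1) ≈ 265 ledger transactions per second, in line with the observed 266.61. Conversely, in the trailing-failure run every Batch reverts, so 75.63 Batches per second yield only about 75 transactions per second of ledger throughput, which is the opportunity cost noted above.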
Conclusion
Total ledger throughput does not appear to be impaired when ledger processing includes Batch transactions. However, it should be noted that Batch transactions containing the full set of 8 inner transactions apply significant leverage to the XRPL’s ledger throughput. We should be mindful of the potential for increased impact when making ongoing performance decisions that involve or affect Batch transactions.