10 stable releases

2.0.3 Mar 16, 2020
2.0.2 Oct 24, 2019
2.0.1 Sep 11, 2019
2.0.0 Mar 28, 2019
1.9.0 Jan 30, 2018


MIT/Apache

75KB
1.5K SLoC

parity-common

Collection of crates used in Parity Technologies projects


lib.rs:

Generic Transaction Pool

An extensible and performant implementation of an Ethereum transaction pool. The pool stores ordered, verified transactions according to a pluggable Scoring implementation. It also lets you construct a set of pending transactions according to a pluggable notion of Readiness.

The pool is generic over transactions and should make no assumptions about them. The only thing we can rely on is the Scoring that defines:

  • the ordering of transactions from a single sender
  • the priority of a transaction relative to transactions from other senders

NOTE: transactions from a single sender are not ordered by priority; nevertheless, when constructing the pending set we always need to maintain that sender's ordering (i.e. txs[1] must always be included after txs[0], even if it has a higher priority).

Design Details

Performance assumptions:

  • Possibility to handle tens of thousands of transactions
  • Fast insertions and replacements O(per-sender + log(senders))
  • Reasonably fast removal of stalled transactions O(per-sender)
  • Reasonably fast construction of pending set O(txs * (log(senders) + log(per-sender)))

The removal performance could be improved by trading some memory. Currently a SmallVec is used to store a sender's transactions; we could instead use a VecDeque and efficiently pop_front the best transactions.

The pending set construction and insertion complexity could be reduced by introducing a notion of nonce - an absolute, numeric ordering of transactions. We don't do that because of the possible implications of EIP208, where the nonce might not be explicitly available.

  1. The pool groups transactions from a particular sender together and stores them ordered by Scoring within that group, i.e. HashMap<Sender, Vec<Transaction>>.
  2. Additionally we maintain the best and the worst transaction from each sender (by Scoring, not priority), ordered by priority. This means we can easily identify both the best and the worst transaction in the entire pool.
  3. Whenever a new transaction is inserted into the queue:
    • first check all the limits (overall, memory, per-sender)
    • retrieve all transactions from that sender
    • binary search for the position to insert the transaction at
    • decide if we are replacing an existing transaction (3 outcomes: drop, replace, insert)
    • update the best and worst transaction from that sender if affected
  4. Pending list construction:
    • Take the best transaction (by priority) across all senders and add it to the list
    • Replace it with the next transaction (by ordering) from that sender (if any)
    • Repeat
