Friday, October 20, 2023

Bulk visitation in boost::concurrent_flat_map

Introduction

boost::concurrent_flat_map and its boost::concurrent_flat_set counterpart are Boost.Unordered's associative containers for high-performance concurrent scenarios. These containers dispense with iterators in favor of a visitation-based interface:

boost::concurrent_flat_map<int, int> m;
...
// find the element with key k and increment its associated value
m.visit(k, [](auto& x) {
  ++x.second;
});

This design choice was made because visitation is not affected by some inherent problems afflicting iterators in multithreaded environments.

Starting in Boost 1.84, code like the following:

std::array<int, N> keys;
...
for(const auto& key: keys) {
  m.visit(key, [](auto& x) { ++x.second; });
}

can be written more succinctly via the so-called bulk visitation API:

m.visit(keys.begin(), keys.end(), [](auto& x) { ++x.second; });

As it happens, bulk visitation is not provided merely for syntactic convenience: this operation is internally optimized so that it performs significantly faster than the original for-loop. We discuss here the key ideas behind bulk visitation's internal design and analyze its performance.

Prior art

In their paper "DRAMHiT: A Hash Table Architected for the Speed of DRAM", Narayanan et al. explore some optimization techniques from the domain of distributed systems as translated to concurrent hash tables running on modern multi-core architectures with hierarchical caches. In particular, they note that cache misses can be avoided by batching requests to the hash table, prefetching the memory positions required by those requests and then completing the operations asynchronously once enough time has passed for the data to be effectively retrieved. Our bulk visitation implementation draws inspiration from this technique, although in our case visitation is fully synchronous and in-order, and it is the responsibility of the user to batch keys before calling the bulk overload of boost::concurrent_flat_map::visit.

Bulk visitation design

As discussed in a previous article, boost::concurrent_flat_map uses an open-addressing data structure comprising:

  • A bucket array split into 2^n groups of N = 15 slots.
  • A metadata array associating a 16-byte metadata word to each slot group, used for SIMD-based reduced-hash matching.
  • An access array with a spinlock (and some additional information) for locked access to each group.
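
For illustration, here is a minimal sketch of that layout (identifiers and representation choices are ours, not the actual Boost.Unordered internals, and element storage is simplified):

#include <array>
#include <atomic>
#include <cstdint>
#include <vector>

// One 16-byte metadata word per group: a reduced hash per slot plus an
// overflow byte, suitable for SIMD-based matching.
struct group_metadata
{
  std::array<std::uint8_t, 15> reduced_hashes;
  std::uint8_t                 overflow;
};

// Per-group access control: a spinlock plus some additional information.
struct group_access
{
  std::atomic_flag locked; // stand-in for the actual read-write spinlock
};

// Bucket array split into 2^n groups of N = 15 slots each.
template<typename Value>
struct table_layout
{
  std::vector<group_metadata>        metadata; // one word per group
  std::vector<group_access>          access;   // one lock per group
  std::vector<std::array<Value, 15>> groups;   // the slots themselves
};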

The happy path for successful visitation looks like this:

  1. The hash value for the looked-up key and its mapped group position are calculated.
  2. The metadata for the group is retrieved and matched against the hash value.
  3. If the match is positive (which is the case for the happy path), the group is locked for access and the element indicated by the matching mask is retrieved and compared with the key. Again, in the happy path this comparison is positive (the element is found); in the unhappy path, more elements (within this group or beyond) need be checked.

(Note that happy unsuccessful visitation simply terminates at step 2, so we focus our analysis on successful visitation.) As the diagram shows, the CPU has to wait for memory retrieval between steps 1 and 2 and between steps 2 and 3 (in the latter case, retrieval of the mutex and the element is parallelized through manual prefetching). A key insight is that, under normal circumstances, these memory accesses will almost always be cache misses: successive visitation operations, unless they happen to use the very same key, have no cache locality.

In bulk visitation, the stages of the algorithm are pipelined as follows (the diagram shows the case of three operations in the bulk batch):

The data required at step N+1 is prefetched at the end of step N. Now, if a sufficiently large number of operations are pipelined, we can effectively eliminate cache miss stalls: all memory addresses will be already cached by the time they are used.

The operation visit(first, last, f) internally splits [first, last) into chunks of bulk_visit_size elements that are then processed as described above. The chunk size has to be large enough that prefetched data has actually reached the cache by the time it is used. On the upper end, the chunk size is limited by the number of outstanding memory requests that the CPU can handle at a time: in Intel architectures, this is limited by the size of the line fill buffer (typically 10-12 entries). We have empirically confirmed that bulk visitation performance maxes out at around bulk_visit_size = 16 and stabilizes beyond that.
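
The following sketch conveys the chunking and prefetching idea under simplifying assumptions: position_for, metadata_address and visit_one are hypothetical helpers, __builtin_prefetch is the GCC/Clang intrinsic, and the real implementation pipelines the metadata, mutex and element fetches as separate stages rather than the two shown here.

#include <cstddef>

constexpr std::size_t bulk_visit_size = 16;

template<typename Table, typename FwdIterator, typename F>
void bulk_visit_sketch(Table& t, FwdIterator first, FwdIterator last, F f)
{
  while(first != last) {
    std::size_t positions[bulk_visit_size];
    std::size_t n = 0;

    // Stage 1: hash the keys of the chunk and prefetch their group metadata.
    for(auto it = first; it != last && n < bulk_visit_size; ++it, ++n) {
      positions[n] = t.position_for(*it);             // hash + mapped group
      __builtin_prefetch(t.metadata_address(positions[n]));
    }

    // Stage 2: by the time we come back to each position, its data should
    // already be cached; complete the visitations synchronously and in order.
    for(std::size_t i = 0; i < n; ++i, ++first) {
      t.visit_one(positions[i], *first, f);           // match, lock, compare
    }
  }
}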

Performance analysis

For our study of bulk visitation performance, we have used a computer with a Skylake-based Intel Core i5-8265U CPU:


                  Size/core    Latency [ns]
L1 data cache     32 KB        3.13
L2 cache          256 KB       6.88
L3 cache          6 MB         25.00
DDR4 RAM                       77.25

We measure the throughput in Mops/sec of single-threaded lookup (50/50 successful/unsuccessful) for both regular and bulk visitation on a boost::concurrent_flat_map<int, int> with sizes N = 3k, 25k, 600k, and 10M: for the first three values, the container fits entirely into L1, L2 and L3, respectively. The test program has been compiled with clang-cl for Visual Studio 2022 in release mode.

As expected, the relative performance of bulk vs. regular visitation grows as data is fetched from a slower cache (or from RAM in the last case). The theoretical throughput achievable by bulk visitation has been estimated from regular visitation figures by subtracting memory retrieval times as calculated with the following model:

  • If the container fits in Ln (L4 = RAM), Ln−1 is entirely occupied by metadata and access objects (and some of this data spills over to Ln).
  • Mutex and element retrieval times (which only apply to successful visitation) are dominated by the latter, that is, by element retrieval.
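
As a rough back-of-the-envelope illustration of how the model is applied (our own reading, using the latency table above): for N = 600k the container resides in L3 and its metadata and access arrays in L2, so each successful regular visitation pays around 6.88 ns (metadata, L2) plus 25.00 ns (element and mutex, L3), that is, roughly 32 ns of memory stalls that bulk visitation can potentially hide.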

Actual and theoretical figures match quite well, which suggests that the algorithmic overhead imposed by bulk visitation is negligible.

We have also run benchmarks under conditions closer to real-life usage for boost::concurrent_flat_map, with and without bulk visitation, and other concurrent containers, using different compilers and architectures. As an example, these are the results for a workload of 50M mixed insert/lookup operations distributed across several concurrent threads for different data distributions, with Clang 12 on an ARM64 computer:

[Plots: 5M updates, 45M lookups; skew = 0.01, 0.5 and 0.99]

Again, bulk visitation increases performance noticeably. Please refer to the benchmark site for further information and results.

Conclusions and next steps

Bulk visitation is an addition to the interface of boost::concurrent_flat_map and boost::concurrent_flat_set that improves lookup performance by pipelining the internal visitation operations for chunked groups of keys. The tradeoff for this increased throughput is higher latency, as keys need to be batched by the user code before issuing the visit operation.

The insights we have gained with bulk visitation for concurrent containers can be leveraged for future Boost.Unordered features:

  • In principle, insertion can also be made to operate in bulk mode, although the resulting pipelined algorithm is likely more complex than in the visitation case, and thus performance increases are expected to be lower.
  • Bulk visitation (and insertion) is directly applicable to non-concurrent containers such as boost::unordered_flat_map: the main problem for this is one of interface design because we are not using visitation here as the default lookup API (classical iterator-based lookup is provided instead). Some possible options are:
    1. Use visitation as in the concurrent case.
    2. Use an iterator-based lookup API that outputs the resulting iterators to some user-provided buffer (probably modelled as an output "meta" iterator taking container iterators).
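
As a purely hypothetical illustration of option 2 (no such interface exists in Boost.Unordered at the time of writing, and the names are made up):

// Hypothetical bulk find: iterators for the keys in [first, last) are written
// to out, with end() produced for keys that are not found.
template<typename Map, typename FwdIterator, typename OutputIterator>
OutputIterator bulk_find(Map& m, FwdIterator first, FwdIterator last,
                         OutputIterator out)
{
  for(; first != last; ++first) {
    *out++ = m.find(*first); // a real implementation would pipeline/prefetch
  }
  return out;
}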

Bulk visitation will officially ship in Boost 1.84 (December 2023), but it is already available by checking out the Boost.Unordered repo. If you are interested in this feature, please try it and report your local results and suggestions for improvement. Your feedback on our current and future work is most welcome.

Friday, August 18, 2023

User-defined class qualifiers in C++23

It is generally known that type qualifiers (such as const and volatile in C++) can be regarded as a form of subtyping: for instance, const T is a supertype of T because the interface (available operations) of T is strictly wider than that of const T. Foster et al. call a qualifier q positive if q T is a supertype of T, and negative if it is the other way around. Without real loss of generality, in what follows we only consider negative qualifiers, where q T is a subtype of (extends the interface of) T.

C++23 explicit object parameters (colloquially known as "deducing this") allow for a particularly concise and effective realization of user-defined qualifiers for class types beyond what the language provides natively. For instance, this is a syntactically complete implementation of a qualifier mut, the dual/inverse of const (not to be confused with mutable):

template<typename T>
struct mut: T
{
  using T::T;
};

template<typename T>
T& as_const(T& x) { return x; }

template<typename T>
T& as_const(mut<T>& x) { return x; }

struct X
{
  void foo() {}
  void bar(this mut<X>&) {}
};

int main()
{
  mut<X> x;
  x.foo();
  x.bar();

  auto& y = as_const(x);
  y.foo();
  y.bar(); // error: cannot convert argument 1 from 'X' to 'mut<X> &'

  X& z = x;
  z.foo();
  z.bar(); // error: cannot convert argument 1 from 'X' to 'mut<X> &'
}

The class X has a regular (generally accessible) member function foo and then bar, which is only accessible to instances of the form mut<X>. Access checking and implicit and explicit conversion between subtype mut<X> and supertype X work as expected.

With some help from Boost.Mp11, the idiom can be generalized to the case of several qualifiers:

#include <boost/mp11/algorithm.hpp>
#include <boost/mp11/list.hpp>
#include <type_traits>

template<typename T, typename... Qualifiers>
struct access: T
{
  using qualifier_list = boost::mp11::mp_list<Qualifiers...>;

  using T::T;
};

template<typename T, typename... Qualifiers>
concept qualified =
  (boost::mp11::mp_contains<
    typename std::remove_cvref_t<T>::qualifier_list,
    Qualifiers>::value && ...);

// some qualifiers
struct mut;
struct synchronized;

template<typename T>
concept is_mut = qualified<T, mut>;

template<typename T>
concept is_synchronized = qualified<T, synchronized>;

struct X
{
  void foo() {}

  template<is_mut Self>
  void bar(this Self&&) {}

  template<is_synchronized Self>
  void baz(this Self&&) {}

  template<typename Self>
  void qux(this Self&&)
    requires qualified<Self, mut, synchronized>
  {}
};

int main()
{
  access<X, mut> x;

  x.foo();
  x.bar();
  x.baz(); // error: associated constraints are not satisfied
  x.qux(); // error: associated constraints are not satisfied

  X y;
  y.foo();
  y.bar(); // error: associated constraints are not satisfied

  access<X, mut, synchronized> z;
  z.bar();
  z.baz();
  z.qux();
}

One difficulty remains, though:

int main()
{
  access<X, mut, synchronized> z;
  //...
  access<X, mut>& w = z; // error: cannot convert from
                         // 'access<X,mut,synchronized>'
                         // to 'access<X,mut> &'
}

access<T, Qualifiers...>& converts to T&, but not to access<T, Qualifiers2...>&, where Qualifiers2 is a subset of Qualifiers (for the mathematically inclined, qualifiers q1, ..., qN over a type T induce a lattice of subtypes Q T, Q ⊆ {q1, ..., qN}, ordered by qualifier inclusion). At the cost of incurring undefined behavior, we could provide the missing conversions as follows:

template<typename T, typename... Qualifiers>
struct access: T
{
  using qualifier_list = boost::mp11::mp_list<Qualifiers...>;

  using T::T;

  template<typename... Qualifiers2>
  operator access<T, Qualifiers2...>&()
    requires qualified<access, Qualifiers2...>
  {
    return reinterpret_cast<access<T, Qualifiers2...>&>(*this);
  }
};

A more interesting challenge is the following: as laid out, this technique implements syntactic qualifier subtyping, but does not do anything towards enforcing the semantics associated with each qualifier: for instance, synchronized should lock a mutex automatically, and a qualifier associated with some particular invariant should assert it after each invocation of a qualifier-constrained member function. I don't know whether this functionality can be more or less easily integrated into the presented framework: feedback on the matter is most welcome.

Friday, July 7, 2023

Inside boost::concurrent_flat_map

Introduction

Starting in Boost 1.83, Boost.Unordered provides boost::concurrent_flat_map, an associative container suitable for high-load parallel scenarios. boost::concurrent_flat_map leverages much of the work done for boost::unordered_flat_map, but also introduces innovations, particularly in the areas of low-contention operation and API design, that we find worth discussing.

State of the art

The space of C++ concurrent hashmaps spans a diversity of competing techniques, from traditional ones such as lock-based structures or sharding, to very specialized approaches relying on CAS instructions, hazard pointers, Read-Copy-Update (RCU), etc. We list some prominent examples:

  • tbb::concurrent_hash_map uses closed addressing combined with bucket-level read-write locking. The bucket array is split in a number of segments to allow for incremental rehashing without locking the entire table. Concurrent insertion, lookup and erasure are supported, but iterators are not thread safe. Locked access to elements is done via so-called accessors.
  • tbb::concurrent_unordered_map also uses closed addressing, but buckets are organized into lock-free split-ordered lists. Concurrent insertion, lookup, and traversal are supported, whereas erasure is not thread safe. Element access via iterators is not protected against data races.
  • Sharding consists of dividing the hashmap into a fixed number N of submaps indexed by hash (typically, the element x goes into the submap with index hash(x) mod N). Sharding is extremely easy to implement starting from a non-concurrent hashmap and provides incremental rehashing, but the degree of concurrency is limited by N (see the sketch after this list). As an example, gtl::parallel_flat_hash_map uses sharding with submaps essentially derived from absl::flat_hash_map, and inherits the excellent performance of this base container.
  • libcuckoo::cuckoohash_map adds efficient thread safety to classical cuckoo hashing by means of a number of carefully engineered techniques including fine-grained locking of slot groups or "strips" (of size 4 by default), optimistic insertion and data prefetching.
  • Meta's folly::ConcurrentHashMap combines closed addressing, sharding and hazard pointers to elements to achieve lock-free lookup (modifying operations such as insertion and erasure lock the affected shard). Iterators, which internally hold a hazard pointer to the element, can be validly dereferenced even after the element has been erased from the map; access, on the other hand, is always const and elements are basically treated as immutable.
  • folly::AtomicHashMap is a very specialized hashmap that imposes severe usage restrictions in exchange for very high time and space performance. Keys must be trivially copyable and 32 or 64 bits in size so that they can be handled internally by means of atomic instructions; also, some key values must be reserved to mark empty slots, tombstones and locked elements, so that no extra memory is required for bookkeeping information and locks. The internal data structure is based on open addressing with linear probing. Non-modifying operations are lock-free. Rehashing is not provided: instead, extra bucket arrays are appended when the map becomes full, the expectation being that the user provide the estimated final size at construction time to avoid this rather inefficient growth mechanism. Element access is not protected against data races.
  • On a more experimental/academic note, we can mention initiatives such as Junction, Folklore and DRAMHiT. In general, these do not provide industry-grade container implementations but explore interesting ideas that could eventually be adopted by mainstream libraries, such as RCU-based data structures, lock-free algorithms relying on CAS and/or transactional memory, parallel rehashing and operation batching.
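
To make the sharding idea concrete, here is a minimal sketch of the technique (ours, not modelled after any particular library):

#include <cstddef>
#include <functional>
#include <mutex>
#include <unordered_map>

// A fixed number N of submaps, each protected by its own mutex; element x
// goes into the submap with index hash(x) % N.
template<typename Key, typename T, std::size_t N = 16>
class sharded_map
{
  struct shard
  {
    std::mutex                 m;
    std::unordered_map<Key, T> map;
  };
  shard shards_[N];

  shard& shard_for(const Key& k)
  {
    return shards_[std::hash<Key>{}(k) % N];
  }

public:
  void insert_or_assign(const Key& k, const T& v)
  {
    auto& s = shard_for(k);
    std::lock_guard<std::mutex> lock(s.m);
    s.map.insert_or_assign(k, v);
  }

  // ...lookup and erasure follow the same pattern; at most N threads can
  // operate on the container without blocking each other, which is what
  // limits the degree of concurrency.
};
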
Design principles

Unlike non-concurrent C++ containers, where the STL acts as a sort of reference interface, concurrent hashmaps on the market differ wildly in terms of requirements, API and provided functionality. When designing boost::concurrent_flat_map, we have aimed for a general-purpose container

  • with no special restrictions on key and mapped types,
  • providing full thread safety without external synchronization mechanisms,
  • and disrupting as little as possible the conceptual and operational model of "traditional" containers.

These principles rule out some scenarios such as requiring that keys be of an integral type or putting an extra burden on the user in terms of access synchronization or active garbage collection. They also inform concrete design decisions:

  • boost::concurrent_flat_map<Key, T, Hash, Pred, Allocator> must be a valid instantiation in all practical cases where boost::unordered_flat_map<Key, T, Hash, Pred, Allocator> is.
  • Thread-safe value semantics are provided (including copy construction, assignment, swap, etc.)
  • All member functions in boost::unordered_flat_map are provided by boost::concurrent_flat_map except if there's a fundamental reason why they can't work safely or efficiently in a concurrent setting.

The last guideline has the most impact on API design. In particular, we have decided not to provide iterators, either blocking or non-blocking: if non-blocking, they're unsafe; if blocking, they increase contention when not properly used and can very easily lead to deadlocks:

// Thread 1
map_type::iterator it1=map.find(x1), it2=map.find(x2);

// Thread 2
map_type::iterator it2=map.find(x2), it1=map.find(x1);

In place of iterators, boost::concurrent_flat_map offers an access API based on internal visitation, as described in a later section.

Data structure

boost::concurrent_flat_map uses the same open-addressing layout as boost::unordered_flat_map, where the bucket array is split into 2^n groups of N = 15 slots and each group has an associated 16-byte metadata word for SIMD-based reduced-hash matching and insertion overflow control.

On top of this layout, two synchronization levels are added:

  • Container level: A read-write mutex is used to control access from any operation to the container. This access is always requested in read mode (i.e. shared) except for operations that require that the whole bucket array be replaced, like rehashing, swapping, assignment, etc. This means that, in practice, this level of synchronization does not cause any contention at all, even for modifying operations like insertion and erasure. To reduce cache coherence traffic, the mutex is implemented as an array of read-write spinlocks occupying separate cache lines, and each thread is assigned one spinlock in a round-robin fashion at thread_local construction time: read/shared access only involves the assigned spinlock, whereas write/exclusive access, which is comparatively much rarer, requires that all spinlocks be locked.
  • Group level: Each group has a dedicated read-write spinlock to control access to its slots, plus an atomic insertion counter used for transactional optimistic insertion as described below.
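
A rough sketch of the two levels (again, identifiers are ours and heavily simplified; the actual implementation uses read-write spinlocks rather than std::shared_mutex):

#include <atomic>
#include <cstddef>
#include <cstdint>
#include <shared_mutex>

// Container level: an array of read-write locks, each on its own cache line.
// Shared access only touches the lock assigned to the current thread;
// exclusive access (rehashing, assignment, swap...) locks all of them.
struct alignas(64) cacheline_rw_lock
{
  std::shared_mutex m;
};

struct container_mutex
{
  static constexpr std::size_t num_locks = 64;
  cacheline_rw_lock locks[num_locks];

  cacheline_rw_lock& lock_for_this_thread()
  {
    // round-robin assignment performed once per thread, at thread_local
    // construction time
    static std::atomic<std::size_t> counter{0};
    thread_local const std::size_t index = counter++ % num_locks;
    return locks[index];
  }
};

// Group level: a read-write lock for the group's slots plus an atomic
// insertion counter used for transactional optimistic insertion.
struct group_access
{
  std::shared_mutex          lock;
  std::atomic<std::uint32_t> insertion_counter{0};
};
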
Algorithms

The core algorithms of boost::concurrent_flat_map are variations of those of boost::unordered_flat_map with minimal changes to prevent data races while keeping group-level contention to a minimum.

In the following diagrams, white boxes represent lock-free steps, while gray boxes are executed within the scope of a group lock. Metadata is handled atomically both in locked and lock-free scenarios.

Lookup

Most steps of the lookup algorithm (hash calculation, probing, element pre-checking via SIMD matching with the value's reduced hash) are lock-free and do not synchronize with any operation on the metadata. When SIMD matching detects a potential candidate, double-checking for slot occupancy and the actual comparison with the element are done within the group lock; note that the occupancy double check is necessary precisely because SIMD matching is lock-free and the status of the identified slot may have changed before group locking.
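
In pseudo-C++ (all helpers are hypothetical), the happy path looks roughly like this:

// Sketch of concurrent lookup: probing and SIMD matching are lock-free; the
// group lock is only taken to double-check and compare candidates.
template<typename Table, typename Key, typename F>
bool visit_sketch(Table& t, const Key& k, F f)
{
  auto hash = t.hash_for(k);
  for(auto pos : t.probing_sequence(hash)) {      // lock-free
    auto mask = t.match_reduced_hash(pos, hash);  // SIMD over metadata, lock-free
    for(auto slot : mask) {
      auto lck = t.lock_group(pos);               // group-level lock
      if(t.is_occupied(pos, slot) &&              // double check: the slot may
         t.key_equal(pos, slot, k)) {             // have changed before locking
        f(t.element(pos, slot));
        return true;
      }
    }
    if(!t.overflowed(pos, hash)) return false;    // probing can stop here
  }
  return false;
}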

Insertion

The main challenge of any concurrent insertion algorithm is to prevent an element x from being inserted twice by different threads running at the same time. As open-addressing probing starts at a position p0 uniquely determined by the hash value of x, a naïve (and flawed) approach is to lock p0 for the entire duration of the insertion procedure: this leads to deadlocking if the probing sequences of two different elements intersect.

We have implemented the following transactional optimistic insertion algorithm: at the beginning of insertion, the value of the insertion counter for the group at position p0 is saved locally, and insertion proceeds normally, first checking that an element equivalent to x does not exist and then looking for available slots starting at p0, locking only one group of the probing sequence at a time. When an available slot is found, the associated metadata is updated, the insertion counter at p0 is incremented, and:

  • If no other thread got in the way (i.e. if the pre-increment value of the counter coincides with the local value stored at the beginning), then the transaction is successful and insertion can be finished by storing the element into the slot before releasing the group lock.
  • Otherwise, metadata changes are rolled back and the entire insertion process is started over.

Our measurements indicate that, even under adversarial situations, the ratio of start-overs to successful insertions ranges in the parts per million.
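
The following sketch shows the transactional scheme (helper names are ours; error handling, rehashing and many details are omitted):

// Sketch of transactional optimistic insertion.
template<typename Table, typename Value>
bool insert_sketch(Table& t, const Value& x)
{
  for(;;) {                                         // start over on failure
    auto hash    = t.hash_for(x);
    auto p0      = t.position_for(hash);
    auto counter = t.insertion_counter(p0).load();  // save counter locally

    if(t.contains(hash, x)) return false;           // equivalent element exists

    for(auto pos : t.probing_sequence(hash)) {      // lock one group at a time
      auto lck  = t.lock_group(pos);
      auto slot = t.find_available_slot(pos);
      if(slot != t.no_slot) {
        t.update_metadata(pos, slot, hash);
        // transaction check: has any other insertion touched p0 meanwhile?
        if(t.insertion_counter(p0)++ == counter) {
          t.store_element(pos, slot, x);            // commit before unlocking
          return true;
        }
        t.rollback_metadata(pos, slot);             // abort and start over
        break;
      }
    }
  }
}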

Visitation API

From an operational point of view, container iterators serve two main purposes. The first is combining lookup/insertion with further access to the relevant element:

auto it = m.find(k);
if (it != m.end()) {
  it->second = 0;
}

The second is container traversal:

// iterators used internally by range-for
for(auto& x: m) {
  x.second = 0;
}

Having decided that boost::concurrent_flat_map will not rely on iterators due to their inherent concurrency problems, a design alternative is to move element access into the container operations themselves, where it can be done in a thread-safe manner. This is just a form of the familiar visitation pattern:

m.visit(k, [](auto& x) {
  x.second = 0;
});

m.visit_all([](auto& x) {
  x.second = 0;
});

boost::concurrent_flat_map provides visitation-enabled variations of classical map operations wherever it makes sense:

  • visit, cvisit (in place of find)
  • visit_all, cvisit_all (as a substitute of container traversal)
  • emplace_or_visit, emplace_or_cvisit
  • insert_or_visit, insert_or_cvisit
  • try_emplace_or_visit, try_emplace_or_cvisit

cvisit stands for constant visitation, that is, the visitation function is granted read-only access to the element, which causes less contention than write access.
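
For instance, assuming the boost::concurrent_flat_map<int, int> m from before:

// insert the element (k, 0), or increment the existing value for key k
m.insert_or_visit({k, 0}, [](auto& x) { ++x.second; });

// read-only access: the visitation function gets a const reference
int value = 0;
m.cvisit(k, [&](const auto& x) { value = x.second; });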

Traversal functions [c]visit_all and erase_if also have parallel versions:

m.visit_all(std::execution::par, [](auto& x) {
  x.second = 0;
});

Benchmarks

We've tested boost::concurrent_flat_map against tbb::concurrent_hash_map and gtl::parallel_flat_hash_map in the following synthetic scenario: T threads concurrently perform N operations (update, successful lookup and unsuccessful lookup), randomly chosen with probabilities 10%, 45% and 45%, respectively, on a concurrent map of (int, int) pairs. The keys used by all operations are also random: update and successful lookup follow a Zipf distribution over [1, N/10] with skew exponent s, and unsuccessful lookup follows a Zipf distribution with the same skew s over an interval not overlapping with the former.

We provide the full benchmark code and results for different 64- and 32-bit architectures in a dedicated repository; here, we just show as an example the plots for Visual Studio 2022 in x64 mode on an AMD Ryzen 5 3600 6-Core @ 3.60 GHz without hyperthreading and 64 GB of RAM.

[Plots: 500k updates, 4.5M lookups; skew = 0.01, 0.5 and 0.99]
[Plots: 5M updates, 45M lookups; skew = 0.01, 0.5 and 0.99]

Note that, for the scenario with 500k updates, boost::concurrent_flat_map continues to improve after the number of threads exceeds the number of cores (6), a phenomenon for which we don't have a ready explanation (we could hypothesize that execution is limited by memory latency, but the behavior does not reproduce in the scenario with 5M updates, where the cache miss ratio is necessarily higher). Note also that gtl::parallel_flat_hash_map performs comparatively worse in high-skew scenarios where the load is concentrated on a very small number of keys: this may be due to gtl::parallel_flat_hash_map having a much coarser lock granularity (256 shards in the configuration used) than the other two containers.

In general, results are very dependent on the particular CPU and memory system used; you are welcome to try out the benchmark in your architecture of interest and report back.

Conclusions and next steps

boost::concurrent_flat_map is a new, general-purpose concurrent hashmap that leverages the very performant open-addressing techniques of boost::unordered_flat_map and provides a fully thread-safe, iterator-free API we hope future users will find flexible and convenient.

We are considering a number of new functionalities for upcoming releases:

  • As boost::concurrent_flat_map and boost::unordered_flat_map basically share the same data layout, it's possible to efficiently implement move construction from one to another by simply transferring the internal structure. There are scenarios where this feature can lead to more performant execution, like, for instance, multithreaded population of a boost::concurrent_flat_map followed by single- or multithreaded read-only lookup on a boost::unordered_flat_map move-constructed from the former.
  • DRAMHiT shows that pipelining/batching several map operations on the same thread in combination with heavy memory prefetching can reduce or eliminate waiting CPU cycles. We have conducted some preliminary experiments using this idea for a feature we dubbed bulk lookup (providing an array of keys to look for at once), with promising results.

We're launching this new container with trepidation: we cannot possibly try the vast array of different CPU architectures and scenarios where concurrent hashmaps are used, and we don't yet have field data on the suitability of the novel API we're proposing for boost::concurrent_flat_map. For these reasons, your feedback and proposals for improvement are most welcome.