C++ bindings for libpmemobj (part 7) - synchronization primitives


To finish off the C++ bindings to libpmemobj blog marathon, I will introduce the synchronization mechanisms we implemented. They are mostly C++11-like implementations of the different kinds of mutexes and the condition variable. They satisfy their respective concepts (Mutex, SharedMutex and so …

Read More
C++ bindings for libpmemobj (part 6) - transactions


As I mentioned in my previous blog post, transactions are the heart of libpmemobj. That is why we had to take utmost care while designing their C++ versions, so that they are as easy to use as possible. There are, however, a couple of compromises we had to make due to the inadequacies of the C++11 …

Read More
C++ bindings for libpmemobj (part 5) - make_persistent


One of the most important features of the C++ bindings to libpmemobj is the persistent_ptr smart pointer template. While using it is fairly straightforward, allocation and object construction with the C API are hard to get right. So, like its C++ standard library counterparts, it …

Read More
C++ bindings for libpmemobj (part 4) - pool handle wrapper


One of the necessary steps in developing the C++ libpmemobj bindings was introducing an abstraction over the C pool handle. We decided on a very simple hierarchy in which the pool template inherits from a generic pool_base. This was necessary to be able to have functions/methods which do not …

Read More
Persistent allocator design - fragmentation


Implementing a memory allocator is a balancing act between numerous properties, the two most important being time and space constraints. Making the malloc/free routines reasonably fast is a must for the implementation to be considered usable at all. The algorithm also mustn’t waste excessive …

Read More