diff --git a/doc/interprocess.qbk b/doc/interprocess.qbk index e0bd4a0..2665ea2 100644 --- a/doc/interprocess.qbk +++ b/doc/interprocess.qbk @@ -146,7 +146,7 @@ list, map, so you can avoid these manual data structures just like with standard [*Boost.Interprocess] allows creating complex objects in shared memory and memory mapped files. For example, we can construct STL-like containers in shared memory. -To do this, we just need to create a an special (managed) shared memory segment, +To do this, we just need to create a special (managed) shared memory segment, declare a [*Boost.Interprocess] allocator and construct the vector in shared memory just if it was any other object. Just execute this first process: @@ -207,7 +207,7 @@ processes we have several alternatives: shared memory or memory mapped files. Once the processes set up the memory region, the processes can read/write the data like any other memory segment without calling the operating system's kernel. This - also requieres some kind of manual synchronization between processes. + also requires some kind of manual synchronization between processes. [endsect] @@ -264,7 +264,7 @@ so that two unrelated processes can use the same interprocess mechanism object. Examples of this are shared memory, named mutexes and named semaphores (for example, native windows CreateMutex/CreateSemaphore API family). -The name used to identify a interprocess mechanism is not portable, even between +The name used to identify an interprocess mechanism is not portable, even between UNIX systems. For this reason, [*Boost.Interprocess] limits this name to a C++ variable identifier or keyword: @@ -281,7 +281,7 @@ identifier or keyword: Named [*Boost.Interprocess] resources (shared memory, memory mapped files, named mutexes/conditions/semaphores) have kernel or filesystem persistency. 
This means that even if all processes that have opened those resources -end, the resource will still be accesible to be opened again and the resource +end, the resource will still be accessible to be opened again and the resource can only be destructed via an explicit to their static member `remove` function. This behavior can be easily understood, since it's the same mechanism used by functions controlling file opening/creation/erasure: @@ -332,7 +332,7 @@ processes, so that several processes can read and write in that memory segment without calling operating system functions. However, we need some kind of synchronization between processes that read and write shared memory. -Consider what happens when a server process wants to send a html file to a client process +Consider what happens when a server process wants to send an HTML file to a client process that resides in the same machine using network mechanisms: * The server must read the file to memory and pass it to the network functions, that @@ -457,7 +457,7 @@ For more details regarding `shared_memory_object` see the Once created or opened, a process just has to map the shared memory object in the process' address space. The user can map the whole shared memory or just part of it. The -mapping process is done using the the `mapped_region` class. The class represents +mapping process is done using the `mapped_region` class. The class represents a memory region that has been mapped from a shared memory or from other devices that have also mapping capabilities (for example, files). A `mapped_region` can be created from any `memory_mappable` object and as you might imagine, `shared_memory_object` @@ -533,7 +533,7 @@ has filesystem lifetime in those systems. [section:removing Removing shared memory] [classref boost::interprocess::shared_memory_object shared_memory_object] -provides an static `remove` function to remove a shared memory objects. +provides a static `remove` function to remove a shared memory object. 
This function [*can] fail if the shared memory objects does not exist or it's opened by another process. Note that this function is similar to the @@ -567,7 +567,7 @@ behavior as the standard C (stdio.h) `int remove(const char *path)` function. Creating a shared memory segment and mapping it can be a bit tedious when several processes are involved. When processes are related via `fork()` operating system -call in UNIX sytems a simpler method is available using anonymous shared memory. +call in UNIX systems a simpler method is available using anonymous shared memory. This feature has been implemented in UNIX systems mapping the device `\dev\zero` or just using the `MAP_ANONYMOUS` in a POSIX conformant `mmap` system call. @@ -576,7 +576,7 @@ This feature is wrapped in [*Boost.Interprocess] using the `anonymous_shared_mem function, which returns a `mapped_region` object holding an anonymous shared memory segment that can be shared by related processes. -Here's is an example: +Here is an example: [import ../example/doc_anonymous_shared_memory.cpp] [doc_anonymous_shared_memory] @@ -615,7 +615,7 @@ Take in care that when the last process attached to a shared memory is destroyed shared memory. Native windows shared memory has also another limitation: a process can open and map the whole shared memory created by another process but it can't know which is the size of that memory. This limitation is imposed by the Windows API so -the user must somehow trasmit the size of the segment to processes opening the +the user must somehow transmit the size of the segment to processes opening the segment. Let's repeat the same example presented for the portable shared memory object: @@ -738,7 +738,7 @@ regarding this class see the After creating a file mapping, a process just has to map the shared memory in the process' address space. The user can map the whole shared memory or just part of it. -The mapping process is done using the the `mapped_region` class. 
as we have said before +The mapping process is done using the `mapped_region` class, as we have said before. The class represents a memory region that has been mapped from a shared memory or from other devices that have also mapping capabilities: @@ -770,7 +770,7 @@ the mapped region covers from the offset until the end of the file. If several processes map the same file, and a process modifies a memory range from a mapped region that is also mapped by other process, the changes are inmedially visible to other processes. However, the file contents on disk are -not updated immediately, since that would hurt performance (writting to disk +not updated immediately, since that would hurt performance (writing to disk is several times slower than writing to memory). If the user wants to make sure that file's contents have been updated, it can flush a range from the view to disk. When the function returns, the data should have been written to disk: @@ -799,7 +799,7 @@ For more details regarding `mapped_region` see the Let's reproduce the same example described in the shared memory section, using memory mapped files. A server process creates a shared -memory segment, maps it and initializes all the bytes the a value. After that, +memory segment, maps it and initializes all the bytes to a value. After that, a client process opens the shared memory, maps it, and checks that the data is correctly initialized. This is the server process: @@ -892,7 +892,7 @@ to a multiple of a value called [*page size]. This is due to the fact that the If fixed mapping address is used, ['offset] and ['address] parameters should be multiples of that value. -This value is, tipically, 4KB or 8KB for 32 bit operating systems. +This value is, typically, 4KB or 8KB for 32 bit operating systems. [c++] @@ -929,7 +929,7 @@ more resources than necessary. 
If the user specifies the following 1 byte mapping: The operating system will reserve a whole page that will not be reused by any other mapping so we are going to waste [*(page size - 1)] bytes. If we want -to use effiently operating system resources, we should create regions whose size +to use operating system resources efficiently, we should create regions whose size is a multiple of [*page size] bytes. If the user specifies the following two mapped regions for a file with which has `2*page_size` bytes: @@ -967,7 +967,7 @@ page size. The mapping with the minimum resource usage would be to map whole pages: , page_size //Map the rest ); -How can we obtain the [*page size]? The `mapped_region` class has an static +How can we obtain the [*page size]? The `mapped_region` class has a static function that returns that value: [c++] @@ -997,7 +997,7 @@ When placing objects in a mapped region and mapping that region in different address in every process, raw pointers are a problem since they are only valid for the process that placed them there. To solve this, [*Boost.Interprocess] offers -an special smart pointer that can be used instead of a raw pointer. +a special smart pointer that can be used instead of a raw pointer. So user classes containing raw pointers (or Boost smart pointers, that internally own a raw pointer) can't be safely placed in a process shared mapped region. These pointers must be replaced with offset pointers, and @@ -1007,7 +1007,7 @@ if you want to use these shared objects from different processes. Of course, a pointer placed in a mapped region shared between processes should only point to an object of that mapped region. Otherwise, the pointer would point to an address that it's only valid one process and other -processes may crash when accesing to that address. +processes may crash when accessing to that address. [endsect] @@ -1040,7 +1040,7 @@ and they will crash. 
This problem is very difficult to solve, since each process needs a different virtual table pointer and the object that contains that pointer -is shared accross many processes. Even if we map the mapped region in +is shared across many processes. Even if we map the mapped region in the same address in every process, the virtual table can be in a different address in every process. To enable virtual functions for objects shared between processes, deep compiler changes are needed and virtual @@ -1099,7 +1099,7 @@ processes the memory segment can be mapped in a different address in each proces //This address can be different in each process void *addr = region.get_address(); -This difficults the creation of complex objects in mapped regions: a C++ +This makes the creation of complex objects in mapped regions difficult: a C++ class instance placed in a mapped region might have a pointer pointing to another object also placed in the mapped region. Since the pointer stores an absolute address, that address is only valid for the process that placed @@ -1107,9 +1107,9 @@ the object there unless all processes map the mapped region in the same address. To be able to simulate pointers in mapped regions, users must use [*offsets] -(distance between objets) instead of absolute addresses. The offset between +(distance between objects) instead of absolute addresses. The offset between two objects in a mapped region is the same for any process that maps the -mapped region, even if that region is placed in different base addreses. +mapped region, even if that region is placed in different base addresses. To facilitate the use of offsets, [*Boost.Interprocess] offers [classref boost::interprocess::offset_ptr offset_ptr]. @@ -1119,7 +1119,7 @@ needed to offer a pointer-like interface. The class interface is inspired in Boost Smart Pointers and this smart pointer stores the offset (distance in bytes) between the pointee's address and it's own `this` pointer. 
-Imagine an structure in a common +Imagine a structure in a common 32 bit processor: [c++] @@ -1209,7 +1209,7 @@ mapped files or shared memory objects is not very useful if the access to that memory can't be effectively synchronized. This is the same problem that happens with thread-synchronization mechanisms, where heap memory and global variables are shared between threads, but the access to these resources needs to be synchronized -tipically through mutex and condition variables. [*Boost.Threads] implements these +typically through mutex and condition variables. [*Boost.Threads] implements these synchronization utilities between threads inside the same process. [*Boost.Interprocess] implements similar mechanisms to synchronize threads from different processes. @@ -1223,7 +1223,7 @@ implements similar mechanisms to synchronize threads from different processes. a file with using a `fstream` with the name ['filename] and another process opens that file using another `fstream` with the same ['filename] argument. [*Each process uses a different object to access to the resource, but both processes - are using the the same underlying resource]. + are using the same underlying resource]. * [*Anonymous utilities]: Since these utilities have no name, two processes must share [*the same object] through shared memory or memory mapped files. This is @@ -1245,8 +1245,8 @@ Each type has it's own advantages and disadvantages: synchronization utilities. The main interface difference between named and anonymous utilities are the constructors. 
-Usually anonymous utilities have only one contructor, whereas the named utilities have -several constructors whose first argument is an special type that requests creation, +Usually anonymous utilities have only one constructor, whereas the named utilities have +several constructors whose first argument is a special type that requests creation, opening or opening or creation of the underlying resource: [c++] @@ -1403,7 +1403,7 @@ Boost.Interprocess offers the following mutex types: It's very important to unlock a mutex after the process has read or written the data. This can be difficult when dealing with exceptions, so usually mutexes are used with a scoped lock, a class that can guarantee that a mutex will always be unlocked -even when an exception occurs. To use an scoped lock just include: +even when an exception occurs. To use a scoped lock just include: [c++] @@ -1476,7 +1476,7 @@ will write a flag when ends writing the traces [doc_anonymous_mutex_shared_data] This is the process main process. Creates the shared memory, constructs -the the cyclic buffer and start writing traces: +the cyclic buffer and starts writing traces: [import ../example/doc_anonymous_mutexA.cpp] [doc_anonymous_mutexA] @@ -1516,11 +1516,11 @@ In the previous example, a mutex is used to ['lock] but we can't use it to can do two things: * [*wait]: The thread is blocked until some other thread notifies that it can - continue because the condition that lead to waiting has dissapeared. + continue because the condition that led to waiting has disappeared. * [*notify]: The thread sends a signal to one blocked thread or to all blocked threads to tell them that they the condition that provoked their wait has - dissapeared. + disappeared. Waiting in a condition variable is always associated with a mutex. The mutex must be locked prior to waiting on the condition. 
When waiting @@ -1649,7 +1649,7 @@ synchronization objects with the synchronized data: [section:semaphores_anonymous_example Anonymous semaphore example] -We will implement a integer array in shared memory that will be used to trasfer data +We will implement an integer array in shared memory that will be used to transfer data from one process to another process. The first process will write some integers to the array and the process will block if the array is full. @@ -1662,7 +1662,7 @@ This is the shared integer array (doc_anonymous_semaphore_shared_data.hpp): [doc_anonymous_semaphore_shared_data] This is the process main process. Creates the shared memory, places there -the interger array and starts integers one by one, blocking if the array +the integer array and starts inserting integers one by one, blocking if the array is full: [import ../example/doc_anonymous_semaphoreA.cpp] @@ -1686,7 +1686,7 @@ efficient than a mutex/condition combination. [section:upgradable_whats_a_mutex What's An Upgradable Mutex?] -An upgradable mutex is an special mutex that offers more locking possibilities than +An upgradable mutex is a special mutex that offers more locking possibilities than a normal mutex. Sometimes, we can distinguish between [*reading] the data and [*modifying] the data. If just some threads need to modify the data, and a plain mutex is used to protect the data from concurrent access, concurrency is pretty limited: @@ -1937,7 +1937,7 @@ ownership. This operation is non-blocking. [*Precondition:] The thread must have upgradable ownership of the mutex. [*Effects:] The thread atomically releases upgradable ownership and acquires exclusive -ownership. This operation will block until all threads with sharable ownership releas it. +ownership. This operation will block until all threads with sharable ownership release it. 
[*Throws:] An exception derived from *interprocess_exception* on error.[blurb ['[*bool try_unlock_upgradable_and_lock()]]] @@ -2176,9 +2176,9 @@ This is implemented by upgradable mutex operations like `unlock_and_lock_sharabl These operations can be managed more effectively using [*lock transfer operations]. A lock transfer operations explicitly indicates that a mutex owned by a lock is -trasferred to another lock executing atomic unlocking plus locking operations. +transferred to another lock executing atomic unlocking plus locking operations. -[section:lock_trasfer_simple_transfer Simple Lock Transfer] +[section:lock_transfer_simple_transfer Simple Lock Transfer] Imagine that a thread modifies some data in the beginning but after that, it has to just read it in a long time. The code can acquire the exclusive lock, modify the data @@ -2239,19 +2239,19 @@ We can use [*lock transfer] to simplify all this management: //Read data - //The lock is automatically unlocked calling the appropiate unlock + //The lock is automatically unlocked calling the appropriate unlock //function even in the presence of exceptions. //If the mutex was not locked, no function is called. As we can see, even if an exception is thrown at any moment, the mutex -will be automatically unlocked calling the appropiate `unlock()` or +will be automatically unlocked calling the appropriate `unlock()` or `unlock_sharable()` method. 
[endsect] -[section:lock_trasfer_summary Lock Transfer Summary] +[section:lock_transfer_summary Lock Transfer Summary] -There are many lock trasfer operations that we can classify according to +There are many lock transfer operations that we can classify according to the operations presented in the upgradable mutex operations: * [*Guaranteed to succeed and non-blocking:] Any transition from a more @@ -2306,7 +2306,7 @@ is permitted: [section:lock_transfer_summary_upgradable Transfers To Upgradable Lock] -A transfer to an `upgradable_lock` is guaranteed to succeed only from an `scoped_lock` +A transfer to an `upgradable_lock` is guaranteed to succeed only from a `scoped_lock` since scoped locking is a more restrictive locking than an upgradable locking. This operation is also non-blocking. @@ -2352,7 +2352,7 @@ These operations are also non-blocking: [endsect] -[section:lock_trasfer_not_locked Transferring Unlocked Locks] +[section:lock_transfer_not_locked Transferring Unlocked Locks] In the previous examples, the mutex used in the transfer operation was previously locked: @@ -2373,7 +2373,7 @@ unlocking, a try, timed or a `defer_lock` constructor: [c++] - //These operations can left the mutex unlocked! + //These operations can leave the mutex unlocked! { //Try might fail @@ -2400,7 +2400,7 @@ unlocking, a try, timed or a `defer_lock` constructor: If the source mutex was not locked: -* The target lock does not executes the atomic `unlock_xxx_and_lock_xxx` operation. +* The target lock does not execute the atomic `unlock_xxx_and_lock_xxx` operation. * The target lock is also unlocked. * The source lock is released() and the ownership of the mutex is transferred to the target. @@ -2418,7 +2418,7 @@ If the source mutex was not locked: [endsect] -[section:lock_trasfer_failure Transfer Failures] +[section:lock_transfer_failure Transfer Failures] When executing a lock transfer, the operation can fail: @@ -2554,7 +2554,7 @@ it waits until it can obtain the ownership. 
[*Effects:] The calling thread tries to acquire exclusive ownership of the file lock without waiting. If no other thread has exclusive or sharable ownership of -the file lock this succeeds. +the file lock, this succeeds. [*Returns:] If it can acquire exclusive ownership immediately returns true. If it has to wait, returns false. @@ -2595,7 +2595,7 @@ waits until it can obtain the ownership. [*Effects:] The calling thread tries to acquire sharable ownership of the file lock without waiting. If no other thread has has exclusive ownership of -the file lock this succeeds. +the file lock, this succeeds. [*Returns:] If it can acquire sharable ownership immediately returns true. If it has to wait, returns false. @@ -2851,7 +2851,7 @@ The message queue is explicitly removed calling the static `remove` function: using boost::interprocess; message_queue::remove("message_queue"); -The funtion can fail if the message queue is still being used by any process. +The function can fail if the message queue is still being used by any process. [endsect] @@ -2897,7 +2897,7 @@ However, managing those memory segments is not not easy for non-trivial tasks. A mapped region is a fixed-length memory buffer and creating and destroying objects of any type dynamically, requires a lot of work, since it would require programming a memory management algorithm to allocate portions of that segment. -Many times, we also want to associate a names to objects created in shared memory, so +Many times, we also want to associate names to objects created in shared memory, so all the processes can find the object using the name. [*Boost.Interprocess] offers 4 managed memory segment classes: @@ -2958,7 +2958,7 @@ These classes can be customized with the following template parameters: * The Pointer type (`MemoryAlgorithm::void_pointer`) to be used by the memory allocation algorithm or additional helper structures - (like a map to mantain object/name associations). 
All STL compatible + (like a map to maintain object/name associations). All STL compatible allocators and containers to be used with this managed memory segment will use this pointer type. The pointer type will define if the managed memory segment can be mapped between @@ -3016,8 +3016,8 @@ specializations: ,/*Default index type*/> wmanaged_shared_memory; -`managed_shared_memory` allocates objects in shared memory asociated with a c-string and -`wmanaged_shared_memory` allocates objects in shared memory asociated with a wchar_t null +`managed_shared_memory` allocates objects in shared memory associated with a c-string and +`wmanaged_shared_memory` allocates objects in shared memory associated with a wchar_t null terminated string. Both define the pointer type as `offset_ptr` so they can be used to map the shared memory at different base addresses in different processes. @@ -3108,7 +3108,7 @@ To use a managed shared memory, you must include the following header: //!! If anything fails, throws interprocess_exception // managed_shared_memory segment (open_or_create, "MySharedMemory", //Shared memory object name 65536); //Shared memory object size in bytes -When the a `managed_shared_memory` object is destroyed, the shared memory +When the `managed_shared_memory` object is destroyed, the shared memory object is automatically unmapped, and all the resources are freed. To remove the shared memory object from the system you must use the `shared_memory_object::remove` function. 
Shared memory object removing might fail if any @@ -3184,8 +3184,8 @@ specializations: flat_map_index > wmanaged_mapped_file; -`managed_mapped_file` allocates objects in a memory mapped files asociated with a c-string -and `wmanaged_mapped_file` allocates objects in a memory mapped file asociated with a wchar_t null +`managed_mapped_file` allocates objects in a memory mapped file associated with a c-string +and `wmanaged_mapped_file` allocates objects in a memory mapped file associated with a wchar_t null terminated string. Both define the pointer type as `offset_ptr` so they can be used to map the file at different base addresses in different processes. @@ -3244,7 +3244,7 @@ To use a managed mapped file, you must include the following header: //!! If anything fails, throws interprocess_exception // managed_mapped_file mfile (open_or_create, "MyMappedFile", //Mapped file name 65536); //Mapped file size -When the a `managed_mapped_file` object is destroyed, the file is +When the `managed_mapped_file` object is destroyed, the file is automatically unmapped, and all the resources are freed. To remove the file from the filesystem you can use standard C `std::remove` or [*Boost.Filesystem]'s `remove()` functions. File removing might fail @@ -3548,7 +3548,7 @@ object. The programmer can obtain the following information: * Length of the object: Returns the number of elements of the object (1 if it's a single value, >=1 if it's an array). -* The type of construction: Whether the object was construct using a named, +* The type of construction: Whether the object was constructed using a named, unique or anonymous construction. Here is an example showing this functionality: @@ -3676,7 +3676,7 @@ reallocations, if the index is a hash structure it can preallocate the bucket ar The following functions reserve memory to make the subsequent allocation of named or unique objects more efficient. 
These functions are only useful for pseudo-intrusive or non-node indexes (like `flat_map_index`, -`iunordered_set_index`). These functions has no effect with the +`iunordered_set_index`). These functions have no effect with the default index (`iset_index`) or other indexes (`map_index`): [c++] @@ -3735,7 +3735,7 @@ creation/erasure/reserve operation: Sometimes it's interesting to be able to allocate aligned fragments of memory because of some hardware or software restrictions. Sometimes, having -aligned memory is an feature that can be used to improve several +aligned memory is a feature that can be used to improve several memory algorithms. This allocation is similar to the previously shown raw memory allocation but @@ -3758,7 +3758,7 @@ of memory maximizing both the size of the request [*and] the possibilities of future aligned allocations. This information is stored in the PayloadPerAllocation constant of managed memory segments. -Here's is an small example showing how aligned allocation is used: +Here is a small example showing how aligned allocation is used: [import ../example/doc_managed_aligned_allocation.cpp] [doc_managed_aligned_allocation] @@ -3812,9 +3812,9 @@ pointers to memory the user can overwrite. A `multiallocation_iterator`: referencing the first byte user can overwrite in the memory buffer. * The iterator category depends on the memory allocation algorithm, - but it's a least a forward iterator. + but it's at least a forward iterator. -Here's an small example showing all this functionality: +Here is a small example showing all this functionality: [import ../example/doc_managed_multiple_allocation.cpp] [doc_managed_multiple_allocation] @@ -3867,7 +3867,7 @@ allocated buffer, because many times, due to alignment issues the allocated buffer a bit bigger than the requested size. Thus, the programmer can maximize the memory use using `allocation_command`. 
-Here's the declaration of the function: +Here is the declaration of the function: [c++] @@ -3992,12 +3992,12 @@ contain any of these values: `expand_fwd`, `expand_bwd`. performing backwards expansion, if you have already constructed objects in the old buffer, make sure to specify correctly the type.] -Here is an small example that shows the use of `allocation_command`: +Here is a small example that shows the use of `allocation_command`: [import ../example/doc_managed_allocation_command.cpp] [doc_managed_allocation_command] -`allocation_commmand` is a very powerful function that can lead to important +`allocation_command` is a very powerful function that can lead to important performance gains. It's specially useful when programming vector-like data structures where the programmer can minimize both the number of allocation requests and the memory waste. @@ -4016,7 +4016,7 @@ share an initial managed segment and make private changes to it. If many process open a managed segment in copy on write mode and not modified pages from the managed segment will be shared between all those processes, with considerable memory savings. -Opening managed shared memory and mapped files with [*open_read_only] maps the the +Opening managed shared memory and mapped files with [*open_read_only] maps the underlying device in memory with [*read-only] attributes. This means that any attempt to write that memory, either creating objects or locking any mutex might result in an page-fault error (and thus, program termination) from the OS. Read-only mode opens @@ -4034,7 +4034,7 @@ a managed memory segment without modifying it. Read-only mode operations are lim * Additionally, the `find<>` member function avoids using internal locks and can be used to look for named and unique objects. 
-Here's an example that shows the use of these two open modes: +Here is an example that shows the use of these two open modes: [import ../example/doc_managed_copy_on_write.cpp] [doc_managed_copy_on_write] @@ -4047,7 +4047,7 @@ [*Boost.Interprocess] offers managed shared memory between processes using `managed_shared_memory` or `managed_mapped_file`. Two processes just map the same -the memory mappable resoure and read from and write to that object. +memory mappable resource and read from and write to that object. Many times, we don't want to use that shared memory approach and we prefer to send serialized data through network, local socket or message queues. Serialization @@ -4093,7 +4093,7 @@ provided buffers that allow the same functionality as shared memory classes: [c++] //Named object creation managed memory segment - //All objects are constructed in a a user provided buffer + //All objects are constructed in a user provided buffer template < class CharType, class MemoryAlgorithm, @@ -4102,7 +4102,7 @@ provided buffers that allow the same functionality as shared memory classes: class basic_managed_external_buffer; //Named object creation managed memory segment - //All objects are constructed in a a user provided buffer + //All objects are constructed in a user provided buffer // Names are c-strings, // Default memory management algorithm // (rbtree_best_fit with no mutexes and relative pointers) @@ -4114,7 +4114,7 @@ provided buffers that allow the same functionality as shared memory classes: > managed_external_buffer; //Named object creation managed memory segment - //All objects are constructed in a a user provided buffer + //All objects are constructed in a user provided buffer // Names are wide-strings, // Default memory management algorithm // (rbtree_best_fit with no mutexes and relative pointers) @@ -4269,7 +4269,7 @@ memory mapped in different base addresses in several processes. 
[section:allocator_properties Properties of [*Boost.Interprocess] allocators] -Container allocators are normally default-contructible because the are stateless. +Container allocators are normally default-constructible because they are stateless. `std::allocator` and [*Boost.Pool's] `boost::pool_allocator`/`boost::fast_pool_allocator` are examples of default-constructible allocators. @@ -4403,7 +4403,7 @@ a fast and space-friendly allocator, as explained in the Segregate storage node allocators allocate large memory chunks from a general purpose memory -allocator and divide that chunk into several nodes. No bookeeping information +allocator and divide that chunk into several nodes. No bookkeeping information is stored in the nodes to achieve minimal memory waste: free nodes are linked using a pointer constructed in the memory of the node. @@ -4642,7 +4642,7 @@ of objects but they end storing a few of them: the node pool will be full of nod that won't be reused wasting memory from the segment. Adaptive pool based allocators trade some space (the overhead can be as low as 1%) -and performance (aceptable for many applications) with the ability to return free chunks +and performance (acceptable for many applications) with the ability to return free chunks of nodes to the memory segment, so that they can be used by any other container or managed object construction. To know the details of the implementation of of "adaptive pools" see the @@ -4777,7 +4777,7 @@ An example using [classref boost::interprocess::private_adaptive_pool private_ad [section:cached_adaptive_pool cached_adaptive_pool: Avoiding synchronization overhead] Adaptive pools have also a cached version. In this allocator the allocator caches -some nodes to avoid the synchronization and bookeeping overhead of the shared +some nodes to avoid the synchronization and bookkeeping overhead of the shared adaptive pool. 
[classref boost::interprocess::cached_adaptive_pool cached_adaptive_pool] allocates nodes from the common adaptive pool but caches some of them privately so that following @@ -5065,7 +5065,7 @@ smart pointers. Hopefully several Boost containers are compatible with [*Interpr [section:unordered Boost unordered containers] [*Boost.Unordered] containers are compatible with Interprocess, so programmers can store -hash containers in shared memory and memory mapped files. Here's an small example storing +hash containers in shared memory and memory mapped files. Here is a small example storing `unordered_map` in shared memory: [import ../example/doc_unordered_map.cpp] @@ -5082,7 +5082,7 @@ and those strings need to be placed in shared memory. Shared memory strings requ an allocator in their constructors so this usually makes object insertion a bit more complicated. -Here's is an example that shows how to put a multi index container in shared memory: +Here is an example that shows how to put a multi index container in shared memory: [import ../example/doc_multi_index.cpp] [doc_multi_index] @@ -5433,7 +5433,7 @@ something that is not possible if you want to place your data in shared memory. The virtual function limitation makes even impossible to achieve the same level of functionality of Boost and TR1 with [*Boost.Interprocess] smart pointers. -Interprocess ownership smart pointers are mainly "smart pointers contaning smart pointers", +Interprocess ownership smart pointers are mainly "smart pointers containing smart pointers", so we can specify the pointer type they contain. [section:intrusive_ptr Intrusive pointer] @@ -5463,7 +5463,7 @@ the pointer type to be stored in the intrusive_ptr: class intrusive_ptr; So `boost::interprocess::intrusive_ptr` is equivalent to -`boost::instrusive_ptr`. But if we want to place the intrusive_ptr in +`boost::intrusive_ptr`. 
But if we want to place the intrusive_ptr in shared memory we must specify a relative pointer type like `boost::interprocess::intrusive_ptr >` @@ -5511,7 +5511,7 @@ reference-counted objects in managed shared memory or mapped files. Unlike [@http://www.boost.org/libs/smart_ptr/shared_ptr.htm boost::shared_ptr], due to limitations of mapped segments [classref boost::interprocess::shared_ptr] -can not take advantage of virtual functions to maintain the same shared pointer +cannot take advantage of virtual functions to maintain the same shared pointer type while providing user-defined allocators and deleters. The allocator and the deleter are template parameters of the shared pointer. @@ -5523,7 +5523,7 @@ when constructing a non-empty instance of [classref boost::interprocess::shared_ptr shared_ptr], just like [*Boost.Interprocess] containers need to pass allocators in their constructors. -Here's is the declaration of [classref boost::interprocess::shared_ptr shared_ptr]: +Here is the declaration of [classref boost::interprocess::shared_ptr shared_ptr]: [c++] @@ -5567,7 +5567,7 @@ to easily construct a shared pointer from a type allocated in a managed segment with an allocator that will allocate the reference count also in the managed segment and a deleter that will erase the object from the segment. -These utilities will use the a [*Boost.Interprocess] allocator +These utilities will use a [*Boost.Interprocess] allocator ([classref boost::interprocess::allocator]) and deleter ([classref boost::interprocess::deleter]) to do their job. The definition of the previous shared pointer @@ -5766,7 +5766,7 @@ section. As memory algorithm examples, you can see the implementations The *segment manager*, is an object also placed in the first bytes of the managed memory segment (shared memory, memory mapped file), that offers more -sofisticated services built above the [*memory algorithm]. How can [*both] the +sophisticated services built above the [*memory algorithm]. 
How can [*both] the segment manager and memory algorithm be placed in the beginning of the segment? That's because the segment manager [*owns] the memory algorithm: The truth is that the memory algorithm is [*embedded] in the segment manager: @@ -5797,7 +5797,7 @@ implement "unique instance" allocations. * The first index is a map with a pointer to a c-string (the name of the named object) as a key and a structure with information of the dynamically allocated object - (the most importants being the address and the size of the object). + (the most important being the address and the size of the object). * The second index is used to implement "unique instances" and is basically the same as the first index, @@ -5906,7 +5906,7 @@ etc... Segregated storage pools are simple and follow the classic segregated storage algorithm. -* The pool allocates chuncks of memory using the segment manager's raw memory +* The pool allocates chunks of memory using the segment manager's raw memory allocation functions. * The chunk contains a pointer to form a singly linked list of chunks. The pool will contain a pointer to the first chunk. @@ -5932,7 +5932,7 @@ private_node_pool and shared_node_pool] classes. Adaptive pools are a variation of segregated lists but they have a more complicated approach: -* Instead of using raw allocation, the pool allocates [*aligned] chuncks of memory +* Instead of using raw allocation, the pool allocates [*aligned] chunks of memory using the segment manager. This is an [*essential] feature since a node can reach its chunk information applying a simple mask to its address. @@ -5958,7 +5958,7 @@ approach: * Deallocation returns the node to the free node list of its chunk and updates the "active" pool accordingly. -* If the number of totally free chunks exceds the limit, chunks are returned +* If the number of totally free chunks exceeds the limit, chunks are returned to the segment manager. 
* When the pool is destroyed, the list of chunks is traversed and memory is returned @@ -6037,7 +6037,7 @@ these alternatives: [section:performance_named_allocation Performance of named allocations] -[*Boost.Interprocess] allows the same paralelism as two threads writing to a common +[*Boost.Interprocess] allows the same parallelism as two threads writing to a common structure, except when the user creates/searches named/unique objects. The steps when creating a named object are these: @@ -6101,7 +6101,7 @@ The steps when destroying a named object using the pointer of the object If you see that the performance is not good enough you have these alternatives: -* Maybe the problem is that the lock time is too big and it hurts paralelism. +* Maybe the problem is that the lock time is too big and it hurts parallelism. Try to reduce the number of named objects in the global index and if your application serves several clients try to build a new managed memory segment for each one instead of using a common one. @@ -6146,12 +6146,12 @@ This is the interface to be implemented: //!The pointer type to be used by the rest of Interprocess framework typedef implementation_defined void_pointer; - //!Constructor. "size" is the total size of the maanged memory segment, + //!Constructor. "size" is the total size of the managed memory segment, //!"extra_hdr_bytes" indicates the extra bytes after the sizeof(my_algorithm) //!that the allocator should not use at all. my_algorithm (std::size_t size, std::size_t extra_hdr_bytes); - //!Obtains the minimium size needed by the algorithm + //!Obtains the minimum size needed by the algorithm static std::size_t get_min_size (std::size_t extra_hdr_bytes); //!Allocates bytes, returns 0 if there is not more memory @@ -6191,7 +6191,7 @@ But if we define: then all [*Boost.Interprocess] framework will use relative pointers. 
-The `mutex_family` is an structure containing typedefs +The `mutex_family` is a structure containing typedefs for different interprocess_mutex types to be used in the [*Boost.Interprocess] framework. For example the defined @@ -6236,7 +6236,7 @@ that boost::interprocess::rbtree_best_fit class offers: This function should be executed with the synchronization capabilities offered by `typename mutex_family::mutex_type` interprocess_mutex. That means, that if we define `typedef mutex_family mutex_family;` then this function should offer - the same synchronization as if it was surrounded by a interprocess_mutex lock/unlock. + the same synchronization as if it was surrounded by an interprocess_mutex lock/unlock. Normally, this is implemented using a member of type `mutex_family::mutex_type`, but it could be done using atomic instructions or lock free algorithms. @@ -6372,7 +6372,7 @@ following class: [c++] - //!The key of the the named allocation information index. Stores a to + //!The key of the named allocation information index. Stores a pointer to //!a null string and the length of the string to speed up sorting template<...> struct index_key @@ -6448,7 +6448,7 @@ Interprocess also defines other index types: * [*boost::map_index] uses *boost::interprocess::map* as index type. * [*boost::null_index] that uses an dummy index type if the user just needs - anonymous allocations and want's to save some space and class instantations. + anonymous allocations and wants to save some space and class instantiations. Defining a new managed memory segment that uses the new index is easy. For example, a new managed shared memory that uses the new index: @@ -6541,6 +6541,20 @@ warranty. [section:release_notes Release Notes] +[section:release_notes_boost_1_37_00 Boost 1.37 Release] + +* Containers can now be used in recursive types.
+* Added `BOOST_INTERPROCESS_FORCE_GENERIC_EMULATION` macro option to force the use + of generic emulation code for process-shared synchronization primitives instead of + native POSIX functions. +* Added placement insertion members to containers +* `boost::posix_time::pos_inf` value is now handled portably for timed functions. +* Update some function parameters from `iterator` to `const_iterator` in containers + to keep up with the draft of the next standard. +* Documentation fixes. + +[endsect] + [section:release_notes_boost_1_36_00 Boost 1.36 Release] * Added anonymous shared memory for UNIX systems. @@ -6559,7 +6573,7 @@ warranty. [classref boost::interprocess::shared_ptr shared_ptr], [classref boost::interprocess::weak_ptr weak_ptr] and [classref boost::interprocess::unique_ptr unique_ptr]. Added explanations - and examples of these smart pointers in the documenation. + and examples of these smart pointers in the documentation. * Optimized vector: * 1) Now works with raw pointers as much as possible when @@ -6576,7 +6590,7 @@ warranty. that might define virtual functions with the same names as container member functions. That would convert container functions in virtual functions and might disallow some of them if the returned type does not lead to a covariant return. - Allocators are now stored as base clases of internal structs. + Allocators are now stored as base classes of internal structs. * Implemented [classref boost::interprocess::named_mutex named_mutex] and [classref boost::interprocess::named_semaphore named_semaphore] with POSIX @@ -6671,12 +6685,12 @@ warranty. if the element has been moved (which is the case of many movable types). This trick was provided by Howard Hinnant. -* Added security check to avoid interger overflow bug in allocators and +* Added security check to avoid integer overflow bug in allocators and named construction functions. * Added alignment checks to forward and backwards expansion functions. 
-* Fixed bug in atomic funtions for PPC. +* Fixed bug in atomic functions for PPC. * Fixed race-condition error when creating and opening a managed segment. diff --git a/example/doc_contB.cpp b/example/doc_contB.cpp index 95fa43c..4aad144 100644 --- a/example/doc_contB.cpp +++ b/example/doc_contB.cpp @@ -19,7 +19,7 @@ int main () { using namespace boost::interprocess; try{ - //An special shared memory where we can + //A special shared memory where we can //construct objects associated with a name. //Connect to the already created shared memory segment //and initialize needed resources diff --git a/example/doc_ipc_messageA.cpp b/example/doc_ipc_messageA.cpp index 888135c..3a730f1 100644 --- a/example/doc_ipc_messageA.cpp +++ b/example/doc_ipc_messageA.cpp @@ -16,7 +16,7 @@ int main () { using namespace boost::interprocess; - //An special shared memory from which we are + //A special shared memory from which we are //able to allocate raw memory buffers. //First remove any old shared memory of the same name, create //the shared memory segment and initialize needed resources diff --git a/example/doc_ipc_messageB.cpp b/example/doc_ipc_messageB.cpp index 0984c6c..2c30d1a 100644 --- a/example/doc_ipc_messageB.cpp +++ b/example/doc_ipc_messageB.cpp @@ -17,7 +17,7 @@ int main () using namespace boost::interprocess; try{ - //An special shared memory from which we are + //A special shared memory from which we are //able to allocate raw memory buffers. 
//Connect to the already created shared memory segment //and initialize needed resources diff --git a/example/doc_managed_aligned_allocation.cpp b/example/doc_managed_aligned_allocation.cpp index 28f284f..2af9266 100644 --- a/example/doc_managed_aligned_allocation.cpp +++ b/example/doc_managed_aligned_allocation.cpp @@ -29,7 +29,7 @@ int main() void *ptr = managed_shm.allocate_aligned(100, Alignment); //Check alignment - assert(((char*)ptr-(char*)0) % Alignment == 0); + assert((static_cast<char*>(ptr)-static_cast<char*>(0)) % Alignment == 0); //Deallocate it managed_shm.deallocate(ptr); @@ -38,7 +38,7 @@ int main() ptr = managed_shm.allocate_aligned(100, Alignment, std::nothrow); //Check alignment - assert(((char*)ptr-(char*)0) % Alignment == 0); + assert((static_cast<char*>(ptr)-static_cast<char*>(0)) % Alignment == 0); //Deallocate it managed_shm.deallocate(ptr); @@ -53,7 +53,7 @@ int main() (3*Alignment - managed_shared_memory::PayloadPerAllocation, Alignment); //Check alignment - assert(((char*)ptr-(char*)0) % Alignment == 0); + assert((static_cast<char*>(ptr)-static_cast<char*>(0)) % Alignment == 0); //Deallocate it managed_shm.deallocate(ptr); diff --git a/example/doc_managed_copy_on_write.cpp b/example/doc_managed_copy_on_write.cpp index 0b8fecc..304be95 100644 --- a/example/doc_managed_copy_on_write.cpp +++ b/example/doc_managed_copy_on_write.cpp @@ -49,7 +49,7 @@ int main() std::fstream file("MyManagedFile2", std::ios_base::out | std::ios_base::binary); if(!file) throw int(0); - file.write((const char *)managed_file_cow.get_address(), managed_file_cow.get_size()); + file.write(static_cast<const char*>(managed_file_cow.get_address()), managed_file_cow.get_size()); } //Now open the modified file and test changes diff --git a/example/doc_managed_external_buffer.cpp b/example/doc_managed_external_buffer.cpp index 130cd2e..051b430 100644 --- a/example/doc_managed_external_buffer.cpp +++ b/example/doc_managed_external_buffer.cpp @@ -30,7 +30,7 @@ int main() //We optimize resources to create 100 named objects in the
static buffer objects_in_static_memory.reserve_named_objects(100); - //Alias a integer node allocator type + //Alias an integer node allocator type //This allocator will allocate memory inside the static buffer typedef allocator allocator_t; diff --git a/example/doc_named_allocA.cpp b/example/doc_named_allocA.cpp index 4f779c9..84eb889 100644 --- a/example/doc_named_allocA.cpp +++ b/example/doc_named_allocA.cpp @@ -19,7 +19,7 @@ int main () typedef std::pair MyType; try{ - //An special shared memory where we can + //A special shared memory where we can //construct objects associated with a name. //First remove any old shared memory of the same name, create //the shared memory segment and initialize needed resources diff --git a/example/doc_named_allocB.cpp b/example/doc_named_allocB.cpp index 4067f19..8b6bb65 100644 --- a/example/doc_named_allocB.cpp +++ b/example/doc_named_allocB.cpp @@ -21,7 +21,7 @@ int main () typedef std::pair MyType; try{ - //An special shared memory where we can + //A special shared memory where we can //construct objects associated with a name. //Connect to the already created shared memory segment //and initialize needed resources diff --git a/example/doc_offset_ptr.cpp b/example/doc_offset_ptr.cpp index b1c5b61..efa7930 100644 --- a/example/doc_offset_ptr.cpp +++ b/example/doc_offset_ptr.cpp @@ -25,7 +25,7 @@ struct list_node int main () { //Destroy any previous shared memory with the name to be used. - //Create an special shared memory from which we can + //Create a special shared memory from which we can //allocate buffers of raw memory. shared_memory_object::remove("MySharedMemory"); try{ diff --git a/example/doc_shared_memory2.cpp b/example/doc_shared_memory2.cpp index 3fd2da2..c977095 100644 --- a/example/doc_shared_memory2.cpp +++ b/example/doc_shared_memory2.cpp @@ -17,6 +17,7 @@ int main () { using namespace boost::interprocess; + shared_memory_object::remove("shared_memory"); try{ //Open already created shared memory object. 
shared_memory_object shm (open_only, "shared_memory", read_only); diff --git a/proj/to-do.txt b/proj/to-do.txt new file mode 100644 index 0000000..49ff717 --- /dev/null +++ b/proj/to-do.txt @@ -0,0 +1,54 @@ +-> change rvalue reference signatures in all containers + +-> add private_read_only to mapped_region to support MAP_PRIVATE plus PROT_READ + +-> add contiguous_elements option to burst allocation + +-> Test construct<> with throwing constructors + +-> Implement zero_memory flag for allocation_command + +-> The general allocation function can be improved with some fixed size allocation bins. + +-> Adapt error reporting to TR1 system exceptions + +-> Improve exception messages + +-> Movability of containers should depend on the no-throw guarantee of the allocator's copy constructor + +-> Check self-assignment for vectors + +-> Update writing a new memory allocator explaining new functions (like alignment) + +-> private node allocators could take the number of nodes as a runtime parameter. + +-> Explain how to build intrusive indexes. + +-> Add intrusive index types as available indexes. + +-> Add maximum alignment allocation limit in PageSize bytes. Otherwise, we can't + guarantee alignment for process-shared allocations. + +-> Add default algorithm and index types. The user does not need to know how + they are implemented. + +-> Pass max size check in allocation to node pools + +-> Use in-place expansion capabilities to shrink_to_fit and reserve functions + from iunordered_index. + +-> change unique_ptr to avoid using compressed_pair + +-> Improve unique_ptr test to test move assignment and other goodies like assignment from null + +-> barrier_test fails on MacOS X on PowerPC.
+ +->use virtual functions to minimize template explosion in managed classes + +->Insertions with InpIt are not tested in containers + +->Run tests with rvalue reference compilers with no variadic insertions + +->find a way to pass security attributes to shared memory + +->Explain in docs that shared memory can't be used between different users in windows diff --git a/proj/vc7ide/Interprocess.sln b/proj/vc7ide/Interprocess.sln index 7f00fbe..ed25a07 100644 --- a/proj/vc7ide/Interprocess.sln +++ b/proj/vc7ide/Interprocess.sln @@ -459,13 +459,15 @@ Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "doc_complex_map", "doc_comp ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "named_construct_test", "named_construct_test.vcproj", "{5183C8CE-F2E1-3620-237A-B765C9896390}" + ProjectSection(ProjectDependencies) = postProject + EndProjectSection +EndProject Global GlobalSection(SolutionConfiguration) = preSolution Debug = Debug Release = Release EndGlobalSection - GlobalSection(ProjectDependencies) = postSolution - EndGlobalSection GlobalSection(ProjectConfiguration) = postSolution {5CE18C83-6025-36FE-A4F7-BA09176D3A11}.Debug.ActiveCfg = Debug|Win32 {5CE18C83-6025-36FE-A4F7-BA09176D3A11}.Debug.Build.0 = Debug|Win32 @@ -927,6 +929,10 @@ Global {5C19CF83-4FB7-8219-8F6D-3BA9D2715A22}.Debug.Build.0 = Debug|Win32 {5C19CF83-4FB7-8219-8F6D-3BA9D2715A22}.Release.ActiveCfg = Release|Win32 {5C19CF83-4FB7-8219-8F6D-3BA9D2715A22}.Release.Build.0 = Release|Win32 + {5183C8CE-F2E1-3620-237A-B765C9896390}.Debug.ActiveCfg = Debug|Win32 + {5183C8CE-F2E1-3620-237A-B765C9896390}.Debug.Build.0 = Debug|Win32 + {5183C8CE-F2E1-3620-237A-B765C9896390}.Release.ActiveCfg = Release|Win32 + {5183C8CE-F2E1-3620-237A-B765C9896390}.Release.Build.0 = Release|Win32 EndGlobalSection GlobalSection(ExtensibilityGlobals) = postSolution EndGlobalSection diff --git a/proj/vc7ide/interprocesslib.vcproj 
b/proj/vc7ide/interprocesslib.vcproj index 0f79970..4b4a0e4 100644 --- a/proj/vc7ide/interprocesslib.vcproj +++ b/proj/vc7ide/interprocesslib.vcproj @@ -361,6 +361,9 @@ + + @@ -391,12 +394,18 @@ + + + + @@ -415,6 +424,9 @@ + + @@ -430,6 +442,9 @@ + + @@ -445,6 +460,9 @@ + + @@ -526,6 +544,9 @@ + + diff --git a/proj/vc7ide/named_construct_test.vcproj b/proj/vc7ide/named_construct_test.vcproj new file mode 100644 index 0000000..b5f2cbd --- /dev/null +++ b/proj/vc7ide/named_construct_test.vcproj @@ -0,0 +1,133 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/test/allocator_v1.hpp b/test/allocator_v1.hpp index b46c3c4..d52a039 100644 --- a/test/allocator_v1.hpp +++ b/test/allocator_v1.hpp @@ -110,7 +110,7 @@ class allocator_v1 //!Allocates memory for an array of count elements. //!Throws boost::interprocess::bad_alloc if there is no enough memory pointer allocate(size_type count, cvoid_ptr hint = 0) - { (void)hint; return pointer((value_type*)mp_mngr->allocate(count*sizeof(value_type))); } + { (void)hint; return pointer(static_cast(mp_mngr->allocate(count*sizeof(value_type)))); } //!Deallocates memory previously allocated. 
Never throws void deallocate(const pointer &ptr, size_type) diff --git a/test/cached_node_allocator_test.cpp b/test/cached_node_allocator_test.cpp index c29e639..a70c3e3 100644 --- a/test/cached_node_allocator_test.cpp +++ b/test/cached_node_allocator_test.cpp @@ -21,7 +21,7 @@ using namespace boost::interprocess; -//Alias a integer node allocator type +//Alias an integer node allocator type typedef cached_node_allocator cached_node_allocator_t; diff --git a/test/deque_test.cpp b/test/deque_test.cpp index 483f8da..ceb5044 100644 --- a/test/deque_test.cpp +++ b/test/deque_test.cpp @@ -14,6 +14,7 @@ #include #include #include +#include #include #include @@ -31,14 +32,15 @@ #include #include #include "get_process_id_name.hpp" +#include "emplace_test.hpp" -//***************************************************************// +/////////////////////////////////////////////////////////////////// // // // This example repeats the same operations with std::deque and // // shmem_deque using the node allocator // // and compares the values of both containers // // // -//***************************************************************// +/////////////////////////////////////////////////////////////////// using namespace boost::interprocess; @@ -63,23 +65,64 @@ bool copyable_only(V1 *shmdeque, V2 *stddeque, detail::true_type) shmdeque->insert(shmdeque->end(), 50, 1); if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; { - IntType move_me(1); - stddeque->insert(stddeque->begin()+size/2, 50, 1); - shmdeque->insert(shmdeque->begin()+size/2, 50, detail::move_impl(move_me)); - if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; + IntType move_me(1); + stddeque->insert(stddeque->begin()+size/2, 50, 1); + shmdeque->insert(shmdeque->begin()+size/2, 50, detail::move_impl(move_me)); + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; } { - IntType move_me(2); - shmdeque->assign(shmdeque->size()/2, detail::move_impl(move_me)); - 
stddeque->assign(stddeque->size()/2, 2); - if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; + IntType move_me(2); + shmdeque->assign(shmdeque->size()/2, detail::move_impl(move_me)); + stddeque->assign(stddeque->size()/2, 2); + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; } + { + IntType move_me(1); + stddeque->clear(); + shmdeque->clear(); + stddeque->insert(stddeque->begin(), 50, 1); + shmdeque->insert(shmdeque->begin(), 50, detail::move_impl(move_me)); + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; + stddeque->insert(stddeque->begin()+20, 50, 1); + shmdeque->insert(shmdeque->begin()+20, 50, detail::move_impl(move_me)); + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; + stddeque->insert(stddeque->begin()+20, 20, 1); + shmdeque->insert(shmdeque->begin()+20, 20, detail::move_impl(move_me)); + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; + } + { + IntType move_me(1); + stddeque->clear(); + shmdeque->clear(); + stddeque->insert(stddeque->end(), 50, 1); + shmdeque->insert(shmdeque->end(), 50, detail::move_impl(move_me)); + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; + stddeque->insert(stddeque->end()-20, 50, 1); + shmdeque->insert(shmdeque->end()-20, 50, detail::move_impl(move_me)); + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; + stddeque->insert(stddeque->end()-20, 20, 1); + shmdeque->insert(shmdeque->end()-20, 20, detail::move_impl(move_me)); + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; + } + return true; } +//Test recursive structures +class recursive_deque +{ +public: + int id_; + deque deque_; +}; + template class AllocatorType > bool do_test() { + //Test for recursive types + { + deque recursive_deque_deque; + } //Customize managed_shared_memory class typedef basic_managed_shared_memory insert(shmdeque->end(), detail::move_impl(move_me)); stddeque->insert(stddeque->end(), i); } - 
if(!test::CheckEqualContainers(shmdeque, stddeque)) return 1; + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; + + shmdeque->clear(); + stddeque->clear(); + + for(i = 0; i < max*100; ++i){ + IntType move_me(i); + shmdeque->push_back(detail::move_impl(move_me)); + stddeque->push_back(i); + } + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; + + shmdeque->clear(); + stddeque->clear(); + + for(i = 0; i < max*100; ++i){ + IntType move_me(i); + shmdeque->push_front(detail::move_impl(move_me)); + stddeque->push_front(i); + } + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; typename MyShmDeque::iterator it; typename MyShmDeque::const_iterator cit = it; shmdeque->erase(shmdeque->begin()++); stddeque->erase(stddeque->begin()++); - if(!test::CheckEqualContainers(shmdeque, stddeque)) return 1; + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; shmdeque->erase(shmdeque->begin()); stddeque->erase(stddeque->begin()); - if(!test::CheckEqualContainers(shmdeque, stddeque)) return 1; + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; { //Initialize values @@ -185,21 +248,40 @@ bool do_test() shmdeque->erase(shmdeque->begin()); stddeque->erase(stddeque->begin()); - if(!test::CheckEqualContainers(shmdeque, stddeque)) return 1; + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; for(i = 0; i < max; ++i){ IntType move_me(i); shmdeque->insert(shmdeque->begin(), detail::move_impl(move_me)); stddeque->insert(stddeque->begin(), i); } - if(!test::CheckEqualContainers(shmdeque, stddeque)) return 1; + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; + + //Test insertion from list + { + std::list l(50, int(1)); + shmdeque->insert(shmdeque->begin(), l.begin(), l.end()); + stddeque->insert(stddeque->begin(), l.begin(), l.end()); + if(!test::CheckEqualContainers(shmdeque, stddeque)) return 1; + shmdeque->assign(l.begin(), l.end()); + stddeque->assign(l.begin(), l.end()); + 
if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; + } + + shmdeque->resize(100); + stddeque->resize(100); + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; + + shmdeque->resize(200); + stddeque->resize(200); + if(!test::CheckEqualContainers(shmdeque, stddeque)) return false; segment.template destroy("MyShmDeque"); delete stddeque; segment.shrink_to_fit_indexes(); if(!segment.all_memory_deallocated()) - return 1; + return false; } catch(std::exception &ex){ std::cout << ex.what() << std::endl; @@ -227,6 +309,12 @@ int main () if(!do_test()) return 1; + const test::EmplaceOptions Options = (test::EmplaceOptions)(test::EMPLACE_BACK | test::EMPLACE_FRONT | test::EMPLACE_BEFORE); + + if(!boost::interprocess::test::test_emplace + < deque, Options>()) + return 1; + return 0; } diff --git a/test/emplace_test.hpp b/test/emplace_test.hpp new file mode 100644 index 0000000..103d8da --- /dev/null +++ b/test/emplace_test.hpp @@ -0,0 +1,640 @@ +////////////////////////////////////////////////////////////////////////////// +// +// (C) Copyright Ion Gaztanaga 2008. Distributed under the Boost +// Software License, Version 1.0. (See accompanying file +// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) +// +// See http://www.boost.org/libs/interprocess for documentation.
+// +////////////////////////////////////////////////////////////////////////////// +#ifndef BOOST_INTERPROCESS_TEST_EMPLACE_TEST_HPP +#define BOOST_INTERPROCESS_TEST_EMPLACE_TEST_HPP + +#include +#include +#include +#include +#include +#include +#include + +namespace boost{ +namespace interprocess{ +namespace test{ + +class EmplaceInt +{ + private: + EmplaceInt (const EmplaceInt &o); + EmplaceInt& operator=(const EmplaceInt &o); + + public: + EmplaceInt(int a = 0, int b = 0, int c = 0, int d = 0, int e = 0) + : a_(a), b_(b), c_(c), d_(d), e_(e) + {} + + #ifdef BOOST_INTERPROCESS_RVALUE_REFERENCE + EmplaceInt(EmplaceInt &&o) + : a_(o.a_), b_(o.b_), c_(o.c_), d_(o.d_), e_(o.e_) + #else + EmplaceInt(detail::moved_object mo) + : a_(mo.get().a_), b_(mo.get().b_), c_(mo.get().c_), d_(mo.get().d_), e_(mo.get().e_) + #endif + {} + + #ifdef BOOST_INTERPROCESS_RVALUE_REFERENCE + EmplaceInt& operator=(EmplaceInt &&o) + { + #else + EmplaceInt& operator=(detail::moved_object mo) + { + EmplaceInt &o = mo.get(); + #endif + this->a_ = o.a_; + this->b_ = o.b_; + this->c_ = o.c_; + this->d_ = o.d_; + this->e_ = o.e_; + return *this; + } + + friend bool operator==(const EmplaceInt &l, const EmplaceInt &r) + { + return l.a_ == r.a_ && + l.b_ == r.b_ && + l.c_ == r.c_ && + l.d_ == r.d_ && + l.e_ == r.e_; + } + + friend bool operator<(const EmplaceInt &l, const EmplaceInt &r) + { return l.sum() < r.sum(); } + + friend bool operator>(const EmplaceInt &l, const EmplaceInt &r) + { return l.sum() > r.sum(); } + + friend bool operator!=(const EmplaceInt &l, const EmplaceInt &r) + { return !(l == r); } + + friend std::ostream &operator <<(std::ostream &os, const EmplaceInt &v) + { + os << "EmplaceInt: " << v.a_ << ' ' << v.b_ << ' ' << v.c_ << ' ' << v.d_ << ' ' << v.e_; + return os; + } + + //private: + int sum() const + { return this->a_ + this->b_ + this->c_ + this->d_ + this->e_; } + + int a_, b_, c_, d_, e_; + int padding[6]; +}; + + +} //namespace test { + +template<> +struct 
is_movable +{ + static const bool value = true; +}; + +namespace test { + +enum EmplaceOptions{ + EMPLACE_BACK = 1 << 0, + EMPLACE_FRONT = 1 << 1, + EMPLACE_BEFORE = 1 << 2, + EMPLACE_AFTER = 1 << 3, + EMPLACE_ASSOC = 1 << 4, + EMPLACE_HINT = 1 << 5, + EMPLACE_ASSOC_PAIR = 1 << 6, + EMPLACE_HINT_PAIR = 1 << 7 +}; + +template +bool test_expected_container(const Container &ec, const EmplaceInt *Expected, unsigned int only_first_n) +{ + typedef typename Container::const_iterator const_iterator; + const_iterator itb(ec.begin()), ite(ec.end()); + unsigned int cur = 0; + if(only_first_n > ec.size()){ + return false; + } + for(; itb != ite && only_first_n--; ++itb, ++cur){ + const EmplaceInt & cr = *itb; + if(cr != Expected[cur]){ + return false; + } + } + return true; +} + +template +bool test_expected_container(const Container &ec, const std::pair *Expected, unsigned int only_first_n) +{ + typedef typename Container::const_iterator const_iterator; + const_iterator itb(ec.begin()), ite(ec.end()); + unsigned int cur = 0; + if(only_first_n > ec.size()){ + return false; + } + for(; itb != ite && only_first_n--; ++itb, ++cur){ + if(itb->first != Expected[cur].first){ + std::cout << "Error in first: " << itb->first << ' ' << Expected[cur].first << std::endl; + return false; + + } + else if(itb->second != Expected[cur].second){ + std::cout << "Error in second: " << itb->second << ' ' << Expected[cur].second << std::endl; + return false; + } + } + return true; +} + +static EmplaceInt expected [10]; + +typedef std::pair EmplaceIntPair; +static boost::aligned_storage::type pair_storage; + +static EmplaceIntPair* initialize_emplace_int_pair() +{ + EmplaceIntPair* ret = reinterpret_cast(&pair_storage); + for(unsigned int i = 0; i != 10; ++i){ + new(&ret->first)EmplaceInt(); + new(&ret->second)EmplaceInt(); + } + return ret; +} + +static EmplaceIntPair * expected_pair = initialize_emplace_int_pair(); + + +template +bool test_emplace_back(detail::true_) +{ + std::cout << "Starting 
test_emplace_back." << std::endl << " Class: " + << typeid(Container).name() << std::endl; + + { + new(&expected [0]) EmplaceInt(); + new(&expected [1]) EmplaceInt(1); + new(&expected [2]) EmplaceInt(1, 2); + new(&expected [3]) EmplaceInt(1, 2, 3); + new(&expected [4]) EmplaceInt(1, 2, 3, 4); + new(&expected [5]) EmplaceInt(1, 2, 3, 4, 5); + Container c; + c.emplace_back(); + if(!test_expected_container(c, &expected[0], 1)) + return false; + c.emplace_back(1); + if(!test_expected_container(c, &expected[0], 2)) + return false; + c.emplace_back(1, 2); + if(!test_expected_container(c, &expected[0], 3)) + return false; + c.emplace_back(1, 2, 3); + if(!test_expected_container(c, &expected[0], 4)) + return false; + c.emplace_back(1, 2, 3, 4); + if(!test_expected_container(c, &expected[0], 5)) + return false; + c.emplace_back(1, 2, 3, 4, 5); + if(!test_expected_container(c, &expected[0], 6)) + return false; + } + + return true; +} + +template +bool test_emplace_back(detail::false_) +{ return true; } + +template +bool test_emplace_front(detail::true_) +{ + std::cout << "Starting test_emplace_front." 
<< std::endl << " Class: " + << typeid(Container).name() << std::endl; + + { + new(&expected [0]) EmplaceInt(1, 2, 3, 4, 5); + new(&expected [1]) EmplaceInt(1, 2, 3, 4); + new(&expected [2]) EmplaceInt(1, 2, 3); + new(&expected [3]) EmplaceInt(1, 2); + new(&expected [4]) EmplaceInt(1); + new(&expected [5]) EmplaceInt(); + Container c; + c.emplace_front(); + if(!test_expected_container(c, &expected[0] + 5, 1)) + return false; + c.emplace_front(1); + if(!test_expected_container(c, &expected[0] + 4, 2)) + return false; + c.emplace_front(1, 2); + if(!test_expected_container(c, &expected[0] + 3, 3)) + return false; + c.emplace_front(1, 2, 3); + if(!test_expected_container(c, &expected[0] + 2, 4)) + return false; + c.emplace_front(1, 2, 3, 4); + if(!test_expected_container(c, &expected[0] + 1, 5)) + return false; + c.emplace_front(1, 2, 3, 4, 5); + if(!test_expected_container(c, &expected[0] + 0, 6)) + return false; + } + return true; +} + +template +bool test_emplace_front(detail::false_) +{ return true; } + +template +bool test_emplace_before(detail::true_) +{ + std::cout << "Starting test_emplace_before." 
<< std::endl << " Class: " + << typeid(Container).name() << std::endl; + + { + new(&expected [0]) EmplaceInt(); + new(&expected [1]) EmplaceInt(1); + new(&expected [2]) EmplaceInt(); + Container c; + c.emplace(c.cend(), 1); + c.emplace(c.cbegin()); + if(!test_expected_container(c, &expected[0], 2)) + return false; + c.emplace(c.cend()); + if(!test_expected_container(c, &expected[0], 3)) + return false; + } + { + new(&expected [0]) EmplaceInt(); + new(&expected [1]) EmplaceInt(1); + new(&expected [2]) EmplaceInt(1, 2); + new(&expected [3]) EmplaceInt(1, 2, 3); + new(&expected [4]) EmplaceInt(1, 2, 3, 4); + new(&expected [5]) EmplaceInt(1, 2, 3, 4, 5); + //emplace_front-like + Container c; + c.emplace(c.cbegin(), 1, 2, 3, 4, 5); + c.emplace(c.cbegin(), 1, 2, 3, 4); + c.emplace(c.cbegin(), 1, 2, 3); + c.emplace(c.cbegin(), 1, 2); + c.emplace(c.cbegin(), 1); + c.emplace(c.cbegin()); + if(!test_expected_container(c, &expected[0], 6)) + return false; + c.clear(); + //emplace_back-like + typename Container::const_iterator i = c.emplace(c.cend()); + if(!test_expected_container(c, &expected[0], 1)) + return false; + i = c.emplace(++i, 1); + if(!test_expected_container(c, &expected[0], 2)) + return false; + i = c.emplace(++i, 1, 2); + if(!test_expected_container(c, &expected[0], 3)) + return false; + i = c.emplace(++i, 1, 2, 3); + if(!test_expected_container(c, &expected[0], 4)) + return false; + i = c.emplace(++i, 1, 2, 3, 4); + if(!test_expected_container(c, &expected[0], 5)) + return false; + i = c.emplace(++i, 1, 2, 3, 4, 5); + if(!test_expected_container(c, &expected[0], 6)) + return false; + c.clear(); + //emplace in the middle + c.emplace(c.cbegin()); + i = c.emplace(c.cend(), 1, 2, 3, 4, 5); + i = c.emplace(i, 1, 2, 3, 4); + i = c.emplace(i, 1, 2, 3); + i = c.emplace(i, 1, 2); + i = c.emplace(i, 1); + + if(!test_expected_container(c, &expected[0], 6)) + return false; + } + return true; +} + +template +bool test_emplace_before(detail::false_) +{ return true; } + 
+template +bool test_emplace_after(detail::true_) +{ + std::cout << "Starting test_emplace_after." << std::endl << " Class: " + << typeid(Container).name() << std::endl; + { + new(&expected [0]) EmplaceInt(); + new(&expected [1]) EmplaceInt(1); + new(&expected [2]) EmplaceInt(); + Container c; + typename Container::const_iterator i = c.emplace_after(c.cbefore_begin(), 1); + c.emplace_after(c.cbefore_begin()); + if(!test_expected_container(c, &expected[0], 2)) + return false; + c.emplace_after(i); + if(!test_expected_container(c, &expected[0], 3)) + return false; + } + { + new(&expected [0]) EmplaceInt(); + new(&expected [1]) EmplaceInt(1); + new(&expected [2]) EmplaceInt(1, 2); + new(&expected [3]) EmplaceInt(1, 2, 3); + new(&expected [4]) EmplaceInt(1, 2, 3, 4); + new(&expected [5]) EmplaceInt(1, 2, 3, 4, 5); + //emplace_front-like + Container c; + c.emplace_after(c.cbefore_begin(), 1, 2, 3, 4, 5); + c.emplace_after(c.cbefore_begin(), 1, 2, 3, 4); + c.emplace_after(c.cbefore_begin(), 1, 2, 3); + c.emplace_after(c.cbefore_begin(), 1, 2); + c.emplace_after(c.cbefore_begin(), 1); + c.emplace_after(c.cbefore_begin()); + if(!test_expected_container(c, &expected[0], 6)) + return false; + c.clear(); + //emplace_back-like + typename Container::const_iterator i = c.emplace_after(c.cbefore_begin()); + if(!test_expected_container(c, &expected[0], 1)) + return false; + i = c.emplace_after(i, 1); + if(!test_expected_container(c, &expected[0], 2)) + return false; + i = c.emplace_after(i, 1, 2); + if(!test_expected_container(c, &expected[0], 3)) + return false; + i = c.emplace_after(i, 1, 2, 3); + if(!test_expected_container(c, &expected[0], 4)) + return false; + i = c.emplace_after(i, 1, 2, 3, 4); + if(!test_expected_container(c, &expected[0], 5)) + return false; + i = c.emplace_after(i, 1, 2, 3, 4, 5); + if(!test_expected_container(c, &expected[0], 6)) + return false; + c.clear(); + //emplace_after in the middle + i = c.emplace_after(c.cbefore_begin()); + c.emplace_after(i, 1, 
2, 3, 4, 5); + c.emplace_after(i, 1, 2, 3, 4); + c.emplace_after(i, 1, 2, 3); + c.emplace_after(i, 1, 2); + c.emplace_after(i, 1); + + if(!test_expected_container(c, &expected[0], 6)) + return false; + } + return true; +} + +template +bool test_emplace_after(detail::false_) +{ return true; } + +template +bool test_emplace_assoc(detail::true_) +{ + std::cout << "Starting test_emplace_assoc." << std::endl << " Class: " + << typeid(Container).name() << std::endl; + + new(&expected [0]) EmplaceInt(); + new(&expected [1]) EmplaceInt(1); + new(&expected [2]) EmplaceInt(1, 2); + new(&expected [3]) EmplaceInt(1, 2, 3); + new(&expected [4]) EmplaceInt(1, 2, 3, 4); + new(&expected [5]) EmplaceInt(1, 2, 3, 4, 5); + { + Container c; + c.emplace(); + if(!test_expected_container(c, &expected[0], 1)) + return false; + c.emplace(1); + if(!test_expected_container(c, &expected[0], 2)) + return false; + c.emplace(1, 2); + if(!test_expected_container(c, &expected[0], 3)) + return false; + c.emplace(1, 2, 3); + if(!test_expected_container(c, &expected[0], 4)) + return false; + c.emplace(1, 2, 3, 4); + if(!test_expected_container(c, &expected[0], 5)) + return false; + c.emplace(1, 2, 3, 4, 5); + if(!test_expected_container(c, &expected[0], 6)) + return false; + } + return true; +} + +template +bool test_emplace_assoc(detail::false_) +{ return true; } + +template +bool test_emplace_hint(detail::true_) +{ + std::cout << "Starting test_emplace_hint." 
<< std::endl << " Class: " + << typeid(Container).name() << std::endl; + + new(&expected [0]) EmplaceInt(); + new(&expected [1]) EmplaceInt(1); + new(&expected [2]) EmplaceInt(1, 2); + new(&expected [3]) EmplaceInt(1, 2, 3); + new(&expected [4]) EmplaceInt(1, 2, 3, 4); + new(&expected [5]) EmplaceInt(1, 2, 3, 4, 5); + + { + Container c; + typename Container::const_iterator it; + it = c.emplace_hint(c.begin()); + if(!test_expected_container(c, &expected[0], 1)) + return false; + it = c.emplace_hint(it, 1); + if(!test_expected_container(c, &expected[0], 2)) + return false; + it = c.emplace_hint(it, 1, 2); + if(!test_expected_container(c, &expected[0], 3)) + return false; + it = c.emplace_hint(it, 1, 2, 3); + if(!test_expected_container(c, &expected[0], 4)) + return false; + it = c.emplace_hint(it, 1, 2, 3, 4); + if(!test_expected_container(c, &expected[0], 5)) + return false; + it = c.emplace_hint(it, 1, 2, 3, 4, 5); + if(!test_expected_container(c, &expected[0], 6)) + return false; + } + + return true; +} + +template +bool test_emplace_hint(detail::false_) +{ return true; } + +template +bool test_emplace_assoc_pair(detail::true_) +{ + std::cout << "Starting test_emplace_assoc_pair." 
<< std::endl << " Class: " + << typeid(Container).name() << std::endl; + + new(&expected_pair[0].first) EmplaceInt(); + new(&expected_pair[0].second) EmplaceInt(); + new(&expected_pair[1].first) EmplaceInt(1); + new(&expected_pair[1].second) EmplaceInt(); + new(&expected_pair[2].first) EmplaceInt(2); + new(&expected_pair[2].second) EmplaceInt(2); + new(&expected_pair[3].first) EmplaceInt(3); + new(&expected_pair[3].second) EmplaceInt(2, 3); + new(&expected_pair[4].first) EmplaceInt(4); + new(&expected_pair[4].second) EmplaceInt(2, 3, 4); + new(&expected_pair[5].first) EmplaceInt(5); + new(&expected_pair[5].second) EmplaceInt(2, 3, 4, 5); + { + Container c; + c.emplace(); + if(!test_expected_container(c, &expected_pair[0], 1)){ + std::cout << "Error after c.emplace();\n"; + return false; + } + c.emplace(1); + if(!test_expected_container(c, &expected_pair[0], 2)){ + std::cout << "Error after c.emplace(1);\n"; + return false; + } + c.emplace(2, 2); + if(!test_expected_container(c, &expected_pair[0], 3)){ + std::cout << "Error after c.emplace(2, 2);\n"; + return false; + } + c.emplace(3, 2, 3); + if(!test_expected_container(c, &expected_pair[0], 4)){ + std::cout << "Error after c.emplace(3, 2, 3);\n"; + return false; + } + c.emplace(4, 2, 3, 4); + if(!test_expected_container(c, &expected_pair[0], 5)){ + std::cout << "Error after c.emplace(4, 2, 3, 4);\n"; + return false; + } + c.emplace(5, 2, 3, 4, 5); + if(!test_expected_container(c, &expected_pair[0], 6)){ + std::cout << "Error after c.emplace(5, 2, 3, 4, 5);\n"; + return false; + } + } + return true; +} + +template +bool test_emplace_assoc_pair(detail::false_) +{ return true; } + +template +bool test_emplace_hint_pair(detail::true_) +{ + std::cout << "Starting test_emplace_hint_pair." 
<< std::endl << " Class: " + << typeid(Container).name() << std::endl; + + new(&expected_pair[0].first) EmplaceInt(); + new(&expected_pair[0].second) EmplaceInt(); + new(&expected_pair[1].first) EmplaceInt(1); + new(&expected_pair[1].second) EmplaceInt(); + new(&expected_pair[2].first) EmplaceInt(2); + new(&expected_pair[2].second) EmplaceInt(2); + new(&expected_pair[3].first) EmplaceInt(3); + new(&expected_pair[3].second) EmplaceInt(2, 3); + new(&expected_pair[4].first) EmplaceInt(4); + new(&expected_pair[4].second) EmplaceInt(2, 3, 4); + new(&expected_pair[5].first) EmplaceInt(5); + new(&expected_pair[5].second) EmplaceInt(2, 3, 4, 5); + { + Container c; + typename Container::const_iterator it; + it = c.emplace_hint(c.begin()); + if(!test_expected_container(c, &expected_pair[0], 1)){ + std::cout << "Error after c.emplace(1);\n"; + return false; + } + it = c.emplace_hint(it, 1); + if(!test_expected_container(c, &expected_pair[0], 2)){ + std::cout << "Error after c.emplace(it, 1);\n"; + return false; + } + it = c.emplace_hint(it, 2, 2); + if(!test_expected_container(c, &expected_pair[0], 3)){ + std::cout << "Error after c.emplace(it, 2, 2);\n"; + return false; + } + it = c.emplace_hint(it, 3, 2, 3); + if(!test_expected_container(c, &expected_pair[0], 4)){ + std::cout << "Error after c.emplace(it, 3, 2, 3);\n"; + return false; + } + it = c.emplace_hint(it, 4, 2, 3, 4); + if(!test_expected_container(c, &expected_pair[0], 5)){ + std::cout << "Error after c.emplace(it, 4, 2, 3, 4);\n"; + return false; + } + it = c.emplace_hint(it, 5, 2, 3, 4, 5); + if(!test_expected_container(c, &expected_pair[0], 6)){ + std::cout << "Error after c.emplace(it, 5, 2, 3, 4, 5);\n"; + return false; + } + } + return true; +} + +template +bool test_emplace_hint_pair(detail::false_) +{ return true; } + +template +struct emplace_active +{ + static const bool value = (0 != (O & Mask)); + typedef detail::bool_ type; + operator type() const{ return type(); } +}; + +template +bool test_emplace() 
+{ + if(!test_emplace_back<Container>(emplace_active<O, EMPLACE_BACK>())) + return false; + if(!test_emplace_front<Container>(emplace_active<O, EMPLACE_FRONT>())) + return false; + if(!test_emplace_before<Container>(emplace_active<O, EMPLACE_BEFORE>())) + return false; + if(!test_emplace_after<Container>(emplace_active<O, EMPLACE_AFTER>())) + return false; + if(!test_emplace_assoc<Container>(emplace_active<O, EMPLACE_ASSOC>())) + return false; + if(!test_emplace_hint<Container>(emplace_active<O, EMPLACE_HINT>())) + return false; + if(!test_emplace_assoc_pair<Container>(emplace_active<O, EMPLACE_ASSOC_PAIR>())) + return false; + if(!test_emplace_hint_pair<Container>(emplace_active<O, EMPLACE_HINT_PAIR>())) + return false; + return true; +} + +} //namespace test{ +} //namespace interprocess{ +} //namespace boost{ + +#include <boost/interprocess/detail/config_end.hpp> + +#endif //#ifndef BOOST_INTERPROCESS_TEST_EMPLACE_TEST_HPP diff --git a/test/expand_bwd_test_template.hpp b/test/expand_bwd_test_template.hpp index 4a61d92..66545ec 100644 --- a/test/expand_bwd_test_template.hpp +++ b/test/expand_bwd_test_template.hpp @@ -148,7 +148,7 @@ bool test_insert_with_expand_bwd() } expand_bwd_test_allocator<value_type> alloc - ((value_type*)&memory[0], MemorySize, Offset[iteration]); + (&memory[0], MemorySize, Offset[iteration]); VectorWithExpandBwdAllocator vector(alloc); vector.insert( vector.begin() , initial_data.begin(), initial_data.end()); @@ -165,10 +165,10 @@ } } catch(...){ - delete []((non_volatile_value_type*)memory); + delete [](const_cast<non_volatile_value_type*>(memory)); throw; } - delete []((non_volatile_value_type*)memory); + delete [](const_cast<non_volatile_value_type*>(memory)); } return true; @@ -227,10 +227,10 @@ bool test_assign_with_expand_bwd() } } catch(...){ - delete []((typename boost::remove_volatile<value_type>::type*)memory); + delete [](const_cast<typename boost::remove_volatile<value_type>::type*>(memory)); throw; } - delete []((typename boost::remove_volatile<value_type>::type*)memory); + delete [](const_cast<typename boost::remove_volatile<value_type>::type*>(memory)); } return true; diff --git a/test/flat_tree_test.cpp b/test/flat_tree_test.cpp index 80df430..e7ebd77 100644 --- a/test/flat_tree_test.cpp +++ b/test/flat_tree_test.cpp @@ -20,6 +20,7 @@ #include "movable_int.hpp" #include "set_test.hpp" #include "map_test.hpp" +#include "emplace_test.hpp"
///////////////////////////////////////////////////////////////// // @@ -30,7 +31,7 @@ ///////////////////////////////////////////////////////////////// using namespace boost::interprocess; - +/* //Explicit instantiation to detect compilation errors template class boost::interprocess::flat_set ,test::dummy_test_allocator > >; - +*/ //Customize managed_shared_memory class typedef basic_managed_shared_memory ,shmem_move_copy_pair_allocator_t> MyMoveCopyShmMultiMap; +//Test recursive structures +class recursive_flat_set +{ +public: + int id_; + flat_set flat_set_; + friend bool operator< (const recursive_flat_set &a, const recursive_flat_set &b) + { return a.id_ < b.id_; } +}; + +class recursive_flat_map +{ +public: + int id_; + flat_map map_; + friend bool operator< (const recursive_flat_map &a, const recursive_flat_map &b) + { return a.id_ < b.id_; } +}; + int main() { using namespace boost::interprocess::test; + if (0 != set_test, MapOptions>()) + return 1; + if(!boost::interprocess::test::test_emplace, MapOptions>()) + return 1; + if(!boost::interprocess::test::test_emplace, SetOptions>()) + return 1; + if(!boost::interprocess::test::test_emplace, SetOptions>()) + return 1; + #endif //!defined(__GNUC__) return 0; } diff --git a/test/intersegment_ptr_test.cpp b/test/intersegment_ptr_test.cpp index 1f2d98a..d068624 100644 --- a/test/intersegment_ptr_test.cpp +++ b/test/intersegment_ptr_test.cpp @@ -205,14 +205,14 @@ bool test_basic_comparisons() if(sizeof(segment_data) > mapped_region::get_page_size()) return false; - segment_data &seg_0_0 = *((segment_data *)reg_0_0.get_address()); - segment_data &seg_0_1 = *((segment_data *)reg_0_1.get_address()); - segment_data &seg_1_0 = *((segment_data *)reg_1_0.get_address()); - segment_data &seg_1_1 = *((segment_data *)reg_1_1.get_address()); + segment_data &seg_0_0 = *static_cast(reg_0_0.get_address()); + segment_data &seg_0_1 = *static_cast(reg_0_1.get_address()); + segment_data &seg_1_0 = 
*static_cast(reg_1_0.get_address()); + segment_data &seg_1_1 = *static_cast(reg_1_1.get_address()); //Some dummy multi_segment_services - multi_segment_services *services0 = (multi_segment_services *)0; - multi_segment_services *services1 = (multi_segment_services *)1; + multi_segment_services *services0 = static_cast(0); + multi_segment_services *services1 = reinterpret_cast(1); const intersegment_ptr::segment_group_id group_0_id = intersegment_ptr::new_segment_group(services0); @@ -386,7 +386,7 @@ bool test_multi_segment_shared_memory() } int main() -{ +{/* if(!test_types_and_convertions()) return 1; if(!test_arithmetic()) @@ -398,6 +398,7 @@ int main() if(!test_multi_segment_shared_memory()) return 1; +*/ return 0; } diff --git a/test/list_test.cpp b/test/list_test.cpp index e2073fe..b4c4249 100644 --- a/test/list_test.cpp +++ b/test/list_test.cpp @@ -16,6 +16,7 @@ #include "dummy_test_allocator.hpp" #include "list_test.hpp" #include "movable_int.hpp" +#include "emplace_test.hpp" using namespace boost::interprocess; @@ -35,8 +36,21 @@ typedef list MyMoveList; typedef allocator ShmemCopyMoveAllocator; typedef list MyCopyMoveList; +class recursive_list +{ +public: + int id_; + list list_; +}; + +void recursive_list_test()//Test for recursive types +{ + list recursive_list_list; +} + int main () { + recursive_list_test(); if(test::list_test()) return 1; @@ -49,6 +63,11 @@ int main () if(test::list_test()) return 1; + const test::EmplaceOptions Options = (test::EmplaceOptions)(test::EMPLACE_BACK | test::EMPLACE_FRONT | test::EMPLACE_BEFORE); + + if(!boost::interprocess::test::test_emplace, Options>()) + return 1; + return 0; } diff --git a/test/list_test.hpp b/test/list_test.hpp index deaa277..390f61e 100644 --- a/test/list_test.hpp +++ b/test/list_test.hpp @@ -88,7 +88,6 @@ struct pop_back_function } }; - template diff --git a/test/managed_mapped_file_test.cpp b/test/managed_mapped_file_test.cpp index 0d00edb..d68752b 100644 --- a/test/managed_mapped_file_test.cpp 
+++ b/test/managed_mapped_file_test.cpp @@ -20,7 +20,7 @@ using namespace boost::interprocess; int main () { - const int FileSize = 65536; + const int FileSize = 65536*10; const char *const FileName = test::get_process_id_name(); //STL compatible allocator object for memory-mapped file @@ -42,6 +42,7 @@ int main () //Let's allocate some memory for(i = 0; i < max; ++i){ array[i] = mfile.allocate(i+1); + std::cout << i << ' '; } //Deallocate allocated memory diff --git a/test/memory_algorithm_test_template.hpp b/test/memory_algorithm_test_template.hpp index 29a85ef..f7216d4 100644 --- a/test/memory_algorithm_test_template.hpp +++ b/test/memory_algorithm_test_template.hpp @@ -108,7 +108,7 @@ bool test_allocation_shrink(Allocator &a) std::size_t received_size; if(a.template allocation_command ( shrink_in_place | nothrow_allocation, i*2 - , i, received_size, (char*)buffers[i]).first){ + , i, received_size, static_cast(buffers[i])).first){ if(received_size > std::size_t(i*2)){ return false; } @@ -160,7 +160,7 @@ bool test_allocation_expand(Allocator &a) while(a.template allocation_command ( expand_fwd | nothrow_allocation, min_size - , preferred_size, received_size, (char*)buffers[i]).first){ + , preferred_size, received_size, static_cast(buffers[i])).first){ //Check received size is bigger than minimum if(received_size < min_size){ return false; @@ -215,7 +215,7 @@ bool test_allocation_shrink_and_expand(Allocator &a) std::size_t received_size; if(a.template allocation_command ( shrink_in_place | nothrow_allocation, received_sizes[i] - , i, received_size, (char*)buffers[i]).first){ + , i, received_size, static_cast(buffers[i])).first){ if(received_size > std::size_t(received_sizes[i])){ return false; } @@ -234,7 +234,7 @@ bool test_allocation_shrink_and_expand(Allocator &a) std::size_t request_size = received_sizes[i]; if(a.template allocation_command ( expand_fwd | nothrow_allocation, request_size - , request_size, received_size, (char*)buffers[i]).first){ + , 
request_size, received_size, static_cast(buffers[i])).first){ if(received_size != received_sizes[i]){ return false; } @@ -299,7 +299,7 @@ bool test_allocation_deallocation_expand(Allocator &a) while(a.template allocation_command ( expand_fwd | nothrow_allocation, min_size - , preferred_size, received_size, (char*)buffers[i]).first){ + , preferred_size, received_size, static_cast(buffers[i])).first){ //Check received size is bigger than minimum if(received_size < min_size){ return false; @@ -312,8 +312,8 @@ bool test_allocation_deallocation_expand(Allocator &a) } //Now erase null values from the vector - buffers.erase(std::remove(buffers.begin(), buffers.end(), (void*)0) - ,buffers.end()); + buffers.erase( std::remove(buffers.begin(), buffers.end(), static_cast(0)) + , buffers.end()); //Deallocate it in non sequential order for(int j = 0, max = (int)buffers.size() @@ -369,7 +369,7 @@ bool test_allocation_with_reuse(Allocator &a) std::size_t prf_size = (received_size + (i+1)*2); std::pair ret = a.raw_allocation_command ( expand_bwd | nothrow_allocation, min_size - , prf_size, received_size, (char*)ptr, sizeof_object); + , prf_size, received_size, static_cast(ptr), sizeof_object); if(!ret.first) break; //If we have memory, this must be a buffer reuse @@ -511,7 +511,7 @@ bool test_clear_free_memory(Allocator &a) //Test allocated memory is zero for(int i = 0, max = buffers.size(); i < max; ++i){ for(int j = 0; j < i; ++j){ - if(((char*)buffers[i])[j]) return false; + if(static_cast(buffers[i])[j]) return false; } } diff --git a/test/movable_int.hpp b/test/movable_int.hpp index 8d16596..eeb12f6 100644 --- a/test/movable_int.hpp +++ b/test/movable_int.hpp @@ -52,6 +52,9 @@ class movable_int { this->m_int = mmi.m_int; mmi.m_int = 0; return *this; } #endif + movable_int & operator= (int i) + { this->m_int = i; return *this; } + bool operator ==(const movable_int &mi) const { return this->m_int == mi.m_int; } @@ -123,6 +126,9 @@ class movable_and_copyable_int { this->m_int = 
mmi.m_int; mmi.m_int = 0; return *this; } #endif + movable_and_copyable_int & operator= (int i) + { this->m_int = i; return *this; } + bool operator ==(const movable_and_copyable_int &mi) const { return this->m_int == mi.m_int; } diff --git a/test/mutex_test_template.hpp b/test/mutex_test_template.hpp index 54c57d4..64b4a7a 100644 --- a/test/mutex_test_template.hpp +++ b/test/mutex_test_template.hpp @@ -165,7 +165,7 @@ struct test_recursive_lock template void lock_and_sleep(void *arg, M &sm) { - data *pdata = (data *) arg; + data *pdata = static_cast*>(arg); boost::interprocess::scoped_lock l(sm); if(pdata->m_secs){ boost::thread::sleep(xsecs(pdata->m_secs)); @@ -181,7 +181,7 @@ void lock_and_sleep(void *arg, M &sm) template void try_lock_and_sleep(void *arg, M &sm) { - data *pdata = (data *) arg; + data *pdata = static_cast*>(arg); boost::interprocess::scoped_lock l(sm, boost::interprocess::defer_lock); if (l.try_lock()){ boost::thread::sleep(xsecs(2*BaseSeconds)); @@ -193,7 +193,7 @@ void try_lock_and_sleep(void *arg, M &sm) template void timed_lock_and_sleep(void *arg, M &sm) { - data *pdata = (data *) arg; + data *pdata = static_cast*>(arg); boost::posix_time::ptime pt(delay(pdata->m_secs)); boost::interprocess::scoped_lock l (sm, boost::interprocess::defer_lock); diff --git a/test/named_construct_test.cpp b/test/named_construct_test.cpp new file mode 100644 index 0000000..fade538 --- /dev/null +++ b/test/named_construct_test.cpp @@ -0,0 +1,195 @@ +////////////////////////////////////////////////////////////////////////////// +// +// (C) Copyright Ion Gaztanaga 2008-2008. Distributed under the Boost +// Software License, Version 1.0. (See accompanying file +// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) +// +// See http://www.boost.org/libs/interprocess for documentation. 
+// +////////////////////////////////////////////////////////////////////////////// +#include +#include +#include +#include + +typedef std::pair simple_pair; + +using namespace boost::interprocess; + +struct array_pair : public simple_pair +{ + array_pair(double d, int i) + : simple_pair(d, i) {} +}; + +struct array_it_pair : public array_pair +{ + array_it_pair(double d, int i) + : array_pair(d, i) {} +}; + +struct named_name_generator +{ + static const bool searchable = true; + + typedef simple_pair simple_type; + typedef array_pair array_type; + typedef array_it_pair array_it_type; + static const char *get_simple_name() + { return "MyType instance"; } + static const char *get_array_name() + { return "MyType array"; } + static const char *get_array_it_name() + { return "MyType array from it"; } +}; + +struct unique_name_generator +{ + static const bool searchable = true; + + typedef simple_pair simple_type; + typedef array_pair array_type; + typedef array_it_pair array_it_type; + static const detail::unique_instance_t *get_simple_name() + { return 0; } + static const detail::unique_instance_t *get_array_name() + { return 0; } + static const detail::unique_instance_t *get_array_it_name() + { return 0; } +}; + +struct anonymous_name_generator +{ + static const bool searchable = false; + + typedef simple_pair simple_type; + typedef array_pair array_type; + typedef array_it_pair array_it_type; + static const detail::anonymous_instance_t *get_simple_name() + { return 0; } + static const detail::anonymous_instance_t *get_array_name() + { return 0; } + static const detail::anonymous_instance_t *get_array_it_name() + { return 0; } +}; + + +template +int construct_test() +{ + typedef typename NameGenerator::simple_type simple_type; + typedef typename NameGenerator::array_type array_type; + typedef typename NameGenerator::array_it_type array_it_type; + + remove_shared_memory_on_destroy remover("MySharedMemory"); + shared_memory_object::remove("MySharedMemory"); + { + //A 
special shared memory where we can + //construct objects associated with a name. + //First remove any old shared memory of the same name, create + //the shared memory segment and initialize needed resources + managed_shared_memory segment + //create segment name segment size + (create_only, "MySharedMemory", 65536); + + //Create an object of MyType initialized to {0.0, 0} + simple_type *s = segment.construct + (NameGenerator::get_simple_name())//name of the object + (1.0, 2); //ctor first argument + assert(s->first == 1.0 && s->second == 2); + if(!(s->first == 1.0 && s->second == 2)) + return 1; + + //Create an array of 10 elements of MyType initialized to {0.0, 0} + array_type *a = segment.construct + (NameGenerator::get_array_name()) //name of the object + [10] //number of elements + (3.0, 4); //Same two ctor arguments for all objects + assert(a->first == 3.0 && a->second == 4); + if(!(a->first == 3.0 && a->second == 4)) + return 1; + + //Create an array of 3 elements of MyType initializing each one + //to a different value {0.0, 3}, {1.0, 4}, {2.0, 5}... 
+ float float_initializer[3] = { 0.0, 1.0, 2.0 }; + int int_initializer[3] = { 3, 4, 5 }; + + array_it_type *a_it = segment.construct_it + (NameGenerator::get_array_it_name()) //name of the object + [3] //number of elements + ( &float_initializer[0] //Iterator for the 1st ctor argument + , &int_initializer[0]); //Iterator for the 2nd ctor argument + { + const array_it_type *a_it_ptr = a_it; + for(unsigned int i = 0, max = 3; i != max; ++i, ++a_it_ptr){ + assert(a_it_ptr->first == float_initializer[i]); + if(a_it_ptr->first != float_initializer[i]){ + return 1; + } + assert(a_it_ptr->second == int_initializer[i]); + if(a_it_ptr->second != int_initializer[i]){ + return 1; + } + } + } + + if(NameGenerator::searchable){ + { + std::pair res; + //Find the object + res = segment.find (NameGenerator::get_simple_name()); + //Length should be 1 + assert(res.second == 1); + if(res.second != 1) + return 1; + assert(res.first == s); + if(res.first != s) + return 1; + } + { + std::pair res; + + //Find the array + res = segment.find (NameGenerator::get_array_name()); + //Length should be 10 + assert(res.second == 10); + if(res.second != 10) + return 1; + assert(res.first == a); + if(res.first != a) + return 1; + } + { + std::pair res; + //Find the array constructed from iterators + res = segment.find (NameGenerator::get_array_it_name()); + //Length should be 3 + assert(res.second == 3); + if(res.second != 3) + return 1; + assert(res.first == a_it); + if(res.first != a_it) + return 1; + } + } + //We're done, delete all the objects + segment.destroy_ptr(s); + segment.destroy_ptr(a); + segment.destroy_ptr(a_it); + } + return 0; +} + +int main () +{ + if(0 != construct_test()) + return 1; + if(0 != construct_test()) + return 1; + if(0 != construct_test()) + return 1; + return 0; +} + +//] +#include diff --git a/test/node_allocator_test.cpp b/test/node_allocator_test.cpp index 4c1235f..8333fa8 100644 --- a/test/node_allocator_test.cpp +++ b/test/node_allocator_test.cpp @@ -22,7 +22,7 
@@ using namespace boost::interprocess; //We will work with wide characters for shared memory objects -//Alias a integer node allocator type +//Alias an integer node allocator type typedef node_allocator shmem_node_allocator_t; typedef detail::node_allocator_v1 diff --git a/test/private_node_allocator_test.cpp b/test/private_node_allocator_test.cpp index ad3fc5d..5ea8659 100644 --- a/test/private_node_allocator_test.cpp +++ b/test/private_node_allocator_test.cpp @@ -22,7 +22,7 @@ using namespace boost::interprocess; //We will work with wide characters for shared memory objects -//Alias a integer node allocator type +//Alias an integer node allocator type typedef private_node_allocator priv_node_allocator_t; typedef detail::private_node_allocator_v1 diff --git a/test/semaphore_test_template.hpp b/test/semaphore_test_template.hpp index 23c235f..8b313cb 100644 --- a/test/semaphore_test_template.hpp +++ b/test/semaphore_test_template.hpp @@ -146,7 +146,7 @@ struct test_recursive_lock template void wait_and_sleep(void *arg, P &sm) { - data
<P> *pdata = (data<P> *) arg; + data<P> *pdata = static_cast<data<P>*>(arg); boost::interprocess::scoped_lock<P> l(sm); boost::thread::sleep(xsecs(3*BaseSeconds)); ++shared_val; @@ -156,7 +156,7 @@ void try_wait_and_sleep(void *arg, P &sm) { - data
<P> *pdata = (data<P> *) arg; + data<P> *pdata = static_cast<data<P>*>(arg); boost::interprocess::scoped_lock<P> l(sm, boost::interprocess::defer_lock); if (l.try_lock()){ boost::thread::sleep(xsecs(3*BaseSeconds)); @@ -168,7 +168,7 @@ void timed_wait_and_sleep(void *arg, P &sm) { - data
<P> *pdata = (data<P> *) arg; + data<P> *pdata = static_cast<data<P>*>(arg); boost::posix_time::ptime pt(delay(pdata->m_secs)); boost::interprocess::scoped_lock<P>
l (sm, boost::interprocess::defer_lock); diff --git a/test/sharable_mutex_test_template.hpp b/test/sharable_mutex_test_template.hpp index 4858646..338ef1c 100644 --- a/test/sharable_mutex_test_template.hpp +++ b/test/sharable_mutex_test_template.hpp @@ -38,7 +38,7 @@ namespace boost { namespace interprocess { namespace test { template void plain_exclusive(void *arg, SM &sm) { - data *pdata = (data *) arg; + data *pdata = static_cast*>(arg); boost::interprocess::scoped_lock l(sm); boost::thread::sleep(xsecs(3*BaseSeconds)); shared_val += 10; @@ -48,7 +48,7 @@ void plain_exclusive(void *arg, SM &sm) template void plain_shared(void *arg, SM &sm) { - data *pdata = (data *) arg; + data *pdata = static_cast*>(arg); boost::interprocess::sharable_lock l(sm); if(pdata->m_secs){ boost::thread::sleep(xsecs(pdata->m_secs*BaseSeconds)); @@ -59,7 +59,7 @@ void plain_shared(void *arg, SM &sm) template void try_exclusive(void *arg, SM &sm) { - data *pdata = (data *) arg; + data *pdata = static_cast*>(arg); boost::interprocess::scoped_lock l(sm, boost::interprocess::defer_lock); if (l.try_lock()){ boost::thread::sleep(xsecs(3*BaseSeconds)); @@ -71,7 +71,7 @@ void try_exclusive(void *arg, SM &sm) template void try_shared(void *arg, SM &sm) { - data *pdata = (data *) arg; + data *pdata = static_cast*>(arg); boost::interprocess::sharable_lock l(sm, boost::interprocess::defer_lock); if (l.try_lock()){ if(pdata->m_secs){ @@ -84,7 +84,7 @@ void try_shared(void *arg, SM &sm) template void timed_exclusive(void *arg, SM &sm) { - data *pdata = (data *) arg; + data *pdata = static_cast*>(arg); boost::posix_time::ptime pt(delay(pdata->m_secs)); boost::interprocess::scoped_lock l (sm, boost::interprocess::defer_lock); @@ -98,7 +98,7 @@ void timed_exclusive(void *arg, SM &sm) template void timed_shared(void *arg, SM &sm) { - data *pdata = (data *) arg; + data *pdata = static_cast*>(arg); boost::posix_time::ptime pt(delay(pdata->m_secs)); boost::interprocess::sharable_lock l(sm, 
boost::interprocess::defer_lock); @@ -143,11 +143,11 @@ void test_plain_sharable_mutex() // Reader one launches, "clearly" after writer two, and "clearly" // while writer 1 still holds the lock boost::thread::sleep(xsecs(1*BaseSeconds)); - boost::thread tr1(thread_adapter(plain_shared,&s1, *pm3)); - boost::thread tr2(thread_adapter(plain_shared,&s2, *pm4)); + boost::thread thr1(thread_adapter(plain_shared,&s1, *pm3)); + boost::thread thr2(thread_adapter(plain_shared,&s2, *pm4)); - tr2.join(); - tr1.join(); + thr2.join(); + thr1.join(); tw2.join(); tw1.join(); @@ -177,8 +177,8 @@ void test_plain_sharable_mutex() data e2(2); //We launch 2 readers, that will block for 3*BaseTime seconds - boost::thread tr1(thread_adapter(plain_shared,&s1,*pm1)); - boost::thread tr2(thread_adapter(plain_shared,&s2,*pm2)); + boost::thread thr1(thread_adapter(plain_shared,&s1,*pm1)); + boost::thread thr2(thread_adapter(plain_shared,&s2,*pm2)); //Make sure they try to hold the sharable lock boost::thread::sleep(xsecs(1*BaseSeconds)); @@ -187,8 +187,8 @@ void test_plain_sharable_mutex() boost::thread tw1(thread_adapter(plain_exclusive,&e1,*pm3)); boost::thread tw2(thread_adapter(plain_exclusive,&e2,*pm4)); - tr2.join(); - tr1.join(); + thr2.join(); + thr1.join(); tw2.join(); tw1.join(); @@ -229,13 +229,13 @@ void test_try_sharable_mutex() // Reader one launches, "clearly" after writer #1 holds the lock // and before it releases the lock. boost::thread::sleep(xsecs(1*BaseSeconds)); - boost::thread tr1(thread_adapter(try_shared,&s1,*pm2)); + boost::thread thr1(thread_adapter(try_shared,&s1,*pm2)); // Writer two launches in the same timeframe. boost::thread tw2(thread_adapter(try_exclusive,&e2,*pm3)); tw2.join(); - tr1.join(); + thr1.join(); tw1.join(); assert(e1.m_value == 10); @@ -281,12 +281,12 @@ void test_timed_sharable_mutex() // to get the lock. 2nd reader will wait 3*BaseSeconds seconds, and will get // the lock. 
- boost::thread tr1(thread_adapter(timed_shared,&s1,*pm3)); - boost::thread tr2(thread_adapter(timed_shared,&s2,*pm4)); + boost::thread thr1(thread_adapter(timed_shared,&s1,*pm3)); + boost::thread thr2(thread_adapter(timed_shared,&s2,*pm4)); tw1.join(); - tr1.join(); - tr2.join(); + thr1.join(); + thr2.join(); tw2.join(); assert(e1.m_value == 10); diff --git a/test/shared_memory_mapping_test.cpp b/test/shared_memory_mapping_test.cpp index 54fc31d..6f363f3 100644 --- a/test/shared_memory_mapping_test.cpp +++ b/test/shared_memory_mapping_test.cpp @@ -72,7 +72,7 @@ int main () mapped_region region(mapping, read_write, 0, FileSize/2, 0); mapped_region region2(mapping, read_write, FileSize/2, FileSize - FileSize/2, 0); - unsigned char *checker = (unsigned char*)region.get_address(); + unsigned char *checker = static_cast(region.get_address()); //Check pattern for(std::size_t i = 0 ;i < FileSize/2 @@ -83,7 +83,7 @@ int main () } //Check second half - checker = (unsigned char *)region2.get_address(); + checker = static_cast(region2.get_address()); //Check pattern for(std::size_t i = FileSize/2 diff --git a/test/shared_ptr_test.cpp b/test/shared_ptr_test.cpp index ef7066a..38c08d1 100644 --- a/test/shared_ptr_test.cpp +++ b/test/shared_ptr_test.cpp @@ -587,7 +587,7 @@ void test_alias() BOOST_TEST( p2.use_count() == p.use_count() ); BOOST_TEST( !( p < p2 ) && !( p2 < p ) ); - p2.reset( p, (int*)0 ); + p2.reset( p, static_cast(0) ); BOOST_TEST( p2.get() == 0 ); diff --git a/test/slist_test.cpp b/test/slist_test.cpp index 00d2f36..1bf7462 100644 --- a/test/slist_test.cpp +++ b/test/slist_test.cpp @@ -16,6 +16,7 @@ #include "dummy_test_allocator.hpp" #include "list_test.hpp" #include "movable_int.hpp" +#include "emplace_test.hpp" using namespace boost::interprocess; @@ -26,20 +27,50 @@ template class boost::interprocess::slist ShmemAllocator; typedef slist MyList; +typedef allocator ShmemVolatileAllocator; +typedef slist MyVolatileList; + typedef allocator ShmemMoveAllocator; 
typedef slist MyMoveList; typedef allocator ShmemCopyMoveAllocator; typedef slist MyCopyMoveList; +class recursive_slist +{ +public: + int id_; + slist slist_; +}; + +void recursive_slist_test()//Test for recursive types +{ + slist recursive_list_list; +} + int main () { + recursive_slist_test(); + if(test::list_test()) return 1; if(test::list_test()) return 1; + if(test::list_test()) + return 1; + + if(test::list_test()) + return 1; + + const test::EmplaceOptions Options = (test::EmplaceOptions) + (test::EMPLACE_FRONT | test::EMPLACE_AFTER | test::EMPLACE_BEFORE | test::EMPLACE_AFTER); + + if(!boost::interprocess::test::test_emplace + < slist, Options>()) + return 1; + return 0; } diff --git a/test/tree_test.cpp b/test/tree_test.cpp index 2535ec6..a9c3786 100644 --- a/test/tree_test.cpp +++ b/test/tree_test.cpp @@ -22,6 +22,7 @@ #include "dummy_test_allocator.hpp" #include "set_test.hpp" #include "map_test.hpp" +#include "emplace_test.hpp" /////////////////////////////////////////////////////////////////// // // @@ -32,7 +33,7 @@ /////////////////////////////////////////////////////////////////// using namespace boost::interprocess; - +/* //Explicit instantiation to detect compilation errors template class boost::interprocess::set ,test::dummy_test_allocator > >; - +*/ //Customize managed_shared_memory class typedef basic_managed_shared_memory my_managed_shared_memory; //We will work with narrow characters for shared memory objects -//Alias a integer node allocator type +//Alias an integer node allocator type typedef allocator shmem_allocator_t; typedef allocator, my_managed_shared_memory::segment_manager> @@ -119,9 +120,54 @@ typedef multimap ,shmem_move_copy_node_pair_allocator_t> MyMoveCopyShmMultiMap; +//Test recursive structures +class recursive_set +{ +public: + int id_; + set set_; + friend bool operator< (const recursive_set &a, const recursive_set &b) + { return a.id_ < b.id_; } +}; + +class recursive_map +{ + public: + int id_; + map map_; + friend bool 
operator< (const recursive_map &a, const recursive_map &b) + { return a.id_ < b.id_; } +}; + +//Test recursive structures +class recursive_multiset +{ +public: + int id_; + multiset multiset_; + friend bool operator< (const recursive_multiset &a, const recursive_multiset &b) + { return a.id_ < b.id_; } +}; + +class recursive_multimap +{ +public: + int id_; + multimap multimap_; + friend bool operator< (const recursive_multimap &a, const recursive_multimap &b) + { return a.id_ < b.id_; } +}; int main () { + //Recursive container instantiation + { + set set_; + multiset multiset_; + map map_; + multimap multimap_; + } + using namespace boost::interprocess::detail; if(0 != test::set_test, SetOptions>()) + return 1; + if(!boost::interprocess::test::test_emplace, SetOptions>()) + return 1; + const test::EmplaceOptions MapOptions = (test::EmplaceOptions)(test::EMPLACE_HINT_PAIR | test::EMPLACE_ASSOC_PAIR); + if(!boost::interprocess::test::test_emplace, MapOptions>()) + return 1; + if(!boost::interprocess::test::test_emplace, MapOptions>()) + return 1; return 0; } diff --git a/test/user_buffer_test.cpp b/test/user_buffer_test.cpp index 0835575..cc107f1 100644 --- a/test/user_buffer_test.cpp +++ b/test/user_buffer_test.cpp @@ -198,7 +198,7 @@ int main () std::size_t heap_list_size = heaplist->size(); //Copy heap buffer to another - const char *insert_beg = detail::char_ptr_cast(heap_buffer.get_address()); + const char *insert_beg = static_cast(heap_buffer.get_address()); const char *insert_end = insert_beg + heap_buffer.get_size(); std::vector grow_copy (insert_beg, insert_end); diff --git a/test/vector_test.cpp b/test/vector_test.cpp index 206b0d0..22c9a57 100644 --- a/test/vector_test.cpp +++ b/test/vector_test.cpp @@ -76,8 +76,21 @@ int test_expand_bwd() return 0; } +class recursive_vector +{ + public: + int id_; + vector vector_; +}; + +void recursive_vector_test()//Test for recursive types +{ + vector recursive_vector_vector; +} + int main() { + 
recursive_vector_test(); typedef allocator ShmemAllocator; typedef vector MyVector; @@ -105,6 +118,11 @@ int main() if(test_expand_bwd()) return 1; + const test::EmplaceOptions Options = (test::EmplaceOptions)(test::EMPLACE_BACK | test::EMPLACE_BEFORE); + if(!boost::interprocess::test::test_emplace + < vector, Options>()) + return 1; + return 0; } diff --git a/test/vector_test.hpp b/test/vector_test.hpp index d2f17ac..dd3e46f 100644 --- a/test/vector_test.hpp +++ b/test/vector_test.hpp @@ -14,6 +14,7 @@ #include #include #include +#include #include #include @@ -24,6 +25,7 @@ #include #include #include "get_process_id_name.hpp" +#include "emplace_test.hpp" namespace boost{ namespace interprocess{ @@ -46,22 +48,22 @@ bool copyable_only(V1 *shmvector, V2 *stdvector, detail::true_type) if(!test::CheckEqualContainers(shmvector, stdvector)) return false; { - IntType move_me(1); - stdvector->insert(stdvector->begin()+size/2, 50, 1); - shmvector->insert(shmvector->begin()+size/2, 50, detail::move_impl(move_me)); - if(!test::CheckEqualContainers(shmvector, stdvector)) return false; + IntType move_me(1); + stdvector->insert(stdvector->begin()+size/2, 50, 1); + shmvector->insert(shmvector->begin()+size/2, 50, detail::move_impl(move_me)); + if(!test::CheckEqualContainers(shmvector, stdvector)) return false; } { - IntType move_me(2); - shmvector->assign(shmvector->size()/2, detail::move_impl(move_me)); - stdvector->assign(stdvector->size()/2, 2); - if(!test::CheckEqualContainers(shmvector, stdvector)) return false; + IntType move_me(2); + shmvector->assign(shmvector->size()/2, detail::move_impl(move_me)); + stdvector->assign(stdvector->size()/2, 2); + if(!test::CheckEqualContainers(shmvector, stdvector)) return false; } { - IntType move_me(3); - shmvector->assign(shmvector->size()*3-1, detail::move_impl(move_me)); - stdvector->assign(stdvector->size()*3-1, 3); - if(!test::CheckEqualContainers(shmvector, stdvector)) return false; + IntType move_me(3); + 
shmvector->assign(shmvector->size()*3-1, detail::move_impl(move_me)); + stdvector->assign(stdvector->size()*3-1, 3); + if(!test::CheckEqualContainers(shmvector, stdvector)) return false; } return true; } @@ -192,6 +194,17 @@ int vector_test() } if(!test::CheckEqualContainers(shmvector, stdvector)) return 1; + //Test insertion from list + { + std::list l(50, int(1)); + shmvector->insert(shmvector->begin(), l.begin(), l.end()); + stdvector->insert(stdvector->begin(), l.begin(), l.end()); + if(!test::CheckEqualContainers(shmvector, stdvector)) return 1; + shmvector->assign(l.begin(), l.end()); + stdvector->assign(l.begin(), l.end()); + if(!test::CheckEqualContainers(shmvector, stdvector)) return 1; + } + delete stdvector; segment.template destroy("MyShmVector"); segment.shrink_to_fit_indexes(); diff --git a/test/windows_shared_memory_mapping_test.cpp b/test/windows_shared_memory_mapping_test.cpp index a5a20ab..8bd4f10 100644 --- a/test/windows_shared_memory_mapping_test.cpp +++ b/test/windows_shared_memory_mapping_test.cpp @@ -68,7 +68,7 @@ int main () mapped_region region (mapping, read_only, 0, FileSize/2, 0); mapped_region region2(mapping, read_only, FileSize/2, FileSize - FileSize/2, 0); - unsigned char *checker = (unsigned char*)region.get_address(); + unsigned char *checker = static_cast(region.get_address()); //Check pattern for(std::size_t i = 0 ;i < FileSize/2 @@ -79,7 +79,7 @@ int main () } //Check second half - checker = (unsigned char *)region2.get_address(); + checker = static_cast(region2.get_address()); //Check pattern for(std::size_t i = FileSize/2