History log of /haiku/src/system/kernel/slab/HashedObjectCache.h
# 30c9d3c0 01-Dec-2017 Augustin Cavalier <waddlesplash@gmail.com>

kernel: Correct class/struct mixups.

Almost certainly harmless. Spotted by Clang.


# ff59ce68 24-Feb-2010 Axel Dörfler <axeld@pinc-software.de>

* The low resource handler now empties the cache depot's magazines; before,
they were never freed unless the cache was destroyed (I had just been wondering
why my system would bury >1 GB in the magazines).
* Made the magazine capacity variable per cache, i.e. for larger objects it's
not a good idea to have 64*CPU buffers lying around in the worst case.
* Furthermore, create_object_cache_etc()/object_depot_init() now take
arguments for the magazine capacity as well as the maximum number of full
unused magazines (see the sketch below).
* By default, you might want to initialize both to zero, in which case some
hopefully usable defaults are computed. Otherwise (the only current example is
the vm_page_mapping cache) you can just put in the values you want there.
The page mapping cache uses larger values, as its objects are usually
allocated and deleted in larger chunks.
* Beware, though, I couldn't test these changes yet, as QEMU wouldn't run
today. I'll test them on another machine now.
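
For illustration, a minimal sketch of how the new depot parameters might be
passed; the parameter order follows Haiku's private <slab/Slab.h> as of this
change, and the concrete values here are made up, not taken from the tree:

    // Hypothetical usage. Parameters: name, object size, alignment,
    // maximum, magazine capacity, max full unused magazines, flags,
    // cookie, constructor, destructor, reclaimer.
    object_cache* mappingCache = create_object_cache_etc("page mappings",
        sizeof(vm_page_mapping), 0, 0, 64, 16, CACHE_LARGE_SLAB, NULL,
        NULL, NULL, NULL);

    // Passing 0 for both depot parameters lets the slab code compute
    // hopefully usable defaults from the object size.
    object_cache* plainCache = create_object_cache_etc("foo objects", 256,
        0, 0, 0, 0, 0, NULL, NULL, NULL, NULL);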


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35601 a95241bf-73f2-0310-859d-f6bbb57e9c96


# f1bdb8b9 25-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* HashedObjectCache: Instead of entering the individual objects in the hash
table, we now enter only the slabs. This also saves us one link object per
object.
* Removed the now useless {Prepare,Unprepare}Object() methods.
* SmallObjectCache: Unlock the cache while calling into the MemoryManager. We
need to do that to avoid an indirect violation of the CACHE_DONT_* policy.
* Simplified lower_boundary() (sketched below).
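
The underlying idea, as a sketch (lower_boundary() exists in the tree; its
body is paraphrased here, and the lookup line is pseudocode): since slabs come
from the MemoryManager aligned to their own power-of-two size, the slab
containing an object is found by masking the object's address, so only the
slab itself needs a hash table entry:

    // Round an object address down to the boundary of the (power-of-two
    // sized, size-aligned) slab that contains it.
    static inline void*
    lower_boundary(void* object, size_t byteCount)
    {
        return (void*)((addr_t)object & ~((addr_t)byteCount - 1));
    }

    // Lookup then hashes the slab base instead of every single object:
    // slab* slab = hashTable.Lookup(lower_boundary(object, slabSize));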


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35285 a95241bf-73f2-0310-859d-f6bbb57e9c96


# b4e5e498 25-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

MemoryManager:
* Added support for larger raw allocations (up to one large chunk, i.e. 128
pages) in the slab areas. For even larger allocations a dedicated area is
created (haven't seen that happen yet, though).
* Added kernel tracing (SLAB_MEMORY_MANAGER_TRACING).
* _FreeArea(): Fixed a copy-and-paste bug: the meta chunks of the area to be
freed would be added to the free lists instead of being removed from them.
This would corrupt the lists and also lead to all kinds of misuse of meta
chunks.

object caches:
* Implemented CACHE_ALIGN_ON_SIZE. It is no longer set for all small object
caches, but the block allocator sets it on all power-of-two-sized caches.
* object_cache_reserve_internal(): Detect recursion and don't wait in such a
case. The function could deadlock itself, since
HashedObjectCache::CreateSlab() allocates memory itself and can thus reenter
(see the sketch after this list).
* object_cache_low_memory():
- I missed some returns when reworking the function in r35254, so it might
stop early and also leave the cache in maintenance mode, which would cause it
to be ignored by the object cache resizer and the low memory handler from that
point on.
- Since ReturnSlab() potentially unlocks, the conditions weren't quite correct
and too many slabs could be freed.
- Simplified things a bit.
* object_cache_alloc(): Since object_cache_reserve_internal() potentially
unlocks the cache, the situation might have changed and there might not be an
empty slab available, but a partial one. The function would crash.
* Renamed the object cache tracing variable to SLAB_OBJECT_CACHE_TRACING.
* Renamed the debugger command "cache_info" to "slab_cache" to avoid confusion
with the VMCache commands.
* ObjectCache::usage was no longer maintained after I introduced the
MemoryManager. object_cache_get_usage() would thus always return 0 and the
block cache would not be considered cached memory. This was only of
informational relevance, though.
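
The recursion guard can be modeled in isolation; this is a hypothetical
user-land sketch with standard threads, not the kernel code, and the names
are invented:

    #include <thread>

    struct Cache {
        std::thread::id reservingThread;  // thread currently creating slabs

        bool ReserveWouldSelfDeadlock() const
        {
            // If the thread that is already reserving reenters (because
            // CreateSlab() itself allocates memory), waiting for the
            // reservation to finish would mean waiting for ourselves.
            return reservingThread == std::this_thread::get_id();
        }
    };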

slab allocator misc.:
* Disabled the object depots of block allocator caches for object sizes > 2 KB.
Allocations of those sizes aren't common enough for the object depots to yield
any benefit.
* The slab allocator is now fully self-sufficient. It allocates its bootstrap
memory from the MemoryManager, and the hash tables for HashedObjectCaches now
use the block allocator instead of the heap.
* Added an option to use the slab allocator for malloc() and friends
(USE_SLAB_ALLOCATOR_FOR_MALLOC). Currently disabled. Works in principle and
has virtually no lock contention. Handling for low memory situations is still
missing, though.
* Improved the output of some debugger commands.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35283 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 86c794e5 21-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

slab allocator:
* Implemented a more elaborate raw memory allocation backend (MemoryManager).
We allocate 8 MB areas whose pages we allocate and map when needed. An area is
divided into equally-sized chunks, which form the basic units of allocation.
We have areas with three possible chunk sizes (small, medium, large), which is
basically what the ObjectCache implementations were using anyway.
* Added a "uint32 flags" parameter to several of the slab allocator's object
cache and object depot functions. E.g. object_depot_store() potentially wants
to allocate memory for a magazine. But also in pure freeing functions it
might eventually become useful to have those flags, since they could end up
deleting an area, which might not be allowable in all situations. We should
introduce specific flags to indicate that.
* Reworked the block allocator. Since the MemoryManager allocates block-aligned
areas, maintains a hash table for lookup, and maps chunks to object caches,
we can quickly find out which object cache an allocation to be freed belongs
to and thus don't need the boundary tags anymore (see the sketch after this
list).
* Reworked the slab bootstrap process. We allocate from the initial area only
when really necessary, i.e. when the object cache for the respective
allocation size has not been created yet. A single page is thus sufficient.
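
The lookup trick fits in a few self-contained lines; kAreaSize and the names
below are illustrative, the real code lives in the MemoryManager:

    #include <cstdint>

    constexpr uintptr_t kAreaSize = 8 * 1024 * 1024;  // 8 MB slab areas

    // With areas aligned to their own size, the area base is computable
    // from any interior pointer; no per-allocation boundary tags needed.
    inline uintptr_t
    area_base_for(const void* allocation)
    {
        return (uintptr_t)allocation & ~(kAreaSize - 1);
    }

    // The MemoryManager hashes area_base_for(p) to the area's metadata,
    // which maps the chunk containing p to its owning object cache.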

other:
* vm_allocate_early(): Added a boolean "blockAlign" parameter. If true, the
semantics are the same as for B_ANY_KERNEL_BLOCK_ADDRESS.
* Use an object cache for page mappings. This significantly reduces the
contention on the heap bin locks.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35232 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 08d66c12 20-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

Always unlock the object cache while allocating memory. This is necessary for
the CACHE_DONT_SLEEP flag to actually work, since otherwise the thread could
block on the mutex held by a thread allocating memory. We use two condition
variables to prevent multiple threads from allocating slabs at the same time.
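
The pattern, modeled in user-land with standard primitives; a sketch only:
the kernel uses its own mutex and condition variable types, and the actual
change uses two condition variables where this model makes do with one:

    #include <condition_variable>
    #include <mutex>

    struct Cache {
        std::mutex lock;
        std::condition_variable slabCreated;
        bool creatingSlab = false;

        void AllocateSlabMemory();  // placeholder for the real backend

        bool EnsureSlab(bool dontSleep)
        {
            std::unique_lock<std::mutex> locker(lock);
            while (creatingSlab) {
                if (dontSleep)
                    return false;    // CACHE_DONT_SLEEP: never block here
                slabCreated.wait(locker);
            }
            creatingSlab = true;
            locker.unlock();         // allocate with the cache unlocked,
            AllocateSlabMemory();    // so nobody blocks on our mutex
            locker.lock();
            creatingSlab = false;
            slabCreated.notify_all();
            return true;
        }
    };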


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35206 a95241bf-73f2-0310-859d-f6bbb57e9c96


# bb439b87 20-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* Consistently propagate the CACHE_DONT_SLEEP flag.
* block_alloc(): Create B_FULL_LOCK area.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35202 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 1a7d8045 19-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

gcc 2 build fix


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35177 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 825566f8 19-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* Split the slab allocator code into separate source files and C++-ified
things a bit.
* Some style cleanup.
* The object depot now has a cookie that is passed to the return hook.
* Fixed object_cache_return_object_wrapper() to use the new cookie.
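
The cookie/hook pairing has roughly this shape (illustrative declarations,
not the actual Haiku ones):

    // The depot stores an opaque cookie and hands it back to the return
    // hook, so the hook can find the owning cache without a global lookup.
    typedef void (*depot_return_hook)(void* cookie, void* object);

    struct ObjectDepot {
        void* cookie;                 // e.g. the owning ObjectCache
        depot_return_hook returnHook;

        void ReturnObject(void* object)
        {
            returnHook(cookie, object);
        }
    };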


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35174 a95241bf-73f2-0310-859d-f6bbb57e9c96

