The textbook explains Vmem's design goals and performance improvements in 11.3, and sketches the implementation in 11.3.4. Pay attention to the explanation of the instant-fit in particular (page 558) and the footnotes on pages 558 and 660. I feel that the textbook stops a bit short of discussing the internals of vmem_t, though. Luckily, there are the code comments at http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/os/vmem.c#39 (the "Big Theory Statement") to understand these internals better.

Note the special role of the vmem_freelist_t struct, which occurs only in the array of 65 power-of-2 freelists contained in each vmem_t. These structs have one purpose only: they serve as "list heads", specially marked "peg" structs onto which a linked list of free vmem_seg_t segments may be hung (i.e., pointed to by the vs_knext pointer). When there are no free segments in the range of sizes covered by that particular freelist, the vs_knext pointer simply points to the vmem_freelist_t head of the next power-of-2 freelist. The idea is that by following the vs_knext pointers from a free segment, all subsequent free segments can be visited in the order of increasing freelist size classes---interspersed with the "list head" markers.

Therein lies a bit of confusion. There are _two_ different structs found on the vs_knext chain: the marker vmem_freelist_t structs, and full vmem_seg_t structs. These structs are laid out exactly the same up to their fourth member; vmem_freelist_t looks like a truncated vmem_seg_t, and the vs_start of these head markers is always 0:

http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/sys/vmem_impl.h

     69 typedef struct vmem_freelist {
     70         uintptr_t       vs_start;       /* always zero */      <--- this makes it a list head marker / "peg"
     71         uintptr_t       vs_end;         /* segment size */
     72         vmem_seg_t      *vs_knext;      /* next of kin */      <--- see vmem.c comment section 2.1, line 142
     73         vmem_seg_t      *vs_kprev;      /* prev of kin */
     74 } vmem_freelist_t;

     76 #define VS_SIZE(vsp)    ((vsp)->vs_end - (vsp)->vs_start)

     47 struct vmem_seg {
     48         /*
     49          * The first four fields must match vmem_freelist_t exactly.   <---- because vmem_freelist_t* is cast to vmem_seg_t*,
     50          */                                                                  e.g., by vmem_freelist_insert(); see below.
     51         uintptr_t       vs_start;       /* start of segment (inclusive) */
     52         uintptr_t       vs_end;         /* end of segment (exclusive) */
     53         vmem_seg_t      *vs_knext;      /* next of kin (alloc, free, span) */
     54         vmem_seg_t      *vs_kprev;      /* prev of kin */
     55
     56         vmem_seg_t      *vs_anext;      /* next in arena */
     57         vmem_seg_t      *vs_aprev;      /* prev in arena */
     58         uint8_t         vs_type;        /* alloc, free, span */
     59         uint8_t         vs_import;      /* non-zero if segment was imported */
     60         uint8_t         vs_depth;       /* stack depth if KMF_AUDIT active */
     61         /*
     62          * The following fields are present only when KMF_AUDIT is set.
     63          */
     64         kthread_t       *vs_thread;
     65         hrtime_t        vs_timestamp;
     66         pc_t            vs_stack[VMEM_STACK_DEPTH];
     67 };

See the comments in lines 142--158 for the explanation of the next-of-kin pointers. Note that VS_SIZE works exactly the same for both structures, and can be applied to both. This is a dirty hack; both C purists and OOP purists would insist on using a C union or a derived class, _not_ relying on the same member names and layout. If VS_SIZE were a function, its code would not typecheck.

So a vmem_freelist_t's vs_knext can point either to a full vmem_seg_t or to another vmem_freelist_t (which can be thought of as a truncated vmem_seg_t with vs_start == 0). The same is true for a vmem_seg_t's vs_knext.
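If you want to poke at this layout trick outside the kernel, here is a minimal standalone sketch (the toy_* names are mine, not illumos code) showing how a VS_SIZE-style macro works on both the full struct and its "truncated" prefix, and how a zero vs_start marks a list head:

#include <stdio.h>
#include <stdint.h>

/* Toy stand-ins for vmem_seg_t and vmem_freelist_t -- NOT the kernel structs. */
typedef struct toy_seg {
	uintptr_t	vs_start;	/* start of segment (inclusive) */
	uintptr_t	vs_end;		/* end of segment (exclusive) */
	struct toy_seg	*vs_knext;	/* next of kin */
	struct toy_seg	*vs_kprev;	/* prev of kin */
	uint8_t		vs_type;	/* ... the full struct carries more fields */
} toy_seg_t;

typedef struct toy_freelist {		/* the "truncated" version: first four members only */
	uintptr_t	vs_start;	/* always zero => this is a list head marker */
	uintptr_t	vs_end;		/* doubles as the freelist's size class */
	toy_seg_t	*vs_knext;
	toy_seg_t	*vs_kprev;
} toy_freelist_t;

/* Works on both structs, exactly because their leading members line up. */
#define	VS_SIZE(vsp)	((vsp)->vs_end - (vsp)->vs_start)

int
main(void)
{
	toy_seg_t seg = { .vs_start = 0x1000, .vs_end = 0x3000 };
	toy_freelist_t head = { .vs_start = 0, .vs_end = 1UL << 13 };

	printf("segment size:   %#lx\n", (unsigned long)VS_SIZE(&seg));		/* 0x2000 */
	printf("marker VS_SIZE: %#lx\n", (unsigned long)VS_SIZE(&head));	/* 0x2000: its size class */
	printf("head is marker: %d\n", head.vs_start == 0);			/* 1 */

	/* The cast the kernel relies on: treat the marker as a (truncated) segment. */
	toy_seg_t *as_seg = (toy_seg_t *)&head;
	printf("via cast:       %#lx\n", (unsigned long)VS_SIZE(as_seg));
	return (0);
}

The macro gets away with this only because the two structs agree on the names and offsets of their leading members (and in practice, if not in strict-aliasing theory, the cast is harmless); a function prototyped to take a toy_seg_t * would require exactly the kind of explicit cast that vmem_freelist_insert() performs below.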
Moreover, a vmem_t's vm_freelist[65] is an array of vmem_freelist_t structs whose vs_knext pointers (of type vmem_seg_t *) point either to a full vmem_seg_t when the freelist is non-empty (and its corresponding bit in vm_freemap is set)---or to another vmem_freelist_t entry in the same array when the list is empty (and the corresponding bit in vm_freemap is not set). This sounds complicated, but the code for putting a segment onto the proper freelist should clarify it:

    425 /*
    426  * Add vsp to the appropriate freelist.
    427  */
    428 static void
    429 vmem_freelist_insert(vmem_t *vmp, vmem_seg_t *vsp)
    430 {
    431         vmem_seg_t *vprev;
    432
    433         ASSERT(*VMEM_HASH(vmp, vsp->vs_start) != vsp);    <--- that would be an allocated segment, not a free one!
    434
    435         vprev = (vmem_seg_t *)&vmp->vm_freelist[highbit(VS_SIZE(vsp)) - 1];    <--- note the conversion from vmem_freelist_t to vmem_seg_t!
    436         vsp->vs_type = VMEM_FREE;
    437         vmp->vm_freemap |= VS_SIZE(vprev);    <--- set the freemap bit; VS_SIZE works for the converted vmem_freelist_t vprev
                                                           exactly because it's laid out exactly as the beginning of a vmem_seg_t
    438         VMEM_INSERT(vprev, vsp, k);           <--- same here; note that this VMEM_INSERT only uses vs_knext & vs_kprev
    439
    440         cv_broadcast(&vmp->vm_cv);            <--- for the sleeping allocations, wake up all waiters
    441 }

VMEM_INSERT uses the C preprocessor's ## token-pasting operator to combine several tokens of the expanded macro into a single identifier: e.g., "vs_", "k", and "next" joined by ## become "vs_knext".

A key thing to note is that the bit set in vm_freemap is exactly the marker's vs_end (which, since the marker's vs_start is 0, is also its VS_SIZE), and the marker chosen is the one whose vs_end is the highest power of 2 not exceeding the segment's size. So if the segment size is 0x40000, it will be hung under the marker corresponding to the bit 0x40000 in vm_freemap, and the vs_end of this marker will be 0x40000 (and its vs_start is 0). This is much simpler than the code might make it look.

See where in vmem.c the above vmem_freelist_insert() gets called, especially in vmem_seg_alloc(), which you will see called on the fast instant-fit path in vmem_alloc() below: it links the newly allocated segment into the hash table, and puts the segments for the spare space (if any) onto the appropriate freelists based on their sizes.

Observe the trick of how the freelist head markers are used while deleting a segment to determine whether the freelist it's being deleted from becomes empty:

    443 /*
    444  * Take vsp from the freelist.
    445  */
    446 static void
    447 vmem_freelist_delete(vmem_t *vmp, vmem_seg_t *vsp)
    448 {
    449         ASSERT(*VMEM_HASH(vmp, vsp->vs_start) != vsp);
    450         ASSERT(vsp->vs_type == VMEM_FREE);
    451
    452         if (vsp->vs_knext->vs_start == 0 && vsp->vs_kprev->vs_start == 0) {    <---- the trick! Both neighbors are markers
    453                 /*
    454                  * The segments on both sides of 'vsp' are freelist heads,
    455                  * so taking vsp leaves the freelist at vsp->vs_kprev empty.
    456                  */
    457                 ASSERT(vmp->vm_freemap & VS_SIZE(vsp->vs_kprev));    <--- this bit must be set, or else panic
    458                 vmp->vm_freemap ^= VS_SIZE(vsp->vs_kprev);           <--- so clear it; since the ASSERT guarantees the bit is set,
                                                                                  the XOR flips it to 0
    459         }
    460         VMEM_DELETE(vsp, k);    <--- unlinks from the vs_knext/vs_kprev list
    461 }

A similar use of marker segments---the VMEM_SPAN entries on the arena (vs_anext) list---prevents the code from coalescing free segments across span boundaries.
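To make the freemap bookkeeping concrete, here is a small standalone model (my own toy code; the loop-based highbit() is a portable stand-in for the kernel's BSR-based version) of how a segment's size selects a freelist index and the corresponding vm_freemap bit:

#include <stdio.h>
#include <stdint.h>

/* Toy model of the vm_freemap bookkeeping; not kernel code. */
static unsigned long freemap;		/* bit n set <=> freelist[n] non-empty */

/* highbit(x): position of the highest set bit, counting from 1; 0 if x == 0. */
static int
highbit(unsigned long x)
{
	int h = 0;

	while (x != 0) {
		h++;
		x >>= 1;
	}
	return (h);
}

static void
toy_freelist_insert(uintptr_t seg_size)
{
	int idx = highbit(seg_size) - 1;	/* freelist index: 2^idx <= seg_size < 2^(idx+1) */

	freemap |= 1UL << idx;			/* same bit as the marker's vs_end */
	printf("size %#lx -> freelist[%d], freemap now %#lx\n",
	    (unsigned long)seg_size, idx, freemap);
}

int
main(void)
{
	toy_freelist_insert(0x40000);	/* power of 2: lands exactly on bit 0x40000 */
	toy_freelist_insert(0x48000);	/* not a power of 2: still freelist 18, bit 0x40000 */
	toy_freelist_insert(0x1000);	/* bit 0x1000 */
	return (0);
}

Running it shows that 0x40000 and 0x48000 both land on freelist 18 (bit 0x40000), matching the 0x40000 example above.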
Another piece of code that shows how freelists are initially populated is found in vmem_create_common(), which creates and initializes an arena:

   1483         for (i = 0; i <= VMEM_FREELISTS; i++) {
   1484                 vfp = &vmp->vm_freelist[i];
   1485                 vfp->vs_end = 1UL << i;                      <--- vs_start is bzero-ed out above, so this is the VS_SIZE of the chunk
   1486                 vfp->vs_knext = (vmem_seg_t *)(vfp + 1);     <--- linkage between list head markers
   1487                 vfp->vs_kprev = (vmem_seg_t *)(vfp - 1);
   1488         }
   1489
   1490         vmp->vm_freelist[0].vs_kprev = NULL;                 <--- give the linkage a proper start and termination, good for looping
   1491         vmp->vm_freelist[VMEM_FREELISTS].vs_knext = NULL;         ...
   1492         vmp->vm_freelist[VMEM_FREELISTS].vs_end = 0;              ...
   1493         vmp->vm_hash_table = vmp->vm_hash0;                  <--- setup for the hash of allocated segments
   1494         vmp->vm_hash_mask = VMEM_HASH_INITIAL - 1;
   1495         vmp->vm_hash_shift = highbit(vmp->vm_hash_mask);
   1496
   1497         vsp = &vmp->vm_seg0;                                 <--- the "SPAN" segment for the arena. Searches in vmem_contains()
   1498         vsp->vs_anext = vsp;                                      will start from here. Initially, this is the only span (and
   1499         vsp->vs_aprev = vsp;                                      segment) in the arena, so all lists point back to it.
   1500         vsp->vs_knext = vsp;
   1501         vsp->vs_kprev = vsp;
   1502         vsp->vs_type = VMEM_SPAN;

To see how new segments are added on the anext/aprev and knext/kprev lists, search for vm_seg0 throughout vmem.c. To see how span segments are linked, see how vmem_contains() walks the chain of vs_knext pointers (with the kstat statistics-gathering lines removed for clarity):

   1328 /*
   1329  * Determine whether arena vmp contains the segment [vaddr, vaddr + size).
   1330  */
   1331 int
   1332 vmem_contains(vmem_t *vmp, void *vaddr, size_t size)
   1333 {
   1334         uintptr_t start = (uintptr_t)vaddr;
   1335         uintptr_t end = start + size;
   1336         vmem_seg_t *vsp;
   1337         vmem_seg_t *seg0 = &vmp->vm_seg0;
   1338
   1339         mutex_enter(&vmp->vm_lock);
   1341         for (vsp = seg0->vs_knext; vsp != seg0; vsp = vsp->vs_knext) {    <--- traversing vs_knext
   1343                 ASSERT(vsp->vs_type == VMEM_SPAN);
   1344                 if (start >= vsp->vs_start && end - 1 <= vsp->vs_end - 1)
   1345                         break;
   1346         }
   1347         mutex_exit(&vmp->vm_lock);
   1348         return (vsp != seg0);
   1349 }
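The walk in vmem_contains() is the classic circular-list-with-sentinel idiom: start at the sentinel's vs_knext and stop when you come back around to the sentinel. Here is a minimal standalone sketch of the same pattern (toy types and names, not the kernel's):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Toy span list with a sentinel node, mimicking vm_seg0 and the vs_knext walk. */
typedef struct span {
	uintptr_t	start;	/* inclusive */
	uintptr_t	end;	/* exclusive */
	struct span	*knext;
} span_t;

/* Does [addr, addr + size) fall entirely within one span on the list? */
static int
toy_contains(span_t *seg0, uintptr_t addr, size_t size)
{
	uintptr_t start = addr;
	uintptr_t end = addr + size;
	span_t *sp;

	for (sp = seg0->knext; sp != seg0; sp = sp->knext) {
		/* Compare the inclusive last bytes, as on line 1344 above. */
		if (start >= sp->start && end - 1 <= sp->end - 1)
			break;
	}
	return (sp != seg0);	/* broke out early <=> found a containing span */
}

int
main(void)
{
	span_t seg0, a, b;

	seg0.knext = &a;
	a = (span_t){ .start = 0x1000, .end = 0x9000, .knext = &b };
	b = (span_t){ .start = 0x20000, .end = 0x40000, .knext = &seg0 };

	printf("%d\n", toy_contains(&seg0, 0x2000, 0x1000));	/* 1: inside span a */
	printf("%d\n", toy_contains(&seg0, 0x9000, 0x10));	/* 0: falls in the gap between spans */
	return (0);
}

The "end - 1 <= sp->end - 1" comparison mirrors the kernel's line 1344: comparing inclusive last bytes keeps the check correct even for a range that ends exactly at the top of the address space, where 'end' wraps around to 0.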
---------------[ Instant-fit and vmem_alloc() ]----------------

The actual code for instant-fit in vmem_alloc() is pretty simple to read once you get through the helper macros it uses: highbit, lowbit, P2ALIGN, and ISP2. The first two make use of the x86 opcodes BSR and BSF, summarized here http://x86.renejeschke.de/html/file_module_x86_id_19.html and here http://x86.renejeschke.de/html/file_module_x86_id_20.html. Note that the result of these operations is _undefined_ when their source operand is 0; they just set the CPU's Zero flag to signal it. For this reason, highbit and lowbit use an additional opcode, SETZ, to catch this condition and turn that undefined result into a well-defined 0.

vmem_alloc() itself is pretty simple. After getting quantum caching and the special cases like VM_NEXTFIT, VM_BESTFIT, and VM_FIRSTFIT out of the way, it gets down to the business of instant-fit. Essentially, this code checks whether a free segment large enough to accommodate the allocation is available on the power-of-two freelists. If none is available, execution falls through to a more thorough search (line 1300); if a matching freelist isn't empty, the first segment on it is taken (line 1305).

To make this availability check, vmp->vm_freemap (which is simply a 64-bit integer) is scanned, after masking off (with P2ALIGN) its lower bits, which correspond to freelists for sizes too small to accommodate the requested size. To wit:

    168  * We maintain power-of-2 freelists for free segments, i.e. free segments
    169  * of size >= 2^n reside in vmp->vm_freelist[n].

    179  * We maintain a bit map to determine quickly which freelists are non-empty.
    180  * vmp->vm_freemap & (1 << n) is non-zero iff vmp->vm_freelist[n] is non-empty.

   1289         mutex_enter(&vmp->vm_lock);
   1290
   1291         if (vmp->vm_nsegfree >= VMEM_MINFREE || vmem_populate(vmp, vmflag)) {
   1292                 if (ISP2(size))                                            <--- size is a power of 2 (a single non-zero bit)
   1293                         flist = lowbit(P2ALIGN(vmp->vm_freemap, size));    <--- flist is the index (counting from 1) of the lowest set
                                                                                        bit in the bitmap of non-empty power-of-two freelists
                                                                                        holding segments of sizes greater than or equal to
                                                                                        'size'. Look how it's used on line 1305 to get that
                                                                                        segment.
   1294                 else if ((hb = highbit(size)) < VMEM_FREELISTS)            <--- if not a power of 2, take the index of the highest bit
                                                                                        in 'size'. VMEM_FREELISTS is 64 on a 64-bit system.
   1295                         flist = lowbit(P2ALIGN(vmp->vm_freemap, 1UL << hb));
   1296         }
   1297                                       // So why isn't the ISP2 case checked for < VMEM_FREELISTS?
   1298         if (flist-- == 0) {                                                <---- flist == 0 means no matching freemap bit was found, so
                                                                                         fall back to the slower vmem_xalloc(); otherwise the
                                                                                         decrement turns lowbit()'s 1-based result into a
                                                                                         0-based array index
   1299                 mutex_exit(&vmp->vm_lock);
   1300                 return (vmem_xalloc(vmp, size, vmp->vm_quantum,
   1301                     0, 0, NULL, NULL, vmflag));
   1302         }
   1303
   1304         ASSERT(size <= (1UL << flist));
   1305         vsp = vmp->vm_freelist[flist].vs_knext;        <--- just take the first segment off of the right freelist head (a full vmem_seg_t)
   1306         addr = vsp->vs_start;                          <--- that'll be the result ...
   1307         if (vmflag & VM_ENDALLOC) {                         ... unless aligning the allocation to the end of the segment was asked for
   1308                 addr += vsp->vs_end - (addr + size);
   1309         }
   1310         (void) vmem_seg_alloc(vmp, vsp, addr, size);   <--- adjusts the segment, puts it on the hash and off the freelists,
                                                                    creates new segments if there are leftovers on either side,
                                                                    and places them on the appropriate freelists
   1311         mutex_exit(&vmp->vm_lock);
   1312         return ((void *)addr);                         <--- addr is an integer type, but large enough to hold an address
   1313 }

Read vmem_seg_alloc() to see how it manages the freelists and the hash. It's pretty simple.
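To convince yourself that the flist computation does what the comments claim, here is a compilable model of just that computation (my own code: portable lowbit()/highbit() loops and a P2ALIGN macro stand in for the kernel's versions; the literal 64 stands in for VMEM_FREELISTS on a 64-bit system, and the whole thing assumes 64-bit longs). It uses the same freemap value, 0x8088000000, that we will see in the mdb session below:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Portable stand-ins for the kernel's helpers (the real ones may use BSR/BSF). */
#define	ISP2(x)		(((x) & ((x) - 1)) == 0)
#define	P2ALIGN(x, a)	((x) & -(uintptr_t)(a))	/* clear bits below power-of-2 'a' */

static int
lowbit(unsigned long x)		/* position of the lowest set bit, from 1; 0 if none */
{
	int b = 1;

	if (x == 0)
		return (0);
	while ((x & 1) == 0) {
		b++;
		x >>= 1;
	}
	return (b);
}

static int
highbit(unsigned long x)	/* position of the highest set bit, from 1; 0 if none */
{
	int b = 0;

	while (x != 0) {
		b++;
		x >>= 1;
	}
	return (b);
}

/*
 * Instant-fit freelist selection, modeled on the vmem_alloc() snippet above:
 * returns the 0-based index of a non-empty freelist guaranteed to satisfy
 * 'size', or -1 if there is none.
 */
static int
instant_fit(unsigned long freemap, size_t size)
{
	int flist = 0, hb;

	if (ISP2(size))
		flist = lowbit(P2ALIGN(freemap, size));
	else if ((hb = highbit(size)) < 64)		/* VMEM_FREELISTS on 64-bit */
		flist = lowbit(P2ALIGN(freemap, 1UL << hb));

	return (flist - 1);	/* mirrors the 'flist--' in the real code */
}

int
main(void)
{
	unsigned long freemap = 0x8088000000UL;	/* freelists 27, 31, and 39 non-empty */

	printf("%d\n", instant_fit(freemap, 0x1000));		/* 27 */
	printf("%d\n", instant_fit(freemap, 0x9000000000UL));	/* -1: nothing guaranteed big enough */
	printf("%d\n", instant_fit(freemap, 0x3000));		/* not a power of 2 -> still 27 */
	return (0);
}

For a power-of-2 size, P2ALIGN simply clears every freemap bit below 'size', so lowbit() lands on the smallest non-empty freelist that is guaranteed to fit; for other sizes, the mask starts one power of 2 higher, since freelist[n] only guarantees segments of at least 2^n bytes. With this arena's freemap, a one-page (0x1000) allocation would be served from freelist 27 (0x1b), the lowest of the three non-empty lists we inspect in the mdb session below.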
===============[ Some commands to browse the vmem_t structs: ]===============

> ::vmem
ADDR             NAME                     INUSE         TOTAL  SUCCEED  FAIL
fffffffffbcca900 heap                 341053440 1092913987584    18063     0
fffffffffbccb738 vmem_metadata          8773632       8781824     2063     0
fffffffffbccc570 vmem_seg               7901184       7901184     1917     0
fffffffffbccd3a8 vmem_hash               596992        602112       37     0
fffffffffbcce1e0 vmem_vmem               276640        306736      136     0

One-line summary of a vmem_t:

> fffffffffbcca900::vmem
ADDR             NAME                     INUSE         TOTAL  SUCCEED  FAIL
fffffffffbcca900 heap                 341053440 1092913987584    18087     0

> fffffffffbcca900::print -t vmem_t
vmem_t {
    char [30] vm_name = [ "heap" ]                       <--- name
    kcondvar_t vm_cv = {
        ushort_t _opaque = 0
    }
    kmutex_t vm_lock = {
        void *[1] _opaque = [ 0 ]
    }
    uint32_t vm_id = 0x1
    uint32_t vm_mtbf = 0
    int vm_cflags = 0x10000
    int vm_qshift = 0xc
    size_t vm_quantum = 0x1000                           <--- page size
    size_t vm_qcache_max = 0                             <--- no qcache trick for smaller allocations
    size_t vm_min_import = 0
    int (*)() vm_source_alloc = 0
    int (*)() vm_source_free = 0
    vmem_t *vm_source = 0
    vmem_t *vm_next = vmem0+0xe38
    kstat_t *vm_ksp = kstat_initial
    ssize_t vm_nsegfree = 0x11                           <--- free segment structs available
    vmem_seg_t *vm_segfree = 0xffffff015a28e7a8          <--- freelist (one for all free segments)
    vmem_seg_t **vm_hash_table = 0xffffff015843d000      <--- hash by allocated address to its segment
    size_t vm_hash_mask = 0x1fff
    size_t vm_hash_shift = 0xd
    ulong_t vm_freemap = 0x8088000000                    <---- bitmap of available power-of-2 free lists
    vmem_seg_t vm_seg0 = {
        uintptr_t vs_start = 0
        uintptr_t vs_end = 0
        vmem_seg_t *vs_knext = vmem_seg0+0xd98
        vmem_seg_t *vs_kprev = vmem_seg0+0xd98
        vmem_seg_t *vs_anext = vmem_seg0+0xd98
        vmem_seg_t *vs_aprev = 0xffffff01594a2a10
        uint8_t vs_type = 0x10
        uint8_t vs_import = 0
        uint8_t vs_depth = 0
        kthread_t *vs_thread = 0
        hrtime_t vs_timestamp = 0
        pc_t [20] vs_stack = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
    }
    vmem_seg_t vm_rotor = {
        uintptr_t vs_start = 0
        uintptr_t vs_end = 0
        vmem_seg_t *vs_knext = 0
        vmem_seg_t *vs_kprev = 0
        vmem_seg_t *vs_anext = 0xffffff01594a2a10
        vmem_seg_t *vs_aprev = 0xffffff015a292818
        uint8_t vs_type = 0x20
        uint8_t vs_import = 0
        uint8_t vs_depth = 0
        kthread_t *vs_thread = 0
        hrtime_t vs_timestamp = 0
        pc_t [20] vs_stack = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
    }
    vmem_seg_t *[16] vm_hash0 = [ 0xffffff014ca213b8, 0xffffff014ca1b380, 0xffffff014ca22150, 0xffffff014ca1eb98, 0xffffff014ca1d8c0, 0xffffff014ca1f460, 0xffffff014ca1f348, 0xffffff014ca21540, 0xffffff014ca141c0, 0xffffff014ca22268, 0xffffff014ca20770, 0xffffff014ca21d20, 0xffffff014ca1eab8, 0xffffff014ca22770, 0xffffff014ca1c850, 0xffffff014ca1b2a0 ]
    void *[16] vm_qcache = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
    vmem_freelist_t [65] vm_freelist = [                 <---- power-of-2 free lists (truncated; 65 would be too much to print)
        vmem_freelist_t {
            uintptr_t vs_start = 0
            uintptr_t vs_end = 0x1
            vmem_seg_t *vs_knext = vmem0+0x398
            vmem_seg_t *vs_kprev = 0
        },
        vmem_freelist_t {
            uintptr_t vs_start = 0
            uintptr_t vs_end = 0x2
            vmem_seg_t *vs_knext = vmem0+0x3b8
            vmem_seg_t *vs_kprev = vmem0+0x378
        },

> fffffffffbcca900::print -t vmem_t vm_freemap           <---- just the freemap. It's got three populated freelists (3 bits set).
ulong_t vm_freemap = 0x8088000000

> fffffffffbcca900::print -t vmem_t vm_freelist[27]      <---- top populated power-of-2 freelist head
vmem_freelist_t vm_freelist[27] = {
    uintptr_t vm_freelist[27].vs_start = 0
    uintptr_t vm_freelist[27].vs_end = 0x8000000000      <---- Cf. the 0x8088000000 freemap. This is the top "8" bit.
    vmem_seg_t *vm_freelist[27].vs_knext = 0xffffff01594a2a10    <--- a link to a real vmem_seg_t; we'll check this segment next
    vmem_seg_t *vm_freelist[27].vs_kprev = vmem0+0x838
}

> fffffffffbcca900::print -t vmem_t vm_freelist[26]      <---- the freelist head just below that one
vmem_freelist_t vm_freelist[26] = {
    uintptr_t vm_freelist[26].vs_start = 0
    uintptr_t vm_freelist[26].vs_end = 0x4000000000
    vmem_seg_t *vm_freelist[26].vs_knext = vmem0+0x858   <--- links to the next and previous vmem_freelist_t head marker structs
    vmem_seg_t *vm_freelist[26].vs_kprev = vmem0+0x818
}

> 0xffffff01594a2a10::print -t vmem_seg_t                <---- this segment is free & available
vmem_seg_t {
    uintptr_t vs_start = 0xffffff0243fc8000              <---- from here
    uintptr_t vs_end = 0xffffffffc0000000                <---- to here
    vmem_seg_t *vs_knext = vmem0+0x878
    vmem_seg_t *vs_kprev = vmem0+0x858
    vmem_seg_t *vs_anext = vmem0+0xa8
    vmem_seg_t *vs_aprev = vmem0+0x190
    uint8_t vs_type = 0x2                                <---- VMEM_FREE, a free segment
    uint8_t vs_import = 0
    uint8_t vs_depth = 0
    kthread_t *vs_thread = 0xffffff015963e000
    hrtime_t vs_timestamp = 0xffffff015963f000
    pc_t [20] vs_stack = [ 0, 0xffffff014941d4f8, 0xffffff01594a2af0, 0xffffff01594a2a80, 0x1, 0xffffff015963e000, 0xffffff015963f000, 0xffffff01594a2af0, 0xffffff01594a29a0, 0xffffff01594a2a48, 0xffffff01594a2968, 0x110, 0xffffff015963d000, 0xffffff015963e000, 0, 0xffffff014941d4f8, 0xffffff01594a2b60, 0xffffff01594a2af0, 0x1, 0xffffff015963d000 ]
}

Let's see the other two non-empty freelists, summarized (NOTE THAT THE ARRAY INDICES ARE HEX!):

> fffffffffbcca900::print -t vmem_t vm_freemap
ulong_t vm_freemap = 0x8088000000

This is the top bit:

    0x8088000000
      ^

> fffffffffbcca900::print -t vmem_t vm_freelist[27].vs_knext
vmem_seg_t *vm_freelist[27].vs_knext = 0xffffff01594a2a10

> fffffffffbcca900::print -t vmem_t vm_freelist[27].vs_knext | ::vmem_seg
            ADDR TYPE START            END              WHO
ffffff01594a2a10 FREE ffffff024f47e000 ffffffffc0000000          <---- OK, we saw that

Count down 8 bits, to:

    0x8088000000
        ^

> fffffffffbcca900::print -t vmem_t vm_freelist[1f].vs_knext | ::vmem_seg
            ADDR TYPE START            END              WHO
ffffff015a292818 FREE ffffff01680f6000 ffffff024f47e000

And another 4 bits, to:

    0x8088000000
         ^

> fffffffffbcca900::print -t vmem_t vm_freelist[1b].vs_knext | ::vmem_seg
            ADDR TYPE START            END              WHO
ffffff015a28e7a8 FREE ffffff015d182000 ffffff0167917000

All of the above segments are properly FREE, as expected. If we try the same with an array slot whose bit is _not_ set in vm_freemap, we'll get a pointer to a vmem_freelist_t, and garbage for the type (there is no type field in a vmem_freelist_t truncated marker, only in the full vmem_seg_t):

> fffffffffbcca900::print -t vmem_t vm_freelist[1a].vs_knext | ::vmem_seg
            ADDR TYPE START            END              WHO
fffffffffbccafd8 ???? 0000000000000000 0000000008000000

(The type here is undefined; we just grabbed memory past the end of the marker head.)

===============[ Walking the segments of a vmem_t ]===============

From here on, we could walk the vs_knext and vs_anext lists, by type or order. (Suggestion: do it! See that the results make sense!) But in fact such walks are already available as MDB walkers: check out "::walkers ! grep vmem". (Suggestion: find the source code for these walkers and see if you guessed the traversal logic right! See http://src.illumos.org/source/xref/illumos-gate/usr/src/cmd/mdb/common/modules/genunix/genunix.c#4385 where the vmem_seg walker is defined, and look at the other matches for vmem_seg under the mdb modules/genunix directory.)
Walking just the span segments:

> fffffffffbcca900::walk vmem_span
0xfffffffffbcca9a8
0xfffffffffbcd44c8

> fffffffffbcca900::walk vmem_span | ::vmem_seg
            ADDR TYPE START            END              WHO
fffffffffbcca9a8 SPAN 0000000000000000 0000000000000000
fffffffffbcd44c8 SPAN ffffff0149400000 ffffffffc0000000

Walking all segments:

> fffffffffbcca900::walk vmem_seg                        <--- walks all segments, both free and allocated
0xfffffffffbcca9a8
0xfffffffffbcd44c8
0xfffffffffbcd86f0

These segments summarized (note that "::walk vmem_seg" invokes the vmem_seg walker, while "::vmem_seg" is a separate dcmd that formats each segment!):

> fffffffffbcca900::walk vmem_seg | ::vmem_seg
            ADDR TYPE START            END              WHO
fffffffffbcca9a8 SPAN 0000000000000000 0000000000000000
fffffffffbcd44c8 SPAN ffffff0149400000 ffffffffc0000000
fffffffffbcd86f0 ALLC ffffff0149400000 ffffff0149401000 kernelheap_init+0x15d
fffffffffbcda878 ALLC ffffff0149401000 ffffff0149421000 segkmem_alloc_vn+0x98
ffffff014940b270 ALLC ffffff0149421000 ffffff0149441000 segkmem_alloc_vn+0x98
ffffff014941e968 ALLC ffffff0149441000 ffffff0149442000

Same segments in detail:

> fffffffffbcca900::walk vmem_seg | ::print -t vmem_seg_t
vmem_seg_t {
    uintptr_t vs_start = 0
    uintptr_t vs_end = 0
    vmem_seg_t *vs_knext = vmem_seg0+0xd98
    vmem_seg_t *vs_kprev = vmem_seg0+0xd98
    vmem_seg_t *vs_anext = vmem_seg0+0xd98
    vmem_seg_t *vs_aprev = 0xffffff01594a2a10
    uint8_t vs_type = 0x10                               <---- SPAN type
    uint8_t vs_import = 0
    uint8_t vs_depth = 0
    kthread_t *vs_thread = 0
    hrtime_t vs_timestamp = 0
    pc_t [20] vs_stack = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
}
vmem_seg_t {
    uintptr_t vs_start = 0xffffff0149400000
    uintptr_t vs_end = 0xffffffffc0000000
    vmem_seg_t *vs_knext = vmem0+0xa8
    vmem_seg_t *vs_kprev = vmem0+0xa8
    vmem_seg_t *vs_anext = vmem_seg0+0x4fc0
    vmem_seg_t *vs_aprev = vmem0+0xa8
    uint8_t vs_type = 0x10                               <---- SPAN type
    uint8_t vs_import = 0
    uint8_t vs_depth = 0
    kthread_t *vs_thread = 0
    hrtime_t vs_timestamp = 0
    pc_t [20] vs_stack = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
}
vmem_seg_t {
    uintptr_t vs_start = 0xffffff0149400000              /* start of segment (inclusive) */
    uintptr_t vs_end = 0xffffff0149401000                /* end of segment (exclusive) */
    vmem_seg_t *vs_knext = 0
    vmem_seg_t *vs_kprev = 0
    vmem_seg_t *vs_anext = vmem_seg0+0x7148
    vmem_seg_t *vs_aprev = vmem_seg0+0xd98
    uint8_t vs_type = 0x1                                <---- ALLC (allocated) type
    uint8_t vs_import = 0
    uint8_t vs_depth = 0x8
    kthread_t *vs_thread = t0                            <----- KMF_AUDIT was set, so this is the thread that allocated it
    hrtime_t vs_timestamp = 0
    pc_t [20] vs_stack = [ 0xfffffffffbb48e89, 0xfffffffffbb49282, 0xfffffffffbb4a190, 0xfffffffffb897775, 0xfffffffffb8498fd, 0xfffffffffb8480c0, 0xfffffffffba729b0, 0xfffffffffb8000a0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
}

See the "Big Theory Statement" at the top of the vmem.c file for more info about the various fields of vmem_seg_t segments and their uses. Walk the logic of hashing allocated segments and of removing them from the hash when freed.
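As a warm-up for that last exercise, here is a minimal standalone model of the hash bookkeeping (toy names and a deliberately crude hash function, not the kernel's VMEM_HASH): allocated segments are chained off a bucket chosen from their start address, reusing vs_knext as the chain link, and freeing must walk that chain to find and unlink the segment for the address being freed:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define	NBUCKETS	16	/* cf. vm_hash0[16]; the real table gets rescaled as the arena grows */

typedef struct toy_seg {		/* toy stand-in for an ALLC vmem_seg_t */
	uintptr_t	vs_start;
	uintptr_t	vs_end;
	struct toy_seg	*vs_knext;	/* for allocated segments: the hash-chain link */
} toy_seg_t;

static toy_seg_t *hash_table[NBUCKETS];

/* Crude hash: bucket by page number (the kernel mixes more address bits). */
static toy_seg_t **
hash_bucket(uintptr_t addr)
{
	return (&hash_table[(addr >> 12) & (NBUCKETS - 1)]);
}

static void
hash_insert(toy_seg_t *vsp)	/* what allocation must do: file the segment under its address */
{
	toy_seg_t **bucket = hash_bucket(vsp->vs_start);

	vsp->vs_knext = *bucket;
	*bucket = vsp;
}

static toy_seg_t *
hash_delete(uintptr_t addr)	/* what freeing must do: find the segment by address and unlink it */
{
	toy_seg_t **prev, *vsp;

	for (prev = hash_bucket(addr); (vsp = *prev) != NULL; prev = &vsp->vs_knext) {
		if (vsp->vs_start == addr) {
			*prev = vsp->vs_knext;
			return (vsp);
		}
	}
	return (NULL);		/* the kernel panics here instead: freeing a bogus address */
}

int
main(void)
{
	toy_seg_t a = { .vs_start = 0x49400000, .vs_end = 0x49401000 };

	hash_insert(&a);
	printf("first free:  %p\n", (void *)hash_delete(0x49400000));
	printf("second free: %p\n", (void *)hash_delete(0x49400000));	/* NULL: the double free is caught */
	return (0);
}

The kernel's counterparts are vmem_hash_insert() and vmem_hash_delete() in vmem.c; find them and compare how the real code also checks that the size being freed matches the segment it found.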