21 Matching Annotations
  1. Apr 2025
  2. Mar 2025
    1. n->colour_next++; if (n->colour_next >= cachep->colour) n->colour_next = 0; offset = n->colour_next; if (offset >= cachep->colour) offset = 0; offset *= cachep->colour_off;

      Colors are rotated through for cache coloring, and the byte offset at which objects in the next slab start is calculated from the current color.
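
      A minimal userspace sketch of the same rotation, assuming a hypothetical struct that stands in for the relevant kmem_cache / kmem_cache_node fields:

      ```c
      #include <stdio.h>

      /* Hypothetical stand-in for the kernel's kmem_cache / kmem_cache_node fields. */
      struct cache_info {
          unsigned int colour;      /* number of distinct colours available  */
          unsigned int colour_off;  /* bytes between two consecutive colours */
          unsigned int colour_next; /* colour to use for the next slab       */
      };

      /* Mirrors the quoted logic: advance the colour (wrapping at 'colour'),
       * then convert the colour index into a byte offset for the next slab. */
      static unsigned int next_colour_offset(struct cache_info *c)
      {
          c->colour_next++;
          if (c->colour_next >= c->colour)
              c->colour_next = 0;               /* wrap around: rotate colours */
          return c->colour_next * c->colour_off;
      }

      int main(void)
      {
          struct cache_info c = { .colour = 4, .colour_off = 64, .colour_next = 0 };

          /* Successive slabs start at offsets 64, 128, 192, 0, 64, ... */
          for (int i = 0; i < 6; i++)
              printf("slab %d starts at offset %u\n", i, next_colour_offset(&c));
          return 0;
      }
      ```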

    2. cachep->align = ralign; cachep->colour_off = cache_line_size(); /* Offset must be a multiple of the alignment. */ if (cachep->colour_off < cachep->align) cachep->colour_off = cachep->align;

      Defines the cache coloring offset, rounding it up to the object alignment, so that objects in different slabs start at different offsets and don't all compete for the same cache lines.
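
      A sketch of how these parameters combine, assuming a 64-byte cache line and a made-up amount of leftover slab space; dividing the leftover space by the colour offset is a simplified illustration of how the number of colours is bounded:

      ```c
      #include <stdio.h>

      #define CACHE_LINE_SIZE 64   /* assumed; the kernel queries this at runtime */

      int main(void)
      {
          unsigned int align = 128;      /* example object alignment             */
          unsigned int left_over = 512;  /* example unused bytes at the slab end */

          /* The colour offset must be a multiple of the alignment. */
          unsigned int colour_off = CACHE_LINE_SIZE;
          if (colour_off < align)
              colour_off = align;

          /* Each additional colour consumes colour_off bytes of the leftover space. */
          unsigned int colour = left_over / colour_off;

          printf("colour_off = %u bytes, %u colours available\n", colour_off, colour);
          return 0;
      }
      ```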

  3. Feb 2025
    1. static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page) {

      This function chooses which memory region to process next by simply walking the linked list in order; it could potentially be improved by prioritizing regions based on historical merge success rates.
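
      A simplified sketch of that round-robin behavior, using a plain circular list of per-process slots (the names are illustrative, not the kernel's):

      ```c
      #include <stdio.h>

      /* Illustrative per-process scan slot; the kernel's ksm_mm_slot holds far more state. */
      struct scan_slot {
          const char *name;
          struct scan_slot *next;   /* circular singly linked list for brevity */
      };

      /* Always advance to the next slot in list order, regardless of how
       * productive scanning that process has been in the past. */
      static struct scan_slot *next_slot(struct scan_slot *cur)
      {
          return cur->next;
      }

      int main(void)
      {
          struct scan_slot a = { "proc-a" }, b = { "proc-b" }, c = { "proc-c" };
          a.next = &b; b.next = &c; c.next = &a;

          struct scan_slot *cur = &a;
          for (int i = 0; i < 6; i++) {
              printf("scanning %s\n", cur->name);
              cur = next_slot(cur);
          }
          return 0;
      }
      ```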

    2. if (ksm_run & KSM_RUN_UNMERGE) list_add_tail(&slot->mm_node, &ksm_mm_head.slot.mm_node); else list_add_tail(&slot->mm_node, &ksm_scan.mm_slot->slot.mm_node); spin_unlock(&ksm_mmlist_lock);

      Determines where to insert the process's slot in the list of processes KSM scans: at the tail of the whole list when unmerging, otherwise just before the current scan position.
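
      In the kernel's circular doubly linked lists, list_add_tail(entry, head) inserts the entry just before head, which is what makes the two branches place the slot differently; a minimal re-implementation (not the kernel's list.h) makes that visible:

      ```c
      #include <stdio.h>

      /* Minimal circular doubly linked list, same shape as the kernel's struct list_head. */
      struct list_head { struct list_head *prev, *next; };

      static void list_init(struct list_head *h) { h->prev = h->next = h; }

      /* Same semantics as the kernel's list_add_tail(): insert 'entry' just before 'head'. */
      static void list_add_tail(struct list_head *entry, struct list_head *head)
      {
          entry->prev = head->prev;
          entry->next = head;
          head->prev->next = entry;
          head->prev = entry;
      }

      /* Illustrative per-process slot; node placed first so a cast recovers the slot. */
      struct slot { struct list_head node; const char *name; };

      int main(void)
      {
          struct list_head head;                           /* stands in for ksm_mm_head */
          struct slot a = { .name = "existing" };
          struct slot cursor = { .name = "scan-cursor" };
          struct slot fresh = { .name = "new-proc" };

          list_init(&head);
          list_add_tail(&a.node, &head);
          list_add_tail(&cursor.node, &head);

          /* Unmerge case: append at the end of the whole list.  The normal case would
           * instead be list_add_tail(&fresh.node, &cursor.node), i.e. just before the
           * current scan position, so the new slot is picked up on the next full pass. */
          list_add_tail(&fresh.node, &head);

          for (struct list_head *p = head.next; p != &head; p = p->next)
              printf("%s\n", ((struct slot *)p)->name);    /* existing, scan-cursor, new-proc */

          return 0;
      }
      ```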

    3. switch (advice) { case MADV_MERGEABLE: if (vma->vm_flags & VM_MERGEABLE) return 0; if (!vma_ksm_compatible(vma)) return 0; if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) { err = __ksm_enter(mm); if (err) return err; } *vm_flags |= VM_MERGEABLE; break; case MADV_UNMERGEABLE: if (!(*vm_flags & VM_MERGEABLE)) return 0; /* just ignore the advice */ if (vma->anon_vma) { err = unmerge_ksm_pages(vma, start, end, true); if (err) return err; } *vm_flags &= ~VM_MERGEABLE; break; }

      Handles the MADV_MERGEABLE and MADV_UNMERGEABLE advice: it sets or clears VM_MERGEABLE on the VMA, entering KSM for the mm or unmerging existing pages as needed.
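
      From userspace this advice is given via madvise(2); a minimal sketch marking an anonymous mapping as mergeable (assumes a kernel built with CONFIG_KSM and ksmd enabled):

      ```c
      #define _DEFAULT_SOURCE
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>

      int main(void)
      {
          size_t len = 2 * 1024 * 1024;

          /* Private anonymous memory: the kind of mapping KSM is allowed to merge. */
          void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (buf == MAP_FAILED) { perror("mmap"); return 1; }

          memset(buf, 0x5a, len);   /* identical page contents are merge candidates */

          /* Sets VM_MERGEABLE on the VMA via the code path annotated above. */
          if (madvise(buf, len, MADV_MERGEABLE) != 0)
              perror("madvise(MADV_MERGEABLE)");

          /* madvise(buf, len, MADV_UNMERGEABLE) would later break merged pages back out. */
          return 0;
      }
      ```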

    4. sleep_ms = READ_ONCE(ksm_thread_sleep_millisecs); wait_event_interruptible_timeout(ksm_iter_wait, sleep_ms != READ_ONCE(ksm_thread_sleep_millisecs), msecs_to_jiffies(sleep_ms));

      The sleep duration is read dynamically: if the value changes while ksmd is sleeping, it wakes up immediately. However, it starts from a hard-coded default value, so there is potential to tune this.
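
      The interval is exposed as a sysfs tunable, so the hard-coded default can already be overridden at runtime; a small sketch that does so (needs root, path as documented in the KSM admin guide):

      ```c
      #include <stdio.h>

      int main(void)
      {
          /* /sys/kernel/mm/ksm/sleep_millisecs holds the interval ksmd sleeps between
           * scan batches; a new value takes effect immediately because ksmd re-reads
           * it and aborts its current sleep, as in the wait condition quoted above. */
          FILE *f = fopen("/sys/kernel/mm/ksm/sleep_millisecs", "w");
          if (!f) { perror("fopen"); return 1; }

          fprintf(f, "%d\n", 200);   /* example value: sleep 200 ms between batches */
          fclose(f);
          return 0;
      }
      ```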

    1. static void ondemand_readahead(struct readahead_control *ractl, struct folio *folio, unsigned long req_size) {

      The function uses heuristics to choose the readahead size when reads are assumed or predicted to be sequential. However, these mostly apply in trivial cases, so it may not be worth changing.
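
      A simplified sketch of the kind of sequential-read check the function relies on: the request counts as sequential when it starts at (or just after) the end of the previous readahead window. The names and the exact condition here are illustrative; the real function tracks more state:

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* Illustrative readahead state: where the last window started and how big it was. */
      struct ra_state {
          unsigned long start;   /* first page index of the previous window */
          unsigned long size;    /* pages in the previous window            */
      };

      /* Treat the read as sequential if it begins where the last window ended
       * (or at the very next page), roughly what the ondemand logic keys on. */
      static bool looks_sequential(const struct ra_state *ra, unsigned long index)
      {
          return index == ra->start + ra->size || index == ra->start + ra->size + 1;
      }

      int main(void)
      {
          struct ra_state ra = { .start = 100, .size = 16 };

          printf("index 116 sequential? %d\n", looks_sequential(&ra, 116)); /* 1 */
          printf("index 400 sequential? %d\n", looks_sequential(&ra, 400)); /* 0 */
          return 0;
      }
      ```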

    2. unsigned long newsize = roundup_pow_of_two(size); if (newsize <= max / 32) newsize = newsize * 4; else if (newsize <= max / 4) newsize = newsize * 2; else newsize = max; return newsize;

      These are all heuristics to determine the initial readahead size. They trade potential memory overuse for faster access, and could be changed to avoid the hardcoded values.
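
      A standalone version of the sizing heuristic from the snippet above, with a simplified power-of-two helper standing in for the kernel's roundup_pow_of_two():

      ```c
      #include <stdio.h>

      /* Round up to the next power of two (simplified stand-in for the kernel helper). */
      static unsigned long roundup_pow_of_two(unsigned long v)
      {
          unsigned long p = 1;
          while (p < v)
              p <<= 1;
          return p;
      }

      /* Initial readahead window, in pages, for a request of 'size' pages
       * with a per-device maximum of 'max' pages. */
      static unsigned long init_ra_size(unsigned long size, unsigned long max)
      {
          unsigned long newsize = roundup_pow_of_two(size);

          if (newsize <= max / 32)
              newsize = newsize * 4;    /* tiny request: read ahead aggressively */
          else if (newsize <= max / 4)
              newsize = newsize * 2;    /* medium request: double it             */
          else
              newsize = max;            /* large request: cap at the maximum     */
          return newsize;
      }

      int main(void)
      {
          unsigned long max = 256;      /* e.g. 256 pages = 1 MiB with 4 KiB pages */

          printf("%lu -> %lu\n", 1UL,   init_ra_size(1, max));    /* 1   -> 4   */
          printf("%lu -> %lu\n", 16UL,  init_ra_size(16, max));   /* 16  -> 32  */
          printf("%lu -> %lu\n", 100UL, init_ra_size(100, max));  /* 100 -> 256 */
          return 0;
      }
      ```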

    3. if (unlikely(rac->_workingset))

      Heuristic hinting that _workingset is rarely set, i.e. the folios being read in are usually not refaults of pages recently evicted from the working set. When it is set, the time spent on the read is accounted as a memory stall (PSI), reflecting memory pressure on the current task.
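
      The stall time accounted on this path surfaces through the PSI interface; a small sketch that dumps it, assuming a kernel built with CONFIG_PSI:

      ```c
      #include <stdio.h>

      int main(void)
      {
          /* Stall time accounted via psi_memstall_enter()/exit() shows up here. */
          FILE *f = fopen("/proc/pressure/memory", "r");
          if (!f) { perror("fopen"); return 1; }

          char line[256];
          while (fgets(line, sizeof(line), f))
              fputs(line, stdout);   /* e.g. "some avg10=0.00 avg60=0.00 ... total=12345" */

          fclose(f);
          return 0;
      }
      ```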