47 Matching Annotations
  1. Aug 2024
    1. if (hugetlb_cgroup_disabled())

      This is probably not an interesting policy decision for ldos. It is a feature flag for the running OS. But if cgroups were decided by policy, then this flag would be controlled by the cgroup decision.

    1. int vm_swappiness = 60;

      the main parameter that controls how aggressively the system will swap anon pages vs. file pages.

    2. (sc->order > PAGE_ALLOC_COSTLY_ORDER || sc->priority < DEF_PRIORITY - 2))

      constants

    3. MIN_NR_GENS

      magic number?

    4. static long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, bool can_swap)

      figure out how many pages to scan.

    5. static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc) {

      sets up the struct scan_control. Most of the values come from elsewhere, but this function seems to bring it all together.

    1. Flag indicating whether KSM should run.

    2. flag indicating if KSM should merge same-pages across NUMA nodes

    3. KSM_ATTR(advisor_target_scan_time);

      target scan time -- used by the EWA?

    4. KSM_ATTR(advisor_max_pages_to_scan);

      max number of pages to scan per iteration

    5. KSM_ATTR(advisor_min_pages_to_scan);

      min pages to scan per iteration

    6. KSM_ATTR(advisor_max_cpu);

      max amount of cpu per iteration of the ksmd?

    7. KSM_ATTR(advisor_mode);

      the mode that ksm runs in -- only two for now.

    8. KSM_ATTR(smart_scan);

      smart scan -- not sure what this does

    9. KSM_ATTR(stable_node_chains_prune_millisecs);

      millis before a page is removed from the stable tree??

    10. KSM_ATTR(max_page_sharing);

      what is the max number of page sharings -- I think this is how many times a single page can be shared (to limit the length of the reverse map?)

    11. KSM_ATTR(use_zero_pages);

      should KSM use special zero-page handling

    12. KSM_ATTR(merge_across_nodes);

      should ksm merge across NUMA nodes

    13. KSM_ATTR(run);

      flag indicating if ksm should run (0 or 1)

    14. KSM_ATTR(pages_to_scan);

      number of pages to scan per loop iteration

    15. KSM_ATTR(sleep_millisecs);

      sleep time between ksmd loop iterations

    16. ksm_thread_pages_to_scan

      how many pages to scan. This is what many of the other config values are trying to get at.

    17. #define DEFAULT_PAGES_TO_SCAN 100

      constant for the number of pages to scan

    18. Constant default value for pages to scan

    19. configuration for targeted scan time

    20. max pages to scan during a single pass of the KSM loop

    21. min number of pages to scan during a KSM loop

    22. config for how much cpu to consume.

    23. choose the mode to run -- either using EWA or just some fixed values.

    24. function to compute an EWA for the number of pages to scan. Uses many of the config parameters from sysfs.

    25. configuration for how long before a page is pruned from the stable tree

    26. configuration for max page sharing -- not quite sure what this is doing

    27. Configuration to decide if KSM should use special handling for pages filled with zeros.

    28. Configuration for the number of pages to scan with each run

    29. Configuration for how often the thread should run

  2. Jul 2024
    1. static void scan_time_advisor(void)

      This function is calculating the pages to scan based on other metrics such as CPU consumed. Seems like a good place for a learned algorithm.

  3. Jun 2024
    1. /* * The larger the object size is, the more slabs we want on the partial * list to avoid pounding the page allocator excessively. */ s->min_partial = min_t(unsigned long, MAX_PARTIAL, ilog2(s->size) / 2); s->min_partial = max_t(unsigned long, MIN_PARTIAL, s->min_partial);

      A policy decision about how often we may have to go to the page allocator.

    2. /* * calculate_sizes() determines the order and the distribution of data within * a slab object. */ static int calculate_sizes(struct kmem_cache *s) {

      computes several values for the allocator based on the size and flags of the allocator being created.

    3. #ifndef CONFIG_SLUB_TINY static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)

      Depending on CONFIG_SLUB_TINY, should there be an active slab for each CPU?

    4. static inline int calculate_order(unsigned int size) { unsigned int order; unsigned int min_objects; unsigned int max_objects; unsigned int min_order; min_objects = slub_min_objects; if (!min_objects) {

      calculate the order (power of two number of pages) that each slab in this allocator should have.

    5. Should the allocator do consistency checks?

    6. why slow path vs fast path? policy?

    7. is there a policy decision here? Why would we choose to use lockless vs. not?

    8. set the number of slabs per cpu

    1. int calculate_normal_threshold(struct zone *zone) { int threshold; int mem; /* memory in 128 MB units */ /* * The threshold scales with the number of processors and the amount * of memory per zone. More memory means that we can defer updates for * longer, more processors could lead to more contention. * fls() is used to have a cheap way of logarithmic scaling. * * Some sample thresholds: * * Threshold Processors (fls) Zonesize fls(mem)+1 * ------------------------------------------------------------------ * 8 1 1 0.9-1 GB 4 * 16 2 2 0.9-1 GB 4 * 20 2 2 1-2 GB 5 * 24 2 2 2-4 GB 6 * 28 2 2 4-8 GB 7 * 32 2 2 8-16 GB 8 * 4 2 2 <128M 1 * 30 4 3 2-4 GB 5 * 48 4 3 8-16 GB 8 * 32 8 4 1-2 GB 4 * 32 8 4 0.9-1GB 4 * 10 16 5 <128M 1 * 40 16 5 900M 4 * 70 64 7 2-4 GB 5 * 84 64 7 4-8 GB 6 * 108 512 9 4-8 GB 6 * 125 1024 10 8-16 GB 8 * 125 1024 10 16-32 GB 9 */ mem = zone_managed_pages(zone) >> (27 - PAGE_SHIFT); threshold = 2 * fls(num_online_cpus()) * (1 + fls(mem)); /* * Maximum threshold is 125 */ threshold = min(125, threshold); return threshold; }

      a "magic" formula for computing the per-zone vmstat update threshold, scaled by the number of online CPUs and the zone's memory.

    1. khugepaged_pages_to_scan = HPAGE_PMD_NR * 8; khugepaged_max_ptes_none = HPAGE_PMD_NR - 1; khugepaged_max_ptes_swap = HPAGE_PMD_NR / 8; khugepaged_max_ptes_shared = HPAGE_PMD_NR / 2;

      a bunch of "magic" numbers for khugepaged