-
elixir.bootlin.com
-
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
    /*
     * Watermark failed for this zone, but see if we can
     * grow this zone if it contains deferred pages.
     */
    if (deferred_pages_enabled()) {
        if (_deferred_grow_zone(zone, order))
            goto try_this_zone;
    }
#endif
This is a configuration policy. It will retry if the zone has deferred pages.
-
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
    /* Try again if zone has deferred pages */
    if (deferred_pages_enabled()) {
        if (_deferred_grow_zone(zone, order))
            goto try_this_zone;
    }
#endif
This is a configuration policy. It will retry if the zone has deferred pages.
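Both occurrences follow the same pattern: when CONFIG_DEFERRED_STRUCT_PAGE_INIT is off, the preprocessor removes the retry block entirely, so the policy costs nothing at runtime. A minimal userspace sketch of that gating pattern (the two helper functions are toy stand-ins, not the kernel implementations):

#include <stdio.h>

#define CONFIG_DEFERRED_STRUCT_PAGE_INIT 1  /* comment out to mimic =n */

static int deferred_pages_enabled(void) { return 1; }  /* toy stand-in */
static int _deferred_grow_zone(void)    { return 1; }  /* toy stand-in: zone grew */

int main(void)
{
    int attempts = 0;

try_this_zone:
    attempts++;
    if (attempts == 1) {  /* pretend the watermark check failed once */
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
        if (deferred_pages_enabled() && _deferred_grow_zone())
            goto try_this_zone;  /* retry after growing the zone */
#endif
    }
    printf("attempts: %d\n", attempts);  /* 2 with the config on, 1 without */
    return 0;
}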
-
batch = min(zone_managed_pages(zone) >> 10, SZ_1M / PAGE_SIZE);
batch /= 4;     /* We effectively *= 4 below */
if (batch < 1)
    batch = 1;

/*
 * Clamp the batch to a 2^n - 1 value. Having a power
 * of 2 value was found to be more likely to have
 * suboptimal cache aliasing properties in some cases.
 *
 * For example if 2 tasks are alternately allocating
 * batches of pages, one task can end up with a lot
 * of pages of one half of the possible page colors
 * and the other with pages of the other colors.
 */
batch = rounddown_pow_of_two(batch + batch/2) - 1;
Determines the number of pages for batch allocation based on a heuristic, using a (2^n - 1) value to minimize cache aliasing issues. I think it may also be categorized as a configuration policy, because which code runs depends on CONFIG_MMU.
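To watch the heuristic produce its (2^n - 1) values, the arithmetic can be rerun in userspace. In this sketch, rounddown_pow_of_two() is a simplified stand-in for the kernel helper, the zone sizes are made up, and 4 KiB pages are assumed:

#include <stdio.h>

/* Simplified stand-in for the kernel helper of the same name. */
static unsigned long rounddown_pow_of_two(unsigned long n)
{
    unsigned long p = 1;

    while (p * 2 <= n)
        p *= 2;
    return p;
}

int main(void)
{
    /* Hypothetical zone sizes in pages, assuming 4 KiB pages. */
    unsigned long managed[] = { 65536, 262144, 4194304 };
    unsigned long SZ_1M = 1UL << 20, PAGE_SIZE = 4096;

    for (int i = 0; i < 3; i++) {
        unsigned long batch = managed[i] >> 10;

        if (batch > SZ_1M / PAGE_SIZE)
            batch = SZ_1M / PAGE_SIZE;  /* the min() in the original */
        batch /= 4;
        if (batch < 1)
            batch = 1;
        batch = rounddown_pow_of_two(batch + batch / 2) - 1;
        printf("managed=%lu pages -> batch=%lu\n", managed[i], batch);
    }
    return 0;
}

This prints batch values of 15, 63 and 63, all of the form 2^n - 1.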
-
if (!should_skip_kasan_unpoison(gfp_flags) &&
    kasan_unpoison_pages(page, order, init)) {
    /* Take note that memory was initialized by KASAN. */
    if (kasan_has_integrated_init())
        init = false;
} else {
    /*
     * If memory tags have not been set by KASAN, reset the page
     * tags to ensure page_address() dereferencing does not fault.
     */
    for (i = 0; i != 1 << order; ++i)
        page_kasan_tag_reset(page + i);
}
This decision is made by configuration (the KASAN mode).
-
if (!skip_kasan_poison) {
    kasan_poison_pages(page, order, init);

    /* Memory is already initialized if KASAN did it internally. */
    if (kasan_has_integrated_init())
        init = false;
}
bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
Skipping KASAN memory poisoning is based on the configuration (it depends on which kind of KASAN is in use: generic or tag-based).
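For reference, the helper's shape in recent kernels is roughly as follows; this is a paraphrased sketch rather than a verbatim copy, and the exact body varies by kernel version. Generic KASAN decides purely at compile time via IS_ENABLED(), while the tag-based modes also consult the page's tag at runtime:

static bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
{
    /* Generic KASAN: a pure compile-time configuration decision. */
    if (IS_ENABLED(CONFIG_KASAN_GENERIC))
        return deferred_pages_enabled();

    /* Tag-based modes: a runtime check of the page's KASAN tag. */
    return page_kasan_tag(page) == KASAN_TAG_KERNEL;
}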
-
if (can_direct_reclaim && can_compact &&
    (costly_order ||
     (order > 0 && ac->migratetype != MIGRATE_MOVABLE)) &&
    !gfp_pfmemalloc_allowed(gfp_mask))
const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
The value of PAGE_ALLOC_COSTLY_ORDER is defined as 3, heuristically determining whether to attempt direct compaction or not. However, can_direct_reclaim is determined by the caller, so this is also a configuration policy.
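A quick way to see which requests take the direct-compaction path is to evaluate a simplified version of the predicate. In this sketch, can_compact and the pfmemalloc check are folded into plain booleans, and the sample requests are invented:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_ALLOC_COSTLY_ORDER 3  /* kernel default */

/* Simplified stand-ins for the kernel's inputs. */
struct req {
    unsigned int order;
    bool can_direct_reclaim;
    bool movable;
    bool pfmemalloc;
};

static bool should_try_compaction(struct req r)
{
    bool costly_order = r.order > PAGE_ALLOC_COSTLY_ORDER;

    return r.can_direct_reclaim &&
           (costly_order || (r.order > 0 && !r.movable)) &&
           !r.pfmemalloc;
}

int main(void)
{
    /* order-9 (2 MiB huge page) request with direct reclaim allowed: 1 */
    printf("%d\n", should_try_compaction((struct req){ 9, true, true, false }));
    /* order-0 request never goes straight to compaction: 0 */
    printf("%d\n", should_try_compaction((struct req){ 0, true, true, false }));
    /* order-2 unmovable request qualifies even though it is not costly: 1 */
    printf("%d\n", should_try_compaction((struct req){ 2, true, false, false }));
    return 0;
}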
-
if (!can_direct_reclaim)
bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
gfp_mask is a parameter passed by the caller, and it specifies whether the caller is willing to reclaim pages if the allocation would otherwise fail.
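A minimal illustration of this configuration knob, with made-up bit values (the real definitions live in include/linux/gfp_types.h, and the real GFP_KERNEL sets more bits than shown):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative bit; see include/linux/gfp_types.h for the real value. */
#define __GFP_DIRECT_RECLAIM 0x400u
#define GFP_KERNEL __GFP_DIRECT_RECLAIM  /* simplified: real one sets more bits */
#define GFP_ATOMIC 0x0u                  /* atomic callers must not sleep */

static bool can_direct_reclaim(unsigned int gfp_mask)
{
    return gfp_mask & __GFP_DIRECT_RECLAIM;
}

int main(void)
{
    printf("GFP_KERNEL -> %d\n", can_direct_reclaim(GFP_KERNEL));  /* 1 */
    printf("GFP_ATOMIC -> %d\n", can_direct_reclaim(GFP_ATOMIC));  /* 0 */
    return 0;
}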
-
if (!order || order > PAGE_ALLOC_COSTLY_ORDER)
PAGE_ALLOC_COSTLY_ORDER is a heuristic threshold that determines whether the compaction should be retried or not.
-
if (!node_isset(node, *used_node_mask)) {
    node_set(node, *used_node_mask);
    return node;
}

for_each_node_state(n, N_MEMORY) {

    /* Don't want a node to appear more than once */
    if (node_isset(n, *used_node_mask))
        continue;

    /* Use the distance array to find the distance */
    val = node_distance(node, n);

    /* Penalize nodes under us ("prefer the next node") */
    val += (n < node);

    /* Give preference to headless and unused nodes */
    if (!cpumask_empty(cpumask_of_node(n)))
        val += PENALTY_FOR_NODE_WITH_CPUS;

    /* Slight preference for less loaded node */
    val *= MAX_NUMNODES;
    val += node_load[n];

    if (val < min_val) {
        min_val = val;
        best_node = n;
    }
}

if (best_node >= 0)
    node_set(best_node, *used_node_mask);
Selects the best node based on a heuristic that takes node distance, CPU placement, and load into account. The code prefers unused nodes, slightly penalizes lower-numbered nodes (to "prefer the next node"), penalizes nodes that have CPUs (preferring headless nodes), and gives a slight preference to less-loaded nodes. It then marks the chosen node in the used-node mask so it is not picked again.
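The scoring formula can be exercised outside the kernel with a made-up topology. In this sketch, the distance table, CPU placement, and loads are all invented; PENALTY_FOR_NODE_WITH_CPUS mirrors the kernel's small constant penalty:

#include <stdio.h>

#define MAX_NUMNODES 4
#define PENALTY_FOR_NODE_WITH_CPUS 1  /* small constant, as in the kernel */

/* Hypothetical 2-socket topology, just to exercise the formula. */
static const int distance[MAX_NUMNODES][MAX_NUMNODES] = {
    { 10, 20, 30, 30 },
    { 20, 10, 30, 30 },
    { 30, 30, 10, 20 },
    { 30, 30, 20, 10 },
};
static const int has_cpus[MAX_NUMNODES] = { 1, 1, 0, 0 };  /* nodes 2,3 headless */
static const int node_load[MAX_NUMNODES] = { 2, 0, 0, 1 };

int main(void)
{
    int node = 1;  /* node we are building the fallback list for */
    int best = -1, min_val = 1 << 30;

    for (int n = 0; n < MAX_NUMNODES; n++) {
        if (n == node)
            continue;  /* pretend 'node' is already in used_node_mask */
        int val = distance[node][n];
        val += (n < node);                      /* prefer the next node */
        if (has_cpus[n])
            val += PENALTY_FOR_NODE_WITH_CPUS;  /* prefer headless nodes */
        val *= MAX_NUMNODES;                    /* scale before adding load */
        val += node_load[n];
        printf("candidate n=%d score=%d\n", n, val);
        if (val < min_val) {
            min_val = val;
            best = n;
        }
    }
    printf("best node: %d\n", best);
    return 0;
}

With these numbers, node 0 wins (score 90) despite carrying load and CPUs, because distance dominates once the score is scaled by MAX_NUMNODES.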
-
/*
 * !costly requests are much more important than
 * __GFP_RETRY_MAYFAIL costly ones because they are de
 * facto nofail and invoke OOM killer to move on while
 * costly can fail and users are ready to cope with
 * that. 1/4 retries is rather arbitrary but we would
 * need much more detailed feedback from compaction to
 * make a better decision.
 */
if (order > PAGE_ALLOC_COSTLY_ORDER)
    max_retries /= 4;
if (++(*compaction_retries) <= max_retries) {
    ret = true;
    goto out;
}
Adjusts the maximum number of compaction retries based on the allocation order. For costly (high-order) allocations the retry budget is cut to a quarter to avoid overcommitting resources; for example, with MAX_COMPACT_RETRIES = 16, a costly allocation gets at most 4 compaction retries. This is a heuristic decision.
-
if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
    *no_progress_loops = 0;
else
    (*no_progress_loops)++;
Decides whether to reset or increment no_progress_loops based on whether reclaim made progress and on the allocation order. Progress is only known at runtime, with PAGE_ALLOC_COSTLY_ORDER serving as the threshold, so this is a heuristic decision.
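The bookkeeping can be condensed into a small helper to show when the counter resets; MAX_RECLAIM_RETRIES is the kernel's give-up threshold for this counter, and the call sequence below is invented:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_ALLOC_COSTLY_ORDER 3
#define MAX_RECLAIM_RETRIES 16  /* kernel's retry cap, for context */

/* Stand-in for one round of the retry loop's bookkeeping. */
static void account_progress(bool did_some_progress, unsigned int order,
                             int *no_progress_loops)
{
    if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
        *no_progress_loops = 0;  /* progress on a cheap order: reset */
    else
        (*no_progress_loops)++;  /* stall, or a costly order: count it */
}

int main(void)
{
    int loops = 0;

    account_progress(true, 2, &loops);   /* reclaim helped: loops -> 0 */
    account_progress(false, 2, &loops);  /* no progress: loops -> 1 */
    account_progress(true, 9, &loops);   /* costly order never resets: loops -> 2 */
    printf("no_progress_loops=%d (give up at %d)\n", loops, MAX_RECLAIM_RETRIES);
    return 0;
}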
-
if (gfp_mask & __GFP_NORETRY)
gfp_mask is provided by the caller. This configuration determines whether the function will retry the allocation or not.
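A hypothetical caller that opts out of retries could look like this (sketch only; the fallback helper is invented for illustration):

/*
 * Try to grab an order-2 block, but fail fast instead of retrying:
 * the caller has a cheaper fallback and prefers not to stall.
 */
struct page *page = alloc_pages(GFP_KERNEL | __GFP_NORETRY, 2);

if (!page)
    use_single_page_fallback();  /* hypothetical fallback path */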
-
-
elixir.bootlin.com
-
/*
 * If the user wants hardware cache aligned objects then follow that
 * suggestion if the object is sufficiently large.
 *
 * The hardware cache alignment cannot override the specified
 * alignment though. If that is greater then use it.
 */
if (flags & SLAB_HWCACHE_ALIGN) {
    unsigned int ralign;

    ralign = cache_line_size();
    while (size <= ralign / 2)
        ralign /= 2;
    align = max(align, ralign);
}
SLAB_HWCACHE_ALIGN is specified by the user (the cache creator), so the user decides whether objects should be aligned to the hardware cache line or not.
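For example, a user creating a cache with SLAB_HWCACHE_ALIGN might write the following (the object type and init function are hypothetical; kmem_cache_create() is the real API). With a 64-byte cache line and this 24-byte object, the halving loop shrinks ralign from 64 to 32, since 24 <= 32 but 24 > 16, so objects get 32-byte alignment instead of wasting a full cache line each:

struct my_hot_obj {  /* hypothetical 24-byte object */
    u64 counter;
    u64 last_seen;
    u32 flags;
    u32 pad;
};

static struct kmem_cache *my_cache;

static int __init my_module_init(void)
{
    my_cache = kmem_cache_create("my_hot_obj",
                                 sizeof(struct my_hot_obj),
                                 0,  /* align: let SLAB_HWCACHE_ALIGN decide */
                                 SLAB_HWCACHE_ALIGN,
                                 NULL);
    return my_cache ? 0 : -ENOMEM;
}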
-
if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_NORMAL))
    flags |= SLAB_NO_MERGE;
This is a configuration policy: if CONFIG_MEMCG_KMEM is enabled, cache merging is disabled for KMALLOC_NORMAL caches.
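IS_ENABLED() expands to a compile-time constant 0 or 1, so with CONFIG_MEMCG_KMEM=n the compiler discards the branch while still type-checking it, unlike #ifdef. A minimal userspace imitation of the pattern (the macro and flag values here are illustrative stand-ins, not the kernel's):

#include <stdio.h>

#define IS_ENABLED_CONFIG_MEMCG_KMEM 1  /* stand-in for IS_ENABLED(CONFIG_MEMCG_KMEM) */
#define SLAB_NO_MERGE 0x1000000u        /* illustrative bit, not the kernel value */

enum kmalloc_cache_type { KMALLOC_NORMAL, KMALLOC_RECLAIM };

int main(void)
{
    enum kmalloc_cache_type type = KMALLOC_NORMAL;
    unsigned int flags = 0;

    if (IS_ENABLED_CONFIG_MEMCG_KMEM && type == KMALLOC_NORMAL)
        flags |= SLAB_NO_MERGE;  /* this cache will never be merged */

    printf("flags=%#x\n", flags);
    return 0;
}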
-