redpanda: Add support to map code into hugepages #30190
@@ -0,0 +1,137 @@

```cpp
// Copyright 2026 Redpanda Data, Inc.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.md
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0

#include "syschecks/hugepages.h"

#include "base/vlog.h"
#include "syschecks/syschecks.h"

#include <sys/mman.h>

#include <cstddef>
#include <link.h>

// MADV_COLLAPSE was added in Linux 6.1. Define it for older headers.
#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25
#endif

namespace syschecks {

namespace {

/// Invoke fn(addr, len) for each non-writable PT_LOAD segment across
/// all loaded ELF objects (main binary + shared libraries). This covers
/// .text (PF_R|PF_X) and .rodata (PF_R) segments.
template<typename Fn>
void for_each_ro_segment(Fn fn) {
    dl_iterate_phdr(
      [](struct dl_phdr_info* info, size_t /*size*/, void* data) -> int {
          auto& callback = *static_cast<Fn*>(data);
          for (int i = 0; i < info->dlpi_phnum; ++i) {
              const auto& phdr = info->dlpi_phdr[i];
              if (phdr.p_type != PT_LOAD) {
                  continue;
              }
              // Skip writable segments (.data, .bss).
              if (phdr.p_flags & PF_W) {
                  continue;
              }
              auto addr = info->dlpi_addr + phdr.p_vaddr;
              auto len = phdr.p_memsz;
              if (len == 0) {
                  continue;
              }
              callback(reinterpret_cast<void*>(addr), static_cast<size_t>(len));
          }
          return 0; // continue iteration
      },
      &fn);
}

} // namespace
```
```cpp
void promote_code_to_hugepages() {
    size_t total_bytes = 0;
    size_t marked_bytes = 0;
    size_t collapsed_bytes = 0;

    for_each_ro_segment([&](void* addr, size_t len) {
        total_bytes += len;

        // Mark the VMA for huge pages. In "madvise" THP mode (the common
        // default), khugepaged only scans VMAs with VM_HUGEPAGE set, so this
        // is required for ongoing huge page maintenance — not just a hint.
        if (::madvise(addr, len, MADV_HUGEPAGE) == 0) {
            marked_bytes += len;
        }

        // Fault in all pages so MADV_COLLAPSE has something to work with.
```
**Member:** Why do we want to sync fault in all huge pages? Should we still want on-demand mapping here? It's a lot of memory to waste.

**Author:** For performance reasons? Best to take the page faults now? So from that POV I am on the "no" side. (Note that if we don't want this, it also rules out MADV_COLLAPSE altogether, as per the comment.)

It's like 120 MiB or so? Sure, you are probably never going to need every single code page, but I don't see much point in saving a few XX MB.

**Member:** Is that how big the executable segments covered are? Yeah, I guess it's not too much. Do you happen to have any timings?

Maybe yes, but this is more obvious if you are going to take them all eventually anyway (like the heap, and currently we don't even enable the lock-memory option for the heap). If you would only take 5% of them over the lifetime of the process then this looks less appealing. I have no idea if it's 5% or 95%, though (it evidently depends at least a bit on workload). Anyway, I think it's fine. One other thing though: why are we doing both MADV_HUGEPAGE and MADV_COLLAPSE? ISTM you only want one or the other: the former for lazy, the latter for sync full mapping. I.e., I feel like MADV_HUGEPAGE does nothing now, unless it's for kernels that support MADV_HUGEPAGE but not MADV_COLLAPSE?

**Author:** Yeah.

Timings of what, sorry?

Yeah exactly, MADV_HUGEPAGE should only affect the latter. I don't think it hurts? We could do the whole "if collapse fails, fall back to MADV_HUGEPAGE" dance, but I am not entirely sure about all the MADV_COLLAPSE return value semantics (e.g. per the docs, one "area" in the range might fail to map, which already makes it not return clean, so we would do both in that case anyway). No strong feelings though.
```cpp
        // At startup most pages are still demand-paged.
        // In theory this is not needed with MADV_COLLAPSE, but the docs
        // leave a cop-out, so we are explicit in any case.
        // Incompatible with ASan; disabled when it is enabled.
#if !__has_feature(address_sanitizer)
        auto* base = static_cast<volatile const char*>(addr);
        for (size_t off = 0; off < len; off += 4096) {
            [[maybe_unused]] char c = base[off];
        }
#endif
```
```cpp
        // Synchronously collapse 4 KB pages into 2 MB huge pages
        // (Linux 6.1+). Without this, khugepaged promotes pages in the
        // background over the next few seconds; MADV_COLLAPSE makes it
        // immediate (best effort).
        if (::madvise(addr, len, MADV_COLLAPSE) == 0) {
            collapsed_bytes += len;
        }
    });

    if (total_bytes > 0) {
        vlog(
          checklog.info,
          "hugepages: {}/{} MiB marked, {}/{} MiB collapsed",
          marked_bytes / (1024 * 1024),
          total_bytes / (1024 * 1024),
          collapsed_bytes / (1024 * 1024),
          total_bytes / (1024 * 1024));
    }
}
```
```cpp
void demote_code_from_hugepages() {
    size_t total_bytes = 0;
    size_t demoted_bytes = 0;

    for_each_ro_segment([&](void* addr, size_t len) {
        total_bytes += len;

        // Prevent khugepaged from re-promoting these pages.
        if (::madvise(addr, len, MADV_NOHUGEPAGE) != 0) {
            return;
        }

        // MADV_NOHUGEPAGE only prevents future promotions — existing PMD
        // entries for file-backed pages are not split. MADV_DONTNEED drops
        // the page table entries. They will be re-faulted at 4 KB
        // granularity (since MADV_NOHUGEPAGE is set).
        if (::madvise(addr, len, MADV_DONTNEED) == 0) {
            demoted_bytes += len;
        }
    });

    if (total_bytes > 0) {
        vlog(
          checklog.info,
          "hugepages: demoted {}/{} MiB from huge pages",
          demoted_bytes / (1024 * 1024),
          total_bytes / (1024 * 1024));
    }
}

} // namespace syschecks
```
@@ -0,0 +1,24 @@

```cpp
/*
 * Copyright 2026 Redpanda Data, Inc.
 *
 * Use of this software is governed by the Business Source License
 * included in the file licenses/BSL.md
 *
 * As of the Change Date specified in that file, in accordance with
 * the Business Source License, use of this software will be governed
 * by the Apache License, Version 2.0
 */

#pragma once

namespace syschecks {

/// Promote file-backed executable mappings (code segments) to transparent
/// huge pages.
void promote_code_to_hugepages();

/// Undo the effect of promote_code_to_hugepages(). Marks executable VMAs
/// with MADV_NOHUGEPAGE.
void demote_code_from_hugepages();

} // namespace syschecks
```
**Member:** How many PT_LOAD segments are there? I.e., are we wasting a lot of "space" if we don't fill them out to the next 2 MB boundary?

**Author:** There are 4 in the redpanda binary. We are adding ~2 MB of total padding (just coincidentally similar to the 2 MiB alignment). That is about a 2% binary size increase, from when I looked into this earlier.