# SPDX-License-Identifier: GPL-2.0
#
# General architecture dependent options
#

#
# Note: arch/$(SRCARCH)/Kconfig needs to be included first so that it can
# override the default values in this file.
#
source "arch/$(SRCARCH)/Kconfig"

config ARCH_CONFIGURES_CPU_MITIGATIONS
	bool

if !ARCH_CONFIGURES_CPU_MITIGATIONS
config CPU_MITIGATIONS
	def_bool y
endif

menu "General architecture-dependent options"

config ARCH_HAS_SUBPAGE_FAULTS
	bool
	help
	  Select if the architecture can check permissions at sub-page
	  granularity (e.g. arm64 MTE). The probe_user_*() functions
	  must be implemented.

config HOTPLUG_SMT
	bool

config SMT_NUM_THREADS_DYNAMIC
	bool

# Selected by HOTPLUG_CORE_SYNC_DEAD or HOTPLUG_CORE_SYNC_FULL
config HOTPLUG_CORE_SYNC
	bool

# Basic CPU dead synchronization selected by architecture
config HOTPLUG_CORE_SYNC_DEAD
	bool
	select HOTPLUG_CORE_SYNC

# Full CPU synchronization with alive state selected by architecture
config HOTPLUG_CORE_SYNC_FULL
	bool
	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
	select HOTPLUG_CORE_SYNC

config HOTPLUG_SPLIT_STARTUP
	bool
	select HOTPLUG_CORE_SYNC_FULL

config HOTPLUG_PARALLEL
	bool
	select HOTPLUG_SPLIT_STARTUP

config GENERIC_ENTRY
	bool

config KPROBES
	bool "Kprobes"
	depends on HAVE_KPROBES
	select KALLSYMS
	select EXECMEM
	select NEED_TASKS_RCU
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".

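# Illustrative sketch (editor's addition, not upstream Kconfig content): with
# KPROBES enabled, kernel module code typically arms a probe roughly like this
# (symbol name and handler are hypothetical examples):
#
#	static struct kprobe kp = {
#		.symbol_name = "kernel_clone",	/* example probe target */
#		.pre_handler = my_pre_handler,	/* hypothetical callback */
#	};
#	register_kprobe(&kp);
#	/* ... pre_handler runs each time the probed address is hit ... */
#	unregister_kprobe(&kp);
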
config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	select OBJTOOL if HAVE_JUMP_LABEL_HACK
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM have such
	  branches and include support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. Updating
	  a condition is slower, but such updates are always very rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	    flags may increase the size of the kernel slightly. )

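# Illustrative sketch (editor's addition, not upstream Kconfig content): this
# option backs the static-key API used in kernel C code, roughly (the key name
# is a hypothetical example):
#
#	DEFINE_STATIC_KEY_FALSE(my_key);
#
#	if (static_branch_unlikely(&my_key))	/* compiled as a patchable nop */
#		do_rare_thing();
#
#	static_branch_enable(&my_key);	/* rewrites the nop into a jump */
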
config STATIC_KEYS_SELFTEST
	bool "Static key selftest"
	depends on JUMP_LABEL
	help
	  Boot time self-test of the branch patching code.

config STATIC_CALL_SELFTEST
	bool "Static call selftest"
	depends on HAVE_STATIC_CALL
	help
	  Boot time self-test of the call patching code.

config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	select NEED_TASKS_RCU

config KPROBES_ON_FTRACE
	def_bool y
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	help
	  If the function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

config UPROBES
	def_bool n
	depends on ARCH_SUPPORTS_UPROBES
	help
	  Uprobes is the user-space counterpart to kprobes: they
	  enable instrumentation applications (such as 'perf probe')
	  to establish unintrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	help
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned too. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.

	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/core-api/unaligned-memory-access.rst for
	  more information on the topic of unaligned memory accesses.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. trap on
	  unaligned access and require fixing it up in the exception
	  handler.)

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  much.

	  See Documentation/core-api/unaligned-memory-access.rst for more
	  information on the topic of unaligned memory accesses.

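# Illustrative sketch (editor's addition, not upstream Kconfig content): code
# that must also run on architectures lacking this symbol reads
# possibly-unaligned data through the helpers, e.g.:
#
#	u32 v = get_unaligned((u32 *)p);	/* safe on any architecture */
#	put_unaligned(v, (u32 *)q);
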
config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.

config KRETPROBES
	def_bool y
	depends on KPROBES && (HAVE_KRETPROBES || HAVE_RETHOOK)

config KRETPROBE_ON_RETHOOK
	def_bool y
	depends on HAVE_RETHOOK
	depends on KRETPROBES
	select RETHOOK

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a cpu is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KPROBES
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_KPROBES_ON_FTRACE
	bool

config ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
	bool
	help
	  Since kretprobes modifies the return address on the stack, a
	  stacktrace may see the kretprobe trampoline address instead
	  of the correct one. If the architecture stacktrace code and
	  unwinder can adjust such entries, select this configuration.

config HAVE_FUNCTION_ERROR_INJECTION
	bool

config HAVE_NMI
	bool

config HAVE_FUNCTION_DESCRIPTORS
	bool

config TRACE_IRQFLAGS_SUPPORT
	bool

config TRACE_IRQFLAGS_NMI_SUPPORT
	bool

#
# An arch should select this if it provides all these things:
#
#	task_pt_regs()			in asm/processor.h or asm/ptrace.h
#	arch_has_single_step()		if there is hardware single-step support
#	arch_has_block_step()		if there is hardware block-step support
#	asm/syscall.h			supplying asm-generic/syscall.h interface
#	linux/regset.h			user_regset interfaces
#	CORE_DUMP_USE_REGSET		#define'd in linux/elf.h
#	TIF_SYSCALL_TRACE		calls ptrace_report_syscall_{entry,exit}
#	TIF_NOTIFY_RESUME		calls resume_user_mode_work()
#
config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

config GENERIC_IDLE_POLL_SETUP
	bool

config ARCH_HAS_FORTIFY_SOURCE
	bool
	help
	  An architecture should select this when it can successfully
	  build and run with CONFIG_FORTIFY_SOURCE.

#
# Select if the arch provides a historic keepinit alias for the retain_initrd
# command line option
#
config ARCH_HAS_KEEPINITRD
	bool

# Select if arch has all set_memory_ro/rw/x/nx() functions in asm/cacheflush.h
config ARCH_HAS_SET_MEMORY
	bool

# Select if arch has all set_direct_map_invalid/default() functions
config ARCH_HAS_SET_DIRECT_MAP
	bool

#
# Select if the architecture provides the arch_dma_set_uncached symbol to
# either provide an uncached segment alias for a DMA allocation, or
# to remap the page tables in place.
#
config ARCH_HAS_DMA_SET_UNCACHED
	bool

#
# Select if the architecture provides the arch_dma_clear_uncached symbol
# to undo an in-place page table remap for uncached access.
#
config ARCH_HAS_DMA_CLEAR_UNCACHED
	bool

config ARCH_HAS_CPU_FINALIZE_INIT
	bool

# The architecture has a per-task state that includes the mm's PASID
config ARCH_HAS_CPU_PASID
	bool
	select IOMMU_MM_DATA

config HAVE_ARCH_THREAD_STRUCT_WHITELIST
	bool
	help
	  An architecture should select this to provide hardened usercopy
	  knowledge about what region of the thread_struct should be
	  whitelisted for copying to userspace. Normally this is only the
	  FPU registers. Specifically, arch_thread_struct_whitelist()
	  should be implemented. Without this, the entire thread_struct
	  field in task_struct will be left whitelisted.

# Select if arch wants to size task_struct dynamically via arch_task_struct_size:
config ARCH_WANTS_DYNAMIC_TASK_STRUCT
	bool

config ARCH_WANTS_NO_INSTR
	bool
	help
	  An architecture should select this if the noinstr macro is being used on
	  functions to denote that the toolchain should avoid instrumenting such
	  functions and is required for correctness.

config ARCH_32BIT_OFF_T
	bool
	depends on !64BIT
	help
	  All new 32-bit architectures should have 64-bit off_t type on
	  userspace side which corresponds to the loff_t kernel type. This
	  is the requirement for modern ABIs. Some existing architectures
	  still support 32-bit off_t. This option is enabled for all such
	  architectures explicitly.

# Selected by 64 bit architectures which have a 32 bit f_tinode in struct ustat
config ARCH_32BIT_USTAT_F_TINODE
	bool

config HAVE_ASM_MODVERSIONS
	bool
	help
	  This symbol should be selected by an architecture if it provides
	  <asm/asm-prototypes.h> to support the module versioning for symbols
	  exported from assembly code.

config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h.
	  For example the kprobes-based event tracer needs this API.

config HAVE_RSEQ
	bool
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	help
	  This symbol should be selected by an architecture if it
	  supports an implementation of restartable sequences.

config HAVE_RUST
	bool
	help
	  This symbol should be selected by an architecture if it
	  supports Rust.

config HAVE_FUNCTION_ARG_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access function arguments from pt_regs,
	  declared in asm/ptrace.h.

config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoint addresses, others have mixed registers to store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints in the
	  latter fashion.

config HAVE_USER_RETURN_NOTIFIER
	bool

config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. Also has support for calculating CPU cycle events
	  to determine how many clock cycles in a given period.

config HAVE_HARDLOCKUP_DETECTOR_PERF
	bool
	depends on HAVE_PERF_EVENTS_NMI
	help
	  The arch chooses to use the generic perf-NMI-based hardlockup
	  detector. Must define HAVE_PERF_EVENTS_NMI.

config HAVE_HARDLOCKUP_DETECTOR_ARCH
	bool
	help
	  The arch provides its own hardlockup detector implementation instead
	  of the generic ones.

	  It uses the same command line parameters, and sysctl interface,
	  as the generic hardlockup detectors.

config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  bit-mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.

config HAVE_ARCH_JUMP_LABEL
	bool

config HAVE_ARCH_JUMP_LABEL_RELATIVE
	bool

config MMU_GATHER_TABLE_FREE
	bool

config MMU_GATHER_RCU_TABLE_FREE
	bool
	select MMU_GATHER_TABLE_FREE

config MMU_GATHER_PAGE_SIZE
	bool

config MMU_GATHER_NO_RANGE
	bool
	select MMU_GATHER_MERGE_VMAS

config MMU_GATHER_NO_FLUSH_CACHE
	bool

config MMU_GATHER_MERGE_VMAS
	bool

config MMU_GATHER_NO_GATHER
	bool
	depends on MMU_GATHER_TABLE_FREE

config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
	bool
	help
	  Temporary select until all architectures can be converted to have
	  irqs disabled over activate_mm. Architectures that do IPI based TLB
	  shootdowns should enable this.

# Use normal mm refcounting for MMU_LAZY_TLB kernel thread references.
# MMU_LAZY_TLB_REFCOUNT=n can improve the scalability of context switching
# to/from kernel threads when the same mm is running on a lot of CPUs (a large
# multi-threaded application), by reducing contention on the mm refcount.
#
# This can be disabled if the architecture ensures no CPUs are using an mm as a
# "lazy tlb" beyond its final refcount (i.e., by the time __mmdrop frees the mm
# or its kernel page tables). This could be arranged by arch_exit_mmap(), or
# final exit(2) TLB flush, for example.
#
# To implement this, an arch *must*:
# Ensure the _lazy_tlb variants of mmgrab/mmdrop are used when manipulating
# the lazy tlb reference of a kthread's ->active_mm (non-arch code has been
# converted already).
config MMU_LAZY_TLB_REFCOUNT
	def_bool y
	depends on !MMU_LAZY_TLB_SHOOTDOWN

# This option allows MMU_LAZY_TLB_REFCOUNT=n. It ensures no CPUs are using an
# mm as a lazy tlb beyond its last reference count, by shooting down these
# users before the mm is deallocated. __mmdrop() first IPIs all CPUs that may
# be using the mm as a lazy tlb, so that they may switch themselves to using
# init_mm for their active mm. mm_cpumask(mm) is used to determine which CPUs
# may be using mm as a lazy tlb mm.
#
# To implement this, an arch *must*:
# - At the time of the final mmdrop of the mm, ensure mm_cpumask(mm) contains
#   at least all possible CPUs in which the mm is lazy.
# - It must meet the requirements for MMU_LAZY_TLB_REFCOUNT=n (see above).
config MMU_LAZY_TLB_SHOOTDOWN
	bool

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool

config ARCH_HAVE_EXTRA_ELF_NOTES
	bool
	help
	  An architecture should select this in order to enable adding an
	  arch-specific ELF note section to core files. It must provide two
	  functions: elf_coredump_extra_notes_size() and
	  elf_coredump_extra_notes_write() which are invoked by the ELF core
	  dumper.

config ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
	bool

Heiko Carstens | 43570fd | 2012-01-12 17:17:27 -0800 | [diff] [blame] | 525 | config HAVE_ALIGNED_STRUCT_PAGE |
| 526 | bool |
| 527 | help |
| 528 | This makes sure that struct pages are double word aligned and that |
| 529 | e.g. the SLUB allocator can perform double word atomic operations |
| 530 | on a struct page for better performance. However selecting this |
| 531 | might increase the size of a struct page by a word. |
| 532 | |
Heiko Carstens | 4156153 | 2012-01-12 17:17:30 -0800 | [diff] [blame] | 533 | config HAVE_CMPXCHG_LOCAL |
| 534 | bool |
| 535 | |
Heiko Carstens | 2565409 | 2012-01-12 17:17:33 -0800 | [diff] [blame] | 536 | config HAVE_CMPXCHG_DOUBLE |
| 537 | bool |
| 538 | |
Paul E. McKenney | 77e5849 | 2017-01-14 13:32:50 -0800 | [diff] [blame] | 539 | config ARCH_WEAK_RELEASE_ACQUIRE |
| 540 | bool |
| 541 | |
Will Deacon | c1d7e01 | 2012-07-30 14:42:46 -0700 | [diff] [blame] | 542 | config ARCH_WANT_IPC_PARSE_VERSION |
| 543 | bool |
| 544 | |
| 545 | config ARCH_WANT_COMPAT_IPC_PARSE_VERSION |
| 546 | bool |
| 547 | |
Chris Metcalf | 48b25c4 | 2012-03-15 13:13:38 -0400 | [diff] [blame] | 548 | config ARCH_WANT_OLD_COMPAT_IPC |
Will Deacon | c1d7e01 | 2012-07-30 14:42:46 -0700 | [diff] [blame] | 549 | select ARCH_WANT_COMPAT_IPC_PARSE_VERSION |
Chris Metcalf | 48b25c4 | 2012-03-15 13:13:38 -0400 | [diff] [blame] | 550 | bool |
| 551 | |
config HAVE_ARCH_SECCOMP
	bool
	help
	  An arch should select this symbol to support seccomp mode 1 (the fixed
	  syscall policy), and must provide an override for __NR_seccomp_sigreturn,
	  plus the compat syscalls if the asm-generic/seccomp.h defaults need
	  adjustment:
	  - __NR_seccomp_read_32
	  - __NR_seccomp_write_32
	  - __NR_seccomp_exit_32
	  - __NR_seccomp_sigreturn_32

config HAVE_ARCH_SECCOMP_FILTER
	bool
	select HAVE_ARCH_SECCOMP
	help
	  An arch should select this symbol if it provides all of these things:
	  - all the requirements for HAVE_ARCH_SECCOMP
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up
	  - if !HAVE_SPARSE_SYSCALL_NR, have SECCOMP_ARCH_NATIVE,
	    SECCOMP_ARCH_NATIVE_NR, SECCOMP_ARCH_NATIVE_NAME defined. If
	    COMPAT is supported, have the SECCOMP_ARCH_COMPAT* defines too.

config SECCOMP
	prompt "Enable seccomp to safely execute untrusted bytecode"
	def_bool y
	depends on HAVE_ARCH_SECCOMP
	help
	  This kernel feature is useful for number crunching applications
	  that may need to handle untrusted bytecode during their
	  execution. By using pipes or other transports made available
	  to the process as file descriptors supporting the read/write
	  syscalls, it's possible to isolate those applications in their
	  own address space using seccomp. Once seccomp is enabled via
	  prctl(PR_SET_SECCOMP) or the seccomp() syscall, it cannot be
	  disabled and the task is only allowed to execute a few safe
	  syscalls defined by each seccomp mode.

	  If unsure, say Y.

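# Userspace sketch (illustrative, not part of this file): a task enters
# strict mode (seccomp mode 1) with a single prctl() call, after which
# only read(), write(), _exit() and sigreturn() are permitted:
#
#	prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);
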
config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/userspace-api/seccomp_filter.rst for details.

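# Userspace sketch (illustrative): a minimal classic-BPF filter that
# returns EPERM for one syscall and allows everything else:
#
#	struct sock_filter filter[] = {
#		BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
#			 offsetof(struct seccomp_data, nr)),
#		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_getpid, 0, 1),
#		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | EPERM),
#		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
#	};
#	struct sock_fprog prog = { .len = 4, .filter = filter };
#
#	prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
#	prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
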
config SECCOMP_CACHE_DEBUG
	bool "Show seccomp filter cache status in /proc/pid/seccomp_cache"
	depends on SECCOMP_FILTER && !HAVE_SPARSE_SYSCALL_NR
	depends on PROC_FS
	help
	  This enables the /proc/pid/seccomp_cache interface to monitor
	  seccomp cache data. The file format is subject to change. Reading
	  the file requires CAP_SYS_ADMIN.

	  This option is for debugging only. Enabling presents the risk that
	  an adversary may be able to infer the seccomp filter logic.

	  If unsure, say N.

config HAVE_ARCH_STACKLEAK
	bool
	help
	  An architecture should select this if it has the code which
	  fills the used part of the kernel stack with the STACKLEAK_POISON
	  value before returning from system calls.

config HAVE_STACKPROTECTOR
	bool
	help
	  An arch should select this symbol if:
	  - it has implemented a stack canary (e.g. __stack_chk_guard)

config STACKPROTECTOR
	bool "Stack Protector buffer overflow detection"
	depends on HAVE_STACKPROTECTOR
	depends on $(cc-option,-fstack-protector)
	default y
	help
	  This option turns on the "stack-protector" GCC feature. This
	  feature puts, at the beginning of functions, a canary value on
	  the stack just before the return address, and validates
	  the value just before actually returning. Stack-based buffer
	  overflows (that need to overwrite this return address) now also
	  overwrite the canary, which gets detected and the attack is then
	  neutralized via a kernel panic.

	  Functions will have the stack-protector canary logic added if they
	  have an 8-byte or larger character array on the stack.

	  This feature requires gcc version 4.2 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 3% of all kernel functions, which increases kernel code size
	  by about 0.3%.
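
# The failure mode this catches, sketched in C (illustrative only): an
# overflow that would reach the saved return address must first clobber
# the canary, so the epilogue check panics instead of letting control
# flow be hijacked:
#
#	void f(const char *attacker_data)
#	{
#		char buf[16];
#		strcpy(buf, attacker_data);	/* overflow walks over the
#						 * canary before reaching the
#						 * return address */
#	}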
| 659 | |
Linus Torvalds | 050e9ba | 2018-06-14 12:21:18 +0900 | [diff] [blame] | 660 | config STACKPROTECTOR_STRONG |
Masahiro Yamada | 2a61f47 | 2018-05-28 18:22:00 +0900 | [diff] [blame] | 661 | bool "Strong Stack Protector" |
Linus Torvalds | 050e9ba | 2018-06-14 12:21:18 +0900 | [diff] [blame] | 662 | depends on STACKPROTECTOR |
Masahiro Yamada | 2a61f47 | 2018-05-28 18:22:00 +0900 | [diff] [blame] | 663 | depends on $(cc-option,-fstack-protector-strong) |
| 664 | default y |
Kees Cook | 8779657 | 2013-12-19 11:35:59 -0800 | [diff] [blame] | 665 | help |
| 666 | Functions will have the stack-protector canary logic added in any |
| 667 | of the following conditions: |
| 668 | |
| 669 | - local variable's address used as part of the right hand side of an |
| 670 | assignment or function argument |
| 671 | - local variable is an array (or union containing an array), |
| 672 | regardless of array type or length |
| 673 | - uses register local variables |
| 674 | |
| 675 | This feature requires gcc version 4.9 or above, or a distribution |
| 676 | gcc with the feature backported ("-fstack-protector-strong"). |
| 677 | |
| 678 | On an x86 "defconfig" build, this feature adds canary checks to |
| 679 | about 20% of all kernel functions, which increases the kernel code |
| 680 | size by about 2%. |
| 681 | |
config ARCH_SUPPORTS_SHADOW_CALL_STACK
	bool
	help
	  An architecture should select this if it supports the compiler's
	  Shadow Call Stack and implements runtime support for shadow stack
	  switching.

config SHADOW_CALL_STACK
	bool "Shadow Call Stack"
	depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
	depends on DYNAMIC_FTRACE_WITH_ARGS || DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER
	depends on MMU
	help
	  This option enables the compiler's Shadow Call Stack, which
	  uses a shadow stack to protect function return addresses from
	  being overwritten by an attacker. More information can be found
	  in the compiler's documentation:

	  - Clang: https://clang.llvm.org/docs/ShadowCallStack.html
	  - GCC: https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html#Instrumentation-Options

	  Note that security guarantees in the kernel differ from the
	  ones documented for user space. The kernel must store addresses
	  of shadow stacks in memory, which means an attacker capable of
	  reading and writing arbitrary memory may be able to locate them
	  and hijack control flow by modifying the stacks.

config DYNAMIC_SCS
	bool
	help
	  Set by the arch code if it relies on code patching to insert the
	  shadow call stack push and pop instructions rather than on the
	  compiler.

config LTO
	bool
	help
	  Selected if the kernel will be built using the compiler's LTO feature.

config LTO_CLANG
	bool
	select LTO
	help
	  Selected if the kernel will be built using Clang's LTO feature.

config ARCH_SUPPORTS_LTO_CLANG
	bool
	help
	  An architecture should select this option if it supports:
	  - compiling with Clang,
	  - compiling inline assembly with Clang's integrated assembler,
	  - and linking with LLD.

config ARCH_SUPPORTS_LTO_CLANG_THIN
	bool
	help
	  An architecture should select this option if it can support Clang's
	  ThinLTO mode.

config HAS_LTO_CLANG
	def_bool y
	depends on CC_IS_CLANG && LD_IS_LLD && AS_IS_LLVM
	depends on $(success,$(NM) --help | head -n 1 | grep -qi llvm)
	depends on $(success,$(AR) --help | head -n 1 | grep -qi llvm)
	depends on ARCH_SUPPORTS_LTO_CLANG
	depends on !FTRACE_MCOUNT_USE_RECORDMCOUNT
	# https://github.com/ClangBuiltLinux/linux/issues/1721
	depends on (!KASAN || KASAN_HW_TAGS || CLANG_VERSION >= 170000) || !DEBUG_INFO
	depends on (!KCOV || CLANG_VERSION >= 170000) || !DEBUG_INFO
	depends on !GCOV_KERNEL
	help
	  The compiler and Kconfig options support building with Clang's
	  LTO.

choice
	prompt "Link Time Optimization (LTO)"
	default LTO_NONE
	help
	  This option enables Link Time Optimization (LTO), which allows the
	  compiler to optimize binaries globally.

	  If unsure, select LTO_NONE. Note that LTO is very resource-intensive
	  so it's disabled by default.

config LTO_NONE
	bool "None"
	help
	  Build the kernel normally, without Link Time Optimization (LTO).

config LTO_CLANG_FULL
	bool "Clang Full LTO (EXPERIMENTAL)"
	depends on HAS_LTO_CLANG
	depends on !COMPILE_TEST
	select LTO_CLANG
	help
	  This option enables Clang's full Link Time Optimization (LTO), which
	  allows the compiler to optimize the kernel globally. If you enable
	  this option, the compiler generates LLVM bitcode instead of ELF
	  object files, and the actual compilation from bitcode happens at
	  the LTO link step, which may take several minutes depending on the
	  kernel configuration. More information can be found from LLVM's
	  documentation:

	    https://llvm.org/docs/LinkTimeOptimization.html

	  During link time, this option can use a large amount of RAM, and
	  may take much longer than the ThinLTO option.

config LTO_CLANG_THIN
	bool "Clang ThinLTO (EXPERIMENTAL)"
	depends on HAS_LTO_CLANG && ARCH_SUPPORTS_LTO_CLANG_THIN
	select LTO_CLANG
	help
	  This option enables Clang's ThinLTO, which allows for parallel
	  optimization and faster incremental compiles compared to the
	  CONFIG_LTO_CLANG_FULL option. More information can be found
	  from Clang's documentation:

	    https://clang.llvm.org/docs/ThinLTO.html

	  If unsure, say Y.
endchoice

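# Build-side sketch (illustrative): either Clang LTO mode requires the
# full LLVM toolchain, e.g.:
#
#	make LLVM=1 menuconfig        # pick an option under this choice
#	make LLVM=1 -j$(nproc)
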
config ARCH_SUPPORTS_CFI_CLANG
	bool
	help
	  An architecture should select this option if it can support Clang's
	  Control-Flow Integrity (CFI) checking.

config ARCH_USES_CFI_TRAPS
	bool

config CFI_CLANG
	bool "Use Clang's Control Flow Integrity (CFI)"
	depends on ARCH_SUPPORTS_CFI_CLANG
	depends on $(cc-option,-fsanitize=kcfi)
	help
	  This option enables Clang's forward-edge Control Flow Integrity
	  (CFI) checking, where the compiler injects a runtime check to each
	  indirect function call to ensure the target is a valid function with
	  the correct static type. This restricts possible call targets and
	  makes it more difficult for an attacker to exploit bugs that allow
	  the modification of stored function pointers. More information can be
	  found from Clang's documentation:

	    https://clang.llvm.org/docs/ControlFlowIntegrity.html

config CFI_PERMISSIVE
	bool "Use CFI in permissive mode"
	depends on CFI_CLANG
	help
	  When selected, Control Flow Integrity (CFI) violations result in a
	  warning instead of a kernel panic. This option should only be used
	  for finding indirect call type mismatches during development.

	  If unsure, say N.

config HAVE_ARCH_WITHIN_STACK_FRAMES
	bool
	help
	  An architecture should select this if it can walk the kernel stack
	  frames to determine if an object is part of either the arguments
	  or local variables (i.e. that it excludes saved return addresses,
	  and similar) by implementing an inline arch_within_stack_frames(),
	  which is used by CONFIG_HARDENED_USERCOPY.

config HAVE_CONTEXT_TRACKING_USER
	bool
	help
	  Provide the kernel/user boundary probes necessary for subsystems
	  that need them, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter(), either
	  optimized behind a static key or through the slow path using the
	  TIF_NOHZ flag. Exception handlers must be wrapped as well. Irqs are
	  already protected inside ct_irq_enter()/ct_irq_exit() but preemption
	  or signal handling on irq exit still needs to be protected.

config HAVE_CONTEXT_TRACKING_USER_OFFSTACK
	bool
	help
	  Architecture neither relies on exception_enter()/exception_exit()
	  nor on schedule_user(). Also preempt_schedule_notrace() and
	  preempt_schedule_irq() can't be called in a preemptible section
	  while context tracking is CONTEXT_USER. This feature reflects a sane
	  entry implementation where the following requirements are met on
	  critical entry code, i.e. before user_exit() or after user_enter():

	  - Critical entry code isn't preemptible (or better yet:
	    not interruptible).
	  - No use of RCU read side critical sections, unless ct_nmi_enter()
	    got called.
	  - No use of instrumentation, unless instrumentation_begin() got
	    called.

config HAVE_TIF_NOHZ
	bool
	help
	  Arch relies on TIF_NOHZ and the syscall slow path to implement
	  context tracking calls to user_enter()/user_exit().

config HAVE_VIRT_CPU_ACCOUNTING
	bool

config HAVE_VIRT_CPU_ACCOUNTING_IDLE
	bool
	help
	  Architecture has its own way to account idle CPU time and therefore
	  doesn't implement vtime_account_idle().

config ARCH_HAS_SCALED_CPUTIME
	bool

config HAVE_VIRT_CPU_ACCOUNTING_GEN
	bool
	default y if 64BIT
	help
	  With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit.
	  Before enabling this option, arch code must be audited
	  to ensure there are no races in concurrent read/write of
	  cputime_t. For example, reading/writing 64-bit cputime_t on
	  some 32-bit arches may require multiple accesses, so proper
	  locking is needed to protect against concurrent accesses.

config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_MOVE_PUD
	bool
	help
	  Architectures that select this are able to move page tables at the
	  PUD level. If there are only 3 page table levels, the move effectively
	  happens at the PGD level.

config HAVE_MOVE_PMD
	bool
	help
	  Archs that select this are able to move page tables at the PMD level.

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
	bool

config HAVE_ARCH_HUGE_VMAP
	bool

#
# Archs that select this would be capable of PMD-sized vmaps (i.e.,
# arch_vmap_pmd_supported() returns true). The VM_ALLOW_HUGE_VMAP flag
# must be used to enable allocations to use hugepages.
#
config HAVE_ARCH_HUGE_VMALLOC
	depends on HAVE_ARCH_HUGE_VMAP
	bool

config ARCH_WANT_HUGE_PMD_SHARE
	bool

# Archs that want to use pmd_mkwrite on kernel memory need it defined even
# if there are no userspace memory management features that use it
config ARCH_WANT_KERNEL_PMD_MKWRITE
	bool

config ARCH_WANT_PMD_MKWRITE
	def_bool TRANSPARENT_HUGEPAGE || ARCH_WANT_KERNEL_PMD_MKWRITE

config HAVE_ARCH_SOFT_DIRTY
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config ARCH_WANTS_MODULES_DATA_IN_VMALLOC
	bool
	help
	  For architectures like powerpc/32 which have constraints on module
	  allocation and need to allocate module data outside of the module
	  area.

config ARCH_WANTS_EXECMEM_LATE
	bool
	help
	  For architectures that do not allocate executable memory early on
	  boot, but rather require its initialization late when there is
	  enough entropy for module space randomization, for instance
	  arm64.

config HAVE_IRQ_EXIT_ON_IRQ_STACK
	bool
	help
	  Architecture doesn't only execute the irq handler on the irq stack
	  but also irq_exit(). This way we can process softirqs on this irq
	  stack instead of switching to a new one when we call __do_softirq()
	  at the end of a hardirq.
	  This spares a stack switch and improves cache usage on softirq
	  processing.

config HAVE_SOFTIRQ_ON_OWN_STACK
	bool
	help
	  Architecture provides a function to run __do_softirq() on a
	  separate stack.

config SOFTIRQ_ON_OWN_STACK
	def_bool HAVE_SOFTIRQ_ON_OWN_STACK && !PREEMPT_RT

config ALTERNATE_USER_ADDRESS_SPACE
	bool
	help
	  Architectures set this when the CPU uses separate address
	  spaces for kernel and user space pointers. In this case, the
	  access_ok() check on a __user pointer is skipped.

config PGTABLE_LEVELS
	int
	default 2

config ARCH_HAS_ELF_RANDOMIZE
	bool
	help
	  An architecture supports choosing randomized locations for
	  stack, mmap, brk, and ET_DYN. Defined functions:
	  - arch_mmap_rnd()
	  - arch_randomize_brk()

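# Sketch of the two functions' conventional shapes (see the per-arch
# implementations for the authoritative definitions):
#
#	unsigned long arch_mmap_rnd(void);
#	unsigned long arch_randomize_brk(struct mm_struct *mm);
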
config HAVE_ARCH_MMAP_RND_BITS
	bool
	help
	  An arch should select this symbol if it supports setting a variable
	  number of bits for use in establishing the base address for mmap
	  allocations, has MMU enabled and provides values for both:
	  - ARCH_MMAP_RND_BITS_MIN
	  - ARCH_MMAP_RND_BITS_MAX

config HAVE_EXIT_THREAD
	bool
	help
	  An architecture implements exit_thread.

config ARCH_MMAP_RND_BITS_MIN
	int

config ARCH_MMAP_RND_BITS_MAX
	int

config ARCH_MMAP_RND_BITS_DEFAULT
	int

config ARCH_MMAP_RND_BITS
	int "Number of bits to use for ASLR of mmap base address" if EXPERT
	range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
	default ARCH_MMAP_RND_BITS_DEFAULT if ARCH_MMAP_RND_BITS_DEFAULT
	default ARCH_MMAP_RND_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations. This value will be bounded
	  by the architecture's minimum and maximum supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_bits tunable.

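# Runtime sketch (illustrative): on a kernel built with this option, the
# value can be inspected or tuned like any other sysctl:
#
#	sysctl vm.mmap_rnd_bits
#	sysctl -w vm.mmap_rnd_bits=32
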
config HAVE_ARCH_MMAP_RND_COMPAT_BITS
	bool
	help
	  An arch should select this symbol if it supports running applications
	  in compatibility mode, supports setting a variable number of bits for
	  use in establishing the base address for mmap allocations, has MMU
	  enabled and provides values for both:
	  - ARCH_MMAP_RND_COMPAT_BITS_MIN
	  - ARCH_MMAP_RND_COMPAT_BITS_MAX

config ARCH_MMAP_RND_COMPAT_BITS_MIN
	int

config ARCH_MMAP_RND_COMPAT_BITS_MAX
	int

config ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	int

config ARCH_MMAP_RND_COMPAT_BITS
	int "Number of bits to use for ASLR of mmap base address for compatible applications" if EXPERT
	range ARCH_MMAP_RND_COMPAT_BITS_MIN ARCH_MMAP_RND_COMPAT_BITS_MAX
	default ARCH_MMAP_RND_COMPAT_BITS_DEFAULT if ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	default ARCH_MMAP_RND_COMPAT_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_COMPAT_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations for compatible applications. This
	  value will be bounded by the architecture's minimum and maximum
	  supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_compat_bits tunable.

config HAVE_ARCH_COMPAT_MMAP_BASES
	bool
	help
	  This allows 64-bit applications to invoke the 32-bit mmap() syscall,
	  and vice versa allows 32-bit applications to call the 64-bit mmap().
	  Required for applications doing different bitness syscalls.

config HAVE_PAGE_SIZE_4KB
	bool

config HAVE_PAGE_SIZE_8KB
	bool

config HAVE_PAGE_SIZE_16KB
	bool

config HAVE_PAGE_SIZE_32KB
	bool

config HAVE_PAGE_SIZE_64KB
	bool

config HAVE_PAGE_SIZE_256KB
	bool

choice
	prompt "MMU page size"

config PAGE_SIZE_4KB
	bool "4KiB pages"
	depends on HAVE_PAGE_SIZE_4KB
	help
	  This option selects the standard 4KiB Linux page size, which is the
	  only available option on many architectures. Using the 4KiB page
	  size minimizes memory consumption and is therefore recommended for
	  low-memory systems.
	  Some software that is written for x86 systems makes incorrect
	  assumptions about the page size and only runs on 4KiB pages.

config PAGE_SIZE_8KB
	bool "8KiB pages"
	depends on HAVE_PAGE_SIZE_8KB
	help
	  This option is the only supported page size on a few older
	  processors, and can be slightly faster than 4KiB pages.

config PAGE_SIZE_16KB
	bool "16KiB pages"
	depends on HAVE_PAGE_SIZE_16KB
	help
	  This option is usually a good compromise between memory
	  consumption and performance for typical desktop and server
	  workloads, often saving a level of page table lookups compared
	  to 4KiB pages, as well as reducing TLB pressure and the overhead
	  of per-page operations in the kernel, at the expense of a larger
	  page cache.

config PAGE_SIZE_32KB
	bool "32KiB pages"
	depends on HAVE_PAGE_SIZE_32KB
	help
	  Using the 32KiB page size results in a slightly higher-performance
	  kernel at the price of higher memory consumption compared to
	  16KiB pages. This option is available only on cnMIPS cores.
	  Note that you will need a suitable Linux distribution to
	  support this.

config PAGE_SIZE_64KB
	bool "64KiB pages"
	depends on HAVE_PAGE_SIZE_64KB
	help
	  Using the 64KiB page size results in a slightly higher-performance
	  kernel at the price of much higher memory consumption compared to
	  4KiB or 16KiB pages.
	  This is not suitable for general-purpose workloads, but the
	  better performance may be worth the cost for certain types of
	  supercomputing or database applications that work mostly with
	  large in-memory data rather than small files.

config PAGE_SIZE_256KB
	bool "256KiB pages"
	depends on HAVE_PAGE_SIZE_256KB
	help
	  256KiB pages have little practical value due to their extreme
	  memory usage. The kernel will only be able to run applications
	  that have been compiled with '-zmax-page-size' set to 256KiB
	  (the default is 64KiB or 4KiB on most architectures).

endchoice
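
# Example (hypothetical): an architecture that supports both 4KiB and
# 16KiB MMU pages would make the corresponding choice entries above
# selectable by doing, in its arch/$(SRCARCH)/Kconfig:
#
#	select HAVE_PAGE_SIZE_4KB
#	select HAVE_PAGE_SIZE_16KB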

config PAGE_SIZE_LESS_THAN_64KB
	def_bool y
	depends on !PAGE_SIZE_64KB
	depends on PAGE_SIZE_LESS_THAN_256KB

config PAGE_SIZE_LESS_THAN_256KB
	def_bool y
	depends on !PAGE_SIZE_256KB

config PAGE_SHIFT
	int
	default 12 if PAGE_SIZE_4KB
	default 13 if PAGE_SIZE_8KB
	default 14 if PAGE_SIZE_16KB
	default 15 if PAGE_SIZE_32KB
	default 16 if PAGE_SIZE_64KB
	default 18 if PAGE_SIZE_256KB

# This allows the use of a set of generic functions to determine the mmap
# base address by giving priority to the top-down scheme only if the
# process is not in legacy mode (compat task, unlimited stack size or
# sysctl_legacy_va_layout).
# An architecture that selects this option can provide its own version of:
# - STACK_RND_MASK
config ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
	bool
	depends on MMU
	select ARCH_HAS_ELF_RANDOMIZE

config HAVE_OBJTOOL
	bool

config HAVE_JUMP_LABEL_HACK
	bool

config HAVE_NOINSTR_HACK
	bool

config HAVE_NOINSTR_VALIDATION
	bool

config HAVE_UACCESS_VALIDATION
	bool
	select OBJTOOL

config HAVE_STACK_VALIDATION
	bool
	help
	  Architecture supports objtool compile-time frame pointer rule
	  validation.

config HAVE_RELIABLE_STACKTRACE
	bool
	help
	  Architecture has either a save_stack_trace_tsk_reliable() or an
	  arch_stack_walk_reliable() function, which only returns a stack
	  trace if it can guarantee the trace is reliable.

config HAVE_ARCH_HASH
	bool
	default n
	help
	  If this is set, the architecture provides an <asm/hash.h>
	  file which provides platform-specific implementations of some
	  functions in <linux/hash.h> or fs/namei.c.

config HAVE_ARCH_NVRAM_OPS
	bool

config ISA_BUS_API
	def_bool ISA

#
# ABI hall of shame
#
config CLONE_BACKWARDS
	bool
	help
	  Architecture has tls passed as the 4th argument of clone(2),
	  not the 5th one.

config CLONE_BACKWARDS2
	bool
	help
	  Architecture has the first two arguments of clone(2) swapped.

config CLONE_BACKWARDS3
	bool
	help
	  Architecture has tls passed as the 3rd argument of clone(2),
	  not the 5th one.

config ODD_RT_SIGACTION
	bool
	help
	  Architecture has unusual rt_sigaction(2) arguments.

config OLD_SIGSUSPEND
	bool
	help
	  Architecture has the old sigsuspend(2) syscall, of the
	  one-argument variety.

config OLD_SIGSUSPEND3
	bool
	help
	  Even weirder antique ABI - three-argument sigsuspend(2).

config OLD_SIGACTION
	bool
	help
	  Architecture has the old sigaction(2) syscall. Nope, not the same
	  as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2),
	  but a fairly different variant of sigaction(2), thanks to OSF/1
	  compatibility...

config COMPAT_OLD_SIGACTION
	bool

config COMPAT_32BIT_TIME
	bool "Provide system calls for 32-bit time_t"
	default !64BIT || COMPAT
	help
	  This enables 32-bit time_t support in addition to 64-bit time_t
	  support. This is relevant on all 32-bit architectures, and on
	  64-bit architectures as part of compat syscall handling.

config ARCH_NO_PREEMPT
	bool

config ARCH_SUPPORTS_RT
	bool

config CPU_NO_EFFICIENT_FFS
	def_bool n

config HAVE_ARCH_VMAP_STACK
	def_bool n
	help
	  An arch should select this symbol if it can support kernel stacks
	  in vmalloc space. This means:

	  - vmalloc space must be large enough to hold many kernel stacks.
	    This may rule out many 32-bit architectures.

	  - Stacks in vmalloc space need to work reliably. For example, if
	    vmap page tables are created on demand, either this mechanism
	    needs to work while the stack points to a virtual address with
	    unpopulated page tables or arch code (switch_to() and switch_mm(),
	    most likely) needs to ensure that the stack's page table entries
	    are populated before running on a possibly unpopulated stack.

	  - If the stack overflows into a guard page, something reasonable
	    should happen. The definition of "reasonable" is flexible, but
	    instantly rebooting without logging anything would be unfriendly.

config VMAP_STACK
	default y
	bool "Use a virtually-mapped stack"
	depends on HAVE_ARCH_VMAP_STACK
	depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC
	help
	  Enable this if you want to use virtually-mapped kernel stacks
	  with guard pages. This causes kernel stack overflows to be
	  caught immediately rather than causing difficult-to-diagnose
	  corruption.

	  To use this with software KASAN modes, the architecture must support
	  backing virtual mappings with real shadow memory, and KASAN_VMALLOC
	  must be enabled.

config HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
	def_bool n
	help
	  An arch should select this symbol if it can support kernel stack
	  offset randomization with calls to add_random_kstack_offset()
	  during syscall entry and choose_random_kstack_offset() during
	  syscall exit. Careful removal of -fstack-protector-strong and
	  -fstack-protector should also be applied to the entry code and
	  closely examined, as the artificial stack bump looks like an array
	  to the compiler, so it will attempt to add canary checks regardless
	  of the static branch state.

config RANDOMIZE_KSTACK_OFFSET
	bool "Support for randomizing kernel stack offset on syscall entry" if EXPERT
	default y
	depends on HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
	depends on INIT_STACK_NONE || !CC_IS_CLANG || CLANG_VERSION >= 140000
	help
	  The kernel stack offset can be randomized (after pt_regs) by
	  roughly 5 bits of entropy, frustrating memory corruption
	  attacks that depend on stack address determinism or
	  cross-syscall address exposures.

	  The feature is controlled via the "randomize_kstack_offset=on/off"
	  kernel boot param, and if turned off has zero overhead due to its use
	  of static branches (see JUMP_LABEL).

	  If unsure, say Y.

config RANDOMIZE_KSTACK_OFFSET_DEFAULT
	bool "Default state of kernel stack offset randomization"
	depends on RANDOMIZE_KSTACK_OFFSET
	help
	  Kernel stack offset randomization is controlled by the kernel boot
	  param "randomize_kstack_offset=on/off", and this config chooses the
	  default boot state.

config ARCH_OPTIONAL_KERNEL_RWX
	def_bool n

config ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	def_bool n

config ARCH_HAS_STRICT_KERNEL_RWX
	def_bool n

config STRICT_KERNEL_RWX
	bool "Make kernel text and rodata read-only" if ARCH_OPTIONAL_KERNEL_RWX
	depends on ARCH_HAS_STRICT_KERNEL_RWX
	default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	help
	  If this is set, kernel text and rodata memory will be made read-only,
	  and non-text memory will be made non-executable. This provides
	  protection against certain security exploits (e.g. executing the heap
	  or modifying text).

	  These features are considered standard security practice these days.
	  You should say Y here in almost all cases.

config ARCH_HAS_STRICT_MODULE_RWX
	def_bool n

config STRICT_MODULE_RWX
	bool "Set loadable kernel module data as NX and text as RO" if ARCH_OPTIONAL_KERNEL_RWX
	depends on ARCH_HAS_STRICT_MODULE_RWX && MODULES
	default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	help
	  If this is set, module text and rodata memory will be made read-only,
	  and non-text memory will be made non-executable. This provides
	  protection against certain security exploits (e.g. writing to text).

# select if the architecture provides an asm/dma-direct.h header
config ARCH_HAS_PHYS_TO_DMA
	bool

config HAVE_ARCH_COMPILER_H
	bool
	help
	  An architecture can select this if it provides an
	  asm/compiler.h header that should be included after
	  linux/compiler-*.h in order to override macro definitions that those
	  headers generally provide.

config HAVE_ARCH_PREL32_RELOCATIONS
	bool
	help
	  May be selected by an architecture if it supports place-relative
	  32-bit relocations, both in the toolchain and in the module loader,
	  in which case relative references can be used in special sections
	  for PCI fixups, initcalls etc. which are only half the size on 64-bit
	  architectures, and don't require runtime relocation on relocatable
	  kernels.

config ARCH_USE_MEMREMAP_PROT
	bool

config LOCK_EVENT_COUNTS
	bool "Locking event counts collection"
	depends on DEBUG_FS
	help
	  Enable lightweight counting of various locking-related events
	  in the system with minimal performance impact. This reduces
	  the chance of application behavior change because of timing
	  differences. The counts are reported via debugfs.

# Select if the architecture has support for applying RELR relocations.
config ARCH_HAS_RELR
	bool

config RELR
	bool "Use RELR relocation packing"
	depends on ARCH_HAS_RELR && TOOLS_SUPPORT_RELR
	default y
	help
	  Store the kernel's dynamic relocations in the RELR relocation packing
	  format. Requires a compatible linker (LLD supports this feature), as
	  well as compatible NM and OBJCOPY utilities (llvm-nm and llvm-objcopy
	  are compatible).

config ARCH_HAS_MEM_ENCRYPT
	bool

config ARCH_HAS_CC_PLATFORM
	bool

config HAVE_SPARSE_SYSCALL_NR
	bool
	help
	  An architecture should select this if its syscall numbering is sparse
	  to save space. For example, the MIPS architecture has syscall tables
	  with entries starting at locations 4000, 5000 and 6000. This option
	  turns on syscall-related optimizations for a given architecture.

config ARCH_HAS_VDSO_DATA
	bool

config HAVE_STATIC_CALL
	bool

config HAVE_STATIC_CALL_INLINE
	bool
	depends on HAVE_STATIC_CALL
	select OBJTOOL

config HAVE_PREEMPT_DYNAMIC
	bool

config HAVE_PREEMPT_DYNAMIC_CALL
	bool
	depends on HAVE_STATIC_CALL
	select HAVE_PREEMPT_DYNAMIC
	help
	  An architecture should select this if it can handle the preemption
	  model being selected at boot time using static calls.

	  Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a
	  preemption function will be patched directly.

	  Where an architecture does not select HAVE_STATIC_CALL_INLINE, any
	  call to a preemption function will go through a trampoline, and the
	  trampoline will be patched.

	  It is strongly advised to support inline static calls to avoid any
	  overhead.

config HAVE_PREEMPT_DYNAMIC_KEY
	bool
	depends on HAVE_ARCH_JUMP_LABEL
	select HAVE_PREEMPT_DYNAMIC
	help
	  An architecture should select this if it can handle the preemption
	  model being selected at boot time using static keys.

	  Each preemption function will be given an early return based on a
	  static key. This should have slightly lower overhead than non-inline
	  static calls, as this effectively inlines each trampoline into the
	  start of its callee. This may avoid redundant work, and may
	  integrate better with CFI schemes.

	  This will have greater overhead than using inline static calls, as
	  the call to the preemption function cannot be entirely elided.

config ARCH_WANT_LD_ORPHAN_WARN
	bool
	help
	  An arch should select this symbol once all linker sections are
	  explicitly included, size-asserted, or discarded in the linker
	  scripts. This is important because we never want expected sections
	  to be placed heuristically by the linker, since the locations of
	  such sections can change between linker versions.

config HAVE_ARCH_PFN_VALID
	bool

config ARCH_SUPPORTS_DEBUG_PAGEALLOC
	bool

config ARCH_SUPPORTS_PAGE_TABLE_CHECK
	bool

config ARCH_SPLIT_ARG64
	bool
	help
	  If a 32-bit architecture requires 64-bit arguments to be split into
	  pairs of 32-bit arguments, select this option.

config ARCH_HAS_ELFCORE_COMPAT
	bool

config ARCH_HAS_PARANOID_L1D_FLUSH
	bool

config ARCH_HAVE_TRACE_MMIO_ACCESS
	bool

config DYNAMIC_SIGFRAME
	bool

# Select, if arch has a named attribute group bound to NUMA device nodes.
config HAVE_ARCH_NODE_DEV_GROUP
	bool

config ARCH_HAS_HW_PTE_YOUNG
	bool
	help
	  Architectures that select this option are capable of setting the
	  accessed bit in PTE entries when using them as part of linear address
	  translations. Architectures that require a runtime check should
	  select this option and override arch_has_hw_pte_young().

config ARCH_HAS_NONLEAF_PMD_YOUNG
	bool
	help
	  Architectures that select this option are capable of setting the
	  accessed bit in non-leaf PMD entries when using them as part of linear
	  address translations. Page table walkers that clear the accessed bit
	  may use this capability to reduce their search space.

config ARCH_HAS_KERNEL_FPU_SUPPORT
	bool
	help
	  Architectures that select this option can run floating-point code in
	  the kernel, as described in Documentation/core-api/floating-point.rst.

source "kernel/gcov/Kconfig"

source "scripts/gcc-plugins/Kconfig"

config FUNCTION_ALIGNMENT_4B
	bool

config FUNCTION_ALIGNMENT_8B
	bool

config FUNCTION_ALIGNMENT_16B
	bool

config FUNCTION_ALIGNMENT_32B
	bool

config FUNCTION_ALIGNMENT_64B
	bool

config FUNCTION_ALIGNMENT
	int
	default 64 if FUNCTION_ALIGNMENT_64B
	default 32 if FUNCTION_ALIGNMENT_32B
	default 16 if FUNCTION_ALIGNMENT_16B
	default 8 if FUNCTION_ALIGNMENT_8B
	default 4 if FUNCTION_ALIGNMENT_4B
	default 0
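
# Example (hypothetical): an architecture that wants every function entry
# aligned to a 16-byte boundary would select the corresponding symbol in
# its arch/$(SRCARCH)/Kconfig:
#
#	select FUNCTION_ALIGNMENT_16B
#
# FUNCTION_ALIGNMENT above then evaluates to 16.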

config CC_HAS_MIN_FUNCTION_ALIGNMENT
	# Detect availability of the GCC option -fmin-function-alignment which
	# guarantees minimal alignment for all functions, unlike
	# -falign-functions which the compiler ignores for cold functions.
	def_bool $(cc-option, -fmin-function-alignment=8)

config CC_HAS_SANE_FUNCTION_ALIGNMENT
	# Set if the guaranteed alignment with -fmin-function-alignment is
	# available or extra care is required in the kernel. Clang provides
	# strict alignment always, even with -falign-functions.
	def_bool CC_HAS_MIN_FUNCTION_ALIGNMENT || CC_IS_CLANG

config ARCH_NEED_CMPXCHG_1_EMU
	bool

endmenu