============================
LINUX KERNEL MEMORY BARRIERS
============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.ibm.com>
    Will Deacon <will.deacon@arm.com>
    Peter Zijlstra <peterz@infradead.org>

==========
DISCLAIMER
==========

This document is not a specification; it is intentionally (for the sake of
brevity) and unintentionally (due to being human) incomplete. This document is
meant as a guide to using the various memory barriers provided by Linux, but
in case of any doubt (and there are many) please ask. Some doubts may be
resolved by referring to the formal memory consistency model and related
documentation at tools/memory-model/. Nevertheless, even this memory
model should be viewed as the collective opinion of its maintainers rather
than as an infallible oracle.

To repeat, this document is not a specification of what Linux expects from
hardware.

The purpose of this document is twofold:

 (1) to specify the minimum functionality that one can rely on for any
     particular barrier, and

 (2) to provide a guide as to how to use the barriers that are available.

Note that an architecture can provide more than the minimum requirement
for any particular barrier, but if the architecture provides less than
that, that architecture is incorrect.

Note also that it is possible that a barrier may be a no-op for an
architecture because the way that arch works renders an explicit barrier
unnecessary in that case.


========
CONTENTS
========

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers (historical).
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Multicopy atomicity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.

 (*) Implicit kernel memory barriers.

     - Lock acquisition functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU acquiring barrier effects.

     - Acquires vs memory accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.
     - Virtual Machine Guests.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

                    :                :
                    :                :
                    :                :
        +-------+   :   +--------+   :   +-------+
        |       |   :   |        |   :   |       |
        |       |   :   |        |   :   |       |
        | CPU 1 |<----->| Memory |<----->| CPU 2 |
        |       |   :   |        |   :   |       |
        |       |   :   |        |   :   |       |
        +-------+   :   +--------+   :   +-------+
            ^       :       ^        :       ^
            |       :       |        :       |
            |       :       |        :       |
            |       :       v        :       |
            |       :   +--------+   :       |
            |       :   |        |   :       |
            |       :   |        |   :       |
            +---------->| Device |<----------+
                    :   |        |   :
                    :   |        |   :
                    :   +--------+   :
                    :                :

Each CPU executes a program that generates memory access operations. In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained. Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and rest of the system (the dotted lines).


For example, consider the following sequence of events:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1; B == 2 }
        A = 3;                x = B;
        B = 4;                y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

        STORE A=3,   STORE B=4,    y=LOAD A->3,  x=LOAD B->4
        STORE A=3,   STORE B=4,    x=LOAD B->4,  y=LOAD A->3
        STORE A=3,   y=LOAD A->3,  STORE B=4,    x=LOAD B->4
        STORE A=3,   y=LOAD A->3,  x=LOAD B->2,  STORE B=4
        STORE A=3,   x=LOAD B->2,  STORE B=4,    y=LOAD A->3
        STORE A=3,   x=LOAD B->2,  y=LOAD A->3,  STORE B=4
        STORE B=4,   STORE A=3,    y=LOAD A->3,  x=LOAD B->4
        STORE B=4, ...
        ...

and can thus result in four different combinations of values:

        x == 2, y == 1
        x == 2, y == 3
        x == 4, y == 1
        x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;                Q = P;
        P = &B;               D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2. At the end of the sequence, any of the
following results are possible:

        (Q == &A) and (D == 1)
        (Q == &B) and (D == 2)
        (Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important. For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D). To read internal register 5, the following code might then
be used:

        *A = 5;
        x = *D;

but this might show up as either of the following two sequences:

        STORE *A = 5, x = LOAD *D
        x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.

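In a real Linux driver this hazard is usually avoided by mapping the device
with ioremap() and using the MMIO accessors, which are ordered with respect
to each other on the issuing CPU (see the "Kernel I/O barrier effects"
section later in this document). The following is only a rough sketch of
that approach; the "foo" device and its register offsets are hypothetical:

        #define FOO_ADDR_REG    0x00    /* hypothetical address port */
        #define FOO_DATA_REG    0x04    /* hypothetical data port */

        static u32 foo_read_internal_reg(void __iomem *base, u32 reg)
        {
                writel(reg, base + FOO_ADDR_REG);  /* select internal register */
                return readl(base + FOO_DATA_REG); /* then read its value */
        }
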

GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself. This means that for:

        Q = READ_ONCE(P); D = READ_ONCE(*Q);

     the CPU will issue the following memory operations:

        Q = LOAD P, D = LOAD *Q

     and always in that order. However, on DEC Alpha, READ_ONCE() also
     emits a memory-barrier instruction, so that a DEC Alpha CPU will
     instead issue the following memory operations:

        Q = LOAD P, MEMORY_BARRIER, D = LOAD *Q, MEMORY_BARRIER

     Whether on DEC Alpha or not, the READ_ONCE() also prevents compiler
     mischief.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU. This means that for:

        a = READ_ONCE(*X); WRITE_ONCE(*X, b);

     the CPU will only issue the following sequence of memory operations:

        a = LOAD *X, STORE *X = b

     And for:

        WRITE_ONCE(*X, c); d = READ_ONCE(*X);

     the CPU will only issue:

        STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE(). Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the COMPILER BARRIER section. (A short illustration follows this list.)

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given. This means that for:

        X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

        X = LOAD *A, Y = LOAD *B, STORE *D = Z
        X = LOAD *A, STORE *D = Z, Y = LOAD *B
        Y = LOAD *B, X = LOAD *A, STORE *D = Z
        Y = LOAD *B, STORE *D = Z, X = LOAD *A
        STORE *D = Z, X = LOAD *A, Y = LOAD *B
        STORE *D = Z, Y = LOAD *B, X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded. This means that for:

        X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

        X = LOAD *A; Y = LOAD *(A + 4);
        Y = LOAD *(A + 4); X = LOAD *A;
        {X, Y} = LOAD {*A, *(A + 4) };

     And for:

        *A = X; *(A + 4) = Y;

     we may get any of:

        STORE *A = X; STORE *(A + 4) = Y;
        STORE *(A + 4) = Y; STORE *A = X;
        STORE {*A, *(A + 4) } = {X, Y};

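To illustrate the first point above, consider a CPU polling a flag that is
set by another CPU. This is only a sketch, and the "flag" variable is
hypothetical:

        /*
         * BUG: without READ_ONCE(), the compiler may load "flag" just once,
         * hoist the test out of the loop and spin forever, or tear or fuse
         * the accesses in other surprising ways.
         */
        while (!flag)
                cpu_relax();

        /* Forcing a fresh load on each iteration keeps the loop honest: */
        while (!READ_ONCE(flag))
                cpu_relax();

The writer side would similarly use WRITE_ONCE(flag, 1) so that the compiler
cannot tear or otherwise transform the store.
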
And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences. Do not attempt to use bitfields to synchronize parallel
     algorithms.

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock. If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field. (See the sketch
     following this list.)

 (*) These guarantees apply only to properly aligned and sized scalar
     variables. "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long". "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively. Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6). The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

        memory location
                either an object of scalar type, or a maximal sequence
                of adjacent bit-fields all having nonzero width

                NOTE 1: Two threads of execution can update and access
                separate memory locations without interfering with
                each other.

                NOTE 2: A bit-field and an adjacent non-bit-field member
                are in separate memory locations. The same applies
                to two bit-fields, if one is declared inside a nested
                structure declaration and the other is not, or if the two
                are separated by a zero-length bit-field declaration,
                or if they are separated by a non-bit-field member
                declaration. It is not safe to concurrently update two
                bit-fields in the same structure if all members declared
                between them are also bit-fields, no matter what the
                sizes of those intervening bit-fields happen to be.
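
To illustrate the bitfield hazard described above, consider the following
sketch; the structure, the locks and the field names are all hypothetical:

        struct foo {
                int a : 4;      /* nominally protected by lock_a */
                int b : 4;      /* nominally protected by lock_b */
        } f;

        CPU 1 (holding lock_a)        CPU 2 (holding lock_b)
        ======================        ======================
        f.a = 1;                      f.b = 2;

Because 'a' and 'b' share one memory location, each assignment is compiled
as a load of the containing word, an update of one field, and a store of the
whole word back, so either CPU's store can silently undo the other CPU's
update. The fix is to protect both fields with the same lock, or to separate
them into distinct memory locations (for example by making them plain ints
or by placing a zero-length bitfield between them).
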

=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions. They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching. Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses. All stores _before_ a write barrier
     will occur _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier. In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated after the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive. A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency. If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required. See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier. It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and both smp_load_acquire()
     and smp_cond_load_acquire() operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier. It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system. RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier. In addition, a RELEASE+ACQUIRE pair is
     -not- guaranteed to act as a full memory barrier. However, after an
     ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible. In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.

A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
RELEASE variants in addition to fully-ordered and relaxed (no barrier
semantics) definitions. For compound atomics performing both a load and a
store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
only to the store portion of the operation.

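As a rough sketch of how these operations are typically used (the variable
names here are illustrative only), a producer can publish data with
smp_store_release() and a consumer can pick it up with smp_load_acquire():

        CPU 1 (producer)              CPU 2 (consumer)
        ===============               ===============
        obj->data = 42;
        smp_store_release(&obj->ready, 1);
                                      r1 = smp_load_acquire(&obj->ready);
                                      if (r1)
                                              do_something(obj->data);

If CPU 2's acquire load observes the value written by CPU 1's release store,
then CPU 2 is also guaranteed to observe the earlier store to obj->data.
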
Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device. If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees. Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system. The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses. CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

        [*] For information on bus mastering DMA and coherency please read:

            Documentation/driver-api/pci/pci.rst
            Documentation/core-api/dma-api-howto.rst
            Documentation/core-api/dma-api.rst


DATA DEPENDENCY BARRIERS (HISTORICAL)
-------------------------------------

As of v4.15 of the Linux kernel, an smp_mb() was added to READ_ONCE() for
DEC Alpha, which means that about the only people who need to pay attention
to this section are those working on DEC Alpha architecture-specific code
and those working on READ_ONCE() itself. For those who need it, and for
those who are interested in the history, here is the story of
data-dependency barriers.

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed. To illustrate, consider the
following sequence of events:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

        (Q == &A) implies (D == 1)
        (Q == &B) implies (D == 4)

But! CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

        (Q == &B) and (D == 2) ????

While this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              <data dependency barrier>
                              D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.


[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines. The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line. Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


A data-dependency barrier is not required to order dependent writes
because the CPUs that the Linux kernel supports don't do writes
until they are certain (1) that the write will actually happen, (2)
of the location of the write, and (3) of the value to be written.
But please carefully read the "CONTROL DEPENDENCIES" section and the
Documentation/RCU/rcu_dereference.rst file: The compiler can and does
break dependencies in a great many highly creative ways.

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              WRITE_ONCE(*Q, 5);

Therefore, no data-dependency barrier is required to order the read into
Q with the store into *Q. In other words, this outcome is prohibited,
even without a data-dependency barrier:

        (Q == &B) && (B == 4)

Please note that this pattern should be rare. After all, the whole point
of dependency ordering is to -prevent- writes to the data structure, along
with the expensive cache misses associated with those writes. This pattern
can be used to record rare error conditions and the like, and the CPUs'
naturally occurring ordering prevents such records from being lost.


Note well that the ordering provided by a data dependency is local to
the CPU containing it. See the section on "Multicopy atomicity" for
more information.


The data dependency barrier is very important to the RCU system,
for example. See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h. This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.

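As a brief sketch of that usage (the pointer "gp" and structure "foo" are
hypothetical), the writer fully initialises a new object before publishing
it with rcu_assign_pointer(), and readers fetch the pointer with
rcu_dereference() so that the dependent accesses see the initialised fields:

        /* Writer (publisher); gp is an RCU-protected global pointer. */
        struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL); /* error handling omitted */

        p->a = 1;
        p->b = 2;
        rcu_assign_pointer(gp, p);  /* orders the initialisation before publication */

        /* Reader, within an RCU read-side critical section. */
        rcu_read_lock();
        p = rcu_dereference(gp);    /* dependency-ordered load of the pointer */
        if (p)
                do_something(p->a, p->b);
        rcu_read_unlock();
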
See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

Control dependencies can be a bit tricky because current compilers do
not understand them. The purpose of this section is to help you prevent
the compiler's ignorance from breaking your code.

A load-load control dependency requires a full read memory barrier, not
simply a data dependency barrier to make it work correctly. Consider the
following bit of code:

        q = READ_ONCE(a);
        if (q) {
                <data dependency barrier>  /* BUG: No data dependency!!! */
                p = READ_ONCE(b);
        }

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a. In such a
case what's actually required is:

        q = READ_ONCE(a);
        if (q) {
                <read barrier>
                p = READ_ONCE(b);
        }

However, stores are not speculated. This means that ordering -is- provided
for load-store control dependencies, as in the following example:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
        }

Control dependencies pair normally with other types of barriers.
That said, please note that neither READ_ONCE() nor WRITE_ONCE()
are optional! Without the READ_ONCE(), the compiler might combine the
load from 'a' with other loads from 'a'. Without the WRITE_ONCE(),
the compiler might combine the store to 'b' with other stores to 'b'.
Either can result in highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

        q = a;
        b = 1;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the READ_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

        q = READ_ONCE(a);
        if (q) {
                barrier();
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                barrier();
                WRITE_ONCE(b, 1);
                do_something_else();
        }

Unfortunately, current compilers will transform this as follows at high
optimization levels:

        q = READ_ONCE(a);
        barrier();
        WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
        if (q) {
                /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
                do_something();
        } else {
                /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
                do_something_else();
        }

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

        q = READ_ONCE(a);
        if (q) {
                smp_store_release(&b, 1);
                do_something();
        } else {
                smp_store_release(&b, 1);
                do_something_else();
        }

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional. For example:

        q = READ_ONCE(a);
        if (q % MAX) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

        q = READ_ONCE(a);
        WRITE_ONCE(b, 2);
        do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'. It is
tempting to add a barrier(), but this does not help. The conditional
is gone, and the barrier won't bring it back. Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

        q = READ_ONCE(a);
        BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
        if (q % MAX) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

Please note once again that the stores to 'b' differ. If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation. Consider this example:

        q = READ_ONCE(a);
        if (q || 1 > 0)
                WRITE_ONCE(b, 1);

Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows,
defeating the control dependency:

        q = READ_ONCE(a);
        WRITE_ONCE(b, 1);

This example underscores the need to ensure that the compiler cannot
out-guess your code. More generally, although READ_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question. In particular, they do
not necessarily apply to code following the if-statement:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
        } else {
                WRITE_ONCE(b, 2);
        }
        WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from 'a'. */

It is tempting to argue that there in fact is ordering because the
compiler cannot reorder volatile accesses and also cannot reorder
the writes to 'b' with the condition. Unfortunately for this line
of reasoning, the compiler might compile the two writes to 'b' as
conditional-move instructions, as in this fanciful pseudo-assembly
language:

        ld r1,a
        cmp r1,$0
        cmov,ne r4,$1
        cmov,eq r4,$2
        st r4,b
        st $1,c

A weakly ordered CPU would have no dependency of any sort between the load
from 'a' and the store to 'c'. The control dependencies would extend
only to the pair of cmov instructions and the store depending on them.
In short, control dependencies apply only to the stores in the then-clause
and else-clause of the if-statement in question (including functions
invoked by those two clauses), not to code following that if-statement.


Note well that the ordering provided by a control dependency is local
to the CPU containing it. See the section on "Multicopy atomicity"
for more information.


In summary:

 (*) Control dependencies can order prior loads against later stores.
     However, they do -not- guarantee any other sort of ordering:
     Not prior loads against later loads, nor prior stores against
     later anything. If you need these other forms of ordering,
     use smp_rmb(), smp_wmb(), or, in the case of prior stores and
     later loads, smp_mb().

 (*) If both legs of the "if" statement begin with identical stores to
     the same variable, then those stores must be ordered, either by
     preceding both of them with smp_mb() or by using smp_store_release()
     to carry out the stores. Please note that it is -not- sufficient
     to use barrier() at the beginning of each leg of the "if" statement
     because, as shown by the example above, optimizing compilers can
     destroy the control dependency while respecting the letter of the
     barrier() law.
Paul E. McKenney | 9b2b3bf | 2014-02-12 20:19:47 -0800 | [diff] [blame] | 892 | |
Peter Zijlstra | 18c03c6 | 2013-12-11 13:59:06 -0800 | [diff] [blame] | 893 | (*) Control dependencies require at least one run-time conditional |
Paul E. McKenney | 586dd56 | 2014-02-11 12:28:06 -0800 | [diff] [blame] | 894 | between the prior load and the subsequent store, and this |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 895 | conditional must involve the prior load. If the compiler is able |
| 896 | to optimize the conditional away, it will have also optimized |
Linus Torvalds | 105ff3c | 2015-11-03 17:22:17 -0800 | [diff] [blame] | 897 | away the ordering. Careful use of READ_ONCE() and WRITE_ONCE() |
| 898 | can help to preserve the needed conditional. |
Peter Zijlstra | 18c03c6 | 2013-12-11 13:59:06 -0800 | [diff] [blame] | 899 | |
| 900 | (*) Control dependencies require that the compiler avoid reordering the |
Linus Torvalds | 105ff3c | 2015-11-03 17:22:17 -0800 | [diff] [blame] | 901 | dependency into nonexistence. Careful use of READ_ONCE() or |
| 902 | atomic{,64}_read() can help to preserve your control dependency. |
Paul E. McKenney | 895f554 | 2016-01-06 14:23:03 -0800 | [diff] [blame] | 903 | Please see the COMPILER BARRIER section for more information. |
Peter Zijlstra | 18c03c6 | 2013-12-11 13:59:06 -0800 | [diff] [blame] | 904 | |
Paul E. McKenney | ebff09a | 2016-06-15 16:08:17 -0700 | [diff] [blame] | 905 | (*) Control dependencies apply only to the then-clause and else-clause |
| 906 | of the if-statement containing the control dependency, including |
| 907 | any functions that these two clauses call. Control dependencies |
| 908 | do -not- apply to code following the if-statement containing the |
| 909 | control dependency. |
| 910 | |
Paul E. McKenney | ff38281 | 2015-02-17 10:00:06 -0800 | [diff] [blame] | 911 | (*) Control dependencies pair normally with other types of barriers. |
| 912 | |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 913 | (*) Control dependencies do -not- provide multicopy atomicity. If you |
| 914 | need all the CPUs to see a given store at the same time, use smp_mb(). |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 915 | |
Paul E. McKenney | c8241f8 | 2016-12-13 16:42:32 -0800 | [diff] [blame] | 916 | (*) Compilers do not understand control dependencies. It is therefore |
| 917 | your job to ensure that they do not break your code. |
| 918 | |
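As a minimal sketch of the points above about ordering prior loads against
later stores, and about identical stores in both legs (the shared variables
'a' and 'b' and the do_something*() calls are purely illustrative):

	/*
	 * The control dependency orders the load from 'a' against
	 * whichever store to 'b' is executed, but against nothing
	 * following the "if" statement.
	 */
	q = READ_ONCE(a);
	if (q)
		WRITE_ONCE(b, 1);
	else
		WRITE_ONCE(b, 2);

	/*
	 * If both legs begin with an identical store, use
	 * smp_store_release() (or precede both stores with smp_mb())
	 * so that the store stays ordered after the load from 'a'
	 * even if the compiler hoists it out of the "if":
	 */
	q = READ_ONCE(a);
	if (q) {
		smp_store_release(&b, 1);
		do_something();
	} else {
		smp_store_release(&b, 1);
		do_something_else();
	}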
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 919 | |
| 920 | SMP BARRIER PAIRING |
| 921 | ------------------- |
| 922 | |
| 923 | When dealing with CPU-CPU interactions, certain types of memory barrier should |
| 924 | always be paired. A lack of appropriate pairing is almost certainly an error. |
| 925 | |
Paul E. McKenney | ff38281 | 2015-02-17 10:00:06 -0800 | [diff] [blame] | 926 | General barriers pair with each other, though they also pair with most |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 927 | other types of barriers, albeit without multicopy atomicity. An acquire |
| 928 | barrier pairs with a release barrier, but both may also pair with other |
| 929 | barriers, including of course general barriers. A write barrier pairs |
| 930 | with a data dependency barrier, a control dependency, an acquire barrier, |
| 931 | a release barrier, a read barrier, or a general barrier. Similarly a |
| 932 | read barrier, control dependency, or a data dependency barrier pairs |
| 933 | with a write barrier, an acquire barrier, a release barrier, or a |
| 934 | general barrier: |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 935 | |
Paul E. McKenney | 2ecf810 | 2013-12-11 13:59:04 -0800 | [diff] [blame] | 936 | CPU 1 CPU 2 |
| 937 | =============== =============== |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 938 | WRITE_ONCE(a, 1); |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 939 | <write barrier> |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 940 | WRITE_ONCE(b, 2); x = READ_ONCE(b); |
Paul E. McKenney | 2ecf810 | 2013-12-11 13:59:04 -0800 | [diff] [blame] | 941 | <read barrier> |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 942 | y = READ_ONCE(a); |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 943 | |
| 944 | Or: |
| 945 | |
Paul E. McKenney | 2ecf810 | 2013-12-11 13:59:04 -0800 | [diff] [blame] | 946 | CPU 1 CPU 2 |
| 947 | =============== =============================== |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 948 | a = 1; |
| 949 | <write barrier> |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 950 | WRITE_ONCE(b, &a); x = READ_ONCE(b); |
Paul E. McKenney | 2ecf810 | 2013-12-11 13:59:04 -0800 | [diff] [blame] | 951 | <data dependency barrier> |
| 952 | y = *x; |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 953 | |
Paul E. McKenney | ff38281 | 2015-02-17 10:00:06 -0800 | [diff] [blame] | 954 | Or even: |
| 955 | |
| 956 | CPU 1 CPU 2 |
| 957 | =============== =============================== |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 958 | r1 = READ_ONCE(y); |
Paul E. McKenney | ff38281 | 2015-02-17 10:00:06 -0800 | [diff] [blame] | 959 | <general barrier> |
Scott Tsai | d92f842 | 2017-09-20 02:16:00 +0800 | [diff] [blame] | 960 | WRITE_ONCE(x, 1); if (r2 = READ_ONCE(x)) { |
Paul E. McKenney | ff38281 | 2015-02-17 10:00:06 -0800 | [diff] [blame] | 961 | <implicit control dependency> |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 962 | WRITE_ONCE(y, 1); |
Paul E. McKenney | ff38281 | 2015-02-17 10:00:06 -0800 | [diff] [blame] | 963 | } |
| 964 | |
| 965 | assert(r1 == 0 || r2 == 0); |
| 966 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 967 | Basically, the read barrier always has to be there, even though it can be of |
| 968 | the "weaker" type. |
| 969 | |
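For example, a C sketch of the first pairing above (the variable and function
names are illustrative; 'a' and 'b' are shared and initially zero):

	int a, b;

	void writer(void)		/* runs on CPU 1 */
	{
		WRITE_ONCE(a, 1);
		smp_wmb();		/* pairs with the reader's smp_rmb() */
		WRITE_ONCE(b, 2);
	}

	void reader(void)		/* runs on CPU 2 */
	{
		int x, y;

		x = READ_ONCE(b);
		smp_rmb();		/* pairs with the writer's smp_wmb() */
		y = READ_ONCE(a);	/* if x == 2, y is guaranteed to be 1 */
	}
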
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 970 | [!] Note that the stores before the write barrier would normally be expected to |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 971 | match the loads after the read barrier or the data dependency barrier, and vice |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 972 | versa: |
| 973 | |
Paul E. McKenney | 2ecf810 | 2013-12-11 13:59:04 -0800 | [diff] [blame] | 974 | CPU 1 CPU 2 |
| 975 | =================== =================== |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 976 | WRITE_ONCE(a, 1); }---- --->{ v = READ_ONCE(c); |
| 977 | WRITE_ONCE(b, 2); } \ / { w = READ_ONCE(d); |
Paul E. McKenney | 2ecf810 | 2013-12-11 13:59:04 -0800 | [diff] [blame] | 978 | <write barrier> \ <read barrier> |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 979 | WRITE_ONCE(c, 3); } / \ { x = READ_ONCE(a); |
| 980 | WRITE_ONCE(d, 4); }---- --->{ y = READ_ONCE(b); |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 981 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 982 | |
| 983 | EXAMPLES OF MEMORY BARRIER SEQUENCES |
| 984 | ------------------------------------ |
| 985 | |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 986 | Firstly, write barriers act as partial orderings on store operations. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 987 | Consider the following sequence of events: |
| 988 | |
| 989 | CPU 1 |
| 990 | ======================= |
| 991 | STORE A = 1 |
| 992 | STORE B = 2 |
| 993 | STORE C = 3 |
| 994 | <write barrier> |
| 995 | STORE D = 4 |
| 996 | STORE E = 5 |
| 997 | |
| 998 | This sequence of events is committed to the memory coherence system in an order |
| 999 | that the rest of the system might perceive as the unordered set of { STORE A, |
Adrian Bunk | 80f7228 | 2006-06-30 18:27:16 +0200 | [diff] [blame] | 1000 | STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1001 | }: |
| 1002 | |
| 1003 | +-------+ : : |
| 1004 | | | +------+ |
| 1005 | | |------>| C=3 | } /\ |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 1006 | | | : +------+ }----- \ -----> Events perceptible to |
| 1007 | | | : | A=1 | } \/ the rest of the system |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1008 | | | : +------+ } |
| 1009 | | CPU 1 | : | B=2 | } |
| 1010 | | | +------+ } |
| 1011 | | | wwwwwwwwwwwwwwww } <--- At this point the write barrier |
| 1012 | | | +------+ } requires all stores prior to the |
| 1013 | | | : | E=5 | } barrier to be committed before |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 1014 | | | : +------+ } further stores may take place |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1015 | | |------>| D=4 | } |
| 1016 | | | +------+ |
| 1017 | +-------+ : : |
| 1018 | | |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1019 | | Sequence in which stores are committed to the |
| 1020 | | memory system by CPU 1 |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1021 | V |
| 1022 | |
| 1023 | |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 1024 | Secondly, data dependency barriers act as partial orderings on data-dependent |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1025 | loads. Consider the following sequence of events: |
| 1026 | |
| 1027 | CPU 1 CPU 2 |
| 1028 | ======================= ======================= |
David Howells | c14038c | 2006-04-10 22:54:24 -0700 | [diff] [blame] | 1029 | { B = 7; X = 9; Y = 8; C = &Y } |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1030 | STORE A = 1 |
| 1031 | STORE B = 2 |
| 1032 | <write barrier> |
| 1033 | STORE C = &B LOAD X |
| 1034 | STORE D = 4 LOAD C (gets &B) |
| 1035 | LOAD *C (reads B) |
| 1036 | |
| 1037 | Without intervention, CPU 2 may perceive the events on CPU 1 in some |
| 1038 | effectively random order, despite the write barrier issued by CPU 1: |
| 1039 | |
| 1040 | +-------+ : : : : |
| 1041 | | | +------+ +-------+ | Sequence of update |
| 1042 | | |------>| B=2 |----- --->| Y->8 | | of perception on |
| 1043 | | | : +------+ \ +-------+ | CPU 2 |
| 1044 | | CPU 1 | : | A=1 | \ --->| C->&Y | V |
| 1045 | | | +------+ | +-------+ |
| 1046 | | | wwwwwwwwwwwwwwww | : : |
| 1047 | | | +------+ | : : |
| 1048 | | | : | C=&B |--- | : : +-------+ |
| 1049 | | | : +------+ \ | +-------+ | | |
| 1050 | | |------>| D=4 | ----------->| C->&B |------>| | |
| 1051 | | | +------+ | +-------+ | | |
| 1052 | +-------+ : : | : : | | |
| 1053 | | : : | | |
| 1054 | | : : | CPU 2 | |
| 1055 | | +-------+ | | |
| 1056 | Apparently incorrect ---> | | B->7 |------>| | |
| 1057 | perception of B (!) | +-------+ | | |
| 1058 | | : : | | |
| 1059 | | +-------+ | | |
| 1060 | The load of X holds ---> \ | X->9 |------>| | |
| 1061 | up the maintenance \ +-------+ | | |
| 1062 | of coherence of B ----->| B->2 | +-------+ |
| 1063 | +-------+ |
| 1064 | : : |
| 1065 | |
| 1066 | |
| 1067 | In the above example, CPU 2 perceives that B is 7, despite the load of *C |
Paolo Ornati | 670e9f3 | 2006-10-03 22:57:56 +0200 | [diff] [blame] | 1068 | (which would be B) coming after the LOAD of C. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1069 | |
| 1070 | If, however, a data dependency barrier were to be placed between the load of C |
David Howells | c14038c | 2006-04-10 22:54:24 -0700 | [diff] [blame] | 1071 | and the load of *C (ie: B) on CPU 2: |
| 1072 | |
| 1073 | CPU 1 CPU 2 |
| 1074 | ======================= ======================= |
| 1075 | { B = 7; X = 9; Y = 8; C = &Y } |
| 1076 | STORE A = 1 |
| 1077 | STORE B = 2 |
| 1078 | <write barrier> |
| 1079 | STORE C = &B LOAD X |
| 1080 | STORE D = 4 LOAD C (gets &B) |
| 1081 | <data dependency barrier> |
| 1082 | LOAD *C (reads B) |
| 1083 | |
| 1084 | then the following will occur: |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1085 | |
| 1086 | +-------+ : : : : |
| 1087 | | | +------+ +-------+ |
| 1088 | | |------>| B=2 |----- --->| Y->8 | |
| 1089 | | | : +------+ \ +-------+ |
| 1090 | | CPU 1 | : | A=1 | \ --->| C->&Y | |
| 1091 | | | +------+ | +-------+ |
| 1092 | | | wwwwwwwwwwwwwwww | : : |
| 1093 | | | +------+ | : : |
| 1094 | | | : | C=&B |--- | : : +-------+ |
| 1095 | | | : +------+ \ | +-------+ | | |
| 1096 | | |------>| D=4 | ----------->| C->&B |------>| | |
| 1097 | | | +------+ | +-------+ | | |
| 1098 | +-------+ : : | : : | | |
| 1099 | | : : | | |
| 1100 | | : : | CPU 2 | |
| 1101 | | +-------+ | | |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1102 | | | X->9 |------>| | |
| 1103 | | +-------+ | | |
| 1104 | Makes sure all effects ---> \ ddddddddddddddddd | | |
| 1105 | prior to the store of C \ +-------+ | | |
| 1106 | are perceptible to ----->| B->2 |------>| | |
| 1107 | subsequent loads +-------+ | | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1108 | : : +-------+ |
| 1109 | |
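In present-day kernel C, the reader's ordering would usually be obtained by
letting READ_ONCE() carry the address dependency rather than by writing an
explicit barrier; a simplified sketch of the same scenario (initial values
and the unrelated loads omitted, variable declarations added):

	int B;
	int *C;

	/* CPU 1 */
	WRITE_ONCE(B, 2);
	smp_wmb();
	WRITE_ONCE(C, &B);

	/* CPU 2: READ_ONCE() carries the address dependency */
	int *p = READ_ONCE(C);
	int d = READ_ONCE(*p);	/* if p == &B, d is guaranteed to be 2 */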
| 1110 | |
And thirdly, a read barrier acts as a partial ordering on loads.  Consider the
following sequence of events:
| 1113 | |
| 1114 | CPU 1 CPU 2 |
| 1115 | ======================= ======================= |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1116 | { A = 0, B = 9 } |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1117 | STORE A=1 |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1118 | <write barrier> |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1119 | STORE B=2 |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1120 | LOAD B |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1121 | LOAD A |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1122 | |
| 1123 | Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in |
| 1124 | some effectively random order, despite the write barrier issued by CPU 1: |
| 1125 | |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1126 | +-------+ : : : : |
| 1127 | | | +------+ +-------+ |
| 1128 | | |------>| A=1 |------ --->| A->0 | |
| 1129 | | | +------+ \ +-------+ |
| 1130 | | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 | |
| 1131 | | | +------+ | +-------+ |
| 1132 | | |------>| B=2 |--- | : : |
| 1133 | | | +------+ \ | : : +-------+ |
| 1134 | +-------+ : : \ | +-------+ | | |
| 1135 | ---------->| B->2 |------>| | |
| 1136 | | +-------+ | CPU 2 | |
| 1137 | | | A->0 |------>| | |
| 1138 | | +-------+ | | |
| 1139 | | : : +-------+ |
| 1140 | \ : : |
| 1141 | \ +-------+ |
| 1142 | ---->| A->1 | |
| 1143 | +-------+ |
| 1144 | : : |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1145 | |
| 1146 | |
David Howells | 6bc3927 | 2006-06-25 05:49:22 -0700 | [diff] [blame] | 1147 | If, however, a read barrier were to be placed between the load of B and the |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1148 | load of A on CPU 2: |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1149 | |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1150 | CPU 1 CPU 2 |
| 1151 | ======================= ======================= |
| 1152 | { A = 0, B = 9 } |
| 1153 | STORE A=1 |
| 1154 | <write barrier> |
| 1155 | STORE B=2 |
| 1156 | LOAD B |
| 1157 | <read barrier> |
| 1158 | LOAD A |
| 1159 | |
| 1160 | then the partial ordering imposed by CPU 1 will be perceived correctly by CPU |
| 1161 | 2: |
| 1162 | |
| 1163 | +-------+ : : : : |
| 1164 | | | +------+ +-------+ |
| 1165 | | |------>| A=1 |------ --->| A->0 | |
| 1166 | | | +------+ \ +-------+ |
| 1167 | | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 | |
| 1168 | | | +------+ | +-------+ |
| 1169 | | |------>| B=2 |--- | : : |
| 1170 | | | +------+ \ | : : +-------+ |
| 1171 | +-------+ : : \ | +-------+ | | |
| 1172 | ---------->| B->2 |------>| | |
| 1173 | | +-------+ | CPU 2 | |
| 1174 | | : : | | |
| 1175 | | : : | | |
| 1176 | At this point the read ----> \ rrrrrrrrrrrrrrrrr | | |
| 1177 | barrier causes all effects \ +-------+ | | |
| 1178 | prior to the storage of B ---->| A->1 |------>| | |
| 1179 | to be perceptible to CPU 2 +-------+ | | |
| 1180 | : : +-------+ |
| 1181 | |
| 1182 | |
| 1183 | To illustrate this more completely, consider what could happen if the code |
| 1184 | contained a load of A either side of the read barrier: |
| 1185 | |
| 1186 | CPU 1 CPU 2 |
| 1187 | ======================= ======================= |
| 1188 | { A = 0, B = 9 } |
| 1189 | STORE A=1 |
| 1190 | <write barrier> |
| 1191 | STORE B=2 |
| 1192 | LOAD B |
| 1193 | LOAD A [first load of A] |
| 1194 | <read barrier> |
| 1195 | LOAD A [second load of A] |
| 1196 | |
| 1197 | Even though the two loads of A both occur after the load of B, they may both |
| 1198 | come up with different values: |
| 1199 | |
| 1200 | +-------+ : : : : |
| 1201 | | | +------+ +-------+ |
| 1202 | | |------>| A=1 |------ --->| A->0 | |
| 1203 | | | +------+ \ +-------+ |
| 1204 | | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 | |
| 1205 | | | +------+ | +-------+ |
| 1206 | | |------>| B=2 |--- | : : |
| 1207 | | | +------+ \ | : : +-------+ |
| 1208 | +-------+ : : \ | +-------+ | | |
| 1209 | ---------->| B->2 |------>| | |
| 1210 | | +-------+ | CPU 2 | |
| 1211 | | : : | | |
| 1212 | | : : | | |
| 1213 | | +-------+ | | |
| 1214 | | | A->0 |------>| 1st | |
| 1215 | | +-------+ | | |
| 1216 | At this point the read ----> \ rrrrrrrrrrrrrrrrr | | |
| 1217 | barrier causes all effects \ +-------+ | | |
| 1218 | prior to the storage of B ---->| A->1 |------>| 2nd | |
| 1219 | to be perceptible to CPU 2 +-------+ | | |
| 1220 | : : +-------+ |
| 1221 | |
| 1222 | |
| 1223 | But it may be that the update to A from CPU 1 becomes perceptible to CPU 2 |
| 1224 | before the read barrier completes anyway: |
| 1225 | |
| 1226 | +-------+ : : : : |
| 1227 | | | +------+ +-------+ |
| 1228 | | |------>| A=1 |------ --->| A->0 | |
| 1229 | | | +------+ \ +-------+ |
| 1230 | | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 | |
| 1231 | | | +------+ | +-------+ |
| 1232 | | |------>| B=2 |--- | : : |
| 1233 | | | +------+ \ | : : +-------+ |
| 1234 | +-------+ : : \ | +-------+ | | |
| 1235 | ---------->| B->2 |------>| | |
| 1236 | | +-------+ | CPU 2 | |
| 1237 | | : : | | |
| 1238 | \ : : | | |
| 1239 | \ +-------+ | | |
| 1240 | ---->| A->1 |------>| 1st | |
| 1241 | +-------+ | | |
| 1242 | rrrrrrrrrrrrrrrrr | | |
| 1243 | +-------+ | | |
| 1244 | | A->1 |------>| 2nd | |
| 1245 | +-------+ | | |
| 1246 | : : +-------+ |
| 1247 | |
| 1248 | |
| 1249 | The guarantee is that the second load will always come up with A == 1 if the |
| 1250 | load of B came up with B == 2. No such guarantee exists for the first load of |
| 1251 | A; that may come up with either A == 0 or A == 1. |
| 1252 | |
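Expressed as a C sketch, with READ_ONCE()/WRITE_ONCE() and the smp_*()
barriers standing in for the abstract LOADs, STOREs and barriers above
(x, r1 and r2 are CPU 2's local results):

	/* CPU 1 */
	WRITE_ONCE(A, 1);
	smp_wmb();
	WRITE_ONCE(B, 2);

	/* CPU 2 */
	x  = READ_ONCE(B);
	r1 = READ_ONCE(A);	/* first load: may read 0 or 1, even if x == 2 */
	smp_rmb();
	r2 = READ_ONCE(A);	/* second load: if x == 2, guaranteed to read 1 */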
| 1253 | |
| 1254 | READ MEMORY BARRIERS VS LOAD SPECULATION |
| 1255 | ---------------------------------------- |
| 1256 | |
Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time when they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet.  This permits
the actual load instruction to potentially complete immediately because the CPU
already has the value to hand.
| 1263 | |
| 1264 | It may turn out that the CPU didn't actually need the value - perhaps because a |
| 1265 | branch circumvented the load - in which case it can discard the value or just |
| 1266 | cache it for later use. |
| 1267 | |
| 1268 | Consider: |
| 1269 | |
Ingo Molnar | e0edc78 | 2013-11-22 11:24:53 +0100 | [diff] [blame] | 1270 | CPU 1 CPU 2 |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1271 | ======================= ======================= |
Ingo Molnar | e0edc78 | 2013-11-22 11:24:53 +0100 | [diff] [blame] | 1272 | LOAD B |
| 1273 | DIVIDE } Divide instructions generally |
| 1274 | DIVIDE } take a long time to perform |
| 1275 | LOAD A |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1276 | |
| 1277 | Which might appear as this: |
| 1278 | |
| 1279 | : : +-------+ |
| 1280 | +-------+ | | |
| 1281 | --->| B->2 |------>| | |
| 1282 | +-------+ | CPU 2 | |
| 1283 | : :DIVIDE | | |
| 1284 | +-------+ | | |
| 1285 | The CPU being busy doing a ---> --->| A->0 |~~~~ | | |
| 1286 | division speculates on the +-------+ ~ | | |
| 1287 | LOAD of A : : ~ | | |
| 1288 | : :DIVIDE | | |
| 1289 | : : ~ | | |
| 1290 | Once the divisions are complete --> : : ~-->| | |
| 1291 | the CPU can then perform the : : | | |
| 1292 | LOAD with immediate effect : : +-------+ |
| 1293 | |
| 1294 | |
| 1295 | Placing a read barrier or a data dependency barrier just before the second |
| 1296 | load: |
| 1297 | |
Ingo Molnar | e0edc78 | 2013-11-22 11:24:53 +0100 | [diff] [blame] | 1298 | CPU 1 CPU 2 |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1299 | ======================= ======================= |
Ingo Molnar | e0edc78 | 2013-11-22 11:24:53 +0100 | [diff] [blame] | 1300 | LOAD B |
| 1301 | DIVIDE |
| 1302 | DIVIDE |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1303 | <read barrier> |
Ingo Molnar | e0edc78 | 2013-11-22 11:24:53 +0100 | [diff] [blame] | 1304 | LOAD A |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1305 | |
| 1306 | will force any value speculatively obtained to be reconsidered to an extent |
| 1307 | dependent on the type of barrier used. If there was no change made to the |
| 1308 | speculated memory location, then the speculated value will just be used: |
| 1309 | |
| 1310 | : : +-------+ |
| 1311 | +-------+ | | |
| 1312 | --->| B->2 |------>| | |
| 1313 | +-------+ | CPU 2 | |
| 1314 | : :DIVIDE | | |
| 1315 | +-------+ | | |
| 1316 | The CPU being busy doing a ---> --->| A->0 |~~~~ | | |
| 1317 | division speculates on the +-------+ ~ | | |
| 1318 | LOAD of A : : ~ | | |
| 1319 | : :DIVIDE | | |
| 1320 | : : ~ | | |
| 1321 | : : ~ | | |
| 1322 | rrrrrrrrrrrrrrrr~ | | |
| 1323 | : : ~ | | |
| 1324 | : : ~-->| | |
| 1325 | : : | | |
| 1326 | : : +-------+ |
| 1327 | |
| 1328 | |
| 1329 | but if there was an update or an invalidation from another CPU pending, then |
| 1330 | the speculation will be cancelled and the value reloaded: |
| 1331 | |
| 1332 | : : +-------+ |
| 1333 | +-------+ | | |
| 1334 | --->| B->2 |------>| | |
| 1335 | +-------+ | CPU 2 | |
| 1336 | : :DIVIDE | | |
| 1337 | +-------+ | | |
| 1338 | The CPU being busy doing a ---> --->| A->0 |~~~~ | | |
| 1339 | division speculates on the +-------+ ~ | | |
| 1340 | LOAD of A : : ~ | | |
| 1341 | : :DIVIDE | | |
| 1342 | : : ~ | | |
| 1343 | : : ~ | | |
| 1344 | rrrrrrrrrrrrrrrrr | | |
| 1345 | +-------+ | | |
| 1346 | The speculation is discarded ---> --->| A->1 |------>| | |
| 1347 | and an updated value is +-------+ | | |
| 1348 | retrieved : : +-------+ |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1349 | |
| 1350 | |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 1351 | MULTICOPY ATOMICITY |
| 1352 | -------------------- |
Paul E. McKenney | 241e666 | 2011-02-10 16:54:50 -0800 | [diff] [blame] | 1353 | |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 1354 | Multicopy atomicity is a deeply intuitive notion about ordering that is |
| 1355 | not always provided by real computer systems, namely that a given store |
Alan Stern | 0902b1f | 2017-09-01 07:53:34 -0700 | [diff] [blame] | 1356 | becomes visible at the same time to all CPUs, or, alternatively, that all |
| 1357 | CPUs agree on the order in which all stores become visible. However, |
| 1358 | support of full multicopy atomicity would rule out valuable hardware |
| 1359 | optimizations, so a weaker form called ``other multicopy atomicity'' |
| 1360 | instead guarantees only that a given store becomes visible at the same |
| 1361 | time to all -other- CPUs. The remainder of this document discusses this |
| 1362 | weaker form, but for brevity will call it simply ``multicopy atomicity''. |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 1363 | |
| 1364 | The following example demonstrates multicopy atomicity: |
Paul E. McKenney | 241e666 | 2011-02-10 16:54:50 -0800 | [diff] [blame] | 1365 | |
| 1366 | CPU 1 CPU 2 CPU 3 |
| 1367 | ======================= ======================= ======================= |
| 1368 | { X = 0, Y = 0 } |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 1369 | STORE X=1 r1=LOAD X (reads 1) LOAD Y (reads 1) |
| 1370 | <general barrier> <read barrier> |
| 1371 | STORE Y=r1 LOAD X |
Paul E. McKenney | 241e666 | 2011-02-10 16:54:50 -0800 | [diff] [blame] | 1372 | |
Alan Stern | 0902b1f | 2017-09-01 07:53:34 -0700 | [diff] [blame] | 1373 | Suppose that CPU 2's load from X returns 1, which it then stores to Y, |
| 1374 | and CPU 3's load from Y returns 1. This indicates that CPU 1's store |
| 1375 | to X precedes CPU 2's load from X and that CPU 2's store to Y precedes |
| 1376 | CPU 3's load from Y. In addition, the memory barriers guarantee that |
| 1377 | CPU 2 executes its load before its store, and CPU 3 loads from Y before |
| 1378 | it loads from X. The question is then "Can CPU 3's load from X return 0?" |
Paul E. McKenney | 241e666 | 2011-02-10 16:54:50 -0800 | [diff] [blame] | 1379 | |
Alan Stern | 0902b1f | 2017-09-01 07:53:34 -0700 | [diff] [blame] | 1380 | Because CPU 3's load from X in some sense comes after CPU 2's load, it |
Paul E. McKenney | 241e666 | 2011-02-10 16:54:50 -0800 | [diff] [blame] | 1381 | is natural to expect that CPU 3's load from X must therefore return 1. |
Alan Stern | 0902b1f | 2017-09-01 07:53:34 -0700 | [diff] [blame] | 1382 | This expectation follows from multicopy atomicity: if a load executing |
| 1383 | on CPU B follows a load from the same variable executing on CPU A (and |
| 1384 | CPU A did not originally store the value which it read), then on |
| 1385 | multicopy-atomic systems, CPU B's load must return either the same value |
| 1386 | that CPU A's load did or some later value. However, the Linux kernel |
| 1387 | does not require systems to be multicopy atomic. |
Paul E. McKenney | 241e666 | 2011-02-10 16:54:50 -0800 | [diff] [blame] | 1388 | |
Alan Stern | 0902b1f | 2017-09-01 07:53:34 -0700 | [diff] [blame] | 1389 | The use of a general memory barrier in the example above compensates |
| 1390 | for any lack of multicopy atomicity. In the example, if CPU 2's load |
| 1391 | from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load |
| 1392 | from X must indeed also return 1. |
Paul E. McKenney | 241e666 | 2011-02-10 16:54:50 -0800 | [diff] [blame] | 1393 | |
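The same example, expressed as a C sketch (the function names and the
variables r1, r2 and r3 are illustrative only):

	int X, Y;	/* shared; both initially zero */
	int r1, r2, r3;

	void cpu1_store(void)
	{
		WRITE_ONCE(X, 1);
	}

	void cpu2_propagate(void)
	{
		r1 = READ_ONCE(X);	/* suppose this reads 1 */
		smp_mb();		/* general barrier */
		WRITE_ONCE(Y, r1);
	}

	void cpu3_observe(void)
	{
		r2 = READ_ONCE(Y);	/* suppose this reads 1 */
		smp_rmb();		/* read barrier */
		r3 = READ_ONCE(X);	/* then this is guaranteed to read 1 */
	}
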
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 1394 | However, dependencies, read barriers, and write barriers are not always |
| 1395 | able to compensate for non-multicopy atomicity. For example, suppose |
| 1396 | that CPU 2's general barrier is removed from the above example, leaving |
| 1397 | only the data dependency shown below: |
Paul E. McKenney | 241e666 | 2011-02-10 16:54:50 -0800 | [diff] [blame] | 1398 | |
| 1399 | CPU 1 CPU 2 CPU 3 |
| 1400 | ======================= ======================= ======================= |
| 1401 | { X = 0, Y = 0 } |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 1402 | STORE X=1 r1=LOAD X (reads 1) LOAD Y (reads 1) |
| 1403 | <data dependency> <read barrier> |
| 1404 | STORE Y=r1 LOAD X (reads 0) |
Paul E. McKenney | 241e666 | 2011-02-10 16:54:50 -0800 | [diff] [blame] | 1405 | |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 1406 | This substitution allows non-multicopy atomicity to run rampant: in |
| 1407 | this example, it is perfectly legal for CPU 2's load from X to return 1, |
| 1408 | CPU 3's load from Y to return 1, and its load from X to return 0. |
Paul E. McKenney | 241e666 | 2011-02-10 16:54:50 -0800 | [diff] [blame] | 1409 | |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 1410 | The key point is that although CPU 2's data dependency orders its load |
Alan Stern | 0902b1f | 2017-09-01 07:53:34 -0700 | [diff] [blame] | 1411 | and store, it does not guarantee to order CPU 1's store. Thus, if this |
| 1412 | example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a |
| 1413 | store buffer or a level of cache, CPU 2 might have early access to CPU 1's |
| 1414 | writes. General barriers are therefore required to ensure that all CPUs |
| 1415 | agree on the combined order of multiple accesses. |
Paul E. McKenney | 241e666 | 2011-02-10 16:54:50 -0800 | [diff] [blame] | 1416 | |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 1417 | General barriers can compensate not only for non-multicopy atomicity, |
| 1418 | but can also generate additional ordering that can ensure that -all- |
| 1419 | CPUs will perceive the same order of -all- operations. In contrast, a |
| 1420 | chain of release-acquire pairs do not provide this additional ordering, |
| 1421 | which means that only those CPUs on the chain are guaranteed to agree |
| 1422 | on the combined order of the accesses. For example, switching to C code |
| 1423 | in deference to the ghost of Herman Hollerith: |
Paul E. McKenney | c535cc9 | 2016-01-15 09:30:42 -0800 | [diff] [blame] | 1424 | |
| 1425 | int u, v, x, y, z; |
| 1426 | |
| 1427 | void cpu0(void) |
| 1428 | { |
| 1429 | r0 = smp_load_acquire(&x); |
| 1430 | WRITE_ONCE(u, 1); |
| 1431 | smp_store_release(&y, 1); |
| 1432 | } |
| 1433 | |
| 1434 | void cpu1(void) |
| 1435 | { |
| 1436 | r1 = smp_load_acquire(&y); |
| 1437 | r4 = READ_ONCE(v); |
| 1438 | r5 = READ_ONCE(u); |
| 1439 | smp_store_release(&z, 1); |
| 1440 | } |
| 1441 | |
| 1442 | void cpu2(void) |
| 1443 | { |
| 1444 | r2 = smp_load_acquire(&z); |
| 1445 | smp_store_release(&x, 1); |
| 1446 | } |
| 1447 | |
| 1448 | void cpu3(void) |
| 1449 | { |
| 1450 | WRITE_ONCE(v, 1); |
| 1451 | smp_mb(); |
| 1452 | r3 = READ_ONCE(u); |
| 1453 | } |
| 1454 | |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 1455 | Because cpu0(), cpu1(), and cpu2() participate in a chain of |
| 1456 | smp_store_release()/smp_load_acquire() pairs, the following outcome |
| 1457 | is prohibited: |
Paul E. McKenney | c535cc9 | 2016-01-15 09:30:42 -0800 | [diff] [blame] | 1458 | |
| 1459 | r0 == 1 && r1 == 1 && r2 == 1 |
| 1460 | |
| 1461 | Furthermore, because of the release-acquire relationship between cpu0() |
| 1462 | and cpu1(), cpu1() must see cpu0()'s writes, so that the following |
| 1463 | outcome is prohibited: |
| 1464 | |
| 1465 | r1 == 1 && r5 == 0 |
| 1466 | |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 1467 | However, the ordering provided by a release-acquire chain is local |
| 1468 | to the CPUs participating in that chain and does not apply to cpu3(), |
| 1469 | at least aside from stores. Therefore, the following outcome is possible: |
Paul E. McKenney | c535cc9 | 2016-01-15 09:30:42 -0800 | [diff] [blame] | 1470 | |
| 1471 | r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 |
| 1472 | |
Paul E. McKenney | 37ef034 | 2016-01-25 22:12:34 -0800 | [diff] [blame] | 1473 | As an aside, the following outcome is also possible: |
| 1474 | |
| 1475 | r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1 |
| 1476 | |
Paul E. McKenney | c535cc9 | 2016-01-15 09:30:42 -0800 | [diff] [blame] | 1477 | Although cpu0(), cpu1(), and cpu2() will see their respective reads and |
| 1478 | writes in order, CPUs not involved in the release-acquire chain might |
| 1479 | well disagree on the order. This disagreement stems from the fact that |
| 1480 | the weak memory-barrier instructions used to implement smp_load_acquire() |
| 1481 | and smp_store_release() are not required to order prior stores against |
| 1482 | subsequent loads in all cases. This means that cpu3() can see cpu0()'s |
| 1483 | store to u as happening -after- cpu1()'s load from v, even though |
| 1484 | both cpu0() and cpu1() agree that these two operations occurred in the |
| 1485 | intended order. |
| 1486 | |
| 1487 | However, please keep in mind that smp_load_acquire() is not magic. |
| 1488 | In particular, it simply reads from its argument with ordering. It does |
| 1489 | -not- ensure that any particular value will be read. Therefore, the |
| 1490 | following outcome is possible: |
| 1491 | |
| 1492 | r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0 |
| 1493 | |
| 1494 | Note that this outcome can happen even on a mythical sequentially |
| 1495 | consistent system where nothing is ever reordered. |
| 1496 | |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 1497 | To reiterate, if your code requires full ordering of all operations, |
| 1498 | use general barriers throughout. |
Paul E. McKenney | 241e666 | 2011-02-10 16:54:50 -0800 | [diff] [blame] | 1499 | |
| 1500 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1501 | ======================== |
| 1502 | EXPLICIT KERNEL BARRIERS |
| 1503 | ======================== |
| 1504 | |
| 1505 | The Linux kernel has a variety of different barriers that act at different |
| 1506 | levels: |
| 1507 | |
| 1508 | (*) Compiler barrier. |
| 1509 | |
| 1510 | (*) CPU memory barriers. |
| 1511 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1512 | |
| 1513 | COMPILER BARRIER |
| 1514 | ---------------- |
| 1515 | |
| 1516 | The Linux kernel has an explicit compiler barrier function that prevents the |
| 1517 | compiler from moving the memory accesses either side of it to the other side: |
| 1518 | |
| 1519 | barrier(); |
| 1520 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1521 | This is a general barrier -- there are no read-read or write-write |
| 1522 | variants of barrier(). However, READ_ONCE() and WRITE_ONCE() can be |
| 1523 | thought of as weak forms of barrier() that affect only the specific |
| 1524 | accesses flagged by the READ_ONCE() or WRITE_ONCE(). |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1525 | |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1526 | The barrier() function has the following effects: |
| 1527 | |
| 1528 | (*) Prevents the compiler from reordering accesses following the |
| 1529 | barrier() to precede any accesses preceding the barrier(). |
| 1530 | One example use for this property is to ease communication between |
| 1531 | interrupt-handler code and the code that was interrupted. |
| 1532 | |
| 1533 | (*) Within a loop, forces the compiler to load the variables used |
| 1534 | in that loop's conditional on each pass through that loop. |
| 1535 | |
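As a sketch of the second property, assuming a hypothetical shared flag
'need_stop' that some other CPU may set:

	while (!need_stop) {
		do_unit_of_work();	/* hypothetical work function */
		barrier();	/* forces 'need_stop' to be reloaded each pass */
	}

Without the barrier(), the compiler would be within its rights to load
'need_stop' once and spin on a cached copy forever.  (In most kernel code,
READ_ONCE(need_stop) would be the more selective way to obtain the same
effect, as discussed below.)
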
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1536 | The READ_ONCE() and WRITE_ONCE() functions can prevent any number of |
| 1537 | optimizations that, while perfectly safe in single-threaded code, can |
| 1538 | be fatal in concurrent code. Here are some examples of these sorts |
| 1539 | of optimizations: |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1540 | |
Paul E. McKenney | 449f741 | 2014-01-02 15:03:50 -0800 | [diff] [blame] | 1541 | (*) The compiler is within its rights to reorder loads and stores |
| 1542 | to the same variable, and in some cases, the CPU is within its |
| 1543 | rights to reorder loads to the same variable. This means that |
| 1544 | the following code: |
| 1545 | |
| 1546 | a[0] = x; |
| 1547 | a[1] = x; |
| 1548 | |
| 1549 | Might result in an older value of x stored in a[1] than in a[0]. |
| 1550 | Prevent both the compiler and the CPU from doing this as follows: |
| 1551 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1552 | a[0] = READ_ONCE(x); |
| 1553 | a[1] = READ_ONCE(x); |
Paul E. McKenney | 449f741 | 2014-01-02 15:03:50 -0800 | [diff] [blame] | 1554 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1555 | In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for |
| 1556 | accesses from multiple CPUs to a single variable. |
Paul E. McKenney | 449f741 | 2014-01-02 15:03:50 -0800 | [diff] [blame] | 1557 | |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1558 | (*) The compiler is within its rights to merge successive loads from |
| 1559 | the same variable. Such merging can cause the compiler to "optimize" |
| 1560 | the following code: |
| 1561 | |
| 1562 | while (tmp = a) |
| 1563 | do_something_with(tmp); |
| 1564 | |
| 1565 | into the following code, which, although in some sense legitimate |
| 1566 | for single-threaded code, is almost certainly not what the developer |
| 1567 | intended: |
| 1568 | |
| 1569 | if (tmp = a) |
| 1570 | for (;;) |
| 1571 | do_something_with(tmp); |
| 1572 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1573 | Use READ_ONCE() to prevent the compiler from doing this to you: |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1574 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1575 | while (tmp = READ_ONCE(a)) |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1576 | do_something_with(tmp); |
| 1577 | |
| 1578 | (*) The compiler is within its rights to reload a variable, for example, |
| 1579 | in cases where high register pressure prevents the compiler from |
| 1580 | keeping all data of interest in registers. The compiler might |
| 1581 | therefore optimize the variable 'tmp' out of our previous example: |
| 1582 | |
| 1583 | while (tmp = a) |
| 1584 | do_something_with(tmp); |
| 1585 | |
| 1586 | This could result in the following code, which is perfectly safe in |
| 1587 | single-threaded code, but can be fatal in concurrent code: |
| 1588 | |
| 1589 | while (a) |
| 1590 | do_something_with(a); |
| 1591 | |
| 1592 | For example, the optimized version of this code could result in |
| 1593 | passing a zero to do_something_with() in the case where the variable |
| 1594 | a was modified by some other CPU between the "while" statement and |
| 1595 | the call to do_something_with(). |
| 1596 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1597 | Again, use READ_ONCE() to prevent the compiler from doing this: |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1598 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1599 | while (tmp = READ_ONCE(a)) |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1600 | do_something_with(tmp); |
| 1601 | |
| 1602 | Note that if the compiler runs short of registers, it might save |
| 1603 | tmp onto the stack. The overhead of this saving and later restoring |
| 1604 | is why compilers reload variables. Doing so is perfectly safe for |
| 1605 | single-threaded code, so you need to tell the compiler about cases |
| 1606 | where it is not safe. |
| 1607 | |
| 1608 | (*) The compiler is within its rights to omit a load entirely if it knows |
| 1609 | what the value will be. For example, if the compiler can prove that |
| 1610 | the value of variable 'a' is always zero, it can optimize this code: |
| 1611 | |
| 1612 | while (tmp = a) |
| 1613 | do_something_with(tmp); |
| 1614 | |
| 1615 | Into this: |
| 1616 | |
| 1617 | do { } while (0); |
| 1618 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1619 | This transformation is a win for single-threaded code because it |
| 1620 | gets rid of a load and a branch. The problem is that the compiler |
| 1621 | will carry out its proof assuming that the current CPU is the only |
| 1622 | one updating variable 'a'. If variable 'a' is shared, then the |
| 1623 | compiler's proof will be erroneous. Use READ_ONCE() to tell the |
| 1624 | compiler that it doesn't know as much as it thinks it does: |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1625 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1626 | while (tmp = READ_ONCE(a)) |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1627 | do_something_with(tmp); |
| 1628 | |
| 1629 | But please note that the compiler is also closely watching what you |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1630 | do with the value after the READ_ONCE(). For example, suppose you |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1631 | do the following and MAX is a preprocessor macro with the value 1: |
| 1632 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1633 | while ((tmp = READ_ONCE(a)) % MAX) |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1634 | do_something_with(tmp); |
| 1635 | |
| 1636 | Then the compiler knows that the result of the "%" operator applied |
| 1637 | to MAX will always be zero, again allowing the compiler to optimize |
| 1638 | the code into near-nonexistence. (It will still load from the |
| 1639 | variable 'a'.) |
| 1640 | |
| 1641 | (*) Similarly, the compiler is within its rights to omit a store entirely |
| 1642 | if it knows that the variable already has the value being stored. |
| 1643 | Again, the compiler assumes that the current CPU is the only one |
| 1644 | storing into the variable, which can cause the compiler to do the |
| 1645 | wrong thing for shared variables. For example, suppose you have |
| 1646 | the following: |
| 1647 | |
| 1648 | a = 0; |
SeongJae Park | 65f95ff | 2016-02-22 08:28:29 -0800 | [diff] [blame] | 1649 | ... Code that does not store to variable a ... |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1650 | a = 0; |
| 1651 | |
| 1652 | The compiler sees that the value of variable 'a' is already zero, so |
| 1653 | it might well omit the second store. This would come as a fatal |
| 1654 | surprise if some other CPU might have stored to variable 'a' in the |
| 1655 | meantime. |
| 1656 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1657 | Use WRITE_ONCE() to prevent the compiler from making this sort of |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1658 | wrong guess: |
| 1659 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1660 | WRITE_ONCE(a, 0); |
SeongJae Park | 65f95ff | 2016-02-22 08:28:29 -0800 | [diff] [blame] | 1661 | ... Code that does not store to variable a ... |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1662 | WRITE_ONCE(a, 0); |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1663 | |
| 1664 | (*) The compiler is within its rights to reorder memory accesses unless |
| 1665 | you tell it not to. For example, consider the following interaction |
| 1666 | between process-level code and an interrupt handler: |
| 1667 | |
| 1668 | void process_level(void) |
| 1669 | { |
| 1670 | msg = get_message(); |
| 1671 | flag = true; |
| 1672 | } |
| 1673 | |
| 1674 | void interrupt_handler(void) |
| 1675 | { |
| 1676 | if (flag) |
| 1677 | process_message(msg); |
| 1678 | } |
| 1679 | |
     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
     win for single-threaded code:
| 1683 | |
| 1684 | void process_level(void) |
| 1685 | { |
| 1686 | flag = true; |
| 1687 | msg = get_message(); |
| 1688 | } |
| 1689 | |
     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
     to prevent this as follows:
| 1693 | |
| 1694 | void process_level(void) |
| 1695 | { |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1696 | WRITE_ONCE(msg, get_message()); |
| 1697 | WRITE_ONCE(flag, true); |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1698 | } |
| 1699 | |
| 1700 | void interrupt_handler(void) |
| 1701 | { |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1702 | if (READ_ONCE(flag)) |
| 1703 | process_message(READ_ONCE(msg)); |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1704 | } |
| 1705 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1706 | Note that the READ_ONCE() and WRITE_ONCE() wrappers in |
| 1707 | interrupt_handler() are needed if this interrupt handler can itself |
| 1708 | be interrupted by something that also accesses 'flag' and 'msg', |
| 1709 | for example, a nested interrupt or an NMI. Otherwise, READ_ONCE() |
| 1710 | and WRITE_ONCE() are not needed in interrupt_handler() other than |
| 1711 | for documentation purposes. (Note also that nested interrupts |
| 1712 | do not typically occur in modern Linux kernels, in fact, if an |
| 1713 | interrupt handler returns with interrupts enabled, you will get a |
| 1714 | WARN_ONCE() splat.) |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1715 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1716 | You should assume that the compiler can move READ_ONCE() and |
| 1717 | WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(), |
| 1718 | barrier(), or similar primitives. |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1719 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1720 | This effect could also be achieved using barrier(), but READ_ONCE() |
| 1721 | and WRITE_ONCE() are more selective: With READ_ONCE() and |
| 1722 | WRITE_ONCE(), the compiler need only forget the contents of the |
| 1723 | indicated memory locations, while with barrier() the compiler must |
SeongJae Park | 8149b5c | 2020-01-31 21:52:37 +0100 | [diff] [blame] | 1724 | discard the value of all memory locations that it has currently |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1725 | cached in any machine registers. Of course, the compiler must also |
| 1726 | respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur, |
| 1727 | though the CPU of course need not do so. |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1728 | |
| 1729 | (*) The compiler is within its rights to invent stores to a variable, |
| 1730 | as in the following example: |
| 1731 | |
| 1732 | if (a) |
| 1733 | b = a; |
| 1734 | else |
| 1735 | b = 42; |
| 1736 | |
| 1737 | The compiler might save a branch by optimizing this as follows: |
| 1738 | |
| 1739 | b = 42; |
| 1740 | if (a) |
| 1741 | b = a; |
| 1742 | |
| 1743 | In single-threaded code, this is not only safe, but also saves |
| 1744 | a branch. Unfortunately, in concurrent code, this optimization |
| 1745 | could cause some other CPU to see a spurious value of 42 -- even |
| 1746 | if variable 'a' was never zero -- when loading variable 'b'. |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1747 | Use WRITE_ONCE() to prevent this as follows: |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1748 | |
| 1749 | if (a) |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1750 | WRITE_ONCE(b, a); |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1751 | else |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1752 | WRITE_ONCE(b, 42); |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1753 | |
| 1754 | The compiler can also invent loads. These are usually less |
| 1755 | damaging, but they can result in cache-line bouncing and thus in |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1756 | poor performance and scalability. Use READ_ONCE() to prevent |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1757 | invented loads. |
| 1758 | |
 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, READ_ONCE() and
     WRITE_ONCE() prevent "load tearing" and "store tearing," in which
     a single large access is replaced by multiple smaller accesses.
     For example, given an architecture having
| 1763 | 16-bit store instructions with 7-bit immediate fields, the compiler |
| 1764 | might be tempted to use two 16-bit store-immediate instructions to |
| 1765 | implement the following 32-bit store: |
| 1766 | |
| 1767 | p = 0x00010002; |
| 1768 | |
| 1769 | Please note that GCC really does use this sort of optimization, |
| 1770 | which is not surprising given that it would likely take more |
| 1771 | than two instructions to build the constant and then store it. |
| 1772 | This optimization can therefore be a win in single-threaded code. |
| 1773 | In fact, a recent bug (since fixed) caused GCC to incorrectly use |
| 1774 | this optimization in a volatile store. In the absence of such bugs, |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1775 | use of WRITE_ONCE() prevents store tearing in the following example: |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1776 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1777 | WRITE_ONCE(p, 0x00010002); |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1778 | |
| 1779 | Use of packed structures can also result in load and store tearing, |
| 1780 | as in this example: |
| 1781 | |
| 1782 | struct __attribute__((__packed__)) foo { |
| 1783 | short a; |
| 1784 | int b; |
| 1785 | short c; |
| 1786 | }; |
| 1787 | struct foo foo1, foo2; |
| 1788 | ... |
| 1789 | |
| 1790 | foo2.a = foo1.a; |
| 1791 | foo2.b = foo1.b; |
| 1792 | foo2.c = foo1.c; |
| 1793 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1794 | Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no |
| 1795 | volatile markings, the compiler would be well within its rights to |
| 1796 | implement these three assignment statements as a pair of 32-bit |
| 1797 | loads followed by a pair of 32-bit stores. This would result in |
| 1798 | load tearing on 'foo1.b' and store tearing on 'foo2.b'. READ_ONCE() |
| 1799 | and WRITE_ONCE() again prevent tearing in this example: |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1800 | |
| 1801 | foo2.a = foo1.a; |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1802 | WRITE_ONCE(foo2.b, READ_ONCE(foo1.b)); |
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1803 | foo2.c = foo1.c; |
| 1804 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1805 | All that aside, it is never necessary to use READ_ONCE() and |
| 1806 | WRITE_ONCE() on a variable that has been marked volatile. For example, |
| 1807 | because 'jiffies' is marked volatile, it is never necessary to |
say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which have no effect when
their argument is already marked volatile.
Paul E. McKenney | 692118d | 2013-12-11 13:59:07 -0800 | [diff] [blame] | 1811 | |
| 1812 | Please note that these compiler barriers have no direct effect on the CPU, |
| 1813 | which may then reorder things however it wishes. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1814 | |
| 1815 | |
| 1816 | CPU MEMORY BARRIERS |
| 1817 | ------------------- |
| 1818 | |
The Linux kernel has seven basic CPU memory barriers:
| 1820 | |
| 1821 | TYPE MANDATORY SMP CONDITIONAL |
| 1822 | =============== ======================= =========================== |
| 1823 | GENERAL mb() smp_mb() |
| 1824 | WRITE wmb() smp_wmb() |
| 1825 | READ rmb() smp_rmb() |
Paul E. McKenney | 9ad3c14 | 2017-11-27 09:20:40 -0800 | [diff] [blame] | 1826 | DATA DEPENDENCY READ_ONCE() |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1827 | |
| 1828 | |
Nick Piggin | 73f1028 | 2008-05-14 06:35:11 +0200 | [diff] [blame] | 1829 | All memory barriers except the data dependency barriers imply a compiler |
SeongJae Park | 0b6fa34 | 2016-04-12 08:52:53 -0700 | [diff] [blame] | 1830 | barrier. Data dependencies do not impose any additional compiler ordering. |
Nick Piggin | 73f1028 | 2008-05-14 06:35:11 +0200 | [diff] [blame] | 1831 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1832 | Aside: In the case of data dependencies, the compiler would be expected |
| 1833 | to issue the loads in the correct order (eg. `a[b]` would have to load |
| 1834 | the value of b before loading a[b]), however there is no guarantee in |
| 1835 | the C specification that the compiler may not speculate the value of b |
SeongJae Park | 8149b5c | 2020-01-31 21:52:37 +0100 | [diff] [blame] | 1836 | (eg. is equal to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1) |
SeongJae Park | 0b6fa34 | 2016-04-12 08:52:53 -0700 | [diff] [blame] | 1837 | tmp = a[b]; ). There is also the problem of a compiler reloading b after |
| 1838 | having loaded a[b], thus having a newer copy of b than a[b]. A consensus |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 1839 | has not yet been reached about these problems, however the READ_ONCE() |
| 1840 | macro is a good place to start looking. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1841 | |
| 1842 | SMP memory barriers are reduced to compiler barriers on uniprocessor compiled |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 1843 | systems because it is assumed that a CPU will appear to be self-consistent, |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1844 | and will order overlapping accesses correctly with respect to itself. |
Michael S. Tsirkin | 6a65d26 | 2015-12-27 18:23:01 +0200 | [diff] [blame] | 1845 | However, see the subsection on "Virtual Machine Guests" below. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1846 | |
| 1847 | [!] Note that SMP memory barriers _must_ be used to control the ordering of |
| 1848 | references to shared memory on SMP systems, though the use of locking instead |
| 1849 | is sufficient. |
| 1850 | |
| 1851 | Mandatory barriers should not be used to control SMP effects, since mandatory |
Michael S. Tsirkin | 6a65d26 | 2015-12-27 18:23:01 +0200 | [diff] [blame] | 1852 | barriers impose unnecessary overhead on both SMP and UP systems. They may, |
| 1853 | however, be used to control MMIO effects on accesses through relaxed memory I/O |
| 1854 | windows. These barriers are required even on non-SMP systems as they affect |
| 1855 | the order in which memory operations appear to a device by prohibiting both the |
| 1856 | compiler and the CPU from reordering them. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1857 | |
| 1858 | |
| 1859 | There are some more advanced barrier functions: |
| 1860 | |
Peter Zijlstra | b92b8b3 | 2015-05-12 10:51:55 +0200 | [diff] [blame] | 1861 | (*) smp_store_mb(var, value) |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1862 | |
Oleg Nesterov | 75b2bd5 | 2006-11-08 17:44:38 -0800 | [diff] [blame] | 1863 | This assigns the value to the variable and then inserts a full memory |
Davidlohr Bueso | 2d142e5 | 2015-10-27 12:53:51 -0700 | [diff] [blame] | 1864 | barrier after it. It isn't guaranteed to insert anything more than a |
| 1865 | compiler barrier in a UP compilation. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1866 | |
| 1867 | |
Peter Zijlstra | 1b15611 | 2014-03-13 19:00:35 +0100 | [diff] [blame] | 1868 | (*) smp_mb__before_atomic(); |
| 1869 | (*) smp_mb__after_atomic(); |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1870 | |
     These are for use with atomic RMW functions that do not imply memory
     barriers, but where the code needs a memory barrier.  Examples of
     atomic RMW functions that do not imply a memory barrier include add,
     subtract, (failed) conditional operations, and the _relaxed functions,
     but not atomic_read or atomic_set.  A common example where a memory
| 1876 | barrier may be required is when atomic ops are used for reference |
| 1877 | counting. |
Peter Zijlstra | 1b15611 | 2014-03-13 19:00:35 +0100 | [diff] [blame] | 1878 | |
Manfred Spraul | 39323c6 | 2020-02-03 17:34:29 -0800 | [diff] [blame] | 1879 | These are also used for atomic RMW bitop functions that do not imply a |
| 1880 | memory barrier (such as set_bit and clear_bit). |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1881 | |
| 1882 | As an example, consider a piece of code that marks an object as being dead |
| 1883 | and then decrements the object's reference count: |
| 1884 | |
| 1885 | obj->dead = 1; |
Peter Zijlstra | 1b15611 | 2014-03-13 19:00:35 +0100 | [diff] [blame] | 1886 | smp_mb__before_atomic(); |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1887 | atomic_dec(&obj->ref_count); |
| 1888 | |
| 1889 | This makes sure that the death mark on the object is perceived to be set |
| 1890 | *before* the reference counter is decremented. |
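
     Conversely, smp_mb__after_atomic() orders an atomic RMW operation
     before subsequent accesses.  As a hedged sketch (the object and the
     flag bit are hypothetical), a common pattern is clearing a bit and
     then waking any waiters, which needs a barrier between the clear and
     the wakeup:

	clear_bit(OBJ_PENDING, &obj->flags);
	smp_mb__after_atomic();	/* order the clear before the wakeup's checks */
	wake_up_bit(&obj->flags, OBJ_PENDING);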
| 1891 | |
Peter Zijlstra | 706eeb3 | 2017-06-12 14:50:27 +0200 | [diff] [blame] | 1892 | See Documentation/atomic_{t,bitops}.txt for more information. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1893 | |
| 1894 | |
Alexander Duyck | 1077fa3 | 2014-12-11 15:02:06 -0800 | [diff] [blame] | 1895 | (*) dma_wmb(); |
| 1896 | (*) dma_rmb(); |
| 1897 | |
| 1898 | These are for use with consistent memory to guarantee the ordering |
| 1899 | of writes or reads of shared memory accessible to both the CPU and a |
| 1900 | DMA capable device. |
| 1901 | |
| 1902 | For example, consider a device driver that shares memory with a device |
| 1903 | and uses a descriptor status value to indicate if the descriptor belongs |
| 1904 | to the device or the CPU, and a doorbell to notify it when new |
| 1905 | descriptors are available: |
| 1906 | |
| 1907 | if (desc->status != DEVICE_OWN) { |
| 1908 | /* do not read data until we own descriptor */ |
| 1909 | dma_rmb(); |
| 1910 | |
| 1911 | /* read/modify data */ |
| 1912 | read_data = desc->data; |
| 1913 | desc->data = write_data; |
| 1914 | |
| 1915 | /* flush modifications before status update */ |
| 1916 | dma_wmb(); |
| 1917 | |
| 1918 | /* assign ownership */ |
| 1919 | desc->status = DEVICE_OWN; |
| 1920 | |
Alexander Duyck | 1077fa3 | 2014-12-11 15:02:06 -0800 | [diff] [blame] | 1921 | /* notify device of new descriptors */ |
| 1922 | writel(DESC_NOTIFY, doorbell); |
| 1923 | } |
| 1924 | |
The dma_rmb() allows us to guarantee that the device has released ownership
Sylvain Trias | 7a45800 | 2015-04-08 10:27:57 +0200 | [diff] [blame] | 1926 | before we read the data from the descriptor, and the dma_wmb() allows |
Alexander Duyck | 1077fa3 | 2014-12-11 15:02:06 -0800 | [diff] [blame] | 1927 | us to guarantee the data is written to the descriptor before the device |
Will Deacon | 5846581 | 2018-05-14 15:55:26 -0700 | [diff] [blame] | 1928 | can see it now has ownership. Note that, when using writel(), a prior |
| 1929 | wmb() is not needed to guarantee that the cache coherent memory writes |
| 1930 | have completed before writing to the MMIO region. The cheaper |
| 1931 | writel_relaxed() does not provide this guarantee and must not be used |
| 1932 | here. |
Alexander Duyck | 1077fa3 | 2014-12-11 15:02:06 -0800 | [diff] [blame] | 1933 | |
Will Deacon | 5846581 | 2018-05-14 15:55:26 -0700 | [diff] [blame] | 1934 | See the subsection "Kernel I/O barrier effects" for more information on |
SeongJae Park | 537f3a7 | 2020-08-29 10:26:05 +0200 | [diff] [blame] | 1935 | relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for |
| 1936 | more information on consistent memory. |
Alexander Duyck | 1077fa3 | 2014-12-11 15:02:06 -0800 | [diff] [blame] | 1937 | |
Aneesh Kumar K.V | 3e79f08 | 2020-07-01 12:52:32 +0530 | [diff] [blame] | 1938 | (*) pmem_wmb(); |
| 1939 | |
This is for use with persistent memory, to ensure that stores which modify
persistent storage have reached a platform durability domain.
| 1943 | |
For example, after a non-temporal write to a pmem region, we use pmem_wmb()
| 1945 | to ensure that stores have reached a platform durability domain. This ensures |
| 1946 | that stores have updated persistent storage before any data access or |
| 1947 | data transfer caused by subsequent instructions is initiated. This is |
| 1948 | in addition to the ordering done by wmb(). |
| 1949 | |
For loads from persistent memory, existing read memory barriers are sufficient
| 1951 | to ensure read ordering. |
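
     As a hedged sketch (the destination, source and length are
     hypothetical), a driver persisting a buffer might do:

	memcpy_flushcache(pmem_dst, src, len);	/* non-temporal/flushing copy to pmem */
	pmem_wmb();	/* ensure the stores reach the platform durability domain */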
SeongJae Park | dfeccea | 2016-08-11 11:17:40 -0700 | [diff] [blame] | 1952 | |
Xiongfeng Wang | d5624bb | 2021-12-21 11:55:56 +0800 | [diff] [blame] | 1953 | (*) io_stop_wc(); |
| 1954 | |
For memory accesses with write-combining attributes (e.g. those returned
by ioremap_wc()), the CPU may wait for prior accesses to be merged with
subsequent ones.  io_stop_wc() can be used to prevent the merging of
write-combining memory accesses before this macro with those after it when
such a wait has performance implications.
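
     A minimal sketch, assuming a hypothetical write-combining mapping and
     register offsets:

	void __iomem *wc_base = ioremap_wc(phys, size);

	writel_relaxed(first, wc_base + REG_FIRST);
	io_stop_wc();	/* do not merge the access above with those below */
	writel_relaxed(second, wc_base + REG_SECOND);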
| 1960 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1961 | =============================== |
| 1962 | IMPLICIT KERNEL MEMORY BARRIERS |
| 1963 | =============================== |
| 1964 | |
| 1965 | Some of the other functions in the linux kernel imply memory barriers, amongst |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 1966 | which are locking and scheduling functions. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1967 | |
| 1968 | This specification is a _minimum_ guarantee; any particular architecture may |
| 1969 | provide more substantial guarantees, but these may not be relied upon outside |
| 1970 | of arch specific code. |
| 1971 | |
| 1972 | |
SeongJae Park | 166bda7 | 2016-04-12 08:52:50 -0700 | [diff] [blame] | 1973 | LOCK ACQUISITION FUNCTIONS |
| 1974 | -------------------------- |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1975 | |
| 1976 | The Linux kernel has a number of locking constructs: |
| 1977 | |
| 1978 | (*) spin locks |
| 1979 | (*) R/W spin locks |
| 1980 | (*) mutexes |
| 1981 | (*) semaphores |
| 1982 | (*) R/W semaphores |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1983 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 1984 | In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1985 | for each construct. These operations all imply certain barriers: |
| 1986 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 1987 | (1) ACQUIRE operation implication: |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1988 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 1989 | Memory operations issued after the ACQUIRE will be completed after the |
| 1990 | ACQUIRE operation has completed. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1991 | |
Paul E. McKenney | 8dd853d | 2014-02-23 08:34:24 -0800 | [diff] [blame] | 1992 | Memory operations issued before the ACQUIRE may be completed after |
Peter Zijlstra | a9668cd | 2017-06-07 17:51:27 +0200 | [diff] [blame] | 1993 | the ACQUIRE operation has completed. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1994 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 1995 | (2) RELEASE operation implication: |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1996 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 1997 | Memory operations issued before the RELEASE will be completed before the |
| 1998 | RELEASE operation has completed. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 1999 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2000 | Memory operations issued after the RELEASE may be completed before the |
| 2001 | RELEASE operation has completed. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2002 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2003 | (3) ACQUIRE vs ACQUIRE implication: |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2004 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2005 | All ACQUIRE operations issued before another ACQUIRE operation will be |
| 2006 | completed before that ACQUIRE operation. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2007 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2008 | (4) ACQUIRE vs RELEASE implication: |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2009 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2010 | All ACQUIRE operations issued before a RELEASE operation will be |
| 2011 | completed before the RELEASE operation. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2012 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2013 | (5) Failed conditional ACQUIRE implication: |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2014 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2015 | Certain locking variants of the ACQUIRE operation may fail, either due to |
| 2016 | being unable to get the lock immediately, or due to receiving an unblocked |
Will Deacon | 806654a | 2018-11-19 11:02:45 +0000 | [diff] [blame] | 2017 | signal while asleep waiting for the lock to become available. Failed |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2018 | locks do not imply any sort of barrier. |
| 2019 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2020 | [!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only |
| 2021 | one-way barriers is that the effects of instructions outside of a critical |
| 2022 | section may seep into the inside of the critical section. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2023 | |
An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
| 2025 | because it is possible for an access preceding the ACQUIRE to happen after the |
| 2026 | ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and |
| 2027 | the two accesses can themselves then cross: |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 2028 | |
| 2029 | *A = a; |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2030 | ACQUIRE M |
| 2031 | RELEASE M |
David Howells | 670bd95 | 2006-06-10 09:54:12 -0700 | [diff] [blame] | 2032 | *B = b; |
| 2033 | |
| 2034 | may occur as: |
| 2035 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2036 | ACQUIRE M, STORE *B, STORE *A, RELEASE M |
Paul E. McKenney | 17eb88e | 2013-12-11 13:59:09 -0800 | [diff] [blame] | 2037 | |
Paul E. McKenney | 8dd853d | 2014-02-23 08:34:24 -0800 | [diff] [blame] | 2038 | When the ACQUIRE and RELEASE are a lock acquisition and release, |
| 2039 | respectively, this same reordering can occur if the lock's ACQUIRE and |
| 2040 | RELEASE are to the same lock variable, but only from the perspective of |
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.
Paul E. McKenney | 17eb88e | 2013-12-11 13:59:09 -0800 | [diff] [blame] | 2043 | |
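For instance, with a real lock (the lock and variables here are
hypothetical), another CPU that is not holding mylock might observe the
store to *B before the store to *A:

	WRITE_ONCE(*A, a);
	spin_lock(&mylock);	/* ACQUIRE */
	spin_unlock(&mylock);	/* RELEASE */
	WRITE_ONCE(*B, b);
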
Paul E. McKenney | 12d560f | 2015-07-14 18:35:23 -0700 | [diff] [blame] | 2044 | Similarly, the reverse case of a RELEASE followed by an ACQUIRE does |
| 2045 | not imply a full memory barrier. Therefore, the CPU's execution of the |
| 2046 | critical sections corresponding to the RELEASE and the ACQUIRE can cross, |
| 2047 | so that: |
Paul E. McKenney | 17eb88e | 2013-12-11 13:59:09 -0800 | [diff] [blame] | 2048 | |
| 2049 | *A = a; |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2050 | RELEASE M |
| 2051 | ACQUIRE N |
Paul E. McKenney | 17eb88e | 2013-12-11 13:59:09 -0800 | [diff] [blame] | 2052 | *B = b; |
| 2053 | |
| 2054 | could occur as: |
| 2055 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2056 | ACQUIRE N, STORE *B, STORE *A, RELEASE M |
Paul E. McKenney | 17eb88e | 2013-12-11 13:59:09 -0800 | [diff] [blame] | 2057 | |
Paul E. McKenney | 8dd853d | 2014-02-23 08:34:24 -0800 | [diff] [blame] | 2058 | It might appear that this reordering could introduce a deadlock. |
| 2059 | However, this cannot happen because if such a deadlock threatened, |
| 2060 | the RELEASE would simply complete, thereby avoiding the deadlock. |
| 2061 | |
| 2062 | Why does this work? |
| 2063 | |
| 2064 | One key point is that we are only talking about the CPU doing |
| 2065 | the reordering, not the compiler. If the compiler (or, for |
| 2066 | that matter, the developer) switched the operations, deadlock |
| 2067 | -could- occur. |
| 2068 | |
| 2069 | But suppose the CPU reordered the operations. In this case, |
| 2070 | the unlock precedes the lock in the assembly code. The CPU |
| 2071 | simply elected to try executing the later lock operation first. |
| 2072 | If there is a deadlock, this lock operation will simply spin (or |
| 2073 | try to sleep, but more on that later). The CPU will eventually |
| 2074 | execute the unlock operation (which preceded the lock operation |
| 2075 | in the assembly code), which will unravel the potential deadlock, |
| 2076 | allowing the lock operation to succeed. |
| 2077 | |
| 2078 | But what if the lock is a sleeplock? In that case, the code will |
| 2079 | try to enter the scheduler, where it will eventually encounter |
| 2080 | a memory barrier, which will force the earlier unlock operation |
| 2081 | to complete, again unraveling the deadlock. There might be |
| 2082 | a sleep-unlock race, but the locking primitive needs to resolve |
| 2083 | such races properly in any case. |
| 2084 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2085 | Locks and semaphores may not provide any guarantee of ordering on UP compiled |
| 2086 | systems, and so cannot be counted on in such a situation to actually achieve |
| 2087 | anything at all - especially with respect to I/O accesses - unless combined |
| 2088 | with interrupt disabling operations. |
| 2089 | |
SeongJae Park | d7cab36 | 2016-08-11 11:17:41 -0700 | [diff] [blame] | 2090 | See also the section on "Inter-CPU acquiring barrier effects". |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2091 | |
| 2092 | |
| 2093 | As an example, consider the following: |
| 2094 | |
| 2095 | *A = a; |
| 2096 | *B = b; |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2097 | ACQUIRE |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2098 | *C = c; |
| 2099 | *D = d; |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2100 | RELEASE |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2101 | *E = e; |
| 2102 | *F = f; |
| 2103 | |
| 2104 | The following sequence of events is acceptable: |
| 2105 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2106 | ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2107 | |
| 2108 | [+] Note that {*F,*A} indicates a combined access. |
| 2109 | |
| 2110 | But none of the following are: |
| 2111 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2112 | {*F,*A}, *B, ACQUIRE, *C, *D, RELEASE, *E |
| 2113 | *A, *B, *C, ACQUIRE, *D, RELEASE, *E, *F |
| 2114 | *A, *B, ACQUIRE, *C, RELEASE, *D, *E, *F |
| 2115 | *B, ACQUIRE, *C, *D, RELEASE, {*F,*A}, *E |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2116 | |
| 2117 | |
| 2118 | |
| 2119 | INTERRUPT DISABLING FUNCTIONS |
| 2120 | ----------------------------- |
| 2121 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2122 | Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts |
| 2123 | (RELEASE equivalent) will act as compiler barriers only. So if memory or I/O |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2124 | barriers are required in such a situation, they must be provided from some |
| 2125 | other means. |
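
For example (a sketch; the shared variables are hypothetical), disabling
interrupts does not order these two stores as seen by another CPU, so an
explicit barrier is still required:

	unsigned long flags;

	local_irq_save(flags);
	WRITE_ONCE(shared_a, 1);
	smp_wmb();		/* interrupt disabling is only a compiler barrier */
	WRITE_ONCE(shared_b, 1);
	local_irq_restore(flags);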
| 2126 | |
| 2127 | |
David Howells | 50fa610 | 2009-04-28 15:01:38 +0100 | [diff] [blame] | 2128 | SLEEP AND WAKE-UP FUNCTIONS |
| 2129 | --------------------------- |
| 2130 | |
| 2131 | Sleeping and waking on an event flagged in global data can be viewed as an |
| 2132 | interaction between two pieces of data: the task state of the task waiting for |
| 2133 | the event and the global data used to indicate the event. To make sure that |
| 2134 | these appear to happen in the right order, the primitives to begin the process |
| 2135 | of going to sleep, and the primitives to initiate a wake up imply certain |
| 2136 | barriers. |
| 2137 | |
| 2138 | Firstly, the sleeper normally follows something like this sequence of events: |
| 2139 | |
| 2140 | for (;;) { |
| 2141 | set_current_state(TASK_UNINTERRUPTIBLE); |
| 2142 | if (event_indicated) |
| 2143 | break; |
| 2144 | schedule(); |
| 2145 | } |
| 2146 | |
| 2147 | A general memory barrier is interpolated automatically by set_current_state() |
| 2148 | after it has altered the task state: |
| 2149 | |
| 2150 | CPU 1 |
| 2151 | =============================== |
| 2152 | set_current_state(); |
Peter Zijlstra | b92b8b3 | 2015-05-12 10:51:55 +0200 | [diff] [blame] | 2153 | smp_store_mb(); |
David Howells | 50fa610 | 2009-04-28 15:01:38 +0100 | [diff] [blame] | 2154 | STORE current->state |
| 2155 | <general barrier> |
| 2156 | LOAD event_indicated |
| 2157 | |
| 2158 | set_current_state() may be wrapped by: |
| 2159 | |
| 2160 | prepare_to_wait(); |
| 2161 | prepare_to_wait_exclusive(); |
| 2162 | |
| 2163 | which therefore also imply a general memory barrier after setting the state. |
| 2164 | The whole sequence above is available in various canned forms, all of which |
| 2165 | interpolate the memory barrier in the right place: |
| 2166 | |
| 2167 | wait_event(); |
| 2168 | wait_event_interruptible(); |
| 2169 | wait_event_interruptible_exclusive(); |
| 2170 | wait_event_interruptible_timeout(); |
| 2171 | wait_event_killable(); |
| 2172 | wait_event_timeout(); |
| 2173 | wait_on_bit(); |
| 2174 | wait_on_bit_lock(); |
| 2175 | |
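For example, the explicit sleep loop shown above collapses to a single call
(reusing the identifiers from this section):

	wait_event(event_wait_queue, event_indicated);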
| 2176 | |
| 2177 | Secondly, code that performs a wake up normally follows something like this: |
| 2178 | |
| 2179 | event_indicated = 1; |
| 2180 | wake_up(&event_wait_queue); |
| 2181 | |
| 2182 | or: |
| 2183 | |
| 2184 | event_indicated = 1; |
| 2185 | wake_up_process(event_daemon); |
| 2186 | |
Andrea Parri | 7696f99 | 2018-07-16 11:06:03 -0700 | [diff] [blame] | 2187 | A general memory barrier is executed by wake_up() if it wakes something up. |
| 2188 | If it doesn't wake anything up then a memory barrier may or may not be |
| 2189 | executed; you must not rely on it. The barrier occurs before the task state |
is accessed; in particular, it sits between the STORE to indicate the event
| 2191 | and the STORE to set TASK_RUNNING: |
David Howells | 50fa610 | 2009-04-28 15:01:38 +0100 | [diff] [blame] | 2192 | |
Andrea Parri | 7696f99 | 2018-07-16 11:06:03 -0700 | [diff] [blame] | 2193 | CPU 1 (Sleeper) CPU 2 (Waker) |
David Howells | 50fa610 | 2009-04-28 15:01:38 +0100 | [diff] [blame] | 2194 | =============================== =============================== |
| 2195 | set_current_state(); STORE event_indicated |
Peter Zijlstra | b92b8b3 | 2015-05-12 10:51:55 +0200 | [diff] [blame] | 2196 | smp_store_mb(); wake_up(); |
Andrea Parri | 7696f99 | 2018-07-16 11:06:03 -0700 | [diff] [blame] | 2197 | STORE current->state ... |
| 2198 | <general barrier> <general barrier> |
| 2199 | LOAD event_indicated if ((LOAD task->state) & TASK_NORMAL) |
| 2200 | STORE task->state |
David Howells | 50fa610 | 2009-04-28 15:01:38 +0100 | [diff] [blame] | 2201 | |
Andrea Parri | 7696f99 | 2018-07-16 11:06:03 -0700 | [diff] [blame] | 2202 | where "task" is the thread being woken up and it equals CPU 1's "current". |
| 2203 | |
| 2204 | To repeat, a general memory barrier is guaranteed to be executed by wake_up() |
| 2205 | if something is actually awakened, but otherwise there is no such guarantee. |
| 2206 | To see this, consider the following sequence of events, where X and Y are both |
| 2207 | initially zero: |
Paul E. McKenney | 5726ce0 | 2014-05-13 10:14:51 -0700 | [diff] [blame] | 2208 | |
| 2209 | CPU 1 CPU 2 |
| 2210 | =============================== =============================== |
Andrea Parri | 7696f99 | 2018-07-16 11:06:03 -0700 | [diff] [blame] | 2211 | X = 1; Y = 1; |
Paul E. McKenney | 5726ce0 | 2014-05-13 10:14:51 -0700 | [diff] [blame] | 2212 | smp_mb(); wake_up(); |
Andrea Parri | 7696f99 | 2018-07-16 11:06:03 -0700 | [diff] [blame] | 2213 | LOAD Y LOAD X |
Paul E. McKenney | 5726ce0 | 2014-05-13 10:14:51 -0700 | [diff] [blame] | 2214 | |
Andrea Parri | 7696f99 | 2018-07-16 11:06:03 -0700 | [diff] [blame] | 2215 | If a wakeup does occur, one (at least) of the two loads must see 1. If, on |
| 2216 | the other hand, a wakeup does not occur, both loads might see 0. |
| 2217 | |
| 2218 | wake_up_process() always executes a general memory barrier. The barrier again |
| 2219 | occurs before the task state is accessed. In particular, if the wake_up() in |
| 2220 | the previous snippet were replaced by a call to wake_up_process() then one of |
| 2221 | the two loads would be guaranteed to see 1. |
Paul E. McKenney | 5726ce0 | 2014-05-13 10:14:51 -0700 | [diff] [blame] | 2222 | |
David Howells | 50fa610 | 2009-04-28 15:01:38 +0100 | [diff] [blame] | 2223 | The available waker functions include: |
| 2224 | |
| 2225 | complete(); |
| 2226 | wake_up(); |
| 2227 | wake_up_all(); |
| 2228 | wake_up_bit(); |
| 2229 | wake_up_interruptible(); |
| 2230 | wake_up_interruptible_all(); |
| 2231 | wake_up_interruptible_nr(); |
| 2232 | wake_up_interruptible_poll(); |
| 2233 | wake_up_interruptible_sync(); |
| 2234 | wake_up_interruptible_sync_poll(); |
| 2235 | wake_up_locked(); |
| 2236 | wake_up_locked_poll(); |
| 2237 | wake_up_nr(); |
| 2238 | wake_up_poll(); |
| 2239 | wake_up_process(); |
| 2240 | |
In terms of memory ordering, these functions all provide the same guarantees
as a wake_up() (or stronger).
David Howells | 50fa610 | 2009-04-28 15:01:38 +0100 | [diff] [blame] | 2243 | |
| 2244 | [!] Note that the memory barriers implied by the sleeper and the waker do _not_ |
| 2245 | order multiple stores before the wake-up with respect to loads of those stored |
| 2246 | values after the sleeper has called set_current_state(). For instance, if the |
| 2247 | sleeper does: |
| 2248 | |
| 2249 | set_current_state(TASK_INTERRUPTIBLE); |
| 2250 | if (event_indicated) |
| 2251 | break; |
| 2252 | __set_current_state(TASK_RUNNING); |
| 2253 | do_something(my_data); |
| 2254 | |
| 2255 | and the waker does: |
| 2256 | |
| 2257 | my_data = value; |
| 2258 | event_indicated = 1; |
| 2259 | wake_up(&event_wait_queue); |
| 2260 | |
| 2261 | there's no guarantee that the change to event_indicated will be perceived by |
| 2262 | the sleeper as coming after the change to my_data. In such a circumstance, the |
| 2263 | code on both sides must interpolate its own memory barriers between the |
| 2264 | separate data accesses. Thus the above sleeper ought to do: |
| 2265 | |
| 2266 | set_current_state(TASK_INTERRUPTIBLE); |
| 2267 | if (event_indicated) { |
| 2268 | smp_rmb(); |
| 2269 | do_something(my_data); |
| 2270 | } |
| 2271 | |
| 2272 | and the waker should do: |
| 2273 | |
| 2274 | my_data = value; |
| 2275 | smp_wmb(); |
| 2276 | event_indicated = 1; |
| 2277 | wake_up(&event_wait_queue); |
| 2278 | |
| 2279 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2280 | MISCELLANEOUS FUNCTIONS |
| 2281 | ----------------------- |
| 2282 | |
| 2283 | Other functions that imply barriers: |
| 2284 | |
| 2285 | (*) schedule() and similar imply full memory barriers. |
| 2286 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2287 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2288 | =================================== |
| 2289 | INTER-CPU ACQUIRING BARRIER EFFECTS |
| 2290 | =================================== |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2291 | |
| 2292 | On SMP systems locking primitives give a more substantial form of barrier: one |
| 2293 | that does affect memory access ordering on other CPUs, within the context of |
| 2294 | conflict on any particular lock. |
| 2295 | |
| 2296 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2297 | ACQUIRES VS MEMORY ACCESSES |
| 2298 | --------------------------- |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2299 | |
Aneesh Kumar | 79afecf | 2006-05-15 09:44:36 -0700 | [diff] [blame] | 2300 | Consider the following: the system has a pair of spinlocks (M) and (Q), and |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2301 | three CPUs; then should the following sequence of events occur: |
| 2302 | |
| 2303 | CPU 1 CPU 2 |
| 2304 | =============================== =============================== |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 2305 | WRITE_ONCE(*A, a); WRITE_ONCE(*E, e); |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2306 | ACQUIRE M ACQUIRE Q |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 2307 | WRITE_ONCE(*B, b); WRITE_ONCE(*F, f); |
| 2308 | WRITE_ONCE(*C, c); WRITE_ONCE(*G, g); |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2309 | RELEASE M RELEASE Q |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 2310 | WRITE_ONCE(*D, d); WRITE_ONCE(*H, h); |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2311 | |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 2312 | Then there is no guarantee as to what order CPU 3 will see the accesses to *A |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2313 | through *H occur in, other than the constraints imposed by the separate locks |
SeongJae Park | 0b6fa34 | 2016-04-12 08:52:53 -0700 | [diff] [blame] | 2314 | on the separate CPUs. It might, for example, see: |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2315 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2316 | *E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2317 | |
| 2318 | But it won't see any of: |
| 2319 | |
Peter Zijlstra | 2e4f538 | 2013-11-06 14:57:36 +0100 | [diff] [blame] | 2320 | *B, *C or *D preceding ACQUIRE M |
| 2321 | *A, *B or *C following RELEASE M |
| 2322 | *F, *G or *H preceding ACQUIRE Q |
| 2323 | *E, *F or *G following RELEASE Q |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2324 | |
| 2325 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2326 | ================================= |
| 2327 | WHERE ARE MEMORY BARRIERS NEEDED? |
| 2328 | ================================= |
| 2329 | |
| 2330 | Under normal operation, memory operation reordering is generally not going to |
| 2331 | be a problem as a single-threaded linear piece of code will still appear to |
David Howells | 50fa610 | 2009-04-28 15:01:38 +0100 | [diff] [blame] | 2332 | work correctly, even if it's in an SMP kernel. There are, however, four |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2333 | circumstances in which reordering definitely _could_ be a problem: |
| 2334 | |
| 2335 | (*) Interprocessor interaction. |
| 2336 | |
| 2337 | (*) Atomic operations. |
| 2338 | |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 2339 | (*) Accessing devices. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2340 | |
| 2341 | (*) Interrupts. |
| 2342 | |
| 2343 | |
| 2344 | INTERPROCESSOR INTERACTION |
| 2345 | -------------------------- |
| 2346 | |
| 2347 | When there's a system with more than one processor, more than one CPU in the |
| 2348 | system may be working on the same data set at the same time. This can cause |
| 2349 | synchronisation problems, and the usual way of dealing with them is to use |
| 2350 | locks. Locks, however, are quite expensive, and so it may be preferable to |
| 2351 | operate without the use of a lock if at all possible. In such a case |
| 2352 | operations that affect both CPUs may have to be carefully ordered to prevent |
| 2353 | a malfunction. |
| 2354 | |
| 2355 | Consider, for example, the R/W semaphore slow path. Here a waiting process is |
| 2356 | queued on the semaphore, by virtue of it having a piece of its stack linked to |
| 2357 | the semaphore's list of waiting processes: |
| 2358 | |
| 2359 | struct rw_semaphore { |
| 2360 | ... |
| 2361 | spinlock_t lock; |
| 2362 | struct list_head waiters; |
| 2363 | }; |
| 2364 | |
| 2365 | struct rwsem_waiter { |
| 2366 | struct list_head list; |
| 2367 | struct task_struct *task; |
| 2368 | }; |
| 2369 | |
| 2370 | To wake up a particular waiter, the up_read() or up_write() functions have to: |
| 2371 | |
| 2372 | (1) read the next pointer from this waiter's record to know as to where the |
| 2373 | next waiter record is; |
| 2374 | |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 2375 | (2) read the pointer to the waiter's task structure; |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2376 | |
| 2377 | (3) clear the task pointer to tell the waiter it has been given the semaphore; |
| 2378 | |
| 2379 | (4) call wake_up_process() on the task; and |
| 2380 | |
| 2381 | (5) release the reference held on the waiter's task struct. |
| 2382 | |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 2383 | In other words, it has to perform this sequence of events: |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2384 | |
| 2385 | LOAD waiter->list.next; |
| 2386 | LOAD waiter->task; |
| 2387 | STORE waiter->task; |
| 2388 | CALL wakeup |
| 2389 | RELEASE task |
| 2390 | |
| 2391 | and if any of these steps occur out of order, then the whole thing may |
| 2392 | malfunction. |
| 2393 | |
| 2394 | Once it has queued itself and dropped the semaphore lock, the waiter does not |
| 2395 | get the lock again; it instead just waits for its task pointer to be cleared |
| 2396 | before proceeding. Since the record is on the waiter's stack, this means that |
| 2397 | if the task pointer is cleared _before_ the next pointer in the list is read, |
| 2398 | another CPU might start processing the waiter and might clobber the waiter's |
| 2399 | stack before the up*() function has a chance to read the next pointer. |
| 2400 | |
| 2401 | Consider then what might happen to the above sequence of events: |
| 2402 | |
| 2403 | CPU 1 CPU 2 |
| 2404 | =============================== =============================== |
| 2405 | down_xxx() |
| 2406 | Queue waiter |
| 2407 | Sleep |
| 2408 | up_yyy() |
| 2409 | LOAD waiter->task; |
| 2410 | STORE waiter->task; |
| 2411 | Woken up by other event |
| 2412 | <preempt> |
| 2413 | Resume processing |
| 2414 | down_xxx() returns |
| 2415 | call foo() |
| 2416 | foo() clobbers *waiter |
| 2417 | </preempt> |
| 2418 | LOAD waiter->list.next; |
| 2419 | --- OOPS --- |
| 2420 | |
| 2421 | This could be dealt with using the semaphore lock, but then the down_xxx() |
| 2422 | function has to needlessly get the spinlock again after being woken up. |
| 2423 | |
| 2424 | The way to deal with this is to insert a general SMP memory barrier: |
| 2425 | |
| 2426 | LOAD waiter->list.next; |
| 2427 | LOAD waiter->task; |
| 2428 | smp_mb(); |
| 2429 | STORE waiter->task; |
| 2430 | CALL wakeup |
| 2431 | RELEASE task |
| 2432 | |
| 2433 | In this case, the barrier makes a guarantee that all memory accesses before the |
| 2434 | barrier will appear to happen before all the memory accesses after the barrier |
| 2435 | with respect to the other CPUs on the system. It does _not_ guarantee that all |
| 2436 | the memory accesses before the barrier will be complete by the time the barrier |
| 2437 | instruction itself is complete. |
| 2438 | |
| 2439 | On a UP system - where this wouldn't be a problem - the smp_mb() is just a |
| 2440 | compiler barrier, thus making sure the compiler emits the instructions in the |
David Howells | 6bc3927 | 2006-06-25 05:49:22 -0700 | [diff] [blame] | 2441 | right order without actually intervening in the CPU. Since there's only one |
| 2442 | CPU, that CPU's dependency ordering logic will take care of everything else. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2443 | |
| 2444 | |
| 2445 | ATOMIC OPERATIONS |
| 2446 | ----------------- |
| 2447 | |
Will Deacon | 806654a | 2018-11-19 11:02:45 +0000 | [diff] [blame] | 2448 | While they are technically interprocessor interaction considerations, atomic |
David Howells | dbc8700 | 2006-04-10 22:54:23 -0700 | [diff] [blame] | 2449 | operations are noted specially as some of them imply full memory barriers and |
| 2450 | some don't, but they're very heavily relied on as a group throughout the |
| 2451 | kernel. |
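
As a hedged illustration (v is a hypothetical atomic_t), atomic RMW
operations that do not return a value imply no barrier, whereas
value-returning ones are fully ordered:

	atomic_inc(&v);			/* no implied memory barrier */
	smp_mb__after_atomic();		/* add one explicitly where needed */

	old = atomic_fetch_inc(&v);	/* value-returning RMW: fully ordered */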
| 2452 | |
Peter Zijlstra | 706eeb3 | 2017-06-12 14:50:27 +0200 | [diff] [blame] | 2453 | See Documentation/atomic_t.txt for more information. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2454 | |
| 2455 | |
| 2456 | ACCESSING DEVICES |
| 2457 | ----------------- |
| 2458 | |
| 2459 | Many devices can be memory mapped, and so appear to the CPU as if they're just |
| 2460 | a set of memory locations. To control such a device, the driver usually has to |
| 2461 | make the right memory accesses in exactly the right order. |
| 2462 | |
| 2463 | However, having a clever CPU or a clever compiler creates a potential problem |
| 2464 | in that the carefully sequenced accesses in the driver code won't reach the |
| 2465 | device in the requisite order if the CPU or the compiler thinks it is more |
| 2466 | efficient to reorder, combine or merge accesses - something that would cause |
| 2467 | the device to malfunction. |
| 2468 | |
| 2469 | Inside of the Linux kernel, I/O should be done through the appropriate accessor |
| 2470 | routines - such as inb() or writel() - which know how to make such accesses |
Will Deacon | 806654a | 2018-11-19 11:02:45 +0000 | [diff] [blame] | 2471 | appropriately sequential. While this, for the most part, renders the explicit |
Will Deacon | 9155303 | 2019-02-22 16:17:54 +0000 | [diff] [blame] | 2472 | use of memory barriers unnecessary, if the accessor functions are used to refer |
| 2473 | to an I/O memory window with relaxed memory access properties, then _mandatory_ |
| 2474 | memory barriers are required to enforce ordering. |
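
As a hedged sketch (the mapping, register offsets and values are
hypothetical), two writes through a relaxed write-combining window can be
kept in order with a mandatory barrier:

	void __iomem *regs = ioremap_wc(phys, size);

	writel_relaxed(cmd, regs + REG_CMD);
	wmb();		/* mandatory barrier: keep the two MMIO writes in order */
	writel_relaxed(go, regs + REG_DOORBELL);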
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2475 | |
Helmut Grohne | 0fe397f | 2017-05-03 11:51:46 +0200 | [diff] [blame] | 2476 | See Documentation/driver-api/device-io.rst for more information. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2477 | |
| 2478 | |
| 2479 | INTERRUPTS |
| 2480 | ---------- |
| 2481 | |
| 2482 | A driver may be interrupted by its own interrupt service routine, and thus the |
| 2483 | two parts of the driver may interfere with each other's attempts to control or |
| 2484 | access the device. |
| 2485 | |
| 2486 | This may be alleviated - at least in part - by disabling local interrupts (a |
| 2487 | form of locking), such that the critical operations are all contained within |
Will Deacon | 806654a | 2018-11-19 11:02:45 +0000 | [diff] [blame] | 2488 | the interrupt-disabled section in the driver. While the driver's interrupt |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2489 | routine is executing, the driver's core may not run on the same CPU, and its |
| 2490 | interrupt is not permitted to happen again until the current interrupt has been |
handled; thus, the interrupt handler does not need to lock against that.
| 2492 | |
| 2493 | However, consider a driver that was talking to an ethernet card that sports an |
| 2494 | address register and a data register. If that driver's core talks to the card |
| 2495 | under interrupt-disablement and then the driver's interrupt handler is invoked: |
| 2496 | |
| 2497 | LOCAL IRQ DISABLE |
writew(3, ADDR);
writew(y, DATA);
LOCAL IRQ ENABLE
<interrupt>
writew(4, ADDR);
q = readw(DATA);
| 2504 | </interrupt> |
| 2505 | |
| 2506 | The store to the data register might happen after the second store to the |
| 2507 | address register if ordering rules are sufficiently relaxed: |
| 2508 | |
| 2509 | STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA |
| 2510 | |
| 2511 | |
| 2512 | If ordering rules are relaxed, it must be assumed that accesses done inside an |
| 2513 | interrupt disabled section may leak outside of it and may interleave with |
| 2514 | accesses performed in an interrupt - and vice versa - unless implicit or |
| 2515 | explicit barriers are used. |
| 2516 | |
| 2517 | Normally this won't be a problem because the I/O accesses done inside such |
| 2518 | sections will include synchronous load operations on strictly ordered I/O |
Will Deacon | 9155303 | 2019-02-22 16:17:54 +0000 | [diff] [blame] | 2519 | registers that form implicit I/O barriers. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2520 | |
| 2521 | |
| 2522 | A similar situation may occur between an interrupt routine and two routines |
SeongJae Park | 0b6fa34 | 2016-04-12 08:52:53 -0700 | [diff] [blame] | 2523 | running on separate CPUs that communicate with each other. If such a case is |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2524 | likely, then interrupt-disabling locks should be used to guarantee ordering. |
| 2525 | |
| 2526 | |
| 2527 | ========================== |
| 2528 | KERNEL I/O BARRIER EFFECTS |
| 2529 | ========================== |
| 2530 | |
Will Deacon | 4614bbd | 2019-02-11 15:24:56 +0000 | [diff] [blame] | 2531 | Interfacing with peripherals via I/O accesses is deeply architecture and device |
| 2532 | specific. Therefore, drivers which are inherently non-portable may rely on |
| 2533 | specific behaviours of their target systems in order to achieve synchronization |
| 2534 | in the most lightweight manner possible. For drivers intending to be portable |
| 2535 | between multiple architectures and bus implementations, the kernel offers a |
| 2536 | series of accessor functions that provide various degrees of ordering |
| 2537 | guarantees: |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2538 | |
| 2539 | (*) readX(), writeX(): |
| 2540 | |
Will Deacon | 0cde62a | 2019-04-10 14:01:06 +0100 | [diff] [blame] | 2541 | The readX() and writeX() MMIO accessors take a pointer to the |
| 2542 | peripheral being accessed as an __iomem * parameter. For pointers |
| 2543 | mapped with the default I/O attributes (e.g. those returned by |
| 2544 | ioremap()), the ordering guarantees are as follows: |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2545 | |
Will Deacon | 0cde62a | 2019-04-10 14:01:06 +0100 | [diff] [blame] | 2546 | 1. All readX() and writeX() accesses to the same peripheral are ordered |
Will Deacon | 9726840 | 2019-04-12 13:42:18 +0100 | [diff] [blame] | 2547 | with respect to each other. This ensures that MMIO register accesses |
| 2548 | by the same CPU thread to a particular device will arrive in program |
| 2549 | order. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2550 | |
Will Deacon | 9726840 | 2019-04-12 13:42:18 +0100 | [diff] [blame] | 2551 | 2. A writeX() issued by a CPU thread holding a spinlock is ordered |
| 2552 | before a writeX() to the same peripheral from another CPU thread |
| 2553 | issued after a later acquisition of the same spinlock. This ensures |
| 2554 | that MMIO register writes to a particular device issued while holding |
| 2555 | a spinlock will arrive in an order consistent with acquisitions of |
| 2556 | the lock. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2557 | |
Will Deacon | 9726840 | 2019-04-12 13:42:18 +0100 | [diff] [blame] | 2558 | 3. A writeX() by a CPU thread to the peripheral will first wait for the |
| 2559 | completion of all prior writes to memory either issued by, or |
| 2560 | propagated to, the same thread. This ensures that writes by the CPU |
| 2561 | to an outbound DMA buffer allocated by dma_alloc_coherent() will be |
| 2562 | visible to a DMA engine when the CPU writes to its MMIO control |
| 2563 | register to trigger the transfer. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2564 | |
Will Deacon | 9726840 | 2019-04-12 13:42:18 +0100 | [diff] [blame] | 2565 | 4. A readX() by a CPU thread from the peripheral will complete before |
| 2566 | any subsequent reads from memory by the same thread can begin. This |
| 2567 | ensures that reads by the CPU from an incoming DMA buffer allocated |
| 2568 | by dma_alloc_coherent() will not see stale data after reading from |
| 2569 | the DMA engine's MMIO status register to establish that the DMA |
| 2570 | transfer has completed. |
| 2571 | |
| 2572 | 5. A readX() by a CPU thread from the peripheral will complete before |
| 2573 | any subsequent delay() loop can begin execution on the same thread. |
| 2574 | This ensures that two MMIO register writes by the CPU to a peripheral |
| 2575 | will arrive at least 1us apart if the first write is immediately read |
| 2576 | back with readX() and udelay(1) is called prior to the second |
| 2577 | writeX(): |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2578 | |
Will Deacon | 0cde62a | 2019-04-10 14:01:06 +0100 | [diff] [blame] | 2579 | writel(42, DEVICE_REGISTER_0); // Arrives at the device... |
| 2580 | readl(DEVICE_REGISTER_0); |
| 2581 | udelay(1); |
| 2582 | writel(42, DEVICE_REGISTER_1); // ...at least 1us before this. |
| 2583 | |
| 2584 | The ordering properties of __iomem pointers obtained with non-default |
| 2585 | attributes (e.g. those returned by ioremap_wc()) are specific to the |
| 2586 | underlying architecture and therefore the guarantees listed above cannot |
| 2587 | generally be relied upon for accesses to these types of mappings. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2588 | |
Will Deacon | 4614bbd | 2019-02-11 15:24:56 +0000 | [diff] [blame] | 2589 | (*) readX_relaxed(), writeX_relaxed(): |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2590 | |
Will Deacon | 0cde62a | 2019-04-10 14:01:06 +0100 | [diff] [blame] | 2591 | These are similar to readX() and writeX(), but provide weaker memory |
| 2592 | ordering guarantees. Specifically, they do not guarantee ordering with |
Will Deacon | 9726840 | 2019-04-12 13:42:18 +0100 | [diff] [blame] | 2593 | respect to locking, normal memory accesses or delay() loops (i.e. |
| 2594 | bullets 2-5 above) but they are still guaranteed to be ordered with |
| 2595 | respect to other accesses from the same CPU thread to the same |
| 2596 | peripheral when operating on __iomem pointers mapped with the default |
| 2597 | I/O attributes. |
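
     A common pattern, sketched here with a hypothetical __iomem base and
     register offsets, is to use the relaxed accessors for a batch of
     register writes to one device and a final non-relaxed writeX() for the
     write that must also be ordered against prior writes to normal memory:

	writel_relaxed(lower_32_bits(dma), base + REG_ADDR_LO);
	writel_relaxed(upper_32_bits(dma), base + REG_ADDR_HI);
	writel(CMD_START, base + REG_CTRL);	/* also ordered after prior memory writes */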
Will Deacon | 4614bbd | 2019-02-11 15:24:56 +0000 | [diff] [blame] | 2598 | |
| 2599 | (*) readsX(), writesX(): |
| 2600 | |
Will Deacon | 0cde62a | 2019-04-10 14:01:06 +0100 | [diff] [blame] | 2601 | The readsX() and writesX() MMIO accessors are designed for accessing |
| 2602 | register-based, memory-mapped FIFOs residing on peripherals that are not |
| 2603 | capable of performing DMA. Consequently, they provide only the ordering |
| 2604 | guarantees of readX_relaxed() and writeX_relaxed(), as documented above. |
Will Deacon | 4614bbd | 2019-02-11 15:24:56 +0000 | [diff] [blame] | 2605 | |
| 2606 | (*) inX(), outX(): |
| 2607 | |
Will Deacon | 0cde62a | 2019-04-10 14:01:06 +0100 | [diff] [blame] | 2608 | The inX() and outX() accessors are intended to access legacy port-mapped |
| 2609 | I/O peripherals, which may require special instructions on some |
| 2610 | architectures (notably x86). The port number of the peripheral being |
| 2611 | accessed is passed as an argument. |
Will Deacon | 4614bbd | 2019-02-11 15:24:56 +0000 | [diff] [blame] | 2612 | |
Will Deacon | 0cde62a | 2019-04-10 14:01:06 +0100 | [diff] [blame] | 2613 | Since many CPU architectures ultimately access these peripherals via an |
| 2614 | internal virtual memory mapping, the portable ordering guarantees |
| 2615 | provided by inX() and outX() are the same as those provided by readX() |
| 2616 | and writeX() respectively when accessing a mapping with the default I/O |
| 2617 | attributes. |
Will Deacon | 4614bbd | 2019-02-11 15:24:56 +0000 | [diff] [blame] | 2618 | |
Will Deacon | 0cde62a | 2019-04-10 14:01:06 +0100 | [diff] [blame] | 2619 | Device drivers may expect outX() to emit a non-posted write transaction |
| 2620 | that waits for a completion response from the I/O peripheral before |
| 2621 | returning. This is not guaranteed by all architectures and is therefore |
| 2622 | not part of the portable ordering semantics. |
Will Deacon | 4614bbd | 2019-02-11 15:24:56 +0000 | [diff] [blame] | 2623 | |
| 2624 | (*) insX(), outsX(): |
| 2625 | |
Will Deacon | 0cde62a | 2019-04-10 14:01:06 +0100 | [diff] [blame] | 2626 | As above, the insX() and outsX() accessors provide the same ordering |
| 2627 | guarantees as readsX() and writesX() respectively when accessing a |
| 2628 | mapping with the default I/O attributes. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2629 | |
Will Deacon | 0cde62a | 2019-04-10 14:01:06 +0100 | [diff] [blame] | 2630 | (*) ioreadX(), iowriteX(): |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2631 | |
Will Deacon | 0cde62a | 2019-04-10 14:01:06 +0100 | [diff] [blame] | 2632 | These will perform appropriately for the type of access they're actually |
| 2633 | doing, be it inX()/outX() or readX()/writeX(). |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2634 | |
Will Deacon | 9726840 | 2019-04-12 13:42:18 +0100 | [diff] [blame] | 2635 | With the exception of the string accessors (insX(), outsX(), readsX() and |
| 2636 | writesX()), all of the above assume that the underlying peripheral is |
| 2637 | little-endian and will therefore perform byte-swapping operations on big-endian |
| 2638 | architectures. |
Will Deacon | 4614bbd | 2019-02-11 15:24:56 +0000 | [diff] [blame] | 2639 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2640 | |
| 2641 | ======================================== |
| 2642 | ASSUMED MINIMUM EXECUTION ORDERING MODEL |
| 2643 | ======================================== |
| 2644 | |
| 2645 | It has to be assumed that the conceptual CPU is weakly-ordered but that it will |
| 2646 | maintain the appearance of program causality with respect to itself. Some CPUs |
| 2647 | (such as i386 or x86_64) are more constrained than others (such as powerpc or |
| 2648 | frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside |
| 2649 | of arch-specific code. |
| 2650 | |
| 2651 | This means that it must be considered that the CPU will execute its instruction |
| 2652 | stream in any order it feels like - or even in parallel - provided that if an |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 2653 | instruction in the stream depends on an earlier instruction, then that |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2654 | earlier instruction must be sufficiently complete[*] before the later |
| 2655 | instruction may proceed; in other words: provided that the appearance of |
| 2656 | causality is maintained. |
| 2657 | |
| 2658 | [*] Some instructions have more than one effect - such as changing the |
| 2659 | condition codes, changing registers or changing memory - and different |
| 2660 | instructions may depend on different effects. |
| 2661 | |
| 2662 | A CPU may also discard any instruction sequence that winds up having no |
| 2663 | ultimate effect. For example, if two adjacent instructions both load an |
| 2664 | immediate value into the same register, the first may be discarded. |
| 2665 | |
| 2666 | |
Similarly, it has to be assumed that the compiler might reorder the instruction
| 2668 | stream in any way it sees fit, again provided the appearance of causality is |
| 2669 | maintained. |
| 2670 | |
| 2671 | |
| 2672 | ============================ |
| 2673 | THE EFFECTS OF THE CPU CACHE |
| 2674 | ============================ |
| 2675 | |
| 2676 | The way cached memory operations are perceived across the system is affected to |
| 2677 | a certain extent by the caches that lie between CPUs and memory, and by the |
| 2678 | memory coherence system that maintains the consistency of state in the system. |
| 2679 | |
| 2680 | As far as the way a CPU interacts with another part of the system through the |
| 2681 | caches goes, the memory system has to include the CPU's caches, and memory |
| 2682 | barriers for the most part act at the interface between the CPU and its cache |
| 2683 | (memory barriers logically act on the dotted line in the following diagram): |
| 2684 | |
| 2685 | <--- CPU ---> : <----------- Memory -----------> |
| 2686 | : |
| 2687 | +--------+ +--------+ : +--------+ +-----------+ |
| 2688 | | | | | : | | | | +--------+ |
Ingo Molnar | e0edc78 | 2013-11-22 11:24:53 +0100 | [diff] [blame] | 2689 | | CPU | | Memory | : | CPU | | | | | |
| 2690 | | Core |--->| Access |----->| Cache |<-->| | | | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2691 | | | | Queue | : | | | |--->| Memory | |
Ingo Molnar | e0edc78 | 2013-11-22 11:24:53 +0100 | [diff] [blame] | 2692 | | | | | : | | | | | | |
| 2693 | +--------+ +--------+ : +--------+ | | | | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2694 | : | Cache | +--------+ |
| 2695 | : | Coherency | |
| 2696 | : | Mechanism | +--------+ |
| 2697 | +--------+ +--------+ : +--------+ | | | | |
| 2698 | | | | | : | | | | | | |
| 2699 | | CPU | | Memory | : | CPU | | |--->| Device | |
Ingo Molnar | e0edc78 | 2013-11-22 11:24:53 +0100 | [diff] [blame] | 2700 | | Core |--->| Access |----->| Cache |<-->| | | | |
| 2701 | | | | Queue | : | | | | | | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2702 | | | | | : | | | | +--------+ |
| 2703 | +--------+ +--------+ : +--------+ +-----------+ |
| 2704 | : |
| 2705 | : |
| 2706 | |
| 2707 | Although any particular load or store may not actually appear outside of the |
| 2708 | CPU that issued it since it may have been satisfied within the CPU's own cache, |
| 2709 | it will still appear as if the full memory access had taken place as far as the |
| 2710 | other CPUs are concerned since the cache coherency mechanisms will migrate the |
| 2711 | cacheline over to the accessing CPU and propagate the effects upon conflict. |
| 2712 | |
| 2713 | The CPU core may execute instructions in any order it deems fit, provided the |
| 2714 | expected program causality appears to be maintained. Some of the instructions |
| 2715 | generate load and store operations which then go into the queue of memory |
| 2716 | accesses to be performed. The core may place these in the queue in any order |
| 2717 | it wishes, and continue execution until it is forced to wait for an instruction |
| 2718 | to complete. |
| 2719 | |
| 2720 | What memory barriers are concerned with is controlling the order in which |
| 2721 | accesses cross from the CPU side of things to the memory side of things, and |
| 2722 | the order in which the effects are perceived to happen by the other observers |
| 2723 | in the system. |
| 2724 | |
| 2725 | [!] Memory barriers are _not_ needed within a given CPU, as CPUs always see |
| 2726 | their own loads and stores as if they had happened in program order. |
| 2727 | |
| 2728 | [!] MMIO or other device accesses may bypass the cache system. This depends on |
| 2729 | the properties of the memory window through which devices are accessed and/or |
| 2730 | the use of any special device communication instructions the CPU may have. |
| 2731 | |
| 2732 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2733 | CACHE COHERENCY VS DMA |
| 2734 | ---------------------- |
| 2735 | |
| 2736 | Not all systems maintain cache coherency with respect to devices doing DMA. In |
| 2737 | such cases, a device attempting DMA may obtain stale data from RAM because |
| 2738 | dirty cache lines may be resident in the caches of various CPUs, and may not |
| 2739 | have been written back to RAM yet. To deal with this, the appropriate part of |
| 2740 | the kernel must flush the overlapping bits of cache on each CPU (and maybe |
| 2741 | invalidate them as well). |
| 2742 | |
| 2743 | In addition, the data DMA'd to RAM by a device may be overwritten by dirty |
| 2744 | cache lines being written back to RAM from a CPU's cache after the device has |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 2745 | installed its own data, or cache lines present in the CPU's cache may simply |
obscure the fact that RAM has been updated, until such time as the cacheline
| 2747 | is discarded from the CPU's cache and reloaded. To deal with this, the |
| 2748 | appropriate part of the kernel must invalidate the overlapping bits of the |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2749 | cache on each CPU. |
| 2750 | |
Mauro Carvalho Chehab | de0f51e | 2018-05-07 06:35:41 -0300 | [diff] [blame] | 2751 | See Documentation/core-api/cachetlb.rst for more information on cache management. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2752 | |
| 2753 | |
| 2754 | CACHE COHERENCY VS MMIO |
| 2755 | ----------------------- |
| 2756 | |
| 2757 | Memory mapped I/O usually takes place through memory locations that are part of |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 2758 | a window in the CPU's memory space that has different properties assigned than |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2759 | the usual RAM directed window. |
| 2760 | |
| 2761 | Amongst these properties is usually the fact that such accesses bypass the |
| 2762 | caching entirely and go directly to the device buses. This means MMIO accesses |
| 2763 | may, in effect, overtake accesses to cached memory that were emitted earlier. |
| 2764 | A memory barrier isn't sufficient in such a case, but rather the cache must be |
| 2765 | flushed between the cached memory write and the MMIO access if the two are in |
| 2766 | any way dependent. |
| 2767 | |
| 2768 | |
| 2769 | ========================= |
| 2770 | THE THINGS CPUS GET UP TO |
| 2771 | ========================= |
| 2772 | |
| 2773 | A programmer might take it for granted that the CPU will perform memory |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 2774 | operations in exactly the order specified, so that if the CPU is, for example, |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2775 | given the following piece of code to execute: |
| 2776 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 2777 | a = READ_ONCE(*A); |
| 2778 | WRITE_ONCE(*B, b); |
| 2779 | c = READ_ONCE(*C); |
| 2780 | d = READ_ONCE(*D); |
| 2781 | WRITE_ONCE(*E, e); |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2782 | |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 2783 | they would then expect that the CPU will complete the memory operation for each |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2784 | instruction before moving on to the next one, leading to a definite sequence of |
| 2785 | operations as seen by external observers in the system: |
| 2786 | |
| 2787 | LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E. |
| 2788 | |
| 2789 | |
| 2790 | Reality is, of course, much messier. With many CPUs and compilers, the above |
| 2791 | assumption doesn't hold because: |
| 2792 | |
| 2793 | (*) loads are more likely to need to be completed immediately to permit |
| 2794 | execution progress, whereas stores can often be deferred without a |
| 2795 | problem; |
| 2796 | |
| 2797 | (*) loads may be done speculatively, and the result discarded should it prove |
| 2798 | to have been unnecessary; |
| 2799 | |
Jarek Poplawski | 81fc632 | 2007-05-23 13:58:20 -0700 | [diff] [blame] | 2800 | (*) loads may be done speculatively, leading to the result having been fetched |
| 2801 | at the wrong time in the expected sequence of events; |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2802 | |
| 2803 | (*) the order of the memory accesses may be rearranged to promote better use |
| 2804 | of the CPU buses and caches; |
| 2805 | |
| 2806 | (*) loads and stores may be combined to improve performance when talking to |
| 2807 | memory or I/O hardware that can do batched accesses of adjacent locations, |
| 2808 | thus cutting down on transaction setup costs (memory and PCI devices may |
| 2809 | both be able to do this); and |
| 2810 | |
Will Deacon | 806654a | 2018-11-19 11:02:45 +0000 | [diff] [blame] | 2811 | (*) the CPU's data cache may affect the ordering, and while cache-coherency |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2812 | mechanisms may alleviate this - once the store has actually hit the cache |
| 2813 | - there's no guarantee that the coherency management will be propagated in |
| 2814 | order to other CPUs. |
| 2815 | |
| 2816 | So what another CPU, say, might actually observe from the above piece of code |
| 2817 | is: |
| 2818 | |
| 2819 | LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B |
| 2820 | |
| 2821 | (Where "LOAD {*C,*D}" is a combined load) |
| 2822 | |
| 2823 | |
| 2824 | However, it is guaranteed that a CPU will be self-consistent: it will see its |
| 2825 | _own_ accesses appear to be correctly ordered, without the need for a memory |
| 2826 | barrier. For instance with the following code: |
| 2827 | |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 2828 | U = READ_ONCE(*A); |
| 2829 | WRITE_ONCE(*A, V); |
| 2830 | WRITE_ONCE(*A, W); |
| 2831 | X = READ_ONCE(*A); |
| 2832 | WRITE_ONCE(*A, Y); |
| 2833 | Z = READ_ONCE(*A); |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2834 | |
and assuming no intervention by an external influence, the final result will
appear to be:
| 2837 | |
| 2838 | U == the original value of *A |
| 2839 | X == W |
| 2840 | Z == Y |
| 2841 | *A == Y |
| 2842 | |
| 2843 | The code above may cause the CPU to generate the full sequence of memory |
| 2844 | accesses: |
| 2845 | |
| 2846 | U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A |
| 2847 | |
| 2848 | in that order, but, without intervention, the sequence may have almost any |
Paul E. McKenney | 9af194c | 2015-06-18 14:33:24 -0700 | [diff] [blame] | 2849 | combination of elements combined or discarded, provided the program's view |
| 2850 | of the world remains consistent. Note that READ_ONCE() and WRITE_ONCE() |
| 2851 | are -not- optional in the above example, as there are architectures |
| 2852 | where a given CPU might reorder successive loads to the same location. |
| 2853 | On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is |
| 2854 | necessary to prevent this, for example, on Itanium the volatile casts |
| 2855 | used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq |
| 2856 | and st.rel instructions (respectively) that prevent such reordering. |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2857 | |
| 2858 | The compiler may also combine, discard or defer elements of the sequence before |
| 2859 | the CPU even sees them. |
| 2860 | |
| 2861 | For instance: |
| 2862 | |
| 2863 | *A = V; |
| 2864 | *A = W; |
| 2865 | |
| 2866 | may be reduced to: |
| 2867 | |
| 2868 | *A = W; |
| 2869 | |
since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2872 | |
| 2873 | *A = Y; |
| 2874 | Z = *A; |
| 2875 | |
may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2878 | |
| 2879 | *A = Y; |
| 2880 | Z = Y; |
| 2881 | |
and the LOAD operation need never appear outside of the CPU.
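
By way of contrast, with the _ONCE() accessors the compiler must emit both
accesses (a minimal illustration; the CPU may still satisfy the load from its
own store):

	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);	/* both the store and the load must be emitted */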
| 2883 | |
| 2884 | |
| 2885 | AND THEN THERE'S THE ALPHA |
| 2886 | -------------------------- |
| 2887 | |
| 2888 | The DEC Alpha CPU is one of the most relaxed CPUs there is. Not only that, |
| 2889 | some versions of the Alpha CPU have a split data cache, permitting them to have |
two semantically-related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary, as it synchronises both
caches with the memory coherence system, thus making it seem like the pointer
change and the new data it points to become visible in the right order.
| 2894 | |
Paul E. McKenney | f28f086 | 2018-03-07 09:27:37 -0800 | [diff] [blame] | 2895 | The Alpha defines the Linux kernel's memory model, although as of v4.15 |
Will Deacon | 8ca924a | 2019-11-07 14:36:37 +0000 | [diff] [blame] | 2896 | the Linux kernel's addition of smp_mb() to READ_ONCE() on Alpha greatly |
| 2897 | reduced its impact on the memory model. |
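
As a reminder of why this matters, the classic pointer-publication pattern
from earlier in this document relies on READ_ONCE() - and hence, on Alpha, on
the barrier it now contains - to make the pointed-to data appear in order
(gp, p, q and ->data are placeholders in this minimal sketch):

	/* Publisher */
	p->data = 1;
	smp_store_release(&gp, p);

	/* Reader: on Alpha, READ_ONCE() supplies the ordering that the
	 * address dependency alone would not provide. */
	q = READ_ONCE(gp);
	if (q)
		d = q->data;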
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2898 | |
SeongJae Park | 0b6fa34 | 2016-04-12 08:52:53 -0700 | [diff] [blame] | 2899 | |
Michael S. Tsirkin | 6a65d26 | 2015-12-27 18:23:01 +0200 | [diff] [blame] | 2900 | VIRTUAL MACHINE GUESTS |
SeongJae Park | 3dbf091 | 2016-04-12 08:52:52 -0700 | [diff] [blame] | 2901 | ---------------------- |
Michael S. Tsirkin | 6a65d26 | 2015-12-27 18:23:01 +0200 | [diff] [blame] | 2902 | |
| 2903 | Guests running within virtual machines might be affected by SMP effects even if |
| 2904 | the guest itself is compiled without SMP support. This is an artifact of |
interfacing with an SMP host while running a UP kernel.  Using mandatory
| 2906 | barriers for this use-case would be possible but is often suboptimal. |
| 2907 | |
| 2908 | To handle this case optimally, low-level virt_mb() etc macros are available. |
| 2909 | These have the same effect as smp_mb() etc when SMP is enabled, but generate |
SeongJae Park | 0b6fa34 | 2016-04-12 08:52:53 -0700 | [diff] [blame] | 2910 | identical code for SMP and non-SMP systems. For example, virtual machine guests |
Michael S. Tsirkin | 6a65d26 | 2015-12-27 18:23:01 +0200 | [diff] [blame] | 2911 | should use virt_mb() rather than smp_mb() when synchronizing against a |
| 2912 | (possibly SMP) host. |
| 2913 | |
These are equivalent to their smp_mb() etc. counterparts in all other
respects; in particular, they do not control MMIO effects: to control
MMIO effects, use mandatory barriers.
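
For instance, a guest publishing a buffer to a (possibly SMP) host through a
shared ring might do something like the following minimal sketch (ring, head,
desc and avail_idx are placeholders for whatever the shared-memory layout
actually is):

	ring[head] = desc;			/* fill in the descriptor    */
	virt_wmb();				/* order it before the index  */
	WRITE_ONCE(*avail_idx, head + 1);	/* publish to the host        */

Even if the guest kernel is built without CONFIG_SMP, virt_wmb() still emits
the barrier needed against the SMP host, whereas smp_wmb() might not.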
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2917 | |
SeongJae Park | 0b6fa34 | 2016-04-12 08:52:53 -0700 | [diff] [blame] | 2918 | |
David Howells | 90fddab | 2010-03-24 09:43:00 +0000 | [diff] [blame] | 2919 | ============ |
| 2920 | EXAMPLE USES |
| 2921 | ============ |
| 2922 | |
| 2923 | CIRCULAR BUFFERS |
| 2924 | ---------------- |
| 2925 | |
Memory barriers can be used to implement circular buffering without the need
for a lock to serialise the producer with the consumer.  See:
| 2928 | |
Mauro Carvalho Chehab | d8a121e | 2018-05-07 06:35:43 -0300 | [diff] [blame] | 2929 | Documentation/core-api/circular-buffers.rst |
David Howells | 90fddab | 2010-03-24 09:43:00 +0000 | [diff] [blame] | 2930 | |
| 2931 | for details. |
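
As a condensed, illustrative sketch of the pattern described there (the buffer
structure b, its item array, new_item and consume() are hypothetical;
CIRC_SPACE() and CIRC_CNT() come from <linux/circ_buf.h>):

	/* Producer */
	unsigned long head = b->head;
	unsigned long tail = READ_ONCE(b->tail);

	if (CIRC_SPACE(head, tail, b->size) >= 1) {
		b->item[head] = new_item;
		/* Make the item visible before publishing the new head. */
		smp_store_release(&b->head, (head + 1) & (b->size - 1));
	}

	/* Consumer */
	unsigned long head = smp_load_acquire(&b->head);
	unsigned long tail = b->tail;

	if (CIRC_CNT(head, tail, b->size) >= 1) {
		consume(b->item[tail]);
		/* Finish with the item before publishing the new tail. */
		smp_store_release(&b->tail, (tail + 1) & (b->size - 1));
	}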
| 2932 | |
| 2933 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2934 | ========== |
| 2935 | REFERENCES |
| 2936 | ========== |
| 2937 | |
| 2938 | Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek, |
| 2939 | Digital Press) |
| 2940 | Chapter 5.2: Physical Address Space Characteristics |
| 2941 | Chapter 5.4: Caches and Write Buffers |
| 2942 | Chapter 5.5: Data Sharing |
| 2943 | Chapter 5.6: Read/Write Ordering |
| 2944 | |
| 2945 | AMD64 Architecture Programmer's Manual Volume 2: System Programming |
| 2946 | Chapter 7.1: Memory-Access Ordering |
| 2947 | Chapter 7.4: Buffering and Combining Memory Writes |
| 2948 | |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 2949 | ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture profile) |
| 2950 | Chapter B2: The AArch64 Application Level Memory Model |
| 2951 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2952 | IA-32 Intel Architecture Software Developer's Manual, Volume 3: |
| 2953 | System Programming Guide |
| 2954 | Chapter 7.1: Locked Atomic Operations |
| 2955 | Chapter 7.2: Memory Ordering |
| 2956 | Chapter 7.4: Serializing Instructions |
| 2957 | |
| 2958 | The SPARC Architecture Manual, Version 9 |
| 2959 | Chapter 8: Memory Models |
| 2960 | Appendix D: Formal Specification of the Memory Models |
| 2961 | Appendix J: Programming with the Memory Models |
| 2962 | |
Paul E. McKenney | f1ab25a | 2017-08-29 15:49:21 -0700 | [diff] [blame] | 2963 | Storage in the PowerPC (Stone and Fitzgerald) |
| 2964 | |
David Howells | 108b42b | 2006-03-31 16:00:29 +0100 | [diff] [blame] | 2965 | UltraSPARC Programmer Reference Manual |
| 2966 | Chapter 5: Memory Accesses and Cacheability |
| 2967 | Chapter 15: Sparc-V9 Memory Models |
| 2968 | |
| 2969 | UltraSPARC III Cu User's Manual |
| 2970 | Chapter 9: Memory Models |
| 2971 | |
| 2972 | UltraSPARC IIIi Processor User's Manual |
| 2973 | Chapter 8: Memory Models |
| 2974 | |
| 2975 | UltraSPARC Architecture 2005 |
| 2976 | Chapter 9: Memory |
| 2977 | Appendix D: Formal Specifications of the Memory Models |
| 2978 | |
| 2979 | UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005 |
| 2980 | Chapter 8: Memory Models |
| 2981 | Appendix F: Caches and Cache Coherency |
| 2982 | |
| 2983 | Solaris Internals, Core Kernel Architecture, p63-68: |
| 2984 | Chapter 3.3: Hardware Considerations for Locks and |
| 2985 | Synchronization |
| 2986 | |
| 2987 | Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching |
| 2988 | for Kernel Programmers: |
| 2989 | Chapter 13: Other Memory Models |
| 2990 | |
| 2991 | Intel Itanium Architecture Software Developer's Manual: Volume 1: |
| 2992 | Section 2.6: Speculation |
| 2993 | Section 4.4: Memory Access |