commit | b004aa867c48b3232835b61ed9d44b572e29498e | |
---|---|---|
author | Davidlohr Bueso <dave@stgolabs.net> | Sun Mar 22 14:03:04 2020 +0800 |
committer | Jens Axboe <axboe@kernel.dk> | Sun Mar 22 10:06:57 2020 -0600 |
tree | c4e27aff03ea6deada3f73eb5e36d3715d541908 | |
parent | 9876e38609a8ea98bbb447eb5a8f1c0400a6ccb8 | |
bcache: optimize barrier usage for RMW atomic bitops

We can avoid the unnecessary full barrier on non-LL/SC architectures, such as x86. Instead, use smp_mb__after_atomic().

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
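The change boils down to pairing a non-value-returning RMW atomic bitop (set_bit()/clear_bit()) with smp_mb__after_atomic() rather than a full smp_mb(). Below is a minimal kernel-style sketch of that pattern, not the actual bcache diff: the struct, field, and bit name are hypothetical stand-ins for the real call sites in drivers/md/bcache/.

```c
#include <linux/bitops.h>
#include <asm/barrier.h>

/* Hypothetical stand-in for the real bcache structure. */
struct cache_set_example {
	unsigned long	flags;		/* hypothetical flag word */
};

#define EXAMPLE_IO_DISABLE	0	/* hypothetical bit number */

static void example_disable_io(struct cache_set_example *c)
{
	/* Non-value-returning RMW atomic bitop; provides no ordering by itself. */
	set_bit(EXAMPLE_IO_DISABLE, &c->flags);

	/*
	 * Before: an smp_mb() here forced a full memory barrier on every
	 * architecture. After: smp_mb__after_atomic() is only a compiler
	 * barrier on strongly ordered architectures such as x86, where the
	 * locked RMW instruction already orders memory, and expands to a
	 * full barrier only on LL/SC architectures that actually need one.
	 */
	smp_mb__after_atomic();
}
```

In other words, the ordering guarantee seen by readers of the flag is unchanged; the optimization is that the barrier cost is paid only where the architecture requires it.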