.. SPDX-License-Identifier: GPL-2.0

.. _inline_encryption:

=================
Inline Encryption
=================

Background
==========

Inline encryption hardware sits logically between memory and disk, and can
en/decrypt data as it goes in/out of the disk. For each I/O request, software
can control exactly how the inline encryption hardware will en/decrypt the data
in terms of key, algorithm, data unit size (the granularity of en/decryption),
and data unit number (a value that determines the initialization vector(s)).

Some inline encryption hardware accepts all encryption parameters including raw
keys directly in low-level I/O requests. However, most inline encryption
hardware instead has a fixed number of "keyslots" and requires that the key,
algorithm, and data unit size first be programmed into a keyslot. Each
low-level I/O request then just contains a keyslot index and data unit number.

Note that inline encryption hardware is very different from traditional crypto
accelerators, which are supported through the kernel crypto API. Traditional
crypto accelerators operate on memory regions, whereas inline encryption
hardware operates on I/O requests. Thus, inline encryption hardware needs to be
managed by the block layer, not the kernel crypto API.

Inline encryption hardware is also very different from "self-encrypting
drives", such as those based on the TCG Opal or ATA Security standards.
Self-encrypting drives don't provide fine-grained control of encryption and
provide no way to verify the correctness of the resulting ciphertext. Inline
encryption hardware provides fine-grained control of encryption, including the
choice of key and initialization vector for each sector, and can be tested for
correctness.

Objective
=========

We want to support inline encryption in the kernel. To make testing easier, we
also want support for falling back to the kernel crypto API when actual inline
encryption hardware is absent. We also want inline encryption to work with
layered devices like device-mapper and loopback (i.e. we want to be able to use
the inline encryption hardware of the underlying devices if present, or else
fall back to crypto API en/decryption).

Constraints and notes
=====================

- We need a way for upper layers (e.g. filesystems) to specify an encryption
  context to use for en/decrypting a bio, and device drivers (e.g. UFSHCD) need
  to be able to use that encryption context when they process the request.
  Encryption contexts also introduce constraints on bio merging; the block
  layer needs to be aware of these constraints.

- Different inline encryption hardware has different supported algorithms,
  supported data unit sizes, maximum data unit numbers, etc. We call these
  properties the "crypto capabilities". We need a way for device drivers to
  advertise crypto capabilities to upper layers in a generic way.

- Inline encryption hardware usually (but not always) requires that keys be
  programmed into keyslots before being used. Since programming keyslots may
  be slow and there may not be very many keyslots, we shouldn't just program
  the key for every I/O request, but rather keep track of which keys are in
  the keyslots and reuse an already-programmed keyslot when possible.

- Upper layers typically define a specific end-of-life for crypto keys, e.g.
  when an encrypted directory is locked or when a crypto mapping is torn down.
  At these times, keys are wiped from memory. We must provide a way for upper
  layers to also evict keys from any keyslots they are present in.

- When possible, device-mapper devices must be able to pass through the inline
  encryption support of their underlying devices. However, it doesn't make
  sense for device-mapper devices to have keyslots themselves.

Basic design
============

We introduce ``struct blk_crypto_key`` to represent an inline encryption key
and how it will be used. This includes the actual bytes of the key; the size
of the key; the algorithm and data unit size the key will be used with; and
the number of bytes needed to represent the maximum data unit number the key
will be used with.
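
A simplified sketch of the relevant fields follows; the authoritative
definitions live in ``include/linux/blk-crypto.h``::

    /* The crypto settings minus the key itself; also used on its own below. */
    struct blk_crypto_config {
            enum blk_crypto_mode_num crypto_mode;   /* algorithm */
            unsigned int data_unit_size;
            unsigned int dun_bytes;         /* bytes needed for the max DUN */
    };

    struct blk_crypto_key {
            struct blk_crypto_config crypto_cfg;
            unsigned int data_unit_size_bits;   /* log2(data_unit_size) */
            unsigned int size;                  /* size of the raw key */
            u8 raw[BLK_CRYPTO_MAX_KEY_SIZE];    /* the actual key bytes */
    };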

We introduce ``struct bio_crypt_ctx`` to represent an encryption context. It
contains a data unit number and a pointer to a blk_crypto_key. We add pointers
to a bio_crypt_ctx to ``struct bio`` and ``struct request``; this allows users
of the block layer (e.g. filesystems) to provide an encryption context when
creating a bio and have it be passed down the stack for processing by the
block layer and device drivers. Note that the encryption context doesn't
explicitly say whether to encrypt or decrypt, as that is implicit from the
direction of the bio; WRITE means encrypt, and READ means decrypt.
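
A correspondingly simplified sketch of the context structure::

    struct bio_crypt_ctx {
            const struct blk_crypto_key     *bc_key;
            u64 bc_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];  /* data unit number */
    };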

We also introduce ``struct blk_crypto_profile`` to contain all generic inline
encryption-related state for a particular inline encryption device. The
blk_crypto_profile serves as the way that drivers for inline encryption
hardware advertise their crypto capabilities and provide certain functions
(e.g., functions to program and evict keys) to upper layers. Each device
driver that wants to support inline encryption will construct a
blk_crypto_profile, then associate it with the disk's request_queue.

The blk_crypto_profile also manages the hardware's keyslots, when applicable.
This happens in the block layer, so that users of the block layer can just
specify encryption contexts and don't need to know about keyslots at all, nor
do device drivers need to care about most details of keyslot management.

Specifically, for each keyslot, the block layer (via the blk_crypto_profile)
keeps track of which blk_crypto_key that keyslot contains (if any), and how
many in-flight I/O requests are using it. When the block layer creates a
``struct request`` for a bio that has an encryption context, it grabs a
keyslot that already contains the key if possible. Otherwise it waits for an
idle keyslot (a keyslot that isn't in-use by any I/O), then programs the key
into the least-recently-used idle keyslot using the function the device driver
provided. In both cases, the resulting keyslot is stored in the
``crypt_keyslot`` field of the request, where it is then accessible to device
drivers and is released after the request completes.
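
In rough pseudocode, the keyslot lookup works as follows. This is only a
sketch: the real code in ``block/blk-crypto-profile.c`` handles locking,
reference counting, and waiting, and the helper functions and the
``num_inflight_ios`` field shown here are hypothetical::

    struct blk_crypto_keyslot *
    get_keyslot_sketch(struct blk_crypto_profile *profile,
                       const struct blk_crypto_key *key)
    {
            struct blk_crypto_keyslot *slot;

            slot = find_keyslot_with_key(profile, key);         /* hypothetical */
            if (!slot) {
                    /* Wait for a slot with no in-flight I/O; take the LRU one. */
                    slot = wait_for_lru_idle_keyslot(profile);  /* hypothetical */
                    profile->ll_ops.keyslot_program(profile, key,
                                                    slot_index(profile, slot));
                    slot->key = key;
            }
            slot->num_inflight_ios++;   /* dropped when the request completes */
            return slot;
    }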

``struct request`` also contains a pointer to the original bio_crypt_ctx.
Requests can be built from multiple bios, and the block layer must take the
encryption context into account when trying to merge bios and requests. For
two bios/requests to be merged, they must have compatible encryption contexts:
both unencrypted, or both encrypted with the same key and contiguous data unit
numbers. Only the encryption context for the first bio in a request is
retained, since the remaining bios have been verified to be merge-compatible
with the first bio.

To make it possible for inline encryption to work with request_queue based
layered devices, when a request is cloned, its encryption context is cloned as
well. When the cloned request is submitted, it is then processed as usual;
this includes getting a keyslot from the clone's target device if needed.

blk-crypto-fallback
===================

It is desirable for the inline encryption support of upper layers (e.g.
filesystems) to be testable without real inline encryption hardware, and
likewise for the block layer's keyslot management logic. It is also desirable
to allow upper layers to just always use inline encryption rather than have to
implement encryption in multiple ways.

Therefore, we also introduce *blk-crypto-fallback*, which is an implementation
of inline encryption using the kernel crypto API. blk-crypto-fallback is built
into the block layer, so it works on any block device without any special
setup. Essentially, when a bio with an encryption context is submitted to a
request_queue that doesn't support that encryption context, the block layer
will handle en/decryption of the bio using blk-crypto-fallback.

For encryption, the data cannot be encrypted in-place, as callers usually rely
on it being unmodified. Instead, blk-crypto-fallback allocates bounce pages,
fills a new bio with those bounce pages, encrypts the data into those bounce
pages, and submits that "bounce" bio. When the bounce bio completes,
blk-crypto-fallback completes the original bio. If the original bio is too
large, multiple bounce bios may be required; see the code for details.

For decryption, blk-crypto-fallback "wraps" the bio's completion callback
(``bi_end_io``) and private data (``bi_private``) with its own, unsets the
bio's encryption context, then submits the bio. If the read completes
successfully, blk-crypto-fallback restores the bio's original completion
callback and private data, then decrypts the bio's data in-place using the
kernel crypto API. Decryption happens from a workqueue, as it may sleep.
Afterwards, blk-crypto-fallback completes the bio.

In both cases, the bios that blk-crypto-fallback submits no longer have an
encryption context. Therefore, lower layers only see standard unencrypted I/O.

blk-crypto-fallback also defines its own blk_crypto_profile and has its own
"keyslots"; its keyslots contain ``struct crypto_skcipher`` objects. The
reason for this is twofold. First, it allows the keyslot management logic to
be tested without actual inline encryption hardware. Second, similar to actual
inline encryption hardware, the crypto API doesn't accept keys directly in
requests but rather requires that keys be set ahead of time, and setting keys
can be expensive; moreover, allocating a crypto_skcipher can't happen on the
I/O path at all due to the locks it takes. Therefore, the concept of keyslots
still makes sense for blk-crypto-fallback.

Note that regardless of whether real inline encryption hardware or
blk-crypto-fallback is used, the ciphertext written to disk (and hence the
on-disk format of data) will be the same (assuming that both the inline
encryption hardware's implementation and the kernel crypto API's
implementation of the algorithm being used adhere to spec and function
correctly).

blk-crypto-fallback is optional and is controlled by the
``CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK`` kernel configuration option.

API presented to users of the block layer
=========================================

``blk_crypto_config_supported()`` allows users to check ahead of time whether
inline encryption with particular crypto settings will work on a particular
request_queue -- either via hardware or via blk-crypto-fallback. This function
takes in a ``struct blk_crypto_config`` which is like blk_crypto_key, but
omits the actual bytes of the key and instead just contains the algorithm,
data unit size, etc. This function can be useful if blk-crypto-fallback is
disabled.
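
For example, an upper layer might check support as follows (a sketch, assuming
a ``struct request_queue *q``; exact signatures can differ between kernel
versions)::

    struct blk_crypto_config cfg = {
            .crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS,
            .data_unit_size = 4096,
            .dun_bytes = 8,
    };

    if (!blk_crypto_config_supported(q, &cfg))
            return -EOPNOTSUPP; /* neither hardware nor fallback can handle it */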

``blk_crypto_init_key()`` allows users to initialize a blk_crypto_key.

Users must call ``blk_crypto_start_using_key()`` before actually starting to
use a blk_crypto_key on a request_queue (even if
``blk_crypto_config_supported()`` was called earlier). This is needed to
initialize blk-crypto-fallback if it will be needed. This must not be called
from the data path, as it may need to allocate resources; doing so from the
data path could deadlock.

Next, to attach an encryption context to a bio, users should call
``bio_crypt_set_ctx()``. This function allocates a bio_crypt_ctx and attaches
it to a bio, given the blk_crypto_key and the data unit number that will be
used for en/decryption. Users don't need to worry about freeing the
bio_crypt_ctx later, as that happens automatically when the bio is freed or
reset.

Finally, when done using inline encryption with a blk_crypto_key on a
request_queue, users must call ``blk_crypto_evict_key()``. This ensures that
the key is evicted from all keyslots it may be programmed into and unlinked
from any kernel data structures it may be linked into.

In summary, for users of the block layer, the lifecycle of a blk_crypto_key is
as follows:

1. ``blk_crypto_config_supported()`` (optional)
2. ``blk_crypto_init_key()``
3. ``blk_crypto_start_using_key()``
4. ``bio_crypt_set_ctx()`` (potentially many times)
5. ``blk_crypto_evict_key()`` (after all I/O has completed)
6. Zeroize the blk_crypto_key (this has no dedicated function)

If a blk_crypto_key is being used on multiple request_queues, then
``blk_crypto_config_supported()`` (if used), ``blk_crypto_start_using_key()``,
and ``blk_crypto_evict_key()`` must be called on each request_queue.
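
Putting it together, a condensed sketch of this lifecycle for a hypothetical
user (error paths abbreviated; note that the key must outlive all I/O that
uses it, and exact signatures can differ between kernel versions)::

    #include <linux/bio.h>
    #include <linux/blk-crypto.h>
    #include <linux/string.h>

    /* Steps 1-3: initialize the key and start using it on the queue. */
    static int example_setup_key(struct request_queue *q,
                                 struct blk_crypto_key *key, const u8 *raw_key)
    {
            int err;

            err = blk_crypto_init_key(key, raw_key,
                                      BLK_ENCRYPTION_MODE_AES_256_XTS,
                                      8 /* dun_bytes */,
                                      4096 /* data_unit_size */);
            if (err)
                    return err;
            return blk_crypto_start_using_key(key, q);
    }

    /* Step 4: attach an encryption context to each bio before submission. */
    static void example_submit(struct bio *bio,
                               const struct blk_crypto_key *key, u64 first_dun)
    {
            u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { first_dun };

            bio_crypt_set_ctx(bio, key, dun, GFP_NOIO);
            submit_bio(bio);
    }

    /* Steps 5-6: after all I/O has completed, evict and zeroize the key. */
    static void example_teardown_key(struct request_queue *q,
                                     struct blk_crypto_key *key)
    {
            blk_crypto_evict_key(q, key);
            memzero_explicit(key, sizeof(*key));
    }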

API presented to device drivers
===============================

A device driver that wants to support inline encryption must set up a
blk_crypto_profile in the request_queue of its device. To do this, it first
must call ``blk_crypto_profile_init()`` (or its resource-managed variant
``devm_blk_crypto_profile_init()``), providing the number of keyslots.

Next, it must advertise its crypto capabilities by setting fields in the
blk_crypto_profile, e.g. ``modes_supported`` and ``max_dun_bytes_supported``.

It then must set function pointers in the ``ll_ops`` field of the
blk_crypto_profile to tell upper layers how to control the inline encryption
hardware, e.g. how to program and evict keyslots. Most drivers will need to
implement ``keyslot_program`` and ``keyslot_evict``. For details, see the
comments for ``struct blk_crypto_ll_ops``.
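
As a sketch, a driver's setup might look roughly like the following, where the
``myhw_*`` names, the keyslot count, and the capability values are all
hypothetical::

    static int myhw_keyslot_program(struct blk_crypto_profile *profile,
                                    const struct blk_crypto_key *key,
                                    unsigned int slot)
    {
            /* Program key->raw into hardware keyslot 'slot' (hypothetical). */
            return myhw_write_keyslot(profile, slot, key);
    }

    static int myhw_keyslot_evict(struct blk_crypto_profile *profile,
                                  const struct blk_crypto_key *key,
                                  unsigned int slot)
    {
            return myhw_clear_keyslot(profile, slot);   /* hypothetical */
    }

    static int myhw_init_crypto(struct myhw_dev *mydev, struct request_queue *q)
    {
            struct blk_crypto_profile *profile = &mydev->crypto_profile;
            int err;

            err = devm_blk_crypto_profile_init(mydev->dev, profile,
                                               32 /* keyslots */);
            if (err)
                    return err;

            profile->ll_ops.keyslot_program = myhw_keyslot_program;
            profile->ll_ops.keyslot_evict = myhw_keyslot_evict;
            /* AES-256-XTS only, with 4096-byte data units and 8-byte DUNs. */
            profile->modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] = 4096;
            profile->max_dun_bytes_supported = 8;

            if (!blk_crypto_register(profile, q))
                    return -EINVAL;
            return 0;
    }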

Once the driver registers a blk_crypto_profile with a request_queue, I/O
requests the driver receives via that queue may have an encryption context.
All encryption contexts will be compatible with the crypto capabilities
declared in the blk_crypto_profile, so drivers don't need to worry about
handling unsupported requests. Also, if a nonzero number of keyslots was
declared in the blk_crypto_profile, then all I/O requests that have an
encryption context will also have a keyslot which was already programmed with
the appropriate key.

If the driver implements runtime suspend and its blk_crypto_ll_ops don't work
while the device is runtime-suspended, then the driver must also set the
``dev`` field of the blk_crypto_profile to point to the ``struct device`` that
will be resumed before any of the low-level operations are called.

If there are situations where the inline encryption hardware loses the
contents of its keyslots, e.g. device resets, the driver must handle
reprogramming the keyslots. To do this, the driver may call
``blk_crypto_reprogram_all_keys()``.
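
For example (hypothetical driver code, reusing the ``myhw_*`` names above)::

    static void myhw_reset_done(struct myhw_dev *mydev)
    {
            /* The reset wiped the hardware keyslots; restore their contents. */
            blk_crypto_reprogram_all_keys(&mydev->crypto_profile);
    }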

Finally, if the driver used ``blk_crypto_profile_init()`` instead of
``devm_blk_crypto_profile_init()``, then it is responsible for calling
``blk_crypto_profile_destroy()`` when the crypto profile is no longer needed.

Layered Devices
===============

Request queue based layered devices like dm-rq that wish to support inline
encryption need to create their own blk_crypto_profile for their
request_queue, and expose whatever functionality they choose. When a layered
device wants to pass a clone of that request to another request_queue,
blk-crypto will initialize and prepare the clone as necessary; see
``blk_crypto_insert_cloned_request()``.

Interaction between inline encryption and blk integrity
=======================================================

At the time of this patch, there is no real hardware that supports both these
features. However, these features do interact with each other, and it's not
completely trivial to make them both work together properly. In particular,
when a WRITE bio wants to use inline encryption on a device that supports both
features, the bio will have an encryption context specified, after which its
integrity information is calculated (using the plaintext data, since the
encryption will happen while data is being written), and the data and
integrity info are sent to the device. Obviously, the integrity info must be
verified before the data is encrypted. After the data is encrypted, the device
must not store the integrity info that it received with the plaintext data,
since that might reveal information about the plaintext data. As such, it must
re-generate the integrity info from the ciphertext data and store that on disk
instead. Another issue with storing the integrity info of the plaintext data
is that it changes the on-disk format depending on whether hardware inline
encryption support is present or the kernel crypto API fallback is used (since
if the fallback is used, the device will receive the integrity info of the
ciphertext, not that of the plaintext).

Because there isn't any real hardware yet, it seems prudent to assume that
hardware implementations might not implement both features together correctly,
and disallow the combination for now. Whenever a device supports integrity,
the kernel will pretend that the device does not support hardware inline
encryption (by setting the blk_crypto_profile in the request_queue of the
device to NULL). When the crypto API fallback is enabled, this means that all
bios with an encryption context will use the fallback, and I/O will complete
as usual. When the fallback is disabled, a bio with an encryption context will
be failed.