3

In the past, I've considered encrypting my desktop and laptop's hard disks for security purposes.

However, I've heard from various sources that the failure of a single block in the physical media makes decryption impossible.

Is this really true, and are there ways of mitigating such a problem?

Naftuli Kay
  • 9,621
  • 3
    If you are going to use encryption then you should have a backup system in place and tested. In the case of something like LUKS, the key used for block encryption is actually stored in a special header on the drive. This key is encrypted with your password(s) and/or key-file(s). If the block(s) holding the LUKS header are trashed and you have no backup, then everything is lost. Of course there are methods to back up the header, or you could use something like RAID1. – Zoredache Dec 10 '13 at 00:46

3 Answers

4

As with almost everything, it depends on which block. If it's block 0, your disk is unreadable with or without encryption. If the block holds the actual encryption key, or part of it, the key is lost, most likely along with your data.

Generally, in modern whole-disk encryption, each block is encrypted independently of every other block to avoid exactly this problem. The cipher itself doesn't really matter; it's how that cipher is used that determines this behavior. Take a look at this Wikipedia page for a more thorough explanation: http://en.wikipedia.org/wiki/XTS_mode.
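To make the per-sector independence concrete, here is a toy Python sketch (a keystream derived from the sector number, loosely standing in for how XTS tweaks each sector; the key, data, and 16-byte "sectors" are made up, and this is not a real cipher): corrupting a byte in one sector leaves every other sector fully decryptable.

```python
import hashlib

SECTOR = 16  # toy sector size; real disks use 512 or 4096 bytes

def keystream(key: bytes, sector_no: int, n: int) -> bytes:
    # Derive a per-sector keystream from the key and the sector number,
    # so no sector's encryption depends on any other sector.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + sector_no.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xcrypt(key: bytes, data: bytes) -> bytes:
    # Encrypt/decrypt each sector independently (XOR is its own inverse).
    out = bytearray()
    for i in range(0, len(data), SECTOR):
        chunk = data[i:i + SECTOR]
        ks = keystream(key, i // SECTOR, len(chunk))
        out += bytes(a ^ b for a, b in zip(chunk, ks))
    return bytes(out)

key = b"example key"
plain = b"sector zero....." + b"sector one......" + b"sector two......"
cipher = bytearray(xcrypt(key, plain))
cipher[20] ^= 0xFF  # corrupt one byte inside sector 1
recovered = xcrypt(key, bytes(cipher))

print(recovered[:16])    # sector 0: intact
print(recovered[32:])    # sector 2: intact
print(recovered[16:32])  # only sector 1 is damaged
```

A bad physical block in such a scheme costs you that block and nothing else, which is the property the answer describes.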

kronenpj
  • 588
  • [+1] for the mode of operation I didn't know about. But why would the encryption key be kept in the data? – rath Dec 10 '13 at 03:43
  • Most schemes will keep the actual encryption key, encrypted, on the disk and use the passphrase to unlock it. Passphrases are notoriously non-random and really random keys are "hard" to memorize. This also allows multiple passphrases to unlock the drive, if that's desirable. – kronenpj Dec 10 '13 at 03:49
  • I would imagine the key is derived from the passphrase with PBKDF2 or Scrypt or something. It sounds a bit wrong. Anyway the "multiple passwords" thing does somewhat explain that decision, although I'd like to see how it's implemented. Cheers – rath Dec 10 '13 at 03:50
2

It depends on the encryption tool / scheme.

One factor is how the data on the disk is grouped by the tool. At one extreme, you have a program that treats the entire drive as one item and encrypts or decrypts it all with one key in one operation; in this case you may lose everything. At the other extreme, you may have a tool that encrypts one file at a time, in which case you would lose at most one file.

Another factor is which encryption cipher is actually used, and how it is applied. If the system generates, for each block of data, a changing value that feeds into the encryption of the next block [part of a file, a disk block, part of a disk image, etc.], you could end up losing the entire item being operated on (be it a file or an entire disk image). If the system operates independently on each block of data, you may lose as little as a small portion of a specific file, or less.

Normally I would suggest you start by asking yourself what you mean by "for security purposes". Since you say encryption, you might not be as concerned with tamper-proofing as you are with preventing curious fingers from going through your data when you're not around.

Once you know what kind of threats you are concerned with you should then consider failure modes and weigh trade-offs between risk mitigation and convenience.

Ultimately, I suspect you should simply compare the most popular software options among those you trust - most easily done by reading the FAQs, which should all discuss failure modes and data loss.

GL.

Ram
  • 1,107
1

I've never used such software, but I'd say it depends on the cipher's mode of operation. If you're using ECB mode, you'll lose at most the sector that got damaged - but you shouldn't use ECB mode if your data is larger than the cipher's block size, since it leaks patterns.

If you use a mode where each ciphertext block feeds into the processing of everything after it (PCBC is the classic example; losing keystream sync in a stream mode like OFB has a similar effect), you risk losing the damaged sector plus all the blocks that come next.

 Good block        Bad sector X     Propagated err
|--------------| |-------X------|  |xxxxxxxxxxxxxxx|  ...

that is, the error might propagate to the next encryption block and contaminate the rest of the data. This is true regardless of full-disk or file-by-file encryption.

By using CBC mode, the damage is contained: you lose the corrupted block itself, plus the matching bit positions in the block that immediately follows it, and decryption recovers after that:

 Good block        Bad sector X     Propagated err
|--------------| |-------X------|  |-------o------|  ...
                                           ^
                                 due to previous bad block
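The CBC pattern in the diagram can be demonstrated with a toy sketch (a plain XOR pad standing in for a real block cipher - with a real cipher the corrupted block would be garbled entirely, but the *extent* of the propagation is the same; key, IV, and data are made up):

```python
import hashlib

BLOCK = 16

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: XOR with a key-derived pad.
    pad = hashlib.sha256(key).digest()[:BLOCK]
    return bytes(a ^ b for a, b in zip(block, pad))

toy_decrypt_block = toy_encrypt_block  # XOR is its own inverse

def cbc_encrypt(key: bytes, iv: bytes, plain: bytes) -> bytes:
    blocks, prev = [], iv
    for i in range(0, len(plain), BLOCK):
        # Each plaintext block is XORed with the previous ciphertext block.
        x = bytes(a ^ b for a, b in zip(plain[i:i + BLOCK], prev))
        c = toy_encrypt_block(key, x)
        blocks.append(c)
        prev = c
    return b"".join(blocks)

def cbc_decrypt(key: bytes, iv: bytes, cipher: bytes) -> bytes:
    blocks, prev = [], iv
    for i in range(0, len(cipher), BLOCK):
        c = cipher[i:i + BLOCK]
        x = toy_decrypt_block(key, c)
        blocks.append(bytes(a ^ b for a, b in zip(x, prev)))
        prev = c
    return b"".join(blocks)

key, iv = b"example key", b"\x00" * BLOCK
plain = b"block zero......block one.......block two.......block three....."
ct = bytearray(cbc_encrypt(key, iv, plain))
ct[17] ^= 0xFF  # flip a bit inside ciphertext block 1
pt = cbc_decrypt(key, iv, bytes(ct))

print(pt[0:16])   # block 0: intact
print(pt[16:32])  # block 1: damaged (the corrupted block itself)
print(pt[32:48])  # block 2: damaged at the matching position
print(pt[48:])    # block 3: intact again - CBC recovers
```

Exactly two blocks are affected, which is why a single bad sector under CBC-style chaining does not take the whole disk with it.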

It's not all doom and gloom though. I think respectable disk encryption software has countermeasures against this sort of thing, such as keeping a CRC32 checksum somewhere. Do check the software's wiki (or forum, or customer service if you go non-free) to see how they handle this sort of thing.
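A checksum can only detect corruption, not repair it, but that's enough to avoid silently trusting a damaged sector. A minimal sketch with Python's standard library (the sector contents here are made up):

```python
import zlib

# When writing a sector, store its CRC32 alongside it (in real software
# this would live in a metadata area, not inline).
sector = b"some sector contents; 512 bytes on a real disk".ljust(64, b".")
stored_crc = zlib.crc32(sector)

# Later, after reading the sector back, verify before trusting it.
damaged = bytearray(sector)
damaged[10] ^= 0x01  # simulate a single flipped bit

print(zlib.crc32(sector) == stored_crc)          # True  -> data is intact
print(zlib.crc32(bytes(damaged)) == stored_crc)  # False -> corruption detected
```

On a mismatch the software can fall back to a backup copy (of the header, say) instead of decrypting garbage.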

Update: From the TrueCrypt documentation:

Some storage devices, such as hard drives, internally reallocate/remap bad sectors. Whenever the device detects a sector to which data cannot be written, it marks the sector as bad and remaps it to a sector in a hidden reserved area on the drive. Any subsequent read/write operations from/to the bad sector are redirected to the sector in the reserved area. [...]

which suggests the drive itself already takes care of these things at the firmware level.

But let's assume it doesn't. As kronenpj pointed out, most disk encryption software (at least the popular packages) works in XTS mode, a tweaked block-cipher mode that incorporates ciphertext stealing. That mode is a bit complicated to represent in ASCII; the linked Wikipedia page has a nice depiction of the process.

And here comes the tl;dr: error propagation (in the absence of corrective measures) depends on whether the software encrypts each block independently or chains them together.

rath
  • 397
  • Great answer. I assumed that things like this depended on the cipher mode, as I've had to work with common ciphers in the past. OFB and CBC would be a disaster, but we'll have to see on the specific technologies how and if they mitigate these potential problems. – Naftuli Kay Dec 10 '13 at 07:53