Think your hard drive is really encrypted?
An industry-wide security issue with self-encrypting drives (SEDs) was highlighted recently by researchers Carlo Meijer and Bernard van Gastel of Radboud University in the Netherlands (5 November 2018).
In the research paper ‘Self-encrypting deception: weaknesses in the encryption of solid state drives (SSDs)’, a number of SSDs from different vendors were examined and the security was found to be bypassable in many cases. The problem is compounded by the fact that the default behaviour of operating systems’ disk-encryption routines (such as Microsoft’s BitLocker) is to rely on the Opal standard if the drive supports it, rather than falling back to a software encryption method. Drives using the Opal standard carry out hardware encryption on the device itself, rather than relying on the CPU in the host machine to do the heavy lifting of encryption and decryption.
Isn’t it surprising that given the existence of a universally adopted security specification, such a vulnerability makes its way through to so many disk varieties from different vendors?
Think of it this way…
Consider BS 3621, which you may have come across when reading your house insurance policy conditions: this standard covers mortice and cylinder rim locks for doors where a key is required for entry or exit, and UK insurers often state that compliance with it is a requirement.
However, insurers do not generally stipulate a standard for the other factors surrounding the lock itself. So if you buy a door fitted with a BS 3621 lock but made of thin plywood instead of solid bonded and layered oak, you would not have breached the insurer’s conditions (and the door would be cheaper and quicker to manufacture), but burglars can happily get into your house without ever trying to overcome BS 3621.
What are the areas of concern?
Given the widespread reporting of the disk storage security issue, many security-aware organisations that need to ensure they are compliant with government and regulatory standards around privacy and confidentiality (for example GDPR, ISO 27001, SOC, HIPAA, Cyber Essentials, etc.) are now concerned about how well their data at rest is really protected.
Disk Security History
Prior to self-encrypting drives (SEDs), disks were limited to performing access control to secure themselves. The ATA Security Feature Set required the disk to be configured with a BIOS-level password, matching one set on the host, before it could be accessed. This was easy to overcome by attacking the checks at BIOS level to bypass or spoof them, or simply by buying a new identical drive, removing its logic board (which, being new, has no password set) and substituting it into the original drive, eliminating the access control and allowing the disk to be read.
To mitigate this, software disk encryption (SDE) became available meaning that either individual files or whole disks could be encrypted by the host when writing to the disk and decrypted when reading from it. The disadvantage was that the processing load of doing the encryption/decryption could significantly slow the host’s other applications, so disk manufacturers introduced SEDs which (for a price premium over standard drives) offloaded the encryption/decryption onto the drive itself.
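The encrypt-on-write/decrypt-on-read flow of SDE can be sketched as follows. This is a minimal illustration only: the cipher here is a toy keystream, not the AES-XTS that real products use, and all the function names are hypothetical.

```python
import hashlib

def keystream(key, sector, length):
    # Toy per-sector keystream (NOT real cryptography): each sector is
    # enciphered independently, so random access to the disk still works.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + sector.to_bytes(8, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:length]

def write_sector(disk, key, sector, plaintext):
    # The host encrypts before the data ever reaches the disk...
    ks = keystream(key, sector, len(plaintext))
    disk[sector] = bytes(a ^ b for a, b in zip(plaintext, ks))

def read_sector(disk, key, sector):
    # ...and decrypts on the way back; the disk only ever holds ciphertext.
    ct = disk[sector]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, sector, len(ct))))

disk = {}
write_sector(disk, b"host-held key", 0, b"confidential data")
assert disk[0] != b"confidential data"  # at rest: ciphertext only
assert read_sector(disk, b"host-held key", 0) == b"confidential data"
```

The point to notice is that all the CPU work happens on the host, which is exactly the load a SED offloads onto its own controller.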
To ensure interoperability of disk drives across multiple hosts and operating systems, security standards for SEDs were developed by the Trusted Computing Group (Opal 2) and, together with other international storage standards (in particular IEEE 1667, the Standard Protocol for Authentication in Host Attachments of Transient Storage Devices), were adopted by disk manufacturers.
Opal 2 overcame the limitations of ATA Security by using hardware-based cryptographic functions to encrypt the data inside the disk itself under an internal disk encryption key (DEK, also referred to as the media encryption key or MEK). The standard does not dictate the type of cryptography used or the performance of the disk, but it does concern itself with how the keys are generated, enabled, changed and reset: passwords are used to derive ‘Key Encryption Keys’ (KEKs), at multiple levels, which protect the DEK, and the standard also covers exchanging credentials securely with the host.
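The KEK/DEK relationship can be sketched like this. It is an illustration only, assuming PBKDF2 as the password KDF and a toy XOR ‘wrap’ in place of the proper key-wrap algorithm a real drive would use:

```python
import hashlib, os

def derive_kek(password, salt):
    # A KEK is derived from a password (PBKDF2 here as a stand-in for
    # whatever KDF a particular drive's firmware actually uses).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def wrap(dek, kek):
    # Toy XOR "wrap"; real drives use a proper key-wrap algorithm.
    return bytes(a ^ b for a, b in zip(dek, kek))

unwrap = wrap  # XOR is its own inverse

# The drive generates a random DEK once; user data is encrypted under it.
salt = os.urandom(16)
dek = os.urandom(32)
stored_blob = wrap(dek, derive_kek("correct horse", salt))

# Changing the password only re-wraps the DEK: the data on the flash,
# encrypted under the unchanged DEK, never has to be re-encrypted.
assert unwrap(stored_blob, derive_kek("correct horse", salt)) == dek
new_blob = wrap(dek, derive_kek("new password", salt))
assert unwrap(new_blob, derive_kek("new password", salt)) == dek
```

This indirection is what makes password changes (and, as discussed below, crypto erase) instantaneous.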
Benefits and Disadvantages of SEDs
The primary benefit of SEDs is generally performance, on the premise that dedicated hardware encryption relieves the burden on the host CPU and transfer bandwidth.
For organisations with strong ‘data at rest’ security requirements on their storage, SEDs provide features such as the ‘crypto erase’ function, where the disk encryption key (DEK) is changed, rendering the contents unusable in an instant without needing to re-encrypt sector by sector. This can benefit large-scale estates (e.g. IaaS service providers wanting to prevent data being exposed across hosts without the time delays of secure wiping), although in many cases the end users are government organisations which mandate strict physical shredding wherever storage is redeployed to a different security classification.
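Crypto erase is easy to demonstrate in miniature. The sketch below uses a toy XOR-keystream cipher purely to show the principle; no real drive works this way internally:

```python
import hashlib, os

def cipher(data, dek):
    # Toy symmetric cipher keyed by the DEK (illustrative only): XOR with
    # a keystream, so applying it twice with the same key decrypts.
    ks = b""
    counter = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(dek + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, ks))

dek = os.urandom(32)
stored = cipher(b"terabytes of tenant data", dek)  # data at rest

# Crypto erase: the drive simply discards the old DEK and generates a new
# one. No sectors are overwritten, yet the old ciphertext is now unreadable.
dek = os.urandom(32)
assert cipher(stored, dek) != b"terabytes of tenant data"
```

Destroying one 32-byte key logically destroys the whole disk's contents, which is why the operation takes milliseconds rather than the hours a sector-by-sector wipe needs.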
Where applications have a high write requirement (commonly databases, client-based agents, disk defragmentation operations) this can be an important benefit; however, even in these scenarios it isn’t a given that the performance penalty of SDE would be significant enough to justify the additional management overhead of a SED.
Originally, SEDs were sold at a significant price premium to organisations with a specific need for them, but today there is little or no difference in price.
Both SED and SDE encryption solutions are subject to attacks of varying complexity depending on the operating system (e.g. Microsoft BitLocker) or third-party security software vendor (e.g. Becrypt), and in many cases these attacks relate to the key management aspects that apply to both.
SED Security Benefits
SED security is independent of the operating system and is therefore not vulnerable to the attacks that SDE is, such as alternative boot approaches to capture two-factor keys (USB) and memory attacks to discover encryption keys held in system memory (a side-channel attack). Since authentication is done on the drive itself using an isolated ‘pre-boot OS’, the real OS is not exposed to attack during the boot process. With a SED there is no need to centrally manage the disk encryption key (DEK), although the Master and other keys still need to be managed.
SED Security Disadvantages
The fundamental issue with the SED implementations examined recently is that the password-exchange routine and the disk-encryption-key handling are implemented internally within the drive itself, instead of the password securely held within the host forming part of the encryption key. So if the password-checking routines in the drive’s firmware are modified, any host can happily read and write the disk in the clear (even though internally the drive is still encrypting). Even with the security level set to high, the disk remains vulnerable if the security firmware is rewritten to skip the initial password check, allowing full access to the drive.
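The contrast between the flawed and the sound design can be sketched in a few lines. This is hypothetical illustrative code, not any vendor’s actual firmware logic:

```python
import hashlib, os

salt = os.urandom(16)
password_hash = hashlib.sha256(b"owner-password" + salt).digest()
dek = os.urandom(32)  # generated independently of the password

def unlock_broken(password):
    # Flawed pattern found in several drives: the password merely gates
    # access, and the DEK is not derived from it in any way. Patch this
    # 'if' out of the firmware (e.g. via JTAG) and the DEK is handed over.
    if hashlib.sha256(password + salt).digest() == password_hash:
        return dek
    return None

def kek_from(password):
    # Sound pattern: derive a KEK from the password...
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# ...and store the DEK only in wrapped form. A wrong password now yields
# garbage rather than a boolean check that an attacker can bypass.
wrapped = xor(dek, kek_from(b"owner-password"))
assert xor(wrapped, kek_from(b"owner-password")) == dek
assert xor(wrapped, kek_from(b"wrong-password")) != dek
```

In the sound design there is no branch to patch out: without the password there is simply no key, which is the cryptographic binding the researchers found missing.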
In the case of the SEDs, attackers can use the TAP (Test Access Port) on the device (which is not within the scope of the Opal 2.0 specification, just as the door is not part of BS 3621). This may seem an advanced attack, but many board manufacturers (disk manufacturers included) fit the industry-standard four-wire JTAG interface, so with a simple USB-to-JTAG adapter costing a few quid and some basic programming tools, the drive can be reprogrammed not to bother checking the user password. You may recall a similar scenario with the Xbox 360, where a widely published JTAG hack allowed execution of unsigned code.
Microsoft Windows and other operating systems will automatically use hardware encryption rather than software where available (e.g. BitLocker on Windows), meaning that such systems are vulnerable to the attack outlined by Meijer & van Gastel. When this research was publicised, Microsoft, for example, published advisory ADV180028 explaining that Group Policy can be set to override the default behaviour and use software encryption instead, making clear that read performance is slightly reduced and write performance is reduced by ‘up to 50%’, although as described this mainly applies to older hardware.
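On an affected Windows machine, whether BitLocker silently deferred to the drive’s hardware encryption can be checked from an elevated command prompt (exact output wording may vary by Windows version):

```shell
# Show BitLocker status for all volumes; the "Encryption Method" field
# reveals whether hardware (SED/Opal) or software (AES) encryption is in use.
manage-bde -status

# If it reports hardware encryption, the mitigation in ADV180028 is to
# enable the Group Policy "Configure use of hardware-based encryption for
# fixed data drives" (and its OS/removable-drive equivalents) so that
# hardware encryption is disallowed, then fully decrypt and re-encrypt the
# volume so BitLocker switches to software AES.
```

Note that flipping the policy alone is not enough: volumes already encrypted in hardware stay that way until they are decrypted and re-encrypted.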
Self-Encrypting Disk Security Configuration and Operation
As described previously, there are a number of different levels of security available from a SED and implementation generally follows a specific path.
To benefit from the security a SED provides, the owner (most likely an IT department) needs to define multiple passwords and security levels before using it. This should be guided by a security policy and procedures, driven in turn by the risk assessment demanding adequate ‘data at rest’ controls to satisfy the many governmental and regulatory standards concerned with data privacy and confidentiality.
For many disk vendors, this involves installing an operating system in order to run the drive-preparation software that associates the key-pair relationship between the host (typically in the TPM chipset) and the drive. This results in a crypto erase, meaning an operating system then needs to be installed again from scratch. For many individuals and organisations, this is an administrative burden they don’t want, are not aware they need, or they may not even know they have a SED in the first place!
‘Users’ of the disk (which could be multiple applications, virtual host instances, etc.) can set a ‘User Password’ for the parts of the disk they encrypt. Since user passwords can be unintentionally lost from the platform’s secure storage area (due to a TPM wipe or hardware failure, for example), the concept of a ‘Master Password’ is provided to recover from such loss. This is itself a security risk, so vendors allow the behaviour of the Master Password to be changed by setting different levels.
Disk manufacturers may also define another password for use when carrying out diagnostics (maker’s credentials), which they usually state does not allow their bench technicians access to customer data; however, the owner is usually given the option to disable this password to be sure (accepting that doing so may void the warranty).
If an organisation with strong data-at-rest requirements also needs to maximise write performance to meet performance targets (write-intensive applications on older hardware, for example), then advanced security tooling can be leveraged to support mass deployments and reduce the risk of the attack highlighted by Meijer & van Gastel (e.g. Microsoft’s TPM Measured Boot, which validates firmware hashes of drives using a centralised attestation server).
Assuming the vulnerabilities identified by Carlo Meijer and Bernard van Gastel are avoided, by selecting disk drives whose Opal implementation does not suffer from the issues identified and/or by configuring them so the issues do not apply, the main beneficiaries of SEDs are likely to be: organisations that need to extract every ounce of compute power from applications at a wide scale and can justify the overhead of properly setting up and maintaining the KEKs (such as SaaS providers); those that can benefit from crypto erase to guarantee security for redeployed storage (such as IaaS providers); and organisations with a large estate of legacy compute platforms lacking modern CPUs (though the latter are likely to have more pressing concerns than data at rest).
While the main benefit of dedicated hardware within the drive is easy to see, it is less of an advantage than it once was. Modern CPUs (of roughly the last ten years) have built-in hardware acceleration for AES encryption/decryption, and modern disk systems use the NVMe interface specification, which maps I/O to shared memory over the PCIe interface and parallelises I/O across multiple CPU cores. Coupled with PCIe-based SSDs, this means software-based encryption/decryption can be done without significant performance loss compared with older CPUs and disk interface specifications such as SCSI/SAS and ATA/SATA.
SDE reduces the implementation and key-management overheads of shared keys, increases the efficiency of key recovery, is a portable security control that works across all disk vendors, and permits physical recovery (if a drive is physically damaged, a SED cannot be recovered). So there is a lot to be said for it in most general-purpose applications running on modern hardware, provided the key-management procedures are followed properly. In most non-service-provider organisations with up-to-date infrastructure, the marginal performance decrease of software disk encryption is generally outweighed by its benefits within the datacentre.
Mobile platforms are more of a concern for SDE, since it is subject to a side-channel attack if attackers can access system memory (regardless of the presence of a TPM or two-factor USB key), an attack to which SEDs are not subject; on fixed infrastructure, physical security should prevent the physical access such an attack requires. Mitigation is possible (e.g. by storing keys in areas other than RAM, such as CPU registers, or by performing memory encryption) but imposes a performance hit (e.g. 5-30% for Spectre v2 mitigations; more recently, however, Microsoft has indicated it will adopt Retpoline, which has a negligible impact).