Memory and caches
The memory storage of a CPU, alluded to in the last section, is fairly minimal. It is limited to the handful of words held in general-purpose registers, plus the special-purpose registers in some limited cases. Registers are very fast, owing to their construction and on-die location, but they are not suited for bulk storage. Modern machines connect CPUs over a bus or buses to the main memory, a very large block of randomly addressable bytes. This random addressability is important: it means that, unlike with other kinds of storage, the cost of retrieving the 0th byte from RAM is no different from the cost of retrieving the 1,000,000,000th byte. We programmers don't have to do any goofy trickery to ensure that our structures appear at the front of RAM in order to be faster to retrieve or modify, whereas physical location in storage is a pressing concern for spinning hard disks and tape drives. Exactly how our CPUs interact with the memory varies between platforms, and the discussion that follows is heavily indebted to Mark Batty's description in his 2014 PhD dissertation, The C11 and C++11 Concurrency Model.
In a machine that exposes a sequentially consistent model of memory access, every memory load or store must be made in lockstep with every other, including in systems with multiple CPUs or multiple threads of execution per CPU. This limits important optimizations (consider the challenge of working around memory stalls in a sequentially consistent model), and so neither of the processors we'll be considering in this book offers this model. Sequential consistency does show up in the literature by name, because it is so easy to reason about, and it is worth knowing, especially when studying the atomics literature.
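To make this concrete, here is a minimal Rust sketch of the classic store-buffering experiment, using Rust's Ordering::SeqCst (treated properly in Chapter 6) as a stand-in for a sequentially consistent machine. Under sequential consistency, at least one of the two threads must observe the other's store, so r1 == 0 && r2 == 0 is impossible; relax the orderings and that outcome can appear on real hardware.

use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

static X: AtomicUsize = AtomicUsize::new(0);
static Y: AtomicUsize = AtomicUsize::new(0);

fn main() {
    let a = thread::spawn(|| {
        X.store(1, Ordering::SeqCst);
        Y.load(Ordering::SeqCst) // what thread A saw of y
    });
    let b = thread::spawn(|| {
        Y.store(1, Ordering::SeqCst);
        X.load(Ordering::SeqCst) // what thread B saw of x
    });
    let (r1, r2) = (a.join().unwrap(), b.join().unwrap());
    // With SeqCst, (r1, r2) may be (0, 1), (1, 0), or (1, 1), but never
    // (0, 0): some total order of the four operations must place one
    // store before the opposing load.
    println!("r1 = {}, r2 = {}", r1, r2);
}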
The x86 platform behaves as if it were sequentially consistent, except that every thread of execution maintains a FIFO buffer for its writes prior to their being flushed to the main memory. In addition, there is a global lock for the coordination of atomic reads and writes between threads of execution. There's a lot to unpack here. Firstly, I use the words load and store interchangeably with read and write, as does most of the literature. Secondly, there is a distinction between plain loads/stores and atomic loads/stores. Atomic loads/stores are special in that their effects can be coordinated between threads of execution, with varying degrees of guarantee. The x86 processor platform provides fence instructions that force the flush of write buffers, stalling other threads of execution attempting to access the written range of main memory until the flush is completed. This is the purpose of the global lock. Without atomicity, writes will be flushed willy-nilly. On x86 platforms, writes to an addressable location in memory are coherent, meaning they are globally ordered, and reads will see values from that location in the order of the writes. Compared to ARM, the way this works on x86 is very simple: writes land in a thread's FIFO buffer and are flushed from there, in order, to the main memory.
Let's look at an example. Taking inspiration from Batty's excellent dissertation, consider a setup where a parent thread sets two variables, x and y, to 0, then spawns two threads called, say, A and B. Thread A is responsible for setting x and then y to 1, whereas thread B is responsible for loading the value of x into a thread-local variable called thr_x and y into a thread-local variable called thr_y. This looks something like the following:
WRITE x := 0
WRITE y := 0
[A] WRITE x := 1
[A] WRITE y := 1
FLUSH A
[B] READ x
[B] WRITE thr_x := x
[B] READ y
[B] WRITE thr_y := y
In this specific example, thr_x == 1 and thr_y == 1. Had the flushes been ordered differently by the CPU, this outcome would have been different. For instance, look at the following:
WRITE x := 0
WRITE y := 0
[A] WRITE x := 1
[A] WRITE y := 1
[B] READ x
[B] WRITE thr_x := x
FLUSH A
[B] READ y
[B] WRITE thr_y := y
The consequence of this is that thr_x == 0 and thr_y == 1. Without any other coordination, the only other valid interleaving is thr_x == 0 and thr_y == 0. That is, because thread A's write buffer is flushed as a unit and in FIFO order, a thread that reads the new value of x and then reads y must also see the new value of y: thr_x == 1 and thr_y == 0 cannot occur. This kind of stinks, unless you enjoy this little program as a parlor trick. We want determinism out of our programs. To that end, x86 provides different fence and lock instructions that control how and when threads flush their local write buffers, and how and when threads may read byte ranges from the main memory. The exact interplay here is… complicated. We'll come back to it in great detail in Chapter 3, The Rust Memory Model – Ownership, References, and Manipulation and Chapter 6, Atomics – the Primitives of Synchronization. Suffice it to say that, for now, there's an SFENCE instruction available that forces the write buffer to be flushed, which is enough to make this example behave sequentially consistently. We can employ this instruction as follows:
WRITE x := 0
WRITE y := 0
[A] WRITE x := 1
[A] WRITE y := 1
[A] SFENCE
[B] SFENCE
[B] READ x
[B] WRITE thr_x := x
[B] READ y
[B] WRITE thr_y := y
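For reference, a rough Rust rendering of this experiment might look like the following sketch. It is not code from later chapters; it simply uses SeqCst atomics as the closest portable stand-in for the fenced listing above, and leaves the real discussion of orderings to Chapter 3 and Chapter 6. Which values thread B reads still depends on how the threads happen to interleave, but thread A's stores become visible in program order.

use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

static X: AtomicUsize = AtomicUsize::new(0);
static Y: AtomicUsize = AtomicUsize::new(0);

fn main() {
    let a = thread::spawn(|| {
        // Thread A: write x, then y. SeqCst stands in for the SFENCE,
        // ensuring the stores become visible to other threads in this order.
        X.store(1, Ordering::SeqCst);
        Y.store(1, Ordering::SeqCst);
    });
    let b = thread::spawn(|| {
        // Thread B: read x, then y, into thread-local variables.
        let thr_x = X.load(Ordering::SeqCst);
        let thr_y = Y.load(Ordering::SeqCst);
        (thr_x, thr_y)
    });
    a.join().unwrap();
    let (thr_x, thr_y) = b.join().unwrap();
    println!("thr_x = {}, thr_y = {}", thr_x, thr_y);
}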
From the fenced listing, we get thr_x == 1 and thr_y == 1. The ARM processor is a little different: there, thr_x == 1 and thr_y == 0 is a valid outcome. ARM's model is not built out of a global lock, fences, and a write buffer in the way x86's is.
In ARM, a string of instructions, called a propagation list for reasons that will soon be clear, is maintained in an uncommitted state, meaning that the thread that originated the instructions will see them as in-flight, but they will not have been propagated to other threads. These uncommitted instructions may have been performed, resulting in side-effects in memory, or not, allowing for speculative execution and the other performance tricks discussed in the previous section. Specifically, reads may be satisfied from a thread's local propagation list, but writes may not. Branch instructions cause the propagation list that led to the branch to be committed, potentially out of the order specified by the programmer. The memory subsystem keeps track of which propagation list has been sent to which thread, meaning that it is possible for a thread's private loads and stores to be out of order and for committed loads and stores to appear out of order between threads. Coherency is maintained on ARM with more active participation from the memory subsystem.
Whereas on x86, the main memory is something that has actions done to it, on ARM, the memory subsystem responds to requests and may invalidate previous, uncommitted requests. A write request involves a read-response event, by which it is uniquely identified, and a read request must reference a write.
This sets up a data-dependency chain. Responses to reads may be invalidated at a later time, but not writes. Each location receives a coherence-commitment ordering, which records a global order of writes, built up per-thread as propagation lists are propagated to threads.
This is very complicated. The end result is that writes to a thread's view of memory may be performed out of programmer order, writes committed to main memory may also be done out of order, and reads can be requested out of programmer order. Therefore, owing to the lack of branches in our example, the following execution is perfectly valid:
WRITE x := 0
WRITE y := 0
[A] WRITE x := 1
[B] READ x
[A] WRITE y := 1
[B] WRITE thr_y := y
[B] WRITE thr_x := x
[B] READ y
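In Rust, the closest analogue to running this experiment with no ordering requested is Ordering::Relaxed, sketched below with the same made-up setup as before. Relaxed guarantees only that each individual access is atomic; the compiler and an ARM CPU remain free to reorder the accesses, so outcomes such as thr_x == 1 and thr_y == 0 are permitted.

use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

static X: AtomicUsize = AtomicUsize::new(0);
static Y: AtomicUsize = AtomicUsize::new(0);

fn main() {
    let a = thread::spawn(|| {
        // No ordering requested: these stores may become visible to
        // other threads in either order.
        X.store(1, Ordering::Relaxed);
        Y.store(1, Ordering::Relaxed);
    });
    let b = thread::spawn(|| {
        // Likewise, these loads may be satisfied out of program order.
        let thr_x = X.load(Ordering::Relaxed);
        let thr_y = Y.load(Ordering::Relaxed);
        (thr_x, thr_y)
    });
    a.join().unwrap();
    let (thr_x, thr_y) = b.join().unwrap();
    println!("thr_x = {}, thr_y = {}", thr_x, thr_y);
}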
ARM provides instructions for the control of dependencies, called barriers. There are three types of dependency in ARM. An address dependency means that a load is used to compute the address of a subsequent memory access. A control dependency means that the program flow leading to a memory access depends on a load. We're already familiar with the data dependency.
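From Rust we will rarely, if ever, issue these barriers by hand; the compiler emits them on our behalf when we use atomic types and std::sync::atomic::fence. The sketch below is purely illustrative, with made-up names and values, and only shows the shape of each dependency in ordinary code; a SeqCst fence typically lowers to a dmb-style barrier on ARM and an mfence or locked instruction on x86.

use std::sync::atomic::{fence, AtomicUsize, Ordering};

// Illustrative only: the function and its arguments are invented for
// this sketch.
fn dependencies(table: &[u64], idx: &AtomicUsize, flag: &AtomicUsize, out: &AtomicUsize) {
    // Address dependency: a loaded value feeds the address of the
    // next memory access.
    let i = idx.load(Ordering::Relaxed);
    let loaded = table[i % table.len()];

    // Data dependency: a loaded value feeds the value written by a
    // later store.
    out.store((loaded * 2) as usize, Ordering::Relaxed);

    // Control dependency: whether the access behind the branch
    // happens at all is decided by a loaded value.
    if flag.load(Ordering::Relaxed) == 1 {
        fence(Ordering::SeqCst); // an explicit, portable barrier request
    }
}

fn main() {
    let table = [1u64, 2, 3, 4];
    let (idx, flag, out) = (AtomicUsize::new(2), AtomicUsize::new(1), AtomicUsize::new(0));
    dependencies(&table, &idx, &flag, &out);
    println!("out = {}", out.load(Ordering::Relaxed));
}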
While there is a lot of main memory available, it's not particularly fast. Well, let's clarify that: fast here is a relative term. A photon in a vacuum will travel about 30.5 centimeters in 1 nanosecond, meaning that in ideal circumstances I, on the west coast of the United States, ought to be able to send a message to the east coast of the United States and receive a response in about 80 milliseconds. Of course, I'm ignoring request-processing time, the realities of the internet, and other factors; we're just dealing with ballpark figures here. Consider that 80 milliseconds is 80,000,000 nanoseconds. A read access from a CPU to the main memory takes around 100 nanoseconds, give or take, depending on your specific computer architecture, the details of your chips, and other factors. All this is to clarify that, when we say not particularly fast, we're working well outside normal human time scales.
It is difficult, sometimes, to keep from misjudging things as fast enough. Say we have a 4 GHz processor. How many clock cycles do we get per nanosecond? It turns out the answer is 4. Now say we have an instruction that needs to access main memory (which, remember, takes around 100 nanoseconds) and happens to be able to do its work in exactly 4 cycles, or one nanosecond. That instruction will then be stalled for 99 nanoseconds while it waits, meaning we're potentially losing out on 99 instructions that could have been executed by the CPU. The CPU will make up some of that loss with its optimization tricks, but these only go so far unless our computation is very, very lucky.
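The arithmetic is worth spelling out once, since we'll lean on it repeatedly. A small sketch, with made-up but representative figures:

fn main() {
    // Made-up but representative figures: a 4 GHz clock and a
    // 100 nanosecond round trip to main memory.
    let clock_hz = 4_000_000_000.0_f64;
    let memory_access_ns = 100.0_f64;
    let instruction_cycles = 4.0_f64;

    let cycles_per_ns = clock_hz / 1_000_000_000.0; // 4 cycles per nanosecond
    let stall_cycles = cycles_per_ns * memory_access_ns; // 400 cycles spent waiting
    let lost_instructions = (stall_cycles - instruction_cycles) / instruction_cycles;

    println!("cycles per nanosecond:    {}", cycles_per_ns);
    println!("cycles stalled on memory: {}", stall_cycles);
    println!("instructions' worth lost: {}", lost_instructions); // 99
}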
In an effort to avoid the performance impact of the main memory on computation, processor manufacturers introduced caching between the processor and the main memory. Many machines these days have several layers of data cache, plus an instruction cache, called dCACHE and iCACHE respectively, tools we'll be using later. The dCACHE commonly consists of three layers, each layer successively larger than the last but also slower, for cost or power reasons. The lowest, smallest layer is called L1, the next L2, and so forth. CPUs read into caches from the main memory in working blocks (or simply blocks) sized to the cache being read into. Memory accesses will preferentially be served from the L1 cache, then L2, then L3, and so forth, with a time penalty at each level for the miss and the subsequent block read. Cache hits, however, are significantly faster than going directly to the main memory. An L1 dCACHE reference takes around 0.5 nanoseconds, fast enough that our hypothetical 4-cycle instruction takes twice as long as the L1 access it requires. An L2 dCACHE reference takes around 7 nanoseconds, still a fair sight better than the main memory. Of course, the exact numbers will vary from system to system, and we'll do quite a bit in the next chapter to measure them directly. Keeping the ratio of cache hits to misses high is a significant component of building fast software on modern machines. We'll never get away from the CPU cache in this book.
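Ahead of those measurements, a crude way to feel the effect on your own machine is to walk a large array twice: once sequentially, so that each block fetched from memory is used in full, and once with a large stride, so that nearly every access pulls in a fresh block. The sketch below is illustrative rather than a careful benchmark; the buffer size, the stride, and the timings you see will all vary by machine.

use std::time::Instant;

fn main() {
    // 64 MiB of u8, comfortably larger than a typical L3 cache.
    const LEN: usize = 64 * 1024 * 1024;
    // A stride much larger than a typical 64-byte cache block.
    const STRIDE: usize = 4096;
    const ACCESSES: usize = LEN / STRIDE;

    let data = vec![1u8; LEN];

    // Same number of accesses in both walks; only the pattern changes.
    let start = Instant::now();
    let mut seq_sum: u64 = 0;
    for i in 0..ACCESSES {
        seq_sum += u64::from(data[i]); // neighbouring bytes: mostly cache hits
    }
    let sequential = start.elapsed();

    let start = Instant::now();
    let mut strided_sum: u64 = 0;
    for i in 0..ACCESSES {
        strided_sum += u64::from(data[i * STRIDE]); // far-apart bytes: mostly misses
    }
    let strided = start.elapsed();

    println!("sequential: {:?} (sum = {})", sequential, seq_sum);
    println!("strided:    {:?} (sum = {})", strided, strided_sum);
}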