1. Preface

In How the CPU Cache Works (1), the motivation and purpose of introducing the Cache were explained in detail: it compensates for the performance bottleneck of main memory. That article also briefly introduced some Cache-related terms. This article now addresses the two questions raised there: How do we ensure that the data required by the CPU is in the Cache? And if it is in the Cache, how do we find the specific location of the data block?

The data transfer between memory and the Cache is carried out in blocks, as mentioned in the discussion of the computer storage hierarchy. This block is called a Cache Block, although some books refer to it as a Cache Line, as in Section 2.2.1 of Modern Operating System Principles and Implementations. To make the following content easier to understand, the DRAM that makes up main memory is analyzed and explained first, which helps in understanding the mapping between the Cache and memory blocks.

2. Memory (main memory)

2.1 DRAM and SRAM

Memory is mainly composed of Dynamic Random Access Memory (DRAM), which is one kind of Random Access Memory (RAM). RAM comes in two types: Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM). Because they are built with different processes and electronic components, DRAM and SRAM differ significantly in many characteristics: DRAM is cheaper, slower, denser (packing more data into a smaller space), and consumes less power than SRAM. As a result, DRAM is used for main memory and graphics memory in most devices such as PCs and game consoles, while SRAM serves as the CPU's high-speed cache.

Dynamic Random Access Memory (DRAM) stores each bit of binary data using one capacitor (exploiting the capacitor's charge and discharge characteristics) and one transistor; each bit in SRAM uses six transistors.

DRAM is very sensitive to interference (such as electrical noise), and because of factors such as leakage current, the bit value stored in a DRAM cell (one capacitor and one transistor) is retained only for roughly 10 to 100 ms, after which the charge is lost and the bit of information with it. However, the CPU's clock cycle is measured in nanoseconds, so a retention time of milliseconds is a long time for a computer, long enough for the memory system to read out the original charge and refresh every cell. This process must be repeated periodically, which is where the "dynamic" in dynamic memory comes from. SRAM, in contrast, is far less sensitive to electrical noise and signal interference: once the source of interference around the SRAM disappears, its circuit immediately recovers the original potential.

Ps: The so-called "capacitor leakage" means that a capacitor can lose a little of its charge even while a voltage is applied, if there are devices nearby (such as transistors) that draw a small current even when they are "off".
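To make the retention-and-refresh idea above concrete, here is a minimal C sketch of a toy model (the cell count, retention window, and refresh interval below are made-up numbers for illustration, not real device parameters):

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model of DRAM retention and refresh. All numbers are assumptions
 * chosen for illustration, not real device parameters. */
#define NUM_CELLS    8
#define RETENTION_MS 64        /* charge is considered lost after this long */

typedef struct {
    bool charge;               /* true = capacitor holds charge (logic 1)   */
    long last_written_ms;      /* time the cell was last written/refreshed  */
} dram_cell;

/* Reading a cell: if the retention window has passed without a refresh,
 * the stored bit is gone. */
static bool read_cell(const dram_cell *c, long now_ms) {
    if (now_ms - c->last_written_ms > RETENTION_MS)
        return false;          /* leaked: information lost */
    return c->charge;
}

/* Refresh: read the value out and write it back, restarting the clock.
 * Real DRAM does this row by row, continuously and periodically. */
static void refresh(dram_cell *c, long now_ms) {
    c->charge = read_cell(c, now_ms);
    c->last_written_ms = now_ms;
}

int main(void) {
    dram_cell cells[NUM_CELLS] = {0};
    cells[3].charge = true;                     /* store a 1 at time 0 */
    cells[3].last_written_ms = 0;

    for (long t = 0; t <= 200; t += 50)         /* refresh every 50 ms */
        for (int i = 0; i < NUM_CELLS; i++)
            refresh(&cells[i], t);

    printf("cell 3 at t=200ms: %d\n", read_cell(&cells[3], 200)); /* still 1 */
    return 0;
}
```

The only point of this sketch is the timing relationship: as long as every cell is refreshed within its retention window, the stored bit survives indefinitely.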

The following table lists the main differences between DRAM and SRAM:

If you are interested in capacitors, transistors, and other components, or want an in-depth understanding of their underlying principles, I recommend reading books on high-frequency, analog, and digital electronics, for example Fundamentals of Analog Electronic Technology by Tong Shibai and Hua Chengying, High-Frequency Electronic Circuits by Zhang Suwen, and Digital Electronic Technology translated by Yu Qiu and Xiong Jie. These were also my textbooks when I majored in electronic information in college. I still vaguely remember the time and effort I put into those courses, which is why these titles remain familiar to me today.

2.2 DRAM Variants

Today there are many enhanced DRAM variants, all based on the traditional DRAM cell with various optimizations and performance improvements. They are shown in the figure below; the specific optimizations of each derivative version are not enumerated here.

2.3 DRAM Cell (bit)

DRAM is typically arranged as a rectangular array of charge-storage cells, where each data bit consists of one capacitor and one transistor. The horizontal line connecting the cells in each row is called the word-line, and the vertical line is called the bit-line. Each column of cells is served by two bit-lines, often referred to as the "+" and "-" bit-lines, each connected to all of the storage cells in that column.

A single DRAM cell is shown in the figure below: the (- | | -) symbol represents the capacitor, and the other symbol the transistor. A DRAM cell stores one bit of binary information in the form of charge on the capacitor (that is, whether or not charge is present). When a bit needs to be written to memory, the transistor is used to charge or discharge the capacitor. A charged capacitor represents logic high, or "1", while a discharged capacitor represents logic low, or "0". Charging and discharging are carried out through the word-line and bit-line, as shown in the figure below.

▲ Image from ALLABOUTCIRCUITS
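As a rough software analogy (a sketch under my own simplifying assumptions, not a circuit simulation), the access transistor can be thought of as a switch controlled by the word-line: only while the word-line is asserted does the bit-line charge or discharge the capacitor.

```c
#include <stdbool.h>
#include <stdio.h>

/* Software analogy of a 1T1C DRAM cell: the word-line acts as the switch
 * (transistor gate); the bit-line carries the value that charges or
 * discharges the capacitor. Purely illustrative, not a circuit model. */
typedef struct {
    bool capacitor_charged;    /* charged = logic 1, discharged = logic 0 */
} dram_cell;

/* Write: with the word-line asserted, the bit-line level charges (1) or
 * discharges (0) the capacitor. With the word-line low, nothing happens. */
static void cell_write(dram_cell *cell, bool word_line, bool bit_line) {
    if (word_line)
        cell->capacitor_charged = bit_line;
}

/* Read: with the word-line asserted, the capacitor's charge appears on the
 * bit-line (real DRAM also needs a sense amplifier and a rewrite after). */
static bool cell_read(const dram_cell *cell, bool word_line) {
    return word_line ? cell->capacitor_charged : false;
}

int main(void) {
    dram_cell cell = { .capacitor_charged = false };
    cell_write(&cell, true, true);                    /* store a 1          */
    printf("read: %d\n", cell_read(&cell, true));
    cell_write(&cell, false, false);                  /* word-line low: no-op */
    printf("still holds: %d\n", cell_read(&cell, true));
    return 0;
}
```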

The details of the underlying principle of how the transistor and capacitor work together to generate high and low levels and thus store bit information will not be discussed in this article. Interested readers can look it up on Google or Baidu themselves.

The function and circuit symbol of the transistor as an electronic component are as follows:

A transistor is a semiconductor device with three terminals: a base, an emitter, and a collector. It is used in electronic circuits as a rectifier, amplifier, switch, and so on.

www.electrical-symbols.com/electric-el…

The function and symbol of the capacitor can be seen here:

Capacitors are used not only as coupling elements in circuits, or as resonant elements in oscillators and filters, but perhaps most widely for decoupling.

www.electronics-notes.com/articles/an…

2.3.1 DRAM Array Layout

The DRAM cell in the figure above is just one cell of the memory array. DRAM cells are laid out as a matrix, in which a word-line controls the gates of the access transistors of all DRAM cells in a row, while a bit-line collects data from the many DRAM cells in a column. Increasing the column length increases the capacity of the DRAM array, but it also increases the bit-line capacitance C_line, limiting the strength of the signal.

The DRAM cell array layout is shown below. The gray part of the figure is the memory array, arranged as a grid of rows and columns, with a row decoder and a column decoder used to access rows and columns respectively. The gray area is made up of tens of thousands of cells (depending on the actual memory size), each of which is one DRAM cell like the one shown in the diagram in Section 2.3.

▲ Image from ALLABOUTCIRCUITS

Below is a more complete rendering of the fully connected matrix layout inside DRAM. You can see the detailed internal layout of the DRAM and the roles that the row and column decoders play.

▲ Image from QDPMA
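To connect the figures above to a software intuition, here is a small C sketch that splits an address into a row part and a column part to index a toy cell array, mimicking what the row and column decoders do (the 16 x 16 geometry and the bit-field split are assumptions for demonstration, not the layout of any real chip):

```c
#include <stdint.h>
#include <stdio.h>

/* Toy DRAM array addressed by row and column, as the decoders would do.
 * Geometry is an assumption for illustration only. */
#define ROW_BITS 4                      /* 2^4 = 16 word-lines (rows)    */
#define COL_BITS 4                      /* 2^4 = 16 bit-lines  (columns) */

static uint8_t dram_array[1 << ROW_BITS][1 << COL_BITS]; /* one bit per cell */

/* The row decoder drives one word-line (selects a row); the column decoder
 * then picks one bit-line out of that row. */
static uint8_t dram_read(uint16_t addr) {
    uint16_t row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1);
    uint16_t col = addr & ((1 << COL_BITS) - 1);
    return dram_array[row][col];
}

static void dram_write(uint16_t addr, uint8_t bit) {
    uint16_t row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1);
    uint16_t col = addr & ((1 << COL_BITS) - 1);
    dram_array[row][col] = bit & 1;
}

int main(void) {
    dram_write(0xA7, 1);                /* row 0xA, column 0x7 */
    printf("cell at address 0xA7 = %u\n", dram_read(0xA7));
    return 0;
}
```

In real DRAM the row address and column address are typically multiplexed over the same pins and an entire row is activated onto the bit-lines before a column is selected, but the index-splitting idea is the same.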

Having covered the characteristics of DRAM and the internal layout of main memory at length, we will now focus on the mapping between memory and the Cache and on how a match is confirmed.

3. Direct mapping

Direct mapping is one type of Cache structure. It is the simplest, least efficient, and most extreme placement mechanism: each word in memory is assigned exactly one location in the Cache; that is, any block in memory maps directly to a unique location in the Cache. Remember that main memory is a set of individually addressable DRAM storage locations.
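As a preview of the next article, here is a minimal C sketch of the direct-mapped placement rule under assumed parameters (the 64-line cache and 64-byte block size are illustrative, not taken from this article or from any particular CPU): the block number modulo the number of cache lines gives the unique line a block can occupy, and the remaining high-order bits form the tag used to confirm a match.

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal sketch of direct-mapped placement; the sizes below are
 * assumptions for illustration, not real hardware parameters. */
#define BLOCK_SIZE   64   /* bytes per memory block / cache line       */
#define CACHE_LINES  64   /* number of lines in the cache (power of 2) */

/* block number = address / block size */
static uint64_t block_number(uint64_t addr) { return addr / BLOCK_SIZE; }

/* the one line this block is allowed to occupy */
static uint64_t line_index(uint64_t addr) { return block_number(addr) % CACHE_LINES; }

/* the remaining high-order bits, stored with the line to confirm a hit */
static uint64_t tag_bits(uint64_t addr) { return block_number(addr) / CACHE_LINES; }

int main(void) {
    uint64_t addr = 0x12345;
    printf("address 0x%llx -> block %llu -> line %llu, tag %llu\n",
           (unsigned long long)addr,
           (unsigned long long)block_number(addr),
           (unsigned long long)line_index(addr),
           (unsigned long long)tag_bits(addr));
    return 0;
}
```

Because every memory block has exactly one possible line, a lookup needs only this index computation plus a single tag comparison, which is what makes direct mapping the simplest scheme.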

For the details of how direct mapping works, read the next article, How the CPU Cache Works (3).