CHAPTER 3

Related Notes
No.240818


Overview

We introduce Virtual Memory for the following reasons:

  • Without it, the capacity of physical memory devices is coupled to the address space determined by the ISA.
  • Programming against physical addresses (PA) alone is not an easy task for programmers or compilers.
  • Physical addressing scales poorly and is not conducive to scheduling.

With virtual memory, each program behaves as if it had exclusive access to the whole address space, and therefore does not need to consider address restrictions.

Protection

Another purpose of using VM is permissions and protection. Programs’ access to data is isolated because their virtual address spaces are different, even though they appear to be accessing the same address.
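The isolation can be sketched as two per-process page tables in which the same virtual page number maps to different physical frames (the table contents and frame numbers are made up for illustration):

```python
# Hypothetical page tables for two processes: the SAME virtual page
# maps to DIFFERENT physical frames, so process A can never touch
# process B's data even though the addresses look identical.
pt_a = {0x1: 0x10}
pt_b = {0x1: 0x20}

va = 0x1ABC                       # same VA in both processes
vpn, off = va >> 12, va & 0xFFF   # 4 KB pages
pa_a = (pt_a[vpn] << 12) | off    # -> 0x10ABC
pa_b = (pt_b[vpn] << 12) | off    # -> 0x20ABC
print(hex(pa_a), hex(pa_b))
```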

Sharing

Code segments and data segments shared by multiple programs do not need to be copied multiple times; the OS maps them to the same physical frames, e.g., printf(). This saves physical memory capacity and avoids Memory Bloat.
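Sharing can be sketched the same way: two per-process page tables whose entries point at the same physical frame (names and frame numbers here are hypothetical):

```python
# Hypothetical per-process page tables (VPN -> PFN). Both processes
# map one of their virtual pages to the SAME physical frame 0x40, so
# the shared code (e.g. printf) exists in memory only once.
page_table_a = {0x10: 0x40, 0x11: 0x2A}
page_table_b = {0x20: 0x40, 0x21: 0x33}

shared = set(page_table_a.values()) & set(page_table_b.values())
print(shared)   # the frame holding the shared code
```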

Address Translation

In this section, we will talk about Paging, the most commonly used implementation of the virtual memory model.

In a paging scheme, the virtual address space is divided into pages and the physical address space into frames; both are fixed-size blocks of data. A page and a frame are the same size, so any page can be placed in any frame and the in-page offset carries over unchanged.

Demand Paging

Pages are loaded into physical memory only when they are needed. Suppose a program is allowed to use a virtual address (call it VA0); before VA0 is actually accessed, the VA → PA mapping is not established and there is no corresponding PTE. Only when a Page Fault occurs does the OS know it should fetch the data from disk and place it in memory.

This brings the following advantages:

  • Avoids redundant memory usage and saves traffic and bandwidth.
  • Allows the processor to run programs larger than the memory device can hold, by decoupling physical capacity from the address space.
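The lazy mapping described above can be sketched as follows; the page table starts empty and the fault handler establishes the VA → PA mapping on first touch (all structures and the trivial frame allocator are hypothetical):

```python
# Minimal demand-paging sketch: the PTE for a page is created only
# inside the page-fault handler, on the first access.
PAGE_SIZE = 4096

page_table = {}          # VPN -> PFN, filled lazily
next_free_frame = [0]    # trivial bump allocator for frames
faults = []              # record which VPNs faulted

def handle_page_fault(vpn):
    """Pretend to fetch the page from disk, then map it."""
    faults.append(vpn)
    pfn = next_free_frame[0]
    next_free_frame[0] += 1
    page_table[vpn] = pfn

def translate(va):
    vpn, offset = va // PAGE_SIZE, va % PAGE_SIZE
    if vpn not in page_table:     # no PTE yet -> page fault
        handle_page_fault(vpn)
    return page_table[vpn] * PAGE_SIZE + offset

pa = translate(0x1234)   # first touch of VPN 1: faults, then maps it
translate(0x1FFF)        # same page: no further fault
print(hex(pa), faults)
```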

For example, with 4KB pages the address split looks like this:

(VPN ─► PFN)        (the address the program computes)

 31                   12 11                    0
┌────────────────────────┬─────────────────────┐
│           VPN          │   offset ( 2^12 )   │
└────────────────────────┴─────────────────────┘

  ▬ VPN: Virtual Page Number
  ▬ PFN: Physical Frame Number

(32-bit ISA; pages and frames are the same size.)
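The split in the figure can be expressed with a few bit operations (function names here are illustrative):

```python
# Splitting a 32-bit virtual address into VPN and offset for
# 4 KB (2^12-byte) pages, as in the figure above.
OFFSET_BITS = 12
OFFSET_MASK = (1 << OFFSET_BITS) - 1    # 0xFFF

def split(va):
    """Return (VPN, offset) for a virtual address."""
    return va >> OFFSET_BITS, va & OFFSET_MASK

def combine(pfn, offset):
    """PA = PFN || offset; only the page number gets translated."""
    return (pfn << OFFSET_BITS) | offset

vpn, off = split(0xDEADBEEF)
print(hex(vpn), hex(off))   # 0xdeadb 0xeef
```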

Single-level PT
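A single-level page table can be sketched as one flat array indexed directly by the VPN; the PTE layout below (just a valid bit and a PFN) is an assumption for illustration:

```python
# Single-level page table: 32-bit VA, 4 KB pages -> 2^20 PTEs,
# all allocated up front whether the program uses them or not.
OFFSET_BITS = 12
NUM_ENTRIES = 1 << (32 - OFFSET_BITS)    # 2^20 entries

page_table = [(False, 0)] * NUM_ENTRIES  # (valid, pfn)
page_table[0x00001] = (True, 0x80)       # map VPN 1 -> PFN 0x80

def translate(va):
    vpn, offset = va >> OFFSET_BITS, va & 0xFFF
    valid, pfn = page_table[vpn]         # single indexed lookup
    if not valid:
        raise RuntimeError("page fault")
    return (pfn << OFFSET_BITS) | offset

print(hex(translate(0x1ABC)))   # 0x80abc
```

Note the cost made visible by the sketch: the table always holds 2^20 entries per process, even for a program that touches a handful of pages, which is why multi-level page tables exist.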

TLB

TLB in MIPS32

MIPS32 has the following TLB-related registers:

  • EntryHi
  • EntryLo {0/1}
  • Index

EntryHi is used to check for a TLB Hit; it mainly contains the VPN and the ASID (used to check that the address space matches). Each EntryHi corresponds to two EntryLo registers. MIPS32 supports variable page sizes via a bit mask; in general, one VPN corresponds to two PFNs (an even/odd page pair).

Index identifies which TLB entry the current lookup hit; when filling the TLB, this register selects the entry to be replaced.

Programs interact with the TLB through these registers, and the MMU also performs address translation based on them, so to some extent they can be seen as a cache of the TLB. To update an entry, the value is first written to a general-purpose register and then moved into the TLB register by a privileged move-to-CP0 instruction. The MIPS32 page table itself is managed by the OS; when handling exceptions in software, such as page faults, the OS interacts with the TLB through the TLB instructions:

  • TLBWI: write the contents of the TLB CP0 registers into the TLB entry selected by Index
  • TLBWR: write the contents of the TLB CP0 registers into the TLB entry selected by Random
  • TLBR: read the TLB entry selected by Index into the TLB CP0 registers
  • TLBP: probe the TLB using EntryHi and write the result into Index
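A toy model of the lookup scheme described above may help; fields are heavily simplified (real entries also carry PageMask, global, valid and dirty bits), and all values are made up:

```python
# Toy TLB in the spirit of MIPS32: each entry matches on (VPN2, ASID)
# like EntryHi, and holds two PFNs like EntryLo0/EntryLo1 for the
# even/odd pages of the pair.
tlb = [
    {"vpn2": 0x10, "asid": 3, "pfn": (0xA0, 0xA1)},
    {"vpn2": 0x22, "asid": 3, "pfn": (0xB4, 0xB5)},
]

def tlb_probe(vpn2, asid):
    """Like TLBP: return the matching index, or None on a miss."""
    for i, e in enumerate(tlb):
        if e["vpn2"] == vpn2 and e["asid"] == asid:
            return i
    return None

def tlb_lookup(vpn, asid):
    idx = tlb_probe(vpn >> 1, asid)   # one VPN2 covers two pages
    if idx is None:
        raise RuntimeError("TLB miss -> refill exception")
    return tlb[idx]["pfn"][vpn & 1]   # even/odd bit selects the PFN

print(hex(tlb_lookup(0x20, 3)))   # even page of VPN2 0x10 -> 0xa0
print(hex(tlb_lookup(0x21, 3)))   # odd page -> 0xa1
```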

PTBR

The Page Table Base Register, such as CR3 on x86, plays an important role in speeding up page-fault handling. MIPS32 only has the Context register; MIPS64 adds the XContext register, which extends the virtual address space (up to 40 bits).

Context holds the base address of the in-memory page table. The low 22 bits of this base address are 0, i.e., the table is aligned on a 4M boundary. Although providing such alignment in physical or unmapped memory would be inefficient, the design intent is to place the table in the kseg2 mapped region (mentioned earlier). Its main job is to assist TLB Refill: on a TLB Miss that requires a refill, the matching entry is fetched from the region pointed to by PTEBase for a fast refill.
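The refill path above can be sketched as indexing the 4M-aligned table with the faulting page pair; the entry size and indexing below are simplified assumptions, not the exact Context register bit layout:

```python
# Simplified TLB-refill address computation (assumed layout).
PTE_PAIR_SIZE = 16        # two PTEs per VPN2, 8 bytes each (assumed)
PTE_BASE = 0xC0000000     # hypothetical 4M-aligned table base in kseg2

def refill_entry_addr(bad_va):
    """Address of the PTE pair for the faulting virtual address."""
    vpn2 = bad_va >> 13   # even/odd page pair of the faulting VA
    return PTE_BASE + vpn2 * PTE_PAIR_SIZE

print(hex(refill_entry_addr(0x00402000)))
```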

References

  1. https://tupelo-shen.github.io/2020/06/17/MIPS架构深入理解5-内存管理/
  2. MIPS32地址映射和TLB - 者旨於陽 - 博客园