This is the third section of lab code notes for my self-study of MIT6.S081 operating system course: Page Tables. The approximate duration of this lab: 19 hours.

Course address: pdos.csail.mit.edu/6.S081/2020…
Lab address: pdos.csail.mit.edu/6.S081/2020…
My code address: github.com/Miigon/my-x…
Commits: github.com/Miigon/my-x…

The code comments in this article were added while writing the blog, and the code in the original repository may be missing comments or not identical.

Lab 3: Page tables

In this lab you will explore page tables and modify them to simplify the functions that copy data from user space to kernel space.


Print a page table (easy)

Define a function called vmprint(). It should take a pagetable_t argument, and print that pagetable in the format described below. Insert if(p->pid==1) vmprint(p->pagetable) in exec.c just before the return argc, to print the first process’s page table. You receive full credit for this assignment if you pass the pte printout test of make grade.

Add a kernel function that prints a given page table in the format below; it will also be useful for debugging in the next two exercises:

page table 0x0000000087f6e000
..0: pte 0x0000000021fda801 pa 0x0000000087f6a000
.. ..0: pte 0x0000000021fda401 pa 0x0000000087f69000
.. .. ..0: pte 0x0000000021fdac1f pa 0x0000000087f6b000
.. .. ..1: pte 0x0000000021fda00f pa 0x0000000087f68000
.. .. ..2: pte 0x0000000021fd9c1f pa 0x0000000087f67000
..255: pte 0x0000000021fdb401 pa 0x0000000087f6d000
.. ..511: pte 0x0000000021fdb001 pa 0x0000000087f6c000
.. .. ..510: pte 0x0000000021fdd807 pa 0x0000000087f76000
.. .. ..511: pte 0x0000000020001c0b pa 0x0000000080007000

A RISC-V Sv39 virtual address is translated through a three-level page table: a 9-bit top-level index locates a second-level page table, a 9-bit second-level index locates a third-level page table, and a 9-bit third-level index locates the physical memory page; the lowest 12 bits are the offset within the page (i.e., 4096 bytes per page). For details, see Figure 3.2 in the xv6 book.

vmprint() needs to mimic this hardware page-table walk in software, traversing all three levels of the page table and printing each valid entry in the required format.

// kernel/defs.h
int             copyout(pagetable_t, uint64, char *, uint64);
int             copyin(pagetable_t, char *, uint64, uint64);
int             copyinstr(pagetable_t, char *, uint64, uint64);
int             vmprint(pagetable_t pagetable); // add function declaration

xv6 already has a function that recursively frees a page table, freewalk(). Copy that function and turn the freeing code into printing:

// kernel/vm.c
int pgtblprint(pagetable_t pagetable, int depth) {
  // there are 2^9 = 512 PTEs in a page table.
  for(int i = 0; i < 512; i++){
    pte_t pte = pagetable[i];
    if(pte & PTE_V) { // if the page-table entry is valid
      // print the entry with per-level indentation
      printf("..");
      for(int j = 0; j < depth; j++) {
        printf(" ..");
      }
      printf("%d: pte %p pa %p\n", i, pte, PTE2PA(pte));

      // if the node is not a leaf, print its children recursively.
      if((pte & (PTE_R|PTE_W|PTE_X)) == 0) {
        // this PTE points to a lower-level page table.
        uint64 child = PTE2PA(pte);
        pgtblprint((pagetable_t)child, depth + 1);
      }
    }
  }
  return 0;
}

int vmprint(pagetable_t pagetable) {
  printf("page table %p\n", pagetable);
  return pgtblprint(pagetable, 0);
}
// kernel/exec.c

int
exec(char *path, char **argv)
{
  // ...

  vmprint(p->pagetable); // print the page table before exec returns
  return argc; // this ends up in a0, the first argument to main(argc, argv)

 bad:
  if(pagetable)
    proc_freepagetable(pagetable, sz);
  if(ip){
    iunlockput(ip);
    end_op();
  }
  return -1;
}

grade:

$ ./grade-lab-pgtbl pte printout
make: 'kernel/kernel' is up to date.
== Test pte printout == pte printout: OK (1.6s)

A kernel page table per process (hard)

Your first job is to modify the kernel so that every process uses its own copy of the kernel page table when executing in the kernel. Modify struct proc to maintain a kernel page table for each process, and modify the scheduler to switch kernel page tables when switching processes. For this step, each per-process kernel page table should be identical to the existing global kernel page table. You pass this part of the lab if usertests runs correctly.

xv6 was originally designed so that user processes use their own user page tables in user mode, but once they enter kernel mode (for example, through a system call) they switch to the kernel page table (by writing the satp register; see trampoline.S). This kernel page table is a single global one, which means all processes in kernel mode share the same kernel page table:

// vm.c
pagetable_t kernel_pagetable; // Global variable, shared kernel page table

The goal of the Lab is to make each process have its own independent kernel page table after entering the kernel state, so as to prepare for the third experiment.

Create process kernel page table and kernel stack

Start by adding a kernelpgtbl field to the process's proc struct to hold the process's own kernel page table.

// kernel/proc.h
// Per-process state
struct proc {
  struct spinlock lock;

  // p->lock must be held when using these:
  enum procstate state;        // Process state
  struct proc *parent;         // Parent process
  void *chan;                  // If non-zero, sleeping on chan
  int killed;                  // If non-zero, have been killed
  int xstate;                  // Exit status to be returned to parent's wait
  int pid;                     // Process ID

  // these are private to the process, so p->lock need not be held.
  uint64 kstack;               // Virtual address of kernel stack
  uint64 sz;                   // Size of process memory (bytes)
  pagetable_t pagetable;       // User page table
  struct trapframe *trapframe; // data page for trampoline.S
  struct context context;      // swtch() here to run process
  struct file *ofile[NOFILE];  // Open files
  struct inode *cwd;           // Current directory
  char name[16];               // Process name (debugging)
  pagetable_t kernelpgtbl;     // Kernel page table (add field in proc)
};

Next up is kvminit. The kernel relies on fixed mappings in the kernel page table to function properly, such as the UART control registers, the disk interface, and the interrupt controllers. kvminit originally added these mappings only to the global kernel page table kernel_pagetable. We factor out a function kvm_map_pagetable() that can add these mappings to any kernel page table we create ourselves.

void kvm_map_pagetable(pagetable_t pgtbl) {
  // Add direct mapping required by various kernels to the page table PGTBL.
  
  // uart registers
  kvmmap(pgtbl, UART0, UART0, PGSIZE, PTE_R | PTE_W);

  // virtio mmio disk interface
  kvmmap(pgtbl, VIRTIO0, VIRTIO0, PGSIZE, PTE_R | PTE_W);

  // CLINT
  kvmmap(pgtbl, CLINT, CLINT, 0x10000, PTE_R | PTE_W);

  // PLIC
  kvmmap(pgtbl, PLIC, PLIC, 0x400000, PTE_R | PTE_W);

  // map kernel text executable and read-only.
  kvmmap(pgtbl, KERNBASE, KERNBASE, (uint64)etext-KERNBASE, PTE_R | PTE_X);

  // map kernel data and the physical RAM we'll make use of.
  kvmmap(pgtbl, (uint64)etext, (uint64)etext, PHYSTOP-(uint64)etext, PTE_R | PTE_W);

  // map the trampoline for trap entry/exit to
  // the highest virtual address in the kernel.
  kvmmap(pgtbl, TRAMPOLINE, (uint64)trampoline, PGSIZE, PTE_R | PTE_X);
}

pagetable_t
kvminit_newpgtbl()
{
  pagetable_t pgtbl = (pagetable_t) kalloc();
  memset(pgtbl, 0, PGSIZE);

  kvm_map_pagetable(pgtbl);

  return pgtbl;
}

// create a direct-map page table for the kernel.
void
kvminit()
{
  kernel_pagetable = kvminit_newpgtbl(); // the global kernel page table is still needed for the boot process and for when no process is running
}

// ...

// map a logical address to a physical address (adds pgtbl as the first parameter)
void
kvmmap(pagetable_t pgtbl, uint64 va, uint64 pa, uint64 sz, int perm)
{
  if(mappages(pgtbl, va, sz, pa, perm) != 0)
    panic("kvmmap");
}

// kvmpa translates a kernel logical address to a physical address (adds pgtbl as the first parameter)
uint64
kvmpa(pagetable_t pgtbl, uint64 va)
{
  uint64 off = va % PGSIZE;
  pte_t *pte;
  uint64 pa;

  pte = walk(pgtbl, va, 0);
  if(pte == 0)
    panic("kvmpa");
  if((*pte & PTE_V) == 0)
    panic("kvmpa");
  pa = PTE2PA(*pte);
  return pa+off;
}

You can now create mutually independent kernel page tables, but there is one more thing to handle: the kernel stack. In the original xv6 design, all processes share one kernel page table in kernel mode, meaning they share one kernel address space. Since xv6 supports multi-core/multi-process scheduling, several processes can be in kernel mode at the same time, so every process needs its own independent kernel stack for its kernel-mode code to run on.

During startup, procinit() preallocates a kernel stack (kstack) for each of the 64 possible process slots, placing them high in the address space, one page per process, with an unmapped guard page between adjacent kstacks to detect stack overflows. See Figure 3.3 in the xv6 book for details.

In xv6's original design there is only one kernel page table shared by all processes, so the kernel stacks of different processes had to be mapped at different locations (see procinit() and the KSTACK macro). In our new design, each process has its own kernel page table and only ever needs to access its own kernel stack, not all 64 of them. So we can map every process's kernel stack at the same fixed location in its own kernel page table (the same logical address in different page tables, pointing to different physical memory).

// initialize the proc table at boot time.
void
procinit(void)
{
  struct proc *p;
  
  initlock(&pid_lock, "nextpid");
  for(p = proc; p < &proc[NPROC]; p++) {
      initlock(&p->lock, "proc");

      // This removes the code that preallocates the kernel stack for all processes, instead creating the kernel stack at process creation time, see allocproc().
  }

  kvminithart();
}


Then, when a process is created, allocate it its own kernel page table and kernel stack:

// kernel/proc.c

static struct proc*
allocproc(void)
{
  struct proc *p;

  for(p = proc; p < &proc[NPROC]; p++) {
    acquire(&p->lock);
    if(p->state == UNUSED) {
      goto found;
    } else {
      release(&p->lock);
    }
  }
  return 0;

found:
  p->pid = allocpid();

  // Allocate a trapframe page.
  if((p->trapframe = (struct trapframe *)kalloc()) == 0){
    release(&p->lock);
    return 0;
  }

  // An empty user page table.
  p->pagetable = proc_pagetable(p);
  if(p->pagetable == 0){
    freeproc(p);
    release(&p->lock);
    return 0;
  }

////// New part start //////

  // Create a separate kernel page table for the new process and add the various mappings required by the kernel to the new page table
  p->kernelpgtbl = kvminit_newpgtbl();
  // printf("kernel_pagetable: %p\n", p->kernelpgtbl);

  // Assign a physical page to be used as the kernel stack for the new process
  char *pa = kalloc();
  if(pa == 0)
    panic("kalloc");
  uint64 va = KSTACK((int)0); // Map the kernel stack to a fixed logical address
  // printf("map krnlstack va: %p to pa: %p\n", va, pa);
  kvmmap(p->kernelpgtbl, va, (uint64)pa, PGSIZE, PTE_R | PTE_W);
  p->kstack = va; // Record the logical address of the kernel stack, which is already fixed, to avoid the need to modify other parts of the Xv6 code

////// add new section end //////

  // Set up new context to start executing at forkret,
  // which returns to user space.
  memset(&p->context, 0, sizeof(p->context));
  p->context.ra = (uint64)forkret;
  p->context.sp = p->kstack + PGSIZE;

  return p;
}

At this point the per-process kernel page table is created, but it is not used yet: user processes entering kernel mode would still run on the globally shared kernel page table, so scheduler() needs to be modified.

Switch to the process kernel page table

Before the scheduler hands off the CPU to the process, switch to the kernel page table for that process:

// kernel/proc.c
void
scheduler(void)
{
  struct proc *p;
  struct cpu *c = mycpu();
  
  c->proc = 0;
  for(;;){
    // Avoid deadlock by ensuring that devices can interrupt.
    intr_on();
    
    int found = 0;
    for(p = proc; p < &proc[NPROC]; p++) {
      acquire(&p->lock);
      if(p->state == RUNNABLE) {
        // Switch to chosen process. It is the process's job
        // to release its lock and then reacquire it
        // before jumping back to us.
        p->state = RUNNING;
        c->proc = p;

        // switch to the process's own kernel page table
        w_satp(MAKE_SATP(p->kernelpgtbl));
        sfence_vma(); // flush stale TLB entries
        
        // Schedule the execution process
        swtch(&c->context, &p->context);

        // Switch back to the global kernel page table
        kvminithart();

        // Process is done running for now.
        // It should have changed its p->state before coming back.
        c->proc = 0;

        found = 1;
      }
      release(&p->lock);
    }
#if !defined (LAB_FS)
    if(found == 0) {
      intr_on();
      asm volatile("wfi");
    }
#else
    ;
#endif
  }
}

At this point, each process executes with its own separate kernel page table in kernel mode.

Release the process kernel page table

The last thing needed is to free the process's own kernel page table and kernel stack when the process exits and reclaim the resources; otherwise memory is leaked.

(If usertests panics with `panic: kvmmap` around the reparent2 test, it is very likely that leaked memory has exhausted free memory, so kvmmap can no longer allocate pages for page-table entries. Check that all allocated memory is released correctly, in particular that every page-table page is freed.)

// kernel/proc.c
static void
freeproc(struct proc *p)
{
  if(p->trapframe)
    kfree((void*)p->trapframe);
  p->trapframe = 0;
  if(p->pagetable)
    proc_freepagetable(p->pagetable, p->sz);
  p->pagetable = 0;
  p->sz = 0;
  p->pid = 0;
  p->parent = 0;
  p->name[0] = 0;
  p->chan = 0;
  p->killed = 0;
  p->xstate = 0;
  
  // Release the kernel stack of the process
  void *kstack_pa = (void *)kvmpa(p->kernelpgtbl, p->kstack);
  // printf("trace: free kstack %p\n", kstack_pa);
  kfree(kstack_pa);
  p->kstack = 0;
  
  // Note: proc_freepagetable cannot be used here because it frees not only the pagetable itself, but also all the physical pages corresponding to the leaf nodes in the pagetable.
  // This causes critical physical pages that the kernel needs to run to be released, causing the kernel to crash.
  // Using kfree(p->kernelpgtbl) here is also not sufficient, because this frees only the ** level 1 page table itself **, not the space occupied by the level 2 and 3 page tables.
  
  // Recursively release the page table that is exclusive to the process, freeing the space occupied by the page table itself, but does not free the physical page that the page table points to
  kvm_free_kernelpgtbl(p->kernelpgtbl);
  p->kernelpgtbl = 0;
  p->state = UNUSED;
}

kvm_free_kernelpgtbl() recursively frees an entire multi-level page-table tree; it is also adapted from freewalk().

// kernel/vm.c

// Recursively releases all mapping in a kernel page table, but not the physical page to which it points
void
kvm_free_kernelpgtbl(pagetable_t pagetable)
{
  // there are 2^9 = 512 PTEs in a page table.
  // there are 2^9 = 512 PTEs in a page table.
  for(int i = 0; i < 512; i++){
    pte_t pte = pagetable[i];
    uint64 child = PTE2PA(pte);
    if((pte & PTE_V) && (pte & (PTE_R|PTE_W|PTE_X)) == 0){
      // this PTE points to a lower-level page table:
      // recursively free it, then clear the entry
      kvm_free_kernelpgtbl((pagetable_t)child);
      pagetable[i] = 0;
    }
  }
  kfree((void*)pagetable); // free the page-table page at this level


Here the release part of the implementation is complete.

Note that our changes affect other code: the virtio disk drive virtio_disk.c calls kvmpa() to convert virtual addresses to physical addresses, which in our modified version requires passing in the kernel page table of the process. You can modify it accordingly.

// kernel/virtio_disk.c
#include "proc.h" // add header include

// ...

void
virtio_disk_rw(struct buf *b, int write)
{
  // ...
  disk.desc[idx[0]].addr = (uint64) kvmpa(myproc()->kernelpgtbl, (uint64) &buf0); // use the current process's kernel page table
  // ...
}

Simplify copyin/copyinstr (hard)

Replace the body of copyin in kernel/vm.c with a call to copyin_new (defined in kernel/vmcopyin.c); do the same for copyinstr and copyinstr_new. Add mappings for user addresses to each process’s kernel page table so that copyin_new and copyinstr_new work. You pass this assignment if usertests runs correctly and all the make grade tests pass.

In the previous exercise, each process got its own kernel page table. The goal of this exercise is to maintain a copy of the user page table's mappings inside each process's kernel page table, so that kernel code can directly dereference pointers (logical addresses) passed in from user mode. The advantage over the original copyin implementation is that the original copyin walks the page table in software to find the physical address, while keeping a copy of the mappings in the kernel page table lets the CPU's hardware translation do the work, which is faster and benefits from the TLB.

To achieve this, every time the kernel changes a process's user page table, the same change must be applied to the process's kernel page table, keeping the mappings of the program's address range (0 up to PLIC) identical in the two page tables.

Preparation

First, implement some helper functions, most of them adapted from existing ones:

// kernel/vm.c

// Note: we need to add the corresponding function declaration in defs.h, which is omitted here.

// copy part of src's page mappings into the dst page table.
// only page-table entries are copied, not the physical memory itself.
// returns 0 on success, -1 on failure.
int
kvmcopymappings(pagetable_t src, pagetable_t dst, uint64 start, uint64 sz)
{
  pte_t *pte;
  uint64 pa, i;
  uint flags;

  // PGROUNDUP: align to a page boundary to prevent remapping pages
  for(i = PGROUNDUP(start); i < start + sz; i += PGSIZE){
    if((pte = walk(src, i, 0)) == 0)
      panic("kvmcopymappings: pte should exist");
    if((*pte & PTE_V) == 0)
      panic("kvmcopymappings: page not present");
    pa = PTE2PA(*pte);
    // `& ~PTE_U` clears the user bit so the page counts as a kernel page.
    // This is required: in RISC-V, the kernel cannot directly access
    // pages that have PTE_U set.
    flags = PTE_FLAGS(*pte) & ~PTE_U;
    if(mappages(dst, i, PGSIZE, pa, flags) != 0){
      goto err;
    }
  }
  return 0;

 err:
  uvmunmap(dst, 0, i / PGSIZE, 0);
  return -1;
}

// Similar to uvmdealloc: shrinks program memory from oldsz to newsz, but unlike
// uvmdealloc it does not free the physical memory; it is used to keep the kernel
// page table's copy of the program memory mappings in sync with the user page table.
uint64
kvmdealloc(pagetable_t pagetable, uint64 oldsz, uint64 newsz)
{
  if(newsz >= oldsz)
    return oldsz;

  if(PGROUNDUP(newsz) < PGROUNDUP(oldsz)){
    int npages = (PGROUNDUP(oldsz) - PGROUNDUP(newsz)) / PGSIZE;
    uvmunmap(pagetable, PGROUNDUP(newsz), npages, 0);
  }

  return newsz;
}


Next, decide where in the kernel page table the process memory will be mapped, and make sure this range does not conflict with other mappings.

Looking at the memory layout in the xv6 book, there is a CLINT (core-local interruptor) mapping below PLIC that conflicts with the program memory we want to map. From chapter 5 of the xv6 book and start.c, CLINT is only needed during kernel boot; processes running in kernel mode never use it.

So modify kvm_map_pagetable() to drop the CLINT mapping, so that a process's kernel page table has no CLINT mapping conflicting with program memory. Since the global kernel page table is also initialized via kvm_map_pagetable(), and the CLINT mapping is required during kernel boot, map CLINT separately for the global kernel page table in kvminit().

// kernel/vm.c


void kvm_map_pagetable(pagetable_t pgtbl) {
  
  // uart registers
  kvmmap(pgtbl, UART0, UART0, PGSIZE, PTE_R | PTE_W);

  // virtio mmio disk interface
  kvmmap(pgtbl, VIRTIO0, VIRTIO0, PGSIZE, PTE_R | PTE_W);

  // CLINT
  // kvmmap(pgtbl, CLINT, CLINT, 0x10000, PTE_R | PTE_W);

  // PLIC
  kvmmap(pgtbl, PLIC, PLIC, 0x400000, PTE_R | PTE_W);

  // ...
}

// ...

void
kvminit()
{
  kernel_pagetable = kvminit_newpgtbl();
  // CLINT *is* however required during kernel boot up and
  // we should map it for the global kernel pagetable
  kvmmap(kernel_pagetable, CLINT, CLINT, 0x10000, PTE_R | PTE_W);
}


Add a check to exec to prevent program memory from exceeding PLIC:

int
exec(char *path, char **argv)
{
  // ...

  // Load program into memory.
  for(i=0, off=elf.phoff; i<elf.phnum; i++, off+=sizeof(ph)){
    if(readi(ip, 0, (uint64)&ph, off, sizeof(ph)) != sizeof(ph))
      goto bad;
    if(ph.type != ELF_PROG_LOAD)
      continue;
    if(ph.memsz < ph.filesz)
      goto bad;
    if(ph.vaddr + ph.memsz < ph.vaddr)
      goto bad;
    uint64 sz1;
    if((sz1 = uvmalloc(pagetable, sz, ph.vaddr + ph.memsz)) == 0)
      goto bad;
    if(sz1 >= PLIC) { // add a check so program memory cannot grow past PLIC
      goto bad;
    }
    sz = sz1;
    if(ph.vaddr % PGSIZE != 0)
      goto bad;
    if(loadseg(pagetable, ph.vaddr, ip, ph.off, ph.filesz) < 0)
      goto bad;
  }
  iunlockput(ip);
  end_op();
  ip = 0;
  // ...

Synchronous mapping

The next step is to synchronize every change the kernel makes to the process's user page table into the process's kernel page table. Four places change the user page table: fork(), exec(), growproc(), and userinit().

fork()

// kernel/proc.c
int
fork(void)
{
  // ...

  // Copy user memory from parent to child, then copy the new
  // mappings into the child's kernel page table as well.
  if(uvmcopy(p->pagetable, np->pagetable, p->sz) < 0 ||
     kvmcopymappings(np->pagetable, np->kernelpgtbl, 0, p->sz) < 0){
    freeproc(np);
    release(&np->lock);
    return -1;
  }
  np->sz = p->sz;

  // ...
}

exec()

// kernel/exec.c
int
exec(char *path, char **argv)
{
  // ...

  // Save program name for debugging.
  for(last=s=path; *s; s++)
    if(*s == '/')
      last = s+1;
  safestrcpy(p->name, last, sizeof(p->name));

  // Unmap the old program memory from the kernel page table, then rebuild the mapping.
  uvmunmap(p->kernelpgtbl, 0, PGROUNDUP(oldsz)/PGSIZE, 0);
  kvmcopymappings(pagetable, p->kernelpgtbl, 0, sz);

  // Commit to the user image.
  oldpagetable = p->pagetable;
  p->pagetable = pagetable;
  p->sz = sz;
  p->trapframe->epc = elf.entry;  // initial program counter = main
  p->trapframe->sp = sp; // initial stack pointer
  proc_freepagetable(oldpagetable, oldsz);
  // ...
}

growproc()

// kernel/proc.c
int
growproc(int n)
{
  uint sz;
  struct proc *p = myproc();

  sz = p->sz;
  if(n > 0){
    uint64 newsz;
    if((newsz = uvmalloc(p->pagetable, sz, sz + n)) == 0) {
      return -1;
    }
    // expand the mapping in the kernel page table in sync
    if(kvmcopymappings(p->pagetable, p->kernelpgtbl, sz, n) != 0) {
      uvmdealloc(p->pagetable, newsz, sz);
      return -1;
    }
    sz = newsz;
  } else if(n < 0){
    uvmdealloc(p->pagetable, sz, sz + n);
    // The mapping in the kernel page table shrinks synchronously
    sz = kvmdealloc(p->kernelpgtbl, sz, sz + n);
  }
  p->sz = sz;
  return 0;
}

userinit()

For the init process: since init is not created via fork() like other processes, the same synchronization code must be added to userinit() as well.

// kernel/proc.c
void
userinit(void)
{
  // ...

  // allocate one user page and copy init's instructions
  // and data into it.
  uvminit(p->pagetable, initcode, sizeof(initcode));
  p->sz = PGSIZE;
  kvmcopymappings(p->pagetable, p->kernelpgtbl, 0, p->sz); // copy the program memory mapping into the process kernel page table

  // ...
}

At this point, the synchronization of both page tables is complete.

Replace copyin, copyinstr implementation

// kernel/vm.c

// Declare a new function prototype
int copyin_new(pagetable_t pagetable, char *dst, uint64 srcva, uint64 len);
int copyinstr_new(pagetable_t pagetable, char *dst, uint64 srcva, uint64 max);

// Forward copyin, copyinstr to the new function instead
int
copyin(pagetable_t pagetable, char *dst, uint64 srcva, uint64 len)
{
  return copyin_new(pagetable, dst, srcva, len);
}

int
copyinstr(pagetable_t pagetable, char *dst, uint64 srcva, uint64 max)
{
  return copyinstr_new(pagetable, dst, srcva, max);
}

Run the grade script:

== Test pte printout == pte printout: OK (4.8s)
== Test answers-pgtbl.txt == answers-pgtbl.txt: OK
== Test count copyin ==
$ make qemu-gdb
count copyin: OK (1.1s)
== Test usertests ==
$ make qemu-gdb
== Test   usertests: copyin ==
  usertests: copyin: OK
== Test   usertests: copyinstr1 ==
  usertests: copyinstr1: OK
== Test   usertests: copyinstr2 ==
  usertests: copyinstr2: OK
== Test   usertests: copyinstr3 ==
  usertests: copyinstr3: OK
== Test   usertests: sbrkmuch ==
  usertests: sbrkmuch: OK
== Test   usertests: all tests ==
  usertests: all tests: OK
== Test time ==
time: OK
Score: 66/66

Optional challenges

  • Use super-pages to reduce the number of PTEs in page tables. (skip)
  • Extend your solution to support user programs that are as large as possible; that is, eliminate the restriction that user programs be smaller than PLIC. (skip)
  • Unmap the first page of a user process so that dereferencing a null pointer will result in a fault. You will have to start the user text segment at, for example, 4096, instead of 0.