Lecture 19 - Virtual Memory
Assignment 4
Just take a copy of the hello world device driver, replace the "Hello World" strings with "Secret", and then modify the few functions that are just boilerplate. This really is just Minix Scavenger Hunt III.
Virtual Memory
The idea is to pretend that we have unlimited memory: as much memory as we so desire. Real memory then acts as a cache for the parts we're actually using.
How does it work? Programs exist in virtual memory.
Virtual Memory | Real Memory |
---|---|
Divided into pages | Divided into frames |
Resident pages are held in frames | |
Non-resident pages are stored on a backing store (usually some drive) | |

Translation (virtual to physical) is done by the memory management unit (MMU) using a page table.
How do we do this? Using the figure above:
- A virtual address is split into a page number and an offset within that page; translating the page number to a frame number and appending the offset gives the physical address.
- We use a present bit to record whether the page is resident in real memory.
  - If it's present, translate to the physical address.
  - Else, we get a `PAGE FAULT` and have to fetch the page from the disk (ughhhh!)
- We can also add permission bits to keep track of:
  - Read/Write/Execute (`rwx`)
  - Reference recency (`R`)
  - Whether the page has been modified (`M`)
  - Whether or not it's cacheable (`C`)
We've now traded really expensive RAM for cheap disk (essentially giving us infinite memory). However, we may get internal fragmentation with larger page sizes. Further, each process has its own page table, so each process gets its own address space.
Costs
Say we are on a 32-bit machine. Then our virtual address space is 2^32 bytes = 4 GiB; with (say) 4 KiB pages, that's 2^20 page-table entries per process. Where do we put this table?
- Put it in memory? Well, even if we had a huge and fast memory bus, it's still a waste to move the whole table through the MMU between context switches!
- Some valid options:
- Limit the virtual memory size. But we wanted to assume that we had infinite memory! :(
- Page the page tables: instead of having one level of page table, you have multiple levels, so only the parts actually in use need to be resident.
The idea here is that we have a small, fully associative cache in the MMU with recent translations. This is known as a TLB, or translation lookaside buffer. It holds these translations for quick access:
- on a hit, you have the translation immediately (wicked fast)
- on a miss, you go get it from the page table
- Handling the miss in hardware requires the HW to both do the fetch itself and know where the page table is.
- Doing it in software instead (in the OS) saves hardware cost, but can't match the speeds HW can give.
Odds are though, that you'll get it from the TLB, and life will be good.
An Example
Say we are on a 64-bit system. So our virtual address space is 2^64 bytes, and a single-level page table is clearly out of the question.
For next time, we'll still be talking about where to put the page tables (in memory, the TLB, the CPU, ...?).