Lecture 22 - Paging Policies
We need to know about:
- When to page?
- Who gets what?
What do we want to avoid? Thrashing: the system gets overwhelmed by paging, with pages being evicted and then immediately requested again.
When should we do it?
- You can do demand paging: fetching a page when it's referenced.
    - Pros: Simple, no waste.
    - Cons: It's demanding! Imagine getting a PAGE_FAULT every single memory reference! This means rough starts, and no opportunity to share costs.
- Pre-paging: Try to load everything at once (at the beginning).
    - Pros: It fixes the many PAGE_FAULTs we get from the previous approach. You can now share costs.
    - Cons: You will quickly run out of memory (you do this for every program, so a lot of the loaded pages are never actually used). And more importantly, how could you know what to load?!?!? You would need to know the addresses at runtime, which may be impossible for dynamic programs.
- Swapping: If paging pressure (from pre-paging) is too much, have a process sit out.
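The trade-off above can be made concrete with a toy sketch (not from the lecture): a demand pager fetches a page only when it faults, so a tight loop over more pages than you have frames faults on every reference. The FIFO replacement policy here is an illustrative assumption.

```python
from collections import deque

def demand_paging_faults(references, frames):
    """Count faults for a demand pager with `frames` slots and FIFO eviction."""
    resident = deque()              # pages currently in memory, oldest first
    faults = 0
    for page in references:
        if page not in resident:
            faults += 1             # page fault: fetch on demand
            if len(resident) == frames:
                resident.popleft()  # evict the oldest resident page
            resident.append(page)
    return faults

# Looping over 3 pages with only 2 frames thrashes: every reference faults.
print(demand_paging_faults([0, 1, 2, 0, 1, 2], frames=2))  # → 6
```

Note how this also demonstrates thrashing: the working set is just one page too big for the frame budget, yet the hit rate drops to zero.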
How big should our pages be? If we're on a 64-bit machine we have an enormous address space to divide up:
- if your page size is too big, then only a few processes are happy about it (you waste space to internal fragmentation)
- if your page size is too small, the time spent on page management is too large, and you lose some of the spatial-locality benefits
The choice is to pick something compatible with your backing store (your storage device). Common sizes are 4K or even 64K (especially with larger memory sizes now).
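A back-of-the-envelope sketch of the trade-off (the 4 GiB address-space figure is illustrative, not from the lecture): bigger pages waste more space per process to internal fragmentation, while smaller pages mean far more pages to manage.

```python
def avg_internal_fragmentation(page_size):
    # On average, half of a process's last page is unused.
    return page_size // 2

def num_pages(address_space, page_size):
    # How many pages must the OS track to cover this address space?
    return address_space // page_size

GiB = 1 << 30
for size in (4 * 1024, 64 * 1024):
    print(f"{size // 1024}K pages: "
          f"~{avg_internal_fragmentation(size)} bytes wasted per process, "
          f"{num_pages(4 * GiB, size)} pages to manage in 4 GiB")
```

With 4K pages a 4 GiB space needs over a million page entries; with 64K pages that drops 16x, at the cost of 16x more fragmentation per process.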
For allocating frames, one option is local allocation: give each process its own set of frames & replace only within that set. But a rigid split wastes frames on processes that don't need them:
Instead you want to use global allocation: every process competes for the same pool of frames with equal rights, and a process that takes more page faults gets more frames:
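The idea above can be sketched as a simple rebalancing step (an assumed proportional policy for illustration, not any specific OS's algorithm): frames flow toward processes with higher recent fault counts.

```python
def rebalance_frames(total_frames, fault_counts):
    """Split frames among processes in proportion to their recent fault counts."""
    total_faults = sum(fault_counts.values())
    if total_faults == 0:
        share = total_frames // len(fault_counts)  # nobody faulting: split evenly
        return {pid: share for pid in fault_counts}
    return {pid: max(1, total_frames * faults // total_faults)
            for pid, faults in fault_counts.items()}

# Process A faults 3x as often as B, so it gets 3x the frames.
print(rebalance_frames(100, {"A": 30, "B": 10}))  # → {'A': 75, 'B': 25}
```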
More on Filesystems
How do we allocate blocks to files (see Lecture 21 - Minix Filesystem Structure, Finishing Virtual Memory)? How do we build a file?
How do we write books? We write them contiguously! Let's look at contiguous allocation.
This is great for sequential (one block at a time) & random access! However, expansion is really expensive (you'd have to move the whole file to a larger free region).
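Why random access is cheap here, as a sketch (block size is an illustrative assumption): with contiguous allocation, finding the block for any byte offset is one arithmetic step.

```python
BLOCK_SIZE = 512  # illustrative block size in bytes

def block_of(start_block, byte_offset):
    """Random access under contiguous allocation: pure arithmetic, no chain to walk."""
    return start_block + byte_offset // BLOCK_SIZE

# A file starting at block 40: byte 1500 lives 2 blocks in.
print(block_of(start_block=40, byte_offset=1500))  # → 42
```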
Think of a more dynamic data structure! A linked list perhaps? Break your files into blocks, and each block should know where the next one is:
But just like with linked lists, this is okay for sequential reading, and horrible for random access! It's also not at all robust: if one next-pointer gets a bit flipped, the whole remaining chain of blocks is gone!
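A sketch of the random-access cost (the block numbers are made up for illustration): reaching the n-th block means walking the chain from the head, one disk read per hop.

```python
def nth_block(head, next_ptr, block_index):
    """Linked allocation: O(n) hops from the head to reach block_index."""
    block = head
    for _ in range(block_index):
        block = next_ptr[block]  # in a real FS, each hop is a disk read
    return block

# A file occupying blocks 7 -> 3 -> 9 -> 12 on disk:
next_ptr = {7: 3, 3: 9, 9: 12}
print(nth_block(7, next_ptr, 3))  # → 12, after 3 hops
```

Note the fragility too: corrupt `next_ptr[3]` and blocks 9 and 12 are unreachable.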
What if instead we build a file allocation table and index them? Pull the links into a file allocation table:
For random access it's not bad, and sequential reads work alright. However, as your drive gets bigger, the table gets bigger. Further, if you cache the table in memory, modify it there, and the system powers off, the on-disk and in-memory copies no longer match.
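A sketch of the table idea (EOF is marked here with -1 as an assumption; real FAT variants use special sentinel values): the per-block "next" pointers move into one array indexed by block number, so following the chain needs no extra disk reads once the table is cached.

```python
EOF = -1
fat = [0] * 16  # one entry per disk block
fat[7], fat[3], fat[9], fat[12] = 3, 9, 12, EOF  # chain: 7 -> 3 -> 9 -> 12

def file_blocks(start):
    """Follow a file's chain entirely within the in-memory table."""
    blocks, b = [], start
    while b != EOF:
        blocks.append(b)
        b = fat[b]
    return blocks

print(file_blocks(7))  # → [7, 3, 9, 12]
```

This is exactly the structure whose in-memory copy can drift from disk on a power failure.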
An alternative is to build a system that indexes the files, not the disk: in MINIX you use inodes to index the files. The index itself knows where all the blocks are (1st part, 2nd part, 3rd part, ...).
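A simplified sketch of the per-file index (direct block pointers only; real MINIX inodes also have indirect blocks, and the block numbers here are made up): random access becomes one lookup in the file's own table.

```python
class Inode:
    """Toy inode: the file's index of where each of its parts lives on disk."""
    def __init__(self, block_list, block_size=1024):
        self.blocks = block_list      # blocks[i] = disk block holding part i
        self.block_size = block_size

    def block_of(self, byte_offset):
        # One table lookup: no chain to walk, no disk-wide table to consult.
        return self.blocks[byte_offset // self.block_size]

f = Inode([40, 7, 93])   # 1st, 2nd, 3rd parts of the file
print(f.block_of(2500))  # → 93 (byte 2500 falls in the 3rd part)
```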