Lecture 18 - Memory Management Strategies
Memory is where all of a program's code and data live. We really need it, and we need to use it efficiently. Let's look at some memory management strategies.
Memory Management Strategies
- Monoprogramming: only allow one program in memory. It's simple, and honestly a lot of cheap options would use this.
- Multi-programming/Fixed Partitions: we take our machine, divide it into chunks (of varying size) and load a program into each chunk.
Let's say we did this and had some programs using this memory:
What happens if a new program wants memory but every partition is full? What happens if a program touches memory outside of its partition? What if a program wants more memory? Fixed partitions don't really address any of these questions.
Each assembly/C program compiles down to instructions that reference memory addresses. But how can each program have addresses that work for itself without breaking the others?
We could:
- Link on loading (i.e.: memory addresses are chosen before running).
  - OSes like MINIX usually do some linking like this.
- We'd have to relink it every time.
- Further, what if we want to move it in memory?
- Relative Addressing: all of a program's addresses are relative to the base address given to the program.
  - But we'd have to keep a list of every address in the program that needs patching, and on every run recompute the relative addresses for all of them.
- Base Register: for every memory reference, the hardware adds a base register holding that program's base offset.
- This works.
  - But this doesn't offer protection against malicious actors (what happens if we add a huge offset and reach past our block, or reference below the base address?).
  - To fix this, add a limit register marking the end of our memory block, and check every reference against it.
But we also have the different text and data segments (and others we would want to tag separately). Further, what if we need more space? Memory is usually requested dynamically, so we want to address this.
Dynamic Partition
Here we want to carve out space per process.
The problem, similar to malloc, is fragmentation. What can we do about it?
- compaction: smush all the above programs down to the bottom.
  - Problem: it is way too expensive to copy, say, an entire process all the way down in memory.
- growth: say some process needs more space now. The issue is that the stack of one process will eventually meet the heap of the process above it.
We can, similar to malloc, try to keep track of allocated memory via:
- bitmap: one bit per allocation unit of memory, set if that unit is allocated.
- linked list: keep a list of which regions are allocated and which are free.
The difference is that, unlike malloc, the memory manager can't assume someone else is providing memory for its bookkeeping: it manages all of memory, so its data has to live inside the memory it manages! We need to be smart about this. We can keep the bitmap and linked-list data in the headers of our regions, as we did before.
Allocation Strategies
We could do:
- first fit: take the first open region big enough, scanning from the start of the list.
- next fit: start looking from the last freed region.
- This spreads things around (good) but increases fragmentation
- best fit: look for the smallest one that we can fit into.
  - This works poorly: it leaves behind fragments so small they can never be used, and they keep accumulating until too much memory is wasted.
- worst fit: Carve your allocation unit off the biggest chunk available.
  - Unsurprisingly this shreds your memory too. However, it's interesting to note that small allocation units end up near other small allocation units (and similarly for big units).
This brings us to the discussion of...
Virtual Memory
The idea is that, just like the guy moving through the house as the rooms behind him get destroyed, we can keep enough rooms ready in advance so that when the program reaches for a far-away memory address, we already have it loaded.