Lab 5 Exercises - Brian Mere

1

Question

Pasted image 20241202151023.png

Proof

The CPU is idle only when every process is blocked waiting on I/O (on a context switch there would be no runnable process to switch to). With each of the 4 processes waiting on I/O half the time, the probability of this is:

$$\left(\frac{1}{2}\right)^4 = \frac{1}{16}$$
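A quick check of this, plus the resulting CPU utilization (a minimal sketch; the helper name is mine, while $n = 4$ and the 50% I/O wait come from the problem):

```python
def cpu_utilization(n_processes: int, io_wait_fraction: float) -> float:
    """CPU is busy unless every process is simultaneously waiting on I/O."""
    return 1 - io_wait_fraction ** n_processes

# 4 processes, each waiting on I/O half the time:
print(cpu_utilization(4, 0.5))  # 0.9375 -> the CPU is idle 1/16 of the time
```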

2

Question

Pasted image 20241202151120.png
Pasted image 20241202152507.png

Proof

a. Virtual address 20 lies in page 0 (0K), which maps to the frame at 8K, so the physical address is $8 \cdot 1024 + 20 = 8212$.
b. Virtual address 4100 lies in page 1 (4K), which maps to the frame at 4K, so the physical address is $4096 + 4 = 4100$.
c. Virtual address 8300 lies in page 2 (8K), which maps to the frame at 24K, so the physical address is $24{,}576 + 108 = 24{,}684$ (the translation is sketched in code below).
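A minimal sketch of the translation, assuming 4 KB pages and reconstructing the page-table mapping from the three answers above (page 0 → frame at 8K, page 1 → frame at 4K, page 2 → frame at 24K); the actual table is in the question image:

```python
PAGE_SIZE = 4 * 1024  # assuming 4 KB pages

# Virtual page -> physical frame, inferred from the answers above.
page_table = {0: 2, 1: 1, 2: 6}

def translate(vaddr: int) -> int:
    page, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

for v in (20, 4100, 8300):
    print(v, "->", translate(v))  # 8212, 4100, 24684
```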

3

Question

Pasted image 20241202153417.png

Proof

Since $9 + 11 = 20$ bits are used for the page-table indices, the remaining 12 bits form the offset, which dictates the page size. The page size is therefore $2^{12} = 4096$ bytes, and the number of pages is $2^{20}$ (one for each combination of the 20 index bits).
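The same arithmetic spelled out (the 9/11/12-bit split is taken from the problem statement):

```python
outer_bits, inner_bits, offset_bits = 9, 11, 12

page_size = 2 ** offset_bits                 # 2**12 = 4096 bytes
num_pages = 2 ** (outer_bits + inner_bits)   # 2**20 = 1,048,576 pages
print(page_size, num_pages)
```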

4

Question

Pasted image 20241202153635.png

Proof

a. Page 0. NRU looks at the class determined by the R and M bits; the table from Lecture 20 - Filesystems, Virtual Memory Algorithms is:

|           | $\bar{R}$ | $R$ |
| --------- | --------- | --- |
| $\bar{M}$ | 0         | 2   |
| $M$       | 1         | 3   |

We evict from the lowest class first, so we want a page with $\bar{M}, \bar{R}$ (neither modified nor referenced), which here is page 0 (the selection rules are sketched in code after this list).

b. Page 2, since it has the lowest load timestamp (it is the oldest loaded page).

c. Page 1, since it has the oldest last-reference timestamp (it is the least recently used page).

d. Page 0, since it has not been referenced. Even though page 1 would be examined first, it gets its R bit cleared and a second chance, and the algorithm then moves on to eject page 0.
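A small sketch of how each victim would be picked programmatically. The page attributes below are hypothetical placeholders (the real values are in the question image); only the selection rules themselves matter here:

```python
from dataclasses import dataclass

@dataclass
class Page:
    num: int
    loaded: int    # load timestamp
    last_ref: int  # last-reference timestamp
    R: int         # referenced bit
    M: int         # modified bit

def nru_class(p: Page) -> int:
    # Class 0: R=0,M=0 | 1: R=0,M=1 | 2: R=1,M=0 | 3: R=1,M=1
    return 2 * p.R + p.M

# Hypothetical page set, NOT the values from the exercise image.
pages = [Page(0, 120, 160, 0, 0), Page(1, 230, 140, 0, 1),
         Page(2, 110, 170, 1, 0), Page(3, 200, 180, 1, 1)]

nru_victim  = min(pages, key=nru_class)             # lowest NRU class
fifo_victim = min(pages, key=lambda p: p.loaded)    # oldest load time
lru_victim  = min(pages, key=lambda p: p.last_ref)  # oldest last reference
print(nru_victim.num, fifo_victim.num, lru_victim.num)
```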

5

Question

Pasted image 20241202154929.png

Proof

0: 0713 (oldest is to the left, 4 page faults)
1: 7132 (fault)
2: 7132 (no fault)
3: 7132 (no fault)
4: 1320 (fault)
5: 1320 (no fault)
6: 1320 (no fault)

So 6 page faults, with final pages loaded as 1320.
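A small FIFO simulator that produces this kind of trace; the reference string below is a made-up placeholder, since the real one is in the question image:

```python
from collections import deque

def fifo_faults(refs, n_frames=4):
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()   # evict the oldest-loaded page
            frames.append(page)
    return faults, list(frames)

# Placeholder reference string:
print(fifo_faults([0, 7, 1, 3, 2, 3, 1, 0]))
```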

6

Question

Repeat 5 with LRU replacement

Proof

0: 0713 (least recently used on the left, most recently used on the right, 4 page faults)
1: 7312 (fault)
2: 1327 (no fault)
3: 1372 (no fault)
4: 3720 (fault)
5: 7201 (fault)
6: 2013 (fault)

So 8 page faults with final pages loaded 2013.
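The LRU counterpart; an `OrderedDict` keeps the recency order (least recently used on the left), again with a placeholder reference string:

```python
from collections import OrderedDict

def lru_faults(refs, n_frames=4):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # page becomes most recently used
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = None
    return faults, list(frames)

# Placeholder reference string:
print(lru_faults([0, 7, 1, 3, 2, 3, 1, 0]))
```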

7

Question

Pasted image 20241202160831.png

Proof

$$60\ \text{s} - 15{,}000 \cdot 0.002\ \text{s} = 30 \text{ seconds executing instructions}$$

If the memory is doubled, there are half as many page faults, so then we have:

$$30\ \text{s} + 7{,}500 \cdot 0.002\ \text{s} = 45 \text{ seconds}$$
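The same bookkeeping as a quick check (the 60 s run time, 15,000 faults, and 2 ms per fault come from the problem):

```python
total_time, faults, fault_cost = 60.0, 15_000, 0.002  # seconds

compute_time = total_time - faults * fault_cost          # 30.0 s of pure execution
doubled_memory_time = compute_time + (faults / 2) * fault_cost
print(compute_time, doubled_memory_time)                 # 30.0 45.0
```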

8

Question

Pasted image 20241202162027.png

Proof

Users don't know the internal structure of directories, so they need a system call to read them. Moreover, every process runs in a virtualized address space; only the OS knows the actual physical addresses, so a user process has no way of knowing where to start reading on its own.

9

Question

Pasted image 20241202162250.png

Proof

You save time because, after reading the inode, the disk head doesn't have to travel far to reach the data. You also get performance benefits from caching (some of the data gets pulled in for free along with the inode), and the disk arm physically moves a shorter distance.

10

Question

Pasted image 20241202162508.png

Proof

The function for n requests in terms of time is:

$$t(n) = n h \cdot 1\ \text{ms} + n(1-h) \cdot 40\ \text{ms}$$

The average divides this by n:

$$\mu_t = \frac{t(n)}{n} = h \cdot 1 + (1-h) \cdot 40 \ \text{(ms)}$$
(Plot of $f(h) = h + 40(1-h)$ for $h \in [0, 1]$: the mean access time in ms falls linearly from 40 ms at $h = 0$ to 1 ms at $h = 1$.)
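A quick tabulation of the mean access time against the hit ratio (the 1 ms and 40 ms figures are from the problem):

```python
def mean_access_ms(h: float, hit_ms: float = 1.0, miss_ms: float = 40.0) -> float:
    return h * hit_ms + (1 - h) * miss_ms

for h in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"h = {h:.2f}: {mean_access_ms(h):5.2f} ms")
```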

11

Question

Pasted image 20241202163144.png

Proof

Case 1: Add rotation latency + seek time + transmit time

$$t_1 = 10\ \text{ms} + \left(5\ \tfrac{\text{ms}}{\text{block}} \cdot 100\ \text{blocks}\right) + 20\ \tfrac{\mu\text{s}}{\text{block}} \cdot 100\ \text{blocks} = 512\ \text{ms}$$

Case 2: Add rotation latency + seek time (cylinder seek + track seek) + transmit time

$$t_2 = 10\ \text{ms} + \left(1\ \tfrac{\text{ms}}{\text{cyl}} \cdot 2\ \tfrac{\text{cyl}}{\text{block}} + 100\ \tfrac{\mu\text{s}}{\text{block}}\right) \cdot 100\ \text{blocks} + 20\ \tfrac{\mu\text{s}}{\text{block}} \cdot 100\ \text{blocks} = 222\ \text{ms}$$

The above proposed solution is incorrect. The correct one is as follows:

a. (unoptimized case) Average seek of 5ms:

$$T_U(b) = b \cdot \left(5\ \text{ms} + 10\ \text{ms} + 20\ \tfrac{\mu\text{s}}{\text{block}}\right)$$

Here it's:

$$T_U(100) = 1.502\ \text{s}$$

b. (optimized case) Average seek of 100 μs:

$$T_O(b) = b \cdot \left(100\ \tfrac{\mu\text{s}}{\text{block}} + 10\ \text{ms} + 20\ \tfrac{\mu\text{s}}{\text{block}}\right)$$

thus:

$$T_O(100) = 1.012\ \text{s}$$

Essentially, in attempting this problem I failed to count the rotational latency per block as well. Adding in the actual rotational delay would give:

$$b \cdot (5000 + 5000 + 20)\ \mu\text{s}$$

and:

$$b \cdot (100 + 5000 + 20)\ \mu\text{s}$$

respectively.
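A sketch of the totals, recomputed (per-block seek, rotational latency, and transfer times in microseconds, taken from the working above):

```python
def read_time_s(n_blocks, seek_us, rotation_us, transfer_us=20):
    # Every block pays its own seek, rotational latency, and transfer time.
    return n_blocks * (seek_us + rotation_us + transfer_us) / 1e6

# With the 10 ms rotational latency used in parts (a) and (b):
print(read_time_s(100, seek_us=5_000, rotation_us=10_000))  # ~1.502 s
print(read_time_s(100, seek_us=100,   rotation_us=10_000))  # ~1.012 s

# With a 5 ms average rotational delay instead, the b(5000+5000+20) variant:
print(read_time_s(100, seek_us=5_000, rotation_us=5_000))   # ~1.002 s
print(read_time_s(100, seek_us=100,   rotation_us=5_000))   # ~0.512 s
```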

12

Question

Pasted image 20241202163447.png

Proof

Elinor is right (Carolyn is wrong). Space is cheap, so we can afford the extra copies, and it does make sense that simply adding a fresh copy to the table on every inode lookup gives a performance boost (you skip searching the table). However, on systems with many files (which is most of them), the table eventually holds many duplicate entries for the same inode, and you end up spending more time updating all of those duplicates whenever you write to one file.