Monday, November 24, 2025

Week 27

This week was packed with ideas that finally tied together many of the things we started earlier in the course. The main topics we covered were free space management from OSTEP chapter 17, TLBs from chapter 19, multi-level paging from chapter 20, and swapping from chapters 21 and 22. When I saw them individually they felt separate, but once I put them side by side I started seeing the bigger story of how memory virtualization actually works.

Free space management focused on how malloc and free really behave inside the C library and why memory becomes fragmented. What stood out to me was how fragile the simple approaches are: it is easy to break everything if you do not track free regions carefully. The concept of external fragmentation made a lot more sense after seeing how the allocator keeps its own free list and updates it on every request. Before this week I thought of malloc as something magical. Now I understand that there is a lot of bookkeeping behind that one function call.
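
To convince myself of that bookkeeping, I sketched a tiny first-fit allocator over a fixed buffer. The names (node_t, my_alloc) and the buffer size are made up, and this is nothing like the real C library allocator, but it shows how the free list gets split on every request and how the leftover slivers are exactly the external fragmentation the chapter talks about.

/* Minimal sketch of a first-fit free list over a fixed buffer.
 * Illustrative only: no free(), no coalescing, no alignment. */
#include <stddef.h>
#include <stdio.h>

typedef struct node {
    size_t size;          /* bytes available after this header */
    struct node *next;    /* next free region */
} node_t;

static char heap[4096];
static node_t *free_head;

void heap_init(void) {
    free_head = (node_t *)heap;
    free_head->size = sizeof(heap) - sizeof(node_t);
    free_head->next = NULL;
}

/* First fit: walk the free list and split the first region big enough.
 * Over time the leftovers become many small nodes: external fragmentation. */
void *my_alloc(size_t want) {
    node_t *prev = NULL, *cur = free_head;
    while (cur) {
        if (cur->size >= want + sizeof(node_t)) {
            /* carve the request off the front, keep the rest on the list */
            node_t *rest = (node_t *)((char *)(cur + 1) + want);
            rest->size = cur->size - want - sizeof(node_t);
            rest->next = cur->next;
            if (prev) prev->next = rest; else free_head = rest;
            cur->size = want;
            return cur + 1;
        }
        prev = cur;
        cur = cur->next;
    }
    return NULL;  /* no single region large enough */
}

int main(void) {
    heap_init();
    void *a = my_alloc(100), *b = my_alloc(200);
    printf("a=%p b=%p free bytes left=%zu\n", a, b, free_head->size);
    return 0;
}

Even this toy version needs a header per region and careful pointer updates, which is what I meant about the simple approaches being fragile.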

Then we moved on to TLBs. I already knew that address translation could be slow, but reading about how, without caching, every memory access would need extra lookups in the page table made me realize why the TLB is essential. Seeing actual traces and how hits happen because of locality helped me build intuition. I had never thought about spatial locality in terms of pages, but the example with the array made it click.
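
To see the locality effect in numbers, I wrote a toy simulation of a small, fully associative TLB while scanning an array. The 16-entry size, 4 KB pages, and FIFO replacement are just assumptions to make the point, not real hardware parameters.

/* Count TLB hits and misses while walking an int array sequentially. */
#include <stdio.h>

#define PAGE_SIZE   4096
#define TLB_ENTRIES 16

static long tlb[TLB_ENTRIES];   /* cached VPNs, -1 = empty */
static int next_victim;         /* simple FIFO replacement */

static int tlb_lookup(long vpn) {
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i] == vpn) return 1;          /* hit: fast path */
    tlb[next_victim] = vpn;                   /* miss: install the mapping */
    next_victim = (next_victim + 1) % TLB_ENTRIES;
    return 0;
}

int main(void) {
    for (int i = 0; i < TLB_ENTRIES; i++) tlb[i] = -1;

    long hits = 0, misses = 0;
    /* Walk a 40000-element int array: many accesses share one page,
     * so only the first touch of each page misses. */
    for (long addr = 0; addr < 40000 * (long)sizeof(int); addr += sizeof(int)) {
        if (tlb_lookup(addr / PAGE_SIZE)) hits++; else misses++;
    }
    printf("hits=%ld misses=%ld\n", hits, misses);
    return 0;
}

Only the first access to each page misses, which is exactly the spatial locality at page granularity that the array example in the chapter showed.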

From there, multi-level paging solved the giant page table problem. This part was harder for me to wrap my head around at first, because the whole idea of paging the page table felt strange. The point that helped me was realizing that most processes use only a tiny fraction of their virtual address space, so storing a full linear page table is wasteful. Breaking it up into smaller page-sized chunks and mapping only what is needed suddenly felt logical. Once I saw it this way, the page directory structure made more sense.
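
Writing the translation out as code is what finally made the page directory click for me. This is only a sketch of a 32-bit, two-level walk with a 10/10/12-bit split (the classic x86-style layout); the names map_page and translate are mine, not anything from a real kernel.

/* Two-level translation sketch: directory index, table index, offset. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define LEVEL_BITS   10
#define OFFSET_BITS  12
#define ENTRIES      (1 << LEVEL_BITS)

typedef struct {
    uint32_t pfn;     /* physical frame number */
    int valid;
} pte_t;

typedef struct {
    pte_t *table;     /* second-level table, allocated only if used */
} pde_t;

static pde_t page_dir[ENTRIES];  /* most entries stay empty for a sparse process */

/* Map one virtual page, allocating the second-level table on demand.
 * This lazy allocation is why sparse address spaces stay cheap. */
void map_page(uint32_t vaddr, uint32_t pfn) {
    uint32_t pdi = vaddr >> (OFFSET_BITS + LEVEL_BITS);
    uint32_t pti = (vaddr >> OFFSET_BITS) & (ENTRIES - 1);
    if (!page_dir[pdi].table)
        page_dir[pdi].table = calloc(ENTRIES, sizeof(pte_t));
    page_dir[pdi].table[pti] = (pte_t){ .pfn = pfn, .valid = 1 };
}

/* Translate a virtual address, returning 0 on a missing mapping. */
int translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t pdi = vaddr >> (OFFSET_BITS + LEVEL_BITS);
    uint32_t pti = (vaddr >> OFFSET_BITS) & (ENTRIES - 1);
    uint32_t off = vaddr & ((1 << OFFSET_BITS) - 1);
    if (!page_dir[pdi].table || !page_dir[pdi].table[pti].valid)
        return 0;
    *paddr = (page_dir[pdi].table[pti].pfn << OFFSET_BITS) | off;
    return 1;
}

int main(void) {
    map_page(0x00400000, 42);        /* map a single page of "code" */
    uint32_t pa;
    if (translate(0x00400010, &pa))
        printf("virtual 0x00400010 -> physical 0x%x\n", pa);
    return 0;
}

The part I like is that a second-level table only exists if something in that region is actually mapped, which is the whole savings over one huge linear table.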

Finally, swapping tied everything together. Removing the assumption that a process must fit in physical memory changed the whole picture. The present bit and page faults reminded me of the TLB structure. It is the same pattern again: check the fast path first, and if it fails, fall back to the slower path. The part that surprised me the most was how similar the mechanics are. We either fetch a missing translation from the page table or fetch a missing page from disk, update the structures, and retry. That parallel was an aha moment for me.
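
The parallel is easier to see with the slow path written out. This is a toy version of the check-then-retry pattern with a present bit and a fake swap-in; access_page and swap_in are invented names, and a real page fault is of course handled by the OS and the hardware, not an if statement.

/* Sketch of the fast-path / slow-path / retry pattern for swapping. */
#include <stdio.h>

#define NPAGES 8

typedef struct {
    int present;   /* 1 = in physical memory, 0 = out on swap */
    int frame;     /* frame number when present */
} pte_t;

static pte_t page_table[NPAGES];
static int next_free_frame;

/* Slow path: "read" the page back from swap and install a mapping. */
static void swap_in(int vpn) {
    printf("page fault on vpn %d: reading from swap...\n", vpn);
    page_table[vpn].frame = next_free_frame++;
    page_table[vpn].present = 1;
}

/* Every access checks the present bit first; a miss falls back to the
 * slow path and then the access is retried, just like a TLB miss. */
static int access_page(int vpn) {
    if (!page_table[vpn].present)
        swap_in(vpn);
    return page_table[vpn].frame;
}

int main(void) {
    printf("first touch -> frame %d\n", access_page(3));
    printf("second touch -> frame %d (no fault)\n", access_page(3));
    return 0;
}

Swap the words "present bit" for "TLB hit" and "swap" for "page table" and it is the same fetch, update, retry loop, which is the aha moment I mentioned above.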

The hardest topic this week was multi-level paging because of the number of moving parts. The easiest was TLBs, because the impact of caching is something I already understood from other classes. I am curious how replacement policies tie into swapping, because choosing the wrong page to evict seems like it could slow the system down a lot.

Overall, this week helped me appreciate how virtualization is a stack of ideas that build on one another. Each solution fixes the limitations of the previous one. Seeing all of them together makes the whole system feel more consistent and complete.
