Tuesday, December 2, 2025

Week 28

This week’s focus on concurrency and threads felt like a big shift from everything we have done so far. Until now, processes always felt simple in the sense that one thing runs at a time, and if something is blocked, that’s it. But the moment we started talking about threads and shared memory, things suddenly became a lot more unpredictable and honestly more interesting. A lot of it clicked while listening to the examples about web browsers loading images in the background while still letting the page respond to scrolling. That helped me visualize why threads matter beyond just “speed.”

What stood out to me the most was the idea that threads share the same virtual memory. At first it sounded convenient, but once we got into critical sections and race conditions, it became clear why this can be dangerous. Seeing how a simple “counter++” can break because of interleaving instructions made me realize how many assumptions I usually make about code running in a neat order. This was one of those moments where I stopped and thought, “Wow, this really does explain why debugging concurrency is such a pain.”

The hardest part for me this week was keeping straight when something was thread-safe and when it wasn’t. Mutexes make sense in theory, but the challenge is spotting exactly where they need to be. The lectures did a good job showing how wrapping everything in a lock is not always the best solution either, especially when performance starts to drop as more threads wait on each other. I get the basic mechanics of pthread_create and pthread_join, but I’m still trying to build intuition about when to use locks and when to restructure the code itself.

One “aha” moment was understanding why libraries hide these details. Seeing how a thread-safe counter can be wrapped into a small API so the user never touches locks made it clear why real-world systems are designed this way. It connects back to earlier topics about abstraction and virtual memory: hide the complexity so the user doesn’t break things.

I’m curious about where this goes next. My guess is we’ll start mixing these thread concepts with scheduling or maybe condition variables. I also wonder how this ties into security issues.
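
Before moving on, I tried to rebuild that thread-safe counter idea myself as a small sketch. The names (counter_t, counter_increment, and so on) and the loop counts are my own, not the ones from lecture; the only real assumption is POSIX threads.

```c
/* My own minimal sketch of a thread-safe counter hidden behind a small API.
   Build with something like: gcc counter.c -o counter -pthread */
#include <pthread.h>
#include <stdio.h>

typedef struct {
    long value;
    pthread_mutex_t lock;   /* protects value */
} counter_t;

void counter_init(counter_t *c) {
    c->value = 0;
    pthread_mutex_init(&c->lock, NULL);
}

/* The caller never sees the lock: the critical section lives inside the API. */
void counter_increment(counter_t *c) {
    pthread_mutex_lock(&c->lock);
    c->value++;             /* without the lock, this read-modify-write can interleave */
    pthread_mutex_unlock(&c->lock);
}

long counter_get(counter_t *c) {
    pthread_mutex_lock(&c->lock);
    long v = c->value;
    pthread_mutex_unlock(&c->lock);
    return v;
}

counter_t shared;

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter_increment(&shared);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    counter_init(&shared);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final value: %ld\n", counter_get(&shared));  /* 2000000 with the lock */
    return 0;
}
```

If I delete the lock and unlock calls inside counter_increment, the final value usually comes out below 2000000, which is the “counter++” race showing up in practice.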

Monday, November 24, 2025

Week 27

This week was packed with ideas that finally tied together many of the things we started earlier in the course. The main topics we covered were free space management from OSTEP chapter 17, TLBs from chapter 19, multi-level paging from chapter 20, and swapping from chapters 21 and 22. Seen individually they felt separate, but once I put them side by side I started seeing the bigger story of how memory virtualization actually works.

Free space management focused on how malloc and free really behave inside the C library and why memory becomes fragmented. What stood out to me was how fragile the simple approaches are. It is easy to break everything if you do not track free regions carefully. The concept of external fragmentation made a lot more sense after seeing how the allocator keeps its own free list and updates it on every request. Before this week I thought of malloc as something magical. Now I understand that there is a lot of bookkeeping behind that one function call.
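
To make the bookkeeping concrete, I sketched what a free list with a first-fit search might look like. This is my own simplification, not the allocator from the chapter: a real malloc also stores headers, splits and coalesces regions, and asks the OS for more heap when it runs out.

```c
/* My simplified picture of a free list with first-fit allocation.
   Real allocators also split blocks, coalesce neighbors on free(),
   and keep a header in front of each allocated region. */
#include <stddef.h>
#include <stdio.h>

typedef struct free_node {
    size_t size;             /* bytes available in this free region */
    struct free_node *next;  /* next free region in the list */
} free_node;

/* First fit: walk the list and take the first region big enough. */
free_node *first_fit(free_node *head, size_t want) {
    for (free_node *cur = head; cur != NULL; cur = cur->next) {
        if (cur->size >= want)
            return cur;
    }
    return NULL;
}

int main(void) {
    /* Three free regions of 10, 30, and 20 bytes (faked statically here). */
    free_node c = {20, NULL}, b = {30, &c}, a = {10, &b};

    free_node *hit = first_fit(&a, 35);
    if (hit)
        printf("would allocate from the %zu-byte region\n", hit->size);
    else
        printf("no single region is large enough\n");
    return 0;
}
```

With a request of 35 bytes there are 60 bytes free in total but no single region big enough, which is exactly the external fragmentation problem.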

Then we moved on to TLBs. I already knew that address translation could be slow, but reading about how every memory access would require multiple lookups without caching made me realize why the TLB is essential. Seeing actual traces and how hits happen due to locality helped me build intuition. I had never thought about spatial locality in terms of pages, but the example with the array made it click.
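
The array example made more sense to me once I counted which virtual page each access lands on. This is my own rough version of it, assuming 4 KB pages; the actual addresses depend on the compiler, but the point is that about 1024 consecutive int accesses share one page, so only the first touch of each page should miss in the TLB.

```c
/* My rough recreation of the array/TLB-locality idea: with 4 KB pages,
   consecutive int accesses mostly fall on the same virtual page, so the
   first access to a page misses in the TLB and the rest hit. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096

int a[10000];

int main(void) {
    uintptr_t last_vpn = (uintptr_t)-1;
    int page_changes = 0;

    for (int i = 0; i < 10000; i++) {
        uintptr_t vpn = (uintptr_t)&a[i] / PAGE_SIZE;  /* virtual page number */
        if (vpn != last_vpn) {      /* first touch of a new page: likely a TLB miss */
            page_changes++;
            last_vpn = vpn;
        }
        a[i] = i;                   /* the actual "work" */
    }

    /* 10000 ints at 4 bytes each is about 40000 bytes, so only 10-11 pages. */
    printf("accesses: 10000, new pages touched: %d\n", page_changes);
    return 0;
}
```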

From there, multi-level paging solved the giant page table problem. This part was harder for me to describe at first, because the whole idea of paging the page table felt strange. The point that helped me was realizing that most processes use only a tiny piece of their virtual address space, so storing a full page table is wasteful. Breaking it up into smaller page-sized chunks and mapping only what is needed suddenly felt logical. Once I saw it this way, the page directory structure made more sense.
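
Writing the address split out as bit arithmetic helped me more than the diagram did. This is a toy 32-bit layout I picked myself (10-bit directory index, 10-bit table index, 12-bit offset), not anything specific from the assignment.

```c
/* Toy split of a 32-bit virtual address for two-level paging:
   10-bit page-directory index | 10-bit page-table index | 12-bit offset.
   Only the page-table pages that are actually used need to exist. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t vaddr = 0x0804A123;                 /* an arbitrary example address */

    uint32_t pd_index = (vaddr >> 22) & 0x3FF;   /* top 10 bits */
    uint32_t pt_index = (vaddr >> 12) & 0x3FF;   /* middle 10 bits */
    uint32_t offset   = vaddr & 0xFFF;           /* low 12 bits */

    printf("vaddr 0x%08X -> PD index %u, PT index %u, offset 0x%03X\n",
           vaddr, pd_index, pt_index, offset);
    /* Translation would be: page_directory[pd_index] points to a page table,
       page_table[pt_index] points to a physical frame, frame base + offset = data. */
    return 0;
}
```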

Finally, swapping tied everything together. Removing the assumption that a process must fit in physical memory changed the whole picture. The present bit and page faults reminded me of the TLB structure. It is the same pattern again: check the fast path first, and if it fails, fall back to the slower path. The part that surprised me the most was how similar the mechanics are. We either fetch a missing translation from the page table or fetch a missing page from disk, update the structures, and retry. That parallel was an aha moment for me.
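
To remember the pattern, I wrote my mental model down as a tiny simulation. Every name in it is invented and it skips everything a real kernel does; it only shows the fast-path/slow-path shape.

```c
/* A toy version of my mental model of one memory access:
   fast path first, slower fallbacks after, then retry. */
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    bool in_tlb;      /* is the translation cached? */
    bool present;     /* is the page in physical memory? */
} page_state;

void access(page_state *p) {
    if (p->in_tlb) {
        printf("TLB hit: translate and go\n");
        return;
    }
    printf("TLB miss: walk the page table\n");
    if (!p->present) {
        printf("page fault: read the page from disk, set the present bit\n");
        p->present = true;          /* OS brings the page in */
    }
    p->in_tlb = true;               /* install the translation, then retry */
    printf("retry: now it hits\n");
}

int main(void) {
    page_state cold = { false, false };  /* never touched: miss plus fault */
    page_state warm = { true,  true  };  /* recently used: pure fast path */
    access(&cold);
    access(&warm);
    return 0;
}
```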

The hardest topic this week was multi-level paging because of the number of moving parts. The easiest was TLBs because the impact of caching is something I already understood from other classes. I am curious how replacement policies tie into swapping, because choosing the wrong page to evict seems like it could slow the system down a lot.

Overall, this week helped me appreciate how virtualization is a stack of ideas that build on one another. Each solution fixes the limitations of the previous one. Seeing all of them together makes the whole system feel more consistent and complete.

Tuesday, November 18, 2025

Week 26

This week everything started to come together in a way that finally made sense to me. We covered several topics related to how memory works behind the scenes, and it felt like I was slowly uncovering how the operating system really manages the world that programs think they live in. The main ideas were virtual address spaces, how C handles memory with malloc and free, and the different techniques the OS uses to translate a virtual address into a physical one. We started with base and bounds, then moved into segmentation, and this naturally prepared the ground for paging.

Virtual address spaces were the first thing that clicked for me. Understanding that every process thinks it has its own clean and simple memory layout made it easier to visualize how code, heap, and stack live inside that space. Seeing that this space is actually “fake” and that the real data is stored somewhere else in RAM made me realize how much work the operating system is constantly doing behind the curtain.

The C memory API was another eye opener. Using malloc and free looks simple on the surface, but when you think about how memory can leak if you forget to free something, or how both the heap and stack interact during a function call, everything becomes more delicate. It made me appreciate why memory bugs can be so dangerous and why languages with automatic garbage collection exist in the first place.
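
A small example I wrote to scare myself a little: in the hypothetical leaky() below, the pointer lives on the stack and disappears when the function returns, but the block it pointed to stays allocated on the heap.

```c
/* My own small illustration of why forgetting free() matters:
   heap memory outlives the function call, stack memory does not. */
#include <stdlib.h>
#include <stdio.h>

void leaky(void) {
    int *p = malloc(100 * sizeof(int));   /* heap allocation */
    if (p == NULL) return;
    p[0] = 42;
    /* no free(p): when the function returns, the pointer (on the stack)
       is gone, but the 400 bytes on the heap stay allocated -> leak */
}

void careful(void) {
    int *p = malloc(100 * sizeof(int));
    if (p == NULL) return;
    p[0] = 42;
    printf("p[0] = %d\n", p[0]);
    free(p);                              /* heap block returned to the allocator */
}

int main(void) {
    leaky();
    careful();
    return 0;
}
```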

The hardest topics for me this week were base and bounds and segmentation. Not the idea itself, but applying it mentally. With base and bounds, the translation formula is easy, but the part that confused me was thinking in terms of “is this address even valid?” before doing anything else. Segmentation added even more mental juggling because now the virtual address has to be decoded into a segment number and an offset. I had to stop a few times and redraw the diagrams just to keep everything straight. Still, it helped me understand why internal fragmentation becomes a problem and why segmentation was invented to improve memory efficiency.
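
Redoing the diagrams as arithmetic helped me keep the order straight: check validity first, then translate. All the numbers below are made up, and the toy segmentation part uses the top two bits of a 14-bit address to pick the segment (ignoring the detail that a real stack segment grows downward).

```c
/* Base-and-bounds and a toy segmentation decode, with numbers I made up.
   The order matters: validate the virtual address first, then add the base. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* --- base and bounds: one contiguous region per process --- */
    uint32_t base = 0x4000, bounds = 0x1000;     /* 4 KB placed at physical 0x4000 */
    uint32_t vaddr = 0x0234;

    if (vaddr >= bounds)
        printf("fault: 0x%X is outside the bounds\n", vaddr);
    else
        printf("base+bounds: virtual 0x%X -> physical 0x%X\n", vaddr, base + vaddr);

    /* --- segmentation: top 2 bits of a 14-bit address pick the segment --- */
    uint32_t seg_base[4]  = { 0x8000, 0xA000, 0x0000, 0xC000 };  /* code, heap, unused, stack */
    uint32_t seg_limit[4] = { 0x0800, 0x0400, 0x0000, 0x0400 };
    uint32_t vaddr2 = 0x1123;

    uint32_t seg    = (vaddr2 >> 12) & 0x3;      /* which segment */
    uint32_t offset = vaddr2 & 0xFFF;            /* where inside that segment */

    if (offset >= seg_limit[seg])
        printf("fault: offset 0x%X is past segment %u's limit\n", offset, seg);
    else
        printf("segmentation: virtual 0x%X -> segment %u, physical 0x%X\n",
               vaddr2, seg, seg_base[seg] + offset);
    return 0;
}
```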

My “aha” moment came when segmentation finally made sense. Realizing that the OS can place the code, heap, and stack in totally different areas of physical memory made me see how much flexibility this gives compared to the single contiguous space used in base and bounds. It felt like the picture zoomed out and I could understand why this approach reduces wasted space.

Something I keep wondering is how paging will solve the remaining issues. I know it gets rid of external fragmentation and allows very flexible placement of memory, but I want to see how the translation works in practice and how the TLB fits into the picture. It feels like the next step will combine everything we learned into a system that is both efficient and practical.

Overall, these topics connect to things I learned before in computer architecture, especially the stack frame, the heap, and how pointers work. But now everything feels more concrete. I can finally see how the OS enforces protection, how it keeps different processes separate, and how memory is treated as a shared but carefully controlled resource. This week was challenging, but it also made everything feel more real.

Tuesday, November 11, 2025

Week 25

This week helped me understand more clearly how an operating system keeps everything running at once, even when there’s only one CPU. We talked about how a process is just a running program with its own memory, registers, and instructions, and how the system switches between them so quickly that it feels like they’re all running at the same time. I found that idea really interesting because it shows how much of what we see on a computer is an illusion created by clever design.

Another big topic was limited direct execution. I learned that this is how the operating system lets programs run directly on the hardware while still protecting the system. The computer has to make sure that one process can’t mess with another or with the operating system itself, so it uses things like kernel mode and timer interrupts. I hadn’t realized before that even the operating system pauses while other processes run — that really surprised me.

We also studied different scheduling algorithms and how they decide which process gets the CPU next. I practiced calculating turnaround time and response time, which helped me see how different approaches change performance. For example, Shortest Job First keeps average turnaround time low by running the shortest jobs first, while Round Robin slices time so that every process gets a quick response. The multilevel feedback queue was the hardest to wrap my head around at first, but once I understood that short jobs finish quickly while long ones get bumped down in priority, it clicked.
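
To practice the metrics, I wrote a tiny calculator for jobs that all arrive at time 0 and run to completion in the order given, which matches the FIFO/SJF-style examples (Round Robin would need time slices). The job lengths are made up.

```c
/* Turnaround and response time for jobs that all arrive at t = 0 and run to
   completion in the order listed (sort by length first to get SJF).
   turnaround = completion - arrival, response = first run - arrival. */
#include <stdio.h>

int main(void) {
    int run[] = { 10, 20, 30 };           /* made-up job lengths */
    int n = 3, t = 0;
    double total_turnaround = 0, total_response = 0;

    for (int i = 0; i < n; i++) {
        int response   = t;               /* job first runs at the current time */
        int turnaround = t + run[i];      /* and finishes run[i] later */
        printf("job %d: response %d, turnaround %d\n", i, response, turnaround);
        total_response   += response;
        total_turnaround += turnaround;
        t += run[i];
    }
    printf("avg response %.1f, avg turnaround %.1f\n",
           total_response / n, total_turnaround / n);
    return 0;
}
```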

My “aha” moment this week was realizing how much thought goes into something as simple as keeping programs running smoothly. I’m still curious how these scheduling ideas work on modern computers with multiple cores, but I feel like I’m starting to see how all the pieces fit together. 

Sunday, November 2, 2025

Week 24

This week’s lessons focused on several basic concepts: the role of the operating system, computer architecture, the history of Linux and the shell, programming in C, and the command line. Together, these topics helped me connect the lower-level operations of hardware and software with how modern systems actually run. 

The idea that the operating system acts like an API between hardware and users made a lot of sense to me. It’s something I never really thought about before, but now I understand how it makes everything else possible. I also found the “cake” example from the lecture really helpful — it showed how operating systems divide up resources in a way that’s fair and efficient, not necessarily perfect. That made the whole concept of resource management more visual and easier to grasp.

The computer architecture review helped me remember how the CPU, RAM, and storage all communicate. The memory hierarchy stood out as something really important — the way we trade off between speed and size made me think about why caching exists and how performance is balanced in real systems.

The most challenging part for me was the introduction to C programming. While I’ve coded in Python and Java before, C feels very different because it provides such direct access to memory and hardware. Understanding pointers and memory management is still confusing, especially the difference between passing by value and passing by reference. However, I had an “aha” moment when I realized why C is still so dominant—it’s the backbone of all major operating systems. Seeing that connection made the learning curve feel worthwhile. 
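
The example that finally helped me with the value-versus-reference confusion: C always copies the argument, and “passing by reference” really means copying a pointer so the function can reach the original variable. This is just my own minimal illustration.

```c
/* In C, arguments are always copied (pass by value); passing a pointer
   copies the address, which lets the callee change the original variable. */
#include <stdio.h>

void add_one_copy(int x)     { x = x + 1; }       /* changes only the local copy */
void add_one_pointer(int *x) { *x = *x + 1; }     /* changes the caller's variable */

int main(void) {
    int n = 5;
    add_one_copy(n);
    printf("after add_one_copy:    %d\n", n);      /* still 5 */
    add_one_pointer(&n);
    printf("after add_one_pointer: %d\n", n);      /* now 6 */
    return 0;
}
```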

Looking ahead, I’m curious to see how these concepts will connect to virtualization. It seems like everything we’re learning now—process management, memory allocation, and system calls—will serve as the foundation for understanding how multiple operating systems can share the same hardware efficiently. This week gave me a stronger sense of how the pieces of the computing world fit together, from transistors and buses to user-level commands.

Friday, October 24, 2025

Week 23

Throughout this database course, I learned several valuable concepts and skills, but the three most important ones for me were integrating databases with Java, learning SQL, and understanding NoSQL with MongoDB.

First, the integration between Java and SQL helped me understand how real-world applications connect to databases. By creating repositories, writing queries, and using JDBC connections, I learned how data can flow between a program and a database, which is essential for any backend or full-stack developer. It also taught me how to handle exceptions, maintain data consistency, and structure code that interacts efficiently with database systems.

Second, learning the SQL language itself was a fundamental part of this course. I learned how to design relational databases, define relationships between tables, and use queries such as SELECT, JOIN, and GROUP BY to extract meaningful information. Understanding normalization, primary and foreign keys, and integrity constraints gave me a solid foundation in data management and design principles that apply to almost any system that handles structured data.

Finally, exploring NoSQL with MongoDB expanded my perspective on database systems. I discovered how flexible document-based storage can be compared to the rigid structure of relational databases. Learning how to use collections, documents, and queries in MongoDB showed me how NoSQL can be ideal for projects that need scalability and rapid development. This contrast between SQL and NoSQL helped me appreciate the strengths and trade-offs of both approaches and how to decide which one fits best depending on the project’s needs.

Monday, October 20, 2025

Week 22

This week we started learning about MongoDB, which feels very different from the SQL databases we had been using before. Up until now, we mostly worked with MySQL, where data is organized into tables with rows, columns, and relationships. In contrast, MongoDB stores data as documents inside collections, using a structure similar to JSON. At first, it was a little strange not having to create tables or define strict schemas, but it also made sense once I saw how flexible it can be.

One thing I found interesting is that MongoDB doesn’t require a fixed structure for the data. You can have documents with different fields in the same collection, which would not be possible in MySQL. This flexibility could be really helpful when building applications where the type of information might change or grow over time. On the other hand, MySQL forces you to define everything in advance, which helps maintain order and consistency when dealing with large, structured datasets.

Both databases serve a similar purpose — storing and retrieving data — but they do it in different ways. MySQL is great for handling data that needs clear relationships, like customers and orders in a store. MongoDB seems better for data that’s more dynamic, like user profiles or product catalogs that may have varying attributes.

If I had to choose between the two, I would probably use MySQL for projects where accuracy, constraints, and relationships are important, such as accounting or inventory systems. For projects that need to handle lots of changing or unstructured data, like social media apps or analytics platforms, I’d go with MongoDB. Overall, learning about MongoDB this week helped me appreciate how different database models can be used for different needs, and it gave me a broader perspective on how data is managed in modern applications.
