Tuesday, July 30, 2024

CST 334 - Week 6

For this week, we learned more about concurrency, focusing mainly on semaphores and common concurrency problems. A semaphore is a synchronization primitive that, depending on how it is initialized, can serve as both a lock and a condition variable. The non-deadlock concurrency problems we covered were atomicity violations and order violations. An atomicity violation occurs when a section of code is meant to run atomically but can be interrupted before completion, leaving shared state partially updated; a simple fix is to add locks around the critical section. An order violation occurs when code assumes things happen in a certain order, but nothing actually enforces that order; condition variables (or semaphores) can be used to enforce the intended flow. Beyond these bugs, we also covered deadlock, which occurs when two threads each hold a resource the other needs and wait on each other, so neither makes progress. Four conditions must all hold for a deadlock to occur: (1) mutual exclusion, (2) hold-and-wait, (3) no preemption, and (4) circular wait.
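To make this concrete, here is a minimal sketch (my own example, not from the course materials) of a POSIX semaphore playing both roles: initialized to 1 it acts as a lock, and initialized to 0 it enforces ordering between two threads, which is exactly the fix for an order violation:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;   /* initialized to 1: behaves like a lock          */
sem_t ready;   /* initialized to 0: enforces producer-then-consumer order */
int shared = 0;

void *producer(void *arg) {
    sem_wait(&mutex);      /* enter critical section */
    shared = 42;
    sem_post(&mutex);      /* leave critical section */
    sem_post(&ready);      /* signal that the value is ready */
    return NULL;
}

void *consumer(void *arg) {
    sem_wait(&ready);      /* block until producer has run (no order violation) */
    sem_wait(&mutex);
    printf("consumed %d\n", shared);
    sem_post(&mutex);
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);   /* value 1 -> binary semaphore, i.e., a lock */
    sem_init(&ready, 0, 0);   /* value 0 -> first sem_wait blocks until posted */
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```

Even though the consumer thread is created first, the `ready` semaphore guarantees it cannot read `shared` until the producer has written it.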

Tuesday, July 23, 2024

CST 334 - Week 5

For this module, the main focus was concurrency. This topic introduces the idea of a thread, which can be thought of as similar to a process; the key difference is that the threads of a multi-threaded program share the same address space. Threads are useful for two major reasons: parallelism, which lets you execute different parts of a program simultaneously, and overlapping computation with slow I/O so the whole program isn't blocked waiting. To use threads, we need to be able to create and end individual threads, of course, but we also need to control when each may run. For that, we use locks to provide mutual exclusion, which guarantees that only one thread at a time can enter a critical section of code.
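As a minimal sketch of mutual exclusion (my own example, using POSIX pthreads), two threads increment a shared counter; without the lock, each `counter++` is a read-modify-write that can interleave with the other thread's and lose updates:

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* only one thread in here at a time */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 2000000 with the lock */
    return 0;
}
```

Remove the lock/unlock calls and the final count will usually come out short, which is the lost-update race the lock prevents.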

One of the biggest challenges for me this week was trying to wrap my head around how this works under the hood. I've used threads on a basic level to ensure proper input in an automation project I worked on in the past, but have never really dug into the intricacies of how they work in tandem with the hardware.

Tuesday, July 16, 2024

CST 334 - Week 4

This week we learned more about memory virtualization, mainly involving paging. Where segmentation divides the address space into variable-sized pieces, paging chops it up into fixed-size pieces called pages. The OS keeps track of available pages on a "free list," and a page table stores the address translations from virtual to physical memory. Because this approach can be slower than desired, the OS works with hardware to make things more efficient. A TLB, or translation-lookaside buffer, is a hardware cache of popular virtual-to-physical address translations. Whenever there is a reference to virtual memory, the TLB is checked first, before falling back to a walk of the page table, which has an enormous effect on performance.
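Here's a toy sketch of the translation a page table (or a TLB hit) performs, assuming 4 KiB pages and a made-up page table; real hardware does this in the MMU, not in C:

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE   4096u   /* assumed 4 KiB pages */
#define OFFSET_BITS 12      /* log2(PAGE_SIZE)     */

int main(void) {
    uint32_t vaddr  = 0x00003ABC;             /* example virtual address */
    uint32_t vpn    = vaddr >> OFFSET_BITS;   /* virtual page number     */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);/* offset within the page  */

    /* Hypothetical page table: VPN -> physical frame number (PFN). */
    uint32_t page_table[] = { 7, 2, 9, 5 };
    uint32_t pfn   = page_table[vpn];
    uint32_t paddr = (pfn << OFFSET_BITS) | offset;

    printf("VPN=%u offset=0x%X -> PFN=%u paddr=0x%X\n", vpn, offset, pfn, paddr);
    return 0;
}
```

The key idea is that only the page number gets translated; the offset passes through unchanged, which is why fixed-size pages make translation so mechanical (and so cacheable in a TLB).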

We also learned a bit about swapping. This mechanism reserves some space on disk, called swap space, which is used when memory fills up: the OS swaps pages in and out as necessary according to a page-replacement policy. "High watermark" (HW) and "low watermark" (LW) thresholds help decide when the OS should start removing pages from memory. Once the OS notices that fewer than LW pages are available, a background thread starts evicting pages until HW pages are free. All of this happens behind the scenes while still supporting the illusion that each process has its own private, contiguous address space.
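As a rough, hypothetical sketch of that background thread (the thresholds and the helpers free_page_count and evict_one_page are invented for illustration; a real kernel's daemon is far more involved):

```c
#define LOW_WATERMARK   64   /* assumed threshold: start evicting */
#define HIGH_WATERMARK 256   /* assumed threshold: stop evicting  */

extern int  free_page_count(void);   /* hypothetical: pages on the free list   */
extern void evict_one_page(void);    /* hypothetical: run the replacement policy */
extern void sleep_until_woken(void); /* hypothetical: block until memory pressure */

void page_daemon(void) {
    for (;;) {
        if (free_page_count() < LOW_WATERMARK) {
            /* Evict in a batch until we're back above the high watermark. */
            while (free_page_count() < HIGH_WATERMARK)
                evict_one_page();
        }
        sleep_until_woken();
    }
}
```

Evicting in batches between the two watermarks, rather than one page at a time on demand, amortizes the cost of waking the daemon and writing pages out.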

Tuesday, July 9, 2024

CST 334 - Week 3

The main focus for this week was memory management. We read about the abstraction of memory and how an OS virtualizes it. The goals of a virtual memory system are transparency, efficiency, and protection. Transparency means the program shouldn't be able to tell that its memory is virtual rather than physical. Efficiency means virtualization should be both time- and space-efficient, which requires hardware support. Finally, protection means processes are protected from each other, and the OS is protected from those processes. We also learned about memory APIs and how developers use calls to malloc() and free() to manage memory dynamically.
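A quick sketch of the malloc()/free() workflow, including the error checking that's easy to forget:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Request heap space for 10 ints. */
    int *nums = malloc(10 * sizeof(*nums));
    if (nums == NULL) {            /* malloc returns NULL on failure */
        perror("malloc");
        return 1;
    }
    for (int i = 0; i < 10; i++)
        nums[i] = i * i;
    printf("nums[9] = %d\n", nums[9]);

    free(nums);     /* return the memory; forgetting this leaks */
    nums = NULL;    /* avoid a dangling pointer to freed memory */
    return 0;
}
```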

In addition, we continued learning about the Linux environment by looking at regex, grep, sed, and awk. Regular expressions match patterns in strings and power utilities like grep, sed, and awk, which use them to filter input lines, apply transformation rules to lines, and run actions on lines matching a pattern, respectively.
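Staying in C for consistency, the same kind of pattern matching these utilities perform is exposed to programs through POSIX regex.h; this small sketch (with a made-up pattern and input) filters lines much the way a grep invocation would:

```c
#include <regex.h>
#include <stdio.h>

int main(void) {
    regex_t re;
    /* Example pattern: lines starting with "Error:" followed by a number. */
    if (regcomp(&re, "^[Ee]rror: [0-9]+", REG_EXTENDED) != 0) {
        fprintf(stderr, "failed to compile regex\n");
        return 1;
    }

    const char *lines[] = { "Error: 42 disk full", "all good here" };
    for (int i = 0; i < 2; i++) {
        /* regexec returns 0 on a match, REG_NOMATCH otherwise. */
        if (regexec(&re, lines[i], 0, NULL, 0) == 0)
            printf("match: %s\n", lines[i]);
    }

    regfree(&re);
    return 0;
}
```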

Tuesday, July 2, 2024

CST 334 - Week 2

This week we covered processes and how the CPU handles them. We reviewed what constitutes a process and talked about different policies that guide a scheduler in handling them. One of the most interesting parts of this to me was seeing how these policies reflect how a human would make similar decisions if it were up to them. For example, I'm a bartender, and when I read about Shortest Time-to-Completion First (STCF), all I could think about was how similar it is to a busy shift. I may have one ticket of drinks that consists of multi-step signature cocktails when a ticket with just a glass of wine prints up. Usually, I'll pause the cocktail ticket, pour the glass of wine, and then resume the original ticket. Making these connections made it much easier for me to grasp how the policies would work with regard to computers. Other scheduling policies covered were Round Robin, Shortest Job First, FIFO (First In, First Out), and LIFO (Last In, First Out), each of which has its pros and cons. For example, Round Robin has a nearly immediate response time, but can cause slow turnaround times when many processes are running at once.
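To see how much the policy choice matters for turnaround time, here's a tiny sketch of my own (job lengths made up, all jobs arriving at time 0) comparing FIFO with Shortest Job First when one long job happens to show up first:

```c
#include <stdio.h>

/* Turnaround = completion time - arrival time (arrival is 0 here). */
static double avg_turnaround(const int runs[], int n) {
    double t = 0, sum = 0;
    for (int i = 0; i < n; i++) {
        t += runs[i];   /* job i finishes at time t */
        sum += t;
    }
    return sum / n;
}

int main(void) {
    int fifo[] = {100, 10, 10};  /* long job first: FIFO runs it to completion */
    int sjf[]  = {10, 10, 100};  /* same jobs, shortest first                  */
    printf("FIFO avg turnaround: %.1f\n", avg_turnaround(fifo, 3));  /* 110.0 */
    printf("SJF  avg turnaround: %.1f\n", avg_turnaround(sjf, 3));   /*  50.0 */
    return 0;
}
```

The two short jobs stuck behind the long one are exactly the "glass of wine behind the cocktail ticket" situation, and letting them jump ahead cuts the average turnaround time by more than half.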