Posts

CST334 Week 8

     Over the past eight weeks I’ve developed a greater appreciation for how an operating system serves as the bridge between hardware and software. Before this class, I understood the OS as the thing that “runs the computer,” but now I see the complexity of what that really means. The OS isn’t just managing programs; it’s organizing processes, controlling access to memory, scheduling CPU time, and ensuring the different parts of the system work together without interfering with one another.

     One of the most engaging and challenging parts of the course was working with memory virtualization. Practicing how virtual addresses are translated to physical ones deepened my understanding of how the OS protects processes, manages limited space efficiently, and keeps programs running smoothly even when they are competing for resources. It was interesting to see how theoretical concepts like paging, base and bounds, and caching algorithms play out in practice. ...
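     As a quick illustration of the base-and-bounds translation mentioned above, here is a minimal sketch in C; the BASE and BOUNDS values and the translate() helper are made up purely for illustration and are not from the coursework.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical base/bounds register values, chosen only for illustration. */
    #define BASE   0x4000u   /* physical address where the process is loaded */
    #define BOUNDS 0x1000u   /* size of the process's address space          */

    /* Translate a virtual address the way base-and-bounds hardware would. */
    unsigned translate(unsigned vaddr) {
        if (vaddr >= BOUNDS) {               /* out of range: protection fault */
            fprintf(stderr, "fault: virtual address 0x%x out of bounds\n", vaddr);
            exit(1);
        }
        return vaddr + BASE;                 /* in range: relocate by the base */
    }

    int main(void) {
        printf("virtual 0x0200 -> physical 0x%x\n", translate(0x0200));
        translate(0x2000);                   /* beyond BOUNDS, triggers the fault */
        return 0;
    }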

CST334 Week 7

      This week I explored how computers handle input/output and manage storage at both the hardware and software levels. I started by learning about I/O devices and the role of the bus, which acts as the main pathway for transferring bytes between the CPU, memory, and peripherals. I learned about the three primary ways the CPU interacts with I/O: polling, where the CPU repeatedly checks whether a device is ready; interrupt-driven I/O, where the device signals the CPU when it’s ready; and direct memory access (DMA), where data is transferred directly between the device and memory without continuous CPU involvement.

     Next I learned more about how physical hard drives work. They store data on spinning metal disks called platters, which are read and written to by specialized heads. I learned how to calculate a drive’s average rotational delay, which is the time it takes for the correct sector to rotate under the read/write head, and how that, along with seek time ...
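     To make the rotational-delay calculation concrete, here is a small sketch in C; the spindle speeds are example numbers, not values from the assignment. The average rotational delay is half of one full rotation, and one rotation takes 60,000 ms divided by the RPM.

    #include <stdio.h>

    /* Average rotational delay is half a full rotation:
     *   T_rotation = 60,000 ms / RPM,  T_avg = T_rotation / 2 */
    double avg_rotational_delay_ms(double rpm) {
        return (60000.0 / rpm) / 2.0;
    }

    int main(void) {
        printf("7200 RPM:  %.2f ms\n", avg_rotational_delay_ms(7200.0));   /* about 4.17 ms */
        printf("10000 RPM: %.2f ms\n", avg_rotational_delay_ms(10000.0));  /* 3.00 ms       */
        return 0;
    }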

CST334 Week 6

       This week we continued concurrent programming and synchronization concepts and built on the ideas from last week. One of the first things I learned about was bounded-buffer coding, which is about how data can be shared and communicated between different parts of a system or between different threads. It’s used when dealing with message passing or structured data flow. Systems with modular or layered designs can make use of this to communicate with other parts of the system.

     After that, I learned about the concept of semaphores. Semaphores are a core component in concurrent programming, as they act as signaling mechanisms used to control access to shared resources. The main idea is that a semaphore is an integer variable protected by atomic operations like wait() and signal(). A semaphore can be used to limit the number of threads that access a resource, such as a database connection pool or a bounded buffer, to ensure that the variables remain safe ...
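     Here is a minimal bounded-buffer sketch in C using POSIX semaphores; it assumes sem_wait()/sem_post() in place of the generic wait()/signal() named above, and the buffer size and loop counts are arbitrary example values.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define MAX 8                      /* capacity of the bounded buffer */
    int buffer[MAX];
    int fill = 0, use = 0;

    sem_t empty, full, mutex;          /* counting + binary semaphores */

    void *producer(void *arg) {
        for (int i = 0; i < 32; i++) {
            sem_wait(&empty);          /* wait for a free slot */
            sem_wait(&mutex);          /* protect the buffer indices */
            buffer[fill] = i;
            fill = (fill + 1) % MAX;
            sem_post(&mutex);
            sem_post(&full);           /* signal that data is available */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < 32; i++) {
            sem_wait(&full);           /* wait for data */
            sem_wait(&mutex);
            int item = buffer[use];
            use = (use + 1) % MAX;
            sem_post(&mutex);
            sem_post(&empty);          /* signal that a slot freed up */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty, 0, MAX);      /* all MAX slots start empty */
        sem_init(&full, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }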

CST334 Week 5

     This week I focused on concurrency and how it allows a process to run multiple threads in parallel, so tasks can be handled in the background while the main application continues. While I’ve previously worked with multithreading in Java, learning how the operating system manages concurrency gave me a better understanding of how multithreading works on the CPU. I learned that threads can share certain resources, such as the virtual address space, which makes communication between them efficient but also introduces its own challenges.

     One of the key concepts I learned was nondeterminism in multithreaded programs. Even if two threads call the same function, the order in which memory is accessed and modified can change, which can lead to different outcomes every time the program runs. This unpredictability can cause bugs that are hard to reproduce and debug.

     To address this, we use thread synchronization APIs like locks, or mutexes...
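     A small sketch of the kind of nondeterminism and locking described above, using pthreads in C; the shared counter and loop counts are illustrative, not from the assignment. Without the mutex, the two threads can interleave the load/add/store behind counter++ and lose updates; with it, the result is always 2,000,000.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each thread increments the shared counter one million times. */
    void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);
            counter++;                       /* critical section */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* 2000000 with the lock held */
        return 0;
    }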

CST334 Week 4

     This week was all about diving deeper into how operating systems manage memory via paging, swapping, and caching.

     Paging is a method the OS uses to divide memory into evenly sized blocks. I learned how the system uses a Virtual Page Number (VPN) to keep track of which pages are being used, and how the offset helps determine the exact location within that page. One of the key points was how the virtual address gets translated into a physical address using a Page Table Entry (PTE), which acts like a map for locating memory.

     I learned how Translation Lookaside Buffers (TLBs) are used to improve performance during frequent memory lookups. These are special caches that store recently used memory translations, making memory access much faster when there's a 'hit'. I also learned how to calculate average memory access time by using the hit and miss rates of a TLB and factoring in the number of cycles each operation takes. ...
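     As a worked example of the average-memory-access-time calculation mentioned above, here is a small sketch in C; the 1-cycle hit cost and 100-cycle miss cost are assumed example numbers, not figures from the coursework.

    #include <stdio.h>

    /* Average memory access time from a TLB hit rate:
     *   AMAT = hit_rate * hit_cost + (1 - hit_rate) * miss_cost */
    double amat(double hit_rate, double hit_cost, double miss_cost) {
        return hit_rate * hit_cost + (1.0 - hit_rate) * miss_cost;
    }

    int main(void) {
        printf("95%% hit rate: %.2f cycles\n", amat(0.95, 1.0, 100.0));  /* 5.95 */
        printf("99%% hit rate: %.2f cycles\n", amat(0.99, 1.0, 100.0));  /* 1.99 */
        return 0;
    }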

CST334 Week 3

     This week I learned how memory is organized and managed within a computer system. One of the key concepts I learned about was the address space, which is divided into sections for the code, stack, and heap, with each section designed for a specific purpose: the code section stores the executable instructions, the stack manages function calls and local variables, and the heap is used for dynamically allocated memory at runtime. I found it interesting to see how these areas are laid out in memory and how their addresses are manipulated so programs can run efficiently.

     An important reason to structure memory this way is to virtualize it, which helps protect different parts of the program and ensures that memory is used effectively. Virtualization makes sure each process believes it has access to a contiguous block of memory, even though the actual physical memory might be shared with other processes. This design both improves security and...
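     A quick way to see the code/stack/heap layout described above is to print an address from each region; this is a minimal sketch in C, and the exact addresses will vary from run to run (especially with address-space randomization).

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int local = 0;                       /* lives on the stack */
        int *dynamic = malloc(sizeof(int));  /* lives on the heap  */

        printf("code  : %p\n", (void *)main);     /* code (text) section */
        printf("heap  : %p\n", (void *)dynamic);
        printf("stack : %p\n", (void *)&local);

        free(dynamic);
        return 0;
    }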

CST334 Week 2

    This week has been a great introduction to how CPU process selection works in operating systems. We covered a range of scheduling algorithms, including FIFO (First In, First Out), LIFO (Last In, First Out), Shortest Job First (SJF), and Round Robin (RR). I learned how each algorithm has its own strategy for picking which process should run next, and how these choices affect system performance and metrics like response time.

     I also learned to calculate performance metrics such as response time, turnaround time, average arrival time, and average turnaround time. For Round Robin scheduling in particular, I learned how to calculate how long a process will stay in the system based on the time slice and the number of other processes in the queue. These exercises helped me understand the mechanics of how schedulers work and why the design choices made are so important.

     One of the most interesting parts was comparing the pros and ...
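     To illustrate the Round Robin calculation described above, here is a small sketch in C that steps three jobs through a fixed time slice and reports their turnaround times; the job lengths and 10-unit slice are example values, not from the assignment.

    #include <stdio.h>

    #define N 3
    #define SLICE 10   /* example time slice (quantum) */

    int main(void) {
        /* Example jobs, all arriving at t = 0, with these run times. */
        int remaining[N] = {30, 20, 10};
        int finish[N]    = {0, 0, 0};
        int t = 0, done = 0;

        /* Round Robin: give each unfinished job up to one slice, repeat. */
        while (done < N) {
            for (int i = 0; i < N; i++) {
                if (remaining[i] == 0) continue;
                int run = remaining[i] < SLICE ? remaining[i] : SLICE;
                t += run;
                remaining[i] -= run;
                if (remaining[i] == 0) { finish[i] = t; done++; }
            }
        }

        /* Turnaround = completion time - arrival time (arrival is 0 here). */
        double total = 0;
        for (int i = 0; i < N; i++) {
            printf("job %d: turnaround %d\n", i, finish[i]);
            total += finish[i];
        }
        printf("average turnaround: %.1f\n", total / N);   /* 46.7 for these jobs */
        return 0;
    }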