Posts

CST370 Week 4

This week was primarily focused on preparing for the midterm exam, which meant reviewing all of the material we have covered so far. Going back through the concepts helped me identify areas where I could improve, particularly reading and writing pseudocode and analyzing the efficiency of algorithms based on that pseudocode. Recognizing this gave me direction on where to focus my studying. I especially enjoy exams that allow the use of a written study guide: the process of creating it helps cement the material in my memory, and I often find that by the time I finish preparing the guide, I don’t need to rely on it as much as I initially expected. While I was nervous, as I usually am before exams, the structured review and preparation made me feel more confident going into the midterm.

CST370 Week 3

This week we continued with brute force techniques, including an example involving string matching. This was a helpful illustration of how brute force algorithms work in practice, and it also showed that the worst case for an algorithm is not always as obvious as it may initially seem. Lectures then covered several classic problems, like the traveling salesman problem, the knapsack problem, and the assignment problem, which demonstrated how different types of problems often require different algorithmic approaches. We also went into more depth on the graph traversal techniques depth-first search (DFS) and breadth-first search (BFS). While these concepts were a review for me, the material was valuable because it introduced new perspectives on analyzing their time efficiency. Later in the lectures we covered divide-and-conquer algorithms and the master theorem, including how to identify the time complexity of recursive problems using this method. The week’s homewo...
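The brute force string matcher is simple enough to sketch in a few lines of Python. This is my own version rather than the lecture's exact pseudocode, but it shows why the worst case can be surprising: a text like "aaaaab" searched for "aab" fails almost every alignment on the *last* pattern character, giving O(nm) comparisons.

```python
def brute_force_match(text: str, pattern: str) -> int:
    """Return the index of the first occurrence of pattern in text, or -1.

    Worst case O(n * m): e.g. text = "aaa...ab", pattern = "aab", where
    nearly every alignment matches m-1 characters before failing.
    """
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):       # try every possible alignment
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:                   # all m pattern characters matched
            return i
    return -1

print(brute_force_match("aaaaab", "aab"))  # → 3
```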

CST370 Week 2

     This week we focused on understanding different ways to analyze algorithm efficiency using Big O, Big Theta, and Big Omega notation. We discussed how each notation describes algorithm behavior, and in particular when Big Theta is a more appropriate choice than Big O. This helped me understand why we might provide a tight bound rather than just an upper bound when analyzing an algorithm.

     Later we took a look at recursive algorithm analysis, including using backward substitution to determine the time complexity of recursive algorithms. We also learned about the brute force technique and how it solves problems by exhaustively checking all possible solutions.

     I enjoyed the homework this week, as it gave me a great opportunity to reevaluate my approach and practice writing more efficient solutions rather than settling for the first one that worked.
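As a small sketch of backward substitution (my own example, not one from the homework), consider a recursive function that does one unit of work per call. The derivation in the comments walks the recurrence back to the base case:

```python
def count_down(n: int) -> int:
    """Recursively counts n down to 0, doing one unit of work per call."""
    if n == 0:
        return 0
    return 1 + count_down(n - 1)

# Recurrence for the number of additions: T(n) = T(n-1) + 1, with T(0) = 0.
# Backward substitution:
#   T(n) = T(n-1) + 1
#        = T(n-2) + 2
#        = ...
#        = T(n-k) + k
# The pattern reaches the base case at k = n:
#   T(n) = T(0) + n = n,  so T(n) is in Theta(n).
print(count_down(5))  # → 5
```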

CST370 Week 1

     This week much of what we went over was review, but good review nonetheless. Lectures included discussing what an algorithm is, how to write pseudocode, identifying graphs and different graph types, and algorithm analysis such as complexity and comparing the efficiency of different approaches.

     Revisiting these ideas was a great refresher on the importance of using algorithms to your advantage. Alongside the lectures were fun word problems that encouraged us to think through solutions and view them through an algorithmic lens. These problems made the material more engaging and highlighted how algorithms can be used to solve complex problems efficiently in real-world scenarios.

CST334 Week 8

     Over the past eight weeks I’ve developed a greater appreciation for how an operating system serves as the bridge between hardware and software. Before this class, I understood the OS as the thing that “runs the computer,” but now I see the complexity of what that really means. The OS isn’t just managing programs; it’s organizing processes, controlling access to memory, scheduling CPU time, and ensuring the different parts of the system work together without interfering with one another.

     One of the most engaging and challenging parts of the course was working with memory virtualization. Practicing how virtual addresses are translated to physical ones deepened my understanding of how the OS protects processes, manages limited space efficiently, and keeps programs running smoothly even when they are competing for resources. It was interesting to see how theoretical concepts like paging, base and bounds, and caching algorithms play out in practice. ...
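Base and bounds is simple enough to capture in a toy sketch. This is a simplified illustration of the translation the hardware performs, not how any real MMU is implemented; the base address and segment size below are made-up example values:

```python
def translate(virtual_addr: int, base: int, bounds: int) -> int:
    """Base-and-bounds address translation.

    The hardware adds the base register to every virtual address, and
    raises a fault if the address falls outside the bounds (segment size).
    """
    if virtual_addr >= bounds:
        raise MemoryError(f"segmentation fault: {virtual_addr} >= {bounds}")
    return base + virtual_addr

# A process loaded at physical address 32768 with a 4 KB address space:
print(translate(100, base=32768, bounds=4096))  # → 32868
```

Addresses past the bounds fault instead of silently touching another process's memory, which is exactly the protection the entry above describes.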

CST334 Week 7

      This week I explored how computers handle input/output and manage storage at both the hardware and software levels. I started by learning about I/O devices and the role of the bus, which acts as the main pathway for transferring bytes between the CPU, memory, and peripherals. I learned that the three primary ways the CPU interacts with I/O are polling, where the CPU repeatedly checks if a device is ready; interrupt-driven I/O, where the device signals the CPU when it’s ready; and direct memory access (DMA), where data is transferred directly between the device and memory without continuous CPU involvement.

      Next I learned more about how physical hard drives work. They store data on spinning metal disks called platters, which are read and written by specialized heads. I learned how to calculate a drive’s average rotational delay, which is the time it takes for the correct sector to rotate under the read/write head, and how that, along with seek time ...
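The rotational delay calculation is easy to sketch: on average the target sector is half a rotation away, so the delay is half of one full rotation time. The 7200 RPM figure below is just a common example value, not one from the coursework:

```python
def avg_rotational_delay_ms(rpm: int) -> float:
    """Average rotational delay: half of one full rotation, in milliseconds."""
    ms_per_rotation = 60_000 / rpm  # 60,000 ms per minute / rotations per minute
    return ms_per_rotation / 2

# A 7200 RPM drive completes a rotation in ~8.33 ms,
# so the average rotational delay is about half that:
print(round(avg_rotational_delay_ms(7200), 2))  # → 4.17
```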

CST334 Week 6

       This week we continued with concurrent programming and synchronization concepts, building on the ideas from last week. One of the first things I learned about was bounded buffer coding, which covers how data can be shared and communicated between different parts of a system or between different threads. It’s used when dealing with message passing or structured data flow, and systems with modular or layered designs can make use of it to communicate between components.

    After that, I learned about the concept of semaphores. Semaphores are a core component of concurrent programming, as they act as signaling mechanisms used to control access to shared resources. The main idea is that a semaphore is an integer variable protected by atomic operations like wait() and signal(). A semaphore can be used to limit the number of threads that access a resource, such as a database connection pool or a bounded buffer, to ensure that shared variables remain safe ...
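A minimal sketch of the counting-semaphore idea, using Python's threading.Semaphore, whose acquire()/release() play the role of wait()/signal(). The three-slot "pool" and the worker counts are made-up example values:

```python
import threading
import time

# A counting semaphore initialized to 3: at most three threads may hold
# the resource at once. acquire() is wait(), release() is signal().
pool_slots = threading.Semaphore(3)
lock = threading.Lock()  # protects the two counters below
active = 0               # threads currently holding a slot
peak = 0                 # highest concurrency ever observed

def use_resource():
    global active, peak
    with pool_slots:             # wait(): blocks while three threads hold slots
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)         # pretend to use the shared resource
        with lock:
            active -= 1
                                 # signal() fires when the with-block exits

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak <= 3)  # → True: the semaphore capped concurrency at 3
```

Ten threads compete, but the semaphore guarantees no more than three are ever inside the critical region at the same time, which is exactly the connection-pool use case described above.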