CST334 Week 6
This week we continued with concurrent programming and synchronization, building on the ideas from last week. One of the first things I learned about was bounded buffer coding, which is about how data can be shared and communicated between different parts of a system or between different threads. It's used when dealing with message passing or structured data flow. Systems with modular or layered designs can make use of this to communicate between their parts.
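As a minimal sketch of that idea (my own illustration, not from class materials): a fixed-size queue acts as the bounded buffer, passing messages from a producer thread to a consumer thread, blocking the producer when the buffer is full and the consumer when it is empty.

```python
# Bounded-buffer message passing between two threads.
import queue
import threading

buf = queue.Queue(maxsize=2)   # the bounded buffer: holds at most 2 items
received = []

def producer():
    for msg in ["a", "b", "c", "d"]:
        buf.put(msg)           # blocks while the buffer is full

def consumer():
    for _ in range(4):
        received.append(buf.get())   # blocks while the buffer is empty

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

Because the queue is FIFO, the consumer receives the messages in the order they were produced, no matter how the threads interleave.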
After that, I learned about the concept of semaphores. Semaphores are a core component in concurrent programming, acting as signaling mechanisms that control access to shared resources. The main idea is that a semaphore is an integer variable protected by atomic operations like wait() and signal(). A semaphore can be used to limit the number of threads that access a resource, such as a database connection pool or a bounded buffer, so that shared state stays consistent. This prevents race conditions and ensures that resources are accessed in a controlled and predictable manner.
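A small sketch of the pool-limiting idea (the pool size of 3 and the counter names are my own, for illustration): a counting semaphore starts at 3, each thread's wait() decrements it on the way in, and its signal() on the way out lets a blocked thread proceed, so no more than 3 threads are ever inside at once.

```python
# Counting semaphore capping concurrent access to a shared "pool".
import threading

POOL_SIZE = 3
pool = threading.Semaphore(POOL_SIZE)   # internal counter starts at 3

active = 0   # threads currently inside the critical region
peak = 0     # highest value `active` ever reached
state_lock = threading.Lock()           # protects the two counters above

def use_connection():
    global active, peak
    with pool:                          # wait(): decrement, block at 0
        with state_lock:
            active += 1
            peak = max(peak, active)
        # ... work with the shared resource would happen here ...
        with state_lock:
            active -= 1
    # leaving the `with pool` block is the signal(): increment, wake a waiter

threads = [threading.Thread(target=use_connection) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Even with 10 threads contending, `peak` can never exceed `POOL_SIZE`, which is exactly the controlled access the semaphore guarantees.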
Finally the last major topic this week was about synchronization barriers. These are mechanisms used to coordinate a fixed number of threads, ensuring that all threads reach a certain point in execution before any of them can proceed. This is especially useful in parallel algorithms where threads have to wait for each other at a specific checkpoint to maintain data consistency or perform a phase-based computation.
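The phase-based idea above can be sketched with Python's built-in barrier (the two-phase setup and list names here are my own illustration): every thread records its phase-1 work, then blocks at the barrier, and only once all of them have arrived does any thread move on to phase 2.

```python
# A barrier separating two phases of a parallel computation.
import threading

N = 4
barrier = threading.Barrier(N)
phase1_done = []            # records which threads finished phase 1
phase1_count_at_phase2 = [] # what each thread saw when phase 2 began

def worker(i):
    phase1_done.append(i)   # phase 1 work
    barrier.wait()          # block here until all N threads arrive
    # phase 2: by now, every thread's phase-1 entry is visible
    phase1_count_at_phase2.append(len(phase1_done))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

No matter how the scheduler interleaves the threads, each one observes all N phase-1 entries when it starts phase 2, which is the data-consistency checkpoint the barrier provides.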
To build these synchronization barriers, the Anderson-Dahlin method is employed. This method uses a step-by-step design to build a barrier by starting from a regular class and equipping it with locks and condition variables to ensure it functions properly.
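Here is a hand-rolled barrier in the spirit of that step-by-step design, as I understand it: start from a plain class holding the shared state, add a lock for mutual exclusion, then add a condition variable for signaling. The class and member names are my own, and the generation counter is one common way to make the barrier reusable across rounds.

```python
# A reusable barrier built from a lock plus a condition variable.
import threading

class SimpleBarrier:
    def __init__(self, n):
        self.n = n                  # number of threads per round
        self.count = 0              # arrivals so far this round
        self.generation = 0         # round number, so the barrier is reusable
        self.lock = threading.Lock()
        self.all_arrived = threading.Condition(self.lock)

    def wait(self):
        with self.lock:             # all state changes happen under the lock
            my_gen = self.generation
            self.count += 1
            if self.count == self.n:
                # Last arrival: reset for the next round and wake everyone.
                self.count = 0
                self.generation += 1
                self.all_arrived.notify_all()
            else:
                # Re-check the condition in a loop to guard against
                # spurious wakeups.
                while my_gen == self.generation:
                    self.all_arrived.wait()
```

Using it looks just like the built-in barrier: each of the `n` threads calls `wait()`, and none of them returns until all `n` have arrived.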
Altogether, this week proved to be very interesting, as I learned a lot more about the complexity of synchronizing threads in a concurrent system and how to use these mechanisms safely.