A Deep Dive into Concurrency, Parallelism, Multiprocessing, and Distributed Systems - Part 2 a - Foundations of Concurrency.

Introduction

Concurrency is the ability of a system to manage multiple tasks or instruction sequences whose executions overlap in time, which can greatly enhance the system’s efficiency and responsiveness. For example, a web server can handle multiple client requests concurrently, and a scientific simulation can use parallel processing to obtain results faster.
But why is concurrency important in today’s software world? Well, we should thank the millions of developers who have worked hard to make concurrency transparent and seamless for us. When we use a smartphone, a laptop, or a server, we don’t have to worry about how concurrency works under the hood. We can write code in languages like Python, or even use GPT to write these blogs, without ever considering the hard work the operating system does. Almost every device we talk about today supports concurrency natively, from a Raspberry Pi to a Dell server. However, understanding how concurrency works can help us improve the software we develop and optimize its performance.
Concurrency can be achieved by using processes or threads, which are both units of execution that run on a CPU. Processes are independent instances of a program that have their own memory and resources, while threads are lightweight units that belong to a process and share its resources. Processes and threads communicate with each other through shared memory or message passing, which requires synchronization mechanisms to avoid data corruption or inconsistency.
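A small sketch can make the memory distinction concrete. In the hypothetical example below (names like `bump` and `demo` are ours, not a standard API), the same mutation is attempted from a thread and from a child process: the thread shares the parent’s memory, so its write is visible, while the process works on its own copy, so its write is not.

```python
# Illustrative sketch: threads share memory, processes do not.
# Assumes CPython's standard threading and multiprocessing modules.
import threading
import multiprocessing

def bump(shared):
    shared["value"] += 1

def demo():
    counter = {"value": 0}

    # A thread runs inside the parent's address space: its write is visible.
    t = threading.Thread(target=bump, args=(counter,))
    t.start(); t.join()
    after_thread = counter["value"]

    # A process gets its own copy of the data: its write stays in the child.
    p = multiprocessing.Process(target=bump, args=(counter,))
    p.start(); p.join()
    after_process = counter["value"]

    return after_thread, after_process

if __name__ == "__main__":
    print(demo())  # (1, 1): the thread's write was seen, the process's was not
```

This is exactly why processes typically communicate through message passing (pipes, queues) rather than by mutating shared variables, while threads can share memory directly but then need synchronization.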
Developing a state-of-the-art concurrent application can pose many challenges, such as coordinating tasks, managing shared resources, and preventing issues like race conditions or deadlocks. These are situations where multiple processes or threads interfere with each other or wait for each other indefinitely, causing errors or delays. Therefore, a deep understanding of concurrency and parallelism is essential for developers and system architects to design robust, efficient, and responsive software.
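To make the race-condition problem concrete, here is a minimal sketch of the standard fix: two threads performing many read-modify-write updates on a shared counter, with each update guarded by a `threading.Lock` so the increments cannot interleave and get lost. The function names are illustrative, not from any particular library.

```python
# Two threads increment a shared counter; the lock makes each
# read-modify-write atomic, so no updates are lost.
import threading

N = 100_000
total = 0
lock = threading.Lock()

def add_safely():
    global total
    for _ in range(N):
        with lock:          # only one thread may update at a time
            total += 1

threads = [threading.Thread(target=add_safely) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total)  # 200000: every increment from both threads is counted
```

Without the lock, the same program may print a smaller number on some runs, because `total += 1` is a read, an add, and a write that two threads can interleave.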

Important things to know for developing concurrent applications

Along those lines, we’ve identified a few important fundamentals that are helpful in understanding how concurrency is achieved in real-world operating systems. The flowchart below illustrates some of them and their relationships, demonstrating how these components interact in a concurrent system and how they affect the execution and management of processes and threads.
Let’s look at each of these components:
  • Processes and threads, as introduced above, are the units of execution that run on the CPU. A process is an independent instance of a program with its own memory and resources; a thread is a lightweight unit that belongs to a process and shares its resources. They communicate through shared memory or message passing, which requires synchronization mechanisms to avoid data corruption or inconsistency.
  • Synchronization mechanisms are methods that ensure that processes or threads access or modify shared memory or resources in a coordinated and consistent way. Synchronization mechanisms can be hardware-based or software-based, such as locks, semaphores, mutexes, condition variables, monitors, atomic operations, or barriers.
  • Memory and resources are the data or devices that processes or threads need to perform their tasks. Memory can be divided into different segments or regions, such as code, data, stack, heap, or kernel. Resources can be physical or logical, such as files, sockets, semaphores, or locks. Memory and resources can be shared or exclusive among processes or threads.
  • Scheduling algorithms are methods that determine which process or thread should run next on the CPU based on their priority, resource requirements, and fairness. Scheduling algorithms can use interrupts or timers to preempt or switch between processes or threads according to their quantum or deadline.
  • Interrupts are signals that cause the CPU to suspend the current task and jump to a special routine called an interrupt handler that deals with the interrupt. Interrupts can be generated by hardware devices or software programs when they need urgent attention from the CPU, such as input/output operations, network packets, timers, or system calls.
  • Bus architecture is the physical or logical connection that transfers data, addresses, and control signals between the CPU and other hardware components, such as memory, devices, or other CPUs. Bus architecture enables the CPU to access memory, devices, or other CPUs through different types of buses, such as address bus, data bus, or control bus.
  • Clocks and timers are devices that generate periodic signals or interrupts that synchronize the CPU and other components. They also help the operating system to measure and control the timing of its operations and interactions, such as scheduling processes or threads, handling alarms or timeouts, or doing profiling or statistics.
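To tie a few of the items above together, here is a small producer/consumer sketch built on a condition variable, one of the software synchronization mechanisms listed. All names (`producer`, `consumer`, the `DONE` sentinel) are our own illustrative choices.

```python
# Producer/consumer coordination with threading.Condition:
# the consumer sleeps until the producer signals that data is available.
import threading
from collections import deque

buffer = deque()
cond = threading.Condition()
DONE = object()          # sentinel telling the consumer to stop
consumed = []

def producer():
    for item in range(5):
        with cond:
            buffer.append(item)
            cond.notify()        # wake a waiting consumer
    with cond:
        buffer.append(DONE)
        cond.notify()

def consumer():
    while True:
        with cond:
            while not buffer:    # re-check after every wakeup
                cond.wait()
            item = buffer.popleft()
        if item is DONE:
            break
        consumed.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)  # [0, 1, 2, 3, 4]
```

Note the `while not buffer` loop around `cond.wait()`: condition variables can wake spuriously, so the predicate is always re-checked under the lock.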
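The scheduling idea can also be sketched in a few lines. This is a toy round-robin simulation, not how a real kernel scheduler is written: each "process" runs for at most one quantum before being preempted and sent to the back of the ready queue.

```python
# Toy round-robin scheduling simulation: each process gets a fixed
# quantum, then is preempted and requeued until its burst is finished.
from collections import deque

def round_robin(burst_times, quantum):
    """Return the (pid, time_slice) order in which slices execute."""
    ready = deque(enumerate(burst_times))   # (pid, remaining time)
    timeline = []
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)
        timeline.append((pid, run))
        if remaining > run:
            ready.append((pid, remaining - run))  # preempt: back of the queue
    return timeline

print(round_robin([5, 3, 1], quantum=2))
# [(0, 2), (1, 2), (2, 1), (0, 2), (1, 1), (0, 1)]
```

Real schedulers add priorities, interactivity heuristics, and per-CPU queues on top of this basic idea, but the preempt-and-requeue loop is the essence of time sharing.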
With that said, if our end goal is to get as far as distributed systems, we cannot cover that breadth while going deep into every one of the topics above. However, we believe we are extraordinary, so we will narrow our scope, go down the rabbit hole, and come out glorious. From the topics above, we have chosen to elaborate on processes and threads, system interrupts, and synchronization mechanisms. Let us jump in!
And one more thing before we do that -
The diagram below provides an overview of the key components (as mentioned above) that help achieve concurrency, highlighting their relationships and how they work together to ensure the efficient execution of software applications. It emphasizes the significance of CPU scheduling, interrupt timing, external events, and data transfer within a typical computing environment.
Let’s start from scratch!
In the next articles, we will look at some of these components in more detail.
