Concurrency in Web Development

It is said that being able to explain a complex concept simply is a sign that one truly understands it. Today, I will delve into one such complex topic: concurrency.




What is concurrency? Well, simply explained, it is the act of performing multiple actions at the same time; checking out your social media feed while perched on your toilet seat can be considered an instance of concurrency.

In programming terms, concurrency occurs when multiple sequences of operations run in overlapping periods of time. 


How does one go about achieving concurrency? There are three architectures that allow concurrency to be achieved. The first is multi-threading. A thread is a unit of execution within a programme, so multi-threading allows multiple units of execution to run at once and thus achieves concurrency. The second is event-driven architecture. This involves a single thread that places events to be executed in an event queue. The events are then executed sequentially by an event loop. Concurrency here is achieved through asynchronous event handling, wherein callbacks and blocking operations are handled in periods that overlap with the processing of other events in the queue; more on this later.

A third way to achieve concurrency is through a hybrid of the event-driven architecture and the multi-threaded approach.


What does all this talk about concurrency have to do with web development? We will find the answer by examining how Node.js implements concurrency. Node.js is a runtime environment for JavaScript, the programming language used to make websites interactive.


Node.js employs the famed event loop. The Node.js environment is single-threaded, meaning that programme execution is done via a single thread rather than multiple threads. Asynchronous execution in the Node environment is used to achieve concurrency. Asynchronous execution should not be confused with parallel execution. The two diagrams below illustrate the difference: asynchronous execution involves 'jumping' between processes, switching context between them on a single thread. Parallel execution, on the other hand, involves running processes simultaneously, with the expectation that they will theoretically start and finish at the same time.


Asynchronous Execution



Parallel Execution
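To make the single-threaded 'jumping' concrete, here is a minimal sketch; the task names are illustrative.

```javascript
// A single thread interleaves tasks rather than running them in
// parallel: timer callbacks wait until the current stack is empty.
const order = [];

setTimeout(() => order.push('task A'), 0);
setTimeout(() => order.push('task B'), 0);
order.push('synchronous work'); // runs first, despite being written last

setTimeout(() => {
  // By now both timers have fired, in the order they were queued.
  console.log(order.join(' -> ')); // synchronous work -> task A -> task B
}, 10);
```

Nothing here runs simultaneously; the thread simply switches between pieces of work as each becomes ready.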


The famed event loop consists of a stack, a queue and a heap. In the stack, function calls are stacked on top of each other and await execution. The queue consists of a sequential collection of 'messages': events that have been triggered within the runtime environment. The heap is simply a store for objects. When function calls are made, they are placed on the stack and executed. When the stack empties, the first message in the queue is handled and the function calls associated with it are placed on the stack to be executed. After the stack is once again empty, the next message in the queue is handled, and so on. This allows JavaScript's single thread to run multiple executions. Messages can also arise from asynchronous function calls, for example, the fetch web API.
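This stack-then-queue ordering can be demonstrated with a deliberately long synchronous call; the 50 ms busy-loop below is purely illustrative.

```javascript
// A queued message cannot be handled while the call stack is busy.
const events = [];
setTimeout(() => events.push('queued message handled'), 0);

const start = Date.now();
while (Date.now() - start < 50) {
  // Busy-wait: simulates a long-running synchronous function call
  // sitting on the stack.
}
events.push('stack emptied');
// Only once the stack is empty can the event loop handle the message.
```

Even though the timer was due after 0 ms, its callback had to wait roughly 50 ms for the stack to clear, which is exactly why long synchronous work is discouraged in Node.js.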


When a fetch request is made, JavaScript delegates the handling of the request to the web API and then proceeds to run the next function call on the stack or handle the next message in the queue.

When the fetch request is resolved, a message is added to the queue for handling once the call stack is empty. Thus, the fetch request has been handled asynchronously, or in a non-blocking manner. While the fetch request was being handled, other processes were not blocked from running: the call stack was still emptied and messages handled. Thus, concurrency was achieved.
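This non-blocking behaviour can be simulated without touching the network; fakeFetch below is an illustrative stand-in for the real fetch API that resolves after a short delay.

```javascript
// Simulated fetch: hands the work off and returns a promise immediately.
const trace = [];

function fakeFetch(url) {
  // Stand-in for the real fetch API (illustrative name and delay).
  return new Promise((resolve) =>
    setTimeout(() => resolve({ url, status: 200 }), 5)
  );
}

trace.push('request sent');
fakeFetch('/api/data').then((res) => trace.push(`response ${res.status}`));
trace.push('other work continues'); // runs before the response arrives
```

The 'other work continues' entry lands in the trace before the response does, mirroring how a real fetch leaves the thread free while the request is in flight.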



Concurrency is also important in database management and manipulation. Concurrency in the realm of databases simply refers to the ability of multiple users to perform operations on the same database contemporaneously. Concurrency control ensures database consistency, which mandates that all users have a consistent view of data that has been manipulated, whether through their own transactions or those of others.


As you can well imagine, with the simultaneous access of a database by multiple users, challenges arise. Three of these challenges are dirty reads, non-repeatable reads and phantom reads. Dirty reads occur when data that has been modified by an uncommitted transaction is read. Non-repeatable reads arise when a transaction re-reads previously read data only to find that the data has been modified or deleted by another committed transaction. Finally, phantom reads occur when a transaction runs a query with a condition, and running the same query again returns a different set of rows because another committed transaction has inserted or deleted rows matching that condition.
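A non-repeatable read can be sketched with two interleaved 'transactions' against an in-memory record; this is an illustration of the anomaly, not a real database.

```javascript
// Transaction A reads the same row twice; transaction B commits an
// update in between, so A's second read differs from its first.
const row = { balance: 100 };
const reads = [];

async function transactionA() {
  reads.push(row.balance);                     // first read: 100
  await new Promise((r) => setTimeout(r, 10)); // A pauses mid-transaction
  reads.push(row.balance);                     // second read: now 150
}

async function transactionB() {
  await new Promise((r) => setTimeout(r, 5));
  row.balance = 150;                           // B commits an update
}

transactionA();
transactionB();
```

Transaction A ends up holding two different values for the same row within one logical unit of work, which is precisely what isolation mechanisms are designed to prevent.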


Database design deals with these challenges in several ways. For example, Oracle implements transaction-level and statement-level read consistency. What this means is that a query only returns data as it existed at the point in time the query (or, for transaction-level consistency, the transaction) began, and thus there is no interference from other operations that commit while the query is being processed. This guarantees data consistency.
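The idea behind read consistency can be sketched with a point-in-time snapshot; this is a simplification of what Oracle actually does internally.

```javascript
// A query works from a snapshot taken when it starts, so a concurrent
// write committed afterwards does not leak into the query's result.
const row = { value: 1 };

const snapshot = { ...row }; // point-in-time copy taken at query start
row.value = 2;               // concurrent write after the query began

console.log(snapshot.value); // the query still sees 1
```

The current state of the row has moved on, but the in-flight query keeps seeing the consistent snapshot it started with.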


MongoDB uses multi-granularity locking to solve the challenges that arise with concurrency. Locks can be taken at the global, database or collection level, and the WiredTiger storage engine additionally allows document-level concurrency; operations that would conflict at a given level wait their turn, which helps prevent dirty, phantom and non-repeatable reads. For efficiency, however, MongoDB may interleave or reorder operations where doing so does not affect correctness.


