Synchronous & Asynchronous and Multithreaded Programming - 4 — Non-Blocking Synchronization & Tools

Alperen Öz
11 min read · May 26, 2024


Race Condition & Synchronization Techniques -3

Non-Blocking Synchronization & Tools


Non-Blocking Synchronization

In parallel programming and multi-threaded application development, synchronization techniques are crucial. These techniques are divided into blocking and non-blocking synchronization, each offering different approaches for managing and synchronizing thread access to resources.

These synchronization types can have varying impacts on thread interaction and program flow.

Blocking Synchronization

Blocking synchronization means that threads may wait, or be blocked, until a resource becomes available. This is achieved using mechanisms like critical sections, mutexes, and semaphores.

Advantages:

  • Safety: Mechanisms like critical sections ensure data integrity and consistency by preventing multiple threads from entering a critical section simultaneously. This is crucial for applications handling financial transactions.
  • Simplicity: Mechanisms like mutexes and semaphores are relatively simple and easy to understand, making them easier to learn and integrate into your code.
  • Predictability: Blocking synchronization makes it easier to predict how long a thread will wait to access a resource, aiding in better planning of program flow and performance.

Disadvantages:

  • Performance: When a thread is blocked waiting for a resource, it remains idle, which can degrade overall program performance, especially in high-contention scenarios.
  • Deadlocks: In certain situations, two or more threads may wait for each other indefinitely, leading to deadlocks, which can cause the program to crash or behave unexpectedly.
  • Scalability: Blocking synchronization mechanisms may struggle to scale as the number of threads increases, posing issues for large and complex multi-threaded applications.
  • Sleep and Wake Cost: Thread blocking incurs additional costs at the OS level for managing sleep and wake operations, leading to extra resource usage and overhead.
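
For contrast, here is a minimal blocking example using C#'s lock statement (which wraps Monitor.Enter/Exit); the class and method names are illustrative, not from the original article:

```csharp
using System;
using System.Threading;

public static class BlockingCounter
{
    private static readonly object _gate = new object();

    // Two threads increment a shared counter; the lock serializes access.
    public static int Run()
    {
        int count = 0;
        void Work()
        {
            for (int i = 0; i < 100_000; i++)
                lock (_gate)      // the thread blocks here until the lock is free
                    count++;
        }
        Thread t1 = new(Work), t2 = new(Work);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        return count;             // always 200000: no increment is lost
    }

    public static void Main() => Console.WriteLine(Run());
}
```

Both threads are forced to take turns, so no increment is ever lost, but each may be suspended by the OS while the other holds the lock.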

Non-Blocking Synchronization

Non-blocking synchronization means that threads do not wait or get blocked when accessing a resource. Instead, they perform an alternative operation or retry the action.

Advantages:

  • Performance: Non-blocking mechanisms can significantly improve program performance by eliminating waiting threads, making them ideal for high-performance applications.
  • Liveness: Because threads are never suspended, the application stays responsive and some thread can always make progress.
  • Lower deadlock risk: Non-blocking synchronization is far less prone to deadlocks, ensuring more reliable and predictable program behavior.
  • Scalability: These mechanisms scale better with an increasing number of threads, making them suitable for large, complex applications.

Disadvantages:

  • Implementation Complexity: Non-blocking synchronization can be more complex and harder to understand, posing challenges in learning and correctly integrating these mechanisms into your code.
  • Safety: Some non-blocking mechanisms may lead to data integrity or consistency issues in certain situations, which can be problematic for applications handling sensitive data.
  • Predictability: Predicting how long a thread will take to access a resource is harder with non-blocking synchronization, complicating the planning of program flow and performance.
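
The "retry or do something else instead of waiting" idea can be sketched with Monitor.TryEnter, which returns immediately rather than blocking (an illustrative sketch, not from the original article; names are made up):

```csharp
using System;
using System.Threading;

public static class NonBlockingAttempt
{
    private static readonly object _gate = new object();

    // Tries to take the lock without waiting; falls back to other work on failure.
    public static string TryWork()
    {
        if (Monitor.TryEnter(_gate))       // returns false immediately if the lock is held
        {
            try { return "did the critical work"; }
            finally { Monitor.Exit(_gate); }
        }
        return "did alternative work";     // no blocking occurred
    }

    public static void Main() => Console.WriteLine(TryWork());
}
```

The caller never sleeps: it either acquires the lock instantly or takes the alternative path, which is exactly the behavior contrasted with blocking synchronization above.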

Data Register

  • A data register is a type of register within a microprocessor used to temporarily store data.
  • To facilitate faster and more practical access to variables and values created in a program, the microprocessor stores these variables along with their values in limited-number data registers within its own memory.

This function of the data register can be thought of as a form of caching.

Some key features of data registers:
  • Fast access: Data registers are much faster to access compared to other types of memory in the CPU. This allows the processor to execute instructions more quickly and minimize delays.
  • Small size: Data registers are small memory locations that can hold only a few bytes of data, helping the CPU optimize memory usage and save power.
  • Specialized: Each data register is designed to store a specific type of data. For example, some data registers are general-purpose and can hold any type of data, while others are used only for storing numbers or characters.

BUT

  • Like everything, there are some risks when using data registers.
  • Suppose that at time T, a variable and its value are held both in the microprocessor’s main memory and in a data register.
  • A change made to the variable at a later time T+1 is written directly to main memory, but may not be immediately reflected in the data register.
  • Thus, at a still later time T+2, reading the variable from the data register may return a stale value.
  • Programmers can choose to take this risk, but there are tools available to prevent such inconsistencies.
  • This risk is very low in synchronous or single-threaded operations. Inconsistency is more likely in asynchronous and multi-threaded programs.

VOLATILE Keyword

  • The volatile keyword helps prevent the inconsistency issue in the data register mentioned above and ensures correct data access.
  • A variable marked as volatile in a program ensures that it is accessed directly from memory without optimization by the data register.
  • Thus, all operations on variables marked as volatile will be carried out directly in memory without using the data register.
  • This way, in both asynchronous and multi-threaded operations, there will be no memory inconsistency concerns, and operations can continue smoothly.
  • In summary, the volatile keyword is a tool that disables compiler optimizations for read and write operations on a variable’s value, helping to prevent inconsistencies.

In fact, this is also an application of a non-blocking synchronization technique.

Some key advantages of using the volatile keyword:

  • Ensures data consistency: The volatile keyword ensures that the value of a variable is always the most recent, even when accessed by multiple threads. This helps avoid data inconsistencies and errors.
  • Controls hardware access: The volatile keyword can indicate that a variable is directly accessible by hardware, useful for reading data from sensors or controlling actuators.
  • Prevents memory optimization: Compilers often optimize memory to enhance program performance, which can sometimes cause data inconsistencies. The volatile keyword prevents the compiler from optimizing a specific variable.

However, there are some disadvantages to using the volatile keyword:

  • Performance decrease: The volatile keyword can slightly reduce program performance by preventing the compiler from optimizing a specific variable.
  • Code complexity: The volatile keyword can make your code more complex and harder to read.
  • Unnecessary usage: It’s important to use the volatile keyword only when necessary. Otherwise, it can needlessly reduce your program’s performance and increase its complexity.
  • Cannot resolve all issues: It is insufficient for dealing with situations like deadlock and race conditions, which require more sophisticated synchronization mechanisms as discussed under blocking synchronization.

When to use it:

  • Single-writer scenarios: Suitable when a variable is written by only one thread and read by one or more others.
  • Flag variables: Useful for flag variables that indicate the working status of a thread to another thread, as these values should be taken directly from memory.
  • When performance is not critical: Suitable when giving up the microprocessor’s register optimization for the variable is acceptable.
  • In simple synchronization cases: Can be used in simple synchronization scenarios that do not require advanced synchronization.
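
The flag-variable case above can be sketched as follows; the stop flag is marked volatile so the worker thread is guaranteed to observe the change (a minimal sketch with illustrative names):

```csharp
using System;
using System.Threading;

public static class VolatileFlagDemo
{
    // Without 'volatile', the JIT could cache _stop in a register and the
    // worker loop might never observe the change made by the main thread.
    private static volatile bool _stop;

    public static bool Run()
    {
        Thread worker = new(() =>
        {
            while (!_stop) { }    // re-reads _stop from memory on every iteration
        });
        worker.Start();
        Thread.Sleep(100);        // let the worker spin briefly
        _stop = true;             // visible to the worker because the field is volatile
        return worker.Join(1000); // true if the worker exited within the timeout
    }

    public static void Main() => Console.WriteLine(Run());
}
```

If the volatile modifier is removed, the same program may hang in a release build, because the read of _stop can be hoisted out of the loop.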

When not to use it:

  • High-level synchronization scenarios: Not suitable for complex thread operations, as it is insufficient for comprehensive synchronization.
  • Thread coordination scenarios: Inadequate for scenarios requiring cooperation and coordination between threads. In such cases, synchronization mechanisms should be used.
internal class Program
{
    private static void Main(string[] args)
    {
        Run();
    }

    private volatile static int i; // volatile field: always read from and written to memory

    private static void Run()
    {
        Thread thread1 = new(() =>
        {
            while (true)
                i++;
        });
        Thread thread2 = new(() =>
        {
            while (true)
                Console.WriteLine(i);
        });
        Thread thread3 = new(() =>
        {
            while (true)
                i--;
        });
        thread1.Start();
        thread2.Start();
        thread3.Start();
    }
}
  • As seen in the above example, the i variable is defined with the volatile keyword, ensuring its value is always fetched from memory, preventing potential data register inconsistencies.

If you want to exhibit volatile behavior in a variable not marked with the volatile keyword, you can use the Volatile.Write and Volatile.Read methods in the System.Threading namespace in C#.

This approach allows for volatile behavior at critical points, ensuring data consistency while benefiting from data register optimization where the behavior is not needed.

internal class Program
{
    private static void Main(string[] args)
    {
        Run();
    }

    static int i;

    private static void Run()
    {
        Thread thread1 = new(() =>
        {
            while (true)
                // Volatile read + write (note: this read-modify-write is still not atomic)
                Volatile.Write(ref i, Volatile.Read(ref i) + 1);
        });
        Thread thread2 = new(() =>
        {
            while (true)
                Console.WriteLine(Volatile.Read(ref i));
        });
        Thread thread3 = new(() =>
        {
            while (true)
                Volatile.Write(ref i, Volatile.Read(ref i) - 1);
        });
        thread1.Start();
        thread2.Start();
        thread3.Start();
    }
}
  • Above, you can see synchronization performed using the Volatile class methods rather than the volatile keyword.

SpinLock

  • SpinLock is a synchronization technique used in C# for multithreaded programming.
  • It operates on the same principle as the spinning technique previously discussed.
  • SpinLock is faster than blocking synchronization because it keeps the thread in a busy loop (spin) until the lock is available, instead of putting the thread to sleep.
  • Unlike blocking, which involves the operating system, spinning does not involve any OS interaction.
  • This makes SpinLock more performant but at the cost of higher CPU usage.
  • SpinLock is effective for short waits, as threads spin rather than sleep, but is inefficient for long waits due to continuous CPU usage.

In summary, SpinLock is advantageous for short-term locks as threads spin instead of sleeping. However, it is inefficient for long-term locks as continuous spinning consumes CPU resources, leading to a loss in efficiency.

int value = 0;
SpinLock spinLock = new();

Thread thread1 = new(() =>
{
    bool lockTaken = false;
    try
    {
        spinLock.Enter(ref lockTaken);
        if (lockTaken)
            for (int i = 0; i < 999; i++)
                Console.WriteLine($"Thread1 : {++value}");
    }
    finally
    {
        if (lockTaken)       // exit only if the lock was actually acquired
            spinLock.Exit();
    }
});
Thread thread2 = new(() =>
{
    bool lockTaken = false;
    try
    {
        spinLock.Enter(ref lockTaken);
        if (lockTaken)
            for (int i = 0; i < 999; i++)
                Console.WriteLine($"Thread2 : {++value}");
    }
    finally
    {
        if (lockTaken)
            spinLock.Exit();
    }
});

thread1.Start();
thread2.Start();
As with other synchronization techniques, it is crucial to wrap spinning code in try/finally blocks so that an error thrown while the lock is held does not leave it unreleased, which would deadlock the program. The finally block should always release the lock, but only if it was actually taken. This approach helps maintain the stability and reliability of the multithreaded application by ensuring proper resource management and error handling.

SpinWait

  • SpinWait is essentially a synchronization tool, akin to others in its category.
  • Structurally designed for use in very short wait scenarios, it is particularly ideal for fulfilling low-cost synchronization needs.
  • With a lighter footprint compared to the SpinLock structure, it’s a suitable option for very short wait scenarios. In essence, it’s a somewhat more optimized version of SpinLock.
bool waitMod = false, condition = false;

Thread thread1 = new(() => // without SpinWait
{
    while (true)
    {
        if (waitMod)
            continue;

        if (!condition)
            continue;

        Console.WriteLine("Thread1 working...");
    }
});

Thread thread2 = new(() => // with SpinWait
{
    while (true)
    {
        // Spin until the same condition thread1 checks manually is met.
        SpinWait.SpinUntil(() => !waitMod && condition);

        Console.WriteLine("Thread2 working...");
    }
});

thread1.Start();
thread2.Start();
  • As seen in the example above, the thread will be kept spinning in the SpinWait block until the relevant conditions are met.

Interlocked Class

Interlocked Class: What is it?

  • The Interlocked class is utilized in C# to ensure safe access to shared variables while demonstrating a multi-threading approach.
  • It operates by performing atomic operations, preventing interference from other threads while one thread reads from or writes to a variable.
  • Thus, it prevents data-integrity problems that can arise from multi-threaded and asynchronous approaches.

Atomic Operations:

  • Implies that an operation is indivisible from start to finish.
  • It prevents interference from another thread while a read or write operation is being performed, thereby preventing issues like race conditions.

Differences from the volatile Keyword:

  • Volatile is a keyword used only for simple read and write operations. The Interlocked class, on the other hand, provides more advanced access management.
  • The Interlocked class performs specific operations atomically, whereas Volatile only ensures the variable’s value in memory remains current.
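
The difference can be demonstrated directly: a volatile field keeps reads current, but _v++ is still a non-atomic read-modify-write, so concurrent increments can be lost, whereas Interlocked.Increment never loses one (a sketch; the field names are illustrative):

```csharp
using System;
using System.Threading;

public static class VolatileVsInterlocked
{
    private static volatile int _v;  // visibility only: _v++ is still read-modify-write
    private static int _a;           // updated atomically via Interlocked

    public static (int volatileCount, int atomicCount) Run()
    {
        void Work()
        {
            for (int i = 0; i < 100_000; i++)
            {
                _v++;                          // NOT atomic: two threads can overwrite each other
                Interlocked.Increment(ref _a); // atomic: never loses an increment
            }
        }
        Thread t1 = new(Work), t2 = new(Work);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        return (_v, _a);
    }

    public static void Main()
    {
        var (v, a) = Run();
        Console.WriteLine($"volatile: {v}, interlocked: {a}");
    }
}
```

The interlocked total is always exactly 200000; the volatile total is typically lower because some increments are lost in the race.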

Actions of Interlocked class:

  1. Increment:
  • Atomically increments the value of a variable by 1.
int i = 0;
Interlocked.Increment(ref i);

2. Decrement:

  • Atomically decrements the value of a variable by 1.
int i = 0;
Interlocked.Decrement(ref i);

3. Add:

  • Atomically adds the specified value to a variable.
int i = 0;
Interlocked.Add(ref i, 15);
Interlocked.Add(ref i, 5);

4. Exchange:

  • Atomically replaces the value of a variable with the specified value and returns the old value.

int i = 0;
int oldValue = Interlocked.Exchange(ref i, 15);

5. CompareExchange:

  • Atomically replaces the value of a variable with the specified value, but only if the variable currently equals the comparand (the third parameter); the original value is returned either way. In this example, i is replaced with 25 only if it is 0.
int i = 0;
Interlocked.CompareExchange(ref i, 25, 0);
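
CompareExchange is the building block of lock-free retry loops: read the current value, compute a new one, and commit only if the variable has not changed in the meantime; otherwise retry. A common sketch of this pattern (names are illustrative):

```csharp
using System;
using System.Threading;

public static class CasRetryDemo
{
    private static int _total;

    // Lock-free add using a compare-and-swap retry loop.
    public static void AtomicAdd(int amount)
    {
        int current, desired;
        do
        {
            current = Volatile.Read(ref _total); // snapshot the current value
            desired = current + amount;          // compute the new value
            // Commit only if _total still equals 'current'; otherwise retry.
        } while (Interlocked.CompareExchange(ref _total, desired, current) != current);
    }

    public static int Run()
    {
        Thread t1 = new(() => { for (int i = 0; i < 100_000; i++) AtomicAdd(1); });
        Thread t2 = new(() => { for (int i = 0; i < 100_000; i++) AtomicAdd(1); });
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        return _total;
    }

    public static void Main() => Console.WriteLine(Run());
}
```

No thread is ever suspended: a thread that loses the race simply loops and tries again, which is exactly the non-blocking behavior described at the start of this article.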

Particularly Common Use Cases:

  • If you’re working with variables of simple types and need to use them for synchronization purposes, the Interlocked class is ideal.
  • Especially when attempting to solve problems with locks, it provides significant benefits in terms of speed and performance.

MemoryBarrier Method

MemoryBarrier Method:

  • It is used in multi-threaded code to regulate memory access and enforce a well-defined ordering.
  • It controls the order of operations on memory and prevents unexpected reordering.

Memory Ordering:

  • Memory ordering refers to when a thread’s memory changes become visible to other threads.
  • Used to address unexpected reordering caused by compiler and CPU optimizations.
  • Because the order in which threads’ memory operations become visible can vary, memory inconsistencies may occur.

Usage of MemoryBarrier Method:

  • By calling it immediately after a write or immediately before a read, visibility and ordering among threads can be ensured.
  • Otherwise, due to compiler and CPU optimizations, a stale value of the variable may be read, leading to unexpected behavior.

Full Fence and Half Fence:

  1. Full Fence:
  • Achieved with the Thread.MemoryBarrier() method.
  • It prevents the CPU and compiler from reordering any memory operation across the barrier: all reads and writes issued before it complete before any read or write issued after it.
  • Thus, it creates a two-way fence for the thread.

Thread.MemoryBarrier();

2. Half Fence:

  • A half fence orders memory operations in one direction only.
  • Volatile writes have release semantics and volatile reads have acquire semantics; the Volatile.Write and Volatile.Read methods (and the volatile keyword) provide this behavior.
  • Interlocked methods, by contrast, act as full fences around the operation they perform.

int i = 0;
Volatile.Write(ref i, 42);          // release: earlier writes cannot be reordered after this
int current = Volatile.Read(ref i); // acquire: later reads cannot be reordered before this

Writing:


int i = 0;
Thread writeThread = new Thread(() =>
{
    while (true)
    {
        Interlocked.Increment(ref i);
        Thread.MemoryBarrier(); // makes the new value of i visible to other threads
    }
});

Reading:

Thread readThread = new Thread(() =>
{
    while (true)
    {
        Thread.MemoryBarrier(); // ensures the most recent value of i is read
        Console.WriteLine($"i: {i}");
    }
});

writeThread.Start();
readThread.Start();

Additional Information:

  • The use of Monitor.Enter/Exit methods also exhibits ‘Full Fence’ behavior.
  • Operations such as asynchronous callback structures using ThreadPool, delegates, inter-thread signaling, and similar, are also related to ‘Fence’ behaviors.

ReaderWriterLock & ReaderWriterLockSlim

ReaderWriterLock & ReaderWriterLockSlim:

  • These tools are used in scenarios where multiple threads can read a resource concurrently, but only one thread at a time can modify it.
  • ReaderWriterLockSlim is lighter and faster than the ReaderWriterLock class, and it also implements IDisposable.

Differences from Other Synchronization Tools:

  • They provide a finer-grained level of locking than other synchronization tools.
  • While other locking mechanisms simply serialize all access to a resource, these tools allow read operations to run concurrently without blocking each other.
  • They ensure that write operations are performed by only one thread at a time.

When to Use These Mechanisms?

  • Ideal for read-heavy scenarios where a resource is read frequently but written rarely.
  • They should not be used in write-heavy scenarios, as they can degrade performance during write operations.

using System.Threading;

internal class Program
{
    private static void Main(string[] args)
    {
        // Create 5 reader threads...
        for (int i = 0; i < 5; i++)
            new Thread(Read).Start();

        // Create 2 writer threads...
        for (int i = 0; i < 2; i++)
            new Thread(Write).Start();
    }

    static ReaderWriterLockSlim readerWriterLockSlim = new ReaderWriterLockSlim();
    static int counter = 0;

    static void Read()
    {
        for (int i = 0; i < 10; i++)
        {
            readerWriterLockSlim.EnterReadLock(); // many readers may hold this at once
            try
            {
                Console.WriteLine($"R : Thread {Thread.CurrentThread.ManagedThreadId} is reading : {counter}");
            }
            finally
            {
                readerWriterLockSlim.ExitReadLock();
            }
            Thread.Sleep(1000);
        }
    }

    static void Write()
    {
        for (int i = 0; i < 10; i++)
        {
            readerWriterLockSlim.EnterWriteLock(); // exclusive: only one writer at a time
            try
            {
                counter++;
                Console.WriteLine($"W : Thread {Thread.CurrentThread.ManagedThreadId} is writing : {counter}");
                Thread.Sleep(200);
            }
            finally
            {
                readerWriterLockSlim.ExitWriteLock();
            }
        }
    }
}

Source: Gençay Yıldız — Asenkron & Multithread Programlama
