Easy Concurrency Mastery: Exploring the Read-Write Lock Pattern in Rust for Performance

Photo by Pixabay: https://www.pexels.com/photo/gray-metal-typewriter-part-in-close-up-photo-261626/

Introduction

In another article we discussed the Lock pattern, for which we used the Mutex struct. The problem with Mutex is that it does not distinguish between reading from a resource, like accessing an element in a vector, and writing to it.

In cases where many threads need to read a resource at once and there are only a few write operations, the Read-Write Lock pattern can help. This pattern allows concurrent access for readers of the resource. When writing, however, an exclusive lock is granted and all read operations are blocked. That last part can be a source of trouble: if the writer takes a long time, or in bad cases never finishes, readers may be blocked indefinitely.

In Rust, this pattern is implemented by the RwLock struct. This struct has a write() method, which grants an exclusive lock, and a read() method, which grants shared read access.
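To get a feel for these two methods, here is a minimal single-threaded sketch; the counter value is just an illustration and not part of the example we build below:

use std::sync::RwLock;

fn main() {
    // A value protected by a read-write lock.
    let counter = RwLock::new(0);

    {
        // write() grants an exclusive lock: no other readers or writers.
        let mut value = counter.write().unwrap();
        *value += 1;
    } // the write guard is dropped here, releasing the lock

    // read() grants shared access: any number of readers can hold it at once.
    let value = counter.read().unwrap();
    println!("counter = {}", *value);
}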

Implementation in Rust

One of the areas where locks come in handy is handling different versions in a version control system. In our example we will build an extremely simplified version control system.

Let’s start with our preliminaries:

use std::sync::{Arc, RwLock};
use std::thread;

// A single version entry in our simplified version control system.
#[derive(Debug)]
struct Version {
    version: String,
    content: String,
}

impl Version {
    fn new(version: String, content: String) -> Version {
        Version {
            version,
            content,
        }
    }
}

In this example we will go straight to the main function. You could of course wrap this pattern in a struct, but I want to keep things simple:

fn main() {
    let list = Arc::new(RwLock::new(vec![]));
    let mut handles = vec![];

    for counter in 0..10 {
        let list_clone = Arc::clone(&list);
        let handle = thread::spawn(move || {
            // Take the exclusive write lock before pushing a new version.
            let mut list = list_clone.write().unwrap();
            let version = Version::new(format!("v0.{}", counter), format!("content {}", counter * 2));
            list.push(version);
        });

        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    // A shared read lock is enough to print the result.
    println!("Result: {:?}", *list.read().unwrap());
}

Line by line:

  1. We create a vector inside an RwLock and wrap it inside an Arc. The RwLock ensures only one thread at a time can mutate the vector, while the Arc allows it to be shared among threads.
  2. The handles vector holds our thread handles.
  3. In the loop we:
    • We clone our Arc so each thread gets its own reference to it. This increases the reference count, so the Arc, the enclosed RwLock, and the vector won't be cleaned up until all threads have finished with them.
    • Next we spawn a thread. We use the move keyword to move the captured variables, list_clone and counter, into the closure, so ownership shifts to the thread and they can be used inside it.
    • Now we access the RwLock for writing, which gives us exclusive write access to the contained vector. We use unwrap() here, which is not something you should do in production; you should always check for errors. write() can return an error if the lock is poisoned, which happens when another thread panics while holding the write lock. A panic in a reader holding shared read access, on the other hand, does not poison the lock.
  4. After the loop we wait for all threads to finish.
  5. Finally we print the vector. You will see that the numbers will not be in a perfect ascending order, as the threads do not execute in order.
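To see where RwLock shines compared to a plain Mutex, we could extend the example with reader threads that inspect the list concurrently while writers add versions. This is only a sketch of that idea, not part of the original example:

use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let list: Arc<RwLock<Vec<String>>> = Arc::new(RwLock::new(vec![]));
    let mut handles = vec![];

    // A few writer threads: each takes the exclusive write lock briefly.
    for counter in 0..3 {
        let list_clone = Arc::clone(&list);
        handles.push(thread::spawn(move || {
            list_clone.write().unwrap().push(format!("v0.{}", counter));
        }));
    }

    // Many reader threads: the read lock can be held by all of them at once.
    for _ in 0..10 {
        let list_clone = Arc::clone(&list);
        handles.push(thread::spawn(move || {
            let versions = list_clone.read().unwrap();
            println!("currently {} versions", versions.len());
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }
}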

Conclusion

Although the Mutex pattern performs quite well and is stable, in cases with many read operations and few write operations you can boost performance by using the RwLock struct.

If the writer thread blocks for a long time or panics, readers can end up waiting indefinitely or receiving an error from a poisoned lock. Since only a writer can block access to the resource, you can usually find the cause of such a stall and hopefully solve it.
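If you want to avoid unwrap(), one option is to handle the poisoned-lock error explicitly. Here is a minimal sketch, using a hypothetical helper function read_first that only illustrates the idea:

use std::sync::RwLock;

// Hypothetical helper: read the first version string without unwrap().
fn read_first(list: &RwLock<Vec<String>>) -> Option<String> {
    match list.read() {
        Ok(versions) => versions.first().cloned(),
        Err(poisoned) => {
            // The lock was poisoned because a writer panicked while holding it.
            // We could also recover the data with poisoned.into_inner().
            eprintln!("lock was poisoned: {}", poisoned);
            None
        }
    }
}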
