A hate story of Rust and Non-Linear Data Structures
Rust has recently gained a lot of popularity as a high-level programming language that is fast and memory safe. Its high performance primarily stems from the fact that it is a compiled language, similar to C/C++.
I explored the memory management and memory safety aspects of Rust in my previous post. Do give it a read.
The time I had mixed feelings about Rust … | by Abhijit Mondal | Feb, 2024 | Medium
In this post, I want to show and explain how I struggled to implement some non-linear data structure operations in Rust, especially trees.
I will show some possible ways to implement an AVL Tree (insertion only) and look at the kind of challenges a newbie Rust adopter will face along the way.
An AVL Tree is a self-balancing binary search tree that gives O(log N) worst case time complexity for search, insert and delete operations.
A standard binary search tree also has O(log N) time complexity, but only in the average and best cases. In the worst case, a skewed BST has O(N) time complexity for all these operations.
An AVL Tree re-balances itself after each insertion and deletion so that the height difference between the left and right subtrees of any node is at most 1. This ensures that the height of the tree is O(log N) for N data points. In this post I will only show the insertion operation.
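In code, this invariant boils down to a balance factor check. Here is a minimal standalone sketch (the function names are purely illustrative; the actual implementations below use a height_diff helper that computes the same quantity):
// balance factor = left subtree height - right subtree height
fn balance_factor(lt_height: isize, rt_height: isize) -> isize {
    lt_height - rt_height
}

// the AVL invariant: the two heights may differ by at most 1
fn is_avl_balanced(lt_height: isize, rt_height: isize) -> bool {
    balance_factor(lt_height, rt_height).abs() <= 1
}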
Before we can explain why Rust makes an AVL Tree challenging, we need to explain how the AVL Tree restores height balance after each insertion whenever one of the two violation conditions occurs, i.e. whenever one subtree becomes taller than the other by more than 1.
For the 1st condition (the left sub-tree becoming too tall), there can be 2 scenarios after a new node is inserted (note that the AVL Tree was height balanced before the insertion).
Case 1: Insert node 10
In the diagram, node 80 violates the height balancing property. We will assume that node 80 is the 1st node on the path from the new node 10 up to the root that violates this property.
Case 2: Insert node 60
Case 1 is what is called the left-left rotation and we can rebalance the tree as follows. Here we assume that the root of sub-tree T1 is node 40.
Note that if the value of this node was 5 instead of 40, then node 10 would have gone into sub-tree T5 instead of T4, but the rotation above would still be valid. What matters for the left-left rotation is that node 10 is less than node 50.
Case 2 is what is called the left-right rotation, because node 60 goes to the left of node 80 and then to the right of node 50. We will assume that the root of sub-tree T2 is node 55.
This left-right rotation is a 2-step process: first we do a right rotation on node 50, then a left rotation on node 80, which brings node 55 to the top.
Similar logic applies to the 2nd violation condition, when the right sub-tree is taller than the left sub-tree by more than 1. In that case we do a "right-right" rotation if the new value is greater than the right child and a "right-left" rotation if it is smaller than the right child. (I will not go into the details of that approach here.) The sketch below summarizes how the four cases map onto the two rotation helpers defined later.
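Putting the four cases together, the re-balancing decision after an insertion can be summarized by the following standalone sketch. It is not part of the tree code itself; the rotation names follow this post's convention (left_rotate fixes the left-heavy cases), and child_val stands for the left child's value when the left side is too tall and the right child's value otherwise:
// hdiff = left subtree height - right subtree height at the unbalanced node
fn rebalance_plan(hdiff: isize, val: i64, child_val: i64) -> &'static str {
    if hdiff > 1 {
        if val <= child_val {
            "left-left: left_rotate(root)"
        } else {
            "left-right: right_rotate(left child), then left_rotate(root)"
        }
    } else if hdiff < -1 {
        if val > child_val {
            "right-right: right_rotate(root)"
        } else {
            "right-left: left_rotate(right child), then right_rotate(root)"
        }
    } else {
        "already balanced: no rotation needed"
    }
}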
Let’s dive into the approaches here:
Box
This is a continuation of our previous post, where we used Box smart pointers to implement a BST node. We will continue to use them here and explore some of their advantages and disadvantages.
Let’s start by defining our AVLTree struct and the Node struct:
use std::cmp::max;
use std::fmt::Debug;

// marker trait for the value types our tree supports
pub trait MyTrait: PartialOrd + Debug + Copy {}
impl MyTrait for isize {}
impl MyTrait for usize {}
impl MyTrait for f32 {}
impl MyTrait for f64 {}
impl MyTrait for &str {}

struct AVLTree<T> where T: PartialOrd {
    root_node: Option<Box<Node<T>>>,
}

impl<T: MyTrait> AVLTree<T> {
    fn new() -> Self {
        Self {
            root_node: None,
        }
    }
}

#[derive(Clone)]
struct Node<T> {
    val: T,
    lt_node: Option<Box<Node<T>>>,
    rt_node: Option<Box<Node<T>>>,
    height: isize,
}

impl<T: MyTrait> Node<T> {
    fn new(&val: &T) -> Self {
        Self {
            val,
            lt_node: None,
            rt_node: None,
            height: 1,
        }
    }
}
The Node of our AVLTree is assumed to hold one of the following Rust types: isize, usize, f32, f64 or &str. We can always extend this list, but for demonstration purposes we will keep it small.
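Extending the list is just a matter of adding another empty impl, as long as the type satisfies the trait bounds (PartialOrd + Debug + Copy). For example, these hypothetical additions (not part of the code above) would let the tree hold i32 or char values:
// hypothetical extensions: any type that is PartialOrd + Debug + Copy qualifies
impl MyTrait for i32 {}
impl MyTrait for char {}
A type like String would not qualify directly because it is not Copy; supporting it would mean relaxing the Copy bound or wrapping the value.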
Next we define some helper methods for height calculations. These are required during the re-balancing stage.
impl<T: MyTrait> AVLTree<T> {
    fn get_lt_rt_heights(&self, root_node: &Option<Box<Node<T>>>) -> (isize, isize) {
        // returns tuple of left and right subtree heights
        let mut lheight: isize = 0;
        let mut rheight: isize = 0;
        match root_node {
            Some(root) => {
                let lt_node = &root.lt_node;
                let rt_node = &root.rt_node;
                match lt_node {
                    Some(x) => {
                        lheight = x.height;
                    }
                    None => {}
                }
                match rt_node {
                    Some(x) => {
                        rheight = x.height;
                    }
                    None => {}
                }
            }
            None => {}
        }
        return (lheight, rheight);
    }
}

impl<T: MyTrait> AVLTree<T> {
    fn height_diff(&self, root_node: &Option<Box<Node<T>>>) -> isize {
        // returns difference b/w left subtree height and right subtree height
        let lr_heights = self.get_lt_rt_heights(&root_node);
        return lr_heights.0 - lr_heights.1;
    }
}

impl<T: MyTrait> AVLTree<T> {
    fn set_height(&mut self, root_node: &mut Option<Box<Node<T>>>) {
        // calculate the height of a tree node
        let lr_heights = self.get_lt_rt_heights(&root_node);
        match root_node {
            Some(root) => {
                root.height = 1 + max(lr_heights.0, lr_heights.1);
            }
            None => {}
        }
    }
}
Note that in the first two methods above we pass root_node as an immutable reference, while in the 3rd method, set_height, we pass it as a mutable reference.
If we did not pass a reference (i.e. borrow root_node), root_node would move into the function and be dropped when the function's scope ends. The reason for passing a mutable reference is that we are updating the node's fields in-place, i.e. the height of root_node.
Every time we define a function that takes root_node as input, we have to either pass it as a reference (if we are doing in-place modifications to the heap object) or move it into the function, create a clone of it on the heap and return the clone.
Something similar to below:
impl<T: MyTrait> AVLTree<T> {
    fn set_height(&mut self, root_node: Option<Box<Node<T>>>) -> Option<Box<Node<T>>> {
        // moving root_node into set_height method
        let mut root_clone = root_node.clone();
        let lr_heights = self.get_lt_rt_heights(&root_clone);
        match root_clone {
            Some(ref mut root) => {
                root.height = 1 + max(lr_heights.0, lr_heights.1);
            }
            None => {}
        }
        return root_clone;
    }
}
But cloning a heap object has memory implications: with Box smart pointers the Box owns its data, so when we clone the pointer, the value behind it is copied as well.
In the above code, however, the original root_node that moved into the function is never used again, so the heap memory assigned to it is deallocated when the function's scope ends. Apart from that brief window inside the function, only a single copy of root_node exists at any point in time.
But it is not always as simple as this. One can track the peak memory usage using the "heaptrack" utility.
Compile and run the program avl_tree.rs with heaptrack:
RUSTFLAGS=-g cargo build --release
heaptrack target/release/avl_tree
If you have heaptrack_gui installed, you can see a graph of memory usage over time. Peak heap memory is the maximum heap memory used by the program, while peak RSS covers all memory used by the program, including stack, heap, etc.
Now let’s look at the left rotation function. This function takes as input node X and updates the sub-tree rooted at X as follows:
The code is as follows:
impl<T: MyTrait> AVLTree<T> {
    fn left_rotate(&mut self, root_node: &mut Option<Box<Node<T>>>) {
        match root_node {
            Some(node) => {
                let lt_node = &mut node.lt_node;
                let lt_node_clone = &mut lt_node.clone();
                match lt_node {
                    Some(x) => {
                        // connect the left child's right subtree (T2) as the new left child of X
                        let lt_rt_node = &x.rt_node;
                        node.lt_node = lt_rt_node.clone();
                    }
                    None => {}
                }
                self.set_height(root_node);
                match lt_node_clone {
                    Some(x) => {
                        // the old left child (Y) takes X as its right child
                        x.rt_node = root_node.clone();
                    }
                    None => {}
                }
                // Y becomes the new root of this subtree
                *root_node = lt_node_clone.clone();
                self.set_height(root_node);
            }
            None => {}
        }
    }
}
You will notice that we do a bunch of clone() operations on the Node objects. Cloning is required because we only borrow the Node objects, and we cannot move out of the borrowed root_node (or anything it owns) by dereferencing inside the function.
Simply put, if a function only borrows an object that is still used after the function returns, then inside the function we cannot move out of (i.e. take ownership of) that object or any other object it references.
Note that once we connect X to subtree T2, we need to recalculate the height of X; the height of subtree T2 itself is not affected. Next we recalculate the height of Y, and we re-assign the value inside root_node to Y.
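Coming back to the borrowing issue: here is a small hypothetical function (not part of the implementation) that tries to simply move the left child out of a borrowed node. It does not compile, which is exactly why the rotations above fall back on clone():
// this does NOT compile: we only hold a mutable borrow of the node,
// so we cannot move the owned Box out of it
fn take_left<T: MyTrait>(node: &mut Box<Node<T>>) -> Option<Box<Node<T>>> {
    node.lt_node // error[E0507]: cannot move out of `node.lt_node` which is behind a mutable reference
}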
Similarly, we can define the right rotation method:
impl<T: MyTrait> AVLTree<T> {
    fn right_rotate(&mut self, root_node: &mut Option<Box<Node<T>>>) {
        match root_node {
            Some(node) => {
                let rt_node = &mut node.rt_node;
                let rt_node_clone = &mut rt_node.clone();
                match rt_node {
                    Some(x) => {
                        let rt_lt_node = &x.lt_node;
                        node.rt_node = rt_lt_node.clone();
                    }
                    None => {}
                }
                self.set_height(root_node);
                match rt_node_clone {
                    Some(x) => {
                        x.lt_node = root_node.clone();
                    }
                    None => {}
                }
                *root_node = rt_node_clone.clone();
                self.set_height(root_node);
            }
            None => {}
        }
    }
}
This is the mirror image of the left rotation.
Finally we define the insert method:
impl<T: MyTrait> AVLTree<T> {
    fn insert(&mut self, root_node: &mut Option<Box<Node<T>>>, &val: &T) {
        let mut hdiff: isize = 0;
        match root_node {
            Some(node) => {
                if val <= node.val {
                    // insert into the left subtree
                    let lt_node = &mut node.lt_node;
                    match lt_node {
                        Some(_x) => {
                            self.insert(lt_node, &val);
                        }
                        None => {
                            let new_node: Node<T> = Node::new(&val);
                            node.lt_node = Some(Box::new(new_node));
                        }
                    }
                }
                else {
                    // insert into the right subtree
                    let rt_node = &mut node.rt_node;
                    match rt_node {
                        Some(_x) => {
                            self.insert(rt_node, &val);
                        }
                        None => {
                            let new_node: Node<T> = Node::new(&val);
                            node.rt_node = Some(Box::new(new_node));
                        }
                    }
                }
                self.set_height(root_node);
                hdiff = self.height_diff(root_node);
            }
            None => {
                // empty tree: the new node becomes the root
                let new_node: Node<T> = Node::new(&val);
                *root_node = Some(Box::new(new_node));
            }
        }
        if hdiff > 1 {
            // left subtree is too tall
            match root_node {
                Some(node) => {
                    let lt_node = &mut node.lt_node;
                    match lt_node {
                        Some(x) => {
                            if val <= x.val {
                                // left-left case
                                self.left_rotate(root_node);
                            }
                            else if val > x.val {
                                // left-right case
                                self.right_rotate(lt_node);
                                self.left_rotate(root_node);
                            }
                        }
                        None => {}
                    }
                }
                None => {}
            }
        }
        else if hdiff < -1 {
            // right subtree is too tall
            match root_node {
                Some(node) => {
                    let rt_node = &mut node.rt_node;
                    match rt_node {
                        Some(x) => {
                            if val <= x.val {
                                // right-left case
                                self.left_rotate(rt_node);
                                self.right_rotate(root_node);
                            }
                            else if val > x.val {
                                // right-right case
                                self.right_rotate(root_node);
                            }
                        }
                        None => {}
                    }
                }
                None => {}
            }
        }
    }
}
Let's also define the main method, which creates a vector of 1 million integers from 1 to 1 million, shuffles them and inserts them one by one into the tree.
use rand::seq::SliceRandom;
use rand::thread_rng;

fn main() {
    let mut avltree: AVLTree<usize> = AVLTree::new();
    let mut root: Option<Box<Node<usize>>> = None;
    // 1 million integers from 1 to 1,000,000 (inclusive)
    let mut vec: Vec<usize> = (1..=1000000).collect();
    vec.shuffle(&mut thread_rng());
    for v in vec.iter() {
        avltree.insert(&mut root, &v);
    }
}
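To convince ourselves that the insertions actually keep the tree balanced, a small hypothetical helper like the one below can be called on root after the loop. It is not part of the original program; it walks the tree and asserts the AVL invariant at every node:
fn check_balanced<T: MyTrait>(node: &Option<Box<Node<T>>>) -> isize {
    // returns the height of the subtree and checks the balance condition on the way up
    match node {
        Some(n) => {
            let lh = check_balanced(&n.lt_node);
            let rh = check_balanced(&n.rt_node);
            assert!((lh - rh).abs() <= 1, "AVL invariant violated at node {:?}", n.val);
            1 + max(lh, rh)
        }
        None => 0,
    }
}

// in main, after the insert loop:
// check_balanced(&root);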
We can track how much heap memory and total memory the program uses with heaptrack.
We can see that the maximum heap memory used at any point in time is around 62 MB. This will vary since the inputs are shuffled.
If we look at the heaptrack output for the lines within the program that consume the largest amount of heap memory, we can see that the clone() operations inside the left and right rotations dominate.
If you observe carefully, for the left rotation, cloning node Y takes roughly twice the memory of cloning X, simply because the subtree rooted at Y has roughly twice as many nodes as the one rooted at X.
Similarly, cloning X takes roughly twice the memory of cloning T2. Since a balanced subtree of height h contains on the order of 2^h nodes, each unit increase in height roughly doubles the memory consumed by a clone.
HashMap
In the next implementation, instead of using Box smart pointers for the left and right subtrees of a Node, we assign an id to each Node and maintain a separate id-to-Node HashMap.
The usefulness of this approach is that when we clone a Node, instead of (almost) the entire subtree being deep-copied in heap memory, we only copy an individual Node holding two id fields.
use std::collections::HashMap;

struct AVLTree<T> where T: PartialOrd {
    auto_inc_id: usize,
    id_to_node_map: HashMap<usize, Node<T>>,
}

impl<T: MyTrait> AVLTree<T> {
    fn new() -> Self {
        Self {
            auto_inc_id: 0,
            id_to_node_map: HashMap::new(),
        }
    }
}

#[derive(Clone)]
struct Node<T> {
    val: T,
    lt_node_id: Option<usize>,
    rt_node_id: Option<usize>,
    height: isize,
}

impl<T: MyTrait> Node<T> {
    fn new(&val: &T) -> Self {
        Self {
            val,
            lt_node_id: None,
            rt_node_id: None,
            height: 1,
        }
    }
}
Note that we use a HashMap instead of a Vec here because, if we were to also implement deletion, removing an entry from a HashMap is easier than removing one from the middle of a Vec.
The id field is of type usize.
I am not going to show all the code for the HashMap implementation here, but I recommend going through it in the repo below.
rust-algorithms/src/bin/avl_tree_hmap.rs at main · funktor/rust-algorithms (github.com)
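Just to give a flavour of what the id-based code looks like, below is a minimal sketch of two helpers: one that allocates a new node in the map and one that reads a child's height through it. The method names are my own illustration and may differ from the actual code in the repo:
impl<T: MyTrait> AVLTree<T> {
    // hypothetical helper: allocate a new node and return its id
    fn add_node(&mut self, val: &T) -> usize {
        self.auto_inc_id += 1;
        let id = self.auto_inc_id;
        self.id_to_node_map.insert(id, Node::new(val));
        id
    }

    // hypothetical helper: height of a child referenced by id, 0 for None
    fn height_of(&self, node_id: &Option<usize>) -> isize {
        match node_id {
            Some(id) => self.id_to_node_map[id].height,
            None => 0,
        }
    }
}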
An interesting observation from the heaptrack output is that the peak heap memory with the HashMap approach is quite high compared to the Box approach above, roughly 2–3 times the memory usage.
Digging deeper, we can see that the HashMap is the culprit here, as it needs additional data structures to hold the key-value pairs and the hashes.
Whenever a key-value pair is inserted into the HashMap, the hash table may be resized depending on the load factor, and after a resize there is often a lot of wasted (unused) capacity in the table.
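One possible mitigation, if an upper bound on the number of nodes is known in advance, is to pre-allocate the map so it never has to grow mid-run. This uses the standard HashMap::with_capacity API and is not something taken from the repo:
impl<T: MyTrait> AVLTree<T> {
    // hypothetical alternative constructor that pre-sizes the map
    fn with_capacity(n: usize) -> Self {
        Self {
            auto_inc_id: 0,
            id_to_node_map: HashMap::with_capacity(n),
        }
    }
}
This trades the repeated resize-and-rehash cycles (during which the old and new tables briefly coexist) for a single larger upfront allocation.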
Rc<RefCell>
In the next approach, instead of Box smart pointers, I used another kind of smart pointer: Rc combined with RefCell. Some important differences between Box and Rc<RefCell> are as follows:
- With Box smart pointers, the data behind the pointer has a single owner. So when you clone it, another copy of the data is made in heap memory, as we saw above for the left and right rotations.
- With Rc, the data behind the smart pointer can have multiple owners, but Rc on its own allows only immutable access: even with multiple owners, none of them can mutate the value. Rc stands for Reference Counting, so whenever a clone is made, the reference count of the object is incremented by 1; the data is not copied in heap memory.
- Using RefCell together with Rc allows multiple owners to mutate the data behind the smart pointer. But unlike Box, where ownership and borrowing are checked at compile time, with Rc<RefCell> the borrows are checked at runtime. You will not see "much" complaining from the compiler, but if conflicting borrows happen while the program is running, it will panic. The small standalone example below illustrates both behaviours.
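Here is a tiny standalone example, unrelated to the tree code, showing that cloning an Rc only bumps the reference count and that the borrow rules are enforced at runtime:
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    let a = Rc::new(RefCell::new(5));
    let b = Rc::clone(&a);              // no heap copy, only the reference count goes up
    assert_eq!(Rc::strong_count(&a), 2);

    *a.borrow_mut() += 1;               // mutation through one owner is visible to the other
    assert_eq!(*b.borrow(), 6);

    // Borrow rules are checked at runtime instead of compile time:
    // let g1 = a.borrow_mut();
    // let g2 = a.borrow_mut();         // this second mutable borrow would panic at runtime
}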
Here is the data structure definition:
use std::cell::{RefCell, RefMut};
use std::rc::Rc;

struct AVLTree<T> where T: PartialOrd {
    root_node: Option<Rc<RefCell<Node<T>>>>,
}

impl<T: MyTrait> AVLTree<T> {
    fn new() -> Self {
        Self {
            root_node: None,
        }
    }
}

#[derive(Clone)]
struct Node<T> {
    val: T,
    lt_node: Option<Rc<RefCell<Node<T>>>>,
    rt_node: Option<Rc<RefCell<Node<T>>>>,
    height: isize,
}

impl<T: MyTrait> Node<T> {
    fn new(&val: &T) -> Self {
        Self {
            val,
            lt_node: None,
            rt_node: None,
            height: 1,
        }
    }
}
Next come the methods to compute the heights of the left and right subtrees and to set the height of a node.
impl<T: MyTrait> AVLTree<T> {
    fn get_lt_rt_heights(&self, root_borrow: &RefMut<'_, Node<T>>) -> (isize, isize) {
        let mut lheight: isize = 0;
        let mut rheight: isize = 0;
        let lt_node = &root_borrow.lt_node;
        let rt_node = &root_borrow.rt_node;
        match lt_node {
            Some(x) => {
                lheight = x.borrow().height;
            }
            None => {}
        }
        match rt_node {
            Some(x) => {
                rheight = x.borrow().height;
            }
            None => {}
        }
        return (lheight, rheight);
    }
}

impl<T: MyTrait> AVLTree<T> {
    fn height_diff(&self, root_borrow: &RefMut<'_, Node<T>>) -> isize {
        let lr_heights = self.get_lt_rt_heights(&root_borrow);
        return lr_heights.0 - lr_heights.1;
    }
}

impl<T: MyTrait> AVLTree<T> {
    fn set_height(&self, root_borrow: &mut RefMut<'_, Node<T>>) {
        let lr_heights = self.get_lt_rt_heights(&root_borrow);
        root_borrow.height = 1 + max(lr_heights.0, lr_heights.1);
    }
}
Note that, unlike in the Box method, we are not passing a mutable reference to root_node; we are passing a mutable reference to the borrowed object (the RefMut).
This is because once you borrow an object mutably, you cannot borrow it again, mutably or immutably, and with Rc<RefCell> any time you have to modify the struct you need to call borrow_mut() on the object.
Thus it is only possible to have one borrow_mut() on the object at a time, which in our nested calls would not work.
The other pattern is to obtain the RefMut object by calling borrow_mut() once and then pass the RefMut around, as the following small sketch shows.
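Here is a minimal standalone sketch of that pattern, using a Vec instead of a tree node to keep it short: the RefCell is mutably borrowed exactly once, and the resulting RefMut guard is what gets passed around:
use std::cell::{RefCell, RefMut};
use std::rc::Rc;

// a hypothetical helper that works on an already-borrowed value
fn push_twice(borrowed: &mut RefMut<'_, Vec<i32>>, v: i32) {
    borrowed.push(v);
    borrowed.push(v);
}

fn main() {
    let shared = Rc::new(RefCell::new(Vec::new()));
    let mut guard = shared.borrow_mut(); // borrow mutably exactly once...
    push_twice(&mut guard, 1);           // ...and pass the RefMut guard around
    push_twice(&mut guard, 2);
    drop(guard);                         // release the borrow before anyone else needs one
    assert_eq!(*shared.borrow(), vec![1, 1, 2, 2]);
}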
Similarly, the left rotation and right rotation methods are as follows:
impl<T: MyTrait> AVLTree<T> {
    fn left_rotate(&mut self, root_borrow: &mut RefMut<'_, Node<T>>) -> Option<Rc<RefCell<Node<T>>>> {
        let root_clone = root_borrow.clone();
        let lt_node = &root_clone.lt_node;
        match lt_node {
            Some(x) => {
                let lt_rt_node = &x.borrow().rt_node;
                match lt_rt_node {
                    Some(y) => {
                        root_borrow.lt_node = Some(y.clone());
                    }
                    None => {
                        root_borrow.lt_node = None;
                    }
                }
            }
            None => {}
        }
        self.set_height(root_borrow);
        match lt_node {
            Some(ref x) => {
                let xborrow = &mut x.borrow_mut();
                xborrow.rt_node = Some(Rc::new(RefCell::new(root_borrow.clone())));
                self.set_height(xborrow);
            }
            None => {}
        }
        return lt_node.clone();
    }
}

impl<T: MyTrait> AVLTree<T> {
    fn right_rotate(&mut self, root_borrow: &mut RefMut<'_, Node<T>>) -> Option<Rc<RefCell<Node<T>>>> {
        let root_clone = root_borrow.clone();
        let rt_node = &root_clone.rt_node;
        match rt_node {
            Some(x) => {
                let rt_lt_node = &x.borrow().lt_node;
                match rt_lt_node {
                    Some(y) => {
                        root_borrow.rt_node = Some(y.clone());
                    }
                    None => {
                        root_borrow.rt_node = None;
                    }
                }
            }
            None => {}
        }
        self.set_height(root_borrow);
        match rt_node {
            Some(ref x) => {
                let xborrow = &mut x.borrow_mut();
                xborrow.lt_node = Some(Rc::new(RefCell::new(root_borrow.clone())));
                self.set_height(xborrow);
            }
            None => {}
        }
        return rt_node.clone();
    }
}
And finally the insert method:
impl<T: MyTrait> AVLTree<T> {
    fn insert(&mut self, root_node: &mut Option<Rc<RefCell<Node<T>>>>, &val: &T) -> Option<Rc<RefCell<Node<T>>>> {
        let mut output: Option<Rc<RefCell<Node<T>>>> = root_node.clone();
        match root_node {
            Some(ref node) => {
                let mut node_borrow = node.borrow_mut();
                if val <= node_borrow.val {
                    let lt_node = &mut node_borrow.lt_node;
                    let new_lt: Option<Rc<RefCell<Node<T>>>>;
                    match lt_node {
                        Some(_x) => {
                            new_lt = self.insert(lt_node, &val);
                        }
                        None => {
                            let new_node: Node<T> = Node::new(&val);
                            new_lt = Some(Rc::new(RefCell::new(new_node)));
                        }
                    }
                    node_borrow.lt_node = new_lt;
                }
                else {
                    let rt_node = &mut node_borrow.rt_node;
                    let new_rt: Option<Rc<RefCell<Node<T>>>>;
                    match rt_node {
                        Some(_x) => {
                            new_rt = self.insert(rt_node, &val);
                        }
                        None => {
                            let new_node: Node<T> = Node::new(&val);
                            new_rt = Some(Rc::new(RefCell::new(new_node)));
                        }
                    }
                    node_borrow.rt_node = new_rt;
                }
            }
            None => {
                let new_node: Node<T> = Node::new(&val);
                output = Some(Rc::new(RefCell::new(new_node)));
            }
        }
        let mut hdiff: isize = 0;
        match root_node {
            Some(ref node) => {
                let mut node_borrow = node.borrow_mut();
                self.set_height(&mut node_borrow);
                hdiff = self.height_diff(&node_borrow);
            }
            None => {}
        }
        match root_node {
            Some(ref node) => {
                let mut node_borrow = node.borrow_mut();
                if hdiff > 1 {
                    let lt_node = &mut node_borrow.lt_node;
                    match lt_node {
                        Some(ref x) => {
                            if val <= x.borrow().val {
                                output = self.left_rotate(&mut node_borrow);
                            }
                            else if val > x.borrow().val {
                                let new_lt = self.right_rotate(&mut x.borrow_mut());
                                node_borrow.lt_node = new_lt;
                                output = self.left_rotate(&mut node_borrow);
                            }
                        }
                        None => {}
                    }
                }
                else if hdiff < -1 {
                    let rt_node = &mut node_borrow.rt_node;
                    match rt_node {
                        Some(ref x) => {
                            if val <= x.borrow().val {
                                let new_rt = self.left_rotate(&mut x.borrow_mut());
                                node_borrow.rt_node = new_rt;
                                output = self.right_rotate(&mut node_borrow);
                            }
                            else if val > x.borrow().val {
                                output = self.right_rotate(&mut node_borrow);
                            }
                        }
                        None => {}
                    }
                }
            }
            None => {}
        }
        return output.clone();
    }
}
Note that we call borrow_mut() once and then pass the returned RefMut object into the left and right rotation function calls.
Now let's look at the heap memory usage and how the usage per line breaks down, using heaptrack.
Peak heap memory usage is somewhere around 64 MB.
The lines in the left and right rotation functions with the highest heap memory usage are:
xborrow.rt_node = Some(Rc::new(RefCell::new(root_borrow.clone())));
xborrow.lt_node = Some(Rc::new(RefCell::new(root_borrow.clone())));
The lines with the highest memory usage in the insert function are as follows:
new_lt = Some(Rc::new(RefCell::new(new_node)));
new_rt = Some(Rc::new(RefCell::new(new_node)));
We can clearly see the pattern here: whenever we create a new Rc<RefCell> object on the heap, memory usage goes up. When we call root_borrow.clone() in the left and right rotation functions and wrap the result in Rc::new(RefCell::new(..)), we are creating fresh copies of nodes in heap memory.
Note that calling .clone() on an existing Rc<RefCell> does not copy the data (it only increments the reference count), but calling .clone() on the RefMut object clones the underlying Node.
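In isolation, the distinction looks like this (a small sketch assuming the Node definition from this section):
fn clone_demo(node: &Rc<RefCell<Node<usize>>>) {
    let cheap = Rc::clone(node);                     // same allocation, only the reference count goes up
    let copied: Node<usize> = node.borrow().clone(); // copies the Node struct itself
    let rewrapped = Rc::new(RefCell::new(copied));   // a brand new RefCell<Node> allocation on the heap
    let _ = (cheap, rewrapped);                      // `rewrapped` is what the rotations above end up creating
}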
Personally, I feel the Box method is the much "cleaner" and "safer" way, simply because I get to know about all the ownership and borrowing issues at compile time instead of getting "panicky" at runtime, where the error messages are not always very helpful.
There could be better ways to implement these functions that I have not yet explored or am not aware of. But the idea I wanted to demonstrate here is that there are multiple ways to implement a non-linear data structure in Rust, each with its own set of challenges, and there is a lot of struggle involved in getting it right.
The good part is that we can analyze the memory usage with heaptrack to understand how each method behaves at runtime, and from there find ways to optimize it.
References
- Profiling heap allocation in rust | FlakM blog
- Heaptrack — A Heap Memory Profiler for Linux — Milian Wolff
- Title Page — The Rust Performance Book (nnethercote.github.io)
- The Rust Programming Language — The Rust Programming Language (rust-lang.org)
- Rc<T>, the Reference Counted Smart Pointer — The Rust Programming Language (rust-lang.org)
- RefCell<T> and the Interior Mutability Pattern — The Rust Programming Language (rust-lang.org)