
Building a simple distributed hash table using Python and Sockets — Part 1

Abhijit Mondal

--

While trying to understand distributed systems in detail, I searched for Python implementations of some of the common distributed systems problems (commonly asked in interviews), such as Distributed Logs, Distributed Hash Tables etc. I couldn't find many useful references for Python, although there are some written in Golang.

In this post (a series of posts, really, since one post is not enough to cover everything), I will implement a simple distributed hash table in Python using plain sockets (one of several ways to do it, RPCs being another).

Why do we need a distributed hash table?

Because memory on a single instance is limited. To use much more RAM than a single instance can provide, we have to distribute the hash map across multiple instances.

What are some important challenges when going from a single-instance hash map to a distributed hash map?

  • How will different instances know that they “own” different pieces of the same hash map?
  • How to make it a service? Unlike a single-instance hash map, where we can instantiate an object of the HashMap class and perform all set and get operations through that object, we cannot simply instantiate a class located on a remote instance. We need some way for the remote instance to let us manipulate a class object over the network. This is where the notion of a service comes in.
  • How to ensure availability? Each service is a running process that instantiates an in-memory hash map. What if the service restarts? The attached process gets killed and a new process starts that creates a new instance of the HashMap class. This new instance has none of the old data, so some “piece” of our hash map is effectively lost.
  • How to distribute the data of the hash map? The main motivation was to allow arbitrarily sized hash maps by using multiple instances, which means each instance holds only a “piece” of the hash map. How do we know which instance holds which data?
  • How to deploy a single piece of code across multiple instances as services ?
  • … and many more.

To demonstrate the distributed aspects of the hash map, instead of writing our own hash map implementation we will reuse Python’s ‘dict’ for the single-instance hash map, since our focus here is on taking this ‘dict’ and distributing it across multiple instances.
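A minimal sketch of what such a single-instance hashtable.py could look like, assuming a HashTable class wrapping dict with set and get methods (the method names and return values here are illustrative, not necessarily the exact code):

# hashtable.py: a single-instance hash table wrapping Python's dict.
# Method names and return values are illustrative assumptions.
class HashTable:
    def __init__(self):
        # Re-use Python's built-in dict as the underlying hash map.
        self.table = {}

    def set(self, key, value):
        self.table[key] = value
        return "ok"

    def get(self, key):
        # Return a sentinel string when the key is missing.
        return self.table.get(key, "null")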

This piece of code is (almost) sufficient for a single instance implementation.

How to make this code a service?

One way is to use a Flask server.

Define REST API endpoints in a Flask handler and route ‘get’ or ‘set’ (post/put) requests to the HashTable instance. But we will not use Flask; instead we will use something more low level, i.e. sockets.

Sockets give us much more control over how we want to communicate and allow us to reuse existing connections, which can give better performance than REST calls.

In order to simulate multiple instances on a single machine, we can create multiple sockets listening on different port numbers on the same localhost (127.0.0.1).

A simple service for the above hashtable.py code:
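The sketch below shows one way such a hashtable_service.py could look, assuming a simple line-based text protocol (“set key value” / “get key”) over TCP and one thread per client connection; the protocol and names are my assumptions rather than the exact code:

# hashtable_service.py: a sketch of a socket-based service exposing HashTable.
# Assumes a line-based text protocol; names and details are illustrative.
import socket
import sys
import threading

from hashtable import HashTable

hash_table = HashTable()

def handle_client(conn):
    # Serve one client; each message is a command like "set a 1" or "get a".
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            parts = data.decode().strip().split()
            if not parts:
                continue
            if parts[0] == "set" and len(parts) == 3:
                response = hash_table.set(parts[1], parts[2])
            elif parts[0] == "get" and len(parts) == 2:
                response = hash_table.get(parts[1])
            else:
                response = "invalid command"
            conn.sendall(response.encode())

def run(ip, port):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((ip, port))
    server.listen()
    print(f"hashtable service listening on {ip}:{port}")
    while True:
        conn, _ = server.accept()
        # Handle each client connection on its own thread.
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    run(sys.argv[1], int(sys.argv[2]))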

We start the service by running the command in a terminal window:

python hashtable_service.py "127.0.0.1" "5001"

5001 is the port number on which the server is accepting connections. If we implement the service on a remote VM, then we need to use the public IP of the VM here and also allow TCP/UDP inbound connections on the port.

The hashtable service is ready to accept connections from clients. We create a dummy client as follows:
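A sketch of what such a client.py could look like, assuming it reads commands from standard input and prints whatever the service sends back (the details are illustrative):

# client.py: an interactive client for the hashtable service above.
# Reads commands from stdin and prints the service's responses.
import socket
import sys

def run(ip, port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((ip, port))
    print(f"connected to {ip}:{port}")
    try:
        while True:
            # Commands such as "set a 1" or "get a".
            command = input("> ").strip()
            if not command:
                continue
            sock.sendall(command.encode())
            print(sock.recv(1024).decode())
    finally:
        sock.close()

if __name__ == "__main__":
    run(sys.argv[1], int(sys.argv[2]))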

We run the client in a separate terminal window as follows: python client.py "127.0.0.1" "5001"

The IP address and port number correspond to the server’s IP and port.

Once the client is able to connect to the service, we start issuing hashtable commands such as:

set a 1
set b 2
set c 3
set d 4
get a
get b

We can see that the service reads the inputs, and the output message from the service is received by the client and displayed on screen.

Note that the server and the client are not limited to a single machine; we could run the server on a virtual machine in the cloud and let the client run on our local machine.

What’s the fun in having only a single machine running the service? We could have done that without any service at all. The main purpose of a distributed hash table is to use as many machines as possible to run the service.

Let’s say now we have 2 machines running the service. We want approximately 50% of the keys to go to machine A and 50% to machine B. How do we achieve this?

One possible way is to compute an integer hash of the key and take modulo 2 i.e. machine_id = hash(key) % 2.
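One caveat for the sketch below: Python’s built-in hash() is randomized per process for strings (controlled by PYTHONHASHSEED), so two service processes would not necessarily agree on which machine owns a key; a deterministic digest, e.g. via hashlib, avoids this. A minimal sketch with a hypothetical helper name:

import hashlib

def key_to_partition(key, num_partitions):
    # Deterministic digest so every process maps a key to the same partition,
    # unlike the built-in hash(), which is randomized per process for strings.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % num_partitions

# Example usage: map two keys onto 2 partitions.
print(key_to_partition("a", 2), key_to_partition("b", 2))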

Now we realize that we probably need another service for routing requests to the correct machine, since the client sends its requests to a single machine, which is our application endpoint. Alternatively, the client always sends the request to the nearest machine, and that machine checks the hash and either inserts the key into its own share of the hash table or forwards it to the other machine running the service.

Let us simulate another machine by using a different port number on our localhost.

Here are the modifications to the hashtable_service code that will allow us to run the service with N machines.

Keeping everything else the same as the single-machine service, we add the following (a sketch follows the list):

  1. Accept a list of ip addresses and ports corresponding to the partitions.
  2. For each partition (other than itself), the service connects to that partition using sockets. We perform this step in a separate thread, because some partitions may not have started running yet; we keep retrying the connection until the partition is in the running state.
  3. For each request, we compute the key hash and take it modulo the number of partitions. Based on that index, we either route the request to the correct partition or process it on the same partition.
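Here is a sketch of what these modifications could look like, reusing the line protocol and the HashTable sketch from earlier; the partition-list format, thread usage, and names are my assumptions:

# hashtable_service.py (partitioned): a sketch of the modifications above.
# Partition-list format, protocol, and names are illustrative assumptions.
import ast
import hashlib
import socket
import sys
import threading
import time

from hashtable import HashTable

hash_table = HashTable()
partitions = []          # e.g. ['127.0.0.1:5001', '127.0.0.1:5002']
partition_sockets = {}   # partition index -> socket connected to that partition
self_index = -1

def key_to_partition(key, num_partitions):
    # Deterministic hash so every instance agrees on which partition owns a key.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % num_partitions

def connect_to_partitions():
    # Step 2: retry in a background thread until every other partition is reachable.
    for i, address in enumerate(partitions):
        if i == self_index:
            continue
        ip, port = address.split(":")
        while i not in partition_sockets:
            try:
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                sock.connect((ip, int(port)))
                partition_sockets[i] = sock
            except ConnectionRefusedError:
                time.sleep(1)

def process(command):
    # Step 3: handle the key locally or forward it to the owning partition.
    parts = command.split()
    owner = key_to_partition(parts[1], len(partitions))
    if owner == self_index:
        if parts[0] == "set":
            return hash_table.set(parts[1], parts[2])
        return hash_table.get(parts[1])
    sock = partition_sockets[owner]
    sock.sendall(command.encode())
    return sock.recv(1024).decode()

def handle_client(conn):
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(process(data.decode().strip()).encode())

def run(ip, port, partition_list):
    global partitions, self_index
    # Step 1: accept the list of "ip:port" strings describing all partitions.
    partitions = ast.literal_eval(partition_list)
    self_index = partitions.index(f"{ip}:{port}")
    threading.Thread(target=connect_to_partitions, daemon=True).start()
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((ip, port))
    server.listen()
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    run(sys.argv[1], int(sys.argv[2]), sys.argv[3])

For brevity the forwarding sockets are shared across handler threads without locking, which is fine for a demonstration but not for many concurrent clients.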

In order to run this, first we start 2 instances of the service. Let us start the 2 services on ports 5001 and 5002 respectively.

In one terminal run:

python hashtable_service.py "127.0.0.1" "5001" "['127.0.0.1:5001', '127.0.0.1:5002']"

Open another terminal and run:

python hashtable_service.py "127.0.0.1" "5002" "['127.0.0.1:5001', '127.0.0.1:5002']"

Run the client by connecting to any one of the above services: python client.py "127.0.0.1" "5001"

On sending set and get requests, we can see that roughly 50% of them are forwarded from port 5001 to 5002. If, instead of running on the same machine with different port numbers, we run the services on separate remote VMs, we can effectively use twice the memory for the hash table compared to a single machine.

Voila, now we have everything we required for a ‘Happy Case’ distributed hash table.

I said “Happy Case” because we still have not discussed how we are going to handle the following situations:

  1. What if one of the partition machines crashes? What happens to the keys that are forwarded to that machine?
  2. What if we add a new partition machine? How are the existing partitions going to discover it? Even if they do, the modulo changes, so a key may have gone to machine 1 during a set but the get request gets forwarded to machine 2, if a new machine is added between the set and the get for the same key.
  3. If a machine momentarily stops responding and later becomes active again, how is it going to recover the keys it missed during that interval?

These are just some of the questions we are going to answer in the next part of this series :)
