How do I find disjoint sets in a dataset?
I have a dataset of car bookings like this:
| car_id | user_id |
|--------|---------|
| 1      | 1       |
| 2      | 1       |
| 1      | 2       |
| 3      | 3       |
| 1      | 2       |
| 3      | 3       |
In this dataset there are two separate groups/sets of cars and users that don't overlap: one group consists of two vehicles (1, 2) and two users (1, 2); the other group has only one car (3) and one user (3). The groups are independent and have no overlap in users or vehicles. Rows in the dataset can repeat.
Now I have a much bigger dataset with many thousands of cars and users. What is the most elegant/fastest algorithm/data structure to find those disjoint groups?
I code in Python or Julia.
I read the paper and implemented the RemSP algorithm, partly because I like algorithms, and it was cool that RemSP is so much faster than the other algorithms presented. However, the processing needed around its results is noticeable. Also, I am confused because RemSP is for merging sets, while I am trying to find independent groups. That is not the same.
Here is my code - did you have something more immediate in mind when recommending RemSP?
(How is pasting code supposed to work here? This markup seems to be an ill fit for code.)
```python
def remsp(p, x, y):
    # Union of x and y using Rem's algorithm with splicing (RemSP).
    rx = x
    ry = y
    while p[rx] != p[ry]:
        if p[rx] < p[ry]:
            if rx == p[rx]:
                p[rx] = p[ry]
                break
            # Splice: remember the old parent, relink, then continue from it.
            z = p[rx]
            p[rx] = p[ry]
            rx = z
        else:
            if ry == p[ry]:
                p[ry] = p[rx]
                break
            z = p[ry]
            p[ry] = p[rx]
            ry = z
    return p


def find(p, x):
    # Find the representative of x with path compression.
    if x != p[x]:
        p[x] = find(p, p[x])
    return p[x]


def get_sets(p):
    # Group all elements by their representative into (cars, users) pairs.
    sets = {}
    for x in p.keys():
        root = find(p, x)
        if root not in sets:
            sets[root] = (set(), set())
        if x.startswith("car"):
            sets[root][0].add(x)
        else:
            sets[root][1].add(x)
    return sets.values()


# Dataset of car bookings (car_id, user_id)
bookings = [
    ("car_1", "user_1"),
    ("car_2", "user_1"),
    ("car_1", "user_2"),
    ("car_3", "user_3"),
    ("car_1", "user_2"),
    ("car_3", "user_3"),
]

# Initialize parent array
p = {item: item for booking in bookings for item in booking}

# Process bookings
for car_id, user_id in bookings:
    # Merge sets containing car_id and user_id
    remsp(p, car_id, user_id)
    # Merge sets of all cars and users connected through user_id
    for car_id2, user_id2 in bookings:
        if user_id == user_id2:
            remsp(p, car_id, car_id2)
        if car_id == car_id2:
            remsp(p, user_id, user_id2)

# Get separate sets
sets = get_sets(p)
for s in sets:
    print(s)
```
2 answers
One way of specifying what you want: you want the equivalence classes of the equivalence relation generated by declaring pairs (a, b) and (x, y) equal when either a = x or b = y. The famous union-find algorithm solves exactly the problem of incrementally computing representatives for an equivalence relation as more generating identifications are added.
In your case, you can simply maintain a hash map, e.g. a Python dictionary (or an array if your IDs are dense), that maps a `car_id` to the ID of some pair in the union-find data structure, and similarly for `user_id`. You then iterate over all the pairs, look up the union-find ID in each of these maps, and equate ("union") it with the union-find ID of the pair you're currently considering. If one or both maps don't have an entry for the current pair, insert into the map(s) with the missing entry the representative ID you got back from unioning with the other map's entry, or the current pair's own ID if entries were missing from both maps.
The hash-table look-ups are effectively constant-time, and union-find famously has an inverse-Ackermann amortized time complexity that might as well be constant. This gives an algorithm that's effectively linear in the number of pairs. The above algorithm handles duplicates automatically, but you could also remove duplicates first. If you actually want to keep track of the duplicates (effectively via some surrogate key), you can have the union-find ID be based on the surrogate key, e.g. the index in the list of pairs, or you could just go back over the list afterwards, grouping pairs by their representative IDs.
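To make the bookkeeping concrete, here is a minimal sketch of this approach, assuming the bookings are available as a list of `(car_id, user_id)` tuples. It uses an ordinary union-find with union by size and path halving (not Rem's variant), and the names (`group_bookings`, `car_to_id`, etc.) are just illustrative:

```python
def find(parent, i):
    # Find the representative of element i, halving the path as we go.
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def union(parent, size, i, j):
    # Union by size; returns the representative of the merged set.
    ri, rj = find(parent, i), find(parent, j)
    if ri == rj:
        return ri
    if size[ri] < size[rj]:
        ri, rj = rj, ri
    parent[rj] = ri
    size[ri] += size[rj]
    return ri

def group_bookings(bookings):
    parent = list(range(len(bookings)))  # one union-find element per pair
    size = [1] * len(bookings)
    car_to_id = {}    # car_id  -> union-find ID of some pair containing it
    user_to_id = {}   # user_id -> union-find ID of some pair containing it

    for i, (car, user) in enumerate(bookings):
        rep = i
        if car in car_to_id:
            rep = union(parent, size, rep, car_to_id[car])
        if user in user_to_id:
            rep = union(parent, size, rep, user_to_id[user])
        car_to_id[car] = rep
        user_to_id[user] = rep

    # Group the (deduplicated) pairs by their representative after the fact.
    groups = {}
    for i, pair in enumerate(bookings):
        groups.setdefault(find(parent, i), set()).add(pair)
    return list(groups.values())

bookings = [("car_1", "user_1"), ("car_2", "user_1"), ("car_1", "user_2"),
            ("car_3", "user_3"), ("car_1", "user_2"), ("car_3", "user_3")]
for group in group_bookings(bookings):
    print(group)
```

On this sample input it prints two groups: one containing the bookings for cars 1 and 2 with users 1 and 2, and one containing the booking of car 3 by user 3.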
Union-find is a fairly easy algorithm to implement, though I recommend a slight variation, also easy to implement, called Rem's algorithm. See *Experiments on Union-Find Algorithms for the Disjoint-Set Data Structure*.
I'm pretty sure there are other, quite possibly better, ways of doing this, but this is already relatively simple and efficient.
2 comment threads
Your example is a bipartite graph in adjacency list format. The cars are nodes on the left, the people are nodes on the right. When a person "has" a car, there is an edge between the car and person. Your question is equivalent to asking for the connected components of the graph.
This can be done with standard algorithms like Depth First Search in O(n+m) time (nodes+edges). You can find implementations in libraries like NetworkX.
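As a rough sketch of how this might look with NetworkX (the tuple node labels are just one way to keep car and user IDs distinct, since plain integer IDs overlap):

```python
import networkx as nx

bookings = [(1, 1), (2, 1), (1, 2), (3, 3), (1, 2), (3, 3)]  # (car_id, user_id)

G = nx.Graph()
for car_id, user_id in bookings:
    # Prefix the IDs so car 1 and user 1 become distinct nodes.
    G.add_edge(("car", car_id), ("user", user_id))

# Each connected component is one independent group of cars and users.
for component in nx.connected_components(G):
    cars = {n[1] for n in component if n[0] == "car"}
    users = {n[1] for n in component if n[0] == "user"}
    print(cars, users)
```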
0 comment threads