Post History
#2: Post edited
For holding ordered sets of keys, there are well-known data structures (the red-black tree, for example) that support O(log(_n_)) lookup and insertion algorithms. Of course this means that there trivially exist algorithms for inserting a sequence of _k_ keys in O(_k_ log(_n_ + _k_)). But if the keys to insert are **given in sequential order**, and it is known that **they will form a contiguous sequence in the final set** (i.e., they are all greater or all lesser than all the keys already in the structure, or they are all between two consecutive keys already in the structure), is there a data structure that can insert them more efficiently, say in O(_k_ + log(_n_)) time, while still supporting O(log(_n_)) lookup?

Intuitively, it seems like after inserting the first key, we already know a lot about where in the tree the remaining keys need to go. And perhaps whatever rebalancing operations are needed can be batched so that the entire operation only has to walk up or down the tree a constant number of times—that's how I came to O(_k_ + log(_n_)) as a target. But I haven't yet found a way to realize these intuitions.
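For what it's worth, the O(_k_) half of that target is easy to realize in isolation: because the keys arrive in sorted order, a perfectly balanced subtree over them can be built in O(_k_) time with the classic midpoint recursion. Below is a minimal Python sketch of just that step (`Node` and `build_balanced` are illustrative names, not from any particular library); the open part of the question is attaching such a subtree to the existing tree with only O(log(_n_)) additional rebalancing work.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    key: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None


def build_balanced(keys: list[int], lo: int = 0, hi: Optional[int] = None) -> Optional[Node]:
    """Build a perfectly balanced BST over the sorted slice keys[lo:hi].

    Each key is visited exactly once, so the whole build is O(k).
    """
    if hi is None:
        hi = len(keys)
    if lo >= hi:
        return None
    mid = (lo + hi) // 2
    root = Node(keys[mid])
    root.left = build_balanced(keys, lo, mid)       # keys below the median
    root.right = build_balanced(keys, mid + 1, hi)  # keys above the median
    return root


def height(t: Optional[Node]) -> int:
    return 0 if t is None else 1 + max(height(t.left), height(t.right))


if __name__ == "__main__":
    k = 1000
    subtree = build_balanced(list(range(k)))
    print(height(subtree))  # 10 == ceil(log2(k + 1)), i.e. perfectly balanced
```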
#1: Initial revision
Search tree supporting efficient bulk sequential insert
For holding ordered sets of keys, there are well-known data structures (the red-black tree, for example) that support O(log(_n_)) lookup and insertion algorithms. Of course this means that there trivially exist algorithms for inserting a sequence of _k_ keys in O(_k_ log(_n_)). But if the keys to insert are **given in sequential order**, and it is known that **they will form a contiguous sequence in the final set** (i.e., they are all greater or all lesser than all the keys already in the structure, or they are all between two consecutive keys already in the structure), is there a data structure that can insert them more efficiently, say in O(_k_ + log(_n_)) time, while still supporting O(log(_n_)) lookup?

Intuitively, it seems like after inserting the first key, we already know a lot about where in the tree the remaining keys need to go. And perhaps whatever rebalancing operations are needed can be batched so that the entire operation only has to walk up or down the tree a constant number of times—that's how I came to O(_k_ + log(_n_)) as a target. But I haven't yet found a way to realize these intuitions.