
Welcome to Software Development on Codidact!


When using the compare function in Array.prototype.sort, how can I avoid an element being processed more than once?

+6
−0

When using the Array.prototype.sort method, we can pass a compare function as an argument. This function can then process the array's elements, so that the comparison is made using some custom criteria.

But I noticed that this can lead to some, let's say, redundancy. For instance, this code:

function getSortKey(item) {
    console.log('getSortKey', item);
    return parseInt(item);
}

const array = ['4', '16', '8', '2', '6'];
array.sort((a, b) => getSortKey(a) - getSortKey(b));

console.log(array);

I've created the getSortKey function just to know when each string is converted to a number during sorting. The output is:

getSortKey 16
getSortKey 4
getSortKey 8
getSortKey 16
getSortKey 8
getSortKey 16
getSortKey 8
getSortKey 4
getSortKey 2
getSortKey 8
getSortKey 2
getSortKey 4
getSortKey 6
getSortKey 8
getSortKey 6
getSortKey 4
[ '2', '4', '6', '8', '16' ]

This means that all elements were processed by getSortKey more than once, which shouldn't be necessary, since each string always converts to the same number.

This was tested in Chrome. Different browsers/runtimes/implementations may use different sorting algorithms, so the exact output can vary, but testing in other browsers shows the same behaviour: the function is called more than once for each element.


The example above was just to show this specific behaviour: the getSortKey function is called many times for the same elements.

But let's suppose that getSortKey is an expensive operation (it takes a lot of time and/or memory, etc.), the array has lots of elements, and these function calls are a bottleneck that needs to be fixed. Ideally, getSortKey would process each element just once. How can that be done?


2 answers


+7
−0

PS: for small arrays, and/or if the function is fast and doesn't cause performance bottlenecks, none of the below is really necessary (see the analysis at the end). That said, let's see how to solve it.

I'm going to suggest two ways to make sure that the getSortKey function processes each element just once: memoization and the Schwartzian transform.


Memoization

The idea of memoization is to store the already-computed results; if the function is called again with the same argument, we return the pre-computed value instead of calculating it again (which is exactly what we need to avoid processing the same value twice). One way to do it:

const computed = {};
function getSortKeyMem(item) {
    if (!(item in computed)) { // "in" check, so falsy results (e.g. 0) aren't recomputed
        console.log('getSortKey', item); // log only when processing the value
        computed[item] = parseInt(item);
    }
    return computed[item];
}

let array = ['4', '16', '8', '2', '6'];
array.sort((a, b) => getSortKeyMem(a) - getSortKeyMem(b));
console.log(array);

The output is:

getSortKey 16
getSortKey 4
getSortKey 8
getSortKey 2
getSortKey 6
[ '2', '4', '6', '8', '16' ]

Now, each element was processed just once. When the function was called again, it used the pre-computed value.

Of course there are more sophisticated ways to do it, but I'm not focusing on "the best way to implement memoization"; I'm just showing that, if you use it, each element is processed only once.
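For illustration, one of those more sophisticated ways (not from the original answer) is a reusable memoize helper. This sketch uses a Map, so falsy computed values and inherited object properties don't cause spurious recomputation; it assumes a single-argument function:

```javascript
// A generic memoization helper: wraps a one-argument function and
// caches its results in a Map, keyed by the argument itself.
function memoize(fn) {
    const cache = new Map();
    return function (arg) {
        if (!cache.has(arg)) {
            cache.set(arg, fn(arg));
        }
        return cache.get(arg);
    };
}

// Wrap the key function once; each distinct element is computed only once.
const getSortKey = memoize(item => parseInt(item, 10));

const array = ['4', '16', '8', '2', '6'];
array.sort((a, b) => getSortKey(a) - getSortKey(b));
console.log(array); // [ '2', '4', '6', '8', '16' ]
```

Note that the cache keeps growing for each distinct argument, so for long-lived programs you may want a bounded cache instead.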


Schwartzian Transform

The Schwartzian transform (popularized by Randal Schwartz in Perl) is a version of Lisp's decorate-sort-undecorate pattern.

It basically consists of computing the function value for all elements and putting the results in the array (or in another, temporary one) - that's the decorate step. Then the elements are sorted using the computed values (the sort step). Finally, the computed values are removed (the undecorate step), leaving the original elements sorted.

Using the same function above (but without memoization), and applying the Schwartzian transform:

function getSortKey(item) {
    console.log('getSortKey', item);
    return parseInt(item);
}

const array = ['4', '16', '8', '2', '6'];

// decorate: compute the key for each element and store both in the array
for (let i = 0; i < array.length; i++) {
    array[i] = [ array[i], getSortKey(array[i]) ];
}

// sort by the computed value
array.sort((a, b) => a[1] - b[1]);

// undecorate: remove the keys, leaving only the original elements
for (let i = 0; i < array.length; i++) {
    array[i] = array[i][0];
}

console.log(array);

The output is:

getSortKey 4
getSortKey 16
getSortKey 8
getSortKey 2
getSortKey 6
[ '2', '4', '6', '8', '16' ]

Therefore, getSortKey was called just once for each element. If the array has repeated elements, there would be repeated calls to getSortKey, but that's still far fewer than before (and those could be avoided with memoization).

In the first for loop I replace each element with an array containing the element itself and the respective result of getSortKey. When sorting, I use those results as the sort key, and in the second for loop I remove them. In the end, the array is correctly sorted.

Another approach (as suggested in the comments) is to use an object instead of an array in the decorate step, improving readability:

function getSortKey(item) {
    console.log('getSortKey', item);
    return parseInt(item);
}

const array = ['4', '16', '8', '2', '6'];

for (let i = 0; i < array.length; i++) {
    // object instead of array
    array[i] = { original: array[i], sortKey: getSortKey(array[i]) };
}
array.sort((a, b) => a.sortKey - b.sortKey);
for (let i = 0; i < array.length; i++) {
    array[i] = array[i].original;
}

console.log(array);

You can also use map to create another array, instead of using for loops:

function getSortKey(item) {
    console.log('getSortKey', item);
    return parseInt(item);
}

let array = ['4', '16', '8', '2', '6'];
array = array
    // decorate: compute the key for each element and store both in a new array
    .map(e => [ e, getSortKey(e) ])
    // sort by the computed value
    .sort((a, b) => a[1] - b[1])
    // undecorate: remove the keys, leaving only the original elements
    .map(e => e[0]);

console.log(array);

The downside is that each map call creates a new array, which makes this approach consume more memory than the previous solutions.
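As an aside (not from the original answer), the map-based transform can also decorate with objects and use destructuring, which some find more readable; the memory trade-off is the same as with the array-pair version:

```javascript
function getSortKey(item) {
    return parseInt(item, 10);
}

let array = ['4', '16', '8', '2', '6'];
array = array
    // decorate: wrap each element in an object holding its sort key
    .map(original => ({ original, sortKey: getSortKey(original) }))
    // sort by the computed key
    .sort((a, b) => a.sortKey - b.sortKey)
    // undecorate: unwrap the original elements
    .map(({ original }) => original);

console.log(array); // [ '2', '4', '6', '8', '16' ]
```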


Tests

I've created a test in JSBench and ran it in Chrome, and did the same test on my machine using Benchmark.js.

On my machine, memoization was by far the fastest, and the Schwartzian solutions came second (the ones without map were usually slightly better). In the browser, the Schwartzian solutions were fastest (again, avoiding map helped) and memoization came second. In both environments, a "pure" sort (no Schwartzian transform and no memoization) was always the slowest.

But as I said, this makes a significant difference only if the function is quite "expensive"/slow and the array has lots of elements. For a small array, and/or a fast function, both the Schwartzian transform and memoization were actually slower than a plain, non-memoized sort.

Of course, speed is not the only concern: there's the extra memory consumption (especially when using map, as each call returns a new array; even without it, there's the cost of all the extra arrays/objects that store the sort keys), and there's also the increase in code complexity, which affects maintainability. All these factors must be weighed when deciding whether to use these solutions: depending on the situation, they may or may not be relevant. YMMV.

+3
−0

Create a hash map and precalculate the sort key in it:

// Set up: create mock input
const u = ['4', '16', '8', '2', '6'];

function expensive_key_fn(x) {
  console.log("Doing expensive operation on: " + x);
  return Number(x); // the numeric sort key
}

// Precompute the sort key for every element
const sortKey = {};
u.forEach(function(i) {
  sortKey[i] = expensive_key_fn(i);
});

// Use the precomputed keys in the comparator
u.sort((a, b) => sortKey[a] - sortKey[b]);

// Show result
console.log(u);

This is basically a poor man's memoization: in fact, simple memoization is often implemented just like this, with a hash map, but lazily, to keep up the illusion/abstraction of a "function call". However, I don't find that abstraction very helpful in this context, so despite being familiar with memoization, I often prefer this idiom simply because it's simpler, more accessible to novice programmers who might read my code, and carries less cognitive load for me.

Constructing the hash map is O(N), which pales in comparison to the O(N log N) comparisons of the sort (assuming the JS engine uses an efficient sorting algorithm), so the work of building the map is negligible. There is an extra O(N) memory cost for the map, but it stores only the keys and the computed result values, not copies of whole elements (relevant if the elements are large).
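To make the savings concrete, here is a small sketch (hypothetical key function; the uncached call count varies by engine) that counts how many times the key function runs with and without the precomputed map:

```javascript
// Count invocations of the key function to compare both approaches.
let calls = 0;
function key(x) {
    calls++;
    return Number(x);
}

const data = ['4', '16', '8', '2', '6'];

// Without precomputation: the comparator calls key() on every comparison,
// so each element is typically processed several times.
calls = 0;
[...data].sort((a, b) => key(a) - key(b));
const withoutCache = calls;

// With precomputation: exactly one call per element.
calls = 0;
const keys = {};
data.forEach(x => { keys[x] = key(x); });
[...data].sort((a, b) => keys[a] - keys[b]);
const withCache = calls;

console.log(withoutCache, withCache); // withCache is exactly data.length
```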
