Why would excluding records by creating a temporary table of their primary keys be faster than simply excluding by value?
I have two tables with millions of records. Every so often I need to join them and exclude just a handful of records where a bit(1) column is set to 1 instead of 0.
I can do it with either

WHERE is_excluded != 1

or

WHERE example_table.pk NOT IN
(
    SELECT pk FROM
    (
        SELECT pk FROM example_table
        WHERE is_excluded = 1
    ) AS t
)
For example
UPDATE example_table
SET textfield = 'X'
WHERE textfield = 'Y'
  AND pk NOT IN (SELECT pk FROM (SELECT pk FROM example_table WHERE do_not_touch = 1) AS t);
is faster than
UPDATE example_table
SET textfield = 'X'
WHERE textfield = 'Y'
  AND do_not_touch != 1;
The second way (the NOT IN version) is sometimes way faster, even though it takes much longer to write out.
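A sketch of how the two plans could be compared, assuming MySQL 5.6 or later (where EXPLAIN also accepts UPDATE statements):

-- plan for the NOT IN version
EXPLAIN UPDATE example_table
SET textfield = 'X'
WHERE textfield = 'Y'
  AND pk NOT IN (SELECT pk FROM (SELECT pk FROM example_table WHERE do_not_touch = 1) AS t);

-- plan for the != version
EXPLAIN UPDATE example_table
SET textfield = 'X'
WHERE textfield = 'Y'
  AND do_not_touch != 1;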
Why would the second way be faster?
2 answers
is_excluded = 1 is very different from do_not_touch != 1. Whenever possible, try to structure your data and queries so that you can do an equi-join - that is, compare things using an = comparison. >, <, and != can be really bad because the database will at the very least have to do an index scan, if there's an appropriate index available - if not, full table scan, baby! Wooo! If you can use do_not_touch = 0 that would be nice, but I obviously have no idea what that field contains.
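As a sketch of what that could look like here, assuming the flag really does hold only 0 and 1:

UPDATE example_table
SET textfield = 'X'
WHERE textfield = 'Y'
  AND do_not_touch = 0;  -- equality predicate, which the optimizer can match against an index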
Also, it helps to have an appropriate index. For your second query
UPDATE example_table
SET textfield = 'X'
WHERE textfield = 'Y'
  AND do_not_touch != 1;
it would seem that an index on example_table(textfield, do_not_touch)
would perhaps be helpful.
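A sketch of adding such an index (the index name is just an illustration):

CREATE INDEX idx_textfield_do_not_touch ON example_table (textfield, do_not_touch);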
Why would the second way be faster?
Generally speaking, the first form will perform worse (as well as looking a lot worse) than the second. You are hitting an edge case where the opposite is true, because:
- The not in in your first example is likely to be transformed into an anti-join (something like the sketch after this list). Because you also have "…just a handful of records where a bit(1) column is set to 1…", that anti-join is likely to be fairly fast.
- Bad stats or bad luck means that the optimizer is making a wrong choice when filtering. Perhaps it is choosing a full table scan in the second case, or failing to use a good index.
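As a sketch of the general anti-join shape, shown here as a SELECT for simplicity (not necessarily the exact plan the optimizer produces):

SELECT e.pk
FROM example_table AS e
LEFT JOIN example_table AS x
  ON x.pk = e.pk AND x.do_not_touch = 1  -- join to the handful of excluded rows
WHERE e.textfield = 'Y'
  AND x.pk IS NULL;  -- keep only the rows with no match in the excluded set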
We'd need to know your actual plans/indexes/etc to be able to say more, as several people have mentioned in comments.
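As a sketch of how to gather that in MySQL, one could refresh the table's statistics and then inspect the plan chosen for the != form (a type of ALL in the output indicates a full table scan):

ANALYZE TABLE example_table;  -- refresh statistics in case stale stats are misleading the optimizer

EXPLAIN UPDATE example_table
SET textfield = 'X'
WHERE textfield = 'Y'
  AND do_not_touch != 1;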