

Alternatives to `EXPLAIN ANALYZE` for queries that won't complete


I have a large and complex PostgreSQL `SELECT` query that I would like to make faster. `EXPLAIN` suggests it should run quickly, with the worst parts being scans of a few thousand rows. When run, it does not complete in any reasonable amount of time (if `statement_timeout` is set to infinite, it eventually still gives up, complaining about having exceeded temporary file size limits, suggesting something is loading way more data than expected).

Usually, this would suggest to me that `EXPLAIN`'s estimates are horribly inaccurate in some way, and I would try `EXPLAIN ANALYZE` to see what's really happening. But since this particular query is so bad I can't run it at all, I also can't run it with `EXPLAIN ANALYZE`.

What other tools are at my disposal for this sort of situation? Can I ask PostgreSQL for some sort of partial or time-limited `EXPLAIN ANALYZE`, as in "run this for five minutes, then stop and tell me what you spent those five minutes doing"? If I start commenting out bits of the query until it goes fast again, can I rely on the results being accurate, or does PostgreSQL's optimizer work more globally than that?

(Query itself omitted because I've run into this situation a few times, and would like general strategies rather than an answer for this specific query.)

1 comment thread

Would running the query on a new table -- same DDL but with, say, 10 rows copied into it from the rea... (4 comments)
Post

Note: I have limited experience with PostgreSQL but extensive experience with SQL Server, so not everything below may apply to PostgreSQL.

I have a large and complex PostgreSQL query that I would like to make faster. (..) When run, it does not complete in any reasonable amount of time

I would assume we are talking about a `SELECT` statement. One quick change to try is adding a `LIMIT` and seeing whether the query completes when only a small number of rows is returned.
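
A minimal sketch of that check, with hypothetical table and column names standing in for the real query:

```sql
-- Hedged sketch; the tables and columns here are hypothetical stand-ins
-- for the real query. Adding a LIMIT and running under EXPLAIN ANALYZE
-- shows whether a small result set completes and what the executor
-- actually spent its time on.
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.id, o.created_at, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.status = 'open'
LIMIT 100;
```

If even a very small `LIMIT` never returns, the time is probably being spent before any rows can be produced at all (for example in a hash or sort over an unexpectedly large intermediate result).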

However, I think the real issue is that the query has become too large and complex. It should be broken into multiple statements with the help of temporary tables, and the whole thing can be encapsulated in a stored procedure.

The code structure can look like the following:

  • create the temporary table (i.e. empty, containing the output structure)
  • minimally populate the temporary table, for example filling only a few columns with values (the rest remain `NULL`)
  • add `UPDATE` statements to deal with the rest of the columns, defining as many `UPDATE`s as are needed to reach good enough performance (see the sketch after this list)
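
A hedged sketch of this structure in PostgreSQL, using hypothetical table and column names:

```sql
-- Hedged sketch of the decomposition; all names are hypothetical.

-- 1. Create the temporary table (empty, with the output structure).
CREATE TEMP TABLE report (
    order_id bigint PRIMARY KEY,
    total    numeric,
    customer text
);

-- 2. Minimally populate it: only a few columns get values, the rest stay NULL.
INSERT INTO report (order_id)
SELECT id FROM orders WHERE status = 'open';

-- 3. Fill in the remaining columns with separate, smaller UPDATEs.
UPDATE report r
SET total = (SELECT sum(i.amount) FROM order_items i WHERE i.order_id = r.order_id);

UPDATE report r
SET customer = c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.id = r.order_id;

SELECT * FROM report;
```

Each intermediate step can then be timed and examined with `EXPLAIN ANALYZE` on its own, which also helps pinpoint where the original plan goes wrong.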

Another advantage of this approach is readability (smaller queries) and maintainability (e.g. it is easier to change when a column is added, since only a small query is affected).

Other things to consider:

  • historical data - if the query deals with aggregates over historical data, these can be precomputed into persisted tables
  • indexes - consider adding covering indexes (see the sketch below)
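
A hedged sketch of both ideas, again with hypothetical names (the `INCLUDE` clause requires PostgreSQL 11 or later):

```sql
-- Hedged sketch; names are hypothetical.

-- Precompute a historical aggregate into a persisted table instead of
-- recomputing it inside the big query.
CREATE TABLE daily_order_totals AS
SELECT o.created_at::date AS day, sum(i.amount) AS total
FROM orders o
JOIN order_items i ON i.order_id = o.id
GROUP BY o.created_at::date;

-- A covering index: INCLUDE lets the listed columns be read from the
-- index alone, without touching the table.
CREATE INDEX idx_orders_status_covering
    ON orders (status)
    INCLUDE (customer_id, created_at);
```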

1 comment thread

Emily wrote about 1 year ago

Edited `SELECT` into the question for clarity, thanks.

The query in question is mostly autogenerated at runtime, and is a search with arbitrary user inputs, but the complexity is entirely due to the application I work on having abnormally complicated visibility permission rules (it's bad even without any user-provided filters). I don't know if a big cache of things-each-user-can-see is reasonable; I'd have to investigate more but it is a thought that's crossed my mind before.

(I reran with `EXPLAIN ANALYZE ... LIMIT 1` while typing this reply, and got an error that I did not get a chance to read because I instantly closed it by hitting space at the wrong time. Shortly afterward, our oncall person got alerted that a database server ran out of temp file space. Whoops.)

Alexei wrote about 1 year ago

Ref. to "a big cache of things-each-user-can-see": in your case it is reasonable to pre-compute the user access to some entities, if these change rarely (which is most likely the case). I have seen this done in an application and it can work decently.

I would recommend trying this for 2-3 entities and getting rid of some of the subqueries (or whatever is used in the big `SELECT`).
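
A rough sketch of what such precomputation could look like, assuming a hypothetical group-based permission model and a materialized view (the table names, columns, and refresh strategy are not from the original discussion):

```sql
-- Hedged sketch of precomputing "things each user can see"; the tables,
-- columns, and refresh strategy are hypothetical.
CREATE MATERIALIZED VIEW user_visible_documents AS
SELECT u.id AS user_id, d.id AS document_id
FROM users u
JOIN memberships m ON m.user_id = u.id
JOIN documents d ON d.group_id = m.group_id;

CREATE INDEX ON user_visible_documents (user_id, document_id);

-- Refresh when permissions change (or on a schedule); the generated
-- search query can then join against this view instead of repeating
-- the permission subqueries.
REFRESH MATERIALIZED VIEW user_visible_documents;
```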