Post History
Answer
#1: Initial revision
The best way to estimate this is to measure it, for instance by importing a backup of the production database into a new instance and running your scripts there.

Short of that, you could consult the execution plan of your query to get a rough idea of the amount of work the database will be doing and the estimated cardinalities involved (see the sketch below). The problem is that, depending on the complexity of your query, these estimates may be significantly off base. For instance, while table size estimates are generally very accurate, estimates of how many rows match a certain condition can be way off. Simple `alter table` statements are generally I/O bound, so their execution time will be proportional to the table's size on disk.

Note that regardless of the method you choose, you should not rely on the estimate being very precise. I recall an extreme case where a script took 2 hours in test, but 10 hours in production (!). It later turned out that the database was on virtualized hardware together with other database servers, all of which were running their nightly backup at the same time, temporarily overloading the hardware ...
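As a minimal sketch of the execution-plan approach, assuming PostgreSQL (the `orders` table and the statement are made up for illustration; other databases offer similar facilities, such as `EXPLAIN` in MySQL or the graphical execution plans in SQL Server):

```sql
-- EXPLAIN shows the planner's estimated cost and row counts
-- without actually running the statement:
EXPLAIN
UPDATE orders SET status = 'archived' WHERE created_at < '2020-01-01';

-- EXPLAIN ANALYZE executes the statement and reports actual timings
-- and row counts alongside the estimates, which helps you spot
-- estimates that are way off. Wrap it in a transaction and roll back
-- so the test run doesn't modify your data:
BEGIN;
EXPLAIN ANALYZE
UPDATE orders SET status = 'archived' WHERE created_at < '2020-01-01';
ROLLBACK;
```

Comparing the estimated rows against the actual rows in the `EXPLAIN ANALYZE` output is a quick way to judge how much to trust the estimate before extrapolating to production data volumes.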