The best way to estimate this is to measure it, for instance by importing a backup of the production database into a new instance and running your scripts there.
Short of that, you can consult the execution plan of your query to get a rough idea of the amount of work the database will be doing and the estimated cardinalities involved. The problem is that, depending on the complexity of your query, these estimates may be significantly off base: while table size estimates are generally very accurate, estimates of how many rows match a given condition can be way off.
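As a minimal sketch of reading a plan, here is the idea using SQLite's `EXPLAIN QUERY PLAN` (the table and index names are made up for illustration; on your actual engine you would use `EXPLAIN` in PostgreSQL/MySQL or the graphical plan viewer in SQL Server/Oracle, which also report estimated row counts):

```python
import sqlite3

# Toy schema for illustration only; names are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# Ask the database how it intends to execute the query, without running it.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()
for row in plan:
    print(row)
```

Whether the plan shows an index SEARCH or a full-table SCAN is a rough proxy for how much work the query will do, but as noted above, the row-count estimates behind those choices can be wrong.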
ALTER TABLE statements that rewrite the table are generally I/O-bound, so their execution time will be roughly proportional to the table's size on disk. (Metadata-only changes, such as adding a nullable column without a default in many engines, complete almost instantly regardless of table size.)
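That proportionality lets you do a back-of-envelope estimate: a rewriting ALTER TABLE reads the old table and writes the new one, so the time is roughly twice the table size divided by sustained disk throughput. The 50 GB size and 200 MB/s figure below are made-up example numbers; substitute your own:

```python
# Back-of-envelope estimate for an I/O-bound, table-rewriting ALTER TABLE.
# Both numbers are illustrative assumptions, not measurements.
table_size_gb = 50
disk_throughput_mb_s = 200  # sustained sequential read/write

# A full rewrite reads the old table and writes the new one: ~2x the size.
bytes_moved_mb = 2 * table_size_gb * 1024
estimated_seconds = bytes_moved_mb / disk_throughput_mb_s
print(f"rough estimate: {estimated_seconds / 60:.0f} minutes")  # → rough estimate: 9 minutes
```

Treat the result as an order-of-magnitude figure only; indexes on the table must be rebuilt too, which adds further I/O and CPU work on top of this.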
Note that regardless of the method you choose, you should not rely on the estimate being very precise. I recall an extreme case where a script took 2 hours in test but 10 hours in production (!). It later turned out the production database was running on virtualized hardware alongside other database servers, all of which kicked off their nightly backups at the same time, temporarily overloading the hardware ...