Post History
Answer
#1: Initial revision
> fastest and most efficient method (time and memory-wise)

Time and memory pull against each other in a simple way: the number of parallel jobs divides the wall-clock time (up to the number of cores, minus overhead, and assuming the storage is fast enough not to be an I/O bottleneck, e.g. an SSD), and multiplies the peak memory used.

> stored in a list named paths

Is that a Python list, or do you mean a newline-delimited file? For the latter, an easy bash way to parallelize is:

```
xargs --max-procs=16 --max-args=1 command_to_convert_one < pathlist_file
```

Alternatively, to list all the files recursively (just in case), you can use `find Main_dir -type f -name '*.npy'`.

> convert all these `.npy` files to image

This is the most unclear part of your question: `.npy` files are data files (NumPy's binary format, positioned as an alternative to CSV). There are many ways to convert them to images: plotting the data with matplotlib or any other library, interpreting the data as image pixels, or something else entirely. Without knowing what data is in them and what you are trying to achieve, it is impossible to give a good answer.