


Python looping 300 000 rows


This is a follow-up to my last question.
How do I loop over 300 000 rows and edit each row's string one by one? I have a list of 11-digit numbers stored in a single column in Excel, and I need to separate the digits according to this pattern: 2-2-1-3-3.

I used the code below to test the solution on only 20 rows, and it works.

Example: 00002451018 becomes 00 00 2 451 018.

priceListTest contains the column Column1, which holds these 11-digit numbers. Somehow I need to loop over all 300 000 rows, use get_slices to reformat each row as in the example above, and store the result in a new column, New Value.

The for index, row loop works very slowly when I have to run it over 300 000 rows. Maybe there is a better method, but I'm new to Python.

Thanks in advance!

for index, row in priceListTest.iterrows():
    # Note: defining the function inside the loop re-creates it on every
    # iteration. It splits an 11-digit number into groups of the given sizes.
    def get_slices(n, sizes, n_digits=11):
        for size in sizes:
            n_digits -= size
            val, n = divmod(n, 10 ** n_digits)
            yield f'{val:0{size}}'

    n = row['Column1']
    newVar = ' '.join(get_slices(n, [2, 2, 1, 3, 3]))
    priceListTest.at[index, 'New Value'] = newVar  # .at takes a scalar label, not a list
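
Since the question asks whether there is a better method, here is a minimal sketch of a vectorized alternative using pandas string methods. It assumes Column1 holds non-negative integers of at most 11 digits; the helper name add_grouped_column is hypothetical, not from the question.

import pandas as pd

def add_grouped_column(df):
    # Zero-pad each number to 11 characters once, then slice fixed
    # positions (2-2-1-3-3) instead of looping with iterrows.
    s = df['Column1'].astype('int64').astype(str).str.zfill(11)
    df['New Value'] = (s.str[0:2] + ' ' + s.str[2:4] + ' ' + s.str[4:5]
                       + ' ' + s.str[5:8] + ' ' + s.str[8:11])
    return df

# Tiny frame mirroring the question's example:
demo = pd.DataFrame({'Column1': [2451018]})
print(add_grouped_column(demo)['New Value'].iloc[0])  # 00 00 2 451 018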

5 comment threads

Parallel execution (1 comment)
The actual performance issue (5 comments)
Types (1 comment)
Create the function just once (3 comments)
A small note regarding MCVE (1 comment)
The actual performance issue
Alexei wrote over 2 years ago

Can you provide the actual running time of your code? For example: run it on 10K rows and tell us how long it took to finish.
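
A minimal sketch of such a timing, using synthetic stand-in data rather than the asker's file, with the question's get_slices defined once at module level:

import time
import pandas as pd

def get_slices(n, sizes, n_digits=11):
    # Generator from the question, defined a single time.
    for size in sizes:
        n_digits -= size
        val, n = divmod(n, 10 ** n_digits)
        yield f'{val:0{size}}'

# Hypothetical 10k-row sample standing in for the real DataFrame.
sample = pd.DataFrame({'Column1': [2451018] * 10_000})

start = time.perf_counter()
for index, row in sample.iterrows():
    sample.at[index, 'New Value'] = ' '.join(get_slices(row['Column1'], [2, 2, 1, 3, 3]))
print(f'10k rows took {time.perf_counter() - start:.1f} sec')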

sfrow wrote over 2 years ago

Actually, you are right. For 10k rows it takes 14 seconds, which is not so bad. For the whole list of 300k it would take 7-8 minutes.

elgonzo wrote over 2 years ago · edited over 2 years ago

If 14 sec for 10k rows "is not so bad", then 7 min for 300k rows is no worse and not so bad either: 300k rows is thirty batches of 10k, and 30 × 14 sec ≈ 420 sec ≈ 7 min, so each 10k batch still takes around 14 sec.

sfrow wrote over 2 years ago

The first time I ran it, it took almost 1 hour, but I found that the file contained some bad data in the column, and that was what caused the long run.

hkotsubo wrote over 2 years ago

sfrow I've run a test and, surprisingly, using string slices is faster than doing the math. Try changing the algorithm and see if it makes a difference.
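
hkotsubo's actual benchmark isn't shown here, so the sketch below is a reconstruction of the comparison: the question's divmod arithmetic versus plain string slicing, timed with timeit.

from timeit import timeit

def slices_math(n, sizes=(2, 2, 1, 3, 3), n_digits=11):
    # Arithmetic version from the question: peel groups off with divmod.
    for size in sizes:
        n_digits -= size
        val, n = divmod(n, 10 ** n_digits)
        yield f'{val:0{size}}'

def slices_str(n, sizes=(2, 2, 1, 3, 3), n_digits=11):
    # String version: zero-pad once, then take plain slices.
    s = str(n).zfill(n_digits)
    pos = 0
    for size in sizes:
        yield s[pos:pos + size]
        pos += size

# Both variants must agree on the question's example before timing them.
assert ' '.join(slices_math(2451018)) == ' '.join(slices_str(2451018))
print(timeit(lambda: ' '.join(slices_math(2451018)), number=100_000))
print(timeit(lambda: ' '.join(slices_str(2451018)), number=100_000))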