Comments on Python looping 300 000 rows

Parent

Python looping 300 000 rows

+3
−1

This is a follow-up to my last question: how do I loop over 300 000 rows and edit each row's string one by one? I have a list of 11-digit numbers stored in a single Excel column, and I need to separate the digits according to this pattern: 2-2-1-3-3.

I used the code below to test the solution on only 20 rows, and it works.

Example: 00002451018 becomes 00 00 2 451 018.

priceListTest contains the column Column1, which holds these 11-digit numbers. I need to loop over all 300 000 rows, use get_slices to reformat each value as in the example above, and store the result in the new column New Value.

The for index, row loop is very slow when I run it over all 300 000 rows. Maybe there is a better method, but I'm new to Python.

Thanks in advance!

for index, row in priceListTest.iterrows():
    # Generator that yields the 2-2-1-3-3 digit groups of an 11-digit number
    def get_slices(n, sizes, n_digits=11):
        for size in sizes:
            n_digits -= size
            # Split off the leading `size` digits; keep the remainder in n
            val, n = divmod(n, 10 ** n_digits)
            yield f'{val:0{size}}'

    n = row['Column1']
    newVar = ' '.join(get_slices(n, [2, 2, 1, 3, 3]))
    priceListTest.at[index, 'New Value'] = newVar
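
For reference, here is a minimal vectorized sketch of the same transformation (the sample frame is hypothetical; it assumes Column1 holds integers and that pandas is available as pd). Working on the whole column with string methods avoids the per-row Python loop entirely:

import pandas as pd

# Hypothetical stand-in for priceListTest
priceListTest = pd.DataFrame({'Column1': [2451018, 12345678901]})

# Zero-pad every value to 11 digits, then slice out the 2-2-1-3-3 groups
s = priceListTest['Column1'].astype(str).str.zfill(11)
priceListTest['New Value'] = (
    s.str[:2] + ' ' + s.str[2:4] + ' ' + s.str[4:5] + ' '
    + s.str[5:8] + ' ' + s.str[8:]
)

print(priceListTest)
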
5 comment threads

Parallel execution (1 comment)
The actual performance issue (5 comments)
Types (1 comment)
Create the function just once (3 comments)
A small note regarding MCVE (1 comment)
Post
+1
−0

I'm struggling to get timeit working correctly, but this is faster in my limited tests:

# Sample inputs; values shorter than 11 digits get zero-padded
l = [123456789, 23456789012, 34567890123]

result = [0, 0, 0]

for idx, row in enumerate(l):
    # Format to a zero-padded 11-character string, then slice the 2-2-1-3-3 groups
    i = f"{row:011}"
    result[idx] = f"{i[:2]}-{i[2:4]}-{i[4:5]}-{i[5:8]}-{i[8:]}"

print(result)
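
As a rough check, here is a minimal timing sketch that runs both approaches through timeit.repeat on the same data; the function names and sample values are illustrative, not taken from either post:

import timeit

def get_slices(n, sizes, n_digits=11):
    # divmod-based generator from the question
    for size in sizes:
        n_digits -= size
        val, n = divmod(n, 10 ** n_digits)
        yield f'{val:0{size}}'

def via_divmod(n):
    return ' '.join(get_slices(n, [2, 2, 1, 3, 3]))

def via_slicing(n):
    s = f"{n:011}"
    return f"{s[:2]} {s[2:4]} {s[4:5]} {s[5:8]} {s[8:]}"

# Illustrative sample: 100 000 eleven-digit numbers
data = [2451018 + k for k in range(100_000)]

for fn in (via_divmod, via_slicing):
    best = min(timeit.repeat(lambda: [fn(n) for n in data], number=1, repeat=3))
    print(f"{fn.__name__}: {best:.3f} s")
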
1 comment thread

Semi-duplicate answer? (1 comment)
NoahTheDuke wrote about 3 years ago

I see now that my solution was already proposed by @hkotsubo, though in a slightly roundabout way. I am still certain that it is a faster method than the nested loop with divmod.