Post History

88%
+14 −0
Q&A Does the location of an import statement affect performance in Python?


posted 3y ago by ghost-in-the-zsh‭  ·  edited 3y ago by ghost-in-the-zsh‭

Answer
#3: Post edited by ghost-in-the-zsh · 2020-11-19T12:22:37Z (over 3 years ago)
Fix another typo
#2: Post edited by ghost-in-the-zsh · 2020-11-19T12:19:31Z (over 3 years ago)
Fix typo
#1: Initial revision by ghost-in-the-zsh · 2020-11-19T07:31:31Z (over 3 years ago)
# Summary

The location within a module where an `import` statement is found by the interpreter is *not* expected to cause differences in performance such as speed or memory usage. Modules are singleton objects, which means that *they're only ever loaded once* and will *not* be re-imported or reloaded even if additional `import` statements are encountered.
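
For instance, you can see the caching behavior directly by checking object identity and `sys.modules` (a minimal sketch, separate from the code in question):

```python
import sys
import math

first = math       # keep a reference to the module object

import math        # a second import statement does *not* reload the module...

# ...the interpreter simply returns the cached entry from sys.modules:
assert math is first
assert sys.modules['math'] is first
```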

Therefore, you should follow the best practice of keeping `import` statements at the top of the module. That said, *how* you do the `import` and/or the subsequent attribute lookups *does* have an impact.


# Imports and Attribute Look-ups

Suppose you `import math` and then, every time you need the `sin(...)` function, you call `math.sin(...)`. This will generally be *slower* than doing `from math import sin` and calling `sin(...)` directly, because Python has to look up the function name within the module *every time* it's invoked.

This lookup penalty applies to everything accessed via the dot `.` operator and will be particularly noticeable in a loop. It's therefore advisable to at least grab a local reference to anything you need to use or invoke frequently in *performance-critical* sections.

For example, using the original `import math` example, right before a critical loop, you could do something like this:

```python
# ... within some function
sin = math.sin
for i in range(0, REALLY_BIG_NUMBER):
    x = sin(i)   # faster than: x = math.sin(i)
    # ...
```
    
This is a trivial example, but note that something similar can happen with methods on other objects (e.g. lists, dictionaries, etc.) because methods are still attributes that have to be looked up. (Remember, it applies to *everything* that requires using the dot `.` operator.)
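
As a small illustration of the same idea applied to a method (the function name here is just made up for the example), caching the bound method once avoids the repeated attribute lookup inside the loop:

```python
def build_squares(n: int) -> list:
    result = []
    append = result.append   # look up the bound method once
    for i in range(n):
        append(i * i)         # no `result.append` lookup on each iteration
    return result

squares = build_squares(1_000_000)
```

In practice a list comprehension would be the more idiomatic way to build this particular list; the point is only that `result.append` is an attribute lookup like any other.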


### Benchmark

Here are some benchmarks on two different CPUs.

This one is from an Intel Core i9 (4 cores + HT, 8 logical CPUs) I bought back in 2010:
 
```python
>>> from timeit import timeit
>>> # with lookup
>>> timeit('for i in range(0, 10000): x = math.sin(i)', setup='import math', number=50000)
89.7203312900001

>>> # without lookup
>>> timeit('for i in range(0, 10000): x = sin(i)', setup='from math import sin', number=50000)
78.27029322999988
```

And the same tests repeated on an AMD Ryzen 9 3900X (12 cores + SMT, 24 logical CPUs) I bought earlier this year:

```python
>>> from timeit import timeit
>>> # with lookup
>>> timeit('for i in range(0, 10000): x = math.sin(i)', setup='import math', number=50000)
37.06144698499884

>>> # without lookup
>>> timeit('for i in range(0, 10000): x = sin(i)', setup='from math import sin', number=50000)
26.76371130500047
```

There's a 10+ second difference between the look-up and no-look-up cases on *both* CPUs.

Note that the difference depends on how much time the program spends running this code, which is why the "performance-critical section" qualifier is so important. For most (though not all) other cases, the benchmarks above can be safely ignored, because the actual impact of more sporadic usage will be negligible.
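
If you want to reproduce the comparison as a standalone script rather than in the REPL, something along these lines should work (the loop bound and `number` are arbitrary choices here, and the absolute timings will of course differ on your machine):

```python
from timeit import timeit

# Attribute lookup on every call: math.sin(i)
with_lookup = timeit(
    'for i in range(10000): x = math.sin(i)',
    setup='import math',
    number=5000,
)

# Name imported directly, no per-call module lookup: sin(i)
without_lookup = timeit(
    'for i in range(10000): x = sin(i)',
    setup='from math import sin',
    number=5000,
)

print(f'with lookup:    {with_lookup:.3f}s')
print(f'without lookup: {without_lookup:.3f}s')
```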


# Where to Import and Why

The `import` statements should be kept at the top of the module, as is normally done. Straying from that pattern ***for no good reason*** just makes the code more difficult to go through. For example, module dependencies become harder to find because `import` statements end up scattered throughout the code instead of sitting in a single, easily seen location. (You could say the dependencies are "hidden".)

It may also make a module less reliable for clients and more error-prone for its own developers, because it's easier to forget about dependencies. As a trivial example, suppose you have this in a module:

```python
# ... lots of code above
def fn_j(x: int) -> float:
    import math
    return math.sin(x)
# lots of code below ...
```

Ok, that works. But then you add:

```python
# ... lots of code above
def fn_z(x: int) -> float:
    # BUG: notice the missing, but required, duplicate `import math` here
    return math.cos(x)
```

Clients that call `fn_j` will be fine, but calling `fn_z` will run into a `NameError: name 'math' is not defined`, which is a very avoidable bug and no one wants that.

Ok ...

> But you can catch this in your unit tests!

... I hear you think. Yes, you can, but that's beside the point.
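
For completeness, a minimal sketch of the fix is simply to hoist the dependency to the top of the module, so both functions (and any future ones) can rely on it:

```python
import math   # single, easily seen dependency declaration

def fn_j(x: int) -> float:
    return math.sin(x)

def fn_z(x: int) -> float:
    return math.cos(x)   # no NameError: `math` is bound at module level
```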