Welcome to Software Development on Codidact!
Post History
#5: Post edited
Is it okay to use Python operators for TensorFlow tensors?
TL;DR
-----

Is `(a and b)` equivalent to `tf.logical_and(a, b)` in terms of optimization and performance? (`a` and `b` are TensorFlow tensors.)

Details
-------

I use Python with TensorFlow. My priorities are

1. Make the code run fast.
2. Make it readable.

I have working, fast code that, to my taste, looks ugly:

```python
@tf.function
# @tf.function(jit_compile=True)
def my_tf_func():
    # ...
    a = ...  # some TensorFlow tensor
    b = ...  # another TensorFlow tensor
    # currently ugly: prefix notation with tf.logical_and
    c = tf.math.count_nonzero(tf.logical_and(a, b))
    # more readable alternative: infix notation
    c = tf.math.count_nonzero(a and b)
    # ...
```

The code that uses [prefix notation][1] works and runs fast, but I don't find it very readable (it is called prefix notation because the name of the operation, `logical_and`, comes before the operands `a` and `b`).

Can I use [infix notation][2], i.e. the alternative at the end of the code above, with the usual Python operators such as `and`, `+`, `-`, or `==`, and still get all the benefits of TensorFlow on the GPU and compilation with XLA support? Will it compile to the same result?

The same question applies to unary operators such as `not` vs. `tf.logical_not(...)`.

[1]: https://en.wikipedia.org/wiki/Polish_notation
[2]: https://en.wikipedia.org/wiki/Infix_notation

<sub>This question was crossposted at https://stackoverflow.com/questions/77045818/is-it-okay-to-use-python-operators-for-tensorflow-tensors .</sub>
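As background for why the two spellings may not be interchangeable: Python's keywords `and`, `or`, and `not` cannot be overloaded per element, because they are evaluated through truth-testing (`__bool__`) and short-circuiting, whereas the bitwise operators `&`, `|`, and `~` dispatch to overloadable methods (`__and__`, `__or__`, `__invert__`). A minimal sketch of this mechanism in plain Python, using a hypothetical `ToyTensor` class rather than TensorFlow itself:

```python
# Sketch (no TensorFlow needed): why `a and b` differs from an elementwise
# logical AND. `ToyTensor` is a hypothetical stand-in for a tensor-like
# type, not a TensorFlow API.

class ToyTensor:
    def __init__(self, values):
        self.values = list(values)

    def __and__(self, other):
        # `a & b` dispatches here, so a library can define it elementwise.
        return ToyTensor(x and y for x, y in zip(self.values, other.values))

    def __bool__(self):
        # `a and b` first evaluates bool(a); there is no elementwise hook.
        # TensorFlow similarly raises here for non-scalar tensors.
        raise TypeError("truth value of a multi-element tensor is ambiguous")


a = ToyTensor([True, True, False])
b = ToyTensor([True, False, False])

print((a & b).values)  # elementwise via __and__: [True, False, False]

try:
    a and b  # `and` goes through bool(a), which raises
except TypeError as err:
    print(err)
```

This only shows the generic Python mechanism; whether TensorFlow's `&` on boolean tensors compiles to the same graph op as `tf.logical_and` is exactly what the question asks.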
#4: Post edited
#3: Post edited
#2: Post edited
#1: Initial revision