
Post History

Best practices to write functions for both execution modes in Tensorflow, eager and graph mode


posted 1y ago by mr Tsjolder

Answer
#1: Initial revision by mr Tsjolder · 2023-09-09T13:42:49Z (about 1 year ago)
Tensorflow functions should typically work on both eager and graph tensors.
This means that you can just use the following implementation:
```python
import tensorflow as tf

def lin_to_db(x: float | tf.Tensor) -> tf.Tensor:
    """Convert a signal-to-noise ratio (SNR) from linear scale to dB."""
    return 10. * tf.math.log(x) / tf.math.log(10.)
```

As you correctly pointed out, this does affect the output in the sense that the output will always be a `tf.Tensor`, even if the input is a `float`.
You seem to regard this as a disadvantage, but I would argue that it is actually an advantage: no matter what type the input is (`float`, `tf.Tensor`, `np.ndarray`, ...), the output will always have the same, known type.
If you need the result as some other type, you can always convert it as follows:
```python
lin_to_db(x).numpy().item()
```
Note that this code works for any `x` that can be (implicitly) converted to a `tf.Tensor`.