Welcome to Software Development on Codidact!

Will you help us build our independent community of developers helping developers? We're small and trying to grow. We welcome questions about all aspects of software development, from design to code to QA and more. Got questions? Got answers? Got code you'd like someone to review? Please join us.

# Best practices to write functions for both execution modes in Tensorflow, eager and graph mode


I regularly run into the problem that I have a Python function that I want to use in both eager and graph execution mode. I therefore have to adjust the code so that it can handle both situations. Here are two examples:

```python
import math

import tensorflow as tf


def lin_to_db(x: float | tf.Tensor) -> float | tf.Tensor:
    """Convert a signal-to-noise ratio (SNR) from linear to dB."""
    if tf.is_tensor(x):
        return tf.math.log(x) * (10. / tf.math.log(10.))
    else:
        return math.log10(x) * 10.
```
```python
def cast_to_int_if_eager(x: tf.Variable) -> int | tf.Variable:
    return int(x) if tf.executing_eagerly() else x
```

Are there best practices for writing such functions? Or are there helpful predefined functions in TensorFlow?




TensorFlow operations typically work on both eager and graph tensors. This means that you can just use the following single implementation:

```python
def lin_to_db(x: float | tf.Tensor) -> tf.Tensor:
    """Convert a signal-to-noise ratio (SNR) from linear to dB."""
    return 10. * tf.math.log(x) / tf.math.log(10.)
```

As you correctly pointed out, this does affect the output in the sense that the output will always be a `tf.Tensor`, even if the input is a `float`. You seem to depict this as a disadvantage, but I would argue that it is actually an advantage: no matter what type the input is (`float`, `tf.Tensor`, `np.ndarray`, ...), the output always has the same, known type. If you need to convert the resulting tensor to some other type, you can always do so as follows:

```python
lin_to_db(x).numpy().item()
```

Note that this code works for any `x` that can be (implicitly) converted to a `tf.Tensor`.
