
Welcome to Software Development on Codidact!

Will you help us build our independent community of developers helping developers? We're small and trying to grow. We welcome questions about all aspects of software development, from design to code to QA and more. Got questions? Got answers? Got code you'd like someone to review? Please join us.

Best practices for writing functions that support both TensorFlow execution modes, eager and graph

+0
−0

I regularly run into the problem that I have a Python function that I want to use in both eager and graph execution mode, so I have to adjust the code to handle both situations. Here are two examples:

import math

import tensorflow as tf


def lin_to_db(x: float | tf.Tensor) -> float | tf.Tensor:
    # convert a signal-to-noise ratio (SNR) from linear scale to dB
    if tf.is_tensor(x):
        return tf.math.log(x) * (10. / tf.math.log(10.))
    else:
        return math.log10(x) * 10.


def cast_to_int_if_eager(x: tf.Variable) -> int | tf.Variable:
    return int(x) if tf.executing_eagerly() else x

Are there best practices for such functions? Or perhaps helpful predefined functions in TensorFlow?
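For context, the two branches of lin_to_db are meant to agree numerically, since log(x)/log(10) = log10(x). A quick sketch with only the standard library (no TensorFlow needed; the function names here are introduced just for this check):

```python
import math


def lin_to_db_plain(x: float) -> float:
    """Convert an SNR from linear scale to dB: 10 * log10(x)."""
    return 10.0 * math.log10(x)


def lin_to_db_change_of_base(x: float) -> float:
    """Same quantity via natural log and a change of base,
    mirroring the graph-mode branch in the question."""
    return math.log(x) * (10.0 / math.log(10.0))


print(lin_to_db_plain(100.0))  # 20.0
print(lin_to_db_change_of_base(100.0))
```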


2 comment threads

tensorflow-graph-mode (2 comments)
Can't you just always use tensorflow functions? (3 comments)

1 answer

+1
−1

TensorFlow operations typically work on both eager and graph tensors, which means you can use a single implementation:

def lin_to_db(x: float | tf.Tensor) -> tf.Tensor:
    """ convert signal to noise ratio (SNR) from linear to dB """
    return 10. * tf.math.log(x) / tf.math.log(10.)

As you correctly pointed out, this does affect the output in the sense that it will always be a tf.Tensor, even if the input is a float. You seem to regard this as a disadvantage, but I would argue it is actually an advantage: no matter what the input type is (float, tf.Tensor, np.ndarray, ...), the output always has the same, known type. If you need to convert the resulting tensor to some other type, you can always do so as follows:

lin_to_db(x).numpy().item()

Note that this code works for any x that can be (implicitly) converted to a tf.Tensor.
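The claim above can be demonstrated with a short sketch, assuming TensorFlow is installed (lin_to_db_graph is a wrapper name introduced here purely for illustration):

```python
import tensorflow as tf


def lin_to_db(x) -> tf.Tensor:
    """Convert an SNR from linear scale to dB: 10 * log10(x)."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    return 10.0 * tf.math.log(x) / tf.math.log(tf.constant(10.0))


@tf.function  # traced into a graph; the very same function body works here too
def lin_to_db_graph(x):
    return lin_to_db(x)


print(lin_to_db(100.0).numpy().item())        # eager execution, ~20.0
print(lin_to_db_graph(100.0).numpy().item())  # graph execution, ~20.0
```

Because the function only uses TensorFlow ops, there is no branching on execution mode at all; tracing with tf.function reuses the exact same code path.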


0 comment threads
