Code Reviews

Measuring the impact of using exceptions instead of return values in an ASP.NET Core application

+4
−0

Context

This Q&A from SO suggests that throwing exceptions is incredibly expensive compared to returning values (return codes):

> … that with return codes instead of exceptions the same program runs less than one millisecond, which means exceptions are at least 30,000 times slower than return codes.

The post is quite old (~2011, .NET 4.0) and I would guess the cost ratio between an exception and a return code is smaller now, but I still expect it to be rather large.

One of the patterns I use in ASP.NET Core applications is for business logic (services) to throw various exceptions. One such exception is a custom NotFoundException, thrown when an entity is not found, which in turn is handled in an ExceptionFilterAttribute:

private void HandleNotFoundException(ExceptionContext context)
{
	var exception = context.Exception as NotFoundException;

	var details = new ProblemDetails()
	{
		Type = "https://tools.ietf.org/html/rfc7231#section-6.5.4",
		Title = "The specified resource was not found.",
		Detail = exception?.Message
	};

	context.Result = new NotFoundObjectResult(details);

	context.ExceptionHandled = true;
}
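For context, a handler like the one above typically sits inside a custom filter that dispatches on the exception type. The original post does not show this part, so the filter name and the `OnException` dispatch below are assumptions, not the author's actual code:

```csharp
// Hypothetical surrounding filter -- only HandleNotFoundException is shown above.
public class ApiExceptionFilterAttribute : ExceptionFilterAttribute
{
	public override void OnException(ExceptionContext context)
	{
		if (context.Exception is NotFoundException)
		{
			HandleNotFoundException(context);
		}
		// ... dispatch to handlers for other custom exception types here
	}
}
```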

The benchmark

ASP.NET Core 5 test code

I expose two actions: one that throws a not-found exception when the entity is missing and one that returns null instead.

To make things as realistic as possible, I am actually performing a database call for each case.

[HttpGet("[action]")]
public string CheckPerformanceWithException()
{
	var s = _exceptionPerformanceTestingService.GetRandomSupplierWithException();
	return s != null ? "Not null" : "Null";
}

[HttpGet("[action]")]
public string CheckPerformanceWithNull()
{
	var s = _exceptionPerformanceTestingService.GetRandomSupplierWithNull();
	return s != null ? "Not null" : "Null";
}

public Supplier GetRandomSupplierWithException()
{
	int randomSupplierId = rand.Next();
	var supplier = _context.ReadSet<Supplier>().FirstOrDefault(s => s.Id == randomSupplierId);
	if (supplier == null)
		throw new NotFoundException();

	return supplier;
}

public Supplier GetRandomSupplierWithNull()
{
	int randomSupplierId = rand.Next();
	var supplier = _context.ReadSet<Supplier>().FirstOrDefault(s => s.Id == randomSupplierId);
	if (supplier == null)
		return null;

	return supplier;
}
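The `RestClient.FetchData` helper used by the benchmark below is not shown in the post; it presumably issues an HTTP GET against one of the two actions above. A minimal sketch of what it might look like (the class body and base URL here are assumptions):

```csharp
public static class RestClient
{
	private static readonly HttpClient Client = new HttpClient();

	// Hypothetical implementation: call one of the two test actions above.
	public static async Task<string> FetchData(bool withException)
	{
		var action = withException
			? "CheckPerformanceWithException"
			: "CheckPerformanceWithNull";

		// The base URL is an assumption; adjust to wherever the test app is hosted.
		var response = await Client.GetAsync($"https://localhost:5001/Test/{action}");
		return await response.Content.ReadAsStringAsync();
	}
}
```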

The benchmark code

I have tried with a single loop first, since the benchmark framework performs many calls anyway (warmup, multiple iterations, etc.).

[MemoryDiagnoser]
public class Program
{
	private const int Loops = 1;

	// execute the request in a loop to get a significant computation time
	private static async Task PerformTest(bool withException)
	{
		for (int i = 0; i < Loops; i++)
		{
			await RestClient.FetchData(withException);
		}
	}

	[Benchmark(Baseline = true)]
	public async Task ComputeBaseline()
	{
		await PerformTest(false);
	}

	[Benchmark(Baseline = false)]
	public async Task ComputeWithException()
	{
		await PerformTest(true);
	}

	static async Task Main(string[] args)
	{
		// for debugging purposes only
		//await RestClient.FetchData(true);
		//await RestClient.FetchData(false);

		var summary = BenchmarkRunner.Run<Program>();
	}
}
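One detail worth noting: BenchmarkDotNet validates the host assembly and refuses to run non-optimized (Debug) builds by default, so assuming the usual project layout the benchmark would be started with:

```shell
dotnet run -c Release
```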

The benchmark results

BenchmarkDotNet=v0.13.1, OS=Windows 10.0.19042.1645 (20H2/October2020Update)
11th Gen Intel Core i5-1145G7 2.60GHz, 1 CPU, 8 logical and 4 physical cores
.NET SDK=5.0.401
  [Host]     : .NET 5.0.14 (5.0.1422.5710), X64 RyuJIT
  DefaultJob : .NET 5.0.14 (5.0.1422.5710), X64 RyuJIT

I will include the results from multiple benchmark runs:

Run #1 (1 loop)

| Method               | Mean     | Error     | StdDev    | Ratio | RatioSD | Allocated |
|----------------------|----------|-----------|-----------|-------|---------|-----------|
| ComputeBaseline      | 6.127 ms | 0.1224 ms | 0.3512 ms | 1.00  | 0.00    | 3 KB      |
| ComputeWithException | 6.004 ms | 0.1196 ms | 0.2724 ms | 0.97  | 0.07    | 4 KB      |

Run #2 (1 loop)

| Method               | Mean     | Error     | StdDev    | Ratio | RatioSD | Allocated |
|----------------------|----------|-----------|-----------|-------|---------|-----------|
| ComputeBaseline      | 6.127 ms | 0.1224 ms | 0.3512 ms | 1.00  | 0.00    | 3 KB      |
| ComputeWithException | 6.004 ms | 0.1196 ms | 0.2724 ms | 0.97  | 0.07    | 4 KB      |

Run #3 (5 loops)

| Method               | Mean     | Error    | StdDev   | Ratio | RatioSD | Allocated |
|----------------------|----------|----------|----------|-------|---------|-----------|
| ComputeBaseline      | 27.28 ms | 0.380 ms | 0.337 ms | 1.00  | 0.00    | 16 KB     |
| ComputeWithException | 28.01 ms | 0.536 ms | 0.502 ms | 1.03  | 0.02    | 17 KB     |

Run #4 (100 loops)

| Method               | Mean     | Error    | StdDev   | Ratio | RatioSD | Allocated |
|----------------------|----------|----------|----------|-------|---------|-----------|
| ComputeBaseline      | 562.0 ms | 11.13 ms | 19.79 ms | 1.00  | 0.00    | 310 KB    |
| ComputeWithException | 564.2 ms | 10.93 ms | 16.02 ms | 1.00  | 0.05    | 343 KB    |

As expected, in almost all cases running with an exception took slightly longer than running without one, but in a real-life example it does not seem to matter much (even if a client abuses the API and generates lots of internal exceptions).

I would appreciate a code review to understand whether my benchmark is correct or whether I missed some cases.


1 answer

+2
−0

Since your code performs a database operation, the cost of that operation likely dominates the execution time. The cost of either a return or an exception may be negligible compared to it.

For example, assume your database operation takes 6 ms, the return takes 1 ns, and the exception is 30,000 times more expensive than the return. The exception then still costs only 30 µs.

With a return, your total execution time is then 6.000001 ms; with an exception, it is 6.03 ms. Compared with the 6 ms baseline, you will barely notice a difference.
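The arithmetic above can be sanity-checked with a quick back-of-envelope script (the 6 ms, 1 ns, and 30,000x figures are the assumed values from the example, not measurements):

```python
# Back-of-envelope check: a 6 ms database call dominates even an
# exception path that is 30,000x more expensive than a plain return.
db_call_s = 6e-3          # assumed database round-trip: 6 ms
return_cost_s = 1e-9      # assumed cost of a plain return: 1 ns
exception_cost_s = return_cost_s * 30_000  # 30 µs

total_with_return = db_call_s + return_cost_s
total_with_exception = db_call_s + exception_cost_s

print(f"with return:       {total_with_return * 1e3:.6f} ms")
print(f"with exception:    {total_with_exception * 1e3:.6f} ms")
print(f"relative overhead: {(total_with_exception / total_with_return - 1) * 100:.2f}%")
```

Even under these pessimistic assumptions, the exception adds well under one percent to the total request time.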
