Welcome to Software Development on Codidact!
Measuring the impact of using exceptions instead of return values in an ASP.NET Core application
## Context

[This Q&A from SO](https://stackoverflow.com/questions/891217/how-expensive-are-exceptions-in-c) suggests that throwing exceptions is incredibly expensive compared to returning values (return codes):

> that with return codes instead of exceptions the same program runs less than one millisecond, which means exceptions are at least 30,000 times slower than return codes.

The post is quite old (~2011, .NET 4.0) and I guess the cost ratio between an exception and a return code is smaller now, but I still expect it to be rather large.

One of the patterns I use in ASP.NET Core applications is for the business logic (services) to throw various exceptions. One such exception is a custom `NotFoundException`, thrown when an entity is not found, which in turn is handled in an `ExceptionFilterAttribute`:

```c#
private void HandleNotFoundException(ExceptionContext context)
{
    var exception = context.Exception as NotFoundException;

    var details = new ProblemDetails()
    {
        Type = "https://tools.ietf.org/html/rfc7231#section-6.5.4",
        Title = "The specified resource was not found.",
        Detail = exception?.Message
    };

    context.Result = new NotFoundObjectResult(details);
    context.ExceptionHandled = true;
}
```

## The benchmark

### ASP.NET Core 5 test code

I expose two actions: one that triggers a not-found exception when the entity is not found, and one that returns `null` instead. To make things as realistic as possible, I actually perform a database call in each case.

```c#
[HttpGet("[action]")]
public string CheckPerformanceWithException()
{
    var s = _exceptionPerformanceTestingService.GetRandomSupplierWithException();
    return s != null ? "Not null" : "Null";
}

[HttpGet("[action]")]
public string CheckPerformanceWithNull()
{
    var s = _exceptionPerformanceTestingService.GetRandomSupplierWithNull();
    return s != null ? "Not null" : "Null";
}

public Supplier GetRandomSupplierWithException()
{
    int randomSupplierId = rand.Next();

    var supplier = _context.ReadSet<Supplier>().FirstOrDefault(s => s.Id == randomSupplierId);
    if (supplier == null)
        throw new NotFoundException();

    return supplier;
}

public Supplier GetRandomSupplierWithNull()
{
    int randomSupplierId = rand.Next();

    var supplier = _context.ReadSet<Supplier>().FirstOrDefault(s => s.Id == randomSupplierId);
    if (supplier == null)
        return null;

    return supplier;
}
```

### The benchmark code

I have tried with a single loop, since the benchmark framework performs many calls anyway (warmup etc.).

```c#
[MemoryDiagnoser]
public class Program
{
    private const int Loops = 1;

    // execute the call in a loop to get a significant computation time
    private static async Task PerformTest(bool withException)
    {
        for (int i = 0; i < Loops; i++)
        {
            await RestClient.FetchData(withException);
        }
    }

    [Benchmark(Baseline = true)]
    public async Task ComputeBaseline()
    {
        await PerformTest(false);
    }

    [Benchmark(Baseline = false)]
    public async Task ComputeWithException()
    {
        await PerformTest(true);
    }

    static async Task Main(string[] args)
    {
        // for debugging purposes only
        //await RestClient.FetchData(true);
        //await RestClient.FetchData(false);

        var summary = BenchmarkRunner.Run<Program>();
    }
}
```

### The benchmark results

> BenchmarkDotNet=v0.13.1, OS=Windows 10.0.19042.1645 (20H2/October2020Update)
> 11th Gen Intel Core i5-1145G7 2.60GHz, 1 CPU, 8 logical and 4 physical cores
> .NET SDK=5.0.401
> [Host] : .NET 5.0.14 (5.0.1422.5710), X64 RyuJIT
> DefaultJob : .NET 5.0.14 (5.0.1422.5710), X64 RyuJIT

I will include the results from multiple benchmark runs.

1. Run #1 (1 loop)

| Method | Mean | Error | StdDev | Ratio | RatioSD | Allocated |
|--------------------- |---------:|----------:|----------:|------:|--------:|----------:|
| ComputeBaseline | 6.127 ms | 0.1224 ms | 0.3512 ms | 1.00 | 0.00 | 3 KB |
| ComputeWithException | 6.004 ms | 0.1196 ms | 0.2724 ms | 0.97 | 0.07 | 4 KB |
2. Run #2 (1 loop)

| Method | Mean | Error | StdDev | Ratio | RatioSD | Allocated |
|--------------------- |---------:|----------:|----------:|------:|--------:|----------:|
| ComputeBaseline | 6.127 ms | 0.1224 ms | 0.3512 ms | 1.00 | 0.00 | 3 KB |
| ComputeWithException | 6.004 ms | 0.1196 ms | 0.2724 ms | 0.97 | 0.07 | 4 KB |

3. Run #3 (5 loops)

| Method | Mean | Error | StdDev | Ratio | RatioSD | Allocated |
|--------------------- |---------:|---------:|---------:|------:|--------:|----------:|
| ComputeBaseline | 27.28 ms | 0.380 ms | 0.337 ms | 1.00 | 0.00 | 16 KB |
| ComputeWithException | 28.01 ms | 0.536 ms | 0.502 ms | 1.03 | 0.02 | 17 KB |

4. Run #4 (100 loops)

| Method | Mean | Error | StdDev | Ratio | RatioSD | Allocated |
|--------------------- |---------:|---------:|---------:|------:|--------:|----------:|
| ComputeBaseline | 562.0 ms | 11.13 ms | 19.79 ms | 1.00 | 0.00 | 310 KB |
| ComputeWithException | 564.2 ms | 10.93 ms | 16.02 ms | 1.00 | 0.05 | 343 KB |

As expected, in almost all cases the run with exceptions took slightly longer than the one without, but in a _real-life_ example it does not seem to matter much (even if a client abuses the API and generates lots of internal exceptions).

I would appreciate a code review to check whether my benchmark is correct or whether I missed some cases.
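For comparison, here is a minimal in-process sketch of the kind of measurement the SO post describes: the same "found / not found" lookup with a return code versus an exception, timed with `Stopwatch`, and with no HTTP or database involved. All names and the iteration count are illustrative, not part of my actual benchmark.

```c#
// Minimal sketch isolating raw throw/catch cost from the end-to-end
// pipeline measured above. Even ids "exist", odd ids do not, so half
// of all lookups miss. Names and iteration counts are illustrative.
using System;
using System.Diagnostics;

public static class RawCostSketch
{
    // Return-code style: -1 signals "not found".
    private static int FindWithReturnCode(int id) => id % 2 == 0 ? id : -1;

    // Exception style: throw on "not found".
    private static int FindWithException(int id)
    {
        if (id % 2 != 0)
            throw new InvalidOperationException("not found");
        return id;
    }

    // Runs the lookup in a loop, prints the elapsed time,
    // and returns the number of "not found" results.
    public static int CountMisses(bool useExceptions, int iterations)
    {
        int misses = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            if (useExceptions)
            {
                try { FindWithException(i); }
                catch (InvalidOperationException) { misses++; }
            }
            else if (FindWithReturnCode(i) < 0)
            {
                misses++;
            }
        }
        sw.Stop();
        Console.WriteLine($"{(useExceptions ? "Exceptions " : "Return code")}: " +
            $"{sw.Elapsed.TotalMilliseconds:F3} ms for {iterations} calls");
        return misses;
    }

    public static void Main()
    {
        CountMisses(useExceptions: false, iterations: 100_000);
        CountMisses(useExceptions: true, iterations: 100_000);
    }
}
```

Since half the calls throw, this exaggerates the exception path compared to a realistic API, where only a small fraction of requests miss; it only puts an upper bound on the per-call cost that the database and HTTP overhead in my benchmark would otherwise hide.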