What are the drawbacks of using data snapshot testing?
Our team is finally focusing on writing more automated tests, and one of my ex-colleagues recommended trying out the [Verify library](https://github.com/VerifyTests/Verify). The tool does the following:

- runs the test and compares the JSON serialization of the actual result with a JSON file named after the test; the first run will always fail, as the file is missing
- the actual data is written to a file (matching the test name), and that file becomes the expected result
- subsequent test runs succeed as long as the actual result does not change

This is particularly useful for assertions on complex objects, since it spares the developer from writing lots of individual assertions. Until now I have avoided comparing large objects, except in rather technical scenarios like deep-cloning, where I relied on [Fluent Assertions Object Graphs operations](https://fluentassertions.com/objectgraphs/) (e.g. `Should().BeEquivalentTo`).

The gain is clear and I think it is a great library, but I am wondering about its downsides. The only one I can think of is the extra effort needed to quickly understand what is wrong with a failed test, since the output is just a partial object-graph mismatch rather than a targeted assertion with a human-readable "because" text.
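The run/write/compare workflow described in the bullets above can be sketched in a few lines of Python. This is only an illustration of the snapshot-testing mechanism, not the Verify library's actual implementation; the `verify_snapshot` helper and the `.verified.json` file naming are assumptions made up for the example:

```python
import json
from pathlib import Path


def verify_snapshot(test_name: str, actual, snapshot_dir: Path = Path(".")) -> None:
    """Compare the JSON serialization of `actual` against a stored snapshot.

    Mimics the workflow described above: on the first run the snapshot file
    is missing, so the actual data is written out and the test fails; later
    runs pass as long as the serialized result is unchanged.
    """
    snapshot_file = snapshot_dir / f"{test_name}.verified.json"
    serialized = json.dumps(actual, indent=2, sort_keys=True)

    if not snapshot_file.exists():
        # First run: persist the actual data; it becomes the expected result.
        snapshot_file.write_text(serialized)
        raise AssertionError(
            f"No snapshot for {test_name!r}; wrote {snapshot_file} - "
            "review it and re-run the test"
        )

    expected = snapshot_file.read_text()
    assert serialized == expected, f"Snapshot mismatch for {test_name!r}"
```

A first call for a given test name raises and creates the snapshot file; an identical second call passes; a call with changed data fails with a mismatch, which is exactly the point at which the "partial object-graph diff" readability concern from the question shows up.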