As with most everything in engineering, how much the arguments to a subroutine should be validated is a tradeoff. There is no single universal right answer. Note that checking for null pointers is only one instance of validating information from elsewhere before acting on it.

Advantages of data validation:
<ul>
<li>Helps during development and debugging.</li>
<li>Protects against possible malicious use.</li>
<li>Reduces cascading errors. When everything is a mess, it's often hard to figure out what initially went wrong.</li>
<li>Allowing bad data to pass may cause a hard failure later and crash the whole system. Trying to dereference a NIL pointer will usually crash a program. At that point, it can't apply any corrective action. However, if a NIL pointer is detected, then there is an opportunity to handle the error without disrupting other parts of the operation.</li>
<li>There might be legitimate reasons the data could be invalid. Your subroutine might be the right place to check for that.</li>
<li>Some errors can produce correct-looking results. Getting wrong answers can be far worse than the process stalling and giving no answer.</li>
<li>Safety-critical applications. <i>"Invalid message received about adjusting the control rods"</i> may be preferred over <i>"Raise the control rods by 10<sup>38</sup> meters"</i>.</li>
</ul>

Of course, nothing is free. Data validation comes at the cost of larger code size and slower run-time response. Whether that is a worthwhile tradeoff depends on a number of parameters:
<ul>
<li>Does the extra delay in responsiveness matter? In critical control applications, it might.</li>
<li>Does the extra code space matter? If it's in a high-volume throw-away consumer product that maxes out a cheap microcontroller, then using the next size up of micro may make the whole product non-viable.</li>
<li>What's the cost of failure? Misreading a button press on a child's toy versus on an x-ray cancer treatment machine has vastly different downsides.</li>
<li>How likely is bad data? Something a user types in could be anything, whereas the reading from your 10-bit A/D is always going to be 0&ndash;1023. Also, at some point data needs to be trusted. Is this a low-level routine that should only be handed good data that has already been validated by upper layers?</li>
<li>Is there anything you can do about it? On an embedded microcontroller, there may be no operating system, no user console to write messages to, etc. In the end, you're not really going to output 10<sup>38</sup> volts. Resetting the system may be worse than the disruption due to a bad data value.</li>
</ul>

<b>Everything is a tradeoff.</b>