
Welcome to Software Development on Codidact!



Should I check if pointer parameters are null pointers?

+11
−0

When writing any form of custom function such as this:

void func (int* a, int* b)

Should I add code to check if a and b are null pointers or not?

if (a == NULL)
    /* error handling */

When posting code for code reviews, one frequent comment is that such checks against null should be added for robustness. However, these reviewers are then just as often corrected by others telling them to not add such checks because they are bad practice.

What should be done? Is there a universal policy to check or not to check, or should this be decided on a case-by-case basis?

1 comment thread

I don't think this is enough for a full-fledged answer, but I think adding debug `assert`s for testin... (1 comment)
+9
−0

As with most everything in engineering, how thoroughly a subroutine should validate its call arguments is a tradeoff. There is no single universal right answer. Note that checking for null pointers is just one instance of validating information from elsewhere before acting on it.

Advantages of data validation:

  • Helps during development and debugging.
  • Protects against possible malicious use.
  • Reduces cascading errors. When everything is a mess, it's often hard to figure out what initially went wrong.
  • Allowing bad data to pass may cause a hard failure later and crash the whole system. Trying to dereference a null pointer will usually crash a program. At that point, it can't apply any corrective action. However, if a null pointer is detected, then there is an opportunity to handle the error without disrupting other parts of the operation.
  • There might be legitimate reasons data could be invalid. Your subroutine might be the right place to check for that.
  • Some errors can cause correct-looking results. Getting wrong answers can be far worse than the process stalling and giving no answer.
  • Safety-critical applications. "Invalid message received about adjusting the control rods" may be preferred over "Raise the control rods by 10^38 meters".
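The "detect and recover" advantage above can be sketched in C. This is a hypothetical example (the function name and error convention are not from the question): instead of dereferencing blindly, the function rejects null input with an error code and lets the caller decide how to recover.

```c
#include <stddef.h>
#include <errno.h>

/* Hypothetical sketch: validate, then act.
   Returns 0 on success, EINVAL when handed a null pointer,
   so the caller can recover instead of crashing. */
int swap_ints(int *a, int *b)
{
    if (a == NULL || b == NULL)
        return EINVAL;   /* error detected; no dereference happens */

    int tmp = *a;
    *a = *b;
    *b = tmp;
    return 0;
}
```

A caller can then treat a nonzero return as a recoverable condition, log it, or fall back to a safe default, rather than letting the process die on a bad dereference.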

Of course nothing is free. Data validation comes at the cost of larger code size and slower run-time response. Whether that is a worthwhile tradeoff depends on a number of parameters:

  • Does the extra delay in responsiveness matter? In critical control applications, it might.
  • Does the extra code space matter? If it's in a high-volume throw-away consumer product that maxes out a cheap microcontroller, then using the next size up of micro may make the whole product non-viable.
  • What's the cost of failure? Misreading a button press on a child's toy versus on an x-ray cancer treatment machine have vastly different downsides.
  • How likely is bad data? Something a user types in could be anything, whereas the reading from your 10-bit A/D is always going to be 0-1023. Also, at some point data needs to be trusted. Is this a low-level routine that should only be handed good data that has already been validated by upper layers?
  • Is there anything you can do about it? On an embedded microcontroller there may be no operating system, no user console to write messages to, etc. In the end, you're not really going to output 10^38 volts. Resetting the system may be worse than the disruption due to a bad data value.
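When the code-size or run-time cost matters, a common middle ground (a sketch of standard practice, not something the answer prescribes) is a debug-only assertion: the check is active during development and testing, but compiles away entirely in release builds where `NDEBUG` is defined.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: validate only in debug builds.
   With NDEBUG defined (typical release builds), assert()
   expands to nothing, so there is no code-size or speed cost. */
int buffer_sum(const int *buf, size_t len)
{
    assert(buf != NULL);   /* catches bad callers early, during testing */

    int sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}
```

This gets the debugging benefit of validation without paying for it in the shipped product, at the cost of no protection at all once `NDEBUG` is in effect.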

Everything is a tradeoff.


1 comment thread

Defensive programming (1 comment)
Defensive programming
Lundin‭ wrote about 2 years ago

"Defensive programming", as in applying numerous error checks even in cases that theoretically shouldn't fail, might be a good idea. However, such error checks should be located near the place where one might suspect that the error appears (user input, buffer copying, etc.), and not in some unrelated library function. It's important that any such unexpected errors are spotted as early as possible. That is true for any system, safety-critical or not. Safety-critical systems do not have excessive/random checks against null parameters either.
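The comment's point about checking at the boundary rather than in every library function can be sketched as follows. The function names here are hypothetical, invented for illustration: untrusted data is validated exactly once where it enters the program, and the internal helper documents and trusts its precondition.

```c
#include <stddef.h>

/* Internal helper: documented to require a non-null string.
   No null check here -- it trusts the boundary layer. */
static size_t word_count(const char *text)
{
    size_t count = 0;
    int in_word = 0;
    for (; *text != '\0'; text++) {
        if (*text != ' ' && !in_word) { in_word = 1; count++; }
        else if (*text == ' ')        { in_word = 0; }
    }
    return count;
}

/* Boundary function: untrusted user input is validated
   exactly once, here, at the point where it enters. */
int count_user_words(const char *user_input, size_t *out)
{
    if (user_input == NULL || out == NULL)
        return -1;   /* reject bad input at the entry point */

    *out = word_count(user_input);
    return 0;
}
```

This keeps the checks where errors can plausibly originate, while the internal code stays small and fast.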
