
Comments on Should I check if pointer parameters are null pointers?


Should I check if pointer parameters are null pointers?

+11
−0

When writing any form of custom function such as this:

void func (int* a, int* b)

Should I add code to check if a and b are null pointers or not?

if(a == NULL)
/* error handling */

When code is posted for review, one frequent comment is that such null checks should be added for robustness. However, those reviewers are just as often corrected by others telling them not to add such checks, because they are bad practice.

What should be done? Is there a universal policy to check or not to check, or should this be decided on a case-by-case basis?

1 comment thread

I don't think this is enough for a full-fledged answer, but I think adding debug `assert`s for testin... (1 comment)
Post
+2
−4

The comments telling you to add checks against null typically come from programmers used to dealing with higher-level programming languages. They think that more explicit error handling is always a good thing. That is true in most cases, but not in this one.

The spirit of C has always been performance over safety: give the programmer the freedom to do what they want, as fast as possible, at the price of potential risks. For example, arrays were purposely not designed with a stored size or any bounds checking (The Development of the C Language - Dennis M. Ritchie).

If we look at the standard library, many functions such as memcpy/strcpy do not, for example, support overlapping arguments even though they could. Instead a specialized function, memmove, was designed for that purpose - safer but slower. Similarly, malloc does not needlessly initialize the allocated memory to zero, because that would add execution overhead; a specialized function, calloc, was designed for that purpose. And so on.
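To illustrate that distinction with a small sketch of my own (not part of the original answer): an overlapping copy is only defined for memmove.

#include <stdio.h>
#include <string.h>

int main (void)
{
    char buf[] = "abcdef";

    /* Source and destination overlap here, which only memmove defines. */
    memmove(buf, buf + 1, 5);     /* ok: buf becomes "bcdeff" */
    /* memcpy(buf, buf + 1, 5);      undefined behaviour for overlapping objects */

    puts(buf);
    return 0;
}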

We should never add checks against null because they add needless overhead code.

Instead, the function should be explicitly documented as not handling null pointer arguments, leaving that error handling to the caller.
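As a sketch of what such documentation could look like (the sum function and the debug-only assert are purely illustrative here, echoing the comment above about debug asserts):

#include <assert.h>
#include <stddef.h>

/*
 * Returns the sum of the first n elements of data.
 * data must not be a null pointer; passing one is undefined behaviour,
 * and preventing that is the caller's responsibility.
 */
int sum (const int* data, size_t n)
{
    assert(data != NULL);   /* debug-only sanity check, compiled out when NDEBUG is defined */

    int total = 0;
    for (size_t i = 0; i < n; i++)
    {
        total += data[i];
    }
    return total;
}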

The reason for this is simple: there are plenty of scenarios where the caller knows that the passed arguments are definitely not null pointers. Having the function check for null anyway only adds bloat in the form of additional branches. Take this example:

char* array[n] = { ... };
for(size_t i=0; i<n; i++)
{
  func(array[i]);
}

Now if this snippet is performance-critical, the whole loop is slowed down if func repeatedly checks the passed pointer for null. We know it isn't a null pointer, but the repeated check may lead to branch prediction problems or cache misses on some systems. On any system, it is a useless check costing performance. And it cannot get optimized away unless the function is inlined, and perhaps not even then.

To give the caller the freedom to do as they like, we should let them handle the null pointer check, ideally as close as possible to the point where something might end up as a null pointer, rather than inside some unrelated library function.
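For example, a caller-side sketch of my own (reusing the func signature from the question; the allocation is made up for illustration) might look like this:

#include <stddef.h>
#include <stdlib.h>

void func (int* a, int* b);   /* documented to require non-null pointers */

void caller (size_t n)
{
    int* a = malloc(n * sizeof *a);
    int* b = malloc(n * sizeof *b);

    if (a == NULL || b == NULL)
    {
        /* Handle the failure here, where a null pointer can actually appear. */
        free(a);
        free(b);
        return;
    }

    /* From here on a and b are known to be non-null, so func need not re-check them. */
    func(a, b);

    free(a);
    free(b);
}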


As a side note, recent gcc and clang versions can do some limited static analysis at compile time if we use the somewhat exotic static array declarator feature (-Wall is required for this in gcc). But it has very limited use:

// must point at array of at least 1 item and not null
void func (int a[static 1]); 

int main (void) 
{
  func(NULL); // warning null passed to a callee that requires a non-null argument 
  
  int* x=NULL;
  func(x);   // no warning, the compiler can't predict what x contains
}

It's better to use dedicated static analyser tools to find bugs like this; then we don't need the exotic language feature either. To this date, compilers are still not very good at finding application bugs through static analysis.


3 comment threads

I find this very dogmatic. "Never" is a very strong word. As someone said: "Blindly following best pr... (8 comments)
Uh... what? (8 comments)
I agree with your analysis. There is a problem with one of the examples: you should use `for(size_t i ... (2 comments)
klutt‭ wrote over 2 years ago

I find this very dogmatic. "Never" is a very strong word. As someone said: "Blindly following best practices is not best practice."

However, I can agree that it seems a bit point(haha)less for two reasons.

  1. There is rarely a good way to recover if a NULL pointer is passed.
  2. NULL is basically the only thing that is possible to check. To have real value, a check would also have to verify that the address is valid, which is not possible.

However, if you have a non-critical function it can make sense. Let's say you have this simple logging function:

void log(const char *msg) {
    fprintf(stderr, "%s", msg);
}

If you don't want the program to crash because it was handed a null pointer, a check here can make total sense.

And sure, it's common to use C for performance, but that does not mean that all C code is performance critical.
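As a sketch, the guarded variant suggested above could look like this (the early return is purely illustrative):

#include <stdio.h>

void log (const char *msg) {
    if (msg == NULL)
        return;   /* silently ignore a null message instead of crashing */
    fprintf(stderr, "%s", msg);
}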

Lundin‭ wrote over 2 years ago

klutt‭ Everyone is missing the point that checking for null inside some generic function is always the wrong place. Checking for null elsewhere is perfectly reasonable and I never said otherwise. Unfortunately it would seem that the average reader here is of the PC programmer variety who has never done actual defensive programming, with the purpose of catching RAM corruptions or runaway code. So they check against null because "one cannot be too careful". Maybe I should write a post about how/when to actually do defensive programming, because the average reader here seems oblivious to the concept.

Lundin‭ wrote over 2 years ago

Checking if you have the key before starting your car is perfectly reasonable. Checking if the car still has an engine, wheels and exhaust each time before starting it is not. What is one's reason for doing so? Is there a notorious car part thief in the neighbourhood (valid reason), or is it "one cannot be too careful" (invalid reason caused by paranoia or other mental problems)?

klutt‭ wrote over 2 years ago

In cases where it would mean a catastrophe if the programmer forgot to do the null check in the right place, it makes total sense to also have a check in the wrong place.

Lundin‭ wrote over 2 years ago

klutt‭ So every 10th line or so we should check if all pointers in the current scope are null? Because that makes as much sense.

klutt‭ wrote over 2 years ago

No I did not say that, and the comparison is ridiculous.

Lundin‭ wrote over 2 years ago · edited over 2 years ago

klutt‭ It isn't, it is the very same thing. This code `int* someptr = ...; /*...*/ if(!someptr) { ... } func(someptr);` together with `void func (int* someptr) { if(!someptr) { ... } }` boils down to the very same thing as `int* someptr = ...; /*...*/ if(!someptr) { ... } /*some rows of unrelated code*/ if(!someptr)`. There is no reason to believe that the pointer would magically get set to null just because we called a function. Unless you suspect stack corruption, in which case checking for null is the wrong counter-measure entirely.

Lundin‭ wrote over 2 years ago

In either of these scenarios, the correct place to check against null is where you have valid reasons to suspect it might be null (after a malloc call?) and only there. It is never correct to check inside the completely unrelated library function func, whose purpose is to calculate something not in the slightest related to what you did with the pointer before calling it. This is very fundamental object-orientation: a piece of code should do its own designated task and not worry about completely unrelated things.