Post History

Q&A: Should I check if pointer parameters are null pointers?


posted 2y ago by Lundin · edited 2y ago by Lundin

Answer
#4: Post edited by Lundin · 2022-04-05T12:56:48Z (about 2 years ago)
Fixed a bug

The kind of comments telling you to add null checks typically come from programmers who are mostly used to dealing with higher-level programming languages. They think that more explicit error handling is generally a good thing. That is true in most cases, but not in this one.

The spirit of C has always been performance over safety: give the programmer the freedom to do what they want, as fast as possible, at the price of potential risks. For example, arrays were deliberately designed without a stored size or any bounds checking (_[The Development of the C Language - Dennis M. Ritchie](https://www.bell-labs.com/usr/dmr/www/chist.pdf)_).

If we look at the standard library, many functions such as `memcpy`/`strcpy` do not support overlapping arguments even though they could. Instead, a specialized function, `memmove`, was designed for that purpose: safer, but slower. Similarly, `malloc` does not needlessly initialize the allocated memory to zero, because that would add execution overhead; a specialized function, `calloc`, was designed for that purpose. And so on.

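To make that division of labour concrete, here is a minimal sketch (my example, not part of the original answer) of the "fast by default, safe variant on request" pattern:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main (void)
{
    char buf[] = "abcdef";

    /* Overlapping copy: memmove handles it, memcpy is not required to. */
    memmove(buf + 1, buf, 5);   /* well-defined, buf becomes "aabcde" */
    puts(buf);

    /* malloc leaves the memory uninitialized; calloc zero-fills it.  */
    /* (Allocation failure checks omitted for brevity.)               */
    int *fast   = malloc(100 * sizeof *fast);    /* contents indeterminate */
    int *zeroed = calloc(100, sizeof *zeroed);   /* all bytes zero         */

    free(fast);
    free(zeroed);
    return 0;
}
```
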
**We should never add checks against null, because they add needless overhead.**

Instead, the function should be _explicitly documented_ as not handling the case where a null pointer is passed, thereby leaving that error handling to the caller.

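In practice such a contract often just lives in the function's documentation comment. A hypothetical example (the `copy_string` name and wording are mine, not the answer's):

```c
/*
 * Copies the string pointed to by src into dst.
 *
 * dst must point to a buffer of at least strlen(src) + 1 bytes.
 * Neither dst nor src may be a null pointer; passing a null pointer
 * results in undefined behavior. No run-time check is performed.
 */
void copy_string (char *dst, const char *src);
```
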
The reason for this is simple: there are a whole lot of scenarios where the caller knows for certain that the passed arguments are not null pointers. Having the function check for null pointers anyway only adds bloat in the form of additional branches. Take this example:

```c
char* array[n] = { ... }; /* every element points at a valid object */

for(size_t i=0; i<n; i++)
{
    func(array[i]);
}
```

Now if this snippet is performance-critical, we stall the whole loop if `func` repeatedly checks the passed pointer for null. We _know_ it isn't a null pointer, but the repeated check may lead to branch prediction problems or cache misses on some systems. On _any_ system it is a useless check costing performance, and it cannot be optimized away unless the function is inlined, perhaps not even then.

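`func` itself is never shown in the answer; purely as an illustration, the defensive variant being argued against might look like this (hypothetical code, the names are mine):

```c
#include <stddef.h>

/* Hypothetical defensive version of func: the extra branch the answer argues against. */
void func (char *s)
{
    if (s == NULL)   /* evaluated on every single call, even when the */
    {                /* caller already guarantees that s is not null  */
        return;
    }

    /* ... the actual work on s ... */
}
```
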
To give the caller the freedom to do as they like, we should let them handle the null pointer check. Ideally that check is done as close as possible to the point where something might actually end up as a null pointer, rather than inside some unrelated library function.

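For instance (my sketch, with a hypothetical `fill` helper), a pointer coming from `malloc` is checked once, at the only place where it can become null; the callees that receive it afterwards don't check again:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical callee: documented to require a non-null argument, no check inside. */
static void fill (char *buf, size_t n)
{
    memset(buf, 'x', n);
}

int main (void)
{
    size_t n = 1024;
    char *buf = malloc(n);

    if (buf == NULL)   /* the one place where a null pointer can appear */
    {
        fprintf(stderr, "out of memory\n");
        return EXIT_FAILURE;
    }

    /* From here on, every callee may assume buf is not null. */
    fill(buf, n);

    free(buf);
    return 0;
}
```
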
---

As a side note, modern compilers such as recent gcc and clang versions can do some limited static analysis at compile time if we use the exotic `static` array declarator feature (`-Wall` is needed for this in gcc). But it is of very limited use:

```c
// must point at an array of at least 1 item and not be null
void func (int a[static 1]);

int main (void)
{
    func(NULL); // warning: null passed to a callee that requires a non-null argument

    int* x = NULL;
    func(x);    // no warning, the compiler can't predict what x contains
}
```

It's better to use dedicated static analyser tools to find bugs like this, and then we don't need the exotic language feature either. To this day, compilers are still not very good at finding application bugs through static analysis.

#3: Post edited by Lundin · 2022-04-05T09:45:14Z (about 2 years ago)
#2: Post edited by Lundin · 2022-04-05T09:44:10Z (about 2 years ago)
#1: Initial revision by Lundin · 2022-04-05T09:39:48Z (about 2 years ago)