Welcome to Software Development on Codidact!
Post History
Answer
#4: Post edited
This practice exists because CIs suck. The frameworks/services themselves suck, the way people write the configs also sucks, and the two combine to create a mega-suck.
A CI is supposed to trigger as you make changes, so you automatically have an up-to-date indicator of whether the checks/tests/builds/deployments/whatever are succeeding. With typical CIs today, you occasionally run into a situation where it should have triggered, but didn't.
Sometimes the change is actually external to the repo, so you can't even blame the CI. Possibly evidence of bad design, but it is what it is.
You would expect CIs to have a way to manually trigger a build. They do. But it's often confusing, obscure, and not easy to use, so people don't. Either they don't know about it, or don't understand how to do it with confidence, or know how to do it but don't want to jump through hoops. You could dismiss those people as stupid or lazy, but the point is that they won't stop being stupid or lazy, they were just as stupid or lazy when the CI was developed, and it was well known to the CI devs that all these stupid and lazy people are out there. Yet the devs didn't care and built the CI the way it is anyway.
You would expect that the CI is just running some shell script in the repo, like `./test-and-deploy.sh`. Then it's irrelevant that the CI has bad UX (DX?): you can just run the same script yourself. It's the same shell commands, so it should be the same thing as triggering the CI, right? Unfortunately, because CIs suck, they manage to add a whole bunch of complexity on top of that shell script, so there are actually non-trivial differences between running the script locally and on CI. Moreover, CI users also suck, so they implement complex logic in their CI config which cannot be tested outside the CI.
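A hypothetical sketch of that ideal: the real logic lives in a `test-and-deploy.sh` checked into the repo, so running it locally and running it on CI are literally the same command. (The script name matches the example above; the `DEPLOY` flag and the echoed messages are illustrative, not from any particular CI.)

```shell
# Create a throwaway directory standing in for a repo checkout.
tmp=$(mktemp -d)
cd "$tmp"

# The whole pipeline as one script the CI would merely invoke.
cat > test-and-deploy.sh <<'EOF'
#!/bin/sh
set -e
echo "running tests..."
# real test commands would go here
if [ "${DEPLOY:-0}" = "1" ]; then
    echo "deploying..."
else
    echo "tests only (set DEPLOY=1 to deploy)"
fi
EOF

# Exactly what the CI would run; exactly what you run locally.
sh test-and-deploy.sh
```

The point of the design is that the CI config shrinks to a one-line invocation of the script, leaving nothing CI-specific to go wrong.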
In the end, yes, you could swallow all the complexity of the CI and just use it right, which in theory means you never need to make a dummy commit. But that is a lot of hassle. Why bother, when you can run a simple command in a tool you already know well (git) rather than in something much more specialized and with less staying power? Memorize all the ins and outs of Jenkins, then tomorrow it's obsolete and replaced by Drone, and great, let's read pages and pages of CI docs all over again...
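For reference, the "simple command" in question is just an empty commit. A minimal sketch in a throwaway repo (the commit message is an arbitrary convention; on a real project it's the `git push` afterwards that actually pokes the CI):

```shell
set -e

# Throwaway repo so the sketch is safe to run anywhere.
tmp=$(mktemp -d)
cd "$tmp"
git init -q

git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# No file changes needed; this commit exists purely to trigger the CI:
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "chore: trigger CI"

git log --oneline
# then: git push   (omitted here; this sketch has no remote)
```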
CI is obviously a valuable service: it solves a bunch of important problems. These problems can be solved in very simple ways, and indeed it's better to solve them in the simple way. The industry has spawned companies which provide CI and solve these problems as a service, but of course simplicity is not conducive to maximizing profit. Therefore they work tirelessly on ways to overcomplicate their CI, and to encourage users to overcomplicate their usage. As a bonus, it helps with vendor lock-in as well.
As for your concern about "cluttering history", it's kind of a non-issue. Git is already very powerful at filtering histories. Empty commits don't show up in `git blame`. You can always do an interactive rebase if you don't want them. And people who care that much about their git history usually have some kind of multiple-branch workflow, where they squash or omit the dummy commits when merging.
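A small sketch of both points in a throwaway repo: dummy commits can be filtered out of `git log` by message (`--invert-grep` needs git 2.4+), and they never appear in `git blame` because they touch no lines. The "chore: trigger CI" message is just an illustrative convention.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# One real commit and one dummy commit.
echo "hello" > app.txt
git add app.txt
git commit -q -m "add app.txt"
git commit -q --allow-empty -m "chore: trigger CI"

# Hide the dummy commits by message when reading history:
git log --oneline --invert-grep --grep="chore: trigger CI"

# Blame only ever attributes lines to commits that changed them,
# so the empty commit cannot appear here:
git blame app.txt
```

If you do want the dummy commits gone rather than merely hidden, an interactive rebase (`git rebase -i`) lets you drop them, as mentioned above.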
Best practices in software are rarely dogmatic rules that must always be followed no matter what. You're supposed to judge case by case whether a given principle applies, while keeping an eye towards pragmatism. Best practices without corner cases have not been invented yet.
#3: Post edited
#2: Post edited
#1: Initial revision