Post History
Answer
#1: Initial revision
This is somewhat of a non-answer because I also think the answer is mostly "no". I'll split triggers into three categories: 1) those that only touch the affected rows, 2) those that additionally read data from other rows/tables, and 3) those that mutate other rows/tables.

The first category is the most innocuous, but the compelling use cases are largely covered by other features in major RDBMSes like SQL Server. This includes things like auto-incrementing columns, check constraints, and (materialized) computed columns. Since this kind of trigger doesn't lead to any additional contention, the only real concerns are surprising behavior and perhaps slightly surprising performance characteristics, depending on how involved the trigger logic is.

The second category is mostly problematic due to the extra contention and surprising performance. I would be pretty concerned if the trigger changed the inserted/updated row based on data from other rows/tables. Generally, my knee-jerk reaction would be that if you need to do this you should normalize your tables better, but there are definitely natural and legitimate uses for this kind of check. Foreign keys are this kind of check, though, of course, virtually every major database that supports triggers supports them directly. There are other natural kinds of foreign key that, to my knowledge, no database supports (though I haven't checked in a while). A "polymorphic" foreign key would allow a natural encoding of (non-primitive) sum types into tables and would require that a key exist in the union of the primary keys of a (fixed) set of tables. However, this can be encoded in other ways that are almost as good.

The third category is the one most likely to produce surprises and surprising contention that can be very problematic. The most compelling use case I've seen for this type of trigger has since been subsumed by the Change Data Capture (CDC) feature that is now common in major RDBMSes. Obviously, these kinds of triggers are very powerful and can be used to do many things.

The main alternative suggested to triggers is to make changes via a controlled API (usually stored procedures) that encapsulates what would otherwise be trigger logic but keeps the schema clean. The benefit is that what's going on is much more obvious and controllable. The typical argument against this is that it requires covering all "entry points" and potentially duplicating logic among them, and if you make a mistake you may violate database invariants which a trigger could guarantee.

This is where your "modern software application" comes in. Nowadays access to the database is entirely programmatic and well controlled. This makes it quite a lot easier to ensure that you go through the appropriate code paths, and it also makes it easy to extract common code. Also, some modern database patterns, as well as a general de-escalation of the centrality of the (OLTP schema in a) relational database, lessen either the need for certain integrity constraints or the severity of violating them. For example, an application using an event sourcing approach would have the collection of events (stored in a table or in something like Kafka) be the source of truth, with the OLTP (schema of the) database being derived. Further, in this example, there would likely be a single piece of code which applies events to the OLTP schema, which would make it easy to centralize any "trigger" logic.
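To make the controlled-API alternative concrete, here's a minimal sketch in SQL Server syntax (the tables and the procedure are made up for illustration, not a recommended schema): the audit write that might otherwise live in an `AFTER INSERT` trigger is just ordinary, visible code inside the one sanctioned write path.

```sql
-- Hypothetical schema: the stored procedure is the only write path, so the
-- derived/audit-table maintenance a trigger would have done is explicit code.
CREATE TABLE dbo.Orders (
    OrderId    int IDENTITY PRIMARY KEY,
    CustomerId int NOT NULL,
    Total      decimal(19, 4) NOT NULL
);

CREATE TABLE dbo.OrderAudit (
    OrderId   int NOT NULL,
    ChangedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Total     decimal(19, 4) NOT NULL
);
GO

CREATE PROCEDURE dbo.InsertOrder
    @CustomerId int,
    @Total      decimal(19, 4)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    INSERT INTO dbo.Orders (CustomerId, Total)
    VALUES (@CustomerId, @Total);

    -- What an AFTER INSERT trigger would have done, made explicit:
    INSERT INTO dbo.OrderAudit (OrderId, Total)
    VALUES (SCOPE_IDENTITY(), @Total);

    COMMIT TRANSACTION;
END;
```

Whether the single write path is a stored procedure like this or the application's event applier, the point is the same: everything that happens on insert lives in one obvious place instead of being hidden behind the table.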
Similarly, ELT-like patterns that move data from a rawer, less constrained representation to a more structured representation are common, as maintaining the raw input is valuable in many cases. These patterns can make it possible to "regenerate" the OLTP schema from the raw input to fix data errors caused by coding errors. For example, you can replay the event log with a new, corrected event applier.

Another modern trend, likely not leveraged as much in the traditional RDBMS sphere, is better exploiting weaker consistency levels. Triggers don't interact with this particularly well, as they make the most sense at high isolation levels (typically, though there are certainly exceptions). Triggers also likely lack the flexibility that sophisticated approaches to getting strong consistency guarantees in a relaxed consistency environment would need. Those approaches benefit more from having different logic for different transactions that still might ultimately insert into the same table.

I also haven't used triggers any time recently. My experience with them is that people forget or don't know they exist, and then get really confused by the behavior, performance, or contention they get. (Then they [re]discover the trigger and curse it.) I would only use a trigger as a tool of last resort.

That said, I certainly wouldn't want to remove such functionality, as it can be used to do interesting things. For example, the scripts used in [Anchor Modeling](https://www.anchormodeling.com/modeler/latest/) will create triggers allowing you to "insert" into views hiding the temporal nature of the underlying tables. This is arguably just a convenience (and you can have those triggers omitted), and it would certainly be confusing if you worked at the level of the RDBMS, but, in this context, the RDBMS is arguably an implementation detail for a higher-level model. It's also not as bad to use triggers to implement functionality that otherwise wouldn't exist, as opposed to changing basic database functionality.
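To give a flavor of that idea (this is a heavily simplified sketch, not the actual code Anchor Modeling generates, and the table/view/trigger names are invented), an `INSTEAD OF INSERT` trigger on a view can accept "inserts" while the temporal bookkeeping happens in the underlying history table:

```sql
-- Underlying temporal storage: every change is a new row.
CREATE TABLE dbo.CustomerName_History (
    CustomerId   int NOT NULL,
    CustomerName nvarchar(100) NOT NULL,
    ChangedAt    datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

-- The view hides the history and exposes only the latest name per customer.
CREATE VIEW dbo.CustomerName AS
SELECT h.CustomerId, h.CustomerName
FROM dbo.CustomerName_History AS h
WHERE h.ChangedAt = (SELECT MAX(h2.ChangedAt)
                     FROM dbo.CustomerName_History AS h2
                     WHERE h2.CustomerId = h.CustomerId);
GO

-- "Inserting" into the view actually appends a new history row.
CREATE TRIGGER dbo.CustomerName_Insert
ON dbo.CustomerName
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.CustomerName_History (CustomerId, CustomerName)
    SELECT i.CustomerId, i.CustomerName
    FROM inserted AS i;
END;
```

Of course, this is exactly the kind of thing that will surprise anyone who looks at the view without knowing about the trigger, which is why it only makes sense when the RDBMS really is an implementation detail of a higher-level model.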