
In the current development context (2020+), is there any reason to use database triggers?

+4
−0

I have not used a database trigger in years on any of the projects my teams have worked on, but I have seen other teams use them.

Back in 2009 this seemed like a heated debate, but I am wondering whether triggers are still useful for modern software applications.

By modern software application I mean at least the following:

  • domain-centric as opposed to database-centric - far less business logic lives in the database
  • no direct database interaction - e.g. no linked servers or similar mechanisms used to change data in the database from another application; APIs are used instead
  • code-first - database changes are almost always driven from the application code (e.g. migrations, data seeding)

The only valid case where I created a trigger was to track DDL changes since some database objects were being changed without our team knowing it. However, this was not part of the normal development process.
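
For concreteness, the kind of DDL audit trigger I mean looks roughly like this (a sketch in SQL Server syntax; the table and trigger names are invented for this post):

    -- Hypothetical names; roughly the shape of a DDL audit trigger.
    CREATE TABLE dbo.SchemaChangeLog (
        Id        int IDENTITY(1,1) PRIMARY KEY,
        EventTime datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        LoginName sysname   NOT NULL,
        EventData xml       NOT NULL
    );
    GO

    CREATE TRIGGER trgTrackDdlChanges
    ON DATABASE
    FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE,
        CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE
    AS
    BEGIN
        -- EVENTDATA() returns an XML description of the DDL statement that fired the trigger.
        INSERT INTO dbo.SchemaChangeLog (LoginName, EventData)
        VALUES (ORIGINAL_LOGIN(), EVENTDATA());
    END;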


0 comment threads

1 answer

+4
−0

This is somewhat of a non-answer because I also think the answer is mostly "no".

I'll split triggers into three categories:

  1. those that only touch the affected rows,
  2. those that additionally read data from other rows/tables, and
  3. those that mutate other rows/tables.

The first category is the most innocuous, but the compelling use cases are largely covered by other features in major RDBMSes like SQL Server. This would include things like auto-incrementing columns, check constraints, and (materialized) computed columns. Since this kind of trigger doesn't lead to any additional contention, the only real concern is surprising behavior and perhaps slightly surprising performance characteristics, depending on how involved the trigger logic is.
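
For instance, the row-local needs that might otherwise prompt a trigger can usually be declared directly in the schema. A minimal sketch in SQL Server syntax, with invented table and column names:

    -- Hypothetical table: the needs a row-local trigger might otherwise cover
    -- (key generation, validation, a derived column) expressed declaratively.
    CREATE TABLE dbo.Orders (
        OrderId   int IDENTITY(1,1) PRIMARY KEY,            -- auto-incrementing key
        Quantity  int           NOT NULL,
        UnitPrice decimal(10,2) NOT NULL,
        Total     AS (Quantity * UnitPrice) PERSISTED,      -- materialized computed column
        CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0)  -- check constraint
    );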

The second category is mostly problematic due to the extra contention and surprising performance. I would be pretty concerned if the trigger changed the inserted/updated row based on data from other rows/tables. Generally, my knee-jerk reaction would be that if you need to do this you should normalize your tables better, but there are definitely natural and legitimate uses for this kind of check. Foreign keys are this kind of check, but of course virtually every major database that supports triggers also supports those. There are other natural kinds of foreign keys that, to my knowledge, no database supports directly (though I haven't checked in a while). A "polymorphic" foreign key would allow a natural encoding of (non-primitive) sum types into tables: it would require that a key exists in the union of the primary keys of a (fixed) set of tables. However, this can be encoded in other ways that are almost as good.
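
To illustrate "encoded in other ways": one common workaround is the exclusive-arc pattern, with one nullable foreign key per possible parent table plus a check that exactly one is set. A sketch with invented names, assuming dbo.Posts and dbo.Photos already exist:

    -- Hypothetical: a Comment may attach to either a Post or a Photo, i.e. a sum type of parents.
    CREATE TABLE dbo.Comments (
        CommentId int IDENTITY(1,1) PRIMARY KEY,
        PostId    int NULL REFERENCES dbo.Posts(PostId),
        PhotoId   int NULL REFERENCES dbo.Photos(PhotoId),
        Body      nvarchar(max) NOT NULL,
        -- Exactly one of the parent references must be populated.
        CONSTRAINT CK_Comments_ExactlyOneParent CHECK (
            (CASE WHEN PostId  IS NOT NULL THEN 1 ELSE 0 END) +
            (CASE WHEN PhotoId IS NOT NULL THEN 1 ELSE 0 END) = 1
        )
    );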

The third category is the one most likely to produce surprises and surprising contention that can be very problematic. The most compelling use-case I've seen for this type of trigger has since been subsumed by the Change Data Capture (CDC) feature that is now common in major RDBMSes. Obviously, these kinds of triggers are very powerful and can be used to do many things.
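
For example, in SQL Server the classic hand-rolled audit trigger can usually be replaced by enabling CDC on the table. A sketch, reusing the hypothetical dbo.Orders table from above:

    -- Enable Change Data Capture for the database, then for the table to audit.
    EXEC sys.sp_cdc_enable_db;

    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Orders',
        @role_name     = NULL;   -- no gating role required to read the change data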

The main alternative suggested to triggers is to make changes via a controlled API (usually stored procedures) that encapsulates what would otherwise be trigger logic but keeps the schema clean. The benefit is that what's going on is far more obvious and controllable. The typical argument against this is that it requires covering all "entrypoints" and potentially duplicating logic amongst them, and if you make a mistake you may violate database invariants that a trigger could guarantee.
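
A minimal sketch of that style, again with invented names: the validation and the audit write live in one procedure that acts as the write API, rather than in a trigger. It assumes the dbo.Orders table above and a hypothetical audit table dbo.OrderAudit(OrderId, ChangedAt):

    -- Hypothetical write API: all inserts go through this procedure, so the would-be
    -- trigger logic (validation plus an audit row) is explicit and in one place.
    CREATE PROCEDURE dbo.AddOrder
        @Quantity  int,
        @UnitPrice decimal(10,2)
    AS
    BEGIN
        SET NOCOUNT ON;
        IF @Quantity <= 0
            THROW 50001, 'Quantity must be positive.', 1;

        BEGIN TRANSACTION;
            INSERT INTO dbo.Orders (Quantity, UnitPrice)
            VALUES (@Quantity, @UnitPrice);

            INSERT INTO dbo.OrderAudit (OrderId, ChangedAt)
            VALUES (SCOPE_IDENTITY(), SYSUTCDATETIME());
        COMMIT TRANSACTION;
    END;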

This is where your "modern software application" comes in. Nowadays access to the database is entirely programmatic and well controlled. That makes it quite a lot easier to ensure you go through the appropriate code paths, and it also makes it easy to extract common code. Also, some modern database patterns, along with a general de-emphasis of the (OLTP schema in a) relational database as the center of the system, lessen either the need for certain integrity constraints or the severity of violating them. For example, an application using an event sourcing approach would have the collection of events (stored in a table or in something like Kafka) be the source of truth, with the OLTP (schema of the) database being derived from it. Further, in this example, there would likely be a single piece of code which applies events to the OLTP schema, which would make it easy to centralize any "trigger" logic.

Similar ELT-like patterns that move data from a raw, less constrained representation to a more structured one are also common, since maintaining the raw input is valuable in many cases. These patterns can make it possible to "regenerate" the OLTP schema from the raw input to fix data errors caused by coding errors. For example, you can replay the event log with a new, corrected event applier.
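
A rough sketch of that "single piece of code which applies events" idea, with invented names (in practice the applier is often application code rather than SQL): every change to the derived table flows through one place, so any would-be trigger logic is centralized there, and replaying the stored events through it regenerates the derived state.

    -- Hypothetical "single applier": dbo.AccountEvents is the source of truth and
    -- dbo.Accounts is derived from it by replaying events through this procedure.
    CREATE PROCEDURE dbo.ApplyAccountEvent
        @EventType varchar(50),
        @AccountId int,
        @Amount    decimal(18,2)
    AS
    BEGIN
        SET NOCOUNT ON;
        IF @EventType = 'AccountOpened'
            INSERT INTO dbo.Accounts (AccountId, Balance) VALUES (@AccountId, 0);
        ELSE IF @EventType = 'Deposited'
            UPDATE dbo.Accounts SET Balance = Balance + @Amount WHERE AccountId = @AccountId;
        ELSE IF @EventType = 'Withdrawn'
            UPDATE dbo.Accounts SET Balance = Balance - @Amount WHERE AccountId = @AccountId;
    END;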

Another modern trend, which likely isn't leveraged as much in the traditional RDBMS sphere, is better exploiting weaker consistency levels. Triggers don't interact with this particularly well, as they typically make the most sense at high isolation levels (though there are certainly exceptions). Triggers also likely lack the flexibility that sophisticated approaches to getting strong consistency guarantees in a relaxed-consistency environment would need. Those approaches would benefit more from having different logic for different transactions that still might ultimately insert into the same table.

I also haven't used triggers any time recently. My experience with them is that people forget (or don't know) they exist, and then get really confused by the behavior, performance, or contention they see. (Then they [re]discover the trigger and curse it.) I would only use a trigger as a tool of last resort.

That said, I certainly wouldn't want the functionality removed, as it can be used to do interesting things. For example, the scripts used in Anchor Modeling create triggers allowing you to "insert" into views that hide the temporal nature of the underlying tables. This is arguably just a convenience (and you can have those triggers omitted), and it would certainly be confusing if you worked at the level of the RDBMS, but in this context the RDBMS is essentially an implementation detail for a higher-level model. Using triggers to implement functionality that otherwise wouldn't exist is also not as bad as using them to change basic database behavior.
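
To make that concrete, the mechanism is an INSTEAD OF trigger on a view. A simplified sketch with invented names (not the actual Anchor Modeling output), assuming a history table dbo.CustomerHistory(CustomerId, Name, ValidFrom, ValidTo):

    -- Hypothetical temporal table hidden behind a "current state" view.
    CREATE VIEW dbo.CustomerCurrent AS
        SELECT CustomerId, Name
        FROM dbo.CustomerHistory
        WHERE ValidTo IS NULL;
    GO

    CREATE TRIGGER trgCustomerCurrentInsert
    ON dbo.CustomerCurrent
    INSTEAD OF INSERT
    AS
    BEGIN
        -- "Inserting" into the view actually appends a new versioned row to the history table.
        INSERT INTO dbo.CustomerHistory (CustomerId, Name, ValidFrom, ValidTo)
        SELECT CustomerId, Name, SYSUTCDATETIME(), NULL
        FROM inserted;
    END;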


0 comment threads
