I would need to check, but I don't think PostgreSQL or most (any?) relational databases have a good way of expressing this "at most 8" constraint. It may be *possible* to express it in some way, but probably not in a good way. For example, you could use triggers, but those have significant issues with regard to performance, discoverability, and complexity (there's a sketch of this approach below).

Most of the time this would be handled through some kind of database access layer, which may be a collection of stored procedures or something like the Repository Pattern in application code. The key thing is that all changes should go through this layer, which is then free to check and enforce any constraints needed with arbitrary logic. Stored procedures have many benefits for this use, but many developers have historically been pretty negative about them for reasons that I think are mostly obsolete.

Assuming the "Restrict deletion rule" here means to disallow deletes that would violate the constraint, you would need to delete the instructor and all associated rows of `INSTRUCTOR CLASSES` within a single transaction (also sketched below). The alternative would be a cascading delete rule, which would remove those rows automatically when you deleted an instructor. Deleting the last entry for a particular instructor in `INSTRUCTOR CLASSES` would likewise require a transaction that also deleted the instructor, or inserted one or more new rows for that instructor, to maintain the constraint.

This is presumably an introduction to database design, so you should expect some toy examples. If we ignore the specific cardinality constraints, this is the standard way to implement a many-to-many relation, representing that many instructors can teach the same class and that instructors can teach multiple classes. As mentioned, most databases only have direct support for handling 1:1 and 1:N relationships via foreign keys and uniqueness constraints.

That said, many database designs, and naive applications of database design methodologies, *don't* actually do a good job of producing schemas that make sense for mutable data. To put it another way, it's relatively easy to design a database schema for representing a snapshot of data. It is much harder when that data can change over time.

For example, a naive schema for a store selling products might have a `PRODUCTS` table with product descriptions, prices, and quantities, a `CUSTOMERS` table, and a `SALE` table^[In practice, you'd more likely have an `INVOICE` table and a `LINE ITEMS` table allowing multiple products to be sold in a single transaction, but I'm simplifying here.] representing a sale of a product to a customer, which you might naively model as just a foreign key to `CUSTOMERS`, a foreign key to `PRODUCTS`, and a quantity field. For a fixed state of affairs, this would be completely adequate and would avoid data redundancy. But now consider increasing the price of a product: suddenly it looks like you made more money on your past sales, which doesn't reflect reality. Indeed, this schema is fundamentally broken in an evolving world (the third sketch below shows the simplest repair). The "right" solution to this problem in general is to use a temporal database, but few database management systems have much support for temporal database techniques. Nevertheless, the techniques behind temporal databases are a good source of ideas for dealing with these issues. Event sourcing is another technique to look at, though it carries a lot of extra baggage over the core idea that I don't think is so great.
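To make the trigger caveats concrete, here is a minimal sketch in PostgreSQL. I'm assuming the constraint is "at most 8 instructors per class" and using illustrative names (`INSTRUCTOR CLASSES` is spelled `instructor_classes` to be a valid identifier):

```sql
CREATE TABLE instructors (
    instructor_id integer PRIMARY KEY,
    name          text NOT NULL
);

CREATE TABLE classes (
    class_id integer PRIMARY KEY,
    title    text NOT NULL
);

-- Standard many-to-many join table: a foreign key in each direction,
-- with the pair itself as the uniqueness constraint.
CREATE TABLE instructor_classes (
    instructor_id integer NOT NULL REFERENCES instructors,
    class_id      integer NOT NULL REFERENCES classes,
    PRIMARY KEY (instructor_id, class_id)
);

-- Trigger function enforcing the cardinality limit with arbitrary logic.
CREATE FUNCTION check_instructor_limit() RETURNS trigger AS $$
BEGIN
    IF (SELECT count(*) FROM instructor_classes
        WHERE class_id = NEW.class_id) >= 8 THEN
        RAISE EXCEPTION 'class % already has 8 instructors', NEW.class_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER enforce_instructor_limit
    BEFORE INSERT ON instructor_classes
    FOR EACH ROW EXECUTE FUNCTION check_instructor_limit();
```

Note that the count-then-insert check isn't safe under concurrent inserts without extra locking, which is exactly the kind of hidden complexity that makes triggers unattractive here.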
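And a sketch of the deletion case under a restrict rule, again with made-up identifiers. The cascading alternative is a one-line change to the foreign key:

```sql
-- Delete an instructor and their join rows in one transaction so that
-- no intermediate state violates the restrict rule. 42 is a made-up id.
BEGIN;
DELETE FROM instructor_classes WHERE instructor_id = 42;
DELETE FROM instructors WHERE instructor_id = 42;
COMMIT;

-- The cascading alternative, declared on the join table's foreign key:
--     instructor_id integer NOT NULL
--         REFERENCES instructors ON DELETE CASCADE
-- With that in place, a single DELETE FROM instructors suffices.
```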
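Here is the store example as a sketch, with the simplest repair: copy the unit price onto the sale row at the time of sale, so that later price changes can't rewrite history. All names are illustrative, and a fuller temporal design would instead keep a price table with validity intervals:

```sql
CREATE TABLE products (
    product_id  integer PRIMARY KEY,
    description text    NOT NULL,
    price       numeric NOT NULL,  -- the *current* price only
    quantity    integer NOT NULL
);

CREATE TABLE customers (
    customer_id integer PRIMARY KEY,
    name        text NOT NULL
);

CREATE TABLE sales (
    sale_id     integer PRIMARY KEY,
    customer_id integer NOT NULL REFERENCES customers,
    product_id  integer NOT NULL REFERENCES products,
    quantity    integer NOT NULL,
    unit_price  numeric NOT NULL,  -- captured at sale time, not looked up
    sold_at     timestamptz NOT NULL DEFAULT now()
);

-- Historical revenue now survives a price change:
SELECT sum(quantity * unit_price) FROM sales;
```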
Another, related option to look at is [log-based designs](https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying) that leverage relatively new tools like Apache Kafka. In this approach, a *persistent* log of "events" is the source of truth for the system, and the relational database is just a (materialized) view of the log as of a point in (logical) time. This doesn't obviate the need to think carefully about database design, but it takes some pressure off, and if you mess up you haven't permanently lost or corrupted any data (at least not directly). You can always go through the log again with the corrected schema.
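As a rough illustration of the idea using nothing but PostgreSQL (standing in for a real log like Kafka), the log can be an append-only table and the relational shape just a materialized view over it. All names here are made up:

```sql
-- Append-only event log as the source of truth.
CREATE TABLE event_log (
    offset_id  bigserial PRIMARY KEY,    -- position in the log
    event_type text        NOT NULL,     -- e.g. 'price_changed'
    payload    jsonb       NOT NULL,
    logged_at  timestamptz NOT NULL DEFAULT now()
);

-- Current product prices as a materialized view of the log: the
-- latest 'price_changed' event per product wins.
CREATE MATERIALIZED VIEW current_prices AS
SELECT DISTINCT ON (payload->>'product_id')
       (payload->>'product_id')::integer AS product_id,
       (payload->>'price')::numeric      AS price
FROM event_log
WHERE event_type = 'price_changed'
ORDER BY payload->>'product_id', offset_id DESC;
```

Rebuilding `current_prices` with `REFRESH MATERIALIZED VIEW` (or redefining the view entirely) replays the log without touching the underlying events.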