Dealing with code maintenance when saving a large and complex entity in a single business transaction
I am currently working on migrating a legacy application to an ASP.NET Core + Angular application. The Web API's main project is based on Jason Taylor's template (clean architecture) and, among other libraries, relies on MediatR.
The UI/UX for already migrated business flows relies on pages that work as follows:
When the user enters a page, each section (group) loads its own data, ultimately via a dedicated MediatR query. Only the information of a specific group can be edited and saved at a time. This helps our development and maintenance quite a lot:
- data fetch is clearly separated for each part (component)
- a good mapping between use cases and code structure (data fetch for a part = query, saving a group's data = command)
- smaller transactions (each save persists a small slice of data instead of the whole entity)
- each use case implementation is quite small and easy to maintain
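To make the mapping concrete, here is a minimal, hypothetical sketch (in TypeScript, not the actual C#/MediatR code from the template) of "one query per section for reads, one command per section for saves"; all names are illustrative:

```typescript
// One read model per section: the handler fetches only that section's data.
interface GetSectionQuery { sectionId: string }

// One write model per section: the handler persists only that section's fields.
interface SaveSectionCommand { sectionId: string; fields: Record<string, string> }

function handleGetSection(q: GetSectionQuery): Record<string, string> {
  // In the real app this would hit the database for just this group.
  return { sectionId: q.sectionId, status: "loaded" };
}

function handleSaveSection(c: SaveSectionCommand): boolean {
  // In the real app this would run a small transaction for just this group.
  return Object.keys(c.fields).length > 0;
}

const sectionData = handleGetSection({ sectionId: "general" });
const saved = handleSaveSection({ sectionId: "general", fields: { name: "Entity A" } });
```

Each handler stays small because it only ever sees its own section's slice of the data.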
Now, for migrating the functionality around another entity type, the product owner wants a different UI/UX approach:
- there is no read-only view; users can jump straight into editing
- there is a main Save button that saves "everything" (we may be able to scope Save to the current section only, to keep things more manageable)
- at least one section is quite heavy (dozens of fields and files to be uploaded)
The main issue with the desired design is that the number of use cases is drastically reduced, so each remaining use case has a much bigger and more complex payload to handle.
The legacy application treats the saving functionality in a rather monolithic way, resulting in very hard-to-maintain code. I am wondering how to tackle this, as I want to avoid repeating those maintenance issues.
Since I cannot avoid a big payload being ingested by the API, I need to somehow split the business logic into more maintainable pieces of code. The Chain of Responsibility pattern comes to mind:
- the controller action receives the payload
- a save command is issued with the big payload
- a chain of middleware components receives the payload, and each one processes only the part it is interested in (e.g. merging the section 1 / group 1 data). The payload DTO could even implement a dedicated interface for each processing step (I still have to check whether this works).
- the final middleware commits the transaction if everything has succeeded up to that point
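The steps above can be sketched roughly as follows (a minimal TypeScript illustration of the chain idea, not real MediatR pipeline behaviors; `SavePayload`, `SectionProcessor`, and the other names are hypothetical):

```typescript
// The big payload arriving at the save command, with one property per section.
interface SavePayload {
  generalInfo?: { name: string };
  attachments?: string[];
}

// Each processor in the chain handles only the part of the payload it cares about.
interface SectionProcessor {
  process(payload: SavePayload, log: string[]): void;
}

class GeneralInfoProcessor implements SectionProcessor {
  process(payload: SavePayload, log: string[]): void {
    if (payload.generalInfo) log.push(`merged general info: ${payload.generalInfo.name}`);
  }
}

class AttachmentsProcessor implements SectionProcessor {
  process(payload: SavePayload, log: string[]): void {
    for (const file of payload.attachments ?? []) log.push(`stored ${file}`);
  }
}

// The command handler runs the whole chain inside one transaction.
function handleSaveCommand(payload: SavePayload, processors: SectionProcessor[]): string[] {
  const log: string[] = [];
  // beginTransaction() would go here in a real implementation.
  for (const p of processors) p.process(payload, log);
  log.push("commit"); // the final step commits only if nothing threw above
  return log;
}

const auditLog = handleSaveCommand(
  { generalInfo: { name: "Entity A" }, attachments: ["spec.pdf"] },
  [new GeneralInfoProcessor(), new AttachmentsProcessor()],
);
```

The point of the chain is that each processor stays as small as a per-section command handler would have been, while the transaction boundary remains a single place (the final commit step).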
We are already using this concept for handling cross-cutting concerns, in a way similar to what is depicted in this article, but I have not yet used it for handling business transactions.
**Is this a good approach for handling such a business scenario?**
Note: drawings were done using Excalidraw.