Post History

Q&A: Best practices in setting up development & production environments

posted 2y ago by meriton

Answer
#1: Initial revision by meriton · 2022-02-11T21:27:14Z (about 2 years ago)
As a baseline, here's what we did in my last company:

* For tests, we used an [in-memory database](https://docs.spring.io/spring-boot/docs/2.1.0.M2/reference/html/boot-features-sql.html#boot-features-embedded-database-support), whose [schema was initialized by our object-relational mapper](https://docs.spring.io/spring-boot/docs/2.1.0.M2/reference/html/boot-features-sql.html#boot-features-creating-and-dropping-jpa-databases), with initial data loaded either from an SQL script or from code using the O/R mapper.
* Before deploying, we would generate SQL scripts for migrating the database schema, whose application was automated by [Flyway](https://flywaydb.org/).
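
As a rough illustration, here is what such a test setup can look like with Spring Boot and an embedded H2 database. The `Customer` entity, its `CustomerRepository`, and the `findByName` query are hypothetical stand-ins; the property shown is one way to have the O/R mapper create the schema from the entity classes:

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.test.context.TestPropertySource;

// @DataJpaTest boots only the JPA slice of the application and, with H2 on
// the test classpath, swaps the real database for an embedded in-memory one.
@DataJpaTest
@TestPropertySource(properties = {
        // let the O/R mapper derive and create the schema from the entities
        "spring.jpa.hibernate.ddl-auto=create-drop"
})
class CustomerRepositoryTest {

    @Autowired
    private CustomerRepository repository; // hypothetical Spring Data repository

    @Test
    void findsCustomerSavedInCode() {
        // initial data loaded from code rather than from an SQL script
        repository.save(new Customer("Jane Doe"));

        assertThat(repository.findByName("Jane Doe")).isPresent();
    }
}
```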

This approach

* allows rapid evolution of the database schema during development (simply change your entity classes and restart)
* requires no external database for development, which means you can develop without access to the company network, such as from home or on the train
* gives each dev their own database, allowing features to be developed in parallel without interference
* keeps customer data out of development, so development data need not be protected and can be accessed easily without risking customer privacy
* requires data generation logic to be written, but since this logic can be reused in unit tests (see the sketch after this list), the added effort seemed low
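
For a sense of the extra effort the last point refers to, the data generation logic can be as small as a shared fixture class. This is a hypothetical sketch reusing the made-up `Customer` types from the test above:

```java
// Hypothetical sketch: one place that produces a small, self-consistent
// data set, callable both at development startup and from unit tests.
public final class TestData {

    private TestData() {
    }

    public static void seed(CustomerRepository customers) {
        customers.save(new Customer("Jane Doe"));
        customers.save(new Customer("John Smith"));
    }
}
```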

It remained technically possible to load a dump of the production database into a development database instance, but we only did so when we really needed production data (for instance, to diagnose issues, run load tests, or test schema migrations).

This approach was mandated by our data protection officer, and it met initial resistance because people were used to working with dumps of production databases, but the ease of working with in-memory databases, and of developing without network access (home office! yay!), soon won everyone over.

We did not go the full continuous delivery route because we did not trust our test coverage to automatically ensure the quality of deployments, and therefore wanted to give customers the opportunity to test the new version before deploying it to production. That said, it would have been easy to instruct our build server to create a Docker image and tell Kubernetes to update the deployment, and Flyway would then have applied the migration automatically. (If we hadn't used Kubernetes, a little shell scripting would likely have done the job, too.)
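
To make the migration step concrete: when Flyway is on the classpath, Spring Boot runs pending migrations at application startup on its own, but the mechanism can be sketched with Flyway's Java API (the connection details here are made up):

```java
import org.flywaydb.core.Flyway;

public class Migrate {

    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                // hypothetical connection details
                .dataSource("jdbc:postgresql://db:5432/app", "app", "secret")
                .load();

        // applies any pending V1__..., V2__... scripts in version order and
        // records them in Flyway's schema history table
        flyway.migrate();
    }
}
```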

Of course, applying database migrations to live systems (some of which may not have been updated yet) poses its own challenges. Alas, since our customers did not need that capability, I have little relevant experience to advise you here, but I hear that breaking schema changes can often be split into backward-compatible increments: for example, renaming a column by first adding the new column, writing to both for a while, backfilling old rows, and only dropping the old column in a later release.