How to inject environment configuration values when deploying an Angular application in Kubernetes or similar infrastructure?
Context
I am currently migrating a Web application from on-prem infrastructure to K8s.
The legacy infrastructure relies on defining tokens in the configuration files, which are replaced during deployment as follows:
- ASP.NET Core: appsettings.json tokens are replaced
- Angular: replacements are done directly in the bundled JS files
The issue
While the .NET Core application features a designated configuration file (appsettings.json) which can be found in the Docker container, the Angular application relies on environment.{env}.ts files whose content is bundled into the main JS file.
This defeats an important goal of the deployment: build the Docker image once and deploy it to any environment.
How can an Angular application be made to allow its configuration data to be changed after the production build is created (at the Docker container level, not at the image level)?
Post
I'm not quite sure I see what the issue is. As far as I can tell, you could continue to do exactly what you're doing now; you'd just do the token replacement on the bundle.js when a container is provisioned. On the other hand, I can see why you might want to move away from this approach.
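For reference, the "replace tokens when the container is provisioned" option could look roughly like the following small Node/TypeScript startup script. The placeholder names (__API_URL__ and so on) and the dist path are purely illustrative, not taken from your setup:

```typescript
// replace-tokens.ts -- a minimal sketch of doing the token replacement at
// container startup instead of at image build time. Assumes the Angular build
// contains placeholder tokens such as __API_URL__ (illustrative names only).
import { readFileSync, writeFileSync, readdirSync } from "fs";
import { join } from "path";

const DIST_DIR = process.env.DIST_DIR ?? "/usr/share/nginx/html";

// Map of placeholder token -> environment variable that supplies its value.
const tokens: Record<string, string | undefined> = {
  __API_URL__: process.env.API_URL,
  __FEATURE_FLAGS__: process.env.FEATURE_FLAGS,
};

for (const file of readdirSync(DIST_DIR)) {
  if (!file.endsWith(".js")) continue;            // only touch the bundled JS files
  const path = join(DIST_DIR, file);
  let contents = readFileSync(path, "utf8");
  for (const [token, value] of Object.entries(tokens)) {
    if (value !== undefined) {
      contents = contents.split(token).join(value); // replace every occurrence
    }
  }
  writeFileSync(path, contents);
}
```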
A more modern way to handle these kinds of concerns, especially in an elastic environment like Kubernetes, is via service discovery/configuration services such as Consul or etcd. These may already be integrated if you are using a private cloud infrastructure. (I'll talk more about Consul, because I'm a bit more familiar with it and it's a bit more featureful.)
The idea is that instead of deploying configuration files with a provisioning tool like Puppet/Chef/Ansible/Salt in a push-based manner, containers access their configuration from Consul/etcd in a pull-based manner.
This could be used in several ways.
The least compelling way would be to have a script run when the container starts that fetches the configuration details from Consul/etcd and then either does the token replacement à la your current solution or creates a config file à la the solution in your answer. By itself, this doesn't provide much benefit over the provisioning approach. However, both Consul and etcd allow you to wait for a key to change value, so you could have this script additionally wait for changes and then restart the container or recreate the config file without needing to manually reconfigure.
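A rough sketch of that idea against Consul's KV HTTP API might look like this (the key name and output path are made up for the example; etcd's API would look different):

```typescript
// fetch-config.ts -- a sketch of "pull config at container start, then wait for
// changes" using Consul's KV HTTP API. Key name and output path are hypothetical.
import { writeFileSync } from "fs";

const CONSUL = process.env.CONSUL_ADDR ?? "http://consul.example.com:8500";
const KEY = "frontend/config";                        // hypothetical KV key
const OUT = "/usr/share/nginx/html/assets/config.json";

async function poll(): Promise<void> {
  let index = "0";
  for (;;) {
    // A Consul "blocking query": the request hangs until the key changes
    // (or the wait time elapses), so the loop only spins on real updates.
    const res = await fetch(`${CONSUL}/v1/kv/${KEY}?index=${index}&wait=5m`);
    if (!res.ok) { await new Promise(r => setTimeout(r, 5000)); continue; }
    index = res.headers.get("X-Consul-Index") ?? index;
    const [entry] = await res.json();
    const value = Buffer.from(entry.Value, "base64").toString("utf8"); // KV values are base64-encoded
    writeFileSync(OUT, value);                        // recreate the config file the SPA loads
  }
}

poll().catch(err => { console.error(err); process.exit(1); });
```

Something like this could run as part of the container's entrypoint, alongside the web server.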
More compellingly, both Consul and etcd present their key-value stores via an HTTP API. So instead of making and serving a config file, you could just have the application talk to Consul/etcd directly. This has the benefit of needing no (re)configuration step and always having the latest configuration. It also allows you to control how often different parts of the config are checked. For example, you can pull an initial config when the SPA web page is first loaded, and then pull other parts more frequently even without a full page reload. And, again, you can also wait for changes and thus detect when the configuration has changed and force a page reload (or do something smarter) in that case. Practically speaking, it's likely the end-user's browser wouldn't have network access to the Consul/etcd server. This can be resolved by server configuration or by using a reverse proxy tool like Traefik. You would route a request to app.example.com/config/foo to consul.example.com/v1/kv/frontend/public/config/foo or whatever. Here consul.example.com is not accessible from the outside internet but is accessible from the Docker container serving app.example.com, which is itself externally accessible.
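On the Angular side, that setup could look roughly like the service below, assuming the proxy exposes the key as plain JSON under a /config/... path (the path and the shape of AppConfig are invented for the example):

```typescript
// app-config.service.ts -- an illustrative Angular service that reads its
// configuration from a path the reverse proxy forwards to Consul's KV API.
import { Injectable } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { firstValueFrom } from "rxjs";

export interface AppConfig {
  apiUrl: string;
  featureFlags: Record<string, boolean>;
}

@Injectable({ providedIn: "root" })
export class AppConfigService {
  private config?: AppConfig;

  constructor(private http: HttpClient) {}

  // Called once at startup so the rest of the app can read config synchronously.
  async load(): Promise<void> {
    this.config = await firstValueFrom(
      this.http.get<AppConfig>("/config/frontend/public/config")
    );
  }

  get(): AppConfig {
    if (!this.config) throw new Error("Configuration not loaded yet");
    return this.config;
  }
}
```

Wiring load() into an APP_INITIALIZER provider ensures the configuration is fetched before the application bootstraps; periodic re-checks or change-watching as described above would then sit on top of this.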
As a bit of a tangent, one difference between etcd and Consul is that Consul acts as a DNS server. This is a simple but clever idea that has immense repercussions for configuration. A lot of configuration is specifying where other services are. Consul allows you to put something like app-db.consul as the database URL in various configuration files once and for all. Since app-db.consul is a valid and (internally) accessible URL, you can just use it as-is in existing tools. What server(s) app-db.consul refers to is automatically handled in real-time (with load balancing and health checks to kick out failed servers). Whether app-db.consul refers to a production or development server is handled by which Consul server you're talking to. By itself, this feature can often drastically reduce or even outright eliminate configuration as well as simplify deployment.
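To make "use it as-is in existing tools" concrete, here's a tiny hypothetical example; the pg client and credentials are just placeholders:

```typescript
// db.ts -- the connection settings are written once; which actual server
// "app-db.consul" resolves to is decided by the Consul server the environment
// talks to (with health-checked load balancing behind the DNS name).
import { Pool } from "pg";

export const db = new Pool({
  host: "app-db.consul",          // resolved by Consul DNS at runtime
  port: 5432,
  database: "app",
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
});
```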