Simplest way of getting failure notification emails from Kubernetes
What would be the simplest (and most lightweight) way of getting email notifications about failures in a Kubernetes cluster? I'm mostly interested in failing pods, so notifying on certain Kubernetes event types would be sufficient.
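By event types I mean the kind of thing you can already list by hand; the command below is just an illustration of what I'd like to be emailed about:

```sh
# Warning-type events (BackOff, Failed, Unhealthy, ...) across all namespaces.
kubectl get events --field-selector type=Warning --all-namespaces
```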
This really shouldn't need an additional database like Prometheus.
Ideally something dead simple like the MAILTO variable in crontab files.
1 answer
The simplest solution I managed to find is Robusta.

It still has a bunch of unnecessary features, but with the right configuration it's possible to disable them. As a bonus, it nicely adds some extra info to the notifications (called enrichments).

It's intended to run alongside Prometheus, but it also works without it. Here's the relevant installation guide.
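For reference, the installation boils down to adding the chart repo and installing with the values file below. The repo URL here is the one from Robusta's docs at the time of writing; double-check it against the installation guide:

```sh
# Add the Robusta chart repo and install it with the values skeleton below.
helm repo add robusta https://robusta-charts.storage.googleapis.com
helm repo update
helm install robusta robusta/robusta -f values.yaml
```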
The resulting deployment consists of two pods: one watching the Kubernetes API for events and another sending the notifications (plus maybe one more for sending email, if you don't have a mail server around).
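Once installed, you can sanity-check that both are up (the pod names assume the default release name; adjust the namespace to wherever you installed it):

```sh
# Expect a robusta-forwarder pod (watches the API server)
# and a robusta-runner pod (sends the notifications).
kubectl get pods --all-namespaces | grep robusta
```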
Here's a Helm values skeleton for a minimal setup (replace the placeholders in <>):
```yaml
# See here for the unmodified values:
# https://github.com/robusta-dev/robusta/blob/master/helm/robusta/values.yaml
clusterName: <your-cluster>
isSmallCluster: true
# Disable all phoning home.
disableCloudRouting: true
runner:
  sendAdditionalTelemetry: false
  additional_env_vars:
    # Some telemetry is apparently enabled by default.
    # https://github.com/robusta-dev/robusta/blob/master/helm/robusta/templates/NOTES.txt
    - name: ENABLE_TELEMETRY
      value: "false"
enablePrometheusStack: false
# https://docs.robusta.dev/master/configuration/sinks/mail.html
sinksConfig:
  - mail_sink:
      name: mail_sink
      # You need to have an email server somewhere.
      mailto: "mailto://<my.domain>?smtp=<my.smtp.server>&from=<my.cluster@add.ress>&to=<my.email@add.ress>"
# builtin playbooks
builtinPlaybooks:
  # playbooks for non-prometheus based monitoring
  - name: "CrashLoopBackOff"
    triggers:
      - on_pod_crash_loop:
          restart_reason: "CrashLoopBackOff"
    actions:
      - report_crash_loop: {}
  - name: "ImagePullBackOff"
    triggers:
      - on_image_pull_backoff: {}
    actions:
      - image_pull_backoff_reporter: {}
  # playbooks for non-prometheus based monitoring that use prometheus for enrichment
  - name: "PodOOMKill"
    triggers:
      - on_pod_oom_killed:
          rate_limit: 3600
    actions:
      - pod_oom_killer_enricher:
          attach_logs: true
          container_memory_graph: true
          node_memory_graph: true
    # Don't evaluate any later playbooks for the same event.
    stop: true
# Prometheus trigger playbooks removed.
# Robusta UI sinks have been disabled.
enablePlatformPlaybooks: true
platformPlaybooks:
  - name: "K8sWarningEventsReport"
    triggers:
      - on_kubernetes_warning_event_create:
          exclude: ["NodeSysctlChange"]
    actions:
      - event_report: {}
      - event_resource_events: {}
    # sinks:
    #   - "robusta_ui_sink"
  - name: "IngressChangeTracking"
    triggers:
      - on_ingress_all_changes: {}
    actions:
      - resource_babysitter: {}
      - customise_finding:
          title: Ingress Changes
          aggregation_key: IngressChange
    # sinks:
    #   - "robusta_ui_sink"
  - name: "EventBasedChangeTracking"
    triggers:
      - on_kubernetes_resource_operation:
          resources: ["deployment", "replicaset", "daemonset", "statefulset", "pod", "node", "job"]
    actions:
      - resource_events_diff: {}
  - name: "K8sJobFailure"
    triggers:
      - on_job_failure: {}
    actions:
      - create_finding:
          aggregation_key: "job_failure"
          title: "Job Failed"
      - job_info_enricher: {}
      - job_events_enricher: {}
      - job_pod_enricher: {}
    # sinks:
    #   - "robusta_ui_sink"
# Read-only setup.
# Don't try to autofix anything.
lightActions:
  - related_pods
  # - prometheus_enricher
  - add_silence
  # - delete_pod
  - delete_silence
  - get_silences
  - logs_enricher
  - pod_events_enricher
  - deployment_events_enricher
  - job_events_enricher
  - job_pod_enricher
  - get_resource_yaml
  - node_cpu_enricher
  - node_disk_analyzer
  - node_running_pods_enricher
  - node_allocatable_resources_enricher
  - node_status_enricher
  - node_graph_enricher
  - oomkilled_container_graph_enricher
  - pod_oom_killer_enricher
  - oom_killer_enricher
  - volume_analysis
  - python_profiler
  - pod_ps
  - python_memory
  - debugger_stack_trace
  - python_process_inspector
  # - prometheus_alert
  # - create_pvc_snapshot
  - resource_events_enricher
  # - delete_job
  - list_resource_names
  - node_dmesg_enricher
  - status_enricher
  # No scanning either.
  # - popeye_scan
  # - krr_scan
  # - handle_alertmanager_event
  # - drain
  # - cordon
  # - uncordon
  # - rollout_restart
  # - prometheus_all_available_metrics
  # - prometheus_get_series
```
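To check the whole pipeline end to end, you can deliberately crash something and wait for the email. A pod whose command always exits non-zero goes into CrashLoopBackOff and should fire the report_crash_loop playbook above (the pod name here is just an example):

```sh
# Create a pod that keeps failing, let it restart a few times, then clean up.
kubectl run crashloop-test --image=busybox --restart=Always -- sh -c 'exit 1'
kubectl delete pod crashloop-test
```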