Monday, January 29, 2024

[SOLVED] Kubernetes: how to run a Task periodically inside every Pod of a Deployment?

Issue

I have read about the various ways to run tasks periodically in a K8s cluster, but none of them seem to work well for this specific case. I have a deployment "my-depl" that can run an arbitrary number of pods and the task needs to execute periodically inside each pod (basically a shell command that "nudges" the main application once a week or so).

The Kubernetes CronJob functionality starts each task in its own pod. Such a task does not know how many pods are currently running for "my-depl" and cannot run anything inside them. Conceivably, I could run kubectl within the CronJob, but that seems incredibly hacky and dangerous.
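For completeness, the kubectl-from-a-CronJob approach dismissed here would look roughly like the following sketch. It assumes the pods carry an app=my-depl label and that /nudge.sh is the hypothetical command to run; it also requires a ServiceAccount bound to a Role permitting pods list and pods/exec, which is exactly the coupling that makes it feel dangerous:

```shell
# List every pod of the deployment by label, then exec the (hypothetical)
# nudge command in each one. Needs RBAC for "list pods" and "pods/exec".
kubectl get pods -l app=my-depl -o name \
  | xargs -I{} kubectl exec {} -- /nudge.sh
```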

The second alternative would be to have crond (or an alternative tool like Jobber or Cronenberg) run inside the pod alongside the main application. But that would mean running two processes in one container, and the container might not be restarted if only the cron process dies.

The third option is to run a multi-process container via a special init process like s6-overlay. This can be configured to exit if one of its child processes dies, but the setup is fairly involved and hardly a first-class feature.

The fourth option I could think of was "don't do this, it's stupid. Redesign your application so it doesn't need to be 'nudged' once a week". That's a sound suggestion, but a lot of work and I need at least a temporary solution in the meantime.

So, does anyone have a better idea than those detailed here?


Solution

I think the simplest solution is to run crond (or an alternative of your choice) in a sidecar container (that is, another container in the same pod). Recall that all containers in a pod share the same network namespace, so localhost refers to the same interface in every container.

This means your cron container can happily run a curl or wget command (or whatever else is necessary) to ping your API over the local port.

For example, something like this, in which our cron task simply runs wget against the web server running in the api container:

apiVersion: v1
data:
  # Runs every minute for demonstration; the weekly "nudge" from the
  # question would use a schedule like "0 3 * * 1" instead
  root: |
    * * * * * wget -O /tmp/testfile http://127.0.0.1:8080 2> /tmp/testfile.err
kind: ConfigMap
metadata:
  labels:
    app: cron-example
  name: crontabs-ghm86fgddg
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: cron-example
  name: cron-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cron-example
  template:
    metadata:
      labels:
        app: cron-example
    spec:
      containers:
      # The main application container (a static web server standing in
      # for the real API)
      - image: docker.io/alpinelinux/darkhttpd:latest
        name: api
      # The cron sidecar: install the crontab from the ConfigMap, then
      # exec busybox crond in the foreground so the container exits if
      # crond dies
      - command:
        - /bin/sh
        - -c
        - |
          crontab /data/crontabs/root
          exec crond -f -d0
        image: docker.io/alpine:latest
        name: cron
        volumeMounts:
        - mountPath: /data/crontabs
          name: crontabs
      volumes:
      - configMap:
          name: crontabs-ghm86fgddg
        name: crontabs
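Once applied, you can check that the sidecar is firing. The following is a sketch against a live cluster; cron-example.yaml is an assumed filename for the manifests above:

```shell
# Deploy the example and wait for the pod to become ready
kubectl apply -f cron-example.yaml
kubectl wait --for=condition=Ready pod -l app=cron-example

# crond was started with -d0, so the sidecar's logs should show the
# job executing once a minute
kubectl logs deploy/cron-example -c cron

# The wget output lands inside the cron container's filesystem
kubectl exec deploy/cron-example -c cron -- cat /tmp/testfile
```

Note that /tmp/testfile exists only in the cron container: the containers share a network namespace but not a filesystem, so anything the sidecar needs to hand to the api container has to travel over localhost (as here) or a shared volume.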


Answered By - larsks
Answer Checked By - Clifford M. (WPSolving Volunteer)