Issue
Context/why I'm looking for such a strange approach:
The pods where I'm trying to inject the env vars are spawned by a Job, and there are hundreds of them. There is also a ConfigMap in k8s which holds all the env values. I'm doing this to avoid manually declaring hundreds of pods with different env vars.
I have a pod in k8s with env vars defined in the manifest:
spec:
  containers:
    - name: something
      command: ["/bin/bash"]
      args: ["-c", "sleep 60 && source ~/.bashrc && env | grep FOO"]
      env:
        - name: FOO
          value: bar
        - name: FOO2
          value: bar2
I also run another pod, with kubectl and RBAC set up, that executes:
kubectl exec -i FIRST_POD_NAME -- bash -c "echo 'FOO=updated_bar' >> ~/.bashrc"
The first pod uses (the sleep is to ensure the other pod has time to update the env values):
sleep 60 && source ~/.bashrc && env | grep FOO
as its command, and it logs:
FOO=bar
FOO2=bar2
but when I manually enter the pod (bash) and run:
env | grep FOO
I see FOO=updated_bar
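For reference, the interactive result can be reproduced locally without Kubernetes. In bash, a plain assignment to a variable that is already exported still updates the process environment, so sourcing a bashrc-style file does overwrite the FOO that was exported at start (file name below is just for the demo):

```shell
# Simulate the pod: FOO is exported at start (like the manifest env),
# then a bashrc-style file reassigns it without `export`.
export FOO=bar
echo 'FOO=updated_bar' > /tmp/fake_bashrc
source /tmp/fake_bashrc

# Because FOO was already exported, the plain assignment still
# updates the process environment:
env | grep '^FOO='   # FOO=updated_bar
```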
What am I missing here, and how can I update these env vars so the pod uses them in its start command?
Solution
As I was unable to find the reason for this, I made a workaround; if anyone needs to do something similar:
the helper pod simply writes the values to tmp files in the other pods:
kubectl exec -i FIRST_POD_NAME -- bash -c "echo 'updated_bar' > /tmp/foo"
and on startup I read the value from this file and export it as an env var:
# some sleep is needed to let the helper pod create this file
sleep n && export FOO=$(cat /tmp/foo)
This is a poor but working solution for importing env vars declared in a ConfigMap into hundreds of pods.
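The two sides can be sketched and tested locally without kubectl. Note that /tmp/foo must hold only the value (updated_bar), not the KEY=value pair, because the startup command exports the file's raw contents verbatim:

```shell
# Helper side: what `kubectl exec ... bash -c` runs inside the target pod.
# The file holds only the value, since the startup side exports it as-is.
echo 'updated_bar' > /tmp/foo

# Startup side: the pod's container command (sleep shortened for the demo).
sleep 1 && export FOO=$(cat /tmp/foo)
env | grep '^FOO='   # FOO=updated_bar
```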
Answered By - wiktor
Answer Checked By - Pedro (WPSolving Volunteer)