# kubernetes


01/20/2021, 9:14 PM
We are seeing some really strange behavior deploying our apps. The manifest we are deploying, using the resources from Pulumi, has a couple of secrets defined, which are mapped as environment variables in the Deployment resource. Every now and then one secret is not included in the deployment manifest; running the exact same Pulumi script again (sometimes it requires multiple retries) will render a different deployment manifest that does include the secret. We don't have any conditional logic around the secret, and it is deployed to the same stack we are talking about here. Has anyone experienced something similar?
This is super annoying; it happens in about 30-50% of our deployments. The way we do it is:
1. Create args for a Deployment
2. Manipulate the args until satisfied
3. Create the Deployment resource with the args
It almost looks like there is some kind of race condition, since not all of the environment variables we defined in step 2 are added every time.
Downloading the stack from the permalink defined in the deployment actually shows that the missing environment variables should be on the container... but then the provider might be doing something weird.
Not sure, though, whether that link shows what is being deployed or what is currently deployed.
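The three steps above can be sketched in plain TypeScript (a hypothetical illustration, not the real Pulumi SDK or our actual code): if the resource snapshots its args at construction time, any args manipulation that slips in *after* the resource is created never reaches the rendered manifest, which looks exactly like a flaky deploy.

```typescript
// Hypothetical sketch: a resource that snapshots its args on construction.
interface EnvVar { name: string; valueFrom: string }
interface DeploymentArgs { env: EnvVar[] }

class Deployment {
    private readonly snapshot: DeploymentArgs;
    constructor(args: DeploymentArgs) {
        // Copy the env list at construction time; later mutations of the
        // caller's args object are ignored by this resource.
        this.snapshot = { env: [...args.env] };
    }
    manifestEnv(): string[] {
        return this.snapshot.env.map(e => e.name);
    }
}

// Step 1: create args.
const args: DeploymentArgs = {
    env: [{ name: "DB_HOST", valueFrom: "secret:db-host" }],
};
// Step 3 before step 2 finishes: create the resource too early.
const dep = new Deployment(args);
// A late step-2 mutation: this env var never reaches the manifest.
args.env.push({ name: "DB_PASS", valueFrom: "secret:db-pass" });

console.log(dep.manifestEnv());
```

If the late mutation only happens on some runs (e.g. it races with an async lookup of the secret), the rendered manifest would nondeterministically miss the env var, matching the 30-50% failure rate described above.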


01/21/2021, 12:53 PM
Hi Tomas, could you please open a GitHub issue for this?


01/21/2021, 1:28 PM
Sure. I don't know how to reproduce it in a small test case, though; I can just write up what we experience. I'm also available to show what we see in our logs and to share our code if needed.
I'm sort of stuck with this super annoying issue. Has anyone experienced something similar? The env variables we set on a k8s Deployment, which reference some k8s Secrets, don't always seem to be included in the deploy. If we deploy again it sometimes works, so it seems to be some kind of race condition, or some odd transformation of the manifest that is going on.