square-intern-22004
11/13/2019, 12:54 AM
When I run a `pulumi up` in which I have left out needed env vars for my container, so the Pod enters a CrashLoopBackOff, I would expect the command to eventually fail with a non-zero exit code. However, all it does is warn about the CrashLoopBackOff and then complete with exit code 0. In k8s I can see the old revision is still working as expected and the new revision has status CrashLoopBackOff.
Is there something I need to specify, either in the CLI command or my configuration, to get the desired failure exit to occur?

creamy-potato-29402
11/13/2019, 1:09 AM

creamy-potato-29402
11/13/2019, 1:09 AM
`error` in it?

square-intern-22004
11/13/2019, 1:21 AM
$ pulumi up
Previewing update (feature):

     Type                           Name                   Plan     Info
     pulumi:pulumi:Stack            my-deployment-feature
 ~   └─ kubernetes:apps:Deployment  my-deployment          update   [diff: ~spec]

Outputs:
  ~ name: "my-deployment-y96by901" => output<string>

Resources:
    ~ 1 to update
    1 unchanged

Do you want to perform this update? yes
Updating (feature):

     Type                           Name                   Status    Info
     pulumi:pulumi:Stack            my-deployment-feature
 ~   └─ kubernetes:apps:Deployment  my-deployment          updated   [diff: ~spec]

Outputs:
    name: "my-deployment-y96by901"

Resources:
    ~ 1 updated
    1 unchanged

Duration: 2m57s
Permalink: https://app.pulumi.com/<user>/my-deployment/feature/updates/4

$ echo $?
0
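For context, the failure mode described at the top of the thread can be reproduced with a Deployment along these lines. This is a sketch, not taken from the thread; every name in it is invented. The container needs an env var at startup and crashes without it, so the new ReplicaSet never becomes Ready while the old one keeps serving:

```yaml
# Hypothetical manifest (all names invented). The app exits at startup when
# DATABASE_URL is unset, so every new Pod lands in CrashLoopBackOff.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-deployment
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
        - name: my-deployment
          image: my-deployment:feature
          # env:                          # <- the forgotten section
          #   - name: DATABASE_URL
          #     value: postgres://...
```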
square-intern-22004
11/13/2019, 1:26 AM
$ pulumi up
Previewing update (feature):

     Type                           Name                   Plan     Info
     pulumi:pulumi:Stack            my-deployment-feature
 ~   └─ kubernetes:apps:Deployment  my-deployment          update   [diff: ~spec]

Outputs:
  ~ name: "my-deployment-y96by901" => output<string>

Resources:
    ~ 1 to update
    1 unchanged

Do you want to perform this update? yes
Updating (feature):

     Type                           Name                   Status                Info
     pulumi:pulumi:Stack            my-deployment-feature  **failed**            1 error
 ~   └─ kubernetes:apps:Deployment  my-deployment          **updating failed**   [diff: ~spec]; 1 error

Diagnostics:
  pulumi:pulumi:Stack (my-deployment-feature):
    error: update failed

  kubernetes:apps:Deployment (my-deployment):
    error: Plan apply failed: 4 errors occurred:
      * the Kubernetes API server reported that "namespace/my-deployment-y96by901" failed to fully initialize or become live: 'my-deployment-y96by901' timed out waiting to be Ready
      * [MinimumReplicasUnavailable] Deployment does not have minimum availability.
      * Minimum number of live Pods was not attained
      * [Pod namespace/my-deployment-y96by901-775cbd5c9f-zwvvh]: containers with unready status: [my-deployment] -- [ImagePullBackOff] Back-off pulling image "my-deployment:pulumi-missing"

Outputs:
  - name: "my-deployment-y96by901"

Resources:
    1 unchanged

Duration: 10m3s
Permalink: https://app.pulumi.com/<user>/my-deployment/feature/updates/3

$ echo $?
255
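The two exit codes above (0 for the CrashLoopBackOff run, 255 for the ImagePullBackOff run) are what a CI gate would key off. As a sketch of that contract, a wrapper that simply propagates the child command's exit code is enough; in a real pipeline the command would be `pulumi up --yes`, and the `sh -c` stand-ins below are only for illustration:

```python
import subprocess
import sys

def run_update(cmd):
    """Run a deployment command and return its exit code unchanged."""
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # In CI this would be: run_update(["pulumi", "up", "--yes"]).
    # Any nonzero code (e.g. 255 from a failed rollout) fails the pipeline.
    rc = run_update(["sh", "-c", "exit 0"])
    sys.exit(rc)
```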
white-balloon-205
square-intern-22004
11/13/2019, 2:38 AM
I would not expect the `pulumi up` command to pass in that case, as I believe `kubectl rolling-update` or `kubectl apply`, with a rolling update configured in the deployment and crashing pods on the new revision, will eventually fail.

square-intern-22004
11/13/2019, 2:39 AM
I'll test this by making a container with `exit 1` as its entrypoint and deploying it out.

creamy-potato-29402
11/13/2019, 3:23 AM

creamy-potato-29402
11/13/2019, 3:24 AM

creamy-potato-29402
11/13/2019, 3:24 AM

square-intern-22004
11/13/2019, 5:09 AM

square-intern-22004
11/18/2019, 8:25 PM
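The crash test proposed earlier in the thread (deploying a container whose entrypoint is `exit 1`) could be built from an image like this sketch; the base image and tag are assumptions, not from the thread:

```dockerfile
# Hypothetical test image: exits immediately with a failure code, so a
# Deployment rolled out to this image can never reach minimum availability.
FROM busybox:1.31
ENTRYPOINT ["sh", "-c", "exit 1"]
```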