# kubernetes
b
I’m encountering an issue when adding/removing a service account from a pod spec (in a Deployment):
```go
Spec: &corev1.PodSpecArgs{
    ServiceAccountName: tenant.serviceAccount.Metadata.Name(),
    // ...
},
```
Let’s say I had the above, and then I just deleted the `ServiceAccountName` line. Pulumi seems to recognize that it should do an update, but nothing happens on the Kubernetes side (the Deployment is not updated, the pod does not restart), and Pulumi hangs waiting for something to happen (`[diff: ~spec]; [1/2] Waiting for app ReplicaSet be marked available (1/1 Pods available)`). Any ideas what might be going on?
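For context, a minimal self-contained program of the shape being discussed might look like the sketch below; the resource names, labels, and image are illustrative assumptions rather than details from the thread:

```go
package main

import (
	appsv1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/apps/v1"
	corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/core/v1"
	metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/meta/v1"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Stand-in for the thread's tenant.serviceAccount.
		sa, err := corev1.NewServiceAccount(ctx, "app-sa", &corev1.ServiceAccountArgs{})
		if err != nil {
			return err
		}

		labels := pulumi.StringMap{"app": pulumi.String("app")}
		_, err = appsv1.NewDeployment(ctx, "app", &appsv1.DeploymentArgs{
			Spec: appsv1.DeploymentSpecArgs{
				Selector: &metav1.LabelSelectorArgs{MatchLabels: labels},
				Template: &corev1.PodTemplateSpecArgs{
					Metadata: &metav1.ObjectMetaArgs{Labels: labels},
					Spec: &corev1.PodSpecArgs{
						// Deleting this line is the change under discussion.
						ServiceAccountName: sa.Metadata.Name(),
						Containers: corev1.ContainerArray{
							corev1.ContainerArgs{
								Name:  pulumi.String("app"),
								Image: pulumi.String("nginx"),
							},
						},
					},
				},
			},
		})
		return err
	})
}
```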
b
i'm not entirely sure without the logs from the Kubernetes side, but ultimately your pod/replica is referencing a service account which doesn't exist, so it would never become ready
b
The service account is still there
I never delete it, only the reference to it in the Deployment template
The only thing I’m changing is whether or not it is mentioned in the Deployment template, and Pulumi sees that it should do a change (e.g. add it/remove it), but it doesn’t seem to be communicating this appropriately to Kubernetes.
b
what does the diff look like when you run the update?
b
@billowy-army-68599 what’s the best way to send this? I’ve honestly never really been able to figure out the diffs, it just seems like it prints out some JSON blobs without telling me what’s different 🙂
I’m looking on the Pulumi app
b
are you running `pulumi up`?
b
I am running `pulumi preview` but I can change it to `pulumi up`
b
you should see a `details` option, that'll give you the full diff
b
```
Do you want to perform this update? details
  pulumi:pulumi:Stack: (same)
    [urn=urn:pulumi:tenant-itay7::okera-trial-tenants::pulumi:pulumi:Stack::okera-trial-tenants-tenant-itay7]
    > pulumi:pulumi:StackReference: (read)
        [id=itay/okera-infra-regions/us-west-2]
        [urn=urn:pulumi:tenant-itay7::okera-trial-tenants::pulumi:pulumi:StackReference::itay/okera-infra-regions/us-west-2]
        name: "itay/okera-infra-regions/us-west-2"
    ~ kubernetes:apps/v1:Deployment: (update)
        [id=itay7/cdas-rest-server]
        [urn=urn:pulumi:tenant-itay7::okera-trial-tenants::kubernetes:apps/v1:Deployment::cdas_rest_serverDeployment]
      ~ spec: {
          ~ template: {
              ~ spec: {
                  - serviceAccountName: <null>
                }
            }
        }
```
b
okay so as I said, the serviceAccountName is immutable in the deployment, and it's being removed from the spec, so the restart is expected - why exactly are you removing it? what are you expecting to happen?
b
Sorry, let me clarify - I am expecting it to restart my pod, but it is not doing so.
b
right, but it can't start the pod because it doesn't have a valid service account - you need to have something there
b
Hmm
b
you should try doing `kubectl describe pod <name>` and it'll tell you this
b
When I do a describe after `pulumi up` with the above, it still has the service account
As far as Kubernetes seems to be concerned, Pulumi did no update
OK, let me back up for a second and explain the sequence of events:
1. I had a Deployment with no explicit service account
2. I created a service account, and added it as per the above snippet
3. All is good
4. I removed the service account reference (i.e. back to step 1)
5. I did `pulumi up`, and Kubernetes is showing no update of the pod/Deployment
Sorry if I am being unclear
b
can you show the output of `kubectl describe pod` and `kubectl get pod -o yaml`? feel free to DM it to me in a github gist
s
curious what will happen if you change the service account to "default" instead of removing it in the pulumi config
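Concretely, that suggestion would look something like this in the earlier snippet (a sketch: `pulumi.String("default")` pins the namespace's default service account explicitly instead of omitting the field):

```go
Spec: &corev1.PodSpecArgs{
	// Explicitly reference the namespace's default service account
	// rather than deleting the field outright.
	ServiceAccountName: pulumi.String("default"),
	// ...
},
```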
b
also, can you clarify:
> I did pulumi up, and Kubernetes is showing no update of the pod/Deployment
it seems that's not true, the pod/replicaset is updating but it's not becoming healthy, or at least that's what was described earlier?
b
As far as `kubectl describe deployment` shows - there is no update to the deployment
b
yeah my suspicion is the same as @steep-toddler-94095 - the default created deployment used the namespace's default service account, and now you're removing it, so it can't start
but the replicaset is changing, so it's failing because it can't start
b
That sounds plausible - so basically removing the service account reference I added doesn’t really get me to the before state
(where I had no explicit service account)
b
no, the Kubernetes API will fill in certain fields that you omit on the first deployment
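To see that server-side fill-in, a small client-go sketch (the namespace and deployment name come from the diff above; the label selector and kubeconfig path are assumptions for illustration):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Label selector is a guess; adjust to match your app's pods.
	pods, err := client.CoreV1().Pods("itay7").List(context.Background(),
		metav1.ListOptions{LabelSelector: "app=cdas-rest-server"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Even when the pod template omits serviceAccountName, the
		// ServiceAccount admission controller fills it in on each Pod,
		// typically with "default".
		fmt.Println(p.Name, p.Spec.ServiceAccountName)
	}
}
```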
b
Note that if I manually edit the Deployment to remove `serviceAccount` and `serviceAccountName`, it does what I expect
this is with `kubectl edit`, which is roughly what I’d expect Pulumi to do here as well (i.e. I removed the explicit reference, just delete it and let Kubernetes do its thing)
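Roughly what that manual edit amounts to at the API level is a strategic merge patch in which `null` deletes a key. A sketch, reusing the `client` and imports from the previous snippet plus `k8s.io/apimachinery/pkg/types` (not necessarily what Pulumi itself sends):

```go
// In a strategic merge patch, an explicit null deletes the field, which is
// effectively what removing the lines in `kubectl edit` does.
patch := []byte(`{"spec":{"template":{"spec":{"serviceAccount":null,"serviceAccountName":null}}}}`)
_, err = client.AppsV1().Deployments("itay7").Patch(context.Background(),
	"cdas-rest-server", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
```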
s
i was able to reproduce the behavior you're getting. I see the following in the console but no change to any live state in k8s:
```
updating.    [diff: ~spec]; [1/2] Waiting for app ReplicaSet be marked available (1/1 Pods available)
```
I ran a `pulumi refresh` to get around the issue (edit: on second thought, perhaps the refresh was not needed after the `pulumi up` that I had to manually interrupt, but it's late so i'm not going to re-test until tomorrow). this may warrant filing a bug ticket
b
Yes mike that’s exactly what I saw
s
it's unexpected behavior but hopefully it's not blocking you
b
It’s not - this was more for testing, and as you said `pulumi refresh` resolves it