# kubernetes
I've noticed that pulumi won't restart my pods when I deploy or update a deployment. Should I expect pulumi to run something like
kubectl rollout restart...
or similar? If not, how do you usually go about this?
pulumi applies the updated deployment specs and kubernetes should handle restarting the pods if necessary. what sort of changes are you making that you're expecting restarts for?
After deploying, my pods will still point to the previous version. If I restart the deployment manually, the pods will pull the new images and will finally point to the current version
My deployment is basically this:
const pulumi = require('@pulumi/pulumi')
const docker = require('@pulumi/docker')
const k8s = require('@pulumi/kubernetes')

const stack = pulumi.getStack()
const env = stack.split('-').slice(-1)[0]
const backend = new pulumi.StackReference(`backend-cluster-${env}`)
const registry = backend.getOutput('registry')

const imageName = pulumi.interpolate`${registry.server}/foobar:latest`
const image = new docker.Image('foobar-image', {
  imageName,
  build: {
    context: '../../',
    dockerfile: '../Dockerfile',
    extraOptions: ['--no-cache'],
  },
  registry: {
    server: registry.server,
    username: registry.username,
    password: registry.password,
  },
})
const kubeconfig = backend.getOutput('kubeconfig')
const coreProvider = new k8s.Provider('provider', { kubeconfig })

// eslint-disable-next-line no-unused-vars
const depl = new k8s.apps.v1.Deployment('foobar', {
  metadata: { name: 'foobar' },
  spec: {
    replicas: 3,
    selector: { matchLabels: { app: 'foobar' } },
    template: {
      metadata: {
        labels: {
          app: 'foobar',
        },
        annotations: {
          'dapr.io/enabled': 'true',
          'dapr.io/app-id': 'foobar',
          'dapr.io/app-port': '3000',
          'dapr.io/config': 'tracing',
        },
      },
      spec: {
        terminationGracePeriodSeconds: 30,
        hostname: 'foobar',
        securityContext: { fsGroup: 10001 },
        containers: [
          {
            name: 'foobar',
            image: imageName,
            imagePullPolicy: 'Always',
            ports: [{ containerPort: 3000 }],
            env: [
              {
                name: 'MSSQL_USER',
                value: 'appportal',
              },
            ],
          },
        ],
      },
    },
  },
}, { provider: coreProvider, parent: image })

// eslint-disable-next-line no-unused-vars
const svc = new k8s.core.v1.Service('foobar-svc', {
  metadata: { name: 'foobar' },
  spec: {
    selector: { app: 'foobar' },
    ports: [
      {
        protocol: 'TCP',
        port: 30001,
        targetPort: 3000,
      },
    ],
    type: 'LoadBalancer',
  },
}, { provider: coreProvider, dependsOn: [depl] })
if you instead reference the image object in the deployment, it will update the image tag on each image update. Right now it's just the static `imageName` string (always `:latest`), so kubernetes doesn't see a change in the deployment spec.
//in the deployment:
image: image.imageName
the reason why `image.imageName` works here is because Pulumi adds a unique string at the end for one of the tags it pushes up, so the image tag changes on every build. side note: if you want to reference the tag you explicitly set, you'd do: `image.imageName.apply((_) => image.baseImageName)`
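To make that concrete, here's a minimal sketch of the container spec with that change applied (assuming the same `image` resource and deployment from the program above; this is a fragment of the pod template, not a full program):

```javascript
// Reference the docker.Image resource's output instead of the static string.
// image.imageName resolves to the pushed tag with Pulumi's unique suffix, so
// the deployment spec changes on every build and kubernetes rolls the pods.
containers: [
  {
    name: 'foobar',
    image: image.imageName, // Output<string>, not the ':latest' literal
    imagePullPolicy: 'Always',
    ports: [{ containerPort: 3000 }],
  },
],
```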
so... I should set it to `image: image`?
I want to work only with "latest" if possible, since I don't care about proper versioning at this point
I have some operator logic to rollback automatically to the previous tag, if needed
ah, I see what you mean now
could you give me some hint about another point? what should I use for creating a dependency between the deployment and the image? should I use "dependsOn" or "parent"?
I've read the docs about this but I couldn't grasp the difference
pulumi will actually handle the dependency properly if you just reference the `image.imageName` property in your deployment, so it won't update the deployment until the image is built and pushed. For some reason this doesn't happen with `baseImageName`, which is why I gave that other example above if you wanted to use that in the future
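For what it's worth, a hedged sketch of the difference (reusing `image` and `coreProvider` from the program above): referencing an output creates the dependency implicitly, `dependsOn` forces ordering when no output is referenced, and `parent` also implies a dependency but its real job is grouping resources into a hierarchy:

```javascript
// Implicit dependency: the container consumes image.imageName (an Output of
// the docker.Image resource), so Pulumi waits for the build and push to
// finish before creating or updating the Deployment -- no dependsOn needed.
const depl = new k8s.apps.v1.Deployment('foobar', {
  metadata: { name: 'foobar' },
  spec: {
    replicas: 3,
    selector: { matchLabels: { app: 'foobar' } },
    template: {
      metadata: { labels: { app: 'foobar' } },
      spec: {
        containers: [{ name: 'foobar', image: image.imageName }],
      },
    },
  },
}, { provider: coreProvider })

// Explicit dependency: the Service spec references no output of depl, so if
// it must come after the Deployment you state that with dependsOn. parent
// would also order it after (a child depends on its parent), but parent is
// really about the resource hierarchy (URNs, console grouping).
const svc = new k8s.core.v1.Service('foobar-svc', {
  metadata: { name: 'foobar' },
  spec: {
    selector: { app: 'foobar' },
    ports: [{ port: 30001, targetPort: 3000 }],
    type: 'LoadBalancer',
  },
}, { provider: coreProvider, dependsOn: [depl] })
```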