
fresh-librarian-34748

08/30/2021, 3:19 AM
Hi, I found that `pulumi update` always wants to update the `nodeLaunchConfiguration` of `eks:index:Cluster` after a few days, which restarts all k8s nodes when applied. Am I missing some configuration that would pin it? Here are some logs:
pulumi up --cwd infra --stack staging
Previewing update (staging)

View Live: https://app.pulumi.com/Ma233/fiora/staging/previews/29e82764-6253-4f0d-9b22-f571eb3c75b3

     Type                                  Name                          Plan        Info
     pulumi:pulumi:Stack                   fiora-staging                             2 messages
     └─ custom:resource:EKS                jace
        └─ eks:index:Cluster               jace
 +-        ├─ aws:ec2:LaunchConfiguration  jace-nodeLaunchConfiguration  replace     [diff: ~imageId]
 ~         └─ aws:cloudformation:Stack     jace-nodes                    update      [diff: ~templateBody]

Diagnostics:
  pulumi:pulumi:Stack (fiora-staging):
    W0830 11:13:24.710750   94282 transport.go:260] Unable to cancel request for *exec.roundTripper

    W0830 11:13:24.927600   94285 transport.go:260] Unable to cancel request for *exec.roundTripper


Do you want to perform this update? details
  pulumi:pulumi:Stack: (same)
    [urn=urn:pulumi:staging::fiora::pulumi:pulumi:Stack::fiora-staging]
            ++aws:ec2/launchConfiguration:LaunchConfiguration: (create-replacement)
                [id=jace-nodeLaunchConfiguration-6cc66d3]
                [urn=urn:pulumi:staging::fiora::custom:resource:EKS$eks:index:Cluster$aws:ec2/launchConfiguration:LaunchConfiguration::jace-nodeLaunchConfiguration]
                [provider=urn:pulumi:staging::fiora::pulumi:providers:aws::default_4_14_0::9f1d7127-76df-4224-b36c-9fc9ec0e7da3]
              ~ imageId: "ami-075bfc7d8a7e81bc5" => "ami-0d29b23a29ababad8"
            +-aws:ec2/launchConfiguration:LaunchConfiguration: (replace)
                [id=jace-nodeLaunchConfiguration-6cc66d3]
                [urn=urn:pulumi:staging::fiora::custom:resource:EKS$eks:index:Cluster$aws:ec2/launchConfiguration:LaunchConfiguration::jace-nodeLaunchConfiguration]
                [provider=urn:pulumi:staging::fiora::pulumi:providers:aws::default_4_14_0::9f1d7127-76df-4224-b36c-9fc9ec0e7da3]
              ~ imageId: "ami-075bfc7d8a7e81bc5" => "ami-0d29b23a29ababad8"
            ~ aws:cloudformation/stack:Stack: (update)
                [id=arn:aws:cloudformation:us-east-2:145889354582:stack/jace-3b8b0f53/742364d0-f6cc-11eb-a94f-0a909640e19a]
                [urn=urn:pulumi:staging::fiora::custom:resource:EKS$eks:index:Cluster$aws:cloudformation/stack:Stack::jace-nodes]
                [provider=urn:pulumi:staging::fiora::pulumi:providers:aws::default_4_14_0::9f1d7127-76df-4224-b36c-9fc9ec0e7da3]
              ~ templateBody: "\n                AWSTemplateFormatVersion: '2010-09-09'\n                Outputs:\n                    NodeGroup:\n                        Value: !Ref NodeGroup\n                Resources:\n                    NodeGroup:\n                        Type: AWS::AutoScaling::AutoScalingGroup\n                        Properties:\n                          DesiredCapacity: 2\n                          LaunchConfigurationName: jace-nodeLaunchConfiguration-6cc66d3\n                          MinSize: 2\n                          MaxSize: 10\n                          VPCZoneIdentifier: [\"subnet-00863d16d1e2634bd\",\"subnet-0a90bf1f4ab7acfe7\"]\n                          Tags:\n                          \n                          - Key: Name\n                            Value: jace-worker\n                            PropagateAtLaunch: 'true'\n                          - Key: kubernetes.io/cluster/jace\n                            Value: owned\n                            PropagateAtLaunch: 'true'\n                        UpdatePolicy:\n                          AutoScalingRollingUpdate:\n                            MinInstancesInService: '1'\n                            MaxBatchSize: '1'\n                " => output<string>
            --aws:ec2/launchConfiguration:LaunchConfiguration: (delete-replaced)
                [id=jace-nodeLaunchConfiguration-6cc66d3]
                [urn=urn:pulumi:staging::fiora::custom:resource:EKS$eks:index:Cluster$aws:ec2/launchConfiguration:LaunchConfiguration::jace-nodeLaunchConfiguration]
                [provider=urn:pulumi:staging::fiora::pulumi:providers:aws::default_4_14_0::9f1d7127-76df-4224-b36c-9fc9ec0e7da3]
👍 1
billowy-army-68599

08/30/2021, 5:38 PM
hey @fresh-librarian-34748 - this happens because the EKS team periodically updates the AMI and you haven't specified one; we grab the latest AMI by default. You can stop this behaviour by setting the explicit AMI you want to use.
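A minimal sketch of pinning the AMI, assuming the `nodeAmiId` option on `eks.Cluster` from `@pulumi/eks` (the cluster name and sizing values below just mirror the diff in the logs; the AMI ID shown is the one the stack currently has):

```typescript
import * as eks from "@pulumi/eks";

// Pin the worker-node AMI so `pulumi up` stops proposing a
// LaunchConfiguration replacement every time AWS publishes a
// newer EKS-optimized image.
const cluster = new eks.Cluster("jace", {
    desiredCapacity: 2,
    minSize: 2,
    maxSize: 10,
    // Explicit AMI: without this, pulumi-eks resolves the latest
    // recommended image on every preview, producing the ~imageId diff.
    nodeAmiId: "ami-075bfc7d8a7e81bc5",
});

export const kubeconfig = cluster.kubeconfig;
```

With the AMI pinned, node replacements only happen when you change `nodeAmiId` yourself.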
👍 1
breezy-diamond-32138

09/15/2021, 10:01 AM
Bringing up this thread - @billowy-army-68599, is there still a way to update the nodes every once in a while, during a maintenance window? And how can I see the latest recommended AMI, to know whether it has changed?
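For the second question, AWS publishes the recommended EKS-optimized AMI ID through public SSM parameters, so you can check for a new image before each maintenance window (the Kubernetes version `1.21` below is an assumption; substitute your cluster's version):

```shell
# Latest recommended EKS-optimized Amazon Linux 2 AMI for a given
# Kubernetes version, read from AWS's public SSM parameter store.
aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.21/amazon-linux-2/recommended/image_id \
  --query "Parameter.Value" \
  --output text
```

Comparing that value to your pinned AMI tells you when a newer image is available; bumping the pin during maintenance then triggers the rolling node replacement on your schedule rather than on AWS's.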