# general
m
Yesterday, I recreated a K8s cluster to use some new features from the latest version of `pulumi/gcp`. After recreating the cluster, I started to see the following error message regularly:
kubernetes:core:ConfigMap (api-config-map):
    warning: The provider for this resource has inputs that are not known during preview.
    This preview may not correctly represent the changes that will be applied during an update.
The error is inconsistent, and when it happens a new cluster is created and the previous one is marked to be deleted, which shouldn’t be possible because it has the `protect` flag. The K8s resources from the previous cluster are moved to the new cluster instantly, but they’re not actually created in the new cluster. Also, to make any updates to the stack, the previous cluster needs to be deleted. I’m not sure what is happening. Any ideas?
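For context, the setup is roughly like this (a minimal sketch, not our actual code; apart from `api-config-map`, the resource names and cluster arguments here are assumptions). The k8s provider’s kubeconfig is built from the cluster’s outputs, which I guess is why its inputs show up as unknown during preview whenever Pulumi thinks the cluster is being replaced:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";

// GKE cluster with the protect option so Pulumi refuses to delete it.
const cluster = new gcp.container.Cluster("api-cluster", {
    initialNodeCount: 1,
}, { protect: true });

// Build a kubeconfig from the cluster's outputs. When Pulumi can't resolve
// those outputs during preview, the k8s provider's inputs are unknown too.
const kubeconfig = pulumi
    .all([cluster.name, cluster.endpoint, cluster.masterAuth])
    .apply(([name, endpoint, auth]) => `apiVersion: v1
kind: Config
clusters:
- name: ${name}
  cluster:
    server: https://${endpoint}
    certificate-authority-data: ${auth.clusterCaCertificate}
contexts:
- name: ${name}
  context:
    cluster: ${name}
    user: ${name}
current-context: ${name}
users:
- name: ${name}
  user:
    auth-provider:
      name: gcp
`);

const provider = new k8s.Provider("gke", { kubeconfig });

// The ConfigMap from the warning above; it goes through the provider whose
// inputs depend on the (possibly unknown) cluster outputs.
new k8s.core.v1.ConfigMap("api-config-map", {
    data: { config: "..." },
}, { provider });
```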
After downgrading this project back to version 17.1 of the `pulumi/gcp` package, I had literally the same issue. Another cluster that is running this same `pulumi/gcp` version is working properly. Both clusters are using the same Pulumi version (17.2).
What it looks like is that Pulumi is having issues getting information about the cluster, so it assumes that the cluster has a specific configuration, ignoring the fact that it’s protected and triggering unnecessary changes. I don’t know if that’s what’s happening there.
I tried creating this same cluster about 10 times and ran into the issue every time.
Ah… they have something in common. All the new clusters that are having this issue are using one of the latest Kubernetes versions available in GCP (1.12.5-gke.5).
c
cc @white-balloon-205 @stocky-spoon-28903
s
I think this is likely related to the issues that we’ve seen in CI and elsewhere but have been unable to reproduce. @white-balloon-205, do you think we should roll back the changes in `pulumi-terraform` here?
m
I can confirm that it’s related to the Kubernetes version. I created another cluster using version `1.11.7-gke.12` instead of `1.12.5-gke.5`, and I had no issues after more than 10 deployments.
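For reference, pinning the version in the cluster definition looks roughly like this (just a sketch; the resource name and other arguments are assumptions, not our actual code):

```typescript
import * as gcp from "@pulumi/gcp";

// Pin both the control plane and node pool to the 1.11.x release that has
// been working reliably, instead of letting GKE pick 1.12.5-gke.5.
const cluster = new gcp.container.Cluster("api-cluster", {
    initialNodeCount: 1,
    minMasterVersion: "1.11.7-gke.12",
    nodeVersion: "1.11.7-gke.12",
}, { protect: true });
```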
What it looks like is that Pulumi is having issues getting information about the cluster, so it assumes that the cluster has a specific configuration, ignoring the fact that it’s protected and triggering unnecessary changes.
Is that possible? I believe the main issue here is how Pulumi behaved in that situation. Assuming a specific resource configuration can lead to a lot of unexpected results, and it makes it much more difficult to understand what’s really happening.
w
Opened https://github.com/pulumi/pulumi-gcp/issues/116 to track this so we can get to the bottom of it. It has some similar symptoms to a couple of other issues we've seen recently, so I'm expecting it's related, as @stocky-spoon-28903 notes above.
m
I’m going to follow it from there. Thanks!