#general

most-pager-38056

03/23/2019, 7:01 AM
Yesterday, I recreated a K8s cluster to use some new features from the latest version of `pulumi/gcp`. After recreating the cluster, I started to see the following error message regularly:
```
kubernetes:core:ConfigMap (api-config-map):
    warning: The provider for this resource has inputs that are not known during preview.
    This preview may not correctly represent the changes that will be applied during an update.
```
The error is inconsistent, and when it happens a new cluster is created and the previous one is marked for deletion, which shouldn't be possible because it has the `protect` flag. The k8s resources from the previous cluster are moved to the new cluster instantly, but they're not actually created in the new cluster. Also, to make any updates to the stack, the previous cluster needs to be deleted. I'm not sure what is happening. Any ideas?
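For context, protecting a cluster in a Pulumi program looks roughly like this. This is a minimal sketch, not the poster's actual code; the resource name and arguments are hypothetical, and only the `protect` resource option is the point:

```typescript
import * as gcp from "@pulumi/gcp";

// Hypothetical cluster definition. The `protect: true` resource option
// tells the Pulumi engine to refuse any operation that would delete
// this resource, which is why the replacement described above is surprising.
const cluster = new gcp.container.Cluster("api-cluster", {
    initialNodeCount: 1,
}, { protect: true });
```

With `protect` set, a `pulumi up` that plans to delete the cluster should fail with an error rather than proceed, so a replacement being scheduled anyway suggests the engine is not seeing the recorded state it expects.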
After downgrading this project back to version 17.1 of the `pulumi/gcp` package, I had exactly the same issue. Another cluster that is running this same `pulumi/gcp` version is working properly. Both clusters are using the same Pulumi version (17.2).
What it looks like is that Pulumi is having trouble getting information about the cluster, so it assumes the cluster has a specific configuration, ignoring the fact that it's protected and triggering unnecessary changes. I don't know if that's what's happening here.
I tried to create this same cluster about 10 times and had issues every time.
Ah… they do have something in common: all the new clusters that are having this issue are using one of the latest Kubernetes versions available in GCP (1.12.5-gke.5).

creamy-potato-29402

03/23/2019, 5:04 PM
cc @white-balloon-205 @stocky-spoon-28903

stocky-spoon-28903

03/23/2019, 6:07 PM
I think this is likely related to the issues we've seen in CI and elsewhere but have been unable to reproduce. @white-balloon-205, do you think we should roll back the changes in `pulumi-terraform` here?

most-pager-38056

03/23/2019, 6:32 PM
I can confirm that it's related to the Kubernetes version. I created another cluster using version `1.11.7-gke.12` instead of `1.12.5-gke.5` and had no issues after more than 10 deployments.
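The workaround described above can be sketched as pinning the GKE version in the cluster definition. This is an illustrative fragment only; the resource name and node count are hypothetical, and the version strings are the ones mentioned in the thread:

```typescript
import * as gcp from "@pulumi/gcp";

// Hypothetical workaround sketch: pin the master and node versions to the
// older GKE release (1.11.7-gke.12) that did not trigger the unexpected
// replacement, instead of letting GKE default to 1.12.5-gke.5.
const cluster = new gcp.container.Cluster("api-cluster", {
    initialNodeCount: 1,
    minMasterVersion: "1.11.7-gke.12",
    nodeVersion: "1.11.7-gke.12",
}, { protect: true });
```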
What it looks like is that Pulumi is having trouble getting information about the cluster, so it assumes the cluster has a specific configuration, ignoring the fact that it's protected and triggering unnecessary changes.
Is that possible? I believe the main issue here is how Pulumi behaved in that situation: assuming a specific resource configuration can lead to a lot of unexpected results, and it makes it much more difficult to understand what's really happening.

white-balloon-205

03/25/2019, 4:56 PM
Opened https://github.com/pulumi/pulumi-gcp/issues/116 to track this so we can get to the bottom of it. It has some similar symptoms to a couple other issues we've seen recently so I'm expecting it is related as @stocky-spoon-28903 notes above.

most-pager-38056

03/25/2019, 8:26 PM
I’m going to follow from there. Thanks!