steep-portugal-37539 | 06/04/2021, 1:57 PM

steep-portugal-37539 | 06/07/2021, 4:47 PM
Failed build model due to ingress: waterrecharge/waterrecharge-ingress: none certificate found for host: waterrecharge-pulumi-api-aryeh.tqhosted.com
bored-table-20691 | 06/07/2021, 4:53 PM
pulumi-eks updated as well to handle https://github.com/pulumi/pulumi-eks/issues/566 and https://github.com/pulumi/pulumi-eks/issues/577?

better-shampoo-48884 | 06/08/2021, 8:41 AM

bumpy-laptop-30846 | 06/08/2021, 7:18 PM

bored-table-20691 | 06/09/2021, 1:41 AM
error: pre-step event returned an error: failed to verify snapshot: resource urn:pulumi:ssa-us-west-2::okera-infra-regions::kubernetes:yaml:ConfigFile$kubernetes:core/v1:ServiceAccount::cert-manager/cert-manager-webhook refers to unknown provider urn:pulumi:ssa-us-west-2::okera-infra-regions::pulumi:providers:kubernetes::k8s-ssa-provider::460da6b8-808b-4d03-b8f8-ee2fdc9ec693
I get this during pulumi up -f, but the same issue occurs if I do pulumi refresh.
bored-table-20691 | 06/09/2021, 8:04 PM
I ran pulumi up, and it created resources on my new cluster just fine, but it is failing to delete the old resources since that EKS cluster no longer exists.
1. Is this expected?
2. How should I get out of this situation? The old resources can't really exist by definition anymore since the EKS cluster is gone. pulumi refresh errors out in the same way.
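A hedged sketch of one way out, not an official answer: if the cluster and everything on it are truly gone, the orphaned entries can be removed from the stack's state so Pulumi stops trying to delete them. The URN below is the provider URN from the error above; back up the state before touching it.

```shell
# Keep a backup of the stack state first.
pulumi stack export --file backup.json

# Remove the orphaned provider from state. Resources that referenced it may
# need to be deleted from state first (newer CLIs offer --target-dependents).
pulumi state delete 'urn:pulumi:ssa-us-west-2::okera-infra-regions::pulumi:providers:kubernetes::k8s-ssa-provider'
```

After the state no longer references the dead cluster, pulumi up should proceed without trying to reach it.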
06/10/2021, 11:40 AMerror: no resource plugin 'kubernetes-v1.1.1' found in the workspace or on your $PATH, install the plugin using `pulumi plugin install resource kubernetes v1.1.1`
and by doing so, I get
error: [resource plugin kubernetes-1.1.1] downloading from : 403 HTTP error fetching plugin from <https://get.pulumi.com/releases/plugins/pulumi-resource-kubernetes-v1.1.1-darwin-amd64.tar.gz>
ripe-kite-37642 | 06/11/2021, 4:16 PM
+ │ ├─ kubernetes:helm.sh/v3:Chart iaaksuksouthmm13005-cert-manager created
+ │ │ ├─ kubernetes:core/v1:ServiceAccount nginx-ingress/iaaksuksouthmm13005-cert-manager-cainjector created
+ │ │ ├─ kubernetes:core/v1:ServiceAccount nginx-ingress/iaaksuksouthmm13005-cert-manager created
+ │ │ ├─ kubernetes:core/v1:Service nginx-ingress/iaaksuksouthmm13005-cert-manager **creating failed** 1 error
+ │ │ ├─ kubernetes:core/v1:ServiceAccount nginx-ingress/iaaksuksouthmm13005-cert-manager-webhook created
+ │ │ ├─ kubernetes:core/v1:Service nginx-ingress/iaaksuksouthmm13005-cert-manager-webhook **creating failed** 1 error
+ │ │ ├─ kubernetes:admissionregistration.k8s.io/v1:MutatingWebhookConfiguration iaaksuksouthmm13005-cert-manager-webhook created
+ │ │ ├─ kubernetes:admissionregistration.k8s.io/v1:ValidatingWebhookConfiguration iaaksuksouthmm13005-cert-manager-webhook created
+ │ │ ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole iaaksuksouthmm13005-cert-manager-cainjector created
+ │ │ ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole iaaksuksouthmm13005-cert-manager-controller-issuers created
+ │ │ ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole iaaksuksouthmm13005-cert-manager-controller-clusterissuers created
+ │ │ ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole iaaksuksouthmm13005-cert-manager-controller-certificates created
+ │ │ ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole iaaksuksouthmm13005-cert-manager-controller-orders created
+ │ │ ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole iaaksuksouthmm13005-cert-manager-controller-challenges created
+ │ │ ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole iaaksuksouthmm13005-cert-manager-controller-ingress-shim created
+ │ │ ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole iaaksuksouthmm13005-cert-manager-view created
+ │ │ ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole iaaksuksouthmm13005-cert-manager-edit created
+ │ │ ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole iaaksuksouthmm13005-cert-manager-controller-approve:cert-manager-io created
+ │ │ └─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole iaaksuksouthmm13005-cert-manager-webhook:subjectaccessreviews created
Those services fail to deploy because the deployments haven't been deployed yet. If I re-run, it then attempts the deployments.
straight-cartoon-24485 | 06/12/2021, 7:23 PM
I use config.require('kubernetes:context') to ensure only the expected cluster gets modified by Pulumi, but this seems to be scoped to the stack.
I want to make sure folks who fork my Pulumi program don't mess up their default k8s context, whatever that happens to be on their machines... I'd like them to be explicit about the k8s cluster they want to target.
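One way to get this behavior is to require the context from config and feed it to an explicit first-class Kubernetes provider, so resources never fall back to the ambient kubeconfig default. A minimal sketch, assuming a TypeScript program; the config key and resource names here are hypothetical:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();
// Fail fast unless the user names a kubeconfig context explicitly,
// so a fork never silently modifies someone's default cluster.
const context = config.require("kubeContext");

// An explicit provider pins resources to that context instead of
// whatever the machine's current kubeconfig context happens to be.
const provider = new k8s.Provider("target-cluster", { context });

// Pass the provider to each resource (or to a component via its opts).
const ns = new k8s.core.v1.Namespace("app", {}, { provider });
```

Anyone forking the program then has to run `pulumi config set kubeContext <name>` before an update will do anything.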
straight-cartoon-24485 | 06/13/2021, 12:33 AM

worried-city-86458 | 06/16/2021, 5:34 AM

ripe-shampoo-80285 | 06/19/2021, 1:52 AM
k8s.io/cluster-autoscaler/<cluster-name> = owned
k8s.io/cluster-autoscaler/enabled = TRUE
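The two tags above are the standard cluster-autoscaler auto-discovery tags on a node group's Auto Scaling group. A hedged sketch of setting them with pulumi-eks (cluster and resource names are assumed, not from the thread):

```typescript
import * as eks from "@pulumi/eks";

const clusterName = "my-cluster"; // hypothetical
const cluster = new eks.Cluster(clusterName, { skipDefaultNodeGroup: true });

// cluster-autoscaler's auto-discovery mode finds ASGs by these two tags.
const ng = new eks.NodeGroup("autoscaled", {
    cluster: cluster,
    autoScalingGroupTags: {
        [`k8s.io/cluster-autoscaler/${clusterName}`]: "owned",
        "k8s.io/cluster-autoscaler/enabled": "TRUE",
    },
});
```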
straight-cartoon-24485 | 06/20/2021, 5:57 PM
export const getTokenWith = pulumi.interpolate`kubectl get secret/${dashboardServiceAccount.secrets[0].name} -n kube-system -o go-template='{{.data.token | base64decode}}'`
which "works" by returning:
getTokenWith: "kubectl get secret/admin-user-ys16knlv-token-xpqss -n kube-system -o go-template='{{.data.token | base64decode}}'"
which I then copy-paste to get what I really need to log into the Kubernetes dashboard with a token...
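Instead of exporting a kubectl command to copy-paste, the token itself can be read and decoded in the program. A hedged sketch, assuming the `dashboardServiceAccount` from the snippet above and the @pulumi/kubernetes SDK:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

declare const dashboardServiceAccount: k8s.core.v1.ServiceAccount; // from the snippet above

// Look up the service account's token secret by namespace/name and
// decode the base64 token in-program, so nothing needs copy-pasting.
const tokenSecret = k8s.core.v1.Secret.get(
    "dashboard-token",
    pulumi.interpolate`kube-system/${dashboardServiceAccount.secrets[0].name}`);

// Mark the output secret so it is not shown in plain text.
export const dashboardToken = pulumi.secret(
    tokenSecret.data.apply(d =>
        Buffer.from(d["token"], "base64").toString("utf8")));
```

`pulumi stack output dashboardToken --show-secrets` then prints the token directly.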
proud-pizza-80589 | 06/22/2021, 2:43 PM

proud-pizza-80589 | 06/22/2021, 2:45 PM

ancient-megabyte-79588 | 06/25/2021, 2:21 PM

alert-mechanic-59024 | 06/25/2021, 4:03 PM

better-shampoo-48884 | 06/25/2021, 6:34 PM

better-shampoo-48884 | 06/25/2021, 7:50 PM

better-shampoo-48884 | 06/25/2021, 7:50 PM

better-shampoo-48884 | 06/25/2021, 8:44 PM

straight-cartoon-24485 | 06/26/2021, 1:23 AM

better-shampoo-48884 | 06/26/2021, 8:37 AM

better-shampoo-48884 | 06/26/2021, 8:38 AM

better-shampoo-48884 | 06/26/2021, 8:39 AM

better-shampoo-48884 | 06/26/2021, 9:05 AM
busy-soccer-65968 | 06/28/2021, 7:37 PM
I want to add a taint to my eks.ManagedNodeGroup. I see it under Supporting Types here, but I am not sure how to access it. Am I missing something silly?
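The supporting type isn't constructed separately; it is just the shape of a plain object passed in the args. A hedged sketch (the cluster and the taint values are assumptions, not from the thread):

```typescript
import * as eks from "@pulumi/eks";

declare const cluster: eks.Cluster; // an existing cluster, assumed

// `taints` takes an array of { key, value, effect } objects. Note the
// AWS API spelling of effect ("NO_SCHEDULE"), not kubectl's "NoSchedule".
const mng = new eks.ManagedNodeGroup("tainted", {
    cluster: cluster,
    taints: [{
        key: "dedicated",
        value: "batch",      // hypothetical taint
        effect: "NO_SCHEDULE",
    }],
});
```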
bitter-rain-31542 | 06/29/2021, 12:53 PM

bumpy-summer-9075 | 06/29/2021, 2:59 PM
pulumi up detects changes in a secret and in an annotation (checksum/secret). How do you use ignoreChanges with a Helm chart's sub-resources?
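One known approach: the Chart resource accepts transformations, which run over every rendered object and can attach per-resource options, including ignoreChanges, to matching sub-resources. A hedged sketch; the chart name and the exact annotation path are assumptions:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// A transformation sees each chart object plus its resource options,
// so ignoreChanges can be set only on the sub-resources that need it.
const chart = new k8s.helm.v3.Chart("example", {
    chart: "my-app", // hypothetical chart
    transformations: [
        (obj: any, opts: pulumi.CustomResourceOptions) => {
            if (obj.kind === "Deployment") {
                opts.ignoreChanges = [
                    'spec.template.metadata.annotations["checksum/secret"]',
                ];
            }
        },
    ],
});
```

With that in place, updates to the checksum/secret annotation on the chart's Deployments no longer show up as diffs in pulumi up.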