few-pillow-1133
06/23/2022, 3:40 PM
initialize discovery client: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
stacks: pulumi/pulumi:3.35.0
victorious-exabyte-70545
06/23/2022, 6:23 PM
kubernetes:helm.sh/v3:Release (ingress-nginx-helm):
  error: uninstall: Release not loaded: ingress-nginx-helm-zy7cpt64: release: not found
few-pillow-1133
06/23/2022, 8:37 PM
victorious-exabyte-70545
06/23/2022, 9:33 PM
loud-carpenter-77875
06/24/2022, 11:16 AM
hallowed-intern-40532
06/26/2022, 10:12 AM
var eks = new Cluster($"{clusterName}", new ClusterArgs
{
    Name = $"{clusterName}",
    Version = config.Require("eks_version"),
    VpcId = VpcId,
    PrivateSubnetIds = PrivateSubnetIds,
    PublicSubnetIds = PublicSubnetIds,
    EndpointPrivateAccess = true,
    EndpointPublicAccess = true,
    NodeAssociatePublicIpAddress = false,
    NodeRootVolumeType = "gp3",
    StorageClasses = "gp3",
    SkipDefaultNodeGroup = true,
    ServiceRole = clusterRole,
    InstanceRole = instanceRole,
    EnabledClusterLogTypes =
    {
        "api",
        "audit",
        "authenticator",
        "controllerManager",
        "scheduler"
    },
    EncryptionConfigKeyArn = eksKmsKey.Arn,
    ProviderCredentialOpts = new KubeconfigOptionsArgs
    {
        ProfileName = $"{awsProfile}",
        RoleArn = adminRoleArn
    },
    UseDefaultVpcCni = true,
    InstanceType = config.Require("main_instance_type"),
    CreateOidcProvider = true,
    KubernetesServiceIpAddressRange = config.Require("k8s_service_cidr_block"),
    PublicAccessCidrs = config.RequireObject<List<string>>("k8s_public_access_cidr_block")
});

var eksProvider = new k8s.Provider("eksProvider", new k8s.ProviderArgs
{
    KubeConfig = eks.GetKubeconfig()
});
steep-portugal-37539
06/27/2022, 11:09 PM
Diagnostics:
  kubernetes:core/v1:ConfigMap (tezos-aws-tutorial-nodeAccess):
    error: failed to initialize discovery client: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
I’ve updated the k8s version on AWS and run pulumi refresh. I’ve also updated my Pulumi packages to the latest versions, as well as aws-cli and kubectl. (Although it seems kubectl 1.24 is broken, so I went back down to 1.22.)
I’ve also manually modified my stack’s state: I changed "apiVersion": "client.authentication.k8s.io/v1alpha1" to use v1beta1, and changed the EKS provider version to "version": "0.41.0".
No matter what I do, Pulumi gives me the error, and pulumi stack output kubeconfig --show-secrets -j still shows the old kubeconfig version using alpha.
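One workaround users have reported for a stale exec apiVersion baked into stack state is to rewrite the state and re-import it. A sketch (back up the export first; file names here are illustrative):

```shell
# Back up the current state
pulumi stack export --file state-backup.json

# Rewrite the deprecated exec apiVersion in a copy of the state
sed 's|client.authentication.k8s.io/v1alpha1|client.authentication.k8s.io/v1beta1|g' \
    state-backup.json > state-fixed.json

# Import the fixed state back into the stack
pulumi stack import --file state-fixed.json
```

After the import, a pulumi up with an updated @pulumi/eks should regenerate the kubeconfig output.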
steep-portugal-37539
06/27/2022, 11:09 PM
steep-portugal-37539
06/27/2022, 11:10 PM
The aws update-kubeconfig command doesn’t seem to help me either.
most-mouse-38002
06/28/2022, 11:33 AM
Is there a way to run kubectl annotate namespace default foo=bar without using local.Command and kubectl? I have looked for a way of fetching an existing namespace, but apparently it has to have been created by Pulumi. There is also no way of editing Kubernetes native resources (such as namespaces) that I could find. Any help would be welcomed! 🙂 (Adding a link to the docs for what I am trying to achieve with Pulumi, which we have been using Terraform for so far.)
happy-raincoat-89168
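One possible approach for the question above: recent pulumi-kubernetes versions (3.20+, with server-side apply enabled on the provider) expose Patch resources that can modify objects Pulumi did not create. A sketch, assuming server-side apply is available in your setup:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Add an annotation to the pre-existing "default" namespace
// (not created by Pulumi) via server-side apply.
const annotateDefault = new k8s.core.v1.NamespacePatch("default-annotations", {
    metadata: {
        name: "default",
        annotations: {
            foo: "bar",
        },
    },
});
```

Deleting the patch resource later only removes the fields it managed, not the namespace itself.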
06/30/2022, 6:10 PM
I can use k8s.yaml.ConfigFile to read from a file and apply it, but if possible I’d like to skip the file and just use text that I specify in my code.
freezing-quill-32178
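For inline manifest text, k8s.yaml.ConfigGroup accepts literal YAML through its yaml argument, so no intermediate file is needed. A minimal sketch:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Apply manifest text defined inline in the program,
// without writing it to a file first.
const manifests = new k8s.yaml.ConfigGroup("inline-manifests", {
    yaml: `
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  greeting: hello
`,
});
```

The yaml argument also accepts an array of strings, each containing one or more YAML documents.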
07/01/2022, 8:50 AM
Diagnostics:
  pulumi:pulumi:Stack (usermanagement-svc-deploy-usermanagement-svc-dev):
    W0701 10:41:26.074241 57840 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
    To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
I’m getting this warning because the GKE cluster kubeconfig was created and exported as an output a while back. Is there a way to force Pulumi to regenerate it with the new GKE auth plugin?
I’ve set up the new auth plugin locally and kubectl is working fine; USE_GKE_GCLOUD_AUTH_PLUGIN is set as well, but that only covers local kubectl/terminal usage.
Any idea what has to be done on the Pulumi side so as not to break the connection to the GKE cluster while updating/migrating to K8s v1.25?
best-appointment-51810
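For reference, the usual fix is to rebuild the kubeconfig the stack exports so its users entry execs gke-gcloud-auth-plugin instead of the deprecated in-tree gcp auth-provider. A sketch built from cluster outputs (the `cluster` variable and naming are illustrative, assuming a gcp.container.Cluster):

```typescript
import * as pulumi from "@pulumi/pulumi";

// Regenerated kubeconfig: the `users` entry uses the external
// gke-gcloud-auth-plugin rather than the removed gcp auth-provider.
const kubeconfig = pulumi
    .all([cluster.name, cluster.endpoint, cluster.masterAuth])
    .apply(([name, endpoint, auth]) => `apiVersion: v1
kind: Config
clusters:
- name: ${name}
  cluster:
    server: https://${endpoint}
    certificate-authority-data: ${auth.clusterCaCertificate}
contexts:
- name: ${name}
  context:
    cluster: ${name}
    user: ${name}
current-context: ${name}
users:
- name: ${name}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      provideClusterInfo: true
`);

export const kubeconfigOutput = kubeconfig;
```

Re-running pulumi up then replaces the exported kubeconfig, so anything consuming the stack output picks up the new auth flow.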
07/01/2022, 6:35 PM
error: configured Kubernetes cluster is unreachable: failed to parse kubeconfig data in `kubernetes:config:kubeconfig`- couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
best-appointment-51810
07/01/2022, 6:35 PM
const procCluster = new linode.LkeCluster(label(date), {
    k8sVersion: "1.23",
    label: date,
    pools: [{
        count: 3,
        // https://api.linode.com/v4/linode/types
        type: "g6-standard-2",
    }],
    region: "us-central",
    tags: ["prod"],
});
best-appointment-51810
07/01/2022, 6:36 PM
const lkeProvider = new k8s.Provider("date", {
    kubeconfig: procCluster.kubeconfig
})
cool-wall-66940
07/03/2022, 1:08 PM
# Generate the cluster itself
k8s_cluster = KubernetesCluster(
    resource_name=K8S_CLUSTER_NAME,
    name=K8S_CLUSTER_NAME,
    region=K8S_REGION,
    version=K8S_VERSION,
    node_pool=KubernetesClusterNodePoolArgs(
        name=K8S_NODE_POOL_NAME,
        node_count=K8S_NODE_COUNT,
        size=K8S_NODE_SIZE,
    ),
    ha=K8S_HIGH_AVAILABILITY_PLANE,
)
# Get provider for Kubernetes cluster
k8s_provider = Provider(
    resource_name=K8S_CLUSTER_NAME,
    kubeconfig=k8s_cluster.kube_configs[0].raw_config,
    opts=pulumi.ResourceOptions(parent=k8s_cluster),
)
# Install nginx ingress controller with Helm chart and wait for it so load balancer gets IP address
release_args = ReleaseArgs(
    name="ingress-nginx",
    chart="ingress-nginx",
    repository_opts=RepositoryOptsArgs(
        repo="https://kubernetes.github.io/ingress-nginx"
    ),
    values=INGRESS_CONTROLLER_HELM_VALUES,
    skip_await=False,
)
# noinspection PyArgumentList
release = Release(
    resource_name="ingress-nginx",
    name="ingress-nginx",
    args=release_args,
    opts=ResourceOptions(
        provider=k8s_provider
    ),
    timeout=1000,
    skip_await=False,
)
status = release.status
# srv = Service.get(id=release.name, resource_name="ingress-nginx-controller")
srv = Service.get(
    f"{release.status.name}-controller",
    Output.concat(release.status.name, "-controller")
)
pulumi.export("externalIPs", srv.spec.external_ips)
pulumi.export("status", status)
I will attach the error screenshot, along with an image showing that it actually created the ingress-nginx-controller service.
Error message:
Diagnostics:
  pulumi:pulumi:Stack (k8s_init-dev):
    error: update failed
  kubernetes:core/v1:Service (Calling __str__ on an Output[T] is not supported.
  To get the value of an Output[T] as an Output[str] consider:
  1. o.apply(lambda v: f"prefix{v}suffix")
  See https://pulumi.io/help/outputs for more details.
  This function may throw in a future version of Pulumi.-controller):
    error: resource 'ingress-nginx-controller' does not exist
I hope somebody can help me out and guide me. Thanks a lot!
Best regards,
Refik
most-lighter-95902
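A likely fix for the Output error above: the resource_name passed to Service.get must be a plain string (formatting an Output into an f-string is what triggers the "Calling __str__ on an Output[T]" message), and the id should be "namespace/name" built lazily from the release outputs. A sketch under those assumptions:

```python
import pulumi
from pulumi import Output
from pulumi_kubernetes.core.v1 import Service

# resource_name is a static string; the id is "<namespace>/<name>",
# assembled from release outputs without ever stringifying an Output.
srv = Service.get(
    "ingress-nginx-controller",
    Output.concat(
        release.status.namespace, "/", release.status.name, "-controller"
    ),
    opts=pulumi.ResourceOptions(provider=k8s_provider),
)

pulumi.export("externalIPs", srv.spec.external_ips)
```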
07/04/2022, 12:01 AM
most-lighter-95902
07/04/2022, 12:02 AM
Is there a way to use valueYamlFiles and values together, where the valueYamlFiles yaml references some values from the values object using this kind of syntax:
most-lighter-95902
07/04/2022, 12:02 AMstorage:
# -- Sets the storage type. Supported values are sandbox, s3, gcs and custom.
type: s3
# -- bucketName defines the storage bucket flyte will use. Required for all types except for sandbox.
bucketName: "{{ .Values.userSettings.bucketName }}"
s3:
region: "{{ .Values.userSettings.accountRegion }}"
most-lighter-95902
07/04/2022, 12:03 AM
most-lighter-95902
07/04/2022, 12:03 AM
new k8s.helm.v3.Release(
    flyteHelmReleaseName,
    {
        name: flyteHelmReleaseName,
        namespace: flyteNsName,
        createNamespace: true,
        chart: 'flyte',
        version: 'v1.1.0-beta.5',
        repositoryOpts: {
            repo: 'https://helm.flyte.org',
        },
        valueYamlFiles,
        values: {
            userSettings: {
                accountRegion: awsRegion,
                bucketName: flyteBucketName,
                ...
            }
        },
        ...
most-lighter-95902
07/04/2022, 12:04 AM
famous-salesclerk-74711
07/05/2022, 6:07 PM
famous-salesclerk-74711
07/05/2022, 8:31 PM
Is there any way for pulumi_kubernetes resources to pull from a default eks_provider without explicitly passing one in the ResourceOptions? Kind of like how environment variables can auto-configure resources with the aws_provider?
helpful-morning-53046
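On the default-provider question: when no provider is passed, the default Kubernetes provider reads the ambient kubeconfig, i.e. the KUBECONFIG environment variable or ~/.kube/config, or the `kubernetes:kubeconfig`/`kubernetes:context` stack config values if set. A minimal sketch:

```python
import pulumi_kubernetes as k8s

# With KUBECONFIG pointing at the EKS cluster's kubeconfig (e.g. written
# by `aws eks update-kubeconfig`), no provider= is needed in the
# ResourceOptions -- the default provider picks it up.
ns = k8s.core.v1.Namespace("example")
```

The caveat is that the ambient kubeconfig must already exist and point at the right cluster before pulumi up runs.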
07/06/2022, 4:55 PM
I can’t set nodeGroupOptions, as I get the error "Setting nodeGroupOptions, and any set of singular node group option(s) on the cluster, is mutually exclusive. Choose a single approach." when trying to spin up a NodeGroup (or ManagedNodeGroup) alongside the existing one.
Does anyone have any advice on how to perform a zero-downtime worker node upgrade, i.e. without having to tear them all down?
important-leather-28796
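One pattern for rolling node upgrades (a sketch, not tested against the setup above, and assuming the cluster has an instance role the node group can reuse): create the cluster with skipDefaultNodeGroup and no singular node-group options, so node groups live as separate resources that can be stood up and torn down independently:

```typescript
import * as eks from "@pulumi/eks";

// No default node group and no per-cluster node options, so node groups
// are managed as standalone resources.
const cluster = new eks.Cluster("cluster", {
    skipDefaultNodeGroup: true,
    // ...vpc, subnets, roles, etc.
});

// Stand up the replacement group alongside the old one; after draining
// workloads onto it, delete the old group in a later update for a
// rolling, zero-downtime upgrade.
const nodesV2 = new eks.ManagedNodeGroup("nodes-v2", {
    cluster: cluster,
    instanceTypes: ["t3.large"],
    scalingConfig: {
        desiredSize: 3,
        minSize: 3,
        maxSize: 6,
    },
});
```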
07/06/2022, 6:16 PM
The file downloaded by curl https://github.com/cert-manager/cert-manager/releases/download/v1.8.2/cert-manager.yaml -O is a zero-byte file. Pulumi just started reporting this with a not-so-great error. Is there a workaround for this?
error: TypeError: Cannot read properties of undefined (reading 'map')
    at /Users/kross/projects/archetype/node_modules/@pulumi/yaml/yaml.ts:2993:14
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
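On the zero-byte download specifically: GitHub release asset URLs answer with an HTTP redirect, and curl -O saves the empty redirect body unless told to follow it. A likely fix:

```shell
# -L follows the redirect to the actual asset; -O keeps the remote filename
curl -LO https://github.com/cert-manager/cert-manager/releases/download/v1.8.2/cert-manager.yaml
```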
victorious-exabyte-70545
07/06/2022, 6:24 PM
crooked-laptop-67565
07/06/2022, 7:51 PM
What’s the best way to create a kubernetes.Provider for an EKS cluster? I have two Pulumi projects: one that sets up infra like k8s and databases, and a second one for apps that deploy to Kubernetes and use the db, etc. The app project needs to work with a Kubernetes provider (e.g. when I define a Service), like the one that’s available from new eks.Cluster(...). But the app code isn’t doing the cluster creation, so it presumably needs to create that provider for itself.
(I’m new to both Pulumi and k8s, so please let me know if any part of what I said doesn’t make sense or seems wrong.)
victorious-exabyte-70545
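A common pattern for this split is a StackReference: the infra project exports the kubeconfig, and the app project pulls it across projects to build its own provider. A sketch (the stack name "my-org/infra/dev" is illustrative):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// In the infra project:
//   export const kubeconfig = cluster.kubeconfig;

// In the app project, reference that stack's outputs:
const infra = new pulumi.StackReference("my-org/infra/dev");

const eksProvider = new k8s.Provider("eks", {
    // eks.Cluster's kubeconfig output is a JSON object; the provider
    // wants a string (skip the stringify if the export is already one).
    kubeconfig: infra.getOutput("kubeconfig").apply(JSON.stringify),
});

// Then pass it explicitly to app resources:
// new k8s.core.v1.Service("svc", {...}, { provider: eksProvider });
```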
07/06/2022, 8:22 PM
most-lighter-95902
07/06/2022, 11:21 PM
Is there a way to use valueYamlFiles and values together, where the valueYamlFiles yaml references some values from the values object using this kind of templating syntax:
storage:
  type: s3
  bucketName: "{{ .Values.userSettings.bucketName }}"