purple-plumber-90981
05/05/2021, 2:32 AM
# setup EFS CSI driver
k8s_k_efscsi = k8s.kustomize.Directory(
    "itplat-kust-efs-driver",
    directory="https://github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/ecr/?ref=release-1.1",
    resource_prefix="itplat-kust-efs-driver",
    opts=pulumi.ResourceOptions(provider=k8s_use1_provider),
)
This works fine the first time, but on every subsequent pulumi up I get:
Diagnostics:
  pulumi:pulumi:Stack (aws_eks-itplat-aws-eks):
    error: update failed
  kubernetes:storage.k8s.io/v1beta1:CSIDriver (itplat-kust-efs-driver-efs.csi.aws.com):
    warning: This resource contains Helm hooks that are not currently supported by Pulumi. The resource will be created, but any hooks will not be executed. Hooks support is tracked at https://github.com/pulumi/pulumi-kubernetes/issues/555
    warning: storage.k8s.io/v1beta1/CSIDriver is deprecated by storage.k8s.io/v1/CSIDriver.
    error: resource efs.csi.aws.com was not successfully created by the Kubernetes API server : csidrivers.storage.k8s.io "efs.csi.aws.com" already exists
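One possible workaround, sketched only and not tested against this setup: the CSIDriver object has the fixed cluster-scoped name efs.csi.aws.com, so once it exists the kustomization can't create it again. A transformation can drop it from the rendered manifests (the "turn it into an empty List" trick); the helper name below is mine, and the transformations argument is the standard one on kustomize.Directory.
def skip_existing_csidriver(obj, opts):
    # If the cluster-scoped CSIDriver already exists, render it as an empty List so
    # the kustomization no longer tries to create it. (Helper name is illustrative.)
    if obj.get("kind") == "CSIDriver" and obj.get("metadata", {}).get("name") == "efs.csi.aws.com":
        obj["apiVersion"] = "v1"
        obj["kind"] = "List"
        obj["items"] = []

k8s_k_efscsi = k8s.kustomize.Directory(
    "itplat-kust-efs-driver",
    directory="https://github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/ecr/?ref=release-1.1",
    resource_prefix="itplat-kust-efs-driver",
    transformations=[skip_existing_csidriver],
    opts=pulumi.ResourceOptions(provider=k8s_use1_provider),
)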
agreeable-ram-97887
05/05/2021, 10:49 AM
Duplicate resource URN 'urn:pulumi:prod::submission_strategy::kubernetes:helm.sh/v3:Chart$kubernetes:apiregistration.k8s.io/v1:APIService::v1beta1.metrics.k8s.io'; try giving it a unique name
Note that I am already using the resource_prefix argument in my ChartOpts object to separate the resource URNs for each cluster, and except for the one APIService resource this appears to work as expected.
Looking into the docs, I don't see a clear way to specify the APIService resource URN 😕. Anyone have any suggestions?
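Not sure of the root cause, but one workaround to sketch (it assumes only one chart instance actually needs to own that APIService, or that the other copy can be declared separately under an explicitly unique Pulumi name): strip v1beta1.metrics.k8s.io out of one Chart with a transformation so only one instance registers that URN. Chart, resource, and provider names here are placeholders.
def skip_metrics_apiservice(obj, opts):
    # Drop the shared APIService from this chart instance so its URN is only registered once.
    if obj.get("kind") == "APIService" and obj.get("metadata", {}).get("name") == "v1beta1.metrics.k8s.io":
        obj["apiVersion"] = "v1"
        obj["kind"] = "List"

chart_b = k8s.helm.v3.Chart(
    "chart-cluster-b",                        # unique Pulumi name per cluster
    k8s.helm.v3.ChartOpts(
        chart="my-chart",                     # placeholder for whichever chart ships the APIService
        resource_prefix="cluster-b",
        transformations=[skip_metrics_apiservice],
    ),
    opts=pulumi.ResourceOptions(provider=cluster_b_provider),   # provider name assumed
)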
lemon-monkey-228
05/05/2021, 12:11 PM
I'm using k8s.core.v1.Secret.get here as an example, but I get my hand slapped by Pulumi when it runs in an update:
/* Returns auth from a `ServiceAccount`'s associated secret */
export const getServiceAccountAuth = (serviceAccount: ServiceAccount): pulumi.Output<{ token: string, caCert: string }> =>
    pulumi.output(serviceAccount.secrets[0].name).apply(secretName => {
        const secret = k8s.core.v1.Secret.get(secretName, secretName)
        const { token, ['ca.crt']: caCert } = secret.data.get()
        return { token, caCert }
    })
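The hand-slap is almost certainly the .get() call on secret.data, which isn't allowed during a preview/update; staying inside apply avoids it. A minimal sketch of the shape, written in Python to match the other snippets in this thread (the same structure applies in TypeScript); the helper and resource names are illustrative, and the secret name can still come from serviceAccount.secrets[0].name.
import pulumi
import pulumi_kubernetes as k8s

def get_service_account_auth(secret_name: pulumi.Input[str]) -> pulumi.Output:
    secret = k8s.core.v1.Secret.get("sa-token", secret_name)
    # Secret data values are base64-encoded; decode them here if the raw values are needed.
    return secret.data.apply(lambda d: {"token": d["token"], "ca_cert": d["ca.crt"]})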
lemon-monkey-228
05/05/2021, 12:12 PM
purple-plumber-90981
05/06/2021, 5:31 AM
https://www.pulumi.com/docs/reference/pkg/kubernetes/yaml/configfile/
It describes the option of "Using a literal string containing YAML, or a list of such strings", and I was expecting something like k8s.ConfigFile("myconfig", config="""<inline yaml>"""), but I can't see anything like that, only file=.
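If it helps, the literal-string option lives on ConfigGroup rather than ConfigFile (ConfigFile only takes file=). A minimal sketch with a placeholder manifest:
import pulumi_kubernetes as k8s

inline = k8s.yaml.ConfigGroup(
    "myconfig",
    yamls=["""
apiVersion: v1
kind: ConfigMap
metadata:
  name: example
data:
  foo: bar
"""],
)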
bumpy-summer-9075
05/06/2021, 3:20 PM
I changed the data of a configMap and it wants to delete-replace the configMap, and the deployment that refers to it, causing downtime.
--kubernetes:core/v1:ConfigMap: (delete-replaced)
[id=default/project-internal-tools-service-common]
[urn=urn:pulumi:internal-tools.dev::project-internal-tools::kubernetes:core/v1:ConfigMap::common]
[provider=urn:pulumi:internal-tools.dev::project-internal-tools::pulumi:providers:kubernetes::eks::...]
+-kubernetes:core/v1:ConfigMap: (replace)
[id=default/project-internal-tools-service-common]
[urn=urn:pulumi:internal-tools.dev::project-internal-tools::kubernetes:core/v1:ConfigMap::common]
[provider=urn:pulumi:internal-tools.dev::project-internal-tools::pulumi:providers:kubernetes::eks::...]
~ data: {
+ FOO: "bar"
}
++kubernetes:core/v1:ConfigMap: (create-replacement)
[id=default/project-internal-tools-service-common]
[urn=urn:pulumi:internal-tools.dev::project-internal-tools::kubernetes:core/v1:ConfigMap::common]
[provider=urn:pulumi:internal-tools.dev::project-internal-tools::pulumi:providers:kubernetes::eks::...]
~ data: {
+ FOO: "bar"
}
(this configMap was not initially created by Pulumi, it was imported)
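For context, as far as I know the provider models ConfigMap data changes as replacements (which matches the delete-replaced plan above), and with a fixed, imported name that has to be delete-then-create. The usual zero-downtime pattern, assuming Pulumi can own the name, is to let it auto-name the ConfigMap; a sketch:
import pulumi_kubernetes as k8s

# No metadata.name, so Pulumi auto-names it (e.g. common-ab12cd3) and a data change
# becomes create-new -> update-references -> delete-old instead of delete-then-create.
common = k8s.core.v1.ConfigMap("common", data={"FOO": "bar"})

# The Deployment then references the generated name, e.g.:
#   config_map_ref=k8s.core.v1.ConfigMapEnvSourceArgs(name=common.metadata["name"])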
billowy-army-68599
05/06/2021, 7:18 PM
witty-vegetable-61961
05/06/2021, 9:40 PM
faint-dog-16036
05/07/2021, 1:54 PM
Does anyone have a good way to deploy ingress-nginx? I'd prefer not to rely on helm, as it adds another layer of complexity that I'm trying to avoid, but using kube2pulumi on the ingress backend yml feels fairly clunky.
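One middle ground that avoids both helm and hand-converting YAML (a sketch; the manifest URL and version are assumptions, check the ingress-nginx install docs): point k8s.yaml.ConfigFile at the project's published static manifest and let Pulumi manage the rendered resources directly.
import pulumi_kubernetes as k8s

ingress_nginx = k8s.yaml.ConfigFile(
    "ingress-nginx",
    # Version/path assumed; use the current static manifest URL from the ingress-nginx docs.
    file="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml",
)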
colossal-australia-65039
05/07/2021, 8:53 PM
My cert-manager/cert-manager-cainjector resource has been creating... for several minutes already. Why isn't it essentially instant?
colossal-australia-65039
05/07/2021, 10:55 PM
If I created a namespace without a provider set and then later want to set the provider, is there a way to do this without having to recreate the namespace? I've tried updating the statefile directly with the provider I want from the pulumi preview diff, but Pulumi doesn't like my edits.
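Switching the provider normally forces a replace, so rather than hand-editing the statefile, one sequence that should work (a sketch; the namespace and provider names are assumptions, and an import requires the declared inputs to match the live object) is to drop the namespace from state and re-adopt it under the new provider with the import option:
# 1) Remove the old entry from state (URN is illustrative):
#      pulumi state delete 'urn:pulumi:<stack>::<project>::kubernetes:core/v1:Namespace::my-namespace'
# 2) Re-declare the namespace under the new provider and adopt the live object:
ns = k8s.core.v1.Namespace(
    "my-namespace",
    metadata=k8s.meta.v1.ObjectMetaArgs(name="my-namespace"),
    opts=pulumi.ResourceOptions(
        provider=new_provider,        # the provider you want it under (name assumed)
        import_="my-namespace",       # adopt instead of create
    ),
)
# 3) Once `pulumi up` succeeds, remove the import_ option again.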
handsome-state-59775
05/10/2021, 6:59 PM
For a k8s.rbac.v1.ClusterRole, in k8s.rbac.v1.PolicyRuleArgs, if I omit setting api_groups, is it equivalent to:
rules:
- apiGroups: [""] # "" indicates the core API group
ancient-megabyte-79588
05/10/2021, 8:29 PM
Twitter peeps, I need help.
I have an ASP.NET Core host serving up gRPC endpoints. This host runs in Kubernetes behind an nginx-ingress-controller, and the ingress controller terminates HTTPS. For the life of me, I cannot get to the gRPC service endpoints.
I'm hoping to find someone I can talk to, or who can point me at examples. My google-fu for this is failing terribly.
Thanks in advance.
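Not sure it's the whole story, but the common gotcha here is that ingress-nginx speaks plain HTTP/1.1 to the backend unless told otherwise, and gRPC needs HTTP/2 end to end; the backend-protocol annotation plus a TLS host usually sorts it. A sketch of the Ingress side (host, secret, service name, and port are made up):
import pulumi_kubernetes as k8s

grpc_ingress = k8s.networking.v1.Ingress(
    "grpc-ingress",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        annotations={"nginx.ingress.kubernetes.io/backend-protocol": "GRPC"},
    ),
    spec=k8s.networking.v1.IngressSpecArgs(
        tls=[k8s.networking.v1.IngressTLSArgs(hosts=["grpc.example.com"], secret_name="grpc-tls")],
        rules=[k8s.networking.v1.IngressRuleArgs(
            host="grpc.example.com",
            http=k8s.networking.v1.HTTPIngressRuleValueArgs(
                paths=[k8s.networking.v1.HTTPIngressPathArgs(
                    path="/",
                    path_type="Prefix",
                    backend=k8s.networking.v1.IngressBackendArgs(
                        service=k8s.networking.v1.IngressServiceBackendArgs(
                            name="grpc-service",
                            port=k8s.networking.v1.ServiceBackendPortArgs(number=5000),
                        ),
                    ),
                )],
            ),
        )],
    ),
)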
handsome-state-59775
05/11/2021, 4:55 AM
When creating a k8s.rbac.v1.ClusterRole with Azure AKS, I get this even while recreating a stack from scratch (destroy, then up):
Diagnostics:
  kubernetes:rbac.authorization.k8s.io/v1:ClusterRole (clusterRole-storage):
    error: resource system:azure-cloud-provider was not successfully created by the Kubernetes API server : clusterroles.rbac.authorization.k8s.io "system:azure-cloud-provider" already exists
Is this expected? What should I be doing ideally?
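If I'm reading that right, system:azure-cloud-provider is created by AKS itself, so it already exists before Pulumi runs and a destroy/up won't change that. Either leave it unmanaged, or adopt it on the first up with the import option. A sketch; the rules shown are placeholders and must match the live object for the import to succeed:
import pulumi
import pulumi_kubernetes as k8s

azure_cloud_provider = k8s.rbac.v1.ClusterRole(
    "clusterRole-storage",
    metadata=k8s.meta.v1.ObjectMetaArgs(name="system:azure-cloud-provider"),
    # Copy the real rules from: kubectl get clusterrole system:azure-cloud-provider -o yaml
    rules=[k8s.rbac.v1.PolicyRuleArgs(api_groups=[""], resources=["secrets"], verbs=["get", "create"])],
    opts=pulumi.ResourceOptions(import_="system:azure-cloud-provider"),
)
# Remove import_ once the resource is in the stack's state.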
limited-rain-96205
05/12/2021, 6:05 AM
aloof-jelly-80665
05/12/2021, 5:36 PM
sparse-intern-71089
05/13/2021, 9:53 AM
orange-autumn-61493
05/15/2021, 8:09 AM
orange-autumn-61493
05/15/2021, 8:10 AM
orange-autumn-61493
05/15/2021, 8:11 AM
straight-cartoon-24485
05/16/2021, 6:35 AM
Can I use pulumi refresh, which would presumably refresh the cluster state based off the Pulumi stack state?
I want to copy over a bunch of running pods from my toy cluster to a live cluster, and skip the deployment-from-scratch workflow altogether; what's interesting to me is the actual state of the app, I want to keep what I was tinkering with in the toy cluster (memory state, PVCs). Kind of like freezing a VM and moving it to another hypervisor, but for an already deployed k8s app...
A non-me use-case could be to clone the state of a k8s app and send it as-is to a friend to run on their cluster (many assumptions notwithstanding).
Maybe something at the container/pod/resource layer can be used? I suppose I could keep searching for DRP and whole-cluster import/export/backup patterns; not sure if this is a valid use-case in other contexts...
Maybe I need to think differently about all this; feedback appreciated :-)
lemon-monkey-228
05/18/2021, 8:55 PM
provider, but is there an easier way?
lemon-monkey-228
05/18/2021, 8:55 PM
lemon-monkey-228
05/18/2021, 8:56 PM
lemon-monkey-228
05/18/2021, 9:35 PM
narrow-vegetable-60985
05/19/2021, 7:36 PM
many-psychiatrist-74327
05/19/2021, 7:41 PM
I'm having an issue with dependency ordering between two instances of k8s.yaml.ConfigFile.
In simplified terms, I have two yaml files: foo.yaml and bar.yaml, each of which defines multiple resources. The resources in bar.yaml depend on those in foo.yaml. Thus, my pulumi (typescript) code looks something like:
const foo = new k8s.yaml.ConfigFile("foo", { file: "foo.yaml" });
const bar = new k8s.yaml.ConfigFile("bar", { file: "bar.yaml" }, { dependsOn: foo });
However, when pulumi runs the update, it starts creating the resources under bar first... and of course they fail. It actually retries them 5 times, and sometimes they'll eventually succeed because the resources in foo got created in the meantime, but the behavior is non-deterministic and fails very often.
Do you know why pulumi isn't waiting on the foo resources to be created before creating the resources in bar?
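In my experience, dependsOn on a component like ConfigFile doesn't always gate every child of bar on every child of foo; depending on the specific foo resources that bar actually needs has been more reliable. Sketched in Python to match the other snippets in this thread (the TypeScript ConfigFile has the same getResource method); the kinds and names are made up:
import pulumi
import pulumi_kubernetes as k8s

foo = k8s.yaml.ConfigFile("foo", file="foo.yaml")

# e.g. bar needs foo's namespace and CRD to exist first (kinds/names illustrative):
foo_ns = foo.get_resource("v1/Namespace", "foo-system")
foo_crd = foo.get_resource("apiextensions.k8s.io/v1/CustomResourceDefinition", "widgets.example.com")

bar = k8s.yaml.ConfigFile(
    "bar",
    file="bar.yaml",
    opts=pulumi.ResourceOptions(depends_on=[foo_ns, foo_crd]),
)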
purple-plumber-90981
05/20/2021, 3:13 AM
# create cluster resource
eks_cluster = aws.eks.Cluster("itplat-eks-cluster", opts=provider_opts, **eks_cluster_config)
k8s_use1_provider = k8s.Provider(
    k8s_use1_provider_name,
    cluster=eks_cluster.arn,
    context=eks_cluster.arn,
    enable_dry_run=None,
    namespace=None,
    render_yaml_to_directory=None,
    suppress_deprecation_warnings=None,
)
# lets have a go at creating a "crossplane-system" namespace
crossplane_namespace = k8s.core.v1.Namespace(
    "crossplane-system",
    opts=pulumi.ResourceOptions(provider=k8s_use1_provider),
    metadata=k8s.meta.v1.ObjectMetaArgs(name="crossplane-system"),
)
Does this make the namespace dependent on the provider, which is in turn dependent on eks_cluster?
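Right: because the provider's inputs are built from eks_cluster outputs, anything using that provider waits on the cluster, and the namespace waits on the provider. One caveat, as far as I know: cluster= and context= only select entries from an ambient kubeconfig that already has a context named after the ARN. A more self-contained wiring (a sketch, reusing your eks_cluster and provider name variables) builds an explicit kubeconfig from the cluster outputs:
import json
import pulumi
import pulumi_kubernetes as k8s

def make_kubeconfig(args):
    endpoint, ca_data, name = args
    # Minimal kubeconfig that authenticates via `aws eks get-token`.
    return json.dumps({
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{"name": name, "cluster": {"server": endpoint, "certificate-authority-data": ca_data}}],
        "contexts": [{"name": name, "context": {"cluster": name, "user": name}}],
        "current-context": name,
        "users": [{"name": name, "user": {"exec": {
            "apiVersion": "client.authentication.k8s.io/v1beta1",
            "command": "aws",
            "args": ["eks", "get-token", "--cluster-name", name],
        }}}],
    })

kubeconfig = pulumi.Output.all(
    eks_cluster.endpoint,
    eks_cluster.certificate_authority.apply(lambda ca: ca.data),
    eks_cluster.name,
).apply(make_kubeconfig)

k8s_use1_provider = k8s.Provider(k8s_use1_provider_name, kubeconfig=kubeconfig)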
bumpy-summer-9075
05/20/2021, 1:04 PM
I have nginx-ingress deployed with Pulumi (helm chart) in an EKS cluster sitting in front of Node.js pods, and every so often I get an "upstream prematurely closed connection while reading response header from upstream" and I have no clue why. Does that ring a bell to anyone?
bored-table-20691
05/20/2021, 3:29 PM
If I create a ConfigMap like this:
myConfigMap, err := corev1.NewConfigMap(ctx, "my_config_map", &corev1.ConfigMapArgs{ ...
how do I reference it in my Deployment (in this case in EnvFrom)?
...
EnvFrom: corev1.EnvFromSourceArray{
    &corev1.EnvFromSourceArgs{
        ConfigMapRef: &corev1.ConfigMapEnvSourceArgs{
            Name: ... what goes here? ...
        },
    },
...
(This is with Pulumi 3 btw)
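You can feed the ConfigMap's metadata name output straight into the reference; in the Go SDK that should be Name: myConfigMap.Metadata.Name(), which also wires up the dependency. The same shape in Python, to match the other sketches in this thread (names illustrative):
import pulumi_kubernetes as k8s

my_config_map = k8s.core.v1.ConfigMap("my_config_map", data={"FOO": "bar"})

env_from = [k8s.core.v1.EnvFromSourceArgs(
    config_map_ref=k8s.core.v1.ConfigMapEnvSourceArgs(
        name=my_config_map.metadata["name"],   # the generated name as an Output
    ),
)]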
bumpy-summer-9075
05/20/2021, 3:37 PM
bored-table-20691
05/20/2021, 7:53 PM