handsome-state-59775
05/10/2021, 6:59 PM
In k8s.rbac.v1.ClusterRole, in k8s.rbac.v1.PolicyRuleArgs, if I omit setting api_groups, is it equivalent to:
rules:
- apiGroups: [""] # "" indicates the core API group
ancient-megabyte-79588
05/10/2021, 8:29 PM
Twitter peeps, I need help.
I have an ASP.NET Core host serving gRPC endpoints. This host runs in Kubernetes behind an nginx-ingress-controller, and the ingress controller terminates HTTPS. For the life of me, I cannot reach the gRPC service endpoints.
I'm hoping to find someone I can talk to, or who can point me at examples. My Google-fu for this is failing terribly.
Thanks in advance.
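One thing that often matters for gRPC behind ingress-nginx is telling the controller to speak HTTP/2 to the backend via the backend-protocol annotation, and serving over TLS. A hedged sketch (host, secret, service name, and port are placeholders, not from the original question):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress                  # placeholder
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  tls:
    - hosts: ["grpc.example.com"]     # gRPC through ingress-nginx needs TLS
      secretName: grpc-tls            # placeholder
  rules:
    - host: grpc.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grpc-svc        # placeholder gRPC Service
                port:
                  number: 5000
```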
handsome-state-59775
05/11/2021, 4:55 AM
With k8s.rbac.v1.ClusterRole on Azure AKS, I get this even while recreating a stack from scratch (destroy, then up):
Diagnostics:
<kubernetes:rbac.authorization.k8s.io/v1:ClusterRole> (clusterRole-storage):
error: resource system:azure-cloud-provider was not successfully created by the Kubernetes API server: clusterroles.rbac.authorization.k8s.io "system:azure-cloud-provider" already exists
Is this expected? What should I be doing ideally?
straight-cartoon-24485
05/16/2021, 6:35 AM
pulumi refresh, which would presumably refresh the cluster state based off the Pulumi stack state?
I want to copy over a bunch of running pods from my toy cluster to a live cluster, and skip the deployment-from-scratch workflow altogether; what's interesting to me is the actual state of the app, I want to keep what I was tinkering with in the toy cluster (memory state, PVCs). Kind of like freezing a VM and moving it to another hypervisor, but for an already deployed k8s app...
A non-me use-case could be to clone the state of an k8s app and send it as-is to a friend to run on their cluster (many assumptions withstanding)
Maybe something at the container/pod/resource layer can be used? I suppose I could keep searching for DRP, whole-cluster import/export/backup patterns; not sure if this is a valid use-case in other contexts...
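On the export/backup angle, a hedged sketch of what a Velero-based copy could look like (assuming Velero is installed in both clusters against a shared backup storage location, and the namespace name is a placeholder; note this captures API objects and volumes, not in-memory process state):

```shell
# On the toy cluster: back up the app's namespace
velero backup create my-app-backup --include-namespaces my-app

# On the live cluster: restore from the same backup storage location
velero restore create --from-backup my-app-backup
```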
Maybe I need to think differently about all this; feedback appreciated :-)
lemon-monkey-228
05/18/2021, 8:55 PM
provider, but is there an easier way?
many-psychiatrist-74327
05/19/2021, 7:41 PM
k8s.yaml.ConfigFile.
In simplified terms, I have two YAML files: foo.yaml and bar.yaml, each of which defines multiple resources. The resources in bar.yaml depend on those in foo.yaml. Thus, my Pulumi (TypeScript) code looks something like:
const foo = new k8s.yaml.ConfigFile("foo", { file: "foo.yaml" });
const bar = new k8s.yaml.ConfigFile("bar", { file: "bar.yaml" }, { dependsOn: foo });
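If dependsOn on the whole ConfigFile isn't being awaited as expected, one hedged workaround sketch is to depend directly on the specific resources from foo that bar actually needs, via getResource (the kind and name below are hypothetical, not from the original question):

```typescript
import * as k8s from "@pulumi/kubernetes";

const foo = new k8s.yaml.ConfigFile("foo", { file: "foo.yaml" });

// Hypothetical: bar.yaml needs a CRD that foo.yaml defines; depend on it directly.
const fooCrd = foo.getResource(
    "apiextensions.k8s.io/v1/CustomResourceDefinition", "my-crd");

const bar = new k8s.yaml.ConfigFile(
    "bar", { file: "bar.yaml" }, { dependsOn: fooCrd });
```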
However, when Pulumi runs the update, it starts creating the resources under bar first, and of course they fail. It actually retries them 5 times, and sometimes they'll eventually succeed because the resources in foo got created in the meantime, but the behavior is non-deterministic and fails very often.
Do you know why Pulumi isn't waiting for the foo resources to be created before creating the resources in bar?
purple-plumber-90981
05/20/2021, 3:13 AM
# create cluster resource
eks_cluster = aws.eks.Cluster("itplat-eks-cluster", opts=provider_opts, **eks_cluster_config)
k8s_use1_provider = k8s.Provider(
k8s_use1_provider_name,
cluster=eks_cluster.arn,
context=eks_cluster.arn,
enable_dry_run=None,
namespace=None,
render_yaml_to_directory=None,
suppress_deprecation_warnings=None,
)
# lets have a go at creating a "crossplane-system" namespace
crossplane_namespace = k8s.core.v1.Namespace(
"crossplane-system", opts=pulumi.ResourceOptions(provider=k8s_use1_provider), metadata=k8s.meta.v1.ObjectMetaArgs(name="crossplane-system")
)
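For reference, the provider input above already creates an implicit dependency edge; a hedged sketch of additionally making the cluster dependency explicit, reusing the names from the snippet above:

```python
crossplane_namespace = k8s.core.v1.Namespace(
    "crossplane-system",
    metadata=k8s.meta.v1.ObjectMetaArgs(name="crossplane-system"),
    opts=pulumi.ResourceOptions(
        provider=k8s_use1_provider,
        depends_on=[eks_cluster],  # explicit, on top of the implicit provider edge
    ),
)
```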
this makes the namespace dependent on the provider, which is dependent on eks_cluster??
bumpy-summer-9075
05/20/2021, 1:04 PM
I have nginx-ingress deployed with Pulumi (Helm chart) in an EKS cluster, sitting in front of Node.js pods, and every so often I get "upstream prematurely closed connection while reading response header from upstream", and I have no clue why. Does that ring a bell for anyone?
bored-table-20691
05/20/2021, 3:29 PM
myConfigMap, err := corev1.NewConfigMap(ctx, "my_config_map", &corev1.ConfigMapArgs{ ...
How do I reference that in my Deployment (in this case, in EnvFrom):
...
EnvFrom: corev1.EnvFromSourceArray{
&corev1.EnvFromSourceArgs{
ConfigMapRef: &corev1.ConfigMapEnvSourceArgs{
Name: ... what goes here ? ...
},
},
...
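A hedged sketch of one way to do this (assuming the Pulumi Go SDK's generated output accessors): use the ConfigMap's metadata name output, which should both track Pulumi's auto-naming and give the Deployment an implicit dependency on the ConfigMap:

```go
EnvFrom: corev1.EnvFromSourceArray{
	&corev1.EnvFromSourceArgs{
		ConfigMapRef: &corev1.ConfigMapEnvSourceArgs{
			// References the auto-named ConfigMap created above.
			Name: myConfigMap.Metadata.Name(),
		},
	},
},
```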
(This is with Pulumi 3 btw)
bored-table-20691
05/20/2021, 8:11 PM
kube2pulumi
online site - should I just open GitHub issues for them?
colossal-australia-65039
05/21/2021, 5:23 PM
pulumi preview --diff to not print out anything useful
bored-table-20691
05/21/2021, 6:20 PM
values), things don't get re-run.
In a normal setting I'd just explicitly declare the dependencies (or have them be implicit via the resources), but the Helm chart is a single unit, and it's not clear to me if I can do it via a Transformation.
fresh-hospital-81544
05/25/2021, 4:56 AM
import * as k8s from "@pulumi/kubernetes";
// Deploy the latest version of the stable/wordpress chart.
const wordpress = new k8s.helm.v3.Chart("wpdev", {
repo: "stable",
chart: "wordpress",
version: "9.0.3",
});
// Export the public IP for WordPress.
const frontend = wordpress.getResource("v1/Service", "wpdev-wordpress");
export const frontendIp = frontend.status.loadBalancer.ingress[0].ip;
Is there a way to get this code to run as-is, or must it be modified with apply, like so:
import * as k8s from "@pulumi/kubernetes";
// Deploy the latest version of the stable/wordpress chart.
const wordpress = new k8s.helm.v3.Chart("wpdev", {
repo: "stable",
chart: "wordpress",
version: "9.0.3",
});
// Export the public IP for WordPress.
const frontend = wordpress.apply(wordpress => wordpress.getResource("v1/Service", "wpdev-wordpress"));
export const frontendIp = frontend.apply(frontend=>frontend.status.loadBalancer.ingress[0].ip);
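As far as I can tell (a sketch, not verified against specific versions): the first form should run as-is, because Chart is a ComponentResource rather than an Output, so it has no apply method at all, while getResource already returns an Output-lifted Service on which property access chains work. Any apply would go on the Service output, not the Chart:

```typescript
// getResource returns an Output-lifted Service; property access chains directly.
const frontend = wordpress.getResource("v1/Service", "wpdev-wordpress");
export const frontendIp = frontend.status.loadBalancer.ingress[0].ip;

// Equivalent, with an explicit apply on the Service output (not on the Chart):
export const frontendIpExplicit = frontend.apply(
    svc => svc.status.loadBalancer.ingress[0].ip);
```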
Thanks