worried-knife-31967
08/09/2023, 8:49 PM
damp-salesmen-74351
08/11/2023, 6:40 PM
I created a VPC using pulumi_aws and a cluster using pulumi_eks, but in the end, I received the error "no nodes available to schedule pods".
Here is the code:
https://github.com/omidraha/pulumi_example/blob/main/vpc.py
https://github.com/omidraha/pulumi_example/blob/main/iam.py
https://github.com/omidraha/pulumi_example/blob/main/cluster.py
https://github.com/omidraha/pulumi_example/blob/main/setup.py
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6ff9c46cd8-98sck 0/1 Pending 0 24h
kube-system coredns-6ff9c46cd8-hrj56 0/1 Pending 0 24h
$ kubectl get event -A
NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE
kube-system 38s Warning FailedScheduling pod/coredns-6ff9c46cd8-98sck no nodes available to schedule pods
kube-system 68s Warning FailedScheduling pod/coredns-6ff9c46cd8-hrj56 no nodes available to schedule pods
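For what it's worth, coredns stuck in Pending with "no nodes available to schedule pods" usually means no worker nodes ever joined the cluster (no node group was created, the nodes sit in subnets with no route out, or the node role isn't mapped in). A minimal sketch, untested and assuming the vpc attribute names exposed by vpc.py, that makes the default node group explicit:
import pulumi_eks as eks

# Minimal sketch (untested): give the cluster an explicit default node group so
# worker nodes are actually created and coredns has somewhere to schedule.
# The vpc.* attribute names are assumptions about what vpc.py exports.
cluster = eks.Cluster(
    "cluster",
    vpc_id=vpc.vpc_id,
    private_subnet_ids=vpc.private_subnet_ids,
    public_subnet_ids=vpc.public_subnet_ids,
    instance_type="t3.medium",
    desired_capacity=2,
    min_size=1,
    max_size=3,
)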
damp-salesmen-74351
08/14/2023, 9:03 PM
I tried node_user_data, but it didn't work:
import pulumi_eks as eks

def create_cluster(vpc):
    cluster = eks.Cluster(
        "cluster",
        # other parameters...
        node_user_data="""
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint '${var.cluster_endpoint}' --b64-cluster-ca '${var.cluster_ca_data}' '${var.cluster_name}' --use-max-pods false --kubelet-extra-args '--max-pods=20'
""",
    )
    return cluster
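Note that the ${var.*} placeholders above are Terraform interpolation syntax; Python passes that string through literally, so the bootstrap script never receives real values. A rough alternative sketch, letting pulumi_eks generate the user data and passing the kubelet flag through the node-group options instead; kubelet_extra_args is an assumption based on the node-group options schema, and the IAM/instance-profile wiring is omitted:
import pulumi_eks as eks

# Rough sketch (untested): skip the default node group and create one explicitly,
# letting pulumi_eks build the bootstrap user data itself.
cluster = eks.Cluster(
    "cluster",
    skip_default_node_group=True,
    # other parameters...
)

node_group = eks.NodeGroup(
    "node-group",
    cluster=cluster,
    instance_type="t3.medium",
    desired_capacity=2,
    min_size=1,
    max_size=3,
    kubelet_extra_args="--max-pods=20",  # assumed option name; maps to --kubelet-extra-args
    # instance_profile=...               # IAM wiring omitted for brevity
)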
proud-noon-87466
08/15/2023, 6:10 PM
millions-train-91139
08/22/2023, 6:08 AM
v4 objects! And I ran the old provider, which also didn't update the state.
I had to go to the GitHub state of 6 stacks and update the provider version within the state manually! Crazy.
For some reason GitHub CI prefers the state objects but my Mac doesn't.
Same Pulumi version (v3.78.1); GitHub CI used pulumi/actions@v3.
incalculable-camera-24952
08/23/2023, 2:43 PM
damp-salesmen-74351
08/23/2023, 7:20 PM
I'm trying to set up Fluent Bit with Grafana in Kubernetes using Pulumi through these two Helm charts:
1. Fluent Bit: https://artifacthub.io/packages/helm/fluent/fluent-bit
2. Grafana: https://artifacthub.io/packages/helm/grafana/grafana
Here is an example of my code, but I'm facing pod startup issues due to the configuration:
https://github.com/omidraha/pulumi_example/blob/main/log/log.py
https://github.com/omidraha/pulumi_example/blob/main/log/setup.py
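Hard to say without the pod logs, but one way to isolate whether the startup issues come from the values overrides is to install both charts with near-default values first and add configuration back incrementally. A rough sketch (untested); the "logging" namespace and the empty values dicts are placeholders:
import pulumi_kubernetes as k8s

# Minimal sketch (untested): install both charts with (near-)default values first,
# then layer the Fluent Bit / Grafana configuration back in once the pods come up.
fluent_bit = k8s.helm.v3.Release(
    "fluent-bit",
    chart="fluent-bit",
    repository_opts=k8s.helm.v3.RepositoryOptsArgs(
        repo="https://fluent.github.io/helm-charts",
    ),
    namespace="logging",
    create_namespace=True,
    values={},  # start from chart defaults, then add your Fluent Bit config here
)

grafana = k8s.helm.v3.Release(
    "grafana",
    chart="grafana",
    repository_opts=k8s.helm.v3.RepositoryOptsArgs(
        repo="https://grafana.github.io/helm-charts",
    ),
    namespace="logging",
    create_namespace=True,
    values={},  # e.g. datasources/dashboards once the baseline install is healthy
)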
elegant-activity-51782
08/23/2023, 10:52 PM
@pulumi/eks package?
rapid-furniture-53351
08/24/2023, 2:06 PM
We started seeing warnings on our apps/v1 resources some time ago:
warning: ignoring user-specified value for internal annotation "pulumi.com/autonamed"
I thought it was because our Pulumi versions were out of date, but we've updated now (and ran pulumi up on the new update) and are still seeing the warnings. We are not setting the pulumi.com/autonamed annotation, and we're using auto-naming everywhere.
Here's an example of one of the apps with warnings:
const provider = new k8s.Provider('dm-k8s-provider', {
  kubeconfig,
});

const labels = {
  app: 'dm-cron',
  'scrape-metrics': String(true),
};

export const cronApp = new k8s.apps.v1.Deployment('dm-cron-app', {
  metadata: { labels },
  spec: {
    strategy: { type: 'Recreate' },
    replicas: 1,
    selector: { matchLabels: labels },
    template: {
      metadata: { labels },
      spec: {
        nodeSelector: {
          'cloud.google.com/gke-nodepool': nodePools.misc.name,
        },
        serviceAccountName: dmAppServiceAccountName,
        imagePullSecrets,
        containers: [ {
          name: 'dm-cron-app-container',
          image: image.imageName,
          env: environmentVariables,
          command: [ 'yarn', 'workspace', 'cron', 'start' ],
          ports: [ dmMetricsPort ],
        } ],
      },
    },
  },
}, {
  provider,
});
Using Pulumi CLI v3.78.1, @pulumi/pulumi v3.78.1, @pulumi/kubernetes v4.1.1.
bitter-painter-92241
08/25/2023, 2:38 PM
gray-sunset-78851
08/27/2023, 9:15 PM
Does crd2pulumi generate code that uses Kubernetes provider v4?
dry-autumn-69630
08/28/2023, 3:26 PM
gray-electrician-97832
08/28/2023, 6:03 PM
I'm using apply and Output.all effectively in several places, but I can't figure out how to make a later step depend on the status of EKS itself, or of a k8s deployment or service. I've tried things like:
## note -- depl
something_else(opts=pulumi.ResourceOptions(depends_on=app_depl))
## or
## note -- svc
something_else(opts=pulumi.ResourceOptions(depends_on=app_svc))
## or
app_hostname = app_svc.status.load_balancer.ingress[0].hostname
...
something_else(opts=pulumi.ResourceOptions(depends_on=app_hostname))
gray-electrician-97832
08/28/2023, 6:05 PM
I've tried apply at each dot of the "svc.status.load_balancer.ingress" nested dictionary, all to no avail. What's the recommended way to base a depends_on on a resource that has an evolving status, like a k8s deployment or service? I'm confused and stuck. Any suggestions would be appreciated.
dry-keyboard-94795
09/01/2023, 2:43 PM
dry-keyboard-94795
09/01/2023, 2:48 PM
kubernetes:helm.sh/v3:Release (dp-chiron-app-fission):
error: plugin "helm-git" exited with error
Well, at least I know it's trying to use it 🙂
dry-keyboard-94795
09/01/2023, 2:50 PM
important-leather-28796
09/02/2023, 7:07 PM
I had been using a rolloutStatus typescript fn, but it is no longer working. I see the pulumi addition of local.Command, but it appears it only executes shell commands. Is there a dependency helper I can use that will allow me to procedurally run some code after a deployment rollout is finished? I'm happy to provide my old code, but I'm hoping there is something easier using dependsOn or Output resolution.
dry-keyboard-94795
09/04/2023, 11:45 AM
millions-army-91633
09/05/2023, 6:43 PM
numerous-eye-91601
09/06/2023, 7:50 PM
try giving it a unique name
numerous-train-50906
09/08/2023, 8:57 PM
Does pulumi_eks support creating a fully private EKS cluster (no public API access) with storage classes? Running into a strange issue:
eks:index:VpcCni (staging-eks-vpc-cni):
error: Command failed: kubectl apply -f /tmp/tmp-37930G3QLEJ0E0vMu.tmp
Unable to connect to the server: net/http: TLS handshake timeout
kubernetes:storage.k8s.io/v1:StorageClass (staging-eks-gp2):
error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "https://A5F09297FEECD3B9C32CA.gr7.ca-central-1.eks.amazonaws.com/openapi/v2?timeout=32s": net/http: TLS handshake timeout
kubernetes:core/v1:ConfigMap (staging-eks-nodeAccess):
error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "https://A5F09297FEECD3B9C32CA.gr7.ca-central-1.eks.amazonaws.com/openapi/v2?timeout=32s": net/http: TLS handshake timeout
Resources:
21 unchanged
Duration: 7m42s
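For reference, pulumi_eks does expose the endpoint toggles, but the TLS handshake timeouts above look like the machine running pulumi (and its kubectl) simply can't reach the private API endpoint; with a private-only cluster it has to run inside the VPC or over a VPN/peering/Direct Connect link. A minimal sketch of the flags, assuming everything else stays the same:
import pulumi_eks as eks

# Minimal sketch (untested): private-only API endpoint. The host running
# `pulumi up` must have network access to the VPC, otherwise the VpcCni,
# StorageClass, and ConfigMap steps fail exactly like the timeouts above.
cluster = eks.Cluster(
    "staging-eks",
    endpoint_private_access=True,
    endpoint_public_access=False,
    # other parameters (VPC, subnets, node group)...
)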
able-painter-57976
09/11/2023, 8:58 AM
Our image tag is latest. I've got a running deployment and a newer image, but for Pulumi there is no change. How do I "redeploy" the deployment so it downloads the latest image?
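One common workaround, sketched below: Pulumi only sees the image string, so a tag like latest never produces a diff. Pinning the image by digest is the cleanest fix; failing that, changing anything in the pod template (for example an annotation carrying a build id) forces a new rollout. The "buildId" config key, annotation name, and image below are hypothetical:
import pulumi
import pulumi_kubernetes as k8s

config = pulumi.Config()
# "buildId" is a hypothetical config key: any value that changes when you want a
# rollout works (an image digest or CI build number is typical).
build_id = config.require("buildId")

deployment = k8s.apps.v1.Deployment(
    "app",
    spec={
        "selector": {"matchLabels": {"app": "app"}},
        "template": {
            "metadata": {
                "labels": {"app": "app"},
                # A changed annotation changes the pod template, so Kubernetes rolls
                # the Deployment and (with imagePullPolicy: Always) re-pulls :latest.
                "annotations": {"example.com/build-id": build_id},
            },
            "spec": {
                "containers": [{
                    "name": "app",
                    "image": "registry.example.com/app:latest",  # hypothetical image
                    "imagePullPolicy": "Always",
                }],
            },
        },
    },
)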
full-boots-69133
09/11/2023, 11:55 PM
Locally we use tsh as the credential exec command, and it interacts with a user session (I check this is already logged in before running pulumi), while in CI we have to use tbot to interact with a short-lived identity (Machine ID) generated by tbot via OIDC (in GitHub Actions).
The provider seems to handle changes to the kubeconfig fine as long as the cluster info does not change; however, there is a condition where the provider tries actions against existing resources before the kubeconfig input has settled (via an output that yields a different value depending on whether you are in CI or not), and this can cause it to completely fail when you switch between local and CI. Changing the path is the prime case where this happens, because the provider saves the local system path, which does not exist on other systems.
I have managed to get this to a point where it works (providing the contents instead) but we end up with errors like the following when switching between local and CI:
# on CI after local up:
If browser window does not open automatically, open it by clicking on the link:
http://127.0.0.1:38621/0f9740eb-81b8-4bd3-80e3-010159fec870
# on local after CI up:
ERROR: no such file or directory
ERROR: no such file or directory
all because the provider does not wait for its inputs. Is this a bug?
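For what it's worth, this is roughly the shape of the "provide the contents instead of a path" approach described above, assuming you already have a way to detect CI and to produce the tbot/tsh kubeconfig; generate_ci_kubeconfig and read_local_kubeconfig are hypothetical helpers:
import os
import pulumi_kubernetes as k8s

# Hypothetical helpers: each returns kubeconfig *contents* (not a path), so nothing
# machine-specific like a local file path ends up in the provider's inputs.
in_ci = os.environ.get("CI") == "true"  # assumption: GitHub Actions sets CI=true
kubeconfig = generate_ci_kubeconfig() if in_ci else read_local_kubeconfig()

provider = k8s.Provider(
    "k8s",
    kubeconfig=kubeconfig,  # accepts a string or an Output[str]
)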
worried-knife-31967
09/13/2023, 5:50 PM
worried-knife-31967
09/15/2023, 5:47 PM
tall-lion-84030
09/18/2023, 3:48 PM
hundreds-lunch-5706
09/18/2023, 9:56 PM
dry-autumn-69630
09/20/2023, 4:46 PM
nutritious-petabyte-61303
09/21/2023, 9:53 AM