famous-twilight-87777
09/21/2020, 10:41 AM
famous-twilight-87777
09/21/2020, 10:41 AM
const assoc_bastion_ip = new os.networking.FloatingIpAssociate("associate_bastion_fip", {
    floatingIp: bastion_ip,
    portId: bastion.networks[0].port
});
famous-twilight-87777
09/21/2020, 10:42 AM
famous-twilight-87777
09/21/2020, 10:42 AM
Diagnostics:
openstack:networking:FloatingIpAssociate (associate_bastion_fip):
error: openstack:networking/floatingIpAssociate:FloatingIpAssociate resource 'associate_bastion_fip' has a problem: Missing required property 'portId'
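The property is set in the program, so the likelier failure is that bastion.networks[0].port resolved to undefined at runtime; in the OpenStack provider that attribute is typically only populated when the instance is attached through a pre-created port. A hedged sketch of that approach (the bastion_port name and the network variable are illustrative, and the rest of the instance config is elided):

```typescript
// Create the port explicitly so its ID is a known output, then attach
// both the instance and the floating IP to it.
const bastionPort = new os.networking.Port("bastion_port", {
    networkId: network.id,
});
const bastion = new os.compute.Instance("bastion", {
    // ...image, flavor, and the rest of the instance config elided...
    networks: [{ port: bastionPort.id }],
});
const assoc_bastion_ip = new os.networking.FloatingIpAssociate("associate_bastion_fip", {
    floatingIp: bastion_ip,
    portId: bastionPort.id,  // known to be set, unlike networks[0].port
});
```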
elegant-island-39916
09/21/2020, 11:00 AM
pulumi_kubernetes
)
pulumi for k8s dashboard helm chart:
k8s_dash = helm.Chart(
    "kubernetes-dashboard",
    config=helm.ChartOpts(
        repo="stable",
        chart="kubernetes-dashboard",
        version="1.5.2",
        namespace="kube-system",
    ),
    opts=pulumi.ResourceOptions(
        providers={"kubernetes": cluster_provider}
    ),
)
output:
Diagnostics:
kubernetes:rbac.authorization.k8s.io:Role (kube-system/kubernetes-dashboard):
warning: rbac.authorization.k8s.io/v1beta1/Role is deprecated by rbac.authorization.k8s.io/v1/Role and not supported by Kubernetes v1.22+ clusters.
kubernetes:rbac.authorization.k8s.io:RoleBinding (kube-system/kubernetes-dashboard):
warning: rbac.authorization.k8s.io/v1beta1/RoleBinding is deprecated by rbac.authorization.k8s.io/v1/RoleBinding and not supported by Kubernetes v1.22+ clusters.
kubernetes:extensions:Deployment (kube-system/kubernetes-dashboard):
error: apiVersion "extensions/v1beta1/Deployment" was removed in Kubernetes 1.16. Use "apps/v1/Deployment" instead.
See https://git.k8s.io/kubernetes/CHANGELOG/CHANGELOG-1.16.md#deprecations-and-removals for more information.
Where can I adjust the apiVersion for Pulumi to avoid this issue?
adamant-byte-68515
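One place to adjust this is a chart transformation: helm.ChartOpts accepts a transformations list of functions that can rewrite every rendered manifest before Pulumi registers it. A minimal sketch follows; the mapping table is illustrative, and rewriting the apiVersion string alone may not satisfy apps/v1 (a Deployment there requires a selector), so upgrading to a newer chart version is usually the more robust fix.

```python
# Illustrative transformation: rewrite removed/deprecated apiVersions.
# Hook it up with: helm.ChartOpts(..., transformations=[fix_api_versions])
API_VERSION_FIXES = {
    ("extensions/v1beta1", "Deployment"): "apps/v1",
    ("rbac.authorization.k8s.io/v1beta1", "Role"): "rbac.authorization.k8s.io/v1",
    ("rbac.authorization.k8s.io/v1beta1", "RoleBinding"): "rbac.authorization.k8s.io/v1",
}

def fix_api_versions(obj, opts=None):
    # Each rendered manifest arrives as a plain dict; mutate it in place.
    new_version = API_VERSION_FIXES.get((obj.get("apiVersion"), obj.get("kind")))
    if new_version is not None:
        obj["apiVersion"] = new_version

manifest = {"apiVersion": "extensions/v1beta1", "kind": "Deployment"}
fix_api_versions(manifest)
print(manifest["apiVersion"])  # apps/v1
```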
09/21/2020, 12:18 PM
red-area-47037
09/21/2020, 12:33 PM
future-morning-96441
09/21/2020, 1:43 PM
fatal error: heap out of memory
for several days now (using Pulumi with Node). Are there any known antipatterns in Pulumi that can cause this?
I tried to use more verbose logging and tracing but couldn't find any hint about what is causing this (see the error message in my next message).
I vaguely suspect this happens either during stack-output serialization or while serializing and packaging lambda functions, but I sadly don't have any hard evidence to pinpoint the location.
Does anyone have advice on how to proceed with analyzing this error, or on what can cause it?
clever-byte-21551
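One knob worth trying before deeper profiling (an assumption, not a confirmed fix for this case): raise V8's old-space limit for the Node language host via NODE_OPTIONS, which the pulumi process passes through to Node. The usual suspects to audit afterwards are very large stack outputs and closure serialization capturing big objects during lambda packaging.

```shell
# Raise the Node heap limit (value in MB) for a single run:
NODE_OPTIONS=--max-old-space-size=8192 pulumi up
```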
09/21/2020, 6:02 PM
PULUMI_CONFIG_PASSPHRASE_FILE
and I get this error:
FATA[0729] Failed to deploy error="failed to update infra stack: failed to update stack: failed to get stack outputs: code: 255\n, stdout: \n, stderr: error: decrypting secret value: failed to decrypt: incorrect passphrase, please set PULUMI_CONFIG_PASSPHRASE to the correct passphrase\n\n: failed to select stack: exit status 255"
Any idea why?
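A frequent cause (an assumption here, not a confirmed diagnosis) is that the file's contents don't exactly match the passphrase the stack's secrets were originally encrypted with; a stray trailing newline from echo is a classic way to get "incorrect passphrase". A minimal sketch with a placeholder passphrase:

```shell
# Write the passphrase without a trailing newline and point Pulumi at it.
# 'correct-horse-battery' is a placeholder, not the real passphrase.
printf '%s' 'correct-horse-battery' > /tmp/passphrase.txt
export PULUMI_CONFIG_PASSPHRASE_FILE=/tmp/passphrase.txt
# pulumi stack select dev && pulumi up   # should now decrypt the secrets
```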
@broad-dog-22463
clever-plumber-29709
09/21/2020, 10:01 PM
$ pulumi stack import --file github.singular.checkpoint.json
error: could not deserialize deployment: constructing secrets manager of type "cloud": secrets (code=Unknown): NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
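NoCredentialProviders is the AWS SDK's error: the checkpoint declares a "cloud" (AWS KMS) secrets manager, so deserializing it needs AWS credentials in the environment. A sketch with placeholder values (the variable names are the standard AWS SDK ones):

```shell
# Placeholder values; use credentials that can access the stack's KMS key.
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION=us-east-1
pulumi stack import --file github.singular.checkpoint.json
```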
quiet-helicopter-40480
09/22/2020, 1:20 AM
clever-plumber-29709
09/22/2020, 1:55 AM
handsome-zebra-11018
09/22/2020, 3:59 AM
pulumi up
command is 10 mins? Can we reduce it via some config?
When I kill pulumi up and start a new one, it complains the old one is still running.
famous-twilight-87777
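A killed update leaves its lock behind until the lease expires, which matches the roughly ten-minute wait; rather than waiting, the lock can be released explicitly (a sketch of the standard CLI flow; I am not aware of a config option for the timeout itself):

```shell
# Release the lock left behind by the killed update, then retry.
pulumi cancel
pulumi up
```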
09/22/2020, 5:34 AM
clever-byte-21551
09/22/2020, 5:36 AM
famous-twilight-87777
09/22/2020, 6:00 AM
famous-twilight-87777
09/22/2020, 6:00 AM
famous-twilight-87777
09/22/2020, 6:00 AM
kind-school-28825
09/22/2020, 6:01 AM
> Run pulumi/actions 8s
error: failed to load language plugin nodejs: could not read plugin [/usr/bin/pulumi-language-nodejs] stdout: EOF
Run docker://pulumi/actions
/usr/bin/docker run --name pulumiactions_ca0c30 --label 9e3346 --workdir /github/workspace --rm -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e PULUMI_ACCESS_TOKEN -e PULUMI_ROOT -e PULUMI_CI -e INPUT_ARGS -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/backend/backend":"/github/workspace" pulumi/actions preview
Logging in using access token from PULUMI_ACCESS_TOKEN
Logged in to pulumi.com as qasim (https://app.pulumi.com/qasim)
#### :tropical_drink: `pulumi --non-interactive preview`
Previewing update (development)
View Live: https://app.pulumi.com/qasim/core/development/previews/73500177-9c41-4b15-84b9-0708e2895a11
pulumi:pulumi:Stack core-development error: It looks like the Pulumi SDK has not been installed. Have you run npm install or yarn install?
pulumi:pulumi:Stack core-development 1 message
Diagnostics:
pulumi:pulumi:Stack (core-development):
error: It looks like the Pulumi SDK has not been installed. Have you run npm install or yarn install?
error: failed to load language plugin nodejs: could not read plugin [/usr/bin/pulumi-language-nodejs] stdout: EOF
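The "Pulumi SDK has not been installed" error means node_modules is missing inside the action container's workspace; installing dependencies before the Pulumi step avoids it (a sketch of the commands; where exactly to run them depends on the workflow definition):

```shell
# Run in the repository root before invoking pulumi in CI:
npm ci                     # or: yarn install --frozen-lockfile
pulumi preview --non-interactive
```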
famous-twilight-87777
09/22/2020, 9:20 AM
getSecret
and then read the payload and inject it into a file. This is possible in Terraform; however, I cannot seem to access the payload (it's always a promise).
wet-soccer-72485
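The raw value never leaves the promise synchronously; it has to be consumed inside .then/await, or, for Pulumi Outputs, inside .apply, and the file write happens there. A self-contained sketch with a mock standing in for the real secret (the MockOutput class and names are illustrative, not the Pulumi API):

```typescript
import * as fs from "fs";

// Mock of an already-resolved Output: .apply runs the callback on the value.
class MockOutput<T> {
    constructor(private value: T) {}
    apply<U>(fn: (v: T) => U): U {
        return fn(this.value);
    }
}

// Stand-in for the secret payload returned by the provider's getSecret call.
const payload = new MockOutput("s3cr3t");

// Inject the payload into a file from inside apply, where it is a plain string.
const written = payload.apply(value => {
    fs.writeFileSync("/tmp/secret-payload.txt", value);
    return value;
});
console.log(written); // s3cr3t
```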
09/22/2020, 11:42 AM
fierce-memory-34976
09/22/2020, 12:30 PM
bitter-application-91815
09/22/2020, 2:32 PM
https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
Like, I see the autoscaling in the k8s SDK. Is this horizontal pod scaling?
bitter-application-91815
09/22/2020, 2:33 PM
ambitious-ambulance-56829
09/22/2020, 4:16 PM
ambitious-ambulance-56829
09/22/2020, 4:16 PM
Type Name Plan Info
pulumi:pulumi:Stack argocd-support
~ ├─ aws:iam:Role argo-cd update [diff: ~assumeRolePolicy]
├─ kubernetes:helm.sh:Chart argo-cd 1 message
- │ ├─ kubernetes:core:Secret argo-cd/argocd-secret delete
- │ ├─ kubernetes:core:ServiceAccount argo-cd/argocd-server delete
- │ ├─ kubernetes:core:ConfigMap argo-cd/argocd-rbac-cm delete
- │ ├─ kubernetes:core:Service argo-cd/argo-cd-argocd-server delete
- │ ├─ kubernetes:monitoring.coreos.com:ServiceMonitor argo-cd/argo-cd-argocd-application-controller delete
- │ ├─ kubernetes:core:Service argo-cd/argo-cd-argocd-application-controller delete
- │ ├─ kubernetes:networking.k8s.io:Ingress argo-cd/argo-cd-argocd-server delete
- │ ├─ kubernetes:core:ConfigMap argo-cd/argocd-ssh-known-hosts-cm delete
- │ ├─ kubernetes:core:Service argo-cd/argo-cd-argocd-redis delete
- │ ├─ kubernetes:core:ConfigMap argo-cd/argocd-tls-certs-cm delete
- │ ├─ kubernetes:core:ServiceAccount argo-cd/argocd-application-controller delete
- │ ├─ kubernetes:apps:Deployment argo-cd/argo-cd-argocd-redis delete
- │ ├─ kubernetes:monitoring.coreos.com:ServiceMonitor argo-cd/argo-cd-argocd-repo-server delete
- │ ├─ kubernetes:rbac.authorization.k8s.io:RoleBinding argo-cd/argo-cd-argocd-application-controller delete
- │ ├─ kubernetes:core:Service argo-cd/argo-cd-argocd-repo-server delete
- │ ├─ kubernetes:core:Service argo-cd/argo-cd-argocd-server-metrics delete
- │ ├─ kubernetes:apps:Deployment argo-cd/argo-cd-argocd-repo-server delete
- │ ├─ kubernetes:core:Service argo-cd/argo-cd-argocd-application-controller-metrics delete
- │ ├─ kubernetes:core:Service argo-cd/argo-cd-argocd-repo-server-metrics delete
- │ ├─ kubernetes:core:ConfigMap argo-cd/argocd-cm delete
- │ ├─ kubernetes:rbac.authorization.k8s.io:ClusterRoleBinding argo-cd-argocd-application-controller delete
- │ ├─ kubernetes:rbac.authorization.k8s.io:Role argo-cd/argo-cd-argocd-application-controller delete
- │ ├─ kubernetes:rbac.authorization.k8s.io:ClusterRole argo-cd-argocd-application-controller delete
- │ ├─ kubernetes:apps:Deployment argo-cd/argo-cd-argocd-server delete
- │ ├─ kubernetes:rbac.authorization.k8s.io:ClusterRoleBinding argo-cd-argocd-server delete
- │ ├─ kubernetes:monitoring.coreos.com:ServiceMonitor argo-cd/argo-cd-argocd-server delete
- │ ├─ kubernetes:rbac.authorization.k8s.io:RoleBinding argo-cd/argo-cd-argocd-server delete
- │ ├─ kubernetes:core:ServiceAccount argo-cd/argocd-dex-server delete
- │ ├─ kubernetes:rbac.authorization.k8s.io:ClusterRole argo-cd-argocd-server delete
- │ ├─ kubernetes:rbac.authorization.k8s.io:Role argo-cd/argo-cd-argocd-server delete
- │ └─ kubernetes:apps:Deployment argo-cd/argo-cd-argocd-application-controller delete
├─ kubernetes:yaml:ConfigFile project
│ └─ kubernetes:apiextensions.k8s.io:CustomResourceDefinition appprojects.argoproj.io 1 warning
└─ kubernetes:yaml:ConfigFile application
   └─ kubernetes:apiextensions.k8s.io:CustomResourceDefinition applications.argoproj.io 1 warning
ambitious-ambulance-56829
09/22/2020, 4:16 PM
kubernetes:helm.sh:Chart
nor its dependencies. However, when I run an update specifying a single resource URN to update (in this case the aws:iam:Role one), it runs without problems; trying to update again, no changes are shown anymore, and there is no delete plan for the chart resources anymore.
Do you guys have any idea on what I've done wrong, or am I missing something?
Thanks in advance for the help.
clever-plumber-29709
09/22/2020, 4:44 PM
bitter-application-91815
09/22/2020, 5:32 PM
bitter-application-91815
09/22/2020, 5:33 PM
bitter-application-91815
09/22/2020, 5:33 PM