# general
r
I’m experiencing pretty slow `pulumi up` runs (even during the preview phase) with a project containing two `StackReference`s. It’s hanging on
`running     read pulumi:pulumi:StackReference ajaegle/proj/stack`
for ages, and the super verbose log doesn’t list any more lines than the ones that were there from the beginning (the diff of some resources). I don’t think it’s lossy internet connectivity, as my machine is constantly using almost two cores for the pulumi process.
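For context, the stack reference setup looks roughly like this (a minimal sketch with placeholder output names, not my exact code):
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Reference another stack by its fully qualified name.
const infraStack = new pulumi.StackReference("ajaegle/proj/stack");

// Read an exported output, e.g. the kubeconfig of the cluster that stack created.
const kubeconfig = infraStack.getOutput("kubeconfig");

// Use it to target that cluster with an explicit provider.
const provider = new k8s.Provider("existing-cluster", { kubeconfig });
```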
m
What other files are in the directory you run `pulumi up` in? I've noticed that if there are unrelated TypeScript files in the same directory, pulumi has some trouble and slows down or stalls.
r
Upgrading pulumi from 1.9.0 to 1.10.1 also didn’t solve the issue.
Oh, I recently switched the tsconfig from an explicit `files` list to `include`/`exclude`. Let me try reverting that.
Switching back to `files` in `tsconfig.json` didn’t help. My project contains several TypeScript files (all for Pulumi), one typings folder with manually added type definitions for a library that didn’t ship its own, and of course the node_modules folder. That setup had been the same for months.
Switched back to node@13 from node@10, which I still had pinned due to some earlier incompatibilities. Same behaviour.
I don’t get what causes this issue. Inspecting the log even shows that the `StackReference` was already resolved: I can see content from the referenced stack (e.g. the exported kubeconfig) and some Kubernetes resources that use it being diffed (and found to be unchanged).
I was able to track down the problem, but still have no clue how to solve it. If I remove the new resources from my script, basically ending up with a zero-changes diff, then everything works. If I instead add the resources (just one Kubernetes `ConfigFile`), I end up with a process that never finishes. The resources don’t use anything from the `StackReference`; I’m just migrating to a self-contained Pulumi project without stack references, but I planned a soft migration with two clusters instead of tearing down one and creating a new one later.
The thing I want to deploy is https://github.com/jetstack/cert-manager/releases/download/v0.13.0/cert-manager.yaml using `kubernetes.ConfigFile`, which contains loads of resources. But I already did that with v0.11.0 before, which was basically the same size.
I cannot imagine diffing some 30-50 Kubernetes manifests can take so long.
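The relevant part is roughly this (a minimal sketch; the resource name is arbitrary):
```typescript
import * as k8s from "@pulumi/kubernetes";

// Deploy the upstream cert-manager manifest, which expands into dozens of resources.
const certManager = new k8s.yaml.ConfigFile("cert-manager", {
    file: "https://github.com/jetstack/cert-manager/releases/download/v0.13.0/cert-manager.yaml",
});
```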
r
Just a shot in the dark, but can you check whether all instances of the `pulumi-kubernetes` package are at least `v1.5.0`? And can you double-check in the lock file that they all actually resolve to `1.5.0`? There was a bug in `1.4.2`–`1.4.4` that resembles what you describe. See https://github.com/pulumi/pulumi-kubernetes/issues/963
I faced the issue when trying to update cert-manager as well.
r
Thanks for that pointer. This project still has its `pulumi-kubernetes` dependency at 1.4.5, one of the versions with those issues…
I’ll try upgrading later.
Thanks @rapid-eye-32575. You were perfectly right. The issue is gone after upgrading `@pulumi/kubernetes` from `1.4.5` to `1.5.1`. 🎖️