# general
g
is the Pulumi API having some issues today?
p stack change-secrets-provider default
Migrating old configuration and state to new secrets provider
this is just… hanging. I’m trying to migrate from a hashivault -> default provider. The only thing I can think of is that this stack has a lot of resources (~200), but it still shouldn’t take that long… edit: looks like the operation was just extremely slow, and adding the -v flag didn’t really provide any additional info. Methinks some progress output might be useful to have…
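For reference, a rough sketch of what the verbose run could look like; -v and --logtostderr are Pulumi’s generic CLI logging flags, and the level of 9 is just an arbitrary example:

# same migration, with CLI logging turned up (generic Pulumi flags; the level is arbitrary)
pulumi stack change-secrets-provider default -v=9 --logtostderr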
It finally completed after about 5 minutes… Would be nice if Pulumi gave me some feedback other than “sit there and don’t look busy”. I noticed this stack had a bunch of encrypted ciphertext in the last-applied-configuration section for a ton of the k8s resources… I wonder if that is what took so long??
Dear God… now every operation like preview is incredibly slow… is there a known issue? Is there a reason why pulumi needs to compute a ciphertext for every resource in a state? Apparently this was an issue I commented on and was “fixed”: https://github.com/pulumi/pulumi-kubernetes/issues/1118#issuecomment-888478422 … now I feel like my pulumi commands are slamming the Pulumi encryption API and that’s why it’s so slow. There are over 250-300+ k8s resources in this stack state and they are all encrypting and decrypting the last-applied-configuration?
not sure if this is helpful, but I noticed that when pulumi went from k8s provider 3.14 -> 3.15, my “outputs” decided to no longer contain obscured [secret] values…
~ deployment_names    : {
      ~ all-events-s3                       : "[secret]" => "event-service-all-events-s3-vk3hn5jq"
      ~ apple-receipts                      : "[secret]" => "event-service-apple-receipts-4mnqalbz"
      ~ bpt-completions                     : "[secret]" => "event-service-bpt-completions-epgne5c2"
so I changed the secrets provider to passphrase, and then vault… and I’m back to normal times… soooooooo whatever change was made in that issue to encrypt things has a severe performance penalty when using the default (service) encryption provider… might be worth investigating
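For context, switching providers is the same command with a different target; roughly what the switches above look like (the Vault key name is a placeholder, not the actual one used here):

# switch to a local passphrase-based provider (uses PULUMI_CONFIG_PASSPHRASE, or prompts)
pulumi stack change-secrets-provider passphrase

# switch to a HashiCorp Vault transit key (placeholder key name)
pulumi stack change-secrets-provider "hashivault://example-key"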
I think Pulumi may need to consider moving to an atomic state encrypt/decrypt operation. I don’t see the value in having every little resource tracked in the state with a unique ciphertext… that’s wasteful.
Maybe there are some technical reasons this isn’t possible, but this clearly doesn’t scale…
(pulumi_project)> p stack export --show-secrets=true > tmp1.json
If I try to export the stack… it takes 5-6+ minutes due to all of these resources having encrypted ciphertext for a bunch of their outputs…
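If anyone wants to gauge how much individually-encrypted material a stack is carrying, a rough sketch (assumes jq is installed; “ciphertext” is how encrypted values typically appear in an exported deployment, so adjust if your state differs):

# export without decrypting, then count how many individually-encrypted values the state holds
pulumi stack export > state.json
jq '[.. | objects | select(has("ciphertext"))] | length' state.json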
o
@gorgeous-minister-41131 the default encryption provider encrypts each item individually, and that involves a lengthy network request - I believe it's bounded below by the cost of generating key material or some other operation that's supposed to be CPU-hard (like bcrypt, if you're familiar). Sorry that had such an impact on your day, that sucks. Could you tell me what pulumi version reports? We just merged a change in v3.23.0 to enable bulk decryption: https://github.com/pulumi/pulumi/releases/tag/v3.23.0 But it sounds like we may need to also implement bulk _en_cryption and make sure this code path uses it. If it wouldn't be too much of a bother, and you're not on 3.23.x yet, please update & let me know if that's still a 5 minute operation. If so I'll create an issue to track this.
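A quick sketch of the check/upgrade being asked for (the install script is Pulumi's official one; brew/choco or your platform's package manager work just as well):

# report the CLI version currently in use
pulumi version

# one common way to upgrade
curl -fsSL https://get.pulumi.com | sh
pulumi version   # should now report v3.23.x or later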
g
Yup, tried it on 3.23.2 yesterday and it still didn’t solve this particular issue. I updated the issue in GitHub; it looks like the problem lies in a scenario where all of the k8s resources are detecting a secret, and thus assuming the last-applied-configuration needs to be encrypted for each individual resource. I presume that operation in the Kubernetes provider is not bulk decrypted/encrypted, and that’s where the time was going. @red-match-15116 was working on a fix in a PR but it doesn’t look like it made it yet, so maybe we can hold out and just continue to use the vault kms (or move to a passphrase if need be). https://github.com/pulumi/pulumi/pull/8138 https://github.com/pulumi/pulumi-kubernetes/issues/1118#issuecomment-1027501313
r
Ah yeah I think they decided to go with a different approach and that PR should probably be closed out.
g
Ahh… well, will the other approach resolve this problem?
I guess I’m just concerned I’m going to forever have possible ciphertext for many of my last-applied-configuration values in my current stack 😞 and that will prevent us from using the service provider efficiently…
for the record, the hashivault, passphrase (of course), and awskms providers are blazing fast
so it’s obviously something unique to the service one
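Not that it changes anything here, but if anyone wants to double-check which provider a given stack is actually using, the stack config file records it; a rough sketch (the stack name and the awskms example are placeholders):

# the provider choice (and wrapped data key, for KMS-style providers) lives in the stack config
grep -E 'secretsprovider|encryptedkey' Pulumi.my-stack.yaml
#   secretsprovider: awskms://alias/example-key?region=us-east-1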
r
I'm no longer on the team so I'm not sure what they ended up with - but whatever route they take, it should address the issue you're seeing, since that was the issue it was meant to address
g
I’m fine with, and understand, that these values in the stack should and need to be encrypted, but I feel like the k8s provider should bulk encrypt/decrypt those values just like Pulumi now does with configs/other Output secrets
otherwise it’s just wasting cycles IMO
no need for every resource to have its own cipherkey
Just means we’ll have to continue leveraging hashivault for our encrypted stacks for now 🙂 We’re internally discussing anyway whether we really want to move to the Pulumi service for these, or continue using vault, or maybe sops and deal with rotating a passphrase ourselves… unfortunately this left a bad taste in our mouths for the PoC. But it’s not the end of the world haha. We already use the Pulumi Service and are talking about possible Enterprise support, so it may come back up in a discussion somewhere.
Anyway thank you for the input all.