bored-table-20691

06/09/2021, 1:41 AM
What’s the best way to resolve this type of issue:
error: pre-step event returned an error: failed to verify snapshot: resource urn:pulumi:ssa-us-west-2::okera-infra-regions::kubernetes:yaml:ConfigFile$kubernetes:core/v1:ServiceAccount::cert-manager/cert-manager-webhook refers to unknown provider urn:pulumi:ssa-us-west-2::okera-infra-regions::pulumi:providers:kubernetes::k8s-ssa-provider::460da6b8-808b-4d03-b8f8-ee2fdc9ec693
I get this during pulumi up -f, but I hit the same issue if I do pulumi refresh. pulumi preview seems to not hit the error.
After a preview and then a pulumi up, I am in an error state:
error: the current deployment has 1 resource(s) with pending operations:
  * urn:pulumi:ssa-us-west-2::okera-infra-regions::pulumi:providers:kubernetes::k8s-ssa-provider, interrupted while creating

These resources are in an unknown state because the Pulumi CLI was interrupted while
waiting for changes to these resources to complete. You should confirm whether or not the
operations listed completed successfully by checking the state of the appropriate provider.
For example, if you are using AWS, you can confirm using the AWS Console.

Once you have confirmed the status of the interrupted operations, you can repair your stack
using 'pulumi stack export' to export your stack to a file. For each operation that succeeded,
remove that operation from the "pending_operations" section of the file. Once this is complete,
use 'pulumi stack import' to import the repaired stack.

refusing to proceed
If I export the stack, remove the pending operation, and try to re-import it:
$ pulumi stack import --file stack.out 
error: 2 errors occurred:
    1) state file contains errors: resource urn:pulumi:ssa-us-west-2::okera-infra-regions::kubernetes:yaml:ConfigFile$kubernetes:core/v1:ServiceAccount::cert-manager/cert-manager-webhook refers to unknown provider urn:pulumi:ssa-us-west-2::okera-infra-regions::pulumi:providers:kubernetes::k8s-ssa-provider::460da6b8-808b-4d03-b8f8-ee2fdc9ec693
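The export/edit/import repair that the CLI message describes can be sketched in a few lines of Python. This is a minimal sketch, not an official tool: it assumes the deployment.pending_operations layout that a version-3 pulumi stack export file uses, and clear_pending_operations is a hypothetical helper name.

```python
import json

def clear_pending_operations(state: dict) -> dict:
    """Empty the pending_operations list of an exported checkpoint.

    Assumes the layout written by `pulumi stack export`, where the
    interrupted operations live under deployment.pending_operations.
    Only do this after confirming (e.g. in the AWS console) that each
    listed operation really completed or never happened.
    """
    state.setdefault("deployment", {})["pending_operations"] = []
    return state

# Stand-in for a real export produced by: pulumi stack export --file stack.out
checkpoint = {
    "version": 3,
    "deployment": {
        "pending_operations": [
            {"type": "creating",
             "resource": {"urn": "urn:pulumi:dev::proj::pulumi:providers:kubernetes::k8s"}},
        ],
        "resources": [],
    },
}

repaired = clear_pending_operations(checkpoint)
print(json.dumps(repaired["deployment"]["pending_operations"]))  # prints []
```

After writing the repaired JSON back to disk, pulumi stack import --file stack.out reloads it, per the CLI's own instructions above.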

steep-toddler-94095

06/09/2021, 2:04 AM
weird. do you know what happened right before this issue surfaced?
is any other resource in the stack using that same provider?

bored-table-20691

06/09/2021, 2:23 AM
I likely switched the provider from one that was constructed manually from a LookupCluster result to one that came from pulumi-eks
So I don’t know if I just got stuff into an unusable state
Also one thing I had briefly done was remove the provider role thing on the cluster, and then added it back
So maybe it got confused by that
I’m also ok just destroying the stack and starting over if that’ll work

steep-toddler-94095

06/09/2021, 2:28 AM
yeah that's what I'd do TBH. In the past I've tried to preserve a resource while adding an existing provider to it by manually editing the state file and was never able to make it work. I'd be curious what a Pulumi employee has to say about this

bored-table-20691

06/09/2021, 2:29 AM
Will destroy just work even though the state is messed up?

steep-toddler-94095

06/09/2021, 2:31 AM
oh right. You may have to remove that resource from the Pulumi stack, run destroy, and manually delete the resource. In my case everything was in a k8s namespace, so I just deleted the namespace and removed the stack, but your situation may not be so easy

bored-table-20691

06/09/2021, 2:52 AM
If I destroy the stack it will destroy the EKS cluster too, so I guess that'll solve it. 😀 One issue is I'm worried there are over a hundred resources in this jammed state, so it may be a bit painful

steep-toddler-94095

06/09/2021, 3:08 AM
actually that was silly of me to suggest. If you can delete only that resource from the state and then manually delete it, you could just then run pulumi up to recreate it with the correct provider

bored-table-20691

06/09/2021, 3:14 AM
Dumb question - how to delete just that resource?
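For reference, recent Pulumi CLI versions have pulumi state delete '<urn>' for exactly this; the same edit can also be made by hand on an exported checkpoint. A minimal sketch of the manual route, assuming the deployment.resources layout of pulumi stack export (delete_resource is a hypothetical helper name):

```python
def delete_resource(state: dict, urn: str) -> dict:
    """Drop one resource entry, by URN, from an exported checkpoint.

    Hypothetical helper; assumes the `pulumi stack export` layout with
    resource entries under deployment.resources, each carrying a "urn".
    Delete the real cloud object by hand afterwards, so a later
    `pulumi up` can recreate it under the correct provider.
    """
    resources = state["deployment"]["resources"]
    state["deployment"]["resources"] = [
        r for r in resources if r.get("urn") != urn
    ]
    return state

# Tiny stand-in for a real exported stack file.
checkpoint = {"deployment": {"resources": [
    {"urn": "urn:pulumi:dev::proj::kubernetes:core/v1:ServiceAccount::sa"},
    {"urn": "urn:pulumi:dev::proj::pulumi:providers:kubernetes::k8s"},
]}}

trimmed = delete_resource(
    checkpoint, "urn:pulumi:dev::proj::kubernetes:core/v1:ServiceAccount::sa")
print(len(trimmed["deployment"]["resources"]))  # prints 1
```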

white-balloon-205

06/09/2021, 3:18 AM
@bored-table-20691 the original error you got suggests something got malformed in your state file. It is very unexpected for that to happen via normal use of the Pulumi CLI - this most often happens when manual edits have been made to a state file. If it did happen organically, I would love to understand the steps that led to it. But for the issue itself - you should be able to fix up the state file without destroying any resources. Can you share either the whole state file, or at least the set of provider instances and references (perhaps in DM if that helps)? You most likely just need to ensure that all references to the provider use a name that exists in the state file - it should be relatively easy to identify the fix needed by a find/replace on the state.
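The find/replace being suggested can be done mechanically: each resource in the checkpoint carries a provider field of the form "<provider urn>::<provider id>", and every reference has to match a provider resource that actually exists in the file. A minimal sketch under the same export-layout assumptions as above (fix_provider_refs is a hypothetical name):

```python
def fix_provider_refs(state: dict, bad_ref: str, good_ref: str) -> int:
    """Point resources at a provider that exists in the checkpoint.

    Provider references look like "<provider urn>::<provider id>"; a
    reference whose id does not match any provider resource in the file
    produces the "refers to unknown provider" error seen above.
    Returns the number of references rewritten.
    """
    fixed = 0
    for res in state.get("deployment", {}).get("resources", []):
        if res.get("provider") == bad_ref:
            res["provider"] = good_ref
            fixed += 1
    return fixed

bad = "urn:pulumi:dev::proj::pulumi:providers:kubernetes::k8s::OLD-ID"
good = "urn:pulumi:dev::proj::pulumi:providers:kubernetes::k8s::NEW-ID"
state = {"deployment": {"resources": [
    {"urn": "urn:pulumi:dev::proj::kubernetes:core/v1:ServiceAccount::sa",
     "provider": bad},
]}}

print(fix_provider_refs(state, bad, good))  # prints 1
```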

bored-table-20691

06/09/2021, 3:25 AM
@white-balloon-205 thanks for replying. It definitely was organic, I don't modify the state file manually. I think roughly the steps were:
1. I had my stack as before, which created the kubeconfig (and the k8s provider from it) using an aws/eks LookupCluster and manually building the kubeconfig.
2. I then made it use the kubeconfig from a pulumi-eks cluster, and I didn't have the ProviderCredentialOpts setting on the cluster.
3. I did pulumi up, which gave me an error saying that it's asking for credentials (this EKS cluster is in a different account from my default profile).
4. I added in ProviderCredentialOpts and then did pulumi up, and from there I am in error land.
I can definitely see how something here might confuse it. I've also sent the stack in DM.
OK I edited the state file manually and managed to resurrect things by replacing the bad provider ID with the good one
🧟 1
👍🏾 1