# general
b
also, is there a benefit for failing pulumi refreshes if a service is broken? Example:
Diagnostics:
  kubernetes:core:Service (XXXXX-XXX-0151):
    error: Preview failed: 2 errors occurred:

    * Resource 'XXXXX-XXX-0151' was created but failed to initialize
    * Service does not target any Pods. Application Pods may failed to become alive, or field '.spec.selector' may not match labels on any Pods

error: an error occurred while advancing the preview
If I’m trying to refresh the state of my infrastructure to detect drift and fix it, this is a blocker
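For context, the error quoted above fires when a Service’s `.spec.selector` matches no Pod labels — e.g. after the backing workload has been deleted. A minimal sketch of that situation (all names are illustrative):

```yaml
# A Service whose .spec.selector matches no running Pods -- the situation
# the pulumi-kubernetes readiness check flags. Names here are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  selector:
    app: my-app   # no Pods carry this label once the Deployment/StatefulSet is gone
  ports:
    - port: 80
      targetPort: 8080
```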
g
@creamy-potato-29402 ^
c
@busy-umbrella-36067 say more!
The reason we error out here is that `pulumi up` would error out. If we allowed a refresh here, they’d refresh into a state of error, no?
What we should probably do is allow an annotation that opts out of the endpoints stuff.
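For what it’s worth, pulumi-kubernetes did later ship an opt-out along these lines: the `pulumi.com/skipAwait` annotation disables the readiness/endpoint checks for a resource. A sketch (resource names are illustrative):

```yaml
# Sketch: opting a Service out of pulumi-kubernetes await/endpoint checks
# via the skipAwait annotation. Names are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  annotations:
    pulumi.com/skipAwait: "true"   # don't fail refresh/up when no Pods match
spec:
  selector:
    app: my-app
  ports:
    - port: 80
```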
b
@creamy-potato-29402 what happened is the sts/deploy behind that service was deleted
c
@busy-umbrella-36067 makes sense. We’ll fix this next week.
b
ideally I’d like Pulumi to just recognize that and recreate the missing resource
c
Oh I see.
You’re saying that you tried to refresh because the sts is missing.
Then that failed.
that is really interesting.
I had not considered this.
b
yeah, for instance: Terraform does a refresh before any update, to make sure that drift on the infrastructure doesn’t result in a failed deployment
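The Terraform-style workflow being described could be approximated by hand with the Pulumi CLI; a sketch, assuming a stack is already selected (guarded so it’s a no-op where the CLI is absent):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Refresh first so drift (e.g. a deleted StatefulSet) lands in the state file,
# then run up so the missing resource is recreated -- mirroring Terraform's
# implicit refresh-before-plan. Assumes the pulumi CLI and a selected stack.
if command -v pulumi >/dev/null 2>&1; then
  pulumi refresh --yes
  pulumi up --yes
else
  echo "pulumi CLI not found; nothing to do"
fi
```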
c
ok I’ll take a look at this.