# kubernetes
Continuing with this setup: https://pulumi-community.slack.com/archives/C019YSXN04B/p1601995800036200 I'm wondering if I've bumped into a bug regarding persistence of state. The Automation API is running in a Docker container, so I attached a ClusterRole and RoleBinding with a set of permissions I deemed sufficient to roll out a git-based project/stack. In a previous run, Pulumi complained it couldn't watch the pods associated with a StatefulSet; indeed, the `pods` resource was missing from the list of permissions. The StatefulSet and its pods came up and are running fine, though. So I redeployed the automation app with the extra permissions and ran again, but now it bails out complaining that my StatefulSet already exists. Running `pulumi stack export` confirms that the StatefulSet is not recorded in the state. /cc @gorgeous-egg-16927
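For context, the permissions I mean look roughly like this (sketched here with `@pulumi/kubernetes` in TypeScript; the role, binding, and service account names are illustrative, not my exact manifest):

```typescript
import * as k8s from "@pulumi/kubernetes";

// ClusterRole granting the verbs Pulumi's Kubernetes provider needs to
// create a StatefulSet and to watch the pods it spins up.
const role = new k8s.rbac.v1.ClusterRole("automation-deployer", {
    rules: [
        {
            apiGroups: ["apps"],
            resources: ["statefulsets"],
            verbs: ["get", "list", "watch", "create", "update", "patch", "delete"],
        },
        {
            // This is the rule that was missing on the first run: without it
            // the provider cannot watch the pods backing the StatefulSet.
            apiGroups: [""],
            resources: ["pods"],
            verbs: ["get", "list", "watch"],
        },
    ],
});

// Bind the role to the service account the automation container runs under
// ("automation-sa" in the "automation" namespace is a placeholder).
new k8s.rbac.v1.ClusterRoleBinding("automation-deployer-binding", {
    roleRef: {
        apiGroup: "rbac.authorization.k8s.io",
        kind: "ClusterRole",
        name: role.metadata.name,
    },
    subjects: [{
        kind: "ServiceAccount",
        name: "automation-sa",
        namespace: "automation",
    }],
});
```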
Does `pulumi refresh` help?
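For example, something along these lines through the Automation API would run a refresh before the next update (just a sketch; the stack name and `workDir` are placeholders):

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

async function refreshAndUp() {
    // Select the same stack the automation container deploys from its
    // git checkout ("dev" and "./project" are placeholder values).
    const stack = await LocalWorkspace.createOrSelectStack({
        stackName: "dev",
        workDir: "./project",
    });

    // Reconcile the recorded state with what is actually in the cluster,
    // then run the update again.
    await stack.refresh({ onOutput: console.log });
    await stack.up({ onOutput: console.log });
}

refreshAndUp().catch(err => {
    console.error(err);
    process.exit(1);
});
```

A refresh reconciles the resources Pulumi already tracks against the cluster, so it's worth checking whether the StatefulSet shows up in `pulumi stack export` afterwards.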