# general
It says this for every pending update to the cluster, even though it can somehow detect that there are diffs?...
Diffs are detected between the state and the code. It doesn't need remote access for that.
Ah, that makes sense
Think it might have something to do with DO rotating kubeconfigs:
Although now unfortunately I swapped out the DO token and even that won't authenticate haha
So pulumi just refuses to authenticate to DigitalOcean, even though the exact same token works in curl. Not sure what's going on. I've tried setting the DIGITALOCEAN_TOKEN environment variable, setting the pulumi config, and using an explicit provider. Wish I could see what HTTP requests it's making
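For reference, the curl check mentioned above can look something like this (a sketch; /v2/account is just a cheap DigitalOcean endpoint that requires authentication):

```shell
# Verify the token outside pulumi: the DigitalOcean API returns 200 for a
# valid bearer token and 401 for an invalid one. Prints only the HTTP status.
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
  https://api.digitalocean.com/v2/account
```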
I'll try now, but that's a bit granular haha
Is there any way you can actually see network requests with strace?
Yes, but you need to know the kernel api call and grep for that. I'm not that low-level. There's bound to be an easier solution...
Yeah that's what I imagined hehe
Some kinda write syscall
But in a program like this, there's bound to be a lot of them
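For the record, strace can filter down to just the network-related syscalls instead of grepping every write — a sketch, assuming strace is installed (`true` stands in for the real command, e.g. `pulumi preview`):

```shell
# -e trace=network limits output to network syscalls (connect, sendto, ...);
# -f follows child processes, which matters because pulumi spawns its
# providers as subprocesses. Output goes to net_trace.txt via -o.
strace -f -e trace=network -o net_trace.txt true
cat net_trace.txt
```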
This looks like the same use case?
nethogs. Never heard of it...
Was gonna install wireshark, this seems useful
Doesn't give me any content I'm afraid 😕, just shows usage
Wireshark is a good option too. Generally most useful when there's an expert around..
I happen to be that expert, so that's convenient
Hmm, it's TLS traffic, which I forgot about. This is gonna take a while 😕.
I actually don't know how I would MITM this, since it's the pulumi CLI connecting directly and not e.g. my browser.
@little-cartoon-10569 I got it to print the api requests with TF_LOG=trace pulumi preview -v=3 --debug. For some reason, either it's not attaching the credentials in the Authorization header, or it's redacting them from the output. Do you know how I can find out which it is? Or what I would need to check if it were the former?
The Pulumi CLI encountered a fatal error. This is a bug!
We would appreciate a report: <>
Please provide all of the below text in your report.
Pulumi Version:   v2.22.0
Go Version:       go1.15.8
Go Compiler:      gc
Architecture:     amd64
Operating System: linux
Panic:            fatal: An assertion has failed: Expected to find parent node 'urn:pulumi:global::main-shared::kubernetes:core/v1:Namespace::cert-manager' in checkpoint tree nodes

goroutine 1 [running]:
runtime/debug.Stack(0xc00057d210, 0x197a4c0, 0xc000de3060)
        /Users/runner/hostedtoolcache/go/1.15.8/x64/src/runtime/debug/stack.go:24 +0x9f
        /Users/runner/work/pulumi/pulumi/pkg/cmd/pulumi/main.go:29 +0x73
panic(0x197a4c0, 0xc000de3060)
        /Users/runner/hostedtoolcache/go/1.15.8/x64/src/runtime/panic.go:969 +0x1b9
<|>(0xc0011a0500, 0x1d31729, 0x3a, 0xc00057d4f8, 0x1, 0x1)
        /Users/runner/work/pulumi/pulumi/sdk/go/common/util/contract/assert.go:33 +0x1a5
<|>, 0xf, 0x10, 0xc0011bacc0, 0x0)
        /Users/runner/work/pulumi/pulumi/pkg/operations/resources.go:77 +0x449
<|>(0xc0011bac90, 0xc0004ded20, 0x0, 0x0, 0xc0004e6500, 0xc0004daf00, 0x1fef140, 0xc0004daf30, 0xc0011bac90)
        /Users/runner/work/pulumi/pulumi/pkg/backend/filestate/backend.go:672 +0x9e
(*cloudBackend).GetLogs(0xc000cdd500, 0x2022b60, 0xc000130010, 0x203fc60, 0xc0000a8000, 0xc0004daf00, 0x1fef140, 0xc0004daf30, 0xc0004ded20, 0x0, ...)
        /Users/runner/work/pulumi/pulumi/pkg/backend/httpstate/backend.go:1201 +0x10d
<|>, 0xc000130010, 0x203fc60, 0xc0000a8000, 0xc0004daf00, 0x1fef140, 0xc0004daf30, 0xc0004ded20, 0x0, 0x0, ...)
        /Users/runner/work/pulumi/pulumi/pkg/backend/stack.go:118 +0xdf
(*cloudStack).GetLogs(0xc0000a8000, 0x2022b60, 0xc000130010, 0xc0004daf00, 0x1fef140, 0xc0004daf30, 0xc0004ded20, 0x0, 0x0, 0x0, ...)
        /Users/runner/work/pulumi/pulumi/pkg/backend/httpstate/stack.go:170 +0xa6
main.newLogsCmd.func1(0xc000cdf8c0, 0x2c3ddc0, 0x0, 0x0, 0x11, 0x11)
        /Users/runner/work/pulumi/pulumi/pkg/cmd/pulumi/logs.go:100 +0x446
<|>, 0x2c3ddc0, 0x0, 0x0, 0x0, 0x0)
        /Users/runner/work/pulumi/pulumi/sdk/go/common/util/cmdutil/exit.go:96 +0x51
<|>, 0x2c3ddc0, 0x0, 0x0)
        /Users/runner/work/pulumi/pulumi/sdk/go/common/util/cmdutil/exit.go:112 +0x6b
(*Command).execute(0xc000cdf8c0, 0x2c3ddc0, 0x0, 0x0, 0xc000cdf8c0, 0x2c3ddc0)
        /Users/runner/go/pkg/mod/ +0x2c2
(*Command).ExecuteC(0xc000346dc0, 0x15, 0x0, 0x0)
        /Users/runner/go/pkg/mod/ +0x375
        /Users/runner/work/pulumi/pulumi/pkg/cmd/pulumi/main.go:48 +0x4d
^ upon running pulumi logs
I think my stack is cursed
Apparently it's a bug, it says so!
I think you'll need someone with more knowledge than me. I wonder if @billowy-army-68599 is still online? It's after working hours where he is..
I can't continue working until I figure out what's wrong with this stack 😞
I'd try cloning to a new directory and removing (or moving) ~/.pulumi. Maybe there's something cached that you can flush by doing that.
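A cautious version of that suggestion — moving rather than deleting, so it can be restored if it wasn't the culprit (PULUMI_HOME is the environment variable the CLI uses to override the default ~/.pulumi location):

```shell
# Move the CLI's home directory (plugin cache, credentials, workspace state)
# aside instead of deleting it; rename it back if this wasn't the problem.
PULUMI_DIR="${PULUMI_HOME:-$HOME/.pulumi}"
if [ -d "$PULUMI_DIR" ]; then
    mv "$PULUMI_DIR" "$PULUMI_DIR.bak"
fi
```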
I went and imported an old pulumi stack version and it will literally 401 on pulumi refresh, but pulumi up works just fine.
Ok, I think the issue is fixed. I did a series of state-rollbacks, and ups+refreshes, and somehow I ended up with a working stack
Unfortunately I can't tell you what I did, it was kinda just random stuff
Ah. I'm sorry to have to tell you, but you have switched from "engineering" to "programming".
I have another new problem after fixing this series of issues, actually. I copied the code from the example to handle rotating kubeconfigs:
I have a problem; I'm doing this:
export const kubeConfig = cluster.status.apply(status => {
    if (status === "running") {
        const clusterDataSource = cluster.name.apply(name => doc.getKubernetesCluster({name}));
        return clusterDataSource.kubeConfigs[0].rawConfig;
    } else {
        return cluster.kubeConfigs[0].rawConfig;
    }
});
This works (as far as I can tell), but exports a non-secret output
I can see the kubeconfig in the logs
Not sure if this is a problem with the digitalocean data source or if somehow the operations cause that