# python
i
Hello, I have a very random problem with Pulumi Python. I suddenly started seeing the following error with Pulumi Kubernetes:
```
Previewing update (staging)

View in Browser (Ctrl+O): <https://app.pulumi.com/pressone/pressone-infra/staging/previews/a35a5f5a-debb-4c7a-b325-7e573e5b34e6>

Downloading plugin: 36.40 MiB / 36.40 MiB [========================] 100.00% 16s
                                                                                [resource plugin kubernetes-4.8.1] installing
Downloading plugin: 36.45 MiB / 36.45 MiB [========================] 100.00% 16s
                                                                                [resource plugin kubernetes-4.9.0] installing
     Type                                            Name                                      Plan        Info
     pulumi:pulumi:Stack                             pressone-infra-staging                                1 error
 ~   ├─ pulumi:providers:kubernetes                  pressone-do-k8s                           update      [diff: ~version]
 +-  └─ kubernetes:cert-manager.io/v1:ClusterIssuer  pressone-letsencrypt-staging-cert-issuer  replace     [diff: ~metadata]

Diagnostics:
  pulumi:pulumi:Stack (pressone-infra-staging):
    error: Program failed with an unhandled exception:
    Traceback (most recent call last):
      File "/Users/theoluwanifemi/pulumi/pulumi-language-python-exec", line 197, in <module>
        loop.run_until_complete(coro)
      File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
        return future.result()
               ^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 141, in run_in_stack
        await run_pulumi_func(run)
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 51, in run_pulumi_func
        await wait_for_rpcs()
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 83, in wait_for_rpcs
        raise exn from cause
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/rpc_manager.py", line 71, in rpc_wrapper
        result = await rpc
                 ^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/resource.py", line 1067, in do_register_resource_outputs
        serialized_props = await rpc.serialize_properties(outputs or {}, {})
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/rpc.py", line 215, in serialize_properties
        result = await serialize_property(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/rpc.py", line 468, in serialize_property
        value = await serialize_property(
                ^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/rpc.py", line 451, in serialize_property
        future_return = await asyncio.ensure_future(awaitable)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/output.py", line 129, in get_value
        val = await self._future
              ^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/output.py", line 212, in run
        return await transformed.future(with_unknowns=True)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/output.py", line 129, in get_value
        val = await self._future
              ^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/output.py", line 175, in run
        value = await self._future
                ^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/output.py", line 200, in run
        transformed: Input[U] = func(value)
                                ^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi_kubernetes/helm/v3/helm.py", line 618, in invoke_helm_template
        inv = pulumi.runtime.invoke('kubernetes:helm:template', {'jsonOpts': opts}, invoke_opts)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/invoke.py", line 192, in invoke
        raise invoke_error
    Exception: invoke of kubernetes:helm:template failed: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: could not get server version from Kubernetes: the server has asked for the client to provide credentials
```
I’ve tried removing the Kubernetes plugin and re-installing it, no luck. Please help?
b
this doesn't look like a pulumi error:
```
Helm chart: could not get server version from Kubernetes: the server has asked for the client to provide credentials
```
which means authentication is turned on for the k8s cluster, or your credentials have expired?
i
authentication on my local machine, or where?
b
The k8s cluster you are trying to modify with Pulumi.
l
@icy-lion-8963 local. You should have a valid kubeconfig context active on the system you are running the `pulumi` CLI on.
b
What do you get when running `kubectl get pods`? If it doesn't succeed, then Pulumi won't either.
i
`kubectl` and `helm` commands work fine.
Just Pulumi fails.
l
@icy-lion-8963 can you post some code? From the initial error output, I see that you are using an explicit Kubernetes provider in your code named `pressone-do-k8s`.
i
```
import pulumi
import pulumi_kubernetes as k8s

from pulumi_kubernetes.helm.v3 import Chart, ChartOpts, FetchOpts

from digitalocean.config import do_settings
from digitalocean.k8s.provider import get_k8s_opts
from digitalocean.k8s.utils.transformations import metadata_annotations

nginx_ingress_controller_name = f"{do_settings.CURRENT_ENV}-ingress-controller"

namespace_identifier = "pressone-nginx-ingress-ns"
nginx_ingress_namespace = k8s.core.v1.Namespace(
    namespace_identifier, metadata={
        "name": namespace_identifier
    },
    opts=get_k8s_opts()
)

pod_annotations = {
    # "<http://service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol|service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol>": "true",
    # "<http://service.beta.kubernetes.io/do-loadbalancer-hostname|service.beta.kubernetes.io/do-loadbalancer-hostname>": get_pressone_cloud_domain()
}
# commented out the loadbalancer annotations because they removed the ip from output


nginx_ingress = Chart(
    nginx_ingress_controller_name,
    ChartOpts(
        chart="ingress-nginx",
        version="4.10.0",
        namespace=namespace_identifier,
        fetch_opts=FetchOpts(
            repo="<https://kubernetes.github.io/ingress-nginx>",
        ),
        transformations=[metadata_annotations(pod_annotations)],
        values={
            "controller": {
                "metrics": {
                    "enabled": True,
                },
                "publishService": {
                    "enabled": True,
                }
            },
        },
    ),
    opts=get_k8s_opts(
        depends_on=nginx_ingress_namespace,
    )
)

ingress_service_ip = nginx_ingress.get_resource(
    "v1/Service", f"{namespace_identifier}/{nginx_ingress_controller_name}-ingress-nginx-controller"
).status.apply(lambda status: status.load_balancer.ingress[0].ip)

pulumi.export("load_balancer_ip", ingress_service_ip)
```
provider.py
```
from typing import Any

import pulumi_kubernetes as k8s
from pulumi import ResourceOptions

from digitalocean.k8s.cluster import cluster

k8s_provider = k8s.Provider(
    "pressone-do-k8s", kubeconfig=cluster.kube_configs[0].raw_config
)


def get_k8s_opts(**kwargs: Any) -> ResourceOptions:
    return ResourceOptions(
        provider=k8s_provider,
        **kwargs
    )
```
l
@icy-lion-8963 in your main program, temporarily comment out the `Chart` resource, and can you add these lines to your `__main__.py`?
```
from digitalocean.k8s.cluster import cluster

pulumi.export("kubeconfig", cluster.kube_configs[0].raw_config)
```
If you run `pulumi up` like this, you should get the value of the `kubeconfig` of your DigitalOcean cluster as a stack output for debugging purposes. If the value seems legit, can you drop that value into a file and then try this:
```
KUBECONFIG=<file containing export kubeconfig value> kubectl get pods
```
i
OKAY.. One sec
```
Previewing update (staging)

View in Browser (Ctrl+O): <https://app.pulumi.com/pressone/pressone-infra/staging/previews/5f6ddfcf-5078-46bc-9a49-4e1ee3364afc>

     Type                                            Name                                      Plan        Info
     pulumi:pulumi:Stack                             pressone-infra-staging                                1 error
 ~   ├─ pulumi:providers:kubernetes                  pressone-do-k8s                           update      [diff: ~version]
 +-  ├─ kubernetes:cert-manager.io/v1:ClusterIssuer  pressone-letsencrypt-staging-cert-issuer  replace     [diff: ~metadata]
 +-  └─ kubernetes:core/v1:Secret                    pressone-k8s-gitlab-registry-secret       replace     [diff: ~data,provider]

Diagnostics:
  pulumi:pulumi:Stack (pressone-infra-staging):
    error: Program failed with an unhandled exception:
    Traceback (most recent call last):
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/rpc_manager.py", line 71, in rpc_wrapper
        result = await rpc
                 ^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/resource.py", line 1067, in do_register_resource_outputs
        serialized_props = await rpc.serialize_properties(outputs or {}, {})
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/rpc.py", line 215, in serialize_properties
        result = await serialize_property(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/rpc.py", line 468, in serialize_property
        value = await serialize_property(
                ^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/rpc.py", line 451, in serialize_property
        future_return = await asyncio.ensure_future(awaitable)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/output.py", line 129, in get_value
        val = await self._future
              ^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/output.py", line 212, in run
        return await transformed.future(with_unknowns=True)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/output.py", line 129, in get_value
        val = await self._future
              ^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/output.py", line 175, in run
        value = await self._future
                ^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/output.py", line 200, in run
        transformed: Input[U] = func(value)
                                ^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi_kubernetes/helm/v3/helm.py", line 618, in invoke_helm_template
        inv = pulumi.runtime.invoke('kubernetes:helm:template', {'jsonOpts': opts}, invoke_opts)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/invoke.py", line 192, in invoke
        raise invoke_error
    Exception: invoke of kubernetes:helm:template failed: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: could not get server version from Kubernetes: the server has asked for the client to provide credentials
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "/Users/theoluwanifemi/pulumi/pulumi-language-python-exec", line 197, in <module>
        loop.run_until_complete(coro)
      File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
        return future.result()
               ^^^^^^^^^^^^^^^
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 141, in run_in_stack
        await run_pulumi_func(run)
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 51, in run_pulumi_func
        await wait_for_rpcs()
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 83, in wait_for_rpcs
        raise exn from cause
      File "/Users/theoluwanifemi/Library/Caches/pypoetry/virtualenvs/pressone-infra-_cUdUW92-py3.11/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 75, in wait_for_rpcs
        await rpc_manager.rpcs.pop()
    Exception: invoke of kubernetes:helm:template failed: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: could not get server version from Kubernetes: the server has asked for the client to provide credentials
```
Similar error.
Or do you want me to comment out every Chart I have? That’s not all my code I sent earlier; it’s quite a multi-file project.
l
Yes. Make sure we can get the value of `kubeconfig` shown, commenting out all the code that is now failing due to this problem.
i
Hmm... the output does not show up; maybe it's blank? It looks like here's where it would appear:
```
Outputs:
  - cluster_issuer_name       : "pressone-staging-letsencrypt"
  - load_balancer_ip          : "<REPLACED BY ME>"
  - tls_private_key_secret_ref: "pressone-staging-letsencrypt-private-key"

Resources:
    - 110 to delete
    16 unchanged

Do you want to perform this update?  [Use arrows to move, type to filter]
  yes
> no
  details
```
l
Did you add the lines to export the kubeconfig as a stack export?
i
Yes.. I did:
```
import pulumi
import pulumi_digitalocean as do
from pulumi_digitalocean import KubernetesClusterNodePoolArgs

from digitalocean.config import do_settings
from digitalocean.providers import get_do_opts
from digitalocean.registry import pressone_container_registry
from digitalocean.vpc import pressone_vpc


cluster_name = f"pressone-{do_settings.CURRENT_ENV}-k8s-cluster"

cluster = do.KubernetesCluster(
    cluster_name,
    version="1.29.1-do.0",
    region=do.Region.LON1,
    node_pool=KubernetesClusterNodePoolArgs(
        name=f"pressone-{do_settings.CURRENT_ENV}-pool",
        size=do.DropletSlug.DROPLET_S4_VCPU8_G_B_INTEL,
        # https://www.pulumi.com/registry/packages/digitalocean/api-docs/droplet/#supporting-types
        auto_scale=True,
        min_nodes=1,
        max_nodes=2,
    ),
    tags=[do_settings.CURRENT_ENV],
    vpc_uuid=pressone_vpc.id,
    registry_integration=True,
    opts=get_do_opts(depends_on=pressone_container_registry),
    name=cluster_name
)

pulumi.export("kubeconfig", cluster.kube_configs[0].raw_config)
```
l
Is this the entrypoint file of your program, usually `__main__.py`?
i
Yeah.
l
I doubt it. There is a file somewhere where you have `pulumi.export` lines for the existing outputs:
```
- cluster_issuer_name       : "pressone-staging-letsencrypt"
- load_balancer_ip          : "<REPLACED BY ME>"
- tls_private_key_secret_ref: "pressone-staging-letsencrypt-private-key"
```
That file should also contain the additional `pulumi.export` line I suggested before.
i
Something funny is happening: when I change the output name, for instance appending “1” to make it `kubeconfig1`, then I see it listed in the outputs. But when I leave it as `kubeconfig`, I don’t see the output anymore. Are there any automatic exports? Maybe I’m overriding something.
Sorry for the stupid questions, I’m quite new as well 🙂
l
Well, it depends on what you commented out. Not sure where the existing `kubeconfig` export comes from, but I suspect there is already an export named `kubeconfig` somewhere in your code.
i
I just searched the entire project, I have none.. These are all the exports I have
l
What if you search for `"kubeconfig"`?
i
Here, but it’s not exported:
I tried commenting out the `kubeconfig` export statement completely, and the `kubeconfig` output still came. So it’s being exported from somewhere, but I definitely do not know where.
l
OK, no problem. Since you have a stack output named `kubeconfig`, see if the command `pulumi stack output kubeconfig --show-secrets` shows you a valid kubeconfig configuration. Don't paste it here as it is a secret.
i
Does Pulumi cache the config?
The cluster name I see in the outputted config has been changed.
l
If you made manual changes outside of Pulumi, then the Pulumi state is behind the actual state. You should then run `pulumi refresh` to bring it back in sync.
i
Nope, the change was made by Pulumi. No change was made outside of it.
Using the kubeconfig secret to run `kubectl get pods` returns `Unauthorized`.
l
We still don't know where that `kubeconfig` stack output comes from. It is possible that value is not connected to the actual cluster output (`cluster.kube_configs[0].raw_config`).
i
I have two projects on my computer, but of course they have different `Pulumi.yaml` configs with different names. Is it possible this has anything to do with it?
l
I doubt it. I have tens of Pulumi projects on my system. Each stack has isolated state, so that shouldn't mix up.
i
Okay, let me see if I find something. I probably have enough information to start investigating further.
I’ll let you know if I find something.
d
There's a similar report on the Terraform side of the provider; the token in the kubeconfig is short-lived and expires after a few days. Did the issue persist after you ran `pulumi refresh`?
The kubeconfig is cached (stored in state) on the Cluster resource; a refresh should regenerate the token if it's expired.
It's worth having `cluster.kube_configs[0].expires_at` as an output to help debug.
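A minimal sketch of that extra debug output, assuming the `digitalocean.k8s.cluster` module shown earlier in the thread (the output name `kubeconfig_expires_at` is illustrative):
```
import pulumi

from digitalocean.k8s.cluster import cluster

# Surface the expiry of the cached kubeconfig token so it is obvious
# when the credentials stored in state have gone stale.
pulumi.export("kubeconfig_expires_at", cluster.kube_configs[0].expires_at)
```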
i
Nope, I haven't run `pulumi refresh` yet.
Let me do that.
```
kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding (staging-cert-manager-cainjector):
    warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials
    error: Preview failed: failed to read resource state due to unreachable cluster. If the cluster was deleted, you can remove this resource from Pulumi state by rerunning the operation with the PULUMI_K8S_DELETE_UNREACHABLE environment variable set to "true"

  kubernetes:opentelemetry.io/v1alpha1:Instrumentation (openobserve-collector/openobserve-java):
    warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials
    error: Preview failed: failed to read resource state due to unreachable cluster. If the cluster was deleted, you can remove this resource from Pulumi state by rerunning the operation with the PULUMI_K8S_DELETE_UNREACHABLE environment variable set to "true"

  kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding (staging-cert-manager-controller-challenges):
    warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials
    error: Preview failed: failed to read resource state due to unreachable cluster. If the cluster was deleted, you can remove this resource from Pulumi state by rerunning the operation with the PULUMI_K8S_DELETE_UNREACHABLE environment variable set to "true"

  kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding (staging-cert-manager-controller-certificatesigningrequests):
    warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials
    error: Preview failed: failed to read resource state due to unreachable cluster. If the cluster was deleted, you can remove this resource from Pulumi state by rerunning the operation with the PULUMI_K8S_DELETE_UNREACHABLE environment variable set to "true"
```
I get an error trying to refresh the state.
d
You can try with `--target` to only refresh the cluster.
Either with the URN, or it should also work with `--target '*-k8s-cluster'`.
i
Okay, just did.
But no resource was listed with the target. Can I get the URN from the web?
d
Yes, or with `pulumi stack --show-urns`.
i
I just ran the refresh against the cluster.. but the config being exported still seems wrong
d
I'm unsure if a refresh updates the outputs.
If you try an `up` again, does it work?
i
Yes yes.. It works now
Looks like something with the refresh works
That’s crazy
d
Cool. I don't think there's a way to force a refresh on apply for particular resources. You can make it part of your process to run the refresh command targeted at the URN, or have the token be updated on every apply using this function: https://www.pulumi.com/registry/packages/digitalocean/api-docs/getkubernetescluster/
The latter might be better, as it'll mean the token used is always associated with the running user.
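A minimal sketch of how the provider module shown earlier could adopt that approach, assuming the output-form invoke `get_kubernetes_cluster_output` from `pulumi_digitalocean` and the same `cluster` resource (names are illustrative, not the thread's final code):
```
from typing import Any

import pulumi_digitalocean as do
import pulumi_kubernetes as k8s
from pulumi import ResourceOptions

from digitalocean.k8s.cluster import cluster

# Look the cluster up through the data source so the kubeconfig (and its
# short-lived token) is fetched fresh on every run, rather than reusing the
# value cached on the Cluster resource in state.
fresh_kubeconfig = do.get_kubernetes_cluster_output(
    name=cluster.name
).kube_configs[0].raw_config

k8s_provider = k8s.Provider("pressone-do-k8s", kubeconfig=fresh_kubeconfig)


def get_k8s_opts(**kwargs: Any) -> ResourceOptions:
    return ResourceOptions(provider=k8s_provider, **kwargs)
```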
i
Apparently, the outputs are also pre-computed. Now it shows that the `kubeconfig` output is being removed.
d
Yes, it'll preview the values (if they're not secret) if the dependent data is in state
i
> You can make it part of your process to run the refresh command targeted at the URN, or have the token be updated on every apply using this function:
This makes sense then.
d
With the function, you should end up with something like:
```
kubeconfig = get_kubernetes_cluster_output(cluster.name).kube_configs[0].raw_config
```
i
Wow, I didn’t realize there was something that easy. Thanks a lot!