# azure
a
I'm using pulumi with a Service Principal and this morning I am having issues with the Kubernetes plugin. I am getting the "azure auth plugin is deprecated" warning and it is trying to get me to enter a code at a web page. Anyone else hit this issue and resolved it? Thanks.
m
what kind of issues?
a
image.png
m
Looks like it’s trying to authenticate via cli or device code. How did you configure your SP? If via env variables, can you check they’re still set?
a
I have these all set with the correct values:
ARM_CLIENT_ID=
ARM_CLIENT_SECRET=
ARM_TENANT_ID=
ARM_SUBSCRIPTION_ID=
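A minimal way to double-check they are actually visible to the process running pulumi (just a sketch, nothing project-specific assumed; run it from the same shell you run pulumi from):

import os

# Check that the Service Principal variables the Azure providers read are present
# in the current environment.
required = ["ARM_CLIENT_ID", "ARM_CLIENT_SECRET", "ARM_TENANT_ID", "ARM_SUBSCRIPTION_ID"]
missing = [name for name in required if not os.environ.get(name)]
print("missing:", ", ".join(missing) if missing else "none")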
m
Are you using AKS? Can you post the output of pulumi about?
a
AKS, yes.
Pulumi Project: pulumi/projects/hri

CLI
Version      3.53.1
Go Version   go1.19.5
Go Compiler  gc

Plugins
NAME          VERSION
azure         5.32.0
azure-native  1.93.0
azuread       5.34.0
kubernetes    3.23.1
postgresql    3.6.0
python        unknown
random        4.10.0
tls           4.8.0

Host
OS       darwin
Version  13.1
Arch     arm64

This project is written in python: executable='/Users/csnyder/Documents/merative/git/hi-cloud-automation/.venv/bin/python3' version='3.10.9'

Current Stack: v1-hri

Found no resources associated with v1-hri

Found no pending operations associated with v1-hri

Backend
Name           MBP-Y6K3P5671R
URL            azblob://pulumi-state
User           csnyder
Organizations

Dependencies:
NAME                     VERSION
jsonpath-python          1.0.6
merative-pulumi-wrapper  0.2.263
msal                     1.20.0
pip                      22.3.1
pulumi-azure             5.32.0
pulumi-azure-native      1.93.0
pulumi-azuread           5.34.0
pulumi-kubernetes        3.23.1
pulumi-postgresql        3.6.0
pulumi-random            4.10.0
pulumi-tls               4.8.0
setuptools               65.6.3
wheel                    0.38.4
zulu                     2.0.0
My quick check seemed to show that I have all the latest pulumi libs.
m
Yeah that looks good at first glance. I’m afraid I don’t have a quick answer here.
a
I appreciate the look. thanks.
If useful, our AKS is at v1.23 and my local kubectl is at v1.26. A coworker has kubectl v1.24 and sees the same issue. The same project worked yesterday. 🤷‍♂️ I'll keep digging...
m
Something’s gotta have changed since then… Did your client secret expire?
i
instead of setting those environment variables, you could also consider creating a provider object and putting the values in your config yaml:
// `p` here is just whatever object holds your config values (stack config, secrets, etc.)
var provider = new Pulumi.AzureNative.Provider("provider", new Pulumi.AzureNative.ProviderArgs()
{
    SubscriptionId = p.subscriptionId,
    ClientId = p.servicePrincipalId,
    ClientSecret = p.servicePrincipalSecret,
    TenantId = p.tenantId,
    PartnerId = p.partnerId
});
and then append this to your aks object:
}, new CustomResourceOptions() { Provider = provider });
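Since your project is in Python, a rough equivalent sketch could look like this (the config key names below are placeholders, not anything your project necessarily uses):

import pulumi
import pulumi_azure_native as azure_native

# Explicit azure-native provider configured from stack config instead of ARM_* env vars.
cfg = pulumi.Config()
azure_provider = azure_native.Provider(
    "provider",
    subscription_id=cfg.require("subscriptionId"),
    client_id=cfg.require("servicePrincipalId"),
    client_secret=cfg.require_secret("servicePrincipalSecret"),
    tenant_id=cfg.require("tenantId"),
)

# then pass it to the Azure resources, e.g.
# opts=pulumi.ResourceOptions(provider=azure_provider)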
a
secret is good until 2024. Here is how I am setting up the provider.
# Assumes module-level imports: base64, pulumi_kubernetes as kubernetes,
# and `from pulumi_azure_native import containerservice`.
def get_kubernetes_provider(self) -> kubernetes.Provider:
    """
    Configure the Pulumi kubernetes provider for the specified cluster.
    :return: kubernetes.Provider, or None if the information needed to create the provider is missing.
    """
    kubernetes_credentials = containerservice.list_managed_cluster_user_credentials_output(
        resource_group_name=self.compute_resource_group_name,
        resource_name=self.kubernetes_cluster_name
    )
    # The returned kubeconfig is base64-encoded; decode it before handing it to the provider
    encoded = kubernetes_credentials.kubeconfigs[0].value
    kube_config = encoded.apply(
        lambda enc: base64.b64decode(enc).decode())

    return kubernetes.Provider(
        "kubernetes_provider",
        kubeconfig=kube_config
    )
and using that provider like
# Assumes: from pulumi import Output, ResourceOptions
# and `from pulumi_kubernetes.core.v1 import Namespace`.
namespace = Namespace("hri-namespace",
                      metadata={"name": Output.concat(args.environment, "-hri")},
                      opts=ResourceOptions(parent=self, provider=args.k8s_provider))
i
the provider I mentioned was the one that allows Pulumi to communicate with Azure
a
ok, I'll look into that version. Thanks
Found the change, but this is weird. If you look at how I'm setting up the provider, I'm pulling the kube_config directly from the AKS cluster. I was targeting a new stack with an AKS on 1.23.12, where previously I was using an AKS on 1.24.9. Switching back to my previous cluster, the issue goes away. Looking into the permissions on that cluster to see if that is the issue rather than the AKS version difference.
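One way to compare the two clusters is to look at how the user entry in each kubeconfig authenticates: a user block containing auth-provider: azure is typically what produces the deprecated-plugin warning and the device-code prompt, while embedded client-certificate-data/token entries are not. A small local sketch (assumes the kubeconfig has already been fetched to a file, e.g. with az aks get-credentials):

import sys
import yaml  # pyyaml

# Print which auth mechanism each user entry in the kubeconfig uses
# (e.g. auth-provider / exec / client-certificate-data / token).
with open(sys.argv[1]) as f:
    kubeconfig = yaml.safe_load(f)

for user in kubeconfig.get("users", []):
    print(user["name"], "->", sorted(user["user"].keys()))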