# general
s
The default Kubeconfig leverages kubelogin and is configured to do device login (as you’ve observed). If you are passing this Kubeconfig to a Kubernetes provider, then that’s probably why you’re seeing the behavior you’re seeing.
I would guess that you’re probably going to have to create a service principal so that you can pass credentials to the Kubernetes provider.
l
so I've actually done that already
but I'm not sure if I'm not leveraging it in the right places after switching to azure AD authentication
OH I see I'm currently passing the kubeconfig to the provider, it needs to be the service principal?
s
I suspect that it’s the Kubeconfig you’re passing to the provider that’s tripping you up, yes. You still need to pass a Kubeconfig to the provider, but it needs to be a different one. If you’re currently using the Azure Native provider and retrieving the Kubeconfig via ListManagedClusterUserCredentials (or ListManagedClusterUserCredentialsOutput), you might try ListManagedClusterAdminCredentials instead. Otherwise, I’m not sure how to get the provider authenticated to Azure properly.
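(A minimal sketch of what that admin-credentials lookup might look like in C#, mirroring the user-credentials helper shown later in this thread; the helper name GetAdminKubeConfig is illustrative, and this assumes the Pulumi.AzureNative.ContainerService package:)

```csharp
using System;
using System.Text;
using Pulumi;
using Pulumi.AzureNative.ContainerService;

class KubeConfigHelpers
{
    // Sketch: fetch the *admin* kubeconfig instead of the user one.
    // Helper name is illustrative; assumes Pulumi.AzureNative.ContainerService.
    internal static Output<string> GetAdminKubeConfig(
        Output<string> resourceGroupName, Output<string> clusterName)
    {
        return ListManagedClusterAdminCredentials.Invoke(
            new ListManagedClusterAdminCredentialsInvokeArgs
            {
                ResourceGroupName = resourceGroupName,
                ResourceName = clusterName
            }).Apply(credentials =>
            {
                // The kubeconfig comes back base64-encoded, same as the user variant.
                var encoded = credentials.Kubeconfigs[0].Value;
                return Encoding.UTF8.GetString(Convert.FromBase64String(encoded));
            });
    }
}
```

Note this only works if the cluster still allows static/local credentials, which becomes relevant further down in the thread.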
If you want to/are able to share your code, I’m happy to take a look and see if I can reproduce the situation and find a potential workaround. It might take a few days, though.
l
This is how I'm passing the kubeconfig currently:
// Export the KubeConfig
        MyKubeConfig = GetKubeConfig(rgName, cluster.Name);

        // Create a k8s provider pointing to the kubeconfig.
        var k8sProvider = new Pulumi.Kubernetes.Provider("k8s", new Pulumi.Kubernetes.ProviderArgs
        {
            KubeConfig = MyKubeConfig
        });

        var k8sCustomResourceOptions = new CustomResourceOptions
        {
            Provider = k8sProvider,
            DependsOn = cluster
        };

        var appnamespace = new Pulumi.Kubernetes.Core.V1.Namespace("appName", new NamespaceArgs()
        {
            Metadata = new ObjectMetaArgs()
            {
                Name = appName
            },
            ApiVersion = "v1",
            Kind = "Namespace"
        }, k8sCustomResourceOptions);

        var appNamespaceProvider = new Pulumi.Kubernetes.Provider("k8s-sdbackplaneprivate-provider",
            new Pulumi.Kubernetes.ProviderArgs()
            {
                
                KubeConfig = MyKubeConfig,
                Namespace = appnamespace.Metadata.Apply(c => c.Name)
            });
s
That looks like TypeScript, yes? Which provider(s) are you using?
l
This is C# actually
azure-native
using Pulumi.AzureAD;
using Pulumi.AzureNative.ContainerService;
using Pulumi.AzureNative.ContainerService.Inputs;
using Pulumi.AzureNative.Authorization;
using Pulumi.Kubernetes.Types.Inputs.Core.V1;
using Pulumi.Kubernetes.Types.Inputs.Meta.V1;
s
Ah, thanks. (I don’t work with C# much.) Let me do some digging into how C# handles/creates/returns the Kubeconfig, and I’ll see what I can find.
l
Thanks I appreciate it
Ok I'm thinking it's because when I call the kubeconfig I'm not using the service principal explicitly? I'm not sure how to do that though
s
I think it’s related to the format/content of the Kubeconfig returned by
GetKubeConfig
, but that’s what I need to do some digging to find out.
l
I forgot to include this output
private static Output<string> GetKubeConfig(Output<string> resourceGroupName, Output<string> clusterName)
{
    return ListManagedClusterUserCredentials.Invoke(new ListManagedClusterUserCredentialsInvokeArgs
    {
        ResourceGroupName = resourceGroupName,
        ResourceName = clusterName
    }).Apply(credentials =>
    {
        var encoded = credentials.Kubeconfigs[0].Value;
        var data = Convert.FromBase64String(encoded);
        return Encoding.UTF8.GetString(data);
    });
}
Ok I guess I am using ListManagedClusterUserCredentials
I will try with admin instead
s
Yes, I was just going to suggest that. 🙂 Let me know how it goes!
l
ok so first I got: listClusterAdminCredential: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Getting static credential is not allowed because this cluster is set to disable local accounts."
I set the cluster to allow local accounts and it DOES work this way
but ideally I want that disabled hmmm
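(One possible route, as a hedged sketch rather than a verified fix: keep local accounts disabled and keep the user kubeconfig, but switch its kubelogin exec section from device-code login to service-principal login. kubelogin supports an `spn` login mode that reads `AAD_SERVICE_PRINCIPAL_CLIENT_ID` and `AAD_SERVICE_PRINCIPAL_CLIENT_SECRET` from the environment. The string replacement below assumes the returned kubeconfig's exec args contain the literal `devicecode` — check your actual kubeconfig before relying on this:)

```csharp
using Pulumi;

class KubeConfigRewrite
{
    // Sketch: rewrite the AAD user kubeconfig so kubelogin uses
    // service-principal ("spn") login instead of interactive device-code login.
    // Assumption: the exec args in the kubeconfig contain "devicecode".
    internal static Output<string> ToServicePrincipalLogin(Output<string> kubeconfig)
    {
        // At runtime, kubelogin in spn mode expects the service principal's
        // credentials in AAD_SERVICE_PRINCIPAL_CLIENT_ID and
        // AAD_SERVICE_PRINCIPAL_CLIENT_SECRET environment variables.
        return kubeconfig.Apply(config => config.Replace("devicecode", "spn"));
    }
}
```

The same transformation can also be done outside Pulumi with `kubelogin convert-kubeconfig -l spn` against a kubeconfig file, if a one-off test is easier than wiring it into the program.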
s
If this isn’t time-sensitive, let me dig around for a few days and see what I can come up with.
l
Sure