# kubernetes
p
is it correct that if i establish a k8s provider using an eks cluster output lifted as an input to the provider, this should create an implicit dependency chain? and then everything i use that provider as an option for also inherits that implicit dependency? i.e. -
    # create cluster resource
    eks_cluster = aws.eks.Cluster("itplat-eks-cluster", opts=provider_opts, **eks_cluster_config)
 
    k8s_use1_provider = k8s.Provider(
        k8s_use1_provider_name,
        cluster=eks_cluster.arn,
        context=eks_cluster.arn,
        enable_dry_run=None,
        namespace=None,
        render_yaml_to_directory=None,
        suppress_deprecation_warnings=None,
    )

    # lets have a go at creating a "crossplane-system" namespace
    crossplane_namespace = k8s.core.v1.Namespace(
        "crossplane-system", opts=pulumi.ResourceOptions(provider=k8s_use1_provider), metadata=k8s.meta.v1.ObjectMetaArgs(name="crossplane-system")
    )
this makes the namespace dependent on the provider, which is dependent on eks_cluster??
b
yep, that's right
any output that is used as an input will create a dependency
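for example, a from-memory sketch with purely illustrative aws resources (nothing specific to your stack):

import pulumi_aws as aws

# the queue's `arn` output is lifted into the subscription's inputs, so
# pulumi records an implicit dependency and creates the queue (and the
# topic) before the subscription, with no explicit depends_on needed
queue = aws.sqs.Queue("example-queue")
topic = aws.sns.Topic("example-topic")
subscription = aws.sns.TopicSubscription(
    "example-subscription",
    topic=topic.arn,
    protocol="sqs",
    endpoint=queue.arn,  # output used as input -> implicit dependency
)

same mechanism in your snippet: the cluster outputs flow into the provider, and the provider option carries that dependency on to the namespace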
p
ok i think the provider in the example above does not adopt the context of the eks_cluster and is still using the values in my ~/.kube/config
which is not what i was expecting
i was expecting that referencing the cluster and context when defining the k8s.provider would source the values of my new cluster
much like
aws eks --region us-east-1 update-kubeconfig --name <my_new_cluster>
is there a way to achieve a provider that references the config in the eks cluster i just created so that i can deploy k8s resources into the same eks cluster from the same pulumi stack ?
currently for the namespace creation i get
error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server
b
You’re not passing a kubeconfig to the provider. There are examples of how to generate a kubeconfig and pass it to your provider in here: https://github.com/pulumi/examples
Search for generate
p
so using the generate_kube_config utility like so :-
    k8s_use1_provider = k8s.Provider(
        k8s_use1_provider_name,
        cluster=eks_cluster.arn,
        context=eks_cluster.arn,
        enable_dry_run=None,
        kubeconfig=utils.generate_kube_config(eks_cluster),
        namespace=None,
        render_yaml_to_directory=None,
        suppress_deprecation_warnings=None,
    )

    # lets have a go at creating a "crossplane-system" namespace
    crossplane_namespace = k8s.core.v1.Namespace(
        "crossplane-system", opts=pulumi.ResourceOptions(provider=k8s_use1_provider), metadata=k8s.meta.v1.ObjectMetaArgs(name="crossplane-system")
    )
leaves me with weirdness :-
Diagnostics:
  kubernetes:core/v1:Namespace (crossplane-system):
    error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:52392: connect: connection refused"

  pulumi:pulumi:Stack (aws_eks-itplat-aws-eks):
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x291f5c2]
    goroutine 19 [running]:
    github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/provider.getActiveClusterFromConfig(0xc0004804e0, 0xc00047f620, 0xc0000ddd00)
    	/home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/provider/util.go:118 +0xe2
    github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/provider.(*kubeProvider).DiffConfig(0xc00071ac30, 0x3105be8, 0xc00047f5c0, 0xc0005fccb0, 0xc00071ac30, 0x2b0b201, 0xc0006eebc0)
    	/home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/provider/provider.go:345 +0xcb8
    github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_DiffConfig_Handler.func1(0x3105be8, 0xc00047f5c0, 0x2cb14e0, 0xc0005fccb0, 0x2cc6a00, 0x41623c8, 0x3105be8, 0xc00047f5c0)
    	/home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.1.0/proto/go/provider.pb.go:2158 +0x89
    github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x3105be8, 0xc00047f140, 0x2cb14e0, 0xc0005fccb0, 0xc000448380, 0xc00096c510, 0x0, 0x0, 0x30bfc20, 0xc00054dbd0)
    	/home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-opentracing@v0.0.0-20180507213350-8e809c8a8645/go/otgrpc/server.go:57 +0x30a
    github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_DiffConfig_Handler(0x2d4d5e0, 0xc00071ac30, 0x3105be8, 0xc00047f140, 0xc000480420, 0xc000449280, 0x3105be8, 0xc00047f140, 0xc00093b500, 0x950)
    	/home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.1.0/proto/go/provider.pb.go:2160 +0x150
    google.golang.org/grpc.(*Server).processUnaryRPC(0xc0000ff6c0, 0x3121b58, 0xc00048ac00, 0xc000152000, 0xc00060d8f0, 0x41001f0, 0x0, 0x0, 0x0)
    	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210 +0x52b
    google.golang.org/grpc.(*Server).handleStream(0xc0000ff6c0, 0x3121b58, 0xc00048ac00, 0xc000152000, 0x0)
    	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533 +0xd0c
    google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc00079a030, 0xc0000ff6c0, 0x3121b58, 0xc00048ac00, 0xc000152000)
    	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871 +0xab
    created by google.golang.org/grpc.(*Server).serveStreams.func1
    	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:869 +0x1fd
b
can you share your kubeconfig generation code? any idea where port 52392 is coming from?
p
im using the code in the example
let me grab the link
https://github.com/pulumi/examples/blob/master/aws-py-eks/utils.py
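for reference it builds a kubeconfig from the cluster's outputs, roughly like this (a from-memory sketch, the exact code is at the link above):

import json
import pulumi

def generate_kube_config(eks_cluster):
    # render a kubeconfig from the cluster's outputs; because endpoint,
    # CA data, and name are all pulumi Outputs, anything consuming the
    # result implicitly depends on the cluster
    return pulumi.Output.all(
        eks_cluster.endpoint,
        eks_cluster.certificate_authority["data"],
        eks_cluster.name,
    ).apply(lambda args: json.dumps({
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{
            "name": "kubernetes",
            "cluster": {"server": args[0], "certificate-authority-data": args[1]},
        }],
        # note the context is named "aws" in here, not the cluster arn
        "contexts": [{
            "name": "aws",
            "context": {"cluster": "kubernetes", "user": "aws"},
        }],
        "current-context": "aws",
        "users": [{
            "name": "aws",
            "user": {"exec": {
                "apiVersion": "client.authentication.k8s.io/v1alpha1",
                "command": "aws-iam-authenticator",
                "args": ["token", "-i", args[2]],
            }},
        }],
    }))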
im unsure about the port… grepping through the code shows no reference
i just blew away the whole eks environment and am recreating now
i try to create a bunch of k8s resources after the namespace example above … they all fail with
error: configured Kubernetes cluster is unreachable: unable to load Kubernetes client configuration from kubeconfig file: context "arn:aws:eks:us-east-1:69999999991:cluster/itplat-eks-cluster" does not exist
which makes me believe im not loading the k8s provider properly
or that the dependency chain is not working as i believe
b
Is that the right cluster name for your provider?
p
    # create cluster resource
    eks_cluster = aws.eks.Cluster(
        "itplat-eks-cluster",
        name="itplat-eks-cluster",
and then
ooooooh
no that looks right
the cluster object is passed to utils
    # create k8s providers allowing us to switch clusters/contexts
    k8s_use1_provider_name = "k8s_provider_use1"

    k8s_use1_provider = k8s.Provider(
        k8s_use1_provider_name,
        cluster=eks_cluster.arn,
        context=eks_cluster.arn,
        enable_dry_run=None,
        kubeconfig=utils.generate_kube_config(eks_cluster),
        namespace=None,
        render_yaml_to_directory=None,
        suppress_deprecation_warnings=None,
    )
all subsequent k8s stuff employing that provider fails
b
try removing the
context=
property
in fact, remove everything except the
kubeconfig=
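i.e. a minimal sketch of what should remain:

k8s_use1_provider = k8s.Provider(
    k8s_use1_provider_name,
    kubeconfig=utils.generate_kube_config(eks_cluster),
)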
p
yeah this made no difference
so i abstracted and simplified the code to just this :-
"""An AWS Python Pulumi program"""

import json
import pulumi
import pulumi_kubernetes as k8s
import pulumi_aws as aws
import utils
from pulumi import ResourceOptions


eks_cluster = aws.eks.Cluster(
    "test-cluster",
    name="itplat-eks-cluster",
    role_arn="arn:aws:iam::629205377521:role/itplat_eks_clusteradmin_role",
    vpc_config={
        "endpointPrivateAccess": True,
        "endpointPublicAccess": False,
        "securityGroupIds": ["sg-08a13f35d34ee1b7f"],
        "subnet_ids": ["subnet-0b962f93a756f624b", "subnet-09cc1903498dc4474", "subnet-0e0ff2e030397c840"],
    },
    opts=ResourceOptions(import_='itplat-eks-cluster'),
)

k8s_use1_provider = k8s.Provider(
    "test_provider",
    kubeconfig=utils.generate_kube_config(eks_cluster),
)

# lets have a go at creating a "crossplane-system" namespace
crossplane_namespace = k8s.core.v1.Namespace(
    "crossplane-system", opts=pulumi.ResourceOptions(provider=k8s_use1_provider), metadata=k8s.meta.v1.ObjectMetaArgs(name="crossplane-system")
)
and that works just fine
so this really feels like a dependency fail now
like it is trying to use a provider to create k8s stuff before the provider exists
thus the transport errors
im going to set some explicit dependsOn to see if that helps
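something like this (illustrative only; if the kubeconfig output already carries the implicit dependency this shouldn't be needed):

crossplane_namespace = k8s.core.v1.Namespace(
    "crossplane-system",
    metadata=k8s.meta.v1.ObjectMetaArgs(name="crossplane-system"),
    opts=pulumi.ResourceOptions(
        provider=k8s_use1_provider,
        depends_on=[eks_cluster],  # explicit ordering on top of the implicit edge
    ),
)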
b
just to make sure I understand, you removed:
cluster=eks_cluster.arn,
context=eks_cluster.arn,
namespace=None,
render_yaml_to_directory=None,
suppress_deprecation_warnings=None,
and it now works?
p
no, i have two dirs with similar config… one works and one doesn't… both define the provider in the same way, with only the kubeconfig
b
what error are you getting on the one that doesn't work?
p
i cant see any difference in the piece that is erroring except a potential dependency issue
Diagnostics:
  pulumi:pulumi:Stack (aws_eks-itplat-aws-eks):
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x291f382]
    goroutine 35 [running]:
    github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/provider.getActiveClusterFromConfig(0xc00093eae0, 0xc0001d5980, 0x0)
    	/home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/provider/util.go:118 +0xe2
    github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/provider.(*kubeProvider).DiffConfig(0xc000535860, 0x31057e8, 0xc0001d5950, 0xc0009ac850, 0xc000535860, 0x2b0af01, 0xc000616e80)
    	/home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/provider/provider.go:344 +0xc8d
    github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_DiffConfig_Handler.func1(0x31057e8, 0xc0001d5950, 0x2cb11c0, 0xc0009ac850, 0x2cc66e0, 0x4162348, 0x31057e8, 0xc0001d5950)
    	/home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.0.0/proto/go/provider.pb.go:2158 +0x89
    github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x31057e8, 0xc0001d5650, 0x2cb11c0, 0xc0009ac850, 0xc00004c9c0, 0xc000403290, 0x0, 0x0, 0x30bf880, 0xc0004444e0)
    	/home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-opentracing@v0.0.0-20180507213350-8e809c8a8645/go/otgrpc/server.go:57 +0x30a
    github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_DiffConfig_Handler(0x2d4d2c0, 0xc000535860, 0x31057e8, 0xc0001d5650, 0xc00093ea80, 0xc00010e040, 0x31057e8, 0xc0001d5650, 0xc0000d0000, 0x1056)
    	/home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.0.0/proto/go/provider.pb.go:2160 +0x150
    google.golang.org/grpc.(*Server).processUnaryRPC(0xc00077f6c0, 0x3121758, 0xc000001680, 0xc0009cc100, 0xc00028c030, 0x4100170, 0x0, 0x0, 0x0)
    	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210 +0x52b
    google.golang.org/grpc.(*Server).handleStream(0xc00077f6c0, 0x3121758, 0xc000001680, 0xc0009cc100, 0x0)
    	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533 +0xd0c
    google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000601a10, 0xc00077f6c0, 0x3121758, 0xc000001680, 0xc0009cc100)
    	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871 +0xab
    created by google.golang.org/grpc.(*Server).serveStreams.func1
    	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:869 +0x1fd

  kubernetes:core/v1:Namespace (crossplane-system):
    error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:52449: connect: connection refused"
if i comment out the k8s_provider, no error
at this stage i think when i blew away the cluster and created a new one maybe something got screwed in the state file and it still references the deleted cluster arn
and cant make a connection there
b
try
pulumi up -r
p
because that simplified code above manually imports the existing cluster and works correctly
whats the “-r” do ?
b
refreshes the state file against the infra that's actually provisioned and makes sure it's accurate
so it'll make sure the provider looks like the defined code
p
oh right, its a refresh… cool
yeah its saying the provider differs on the cluster
so it wants to remove the context and cluster from the provider
b
yeah, cluster and context shouldn't have the arn set, those are incorrect values
p
ok im going to go ahead and clean that up
then hopefully all will be well again
gah … hit ulimit… expanding
kubernetes:core/v1:Namespace (crossplane-system):
    error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:57683: connect: connection refused"
so looks like i need to manually hack the state
exporting and hacking to remove those values
actually i guess i can just remove that resource
and let it recreate
sweet, no error in preview now
yeah cool, the null pointer errors are all gone … thx @billowy-army-68599 … i still have errors in my k8s stuff but i know what to do with those