# kubernetes
s
Hi, I am trying to get the Kubernetes provider to render yaml using `const provider = new k8s.Provider("render-yaml", { renderYamlToDirectory: "rendered" });`. Naturally, this works on my machine, but I cannot get it to create the files on other machines or in the CI environment using the same plugin versions (pulumi v3.131.0 and kubernetes 4.15.0). The only difference I can spot is that locally `pulumi preview` prints an additional `rendered file ..` line, which is not printed in the other environments:
```
kubernetes:apps/v1:Deployment (backend-api):
    warning: rendered YAML will contain a secret value in plaintext
    warning: rendered YAML will contain a secret value in plaintext
    warning: rendered file /home/work/..../1-manifest/apps_v1-deployment-.....yaml contains a secret value in plaintext
```
I tried looking at the provider code and `rendered file` is printed in the `Create()` and `Update()` methods, so my only guess is that for some reason these are not invoked. Setting `PULUMI_K8S_ENABLE_SERVER_SIDE_APPLY=false` stops the yaml files from being created locally, and setting it to true in the other environments does not help either.
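For context, a minimal sketch of the kind of setup being described, assuming TypeScript; the image, labels, and output directory are illustrative:
```typescript
import * as k8s from "@pulumi/kubernetes";

// Provider that writes manifests to ./rendered instead of applying them to a cluster.
const renderProvider = new k8s.Provider("render-yaml", {
    renderYamlToDirectory: "rendered",
});

// Any resource created with this provider is written out as YAML by the provider
// rather than being applied to a cluster.
const deployment = new k8s.apps.v1.Deployment("backend-api", {
    spec: {
        replicas: 1,
        selector: { matchLabels: { app: "backend-api" } },
        template: {
            metadata: { labels: { app: "backend-api" } },
            spec: { containers: [{ name: "api", image: "nginx:1.27" }] }, // placeholder image
        },
    },
}, { provider: renderProvider });
```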
h
can you say more about how you’re invoking the program? for example is your CI environment only running a preview?
s
yes, `pulumi preview` is how I invoke it both locally and in CI. The reason for this is that currently Pulumi is managing the full resource lifecycle, and I am trying to extract the k8s resources as yaml without altering the state
h
if you have any inputs that are unknown during your preview it won’t be able to output yaml, for example if your outputs depend on a dynamically named namespace that doesn’t exist on the cluster yet
without knowing more my guess is you might have already set something up locally which is missing on CI
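A small sketch of the kind of unknown input being described, assuming TypeScript; the resource names are made up:
```typescript
import * as k8s from "@pulumi/kubernetes";

// Pulumi auto-names this namespace, so until it is actually created its final
// metadata.name is an unknown output during preview.
const ns = new k8s.core.v1.Namespace("app");

// This ConfigMap's metadata depends on that unknown output, so during a preview
// the provider cannot compute its final manifest (and therefore cannot render it).
const cm = new k8s.core.v1.ConfigMap("app-config", {
    metadata: { namespace: ns.metadata.name },
    data: { greeting: "hello" },
});
```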
s
> dynamically named namespace that doesn’t exist on the cluster yet
I understand that `renderYamlToDirectory` and `kubeconfig` are mutually exclusive. From your statement it seems that rendering yaml also requires a connection to the cluster.
h
you’re right, it doesn’t, but if there are any unknown inputs it won’t be able to render. i was just trying to give an example of something that isn’t known until you run up.
s
I see. Do you think it is worth comparing the `-v=10` log outputs of both environments? The stack contains a fair number of resources, so it will take a while to comb through.
h
you can try running with `--logtostderr --logflow -v=9` to confirm whether this is actually what’s happening; you’ll see a “cannot preview” message
s
You are right.
```
gvkExists check failed due to unreachable cluster
cannot preview Create(urn:pulumi:<stack-name>::services::kubernetes:core/v1:ConfigMap::.....)
```
I cannot comprehend how it happens to work in my dev environment. It must be something related to resolving some external resources, but AWS login is set up in both places.
I picked random resources (a Service and a ConfigMap); all of their inputs and outputs seem to be populated.
Running `pulumi preview --diff` does not show any unknown inputs. The only change I see comes from the provider change:
```
[provider: urn:pulumi:...::services::pulumi:providers:kubernetes::k8sProvider::792acc7b-8a8d-46f3-b714-8ce66a62bb8d => urn:pulumi:...::services::pulumi:providers:kubernetes::k8sProvider::output<string>]
```
h
why is the provider changing? are your resources getting replaced because of that?
s
I am swapping the existing provider (which uses `kubeconfig`) for a `renderYamlToDirectory` provider. Yes, the plan is 100% resource replacements
h
i think that would explain it, try setting `clusterIdentifier` in the provider’s config to stop those things from getting replaced
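A sketch of what that provider configuration could look like, assuming TypeScript; the identifier string and directory are placeholders:
```typescript
import * as k8s from "@pulumi/kubernetes";

// Keeping clusterIdentifier identical across the kubeconfig-based provider and the
// rendering provider tells the engine it is still targeting the same cluster, so
// changing the other provider settings no longer forces resource replacements.
const provider = new k8s.Provider("k8sProvider", {
    renderYamlToDirectory: "rendered",
    clusterIdentifier: "my-shared-cluster-id",
});
```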
s
Thanks. Does this mean I will have to run `pulumi up` once to apply `clusterIdentifier` to the stack state, and from then on it will be used in subsequent runs?
h
as long as it’s reflected in the source code, yes
s
Back at this again: I set `clusterIdentifier` and the resources aren't getting replaced anymore, but I still can't get my CI or any environment other than my local machine to save the rendered yaml files. The only provider diff I have is replacing `kubeconfig` with `renderYamlToDirectory`, which makes sense:
pulumi:providers:kubernetes k8sProvider update [diff: +renderYamlToDirectory-kubeconfig]
It turns out only `pulumi up` renders the files reliably, so what I ended up doing is copying the stack into a temporary one, running `pulumi up` there to get the files, and then removing the temp stack.
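For what it’s worth, a rough sketch of that temp-stack workaround using the Pulumi Automation API; the stack names are illustrative, and it assumes the temporary stack can reuse the same project directory, configuration, and secrets provider:
```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

async function renderViaTempStack() {
    const workDir = "."; // the Pulumi project directory

    // Copy the real stack's state into a temporary stack.
    const source = await LocalWorkspace.createOrSelectStack({ stackName: "services", workDir });
    const state = await source.exportStack();
    const temp = await LocalWorkspace.createOrSelectStack({ stackName: "services-render", workDir });
    await temp.importStack(state);

    // `up` invokes the provider's Create/Update, which is what actually writes the
    // rendered YAML files to the renderYamlToDirectory path.
    await temp.up({ onOutput: console.log });

    // Afterwards the temporary stack can be dropped; the CLI equivalent is
    // `pulumi stack rm --force`, since its state still lists resources.
}

renderViaTempStack().catch((err) => {
    console.error(err);
    process.exitCode = 1;
});
```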