brash-waiter-73733 — 09/19/2020, 12:34 PM
I'm using `render_yaml_to_directory` to output YAML files from my Python code. However, I noticed (as documented) that this means I can't then deploy those resources to the cluster with Pulumi as well. Does anyone have a pattern for doing both? i.e.:
1. run the Pulumi program
2. deploy to cluster
3. generate YAML
I tried:
• Passing the render option and a list of providers, but that didn't seem to revert to defaults ❌
• Using a config value as a toggle between two providers, but this led to state problems ❌
• Abstracting the resources and then applying twice in the same script, but Pulumi complains about them having the same name ❌
I appreciate this has a nice warning. I'd be interested if anyone has a pattern for doing the above, or whether this might be supported in the future.
:param pulumi.Input[str] render_yaml_to_directory: BETA FEATURE - If present, render resource manifests to this directory. In this mode, resources will not
be created on a Kubernetes cluster, but the rendered manifests will be kept in sync with changes
to the Pulumi program. This feature is in developer preview, and is disabled by default.
Note that some computed Outputs such as status fields will not be populated
since the resources are not created on a Kubernetes cluster. These Output values will remain undefined,
and may result in an error if they are referenced by other resources. Also note that any secret values
used in these resources will be rendered in plaintext to the resulting YAML.
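A rough, untested sketch of the "apply twice" idea that avoids the duplicate-name error: build the resources in one function and call it once per provider, suffixing only the *Pulumi* resource name (the `ConfigMap`, names, and `rendered` directory here are all illustrative, not from the thread):

```python
import pulumi
import pulumi_kubernetes as k8s

def make_app(provider: k8s.Provider, suffix: str = ""):
    return k8s.core.v1.ConfigMap(
        f"app-config{suffix}",            # unique Pulumi name per copy
        metadata={"name": "app-config"},  # same Kubernetes name in both outputs
        data={"greeting": "hello"},
        opts=pulumi.ResourceOptions(provider=provider),
    )

# One provider deploys to the active kubeconfig context,
# the other only renders manifests to disk.
cluster = k8s.Provider("cluster")
renderer = k8s.Provider("renderer", render_yaml_to_directory="rendered")

make_app(cluster)                 # 2. deploy to cluster
make_app(renderer, "-rendered")   # 3. generate YAML
```

Note the caveats from the docstring still apply to the rendered copy: status-type Outputs stay undefined, and any secrets are written to the YAML in plaintext.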
broad-dog-22463 — 09/19/2020, 1:40 PM
The intent is to allow you to do your deployments out of band using one of the gitops methods. I haven't had time to create a demo of this yet, but generally I'd expect it to look like this:
- render YAML
- use the GitHub provider to push to a git repo
- register the application in a gitops controller like Argo CD or Flux
I'm happy to jump on a call on Monday and chat through this if you like.
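The steps above could look roughly like this from the command line (paths and names are hypothetical; the `git push` could equally be replaced by the pulumi-github provider, and the controller registration is a one-time step):

```shell
# 1. render YAML: pulumi up writes manifests under ./rendered
#    (via the provider's render_yaml_to_directory option)
pulumi up --yes

# 2. push the rendered manifests to the repo the gitops controller watches
git add rendered/
git commit -m "chore: update rendered manifests"
git push origin main

# 3. (one-time) register the application with the controller, e.g. Argo CD:
# argocd app create my-app --repo <repo-url> --path rendered \
#   --dest-server https://kubernetes.default.svc --dest-namespace default
# After that, each pushed commit is synced to the cluster automatically.
```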
brash-waiter-73733 — 09/19/2020, 3:43 PM