# general
a
is it possible to have `pulumi up` only create new resources, without deleting the old ones, with some `--flag`?
m
Can you say more about the use case? Is this what you are looking for: https://www.pulumi.com/docs/iac/concepts/options/ignorechanges/ ?
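(For reference, a minimal sketch of how the linked `ignoreChanges` option is used, with a made-up Deployment; it only suppresses diffs on the listed properties and does not stop Pulumi from deleting a resource that is removed from the program.)
```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical Deployment; names, labels, and image are placeholders.
const web = new k8s.apps.v1.Deployment("web", {
    spec: {
        replicas: 2,
        selector: { matchLabels: { app: "web" } },
        template: {
            metadata: { labels: { app: "web" } },
            spec: { containers: [{ name: "web", image: "nginx:1.27" }] },
        },
    },
}, {
    // Diffs on replicas are ignored; deleting the resource is not affected.
    ignoreChanges: ["spec.replicas"],
});
```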
a
no, I don't think that would work. I have a k8s deployment I want to deploy to a new cluster, so I can switch clusters, but I want it to run in both clusters (by default it also wants to remove it from the old cluster)
then later at some point do a proper `pulumi up`, where it gets deleted from the old cluster
m
could you use retainOnDelete and then create a command to delete it, with a dependency on the new cluster? https://www.pulumi.com/docs/iac/concepts/options/retainondelete/#resource-option-retainondelete
In that scenario you should be able to do it in a single `pulumi up`
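(A minimal sketch of that suggestion, assuming a Pulumi Kubernetes program; the provider setup, config key, and Deployment spec are placeholders. With `retainOnDelete`, removing the resource from the program drops it from state but leaves it running on the old cluster.)
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();
// Provider pointed at the old cluster; "oldKubeconfig" is a made-up config key.
const oldCluster = new k8s.Provider("old-cluster", {
    kubeconfig: config.requireSecret("oldKubeconfig"),
});

const labels = { app: "myapp" };
const app = new k8s.apps.v1.Deployment("app-old", {
    spec: {
        replicas: 2,
        selector: { matchLabels: labels },
        template: {
            metadata: { labels },
            spec: { containers: [{ name: "myapp", image: "myapp:latest" }] },
        },
    },
}, {
    provider: oldCluster,
    // If this resource is later removed from the program, Pulumi drops it
    // from state but leaves the Deployment running on the old cluster.
    retainOnDelete: true,
});
```
The copy left behind on the old cluster can then be removed manually (or with a one-off command, as suggested above) once DNS has been switched over.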
a
maybe. I want to be able to fall back and start controlling the deployments on the old cluster again
it seems kinda cumbersome, but maybe
m
I have a scenario in Azure where I am using k8s and I don't have downtime when modifying the cluster; generally, if it fails, it falls back on its own.
but I am also letting Azure handle the management of the k8s, so I am not directly controlling them.
a
I don't have Pulumi controlling the DNS,
so I need to do that manual step. It's easiest to just have it run fully in both clusters, then switch over the DNS entries and then take down the old cluster
right now I have two clusters running, but I don't want to move the deployments over, because that will be downtime 😛
I could do `pulumi stack init --copy-config-from <old-deployment>` and make new stacks..
m
I think having separate stacks for both clusters is likely the way to go here. You can spin up the workloads on the new cluster, verify that they work, and then manually shift requests from the old to the new cluster.
m
Can you get your DNS state in your code? If so, then you could make the calls dependent on what is in the DNS, right?
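(A hedged sketch of reading the current DNS state from Cloudflare inside the program; `getZone` matches the snippet later in the thread, while the `getRecord` lookup, the hostname, and the result field names are assumptions that depend on the provider version.)
```typescript
import * as cloudflare from "@pulumi/cloudflare";

// Zone lookup mirrors the snippet later in the thread; the getRecord call,
// hostname, and field names are assumptions for illustration.
const zone = cloudflare.getZone({ name: "example.com" });

const currentRecord = zone.then(z =>
    cloudflare.getRecord({ zoneId: z.id, hostname: "app.example.com" })
);

// Decide per-cluster behaviour based on where DNS currently points,
// e.g. only tear the old deployment down once DNS targets the new tunnel.
export const currentDnsTarget = currentRecord.then(r => r.value);
```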
a
yeah, having DNS in state is an interesting possibility also
m
Will you do this shift multiple times going forward?
m
I literally just went through this same scenario in my stack. Except I also had to deal with a managed certificate that can't completely be handled in the API.
a
yeah, when the cluster goes to extended support, we create new clusters and move the stack over
(we also control the clusters with Pulumi, so we create new stacks there)
m
Then it definitely makes sense to build this out properly so that you don't have to fiddle with the Pulumi code and resource imports.
a
yeah, spent a lot of time already thinking about how to do this without downtime 😄 and with the possibility to fall back to the old cluster
but ideally the Cloudflare stuff would also need to be managed by Pulumi. we use zero-trust cloudflared tunnels, and point DNS entries to the new tunnel in the new cluster when ready
I was hoping for an `--only-create` flag, that would have fixed it for now 😄 but I guess not
m
I'm using Cloudflare as well. For Azure I had to use this pattern to create/replace containerApps (which use managed k8s):
1. Create the managed environment, networking, etc. as normal.
2. Create the containerApp with custom domains that have the `name` set and the `bindingType` set to `disabled` (no other properties in the customDomains should be set). It should have a dependency on the environment.
3. Create the DNS records, dependent on both the environment and the containerApp.
4. Create the managedCertificates needed, with a dependency on the containerApp.
5. At this point you have all the resources configured, however the certificate is not bound to the container app and there is no way to do this through the API. All of the outstanding Terraform issues on GitHub show this as unresolved. So the binding needs to be done through the CLI, using `command.local.Command` with `az containerapp hostname bind`. This needs to trigger on `cert.systemData.LastModifiedAt` and depend on the containerApp, the environment, and the cert. If there is more than one cert being added, then your commands must be dependent on each other so they fire in series and not in parallel (see the sketch below).

It was a pain to figure out, but in this case it is mostly because of a gap in the Azure API.
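(A condensed sketch of step 5, assuming `@pulumi/command`; `resourceGroup`, `containerApp`, `managedEnv`, and `cert` stand for the resources created in steps 1-4 and are not defined here, and the hostname and exact CLI flags are placeholders to verify.)
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as command from "@pulumi/command";

// One bind command per custom hostname; chain dependsOn between these
// commands if several certs are bound, so they run in series, not parallel.
const bindHostname = new command.local.Command("bind-crm-hostname", {
    create: pulumi.interpolate`az containerapp hostname bind \
        --resource-group ${resourceGroup.name} \
        --name ${containerApp.name} \
        --environment ${managedEnv.name} \
        --hostname crm.example.com`,
    // Re-run the bind whenever the managed certificate is re-issued.
    triggers: [cert.systemData.apply(sd => sd?.lastModifiedAt)],
}, {
    dependsOn: [containerApp, managedEnv, cert],
});
```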
a
yeah, that's a lot of certificate stuff 😄 I'm glad I don't have to deal with that
m
The funny thing is that there are a lot of issues logged for this and no solution that I could find that did not require manual intervention to deploy. It bugs me when people open issues but don't post the solution when they figure out a workaround; they just disappear. I mean for the issue I had in Azure, not yours.
Since you are using Cloudflare, if you can't manage the DNS in Pulumi, I would set it up to at least pull information from it to avoid multiple stacks. Ultimately it is pretty simple to manage though.
```typescript
const crmCNAME = new cloudflare.Record(props.crmSubdomain, {
    zoneId: zone.then((z: cloudflare.GetZoneResult) => z.id),
    name: `${props.crmSubdomain}.${props.domain}`,
    type: "CNAME",
    content: props.siteFQDN,
    ttl: 3600,
}, { dependsOn: [marketing_env, props.mauticNginxApp] });
```
This runs after creating/modifying the container and before cert validation.
a
hmm, nice. How do you easily make Pulumi start managing an existing record? 😛
or do I have to delete it first and then let Pulumi create it?
m
You can import resources into Pulumi. I haven't done that before, though.
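(A hedged sketch of adopting the existing record via the `import` resource option; the IDs and values are placeholders and the `"<zoneId>/<recordId>"` import format is an assumption about the Cloudflare provider, so check it against the provider's import docs. The declared properties have to match the live record or the import is rejected.)
```typescript
import * as cloudflare from "@pulumi/cloudflare";

// Adopt the live CNAME instead of recreating it; IDs and values are placeholders.
// After one successful `pulumi up`, the `import` option can be removed.
const existing = new cloudflare.Record("app-cname", {
    zoneId: "023e105f4ecef8ad9ca31a8372d0c353",
    name: "app.example.com",
    type: "CNAME",
    content: "old-tunnel.example.com",   // must match what is currently in Cloudflare
    ttl: 3600,
}, {
    // Assumed import ID format: "<zoneId>/<recordId>".
    import: "023e105f4ecef8ad9ca31a8372d0c353/372e67954025e0ba6aaa6d586b9e0b59",
});
```
There is also a `pulumi import` CLI command that adopts a resource into state and prints matching code, which avoids writing the properties by hand.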
a
https://github.com/STRRL/cloudflare-tunnel-ingress-controller looks like it can simplify stuff for me quite a lot
m
nice. When I have some time I need to dig into that one.
a
I looked into this as well: https://github.com/adyanth/cloudflare-operator/ and it looked a bit more serious