# general
s
This message was deleted.
p
hah, interesting question šŸ˜„
by ā€œit may or may not be createdā€ you probably meant its creation is within some `if` statement
I guess depending on the programming language you use you might get different results, but I assume it will vary between ā€œit won’t compileā€ (because you cannot reference the variable) and ā€œruntime exception with unexpected null/nil/Noneā€, because `dependsOn` expects an existing object. Either way, it’s not going to work at all and you should be forced to fix your code šŸ™‚.
I’ve just checked that in my example project and when I passed `None` to the `depends_on` list, I got:
```
  File "./__main__.py", line 78, in <module>
    opts=pulumi.ResourceOptions(depends_on=[None]),
  File "<redacted>/gcp_project_bootstrap/venv/lib/python3.8/site-packages/pulumi/resource.py", line 471, in __init__
    raise Exception(
Exception: 'depends_on' was passed a value that was not a Resource.
error: an unhandled error occurred: Program exited with non-zero exit code: 1
```
So in other words, it’s up to you to make the creation of the other resource conditional as well. If you know that the resource you pass to `dependsOn` might be null/None/nil/whatever-but-not-an-actual-resource, just check it and don’t create the dependent resource, because it’s going to fail otherwise.
```python
if resource_i_depend_on is not None:
  another_resource = MyResource(
    ...
    opts=pulumi.ResourceOptions(depends_on=[resource_i_depend_on])
  )
```
s
the use case I have here is creating a kubernetes deployment which may or may not have a configmap (the configmap is completely optional). If the configmap does exist, it should be created before the deployment, otherwise we have a problem looking it up to get it mounted to the pod. Does this make sense? By the way, I am using TypeScript here
p
Well, to be honest k8s should deal with that situation just fine šŸ™‚ Sure, it might be nice to create the configmap first, but the pod will still start properly once it’s created.
anyway, that should still be easy to do in pulumi
can you paste a code snippet showing how you implement that right now?
(I assume you already have a working solution where your deployment depends on the configmap resource?)
p
You could set up the array beforehand and insert the resource into it if it gets created (and if it’s not, the array will just be empty, or contain whatever other resources you have)
p
^ that sounds about right
you can depend on an empty list šŸ™‚ (so it’ll be a valid resource definition in both cases)
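For instance, here’s a minimal TypeScript sketch of that pattern (the `createConfigMap` flag and all resource names/images are just placeholders for illustration, not anything from your code):
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical flag: whatever condition decides whether the ConfigMap should exist at all.
const createConfigMap = true;

// Build the dependency list up front; it simply stays empty when the ConfigMap is skipped.
const deploymentDependsOn: pulumi.Resource[] = [];

let configMap: k8s.core.v1.ConfigMap | undefined;
if (createConfigMap) {
    configMap = new k8s.core.v1.ConfigMap("app-config", {
        data: { "app.conf": "key=value" },
    });
    deploymentDependsOn.push(configMap);
}

const appLabels = { app: "my-app" };
const deployment = new k8s.apps.v1.Deployment("app", {
    spec: {
        selector: { matchLabels: appLabels },
        replicas: 1,
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "app",
                    image: "nginx:1.25",
                    // the envFrom / volume for the ConfigMap would only be added when configMap is set
                }],
            },
        },
    },
}, { dependsOn: deploymentDependsOn }); // an empty array is a perfectly valid dependsOn
```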
s
thanks @purple-application-23904 @prehistoric-activity-61023 I am stuck on a custom k8s resource (externalsecret). In the deployment spec we are loading the secrets into environment variables using envFrom; the actual k8s secrets are created by the externalsecret. Since envFrom only takes the secret name as input, which doesn’t change, is there a way to force a redeployment of the application if there’s a change in the secret data?
p
a trick often used in helm is to add an annotation to the deployment with a sha of its dependencies (like secrets and configmaps), so whenever any of them changes, the annotation changes and, in the end, k8s performs a redeployment
Considering your secrets are managed by the operator and not pulumi, I’d try to resolve that on the cluster level rather than coding it in pulumi (see https://github.com/external-secrets/kubernetes-external-secrets/issues/38).
However, if you want to implement that in pulumi, if I were you, I’d try to get those secrets into pulumi and mimic the trick used in helm (that is, once you have the secret object, calculate an md5/sha of it and set a custom annotation on the deployment)
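A rough TypeScript sketch of that idea (all names here are made up; it also assumes you can read the operator-managed Secret back into the program, which has the caveats mentioned above):
```typescript
import * as crypto from "crypto";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical: read the operator-managed Secret into the Pulumi program
// (the id for a namespaced resource is "<namespace>/<name>").
const appSecret = k8s.core.v1.Secret.get("app-secret", "default/app-secret");

// Hash the secret data; any change in the data changes the checksum.
const secretChecksum = appSecret.data.apply(data =>
    crypto.createHash("sha256").update(JSON.stringify(data ?? {})).digest("hex"));

const appLabels = { app: "my-app" };
const deployment = new k8s.apps.v1.Deployment("app", {
    spec: {
        selector: { matchLabels: appLabels },
        template: {
            metadata: {
                labels: appLabels,
                // Helm-style trick: this annotation changes whenever the secret data does,
                // which changes the pod template and triggers a rollout.
                annotations: { "checksum/app-secret": secretChecksum },
            },
            spec: {
                containers: [{
                    name: "app",
                    image: "my-app:latest",
                    envFrom: [{ secretRef: { name: "app-secret" } }],
                }],
            },
        },
    },
});
```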
s
thanks for sharing, actually how does the k8s provider know there is a need to redeploy the application? I am referring to this example: https://github.com/pulumi/examples/blob/master/kubernetes-ts-configmap-rollout/index.ts It seems like the configmap name in that example won’t change either, so how would the provider know there’s a data change in the configmap and thus a need to redeploy?
p
I’d check it in practice but I think the name will change. This config map doesn’t have a name specified explicitly (see line 18, there’s no `name` in `metadata`, only `labels`). That means pulumi autonaming will be used. When the config map definition changes (due to different content of the `default.conf` file), pulumi will try to update the resource and will generate a new name (according to the autonaming mechanism). You can see later in the code that the name of the configmap is actually extracted from the resource (it’s not hardcoded):
```typescript
const nginxConfigName = nginxConfig.metadata.apply(m => m.name);
```
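And roughly (paraphrasing that example from memory, not quoting it exactly), the generated name is then fed into the deployment’s pod template, which is what makes the rollout happen when the content changes:
```typescript
import * as k8s from "@pulumi/kubernetes";

const appLabels = { app: "nginx" };

// `nginxConfigName` is the auto-generated name extracted in the snippet above.
// Because the pod template references it, a replacement ConfigMap (with a new
// auto-generated name) changes the template and k8s performs a rollout.
const nginx = new k8s.apps.v1.Deployment("nginx", {
    spec: {
        selector: { matchLabels: appLabels },
        replicas: 1,
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "nginx",
                    image: "nginx:1.15-alpine",
                    volumeMounts: [{ name: "nginx-configs", mountPath: "/etc/nginx/conf.d" }],
                }],
                volumes: [{ name: "nginx-configs", configMap: { name: nginxConfigName } }],
            },
        },
    },
});
```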
s
ah, that does the trick. In this case, is there any way that we can force a restart of the pods?
p
changing the definition of the deployment (because a different configmap was assigned) causes the rollout restart
s
I guess it won't work if we fix the name of the configmap? saw something on the reloader, let me try that
p
you can make it work even with a fixed name: by creating an annotation with a sha (something I’ve already described earlier in this thread)
but if your secret is managed outside of pulumi’s scope, I’m not sure if that’s a good idea to do so