# general
f
Hi, I’m having a problem with plugin versions in Python. I create a `pulumi_kubernetes.Provider` using the code below:
```python
from pulumi import ResourceOptions
from pulumi_kubernetes import Provider

k8s_provider = Provider(
    name,
    kubeconfig=k8s_config,
    opts=ResourceOptions(depends_on=[node_pool]),
)
```
where `k8s_config` is generated from the GCP cluster created earlier, and all of the k8s resources use this Provider to deploy. Currently, the Provider in the stack is using pulumi_kubernetes version 4.5.0, but when I run `pulumi preview` or `up` on my local machine, it keeps trying to revert it back to version 3.30.2 (which I presume is the first version I installed on my machine). This triggers a replacement of the Provider and of all Kubernetes resources that depend on it. Below is a sample from `pulumi preview -j`:
```json
{
    "op": "update",
    "urn": "urn:pulumi:prod::foundry-r::pulumi:providers:kubernetes::xxx",
    "oldState": {
        "urn": "urn:pulumi:prod::foundry-r::pulumi:providers:kubernetes::xxx",
        "custom": true,
        "id": "5de2c95a-0f6a-4b59-8a8a-6f3cabb7726e",
        "type": "pulumi:providers:kubernetes",
        "inputs": {
            "cluster": "xxx",
            "context": "xxx",
            "kubeconfig": "[secret]",
            "namespace": "default",
            "version": "4.5.0"
        },
        "outputs": {
            "cluster": "xxx",
            "context": "xxx",
            "kubeconfig": "[secret]",
            "namespace": "default",
            "version": "4.5.0"
        }
    },
    "newState": {
        "urn": "urn:pulumi:prod::foundry-r::pulumi:providers:kubernetes::xxx",
        "custom": true,
        "type": "pulumi:providers:kubernetes",
        "inputs": {
            "kubeconfig": "[secret]",
            "version": "3.30.2"
        }
    },
    "diffReasons": [
        "cluster",
        "context",
        "namespace",
        "version"
    ],
    "detailedDiff": null
}
```
This will replace everything on Kubernetes, including namespaces, which is not ideal. I’ve tried deleting all Pulumi plugins, deleting all Python packages in pip, and pinning the version in requirements.txt, requirements-dev.txt, and pyproject.toml, but it still tries to use the old version anyway. My questions are:
1. Where does the pulumi command pull these versions from? I’ve even seen it install 3 different versions at the same time in CI logs.
2. Can I fix the version number so that all the devs use the same version as the CI pipeline?
3. Is there a better way of doing this? It seems really fragile that everything will be replaced if I change a version number. Even if I manage to pin the version, it still means that I cannot upgrade it later, otherwise it’ll replace everything.
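(On question 2: the provider version Pulumi records comes from the pulumi-kubernetes SDK that pip actually installed, so keeping the pin identical across dependency files is the lever you have. A minimal stdlib sketch, purely illustrative and not a Pulumi feature, that checks the `pulumi-kubernetes` pin agrees across files, e.g. as a CI gate:)

```python
import re
from pathlib import Path


def pinned_version(requirements_text, package):
    """Return the '==' pin for `package` in a requirements-style text, or None."""
    pattern = re.compile(
        rf"^{re.escape(package)}\s*==\s*([\w.]+)",
        re.IGNORECASE | re.MULTILINE,
    )
    match = pattern.search(requirements_text)
    return match.group(1) if match else None


def pins_agree(files, package="pulumi-kubernetes"):
    """True when every file pins `package` to one identical version."""
    versions = {pinned_version(Path(p).read_text(), package) for p in files}
    return len(versions) == 1 and None not in versions
```

Running this in CI (failing the build when `pins_agree` is false) keeps dev machines and the pipeline resolving the same SDK, which in turn keeps the provider version in state stable.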
d
Running `pulumi about` should give details on dependencies. 4.5.0 was yanked (the release was removed); updating to 4.5.1 should fix the plugin.
You've mentioned 2 ways of managing Python dependencies here, requirements.txt and pyproject.toml. Which are you using to run Pulumi?
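(Whichever file is authoritative, the fix is a hard `==` pin on the SDK in one source of truth, so pip resolves identically in CI and on dev machines. An illustrative requirements.txt, using the 4.5.1 version suggested above:)

```text
# requirements.txt — pin the Kubernetes SDK exactly;
# the plugin version Pulumi records follows this.
pulumi>=3.0.0,<4.0.0
pulumi-kubernetes==4.5.1
```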
f
Looks like deleting the whole venv and running `pip install -r requirements.txt` did the trick. But am I doing this right? I feel like I’m locking myself to one plugin version, because if I ever update the plugin version, it’ll replace everything on my cluster. Are there any best practices for doing this?
d
My understanding of the issue is that it's the preview being over-cautious. To play it safe, you can do
`pulumi up --target '<provider urn>'; pulumi refresh`
It's a known issue, something on Pulumi's radar.
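(The workaround above, spelled out against the URN from the preview output earlier; substitute your provider's real URN, which you can list with `pulumi stack --show-urns`:)

```text
# Update only the provider resource so nothing downstream is replaced
pulumi up --target 'urn:pulumi:prod::foundry-r::pulumi:providers:kubernetes::xxx'

# Then reconcile the rest of the stack's state against the cluster
pulumi refresh
```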
f
Thank you very much, I’ll try this. Is there an issue about this on GitHub that I could follow?
d