is there a way to tell pulumi in special cases to ...
# general
a
is there a way to tell pulumi in special cases to NOT delete resources? background: I'm uploading binaries to (Azure) blob storage, their name containing a timestamp. When I upload a new binary, I really don't want the old binary to be deleted, I just want to add a new one.
g
You can set ResourceOptions(protect=True)
it won't let you change the resource unless you explicitly unprotect it
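For reference, a minimal TypeScript sketch of that option (`ResourceOptions(protect=True)` is the Python spelling; in TypeScript the resource option is `protect: true`). The resource group, account and container names below are placeholders:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as storage from "@pulumi/azure-native/storage";

// A protected blob: `pulumi destroy`, or removing it from the program,
// fails with an error until `protect` is set back to false.
const artifact = new storage.Blob("myapp-archive", {
    resourceGroupName: "my-rg",          // placeholder
    accountName: "mystorageaccount",     // placeholder
    containerName: "deployments",        // placeholder
    source: new pulumi.asset.FileAsset("./myapp.zip"),
}, { protect: true });
```

Note that `protect` only makes the delete fail with an error; it does not keep the old blob while adding a new one.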
ah sorry, I misunderstood the question. Pulumi manages the resource lifecycle (CRUD); any item created as a resource is expected to be managed. When you say "old" and "new", do those files have the same name? If so, you are doing an update, so Pulumi deletes the old file. If you expect something else, I'd rather just use the SDK or CLI to upload the file, and manage the lifecycle of the storage account/container only.
a
example: in one run, myapp.202108051030.zip is uploaded; in the next run, myapp.202108051200.zip is uploaded, but myapp.202108051030.zip is also deleted. The latter part is what I would like to prevent, so that after the second run both myapp.202108051030.zip and myapp.202108051200.zip exist
is this possible from within pulumi?
p
I guess you’d have to create an additional resource instead of modifying the existing one because that’s what you actually want to do.
If you provide a broader picture of what you want to achieve, and maybe a code snippet, we could advise further.
a
here you go:
```typescript
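// Imports assumed from the surrounding program (not shown in the original snippet):
//   import * as pulumi from "@pulumi/pulumi";
//   import * as path from "path";
//   import * as azure from "@pulumi/azure-native";
//   import * as web from "@pulumi/azure-native/web";
//   import * as storage from "@pulumi/azure-native/storage";
// `cfg`, `resourceGroupName` and `deploymentStorage` are defined elsewhere in the stack program.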
function createService(name: string, archiveName: string, settings: azure.types.input.web.NameValuePairArgs[][]){
    const plan = new web.AppServicePlan(`${name}plan`, {
        resourceGroupName,
        kind: "Linux",
        sku: {
            size: "S1",
            tier: "Standard",
            name: "S1"
        }
    });
    const filename = `${archiveName}.zip`;
    const blobname= `${archiveName}.${formatForPath(new Date())}.zip`
    const fullPath= path.join(cfg.require("PackagePath"), filename);
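    // The blob's Pulumi resource name embeds a timestamp, so every run declares a
    // brand-new Blob resource and the previous one disappears from the program
    // (which is why Pulumi wants to delete it).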
    const deploymentBlob= new storage.Blob(blobname, {
        resourceGroupName: resourceGroupName,
        accountName: deploymentStorage[0],
        containerName: deploymentStorage[1],
        source: new pulumi.asset.FileAsset(fullPath)
    })
    const deploymentUrl= pulumi.all(
        [deploymentStorage[0], deploymentStorage[1], resourceGroupName, deploymentBlob.name, deploymentBlob.id]).apply(
            ([accountName, containerName, rgName, blobName]) => getSASToken(accountName, containerName, rgName, blobName));

    const appSettings= settings.reduce((current, v)=> current.concat(v), [
        {
            name: "WEBSITE_RUN_FROM_PACKAGE",
            value: deploymentUrl
        },
        {
            name: "ASPNETCORE_ENVIRONMENT",
            value: getEnvironmentName()
        }
    ]);
    return new web.WebApp(name, {
        resourceGroupName,
        enabled: true,
        serverFarmId: plan.id,
        siteConfig: {
            appSettings
        }
    });

    function getEnvironmentName() : string {
        switch (pulumi.getStack().toLowerCase()){
            case "dev":
                return "Development";
            case "staging":
                return "Staging";
            case "production":
                return "Production";
        }
        return "";
    }
    function formatForPath(when: Date){
        const year= when.getFullYear();
        const month= 1+when.getMonth();
        const day= when.getDate();
        const hour= when.getHours();
        const minute= when.getMinutes();
        const second= when.getSeconds();
        return `${year}${pad(month)}${pad(day)}${pad(hour)}${pad(minute)}${pad(second)}`;
    }
    
    function getSASToken(storageAccountName: string, storageContainerName: string, resourceGroupName: string, blobName: string): pulumi.Output<string> {
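        // Build a read-only (Permissions.R) service SAS for the uploaded blob, valid for
        // one year, so the Web App can pull the package via WEBSITE_RUN_FROM_PACKAGE.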
        const blobSAS = storage.listStorageAccountServiceSAS({
            accountName: storageAccountName,
            protocols: storage.HttpProtocol.Https,
            sharedAccessStartTime: format(new Date()),
            sharedAccessExpiryTime: format(nextYear()),
            resource: storage.SignedResource.C,
            resourceGroupName: resourceGroupName,
            permissions: storage.Permissions.R,
            canonicalizedResource: "/blob/" + storageAccountName + "/" + storageContainerName,
            contentType: "application/json",
            cacheControl: "max-age=5",
            contentDisposition: "inline",
            contentEncoding: "deflate",
        });
        return pulumi.interpolate `https://${storageAccountName}.blob.core.windows.net/${storageContainerName}/${blobName}?${blobSAS.then(x => x.serviceSasToken)}`;

        function format(when: Date){
            const year= when.getFullYear();
            const month= 1+when.getMonth();
            const day= when.getDate();
            return `${year}-${pad(month)}-${pad(day)}`;
        }
        function nextYear():Date {
            const result= new Date();
            result.setFullYear(result.getFullYear()+1);
            return result;
        }
    }
    function pad(n: number){
        return n.toString().padStart(2, '0');
    }

}
```
this looks bad here, sorry
anyway, if you look at it in a better editor, you will see the part where I generate the blobname by including the formatted date
the goal would be that if I run this repeatedly, the container will contain multiple versions. Currently, it always deletes the previous one and creates the new one.
to be clear: I invoke createService for each of my services. But I want the deployed versions (the archives I upload to blob storage) to be kept, so that in the future I can easily see all versions that were deployed and, if need be, also access them manually.
p
So I guess we know what’s wrong based on your last message. You expect Pulumi to work in an imperative way (“if I run this, I want to upload a new file”), but it’s designed to be declarative, and that’s expected. When you have one resource declared in code, Pulumi will make sure that only one resource exists in the cloud as well (assuming everything is managed by it).
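To make that concrete, here is a minimal sketch (all names are placeholders) of what the program above effectively declares; because the blob's name changes on every run, each run declares a different single resource:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as storage from "@pulumi/azure-native/storage";

// Run 1 declares storage.Blob("myapp.202108051030.zip", ...);
// Run 2 declares storage.Blob("myapp.202108051200.zip", ...).
// In run 2 the first blob is no longer declared anywhere, so Pulumi's plan is
// "create the new blob, delete the old one".
const blobName = `myapp.${Date.now()}.zip`;   // changes on every run
const deployment = new storage.Blob(blobName, {
    resourceGroupName: "my-rg",          // placeholder
    accountName: "mystorageaccount",     // placeholder
    containerName: "deployments",        // placeholder
    source: new pulumi.asset.FileAsset("./myapp.zip"),
});
```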
a
I know
in general, this is the behavior one wants
but in this specific instance
I need to be able to keep the old versions of the "blob resource"
how would I go about that?
p
I’d need to take a look at the code to really understand the use case here, but my first thought was… to not use Pulumi/Terraform/any IaC solution for this.
a
so, I'd take this bit out of pulumi and do it in another tool? fair enough
p
If you want to upload an object to the blob storage and you don’t care about its lifetime (it’s gonna be there forever), you should simply upload it and forget about it.
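A hedged sketch of that approach, using the @azure/storage-blob client directly so the archive is never tracked as a Pulumi resource (the container name and connection-string environment variable are assumptions):

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

// Upload the versioned archive outside of Pulumi's resource model: the file is
// written once and never tracked, so later runs cannot delete it.
async function uploadArchive(connectionString: string, archivePath: string, blobName: string): Promise<string> {
    const service = BlobServiceClient.fromConnectionString(connectionString);
    const container = service.getContainerClient("deployments");   // assumed container name
    await container.createIfNotExists();
    const blob = container.getBlockBlobClient(blobName);
    await blob.uploadFile(archivePath);                            // Node.js-only helper
    return blob.url;
}

// e.g. uploadArchive(process.env.AZURE_STORAGE_CONNECTION_STRING!, "./myapp.zip",
//                    `myapp.${Date.now()}.zip`);
```

The storage account and container can still be managed by Pulumi; only the per-version blob upload happens out of band, which matches the earlier suggestion to "manage lifecycle of the storage account/container only".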