# general
c
It's definitely this function:
function toContainerURL(containerName: string): pulumi.Output<string> {
    return registryUrl.apply(host => {
        return redisUri.apply(uri => {
            const client = redis.createClient(uri);
            const redisKey = "main_" + branch + "_" + containerName + "_digest"
            const digestPath = gitlabC.require("ci_project_path") + "src/pulumi/digests/" + containerName;
            const ret = new Promise<string>(function (resolve, reject) {
                try {
                    if (fs.existsSync(digestPath)) {
                        const digest = fs.readFileSync(digestPath).toString().trim()
                        client.set(redisKey, digest)
                        pulumi.log.info("Successfully found container hash for " + containerName)
                        resolve(host + "/" + toContainerPath(containerName) + "@" + digest)
                    } else {
                        client.get(redisKey, function (_, reply: string | null) {
                            if (!reply) {
                                reject("No such key")
                            } else {
                                pulumi.log.info("Successfully found container hash for " + containerName)
                                resolve(host + "/" + toContainerPath(containerName) + "@" + reply)
                            }
                        })
                    }
                } catch (ex) {
                    reject("Error connecting to redis")
                }
            })
            return ret
        })
    })
}
Those logs execute every time...
l
You have a promise inside an apply inside an apply...?
Maybe we can find a way to simplify that.
Returning an output containing an output containing a Promise might be an issue.
Can you use Output.all() to combine the two apply()s into one?
c
Sure, I'll do that.
function toContainerURL(containerName: string): pulumi.Output<string> {
    return pulumi.all([registryUrl, redisUri]).apply(([host, uri]) => {
        const client = redis.createClient(uri);
        const redisKey = "main_" + branch + "_" + containerName + "_digest"
        const digestPath = gitlabC.require("ci_project_path") + "src/pulumi/digests/" + containerName;
        return new Promise<string>(function (resolve, reject) {
            try {
                if (fs.existsSync(digestPath)) {
                    const digest = fs.readFileSync(digestPath).toString().trim()
                    client.set(redisKey, digest)
                    pulumi.log.info("Successfully found container hash for " + containerName + " in filesystem")
                    resolve(host + "/" + toContainerPath(containerName) + "@" + digest)
                } else {
                    client.get(redisKey, function (err, reply: string | null) {
                        if (err) {
                            reject(err)
                        } else if (!reply) {
                            reject("No such key")
                        } else {
                            pulumi.log.info("Successfully found container hash for " + containerName + " in redis")
                            resolve(host + "/" + toContainerPath(containerName) + "@" + reply)
                        }
                    })
                }
            } catch (ex) {
                reject("Error connecting to redis: " + ex)
            }
        })
    })
}
Like this?
l
That unwraps one level automatically, which can only help. I can't see a way to get rid of the Promise; it's important to the logic.
c
Yeah, I mean I'd retrieve the redis key synchronously if the javascript zealots would let me, but I can't
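(Editor's note: one way to avoid hand-rolling the Promise is Node's util.promisify, which wraps a callback-style API such as node_redis v3's client.get; the apply() callback can then return the resulting Promise directly. A sketch, where getDigest is a hypothetical stand-in for the redis client, not the thread's actual code.)

```typescript
import { promisify } from "util";

// Hypothetical stand-in for node_redis v3's callback-style client.get;
// the real values live in redis.
function getDigest(key: string,
                   cb: (err: Error | null, reply: string | null) => void): void {
    cb(null, key === "main_dev_examiner_digest" ? "sha256:abc" : null);
}

// promisify turns (key, cb) into (key) => Promise<string | null>.
const getDigestAsync = promisify(getDigest);

// An apply() callback could return this Promise directly; Pulumi awaits it.
async function lookup(key: string): Promise<string> {
    const reply = await getDigestAsync(key);
    if (!reply) throw new Error("No such key");
    return reply;
}
```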
l
Is it reasonably constant? You could put in the Pulumi config as a secret?
c
No. The value is the latest digest of a container, which changes every time one of the services changes. The file in the above code is an artifact produced by gitlab. I have to save it because gitlab will remove the artifact after the next build if the service isn't built again as a job.
This is the only way I've figured out how to do this unfortunately; gitlab literally won't let you persist artifacts "from the last successful triggered job"
So I have to manually store these sha256 digest hashes in redis
It's still hanging 😕
l
And is the Promise ever resolving / rejecting?
c
Those logs that I put before resolution, i.e.
Successfully found container hash for examiner in redis
, are called every time.
I can also do a .finally() and the log message I put in there gets called. But it hangs anyway.
Is this some kind of bug with pulumi outputs?
l
Maybe... but there's a lot of moving parts. What about the place that's calling the function? Is it handling the output appropriately?
c
function makeExaminer() {
    const basename = "examiner";
    const appLabels = {app: basename};
    const port = 80
    new k8s.apps.v1.Deployment(basename, {
        metadata: {
            labels: appLabels,
            annotations: {
                "pulumi.com/skipAwait": "true",
            }
        },
        spec: {
            replicas: 1,
            selector: {matchLabels: appLabels},
            template: {
                metadata: {
                    labels: appLabels,
                },
                spec: {
                    containers: [
                        {
                            name: basename,
                            image: toContainerURL(basename),

..
toContainerURL is being fed right into the spec. If I change toContainerURL and feed it "nginx" as an image instead, it hangs on preview.
The skipAwait was part of earlier debugging, don't mind that
This is a bug, right? I'm not somehow not doing something I'm supposed to with the output to that function? This is the only place it's called, I've narrowed my code down to this point.
I need to find some way to get those values out of redis...
function toContainerURL(containerName: string): pulumi.Output<string> {
    return pulumi.all([registryUrl, redisUri]).apply(([host, uri]) => {
        const client = redis.createClient(uri);
        const redisKey = "main_" + branch + "_" + containerName + "_digest"
        const digestPath = gitlabC.require("ci_project_path") + "src/pulumi/digests/" + containerName;
        if (fs.existsSync(digestPath)) {
            const digest = fs.readFileSync(digestPath).toString().trim()
            client.set(redisKey, digest)
            pulumi.log.info("Successfully found container hash for " + containerName + " in filesystem")
            return host + "/" + toContainerPath(containerName) + "@" + digest
        } else {
            let ret = ""
            client.get(redisKey, function (err, reply: string | null) {
                if (err) {
                    throw err
                } else if (!reply) {
                    throw "No such key"
                } else {
                    pulumi.log.info("Successfully found container hash for " + containerName + " in redis")
                    ret = host + "/" + toContainerPath(containerName) + "@" + reply
                }
            })
            while(ret == "") {}
            return ret
        }
    })
}
Unironically just doing this
l
Sorry got called afk, catching up now...
Did you say that if you put a constant value instead of toContainerUrl it also hangs?
c
No, I said/meant the opposite
If I replace toContainerUrl with "nginx", it works fine
l
Ah, I read " If I change toContainerURL and feed it "nginx" as an image instead, it hangs on preview." as "it's still doing the wrong thing"..
c
yeah mb
l
Ok cool.
I think that while-forever loop would need a yield or sleep to work...
c
Well, it wouldn't need a sleep...
Oh wait
l
When would the CPU be available for doing the other work?
Also, I don't know if async code works that way...
c
Forgot that js doesn't green thread or whatever
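(Editor's note: a minimal, self-contained sketch of why the while (ret == "") {} loop can never terminate. Node runs JavaScript on a single thread, so while a synchronous loop is spinning, the event loop cannot run the queued callback that would set ret. The loop here is bounded only so the demo terminates.)

```typescript
// Node is single-threaded: queued callbacks only run after the current
// synchronous code finishes, so a busy-wait starves them forever.
function busyWaitStarves(): boolean {
    let ret = "";
    setImmediate(() => { ret = "done"; });   // queued, but can never run yet
    const deadline = Date.now() + 200;       // bounded so this demo terminates
    while (ret === "" && Date.now() < deadline) { /* spin */ }
    return ret === "";                       // true: the callback was starved
}
```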
l
Anyway, this is a sidetrack 🙂
If you replace the contents of the Promise with a simple
resolve("nginx")
does it do the same as putting in a constant "nginx"?
c
I can check...
It updates
I haven't updated it from x to y, but doing a resolve("nginx") from within the pulumi.all at least breaks the hang
l
Fascinating. So somewhere between
pulumi.log.info
and
resolve(somestrings)
, it hangs...
c
Or the provider is hanging
err not the provider
Something after resolve()
l
Well if it's anything in the k8s provider, I won't be able to help. I haven't used it.
I can't see a definition for the format of the image property.. maybe URLs are a problem? Or you need to add some level of auth for whichever registry you're using?
Ah it's not quite a URL.. no protocol or :
c
This is a preview though, so it's not trying to pull the container at the url. I've also already independently solved auth. This happened when I started to use the hash digests, not when I started to use containers from that registry
Btw; I put a log after the resolve(), and it successfully ran. Not sure what that means because of javascript's whack threads, but it happened
l
Does that mean that the file at digestPath doesn't exist?
Resolve calls a callback, so after the callback is finished, the code after resolve should run as normal 🙂
What happens inside the callback is a mystery to me. No idea if it's synchronous or not. Maybe it is in browsers but not in node? Not sure.
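(Editor's note: a self-contained sketch answering the synchronous-or-not question. Promise .then() callbacks are always asynchronous microtasks, in Node and in browsers alike: the code after resolve() finishes first, and the .then() callback runs only after the current synchronous code completes.)

```typescript
// Order check: resolve() just settles the promise; code after it still runs
// synchronously, and the .then() callback is deferred to a microtask.
function resolutionOrder(): Promise<string[]> {
    const order: string[] = [];
    const p = new Promise<void>(resolve => {
        resolve();
        order.push("after resolve()");   // still synchronous
    });
    return p.then(() => {
        order.push("then callback");     // microtask, runs later
        return order;
    });
}
```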
c
No, it got the hash from redis and called resolve() with it. I'm just saying that it's not hanging in between pulumi.log.info(ret) and resolve(ret)
l
I think I'm out of my depth here.. 😞 So far, you've determined that with the code you have,
resolve("nginx")
works but
resolve(host + "/" + toContainerPath(containerName) + "@" + digest)
doesn't, even though
host + "/" + toContainerPath(containerName) + "@" + digest
is a valid string. So.. I'm stumped.
c
If it helps any, the string comes out to:
registry.infra.leonardcyber.com/leonard-cyber/main/dean/examiner@sha256:d0d4fbfcaf2cfacce3dde187a3e5780b87af8fbbcbb6cabd3b457f3e3f316f69
l
That might be an issue... the rules from https://kubernetes.io/docs/concepts/containers/images/ about tag names are:
Image tags consist of lowercase and uppercase letters, digits, underscores (_), periods (.), and dashes (-).
There are additional rules about where you can place the separator characters (_, -, and .) inside an image tag.
So, colon not allowed.
Maybe you need to de-sha it and put the unencoded string in?
Or just the sha, without "sha256:"?
Can you
docker pull
from the command line to see which tag works? Or just look in your registry and check out the existing tags?
c
Colons are allowed. It's saying image tags have those regex restrictions. Otherwise you wouldn't be able to specify tags
l
Yes, but you have a colon in your tag...
Colon is allowed in the registry name, to separate it from the port...
But (according to that page) not in the tags.
c
That's not the tag, the tag would be "latest" in nginx:latest "examiner" is still part of the name
I'm trying to pin a hash instead of use a tag but I suspect that I'm doing it incorrectly
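(Editor's note: for background on the format being debated, in an OCI-style image reference everything after @ is a content digest, not a tag; the tag character rules quoted above apply only to the :tag form. A hypothetical parser, registry names invented, illustrating the distinction:)

```typescript
// OCI-style image references are either name[:tag] or name@sha256:<hex>.
// The "@sha256:..." part is a digest, so tag character rules don't apply to it.
function parseImageRef(ref: string): { name: string; tag?: string; digest?: string } {
    const at = ref.indexOf("@");
    if (at !== -1) {
        // Digest reference: pin to exact content, immutable.
        return { name: ref.slice(0, at), digest: ref.slice(at + 1) };
    }
    // A ':' before the last '/' belongs to the registry port, not a tag.
    const lastColon = ref.lastIndexOf(":");
    if (lastColon > ref.lastIndexOf("/")) {
        return { name: ref.slice(0, lastColon), tag: ref.slice(lastColon + 1) };
    }
    return { name: ref };
}
```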
l
The tag is everything after the @
Which in what you've shown me, is "sha256:d0d4fbfcaf2cfacce3dde187a3e5780b87af8fbbcbb6cabd3b457f3e3f316f69"
Which includes a colon.
Do you have a console where you've logged into that registry? Try
docker pull registry.infra.leonardcyber.com/leonard-cyber/main/dean/examiner@sha256:d0d4fbfcaf2cfacce3dde187a3e5780b87af8fbbcbb6cabd3b457f3e3f316f69
c
It works
l
Ah, then the page needs an update 🙂
Ok, I'm back to being stumped.
c
As far as I can tell kubernetes supports this format of specification. It's gotta be something in pulumi where specifying images by hash makes it break
Although I'm not sure why on earth it would break in that way. It doesn't even talk to kubernetes during a preview, does it?
l
No idea.. who on Slack is involved in that provider ¯\_(ツ)_/¯
c
In fact, removing everything past the @ doesn't change anything. The program still hangs if I leave the toContainerUrl() call in.
function toContainerURL(containerName: string): pulumi.Output<string> {
    return pulumi.all([registryUrl, redisUri]).apply(([host, uri]) => {
        const client = redis.createClient(uri);
        return new Promise<string>(function (resolve, reject) {
            resolve("nginx:latest")
        })
    })
}
^ fail
function toContainerURL(containerName: string): pulumi.Output<string> {
    return pulumi.all([registryUrl, redisUri]).apply(([host, uri]) => {
        //const client = redis.createClient(uri);
        return new Promise<string>(function (resolve, reject) {
            resolve("nginx:latest")
        })
    })
}
^ succeed
l
Ooo. Does it work on
up
? Maybe wrap that line in
isDryRun()
?
c
@little-cartoon-10569 If I do --skip-preview, it seems like it works until it hangs at the end, just like for previews.
The weird thing is that I can log after the line
And I can get values from the redis instance as well
If I make sure to redis.quit, it will exit. Apparently that's necessary
client.quit*
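(Editor's note: a minimal, self-contained sketch of the likely mechanism behind the fix, assuming the hang was the open redis connection keeping Node's event loop alive. Any live handle, here an interval standing in for the redis client's socket, prevents the process from exiting even after every Promise has resolved; closing it, like calling client.quit(), lets Node exit.)

```typescript
// A live handle keeps Node's event loop alive even after every Promise
// resolves. The interval stands in for the connection held by createClient().
function resolveWithOpenHandle(): Promise<string> {
    const handle = setInterval(() => { /* keep-alive, like an open socket */ }, 1000);
    return new Promise<string>(resolve => {
        resolve("done");
        // Without this line the process never exits after resolution,
        // the same symptom as the pulumi hang before client.quit() was added.
        clearInterval(handle); // stands in for client.quit()
    });
}
```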
l
Ah. So does that make it all work as expected? Fully resolved?
c
I don't think it's necessary anywhere except inside pulumi. The docs don't mention this at all. My issue is fixed though
l
There has to be a reason to create the method, and apparently you've just found it 🙂