# typescript
More input: I noticed that similar behavior shows up when I try to add a Database or Grant to the PostgreSQL instance in my local Kubernetes cluster. I wonder if these resources (Database, Table, Grant) work at all. I cannot find any tests in the GitHub repository.
Shouldn't the provider connect to `localhost:5432`?
Thank you @big-architect-71258 for paying attention to this. I tried this option as well. I can connect to localhost:5432 with the IntelliJ IDEA database tool, but the Pulumi provider cannot.
```
postgresql:index:Role (agentApplicationUser):
    error:   sdk-v2/provider2.go:385: sdk.helper_schema: Error connecting to PostgreSQL server localhost (scheme: postgres): read tcp [::1]:58361->[::1]:5432: read: connection reset by peer: provider=postgresql@3.11.3
    error: 1 error occurred:
        * Error connecting to PostgreSQL server localhost (scheme: postgres): read tcp [::1]:58361->[::1]:5432: read: connection reset by peer
```
@brash-kilobyte-32523 `connection reset by peer` means: the client could connect to the socket, but the server closed it. So I wonder what's really going wrong here. I'd try to get more logging data:
`TF_LOG=TRACE pulumi up --logtostderr --logflow -v=10 2> out.txt`
https://www.pulumi.com/docs/support/troubleshooting/
Thanks, @big-architect-71258. It really looks strange. I tried all options: the internal Docker hostname, the internal container IP, and localhost. In all cases it complains about the connection. I wonder if I'm doing something wrong.
Does a specific version of Terraform need to be installed locally for Pulumi to work correctly?
@brash-kilobyte-32523 there is no requirement that Terraform or any of its providers be installed for Pulumi to run successfully.
Looking through the log files, I saw that you're doing a `pulumi preview`. Try `pulumi up --skip-preview` instead.
I used your code on my local test system and got another error message, besides the ones you had.
Ahh, I see... a DNS issue, because I blindly copied your code. Hold on.
When I connect using `psql` it works without any issues.
I can also connect from another Postgres container to the target one, so the Postgres instance is visible within the Docker network.
I think there's something wrong with how the provider connects to the database and applies the required role assignment.
Probably. I found the code example in the issue; Pulumi AI was not so helpful.
As commented in the issue, this should be fixed. But who can be really certain 😄
Thanks for your time. It's really strange that such a simple scenario doesn't work out of the box. There are tons of methods to bootstrap a Postgres DB, and the problem I'm trying to solve is that there are 3 or 4 different ways to bootstrap it in my project: one in integration tests, another in e2e tests, one in Docker, and one in the Kubernetes cluster. So I was looking for a tool that can be the golden source for orchestration, independent of the underlying stack: Docker, cloud, Kubernetes. Pulumi's approach with providers as an abstraction layer should handle this challenge.
The ticket I sent before is not related to the problem that I have.
So, when running the PostgreSQL instance via Docker Compose and only trying to configure the additional user with its role, I discovered that `sslmode` must be set to `disable`. When you do that, a correct error message about the missing `schema` property is shown. Then I added `schema: "public"` and everything went smoothly.
So this is definitely about how the Docker container gets deployed and how Postgres is configured.
Because everything quickly gets out of sync and the stack gets totally broken when the container is reconfigured, I'd abstain from having both the Docker container and the DB configuration in ONE Pulumi program, and instead use two different stacks, where the DB stack references the outputs of the container stack.
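A minimal sketch of that two-stack split, assuming illustrative stack and output names (`org/postgres-container/dev` and `containerIp` are not from this thread):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as postgresql from "@pulumi/postgresql";

// DB stack: reference the stack that manages the Docker container.
// The stack name "org/postgres-container/dev" is an assumption.
const containerStack = new pulumi.StackReference("org/postgres-container/dev");

// Consume the container stack's exported connection details
// instead of hard-coding the host in this program.
const host = containerStack.getOutput("containerIp") as pulumi.Output<string>;

const pgProvider = new postgresql.Provider("pgProvider", {
    host: host,
    port: 5432,
    username: "postgres",
    password: "postgres",
    sslmode: "disable",
});
```

With this layout, recreating the container only changes the container stack's outputs; the DB stack picks them up on its next `pulumi up` rather than silently drifting.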
Okay, looking at the code and the test with Docker Compose, I'd say: the database isn't initialized yet when the Postgres provider tries to access it. It's simply too quick!
That DID the trick
Race conditions, how we all love them 😄
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as docker from "@pulumi/docker";
import * as postgresql from "@pulumi/postgresql";
import * as time from "@pulumiverse/time";

const network = new docker.Network("identus-stack", {
    ipamConfigs: [{
        subnet: "172.18.0.0/16",
    }],
    driver: "bridge", // You can choose other drivers like "overlay", "host", etc.
    attachable: true,
    checkDuplicate: true,
});

// Create a Docker container running PostgreSQL
const postgresContainer = new docker.Container("postgresContainer", {
    image: "postgres:16",
    ports: [{
        internal: 5432,
        external: 5432,
    }],
    envs: [
        "POSTGRES_DB=agent",
        "POSTGRES_USER=postgres",
        "POSTGRES_PASSWORD=postgres",
    ],
    hostname: "postgres-agent",
    publishAllPorts: true,
    rm: true, // Remove the container when stopped
    healthcheck: {
        tests: ["CMD", "pg_isready", "-U", "postgres"],
        interval: "30s",
        timeout: "10s",
        retries: 5,
    },
    command: [
        "postgres",
        "-c", "log_statement=all",
        "-c", "log_destination=stderr",
        "-c", "log_connections=true",
        "-c", "log_error_verbosity=VERBOSE",
    ],
    networksAdvanced: [{
        name: network.name,
        ipv4Address: "172.18.0.2",
    }],
}, { dependsOn: network });

const containerIp = pulumi.output(postgresContainer.networksAdvanced).apply(networks => {
    const networkInfo = networks && networks[0];
    return networkInfo ? networkInfo.ipv4Address : undefined;
});

containerIp.apply(ip => {
    console.log(`Postgres Container IP: ${ip}`);
});

export const containerId = postgresContainer.id;
export const containerName = postgresContainer.name;

// Give the database time to initialize before the PostgreSQL provider connects.
const wait = new time.Sleep("wait-container", {
    createDuration: "10s",
}, { dependsOn: [postgresContainer] });

const pgProvider = new postgresql.Provider("pgProvider", {
    host: "127.0.0.1",
    port: 5432,
    username: "postgres",
    password: "postgres",
    sslmode: "disable",
}, { dependsOn: [wait], parent: postgresContainer });

const agentDbApplicationUser = new postgresql.Role("agentApplicationUser", {
    name: "agent-application-user",
    password: "postgres",
    login: true,
}, { provider: pgProvider, parent: pgProvider });

const agentDbApplicationUserPrivileges = new postgresql.Grant("agentApplicationUserPrivileges", {
    role: agentDbApplicationUser.name,
    database: "agent",
    objectType: "table",
    schema: "public",
    privileges: ["SELECT", "INSERT", "UPDATE", "DELETE"],
}, { provider: pgProvider, parent: pgProvider });
```
`pulumi destroy` works flawlessly as well.
This renders my advice to split the program into two separate stacks a bit obsolete. But if you had done it that way in the first place, the error would never have occurred.
The only thing that still puzzles me is the drift on the image reference: in that case the container gets recreated, the DB configuration is lost, and eventually the stack is broken!
Because you don't have a volume configured, the container must stay the same under all circumstances; otherwise the DB configuration gets lost, which blows up the stack. So to remediate the two drifts I used a `docker.RemoteImage` instance to get the image ID and added `networkMode: "bridge"`. With that, the deployment seems to be stable.
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as docker from "@pulumi/docker";
import * as postgresql from "@pulumi/postgresql";
import * as time from "@pulumiverse/time";

const network = new docker.Network("identus-stack", {
    ipamConfigs: [{
        subnet: "172.18.0.0/16",
    }],
    driver: "bridge", // You can choose other drivers like "overlay", "host", etc.
    attachable: true,
    checkDuplicate: true,
});

// Resolve the image ID up front so the container's image reference doesn't drift.
const postgresImage = new docker.RemoteImage("image", {
    name: "postgres:16",
});

// Create a Docker container running PostgreSQL
const postgresContainer = new docker.Container("postgresContainer", {
    image: postgresImage.imageId,
    ports: [{
        internal: 5432,
        external: 5432,
    }],
    envs: [
        "POSTGRES_DB=agent",
        "POSTGRES_USER=postgres",
        "POSTGRES_PASSWORD=postgres",
    ],
    hostname: "postgres-agent",
    publishAllPorts: true,
    rm: true, // Remove the container when stopped
    healthcheck: {
        tests: ["CMD", "pg_isready", "-U", "postgres"],
        interval: "30s",
        timeout: "10s",
        retries: 5,
    },
    command: [
        "postgres",
        "-c", "log_statement=all",
        "-c", "log_destination=stderr",
        "-c", "log_connections=true",
        "-c", "log_error_verbosity=VERBOSE",
    ],
    networksAdvanced: [{
        name: network.name,
        ipv4Address: "172.18.0.2",
    }],
    networkMode: "bridge",
}, { dependsOn: network });

const containerIp = pulumi.output(postgresContainer.networksAdvanced).apply(networks => {
    const networkInfo = networks && networks[0];
    return networkInfo ? networkInfo.ipv4Address : undefined;
});

containerIp.apply(ip => {
    console.log(`Postgres Container IP: ${ip}`);
});

export const containerId = postgresContainer.id;
export const containerName = postgresContainer.name;

// Give the database time to initialize before the PostgreSQL provider connects.
const wait = new time.Sleep("wait-container", {
    createDuration: "10s",
}, {
    parent: postgresContainer,
    dependsOn: [postgresContainer],
});

const pgProvider = new postgresql.Provider("pgProvider", {
    host: "127.0.0.1",
    port: 5432,
    username: "postgres",
    password: "postgres",
    sslmode: "disable",
}, { dependsOn: [wait], parent: postgresContainer });

const agentDbApplicationUser = new postgresql.Role("agentApplicationUser", {
    name: "agent-application-user",
    password: "postgres",
    login: true,
}, { provider: pgProvider, parent: pgProvider });

const agentDbApplicationUserPrivileges = new postgresql.Grant("agentApplicationUserPrivileges", {
    role: agentDbApplicationUser.name,
    database: "agent",
    objectType: "table",
    schema: "public",
    privileges: ["SELECT", "INSERT", "UPDATE", "DELETE"],
}, { provider: pgProvider, parent: pgProvider });
```
Small hint: you can add `keepLocally: true` to the `RemoteImage` when you don't want the image deleted locally on destruction.
One important note: to ensure that the `time.Sleep` resource gets recreated, and thus actually waits again on creation, use the `triggers` parameter and add the `container.id` to it. https://www.pulumi.com/registry/packages/time/api-docs/sleep/#inputs
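That note can be sketched like this (a hedged example; `triggers` takes a map of string values, and changing any of them forces the Sleep to be replaced):

```typescript
import * as docker from "@pulumi/docker";
import * as time from "@pulumiverse/time";

// Assumes `postgresContainer` is the docker.Container defined earlier.
declare const postgresContainer: docker.Container;

const wait = new time.Sleep("wait-container", {
    createDuration: "10s",
    // Recreate the Sleep (and thus wait again) whenever the container is replaced.
    triggers: {
        containerId: postgresContainer.id,
    },
}, { dependsOn: [postgresContainer] });
```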
WOW, it really works. Thanks again for your time!
You're welcome
I wonder, would it be possible to add this kind of simple getting-started example (docker-postgresql-ts) to some repository?
Pulumi AI is not helpful with these questions.
@big-architect-71258, could you point me to any docs or examples for defining a modular/layered Pulumi project structure?
I have in mind reusing the bootstrap logic across different stacks (docker/k8s/aws).
Well, considering how IaC tools work (not only Pulumi but Terraform as well), I wouldn't call your setup simple: even though it involves only a small number of resources, you have to consider when actions (deploy, update, destroy) take place. And sometimes it requires some sleeping. That said, sleeping is always a bad sign, because it means you lack a signal that tells you when you can proceed with the remaining tasks. And waiting a static amount of time might be too little; e.g. when the container startup is slow for whatever reason, you might still run into the known error because the DB isn't initialized, even though you waited.
I have in mind the idea to reuse the bootstrap logic for different stacks (docker/k8s/aws)
Create a `ComponentResource` (https://www.pulumi.com/docs/concepts/resources/components/) and distribute the code as a Node.js (TypeScript) package, so that it's easily reusable.
Another option would be to create a Dynamic Provider, which exposes such "modules" as resources. https://www.pulumi.com/docs/concepts/resources/dynamic-providers/
Component Resources are easier to create, but Dynamic Providers are more flexible.
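A minimal ComponentResource sketch of the idea; the component name `PostgresStack`, the type token, and the argument shape are all illustrative, not from this thread:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as docker from "@pulumi/docker";

// Illustrative component bundling the Postgres bootstrap logic
// so it can be reused across projects as an npm package.
export class PostgresStack extends pulumi.ComponentResource {
    public readonly containerName: pulumi.Output<string>;

    constructor(
        name: string,
        args: { password: pulumi.Input<string> },
        opts?: pulumi.ComponentResourceOptions,
    ) {
        super("myorg:index:PostgresStack", name, {}, opts);

        const container = new docker.Container(`${name}-pg`, {
            image: "postgres:16",
            envs: [pulumi.interpolate`POSTGRES_PASSWORD=${args.password}`],
            ports: [{ internal: 5432, external: 5432 }],
        }, { parent: this }); // parent makes it show up nested under the component

        this.containerName = container.name;
        this.registerOutputs({ containerName: this.containerName });
    }
}
```

A docker-, k8s-, or aws-flavored variant of the same component could then share one public interface while swapping the underlying resources.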
Reflecting on what I said about sleeping: the `pg_isready` CLI could be used to test whether the PostgreSQL server is ready. https://www.postgresql.org/docs/current/app-pg-isready.html You could use the Pulumi Command provider to run `pg_isready`. https://www.pulumi.com/registry/packages/command/api-docs/local/command/#command-local-command
This would be a much better way to wait for the PostgreSQL server to become available, especially if you plan to provide the current code as a module, where you never know in which environment (slow or fast) it will be used.
I think `pg_isready` is available in the container, so you don't need to install the tool locally; you could use the Pulumi Command provider and run a command inside the DB container:
```shell
docker exec $DOCKER_CONTAINER_NAME pg_isready
```
The `docker` CLI must be installed anyway because of the Docker Provider.
Thanks a lot for the valuable input. Now I can proceed with other experiments.