Powered by Linen
automation-api
  • worried-helmet-23171
    06/14/2022, 12:28 AM
    Is there a programmatic way to do the pulumi login?
  • worried-helmet-23171
    06/14/2022, 12:29 AM
    Like to refer to the bucket where my state is located and then use the automation api.
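A note for readers: `pulumi login` is essentially just selecting a state backend, and the Automation API can pick that up from environment variables instead. A minimal sketch, assuming a self-managed S3 backend (the bucket name and helper name are illustrative, not from the thread):

```python
from typing import Dict

def login_env(state_url: str, passphrase: str = "") -> Dict[str, str]:
    # Environment variables equivalent to `pulumi login <url>` for a
    # self-managed backend (e.g. an S3/GCS bucket holding your state).
    # Self-managed backends also need a passphrase for the secrets provider.
    return {
        "PULUMI_BACKEND_URL": state_url,
        "PULUMI_CONFIG_PASSPHRASE": passphrase,
    }

# Usage with the Automation API (hypothetical bucket name):
# opts = auto.LocalWorkspaceOptions(env_vars=login_env("s3://my-state-bucket"))
# stack = auto.create_or_select_stack("dev", project_name="example",
#                                     program=pulumi_program, opts=opts)
```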
  • cool-computer-40158
    06/14/2022, 9:23 AM
    Hello everyone, I have a question regarding DigitalOcean app alerts. I have configured my alert, but I cannot find the place where I can configure the delivery method, i.e. where the alert should be delivered when it occurs. In the web view it is possible to specify this. Thanks 🙂
  • cold-orange-37453
    06/15/2022, 12:50 PM
    Just starting out with Pulumi. In the Automation API, the engine events have a field PropertyDiff, where InputDiff is defined as: "InputDiff is true if this is a difference between old and new inputs rather than old state and new inputs." Am I correct to understand that whenever an external change is made to a resource (for example, a tag is changed on an AWS SQS queue), InputDiff for that change will be False, whereas if I change the tag in Pulumi code, InputDiff will be True?
  • plain-pillow-11037
    06/22/2022, 6:38 PM
    Hello! I’m building some integration tests for some resources that are `protected`. For the purposes of testing tear down/resource cleanup I need to unprotect the resources before destroying the stack. What’s the best way to do this?
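One workaround is to edit the exported stack state: clear every resource's `protect` flag, re-import, then destroy. A sketch of the pure state transformation, assuming the usual export format (the helper name is mine; the export/import calls are shown as comments):

```python
def unprotect_all(deployment: dict) -> dict:
    # An exported stack deployment contains a "resources" list; each
    # protected resource carries `"protect": true`. Clearing the flag
    # lets a subsequent destroy proceed.
    for res in deployment.get("resources", []):
        res.pop("protect", None)
    return deployment

# Usage with the Python Automation API:
# state = stack.export_stack()
# state.deployment = unprotect_all(state.deployment)
# stack.import_stack(state)
# stack.destroy(on_output=print)
```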
  • ancient-solstice-53934
    06/23/2022, 12:22 PM
    Is there any way to enable Synapse Link in Azure Cosmos DB containers using Pulumi?
  • lemon-lamp-41193
    07/01/2022, 2:24 AM
    I noticed the EngineEvents emitted by Pulumi are different between a recently created Pulumi stack and an older one. Any idea why “metadata” is a dict in one instance, and type StepEventMetadata in another? Both are on the newest version of Pulumi and pulumi-aws.
    # New Pulumi Project
    EngineEvent(
        ...
        resource_pre_event=ResourcePreEvent(
            metadata=StepEventMetadata(
                op=<OpType.SAME: 'same'>,
                urn='urn:pulumi:dev::...
    
    # Old Pulumi Project
    EngineEvent(
        ...
        resource_pre_event=ResourcePreEvent(
            metadata={
                'op': 'same',
                'urn': 'urn:pulumi:dev::...
  • mammoth-garden-53682
    07/01/2022, 11:38 PM
    has anyone encountered this issue with `auto.GitRepo`: `failed to create stack: failed to create workspace, unable to enlist in git repo: unable to checkout branch: reference not found` using a PAT? Config is straight from the example and looks like this:
    auth := &auto.GitAuth{
    	PersonalAccessToken: cfg.AuthToken,
    }

    repo := auto.GitRepo{
    	Auth:   auth,
    	URL:    cfg.Repo,
    	Branch: cfg.Branch,
    }
    ctx := context.Background()
    s, err := auto.NewStackRemoteSource(ctx, "dev", repo)
    If I don’t specify branch then it works…
  • quiet-laptop-13439
    07/11/2022, 10:20 AM
    is there a way to clean up pending operations from the state using automation api?
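There is no dedicated call for this, but the export/import round-trip works: drop the `pending_operations` entry from the exported deployment and import it back. A sketch (the helper name is mine):

```python
def clear_pending_operations(deployment: dict) -> dict:
    # The exported deployment records interrupted operations under
    # "pending_operations"; removing the key clears them.
    deployment.pop("pending_operations", None)
    return deployment

# Usage with the Python Automation API:
# state = stack.export_stack()
# state.deployment = clear_pending_operations(state.deployment)
# stack.import_stack(state)
```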
  • quiet-laptop-13439
    07/27/2022, 9:51 AM
    what is the reason for automation api being just a wrapper around command line?
  • acoustic-window-73051
    07/27/2022, 3:38 PM
    Hopefully a quick question (Python, AWS): when building one of my stacks I embed the AWS access key/secret in the stack's configuration options (see AWS Classic Setup | Pulumi). When I call the Automation API to destroy it, is it going to use those creds? I would assume so, but I'm suspecting otherwise.
  • most-lighter-95902
    07/28/2022, 3:56 AM
    Getting a large amount of active promises when running Pulumi Automation API:
  • most-lighter-95902
    07/28/2022, 3:56 AM
    The Pulumi runtime detected that 148 promises were still active
    at the time that the process exited. There are a few ways that this can occur:
      * Not using `await` or `.then` on a Promise returned from a Pulumi API
      * Introducing a cyclic dependency between two Pulumi Resources
      * A bug in the Pulumi Runtime
  • most-lighter-95902
    07/28/2022, 3:58 AM
    Does anyone have any idea why this is happening? I’m using this to provision k8s resources via Pulumi.
  • mammoth-garden-53682
    07/28/2022, 7:09 PM
    I have a situation where I am using `optup`/`optdestroy.EventStreams` to stream changes to one or more clients. The events are dispatched as expected, but when my operation completes, the channel I provide is continuously spammed with events that have null values and no sequence. Marshaled event: `INFO[0028] { 'event': {"sequence":0,"timestamp":0,"Error":null} }` Is this…expected? Am I not able to use long-lived channels for event streams? My code is fairly straightforward:
    ...
    ctx := context.Background()
    upResp, err := stack.Up(ctx, optup.EventStreams(ec))
    ...

    // and in another goroutine
    ...
    for {
    	select {
    	case <-sl.Done:
    		return
    	case e := <-sl.ec:
    		me, _ := json.Marshal(e)
    		log.Infof("{ 'event': %s }", me)
    	}
    }
  • gorgeous-accountant-60580
    07/28/2022, 9:02 PM
    Hi! If I run my Pulumi integration tests with Bazel, I run into trouble because the ~/.pulumi/credentials.json file is inaccessible. Is it possible to log in some other way, that doesn’t require accessing the host file system?
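Two environment variables help in sandboxed environments: `PULUMI_ACCESS_TOKEN` authenticates against the Pulumi Service without touching `~/.pulumi/credentials.json`, and `PULUMI_HOME` relocates Pulumi's working files somewhere the sandbox can write. A sketch (the helper name is mine):

```python
import os
import tempfile

def sandbox_env(access_token: str) -> dict:
    # PULUMI_ACCESS_TOKEN: log in without reading ~/.pulumi/credentials.json.
    # PULUMI_HOME: keep credentials/plugins inside a writable temp dir.
    home = tempfile.mkdtemp(prefix="pulumi-home-")
    return {"PULUMI_ACCESS_TOKEN": access_token, "PULUMI_HOME": home}

# Usage with the Automation API:
# opts = auto.LocalWorkspaceOptions(env_vars=sandbox_env(os.environ["MY_TOKEN"]))
```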
  • kind-hamburger-15227
    08/03/2022, 11:25 AM
    Cross post from general, as this channel seems more appropriate: 👋 Hi everyone! Has anyone managed to run Pulumi automation as a function under AWS Lambda? I think this sample https://github.com/pulumi/automation-api-examples/tree/main/python/pulumi_over_http can be easily packaged, except it has the Pulumi CLI in its requirements, which I am not sure how to package into a Lambda image (easily, I mean). If anyone has tried and succeeded or failed, it would be good to discuss.
  • victorious-memory-43562
    08/05/2022, 2:23 AM
    Hey everyone! I have some resources I need to create/destroy on eventbridge events. I wrapped my component resources in the automation API, and I have it all working locally, but I need to host this in AWS now and I’m a bit stuck. I thought maybe I could run it from lambda, but that wasn’t working. Any recommendations for the easiest place to host this where it can be triggered by eventbridge? I’m not a pro at EC2 or containers, but I’m willing to learn. That said, the simpler the better in this case. I appreciate the help!
  • miniature-leather-70472
    08/08/2022, 3:07 PM
    I'm looking to run a dotnet C# Pulumi program in a Docker container using the Automation API. As far as I can tell I need to have the .NET SDK installed, rather than just the runtime, as Pulumi compiles the program at run time. Is this correct? Is there any way to pre-compile the program at container creation time to help reduce the image size?
  • cold-orange-37453
    08/11/2022, 6:29 AM
    How can we set config with a list of values in golang with the Automation API? Currently SetConfig expects a key and value; when providing a key like “pulumi:disable-default-providers[1]”, it does not parse it as a list, but instead as a literal key.
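Recent SDK versions accept a path-aware config flag (Python: `set_config(..., path=True)`; Go: `SetConfigWithOptions` with `ConfigOptions{Path: true}`), which makes `key[0]`-style keys parse as list indices rather than literal names. A sketch of expanding a list into indexed keys (the helper name is mine):

```python
from typing import List, Tuple

def path_keys(key: str, values: List[str]) -> List[Tuple[str, str]]:
    # Expand a list-valued config key into indexed `--path` style keys,
    # e.g. ("pulumi:disable-default-providers[0]", "aws"), ...
    return [(f"{key}[{i}]", v) for i, v in enumerate(values)]

# Usage (Python Automation API; assumes a stack object and a path-aware SDK):
# for k, v in path_keys("pulumi:disable-default-providers", ["aws", "kubernetes"]):
#     stack.set_config(k, auto.ConfigValue(value=v), path=True)
```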
  • cold-orange-37453
    08/11/2022, 7:54 AM
    How can I use non-default providers with the refresh command in the Automation API? Currently, running preview/up/destroy via the Automation API creates a provider map and passes it to various Pulumi functions. When I use the refresh command, though, it seems this code for creating the provider map is never used. What changes do I need to make for the refresh command to use non-default providers, if it is possible at all?
  • salmon-musician-20405
    08/11/2022, 7:33 PM
    Hi, I need help. I am setting up pulumi_auth0 passwordless. I have configured the machine-to-machine API and created an application from it. I want to turn off the username-password connection and set up the passwordless email template automatically. Any sample code link would be helpful; the Pulumi manual and examples are no good.
  • microscopic-postman-4756
    08/12/2022, 8:11 PM
    I wanted to share an example we wrote for automating Pulumi via events: https://github.com/TheNileDev/nile-js/tree/master/packages/events-example the developer recorded a quick video of how it works:

https://www.youtube.com/watch?v=ZbZkzVGzB2k

  • kind-napkin-54965
    08/19/2022, 11:53 AM
    Trying to recreate the inline python gcp example from https://github.com/pulumi/automation-api-examples/blob/main/python/inline_program/main.py . All seems to work well (resources get created in GCP) till the execution of this line: `up_res = stack.up(on_output=print)`. It outputs:
    E0819 14:48:02.366739000 4372022656 fork_posix.cc:76]                  Other threads are currently calling into gRPC, skipping fork() handlers
    Updating (<redacted>/dev)

    View Live: https://app.pulumi.com/<redacted>/gcp-python/dev/updates/6
    
    
        pulumi:pulumi:Stack gcp-python-dev running
        gcp:storage:Bucket my-bucket-new
        pulumi:pulumi:Stack gcp-python-dev
    
    Outputs:
    bucket_name: "gs://my-bucket-new-7aa5ca0"
    
    Resources:
        2 unchanged
    
    Duration: 1s
    
    E0819 14:48:09.979133000 4372022656 fork_posix.cc:76]                  Other threads are currently calling into gRPC, skipping fork() handlers
    E0819 14:48:11.208918000 4372022656 fork_posix.cc:76]                  Other threads are currently calling into gRPC, skipping fork() handlers
    E0819 14:48:12.668303000 4372022656 fork_posix.cc:76]                  Other threads are currently calling into gRPC, skipping fork() handlers
    Any ideas what's going on here? Running on zsh/mac m1.
  • kind-napkin-54965
    08/19/2022, 12:02 PM
    https://github.com/home-assistant/core/issues/73178#issuecomment-1155413167 -> this suggestion helped: `os.environ['GRPC_ENABLE_FORK_SUPPORT'] = "false"`.
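Worth noting: the variable has to be set before gRPC is first imported, otherwise the fork handlers are already registered. A sketch of the ordering:

```python
import os

# Set this before importing pulumi (which pulls in grpc); setting it
# afterwards has no effect on already-registered fork handlers.
os.environ["GRPC_ENABLE_FORK_SUPPORT"] = "false"

# import pulumi
# from pulumi import automation as auto
```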
  • bulky-yacht-13037
    08/23/2022, 2:24 PM
    Hey everyone 👋 I’m looking into using the Automation API to build a PoC for an internal CLI and API for self-service infra provisioning. I was wondering if anyone has implemented, knows about, or has future plans for an implementation of the `auto.Workspace` interface? I’ve had some issues with the existing `auto.LocalWorkspace` one (most probably because of lack of experience with Pulumi) and would really prefer a “native” implementation instead of relying on having the `pulumi` CLI installed on the systems we’ll use it on.
  • silly-smartphone-71988
    08/24/2022, 11:54 AM
    Hi, I'm using `LocalWorkspace.CreateOrSelectStackAsync`, which works great. But is there a way to remove a stack (not destroy it) using pulumi.automation? Right now I have to do it using the CLI: `pulumi stack rm`.
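The workspace object does expose this: in the Python SDK it is `workspace.remove_stack(...)`, and the .NET SDK has an analogous `RemoveStackAsync`. A Python sketch (the wrapper name is mine):

```python
def remove_stack(workspace, stack_name: str) -> None:
    # Equivalent of `pulumi stack rm <name>`: deletes the stack and its
    # configuration from the backend (not the cloud resources themselves).
    workspace.remove_stack(stack_name)

# Usage:
# stack = auto.create_or_select_stack("old-dev", project_name="example",
#                                     program=program)
# stack.destroy()                     # tear down resources first, if desired
# remove_stack(stack.workspace, "old-dev")
```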
  • bulky-yacht-13037
    08/24/2022, 12:02 PM
    What’s the correct way to set a custom secrets provider without having to specify it in each stack’s manifest file? For example, the following (abbreviated) code:
    opts := []auto.LocalWorkspaceOption{
    	...
    	auto.SecretsProvider("awskms://..."),
    }
    
    stack, err := auto.UpsertStackLocalSource(ctx, "dev", "example", opts...)
    if err != nil {
    	log.Fatalf("Failed to load dev stack: %s\n", err)
    }
    
    _, err = stack.Refresh(ctx, optrefresh.ProgressStreams(os.Stdout))
    if err != nil {
    	log.Fatalf("Failed to refresh dev stack: %s\n", err)
    }
    Results in the following error:
    [...] passphrase must be set with PULUMI_CONFIG_PASSPHRASE or PULUMI_CONFIG_PASSPHRASE_FILE environment variables [...]
  • bulky-yacht-13037
    08/24/2022, 2:58 PM
    Related to FQNs not being supported (yet) for some backends like AWS S3: are there any alternatives anyone has been using as a workaround (i.e. different buckets per project, subdirectories inside the same bucket, etc.)? Or any suggestions?
  • abundant-vegetable-12924
    08/24/2022, 8:32 PM
    hello all! i'm curious about "inline" style automation api conventions with respect to managing config data...all the python examples configure `aws:region` and nothing else really, but the thing i'm working on has several config parameters....are people setting up [in python] configparser style .ini or .yaml files (maybe tsconfig.json if you're doing typescript?) and reading that in......or is there some pulumi magic? or something different? 🙂
millions-furniture-75402
08/24/2022, 9:00 PM
afaik, the config module is coupled with the stack configuration YAMLs
The automation api is a wrapper for the Pulumi CLI, and will be generating YAMLs, or otherwise reading them if they exist (following Pulumi stack configuration standards)
https://www.pulumi.com/docs/intro/concepts/project/#stack-settings-file
bored-oyster-3147
08/26/2022, 7:31 PM
Like @millions-furniture-75402 says, you’re just manipulating stack config files under the hood. Automation API just enables you to do that programmatically.
abundant-vegetable-12924
08/26/2022, 7:33 PM
Thanks to both of you...don't suppose anyone can direct me to some sample code of an inline python piece that's loading config from a Pulumi.<stack-name>.yaml file? 🙂
bored-oyster-3147
08/26/2022, 7:38 PM
If you are using automation api in python, it does that for you automatically.
if you want it to use an existing file you just need to choose the directory that contains your project when you setup your automation api workspace.
if you don’t set a directory, it makes a temp directory and manages transient files behind the scenes in order to get your work done.
Inside of your Stack implementation, or your pulumi program, you access config the same way you would in any other pulumi use-case… by using the Pulumi.Config class.
abundant-vegetable-12924
08/26/2022, 7:42 PM
Okay, will give it a go...thanks!
just reporting back for posterity...if i'm reading Josh's suggestion correctly, i should be able to instantiate a config object in python with `config = pulumi.Config()`...unfortunately that doesn't work with an automation api "inline" context...you get the old "pulumi.errors.RunError: Program run without the Pulumi engine available; re-run using the `pulumi` CLI" error....will continue beating head against
bored-oyster-3147
08/29/2022, 4:26 PM
That error means you are not declaring your config within a pulumi program. Can you share more code?
millions-furniture-75402
08/29/2022, 4:56 PM
Can you share more context for your code?
Also, as Josh alluded to, there are more concepts than just Config at play here. There is a Stack the Config is associated with, and that needs to happen within a Workspace.
bored-oyster-3147
08/29/2022, 5:01 PM
Not just within a workspace, to clarify. You can only declare “Pulumi.Config” within a pulumi program. That is, either within a Stack constructor like a classic pulumi program, or within the delegate that you pass to workspace.Up
millions-furniture-75402
08/29/2022, 5:01 PM
This class is written in TypeScript, and was created against an earlier version of the automation API, but maybe it will help clarify the parts at play.
Untitled.txt
BTW, is it just me, or is this page terribly formatted? https://www.pulumi.com/docs/reference/pkg/python/pulumi/#module-pulumi.automation
bored-oyster-3147
08/29/2022, 5:30 PM
That page is terribly formatted for some reason. I made an issue for it: https://github.com/pulumi/docs/issues/7950
abundant-vegetable-12924
08/29/2022, 5:41 PM
really appreciate the continued help...i'll just post my code here, but, high level, i'm trying to migrate from vanilla AWS RDS to Aurora....you can't simply create the Aurora db from a snapshot and change the password in one go so i have the "bootstrap" function to create the initial DB from snapshot, and the "manage" function to do everything after:
"""An AWS Python Pulumi program"""


import argparse
import os
import pulumi
from pulumi import log
from pulumi import automation as auto
import pulumi_aws as aws
import pulumi_random as random

parser = argparse.ArgumentParser()
parser.add_argument("-e", "--env", choices=["dev", "staging", "production"], required=True)
parser.add_argument("-f", "--function", choices=["bootstrap", "manage"], required=True)
parser.add_argument("-d", "--destroy", action='store_true', default=False)
args = parser.parse_args()

org = 'talentpair'
aws_region = 'us-west-2'
project_name="database"

log.info(f"function: {args.function}")
log.info(f"environment: {args.env}")

def ensure_plugins():
    log.info("loading plugins...")
    ws = auto.LocalWorkspace()
    ws.install_plugin("aws", "v5.5.0")
    ws.install_plugin("random", "v4.4.2")

ensure_plugins()

def bootstrap():
    rds_enhanced_monitoring_role = aws.iam.Role(
        "rds-enhanced-monitoring",
        assume_role_policy="""{
        "Version": "2012-10-17",
        "Statement": [
            {
            "Action": "sts:AssumeRole",
            "Principal": {
                "Service": "monitoring.rds.amazonaws.com"
            },
            "Effect": "Allow",
            "Sid": ""
            }
        ]
        }""",
        managed_policy_arns=[aws.iam.ManagedPolicy.AMAZON_RDS_ENHANCED_MONITORING_ROLE,],
    )

    # we create a db snapshot from the original database and
    # will use that to spin up the new instances
    if source_db_id is not None:
        source_db_instance = aws.rds.get_instance(db_instance_identifier=source_db_id)
        origin_snapshot = aws.rds.Snapshot(f"{args.env}-source-snapshot", 
                                    db_instance_identifier=source_db_id, 
                                    db_snapshot_identifier=f"{args.env}-source-snapshot"
        )

    postgresql_cluster = aws.rds.Cluster(
        f"postgresql-{args.env}",
        backup_retention_period=30,
        cluster_identifier=f"db-{args.env}",
        database_name="main",  # was "joe" in original RDS
        engine="aurora-postgresql",
        #master_password=aws.secretsmanager.get_secret_version(secret_id=secret.id),
        #master_password=secret_version.secret_string,
        #master_username=db_master_username,
        preferred_backup_window="01:00-04:00",
        preferred_maintenance_window="sat:05:00-sat:07:00",
        iam_database_authentication_enabled=True,
        engine_version=db_engine_version,
        skip_final_snapshot=True,
        snapshot_identifier=origin_snapshot.db_snapshot_arn,
        vpc_security_group_ids=db_security_groups,
    )

    cluster_instances = []
    for x in range(db_instance_count):
        cluster_instances.append(
            aws.rds.ClusterInstance(
                f"{args.env}-db-{x}",
                identifier=f"{args.env}-db-{x}",
                cluster_identifier=postgresql_cluster.id,
                instance_class=db_instance_class,
                engine=postgresql_cluster.engine,
                engine_version=postgresql_cluster.engine_version,
                monitoring_interval=db_monitoring_interval,
                performance_insights_enabled=db_perf_insights_enabled,
                publicly_accessible=False,
                monitoring_role_arn=rds_enhanced_monitoring_role.arn,
            )
        )

    pulumi.export("db_primary_endpoint", postgresql_cluster.endpoint)
    pulumi.export("db_reader_endpoint", postgresql_cluster.reader_endpoint)
    pulumi.export("cluster_id", postgresql_cluster.id)

def manage():
    secret = aws.secretsmanager.Secret(
        f"{stack}-db-master-password"
    )

    secret_version = aws.secretsmanager.SecretVersion(
        f"{stack}-db-master-password-version",
        secret_id=secret.id,
        #secret_string=random_password.result,
        secret_string="s00pers33cr3t",
        opts=pulumi.ResourceOptions(depends_on=[secret]),
    )

    # NOTE: we're using the cluster we created in bootstrap process here...it must exist or we get errors
    postgresql_cluster = aws.rds.Cluster(
        f"postgresql-{stack}",
        cluster_identifier=f"db-{stack}",
        database_name="main",  # was "joe" in original RDS
        engine="aurora-postgresql",
        #master_password=aws.secretsmanager.get_secret_version(secret_id=secret.id),
        master_password=secret_version.secret_string,
        master_username=db_master_username,
        skip_final_snapshot=True,
        opts=pulumi.ResourceOptions(import_=pulumi.StackReference(f"{org}/database/{stack}").get_output("cluster_id"))
    )


"""TODO: think the "manage database" path is working except for the fact that it attempts to use
the cluster created in the "bootstrap" phase and we haven't finished the bootstrap yet.
"""

set_stack_name = ( lambda function: bool(function == "bootstrap") and f"bootstrap-{args.env}" or args.env)

log.info(f"set_stack_name: {set_stack_name(args.function)}")

stack = auto.create_or_select_stack(stack_name=set_stack_name(args.function), project_name=project_name, program=args.function, work_dir=".")

if args.destroy:
    log.info("destroying stack...")
    stack.destroy(on_output=print)
    log.info("stack destroy complete")
    exit(0)

#log.info(f"stack.info: {stack.info()}")
#log.info(f"dir(stack): {dir(stack)}")

# This doesn't work...
# config = pulumi.Config()
# print(f"dir(config): {dir(config)}")
# ...
# get this error --> "pulumi.errors.RunError: Program run without the Pulumi engine available; re-run using the `pulumi` CLI"

stack.refresh(on_output=print)
stack.workspace.refresh_config("bootstrap-dev")
print(f"dir(stack.workspace): {dir(stack.workspace)}")
print(f"os.listdir(): {os.listdir()}")
print(f"stack.info(): {stack.info()}")
print(f"stack.name: {stack.name}")
print(f"stack.get_all_config(): {stack.get_all_config()}")
stack.workspace.refresh_config("bootstrap-dev")


# aws_account_number = config.require("aws_account_number")
# print(f"aws_account_number: {aws_account_number}")

#log.info(f"stack.get_all_config(): {stack.get_all_config()}")

#stack.up(on_output=print)
bored-oyster-3147
08/29/2022, 5:45 PM
ok a couple things. 1. Your "bootstrap" and "manage" concepts should be separate pulumi projects. Remember that a "stack" is an instance of a set of architecture. If 2 sets of architecture are completely different, they should be different projects. 2. `Pulumi.Config` is meant to be used to access config from inside your pulumi program. So in your case it can only be used inside your `bootstrap()` or `manage()` function. 3. If you want to set config on the stack using Automation API, there should be `stack.SetConfig` functions or something along those lines. 4. You are not using an "inline program" function currently because you are not passing either the `bootstrap()` or `manage()` function as the `program` input on `auto.create_or_select_stack`. That's where you set the pulumi program to be invoked
Or I guess for 4 I may be having trouble understanding what is on your `args.function` property. So maybe that is working for you but I'm not getting it from reading this
abundant-vegetable-12924
08/29/2022, 6:10 PM
re 1: these are the same database instances..."bootstrap" creates them initially from snapshot/backup and then my hope is that i can use stack refs to manipulate the database cluster going forward...i guess i'm just trying to fully automate that bit...super cool that pulumi supports those sorts of things (at least, i think it does) yeah, in 4, `args.function` evaluates to either `manage` or `bootstrap`....i'm probably the worst python practitioner you'll meet this week 😏
bored-oyster-3147
08/29/2022, 6:21 PM
Yea coming from C# I'm looking at 4 being a string `"manage"` or `"bootstrap"` and not seeing how that translates to correctly invoking `manage()` and `bootstrap()`, but if that works for python then by all means. In C# you would need to actually pass the delegate.

> re 1: these are the same database instances..."bootstrap" creates them initially from snapshot/backup and then my hope is that i can use stack refs to manipulate the database cluster going forward...i guess i'm just trying to fully automate that bit...super cool that pulumi supports those sorts of things (at least, i think it does)

It may be the same database instance but it is not the same architecture. Stacks should be different instances of the same architecture. For instance, if I have a `prod` cluster and I have a `dev` cluster, all of the surrounding resources will be the same, but I just want to be able to duplicate it. So these should be different stacks of the same project. In your scenario, the fundamental resources declared in your pulumi program are different. They should be different projects. Is your goal that you will then abandon the `bootstrap()` pulumi program and `manage()` will take care of that cluster from here on out? The problem with that would be that your `ClusterInstances` got lost along the way and are no longer managed.
abundant-vegetable-12924
08/29/2022, 6:55 PM
> Is your goal that you will then abandon the `bootstrap()` pulumi program and `manage()` will take care of that cluster from here on out? (edited)

yes...exactly...i actually got the idea from pulumi support (Mitch G.), but looking back through his message, i can see that i was wrong about my approach....he suggested, essentially: 1. do the creation within a given stack with `stack.up()` 2. update the password with `stack.set_config('new password')` 3. do another `stack.up()`...sooo, i guess that's why i was struggling...lesson learned, listen to Mitch 🙂 super helpful your walking me through that...i think i can probably run with it from here...thanks so much, again!
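For posterity, the suggested up / set-config / up flow sketches out like this (a stub-friendly wrapper; the function and config key names are mine, and the secret value would come from `auto.ConfigValue(value=..., secret=True)`):

```python
def rotate_master_password(stack, secret_value, on_output=print):
    # 1) first `up` creates the cluster from the snapshot,
    # 2) the new master password is written into the stack config,
    # 3) a second `up` applies the changed password to the cluster.
    stack.up(on_output=on_output)
    stack.set_config("db_master_password", secret_value)
    return stack.up(on_output=on_output)

# Usage (Python Automation API):
# rotate_master_password(stack, auto.ConfigValue(value="n3w-pass", secret=True))
```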
bored-oyster-3147
08/29/2022, 7:08 PM
yes his suggestion would make more sense and would work better I think. good luck!