# automation-api
a
hello all! i'm curious about "inline" style automation api convention with respect to managing config data...all the python examples configure `aws:region` and nothing else really, but the thing i'm working on has several config parameters....are people setting up [in python] configparser style .ini or .yaml files (maybe tsconfig.json if you're doing typescript?) and reading that in......or is there some pulumi magics? or something different? 🙂
m
afaik, the config module is coupled with the stack configuration YAMLs
The automation api is a wrapper around the Pulumi CLI and will generate those YAMLs, or read them if they already exist (following Pulumi stack configuration standards)
ā˜ļø 1
b
Like @millions-furniture-75402 says, you're just manipulating stack config files under the hood. Automation API just enables you to do that programmatically.
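rough sketch of what i mean (untested here, the project name and db keys are made up) -- the `set_config` calls end up in the stack's Pulumi.<stack-name>.yaml, in whatever work dir the workspace is using (a temp dir if you don't give it one):
```python
from pulumi import automation as auto

# a do-nothing inline program is enough to show the config side of things
def noop():
    pass

stack = auto.create_or_select_stack(
    stack_name="dev",
    project_name="demo",  # placeholder project name
    program=noop,
)

# these calls shell out to the CLI and get written to the stack's Pulumi.dev.yaml
stack.set_config("aws:region", auto.ConfigValue(value="us-west-2"))
stack.set_config("db_engine_version", auto.ConfigValue(value="13.6"))                  # made-up key/value
stack.set_config("db_master_password", auto.ConfigValue(value="s3cr3t", secret=True))  # stored encrypted

print(stack.get_all_config())
```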
a
Thanks to both of you...don't suppose anyone can direct me to some sample code of an inline python piece that's loading config from a Pulumi.<stack-name>.yaml file? 🙂
b
If you are using automation api in python, it does that for you automatically.
If you want it to use an existing file, you just need to choose the directory that contains your project when you set up your automation api workspace.
If you don't set a directory, it makes a temp directory and manages transient files behind the scenes in order to get your work done.
Inside of your Stack implementation, or your pulumi program, you access config the same way you would in any other pulumi use case… by using the `Pulumi.Config` class.
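e.g. something like this (sketch, not tested; the work_dir and config key are placeholders, and project_name should match the Pulumi.yaml in that directory):
```python
import pulumi
from pulumi import automation as auto

# the inline pulumi program: config is read the normal way in here
def program():
    config = pulumi.Config()
    db_instance_class = config.require("db_instance_class")  # hypothetical key from Pulumi.dev.yaml
    pulumi.export("db_instance_class", db_instance_class)

stack = auto.create_or_select_stack(
    stack_name="dev",
    project_name="database",  # should match the name in Pulumi.yaml
    program=program,
    # point the workspace at the directory holding Pulumi.yaml / Pulumi.dev.yaml
    opts=auto.LocalWorkspaceOptions(work_dir="."),
)

stack.up(on_output=print)
```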
a
Okay, will give it a go...thanks!
just reporting back for posterity...if i'm reading Josh's suggestion correctly, i should be able to instantiate a config object in python with `config = pulumi.Config()` ...unfortunately that doesn't work with an automation api "inline" context...you get the old "pulumi.errors.RunError: Program run without the Pulumi engine available; re-run using the `pulumi` CLI" error....will continue beating head against
b
That error means you are not declaring your config within a pulumi program. Can you share more code?
m
Can you share more context for your code?
Also, as Josh alluded to, there are more concepts than just Config at play here. There is a Stack the Config is associated with, and that needs to happen within a Workspace
b
Not just within a workspace, to clarify. You can only declare `Pulumi.Config` within a pulumi program. That is, either within a Stack constructor like a classic pulumi program, or within the delegate that you pass to workspace.Up
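roughly (untested sketch; the project name and config key are placeholders):
```python
import pulumi
from pulumi import automation as auto

# OUT here no engine is running, so this raises
# "pulumi.errors.RunError: Program run without the Pulumi engine available..."
# config = pulumi.Config()

def pulumi_program():
    # IN here the engine is running (stack.up() invokes this), so this works
    config = pulumi.Config()
    some_value = config.get("someKey")  # hypothetical key
    pulumi.export("someKey", some_value)

stack = auto.create_or_select_stack(
    stack_name="dev",
    project_name="demo",     # placeholder
    program=pulumi_program,  # the delegate the engine invokes
)
stack.up(on_output=print)
```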
šŸ‘ 1
m
This class is written in TypeScript, and was created against an earlier version of the automation API, but maybe it will help clarify the parts at play.
BTW, is it just me, or is this page terribly formatted? https://www.pulumi.com/docs/reference/pkg/python/pulumi/#module-pulumi.automation
b
That page is terribly formatted for some reason. I made an issue for it: https://github.com/pulumi/docs/issues/7950
a
really appreciate the continued help...i'll just post my code here, but, high level, i'm trying to migrate from vanilla AWS RDS to Aurora....you can't simply create the Aurora db from a snapshot and change the password in one go, so i have the "bootstrap" function to create the initial DB from snapshot, and the "manage" function to do everything after:
```python
"""An AWS Python Pulumi program"""


import argparse
import os
import pulumi
from pulumi import log
from pulumi import automation as auto
import pulumi_aws as aws
import pulumi_random as random

parser = argparse.ArgumentParser()
parser.add_argument("-e", "--env", choices=["dev", "staging", "production"], required=True)
parser.add_argument("-f", "--function", choices=["bootstrap", "manage"], required=True)
parser.add_argument("-d", "--destroy", action='store_true', default=False)
args = parser.parse_args()

org = 'talentpair'
aws_region = 'us-west-2'
project_name="database"

<http://log.info|log.info>(f"function: {args.function}")
<http://log.info|log.info>(f"environment: {args.env}")

def ensure_plugins():
    <http://log.info|log.info>("loading plugins...")
    ws = auto.LocalWorkspace()
    ws.install_plugin("aws", "v5.5.0")
    ws.install_plugin("random", "v4.4.2")

ensure_plugins()

def bootstrap():
    rds_enhanced_monitoring_role = aws.iam.Role(
        "rds-enhanced-monitoring",
        assume_role_policy="""{
        "Version": "2012-10-17",
        "Statement": [
            {
            "Action": "sts:AssumeRole",
            "Principal": {
                "Service": "<http://monitoring.rds.amazonaws.com|monitoring.rds.amazonaws.com>"
            },
            "Effect": "Allow",
            "Sid": ""
            }
        ]
        }""",
        managed_policy_arns=[aws.iam.ManagedPolicy.AMAZON_RDS_ENHANCED_MONITORING_ROLE,],
    )

    # we create a db snapshot from the original database and
    # will use that to spin up the new instances
    if source_db_id is not None:
        source_db_instance = aws.rds.get_instance(db_instance_identifier=source_db_id)
        origin_snapshot = aws.rds.Snapshot(f"{args.env}-source-snapshot", 
                                    db_instance_identifier=source_db_id, 
                                    db_snapshot_identifier=f"{args.env}-source-snapshot"
        )

    postgresql_cluster = aws.rds.Cluster(
        f"postgresql-{args.env}",
        backup_retention_period=30,
        cluster_identifier=f"db-{args.env}",
        database_name="main",  # was "joe" in original RDS
        engine="aurora-postgresql",
        #master_password=aws.secretsmanager.get_secret_version(secret_id=secret.id),
        #master_password=secret_version.secret_string,
        #master_username=db_master_username,
        preferred_backup_window="01:00-04:00",
        preferred_maintenance_window="sat:05:00-sat:07:00",
        iam_database_authentication_enabled=True,
        engine_version=db_engine_version,
        skip_final_snapshot=True,
        snapshot_identifier=origin_snapshot.db_snapshot_arn,
        vpc_security_group_ids=db_security_groups,
    )

    cluster_instances = []
    for x in range(db_instance_count):
        cluster_instances.append(
            aws.rds.ClusterInstance(
                f"{args.env}-db-{x}",
                identifier=f"{args.env}-db-{x}",
                cluster_identifier=postgresql_cluster.id,
                instance_class=db_instance_class,
                engine=postgresql_cluster.engine,
                engine_version=postgresql_cluster.engine_version,
                monitoring_interval=db_monitoring_interval,
                performance_insights_enabled=db_perf_insights_enabled,
                publicly_accessible=False,
                monitoring_role_arn=rds_enhanced_monitoring_role.arn,
            )
        )

    pulumi.export("db_primary_endpoint", postgresql_cluster.endpoint)
    pulumi.export("db_reader_endpoint", postgresql_cluster.reader_endpoint)
    pulumi.export("cluster_id", postgresql_cluster.id)

def manage():
    secret = aws.secretsmanager.Secret(
        f"{stack}-db-master-password"
    )

    secret_version = aws.secretsmanager.SecretVersion(
        f"{stack}-db-master-password-version",
        secret_id=secret.id,
        #secret_string=random_password.result,
        secret_string="s00pers33cr3t",
        opts=pulumi.ResourceOptions(depends_on=[secret]),
    )

    # NOTE: we're using the cluster we created in bootstrap process here...it must exist or we get errors
    postgresql_cluster = aws.rds.Cluster(
        f"postgresql-{stack}",
        cluster_identifier=f"db-{stack}",
        database_name="main",  # was "joe" in original RDS
        engine="aurora-postgresql",
        #master_password=aws.secretsmanager.get_secret_version(secret_id=secret.id),
        master_password=secret_version.secret_string,
        master_username=db_master_username,
        skip_final_snapshot=True,
        opts=pulumi.ResourceOptions(import_=pulumi.StackReference(f"{org}/database/{stack}").get_output("cluster_id"))
    )


"""TODO: think the "manage database" path is working except for the fact that it attempts to use
the cluster created in the "bootstrap" phase and we haven't finished the bootstrap yet.
"""

set_stack_name = ( lambda function: bool(function == "bootstrap") and f"bootstrap-{args.env}" or args.env)

<http://log.info|log.info>(f"set_stack_name: {set_stack_name(args.function)}")

stack = auto.create_or_select_stack(stack_name=set_stack_name(args.function), project_name=project_name, program=args.function, work_dir=".")

if args.destroy:
    <http://log.info|log.info>("destroying stack...")
    stack.destroy(on_output=print)
    <http://log.info|log.info>("stack destroy complete")
    exit(0)

#<http://log.info|log.info>(f"<http://stack.info|stack.info>: {<http://stack.info|stack.info>()}")
#<http://log.info|log.info>(f"dir(stack): {dir(stack)}")

# This doesn't work...
# config = pulumi.Config()
# print(f"dir(config): {dir(config)}")
# ...
# get this error --> "pulumi.errors.RunError: Program run without the Pulumi engine available; re-run using the `pulumi` CLI"

stack.refresh(on_output=print)
stack.workspace.refresh_config("bootstrap-dev")
print(f"dir(stack.workspace): {dir(stack.workspace)}")
print(f"os.listdir(): {os.listdir()}")
print(f"<http://stack.info|stack.info>(): {<http://stack.info|stack.info>()}")
print(f"stack.name: {stack.name}")
print(f"stack.get_all_config(): {stack.get_all_config()}")
stack.workspace.refresh_config("bootstrap-dev")


# aws_account_number = config.require("aws_account_number")
# print(f"aws_account_number: {aws_account_number}")

#<http://log.info|log.info>(f"stack.get_all_config(): {stack.get_all_config()}")

#stack.up(on_output=print)
```
b
ok a couple things.
1. Your "bootstrap" and "manage" concepts should be separate pulumi projects. Remember that a "stack" is an instance of a set of architecture. If 2 sets of architecture are completely different, they should be different projects.
2. `Pulumi.Config` is meant to be used to access config from inside your pulumi program. So in your case it can only be used inside your `bootstrap()` or `manage()` function.
3. If you want to set config on the stack using Automation API, there should be `stack.SetConfig` functions or something along those lines.
4. You are not using an "inline program" function currently because you are not passing either the `bootstrap()` or `manage()` function as the `program` input on `auto.create_or_select_stack`. That's where you set the pulumi program to be invoked.

Or I guess for 4 I may be having trouble understanding what is on your `args.function` property. So maybe that is working for you but I'm not getting it from reading this.
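for 3 and 4, concretely, i mean something along these lines in your script (rough sketch, not tested; keeping your names, and the config key/value is just an example):
```python
# pass the function object itself as the inline program, not the string "bootstrap"/"manage"
program_fn = bootstrap if args.function == "bootstrap" else manage

stack = auto.create_or_select_stack(
    stack_name=set_stack_name(args.function),
    project_name=project_name,
    program=program_fn,
)

# set stack config from the automation side; it lands in the stack's Pulumi.<stack>.yaml
# and is what pulumi.Config() would read inside bootstrap()/manage()
stack.set_config("db_instance_class", auto.ConfigValue(value="db.r6g.large"))  # example key/value

stack.up(on_output=print)
```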
a
re 1: these are the same database instances..."bootstrap" creates them initially from snapshot/backup and then my hope is that i can use stack refs to manipulate the database cluster going forward...i guess i'm just trying to fully automate that bit...super cool that pulumi supports those sorts of things (at least, i think it does) yeah, in 4, `args.function` evaluates to either `manage` or `bootstrap`....i'm probably the worst python practitioner you'll meet this week 😁
b
Yea coming from C# I'm looking at 4 being a string `"manage"` or `"bootstrap"` and not seeing how that translates to correctly invoking `manage()` and `bootstrap()` but if that works for python then by all means. In C# you would need to actually pass the delegate.
> re 1: these are the same database instances..."bootstrap" creates them initially from snapshot/backup and then my hope is that i can use stack refs to manipulate the database cluster going forward...i guess i'm just trying to fully automate that bit...super cool that pulumi supports those sorts of things (at least, i think it does)

It may be the same database instance but it is not the same architecture. Stacks should be different instances of the same architecture. For instance, if I have a `prod` cluster and a `dev` cluster, all of the surrounding resources will be the same, but I just want to be able to duplicate it. So these should be different stacks of the same project. In your scenario, the fundamental resources declared in your pulumi program are different. They should be different projects. Is your goal that you will then abandon the `bootstrap()` pulumi and `manage()` will take care of that cluster from here on out?
The problem with that would be that your `ClusterInstances` got lost along the way and are no longer managed.
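i.e. stacks are for something like this (sketch): one project, one program, several environments that differ only by config:
```python
import pulumi
from pulumi import automation as auto

def database_program():
    config = pulumi.Config()
    instance_class = config.require("db_instance_class")  # value differs per stack
    # ...declare the cluster + instances here using instance_class...

# dev / staging / production are stacks (instances) of the *same* project
for env in ["dev", "staging", "production"]:
    stack = auto.create_or_select_stack(
        stack_name=env,
        project_name="database",
        program=database_program,
    )
```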
a
> Is your goal that you will then abandon the `bootstrap()` pulumi and `manage()` will take care of that cluster from here on out? (edited)

yes...exactly...i actually got the idea from pulumi support (Mitch G.), but looking back through his message, i can see that i was wrong about my approach....he suggested, essentially: 1. do the creation within a given stack with `stack.up()` 2. update the password with `stack.set_config('new password')` 3. do another `stack.up()` ...sooo, i guess that's why i was struggling...lesson learned, listen to Mitch 🙂 super helpful your walking me through that...i think i can probably run with it from here...thanks so much, again!
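for posterity, that flow ends up looking roughly like this (untested sketch; the program body and the `master_password` config key are simplified/made up):
```python
import pulumi
from pulumi import automation as auto

def database_program():
    config = pulumi.Config()
    # unset on the first run (restore from snapshot), set before the second run
    master_password = config.get_secret("master_password")
    # ...declare the aurora cluster here; only pass master_password once it's set...

stack = auto.create_or_select_stack(
    stack_name="dev",
    project_name="database",
    program=database_program,
)

# 1. first up: restore the cluster from the snapshot
stack.up(on_output=print)

# 2. update the master password via stack config (stored as a secret)
stack.set_config("master_password", auto.ConfigValue(value="new-password", secret=True))

# 3. second up: the same program now applies the new password
stack.up(on_output=print)
```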
b
yes his suggestion would make more sense and would work better I think. good luck!