sparse-intern-71089
08/24/2022, 8:32 PM millions-furniture-75402
08/24/2022, 9:00 PM millions-furniture-75402
08/24/2022, 9:00 PM millions-furniture-75402
08/24/2022, 9:01 PM bored-oyster-3147
08/26/2022, 7:31 PM abundant-vegetable-12924
08/26/2022, 7:33 PM bored-oyster-3147
08/26/2022, 7:38 PM bored-oyster-3147
08/26/2022, 7:39 PM bored-oyster-3147
08/26/2022, 7:39 PM bored-oyster-3147
08/26/2022, 7:41 PM abundant-vegetable-12924
08/26/2022, 7:42 PM abundant-vegetable-12924
08/29/2022, 2:30 PM
config = pulumi.Config()
...unfortunately that doesn't work with an automation api "inline" context...you get the old "pulumi.errors.RunError: Program run without the Pulumi engine available; re-run using the pulumi CLI" error....will continue beating head against it
bored-oyster-3147
08/29/2022, 4:26 PM millions-furniture-75402
08/29/2022, 4:56 PM millions-furniture-75402
08/29/2022, 4:58 PM bored-oyster-3147
08/29/2022, 5:01 PM millions-furniture-75402
08/29/2022, 5:01 PM millions-furniture-75402
08/29/2022, 5:02 PM bored-oyster-3147
08/29/2022, 5:30 PM abundant-vegetable-12924
08/29/2022, 5:41 PM"""An AWS Python Pulumi program"""
import argparse
import os
import pulumi
from pulumi import log
from pulumi import automation as auto
import pulumi_aws as aws
import pulumi_random as random
parser = argparse.ArgumentParser()
parser.add_argument("-e", "--env", choices=["dev", "staging", "production"], required=True)
parser.add_argument("-f", "--function", choices=["bootstrap", "manage"], required=True)
parser.add_argument("-d", "--destroy", action='store_true', default=False)
args = parser.parse_args()
org = 'talentpair'
aws_region = 'us-west-2'
project_name="database"
<http://log.info|log.info>(f"function: {args.function}")
<http://log.info|log.info>(f"environment: {args.env}")
def ensure_plugins():
<http://log.info|log.info>("loading plugins...")
ws = auto.LocalWorkspace()
ws.install_plugin("aws", "v5.5.0")
ws.install_plugin("random", "v4.4.2")
ensure_plugins()
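# NOTE: source_db_id, db_engine_version, db_instance_count, db_instance_class,
# db_monitoring_interval, db_perf_insights_enabled, db_security_groups and
# db_master_username are referenced below but not defined in this paste;
# presumably they come from stack config or were trimmed out.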
def bootstrap():
    rds_enhanced_monitoring_role = aws.iam.Role(
        "rds-enhanced-monitoring",
        assume_role_policy="""{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "sts:AssumeRole",
                    "Principal": {
                        "Service": "monitoring.rds.amazonaws.com"
                    },
                    "Effect": "Allow",
                    "Sid": ""
                }
            ]
        }""",
        managed_policy_arns=[aws.iam.ManagedPolicy.AMAZON_RDS_ENHANCED_MONITORING_ROLE],
    )
    # we create a db snapshot from the original database and
    # will use that to spin up the new instances
    if source_db_id is not None:
        source_db_instance = aws.rds.get_instance(db_instance_identifier=source_db_id)
        origin_snapshot = aws.rds.Snapshot(
            f"{args.env}-source-snapshot",
            db_instance_identifier=source_db_id,
            db_snapshot_identifier=f"{args.env}-source-snapshot",
        )
    postgresql_cluster = aws.rds.Cluster(
        f"postgresql-{args.env}",
        backup_retention_period=30,
        cluster_identifier=f"db-{args.env}",
        database_name="main",  # was "joe" in original RDS
        engine="aurora-postgresql",
        # master_password=aws.secretsmanager.get_secret_version(secret_id=secret.id),
        # master_password=secret_version.secret_string,
        # master_username=db_master_username,
        preferred_backup_window="01:00-04:00",
        preferred_maintenance_window="sat:05:00-sat:07:00",
        iam_database_authentication_enabled=True,
        engine_version=db_engine_version,
        skip_final_snapshot=True,
        snapshot_identifier=origin_snapshot.db_snapshot_arn,
        vpc_security_group_ids=db_security_groups,
    )
    cluster_instances = []
    for x in range(db_instance_count):
        cluster_instances.append(
            aws.rds.ClusterInstance(
                f"{args.env}-db-{x}",
                identifier=f"{args.env}-db-{x}",
                cluster_identifier=postgresql_cluster.id,
                instance_class=db_instance_class,
                engine=postgresql_cluster.engine,
                engine_version=postgresql_cluster.engine_version,
                monitoring_interval=db_monitoring_interval,
                performance_insights_enabled=db_perf_insights_enabled,
                publicly_accessible=False,
                monitoring_role_arn=rds_enhanced_monitoring_role.arn,
            )
        )
    pulumi.export("db_primary_endpoint", postgresql_cluster.endpoint)
    pulumi.export("db_reader_endpoint", postgresql_cluster.reader_endpoint)
    pulumi.export("cluster_id", postgresql_cluster.id)
def manage():
    secret = aws.secretsmanager.Secret(
        f"{stack}-db-master-password"
    )
    secret_version = aws.secretsmanager.SecretVersion(
        f"{stack}-db-master-password-version",
        secret_id=secret.id,
        # secret_string=random_password.result,
        secret_string="s00pers33cr3t",
        opts=pulumi.ResourceOptions(depends_on=[secret]),
    )
    # NOTE: we're using the cluster we created in bootstrap process here...it must exist or we get errors
    postgresql_cluster = aws.rds.Cluster(
        f"postgresql-{stack}",
        cluster_identifier=f"db-{stack}",
        database_name="main",  # was "joe" in original RDS
        engine="aurora-postgresql",
        # master_password=aws.secretsmanager.get_secret_version(secret_id=secret.id),
        master_password=secret_version.secret_string,
        master_username=db_master_username,
        skip_final_snapshot=True,
        opts=pulumi.ResourceOptions(import_=pulumi.StackReference(f"{org}/database/{stack}").get_output("cluster_id")),
    )
"""TODO: think the "manage database" path is working except for the fact that it attempts to use
the cluster created in the "bootstrap" phase and we haven't finished the bootstrap yet.
"""
set_stack_name = ( lambda function: bool(function == "bootstrap") and f"bootstrap-{args.env}" or args.env)
<http://log.info|log.info>(f"set_stack_name: {set_stack_name(args.function)}")
stack = auto.create_or_select_stack(stack_name=set_stack_name(args.function), project_name=project_name, program=args.function, work_dir=".")
if args.destroy:
<http://log.info|log.info>("destroying stack...")
stack.destroy(on_output=print)
<http://log.info|log.info>("stack destroy complete")
exit(0)
#<http://log.info|log.info>(f"<http://stack.info|stack.info>: {<http://stack.info|stack.info>()}")
#<http://log.info|log.info>(f"dir(stack): {dir(stack)}")
# This doesn't work...
# config = pulumi.Config()
# print(f"dir(config): {dir(config)}")
# ...
# get this error --> "pulumi.errors.RunError: Program run without the Pulumi engine available; re-run using the `pulumi` CLI"
stack.refresh(on_output=print)
stack.workspace.refresh_config("bootstrap-dev")
print(f"dir(stack.workspace): {dir(stack.workspace)}")
print(f"os.listdir(): {os.listdir()}")
print(f"<http://stack.info|stack.info>(): {<http://stack.info|stack.info>()}")
print(f"stack.name: {stack.name}")
print(f"stack.get_all_config(): {stack.get_all_config()}")
stack.workspace.refresh_config("bootstrap-dev")
# aws_account_number = config.require("aws_account_number")
# print(f"aws_account_number: {aws_account_number}")
#<http://log.info|log.info>(f"stack.get_all_config(): {stack.get_all_config()}")
#stack.up(on_output=print)
bored-oyster-3147
08/29/2022, 5:45 PM
Pulumi.Config is meant to be used to access config from inside your pulumi program. So in your case it can only be used inside your bootstrap() or manage() function.
3. If you want to set config on the stack using Automation API, there should be stack.SetConfig functions or something along those lines.
4. You are not using an "inline program" function currently because you are not passing either the bootstrap() or manage() function as the program input on auto.create_or_select_stack. That's where you set the pulumi program to be invoked.
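A minimal sketch of what that might look like with the program pasted above (the db_master_password key and its value are illustrative, not something from this thread):

def bootstrap():
    config = pulumi.Config()  # works here: the engine is running while the inline program executes
    master_password = config.require_secret("db_master_password")
    ...

# pass the callable itself, not the "bootstrap"/"manage" string
program = bootstrap if args.function == "bootstrap" else manage
stack = auto.create_or_select_stack(
    stack_name=set_stack_name(args.function),
    project_name=project_name,
    program=program,
)
stack.set_config("db_master_password", auto.ConfigValue(value="s00pers33cr3t", secret=True))
stack.up(on_output=print)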
bored-oyster-3147
08/29/2022, 5:51 PM
args.function property. So maybe that is working for you but I'm not getting it from reading this
abundant-vegetable-12924
08/29/2022, 6:10 PM
args.function evaluates to either manage or bootstrap....i'm probably the worst python practitioner you'll meet this week 😏
bored-oyster-3147
08/29/2022, 6:21 PM
"manage" or "bootstrap" and not seeing how that translates to correctly invoking manage() and bootstrap(), but if that works for python then by all means. In C# you would need to actually pass the delegate.
> re 1: these are the same database instances..."bootstrap" creates them initially from snapshot/backup and then my hope is that i can use stack refs to manipulate the database cluster going forward...i guess i'm just trying to fully automate that bit...super cool that pulumi supports those sorts of things (at least, i think it does)
It may be the same database instance but it is not the same architecture. Stacks should be different instances of the same architecture. For instance if I have a prod cluster and I have a dev cluster, all of the surrounding resources will be the same, but I just want to be able to duplicate it. So these should be different stacks of the same project.
In your scenario, your fundamental resources declared in your pulumi program are different. They should be different projects.
Is your goal that you will then abandon the bootstrap() pulumi and manage() will take care of that cluster from here on out?
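A rough sketch of that separation with the Automation API, if it helps (the project names here are illustrative, not from this thread):

# hypothetical: two projects, each with its own dev/staging/production stacks
bootstrap_stack = auto.create_or_select_stack(
    stack_name=args.env,                  # "dev", "staging", "production"
    project_name="database-bootstrap",    # assumed: one-off restore-from-snapshot project
    program=bootstrap,
)
manage_stack = auto.create_or_select_stack(
    stack_name=args.env,
    project_name="database",              # assumed: long-lived project that owns the cluster going forward
    program=manage,
)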
bored-oyster-3147
08/29/2022, 6:24 PM
ClusterInstances got lost along the way and are no longer managed.
abundant-vegetable-12924
08/29/2022, 6:55 PM
> Is your goal that you will then abandon the bootstrap() pulumi and manage() will take care of that cluster from here on out? (edited)
yes...exactly...i actually got the idea from pulumi support (Mitch G.), but looking back through his message, i can see that i was wrong about my approach....he suggested, essentially,
1. do the creation within a given stack with stack.up()
2. update the password with stack.set_config('new password')
3. do another stack.up()
...sooo, i guess that's why i was struggling...lesson learned, listen to Mitch 🙂
super helpful your walking me through that...i think i can probably run with it from here...thanks so much, again!
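That three-step flow, roughly, in Automation API terms (the config key and passwords here are illustrative):

stack = auto.create_or_select_stack(stack_name=args.env, project_name=project_name, program=manage)

# 1. create everything with the initial password
stack.set_config("db_master_password", auto.ConfigValue(value="initial-password", secret=True))
stack.up(on_output=print)

# 2. rotate the password in stack config
stack.set_config("db_master_password", auto.ConfigValue(value="new-password", secret=True))

# 3. run another update so the cluster picks up the new value
stack.up(on_output=print)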
bored-oyster-3147
08/29/2022, 7:08 PM