hallowed-animal-47023
02/10/2022, 12:58 PM
This is the rules object value:
{'ingress_rules': [<pulumi_aws.ec2._inputs.SecurityGroupIngressArgs object at 0x11093eeb0>], 'egress_rules': [<pulumi_aws.ec2._inputs.SecurityGroupEgressArgs object at 0x1109662e0>]}
This is the ingress rules value:
[<pulumi_aws.ec2._inputs.SecurityGroupIngressArgs object at 0x11093eeb0>]
The ingress rules type is:
<class 'list'>
This is the egress rules value:
[<pulumi_aws.ec2._inputs.SecurityGroupEgressArgs object at 0x1109662e0>]
The egress rules type is:
<class 'list'>
sparse-state-34229
02/12/2022, 1:26 AM
sparse-state-34229
02/12/2022, 1:27 AM
for rule in ingress_rules:
    rules_objects['ingress_rules'].append(
        ec2.SecurityGroupRule(
            # ...
hallowed-animal-47023
02/12/2022, 12:42 PM
gorgeous-minister-41131
02/14/2022, 11:10 PM
TypeError: 'Output' object is not iterable, consider iterating the underlying value inside an 'apply'
sparse-gold-10561
02/15/2022, 5:22 PM
dry-answer-66872
02/17/2022, 9:00 AM
few-pillow-1133
02/17/2022, 9:04 AM
eks-cluster-sg-*
created by the cluster?
Also, by default if vpc_security_group_ids
isn't specified in the launch_template, the `eks-cluster-sg-*` group gets used for each worker node.
Is there a way to specify both the cluster-generated security group and a custom security group for each node during creation?
hallowed-furniture-81921
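For the EKS question above: it does look possible to pass both, since the cluster resource exposes the generated group's id and a launch template accepts a list. A rough, untested sketch; `cluster` and `custom_node_sg` are hypothetical names for existing resources:

```python
# node_lt = aws.ec2.LaunchTemplate(
#     "node-lt",
#     vpc_security_group_ids=[
#         cluster.vpc_config.cluster_security_group_id,
#         custom_node_sg.id,
#     ],
#     # ...
# )
```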
02/19/2022, 12:46 AM
❯ ls -la
drwxr-xr-x 9 wsheldon staff 288 Jan 20 16:26 .
drwxr-xr-x 10 wsheldon staff 320 Jan 20 16:30 ..
-rw-r--r-- 1 wsheldon staff 20709 Feb 18 15:46 __main__.py
drwxr-xr-x 3 wsheldon staff 96 Feb 18 15:46 __pycache__
drwxr-xr-x 4 wsheldon staff 128 Dec 29 08:56 docker
-rw-r--r-- 1 wsheldon staff 978 Jan 20 16:05 ecr.py
-rw-r--r-- 1 wsheldon staff 9516 Jan 20 16:05 iam.py
-rw-r--r-- 1 wsheldon staff 4368 Jan 20 15:51 s3.py
-rw-r--r-- 1 wsheldon staff 9944 Jan 20 16:05 vpc.py
How would one access s3_bucket.id in the s3.py file, from the __main__.py file?
fierce-market-67222
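For the question above: module-level names in s3.py are reachable with an ordinary Python import, nothing Pulumi-specific is needed. A rough sketch, assuming s3.py defines a bucket named s3_bucket:

```python
# s3.py
# s3_bucket = aws.s3.Bucket("my-bucket")
#
# __main__.py
# from s3 import s3_bucket
# pulumi.export("bucket_id", s3_bucket.id)
```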
02/21/2022, 1:21 PM
def set_deployment_env_var(obj, opts):
    if obj["kind"] == "Deployment":
        # index into the containers list ("containers[0]" as a dict key never
        # matches), then append the env entry to the first container
        obj["spec"]["template"]["spec"]["containers"][0].setdefault("env", []).append(
            {
                "name": "RELEASE_DATE",
                "value": ".....",
            }
        )
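A self-contained sanity check of that transformation logic on a plain dict (no cluster needed); the sample manifest and the RELEASE_DATE value here are hypothetical, assuming the intent is to append an env var to the first container:

```python
def set_deployment_env_var(obj, opts=None):
    if obj["kind"] == "Deployment":
        # index the containers list, then append the env entry
        containers = obj["spec"]["template"]["spec"]["containers"]
        containers[0].setdefault("env", []).append(
            {"name": "RELEASE_DATE", "value": "2022-02-21"}
        )


deployment = {
    "kind": "Deployment",
    "spec": {"template": {"spec": {"containers": [{"name": "app", "image": "nginx"}]}}},
}
set_deployment_env_var(deployment)
print(deployment["spec"]["template"]["spec"]["containers"][0]["env"])
```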
great-sunset-355
02/22/2022, 11:32 AM
mypy
Code 1:
How can I cast, or at least tell mypy, that the output is of type Output[str] and not Output[Any]?
r53_zone = aws.route53.get_zone(zone_id=SHARED_STACK_REF.require_output('hosted_zone_id'))
mypy output
Argument "zone_id" to "get_zone" has incompatible type "Output[Any]"; expected "Optional[str]"
Code 2
Why do I get an error, and how can I prevent it? It looks like the profile
attribute is dynamically created, correct?
ec1_provider = aws.Provider(
    f"{PREFIX}-{ec1_region}", profile=aws.config.profile, region=ec1_region
)
mypy output:
__main__.py:27: error: Module has no attribute "profile"
kind-napkin-54965
02/22/2022, 12:25 PM
.virtualenvs/pulumi39/lib/python3.9/site-packages/pulumi_auth0/_utilities.py", line 15, in <module>
import pulumi.runtime
ModuleNotFoundError: No module named 'pulumi.runtime'
Tried both py3.10 and py3.9, same. I am clearly missing something, can't figure out what exactly. Thanks for help!
worried-xylophone-86184
02/23/2022, 1:42 PM
Inside the apply and all functions the individual values are supposed to be accessed. This makes sense when the values are being passed on to another cloud resource. Function calls and traceback are in the thread. What is supposed to be done when the values need to be extracted from the Output object and used for a non-Pulumi operation, like making POST calls?
I found an alternative where the output is being written to a file.
proud-daybreak-22469
02/23/2022, 11:13 PM
spot_instance_id
Output from a SpotInstanceRequest, and it always returns None in the apply method I’m calling on it. What is the proper way to access this output? I’m trying to include the ARN of that instance as a resource in an AWS policy.
I’m sure this question has been asked 1001 times, and I took a look at the input and output documentation, but it didn’t help solve the problem.
eager-jordan-39489
02/25/2022, 2:09 AM
onObjectCreated
in Python? I couldn't find it.
happy-alarm-59675
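There does not appear to be a Python counterpart to the TypeScript onObjectCreated magic-function helper; the usual route is wiring the notification explicitly. A rough, untested sketch; bucket and handler are hypothetical existing resources:

```python
# aws.s3.BucketNotification(
#     "on-object-created",
#     bucket=bucket.id,
#     lambda_functions=[aws.s3.BucketNotificationLambdaFunctionArgs(
#         lambda_function_arn=handler.arn,
#         events=["s3:ObjectCreated:*"],
#     )],
# )
# (an aws.lambda_.Permission allowing S3 to invoke the function is also needed)
```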
02/25/2022, 11:23 AM
victorious-exabyte-70545
03/01/2022, 10:40 PM
bland-electrician-81799
03/03/2022, 1:02 AM
great-sunset-355
03/09/2022, 7:05 AM
pulumi.interpolate
in python?
prehistoric-shoe-5168
03/11/2022, 6:36 PM
rough-oyster-77458
03/11/2022, 9:21 PM
pulumi.export()
to export a variable in file1.py. I'm going to use this variable in file2.py. Is there any way to get this variable in file2.py?
So far I have found how to import this variable in the CLI only.
hallowed-animal-47023
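On the question above: if file1.py and file2.py belong to the same Pulumi program, no export is needed, a plain Python import shares the value; pulumi.export() exposes values to the CLI and to other stacks via pulumi.StackReference. A rough sketch with hypothetical names:

```python
# file1.py
# my_value = some_resource.id
# pulumi.export("my_value", my_value)    # visible to `pulumi stack output`
#
# file2.py (same program)
# from file1 import my_value
#
# a different stack's program
# ref = pulumi.StackReference("org/project/stack")
# my_value = ref.get_output("my_value")
```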
03/12/2022, 2:10 AM
primary_private_route_table = ec2.RouteTable(
    f'{environment}-primary-private-subnet-route-table',
    vpc_id=vpc.id,
    routes=[
        ec2.RouteTableRouteArgs(
            cidr_block="0.0.0.0/0",
            gateway_id=nat_ids['primary_nat_id']
        )
    ],
    tags={
        "Name": f'{environment}-primary-private-subnets-route-table',
        "Environment": f'{environment}'
    }
)
~ routes: [
~ [0]: {
~ cidrBlock : "0.0.0.0/0" => "0.0.0.0/0"
+ gatewayId : "nat-08f9a3f32bfaaa09c"
- natGatewayId: "nat-08f9a3f32bfaaa09c"
}
]
fast-spoon-69536
03/14/2022, 5:45 PM
esa.create_app_role(example.endpoint.apply(lambda endpoint: f"{endpoint}"), admin_username, admin_password, app_role_payload, role_name)
_The create_app_role function takes the following inputs._
endpoint - the endpoint is only available after the previous resource is deployed.
_admin_username - username to log into the endpoint_
_admin_password - admin password_
_app_role_payload - a dictionary / REST payload_
_role_name - name of the role to create_
The basics of what I'm trying to do:
1. deploy aws opensearch
2. call the opensearch internal API to create roles and users after it is deployed.
endpoint needs to be a str, but it is an Output<str>. How can I get it to be a str? Or is there a better approach?
TypeError: can only concatenate str (not "Output") to str
little-photographer-14867
03/16/2022, 12:06 AM
read_sa = serviceaccount.Account(
    "read-sa",
    account_id="read-sa",
    display_name="Read Service Account"
)
py_repo = artifactregistry.Repository(
    "pypi-repo",
    location="us-west1",
    repository_id="pypi-repo",
    description="python packages.",
    format="PYTHON",
)
read_binding = artifactregistry.RepositoryIamBinding(
    "read-binding",
    project=py_repo.project,
    location=py_repo.location,
    repository=py_repo.name,
    role="roles/artifactregistry.reader",
    members=[
        f"serviceAccount:{read_sa.email}",
    ],
)
But I am getting errors:
gcp:artifactregistry:RepositoryIamBinding (read-binding):
error: 1 error occurred:
* Error applying IAM policy for artifactregistry repository "projects/test-project/locations/us-west1/repositories/pypi-repo": Error setting IAM policy for artifactregistry repository "projects/test-project/locations/us-west1/repositories/pypi-repo": googleapi: Error 400: Invalid service account (<pulumi.output.Output object at 0x7f7f53ff7f10>).
Am I defining the members arg properly? And is it correct to use the service account email in an f-string? Mostly following this ts example.
average-article-76176
03/16/2022, 5:28 PM
purple-plumber-90981
03/17/2022, 3:31 AM
nice-father-44210
03/17/2022, 6:42 PM
ComponentResources
that encode our best practices while allowing the calling code to override some aspects of the underlying AWS resources.
We want to expose the available overrides as Pulumi’s Args
objects. E.g.,:
class SecureS3Bucket(pulumi.ComponentResource):
    def __init__(
        self,
        name: str,
        bucket_overrides: aws.s3.BucketArgs = aws.s3.BucketArgs(),
Is there a convenient way to manipulate/merge BucketArgs
objects? E.g., we might overlay required attribute values on top of the supplied bucket_overrides
argument before passing the merged object into the Bucket(BucketArgs)
constructor.
One way we’ve found is using a combination of pulumi._types.input_type_to_dict(bucket_overrides) to turn BucketArgs into a dict, and pulumi.set(args_object, "attribute", "value") to apply the dict entries to a BucketArgs object.