wide-dress-96388
06/19/2023, 3:56 PM
FileArchive <-- azure.Blob <-- k8s.Deployment <-- k8s.Service
• When a change occurs to the hash of the FileArchive, the Blob is marked as replace, and that leads to a cascading replace on the K8s Deployment and Service.
• Is there a way to mark the inherited changes in the Deployment/Service as non-replacing changes (it's a safe update), or a way to force Pulumi to make an update (instead of a replace) for the Blob? 🤔
• The scenario above causes some downtime issues, since changing the blob changes the Deployment spec, and that leads to downtime, because sometimes Pulumi just replaces the original K8s Deployment in a non-graceful way 😞 (we see that it deletes and only then creates a new Deployment).
Thanks 🙂
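One possible mitigation, sketched below in TypeScript with placeholder values (not necessarily the only approach): let the Kubernetes provider auto-name the Deployment by omitting metadata.name, so the replacement is created and becomes ready before the old Deployment is deleted; if the inherited diff really is a safe in-place update, the ignoreChanges resource option can also suppress it.

import * as k8s from "@pulumi/kubernetes";

// Sketch with placeholder values: without an explicit metadata.name, the Kubernetes
// provider auto-names the Deployment, which lets Pulumi create the replacement (and wait
// for it to become ready) before deleting the old one, instead of delete-then-create.
const appDeployment = new k8s.apps.v1.Deployment("app-deployment", {
    metadata: {
        // no `name:` on purpose - auto-naming enables create-before-delete replacement
        labels: { app: "my-app" },
    },
    spec: {
        replicas: 2,
        selector: { matchLabels: { app: "my-app" } },
        template: {
            metadata: { labels: { app: "my-app" } },
            spec: {
                containers: [{
                    name: "app",
                    image: "nginx:1.25", // placeholder for the image that consumes the Blob
                    env: [{ name: "ARCHIVE_URL", value: "https://example.invalid/archive.zip" }], // placeholder for the inherited Blob value
                }],
            },
        },
    },
});

// If the inherited change is known to be safe, ignoring the specific path is another
// (blunter) option, e.g. { ignoreChanges: ["spec.template.metadata.annotations"] } // placeholder path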
adventurous-raincoat-20995
06/20/2023, 7:37 AM
curved-dream-12503
06/20/2023, 9:01 AM
source <(pulumi gen-completion bash)
it will be very slow due to the update check. And you get this update message every time you open a new terminal.
sticky-bear-14421
06/20/2023, 9:08 AM
Diagnostics:
aws:s3:BucketPolicy (pulumi-infrastructure-bucket-policy):
error: Preview failed: refreshing urn:pulumi:dev::bootstrap::aws:s3/bucketPolicy:BucketPolicy::pulumi-infrastructure-bucket-policy: 1 error occurred:
* reading Amazon S3 (Simple Storage) Bucket Policy (arn:aws:s3:::infrastructure.pulumi.dev): InvalidARNError: invalid ARN
caused by: invalid Amazon s3 ARN, unknown resource type, arn:aws:s3:::infrastructure.pulumi.dev
I created the policy in the policy editor and copied the (real) ARN from the S3 console, so everything should be formally correct.
I am working with the latest Pulumi release (3.72.2) with Node.js version v18.16.0.
Any ideas why the simple import command fails?
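If it helps, one reading of that error is that the resource is being tracked by the bucket ARN where the provider expects the bucket name; a hedged TypeScript sketch of importing the policy by bucket name (the policy document below is a placeholder):

import * as aws from "@pulumi/aws";

// Placeholder policy document - substitute the real statements copied from the console.
const policyJson = JSON.stringify({
    Version: "2012-10-17",
    Statement: [],
});

// Import the existing policy using the bucket *name* as the import ID rather than its ARN.
const bucketPolicy = new aws.s3.BucketPolicy("pulumi-infrastructure-bucket-policy", {
    bucket: "infrastructure.pulumi.dev", // bucket name, not arn:aws:s3:::infrastructure.pulumi.dev
    policy: policyJson,
}, { import: "infrastructure.pulumi.dev" });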
calm-account-35699
06/20/2023, 12:20 PM
pulumi/actions. Pulumi up has been stuck for over an hour trying to deploy an ECS service. I'm wondering if there's any way to remotely stop the Pulumi up job.
wooden-egg-90698
06/20/2023, 1:28 PM
glamorous-family-11920
06/20/2023, 4:13 PM
tls:
  key:
but tls.key:
limited-farmer-68874
06/20/2023, 10:01 PM
brief-car-60542
06/21/2023, 4:17 AM
pulumi state delete-ed resource back to pulumi state?
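One way to get it back, sketched in TypeScript with placeholder names: re-adopt the live resource into the stack state with the import resource option (or the pulumi import CLI command), then drop the option once the resource is under management again.

import * as aws from "@pulumi/aws";

// Hedged sketch: the bucket still exists in the cloud but was removed from state with
// `pulumi state delete`; importing adopts it back into the stack without recreating it.
// "my-existing-bucket" is a placeholder for the real resource's import ID.
const adoptedBucket = new aws.s3.Bucket("my-bucket", {
    bucket: "my-existing-bucket",
}, { import: "my-existing-bucket" });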
green-oxygen-75202
06/21/2023, 8:28 AM
pulumi destroy pulumi stacks with Pulumi CLI that were originally created with Pulumi CLI (which have a Pulumi.yaml file on the file system). Now I try to do the same for stacks that were originally created using the Pulumi Automation API (which do not have Pulumi.yaml files on the file system), but this does not seem to work. It complains about searching for and not finding a Pulumi.yaml file in parent directories. The thing is that I successfully CAN pulumi rm stacks with Pulumi CLI that were originally created with the Pulumi Automation API. But of course that leaves the Pulumi resources "orphaned", since pulumi rm does not destroy the resources themselves, only the Pulumi state in Pulumi Cloud. So what I would like to achieve is to be able to pulumi destroy resources with Pulumi CLI that were originally created using the Pulumi Automation API. Is this implemented or on the roadmap?
able-policeman-41860
06/21/2023, 9:09 AM
import * as pulumi from "@pulumi/pulumi";
import * as resources from "@pulumi/azure-native/resources";
import * as network from "@pulumi/azure-native/network";
import * as containerservice from "@pulumi/azure-native/containerservice";
import * as kubernetes from "@pulumi/kubernetes";
// Grab some values from the Pulumi stack configuration (or use defaults)
const projCfg = new pulumi.Config();
const numWorkerNodes = projCfg.getNumber("numWorkerNodes") || 1;
const k8sVersion = projCfg.get("kubernetesVersion") || "1.26.3";
const prefixForDns = projCfg.get("prefixForDns") || "pulumi";
const nodeVmSize = projCfg.get("nodeVmSize") || "standard_B2s";
// The next two configuration values are required (no default can be provided)
const mgmtGroupId = projCfg.require("mgmtGroupId");
const sshPubKey = projCfg.require("sshPubKey");
// Create a new Azure Resource Group
const resourceGroup = new resources.ResourceGroup("resourceGroup", {});
// Create a new Azure Virtual Network
const virtualNetwork = new network.VirtualNetwork("virtualNetwork", {
addressSpace: {
addressPrefixes: ["10.0.0.0/16"],
},
resourceGroupName: resourceGroup.name,
});
// Create three subnets in the virtual network
const subnet1 = new network.Subnet("subnet1", {
addressPrefix: "10.0.0.0/22",
resourceGroupName: resourceGroup.name,
virtualNetworkName: virtualNetwork.name,
});
// Create an Azure Kubernetes Cluster
const managedCluster = new containerservice.ManagedCluster("managedCluster", {
resourceGroupName: resourceGroup.name,
addonProfiles: {},
agentPoolProfiles: [{
availabilityZones: ["1","2","3"],
count: numWorkerNodes,
enableNodePublicIP: false,
mode: "System",
name: "systempool",
osType: "Linux",
osDiskSizeGB: 30,
type: "VirtualMachineScaleSets",
vmSize: nodeVmSize,
vnetSubnetID: subnet1.id,
}],
apiServerAccessProfile: {
authorizedIPRanges: ["0.0.0.0/0"],
enablePrivateCluster: false,
},
dnsPrefix: prefixForDns,
enableRBAC: true,
identity: {
type: "SystemAssigned",
},
kubernetesVersion: k8sVersion,
linuxProfile: {
adminUsername: "azureuser",
ssh: {
publicKeys: [{
keyData: sshPubKey,
}],
},
},
networkProfile: {
networkPlugin: "azure",
networkPolicy: "azure",
serviceCidr: "10.96.0.0/16",
dnsServiceIP: "10.96.0.10",
},
aadProfile: {
enableAzureRBAC: true,
managed: true,
adminGroupObjectIDs: [mgmtGroupId],
},
});
// Build a Kubeconfig to access the cluster
const creds = containerservice.listManagedClusterUserCredentialsOutput({
resourceGroupName: resourceGroup.name,
resourceName: managedCluster.name,
});
const encoded = creds.kubeconfigs[0].value;
const decoded = encoded.apply(enc => Buffer.from(enc, "base64").toString());
// Apply the Percona XtraDB container image to the Kubernetes cluster
const k8sProvider = new kubernetes.Provider("k8sProvider", {
kubeconfig: decoded,
});
const perconaXtradbDeployment = new kubernetes.apps.v1.Deployment("percona-xtradb-deployment", {
metadata: {
name: "percona-xtradb",
},
spec: {
replicas: 1,
selector: {
matchLabels: {
app: "percona-xtradb",
},
},
template: {
metadata: {
labels: {
app: "percona-xtradb",
},
},
spec: {
containers: [
{
name: "percona-xtradb",
image: "percona/percona-xtradb-cluster:8.0",
ports: [
{
containerPort: 3306,
},
],
env: [
{
name: "MYSQL_ROOT_PASSWORD",
value: "kjhkjhjkhkhllg",
},
],
resources: {
requests: {
memory: "1Gi",
},
limits: {
memory: "1.5Gi",
},
},
},
],
},
},
},
}, { provider: k8sProvider });
const perconaXtradbService = new kubernetes.core.v1.Service("percona-xtradb-service", {
metadata: {
name: "percona-xtradb",
},
spec: {
selector: {
app: "percona-xtradb",
},
ports: [
{
port: 3306,
targetPort: 3306,
},
],
type: "LoadBalancer",
},
}, { provider: k8sProvider });
// Export some values for use elsewhere
export const rgName = resourceGroup.name;
export const networkName = virtualNetwork.name;
export const clusterName = managedCluster.name;
export const kubeconfig = decoded;
Is there anything wrong with this code structure?
After executing this, the created cluster uses a high amount of RAM. What could be the reason for that?
kind-country-41992
06/21/2023, 4:03 PM
worried-queen-62794
06/21/2023, 7:17 PM
Detected multiple versions of ‘@pulumi/pulumi’ in use in an inline automation api program.
I think it might be because I am using npm link with a pulumi library that I am developing. Is there a way to get this to work?
faint-father-49077
06/21/2023, 8:37 PM
serverlessv2_scaling_configuration=aws.rds.ClusterServerlessv2ScalingConfigurationArgs(
max_capacity=1,
min_capacity=0.5,
)
I have had a look in the aws.rds package and the ClusterServerlessv2ScalingConfigurationArgs class doesn't seem to be there.
Error message when I run a Pulumi preview:
AttributeError: module 'pulumi_aws.rds' has no attribute 'ClusterServerlessv2ScalingConfigurationArgs'
I can only find the scaling configuration class for serverless v1: ClusterScalingConfigurationArgs
Can anyone advise as to what I am doing wrong, please?
I am using Pulumi v3.72.2.
Thank you.
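For what it's worth, the Serverless v2 scaling block does exist in recent pulumi-aws releases, so this usually points at an older pulumi_aws package in the environment; here is a hedged TypeScript sketch of the same configuration (engine settings and credentials are placeholders), which the Python ClusterServerlessv2ScalingConfigurationArgs mirrors:

import * as aws from "@pulumi/aws";

// Sketch assuming a pulumi-aws version with Aurora Serverless v2 support.
const cluster = new aws.rds.Cluster("example-cluster", {
    engine: "aurora-postgresql",
    engineMode: "provisioned",
    masterUsername: "dbadmin",   // placeholder
    masterPassword: "change-me", // placeholder - use a secret in real code
    serverlessv2ScalingConfiguration: {
        maxCapacity: 1,
        minCapacity: 0.5,
    },
});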
brief-car-60542
06/22/2023, 12:19 AM
aws:redshiftserverless/namespace:Namespace:
~ aws:redshiftserverless/namespace:Namespace: (update)
[id=ingestion-dev-us-west-2]
[urn=urn:pulumi:backend.dev::backend::aws:redshiftserverless/namespace:Namespace::redshift-Namespace-dev-us-west-2]
[provider=urn:pulumi:backend.dev::backend::pulumi:providers:aws::dev/network/provider::***]
~ iamRoles: [
~ [0]: "IamRole(applyStatus=in-sync, iamRoleArn=arn:aws:iam::***:role/redshift-dev-us-west-2-s3-access-role-c4e1095)" => "arn:aws:iam::***:role/redshift-dev-us-west-2-s3-access-role-c4e1095"
]
~ aws-native:docdbelastic:Cluster: (update) 🔓
[id=arn:aws:docdb-elastic:us-west-2:***:cluster/61b9e661-d249-4994-81a8-0dba80182c45]
[urn=urn:pulumi:backend.dev::backend::aws-native:docdbelastic:Cluster::documentdb-dev-us-west-2]
[provider=urn:pulumi:backend.dev::backend::pulumi:providers:aws-native::dev/us-west-2/network/provider::093c5e34-b80c-4df0-b808-f6eaa4133a3d]
+ adminUserPassword: "***"
- kmsKeyId : "AWS_OWNED_KMS_KEY"
~ subnetIds : [
~ [1]: "subnet-07af7d63c689725da" => "subnet-02d3aeda4ce2c0905"
~ [2]: "subnet-02d3aeda4ce2c0905" => "subnet-07af7d63c689725da"
]
icy-controller-6092
06/22/2023, 3:06 AM
databricks:token is stuck in the config of some of my stacks, so I cannot destroy them - and I no longer have the source code that was used to bring the stacks up. What's the solution here?
most-dream-25581
06/22/2023, 5:28 AM
narrow-jackal-57645
06/22/2023, 8:06 AM
eks.ManagedNodeGroup object?
FYI, my setup is as follows:
• nodejs 18.16.0
• pulumi 3.72.0
• pulumi/pulumi-eks 1.0.2
Below is my code:
const nodeGroup = new eks.ManagedNodeGroup("cicd-kubernetes-nodegrp-00", {
    // stripped
})
// my naive way to retrieve the launchTemplate of the NodeGroup EC2 instance
nodeGroup.nodeGroup.launchTemplate.apply(lt => {
    console.log(lt.id); // returns null unfortunately
})
Many thanks in advance.
nice-rain-32013
06/22/2023, 8:29 AM
abundant-twilight-47012
06/22/2023, 11:18 AM
pulumi refresh, I get:
error: getting snapshot: snapshot integrity failure; refusing to use it: resource urn:... refers to unknown provider urn:...
which I try to fix by deleting the first resource from the state with pulumi state delete "urn:..." --disable-integrity-checking.
But I get:
error: No such resource "urn:..." exists in the current state
Would appreciate help!
late-nest-59850
06/22/2023, 12:45 PM
TypeError: Cannot read properties of undefined (reading 'data')
at /home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/cluster.ts:576:103
at /home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/pulumi/output.js:250:35
at Generator.next (<anonymous>)
at /home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/pulumi/output.js:21:71
at new Promise (<anonymous>)
at __awaiter (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/pulumi/output.js:17:12)
at applyHelperAsync (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/pulumi/output.js:229:12)
at /home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/pulumi/output.js:183:65
at processTicksAndRejections (node:internal/process/task_queues:95:5)
TypeError: Cannot read properties of undefined (reading 'data')
at /home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/cluster.ts:576:103
at /home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/pulumi/output.js:250:35
at Generator.next (<anonymous>)
at /home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/pulumi/output.js:21:71
at new Promise (<anonymous>)
at __awaiter (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/pulumi/output.js:17:12)
at applyHelperAsync (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/pulumi/output.js:229:12)
at /home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/pulumi/output.js:183:65
at processTicksAndRejections (node:internal/process/task_queues:95:5)
unhandled rejection: CONTEXT(156): resource:devopsEksCluster-provider[pulumi:providers:kubernetes]
STACK_TRACE:
Error:
at Object.debuggablePromise (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/kubernetes/node_modules/@pulumi/runtime/debuggable.ts:74:75)
at Object.registerResource (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/kubernetes/node_modules/@pulumi/runtime/resource.ts:401:5)
at new Resource (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/kubernetes/node_modules/@pulumi/resource.ts:423:13)
at new CustomResource (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/kubernetes/node_modules/@pulumi/resource.ts:810:9)
at new ProviderResource (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/kubernetes/node_modules/@pulumi/resource.ts:854:9)
at new Provider (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/provider.ts:56:9)
at new Cluster (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/cluster.ts:1461:25)
at Object.<anonymous> (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/aws/eks/eks.ts:12:28)
at Module._compile (node:internal/modules/cjs/loader:1254:14)
at Module.m._compile (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/ts-node/src/index.ts:439:23)
unhandled rejection: CONTEXT(156): resource:devopsEksCluster-provider[pulumi:providers:kubernetes]
STACK_TRACE:
Error:
at Object.debuggablePromise (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/kubernetes/node_modules/@pulumi/runtime/debuggable.ts:74:75)
at Object.registerResource (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/kubernetes/node_modules/@pulumi/runtime/resource.ts:401:5)
at new Resource (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/kubernetes/node_modules/@pulumi/resource.ts:423:13)
at new CustomResource (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/kubernetes/node_modules/@pulumi/resource.ts:810:9)
at new ProviderResource (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/kubernetes/node_modules/@pulumi/resource.ts:854:9)
at new Provider (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/provider.ts:56:9)
at new Cluster (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/cluster.ts:1461:25)
at Object.<anonymous> (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/aws/eks/eks.ts:12:28)
at Module._compile (node:internal/modules/cjs/loader:1254:14)
at Module.m._compile (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/ts-node/src/index.ts:439:23)
unhandled rejection: CONTEXT(156): resource:devopsEksCluster-provider[pulumi:providers:kubernetes]
I'm not really sure what the cause could be. It seems there is some dependency issue between the k8s provider created by EKS and the infra running on that provider?
square-night-79134
06/22/2023, 4:00 PM
import (
    "github.com/pulumi/pulumi-gcp/sdk/v6/go/gcp/container"
    "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)
existingCluster, err := container.GetCluster(ctx, &container.GetClusterArgs{
Location: pulumi.String("your-existing-cluster-location"),
Name: pulumi.String("your-existing-cluster-name"),
Project: pulumi.String("your-existing-project-id"),
})
if err != nil {
return err
}
square-night-79134
06/22/2023, 4:02 PM
aloof-leather-66267
06/22/2023, 4:51 PM
If parent is a parent resource, does that imply that the child resource dependsOn the parent resource as well?
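For reference, a minimal TypeScript sketch of the two options being contrasted (the resources are placeholders): setting parent does create a dependency on the parent, and additionally nests the child's URN under it and inherits options, whereas dependsOn only adds the ordering constraint.

import * as aws from "@pulumi/aws";

// Placeholder resources for illustration only.
const parentBucket = new aws.s3.Bucket("parent-bucket");

// Option 1: `parent` - the child depends on the parent, its URN is nested under the
// parent's, and it inherits options such as the provider.
const childObject = new aws.s3.BucketObject("child-object", {
    bucket: parentBucket.id,
    content: "hello",
}, { parent: parentBucket });

// Option 2: `dependsOn` - only an explicit ordering constraint; no URN nesting and no
// option inheritance.
const siblingObject = new aws.s3.BucketObject("sibling-object", {
    bucket: parentBucket.id,
    content: "world",
}, { dependsOn: [parentBucket] });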
damp-honey-93158
06/23/2023, 11:35 AM
refined-pilot-45584
06/23/2023, 2:24 PM
command/local provider; I am looking for a way to easily (in Go) make the scripts for Update run every time Pulumi Up is executed, even if nothing in the remainder of the IaC/context has changed. Any ideas on this? I know it's only in Preview but I wasn't sure if someone else has already done this; Thanks.
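One approach that may work, shown as a TypeScript sketch (the Go SDK of the command provider exposes the same Triggers input, assuming a provider version that has it): pass a trigger value that changes on every deployment, so the command is replaced and its scripts re-run on each pulumi up.

import * as command from "@pulumi/command";

// Hedged sketch with placeholder script paths: a trigger that differs on every run forces
// the command resource to be replaced (and therefore re-executed) on each `pulumi up`,
// even when nothing else in the stack has changed.
const alwaysRun = new command.local.Command("always-run-script", {
    create: "./scripts/update.sh", // placeholder
    update: "./scripts/update.sh", // placeholder
    triggers: [new Date().toISOString()],
});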
helpful-secretary-93852
06/23/2023, 3:06 PM
sshCommands:
type: command:remote:Command
properties:
connection:
host: ${debianserver.ipv4Address}
user: user
privateKey: ${AUTHORIZED_PRIVATE_KEY}
port: 22
create: echo ''
options:
additionalSecretOutputs:
- create
- connection
where "AUTHORIZED_PRIVATE_KEY" is a secret declared in the config map of my GitHub CI pipeline (marked "secret": true),
but then the input (whereas the output is OK) is exposed in the state in plaintext:
"inputs": {
"connection": {
"host": "1.1.1.1",
"port": 22,
"privateKey": "-----BEGIN OPENSSH PRIVATE KEY----- ...",
"user": "user"
},
"create": "echo ''"
},
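A possible way around this, sketched in TypeScript rather than Pulumi YAML (in YAML the analogous move is wrapping the value with the fn::secret builtin): pass the connection a value that is itself a secret Output, e.g. via config.requireSecret or pulumi.secret, so the input is stored encrypted in the checkpoint as well. The host and key names below are placeholders.

import * as pulumi from "@pulumi/pulumi";
import * as command from "@pulumi/command";

const config = new pulumi.Config();
// requireSecret already returns a secret Output; pulumi.secret() is shown for emphasis.
const privateKey = pulumi.secret(config.requireSecret("AUTHORIZED_PRIVATE_KEY"));

const sshCommands = new command.remote.Command("sshCommands", {
    connection: {
        host: "1.1.1.1", // placeholder, e.g. debianserver.ipv4Address
        user: "user",
        privateKey: privateKey, // secret Output, stored encrypted in the state's inputs
        port: 22,
    },
    create: "echo ''",
}, { additionalSecretOutputs: ["create", "connection"] });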
fierce-bird-8909
06/23/2023, 3:51 PM
index.ts
|- service-account
|- index.ts
acceptable-intern-25844
06/24/2023, 5:26 PM
k8s.helm.v3.Release to fully install all its resources to get a reference to a resource created by the helm release?
What I’m trying to achieve is install an istio/gateway helm release and get the k8s Service it creates, to get the Service’s external IP.
I’m getting the resource like this: `const gatewayService = k8s.core.v1.Service.get("istio-gateway-service", pulumi.interpolate`${gateway.namespace}/${gateway.name}`, { dependsOn: [gateway] });`
but it fails with error: Preview failed: resource 'istio-system/istio-ingressgateway' does not exist
and that’s pretty obvious, why would it exist during the preview, when the release is not installed at all?
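A pattern that can help here (hedged sketch; the chart, repo URL, and service name are taken from the snippet and error message above plus some assumptions): look the Service up through the Release's status output instead of its inputs, since status only resolves once the release has actually been installed, which both defers the lookup past the preview and creates the dependency.

import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// Assumed Release, mirroring the istio/gateway install from the question.
const gateway = new k8s.helm.v3.Release("istio-gateway", {
    chart: "gateway",
    repositoryOpts: { repo: "https://istio-release.storage.googleapis.com/charts" },
    namespace: "istio-system",
});

// gateway.status resolves only after the release is installed, so the Service lookup
// happens once the resource actually exists. "istio-ingressgateway" comes from the
// error message above and may differ in your setup.
const gatewayService = k8s.core.v1.Service.get(
    "istio-gateway-service",
    pulumi.interpolate`${gateway.status.namespace}/istio-ingressgateway`,
);

// External IP of the LoadBalancer Service, once provisioned.
export const gatewayIp = gatewayService.status.apply(s => s.loadBalancer?.ingress?.[0]?.ip);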
fierce-art-88136
06/25/2023, 1:37 PM