red-lighter-44012
02/18/2021, 8:02 AM

proud-pizza-80589
02/19/2021, 7:43 AM

colossal-australia-65039
02/20/2021, 2:55 AM

bumpy-laptop-30846
02/24/2021, 1:53 PM
const cluster = new eks.Cluster(clusterName, {
    fargate: true,
    vpcId: vpc.id,
    publicSubnetIds: vpc.publicSubnetIds,
    privateSubnetIds: vpc.privateSubnetIds,
    deployDashboard: false,
    nodeAssociatePublicIpAddress: false,
    providerCredentialOpts: {},
    skipDefaultNodeGroup: true,
    ...
Does Fargate then use the default namespace? Is there a way to access the fargateProfile when creating the Fargate profile with fargate: true?
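Roughly what I'm after, as a sketch (I'm assuming the fargate option can also take a profile object with selectors, which I haven't verified; clusterName and vpc are the same as in the snippet above):

import * as eks from "@pulumi/eks";

// Sketch only: pass a profile object instead of `fargate: true`, so the
// namespace selector is explicit. "my-namespace" is a made-up example.
const cluster = new eks.Cluster(clusterName, {
    fargate: {
        selectors: [{ namespace: "my-namespace" }],
    },
    vpcId: vpc.id,
    privateSubnetIds: vpc.privateSubnetIds,
    skipDefaultNodeGroup: true,
});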

lemon-monkey-228
02/26/2021, 12:12 PM

lemon-monkey-228
02/26/2021, 12:30 PM
jq and deleting the resources key

lemon-monkey-228
02/26/2021, 12:31 PM

orange-psychiatrist-22511
02/26/2021, 2:12 PM

breezy-cricket-40277
02/26/2021, 3:50 PM
Is there a way to dry-run before applying it? I have a Helm chart deployed and don't want to deploy again without dry-running the Helm template first.

wet-noon-14291
02/26/2021, 10:30 PM

dry-engine-17210
02/27/2021, 6:32 PM

bitter-application-91815
03/03/2021, 4:53 PM

bitter-application-91815
03/03/2021, 4:53 PM

adorable-action-51248
03/05/2021, 9:31 AM
I create a gcp.compute.Subnetwork with empty secondaryIpRanges: [] and then refer to that subnet in gcp.container.Cluster, using ipAllocationPolicy to specify the services and pods subnets. However, this setup breaks because GKE modifies the network's secondaryIpRanges, and on the next pulumi up Pulumi tries to remove GKE's modifications. This fails because the subnetwork is already in use.
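Roughly, the setup looks like this (a sketch; the network, names, and CIDRs are made up, and I'm using the CIDR-block form of ipAllocationPolicy so GKE allocates the secondary ranges itself):

import * as gcp from "@pulumi/gcp";

const network = new gcp.compute.Network("gke-net", { autoCreateSubnetworks: false });

// Subnet deliberately created with no secondary ranges; GKE then adds its own.
const subnet = new gcp.compute.Subnetwork("gke-subnet", {
    network: network.id,
    ipCidrRange: "10.0.0.0/16",
    region: "europe-west1",
    secondaryIpRanges: [],
});

const cluster = new gcp.container.Cluster("gke", {
    network: network.id,
    subnetwork: subnet.id,
    initialNodeCount: 1,
    ipAllocationPolicy: {
        clusterIpv4CidrBlock: "/16",   // pods range, allocated by GKE on the subnet
        servicesIpv4CidrBlock: "/22",  // services range, allocated by GKE on the subnet
    },
});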

colossal-australia-65039
03/05/2021, 11:39 PM
I'm using the @pulumi/eks Node package, and when creating a new Cluster() I can supply roleMappings. Ideally these roleMappings would reference ClusterRoleBindings, but those ClusterRoleBindings cannot exist until the EKS cluster itself is created, since they're k8s resources. I'm stuck in a chicken/egg scenario for the initial creation of the EKS cluster! Is there a way to solve this without having to fall back to referencing a hardcoded string value instead of my ideal case of referencing clusterRoleBinding.subjects[0].name?
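To make the chicken/egg concrete, this is roughly the shape I end up with today (a sketch; the IAM role, account ID, and group name are made up):

import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

const devRole = new aws.iam.Role("dev-role", {
    assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({ AWS: "arn:aws:iam::123456789012:root" }),
});

// The group name has to be a plain string here, because the ClusterRoleBinding
// that would otherwise define it can only be created after the cluster exists.
const devGroup = "pulumi:devs";

const cluster = new eks.Cluster("cluster", {
    roleMappings: [{
        roleArn: devRole.arn,
        username: "pulumi:dev-user",
        groups: [devGroup],
    }],
});

// Only now can the k8s-side binding exist, referencing that same string.
const binding = new k8s.rbac.v1.ClusterRoleBinding("dev-view", {
    subjects: [{ kind: "Group", name: devGroup, apiGroup: "rbac.authorization.k8s.io" }],
    roleRef: { kind: "ClusterRole", name: "view", apiGroup: "rbac.authorization.k8s.io" },
}, { provider: cluster.provider });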

limited-planet-95090
03/08/2021, 11:52 PM

bitter-application-91815
03/10/2021, 7:07 PM

bitter-application-91815
03/10/2021, 7:08 PM
the Kubernetes API server reported that "default/asg-chart-cloud-production-aws-cluster-autoscaler" failed to fully initialize or become live: 'asg-chart-cloud-production-aws-cluster-autoscaler' timed out waiting to be Ready
* Service does not target any Pods. Selected Pods may not be ready, or field '.spec.selector' may not match labels on any Pods
sticky-match-71841
03/11/2021, 1:52 PM

limited-rainbow-51650
03/11/2021, 1:53 PM
Are there get functions, e.g. getService, that I can feed some labels or annotations?
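For context, what I'm picturing (sketch; names made up), since as far as I can tell the existing .get() functions want a namespace/name ID rather than labels:

import * as k8s from "@pulumi/kubernetes";

// Looking up an existing Service today seems to require the exact namespace/name:
const svc = k8s.core.v1.Service.get("existing-svc", "default/my-service");

// What I'd like is something driven by labels or annotations instead,
// e.g. a hypothetical getService({ labels: { app: "my-app" } }).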

adorable-action-51248
03/11/2021, 4:04 PM
Is there a way to wait for the resources created by a k8s.yaml.ConfigFile?
I tried using dependsOn but that doesn't wait long enough.
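Concretely, something like this (sketch; the file and the custom resource are made up), where dependsOn alone doesn't seem to wait for the manifests to be fully ready:

import * as k8s from "@pulumi/kubernetes";

// Manifests applied from a YAML file (e.g. CRDs).
const manifests = new k8s.yaml.ConfigFile("crds", { file: "crds.yaml" });

// A resource that needs everything in the ConfigFile to exist first.
const widget = new k8s.apiextensions.CustomResource("widget", {
    apiVersion: "example.com/v1",
    kind: "Widget",
    metadata: { name: "widget" },
    spec: {},
}, { dependsOn: [manifests] });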

some-elephant-30417
03/12/2021, 8:29 AM
dask = helm.v3.Chart(
    f'dask-helm-{resource_suffix}',
    config=k8s.helm.v3.ChartOpts(
        chart='dask',
        repo='dask',
        version='4.5.7',
        fetch_opts=k8s.helm.v3.FetchOpts(
            repo='https://helm.dask.org',
        ),
    ),
    opts=pulumi.ResourceOptions(
        providers={'kubernetes': k8s_provider},
    ),
)
But I get this error:
error: Program failed with an unhandled exception:
error: Traceback (most recent call last):
  File "/home/alexandre/.pyenv/versions/daks/lib/python3.9/site-packages/pulumi/runtime/invoke.py", line 110, in do_invoke
    return monitor.Invoke(req)
  File "/home/alexandre/.pyenv/versions/daks/lib/python3.9/site-packages/grpc/_channel.py", line 923, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/home/alexandre/.pyenv/versions/daks/lib/python3.9/site-packages/grpc/_channel.py", line 826, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.UNKNOWN
    details = "invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: failed to pull chart: chart "dask/dask" version "4.5.7" not found in https://helm.dask.org repository"
    debug_error_string = "{"created":"@1615537561.526221208","description":"Error received from peer ipv4:127.0.0.1:38703","file":"src/core/lib/surface/call.cc","file_line":1067,"grpc_message":"invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: failed to pull chart: chart "dask/dask" version "4.5.7" not found in https://helm.dask.org repository","grpc_status":2}"
I tried many variations of the Chart parameters without success. Any idea? Thank you!

brash-house-42711
03/15/2021, 10:00 PM
I'm adding the IAM role ARN annotation to ServiceAccounts via the setIamRoleArn method below, but Pulumi doesn't detect any changes. I wonder if the annotations attribute is being ignored, since there is a high chance of it being updated outside of Pulumi? If so, is there a way to force adding a specific annotation? Thank you!
private deployCloudWatchAgentDaemonset(): k8s.yaml.ConfigFile {
    let serviceAccounts = this.serviceAccounts;
    return new k8s.yaml.ConfigFile('cloudwatch-agent-setup', {
        file: ContainerInsights.CW_AGENT_TEMPLATE,
        transformations: [(obj: any, _opts: pulumi.CustomResourceOptions) => {
            if (typeof serviceAccounts !== 'undefined') {
                ContainerInsights.setIamRoleArn(obj, serviceAccounts);
            }
        }],
    },
    { providers: { kubernetes: this.k8sProvider } });
}

private static setIamRoleArn(obj: any, serviceAccounts: pulumi.Output<any>): void {
    if (obj !== undefined && obj.kind == 'ServiceAccount') {
        serviceAccounts.apply(serviceAccounts => {
            if (typeof serviceAccounts !== 'undefined' && Object.keys(serviceAccounts).includes(obj.metadata.name)) {
                if (!obj.metadata.annotations) {
                    obj.metadata['annotations'] = {}
                }
                obj.metadata.annotations['eks.amazonaws.com/role-arn'] = serviceAccounts[obj.metadata.name].role.arn;
            }
        });
    }
}
hundreds-battery-67030
03/16/2021, 12:49 AM
Is there a relationship between the memory usage of the pulumi up process and the number of objects in the K8s cluster? I have a situation where pulumi up was crawling to a halt on an 8GB worker in CircleCI, and when I tried it locally I saw it taking up 20+GB of memory. Any tips on troubleshooting this further?

incalculable-animal-125
03/16/2021, 9:55 AM
How do I do the equivalent of kubectl --kubeconfig ${PATH_TO_KUBECONFIG} --cluster ${CLUSTER_NAME} --token ${TOKEN}? I'm confused about how to set the --token part with Pulumi. From here, it seems like it is id_token: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens
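What I'm imagining is baking the token into the kubeconfig that the Provider uses, roughly like this (sketch; the server endpoint, CA data, and token are placeholders pulled from config):

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();
const server = config.require("clusterEndpoint");
const caCert = config.require("clusterCaData");
const token = config.requireSecret("idToken");

// Equivalent of `--token` on the CLI: put the token in the user entry of the kubeconfig.
const kubeconfig = pulumi.interpolate`
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: ${server}
    certificate-authority-data: ${caCert}
users:
- name: my-user
  user:
    token: ${token}
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context
`;

const provider = new k8s.Provider("with-token", { kubeconfig });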

prehistoric-coat-10166
03/16/2021, 2:30 PM
Since this involves the pulumi-kubernetes package, I figured I'd try here first.
I'm trying to write some unit tests in C# for my stacks and I'm having trouble with a stack which contains a Helm.V3.Chart.
The problem can be reproduced with the following stack:
public class FailingHelmStack : Pulumi.Stack
{
    public FailingHelmStack()
    {
        var chart = new Pulumi.Kubernetes.Helm.V3.Chart("chart",
            new Pulumi.Kubernetes.Helm.ChartArgs()
            {
                Chart = "ingress-nginx",
                Namespace = "kube-system",
                FetchOptions = new Pulumi.Kubernetes.Helm.ChartFetchArgs
                {
                    Repo = "https://kubernetes.github.io/ingress-nginx",
                },
            });
    }
}
Using Deployment.TestAsync with this stack results in the following exception being thrown.
Pulumi.RunException : Running program '<path>\bin\Debug\net5.0\testhost.dll' failed with an unhandled exception:
System.NullReferenceException: Object reference not set to an instance of an object.
   at System.Collections.Immutable.ImmutableArray.CreateRange[TSource,TResult](ImmutableArray`1 items, Func`2 selector)
   at Pulumi.Extensions.SelectAsArray[TItem,TResult](ImmutableArray`1 items, Func`2 map)
   at Pulumi.InputList`1.op_Implicit(ImmutableArray`1 values)
   at Pulumi.Kubernetes.Helm.V3.Chart.<>c__DisplayClass3_0.<ParseTemplate>b__0(ImmutableArray`1 objs)
   at Pulumi.Output`1.ApplyHelperAsync[U](Task`1 dataTask, Func`2 func)
   at Pulumi.Output`1.ApplyHelperAsync[U](Task`1 dataTask, Func`2 func)
   at Pulumi.Output`1.Pulumi.IOutput.GetDataAsync()
   at Pulumi.Serialization.Serializer.SerializeAsync(String ctx, Object prop, Boolean keepResources)
   at Pulumi.Deployment.SerializeFilteredPropertiesAsync(String label, IDictionary`2 args, Predicate`1 acceptKey, Boolean keepResources)
   at Pulumi.Deployment.SerializeAllPropertiesAsync(String label, IDictionary`2 args, Boolean keepResources)
   at Pulumi.Deployment.RegisterResourceOutputsAsync(Resource resource, Output`1 outputs)
   at Pulumi.Deployment.Runner.<>c__DisplayClass9_0.<<WhileRunningAsync>g__HandleCompletion|0>d.MoveNext()
--- End of stack trace from previous location ---
   at Pulumi.Deployment.Runner.WhileRunningAsync()
Stack Trace:
   at Pulumi.Deployment.TestAsync(IMocks mocks, Func`2 runAsync, TestOptions options)
   <snip>
--- End of stack trace from previous location ---
I think I might be able to work around this problem by mocking some necessary outputs, but I'm having trouble figuring out what exactly is required.
Any help or suggestions would be greatly appreciated.

purple-plumber-90981
03/16/2021, 11:34 PM
eks_nodegroup = eks.NodeGroup("my-eks-nodegroup", opts=eks_opts, **eks_node_group_config)
  File "/Users/bmeehan/repos/itplat-pulumi-infrastructure/venv/lib/python3.7/site-packages/pulumi_eks/node_group.py", line 145, in __init__
    raise TypeError("Missing required property 'cluster'")
TypeError: Missing required property 'cluster'
However, in trying to create my cluster, it seems to need the node group to pre-exist:
# node_group_options: Optional[pulumi.Input[pulumi.InputType['ClusterNodeGroupOptionsArgs']]] = None,
so in my eks_cluster_config
"node_group_options": eks_node_group_config,
eks_cluster = eks.Cluster("myt-eks-cluster", opts=eks_opts, **eks_cluster_config)
So which should come first, chicken or egg?

adorable-action-51248
03/17/2021, 9:24 AM

delightful-mouse-18472
03/17/2021, 11:31 AM

proud-pizza-80589
03/18/2021, 1:54 PM
We have one stack that creates the cluster and exports its provider, region, and kubeconfig. A second stack uses a stack reference and the predictable name to fetch that output, and creates a provider to deploy stuff on the cluster. Since we are messing about with the clusters, sometimes they need to be recreated. At that point the second stack is broken; I cannot destroy nor up, since the cluster has been replaced (error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server). The fix is to manually edit the stack export JSON files and remove all references to stuff deployed on the cluster so I can destroy it.
I'm looking for either a) a way to not depend on the specific k8s API server endpoint but on what it gets from the stack reference, or b) an --ignore-errors style option that deletes whatever it can find and drops the things it cannot. Any hints?
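For reference, the second stack's wiring is roughly this (sketch; stack and output names are placeholders):

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Pull the kubeconfig exported by the cluster stack.
const clusterStack = new pulumi.StackReference("myorg/cluster/prod");
const kubeconfig = clusterStack.requireOutput("kubeconfig");

// Everything in this stack is deployed through this provider, which is why the
// whole stack becomes unusable once the cluster behind the kubeconfig is replaced.
const provider = new k8s.Provider("cluster", { kubeconfig });

const ns = new k8s.core.v1.Namespace("apps", {}, { provider });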