# google-cloud
v
no problem creating via “gcloud container” with no IP aliases and an auto-assigned range.
g
Continuing what I sent on general: take a look here about VPC-native clusters and here about routes-based clusters. They work differently regarding IP allocation and how they interact with the VPC network.
The defaults can trick you because they are not very consistent:
And although newer gcloud versions returned to routes-based, gcloud sets a lot of options that are not the defaults in the API. Pulumi uses the API directly, so you need to set those values for it to work.
VPC-native clusters have advantages over routes-based ones, but they also require more configuration and care.
The parameter for VPC-native on Pulumi is this one
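A minimal sketch of how that looks in TypeScript, assuming a reasonably recent @pulumi/gcp and the classic gcp.container.Cluster resource; the names, location, and CIDR sizes are illustrative:

```typescript
import * as gcp from "@pulumi/gcp";

// VPC-native: setting ipAllocationPolicy (plus networkingMode) makes GKE
// put pods and services on secondary ranges of the subnetwork.
const vpcNativeCluster = new gcp.container.Cluster("vpc-native-cluster", {
    location: "us-central1",
    initialNodeCount: 1,
    network: "my-network",            // hypothetical network name
    subnetwork: "my-subnetwork",      // hypothetical subnetwork name
    networkingMode: "VPC_NATIVE",
    ipAllocationPolicy: {
        clusterIpv4CidrBlock: "/14",  // let GKE pick a /14 for pods
        servicesIpv4CidrBlock: "/20", // and a /20 for services
    },
});
```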
v
I'm forcing it to ROUTES
g
That is ignored as input
If there is no `ipAllocationPolicy`, the cluster will be routes-based
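A routes-based sketch under the same assumptions (names are illustrative); omitting ipAllocationPolicy and setting networkingMode to "ROUTES" makes the intent explicit:

```typescript
import * as gcp from "@pulumi/gcp";

// Routes-based: no ipAllocationPolicy at all. GKE creates routes in the
// VPC for the pod range instead of using subnetwork secondary ranges.
const routesCluster = new gcp.container.Cluster("routes-cluster", {
    location: "us-central1-a",
    initialNodeCount: 1,
    network: "my-network",        // hypothetical
    subnetwork: "my-subnetwork",  // hypothetical
    networkingMode: "ROUTES",
    // no ipAllocationPolicy here: the pod range is auto-assigned unless
    // clusterIpv4Cidr is set explicitly
});
```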
v
yes, that's the expected behavior, as it was before
and as I would like
g
Check your network and subnetworks
For routes-based there must be a subnetwork in the same region, and the network must have a /14 range available
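If the network is custom-mode, that subnetwork has to exist already; a sketch in the same TypeScript style (region and ranges are illustrative):

```typescript
import * as gcp from "@pulumi/gcp";

// Custom-mode network: subnetworks are created explicitly.
const network = new gcp.compute.Network("gke-net", {
    autoCreateSubnetworks: false,
});

// A subnetwork in the same region as the cluster. This range is for the
// nodes; the pod /14 of a routes-based cluster is handled via routes,
// so it just has to be free in the VPC, not defined as a subnet.
const subnet = new gcp.compute.Subnetwork("gke-subnet", {
    network: network.id,
    region: "us-central1",
    ipCidrRange: "10.128.0.0/20",
});
```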
v
is there a way to do “pulumi up” using the old gcp plugin? That seems to be the only change.
g
I think this might be leftovers, so you should check whether the cluster auto-allocated a range and left it there after it was destroyed
v
and in that case the CLI continues to work? Sounds strange. But I'll check
g
The CLI is "smarter" than the raw API
v
but would the range be visible in “VPC networks”? Theoretically no, only IP aliases are visible there. Where can I check for this leftover?
g
You can set the range directly. I personally prefer to have everything explicit in the infrastructure code
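One way to make it explicit, sticking with the routes-based sketch above (the CIDR is illustrative; pick one that does not collide with existing routes or ranges in the VPC):

```typescript
import * as gcp from "@pulumi/gcp";

// Routes-based cluster with an explicit pod CIDR, so nothing is auto-allocated.
const routesCluster = new gcp.container.Cluster("routes-cluster", {
    location: "us-central1-a",
    initialNodeCount: 1,
    networkingMode: "ROUTES",
    clusterIpv4Cidr: "172.16.0.0/14",  // explicit instead of auto-assigned
});
```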
v
I have a running cluster at the moment, but there's no sign of its range here
g
Sorry, it should be under routes in your case
Something like this probably
v
yes, found it, thanks. But I see only that cluster. I realize I've set up several VPNs since that cluster was created, and I have a 10.0.0.0/8 static route. Hmm, maybe that's it?
g
If you already have a route for 10.0.0.0/8, you can't auto-allocate a /14 inside it. Not sure what magic gcloud is doing in this case; it might be worth creating a cluster with it just to see which route appears
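For reference, a sketch of the kind of static route that causes this (names are hypothetical); a 10.x.0.0/14 that GKE tries to auto-allocate for pods would fall inside its destination range:

```typescript
import * as gcp from "@pulumi/gcp";

// A VPN static route covering all of 10.0.0.0/8. An auto-allocated /14
// for a routes-based cluster would overlap this destination range.
const vpnRoute = new gcp.compute.Route("vpn-route", {
    network: "my-network",              // hypothetical
    destRange: "10.0.0.0/8",
    nextHopVpnTunnel: "my-vpn-tunnel",  // hypothetical tunnel
    priority: 1000,
});
```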
v
makes sense. I'll try to remove that static route
not so easy to run this trial…
but with the CLI it is working… I'm not so sure it is a 10.0.0.0 route issue…
g
That is why I said it might be worth creating the cluster with gcloud and inspecting it; it might be setting something different
v
without the 10.0.0.0/8 route it is working. It is a huge problem, but it makes sense now.
Thanks, Luiz, for the support.