
Roman Gorodeckij

12 months ago
I have trouble installing the default example configuration of OpenSearch: https://github.com/opensearch-project/opensearch-k8s-operator/blob/main/opensearch-operator/examples/2.x/opensearch-cluster.yaml The operator installed fine. I'm also using the chart for the CRDs, and the default config should be fine: http://dpaste.com//8HCW2U8UL So when I deploy the OpenSearch cluster chart, I get this error:
Diagnostics:
  pulumi:pulumi:Stack (infra-eks-utils-dev):
    error: preview failed

  kubernetes:opensearch.opster.io/v1:OpenSearchCluster (opensearch-cluster/opensearch-cluster):
    error: Preview failed: 1 error occurred:
    	* the Kubernetes API server reported that "opensearch-cluster/opensearch-cluster" failed to fully initialize or become live: Server-Side Apply field conflict detected. See <https://www.pulumi.com/registry/packages/kubernetes/how-to-guides/managing-resources-with-server-side-apply/#handle-field-conflicts-on-existing-resources> for troubleshooting help.
    The resource managed by field manager "pulumi-kubernetes-fa75979c" had an apply conflict: Apply failed with 1 conflict: conflict with "Go-http-client" using opensearch.opster.io/v1: .spec.nodePools
And what's interesting is that the pods are running fine:
holms@Romans-MBP-2 ~/D/p/i/i/eks_utils (main)> kubectl get pods -n=opensearch-cluster
NAME                                             READY   STATUS      RESTARTS   AGE
opensearch-cluster-dashboards-68bfb4bd48-xzxjs   1/1     Running     0          43m
opensearch-cluster-masters-0                     1/1     Running     0          43m
opensearch-cluster-masters-1                     1/1     Running     0          41m
opensearch-cluster-masters-2                     1/1     Running     0          39m
opensearch-cluster-securityconfig-update-j4cdd   0/1     Completed   0          43m

holms@Romans-MBP-2 ~/D/p/i/i/eks_utils (main)> kubectl get deployments -n=opensearch-cluster
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
opensearch-cluster-dashboards   1/1     1            1           45m
Source here: http://dpaste.com//DFR3MJABP and here http://dpaste.com//7RHPZHHX6
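For reference, a minimal sketch of how this Server-Side Apply conflict could be handled from the Pulumi side, assuming the cluster is declared with the Pulumi TypeScript SDK; the actual spec and names are in the dpaste links above, so treat everything here as a placeholder:

import * as k8s from "@pulumi/kubernetes";

// Sketch only: the OpenSearch operator (the "Go-http-client" field manager)
// also writes .spec.nodePools, which is what Pulumi's Server-Side Apply
// patch is conflicting with during preview.
const cluster = new k8s.apiextensions.CustomResource("opensearch-cluster", {
    apiVersion: "opensearch.opster.io/v1",
    kind: "OpenSearchCluster",
    metadata: {
        name: "opensearch-cluster",
        namespace: "opensearch-cluster",
        // Force Pulumi to take ownership of the conflicting fields
        // (per the troubleshooting guide linked in the error message).
        annotations: { "pulumi.com/patchForce": "true" },
    },
    spec: {
        // ...general, dashboards, nodePools from the example YAML...
    },
});

If annotating each resource is too noisy, the provider also supports a PULUMI_K8S_ENABLE_PATCH_FORCE=true environment variable to force-apply across the stack.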

Matt W

about 1 year ago
Help tracing a slow Pulumi Up…. We’re trying to understand an extremely slow `pulumi up` run. When we run `pulumi refresh`, it takes ~1m or so to get the data back, and that feels reasonable given that we have a few hundred resources. When we then run a `pulumi up` where even only 1 resource changes, the run takes over 9 minutes. The primary API we are interfacing with is the PagerDuty API, using the Pulumi PagerDuty provider. I’ve run `pulumi up --tracing…` and I have some trace files. Using the AppDash UI is a little confusing though, and I am not sure how to really get much use out of it. The one thing I see is the Profile View at the bottom and sorting by `Cumulative Time(ms)`: when I do that, I get a MASSIVE output of `RegisterResource` calls, … like … thousands and thousands of lines. We don’t have thousands of resources. There are over 11,500 lines in this output: 932 `RegisterResourceOutputs` calls… and 1864 `RegisterResource` calls:
%  cat pulumi.cumulative.out | awk '{print $1}' | sort | uniq -c | sort -n 
   1 /pulumirpc.Engine/SetRootResource
   1 execNodejs
   1 locateModule
   1 pf.CheckConfig
   1 pf.Configure
   1 pf.ValidateProviderConfig
   1 pulumi-plan
   1 sdkv2.CheckConfig
   1 sdkv2.Configure
   1 sdkv2.GetPluginInfo
   1 sdkv2.ValidateProviderConfig
   2 /pulumirpc.LanguageRuntime/GetPluginInfo
   2 /pulumirpc.LanguageRuntime/GetRequiredPlugins
   2 /pulumirpc.LanguageRuntime/Run
   2 Cumulative
   2 Name
   3 newPlugin
   4 /pulumirpc.ResourceProvider/CheckConfig
   4 /pulumirpc.ResourceProvider/Configure
   4 /pulumirpc.ResourceProvider/DiffConfig
   4 /pulumirpc.ResourceProvider/GetPluginInfo
   4 Time
  12 /pulumirpc.ResourceMonitor/SupportsFeature
 130 sdkv2.Invoke
 260 /pulumirpc.ResourceMonitor/Invoke
 260 /pulumirpc.ResourceProvider/Invoke
 459 sdkv2.Check
 459 sdkv2.Diff
 928 /pulumirpc.ResourceProvider/Check
 928 /pulumirpc.ResourceProvider/Diff
 932 /pulumirpc.ResourceMonitor/RegisterResourceOutputs
1864 /pulumirpc.ResourceMonitor/RegisterResource
5240 /pulumirpc.Engine/Log
I don’t see much detail in the trace view, though… is there something I am missing in order to figure out why we see so many calls? Also, it feels like the calls get cumulatively slower as we go. While looking at the `--debug --verbose=7` logs, I would swear that the `pulumi up` command is making S3 calls after each and every resource update. Is that possible?
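For reference, a sketch of the tracing and logging invocations described above, assuming local trace files (the flags are standard Pulumi CLI options; file names are placeholders):

# Write a local trace file during the slow update, then browse it in AppDash.
pulumi up --tracing=file:./up.trace
pulumi view-trace ./up.trace    # serves the AppDash UI locally

# Capture verbose engine logs to a file to check for per-resource
# backend (e.g. S3 state checkpoint) traffic.
pulumi up --logtostderr -v=7 2> up.log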