# getting-started
Hi team, Is there a way to determine the number of requests (read/write) made using the Pulumi SDK in Go? Also, what is the applicable rate-limit policy?
What requests do you mean?
@powerful-waitress-36724 - I was curious about this myself. I didn't verify the response I got from Sider below, but maybe it is helpful.

Yes, you can determine the number of requests made using the Pulumi SDK in Go, and I can also provide information about rate limits. Here's a breakdown:

1. Determining the Number of Requests

- Custom Metrics/Instrumentation: The Pulumi SDK itself doesn't directly expose a counter for the number of API requests made during a `pulumi up`. You'd typically need to add your own instrumentation to track this. Here's how you could approach it:
  - Wrap the resource constructors: Create wrapper functions around the calls you make to create, read, or update resources, and increment a counter in each wrapper (using the `sync/atomic` package for concurrency safety). This is the most precise approach, but it requires modifying your Pulumi code.
```go
package main

import (
	"sync/atomic"

	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

var requestCount int64

// newBucket wraps the provider's constructor and counts each resource
// registration. Note: one registration may translate into several cloud
// API calls, so this is an approximation.
func newBucket(ctx *pulumi.Context, name string, args *s3.BucketArgs, opts ...pulumi.ResourceOption) (*s3.Bucket, error) {
	atomic.AddInt64(&requestCount, 1)
	return s3.NewBucket(ctx, name, args, opts...)
}

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Example:
		if _, err := newBucket(ctx, "my-bucket", nil); err != nil {
			return err
		}

		ctx.Export("requestCount", pulumi.Int64(atomic.LoadInt64(&requestCount)))
		return nil
	})
}
```
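The counting pattern itself is plain Go, and it's worth seeing standalone why `sync/atomic` matters: Pulumi registers resources concurrently, so a bare `requestCount++` would be a data race. A minimal sketch, where the goroutines merely simulate concurrent registrations:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countConcurrently simulates n concurrent resource registrations, each
// incrementing a shared counter the way the wrapper above does.
func countConcurrently(n int) int64 {
	var requestCount int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&requestCount, 1) // safe under concurrency
		}()
	}
	wg.Wait()
	return atomic.LoadInt64(&requestCount)
}

func main() {
	fmt.Println(countConcurrently(100)) // prints 100
}
```

Replacing the atomic add with `requestCount++` can lose increments and fails under `go run -race`.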
- Provider-Specific Logging: Some Pulumi providers (like AWS) have their own internal logging that may give you some insight. However, this is provider-dependent and not a general solution; you'd need to consult the specific provider's documentation and potentially enable very verbose logging.
- External Network Monitoring: Use tools like `tcpdump` or Wireshark to capture network traffic during a `pulumi up`, then filter the capture down to requests against the cloud provider's API endpoints. This is a more indirect method and requires additional tooling.

2. Rate Limits

- Cloud Provider Rate Limits are Key: Pulumi itself doesn't impose rate limits on resource operations. The rate limits you encounter will be those imposed by the underlying cloud providers (AWS, Azure, GCP, etc.).
- Provider Documentation: The definitive source for rate-limit information is always the cloud provider's official documentation. Some examples:
  - AWS: Search for "AWS API rate limits" or consult the specific service's documentation (e.g., EC2, S3, IAM). AWS often uses a token-bucket algorithm for rate limiting, and the Service Quotas console lets you view limits and request increases.
  - Azure: Look for "Azure Resource Manager throttling" or "Azure API rate limits." Azure uses throttling to manage API requests.
  - GCP: Search for "GCP API rate limits" or consult the documentation for the specific GCP service you are using.
- Pulumi's Impact on Rate Limits: Pulumi can indirectly affect how quickly you hit rate limits. For example:
  - Excessive Parallelism: If you create a large number of resources in parallel (e.g., many VMs or databases), you're more likely to hit rate limits. Consider using the `pulumi.DependsOn(...)` resource option to introduce dependencies and reduce parallelism.
  - Frequent Updates: Making frequent updates to your infrastructure (e.g., running `pulumi up` very often) generates more API requests.
- Handling Rate Limit Errors: Your Pulumi program should handle rate-limit errors gracefully. Cloud providers typically return specific error codes when you're being throttled, and you can use retry logic with exponential backoff to automatically retry failed calls. The `github.com/cenkalti/backoff` library is a good choice for implementing retries in Go.
```go
package main

import (
	"fmt"
	"strings"
	"time"

	"github.com/cenkalti/backoff/v4"
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

// newBucketWithRetry retries resource creation with exponential backoff
// when the provider reports a throttling error.
func newBucketWithRetry(ctx *pulumi.Context, name string, args *s3.BucketArgs, opts ...pulumi.ResourceOption) (*s3.Bucket, error) {
	var bucket *s3.Bucket
	operation := func() error {
		b, err := s3.NewBucket(ctx, name, args, opts...)
		if err != nil {
			// Check if the error is a rate-limit error (example strings for AWS).
			if strings.Contains(err.Error(), "RateLimitExceeded") || strings.Contains(err.Error(), "Throttling") {
				fmt.Println("Rate limit encountered. Retrying...")
				return err // return the error to trigger a retry
			}
			return backoff.Permanent(err) // non-retryable error
		}
		bucket = b
		return nil
	}

	expBackoff := backoff.NewExponentialBackOff()
	expBackoff.MaxElapsedTime = 5 * time.Minute // set a maximum retry time

	if err := backoff.Retry(operation, expBackoff); err != nil {
		return nil, fmt.Errorf("failed to create resource after multiple retries: %w", err)
	}
	return bucket, nil
}
```
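On the parallelism point above: the Pulumi engine decides how many resource operations run concurrently (the `--parallel` flag to `pulumi up` caps this), so you rarely throttle it by hand. But if your program makes raw cloud API calls of its own, the idea of bounding in-flight requests can be sketched in plain Go with a buffered channel as a semaphore. `doCall` here is a hypothetical stand-in for one API request:

```go
package main

import (
	"fmt"
	"sync"
)

// doCall stands in for one cloud API request (hypothetical).
func doCall(id int, results chan<- int) {
	results <- id
}

// runBounded issues n calls but allows at most limit of them in flight
// at once, using a buffered channel as a counting semaphore.
func runBounded(n, limit int) int {
	sem := make(chan struct{}, limit)
	results := make(chan int, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it when the call returns
			doCall(id, results)
		}(i)
	}
	wg.Wait()
	close(results)
	done := 0
	for range results {
		done++
	}
	return done
}

func main() {
	fmt.Println(runBounded(20, 4)) // prints 20
}
```

With `limit` set to 4, at most four calls are in flight at any moment, yet all 20 still complete.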
In Summary:

- You'll need to add custom instrumentation to your Pulumi Go code to (approximately) count requests.
- Rate limits are imposed by the cloud providers, not Pulumi itself; consult the provider's documentation.
- Design your Pulumi programs to handle rate-limit errors using retry logic.