# general
Just a heads up from one LLM-backed feature developer to another: please let your product team know about the impact of Pulumi AI on the SEO visibility of your documentation. Here's an example of a query that isn't explicitly covered in your docs, where the 'Ask AI' answers outrank your example docs (which are right more often!) and your actual SDK docs (which, when applicable, are the second-best resource 100% of the time).
I love the Pulumi AI feature and use it a great deal. There's probably just a robots.txt value that needs to change, or some SEO magic your analytics team can propose, to make sure your docs outrank your OpenAI-generated results (even if they're surprisingly good for OpenAI-generated results, they should probably be ranked below the hand-curated stuff).
I've been running into this issue pretty much every time I have a new question about Pulumi: I start with a web search hoping for hand-curated tutorials or blog posts on the subject, then depending on the issue I either ask the AI or go to the SDK docs. Very rarely is someone else's Pulumi AI result the right choice for me.
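For what it's worth, the kind of robots.txt change I have in mind would look something like this (the `/ai/answers/` path is invented for illustration, not Pulumi's actual URL structure):

```
# Block crawlers from the AI-generated answer pages
User-agent: *
Disallow: /ai/answers/
```

Though as I understand it, Disallow only stops crawling; actually de-listing pages that are already indexed usually also needs a noindex signal on each page (a `<meta name="robots" content="noindex">` tag or an X-Robots-Tag response header).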
I've complained about this before. I wonder if there's an open issue to upvote.
There is a #CLBMM3BS9 channel for added visibility.
The AI team is aware of this; it's not my domain, but I know they're trying to fix it.
This has come up several times in this Slack before (the messages no longer seem to be visible), and I think the problem has gotten a bit better. I still think the AI answers should be completely hidden from search engines; they outrank not only proper documentation but also third-party blog posts and repos.
There is also a #C055KGGFB1N channel but it's not super active
Hi there @flat-yak-38918, thanks for the feedback. I appreciate you taking the time to call it out. We did de-list a huge number of these pages a while back, so things should indeed be better as @modern-zebra-45309 suggests. But in situations like this one where we don't have other content that matches a given query, you may still see AI Answers show up in results. In this case, since we don't have other content related to "signoz", you get the AI Answers. It's definitely not ideal, but the hope is that there might be at least something in there that could be useful for you. (If not, the downvote buttons are a good way to signal that.) We are in the process of transitioning to more of a human-curated and test-driven approach to all this, so we expect things to continue to improve. Apologies for the rough edges in the meantime, though. We are indeed working on it!
👀 1
💯 1
@miniature-musician-31262 I obviously don't know whether this is already in the works, but simply running the AI-generated code in a sandbox (or even just through a type checker for languages like Python or TypeScript) would be a huge step forward. It would resolve most issues with hallucinated arguments and with the resources and functions the AI dreams up (e.g., by mixing aws, awsx, aws-native, and aws-eks functionality) when trying to satisfy user requests. This has been my biggest frustration so far: I'm trying to do something, the AI "agrees" with me that it should indeed be possible, and it creates something that looks like it should work and is the obvious solution to the question the way I phrased it. It's really difficult to get from there to the actual solution, which often requires a shift in perspective.
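To make the idea concrete: even without a full sandbox, just checking generated keyword arguments against the real function signature would catch a lot of this. A minimal stdlib sketch (the `create_bucket` function and the hallucinated `autoDelete` argument are invented for illustration, not a real SDK):

```python
import inspect

def unknown_kwargs(func, kwargs):
    """Return the keyword arguments that the callable does not accept."""
    params = inspect.signature(func).parameters
    # If the function takes **kwargs, anything goes and we can't flag names.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return set()
    return {name for name in kwargs if name not in params}

# Stand-in for a real SDK resource constructor (hypothetical):
def create_bucket(name, acl=None, versioning=False):
    return {"name": name, "acl": acl, "versioning": versioning}

# Arguments an AI might hallucinate for that constructor:
generated_call = {"name": "logs", "acl": "private", "autoDelete": True}

unknown = unknown_kwargs(create_bucket, generated_call)
print(sorted(unknown))  # → ['autoDelete']
```

A real pipeline would of course run mypy/pyright or tsc over the whole generated program instead, but even this level of validation would flag the mixed-provider hallucinations before the snippet ever reaches a user.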