Hey friends, kind of a maybe weird question for you. Is there a reason why CloudQuery doesn’t just leverage the Cloud Asset API in GCP to gather all the assets, as opposed to querying each API individually? I ask because we’ve been running into a lot of API rate limit issues, and GCP is advising us to start using the Cloud Asset Inventory (obviously, because it’s their solution lol), but I’m curious why CloudQuery didn’t/doesn’t just leverage this in the first place.
Hey
So we can add support for the GCP Cloud Asset API, and I think it could be a good feature. I believe we looked at it a while ago and saw that a lot of the time it doesn’t have all the data, or new resources aren’t available in it right away, whereas the individual APIs always expose all the available properties. If there were one API that exposed everything, that would be a dream and much less code.
Well, according to my open support ticket with GCP, it’s amazing and everyone should use it
I was just curious, is all. I haven’t really used it before except for one-off things in the UI, but they’re being a bit resistant to our requests to increase some of the API rate limits we’re hitting, so I’m trying to find the best way around it without having to reverse engineer the internet.
So I need to check what the latest is with GCP Asset Inventory, but from a quick look, yeah, it only exposes 5 properties per resource.
I can’t find anything that indicates what “with” and “without” resource metadata even mean, either.
I’m definitely curious, though, whether it works well for some use cases and whether it has improved. If so, we can integrate this API as well, or maybe integrate with the export API they mention. I’ll need to play with it a bit more, but please do share any insights, both good and bad, from your experience with it.
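For anyone who wants to poke at it, here’s roughly what a list call looks like with the Go SDK. This is an untested sketch, assuming the cloud.google.com/go/asset/apiv1 client and a placeholder project; my guess is that the “with”/“without” resource metadata wording maps to whether contentType is set on the request:

```go
package main

import (
	"context"
	"fmt"
	"log"

	asset "cloud.google.com/go/asset/apiv1"
	"cloud.google.com/go/asset/apiv1/assetpb"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()

	// Uses Application Default Credentials (e.g. `gcloud auth application-default login`).
	client, err := asset.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// ContentType_RESOURCE asks for the full resource metadata payload; leaving
	// ContentType unset returns only the handful of basic fields (name, asset type, ...).
	// "projects/my-project" is a placeholder parent; folders/... and organizations/... also work.
	it := client.ListAssets(ctx, &assetpb.ListAssetsRequest{
		Parent:      "projects/my-project",
		ContentType: assetpb.ContentType_RESOURCE,
		AssetTypes:  []string{"compute.googleapis.com/Instance"},
	})

	for {
		a, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// With ContentType_RESOURCE, a.Resource.Data carries the provider's view of
		// the resource as an untyped protobuf Struct.
		fmt.Println(a.Name, a.AssetType, a.Resource.GetData())
	}
}
```

If the RESOURCE payload turns out to be reasonably complete, that would be one list call per project instead of one per service, which is presumably where the quota savings come from.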
Give me a few years to finish my Udemy course on Golang, and I might take a stab at playing with the SDK to see what it returns.
I’ll play around with cURL-ing the API this afternoon and see what it actually returns, to check whether there’s anything viable there to get around some of these absurd API rate limits Google has.
Joined the Discord to ask this exact question.
My experience is that the export API is sufficiently complete and uses up less API quota than hitting the dedicated APIs. Adding the ability to create/monitor a feed would also be amazing.
Hi!
We may look into it further down the road.
At first glance, the inventory would take several different calls (one per content type) and would also require some additional work from the data engineers to reconstruct the data/relationships/structure from the responses.
For now, we provide typed tables/resources that have a (somewhat) well-documented format and are easily understandable by users. Providing the same experience via the asset inventory might be hard to implement (if it’s possible at all).
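To make the “several different calls” point concrete, here’s a rough, untested sketch (again assuming the cloud.google.com/go/asset/apiv1 client and a placeholder project): resource metadata and IAM policies come back from separate ListAssets passes, and the resource payload is an untyped struct that still has to be mapped back into typed tables.

```go
package main

import (
	"context"
	"log"

	asset "cloud.google.com/go/asset/apiv1"
	"cloud.google.com/go/asset/apiv1/assetpb"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := asset.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Resource metadata and IAM policies are not returned in the same response,
	// so a full picture needs one ListAssets pass per content type plus a
	// client-side join on the asset name.
	assets := map[string]*assetpb.Asset{} // keyed by asset name
	for _, ct := range []assetpb.ContentType{
		assetpb.ContentType_RESOURCE,
		assetpb.ContentType_IAM_POLICY,
	} {
		it := client.ListAssets(ctx, &assetpb.ListAssetsRequest{
			Parent:      "projects/my-project", // placeholder project
			ContentType: ct,
		})
		for {
			a, err := it.Next()
			if err == iterator.Done {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			merged, ok := assets[a.Name]
			if !ok {
				assets[a.Name] = a
				continue
			}
			// Merge the second pass into the first. The resource payload is an
			// untyped protobuf Struct, so mapping it into typed columns is still
			// on the consumer.
			if a.Resource != nil {
				merged.Resource = a.Resource
			}
			if a.IamPolicy != nil {
				merged.IamPolicy = a.IamPolicy
			}
		}
	}
	log.Printf("collected %d assets", len(assets))
}
```

That join-on-name step is roughly the reconstruction work mentioned above.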
We would love to see more discussion/engagement in a GitHub issue, which can be opened via GitHub - CloudQuery Issue.