I am new to CloudQuery and I would like to know if the ability to write custom policies and queries is still supported. Or is there another API available to add custom policies?
Hi @honest-snipe,
Policies are queries that run on a database (destination), so you should be able to write your own without any specific API. What are you trying to achieve with custom policies?
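For illustration, a policy can be as simple as a SQL query run against the synced tables. Here's a minimal sketch (table and column names are illustrative and depend on your plugin version and schema):

```sh
# Run a hypothetical custom policy directly against the destination DB.
# $DATABASE_URL is a placeholder for your Postgres connection string.
psql "$DATABASE_URL" <<'SQL'
-- Flag S3 buckets that don't block public ACLs (illustrative columns).
SELECT arn, region
FROM aws_s3_buckets
WHERE block_public_acls IS NOT TRUE;
SQL
```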
I have a set of custom security policies that are specific to our security requirements. I use Grafana to view custom query results and utilize a Python client to run these queries (poor man’s ETL). However, I would like to have either CloudQuery or PostgreSQL execute these queries automatically for me, if possible.
OK, so even with our own policies, cloudquery doesn't execute them for you. You would run `cloudquery sync` to get the data to the DB, then run another process (we use dbt, https://www.getdbt.com/) to execute the policies. Our policies only work on data extracted using cloudquery, as the schema of the data originates from our source plugins.
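Concretely, the flow could look something like this (file and directory names are placeholders for your own setup):

```sh
# A minimal sketch of the sync-then-transform flow, assuming a spec file
# named cloudquery.yml and a dbt project in ./dbt.
cloudquery sync cloudquery.yml   # extract from sources, load into Postgres
cd dbt && dbt run                # run policy/transformation models as SQL
```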
Do your custom policies extract data or only transform it?
We currently only extract, but now we are seeing the need to transform and are interested in building tools for reporting and trending.
Thanks for the added context. I think it would make sense to run any tools for reporting, etc., on the data after the sync finishes.
I also have a question regarding error recovery. If, for some reason, the CloudQuery AWS plugin (cron job) crashes or is halted, will this create any data inconsistency? Are all PostgreSQL writes batched into a transaction? How does it work?
Hi @honest-snipe,
The PostgreSQL destination uses batches with default settings as described in the CloudQuery documentation. You can reduce the batch size to 1 if you’d like.
However, if the source crashes, the CLI should still wait for the last batch to finish writing before exiting. To clarify, not everything is persisted in a single transaction. Data is batched and flushed as the sync progresses based on the batch settings.
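If you want to experiment with that, here's roughly what a smaller batch size could look like in the destination spec. Treat this as a sketch: the exact option names and where they live vary across CLI and plugin versions, so check the postgresql destination docs.

```sh
# Write a hypothetical destination spec; batch_size placement may differ
# by plugin version, so verify against the documentation before using.
cat > postgresql.yml <<'YAML'
kind: destination
spec:
  name: postgresql
  path: cloudquery/postgresql
  version: "vX.Y.Z"   # pin to a real release
  batch_size: 1       # flush every record; safer on crashes, much slower
  spec:
    connection_string: "${PG_CONNECTION_STRING}"
YAML
```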
Thank you so much for the details.