Do you foresee any issues if I upgrade the Postgres and AWS plugins (we are also using the EKS plugin)? Would you suggest creating a new schema for testing?
I think it is definitely a good idea to use a new schema for testing, especially because you will be jumping about 10 major versions for AWS and 7 major versions for Postgres.
Be aware that all plugins are now hosted on the CloudQuery Registry, which requires authentication (once you have signed up, you can generate an API Key that you can use in CI environments).
More information about this change can be found here: CloudQuery Blog - Mandatory Login
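For CI, one common approach is to expose the key through the `CLOUDQUERY_API_KEY` environment variable. The GitHub Actions step below is only a rough sketch: it assumes the CLI is already installed on the runner, and the secret name and config path are placeholders for whatever you use.

```yaml
# Hypothetical CI step; secret name, config path and pre-installed CLI are assumptions
- name: CloudQuery sync
  env:
    CLOUDQUERY_API_KEY: ${{ secrets.CLOUDQUERY_API_KEY }}
  run: cloudquery sync ./config.yml
```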
Thanks for your help. I will get back to you once we have updated.
I forgot to ask… how would I specify a new database schema in the config? Will the plugins automatically create all necessary tables? Could the tables be prefixed so that we could keep using the same DB schema?
You can set the `search_path` in the connection string, and that will tell the CloudQuery Postgres plugin where to create the tables and store the data. You will have to create the schema manually, as the Postgres plugin requires that it exist before the sync. Here is an example `connection_string` with a non-default schema (a full destination spec using it is sketched after the breakdown below):

```
postgres://jack:secret@localhost:5432/mydb?search_path=testschema
```
- `jack` is the username
- `secret` is the password
- `localhost` is the address
- `5432` is the port
- `mydb` is the name of the database
- `testschema` is the name of the non-default schema that you want to use for testing
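Putting that together, a testing destination spec could look roughly like the sketch below. The plugin version here is just the one used in the example further down this thread, and remember that the schema has to exist before the sync (for example, created with `CREATE SCHEMA testschema;`).

```yaml
kind: destination
spec:
  name: "postgresql"
  path: "cloudquery/postgresql"
  registry: "cloudquery"
  version: "v8.0.4"
  spec:
    # testschema must be created manually before the sync, e.g. CREATE SCHEMA testschema;
    connection_string: "postgres://jack:secret@localhost:5432/mydb?search_path=testschema"
```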
Thank you. If I may, I have another question. If I want to publish data to different destinations (Postgres, S3), can the same source plugin handle that?
Definitely. The `destinations` attribute is an array, so you can set multiple destinations.
It would look something like this:
```yaml
kind: source
spec:
  name: "aws"
  path: "cloudquery/aws"
  version: "v26.0.0"
  tables:
    - "aws_cloudfront_distributions"
  destinations: ["postgresql", "s3"]
---
kind: destination
spec:
  name: "postgresql"
  path: "cloudquery/postgresql"
  registry: "cloudquery"
  version: "v8.0.4"
  spec:
    connection_string: "postgresql://postgres:pass@localhost:5432/postgres?sslmode=disable"
---
kind: destination
spec:
  name: "s3"
  path: "cloudquery/s3"
  version: "v6.1.0"
  write_mode: "append"
  spec:
    bucket: "<BUCKET_NAME>"
    region: "us-east-2" # Example: us-east-1
    path: "path/to/files/{{TABLE}}/{{UUID}}.{{FORMAT}}"
    athena: true
    format: "parquet" # options: parquet, json, csv
```
Got it! And this will work if I have a custom plugin creating custom tables?
All of the routing is handled by the CLI, so it should be 100% supported.
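For example, a custom source plugin served locally over gRPC during development could reuse the same `destinations` array; the plugin name, address, and table selection below are placeholders, not part of any real setup.

```yaml
kind: source
spec:
  name: "my_custom_source"   # placeholder plugin name
  registry: "grpc"           # plugin running locally while you develop it
  path: "localhost:7777"     # address your custom plugin listens on
  tables: ["*"]
  destinations: ["postgresql", "s3"]
```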
@cq-ben Hi there! So, I have started to work on the plugin upgrade. My first step was to move the PostgreSQL database to another account (for security purposes), so I have set up a dual destination (based on the above example) until fully tested. Then I will delete the old database.
I’m also setting up another environment for bringing up the latest CloudQuery plugin versions (we use AWS + EKS, PostgreSQL). In a previous comment, you mentioned I would have to create the new schema if I upgraded the plugin version. How would I do that, `cq init`?
In my case, keeping the same old version, I just configured the destination and the plugins created the tables in the new database.