Help with importing DynamoDB records in CloudQuery for Grafana integration

I have multiple DynamoDB tables on AWS. Using CloudQuery, I can get generic information about them, but it is mostly structural information (table metadata), not the records themselves.

I’m looking for guidance on how to import DynamoDB records and perform queries in CloudQuery integrated with Grafana.

Hi @calm-walrus,

We currently do not have DynamoDB as a Source or Destination. I would suggest opening an issue on GitHub with details about your intended use case.

Open an issue on GitHub

If you want to create your own plugin, we have resources here about how to do that:

Creating a New Plugin

Let us know if you have any other questions!

I can export the tables to S3 as CSV files. Would that be an easy way to read them with cloudquery sync?

So my question is: does CloudQuery support CSV as an input format?

That is an interesting use case @calm-walrus. We do have a premium File plugin that syncs from S3, but it reads Parquet files rather than CSV.

I personally haven’t tried this with DynamoDB. However, we do have an example of syncing data from S3 with the File plugin (Amazon Cost and Usage Reports, which are stored as Parquet files in S3): CloudQuery File Plugin Overview

There’s also an AWS blog post on exporting DynamoDB to S3 via AWS Glue: AWS Blog on DynamoDB to S3 Export
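
If your tables are small, a lighter-weight alternative to the Glue export is to scan the table with boto3 and write the items to CSV yourself. This is just a rough sketch, not something we ship: the table and file names are placeholders, and it assumes the items share a reasonably flat schema.

    import csv
    import boto3

    # Placeholder table name; adjust for your environment.
    table = boto3.resource("dynamodb").Table("your_table")

    # Scan the whole table, following LastEvaluatedKey pagination.
    items = []
    response = table.scan()
    items.extend(response["Items"])
    while "LastEvaluatedKey" in response:
        response = table.scan(ExclusiveStartKey=response["LastEvaluatedKey"])
        items.extend(response["Items"])

    # Write the items out as CSV, using the union of all attribute names as columns.
    fieldnames = sorted({key for item in items for key in item})
    with open("your_table.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(items)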

OK, here is my understanding:

  1. Export the DynamoDB table to CSV.
  2. Convert the CSV to Parquet (requires pandas plus a Parquet engine such as pyarrow):
    import pandas as pd

    # Read the exported CSV and write it back out as Parquet for the File plugin.
    df = pd.read_csv('your_input.csv')
    df.to_parquet('output.parquet', index=False)

  3. Upload the Parquet file to S3 and sync it to PostgreSQL (see the upload sketch below).
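
Since the File plugin syncs from S3, the upload in step 3 would be something roughly like this; the bucket and key names are placeholders:

    import boto3

    # Placeholder bucket and key; the File plugin source would then point at this S3 path.
    s3 = boto3.client("s3")
    s3.upload_file("output.parquet", "your-bucket", "dynamodb-export/output.parquet")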

Let me have a try.