Cluster data becoming stale in DB after deleting k8s cluster

Hi Team,

I started using the k8s plugin a couple of days ago, and I ran into a case where my cluster data stays stale in the database indefinitely.

I am syncing multiple Kubernetes clusters, each with a different source name, using the manifest below. When I delete one of the clusters, say cluster-1, I can no longer sync it, since the connection is gone, but its data keeps hanging around in the database (a manual cleanup sketch I am considering follows the manifest). I am just wondering how you handle this scenario?

config.yml

kind: source
spec:
  # Source spec section
  name: unique_source_name_per_cluster
  path: cloudquery/k8s
  version: "v5.1.0"
  tables:
  - "k8s_core_nodes"
  - "k8s_core_pods"
  - "k8s_core_services"
  destinations: ["postgresql"]
  spec:
    # contexts: ["abc"]
---
kind: destination
spec:
  name: "postgresql"
  path: "cloudquery/postgresql"
  version: "v6.0.2"
  write_mode: "overwrite-delete-stale"
  migrate_mode: forced
  pk_mode: cq-id-only
  spec:
    # connection_string: "${PG_CONNECTION_STRING}"
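
For reference, here is the manual cleanup sketch I am considering as a stopgap. It assumes the default _cq_source_name internal column that the postgresql destination writes to every table; 'cluster-1' stands in for the name of the deleted source spec.

manual_cleanup.sql

-- Stopgap: purge rows left behind by the deleted cluster's source.
-- Assumes CloudQuery's default _cq_source_name internal column;
-- replace 'cluster-1' with the name from the removed source spec.
DELETE FROM k8s_core_nodes    WHERE _cq_source_name = 'cluster-1';
DELETE FROM k8s_core_pods     WHERE _cq_source_name = 'cluster-1';
DELETE FROM k8s_core_services WHERE _cq_source_name = 'cluster-1';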

Hi @expert-lacewing,

Thanks for raising this! We have an open issue here that covers nearly the same problem you are facing.

Would you mind adding your use case to that issue so that we can be sure to address it?
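
For some context on why the rows linger: with overwrite-delete-stale, stale-row cleanup runs only after a source completes a successful sync, conceptually something like the statement below for each synced table (simplified; the actual implementation lives in the destination plugin). Since a deleted cluster never syncs again, the cleanup never fires for it, and its rows remain.

DELETE FROM k8s_core_pods
WHERE _cq_source_name = 'cluster-1'               -- only this source's rows
  AND _cq_sync_time < :current_sync_start_time;   -- rows the latest sync did not refresh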

Thanks @ben for pointing out that open issue; I have added a comment describing my use case.