Error syncing CloudQuery with Kubernetes context missing

I am getting the below error:

Error: failed to sync source k8s: failed to fetch resources from stream: rpc error: code = Unknown desc = failed to sync resources: failed to create execution client for source plugin k8s: could not find any context. Try to add context, https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration

Hi @unified-reptile :wave:, welcome to the channel! If possible, could you paste your CloudQuery configuration you are using here with any sensitive information redacted?

Here is my ServiceAccount, ClusterRole, and Binding:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-query-service-account
  namespace: test
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<aws_account>:role/cloudquery-role
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cloudquery-clusterrole
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cloudquery-clusterrolebinding
subjects:
  - kind: ServiceAccount
    name: cloud-query-service-account
    namespace: test
roleRef:
  kind: ClusterRole
  name: cloudquery-clusterrole
  apiGroup: rbac.authorization.k8s.io
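As a quick sanity check (assuming your local kubectl context points at the same cluster), you can verify the binding is effective by impersonating the service account. Note that the ClusterRole above only grants the core API group (`apiGroups: [""]`), so resources in other groups would still be denied:

```sh
# Should print "yes" for core-group resources covered by the rule:
kubectl auth can-i list pods \
  --as=system:serviceaccount:test:cloud-query-service-account

# The rule does not cover other API groups, so this would print "no":
kubectl auth can-i list deployments.apps \
  --as=system:serviceaccount:test:cloud-query-service-account
```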

@martin and I have serviceAccountName defined in cron specs for the Kubernetes plugin. Here is my source:

kind: source
spec:
  # Source spec section
  name: k8s
  path: cloudquery/k8s
  version: "v3.0.1"
  tables: ["*"]
  destinations: ["postgresql"]
  spec:
    contexts: ["*"]

@martin

Cloudquery K8s context errors.

@martin Any idea?

I’m currently looking into it; a couple of things stand out. The plugin version you are using, v3.0.1, is fairly old, so I recommend updating to the latest (v5.0.8) if possible. The other thing I’m looking into is what happens if you configure an empty context. I think you are safe to leave that configuration line out when running CQ inside a Kubernetes cluster.
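For illustration, an updated spec along those lines might look like the following (the in-cluster fallback behaviour is the suggestion above, not something verified here):

```yaml
kind: source
spec:
  name: k8s
  path: cloudquery/k8s
  version: "v5.0.8"
  tables: ["*"]
  destinations: ["postgresql"]
  # contexts omitted: when running inside the cluster, the plugin
  # should fall back to the in-cluster service account credentials
```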

@martin Yes, I am on an old version, and it is also used in other environments.

The only difference is that in the other environments we defined the kubeconfig manually, deployed it as a Secret, and mounted it into the pod.

Here I am trying to achieve the same with a service account.

A weird thing I am noticing is that the service account token inside the pod, under /var/run/secrets/kubernetes.io/serviceaccount/token, is different from the token in the Secret that is created after deploying the service account.
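(For what it's worth, that difference is expected on recent Kubernetes versions: with bound service account tokens, the kubelet mounts a short-lived projected token at that path rather than the long-lived Secret-based token, so the two values will not match. The pod and Secret names below are hypothetical placeholders:)

```sh
# The projected, short-lived token inside a running pod:
kubectl -n test exec <cloudquery-pod> -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/token

# The legacy Secret-based token, if such a Secret exists for the account:
kubectl -n test get secret <service-account-token-secret> \
  -o jsonpath='{.data.token}' | base64 -d
```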

What does the Kubernetes manifest that is running the CloudQuery (CQ) container look like?

apiVersion: batch/v1
kind: CronJob
metadata:
  creationTimestamp: null
  labels:
    app: cloudquery-cron
  name: cloudquery-cron
  namespace: sentor
spec:
  schedule: "* * * * *"
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      backoffLimit: 4
      template:
        spec:
          serviceAccountName: cloud-query-service-account
          securityContext:
            runAsUser: 1000
            runAsGroup: 3000
            fsGroup: 2000
          containers:
            - image: <my_custom_image>
              name: cloudquery-cron
              resources:
                limits:
                  memory: "2Gi"
                  cpu: "1000m"
                requests:
                  memory: "512Mi"
                  cpu: "500m"
              args: ["-c", "sh /app/config/script.sh"]
              command: [ "/bin/sh" ]
              imagePullPolicy: Always
              env:
              - name: CQ_DSN
                valueFrom:
                  secretKeyRef:
                    name: cloudquery-secret-connection-string
                    key: CQ_DSN
                    optional: false
          restartPolicy: Never

Where the script is running cloudquery sync with a specific plugin file. @martin, do you see something off?

In your CQ source plugin configuration, is the context set to a list with a single empty string or did you redact that? Since you are seeing the error message at line 115 in this code snippet, I’m trying to understand how the context configuration you are providing is being interpreted.

My current recommendations are to:

i) Add [..., "--log-level", "debug"] to your container arguments to see debug messages that provide more information.
ii) Try removing the contexts configuration from the spec in your source plugin configuration; with that removed, CQ should use the default context.
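Assuming /app/config/script.sh wraps the sync call, the debug flag from (i) would go on the cloudquery invocation inside the script. A sketch, where the config file path is an assumption:

```sh
# inside /app/config/script.sh (sketch; the source file path is hypothetical)
cloudquery sync /app/config/source.yml --log-level debug
```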

@martin Context is set to * only.
I am not copying any kube config file or anything else.
I want my service account to take care of kube authentication.
Also, the command is already running with --log-level debug.
@martin Thanks!
I guess service account support started after 3.1.0.
Change Log
I will try to update and see.
Thanks for your support!

Oh, nice find! Hope that helps.