
Certificate signed by unknown authority #3521

Open
Jdavid77 opened this issue Dec 7, 2024 · 2 comments

Jdavid77 commented Dec 7, 2024

Component(s)

collector

Describe the issue you're reporting

Greetings!!

I tried to set up the OpenTelemetry Operator in my cluster using Helm, but I'm getting this error when deploying the collector:

OpenTelemetryCollector/monitoring/otel-filelog dry-run failed (InternalError): Internal error occurred: failed calling webhook "mopentelemetrycollectorbeta.kb.io": failed to call webhook: Post "https://opentelemetry-operator-webhook.monitoring.svc:443/mutate-opentelemetry-io-v1beta1-opentelemetrycollector?timeout=10s": tls: failed to verify certificate: x509: certificate signed by unknown authority.
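
For context, this error usually means the API server does not trust the CA that signed the webhook's serving certificate, i.e. the caBundle on the operator's webhook configurations was never injected by cert-manager (or no longer matches the serving secret). With cert-manager, that injection is driven by an annotation on the webhook configuration; below is a minimal sketch of the wiring, using placeholder resource names rather than the chart's actual ones (the webhook name, service, and path are taken from the error above):

# Sketch only: cert-manager's cainjector watches the inject-ca-from annotation
# and copies the CA of the referenced Certificate into clientConfig.caBundle.
# Resource and Certificate names here are placeholders.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: opentelemetry-operator-mutating-webhook            # placeholder
  annotations:
    cert-manager.io/inject-ca-from: monitoring/opentelemetry-operator-serving-cert  # <namespace>/<Certificate>
webhooks:
  - name: mopentelemetrycollectorbeta.kb.io
    clientConfig:
      service:
        name: opentelemetry-operator-webhook
        namespace: monitoring
        path: /mutate-opentelemetry-io-v1beta1-opentelemetrycollector
      # caBundle is filled in by cainjector; if it stays empty or stale, the API
      # server reports "certificate signed by unknown authority".
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    rules:
      - apiGroups: ["opentelemetry.io"]
        apiVersions: ["v1beta1"]
        resources: ["opentelemetrycollectors"]
        operations: ["CREATE", "UPDATE"]

Checking whether caBundle is actually populated on the operator's mutating/validating webhook configurations is usually the quickest way to narrow this down.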

Current Config:

manager:
  image:
    repository: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator
    tag: v0.114.0
  collectorImage:
    repository: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib
    tag: 0.114.0
admissionWebhooks:
  certManager:
    enabled: true
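
With admissionWebhooks.certManager.enabled: true, the chart relies on cert-manager to issue the webhook serving certificate into the secret mounted by the webhook server, and on the CA from that Certificate being injected as described above. Roughly, the resources involved look like this sketch (the names and the self-signed issuer are assumptions for illustration; the chart's actual objects may differ):

# Sketch only: illustrative Issuer/Certificate pair for the webhook serving cert.
# All names are placeholders; compare against what the chart actually created.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: opentelemetry-operator-selfsigned-issuer            # placeholder
  namespace: monitoring
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: opentelemetry-operator-serving-cert                 # referenced by inject-ca-from
  namespace: monitoring
spec:
  secretName: opentelemetry-operator-webhook-cert           # placeholder secret name
  dnsNames:
    # Must cover the DNS name the API server dials, per the error above.
    - opentelemetry-operator-webhook.monitoring.svc
    - opentelemetry-operator-webhook.monitoring.svc.cluster.local
  issuerRef:
    kind: Issuer
    name: opentelemetry-operator-selfsigned-issuer

If the Issuer and Certificate both report Ready but the error persists, the usual culprit is the caBundle on the webhook configurations rather than the certificate itself.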

Collector

---
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-filelog
spec:
  managementState: managed
  upgradeStrategy: automatic
  mode: daemonset
  serviceAccount: collector
  ports:
    - name: health
      port: 13113
  env:
    - name: KUBE_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
  config:
    receivers:
      filelog:
        include:
          - /var/log/pods/**/**/*.log
        exclude:
          - /var/log/pods/**/otc-container/*.log
          - /var/log/pods/**/grafana-sc-datasources/*.log
          - /var/log/pods/**/grafana-sc-dashboard/*.log
          - /var/log/pods/**/download-dashboards/*.log
          - /var/log/pods/**/init-db/*.log
          - /var/log/pods/**/trim-filesystem/*.log
          - /var/log/pods/**/config-reloader/*.log
          - /var/log/pods/**/init-config-reloader/*.log
        include_file_path: true
        include_file_name: false
        operators:
          - type: regex_parser
            regex: '^(?P<timestamp>.+?) (?P<streamm>stdout|stderr) (?P<flag>[A-Z]{1}) (?P<log>.*)$'
            id: pre_process
          - type: move
            from: attributes.log
            to: body
          - type: regex_parser
            id: extract_metadata_from_filepath
            regex: '^.*\/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[a-f0-9\-]{36})\/(?P<container_name>[^\._]+)\/(?P<restart_count>\d+)\.log(.*)$'
            parse_from: attributes["log.file.path"]
          - type: router
            routes:
              - output: recombine
                expr: 'body matches "^ "'
            default: first_move
          - type: move
            id: first_move
            from: attributes.streamm
            to: resource["log.iostream"]
          - type: copy
            from: attributes.namespace
            to: resource["service.namespace"]
          - type: copy
            from: attributes.container_name
            to: resource["service.name"]
          - type: move
            from: attributes.container_name
            to: resource["k8s.container.name"]
          - type: move
            from: attributes.namespace
            to: resource["k8s.namespace.name"]
          - type: move
            from: attributes.pod_name
            to: resource["k8s.pod.name"]
          - type: move
            from: attributes.restart_count
            to: resource["k8s.container.restart_count"]
          - type: move
            from: attributes.uid
            to: resource["k8s.pod.uid"]
          - type: remove
            field: attributes.flag
          - type: recombine
            combine_field: body
            is_first_entry: body matches "^ "
            source_identifier: attributes["log.file.path"]
    processors:
      batch:
      k8sattributes:
        auth_type: "serviceAccount"
        passthrough: false
        filter:
          node_from_env_var: KUBE_NODE_NAME
        extract:
          metadata:
            - k8s.pod.name
            - k8s.pod.uid
            - k8s.deployment.name
            - k8s.namespace.name
            - k8s.node.name
            - k8s.pod.start_time
          # Pod labels which can be fetched via K8sattributeprocessor
          labels:
            - tag_name: key1
              key: label1
              from: pod
            - tag_name: key2
              key: label2
              from: pod
        # Pod association using resource attributes and connection
        pod_association:
          - sources:
             - from: resource_attribute
               name: k8s.pod.uid
             - from: resource_attribute
               name: k8s.pod.name
          - sources:
             - from: connection
      memory_limiter:
        check_interval: 1s
        limit_percentage: 70
        spike_limit_percentage: 30
      resource:
        attributes:
          - action: insert
            key: loki.format
            value: json
    exporters:
      loki:
        endpoint: http://loki-gateway.monitoring/loki/api/v1/push
        default_labels_enabled:
          exporter: false
    service:
      pipelines:
        logs:
          receivers: [filelog]
          processors: [k8sattributes,batch,resource,memory_limiter]
          exporters: [loki]
  securityContext:
    runAsUser: 0
    runAsGroup: 0
  volumeMounts:
    - name: pods
      mountPath: /var/log/pods
      readOnly: true
  volumes:
    - name: pods
      hostPath:
        path: /var/log/pods

Kubernetes version: 1.30.5
OS: Talos v1.8.2

The issuer and the certificate were created successfully, but I don't know the origin of this error. I'd appreciate input from anyone who has faced this issue in the past.

Thanks in advance!!

@yuriolisa
Contributor

@Jdavid77, did you have the opportunity to ensure that cert-manager was properly deployed before starting the Operator?

@Jdavid77
Author

@yuriolisa Yes, it was. Up and running.
