k8s series 9: Deploying a Logging Component with DaemonSet

This is day 6 of my participation in the Writing Challenge (更文挑战); see the event page for details.

  There is a class of workloads where a pod must run on every node, be added or removed automatically as nodes join or leave the cluster, and always stay at exactly one pod per node (or per node matching specified labels). This pattern is essential for log collection, monitoring agents, scheduled tasks, and daemons. Among the Kubernetes controllers there is an object built precisely for this scenario: the DaemonSet.

Introducing DaemonSet

 Let's look at the characteristics of a DaemonSet.

Characteristics

  • Ensures exactly one pod replica runs on each node
  • Automatically adds or removes pod replicas as nodes join or leave the cluster
  • Deleting a DaemonSet automatically deletes all pod replicas it created
  • Works with labels for flexible, targeted placement (see the sketch after this list)
  • Provides the same pod maintenance capabilities as a Deployment controller
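
A minimal sketch of a label-targeted DaemonSet (the name demo-agent, the disktype: ssd label, and the busybox image are illustrative assumptions, not part of the filebeat example below):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-agent                  # hypothetical name, for illustration only
spec:
  selector:
    matchLabels:
      app: demo-agent
  template:
    metadata:
      labels:
        app: demo-agent
    spec:
      nodeSelector:                 # restricts replicas to nodes carrying this label;
        disktype: ssd               # omit nodeSelector to run on every node
      containers:
      - name: agent
        image: busybox:1.35
        command: ["sh", "-c", "while true; do sleep 3600; done"]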

Deploying Filebeat

 Next, let's deploy Filebeat using the official Elastic example. Repository: github.com/elastic/bea…

Documentation: www.elastic.co/guide/en/be…

The kube-system namespace is used directly.

Please install Elasticsearch in advance. For a test environment, installing it with Docker is quick and simple; see my earlier post (juejin.cn/post/684490…). A minimal example follows.
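
If you do not have an instance yet, a minimal single-node test setup with Docker looks like this (the version tag matches the filebeat image used below; not suitable for production):

docker run -d --name elasticsearch \
  -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:7.13.2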

Download the YAML file

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.13/deploy/kubernetes/filebeat-kubernetes.yaml

Modify the Elasticsearch connection settings in it (the output.elasticsearch section and the ELASTICSEARCH_* environment variables).

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:esip}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.13.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "esip"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
    - replicasets
  verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
kubectl apply -f filebeat-kubernetes.yaml

After applying the manifest, you can see that a service account was created and that filebeat was granted get, watch, and list permissions on the cluster resources namespaces, pods, and nodes.
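
You can verify these grants by impersonating the new service account with kubectl auth can-i (an extra check, not part of the original walkthrough):

kubectl auth can-i list pods --as=system:serviceaccount:kube-system:filebeat
kubectl auth can-i watch nodes --as=system:serviceaccount:kube-system:filebeat
# both commands should print "yes"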

kubectl get ds,pods -n kube-system -l k8s-app=filebeat -o wide

One pod replica is running on each node in the cluster.
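
A quick way to confirm this is to compare the pod count with the node count; the numbers should match, assuming no taints keep filebeat off any node:

kubectl get pods -n kube-system -l k8s-app=filebeat --no-headers | wc -l
kubectl get nodes --no-headers | wc -l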

Processing Flow

 After a DaemonSet is deployed, its controller first obtains the list of nodes (stored in etcd, accessed through the API server), then iterates over every node.

It then checks the configured labels: if none are set, it starts one pod replica on every node; if labels are configured, it starts a pod replica only on the nodes matching those labels (an example follows).
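
For label-based placement, you would first label the target nodes and add a matching nodeSelector to the pod template (node-1 and role=logging are hypothetical values):

kubectl label nodes node-1 role=logging
# then, in the DaemonSet pod template spec:
#   nodeSelector:
#     role: logging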

For each node, the check generally produces one of the following results:

  • The node has no such pod: create one on that node;
  • The node has more than one such pod: delete the extra pods from that node;
  • The node has exactly one such pod: it is in the desired state.
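
The outcome of this reconciliation is visible in the DaemonSet status fields; for filebeat, the desired and current counts can be read directly:

kubectl get ds filebeat -n kube-system \
  -o jsonpath='{.status.desiredNumberScheduled} {.status.currentNumberScheduled}{"\n"}'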

Viewing the Logs

Set up Kibana with Docker (juejin.cn/post/684490…), then create an index pattern, and you can view the logs.
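
A minimal Kibana container pointing at the same Elasticsearch could look like this (replace esip with your Elasticsearch host, as in the manifest above):

docker run -d --name kibana \
  -p 5601:5601 \
  -e ELASTICSEARCH_HOSTS=http://esip:9200 \
  docker.elastic.co/kibana/kibana:7.13.2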

DaemonSet Advantages

  • DaemonSet pods are monitored by the controller and automatically kept alive
  • DaemonSet gives any node-level service a uniform deployment model
  • DaemonSet pods can declare resource limits, avoiding resource contention

