Continuous Integration on Kubernetes with Drone


This is day 6 of my participation in the 更文挑战 writing challenge; see the event page for details.

This article assumes familiarity with containers, continuous integration, and Kubernetes.

Introduction

Drone is a container-native continuous integration system, designed as a self-service alternative to aging Jenkins installations. It is simple to use, performs well, and is very flexible: every step runs in its own container, and all steps share a working directory. Its main drawbacks are a smaller plugin ecosystem and thinner documentation. This article integrates Drone CI with Kubernetes to build a full continuous-integration flow, covering everything from the Git repository to the image registry and triggered builds, including the corresponding k8s yml files and the image builds involved. For a non-Kubernetes setup, see my earlier article on running Drone CI in a Docker environment.

To keep the setup quick, none of the services below mount persistent volumes or configure a nodeSelector.

Prerequisites

A k8s cluster. Mine is a three-node HA cluster, with one node shut down to save memory:

[root@node0 kubectls]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                         ERROR
etcd-0               Healthy     {"health":"true"}
etcd-2               Healthy     {"health":"true"}
controller-manager   Healthy     ok
scheduler            Healthy     ok
etcd-1               Unhealthy   Get "https://172.16.3.131:2379/health": dial tcp 172.16.3.131:2379: connect: no route to host

[root@node0 kubectls]# kubectl get nodes
NAME    STATUS     ROLES    AGE    VERSION
node0   Ready      master   207d   v1.19.4
node1   NotReady   master   207d   v1.19.4
node2   Ready      <none>   207d   v1.19.4

As usual, start by cheating on the config file: use a client-side dry run to generate a template, then build on top of it.

kubectl create deployment gitea --image gitea/gitea -o yaml --dry-run=client > gitea.yml
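The Service stubs can be scaffolded the same way; one possible sketch (kubectl derives the selector from the service name, so you still have to change the generated selector to app: gitea afterwards):

kubectl create service nodeport gitea-svc --tcp=3000:3000 --dry-run=client -o yaml >> gitea.yml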

1. Setting up Gitea

For the Git repository I chose Gitea, which is relatively lightweight. First, write the yaml. Since Gitea is stateful and needs persistent files, it would normally be deployed as a StatefulSet; Gitea itself needs no special configuration, so the manifest is as follows.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gitea
spec:
  serviceName: gitea-headless
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
      - image: gitea/gitea
        name: gitea
        ports:
         - name: http
           containerPort: 3000
        securityContext:
            privileged: true
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-svc
  labels:
    app: gitea
spec:
  ports:
  - name: http
    targetPort: 3000
    port: 3000
  selector:
    app: gitea
  type: NodePort
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-headless
  labels:
    app: gitea
spec:
  ports:
  - name: http
    targetPort: 3000
    port: 3000
  selector:
    app: gitea
status:
  loadBalancer: {}
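Since nothing is persisted here, Gitea loses its data whenever the pod is rescheduled. For anything beyond a throwaway lab you would add a volumeClaimTemplate; a minimal sketch, assuming the cluster has a default StorageClass (not part of the setup above):

# under the gitea container:
        volumeMounts:
        - name: gitea-data
          mountPath: /data          # Gitea keeps repositories and config here
# and at the same level as "template:" in the StatefulSet spec:
  volumeClaimTemplates:
  - metadata:
      name: gitea-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi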

Once it is deployed (kubectl apply -f gitea.yml), check the svc and open Gitea to initialize it:

kubectl get svc | grep 'gitea'
gitea-headless   ClusterIP      10.110.232.83    <none>        3000/TCP                                                      83m
gitea-svc        NodePort       10.111.69.19     <none>        3000:31466/TCP                                                83m

Open node:31466 to configure Gitea; only these two settings need attention:

- Base URL: host:nodeport of the node running Gitea
- SSH server domain: host of the node running Gitea

Create an OAuth2 application, with Drone's node:nodeport/login as the redirect URL (Drone doesn't exist yet at this point; keep reading).


2. Setting up Drone

Create the Drone server:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: drone
  name: drone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone
  strategy: {}
  template:
    metadata:
      labels:
        app: drone
    spec:
      containers:
      - image: drone/drone
        name: drone
        ports:
         - containerPort: 80
           name: http
        env:
         - name: DRONE_GITEA_SERVER
           valueFrom:
             configMapKeyRef:
               name: drone-cm
               key: DRONE_GITEA_SERVER
         - name: DRONE_GITEA_CLIENT_ID
           valueFrom:
             configMapKeyRef:
               name: drone-cm
               key: DRONE_GITEA_CLIENT_ID
         - name: DRONE_GITEA_CLIENT_SECRET
           valueFrom:
             configMapKeyRef:
               name: drone-cm
               key: DRONE_GITEA_CLIENT_SECRET
         - name: DRONE_RPC_SECRET
           valueFrom:
             configMapKeyRef:
               name: drone-cm
               key: DRONE_RPC_SECRET
         - name: DRONE_USER_CREATE
           valueFrom:
             configMapKeyRef:
               name: drone-cm
               key: DRONE_USER_CREATE
         - name: DRONE_SERVER_HOST
           valueFrom:
             configMapKeyRef:
               name: drone-cm
               key: DRONE_SERVER_HOST
         - name: DRONE_SERVER_PROTO
           valueFrom:
             configMapKeyRef:
               name: drone-cm
               key: DRONE_SERVER_PROTO
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: sock
        resources: {}
      volumes:
      - name: sock
        hostPath:
          path: /var/run/docker.sock
status: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: drone-cm
  namespace: default
data:
  DRONE_GITEA_SERVER: http://172.16.3.130:31466  # node IP is hard-coded, so pair this with a nodeSelector in practice
  DRONE_GITEA_CLIENT_ID: ee2484a3-8953-4f1b-bf2f-28e9a95663be
  DRONE_GITEA_CLIENT_SECRET: abFGMz0Q9kSX46LQLdq0bgvqQpWFbZ3VLvr7mrXMBs5M
  DRONE_RPC_SECRET: dd6fed184d56520b5c72ff652f941eb2 # generated with: openssl rand -hex 16
  DRONE_USER_CREATE: username:root,admin:true # the Gitea account
  DRONE_SERVER_HOST: 172.16.3.130:30270 # Drone's own node:nodePort (see the Service below)
  DRONE_SERVER_PROTO: http
---
apiVersion: v1
kind: Service
metadata:
  name: drone-svc
  labels:
    app: drone
spec:
  ports:
  - name: http
    targetPort: 80
    nodePort: 30270 # fixed NodePort; must match DRONE_SERVER_HOST above
    port: 80
  selector:
    app: drone
  type: NodePort
status:
  loadBalancer: {}

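Once the pod is up, a quick sanity check: the Drone server exposes a /healthz endpoint, so hitting it through the NodePort should return 200:

curl -i http://172.16.3.130:30270/healthz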

Open Drone and authorize it against Gitea (in my case the authorization was already granted).

Back in Drone, enable trusted mode for the repository.

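Trusted mode is required because our pipeline mounts host paths (docker.sock and the kubectl binary), which Drone only permits for trusted repositories. If you prefer the CLI to the UI, something like the following should work; the repository name is illustrative, and the token comes from your Drone account settings page:

export DRONE_SERVER=http://172.16.3.130:30270
export DRONE_TOKEN=<your token>
drone repo update --trusted=true root/springboot-test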

3. Setting up the Drone runner

The runner is what actually executes the pipelines, so it needs a replica on every node; we deploy it as a DaemonSet.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: drone-run
  name: drone-run
spec:
  selector:
    matchLabels:
      app: drone-run
  template:
    metadata:
      labels:
        app: drone-run
    spec:
      containers:
      - image: drone/drone-runner-docker
        name: drone-runner
        ports:
         - containerPort: 3000
           name: http
        env:
         - name: DRONE_RPC_PROTO
           valueFrom:
             configMapKeyRef:
               name: drone-run-cm
               key: DRONE_RPC_PROTO
         - name: DRONE_RPC_HOST
           valueFrom:
             configMapKeyRef:
               name: drone-run-cm
               key: DRONE_RPC_HOST
         - name: DRONE_RUNNER_CAPACITY
           valueFrom:
             configMapKeyRef:
               name: drone-run-cm
               key: DRONE_RUNNER_CAPACITY
         - name: DRONE_RPC_SECRET
           valueFrom:
             configMapKeyRef:
               name: drone-run-cm
               key: DRONE_RPC_SECRET
         - name: DRONE_RUNNER_NAME
           valueFrom:
             configMapKeyRef:
               name: drone-run-cm
               key: DRONE_RUNNER_NAME
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: sock
      volumes:
      - name: sock
        hostPath:
          path: /var/run/docker.sock
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: drone-run-cm
  namespace: default
data:
  DRONE_RPC_PROTO: http
  DRONE_RPC_HOST: 172.16.3.130:30270 # Drone's node:nodePort
  DRONE_RUNNER_CAPACITY: "2"
  DRONE_RPC_SECRET: dd6fed184d56520b5c72ff652f941eb2 # the RPC secret generated earlier
  DRONE_RUNNER_NAME: drone-runner
---
apiVersion: v1
kind: Service
metadata:
  name: drone-run-svc
  labels:
    app: drone-run
spec:
  ports:
  - name: http
    targetPort: 3000
    port: 3000
  selector:
    app: drone-run
  type: NodePort
status:
  loadBalancer: {}

Check that everything is running:

kubectl get pod | grep 'drone'

drone-68cf888fb-9888w           1/1     Running       0          54m
drone-run-8h7ft                 1/1     Running       0          43m
drone-run-z5v95                 1/1     Running       0          43m


Check the runner logs for errors:

[root@node0 drone]# kubectl logs -f drone-run-8h7ft
time="2021-06-20T05:33:43Z" level=info msg="starting the server" addr=":3000"
time="2021-06-20T05:33:43Z" level=info msg="successfully pinged the remote server"
time="2021-06-20T05:33:43Z" level=info msg="polling the remote server" arch=amd64 capacity=2 endpoint="http://172.16.3.130:30270" kind=pipeline os=linux type=docker

4. Setting up Nexus

For the image registry I use Nexus rather than Harbor, since it doubles as a Maven repository. It is stateful and needs persistence too, so it is also deployed as a StatefulSet (it is a bit memory-hungry).

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nexus
spec:
  serviceName: nexus-headless
  replicas: 1
  selector:
    matchLabels:
      app: nexus
  template:
    metadata:
      labels:
        app: nexus
    spec:
      containers:
      - image: sonatype/nexus3
        name: nexus
        ports:
         - name: http
           containerPort: 8081
         - name: http2
           containerPort: 5000
        securityContext:
            privileged: true
---
apiVersion: v1
kind: Service
metadata:
  name: nexus-svc
  labels:
    app: nexus
spec:
  ports:
  - name: http
    targetPort: 8081
    port: 8081
  - name: http2
    targetPort: 5000
    port: 5000
  selector:
    app: nexus
  type: NodePort
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  name: nexus-headless
  labels:
    app: nexus
spec:
  ports:
  - name: http
    targetPort: 8081
    port: 8081
  - name: http2
    targetPort: 5000
    port: 5000
  selector:
    app: nexus
status:
  loadBalancer: {}

After deployment, look up the Service, open Nexus, and create a Docker (hosted) repository.

Then add the private registry address to Docker's daemon.json on every node:

vi /etc/docker/daemon.json
{
  "insecure-registries": [
    "172.16.3.130:31834"
  ]
}

systemctl daemon-reload
service docker restart
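Before wiring the registry into the pipeline, a manual push from any node is a good sanity check. This assumes the Docker hosted repository created in Nexus is the one behind NodePort 31834, and uses the Nexus admin account:

docker login 172.16.3.130:31834
docker pull alpine
docker tag alpine 172.16.3.130:31834/test/alpine:1
docker push 172.16.3.130:31834/test/alpine:1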

5. Building a kubectl image (I couldn't find a ready-made one, so I put one together)

The final deploy step needs kubectl to apply the yml, so we build an image that can run kubectl commands (or just pull yujian1996/kubectls, which I have pushed to Docker Hub).

# copy the kubeconfig into the Dockerfile directory
cp ~/.kube/config .
# write a simple Dockerfile
FROM alpine
WORKDIR /home
COPY ./config /home/config
CMD tail -f /dev/null

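Build and push it (yujian1996/kubectls is my Docker Hub repository; substitute your own):

docker build -t yujian1996/kubectls .
docker push yujian1996/kubectls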

At runtime the host's /usr/bin/kubectl must be mounted into the container; the deploy step then runs:

 kubectl --kubeconfig ./config apply -f ./deploy.yml
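To smoke-test the image outside of Drone, run it with the same host mount the pipeline will use. This works because kubectl is a statically linked Go binary, so the host's copy runs fine inside alpine:

docker run --rm -v /usr/bin/kubectl:/usr/bin/kubectl yujian1996/kubectls \
  kubectl --kubeconfig /home/config get nodes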

6. Writing .drone.yml (the pipeline configuration, similar to a Jenkins pipeline)

Add a .drone.yml to the project root. To keep things quick I didn't set up Sonar; just leave those parts commented out.

kind: pipeline
name: run
type: docker
steps:
  - name: build & unit test
    image: maven:3.6.2-jdk-8
    pull: if-not-exists
    commands:
      - mvn clean
      - mvn org.jacoco:jacoco-maven-plugin:prepare-agent install -Dmaven.test.failure.ignore=true
      - mvn package
    volumes:
      - name: cache
        path: /root/.m2
    when:
      branch: master
      event: [ push ]
#  - name: sonar scan
#    image: aosapps/drone-sonar-plugin
#    settings:
#      sonar_host:
#        from_secret: sonar_host
#      sonar_token:
#        from_secret: sonar_token
#    when:
#      branch: master
#      event: [ push ]
  - name: build image
    image: plugins/docker
    pull: if-not-exists
    settings:
      purge: false
      repo: 172.16.3.130:31465/spirngboot/test
      username: admin
      registry: 172.16.3.130:31465
      password: admin
      insecure: true
      tags: 1
    volumes:
      - name: docker
        path: /var/run/docker.sock
    when:
      branch: master
      event: [ push ]
#  - name: push report to DingTalk
#    image: yujian1996/sonar-ding:1
#    pull: if-not-exists
#    environment:
#      accessKey: edd02de6d6402150514802d82505ba4b0b59314e186fc98f736255ab3156c029
#      projectKeys: root:test
#      sonarUrl: http://192.168.31.79:9000
#    when:
#      status:
#        - success
#        - failure
  - name: run container
    image: yujian1996/kubectls
    pull: if-not-exists
    volumes:
      - name: kubectl
        path: /usr/bin/kubectl
    commands:
      - ls
      - cat ./deploy.yml
      - kubectl --kubeconfig /home/config apply -f ./deploy.yml
    when:
        branch: master
        event: [ push ]
volumes:
  - name: cache
    host:
      path: /root/.m2
  - name: docker
    host:
      path: /var/run/docker.sock
  - name: kubectl
    host:
      path: /usr/bin/kubectl
trigger:
  branch:
    - master
  event:
    - promote
    - push


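One thing worth fixing before real use: the registry credentials above are committed in plain text. Drone supports per-repository secrets, so a safer sketch of the build-image step would be (docker_username and docker_password are secret names of my own choosing):

  - name: build image
    image: plugins/docker
    settings:
      repo: 172.16.3.130:31465/spirngboot/test
      registry: 172.16.3.130:31465
      insecure: true
      username:
        from_secret: docker_username
      password:
        from_secret: docker_password

The secrets themselves are created in the repository settings UI, or with the drone CLI: drone secret add --repository <repo> --name docker_password --data <password>.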

Add one more k8s yml at the project root to deploy the service itself:

apiVersion: v1
kind: Service
metadata:
  name: srpingboot
  namespace: default
  labels:
    app: srpingboot
spec:
  type: NodePort
  ports:
    - port: 8080
  selector:
    app: srpingboot
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: srpingboot
  labels:
    app: srpingboot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: srpingboot
  template:
    metadata:
      labels:
        app: srpingboot
    spec:
      containers:
        - name: srpingboot
          image: 172.16.3.130:31465/spirngboot/test:1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080

7. Testing the pipeline

Push some code to trigger the pipeline, then watch its status in Drone: the whole release took only about 40 seconds, noticeably faster than Jenkins. For convenience I hard-coded the image tag as 1; normally the tag would be the commit id, which you would sed into the deploy file before running kubectl apply, as sketched below.

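A sketch of the commit-id variant: Drone exposes build metadata such as DRONE_COMMIT_SHA to every step and supports ${...} substitution in the yaml. IMAGE_TAG here is a hypothetical placeholder you would put in deploy.yml:

# in the build image step, tag with the short commit SHA instead of 1:
    settings:
      tags: ${DRONE_COMMIT_SHA:0:8}

# in the run container step, substitute the tag before applying:
    commands:
      - sed -i "s|IMAGE_TAG|${DRONE_COMMIT_SHA:0:8}|g" deploy.yml
      - kubectl --kubeconfig /home/config apply -f deploy.yml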

Now check the k8s svc and pod; srpingboot-686b7bff78-9shhm is up and running:

kubectl get pod 
NAME                          READY   STATUS        RESTARTS   AGE
drone-68cf888fb-24ghx         1/1     Running       0          51m
drone-run-8h7ft               1/1     Running       1          156m
drone-run-z5v95               1/1     Running       2          156m
gitea-0                       1/1     Running       3          3h33m
mysql-6fc5954fc5-dw9k9        0/1     Terminating   226        111d
nexus-0                       1/1     Running       9          3h2m
srpingboot-686b7bff78-9shhm   1/1     Running       0          5m18s
traefik-hn6n8                 1/1     Terminating   7          206d

kubectl get svc
srpingboot       NodePort       10.110.55.83     <none>        8080:30093/TCP
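You can also hit the service through its NodePort; assuming the Spring Boot actuator's health endpoint is among those exposed, this should return {"status":"UP"}:

curl http://172.16.3.130:30093/actuator/health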

Finally, check the log: no exceptions, startup succeeded! A complete Drone-based CI/CD pipeline is in place.

[root@node0 kubectls]# kubectl logs -f srpingboot-686b7bff78-9shhm

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.4.5)

2021-06-20 08:04:11.667  INFO 6 --- [           main] c.e.s.SpringBootTestDemoApplication      : Starting SpringBootTestDemoApplication v0.0.1-SNAPSHOT using Java 1.8.0_292 on srpingboot-686b7bff78-9shhm with PID 6 (/app.jar started by root in /)
2021-06-20 08:04:11.672  INFO 6 --- [           main] c.e.s.SpringBootTestDemoApplication      : No active profile set, falling back to default profiles: default
2021-06-20 08:04:13.523  WARN 6 --- [           main] io.undertow.websockets.jsr               : UT026010: Buffer pool was not set on WebSocketDeploymentInfo, the default pool will be used
2021-06-20 08:04:13.547  INFO 6 --- [           main] io.undertow.servlet                      : Initializing Spring embedded WebApplicationContext
2021-06-20 08:04:13.547  INFO 6 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1788 ms
2021-06-20 08:04:15.013  INFO 6 --- [           main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 2 endpoint(s) beneath base path '/actuator'
2021-06-20 08:04:15.203  INFO 6 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2021-06-20 08:04:15.481  INFO 6 --- [           main] io.undertow                              : starting server: Undertow - 2.2.7.Final
2021-06-20 08:04:15.497  INFO 6 --- [           main] org.xnio                                 : XNIO version 3.8.0.Final
2021-06-20 08:04:15.514  INFO 6 --- [           main] org.xnio.nio                             : XNIO NIO Implementation Version 3.8.0.Final
2021-06-20 08:04:15.657  INFO 6 --- [           main] org.jboss.threads                        : JBoss Threads version 3.1.0.Final
2021-06-20 08:04:15.720  INFO 6 --- [           main] o.s.b.w.e.undertow.UndertowWebServer     : Undertow started on port(s) 8080 (http)
2021-06-20 08:04:16.080  INFO 6 --- [           main] c.e.s.SpringBootTestDemoApplication      : Started SpringBootTestDemoApplication in 4.915 seconds (JVM running for 5.544)