The previous post covered the Memcached, Redis, MariaDB, and arq pieces that lyanna depends on; this one introduces the lyanna application itself and how to get it running locally.

The lyanna application

The application configuration lives in k8s/app.yaml; the explanations are inline as comments:

apiVersion: apps/v1
kind: Deployment  # lyanna is a stateless application, so a Deployment is the right resource object
metadata:
  name: lyanna-deployment
  labels:
    app.kubernetes.io/name: lyanna
spec:
  replicas: 4  # start 4 replicas (Pods) to serve requests
  selector:
    matchLabels:
      app.kubernetes.io/name: lyanna  # must match the Service section; requests are routed to Pods carrying this label
  strategy:
    rollingUpdate:  # use rolling updates
      maxSurge: 25%  # maximum percentage by which the desired Pod count may be exceeded (an absolute number also works); here it means at most 5 Pods during a rollout
      maxUnavailable: 25%  # maximum number of Pods that may be unavailable during an upgrade; here it means at most one Pod unavailable
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: lyanna
    spec:
      containers:  # the Pod contains a single container
      - image: dongweiming/lyanna:latest  # after each push to master, an automated Docker build produces the latest-tagged image
        name: lyanna-web
        command: ['sh', '-c', './gunicorn app:app --bind 0.0.0.0:8000 --worker-class sanic.worker.GunicornWorker --threads=2']  # similar to the earlier setup, but instead of running python app.py directly, the app is served by Gunicorn, a Python application server
        env:  # environment variables; the app discovers all the services configured earlier through these (a value may be an internal DNS name)
        - name: MEMCACHED_HOST
          valueFrom:
            configMapKeyRef:
              name: lyanna-cfg
              key: memcached_host
        - name: DB_URL
          valueFrom:
            configMapKeyRef:
              name: lyanna-cfg
              key: db_url
        - name: REDIS_SENTINEL_SVC_HOST
          valueFrom:
            configMapKeyRef:
              name: lyanna-cfg
              key: redis_sentinel_host
        - name: REDIS_SENTINEL_SVC_POST
          valueFrom:
            configMapKeyRef:
              name: lyanna-cfg
              key: redis_sentinel_port
        - name: PYTHONPATH  # PYTHONPATH is still required
          value: $PYTHONPATH:/usr/local/src/aiomcache:/usr/local/src/tortoise:/usr/local/src/arq:/usr/local/src
        #imagePullPolicy: Always
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
          name: http
          protocol: TCP
      initContainers:  # init containers: all three checks below must pass before the lyanna-web container starts
      - name: init-memcached  # wait until the memcached Service's DNS name resolves
        image: busybox
        command: ['sh', '-c', 'until nslookup lyanna-memcached; do echo waiting for Memcached; sleep 0.5; done;']
      - name: init-redis  # wait until the redis Service's DNS name resolves
        image: busybox
        command: ['sh', '-c', 'until nslookup redis-sentinel; do echo waiting for Redis; sleep 0.5; done;']
      - name: init-mariadb  # wait until the mariadb Service's DNS name resolves
        image: busybox
        command: ['sh', '-c', 'until nslookup lyanna-mariadb; do echo waiting for MariaDB; sleep 0.5; done;']
      restartPolicy: Always
---
apiVersion: v1
kind: Service  # the Service that exposes the app
metadata:
  name: lyanna-svc
  labels:
    app.kubernetes.io/name: lyanna
spec:
  ports:
  - name: http
    port: 80  # expose port 80
    protocol: TCP
    targetPort: 8000  # forward traffic to port 8000 on the Pods
  selector:
    app.kubernetes.io/name: lyanna  # must match the Deployment's selector above
  sessionAffinity: ClientIP  # all subsequent requests from the same client are routed to the same Pod
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress  # the Ingress resource
metadata:
  name: lyanna-ing
  labels:
    app.kubernetes.io/name: lyanna
spec:
  rules:
  - host: lyanna.local
    http:
      paths:
      - backend:
          serviceName: lyanna-svc  # all backend traffic is handled by the lyanna-svc Service
          servicePort: 80
        path: /

Two things here are worth highlighting:

command. The command run in the container does not pass the workers flag (-w), i.e. it does not explicitly set the number of Gunicorn processes; in practice the container runs just one master plus one worker. At first I assumed this meant a single process, so following the official tuning advice (further-reading link 1) I added this:

--workers=`python -c "import multiprocessing;print(multiprocessing.cpu_count() * 2 + 1)"`

Within minutes the VM became so sluggish it crashed, and at one point I even had to rebuild it... A tentative piece of advice: don't try to run multiple processes inside a container.
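The likely culprit: Python's multiprocessing.cpu_count() is not cgroup-aware, so inside a container it reports the host machine's core count rather than the container's CPU limit, and the (2 × CPUs) + 1 formula spawns far more workers than the container can serve. A minimal sketch of what the formula evaluates to:

```python
import multiprocessing

# Gunicorn's suggested formula: (2 x CPU cores) + 1.
# In a container, cpu_count() sees the *host's* cores (it reads the
# OS-visible CPU count, not the cgroup quota), so on a many-core host
# this badly over-provisions worker processes.
workers = multiprocessing.cpu_count() * 2 + 1
print(f"cpu_count={multiprocessing.cpu_count()} -> workers={workers}")
```

With Kubernetes, the idiomatic fix is the one app.yaml already uses: keep one worker per container and scale horizontally by raising replicas in the Deployment.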

Ingress. In an earlier post I used Traefik as the Ingress controller; after learning that my company uses the official NGINX Ingress (further-reading link 2), I switched to it here as well. Note that whichever Ingress controller you use, the Ingress manifest itself is identical and needs no changes.
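One caveat for readers on newer clusters: the extensions/v1beta1 Ingress API used above has since been deprecated (and removed in Kubernetes 1.22). Under networking.k8s.io/v1 the same rule would be written roughly like this (a sketch; only the field layout differs):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lyanna-ing
  labels:
    app.kubernetes.io/name: lyanna
spec:
  rules:
  - host: lyanna.local
    http:
      paths:
      - path: /
        pathType: Prefix    # pathType is mandatory in v1
        backend:
          service:          # the backend now nests the service name/port
            name: lyanna-svc
            port:
              number: 80
```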

Installing NGINX Ingress

Setting up NGINX Ingress inside the Minikube VM is very convenient:

❯ minikube addons enable ingress

✅  ingress was successfully enabled

❯ echo "$(minikube ip) lyanna.local" | sudo tee -a /etc/hosts  # point the domain at the VM in the local hosts file; in a moment, visiting this domain will show the locally running site

That's it. After a short wait, the controller shows up as Running in the system Pod list:

❯ kubectl get pods -n kube-system | grep nginx-ingress-controller
nginx-ingress-controller-57bf9855c8-lb8nl   1/1     Running   71         24h

If the image fails to pull because of network issues, you can either pull it manually through a proxy, or pull from a Chinese mirror and re-tag it:

# Option 1: pull through a proxy
❯ docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1  # the image version may change in the future
# Option 2: use a Chinese mirror, then re-tag
❯ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.25.1
❯ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.25.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
❯ docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.25.1

Deploying lyanna locally

Let me walk through the whole deployment once more from scratch:

❯ minikube start --vm-driver hyperkit --cache-images --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers  # start a VM with the hyperkit driver, using a Chinese image mirror
❯ minikube ssh  # enter the VM
$ sudo mkdir -p /etc/docker
$ sudo tee /etc/docker/daemon.json <<-'EOF'  # add Chinese registry mirrors, otherwise pulling images is slow
{
  "registry-mirrors" : [
    "http://hub-mirror.c.163.com",
    "http://docker.mirrors.ustc.edu.cn",
    "http://dockerhub.azk8s.cn"
  ],
  "experimental" : false,
  "debug" : true
}
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker  # restart the Docker daemon
$ exit  # leave the VM
❯ kubectl apply -f k8s/ -R  # create every resource object lyanna needs
deployment.apps/lyanna-deployment created
service/lyanna-svc created
ingress.extensions/lyanna-ing created
daemonset.apps/lyanna-arq created
configmap/lyanna-cfg created
statefulset.apps/lyanna-memcached created
service/lyanna-memcached created
persistentvolume/mariadb-master created
persistentvolume/mariadb-slave created
configmap/lyanna-mariadb-master created
configmap/lyanna-mariadb-slave created
statefulset.apps/lyanna-mariadb-master created
statefulset.apps/lyanna-mariadb-slave created
service/lyanna-mariadb created
service/lyanna-mariadb-slave created
pod/redis-master created
service/redis-sentinel created
replicaset.apps/redis created
replicaset.apps/redis-sentinel created
# the first run needs to pull images, which takes a few minutes; after that every Pod should be in the Running state
❯ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
lyanna-arq-g22j8                     1/1     Running   1          8m45s
lyanna-deployment-5c59c6b5d5-6lvwn   1/1     Running   0          8m45s
lyanna-deployment-5c59c6b5d5-7hz5c   1/1     Running   0          8m44s
lyanna-deployment-5c59c6b5d5-8vmcv   1/1     Running   0          8m44s
lyanna-deployment-5c59c6b5d5-rqf78   1/1     Running   0          8m44s
lyanna-mariadb-master-0              1/1     Running   0          8m44s
lyanna-mariadb-slave-0               1/1     Running   0          8m45s
lyanna-memcached-0                   1/1     Running   0          8m45s
lyanna-memcached-1                   1/1     Running   0          4m32s
lyanna-memcached-2                   1/1     Running   0          4m24s
redis-jnrwn                          1/1     Running   1          8m44s
redis-kzkv5                          1/1     Running   1          8m44s
redis-master                         2/2     Running   0          8m46s
redis-sentinel-6v5kq                 1/1     Running   0          8m44s
# the Pod names alone tell you which part each one belongs to
# next, run the initialization script setup.sh; executing it on any Pod that has the lyanna source code will do, e.g.:
❯ kubectl exec -it lyanna-deployment-5c59c6b5d5-6lvwn  -- sh -c ./setup.sh
Init Finished!
User admin created!!! ID: 1
# now let's look at the Service/DaemonSet/StatefulSet/ReplicaSet/Ingress objects
❯ kubectl get svc  # Service
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes             ClusterIP   10.96.0.1       <none>        443/TCP        16m  # built into the cluster
lyanna-mariadb         ClusterIP   None            <none>        3306/TCP       12m
lyanna-mariadb-slave   ClusterIP   None            <none>        3306/TCP       12m
lyanna-memcached       ClusterIP   None            <none>        11211/TCP      12m
lyanna-svc             NodePort    10.104.253.19   <none>        80:30362/TCP   12m
redis-sentinel         ClusterIP   10.102.28.182   <none>        26379/TCP      12m
❯ kubectl get ds  # DaemonSet
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
lyanna-arq   1         1         1       1            1           <none>          12m
❯ kubectl get statefulset
NAME                    READY   AGE
lyanna-mariadb-master   1/1     13m
lyanna-mariadb-slave    1/1     13m
lyanna-memcached        3/3     13m
❯ kubectl get rs  # ReplicaSet
NAME                           DESIRED   CURRENT   READY   AGE
lyanna-deployment-5c59c6b5d5   4         4         4       13m
redis                          2         2         2       13m
redis-sentinel                 2         2         2       13m
❯ kubectl get ing  # Ingress
NAME         HOSTS          ADDRESS   PORTS   AGE
lyanna-ing   lyanna.local             80      14m

At this point everything is up; open lyanna.local in a browser to see the blog running locally.

Some readers asked, "Why is the MariaDB configuration kept in a separate subdirectory (k8s/optional)?" As mentioned earlier, my blog runs on a cloud server and uses a managed cloud database, so the k8s deployment doesn't need to run MariaDB itself; in that case I simply drop the -R flag:

❯ kubectl apply -f k8s/  # this way the resource objects in the subdirectory are not created

Likewise, Redis, Memcached, and the other services can all be treated as optional installs; just adjust the corresponding settings in k8s/config.yaml so the app can connect to them correctly.
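For reference, the lyanna-cfg ConfigMap that the Deployment's env section reads from would look roughly like this (a sketch: the keys come from the configMapKeyRef entries in app.yaml, while the values here are illustrative placeholders, not the real ones):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: lyanna-cfg
data:
  memcached_host: lyanna-memcached   # headless Service name from above
  db_url: mysql://user:password@lyanna-mariadb:3306/blog  # illustrative DSN
  redis_sentinel_host: redis-sentinel
  redis_sentinel_port: "26379"       # ConfigMap values must be strings
```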

Afterword

The full source code can be found in the k8s directory of the lyanna project.

Further reading

  1. http://docs.gunicorn.org/en/latest/design.html#how-many-workers
  2. https://github.com/kubernetes/ingress-nginx
  3. https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
  4. https://github.com/dongweiming/lyanna/tree/master/k8s