K8s
Overview:
There are several ways to expose services in k8s:
- Proxy + ClusterIP
- NodePort
- LoadBalancer
- Ingress
1. Proxy + ClusterIP
Custom proxy + cluster-internal IP.
There are a few scenarios where you would use Kubernetes proxy mode to access your services:
• Debugging your services, or connecting to them directly from your laptop for some reason.
• Allowing internal traffic, displaying internal dashboards, etc.
Because this method requires running kubectl as an authenticated user, you should not use it to expose your services to the internet or use it in production.
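As a sketch, the proxy approach is typically driven with kubectl proxy (the namespace and service name below are hypothetical, and a working kubeconfig is assumed):

```shell
# Expose the API server locally (listens on 127.0.0.1 by default)
kubectl proxy --port=8080

# In another terminal: reach a ClusterIP service through the apiserver's proxy path
curl http://localhost:8080/api/v1/namespaces/my-ns/services/my-service:80/proxy/
```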
2. NodePort (the approach currently in use):
A NodePort service is the most primitive way to route external traffic to your service. NodePort, as the name implies, opens a specific port on every node (VM), and any traffic sent to that port is forwarded to the corresponding service.
Pros: simple and fast.
Cons:
1. Exposing services this way effectively punches holes in the k8s environment: every worker node has to open externally reachable ports. As applications are added, the cluster's ports are consumed quickly and become hard to manage. Worse, it becomes impossible to designate dedicated edge servers, because every node is an edge node.
2. When a service is exposed via nodePort, requests may be forwarded through a second hop inside the cluster:
that second hop rewrites the packet source via SNAT, so applications inside the cluster cannot see the user's real IP, and the gateway we develop cannot apply IP-based traffic control.
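For reference, a minimal NodePort service manifest looks like this (names and ports are placeholders); the default externalTrafficPolicy of Cluster is what permits the second-hop SNAT described above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport        # hypothetical name
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080        # opened on every node
  # externalTrafficPolicy defaults to Cluster, which allows the
  # SNAT second hop; Local would preserve the client IP instead
```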
3. LoadBalancer
A LoadBalancer service is the standard way to expose a service to the internet. On GKE, this spins up a Network Load Balancer[2] that gives you a single IP address which forwards all traffic to your service.
The biggest downside is that each service exposed with a LoadBalancer gets its own IP address, and you pay for every LoadBalancer you use, which can get very expensive.
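A minimal LoadBalancer service is just a type change (names are placeholders); the cloud provider allocates one load balancer and external IP per such service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-lb          # hypothetical name
spec:
  type: LoadBalancer     # the cloud provider provisions one LB (and IP) for this service
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
```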
4. Ingress
Ingress is probably the most powerful way to expose your services, but also the most complex. There are many types of Ingress controllers, including Google Cloud Load Balancer, Nginx, Contour, Istio, and more. There are also plugins such as cert-manager[5], which can automatically provision SSL certificates for your services.
Pros: powerful and free.
Cons: complex to set up.
Our current requirements:
- The system serves third-party systems and must control access by the real client IP, so IP pass-through is required.
- High availability must be guaranteed.
- Blue-green deployment requires traffic switching, which needs Ingress.
The HA Ingress architecture is as follows:
Installation steps:
1. Request a VIP (virtual IP) and map it to DNS. If you do not have permission to add entries to the company's internal DNS but need to test locally, you can edit the hosts file on your local machine and add the VIP-to-domain mapping there.
2. Install keepalived on each of the three worker nodes.
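For local testing, the hosts entry is a one-line mapping (the VIP and domain below are placeholders):

```
# /etc/hosts (C:\Windows\System32\drivers\etc\hosts on Windows)
10.0.0.100  ingress.example.com
```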
# Install keepalived
yum install -y keepalived

Configuration file: /etc/keepalived/keepalived.conf
Back up the original first:
cd /etc/keepalived
cp keepalived.conf keepalived_BK.conf

Run on the first machine:
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        xxxxxx
    }
    notification_email_from xxxxxxx
    smtp_server 10.xx.xx.xx
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    #vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.xx.xx.xx
    }
}
EOF

Run on the second machine:
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        xxxxxxx
    }
    notification_email_from xxxxxxx
    smtp_server 10.xx.xx.xx
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    #vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.xx.xx.xx
    }
}
EOF

# Start keepalived
systemctl start keepalived && systemctl enable keepalived

# Verification:
# 1. ping the virtual IP
# 2. systemctl status keepalived
# 3. ip a (check that the virtual IP is bound to the physical NIC)
# 4. systemctl stop/start keepalived: stop the priority-100 node and check that the
#    VIP floats to another machine's NIC; after a restart, check that it floats back
3. Label the worker nodes:
kubectl label node node1 labelName=ingress-controller
kubectl label node node2 labelName=ingress-controller
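To confirm the labels took effect, you can list the matching nodes (a sketch; assumes a working kubeconfig):

```shell
# The labeled nodes should appear in the output
kubectl get nodes -l labelName=ingress-controller
```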
4. Install nginx-ingress-controller on the three worker nodes. Changes made to mandatory.yaml:
(1) Change the Deployment to a DaemonSet
(2) Point the pod node selector at the labeled nodes
(3) Change the image address
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
        labelName: ingress-controller
      containers:
        - name: nginx-ingress-controller
          image: siriuszg/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---
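After applying the modified mandatory.yaml, it is worth checking that the DaemonSet scheduled one controller pod onto each labeled node (a sketch; assumes a working kubeconfig):

```shell
kubectl apply -f mandatory.yaml
# One Running nginx-ingress-controller pod should be listed per labeled node
kubectl get pods -n ingress-nginx -o wide
```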
5. Create a Service that passes the real client IP through; note that externalTrafficPolicy is Local:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30030
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30031
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  externalTrafficPolicy: Local
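With externalTrafficPolicy: Local, kube-proxy only delivers traffic to controller pods on the node that received it, skipping the SNAT second hop, which is what preserves the client IP. A quick way to confirm the field is applied (a sketch; assumes a working kubeconfig):

```shell
# Expected to print: Local
kubectl get svc ingress-nginx -n ingress-nginx -o jsonpath='{.spec.externalTrafficPolicy}'
```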
6. Create the Service for the application; note that its type changes from NodePort to ClusterIP:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: test-kube-dev
  name: test-deploy
  labels:
    app: test-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-dev
  template:
    metadata:
      name: test
      labels:
        app: test-dev
    spec:
      containers:
        - name: test
          image: xxxxxxxxxxxx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
      restartPolicy: Always
      imagePullSecrets:
        - name: harbor-secret-name
---
apiVersion: v1
kind: Service
metadata:
  namespace: test-kube-dev
  name: test-service
spec:
  selector:
    app: test-dev
  ports:
    - name: tomcatport
      port: 8080
      targetPort: 8080
  type: ClusterIP
  sessionAffinity: ClientIP
7. Create the Ingress for that Service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: test-kube-dev
  name: test-dev-ing
  # annotations:
  #   nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: ingress.xxxx
      http:
        paths:
          # legacy service
          - path: /test
            backend:
              serviceName: test-service
              servicePort: 8080
8. After deploying the application, verify that it is reachable and that IP pass-through works.
The access URL is: <VIP domain>:<ingress Service port>/<path>
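Putting the pieces together, the checks can be sketched as follows (the domain and nodePort come from the examples above; how you observe the client IP depends on your backend's logging):

```shell
# Reachability: the host resolves to the VIP via DNS or the hosts file
curl http://ingress.xxxx:30030/test

# IP pass-through: have the backend log the X-Real-IP / X-Forwarded-For
# headers and confirm they carry the real client address, not a node IP
```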

