Setting up Prometheus + Grafana monitoring on Kubernetes


Environment

NFS server

A working K8s cluster

Internal network address (NFS server): 172.31.189.239

Services are exposed via NodePort

install and configure NFS

# install dependencies
yum -y install nfs-utils rpcbind

# enable at boot
systemctl enable rpcbind.service
systemctl enable nfs-server.service
systemctl start rpcbind.service      # listens on port 111
systemctl start nfs-server.service   # listens on port 2049

# create the shared directory
# mkdir -p /nfs3/grafana/data
chown nfsnobody:nfsnobody /nfs3/grafana/data
# cat /etc/exports
/nfs3/grafana/data 172.31.189.0/24(rw,async,all_squash)
# exportfs -rv
exporting 172.31.189.0/24:/nfs3/grafana/data

Remember to test that the NFS export is reachable, e.g. run `showmount -e 172.31.189.239` from a client node, or mount the export and write a test file.

git clone

git clone https://github.com/lijinghuatongxue/k8s_yaml.git

create ns

kubectl apply -f namespace.yaml # create ns
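The repo's namespace.yaml creates the ns-monitor namespace that everything below is deployed into. A minimal sketch, assuming the file defines only the namespace itself:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-monitor
```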

prometheus node yaml

Deploy the node exporter first:

kubectl apply -f node-exporter.yaml #create node
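node-exporter typically runs as a DaemonSet so every node gets scraped. A hedged sketch of what node-exporter.yaml likely contains (image tag and labels are assumptions; the 9100/31672 ports match the service listing later in this post):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: ns-monitor
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter    # pin a version in practice
        ports:
        - containerPort: 9100        # default node-exporter port
---
apiVersion: v1
kind: Service
metadata:
  name: node-exporter-service
  namespace: ns-monitor
spec:
  type: NodePort
  selector:
    app: node-exporter
  ports:
  - port: 9100
    nodePort: 31672
```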

prometheus server yaml

Pay attention to the persistentVolumeReclaimPolicy setting and adapt the values to your environment: `path: /nfs3/grafana/data` is the export directory on the NFS server, and `server: 172.31.189.239` is my NFS server's address.

kubectl apply -f prometheus.yaml # create prometheus server
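Based on the `kubectl get pv` output later in this post (5Gi, RWO, Recycle reclaim policy), the NFS-backed PV/PVC in prometheus.yaml presumably looks something like this sketch (the names match the output; the rest is an assumption):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-data-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /nfs3/grafana/data      # export directory on the NFS server
    server: 172.31.189.239        # NFS server address
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data-pvc
  namespace: ns-monitor
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```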

grafana server yaml

The PV/PVC setup here works the same way as for the Prometheus server.

kubectl apply -f grafana.yaml # create grafana

Resources list

pv

➜  pro kubectl get pv -A
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
grafana-data-pv      5Gi        RWO            Recycle          Bound    ns-monitor/grafana-data-pvc                              34m
prometheus-data-pv   5Gi        RWO            Recycle          Bound    ns-monitor/prometheus-data-pvc                           33m

deployment

➜  pro kubectl get  deployment -A
NAMESPACE       NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx   nginx-ingress-controller   1/1     1            1           6h35m
kube-system     coredns                    2/2     2            2           6h38m
kube-system     tiller-deploy              1/1     1            1           4h55m
ns-monitor      grafana                    1/1     1            1           35m
ns-monitor      prometheus                 1/1     1            1           33m

svc

This is also where we find the externally reachable ports, since the services are exposed as NodePort:

prometheus-service is on port 31910

node-exporter-service is on port 31672

➜  pro kubectl get  svc -A
NAMESPACE       NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default         kubernetes              ClusterIP      10.96.0.1        <none>        443/TCP                      6h39m
ingress-nginx   ingress-nginx           LoadBalancer   10.109.247.227   <pending>     80:31828/TCP,443:31944/TCP   6h35m
kube-system     kube-dns                ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       6h39m
kube-system     tiller-deploy           ClusterIP      10.109.62.34     <none>        44134/TCP                    4h55m
ns-monitor      grafana-service         NodePort       10.103.95.54     <none>        3000:31715/TCP               35m
ns-monitor      node-exporter-service   NodePort       10.104.14.3      <none>        9100:31672/TCP               144m
ns-monitor      prometheus-service      NodePort       10.109.16.226    <none>        9090:31910/TCP               34m

pod

➜  pro kubectl get pod -n ns-monitor
NAME                        READY   STATUS    RESTARTS   AGE
grafana-7bcb754f9d-5f7d9    1/1     Running   0          47m
node-exporter-kxvk7         1/1     Running   3          156m
prometheus-b54b8f85-nl7cc   1/1     Running   0          46m

grafana setting

Add a Grafana data source ==> choose Prometheus ==> use this URL: http://prometheus-service.ns-monitor:9090 ==> then import your dashboard templates.
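Instead of clicking through the UI, the same data source can be declared with Grafana's provisioning mechanism. A minimal sketch (the file path follows Grafana's standard provisioning convention and is not from this repo):

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
- name: Prometheus
  type: prometheus
  access: proxy
  url: http://prometheus-service.ns-monitor:9090
  isDefault: true
```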

demo

Browse to http://<your node IP>:31715 to reach Grafana.