Preface

Let me start with a bit of background on why I'm building this monitoring stack. Our servers currently run on Alibaba Cloud ECS instances, which come with CPU and bandwidth monitoring but offer nothing at the application level for MySQL, Nginx, or our APIs, so I decided to set this stack up myself. As for why the database isn't on Alibaba Cloud RDS: RDS currently does not support the MyISAM engine, which our business still depends on, so that wasn't an option.

Configuration

Below is the docker-compose.yml for this stack. Since this is also my first time playing with Prometheus and I don't yet understand its alerting rules very well, I'm only including a simple prometheus.yml to go with it.

docker-compose.yml

version: '3'
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - /YOUR_YAML_DIR/prometheus.yml:/etc/prometheus/prometheus.yml
      - /etc/localtime:/etc/localtime:ro
    networks:
      - "net"
    depends_on:
      - mysql
    links:
      - mysql:mysql
  grafana:
    image: grafana/grafana:7.0.1-ubuntu
    volumes:
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
    networks:
      - "net"
    depends_on:
      - prometheus
    links:
      - prometheus:prometheus
  mysql:
    image: prom/mysqld-exporter:latest
    volumes:
      - /etc/localtime:/etc/localtime:ro
    environment:
      # Connection string for mysqld-exporter; replace DB_USER/DB_PASSWORD/DB_HOST/DB_PORT with real values
      DATA_SOURCE_NAME: "DB_USER:DB_PASSWORD@(DB_HOST:DB_PORT)/"
    networks:
      - "net"   # must join the same network as prometheus so the mysql:9104 target resolves
networks:
  net:
    external: true
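
One thing worth noting: the `net` network is declared as `external: true`, so it has to be created before bringing the stack up (for example with `docker network create net`), after which `docker-compose up -d` starts the three containers.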

prometheus.yml

global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'mysql'
    static_configs:
      - targets: ['mysql:9104']
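
As mentioned above, alerting rules are not covered in this config. As a rough sketch of where they would go, the file below assumes the `mysql_up` metric exposed by mysqld-exporter and a rules file mounted into the Prometheus container at /etc/prometheus/rules.yml; it would also need a `rule_files: [/etc/prometheus/rules.yml]` entry in prometheus.yml and an extra volume line in docker-compose.yml. The alert name, duration, and labels are just placeholders to adjust for your setup.

rules.yml (example sketch)

groups:
  - name: mysql-alerts
    rules:
      - alert: MySQLDown
        # mysqld-exporter sets mysql_up to 0 when it cannot reach the database
        expr: mysql_up == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "MySQL instance {{ $labels.instance }} is unreachable"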