Deploying Common Prometheus Exporters

  • Prometheus deployment
  • Node exporter
  • Process exporter
  • Redis exporter
  • MySQL exporter
  • OracleDB exporter

Prometheus deployment

Local deployment:

wget https://github.com/prometheus/prometheus/releases/download/v*/prometheus-*.*-amd64.tar.gz
tar xvf prometheus-*.*-amd64.tar.gz

cd prometheus-*.*
./prometheus --config.file=./prometheus.yml
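
By default the server listens on port 9090; a quick smoke test once it is running:

curl http://localhost:9090/metrics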

Containerized deployment (bind-mount the host's prometheus directory into the container):

mkdir -vp /opt/prometheus/data

docker run \
    -p 9090:9090 \
    -v /opt/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
    -v /opt/prometheus/data:/prometheus \
    prom/prometheus
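
The bind mount above expects a prometheus.yml to already exist on the host; a minimal example that has Prometheus scrape itself:

global:
  scrape_interval: 15s

scrape_configs:
- job_name: prometheus
  static_configs:
  - targets: ['localhost:9090']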

Node exporter

Local deployment:

wget https://github.com/prometheus/node_exporter/releases/download/v<VERSION>/node_exporter-<VERSION>.<OS>-<ARCH>.tar.gz
tar xvfz node_exporter-*.*-amd64.tar.gz

cd node_exporter-*.*-amd64
./node_exporter

curl http://localhost:9100/metrics
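
To keep node exporter running as a service, a systemd unit is one option; a minimal sketch, assuming the binary has been copied to /usr/local/bin and a node_exporter system user exists:

cat > /etc/systemd/system/node_exporter.service << EOF
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
User=node_exporter
ExecStart=/usr/local/bin/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start node_exporter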

When deploying node exporter in a container, the host directories to be monitored must be bind-mounted into the container. Node exporter then uses the --path.rootfs flag as the prefix for accessing the host filesystem.

docker run -d \
  --net="host" \
  --pid="host" \
  -v "/:/host:ro,rslave" \
  quay.io/prometheus/node-exporter:latest \
  --path.rootfs=/host --no-collector.systemd

The equivalent docker compose file:

---
version: '3.8'

services:
  node_exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    command:
      - '--path.rootfs=/host'
      - '--no-collector.systemd'
    network_mode: host
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'
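
Start it with:

docker compose up -d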

prometheus.yml configuration:

global:
  scrape_interval: 15s

scrape_configs:
- job_name: node
  static_configs:
  - targets: ['<NODE_EXPORTER_IP>:9100']

Process exporter

The following uses monitoring the mysqld process as an example.

Local deployment:

wget https://github.com/ncabatoff/process-exporter/releases/download/v0.7.10/process-exporter-0.7.10.linux-amd64.tar.gz

tar -zxvf process-exporter-0.7.10.linux-amd64.tar.gz -C /usr/local
mv /usr/local/process-exporter-0.7.10.linux-amd64 /usr/local/process_exporter

cd /usr/local/process_exporter && ./process-exporter -procnames=mysqld

Containerized deployment:

# Specify the config file via -config.path
docker run -d --rm -p 9256:9256 --privileged \
-v /proc:/host/proc \
-v `pwd`:/config ncabatoff/process-exporter \
--procfs /host/proc -threads=false \
-config.path /config/filename.yml

# Specify the monitored processes via -procnames
docker run -d --rm -p 9256:9256 --privileged \
-v /proc:/host/proc \
-v `pwd`:/config ncabatoff/process-exporter \
--procfs /host/proc -threads=false \
-procnames=mysqld
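
Either way, verify that metrics are being exposed:

curl http://localhost:9256/metrics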

Process exporter configuration file (the filename.yml mounted above):

process_names:
 - name: "{{.Matches}}"
   cmdline:
   - 'mysqld'
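
Besides cmdline regexes, the configuration also supports matching on the command name (comm) or the executable path (exe); a sketch of those two styles (the path below is illustrative):

process_names:
 # Group processes by their command name from /proc/<pid>/comm
 - comm:
   - mysqld
 # Group processes by executable path
 - exe:
   - /usr/sbin/mysqld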

prometheus.yml configuration:

global:
  scrape_interval: 15s

scrape_configs:
- job_name: Process
  static_configs:
  - targets: ['<PROCESS_EXPORTER_IP>:9256']

Redis exporter

Supported versions: Redis 2.x, 3.x, 4.x, 5.x, 6.x, 7.x

Build from source:

git clone https://github.com/oliver006/redis_exporter.git
cd redis_exporter
go build .

Local deployment (print the version to check the build, then start the exporter):

./redis_exporter --version
./redis_exporter

Containerized deployment:

docker run -d --name redis_exporter -p 9121:9121 oliver006/redis_exporter
docker run -d --name redis_exporter --network host oliver006/redis_exporter  # host network mode

curl -X GET http://localhost:9121/metrics

prometheus.yml configuration:

scrape_configs:
  - job_name: redis_exporter
    static_configs:
    - targets: ['<REDIS-EXPORTER-HOSTNAME>:9121']
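
A single redis_exporter instance can also scrape several Redis servers through its /scrape endpoint; a sketch of the multi-target pattern described in the redis_exporter README (hostnames are placeholders):

scrape_configs:
  - job_name: redis_exporter_targets
    metrics_path: /scrape
    static_configs:
    - targets: ['redis://redis-host-01:6379', 'redis://redis-host-02:6379']
    relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: <REDIS-EXPORTER-HOSTNAME>:9121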

MySQL exporter

Supported versions: MySQL >= 5.6, MariaDB >= 10.3

Required privileges:

CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'XXXXXXXX' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';
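
The exporter authenticates using a my.cnf-style credentials file passed via --config.my-cnf; a minimal sketch (password and host values are placeholders):

[client]
user=exporter
password=XXXXXXXX
host=localhost
port=3306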

Build from source:

git clone https://github.com/prometheus/mysqld_exporter.git
cd mysqld_exporter
make build

Local deployment:

./mysqld_exporter --web.listen-address=:9104 \
--no-collect.info_schema.query_response_time \
--no-collect.info_schema.innodb_cmp \
--no-collect.info_schema.innodb_cmpmem \
--collect.info_schema.processlist --collect.binlog_size

容器化部署:

docker network create my-mysql-network
docker pull prom/mysqld-exporter

docker run -d \
  -p 9104:9104 \
  --network my-mysql-network \
  prom/mysqld-exporter \
  --config.my-cnf=<path_to_cnf>

# host network mode
docker run -d \
  --network host \
  prom/mysqld-exporter \
  --config.my-cnf=<path_to_cnf>
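
In either mode, confirm the exporter responds:

curl http://localhost:9104/metrics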

prometheus.yml configuration:

scrape_configs:
  - job_name: mysqld_exporter
    static_configs:
    - targets: ['<MYSQLD-EXPORTER-HOSTNAME>:9104']        

OracleDB exporter

Local deployment (if no Oracle software is installed on the host, install Oracle Instant Client Basic first):

mkdir /etc/oracledb_exporter
chown root:oracledb_exporter /etc/oracledb_exporter
chmod 775 /etc/oracledb_exporter
# Put config files in /etc/oracledb_exporter
# Put the binary in /usr/local/bin

cat > /etc/systemd/system/oracledb_exporter.service << EOF
[Unit]
Description=Service for oracle telemetry client
After=network.target
[Service]
Type=simple
#!!! Set your values and uncomment
#User=oracledb_exporter
#Environment="CUSTOM_METRICS=/etc/oracledb_exporter/custom-metrics.toml"
ExecStart=/usr/local/bin/oracledb_exporter  \
  --default.metrics "/etc/oracledb_exporter/default-metrics.toml"  \
  --log.level error --web.listen-address 0.0.0.0:9161
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start oracledb_exporter
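
Enable it at boot and confirm that metrics are exposed on the configured port:

systemctl enable oracledb_exporter
curl http://localhost:9161/metrics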

Containerized deployment:

docker pull ghcr.io/iamseth/oracledb_exporter:0.5.0

docker run -it --rm -p 9161:9161 ghcr.io/iamseth/oracledb_exporter:0.5.0 \
--default.metrics "/etc/oracledb_exporter/default-metrics.toml"  \
--custom.metrics "/etc/oracledb_exporter/custom-metrics.toml"  \
--log.level error

Before running oracledb exporter, the DATA_SOURCE_NAME environment variable must be set (for the containerized deployment above, pass it with docker run -e DATA_SOURCE_NAME=...):

# export Oracle location:
export DATA_SOURCE_NAME=oracle://system:password@oracle-sid
# or using a complete url:
export DATA_SOURCE_NAME=oracle://user:password@myhost:1521/service

# 19c client for primary/standby configuration
export DATA_SOURCE_NAME=oracle://user:password@primaryhost:1521,standbyhost:1521/service
# 19c client for primary/standby configuration with options
export DATA_SOURCE_NAME='oracle://user:password@primaryhost:1521,standbyhost:1521/service?connect_timeout=5&transport_connect_timeout=3&retry_count=3'

# 19c client for ASM instance connection (requires SYSDBA)
export DATA_SOURCE_NAME=oracle://user:password@primaryhost:1521,standbyhost:1521/+ASM?as=sysdba

# Then run the exporter
/path/to/binary/oracledb_exporter --log.level error --web.listen-address 0.0.0.0:9161

The database user that oracledb exporter connects as must have SELECT privileges on the following data dictionary views:

dba_tablespace_usage_metrics
dba_tablespaces
v$system_wait_class
v$asm_diskgroup_stat
v$datafile
v$sysstat
v$process
v$waitclassmetric
v$session
v$resource_limit
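
A sketch of the matching grants, assuming a dedicated monitoring user named exporter (Oracle requires the underscore form v_$ when granting on V$ views):

CREATE USER exporter IDENTIFIED BY XXXXXXXX;
GRANT CREATE SESSION TO exporter;
GRANT SELECT ON dba_tablespace_usage_metrics TO exporter;
GRANT SELECT ON dba_tablespaces TO exporter;
GRANT SELECT ON v_$system_wait_class TO exporter;
GRANT SELECT ON v_$asm_diskgroup_stat TO exporter;
GRANT SELECT ON v_$datafile TO exporter;
GRANT SELECT ON v_$sysstat TO exporter;
GRANT SELECT ON v_$process TO exporter;
GRANT SELECT ON v_$waitclassmetric TO exporter;
GRANT SELECT ON v_$session TO exporter;
GRANT SELECT ON v_$resource_limit TO exporter;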

Custom metrics can be defined for oracledb exporter by passing a TOML file via --custom.metrics.

[[metric]]
context = "slow_queries"
metricsdesc = { p95_time_usecs= "Gauge metric with percentile 95 of elapsed time.", p99_time_usecs= "Gauge metric with percentile 99 of elapsed time." }
request = "select  percentile_disc(0.95)  within group (order by elapsed_time) as p95_time_usecs, percentile_disc(0.99)  within group (order by elapsed_time) as p99_time_usecs from v$sql where last_active_time >= sysdate - 5/(24*60)"

[[metric]]
context = "big_queries"
metricsdesc = { p95_rows= "Gauge metric with percentile 95 of returned rows.", p99_rows= "Gauge metric with percentile 99 of returned rows." }
request = "select  percentile_disc(0.95)  within group (order by rownum) as p95_rows, percentile_disc(0.99)  within group (order by rownum) as p99_rows from v$sql where last_active_time >= sysdate - 5/(24*60)"

[[metric]]
context = "size_user_segments_top100"
metricsdesc = {table_bytes="Gauge metric with the size of the tables in user segments."}
labels = ["segment_name"]
request = "select * from (select segment_name,sum(bytes) as table_bytes from user_segments where segment_type='TABLE' group by segment_name) order by table_bytes DESC FETCH NEXT 100 ROWS ONLY"

[[metric]]
context = "size_user_segments_top100"
metricsdesc = {table_partition_bytes="Gauge metric with the size of the table partition in user segments."}
labels = ["segment_name"]
request = "select * from (select segment_name,sum(bytes) as table_partition_bytes from user_segments where segment_type='TABLE PARTITION' group by segment_name) order by table_partition_bytes DESC FETCH NEXT 100 ROWS ONLY"

[[metric]]
context = "size_user_segments_top100"
metricsdesc = {cluster_bytes="Gauge metric with the size of the cluster in user segments."}
labels = ["segment_name"]
request = "select * from (select segment_name,sum(bytes) as cluster_bytes from user_segments where segment_type='CLUSTER' group by segment_name) order by cluster_bytes DESC FETCH NEXT 100 ROWS ONLY"

[[metric]]
context = "size_dba_segments_top100"
metricsdesc = {table_bytes="Gauge metric with the size of the tables in dba segments."}
labels = ["segment_name"]
request = "select * from (select segment_name,sum(bytes) as table_bytes from dba_segments where segment_type='TABLE' group by segment_name) order by table_bytes DESC FETCH NEXT 100 ROWS ONLY"

[[metric]]
context = "size_dba_segments_top100"
metricsdesc = {table_partition_bytes="Gauge metric with the size of the table partition in dba segments."}
labels = ["segment_name"]
request = "select * from (select segment_name,sum(bytes) as table_partition_bytes from dba_segments where segment_type='TABLE PARTITION' group by segment_name) order by table_partition_bytes DESC FETCH NEXT 100 ROWS ONLY"

[[metric]]
context = "size_dba_segments_top100"
metricsdesc = {cluster_bytes="Gauge metric with the size of the cluster in dba segments."}
labels = ["segment_name"]
request = "select * from (select segment_name,sum(bytes) as cluster_bytes from dba_segments where segment_type='CLUSTER' group by segment_name) order by cluster_bytes DESC FETCH NEXT 100 ROWS ONLY"

[[metric]]
context = "cache_hit_ratio"
metricsdesc = {percentage="Gauge metric with the cache hit ratio."}
request = "select Round(((Sum(Decode(a.name, 'consistent gets', a.value, 0)) + Sum(Decode(a.name, 'db block gets', a.value, 0)) - Sum(Decode(a.name, 'physical reads', a.value, 0))  )/ (Sum(Decode(a.name, 'consistent gets', a.value, 0)) + Sum(Decode(a.name, 'db block gets', a.value, 0)))) *100,2) as percentage FROM v$sysstat a"

[[metric]]
context = "startup"
metricsdesc = {time_seconds="Database startup time in seconds."}
request = "SELECT (SYSDATE - STARTUP_TIME) * 24 * 60 * 60 AS time_seconds FROM V$INSTANCE"

prometheus.yml configuration:

- job_name: oracledb_exporter
  scrape_interval: 50s
  scrape_timeout: 50s
  static_configs:
  - targets: ['<ORACLEDB_EXPORTER_IP>:9161']


