
Documentation

2.12.0

Percona Technical Documentation Team

Percona LLC, © 2020


Table of contents

1. Welcome
1.1 What is Percona Monitoring and Management?
1.2 Architecture
1.3 PMM Client
1.4 PMM Server
1.5 Percona Platform
1.6 Contact Us
2. Setting up
2.1 In this section
2.2 Server
2.3 Client
3. Using
3.1 In this section
3.2 User Interface
3.3 Query Analytics
3.4 Percona Platform
4. How to
4.1 In this section
4.2 Configure
4.3 Upgrade
4.4 Secure
4.5 Optimize
5. Details
5.1 In this section
5.2 Dashboards
5.3 Commands
5.4 API
5.5 Glossary
6. FAQ
6.1 How can I contact the developers?
6.2 What are the minimum system requirements for PMM?
6.3 How can I upgrade from PMM version 1?
6.4 How to control data retention for PMM?
6.5 How often are NGINX logs in PMM Server rotated?
6.6 What privileges are required to monitor a MySQL instance?
6.7 Can I monitor multiple service instances?
6.8 Can I rename instances?
6.9 Can I add an AWS RDS MySQL or Aurora MySQL instance from a non-default AWS partition?
6.10 How do I troubleshoot communication issues between PMM Client and PMM Server?
6.11 What resolution is used for metrics?
6.12 How do I set up Alerting in PMM?
6.13 How do I use a custom Prometheus configuration file inside PMM Server?
6.14 How to troubleshoot an Update?
6.15 What are my login credentials when I try to connect to a Prometheus Exporter?
6.16 How do I troubleshoot VictoriaMetrics?
7. Release Notes
7.1 Percona Monitoring and Management 2.12.0
7.2 Percona Monitoring and Management 2.11.1
7.3 Percona Monitoring and Management 2.11.0
7.4 Percona Monitoring and Management 2.10.1
7.5 Percona Monitoring and Management 2.10.0
7.6 Percona Monitoring and Management 2.9.1
7.7 Percona Monitoring and Management 2.9.0
7.8 Percona Monitoring and Management 2.8.0
7.9 Percona Monitoring and Management 2.7.0
7.10 Percona Monitoring and Management 2.6.1
7.11 Percona Monitoring and Management 2.6.0
7.12 Percona Monitoring and Management 2.5.0
7.13 Percona Monitoring and Management 2.4.0
7.14 Percona Monitoring and Management 2.3.0
7.15 Percona Monitoring and Management 2.2.2
7.16 Percona Monitoring and Management 2.2.1
7.17 Percona Monitoring and Management 2.2.0
7.18 Percona Monitoring and Management 2.1.0
7.19 Percona Monitoring and Management 2.0.1
7.20 Percona Monitoring and Management 2.0.0


1. Welcome

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL,
PostgreSQL, MongoDB, and ProxySQL performance. It is developed by Percona in collaboration with experts in the
field of managed database services, support and consulting.

Note

This documentation covers the latest release: PMM 2.12.0

1.1 What is Percona Monitoring and Management?


PMM is a free and open-source solution that you can run in your own environment for maximum security and
reliability. It provides thorough time-based analysis for MySQL, PostgreSQL and MongoDB servers to ensure that
your data works as efficiently as possible.

1.2 Architecture
The PMM platform is based on a client-server model that enables scalability. It includes the following modules:

• PMM Client installed on every database host that you want to monitor. It collects server metrics, general system
metrics, and Query Analytics data for a complete performance overview.

• PMM Server is the central part of PMM that aggregates collected data and presents it in the form of tables,
dashboards, and graphs in a web interface.

• Percona Platform provides value-added services for PMM.

The modules are packaged for easy installation and usage. You should not need to understand exactly which tools make up each module or how they interact in order to use PMM. However, if you want to leverage the full potential of PMM, the internal structure is important.


PMM is a collection of tools designed to seamlessly work together. Some are developed by Percona and some are
third-party open-source tools.

Note

The overall client-server model is not likely to change, but the set of tools that make up each component may evolve
with the product.

The following sections illustrate how PMM is currently structured.

1.3 PMM Client

Each PMM Client collects various data about general system and database performance, and sends this data to the
corresponding PMM Server.


The PMM Client package consists of the following:

• pmm-admin is a command-line tool for managing PMM Client, for example, adding and removing database
instances that you want to monitor. For more information, see pmm-admin - PMM Administration Tool.

• pmm-agent is a client-side component with a minimal command-line interface. It is the central entry point for the client functionality: it handles the client's authentication, retrieves the client configuration stored on the PMM Server, and manages exporters and other agents.

• node_exporter is an exporter that collects general system metrics.

• mysqld_exporter is an exporter that collects MySQL server metrics.

• mongodb_exporter is an exporter that collects MongoDB server metrics.

• postgres_exporter is an exporter that collects PostgreSQL performance metrics.

• proxysql_exporter is an exporter that collects ProxySQL performance metrics.

To make data transfer from PMM Client to PMM Server secure, all exporters can use SSL/TLS encrypted connections, and their communication with the PMM Server is protected by HTTP basic authentication.

Note

Credentials used in communication between the exporters and the PMM Server are:

• the login is pmm

• the password is the Agent ID, which can be seen, for example, on the Inventory Dashboard.
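
For illustration, these credentials can be used to query an exporter's metrics endpoint directly. The port and URL scheme below are assumptions (PMM assigns exporter ports on the client host), so adjust them to your setup:

# Hypothetical check of a local node_exporter endpoint; replace 42000 with the
# exporter's actual listening port and <AGENT_ID> with the Agent ID shown on the Inventory Dashboard.
curl --user "pmm:<AGENT_ID>" http://localhost:42000/metrics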


1.4 PMM Server

PMM Server runs on the machine that will be your central monitoring host. It is distributed as an appliance via the
following:

• Docker image that you can use to run a container

• OVA (Open Virtual Appliance) that you can run in VirtualBox or another hypervisor

• AMI (Amazon Machine Image) that you can run via Amazon Web Services


PMM Server includes the following tools:

• Query Analytics (QAN) enables you to analyze MySQL query performance over periods of time. In addition to
the client-side QAN agent, it includes the following:

• QAN API is the backend for storing and accessing query data collected by the QAN agent running on a PMM
Client.

• QAN Web App is a web application for visualizing collected Query Analytics data.

• Metrics Monitor provides a historical view of metrics that are critical to a MySQL or MongoDB server instance. It
includes the following:

• VictoriaMetrics, a scalable time-series database. (Replaces Prometheus.)

• ClickHouse is a third-party column-oriented database that facilitates the Query Analytics functionality.

• Grafana is a third-party dashboard and graph builder for visualizing data aggregated (by VictoriaMetrics or
Prometheus) in an intuitive web interface.

• Percona Dashboards is a set of dashboards for Grafana developed by Percona.

All tools can be accessed from the PMM Server web interface.

1.5 Percona Platform


Percona Platform provides the following value-added services to PMM.

1.5.1 Security Threat Tool

Security Threat Tool checks registered database instances for a range of common security issues. This service requires
the Telemetry setting to be on.

1.6 Contact Us
Percona Monitoring and Management is an open source product. We provide ways for anyone to contact developers
and experts directly, submit bug reports and feature requests, and contribute to source code directly.

1.6.1 Contacting the developers

Use the community forum to ask questions about using PMM. Developers and experts will try to help with problems
that you experience.

1.6.2 Reporting bugs

Use the PMM project in JIRA to report bugs and request features. Please register and search for similar issues before
submitting a bug or feature request.

1.6.3 Contributing to development

To explore source code and suggest contributions, see the PMM repository list.

You can fork and clone any Percona repositories, but to have your source code patches accepted please sign the
Contributor License Agreement (CLA).


2. Setting up

2.1 In this section


Installing and running PMM Server on:

• Docker

• Virtual appliances

• AWS Marketplace

Installing and running PMM Clients on:

• MySQL

• Percona Server for MySQL

• MongDB

• PostgreSQL

• ProxySQL

• AWS

• Linux

• External services


2.2 Server

2.2.1 Setting up PMM Server

PMM Server runs as a Docker image, a virtual appliance, or on an AWS instance.

Verifying

In your browser, go to the server by its IP address. If you run your server as a virtual appliance or by using an Amazon machine image, you will need to set up the user name, password, and your public key if you intend to connect to the server by using ssh. This step is not needed if you run PMM Server using Docker.

For example, you would direct your browser to http://192.168.100.1. Since you have not added any monitoring services yet, the site will show only data related to the PMM Server internal services.

Accessing the Components of the Web Interface

• http://192.168.100.1 to access Home Dashboard

• http://192.168.100.1/graph/ to access Metrics Monitor

• http://192.168.100.1/swagger/ to access PMM API.

PMM Server provides user access control, and therefore you will need user credentials to access it:

The default user name is admin , and the default password is also admin . You will be prompted to change the default password at login if you have not already done so.
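
As a quick check that the server is up and the credentials work, you can query the version endpoint used elsewhere in this guide (this sketch assumes the example address 192.168.100.1 and the default admin / admin credentials; -k allows self-signed certificates):

curl -k -u admin:admin https://192.168.100.1/v1/version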


2.2.2 Docker

Introduction

PMM Server can run as a container with Docker 1.12.6 or later. Images are available at https://hub.docker.com/r/
percona/pmm-server.

The Docker tags used here are for the latest version of PMM 2 (2.12.0) but you can specify any available tag to use
the corresponding version of PMM Server.

Metrics collection consumes disk space. PMM needs approximately 1GB of storage for each monitored database
node with data retention set to one week. (By default, data retention is 30 days.) To reduce the size of the
VictoriaMetrics database, you can consider disabling table statistics.
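
For example, table statistics collection can be disabled per service when it is added with pmm-admin; the flag below is the one commonly used for this, but verify it with pmm-admin add mysql --help for your PMM Client version:

# Add a MySQL service with per-table statistics collection disabled
pmm-admin add mysql --disable-tablestats --username=pmm --password=pmm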

Although the minimum amount of memory is 2 GB for one monitored database node, memory usage does not grow
in proportion to the number of nodes. For example, 16GB is adequate for 20 nodes.

Run an image

1. Pull an image.

# Pull the latest 2.x image


docker pull percona/pmm-server:2

2. Create a persistent data container.

docker create --volume /srv \


--name pmm-data percona/pmm-server:2 /bin/true

Caution

PMM Server expects the data volume (specified with --volume ) to be /srv . Using any other value will result in data loss
when upgrading.

3. Run the image to start PMM Server.

docker run --detach --restart always \


--publish 80:80 --publish 443:443 \
--volumes-from pmm-data --name pmm-server \
percona/pmm-server:2

Note

You can disable manual updates via the Home Dashboard PMM Upgrade panel by adding -e DISABLE_UPDATES=true to the
docker run command.
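
For example, the run command from step 3 with manual updates disabled would look like this:

docker run --detach --restart always \
--publish 80:80 --publish 443:443 \
--volumes-from pmm-data --name pmm-server \
-e DISABLE_UPDATES=true \
percona/pmm-server:2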

4. In a web browser, visit server hostname:80 or server hostname:443 to see the PMM user interface.

Backup and upgrade

1. Find out which version is installed.


docker exec -it pmm-server curl -u admin:admin http://localhost/v1/version

Note

Use jq to extract the quoted string value.

sudo apt install jq # Example for Debian, Ubuntu


docker exec -it pmm-server curl -u admin:admin http://localhost/v1/version | jq .version

2. Check container mount points are the same ( /srv ).

docker inspect pmm-data | grep Destination


docker inspect pmm-server | grep Destination

# With jq
docker inspect pmm-data | jq '.[].Mounts[].Destination'
docker inspect pmm-server | jq '.[].Mounts[].Destination'

3. Stop the container and create backups.

docker stop pmm-server


docker rename pmm-server pmm-server-backup
mkdir pmm-data-backup && cd $_
docker cp pmm-data:/srv .

4. Pull and run the latest image.

docker pull percona/pmm-server:2


docker run \
--detach \
--restart always \
--publish 80:80 --publish 443:443 \
--volumes-from pmm-data \
--name pmm-server \
percona/pmm-server:2

5. (Optional) Repeat step 1 to confirm the version, or check the PMM Upgrade panel on the Home Dashboard.

Downgrade and restore

1. Stop and remove the running version.

docker stop pmm-server


docker rm pmm-server

2. Restore backups.

docker rename pmm-server-backup pmm-server


# cd to wherever you saved the backup
docker cp srv pmm-data:/

3. Restore permissions.

docker run --rm --volumes-from pmm-data -it percona/pmm-server:2 chown -R root:root /srv && \
docker run --rm --volumes-from pmm-data -it percona/pmm-server:2 chown -R pmm:pmm /srv/alertmanager && \
docker run --rm --volumes-from pmm-data -it percona/pmm-server:2 chown -R root:pmm /srv/clickhouse && \
docker run --rm --volumes-from pmm-data -it percona/pmm-server:2 chown -R grafana:grafana /srv/grafana && \
docker run --rm --volumes-from pmm-data -it percona/pmm-server:2 chown -R pmm:pmm /srv/logs && \
docker run --rm --volumes-from pmm-data -it percona/pmm-server:2 chown -R postgres:postgres /srv/postgres && \
docker run --rm --volumes-from pmm-data -it percona/pmm-server:2 chown -R pmm:pmm /srv/prometheus && \
docker run --rm --volumes-from pmm-data -it percona/pmm-server:2 chown -R pmm:pmm /srv/victoriametrics && \
docker run --rm --volumes-from pmm-data -it percona/pmm-server:2 chown -R postgres:postgres /srv/logs/postgresql.log

4. Start (don’t run) the image.

docker start pmm-server


2.2.3 Virtual Appliance

Percona provides a virtual appliance for running PMM Server in a virtual machine. It is distributed as an Open Virtual
Appliance (OVA) package, which is a tar archive with necessary files that follow the Open Virtualization Format (OVF).
OVF is supported by most popular virtualization platforms, including:

• VMware - ESXi 6.5

• Red Hat Virtualization

• VirtualBox

• XenServer

• Microsoft System Center Virtual Machine Manager

Supported Platforms for Running the PMM Server Virtual Appliance

The virtual appliance is ideal for running PMM Server on an enterprise virtualization platform of your choice. This page explains how to run the appliance in VirtualBox and VMware Workstation Player, which are good choices for experimenting with PMM at a smaller scale on a local machine. A similar procedure should work for other platforms (including enterprise deployments on VMware ESXi, for example), but additional steps may be required.

The virtual machine used for the appliance runs CentOS 7.

Warning

The appliance must run in a network with DHCP, which will automatically assign an IP address for it. To assign a static IP manually, you need root access.

VirtualBox Using the Command Line

Instead of using the VirtualBox GUI, you can do everything on the command line. Use the VBoxManage command to
import, configure, and start the appliance.

The following script imports the PMM Server appliance from pmm-server-2.12.0.ova and configures it to bridge the
en0 adapter from the host. Then the script routes console output from the appliance to /tmp/pmm-server-console.log .
This is done because the script then starts the appliance in headless (without the console) mode.

To get the IP address for accessing PMM, the script waits for 1 minute until the appliance boots up and returns the
lines with the IP address from the log file.

# Import image
VBoxManage import pmm-server-2.12.0.ova

# Modify NIC settings if needed


VBoxManage list bridgedifs
VBoxManage modifyvm 'PMM Server 2.12.0' --nic1 bridged --bridgeadapter1 'en0: Wi-Fi (AirPort)'

# Log console output into file


VBoxManage modifyvm 'PMM Server 2.12.0' --uart1 0x3F8 4 --uartmode1 file /tmp/pmm-server-console.log

# Start instance
VBoxManage startvm --type headless 'PMM Server 2.12.0'

# Wait for 1 minute and get IP address from the log


sleep 60
grep cloud-init /tmp/pmm-server-console.log


By convention, OVA files start with pmm-server- followed by the full version number, such as 2.12.0.

To use this script, make sure to replace this placeholder with the name of the image that you have downloaded from the PMM download site.

VirtualBox Using the GUI

The following procedure describes how to run the PMM Server appliance using the graphical user interface of
VirtualBox:

1. Download the OVA. The latest version is available at https://www.percona.com/downloads/pmm2/2.12.0/ova.

2. Import the appliance. To do this, open the File menu, click Import Appliance, specify the path to the OVA, and click Continue. Then select Reinitialize the MAC address of all network cards and click Import.

3. Configure network settings to make the appliance accessible from other hosts in your network.

Note

All database hosts must be in the same network as PMM Server, so do not set the network adapter to NAT.

If you are running the appliance on a host with properly configured network settings, select Bridged Adapter in the
Network section of the appliance settings.

4. Start the PMM Server appliance.

If it was assigned an IP address on the network by DHCP, the URL for accessing PMM will be printed in the console
window.

VMware Workstation Player

The following procedure describes how to run the PMM Server appliance using VMware Workstation Player:

1. Download the OVA. The latest version is available at https://www.percona.com/downloads/pmm2/2.12.0/ova.

2. Import the appliance.

a. Open the File menu and click Open.

b. Specify the path to the OVA and click Continue.

Note

You may get an error indicating that import failed. Click Retry and the import should succeed.

3. Configure network settings to make the appliance accessible from other hosts in your network.

If you are running the appliance on a host with properly configured network settings, select Bridged in the
Network connection section of the appliance settings.

4. Start the PMM Server appliance.

Log in as root , password percona and follow the prompts to change the password.

Identifying PMM Server IP Address

PMM Server uses DHCP for security reasons. Use this command in the PMM Server console to find out the server’s IP
address:

hostname -I


Accessing PMM Server

1. Start the virtual machine

2. Open a web browser

3. Enter the server’s IP address

4. Enter the user login and password to access the PMM Server web interface

If you run PMM Server in your browser for the first time, you are requested to supply the user login and password.
The default PMM Server credentials are:

• username: admin

• password: admin

After login you will be proposed to change this default password. Enter the new password twice and click Save. The
PMM Server is now ready and the home page opens.


You are creating a username and password that will be used for two purposes:

1. authentication as a user to PMM - this will be the credentials you need in order to log in to PMM.

2. authentication between PMM Server and PMM Clients - you will re-use these credentials as a part of the server URL
when configuring PMM Client for the first time on a server:

Run this command as root or by using the sudo command

pmm-admin config --server-insecure-tls --server-url=https://admin:admin@<IP Address>:443

Accessing the Virtual Machine

To access the VM with the PMM Server appliance via SSH, you will need to provide your public key:

1. Open the URL for accessing PMM in a web browser. The URL is provided either in the console window or in the
appliance log.

2. Go to PMM > PMM Settings > SSH Key.

3. Enter your public key in the SSH Key field and click the Apply SSH Key button.

After that you can use ssh to log in as the admin user. For example, if PMM Server is running at 192.168.100.1 and
your private key is ~/.ssh/pmm-admin.key , use the following command:

ssh [email protected] -i ~/.ssh/pmm-admin.key

Next Steps

Verify that PMM Server is running by connecting to the PMM web interface using the IP address assigned to the
virtual appliance, then install PMM Client on all database hosts that you want to monitor.

See also

Configuring network interfaces in CentOS


2.2.4 AWS Marketplace

You can run an instance of PMM Server hosted at AWS Marketplace.

Assuming that you have an AWS (Amazon Web Services) account, locate Percona Monitoring and Management Server in
AWS Marketplace (or use this link).

Selecting a region and instance type in the Pricing Information section will give you an estimate of the costs involved.
This is only an indication of costs. You will choose regions and instance types in later steps.

Percona Monitoring and Management Server is provided at no cost, but you may need to pay for infrastructure
costs.

Note

Disk space consumed by PMM Server depends on the number of hosts being monitored. Although each environment
will be unique, you can consider the data consumption figures for the PMM Demo web site which consumes
approximately 230MB/host/day, or ~6.9GB/host at the default 30 day retention period.

For more information, see our blog post How much disk space should I allocate for Percona Monitoring and
Management?.

1. Click Continue to Subscribe.

2. Subscribe to this software: Check the terms and conditions and click Continue to Configuration.

3. Configure this software:

a. Select a value for Software Version. (The latest is 2.12.0)

b. Select a region. (You can change this in the next step.)

c. Click Continue to Launch.

4. Launch this software:

a. Choose Action: Select a launch option. Launch from Website is a quick way to make your instance ready. For
more control, choose Launch through EC2.

b. EC2 Instance Type: Select an instance type.

c. VPC Settings: Choose or create a VPC (virtual private cloud).

d. Subnet Settings: Choose or create a subnet.

e. Security Group Settings: Choose a security group or click Create New Based On Seller Settings.

f. Key Pair Settings: Choose or create a key pair.

g. Click Launch.


Limiting Access to the instance: security group and a key pair

In the Security Group section, which acts like a firewall, you may use the preselected option Create new based on seller settings to create a security group with recommended settings. In the Key Pair section, select an already set up EC2 key pair to limit access to your instance.

Note

It is important that the security group allow communication via the following ports: 22, 80, and 443. PMM should also be able to access port 3306 on any RDS instances it monitors.


Applying settings

Scroll up to the top of the page to review your settings, which are summarized in a special area. Then click the Launch with 1 click button to continue; you can adjust your settings in the EC2 console.


Note

The Launch with 1 click button may alternatively be titled Accept Software Terms & Launch with 1-Click.

Adjusting instance settings in the EC2 Console

Clicking the Launch with 1 click button deploys your instance. To continue setting up your instance, open the EC2 console, available as a link at the top of the page that opens after you click the button.

Your instance appears in the EC2 console in a table that lists all instances available to you. A newly created instance has no name; give it a name to distinguish it from other instances managed via the EC2 console.

Running the instance

After you add your new instance, it will take some time to initialize. When the AWS console reports that the instance is in a running state, you may continue with the configuration of PMM Server.

Note

When started again after a reboot, your instance may acquire another IP address. You may choose to set up an elastic IP to avoid this problem.

With your instance selected, find its public IP address. It appears in the IPv4 Public IP column or as the value of the Public IP field at the top of the Properties panel.

To access the instance, copy and paste its public IP address into the location bar of your browser. In the Percona Monitoring and Management welcome page that opens, enter the instance ID.


You can copy the instance ID from the Properties panel of your instance: back in the EC2 console, select the Description tab and click the Copy button next to the Instance ID field. The button appears as soon as you hover the cursor over the ID.

Paste the instance ID into the Instance ID field of the Percona Monitoring and Management welcome page and click Submit.

PMM Server provides user access control, and therefore you will need user credentials to access it:


• Default user name: admin

• Default password: admin

You will be prompted to change the default password every time you log in.

The PMM Server is now ready and the home page opens.


You are creating a username and password that will be used for two purposes:

1. authentication as a user to PMM - this will be the credentials you need in order to log in to PMM.

2. authentication between PMM Server and PMM Clients - you will re-use these credentials when configuring pmm-client for the first time on a server, for example:

pmm-admin config --server-insecure-tls --server-url=https://admin:admin@<IP Address>:443

Accessing the instance by using an SSH client

For instructions about how to access your instances by using an SSH client, see Connecting to Your Linux Instance
Using SSH

Make sure to replace the user name ec2-user used in this document with admin .

Resizing the EBS Volume

Your AWS instance comes with a predefined size which can become a limitation. To make more disk space available
to your instance, you need to increase the size of the EBS volume as needed and then your instance will reconfigure
itself to use the new size.

The procedure of resizing EBS volumes is described in the Amazon documentation: Modifying the Size, IOPS, or Type
of an EBS Volume on Linux.

After the EBS volume is updated, PMM Server instance will auto-detect changes in approximately 5 minutes or less
and will reconfigure itself for the updated conditions.

Upgrading PMM Server on AWS

UPGRADING EC2 INSTANCE CLASS

Upgrading to a larger EC2 instance class is supported by PMM provided you follow the instructions from the AWS
manual. The PMM AMI image uses a distinct EBS volume for the PMM data volume which permits independent
resize of the EC2 instance without impacting the EBS volume.

EXPANDING THE PMM DATA EBS VOLUME

The PMM data volume is mounted as an XFS formatted volume on top of an LVM volume. There are two ways to
increase this volume size:

1. Add a new disk via EC2 console or API, and expand the LVM volume to include the new disk volume.

2. Expand existing EBS volume and grow the LVM volume.


EXPAND EXISTING EBS VOLUME

To increase capacity by expanding the existing EBS volume, follow these steps.


1. Expand the disk from the AWS Console/CLI to the desired capacity.

2. Log in to the PMM EC2 instance and verify that the disk capacity has increased. For example, if you have expanded the disk from 16G to 32G, dmesg output should look like below:

[ 535.994494] xvdb: detected capacity change from 17179869184 to 34359738368

3. You can check information about volume groups and logical volumes with the vgs and lvs commands:

vgs

VG #PV #LV #SN Attr VSize VFree


DataVG 1 2 0 wz--n- <16.00g 0

lvs

LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
DataLV DataVG Vwi-aotz-- <12.80g ThinPool 1.74
ThinPool DataVG twi-aotz-- 15.96g 1.39 1.29

4. Now we can use the lsblk command to see that our disk size has been identified by the kernel correctly, but LVM2
is not yet aware of the new size. We can use pvresize to make sure the PV device reflects the new size. Once
pvresize is executed, we can see that the VG has the new free space available.

lsblk | grep xvdb

xvdb 202:16 0 32G 0 disk

pvscan

PV /dev/xvdb VG DataVG lvm2 [<16.00 GiB / 0 free]


Total: 1 [<16.00 GiB] / in use: 1 [<16.00 GiB] / in no VG: 0 [0 ]

pvresize /dev/xvdb

Physical volume "/dev/xvdb" changed


1 physical volume(s) resized / 0 physical volume(s) not resized

pvs

PV VG Fmt Attr PSize PFree


/dev/xvdb DataVG lvm2 a-- <32.00g 16.00g

5. We then extend our logical volume. Since the PMM image uses thin provisioning, we need to extend both the pool
and the volume:

lvs


LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
DataLV DataVG Vwi-aotz-- <12.80g ThinPool 1.77
ThinPool DataVG twi-aotz-- 15.96g 1.42 1.32

lvextend /dev/mapper/DataVG-ThinPool -l 100%VG

Size of logical volume DataVG/ThinPool_tdata changed from 16.00 GiB (4096 extents) to 31.96 GiB (8183 extents).
Logical volume DataVG/ThinPool_tdata successfully resized.

lvs

LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
DataLV DataVG Vwi-aotz-- <12.80g ThinPool 1.77
ThinPool DataVG twi-aotz-- 31.96g 0.71 1.71

6. Once the pool and volumes have been extended, we need to extend the thin volume to consume the newly available space. In this example we've grown the available space to almost 32GB and already consumed 12GB, so we're extending by an additional 19GB:

lvs

LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
DataLV DataVG Vwi-aotz-- <12.80g ThinPool 1.77
ThinPool DataVG twi-aotz-- 31.96g 0.71 1.71

lvextend /dev/mapper/DataVG-DataLV -L +19G

Size of logical volume DataVG/DataLV changed from <12.80 GiB (3276 extents) to <31.80 GiB (8140 extents).
Logical volume DataVG/DataLV successfully resized.

lvs

LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
DataLV DataVG Vwi-aotz-- <31.80g ThinPool 0.71
ThinPool DataVG twi-aotz-- 31.96g 0.71 1.71

7. We then expand the XFS filesystem to reflect the new size using xfs_growfs , and confirm the filesystem is accurate
using the df command.

df -h /srv

Filesystem Size Used Avail Use% Mounted on


/dev/mapper/DataVG-DataLV 13G 249M 13G 2% /srv

xfs_growfs /srv

meta-data=/dev/mapper/DataVG-DataLV isize=512    agcount=103, agsize=32752 blks
         =                          sectsz=512   attr=2, projid32bit=1
         =                          crc=1        finobt=0 spinodes=0
data     =                          bsize=4096   blocks=3354624, imaxpct=25
         =                          sunit=16     swidth=16 blks
naming   =version 2                 bsize=4096   ascii-ci=0 ftype=1
log      =internal                  bsize=4096   blocks=768, version=2
         =                          sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                      extsz=4096   blocks=0, rtextents=0
data blocks changed from 3354624 to 8335360

df -h /srv

Filesystem Size Used Avail Use% Mounted on


/dev/mapper/DataVG-DataLV 32G 254M 32G 1% /srv

See also

• Amazon AWS Documentation: Availability zones

• Amazon AWS Documentation: Security groups

• Amazon AWS Documentation: Key pairs

• Amazon AWS Documentation: Importing your own public key to Amazon EC2

• Amazon AWS Documentation: Elastic IP Addresses


2.3 Client

2.3.1 Setting up PMM Clients

PMM Client is a package of agents and exporters installed on a database host that you want to monitor. Before
installing the PMM Client package on each database host that you intend to monitor, make sure that your PMM
Server host is accessible.

For example, you can run the ping command, passing the IP address of the computer that PMM Server is running on:

ping 192.168.100.1

You will need to have root access on the database host where you will be installing PMM Client (either logged in as
a user with root privileges or be able to run commands with sudo ).

Supported platforms

PMM Client should run on any modern 64-bit Linux distribution; however, Percona provides PMM Client packages for automatic installation from software repositories only for the most popular Linux distributions.

It is recommended that you install your PMM (Percona Monitoring and Management) client by using the software
repository for your system. If this option does not work for you, Percona provides downloadable PMM Client
packages from the Download Percona Monitoring and Management page.

In addition to DEB and RPM packages, this site also offers:

• Generic tarballs that you can extract and run the included install script.

• Source code tarball to build your PMM (Percona Monitoring and Management) client from source.
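
As an illustration, on Debian or Ubuntu the repository-based installation typically looks like this (a sketch; package and repository details may differ for your distribution and release):

# Install Percona's repository package, then the PMM 2 client
wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb
sudo apt update
sudo apt install pmm2-client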

Warning

You should not install agents on database servers that have the same host name, because host names are used by
PMM Server to identify collected data.

Storage requirements

A minimum of 100 MB of storage is required to install the PMM Client package. With a good, constant connection to PMM Server, additional storage is not required. However, the client needs to store any collected data that it cannot send over immediately, so additional storage may be required if the connection is unstable or the throughput is too low.

Connecting PMM Clients to the PMM Server

With your server and clients set up, you must configure each PMM Client and specify which PMM Server it should
send its data to.

To connect a PMM Client, enter the IP address of the PMM Server as the value of the --server-url parameter to the
pmm-admin config command, and allow using self-signed certificates with --server-insecure-tls .


Note

The --server-url argument should include the https:// prefix and the PMM Server credentials, which are admin / admin by default if they were not changed at first PMM Server GUI access.

Run this command as root or by using the sudo command

pmm-admin config --server-insecure-tls --server-url=https://admin:[email protected]:443

For example, if your PMM Server is running on 192.168.100.1, you have installed PMM Client on a machine with IP
192.168.200.1, and didn’t change default PMM Server credentials, run the following in the terminal of your client.
Run the following commands as root or by using the sudo command:

pmm-admin config --server-insecure-tls --server-url=https://admin:[email protected]:443

Checking local pmm-agent status...


pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.
Checking local pmm-agent status...
pmm-agent is running.

If you change the default port 443 when running PMM Server, specify the new port number after the IP address of
PMM Server.
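
For example, if PMM Server were published on port 8443 (an illustrative value), the command would be:

pmm-admin config --server-insecure-tls --server-url=https://admin:[email protected]:8443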

Note

By default, pmm-admin config refuses to add a client that already exists in the PMM Server inventory database. If you need to re-add an already existing client (e.g. after a full reinstall, hostname change, etc.), you can run pmm-admin config with the additional --force option. This will remove an existing node with the same name, if any, and all its dependent services.
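
For example, re-registering a client after a hostname change might look like this (addresses and credentials as in the example above):

pmm-admin config --force --server-insecure-tls --server-url=https://admin:[email protected]:443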

Removing monitoring services with pmm-admin remove

Use the pmm-admin remove command to remove monitoring services.

USAGE

Run this command as root or by using the sudo command

pmm-admin remove [OPTIONS] [SERVICE-TYPE] [SERVICE-NAME]

When you remove a service, collected data remains in Metrics Monitor on PMM Server for the specified retention
period.

SERVICES

Service type can be mysql, mongodb, postgresql or proxysql, and service name is a monitoring service alias. To see
which services are enabled, run pmm-admin list .

EXAMPLES


# Removing MySQL service named mysql-sl


pmm-admin remove mysql mysql-sl

# remove MongoDB service named mongo


pmm-admin remove mongodb mongo

# remove PostgreSQL service named postgres


pmm-admin remove postgresql postgres

# remove ProxySQL service named ubuntu-proxysql


pmm-admin remove proxysql ubuntu-proxysql

For more information, run pmm-admin remove --help .


2.3.2 MySQL

PMM supports all commonly used variants of MySQL, including Percona Server, MariaDB, and Amazon RDS. To
prevent data loss and performance issues, PMM does not automatically change MySQL configuration. However,
there are certain recommended settings that help maximize monitoring efficiency. These recommendations depend
on the variant and version of MySQL you are using, and mostly apply to very high loads.

PMM can collect query data either from the slow query log or from Performance Schema. The slow query log provides
maximum details, but can impact performance on heavily loaded systems. On Percona Server the query sampling
feature may reduce the performance impact.

Performance Schema is generally better for recent versions of other MySQL variants. For older MySQL variants, which have neither sampling nor Performance Schema, configure logging of slow queries only.

Note

MySQL with too many tables can lead to PMM Server overload due to the streaming of too much time series data. It can also lead to too many queries from mysqld_exporter, causing extra load on MySQL. Therefore, PMM Server automatically disables the most resource-consuming mysqld_exporter collectors if there are more than 1000 tables.

You can add configuration examples provided below to my.cnf and restart the server or change variables
dynamically using the following syntax:

SET GLOBAL <var_name>=<var_value>

The following sample configurations can be used depending on the variant and version of MySQL:

• If you are running Percona Server (or XtraDB Cluster), configure the slow query log to capture all queries and enable sampling. This provides the most information with the lowest overhead.

log_output=file
slow_query_log=ON
long_query_time=0
log_slow_rate_limit=100
log_slow_rate_type=query
log_slow_verbosity=full
log_slow_admin_statements=ON
log_slow_slave_statements=ON
slow_query_log_always_write_time=1
slow_query_log_use_global_control=all
innodb_monitor_enable=all
userstat=1

• If you are running MySQL 5.6+ or MariaDB 10.0+, see Performance Schema.

innodb_monitor_enable=all
performance_schema=ON

• If you are running MySQL 5.5 or MariaDB 5.5, configure logging of slow queries only, to avoid high performance overhead.

log_output=file
slow_query_log=ON
long_query_time=0
log_slow_admin_statements=ON
log_slow_slave_statements=ON

Caution

This may affect the quality of monitoring data gathered by Query Analytics.

Creating a MySQL User Account for PMM

When adding a MySQL instance to monitoring, you can specify the MySQL server superuser account credentials. However, monitoring with the superuser account is not advised. It's better to create a user with only the privileges necessary for collecting data.

As an example, the user pmm can be created manually with the necessary privileges, and its credentials passed when adding the instance.

To enable complete MySQL instance monitoring, a command similar to the following is recommended:

sudo pmm-admin add mysql --username pmm --password <password>

Of course, this user should have the privileges necessary for collecting data. You can create the pmm user and grant the required privileges as follows:

CREATE USER 'pmm'@'localhost' IDENTIFIED BY 'pass' WITH MAX_USER_CONNECTIONS 10;


GRANT SELECT, PROCESS, SUPER, REPLICATION CLIENT, RELOAD ON *.* TO 'pmm'@'localhost';

Performance Schema

The default source of query data for PMM is the slow query log. It is available in MySQL 5.1 and later versions.
Starting from MySQL 5.6 (including Percona Server 5.6 and later), you can choose to parse query data from the
Performance Schema instead of slow query log. Starting from MySQL 5.6.6, Performance Schema is enabled by default.

Performance Schema is not as data-rich as the slow query log, but it has all the critical data and is generally faster to
parse. If you are not running Percona Server (which supports sampling for the slow query log), then Performance
Schema is a better alternative.

Note

Use of the performance schema is off by default in MariaDB 10.x.

To use Performance Schema, set the performance_schema variable to ON :

SHOW VARIABLES LIKE 'performance_schema';

+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| performance_schema | ON |
+--------------------+-------+

If this variable is not set to ON, add the following lines to the MySQL configuration file my.cnf and restart MySQL:


[mysqld]
performance_schema=ON

If you are running a custom Performance Schema configuration, make sure that the statements_digest consumer is
enabled:

select * from setup_consumers;

+----------------------------------+---------+
| NAME | ENABLED |
+----------------------------------+---------+
| events_stages_current | NO |
| events_stages_history | NO |
| events_stages_history_long | NO |
| events_statements_current | YES |
| events_statements_history | YES |
| events_statements_history_long | NO |
| events_transactions_current | NO |
| events_transactions_history | NO |
| events_transactions_history_long | NO |
| events_waits_current | NO |
| events_waits_history | NO |
| events_waits_history_long | NO |
| global_instrumentation | YES |
| thread_instrumentation | YES |
| statements_digest | YES |
+----------------------------------+---------+
15 rows in set (0.00 sec)
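
If statements_digest shows ENABLED = NO, it can be switched on at runtime; a minimal sketch (the change lasts until the server restarts):

mysql -e "UPDATE performance_schema.setup_consumers SET ENABLED='YES' WHERE NAME='statements_digest';"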

Note

Performance Schema instrumentation is enabled by default in MySQL 5.6.6 and later versions. It is not available at all in
MySQL versions prior to 5.6.

If certain instruments are not enabled, you will not see the corresponding graphs in the MySQL Performance Schema
dashboard. To enable full instrumentation, set the option --performance_schema_instrument to '%=on' when starting the MySQL
server.

mysqld --performance-schema-instrument='%=on'

This option can cause additional overhead and should be used with care.

If the instance is already running, configure the QAN agent to collect data from Performance Schema:

1. Open the PMM Query Analytics dashboard.

2. Click the Settings button.

3. Open the Settings section.

4. Select Performance Schema in the Collect from drop-down list.

5. Click Apply to save changes.

If you are adding a new monitoring instance with the pmm-admin tool, use the --query-source perfschema option:

Run this command as root or by using the sudo command

pmm-admin add mysql --username=pmm --password=pmmpassword --query-source='perfschema' ps-mysql 127.0.0.1:3306


For more information, run pmm-admin add mysql --help .

Adding MySQL Service Monitoring

You add MySQL services (Metrics and Query Analytics) with the following command:

USAGE

pmm-admin add mysql --query-source=slowlog --username=pmm --password=pmm

where username and password are credentials for the monitored MySQL access, which will be used locally on the
database host. Additionally, two positional arguments can be appended to the command line flags: a service name
to be used by PMM, and a service address. If not specified, they are substituted automatically as <node>-mysql and
127.0.0.1:3306 .

The command line and the output of this command may look as follows:

pmm-admin add mysql --query-source=slowlog --username=pmm --password=pmm sl-mysql 127.0.0.1:3306

MySQL Service added.


Service ID : /service_id/a89191d4-7d75-44a9-b37f-a528e2c4550f
Service name: sl-mysql

Note

There are two possible sources for query metrics provided by MySQL to get data for the Query Analytics: the slow log
and the Performance Schema.

The --query-source option can be used to specify it, either as slowlog (the default if nothing is specified) or as perfschema :

pmm-admin add mysql --username=pmm --password=pmm --query-source=perfschema ps-mysql 127.0.0.1:3306

Besides the positional arguments shown above, you can specify the service name and service address with the following flags: --service-name , --host (the hostname or IP address of the service), and --port (the port number of the service). If both a flag and a positional argument are present, the flag takes precedence. Here is the previous example modified to use these flags:

pmm-admin add mysql --username=pmm --password=pmm --service-name=ps-mysql --host=127.0.0.1 --port=3306

Note

It is also possible to add a MySQL instance using a UNIX socket, with the special --socket flag followed by the path to the socket and no username, password, or network address:

pmm-admin add mysql --socket=/var/path/to/mysql/socket

After adding the service you can view MySQL metrics or examine the added node on the new PMM Inventory
Dashboard.


MySQL InnoDB Metrics

Collecting metrics and statistics for graphs increases overhead. You can keep collecting and graphing low-overhead
metrics all the time, and enable high-overhead metrics only when troubleshooting problems.

InnoDB metrics provide detailed insight about InnoDB operation. Although you can select to capture only specific
counters, their overhead is low even when they all are enabled all the time. To enable all InnoDB metrics, set the
global variable innodb_monitor_enable to all :

SET GLOBAL innodb_monitor_enable=all

Slow Log Settings

If you are running Percona Server for MySQL, a properly configured slow query log will provide the most information with the lowest overhead. In other cases, use Performance Schema if it is supported.

CONFIGURING THE SLOW LOG FILE

The first and obvious variable to enable is slow_query_log which controls the global Slow Query on/off status.

Secondly, verify that the log is sent to a FILE instead of a TABLE. This is controlled with the log_output variable.

By definition, the slow query log is supposed to capture only slow queries. These are the queries the execution time
of which is above a certain threshold. The threshold is defined by the long_query_time variable.

In heavily-loaded applications, frequent fast queries can actually have a much bigger impact on performance than
rare slow queries. To ensure comprehensive analysis of your query traffic, set the long_query_time to 0 so that all
queries are captured.

FINE TUNE

Depending on the amount of traffic, logging could become aggressive and resource consuming. However, Percona
Server for MySQL provides a way to throttle the level of intensity of the data capture without compromising
information. The most important variable is log_slow_rate_limit , which controls the query sampling in Percona Server
for MySQL. Details on that variable can be found here.

A possible problem with query sampling is that rare slow queries might not get captured at all. To avoid this, use the
slow_query_log_always_write_time variable to specify which queries should ignore sampling. That is, queries with
longer execution time will always be captured by the slow query log.

SLOW LOG FILE ROTATION

PMM will take care of rotating and removing old slow log files, but only if you set the --size-slow-logs option via pmm-admin (see the example below).

When the limit is reached, PMM will remove the previous old slow log file, rename the current file with the suffix
.old , and execute the MySQL command FLUSH LOGS . It will only keep one old file. Older files will be deleted on the
next iteration.
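
A sketch of setting the rotation limit when adding the service; the value format here is illustrative, so check pmm-admin add mysql --help for what your PMM Client version accepts:

# Rotate the slow log once it reaches roughly 1 GiB
pmm-admin add mysql --size-slow-logs=1GiB --username=pmm --password=pmm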

Configuring MySQL 8.0 for PMM

MySQL 8 (in version 8.0.4) changes the way clients are authenticated by default. The default_authentication_plugin
parameter is set to caching_sha2_password . This change of the default value implies that MySQL drivers must support
the SHA-256 authentication. Also, the communication channel with MySQL 8 must be encrypted when using
caching_sha2_password .

The MySQL driver used with PMM does not yet support the SHA-256 authentication.


With currently supported versions of MySQL, PMM requires that a dedicated MySQL user be set up. This MySQL user
should be authenticated using the mysql_native_password plugin. Although MySQL is configured to support SSL
clients, connections to MySQL Server are not encrypted.

There are two workarounds to be able to add MySQL Server version 8.0.4 or higher as a monitoring service to PMM:

1. Alter the MySQL user that you plan to use with PMM

2. Change the global MySQL configuration

Altering the MySQL User

Provided you have already created the MySQL user that you plan to use with PMM, alter this user as follows:

ALTER USER pmm@'localhost' IDENTIFIED WITH mysql_native_password BY '$eCR8Tp@s$w*rD';

Then, pass this user to pmm-admin add as the value of the --username parameter.

This is a preferred approach as it only weakens the security of one user.

Changing the global MySQL Configuration

A less secure approach is to set default_authentication_plugin to the value mysql_native_password before adding it
as a monitoring service. Then, restart your MySQL Server to apply this change.

[mysqld]
default_authentication_plugin=mysql_native_password

See also

• MySQL Server 5.7 Documentation: performance_schema_instrument

• MySQL Server 5.7 Documentation: innodb_monitor_enable

• MySQL Server Blog: MySQL 8.0.4 : New Default Authentication Plugin : caching_sha2_password

• MySQL Server 8.0 Documentation: Authentication Plugins

• MySQL Server 8.0 Documentation: Native Pluggable Authentication


2.3.3 Percona Server

Not all dashboards in Metrics Monitor are available by default for all MySQL variants and configurations: Oracle's MySQL, Percona Server, or MariaDB. Some graphs require Percona Server, specialized plugins, or additional configuration.

log_slow_rate_limit

The log_slow_rate_limit variable defines the fraction of queries captured by the slow query log. A good rule of thumb is to have approximately 100 queries logged per second. For example, if your Percona Server instance processes 10,000 queries per second, you should set log_slow_rate_limit to 100 and capture every 100th query for the slow query log.

Note

When using query sampling, set log_slow_rate_type to query so that it applies to queries, rather than sessions.

It is also a good idea to set log_slow_verbosity to full so that maximum amount of information about each captured
query is stored in the slow query log.

log_slow_verbosity

log_slow_verbosity variable specifies how much information to include in the slow query log. It is a good idea to set
log_slow_verbosity to full so that maximum amount of information about each captured query is stored.

slow_query_log_use_global_control

By default, slow query log settings apply only to new sessions. If you want to configure the slow query log during
runtime and apply these settings to existing connections, set the slow_query_log_use_global_control variable to all .
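
These settings are dynamic in Percona Server, so they can also be applied at runtime without a restart; the values below are only illustrative:

mysql -e "SET GLOBAL log_slow_rate_type='query';
SET GLOBAL log_slow_rate_limit=100;
SET GLOBAL log_slow_verbosity='full';
SET GLOBAL slow_query_log_use_global_control='all';"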

Query Response Time Plugin

Query response time distribution is a feature available in Percona Server. It provides information about changes in query response time for different groups of queries, often allowing you to spot performance problems before they lead to serious issues.

To enable collection of query response time:

1. Install the QUERY_RESPONSE_TIME plugins:

INSTALL PLUGIN QUERY_RESPONSE_TIME_AUDIT SONAME 'query_response_time.so';


INSTALL PLUGIN QUERY_RESPONSE_TIME SONAME 'query_response_time.so';
INSTALL PLUGIN QUERY_RESPONSE_TIME_READ SONAME 'query_response_time.so';
INSTALL PLUGIN QUERY_RESPONSE_TIME_WRITE SONAME 'query_response_time.so';

2. Set the global variable query_response_time_stats to ON :

SET GLOBAL query_response_time_stats=ON;

MySQL User Statistics (userstat)

User statistics is a feature of Percona Server and MariaDB. It provides information about user activity and individual table and index access. In some cases, collecting user statistics can lead to high overhead, so use this feature sparingly.


To enable user statistics, set the userstat variable to 1 .
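
For example, to turn it on at runtime (add userstat=1 to my.cnf, as in the sample configuration earlier, to make it persistent):

mysql -e "SET GLOBAL userstat=1;"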

See also

• MySQL Server 5.7 Documentation: Setting variables

• Percona Server Documentation: userstat

• Percona Server 5.7: query_response_time_stats

• Percona Server 5.7: Response time distribution


2.3.4 MongoDB

Configuring MongoDB for Monitoring in PMM Query Analytics

In Query Analytics, you can monitor MongoDB metrics and queries. Run the pmm-admin add command to use these
monitoring services.

Supported versions of MongoDB

Query Analytics supports MongoDB version 3.2 or higher.

Setting Up the Required Permissions

For MongoDB monitoring services to work in Query Analytics, you need to set up the mongodb_exporter user.

Here is an example for the MongoDB shell that creates and assigns the appropriate roles to the user.

db.createRole({
role: "explainRole",
privileges: [{
resource: {
db: "",
collection: ""
},
actions: [
"listIndexes",
"listCollections",
"dbStats",
"dbHash",
"collStats",
"find"
]
}],
roles:[]
})

db.getSiblingDB("admin").createUser({
user: "mongodb_exporter",
pwd: "s3cR#tpa$$worD",
roles: [
{ role: "explainRole", db: "admin" },
{ role: "clusterMonitor", db: "admin" },
{ role: "read", db: "local" }
]
})

Enabling Profiling

For MongoDB to work correctly with Query Analytics, you need to enable profiling in your mongod configuration.
When started without profiling enabled, Query Analytics displays the following warning:

Note

A warning message is displayed when profiling is not enabled

Profiling of the monitored MongoDB databases must be enabled; however, profiling is not enabled by default because it may reduce the performance of your MongoDB server.


ENABLING PROFILING ON COMMAND LINE

You can enable profiling from command line when you start the mongod server. This command is useful if you start
mongod manually.

Run this command as root or by using the sudo command

mongod --dbpath=DATABASEDIR --profile 2 --slowms 200 --rateLimit 100

Note that you need to specify a path to an existing directory that stores database files with the --dbpath option. When the
--profile option is set to 2, mongod collects the profiling data for all operations. To decrease the load, you may
consider setting this option to 1 so that the profiling data are only collected for slow operations.

The --slowms option sets the minimum time for a slow operation. In the given example, any operation which takes
longer than 200 milliseconds is a slow operation.

The --rateLimit option, which is available if you use PSMDB instead of MongoDB, refers to the number of queries
that the MongoDB profiler collects. The lower the rate limit, the less impact on the performance. However, the
accuracy of the collected information decreases as well.

ENABLING PROFILING IN THE CONFIGURATION FILE

If you run mongod as a service, you need to use the configuration file which by default is /etc/mongod.conf .

In this file, you need to locate the operationProfiling: section and add the following settings:

operationProfiling:
slowOpThresholdMs: 200
mode: slowOp

These settings affect mongod in the same way as the command line options. Note that the configuration file is in the
YAML format. In this format the indentation of your lines is important as it defines levels of nesting.
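
If you run Percona Server for MongoDB, the rate limit mentioned above can also be set in the same section. This is a sketch only; the rateLimit key is specific to Percona Server for MongoDB, and the values shown are illustrative:

operationProfiling:
  mode: slowOp
  slowOpThresholdMs: 200
  rateLimit: 100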

Restart the mongod service to enable the settings.

Run this command as root or by using the sudo command

service mongod restart

Adding MongoDB Service Monitoring

Add monitoring as follows:

pmm-admin add mongodb --username=pmm --password=pmm

where username and password are credentials for the monitored MongoDB access, which will be used locally on the
database host. Additionally, two positional arguments can be appended to the command line flags: a service name
to be used by PMM, and a service address. If not specified, they are substituted automatically as <node>-mongodb and
127.0.0.1:27017 .

The command line and the output of this command may look as follows:

pmm-admin add mongodb --username=pmm --password=pmm mongo 127.0.0.1:27017


MongoDB Service added.


Service ID : /service_id/f1af8a88-5a95-4bf1-a646-0101f8a20791
Service name: mongo

Besides the positional arguments shown above, you can specify the service name and service address with the following flags:
--service-name , --host (the hostname or IP address of the service), and --port (the port number of the service). If
both a flag and a positional argument are present, the flag takes priority. Here is the previous example modified to
use these flags:

pmm-admin add mongodb --username=pmm --password=pmm --service-name=mongo --host=127.0.0.1 --port=27017

Note

It is also possible to add a MongoDB instance using a UNIX socket with just the --socket flag followed by the path to a
socket:

pmm-admin add mongodb --socket=/tmp/mongodb-27017.sock

Passing SSL parameters to the mongodb monitoring service

SSL/TLS related parameters are passed to an SSL enabled MongoDB server as monitoring service parameters along
with the pmm-admin add command when adding the MongoDB monitoring service.

Run this command as root or by using the sudo command

pmm-admin add mongodb --tls

Supported SSL/TLS Parameters

--tls

Enable a TLS connection with mongo server

--tls-skip-verify

Skip TLS certificates validation

See also

• Percona Server for MongoDB: rateLimit

• Percona Server for MongoDB: Profiling Rate Limit

• MongoDB Documentation: Enabling Profiling

• MongoDB Documentation: Profiling Mode

• MongoDB Documentation: slowOpThresholdMs option

• MongoDB Documentation: Profiler Overhead

2.3.5 PostgreSQL

PMM follows the postgresql.org EOL policy.

For specific details on supported platforms and versions, see Percona’s Software Platform Lifecycle page.

To monitor PostgreSQL queries, you must install a database extension. There are two choices:

• pg_stat_monitor , a new extension created by Percona, based on pg_stat_statements and compatible with it.

• pg_stat_statements , the original extension created by PostgreSQL, part of the postgres-contrib package available
on Linux.

pg_stat_monitor provides all the features of pg_stat_statements , but extends it to provide bucket-based data
aggregation, a feature missing from pg_stat_statements . ( pg_stat_statements accumulates data without providing
aggregated statistics or histogram information.)

Note

• pg_stat_monitor is the recommended option.

• Although nothing prevents you from installing and using both, we don’t recommend this as you will get duplicate
metrics.

Caution

pg_stat_monitor is beta software and currently unsupported.

Prerequisites

We recommend that you create a PostgreSQL user with SUPERUSER level access. This lets you gather the most data with
the least fuss.

This user must be able to connect to the postgres database where the extension was installed. The PostgreSQL user
should have local password authentication enabled to access PMM. To do this, set the authentication method to md5
for the user in the pg_hba.conf configuration file.
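
For example, an entry along these lines in pg_hba.conf allows the monitoring user to authenticate with a password over a local TCP connection (a sketch only; adjust the user name and address to your environment):

# TYPE  DATABASE  USER      ADDRESS       METHOD
host    all       pmm_user  127.0.0.1/32  md5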

To create a superuser:

CREATE USER pmm_user WITH SUPERUSER ENCRYPTED PASSWORD '******';

Or, if your database runs on Amazon RDS:

CREATE USER pmm_user WITH rds_superuser ENCRYPTED PASSWORD '******';

pg_stat_monitor

pg_stat_monitor collects statistics and aggregates data in data collection units called buckets, which are linked together
to form a bucket chain.


You can specify:

• the number of buckets (the length of the chain);

• how much space is available for all buckets;

• a time limit for each bucket’s data collection (the bucket expiry).

When a bucket’s expiration time is reached, accumulated statistics are reset and data is stored in the next available
bucket in the chain.

When all buckets in the chain have been used, the first bucket is reused and its contents are overwritten.

If a bucket fills before its expiration time is reached, data is discarded.

COMPATIBILITY

pg_stat_monitor has been tested with:

• PostgreSQL versions 11, 12.

• Percona Distribution for PostgreSQL versions 11, 12.

(It should also work with version 13 of both, but this hasn't been tested.)

INSTALL

This extension can be installed in two ways:

• For Percona Distribution for PostgreSQL: Using standard Linux package manager tools.

• For PostgreSQL or Percona Distribution for PostgreSQL: download and compile the source code.

Install using Linux package manager

The pg_stat_monitor extension is included in Percona Distribution for PostgreSQL, which can be installed via the
percona-release package.

This section reproduces parts of the following:

• Configuring Percona Repositories with percona-release

• Installing Percona Distribution for PostgreSQL

Debian

sudo apt-get install -y wget gnupg2 lsb-release


wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb

sudo percona-release setup ppg-12 # version 12 (others available)


sudo apt install -y percona-postgresql-12

Red Hat


sudo yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm

# If RHEL 8
sudo dnf module disable postgresql

# If RHEL 7
sudo yum install -y epel-release
sudo yum repolist

sudo percona-release setup ppg-12


sudo yum install -y percona-postgresql12-server

Install from source code

Debian

1. Install common packages

sudo apt-get install -y curl git wget gnupg2 lsb-release


sudo apt-get update -y

2. Install PostgreSQL development packages

With Percona Distribution for PostgreSQL (version 12):

wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb
sudo percona-release setup ppg-12
sudo apt install -y percona-postgresql-server-dev-all

With PostgreSQL:

wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -


echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" | sudo tee /etc/apt/
sources.list.d/pgdg.list
sudo apt install -y postgresql-server-dev-all

3. Download, compile, and install extension

git clone git://github.com/percona/pg_stat_monitor.git && cd pg_stat_monitor


sudo make USE_PGXS=1
sudo make USE_PGXS=1 install


Red Hat

1. Install common packages

sudo yum install -y centos-release-scl epel-release


sudo yum update -y
sudo yum install -y git gcc gcc-c++ llvm-toolset-7

2. Install PostgreSQL development packages

With Percona Distribution for PostgreSQL (version 12):

sudo yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm


sudo percona-release setup ppg-12
sudo yum install -y percona-postgresql12-devel

With PostgreSQL version 12:

sudo yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo yum install -y postgresql12-devel

3. Download, compile, and install extension

git clone git://github.com/percona/pg_stat_monitor.git && cd pg_stat_monitor


sudo make PG_CONFIG=/usr/pgsql-12/bin/pg_config USE_PGXS=1
sudo make PG_CONFIG=/usr/pgsql-12/bin/pg_config USE_PGXS=1 install

CONFIGURE

1. Set or change the value for shared_preload_libraries in your postgresql.conf file:

shared_preload_libraries = 'pg_stat_monitor'

2. If needed, set the value of pg_stat_monitor.pgsm_normalized_query in the same file (see Configuration Parameters
below).

3. Start or restart your PostgreSQL instance.

4. In a psql session:

CREATE EXTENSION pg_stat_monitor;

CONFIGURATION PARAMETERS

Here are the configuration parameters, available values ranges, and default values. All require a restart of
PostgreSQL except for pg_stat_monitor.pgsm_track_utility and pg_stat_monitor.pgsm_normalized_query .

To make settings permanent, add them to your postgresql.conf file before starting your PostgreSQL instance.
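
For example, a postgresql.conf fragment combining the extension with a few of these parameters might look like this (a sketch; the values are illustrative, not recommendations):

shared_preload_libraries = 'pg_stat_monitor'
pg_stat_monitor.pgsm_max_buckets = 10
pg_stat_monitor.pgsm_bucket_time = 60
pg_stat_monitor.pgsm_normalized_query = 1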

pg_stat_monitor.pgsm_max (5000-2147483647 bytes) Default: 5000

Defines the limit of shared memory. Memory is used by buckets in a circular manner and is divided between
buckets equally when PostgreSQL starts.


pg_stat_monitor.pgsm_query_max_len (1024-2147483647 bytes) Default: 1024

The maximum size of the query. Long queries are truncated to this length to avoid unnecessary usage of shared
memory. This parameter must be set before PostgreSQL starts.

pg_stat_monitor.pgsm_enable (0-1) Default: 1 (true).

Enables or disables monitoring. When set to 0 (false), pg_stat_monitor does not collect statistics for the entire
cluster.

pg_stat_monitor.pgsm_track_utility (0-1) Default: 1 (true)

Controls whether utility commands (all except SELECT, INSERT, UPDATE and DELETE) are tracked.

pg_stat_monitor.pgsm_normalized_query (0-1) Default: 0 (false)

By default, a query shows the actual parameter instead of a placeholder. Set to 1 to change to showing value
placeholders (as $n where n is an integer).

pg_stat_monitor.pgsm_max_buckets (1-10) Default: 10

Sets the maximum number of available data buckets.

pg_stat_monitor.pgsm_bucket_time (1-2147483647 seconds) Default: 60

Sets the lifetime of the bucket. The system switches between buckets on the basis of this value.

pg_stat_monitor.pgsm_object_cache (50-2147483647) Default: 50

The maximum number of objects in the information cache.

pg_stat_monitor.pgsm_respose_time_lower_bound (1-2147483647 milliseconds) Default: 1

Sets the lower bound of the execution time histogram.

pg_stat_monitor.pgsm_respose_time_step (1-2147483647 milliseconds) Default: 1

Sets the time value of the steps for the histogram.

pg_stat_monitor.pgsm_query_shared_buffer (500000-2147483647 bytes) Default: 500000

Sets the query shared_buffer size.

pg_stat_monitor.pgsm_track_planning (0-1) Default: 1 (true)

Whether to track planning statistics.

pg_stat_statements

pg_stat_statements is included in the official PostgreSQL postgresql-contrib available from your Linux distribution
package manager.

INSTALL

Debian

sudo apt-get install postgresql-contrib


Red Hat

sudo yum install -y postgresql-contrib

CONFIGURE

1. Add these lines to your postgresql.conf file:

shared_preload_libraries = 'pg_stat_statements'
track_activity_query_size = 2048 # Increase tracked query string size
pg_stat_statements.track = all # Track all statements including nested

2. Restart your PostgreSQL instance.

3. Install the extension (run in the postgres database).

CREATE EXTENSION pg_stat_statements SCHEMA public;
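
As an optional sanity check (a sketch; run it in the same postgres database), you can confirm the extension is active and collecting data:

SELECT count(*) FROM pg_stat_statements;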

Adding PostgreSQL queries and metrics monitoring

You add PostgreSQL metrics and queries monitoring with the following command:

pmm-admin add postgresql --username=<user name> --password=<password>

Where <user name> and <password> are the PostgreSQL user credentials.

Additionally, two positional arguments can be appended to the command line flags: a service name to be used by
PMM, and a service address. If not specified, they are substituted automatically as <node>-postgresql and
127.0.0.1:5432 .

The command line and the output of this command may look as follows:

pmm-admin add postgresql --username=pmm --password=pmm postgres 127.0.0.1:5432


PostgreSQL Service added.
Service ID : /service_id/28f1d93a-5c16-467f-841b-8c014bf81ca6
Service name: postgres

If correctly installed and set up, you should see data in the PostgreSQL Overview dashboard, and Query Analytics
should contain PostgreSQL queries.

Besides the positional arguments shown above, you can specify the service name and service address with the following flags:
--service-name , --host (the hostname or IP address of the service), and --port (the port number of the service). If
both a flag and a positional argument are present, the flag takes priority. Here is the previous example modified to
use these flags:

pmm-admin add postgresql --username=pmm --password=pmm --service-name=postgres --host=127.0.0.1 --port=5432

It is also possible to add a PostgreSQL instance using a UNIX socket with just the --socket flag followed by the path
to a socket:

pmm-admin add postgresql --socket=/var/run/postgresql

Capturing read and write time statistics is possible only if the track_io_timing setting is enabled. This can be done
either in the configuration file or with the following query executed on the running system:


ALTER SYSTEM SET track_io_timing=ON;


SELECT pg_reload_conf();

See also

• pg_stat_monitor GitHub repository: https://github.com/percona/pg_stat_monitor

• PostgreSQL pg_stat_statements module: https://www.postgresql.org/docs/current/pgstatstatements.html

2.3.6 ProxySQL

Use the proxysql alias to enable ProxySQL performance metrics monitoring.

USAGE

pmm-admin add proxysql --username=admin --password=admin

where username and password are credentials for accessing the monitored ProxySQL admin interface, which will be used
locally on the database host. Additionally, two positional arguments can be appended to the command line flags: a service name
to be used by PMM, and a service address. If not specified, they are substituted automatically as <node>-proxysql and
127.0.0.1:6032 .

The output of this command may look as follows:

pmm-admin add proxysql --username=admin --password=admin

ProxySQL Service added.


Service ID : /service_id/f69df379-6584-4db5-a896-f35ae8c97573
Service name: ubuntu-proxysql

Besides the positional arguments shown above, you can specify the service name and service address with the following flags:
--service-name , and --host (the hostname or IP address of the service) and --port (the port number of the service),
or --socket (the UNIX socket path). If both a flag and a positional argument are present, the flag takes priority. Here
is the previous example modified to use these flags for both host/port and socket connections:

pmm-admin add proxysql --username=pmm --password=pmm --service-name=my-new-proxysql --host=127.0.0.1 --port=6032


pmm-admin add proxysql --username=pmm --password=pmm --service-name=my-new-proxysql --socket=/tmp/proxysql_admin.sock

2.3.7 Amazon AWS

Required AWS settings

It is possible to use PMM for monitoring Amazon RDS (just like any remote MySQL instance). In this case, the PMM
Client is not installed on the host where the database server is deployed. By using the PMM web interface, you
connect to the Amazon RDS DB instance. You only need to provide the IAM user access key (or assign an IAM role)
and PMM discovers the Amazon RDS DB instances available for monitoring.

First of all, ensure that there is minimal latency between PMM Server and the Amazon RDS instance.

Network connectivity can become an issue for VictoriaMetrics to scrape metrics with 1 second resolution. We
strongly suggest that you run PMM Server on AWS (Amazon Web Services) in the same availability zone as Amazon
RDS instances.

It is crucial that enhanced monitoring be enabled for the Amazon RDS DB instances you intend to monitor.

Set the Enable Enhanced Monitoring option in the settings of your Amazon RDS DB instance.

Creating an IAM user with permission to access Amazon RDS DB instances

It is recommended that you use an IAM user account to access Amazon RDS DB instances instead of using your AWS
account. This measure improves security because the permissions of an IAM user account can be limited so that the
account grants access only to your Amazon RDS DB instances, whereas your AWS account grants access to all AWS
services.

The procedure for creating IAM user accounts is well described in the Amazon RDS documentation. This section only
goes through the essential steps and points out the steps required for using Amazon RDS with Percona Monitoring
and Management.

The first step is to define a policy which will hold all the necessary permissions. Then, you need to associate this
policy with the IAM user or group. In this section, we will create a new user for this purpose.

Creating a policy

A policy defines how AWS services can be accessed. Once defined it can be associated with an existing user or group.

To define a new policy use the IAM page at AWS.


1. Select the Policies option on the navigation panel and click the Create policy button.
2. On the Create policy page, select the JSON tab and replace the existing contents with the following JSON document.

{ "Version": "2012-10-17",
"Statement": [{ "Sid": "Stmt1508404837000",
"Effect": "Allow",
"Action": [ "rds:DescribeDBInstances",
"cloudwatch:GetMetricStatistics",
"cloudwatch:ListMetrics"],
"Resource": ["*"] },
{ "Sid": "Stmt1508410723001",
"Effect": "Allow",
"Action": [ "logs:DescribeLogStreams",
"logs:GetLogEvents",
"logs:FilterLogEvents" ],
"Resource": [ "arn:aws:logs:*:*:log-group:RDSOSMetrics:*" ]}
]
}

3. Click Review policy and set a name to your policy, such as AmazonRDSforPMMPolicy. Then, click the Create policy
button.

Creating an IAM user

Policies are attached to existing IAM users or groups. To create a new IAM user, select Users on the Identity and
Access Management page at AWS. Then click Add user and complete the following steps:


1. On the Add user page, set the user name and select the Programmatic access option under Select AWS access type. Set
a custom password and then proceed to permissions by clicking the Permissions button.

2. On the Set permissions page, add the new user to one or more groups if necessary. Then, click Review.

3. On the Add user page, click Create user.

Creating an access key for an IAM user

In order to be able to discover an Amazon RDS DB instance in PMM, you either need to use the access key and secret
access key of an existing IAM user or an IAM role. To create an access key for use with PMM, open the IAM console
and click Users on the navigation pane. Then, select your IAM user.

To create the access key, open the Security credentials tab and click the Create access key button. The system
automatically generates a new access key ID and a secret access key that you can provide on the PMM Add Instance
dashboard to have your Amazon RDS DB instances discovered.

Note

You may use an IAM role instead of IAM user provided your Amazon RDS DB instances are associated with the same
AWS account as PMM.

If the PMM Server and Amazon RDS DB instance were created using the same AWS account, you do not need to create
the access key ID and secret access key manually. PMM retrieves this information automatically and attempts to
discover your Amazon RDS DB instances.

Attaching a policy to an IAM user

The last step before you are ready to create an Amazon RDS DB instance is to attach the policy with the required
permissions to the IAM user.

First, make sure that the Identity and Access Management page is open and open Users. Then, locate and open the
IAM user that you plan to use with Amazon RDS DB instances. Complete the following steps, to apply the policy:

1. On the Permissions tab, click the Add permissions button.

2. On the Add permissions page, click Attach existing policies directly.

3. Using the Filter, locate the policy with the required permissions (such as AmazonRDSforPMMPolicy).

4. Select a checkbox next to the name of the policy and click Review.

5. The selected policy appears on the Permissions summary page. Click Add permissions.

The AmazonRDSforPMMPolicy is now added to your IAM user.


Setting up the Amazon RDS DB Instance

Query Analytics requires Configuring Performance Schema as the query source, because the slow query log is stored
on the AWS (Amazon Web Services) side, and QAN agent is not able to read it. Enable the performance_schema option
under Parameter Groups in Amazon RDS.

Warning

Enabling Performance Schema on T2 instances is not recommended because it can easily run the T2 instance out of
memory.

When adding a monitoring instance for Amazon RDS, specify a unique name to distinguish it from the local MySQL
instance. If you do not specify a name, it will use the client’s host name.

Create the pmm user with the following privileges on the Amazon RDS instance that you want to monitor:

GRANT SELECT, PROCESS, REPLICATION CLIENT ON *.* TO 'pmm'@'%' IDENTIFIED BY 'pass' WITH MAX_USER_CONNECTIONS 10;
GRANT SELECT, UPDATE, DELETE, DROP ON performance_schema.* TO 'pmm'@'%';

If you have Amazon RDS with a MySQL version prior to 5.5, REPLICATION CLIENT privilege is not available there and
has to be excluded from the above statement.

Note

General system metrics are monitored by using the rds_exporter exporter which replaces node_exporter . rds_exporter gives
access to Amazon Cloudwatch metrics.

node_exporter , used in versions of PMM prior to 1.8.0, was not able to monitor general system metrics remotely.

2.3.8 Adding an Amazon RDS MySQL, Aurora MySQL, or Remote Instance

The PMM Add Instance is now a preferred method of adding an Amazon RDS database instance to PMM. This method
supports Amazon RDS database instances that use Amazon Aurora, MySQL, or MariaDB engines, as well as any
remote PostgreSQL, ProxySQL, MySQL and MongoDB instances.


The following steps are needed to add an Amazon RDS database instance to PMM:


1. In the PMM web interface, go to PMM > PMM Add Instance.

2. Select Add an AWS RDS MySQL or Aurora MySQL Instance.

3. Enter the access key ID and the secret access key of your IAM user.

4. Click the Discover button for PMM to retrieve the available Amazon RDS instances.

For the instance that you would like to monitor, select the Start monitoring button.

5. You will see a new page with a number of fields. The list is divided into the following groups: Main details, RDS
database, Labels, and Additional options. Some already known data, such as the AWS access key you already entered, is
filled in automatically, and some fields are optional.


The Main details section lets you specify the DNS hostname of your instance, the service name to use within PMM, the
port your service is listening on, and the database user name and password.

The Labels section lets you specify labels for the environment, the AWS region and availability zone, the Replication
set and Cluster names, and a list of custom labels in key:value format.


The Additional options section contains flags for tuning RDS monitoring. They let you skip the connection check, use
TLS for the database connection, skip validation of the TLS certificate and hostname, and disable basic and/or
enhanced metrics collection for the RDS instance to reduce costs.

This section also contains a database-specific flag that enables Query Analytics for the selected remote database:

• when adding a remote MySQL, AWS RDS MySQL, or Aurora MySQL instance, you can choose Performance Schema as the
source for database monitoring

• when adding a PostgreSQL instance, you can activate the pg_stat_statements extension

• when adding a MongoDB instance, you can choose to use the Query Analytics MongoDB profiler

6. Finally press the Add service button to start monitoring your instance.

See also

• Amazon RDS Documentation: Setting Up

• Amazon AWS Documentation: Connecting to a DB Instance Running the MySQL Database Engine

• Amazon RDS Documentation: Modifying an Amazon RDS DB Instance

• Amazon RDS Documentation: Enhanced Monitoring

• Amazon RDS Documentation: Availability zones

• Amazon RDS Documentation: Master User Account Privileges

• Amazon AWS Documentation: Creating IAM policies

• Amazon AWS Documentation: IAM roles

• Amazon AWS Documentation: Managing Access Keys for IAM Users

• Amazon AWS Documentation: Parameter groups

2.3.9 Linux

Adding general system metrics service

PMM collects Linux metrics automatically from the moment you configure your node for monitoring with
pmm-admin config .

Installing DEB packages using apt-get

If you are running a DEB-based Linux distribution, you can use the apt package manager to install PMM client from
the official Percona software repository.

Percona provides .deb packages for 64-bit versions of popular Linux distributions.

The list can be found on Percona’s Software Platform Lifecycle page.

Note

Although PMM client should work on other DEB-based distributions, it is tested only on the platforms listed above.


To install the PMM client package, follow these steps.

1. Configure Percona repositories using the percona-release tool. First you’ll need to download and install the official
percona-release package from Percona:

wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb

Note

If you have previously enabled the experimental or testing Percona repository, don’t forget to disable them and
enable the release component of the original repository as follows:

sudo percona-release disable all


sudo percona-release enable original release

2. Install the PMM client package:

sudo apt-get update


sudo apt-get install pmm2-client

3. Register your Node:

pmm-admin config --server-insecure-tls --server-url=https://admin:admin@<IP Address>:443

4. You should see the following output:

Checking local pmm-agent status...


pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.

Installing RPM packages using yum

If you are running an RPM-based Linux distribution, use the yum package manager to install PMM Client from the
official Percona software repository.

Percona provides .rpm packages for 64-bit versions of Red Hat Enterprise Linux 6 (Santiago) and 7 (Maipo), including
its derivatives that claim full binary compatibility, such as, CentOS, Oracle Linux, Amazon Linux AMI, and so on.

Note

PMM Client should work on other RPM-based distributions, but it is tested only on RHEL and CentOS versions 6 and 7.


To install the PMM Client package, complete the following procedure. Run the following commands as root or by
using the sudo command:

1. Configure Percona repositories using the percona-release tool. First you’ll need to download and install the official
percona-release package from Percona:

sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

Note

If you have previously enabled the experimental or testing Percona repository, don’t forget to disable them and
enable the release component of the original repository as follows:

sudo percona-release disable all


sudo percona-release enable original release

See percona-release official documentation for details.

2. Install the pmm2-client package:

yum install pmm2-client

3. Once PMM Client is installed, run the pmm-admin config command with your PMM Server IP address to register your
Node within the Server:

pmm-admin config --server-insecure-tls --server-url=https://admin:admin@<IP Address>:443

You should see the following:

Checking local pmm-agent status...


pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.

2.3.10 External Services

Adding general external services

You can collect metrics from an external (custom) exporter on a node when:

• there is already a PMM Agent instance running and,

• this node has been configured using the pmm-admin config command.

USAGE

pmm-admin add external [--service-name=<service-name>] [--listen-port=<listen-port>] [--metrics-path=<metrics-path>]


[--scheme=<scheme>]
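
For example, to scrape a custom exporter already listening on the node (a sketch; the service name, port, and path are placeholders for your own exporter):

pmm-admin add external --service-name=my-exporter --listen-port=9256 --metrics-path=/metrics --scheme=http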

3. Using

3.1 In this section


• The User Interface

• Query Analytics

• Percona Platform

• Security Threat Tool

3.2 User Interface


You can access the PMM web interface using the IP address of the host where PMM Server is running. For example,
if PMM Server is running on a host with IP 192.168.100.1, access the following address with your web browser:
http://192.168.100.1 .

The PMM home page that opens provides an overview of the environment that you have set up to monitor by using
the pmm-admin tool.

From the PMM home page, you can access specific monitoring tools, or dashboards. Each dashboard features a
collection of metrics. These are graphs of a certain type that represent one specific aspect showing how metric values
change over time.

By default the PMM home page lists most recently used dashboards and helpful links to the information that may
be useful to understand PMM better.

The PMM home page lists all hosts that you have set up for monitoring as well as the essential details about their
performance such as CPU load, disk performance, or network activity.

3.2.1 Understanding Dashboards

The Metrics Monitor tool provides a historical view of metrics that are critical to a database server. Time-based
graphs are separated into dashboards by themes: some are related to MySQL or MongoDB, others provide general
system metrics.

3.2.2 Opening a Dashboard

The default PMM installation provides more than thirty dashboards. To make it easier to reach a specific dashboard,
the system offers two tools. The Dashboard Dropdown is a button in the header of any PMM page. It lists all
dashboards, organized into folders. The right sub-panel lets you rearrange dashboards, create new folders, and drag
dashboards into them. A text box at the top lets you find a dashboard by typing its name.

With Dashboard Dropdown, search the alphabetical list for any dashboard.

3.2.3 Viewing More Information about a Graph

Each graph has a description to display more information about the monitored data without cluttering the
interface.

These are on-demand descriptions in the tooltip format that you can find by hovering the mouse pointer over the
More Information icon at the top left corner of a graph. When you move the mouse pointer away from the More
Information button the description disappears.

Graph descriptions provide more information about a graph without claiming any space in the interface.

3.2.4 Rendering Dashboard Images

PMM Server currently can’t render dashboard images exported by Grafana without these additional set-up steps.

Part 1: Install dependencies

1. Connect to your PMM Server Docker container.

docker exec -it pmm-server bash

2. Install Grafana plugins.

grafana-cli plugins install grafana-image-renderer

3. Restart Grafana.

supervisorctl restart grafana

4. Install additional libraries.

yum install -y libXcomposite libXdamage libXtst cups libXScrnSaver pango atk adwaita-cursor-theme \
  adwaita-icon-theme at at-spi2-atk at-spi2-core cairo-gobject colord-libs dconf desktop-file-utils ed \
  emacs-filesystem gdk-pixbuf2 glib-networking gnutls gsettings-desktop-schemas gtk-update-icon-cache gtk3 \
  hicolor-icon-theme jasper-libs json-glib libappindicator-gtk3 libdbusmenu libdbusmenu-gtk3 libepoxy \
  liberation-fonts liberation-narrow-fonts liberation-sans-fonts liberation-serif-fonts libgusb \
  libindicator-gtk3 libmodman libproxy libsoup libwayland-cursor libwayland-egl libxkbcommon m4 mailx nettle \
  patch psmisc redhat-lsb-core redhat-lsb-submod-security rest spax time trousers xdg-utils xkeyboard-config alsa-lib


Part 2: Share your dashboard image

1. Navigate to the dashboard you want to share.

2. Open the panel menu (between the PMM main menu and the navigation breadcrumbs).

3. Select Share to reveal the Share Panel.

4. Click Direct link rendered image.

5. A new browser tab opens. Wait for the image to be rendered then use your browser’s image save function to
download the image.

If the necessary plugins are not installed, a message in the Share Panel will say so.

3.2.5 Navigating across Dashboards

Besides the Dashboard Dropdown button, you can also navigate across dashboards with the navigation menu, which
groups dashboards by application. Click the required group, then select the dashboard that matches your choice.

• PMM Query Analytics

• OS: The operating system status

• MySQL: MySQL and Amazon Aurora

• MongoDB: State of MongoDB hosts

• HA: High availability

• Cloud: Amazon RDS and Amazon Aurora

• Insight: Summary, cross-server and Prometheus

• PMM: Server settings

3.2.6 Zooming in on a single metric

On dashboards with multiple metrics, it is hard to see how the value of a single metric changes over time. Use the
context menu to zoom in on the selected metric so that it temporarily occupies the whole dashboard space.

Click the title of the metric that you are interested in and select the View option from the context menu that opens.

The selected metric opens to occupy the whole dashboard space. You may now set another time range using the
time and date range selector at the top of the Metrics Monitor page and analyze the metric data further.


To return to the dashboard, click the Back to dashboard button next to the time range selector.

The Back to dashboard button returns to the dashboard; this button appears when you are zooming in on one
metric.

Navigation menu allows you to navigate between dashboards while maintaining the same host under observation
and/or the same selected time range, so that for example you can start on MySQL Overview looking at host serverA,
switch to MySQL InnoDB Advanced dashboard and continue looking at serverA, thus saving you a few clicks in the
interface.

3.2.7 Annotations

The pmm-admin annotate command registers a moment in time, marking it with a text string called an annotation.

The presence of an annotation shows as a vertical dashed line on a dashboard graph; the annotation text is revealed
by mousing over the caret indicator below the line.

Annotations are useful for recording the moment of a system change or other significant application event.

They can be set globally or for specific nodes or services.


USAGE

pmm-admin annotate [--node|--service] <annotation> [--tags <tags>] [--node-name=<node>] [--service-name=<service>]

OPTIONS

<annotation>

The annotation string. If it contains spaces, it should be quoted.

--node

Annotate the current node or that specified by --node-name .

--service

Annotate all services running on the current node, or that specified by --service-name .

--tags

A quoted string that defines one or more comma-separated tags for the annotation. Example: "tag 1,tag 2" .

--node-name

The node name being annotated.

--service-name

The service name being annotated.

Combining flags

Flags may be combined as shown in the following examples.

--node

current node

--node-name

node with name


--node --node-name=NODE_NAME

node with name

--node --service-name

current node and service with name

--node --node-name --service-name

node with name and service with name

--node --service

current node and all services of current node

--node --node-name --service --service-name

service with name and node with name

--service

all services of the current node

--service-name

service with name

--service --service-name

service with name

--service --node-name

all services of current node and node with name

--service-name --node-name

service with name and node with name

--service --service-name --node-name

service with name and node with name

Note

If node or service name is specified, they are used instead of other parameters.
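
For example (a sketch; the annotation text and tags are illustrative), the following marks a deployment on the current node and all of its services:

pmm-admin annotate --node --service "Deployed application v1.2" --tags "deploy,release"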

Visibility

You can toggle the display of annotations on graphs with the PMM Annotations checkbox.


Remove the check mark to hide annotations from all dashboards.

See also

• docs.grafana.org: Annotations

3.3 Query Analytics


The Query Analytics dashboard shows how queries are executed and where they spend their time. It helps you
analyze database queries over time, optimize database performance, and find and remedy the source of problems.

Note

• Query Analytics supports MySQL, MongoDB and PostgreSQL. The minimum requirements for MySQL are:

• MySQL 5.1 or later (if using the slow query log)

• MySQL 5.6.9 or later (if using Performance Schema)

• Query Analytics data retrieval is not instantaneous and can be delayed due to network conditions. In such situations
no data is reported and a gap appears in the sparkline.

Query Analytics displays metrics in both visual and numeric form. Performance-related characteristics appear as
plotted graphics with summaries.

The dashboard contains three panels:

• Filters Panel

• Overview Panel

• Details Panel

3.3.1 Filters Panel

• The Filter panel occupies the left side of the dashboard. It lists filters, grouped by category. Selecting one
reduces the Overview list to those items matching the filter.

• A maximum of the first five of each category are shown. If there are more, the list is expanded by clicking Show
all beside the category name, and collapsed again with Show top 5.

• Applying a filter may make other filters inapplicable. These become grayed out and inactive.


• Click the chart symbol to navigate directly to an item’s associated dashboard.

• Separately, the global Time range setting filters results by time, either your choice of Absolute time range, or one
of the pre-defined Relative time ranges.

3.3.2 Overview Panel

To the right of the Filters panel and occupying the upper portion of the dashboard is the Overview panel.

Each row of the table represents the metrics for a chosen object type, one of:

• Query

• Service Name

• Database

• Schema

• User Name

• Client Host

At the top of the second column is the dimension menu. Use this to choose the object type.


On the right side of the dimension column is the Dimension Search bar.

Enter a string and press Enter to limit the view to queries containing only the specified keywords.

Delete the search text and press Enter to see the full list again.

Columns

• The first column is the object’s identifier. For Query, it is the query’s Fingerprint.

• The second column is the Main metric, containing a reduced graphical representation of the metric over time,
called a sparkline, and a horizontal meter, filled to reflect a percentage of the total value.

• Additional values are revealed as mouse-over tool-tips.

Tool-tips

• For the Query dimension, hovering over the information icon reveals the query ID and its example.

• Hovering on a column header reveals an informative tool-tip for that column.

• Hovering on the main metric sparkline highlights the data point and a tooltip shows the data value under the
cursor.


• Hovering on the main metric meter reveals the percentage of the total, and other details specific to the main
metric.

• Hovering on column values reveals more details on the value. The contents depends on the type of value.


Adding and removing columns

• Metrics columns are added with the Add column button.

• When clicked, a text field and list of available metrics are revealed. Select a metric or enter a search string to
reduce the list. Selecting a metric adds it to the panel.

• A metric column is removed by clicking on the column heading and selecting Remove column.

• The value plotted in the main metric column can be changed by clicking a metric column heading and selecting
Swap with main metric.

Sorting

• The entire list is sorted by one of the columns.

• Click either the up or down caret to sort the list by that column’s ascending or descending values.

Pagination

• The pagination device lets you move forwards or backwards through pages, jump to a specific page, and
choose how many items are listed per page.

• Queries are grouped into pages of 25, 50 or 100 items.

3.3.3 Details Panel

• Selecting an item in the Overview panel opens the Details panel with a Details Tab.

• If the dimension is Query, the panel also contains the Examples Tab, Explain Tab, and Tables Tab.


Details Tab

The Details tab contains a Query time distribution bar (only for MySQL databases) and a set of Metrics in collapsible
subpanels.

• The Query time distribution bar shows a query’s total time made up of colored segments, each segment
representing the proportion of time spent on one of the following named activities:

• query_time : Statement execution time.

• lock_time : Time to acquire locks.

• blk_read_time : Total time the statement spent reading blocks (if track_io_timing is enabled, otherwise zero).

• blk_write_time : Total time the statement spent writing blocks (if track_io_timing is enabled, otherwise zero).

• innodb_io_r_wait : Time for InnoDB to read the data from storage.

• innodb_queue_wait : Time the query spent either waiting to enter the InnoDB queue, or in it pending
execution.

• innodb_rec_lock_wait : Time the query waited for row locks.

• other : Remaining uncategorized query time.

• Metrics is a table with these headings:

• Metric: The Metric name, with a question-mark tool-tip that reveals a description of the metric on mouse-
over.

• Rate/Second: A sparkline chart of real-time values per unit time.

• Sum: A summation of the metric for the selected query, and the percentage of the total.

• Per Query Stats: The value of the metric per query.

• Each row in the table is a metric. The contents depends on the chosen dimension.

Examples Tab

(For Query dimension.)

The Examples tab shows an example of the selected query’s fingerprint or table element.


Explain Tab

(For Query dimension.)

The Explain tab shows the explain output for the selected query, in Classic or JSON formats:

• MySQL: Classic and JSON

• MongoDB: JSON only

• PostgreSQL: Not supported

Tables Tab

(For Query dimension.)

The Tables tab shows information on the tables and indexes involved in the selected query.

3.3.4 Query Analytics for MongoDB

MongoDB is conceptually different from relational database management systems, such as MySQL and MariaDB.


Relational database management systems store data in tables that represent single entities. Complex objects are
represented by linking several tables.

In contrast, MongoDB uses the concept of a document where all essential information pertaining to a complex
object is stored in one place.

Query Analytics can monitor MongoDB queries. Although MongoDB is not a relational database management
system, you analyze its databases and collections in the same interface using the same tools.

See also

MongoDB requirements

3.4 Percona Platform

3.4.1 About Percona Platform

Percona Platform provides value-added services to PMM.

The services comprise:

• Security Threat Tool

3.4.2 Security Threat Tool

The Security Threat Tool runs regular checks against connected databases, alerting you if any servers pose a potential
security threat.

The checks are automatically downloaded from Percona Platform and run every 24 hours. (This period is not
configurable.)

They run on the PMM Client side with the results passed to PMM Server for display in the Failed security checks
summary dashboard and the PMM Database Checks details dashboard.

Note

Check results data always remains on the PMM Server, and is not to be confused with anonymous data sent for
Telemetry purposes.

Where to see the results of checks

On your PMM home page, the Failed security checks dashboard shows a count of the number of failed checks.

More details can be seen by opening the Failed Checks dashboard using PMM > PMM Database Checks.

Note

After activating the Security Threat Tool, you must wait 24 hours for data to appear in the dashboard.

How to enable the Security Threat Tool

The Security Threat Tool is disabled by default. It can be enabled in PMM > PMM Settings (see PMM Settings Page).

Failed security checks summary dashboard when checks are disabled:


Failed database checks dashboard when disabled:

Checks made by the Security Threat Tool

mongodb_auth

This check returns a warning if MongoDB authentication is disabled.

mongodb_version

Warn if MongoDB/PSMDB version is not the latest.

mysql_empty_password

Warn if there are users without passwords.

mysql_version

Warn if MySQL/PS/MariaDB version is not the latest.

postgresql_version

Warn if PostgreSQL version is not the latest.

4. How to

4.1 In this section


• Configure - How to configure PMM via the PMM Settings page

• Secure - Various ways to secure your PMM installation

• Upgrade - How to upgrade PMM Server via the interface

• Optimize - How to improve PMM performance

4.2 Configure
The PMM Settings page lets you configure a number of PMM options.

Note

Press Apply changes to store any changes.

4.2.1 Metrics resolution

Metrics are collected at three intervals representing low, medium and high resolutions. Short time intervals are
regarded as high resolution metrics, while those at longer time intervals are low resolution.

The Metrics Resolution radio button lets you select one of four presets.

• Rare, Standard and Frequent are fixed presets.

• Custom is an editable preset.

Each preset is a group of Low, Medium and High metrics resolution values. Low resolution intervals increase the
time between collections, resulting in low-resolution metrics and lower disk usage. High resolution intervals decrease
the time between collections, resulting in high-resolution metrics and higher disk usage.

The default values for the fixed presets are:

Rare

• Low: 300 seconds

• Medium: 180 seconds

• High: 60 seconds


Standard

• Low: 60 seconds

• Medium: 10 seconds

• High: 5 seconds

Frequent

• Low: 30 seconds

• Medium: 5 seconds

• High: 1 second

Values for the Custom preset can be entered as values, or changed with the arrows.

Note

If there is poor network connectivity between PMM Server and PMM Client, or between PMM Client and the database
server it is monitoring, scraping every second may not be possible when the network latency is greater than 1 second.

4.2.2 Advanced Settings

Data Retention

Data retention specifies how long data is stored by PMM Server.

Telemetry

The Telemetry switch enables gathering and sending basic anonymous data to Percona, which helps us determine where
to focus development and gauge the uptake of the various versions of PMM. Specifically, this information helps us
determine whether we need to release patches to legacy versions beyond support, when supporting a particular version
is no longer necessary, and how the frequency of releases encourages or deters adoption.


Currently, only the following information is gathered:

• PMM Version,

• Installation Method (Docker, AMI, OVF),

• the Server Uptime.

We do not gather anything that would make the system identifiable, but two things are worth mentioning:

1. The Country Code is evaluated from the submitting IP address before it is discarded.

2. We do create an “instance ID” - a random string generated using UUID v4. This instance ID is generated to
distinguish new instances from existing ones, for figuring out instance upgrades.

The first telemetry reporting of a new PMM Server instance is delayed by 24 hours to allow sufficient time to disable
the service for those that do not wish to share any information.

There is a landing page for this service, available at check.percona.com, which clearly explains what this service is,
what it’s collecting, and how you can turn it off.

Grafana’s anonymous usage statistics is not managed by PMM. To activate it, you must change the PMM Server
container configuration after each update.

As well as via the PMM Settings page, you can also disable telemetry with the -e DISABLE_TELEMETRY=1 option in your
docker run statement for the PMM Server.
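
For example, a docker run command along the lines used elsewhere in this guide might look like this (a sketch; the container name and volumes depend on how you deployed PMM Server):

docker run -d -p 80:80 -p 443:443 --volumes-from pmm-data \
  --name pmm-server -e DISABLE_TELEMETRY=1 \
  --restart always percona/pmm-server:2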

Note

1. If the Security Threat Tool is enabled in PMM Settings, Telemetry is automatically enabled.

2. Telemetry is sent immediately; the 24-hour grace period is not honored.

Check for updates

When active, PMM will automatically check for updates and put a notification in the Updates dashboard if any are
available.

Security Threat Tool

The Security Threat Tool performs a range of security-related checks on a registered instance and reports the
findings.

It is disabled by default.

It can be enabled in PMM > PMM Settings > Settings > Advanced Settings > Security Threat Tool.

The checks take 24 hours to complete.

The results can be viewed in PMM > PMM Database Checks.

DBaaS

Shows whether DBaaS features are activated on this server.

Note

DBaaS is a technical preview and requires activation via a server feature flag. See Setting up a development
environment for DBaaS.


Public Address

Public address for accessing DBaaS features on this server.

4.2.3 SSH Key Details

This section lets you upload your public SSH key to access the PMM Server via SSH (for example, when accessing
PMM Server as a virtual appliance).

Enter your public key in the SSH Key field and click Apply SSH Key.

4.2.4 Alertmanager integration

VictoriaMetrics vmalert manages alerts. It is compatible with Prometheus Alertmanager.

• The Alertmanager URL field should contain the URL of the Alertmanager which would serve your PMM alerts.

• The Alerting rules field is used to specify alerting rules in the YAML configuration format.


Fill both fields and click the Apply Alertmanager settings button to proceed.
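
As a sketch of what the Alerting rules field accepts (standard Prometheus-style rules in YAML; the metric expression, threshold, and names below are purely illustrative):

groups:
  - name: pmm-example
    rules:
      - alert: NodeHighMemoryUsage
        expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes > 0.9
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Memory usage is above 90%"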

4.2.5 Percona Platform

This panel is where you create, log into, and log out of your Percona Platform account.

Logging in

If you have a Percona Platform account, enter your credentials and click Login.

Logging out


Click Sign out to log out of your Percona Platform account.

Create an account

To create a Percona Platform account:

• Click Sign up

• Enter a valid email address in the Email field

• Choose and enter a strong password in the Password field

• Select the check box acknowledging our terms of service and privacy policy

• Click Sign up

A brief message will confirm the creation of your new account and you may now log in with these credentials.


Note

Your Percona Platform account is separate from your PMM User account.

4.2.6 Diagnostics

PMM can generate a set of diagnostics data which can be examined and/or shared with Percona Support to help resolve
an issue faster. You can get collected logs from PMM Server by clicking the Download server diagnostics button.

See also

• How do I troubleshoot communication issues between PMM Client and PMM Server?

• Security Threat Tool

• Prometheus Alertmanager documentation

• Prometheus Alertmanager alerting rules

4.3 Upgrade
4.3.1 Updating a Server

Client and server components are installed and updated separately.

PMM Server can run natively, as a Docker image, a virtual appliance, or an AWS cloud instance. Each has its own
installation and update steps.

The preferred and simplest way to update PMM Server is with the PMM Upgrade panel on the Home page.

The panel shows:

• the current server version and release date;

• whether the server is up to date;

• the last time a check was made for updates.

Click the refresh button to manually check for updates.

If one is available, click the update button to update to the version indicated.

4.4 Secure
You can improve the security of your PMM installation with:

• SSL encryption to secure traffic between client and server;

• Grafana HTTPS secure cookies

To see which security features are enabled:

pmm-admin status

Tip

You can gain an extra level of security by keeping PMM Server isolated from the internet, if possible.

4.4.1 SSL encryption

You need valid SSL certificates to encrypt traffic between client and server.

With our Docker, OVF and AMI images, self-signed certificates are in /srv/nginx .

To use your own, you can either:

• mount the local certificate directory to the same location, or,

• copy your certificates to a running PMM Server container.

Mounting certificates

For example, if your own certificates are in /etc/pmm-certs :

docker run -d -p 443:443 --volumes-from pmm-data \
  --name pmm-server -v /etc/pmm-certs:/srv/nginx \
  --restart always percona/pmm-server:2

Note

• The certificates must be owned by root. You can do this with: sudo chown 0:0 /etc/pmm-certs/*

• The mounted certificate directory ( /etc/pmm-certs in this example) must contain the files certificate.crt , certificate.key ,
ca-certs.pem and dhparam.pem .

• For SSL encryption, the container must publish on port 443 instead of 80.

Copying certificates

If PMM Server is running as a Docker image, use docker cp to copy certificates. This example copies certificate files
from the current working directory to a running PMM Server docker container.

docker cp certificate.crt pmm-server:/srv/nginx/certificate.crt
docker cp certificate.key pmm-server:/srv/nginx/certificate.key
docker cp ca-certs.pem pmm-server:/srv/nginx/ca-certs.pem
docker cp dhparam.pem pmm-server:/srv/nginx/dhparam.pem


Enabling SSL when connecting PMM Client to PMM Server

pmm-admin config --server-url=https://<user>:<password>@<server IP> --server-insecure-tls

4.4.2 Grafana HTTPS secure cookies

To enable:

1. Start a shell within the Docker container: docker exec -it pmm-server bash

2. Edit /etc/grafana/grafana.ini

3. Enable cookie_secure and set the value to true

4. Restart Grafana: supervisorctl restart grafana
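The same steps can be scripted in one pass. A sketch, assuming the default grafana.ini where the cookie_secure line is either commented out or set to false:

docker exec pmm-server bash -c 'sed -i -E "s/^;?[[:space:]]*cookie_secure *=.*/cookie_secure = true/" /etc/grafana/grafana.ini && supervisorctl restart grafana'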

See also

https://grafana.com/docs/grafana/latest/administration/configuration/#cookie_secure


4.5 Optimize
4.5.1 Improving PMM Performance with Table Statistics Options

If a MySQL instance has a lot of schemas or tables, there are two options to help improve the performance of PMM
when adding instances with pmm-admin add : --disable-tablestats and --disable-tablestats-limit .

Note

• These settings can only be used when adding an instance. To change them, you must remove and re-add the
instances.

• You can only use one of these options when adding an instance.

4.5.2 Disable per-table statistics for an instance

When adding an instance with pmm-admin add , the --disable-tablestats option disables table statistics collection
when there are more than the default number (1000) of tables in the instance.

USAGE

sudo pmm-admin add mysql --disable-tablestats

4.5.3 Change the number of tables beyond which per-table statistics is disabled

When adding an instance with pmm-admin add , the --disable-tablestats-limit option changes the number of tables
(from the default of 1000) beyond which per-table statistics collection is disabled.

USAGE

sudo pmm-admin add mysql --disable-tablestats-limit=<LIMIT>

EXAMPLE

Add a MySQL instance, disabling per-table statistics collection when the number of tables in the instance reaches
2000.

sudo pmm-admin add mysql --disable-tablestats-limit=2000


5. Details

5.1 In this section


• Command line tools: pmm-admin

• List of dashboards

• API

• Glossary


5.2 Dashboards

5.2.1 Insight

Advanced Data Exploration


The Advanced Data Exploration dashboard provides detailed information about the progress of a single Prometheus
metric across one or more hosts.

VIEW ACTUAL METRIC VALUES (GAUGE)

A gauge is a metric that represents a single numerical value that can arbitrarily go up and down.

Gauges are typically used for measured values like temperatures or current memory usage, but also “counts” that can
go up and down, like the number of running goroutines.

VIEW METRIC RATE OF CHANGE (COUNTER)

A counter is a cumulative metric that represents a single numerical value that only ever goes up. A counter is
typically used to count requests served, tasks completed, errors occurred, etc. Counters should not be used to
expose current counts of items whose number can also go down, e.g. the number of currently running goroutines.
Use gauges for this use case.

METRIC RATES

Shows the number of samples Per second stored for a given interval in the time series.

NUMA-related metrics

This dashboard supports metrics related to NUMA. The names of all these metrics start with node_memory_numa .


Home Dashboard

The Home Dashboard is a high-level overview of your environment, the starting page of the PMM portal from which
you can open the tools of PMM, and browse to online resources.

On the PMM home page, you can also find the version number and a button to update your PMM Server.

GENERAL INFORMATION

This section contains links to online resources, such as PMM documentation, release notes, and blogs.

SHARED AND RECENTLY USED DASHBOARDS

This section is automatically updated to show the most recent dashboards that you worked with. It also contains the
dashboards that you have bookmarked.

STATISTICS

This section shows the total number of hosts added to PMM and the total number of database instances being monitored. It also shows the current version number. Use the Check for Updates Manually button to see if you are using the most recent version of PMM.


ENVIRONMENT OVERVIEW

This section lists all added hosts along with essential information about their performance. For each host, you can
find the current values of the following metrics:

• CPU Busy

• Memory Available

• Disk Reads

• Disk Writes

• Network IO

• DB Connections

• DB QPS

• Virtual CPUs

• RAM

• Host Uptime

• DB Uptime


5.2.2 PMM

PMM Inventory

The Inventory dashboard is a high level overview of all objects PMM “knows” about.

It contains three tabs (services, agents, and nodes) listing the corresponding objects and details about them, so that users can better understand which objects are registered against PMM Server. These objects form a hierarchy with Node at the top, then Services and Agents assigned to a Node.

• Nodes – Where the services and agents run. Each node is assigned a node_id , associated with a machine_id (from /etc/machine-id ). Examples are bare metal, virtualized machines, and containers.

• Services – Individual service names and where they run, against which agents will be assigned. Each instance of a service gets a service_id value that is related to a node_id . Examples are MySQL and Amazon Aurora MySQL. This also makes it possible to support multiple mysqld instances on a single node, with different service names, e.g. mysql1-3306 and mysql1-3307.

• Agents – Each binary (exporter, agent) running on a client gets an agent_id value.

• pmm-agent is at the top of the tree, assigned to a node_id

• node_exporter is assigned to the pmm-agent agent_id

• mysqld_exporter and QAN MySQL Perfschema are assigned to a service_id

Examples are pmm-agent, node_exporter, mysqld_exporter, and QAN MySQL Perfschema.
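To see this hierarchy for a particular client node, you can list the services and agents registered from it with pmm-admin, for example:

pmm-admin list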


REMOVING ITEMS FROM THE INVENTORY

You can remove items from the inventory.

1. Open Home Dashboard > PMM Inventory

2. In the first column, select the items to be removed.

3. Click Delete. The interface will ask you to confirm the operation.
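Services can also be removed from the command line on the client node with pmm-admin. A minimal sketch, assuming a MySQL service added under the (hypothetical) name mysql-prod-1:

pmm-admin remove mysql mysql-prod-1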


5.2.3 OS Dashboards

CPU Utilization Details

OVERALL CPU UTILIZATION

The Overall CPU Utilization metric shows how much of the overall CPU time is used by the server. It has these
components:

Max Core Utilization

No description

System

This component is the proportion of time the CPUs spent inside the Linux kernel for operations like context switching, memory allocation and queue handling.


User

This component is the time spent in the user space. Normally, most of the MySQL CPU time is in user space. A high
value of user time indicates a CPU bound workload.

Softirq

This component is the portion of time the CPU spent servicing software interrupts generated by the device drivers. A high value of softirq may indicate a poorly configured device. The network devices are generally the main source of high softirq values.

Steal

When multiple virtual machines share the same physical host, some virtual machines may be allowed to use more than their share of CPU, and that CPU time is accounted as Steal by the virtual machine from which the time is taken.

Iowait

This component is the time the CPU spent waiting for disk IO requests to complete. A high value of iowait
indicates a disk bound load.

Nice

No description

In addition, sampling of the Max utilization of a single core is shown.

Note

This metric presents global values: while there may be a lot of unused CPU, a single core may be saturated. Look at the
Max Core Utilization to see if any core is reaching close to 100%.

CURRENT CPU THREADS UTILIZATION

This shows the total utilization of each CPU core along with the average utilization of all CPU cores. Watch for any
core close to 100% utilization and investigate the root cause.

CPU THREADS FREQUENCY

No description

CURRENT CPU CORES TEMPERATURE

No description

OVERALL CPU THREADS UTILIZATION DETAILS

No description


Disk Details

MOUNTPOINT USAGE

Shows the percentage of disk space utilization for every mountpoint defined on the system. Having some of the
mountpoints close to 100% space utilization is not good because of the risk of a “disk full” error that can block one
of the services or even cause a crash of the entire system.

In cases where the mountpoint is close to 100% consider removing unused files or expanding the space allocated to
the mountpoint.

MOUNTPOINT

Shows information about the disk space usage of the specified mountpoint.

Used is the amount of space used.

Free is the amount of space not in use.

Used+Free is the total disk space allocated to the mountpoint.

Having Free close to 0 B is not good because of the risk of a “disk full” error that can block one of the services or even
cause a crash of the entire system.

In cases where Free is close to 0 B consider removing unused files or expanding the space allocated to the
mountpoint.

DISK LATENCY

Shows the average latency for reads and writes per IO device. Higher than typical latency for highly loaded storage indicates saturation (overload) and is a frequent cause of performance problems. Higher than normal latency can also indicate internal storage problems.

DISK OPERATIONS

Shows the amount of physical IOs (reads and writes) different devices are serving. Spikes in the number of IOs served often correspond to performance problems due to IO subsystem overload.


DISK BANDWIDTH

Shows the volume of reads and writes the storage is handling. This can be a better measure of IO capacity usage for network-attached and SSD storage as it is often bandwidth limited. The amount of data being written to the disk can be used to estimate flash storage lifetime.

DISK LOAD

Shows how much the disk was loaded for reads or writes as the average number of outstanding requests at different periods of time. High disk load is a good measure of actual storage utilization. Different storage types handle load differently: some will show latency increases at low loads, while others can handle higher load with no problems.

DISK IO UTILIZATION

Shows disk utilization as the percent of time when there was at least one IO request in flight. It is designed to match the utilization available in the iostat tool. It is not a very good measure of true IO capacity utilization; consider looking at the IO latency and Disk Load graphs instead.

AVG DISKS OPERATIONS MERGE RATIO

Shows how effectively the operating system is able to merge logical IO requests into physical requests. This is a good measure of IO locality, which can be used for workload characterization.

DISK IO SIZE

Shows average size of a single disk operation.


Network Details

LAST HOUR STATISTIC

This section reports the inbound speed, outbound speed, traffic errors and drops, and retransmit rate.

NETWORK TRAFFIC

This section contains the Network traffic and network utilization hourly metrics.

NETWORK TRAFFIC DETAILS

This section offers the following metrics:

• Network traffic by packets

• Network traffic errors

• Network traffic drop

• Network traffic multicast

NETWORK NETSTAT TCP

This section offers the following metrics:

• Timeout value used for retransmitting

• Min TCP retransmission timeout

• Max TCP retransmission timeout

• Netstat: TCP

• TCP segments

NETWORK NETSTAT UDP

In this section, you can find the following metrics:

• Netstat: UDP

• UDP Lite


The graphs in the UDP Lite metric give statistics about:

InDatagrams

Packets received

OutDatagrams

Packets sent

InCsumErrors

Datagrams with checksum errors

InErrors

Datagrams that could not be delivered to an application

RcvbufErrors

Datagrams for which not enough socket buffer memory to receive

SndbufErrors

Datagrams for which not enough socket buffer memory to transmit

NoPorts

Datagrams received on a port with no listener

ICMP

This section has the following metrics:

• ICMP Errors

• Messages/Redirects

• Echos

• Timestamps/Mask Requests

ICMP Errors

InErrors

Messages which the entity received but determined as having ICMP-specific errors (bad ICMP checksums, bad
length, etc.)

OutErrors

Messages which this entity did not send due to problems discovered within ICMP, such as a lack of buffers

InDestUnreachs

Destination Unreachable messages received

OutDestUnreachs

Destination Unreachable messages sent


InType3

Destination unreachable

OutType3

Destination unreachable

InCsumErrors

Messages with ICMP checksum errors

InTimeExcds

Time Exceeded messages received

Messages/Redirects

InMsgs

Messages which the entity received. Note that this counter includes all those counted by icmpInErrors

InRedirects

Redirect messages received

OutMsgs

Messages which this entity attempted to send. Note that this counter includes all those counted by icmpOutErrors

OutRedirects

Redirect messages sent. For a host, this object will always be zero, since hosts do not send redirects

Echos

InEchoReps

Echo Reply messages received

InEchos

Echo (request) messages received

OutEchoReps

Echo Reply messages sent

OutEchos

Echo (request) messages sent

Timestamps/Mask Requests

InAddrMaskReps

Address Mask Reply messages received


InAddrMasks

Address Mask Request messages received

OutAddrMaskReps

Address Mask Reply messages sent

OutAddrMasks

Address Mask Request messages sent

InTimestampReps

Timestamp Reply messages received

InTimestamps

Timestamp Request messages received

OutTimestampReps

Timestamp Reply messages sent

OutTimestamps

Timestamp Request messages sent


Memory Details

MEMORY USAGE

No description


Node Temperature Details

The Node Temperature Details dashboard exposes hardware monitoring and sensor data obtained through the
sysfs virtual filesystem of the node.

Hardware monitoring devices attached to the CPU and/or other chips on the motherboard let you monitor the
hardware health of a system. Most modern systems include several of such devices. The actual list can include
temperature sensors, voltage sensors, fan speed sensors, and various additional features, such as the ability to
control the rotation speed of the fans.

CPU CORES TEMPERATURES

Presents data taken from the temperature sensors of the CPU

CHIPS TEMPERATURES

Presents data taken from the temperature sensors connected to other system controllers

FAN ROTATION SPEEDS

Fan rotation speeds reported in RPM (rotations per minute).

FAN POWER USAGE

Describes the pulse width modulation of the PWM-equipped fans. PWM operates like a switch that constantly cycles on and off, thereby regulating the amount of power the fan gains: 100% makes it rotate at full speed, while a lower percentage slows rotation down proportionally.


Nodes Compare

This dashboard lets you compare a wide range of parameters. Parameters of the same type are shown side by side
for all servers, grouped into the following sections:

• System Information

• CPU

• Memory

• Disk Partitions

• Disk Performance

• Network

The System Information section shows the System Info summary of each server, as well as System Uptime, CPU Cores,
RAM, Saturation Metrics, and Load Average gauges.

The CPU section offers the CPU Usage, Interrupts, and Context Switches metrics.

In the Memory section, you can find the Memory Usage, Swap Usage, and Swap Activity metrics.

The Disk Partitions section encapsulates two metrics, Mountpoint Usage and Free Space.

The Disk Performance section contains the I/O Activity, Disk Operations, Disk Bandwidth, Disk IO Utilization, Disk Latency,
and Disk Load metrics.

Finally, Network section shows Network Traffic, and Network Utilization Hourly metrics.


Nodes Overview

The Nodes Overview dashboard provides details about how efficiently the following components work. Each component is represented as a section in the dashboard.

• CPU

• Memory & Swap

• Disk

• Network

The CPU section offers the CPU Usage, CPU Saturation and Max Core Usage, Interrupts and Context Switches, and Processes
metrics.

In the Memory section, you can find the Memory Utilization, Virtual Memory Utilization, Swap Space, and Swap Activity
metrics.

The Disk section contains the I/O Activity, Global File Descriptors Usage, Disk IO Latency, and Disk IO Load metrics.

In the Network section, you can find the Network Traffic, Network Utilization Hourly, Local Network Errors, and TCP
Retransmission metrics.


Node Summary

SYSTEM SUMMARY

The output from pt-summary , one of the Percona Toolkit utilities.

CPU USAGE

The CPU time is measured in clock ticks or seconds. It is useful to measure CPU time as a percentage of the CPU’s
capacity, which is called the CPU usage.

CPU SATURATION AND MAX CORE USAGE

When a system is running with maximum CPU utilization, the transmitting and receiving threads must all share the
available CPU. This will cause data to be queued more frequently to cope with the lack of CPU. CPU Saturation may
be measured as the length of a wait queue, or the time spent waiting on the queue.

INTERRUPTS AND CONTEXT SWITCHES

Interrupt is an input signal to the processor indicating an event that needs immediate attention. An interrupt signal
alerts the processor and serves as a request for the processor to interrupt the currently executing code, so that the
event can be processed in a timely manner.

Context switch is the process of storing the state of a process or thread, so that it can be restored and resume
execution at a later point. This allows multiple processes to share a single CPU, and is an essential feature of a
multitasking operating system.

PROCESSES

No description

MEMORY UTILIZATION

No description

VIRTUAL MEMORY UTILIZATION

No description


SWAP SPACE

No description

SWAP ACTIVITY

Swap Activity is memory management that involves swapping sections of memory to and from physical storage.

I/O ACTIVITY

Disk I/O includes read or write or input/output operations involving a physical disk. It is the speed with which the
data transfer takes place between the hard disk drive and RAM.

GLOBAL FILE DESCRIPTORS USAGE

No description

DISK IO LATENCY

Shows average latency for Reads and Writes IO Devices. Higher than typical latency for highly loaded storage
indicates saturation (overload) and is frequent cause of performance problems. Higher than normal latency also can
indicate internal storage problems.

DISK IO LOAD

Shows how much disk was loaded for reads or writes as average number of outstanding requests at different period
of time. High disk load is a good measure of actual storage utilization. Different storage types handle load
differently - some will show latency increases on low loads others can handle higher load with no problems.

NETWORK TRAFFIC

Network traffic refers to the amount of data moving across a network at a given point in time.

NETWORK UTILIZATION HOURLY

No description

LOCAL NETWORK ERRORS

Total Number of Local Network Interface Transmit Errors, Receive Errors and Drops. Should be Zero

TCP RETRANSMISSION

Retransmission, essentially identical with Automatic repeat request (ARQ), is the resending of packets which have
been either damaged or lost. Retransmission is one of the basic mechanisms used by protocols operating over a
packet switched computer network to provide reliable communication (such as that provided by a reliable byte
stream, for example TCP).


NUMA Details

For each node, this dashboard shows metrics related to Non-uniform memory access (NUMA).

MEMORY USAGE

Reports the total, used, and free memory over time.

FREE MEMORY PERCENT

Shows the free memory as the ratio to the total available memory.

NUMA MEMORY USAGE TYPES

Dirty

Memory waiting to be written back to disk

Bounce

Memory used for block device bounce buffers

Mapped

Files which have been mmaped, such as libraries

KernelStack

The memory the kernel stack uses. This is not reclaimable.

NUMA ALLOCATION HITS

Memory successfully allocated on this node as intended.

NUMA ALLOCATION MISSED

Memory missed is allocated on a node despite the process preferring some different node.

Memory foreign is intended for a node, but actually allocated on some different node.


ANONYMOUS MEMORY

Active

Anonymous memory that has been used more recently and usually not swapped out.

Inactive

Anonymous memory that has not been used recently and can be swapped out.

NUMA FILE (PAGECACHE)

Active(file)

Pagecache memory that has been used more recently and usually not reclaimed until needed.

Inactive(file)

Pagecache memory that can be reclaimed without huge performance impact.

SHARED MEMORY

Shmem

Total used shared memory (shared between several processes, thus including RAM disks, SYS-V-IPC and BSD-like SHMEM).

HUGEPAGES STATISTICS

Total

Number of hugepages being allocated by the kernel (Defined with vm.nr_hugepages ).

Free

The number of hugepages not being allocated by a process

Surp

The number of hugepages in the pool above the value in vm.nr_hugepages . The maximum number of surplus
hugepages is controlled by vm.nr_overcommit_hugepages .

LOCAL PROCESSES

Memory allocated on a node while a process was running on it.

REMOTE PROCESSES

Memory allocated on a node while a process was running on some other node.

SLAB MEMORY

Slab

Slab allocation is a memory management mechanism intended for the efficient memory allocation of kernel objects.

SReclaimable

The part of the Slab that might be reclaimed (such as caches).

SUnreclaim

The part of the Slab that can’t be reclaimed under memory pressure


Processes Details

The Processes Details dashboard displays Linux process information - PIDs, Threads, and Processes. The dashboard shows how many processes/threads are either in the kernel run queue (runnable state) or in the blocked queue (waiting for I/O). When the number of processes in the runnable state is constantly higher than the number of CPU cores available, the load is CPU bound. When the number of processes blocked waiting for I/O is large, the load is disk bound. The running average of the sum of these two quantities is the basis of the loadavg metric.

The dashboard consists of two parts: the first section describes metrics for all hosts, and the second part provides
charts for each host.

Charts for all hosts, available in the first section, are the following ones:

• States of Processes

• Number of PIDs

• Percentage of Max PIDs Limit

• Number of Threads

• Percentage of Max Threads Limit

• Runnable Processes

• Blocked Processes Waiting for I/O

• Sleeping Processes

• Running Processes

• Disk Sleep Processes

• Stopped Processes

• Zombie Processes

• Dead Processes


The following charts are present in the second part, available for each host:

• Processes

• States of Processes

• Number of PIDs

• Percentage of Max PIDs Limit

• Number of Threads

• Percentage of Max Threads Limit

NUMBER OF PIDS

No description

PERCENTAGE OF MAX PIDS LIMIT

No description

NUMBER OF THREADS

No description

PERCENTAGE OF MAX THREADS LIMIT

No description

RUNNABLE PROCESSES

Processes

The Processes graph shows how many processes/threads are either in the kernel run queue (runnable state) or in the blocked queue (waiting for I/O). When the number of processes in the runnable state is constantly higher than the number of CPU cores available, the load is CPU bound. When the number of processes blocked waiting for I/O is large, the load is disk bound. The running average of the sum of these two quantities is the basis of the loadavg metric.

BLOCKED PROCESSES WAITING FOR I/O

Processes

The Processes graph shows how many processes/threads are either in the kernel run queue (runnable state) or in the blocked queue (waiting for I/O). When the number of processes in the runnable state is constantly higher than the number of CPU cores available, the load is CPU bound. When the number of processes blocked waiting for I/O is large, the load is disk bound. The running average of the sum of these two quantities is the basis of the loadavg metric.

SLEEPING PROCESSES

No description

RUNNING PROCESSES

No description

DISK SLEEP PROCESSES

No description

STOPPED PROCESSES

No description

ZOMBIE PROCESSES

No description


DEAD PROCESSES

No description


5.2.4 Prometheus Dashboards

Prometheus

PROMETHEUS OVERVIEW

This section shows the most essential parameters of the system where Prometheus is running, such as CPU and
memory usage, scrapes performed and the samples ingested in the head block.

RESOURCES

This section provides details about the consumption of CPU and memory by the Prometheus process. This section
contains the following metrics:

• Prometheus Process CPU Usage

• Prometheus Process Memory Usage

• Disk Space Utilization

STORAGE (TSDB) OVERVIEW

This section includes a collection of metrics related to the usage of storage. It includes the following metrics:

• Data blocks (Number of currently loaded data blocks)

• Total chunks in the head block

• Number of series in the head block

• Current retention period of the head block

• Activity with chunks in the head block

• Reload block data from disk


SCRAPING

This section contains metrics that help monitor the scraping process. This section contains the following metrics:

• Ingestion

• Prometheus Targets

• Scraped Target by Job

• Scrape Time by Job

• Scraped Target by Instance

• Scraped Time by Instance

• Scrapes by Target Frequency

• Scrape Frequency Versus Target

• Scraping Time Drift

• Prometheus Scrape Interval Variance

• Slowest Jobs

• Largest Samples Jobs

QUERIES

This section contains metrics that monitor Prometheus queries. This section contains the following metrics:

• Prometheus Queries

• Prometheus Query Execution

• Prometheus Query Execution Latency

• Prometheus Query Execution Load

NETWORK

Metrics in this section help detect network problems.

• HTTP Requests by Handler

• HTTP Errors

• HTTP Avg Response time by Handler

• HTTP 99% Percentile Response time by Handler

• HTTP Response Average Size by Handler

• HTTP 99% Percentile Response Size

TIME SERIES INFORMATION

This section shows the top 10 metrics by time series count and the top 10 hosts by time series count.

SYSTEM LEVEL METRICS

Metrics in this section give an overview of the essential system characteristics of PMM Server. This information is also
available from the Nodes Overview dashboard.

PMM SERVER LOGS

This section contains a link to download the logs collected from your PMM Server and further analyze possible
problems. The exported logs are requested when you submit a bug report.


Prometheus Exporter Status

The Prometheus Exporter Status dashboard reports the consumption of resources by the Prometheus exporters used
by PMM. For each exporter, this dashboard reveals the following information:

• CPU usage

• Memory usage

• File descriptors used

• Exporter uptime


Prometheus Exporters Overview

PROMETHEUS EXPORTERS SUMMARY

This section provides a summary of how exporters are used across the selected hosts. It includes the average usage
of CPU and memory as well as the number of hosts being monitored and the total number of running exporters.

Avg CPU Usage per Host

Shows the average CPU usage in percent per host for all exporters.

Avg Memory Usage per Host

Shows the Exporters average Memory usage per host.

Monitored Hosts

Shows the number of monitored hosts that are running Exporters.

Exporters Running

Shows the total number of Exporters running with this PMM Server instance.

Note

The CPU usage and memory usage do not include the additional CPU and memory usage required to produce metrics
by the application or operating system.

PROMETHEUS EXPORTERS RESOURCE USAGE BY NODE

This section shows how resources, such as CPU and memory, are being used by the exporters for the selected hosts.

CPU Usage

Plots the Exporters’ CPU usage across each monitored host (by default, All hosts).


Memory Usage

Plots the Exporters’ Memory usage across each monitored host (by default, All hosts).

PROMETHEUS EXPORTERS RESOURCE USAGE BY TYPE

This section shows how resources, such as CPU and memory, are being used by the exporters for host types: MySQL,
MongoDB, ProxySQL, and the system.

CPU Cores Used

Shows the Exporters’ CPU Cores used for each type of Exporter.

Memory Usage

Shows the Exporters’ memory used for each type of Exporter.

LIST OF HOSTS

At the bottom, this dashboard shows details for each running host.

CPU Used

Shows the CPU usage as a percentage for all Exporters.

Mem Used

Shows total Memory Used by Exporters.

Exporters Running

Shows the number of Exporters running.

RAM

Shows the total amount of RAM of the host.

Virtual CPUs

Shows the total number of virtual CPUs on the host.

You can click the value of the CPU Used, Memory Used, or Exporters Running columns to open the Prometheus Exporter
Status dashboard for further analysis.

See also

• Understand Your Prometheus Exporters with Percona Monitoring and Management (PMM)

• Prometheus documentation: Exporters and integrations


5.2.5 MySQL Dashboards

MySQL Amazon Aurora Details

AMAZON AURORA TRANSACTION COMMITS

This graph shows the number of commits which the Amazon Aurora engine performed, as well as the average commit latency. The latency does not always correlate with the number of commits performed and can be quite high in certain situations.

• Number of Amazon Aurora Commits: The average number of commit operations per second.

• Amazon Aurora Commit avg Latency: The average amount of latency for commit operations


AMAZON AURORA LOAD

This graph shows which statements contribute the most load on the system, as well as what load corresponds to Amazon Aurora transaction commits.

• Write Transaction Commit Load: Load in Average Active Sessions per second for COMMIT operations

• UPDATE load: Load in Average Active Sessions per second for UPDATE queries

• SELECT load: Load in Average Active Sessions per second for SELECT queries

• DELETE load: Load in Average Active Sessions per second for DELETE queries

• INSERT load: Load in Average Active Sessions per second for INSERT queries

An active session is a connection that has submitted work to the database engine and is waiting for a response from
it. For example, if you submit an SQL query to the database engine, the database session is active while the
database engine is processing that query.

AURORA MEMORY USED

This graph shows how much memory is used by the Amazon Aurora lock manager, as well as the amount of memory used by Amazon Aurora to store the Data Dictionary.

• Aurora Lock Manager Memory: the amount of memory used by the Lock Manager, the module responsible
for handling row lock requests for concurrent transactions.

• Aurora Dictionary Memory: the amount of memory used by the Dictionary, the space that contains metadata
used to keep track of database objects, such as tables and indexes.

AMAZON AURORA STATEMENT LATENCY

This graph shows average latency for the most important types of statements. Latency spikes are often indicative of
the instance overload.

• DDL Latency: Average time to execute DDL queries

• DELETE Latency: Average time to execute DELETE queries

• UPDATE Latency: Average time to execute UPDATE queries

• SELECT Latency: Average time to execute SELECT queries

• INSERT Latency: Average time to execute INSERT queries

AMAZON AURORA SPECIAL COMMAND COUNTERS

Amazon Aurora MySQL allows a number of commands which are not available in standard MySQL. This graph shows the usage of such commands. Regular "unit_test" calls can be seen in a default Amazon Aurora installation; the rest will depend on your workload.

• show_volume_status : The number of executions per second of the command SHOW VOLUME STATUS. The SHOW
VOLUME STATUS query returns two server status variables, Disks and Nodes. These variables represent the total
number of logical blocks of data and storage nodes, respectively, for the DB cluster volume.

• awslambda : The number of AWS Lambda calls per second. AWS Lambda is an event-driven, serverless computing platform provided by AWS. It is a compute service that runs code in response to events. You can run any kind of code from Aurora by invoking Lambda from a stored procedure or a trigger.

• alter_system : The number of executions per second of the special query ALTER SYSTEM, that is a special query
to simulate an instance crash, a disk failure, a disk congestion or a replica failure. It’s a useful query for testing
the system.


AMAZON AURORA PROBLEMS

This graph shows different kinds of internal Amazon Aurora MySQL problems, which in general should be zero in normal operation.

Anything non-zero is worth examining in greater depth.


MySQL Command/Handler Counters Compare

This dashboard shows server status variables. On this dashboard, you may select multiple servers and compare their
counters simultaneously.

Server status variables appear in two sections: Commands and Handlers. Choose one or more variables in the
Command and Handler fields in the top menu to select the variables which will appear in the COMMANDS or HANDLERS
section for each host. Your comparison may include from one up to three hosts.

By default or if no item is selected in the menu, PMM displays each command or handler respectively.

See also

MySQL 8.0 Documentation: Server Status Variables


MySQL InnoDB Compression Details

This dashboard helps you analyze the efficiency of InnoDB compression.

COMPRESSION LEVEL AND FAILURE RATE THRESHOLD

InnoDB Compression Level

The level of zlib compression to use for InnoDB compressed tables and indexes.

InnoDB Compression Failure Threshold

The compression failure rate threshold for a table.

Compression Failure Rate Threshold

The maximum percentage that can be reserved as free space within each compressed page, allowing room to
reorganize the data and modification log within the page when a compressed table or index is updated and the
data might be recompressed.

Write Pages to the Redo Log

Specifies whether images of re-compressed pages are written to the redo log. Re-compression may occur when
changes are made to compressed data.

STATISTIC OF COMPRESSION OPERATIONS

Compress Attempts

Number of compression operations attempted. Pages are compressed whenever an empty page is created or the
space for the uncompressed modification log runs out.

Uncompressed Attempts

Number of uncompression operations performed. Compressed InnoDB pages are uncompressed whenever
compression fails, or the first time a compressed page is accessed in the buffer pool and the uncompressed page
does not exist.


CPU CORE USAGE

CPU Core Usage for Compression

Shows the time in seconds spent by InnoDB Compression operations.

CPU Core Usage for Uncompression

Shows the time in seconds spent by InnoDB Uncompression operations.

BUFFER POOL TOTAL

Total Used Pages

Shows the total amount of used compressed pages in the InnoDB Buffer Pool, split by page size.

Total Free Pages

Shows the total amount of free compressed pages in the InnoDB Buffer Pool, split by page size.

See also

MySQL 5.7 InnoDB INFORMATION_SCHEMA Documentation


MySQL InnoDB Details

INNODB ACTIVITY

Writes (Rows)

Writes (Transactions)

Row Writes per Trx

Rows written per transaction, counting only transactions which modify rows. This is a better indicator of transaction write size than averaging over all transactions, including those which did not do any writes.

Rows Read Per Trx

Log Space per Trx

Rollbacks

Percent of Transaction Rollbacks (as portion of read-write transactions).

BP Reqs Per Row

Number of Buffer Pool requests per row access. High numbers here indicate going through long undo chains, deep trees and other inefficient data access. It can be less than one due to several rows being read from a single page.


Log Fsync Per Trx

Log Fsync Per Transaction.

InnoDB Row Reads

InnoDB Row Operations

This graph allows you to see which operations occur and the number of rows affected per operation. A graph like Queries Per Second will give you an idea of queries, but one query could affect millions of rows.

InnoDB Row Writes

InnoDB Row Operations

This graph allows you to see which operations occur and the number of rows affected per operation. A graph like Queries Per Second will give you an idea of queries, but one query could affect millions of rows.

InnoDB Read-Only Transactions

InnoDB Read-Write Transactions

InnoDB Transactions Information (RW)

The InnoDB Transactions Information graph shows details about the recent transactions. Transaction IDs Assigned represents the total number of transactions initiated by InnoDB. RW Transaction Commits are the number of transactions that are not read-only. Insert-Update Transactions Commits are transactions on the Undo entries. Non-Locking RO Transaction Commits are transactions committed from a SELECT statement in auto-commit mode or transactions explicitly started with "start transaction read only".

Note: If you do not see any metric, try running: SET GLOBAL innodb_monitor_enable=all; in the MySQL client.

Misc InnoDB Transactions Information

Additional Innodb Transaction Information

INNODB STORAGE SUMMARY

Innodb Tables

Current Number of Innodb Tables in database

Data Buffer Pool Fit

Buffer Pool Size as Portion of the Data

Avg Row Size

Amount of Data Per Row

Index Size Per Row

Index Size Per Row shows how much space we're using for indexes on a per-row basis.

InnoDB Data Summary

Space Allocated

Total amount of space allocated. May not exactly match the amount of space used on the file system but provides good guidance.

Space Used

Space used in All Innodb Tables. Reported Allocated Space Less Free Space.


Data Length

Space Used by Data (Including Primary Key).

Index Length

Space Used by Secondary Indexes.

Estimated Rows

Estimated number of Rows in Innodb Storage Engine. It is not exact value and it can change abruptly as information
is updated.

Indexing Overhead

How Much Indexes Take Compared to Data.

Free Space Percent

How Much Space is Free. Too high value wastes space on disk.

Free

Allocated Space not currently used by Data or Indexes.

InnoDB File Per Table

If enabled, by default every table will have its own tablespace, represented as its own .ibd file, rather than all tables being stored in a single system tablespace.

INNODB DISK IO

InnoDB Page Size

Avg Data Read Rq Size

Avg Data Write Rq Size

Avg Log Write Rq Size

Data Written Per Fsync

Log Written Per Fsync

Data Read Per Row Read

Data Written Per Row Written

Note: Due to difference in timing of Row Write and Data Write the value may be misleading on short intervals.

InnoDB Data I/O

InnoDB I/O

• Data Writes - The total number of InnoDB data writes.

• Data Reads - The total number of InnoDB data reads (OS file reads).

• Log Writes - The number of physical writes to the InnoDB redo log file.

• Data Fsyncs - The number of fsync() operations. The frequency of fsync() calls is influenced by the setting of the
innodb_flush_method configuration option.


InnoDB Data Bandwidth

InnoDB Log IO

InnoDB I/O

• Data Writes - The total number of InnoDB data writes.

• Data Reads - The total number of InnoDB data reads (OS file reads).

• Log Writes - The number of physical writes to the InnoDB redo log file.

• Data Fsyncs - The number of fsync() operations. The frequency of fsync() calls is influenced by the setting of
the innodb_flush_method configuration option.

InnoDB FSyncs

InnoDB Pending IO

InnoDB Pending Fsyncs

InnoDB Auto Extend Increment

When growing the InnoDB system tablespace, extend it by this amount at a time.

InnoDB Double Write

Whether the InnoDB Double Write Buffer is enabled. Doing so doubles the amount of writes InnoDB has to do to storage, but it is required to avoid potential data corruption during a crash on most storage subsystems.

InnoDB Fast Shutdown

Fast Shutdown means InnoDB will not perform complete Undo Space and Change Buffer cleanup on shutdown,
which is faster but may interfere with certain major upgrade operations.

InnoDB Open Files

Maximum Number of Files InnoDB is Allowed to use.

InnoDB File Use

Portion of Allowed InnoDB Open Files Use.

INNODB IO OBJECTS

InnoDB IO Targets Write Load

Write load includes both writes and fsyncs (referred to as misc).

INNODB BUFFER POOL

Buffer Pool Size

InnoDB Buffer Pool Size

InnoDB maintains a storage area called the buffer pool for caching data and indexes in memory. Knowing how the
InnoDB buffer pool works, and taking advantage of it to keep frequently accessed data in memory, is one of the
most important aspects of MySQL tuning. The goal is to keep the working set in memory. In most cases, this should
be between 60%-90% of available memory on a dedicated database host, but depends on many factors.
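To see where you currently stand, you can compare the configured buffer pool size with the page counters it actually reports. A sketch using the mysql client:

mysql -e "SELECT @@innodb_buffer_pool_size/1024/1024/1024 AS buffer_pool_gb;"
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%';"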

Buffer Pool Size of Total RAM

InnoDB Buffer Pool Size % of Total RAM

InnoDB maintains a storage area called the buffer pool for caching data and indexes in memory. Knowing how the
InnoDB buffer pool works, and taking advantage of it to keep frequently accessed data in memory, is one of the
most important aspects of MySQL tuning. The goal is to keep the working set in memory. In most cases, this should
be between 60%-90% of available memory on a dedicated database host, but depends on many factors.

NUMA Interleave

Interleave Buffer Pool between NUMA zones to better support NUMA systems.

Buffer Pool Activity

Combined value of Buffer Pool Read and Write requests.

BP Data

Percent of Buffer Pool Occupied by Cached Data.

BP Data Dirty

Percent of Data which is Dirty.

BP Miss Ratio

How often buffer pool read requests have to read from the disk. Keep this percent low for good performance.

BP Write Buffering

Number of Logical Writes to Buffer Pool Per logical Write.

InnoDB Buffer Pool LRU Sub-Chain Churn

Buffer Pool Chunk Size

Size of the “Chunk” for buffer pool allocation. Allocation of buffer pool will be rounded by this number. It also affects
the performance impact of online buffer pool resize.

Buffer Pool Instances

Number of Buffer Pool Instances. Higher values help reduce contention but also increase overhead.

Read Ahead IO Percent

Percent of Reads Caused by Innodb Read Ahead.

Read Ahead Wasted

Percent of Pages Fetched by Read Ahead Evicted Without Access.

Dump Buffer Pool on Shutdown

Load Buffer Pool at Startup

Portion of Buffer Pool To Dump/Load

A larger portion increases dump/load time but captures more of the original buffer pool content and hence may reduce warmup time.

Include Buffer Pool in Core Dump

Whether to include the buffer pool in crash core dumps. Doing so may dramatically increase the core dump file size and slow down restart. It only makes a difference if core dumping on crash is enabled.

InnoDB Old Blocks

Percent of the buffer pool to be reserved for "old blocks", which have been touched repeatedly over a period of time.

InnoDB Old Blocks Time

The Time which has to pass between multiple touches for the block for it to qualify as old block.


InnoDB Random Read Ahead

Is InnoDB Random ReadAhead Enabled.

InnoDB Random Read Ahead

The Threshold (in Pages) to trigger Linear Read Ahead.

InnoDB Read IO Threads

Number of Threads used to Schedule Reads.

InnoDB Write IO Threads

Number of Threads used to Schedule Writes.

InnoDB Native AIO Enabled

Whether Native Asynchronous IO is enabled. Strongly recommended for optimal performance.

INNODB BUFFER POOL - REPLACEMENT MANAGEMENT

LRU Scan Depth

Innodb LRU Scan Depth

This variable defines the InnoDB free page target per buffer pool instance. When the number of free pages falls below this number, the page cleaner will free the required number of pages, flushing or evicting pages from the tail of the LRU list as needed.

LRU Clean Page Searches

When a page is being read (or created), the page needs to be allocated in the Buffer Pool.

Free List Miss Rate

The most efficient way to get a clean page is to grab one from the free list. However, if no pages are available in the free list, an LRU scan needs to be performed.

LRU Get Free Loops

If the free list was empty, an LRU Get Free Loop will be performed. It may perform an LRU scan or may use some other heuristics and shortcuts to get a free page.

LRU Scans

If a page could not be found in any free list and other shortcuts did not work, a free page will be searched for by scanning the LRU chain, which is not efficient.

Pages Scanned in LRU Scans

Pages Scanned Per Second while doing LRU scans. If this value is large (thousands) it means a lot of resources are
wasted.

Pages scanned per LRU Scan

Average number of pages scanned per LRU scan. A large number of scans can consume a lot of resources and also introduce significant additional latency to queries.

LRU Get Free Waits

Indicates that InnoDB could not find a free page in the LRU list and had to sleep. Should be zero.


INNODB CHECKPOINTING AND FLUSHING

Pages Flushed from Flush List

Number of pages flushed from the "Flush List". This combines pages flushed through Adaptive Flush and Background Flush.

Page Flush Batches Executed

The InnoDB flush cycle typically runs at 1-second intervals. If it is too far off from this number, it can indicate an issue.

Pages Flushed Per Batch

How many pages are flushed per batch. Large batches can "choke" the IO subsystem and starve other IO which needs to happen.

Neighbor Flushing Enabled

Neighbor flushing is optimized for rotational media; unless you're running spinning disks, you should disable it.

InnoDB Checkpoint Age

InnoDB Checkpoint Age

The maximum checkpoint age is determined by the total length of all transaction log files ( innodb_log_file_size ).

When the checkpoint age reaches the maximum checkpoint age, blocks are flushed synchronously. The rule of thumb is to keep one hour of traffic in those logs and let the checkpointing perform its work as smoothly as possible. If you don't do this, InnoDB will do synchronous flushing at the worst possible time, i.e. when you are busiest.

Pages Flushed (Adaptive)

Adaptive Flush flushes pages from the flush list based on the need to advance the checkpoint (driven by the redo generation rate) and to maintain the number of dirty pages within the set limit.

Adaptive Flush Batches Executed

Pages Per Batch (Adaptive)

Pages Flushed Per Adaptive Batch.

Neighbor Flushing

To optimize IO for rotational Media InnoDB may flush neighbor pages. It can cause significant wasted IO for flash
storage. Generally for flash you should run with innodb_flush_neighbors=0 but otherwise this shows how much IO
you’re wasting.
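On flash storage the setting is dynamic, so it can be turned off at runtime (persist the change in your configuration file as well). A sketch:

mysql -e "SET GLOBAL innodb_flush_neighbors = 0;"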

Pages Flushed (LRU)

Flushing from the tail of LRU list needs to happen when data does not fit in buffer pool in order to maintain free
pages readily available for new data to be read.

LRU Flush Batches Executed

Pages Per Batch (LRU)

Pages Flushed Per LRU Batch.

LSN Age Flush Batch Target

Target for Pages to Flush due to LSN Age.

Pages Flushed (Neighbor)

Number of Neighbor pages flushed (If neighbor flushing is enabled) from Flush List and LRU List Combined.


Neighbor Flush Batches Executed

Pages Per Batch (Neighbor)

Pages Flushed Per Neighbor.

Sync Flush Waits

Indicates that InnoDB could not keep up with checkpoint flushing and had to trigger a sync flush. This should never happen.

Pages Flushed (Background)

Pages Flushed by Background Flush which is activated when server is considered to be idle.

Background Flush Batches Executed

Pages Per Batch (Background)

Pages Flushed Per Background Batch.

Redo Generation Rate

Rate at which LSN (Redo) is Created. It may not match how much data is written to log files due to block size
rounding.

Innodb Flushing by Type

Pages Evicted (LRU)

This corresponds to the number of clean pages which were evicted (made free) from the tail of the LRU buffer.

Page Eviction Batches

Pages Evicted per Batch

Max Log Space Used

Single Page Flushes

Single page flushes happen in the rare case when a clean page could not be found in the LRU list. It should be zero for most workloads.

Single Page Flush Pages Scanned

Pages Scanned Per Single Page Flush

Innodb IO Capacity

Estimated number of IOPS the storage system can provide. It is used to scale background activities. Do not set it to the actual storage capacity.

Innodb IO Capacity Max

InnoDB IO Capacity to use when falling behind and need to catch up with Flushing.

INNODB LOGGING

Total Log Space

Number of Innodb Log Files Multiplied by Their Size.

Log Buffer Size

InnoDB Log Buffer Size

The size of buffer InnoDB uses for buffering writes to log files.


At Transaction Commit

What to do with the log file at transaction commit: do nothing and wait for the timeout to flush the data from the log buffer, write it to the OS cache but do not fsync, or flush and fsync at every commit.

Flush Transaction Log Every

Every Specified Number of Seconds Flush Transaction Log.

InnoDB Write Ahead Block Size

This variable can be seen as minimum IO alignment InnoDB will use for Redo log file. High Values cause waste, low
values can make IO less efficient.

Log Write Amplification

How much Writes to Log Are Amplified compared to how much Redo is Generated.

Log Fsync Rate

Redo Generated per Trx

Amount of Redo Generated Per Write Transaction. This is a good indicator of transaction size.

InnoDB Log File Usage Hourly

InnoDB Log File Usage Hourly

Along with the buffer pool size, innodb_log_file_size is the most important setting when we are working with
InnoDB. This graph shows how much data was written to InnoDB’s redo logs over each hour. When the InnoDB log
files are full, InnoDB needs to flush the modified pages from memory to disk.

The rule of thumb is to keep one hour of traffic in those logs and let the checkpointing perform its work as smoothly as possible. If you don't do this, InnoDB will do synchronous flushing at the worst possible time, i.e. when you are busiest.

This graph can help guide you in setting the correct innodb_log_file_size .
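One way to approximate the hourly redo volume is to sample the Innodb_os_log_written status counter over a busy interval and extrapolate. A sketch using the mysql client:

B1=$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';" | awk '{print $2}')
sleep 60
B2=$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';" | awk '{print $2}')
echo "Approximate redo written per hour: $(( (B2 - B1) * 60 )) bytes"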

Log Padding Written

Amount of Log Padding Written.

InnoDB Log File Size

InnoDB Log Files

Number of InnoDB Redo Log Files.

Log Bandwidth

Redo Generation Rate

Rate at which LSN (Redo) is Created. It may not match how much data is written to log files due to block size
rounding.

InnoDB Group Commit Batch Size

The InnoDB Group Commit Batch Size graph shows how many bytes were written to the InnoDB log files per
attempt to write. If many threads are committing at the same time, one of them will write the log entries of all the
waiting threads and flush the file. Such a process reduces the number of disk operations needed and enlarges the batch size.


INNODB LOCKING

Lock Wait Timeout

InnoDB Lock Wait Timeout

How long to wait for row lock before timing out.

InnoDB Deadlock Detection

If disabled, InnoDB will not detect deadlocks but will rely on timeouts.

InnoDB Auto Increment Lock Mode

Will Define How much locking will come from working with Auto Increment Columns.

Rollback on Timeout

Whether to roll back the whole transaction on timeout or just the last statement.

Row Lock Blocking

Percent of active sessions which are blocked due to waiting on InnoDB row locks.

Row Writes per Trx

Rows written per transaction, counting only transactions which modify rows. This is a better indicator of transaction write size than averaging over all transactions, including those which did not do any writes.

Rollbacks

Percent of Transaction Rollbacks (as portion of read-write transactions).

InnoDB Row Lock Wait Activity

Innodb Row Lock Wait Time

Innodb Row Lock Wait Load

Average Number of Sessions blocked from proceeding due to waiting on row level lock.

Innodb Row Locks Activity

Innodb Table Lock Activity

Current Locks

INNODB UNDO SPACE AND PURGING

Undo Tablespaces

Max Undo Log Size

Innodb Undo Log Truncate

Purge Threads

Max Purge Lag

Maximum number of unpurged transactions; if this number is exceeded, a delay will be introduced for incoming DML statements.

Max Purge Lag Delay

Current Purge Delay

The Delay Injected due to Purge Thread(s) unable to keep up with purge progress.


Rollback Segments

InnoDB Purge Activity

The InnoDB Purge Performance graph shows metrics about the page purging process. The purge process removes the undo entries from the history list, cleans up the pages of old versions of modified rows, and effectively removes deleted rows.

Note: If you do not see any metric, try running: SET GLOBAL innodb_monitor_enable=all; in the MySQL client.

Transactions and Undo Records

InnoDB Undo Space Usage

The InnoDB Undo Space Usage graph shows the amount of space used by the Undo segment. If the amount of
space grows too much, look for long running transactions holding read views opened in the InnoDB status.

Note: If you do not see any metric, try running: SET GLOBAL innodb_monitor_enable=all; in the MySQL client.

Transaction History

InnoDB Purge Throttling

Records Per Undo Log Page

How Many Undo Operations Are Handled Per Each Undo Log Page.

Purge Invoked

How Frequently Purge Operation is Invoked.

Ops Per Purge

How many purge actions are done per invocation.

Undo Slots Used

Number of Undo Slots Used.

Max Transaction History Length

Purge Batch Size

Rseg Truncate Frequency

INNODB PAGE OPERATIONS

InnoDB Page Splits and Merges

The InnoDB Page Splits graph shows the InnoDB page maintenance activity related to splitting and merging pages.
When an InnoDB page, other than the top most leaf page, has too much data to accept a row update or a row insert,
it has to be split in two. Similarly, if an InnoDB page, after a row update or delete operation, ends up being less than
half full, an attempt is made to merge the page with a neighbor page. If the resulting page size is larger than the
InnoDB page size, the operation fails. If your workload causes a large number of page splits, try lowering the
innodb_fill_factor variable (5.7+).

Note: If you do not see any metric, try running: SET GLOBAL innodb_monitor_enable=all; in the MySQL client.
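innodb_fill_factor is a dynamic variable, so it can be lowered for testing without a restart (a sketch; pick a value appropriate for your workload):

mysql -e "SET GLOBAL innodb_fill_factor = 90;"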

Page Merge Success Ratio

InnoDB Page Reorg Attempts

The InnoDB Page Reorgs graph shows information about the page reorganization operations. When a page receives an update or an insert that affects the offset of other rows in the page, a reorganization is needed. If the reorganization process finds out there is not enough room in the page, the page will be split. Page reorganization can only fail for compressed pages.


Note: If you do not see any metric, try running: SET GLOBAL innodb_monitor_enable=all; in the MySQL client.

InnoDB Page Reorgs Failures

The InnoDB Page Reorgs graph shows information about the page reorganization operations. When a page receives an update or an insert that affects the offset of other rows in the page, a reorganization is needed. If the reorganization process finds out there is not enough room in the page, the page will be split. Page reorganization can only fail for compressed pages.

Note: If you do not see any metric, try running: SET GLOBAL innodb_monitor_enable=all; in the MySQL client.

InnoDB Fill Factor

The portion of the page to fill when doing a sorted index build. Lowering this value worsens space utilization but reduces the need to split pages when new data is inserted into the index.

INNODB ADAPTIVE HASH INDEX

Adaptive Hash Index Enabled

The Adaptive Hash Index helps to optimize index lookups but can be a severe hotspot for some workloads.

Adaptive Hash Index Partitions

How many partitions are used for the Adaptive Hash Index (to reduce contention).

Percent of Pages Hashed

Number of Pages Added to AHI vs Number of Pages Added to Buffer Pool.

AHI Miss Ratio

Percent of Searches which could not be resolved through AHI.

Rows Added Per Page

Number of rows “hashed” for each page added to the AHI.

AHI ROI

How Many Successful Searches using AHI are performed per each row maintenance operation.

InnoDB AHI Usage

The InnoDB AHI Usage graph shows the search operations on the InnoDB adaptive hash index and its efficiency. The adaptive hash index is a search hash designed to speed access to InnoDB pages in memory. If the hit ratio is small, the working data set is larger than the buffer pool and the AHI should likely be disabled.

Note: If you do not see any metric, try running: SET GLOBAL innodb_monitor_enable=all; in the MySQL client.

InnoDB AHI Miss Ratio

InnoDB AHI Churn - Rows

InnoDB AHI Churn - Pages

INNODB CHANGE BUFFER

Change Buffer Max Size

The Maximum Size of Change Buffer (as Percent of Buffer Pool Size).

Change Buffer Max Size

The Maximum Size of Change Buffer (Bytes).


InnoDB Change Buffer Merge Load

Average number of active change buffer merge operations in progress.

INNODB CONTENTION

InnoDB Thread Concurrency

If enabled, limits the number of threads allowed inside the InnoDB kernel at the same time.

InnoDB Commit Concurrency

If enabled, limits the number of threads allowed inside the InnoDB kernel at the same time during the commit stage.

InnoDB Thread Sleep Delay

The time a thread will sleep before re-entering the InnoDB kernel under high contention.

InnoDB Adaptive Max Sleep Delay

If set to a non-zero value, the InnoDB Thread Sleep Delay is adjusted automatically depending on the load, up to the value specified by this variable.

InnoDB Concurrency Tickets

Number of low-level operations InnoDB can perform after entering the InnoDB kernel before it is forced to exit and yield to another waiting thread.

InnoDB Spin Wait Delay

InnoDB Spin Wait Pause Multiplier

InnoDB Sync Spin Loops

InnoDB Contention - OS Waits

The InnoDB Contention - OS Waits graph shows the number of times an OS wait operation was required while waiting to get the lock. This happens once the spin rounds are exhausted.

Note: If you do not see any metric, try running: SET GLOBAL innodb_monitor_enable=all; in the MySQL client.

InnoDB Contention - Spin Rounds

The InnoDB Contention - Spin Rounds graph shows the number of spin rounds executed in order to get a lock. A
spin round is a fast retry to get the lock in a loop.

Note: If you do not see any metric, try running: SET GLOBAL innodb_monitor_enable=all; in the MySQL client.

INNODB MISC

InnoDB Main Thread Utilization

The InnoDB Main Thread Utilization graph shows the portion of time the InnoDB main thread spent on various tasks.

Note: If you do not see any metric, try running: SET GLOBAL innodb_monitor_enable=all; in the MySQL client.

InnoDB Activity

The InnoDB Activity graph shows a measure of the activity of the InnoDB threads.

Note: If you do not see any metric, try running: SET GLOBAL innodb_monitor_enable=all; in the MySQL client.

InnoDB Dedicated Server

Whether InnoDB is automatically optimized for a dedicated server environment (auto-scaling the cache and some other variables).


InnoDB Sort Buffer Size

This buffer is used for building InnoDB indexes using the sort algorithm.

InnoDB Stats Auto Recalc

Update Stats when Metadata Queried

Refresh InnoDB statistics when metadata is queried by SHOW TABLE STATUS or INFORMATION_SCHEMA queries. If enabled, this can cause severe performance issues.
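
A minimal sketch of checking and changing this behavior from the MySQL client; the variable controlling it is innodb_stats_on_metadata (a standard MySQL variable, but verify it on your version):

SHOW GLOBAL VARIABLES LIKE 'innodb_stats_on_metadata';
SET GLOBAL innodb_stats_on_metadata = OFF;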

Index Condition Pushdown (ICP)

Index Condition Pushdown (ICP) is an optimization for the case where MySQL retrieves rows from a table using an
index. Without ICP, the storage engine traverses the index to locate rows in the base table and returns them to the
MySQL server which evaluates the WHERE condition for the rows. With ICP enabled, and if parts of
the WHERE condition can be evaluated by using only columns from the index, the MySQL server pushes this part of
the WHERE condition down to the storage engine. The storage engine then evaluates the pushed index condition by
using the index entry and only if this is satisfied is the row read from the table. ICP can reduce the number of times
the storage engine must access the base table and the number of times the MySQL server must access the storage
engine.
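
As an illustration, assuming a hypothetical table people with a composite index on (zipcode, lastname), EXPLAIN reports “Using index condition” in the Extra column when ICP is applied, and the optimization can be toggled via the optimizer_switch variable:

EXPLAIN SELECT * FROM people WHERE zipcode = '95054' AND lastname LIKE '%etrunia%';
-- "Using index condition" in the Extra column means the lastname filter is pushed down to the storage engine.
SET optimizer_switch = 'index_condition_pushdown=off';  -- disable ICP for comparison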

InnoDB Persistent Statistics

InnoDB Persistent Sample Pages

Number of Pages To Sample if Persistent Statistics are Enabled.

InnoDB Transient Sample Pages

Number of Pages To Sample if Persistent Statistics are Disabled.

INNODB ONLINE OPERATIONS (MARIADB)

InnoDB Defragmentation

The InnoDB Defragmentation graph shows the status information related to the InnoDB online defragmentation
feature of MariaDB for the optimize table command. To enable this feature, the variable innodb-defragment must be
set to 1 in the configuration file.

Note: Currently available only on a MariaDB server.
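
A minimal option-file sketch for enabling it (the section and value follow standard MariaDB conventions; restart the server so the configuration-file change takes effect), after which running OPTIMIZE TABLE on a table triggers online defragmentation:

[mysqld]
innodb_defragment = 1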

InnoDB Online DDL

The InnoDB Online DDL graph shows the state of the online DDL (alter table) operations in InnoDB. The progress metric is an estimate of the percentage of the rows processed by the online DDL.

Note: Currently available only on a MariaDB server.

MYSQL SUMMARY

MySQL Uptime

MySQL Uptime

The amount of time since the last restart of the MySQL server process.

Current QPS

Current QPS

Based on the queries reported by MySQL’s SHOW STATUS command, it is the number of statements executed by the
server within the last second. This variable includes statements executed within stored programs, unlike the
Questions variable. It does not count COM_PING or COM_STATISTICS commands.
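
For reference, the underlying counter can be sampled manually; a rough sketch of approximating QPS from the MySQL client:

SHOW GLOBAL STATUS LIKE 'Queries';
-- Sample the Queries counter twice, for example 10 seconds apart, and divide the
-- difference by the sampling interval to approximate queries per second.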


File Handlers Used

Table Open Cache Miss Ratio

Table Open Cache Size

Table Definition Cache Size

MySQL Connections

Max Connections

Max Connections is the maximum permitted number of simultaneous client connections. By default, this is 151.
Increasing this value increases the number of file descriptors that mysqld requires. If the required number of
descriptors are not available, the server reduces the value of Max Connections.

mysqld actually permits Max Connections + 1 clients to connect. The extra connection is reserved for use by accounts
that have the SUPER privilege, such as root.

Max Used Connections is the maximum number of connections that have been in use simultaneously since the
server started.

Connections is the number of connection attempts (successful or not) to the MySQL server.
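
These values can be cross-checked from the MySQL client; a small sketch using the standard variable and status names:

SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS WHERE Variable_name IN ('Max_used_connections', 'Connections', 'Threads_connected');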

MySQL Client Thread Activity

MySQL Active Threads

Threads Connected is the number of open connections, while Threads Running is the number of threads not
sleeping.

MySQL Handlers

MySQL Handlers

Handler statistics are internal statistics on how MySQL is selecting, updating, inserting, and modifying rows, tables,
and indexes.

This is in fact the layer between the Storage Engine and MySQL.

• read_rnd_next is incremented when the server performs a full table scan and this is a counter you don’t really
want to see with a high value.

• read_key is incremented when a read is done with an index.

• read_next is incremented when the storage engine is asked to ‘read the next index entry’. A high value means a
lot of index scans are being done.

Top Command Counters

Top Command Counters

The Com_ statement counter variables indicate the number of times each xxx statement has been executed. There is
one status variable for each type of statement. For example, Com_delete and Com_update count DELETE and UPDATE
statements, respectively. Com_delete_multi and Com_update_multi are similar but apply to DELETE and UPDATE
statements that use multiple-table syntax.

MySQL Network Traffic

MySQL Network Traffic

Here we can see how much network traffic is generated by MySQL. Outbound is network traffic sent from MySQL and
Inbound is network traffic MySQL has received.


NODE SUMMARY

System Uptime

The parameter shows how long a system has been “up” and running without a shut down or restart.

Load Average

The system load is a measurement of the computational work the system is performing. Each running process either
using or waiting for CPU resources adds 1 to the load.

RAM

RAM (Random Access Memory) is the hardware in a computing device where the operating system, application
programs and data in current use are kept so they can be quickly reached by the device’s processor.

Memory Available

Percent of memory available. Note: on modern Linux kernels the amount of memory available to applications is not the same as Free + Cached + Buffers.

Virtual Memory

RAM + SWAP

Disk Space

Sum of disk space on all partitions. Note it can be significantly over-reported in some installations.

Min Space Available

Lowest percent of the disk space available.

CPU Usage

The CPU time is measured in clock ticks or seconds. It is useful to measure CPU time as a percentage of the CPU’s
capacity, which is called the CPU usage.

CPU Saturation and Max Core Usage

When a system is running with maximum CPU utilization, the transmitting and receiving threads must all share the
available CPU. This will cause data to be queued more frequently to cope with the lack of CPU. CPU Saturation may
be measured as the length of a wait queue, or the time spent waiting on the queue.

Disk I/O and Swap Activity

Disk I/O includes read or write or input/output operations involving a physical disk. It is the speed with which the
data transfer takes place between the hard disk drive and RAM.

Swap Activity is memory management that involves swapping sections of memory to and from physical storage.

Network Traffic

Network traffic refers to the amount of data moving across a network at a given point in time.


MySQL MyISAM/Aria Details

MYISAM KEY BUFFER PERFORMANCE

The Key Read Ratio (Key_reads/Key_read_requests) ratio should normally be less than 0.01.

The Key Write Ratio (Key_writes/Key_write_requests) ratio is usually near 1 if you are using mostly updates and
deletes, but might be much smaller if you tend to do updates that affect many rows at the same time or if you are
using the DELAY_KEY_WRITE table option.

ARIA PAGECACHE READS/WRITES

This graph is similar to InnoDB buffer pool reads/writes. aria-pagecache-buffer-size is the main cache for the Aria
storage engine. If you see high reads/writes (physical IO), i.e. reads are close to read requests and/or writes are close
to write requests you may need to increase the aria-pagecache-buffer-size (may need to decrease other buffers:
key_buffer_size , innodb_buffer_pool_size , etc.)

ARIA TRANSACTION LOG SYNCS

This is similar to InnoDB log file syncs. If you see lots of log syncs and want to relax the durability settings you can
change aria_checkpoint_interval (in seconds) from 30 (default) to a higher number. It is good to look at the disk IO
dashboard as well.

ARIA PAGECACHE BLOCKS

This graph shows the utilization of the Aria pagecache. This is similar to the InnoDB buffer pool graph. If you see all blocks are used you may consider increasing aria-pagecache-buffer-size (and may need to decrease other buffers:
key_buffer_size , innodb_buffer_pool_size , etc.)


MySQL MyRocks Details

The MyRocks storage engine developed by Facebook based on the RocksDB storage engine is applicable to systems
which primarily interact with the database by writing data to it rather than reading from it. RocksDB also features a
good level of compression, higher than that of the InnoDB storage engine, which makes it especially valuable when
optimizing the usage of hard drives.

PMM collects statistics on the MyRocks storage engine for MySQL in the Metrics Monitor. The information for this dashboard comes from the Information Schema tables.


Metrics

• MyRocks cache

• MyRocks cache data bytes R/W

• MyRocks cache index hit rate

• MyRocks cache index

• MyRocks cache filter hit rate

• MyRocks cache filter

• MyRocks cache data bytes inserted

• MyRocks bloom filter

• MyRocks memtable

• MyRocks memtable size

• MyRocks number of keys

• MyRocks cache L0/L1

• MyRocks number of DB ops

• MyRocks R/W

• MyRocks bytes read by iterations

• MyRocks write ops

• MyRocks WAL

• MyRocks number reseeks in iterations

• RocksDB row operations

• MyRocks file operations

• RocksDB stalls

• RocksDB stops/slowdowns

See also

MyRocks Information schema


MySQL Instance Summary

MYSQL CONNECTIONS

Max Connections

Max Connections is the maximum permitted number of simultaneous client connections. By default, this is 151.
Increasing this value increases the number of file descriptors that mysqld requires. If the required number of
descriptors are not available, the server reduces the value of Max Connections.

mysqld actually permits Max Connections + 1 clients to connect. The extra connection is reserved for use by accounts
that have the SUPER privilege, such as root.

Max Used Connections is the maximum number of connections that have been in use simultaneously since the
server started.

Connections is the number of connection attempts (successful or not) to the MySQL server.

MYSQL ABORTED CONNECTIONS

Aborted Connections

When a given host connects to MySQL and the connection is interrupted in the middle (for example due to bad
credentials), MySQL keeps that info in a system table (since 5.6 this table is exposed in performance_schema).

If the number of failed requests without a successful connection reaches the value of max_connect_errors, mysqld assumes that something is wrong and blocks the host from further connections.

To allow connections from that host again, you need to issue the FLUSH HOSTS statement.
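
A quick sketch of checking the threshold and unblocking hosts from the MySQL client:

SHOW GLOBAL VARIABLES LIKE 'max_connect_errors';
FLUSH HOSTS;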

MYSQL CLIENT THREAD ACTIVITY

MySQL Active Threads

Threads Connected is the number of open connections, while Threads Running is the number of threads not
sleeping.


MYSQL THREAD CACHE

MySQL Thread Cache

The thread_cache_size variable sets how many threads the server should cache to reuse. When a client disconnects,
the client’s threads are put in the cache if the cache is not full. It is autosized in MySQL 5.6.8 and above (capped to
100). Requests for threads are satisfied by reusing threads taken from the cache if possible, and only when the cache
is empty is a new thread created.

• Threads_created: The number of threads created to handle connections.

• Threads_cached: The number of threads in the thread cache.

MYSQL SLOW QUERIES

MySQL Slow Queries

Slow queries are defined as queries being slower than the long_query_time setting. For example, if you have
long_query_time set to 3, all queries that take longer than 3 seconds to complete will show on this graph.
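
A short sketch of inspecting and adjusting the threshold from the MySQL client (the 3-second value is only an example):

SHOW GLOBAL VARIABLES LIKE 'long_query_time';
SET GLOBAL long_query_time = 3;
SHOW GLOBAL STATUS LIKE 'Slow_queries';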

MYSQL SELECT TYPES

MySQL Select Types

As with most relational databases, selecting based on indexes is more efficient than scanning an entire table’s data.
Here we see the counters for selects not done with indexes.

• Select Scan is how many queries caused full table scans, in which all the data in the table had to be read and
either discarded or returned.

• Select Range is how many queries used a range scan, which means MySQL scanned all rows in a given range.

• Select Full Join is the number of joins that are not joined on an index; this is usually a huge performance hit.

MYSQL SORTS

MySQL Sorts

Due to a query’s structure, order, or other requirements, MySQL sorts the rows before returning them. For example,
if a table is ordered 1 to 10 but you want the results reversed, MySQL then has to sort the rows to return 10 to 1.

This graph also shows when sorts had to scan a whole table or a given range of a table in order to return the results
and which could not have been sorted via an index.

MYSQL TABLE LOCKS

Table Locks

MySQL takes a number of different locks for varying reasons. In this graph we see how many Table level locks MySQL
has requested from the storage engine. In the case of InnoDB, many times the locks could actually be row locks as it
only takes table level locks in a few specific cases.

It is most useful to compare Locks Immediate and Locks Waited. If Locks waited is rising, it means you have lock
contention. Otherwise, Locks Immediate rising and falling is normal activity.

MYSQL QUESTIONS

MySQL Questions

The number of statements executed by the server. This includes only statements sent to the server by clients and not
statements executed within stored programs, unlike the Queries used in the QPS calculation.


This variable does not count the following commands:

• COM_PING

• COM_STATISTICS

• COM_STMT_PREPARE

• COM_STMT_CLOSE

• COM_STMT_RESET

MYSQL NETWORK TRAFFIC

MySQL Network Traffic

Here we can see how much network traffic is generated by MySQL. Outbound is network traffic sent from MySQL and
Inbound is network traffic MySQL has received.

MYSQL NETWORK USAGE HOURLY

MySQL Network Usage Hourly

Here we can see how much network traffic is generated by MySQL per hour. You can use the bar graph to compare
data sent by MySQL and data received by MySQL.

MYSQL INTERNAL MEMORY OVERVIEW

System Memory: Total Memory for the system.

InnoDB Buffer Pool Data: InnoDB maintains a storage area called the buffer pool for caching data and indexes in
memory.

TokuDB Cache Size: Similar in function to the InnoDB Buffer Pool, TokuDB will allocate 50% of the installed RAM for
its own cache.

Key Buffer Size: Index blocks for MYISAM tables are buffered and are shared by all threads. key_buffer_size is the size
of the buffer used for index blocks.

Adaptive Hash Index Size: When InnoDB notices that some index values are being accessed very frequently, it
builds a hash index for them in memory on top of B-Tree indexes.

Query Cache Size: The query cache stores the text of a SELECT statement together with the corresponding result
that was sent to the client. The query cache has huge scalability problems in that only one thread can do an
operation in the query cache at the same time.

InnoDB Dictionary Size: The data dictionary is InnoDB’s internal catalog of tables. InnoDB stores the data dictionary
on disk, and loads entries into memory while the server is running.

InnoDB Log Buffer Size: The MySQL InnoDB log buffer allows transactions to run without having to write the log to
disk before the transactions commit.

TOP COMMAND COUNTERS

Top Command Counters

The Com_ statement counter variables indicate the number of times each xxx statement has been executed. There is
one status variable for each type of statement. For example, Com_delete and Com_update count DELETE and
UPDATE statements, respectively. Com_delete_multi and Com_update_multi are similar but apply to DELETE and
UPDATE statements that use multiple-table syntax.

TOP COMMAND COUNTERS HOURLY

Top Command Counters Hourly


The Com_ statement counter variables indicate the number of times each xxx statement has been executed. There is
one status variable for each type of statement. For example, Com_delete and Com_update count DELETE and
UPDATE statements, respectively. Com_delete_multi and Com_update_multi are similar but apply to DELETE and
UPDATE statements that use multiple-table syntax.

MYSQL HANDLERS

MySQL Handlers

Handler statistics are internal statistics on how MySQL is selecting, updating, inserting, and modifying rows, tables,
and indexes.

This is in fact the layer between the Storage Engine and MySQL.

• read_rnd_next is incremented when the server performs a full table scan and this is a counter you don’t really
want to see with a high value.

• read_key is incremented when a read is done with an index.

• read_next is incremented when the storage engine is asked to ‘read the next index entry’. A high value means a
lot of index scans are being done.

MYSQL QUERY CACHE MEMORY

MySQL Query Cache Memory

The query cache has huge scalability problems in that only one thread can do an operation in the query cache at the
same time. This serialization is true not only for SELECTs, but also for INSERT/UPDATE/DELETE.

This also means that the larger the query_cache_size is set to, the slower those operations become. In concurrent
environments, the MySQL Query Cache quickly becomes a contention point, decreasing performance. MariaDB and
AWS Aurora have done work to try and eliminate the query cache contention in their flavors of MySQL, while MySQL
8.0 has eliminated the query cache feature.

The recommended settings for most environments are to set:

• query_cache_type=0

• query_cache_size=0

Note that while you can dynamically change these values, to completely remove the contention point you have to
restart the database.
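
A minimal option-file sketch matching the recommendation above (place it in your my.cnf and restart the server to fully remove the contention point):

[mysqld]
query_cache_type = 0
query_cache_size = 0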

MYSQL QUERY CACHE ACTIVITY

MySQL Query Cache Activity

The query cache has huge scalability problems in that only one thread can do an operation in the query cache at the
same time. This serialization is true not only for SELECTs, but also for INSERT/UPDATE/DELETE.

This also means that the larger the query_cache_size is set to, the slower those operations become. In concurrent
environments, the MySQL Query Cache quickly becomes a contention point, decreasing performance. MariaDB and
AWS Aurora have done work to try and eliminate the query cache contention in their flavors of MySQL, while MySQL
8.0 has eliminated the query cache feature.

The recommended settings for most environments are to set:

• query_cache_type=0

• query_cache_size=0

Note that while you can dynamically change these values, to completely remove the contention point you have to
restart the database.


MYSQL TABLE OPEN CACHE STATUS

MySQL Table Open Cache Status

The recommendation is to set the table_open_cache_instances to a loose correlation to virtual CPUs, keeping in mind
that more instances means the cache is split more times. If you have a cache set to 500 but it has 10 instances, each
cache will only have 50 cached.

The table_definition_cache and table_open_cache can be left as default as they are auto-sized MySQL 5.6 and above
(ie: do not set them to any value).
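
A small sketch of inspecting the current cache sizing and usage from the MySQL client:

SHOW GLOBAL VARIABLES LIKE 'table_open_cache%';
SHOW GLOBAL VARIABLES LIKE 'table_definition_cache';
SHOW GLOBAL STATUS LIKE 'Open%tables';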

MYSQL OPEN TABLES

MySQL Open Tables

The recommendation is to set the table_open_cache_instances to a loose correlation to virtual CPUs, keeping in mind
that more instances means the cache is split more times. If you have a cache set to 500 but it has 10 instances, each
cache will only have 50 cached.

The table_definition_cache and table_open_cache can be left as default as they are auto-sized MySQL 5.6 and above
(ie: do not set them to any value).

MYSQL TABLE DEFINITION CACHE

MySQL Table Definition Cache

The recommendation is to set the table_open_cache_instances to a loose correlation to virtual CPUs, keeping in mind
that more instances means the cache is split more times. If you have a cache set to 500 but it has 10 instances, each
cache will only have 50 cached.

The table_definition_cache and table_open_cache can be left as default as they are auto-sized MySQL 5.6 and above
(ie: do not set them to any value).


MySQL Instances Compare

No description


MySQL Instances Overview

No description


MySQL Wait Event Analyses Details

This dashboard helps to analyse Performance Schema wait events. It plots the following metrics for the chosen (one or
more) wait events:

• Count - Performance Schema Waits

• Load - Performance Schema Waits

• Avg Wait Time - Performance Schema Waits

See also

MySQL 5.7 Documentation: Performance Schema


MySQL Performance Schema Details

The MySQL Performance Schema dashboard helps determine the efficiency of communicating with Performance
Schema. This dashboard contains the following metrics:

• Performance Schema file IO (events)

• Performance Schema file IO (load)

• Performance Schema file IO (Bytes)

• Performance Schema waits (events)

• Performance Schema waits (load)

• Index access operations (load)

• Table access operations (load)

• Performance Schema SQL and external locks (events)

• Performance Schema SQL and external locks (seconds)

See also

MySQL Server 5.7 Documentation: Performance Schema


MySQL Query Response Time Details

AVERAGE QUERY RESPONSE TIME

The Average Query Response Time graph shows information collected using the Response Time Distribution plugin sourced from the table INFORMATION_SCHEMA.QUERY_RESPONSE_TIME. It computes this value across all queries by taking the sum of seconds divided by the count of queries.

QUERY RESPONSE TIME DISTRIBUTION

Query response time counts (operations) are grouped into three buckets:

• 100ms - 1s

• 1s - 10s

• > 10s

AVERAGE QUERY RESPONSE TIME

Available only in Percona Server for MySQL, provides visibility of the split of READ vs WRITE query response time.

READ QUERY RESPONSE TIME DISTRIBUTION

Available only in Percona Server for MySQL, illustrates READ query response time counts (operations) grouped into
three buckets:

• 100ms - 1s

• 1s - 10s

• > 10s


WRITE QUERY RESPONSE TIME DISTRIBUTION

Available only in Percona Server for MySQL, illustrates WRITE query response time counts (operations) grouped into
three buckets:

• 100ms - 1s

• 1s - 10s

• > 10s


MySQL Replication Summary

IO THREAD RUNNING

This metric shows if the IO Thread is running or not. It only applies to a slave host.

SQL Thread is a process that runs on a slave host in the replication environment. It reads the events from the local
relay log file and applies them to the slave server.

Depending on the format of the binary log it can read query statements in plain text and re-execute them or it can
read raw data and apply them to the local host.

Possible values

Yes

The thread is running and is connected to a replication master

No

The thread is not running because it has not been launched yet or because an error occurred while connecting to the master host

Connecting

The thread is running but is not connected to a replication master

No value

The host is not configured to be a replication slave

IO Thread Running is one of the parameters that the command SHOW SLAVE STATUS returns.

SQL THREAD RUNNING

This metric shows if the SQL thread is running or not. It only applies to a slave host.

Possible values


Yes

SQL Thread is running and is applying events from the relay log to the local slave host

No

SQL Thread is not running because it has not been launched yet or because an error occurred while applying an event to the local slave host

REPLICATION ERROR NO

This metric shows the number of the last error that the SQL Thread encountered and which caused replication to stop.

One of the more common errors is Error: 1022 Duplicate Key Entry. In such a case replication is attempting to update a
row that already exists on the slave. The SQL Thread will stop replication in order to avoid data corruption.

READ ONLY

This metric indicates whether the host is configured to be in Read Only mode or not.

Possible values

Yes

The slave host permits no client updates except from users who have the SUPER privilege or the REPLICATION
SLAVE privilege.

This kind of configuration is typically used for slave hosts in a replication environment to prevent a user from inadvertently or deliberately modifying data, causing inconsistencies and stopping the replication process.

No

The slave host is not configured in Read Only mode.

MYSQL REPLICATION DELAY

This metric shows the number of seconds the slave host is delayed in replication applying events compared to when
the Master host applied them, denoted by the Seconds_Behind_Master value, and only applies to a slave host.

Since the replication process applies the data modifications on the slave asynchronously, it could happen that the slave replicates events after some time. The main reasons are:

• Network round trip time - high latency links will lead to non-zero replication lag values.

• Single threaded nature of replication channels - master servers have the advantage of applying changes in
parallel, whereas slave ones are only able to apply changes in serial, thus limiting their throughput. In some
cases Group Commit can help but is not always applicable.

• High number of changed rows or computationally expensive SQL - depending on the replication format ( ROW
vs STATEMENT ), significant changes to the database through high volume of rows modified, or expensive CPU
will all contribute to slave servers lagging behind the master.

Generally adding more CPU or Disk resources can alleviate replication lag issues, up to a point.
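
To check the lag directly on a slave, the standard statement below reports the same Seconds_Behind_Master value this graph is based on (a sketch; run it in the MySQL client on the slave host):

SHOW SLAVE STATUS\G
-- Inspect Seconds_Behind_Master, Slave_IO_Running and Slave_SQL_Running in the output.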

BINLOG SIZE

This metric shows the overall size of the binary log files, which can exist on both master and slave servers. The binary log (also known as the binlog) contains events that describe database changes: CREATE TABLE , ALTER TABLE , updates, inserts, deletes and other statements or database changes. The binlog is the file that is read by slaves via their IO Thread process in order to replicate database changes, both to the data and to the table structures. There can be more than one binlog file present depending on the binlog rotation policy adopted (for example using the configuration variables max_binlog_size and expire_logs_days ).


Note

There can be more binlog files depending on the rotation policy adopted (for example using the configuration variables
max_binlog_size and expire_logs_days ) or even because of server reboots.

When planning the disk space, consider the overall size of the binlog files and adopt a good rotation policy, or think about having a separate mount point or disk to store the binlog data.
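
A minimal option-file sketch of such a rotation policy (the size and retention values are placeholders; tune them to your write volume and disk capacity):

[mysqld]
max_binlog_size  = 512M
expire_logs_days = 7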

BINLOG DATA WRITTEN HOURLY

This metric shows the amount of data written hourly to the binlog files during the last 24 hours. This metric can give you an idea of how big your application is in terms of data writes (creation, modification, deletion).

BINLOG COUNT

This metric shows the overall count of binary log files, on both master and slave servers.

BINLOGS CREATED HOURLY

This metric shows the number of binlog files created hourly during the last 24 hours.

RELAY LOG SPACE

This metric shows the overall size of the relay log files. It only applies to a slave host.

The relay log consists of a set of numbered files containing the events to be executed on the slave host in order to
replicate database changes.

The relay log has the same format as the binlog.

There can be multiple relay log files depending on the rotation policy adopted (using the configuration variable
max_relay_log_size ).

As soon as the SQL thread finishes executing all events in the relay log file, the file is deleted.

If this metric shows a high value, the variable max_relay_log_size is probably set high too. Generally, this is not a serious issue. If the value of this metric is constantly increasing, the slave is lagging too much in applying the events.

Treat this metric in the same way as the MySQL Replication Delay metric.

RELAY LOG WRITTEN HOURLY

This metric shows the amount of data written hourly into relay log files during the last 24 hours.

See also

• MySQL 5.7 Replication

• MySQL 5.7 SHOW SLAVE STATUS Syntax

• MySQL 5.7 IO Thread states

• MySQL 5.7 Thread states

• MySQL 5.7 list of error codes

• MySQL 5.7 Improving replication performance

• MySQL 5.7 Replication Slave Options and Variables

• MySQL 5.7 The binary log

• MySQL 5.7 The Slave Relay Log


MySQL Group Replication Summary

OVERVIEW

• PRIMARY Service

• Group Replication Service States

• Replication Group Members

• Replication Lag

• Replication Delay

• Transport Time

TRANSACTIONS

• Transaction Details


• Applied Transactions

• Sent Transactions

• Checked Transactions

• Rolled Back Transactions

• Transactions Row Validating

• Transactions in the Queue for Checking

• Received Transactions Queue

CONFLICTS

• Detected Conflicts


MySQL Table Details

LARGEST TABLES

Largest Tables by Row Count

The estimated number of rows in the table from information_schema.tables .

Largest Tables by Size

The size of the table components from information_schema.tables .

PIE

Total Database Size

The total size of the database, shown as data + index size and as freeable space.

Most Fragmented Tables by Freeable Size

The list of the 5 most fragmented tables, ordered by their freeable size.

TABLE ACTIVITY

The next two graphs are available only for Percona Server and MariaDB and require the userstat variable to be turned on.

ROWS READ

The number of rows read from the table, shown for the top 5 tables.

ROWS CHANGED

The number of rows changed in the table, shown for the top 5 tables.

AUTO INCREMENT USAGE

The current value of an auto_increment column from information_schema , shown for the top 10 tables.


MySQL User Details

Note

This dashboard requires Percona Server for MySQL 5.1+ or MariaDB 10.1/10.2 with XtraDB. Also userstat should be
enabled, for example with the SET GLOBAL userstat=1 statement. See Setting up MySQL.

Data is displayed for the 5 top users.

Top Users by Connections Created

The number of times user’s connections connected using SSL to the server.

Top Users by Traffic

The number of bytes sent to the user’s connections.

Top Users by Rows Fetched/Read

The number of rows fetched by the user’s connections.

Top Users by Rows Updated

The number of rows updated by the user’s connections.

Top Users by Busy Time

The cumulative number of seconds there was activity on connections from the user.

Top Users by CPU Time

The cumulative CPU time elapsed, in seconds, while servicing connections of the user.


MySQL TokuDB Details

No description


5.2.6 MongoDB Dashboards

MongoDB Cluster Summary

CURRENT CONNECTIONS PER SHARD

TCP connections (Incoming) in mongod processes.

TOTAL CONNECTIONS

Incoming connections to mongos nodes.

CURSORS PER SHARD

The cursor is the MongoDB object returned by the execution of a find method; the client iterates it to retrieve the result documents.

MONGOS CURSORS

The cursor is the MongoDB object returned by the execution of a find method; the client iterates it to retrieve the result documents.

OPERATIONS PER SHARD

Ops/sec, classified by legacy wire protocol type (query, insert, update, delete, getmore).

TOTAL MONGOS OPERATIONS

Ops/sec, classified by legacy wire protocol type (query, insert, update, delete, getmore).

CHANGE LOG EVENTS

Count, over last 10 minutes, of all types of config db changelog events.

OPLOG RANGE BY SET

Timespan ‘window’ between oldest and newest ops in the Oplog collection.


MongoDB Instance Summary

COMMAND OPERATIONS

Ops or Replicated Ops/sec classified by legacy wire protocol type (query, insert, update, delete, getmore), plus (from the internal TTL threads) the document deletes/sec by TTL indexes.

LATENCY DETAIL

Average latency of operations (classified by read, write, or (other) command)

CONNECTIONS

TCP connections (Incoming)

CURSORS

Open cursors. Includes idle cursors.

DOCUMENT OPERATIONS

Docs per second inserted, updated, deleted or returned. (N.b. not 1-to-1 with operation counts.)

QUEUED OPERATIONS

Operations queued due to a lock.

QUERY EFFICIENCY

Ratio of Documents returned or Index entries scanned / full documents scanned

SCANNED AND MOVED OBJECTS

This panel shows the number of objects (both data (scanned_objects) and index (scanned)) as well as the number of
documents that were moved to a new location due to the size of the document growing. Moved documents only
apply to the MMAPv1 storage engine.

GETLASTERROR WRITE TIME

Legacy driver operation: Number of, and Sum of time spent, per second executing getLastError commands to
confirm write concern.


GETLASTERROR WRITE OPERATIONS

Legacy driver operation: Number of getLastError commands that timed out trying to confirm write concern.

ASSERT EVENTS

This panel shows the number of assert events per second on average over the given time period. In most cases
assertions are trivial, but you would want to check your log files if this counter spikes or is consistently high.

PAGE FAULTS

Unix or Windows memory page faults. Not necessarily from MongoDB.


MongoDB Instances Overview

This dashboard provides basic information about MongoDB instances.

COMMAND OPERATIONS

Shows how many times a command is executed per second on average during the selected interval.

Look for peaks and drops and correlate them with other graphs.

CONNECTIONS

Keep in mind the hard limit on the maximum number of connections set by your distribution.

Anything over 5,000 should be a concern, because the application may not close connections correctly.

CURSORS

Helps identify why connections are increasing. Shows active cursors compared to cursors being automatically killed
after 10 minutes due to an application not closing the connection.

DOCUMENT OPERATIONS

When used in combination with Command Operations, this graph can help identify write amplification. For example, when one insert or update command actually inserts or updates hundreds, thousands, or even millions of documents.


QUEUED OPERATIONS

Any number of queued operations for long periods of time is an indication of possible issues. Find the cause and fix
it before requests get stuck in the queue.

GETLASTERROR WRITE TIME, GETLASTERROR WRITE OPERATIONS

This is useful for write-heavy workloads to understand how long it takes to verify writes and how many concurrent
writes are occurring.

ASSERTS

Asserts are not important by themselves, but you can correlate spikes with other graphs.

MEMORY FAULTS

Memory faults indicate that requests are processed from disk either because an index is missing or there is not
enough memory for the data set. Consider increasing memory or sharding out.


MongoDB Instances Compare

CONNECTIONS

No description

CURSORS

No description

LATENCY

Average latency of operations (classified by read, write, or (other) command)

SCAN RATIOS

Ratio of index entries scanned or whole docs scanned / number of documents returned

INDEX FILTERING EFFECTIVENESS

No description

REQUESTS

Ops/sec (classified by (legacy) wire protocol request type)

DOCUMENT OPERATIONS

Documents inserted/updated/deleted or returned per sec

QUEUED OPERATIONS

The number of operations that are currently queued and waiting for a lock

USED MEMORY

No description


MongoDB ReplSet Summary

REPLICATION LAG

MongoDB replication lag occurs when the secondary node cannot replicate data fast enough to keep up with the
rate that data is being written to the primary node. It could be caused by something as simple as network latency,
packet loss within your network, or a routing issue.

OPERATIONS - BY SERVICE NAME

Operations are classified by legacy wire protocol type (insert, update, and delete only).

MAX MEMBER PING TIME - BY SERVICE NAME

This metric can show a correlation with the replication lag value.

MAX HEARTBEAT TIME

Time span between now and last heartbeat from replicaset members.

ELECTIONS

Count of elections. Usually zero; 1 count by each healthy node will appear in each election. Happens when the
primary role changes due to either normal maintenance or trouble events.

OPLOG RECOVERY WINDOW - BY SERVICE NAME

Timespan ‘window’ between newest and the oldest op in the Oplog collection.


MongoDB InMemory Details

INMEMORY TRANSACTIONS

WiredTiger internal transactions

INMEMORY CAPACITY

Configured max and current size of the WiredTiger cache.

INMEMORY SESSIONS

Internal WiredTiger storage engine cursors and sessions currently open.

INMEMORY PAGES

Pages in the WiredTiger cache

INMEMORY CONCURRENCY TICKETS

A WT ‘ticket’ is assigned out for every operation running simultaneously in the WT storage engine. “Tickets available”
= hardcoded high value - “Tickets Out”.

QUEUED OPERATIONS

Operations queued due to a lock

DOCUMENT CHANGES

Mixed metrics: Docs per second inserted, updated, deleted or returned on any type of node (primary or secondary); +
replicated write Ops/sec; + TTL deletes per second.

INMEMORY CACHE EVICTION

This panel shows the number of pages that have been evicted from the WiredTiger cache for the given time period.
The InMemory storage engine only evicts modified pages which signals a compaction of the data and removal of
the dirty pages.


SCANNED AND MOVED OBJECTS

This panel shows the number of objects (both data (scanned_objects) and index (scanned)) as well as the number of
documents that were moved to a new location due to the size of the document growing. Moved documents only
apply to the MMAPv1 storage engine.

PAGE FAULTS

Unix or Windows memory page faults. Not necessarily from MongoDB.


MongoDB MMAPv1 Details

DOCUMENT ACTIVITY

Docs per second inserted, updated, deleted or returned. Also showing replicated write ops and internal TTL index
deletes.

MMAPV1 LOCK WAIT TIME

Time spent per second waiting to acquire locks.

MMAPV1 PAGE FAULTS

Unix or Windows memory page faults. Not necessarily from MongoDB.

MMAPV1 JOURNAL WRITE ACTIVITY

MB processed through the journal in memory.

MMAPV1 JOURNAL COMMIT ACTIVITY

MB committed to disk for the journal.

MMAPV1 BACKGROUND FLUSHING TIME

Average time in ms, over full uptime of mongod process, the MMAP background flushes have taken.

QUEUED OPERATIONS

Queue size of ops waiting to be submitted to storage engine layer. (N.b. see WiredTiger concurrency tickets for
number of ops being processed simultaneously in storage engine layer.)

CLIENT OPERATIONS

Ops and Replicated Ops/sec, classified by legacy wire protocol type (query, insert, update, delete, getmore).

SCANNED AND MOVED OBJECTS

This panel shows the number of objects (both data (scanned_objects) and index (scanned)) as well as the number of
documents that were moved to a new location due to the size of the document growing. Moved documents only
apply to the MMAPv1 storage engine.


MongoDB WiredTiger Details

WIREDTIGER TRANSACTIONS

WiredTiger internal transactions

WIREDTIGER CACHE ACTIVITY

Data volume transferred per second between the WT cache and data files. Writes out always imply disk; reads are often served from the OS file buffer cache already in RAM, but hit disk if not.

WIREDTIGER BLOCK ACTIVITY

Data volume handled by the WT block manager per second

WIREDTIGER SESSIONS

Internal WT storage engine cursors and sessions currently open

WIREDTIGER CONCURRENCY TICKETS AVAILABLE

A WT ‘ticket’ is assigned out for every operation running simultaneously in the WT storage engine. “Available” =
hardcoded high value - “Out”.

QUEUED OPERATIONS

Operations queued due to a lock.

WIREDTIGER CHECKPOINT TIME

The time spent in WT checkpoint phase. Warning: This calculation averages the cyclical event (default: 1 min)
execution to a per-second value.

WIREDTIGER CACHE EVICTION

Least-recently used pages being evicted due to WT cache becoming full.

WIREDTIGER CACHE CAPACITY

Configured max and current size of the WT cache.


WIREDTIGER CACHE PAGES

WIREDTIGER LOG OPERATIONS

WT internal write-ahead log operations.

WIREDTIGER LOG ACTIVITY

Data volume moved per second in WT internal write-ahead log.

WIREDTIGER LOG RECORDS

Number of records appended per second in WT internal log.

DOCUMENT CHANGES

Mixed metrics: Docs per second inserted, updated, deleted or returned on any type of node (primary or secondary); +
replicated write Ops/sec; + TTL deletes per second.

SCANNED AND MOVED OBJECTS

This panel shows the number of objects (both data (scanned_objects) and index (scanned)) as well as the number of
documents that were moved to a new location due to the size of the document growing. Moved documents only
apply to the MMAPv1 storage engine.

PAGE FAULTS

Unix or Windows memory page faults. Not necessarily from MongoDB.

See also

MongoDB WiredTiger Storage Engine Documentation


5.2.7 PostgreSQL Dashboards

PostgreSQL Instances Overview

CONNECTED

Reports whether PMM Server can connect to the PostgreSQL instance.

VERSION

The version of the PostgreSQL instance.

SHARED BUFFERS

Defines the amount of memory the database server uses for shared memory buffers. Default is 128MB . Guidance on
tuning is 25% of RAM, but generally doesn’t exceed 40% .


DISK-PAGE BUFFERS

The setting wal_buffers defines how much memory is used for caching the write-ahead log entries. Generally this
value is small ( 3% of shared_buffers value), but it may need to be modified for heavily loaded servers.

MEMORY SIZE FOR EACH SORT

The parameter work_mem defines the amount of memory assigned for internal sort operations and hash tables before
writing to temporary disk files. The default is 4MB .

DISK CACHE SIZE

PostgreSQL’s effective_cache_size variable tunes how much RAM you expect to be available for disk caching. Generally adding Linux free + cached will give you a good idea. This value is used by the query planner to estimate whether plans will fit in memory, and when defined too low, it can lead to some plans rejecting certain indexes.
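
A short sketch of inspecting and adjusting the setting from psql (the 12GB value is only a placeholder; effective_cache_size can be applied with a configuration reload):

SHOW effective_cache_size;
ALTER SYSTEM SET effective_cache_size = '12GB';
SELECT pg_reload_conf();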

AUTOVACUUM

Whether the autovacuum process is enabled or not. Generally the solution is to vacuum more often, not less.

POSTGRESQL CONNECTIONS

Max Connections

The maximum number of client connections allowed. Change this value with care as there are some memory
resources that are allocated on a per-client basis, so setting max_connections higher will generally increase overall
PostgreSQL memory usage.

Connections

The number of connection attempts (successful or not) to the PostgreSQL server.

Active Connections

The number of open connections to the PostgreSQL server.

POSTGRESQL TUPLES

Tuples

The total number of rows processed by PostgreSQL server: fetched, returned, inserted, updated, and deleted.

Read Tuple Activity

The number of rows read from the database: both returned and fetched ones.

Tuples Changed per 5min

The number of rows changed in the last 5 minutes: inserted, updated, and deleted ones.

POSTGRESQL TRANSACTIONS

Transactions

The total number of transactions that have either been committed or rolled back.

Duration of Transactions

Maximum duration in seconds any active transaction has been running.


TEMP FILES

Number of Temp Files

The number of temporary files created by queries.

Size of Temp files

The total amount of data written to temporary files by queries in bytes.

Note

All temporary files are taken into account by these two gauges, regardless of why the temporary file was created (e.g.,
sorting or hashing), and regardless of the log_temp_files setting.
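
The same counters can be read directly from the pg_stat_database view; a small sketch in psql:

SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database
ORDER BY temp_bytes DESC;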

CONFLICTS AND LOCKS

Conflicts/Deadlocks

The number of queries canceled due to conflicts with recovery in the database (due to dropped tablespaces, lock
timeouts, old snapshots, pinned buffers, or deadlocks).

Number of Locks

The number of deadlocks detected by PostgreSQL.

BUFFERS AND BLOCKS OPERATIONS

Operations with Blocks

The time spent reading and writing data file blocks by backends, in milliseconds.

Note

Capturing read and write time statistics is possible only if track_io_timing setting is enabled. This can be done either in
configuration file or with the following query executed on the running system:

ALTER SYSTEM SET track_io_timing=ON;


SELECT pg_reload_conf();

Buffers

The number of buffers allocated by PostgreSQL.

CANCELED QUERIES

The number of queries that have been canceled due to dropped tablespaces, lock timeouts, old snapshots, pinned
buffers, and deadlocks.

Note

Data shown by this gauge are based on the pg_stat_database_conflicts view.


CACHE HIT RATIO

The number of times disk blocks were found already in the buffer cache, so that a read was not necessary.

Note

This only includes hits in the PostgreSQL buffer cache, not the operating system’s file system cache.

CHECKPOINT STATS

The total amount of time that has been spent in the portion of checkpoint processing where files are either written
or synchronized to disk, in milliseconds.

POSTGRESQL SETTINGS

The list of all settings of the PostgreSQL server.

SYSTEM SUMMARY

This section contains the following system parameters of the PostgreSQL server: CPU Usage, CPU Saturation and
Max Core Usage, Disk I/O Activity, and Network Traffic.

See also

• PostgreSQL Server status variables: autovacuum

• PostgreSQL Server status variables: effective_cache_size

• PostgreSQL Server status variables: max_connections

• PostgreSQL Server status variables: shared_buffers

• PostgreSQL Server status variables: wal_buffers

• PostgreSQL Server status variables: work_mem


PostgreSQL Instance Summary

NUMBER OF TEMP FILES

Cumulative number of temporary files created by queries in this database since service start. All temporary files are
counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the
log_temp_files setting.

SIZE OF TEMP FILES

Cumulative amount of data written to temporary files by queries in this database since service start. All temporary
files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files setting.

TEMP FILES ACTIVITY

Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the
temporary file was created (e.g., sorting or hashing), and regardless of the log_temp_files setting.

TEMP FILES UTILIZATION

Total amount of data written to temporary files by queries in this database. All temporary files are counted,
regardless of why the temporary file was created, and regardless of the log_temp_files setting.

CANCELED QUERIES

Based on pg_stat_database_conflicts view


PostgreSQL Instances Compare

No description


5.2.8 ProxySQL Dashboards

ProxySQL Instance Summary

NETWORK TRAFFIC

Network traffic refers to the amount of data moving across a network at a given point in time.


5.2.9 HA Dashboards

PXC/Galera Node Summary

GALERA REPLICATION LATENCY

Shows figures for the replication latency on group communication. It measures latency from the time point when a
message is sent out to the time point when a message is received. As replication is a group operation, this
essentially gives you the slowest ACK and longest RTT in the cluster.

GALERA REPLICATION QUEUES

Shows the length of receive and send queues.

GALERA CLUSTER SIZE

Shows the number of members currently connected to the cluster.

GALERA FLOW CONTROL

Shows the number of FC_PAUSE events sent/received. They are sent by a node when its replication queue gets too
full. If a node is sending out FC messages it indicates a problem.

GALERA PARALLELIZATION EFFICIENCY

Shows the average distances between highest and lowest seqno that are concurrently applied, committed and can
be possibly applied in parallel (potential degree of parallelization).

GALERA WRITING CONFLICTS

Shows the number of local transactions being committed on this node that failed certification (some other node had
a commit that conflicted with ours) – client received deadlock error on commit and also the number of local
transactions in flight on this node that were aborted because they locked something an applier thread needed –
deadlock error anywhere in an open transaction. Spikes in the graph may indicate writing to the same table
potentially the same rows from 2 nodes.


AVAILABLE DOWNTIME BEFORE SST REQUIRED

Shows for how long the node can be taken out of the cluster before SST is required. SST is a full state transfer
method.

GALERA WRITESET COUNT

Shows the count of transactions received from the cluster (any other node) and replicated to the cluster (from this
node).

GALERA WRITESET SIZE

Shows the average transaction size received/replicated.

GALERA WRITESET TRAFFIC

Shows the bytes of data received from the cluster (any other node) and replicated to the cluster (from this node).

GALERA NETWORK USAGE HOURLY

Shows the bytes of data received from the cluster (any other node) and replicated to the cluster (from this node).


PXC/Galera Cluster Summary

No description


PXC/Galera Nodes Compare

$CLUSTER - GALERA CLUSTER SIZE

Shows the number of members currently connected to the cluster.


5.3 Commands

5.3.1 pmm-admin - PMM Administration Tool

NAME

pmm-admin - Administer PMM

SYNOPSIS

pmm-admin [FLAGS]

pmm-admin config [FLAGS] --server-url=server-url

pmm-admin add DATABASE [FLAGS] [NAME] [ADDRESS]

pmm-admin add external [FLAGS] [NAME] [ADDRESS] (CAUTION: Technical preview feature)

pmm-admin remove [FLAGS] service-type [service-name]

pmm-admin register [FLAGS] [node-address] [node-type] [node-name]

pmm-admin list [FLAGS] [node-address]

pmm-admin status [FLAGS] [node-address]

pmm-admin summary [FLAGS] [node-address]

pmm-admin annotate [--node|--service] [--tags <tags>] [node-name|service-name]

pmm-admin help [COMMAND]

DESCRIPTION

pmm-admin is a command-line tool for administering PMM using a set of COMMAND keywords and associated FLAGS.

PMM communicates with the PMM Server via a PMM agent process.

FLAGS

-h , --help

Show help and exit.

--help-long

Show extended help and exit.

--help-man

Generate man page. (Use pmm-admin --help-man | man -l - to view.)

--debug

Enable debug logging.


--trace

Enable trace logging (implies debug).

--json

Enable JSON output.

--version

Show the application version and exit.

--server-url=server-url

PMM Server URL in https://username:password@pmm-server-host/ format.

--server-insecure-tls

Skip PMM Server TLS certificate validation.

--group=<group-name>

Group name for external services. Default: external

COMMANDS

GENERAL COMMANDS

pmm-admin help [COMMAND]

Show help for COMMAND .

INFORMATION COMMANDS

pmm-admin list --server-url=server-url [FLAGS]

Show Services and Agents running on this Node, and the agent mode (push/pull).

pmm-admin status --server-url=server-url [FLAGS]

Show the following information about a local pmm-agent, and its connected server and clients:

• Agent: Agent ID, Node ID.

• PMM Server: URL and version.

• PMM Client: connection status, time drift, latency, vmagent status, pmm-admin version.

• Agents: Agent ID path and client name.

FLAGS:

--wait=<period><unit>

Time to wait for a successful response from pmm-agent. period is an integer. unit is one of ms for milliseconds,
s for seconds, m for minutes, h for hours.

pmm-admin summary --server-url=server-url [FLAGS]

Creates an archive file in the current directory with default filename
summary_<hostname>_<year>_<month>_<date>_<hour>_<minute>_<second>.zip . The contents are two directories, client and
server , containing diagnostic text files.


FLAGS:

--filename="filename"

The Summary Archive filename.

--skip-server

Skip fetching logs.zip from PMM Server.

--pprof

Include performance profiling data in the summary.
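
For example, to create a local-only summary archive that includes profiling data (the filename shown here is just an illustration):

pmm-admin summary --skip-server --pprof --filename=summary_for_support.zip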

CONFIGURATION COMMANDS

pmm-admin config [FLAGS] [node-address] [node-type] [node-name]

Configure a local pmm-agent .

FLAGS:

--node-id=node-id

Node ID (default is auto-detected).

--node-model=node-model

Node model.

--region=region

Node region.

--az=availability-zone

Node availability zone.

--force

Remove the Node with that name, with all dependent Services and Agents, if one exists.
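
As an illustrative sketch, a typical configuration call might look like the following; the server URL, node address, node type ( generic ), and node name are placeholders to adapt to your environment:

pmm-admin config --server-insecure-tls --server-url=https://admin:admin@192.168.0.1:443 192.168.0.100 generic my-client-node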

pmm-admin register [FLAGS] [node-address] [node-type] [node-name]

Register the current Node with the PMM Server.

--server-url=server-url

PMM Server URL in https://username:password@pmm-server-host/ format.

--machine-id="/machine_id/9812826a1c45454a98ba45c56cc4f5b0"

Node machine-id (default is auto-detected).

--distro="linux"

Node OS distribution (default is auto-detected).

--container-id=container-id

Container ID.


--container-name=container-name

Container name.

--node-model=node-model

Node model.

--region=region

Node region.

--az=availability-zone

Node availability zone.

--custom-labels=labels

Custom user-assigned labels.

--force

Remove the Node with that name, with all dependent Services and Agents, if one exists.
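
For example, to register the current node under a custom name (the server URL, node address, and node name are placeholders; generic is the usual node type for bare-metal or virtual machines):

pmm-admin register --server-url=https://admin:admin@pmm-server-host:443 192.168.0.100 generic app-node-01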

pmm-admin remove [FLAGS] service-type [service-name]

Remove Service from monitoring.

--service-id=service-id

Service ID.
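
For instance, to stop monitoring the MySQL service added as sl-mysql in the examples below:

pmm-admin remove mysql sl-mysql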

DATABASE COMMANDS

MongoDB

pmm-admin add mongodb [FLAGS] [node-name] [node-address]

Add MongoDB to monitoring.

FLAGS:

--node-id=node-id

Node ID (default is auto-detected).

--pmm-agent-id=pmm-agent-id

The pmm-agent identifier which runs this instance (default is auto-detected).

--username=username

MongoDB username.

--password=password

MongoDB password.

--query-source=profiler

Source of queries, one of: profiler , none (default: profiler ).


--environment=environment

Environment name.

--cluster=cluster

Cluster name.

--replication-set=replication-set

Replication set name.

--custom-labels=custom-labels

Custom user-assigned labels.

--skip-connection-check

Skip connection check.

--tls

Use TLS to connect to the database.

--tls-skip-verify

Skip TLS certificates validation.
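
A minimal sketch, assuming a local mongod on the default port and a monitoring user named pmm (the service name, credentials, and address are placeholders):

pmm-admin add mongodb --username=pmm --password=pmm mongodb-profiler 127.0.0.1:27017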

MySQL

pmm-admin add mysql [FLAGS] node-name node-address | [--name=service-name] --address=address[:port] | --socket

Add MySQL to monitoring.

FLAGS:

--address

MySQL address and port (default: 127.0.0.1:3306).

--socket=socket

Path to MySQL socket.

--node-id=node-id

Node ID (default is auto-detected).

--pmm-agent-id=pmm-agent-id

The pmm-agent identifier which runs this instance (default is auto-detected).

--username=username

MySQL username.

--password=password

MySQL password.


--query-source=slowlog

Source of SQL queries, one of: slowlog , perfschema , none (default: slowlog ).

--size-slow-logs=N

Rotate slow log file at this size (default: server-defined; negative value disables rotation).

--disable-queryexamples

Disable collection of query examples.

--disable-tablestats

Disable table statistics collection.

--disable-tablestats-limit=disable-tablestats-limit

Table statistics collection is disabled if there are more than the specified number of tables (default: server-defined).

--environment=environment

Environment name.

--cluster=cluster

Cluster name.

--replication-set=replication-set

Replication set name.

--custom-labels=custom-labels

Custom user-assigned labels.

--skip-connection-check

Skip connection check.

--tls

Use TLS to connect to the database.

--tls-skip-verify

Skip TLS certificates validation.

PostgreSQL

pmm-admin add postgresql [FLAGS] [node-name] [node-address]

Add PostgreSQL to monitoring.

FLAGS:

--node-id=<node id>

Node ID (default is auto-detected).


--pmm-agent-id=<pmm agent id>

The pmm-agent identifier which runs this instance (default is auto-detected).

--username=<username>

PostgreSQL username.

--password=<password>

PostgreSQL password.

--query-source=<query source>

Source of SQL queries, one of: pgstatements , pgstatmonitor , none (default: pgstatements ).

--environment=<environment>

Environment name.

--cluster=<cluster>

Cluster name.

--replication-set=<replication set>

Replication set name.

--custom-labels=<custom labels>

Custom user-assigned labels.

--skip-connection-check

Skip connection check.

--tls

Use TLS to connect to the database.

--tls-skip-verify

Skip TLS certificates validation.
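
A minimal sketch, assuming a local PostgreSQL server and the default pgstatements query source (the service name, credentials, and address are placeholders):

pmm-admin add postgresql --username=pmm --password=pmm --query-source=pgstatements pg-local 127.0.0.1:5432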

ProxySQL

pmm-admin add proxysql [FLAGS] [node-name] [node-address]

Add ProxySQL to monitoring.

FLAGS:

--node-id=node-id

Node ID (default is auto-detected).

--pmm-agent-id=pmm-agent-id

The pmm-agent identifier which runs this instance (default is auto-detected).


--username=username

ProxySQL username.

--password=password

ProxySQL password.

--environment=environment

Environment name.

--cluster=cluster

Cluster name.

--replication-set=replication-set

Replication set name.

--custom-labels=custom-labels

Custom user-assigned labels.

--skip-connection-check

Skip connection check.

--tls

Use TLS to connect to the database.

--tls-skip-verify

Skip TLS certificates validation.
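
A minimal sketch, assuming ProxySQL's admin interface on its default port 6032 (the service name, credentials, and address are placeholders):

pmm-admin add proxysql --username=admin --password=admin proxysql-local 127.0.0.1:6032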

EXAMPLES

pmm-admin add mysql --query-source=slowlog --username=pmm --password=pmm sl-mysql 127.0.0.1:3306

MySQL Service added.


Service ID : /service_id/a89191d4-7d75-44a9-b37f-a528e2c4550f
Service name: sl-mysql

pmm-admin add mysql --username=pmm --password=pmm --service-name=ps-mysql --host=127.0.0.1 --port=3306

pmm-admin status
pmm-admin status --wait=30s

Agent ID: /agent_id/c2a55ac6-a12f-4172-8850-4101237a4236


Node ID : /node_id/29b2cc24-3b90-4892-8d7e-4b44258d9309
PMM Server:
URL : https://x.x.x.x:443/
Version: 2.5.0
PMM Client:
Connected : true
Time drift: 2.152715ms


Latency : 465.658µs
pmm-admin version: 2.5.0
pmm-agent version: 2.5.0
Agents:
/agent_id/aeb42475-486c-4f48-a906-9546fc7859e8 mysql_slowlog_agent Running

5.4 API
PMM Server lets you visually interact with API resources representing all objects within PMM. You can browse the
API using the Swagger UI, accessible at the /swagger/ endpoint URL:

Clicking an object lets you examine objects and execute requests on them:


The objects visible are nodes, services, and agents:

• A Node represents a bare metal server, a virtual machine, a Docker container, or a more specific type such as an
Amazon RDS Node. A node runs zero or more Services and Agents, and has zero or more Agents providing
insights for it.

• A Service represents something useful running on the Node: Amazon Aurora MySQL, MySQL, MongoDB, etc. It
runs on zero (Amazon Aurora Serverless), one (MySQL), or several (Percona XtraDB Cluster) Nodes. It also has
zero or more Agents providing insights for it.

• An Agent represents something that runs on the Node which is not useful in itself but instead provides insights
(metrics, query performance data, etc.) about Nodes and/or Services. An Agent always runs on a single Node
(except External Exporters), and provides insights for zero or more Services and Nodes.

Nodes, Services, and Agents have Types which define their specific properties and the specific logic they implement.

Nodes and Services are external by nature – we do not manage them (create, destroy), but merely maintain a list of
them (add to inventory, remove from inventory) in pmm-managed . Most Agents, however, are started and stopped by
pmm-agent . The only exception is the External Exporter Type which is started externally.
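
The same endpoints can also be called directly. As a sketch, listing the Nodes in the inventory with curl might look like this (replace the credentials and server address with your own; the exact path can be confirmed in the Swagger UI):

curl --user admin:admin --request POST 'http://PMM_SERVER/v1/inventory/Nodes/List' --data '{}'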

5.5 Glossary
5.5.1 Annotation

A way of showing a mark on dashboards signifying an important point in time.

5.5.2 Dimension

In the Query Analytics dashboard, to help focus on the possible source of performance issues, you can group
queries by dimension, one of: Query, Service Name, Database, Schema, User Name, Client Host

5.5.3 EBS

Amazon’s Elastic Block Store.

5.5.4 Fingerprint

A normalized statement digest—a query string with values removed that acts as a template or typical example for a
query.

5.5.5 IAM

Identity and Access Management (for Amazon AWS).

5.5.6 MM

Metrics Monitor.

5.5.7 NUMA

Non-Uniform Memory Access.

5.5.8 PEM

Privacy Enhanced Mail.

5.5.9 QPS

Queries Per Second. A measure of the rate of queries being monitored.

5.5.10 Query Analytics

Component of PMM Server that enables you to analyze MySQL query performance over periods of time.

5.5.11 STT

Security Threat Tool.

5.5.12 VG

Volume Group.

6. FAQ

• How can I contact the developers?

• What are the minimum system requirements for PMM?

• How can I upgrade from PMM version 1?

• How to control data retention for PMM?

• How often are NGINX logs in PMM Server rotated?

• What privileges are required to monitor a MySQL instance?

• Can I monitor multiple service instances?

• Can I rename instances?

• Can I add an AWS RDS MySQL or Aurora MySQL instance from a non-default AWS partition?

• How do I troubleshoot communication issues between PMM Client and PMM Server?

• What resolution is used for metrics?

• How do I set up Alerting in PMM?

• How do I use a custom Prometheus configuration file inside PMM Server?

• How to troubleshoot an Update?

• What are my login credentials when I try to connect to a Prometheus Exporter?

• How do I troubleshoot VictoriaMetrics?

6.1 How can I contact the developers?


The best place to discuss PMM with developers and other community members is the community forum.

To report a bug, visit the PMM project in JIRA.

6.2 What are the minimum system requirements for PMM?


PMM Server

Any system which can run Docker version 1.12.6 or later.

It needs roughly 1 GB of storage for each monitored database node with data retention set to one week.

Note

By default, retention is set to 30 days for Metrics Monitor and for Query Analytics. You can consider disabling table
statistics to decrease the VictoriaMetrics database size.

The minimum memory requirement is 2 GB for one monitored database node.

Note

The increase in memory usage is not proportional to the number of nodes. For example, data from 20 nodes should be
easily handled with 16 GB.


PMM Client

Any modern 64-bit Linux distribution. It is tested on the latest versions of Debian, Ubuntu, CentOS, and Red Hat
Enterprise Linux.

A minimum of 100 MB of storage is required for installing the PMM Client package. With a good connection to PMM
Server, additional storage is not required. However, the client needs to store any collected data that it cannot
dispatch immediately, so additional storage may be required if the connection is unstable or the throughput is low.
(Caching only applies to Query Analytics data; VictoriaMetrics data is never cached on the client side.)

6.3 How can I upgrade from PMM version 1?


Because of the significant architectural changes between PMM1 and PMM2, there is no direct upgrade path. The
approach to making the switch from PMM version 1 to 2 is a gradual transition, outlined in this blog post.

In short, it involves first standing up a new PMM2 server on a new host and connecting clients to it. As new data is
reported to the PMM2 server, old metrics will age out during the course of the retention period (30 days, by default),
at which point you’ll be able to shut down your existing PMM1 server.

Note

Any alerts configured through the Grafana UI will have to be recreated because the target dashboard IDs do not match
between PMM1 and PMM2. In this instance we recommend moving to Alertmanager recipes in PMM2 for alerting
which, for the time being, requires a separate Alertmanager instance. However, we are working on integrating this
natively into PMM2 Server and expect to support your existing Alertmanager rules.

6.4 How to control data retention for PMM?


By default, PMM stores time-series data for 30 days. Depending on your available disk space and requirements, you
may need to adjust the data retention time:

1. Go to PMM > PMM Settings > Advanced Settings.

2. Change the data retention value.

3. Click Apply changes.

6.5 How often are NGINX logs in PMM Server rotated?


PMM Server runs logrotate on a daily basis to rotate NGINX logs and keeps up to ten of the most recent log files.

6.6 What privileges are required to monitor a MySQL instance?

GRANT SELECT, PROCESS, SUPER, REPLICATION CLIENT, RELOAD ON *.* TO 'pmm'@'localhost';

6.7 Can I monitor multiple service instances?


You can add multiple instances of MySQL or some other service to be monitored from one PMM Client. In this case,
you must provide a unique port and IP address, or a socket for each instance, and specify a unique name for each.
(If a name is not provided, PMM uses the name of the PMM Client host.)

For example, to add complete MySQL monitoring for two local MySQL servers, the commands would be:

sudo pmm-admin add mysql --username root --password root instance-01 127.0.0.1:3001
sudo pmm-admin add mysql --username root --password root instance-02 127.0.0.1:3002

For more information, run:


pmm-admin add mysql --help

6.8 Can I rename instances?


You can remove any monitoring instance and then add it back with a different name (see Removing monitoring
services with pmm-admin remove).

When you remove a monitoring service, previously collected data remains available in Grafana. However, the metrics
are tied to the instance name. So if you add the same instance back with a different name, it will be considered a
new instance with a new set of metrics. So if you are re-adding an instance and want to keep its previous data, add
it with the same name.

6.9 Can I add an AWS RDS MySQL or Aurora MySQL instance from a non-default
AWS partition?
By default, the RDS discovery works with the default aws partition. But you can switch to special regions, like the GovCloud one, with the alternative AWS partitions (e.g. aws-us-gov ) by adding them to the Settings via the PMM Server API (see Exploring PMM API).

To specify other than the default value, or to use several, use the JSON Array syntax: ["aws", "aws-cn"] .
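
As a sketch, such a change could be applied through the Settings endpoint of the PMM Server API; the endpoint path and field name should be verified in the Swagger UI, and the credentials and address are placeholders:

curl --user admin:admin --request POST 'http://PMM_SERVER/v1/Settings/Change' --data '{"aws_partitions": ["aws", "aws-us-gov"]}'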

6.10 How do I troubleshoot communication issues between PMM Client and PMM
Server?
Broken network connectivity may be due to many reasons. Particularly, when using Docker, the container is
constrained by the host-level routing and firewall rules. For example, your hosting provider might have default
iptables rules on their hosts that block communication between PMM Server and PMM Client, resulting in DOWN
targets in VictoriaMetrics. If this happens, check the firewall and routing settings on the Docker host.

PMM is also able to generate diagnostics data which can be examined and/or shared with Percona Support to help
quickly solve an issue. You can get collected logs from PMM Client using the pmm-admin summary command.

Logs obtained in this way include PMM Client logs and logs received from the PMM Server, stored separately in the
client and server folders. The server folder also contains its own client subfolder with the self-monitoring client
information collected on the PMM Server.

Note

Beginning with PMM version 2.4.0, there is an additional flag that enables the fetching of pprof debug profiles and adds
them to the diagnostics data. To enable, run pmm-admin summary --pprof .

You can get PMM Server logs in two ways:

• In a browser, visit https://<address-of-your-pmm-server>/logs.zip .

• Go to PMM > PMM Settings and click Download server diagnostics. (See Diagnostics in PMM Settings.)

6.11 What resolution is used for metrics?


The default values are:

• Low: 60 seconds

• Medium: 10 seconds

• High: 5 seconds

(See Metrics resolution.)

6.12 How do I set up Alerting in PMM?


When a monitored service metric reaches a defined threshold, PMM Server can trigger alerts for it either using the
Grafana Alerting feature or by using an external alert manager.

With these methods you must configure alerting rules that define conditions under which an alert should be
triggered, and the channel used to send the alert (e.g. email).

Alerting in Grafana allows attaching rules to your dashboard panels. Grafana Alerts are already integrated into PMM
Server and may be simpler to set up.

Alertmanager allows the creation of more sophisticated alerting rules and can be easier to manage for installations
with a large number of hosts. This additional flexibility comes at the expense of simplicity.

Note

We can only offer support for creating custom rules to Percona customers, so you should already have a working
Alertmanager instance prior to using this feature.


See also

• Grafana Alerts overview

• Alertmanager

• PMM Alerting with Grafana: Working with Templated Dashboards

6.13 How do I use a custom Prometheus configuration file inside PMM Server?
Normally, PMM Server fully manages the Prometheus configuration file.

However, some users may want to change the generated configuration to add additional scrape jobs, configure
remote storage, etc.

From version 2.4.0, when pmm-managed starts the Prometheus file generation process, it tries to load the /srv/
prometheus/prometheus.base.yml file first, to use it as a base for the prometheus.yml file.

Note

The prometheus.yml file can be regenerated by restarting the PMM Server container, or by using the SetSettings API call
with an empty body (see Exploring PMM API).
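
As an illustration, with a Docker-based PMM Server whose container is named pmm-server, one way to supply the base file and trigger regeneration is:

docker cp prometheus.base.yml pmm-server:/srv/prometheus/prometheus.base.yml
docker restart pmm-server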

See also

Extending PMM’s Prometheus Configuration

6.14 How to troubleshoot an Update?


If PMM Server wasn’t updated properly, or if you have concerns about the release, you can force the update process
in two ways:

1. From the UI - Home panel: Alt-click the reload icon in the Update panel to make the Update button visible even if
you are on the same version as the one available for update. Pressing this button forces the system to rerun the
update so that any broken or missing components can be reinstalled. In this case, you’ll go through the usual update
process with update logs and success messages at the end.

2. By API call (if UI not available): You can call the Update API directly with:

curl --user admin:admin --request POST 'http://PMM_SERVER/v1/Updates/Start'

Replace admin:admin with your username/password, and replace PMM_SERVER with your server address.

Note

You will not see the logs using this method.

Refresh the Home page in 2-5 minutes and you should see that PMM has been updated.

6.15 What are my login credentials when I try to connect to a Prometheus Exporter?

PMM protects an exporter’s output from unauthorized access by adding an authorization layer. To access an
exporter you can use “ pmm ” as a user name and the Agent ID as a password. You can find the Agent ID
corresponding to a given exporter by running pmm-admin list .
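
For example, assuming an exporter listening on port 42000 on the client host (the port and the Agent ID shown are illustrative; take the real Agent ID from pmm-admin list), metrics could be fetched like this:

curl --user 'pmm:/agent_id/aeb42475-486c-4f48-a906-9546fc7859e8' http://<client-address>:42000/metrics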

6.16 How do I troubleshoot VictoriaMetrics?


1. Check the VictoriaMetrics troubleshooting documentation

2. Ask a question on:

• Google Groups

• Slack

• Reddit

• Telegram

7. Release Notes

7.1 Percona Monitoring and Management 2.12.0


Date: December 1, 2020
Installation: Installing Percona Monitoring and Management

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

7.1.1 Release Highlights

• VictoriaMetrics replaces Prometheus and is now the default datasource. VictoriaMetrics supports both PUSH
(client to server) and PULL metrics collection modes.

• PMM Client can be run as a Docker image.

• The ‘Add Instance’ page and forms have been redesigned and look much better.

7.1.2 New Features

• PMM-5799: PMM Client now available as docker image in addition to RPM, DEB and .tgz

• PMM-6968: Integrated Alerting: Basic notification channels actions API Create, Read, Update, Delete

• PMM-6842: VictoriaMetrics: Grafana dashboards to monitor VictoriaMetricsDB as replacement for dashboards


that used to monitor Prometheus DB

• PMM-6395: Replace Prometheus with VictoriaMetrics in PMM for better performance and additional
functionality

7.1.3 Improvements

• PMM-6744: Prevent timeout of low resolution metrics in MySQL instances with many tables (~1000’s)

• PMM-6504: MySQL Replication Summary: MySQL Replication Delay graph not factoring in value of intentionally
set SQL_Delay thus inflating time displayed

• PMM-6820: ‘pmm-admin status --wait’ option added to allow for configurable delay in checking status of pmm-agent

• PMM-6710: pmm-admin: Allow user-specified custom ‘group’ name when adding external services

• PMM-6825: Allow user to specify ‘listen address’ to pmm-agent otherwise default to 127.0.0.1

• PMM-6793: Improve user experience of ‘add remote instance’ workflow

• PMM-6759: Enable kubernetes startup probes to get status of pmm-agent using ‘GET HTTP’ verb

• PMM-6736: MongoDB Instance Summary dashboard: Ensure colors for ReplSet status matches those in
MongoDB ReplSet Summary dashboard for better consistency

• PMM-6730: Node Overview/Summary Cleanup: Remove duplicate service type ‘DB Service Connections’

• PMM-6542: PMM Add Instance: Redesign page for more intuitive experience when adding various instance
types to monitoring

• PMM-6518: Update default datasource name from ‘Prometheus’ to ‘Metrics’ to ensure graphs are populated
correctly after upgrade to VictoriaMetrics

• PMM-6428: Query Analytics dashboard - Ensure user-selected filter selections are always visible even if they
don’t appear in top 5 results


• PMM-5020: PMM Add Remote Instance: User can specify ‘Table Statistics Limit’ for MySQL and AWS RDS MySQL
to disable table stat metrics which can have an adverse impact on performance with too many tables

7.1.4 Bugs Fixed

• PMM-6811: MongoDB Cluster Summary: when secondary optime is newer than primary optime, lag incorrectly
shows 136 years

• PMM-6650: Custom queries for MySQL 8 fail on 5.x (on update to pmm-agent 2.10) (Thanks to user debug for
reporting this issue)

• PMM-6751: PXC/Galera dashboards: Empty service name with MySQL version < 5.6.40

• PMM-5823: PMM Server: Timeout when simultaneously generating and accessing logs via download or API

• PMM-4547: MongoDB dashboard replication lag count incorrect (Thanks to user vvol for reporting this issue)

• PMM-7057: MySQL Instances Overview: Many monitored instances (~250+) gives ‘too long query’ error

• PMM-6883: Query Analytics: ‘Reset All’ and ‘Show Selected’ filters behaving incorrectly

• PMM-6686: Query Analytics: Filters panel blank on Microsoft Edge 44.18362.449.0

• PMM-6007: PMM Server virtual appliance’s IP address not shown in OVF console

• PMM-6754: Query Analytics: Bad alignment of percentage values in Filters panel

• PMM-6752: Query Analytics: Time interval not preserved when using filter panel dashboard shortcuts

• PMM-6664: Query Analytics: No horizontal scroll bar on Explain tab

• PMM-6632: Node Summary - Virtual Memory Utilization chart: incorrect formulas

• PMM-6537: MySQL InnoDB Details - Logging - Group Commit Batch Size: giving incorrect description

• PMM-6055: PMM Inventory - Services: ‘Service Type’ column empty when it should be ‘External’ for external
services

7.2 Percona Monitoring and Management 2.11.1


Date: October 19, 2020
Installation: Installing Percona Monitoring and Management

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

7.2.1 Bugs Fixed

• PMM-6782: High CPU usage after update to 2.11.0

7.3 Percona Monitoring and Management 2.11.0


Date: October 14, 2020
Installation: Installing Percona Monitoring and Management

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

7.3.1 New Features

• PMM-6567: Technical preview of new PostgreSQL extension ‘pg_stat_monitor’

• PMM-6515: Link added directly to Node/Service page from Query Analytics filters, opens in new window

7.3.2 Improvements

• PMM-6727: Grafana plugin updates: grafana-polystat-panel=1.2.2, grafana-piechart-panel=1.6.1

• PMM-6625: Default sort to “Average - descending” on all dashboards

• PMM-6609: MySQL Instances Compare & Summary dashboards: Changed metric in ‘MySQL Internal Memory
Overview’

• PMM-6598: Dashboard image sharing (Share Panel): Improved wording with link to configuration instructions

• PMM-6557: Update Prometheus to v2.21.0

• PMM-6554: MySQL InnoDB Details dashboard: Add “sync flushing” to “Innodb Flushing by Type”

7.3.3 Bugs Fixed

• PMM-4547: MongoDB dashboard replication lag count incorrect (Thanks to user vvol for reporting this issue)

• PMM-6639: Integrated update does not detect all container types

• PMM-6765: Tables information tab reports ‘table not found’ with new PostgreSQL extension ‘pg_stat_monitor’

• PMM-6764: Query Analytics: cannot filter items that are hidden - must use “Show all”

• PMM-6742: Upgrade via PMM UI stalls (on yum update pmm-update)

• PMM-6689: No PostgreSQL queries or metrics in Query Analytics with PostgreSQL 13


(postgresql_pgstatements_agent in Waiting status)

• PMM-6738: PostgreSQL examples shown despite ‘--disable-queryexamples’ option

• PMM-6535: Unable to open ‘Explore’ in new window from Grafana menu

• PMM-6532: Click-through URLs lose time ranges when redirecting to other dashboards

• PMM-6531: Counter-intuitive coloring of element “Update Stats when Metadata Queried”

• PMM-6645: Clean up unnecessary errors in logs (vertamedia-clickhouse-datasource plugin)

• PMM-6547: Hexagonal graph tooltip text overflows bounding box

7.4 Percona Monitoring and Management 2.10.1


Date: September 22, 2020
Installation: Installing Percona Monitoring and Management

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

7.4.1 Bugs Fixed

• PMM-6643: New MongoDB exporter has higher CPU usage compared with old

7.5 Percona Monitoring and Management 2.10.0


Date: September 15, 2020
Installation: Installing Percona Monitoring and Management

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

7.5.1 New Features

• PMM-2045: New dashboard: MySQL Group Replication Summary

• PMM-5738: Enhanced exporter: replaced original mongodb-exporter with a completely rewritten one with
improved functionality

• PMM-5126: Query Analytics Dashboard: Search by query substring or dimension (Thanks to user debug for
reporting this issue)

• PMM-6360: Grafana Upgrade to 7.1.3

• PMM-6355: Upgrade Prometheus to v2.19.3

• PMM-6597: Documentation: Updated Image rendering instructions for PMM

• PMM-6568: Reusable user interface component: Pop-up dialog. Allows for more consistent interfaces across PMM

• PMM-6375, PMM-6373, PMM-6372: Sign in, Sign up and Sign out UI for Percona Account inside PMM Server

• PMM-6328: Query Analytics Dashboard: Mouse-over crosshair shows value on sparklines

• PMM-3831: Node Summary Dashboard: Add pt-summary output to dashboard to provide details on system
status and configuration

7.5.2 Improvements

• PMM-6647: MongoDB dashboards: RockDB Details removed, MMAPv1 & Cluster Summary changed

• PMM-6536: Query Analytics Dashboard: Improved filter/time search message when no results

• PMM-6467: PMM Settings: User-friendly error message

• PMM-5947: Bind services to internal address for containers

7.5.3 Bugs Fixed

• PMM-6336: Suppress sensitive data: honor pmm-admin flag ‘--disable-queryexamples’ when used in conjunction with ‘--query-source=perfschema’

• PMM-6244: MySQL InnoDB Details Dashboard: Inverted color scheme on “BP Write Buffering” panel

• PMM-6294: Query Analytics Dashboard doesn’t resize well for some screen resolutions (Thanks to user debug
for reporting this issue)

• PMM-5701: Home Dashboard: Incorrect metric for ‘DB uptime’ (Thanks to user hubi_oediv for reporting this
issue)

• PMM-6427: Query Analytics dashboard: Examples broken when switching from MongoDB to MySQL query

• PMM-5684: Use actual data from INFORMATION_SCHEMA vs relying on cached data (which can be 24 hrs old by
default)

• PMM-6500: PMM Database Checks: Unwanted high-contrast styling

• PMM-6440: MongoDB ReplSet Summary Dashboard: Primary shows more lag than replicas


• PMM-6436: Query Analytics Dashboard: Styles updated to conform with upgrade to Grafana 7.x

• PMM-6415: Node Summary Dashboard: Redirection to database’s Instance Summary dashboard omits Service
Name

• PMM-6324: Query Analytics Dashboard: Showing stale data while fetching updated data for query details
section

• PMM-6316: Query Analytics Dashboard: Inconsistent scrollbar styles

• PMM-6276: PMM Inventory: Long lists unclear; poor contrast & column headings scroll out of view

• PMM-6529: Query Analytics filter input margin disappears after scrolling

7.5.4 Known Issues

• PMM-6643: High CPU usage for new MongoDB exporter (fixed in Percona Monitoring and Management 2.10.1)

7.6 Percona Monitoring and Management 2.9.1


Date: August 4, 2020
Installation: Installing Percona Monitoring and Management

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

7.6.1 Improvements

• PMM-6230: Custom dashboards set as Home remain so after update

• PMM-6300: Query Analytics Dashboard: Column sorting arrows made easier to use (Thanks to user debug for
reporting this issue)

• PMM-6208: Security Threat Tool: Temporarily silence viewed but unactioned alerts

• PMM-6315: Query Analytics Dashboard: Improved metrics names and descriptions

• PMM-6274: MySQL User Details Dashboard: View selected user’s queries in Query Analytics Dashboard

• PMM-6266: Query Analytics Dashboard: Pagination device menu lists 25, 50 or 100 items per page

• PMM-6262: PostgreSQL Instance Summary Dashboard: Descriptions for all ‘Temp Files’ views

• PMM-6253: Query Analytics Dashboard: Improved SQL formatting in Examples panel

• PMM-6211: Query Analytics Dashboard: Loading activity spinner added to Example, Explain and Tables tabs

• PMM-6162: Consistent sort order in dashboard drop-down filter lists

• PMM-5132: Better message when filter search returns nothing

7.6.2 Bugs Fixed

• PMM-5783: Bulk failure of SHOW ALL SLAVES STATUS scraping on PS/MySQL distributions triggers errors

• PMM-6294: Query Analytics Dashboard doesn’t resize well for some screen resolutions (Thanks to user debug
for reporting this issue)

• PMM-6420: Wrong version in successful update pop-up window

• PMM-6319: Query Analytics Dashboard: Query scrolls out of view when selected

• PMM-6302: Query Analytics Dashboard: Unnecessary EXPLAIN requests

• PMM-6256: Query Analytics Dashboard: ‘InvalidNamespace’ EXPLAIN error with some MongoDB queries

• PMM-6329: Query Analytics Dashboard: Unclear origin of sparkline tooltip on mouse-over

• PMM-6259: Query Analytics Dashboard: Slow appearance of query time distribution graph for some queries

• PMM-6189: Disk Details Dashboard: Disk IO Size chart larger by factor of 512

• PMM-6269: Query Analytics Dashboard: Metrics dropdown list obscured when opened

• PMM-6247: Query Analytics Dashboard: Overview table not resizing on window size change

• PMM-6227: Home Dashboard redirection to Node Summary Dashboard not working

7.7 Percona Monitoring and Management 2.9.0


Date: July 14, 2020
Installation: Installing Percona Monitoring and Management

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

7.7.1 Highlights

This release brings a major rework of the Query Analytics (QAN) component, completing the migration from Angular
to React, and adding new UI functionality and features.

For details, see:

• PMM-5125: Implement new version of QAN

• PMM-5516: QAN migration to React and new UI implementation

You can read more in the accompanying blog post (here).

7.7.2 New Features

• PMM-6124: New dashboards: MongoDB Replica Set Summary and MongoDB Cluster Summary

• PMM-1027: New dashboard: MySQL User Details (INFORMATION_SCHEMA.CLIENT_STATISTICS)

• PMM-5604: User interface for MongoDB EXPLAIN

• PMM-5563: Per-Service and per-Node Annotations (This completes the work on improvements to the
Annotation functionality.)

7.7.3 Improvements

• PMM-6114: Sort Agents, Nodes, and Services alphabetically by name in Inventory page (Thanks to user debug
for reporting this issue)

• PMM-6147: Update Grafana plugins to latest versions

7.7.4 Bugs Fixed

• PMM-5800: QAN explain and tables tabs not working after removing MySQL metrics agent

• PMM-5812: Prometheus relabeling broken (relabel_configs unmarshal errors) (Thanks to user b4bufr1k for
reporting this issue)

• PMM-6184: MongoDB Instances Compare dashboard shows MySQL metric

• PMM-5941: Stacked Incoming/Outgoing Network Traffic graphs in MySQL Instances Overview dashboard
prevents comparison

• PMM-6194: Missing UID for Advanced Data Exploration dashboard

• PMM-6191: Incorrect computation for Prometheus Process CPU Usage panel values in Prometheus dashboard

• PMM-6175: Node Overview dashboard shows unit for unitless value ‘Top I/O Load’

7.8 Percona Monitoring and Management 2.8.0


Date: June 25, 2020
Installation: Installing Percona Monitoring and Management

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

7.8.1 Improvements

• PMM-544: Agents, Services and Nodes can now be removed via the ‘PMM Inventory’ page

• PMM-5706: User-installed Grafana plugins unaffected by PMM upgrade

7.8.2 Bugs Fixed

• PMM-6153: PMM 2.7.0 inoperable when no Internet connectivity

• PMM-5365: Client fails to send non-UTF-8 query analytics content to server (Thanks to user romulus for
reporting this issue)

• PMM-5920: Incorrect metric used in formula for “Top Users by Rows Fetched/Read” graph

• PMM-6084: Annotations not showing consistently on dashboards

• PMM-6011: No data in MongoDB Cluster summary, RocksDB & MMAPv1 details

• PMM-5987: Incorrect total value for virtual memory utilization

7.9 Percona Monitoring and Management 2.7.0


Date: June 9, 2020
Installation: Installing Percona Monitoring and Management

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

In this release, we have updated Grafana to version 6.7.4 to fix CVE-2020-13379. We recommend updating to the
latest version of PMM as soon as possible.

7.9.1 New Features

• PMM-5257, PMM-5256, & PMM-5243: pmm-admin socket option (--socket) to specify UNIX socket path for
connecting to MongoDB, PostgreSQL, and ProxySQL instances

7.9.2 Improvements

• PMM-2244: ‘pmm-admin status’ command output shows both pmm-admin and pmm-agent versions

• PMM-5968: Disallow PMM Server node or agent removal via API

• PMM-5946: MySQL Table Details dashboard filter on Service Name prevents display of services without data

• PMM-5926: Expose PMM-agent version in pmm-admin status command

• PMM-5891: PMM Home page now includes News panel

• PMM-5906: Independent update of PMM components deactivated

7.9.3 Bugs Fixed

• PMM-6004: MySQL exporter reporting wrong values for cluster status (wsrep_cluster_status)

• PMM-4547: MongoDB dashboard replication lag count incorrect

• PMM-5524: Prometheus alerting rule changes needs docker restart to activate

• PMM-5949: Unwanted filters applied when moving from QAN to Add Instance page

• PMM-5870: MySQL Table Details dashboard not showing separate service names for tables

• PMM-5839: PostgreSQL metrics disparity between query time and block read/write time

• PMM-5348: Inventory page has inaccessible tabs that need reload to access

• PMM-5348: Incorrect access control vulnerability fix (CVE-2020-13379) by upgrading grafana to v6.7.4

7.10 Percona Monitoring and Management 2.6.1


Date: May 18, 2020
Installation: Installing Percona Monitoring and Management

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

7.10.1 Improvements

• PMM-5936: Improved Summary dashboard for Security Threat Tool ‘Failed Checks’

• PMM-5937: Improved Details dashboard for Security Threat Tool ‘Failed Database Checks’

7.10.2 Bugs Fixed

• PMM-5924: Alertmanager not running after PMM Server upgrade via Docker

• PMM-5915: Supervisord not restarting after restart of PMM Server virtual appliances (OVF/AMI)

• PMM-5945: ‘Updates’ dashboard not showing available updates

• PMM-5870: MySQL Table Details dashboard not showing separate service names for tables

7.11 Percona Monitoring and Management 2.6.0


Date: May 11, 2020
Installation: Installing Percona Monitoring and Management

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

7.11.1 New Features

• PMM-5728: Technical preview of External Services monitoring feature. A new command provides integration
with hundreds of third-party systems (https://prometheus.io/docs/instrumenting/exporters/) via the
Prometheus protocol so that you can monitor external services on a node where pmm-agent is installed.

• PMM-5822: PMM now includes a Security Threat Tool to help users avoid the most common database security
issues. Read more here.

• PMM-5559: Global annotations can now be set with the pmm-admin annotate command.

• PMM-4931: PMM now checks Docker environment variables and warns about invalid ones.

7.11.2 Improvements

• PMM-1962: The PMM Server API (via /v1/readyz) now also returns Grafana status information in addition to that
for Prometheus.

• PMM-5854: The Service Details dashboards were cleaned up and some unused selectors were removed.

• PMM-5775: It is now clearer which nodes are Primary and which are Secondary on MongoDB Instance
dashboards.

• PMM-5549: PMM’s Grafana component is now the latest, v6.7.3.

• PMM-5393: There’s a new ‘Node Summary’ row in the services Summary and Details dashboards summarizing
the system uptime, load average, RAM and memory.

• PMM-4778: The mongodb_exporter is now the latest version, v0.11.0.

• PMM-5734: Temporary files activity and utilization charts (rate & irate) were added to the PostgreSQL Instance
overview.

• PMM-5695: The error message is clearer when the --socket option is used incorrectly.

7.11.3 Bugs Fixed

• PMM-4829: The MongoDB Exporter wasn’t able to collect metrics from hidden nodes without either the latest
driver or using the ‘connect-direct’ parameter.

• PMM-5056: The average values for Query time in the Details and Profile sections were different.

• PMM-2717: Updating MongoDB Exporter resolves an error ( Failed to execute find query on 'config.locks': not
found. ) when used with shardedCluster 3.6.4.

• PMM-4541: MongoDB exporter metrics collection was including system collections from collStats and
indexStats, causing “log bloat”.

• PMM-5913: Only totals were shown in QAN when filtering on Cluster=MongoDB.

• PMM-5903: When applying a filter the QAN Overview was being refreshed twice.

• PMM-5821: The Compare button was missing from HA Dashboard main menus.

• PMM-5687: Cumulative charts for Disk Details were not showing any data if metrics were returning ‘NaN’ results.


• PMM-5663: The ‘version’ value was not being refreshed in various MySQL dashboards.

• PMM-5643: Advanced Data Exploration charts were showing ‘N/A’ for Metric Resolution and ‘No data to show’ in
the Metric Data Table.

• PMM-4756: Dashboards were not showing services with empty environments.

• PMM-4562: MongoDB and MySQL instances registered with empty cluster labels ( --environment=<label> ) were not
visible in the dashboard despite having been added.

• PMM-4906: The MongoDB exporter for MongoDB 4.0 and above was causing a “log bloat” condition.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

7.12 Percona Monitoring and Management 2.5.0


Date: April 14, 2020
Installation: Installing Percona Monitoring and Management

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

7.12.1 New Features

• PMM-5042 and PMM-5272: PMM can now connect to MySQL instances by specifying a UNIX socket. This can be
done with a new --socket option of the pmm-admin add mysql command. (Note: Updates to both PMM Client and
PMM Server were done to allow UNIX socket connections.)

• PMM-4145: Amazon RDS instance metrics can now be independently enabled/disabled for Basic and/or
Enhanced metrics.

7.12.2 Improvements

• PMM-5581: PMM Server Grafana plugins can now be updated on the command line with the grafana-cli
command-line utility.

• PMM-5536: Three Grafana plugins were updated to the latest versions: vertamedia-clickhouse-datasource to
1.9.5, grafana-polystat-panel to 1.1.0, and grafana-piechart-panel to 1.4.0.

• PMM-4252: The resolution of the PMM Server favicon image has been improved.

7.12.3 Bugs Fixed

• PMM-5547: PMM dashboards were failing when presenting data from more than 100 monitored instances (error
message proxy error: context canceled ).

• PMM-5624: Empty charts were being shown in some Node Temperature dashboards.

• PMM-5637: The Data retention value in Settings was incorrectly showing the value as minutes instead of days.

• PMM-5613: Sorting data by Query Time was not working properly in Query Analytics.

• PMM-5554: Totals in charts were inconsistently plotted with different colors across charts.

• PMM-4919: The force option ( --force ) in pmm-admin config was not always working.

• PMM-5351: The documentation on MongoDB user privileges has been corrected.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

7.13 Percona Monitoring and Management 2.4.0


Date: March 18, 2020
Installation: Installing Percona Monitoring and Management

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

7.13.1 New Features

• PMM-3387: Prometheus custom configuration is now supported by PMM Server. The feature is targeted at
experienced users and is done by adding the base configuration file into the PMM Server container to be
parsed and included into the managed Prometheus configuration.

• PMM-5186: Including the --pprof option in the pmm-admin summary command adds pprof debug profiles to the
diagnostics data archive

• PMM-5102: The new “Node Details” dashboard now displays data from the hardware monitoring sensors in
hwmon The new dashboard is based on the hwmon collector data from the node_exporter. Please note that data
may be unavailable for some nodes because of the configuration or virtualization parameters

7.13.2 Improvements

• PMM-4915: The Query Analytics dashboard now shows Time Metrics in the Profile Section as “AVG per query”
instead of “AVG per second”

• PMM-5470: Clickhouse query optimized for Query Analytics to improve its speed and reduce the load on the
backend

• PMM-5448: The default high and medium metrics resolutions were changed to 1-5-30 and 5-10-60 sec. To
reduce the effect of this change on existing installations, systems having the “old” high resolution chosen on
the PMM Settings page (5-5-60 sec.) will be automatically re-configured to the medium one during an upgrade.
If the resolution was changed to some custom values via API, it will not be affected

• PMM-5531: A healthcheck indicator was implemented for the PMM Server Docker image. It is based on the
Docker HEALTHCHECK. This feature can be leveraged as follows:

docker inspect -f {{.State.Health.Status}} pmm-server

until [ "`docker inspect -f {{.State.Health.Status}} pmm-server`" == "healthy" ]; do sleep 1; done

• PMM-5489: The “Total” line in all charts is now drawn with the same red color for better consistency

• PMM-5461: Memory graphs on the node-related dashboards were adjusted to have fixed colors that are more
distinguishable from each other

• PMM-5329: Prometheus in PMM Server was updated to version 2.16.0. This update has brought several
improvements. Among them are significantly reduced memory footprint of the loaded TSDB blocks, lower
memory footprint for the compaction process (caused by the more balanced choice of what to buffer during
compaction), and improved query performance for the queries that only touch the most recent 2h of data.

• PMM-5210: Data Retention is now specified in days instead of seconds on the PMM Settings page. Please note
this is a UI-only change, so the actual data retention precision is not changed

• PMM-5182: The logs.zip archive available on the PMM Settings page now includes additional self-monitoring
information in a separate “client” subfolder. This subfolder contains information collected on the PMM Server
and is equivalent to the one collected on a node by the pmm-admin summary command.

• PMM-5112: The Inventory API List requests now can be filtered by the Node/Service/Agent type

7.13.3 Bugs Fixed

• PMM-5178: Query Detail Section of the Query Analytics dashboard didn’t show tables definitions and indexes
for the internal PostgreSQL database

• PMM-5465: MySQL Instance related dashboards had row names not always matching the actual contents. To fix
this, elements were re-ordered and additional rows were added for better matching of the row name and the
corresponding elements

• PMM-5455: Dashboards from the Insight menu were fixed to work correctly when the low resolution is set on
the PMM Settings page

• PMM-5446: A number of the Compare Dashboards were fixed to work correctly when the low resolution is set
on the PMM Settings page

• PMM-5430: MySQL Exporter section on the Prometheus Exporter Status dashboard now collapsed by default to
be consistent with other database-related sections

• PMM-5445, PMM-5439, PMM-5427, PMM-5426, PMM-5419: Labels change (which occurs e.g. when the metrics
resolution is changed on the PMM Settings page) was breaking dashboards

• PMM-5347: Selecting queries on the Query Analytics dashboard was generating errors in the browser console

• PMM-5305: Some applied filters on the Query Analytics dashboard were not preserved after changing the time
range

• PMM-5267: The Refresh button was not working on the Query Analytics dashboard

• PMM-5003: pmm-admin list and status use different JSON naming for the same data

• PMM-5526: A typo was fixed in the Replication Dashboard description tooltip

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

7.14 Percona Monitoring and Management 2.3.0


Date: February 19, 2020

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

For PMM install instructions, see Installing PMM Server and Installing PMM Client.

Note

PMM 2 is designed to be used as a new installation — your existing PMM 1 environment can’t be upgraded to this
version.

7.14.1 Improvements and new features

• PMM-5064 and PMM-5065: Starting from this release, users will be able to integrate PMM with an external
Alertmanager by specifying the Alertmanager URL and the Alert Rules to be executed inside the PMM server

Note

This feature is for advanced users only at this point

• PMM-4954: Query Analytics dashboard now shows units both in the list of queries in a summary table and in
the Details section to ease understanding of the presented data

• PMM-5179: Relations between metrics are now specified in the Query Analytics Details section

• PMM-5115: The CPU frequency and temperature graphs were added to the CPU Utilization dashboard

• PMM-5394: A special treatment for the node-related dashboards was implemented for the situations when the
data resolution change causes new metrics to be generated for existing nodes and services, to make graphs
show continuous lines of the same colors

7.14.2 Fixed bugs

• PMM-4620: The high CPU usage by the pmm-agent process related to MongoDB Query Analytics was fixed

• PMM-5377: Singlestats showing percentage had sparklines scaled vertically along with the graph swing, which
made it difficult to visually notice the difference between neighboring singlestats.

• PMM-5204: Changing resolution on the PMM settings page was breaking some singlestats on the Home and
MySQL Overview dashboards

• PMM-5251: Vertical scrollbars on the graph elements were not allowed to do a full scroll, making last rows of
the legend unavailable for some graphs

• PMM-5410: The “Available Downtime before SST Required” chart on the PXC/Galera Node Summary dashboard
was not showing data because it was unable to use metrics available with different scraping intervals

7.15 Percona Monitoring and Management 2.2.2


Date: February 4, 2020

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

For PMM install instructions, see Installing PMM Server and Installing PMM Client.

Note

PMM 2 is designed to be used as a new installation — your existing PMM 1 environment can’t be upgraded to this
version.

7.15.1 Improvements and new features

• PMM-5321: The optimization of the Query Analytics parser code for PostgreSQL queries allowed us to reduce
memory consumption by 1-5%, and the parsing time of an individual query by 30-40%

• PMM-5184: The pmm-admin summary command has gained a new --skip-server flag which makes it operate in
a local-only mode, creating the summary file without contacting the PMM Server

7.15.2 Fixed bugs

• PMM-5340: The Scraping Time Drift graph on the Prometheus dashboard was showing wrong values because
the actual metrics resolution wasn’t taken into account

• PMM-5060: Query Analytics Dashboard did not show the row with the last query of the first page if the number
of queries to display was 11

7.16 Percona Monitoring and Management 2.2.1


Date: January 23, 2020

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance.

For PMM install instructions, see Installing PMM Server and Installing PMM Client.

Note

PMM 2 is designed to be used as a new installation — your existing PMM 1 environment can’t be upgraded to this
version.

PMM Server version 2.2.0 suffered an unauthenticated denial of service vulnerability (CVE-2020-7920). Any other
PMM versions do not carry the same code logic, and are thus unaffected by this issue. Users who have already
deployed PMM Server 2.2.0 are advised to upgrade to version 2.2.1 which resolves this issue.

7.16.1 Improvements and new features

• PMM-5229: The new RDS Exporter section added to the Prometheus Exporter Status dashboard shows
singlestats and charts related to the rds_exporter

• PMM-5228 and PMM-5238: The Prometheus dashboard and the Exporters Overview dashboard were updated
to include the rds_exporter metrics in their charts, allowing better understanding of the impacts of monitoring
RDS instances

• PMM-4830: The consistency of the applied filters between the Query Analytics and the Overview dashboards
was implemented, and now filters selected in QAN will continue to be active after the switch to any of the
Overview dashboards available in the Services menu

• PMM-5235: The DB uptime singlestats in node rows on the Home dashboard were changed to show minimal
values instead of average ones to be consistent with the top row

• PMM-5127: The “Search by” bar on the Query Analytics dashboard was renamed to “Filter by” to make its
purpose more clear

• PMM-5131: The Filter panel on the Query Analytics dashboard now shows the total number of available Labels
within the “See all” link, which appears if the Filter panel section shows only top 5 of its Labels

7.16.2 Fixed bugs

• PMM-5232: The pmm-managed component of PMM Server 2.2.0 was vulnerable to DoS attacks that could be
carried out by anyone who knows the PMM Server IP address (CVE-2020-7920). Versions other than 2.2.0 are not
affected.

• PMM-5226: The handlebars package was updated to version 4.5.3 because of the Prototype Pollution
vulnerability in it (CVE-2019-19919). Please note PMM versions were not affected by this vulnerability, as
handlebars package is used as a build dependency only.

• PMM-5206: Switching to the Settings dashboard was breaking the visual style of some elements on the Home
dashboard

• PMM-5139: The breadcrumb panel, which shows all dashboards visited within one session starting from the root, was unable to fully show a breadcrumb longer than one line


• PMM-5212: Explanatory text was added to the Download PMM Server Logs button in the Diagnostic section of the PMM Settings dashboard, and a link to it was added to the Prometheus dashboard, which was the previous place to download logs

• PMM-5215: The unneeded mariadb-libs package was removed from the PMM Server 2.2.0 OVF image, resulting in faster updates with the yum update command and no more dependency conflict messages in the update logs

• PMM-5216: Upgrading PMM Server to 2.2.0 was showing a Grafana Update Error page with a Refresh button which had to be clicked to start using the updated version

• PMM-5211: The “Where do I get the security credentials for my Amazon RDS DB instance” link in the Add AWS RDS MySQL or Aurora MySQL instance dialog did not point to the appropriate instructions

• PMM-5217: The PMM 2.x OVF image memory size was increased from 1 GB to 4 GB, with an additional 1 GB of swap space, because the previous amount barely accommodated PMM Server and wasn’t enough in some cases, such as performing an upgrade

• PMM-5271: LVM logical volumes were wrongly resized on AWS deployment, resulting in “no space left on
device” errors

• PMM-5295: Innodb Transaction Rollback Rate values on the MySQL InnoDB Details dashboard were calculated
incorrectly

• PMM-5270: The PXC/Galera Cluster Summary dashboard was showing an empty Cluster drop-down list, making it impossible to choose the cluster name

• PMM-4769: The wrongly named “Timeout value used for retransmitting” singlestat on the Network Details dashboard was renamed to “The algorithm used to determine the timeout value” and updated to show the algorithm name instead of a numeric code

• PMM-5260: Excessive resource consumption by pmm-agent occurred when Query Analytics was used with PostgreSQL; it was fixed by a number of code optimizations, resulting in about 4 times lower memory usage

• PMM-5261: The colors of CPU usage charts on all dashboards that contain them were updated to better differentiate the softIRQ and Steal curves

• PMM-5244: High memory consumption in the PMM Server with a large number of agents sending data
simultaneously was fixed by improving bulk data insertion to the ClickHouse database


7.17 Percona Monitoring and Management 2.2.0


Date: December 24, 2019

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance. You can run PMM in your own environment for maximum security
and reliability. It provides thorough time-based analysis for MySQL, MongoDB, and PostgreSQL servers to ensure
that your data works as efficiently as possible.

Main improvements in this release are:

• The alternative installation methods available for PMM 1.x have been re-implemented for PMM 2: PMM Server can now be installed as a virtual appliance or run using the AWS Marketplace

• Monitoring of AWS RDS and remote instances has been re-added in this release, covering AWS RDS MySQL / Aurora MySQL instances and remote PostgreSQL, MySQL, MongoDB, and ProxySQL instances

• The new Settings dashboard allows configuring PMM Server via the graphical interface

For PMM install instructions, see Installing PMM Server and Installing PMM Client.

Note

PMM 2 is designed to be used as a new installation — your existing PMM 1 environment can’t be upgraded to this
version.

7.17.1 Improvements and new features

• PMM-4575: The new PMM Settings dashboard allows users to configure various PMM Server options: setting metrics resolution and data retention, enabling or disabling sending usage statistics back to Percona, and checking for updates; this dashboard is now the proper place to upload your public key for SSH login and to download PMM Server logs for diagnostics

• PMM-4907 and PMM-4767: The user’s AMI Instance ID is now used when setting up PMM Server via AWS Marketplace as an additional verification of the user, in accordance with the Amazon Marketplace rules

• PMM-4950 and PMM-3094: Alternative AWS partitions are now supported when adding an AWS RDS MySQL or
Aurora MySQL Instance to PMM

• PMM-4976: Home dashboard clean-up: the “Systems under monitoring” and “Network IO” singlestats were refined to be based on the host variable; color is no longer used as an indicator of state; and “All” row elements were re-linked to the “Nodes Overview” dashboard with respect to the selected host

• PMM-4800: The pmm-admin add mysql command has been modified to make its help text more descriptive: when you enable tablestats you now get more detail on whether they are enabled for your environment and where you stand with respect to the auto-disable limit (see the example command after this list)

• PMM-4969: Update Grafana to version 6.5.1

• PMM-5053: A tooltip was added to the Head Block graph on the Prometheus dashboard

• PMM-5068: Drill-down links were added to the Node Summary dashboard graphs

• PMM-5050: Drill-down links were added to the graphs on all Services Compare dashboards

• PMM-5037: Drill-down links were added to all graphs on the Services Overview dashboards

• PMM-4988: Filtering in Query Analytics has undergone improvements to make group selection more intuitive: Labels unavailable under the current selection are shown as gray/disabled, and the percentage values are dynamically recalculated to reflect Labels available within the currently applied filters


• PMM-4966: All passwords are now substituted with asterisk signs in the exporter logs for security reasons when
not in debug mode

• PMM-527: node_exporter is now providing hardware monitoring information such as CPU temperatures and fan
statuses; while this information is being collected by PMM Server, it will not be shown until a dedicated
dashboard is added in a future release

• PMM-3198: Instead of showing all graphs for all services by default, the MySQL Command/Handler Counters Compare dashboard now shows a pre-defined set of the ten most informative ones, to reduce the load on PMM Server when it is first opened
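As an illustration of the PMM-4800 change above, the sketch below adds a MySQL service with the tablestats auto-disable limit set explicitly; the credentials, service name, address, and limit value are placeholders, and the --disable-tablestats-limit flag is assumed to be available in your pmm-admin build:

    # Review the more descriptive help text
    pmm-admin add mysql --help

    # Add a MySQL service; table statistics are disabled automatically
    # once the number of tables exceeds the given limit (placeholder values shown)
    pmm-admin add mysql --username=pmm --password=secret \
      --disable-tablestats-limit=2000 mysql-prod 127.0.0.1:3306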

7.17.2 Fixed bugs

• PMM-4978: The “Top MySQL Questions” singlestat on the MySQL Instances Overview dashboard was changed to show ops instead of a percentage

• PMM-4917: The “Systems under monitoring” and “Monitored DB Instances” singlestats on the Home dashboard now have a sparkline to make the situation with recently shut down nodes/instances clearer

• PMM-4979: Set decimal precision to 2 for all elements, including charts and singlestats, on all dashboards

• PMM-4980: Fix the “Load Average” singlestat on the Node Summary dashboard to show a decimal value instead of a percentage

• PMM-4981: Disable automatic color gradient in filled graphs on all dashboards

• PMM-4941: Some charts were incorrectly showing empty fragments with high time resolution turned on

• PMM-5022: Fix outdated drill-down links on the Prometheus Exporters Overview and Nodes Overview
dashboards

• PMM-5023: Make the All instances uptime singlestat on the Home dashboard show Min values instead of Avg

• PMM-5029: Option to upload dashboard snapshot to Percona was disappearing after upgrade to 2.1.x

• PMM-4946: Rename singlestats on the Home dashboard for better clarity: “Systems under monitoring” to “Nodes under monitoring” and “Monitored DB Instances” to “Monitored DB Services”, and make the latter also count remote DB instances

• PMM-5015: Fix the format of the Disk Page Buffers singlestat on the Compare dashboard for PostgreSQL to have two-digit precision for consistency with other singlestats

• PMM-5014: LVM logical volumes were wrongly sized on a new AWS deployment, resulting in “no space left on
device” errors.

• PMM-4804: Incorrect parameter validation required both the service-name and service-id parameters of the pmm-admin remove command to be present, while the command itself requires only one of them to identify the service (see the example after this list).

• PMM-3298: Panic errors were present in the rds_exporter log after adding an RDS instance from the second
AWS account

• PMM-5089: The serialize-javascript package was updated to version 2.1.1 because of a possible regular expression cross-site scripting vulnerability in it (CVE-2019-16769). Please note that PMM versions were not affected by this vulnerability, as the serialize-javascript package is used as a build dependency only.

• PMM-5149: The Disk Space singlestat was unable to show data for RDS instances because sources with an unknown filesystem type were not taken into account
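Related to the PMM-4804 fix above, a minimal sketch of removing a service by name alone (mysql-prod is a placeholder service name):

    # List registered services, then remove one by type and name; no service ID is needed
    pmm-admin list
    pmm-admin remove mysql mysql-prod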


7.18 Percona Monitoring and Management 2.1.0


Date: November 11, 2019

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance. You can run PMM in your own environment for maximum security
and reliability. It provides thorough time-based analysis for MySQL, MongoDB, and PostgreSQL servers to ensure
that your data works as efficiently as possible.

For install instructions, see Installing Percona Monitoring and Management.

Note

PMM 2 is designed to be used as a new installation — please don’t try to upgrade your existing PMM 1 environment.

7.18.1 Improvements and new features

• PMM-4063: Update QAN filter panel to show only labels available for selection under currently applied filters

• PMM-815: Latency Detail graph added to the MongoDB Instance Summary dashboard

• PMM-4768: Disable heavy-load collectors automatically when there are too many tables

• PMM-4821: Use color gradient in filled graphs on all dashboards

• PMM-4733: Add more log and config files to the downloadable logs.zip archive

• PMM-4672: Use integer percentage values in QAN filter panel

• PMM-4857: Update tooltips for all MongoDB dashboards

• PMM-4616: Rename column in the Query Details section in QAN from Total to Sum

• PMM-4770: Use Go 1.12.10

• PMM-4780: Update Grafana to version 6.4.1

• PMM-4918: Update Grafana plugins to newer versions, including the clickhouse-datasource plugin

7.18.2 Fixed bugs

• PMM-4935: Wrong instance name displayed on the MySQL Instance Summary dashboard due to incorrect string cropping

• PMM-4916: Wrong values are shown when changing the time range on the Node Summary dashboard for remote instances

• PMM-4895 and PMM-4814: The update process reports completion before it is actually done and therefore
some dashboards, etc. may not be updated

• PMM-4876: PMM Server access credentials are shown by the pmm-admin status command instead of being hidden for security reasons

• PMM-4875: PostgreSQL error log gets flooded with warnings when the pg_stat_statements extension is not installed in the database used by PMM Server or when the PostgreSQL user is unable to connect to it (see the sketch after this list)

• PMM-4852: Node name has an incorrect value if the Home dashboard is opened after QAN

• PMM-4847: Drill-downs from the Environment Overview dashboard don’t show data for the pre-selected host

• PMM-4841 and PMM-4845: pg_stat_statement QAN Agent leaks database connections

• PMM-4831: Clean up the representation of selector names on MySQL-related dashboards for better consistency


• PMM-4824: Incorrectly calculated singlestat values on MySQL Instances Overview dashboard

• PMM-4819: When only one host is monitored, its uptime is shown as a smaller value than the all-hosts uptime due to inaccurate rounding

• PMM-4816: Set equal thresholds to avoid confusing singlestat color differences on the Home dashboard

• PMM-4718: Labels are not fully displayed in the filter panel of the Query Details section in QAN

• PMM-4545: Long queries are not fully visible in the Query Examples section in QAN
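Regarding the PMM-4875 item above, the warnings can be avoided by enabling pg_stat_statements on the monitored PostgreSQL server. A brief sketch, assuming superuser access (configuration paths and the restart method vary by distribution):

    # In postgresql.conf, preload the extension and restart PostgreSQL:
    #   shared_preload_libraries = 'pg_stat_statements'
    # Then create the extension in the database PMM connects to
    psql -U postgres -d postgres -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements"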

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter
using our bug tracking system.


7.19 Percona Monitoring and Management 2.0.1


Date: October 9, 2019

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance. You can run PMM in your own environment for maximum security
and reliability. It provides thorough time-based analysis for MySQL, MongoDB, and PostgreSQL servers to ensure
that your data works as efficiently as possible.

For install instructions, see Installing Percona Monitoring and Management.

Note

PMM 2 is designed to be used as a new installation — please don’t try to upgrade your existing PMM 1 environment.

7.19.1 Improvements

• PMM-4779: Securely share dashboards with Percona

• PMM-4735: Keep one old slowlog file after rotation

• PMM-4724: Alt+click on check updates button enables force-update

• PMM-4444: Return “what’s new” URL with the information extracted from the pmm-update package changelog

7.19.2 Fixed bugs

• PMM-4758: Remove Inventory rows from dashboards

• PMM-4757: qan_mysql_perfschema_agent failed querying events_statements_summary_by_digest due to data type conversion

• PMM-4755: Fixed a typo in the InnoDB AHI Miss Ratio formula

• PMM-4749: Navigation from Dashboards to QAN when some Node or Service was selected now applies filtering
by them in QAN

• PMM-4742: General information links were updated to go to PMM 2 related pages

• PMM-4739: Remove request instances list

• PMM-4734: A fix was made for the formula collecting node_name on the MySQL Replication Summary dashboard

• PMM-4729: Fixes were made for formulas on the MySQL Instances Overview dashboard

• PMM-4726: Links to services in MongoDB singlestats didn’t show Node name

• PMM-4720: machine_id could contain a trailing \n

• PMM-4640: It was not possible to add MongoDB remotely if the password contained a # symbol
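A minimal sketch of the now-working case from PMM-4640; the credentials, service name, and address are placeholders, and single quotes keep the shell from treating # as a comment:

    # Passwords containing special characters such as # can now be used
    pmm-admin add mongodb --username=pmm --password='pa#ssw0rd' \
      mongodb-remote 203.0.113.10:27017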

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter
using our bug tracking system.


7.20 Percona Monitoring and Management 2.0.0


Date: September 19, 2019

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring
MySQL, MongoDB, and PostgreSQL performance. You can run PMM in your own environment for maximum security
and reliability. It provides thorough time-based analysis for MySQL, MongoDB, and PostgreSQL servers to ensure
that your data works as efficiently as possible.

For install instructions, see Installing Percona Monitoring and Management.

Note

PMM 2 is designed to be used as a new installation — please don’t try to upgrade your existing PMM 1 environment.

The new PMM 2 introduces a number of enhancements and additional feature improvements, including:

• Detailed query analytics and filtering technologies which enable you to identify issues faster than ever before.

• A better user experience: Service-level dashboards give you immediate access to the data you need.

• The addition of PostgreSQL query tuning.

• Enhanced security protocols to ensure your data is safe.

• Our new API allows you to extend and interact with third-party tools.
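As a quick, hedged illustration of API access (the server address and admin credentials are placeholders, -k skips certificate verification for the default self-signed certificate, and the version endpoint used here is assumed to be available in your release):

    # Query the PMM Server version through the API
    curl -k -u admin:admin https://pmm-server.example.com/v1/version

The full interactive API reference is served by PMM Server itself under the /swagger/ path.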

More details about new and improved features available within the release can be found in the corresponding blog
post.

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter
using our bug tracking system.
