Deploy Monitoring
- Objective: Deploy TiDB Monitor in Kubernetes
- Prerequisites:
  - background knowledge of TiDB components
  - AWS account
  - deployed TiDB cluster
- Optionality: Optional
- Estimated time: 20 minutes
Monitoring of TiDB running in Kubernetes (TidbCluster) is provided by a separate cluster resource called TidbMonitor (Documentation, API reference).
The TidbMonitor resource consists of a single pod with multiple containers, including Prometheus and Grafana. The default configuration exposes the Grafana web interface to the internet, which makes it easy to load for testing without having to forward ports or perform any other complex setup.
Deploy TidbMonitor
From the deploy/aws subdirectory of the checked-out tidb-operator repository, using the same cluster name and namespace that you used when you deployed TidbCluster, execute these commands to deploy TidbMonitor:
cluster_name=my-cluster
sed "s/CLUSTER_NAME/${cluster_name}/" manifests/db-monitor.yaml.example > monitor.yaml
kubectl create -f monitor.yaml -n "$namespace"
Monitor the progress of the deployment, and wait until the pod is "Running":
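One way to watch the deployment is with a kubectl watch; a minimal sketch, assuming tidb-operator's standard app.kubernetes.io labels on the monitor pod (adjust the selector if your deployment labels differ):

```shell
# Sketch: watch the TidbMonitor pod until its STATUS column shows "Running".
# The label selector assumes tidb-operator's standard labels
# (app.kubernetes.io/component=monitor) -- an assumption, not guaranteed
# for every operator version.
watch_monitor_pod() {
  local namespace="$1"
  kubectl get pods -n "$namespace" \
    -l app.kubernetes.io/component=monitor --watch
}
# Usage (requires a live cluster and a configured kubectl; blocks until
# interrupted with Ctrl-C):
#   watch_monitor_pod "$namespace"
```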
After starting data loading, let's open the Grafana dashboard so we can view cluster metrics while load is placed on the cluster. Execute these commands back on your host OS, where your Kubernetes config file is located (not on the bastion or other EC2 instance you may have used to run sysbench).
TiDB server instances write monitoring information to a Prometheus instance.
Grafana
Get the HTTP endpoint for the Grafana service by listing the services in your namespace:
kubectl get svc -n "$namespace"
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-cluster-grafana LoadBalancer 172.20.119.95 ad88200d638c845748655e5540fef21b-1466443333.us-west-2.elb.amazonaws.com 3000:31680/TCP 104m
my-cluster-monitor-reloader NodePort 172.20.146.19 <none> 9089:31329/TCP 104m
my-cluster-prometheus NodePort 172.20.112.150 <none> 9090:30946/TCP 104m
The Grafana service listens on port 3000 and is by default open to the internet, so for the output above, you'd load http://ad88200d638c845748655e5540fef21b-1466443333.us-west-2.elb.amazonaws.com:3000.
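If you'd rather script this than read the hostname out of the table, the EXTERNAL-IP column can be extracted from the kubectl get svc output; a sketch that parses the sample output above (the hostname is just the illustrative one from that listing):

```shell
# Sketch: build the Grafana URL from `kubectl get svc` output.
# In a live cluster you would pipe real output in with:
#   kubectl get svc -n "$namespace" | grafana_url
grafana_url() {
  # The EXTERNAL-IP column is field 4; the Grafana service listens on 3000.
  awk '/-grafana / { print "http://" $4 ":3000" }'
}

# Demonstrate on the sample output shown above:
grafana_url <<'EOF'
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)          AGE
my-cluster-grafana   LoadBalancer   172.20.119.95   ad88200d638c845748655e5540fef21b-1466443333.us-west-2.elb.amazonaws.com   3000:31680/TCP   104m
EOF
# Prints: http://ad88200d638c845748655e5540fef21b-1466443333.us-west-2.elb.amazonaws.com:3000
```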
The default view is empty, but you can find several useful graphs by clicking the "Home" drop-down in the upper left and choosing the "-Overview" Dashboard.
While executing sysbench workloads on the bastion host, you should see activity on various graphs in the Overview Dashboard.
TiDB Dashboard
New in TiDB 4.0 is an experimental Dashboard that is hosted by the PD component of TiDB. If your cluster is running TiDB version 3.0 or 3.1, you'll need to upgrade the cluster in order to access the Dashboard.
Connecting
Only a single PD instance serves the Dashboard; the other PD instances return an HTTP redirect to it.
You can access this Dashboard by using kubectl to forward a port from your desktop to port 2379 on a PD pod in the cluster.
Note: If you are running kubectl from inside a Docker container or on a host other than where you want to use a web browser, you'll need to take additional steps to forward a local port to the environment where you run kubectl, or you'll need to copy your credentials locally in order to use kubectl to forward a port from your local machine.
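The port forward itself can be set up along these lines; a sketch, assuming tidb-operator's usual <cluster>-pd naming convention for the PD service:

```shell
# Sketch: forward local port 2379 to PD. Starting from the service lets
# Kubernetes pick a PD pod for you; if that pod answers with a redirect,
# re-run the forward against the specific pod named in the redirect.
# The "svc/<cluster>-pd" name is an assumption based on tidb-operator's
# naming convention.
pd_forward() {
  local cluster_name="$1" namespace="$2"
  kubectl port-forward "svc/${cluster_name}-pd" -n "$namespace" 2379:2379
}
# Usage (blocks until interrupted; append & to run it in the background):
#   pd_forward my-cluster "$namespace" &
```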
If you get a redirect, stop the forward you created and create a new one that forwards to the specific pod given in the redirect. For example:
Forwarding from 127.0.0.1:2379 -> 2379
Forwarding from [::1]:2379 -> 2379
Handling connection for 2379
<a href="http://my-cluster-pd-1.my-cluster-pd-peer.my-cluster.svc:2379/dashboard/">Temporary Redirect</a>.
[1]+ Terminated: 15 kubectl port-forward pod/my-cluster-pd -n "$namespace" 2379
Forwarding from 127.0.0.1:2379 -> 2379
Forwarding from [::1]:2379 -> 2379
Handling connection for 2379
HTTP/1.1 200 OK
Accept-Ranges: bytes
Access-Control-Allow-Headers: accept, content-type, authorization
Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, DELETE
Access-Control-Allow-Origin: *
Content-Length: 6477
Content-Type: text/html; charset=utf-8
Last-Modified: Tue, 28 Apr 2020 19:42:37 GMT
Date: Tue, 28 Apr 2020 19:50:14 GMT
Note: If you have enabled TLS between the components of your cluster, you'll need to take additional steps to access the Dashboard. Namely, you will need to load your cluster's CA into your web browser (or operating system) as a trusted root, and you will need to load the client certificate and key into your web browser, or provide these items to curl.
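For a TLS-enabled cluster, a curl invocation along these lines can verify access through the port forward; the certificate file names below are placeholders, so substitute the paths where you stored your cluster's CA and client certificates:

```shell
# Sketch: check the Dashboard over TLS through the port forward.
# ca.crt, client.crt, and client.key are placeholder file names, not
# paths from your cluster.
dashboard_check_tls() {
  curl --cacert ca.crt \
       --cert client.crt --key client.key \
       -sI https://127.0.0.1:2379/dashboard/
}
# Usage (requires the port forward from the previous section to be active):
#   dashboard_check_tls
```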
Logging in
After confirming that you're forwarding port 2379 to the correct pod, you can load http://127.0.0.1:2379/dashboard/ in a web browser.
The login information for the Dashboard is the same as you'd use to log in to TiDB Server. Because the "root" user has no password by default, you can log in to the Dashboard without a password. If you've already given the TiDB "root" user a password, you'll need to use that password to log in to the Dashboard.
Key Visualizer
After logging in to the Dashboard, you can use the Key Visualizer feature to see how queries are distributed across the keyspace of the cluster. Note that you have to explicitly enable Key Visualizer before it collects data about your workload; if a workload is already in progress, only the key usage that occurs after you enable Key Visualizer will be shown.
Once Key Visualizer is enabled, you can run a variety of sysbench workloads and compare their effects on the cluster. Key Visualizer can display Read/Write activity on the basis of either bytes or keys, and you can zoom in and out of the graph to get more insight.
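As one way to produce contrasting patterns in Key Visualizer, you might run a read-only and a write-only sysbench workload back to back on the bastion host; a sketch, where the TiDB host name and the sbtest database are placeholders for the connection values you used when loading data:

```shell
# Sketch: run two contrasting sysbench workloads so their key-access
# patterns can be compared in Key Visualizer. tidb_host and the sbtest
# database are placeholders; the table counts/sizes should match what
# you used in the "prepare" step.
run_workloads() {
  local tidb_host="$1"
  local workload
  for workload in oltp_point_select oltp_write_only; do
    sysbench "$workload" \
      --mysql-host="$tidb_host" --mysql-port=4000 --mysql-user=root \
      --mysql-db=sbtest --tables=16 --table-size=10000 \
      --threads=8 --time=120 run
  done
}
# Usage (requires sysbench and network access to the TiDB service):
#   run_workloads tidb.example.internal
```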
You can mouse over individual data points in the chart to see the table name, range of keys for a specific region, and the metrics for that timeslice of the given region for the type of operation being viewed.