Contributed by Mike Barlow
Getting TiDB Up and Running on Google Cloud Compute (GCP) Instance - Part 2
Overview
This is the second part of a two-part series on getting TiDB up and running on Google Cloud Compute (GCP). In Part 1 we created a GCP instance. In this part we install, configure, and run TiDB on that instance.
Here’s a Reference Architecture of what we will have at the end of this post.
Table of Contents
- Install TiUP
- Create a TiDB Topology Configuration File
- Deploy TiDB using TiUP
- Start TiDB and Components
- Let’s connect
In Part 1, we focused on the prerequisite operations (see image below) of setting up and configuring GCP to run TiDB. In this part, we focus on setting up and running TiDB itself.
Requirements
We set up the following items in Part 1. If any of them are not set up or you are having issues, please refer back to Part 1.
- GCP Instance
- gcloud associated with a User Account
- Private Keys
- TiDB Ports Open
Let’s check that things are set up correctly. I will try to keep this short and quick.
SSH to GCP Instance
If you are not already SSHed into your GCP instance, please do so now.
gcloud compute ssh --zone "us-west1-a" "tidb-vm"
Notice that the server we SSH into has a prompt that references the tidb-vm instance.
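For example, the prompt will look something like this (your username will differ):

your_username@tidb-vm:~$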
gcloud Account
Let’s check the account that gcloud is associated with.
gcloud config list
# or
gcloud auth list
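The gcloud auth list output will look roughly like the following (your account value will differ); the asterisk marks the active account:

      Credentialed Accounts
ACTIVE  ACCOUNT
*       your-name@example.com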
Perfect, we are running under a user account.
SSH Keys
Let’s check what ssh keys we have available.
ls -al .ssh/
The primary file we are interested in is google_compute_engine, which is the private key that TiUP will use for SSH.
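Among the files listed, you should see the key pair that gcloud generated:

google_compute_engine
google_compute_engine.pub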
TiDB Ports
We should have a firewall rule that allows our local computer to access the TiDB components (TiDB Dashboard, Grafana, Prometheus, and the SQL interface).
gcloud compute firewall-rules list
We created the firewall rule access-from-home in Part 1. The other firewall rules were created by GCP automatically.
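If you want to double-check exactly which ports the access-from-home rule opens (it should include 2379, 3000, 9090, and 4000, assuming the rule was created as described in Part 1), you can describe it:

gcloud compute firewall-rules describe access-from-home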
This should give us confidence that the GCP instance is set up correctly to run TiDB.
The Process
Install TiUP
Think of TiUP as a package manager that makes it easier to manage different cluster components in the TiDB ecosystem.
References
- TiUP Overview
- Deploy a TiDB Cluster Offline Using TiUP
- Deploy a TiDB Cluster Using TiUP
- Quick Start Guide for the TiDB Database Platform
So that we don’t accidentally install TiUP and TiDB on our local computer, confirm that you are on the GCP instance. In the image below, notice that the command prompt prefix includes tidb-vm. This lets us know that we are on the GCP instance.
Let’s install TiUP.
The following command downloads and installs TiUP:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
Notice that the .bashrc file was updated to include the path to the TiUP working directory that was created in our home directory under .tiup/.
Since the path has been updated in .bashrc, we need to reload it by running the following command. It produces no output.
source .bashrc
Sanity Check
Let’s check that TiUP is installed.
tiup -v
tiup cluster list
You can see that no clusters are running.
So tiup -v worked and gave us the version of TiUP. Note that the TiUP version in the output is different from the version of TiDB we will install.
Also notice the output of the tiup cluster list command: the cluster component was downloaded from the repository. When we initially installed TiUP, not all components were downloaded; additional components are downloaded the first time they are called.
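As a side note, the repository can also be browsed on demand. For example, if you want to see which TiDB versions are available to deploy later, you can list them:

tiup list tidb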
Create a TiDB Topology Configuration File
We use TiUP to deploy and scale the TiDB ecosystem. A topology file is a configuration file that identifies the different components that will be deployed and the servers they will be deployed on.
Let’s create the topology.yaml file. This topology file is fairly simple; with it, we are telling TiUP to do the following:
- Create a new Linux user called tidb
- Manage the cluster over the SSH Port 22
- Deploy TiDB components to the directory /home/tidb/tidb-deploy
- Store data in the directory /home/tidb/tidb-data
- Install and run the following components, all on the server with IP 127.0.0.1 (localhost):
- Placement Driver (PD)
- TiDB
- TiKV
- Monitoring Server
- Grafana Server
- Alertmanager Server
GitHub - Example topology.yaml file with explanations.
vim topology.yaml
Copy and paste the following into the topology.yaml file:
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/home/tidb/tidb-deploy"
  data_dir: "/home/tidb/tidb-data"

server_configs: {}

pd_servers:
  - host: 127.0.0.1

tidb_servers:
  - host: 127.0.0.1

tikv_servers:
  - host: 127.0.0.1

monitoring_servers:
  - host: 127.0.0.1

grafana_servers:
  - host: 127.0.0.1

alertmanager_servers:
  - host: 127.0.0.1
Notice that all IP addresses are 127.0.0.1 (localhost): we are installing one instance of each component on the same machine. This is by no means a production setup; we just want an environment where we can kick the tires.
To deploy a TiDB cluster, we use the TiUP command tiup cluster. Below we use the check subcommand to do a dry run that validates the topology.yaml file and verifies that SSH access is sufficient.
Remember that we created the Google private key for SSH in Part 1. We will use this private key in the command below with the --identity_file parameter.
Then we reference the topology.yaml file:
tiup cluster check --identity_file .ssh/google_compute_engine topology.yaml
Since we are doing a quick example, do not worry too much about the Pass, Fail, and Warn statuses in the output.
The output in the image below is what we expect. This is good.
If things didn’t work correctly, you may see something like the results in the image below. If you look at the command in that image, you will notice that I misspelled the private key name, so TiUP could not find the private key and raised an error.
If you do see this error, please refer to the section on creating SSH keys in Part 1.
Deploy TiDB using TiUP
In the previous section we did a dry run. Now, we actually deploy TiDB using TiUP.
When we do a deployment, we need to add a few additional parameters. In the following image, the main parameters are identified.
tiup cluster deploy tidb-test v5.0.1 -i .ssh/google_compute_engine topology.yaml
Here’s the breakdown of the command:
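- tidb-test – the name we are giving the cluster; TiUP uses this name in later commands (list, display, start)
- v5.0.1 – the TiDB version to deploy
- -i .ssh/google_compute_engine – the private key TiUP uses for SSH (the short form of --identity_file)
- topology.yaml – the topology file we created above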
We successfully deployed a TiDB cluster, but we have not started the cluster yet.
Sanity Check
Let’s view which clusters are being managed by TiUP.
tiup cluster list
We should see only the one cluster that we deployed.
Let’s see the details of the tidb-test cluster.
tiup cluster display tidb-test
Since we have not started TiDB and its components, we see that everything is offline, with statuses such as inactive, Down, and N/A. This is all fine and expected.
Before we start TiDB, I like to see which service ports are open on the GCP instance. This is different from the GCP firewall rules we created; these are the ports assigned to processes running on the instance itself. After we start TiDB, we will run this command again and see which ports are associated with the different TiDB processes.
The output shows only the handful of ports that a default GCP instance assigns to its system processes.
sudo netstat -tulpn | grep LISTEN
Start TiDB and Components
Alright, let’s start up the TiDB ecosystem using tiup cluster.
tiup cluster start tidb-test
There’s a lot going on here, but the key thing is that no errors are shown.
Sanity Check
Let’s see the details of the tidb-test cluster using the display parameter.
tiup cluster display tidb-test
As we can see, all the components are up and running.
Let’s see what ports and services are available now.
sudo netstat -tulpn | grep LISTEN
There are many processes running that have TCP ports associated with them. In the image above, I highlighted the ports that we opened with the GCP firewall rule.
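As a quick reference, the ports we care about from our local computer (assuming the default component ports used in this walkthrough) are:
- 2379 – PD, which also serves the TiDB Dashboard
- 3000 – Grafana
- 9090 – Prometheus
- 4000 – TiDB SQL interface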
Let’s connect
In this section we will access the following items from a browser on our local computer. We will also use a MySQL client to access TiDB and run a few SQL commands. Below are the different components that we will access:
- TiDB Dashboard
- Grafana
- Prometheus
- SQL Client
To access the TiDB components that are running on our GCP instance from our local computer, we will need the external IP address of our GCP Instance.
gcloud compute instances list
Here we can see that the GCP instance’s external IP address is 34.83.139.90. Of course, your IP address will be different. We will use this IP address in the browser on our local computer.
TiDB Dashboard
In a browser on your local computer, enter a URL made up of your GCP instance’s IP address, port 2379, and /dashboard at the end. For me the URL is http://34.83.139.90:2379/dashboard; the IP address for your GCP instance will be different.
A login should not be needed, so go ahead and click the “Sign In” button.
Here’s the TiDB Dashboard. We will not go into detail about these tools.
Grafana
To access Grafana, create a URL and use your GCP instance’s public IP address with port 3000. For me, the URL will be http://34.83.139.90:3000/. Your IP address will probably be different.
To log in to Grafana, the username is admin and the password is admin.
You may be prompted to change the password. I select “Skip,” since this is a temporary system that only I can access from my local computer.
The initial Grafana dashboard should look something like this:
Prometheus
To access Prometheus, create a URL using your GCP instance’s public IP address with port 9090. For me, the URL is http://34.83.139.90:9090. Your IP address will be different.
You should not need a user ID or password.
MySQL Client
In Part 1, we installed a MySQL Client on the GCP Instance. Now let’s use the MySQL Client to connect to TiDB.
TiDB’s SQL interface listens on port 4000 by default.
From the GCP instance, run the following command. It starts a MySQL client and connects to 127.0.0.1 (localhost) on port 4000 as the user root.
mysql -h 127.0.0.1 -P 4000 -u root
Now we should have a mysql> prompt. Let’s run a few commands.
SELECT VERSION();
SHOW DATABASES;
exit
Please play around with the different web interfaces and MySQL Client.
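If you want something a little more interesting to poke at, here is a small throwaway example you can paste into the mysql> prompt (the database and table names are just placeholders):

CREATE DATABASE sandbox;
USE sandbox;
CREATE TABLE greetings (id INT PRIMARY KEY, message VARCHAR(50));
INSERT INTO greetings VALUES (1, 'hello tidb');
SELECT * FROM greetings;
DROP DATABASE sandbox;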
If you already have a MySQL client or another database tool on your local computer, you can use it to access TiDB over port 4000. Just remember to use the GCP instance’s public IP address.
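For example, using the external IP address from earlier, the connection from a local MySQL client would look something like this (substitute your own instance’s IP address):

mysql -h 34.83.139.90 -P 4000 -u root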
Wrap Up
We now have a simple TiDB cluster running. We can access monitoring that includes TiDB Dashboard, Grafana, and Prometheus.
I hope this helps you get up and running with TiDB and provides a foundation for learning and trying things out.
If you are done with the GCP instance, you can tear it down by following the Tear Down commands below.
Tear Down
The easiest way to tear down this environment is by destroying the GCP instance.
From our local computer, let’s get a list of GCP instances. We will then destroy (delete) the tidb-vm instance and confirm that it has been deleted.
From your local computer, run the following gcloud commands.
The delete command may take a few minutes to complete.
gcloud compute instances list
gcloud compute instances delete "tidb-vm" --zone "us-west1-a"
gcloud compute instances list
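As an aside, if you would rather keep the GCP instance and only remove the TiDB cluster, you could instead run the following from the GCP instance; it stops the cluster and removes its deploy and data directories:

tiup cluster destroy tidb-test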
Let’s remove the firewall rule that we created. In the commands below, we first get a list of firewall rules. Then we delete the firewall rule access-from-home, and then validate that the rule was deleted.
gcloud compute firewall-rules list
gcloud compute firewall-rules delete access-from-home
gcloud compute firewall-rules list
Your GCP environment should be cleaned up and back to its original state.