Table of contents
- Introduction
- Overview of GSP321
- Prerequisites
- Step-by-Step Solution
- Task 1: Create Development VPC Manually
- Task 2: Create Production VPC Manually
- Task 3: Create Bastion Host
- Task 4: Create and Configure Cloud SQL Instance
- Task 5: Create Kubernetes Cluster
- Task 6: Prepare the Kubernetes Cluster
- Task 7: Create a WordPress Deployment
- Task 8: Enable Monitoring
- Task 9: Provide Access for an Additional Engineer
- Conclusion
Cover Image Credit: NASSCOM Community
Introduction
In this article, I’ll take you through a step-by-step approach to solving one of the most challenging labs in the Google Cloud Associate Cloud Engineer certification journey: the Develop your Google Cloud Network Challenge Lab.
I’ll provide the Google Cloud commands you need, along with detailed explanations of every aspect of each command.
By the end of this article, you’ll be able to confidently complete the lab and gain an in-depth understanding of Google Cloud networking, creating and configuring a Cloud SQL instance, Cloud IAM & Admin management, and a bit of Kubernetes.
Overview of GSP321
The scenario presented in this challenge requires you, as a trained Google Cloud and Kubernetes engineer, to help a new team (Griffin) set up their cloud environment.
The tasks you need to complete include:
Create a development VPC with three subnets manually.
Create a production VPC with three subnets manually.
Create a bastion that is connected to both VPCs.
Create a development Cloud SQL Instance and connect and prepare the WordPress environment.
Create a Kubernetes cluster in the development VPC for WordPress.
Prepare the Kubernetes cluster for the WordPress environment.
Create a WordPress deployment using the supplied configuration.
Enable monitoring of the cluster.
Provide access for an additional engineer.
The lab will specify a Region and Zone for you to use in the challenge, but for the sake of this tutorial we will use REGION: us-central1 and ZONE: us-central1-a.
Prerequisites
Basic knowledge of Google Cloud CLI, Google Cloud Console, Kubernetes, and Compute Engine.
Credits on Google Cloud Skills Boost
Active Qwiklabs subscription or Google Cloud free trial
Step-by-Step Solution
Task 1: Create Development VPC Manually
Create a VPC called griffin-dev-vpc with the following subnets only:
- griffin-dev-wp (IP address block: 192.168.16.0/20)
- griffin-dev-mgmt (IP address block: 192.168.32.0/20)
To get started, we must first sign in to Cloud Console with the credentials provided in the lab. After that, we activate the Cloud Shell, as illustrated in the image below:
Now, in the Cloud Shell:
- Run this command to authenticate the Cloud Shell. A popup may be displayed; click on the “Authorize“ button and move ahead:
gcloud auth list
- Confirm that you’re in the right project:
gcloud config list project
- Save the Region and Zone provided in the lab instructions as variables in your Cloud Shell so they can be easily reused. We will use the values mentioned earlier for the Region and Zone respectively.
export REGION=us-central1
export ZONE=us-central1-a
- Run the echo command to confirm that the right values have been saved.
echo $REGION
echo $ZONE
These commands should print out us-central1 and us-central1-a to the console respectively.
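Optionally, you can also store these values as gcloud configuration defaults, so that commands which omit an explicit --region or --zone flag pick them up automatically. This is a convenience step, not required by the lab:

```shell
# Set the default region and zone for subsequent gcloud commands.
gcloud config set compute/region $REGION
gcloud config set compute/zone $ZONE
```

The commands in this article still pass --region and --zone explicitly, so this step is purely a safety net.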
- Create the griffin-dev-vpc network with the command below. We will also set the subnet mode of the network to custom, allowing us to manually create our own subnetworks within it.
gcloud compute networks create griffin-dev-vpc --subnet-mode custom
- Create the griffin-dev-wp subnetwork using the IP address block 192.168.16.0/20 as the range, specifying the provided region as our preferred resource location.
gcloud compute networks subnets create griffin-dev-wp --network griffin-dev-vpc --range 192.168.16.0/20 --region $REGION
- Create the griffin-dev-mgmt subnetwork using the IP address block 192.168.32.0/20 as the range, specifying the provided region as our preferred resource location.
gcloud compute networks subnets create griffin-dev-mgmt --network griffin-dev-vpc --range 192.168.32.0/20 --region $REGION
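Before moving on, you can run a quick sanity check (not graded) that both subnets landed in the right network with the expected ranges:

```shell
# List the subnets of griffin-dev-vpc in our region; griffin-dev-wp and
# griffin-dev-mgmt should appear with their 192.168.16.0/20 and
# 192.168.32.0/20 ranges.
gcloud compute networks subnets list \
  --network=griffin-dev-vpc \
  --regions=$REGION
```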
This completes Task 1. Wait for the command to finish executing and click on the “Check my progress” button to confirm completion.
Task 2: Create Production VPC Manually
Create a VPC called griffin-prod-vpc with the following subnets only:
- griffin-prod-wp (IP address block: 192.168.48.0/20)
- griffin-prod-mgmt (IP address block: 192.168.64.0/20)
This is essentially a replica of the previous task, just with different names for the VPC network and subnetworks.
- Create griffin-prod-vpc:
gcloud compute networks create griffin-prod-vpc --subnet-mode custom
- Create the griffin-prod-wp subnetwork:
gcloud compute networks subnets create griffin-prod-wp --network griffin-prod-vpc --range 192.168.48.0/20 --region $REGION
- Create the griffin-prod-mgmt subnetwork:
gcloud compute networks subnets create griffin-prod-mgmt --network griffin-prod-vpc --range 192.168.64.0/20 --region $REGION
This completes Task 2. Wait a few moments for the command to finish running and click on the “Check my progress” button to confirm.
Task 3: Create Bastion Host
- Create a bastion host (instance) with two network interfaces, one connected to griffin-dev-mgmt and the other connected to griffin-prod-mgmt. Make sure you can SSH to the host.
To achieve this, we need to create an instance called bastion with two network interfaces: one created on the griffin-dev-vpc network, connecting the bastion instance to the griffin-dev-mgmt subnetwork, and the other created on the griffin-prod-vpc network, connecting it to the griffin-prod-mgmt subnetwork.
We ensure we can SSH into the instance by adding the ssh tag in the command and specifying the zone where the instance should be created.
After this is done, we need to set up firewall rules for both the griffin-dev-vpc and griffin-prod-vpc networks connected to the bastion host. This is important for security, access control, logging, and monitoring of network traffic going into and out of the bastion instance.
- Run the command below to create the bastion instance:
gcloud compute instances create bastion --network-interface=network=griffin-dev-vpc,subnet=griffin-dev-mgmt --network-interface=network=griffin-prod-vpc,subnet=griffin-prod-mgmt --tags=ssh --zone=$ZONE
- Create the firewall rule for the griffin-dev-vpc network that only allows SSH traffic on TCP port 22. We can use 0.0.0.0/0 as the source range to accept connections from any IP on the internet. This is not ideal; a dedicated IP range should be used in real-world scenarios, but since this is only a challenge lab in a temporary environment, it makes our lives easier. We will also set the target tag to ssh so it matches the tag we set during the creation of the bastion instance; this ensures the firewall rule will only apply to instances with the ssh tag.
gcloud compute firewall-rules create fw-ssh-dev --network=griffin-dev-vpc --source-ranges=0.0.0.0/0 --target-tags ssh --allow=tcp:22
- Create the firewall rule for the griffin-prod-vpc network:
gcloud compute firewall-rules create fw-ssh-prod --network=griffin-prod-vpc --source-ranges=0.0.0.0/0 --target-tags ssh --allow=tcp:22
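Since the task requires that you can SSH to the host, it is worth testing the connection from Cloud Shell before checking your progress:

```shell
# Open an SSH session to the bastion; the fw-ssh-dev and fw-ssh-prod rules
# allow TCP port 22 to instances carrying the ssh tag.
gcloud compute ssh bastion --zone=$ZONE
# Type exit inside the session to return to Cloud Shell.
```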
This completes Task 3. Click on the “Check my progress” button to confirm completion.
Task 4: Create and Configure Cloud SQL Instance
Create a MySQL Cloud SQL instance called griffin-dev-db in $REGION.
- Run this command to create the MySQL Cloud SQL instance. It uses “password“ as the root password; you can use whatever suits you here, but take note of it because it will be needed to sign into the database later on.
gcloud sql instances create griffin-dev-db --root-password password --region $REGION
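Cloud SQL instance creation can take a few minutes. If you want to confirm the instance is ready before connecting, you can check its status:

```shell
# The STATUS column should read RUNNABLE once griffin-dev-db is ready.
gcloud sql instances list
```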
- Run the command below to connect to the SQL instance you just created. You will be prompted for a password; enter the one you used in the instance creation command, in this case “password“.
gcloud sql connect griffin-dev-db
- Once in the MySQL environment, copy and paste in the SQL queries provided below and press Enter. Type exit to leave the SQL environment once done. Take note of the username wp_user and the password stormwind_rules; they will be used in later steps. These SQL queries prepare the WordPress environment:
CREATE DATABASE wordpress;
CREATE USER "wp_user"@"%" IDENTIFIED BY "stormwind_rules";
GRANT ALL PRIVILEGES ON wordpress.* TO "wp_user"@"%";
FLUSH PRIVILEGES;
This completes Task 4. Click “Check my progress“ to confirm completion.
Task 5: Create Kubernetes Cluster
- Create a 2-node cluster (e2-standard-4) called griffin-dev, in the griffin-dev-wp subnet, and in zone $ZONE.
Run the command below to achieve this, specifying the machine type as e2-standard-4, the number of nodes as 2, the network as griffin-dev-vpc, the subnetwork as griffin-dev-wp, and the zone as the provided lab zone.
gcloud container clusters create griffin-dev --machine-type e2-standard-4 --num-nodes 2 --network griffin-dev-vpc --subnetwork griffin-dev-wp --zone $ZONE
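Cluster creation usually configures kubectl credentials automatically, but if your kubectl is not yet pointed at the new cluster you can fetch them explicitly and confirm both nodes are up:

```shell
# Point kubectl at the new cluster (safe to run even if already configured).
gcloud container clusters get-credentials griffin-dev --zone=$ZONE
# Both e2-standard-4 nodes should report a STATUS of Ready.
kubectl get nodes
```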
Click “Check my progress“ to confirm completion
Task 6: Prepare the Kubernetes Cluster
From Cloud Shell, copy all files from gs://cloud-training/gsp321/wp-k8s.
- Do this with the command below:
gsutil cp -r gs://cloud-training/gsp321/wp-k8s .
Now change directory into the wp-k8s folder:
cd wp-k8s
Now, inside the folder, edit wp-env.yaml by running this command:
nano wp-env.yaml
Once the file content is open, edit it to look like the snippet below, changing the username field to wp_user and the password field to stormwind_rules respectively.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: wordpress-volumeclaim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 200Gi
---
apiVersion: v1
kind: Secret
metadata:
name: database
type: Opaque
stringData:
username: wp_user
password: stormwind_rules
Save the file and exit nano by pressing CTRL+O, then Enter, then CTRL+X.
Now that we’ve appropriately edited the file, we need to create the resources specified in the wp-env.yaml file. We do this by running the following command:
kubectl create -f wp-env.yaml
Next, we provide a key for the service account. The service account has already been set up by the lab, so we don’t have to create it ourselves. It provides database access for a sidecar container (a sidecar container is a secondary container that runs alongside the main application container in the same Kubernetes Pod; it is commonly used to extend or enhance the functionality of the main application without modifying its core logic).
Use the commands below to create the key and add it to the Kubernetes environment as a secret:
gcloud iam service-accounts keys create key.json \
--iam-account=cloud-sql-proxy@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com
kubectl create secret generic cloudsql-instance-credentials \
--from-file key.json
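At this point two secrets should exist in the cluster: database (created from wp-env.yaml) and cloudsql-instance-credentials (just created). A quick way to confirm, along with the persistent volume claim:

```shell
# The database and cloudsql-instance-credentials secrets should both be listed.
kubectl get secrets
# wordpress-volumeclaim from wp-env.yaml should show up here.
kubectl get pvc
```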
This completes Task 6. Click on the “Check my progress“ button to confirm completion.
Task 7: Create a WordPress Deployment
Now that you have provisioned the MySQL database and set up the secrets and volume, you can create the deployment using wp-deployment.yaml.
Before you create the deployment, you need to edit wp-deployment.yaml:
- Replace YOUR_SQL_INSTANCE with griffin-dev-db's instance connection name.
- Get the instance connection name from your Cloud SQL instance.
After you create your WordPress deployment, create the service with wp-service.yaml.
- Edit the wp-deployment.yaml file by running the command below to open it in the nano editor in Cloud Shell:
nano wp-deployment.yaml
Once opened, you will see something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
replicas: 1
selector:
matchLabels:
app: wordpress
template:
metadata:
labels:
app: wordpress
spec:
containers:
- image: wordpress
name: wordpress
env:
- name: WORDPRESS_DB_HOST
value: 127.0.0.1:3306
- name: WORDPRESS_DB_USER
valueFrom:
secretKeyRef:
name: database
key: username
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: database
key: password
ports:
- containerPort: 80
name: wordpress
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
command: ["/cloud_sql_proxy",
"-instances=YOUR_SQL_INSTANCE=tcp:3306",
"-credential_file=/secrets/cloudsql/key.json"]
securityContext:
runAsUser: 2 # non-root user
allowPrivilegeEscalation: false
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: wordpress-volumeclaim
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
Edit the file by changing YOUR_SQL_INSTANCE to the instance connection name of griffin-dev-db, which has the form PROJECT_ID:REGION:griffin-dev-db (not just the instance name). You can confirm the instance connection name from Cloud Console by navigating to SQL > Instances like so:
Note that WORDPRESS_DB_USER and WORDPRESS_DB_PASSWORD already read wp_user and stormwind_rules from the database secret created in Task 6, so those fields need no further editing.
Save the file and exit nano by pressing CTRL+O, then Enter, then CTRL+X.
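If you prefer the command line to the Console, you can fetch the connection name with gcloud and substitute it with sed instead of editing in nano. This is an optional shortcut and assumes the file still contains the literal YOUR_SQL_INSTANCE placeholder:

```shell
# Read the connection name, which has the form PROJECT_ID:region:griffin-dev-db.
SQL_INSTANCE=$(gcloud sql instances describe griffin-dev-db \
  --format='value(connectionName)')
# Replace the placeholder in the deployment manifest in place.
sed -i "s/YOUR_SQL_INSTANCE/$SQL_INSTANCE/g" wp-deployment.yaml
```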
- Create the WordPress deployment by running the following command:
kubectl create -f wp-deployment.yaml
- Create the service by running the following command:
kubectl create -f wp-service.yaml
You can check that the deployment and service have been created by running the following commands:
kubectl get deployments
kubectl get services
This completes Task 7. Click on the “Check my progress“ button to confirm completion.
Task 8: Enable Monitoring
- Create an uptime check for your WordPress development site. This periodically verifies the availability of the WordPress development site.
You can do this by running the following command:
gcloud monitoring uptime create wp-uptime-check \
--resource-type=uptime-url \
--resource-labels=host=EXTERNAL_IP_ADDRESS
We set the uptime check name to wp-uptime-check and the resource-type to uptime-url, because this is a URL-based uptime check that tests whether the website is available, and we use the WordPress load balancer's external IP address as the resource label. You can get the EXTERNAL_IP_ADDRESS by running kubectl get services.
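To confirm the check was registered, you can list the uptime check configurations in the project:

```shell
# The new configuration should appear with display name wp-uptime-check.
gcloud monitoring uptime list-configs
```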
Task 9: Provide Access for an Additional Engineer
- You have an additional engineer starting and you want to ensure they have access to the project. Grant them the editor role to the project.
The second user account for the lab represents the additional engineer.
We do this by navigating to the IAM & Admin section in Cloud Console and editing the role of the second user provided in the lab from Viewer to Editor. This is illustrated in the images below:
- Locate the second user on the IAM page in Cloud Console and click on the Edit icon:
- Click on the Role field and select Editor under the Roles section:
- Click Save once done:
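Alternatively, the same grant can be made from Cloud Shell with a single command. USER2_EMAIL below is a placeholder for the second user's email address shown in the lab panel:

```shell
# Grant the additional engineer the Editor role on the project.
# USER2_EMAIL is a placeholder; substitute the lab's second user account.
gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
  --member="user:USER2_EMAIL" \
  --role="roles/editor"
```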
This completes Task 9. Click on “Check my progress“ to confirm completion.
Conclusion
This is one of the most challenging labs in the Google Cloud Associate Cloud Engineer journey. It might take a couple of tries (like it did for me 😅) to fully understand some nuances and take note of important steps that one might otherwise overlook and get stuck on as a result.
That’s why I made it a duty to write this article and provide as much detail and explanation as possible, so the lives of others attempting this lab in the future are just a bit easier. I hope I’ve been able to do that.
Reach out on LinkedIn and let’s connect, or you can find me on my Portfolio if you’d like to discuss an opportunity to work together.
See you on the next one. Cheers 🥂