Deploying a containerized web application


This tutorial shows you how to package a web application in a Docker container image, and run that container image on a Google Kubernetes Engine (GKE) cluster. Then, you deploy the web application as a load-balanced set of replicas that can scale to the needs of your users.

This page is for Operators and Developers who provision and configure cloud resources and deploy apps and services. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE Enterprise user roles and tasks.

Objectives

  • Package a sample web application into a Docker image.
  • Upload the Docker image to Artifact Registry.
  • Create a GKE cluster.
  • Deploy the sample app to the cluster.
  • Manage autoscaling for the deployment.
  • Expose the sample app to the internet.
  • Deploy a new version of the sample app.

Costs

In this document, you use billable components of Google Cloud, including Google Kubernetes Engine and Artifact Registry.

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the Compute Engine, Artifact Registry, and Google Kubernetes Engine APIs.

    Enable the APIs


Activate Cloud Shell

Cloud Shell comes preinstalled with the gcloud, docker, and kubectl command-line tools that are used in this tutorial.

  1. Go to the Google Cloud console.
  2. Click the Activate Cloud Shell button at the top of the Google Cloud console window.

    A Cloud Shell session opens inside a new frame at the bottom of the Google Cloud console and displays a command-line prompt.

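    Optionally, you can confirm that the command-line tools used in this tutorial are available in your session:

     gcloud --version
     docker --version
     kubectl version --client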

Create a repository

In this tutorial, you store an image in Artifact Registry and deploy it from the registry. To start, you create a Docker repository named hello-repo.

  1. Set the PROJECT_ID environment variable to your Google Cloud project ID, replacing PROJECT_ID in the following command with your project ID. You'll use this environment variable when you build the container image and push it to your repository.

    export PROJECT_ID=PROJECT_ID
    
  2. Confirm that the PROJECT_ID environment variable has the correct value:

    echo $PROJECT_ID
    
  3. Set your project ID for the Google Cloud CLI:

    gcloud config set project $PROJECT_ID
    

    Output:

    Updated property [core/project].
    
  4. Create the hello-repo repository with the following command:

    gcloud artifacts repositories create hello-repo \
       --repository-format=docker \
       --location=REGION \
       --description="Docker repository"
    

    Replace REGION with a region for the repository, such as us-west1. To see a list of available locations, run the following command:

     gcloud artifacts locations list
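
    Optionally, to confirm that the repository was created, list the Docker repositories in that location; the output should include hello-repo:

     gcloud artifacts repositories list --location=REGION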
    

Building the container image

In this tutorial, you deploy a sample web application called hello-app, a web server written in Go that responds to all requests with the message Hello, World! on port 8080.

GKE accepts Docker images as the application deployment format. Before deploying hello-app to GKE, you must package the hello-app source code as a Docker image.

To build a Docker image, you need source code and a Dockerfile. A Dockerfile contains instructions on how the image is built.

  1. Download the hello-app source code and Dockerfile by running the following commands:

    git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples
    cd kubernetes-engine-samples/quickstarts/hello-app
    
  2. Build and tag the Docker image for hello-app:

    docker build -t REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1 .
    

    This command instructs Docker to build the image using the Dockerfile in the current directory, save it to your local environment, and tag it with a name, such as us-west1-docker.pkg.dev/my-project/hello-repo/hello-app:v1. The image is pushed to Artifact Registry in the next section.

    • The PROJECT_ID variable associates the container image with the hello-repo repository in your Google Cloud project.
    • The us-west1-docker.pkg.dev prefix refers to Artifact Registry, the regional host for your repository.
  3. Run the docker images command to verify that the build was successful:

    docker images
    

    Output:

    REPOSITORY                                                 TAG     IMAGE ID       CREATED          SIZE
    us-west1-docker.pkg.dev/my-project/hello-repo/hello-app    v1      25cfadb1bf28   10 seconds ago   54 MB
    
  4. Add IAM policy bindings to your service account:

    gcloud artifacts repositories add-iam-policy-binding hello-repo \
        --location=REGION \
        --member=serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com \
        --role="roles/artifactregistry.reader"
    

    Replace PROJECT_NUMBER with the project number of your project.
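
    If you aren't sure of your project number, one way to look it up is:

     gcloud projects describe $PROJECT_ID --format="value(projectNumber)"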

Running your container locally (optional)

  1. Test your container image using your local Docker engine:

    docker run --rm -p 8080:8080 REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1
    
  2. Click the Web Preview button and then select the 8080 port number. Cloud Shell opens the preview URL on its proxy service in a new browser window.
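
    Alternatively, while the container is running, you can open a second Cloud Shell terminal tab and send a request from the command line; the response should contain the Hello, World! message:

     curl http://localhost:8080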

Pushing the Docker image to Artifact Registry

You must upload the container image to a registry so that your GKE cluster can download and run the container image. In this tutorial, you will store your container in Artifact Registry.

  1. Configure the Docker command-line tool to authenticate to Artifact Registry:

    gcloud auth configure-docker REGION-docker.pkg.dev
    
  2. Push the Docker image that you just built to the repository:

    docker push REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1
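
    To confirm that the push succeeded, you can list the images stored in the repository:

     gcloud artifacts docker images list REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo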
    

Creating a GKE cluster

Now that the Docker image is stored in Artifact Registry, create a GKE cluster to run hello-app. A GKE cluster consists of a pool of Compute Engine VM instances running Kubernetes, the open source cluster orchestration system that powers GKE.

Cloud Shell

  1. Set your Compute Engine region:

     gcloud config set compute/region REGION
    

    If you plan to use a Standard zonal cluster instead, set a Compute Engine zone near the Artifact Registry repository.

  2. Create a cluster named hello-cluster:

     gcloud container clusters create-auto hello-cluster
    

    It takes a few minutes for your GKE cluster to be created and health-checked. To run this tutorial on a GKE Standard cluster, use the gcloud container clusters create command instead.
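
    For example, a minimal Standard cluster can be created with a command like the following, where ZONE is a Compute Engine zone such as us-west1-a (the node count here is only an illustration; adjust it to your needs):

     gcloud container clusters create hello-cluster --zone=ZONE --num-nodes=1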

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. For GKE Autopilot, click Configure.

  4. In the Name field, enter the name hello-cluster.

  5. Select a Compute Engine region from the Region drop-down list, such as us-west1.

  6. Click Create.

  7. Wait for the cluster to be created. When the cluster is ready, a checkmark appears next to the cluster name.

Deploying the sample app to GKE

You are now ready to deploy the Docker image you built to your GKE cluster.

Kubernetes represents applications as Pods, which are scalable units holding one or more containers. The Pod is the smallest deployable unit in Kubernetes. Usually, you deploy Pods as a set of replicas that can be scaled and distributed together across your cluster. One way to deploy a set of replicas is through a Kubernetes Deployment.

In this section, you create a Kubernetes Deployment to run hello-app on your cluster. This Deployment has replicas (Pods). One Deployment Pod contains only one container: the hello-app Docker image. You also create a HorizontalPodAutoscaler resource that scales the number of Pods from 3 to a number between 1 and 5, based on CPU load.

Cloud Shell

  1. Ensure that you are connected to your GKE cluster.

    gcloud container clusters get-credentials hello-cluster --region REGION
    
  2. Create a Kubernetes Deployment for your hello-app Docker image.

    kubectl create deployment hello-app --image=REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1
    
  3. Set the baseline number of Deployment replicas to 3.

    kubectl scale deployment hello-app --replicas=3
    
  4. Create a HorizontalPodAutoscaler resource for your Deployment.

    kubectl autoscale deployment hello-app --cpu-percent=80 --min=1 --max=5
    
  5. To see the Pods created, run the following command:

    kubectl get pods
    

    Output:

    NAME                         READY   STATUS    RESTARTS   AGE
    hello-app-784d7569bc-hgmpx   1/1     Running   0          90s
    hello-app-784d7569bc-jfkz5   1/1     Running   0          90s
    hello-app-784d7569bc-mnrrl   1/1     Running   0          95s
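
    You can also check the Deployment and the HorizontalPodAutoscaler that you created:

     kubectl get deployment hello-app
     kubectl get hpa hello-app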
    

Console

  1. Go to the Workloads page in the Google Cloud console.

    Visit Workloads

  2. Click Deploy.

  3. In the Specify container section, select Existing container image.

  4. In the Image path field, click Select.

  5. In the Select container image pane, select the hello-app image you pushed to Artifact Registry and click Select.

  6. In the Container section, click Done, then click Continue.

  7. In the Configuration section, under Labels, enter app for Key and hello-app for Value.

  8. Under Configuration YAML, click View YAML. This opens a YAML configuration file representing the two Kubernetes API resources about to be deployed into your cluster: one Deployment, and one HorizontalPodAutoscaler for that Deployment.

  9. Click Close, then click Deploy.

  10. When the Deployment Pods are ready, the Deployment details page opens.

  11. Under Managed pods, note the three running Pods for the hello-app Deployment.

Exposing the sample app to the internet

While Pods do have individually-assigned IP addresses, those IPs can only be reached from inside your cluster. Also, GKE Pods are designed to be ephemeral, starting or stopping based on scaling needs. And when a Pod crashes due to an error, GKE automatically redeploys that Pod, assigning a new Pod IP address each time.

What this means is that for any Deployment, the set of IP addresses corresponding to the active set of Pods is dynamic. We need a way to 1) group Pods together into one static hostname, and 2) expose a group of Pods outside the cluster, to the internet.

Kubernetes Services solve for both of these problems. Services group Pods into one static IP address, reachable from any Pod inside the cluster. GKE also assigns a DNS hostname to that static IP. For example, hello-app.default.svc.cluster.local.

The default Service type in GKE is called ClusterIP, where the Service gets an IP address reachable only from inside the cluster. To expose a Kubernetes Service outside the cluster, create a Service of type LoadBalancer. This type of Service spawns an External Load Balancer IP for a set of Pods, reachable through the internet.

In this section, you expose the hello-app Deployment to the internet using a Service of type LoadBalancer.

Cloud Shell

  1. Use the kubectl expose command to generate a Kubernetes Service for the hello-app deployment:

    kubectl expose deployment hello-app --name=hello-app-service --type=LoadBalancer --port 80 --target-port 8080
    

    Here, the --port flag specifies the port number configured on the Load Balancer, and the --target-port flag specifies the port number that the hello-app container is listening on.

  2. Run the following command to get the Service details for hello-app-service:

    kubectl get service
    

    Output:

    NAME                 CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
    hello-app-service    10.3.251.122    203.0.113.0     80:30877/TCP     10s
    
  3. Copy the EXTERNAL-IP address to the clipboard (for instance: 203.0.113.0).
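
    To test the endpoint from Cloud Shell before opening it in a browser, you can send a request with curl, replacing EXTERNAL_IP with the address that you copied; the response contains the Hello, World! message:

     curl http://EXTERNAL_IP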

Console

  1. Go to the Workloads page in the Google Cloud console.

    Go to Workloads

  2. Click hello-app.

  3. From the Deployment details page, click Actions > Expose.

  4. In the Expose dialog, set the Target port to 8080. This is the port the hello-app container listens on.

  5. From the Service type drop-down list, select Load balancer.

  6. Click Expose to create a Kubernetes Service for hello-app.

  7. When the Load Balancer is ready, the Service details page opens.

  8. Scroll down to the External endpoints field, and copy the IP address.

Now that the hello-app Pods are exposed to the internet through a Kubernetes Service, you can open a new browser tab, and navigate to the Service IP address you copied to the clipboard. A Hello, World! message appears, along with a Hostname field. The Hostname corresponds to one of the three hello-app Pods serving your HTTP request to your browser.

Deploying a new version of the sample app

In this section, you upgrade hello-app to a new version by building and deploying a new Docker image to your GKE cluster.

Kubernetes rolling update lets you update your Deployments without downtime. During a rolling update, your GKE cluster incrementally replaces the existing hello-app Pods with Pods containing the Docker image for the new version. During the update, your load balancer service routes traffic only into available Pods.

  1. Return to Cloud Shell, where you cloned the hello-app source code and Dockerfile. Update the hello() function in the main.go file to report the new version, 2.0.0.

  2. Build and tag a new hello-app Docker image.

    docker build -t REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v2 .
    
  3. Push the image to Artifact Registry.

    docker push REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v2
    

Now you're ready to update your hello-app Kubernetes Deployment to use a new Docker image.

Cloud Shell

  1. Apply a rolling update to the existing hello-app Deployment with an image update using the kubectl set image command:

    kubectl set image deployment/hello-app hello-app=REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v2
    
  2. Watch the Pods running the v1 image stop and new Pods running the v2 image start.

    watch kubectl get pods
    

    Output:

    NAME                        READY   STATUS    RESTARTS   AGE
    hello-app-89dc45f48-5bzqp   1/1     Running   0          2m42s
    hello-app-89dc45f48-scm66   1/1     Running   0          2m40s
    
  3. In a separate tab, navigate again to the hello-app-service External IP. You should now see the Version set to 2.0.0.
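
You can also wait for the rolling update to finish from the command line; kubectl reports when all of the updated replicas are available:

    kubectl rollout status deployment/hello-app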

Console

  1. Go to the Workloads page in the Google Cloud console.

    Go to Workloads

  2. Click hello-app.

  3. On the Deployment details page, click Actions > Rolling update.

  4. In the Rolling update dialog, set the Image of hello-app field to REGION-docker.pkg.dev/PROJECT_ID/hello-repo/hello-app:v2.

  5. Click Update.

  6. On the Deployment details page, inspect the Active Revisions section. You should now see two Revisions, 1 and 2. Revision 1 corresponds to the initial Deployment you created earlier. Revision 2 is the rolling update you just started.

  7. After a few moments, refresh the page. Under Managed pods, all of the replicas of hello-app now correspond to Revision 2.

  8. In a separate tab, navigate again to the Service IP address you copied. The Version should be 2.0.0.

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
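
If you created a project only for this tutorial, the simplest option is to delete the whole project, which permanently removes everything in it (PROJECT_ID is your project ID):

    gcloud projects delete PROJECT_ID

To keep the project and delete only the individual resources, follow these steps: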

  1. Delete the Service: This deallocates the Cloud Load Balancer created for your Service:

    kubectl delete service hello-app-service
    
  2. Delete the cluster: This deletes the resources that make up the cluster, such as the compute instances, disks, and network resources:

    gcloud container clusters delete hello-cluster --region REGION
    
  3. Delete your container images: This deletes the Docker images you pushed to Artifact Registry.

    gcloud artifacts docker images delete \
        REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1 \
        --delete-tags --quiet
    gcloud artifacts docker images delete \
        REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v2 \
        --delete-tags --quiet
    
