A step-by-step guide to deploying Flask web services on a GKE cluster

Kekayan
10 min read · May 4, 2019

Welcome back with another step-by-step guide!

Today I am going to cover one of the most widely used technologies in the tech world: Kubernetes.

Below is the outline of today’s guide:

  • What is Kubernetes
  • Story of Container Orchestrators
  • Scenario of the Hands-on
  • Developing Services & Docker images
  • MiniKube: Local Kubernetes Deployment
  • Deploy in cluster

What is Kubernetes

Kubernetes is a container-orchestration technology born at Google. Today, it is an open-source project released under the Apache 2.0 license.

Before going forward, let me describe the term “Container Orchestrator” a bit more.

Story of Container Orchestrators

With the evolution of computers, people thought about scaling their applications in more efficient and effective ways. Companies came up with mechanisms to rent out the unused capacity of their data centres; this is where business bloomed for companies like Amazon and Google. Concepts like virtual machines let people use machines far away, in unknown territories. But with a virtual machine, the main disadvantage is the resources wasted on running a full guest operating system. So people came up with a solution: containers.

Containers are very similar to virtual machines, except that a container does not need its own operating system. This made scaling applications even more resource-efficient.

Within a very short time, containers started to rule the IT industry. But the next problem was how to control and manage the numerous containers that a single system owns. And what about companies with many systems and products? These containers needed upgrades, version control, security patches, health checks and much more. People wanted to manage similar containers in the simplest possible way.

That is where container orchestrators were born. An orchestrator tells the containers what to do, where to be, and so on. It made the life of DevOps engineers easier: all the updates, rollbacks and patches are now one YAML file away, literally.

So that is the story of container orchestrators. Kubernetes is one such orchestrator. I will explain more theory alongside our practical work.

The scenario of the Hands-on

Think of a scenario where we have two services, a data service and an API service, where the data service is not exposed to the public and the API service is. See the diagram below for understanding.

1. API-service: exposes two API endpoints, “/codeToState” and “/stateToCode”, which take their parameters as URL query parameters
http://host/codeToState?code=XX
http://host/stateToCode?state=XXXXXXX

2. Data-Service: reads data from this JSON or this JSON and provides the data requested by the API service

First, let’s build the services and their Docker images.

Developing Services & Docker images

Before we start developing our services, we need to install some packages and dependencies.

First, install Python, pip (the Python package manager), Git and Docker. Then use pip to install virtualenv:

sudo apt-get update
sudo apt-get install python-dev python-pip git docker.io
sudo pip install virtualenv

1. Set up the project

  • Make a new empty directory for the project: data_service
  • Set up a virtual environment in a directory called venv with the command virtualenv venv
  • Activate the virtual environment with source venv/bin/activate
  • Make a file requirements.txt that lists all your dependencies. For the simplest Flask app we need only:
Flask==1.0.2
requests
  • Install your dependencies with pip install -r requirements.txt

2. Code the endpoints

  • Make a Flask app at app/main.py
  • When we call /codes it returns a JSON object mapping codes to states, and /states returns the reverse mapping.
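As a concrete sketch, app/main.py for data_service could look like the following. The two-entry CODES dictionary is a stand-in for the JSON data linked above, not the real dataset.

```python
# data_service/app/main.py - a minimal sketch. The CODES dictionary is a
# two-entry stand-in for the JSON data linked in the article.
from flask import Flask, jsonify

app = Flask(__name__)

# code -> state mapping; the real service loads this from a JSON file
CODES = {"NY": "New York", "CA": "California"}

@app.route("/codes")
def codes():
    # codes -> states
    return jsonify(CODES)

@app.route("/states")
def states():
    # states -> codes (the inverted mapping)
    return jsonify({state: code for code, state in CODES.items()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
```

The uWSGI/Nginx base image we use below looks for the app object in /app/main.py, so running it directly is only needed for local testing.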

3. Make the Dockerfile

A Dockerfile is a file called Dockerfile (with no extension) which tells Docker how to build an image.

I will use a base Docker image from Sebastián Ramírez, which has Nginx, uWSGI and Flask pre-installed.

Our directory structure should look like this:

.
├── app
│   └── main.py
├── Dockerfile
└── requirements.txt

4. Explanation of the Dockerfile

FROM

This line tells Docker what image to pull from the Docker hub repository (it’s like GitHub for Docker containers). The image specified by FROM is the base of the container we are building. It has python, flask, nginx, and uWSGI (the bridge between Flask and Nginx) as well as some other tools like git installed for us on a Debian OS.

You can think of a Docker image as a virtual operating system, but with a smaller file size and better efficiency. The tiangolo/uwsgi-nginx-flask:flask image sets up a Debian Jessie “virtual operating system” which can run on our Ubuntu machine, or on pretty much any machine. The settings, installed programs, and even the operating system of our machine don’t matter: Docker makes its own isolated container.

COPY

This is how our app’s code gets incorporated into the Docker container. As you might expect, it copies the app directory from our project directory to the Docker container’s /app directory, overwriting the default app that Sebastián set up in there with our own.
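Putting those two instructions together, the whole Dockerfile is only two lines. This is a sketch matching the FROM and COPY explanations above; there are no other instructions we need for this simple service.

```Dockerfile
# Pull the base image with Python, Flask, uWSGI and Nginx pre-installed
FROM tiangolo/uwsgi-nginx-flask:flask

# Replace the base image's default app with our own code
COPY ./app /app
```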

5. Building the Docker Image — docker build

docker build -t data_service .

This tells Docker to build an image using the project in the present working directory (the . at the end), and tag it data_service (-t stands for “tag”). Docker will pull down the base image tiangolo/uwsgi-nginx-flask:flask from Docker Hub, then copy our app code into the image.

We can check the Docker image with:

docker images

6. Run the Docker Image — docker run

docker run -p 80:80 -t data_service

  • -p 80:80 connects port 80 of the Docker container to port 80 of your machine, so HTTP can work.
  • -t here allocates a pseudo-terminal (in docker run it does not mean “tag”); data_service is the image we built earlier.

Check it on localhost:

7. API service and related Docker Container

Similarly to data_service:

  • We create a directory called api_service
  • Create a virtualenv called venv
  • Create requirements.txt and add flask & requests
  • Activate the environment and install the requirements
  • Then we create our Python file app/main.py
  • And we create our Dockerfile
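A sketch of api_service’s app/main.py might look like this. The DATA_SERVICE_URL environment variable (and its cluster-DNS default) and the shape of the responses are assumptions for illustration; adjust them to match your data_service.

```python
# api_service/app/main.py - a minimal sketch. DATA_SERVICE_URL and its
# default value are assumptions; point it at wherever data_service runs.
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

DATA_SERVICE_URL = os.environ.get(
    "DATA_SERVICE_URL", "http://dataservice.default.svc.cluster.local"
)

@app.route("/codeToState")
def code_to_state():
    # /codeToState?code=NY -> {"state": "New York"}
    code = request.args.get("code", "").upper()
    codes = requests.get(DATA_SERVICE_URL + "/codes").json()
    return jsonify({"state": codes.get(code, "Unknown")})

@app.route("/stateToCode")
def state_to_code():
    # /stateToCode?state=New%20York -> {"code": "NY"}
    state = request.args.get("state", "").title()
    states = requests.get(DATA_SERVICE_URL + "/states").json()
    return jsonify({"code": states.get(state, "Unknown")})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
```

Reading the data-service address from an environment variable means the same image runs unchanged against a local container, Minikube, or the GKE cluster.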

Before deploying everything on Google Cloud, let’s try to deploy on our local machine using Minikube.

MiniKube: Local Kubernetes Deployment

1. Prerequisites

Let’s start Minikube first:

minikube start

Now we SSH into the Minikube VM and build the Docker images there, so that they are available to Minikube’s Docker daemon:

minikube ssh

2. Build Docker images

  • Build the two Docker images, after cd-ing into each directory:
docker build -t api_service:v1a .
docker build -t data_service .

We can list the images by running docker images.

Exit from the VM.

3. Deploy in Minikube

Now it’s time to deploy our two containers to Kubernetes.

We will use kubectl to deploy our Pods and create Services.

PODS

When we want to run an application in Kubernetes, we do so by declaring a Pod which describes the container that we want to run. Each Pod is given an IP address that is internal to the Kubernetes cluster, but this IP is not accessible from outside of Kubernetes. Even from inside Kubernetes, you’d have to know the IP of the Pod to access it, which is inconvenient at best.

We can create a simple Deployment to manage each Pod as below:

kubectl run dataservice --image=data_service:latest --image-pull-policy=Never --port=80
kubectl run apiservice --image=api_service:v1a --image-pull-policy=Never --port=80

Let’s create the Services.

Why use a Service?

In a Kubernetes cluster, each Pod has an internal IP address. But the Pods in a Deployment come and go, and their IP addresses change. So it doesn’t make sense to use Pod IP addresses directly. With a Service, we get a stable IP address that lasts for the life of the Service, even as the IP addresses of the member Pods change.

Types of Services

There are five types of Services:

  • ClusterIP (default): Internal clients send requests to a stable internal IP address.
  • NodePort: Clients send requests to the IP address of a node on one or more nodePort values that are specified by the Service.
  • LoadBalancer: Clients send requests to the IP address of a network load balancer.
  • ExternalName: Internal clients use the DNS name of a Service as an alias for an external DNS name.
  • Headless: You can use a headless service in situations where you want a Pod grouping, but don’t need a stable IP address.

According to our architecture above, both “api_service” and “data_service” are running on the same Kubernetes cluster.

  • We have to define a Service of type ClusterIP (the default) for the app that we want to call (data_service), since it doesn’t need to be exposed to the public.
  • api_service gets a NodePort Service if we are using Minikube, or a LoadBalancer in the cloud.

data_service is in the namespace named default, so we define a Service named dataservice for the data_service Deployment and then configure api_service to call data_service at these URIs:

http://dataservice.default.svc.cluster.local/codes
http://dataservice.default.svc.cluster.local/states

Update the above URLs in the api_service application.

To create the Services:

kubectl expose deployment dataservice --type=ClusterIP
kubectl expose deployment apiservice --type=NodePort

Now we can check the URL:

echo $(minikube service apiservice --url)

So now we have tested each endpoint. You can see the results above.

GKE Cluster Deployment

Now, let’s deploy the system to a GKE cluster.

  • Create your project here
  • Now open Cloud Shell
  • We will clone our repos in there

1. Build Docker images for both services

Set the PROJECT_ID environment variable in your shell by retrieving the pre-configured project ID on gcloud, using the command below:

export PROJECT_ID="$(gcloud config get-value project -q)"

To build the container images of this application and tag them for uploading, run the following commands (each from its own service’s directory):

docker build -t gcr.io/${PROJECT_ID}/data_service:v1 .
docker build -t gcr.io/${PROJECT_ID}/api_service:v1 .

2. Upload the container images

  • First, enable the Container Registry API.
  • Configure the Docker command-line tool to authenticate to Container Registry (we need to run this only once):
gcloud auth configure-docker
  • We can now use the Docker command-line tool to upload the images to our Container Registry:
docker push gcr.io/${PROJECT_ID}/data_service:v1
docker push gcr.io/${PROJECT_ID}/api_service:v1

3. Create a container cluster

A cluster consists of a pool of Compute Engine VM instances running Kubernetes, the open source cluster orchestration system that powers GKE.

  • Enable the billing API for GKE
  • Let’s create a two-node cluster named mediumtuts-cluster:
gcloud container clusters create mediumtuts-cluster --num-nodes=2 --zone=us-central1-b

Check our cluster:

gcloud compute instances list

4. Deploy our two services/apps

kubectl run dataservice --image=gcr.io/${PROJECT_ID}/data_service:v1 --port 80
kubectl run apiservice --image=gcr.io/${PROJECT_ID}/api_service:v1 --port 80

5. Expose our API service to the Internet

kubectl expose deployment dataservice --type=ClusterIP
kubectl expose deployment apiservice --type=LoadBalancer

So after a few minutes, if we run kubectl get svc, we can see the load balancer IP as in the picture below.

So if we try our endpoints, we can see the results.


Dashboard

GKE Dashboard

I’ll attach both services’ repos here.

References

Czarkowski, P. (2019). Kubernetes Services: Exposed!. [online] Medium. Available at: https://medium.com/@pczarkowski/kubernetes-services-exposed-86d45c994521 [Accessed 4 May 2019].

Google Cloud. (2019). Deploying a containerized web application | Kubernetes Engine Tutorials | Google Cloud. [online] Available at: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app [Accessed 4 May 2019].

Google Cloud. (2019). Service | Kubernetes Engine | Google Cloud. [online] Available at: https://cloud.google.com/kubernetes-engine/docs/concepts/service [Accessed 4 May 2019].
