Serverless Architectures with Kubernetes

Introduction to Kubernetes

In the previous chapter, we studied serverless frameworks, created serverless applications using these frameworks, and deployed these applications to the major cloud providers.

As we have seen in the previous chapters, Kubernetes and serverless architectures began gaining traction in the industry at the same time. Kubernetes achieved wide adoption and became the de facto container management system, thanks to its design principles of scalability, high availability, and portability. For serverless applications, Kubernetes provides two essential benefits: removal of vendor lock-in and reuse of services.

Kubernetes creates an abstraction layer over the infrastructure to remove vendor lock-in. Vendor lock-in is a situation where the transition from one service provider to another is very difficult or even infeasible. In the previous chapter, we studied how serverless frameworks make it easy to develop cloud-agnostic serverless applications. Let's assume you are running your serverless framework on an AWS EC2 instance and want to move to Google Cloud. Although your serverless framework creates a layer between the cloud provider and your serverless applications, you are still deeply attached to the cloud provider for the infrastructure. Kubernetes breaks this connection by creating an abstraction between the infrastructure and the cloud provider. In other words, serverless frameworks running on Kubernetes are unaware of the underlying infrastructure. If your serverless framework runs on Kubernetes on AWS, it is expected to run just as well on Google Cloud Platform (GCP) or Azure.

As the de facto container management system, Kubernetes manages most microservices applications in the cloud and in on-premise systems. Let's assume you have already converted your big monolithic application into cloud-native microservices and you are running them on Kubernetes. Now you have started developing serverless applications, or turning some of your microservices into serverless nanoservices. At this stage, your serverless applications will need to access your data and other services. If you can run your serverless applications in your Kubernetes clusters, you will have the chance to reuse those services and stay close to your data. Besides, it will be easier to manage and operate both microservices and serverless applications together.

As a solution to vendor lock-in, and for the potential reuse of data and services, it is crucial to learn how to run serverless architectures on Kubernetes. This chapter begins with a recap of the origin and design of Kubernetes. Following that, we will install a local Kubernetes cluster, and you will be able to access the cluster by using a dashboard or a client tool such as kubectl. In addition to that, we will discuss the building blocks of Kubernetes applications, and finally, we will deploy a real-life application to the cluster.
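As a small taste of the building blocks covered later in this chapter, a Pod, the smallest deployable unit in Kubernetes, is described declaratively in a YAML manifest. The following is a minimal sketch; the name `hello-pod` and the `nginx` image are illustrative choices, not taken from the book:

```yaml
# A minimal Pod manifest: a single container running the nginx web server.
# Names and image tag here are illustrative examples.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
```

Once a cluster is available, a manifest like this would typically be submitted with `kubectl apply -f pod.yaml`, after which `kubectl get pods` shows its status.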