Here is the VLOG transcript for our Kubernetes 101, answering “Kubernetes – what is it?” and covering Kubernetes deployment.
Hello, my name is Jeff Dusk. I’m part of the CloudBees core team and we’re going to go over Kubernetes 101.
Let’s do a quick introduction to Kubernetes at a high level. Then we’ll also have a quick demo to go over some of the basic concepts we’re going to cover in this presentation.
Kubernetes – What is it?
Whenever I’m looking at a new technology, one of the things I like to find out first is the official definition. The quickest way to do that is to go to the Kubernetes website. Their definition is:
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
What we care about are the highlighted parts: portable and extensible, and that it manages containerized workloads.
It’s managing containers for us. One of the big things is declarative configuration and automation. So what does that mean? Portable and extensible means it runs on any of the clouds. Declarative configuration and automation means we declare the state we want instead of working step by step – I don’t have to do all the steps myself.
How Kubernetes deployment can help developers
There are three big things here for developers, and we’re going to break them down. First, it was designed by developers for developers. Kubernetes grew out of work at Google: for the last 10 years or so, Google has been managing their own systems with containers using an internal system called Borg, and Kubernetes is the next generation of that work. Second, it uses declarative metadata in the form of a YAML file. What that means is that, as a developer, I tell Kubernetes what I want, but I don’t need to tell it the low-level details. I don’t care how it does it – just do it.
If you’ve ever used SQL, for example: when I go looking for an employee in the employees table, I write “select * from employees where last_name = 'Smith'” and I’m going to find all the Smiths. We don’t change that query whether we have 10 employees or 10 million employees – it’s the same query, and the optimizer figures out how to run it. It’s the same idea here: Kubernetes figures out how to actually manage that workload.
Why should I care about Kubernetes?
There’s also a developer-friendly API – a REST API – which means we can program against it. The other thing that’s going to be big is a common cloud-native API, a common cloud API. If you think about it, the cloud’s been big, but the problem has been that we’ve had vendor-specific clouds: AWS is really popular, and now there’s Google Container Engine, there’s the Azure Container Service, and a bunch of others. Kubernetes hides the implementation details of the cloud. So from CloudBees’ point of view, we’re running our application against a common API that can be used on any cloud service and on-premises.
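As a rough sketch of what that developer-friendly REST API means in practice (this assumes kubectl is already configured against a running cluster; the port and namespace are illustrative choices, not from the presentation), every kubectl operation is ultimately just an HTTP call you could make yourself:

```shell
# Start a local proxy to the Kubernetes API server so we can talk to it
# with plain HTTP (assumes kubectl is configured for a cluster).
kubectl proxy --port=8001 &
sleep 1   # give the proxy a moment to come up

# Listing pods in the default namespace is just an HTTP GET against
# the REST API – the same call kubectl makes under the hood.
curl http://localhost:8001/api/v1/namespaces/default/pods
```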
You can think of this almost as a cloud OS, or – as a lot of people may have heard it described – POSIX for the cloud: there’s a common API I can write my code to.
And because we have those two features, we get developer and DevOps best practices: we can now do immutable infrastructure and infrastructure as code, so we can create clusters that are easily reproduced and portable. That means I can create a cluster that is the same in testing, in staging, and in production. I can even have a multi-cloud environment: a development and testing environment locally on premises for my developers, and production running out in the cloud. That’s why we care about Kubernetes.
That’s why it’s helpful for CloudBees, and why it’s helpful for our customers.
Now that we know what it is and why we care about it, let’s do a quick dive into the architecture.
There are two kinds of nodes within Kubernetes: the master node and the worker node. The master node runs what are called the Kubernetes cluster services. There’s the API server; there’s etcd (a key-value database that stores the metadata for our cluster); there’s a scheduler that schedules our work; and there’s a controller manager which manages the workers, working in conjunction with the scheduler.
For our workers to do anything, we want to manage containers, so we have to have a container engine – that’ll be Docker in most cases. It can be other things, but we’re only going to worry about Docker today. The controller manager has to talk to those workers and have them do work with that container engine, and that’s the purpose of the kubelet.
We’re also going to want to expose some of those containers to the cluster so we can work with them – that’s the job of the service proxy. We’ll have multiple workers, and each worker runs those same three things: the container engine, the kubelet, and the service proxy.
As I mentioned, this is going to be a declarative process. Kubernetes wants you to think in terms of declarative state management. What that means is that I declare the state I want my system to be in, and Kubernetes implements it. For example, take a deployment with two pods: each pod has a container image and a number of instances, or replicas, that I want. You can see here I’ve got pod one and pod two; each has a container image, and I want three replicas for pod one and two replicas for pod two.
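A minimal sketch of what the declarative YAML for pod one might look like (the names and the nginx image are illustrative assumptions, not from the demo):

```yaml
# Desired state for "pod one": three replicas of one container image.
# Names and image below are hypothetical examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-one
spec:
  replicas: 3              # "I want three instances" - the desired state
  selector:
    matchLabels:
      app: pod-one
  template:
    metadata:
      labels:
        app: pod-one
    spec:
      containers:
      - name: web
        image: nginx:1.25  # the container image for this pod
```

Notice there is nothing here about which worker runs what – that is exactly the low-level detail we leave to Kubernetes.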
I’m going to use a tool to take that YAML file and send it to the API server. The tool we’ll be using is kubectl (some people pronounce it “kube control”). I send the command – “take this YAML” – to the API server; the API server then talks to the scheduler and the controller manager, and based on that they reach out to each of the workers to deploy what I’ve asked for.
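That step can be sketched with a couple of kubectl commands (assuming a configured cluster and that the manifest above was saved as deployment.yaml – the file name is an assumption):

```shell
# Send the declarative YAML to the API server; Kubernetes works out
# scheduling and placement on its own.
kubectl apply -f deployment.yaml

# Ask the API server for the resulting state.
kubectl get deployments
kubectl get pods -o wide   # -o wide shows which worker each replica landed on
```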
Kubernetes will now place: pod one, replica one; pod one, replica two; pod one, replica three; pod two, replica one; and finally, on the third worker, pod two, replica two.
We now have our desired state – that’s helpful, but what happens if something goes wrong? Let’s imagine that worker two goes away. Kubernetes is going to say “I do not match the state I’m supposed to be in,” so it’s going to create new pod replicas. It will look as if they moved to a new worker, and we don’t have to do anything about it. So you can see the advantage from a development point of view: I don’t have to write any code, and I don’t have to do anything with Ansible, Puppet, or Chef to make sure my application is in the proper state.
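You can watch that reconciliation loop yourself; as a hedged sketch (assuming the deployment above is running, and deleting a pod to stand in for a lost worker):

```shell
# Note the running replicas and their names.
kubectl get pods

# Delete one pod to simulate a failure (substitute a real pod name).
kubectl delete pod pod-one-<replica-suffix>

# List again: Kubernetes has already created a replacement replica
# to get back to the declared state, with no action on our part.
kubectl get pods
```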
That’s the high level, so now let’s do a demo in the environment to show how this works at a lower level.