1. What is Kubernetes?
Answer: Kubernetes is an open-source container orchestration platform. It was developed by Google and was donated to the Cloud Native Computing Foundation (CNCF) in 2015.
2. How does Kubernetes relate to Docker?
Answer: Docker is a container runtime, i.e., software that runs containerized applications. When Kubernetes schedules a pod to a node, the kubelet running on that node instructs the container runtime (historically Docker, now commonly containerd) to launch the pod's containers.
3. What is container orchestration?
Answer: Container orchestration is the automation of components and processes related to running containers. It includes things like configuring and scheduling containers, the availability of containers, allocation of resources between containers, and securing the interaction between containers, among other things.
4. How is Kubernetes related to Docker?
Answer: Docker provides the lifecycle management of containers, and a Docker image is used to build the runtime containers. But since these individual containers have to communicate and be scheduled across machines, Kubernetes is used. In short, Docker builds and runs the containers, and Kubernetes lets those containers communicate with each other: containers running on multiple hosts can be linked and orchestrated by making use of Kubernetes.
5. What do you know about Kubernetes clusters?
Answer: A Kubernetes cluster is a set of nodes that containerized applications run on. These nodes can be physical or virtual machines.
6. What is kubectl?
Answer: Kubectl is the command-line configuration tool for Kubernetes that communicates with a Kubernetes API server. Kubectl allows you to create, inspect, update, and delete Kubernetes objects.
7. What is a pod?
Answer: A pod is the most basic Kubernetes object. A pod consists of a group of containers running in your cluster. Most commonly, a pod runs a single primary container.
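A minimal single-container pod manifest might look like this (the name and image are illustrative):

```yaml
# pod.yaml — a single-container pod (names and image are examples)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
```

You would create it with kubectl apply -f pod.yaml and inspect it with kubectl get pod nginx-pod.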
8. Can you explain the different components of Kubernetes architecture?
Answer: Kubernetes is composed of two layers: a control plane and a data plane. The control plane is the container orchestration layer that includes
1. Kubernetes objects that control the cluster, and
2. the data about the cluster’s state and configuration.
The data plane is the layer that processes the data requests and is managed by the control plane.
9. What is the difference between a daemonset, a deployment and a replication controller?
Answer: A daemonset ensures that all nodes you select are running exactly one copy of a pod.
A deployment is a resource object in Kubernetes that provides declarative updates to applications. It manages the scheduling and life cycle of pods. It provides several key features for managing pods including pod health checks, rolling updates of pods, the ability to roll back, and the ability to easily scale pods horizontally.
A replication controller specifies how many identical copies of a pod should be running in a cluster. It differs from a deployment in that it does not offer pod health checks, and its rolling update process is not as robust.
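To make the comparison concrete, here is a sketch of a Deployment manifest managing three replicas of a pod (all names and the image are illustrative):

```yaml
# deployment.yaml — a Deployment keeping 3 replicas with rolling updates
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web          # must match the pod template's labels
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

A DaemonSet manifest looks similar but has no replicas field, since it runs one copy per selected node.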
10. Do all of the nodes have to be the same size in your cluster?
Answer: No, they don’t. The Kubernetes components, like kubelet, will take up resources on your nodes and you’ll still need more capacity for the node to do any work. In a larger cluster, it often makes sense to create a mix of different instance sizes. That way, pods that require a lot of memory with intensive compute workloads can be scheduled by Kubernetes on large nodes and smaller nodes can handle smaller pods.
11. What is a sidecar container, and what would you use it for?
Answer: A sidecar container is a utility container that is used to extend support for a main container in a Pod. Sidecar containers can be paired with one or more main containers and they enhance the functionality of those main containers. An example would be making use of a sidecar container specifically to process the system logs or for monitoring.
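As a sketch, a pod with a hypothetical log-shipping sidecar could share an emptyDir volume between the main container and the sidecar (the images and paths here are assumptions, not from the source):

```yaml
# A main container writing logs, with a sidecar tailing them from a shared volume
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}
  containers:
  - name: app
    image: my-app:1.0          # hypothetical main application image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper
    image: busybox:1.36        # sidecar: follows the shared log file
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```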
12. How do logs work for pods?
Answer: In a traditional server setup, application logs are written to a file and then viewed either on each server or collected by a logging agent and sent to a centralized location. In Kubernetes, however, writing logs to disk from a pod is discouraged since you would then have to manage log files for pods. The better way is to have your application write logs to stdout and stderr. The kubelet on each node collects the stdout and stderr of the running containers and combines them into a log file managed by Kubernetes. You can then use kubectl commands such as kubectl logs to view them.
13. How can you separate resources?
Answer: You can separate resources by using namespaces. These can be created either by using kubectl or applying a YAML file. After you have created the namespace you can then place resources or create new resources within that namespace. Some people think of namespaces in Kubernetes like a virtual cluster in your actual Kubernetes cluster.
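A namespace can be created from a YAML file like the following (the name is illustrative):

```yaml
# namespace.yaml — equivalent to: kubectl create namespace team-a
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
```

After applying it, you can place resources in that namespace by setting metadata.namespace in their manifests or by passing -n team-a to kubectl.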
14. What are the main differences between the Docker Swarm and Kubernetes?
Answer: Docker Swarm is Docker’s native, open-source container orchestration platform that is used to cluster and schedule Docker containers. Swarm differs from Kubernetes in the following ways:
- Docker Swarm is more convenient to set up but doesn't have a robust cluster, while Kubernetes is more complicated to set up but brings the assurance of a robust cluster.
- Docker Swarm can't do auto-scaling (Kubernetes can); however, scaling in Swarm is generally faster than in Kubernetes.
- Docker Swarm doesn’t have a GUI; Kubernetes has a GUI in the form of a dashboard.
- Docker Swarm does automatic load balancing of traffic between containers in a cluster, while Kubernetes requires manual intervention for load balancing.
- Docker requires third-party tools like ELK stack for logging and monitoring, while Kubernetes has integrated tools for the same.
- Docker Swarm can share storage volumes with any container easily, while Kubernetes can only share storage volumes with containers in the same pod.
- Docker can deploy rolling updates but can’t deploy automatic rollbacks; Kubernetes can deploy rolling updates as well as automatic rollbacks.
15. What does the node status contain?
Answer: The main components of a node status are Address, Condition, Capacity, and Info.
16. What process runs on Kubernetes Master Node?
Answer: The kube-apiserver process runs on the master node. It exposes the Kubernetes API and is designed to scale horizontally by deploying more instances.
17. What is the Google Container Engine?
Answer: Google Container Engine (now Google Kubernetes Engine, GKE) is a managed platform for Docker containers and Kubernetes clusters, providing support for clusters that run in Google's public cloud services.
18. What is ‘Heapster’ in Kubernetes?
Answer: Heapster is a performance monitoring and metrics collection system for data collected by the kubelet (it has since been deprecated in favor of metrics-server). This aggregator is natively supported and runs like any other pod within a Kubernetes cluster, which allows it to discover and query usage data from all nodes within the cluster.
19. What is a Namespace in Kubernetes?
Answer: Namespaces are used for dividing cluster resources between multiple users. They are meant for environments where there are many users spread across projects or teams and provide a scope of resources.
20. Name the initial namespaces from which Kubernetes starts?
- default
- kube-system
- kube-public
- kube-node-lease (in newer versions)
21. What is the Kubernetes controller manager?
Answer: The controller manager is a daemon that embeds the core control loops, such as garbage collection and namespace creation. It runs multiple controller processes on the master node, even though they are compiled into and run as a single process.
22. What are the types of controller managers?
Answer: The primary controller managers that can run on the master node are the endpoints controller, service accounts controller, namespace controller, node controller, token controller, and replication controller.
23. What is etcd?
Answer: Kubernetes uses etcd as a distributed key-value store for all of its data, including metadata and configuration data, and it allows nodes in Kubernetes clusters to read and write data. Although etcd was originally built by CoreOS, it works on a variety of operating systems (e.g., Linux, BSD, and OS X) because it is open source. Etcd represents the state of a cluster at a specific moment in time and is the canonical hub for state management and cluster coordination of a Kubernetes cluster.
24. What are the different services within Kubernetes?
Answer: Different types of Kubernetes services include:
- ClusterIP service
- NodePort service
- ExternalName service
- LoadBalancer service
25. What is ClusterIP?
Answer: The ClusterIP is defined as the default Kubernetes service which provides a service inside a cluster (with no external access) that other apps inside your cluster can access.
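A sketch of a ClusterIP service manifest (names and ports are illustrative):

```yaml
# Routes in-cluster traffic on port 80 to pods labeled app=web on port 8080
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: ClusterIP      # the default, so this line may be omitted
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```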
26. What is the LoadBalancer in Kubernetes?
Answer: The LoadBalancer service is used to expose services to the internet. A Network load balancer, for example, creates a single IP address that forwards all traffic to your service.
27. What is a headless service?
Answer: A headless service is used to interface service discovery mechanisms without being tied to a ClusterIP, therefore allowing you to directly reach pods without having to access them through a proxy. It is useful when neither load balancing nor a single Service IP is required.
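A headless service is declared by setting clusterIP to None (names are illustrative):

```yaml
# DNS for this service returns the individual pod IPs, not a single service IP
apiVersion: v1
kind: Service
metadata:
  name: web-headless
spec:
  clusterIP: None      # "None" makes the service headless
  selector:
    app: web
  ports:
  - port: 80
```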
28. Why use Kubernetes?
Answer: Kubernetes is used because:
- Kubernetes can run on-premises on bare metal, on OpenStack, and on public clouds such as Google, Azure, AWS, etc.
- It helps you avoid vendor lock-in, since applications need not use vendor-specific APIs or services except where Kubernetes provides an abstraction, e.g., load balancers and storage.
- It enables applications to be released and updated without any downtime.
- Kubernetes lets you ensure that containerized apps run where and when you want, and helps you find the resources and tools you want to work with.
29. What are the features of Kubernetes?
Answer: The features of Kubernetes are:
- Automated Scheduling
- Self-Healing Capabilities
- Automated rollouts & rollback
- Horizontal Scaling & Load Balancing
- Offers environment consistency for development, testing, and production
- Infrastructure is loosely coupled so each component can act as a separate unit
- Provides a higher density of resource utilization
- Offers enterprise-ready features
- Application-centric management
- Auto-scalable infrastructure
- You can create predictable infrastructure
30. What are the disadvantages of Kubernetes?
- Kubernetes dashboard is not as helpful as it should be.
- Security is not very effective.
- It is very complex and can reduce productivity.
- Kubernetes is more costly than its alternatives.
31. Define Ingress Network.
Answer: An ingress network is defined as a collection of rules that permit inbound connections into the Kubernetes cluster.
32. What is GKE?
Answer: GKE (Google Kubernetes Engine, formerly Google Container Engine) is a management platform that supports Kubernetes clusters and Docker containers running within Google's public cloud services.
33. How to run Kubernetes locally?
Answer: Kubernetes can be run locally using the Minikube tool. It runs a single-node cluster in a VM (virtual machine) on your computer, which makes it an ideal way for users who have just started learning Kubernetes to try it out.
34. What are the tools that are used for container monitoring?
Answer: Tools that are used for container monitoring include cAdvisor, Heapster, Prometheus, Grafana, and the Sematext Docker Agent.
35. List components of Kubernetes.
Answer: Kubernetes components fall into two groups:
- Master (control plane) components
- Node components
36. Explain Prometheus in Kubernetes.
Answer: Prometheus is an application used for monitoring and alerting. It can call out to your systems, grab real-time metrics, compress them, and store them properly in a time-series database.
37. Explain Replica set.
Answer: A ReplicaSet is used to keep a set of replica pods stable. It lets us specify the desired number of identical pods. It can be considered a replacement for the replication controller.
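A minimal ReplicaSet manifest might look like this (names and image are illustrative); in practice you usually create a Deployment, which manages ReplicaSets for you:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```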
38. List out some important Kubectl commands.
Answer: The important Kubectl commands are:
- kubectl annotate
- kubectl cluster-info
- kubectl attach
- kubectl apply
- kubectl config
- kubectl autoscale
- kubectl config current-context
- kubectl config set
39. Why use a Kube-apiserver?
Answer: Kube-apiserver is the API server of Kubernetes; it is used to configure and validate API objects, which include services, controllers, etc. It provides the frontend to the cluster's shared state, through which all other components interact with each other.
40. Explain the types of Kubernetes pods.
Answer: There are two types of pods in Kubernetes:
Single-container pods: They can be created with the kubectl run command.
Multi-container pods: They can be created by using the kubectl create command with a YAML definition that lists multiple containers.
41. What are the labels in Kubernetes?
Answer: Labels are a collection of key-value pairs. They are attached to objects such as pods, replication controllers, and services. Generally, labels are added to an object at creation time, and they can be modified by users at run time.
42. What are the objectives of the replication controller?
Answer: The objectives of the replication controller are:
- It is responsible for controlling and administering the pod lifecycle.
- It monitors and verifies whether the allowed number of replicas are running or not.
- The replication controller helps the user to check the pod status.
- It enables us to alter a pod, for example to replace or reschedule it as needed.
43. What do you mean by persistent volume?
Answer: A persistent volume is a unit of storage in the cluster that is provisioned by an administrator. Its lifecycle is independent of any individual pod that uses it.
44. What are Secrets in Kubernetes?
Answer: Secrets are objects in Kubernetes that store sensitive information, such as login credentials (usernames and passwords). The data is stored base64-encoded, and encryption at rest can be enabled for additional protection.
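A sketch of a Secret manifest (the name and credentials are made up for illustration):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=     # base64 of "admin"
  password: czNjcjN0     # base64 of "s3cr3t"
```

Using the stringData field instead of data lets you supply the values in plain text and have Kubernetes encode them for you.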
45. What is Sematext Docker Agent?
Answer: Sematext Docker agent is a log collection agent with events and metrics. It runs as a small container in each of the Docker hosts. These agents gather metrics, events, and logs for all cluster nodes and containers.
46. Define OpenShift.
Answer: OpenShift is a public cloud application development and hosting platform developed by Red Hat. It offers automation for management so that developers can focus on writing the code.
47. Define K8s.
Answer: K8s is a shorthand for Kubernetes: the 8 stands for the eight letters between the "K" and the "s". It is an open-source orchestration framework for containerized applications.
48. What are the ways to provide API-Security on Kubernetes?
Answer: The ways to provide API-Security on Kubernetes are:
- Use the correct authorization mode on the API server, e.g., --authorization-mode=Node.
- Protect the kubelet's API via --authorization-mode=Webhook.
- Ensuring the kube-dashboard uses a restrictive RBAC (Role-Based Access Control) policy.
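As one sketch of a restrictive RBAC policy, the Role and RoleBinding below grant a hypothetical service account read-only access to pods in a single namespace (all names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
- apiGroups: [""]            # "" = the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
- kind: ServiceAccount
  name: dashboard-sa         # hypothetical service account
  namespace: team-a
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```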
49. What are the types of Kubernetes Volume?
Answer: The types of Kubernetes Volume are:
- emptyDir
- hostPath
- nfs
- GCE persistent disk (gcePersistentDisk)
- AWS EBS (awsElasticBlockStore)
- configMap, secret, and persistentVolumeClaim
50. Explain PVC.
Answer: PVC stands for Persistent Volume Claim. It is a request for storage made to Kubernetes on behalf of a pod. The user does not need to know the underlying provisioning details. The claim should be created in the same namespace as the pod that uses it.
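A sketch of a PVC manifest (the claim name, size, and storage class are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard   # assumes a StorageClass with this name exists
  resources:
    requests:
      storage: 5Gi
```

A pod then references the claim by name under spec.volumes via persistentVolumeClaim.claimName.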
51. What is the Kubernetes Network Policy?
Answer: A Network Policy defines how pods are allowed to communicate with each other and with other network endpoints.
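As a sketch, the policy below allows ingress to pods labeled app=db only from pods labeled app=web, on one port (labels and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-web
spec:
  podSelector:
    matchLabels:
      app: db          # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web     # only these pods may connect
    ports:
    - protocol: TCP
      port: 5432
```

Note that a CNI plugin that enforces network policies (e.g., Calico or Cilium) must be installed for this to take effect.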
52. What are labels and annotations when it comes to Kubernetes?
Answer: A label in Kubernetes is a meaningful tag attached to Kubernetes objects in order to make them part of a group. Labels can be used to select groups of instances for management or routing purposes. For example, controller-based objects use labels to mark the pods they operate on, and services use labels to identify the set of backend pods they route requests toward. Labels are key-value pairs; each object may have multiple labels, but only one entry per key. The key is most commonly used as an identifier, while other criteria, such as public access, application version, or development stage, can be used to classify objects.
Annotations attach arbitrary key-value information to a Kubernetes object. Labels should be used for meaningful information that matches a pod against selection criteria, whereas annotations hold less-structured data that is not used for selection.
53. Can you discuss how the master node works in Kubernetes?
Answer: The Kubernetes master controls the nodes, and containers run within those nodes. The individual containers are grouped into pods, and containers are placed inside each pod according to the configuration and requirements. When pods have to be deployed, they can be created either through a user interface or the command-line tool. The job of the kube-apiserver is to ensure communication between the Kubernetes nodes and the master components.
54. What is the role of the Kube apiserver and the Kube scheduler?
Answer: The kube-apiserver follows a scale-out architecture and is the front end of the master node's control plane. It exposes all the APIs of the Kubernetes master node components and is responsible for establishing communication between the Kubernetes nodes and the master components. The kube-scheduler is responsible for the distribution and management of the workload on the various worker nodes. It selects the most suitable node to run each unscheduled pod based on its resource needs and keeps track of overall resource utilization, making sure that workloads are not scheduled on nodes that are already full.
55. What is node affinity and pod affinity?
Answer: Node affinity helps in ensuring the hosting of pods on specific nodes. On the other hand, pod affinity helps in ensuring that two pods could be co-located on a single node.
56. How can you start a rollback for an application?
Answer: Rollbacks and rolling updates are built into the Deployment object in Kubernetes. If the current state of a Deployment is unstable due to the application code or configuration, you can roll back to an earlier revision of the Deployment. Every rollback updates the revision of the Deployment.
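Against a live cluster, a rollback might look like the following kubectl sequence (the deployment name "web" is illustrative):

```shell
# Inspect the revision history of a deployment
kubectl rollout history deployment/web
# Roll back to the previous revision
kubectl rollout undo deployment/web
# Or roll back to a specific revision
kubectl rollout undo deployment/web --to-revision=2
# Watch the rollback progress
kubectl rollout status deployment/web
```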
57. What are init containers?
Answer: You can find many containers in a Kubernetes pod and init container is the first container that is executed before running other containers in the pod.
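A sketch of a pod with an init container that blocks until a dependency is reachable (the service name and images are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    # hypothetical check: loop until the "db" service name resolves
    command: ["sh", "-c", "until nslookup db; do sleep 2; done"]
  containers:
  - name: app
    image: my-app:1.0    # hypothetical main application image
```

The main container starts only after every init container has run to completion.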
58. How to do maintenance activity on the K8 node?
Answer: Maintenance activity is an inevitable part of administration; you may need to apply patches or security fixes on a K8 node. First mark the node unschedulable, and then drain the pods that are present on it:
- kubectl cordon <node-name>
- kubectl drain <node-name> --ignore-daemonsets
It's important to include --ignore-daemonsets for any daemonset running on this node. If a statefulset pod is running on this node and no other node is available to maintain the statefulset's replica count, that pod will remain in Pending status.
59. What is the role of a pause container?
Answer: The pause container serves as the parent container for all the containers in your pod.
- It serves as the basis of Linux namespace sharing in the pod.
- It acts as PID 1 for each pod and reaps zombie processes.
60. Why do we need service mesh?
Answer: A service mesh is used to ensure that communication among the containerized and often ephemeral application infrastructure services is fast, reliable and secure. The mesh provides critical capabilities including service discovery, load balancing, encryption, observability, traceability, authentication and authorization, and support for the circuit breaker pattern.
61. How to control the resource usage of a POD?
Answer: With requests and limits, resource usage of a POD can be controlled.
request: The amount of resources requested for a container. If a container exceeds its request, it may be throttled back down toward its request when the node is under pressure.
limit: An upper cap on the resources a container is able to use. If it tries to exceed this limit, it may be terminated if Kubernetes decides that another container needs the resources. If you are sensitive to pod restarts, it also makes sense to keep the sum of all container resource limits equal to or less than the total resource capacity of your cluster.
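In a pod spec, requests and limits are set per container; the values below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bounded-pod
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: 250m        # a quarter of a core
        memory: 128Mi
      limits:
        cpu: 500m        # throttled above this
        memory: 256Mi    # killed (OOM) above this
```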
62. What are the units of CPU and memory in POD definition?
Answer: CPU is specified in millicores and memory in bytes (commonly with Mi/Gi suffixes). CPU usage can be throttled, but memory cannot: a container that exceeds its memory limit is terminated.
Where else can we set a resource limit?
Answer: You may also set resource limits on a namespace. This is helpful in scenarios where people have a habit of not defining resource limits in the pod definition.
63. How will you update the version of K8?
Answer: Before doing the update of K8, it’s important to read the release notes to understand the changes introduced in newer versions and whether version updates will also update the etcd.
64. Explain the role of CRD (Custom Resource Definition) in K8?
Answer: A custom resource definition is an extension of the Kubernetes API that is not necessarily available in a default installation of Kubernetes. It represents a customization of a particular Kubernetes installation. Many core Kubernetes functions are now built using custom resources, making Kubernetes more modular.
65. What are the various K8 related services running on nodes and the role of each service?
Answer: A K8 cluster mainly consists of two types of nodes, master and executor, running the following services:
Kube-apiserver: Master API service which acts like a door to K8 cluster.
Kube-scheduler: Schedule PODs according to available resources on executor nodes.
Kube-controller-manager: runs control loops that watch the shared state of the cluster through the apiserver and make changes attempting to move the current state towards the desired state.
Executor node services (these also run on the master node):
Kube-proxy: The Kubernetes network proxies are running on each node. This is reflecting services as defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding or round robin TCP, UDP, and SCTP forwarding across a set of backends.
Kubelet: kubelet takes a set of PodSpecs that are provided through different mechanisms (primarily through the apiserver) and ensures that the containers described in those PodSpecs are running and also healthy.
66. What is PDB (Pod Disruption Budget)?
Answer: A PDB specifies the number of replicas that an application can tolerate having, relative to how many it is intended to have. For example, a Deployment with .spec.replicas: 5 is supposed to have 5 pods at any given time. If its PDB allows there to be 4 at a time, then the Eviction API will allow voluntary disruption of one, but not two, pods at a time. This applies only to voluntary disruptions.
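The example above corresponds to a PDB like this (names are illustrative):

```yaml
# Keep at least 4 of the 5 replicas available during voluntary disruptions
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 4
  selector:
    matchLabels:
      app: web
```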
67. What are the application deployment strategies?
Answer: In this agile world there is continuous demand to upgrade applications; there are multiple options for deploying a new version of an app:
1) Recreate: Old style; the existing application version is destroyed and the new version is deployed. This incurs a significant amount of downtime.
2) Rolling update: Gradually bringing down the existing deployment and introducing the new version. You decide how many instances can be upgraded at a single point in time.
3) Shadow: Traffic going to the existing version of the application is replicated to the new version to see if it’s working. Istio provides this pattern.
4) A/B Testing using Istio: Running multiple variants of application together and determining the best one based on user traffic. It’s more for management decisions.
5) Blue/Green: Blue is mainly about switching the traffic from one version of app to another version.
6) Canary deployment: It is the deployment in which a certain percentage of traffic is shifted from one version to another. If things work well we will keep on increasing the traffic shift. It’s a little bit different from the rolling update in which the existing version count is reduced gradually.
68. How to run a POD on a particular node?
Answer: Various methods are available to achieve it.
- nodeName: specify the node name in the pod spec; the pod will then run on that specific node.
- nodeSelector: assign a specific label to nodes that have special resources, and use the same label in the pod spec so the pod runs only on those nodes.
- node affinity: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution are hard and soft requirements for running the pod on specific nodes. Node affinity is intended to replace nodeSelector in the future. It depends on the node labels.
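The nodeSelector and node-affinity approaches can be sketched in a single pod spec (the node labels disktype=ssd and gpu=true are assumptions about how your nodes are labeled):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    disktype: ssd            # simple label match
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values: ["true"]
  containers:
  - name: app
    image: nginx:1.25
```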
69. How to ensure PODs are collocated to get performance benefits?
Answer: podAffinity and podAntiAffinity are the affinity concepts for keeping pods on the same node or keeping them apart, respectively; to co-locate pods for performance benefits, use podAffinity. The key point to note is that it depends on the pod labels.
70. What are the taints and toleration?
Answer: Taints allow a node to repel a set of pods. You can set taints on a node, and only pods with tolerations matching the taint conditions will be able to run on that node. This is useful when you have allocated a node to one user and don't want pods from other users running on it.
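As a sketch, a node could be tainted with kubectl taint nodes node1 dedicated=team-a:NoSchedule (node name and key are hypothetical), and a pod tolerating that taint would declare:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dedicated-pod
spec:
  tolerations:
  - key: "dedicated"        # must match the taint's key/value/effect
    operator: "Equal"
    value: "team-a"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx:1.25
```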
71. How to provide persistent storage for POD?
Answer: Persistent volumes are used for persistent POD storage. They can be provisioned statically or dynamically.
Static: A cluster administrator creates a number of PVs in advance. These carry the details of the real storage, which is available for use by cluster users.
Dynamic: A user creates a PVC (PersistentVolumeClaim) specifying an existing storage class, and a volume is created dynamically based on the claim.
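A statically provisioned PV might look like this NFS-backed sketch (the server address, export path, and size are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.5       # hypothetical NFS server
    path: /exports/data    # hypothetical export path
```

A PVC whose access mode and size requirements match can then bind to this volume.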
72. How do two containers running in a single POD have a single IP address?
Answer: Kubernetes implements this by creating a special container for each pod whose only purpose is to provide a network interface for the other containers: the pause container, which is responsible for namespace sharing in the pod. People often ignore the existence of this pause container, but it is actually the heart of the networking and other functionality of the pod. It provides a single virtual interface that is used by all containers running in a pod.
73. What’s the difference between nodeport and load balancer?
Answer: NodePort relies on the IP address of your node, and node ports can only be used from the range 30000–32767. A load balancer, on the other hand, has its own IP address. All the major cloud providers support creating a load balancer for you if you specify the LoadBalancer type while creating the service. On bare-metal clusters, MetalLB is promising.
74. How does POD to POD communication work?
Answer: For POD to POD communication, it’s always recommended to use the K8 service DNS instead of POD IP because PODs are ephemeral and their IPs can get changed after the redeployment.
- If the two PODs are running on the same host then the physical interface will not come into the picture.
- Packet will be leaving the POD1 virtual network interface and go to Docker Bridge (cbr0).
- Docker Bridge will forward the packet to right POD2 which is running on the same host.
- If two PODs are running on a different host then the physical interface of both host machines will come into the picture. Let’s consider a scenario in which CNI is not used.
- POD1 = 192.168.2.10/24 (node1, cbr0 192.168.2.1) POD2 = 192.168.3.10/24 (node2, cbr1 192.168.3.1)
- POD1 will send the traffic destined for POD2 to its GW (cbr0) because both are in different subnets.
- GW doesn’t know about 192.168.3.0/24 network hence it will forward the traffic to the physical interface of node1.
- node1 will forward the traffic to its own physical router/gateway.
- That physical router/GW should have the route for 192.168.3.0/24 network to route the traffic to node2.
- Once traffic reaches node2, it passes that traffic to POD2 through cbr1
75. How does POD to service communication work?
Answer: PODs are ephemeral and their IP addresses can change, hence a service is used as a proxy or load balancer to communicate with pods in a reliable way. A service is a type of Kubernetes resource that causes a proxy to be configured to forward requests to a set of pods. The set of pods that receives the traffic is determined by the selector, which matches labels assigned to the pods when they were created. K8 provides an internal cluster DNS that resolves the service name.
Services use a different internal network than the pod network. Netfilter rules, injected by kube-proxy, redirect requests actually destined for the service IP to the right pod.
76. How does the service know about healthy endpoints?
Answer: The kubelet running on each worker node is responsible for detecting unhealthy endpoints. It passes that information to the API server, and eventually it is passed on to kube-proxy, which adjusts the netfilter rules accordingly.
77. What are the various things that can be done to increase the K8 security?
- By default, POD can communicate with any other POD, we can set up network policies to limit this communication between the PODs.
- RBAC (Role based access control) to narrow down the permissions.
- Use namespaces to establish security boundaries.
- We should set the admission control policies to avoid running of the privileged containers.
- Turn on audit logging.
78. How to monitor K8 cluster?
Answer: Prometheus is used for K8 monitoring. The Prometheus ecosystem consists of multiple components.
- the main Prometheus server, which scrapes and stores the time-series data.
- client libraries for instrumenting application code.
- a push gateway for supporting short-lived jobs.
- special-purpose exporters for services like HAProxy, StatsD, Graphite, etc.
- an alertmanager to handle alerts.
- various support tools.
79. How to make Prometheus HA?
Answer: You may run multiple instances of Prometheus for HA, but Grafana can use only one of them as a data source. You may put a load balancer in front of multiple Prometheus instances, use sticky sessions, and fail over if one of the instances dies. This makes things complicated. Thanos is another project which solves these challenges.
80. What are other challenges with prometheus?
Answer: Despite being very good at monitoring K8, Prometheus still has some issues:
- Prometheus HA support is limited.
- No downsampling is available for metrics collected over time.
- There is no support for object storage for long-term metric retention.
81. What’s a prometheus operator?
Answer: The mission of the Prometheus Operator is to make running Prometheus on top of Kubernetes as easy as possible, while preserving configurability and making the configuration Kubernetes-native.
82. How to get the central logs from POD?
Answer: This architecture depends upon application and many other factors. Following are the common logging patterns.
- Node level logging agent
- Streaming sidecar container
- Sidecar container with logging agent
- Export logs directly from the application
In our setup, Filebeat and Journalbeat run as daemonsets. The logs they collect are dumped to Kafka topics, which are eventually shipped to the ELK stack.
The same can be achieved using an EFK stack with Fluent Bit.
83. Where Kubernetes cluster data is stored?
Answer: etcd is responsible for storing Kubernetes cluster data.
etcd is written in the Go programming language and is a distributed key-value store used for coordinating distributed work. Etcd stores the configuration data of the Kubernetes cluster, representing the state of the cluster at any given point in time.
84. What is the role of kube-scheduler?
Answer: kube-scheduler is responsible to assign a node to the newly created pods.
85. Which process runs on Kubernetes master node?
Answer: The Kube-apiserver process runs on Kubernetes master node.
86. Which process runs on Kubernetes non-master node?
Answer: The kube-proxy process runs on Kubernetes non-master nodes.
87. Which container runtimes are supported by Kubernetes?
Answer: Kubernetes supports any container runtime that implements the Container Runtime Interface (CRI), such as containerd and CRI-O. Docker (via the dockershim) and rkt were supported historically; dockershim was removed in Kubernetes v1.24, and rkt has been deprecated.