- Discuss the benefits of logically grouping Pods with Services to access an application.
- Explain the role of the kube-proxy daemon running on each node.
- Explore the Service discovery options available in Kubernetes.
- Discuss different Service types.
A Service offers a single DNS entry for a containerized application managed by the Kubernetes cluster, regardless of the number of replicas. It provides a common load balancing access point to a set of Pods, which are logically grouped and managed by a controller, for example a Deployment.
To access the application, we need to connect to the Pods. As Pods are ephemeral in nature, resources like the IP addresses allocated to them cannot be static. The Service is a higher-level abstraction provided by Kubernetes which, with the help of Labels and Selectors, logically groups these Pods and defines a policy to access them.
Labels are key/value pairs attached to objects, such as Pods, and Selectors match objects by their labels. Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).
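As a sketch, a Deployment could attach such labels to its Pods; every Pod created from the template below carries the label app: frontend, which a Service selector can then match (the names, image, and replica count are illustrative assumptions):

```yaml
# Hypothetical Deployment whose Pods carry the label app: frontend;
# a Service with the selector app: frontend would logically group these Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend      # label matched by the Service's Selector
    spec:
      containers:
      - name: frontend
        image: nginx:1.25  # illustrative image
        ports:
        - containerPort: 5000
```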
A Service object definition might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
```
We are creating a frontend-svc Service by selecting all the Pods that have the Label key app set to the value frontend.
By default, each Service receives an IP address routable only inside the cluster, known as the ClusterIP.
The client will connect to a Service via its ClusterIP, which then forwards traffic to one of the Pods attached to it.
We can select the targetPort on the Pod which receives the traffic forwarded by the Service.
Our Service will receive requests from the client on port: 80 and then forward these requests to one of the attached Pods on targetPort: 5000, which in turn should match the containerPort property of the Pod's container specification.
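A minimal sketch of a Pod matching this traffic path; the Pod name and image are assumptions, but the containerPort lines up with the Service's targetPort: 5000:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod        # illustrative name
  labels:
    app: frontend           # matched by the frontend-svc Selector
spec:
  containers:
  - name: app
    image: python:3.12-slim # illustrative image
    ports:
    - containerPort: 5000   # should match the Service's targetPort
```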
Endpoints are created and managed automatically by the Service, not by the Kubernetes cluster administrator.
Each cluster node runs a daemon called kube-proxy, which watches the API server on the master node for the addition, update, and removal of Services and endpoints. kube-proxy is responsible for implementing the Service configuration on behalf of an administrator or developer, in order to enable traffic routing to an exposed application running in Pods. For each Service, on each node, it configures iptables rules to capture the traffic destined for the Service's ClusterIP.
As Services are the primary mode of communication between containerized applications managed by Kubernetes, it is helpful to be able to discover them at runtime.
Environment Variables can be used to discover Services. As soon as a Pod starts on any worker node, the kubelet daemon running on that node adds a set of environment variables to the Pod for all active Services.
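For example, for the frontend-svc Service defined above, the kubelet would inject variables of roughly this shape into each newly started Pod (the IP address shown is hypothetical):

```
FRONTEND_SVC_SERVICE_HOST=172.17.0.4
FRONTEND_SVC_SERVICE_PORT=80
```

Note that this only works for Services that already exist when the Pod starts; a Pod started before the Service sees no such variables.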
Another way is to use DNS. Kubernetes has a DNS add-on which creates a DNS record for each Service in the format my-svc.my-namespace.svc.cluster.local. Services within the same Namespace can be found by other Services just by their names.
If we were to add a Service redis-master to the my-namespace Namespace, all Pods in the same my-namespace could look up the Service just by its name, redis-master. Pods from other Namespaces look up the same Service by adding the respective Namespace as a suffix, such as redis-master.my-namespace, or by providing the FQDN of the Service, redis-master.my-namespace.svc.cluster.local.
DNS is the most common and highly recommended solution.
By defining the ServiceType property upon creation of the Service, we can decide whether the Service should:
- be only accessible within the cluster
- be accessible from within the cluster and the external world
- map to an entity which resides either inside or outside the cluster.
ClusterIP and NodePort
A Service receives a Virtual IP address, known as ClusterIP, which is the default ServiceType. If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range (default: 30000-32767), and each node exposes that same port for the Service. The NodePort type is useful when we want to make our Services accessible from the external world. When the end-user connects to any worker node on the specified high-port, the node proxies the request internally to the ClusterIP of the Service, which then forwards the request to the applications running inside the cluster. The Service itself load balances the request, forwarding it to only one of the Pods running the desired application. Administrators can configure a reverse proxy (ingress) to manage access to multiple application Services from the external world.
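A sketch of the earlier frontend-svc manifest as a NodePort Service; the explicit nodePort value is an assumption picked from the default range and could also be left out entirely:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
    nodePort: 32233   # optional; if omitted, a free port from 30000-32767 is allocated
```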
On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for the service.
With the LoadBalancer ServiceType, a NodePort and a ClusterIP are automatically created, and the external load balancer will route to them. As with the NodePort type, the Service is exposed at a static port on each worker node; in addition, it is exposed externally using the underlying cloud provider's load balancer feature.
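The manifest change is minimal; a sketch for the same frontend-svc, assuming a cloud provider that supports external load balancers:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  type: LoadBalancer   # the cloud provider provisions an external load balancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
```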
The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of load balancers and has the respective support in Kubernetes. Some cloud providers allow you to specify the loadBalancerIP. Azure's AKS, IBM Cloud Kubernetes Service, Google's GKE, and Amazon's EKS support this feature, to name a few.
A Service can be mapped to an ExternalIP address if that address can route to one or more of the worker nodes. ExternalIPs are not managed by Kubernetes; the cluster administrator has to configure the routing which maps the ExternalIP address to one of the nodes.
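A sketch of a Service with an ExternalIP; the address below is hypothetical and must already be routed to a worker node by the administrator:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
  externalIPs:
  - 198.51.100.32   # hypothetical address, routed to a node outside of Kubernetes
```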
ExternalName is a special ServiceType that has no Selectors and does not define any endpoints. When accessed within the cluster, it returns a CNAME record of an externally configured Service; in other words, it maps a Service to a DNS name. Given the following object manifest, looking up the host my-service.prod.svc.cluster.local causes the cluster DNS Service to return a CNAME record with the value my.database.example.com.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
```
This was some heavy lifting, as networking is not easy to understand. The training provided some good visuals, which I did not include since I don't want to steal everything. My guess is that in most cases it is enough to use the LoadBalancer type, which many cloud providers have configured. The instructions for each of them are well written, and I will give it a try soon.
As for an external Load Balancer, these instructions should be a good start.
It is interesting how Kubernetes manages the communication between Pods the "old-fashioned" way. That's it for today, thank you for reading, friends.