Kubernetes Service Types: ClusterIP, NodePort, LoadBalancer and ExternalName
Introduction
A Kubernetes Service is an abstraction over a Pod or a collection of Pods. Pods are transient and short-lived: they can be created, deleted and replaced at any time. The Service abstraction gives you a stable way to reach those Pods, with a unique Service name and its own IP address that do not change as Pods come and go.
A Service thus makes an application accessible, whether it runs as a single Pod or as multiple Pods in your cluster. Simply put, all traffic aimed at a Service eventually reaches the Pods it represents.
In this article, we will go through the Kubernetes service types, including ClusterIP, NodePort, LoadBalancer, and ExternalName, and implement each of them in practice.
Kubernetes service types
A Kubernetes Service type lets you specify what kind of Service you want. These types let you manage and access your applications in a Kubernetes cluster in different ways, depending on your requirements. Kubernetes has the following four types of Service, each with its own use cases and features.
ClusterIP: Expose a service that can only be reached from inside the cluster.
NodePort: Expose a service through a fixed port on every node's IP address.
LoadBalancer: Expose a service through the cloud provider's load balancer.
ExternalName: Maps a Service to the contents of its externalName field by returning a CNAME record for that name.
Kubernetes ClusterIP Service
In Kubernetes, ClusterIP is the default Service type and is internal to the cluster. A ClusterIP service can only be accessed from within the Kubernetes cluster; you cannot reach it directly from outside the cluster or from other networks.
A ClusterIP service exposes the application on a load-balanced, cluster-internal IP address. Traffic sent to this address is forwarded to the one or more Pods that match the Service's label selector.
Create a ClusterIP service
Before we create a ClusterIP service, let's dry-run a Deployment with 3 replicas by executing the following imperative command and saving the output to a file.
$ kubectl create deployment nginx-deploy --image=nginx --replicas=3 --dry-run=client -o yaml > nginx_deploy.yaml
$ cat nginx_deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deploy   ← Pod labels
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
Observe the Pod label (app: nginx-deploy) in the template section. This label will serve as the selection criteria when we set up the ClusterIP service.
$ kubectl create -f nginx_deploy.yaml
$ kubectl get po --selector=app=nginx-deploy
NAME READY STATUS RESTARTS AGE
nginx-deploy-d845cc945-nwbxx 1/1 Running 0 9h
nginx-deploy-d845cc945-p2dgd 1/1 Running 0 9h
nginx-deploy-d845cc945-wjm78 1/1 Running 0 9h
Make a dry run of creating a ClusterIP service for the above nginx deployment.
$ kubectl create service clusterip nginx-svc --tcp=8080:80 --dry-run=client -o yaml > nginx-svc.yaml
Edit the service definition and update the label selector.
$ vi nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx-svc
  name: nginx-svc
spec:
  ports:
  - name: 8080-80
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deploy   ← Update the label selector.
  type: ClusterIP
status:
  loadBalancer: {}
Create the desired ClusterIP service for the Kubernetes deployment.
$ kubectl create -f nginx-svc.yaml
Describe the ClusterIP service to get information about the allocated IP address. All traffic destined for this IP address will be load balanced across the three endpoints.
$ kubectl describe svc nginx-svc
Name: nginx-svc
Namespace: default
Labels: app=nginx-svc
Annotations: <none>
Selector: app=nginx-deploy
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.111.106.28
IPs: 10.111.106.28
Port: 8080-80 8080/TCP
TargetPort: 80/TCP
Endpoints: 192.168.237.7:80,192.168.237.8:80,192.168.237.9:80
Session Affinity: None
Events: <none>
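As a quick sanity check (not part of the original walkthrough), you can also list the Endpoints object that Kubernetes maintains for this Service; the addresses it shows should match the Pod IPs in the describe output above.
$ kubectl get endpoints nginx-svc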
Access the ClusterIP service on port 8080.
$ kubectl run tmp --rm -it --restart=Never --image=curlimages/curl -- curl nginx-svc.default.svc:8080
The Kubernetes ClusterIP service facilitates inter-service communication within the cluster. For instance, you can use a ClusterIP service to connect the front-end and back-end components of your application.
Note: An Ingress or a Gateway can be used to reach a ClusterIP service from outside the cluster. However, this does not mean that an Ingress or a Gateway is a Service type; conceptually they are different from the Kubernetes Service types, even though they serve a similar purpose.
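To illustrate the note above, here is a minimal sketch of an Ingress that forwards HTTP traffic to the nginx-svc ClusterIP service. It assumes an Ingress controller (for example, ingress-nginx) is already installed in the cluster; the resource name and host name are hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress            # hypothetical name
spec:
  rules:
  - host: nginx.example.com      # hypothetical host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc      # the ClusterIP service created above
            port:
              number: 8080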
Kubernetes NodePort Service
The Kubernetes NodePort service provides the easiest way to forward external traffic directly to a service. It exposes the Service on each node's (the VM's) IP address at a static port, the NodePort. When you create a NodePort service, Kubernetes automatically creates a ClusterIP service for it; internally, this ClusterIP service routes the traffic arriving at the NodePort to the correct set of Pods.
The port range for a Kubernetes NodePort service is 30000 to 32767. Pick a port number from this range when creating a NodePort service; if you don't specify a nodePort, Kubernetes will allocate one from this range for you.
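As a small illustration (the service name here is hypothetical), omitting --node-port lets Kubernetes pick a free port from this range automatically:
$ kubectl create service nodeport demo-nodeport --tcp=80:80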
Although NodePort makes external exposure simpler, it has limitations and security concerns:
- Anyone who has access to a node can access the service through the NodePort.
- A given NodePort cannot be attached to more than one service.
- The port range is limited to 30000–32767.
- There is no built-in load balancing across nodes; clients must target a specific node's IP themselves.
Create a NodePort service
In the following example, the NodePort service will use the same Kubernetes Deployment that we created earlier for the ClusterIP service. First, make a dry run of the NodePort service using kubectl and save the output to a file.
$ kubectl create service nodeport nginx-nodeport-svc --tcp=80:80 --node-port=32750 --dry-run=client -o yaml > nodeport.yaml
The <port>:<targetPort> pair in the above command specifies the port the Service listens on and the port on the Pods to which traffic is routed, respectively. Update the NodePort service definition to include the correct Pod label.
$ vi nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx-nodeport-svc
  name: nginx-nodeport-svc
spec:
  ports:
  - name: 80-80
    nodePort: 32750
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deploy   ← Update the label selector.
  type: NodePort
status:
  loadBalancer: {}
Create the nodeport service.
$ kubectl create -f nodeport.yaml
service/nginx-nodeport-svc created
Describe the NodePort service to view its details.
$ kubectl describe svc nginx-nodeport-svc
Name: nginx-nodeport-svc
Namespace: default
Labels: app=nginx-nodeport-svc
Annotations: <none>
Selector: app=nginx-deploy
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.107.21.58
IPs: 10.107.21.58
Port: 80-80 80/TCP
TargetPort: 80/TCP
NodePort: 80-80 32750/TCP
Endpoints: 192.168.237.7:80,192.168.237.8:80,192.168.237.9:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Access the NodePort service.
$ curl NODE_IP:32750
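Replace NODE_IP with the IP address of one of your nodes. If you are unsure which address to use, the node IPs are listed in the INTERNAL-IP (or EXTERNAL-IP) column of:
$ kubectl get nodes -o wide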
Kubernetes LoadBalancer Service
A Kubernetes LoadBalancer service provides external access to a group of Pods. The environment in which the cluster runs must be able to provision a load balancer. The provisioned load balancer is allocated a single IP address and forwards traffic to the set of Pods via the Service's ClusterIP. Cloud environments such as AWS or GCP detect a Service with spec.type: LoadBalancer and provision the load balancer automatically.
Internally, Kubernetes backs a LoadBalancer service with a ClusterIP. In addition, a NodePort is allocated on each node to expose the Pods on a port from the NodePort range. If you create a LoadBalancer service without cloud support, the external IP will remain unattached until you attach an external IP or DNS name yourself. The following manifest defines a LoadBalancer service for the same nginx Deployment:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx-lb-svc
  name: nginx-lb-svc
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deploy
  type: LoadBalancer
status:
  loadBalancer: {}
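The walkthrough does not name a file for this manifest; assuming you save it as nginx-lb-svc.yaml (a hypothetical filename), create the service with:
$ kubectl create -f nginx-lb-svc.yaml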
Review the allocated ClusterIP, NodePort, and external IP address for the above LoadBalancer service. Observe that the EXTERNAL-IP column shows <pending>. If you are using the LoadBalancer service with a cloud provider, this column will display the DNS name or IP address of the provisioned load balancer.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d23h
nginx-lb-svc LoadBalancer 10.99.19.11 <pending> 80:31224/TCP 31m
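While the EXTERNAL-IP stays <pending>, the service is still reachable from inside the cluster through its ClusterIP, and from outside through the NodePort that Kubernetes allocated automatically (31224 in the output above), for example:
$ curl NODE_IP:31224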
A LoadBalancer service is the easiest way to expose an application on the web. One downside is that every Service you expose this way gets its own load balancer and external IP address, which can significantly increase your expenses, particularly on AWS, Azure, or GCP.
Kubernetes ExternalName Service
Unlike the other Service types, a Kubernetes ExternalName service does not route traffic to Pods. Instead, it lets you reach an external resource from within your Kubernetes cluster. With an ExternalName service, clients use the Service's internal DNS name (my-service.default.svc.cluster.local) as a substitute for the external DNS name.
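For reference, a more typical ExternalName service simply maps a cluster-internal DNS name to an external hostname; the names below are hypothetical and only sketch the idea:
apiVersion: v1
kind: Service
metadata:
  name: external-db              # hypothetical name
spec:
  type: ExternalName
  externalName: db.example.com   # hypothetical external hostname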
Generate a skeleton of the ExternalName service with the following imperative command and then edit the service definition to match your requirements.
$ kubectl create service externalname web-service --external-name=web-service --tcp=80:80 --dry-run=client -o yaml > es.yaml
Edit the service definition.
$ cat es.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: web-service
spec:
  externalName: web-service
  ports:
  - name: http
    port: 80
    protocol: TCP
  type: ExternalName
status:
  loadBalancer: {}
Create the ExternalName service and get its details.
$ kubectl create -f es.yaml
$ kubectl describe svc web-service
Name: web-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ExternalName
IP Families: <none>
IP:
IPs: <none>
External Name: web-service
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints:           ← No endpoint is attached
Session Affinity: None
Events: <none>
Once the ExternalName service is created, the cluster DNS returns a CNAME record when the Service's internal address (web-service.default.svc.cluster.local) is resolved.
$ kubectl run tmp --rm --restart=Never --image=curlimages/curl -it -- nslookup web-service.default.svc.cluster.local
If you don't see a command prompt, try pressing enter.
Server: 10.96.0.10
Address: 10.96.0.10:53
web-service.default.svc.cluster.local canonical name = web-service
web-service.default.svc.cluster.local canonical name = web-service
The service above has no endpoints. To make this ExternalName service functional, we therefore need to manually create an Endpoints object (pointing at, for example, an external web service or database) that hooks the Service up to the external resource.
$ cat ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: web-service      ← This name must match the ExternalName service name.
  namespace: default
subsets:
- addresses:
  - ip: 192.168.183.63   ← IP address of the external service.
  ports:                 ← The port definition must match the service definition, including name, port and protocol.
  - name: http
    port: 80
    protocol: TCP
Create the endpoint.
$ kubectl create -f ep.yaml
At this point, describe the ExternalName service to verify that it has an endpoint.
$ kubectl describe svc web-service
Name: web-service
...
External Name: web-service
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 192.168.183.63:80
...
Create a temporary pod and access the external service.
$ vi tmp.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: tmp
  name: tmp
spec:
  containers:
  - image: willdockerhub/curl-wget
    command: ["/bin/bash","-c","sleep 3600"]
    name: tmp
    resources: {}
  dnsPolicy: ClusterFirst
status: {}
$ kubectl create -f tmp.yaml   # Create the temporary pod.
Exec into the temporary pod and send a request to the external service.
$ kubectl exec tmp -it -- curl -I web-service
HTTP/1.1 200 OK
Date: Wed, 24 Jul 2024 05:53:30 GMT
Server: Apache/2.4.52 (Ubuntu)
Last-Modified: Fri, 23 Feb 2024 14:27:06 GMT
ETag: "29af-6120d5ac34601"
Accept-Ranges: bytes
Content-Length: 10671
Vary: Accept-Encoding
Content-Type: text/html
You can deploy an ExternalName service (functioning as a local alias) when a Pod in one namespace needs to communicate with a Service in a different namespace, or when you want to represent an external datastore or web service inside a Kubernetes cluster.
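As a sketch of the cross-namespace use case (all names here are hypothetical), a Service in the default namespace can act as an alias for a Service living in another namespace:
apiVersion: v1
kind: Service
metadata:
  name: billing                                      # hypothetical alias in the default namespace
spec:
  type: ExternalName
  externalName: billing.payments.svc.cluster.local   # hypothetical Service in the payments namespace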
Conclusion
In this article, we have covered the different Kubernetes Service types and implemented each of them in practice. Choosing a Service type for your application becomes much easier once you understand what each type does.
A few other factors also determine which Service type to choose, such as your application's requirements and environment. For example, choose a ClusterIP service for internal microservice communication, a NodePort service for development and testing, and a LoadBalancer service for production, where scalability and resilience are important.