Service
Expose an application running in your cluster behind a single outward-facing endpoint, even when the workload is split across multiple backends.
The Service API lets you provide a stable (long lived) IP address or hostname for a service implemented by one or more backend pods, where the individual pods making up the service can change over time.
Kubernetes automatically manages EndpointSlice objects to provide information about the pods currently backing a Service. A Service also acts like an internal load balancer that automatically chooses which pod should receive traffic.
Service type
There are multiple Service types in Kubernetes that let us specify what kind of Service we want. The available type
values and their behaviors are:
ClusterIP
: Exposes the Service on a cluster-internal IP; the Service is only reachable from within the cluster. This is the default used when we don't explicitly specify a type for a Service. We can still expose the Service to the public internet using an Ingress or a Gateway.
NodePort
: Exposes the Service on each Node's IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if we had requested a Service of type: ClusterIP.
LoadBalancer
: Exposes the Service externally using an external load balancer from a cloud provider.
ExternalName
: Maps the Service to the contents of the externalName field (for example, to the hostname api.foo.bar.example). The mapping configures our cluster's DNS server to return a CNAME record with that external hostname value. No proxying of any kind is set up.
type: ClusterIP
Create a new file service.yaml
and put our service definition there.
apiVersion: v1
kind: Service
metadata:
name: simple-go
spec:
selector:
app: simple-go
ports:
- protocol: TCP
port: 8080
targetPort: 8080
This definition will create a Service called simple-go
which will expose our group of pods as a single endpoint on port 8080
. We choose which group of pods this Service points to using the selector
that we define.
Remember that in our deployment definition we gave our spec template the label app: simple-go
? That means all of the pods created by that deployment will have that label.
The controller for that Service continuously scans for Pods that match its selector, and then makes any necessary updates to the set of EndpointSlices for the Service.
In this service definition we set the spec selector to app: simple-go
so our service will represent all of our simple-go
app pods.
Let's apply it using the command kubectl apply -f service.yaml
and check it using kubectl get service
.
➜ kubectl apply -f service.yaml
service/simple-go created
➜ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21d
simple-go ClusterIP 10.106.119.159 <none> 8080/TCP 7s
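Now that the Service exists, we can look at the result of that controller scan directly. Kubernetes labels each managed EndpointSlice with kubernetes.io/service-name, so (assuming the Service name simple-go from this example) we can list the slices backing it:

```shell
# List the EndpointSlices that Kubernetes manages for the simple-go Service.
# The kubernetes.io/service-name label links each slice back to its Service.
kubectl get endpointslices -l kubernetes.io/service-name=simple-go

# Show the Pod IP addresses collected in those slices.
kubectl get endpointslices -l kubernetes.io/service-name=simple-go \
  -o jsonpath='{.items[*].endpoints[*].addresses[*]}'
```

If you scale the deployment up or down, re-running these commands shows the endpoint set changing to match the current Pods.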
Accessing Service
There are multiple ways to access our Service; the easiest, which should work on any OS, is a minikube service tunnel. To run a service tunnel we can use the command minikube service <service-name> --url
.
➜ minikube service simple-go --url
😿 service default/simple-go has no node port
❗ Services [default/simple-go] have type "ClusterIP" not meant to be exposed, however for local development minikube allows you to access this !
http://127.0.0.1:64290
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
We got a warning that our Service doesn't have a NodePort, but that's fine for local development. The command output also includes a URL for accessing our Service; try it with curl and it should return a response from our app.
➜ curl http://127.0.0.1:64290
{"message":"Everything is fine!"}
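Another option that doesn't depend on minikube at all is kubectl port-forward, which tunnels a local port to the Service. The Service and port here match the definition above:

```shell
# Forward local port 8080 to port 8080 of the simple-go Service.
# This runs in the foreground; stop it with Ctrl+C.
kubectl port-forward service/simple-go 8080:8080

# In another terminal the app is then reachable on localhost:
# curl http://localhost:8080
```

This works against any cluster you have kubectl access to, not just minikube, which makes it handy for quick debugging.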
type: NodePort
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by the --service-node-port-range
flag (default: 30000-32767
). Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort
field.
Using a NodePort gives you the freedom to set up your own load balancing solution, to configure environments that are not fully supported by Kubernetes, or even to expose one or more nodes' IP addresses directly.
In production, if we want to expose our Service directly we need to explicitly change the Service type to NodePort
. We can do this by adding type: NodePort
to the spec
section of our service definition.
Changing the Service type to NodePort
also fixes the warning that we previously got when running the minikube service tunnel.
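As a sketch, our service definition with the NodePort type could look like the following. The explicit nodePort value here is a hypothetical choice; if the line is omitted, Kubernetes picks a free port from the range for us.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: simple-go
spec:
  type: NodePort        # expose the Service on every node's IP
  selector:
    app: simple-go
  ports:
    - protocol: TCP
      port: 8080        # the Service's cluster-internal port
      targetPort: 8080  # the container port on the pods
      nodePort: 30080   # optional; must fall in the node port range
```

Pinning nodePort makes the external port predictable, at the cost of failing if that port is already taken by another Service.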
type: LoadBalancer
A LoadBalancer Service in Kubernetes exposes our service externally using a cloud provider’s load balancer. It automatically provisions a public IP address and distributes incoming traffic across the backend pods.
How it works in general (the actual implementation varies depending on the cloud provider):
- A LoadBalancer service gets an external IP.
- It forwards traffic to pods inside the cluster.
- The cloud provider (AWS, GCP, Azure) manages the external load balancer.
We can try this in minikube using a tunnel. First, let's create a new file service-loadbalancer.yaml
and put the service definition below in it.
apiVersion: v1
kind: Service
metadata:
name: simple-go-loadbalancer
spec:
type: LoadBalancer
selector:
app: simple-go
ports:
- protocol: TCP
port: 8080
targetPort: 8080
Now let's apply it and check using the kubectl get svc
command.
➜ kubectl apply -f service-loadbalancer.yaml
service/simple-go-loadbalancer created
➜ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
postgres ClusterIP 10.100.111.204 <none> 5432/TCP 5h33m
simple-go ClusterIP 10.96.229.141 <none> 8080/TCP 58m
simple-go-loadbalancer LoadBalancer 10.102.155.101 <pending> 8080:31512/TCP 2s
As you can see above, the EXTERNAL-IP
of the load balancer Service we just created is still <pending>
. Let's open a new terminal, run minikube tunnel
, and check the Service again.
➜ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
postgres ClusterIP 10.100.111.204 <none> 5432/TCP 5h34m
simple-go ClusterIP 10.96.229.141 <none> 8080/TCP 59m
simple-go-loadbalancer LoadBalancer 10.102.155.101 127.0.0.1 8080:31512/TCP 41s
After running minikube tunnel we can see that the load balancer Service we just created got assigned an external IP, and we can access it locally using curl.
➜ curl http://localhost:8080
{"message":"Everything is fine!"}
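A LoadBalancer Service also allocates a node port under the hood, visible as 8080:31512/TCP in the output above (the allocated port will differ on your cluster). As a sketch, that gives a second way in through the node's IP, though with the Docker driver on macOS this address may not be directly reachable and the tunnel remains the reliable route:

```shell
# 31512 is the node port from the kubectl get svc output above;
# substitute whatever port your cluster allocated.
curl http://$(minikube ip):31512
```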
type: ExternalName
An ExternalName Service is used to redirect traffic to an external DNS name. Instead of exposing Kubernetes pods, it acts like a DNS alias.
- Maps a service name to an external DNS name.
- Used for legacy services, databases, or APIs hosted outside the cluster.
- Does not create a ClusterIP, just a DNS alias (CNAME
).
Let's try this. Create a new file called service-external.yaml
and put the service definition below in it. This will route any request or traffic for my-external-service
to the external example.com
.
apiVersion: v1
kind: Service
metadata:
name: my-external-service
spec:
type: ExternalName
externalName: example.com
Now let's apply it and execute nslookup
from inside any pod that you have.
➜ kubectl apply -f service-external.yaml
service/my-external-service created
➜ kubectl exec simple-go-5dc4557ffc-c24fp -- nslookup my-external-service
Server: 10.96.0.10
Address: 10.96.0.10:53
** server can't find my-external-service.cluster.local: NXDOMAIN
** server can't find my-external-service.svc.cluster.local: NXDOMAIN
** server can't find my-external-service.svc.cluster.local: NXDOMAIN
** server can't find my-external-service.cluster.local: NXDOMAIN
my-external-service.default.svc.cluster.local canonical name = example.com
Name: example.com
Address: 2600:1406:bc00:53::b81e:94ce
Name: example.com
Address: 2600:1408:ec00:36::1736:7f24
Name: example.com
Address: 2600:1406:3a00:21::173e:2e66
Name: example.com
Address: 2600:1406:3a00:21::173e:2e65
Name: example.com
Address: 2600:1408:ec00:36::1736:7f31
Name: example.com
Address: 2600:1406:bc00:53::b81e:94c8
my-external-service.default.svc.cluster.local canonical name = example.com
Name: example.com
Address: 23.192.228.80
Name: example.com
Address: 23.215.0.136
Name: example.com
Address: 23.192.228.84
Name: example.com
Address: 96.7.128.175
Name: example.com
Address: 23.215.0.138
Name: example.com
Address: 96.7.128.198
command terminated with exit code 1
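One caveat worth knowing: because the alias is a plain CNAME, protocols that check the hostname, such as HTTP virtual hosts or TLS certificates, still see my-external-service rather than example.com. As a hypothetical sketch, an HTTP client inside the cluster would have to present the real hostname itself:

```shell
# Hypothetical: call the aliased name but present the real hostname,
# since the CNAME alone does not rewrite the HTTP Host header.
curl --header "Host: example.com" http://my-external-service
```

For HTTPS backends this usually isn't enough, since certificate validation fails against the alias; in that case clients should use the external hostname directly.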