Ingress: CRD (Custom Resource Definition)
An Ingress CRD is a Custom Resource Definition created to extend Kubernetes and provide advanced ingress capabilities beyond the standard Ingress resource.
While the default Kubernetes Ingress object handles basic routing (like mapping a URL path to a service), some use cases require more flexibility. For example:
- Advanced traffic management (e.g., canary releases, A/B testing).
- Complex routing (e.g., routing based on headers or cookies).
- Security policies (e.g., JWT authentication, IP whitelisting).
To achieve this, many Ingress controllers (e.g., NGINX, Traefik, Istio) define their own CRDs to enable advanced configurations.
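Under the hood, a CRD is just another Kubernetes object that teaches the API server about a new resource type. A minimal, purely hypothetical example (the widgets.example.com group and its fields are made up for illustration; real controller CRDs are much larger) looks roughly like this:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # must be <plural>.<group>; hypothetical names for illustration only
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer

Once a CRD like this is registered, instances of the new kind can be created and managed with kubectl just like built-in resources, which is exactly what we will do with NGINX's VirtualServer below.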
NGINX
The NGINX Ingress Controller for Kubernetes supports two advanced custom resources, VirtualServer and VirtualServerRoute, which allow complex routing, traffic splitting, custom load balancing, and fine-grained configuration. They are more flexible than the standard Ingress resource and are useful for advanced scenarios.
VirtualServer and VirtualServerRoute resources are load balancing configurations recommended as an alternative to the Ingress resource: a VirtualServer defines the host and its top-level routes, while a VirtualServerRoute lets a route be delegated to a separate resource, possibly owned by another team or living in another namespace.
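As a rough sketch of how the two resources fit together (the names and host below are hypothetical and not part of this tutorial's setup), a VirtualServer can delegate a path to a VirtualServerRoute like this:

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  routes:
  - path: /coffee
    # delegate this path to the VirtualServerRoute below (namespace/name)
    route: coffee-team/coffee
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServerRoute
metadata:
  name: coffee
  namespace: coffee-team
spec:
  host: cafe.example.com
  upstreams:
  - name: coffee
    service: coffee-svc
    port: 80
  subroutes:
  - path: /coffee
    action:
      pass: coffee

The rest of this post only uses VirtualServer, which is all we need for traffic splitting.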
Installing the NGINX Ingress Controller and its CRDs involves quite a few steps, so make sure you read and follow the installation guide in the official documentation.
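Once the controller and its CRDs are installed, a quick sanity check is to list the registered custom resources (these are standard kubectl commands; the exact output depends on the controller version you installed):

➜ kubectl get crds | grep nginx.org
➜ kubectl api-resources --api-group=k8s.nginx.org

The list should include entries such as virtualservers.k8s.nginx.org and virtualserverroutes.k8s.nginx.org.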
Traffic Splitting
One scenario where this comes in handy is traffic splitting, also known as a canary deployment. You can also use it for A/B testing.
Adding traffic splitting to an NGINX VirtualServer is relatively easy: define two or more upstreams, then replace the route's single action with a splits list, where each entry has a weight and an action that passes traffic to one of the upstreams.
Create Deployment
Let's create two deployments using the nginxdemos/nginx-hello:plain-text image. It returns request and server info, so we can tell the responses apart.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app-v1
  template:
    metadata:
      labels:
        app: my-app-v1
    spec:
      containers:
      - name: my-app-v1
        image: nginxdemos/nginx-hello:plain-text
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app-v2
  template:
    metadata:
      labels:
        app: my-app-v2
    spec:
      containers:
      - name: my-app-v2
        image: nginxdemos/nginx-hello:plain-text
        ports:
        - containerPort: 8080
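Save both Deployments to a file and apply them, then confirm the replicas are ready (the file name here is simply whatever you saved the manifests as):

➜ kubectl apply -f deployments.yaml
➜ kubectl get deployments my-app-v1 my-app-v2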
Create Service
Then let's create a service for each of our apps using the kubectl expose command and check that the services were created properly.
➜ kubectl expose deployment my-app-v1 --port=80 --target-port=8080
service/my-app-v1 exposed
➜ kubectl expose deployment my-app-v2 --port=80 --target-port=8080
service/my-app-v2 exposed
➜ kubectl get services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   25d
my-app-v1    ClusterIP   10.101.208.134   <none>        80/TCP    9s
my-app-v2    ClusterIP   10.97.235.79     <none>        80/TCP    4s
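If you prefer declarative manifests over kubectl expose, the equivalent Service for my-app-v1 would look roughly like this (my-app-v2 is identical apart from the name and selector):

apiVersion: v1
kind: Service
metadata:
  name: my-app-v1
spec:
  selector:
    app: my-app-v1
  ports:
  - port: 80
    targetPort: 8080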
Create VirtualServer
Now let's create the VirtualServer definition. Here is an example of splitting traffic between two upstreams with weights of 80 and 20, which means that 80% of the traffic will go to my-app-v1 and the rest will go to my-app-v2.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: my-app
spec:
  host: my-app.local
  upstreams:
  - name: my-app-v1
    service: my-app-v1
    port: 80
  - name: my-app-v2
    service: my-app-v2
    port: 80
  routes:
  - path: "/"
    splits:
    - weight: 80
      action:
        pass: my-app-v1
    - weight: 20
      action:
        pass: my-app-v2
Apply the YAML file, and then you can use kubectl describe to get the details and status. Also, because we are using minikube, don't forget to start minikube tunnel after applying the VirtualServer.
➜ kubectl apply -f ingress-nginx-crd.yaml
virtualserver.k8s.nginx.org/my-app created
➜ kubectl describe virtualserver my-app
Name:     my-app
...
Status:
  External Endpoints:
    Ip:     127.0.0.1
    Ports:  [80,443]
  Message:  Configuration for default/my-app was added or updated
  Reason:   AddedOrUpdated
  State:    Valid
Events:
  Type    Reason          Age   From                      Message
  ----    ------          ----  ----                      -------
  Normal  AddedOrUpdated  10m   nginx-ingress-controller  Configuration for default/my-app was added or updated
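Note that the external endpoint (127.0.0.1 here) is only reachable while minikube tunnel is running, so keep it open in a separate terminal (it may prompt for your password in order to create the routes):

➜ minikube tunnel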
Test and Validate
Then we can make multiple curl requests to the host handled by the VirtualServer we just created, like this.
➜ curl --resolve "my-app.local:80:127.0.0.1" \
    http://my-app.local \
    http://my-app.local \
    http://my-app.local \
    http://my-app.local \
    http://my-app.local
Server address: 10.244.0.96:8080
Server name: my-app-v1-6dd7f778db-js6kr
Date: 06/Mar/2025:06:18:32 +0000
URI: /
Request ID: 5e251a15a14de4174b461bbe54369b15
Server address: 10.244.0.95:8080
Server name: my-app-v1-6dd7f778db-5766t
Date: 06/Mar/2025:06:18:32 +0000
URI: /
Request ID: 37b1233a3d8d90900c419d799a523675
Server address: 10.244.0.95:8080
Server name: my-app-v1-6dd7f778db-5766t
Date: 06/Mar/2025:06:18:32 +0000
URI: /
Request ID: 206b75cc134cad35c9b2efde77d8cb8c
Server address: 10.244.0.95:8080
Server name: my-app-v1-6dd7f778db-5766t
Date: 06/Mar/2025:06:18:32 +0000
URI: /
Request ID: 2de3128b19fe664e0eef02ffbbef6044
Server address: 10.244.0.94:8080
Server name: my-app-v2-6fd4d596c4-8pdb6
Date: 06/Mar/2025:06:18:32 +0000
URI: /
Request ID: af3f2fb5e2848d2b79bbf367845b544b
As you can see, out of 5 consecutive requests, 4 went to my-app-v1 and only 1 went to my-app-v2, which matches the 80/20 weights. This is very useful if you want to do a canary release and make sure the new version works as expected before moving all traffic to it.
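To finish a canary rollout, you would gradually shift the weights and re-apply the VirtualServer. For example, an intermediate step could replace the routes section of the VirtualServer above with a 50/50 split like the sketch below; once you fully trust the new version, point the route's action directly at my-app-v2 and remove the old upstream.

routes:
- path: "/"
  splits:
  - weight: 50
    action:
      pass: my-app-v1
  - weight: 50
    action:
      pass: my-app-v2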
References
- https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
- https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/installation-with-manifests/
- https://docs.nginx.com/nginx-ingress-controller/configuration/virtualserver-and-virtualserverroute-resources/