Gateway API
The Ingress API has had a difficult history and remained in v1beta1 for many years. Despite a thriving ecosystem of controller implementations, their use of the Ingress API remained largely incompatible. On top of that, the same controller vendors started shipping their own sets of custom resources designed to address the limitations of the Ingress API. At some point, the Kubernetes SIG Network group even discussed scrapping the Ingress API altogether and letting each vendor bring its own set of CRDs (see “Ingress Discussion Notes” in the Network SIG Meeting Minutes). Despite all that, the Ingress API survived, addressed some of its more pressing issues and was finally promoted to v1 in Kubernetes v1.19. However, some of the problems could not be solved by an incremental redesign, and this is why the Gateway API project (formerly called Service APIs) was founded.
Gateway API decomposes the single Ingress API into a set of independent resources that can be combined via label selectors and references to build the desired proxy state. This decomposition follows a pattern very commonly found in proxy configuration – listener, route and backend – and can be viewed as a hierarchy of objects:
| Hierarchy | Description |
|---|---|
| GatewayClass | Identifies a single Gateway API controller installed in a cluster. |
| Gateway | Associates listeners with Routes; belongs to one of the GatewayClasses. |
| Route | Defines rules for traffic routing by linking Gateways with Services. |
| Service | Represents a set of Endpoints to be used as backends. |
This is how the above hierarchy can be combined to expose an existing web Service to the outside world as http://gateway.tkng.io (see the Lab walkthrough for more details):
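Assuming Istio as the implementation, a minimal set of manifests might look like the sketch below. The `controllerName` value and the `v1beta1` API version are assumptions based on a recent Gateway API release, and the resource names (`istio`, `gateway`, `web`) are illustrative:

```yaml
# GatewayClass: binds the "istio" class to Istio's Gateway API controller
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: istio
spec:
  controllerName: istio.io/gateway-controller
---
# Gateway: a single HTTP listener for gateway.tkng.io
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: gateway.tkng.io
---
# HTTPRoute: attaches to the Gateway and forwards matching traffic
# to the backend "web" Service on port 80
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: web
  namespace: default
spec:
  parentRefs:
  - name: gateway
  hostnames:
  - gateway.tkng.io
  rules:
  - backendRefs:
    - name: web
      port: 80
```

Note how each layer references its parent by name: the Gateway selects its class via `gatewayClassName`, and the HTTPRoute attaches to the Gateway via `parentRefs`.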
For all the new features and operational benefits Gateway API brings, its ultimate goal is exactly the same as that of the Ingress API – to configure a proxy for external access to applications running in a cluster.
Lab
For this lab exercise, we’ll use one of the Gateway API implementations from Istio.
Preparation
Assuming that the lab environment is already set up, Istio can be installed with the following commands:
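A sketch of the installation, assuming `istioctl` is already on the PATH; the Gateway API CRD release version is illustrative:

```shell
# Install the Gateway API CRDs (release version is an example)
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.5.1/standard-install.yaml

# Install Istio with the default profile, which includes istio-ingressgateway
istioctl install -y
```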
Wait for all Istio Pods to fully initialise:
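One way to do this is with `kubectl wait`, assuming Istio was installed into the `istio-system` namespace:

```shell
# Block until every Pod in istio-system reports the Ready condition
kubectl wait --for=condition=Ready pods --all -n istio-system --timeout=300s
```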
Set up a test Deployment to be used in the walkthrough:
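For example, a hypothetical `web` Deployment (any HTTP server image will do; `nginx` is used here) together with a ClusterIP Service to act as the Route backend:

```shell
# Create the Deployment and expose it as a Service on port 80
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80
```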
Make sure that the Gateway has been assigned a LoadBalancer IP:
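Assuming the Gateway resource is called `gateway` and lives in the `default` namespace, the assigned address shows up in its status (the fully-qualified resource name avoids any clash with Istio's own `Gateway` CRD):

```shell
# Print the address assigned to the Gateway by the controller
kubectl get gateways.gateway.networking.k8s.io gateway \
  -o jsonpath='{.status.addresses[0].value}'
```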
Now we can verify the functionality:
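A quick smoke test might look like this; curl's `--resolve` flag maps the hostname to the assigned IP without touching DNS:

```shell
# Fetch the Gateway's address from its status field
GWIP=$(kubectl get gateways.gateway.networking.k8s.io gateway \
  -o jsonpath='{.status.addresses[0].value}')

# Map gateway.tkng.io to that IP for this one request
curl -s --resolve gateway.tkng.io:80:$GWIP http://gateway.tkng.io/
```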
Walkthrough
One of the easiest ways to verify the data plane configuration is to use the istioctl tool. The first thing we can do is look at the current state of all data plane proxies. In our case we’re not using Istio’s service mesh functionality, so the only proxy will be the istio-ingressgateway:
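For example (the exact Pod names and sync states will differ per cluster):

```shell
# Show every Envoy proxy known to istiod and its xDS sync status
istioctl proxy-status
```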
Let’s take a closer look at the proxy-config, starting with the current set of listeners:
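A sketch of the command, assuming the gateway Pod carries the `app=istio-ingressgateway` label:

```shell
# Pick the ingress gateway Pod and dump its listener configuration
POD=$(kubectl get pods -n istio-system -l app=istio-ingressgateway \
  -o jsonpath='{.items[0].metadata.name}')
istioctl proxy-config listeners $POD -n istio-system
```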
The one that we’re interested in is called http.8080 and here is how we can check all of the routing currently configured for it:
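The `--name` flag narrows the output to a single route configuration; JSON output shows the full match and forwarding rules:

```shell
POD=$(kubectl get pods -n istio-system -l app=istio-ingressgateway \
  -o jsonpath='{.items[0].metadata.name}')

# Show only the route configuration attached to the http.8080 listener
istioctl proxy-config routes $POD -n istio-system --name http.8080 -o json
```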
From the above output we can see that the proxy is set up to route all HTTP requests with Host: gateway.tkng.io header to a cluster called outbound|80||web.default.svc.cluster.local. Let’s check this cluster’s Endpoints:
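The Endpoints of that cluster can be inspected with the `endpoints` subcommand, filtered by the cluster name:

```shell
POD=$(kubectl get pods -n istio-system -l app=istio-ingressgateway \
  -o jsonpath='{.items[0].metadata.name}')

# List the endpoints Envoy currently holds for the web cluster
istioctl proxy-config endpoints $POD -n istio-system \
  --cluster "outbound|80||web.default.svc.cluster.local"
```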
The above Endpoint address corresponds to the only running Pod in the web deployment:
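This can be cross-checked as follows; the `app=web` label is what `kubectl create deployment` assigns by default:

```shell
# -o wide includes the Pod IP, which should match the Envoy endpoint address
kubectl get pods -l app=web -o wide
```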