← Back to overview

Migrating to Envoy Gateway

On November 11, 2025, Kubernetes SIG Network together with the Security Response Committee announced the retirement of the trusty old Ingress NGINX project (not to be confused with the F5 NGINX Ingress Controller). Also, with 2026 coming up, the renewal of my CKA certification is on the horizon, and according to the updated curriculum for v1.34, the Gateway API is now part of the “Services and Networking” section of the exam.

Reason enough to take a good look at the Gateway API and at what the future of exposing HTTP(S) services in Kubernetes will look like.

Choosing a Gateway API Ingress

Over the last few years I have used a variety of Ingress controllers for Kubernetes. For production workloads, however, it usually boiled down to either (the soon-to-be-retired) Ingress NGINX or Traefik, both of which served me very well in the past. Traefik introduced (experimental) Gateway API compatibility way back in version 2.x, and over the course of the 3.x major version the feature matured and kept up with newer versions of the Gateway API.

Conceptually, the Gateway API is designed to be completely implementation-agnostic and focused on decoupling the infrastructure from the developers or end users of the cluster: the SRE or operations team manages the gateway towards the internet, while the rest of the team manages their actual resources (like which request should go to which Service).

I initially considered going with Traefik, but to be frank, I was annoyed by their “Traefik Hub” and by the fact that some middleware is only available when using that “cloud feature”. Guys. It’s an Ingress. I most certainly won’t couple it with some kind of “call-home” or “remote-management” solution. So, since it’s time for a change anyway: Traefik is out.

My focus then shifted to Envoy. Written in C++ and created by Lyft, it offers high performance and an unprecedented amount of flexibility when it comes to extensibility and observability. To give an example, you can natively enrich requests with inline Lua or WASM extensions, or even tunnel every request (and/or response) through a gRPC service to inspect and optionally modify it.

As a developer, I already had more use cases in mind than I’ll ever be able to implement and try out. So, let’s go for it. Envoy it is. Luckily, there’s a sub-project that tailors Envoy into a Gateway API-compatible Ingress solution, called Envoy Gateway.

Installing Envoy on Kubernetes

For sandbox purposes, I run my own infrastructure on bare-metal in a datacenter. No cloud provider, no third party responsible for managing my setup. It’s based on Rancher’s RKE2 Kubernetes distribution.

High Level Concept

Before we jump into installing Envoy Gateway on RKE2 Kubernetes, I quickly want to highlight the high-level differences between the “old” way of using Ingress resources to configure an Ingress controller and the “new” way of using the Gateway API, because the added flexibility is paid for with a few more moving parts and manifests that need to be taken into account.

As pointed out, I’m not using any cloud provider with Kubernetes, so I cannot use LoadBalancer resources in my setup. Therefore, I rely solely on NodePort services for all Ingress-related traffic.

Internet
|
v
+----------------+
| Layer 4 LB | (TCP/443, TCP/80)
| (cloud/metal) |
+----------------+
|
+------------------+------------------+
| | |
v v v
[Node:30080] [Node:30080] [Node:30080] ← NodePort
| | |
v v v
+-------------------------------------------+
| Ingress Controller Pod(s) | <--- Ingress Resources
| (nginx/traefik - exposed via NodePort) | - app.example.com → svc-a
+-------------------------------------------+ - api.example.com → svc-b
|
+--------------+---------------+
| |
v v
[Service A] [Service B]
(ClusterIP) (ClusterIP)
| |
v v
[Pods] [Pods]

Within my Kubernetes cluster I had a single Ingress controller (such as nginx or Traefik) that listened for Ingress resources and generated an appropriate configuration based on those manifests. All traffic would then go through a Layer 4 load balancer that forwarded the HTTP and HTTPS traffic to the respective NodePorts. From there, the Ingress controller distributed the traffic to the Kubernetes Services and therefore to the Pods behind those Services. So far, so easy.

With the Gateway API and Envoy Gateway, things are changing a bit.

When installing Envoy Gateway in your Kubernetes cluster for the first time, you will not have any NodePort or LoadBalancer service exposing your Ingress. That’s because there’s no Gateway configured yet. That’s the first big change: you now have to configure at least one Gateway resource to tell the underlying Ingress to actually listen for connections and which kinds of connections it’s responsible for. While this is an extra step, it enables you to do two things that were not possible before:

  • A single Ingress can now expose multiple HTTP(S) servers that are completely independent of each other in terms of configuration, the hosts they listen for, and the TLS certificates used to terminate the encrypted connections.
  • You can manage not only HTTP and HTTPS connections with your Ingress, but also plain TCP or UDP, natively supported as first-class citizens. Of course, gRPC is supported as well (see the short sketch after this list).
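Just as a taste of the non-HTTP side (we won’t use it in this post), here’s a minimal sketch of what a TCP route could look like. The listener name tcp, the backend my-postgres and port 5432 are made up for illustration and would require a corresponding protocol: TCP listener on the Gateway:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: postgres
spec:
  parentRefs:
    - name: gateway
      sectionName: tcp        # a listener with protocol: TCP on the Gateway
  rules:
    - backendRefs:
        - name: my-postgres   # forward raw TCP connections to this Service
          port: 5432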

We’ll only concentrate on HTTP(S) traffic, so the de-facto equivalent of an Ingress resource is now the HTTPRoute. However, due to the structure with Gateway resources, there are two key differences I immediately stumbled upon:

  • TLS is no longer configured on Ingress level (or HTTPRoute level in terms of the new Gateway API). That’s because TLS is infrastructure and the end user should not care about how the TLS connections are being terminated. Everything TLS-related is now configured in the Gateway resource, effectively telling it which hosts it’s responsible for.
  • There’s no ingressClassName field on the Ingress level (aka HTTPRoute) anymore. Instead, there’s a parentRefs field, where you explicitly reference a certain Gateway. That only makes sense due to the decoupling and adds another neat possibility: an HTTPRoute can be deployed on multiple Gateways, because parentRefs is plural and therefore an array.

Of course, there’s a lot more to it and I’ll get to it once it becomes relevant. What’s important is that I don’t need the whole “separation of concerns” aspect the Gateway API introduces, at least in my current setups. So I’m going to showcase how I configured Envoy Gateway to be as close to my old Traefik/Ingress NGINX-based setup as possible, trying to make my own life as easy as possible when it comes to adding new HTTPRoute resources.

Installation with Helm

Envoy Gateway supports installation via Helm, so naturally that’s what we’ll do.

helm install \
  envoy-gateway oci://docker.io/envoyproxy/gateway-helm \
  --version v1.6.0 \
  --namespace envoy-gateway-system \
  --create-namespace

The command is taken straight from the Envoy Gateway installation manual, and your mileage may vary depending on the version that’s current when you’re reading this post.

Also, we’ll need cert-manager in our cluster to obtain a TLS certificate from Let’s Encrypt later on.

helm install \
  cert-manager oci://quay.io/jetstack/charts/cert-manager \
  --version v1.19.1 \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true

You can of course skip this if you already have it installed or if you’re using another solution for your certificates. It’s not mandatory for Envoy Gateway to work.
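If you do go with cert-manager, you’ll also need an issuer, which the rest of this post assumes already exists. Here’s a minimal sketch of a Let’s Encrypt ClusterIssuer using the Gateway API HTTP-01 solver. The name letsencrypt-prod, the e-mail address and the referenced Gateway are examples, and the solver requires cert-manager’s Gateway API support to be enabled (a chart value or controller flag, depending on your cert-manager version):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: admin@example.com   # replace with your contact address
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-account-key   # Secret storing the ACME account key
    solvers:
      - http01:
          gatewayHTTPRoute:
            parentRefs:
              - name: gateway        # the Gateway we create further below
                namespace: default
                kind: Gateway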

Configuring Envoy

Now it’s time to configure Envoy Gateway to make it behave the same way Traefik or Ingress NGINX did before. As pointed out above, the first thing we’re going to need is a Gateway resource, because otherwise there’s no NodePort we can point our L4 load balancer at.

Before we can define a Gateway, we need to define a GatewayClass. That’s basically the equivalent of the old IngressClass concept, allowing multiple Ingress implementations to distinguish between Gateway resources and only take into account those that are meant for them.

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy-gateway-class
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: envoy-proxy-config
    namespace: envoy-gateway-system

The lower (highlighted) section of the GatewayClass introduces yet another new moving part of the Gateway API: the possibility to provide implementation-specific configuration defining how Gateways of that GatewayClass should be set up by default.

And there is indeed one configuration change we want by default. As laid out above, I can’t use LoadBalancer objects because I’m not running my infrastructure on any cloud/hyperscaler provider, but rather on bare metal. Therefore, I need a NodePort service for each Gateway. So we go ahead and add the EnvoyProxy resource already referenced in the GatewayClass, telling Envoy Gateway to create a NodePort service instead of a LoadBalancer service for each Gateway that uses this GatewayClass.

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: envoy-proxy-config
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        type: NodePort

As you can see from the apiVersion fields, while the GatewayClass is a native Gateway API resource, the EnvoyProxy is (as the name suggests) specific to the Envoy Gateway I’m using. Thanks to the parametersRef field in the GatewayClass, we have flexible possibilities to extend our installation and set sensible defaults.
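To give an idea of what else can be set this way, here’s a sketch that additionally runs two proxy Pods per Gateway instead of one. The replica count is just an example and the field names follow the EnvoyProxy API:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: envoy-proxy-config
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        type: NodePort        # same default as before
      envoyDeployment:
        replicas: 2           # example: two Envoy proxy Pods per Gateway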

Now that the GatewayClass and the EnvoyProxy configuration resource are both out of the way, we can finally create the Gateway that will tell Envoy Gateway to actually listen for HTTP and HTTPS connections using a NodePort service.

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: gateway
spec:
  gatewayClassName: envoy-gateway-class # (1)
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes: # (2)
        namespaces:
          from: All
    - name: https
      protocol: HTTPS
      port: 443
      allowedRoutes:
        namespaces:
          from: All
      tls: # (3)
        mode: Terminate
        certificateRefs:
          - kind: Secret
            name: example.com-tls
          - kind: Secret
            name: foobar.com-tls
          - kind: Secret
            name: fooexample.com-tls

Lots of things going on here again, so let’s break it down.

  1. First of all, we’re telling the Gateway to be of the GatewayClass called envoy-gateway-class which we just defined. That causes Envoy Gateway to recognize this Gateway definition as relevant for its own configuration. Furthermore, thanks to the EnvoyProxy resource we referenced in the GatewayClass, the Service that Envoy Gateway creates for this Gateway will be of type NodePort.

  2. While the listeners section should be relatively straightforward, the second highlighted section in the Gateway definition is an important deviation from the default. Without setting allowedRoutes to allow all namespaces, the Gateway will only pick up HTTPRoutes that are within the same namespace as the Gateway itself. That absolutely makes sense for separation of concerns or separation of tenants, but I neither need nor want that in my cluster, because then I’d have to configure multiple Gateways and therefore waste multiple IP addresses in my L4 load balancer to get the right traffic to the right NodePort of the right Gateway.

  3. As pointed out above, the third highlighted section in the manifest shows a huge difference to the old Ingress resources. TLS specifics are no longer configured within the Ingress (or HTTPRoute, in Gateway API terms), but rather within the Gateway itself. First, you need to tell the Gateway whether it should terminate the TLS connection or not (which we do want). And then there’s a certificateRefs (plural) field again, where you can add all certificates this listener should be using. That makes many things a whole lot easier, especially when using wildcard certificates, because until now you needed to reflect the certificate Secrets into every Namespace where you created Ingress resources using said wildcard certificate. That was my main use case for Reflector, which is now obsolete.

After applying the Gateway manifest, we should now see the NodePort service.

$ kubectl -n envoy-gateway-system get svc
NAME                             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)
envoy-default-gateway-dd0750dd   NodePort   10.43.203.17   <none>        80:32472/TCP,443:32096/TCP
[...]

Notice: If you don’t have any TLS certificate Secret to be used with the https listener, the HTTPS NodePort will be missing. Once you have at least one working certificate Secret in the list, it will appear.
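If you’re using cert-manager as installed earlier, such a Secret is produced by a Certificate resource. A minimal sketch, assuming the example letsencrypt-prod ClusterIssuer from above and a Gateway living in the default namespace:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example.com
  namespace: default              # same namespace as the Gateway
spec:
  secretName: example.com-tls     # matches the certificateRefs entry in the Gateway
  dnsNames:
    - example.com
  issuerRef:
    name: letsencrypt-prod        # the example ClusterIssuer from earlier
    kind: ClusterIssuer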

We can now let our L4 load balancer forward all HTTP traffic to 32472/tcp and all HTTPS traffic to 32096/tcp. I’m using HAProxy for that purpose and my configuration looks like the following.

frontend fe_kubernetes_http
    mode tcp
    bind 203.0.113.113:80
    use_backend be_kubernetes_http

frontend fe_kubernetes_https
    mode tcp
    bind 203.0.113.113:443
    use_backend be_kubernetes_https

backend be_kubernetes_http
    server envoy-k3s 10.42.0.8:32472 check send-proxy-v2

backend be_kubernetes_https
    server envoy-k3s 10.42.0.8:32096 check send-proxy-v2

As you can see, I’m using the send-proxy-v2 directive in my backend server definitions. If you don’t know what the PROXY protocol is: I basically use it to make sure that my downstream applications see the correct visitor IP address instead of the load balancer’s IP address, since the TCP connections originate from the load balancer and not from the visitor. The PROXY protocol adds a small payload in front of the actual HTTP request, telling Envoy Gateway where the connection is coming from. One issue: without telling Envoy that the PROXY protocol is being used, that payload just looks like a malformed HTTP request, and all requests will return an HTTP 400 Bad Request.

To get Envoy Gateway to work with the PROXY Protocol, we need to add a small resource called ClientTrafficPolicy. Again, that’s an Envoy Gateway-specific resource, not defined by the Gateway API.

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-proxy-protocol
spec:
  enableProxyProtocol: true
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: gateway

Envoy will automatically determine whether PROXY Protocol v1 (Plaintext) or PROXY Protocol v2 (Binary) is being used. The second version is more efficient, hence we’re sticking to that.

Exposing services using HTTPRoute resources

Envoy is now fully configured, and it should return HTTP 404 responses when receiving a request. That’s because we haven’t defined an HTTPRoute yet, which we’ll now do. As already explained, an HTTPRoute is basically the equivalent of what Ingress resources were in the “old” world before the Gateway API.

However, instead of just referencing an ingressClassName, it now has a parentRefs field (plural again) that defines which listeners of which Gateways should amend their configuration to implement the specifics of the HTTPRoute.

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-echo
spec:
  parentRefs:
    - name: gateway
      namespace: default
      sectionName: http
    - name: gateway
      namespace: default
      sectionName: https
  hostnames:
    - example.com
  rules:
    - matches:
        - path:
            value: /
      backendRefs:
        - name: http-echo
          port: 80

In our case, we want the HTTPRoute to be valid for HTTP and HTTPS, so if a visitor sends a plain HTTP request, we can redirect them to a secure connection via HTTPS (more on that below).

The list of hostnames is just an array of strings. Notice how we don’t define any TLS-related parameters anymore? That’s now the sole job of the Gateway.

As with the old Ingress resources, you can then define rules and tell your Gateway which Service it should forward a given request to.
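By the way, the HTTP-to-HTTPS redirect mentioned above doesn’t happen automatically. One way to do it is a second HTTPRoute that only attaches to the http listener and uses a RequestRedirect filter; the echo route above would then only need to reference the https listener. A sketch (the route name is made up):

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-echo-redirect
spec:
  parentRefs:
    - name: gateway
      namespace: default
      sectionName: http          # only the plain-HTTP listener
  hostnames:
    - example.com
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            scheme: https        # redirect everything to HTTPS
            statusCode: 301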

The new workflow

In order to expose a new Service to the internet, my new workflow now looks like this:

  1. Create a Service of type ClusterIP for my workload
  2. Create a Certificate object in the same Namespace as my Gateway
  3. Amend the Gateway definition, adding the Secret name to the list of certificateRefs
  4. Create a HTTPRoute referencing my two Gateway listeners

In the end, it’s one step more than before (steps 3 and 4 were combined in a single Ingress object). However, the added flexibility is worth that little trade-off.

Outlook

In the next months, I’m planning to experiment a bit with Envoy’s filter capabilities, giving me the possibility to look into requests and responses and modify them on the fly. If there’s an interesting outcome, I’ll write up another post regarding this topic.

If you need help migrating from the “old” Ingress API to the new Gateway API, feel free to contact me.