The Ingress API has long given Kubernetes users a way to get external HTTP(S) traffic into their clusters. With a variety of Ingress Controller implementations available, there are plenty of options to cover nearly any use case. In this blog post, we are going to look at some of the key differentiators of the ngrok Kubernetes Operator and why it might be right for you.
One of my favorite features, and a key differentiator, of the ngrok Kubernetes Operator is that it works seamlessly behind Network Address Translation (NAT). With most other Ingress Controllers, you must have a public IP address attached to a load balancer or edge router to get ingress into your cluster. With the ngrok Kubernetes Operator, an outbound connection is made from the controller to ngrok's global network, allowing traffic into the cluster. Because that traffic is routed through the ngrok platform, you can ingress traffic into any Kubernetes cluster, whether it's in a public cloud, a private data center, or even running on your laptop.
A common setup for fully automating the creation of Ingresses involves running an ingress controller, cert-manager (with an ACME issuer such as Let's Encrypt, for example), and ExternalDNS. Let's dive into how these three pieces work together to fully automate HTTPS ingress traffic (a sketch of the setup follows the list):

- The ingress controller watches Ingress resources, provisions or attaches to a load balancer, and publishes its public address on the Ingress status as `loadBalancer.ingress.hostname` or `loadBalancer.ingress.ip`.
- ExternalDNS watches the same Ingress resources and creates DNS records for each host, pointing them at the address published in the status.
- cert-manager completes an ACME challenge with the issuer and stores the resulting TLS certificate in the Secret referenced by the Ingress, which the controller uses to terminate HTTPS.
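For reference, here is a minimal sketch of that traditional setup. The `nginx` ingress class and the `letsencrypt-prod` ClusterIssuer are placeholder names for this example, not part of any ngrok setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-app-1
  namespace: default
  annotations:
    # cert-manager watches this annotation and issues a certificate
    # into the Secret named under spec.tls.
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx              # placeholder ingress class
  tls:
    - hosts:
        - app.yourdomain.com
      secretName: test-app-1-tls       # created and renewed by cert-manager
  rules:
    - host: app.yourdomain.com         # ExternalDNS creates a DNS record for this host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-app-1
                port:
                  number: 80
```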
The ngrok Kubernetes Operator collapses these steps into a simpler ingress configuration. You don't need to install and manage cert-manager, because certificates are automatically provisioned and managed by ngrok. If you only use ngrok subdomains or manage DNS manually, you don't need to install and configure ExternalDNS either. If you are using your own subdomain with ngrok (app.yourdomain.com, for example), you can continue to use ExternalDNS to automatically create CNAME records pointing to ngrok for Ingresses managed by the ngrok Kubernetes Operator (sketched below). The diagram below shows how the operator automates edge provisioning through the ngrok API and forwards traffic from ngrok's edge network to your service in Kubernetes.
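For the bring-your-own-domain case, a minimal sketch of what that might look like is below. The `external-dns.alpha.kubernetes.io/target` annotation is ExternalDNS's standard way to override the record target; the value shown is a placeholder for the CNAME target ngrok assigns when you add the domain, and depending on your operator version the annotation may not be needed at all if the operator publishes that target on the Ingress status.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-app-1
  namespace: default
  annotations:
    # Placeholder: use the CNAME target shown for app.yourdomain.com in your ngrok dashboard.
    external-dns.alpha.kubernetes.io/target: <your-ngrok-cname-target>
spec:
  ingressClassName: ngrok
  rules:
    - host: app.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-app-1
                port:
                  number: 80
```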
One of the key differentiators of the ngrok platform is that it runs the same locally as it does in any production environment.
This benefit also extends to the ngrok Kubernetes Operator.
This means that you can run the operator locally on your own machine using k3s, k3d, or kind with the same configuration (NgrokModuleSets) that you use for prod.
In addition to providing connectivity to your services, the ngrok platform includes a variety of modules to assist you in providing reliable and secure ingress.
To access these modules, the ngrok Kubernetes Operator uses a custom resource, NgrokModuleSet, which allows users to build reusable and composable configuration for controlling Ingress traffic. For example, I can create the following NgrokModuleSet, which requires users to authenticate to access my site. The OAuth settings will only allow ngrok employees to access my site.
```yaml
---
apiVersion: ingress.k8s.ngrok.com/v1alpha1
kind: NgrokModuleSet
metadata:
  name: google-oauth-ngrok
  namespace: default
modules:
  oauth:
    google:
      emailDomains:
        - ngrok.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    k8s.ngrok.com/modules: google-oauth-ngrok
  name: test-app-1
  namespace: default
spec:
  ingressClassName: ngrok
  rules:
    - host: my-test-app-1.ngrok.io
      http:
        paths:
          - backend:
              service:
                name: test-app-1
                port:
                  name: http
            path: /
            pathType: Prefix
```
This is just one example of the capabilities of the ngrok Kubernetes Operator. It supports the full set of ngrok modules, including OAuth, OIDC, SAML, IP restrictions, header addition and removal, and webhook verification, to name a few.
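As a sketch of what another module looks like, here is a hypothetical NgrokModuleSet that manipulates headers. The `headers.request.add` and `headers.response.remove` field names mirror ngrok's request/response header modules but are assumptions here; check the CRD reference for your operator version before using them.

```yaml
apiVersion: ingress.k8s.ngrok.com/v1alpha1
kind: NgrokModuleSet
metadata:
  name: header-tweaks
  namespace: default
modules:
  headers:
    request:
      add:
        X-Forwarded-By: ngrok     # added to requests before they reach your Service
    response:
      remove:
        - Server                  # stripped from responses at the ngrok edge
```

You attach it to an Ingress the same way as the OAuth example above, via the k8s.ngrok.com/modules annotation.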
All traffic intended for your application flows through ngrok's global network, and your cluster only receives authorized traffic, keeping it safe from threats. You can add ngrok's circuit breaker module to throttle traffic and help mitigate DDoS attacks. Your clients will often experience faster response times because ngrok routes client traffic to the Point of Presence (PoP) with the lowest latency to the client and automatically reroutes traffic if a PoP becomes unavailable.
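For example, a hypothetical NgrokModuleSet enabling the circuit breaker might look like the following; the `circuitBreaker.errorThresholdPercentage` field name and value format are assumptions based on ngrok's circuit breaker module, so verify them against the CRD schema for your operator version.

```yaml
apiVersion: ingress.k8s.ngrok.com/v1alpha1
kind: NgrokModuleSet
metadata:
  name: circuit-breaker
  namespace: default
modules:
  circuitBreaker:
    # Reject new requests once roughly half of recent requests have failed
    # (field name and value format are assumptions; see the CRD reference).
    errorThresholdPercentage: "0.50"
```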
The ngrok Kubernetes Operator provides ingress using the Ingress API, and ngrok modules are implemented using Custom Resource Definitions (CRDs) that extend the Kubernetes API. The Kubernetes Ingress API only supports getting HTTP(S) traffic into your cluster. However, the ngrok Kubernetes Operator watches and reconciles a number of different CRDs, making it possible for you to create TCP and TLS Edges in the ngrok platform alongside the HTTP/HTTPS support provided by the Kubernetes Ingress API.
We are currently developing an ngrok Gateway Controller, which implements the recently GA'd Kubernetes Gateway API. This will allow us to offer the same key differentiators and features that exist in the ngrok Ingress Controller in our Gateway Controller, along with first-class support for exposing TCP and TLS services instead of having to use the TLSEdge and TCPEdge CRDs. While [TLSRoute](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1alpha2.TLSRoute) and [TCPRoute](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1alpha2.TCPRoute) are still v1alpha2 in the Gateway API, we look forward to our users being able to use them to expose their services with all the benefits of the ngrok platform we've previously mentioned.
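To give a feel for the shape of that configuration, here is a minimal sketch using the GA Gateway and HTTPRoute resources. The `ngrok` GatewayClass name, the listener details, and the Service port are assumptions, since the Gateway Controller is still in development.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ngrok-gateway
  namespace: default
spec:
  gatewayClassName: ngrok            # assumed class name for the ngrok Gateway Controller
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      hostname: my-test-app-1.ngrok.io
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: test-app-1
  namespace: default
spec:
  parentRefs:
    - name: ngrok-gateway
  hostnames:
    - my-test-app-1.ngrok.io
  rules:
    - backendRefs:
        - name: test-app-1
          port: 80                   # placeholder Service port
```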
To learn more about the ngrok Kubernetes Operator, check out some of these other posts from ngrok:
Questions or comments? Hit us up on X (aka Twitter) @ngrokhq or LinkedIn, or join our community on Slack.