Flexible and expressive traffic orchestration: introducing internal endpoints
Today, we’re excited to introduce two new primitives that enable you to create arbitrarily flexible routing topologies so you can send your ngrok traffic where it needs to go.
As ngrok has begun handling production workloads as an API gateway, we’ve heard from you that you need the ability to route traffic to different upstream services based on headers, paths, subdomains, query parameters, and more. That was difficult or impossible with ngrok’s existing primitives. We’ve solved that traffic orchestration challenge and made ngrok more flexible with two simple new primitives.
Internal endpoints are secure, non-public endpoints that end with the .internal TLD suffix. Unlike public endpoints, internal endpoints receive traffic only via the forward-internal Traffic Policy action. With an internal endpoint, you can put a service online with ngrok without making it addressable on the public internet.
The forward-internal action is a new Traffic Policy action that forwards traffic from one endpoint to another. Because forward-internal is part of ngrok’s Traffic Policy module, you can route traffic based on any aspect of an HTTP request, TCP connection, or TLS handshake, or on any of the more than 100 variables available to your Traffic Policy rules.
You can create your first internal endpoint from the ngrok agent’s CLI with a single command:
ngrok http 80 --url https://example.internal
You now have an internal endpoint online that will send the traffic it receives to port 80 on your local machine. The only problem is that, because it’s an internal endpoint, it can’t receive any traffic yet! Let’s fix that by creating a public endpoint which forwards some of its traffic to our new internal endpoint. First, create a new file called traffic-policy.yml.
---
on_http_request:
  - expressions:
      - req.url.path.startsWith("/foo")
    actions:
      - type: forward-internal
        config:
          url: https://example.internal
In another terminal (even on a different machine!), run:
ngrok http 8080 --url https://ex.ngrok.app --traffic-policy-file traffic-policy.yml
Now, traffic to ex.ngrok.app will route to port 8080, but traffic to ex.ngrok.app/foo will route to port 80 via our internal endpoint. Public-facing endpoints, internal endpoints, and the forward-internal action work together to let you easily customize traffic routing based on headers, paths, or domains, so you can shape your traffic orchestration to meet any requirement.
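You can check the split with two quick requests; this assumes the example domain reserved above, so substitute your own:
curl https://ex.ngrok.app/       # handled by the agent listening on port 8080
curl https://ex.ngrok.app/foo    # matched by the policy and forwarded on to port 80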
How do internal endpoints and the forward-internal action work?
Internal endpoints are secure, non-public endpoints that end with the .internal domain suffix. You can create internal endpoints from the ngrok Dashboard, through the ngrok agent, or by using an SDK.
With the introduction of internal endpoints, we’ve added a new property to every endpoint that determines how it is made available. We call this the endpoint’s binding. Endpoints are available either on the internet (public) or only within ngrok (internal). All existing endpoints use the public binding, and public is the default for newly created endpoints.
When you create an endpoint, you can set the binding property explicitly:
ngrok http 80 --url https://example.internal --binding internal
To make things easier, you can omit the binding property in nearly all cases: ngrok automatically sets your endpoint’s binding to internal when the endpoint URL uses the .internal TLD.
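In practice, because the URL below ends in .internal, these two commands create the same internal endpoint:
ngrok http 80 --url https://example.internal --binding internal
ngrok http 80 --url https://example.internal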
Route traffic to internal endpoints with the forward-internal action
Internal endpoints can only receive traffic through the forward-internal action, which forwards traffic from one endpoint to another endpoint or to a group of endpoints. This provides a flexible way to manage traffic flow securely.
For example, all client traffic to the public-facing endpoint https://example.org can be forwarded to the internal endpoint https://example.internal, which processes the request without public exposure.
---
on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: https://example.internal
What can you do with internal endpoints?
Let’s explore some real-world ways you might use internal endpoints in your applications today.
Path-based routing
You can route traffic from public-facing endpoints to internal services based on the URL path. This allows you to host different parts of your application from different services or even different clouds without splitting your app across multiple domains. For example, say you want to have multiple services behind a single public-facing endpoint like company.com:
/ → web.company.internal
/blog/ → blog.company.internal
/docs/ → docs.company.internal
/app/ → app.company.internal
Here’s how you can configure path-based routing using a Traffic Policy document based on the example above:
---
on_http_request:
  - name: 'Handle /docs/*'
    expressions:
      - req.url.path.startsWith('/docs/')
    actions:
      - type: forward-internal
        config:
          url: http://docs.company.internal
  - name: 'Handle /blog/*'
    expressions:
      - req.url.path.startsWith('/blog/')
    actions:
      - type: forward-internal
        config:
          url: http://blog.company.internal
  - name: 'Handle /app/*'
    expressions:
      - req.url.path.startsWith('/app/')
    actions:
      - type: forward-internal
        config:
          url: http://app.company.internal
  - name: 'Handle all other traffic'
    actions:
      - type: forward-internal
        config:
          url: http://web.company.internal
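Each internal endpoint referenced above is brought online by its own agent, using the same command shape as earlier. The local ports here are illustrative; point each command at wherever the corresponding service actually listens, and note that the URL schemes match the http:// URLs used in the policy:
ngrok http 3000 --url http://web.company.internal
ngrok http 3001 --url http://blog.company.internal
ngrok http 3002 --url http://docs.company.internal
ngrok http 3003 --url http://app.company.internal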
Domain-based routing
A combination of internal endpoints and a wildcard domain lets you share a single traffic policy document across all of your subdomains instead of duplicating it across many separate endpoints. Each subdomain can still have its own traffic policy document on its internal endpoint.
For example, once you have reserved a wildcard domain such as *.company.com, you can set up public-facing endpoints that route to different internal endpoints based on the subdomain:
https://blog.company.com/ → blog.company.internal
https://docs.company.com/ → docs.company.internal
https://app.company.com/ → app.company.internal
---
on_http_request:
  - name: 'Shared behavior for all subdomains'
    actions:
      - type: add-headers
        config:
          headers:
            geo: "${conn.geo.country_code}"
  - name: 'Handle docs.company.com'
    expressions:
      - req.host.startsWith('docs.')
    actions:
      - type: forward-internal
        config:
          url: http://docs.company.internal
  - name: 'Handle blog.company.com'
    expressions:
      - req.host.startsWith('blog.')
    actions:
      - type: forward-internal
        config:
          url: http://blog.company.internal
  - name: 'Handle app.company.com'
    expressions:
      - req.host.startsWith('app.')
    actions:
      - type: forward-internal
        config:
          url: http://app.company.internal
  - name: 'Handle all other traffic'
    actions:
      - type: forward-internal
        config:
          url: http://web.company.internal
This approach allows you to logically separate different services across subdomains while still benefiting from the security and flexibility that internal endpoints provide.
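Because each internal endpoint carries its own traffic policy, the team that owns a subdomain can layer on behavior of its own without touching the shared wildcard policy. As a rough sketch, the agent serving blog.company.internal might start with a command like this (the local port and file name are illustrative):
ngrok http 3001 --url http://blog.company.internal --traffic-policy-file blog-policy.yml
where blog-policy.yml reuses the add-headers action from the shared policy above to tag requests for the blog service (the header name and value are placeholders):
---
on_http_request:
  - name: 'Tag requests bound for the blog service'
    actions:
      - type: add-headers
        config:
          headers:
            x-forwarded-team: "blog"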
Header-based routing
Internal endpoints also make header-based routing a breeze. Imagine a scenario where you’re maintaining multiple versions of your API.
For instance, you might have a public-facing endpoint like api.billing.com while developing a new version, v2, on the internal endpoint v2.api.billing.internal. You could host the legacy version on another internal endpoint, v1.api.billing.internal.
When ready, you can route requests from api.billing.com to these internal services based on the x-version header:
---
on_http_request:
  - name: 'Handle legacy API, x-version: 1'
    expressions:
      - req.headers['x-version'][0] == '1'
    actions:
      - type: forward-internal
        config:
          url: http://v1.api.billing.internal
  - name: 'Handle latest API, x-version: 2'
    expressions:
      - req.headers['x-version'][0] == '2'
    actions:
      - type: forward-internal
        config:
          url: http://v2.api.billing.internal
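The rules above only match requests that actually send the header. Following the 'Handle all other traffic' pattern from the earlier examples, you would likely append one more rule so that requests without an x-version header still reach a sensible default; here’s a sketch that sends them to the legacy API:
  - name: 'Handle requests without an x-version header'
    actions:
      - type: forward-internal
        config:
          url: http://v1.api.billing.internal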
Geolocation-based routing by client location
Internal endpoints allow you to direct traffic based on geographic region. Using the conn.geo.country_code variable, you can forward traffic originating from different countries to internal endpoints designed specifically for those regions. For example, traffic from China (conn.geo.country_code == "CN") can be forwarded to a dedicated internal endpoint:
---
on_http_request:
  - name: 'Send incoming traffic from China to a dedicated endpoint'
    expressions:
      - conn.geo.country_code == "CN"
    actions:
      - type: forward-internal
        config:
          url: http://cn.website.internal
  - name: 'Handle all other traffic'
    actions:
      - type: forward-internal
        config:
          url: http://website.internal
And more!
We designed these new features to be reusable and composable building blocks which enable use cases far beyond traffic orchestration. At ngrok, we’ve already discovered other uses for them, like:
- Self-service app delivery for development teams
DevOps teams that own the API gateway often field tickets to bring new API resources online or to modify their configuration. Because traffic policies are composable, the ops team can control the security and authentication policy for the front door and forward traffic to internal endpoints controlled by the feature teams. Feature teams can then self-service their own API gateway needs without the usual friction of filing tickets with ops.
- Centralized security for ngrok on developer laptops
When enterprises use ngrok for webhook testing and local previews, they require centralized security policy. Because traffic policies are composable, internal endpoints let ops and security teams keep their dev teams safe without slowing them down. The security team controls a public endpoint and sets its security policy, while developers define the traffic policy that runs afterwards on the internal endpoint it forwards to.
- On-demand public addressability of endpoints
We’ve seen customers with intermittently accessed, predominantly idle endpoints write complex code to turn their ngrok agents on and off dynamically so that those endpoints aren’t always online. Now you can keep those services online via internal endpoints and simply update a public-facing endpoint to route to them only when needed. This improves your security posture by keeping your endpoints unaddressable when they aren’t in use, and it spares you from writing complex code to dynamically bring the ngrok agent online and offline.
- Debugging live traffic
Ever wanted to send a stream of traffic from your staging or CI environment to your local machine? Now it’s easy: when a request carries a debugging header, you can route it to an internal endpoint served by your local ngrok agent whenever it’s online, as in the sketch below.
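As a rough sketch, the staging environment’s public endpoint might carry a rule like this, where the x-debug header name and the dev.example.internal URL are placeholders for whatever you choose:
---
on_http_request:
  - name: 'Send tagged requests to a local agent'
    expressions:
      - req.headers['x-debug'][0] == 'true'
    actions:
      - type: forward-internal
        config:
          url: http://dev.example.internal
Requests without the header continue on to whatever upstream the staging endpoint normally serves.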
Try internal endpoints today
Internal endpoints, combined with public-facing endpoints, give you the flexibility to shape traffic flow in endless ways. Your first step, if you don't have one already, is to create a free ngrok account.
Internal endpoints are currently a private beta feature, so you'll need to reach out to our customer success team to activate them for your account.
Check out our developer documentation for an in-depth description of how to use internal endpoints.
Think you found a bug in internal endpoints, or can’t quite get your configuration right? Create an issue in the ngrok community repo. Curious about other ways you can use internal endpoints? Register for the next session of Office Hours and ask away. Either way, we’re here to help.