Have you ever wanted an ngrok endpoint that doesn’t go offline when you get disconnected from the internet?
Today, we are excited to introduce Cloud Endpoints: persistent, always-on ngrok endpoints that are managed centrally via the dashboard and API. They let you route and respond to traffic even if your upstream services are offline. Cloud endpoints use ngrok’s new Traffic Policy system to configure traffic handling just like the Agent Endpoints (aka tunnels) that you're familiar with.
Cloud endpoints solve a number of problems for ngrok developers. Let’s take a closer look.
Cloud endpoints are available today to users on our free and pay-as-you-go plans. You can read the cloud endpoints documentation to get into the nitty-gritty details about how they work.
Once you've reserved a domain on ngrok, you can create a cloud endpoint on the ngrok dashboard or via API.
For the example below, we’re going to use the API via the ngrok agent CLI (you may need to run ngrok update first!).
Creating a cloud endpoint is a single API call where you specify the endpoint’s URL and its Traffic Policy:
ngrok api endpoints create \
  --api-key {YOUR_API_KEY} \
  --type cloud \
  --url https://inconshreveable.ngrok.app \
  --traffic-policy '{"on_http_request":[{"actions":[{"type":"custom-response","config":{"status_code":200,"content":"hello world from my new cloud endpoint"}}]}]}'
Now let’s try it out:
$ curl https://inconshreveable.ngrok.app
> hello world from my new cloud endpoint
Easy. You’ve got a cloud endpoint online serving requests! Now that we know how to create a cloud endpoint, let’s take a deeper look into what you’ll use them for.
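If inline JSON on the command line feels unwieldy, the same Traffic Policy can be written as YAML, which is the format used in the examples later in this post. This is the exact policy from the API call above, just in a different serialization:

```yaml
# The hello-world policy passed to --traffic-policy above, as YAML.
on_http_request:
  - actions:
      - type: custom-response
        config:
          status_code: 200
          content: hello world from my new cloud endpoint
```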
Combining Cloud Endpoints with Agent Endpoints gives you, as a developer, full autonomy over when and where your services become accessible.
For instance, your friendly Ops team can create a public cloud endpoint, such as api.example.com, and configure JWT validation (with the help of Auth0!) to authenticate and authorize client requests before they reach your internal service.

Meanwhile, you keep building critical functionality, such as pricing, on an agent endpoint with an internal binding like api-pricing.example.internal.

When ready, Ops can enable public API access via api.example.com/pricing and route to api-pricing.example.internal using the forward-internal action. When client requests hit api.example.com/pricing, ngrok forwards them to your agent endpoint (api-pricing.example.internal).

This setup lets you ship fast, eliminating the headaches that come from filing tickets for Ops.
Here is the Traffic Policy snippet that makes this possible:
on_http_request:
  - actions:
      - type: jwt-validation
        config:
          issuer:
            allow_list:
              - value: https://<AUTH0_TENANT>.us.auth0.com/
          audience:
            allow_list:
              - value: https://api.example.com
          http:
            tokens:
              - type: jwt
                method: header
                name: Authorization
                prefix: "Bearer "
          jws:
            allowed_algorithms:
              - RS256
            keys:
              sources:
                additional_jkus:
                  - https://<AUTH0_TENANT>.us.auth0.com/.well-known/jwks.json
  - name: Route /pricing to new internal agent endpoint
    expressions:
      - req.url.path.startsWith('/pricing')
    actions:
      - type: forward-internal
        config:
          url: https://api-pricing.example.internal
  - name: Route all other traffic to existing internal agent endpoint
    actions:
      - type: forward-internal
        config:
          url: https://api.example.internal
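On the developer side, the internal agent endpoint this policy forwards to can also be defined persistently in the agent's config file rather than on the command line. A minimal sketch, assuming the pricing service listens locally on port 8080 and you're on a recent agent with config version 3 (check the field names against your agent's docs):

```yaml
# ngrok agent config (version 3). The endpoint name and port are
# assumptions for this example.
version: 3
agent:
  authtoken: <YOUR_AUTHTOKEN>
endpoints:
  - name: pricing
    url: https://api-pricing.example.internal
    upstream:
      url: 8080
```

Equivalently, for a one-off session: ngrok http 8080 --url https://api-pricing.example.internal.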
To dig deeper into how to set up the routing that makes Ops control and developer self-service possible, check out a few of our resources:
We all have bad days. Services crash. Fixes take longer than you'd like. Users first start wondering what's wrong, then start reaching out to support.
You might currently bring your service online with a public agent endpoint, which is what ngrok creates when you run ngrok http 8080 --url https://example.com. If that upstream service at port 8080 crashes, requests will fail silently, or maybe worse, confusingly.
Cloud endpoints help you deliver an informative error page without having to host more web content on your own infra.
Instead, you can:

- Create a cloud endpoint that uses the forward-internal action to forward traffic to your agent endpoint.
- Serve an error page with the custom-response Traffic Policy action when the forward-internal action fails.
- Bring your service online on an internal agent endpoint with ngrok http 8080 --url https://your-agent-endpoint.internal.

The Traffic Policy example will look like this:
on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: https://your-agent-endpoint.internal
        on_error: continue
      - type: custom-response
        config:
          status_code: 503
          content: |
            <!DOCTYPE html>
            <html>
              <body>
                <h1>Service Temporarily Unavailable</h1>
                <p>We apologize, but our service is currently offline. Please try again later.</p>
              </body>
            </html>
          headers:
            content-type: text/html
Again, you don’t have to host a specific service or webpage for your error messages—just use ngrok’s Traffic Policy to serve up static content, and make it your own with HTML.
At ngrok, we dogfood everything we ship to customers. We’ve already been using cloud endpoints and find all sorts of uses for them. You’re even accessing one right now!
The https://ngrok.com site is a cloud endpoint itself with a chain of Traffic Policy rules to filter and take action on requests as they hit our network. Among other things, we block Tor traffic using a custom error page like the one shown just above, add redirects, and route traffic to multiple external services, like our blog, docs, and downloads page.
For example, here’s how we forward ngrok.com/downloads to a Vercel app with the upcoming forward-external Traffic Policy action:
on_http_request:
  - expressions:
      - req.url.path.startsWith('/downloads') ||
        req.url.path.startsWith('/__manifest')
    actions:
      - type: forward-external
        config:
          url: https://<NGROK_DOWNLOADS_DEPLOY>.vercel.app
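The redirects mentioned above work the same way, as another rule in the chain. A minimal sketch using the redirect Traffic Policy action; the path here is hypothetical, and you should check the action's exact config fields against the Traffic Policy docs:

```yaml
on_http_request:
  - expressions:
      - req.url.path == '/download'   # hypothetical legacy path
    actions:
      - type: redirect
        config:
          to: https://ngrok.com/downloads
          status_code: 301
```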
Cloud endpoints may feel familiar if you’ve used Edges before. They replace and deprecate Edges with a primitive that is both simpler and more flexible. They are powered by our expressive Traffic Policy engine that was built with modern traffic routing needs in mind. Cloud endpoints improve on Edges with:
Want to get off Edges? See the guide on how to migrate off of Edges to cloud endpoints. There is no planned end-of-life date for Edges yet. That will be announced separately with plenty of time to make a transition along with automated tooling to help you migrate.
To close, we’re pretty pumped up about cloud endpoints and the flexibility they bring to managing your traffic. So excited, in fact, that we’re using them ourselves. Stay tuned for more in-depth guides on how you can use cloud endpoints in your own workflows. Until then, peace.