inlets/inlets-operator

License: MIT

Get public TCP LoadBalancers for local Kubernetes clusters

When using a managed Kubernetes engine, you can expose a Service as a "LoadBalancer" and your cloud provider will provision a TCP cloud load balancer for you, and start routing traffic to the selected service inside your cluster. In other words, you get ingress to an otherwise internal service.

The inlets-operator brings that same experience to your local Kubernetes cluster by provisioning a VM on the public cloud and running an inlets server process there.

Within the cluster, it runs the inlets client as a Deployment, and once the two are connected, it updates the original service with the IP, just like a managed Kubernetes engine.

Deleting the Service, or annotating it to be ignored, will cause the cloud VM to be deleted.

Change any LoadBalancer from `<pending>` to a real IP

Once the inlets-operator is installed, any Service of type LoadBalancer will get an IP address, unless you exclude it with an annotation.

```bash
kubectl run nginx-1 --image=nginx --port=80 --restart=Always
kubectl expose pod/nginx-1 --port=80 --type=LoadBalancer
```

```
$ kubectl get services -w
NAME              TYPE        CLUSTER-IP        EXTERNAL-IP       PORT(S)   AGE
service/nginx-1   ClusterIP   192.168.226.216   <pending>         80/TCP    78s
service/nginx-1   ClusterIP   192.168.226.216   104.248.163.242   80/TCP    78s
```

You'll also find a Tunnel Custom Resource created for you:

```
$ kubectl get tunnels
NAMESPACE   NAME             SERVICE   HOSTSTATUS     HOSTIP         HOSTID
default     nginx-1-tunnel   nginx-1   provisioning                  342453649
default     nginx-1-tunnel   nginx-1   active         178.62.64.13   342453649
```

We recommend exposing an Ingress Controller or Istio Ingress Gateway; see also: Expose an Ingress Controller
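As a sketch of that setup (the Service name, namespace, and selector labels below are assumptions for a typical ingress-nginx install), switching the controller's Service to type LoadBalancer is all the operator needs to see:

```yaml
# Hypothetical manifest: an ingress-nginx controller Service set to
# type LoadBalancer. The inlets-operator watches for this type,
# provisions a tunnel VM, then writes the public IP back to the Service.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```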

Plays well with other LoadBalancers

Want to create tunnels for all LoadBalancer services, but ignore one or two?

Want to disable the inlets-operator for a particular Service? Add the annotation `operator.inlets.dev/manage` with a value of `0`.

```bash
kubectl annotate service nginx-1 operator.inlets.dev/manage=0
```

Want to ignore all services, then only create Tunnels for annotated ones?

Install the chart with `annotatedOnly: true`, then run:

```bash
kubectl annotate service nginx-1 operator.inlets.dev/manage=1
```
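For reference, a minimal sketch of the corresponding Helm values (the `annotatedOnly` key comes from the text above; the file layout is an assumption):

```yaml
# values.yaml fragment: with annotatedOnly enabled, the operator
# ignores every Service unless it carries the annotation
# operator.inlets.dev/manage=1.
annotatedOnly: true
```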

Using IPVS for your Kubernetes networking?

For IPVS, you need to declare a Tunnel Custom Resource instead of using the LoadBalancer field.

```yaml
apiVersion: operator.inlets.dev/v1alpha1
kind: Tunnel
metadata:
  name: nginx-1-tunnel
  namespace: default
spec:
  serviceRef:
    name: nginx-1
    namespace: default
status: {}
```

You can pre-define the auth token for the tunnel if you need to:

```yaml
spec:
  authTokenRef:
    name: nginx-1-tunnel-token
    namespace: default
```
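The referenced Secret needs to exist before the Tunnel is applied. A sketch, assuming the token is stored under a `token` key (the key name and value are assumptions; check the operator docs for the exact format it expects):

```yaml
# Hypothetical Secret holding the pre-defined auth token for the
# nginx-1 tunnel, matching the authTokenRef name and namespace.
apiVersion: v1
kind: Secret
metadata:
  name: nginx-1-tunnel-token
  namespace: default
stringData:
  token: "replace-with-a-strong-random-token"
```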

Who is this for?

Your cluster could be running anywhere: on your laptop, in an on-premises datacenter, within a VM, or on your Raspberry Pi. Ingress and LoadBalancers are a core building block of Kubernetes clusters, so Ingress is especially important if you:

  • run a private-cloud or a homelab
  • self-host applications and APIs
  • test and share work with colleagues or clients
  • want to build a realistic environment
  • integrate with webhooks and third-party APIs

There is no need to open a firewall port, set up port-forwarding rules, configure dynamic DNS, or resort to any of the usual hacks. You will get a public IP, and it will "just work" for any TCP traffic you may have.

How does it compare to other solutions?

  • There are no rate limits on connections or bandwidth limits
  • You can use your own DNS
  • You can use any IngressController or an Istio Ingress Gateway
  • You can take your IP address with you - wherever you go

Any Service of type LoadBalancer can be exposed within a few seconds.

Since exit-servers are created in your preferred cloud (around a dozen are supported already), you'll only have to pay for the cost of the VM, and where possible, the cheapest plan has already been selected for you. For example with Hetzner (coming soon) that's about 3 EUR / mo, and with DigitalOcean it comes in at around 5 USD - both of these VPSes come with generous bandwidth allowances, global regions and fast network access.

Conceptual overview

In this animation by Ivan Velichko, you see the operator in action.

It detects a new Service of type LoadBalancer, provisions a VM in the cloud, and then updates the Service with the IP address of the VM.

Demo GIF

There's also a video walk-through of exposing an Ingress Controller

Installation

Read the installation instructions for different cloud providers

The image for this operator is multi-arch and supports both x86_64 and arm64.

See also: Helm chart

Expose an Ingress Controller or Istio Ingress Gateway

Unlike other solutions, this:

  • Integrates directly into Kubernetes
  • Gives you a TCP LoadBalancer, and updates its IP in `kubectl get svc`
  • Allows you to use any custom DNS you want
  • Works with LetsEncrypt

Provider Pricing

The host provisioning code used by the inlets-operator is shared with inletsctl; both tools use the configuration in the table below.

These costs should be treated as estimates and will depend on your bandwidth usage and how many hosts you decide to create. You can check your cloud provider's dashboard, API, or CLI at any time to view your exit-nodes. The instance types listed have been chosen because they are the lowest-cost options the maintainers could find.

| Provider | Price per month | Price per hour | OS image | CPU | Memory | Boot time |
|----------|-----------------|----------------|----------|-----|--------|-----------|
| Google Compute Engine* | ~$4.28 | ~$0.006 | Ubuntu 22.04 | 1 | 614MB | ~3-15s |
| DigitalOcean | $5 | ~$0.0068 | Ubuntu 22.04 | 1 | 1GB | ~20-30s |
| Scaleway | 5.84€ | 0.01€ | Ubuntu 22.04 | 2 | 2GB | 3-5m |
| Amazon Elastic Compute Cloud (EC2) | $3.796 | $0.0052 | Ubuntu 20.04 | 1 | 1GB | 3-5m |
| Linode | $5 | $0.0075 | Ubuntu 22.04 | 1 | 1GB | ~10-30s |
| Azure | $4.53 | $0.0062 | Ubuntu 22.04 | 1 | 0.5GB | 2-4min |
| Hetzner | 4.15€ | 0.007€ | Ubuntu 22.04 | 1 | 2GB | ~5-10s |
\* The first f1-micro instance in a GCP Project (the default instance type for inlets-operator) is free for 720 hrs (30 days) per month

Video walk-through

In this video walk-through, Alex guides you through creating a Kubernetes cluster on your laptop with KinD, installing ingress-nginx (an IngressController) and cert-manager, and then, after the inlets-operator creates a LoadBalancer on the cloud, obtaining a TLS certificate from LetsEncrypt.

Video demo

Tutorial: Expose a local IngressController with the inlets-operator

Contributing

Contributions are welcome, see the CONTRIBUTING.md guide.

Also in this space

  • inlets - L7 HTTP / L4 TCP tunnel which can tunnel any TCP traffic. One of the ways to deploy it is via the inlets-operator.
  • MetalLB - a LoadBalancer for private Kubernetes clusters, cannot expose services publicly
  • kube-vip - a Kubernetes LoadBalancer similar to MetalLB, cannot expose services publicly
  • Cloudflare Tunnel aka "Argo" - product from Cloudflare for Cloudflare customers and domains - K8s integration available through Cloudflare DNS and ingress controller. Not for use with existing Ingress Controllers, unable to provide LoadBalancer
  • ngrok - a SaaS tunnel service; restarts every 7 hours, limits connections per minute, SaaS-only, no K8s integration available, TCP tunnels can only use high/unconventional ports, can't be used with Ingress Controllers
  • Wireguard - a modern VPN for connecting whole hosts and networks. Does not expose HTTP or TCP ports publicly.
  • Tailscale - a managed SaaS VPN that is built upon Wireguard.

Author / vendor

inlets and the inlets-operator are brought to you by OpenFaaS Ltd.
