
Reverse Proxy on K3s with Traefik: Endpoints for Off-Cluster Services

How to configure a K3s mini cluster as a reverse proxy for services running on external containers, using Traefik, EndpointSlice and security middleware.

The Context

I have a K3s mini cluster made up of two Debian VMs on Proxmox. The VMs have been through basic hardening: SSH access with key only, unnecessary services disabled, nftables firewall with default drop policy. The cluster sits behind a private LAN, reachable from the outside through port forwarding configured on the router.

The actual problem: several containerized services run on separate hosts on the same local network. These services need TLS, a public domain and a baseline of protection through security headers and rate limiting. Setting that up individually on every host does not make sense. A single entry point is needed.

The adopted solution is to use the K3s cluster as a centralized reverse proxy. Traefik, the ingress controller bundled with K3s, handles TLS termination and security middleware enforcement. External services are exposed through Services without selectors and EndpointSlice resources pointing to each container's LAN IP.


When It Makes Sense to Use K3s as a Reverse Proxy

There are scenarios where this approach is reasonable, and others where it adds complexity without a real return.

It makes sense when:

  • Multiple services on the local network require TLS and a public domain. Managing certificates and renewals on each individual host quickly becomes a maintenance burden.
  • You want to apply a uniform security policy (HSTS, CSP, rate limiting) without replicating the configuration on every service.
  • You already have a K3s cluster running for other purposes. Adding an ingress for an external service costs a few lines of YAML.
  • The external service does not have its own reverse proxy or has a limited one.

It does not make sense when:

  • You have a single service to expose. A dedicated reverse proxy like Caddy or a simple Nginx is enough and simpler to maintain.
  • The service is already behind its own reverse proxy with TLS. Adding a second proxy layer introduces latency and debugging complexity without clear benefits.
  • You are not familiar with Kubernetes. The learning cost of K3s, Traefik and cert-manager is not justified just for reverse proxying.

Common Configuration Problems

Setting up a reverse proxy in Kubernetes is not as straightforward as doing it with Nginx or Caddy. There are several points where the configuration can break without clear error messages.

Missing or misreferenced middleware

Traefik references middleware using the format <namespace>-<name>@kubernetescrd. If the name or namespace does not match exactly what is declared in the Middleware manifest, Traefik cannot find the resource and returns 404 on the entire route: not an explicit error, just a blank page. The logs rarely make the cause obvious, and you often need to raise Traefik's log level to DEBUG to find the failed reference.
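As a concrete sketch with illustrative names: a middleware declared like this must be referenced by joining namespace and name with a hyphen, exactly as they appear in the manifest.

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: secure-headers     # <name>
  namespace: webapps       # <namespace>
spec:
  headers:
    contentTypeNosniff: true

# In the Ingress annotation (exact match required):
#   traefik.ingress.kubernetes.io/router.middlewares: webapps-secure-headers@kubernetescrd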

Deprecated Endpoints

Kubernetes deprecated the v1 Endpoints resource starting from version 1.33. The replacement is EndpointSlice (discovery.k8s.io/v1). If the cluster is up to date, using Endpoints generates warnings and may stop working in the future. The migration requires adding the kubernetes.io/service-name label on the EndpointSlice to bind it to the Service.

Overly restrictive Content Security Policy

A CSP that blocks blob: or data: in img-src and media-src can prevent web interfaces from loading dynamic content. A media server, for example, generates blob: URLs through MediaSource Extensions for video playback in the browser. If the CSP does not allow them, the player does not work and there is no visible error on the page.
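For such a service, the affected directives can be widened selectively instead of loosening the whole policy. A sketch of the relevant fragment (which sources are actually needed depends on the application):

contentSecurityPolicy: >-
  default-src 'self';
  img-src 'self' data: blob:;
  media-src 'self' blob:;
  script-src 'self' 'unsafe-inline';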

Compression of already compressed streams

Enabling Traefik compression with compress: {} without exclusions means that video and audio streams encoded with lossy codecs like H.264 or AAC get processed by gzip or brotli. The result is increased CPU usage with no appreciable reduction in data size. For MIME types to exclude, always use complete types (video/mp4, video/x-matroska) and never truncated prefixes like video/, which some Traefik versions do not handle correctly.
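With Traefik's Middleware CRD the exclusion can be expressed directly; a sketch with an illustrative name and MIME list:

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: compress
  namespace: kube-system
spec:
  compress:
    excludedContentTypes:
      - video/mp4
      - video/x-matroska
      - audio/aac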


Implementation: Exposing an External Service

Let us take as an example a service running on a container with IP 10.0.0.20 on port 3000, to be exposed on the domain git.example.org.

1. Namespace

Each service gets its own namespace. This isolates resources and allows applying middleware with per-app scope.

apiVersion: v1
kind: Namespace
metadata:
  name: myservice

2. Service Without a Selector

The Service declares ports but has no selector: it does not point to Pods in the cluster, and will instead be bound to an external IP through an EndpointSlice.

apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: myservice
spec:
  ports:
    - name: http
      port: 3000
      targetPort: 3000

3. EndpointSlice

This resource binds the Service to the external container’s IP. The kubernetes.io/service-name label is mandatory for the binding.

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: myservice-1
  namespace: myservice
  labels:
    kubernetes.io/service-name: myservice
addressType: IPv4
ports:
  - name: http
    port: 3000
endpoints:
  - addresses:
      - "10.0.0.20"

4. TLS Certificate

cert-manager handles automatic renewal through Let’s Encrypt with DNS-01 challenge via Cloudflare.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myservice-tls
  namespace: myservice
spec:
  secretName: myservice-tls
  issuerRef:
    name: letsencrypt-cloudflare
    kind: ClusterIssuer
  dnsNames:
    - git.example.org

5. Security Middleware

A middleware that applies HSTS, a Content Security Policy, X-Frame-Options, nosniff and the legacy XSS filter header. It lives in the service namespace and is referenced as myservice-myservice-headers@kubernetescrd (namespace myservice, name myservice-headers).

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: myservice-headers
  namespace: myservice
spec:
  headers:
    contentSecurityPolicy: >-
      default-src 'self';
      script-src 'self' 'unsafe-inline';
      style-src 'self' 'unsafe-inline';
      img-src 'self' data:;
      font-src 'self' data:;
      connect-src 'self';
      frame-ancestors 'none';
      base-uri 'self';
      form-action 'self';
      upgrade-insecure-requests;
    contentTypeNosniff: true
    customFrameOptionsValue: "DENY"
    forceSTSHeader: true
    stsSeconds: 31536000
    stsIncludeSubdomains: true
    stsPreload: true
    browserXssFilter: true

6. Ingress

The ingress ties everything together: host, TLS, backend and middleware chain.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice
  namespace: myservice
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.middlewares: >-
      myservice-myservice-headers@kubernetescrd,
      kube-system-rate-limit@kubernetescrd
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - git.example.org
      secretName: myservice-tls
  rules:
    - host: git.example.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myservice
                port:
                  number: 3000
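The kube-system-rate-limit middleware referenced by the annotation is assumed to exist cluster-wide; a minimal version might look like this (the thresholds are illustrative):

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: rate-limit
  namespace: kube-system
spec:
  rateLimit:
    average: 50   # sustained requests per second per client
    burst: 100    # short spikes tolerated above the average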

Why This Stack

K3s provides Traefik by default. cert-manager automates certificates. Traefik middleware applies security headers. The combination works as a centralized reverse proxy with TLS and hardening, without installing additional software on individual hosts.

The practical advantage is centralization: a change to the rate limiting middleware propagates to every service that references it. A certificate renewal managed by cert-manager requires no manual intervention. A new external service is added with five or six YAML manifests and a kubectl apply.

The cost is the inherent complexity of Kubernetes. For two or three services it may not be worth it. Once you are running five, ten or fifteen services, the initial setup cost is amortized by the uniformity of management.


Note: This configuration exposes services through a Cloudflare tunnel rather than through ports opened directly toward the services. The port forwarding on the router points to the tunnel, not to the services themselves. This adds a layer of protection against direct IP scanning.