I had two VMs on Proxmox — 2 GB RAM and 1 vCPU each — and wanted to see how far I could push k3s in a home environment. Short answer: it works, but memory management needs attention.
## Hardware context
The cluster consists of a master node and a worker node, both running Ubuntu 22.04 on Proxmox. Resources are intentionally limited to simulate a real edge environment:
```text
# Master node
RAM: 2 GB | vCPU: 1 | Disk: 20 GB

# Worker node (deploy target)
RAM: 2 GB | vCPU: 1 | Disk: 20 GB
```
## Why MariaDB instead of MySQL
The official mysql:8 image defaults innodb_buffer_pool_size to 128 MB, but in practice the process easily settles around 400 MB of resident memory. With MariaDB and minimal tuning you get down to ~220 MB, and on a 2 GB node that difference matters.
## Deploy structure
I used the local-path-provisioner already bundled with k3s for PersistentVolumes, avoiding external dependencies. Everything lives in the wordpress namespace:
```text
namespace: wordpress
├── PersistentVolumeClaim mysql-pvc (5 Gi)
├── PersistentVolumeClaim wp-pvc (3 Gi)
├── Deployment mariadb
│     resources:
│       requests: { memory: 256Mi }
│       limits:   { memory: 512Mi }
├── Deployment wordpress
│     resources:
│       requests: { memory: 128Mi }
│       limits:   { memory: 300Mi }
├── Service mariadb (ClusterIP)
├── Service wordpress (ClusterIP)
└── Ingress (Traefik, host: blog.local)
```
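As a sketch, the database claim might look like this; `local-path` is the StorageClass that k3s's bundled provisioner registers, and it is also the cluster default, so the field could be omitted:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: wordpress
spec:
  accessModes: [ReadWriteOnce]     # local-path only supports single-node access
  storageClassName: local-path     # k3s bundled local-path-provisioner
  resources:
    requests:
      storage: 5Gi
```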
## The MariaDB Deployment
The critical piece is the ConfigMap with InnoDB tuning:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mariadb-config
  namespace: wordpress
data:
  my.cnf: |
    [mysqld]
    innodb_buffer_pool_size = 128M
    max_connections = 50
    query_cache_size = 0
```
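For context, here is a minimal sketch of how the Deployment can consume this ConfigMap; the image tag and the `mariadb-secret` Secret are assumptions, not from the original setup, while the resource values match the tree above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
  namespace: wordpress
spec:
  replicas: 1
  selector:
    matchLabels: { app: mariadb }
  template:
    metadata:
      labels: { app: mariadb }
    spec:
      containers:
        - name: mariadb
          image: mariadb:10.11              # assumed tag
          env:
            - name: MARIADB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:               # hypothetical Secret
                  name: mariadb-secret
                  key: root-password
          resources:
            requests: { memory: 256Mi }
            limits:   { memory: 512Mi }
          volumeMounts:
            - name: config
              mountPath: /etc/mysql/conf.d  # MariaDB picks up extra .cnf files here
            - name: data
              mountPath: /var/lib/mysql
      volumes:
        - name: config
          configMap: { name: mariadb-config }
        - name: data
          persistentVolumeClaim: { claimName: mysql-pvc }
```

Mounting the ConfigMap under `/etc/mysql/conf.d` means the tuning survives image upgrades without rebuilding anything.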
## Ingress with Traefik
k3s ships with Traefik v2 already configured as the default ingress controller. No extra Helm chart needed:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress
  namespace: wordpress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: blog.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80
```
## The tuning that makes the difference
Without explicit resources.limits, nothing caps a pod's memory and it can consume all available RAM until the kernel OOM killer strikes. I learned this lesson the hard way: the worker hit a swap storm on the first WordPress dashboard load.
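As a sketch, the limits from the tree above translate into the wordpress Deployment like this (only the memory-relevant fragment; the image tag is an assumption):

```yaml
# Fragment of the wordpress Deployment: the part that matters for memory
spec:
  template:
    spec:
      containers:
        - name: wordpress
          image: wordpress:6-apache   # assumed tag
          resources:
            requests:
              memory: 128Mi   # what the scheduler reserves on the node
            limits:
              memory: 300Mi   # hard cap: exceeding it gets the container OOM-killed
```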
> Always set resources.limits in your Deployments. Without limits, a single pod can saturate the node and bring everything else down.

## Real idle consumption
After a week of use, the worker settles at these values:
| Component | RSS | Limit |
|---|---|---|
| MariaDB | ~230 MB | 512Mi |
| WordPress | ~180 MB | 300Mi |
| k3s agent | ~120 MB | — |
| OS (Ubuntu) | ~350 MB | — |
| **Total** | **~880 MB** | 2048 MB (node) ✓ |
About 1.1 GB of headroom remains, enough to handle light traffic spikes.
## Next steps
- Add a CronJob for automatic backup of the database PVC to external storage
- Configure cert-manager + Let’s Encrypt for automatic HTTPS
- Integrate Prometheus + Grafana to monitor namespace RAM consumption
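The first item could start as something like this hedged sketch: schedule, Secret name, and `backup-pvc` destination are all assumptions, and a real setup would push the dump to storage outside the node rather than to a local claim:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mariadb-backup
  namespace: wordpress
spec:
  schedule: "0 3 * * *"               # nightly at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: dump
              image: mariadb:10.11    # assumed tag, kept in step with the DB
              command:
                - sh
                - -c
                - >
                  mysqldump -h mariadb -u root -p"$MARIADB_ROOT_PASSWORD"
                  --all-databases > /backup/dump-$(date +%F).sql
              env:
                - name: MARIADB_ROOT_PASSWORD
                  valueFrom:
                    secretKeyRef:     # hypothetical Secret
                      name: mariadb-secret
                      key: root-password
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: backup-pvc  # hypothetical claim on external storage
```

A logical dump via `mysqldump` is friendlier to a 2 GB node than snapshotting the PVC, since it streams rather than copying the whole volume at once.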