move `kubernetes` folder to the `k8s-unofficial` branch

pull/641/head^2
Louis Lam 2 years ago
parent f11417e854
commit 3512faad14

kubernetes/README.md
@@ -1,32 +0,0 @@
# Uptime-Kuma K8s Deployment
⚠ Warning: The K8s deployment is provided by contributors. I have no experience with K8s and will not be able to fix errors in it. I only test Docker and Node.js. Use at your own risk.
## How does it work?
Kustomize is a tool that builds one complete deployment file out of all the config elements.
It creates a certificate with the specified Issuer and an Ingress for the Uptime-Kuma ClusterIP service.
You can edit the files in the ```uptime-kuma``` folder, but leave ```kustomization.yml``` alone unless you know what you're doing.
If you want to use another namespace, edit the ```kustomization.yml``` in the ```kubernetes``` folder and change ```namespace: uptime-kuma``` to something you like, for example as in the sketch below.
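A minimal sketch of that change, assuming a namespace called ```monitoring``` (the name is only an example, and the namespace itself must already exist, e.g. created with ```kubectl create namespace monitoring```):

```yaml
# kubernetes/kustomization.yml – same content as in the diff below, only the namespace is changed
namespace: monitoring
namePrefix: uptime-kuma-
commonLabels:
  app: uptime-kuma
bases:
  - uptime-kuma
```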
## What do I have to edit?
You have to edit ```ingressroute.yml``` to your needs.
This ```ingressroute.yml``` is written for the [nginx-ingress-controller](https://kubernetes.github.io/ingress-nginx/) in combination with [cert-manager](https://cert-manager.io/).
- Host
- Secrets and secret names
- (Cluster)Issuer (optional)
- The image version in the deployment file
- Update:
  - Change to a newer version and re-run the commands from "How To use" below; the pods will be updated one after another (see the sketch after this list)
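A minimal sketch of such an update, assuming you simply bump the image tag in ```deployment.yml``` and apply again (the pinned tag mentioned in the comment is purely illustrative):

```yaml
# kubernetes/uptime-kuma/deployment.yml (excerpt) – change the tag, then rebuild and apply again
containers:
  - name: app
    image: louislam/uptime-kuma:1   # ":1" follows the latest 1.x release; pin a specific tag (e.g. ":1.23.3") to control when pods roll
```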
## How To use
- Install [kustomize](https://kubectl.docs.kubernetes.io/installation/kustomize/)
- Edit the files mentioned above to your needs
- Run ```kustomize build > apply.yml```
- Run ```kubectl apply -f apply.yml```
Now you should see some K8s magic, and Uptime-Kuma should be available at the specified address.
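Alternatively, ```kubectl``` v1.14 and later has kustomize built in, so running ```kubectl apply -k .``` from inside the ```kubernetes``` folder should give the same result without the intermediate ```apply.yml```.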

kubernetes/kustomization.yml
@@ -1,10 +0,0 @@
namespace: uptime-kuma
namePrefix: uptime-kuma-
commonLabels:
  app: uptime-kuma
bases:
  - uptime-kuma

kubernetes/uptime-kuma/deployment.yml
@@ -1,45 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: uptime-kuma
  name: deployment
spec:
  selector:
    matchLabels:
      component: uptime-kuma
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        component: uptime-kuma
    spec:
      containers:
        - name: app
          image: louislam/uptime-kuma:1
          ports:
            - containerPort: 3001
          volumeMounts:
            - mountPath: /app/data
              name: storage
          livenessProbe:
            exec:
              command:
                - node
                - extra/healthcheck.js
            initialDelaySeconds: 180
            periodSeconds: 60
            timeoutSeconds: 30
          readinessProbe:
            httpGet:
              path: /
              port: 3001
              scheme: HTTP
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: pvc

kubernetes/uptime-kuma/ingressroute.yml
@@ -1,39 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/server-snippets: |
      location / {
        proxy_set_header Upgrade $http_upgrade;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_cache_bypass $http_upgrade;
      }
  name: ingress
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-com-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service
                port:
                  number: 3001

kubernetes/uptime-kuma/kustomization.yml
@@ -1,5 +0,0 @@
resources:
  - deployment.yml
  - service.yml
  - ingressroute.yml
  - pvc.yml

kubernetes/uptime-kuma/pvc.yml
@@ -1,10 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

kubernetes/uptime-kuma/service.yml
@@ -1,13 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  selector:
    component: uptime-kuma
  type: ClusterIP
  ports:
    - name: http
      port: 3001
      targetPort: 3001
      protocol: TCP