Compare commits


51 Commits

Unless noted otherwise in the Checks column, every push triggered three workflows (Update Kubernetes Services Wiki, Check with kubeconform / lint, and Auto-update README / Generate README and Create MR), and all of them passed.

| Author | SHA1 | Message | Date | Checks |
|---|---|---|---|---|
| Ultradesu | c28566ce21 | Added alertmanager ingress | 2026-02-09 14:57:52 +02:00 | all passed |
| Ultradesu | 1b2b4da98d | Added alertmanager ingress | 2026-02-09 14:55:22 +02:00 | all passed |
| Ultradesu | 014284c11d | Added alertmanager ingress | 2026-02-09 14:51:43 +02:00 | all passed |
| Ultradesu | fc00513db3 | 3 Adjusted node alerts | 2026-02-09 14:46:27 +02:00 | all passed |
| Ultradesu | 1451a5fb37 | 2 Adjusted node alerts | 2026-02-09 14:15:17 +02:00 | all passed |
| Ultradesu | d19ae33cd1 | Adjusted node alerts | 2026-02-09 13:04:25 +02:00 | all passed |
| Ultradesu | 8a8cab019f | Added node alerts | 2026-02-09 13:00:15 +02:00 | all passed |
| Ultradesu | 137384ce55 | Fixed node alert | 2026-02-09 12:52:22 +02:00 | all passed |
| Ultradesu | 1aee4d5cd7 | Fixed node alert | 2026-02-09 12:48:09 +02:00 | all passed |
| Ultradesu | 9d6fa51fc7 | Fixed node alert | 2026-02-09 12:44:42 +02:00 | all passed |
| AB | fc689d5e22 | Added kubectl to n8n | 2026-02-08 01:35:26 +02:00 | all passed |
| ab | a2f4f989e7 | Update k8s/apps/n8n/rbac.yaml | 2026-02-06 15:46:46 +00:00 | all passed |
| ab | cacc5ef02b | Update k8s/apps/n8n/deployment-worker.yaml | 2026-02-06 15:40:28 +00:00 | all passed |
| ab | f05a1515e6 | Update k8s/apps/n8n/deployment-worker.yaml | 2026-02-06 13:55:15 +00:00 | all passed |
| ab | dbb9722840 | Update k8s/apps/n8n/deployment-worker.yaml | 2026-02-06 12:56:26 +00:00 | all passed |
| ab | e7e066587f | Update k8s/apps/n8n/deployment-main.yaml | 2026-02-06 12:56:10 +00:00 | README workflow cancelled |
| ab | cb83a3fa38 | Merge pull request 'Auto-update README with k8s applications' (#127) from auto-update-readme-20260205-182558 into main; Reviewed-on: #127 | 2026-02-06 12:02:10 +00:00 | wiki workflow passed |
| Gitea Actions Bot | 4b3e1a10d4 | Auto-update README with current k8s applications (generated by CI/CD workflow on 2026-02-05 18:25:58; updates README.md with the current list of applications found in the k8s/ directory structure) | 2026-02-05 18:25:58 +00:00 | Terraform (pull_request) passed |
| Ultradesu | caf024aaa2 | Fix | 2026-02-05 20:25:11 +02:00 | all passed |
| Ultradesu | f4c1a4b310 | Fix | 2026-02-05 19:53:09 +02:00 | all passed |
| Ultradesu | f6623efab1 | Fix | 2026-02-05 19:46:14 +02:00 | all passed |
| Ultradesu | 52cea30ac3 | Fix | 2026-02-05 19:43:53 +02:00 | all passed |
| Ultradesu | 67bcf5247e | Fix | 2026-02-05 19:42:45 +02:00 | all passed |
| Ultradesu | e38f18d9a8 | Added longhorn | 2026-02-05 19:31:29 +02:00 | all passed |
| Ultradesu | 67bdb8ea29 | moved to manifests from chart | 2026-02-05 19:09:54 +02:00 | all passed |
| Ultradesu | 1e40073cb7 | moved to manifests from chart | 2026-02-05 19:08:15 +02:00 | all passed |
| Ultradesu | 82e9b336dc | moved to manifests from chart | 2026-02-05 19:07:04 +02:00 | all passed |
| Ultradesu | afbf68c6fa | moved to manifests from chart | 2026-02-05 19:06:55 +02:00 | none listed |
| Ultradesu | f6be70e1ca | moved to manifests from chart | 2026-02-05 18:43:04 +02:00 | all passed |
| Ultradesu | 02dff40276 | moved to manifests from chart | 2026-02-05 18:28:06 +02:00 | all passed |
| Ultradesu | e5d9a78699 | moved to manifests from chart | 2026-02-05 18:15:37 +02:00 | all passed |
| Ultradesu | 1221dbf7b5 | moved to manifests from chart | 2026-02-05 18:10:16 +02:00 | all passed |
| Ultradesu | 42ebe4cbda | moved to manifests from chart | 2026-02-05 18:06:00 +02:00 | all passed |
| Ultradesu | 4059bc1a70 | moved to manifests from chart | 2026-02-05 18:02:34 +02:00 | all passed |
| Ultradesu | 65f8056ef7 | moved to manifests from chart | 2026-02-05 18:00:26 +02:00 | lint and README cancelled |
| Ultradesu | 8fca12c674 | moved to manifests from chart | 2026-02-05 17:59:22 +02:00 | all cancelled |
| Ultradesu | 51cc40377c | moved to manifests from chart | 2026-02-05 17:57:22 +02:00 | all cancelled |
| Ultradesu | ff58069789 | moved to manifests from chart | 2026-02-05 17:55:41 +02:00 | lint and README cancelled |
| Ultradesu | 6b5a120fc4 | moved to manifests from chart | 2026-02-05 17:54:28 +02:00 | all passed |
| Ultradesu | 499da735f7 | moved to manifests from chart | 2026-02-05 17:50:57 +02:00 | all passed |
| Ultradesu | 3054a9242b | moved to manifests from chart | 2026-02-05 17:47:38 +02:00 | all passed |
| Ultradesu | 4d095e2773 | moved to manifests from chart | 2026-02-05 17:46:20 +02:00 | all passed |
| Ultradesu | 09562a6cb9 | moved to manifests from chart | 2026-02-05 17:41:13 +02:00 | all passed |
| Ultradesu | b81087515d | moved to manifests from chart | 2026-02-05 17:39:42 +02:00 | all passed |
| Ultradesu | 39232d422d | Disable NODES_EXCLUDE for n8n | 2026-02-05 17:14:21 +02:00 | all passed |
| Ultradesu | 40b565b5c8 | Disable NODES_EXCLUDE for n8n | 2026-02-05 17:09:07 +02:00 | all passed |
| Ultradesu | a7aaa3e4a5 | Added RBAC | 2026-02-05 12:15:47 +02:00 | all passed |
| Ultradesu | 5f882c7beb | fixing permissions | 2026-02-04 17:57:46 +02:00 | all passed |
| Ultradesu | 72cf9902d4 | fixing permissions | 2026-02-04 17:55:32 +02:00 | all passed |
| Ultradesu | a4b2eb8ab9 | fixing permissions | 2026-02-04 17:31:32 +02:00 | all passed |
| Ultradesu | 80b7b0a7f7 | Drop init cont fixing permissions | 2026-02-04 17:25:41 +02:00 | all passed |
20 changed files with 828 additions and 102 deletions

README.md

@@ -18,6 +18,7 @@ ArgoCD homelab project
| **external-secrets** | [![external-secrets](https://ag.hexor.cy/api/badge?name=external-secrets&revision=true)](https://ag.hexor.cy/applications/argocd/external-secrets) |
| **kube-system-custom** | [![kube-system-custom](https://ag.hexor.cy/api/badge?name=kube-system-custom&revision=true)](https://ag.hexor.cy/applications/argocd/kube-system-custom) |
| **kubernetes-dashboard** | [![kubernetes-dashboard](https://ag.hexor.cy/api/badge?name=kubernetes-dashboard&revision=true)](https://ag.hexor.cy/applications/argocd/kubernetes-dashboard) |
| **longhorn** | [![longhorn](https://ag.hexor.cy/api/badge?name=longhorn&revision=true)](https://ag.hexor.cy/applications/argocd/longhorn) |
| **postgresql** | [![postgresql](https://ag.hexor.cy/api/badge?name=postgresql&revision=true)](https://ag.hexor.cy/applications/argocd/postgresql) |
| **prom-stack** | [![prom-stack](https://ag.hexor.cy/api/badge?name=prom-stack&revision=true)](https://ag.hexor.cy/applications/argocd/prom-stack) |
| **system-upgrade** | [![system-upgrade](https://ag.hexor.cy/api/badge?name=system-upgrade&revision=true)](https://ag.hexor.cy/applications/argocd/system-upgrade) |

k8s/apps/n8n/deployment-main.yaml Normal file

@@ -0,0 +1,153 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-main
  labels:
    app: n8n
    component: main
spec:
  replicas: 1
  selector:
    matchLabels:
      app: n8n
      component: main
  template:
    metadata:
      labels:
        app: n8n
        component: main
    spec:
      serviceAccountName: n8n
      initContainers:
        - name: install-tools
          image: alpine:3.22
          command:
            - /bin/sh
            - -c
            - |
              set -e
              if [ -x /tools/kubectl ]; then
                echo "kubectl already exists, skipping download"
                /tools/kubectl version --client
                exit 0
              fi
              echo "Downloading kubectl..."
              ARCH=$(uname -m)
              case $ARCH in
                x86_64) ARCH="amd64" ;;
                aarch64) ARCH="arm64" ;;
              esac
              wget -O /tools/kubectl "https://dl.k8s.io/release/$(wget -qO- https://dl.k8s.io/release/stable.txt)/bin/linux/${ARCH}/kubectl"
              chmod +x /tools/kubectl
              /tools/kubectl version --client
          volumeMounts:
            - name: tools
              mountPath: /tools
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000
            runAsNonRoot: true
      containers:
        - name: n8n
          image: docker.n8n.io/n8nio/n8n:latest
          ports:
            - containerPort: 5678
              name: http
          env:
            - name: PATH
              value: "/opt/tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            - name: HOME
              value: "/home/node"
            - name: N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS
              value: "true"
            - name: NODES_EXCLUDE
              value: "[]"
            - name: N8N_HOST
              value: "n8n.hexor.cy"
            - name: N8N_PORT
              value: "5678"
            - name: N8N_PROTOCOL
              value: "https"
            - name: N8N_RUNNERS_ENABLED
              value: "true"
            - name: N8N_RUNNERS_MODE
              value: "external"
            - name: EXECUTIONS_MODE
              value: "queue"
            - name: QUEUE_BULL_REDIS_HOST
              value: "n8n-redis"
            - name: NODE_ENV
              value: "production"
            - name: WEBHOOK_URL
              value: "https://n8n.hexor.cy/"
            - name: GENERIC_TIMEZONE
              value: "Europe/Moscow"
            - name: TZ
              value: "Europe/Moscow"
            - name: DB_TYPE
              value: "postgresdb"
            - name: DB_POSTGRESDB_HOST
              value: "psql.psql.svc"
            - name: DB_POSTGRESDB_DATABASE
              value: "n8n"
            - name: DB_POSTGRESDB_USER
              valueFrom:
                secretKeyRef:
                  name: credentials
                  key: username
            - name: DB_POSTGRESDB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: credentials
                  key: password
            - name: N8N_ENCRYPTION_KEY
              valueFrom:
                secretKeyRef:
                  name: credentials
                  key: encryptionkey
            - name: N8N_RUNNERS_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: credentials
                  key: runnertoken
          volumeMounts:
            - name: n8n-data
              mountPath: /home/node/.n8n
            - name: tools
              mountPath: /opt/tools
          resources:
            requests:
              cpu: 2000m
              memory: 512Mi
            limits:
              cpu: 4000m
              memory: 2048Mi
          livenessProbe:
            httpGet:
              path: /healthz
              port: http
            initialDelaySeconds: 120
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /healthz/readiness
              port: http
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 10
      volumes:
        - name: n8n-data
          persistentVolumeClaim:
            claimName: n8n-data
        - name: tools
          persistentVolumeClaim:
            claimName: n8n-tools
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        fsGroup: 1000
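The init container drops kubectl onto the shared n8n-tools volume, which the main container mounts at /opt/tools (already on PATH above). A quick smoke test of that wiring, as a sketch assuming the deployment and namespace names used here:

    # Run the downloaded binary inside the main pod
    kubectl -n n8n exec deploy/n8n-main -- /opt/tools/kubectl version --client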

k8s/apps/n8n/deployment-worker.yaml Normal file

@@ -0,0 +1,112 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-worker
  labels:
    app: n8n
    component: worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: n8n
      component: worker
  template:
    metadata:
      labels:
        app: n8n
        component: worker
    spec:
      serviceAccountName: n8n
      containers:
        - name: n8n-worker
          image: docker.n8n.io/n8nio/n8n:latest
          command: ["n8n", "worker"]
          env:
            - name: PATH
              value: "/opt/tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            - name: HOME
              value: "/home/node"
            - name: NODES_EXCLUDE
              value: "[]"
            - name: N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS
              value: "true"
            - name: N8N_RUNNERS_ENABLED
              value: "true"
            - name: N8N_RUNNERS_MODE
              value: "external"
            - name: N8N_PORT
              value: "80"
            - name: EXECUTIONS_MODE
              value: "queue"
            - name: QUEUE_BULL_REDIS_HOST
              value: "n8n-redis"
            - name: N8N_RUNNERS_TASK_BROKER_URI
              value: "http://n8n:80"
            - name: NODE_ENV
              value: "production"
            - name: GENERIC_TIMEZONE
              value: "Europe/Moscow"
            - name: TZ
              value: "Europe/Moscow"
            - name: DB_TYPE
              value: "postgresdb"
            - name: DB_POSTGRESDB_HOST
              value: "psql.psql.svc"
            - name: DB_POSTGRESDB_DATABASE
              value: "n8n"
            - name: DB_POSTGRESDB_USER
              valueFrom:
                secretKeyRef:
                  name: credentials
                  key: username
            - name: DB_POSTGRESDB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: credentials
                  key: password
            - name: N8N_ENCRYPTION_KEY
              valueFrom:
                secretKeyRef:
                  name: credentials
                  key: encryptionkey
            - name: N8N_RUNNERS_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: credentials
                  key: runnertoken
          volumeMounts:
            - name: n8n-data
              mountPath: /home/node/.n8n
            - name: tools
              mountPath: /opt/tools
          resources:
            requests:
              cpu: 2000m
              memory: 512Mi
            limits:
              cpu: 4000m
              memory: 2048Mi
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - "ps aux | grep '[n]8n worker' || exit 1"
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 5
            failureThreshold: 3
      volumes:
        - name: n8n-data
          persistentVolumeClaim:
            claimName: n8n-data
        - name: tools
          persistentVolumeClaim:
            claimName: n8n-tools
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        fsGroup: 1000

k8s/apps/n8n/external-secrets.yaml

@@ -10,8 +10,10 @@ spec:
    template:
      type: Opaque
      data:
        postgres-password: "{{ .psql | trim }}"
        N8N_ENCRYPTION_KEY: "{{ .enc_pass | trim }}"
        password: "{{ .psql | trim }}"
        username: "n8n"
        encryptionkey: "{{ .enc_pass | trim }}"
        runnertoken: "{{ .runner_token | trim }}"
  data:
    - secretKey: psql
      sourceRef:
@@ -35,3 +37,14 @@ spec:
        metadataPolicy: None
        key: 18c92d73-9637-4419-8642-7f7b308460cb
        property: fields[0].value
    - secretKey: runner_token
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        conversionStrategy: Default
        decodingStrategy: None
        metadataPolicy: None
        key: 18c92d73-9637-4419-8642-7f7b308460cb
        property: fields[1].value
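Once the ExternalSecret reconciles, the rendered credentials Secret should expose the four templated keys. A sketch for verifying the sync, assuming the Secret lands in the n8n namespace:

    # Sync status of all ExternalSecrets in the namespace
    kubectl -n n8n get externalsecrets
    # Spot-check one rendered key
    kubectl -n n8n get secret credentials -o jsonpath='{.data.username}' | base64 -d   # expect: n8n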

k8s/apps/n8n/ingress.yaml Normal file (28 lines)

@@ -0,0 +1,28 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: n8n
  labels:
    app: n8n
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - n8n.hexor.cy
      secretName: n8n-tls
  rules:
    - host: n8n.hexor.cy
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: n8n
                port:
                  number: 80

k8s/apps/n8n/kustomization.yaml

@@ -1,19 +1,18 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Updated: Fixed n8n volume permissions issue
resources:
- external-secrets.yaml
- storage.yaml
- rbac.yaml
- redis-deployment.yaml
- redis-service.yaml
- deployment-main.yaml
- deployment-worker.yaml
- service.yaml
- ingress.yaml
helmCharts:
- name: n8n
  repo: https://community-charts.github.io/helm-charts
  version: 1.16.28
  releaseName: n8n
  namespace: n8n
  valuesFile: values-n8n.yaml
  includeCRDs: true
- name: yacy
  repo: https://gt.hexor.cy/api/packages/ab/helm
  version: 0.1.2
@@ -21,3 +20,6 @@ helmCharts:
  namespace: n8n
  valuesFile: values-yacy.yaml
  includeCRDs: true
commonLabels:
  app.kubernetes.io/name: n8n
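With the n8n chart entry removed but yacy still rendered from a chart, building this overlay locally needs kustomize's Helm support. A minimal sketch (requires helm on PATH):

    kustomize build --enable-helm k8s/apps/n8n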

k8s/apps/n8n/rbac.yaml Normal file (45 lines)

@@ -0,0 +1,45 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: n8n
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: n8n-clusterrole
rules:
  # Core API group ("")
  - apiGroups: [""]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
  # Common built-in API groups
  - apiGroups: ["apps", "batch", "autoscaling", "extensions", "policy"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io", "rbac.authorization.k8s.io", "apiextensions.k8s.io"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["coordination.k8s.io", "discovery.k8s.io", "events.k8s.io"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io", "admissionregistration.k8s.io", "authentication.k8s.io", "authorization.k8s.io"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: n8n-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: n8n-clusterrole
subjects:
  - kind: ServiceAccount
    name: n8n
    namespace: n8n
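The ClusterRole is deliberately read-only. One way to confirm the ServiceAccount can read but not mutate, via impersonation (a sketch):

    # should print "yes"
    kubectl auth can-i list pods --as=system:serviceaccount:n8n:n8n --all-namespaces
    # should print "no"
    kubectl auth can-i delete deployments --as=system:serviceaccount:n8n:n8n -n n8n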

k8s/apps/n8n/redis-deployment.yaml Normal file

@@ -0,0 +1,57 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-redis
  labels:
    app: redis
    component: n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      component: n8n
  template:
    metadata:
      labels:
        app: redis
        component: n8n
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          ports:
            - containerPort: 6379
              name: redis
          command:
            - redis-server
            - --appendonly
            - "yes"
            - --save
            - "900 1"
          volumeMounts:
            - name: redis-data
              mountPath: /data
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
            limits:
              cpu: 200m
              memory: 256Mi
          livenessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            exec:
              command:
                - redis-cli
                - ping
            initialDelaySeconds: 5
            periodSeconds: 5
      volumes:
        - name: redis-data
          emptyDir: {}
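The redis-server flags enable append-only persistence plus an RDB snapshot after 900 s if at least one key changed. An in-pod sanity check, assuming the deployment name above:

    kubectl -n n8n exec deploy/n8n-redis -- redis-cli config get appendonly
    kubectl -n n8n exec deploy/n8n-redis -- redis-cli config get save

Note that redis-data is an emptyDir, so the AOF survives container restarts but not pod rescheduling.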

k8s/apps/n8n/redis-service.yaml Normal file

@@ -0,0 +1,18 @@
---
apiVersion: v1
kind: Service
metadata:
  name: n8n-redis
  labels:
    app: redis
    component: n8n
spec:
  selector:
    app: redis
    component: n8n
  ports:
    - name: redis
      port: 6379
      targetPort: 6379
      protocol: TCP
  type: ClusterIP

k8s/apps/n8n/service.yaml Normal file (17 lines)

@@ -0,0 +1,17 @@
---
apiVersion: v1
kind: Service
metadata:
  name: n8n
  labels:
    app: n8n
spec:
  selector:
    app: n8n
    component: main
  ports:
    - name: http
      port: 80
      targetPort: 5678
      protocol: TCP
  type: ClusterIP

k8s/apps/n8n/storage.yaml

@@ -2,11 +2,23 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: n8n-home
  name: n8n-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-csi
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: n8n-tools
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longhorn
  resources:
    requests:
      storage: 20Gi

k8s/apps/n8n/values-n8n.yaml (deleted)

@@ -1,79 +0,0 @@
nodeSelector:
  kubernetes.io/hostname: master.tail2fe2d.ts.net
db:
  type: postgresdb
main:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 512m
      memory: 512Mi
  persistence:
    enabled: true
    existingClaim: n8n-home
    mountPath: /home/node/.n8n
  podSecurityContext:
    fsGroup: 1000
    fsGroupChangePolicy: "OnRootMismatch"
  # Fix NFS permission issues - required for NFS volumes
  initContainers:
    - name: fix-permissions
      image: busybox:1.35
      command:
        - sh
        - -c
        - |
          echo "Fixing permissions for NFS volume..."
          if [ ! -d "/home/node/.n8n" ]; then
            mkdir -p /home/node/.n8n
          fi
          chown -R 1000:1000 /home/node/.n8n
          chmod -R 775 /home/node/.n8n
          echo "Permissions fixed: $(ls -ld /home/node/.n8n)"
      volumeMounts:
        - name: node-modules
          mountPath: /home/node/.n8n
      securityContext:
        runAsUser: 0
        runAsGroup: 0
worker:
  mode: regular
webhook:
  url: https://n8n.hexor.cy
redis:
  enabled: true
existingEncryptionKeySecret: credentials
externalPostgresql:
  existingSecret: credentials
  host: "psql.psql.svc"
  username: "n8n"
  database: "n8n"
ingress:
  enabled: true
  className: traefik
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd
  hosts:
    - host: n8n.hexor.cy
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: n8n-tls
      hosts:
        - '*.hexor.cy'

k8s/core/longhorn/app.yaml Normal file

@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: longhorn
  namespace: argocd
spec:
  project: core
  destination:
    namespace: longhorn
    server: https://kubernetes.default.svc
  source:
    repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
    targetRevision: HEAD
    path: k8s/core/longhorn
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true
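With automated sync, selfHeal, and prune enabled, Argo CD should converge this on its own once the commit lands. If the argocd CLI is configured against this instance, a manual check might look like:

    argocd app get longhorn
    argocd app sync longhorn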

k8s/core/longhorn/kustomization.yaml Normal file

@@ -0,0 +1,15 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
#resources:
# - app.yaml
helmCharts:
- name: longhorn
  repo: https://charts.longhorn.io
  version: 1.11.0
  releaseName: longhorn
  namespace: longhorn
  valuesFile: values.yaml
  includeCRDs: true
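A local approximation of what the "Check with kubeconform / lint" workflow presumably does for this overlay (flags are an assumption; Longhorn's CRD-based resources need missing schemas ignored):

    kustomize build --enable-helm k8s/core/longhorn | kubeconform -ignore-missing-schemas -summary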

k8s/core/longhorn/values.yaml Normal file

@@ -0,0 +1,4 @@
longhornUI:
  replicas: 1
persistence:
  reclaimPolicy: "Retain"

alertmanager-config.yaml Normal file

@@ -0,0 +1,46 @@
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: telegram-notifications
  namespace: prometheus
  labels:
    app: kube-prometheus-stack-alertmanager
    release: prometheus
spec:
  route:
    groupBy: ['alertname', 'cluster', 'service']
    groupWait: 10s
    groupInterval: 5m
    repeatInterval: 12h
    receiver: telegram
    routes:
      - matchers:
          - name: alertname
            value: Watchdog
            matchType: "="
        receiver: 'null'
  receivers:
    - name: telegram
      telegramConfigs:
        - botToken:
            name: alertmanager-telegram-secret
            key: TELEGRAM_BOT_TOKEN
          chatID: 124317807
          parseMode: HTML
          sendResolved: true
          disableNotifications: false
          message: |
            {{ if eq .Status "firing" }}🔥 FIRING{{ else }}✅ RESOLVED{{ end }}
            {{ range .Alerts }}
            📊 <b>{{ .Labels.alertname }}</b>
            {{ .Annotations.summary }}
            {{ if .Annotations.node }}🖥 <b>Node:</b> <code>{{ .Annotations.node }}</code>{{ end }}
            {{ if .Annotations.pod }}📦 <b>Pod:</b> <code>{{ .Annotations.pod }}</code>{{ end }}
            {{ if .Annotations.namespace }}📁 <b>Namespace:</b> <code>{{ .Annotations.namespace }}</code>{{ end }}
            {{ if .Annotations.throttle_rate }}⚠️ <b>Throttling rate:</b> {{ .Annotations.throttle_rate }}{{ end }}
            🔗 <a href="{{ .GeneratorURL }}">View in Grafana</a>
            {{ end }}
    - name: 'null'
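To exercise the Telegram receiver without waiting for a real incident, a synthetic alert can be pushed through Alertmanager with amtool; a sketch, assuming the external URL configured in values.yaml further down:

    amtool alert add TestAlert severity=warning \
      --annotation=summary='Telegram receiver test' \
      --alertmanager.url=https://prom.hexor.cy/alertmanager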

grafana-alerting-configmap.yaml

@@ -45,7 +45,7 @@ data:
                    type: __expr__
                    uid: __expr__
                  expression: A
                  reducer: last
                  reducer: min
                  refId: B
                  type: reduce
            noDataState: NoData
@@ -63,7 +63,7 @@ data:
      - orgId: 1
        name: kubernetes_alerts
        folder: Kubernetes
        interval: 30s
        interval: 2m
        rules:
          - uid: node_not_ready
            title: Kubernetes Node Not Ready
@@ -71,17 +71,17 @@ data:
            data:
              - refId: A
                relativeTimeRange:
                  from: 300
                  from: 600
                  to: 0
                datasourceUid: P76F38748CEC837F0
                model:
                  expr: 'kube_node_status_condition{condition="Ready",status="true"} == 0'
                  expr: 'kube_node_status_condition{condition="Ready",status="false"}'
                  refId: A
                  intervalMs: 1000
                  maxDataPoints: 43200
              - refId: B
                relativeTimeRange:
                  from: 300
                  from: 600
                  to: 0
                datasourceUid: __expr__
                model:
@@ -98,12 +98,12 @@ data:
                    type: __expr__
                    uid: __expr__
                  expression: A
                  reducer: last
                  reducer: min
                  refId: B
                  type: reduce
            noDataState: Alerting
            noDataState: NoData
            execErrState: Alerting
            for: 0s
            for: 10m
            annotations:
              node: '{{ $labels.node }}'
              condition: '{{ $labels.condition }}'
@@ -111,6 +111,236 @@ data:
            labels:
              severity: critical
          - uid: node_high_memory_usage
            title: High Node Memory Usage
            condition: B
            data:
              - refId: A
                relativeTimeRange:
                  from: 300
                  to: 0
                datasourceUid: P76F38748CEC837F0
                model:
                  expr: '(1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)) * 100'
                  refId: A
                  intervalMs: 1000
                  maxDataPoints: 43200
              - refId: B
                relativeTimeRange:
                  from: 300
                  to: 0
                datasourceUid: __expr__
                model:
                  conditions:
                    - evaluator:
                        params:
                          - 80
                        type: gt
                      operator:
                        type: and
                      query:
                        params: []
                  datasource:
                    type: __expr__
                    uid: __expr__
                  expression: A
                  reducer: max
                  refId: B
                  type: reduce
            noDataState: NoData
            execErrState: Alerting
            for: 5m
            annotations:
              node: '{{ $labels.instance }}'
              memory_usage: '{{ printf "%.1f%%" $values.A }}'
              summary: 'Node memory usage is critically high'
            labels:
              severity: warning
          - uid: node_high_cpu_usage
            title: High Node CPU Usage
            condition: B
            data:
              - refId: A
                relativeTimeRange:
                  from: 300
                  to: 0
                datasourceUid: P76F38748CEC837F0
                model:
                  expr: '100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)'
                  refId: A
                  intervalMs: 1000
                  maxDataPoints: 43200
              - refId: B
                relativeTimeRange:
                  from: 300
                  to: 0
                datasourceUid: __expr__
                model:
                  conditions:
                    - evaluator:
                        params:
                          - 80
                        type: gt
                      operator:
                        type: and
                      query:
                        params: []
                  datasource:
                    type: __expr__
                    uid: __expr__
                  expression: A
                  reducer: max
                  refId: B
                  type: reduce
            noDataState: NoData
            execErrState: Alerting
            for: 10m
            annotations:
              node: '{{ $labels.instance }}'
              cpu_usage: '{{ printf "%.1f%%" $values.A }}'
              summary: 'Node CPU usage is critically high'
            labels:
              severity: warning
          - uid: node_high_disk_usage
            title: High Node Disk Usage
            condition: B
            data:
              - refId: A
                relativeTimeRange:
                  from: 300
                  to: 0
                datasourceUid: P76F38748CEC837F0
                model:
                  expr: '(1 - (node_filesystem_avail_bytes{fstype=~"ext[234]|xfs|zfs|btrfs"} / node_filesystem_size_bytes)) * 100'
                  refId: A
                  intervalMs: 1000
                  maxDataPoints: 43200
              - refId: B
                relativeTimeRange:
                  from: 300
                  to: 0
                datasourceUid: __expr__
                model:
                  conditions:
                    - evaluator:
                        params:
                          - 85
                        type: gt
                      operator:
                        type: and
                      query:
                        params: []
                  datasource:
                    type: __expr__
                    uid: __expr__
                  expression: A
                  reducer: max
                  refId: B
                  type: reduce
            noDataState: NoData
            execErrState: Alerting
            for: 5m
            annotations:
              node: '{{ $labels.instance }}'
              filesystem: '{{ $labels.mountpoint }}'
              disk_usage: '{{ printf "%.1f%%" $values.A }}'
              summary: 'Node disk usage is critically high'
            labels:
              severity: critical
          - uid: node_load_average_high
            title: High Node Load Average
            condition: B
            data:
              - refId: A
                relativeTimeRange:
                  from: 300
                  to: 0
                datasourceUid: P76F38748CEC837F0
                model:
                  expr: 'node_load5 / on(instance) group_left count by(instance)(node_cpu_seconds_total{mode="idle"})'
                  refId: A
                  intervalMs: 1000
                  maxDataPoints: 43200
              - refId: B
                relativeTimeRange:
                  from: 300
                  to: 0
                datasourceUid: __expr__
                model:
                  conditions:
                    - evaluator:
                        params:
                          - 0.8
                        type: gt
                      operator:
                        type: and
                      query:
                        params: []
                  datasource:
                    type: __expr__
                    uid: __expr__
                  expression: A
                  reducer: max
                  refId: B
                  type: reduce
            noDataState: NoData
            execErrState: Alerting
            for: 5m
            annotations:
              node: '{{ $labels.instance }}'
              load_average: '{{ printf "%.2f" $values.A }}'
              summary: 'Node load average is high relative to CPU count'
            labels:
              severity: warning
          - uid: node_exporter_down
            title: Node Exporter Down
            condition: B
            data:
              - refId: A
                relativeTimeRange:
                  from: 300
                  to: 0
                datasourceUid: P76F38748CEC837F0
                model:
                  expr: 'up{job="node-exporter"}'
                  refId: A
                  intervalMs: 1000
                  maxDataPoints: 43200
              - refId: B
                relativeTimeRange:
                  from: 300
                  to: 0
                datasourceUid: __expr__
                model:
                  conditions:
                    - evaluator:
                        params:
                          - 1
                        type: lt
                      operator:
                        type: and
                      query:
                        params: []
                  datasource:
                    type: __expr__
                    uid: __expr__
                  expression: A
                  reducer: min
                  refId: B
                  type: reduce
            noDataState: NoData
            execErrState: Alerting
            for: 2m
            annotations:
              node: '{{ $labels.instance }}'
              summary: 'Node exporter is down - unable to collect metrics'
            labels:
              severity: critical
  contactpoints.yaml: |
    apiVersion: 1
    contactPoints:
@@ -149,4 +379,4 @@ data:
          - alertname
        group_wait: 10s
        group_interval: 5m
        repeat_interval: 4h
        repeat_interval: 12h
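The thresholds above can be dry-run against live data before the rules ship. A sketch that evaluates the memory expression through the Prometheus HTTP API, assuming the prom.hexor.cy ingress added in values.yaml below:

    curl -sG 'https://prom.hexor.cy/api/v1/query' \
      --data-urlencode 'query=(1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)) * 100'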

kustomization.yaml (prom-stack)

@@ -5,6 +5,7 @@ resources:
- persistentVolume.yaml
- external-secrets.yaml
- grafana-alerting-configmap.yaml
- alertmanager-config.yaml
helmCharts:
- name: kube-prometheus-stack

values.yaml (kube-prometheus-stack)

@@ -26,11 +26,41 @@ alertmanager:
        {{ if .Annotations.description }}<b>Description:</b> {{ .Annotations.description }}{{ end }}
        {{ end }}
  ingress:
    enabled: true
    ingressClassName: traefik
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
      traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd
    hosts:
      - prom.hexor.cy
    paths:
      - /alertmanager
    tls:
      - secretName: alertmanager-tls
        hosts:
          - prom.hexor.cy
  alertmanagerSpec:
    secrets:
      - alertmanager-telegram-secret
    externalUrl: https://prom.hexor.cy/alertmanager
    routePrefix: /alertmanager
prometheus:
  ingress:
    enabled: true
    ingressClassName: traefik
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
      traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd
    hosts:
      - prom.hexor.cy
    paths:
      - /
    tls:
      - secretName: prometheus-tls
        hosts:
          - prom.hexor.cy
  prometheusSpec:
    enableRemoteWriteReceiver: true
    additionalScrapeConfigs:
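With routePrefix and externalUrl set, Alertmanager is served under /alertmanager on the same host as Prometheus. Once certificates are issued, both built-in health endpoints give a quick reachability check:

    curl -sI https://prom.hexor.cy/alertmanager/-/healthy
    curl -sI https://prom.hexor.cy/-/healthy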