Compare commits

440 Commits
main...main

Author SHA1 Message Date
ab
c26fdfa914 Update terraform/authentik/proxy-apps.tfvars 2025-12-10 13:47:16 +00:00
ab
ddf438e903 Update terraform/authentik/proxy-apps.tfvars 2025-12-10 13:46:17 +00:00
ab
3c95524bd1 Update terraform/authentik/proxy-apps.tfvars 2025-12-09 22:00:00 +00:00
ab
6b7e35e7c8 Update terraform/authentik/proxy-apps.tfvars 2025-12-09 21:17:38 +00:00
ab
7d17a5931c Update k8s/apps/pasarguard/daemonset.yaml 2025-12-09 15:42:24 +00:00
ab
921cb50ddc Update k8s/apps/pasarguard/kustomization.yaml 2025-12-09 04:36:31 +00:00
ab
598d2f80f6 Update k8s/apps/jellyfin/values.yaml 2025-12-09 03:16:37 +00:00
ab
9792dd799b Update k8s/apps/jellyfin/values.yaml 2025-12-09 03:08:51 +00:00
ab
9267f25278 Update k8s/apps/jellyfin/values.yaml 2025-12-09 03:03:19 +00:00
ab
812c37cfe7 Update k8s/apps/jellyfin/values.yaml 2025-12-09 03:02:28 +00:00
Ultradesu
a6a50c3c13 Reworked pasarguard nodes daemonset. 2025-12-08 18:52:36 +02:00
Ultradesu
2ff8f9efe8 Reworked pasarguard nodes daemonset. 2025-12-08 18:51:55 +02:00
Ultradesu
477708190d Reworked pasarguard nodes daemonset. 2025-12-08 18:47:12 +02:00
Ultradesu
53d5723141 Reworked pasarguard nodes daemonset. 2025-12-08 18:41:33 +02:00
Ultradesu
101d97eb51 Reworked pasarguard nodes daemonset. 2025-12-08 18:31:46 +02:00
Ultradesu
4241c81fc0 Reworked pasarguard nodes daemonset. 2025-12-08 18:30:34 +02:00
Ultradesu
bb9ce21bb4 Reworked pasarguard nodes daemonset. 2025-12-08 18:06:47 +02:00
Ultradesu
95913c3f73 Reworked pasarguard nodes daemonset. 2025-12-08 18:01:25 +02:00
Ultradesu
737604f466 Reworked pasarguard nodes daemonset. 2025-12-08 17:43:51 +02:00
Ultradesu
2bb7ad6959 Reworked pasarguard nodes daemonset. 2025-12-08 17:36:28 +02:00
Ultradesu
b2a77a6572 Reworked pasarguard nodes daemonset. 2025-12-08 17:31:51 +02:00
ab
fc1bc1a65e Update k8s/apps/jellyfin/values.yaml 2025-12-04 19:11:29 +00:00
AB
a2114327cf Got rid of authentik secret 2025-12-03 17:14:28 +02:00
AB
5c886d7604 Added bot restart 2025-12-03 16:11:42 +02:00
ab
ebcb6dde23 Update k8s/apps/tg-bots/get-id-bot.yaml 2025-12-01 00:19:51 +00:00
ab
c6f3528174 Update k8s/apps/pasarguard/daemonset.yaml 2025-11-30 22:40:07 +00:00
ab
a3c0c6bce0 Update k8s/apps/pasarguard/deployment.yaml 2025-11-30 22:39:03 +00:00
ab
ae30d8e898 Update k8s/apps/pasarguard/deployment.yaml 2025-11-30 21:47:07 +00:00
ab
2dd9eb544b Update k8s/core/postgresql/pgadmin4-values.yaml 2025-11-28 23:11:47 +00:00
ab
5d7882fb96 Update k8s/core/system-upgrade/plan.yaml 2025-11-27 11:46:28 +00:00
ab
5065061eea Update k8s/apps/pasarguard/deployment.yaml 2025-11-27 09:36:55 +00:00
ab
dccfa0ca7b Merge pull request 'Auto-update README with k8s applications' (#36) from auto-update-readme-20251126-161845 into main
Reviewed-on: ab/homelab#36
2025-11-27 09:33:07 +00:00
Gitea Actions Bot
883139964e Auto-update README with current k8s applications
Generated by CI/CD workflow on 2025-11-26 16:18:45

This PR updates the README.md file with the current list of applications found in the k8s/ directory structure.
2025-11-26 16:18:45 +00:00
ab
74c2e2230c Update k8s/core/postgresql/external-secrets.yaml 2025-11-26 16:18:02 +00:00
Ultradesu
5d9fe8f832 Added Remnawave 2025-11-24 18:50:38 +02:00
Ultradesu
af4a4c9216 Added Remnawave 2025-11-24 18:46:22 +02:00
Ultradesu
43bd0b6ac3 Added user UI remnawave 2025-11-24 18:45:59 +02:00
Ultradesu
c4a40fad9f Added user UI remnawave 2025-11-24 18:42:30 +02:00
Ultradesu
e1d6a09f6f Added user UI remnawave 2025-11-24 18:40:58 +02:00
ab
e37c365630 Update terraform/authentik/proxy-apps.tfvars 2025-11-24 16:39:42 +00:00
Ultradesu
6cf7a68ad7 Added user UI remnawave 2025-11-24 18:38:54 +02:00
Ultradesu
b82407f31f Added user UI remnawave 2025-11-24 18:33:39 +02:00
ab
117583e660 Update terraform/authentik/proxy-apps.tfvars 2025-11-24 15:08:49 +00:00
ab
70d31aae17 Update k8s/core/authentik/values.yaml 2025-11-24 15:06:20 +00:00
ab
9066b3f624 Update terraform/authentik/proxy-apps.tfvars 2025-11-24 15:03:55 +00:00
Ultradesu
633c4eec94 Added secrets 2025-11-24 16:57:39 +02:00
Ultradesu
89df3d985e Added secrets 2025-11-24 16:57:02 +02:00
Ultradesu
8ef1242243 Added secrets 2025-11-24 16:54:04 +02:00
Ultradesu
f07f0ea7c5 Added Remnawave 2025-11-24 16:32:43 +02:00
Ultradesu
17321b5d61 Added Remnawave 2025-11-24 16:32:00 +02:00
ab
c9177d9878 Update k8s/core/postgresql/external-secrets.yaml 2025-11-24 14:28:56 +00:00
ab
06f8b620ae Update k8s/apps/jellyfin/qbittorent.yaml 2025-11-24 12:12:24 +00:00
Ultradesu
a3c02b3510 Update external-secrets 2025-11-24 14:01:13 +02:00
ab
b1acb37700 Update k8s/apps/gitea/external-secrets.yaml 2025-11-24 11:58:05 +00:00
ab
a1288f4c99 Update k8s/core/external-secrets/app.yaml 2025-11-24 11:54:25 +00:00
Ultradesu
1672cdf402 Update external-secrets 2025-11-24 13:51:59 +02:00
Ultradesu
e96a75880e Update external-secrets 2025-11-24 13:51:37 +02:00
ab
6c79543b84 Update k8s/core/external-secrets/app.yaml 2025-11-24 11:46:33 +00:00
ab
f41b86bad4 Update k8s/core/external-secrets/kustomization.yaml 2025-11-24 11:43:04 +00:00
ab
d44ade2298 Update k8s/core/argocd/kustomization.yaml 2025-11-24 11:41:20 +00:00
ab
462cb9189d Update k8s/core/prom-stack/kustomization.yaml 2025-11-24 11:39:51 +00:00
ab
37bb407a77 Update k8s/core/cert-manager/kustomization.yaml 2025-11-24 11:38:05 +00:00
ab
cae1c42004 Update k8s/core/cert-manager/values.yaml 2025-11-24 11:36:56 +00:00
ab
663f2e7504 Update k8s/core/cert-manager/values.yaml 2025-11-24 11:31:25 +00:00
ab
8679dbf47e Merge pull request 'Auto-update README with k8s applications' (#24) from auto-update-readme-20251124-111959 into main
Reviewed-on: ab/homelab#24
2025-11-24 11:28:23 +00:00
Gitea Actions Bot
1ace105757 Auto-update README with current k8s applications
Generated by CI/CD workflow on 2025-11-24 11:19:59

This PR updates the README.md file with the current list of applications found in the k8s/ directory structure.
2025-11-24 11:19:59 +00:00
Ultradesu
ce4172b435 Increase CPU limits 2025-11-24 13:19:04 +02:00
Ultradesu
c841f95bd2 Fixed iperf3 monitor 2025-11-19 00:02:10 +02:00
Ultradesu
fb651bc6fe Fixed iperf3 monitor 2025-11-18 23:57:43 +02:00
Ultradesu
04330aa474 Fixed iperf3 monitor 2025-11-18 22:30:15 +02:00
Ultradesu
57d517af77 Fixed iperf3 monitor 2025-11-18 22:21:17 +02:00
Ultradesu
1f7d9e41c8 Fixed iperf3 monitor 2025-11-18 22:16:11 +02:00
Ultradesu
39a27c596f Fixed iperf3 2025-11-18 22:13:12 +02:00
Ultradesu
353bb877be Fixed iperf3 2025-11-18 21:45:26 +02:00
Ultradesu
e523bb8bea Fixed iperf3 2025-11-18 21:44:00 +02:00
Ultradesu
b433373725 Fixed iperf3 2025-11-18 21:40:53 +02:00
Ultradesu
3026e53746 Added iperf3-server 2025-11-18 21:37:09 +02:00
Ultradesu
63669c69ff Added iperf3-server 2025-11-18 21:33:34 +02:00
Ultradesu
fa98e553cd Added iperf3-server 2025-11-18 21:32:01 +02:00
Ultradesu
055ef8aa77 Added iperf3-server 2025-11-18 21:31:57 +02:00
Ultradesu
22b359a7ee Added iperf3-server 2025-11-18 20:47:50 +02:00
ab
611e3e31dd Update k8s/core/prom-stack/prom-values.yaml 2025-11-18 18:12:39 +00:00
ab
ddbd53e476 Update k8s/core/prom-stack/prom-values.yaml 2025-11-18 18:07:07 +00:00
ab
f8a9d91932 Update k8s/apps/pasarguard/deployment.yaml 2025-11-18 17:18:26 +00:00
ab
262fea115d Update k8s/core/prom-stack/prom-values.yaml 2025-11-18 15:36:08 +00:00
Ultradesu
1e1a015dc0 Added xray expoter monitor 2025-11-18 14:53:07 +02:00
Ultradesu
e76ebdd8c3 Added xray expoter monitor 2025-11-18 14:47:08 +02:00
Ultradesu
0c2ce55a41 Added xray expoter monitor 2025-11-18 14:43:35 +02:00
Ultradesu
6e9de5addf Added xray expoter monitor 2025-11-18 14:43:03 +02:00
Ultradesu
887a9a2306 Added xray exporter 2025-11-18 14:39:05 +02:00
Ultradesu
776109d795 Added xray exporter 2025-11-18 14:32:39 +02:00
Ultradesu
c998426b44 Added xray exporter 2025-11-18 14:27:12 +02:00
Ultradesu
536be6a61f Added xray exporter 2025-11-18 14:13:55 +02:00
Ultradesu
713481c726 Added xray exporter 2025-11-18 14:12:14 +02:00
ab
f6411b7b65 Update k8s/core/authentik/values.yaml 2025-11-17 20:31:03 +00:00
ab
3af6d98be8 Update k8s/core/authentik/kustomization.yaml 2025-11-17 20:30:45 +00:00
ab
a45af9d4bc Update k8s/core/external-secrets/bitwarden-store.yaml 2025-11-17 20:15:34 +00:00
ab
76937930ce Update k8s/apps/ollama/openweb-ui-values.yaml 2025-11-09 00:39:33 +00:00
ab
d4ff8d4665 Update k8s/apps/ollama/openweb-ui-values.yaml 2025-11-09 00:38:08 +00:00
ab
e0cf9371ae Update k8s/apps/ollama/openweb-ui-values.yaml 2025-11-08 23:45:31 +00:00
ab
1126cb25bc Update k8s/apps/ollama/openweb-ui-values.yaml 2025-11-08 22:42:29 +00:00
ab
44250dc937 Update k8s/apps/ollama/openweb-ui-values.yaml 2025-11-08 22:41:09 +00:00
ab
d9db73e078 Update k8s/apps/ollama/openweb-ui-values.yaml 2025-11-08 22:40:45 +00:00
ab
71ce9f15ef Update k8s/apps/ollama/kustomization.yaml 2025-11-08 22:37:47 +00:00
ab
6b855294af Add k8s/apps/ollama/external-secrets.yaml 2025-11-08 22:37:16 +00:00
ab
8dd16e24e6 Update k8s/apps/ollama/openweb-ui-values.yaml 2025-11-08 22:29:36 +00:00
ab
3df95f46a5 Update terraform/authentik/oauth2-apps.tfvars 2025-11-08 22:29:24 +00:00
ab
c0151eb2c9 Update k8s/core/authentik/values.yaml 2025-11-08 22:29:02 +00:00
ab
6d7e365058 Update terraform/authentik/proxy-apps.tfvars 2025-11-08 22:27:59 +00:00
ab
0b5361323a Update terraform/authentik/oauth2-apps.tfvars 2025-11-08 22:27:35 +00:00
ab
56352fef4b Update k8s/apps/ollama/openweb-ui-values.yaml 2025-11-08 22:25:26 +00:00
AB
7a1f792391 Added ollama 2025-11-07 17:33:21 +02:00
AB
defe0cbdf5 Added ollama 2025-11-07 17:22:39 +02:00
AB
7285c62b37 Merge branch 'main' of ssh://gt.hexor.cy:30022/ab/homelab 2025-11-07 17:07:14 +02:00
ab
60f8d86fca Merge pull request 'Auto-update README with k8s applications' (#10) from auto-update-readme-20251107-145618 into main
Reviewed-on: ab/homelab#10
2025-11-07 15:06:01 +00:00
AB
2387653edd Merge branch 'main' of ssh://gt.hexor.cy:30022/ab/homelab 2025-11-07 17:05:25 +02:00
AB
78a639162b Added ollama 2025-11-07 17:05:11 +02:00
ab
90b197bcbe Update terraform/authentik/proxy-apps.tfvars 2025-11-07 14:58:47 +00:00
Gitea Actions Bot
156d26aaf9 Auto-update README with current k8s applications
Generated by CI/CD workflow on 2025-11-07 14:56:18

This PR updates the README.md file with the current list of applications found in the k8s/ directory structure.
2025-11-07 14:56:18 +00:00
AB
700b9cf5ff Added ollama 2025-11-07 16:53:16 +02:00
AB
84bd1fc05a Merge branch 'main' of ssh://gt.hexor.cy:30022/ab/homelab 2025-11-07 16:52:27 +02:00
AB
cb3defd28c Added ollama 2025-11-07 16:52:16 +02:00
ab
47320b9df6 Update k8s/core/authentik/values.yaml 2025-11-07 14:34:43 +00:00
AB
0e4204b2ea Merge branch 'main' of ssh://gt.hexor.cy:30022/ab/homelab 2025-11-07 16:32:58 +02:00
AB
3230c67f16 Added ollama 2025-11-07 16:32:48 +02:00
ab
60c764a657 Update terraform/authentik/proxy-apps.tfvars 2025-11-07 14:32:14 +00:00
AB
869a75f6e4 Merge branch 'main' of ssh://gt.hexor.cy:30022/ab/homelab 2025-11-07 16:11:40 +02:00
AB
26429f1d4c Added ollama 2025-11-07 16:11:29 +02:00
ab
5718981b67 Update k8s/apps/ollama/kustomization.yaml 2025-11-07 14:10:30 +00:00
AB
8c6ca27523 Added ollama 2025-11-07 16:07:59 +02:00
AB
4334ea25b7 Added ollama 2025-11-07 16:07:03 +02:00
AB
18e60344b3 Merge branch 'main' of ssh://gt.hexor.cy:30022/ab/homelab 2025-11-07 15:45:16 +02:00
AB
113f1e668d moved prometheus 2025-11-07 15:45:03 +02:00
ab
9b1caa28ef Merge pull request 'Auto-update README with k8s applications' (#3) from auto-update-readme-20251107-134131 into main
Reviewed-on: ab/homelab#3
2025-11-07 13:42:55 +00:00
Gitea Actions Bot
bd927b7ea6 Auto-update README with current k8s applications
Generated by CI/CD workflow on 2025-11-07 13:41:31

This PR updates the README.md file with the current list of applications found in the k8s/ directory structure.
2025-11-07 13:41:31 +00:00
AB
1688ac67c2 Merge branch 'main' of ssh://gt.hexor.cy:30022/ab/homelab 2025-11-07 15:40:40 +02:00
AB
eda944741e moved prometheus 2025-11-07 15:40:25 +02:00
ab
e744629f85 Merge pull request 'Auto-update README with k8s applications' (#2) from auto-update-readme-20251107-133650 into main
Reviewed-on: ab/homelab#2
2025-11-07 13:37:53 +00:00
Gitea Actions Bot
e38ba5b70c Auto-update README with current k8s applications
Generated by CI/CD workflow on 2025-11-07 13:36:50

This PR updates the README.md file with the current list of applications found in the k8s/ directory structure.
2025-11-07 13:36:50 +00:00
AB
860f83445a Readme CI 2025-11-07 15:34:22 +02:00
AB
8e1e20f478 Readme CI 2025-11-07 15:32:07 +02:00
AB
93afe8b187 Merge branch 'main' of ssh://gt.hexor.cy:30022/ab/homelab 2025-11-07 15:29:17 +02:00
AB
54e83b0af9 Readme CI 2025-11-07 15:29:04 +02:00
ab
cbed0939fa Update k8s/core/postgresql/nas.yaml 2025-11-07 12:47:25 +00:00
ab
e545346fec Update k8s/core/postgresql/nas.yaml 2025-11-07 12:46:42 +00:00
ab
aae0255843 Update k8s/apps/tg-bots/desubot.yaml 2025-11-07 00:01:33 +00:00
ab
c7fc3504ba Update k8s/apps/tg-bots/kustomization.yaml 2025-11-06 23:48:39 +00:00
ab
b2701a8435 Update k8s/apps/tg-bots/desubot.yaml 2025-11-06 23:48:22 +00:00
ab
df39182fea Update k8s/apps/tg-bots/kustomization.yaml 2025-11-06 23:40:09 +00:00
ab
c9bee8c049 Update k8s/apps/tg-bots/external-secrets.yaml 2025-11-06 23:39:33 +00:00
ab
439ad751e9 Update k8s/apps/tg-bots/get-id-bot.yaml 2025-11-06 23:37:35 +00:00
ab
4681a3b263 Add k8s/apps/tg-bots/desubot.yaml 2025-11-06 23:37:17 +00:00
ab
45f208a4ff Update k8s/apps/pasarguard/deployment.yaml 2025-11-06 20:51:44 +00:00
AB
a19648aacc Merge branch 'main' of ssh://gt.hexor.cy:30022/ab/homelab 2025-11-06 22:49:10 +02:00
AB
7d7906edc7 Changed pasarguard sub page 2025-11-06 22:48:58 +02:00
Ultradesu
30a3b1cd60 Fix xray-nodes 2025-11-06 16:53:41 +02:00
Ultradesu
8dfc8e2549 Fix xray-nodes 2025-11-06 16:51:32 +02:00
Ultradesu
8e18775e3e Fix xray-nodes 2025-11-06 16:46:47 +02:00
ab
1ee1b9a1c6 Update k8s/apps/pasarguard/daemonset.yaml 2025-11-06 14:19:18 +00:00
ab
407b11f40d Update k8s/apps/pasarguard/daemonset.yaml 2025-11-06 14:17:31 +00:00
ab
82cf742edb Update terraform/authentik/proxy-apps.tfvars 2025-11-06 14:13:11 +00:00
ab
360f260cb5 Update k8s/apps/pasarguard/daemonset.yaml 2025-11-06 14:08:37 +00:00
ab
31fe1bf95d Update k8s/apps/pasarguard/daemonset.yaml 2025-11-06 14:07:01 +00:00
ab
7c8a3b2aaa Update terraform/authentik/proxy-apps.tfvars 2025-11-06 13:43:07 +00:00
ab
792421ffc5 Update k8s/apps/pasarguard/deployment.yaml
Enabled PasarGuard API
2025-11-06 13:36:29 +00:00
AB
014db9ad81 Merge branch 'main' of ssh://gt.hexor.cy:30022/ab/homelab 2025-11-06 02:35:51 +02:00
AB
602b729d97 Added tg bots 2025-11-06 02:35:40 +02:00
ab
fa7b81c1b3 Update terraform/authentik/proxy-apps.tfvars 2025-11-05 23:34:57 +00:00
ab
7deb5971a5 Update terraform/authentik/proxy-apps.tfvars 2025-11-05 23:31:56 +00:00
AB
dbfdc0f122 Added pasarguard 2025-11-05 16:17:36 +02:00
AB
984f7030a3 Added pasarguard 2025-11-05 15:56:01 +02:00
AB
3e2fe905bd Added pasarguard 2025-11-05 15:50:41 +02:00
AB
45b49944b3 Merge branch 'main' of ssh://gt.hexor.cy:30022/ab/homelab 2025-11-05 15:50:11 +02:00
AB
bd8caa57bf Added pasarguard 2025-11-05 15:50:00 +02:00
ab
c93b8f2797 Update k8s/core/authentik/values.yaml
Added pasarguard
2025-11-05 13:40:24 +00:00
AB
48fee5ceed Added pasarguard 2025-11-05 15:36:02 +02:00
AB
75a9c140af Merge branch 'main' of ssh://gt.hexor.cy:30022/ab/homelab 2025-11-05 15:34:32 +02:00
AB
c8930bbebc Added pasarguard 2025-11-05 15:34:21 +02:00
AB
f52b3aab5b Added pasarguard 2025-11-05 15:34:15 +02:00
ab
6f1b6dee5d Update terraform/authentik/proxy-apps.tfvars
Added pasarguard
2025-11-05 13:32:47 +00:00
AB
5e1aa4cd37 Added pasarguard 2025-11-05 12:21:01 +02:00
AB
fef5303429 Added pasarguard 2025-11-05 12:19:32 +02:00
AB
ba389b8b1e Added pasarguard 2025-11-05 12:15:27 +02:00
AB
79d106468a Added pasarguard 2025-11-05 12:13:42 +02:00
ab
ca19a1f46a Update k8s/core/postgresql/external-secrets.yaml 2025-11-05 09:59:28 +00:00
ab
280fbe41bf Update k8s/core/authentik/kustomization.yaml 2025-11-04 17:44:48 +00:00
ab
32c2ba2781 Add k8s/core/authentik/worker-restart.yaml 2025-11-04 17:44:13 +00:00
ab
1587abfba8 Update k8s/apps/paperless/paperless-values.yaml 2025-10-30 12:48:52 +00:00
ab
704f9dc85e Update k8s/core/postgresql/pgadmin4-values.yaml 2025-10-29 15:54:38 +00:00
Ultradesu
5ff4b8ecb2 Extend prom PV 2025-10-29 16:10:57 +02:00
AB from home.homenet
1d8c0c1421 Added custom coredns resolve config 2025-10-27 11:03:57 +02:00
ab
661c2c31aa Update k8s/apps/vpn/outfleet-rs.yaml 2025-10-25 10:58:57 +00:00
ab
6d76d20d02 Update k8s/apps/vpn/outfleet-rs.yaml 2025-10-24 15:53:15 +00:00
ab
ab3e687cea Update k8s/apps/vpn/outfleet-rs.yaml 2025-10-24 15:51:08 +00:00
AB from home.homenet
6ade2bb13a Pinned xray version 2025-10-24 18:30:38 +03:00
ab
d49c382055 Update k8s/apps/vpn/kustomization.yaml 2025-10-24 15:21:48 +00:00
ab
1b9775c63d Update k8s/apps/vpn/outfleet-rs.yaml 2025-10-24 15:21:32 +00:00
ab
f7838be372 Update README.md 2025-10-24 12:18:36 +00:00
ab
ce74590719 Update README.md 2025-10-24 12:18:06 +00:00
Ultradesu
280bdd3091 Update readme 2025-10-24 15:17:14 +03:00
Ultradesu
c34f5ed0a0 Update readme 2025-10-24 15:14:47 +03:00
Ultradesu
107782318b Update readme 2025-10-24 15:05:52 +03:00
Ultradesu
ed2a59948f Update readme 2025-10-24 15:02:24 +03:00
ab
03f6596262 Update k8s/core/argocd/values.yaml
Enabled ArgoCD statusbages
2025-10-24 11:56:22 +00:00
ab
bed8f5b7c3 Update k8s/core/authentik/values.yaml 2025-10-22 15:49:23 +00:00
ab
676a81852a Disable hexound 2025-10-22 15:31:09 +00:00
ab
73c09f80f7 Update k8s/apps/hexound/kustomization.yaml 2025-10-22 15:30:09 +00:00
ab
104d67bfb3 Add k8s/apps/hexound/ingress.yaml 2025-10-22 15:29:34 +00:00
AB from home.homenet
71e5101604 Deployed OF django back 2025-10-22 15:19:13 +03:00
AB from home.homenet
5783db189a Deployed outfleet-rs 2025-10-22 15:08:13 +03:00
ab
5659e4455b Update terraform/authentik/proxy-apps.tfvars 2025-10-19 11:28:03 +00:00
AB from home.homenet
36e8c5c36b Fix: Drop nginx magic 2025-10-12 15:16:59 +03:00
AB from home.homenet
a6e0165027 Fix: Changed syncthing access and auth scheme 2025-10-12 14:24:07 +03:00
AB from home.homenet
09526f4e91 Fix: Changed syncthing access and auth scheme 2025-10-12 14:18:42 +03:00
AB from home.homenet
d1922019ab Fix: Changed syncthing access and auth scheme 2025-10-12 14:17:16 +03:00
AB from home.homenet
118a1c431a Fix: Changed syncthing access and auth scheme 2025-10-12 14:13:05 +03:00
AB from home.homenet
b9667ea5e7 Fix: Changed syncthing access and auth scheme 2025-10-12 14:09:18 +03:00
AB from home.homenet
b1446c53cd Fix: Changed syncthing access and auth scheme 2025-10-12 13:44:39 +03:00
AB from home.homenet
56fa6a5e05 Fix: Changed syncthing access and auth scheme 2025-10-12 13:36:39 +03:00
AB from home.homenet
aa19cd8e61 Fix: Changed syncthing access and auth scheme 2025-10-12 13:29:10 +03:00
AB from home.homenet
00837fb238 Changed syncthing access and auth scheme 2025-10-12 13:16:39 +03:00
ab
479a2a02ea Update k8s/core/authentik/values.yaml 2025-10-12 10:04:04 +00:00
AB from home.homenet
95e12df43d Changed syncthing access and auth scheme 2025-10-12 13:02:32 +03:00
ab
5a33337aa1 Update k8s/core/prometheus/grafana-values.yaml 2025-10-06 13:07:16 +00:00
ab
ce9ba3661b Update terraform/authentik/oauth2-apps.tfvars 2025-10-06 12:43:58 +00:00
ab
8bcba25f7e Update terraform/authentik/oauth2-apps.tfvars 2025-10-06 11:26:42 +00:00
ab
be9f42afa7 Update terraform/authentik/oauth2-apps.tfvars 2025-10-06 11:09:09 +00:00
ab
656ec121d2 Update k8s/core/postgresql/kustomization.yaml 2025-10-04 03:26:37 +00:00
ab
240fc4127f Update k8s/core/postgresql/pgadmin4-values.yaml 2025-10-04 03:26:01 +00:00
ab
9b19d8ddd8 Update k8s/apps/paperless/paperless-values.yaml 2025-10-04 03:22:38 +00:00
ab
0b8fe99ee1 Update k8s/core/system-upgrade/plan.yaml 2025-10-04 02:53:55 +00:00
Ultradesu
cff6c28b72 extended shared memory for authentik worker 2025-10-04 03:00:00 +01:00
ab
99a63eb840 Update terraform/authentik/proxy-apps.tfvars 2025-09-20 16:55:22 +00:00
ab
4f3be5b14a Update k8s/core/postgresql/external-secrets.yaml 2025-09-17 14:36:58 +00:00
ab
9f5ec499dc Update k8s/core/postgresql/external-secrets.yaml 2025-09-17 14:35:02 +00:00
AB from home.homenet
7b169b8e6d Added wiki generator 2025-09-16 16:35:11 +03:00
ab
a79003740a Update .gitea/workflows/authentik-apps.yaml 2025-09-16 13:30:48 +00:00
AB from home.homenet
6276d543bc Added wiki generator 2025-09-16 16:26:18 +03:00
AB from home.homenet
18a9bfa22a Added wiki generator 2025-09-16 16:24:30 +03:00
AB from home.homenet
4b6090910c Added wiki generator 2025-09-16 16:21:24 +03:00
AB from home.homenet
cfa796cc94 Added wiki generator 2025-09-16 16:17:30 +03:00
AB from home.homenet
3e4f5500d2 Added wiki generator 2025-09-16 16:15:39 +03:00
AB from home.homenet
9dd761bc8e Added wiki generator 2025-09-16 16:13:26 +03:00
AB from home.homenet
eb3b5183b0 Added wiki generator 2025-09-16 16:11:28 +03:00
AB from home.homenet
c9c75c791b Added wiki generator 2025-09-16 16:09:48 +03:00
AB from home.homenet
8b959fec49 Added wiki generator 2025-09-16 16:04:18 +03:00
AB from home.homenet
1184ff9060 Added wiki generator 2025-09-16 16:01:41 +03:00
AB from home.homenet
18c64ef812 Added wiki generator 2025-09-16 15:59:44 +03:00
AB from home.homenet
993cf1985d Added wiki generator 2025-09-16 15:57:48 +03:00
AB from home.homenet
3daf7cf79a Added wiki generator 2025-09-16 15:56:17 +03:00
AB from home.homenet
caa3354b33 Added wiki generator 2025-09-16 15:53:42 +03:00
AB from home.homenet
68ca195735 Added wiki generator 2025-09-16 15:52:35 +03:00
AB from home.homenet
93d7cb6bf1 Added Authentik TF code 2025-09-16 15:51:21 +03:00
ab
600a1dfb6e Update .gitea/workflows/authentik-apps.yaml 2025-09-16 12:36:03 +00:00
AB from home.homenet
a8c089d9ec Added Authentik TF code 2025-09-16 15:35:36 +03:00
ab
e516f95f77 Update .gitea/workflows/authentik-apps.yaml 2025-09-16 12:33:20 +00:00
ab
84dcdc343a Update .gitea/workflows/authentik-apps.yaml 2025-09-16 12:31:54 +00:00
ab
094d80896a Add .gitea/workflows/authentik-apps.yaml 2025-09-16 12:29:20 +00:00
AB from home.homenet
4ffc42af97 Added Authentik TF code 2025-09-16 15:28:50 +03:00
ab
b1183896f9 Update k8s/core/argocd/values.yaml 2025-09-16 11:57:50 +00:00
ab
6f17dc23f1 Update k8s/core/argocd/values.yaml 2025-09-16 11:23:59 +00:00
ab
e353751031 Update k8s/apps/paperless/external-secrets.yaml 2025-09-16 10:11:17 +00:00
ab
f3baf90672 Update k8s/core/authentik/values.yaml 2025-09-15 22:23:52 +00:00
AB from home.homenet
d71935d063 fmt 2025-09-16 00:28:54 +03:00
AB
7dde0d3f2a Merge branch 'main' of ssh://gt.hexor.cy:30022/ab/homelab 2025-09-15 21:42:20 +03:00
AB
00cbd8830b Added authentik terraform 2025-09-15 21:42:01 +03:00
ab
3760908270 Update k8s/apps/k8s-secrets/deployment.yaml 2025-09-15 12:46:07 +00:00
ab
412d32aa68 Delete k8s/apps/k8s-secrets/test-secret.yaml 2025-09-15 12:45:55 +00:00
ab
a6dd3638bb Update k8s/apps/k8s-secrets/deployment.yaml 2025-09-15 12:44:34 +00:00
ab
8e445f01ae Add k8s/apps/k8s-secrets/test-secret.yaml 2025-09-15 12:44:17 +00:00
ab
3a8b14d3b4 Update k8s/apps/k8s-secrets/deployment.yaml 2025-09-15 11:42:28 +00:00
ab
7dde46f40d Update k8s/apps/k8s-secrets/deployment.yaml 2025-09-15 11:35:12 +00:00
ab
60fcf95476 Update k8s/core/authentik/values.yaml 2025-09-14 13:38:19 +00:00
ab
7e95c361a8 Update k8s/core/authentik/values.yaml 2025-09-14 13:35:57 +00:00
ab
0e5cb7686f Update k8s/core/authentik/values.yaml 2025-09-14 13:34:49 +00:00
ab
15e1718861 Update k8s/core/authentik/values.yaml 2025-09-14 12:16:29 +00:00
ab
902d630d09 Update k8s/core/authentik/kustomization.yaml 2025-09-12 09:44:00 +00:00
ab
9b7f953bd3 Update k8s/apps/greece-notifier/deployment.yaml 2025-09-10 22:48:48 +00:00
ab
228a5e3b1c Update k8s/apps/greece-notifier/deployment.yaml 2025-09-10 22:48:10 +00:00
ab
3a0bc6e0d2 Update k8s/apps/greece-notifier/deployment.yaml 2025-09-10 22:44:54 +00:00
Ultradesu
39ac71e4ef added greece-notifier 2025-09-10 19:44:59 +03:00
Ultradesu
fb80dfad0b added greece-notifier 2025-09-10 19:43:43 +03:00
Ultradesu
8c3d29c5b6 added greece-notifier 2025-09-10 19:42:15 +03:00
ab
1b69064a9a Update k8s/apps/immich/deployments.yaml 2025-09-10 10:48:06 +00:00
ab
21fde3cfb4 Update k8s/apps/syncthing/syncthing-master.yaml 2025-09-10 00:28:39 +00:00
ab
201179f57d Update k8s/apps/syncthing/syncthing-khv.yaml 2025-09-10 00:28:23 +00:00
ab
181b577255 Update k8s/apps/syncthing/syncthing-nas.yaml 2025-09-10 00:10:54 +00:00
ab
10773abd14 Update k8s/apps/syncthing/syncthing-khv.yaml 2025-09-09 23:27:57 +00:00
ab
049542f434 Update k8s/apps/syncthing/kustomization.yaml 2025-09-09 22:45:30 +00:00
ab
ef6c7998a4 Add k8s/apps/syncthing/syncthing-nas.yaml 2025-09-09 22:44:51 +00:00
ab
ee703e8399 Update k8s/apps/syncthing/syncthing-master.yaml 2025-09-09 22:41:25 +00:00
ab
399833de34 Update k8s/apps/gitea/deployment.yaml 2025-09-09 19:59:24 +00:00
ab
4c571f63e7 Update k8s/apps/gitea/deployment.yaml 2025-09-09 19:54:13 +00:00
ab
56125caedf Update k8s/apps/immich/deployments.yaml 2025-09-09 19:47:01 +00:00
ab
4aae2a87d6 Update k8s/core/system-upgrade/plan.yaml 2025-09-08 10:39:02 +00:00
ab
bbc1ba4c2a Update k8s/core/system-upgrade/plan.yaml 2025-09-08 10:38:41 +00:00
ab
5924dcfc5f Update k8s/core/authentik/values.yaml 2025-09-04 21:40:39 +00:00
ab
2cb9b5636c Update k8s/games/counter-strike-16/restart-job.yaml 2025-09-04 16:38:43 +00:00
ab
a2d631d06f Update k8s/games/counter-strike-16/kustomization.yaml 2025-09-04 16:36:15 +00:00
ab
aa61b78c47 Add k8s/games/counter-strike-16/restart-job.yaml
Added cs16 restart
2025-09-04 16:35:57 +00:00
Ultradesu
27fe1d2f50 Added k8s-secrets 2025-09-03 14:02:04 +03:00
Ultradesu
557d03eb32 Added k8s-secrets 2025-09-03 13:51:53 +03:00
Ultradesu
f06c80162f Added k8s-secrets 2025-09-03 13:48:29 +03:00
ab
e8588a2d3b Update k8s/apps/paperless/paperless-values.yaml 2025-08-26 19:43:58 +00:00
ab
621871be93 Update k8s/core/system-upgrade/plan.yaml 2025-08-26 19:30:55 +00:00
ab
91dcd9d5bf Update k8s/core/system-upgrade/plan.yaml 2025-08-26 19:29:30 +00:00
ab
e40357ad78 Update k8s/apps/gitea/deployment.yaml 2025-08-26 18:10:43 +00:00
ab
fa544b3b7b Update k8s/apps/gitea/deployment.yaml 2025-08-26 17:59:16 +00:00
ab
8c112a1b0a Update k8s/apps/gitea/deployment.yaml 2025-08-26 17:57:09 +00:00
ab
e1ffaa8ba2 Update k8s/core/postgresql/external-secrets.yaml 2025-08-12 23:41:34 +00:00
ab
1a2b09bcaa Update k8s/core/authentik/values.yaml 2025-07-29 16:35:21 +00:00
ab
21f27adc80 Update k8s/core/system-upgrade/plan.yaml 2025-07-29 16:26:50 +00:00
ab
142c828f2b Update k8s/games/minecraft/kustomization.yaml 2025-07-27 12:56:39 +00:00
ab
5a01da21af Update k8s/games/minecraft/kustomization.yaml 2025-07-27 12:55:02 +00:00
ab
269b2b5221 Add k8s/games/minecraft/ingress.yaml 2025-07-27 12:54:42 +00:00
ab
1417fa830d Update k8s/core/system-upgrade/plan.yaml 2025-07-27 12:07:07 +00:00
ab
6b85343c9e Update k8s/games/minecraft/services.yaml 2025-07-25 15:48:23 +00:00
ab
0bcd23009c Update k8s/games/minecraft/services.yaml 2025-07-25 15:45:28 +00:00
ab
02b20c9fcc Update k8s/games/minecraft/deployments.yaml 2025-07-25 15:43:10 +00:00
ab
dbe0fa9acf Update k8s/apps/vpn/outfleet.yaml 2025-07-20 20:23:30 +00:00
ab
7d447163cb Update k8s/apps/vpn/outfleet.yaml 2025-07-20 20:22:07 +00:00
ab
b58a930052 Update k8s/apps/vpn/outfleet.yaml 2025-07-20 20:20:23 +00:00
ab
bf9b37b55f Update k8s/games/minecraft/services.yaml 2025-07-20 14:59:40 +00:00
Ultradesu
e093dd4f57 Fix secrets 2025-07-18 16:08:53 +03:00
Ultradesu
18a27dadcb Fix secrets 2025-07-18 16:07:20 +03:00
Ultradesu
288a4685d7 Fix secrets 2025-07-18 16:03:29 +03:00
Ultradesu
24d570e522 Adjust Rustdesk config 2025-07-18 15:41:27 +03:00
Ultradesu
7541cee2eb Adjust Rustdesk config 2025-07-18 15:39:54 +03:00
Ultradesu
c3fa6eb874 Adjust Rustdesk config 2025-07-18 15:34:01 +03:00
Ultradesu
414d60edb4 Adjust Rustdesk config 2025-07-18 15:33:09 +03:00
Ultradesu
364f5b38a9 Adjust Rustdesk certs 2025-07-18 15:30:02 +03:00
Ultradesu
e59215d2e9 Adjust Rustdesk certs 2025-07-18 15:28:31 +03:00
Ultradesu
3c6da4969c Adjust Rustdesk certs 2025-07-18 15:27:35 +03:00
Ultradesu
c08a3e745e Adjust Rustdesk certs 2025-07-18 15:26:22 +03:00
Ultradesu
00177d347f Adjust Rustdesk certs 2025-07-18 15:22:18 +03:00
Ultradesu
ca7fed506b Adjust Rustdesk certs 2025-07-18 15:12:17 +03:00
Ultradesu
2f0ada97cc Adjust Rustdesk certs 2025-07-18 15:06:00 +03:00
Ultradesu
ae516a79eb Adjust Rustdesk certs 2025-07-18 15:01:42 +03:00
Ultradesu
196d53a5a9 Added rustdesk 2025-07-18 14:47:26 +03:00
Ultradesu
24d4d34733 Adjusted resources on all deployments 2025-07-18 13:46:54 +03:00
ab
74b7611ea0 Update k8s/games/counter-strike-16/deployments.yaml 2025-07-17 17:05:36 +00:00
ab
91739d622e Update k8s/games/counter-strike-16/services.yaml 2025-07-17 17:02:04 +00:00
ab
7730458061 Update k8s/games/counter-strike-16/deployments.yaml 2025-07-17 17:01:24 +00:00
ab
b482c2e768 Update k8s/games/counter-strike-16/services.yaml 2025-07-17 16:57:58 +00:00
ab
7256c98046 Update k8s/games/counter-strike-16/deployments.yaml 2025-07-17 16:49:18 +00:00
ab
56d315eb4a Update k8s/games/counter-strike-16/services.yaml 2025-07-17 16:45:23 +00:00
ab
58a2cd4a74 Update k8s/games/counter-strike-16/deployments.yaml 2025-07-17 14:35:01 +00:00
ab
0052a81389 Update k8s/games/counter-strike-16/deployments.yaml 2025-07-17 14:31:18 +00:00
ab
34bd0f1ec4 Update k8s/games/counter-strike-16/services.yaml 2025-07-17 14:20:39 +00:00
ab
c1cedeaa13 Update k8s/games/beam-ng/services.yaml 2025-07-17 14:18:31 +00:00
ab
a37ccbe5ef Update k8s/games/counter-strike-16/services.yaml 2025-07-17 12:50:22 +00:00
ab
cc0a6559da Update k8s/games/counter-strike-16/services.yaml 2025-07-17 12:45:19 +00:00
ab
88021e4bc0 Update k8s/games/counter-strike-16/services.yaml 2025-07-17 12:42:12 +00:00
ab
81fa68af48 Update k8s/games/counter-strike-16/services.yaml 2025-07-17 12:39:09 +00:00
ab
2a9c18cae0 Update k8s/games/minecraft/deployments.yaml 2025-07-17 10:53:30 +00:00
ab
be5d76c1e8 Update k8s/core/prometheus/prom-values.yaml 2025-07-17 10:35:58 +00:00
ab
229190f0e8 Update k8s/core/prometheus/prom-values.yaml 2025-07-17 10:34:32 +00:00
ab
d7adb966c4 Update k8s/core/prometheus/prom-values.yaml 2025-07-17 10:34:15 +00:00
ab
f8ef2a48f5 Update k8s/games/counter-strike-16/services.yaml 2025-07-17 10:31:52 +00:00
ab
a7cfc086d5 Update k8s/games/counter-strike-16/deployments.yaml 2025-07-17 10:28:41 +00:00
ab
dfba5434f7 Update k8s/core/postgresql/external-secrets.yaml 2025-07-16 23:46:33 +00:00
AB from home.homenet
4c68ddfc3a Added beam-ng server 2025-07-17 02:38:23 +03:00
AB from home.homenet
98a11199d0 Added beam-ng server 2025-07-17 02:37:40 +03:00
ab
c9173fbcc3 Update k8s/games/counter-strike-16/services.yaml 2025-07-16 22:26:48 +00:00
ab
4f91fdd26a Update k8s/games/counter-strike-16/deployments.yaml 2025-07-16 22:25:26 +00:00
AB from home.homenet
b27d5594c5 Added cs16 server 2025-07-17 01:19:47 +03:00
ab
ae02f0fe2a Update k8s/games/minecraft/deployments.yaml 2025-07-16 22:19:09 +00:00
AB from home.homenet
b682b7f8ef Added cs16 server 2025-07-17 01:15:35 +03:00
ab
d7a425d005 Update k8s/core/system-upgrade/crd.yaml 2025-07-16 18:08:44 +00:00
ab
422269f5e9 Update k8s/core/system-upgrade/crd.yaml 2025-07-16 18:08:21 +00:00
ab
a99b549e2e Update k8s/core/argocd/values.yaml 2025-07-16 14:34:57 +00:00
ab
a3c26117b3 Update k8s/core/argocd/values.yaml 2025-07-16 14:34:24 +00:00
ab
5f8216cc7b Update k8s/core/argocd/values.yaml 2025-07-16 14:33:45 +00:00
ab
ceb405b069 Update k8s/core/argocd/values.yaml 2025-07-16 14:16:13 +00:00
ab
f53ea1976c Update k8s/core/argocd/values.yaml 2025-07-16 14:15:01 +00:00
ab
b9e1b73681 Update k8s/core/argocd/values.yaml 2025-07-16 14:05:57 +00:00
ab
1b04222c3e Update k8s/core/argocd/values.yaml 2025-07-16 13:56:02 +00:00
ab
3ed26f872c Update k8s/core/argocd/app.yaml 2025-07-16 13:40:23 +00:00
ab
aa615fe587 Update k8s/core/argocd/external-secrets.yaml 2025-07-16 13:40:06 +00:00
ab
1be64f2f63 Update k8s/core/argocd/kustomization.yaml 2025-07-16 13:31:35 +00:00
ab
1212dfcaec Update .gitea/workflows/lint.yaml 2025-07-15 11:31:42 +00:00
ab
28e06770c6 Bump IDM to 2025.6.3 2025-07-15 11:01:40 +00:00
ab
005cb0db72 Update k8s/core/postgresql/app.yaml 2025-07-14 13:18:55 +00:00
ab
fd80f3ad65 Update k8s/core/postgresql/app.yaml 2025-07-14 13:16:16 +00:00
ab
5281d58fae Update k8s/core/postgresql/app.yaml 2025-07-14 13:15:23 +00:00
ab
4542d03bc5 Update k8s/games/minecraft/deployments.yaml 2025-07-09 15:40:02 +00:00
ab
eb6a2e3e47 Update k8s/games/minecraft/deployments.yaml 2025-07-09 13:44:31 +00:00
ab
311ab269b6 Update k8s/games/minecraft/deployments.yaml 2025-07-09 13:41:25 +00:00
ab
5fa5843fa1 Update k8s/games/minecraft/deployments.yaml 2025-07-09 13:22:59 +00:00
ab
006f607e0d Update k8s/games/minecraft/deployments.yaml 2025-07-09 13:19:37 +00:00
ab
77371cd640 Update k8s/games/minecraft/deployments.yaml 2025-07-09 13:16:41 +00:00
ab
e3373dfb5f Update k8s/games/minecraft/deployments.yaml 2025-07-09 12:32:52 +00:00
ab
c3eb8ffc5c Update k8s/games/minecraft/deployments.yaml 2025-07-09 12:31:40 +00:00
ab
c5eb2a80c2 Update k8s/games/minecraft/deployments.yaml 2025-07-09 12:15:19 +00:00
ab
46527d924a Update k8s/games/minecraft/deployments.yaml 2025-07-09 12:13:16 +00:00
ab
0c5076c649 Update k8s/games/minecraft/deployments.yaml 2025-07-09 11:40:28 +00:00
ab
acf1f88412 Update k8s/games/minecraft/deployments.yaml 2025-07-09 11:37:44 +00:00
ab
01a88e21a2 Update k8s/games/minecraft/deployments.yaml 2025-07-09 11:22:55 +00:00
ab
fbfbaf0826 Update k8s/games/minecraft/deployments.yaml 2025-07-09 11:17:37 +00:00
ab
bf70cae59e Update k8s/games/minecraft/deployments.yaml 2025-07-09 11:12:54 +00:00
ab
95ea0c21fb Update k8s/games/minecraft/deployments.yaml 2025-07-09 11:01:13 +00:00
ab
816fa3662d Update k8s/games/minecraft/deployments.yaml 2025-07-09 10:49:04 +00:00
ab
caeb350ece Update k8s/games/minecraft/deployments.yaml 2025-07-09 10:47:21 +00:00
ab
ab184e559d Update k8s/games/minecraft/deployments.yaml 2025-07-09 10:43:25 +00:00
ab
a6002e7cc3 Update k8s/games/minecraft/deployments.yaml 2025-07-09 10:41:01 +00:00
ab
03f61962f7 Update k8s/games/minecraft/deployments.yaml 2025-07-09 10:38:49 +00:00
ab
2ebc8e718e Update k8s/games/minecraft/deployments.yaml 2025-07-09 10:35:12 +00:00
ab
a6cc4b067f Update k8s/games/minecraft/deployments.yaml 2025-07-09 10:33:45 +00:00
ab
37e79a1175 Update k8s/games/minecraft/deployments.yaml 2025-07-09 10:30:45 +00:00
ab
431f0df03d Update k8s/games/minecraft/deployments.yaml 2025-07-09 10:30:07 +00:00
ab
bd91762c9d Update k8s/games/minecraft/deployments.yaml 2025-07-09 10:05:20 +00:00
ab
e4c86235ae Update k8s/games/minecraft/deployments.yaml 2025-07-09 10:02:13 +00:00
ab
72a1154610 Update k8s/games/minecraft/deployments.yaml 2025-07-09 09:50:05 +00:00
ab
0beb0cd78b Update k8s/games/minecraft/deployments.yaml 2025-07-09 09:49:51 +00:00
ab
e342aab9df Update k8s/games/minecraft/deployments.yaml 2025-07-09 09:49:30 +00:00
ab
26f811c3b7 Update k8s/games/minecraft/deployments.yaml 2025-07-09 09:45:31 +00:00
ab
d1e834d175 Update k8s/games/minecraft/deployments.yaml 2025-07-09 09:41:43 +00:00
ab
02ec8fd4e1 Update k8s/games/minecraft/deployments.yaml 2025-07-09 09:39:20 +00:00
ab
7565c6c34f Update k8s/games/minecraft/deployments.yaml 2025-07-09 09:37:34 +00:00
ab
a45c11f883 Update k8s/games/minecraft/deployments.yaml 2025-07-09 09:35:31 +00:00
ab
cfc15d05eb Update k8s/games/minecraft/deployments.yaml 2025-07-09 09:33:02 +00:00
ab
3d1658f41d Update k8s/apps/gitea/deployment.yaml 2025-07-03 15:52:04 +00:00
ab
51a8cc1834 Update k8s/apps/gitea/deployment.yaml 2025-07-03 12:13:58 +00:00
ab
5dcbc9b11f Update k8s/apps/gitea/deployment.yaml 2025-07-03 12:00:36 +00:00
ab
aed859b8e9 Update k8s/apps/gitea/external-secrets.yaml 2025-07-03 12:00:09 +00:00
ab
05f277c8cd Update k8s/apps/gitea/external-secrets.yaml 2025-07-03 11:43:01 +00:00
ab
e25e9a8608 Update k8s/apps/gitea/deployment.yaml 2025-07-03 11:38:16 +00:00
ab
2ef7b23c69 Update k8s/apps/gitea/external-secrets.yaml 2025-07-03 11:37:59 +00:00
ab
4184534c8c Update k8s/apps/gitea/external-secrets.yaml 2025-07-03 11:31:56 +00:00
ab
145bdcaca1 Update k8s/apps/gitea/deployment.yaml 2025-07-03 11:30:54 +00:00
ab
e0ef44d8bd Update k8s/apps/gitea/external-secrets.yaml 2025-07-03 11:30:40 +00:00
ab
628c250a0b Update k8s/apps/gitea/deployment.yaml 2025-07-03 11:25:59 +00:00
ab
2e0df4ad1b Update k8s/apps/gitea/deployment.yaml 2025-07-03 11:18:30 +00:00
ab
120d68bd57 Update k8s/apps/gitea/deployment.yaml 2025-07-03 11:16:40 +00:00
ab
6f7fc0b796 Update k8s/apps/gitea/external-secrets.yaml 2025-07-03 11:15:21 +00:00
ab
a4f043c5c6 Update k8s/apps/gitea/external-secrets.yaml 2025-07-03 11:14:03 +00:00
ab
640447a4e0 Update k8s/apps/gitea/deployment.yaml 2025-06-25 22:11:12 +00:00
ab
b55e1b936b Update k8s/apps/gitea/deployment.yaml 2025-06-25 22:07:10 +00:00
ab
e939b14796 Update k8s/games/minecraft/deployments.yaml 2025-06-25 21:51:03 +00:00
ab
a9d63a7c0c Merge pull request 'main' (#1) from xelnagamex/homelab:main into main
Reviewed-on: ab/homelab#1
2025-06-25 21:42:51 +00:00
167 changed files with 8180 additions and 343 deletions


@@ -0,0 +1,147 @@
# Auto-Update README Workflow
## Overview
This workflow automatically updates the `README.md` file with the current list of ArgoCD applications based on the directory structure in `k8s/`.
## How it works
1. **Trigger**: Workflow runs automatically when changes are pushed to `k8s/**` paths
2. **Scan**: Python script scans `k8s/` directory structure and finds all applications
3. **Generate**: Creates README.md with badges for all found applications
4. **Create PR**: If changes detected, creates a Merge Request for manual review
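
As a rough illustration of step 4, here is a minimal sketch of opening such a PR through Gitea's pull-request API (`POST /api/v1/repos/{owner}/{repo}/pulls`) with the `requests` package. The branch name and title mirror the auto-update PRs visible in the commit log above; the surrounding workflow logic is an assumption, not the actual script:

```python
import os
import requests

# Sketch of step 4: open a PR from the generated branch into main via
# Gitea's API. Branch naming mirrors the auto-update PRs seen in this
# repository's history (e.g. auto-update-readme-20251126-161845).
def create_readme_pr(branch: str) -> None:
    base_url = os.environ["GT_URL"].rstrip("/")
    owner = os.environ["GT_OWNER"]
    repo = os.environ["GT_REPO"]
    resp = requests.post(
        f"{base_url}/api/v1/repos/{owner}/{repo}/pulls",
        headers={"Authorization": f"token {os.environ['GT_TOKEN']}"},
        json={
            "head": branch,
            "base": "main",
            "title": "Auto-update README with k8s applications",
        },
        timeout=10,
    )
    resp.raise_for_status()
```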
## Files
- `.gitea/workflows/update-readme.yaml` - Gitea Actions workflow
- `.gitea/scripts/generate-readme.py` - Python script for README generation
## Directory Structure
The script expects the following k8s directory structure:
```
k8s/
├── core/     # Core infrastructure applications
│   ├── argocd/
│   ├── authentik/
│   └── ...
├── apps/     # User applications
│   ├── gitea/
│   ├── immich/
│   └── ...
└── games/    # Game servers
    ├── minecraft/
    └── ...
```
Each subdirectory name becomes an application name in the README.
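As a rough illustration (the real implementation is `scan_k8s_directory()` in `.gitea/scripts/generate-readme.py`, included later in this diff), the scan step amounts to listing subdirectories per category:

```python
from pathlib import Path

# Sketch of the scan step: every subdirectory of k8s/<category>/ is
# treated as an application; hidden directories are skipped.
def list_apps(k8s_path: str = "k8s") -> dict[str, list[str]]:
    apps: dict[str, list[str]] = {}
    for category in ("core", "apps", "games"):
        cat_dir = Path(k8s_path) / category
        if cat_dir.is_dir():
            apps[category] = sorted(
                d.name for d in cat_dir.iterdir()
                if d.is_dir() and not d.name.startswith(".")
            )
    return apps

if __name__ == "__main__":
    print(list_apps())
```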
## Required Secrets
The workflow requires the following secrets to be configured in Gitea:
| Secret | Description | Example |
|--------|-------------|---------|
| `GT_URL` | Gitea instance URL | `https://gt.hexor.cy` |
| `GT_TOKEN` | Gitea API token with repo write access | `glpat-xxxxx...` |
| `GT_OWNER` | Repository owner (username or org) | `ab` |
| `GT_REPO` | Repository name | `homelab` |
### How to create a Gitea Token
1. Go to Settings → Applications → Generate New Token
2. Give it a name like "README Update Bot"
3. Select scopes: `repo` (Full control of repositories)
4. Generate and copy the token
5. Add it as a secret in repository settings
## Badge Format
Badges are generated using a predictable pattern:
```markdown
[![app-name](https://ag.hexor.cy/api/badge?name=app-name&revision=true)](https://ag.hexor.cy/applications/argocd/app-name)
```
This allows you to immediately see which applications are:
- ✅ Healthy and synced (green badge)
- ⚠️ Degraded or out of sync (yellow badge)
- ❌ Unhealthy or failed (red badge)
## Manual Trigger
You can manually trigger the workflow from Gitea:
1. Go to Actions tab
2. Select "Auto-update README" workflow
3. Click "Run workflow"
4. Select branch and run
## Example Output
The generated README will look like:
```markdown
# homelab
ArgoCD homelab project
## ArgoCD Applications Status
| Application | Status |
| :--- | :---: |
| **argocd** | [![argocd](https://ag.hexor.cy/api/badge?name=argocd&revision=true)](https://ag.hexor.cy/applications/argocd/argocd) |
...
```
## Reviewing Pull Requests
When the workflow creates a PR:
1. Check the Actions tab for the workflow run details
2. Review the PR in the Pull Requests tab
3. Verify the application list matches your k8s/ structure
4. Merge when ready
The PR will include:
- Updated application list
- Timestamp of generation
- Automatic commit message
## Troubleshooting
### No PR created
- Check if there are actually changes in README.md
- Verify secrets are configured correctly
- Check workflow logs in Actions tab
### Wrong applications listed
- Verify k8s/ directory structure
- Ensure folder names match expected application names
- Check for hidden directories (starting with `.`)
### Badge not loading
- Verify ArgoCD badge API is accessible at `https://ag.hexor.cy`
- Check application name matches ArgoCD application name
- Ensure application exists in ArgoCD
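
A quick, dependency-free way to probe the badge endpoint when debugging (a sketch; it assumes the API returns an SVG body on success and that `ag.hexor.cy` is reachable from where you run it):

```python
import sys
import urllib.request

# Probe the ArgoCD badge endpoint for a given application name.
def badge_ok(app_name: str) -> bool:
    url = f"https://ag.hexor.cy/api/badge?name={app_name}&revision=true"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            head = resp.read(200).lower()
            # A healthy endpoint answers 200 with an SVG body.
            return resp.status == 200 and b"<svg" in head
    except OSError as e:
        print(f"badge unreachable: {e}", file=sys.stderr)
        return False

if __name__ == "__main__":
    app = sys.argv[1] if len(sys.argv) > 1 else "argocd"
    print("OK" if badge_ok(app) else "FAILED")
```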
## Maintenance
### Update badge URL
If you need to change the badge URL pattern, edit:
- `.gitea/scripts/generate-readme.py` - function `generate_badge_line()`
### Change workflow trigger
To modify when the workflow runs, edit:
- `.gitea/workflows/update-readme.yaml` - `on:` section
### Add new categories
To add new categories (besides core/apps/games), edit:
- `.gitea/scripts/generate-readme.py` - functions `scan_k8s_directory()` and `generate_readme_content()`

.gitea/scripts/README.md Normal file

@@ -0,0 +1,91 @@
# Wiki Generators for Homelab
Automated Wiki page generation scripts for documenting homelab infrastructure.
## 1. Authentik Applications Wiki Generator
Generates a Wiki page with all applications managed by Authentik from Terraform configuration.
### Files:
- `generate-apps-wiki.py` - Generates Applications.md from Terraform output
- `process-terraform-output.py` - Processes Terraform JSON output
### Workflow:
- **Trigger**: Push to `main` branch with Terraform changes
- **Workflow**: `.gitea/workflows/authentik-apps.yaml`
- **Output**: Applications Wiki page
## 2. Kubernetes Services Wiki Generator
Analyzes k8s/ directory and generates comprehensive documentation for all Kubernetes services.
### Files:
- `generate-k8s-wiki.py` - Main script for analyzing k8s services
### Features:
- **Service Types**: Detects Helm Charts, Kustomize, and YAML manifests
- **ArgoCD Integration**: Shows auto-sync status and project info
- **Service Discovery**: Lists all services, ingresses, and external secrets
- **Categorization**: Groups by apps, core, games categories
- **Detailed Analysis**: Shows deployments, containers, files
### Workflow:
- **Trigger**: Changes in `k8s/` directory
- **Workflow**: `.gitea/workflows/k8s-wiki.yaml`
- **Output**: Kubernetes-Services Wiki page
## Gitea Secrets Configuration
Required secrets in repository settings:
```
GT_URL=https://gt.hexor.cy
GT_WIKI_TOKEN=your_gitea_access_token
GT_OWNER=your_username
GT_REPO=homelab
```
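
To confirm the token and URL are wired up before debugging the generators themselves, a small sanity check (a sketch; assumes the `requests` package and Gitea's standard `token` authorization header):

```python
import os
import requests

# Read the same variables the workflows use and verify the token can at
# least read the target repository.
base = os.environ["GT_URL"].rstrip("/")
owner = os.environ["GT_OWNER"]
repo = os.environ["GT_REPO"]
token = os.environ["GT_WIKI_TOKEN"]

resp = requests.get(
    f"{base}/api/v1/repos/{owner}/{repo}",
    headers={"Authorization": f"token {token}"},
    timeout=10,
)
resp.raise_for_status()
print(f"Token OK: can read {owner}/{repo}")
```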
## Generated Wiki Pages Structure
### Applications Page
- Table with icons (32x32), external/internal URLs
- Statistics by type (Proxy vs OAuth2)
- Grouping by categories (Core, Tools, Media, etc.)
### Kubernetes Services Page
- Overview table with service types and status
- Detailed sections by category
- ArgoCD integration status
- Service discovery information
## Local Testing
### Authentik Apps:
```bash
cd terraform/authentik
terraform output -json > terraform-output.json
python3 ../../.gitea/scripts/process-terraform-output.py terraform-output.json processed-output.json
python3 ../../.gitea/scripts/generate-apps-wiki.py processed-output.json
```
### K8s Services:
```bash
pip install pyyaml
python3 .gitea/scripts/generate-k8s-wiki.py k8s/ Kubernetes-Services.md
```
## Troubleshooting
### Common Issues:
1. **Terraform output parsing errors**
- Check for [command] prefix in output
- Verify JSON structure with debug mode
2. **Wiki upload failures**
- Verify Gitea token permissions
- Check network connectivity to Gitea instance
3. **YAML parsing errors in k8s analysis**
- Ensure valid YAML syntax in k8s files
- Check PyYAML installation


.gitea/scripts/generate-apps-wiki.py Normal file
@@ -0,0 +1,226 @@
#!/usr/bin/env python3
"""
Script for generating Wiki page with applications list from Terraform outputs
"""
import json
import sys
import os
from datetime import datetime


def generate_markdown_table(apps_data):
    """Generates Markdown table for applications"""
    # Combine all applications
    all_apps = []
    if 'proxy_apps' in apps_data:
        for key, app in apps_data['proxy_apps'].items():
            all_apps.append({
                'key': key,
                'name': app['name'],
                'type': app['type'],
                'url': app['url'],
                'internal_url': app.get('internal_url', '-'),
                'group': app['group'],
                'description': app['description'],
                'icon': app['icon'],
                'slug': app['slug']
            })
    if 'oauth_apps' in apps_data:
        for key, app in apps_data['oauth_apps'].items():
            all_apps.append({
                'key': key,
                'name': app['name'],
                'type': app['type'],
                'url': app['url'],
                'internal_url': '-',  # OAuth apps don't have internal URLs
                'group': app['group'],
                'description': app['description'],
                'icon': app['icon'],
                'slug': app['slug']
            })
    # Sort by groups, then by name
    all_apps.sort(key=lambda x: (x['group'], x['name']))
    # Generate Markdown
    markdown = []
    markdown.append("# Authentik Applications")
    markdown.append("")
    markdown.append(f"*Automatically generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S UTC')}*")
    markdown.append("")
    markdown.append("## All Applications")
    markdown.append("")
    # Table
    markdown.append("| Icon | Name | Type | External URL | Internal URL | Group | Description |")
    markdown.append("|:----:|------|------|--------------|--------------|-------|-------------|")
    for app in all_apps:
        # Icon with size constraint
        if app['icon']:
            icon = f'<img src="{app["icon"]}" width="32" height="32" alt="{app["name"]}">'
        else:
            icon = "📱"
        # External URL link
        external_link = f"[🔗 {app['url'].replace('https://', '').replace('http://', '')}]({app['url']})" if app.get('url') else "-"
        # Internal URL (only for proxy apps)
        internal_url = app.get('internal_url', '-')
        if internal_url != '-':
            # Show full internal URL without shortening
            internal_url = f"`{internal_url}`"
        description = app['description'] if app['description'] else "-"
        markdown.append(f"| {icon} | **{app['name']}** | {app['type']} | {external_link} | {internal_url} | {app['group']} | {description} |")
    markdown.append("")
    # Statistics
    proxy_count = len(apps_data.get('proxy_apps', {}))
    oauth_count = len(apps_data.get('oauth_apps', {}))
    total_count = proxy_count + oauth_count
    markdown.append("## Statistics")
    markdown.append("")
    markdown.append(f"- **Total applications**: {total_count}")
    markdown.append(f"- **Proxy applications**: {proxy_count}")
    markdown.append(f"- **OAuth2/OpenID applications**: {oauth_count}")
    markdown.append("")
    # Grouping by types
    groups = {}
    for app in all_apps:
        group = app['group']
        if group not in groups:
            groups[group] = {'proxy': 0, 'oauth': 0}
        if app['type'] == 'Proxy':
            groups[group]['proxy'] += 1
        else:
            groups[group]['oauth'] += 1
    markdown.append("## Applications by Groups")
    markdown.append("")
    for group, counts in sorted(groups.items()):
        total = counts['proxy'] + counts['oauth']
        markdown.append(f"- **{group}**: {total} applications (Proxy: {counts['proxy']}, OAuth: {counts['oauth']})")
    markdown.append("")
    markdown.append("---")
    markdown.append("*This page is automatically generated via Terraform CI/CD*")
    return "\n".join(markdown)


def parse_terraform_output(output_data):
    """Parse Terraform output JSON structure"""
    # Check if this is a full terraform output (with value, type, sensitive fields)
    if isinstance(output_data, dict) and 'applications_for_wiki' in output_data:
        # This is full terraform output format
        app_output = output_data.get('applications_for_wiki', {})
        if isinstance(app_output, dict) and 'value' in app_output:
            return app_output['value']
        else:
            return app_output
    else:
        # This is already the value extracted
        return output_data


def main():
    if len(sys.argv) < 2:
        print("Usage: python3 generate-apps-wiki.py <terraform-output-json> [--debug]")
        sys.exit(1)
    output_file = sys.argv[1]
    debug = "--debug" in sys.argv
    try:
        # Check if file exists and has content
        if not os.path.exists(output_file):
            print(f"ERROR: File {output_file} not found")
            sys.exit(1)
        file_size = os.path.getsize(output_file)
        if file_size == 0:
            print(f"ERROR: File {output_file} is empty")
            sys.exit(1)
        print(f"📄 Reading Terraform output file: {output_file} ({file_size} bytes)")
        # Read file content
        with open(output_file, 'r') as f:
            content = f.read()
        if debug:
            print(f"🔍 File content preview: {content[:200]}...")
        # Clean content - remove command line if present
        if content.startswith('[command]'):
            print("⚠️ Detected command prefix, removing...")
            lines = content.split('\n', 1)
            if len(lines) > 1:
                content = lines[1]
                if debug:
                    print(f"🔍 Cleaned content preview: {content[:200]}...")
            else:
                print("ERROR: File contains only command line, no JSON data")
                sys.exit(1)
        # Parse JSON
        try:
            terraform_output = json.loads(content)
        except json.JSONDecodeError as e:
            print(f"ERROR: Invalid JSON in {output_file}: {e}")
            print(f"Content starts with: {repr(content[:100])}")
            # Try to find where JSON starts
            json_start = content.find('{')
            if json_start > 0:
                print(f"Found JSON starting at position {json_start}, retrying...")
                content = content[json_start:]
                try:
                    terraform_output = json.loads(content)
                except json.JSONDecodeError as e2:
                    print(f"ERROR: Still invalid JSON: {e2}")
                    sys.exit(1)
            else:
                sys.exit(1)
        # Extract application data using helper function
        apps_data = parse_terraform_output(terraform_output)
        if not apps_data:
            print("ERROR: No applications data found in Terraform output")
            if debug:
                print(f"Full output structure: {json.dumps(terraform_output, indent=2)[:500]}...")
            sys.exit(1)
        # Check if we have correct structure
        if 'proxy_apps' not in apps_data and 'oauth_apps' not in apps_data:
            print("ERROR: Expected 'proxy_apps' or 'oauth_apps' in output")
            print(f"Available keys: {list(apps_data.keys())}")
            if debug and apps_data:
                print(f"Data structure: {json.dumps(apps_data, indent=2)[:500]}...")
            sys.exit(1)
        print(f"📊 Found {len(apps_data.get('proxy_apps', {}))} proxy apps, {len(apps_data.get('oauth_apps', {}))} oauth apps")
        # Generate Markdown
        markdown_content = generate_markdown_table(apps_data)
        # Write result
        wiki_file = "Applications.md"
        with open(wiki_file, 'w', encoding='utf-8') as f:
            f.write(markdown_content)
        print(f"✅ Wiki page generated: {wiki_file}")
        print(f"📊 Total applications: {len(apps_data.get('proxy_apps', {})) + len(apps_data.get('oauth_apps', {}))}")
    except Exception as e:
        print(f"ERROR: {e}")
        sys.exit(1)


if __name__ == "__main__":
    main()


.gitea/scripts/generate-k8s-wiki.py Normal file
@@ -0,0 +1,298 @@
#!/usr/bin/env python3
"""
Script for generating Wiki page with Kubernetes services from k8s/ directory
"""
import os
import yaml
import json
import sys
from datetime import datetime
from collections import defaultdict


class K8sService:
    def __init__(self, name, category, path):
        self.name = name
        self.category = category
        self.path = path
        self.namespace = None
        self.deployment_type = "Unknown"
        self.helm_charts = []
        self.services = []
        self.ingresses = []
        self.external_secrets = []
        self.deployments = []
        self.pvcs = []
        self.argo_app = None
        self.files = []

    def __repr__(self):
        return f"K8sService({self.name}, {self.deployment_type})"


def parse_yaml_file(filepath):
    """Parse YAML file and return content"""
    try:
        with open(filepath, 'r') as f:
            # Load all documents in the file
            docs = list(yaml.safe_load_all(f))
            return docs if len(docs) > 1 else docs[0] if docs else None
    except Exception as e:
        print(f" ⚠️ Error parsing {filepath}: {e}")
        return None


def analyze_service_directory(service_path, service_name, category):
    """Analyze a service directory and extract information"""
    service = K8sService(service_name, category, service_path)
    # List all files
    for file in os.listdir(service_path):
        if file.endswith('.yaml') or file.endswith('.yml'):
            service.files.append(file)
            filepath = os.path.join(service_path, file)
            # Parse YAML content
            content = parse_yaml_file(filepath)
            if not content:
                continue
            # Handle multiple documents in one file
            documents = content if isinstance(content, list) else [content]
            for doc in documents:
                if not isinstance(doc, dict) or 'kind' not in doc:
                    continue
                kind = doc['kind']
                metadata = doc.get('metadata', {})
                # ArgoCD Application
                if kind == 'Application' and doc.get('apiVersion', '').startswith('argoproj.io'):
                    service.argo_app = {
                        'name': metadata.get('name', ''),
                        'namespace': doc.get('spec', {}).get('destination', {}).get('namespace', ''),
                        'project': doc.get('spec', {}).get('project', ''),
                        'auto_sync': doc.get('spec', {}).get('syncPolicy', {}).get('automated') is not None
                    }
                    service.namespace = service.argo_app['namespace']
                # Kustomization
                elif kind == 'Kustomization':
                    if 'helmCharts' in doc:
                        service.deployment_type = "Helm Chart"
                        for chart in doc.get('helmCharts', []):
                            service.helm_charts.append({
                                'name': chart.get('name', ''),
                                'repo': chart.get('repo', ''),
                                'version': chart.get('version', ''),
                                'namespace': chart.get('namespace', service.namespace)
                            })
                    else:
                        service.deployment_type = "Kustomize"
                # Deployment
                elif kind == 'Deployment':
                    service.deployments.append({
                        'name': metadata.get('name', ''),
                        'namespace': metadata.get('namespace', service.namespace),
                        'replicas': doc.get('spec', {}).get('replicas', 1),
                        'containers': [c.get('name', '') for c in doc.get('spec', {}).get('template', {}).get('spec', {}).get('containers', [])]
                    })
                    if service.deployment_type == "Unknown":
                        service.deployment_type = "YAML Manifests"
                # Service
                elif kind == 'Service':
                    svc_spec = doc.get('spec', {})
                    service.services.append({
                        'name': metadata.get('name', ''),
                        'type': svc_spec.get('type', 'ClusterIP'),
                        'ports': svc_spec.get('ports', [])
                    })
                # Ingress
                elif kind == 'Ingress':
                    rules = doc.get('spec', {}).get('rules', [])
                    hosts = []
                    for rule in rules:
                        if 'host' in rule:
                            hosts.append(rule['host'])
                    service.ingresses.append({
                        'name': metadata.get('name', ''),
                        'hosts': hosts
                    })
                # ExternalSecret
                elif kind == 'ExternalSecret':
                    service.external_secrets.append({
                        'name': metadata.get('name', ''),
                        'store': doc.get('spec', {}).get('secretStoreRef', {}).get('name', '')
                    })
                # PersistentVolumeClaim
                elif kind == 'PersistentVolumeClaim':
                    service.pvcs.append({
                        'name': metadata.get('name', ''),
                        'size': doc.get('spec', {}).get('resources', {}).get('requests', {}).get('storage', '')
                    })
    # If no specific deployment type found but has YAML files
    if service.deployment_type == "Unknown" and service.files:
        service.deployment_type = "YAML Manifests"
    return service


def generate_markdown_table(services):
    """Generate markdown table for services"""
    markdown = []
    markdown.append("# Kubernetes Services")
    markdown.append("")
    markdown.append(f"*Automatically generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S UTC')}*")
    markdown.append("")
    # Group by category
    categories = defaultdict(list)
    for service in services:
        categories[service.category].append(service)
    # Statistics
    markdown.append("## Statistics")
    markdown.append("")
    markdown.append(f"- **Total Services**: {len(services)}")
    markdown.append(f"- **Categories**: {len(categories)}")
    helm_count = sum(1 for s in services if s.deployment_type == "Helm Chart")
    kustomize_count = sum(1 for s in services if s.deployment_type == "Kustomize")
    yaml_count = sum(1 for s in services if s.deployment_type == "YAML Manifests")
    markdown.append(f"- **Helm Charts**: {helm_count}")
    markdown.append(f"- **Kustomize**: {kustomize_count}")
    markdown.append(f"- **YAML Manifests**: {yaml_count}")
    markdown.append("")
    # Main table
    markdown.append("## All Services")
    markdown.append("")
    markdown.append("| Service | Category | Type | Namespace | Ingresses | Services | Secrets | Auto-Sync |")
    markdown.append("|---------|----------|------|-----------|-----------|----------|---------|-----------|")
    for category in sorted(categories.keys()):
        for service in sorted(categories[category], key=lambda x: x.name):
            # Service name with link to directory
            name_link = f"[{service.name}](k8s/{service.category}/{service.name}/)"
            # Deployment type with emoji
            type_emoji = {
                "Helm Chart": "🎩",
                "Kustomize": "🔧",
                "YAML Manifests": "📄",
                "Unknown": ""
            }
            type_str = f"{type_emoji.get(service.deployment_type, '')} {service.deployment_type}"
            # Ingresses
            ingresses = []
            for ing in service.ingresses:
                for host in ing['hosts']:
                    ingresses.append(f"[{host}](https://{host})")
            ingress_str = "<br>".join(ingresses) if ingresses else "-"
            # Services
            svc_list = []
            for svc in service.services:
                ports = [f"{p.get('port', '?')}" for p in svc['ports']]
                svc_list.append(f"`{svc['name']}:{','.join(ports)}`")
            svc_str = "<br>".join(svc_list) if svc_list else "-"
            # External Secrets
            secrets_str = f"{len(service.external_secrets)} secrets" if service.external_secrets else "-"
            # Auto-sync
            auto_sync = "✅" if service.argo_app and service.argo_app.get('auto_sync') else "❌"
            markdown.append(f"| **{name_link}** | {category} | {type_str} | {service.namespace or '-'} | {ingress_str} | {svc_str} | {secrets_str} | {auto_sync} |")
    markdown.append("")
    # Detailed sections by category
    for category in sorted(categories.keys()):
        markdown.append(f"## {category.title()} Services")
        markdown.append("")
        for service in sorted(categories[category], key=lambda x: x.name):
            markdown.append(f"### {service.name}")
            markdown.append("")
            # Basic info
            markdown.append(f"- **Type**: {service.deployment_type}")
            markdown.append(f"- **Namespace**: {service.namespace or 'Not specified'}")
            markdown.append(f"- **Path**: `{service.path}`")
            # Helm charts
            if service.helm_charts:
                markdown.append("- **Helm Charts**:")
                for chart in service.helm_charts:
                    markdown.append(f"  - {chart['name']} v{chart['version']} from {chart['repo']}")
            # Deployments
            if service.deployments:
                markdown.append("- **Deployments**:")
                for dep in service.deployments:
                    containers = ', '.join(dep['containers'])
                    markdown.append(f"  - {dep['name']} ({dep['replicas']} replicas) - Containers: {containers}")
            # Files
            if service.files:
                markdown.append(f"- **Files**: {', '.join(sorted(service.files))}")
            markdown.append("")
    markdown.append("---")
    markdown.append("*This page is automatically generated from k8s/ directory via CI/CD*")
    return "\n".join(markdown)


def main():
    if len(sys.argv) < 2:
        print("Usage: generate-k8s-wiki.py <k8s-directory> [output-file]")
        sys.exit(1)
    k8s_dir = sys.argv[1]
    output_file = sys.argv[2] if len(sys.argv) > 2 else "Kubernetes-Services.md"
    if not os.path.exists(k8s_dir):
        print(f"❌ Directory {k8s_dir} not found")
        sys.exit(1)
    print(f"📂 Scanning {k8s_dir}...")
    services = []
    # Scan each category directory
    for category in ['apps', 'core', 'games']:
        category_path = os.path.join(k8s_dir, category)
        if not os.path.exists(category_path):
            print(f" ⚠️ Category {category} not found")
            continue
        print(f"📁 Processing {category}/...")
        # Scan each service in category
        for service_name in os.listdir(category_path):
            service_path = os.path.join(category_path, service_name)
            if os.path.isdir(service_path):
                print(f" 🔍 Analyzing {service_name}...")
                service = analyze_service_directory(service_path, service_name, category)
                services.append(service)
    print(f"\n✅ Found {len(services)} services")
    # Generate markdown
    markdown = generate_markdown_table(services)
    # Write output
    with open(output_file, 'w', encoding='utf-8') as f:
        f.write(markdown)
    print(f"📄 Wiki page generated: {output_file}")
    print(f"📊 Total services: {len(services)}")


if __name__ == "__main__":
    main()

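A minimal, self-contained sketch of the kind-dispatch pattern analyze_service_directory relies on above: iterate every document in a multi-document YAML file and route it by its kind. It assumes only pyyaml; the function name and example path are hypothetical and not part of the repo.

import yaml

def scan_manifest(path):
    """Print the Ingress hosts and PVC sizes found in one manifest file."""
    with open(path, encoding='utf-8') as f:
        for doc in yaml.safe_load_all(f):
            if not isinstance(doc, dict):
                continue  # skip empty documents between '---' separators
            kind = doc.get('kind', '')
            name = doc.get('metadata', {}).get('name', '')
            if kind == 'Ingress':
                hosts = [r.get('host', '') for r in doc.get('spec', {}).get('rules', [])]
                print(f"Ingress {name}: hosts={hosts}")
            elif kind == 'PersistentVolumeClaim':
                size = doc.get('spec', {}).get('resources', {}).get('requests', {}).get('storage', '')
                print(f"PVC {name}: size={size}")

scan_manifest('k8s/apps/example/manifest.yaml')  # hypothetical path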
View File

@@ -0,0 +1,161 @@
#!/usr/bin/env python3
"""
Generate README.md with ArgoCD application status badges.
Scans k8s/ directory structure to find all applications and generates badges for them.
"""
import os
import sys
from pathlib import Path
from typing import Dict, List
def scan_k8s_directory(k8s_path: str) -> Dict[str, List[str]]:
"""
Scan k8s/ directory and return applications grouped by category.
Args:
k8s_path: Path to k8s directory
Returns:
Dictionary with categories as keys and lists of app names as values
"""
apps_by_category = {
'core': [],
'apps': [],
'games': []
}
k8s_dir = Path(k8s_path)
for category in apps_by_category.keys():
category_path = k8s_dir / category
if category_path.exists() and category_path.is_dir():
# Get all subdirectories (each subdirectory is an app)
apps = [
d.name for d in category_path.iterdir()
if d.is_dir() and not d.name.startswith('.')
]
apps_by_category[category] = sorted(apps)
return apps_by_category
def generate_badge_line(app_name: str) -> str:
"""
Generate markdown line with badge for an application.
Args:
app_name: Name of the application
Returns:
Markdown formatted string with badge
"""
badge_url = f"https://ag.hexor.cy/api/badge?name={app_name}&revision=true"
app_url = f"https://ag.hexor.cy/applications/argocd/{app_name}"
return f"| **{app_name}** | [![{app_name}]({badge_url})]({app_url}) |"
def generate_readme_content(apps_by_category: Dict[str, List[str]]) -> str:
"""
Generate README.md content with all applications.
Args:
apps_by_category: Dictionary with apps grouped by category
Returns:
Complete README.md content
"""
lines = [
"# homelab",
"",
"ArgoCD homelab project",
"",
"## ArgoCD Applications Status",
"",
"<table>",
"<tr>",
"<td valign=\"top\" width=\"50%\">",
"",
"### Core Applications",
"",
"| Application | Status |",
"| :--- | :---: |"
]
# Add core applications
for app in apps_by_category.get('core', []):
lines.append(generate_badge_line(app))
lines.extend([
"",
"### Games",
"",
"| Application | Status |",
"| :--- | :---: |"
])
# Add games
for app in apps_by_category.get('games', []):
lines.append(generate_badge_line(app))
lines.extend([
"</td>",
"<td valign=\"top\" width=\"50%\">",
"",
"### Applications",
"",
"| Application | Status |",
"| :--- | :---: |"
])
# Add applications
for app in apps_by_category.get('apps', []):
lines.append(generate_badge_line(app))
lines.extend([
"",
"</td>",
"</tr>",
"</table>"
])
return '\n'.join(lines) + '\n'
def main():
if len(sys.argv) < 2:
print("Usage: generate-readme.py <k8s-directory> [output-file]")
print("Example: generate-readme.py k8s/ README.md")
sys.exit(1)
k8s_path = sys.argv[1]
output_file = sys.argv[2] if len(sys.argv) > 2 else "README.md"
if not os.path.exists(k8s_path):
print(f"Error: Directory {k8s_path} does not exist")
sys.exit(1)
print(f"📁 Scanning {k8s_path}...")
apps_by_category = scan_k8s_directory(k8s_path)
# Print statistics
total_apps = sum(len(apps) for apps in apps_by_category.values())
print(f"✅ Found {total_apps} applications:")
for category, apps in apps_by_category.items():
if apps:
print(f" - {category}: {len(apps)} apps")
print(f"📝 Generating {output_file}...")
readme_content = generate_readme_content(apps_by_category)
with open(output_file, 'w', encoding='utf-8') as f:
f.write(readme_content)
print(f"{output_file} generated successfully")
print(f" Total lines: {len(readme_content.splitlines())}")
print(f" File size: {len(readme_content)} bytes")
if __name__ == "__main__":
main()

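As a quick sanity check of the row format, this is what generate_badge_line produces for one app, derived mechanically from the f-strings above (runnable only if the script is saved under an importable name such as generate_readme.py, which is an assumption; the repo's hyphenated file name would need importlib):

expected = (
    "| **gitea** | "
    "[![gitea](https://ag.hexor.cy/api/badge?name=gitea&revision=true)]"
    "(https://ag.hexor.cy/applications/argocd/gitea) |"
)
assert generate_badge_line("gitea") == expected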
View File

@@ -0,0 +1,105 @@
#!/usr/bin/env python3
"""
Process Terraform output to extract applications_for_wiki data
Handles various output formats and cleans up invalid JSON
"""
import json
import sys
import os
def clean_command_prefix(content):
"""Remove [command] prefix if present"""
if content.startswith('[command]'):
lines = content.split('\n', 1)
if len(lines) > 1:
return lines[1]
return content
def extract_valid_json(content):
"""Extract valid JSON from content that might have extra data"""
# Find first { and last matching }
start = content.find('{')
if start < 0:
return None
count = 0
end = start
for i in range(start, len(content)):
if content[i] == '{':
count += 1
elif content[i] == '}':
count -= 1
if count == 0:
end = i + 1
break
if end > start and count == 0:
return content[start:end]
return None
def extract_value(data):
"""Extract value from Terraform output format"""
if isinstance(data, dict) and 'value' in data:
return data['value']
return data
def main():
if len(sys.argv) != 3:
print("Usage: process-terraform-output.py <input-file> <output-file>")
sys.exit(1)
input_file = sys.argv[1]
output_file = sys.argv[2]
try:
# Read input file
with open(input_file, 'r') as f:
content = f.read()
# Clean command prefix if present
content = clean_command_prefix(content)
# Try to parse JSON directly
try:
data = json.loads(content)
print("✅ Valid JSON parsed successfully")
except json.JSONDecodeError as e:
print(f"⚠️ Initial JSON parse failed: {e}")
print("🔍 Attempting to extract valid JSON portion...")
# Try to extract valid JSON
valid_json = extract_valid_json(content)
if valid_json:
try:
data = json.loads(valid_json)
print("✅ Extracted valid JSON successfully")
except json.JSONDecodeError as e2:
print(f"❌ Failed to parse extracted JSON: {e2}")
sys.exit(1)
else:
print("❌ Could not extract valid JSON from content")
sys.exit(1)
# Extract value if it's wrapped in Terraform output format
result = extract_value(data)
# Write output
with open(output_file, 'w') as f:
json.dump(result, f, indent=2)
print(f"✅ Processed output written to {output_file}")
# Show preview
preview = json.dumps(result, indent=2)[:200]
print(f"📄 Preview: {preview}...")
except FileNotFoundError:
print(f"❌ Input file {input_file} not found")
sys.exit(1)
except Exception as e:
print(f"❌ Error: {e}")
sys.exit(1)
if __name__ == "__main__":
main()

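Behavior sketch for the two helpers above: clean_command_prefix drops the [command] echo line, then extract_valid_json brace-counts its way to the first balanced object. (Caveat: the counter ignores braces inside JSON strings, which is acceptable for Terraform output but not for arbitrary JSON.)

noisy = '[command]terraform output\n{"value": {"apps": ["gitea"]}}\ntrailing log line'
extracted = extract_valid_json(clean_command_prefix(noisy))
print(extracted)  # -> {"value": {"apps": ["gitea"]}}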
View File

@@ -0,0 +1,147 @@
name: 'Terraform'
on:
push:
branches: [ "main" ]
paths:
- 'terraform/authentik/**'
pull_request:
permissions:
contents: read
jobs:
terraform:
name: 'Terraform'
runs-on: ubuntu-latest
environment: production
defaults:
run:
shell: bash
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Setup Terraform
uses: hashicorp/setup-terraform@v2
with:
cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
- name: Terraform Init
env:
TF_VAR_authentik_token: ${{ secrets.AUTHENTIK_TOKEN }}
run: terraform init
working-directory: ./terraform/authentik
- name: Terraform Format
env:
TF_VAR_authentik_token: ${{ secrets.AUTHENTIK_TOKEN }}
run: terraform fmt -check
continue-on-error: true
working-directory: ./terraform/authentik
- name: Terraform Apply
env:
TF_VAR_authentik_token: ${{ secrets.AUTHENTIK_TOKEN }}
run: terraform apply -var-file proxy-apps.tfvars -var-file oauth2-apps.tfvars -var-file terraform.tfvars -var-file groups.tfvars -input=false -auto-approve -parallelism=100
working-directory: ./terraform/authentik
- name: Generate Wiki Content
if: success()
continue-on-error: true
env:
TF_VAR_authentik_token: ${{ secrets.AUTHENTIK_TOKEN }}
run: |
echo "📋 Starting Wiki generation..."
cd ./terraform/authentik
# Get terraform output
echo "🔍 Generating Terraform output..."
terraform output -json applications_for_wiki > terraform-raw-output.json 2>&1
# Process output to extract clean JSON
echo "📤 Processing Terraform output..."
python3 ../../.gitea/scripts/process-terraform-output.py terraform-raw-output.json terraform-output.json
# Run wiki generation
echo "📊 Running wiki generation script..."
if python3 ../../.gitea/scripts/generate-apps-wiki.py terraform-output.json; then
echo "✅ Wiki content generated successfully"
else
echo "⚠️ Wiki generation failed, retrying with debug..."
python3 ../../.gitea/scripts/generate-apps-wiki.py terraform-output.json --debug || echo "⚠️ Wiki generation failed"
fi
# Check results
if [ -f "Applications.md" ]; then
echo "✅ Wiki file created: $(wc -l < Applications.md) lines"
else
echo "⚠️ Wiki content not generated"
exit 0
fi
working-directory: ./
- name: Upload Wiki to Gitea
if: success()
continue-on-error: true
run: |
cd ./terraform/authentik
# Set variables
GITEA_URL="${{ secrets.GT_URL }}"
GITEA_TOKEN="${{ secrets.GT_WIKI_TOKEN }}"
GITEA_OWNER="${{ secrets.GT_OWNER }}"
GITEA_REPO="${{ secrets.GT_REPO }}"
# Debug variables (without exposing token)
echo "🔍 Checking variables..."
echo "GITEA_URL: ${GITEA_URL:-NOT SET}"
echo "GITEA_OWNER: ${GITEA_OWNER:-NOT SET}"
echo "GITEA_REPO: ${GITEA_REPO:-NOT SET}"
echo "GITEA_TOKEN: $(if [ -n "$GITEA_TOKEN" ]; then echo "SET"; else echo "NOT SET"; fi)"
# Check if file exists
if [ ! -f "Applications.md" ]; then
echo "⚠️ Applications.md not found, skipping wiki update"
exit 0
fi
echo "📤 Uploading to Gitea Wiki..."
# Encode content to base64
CONTENT=$(base64 -w 0 Applications.md)
# Check if wiki page exists
WIKI_PAGE_EXISTS=$(curl -s -o /dev/null -w "%{http_code}" \
-H "Authorization: token $GITEA_TOKEN" \
"$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/wiki/page/Applications" || echo "000")
if [ "$WIKI_PAGE_EXISTS" = "200" ]; then
echo "📝 Updating existing wiki page..."
curl -X PATCH \
-H "Authorization: token $GITEA_TOKEN" \
-H "Content-Type: application/json" \
-d "{
\"title\": \"Applications\",
\"content_base64\": \"$CONTENT\",
\"message\": \"Update applications list from CI/CD [$(date)]\"
}" \
"$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/wiki/page/Applications" || echo "⚠️ Wiki update failed"
else
echo "📄 Creating new wiki page..."
curl -X POST \
-H "Authorization: token $GITEA_TOKEN" \
-H "Content-Type: application/json" \
-d "{
\"title\": \"Applications\",
\"content_base64\": \"$CONTENT\",
\"message\": \"Create applications list from CI/CD [$(date)]\"
}" \
"$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/wiki/new" || echo "⚠️ Wiki creation failed"
fi
echo "✅ Wiki update process completed"
working-directory: ./

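The wiki upload step above is an upsert: GET the page, PATCH if it returns 200, otherwise POST to /wiki/new. A Python sketch of the same flow, assuming the requests package is available; the endpoint paths are copied from the workflow itself.

import base64, datetime, requests

def upsert_wiki_page(url, token, owner, repo, title, path):
    """Create or update a Gitea wiki page from a local markdown file."""
    with open(path, 'rb') as f:
        content = base64.b64encode(f.read()).decode()
    headers = {'Authorization': f'token {token}'}
    body = {
        'title': title,
        'content_base64': content,
        'message': f'Update {title} from CI/CD [{datetime.datetime.now()}]',
    }
    page = f'{url}/api/v1/repos/{owner}/{repo}/wiki/page/{title}'
    if requests.get(page, headers=headers).status_code == 200:
        requests.patch(page, headers=headers, json=body).raise_for_status()
    else:
        requests.post(f'{url}/api/v1/repos/{owner}/{repo}/wiki/new',
                      headers=headers, json=body).raise_for_status()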
View File

@@ -0,0 +1,111 @@
name: 'Update Kubernetes Services Wiki'
on:
push:
branches: [ "main" ]
# paths:
# - 'k8s/**'
# - '.gitea/scripts/generate-k8s-wiki.py'
workflow_dispatch:
permissions:
contents: read
jobs:
update-k8s-wiki:
name: 'Generate and Update K8s Wiki'
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Install Python dependencies
run: |
pip install pyyaml
- name: Generate K8s Services Wiki
run: |
echo "📋 Starting K8s wiki generation..."
python3 .gitea/scripts/generate-k8s-wiki.py k8s/ Kubernetes-Services.md
if [ -f "Kubernetes-Services.md" ]; then
echo "✅ Wiki content generated successfully"
echo "📄 File size: $(wc -c < Kubernetes-Services.md) bytes"
echo "📄 Lines: $(wc -l < Kubernetes-Services.md)"
else
echo "❌ Wiki content not generated"
exit 1
fi
- name: Upload Wiki to Gitea
continue-on-error: true
run: |
# Set variables
GITEA_URL="${{ secrets.GT_URL }}"
GITEA_TOKEN="${{ secrets.GT_WIKI_TOKEN }}"
GITEA_OWNER="${{ secrets.GT_OWNER }}"
GITEA_REPO="${{ secrets.GT_REPO }}"
# Debug variables (without exposing token)
echo "🔍 Checking variables..."
echo "GITEA_URL: ${GITEA_URL:-NOT SET}"
echo "GITEA_OWNER: ${GITEA_OWNER:-NOT SET}"
echo "GITEA_REPO: ${GITEA_REPO:-NOT SET}"
echo "GITEA_TOKEN: $(if [ -n "$GITEA_TOKEN" ]; then echo "SET"; else echo "NOT SET"; fi)"
if [ ! -f "Kubernetes-Services.md" ]; then
echo "❌ Kubernetes-Services.md not found"
exit 1
fi
echo "📤 Uploading to Gitea Wiki..."
# Encode content to base64
CONTENT=$(base64 -w 0 Kubernetes-Services.md)
# Check if wiki page exists
WIKI_PAGE_EXISTS=$(curl -s -o /dev/null -w "%{http_code}" \
-H "Authorization: token $GITEA_TOKEN" \
"$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/wiki/page/Kubernetes-Services" || echo "000")
if [ "$WIKI_PAGE_EXISTS" = "200" ]; then
echo "📝 Updating existing wiki page..."
curl -X PATCH \
-H "Authorization: token $GITEA_TOKEN" \
-H "Content-Type: application/json" \
-d "{
\"title\": \"Kubernetes-Services\",
\"content_base64\": \"$CONTENT\",
\"message\": \"Update K8s services list from CI/CD [$(date)]\"
}" \
"$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/wiki/page/Kubernetes-Services" || echo "⚠️ Wiki update failed"
else
echo "📄 Creating new wiki page..."
curl -X POST \
-H "Authorization: token $GITEA_TOKEN" \
-H "Content-Type: application/json" \
-d "{
\"title\": \"Kubernetes-Services\",
\"content_base64\": \"$CONTENT\",
\"message\": \"Create K8s services list from CI/CD [$(date)]\"
}" \
"$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/wiki/new" || echo "⚠️ Wiki creation failed"
fi
echo "✅ Wiki update process completed"
echo "🔗 Wiki URL: $GITEA_URL/$GITEA_OWNER/$GITEA_REPO/wiki/Kubernetes-Services"
- name: Summary
if: always()
run: |
echo "## 📊 K8s Wiki Update Summary" >> $GITHUB_STEP_SUMMARY
if [ -f "Kubernetes-Services.md" ]; then
echo "- ✅ K8s services analyzed" >> $GITHUB_STEP_SUMMARY
echo "- ✅ Wiki page generated" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Services found:** $(grep -c '^|' Kubernetes-Services.md || echo 0)" >> $GITHUB_STEP_SUMMARY
else
echo "- ❌ Wiki generation failed" >> $GITHUB_STEP_SUMMARY
fi
echo "**Generated at:** $(date)" >> $GITHUB_STEP_SUMMARY

View File

@@ -1,9 +1,9 @@
name: Check with kubeconform
on:
push:
branches: [ main ]
paths:
- 'k8s/**'
jobs:
lint:
runs-on: ubuntu-latest
@@ -16,22 +16,53 @@ jobs:
- name: Kubeconform validation
id: kubeconform
run: |
# Create exclusion list - add files that should be skipped from validation
EXCLUSIONS=(
"./k8s/core/system-upgrade/crd.yaml"
# Add more files here as needed
# "./path/to/another/file.yaml"
)
# Create a temporary file for storing validation output
VALIDATION_OUTPUT=$(mktemp)
# Run kubeconform and capture output
find . -name '*.yaml' \
# Function to check if file is in exclusions
is_excluded() {
local file="$1"
for exclusion in "${EXCLUSIONS[@]}"; do
if [[ "$file" == "$exclusion" ]]; then
return 0
fi
done
return 1
}
# Find all yaml files and filter out exclusions
YAML_FILES=()
while IFS= read -r -d '' file; do
if ! is_excluded "$file"; then
YAML_FILES+=("$file")
else
echo "⚠️ Skipping excluded file: $file"
fi
done < <(find . -name '*.yaml' \
! -name '*values.yaml' \
! -path './.gitea/*' \
-print0 \
| xargs -0 kubeconform \
-summary \
-verbose \
-output pretty \
-ignore-missing-schemas \
-schema-location default \
-schema-location 'https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json' \
-schema-location 'https://raw.githubusercontent.com/SchemaStore/schemastore/refs/heads/master/src/schemas/json/kustomization.json' > $VALIDATION_OUTPUT 2>&1 || true
-print0)
# Run kubeconform only if there are files to validate
if [ ${#YAML_FILES[@]} -gt 0 ]; then
printf '%s\0' "${YAML_FILES[@]}" | xargs -0 kubeconform \
-summary \
-verbose \
-output pretty \
-ignore-missing-schemas \
-schema-location default \
-schema-location 'https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json' \
-schema-location 'https://raw.githubusercontent.com/SchemaStore/schemastore/refs/heads/master/src/schemas/json/kustomization.json' > $VALIDATION_OUTPUT 2>&1 || true
else
echo "No files to validate after applying exclusions" > $VALIDATION_OUTPUT
fi
# Display output in logs
cat $VALIDATION_OUTPUT
@@ -44,7 +75,7 @@ jobs:
cat invalid_files.txt
exit 1
else
echo "All manifests are valid!"
echo "All manifests are valid!"
fi
continue-on-error: true
@@ -69,4 +100,4 @@ jobs:
Invalid files:
${{ env.INVALID_FILES }}
<a href="https://gt.hexor.cy/${{ github.repository }}/actions/runs/${{ github.run_number }}">🔗 Check details</a>
<a href="https://gt.hexor.cy/${{ github.repository }}/actions/runs/${{ github.run_number }}">🔗 Check details</a>

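The reworked step above replaces a single find | xargs pipeline with an explicit exclusion filter. The same selection logic, sketched in Python for clarity (exclusion path and name patterns copied from the workflow):

from pathlib import Path

exclusions = {Path('k8s/core/system-upgrade/crd.yaml')}

yaml_files = sorted(
    p for p in Path('.').rglob('*.yaml')
    if not p.name.endswith('values.yaml')   # find's ! -name '*values.yaml'
    and '.gitea' not in p.parts             # find's ! -path './.gitea/*'
    and p not in exclusions
)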
View File

@@ -0,0 +1,164 @@
name: 'Auto-update README'
on:
push:
branches: [ "main" ]
paths:
- 'k8s/**'
workflow_dispatch:
permissions:
contents: write
pull-requests: write
jobs:
update-readme:
name: 'Generate README and Create MR'
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Configure Git
run: |
git config --global user.name "Gitea Actions Bot"
git config --global user.email "actions@gitea.local"
- name: Generate README
run: |
echo "📋 Starting README generation..."
python3 .gitea/scripts/generate-readme.py k8s/ README.md
if [ -f "README.md" ]; then
echo "✅ README generated successfully"
echo "📄 File size: $(wc -c < README.md) bytes"
echo "📄 Lines: $(wc -l < README.md)"
else
echo "❌ README not generated"
exit 1
fi
- name: Check for changes
id: check_changes
run: |
if git diff --quiet README.md; then
echo "No changes detected in README.md"
echo "has_changes=false" >> $GITHUB_OUTPUT
else
echo "Changes detected in README.md"
echo "has_changes=true" >> $GITHUB_OUTPUT
fi
- name: Create Pull Request
id: create_pr
if: steps.check_changes.outputs.has_changes == 'true'
run: |
# Set variables
GITEA_URL="${{ secrets.GT_URL }}"
GITEA_TOKEN="${{ secrets.GT_TOKEN }}"
GITEA_OWNER="${{ secrets.GT_OWNER }}"
GITEA_REPO="${{ secrets.GT_REPO }}"
BRANCH_NAME="auto-update-readme-$(date +%Y%m%d-%H%M%S)"
echo "🔍 Configuration:"
echo "GITEA_URL: ${GITEA_URL:-NOT SET}"
echo "GITEA_OWNER: ${GITEA_OWNER:-NOT SET}"
echo "GITEA_REPO: ${GITEA_REPO:-NOT SET}"
echo "BRANCH_NAME: $BRANCH_NAME"
# Create and push new branch
echo "🌿 Creating branch: $BRANCH_NAME"
git checkout -b "$BRANCH_NAME"
git add README.md
git commit -m "Auto-update README with current k8s applications" \
-m "Generated by CI/CD workflow on $(date +%Y-%m-%d\ %H:%M:%S)" \
-m "This PR updates the README.md file with the current list of applications found in the k8s/ directory structure."
# Push branch to remote
echo "📤 Pushing branch to remote..."
git push origin "$BRANCH_NAME"
# Create Pull Request using Gitea API
echo "🔀 Creating Pull Request..."
PR_TITLE="Auto-update README with k8s applications"
# Create PR body
cat > /tmp/pr_body.json <<EOF
{
"title": "$PR_TITLE",
"body": "This PR automatically updates README.md based on the current k8s/ directory structure.\n\n## Changes\n- Updated application list in README.md\n- Applications are now synced with k8s/ folders\n\n## Review\nPlease review and merge if everything looks correct.\n\n---\n🤖 This PR was automatically generated by CI/CD workflow\n⏰ Generated at: $(date '+%Y-%m-%d %H:%M:%S')",
"head": "$BRANCH_NAME",
"base": "main"
}
EOF
# Create PR via API
echo "Making API request to: $GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/pulls"
echo "Request body:"
cat /tmp/pr_body.json
RESPONSE=$(curl -s -w "\nHTTP_CODE:%{http_code}" -X POST \
-H "Authorization: token $GITEA_TOKEN" \
-H "Content-Type: application/json" \
-d @/tmp/pr_body.json \
"$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/pulls")
# Extract HTTP code and response body
HTTP_CODE=$(echo "$RESPONSE" | grep "HTTP_CODE:" | cut -d':' -f2)
RESPONSE_BODY=$(echo "$RESPONSE" | sed '/HTTP_CODE:/d')
echo "API Response (HTTP $HTTP_CODE):"
echo "$RESPONSE_BODY"
# Extract PR number and URL from response
PR_NUMBER=$(echo "$RESPONSE_BODY" | grep -o '"number":[0-9]*' | head -1 | cut -d':' -f2)
if [ -n "$PR_NUMBER" ] && [ "$HTTP_CODE" = "201" ]; then
echo "✅ Pull Request created successfully!"
echo "📝 PR #$PR_NUMBER"
echo "🔗 URL: $GITEA_URL/$GITEA_OWNER/$GITEA_REPO/pulls/$PR_NUMBER"
# Save PR info for summary
echo "pr_number=$PR_NUMBER" >> $GITHUB_OUTPUT
echo "pr_url=$GITEA_URL/$GITEA_OWNER/$GITEA_REPO/pulls/$PR_NUMBER" >> $GITHUB_OUTPUT
else
echo "⚠️ Failed to create Pull Request (HTTP $HTTP_CODE)"
echo "Response: $RESPONSE_BODY"
# Check if PR already exists
if echo "$RESPONSE_BODY" | grep -q "already exists"; then
echo " PR already exists for this branch"
exit 0
fi
exit 1
fi
- name: Summary
if: always()
run: |
echo "## 📊 README Update Summary" >> $GITHUB_STEP_SUMMARY
if [ -f "README.md" ]; then
echo "- ✅ README generated successfully" >> $GITHUB_STEP_SUMMARY
if [ "${{ steps.check_changes.outputs.has_changes }}" = "true" ]; then
echo "- ✅ Changes detected" >> $GITHUB_STEP_SUMMARY
echo "- ✅ Pull Request created" >> $GITHUB_STEP_SUMMARY
if [ -n "${{ steps.create_pr.outputs.pr_number }}" ]; then
echo "" >> $GITHUB_STEP_SUMMARY
echo "**PR:** [#${{ steps.create_pr.outputs.pr_number }}](${{ steps.create_pr.outputs.pr_url }})" >> $GITHUB_STEP_SUMMARY
fi
else
echo "- No changes detected - README already up to date" >> $GITHUB_STEP_SUMMARY
fi
else
echo "- ❌ README generation failed" >> $GITHUB_STEP_SUMMARY
fi
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Generated at:** $(date)" >> $GITHUB_STEP_SUMMARY

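One fragile spot in the step above is extracting the PR number with grep/cut, which matches any "number": key anywhere in the response body. A json-based sketch of the same parse (hypothetical helper, not in the repo):

import json

def pr_number(response_body: str):
    """Return the top-level PR number from a Gitea API response, or None."""
    try:
        return json.loads(response_body).get('number')
    except json.JSONDecodeError:
        return None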
.gitignore vendored
View File

@@ -10,13 +10,12 @@
crash.log
crash.*.log
# Exclude all .tfvars files, which are likely to contain sensitive data, such as
# password, private keys, and other secrets. These should not be part of version
# control as they are data points which are potentially sensitive and subject
# to change depending on the environment.
*.tfvars
*.tfvars.json
!*terraform.tfvars
# claude ai
.claude/
# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
@@ -58,3 +57,4 @@ tags
# Persistent undo
[._]*.un~
.DS_Store

View File

@@ -1,3 +1,62 @@
# homelab
ArgoCD homelab project
## ArgoCD Applications Status
<table>
<tr>
<td valign="top" width="50%">
### Core Applications
| Application | Status |
| :--- | :---: |
| **argocd** | [![argocd](https://ag.hexor.cy/api/badge?name=argocd&revision=true)](https://ag.hexor.cy/applications/argocd/argocd) |
| **authentik** | [![authentik](https://ag.hexor.cy/api/badge?name=authentik&revision=true)](https://ag.hexor.cy/applications/argocd/authentik) |
| **cert-manager** | [![cert-manager](https://ag.hexor.cy/api/badge?name=cert-manager&revision=true)](https://ag.hexor.cy/applications/argocd/cert-manager) |
| **external-secrets** | [![external-secrets](https://ag.hexor.cy/api/badge?name=external-secrets&revision=true)](https://ag.hexor.cy/applications/argocd/external-secrets) |
| **kube-system-custom** | [![kube-system-custom](https://ag.hexor.cy/api/badge?name=kube-system-custom&revision=true)](https://ag.hexor.cy/applications/argocd/kube-system-custom) |
| **kubernetes-dashboard** | [![kubernetes-dashboard](https://ag.hexor.cy/api/badge?name=kubernetes-dashboard&revision=true)](https://ag.hexor.cy/applications/argocd/kubernetes-dashboard) |
| **postgresql** | [![postgresql](https://ag.hexor.cy/api/badge?name=postgresql&revision=true)](https://ag.hexor.cy/applications/argocd/postgresql) |
| **prom-stack** | [![prom-stack](https://ag.hexor.cy/api/badge?name=prom-stack&revision=true)](https://ag.hexor.cy/applications/argocd/prom-stack) |
| **system-upgrade** | [![system-upgrade](https://ag.hexor.cy/api/badge?name=system-upgrade&revision=true)](https://ag.hexor.cy/applications/argocd/system-upgrade) |
### Games
| Application | Status |
| :--- | :---: |
| **beam-ng** | [![beam-ng](https://ag.hexor.cy/api/badge?name=beam-ng&revision=true)](https://ag.hexor.cy/applications/argocd/beam-ng) |
| **counter-strike-16** | [![counter-strike-16](https://ag.hexor.cy/api/badge?name=counter-strike-16&revision=true)](https://ag.hexor.cy/applications/argocd/counter-strike-16) |
| **minecraft** | [![minecraft](https://ag.hexor.cy/api/badge?name=minecraft&revision=true)](https://ag.hexor.cy/applications/argocd/minecraft) |
</td>
<td valign="top" width="50%">
### Applications
| Application | Status |
| :--- | :---: |
| **gitea** | [![gitea](https://ag.hexor.cy/api/badge?name=gitea&revision=true)](https://ag.hexor.cy/applications/argocd/gitea) |
| **greece-notifier** | [![greece-notifier](https://ag.hexor.cy/api/badge?name=greece-notifier&revision=true)](https://ag.hexor.cy/applications/argocd/greece-notifier) |
| **hexound** | [![hexound](https://ag.hexor.cy/api/badge?name=hexound&revision=true)](https://ag.hexor.cy/applications/argocd/hexound) |
| **immich** | [![immich](https://ag.hexor.cy/api/badge?name=immich&revision=true)](https://ag.hexor.cy/applications/argocd/immich) |
| **iperf3** | [![iperf3](https://ag.hexor.cy/api/badge?name=iperf3&revision=true)](https://ag.hexor.cy/applications/argocd/iperf3) |
| **jellyfin** | [![jellyfin](https://ag.hexor.cy/api/badge?name=jellyfin&revision=true)](https://ag.hexor.cy/applications/argocd/jellyfin) |
| **k8s-secrets** | [![k8s-secrets](https://ag.hexor.cy/api/badge?name=k8s-secrets&revision=true)](https://ag.hexor.cy/applications/argocd/k8s-secrets) |
| **khm** | [![khm](https://ag.hexor.cy/api/badge?name=khm&revision=true)](https://ag.hexor.cy/applications/argocd/khm) |
| **ollama** | [![ollama](https://ag.hexor.cy/api/badge?name=ollama&revision=true)](https://ag.hexor.cy/applications/argocd/ollama) |
| **paperless** | [![paperless](https://ag.hexor.cy/api/badge?name=paperless&revision=true)](https://ag.hexor.cy/applications/argocd/paperless) |
| **pasarguard** | [![pasarguard](https://ag.hexor.cy/api/badge?name=pasarguard&revision=true)](https://ag.hexor.cy/applications/argocd/pasarguard) |
| **qbittorent-nas** | [![qbittorent-nas](https://ag.hexor.cy/api/badge?name=qbittorent-nas&revision=true)](https://ag.hexor.cy/applications/argocd/qbittorent-nas) |
| **remnawave** | [![remnawave](https://ag.hexor.cy/api/badge?name=remnawave&revision=true)](https://ag.hexor.cy/applications/argocd/remnawave) |
| **rustdesk** | [![rustdesk](https://ag.hexor.cy/api/badge?name=rustdesk&revision=true)](https://ag.hexor.cy/applications/argocd/rustdesk) |
| **sonarr-stack** | [![sonarr-stack](https://ag.hexor.cy/api/badge?name=sonarr-stack&revision=true)](https://ag.hexor.cy/applications/argocd/sonarr-stack) |
| **stirling-pdf** | [![stirling-pdf](https://ag.hexor.cy/api/badge?name=stirling-pdf&revision=true)](https://ag.hexor.cy/applications/argocd/stirling-pdf) |
| **syncthing** | [![syncthing](https://ag.hexor.cy/api/badge?name=syncthing&revision=true)](https://ag.hexor.cy/applications/argocd/syncthing) |
| **tg-bots** | [![tg-bots](https://ag.hexor.cy/api/badge?name=tg-bots&revision=true)](https://ag.hexor.cy/applications/argocd/tg-bots) |
| **vaultwarden** | [![vaultwarden](https://ag.hexor.cy/api/badge?name=vaultwarden&revision=true)](https://ag.hexor.cy/applications/argocd/vaultwarden) |
| **vpn** | [![vpn](https://ag.hexor.cy/api/badge?name=vpn&revision=true)](https://ag.hexor.cy/applications/argocd/vpn) |
</td>
</tr>
</table>

View File

@@ -30,6 +30,27 @@ spec:
containers:
- name: gitea
image: 'gitea/gitea:latest'
resources:
requests:
memory: "512Mi"
cpu: "200m"
limits:
memory: "2Gi"
cpu: "1500m"
env:
- name: GITEA__service__REGISTER_MANUAL_CONFIRM
value: "true"
- name: GITEA__service__ENABLE_CAPTCHA
value: "false"
- name: GITEA__service__REQUIRE_CAPTCHA_FOR_LOGIN
value: "true"
- name: GITEA__service__REQUIRE_EXTERNAL_REGISTRATION_CAPTCHA
value: "true"
- name: GITEA__service__CAPTCHA_TYPE
value: "hcaptcha"
envFrom:
- secretRef:
name: gitea-recapcha-creds
ports:
- name: http
containerPort: 3000
@@ -40,6 +61,7 @@ spec:
volumeMounts:
- name: storage
mountPath: /data
---
apiVersion: apps/v1
kind: Deployment
@@ -56,20 +78,65 @@ spec:
app: gitea-runner
spec:
nodeSelector:
#kubernetes.io/hostname: master.tail2fe2d.ts.net
kubernetes.io/hostname: nas.homenet
kubernetes.io/hostname: home.homenet
volumes:
- name: docker-sock
hostPath:
#path: /var/run/k3s/containerd/containerd.sock
path: /var/run/docker.sock
type: Socket
- name: runner-data
emptyDir:
sizeLimit: 30Gi
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 3
preference:
matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- home.homenet
- weight: 1
preference:
matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- master.tail2fe2d.ts.net
- weight: 2
preference:
matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- nas.homenet
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- home.homenet
- nas.homenet
- master.tail2fe2d.ts.net
containers:
- name: gitea-runner
image: gitea/act_runner:nightly
resources:
requests:
cpu: "100m"
memory: "256Mi"
ephemeral-storage: "1Gi" # reserve ephemeral storage
limits:
cpu: "3000m"
memory: "4Gi"
ephemeral-storage: "28Gi" # hard cap for /data usage
volumeMounts:
- name: docker-sock
mountPath: /var/run/docker.sock
- name: runner-data
mountPath: /data
env:
- name: GITEA_INSTANCE_URL
value: "https://gt.hexor.cy"
@@ -82,4 +149,3 @@ spec:
value: "k8s-runner"
- name: GITEA_RUNNER_LABELS
value: "ubuntu-latest:docker://ghcr.io/catthehacker/ubuntu:act-latest,ubuntu-22.04:docker://ghcr.io/catthehacker/ubuntu:act-22.04,ubuntu-20.04:docker://ghcr.io/catthehacker/ubuntu:act-20.04"

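The GITEA_RUNNER_LABELS value maps each runner label to a container image in the form <label>:docker://<image>. A small parser sketch for reference (hypothetical helper, not part of the repo):

def parse_runner_labels(raw: str) -> dict:
    """Split 'label:docker://image,...' into a {label: image} mapping."""
    out = {}
    for entry in raw.split(','):
        label, _, image = entry.partition(':docker://')
        out[label] = image
    return out

labels = parse_runner_labels(
    "ubuntu-latest:docker://ghcr.io/catthehacker/ubuntu:act-latest"
)
assert labels['ubuntu-latest'] == 'ghcr.io/catthehacker/ubuntu:act-latest'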
View File

@@ -1,5 +1,5 @@
---
apiVersion: external-secrets.io/v1beta1
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: gitea-runner-token
@@ -23,3 +23,37 @@ spec:
key: e475b5ab-ea3c-48a5-bb4c-a6bc552fc064
property: login.password
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: gitea-recapcha-creds
spec:
refreshInterval: 1m
target:
name: gitea-recapcha-creds
deletionPolicy: Delete
template:
type: Opaque
data:
GITEA__service__HCAPTCHA_SITEKEY: |-
{{ .HCAPTCHA_SITEKEY }}
GITEA__service__HCAPTCHA_SECRET: |-
{{ .HCAPTCHA_SECRET }}
data:
- secretKey: HCAPTCHA_SITEKEY
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 89c8d8d2-6b53-42c5-805f-38a341ef163e
property: login.username
- secretKey: HCAPTCHA_SECRET
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 89c8d8d2-6b53-42c5-805f-38a341ef163e
property: login.password

View File

@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: greece-notifier
namespace: argocd
spec:
project: apps
destination:
namespace: greece-notifier
server: https://kubernetes.default.svc
source:
repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
targetRevision: HEAD
path: k8s/apps/greece-notifier
syncPolicy:
automated:
selfHeal: true
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -0,0 +1,51 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: greece-notifier
spec:
replicas: 1
selector:
matchLabels:
app: greece-notifier
template:
metadata:
labels:
app: greece-notifier
spec:
nodeSelector:
kubernetes.io/hostname: nas.homenet
volumes:
- name: data
nfs:
server: nas.homenet
path: /mnt/storage/Storage/k8s/greece-notifier/
readOnly: false
containers:
- name: greece-notifier
image: ultradesu/greece-notifier:master
imagePullPolicy: Always
resources:
requests:
cpu: "100m"
memory: "256Mi"
limits:
cpu: "3000m"
memory: "1Gi"
volumeMounts:
- name: data
mountPath: /data
env:
- name: RUST_LOG
value: "info"
- name: ENDPOINT_BID
value: "56" # Cyprus id
- name: UPDATE_INTERVAL_MIN_SECS
value: "270"
- name: UPDATE_INTERVAL_MAX_SECS
value: "350"
- name: TELOXIDE_TOKEN
valueFrom:
secretKeyRef:
name: greece-notifier-creds
key: TELOXIDE_TOKEN

View File

@@ -0,0 +1,23 @@
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: greece-notifier-creds
spec:
target:
name: greece-notifier-creds
deletionPolicy: Delete
template:
type: Opaque
data:
TELOXIDE_TOKEN: |-
{{ .token }}
data:
- secretKey: token
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 34e8f207-27ad-4b21-b114-84d3f7460a51
property: login.password

View File

@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./external-secrets.yaml
- ./deployment.yaml

View File

@@ -24,6 +24,13 @@ spec:
initContainers:
- name: git-cloner
image: alpine/git
resources:
requests:
memory: "32Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "300m"
command:
- git
- clone
@@ -36,6 +43,13 @@ spec:
containers:
- name: hexound
image: trafex/php-nginx:3.8.0
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "256Mi"
cpu: "300m"
volumeMounts:
- name: hexound-repo
mountPath: /var/www/html

View File

@@ -0,0 +1,37 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hexound-tls-ingress
annotations:
ingressClassName: traefik
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd
acme.cert-manager.io/http01-edit-in-place: "true"
spec:
rules:
- host: hexound.hexor.cy
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: hexound
port:
number: 80
- host: hexound.hexor.ru
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: hexound
port:
number: 80
tls:
- secretName: hexound-tls
hosts:
- hexound.hexor.cy
- hexound.hexor.ru

View File

@@ -5,4 +5,5 @@ resources:
- app.yaml
- deployment.yaml
- service.yaml
- ingress.yaml

View File

@@ -17,6 +17,13 @@ spec:
- name: immich-server
image: ghcr.io/immich-app/immich-server:release
imagePullPolicy: Always
resources:
requests:
memory: "1Gi"
cpu: "500m"
limits:
memory: "4Gi"
cpu: "3000m"
ports:
- containerPort: 2283
env:
@@ -43,13 +50,20 @@ spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
- weight: 90
preference:
matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- home.homenet
- weight: 10
preference:
matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- nas.homenet
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
@@ -72,7 +86,7 @@ spec:
- name: camera
nfs:
server: nas.homenet
path: /mnt/storage/Storage/Photos/Phone/
path: /mnt/storage/Storage/Syncthing-repos/PhoneCamera/
readOnly: true
- name: localtime
hostPath:
@@ -127,7 +141,7 @@ spec:
- key: kubernetes.io/hostname
operator: In
values:
- home.homenet
- nas.homenet
topologySpreadConstraints:
- maxSkew: 2
@@ -140,6 +154,13 @@ spec:
- name: immich-ml
image: ghcr.io/immich-app/immich-machine-learning:release
imagePullPolicy: Always
resources:
requests:
memory: "2Gi"
cpu: "1000m"
limits:
memory: "8Gi"
cpu: "6000m"
env:
- name: TZ
value: Asia/Nicosia
@@ -174,6 +195,13 @@ spec:
containers:
- name: redis
image: redis:6.2-alpine
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "750m"
readinessProbe:
exec:
command: ["redis-cli", "ping"]

k8s/apps/iperf3/app.yaml Normal file
View File

@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: iperf3
namespace: argocd
spec:
project: apps
destination:
namespace: iperf3
server: https://kubernetes.default.svc
source:
repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
targetRevision: HEAD
path: k8s/apps/iperf3
syncPolicy:
automated:
selfHeal: true
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -0,0 +1,92 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: iperf3-server
spec:
selector:
matchLabels:
app: iperf3-server
template:
metadata:
labels:
app: iperf3-server
spec:
serviceAccountName: iperf3-server
subdomain: iperf3
initContainers:
- name: create-service
image: bitnami/kubectl:latest
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
command:
- /bin/bash
- -c
- |
# Clean node name for service name
NODE_CLEAN=$(echo "$NODE_NAME" | cut -d'.' -f1 | tr '[:upper:]' '[:lower:]' | tr '_' '-')
SERVICE_NAME="iperf3-${NODE_CLEAN}"
# Create service for this pod
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: ${SERVICE_NAME}
namespace: iperf3
labels:
app: iperf3-node-service
target-node: "${NODE_NAME}"
spec:
type: ClusterIP
ports:
- name: iperf3
port: 5201
protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
name: ${SERVICE_NAME}
namespace: iperf3
labels:
app: iperf3-node-service
target-node: "${NODE_NAME}"
subsets:
- addresses:
- ip: ${POD_IP}
ports:
- name: iperf3
port: 5201
protocol: TCP
EOF
containers:
- name: iperf3-server
image: networkstatic/iperf3:latest
args: ["-s"]
ports:
- containerPort: 5201
protocol: TCP
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
resources:
requests:
memory: "64Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "750m"
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists

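The init container derives a per-node Service name by sanitizing the node name (cut -d'.' -f1 | tr '[:upper:]' '[:lower:]' | tr '_' '-'). The same transform in Python, for reference:

def service_name_for(node_name: str, prefix: str = 'iperf3') -> str:
    """Mirror the shell's node-name cleanup used for per-node Service names."""
    node_clean = node_name.split('.')[0].lower().replace('_', '-')
    return f'{prefix}-{node_clean}'

assert service_name_for('nas.homenet') == 'iperf3-nas'
assert service_name_for('master.tail2fe2d.ts.net') == 'iperf3-master'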
View File

@@ -0,0 +1,92 @@
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: iperf3-exporter
labels:
app: iperf3-exporter
spec:
selector:
matchLabels:
app: iperf3-exporter
template:
metadata:
labels:
app: iperf3-exporter
spec:
serviceAccountName: iperf3-server
initContainers:
- name: create-exporter-service
image: bitnami/kubectl:latest
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
command:
- /bin/bash
- -c
- |
NODE_CLEAN=$(echo "$NODE_NAME" | cut -d'.' -f1 | tr '[:upper:]' '[:lower:]' | tr '_' '-')
SERVICE_NAME="iperf3-exporter-${NODE_CLEAN}"
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: ${SERVICE_NAME}
namespace: iperf3
labels:
app: iperf3-exporter-service
target-node: "${NODE_NAME}"
spec:
type: ClusterIP
ports:
- name: metrics
port: 9579
protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
name: ${SERVICE_NAME}
namespace: iperf3
labels:
app: iperf3-exporter-service
target-node: "${NODE_NAME}"
subsets:
- addresses:
- ip: ${POD_IP}
ports:
- name: metrics
port: 9579
protocol: TCP
EOF
containers:
- name: iperf3-exporter
image: ghcr.io/edgard/iperf3_exporter:1.2.2
ports:
- containerPort: 9579
name: metrics
protocol: TCP
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "300m"
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists

View File

@@ -0,0 +1,15 @@
---
apiVersion: v1
kind: Service
metadata:
name: iperf3-exporter
labels:
app: iperf3-exporter
spec:
selector:
app: iperf3-exporter
ports:
- name: metrics
protocol: TCP
port: 9579
targetPort: 9579

View File

@@ -0,0 +1,11 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- rbac.yaml
- daemonset.yaml
- service-headless.yaml
- iperf3-exporter-daemonset.yaml
- iperf3-exporter-service.yaml
- servicemonitor.yaml

k8s/apps/iperf3/rbac.yaml Normal file
View File

@@ -0,0 +1,36 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: iperf3-server
namespace: iperf3
labels:
app: iperf3-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: iperf3-service-manager
namespace: iperf3
labels:
app: iperf3-server
rules:
- apiGroups: [""]
resources: ["services", "endpoints"]
verbs: ["get", "list", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: iperf3-service-manager
namespace: iperf3
labels:
app: iperf3-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: iperf3-service-manager
subjects:
- kind: ServiceAccount
name: iperf3-server
namespace: iperf3

View File

@@ -0,0 +1,14 @@
---
apiVersion: v1
kind: Service
metadata:
name: iperf3
spec:
clusterIP: None
selector:
app: iperf3-server
ports:
- name: iperf3
protocol: TCP
port: 5201
targetPort: 5201

View File

@@ -0,0 +1,122 @@
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: iperf3-exporter
labels:
app: iperf3-exporter
release: prometheus
spec:
selector:
matchLabels:
app: iperf3-exporter
endpoints:
- port: metrics
path: /probe
interval: 5m
scrapeTimeout: 30s
params:
target: ['iperf3-ch.iperf3.svc.cluster.local:5201']
period: ['10s']
streams: ['4']
relabelings:
- sourceLabels: [__param_target]
targetLabel: instance
- targetLabel: __address__
replacement: iperf3-exporter-ch.iperf3.svc:9579
- port: metrics
path: /probe
interval: 5m
scrapeTimeout: 30s
params:
target: ['iperf3-us.iperf3.svc.cluster.local:5201']
period: ['10s']
streams: ['4']
relabelings:
- sourceLabels: [__param_target]
targetLabel: instance
- targetLabel: __address__
replacement: iperf3-exporter-us.iperf3.svc:9579
- port: metrics
path: /probe
interval: 5m
scrapeTimeout: 30s
params:
target: ['iperf3-iris.iperf3.svc.cluster.local:5201']
period: ['10s']
streams: ['4']
relabelings:
- sourceLabels: [__param_target]
targetLabel: instance
- targetLabel: __address__
replacement: iperf3-exporter-iris.iperf3.svc:9579
- port: metrics
path: /probe
interval: 5m
scrapeTimeout: 30s
params:
target: ['iperf3-home.iperf3.svc.cluster.local:5201']
period: ['10s']
streams: ['4']
relabelings:
- sourceLabels: [__param_target]
targetLabel: instance
- targetLabel: __address__
replacement: iperf3-exporter-home.iperf3.svc:9579
- port: metrics
path: /probe
interval: 5m
scrapeTimeout: 30s
params:
target: ['iperf3-master.iperf3.svc.cluster.local:5201']
period: ['10s']
streams: ['4']
relabelings:
- sourceLabels: [__param_target]
targetLabel: instance
- targetLabel: __address__
replacement: iperf3-exporter-master.iperf3.svc:9579
- port: metrics
path: /probe
interval: 5m
scrapeTimeout: 30s
params:
target: ['iperf3-it.iperf3.svc.cluster.local:5201']
period: ['10s']
streams: ['4']
relabelings:
- sourceLabels: [__param_target]
targetLabel: instance
- targetLabel: __address__
replacement: iperf3-exporter-it.iperf3.svc:9579
- port: metrics
path: /probe
interval: 5m
scrapeTimeout: 30s
params:
target: ['iperf3-nas.iperf3.svc.cluster.local:5201']
period: ['10s']
streams: ['4']
relabelings:
- sourceLabels: [__param_target]
targetLabel: instance
- targetLabel: __address__
replacement: iperf3-exporter-nas.iperf3.svc:9579
- port: metrics
path: /probe
interval: 5m
scrapeTimeout: 30s
params:
target: ['iperf3-spb.iperf3.svc.cluster.local:5201']
period: ['10s']
streams: ['4']
relabelings:
- sourceLabels: [__param_target]
targetLabel: instance
- targetLabel: __address__
replacement: iperf3-exporter-spb.iperf3.svc:9579
metricRelabelings:
- sourceLabels: [__name__]
regex: iperf3_(.+)
targetLabel: __name__
replacement: network_${1}

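The metricRelabelings block renames every iperf3_* series to network_* before ingestion; Prometheus anchors the relabel regex on both ends. The equivalent rewrite with Python's re module, for reference:

import re

def relabel(metric_name: str) -> str:
    """Apply the ServiceMonitor's iperf3_(.+) -> network_\\1 rename."""
    return re.sub(r'^iperf3_(.+)$', r'network_\1', metric_name)

assert relabel('iperf3_sent_bytes') == 'network_sent_bytes'
assert relabel('up') == 'up'  # non-matching names pass through unchanged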
View File

@@ -1,5 +1,5 @@
---
apiVersion: external-secrets.io/v1beta1
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: vpn-creds
@@ -76,11 +76,14 @@ spec:
secretKeyRef:
name: vpn-creds
key: ss_link
command: ["/bin/bash", "-c", "rm /etc/shadowsocks-rust/config.json && sslocal --online-config-url $SS_LINK --local-addr 127.0.0.1:8081 -U --protocol http"]
command: ["/bin/bash", "-c", "rm /etc/shadowsocks-rust/config.json && sslocal --server-url $SS_LINK --local-addr 127.0.0.1:8081 -U --protocol http"]
resources:
requests:
memory: "64Mi"
cpu: "200m"
cpu: "300m"
limits:
memory: "128Mi"
cpu: "300m"
---
apiVersion: v1
kind: Service

View File

@@ -1,5 +1,12 @@
image:
tag: 10.10.7
tag: 10.11.4
resources:
requests:
memory: "2Gi"
cpu: "1000m"
limits:
memory: "8Gi"
cpu: "6000m"
nodeSelector:
kubernetes.io/hostname: master.tail2fe2d.ts.net
persistence:
@@ -29,8 +36,40 @@ ingress:
paths:
- path: /
pathType: Prefix
- host: us.hexor.cy
paths:
- path: /
pathType: Prefix
- host: ch.hexor.cy
paths:
- path: /
pathType: Prefix
- host: jp.hexor.cy
paths:
- path: /
pathType: Prefix
- host: spb.hexor.cy
paths:
- path: /
pathType: Prefix
- host: cy.hexor.cy
paths:
- path: /
pathType: Prefix
- host: am.hexor.cy
paths:
- path: /
pathType: Prefix
- host: de.hexor.cy
paths:
- path: /
pathType: Prefix
- host: it.hexor.cy
paths:
- path: /
pathType: Prefix
tls:
- secretName: jellyfin-tls
hosts:
- 'jf.hexor.cy'
- '*.hexor.cy'

View File

@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: k8s-secrets
namespace: argocd
spec:
project: apps
destination:
namespace: k8s-secret
server: https://kubernetes.default.svc
source:
repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
targetRevision: HEAD
path: k8s/apps/k8s-secrets
syncPolicy:
automated:
selfHeal: true
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -0,0 +1,63 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: secret-reader
labels:
app: secret-reader
spec:
replicas: 1
selector:
matchLabels:
app: secret-reader
template:
metadata:
labels:
app: secret-reader
spec:
serviceAccountName: secret-reader
nodeSelector:
kubernetes.io/os: linux
containers:
- name: secret-reader
image: ultradesu/k8s-secrets:0.1.1
imagePullPolicy: Always
args:
- "--secrets"
- "openai-creds"
- "--namespace"
- "k8s-secret"
- "--port"
- "3000"
ports:
- containerPort: 3000
name: http
env:
- name: RUST_LOG
value: "info"
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "150m"
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
periodSeconds: 10
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 5
periodSeconds: 5
securityContext:
runAsNonRoot: true
runAsUser: 1000
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL

View File

@@ -0,0 +1,44 @@
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: openai-creds
spec:
target:
name: openai-creds
deletionPolicy: Delete
template:
type: Opaque
data:
USER: |-
{{ .user }}
PASS: |-
{{ .pass }}
TOTP: |-
{{ .totp }}
data:
- secretKey: user
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: a485f323-fd47-40ee-a5cf-40891b1f963c
property: login.username
- secretKey: pass
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: a485f323-fd47-40ee-a5cf-40891b1f963c
property: login.password
- secretKey: totp
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: a485f323-fd47-40ee-a5cf-40891b1f963c
property: login.totp

View File

@@ -0,0 +1,20 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: secret-reader
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: secret-reader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: secret-reader
subjects:
- kind: ServiceAccount
name: secret-reader

View File

@@ -0,0 +1,6 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: secret-reader
labels:
app: secret-reader

View File

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: secret-reader
labels:
app: secret-reader
spec:
type: ClusterIP
selector:
app: secret-reader
ports:
- port: 80
targetPort: 3000
protocol: TCP
name: http

View File

@@ -23,6 +23,13 @@ spec:
- name: khm
image: 'ultradesu/khm:latest'
imagePullPolicy: Always
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "1Gi"
cpu: "750m"
command:
- /bin/sh
- -c

View File

@@ -1,5 +1,5 @@
---
apiVersion: external-secrets.io/v1beta1
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: khm-pg-creds

k8s/apps/ollama/app.yaml Normal file
View File

@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: ollama
namespace: argocd
spec:
project: apps
destination:
namespace: ollama
server: https://kubernetes.default.svc
source:
repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
targetRevision: HEAD
path: k8s/apps/ollama
syncPolicy:
automated:
selfHeal: true
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -0,0 +1,33 @@
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: oidc-secret
spec:
target:
name: oidc-secret
deletionPolicy: Delete
template:
type: Opaque
data:
OAUTH_CLIENT_SECRET: |-
{{ .OAUTH_CLIENT_SECRET }}
OAUTH_CLIENT_ID: |-
{{ .OAUTH_CLIENT_ID }}
data:
- secretKey: OAUTH_CLIENT_SECRET
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 97959a8b-e3b2-4b34-bc54-ddb6476a12ea
property: fields[0].value
- secretKey: OAUTH_CLIENT_ID
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 97959a8b-e3b2-4b34-bc54-ddb6476a12ea
property: fields[1].value

View File

@@ -0,0 +1,21 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- external-secrets.yaml
helmCharts:
- name: ollama
repo: https://otwld.github.io/ollama-helm/
version: 0.4.0
releaseName: ollama
namespace: ollama
valuesFile: ollama-values.yaml
includeCRDs: true
- name: open-webui
repo: https://helm.openwebui.com/
version: 8.14.0
releaseName: openweb-ui
namespace: ollama
valuesFile: openweb-ui-values.yaml
includeCRDs: true

View File

@@ -0,0 +1,8 @@
image:
repository: ollama/ollama
pullPolicy: Always
tag: "latest"
nodeSelector:
kubernetes.io/hostname: master.tail2fe2d.ts.net
ingress:
enabled: false

View File

@@ -0,0 +1,52 @@
clusterDomain: ai.hexor.cy
extraEnvVars:
GLOBAL_LOG_LEVEL: debug
OAUTH_PROVIDER_NAME: authentik
OPENID_PROVIDER_URL: https://idm.hexor.cy/application/o/openwebui/.well-known/openid-configuration
OPENID_REDIRECT_URI: https://ai.hexor.cy/oauth/oidc/callback
WEBUI_URL: https://ai.hexor.cy
# Allows auto-creation of new users using OAuth. Must be paired with ENABLE_LOGIN_FORM=false.
ENABLE_OAUTH_SIGNUP: true
# Disables user/password login form. Required when ENABLE_OAUTH_SIGNUP=true.
ENABLE_LOGIN_FORM: false
OAUTH_MERGE_ACCOUNTS_BY_EMAIL: true
extraEnvFrom:
- secretRef:
name: oidc-secret
nodeSelector:
kubernetes.io/hostname: master.tail2fe2d.ts.net
ollamaUrls:
- http://ollama.ollama.svc:11434
ollama:
enabled: false
ollama:
gpu:
enabled: false
models:
pull:
- qwen3-vl:8b
run:
- qwen3-vl:8b
pipelines:
enabled: true
tika:
enabled: true
websocket:
enabled: true
ingress:
enabled: true
class: traefik
annotations:
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd
host: "ai.hexor.cy"
tls:
- hosts:
- '*.hexor.cy'
secretName: ollama-tls

View File

@@ -1,8 +1,8 @@
---
apiVersion: external-secrets.io/v1beta1
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: postgres-creds
name: postgres-and-oauth-creds
spec:
target:
name: postgres-creds

View File

@@ -8,12 +8,12 @@ nodeSelector:
kubernetes.io/hostname: nas.homenet
resources:
limits:
cpu: 1000m
memory: 1Gi
requests:
cpu: 200m
memory: 256Mi
memory: "512Mi"
cpu: "200m"
limits:
memory: "2Gi"
cpu: "1500m"
service:
type: ClusterIP

View File

@@ -1,8 +1,22 @@
image:
tag: 2.15.3
tag: 2.19.3
resources:
requests:
memory: "1Gi"
cpu: "500m"
limits:
memory: "4Gi"
cpu: "3000m"
initContainers:
install-tesseract-langs:
image: ghcr.io/paperless-ngx/paperless-ngx:2.15.1
image: ghcr.io/paperless-ngx/paperless-ngx:2.18.2
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "1Gi"
cpu: "750m"
command: ["/bin/sh", "-c"]
args:
- apt-get update && apt-get install -y --reinstall tesseract-ocr-rus tesseract-ocr-jpn tesseract-ocr-chi-sim tesseract-ocr-eng tesseract-ocr-ell && cp -v -r /usr/share/tesseract-ocr/5/tessdata/* /custom-tessdata/

View File

@@ -8,12 +8,12 @@ nodeSelector:
kubernetes.io/hostname: nas.homenet
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 100m
memory: 128Mi
memory: "256Mi"
cpu: "100m"
limits:
memory: "1Gi"
cpu: "750m"
service:
type: ClusterIP

View File

@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: pasarguard
namespace: argocd
spec:
project: apps
destination:
namespace: pasarguard
server: https://kubernetes.default.svc
source:
repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
targetRevision: HEAD
path: k8s/apps/pasarguard
syncPolicy:
automated:
selfHeal: true
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -0,0 +1,14 @@
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: pasarguard-tls
labels:
app: pasarguard
spec:
secretName: pasarguard-tls
issuerRef:
name: letsencrypt
kind: ClusterIssuer
dnsNames:
- ps.hexor.cy

View File

@@ -0,0 +1,212 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: pasarguard-scripts-ingress
labels:
app: pasarguard-node-ingress
data:
init-uuid-ingress.sh: |
#!/bin/bash
set -e
echo "Started"
# NODE_NAME is already set via environment variable
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
# Get DNS name from node label xray-public-address
DNS_NAME=$(kubectl get node "${NODE_NAME}" -o jsonpath='{.metadata.labels.xray-public-address}')
if [ -z "${DNS_NAME}" ]; then
echo "ERROR: Node ${NODE_NAME} does not have label 'xray-public-address'"
exit 1
fi
echo "Node: ${NODE_NAME}"
echo "DNS Name from label: ${DNS_NAME}"
# Use DNS name for ConfigMap name to ensure uniqueness
CONFIGMAP_NAME="node-uuid-ingress-${DNS_NAME//./-}"
echo "Checking ConfigMap: ${CONFIGMAP_NAME}"
# Check if ConfigMap exists and get UUID
if kubectl get configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" &>/dev/null; then
echo "ConfigMap exists, reading UUID..."
API_KEY=$(kubectl get configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" -o jsonpath='{.data.API_KEY}')
if [ -z "${API_KEY}" ]; then
echo "UUID not found in ConfigMap, generating new one..."
API_KEY=$(cat /proc/sys/kernel/random/uuid)
kubectl patch configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" --type merge -p "{\"data\":{\"API_KEY\":\"${API_KEY}\"}}"
else
echo "Using existing UUID from ConfigMap"
fi
else
echo "ConfigMap does not exist, creating new one..."
API_KEY=$(cat /proc/sys/kernel/random/uuid)
kubectl create configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" \
--from-literal=API_KEY="${API_KEY}" \
--from-literal=NODE_NAME="${NODE_NAME}"
fi
# Save UUID and node info to shared volume for the main container
echo -n "${API_KEY}" > /shared/api-key
echo -n "${NODE_NAME}" > /shared/node-name
echo -n "${CONFIGMAP_NAME}" > /shared/configmap-name
echo "UUID initialized: ${API_KEY}"
echo "Node name: ${NODE_NAME}"
echo "ConfigMap: ${CONFIGMAP_NAME}"
# Create Certificate for this node using DNS name from label
CERT_NAME="pasarguard-node-ingress-${DNS_NAME//./-}"
echo "Creating Certificate: ${CERT_NAME} for ${DNS_NAME}"
# Check if Certificate already exists
if ! kubectl get certificate "${CERT_NAME}" -n "${NAMESPACE}" &>/dev/null; then
echo "Certificate does not exist, creating..."
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: ${CERT_NAME}
namespace: ${NAMESPACE}
spec:
secretName: ${CERT_NAME}-tls
issuerRef:
name: letsencrypt
kind: ClusterIssuer
dnsNames:
- ${DNS_NAME}
EOF
else
echo "Certificate already exists"
fi
# Wait for certificate to be ready
echo "Waiting for certificate to be ready..."
for i in {1..600}; do
if kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" &>/dev/null; then
echo "Certificate secret is ready!"
break
fi
echo "Waiting for certificate... ($i/600)"
sleep 1
done
if ! kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" &>/dev/null; then
echo "WARNING: Certificate secret not ready after 600 seconds"
else
# Extract certificate and key from secret to shared volume
echo "Extracting certificate and key..."
kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" -o jsonpath='{.data.tls\.crt}' | base64 -d > /shared/tls.crt
kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" -o jsonpath='{.data.tls\.key}' | base64 -d > /shared/tls.key
echo "Certificate and key extracted successfully."
cat /shared/tls.crt
fi
# Create ClusterIP Service for this node (pod selector based)
NODE_SHORT_NAME="${NODE_NAME%%.*}"
SERVICE_NAME="${NODE_SHORT_NAME}-ingress"
echo "Creating Service: ${SERVICE_NAME} for node ${NODE_NAME} (short: ${NODE_SHORT_NAME})"
# Create Service with pod selector including node name
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
name: ${SERVICE_NAME}
namespace: ${NAMESPACE}
labels:
app: pasarguard-node-ingress
node: ${NODE_NAME}
spec:
type: ClusterIP
selector:
app: pasarguard-node-ingress
node-name: ${NODE_SHORT_NAME}
ports:
- name: proxy
port: 443
protocol: TCP
targetPort: 443
- name: api
port: 62050
protocol: TCP
targetPort: 62050
EOF
echo "Service created: ${SERVICE_NAME}.${NAMESPACE}.svc.cluster.local"
# Create IngressRouteTCP for this DNS name with TLS passthrough
INGRESS_NAME="pasarguard-tcp-${DNS_NAME//./-}"
echo "Creating IngressRouteTCP: ${INGRESS_NAME} for ${DNS_NAME}"
cat <<EOF | kubectl apply -f -
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
name: ${INGRESS_NAME}
namespace: ${NAMESPACE}
labels:
app: pasarguard-node-ingress
node: ${NODE_NAME}
spec:
entryPoints:
- websecure
routes:
- match: HostSNI(\`${DNS_NAME}\`)
services:
- name: ${SERVICE_NAME}
port: 443
tls:
passthrough: true
EOF
echo "IngressRouteTCP created: ${INGRESS_NAME}"
echo "Traffic to ${DNS_NAME}:443 will be routed to ${SERVICE_NAME}:443"
# Create second IngressRouteTCP for API port 62051
INGRESS_API_NAME="pasarguard-api-${DNS_NAME//./-}"
echo "Creating IngressRouteTCP for API: ${INGRESS_API_NAME} for ${DNS_NAME}:62051"
cat <<EOF | kubectl apply -f -
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
name: ${INGRESS_API_NAME}
namespace: ${NAMESPACE}
labels:
app: pasarguard-node-ingress
node: ${NODE_NAME}
spec:
entryPoints:
- pasarguard-api
routes:
- match: HostSNI(\`${DNS_NAME}\`)
services:
- name: ${SERVICE_NAME}
port: 62050
tls:
passthrough: true
EOF
echo "IngressRouteTCP API created: ${INGRESS_API_NAME}"
echo "Traffic to ${DNS_NAME}:62051 will be routed to ${SERVICE_NAME}:62050"
pasarguard-start.sh: |
#!/bin/sh
# Read API_KEY from shared volume created by init container
if [ -f /shared/api-key ]; then
export API_KEY=$(cat /shared/api-key)
echo "Loaded API_KEY from shared volume"
else
echo "WARNING: API_KEY file not found, using default"
fi
cd /app
exec ./main
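
A quick sanity check of what the init container wrote (a sketch; it assumes the pasarguard namespace and a hypothetical node whose xray-public-address label is vpn1.example.com):

# List the per-node UUID ConfigMaps managed by init-uuid-ingress.sh
kubectl -n pasarguard get configmaps -o name | grep node-uuid-ingress
# Read back one node's API key; dots in the DNS name become dashes
kubectl -n pasarguard get configmap node-uuid-ingress-vpn1-example-com -o jsonpath='{.data.API_KEY}'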

View File

@@ -0,0 +1,264 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: pasarguard-scripts
labels:
app: pasarguard-node
data:
init-uuid.sh: |
#!/bin/bash
set -e
echo "Started"
# NODE_NAME is already set via environment variable
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
# Get DNS name from node label xray-node-address
DNS_NAME=$(kubectl get node "${NODE_NAME}" -o jsonpath='{.metadata.labels.xray-node-address}')
if [ -z "${DNS_NAME}" ]; then
echo "ERROR: Node ${NODE_NAME} does not have label 'xray-node-address'"
exit 1
fi
echo "Node: ${NODE_NAME}"
echo "DNS Name from label: ${DNS_NAME}"
# Use DNS name for ConfigMap name to ensure uniqueness
CONFIGMAP_NAME="node-uuid-${DNS_NAME//./-}"
echo "Checking ConfigMap: ${CONFIGMAP_NAME}"
# Check if ConfigMap exists and get UUID
if kubectl get configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" &>/dev/null; then
echo "ConfigMap exists, reading UUID..."
API_KEY=$(kubectl get configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" -o jsonpath='{.data.API_KEY}')
if [ -z "${API_KEY}" ]; then
echo "UUID not found in ConfigMap, generating new one..."
API_KEY=$(cat /proc/sys/kernel/random/uuid)
kubectl patch configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" --type merge -p "{\"data\":{\"API_KEY\":\"${API_KEY}\"}}"
else
echo "Using existing UUID from ConfigMap"
fi
else
echo "ConfigMap does not exist, creating new one..."
API_KEY=$(cat /proc/sys/kernel/random/uuid)
kubectl create configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" \
--from-literal=API_KEY="${API_KEY}" \
--from-literal=NODE_NAME="${NODE_NAME}"
fi
# Save UUID and node info to shared volume for the main container
echo -n "${API_KEY}" > /shared/api-key
echo -n "${NODE_NAME}" > /shared/node-name
echo -n "${CONFIGMAP_NAME}" > /shared/configmap-name
echo "UUID initialized: ${API_KEY}"
echo "Node name: ${NODE_NAME}"
echo "ConfigMap: ${CONFIGMAP_NAME}"
# Create Certificate for this node using DNS name from label
CERT_NAME="pasarguard-node-${DNS_NAME//./-}"
echo "Creating Certificate: ${CERT_NAME} for ${DNS_NAME}"
# Check if Certificate already exists
if ! kubectl get certificate "${CERT_NAME}" -n "${NAMESPACE}" &>/dev/null; then
echo "Certificate does not exist, creating..."
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: ${CERT_NAME}
namespace: ${NAMESPACE}
spec:
secretName: ${CERT_NAME}-tls
issuerRef:
name: letsencrypt
kind: ClusterIssuer
dnsNames:
- ${DNS_NAME}
EOF
else
echo "Certificate already exists"
fi
# Wait for certificate to be ready
echo "Waiting for certificate to be ready..."
for i in {1..600}; do
if kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" &>/dev/null; then
echo "Certificate secret is ready!"
break
fi
echo "Waiting for certificate... ($i/600)"
sleep 1
done
if ! kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" &>/dev/null; then
echo "WARNING: Certificate secret not ready after 600 seconds"
else
# Extract certificate and key from secret to shared volume
echo "Extracting certificate and key..."
kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" -o jsonpath='{.data.tls\.crt}' | base64 -d > /shared/tls.crt
kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" -o jsonpath='{.data.tls\.key}' | base64 -d > /shared/tls.key
echo "Certificate and key extracted successfully."
cat /shared/tls.crt
fi
# Create individual Service and Endpoints for this node
# Take only first part of node name before first dot
NODE_SHORT_NAME="${NODE_NAME%%.*}"
SERVICE_NAME="${NODE_SHORT_NAME}"
# Get node internal IP (take only first IP if multiple)
NODE_IP=$(kubectl get node "${NODE_NAME}" -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}' | awk '{print $1}')
echo "Creating Service: ${SERVICE_NAME} for node ${NODE_NAME} (short: ${NODE_SHORT_NAME}) with IP ${NODE_IP}"
# Create Service without selector
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
name: ${SERVICE_NAME}
namespace: ${NAMESPACE}
labels:
app: pasarguard-node
node: ${NODE_NAME}
spec:
clusterIP: None
ports:
- name: api
port: 62050
protocol: TCP
targetPort: 62050
- name: metrics
port: 9550
protocol: TCP
targetPort: 9550
---
apiVersion: v1
kind: Endpoints
metadata:
name: ${SERVICE_NAME}
namespace: ${NAMESPACE}
labels:
app: pasarguard-node
node: ${NODE_NAME}
subsets:
- addresses:
- ip: ${NODE_IP}
nodeName: ${NODE_NAME}
ports:
- name: api
port: 62050
protocol: TCP
- name: metrics
port: 9550
protocol: TCP
EOF
echo "Service created: ${SERVICE_NAME}.${NAMESPACE}.svc.cluster.local -> ${NODE_IP}:62050"
exporter-start.sh: |
#!/bin/sh
# Install required tools
apk add --no-cache wget curl iproute2-ss bash
# Download v2ray-exporter
echo "Downloading v2ray-exporter..."
ARCH=$(uname -m)
case $ARCH in
x86_64)
BINARY_ARCH="amd64"
;;
aarch64|arm64)
BINARY_ARCH="arm64"
;;
*)
echo "Unsupported architecture: $ARCH"
exit 1
;;
esac
echo "Detected architecture: $ARCH, using binary: v2ray-exporter_linux_$BINARY_ARCH"
wget -O /tmp/v2ray-exporter "https://github.com/wi1dcard/v2ray-exporter/releases/download/v0.6.0/v2ray-exporter_linux_$BINARY_ARCH"
mv /tmp/v2ray-exporter /usr/local/bin/v2ray-exporter
chmod +x /usr/local/bin/v2ray-exporter
# Wait for initial API port file
echo "Waiting for initial xray API port file..."
while [ ! -f /shared/xray-api-port ]; do
echo "Waiting for API port file..."
sleep 2
done
# Main loop - restart exporter if it crashes or port changes
while true; do
if [ -f /shared/xray-api-port ]; then
API_PORT=$(cat /shared/xray-api-port)
if [ -n "$API_PORT" ]; then
echo "Starting v2ray-exporter with endpoint 127.0.0.1:$API_PORT"
/usr/local/bin/v2ray-exporter --v2ray-endpoint "127.0.0.1:$API_PORT" --listen ":9550" &
EXPORTER_PID=$!
# Wait for exporter to exit or port file to change
while kill -0 $EXPORTER_PID 2>/dev/null; do
if [ -f /shared/xray-api-port ]; then
NEW_PORT=$(cat /shared/xray-api-port)
if [ "$NEW_PORT" != "$API_PORT" ]; then
echo "API port changed from $API_PORT to $NEW_PORT, restarting exporter"
kill $EXPORTER_PID 2>/dev/null
wait $EXPORTER_PID 2>/dev/null
break
fi
fi
sleep 5
done
echo "Exporter stopped, restarting..."
wait $EXPORTER_PID 2>/dev/null
fi
fi
sleep 2
done
pasarguard-start.sh: |
#!/bin/sh
# Read API_KEY from shared volume created by init container
if [ -f /shared/api-key ]; then
export API_KEY=$(cat /shared/api-key)
echo "Loaded API_KEY from shared volume"
else
echo "WARNING: API_KEY file not found, using default"
fi
cd /app
# Start main process in background
./main &
MAIN_PID=$!
# Start continuous port monitoring in background
{
sleep 10 # Wait for xray to start initially
LAST_PORT=""
while true; do
API_PORT=$(netstat -tlpn | grep xray | grep 127.0.0.1 | awk '{print $4}' | cut -d: -f2 | head -1)
if [ -n "$API_PORT" ] && [ "$API_PORT" != "$LAST_PORT" ]; then
echo "Found xray API port: $API_PORT"
echo -n "$API_PORT" > /shared/xray-api-port
LAST_PORT="$API_PORT"
fi
sleep 5 # Check every 5 seconds
done
} &
PORT_MONITOR_PID=$!
# Wait for main process to finish
wait $MAIN_PID
# Clean up port monitor
kill $PORT_MONITOR_PID 2>/dev/null
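
To confirm the per-node headless Service that init-uuid.sh publishes, the Endpoints object should carry the node's InternalIP (a sketch, assuming the pasarguard namespace and a hypothetical node short name vpn1):

kubectl -n pasarguard get service vpn1
kubectl -n pasarguard get endpoints vpn1 -o jsonpath='{.subsets[0].addresses[0].ip}'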

View File

@@ -0,0 +1,211 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: pasarguard-node-ingress
labels:
app: pasarguard-node-ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pasarguard-node-ingress-configmap
labels:
app: pasarguard-node-ingress
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "create", "update", "patch"]
- apiGroups: ["cert-manager.io"]
resources: ["certificates"]
verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["services", "endpoints"]
verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: ["traefik.io", "traefik.containo.us"]
resources: ["ingressroutetcps"]
verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pasarguard-node-ingress-configmap
labels:
app: pasarguard-node-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pasarguard-node-ingress-configmap
subjects:
- kind: ServiceAccount
name: pasarguard-node-ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pasarguard-node-ingress-reader
labels:
app: pasarguard-node-ingress
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: pasarguard-node-ingress-reader
labels:
app: pasarguard-node-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pasarguard-node-ingress-reader
subjects:
- kind: ServiceAccount
name: pasarguard-node-ingress
namespace: pasarguard
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: pasarguard-node-ingress
labels:
app: pasarguard-node-ingress
spec:
selector:
matchLabels:
app: pasarguard-node-ingress
revisionHistoryLimit: 3
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: pasarguard-node-ingress
spec:
serviceAccountName: pasarguard-node-ingress
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: xray-public-address
operator: Exists
initContainers:
- name: label-pod
image: bitnami/kubectl:latest
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
command:
- /bin/bash
- -c
- |
# Add node label to pod
NODE_SHORT=$(echo ${NODE_NAME} | cut -d. -f1)
kubectl label pod ${POD_NAME} -n ${POD_NAMESPACE} node-name=${NODE_SHORT} --overwrite
- name: init-uuid
image: bitnami/kubectl:latest
env:
- name: GODEBUG
value: "x509sha1=1"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
command:
- /bin/bash
- /scripts/init-uuid-ingress.sh
volumeMounts:
- name: shared-data
mountPath: /shared
- name: scripts
mountPath: /scripts
containers:
- name: pasarguard-node
image: 'pasarguard/node:v0.1.3'
imagePullPolicy: Always
command:
- /bin/sh
- /scripts/pasarguard-start.sh
ports:
- name: api
containerPort: 62050
protocol: TCP
- name: proxy
containerPort: 443
protocol: TCP
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: NODE_HOST
value: "0.0.0.0"
- name: SERVICE_PORT
value: "62050"
- name: SERVICE_PROTOCOL
value: "grpc"
- name: DEBUG
value: "true"
- name: SSL_CERT_FILE
value: "/shared/tls.crt"
- name: SSL_KEY_FILE
value: "/shared/tls.key"
- name: XRAY_EXECUTABLE_PATH
value: "/usr/local/bin/xray"
- name: XRAY_ASSETS_PATH
value: "/usr/local/share/xray"
- name: API_KEY
value: "change-this-to-a-secure-uuid"
livenessProbe:
tcpSocket:
port: 62050
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
tcpSocket:
port: 62050
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "750m"
volumeMounts:
- name: shared-data
mountPath: /shared
readOnly: false
- name: scripts
mountPath: /scripts
volumes:
- name: shared-data
emptyDir: {}
- name: scripts
configMap:
name: pasarguard-scripts-ingress
defaultMode: 0755
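
The DaemonSet only schedules onto nodes carrying the xray-public-address label, and the init script derives every per-node resource name from its value, so onboarding a node is one label away (a sketch; node and DNS names are placeholders):

# Enroll a node into the ingress DaemonSet
kubectl label node vpn1.tail2fe2d.ts.net xray-public-address=vpn1.example.com
# Removing the label evicts the pod again
kubectl label node vpn1.tail2fe2d.ts.net xray-public-address-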

View File

@@ -0,0 +1,221 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: pasarguard-node
labels:
app: pasarguard-node
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pasarguard-node-configmap
labels:
app: pasarguard-node
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "create", "update", "patch"]
- apiGroups: ["cert-manager.io"]
resources: ["certificates"]
verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["services", "endpoints"]
verbs: ["get", "list", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pasarguard-node-configmap
labels:
app: pasarguard-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pasarguard-node-configmap
subjects:
- kind: ServiceAccount
name: pasarguard-node
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pasarguard-node-reader
labels:
app: pasarguard-node
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: pasarguard-node-reader
labels:
app: pasarguard-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pasarguard-node-reader
subjects:
- kind: ServiceAccount
name: pasarguard-node
namespace: pasarguard
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: pasarguard-node
labels:
app: pasarguard-node
spec:
selector:
matchLabels:
app: pasarguard-node
revisionHistoryLimit: 3
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: pasarguard-node
spec:
serviceAccountName: pasarguard-node
hostNetwork: true
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: xray-node-address
operator: Exists
initContainers:
- name: init-uuid
image: bitnami/kubectl:latest
env:
- name: GODEBUG
value: "x509sha1=1"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
command:
- /bin/bash
- /scripts/init-uuid.sh
volumeMounts:
- name: shared-data
mountPath: /shared
- name: scripts
mountPath: /scripts
containers:
- name: pasarguard-node
image: 'pasarguard/node:v0.1.3'
imagePullPolicy: Always
command:
- /bin/sh
- /scripts/pasarguard-start.sh
ports:
- name: api
containerPort: 62050
protocol: TCP
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: NODE_HOST
value: "0.0.0.0"
- name: SERVICE_PORT
value: "62050"
- name: SERVICE_PROTOCOL
value: "grpc"
- name: DEBUG
value: "true"
- name: SSL_CERT_FILE
value: "/shared/tls.crt"
- name: SSL_KEY_FILE
value: "/shared/tls.key"
- name: XRAY_EXECUTABLE_PATH
value: "/usr/local/bin/xray"
- name: XRAY_ASSETS_PATH
value: "/usr/local/share/xray"
- name: API_KEY
value: "change-this-to-a-secure-uuid"
livenessProbe:
tcpSocket:
port: 62050
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
tcpSocket:
port: 62050
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "750m"
volumeMounts:
- name: shared-data
mountPath: /shared
readOnly: false
- name: scripts
mountPath: /scripts
- name: xray-exporter
image: alpine:3.18
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- /scripts/exporter-start.sh
ports:
- name: metrics
containerPort: 9550
protocol: TCP
livenessProbe:
httpGet:
path: /scrape
port: metrics
initialDelaySeconds: 60
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 3
readinessProbe:
httpGet:
path: /scrape
port: metrics
initialDelaySeconds: 45
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "150m"
volumeMounts:
- name: shared-data
mountPath: /shared
readOnly: true
- name: scripts
mountPath: /scripts
volumes:
- name: shared-data
emptyDir: {}
- name: scripts
configMap:
name: pasarguard-scripts
defaultMode: 0755
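
Because the pod runs with hostNetwork, the exporter's /scrape endpoint is reachable on the node itself at port 9550; a port-forward works as well (a sketch, assuming the pasarguard namespace):

kubectl -n pasarguard port-forward ds/pasarguard-node 9550:9550 &
curl -s http://127.0.0.1:9550/scrape | head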

View File

@@ -0,0 +1,84 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: pasarguard
labels:
app: pasarguard
annotations:
reloader.stakater.com/auto: "true"
spec:
selector:
matchLabels:
app: pasarguard
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: pasarguard
spec:
nodeSelector:
kubernetes.io/hostname: master.tail2fe2d.ts.net
initContainers:
- name: download-template
image: busybox:latest
command:
- sh
- -c
- |
wget -O /templates/subscription/index.html https://github.com/PasarGuard/subscription-template/releases/latest/download/index.html
volumeMounts:
- name: subscription-template
mountPath: /templates/subscription
containers:
- name: pasarguard-web
image: 'pasarguard/panel:latest'
imagePullPolicy: Always
envFrom:
- secretRef:
name: pasarguard-secrets
env:
- name: UVICORN_HOST
value: "0.0.0.0"
- name: UVICORN_PORT
value: "8000"
- name: DOCS
value: "true"
- name: UVICORN_SSL_CERTFILE
value: "/app/tls/tls.crt"
- name: UVICORN_SSL_KEYFILE
value: "/app/tls/tls.key"
- name: CUSTOM_TEMPLATES_DIRECTORY
value: "/code/app/templates/"
- name: SUBSCRIPTION_PAGE_TEMPLATE
value: "subscription/index.html"
ports:
- name: http
containerPort: 8000
protocol: TCP
volumeMounts:
- name: tls
mountPath: /app/tls
readOnly: true
- name: subscription-template
mountPath: /code/app/templates/subscription
volumes:
- name: tls
secret:
secretName: pasarguard-tls
- name: subscription-template
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: pasarguard
spec:
selector:
app: pasarguard
ports:
- protocol: TCP
port: 80
targetPort: 8000

View File

@@ -0,0 +1,44 @@
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: pasarguard-secrets
spec:
target:
name: pasarguard-secrets
deletionPolicy: Delete
template:
type: Opaque
data:
SUDO_PASSWORD: |-
{{ .admin_password }}
SUDO_USERNAME: |-
{{ .admin_username }}
SQLALCHEMY_DATABASE_URL: |-
postgresql+asyncpg://pasarguard:{{ .pg_pass }}@psql.psql.svc:5432/pasarguard
data:
- secretKey: pg_pass
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 2a9deb39-ef22-433e-a1be-df1555625e22
property: fields[9].value
- secretKey: admin_password
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 35ec5880-2576-401b-a89a-3c9d56b9c1de
property: login.password
- secretKey: admin_username
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 35ec5880-2576-401b-a89a-3c9d56b9c1de
property: login.username
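
To check that the ExternalSecret reconciled and the templated values rendered (a sketch; assumes the pasarguard namespace, and note this prints the live database URL including the password):

kubectl -n pasarguard get externalsecret pasarguard-secrets
kubectl -n pasarguard get secret pasarguard-secrets -o jsonpath='{.data.SQLALCHEMY_DATABASE_URL}' | base64 -d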

View File

@@ -0,0 +1,14 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./app.yaml
- ./external-secrets.yaml
- ./deployment.yaml
- ./daemonset.yaml
- ./certificate.yaml
- ./configmap-scripts.yaml
- ./servicemonitor.yaml
- ./configmap-scripts-ingress.yaml
# - ./daemonset-ingress.yaml
# - ./traefik-pasarguard-entrypoint.yaml

View File

@@ -0,0 +1,21 @@
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: pasarguard-node-metrics
labels:
app: pasarguard-node
release: prometheus
spec:
selector:
matchLabels:
app: pasarguard-node
endpoints:
- port: metrics
path: /scrape
interval: 30s
scrapeTimeout: 10s
honorLabels: true
namespaceSelector:
matchNames:
- pasarguard

View File

@@ -0,0 +1,66 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: traefik
namespace: kube-system
spec:
template:
spec:
containers:
- name: traefik
args:
- --entryPoints.metrics.address=:9100/tcp
- --entryPoints.traefik.address=:8080/tcp
- --entryPoints.web.address=:8000/tcp
- --entryPoints.websecure.address=:8443/tcp
- --entryPoints.pasarguard-api.address=:62051/tcp
- --api.dashboard=true
- --ping=true
- --metrics.prometheus=true
- --metrics.prometheus.entrypoint=metrics
- --providers.kubernetescrd
- --providers.kubernetescrd.allowEmptyServices=true
- --providers.kubernetesingress
- --providers.kubernetesingress.allowEmptyServices=true
- --providers.kubernetesingress.ingressendpoint.publishedservice=kube-system/traefik
- --entryPoints.websecure.http.tls=true
- --log.level=INFO
- --entryPoints.web.transport.respondingTimeouts.readTimeout=0s
- --entryPoints.websecure.transport.respondingTimeouts.readTimeout=0s
ports:
- containerPort: 9100
name: metrics
protocol: TCP
- containerPort: 8080
name: traefik
protocol: TCP
- containerPort: 8000
name: web
protocol: TCP
- containerPort: 8443
name: websecure
protocol: TCP
- containerPort: 62051
name: pasarguard-api
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: traefik
namespace: kube-system
spec:
ports:
- name: web
port: 80
protocol: TCP
targetPort: web
- name: websecure
port: 443
protocol: TCP
targetPort: websecure
- name: pasarguard-api
port: 62051
protocol: TCP
targetPort: pasarguard-api
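
Once the patch rolls out, the extra entrypoint should be visible on the Service (a sketch):

kubectl -n kube-system get svc traefik -o jsonpath='{range .spec.ports[*]}{.name}={.port}{"\n"}{end}'
# expect web=80, websecure=443, pasarguard-api=62051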

View File

@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: remnawave
namespace: argocd
spec:
project: apps
destination:
namespace: remnawave
server: https://kubernetes.default.svc
source:
repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
targetRevision: HEAD
path: k8s/apps/remnawave
syncPolicy:
automated:
selfHeal: true
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -0,0 +1,71 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: remnawave
labels:
app: remnawave
annotations:
reloader.stakater.com/auto: "true"
spec:
selector:
matchLabels:
app: remnawave
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: remnawave
spec:
containers:
- name: remnawave
image: 'remnawave/backend:2'
imagePullPolicy: Always
envFrom:
- secretRef:
name: remnawave-secrets
env:
- name: REDIS_URL
value: "redis://remnawave-redis:6379"
ports:
- name: http
containerPort: 3000
protocol: TCP
- name: metrics
containerPort: 3001
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 3001
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /health
port: 3001
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 3
failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
name: remnawave
spec:
selector:
app: remnawave
ports:
- name: http
protocol: TCP
port: 3000
targetPort: 3000
- name: metrics
protocol: TCP
port: 3001
targetPort: 3001
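
The probes hit /health on the metrics port; the same check can be run by hand (a sketch, assuming the remnawave namespace created by the Argo CD Application):

kubectl -n remnawave port-forward deploy/remnawave 3001:3001 &
curl -s http://127.0.0.1:3001/health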

View File

@@ -0,0 +1,70 @@
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: remnawave-secrets
spec:
target:
name: remnawave-secrets
deletionPolicy: Delete
template:
type: Opaque
data:
METRICS_USER: admin
FRONT_END_DOMAIN: rw.hexor.cy
SUB_PUBLIC_DOMAIN: sub.hexor.cy
REDIS_HOST: remnawave-redis
REDIS_PORT: "6379"
DATABASE_URL: |-
postgresql://remnawave:{{ .pg_pass }}@psql.psql.svc:5432/remnawave
JWT_AUTH_SECRET: |-
{{ .jwt_auth_secret }}
JWT_API_TOKENS_SECRET: |-
{{ .jwt_api_tokens_secret }}
METRICS_PASS: |-
{{ .metrics_pass }}
WEBHOOK_SECRET_HEADER: |-
{{ .webhook_secret }}
data:
- secretKey: pg_pass
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 2a9deb39-ef22-433e-a1be-df1555625e22
property: fields[10].value
- secretKey: jwt_auth_secret
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 0d090436-5e82-453a-914c-19cec2abded1
property: fields[0].value
- secretKey: jwt_api_tokens_secret
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 0d090436-5e82-453a-914c-19cec2abded1
property: fields[1].value
- secretKey: metrics_pass
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 0d090436-5e82-453a-914c-19cec2abded1
property: fields[2].value
- secretKey: webhook_secret
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 0d090436-5e82-453a-914c-19cec2abded1
property: fields[3].value

View File

@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./external-secrets.yaml
- ./deployment.yaml
- ./redis-deployment.yaml
- ./subscription-page-configmap.yaml
- ./subscription-page-deployment.yaml
- ./servicemonitor.yaml
- ./user-ui-ingress.yaml
- ./panel-ingress.yaml

View File

@@ -0,0 +1,37 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: panel-ui
annotations:
ingressClassName: traefik
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd
acme.cert-manager.io/http01-edit-in-place: "true"
spec:
rules:
- host: rw.hexor.cy
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: remnawave
port:
number: 3000
- host: rw.hexor.ru
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: remnawave
port:
number: 3000
tls:
- secretName: remnawave-panel-tls
hosts:
- rw.hexor.cy
- rw.hexor.ru

View File

@@ -0,0 +1,71 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: remnawave-redis
labels:
app: remnawave-redis
spec:
selector:
matchLabels:
app: remnawave-redis
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: remnawave-redis
spec:
containers:
- name: redis
image: 'valkey/valkey:8.1-alpine'
imagePullPolicy: Always
ports:
- name: redis
containerPort: 6379
protocol: TCP
livenessProbe:
exec:
command:
- valkey-cli
- ping
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
exec:
command:
- valkey-cli
- ping
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
volumeMounts:
- name: redis-data
mountPath: /data
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "256Mi"
cpu: "200m"
volumes:
- name: redis-data
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: remnawave-redis
spec:
selector:
app: remnawave-redis
ports:
- name: redis
protocol: TCP
port: 6379
targetPort: 6379

View File

@@ -0,0 +1,21 @@
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: remnawave-metrics
labels:
app: remnawave
release: prometheus
spec:
selector:
matchLabels:
app: remnawave
endpoints:
- port: metrics
path: /metrics
interval: 30s
scrapeTimeout: 10s
honorLabels: true
namespaceSelector:
matchNames:
- remnawave

View File

@@ -0,0 +1,27 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: remnawave-subscription-page-config
labels:
app: remnawave-subscription-page
data:
APP_PORT: "3010"
REMNAWAVE_PANEL_URL: "https://rw.hexor.cy"
META_TITLE: "RemnaWave Subscription"
META_DESCRIPTION: "Your VPN subscription portal"
META_KEYWORDS: "vpn,subscription,remnawave"
META_AUTHOR: "RemnaWave"
ENABLE_ANALYTICS: "false"
ANALYTICS_MEASUREMENT_ID: ""
CUSTOM_SUB_PREFIX: ""
THEME: "dark"
CUSTOM_LOGO_URL: ""
SHOW_SUBSCRIPTION_INFO: "true"
SHOW_CONNECTION_INFO: "true"
SHOW_QR_CODE: "true"
QR_CODE_SIZE: "256"
REFRESH_INTERVAL: "30000"
SUBSCRIPTION_TEXT_COLOR: "#ffffff"
BACKGROUND_COLOR: "#1a1a1a"
ACCENT_COLOR: "#007bff"

View File

@@ -0,0 +1,52 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: remnawave-subscription-page
labels:
app: remnawave-subscription-page
spec:
selector:
matchLabels:
app: remnawave-subscription-page
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: remnawave-subscription-page
spec:
containers:
- name: subscription-page
image: 'remnawave/subscription-page:latest'
imagePullPolicy: Always
envFrom:
- configMapRef:
name: remnawave-subscription-page-config
ports:
- name: http
containerPort: 3010
protocol: TCP
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "256Mi"
cpu: "200m"
---
apiVersion: v1
kind: Service
metadata:
name: remnawave-subscription-page
labels:
app: remnawave-subscription-page
spec:
selector:
app: remnawave-subscription-page
ports:
- name: http
protocol: TCP
port: 3010
targetPort: 3010

View File

@@ -0,0 +1,37 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: user-ui
annotations:
ingressClassName: traefik
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd
acme.cert-manager.io/http01-edit-in-place: "true"
spec:
rules:
- host: sub.hexor.cy
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: remnawave-subscription-page
port:
number: 3010
- host: sub.hexor.ru
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: remnawave-subscription-page
port:
number: 3010
tls:
- secretName: remnawave-user-ui-tls
hosts:
- sub.hexor.cy
- sub.hexor.ru

View File

@@ -0,0 +1,134 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rustdesk-hbbs
labels:
app: rustdesk-hbbs
spec:
replicas: 1
selector:
matchLabels:
app: rustdesk-hbbs
template:
metadata:
labels:
app: rustdesk-hbbs
spec:
nodeSelector:
kubernetes.io/hostname: master.tail2fe2d.ts.net
containers:
- name: hbbs
image: rustdesk/rustdesk-server:latest
imagePullPolicy: Always
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "750m"
command: ["hbbs"]
args:
- "--relay-servers"
- "rd.hexor.cy:21117"
- "--port"
- "21116"
ports:
- name: registry
containerPort: 21116
protocol: TCP
- name: nat
containerPort: 21115
protocol: TCP
volumeMounts:
- name: keys
mountPath: /data
readOnly: true
- name: data
mountPath: /data-persistent
env:
- name: RUST_LOG
value: "info"
- name: DB_URL
value: "/data-persistent/db_v2.sqlite3"
workingDir: /data
volumes:
- name: keys
secret:
secretName: rustdesk-keys
items:
- key: id_ed25519
path: id_ed25519
mode: 0600
- key: id_ed25519.pub
path: id_ed25519.pub
mode: 0644
- name: data
hostPath:
path: /k8s/rustdesk/hbbs
type: DirectoryOrCreate
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rustdesk-hbbr
labels:
app: rustdesk-hbbr
spec:
replicas: 1
selector:
matchLabels:
app: rustdesk-hbbr
template:
metadata:
labels:
app: rustdesk-hbbr
spec:
nodeSelector:
kubernetes.io/hostname: master.tail2fe2d.ts.net
containers:
- name: hbbr
image: rustdesk/rustdesk-server:latest
imagePullPolicy: Always
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "750m"
command: ["hbbr"]
args:
- "--port"
- "21117"
ports:
- name: relay
containerPort: 21117
protocol: TCP
volumeMounts:
- name: keys
mountPath: /data
readOnly: true
- name: data
mountPath: /data-persistent
env:
- name: RUST_LOG
value: "info"
workingDir: /data
volumes:
- name: keys
secret:
secretName: rustdesk-keys
items:
- key: id_ed25519
path: id_ed25519
mode: 0600
- key: id_ed25519.pub
path: id_ed25519.pub
mode: 0644
- name: data
hostPath:
path: /k8s/rustdesk/hbbr
type: DirectoryOrCreate
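
RustDesk clients need the server's public key, and since both pods mount it from the rustdesk-keys Secret it can be read back with kubectl (a sketch, assuming the rustdesk namespace):

kubectl -n rustdesk get secret rustdesk-keys -o jsonpath='{.data.id_ed25519\.pub}' | base64 -d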

View File

@@ -1,66 +1,34 @@
 ---
-apiVersion: external-secrets.io/v1beta1
+apiVersion: external-secrets.io/v1
 kind: ExternalSecret
 metadata:
-  name: postgres-creds
+  name: rustdesk-keys
 spec:
   target:
-    name: postgres-creds
+    name: rustdesk-keys
     deletionPolicy: Delete
     template:
       type: Opaque
       data:
-        psql_user: paperless
-        psql_pass: |-
-          {{ .psql_pass }}
-        oauth_config: |-
-          {
-            "openid_connect": {
-              "APPS": [
-                {
-                  "provider_id": "authentik",
-                  "name": "Authentik",
-                  "client_id": "{{ .oauth_id }}",
-                  "secret": "{{ .oauth_secret }}",
-                  "settings": {
-                    "server_url": "{{ .server_url }}"
-                  }
-                }
-              ],
-              "OAUTH_PKCE_ENABLED": "True"
-            }
-          }
+        id_ed25519: |-
+          {{ .private_key }}
+        id_ed25519.pub: |-
+          {{ .public_key }}
   data:
-    - secretKey: psql_pass
+    - secretKey: private_key
       sourceRef:
         storeRef:
           name: vaultwarden-login
           kind: ClusterSecretStore
       remoteRef:
-        key: 2a9deb39-ef22-433e-a1be-df1555625e22
-        property: fields[5].value
-    - secretKey: oauth_id
-      sourceRef:
-        storeRef:
-          name: vaultwarden-login
-          kind: ClusterSecretStore
-      remoteRef:
-        key: 07d4efd9-597c-4a4c-a78d-13bfc43e6055
+        key: f5591dfd-a0ab-4101-a2d7-e06380d3dcc9
         property: fields[0].value
-    - secretKey: oauth_secret
+    - secretKey: public_key
       sourceRef:
         storeRef:
           name: vaultwarden-login
           kind: ClusterSecretStore
       remoteRef:
-        key: 07d4efd9-597c-4a4c-a78d-13bfc43e6055
+        key: f5591dfd-a0ab-4101-a2d7-e06380d3dcc9
         property: fields[1].value
-    - secretKey: server_url
-      sourceRef:
-        storeRef:
-          name: vaultwarden-login
-          kind: ClusterSecretStore
-      remoteRef:
-        key: 07d4efd9-597c-4a4c-a78d-13bfc43e6055
-        property: fields[2].value

View File

@@ -0,0 +1,66 @@
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: postgres-creds
spec:
target:
name: postgres-creds
deletionPolicy: Delete
template:
type: Opaque
data:
psql_user: paperless
psql_pass: |-
{{ .psql_pass }}
oauth_config: |-
{
"openid_connect": {
"APPS": [
{
"provider_id": "authentik",
"name": "Authentik",
"client_id": "{{ .oauth_id }}",
"secret": "{{ .oauth_secret }}",
"settings": {
"server_url": "{{ .server_url }}"
}
}
],
"OAUTH_PKCE_ENABLED": "True"
}
}
data:
- secretKey: psql_pass
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 2a9deb39-ef22-433e-a1be-df1555625e22
property: fields[5].value
- secretKey: oauth_id
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 07d4efd9-597c-4a4c-a78d-13bfc43e6055
property: fields[0].value
- secretKey: oauth_secret
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 07d4efd9-597c-4a4c-a78d-13bfc43e6055
property: fields[1].value
- secretKey: server_url
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 07d4efd9-597c-4a4c-a78d-13bfc43e6055
property: fields[2].value

View File

@@ -3,14 +3,8 @@ kind: Kustomization
 resources:
 - app.yaml
 #- external-secrets.yaml
-helmCharts:
-- name: rustdesk-server-oss
-  repo: https://schich.tel/helm-charts
-  version: 0.2.2
-  releaseName: rustdesk
-  namespace: rustdesk
-  valuesFile: values.yaml
-  includeCRDs: true
+- deployment.yaml
+- service.yaml
+- external-secrets.yaml
+- network-policy.yaml

View File

@@ -0,0 +1,73 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: rustdesk-network-policy
spec:
podSelector:
matchLabels:
app: rustdesk-hbbs
policyTypes:
- Ingress
- Egress
ingress:
# Allow all incoming connections to RustDesk ports
- from: []
ports:
- protocol: TCP
port: 21115
- protocol: TCP
port: 21116
- protocol: UDP
port: 21116
egress:
# Allow DNS
- to: []
ports:
- protocol: UDP
port: 53
- protocol: TCP
port: 53
# Allow communication between HBBS and HBBR
- to:
- podSelector:
matchLabels:
app: rustdesk-hbbr
ports:
- protocol: TCP
port: 21117
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: rustdesk-hbbr-network-policy
spec:
podSelector:
matchLabels:
app: rustdesk-hbbr
policyTypes:
- Ingress
- Egress
ingress:
# Allow all incoming connections to relay port
- from: []
ports:
- protocol: TCP
port: 21117
# Allow connections from HBBS
- from:
- podSelector:
matchLabels:
app: rustdesk-hbbs
ports:
- protocol: TCP
port: 21117
egress:
# Allow DNS
- to: []
ports:
- protocol: UDP
port: 53
- protocol: TCP
port: 53

View File

@@ -0,0 +1,57 @@
---
apiVersion: v1
kind: Service
metadata:
name: rustdesk-hbbs
labels:
app: rustdesk-hbbs
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
app: rustdesk-hbbs
ports:
- name: registry-tcp
port: 21116
targetPort: 21116
protocol: TCP
- name: nat
port: 21115
targetPort: 21115
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: rustdesk-hbbs-udp
labels:
app: rustdesk-hbbs
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
app: rustdesk-hbbs
ports:
- name: registry-udp
port: 21116
targetPort: 21116
protocol: UDP
---
apiVersion: v1
kind: Service
metadata:
name: rustdesk-hbbr
labels:
app: rustdesk-hbbr
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
app: rustdesk-hbbr
ports:
- name: relay
port: 21117
targetPort: 21117
protocol: TCP

View File

@@ -1,87 +0,0 @@
replicaCount: 1
image:
repository: docker.io/rustdesk/rustdesk-server
pullPolicy: IfNotPresent
tag: 1
nodeSelector:
kubernetes.io/hostname: master.tail2fe2d.ts.net
ingress:
enabled: true
className: "traefik"
annotations:
ingressClassName: traefik
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd
acme.cert-manager.io/http01-edit-in-place: "true"
hosts:
- rd.hexor.cy
tls:
- secretName: rustdesk-tls
hosts:
- rd.hexor.cy
service:
type: LoadBalancer
externalTrafficPolicy: Cluster
loadBalancerIP: null
enableWebClientSupport: false
hbbr:
replayPort:
port: 21117
targetPort: 21117
clientPort:
port: 21119
targetPort: 21119
hbbs:
natPort:
port: 21115
targetPort: 21115
registryPort:
port: 21116
targetPort: 21116
heartbeatPort:
port: 21116
targetPort: 21116
webPort:
port: 21118
targetPort: 21118
resources:
hbbrResource: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
hbbsResource: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# Additional volumes on the output Deployment definition.
volume: {}
# - name: foo
# secret:
# secretName: mysecret
# optional: false
# - name: foo
# mountPath: "/etc/foo"
# readOnly: true

View File

@@ -1,3 +1,10 @@
+resources:
+  requests:
+    memory: "256Mi"
+    cpu: "100m"
+  limits:
+    memory: "1Gi"
+    cpu: "750m"
 nodeSelector:
   kubernetes.io/hostname: master.tail2fe2d.ts.net

View File

@@ -1,5 +1,12 @@
 env:
   TZ: Asia/Nicosia
+resources:
+  requests:
+    memory: "256Mi"
+    cpu: "100m"
+  limits:
+    memory: "1Gi"
+    cpu: "750m"
 nodeSelector:
   kubernetes.io/hostname: master.tail2fe2d.ts.net

View File

@@ -1,5 +1,12 @@
 env:
   TZ: Asia/Nicosia
+resources:
+  requests:
+    memory: "512Mi"
+    cpu: "200m"
+  limits:
+    memory: "2Gi"
+    cpu: "1500m"
 nodeSelector:
   kubernetes.io/hostname: master.tail2fe2d.ts.net

View File

@@ -28,12 +28,12 @@ ingress:
   tlsSecret: pdf-hexor-cy-tls
 resources:
-  limits:
-    cpu: 500m
-    memory: 512Mi
   requests:
-    cpu: 250m
-    memory: 256Mi
+    memory: "512Mi"
+    cpu: "200m"
+  limits:
+    memory: "2Gi"
+    cpu: "1500m"
 probes:
   liveness:

View File

@@ -4,6 +4,8 @@ kind: Kustomization
 resources:
 - app.yaml
+- nginx-router.yaml
+- traefik-simple.yaml
 helmCharts:
 - name: syncthing
@@ -21,3 +23,11 @@ helmCharts:
   namespace: syncthing
   valuesFile: syncthing-khv.yaml
   includeCRDs: true
+- name: syncthing
+  repo: https://k8s-home-lab.github.io/helm-charts
+  version: 4.0.0
+  releaseName: syncthing-nas
+  namespace: syncthing
+  valuesFile: syncthing-nas.yaml
+  includeCRDs: true

View File

@@ -0,0 +1,276 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
namespace: syncthing
data:
default.conf: |
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
server_name _;
# Landing page
location = / {
root /usr/share/nginx/html;
try_files /index.html =404;
}
# NAS instance
location /nas {
rewrite ^/nas$ /nas/ permanent;
}
# NAS API endpoints
location ~ ^/nas/(rest|meta)/ {
rewrite ^/nas/(.*) /$1 break;
proxy_pass http://syncthing-nas:8384;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Handle websockets
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 86400;
}
location /nas/ {
proxy_pass http://syncthing-nas:8384/;
# Important: tell syncthing that we're using /nas as base path
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Handle websockets
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 86400;
# Sub filter to fix asset paths
sub_filter 'href="/' 'href="/nas/';
sub_filter 'src="/' 'src="/nas/';
sub_filter 'url(/' 'url(/nas/';
sub_filter '"/meta' '"/nas/meta';
sub_filter '"/rest' '"/nas/rest';
sub_filter '"/vendor' '"/nas/vendor';
sub_filter '"/theme-assets' '"/nas/theme-assets';
sub_filter '"/syncthing' '"/nas/syncthing';
sub_filter_once off;
sub_filter_types text/html text/css application/javascript;
}
# Master instance
location /master {
rewrite ^/master$ /master/ permanent;
}
# Master API endpoints
location ~ ^/master/(rest|meta)/ {
rewrite ^/master/(.*) /$1 break;
proxy_pass http://syncthing-master:8384;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Handle websockets
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 86400;
}
location /master/ {
proxy_pass http://syncthing-master:8384/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
sub_filter 'href="/' 'href="/master/';
sub_filter 'src="/' 'src="/master/';
sub_filter 'url(/' 'url(/master/';
sub_filter '"/meta' '"/master/meta';
sub_filter '"/rest' '"/master/rest';
sub_filter '"/vendor' '"/master/vendor';
sub_filter '"/theme-assets' '"/master/theme-assets';
sub_filter '"/syncthing' '"/master/syncthing';
sub_filter_once off;
sub_filter_types text/html text/css application/javascript;
}
# Iris instance
location /iris {
rewrite ^/iris$ /iris/ permanent;
}
# Iris API endpoints
location ~ ^/iris/(rest|meta)/ {
rewrite ^/iris/(.*) /$1 break;
proxy_pass http://syncthing-khv:8384;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Handle websockets
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 86400;
}
location /iris/ {
proxy_pass http://syncthing-khv:8384/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
sub_filter 'href="/' 'href="/iris/';
sub_filter 'src="/' 'src="/iris/';
sub_filter 'url(/' 'url(/iris/';
sub_filter '"/meta' '"/iris/meta';
sub_filter '"/rest' '"/iris/rest';
sub_filter '"/vendor' '"/iris/vendor';
sub_filter '"/theme-assets' '"/iris/theme-assets';
sub_filter '"/syncthing' '"/iris/syncthing';
sub_filter_once off;
sub_filter_types text/html text/css application/javascript;
}
}
index.html: |
<!DOCTYPE html>
<html>
<head>
<title>Syncthing Instances</title>
<style>
body {
font-family: Arial, sans-serif;
display: flex;
justify-content: center;
align-items: center;
height: 100vh;
margin: 0;
background-color: #f0f0f0;
}
.container {
text-align: center;
background: white;
padding: 40px;
border-radius: 10px;
box-shadow: 0 2px 10px rgba(0,0,0,0.1);
}
h1 {
color: #333;
margin-bottom: 30px;
}
.links {
display: flex;
flex-direction: column;
gap: 15px;
}
a {
display: inline-block;
padding: 15px 30px;
background-color: #0078e7;
color: white;
text-decoration: none;
border-radius: 5px;
transition: background-color 0.3s;
}
a:hover {
background-color: #0056b3;
}
</style>
</head>
<body>
<div class="container">
<h1>Syncthing Instances</h1>
<div class="links">
<a href="/nas/">NAS Instance</a>
<a href="/master/">Master Instance</a>
<a href="/iris/">Iris Instance</a>
</div>
</div>
</body>
</html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: syncthing-router
namespace: syncthing
spec:
replicas: 1
selector:
matchLabels:
app: syncthing-router
template:
metadata:
labels:
app: syncthing-router
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
volumeMounts:
- name: config
mountPath: /etc/nginx/conf.d
- name: html
mountPath: /usr/share/nginx/html
volumes:
- name: config
configMap:
name: nginx-config
items:
- key: default.conf
path: default.conf
- name: html
configMap:
name: nginx-config
items:
- key: index.html
path: index.html
---
apiVersion: v1
kind: Service
metadata:
name: syncthing-router
namespace: syncthing
spec:
selector:
app: syncthing-router
ports:
- protocol: TCP
port: 80
targetPort: 80
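
The sub_filter rewrites only apply to HTML/CSS/JS responses, so a quick way to exercise the router without Traefik in front is a port-forward (a sketch):

kubectl -n syncthing port-forward deploy/syncthing-router 8080:80 &
curl -s http://127.0.0.1:8080/          # landing page
curl -sI http://127.0.0.1:8080/nas      # expect a 301 redirect to /nas/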

View File

@@ -4,13 +4,8 @@ persistence:
   config:
     enabled: true
     type: hostPath
-    hostPath: "/k8s/syncthing"
+    hostPath: "/k8s/Syncthing"
     mountPath: "/var/syncthing"
-  storage:
-    enabled: true
-    type: hostPath
-    hostPath: "/home/share"
-    mountPath: "/storage"
 nodeSelector:
   kubernetes.io/hostname: iris.khv
 service:
@@ -20,23 +15,24 @@ service:
port: 8384
listen:
enabled: true
type: LoadBalancer
type: NodePort
externalTrafficPolicy: Local
ports:
listen:
enabled: true
port: 30023
port: 22000
protocol: TCP
targetPort: 22000
selector:
app.kubernetes.io/name: syncthing
discovery:
enabled: true
type: NodePort
externalTrafficPolicy: Cluster
externalTrafficPolicy: Local
ports:
discovery:
enabled: true
port: 21027
protocol: UDP
targetPort: 21027
port: 21027
protocol: UDP
targetPort: 21027

View File

@@ -5,13 +5,13 @@ persistence:
   config:
     enabled: true
     type: hostPath
-    hostPath: "/k8s/syncthing"
+    hostPath: "/k8s/syncthing-config"
     mountPath: "/var/syncthing"
   storage:
     enabled: true
     type: hostPath
-    hostPath: "/k8s/"
-    mountPath: "/storage"
+    hostPath: "/k8s/Syncthing-repos"
+    mountPath: "/Syncthing-repos"
 nodeSelector:
   kubernetes.io/hostname: master.tail2fe2d.ts.net
 service:
@@ -21,23 +21,24 @@ service:
port: 8384
listen:
enabled: true
type: LoadBalancer
type: NodePort
externalTrafficPolicy: Local
ports:
listen:
enabled: true
port: 30023
port: 22000
protocol: TCP
targetPort: 22000
selector:
app.kubernetes.io/name: syncthing
discovery:
enabled: true
type: NodePort
externalTrafficPolicy: Cluster
externalTrafficPolicy: Local
ports:
discovery:
enabled: true
port: 21027
protocol: UDP
targetPort: 21027
port: 21027
protocol: UDP
targetPort: 21027

View File

@@ -0,0 +1,44 @@
image:
tag: latest
persistence:
config:
enabled: true
type: hostPath
hostPath: "/mnt/storage/Storage/syncthing-config"
mountPath: "/var/syncthing"
storage:
enabled: true
type: hostPath
hostPath: "/mnt/storage/Storage/Syncthing-repos"
mountPath: "/Syncthing-repos"
nodeSelector:
kubernetes.io/hostname: nas.homenet
service:
main:
ports:
http:
port: 8384
listen:
enabled: true
type: NodePort
externalTrafficPolicy: Local
ports:
listen:
enabled: true
port: 22000
protocol: TCP
targetPort: 22000
discovery:
enabled: true
type: NodePort
externalTrafficPolicy: Local
ports:
discovery:
enabled: true
port: 21027
protocol: UDP
targetPort: 21027
port: 21027
protocol: UDP
targetPort: 21027

View File

@@ -0,0 +1,36 @@
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: syncthing-ingressroute
namespace: syncthing
spec:
entryPoints:
- websecure
routes:
- match: Host(`ss.hexor.cy`)
kind: Rule
services:
- name: syncthing-router
port: 80
middlewares:
- name: authentik-forward-auth
namespace: syncthing
tls:
secretName: syncthing-tls
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: authentik-forward-auth
namespace: syncthing
spec:
forwardAuth:
address: http://authentik-server.authentik.svc.cluster.local/outpost.goauthentik.io/auth/traefik
trustForwardHeader: true
authResponseHeaders:
- X-authentik-username
- X-authentik-groups
- X-authentik-email
- X-authentik-name
- X-authentik-uid

k8s/apps/tg-bots/app.yaml
View File

@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: tg-bots
namespace: argocd
spec:
project: apps
destination:
namespace: tg-bots
server: https://kubernetes.default.svc
source:
repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
targetRevision: HEAD
path: k8s/apps/tg-bots
syncPolicy:
automated:
selfHeal: true
prune: true
syncOptions:
- CreateNamespace=true

View File

@@ -0,0 +1,42 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: desubot
labels:
app: desubot
annotations:
reloader.stakater.com/auto: "true"
spec:
selector:
matchLabels:
app: desubot
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: desubot
spec:
nodeSelector:
kubernetes.io/hostname: home.homenet
containers:
- name: desubot
image: 'ultradesu/desubot:latest'
imagePullPolicy: Always
envFrom:
- secretRef:
name: desubot
env:
- name: RUST_LOG
value: "info"
volumeMounts:
- mountPath: /storage
name: storage
volumes:
- name: storage
nfs:
server: nas.homenet
path: /mnt/storage/Storage/k8s/desubot/
readOnly: false

View File

@@ -0,0 +1,49 @@
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: get-id-bot
spec:
target:
name: get-id-bot
deletionPolicy: Delete
template:
type: Opaque
data:
TELOXIDE_TOKEN: |-
{{ .token }}
data:
- secretKey: token
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 97bd0af9-54ab-429a-b060-09626525f4cd
property: fields[0].value
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: desubot
spec:
target:
name: desubot
deletionPolicy: Delete
template:
type: Opaque
data:
TELEGRAM_BOT_TOKEN: |-
{{ .token }}
data:
- secretKey: token
sourceRef:
storeRef:
name: vaultwarden-login
kind: ClusterSecretStore
remoteRef:
key: 97bd0af9-54ab-429a-b060-09626525f4cd
property: fields[1].value

View File

@@ -0,0 +1,36 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: get-id-bot
labels:
app: get-id-bot
annotations:
reloader.stakater.com/auto: "true"
spec:
selector:
matchLabels:
app: get-id-bot
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: get-id-bot
spec:
nodeSelector:
kubernetes.io/hostname: master.tail2fe2d.ts.net
containers:
- name: get-id-bot
image: 'ghcr.io/house-of-vanity/get_id_bot:main'
imagePullPolicy: Always
envFrom:
- secretRef:
name: get-id-bot
env:
- name: RUST_LOG
value: "info,teloxide::error_handlers=off"

View File

@@ -0,0 +1,10 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- app.yaml
- get-id-bot.yaml
- external-secrets.yaml
- desubot.yaml
- restart-job.yaml

View File

@@ -0,0 +1,56 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: tg-bots-restart-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: tg-bots-restart-role
rules:
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: tg-bots-restart-rb
subjects:
- kind: ServiceAccount
name: tg-bots-restart-sa
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: tg-bots-restart-role
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: tg-bots-daily-restart
spec:
schedule: "0 4 * * *" # every day at 04:00
jobTemplate:
spec:
template:
spec:
serviceAccountName: tg-bots-restart-sa
restartPolicy: OnFailure
containers:
- name: kubectl
image: bitnami/kubectl:latest
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
command:
- /bin/sh
- -c
- |
kubectl -n "$POD_NAMESPACE" rollout restart deployment/desubot
kubectl -n "$POD_NAMESPACE" rollout restart deployment/get-id-bot

View File

@@ -31,6 +31,13 @@ spec:
         - name: vaultwarden
           image: 'vaultwarden/server:latest'
           imagePullPolicy: Always
+          resources:
+            requests:
+              memory: "256Mi"
+              cpu: "100m"
+            limits:
+              memory: "1Gi"
+              cpu: "750m"
           env:
             - name: DOMAIN
               value: https://vw.hexor.cy

View File

@@ -1,5 +1,5 @@
 ---
-apiVersion: external-secrets.io/v1beta1
+apiVersion: external-secrets.io/v1
 kind: ExternalSecret
 metadata:
   name: admin-token

k8s/apps/vpn/config.yaml
View File

@@ -0,0 +1,25 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: outfleet-rs-config
data:
config.toml: |-
[database]
url = "postgres://outfleet_rs:FMj#bA0XW14Pd2@psql.psql.svc:5432/outfleet_rs"
[web]
host = "0.0.0.0"
port = 8080
base_url = "https://vpn.hexor.cy"
[telegram]
enabled = false
admin_chat_ids = []
allowed_users = []
[xray]
config_path = "./templates"
[log]
level = "debug"

View File

@@ -1,5 +1,5 @@
 ---
-apiVersion: external-secrets.io/v1beta1
+apiVersion: external-secrets.io/v1
 kind: ExternalSecret
 metadata:
   name: outfleet-secrets
@@ -51,7 +51,7 @@ spec:
       property: fields[1].value
 ---
-apiVersion: external-secrets.io/v1beta1
+apiVersion: external-secrets.io/v1
 kind: ExternalSecret
 metadata:
   name: outline-config

View File

@@ -6,4 +6,7 @@ resources:
 - ./external-secrets.yaml
 - ./outfleet.yaml
 - ./shadowsocks.yaml
+- ./outfleet-rs.yaml
+- ./config.yaml
+- ./xray.yaml

View File

@@ -0,0 +1,66 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: outfleet-rs
labels:
app: outfleet-rs
annotations:
reloader.stakater.com/auto: "true"
spec:
selector:
matchLabels:
app: outfleet-rs
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: outfleet-rs
spec:
hostname: outfleet-rs
nodeSelector:
kubernetes.io/hostname: master.tail2fe2d.ts.net
containers:
- name: outfleet-rs
image: 'ultradesu/outfleet:rs-0.1.3'
imagePullPolicy: Always
command: ["/bin/sh"]
args:
- "-c"
- |
set -x
/app/xray-admin --host 0.0.0.0 --port 8080
ports:
- name: http
containerPort: 8080
protocol: TCP
env:
- name: RUST_LOG
value: "info"
volumeMounts:
- name: outfleet-config
mountPath: /app/config.toml # <-- target path inside container
subPath: config.toml # <-- use the specific key as a file
readOnly: true
volumes:
- name: outfleet-config
configMap:
name: outfleet-rs-config
items:
- key: config.toml
path: config.toml
---
apiVersion: v1
kind: Service
metadata:
name: outfleet-rs
spec:
selector:
app: outfleet-rs
ports:
- protocol: TCP
port: 80
targetPort: 8080

View File

@@ -30,8 +30,11 @@ spec:
       args:
         - "-c"
         - |
-          python ./manage.py makemigrations vpn
+          set -x
+          #python ./manage.py makemigrations
+          #python ./manage.py makemigrations vpn
           python ./manage.py migrate
+          python ./manage.py migrate vpn
           python ./manage.py create_admin
           python ./manage.py runserver 0.0.0.0:8000
       envFrom:

Some files were not shown because too many files have changed in this diff.