forked from ab/homelab
Compare commits: 237 commits on branch `main` (xelnagamex)
`.gitea/scripts/README.md` (new file, 91 lines)

# Wiki Generators for Homelab

Automated Wiki page generation scripts for documenting homelab infrastructure.

## 1. Authentik Applications Wiki Generator

Generates a Wiki page listing all applications managed by Authentik, from the Terraform configuration.

### Files:
- `generate-apps-wiki.py` - Generates Applications.md from Terraform output
- `process-terraform-output.py` - Processes Terraform JSON output

### Workflow:
- **Trigger**: Push to `main` branch with Terraform changes
- **Workflow**: `.gitea/workflows/authentik-apps.yaml`
- **Output**: Applications Wiki page

## 2. Kubernetes Services Wiki Generator

Analyzes the `k8s/` directory and generates comprehensive documentation for all Kubernetes services.

### Files:
- `generate-k8s-wiki.py` - Main script for analyzing k8s services

### Features:
- **Service Types**: Detects Helm Charts, Kustomize, and plain YAML manifests
- **ArgoCD Integration**: Shows auto-sync status and project info
- **Service Discovery**: Lists all services, ingresses, and external secrets
- **Categorization**: Groups services into the apps, core, and games categories
- **Detailed Analysis**: Shows deployments, containers, and files

### Workflow:
- **Trigger**: Changes in the `k8s/` directory
- **Workflow**: `.gitea/workflows/k8s-wiki.yaml`
- **Output**: Kubernetes-Services Wiki page

## Gitea Secrets Configuration

Required secrets in repository settings:

```
GT_URL=https://gt.hexor.cy
GT_WIKI_TOKEN=your_gitea_access_token
GT_OWNER=your_username
GT_REPO=homelab
```

## Generated Wiki Pages Structure

### Applications Page
- Table with icons (32x32), external/internal URLs
- Statistics by type (Proxy vs OAuth2)
- Grouping by categories (Core, Tools, Media, etc.)

### Kubernetes Services Page
- Overview table with service types and status
- Detailed sections by category
- ArgoCD integration status
- Service discovery information

## Local Testing

### Authentik Apps:
```bash
cd terraform/authentik
terraform output -json > terraform-output.json
python3 ../../.gitea/scripts/process-terraform-output.py terraform-output.json processed-output.json
python3 ../../.gitea/scripts/generate-apps-wiki.py processed-output.json
```

### K8s Services:
```bash
pip install pyyaml
python3 .gitea/scripts/generate-k8s-wiki.py k8s/ Kubernetes-Services.md
```

## Troubleshooting

### Common Issues:

1. **Terraform output parsing errors**
   - Check for a `[command]` prefix in the output
   - Verify the JSON structure with debug mode

2. **Wiki upload failures**
   - Verify Gitea token permissions
   - Check network connectivity to the Gitea instance

3. **YAML parsing errors in k8s analysis**
   - Ensure valid YAML syntax in k8s files
   - Check the PyYAML installation
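For orientation, the processed JSON that `generate-apps-wiki.py` consumes is a dict with `proxy_apps` and `oauth_apps` maps. A minimal sketch with hypothetical app names and URLs (only the key structure reflects the scripts; every value here is made up):

```python
import json

# Hypothetical minimal example of processed Terraform output.
# Proxy apps carry an internal_url; OAuth2 apps do not.
apps_data = {
    "proxy_apps": {
        "grafana": {
            "name": "Grafana", "type": "Proxy",
            "url": "https://grafana.example.com",
            "internal_url": "http://grafana.monitoring.svc:3000",
            "group": "Tools", "description": "Dashboards",
            "icon": "", "slug": "grafana",
        },
    },
    "oauth_apps": {
        "gitea": {
            "name": "Gitea", "type": "OAuth2",
            "url": "https://git.example.com",
            "group": "Core", "description": "Git hosting",
            "icon": "", "slug": "gitea",
        },
    },
}

# Written to disk, this is the shape generate-apps-wiki.py is pointed at.
print(json.dumps(apps_data, indent=2)[:80])
```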
`.gitea/scripts/generate-apps-wiki.py` (new file, 226 lines)

```python
#!/usr/bin/env python3
"""
Script for generating Wiki page with applications list from Terraform outputs
"""

import json
import sys
import os
from datetime import datetime


def generate_markdown_table(apps_data):
    """Generates Markdown table for applications"""

    # Combine all applications
    all_apps = []

    if 'proxy_apps' in apps_data:
        for key, app in apps_data['proxy_apps'].items():
            all_apps.append({
                'key': key,
                'name': app['name'],
                'type': app['type'],
                'url': app['url'],
                'internal_url': app.get('internal_url', '-'),
                'group': app['group'],
                'description': app['description'],
                'icon': app['icon'],
                'slug': app['slug']
            })

    if 'oauth_apps' in apps_data:
        for key, app in apps_data['oauth_apps'].items():
            all_apps.append({
                'key': key,
                'name': app['name'],
                'type': app['type'],
                'url': app['url'],
                'internal_url': '-',  # OAuth apps don't have internal URLs
                'group': app['group'],
                'description': app['description'],
                'icon': app['icon'],
                'slug': app['slug']
            })

    # Sort by groups, then by name
    all_apps.sort(key=lambda x: (x['group'], x['name']))

    # Generate Markdown
    markdown = []
    markdown.append("# Authentik Applications")
    markdown.append("")
    markdown.append(f"*Automatically generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S UTC')}*")
    markdown.append("")
    markdown.append("## All Applications")
    markdown.append("")

    # Table
    markdown.append("| Icon | Name | Type | External URL | Internal URL | Group | Description |")
    markdown.append("|:----:|------|------|--------------|--------------|-------|-------------|")

    for app in all_apps:
        # Icon with size constraint
        if app['icon']:
            icon = f'<img src="{app["icon"]}" width="32" height="32" alt="{app["name"]}">'
        else:
            icon = "📱"

        # External URL link
        external_link = f"[🔗 {app['url'].replace('https://', '').replace('http://', '')}]({app['url']})" if app.get('url') else "-"

        # Internal URL (only for proxy apps)
        internal_url = app.get('internal_url', '-')
        if internal_url != '-':
            # Show full internal URL without shortening
            internal_url = f"`{internal_url}`"

        description = app['description'] if app['description'] else "-"

        markdown.append(f"| {icon} | **{app['name']}** | {app['type']} | {external_link} | {internal_url} | {app['group']} | {description} |")

    markdown.append("")

    # Statistics
    proxy_count = len(apps_data.get('proxy_apps', {}))
    oauth_count = len(apps_data.get('oauth_apps', {}))
    total_count = proxy_count + oauth_count

    markdown.append("## Statistics")
    markdown.append("")
    markdown.append(f"- **Total applications**: {total_count}")
    markdown.append(f"- **Proxy applications**: {proxy_count}")
    markdown.append(f"- **OAuth2/OpenID applications**: {oauth_count}")
    markdown.append("")

    # Grouping by types
    groups = {}
    for app in all_apps:
        group = app['group']
        if group not in groups:
            groups[group] = {'proxy': 0, 'oauth': 0}
        if app['type'] == 'Proxy':
            groups[group]['proxy'] += 1
        else:
            groups[group]['oauth'] += 1

    markdown.append("## Applications by Groups")
    markdown.append("")
    for group, counts in sorted(groups.items()):
        total = counts['proxy'] + counts['oauth']
        markdown.append(f"- **{group}**: {total} applications (Proxy: {counts['proxy']}, OAuth: {counts['oauth']})")

    markdown.append("")
    markdown.append("---")
    markdown.append("*This page is automatically generated via Terraform CI/CD*")

    return "\n".join(markdown)


def parse_terraform_output(output_data):
    """Parse Terraform output JSON structure"""
    # Check if this is a full terraform output (with value, type, sensitive fields)
    if isinstance(output_data, dict) and 'applications_for_wiki' in output_data:
        # This is full terraform output format
        app_output = output_data.get('applications_for_wiki', {})
        if isinstance(app_output, dict) and 'value' in app_output:
            return app_output['value']
        else:
            return app_output
    else:
        # This is already the value extracted
        return output_data


def main():
    if len(sys.argv) < 2:
        print("Usage: python3 generate-apps-wiki.py <terraform-output-json> [--debug]")
        sys.exit(1)

    output_file = sys.argv[1]
    debug = "--debug" in sys.argv

    try:
        # Check if file exists and has content
        if not os.path.exists(output_file):
            print(f"ERROR: File {output_file} not found")
            sys.exit(1)

        file_size = os.path.getsize(output_file)
        if file_size == 0:
            print(f"ERROR: File {output_file} is empty")
            sys.exit(1)

        print(f"📄 Reading Terraform output file: {output_file} ({file_size} bytes)")

        # Read file content
        with open(output_file, 'r') as f:
            content = f.read()

        if debug:
            print(f"🔍 File content preview: {content[:200]}...")

        # Clean content - remove command line if present
        if content.startswith('[command]'):
            print("⚠️ Detected command prefix, removing...")
            lines = content.split('\n', 1)
            if len(lines) > 1:
                content = lines[1]
                if debug:
                    print(f"🔍 Cleaned content preview: {content[:200]}...")
            else:
                print("ERROR: File contains only command line, no JSON data")
                sys.exit(1)

        # Parse JSON
        try:
            terraform_output = json.loads(content)
        except json.JSONDecodeError as e:
            print(f"ERROR: Invalid JSON in {output_file}: {e}")
            print(f"Content starts with: {repr(content[:100])}")
            # Try to find where JSON starts
            json_start = content.find('{')
            if json_start > 0:
                print(f"Found JSON starting at position {json_start}, retrying...")
                content = content[json_start:]
                try:
                    terraform_output = json.loads(content)
                except json.JSONDecodeError as e2:
                    print(f"ERROR: Still invalid JSON: {e2}")
                    sys.exit(1)
            else:
                sys.exit(1)

        # Extract application data using helper function
        apps_data = parse_terraform_output(terraform_output)

        if not apps_data:
            print("ERROR: No applications data found in Terraform output")
            if debug:
                print(f"Full output structure: {json.dumps(terraform_output, indent=2)[:500]}...")
            sys.exit(1)

        # Check if we have correct structure
        if 'proxy_apps' not in apps_data and 'oauth_apps' not in apps_data:
            print("ERROR: Expected 'proxy_apps' or 'oauth_apps' in output")
            print(f"Available keys: {list(apps_data.keys())}")
            if debug and apps_data:
                print(f"Data structure: {json.dumps(apps_data, indent=2)[:500]}...")
            sys.exit(1)

        print(f"📊 Found {len(apps_data.get('proxy_apps', {}))} proxy apps, {len(apps_data.get('oauth_apps', {}))} oauth apps")

        # Generate Markdown
        markdown_content = generate_markdown_table(apps_data)

        # Write result
        wiki_file = "Applications.md"
        with open(wiki_file, 'w', encoding='utf-8') as f:
            f.write(markdown_content)

        print(f"✅ Wiki page generated: {wiki_file}")
        print(f"📊 Total applications: {len(apps_data.get('proxy_apps', {})) + len(apps_data.get('oauth_apps', {}))}")

    except Exception as e:
        print(f"ERROR: {e}")
        sys.exit(1)


if __name__ == "__main__":
    main()
```
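Because the helpers in this script are plain functions over dicts, the Terraform-output unwrapping can be exercised in isolation. This sketch restates `parse_terraform_output` as it appears in the file above and checks both accepted input shapes (the sample payloads are hypothetical):

```python
def parse_terraform_output(output_data):
    """Unwrap `terraform output -json` format, as in generate-apps-wiki.py."""
    if isinstance(output_data, dict) and 'applications_for_wiki' in output_data:
        app_output = output_data.get('applications_for_wiki', {})
        if isinstance(app_output, dict) and 'value' in app_output:
            return app_output['value']
        return app_output
    # Already the extracted value
    return output_data


apps = {"proxy_apps": {}, "oauth_apps": {}}

# Full `terraform output -json` shape: the value is wrapped with type/sensitive.
full = {"applications_for_wiki": {"value": apps, "type": "object", "sensitive": False}}
assert parse_terraform_output(full) == apps

# An already-extracted value passes through unchanged.
assert parse_terraform_output(apps) == apps
```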
`.gitea/scripts/generate-k8s-wiki.py` (new file, 298 lines)

```python
#!/usr/bin/env python3
"""
Script for generating Wiki page with Kubernetes services from k8s/ directory
"""

import os
import yaml
import json
import sys
from datetime import datetime
from collections import defaultdict


class K8sService:
    def __init__(self, name, category, path):
        self.name = name
        self.category = category
        self.path = path
        self.namespace = None
        self.deployment_type = "Unknown"
        self.helm_charts = []
        self.services = []
        self.ingresses = []
        self.external_secrets = []
        self.deployments = []
        self.pvcs = []
        self.argo_app = None
        self.files = []

    def __repr__(self):
        return f"K8sService({self.name}, {self.deployment_type})"


def parse_yaml_file(filepath):
    """Parse YAML file and return content"""
    try:
        with open(filepath, 'r') as f:
            # Load all documents in the file
            docs = list(yaml.safe_load_all(f))
            return docs if len(docs) > 1 else docs[0] if docs else None
    except Exception as e:
        print(f"  ⚠️ Error parsing {filepath}: {e}")
        return None


def analyze_service_directory(service_path, service_name, category):
    """Analyze a service directory and extract information"""
    service = K8sService(service_name, category, service_path)

    # List all files
    for file in os.listdir(service_path):
        if file.endswith('.yaml') or file.endswith('.yml'):
            service.files.append(file)
            filepath = os.path.join(service_path, file)

            # Parse YAML content
            content = parse_yaml_file(filepath)
            if not content:
                continue

            # Handle multiple documents in one file
            documents = content if isinstance(content, list) else [content]

            for doc in documents:
                if not isinstance(doc, dict) or 'kind' not in doc:
                    continue

                kind = doc['kind']
                metadata = doc.get('metadata', {})

                # ArgoCD Application
                if kind == 'Application' and doc.get('apiVersion', '').startswith('argoproj.io'):
                    service.argo_app = {
                        'name': metadata.get('name', ''),
                        'namespace': doc.get('spec', {}).get('destination', {}).get('namespace', ''),
                        'project': doc.get('spec', {}).get('project', ''),
                        'auto_sync': doc.get('spec', {}).get('syncPolicy', {}).get('automated') is not None
                    }
                    service.namespace = service.argo_app['namespace']

                # Kustomization
                elif kind == 'Kustomization':
                    if 'helmCharts' in doc:
                        service.deployment_type = "Helm Chart"
                        for chart in doc.get('helmCharts', []):
                            service.helm_charts.append({
                                'name': chart.get('name', ''),
                                'repo': chart.get('repo', ''),
                                'version': chart.get('version', ''),
                                'namespace': chart.get('namespace', service.namespace)
                            })
                    else:
                        service.deployment_type = "Kustomize"

                # Deployment
                elif kind == 'Deployment':
                    service.deployments.append({
                        'name': metadata.get('name', ''),
                        'namespace': metadata.get('namespace', service.namespace),
                        'replicas': doc.get('spec', {}).get('replicas', 1),
                        'containers': [c.get('name', '') for c in doc.get('spec', {}).get('template', {}).get('spec', {}).get('containers', [])]
                    })
                    if service.deployment_type == "Unknown":
                        service.deployment_type = "YAML Manifests"

                # Service
                elif kind == 'Service':
                    svc_spec = doc.get('spec', {})
                    service.services.append({
                        'name': metadata.get('name', ''),
                        'type': svc_spec.get('type', 'ClusterIP'),
                        'ports': svc_spec.get('ports', [])
                    })

                # Ingress
                elif kind == 'Ingress':
                    rules = doc.get('spec', {}).get('rules', [])
                    hosts = []
                    for rule in rules:
                        if 'host' in rule:
                            hosts.append(rule['host'])
                    service.ingresses.append({
                        'name': metadata.get('name', ''),
                        'hosts': hosts
                    })

                # ExternalSecret
                elif kind == 'ExternalSecret':
                    service.external_secrets.append({
                        'name': metadata.get('name', ''),
                        'store': doc.get('spec', {}).get('secretStoreRef', {}).get('name', '')
                    })

                # PersistentVolumeClaim
                elif kind == 'PersistentVolumeClaim':
                    service.pvcs.append({
                        'name': metadata.get('name', ''),
                        'size': doc.get('spec', {}).get('resources', {}).get('requests', {}).get('storage', '')
                    })

    # If no specific deployment type found but has YAML files
    if service.deployment_type == "Unknown" and service.files:
        service.deployment_type = "YAML Manifests"

    return service


def generate_markdown_table(services):
    """Generate markdown table for services"""
    markdown = []
    markdown.append("# Kubernetes Services")
    markdown.append("")
    markdown.append(f"*Automatically generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S UTC')}*")
    markdown.append("")

    # Group by category
    categories = defaultdict(list)
    for service in services:
        categories[service.category].append(service)

    # Statistics
    markdown.append("## Statistics")
    markdown.append("")
    markdown.append(f"- **Total Services**: {len(services)}")
    markdown.append(f"- **Categories**: {len(categories)}")
    helm_count = sum(1 for s in services if s.deployment_type == "Helm Chart")
    kustomize_count = sum(1 for s in services if s.deployment_type == "Kustomize")
    yaml_count = sum(1 for s in services if s.deployment_type == "YAML Manifests")
    markdown.append(f"- **Helm Charts**: {helm_count}")
    markdown.append(f"- **Kustomize**: {kustomize_count}")
    markdown.append(f"- **YAML Manifests**: {yaml_count}")
    markdown.append("")

    # Main table
    markdown.append("## All Services")
    markdown.append("")
    markdown.append("| Service | Category | Type | Namespace | Ingresses | Services | Secrets | Auto-Sync |")
    markdown.append("|---------|----------|------|-----------|-----------|----------|---------|-----------|")

    for category in sorted(categories.keys()):
        for service in sorted(categories[category], key=lambda x: x.name):
            # Service name with link to directory
            name_link = f"[{service.name}](k8s/{service.category}/{service.name}/)"

            # Deployment type with emoji
            type_emoji = {
                "Helm Chart": "🎩",
                "Kustomize": "🔧",
                "YAML Manifests": "📄",
                "Unknown": "❓"
            }
            type_str = f"{type_emoji.get(service.deployment_type, '')} {service.deployment_type}"

            # Ingresses
            ingresses = []
            for ing in service.ingresses:
                for host in ing['hosts']:
                    ingresses.append(f"[{host}](https://{host})")
            ingress_str = "<br>".join(ingresses) if ingresses else "-"

            # Services
            svc_list = []
            for svc in service.services:
                ports = [f"{p.get('port', '?')}" for p in svc['ports']]
                svc_list.append(f"`{svc['name']}:{','.join(ports)}`")
            svc_str = "<br>".join(svc_list) if svc_list else "-"

            # External Secrets
            secrets_str = f"{len(service.external_secrets)} secrets" if service.external_secrets else "-"

            # Auto-sync
            auto_sync = "✅" if service.argo_app and service.argo_app.get('auto_sync') else "❌"

            markdown.append(f"| **{name_link}** | {category} | {type_str} | {service.namespace or '-'} | {ingress_str} | {svc_str} | {secrets_str} | {auto_sync} |")

    markdown.append("")

    # Detailed sections by category
    for category in sorted(categories.keys()):
        markdown.append(f"## {category.title()} Services")
        markdown.append("")

        for service in sorted(categories[category], key=lambda x: x.name):
            markdown.append(f"### {service.name}")
            markdown.append("")

            # Basic info
            markdown.append(f"- **Type**: {service.deployment_type}")
            markdown.append(f"- **Namespace**: {service.namespace or 'Not specified'}")
            markdown.append(f"- **Path**: `{service.path}`")

            # Helm charts
            if service.helm_charts:
                markdown.append("- **Helm Charts**:")
                for chart in service.helm_charts:
                    markdown.append(f"  - {chart['name']} v{chart['version']} from {chart['repo']}")

            # Deployments
            if service.deployments:
                markdown.append("- **Deployments**:")
                for dep in service.deployments:
                    containers = ', '.join(dep['containers'])
                    markdown.append(f"  - {dep['name']} ({dep['replicas']} replicas) - Containers: {containers}")

            # Files
            if service.files:
                markdown.append(f"- **Files**: {', '.join(sorted(service.files))}")

            markdown.append("")

    markdown.append("---")
    markdown.append("*This page is automatically generated from k8s/ directory via CI/CD*")

    return "\n".join(markdown)


def main():
    if len(sys.argv) < 2:
        print("Usage: generate-k8s-wiki.py <k8s-directory> [output-file]")
        sys.exit(1)

    k8s_dir = sys.argv[1]
    output_file = sys.argv[2] if len(sys.argv) > 2 else "Kubernetes-Services.md"

    if not os.path.exists(k8s_dir):
        print(f"❌ Directory {k8s_dir} not found")
        sys.exit(1)

    print(f"📂 Scanning {k8s_dir}...")

    services = []

    # Scan each category directory
    for category in ['apps', 'core', 'games']:
        category_path = os.path.join(k8s_dir, category)
        if not os.path.exists(category_path):
            print(f"  ⚠️ Category {category} not found")
            continue

        print(f"📁 Processing {category}/...")

        # Scan each service in category
        for service_name in os.listdir(category_path):
            service_path = os.path.join(category_path, service_name)
            if os.path.isdir(service_path):
                print(f"  🔍 Analyzing {service_name}...")
                service = analyze_service_directory(service_path, service_name, category)
                services.append(service)

    print(f"\n✅ Found {len(services)} services")

    # Generate markdown
    markdown = generate_markdown_table(services)

    # Write output
    with open(output_file, 'w', encoding='utf-8') as f:
        f.write(markdown)

    print(f"📄 Wiki page generated: {output_file}")
    print(f"📊 Total services: {len(services)}")


if __name__ == "__main__":
    main()
```
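The deployment-type detection in this script boils down to a small dispatch on `kind`: a `Kustomization` with `helmCharts` wins, a plain `Kustomization` means Kustomize, and a bare `Deployment` falls back to "YAML Manifests". A condensed, stdlib-only restatement of that rule, using plain dicts in place of parsed YAML documents (this is a sketch of the rule, not the script itself):

```python
def classify(docs):
    """Mirror of the deployment-type detection in generate-k8s-wiki.py."""
    deployment_type = "Unknown"
    for doc in docs:
        kind = doc.get("kind")
        if kind == "Kustomization":
            # A Kustomization always decides the type; helmCharts wins.
            deployment_type = "Helm Chart" if "helmCharts" in doc else "Kustomize"
        elif kind == "Deployment" and deployment_type == "Unknown":
            # A Deployment only sets the fallback type.
            deployment_type = "YAML Manifests"
    return deployment_type


# Hypothetical parsed documents
assert classify([{"kind": "Kustomization", "helmCharts": [{"name": "nginx"}]}]) == "Helm Chart"
assert classify([{"kind": "Kustomization"}]) == "Kustomize"
assert classify([{"kind": "Deployment"}]) == "YAML Manifests"
assert classify([]) == "Unknown"
```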
`.gitea/scripts/process-terraform-output.py` (new file, 105 lines)

```python
#!/usr/bin/env python3
"""
Process Terraform output to extract applications_for_wiki data
Handles various output formats and cleans up invalid JSON
"""

import json
import sys
import os


def clean_command_prefix(content):
    """Remove [command] prefix if present"""
    if content.startswith('[command]'):
        lines = content.split('\n', 1)
        if len(lines) > 1:
            return lines[1]
    return content


def extract_valid_json(content):
    """Extract valid JSON from content that might have extra data"""
    # Find first { and last matching }
    start = content.find('{')
    if start < 0:
        return None

    count = 0
    end = start
    for i in range(start, len(content)):
        if content[i] == '{':
            count += 1
        elif content[i] == '}':
            count -= 1
            if count == 0:
                end = i + 1
                break

    if end > start and count == 0:
        return content[start:end]
    return None


def extract_value(data):
    """Extract value from Terraform output format"""
    if isinstance(data, dict) and 'value' in data:
        return data['value']
    return data


def main():
    if len(sys.argv) != 3:
        print("Usage: process-terraform-output.py <input-file> <output-file>")
        sys.exit(1)

    input_file = sys.argv[1]
    output_file = sys.argv[2]

    try:
        # Read input file
        with open(input_file, 'r') as f:
            content = f.read()

        # Clean command prefix if present
        content = clean_command_prefix(content)

        # Try to parse JSON directly
        try:
            data = json.loads(content)
            print("✅ Valid JSON parsed successfully")
        except json.JSONDecodeError as e:
            print(f"⚠️ Initial JSON parse failed: {e}")
            print("🔍 Attempting to extract valid JSON portion...")

            # Try to extract valid JSON
            valid_json = extract_valid_json(content)
            if valid_json:
                try:
                    data = json.loads(valid_json)
                    print("✅ Extracted valid JSON successfully")
                except json.JSONDecodeError as e2:
                    print(f"❌ Failed to parse extracted JSON: {e2}")
                    sys.exit(1)
            else:
                print("❌ Could not extract valid JSON from content")
                sys.exit(1)

        # Extract value if it's wrapped in Terraform output format
        result = extract_value(data)

        # Write output
        with open(output_file, 'w') as f:
            json.dump(result, f, indent=2)

        print(f"✅ Processed output written to {output_file}")

        # Show preview
        preview = json.dumps(result, indent=2)[:200]
        print(f"📄 Preview: {preview}...")

    except FileNotFoundError:
        print(f"❌ Input file {input_file} not found")
        sys.exit(1)
    except Exception as e:
        print(f"❌ Error: {e}")
        sys.exit(1)


if __name__ == "__main__":
    main()
```
|
139
.gitea/workflows/authentik-apps.yaml
Normal file
139
.gitea/workflows/authentik-apps.yaml
Normal file
@@ -0,0 +1,139 @@
|
||||
name: 'Terraform'
|
||||
|
||||
on:
|
||||
push:
|
||||
branches: [ "main" ]
|
||||
paths:
|
||||
- 'terraform/authentik/**'
|
||||
pull_request:
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
|
||||
jobs:
|
||||
terraform:
|
||||
name: 'Terraform'
|
||||
runs-on: ubuntu-latest
|
||||
environment: production
|
||||
|
||||
defaults:
|
||||
run:
|
||||
shell: bash
|
||||
|
||||
steps:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v3
|
||||
|
||||
- name: Setup Terraform
|
||||
uses: hashicorp/setup-terraform@v2
|
||||
with:
|
||||
cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
|
||||
|
||||
- name: Terraform Init
|
||||
run: terraform init
|
||||
working-directory: ./terraform/authentik
|
||||
|
||||
- name: Terraform Format
|
||||
run: terraform fmt -check
|
||||
continue-on-error: true
|
||||
working-directory: ./terraform/authentik
|
||||
|
||||
- name: Terraform Apply
|
||||
run: terraform apply -var-file proxy-apps.tfvars -var-file oauth2-apps.tfvars -var-file terraform.tfvars -var-file groups.tfvars -input=false -auto-approve -parallelism=100
|
||||
working-directory: ./terraform/authentik
|
||||
|
||||
- name: Generate Wiki Content
|
||||
if: success()
|
||||
continue-on-error: true
|
||||
run: |
|
||||
echo "📋 Starting Wiki generation..."
|
||||
cd ./terraform/authentik
|
||||
|
||||
# Get terraform output
|
||||
echo "🔍 Generating Terraform output..."
|
||||
terraform output -json applications_for_wiki > terraform-raw-output.json 2>&1
|
||||
|
||||
# Process output to extract clean JSON
|
||||
echo "📤 Processing Terraform output..."
|
||||
python3 ../../.gitea/scripts/process-terraform-output.py terraform-raw-output.json terraform-output.json
|
||||
|
||||
# Run wiki generation
|
||||
echo "📊 Running wiki generation script..."
|
||||
if python3 ../../.gitea/scripts/generate-apps-wiki.py terraform-output.json; then
|
||||
echo "✅ Wiki content generated successfully"
|
||||
else
|
||||
echo "⚠️ Wiki generation failed, retrying with debug..."
|
||||
python3 ../../.gitea/scripts/generate-apps-wiki.py terraform-output.json --debug || echo "⚠️ Wiki generation failed"
|
||||
fi
|
||||
|
||||
# Check results
|
||||
if [ -f "Applications.md" ]; then
|
||||
echo "✅ Wiki file created: $(wc -l < Applications.md) lines"
|
||||
else
|
||||
echo "⚠️ Wiki content not generated"
|
||||
exit 0
|
||||
fi
|
||||
working-directory: ./
|
||||
|
||||
- name: Upload Wiki to Gitea
|
||||
if: success()
|
||||
continue-on-error: true
|
||||
run: |
|
||||
cd ./terraform/authentik
|
||||
|
||||
# Set variables
|
||||
GITEA_URL="${{ secrets.GT_URL }}"
|
||||
GITEA_TOKEN="${{ secrets.GT_WIKI_TOKEN }}"
|
||||
GITEA_OWNER="${{ secrets.GT_OWNER }}"
|
||||
GITEA_REPO="${{ secrets.GT_REPO }}"
|
||||
|
||||
# Debug variables (without exposing token)
|
||||
echo "🔍 Checking variables..."
|
||||
echo "GITEA_URL: ${GITEA_URL:-NOT SET}"
|
||||
echo "GITEA_OWNER: ${GITEA_OWNER:-NOT SET}"
|
||||
echo "GITEA_REPO: ${GITEA_REPO:-NOT SET}"
|
||||
echo "GITEA_TOKEN: $(if [ -n "$GITEA_TOKEN" ]; then echo "SET"; else echo "NOT SET"; fi)"
|
||||
|
||||
# Check if file exists
|
||||
if [ ! -f "Applications.md" ]; then
|
||||
echo "⚠️ Applications.md not found, skipping wiki update"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
echo "📤 Uploading to Gitea Wiki..."
|
||||
|
||||
# Encode content to base64
|
||||
CONTENT=$(base64 -w 0 Applications.md)
|
||||
|
||||
# Check if wiki page exists
|
||||
WIKI_PAGE_EXISTS=$(curl -s -o /dev/null -w "%{http_code}" \
|
||||
-H "Authorization: token $GITEA_TOKEN" \
|
||||
"$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/wiki/page/Applications" || echo "000")
|
||||
|
||||
if [ "$WIKI_PAGE_EXISTS" = "200" ]; then
|
||||
echo "📝 Updating existing wiki page..."
|
||||
curl -X PATCH \
|
||||
-H "Authorization: token $GITEA_TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d "{
|
||||
\"title\": \"Applications\",
|
||||
\"content_base64\": \"$CONTENT\",
|
||||
\"message\": \"Update applications list from CI/CD [$(date)]\"
|
||||
}" \
|
||||
"$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/wiki/page/Applications" || echo "⚠️ Wiki update failed"
|
||||
else
|
||||
echo "📄 Creating new wiki page..."
|
||||
curl -X POST \
|
||||
-H "Authorization: token $GITEA_TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d "{
|
||||
\"title\": \"Applications\",
|
||||
\"content_base64\": \"$CONTENT\",
|
||||
\"message\": \"Create applications list from CI/CD [$(date)]\"
|
||||
}" \
|
||||
"$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/wiki/new" || echo "⚠️ Wiki creation failed"
|
||||
fi
|
||||
|
||||
echo "✅ Wiki update process completed"
|
||||
working-directory: ./
|
||||
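The upload step interpolates `$CONTENT` and `$(date)` into a hand-quoted JSON string inside `-d "{...}"`, which breaks as soon as a value contains a quote. A minimal Python sketch of a safer payload build (the helper name is hypothetical; the `title`/`content_base64`/`message` fields mirror the request body the workflow sends):

```python
import base64
import json

def build_wiki_payload(title: str, markdown: str, message: str) -> str:
    # Mirror of `base64 -w 0 <file>`: a single unwrapped base64 line.
    content_b64 = base64.b64encode(markdown.encode("utf-8")).decode("ascii")
    # json.dumps handles the quoting/escaping that the inline -d "{...}" does by hand.
    return json.dumps({
        "title": title,
        "content_base64": content_b64,
        "message": message,
    })

payload = build_wiki_payload("Applications", "# Apps\n", "Update applications list from CI/CD")
decoded = base64.b64decode(json.loads(payload)["content_base64"]).decode("utf-8")
```

In the shell step the same effect could be had by piping the variables through `jq -n`, keeping the curl call itself unchanged.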
111  .gitea/workflows/k8s-wiki.yaml  Normal file
@@ -0,0 +1,111 @@
name: 'Update Kubernetes Services Wiki'

on:
  push:
    branches: [ "main" ]
    # paths:
    #   - 'k8s/**'
    #   - '.gitea/scripts/generate-k8s-wiki.py'
  workflow_dispatch:

permissions:
  contents: read

jobs:
  update-k8s-wiki:
    name: 'Generate and Update K8s Wiki'
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Install Python dependencies
        run: |
          pip install pyyaml

      - name: Generate K8s Services Wiki
        run: |
          echo "📋 Starting K8s wiki generation..."
          python3 .gitea/scripts/generate-k8s-wiki.py k8s/ Kubernetes-Services.md

          if [ -f "Kubernetes-Services.md" ]; then
            echo "✅ Wiki content generated successfully"
            echo "📄 File size: $(wc -c < Kubernetes-Services.md) bytes"
            echo "📄 Lines: $(wc -l < Kubernetes-Services.md)"
          else
            echo "❌ Wiki content not generated"
            exit 1
          fi

      - name: Upload Wiki to Gitea
        continue-on-error: true
        run: |
          # Set variables
          GITEA_URL="${{ secrets.GT_URL }}"
          GITEA_TOKEN="${{ secrets.GT_WIKI_TOKEN }}"
          GITEA_OWNER="${{ secrets.GT_OWNER }}"
          GITEA_REPO="${{ secrets.GT_REPO }}"

          # Debug variables (without exposing token)
          echo "🔍 Checking variables..."
          echo "GITEA_URL: ${GITEA_URL:-NOT SET}"
          echo "GITEA_OWNER: ${GITEA_OWNER:-NOT SET}"
          echo "GITEA_REPO: ${GITEA_REPO:-NOT SET}"
          echo "GITEA_TOKEN: $(if [ -n "$GITEA_TOKEN" ]; then echo "SET"; else echo "NOT SET"; fi)"

          if [ ! -f "Kubernetes-Services.md" ]; then
            echo "❌ Kubernetes-Services.md not found"
            exit 1
          fi

          echo "📤 Uploading to Gitea Wiki..."

          # Encode content to base64
          CONTENT=$(base64 -w 0 Kubernetes-Services.md)

          # Check if wiki page exists
          WIKI_PAGE_EXISTS=$(curl -s -o /dev/null -w "%{http_code}" \
            -H "Authorization: token $GITEA_TOKEN" \
            "$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/wiki/page/Kubernetes-Services" || echo "000")

          if [ "$WIKI_PAGE_EXISTS" = "200" ]; then
            echo "📝 Updating existing wiki page..."
            curl -X PATCH \
              -H "Authorization: token $GITEA_TOKEN" \
              -H "Content-Type: application/json" \
              -d "{
                \"title\": \"Kubernetes-Services\",
                \"content_base64\": \"$CONTENT\",
                \"message\": \"Update K8s services list from CI/CD [$(date)]\"
              }" \
              "$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/wiki/page/Kubernetes-Services" || echo "⚠️ Wiki update failed"
          else
            echo "📄 Creating new wiki page..."
            curl -X POST \
              -H "Authorization: token $GITEA_TOKEN" \
              -H "Content-Type: application/json" \
              -d "{
                \"title\": \"Kubernetes-Services\",
                \"content_base64\": \"$CONTENT\",
                \"message\": \"Create K8s services list from CI/CD [$(date)]\"
              }" \
              "$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/wiki/new" || echo "⚠️ Wiki creation failed"
          fi

          echo "✅ Wiki update process completed"
          echo "🔗 Wiki URL: $GITEA_URL/$GITEA_OWNER/$GITEA_REPO/wiki/Kubernetes-Services"

      - name: Summary
        if: always()
        run: |
          echo "## 📊 K8s Wiki Update Summary" >> $GITHUB_STEP_SUMMARY
          if [ -f "Kubernetes-Services.md" ]; then
            echo "- ✅ K8s services analyzed" >> $GITHUB_STEP_SUMMARY
            echo "- ✅ Wiki page generated" >> $GITHUB_STEP_SUMMARY
            echo "" >> $GITHUB_STEP_SUMMARY
            echo "**Services found:** $(grep -c '^|' Kubernetes-Services.md || echo 0)" >> $GITHUB_STEP_SUMMARY
          else
            echo "- ❌ Wiki generation failed" >> $GITHUB_STEP_SUMMARY
          fi
          echo "**Generated at:** $(date)" >> $GITHUB_STEP_SUMMARY
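Both wiki workflows share the same upsert branch: probe the page URL, PATCH it if the probe returns 200, otherwise POST to `/wiki/new`. A sketch of that decision (helper name hypothetical; the URL shapes mirror the curl calls):

```python
def wiki_upsert_request(status_code: int, base: str, page: str):
    # 200 from the probe curl means the page already exists: update via PATCH.
    if status_code == 200:
        return "PATCH", f"{base}/wiki/page/{page}"
    # 404, or the "000" fallback when curl itself failed, creates via POST.
    return "POST", f"{base}/wiki/new"

method, url = wiki_upsert_request(200, "https://gitea.example/api/v1/repos/o/r", "Applications")
```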
@@ -1,9 +1,9 @@
 name: Check with kubeconform

 on:
   push:
     branches: [ main ]

     paths:
       - 'k8s/**'
 jobs:
   lint:
     runs-on: ubuntu-latest
@@ -16,22 +16,53 @@ jobs:
       - name: Kubeconform validation
         id: kubeconform
         run: |
           # Create exclusion list - add files that should be skipped from validation
           EXCLUSIONS=(
             "./k8s/core/system-upgrade/crd.yaml"
             # Add more files here as needed
             # "./path/to/another/file.yaml"
           )

           # Create a temporary file for storing validation output
           VALIDATION_OUTPUT=$(mktemp)

-          # Run kubeconform and capture output
-          find . -name '*.yaml' \
           # Function to check if file is in exclusions
           is_excluded() {
             local file="$1"
             for exclusion in "${EXCLUSIONS[@]}"; do
               if [[ "$file" == "$exclusion" ]]; then
                 return 0
               fi
             done
             return 1
           }

           # Find all yaml files and filter out exclusions
           YAML_FILES=()
           while IFS= read -r -d '' file; do
             if ! is_excluded "$file"; then
               YAML_FILES+=("$file")
             else
               echo "⚠️ Skipping excluded file: $file"
             fi
           done < <(find . -name '*.yaml' \
             ! -name '*values.yaml' \
             ! -path './.gitea/*' \
-            -print0 \
-            | xargs -0 kubeconform \
-              -summary \
-              -verbose \
-              -output pretty \
-              -ignore-missing-schemas \
-              -schema-location default \
-              -schema-location 'https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json' \
-              -schema-location 'https://raw.githubusercontent.com/SchemaStore/schemastore/refs/heads/master/src/schemas/json/kustomization.json' > $VALIDATION_OUTPUT 2>&1 || true
             -print0)

           # Run kubeconform only if there are files to validate
           if [ ${#YAML_FILES[@]} -gt 0 ]; then
             printf '%s\0' "${YAML_FILES[@]}" | xargs -0 kubeconform \
               -summary \
               -verbose \
               -output pretty \
               -ignore-missing-schemas \
               -schema-location default \
               -schema-location 'https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json' \
               -schema-location 'https://raw.githubusercontent.com/SchemaStore/schemastore/refs/heads/master/src/schemas/json/kustomization.json' > $VALIDATION_OUTPUT 2>&1 || true
           else
             echo "No files to validate after applying exclusions" > $VALIDATION_OUTPUT
           fi

           # Display output in logs
           cat $VALIDATION_OUTPUT
@@ -44,7 +75,7 @@ jobs:
           cat invalid_files.txt
           exit 1
         else
-          echo "All manifests are valid!"
+          echo "✅ All manifests are valid!"
         fi
        continue-on-error: true

@@ -69,4 +100,4 @@ jobs:

           Invalid files:
           ${{ env.INVALID_FILES }}
           <a href="https://gt.hexor.cy/${{ github.repository }}/actions/runs/${{ github.run_number }}">🔗 Check details</a>
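The `is_excluded` loop above uses an exact string comparison (`[[ "$file" == "$exclusion" ]]`), so entries must be spelled exactly as `find .` prints them, leading `./` included. The filter can be mirrored in a few lines:

```python
# Exact-match exclusion semantics of the bash loop above.
EXCLUSIONS = {"./k8s/core/system-upgrade/crd.yaml"}

def is_excluded(path: str) -> bool:
    return path in EXCLUSIONS

found = ["./k8s/core/system-upgrade/crd.yaml", "./k8s/apps/hexound/ingress.yaml"]
to_validate = [f for f in found if not is_excluded(f)]
```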
8  .gitignore  vendored
@@ -10,13 +10,6 @@
 crash.log
 crash.*.log

-# Exclude all .tfvars files, which are likely to contain sensitive data, such as
-# password, private keys, and other secrets. These should not be part of version
-# control as they are data points which are potentially sensitive and subject
-# to change depending on the environment.
-*.tfvars
-*.tfvars.json
-
 # Ignore override files as they are usually used to override resources locally and so
 # are not checked in
 override.tf
@@ -58,3 +51,4 @@ tags
 # Persistent undo
 [._]*.un~

+.DS_Store
@@ -30,6 +30,27 @@ spec:
      containers:
        - name: gitea
          image: 'gitea/gitea:latest'
          resources:
            requests:
              memory: "512Mi"
              cpu: "200m"
            limits:
              memory: "2Gi"
              cpu: "1000m"
          env:
            - name: GITEA__service__REGISTER_MANUAL_CONFIRM
              value: "true"
            - name: GITEA__service__ENABLE_CAPTCHA
              value: "false"
            - name: GITEA__service__REQUIRE_CAPTCHA_FOR_LOGIN
              value: "true"
            - name: GITEA__service__REQUIRE_EXTERNAL_REGISTRATION_CAPTCHA
              value: "true"
            - name: GITEA__service__CAPTCHA_TYPE
              value: "hcaptcha"
          envFrom:
            - secretRef:
                name: gitea-recapcha-creds
          ports:
            - name: http
              containerPort: 3000
@@ -40,6 +61,7 @@ spec:
          volumeMounts:
            - name: storage
              mountPath: /data

---
apiVersion: apps/v1
kind: Deployment
@@ -56,20 +78,65 @@ spec:
        app: gitea-runner
    spec:
      nodeSelector:
        #kubernetes.io/hostname: master.tail2fe2d.ts.net
-       kubernetes.io/hostname: nas.homenet
+       kubernetes.io/hostname: home.homenet
      volumes:
        - name: docker-sock
          hostPath:
            #path: /var/run/k3s/containerd/containerd.sock
            path: /var/run/docker.sock
            type: Socket
        - name: runner-data
          emptyDir:
            sizeLimit: 30Gi
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 3
              preference:
                matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - home.homenet
            - weight: 1
              preference:
                matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - master.tail2fe2d.ts.net
            - weight: 2
              preference:
                matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - nas.homenet
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - home.homenet
                      - nas.homenet
                      - master.tail2fe2d.ts.net
      containers:
        - name: gitea-runner
          image: gitea/act_runner:nightly
          resources:
            requests:
              cpu: "100m"
              memory: "256Mi"
              ephemeral-storage: "1Gi" # reserve ephemeral storage
            limits:
              cpu: "2000m"
              memory: "4Gi"
              ephemeral-storage: "28Gi" # hard cap for /data usage
          volumeMounts:
            - name: docker-sock
              mountPath: /var/run/docker.sock
            - name: runner-data
              mountPath: /data
          env:
            - name: GITEA_INSTANCE_URL
              value: "https://gt.hexor.cy"
@@ -82,4 +149,3 @@ spec:
              value: "k8s-runner"
            - name: GITEA_RUNNER_LABELS
              value: "ubuntu-latest:docker://ghcr.io/catthehacker/ubuntu:act-latest,ubuntu-22.04:docker://ghcr.io/catthehacker/ubuntu:act-22.04,ubuntu-20.04:docker://ghcr.io/catthehacker/ubuntu:act-20.04"
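The `GITEA__service__*` variables rely on Gitea's convention of mapping `app.ini` settings to environment variables named `GITEA__<section>__<KEY>`, which is also how the hcaptcha keys injected via `envFrom` reach the container. A sketch of that naming rule (helper name hypothetical):

```python
def gitea_env_name(section: str, key: str) -> str:
    # Gitea maps app.ini settings to env vars shaped GITEA__<section>__<KEY>
    # (double underscores between prefix, section, and key).
    return f"GITEA__{section}__{key.upper()}"

name = gitea_env_name("service", "enable_captcha")
```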
@@ -23,3 +23,37 @@ spec:
        key: e475b5ab-ea3c-48a5-bb4c-a6bc552fc064
        property: login.password

---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: gitea-recapcha-creds
spec:
  refreshInterval: 1m
  target:
    name: gitea-recapcha-creds
    deletionPolicy: Delete
    template:
      type: Opaque
      data:
        GITEA__service__HCAPTCHA_SITEKEY: |-
          {{ .HCAPTCHA_SITEKEY }}
        GITEA__service__HCAPTCHA_SECRET: |-
          {{ .HCAPTCHA_SECRET }}
  data:
    - secretKey: HCAPTCHA_SITEKEY
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: 89c8d8d2-6b53-42c5-805f-38a341ef163e
        property: login.username
    - secretKey: HCAPTCHA_SECRET
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: 89c8d8d2-6b53-42c5-805f-38a341ef163e
        property: login.password
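The `template.data` block is rendered by External Secrets Operator with Go templates: each `{{ .KEY }}` is replaced by the value fetched for the `secretKey` of the same name. A rough Python stand-in for that substitution (not ESO's actual engine, just an illustration of the mapping):

```python
import re

def render_template(template: str, values: dict) -> str:
    # Replace each {{ .KEY }} with the value fetched for secretKey KEY.
    return re.sub(r"\{\{\s*\.(\w+)\s*\}\}", lambda m: values[m.group(1)], template)

rendered = render_template("{{ .HCAPTCHA_SITEKEY }}", {"HCAPTCHA_SITEKEY": "site-key-123"})
```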
21  k8s/apps/greece-notifier/app.yaml  Normal file
@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: greece-notifier
  namespace: argocd
spec:
  project: apps
  destination:
    namespace: greece-notifier
    server: https://kubernetes.default.svc
  source:
    repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
    targetRevision: HEAD
    path: k8s/apps/greece-notifier
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true
51  k8s/apps/greece-notifier/deployment.yaml  Normal file
@@ -0,0 +1,51 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greece-notifier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: greece-notifier
  template:
    metadata:
      labels:
        app: greece-notifier
    spec:
      nodeSelector:
        kubernetes.io/hostname: nas.homenet
      volumes:
        - name: data
          nfs:
            server: nas.homenet
            path: /mnt/storage/Storage/k8s/greece-notifier/
            readOnly: false
      containers:
        - name: greece-notifier
          image: ultradesu/greece-notifier:master
          imagePullPolicy: Always
          resources:
            requests:
              cpu: "100m"
              memory: "256Mi"
            limits:
              cpu: "2000m"
              memory: "1Gi"
          volumeMounts:
            - name: data
              mountPath: /data
          env:
            - name: RUST_LOG
              value: "info"
            - name: ENDPOINT_BID
              value: "56" # Cyprus id
            - name: UPDATE_INTERVAL_MIN_SECS
              value: "270"
            - name: UPDATE_INTERVAL_MAX_SECS
              value: "350"
            - name: TELOXIDE_TOKEN
              valueFrom:
                secretKeyRef:
                  name: greece-notifier-creds
                  key: TELOXIDE_TOKEN
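The paired `UPDATE_INTERVAL_MIN_SECS` / `UPDATE_INTERVAL_MAX_SECS` variables suggest the notifier jitters its polling period between the two bounds; that is an assumption about the app, not something this repo documents. A sketch under that assumption:

```python
import random

def next_poll_delay(min_secs: int, max_secs: int) -> int:
    # Assumption: a uniformly random delay between UPDATE_INTERVAL_MIN_SECS
    # and UPDATE_INTERVAL_MAX_SECS, so polling is not on a fixed cadence.
    return random.randint(min_secs, max_secs)

delay = next_poll_delay(270, 350)
```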
23  k8s/apps/greece-notifier/external-secrets.yaml  Normal file
@@ -0,0 +1,23 @@
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: greece-notifier-creds
spec:
  target:
    name: greece-notifier-creds
    deletionPolicy: Delete
    template:
      type: Opaque
      data:
        TELOXIDE_TOKEN: |-
          {{ .token }}
  data:
    - secretKey: token
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: 34e8f207-27ad-4b21-b114-84d3f7460a51
        property: login.password
6  k8s/apps/greece-notifier/kustomization.yaml  Normal file
@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ./external-secrets.yaml
  - ./deployment.yaml
@@ -24,6 +24,13 @@ spec:
      initContainers:
        - name: git-cloner
          image: alpine/git
          resources:
            requests:
              memory: "32Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "200m"
          command:
            - git
            - clone
@@ -36,6 +43,13 @@ spec:
      containers:
        - name: hexound
          image: trafex/php-nginx:3.8.0
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          volumeMounts:
            - name: hexound-repo
              mountPath: /var/www/html
37  k8s/apps/hexound/ingress.yaml  Normal file
@@ -0,0 +1,37 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hexound-tls-ingress
  annotations:
    ingressClassName: traefik
    cert-manager.io/cluster-issuer: letsencrypt
    traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd
    acme.cert-manager.io/http01-edit-in-place: "true"
spec:
  rules:
    - host: hexound.hexor.cy
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hexound
                port:
                  number: 80
    - host: hexound.hexor.ru
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hexound
                port:
                  number: 80
  tls:
    - secretName: hexound-tls
      hosts:
        - hexound.hexor.cy
        - hexound.hexor.ru
@@ -5,4 +5,5 @@ resources:
   - app.yaml
   - deployment.yaml
   - service.yaml
+  - ingress.yaml
@@ -17,6 +17,13 @@ spec:
        - name: immich-server
          image: ghcr.io/immich-app/immich-server:release
          imagePullPolicy: Always
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "4Gi"
              cpu: "2000m"
          ports:
            - containerPort: 2283
          env:
@@ -43,13 +50,20 @@ spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
-           - weight: 100
+           - weight: 90
              preference:
                matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - home.homenet
            - weight: 10
              preference:
                matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - nas.homenet
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
@@ -72,7 +86,7 @@ spec:
        - name: camera
          nfs:
            server: nas.homenet
-           path: /mnt/storage/Storage/Photos/Phone/
+           path: /mnt/storage/Storage/Syncthing-repos/PhoneCamera/
            readOnly: true
        - name: localtime
          hostPath:
@@ -127,7 +141,7 @@ spec:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - home.homenet
                      - nas.homenet

      topologySpreadConstraints:
        - maxSkew: 2
@@ -140,6 +154,13 @@ spec:
        - name: immich-ml
          image: ghcr.io/immich-app/immich-machine-learning:release
          imagePullPolicy: Always
          resources:
            requests:
              memory: "2Gi"
              cpu: "1000m"
            limits:
              memory: "8Gi"
              cpu: "4000m"
          env:
            - name: TZ
              value: Asia/Nicosia
@@ -174,6 +195,13 @@ spec:
      containers:
        - name: redis
          image: redis:6.2-alpine
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          readinessProbe:
            exec:
              command: ["redis-cli", "ping"]
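The requests/limits added throughout these manifests use standard Kubernetes quantities: binary suffixes (Ki/Mi/Gi, powers of 1024) for memory and millicores (`m`) for CPU. A small parser showing the arithmetic:

```python
def parse_memory(q: str) -> int:
    # Binary suffixes: Ki/Mi/Gi are powers of 1024, so "4Gi" is 4 GiB in bytes.
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[:-2]) * factor
    return int(q)

def parse_cpu(q: str) -> float:
    # "500m" is 500 millicores, i.e. half a core.
    return int(q[:-1]) / 1000 if q.endswith("m") else float(q)
```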
@@ -1,5 +1,12 @@
 image:
   tag: 10.10.7
 resources:
   requests:
     memory: "2Gi"
     cpu: "1000m"
   limits:
     memory: "8Gi"
     cpu: "4000m"
 nodeSelector:
   kubernetes.io/hostname: master.tail2fe2d.ts.net
 persistence:
21  k8s/apps/k8s-secrets/app.yaml  Normal file
@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: k8s-secrets
  namespace: argocd
spec:
  project: apps
  destination:
    namespace: k8s-secret
    server: https://kubernetes.default.svc
  source:
    repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
    targetRevision: HEAD
    path: k8s/apps/k8s-secrets
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true
63  k8s/apps/k8s-secrets/deployment.yaml  Normal file
@@ -0,0 +1,63 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secret-reader
  labels:
    app: secret-reader
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secret-reader
  template:
    metadata:
      labels:
        app: secret-reader
    spec:
      serviceAccountName: secret-reader
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: secret-reader
          image: ultradesu/k8s-secrets:0.1.1
          imagePullPolicy: Always
          args:
            - "--secrets"
            - "openai-creds"
            - "--namespace"
            - "k8s-secret"
            - "--port"
            - "3000"
          ports:
            - containerPort: 3000
              name: http
          env:
            - name: RUST_LOG
              value: "info"
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
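A hypothetical mirror of the `--secrets` / `--namespace` / `--port` flags passed to the container, showing how the args list above would parse; the real binary's parser is not part of this repo:

```python
import argparse

# Hypothetical mirror of the flags passed to ultradesu/k8s-secrets above.
parser = argparse.ArgumentParser()
parser.add_argument("--secrets")
parser.add_argument("--namespace")
parser.add_argument("--port", type=int)

opts = parser.parse_args(["--secrets", "openai-creds", "--namespace", "k8s-secret", "--port", "3000"])
```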
44  k8s/apps/k8s-secrets/external-secret.yaml  Normal file
@@ -0,0 +1,44 @@
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: openai-creds
spec:
  target:
    name: openai-creds
    deletionPolicy: Delete
    template:
      type: Opaque
      data:
        USER: |-
          {{ .user }}
        PASS: |-
          {{ .pass }}
        TOTP: |-
          {{ .totp }}
  data:
    - secretKey: user
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: a485f323-fd47-40ee-a5cf-40891b1f963c
        property: login.username
    - secretKey: pass
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: a485f323-fd47-40ee-a5cf-40891b1f963c
        property: login.password
    - secretKey: totp
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: a485f323-fd47-40ee-a5cf-40891b1f963c
        property: login.totp
20  k8s/apps/k8s-secrets/rbac.yaml  Normal file
@@ -0,0 +1,20 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader
subjects:
  - kind: ServiceAccount
    name: secret-reader
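The Role grants only `get` and `list` on secrets in its own namespace (it is a Role, not a ClusterRole). A tiny matcher showing how a request is checked against such a rule:

```python
def allows(rule: dict, api_group: str, resource: str, verb: str) -> bool:
    # A rule grants a request when its group, resource, and verb all appear
    # in the rule's lists ("" is the core API group, which holds secrets).
    return (api_group in rule["apiGroups"]
            and resource in rule["resources"]
            and verb in rule["verbs"])

rule = {"apiGroups": [""], "resources": ["secrets"], "verbs": ["get", "list"]}
```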
6  k8s/apps/k8s-secrets/service-account.yaml  Normal file
@@ -0,0 +1,6 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secret-reader
  labels:
    app: secret-reader
15  k8s/apps/k8s-secrets/service.yaml  Normal file
@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  name: secret-reader
  labels:
    app: secret-reader
spec:
  type: ClusterIP
  selector:
    app: secret-reader
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
      name: http
@@ -23,6 +23,13 @@ spec:
        - name: khm
          image: 'ultradesu/khm:latest'
          imagePullPolicy: Always
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          command:
            - /bin/sh
            - -c
@@ -2,7 +2,7 @@
 apiVersion: external-secrets.io/v1beta1
 kind: ExternalSecret
 metadata:
-  name: postgres-creds
+  name: postgres-and-oauth-creds
 spec:
   target:
     name: postgres-creds
@@ -8,12 +8,12 @@ nodeSelector:
   kubernetes.io/hostname: nas.homenet

 resources:
-  limits:
-    cpu: 1000m
-    memory: 1Gi
-  requests:
-    cpu: 200m
-    memory: 256Mi
+  requests:
+    memory: "512Mi"
+    cpu: "200m"
+  limits:
+    memory: "2Gi"
+    cpu: "1000m"

 service:
   type: ClusterIP
@@ -1,8 +1,22 @@
 image:
-  tag: 2.15.3
+  tag: 2.18.4
 resources:
   requests:
     memory: "1Gi"
     cpu: "500m"
   limits:
     memory: "4Gi"
     cpu: "2000m"
 initContainers:
   install-tesseract-langs:
-    image: ghcr.io/paperless-ngx/paperless-ngx:2.15.1
+    image: ghcr.io/paperless-ngx/paperless-ngx:2.18.2
     resources:
       requests:
         memory: "256Mi"
         cpu: "100m"
       limits:
         memory: "1Gi"
         cpu: "500m"
     command: ["/bin/sh", "-c"]
     args:
       - apt-get update && apt-get install -y --reinstall tesseract-ocr-rus tesseract-ocr-jpn tesseract-ocr-chi-sim tesseract-ocr-eng tesseract-ocr-ell && cp -v -r /usr/share/tesseract-ocr/5/tessdata/* /custom-tessdata/
@@ -8,12 +8,12 @@ nodeSelector:
   kubernetes.io/hostname: nas.homenet

 resources:
-  limits:
-    cpu: 500m
-    memory: 512Mi
-  requests:
-    cpu: 100m
-    memory: 128Mi
+  requests:
+    memory: "256Mi"
+    cpu: "100m"
+  limits:
+    memory: "1Gi"
+    cpu: "500m"

 service:
   type: ClusterIP
134  k8s/apps/rustdesk/deployment.yaml  Normal file
@@ -0,0 +1,134 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rustdesk-hbbs
  labels:
    app: rustdesk-hbbs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rustdesk-hbbs
  template:
    metadata:
      labels:
        app: rustdesk-hbbs
    spec:
      nodeSelector:
        kubernetes.io/hostname: master.tail2fe2d.ts.net
      containers:
        - name: hbbs
          image: rustdesk/rustdesk-server:latest
          imagePullPolicy: Always
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          command: ["hbbs"]
          args:
            - "--relay-servers"
            - "rd.hexor.cy:21117"
            - "--port"
            - "21116"
          ports:
            - name: registry
              containerPort: 21116
              protocol: TCP
            - name: nat
              containerPort: 21115
              protocol: TCP
          volumeMounts:
            - name: keys
              mountPath: /data
              readOnly: true
            - name: data
              mountPath: /data-persistent
          env:
            - name: RUST_LOG
              value: "info"
            - name: DB_URL
              value: "/data-persistent/db_v2.sqlite3"
          workingDir: /data
      volumes:
        - name: keys
          secret:
            secretName: rustdesk-keys
            items:
              - key: id_ed25519
                path: id_ed25519
                mode: 0600
              - key: id_ed25519.pub
                path: id_ed25519.pub
                mode: 0644
        - name: data
          hostPath:
            path: /k8s/rustdesk/hbbs
            type: DirectoryOrCreate

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rustdesk-hbbr
  labels:
    app: rustdesk-hbbr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rustdesk-hbbr
  template:
    metadata:
      labels:
        app: rustdesk-hbbr
    spec:
      nodeSelector:
        kubernetes.io/hostname: master.tail2fe2d.ts.net
      containers:
        - name: hbbr
          image: rustdesk/rustdesk-server:latest
          imagePullPolicy: Always
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          command: ["hbbr"]
          args:
            - "--port"
            - "21117"
          ports:
            - name: relay
              containerPort: 21117
              protocol: TCP
          volumeMounts:
            - name: keys
              mountPath: /data
              readOnly: true
            - name: data
              mountPath: /data-persistent
          env:
            - name: RUST_LOG
              value: "info"
          workingDir: /data
      volumes:
        - name: keys
          secret:
            secretName: rustdesk-keys
            items:
              - key: id_ed25519
                path: id_ed25519
                mode: 0600
              - key: id_ed25519.pub
                path: id_ed25519.pub
                mode: 0644
        - name: data
          hostPath:
            path: /k8s/rustdesk/hbbr
            type: DirectoryOrCreate
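The secret items are mounted with modes 0600 and 0644: owner read/write only for the private key, world-readable for the public key. A sketch rendering those octal modes as `ls -l` permission strings:

```python
import stat

def mode_string(mode: int) -> str:
    # Render permission bits the way `ls -l` does; stat.filemode prepends a
    # file-type character, which is sliced off here.
    return stat.filemode(mode)[1:]

private = mode_string(0o600)  # owner read/write only
public = mode_string(0o644)   # owner read/write, group/other read
```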
@@ -2,65 +2,33 @@
 apiVersion: external-secrets.io/v1beta1
 kind: ExternalSecret
 metadata:
-  name: postgres-creds
+  name: rustdesk-keys
 spec:
   target:
-    name: postgres-creds
+    name: rustdesk-keys
     deletionPolicy: Delete
     template:
       type: Opaque
       data:
-        psql_user: paperless
-        psql_pass: |-
-          {{ .psql_pass }}
-        oauth_config: |-
-          {
-            "openid_connect": {
-              "APPS": [
-                {
-                  "provider_id": "authentik",
-                  "name": "Authentik",
-                  "client_id": "{{ .oauth_id }}",
-                  "secret": "{{ .oauth_secret }}",
-                  "settings": {
-                    "server_url": "{{ .server_url }}"
-                  }
-                }
-              ],
-              "OAUTH_PKCE_ENABLED": "True"
-            }
-          }
+        id_ed25519: |-
+          {{ .private_key }}
+        id_ed25519.pub: |-
+          {{ .public_key }}
   data:
-    - secretKey: psql_pass
+    - secretKey: private_key
       sourceRef:
         storeRef:
           name: vaultwarden-login
           kind: ClusterSecretStore
       remoteRef:
-        key: 2a9deb39-ef22-433e-a1be-df1555625e22
-        property: fields[5].value
-    - secretKey: oauth_id
-      sourceRef:
-        storeRef:
-          name: vaultwarden-login
-          kind: ClusterSecretStore
-      remoteRef:
-        key: 07d4efd9-597c-4a4c-a78d-13bfc43e6055
+        key: f5591dfd-a0ab-4101-a2d7-e06380d3dcc9
         property: fields[0].value
-    - secretKey: oauth_secret
+    - secretKey: public_key
       sourceRef:
         storeRef:
           name: vaultwarden-login
           kind: ClusterSecretStore
       remoteRef:
-        key: 07d4efd9-597c-4a4c-a78d-13bfc43e6055
+        key: f5591dfd-a0ab-4101-a2d7-e06380d3dcc9
         property: fields[1].value
-    - secretKey: server_url
-      sourceRef:
-        storeRef:
-          name: vaultwarden-login
-          kind: ClusterSecretStore
-      remoteRef:
-        key: 07d4efd9-597c-4a4c-a78d-13bfc43e6055
-        property: fields[2].value
66  k8s/apps/rustdesk/external-secrets.yaml.backup  Normal file
@@ -0,0 +1,66 @@
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: postgres-creds
spec:
  target:
    name: postgres-creds
    deletionPolicy: Delete
    template:
      type: Opaque
      data:
        psql_user: paperless
        psql_pass: |-
          {{ .psql_pass }}
        oauth_config: |-
          {
            "openid_connect": {
              "APPS": [
                {
                  "provider_id": "authentik",
                  "name": "Authentik",
                  "client_id": "{{ .oauth_id }}",
                  "secret": "{{ .oauth_secret }}",
                  "settings": {
                    "server_url": "{{ .server_url }}"
                  }
                }
              ],
              "OAUTH_PKCE_ENABLED": "True"
            }
          }
  data:
    - secretKey: psql_pass
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: 2a9deb39-ef22-433e-a1be-df1555625e22
        property: fields[5].value
    - secretKey: oauth_id
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: 07d4efd9-597c-4a4c-a78d-13bfc43e6055
        property: fields[0].value
    - secretKey: oauth_secret
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: 07d4efd9-597c-4a4c-a78d-13bfc43e6055
        property: fields[1].value
    - secretKey: server_url
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: 07d4efd9-597c-4a4c-a78d-13bfc43e6055
        property: fields[2].value
@@ -3,14 +3,8 @@ kind: Kustomization

resources:
  - app.yaml
  #- external-secrets.yaml

helmCharts:
  - name: rustdesk-server-oss
    repo: https://schich.tel/helm-charts
    version: 0.2.2
    releaseName: rustdesk
    namespace: rustdesk
    valuesFile: values.yaml
    includeCRDs: true
  - deployment.yaml
  - service.yaml
  - external-secrets.yaml
  - network-policy.yaml
73  k8s/apps/rustdesk/network-policy.yaml  Normal file
@@ -0,0 +1,73 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: rustdesk-network-policy
spec:
  podSelector:
    matchLabels:
      app: rustdesk-hbbs
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow all incoming connections to RustDesk ports
    - from: []
      ports:
        - protocol: TCP
          port: 21115
        - protocol: TCP
          port: 21116
        - protocol: UDP
          port: 21116
  egress:
    # Allow DNS
    - to: []
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    # Allow communication between HBBS and HBBR
    - to:
        - podSelector:
            matchLabels:
              app: rustdesk-hbbr
      ports:
        - protocol: TCP
          port: 21117

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: rustdesk-hbbr-network-policy
spec:
  podSelector:
    matchLabels:
      app: rustdesk-hbbr
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow all incoming connections to relay port
    - from: []
      ports:
        - protocol: TCP
          port: 21117
    # Allow connections from HBBS
    - from:
        - podSelector:
            matchLabels:
              app: rustdesk-hbbs
      ports:
        - protocol: TCP
          port: 21117
  egress:
    # Allow DNS
    - to: []
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
57  k8s/apps/rustdesk/service.yaml  Normal file
@@ -0,0 +1,57 @@
---
apiVersion: v1
kind: Service
metadata:
  name: rustdesk-hbbs
  labels:
    app: rustdesk-hbbs
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: rustdesk-hbbs
  ports:
    - name: registry-tcp
      port: 21116
      targetPort: 21116
      protocol: TCP
    - name: nat
      port: 21115
      targetPort: 21115
      protocol: TCP

---
apiVersion: v1
kind: Service
metadata:
  name: rustdesk-hbbs-udp
  labels:
    app: rustdesk-hbbs
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: rustdesk-hbbs
  ports:
    - name: registry-udp
      port: 21116
      targetPort: 21116
      protocol: UDP

---
apiVersion: v1
kind: Service
metadata:
  name: rustdesk-hbbr
  labels:
    app: rustdesk-hbbr
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: rustdesk-hbbr
  ports:
    - name: relay
      port: 21117
      targetPort: 21117
      protocol: TCP
@@ -1,87 +0,0 @@
replicaCount: 1

image:
  repository: docker.io/rustdesk/rustdesk-server
  pullPolicy: IfNotPresent
  tag: 1

nodeSelector:
  kubernetes.io/hostname: master.tail2fe2d.ts.net

ingress:
  enabled: true
  className: "traefik"
  annotations:
    ingressClassName: traefik
    cert-manager.io/cluster-issuer: letsencrypt
    traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd
    acme.cert-manager.io/http01-edit-in-place: "true"
  hosts:
    - rd.hexor.cy
  tls:
    - secretName: rustdesk-tls
      hosts:
        - rd.hexor.cy

service:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  loadBalancerIP: null
  enableWebClientSupport: false
  hbbr:
    replayPort:
      port: 21117
      targetPort: 21117
    clientPort:
      port: 21119
      targetPort: 21119
  hbbs:
    natPort:
      port: 21115
      targetPort: 21115
    registryPort:
      port: 21116
      targetPort: 21116
    heartbeatPort:
      port: 21116
      targetPort: 21116
    webPort:
      port: 21118
      targetPort: 21118

resources:
  hbbrResource: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  hbbsResource: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

# Additional volumes on the output Deployment definition.
volume: {}
# - name: foo
#   secret:
#     secretName: mysecret
#     optional: false

# - name: foo
#   mountPath: "/etc/foo"
#   readOnly: true
@@ -1,3 +1,10 @@
resources:
  requests:
    memory: "256Mi"
    cpu: "100m"
  limits:
    memory: "1Gi"
    cpu: "500m"
nodeSelector:
  kubernetes.io/hostname: master.tail2fe2d.ts.net
@@ -1,5 +1,12 @@
env:
  TZ: Asia/Nicosia
resources:
  requests:
    memory: "256Mi"
    cpu: "100m"
  limits:
    memory: "1Gi"
    cpu: "500m"
nodeSelector:
  kubernetes.io/hostname: master.tail2fe2d.ts.net
@@ -1,5 +1,12 @@
env:
  TZ: Asia/Nicosia
resources:
  requests:
    memory: "512Mi"
    cpu: "200m"
  limits:
    memory: "2Gi"
    cpu: "1000m"
nodeSelector:
  kubernetes.io/hostname: master.tail2fe2d.ts.net
@@ -28,12 +28,12 @@ ingress:
  tlsSecret: pdf-hexor-cy-tls

resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi
    memory: "512Mi"
    cpu: "200m"
  limits:
    memory: "2Gi"
    cpu: "1000m"

probes:
  liveness:
@@ -4,6 +4,8 @@ kind: Kustomization

resources:
  - app.yaml
  - nginx-router.yaml
  - traefik-simple.yaml

helmCharts:
  - name: syncthing
@@ -21,3 +23,11 @@ helmCharts:
    namespace: syncthing
    valuesFile: syncthing-khv.yaml
    includeCRDs: true

  - name: syncthing
    repo: https://k8s-home-lab.github.io/helm-charts
    version: 4.0.0
    releaseName: syncthing-nas
    namespace: syncthing
    valuesFile: syncthing-nas.yaml
    includeCRDs: true
276  k8s/apps/syncthing/nginx-router.yaml  Normal file
@@ -0,0 +1,276 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: syncthing
data:
  default.conf: |
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 80;
        server_name _;

        # Landing page
        location = / {
            root /usr/share/nginx/html;
            try_files /index.html =404;
        }

        # NAS instance
        location /nas {
            rewrite ^/nas$ /nas/ permanent;
        }

        # NAS API endpoints
        location ~ ^/nas/(rest|meta)/ {
            rewrite ^/nas/(.*) /$1 break;
            proxy_pass http://syncthing-nas:8384;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Handle websockets
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 86400;
        }

        location /nas/ {
            proxy_pass http://syncthing-nas:8384/;

            # Important: tell syncthing that we're using /nas as base path
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Handle websockets
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 86400;

            # Sub filter to fix asset paths
            sub_filter 'href="/' 'href="/nas/';
            sub_filter 'src="/' 'src="/nas/';
            sub_filter 'url(/' 'url(/nas/';
            sub_filter '"/meta' '"/nas/meta';
            sub_filter '"/rest' '"/nas/rest';
            sub_filter '"/vendor' '"/nas/vendor';
            sub_filter '"/theme-assets' '"/nas/theme-assets';
            sub_filter '"/syncthing' '"/nas/syncthing';
            sub_filter_once off;
            sub_filter_types text/html text/css application/javascript;
        }

        # Master instance
        location /master {
            rewrite ^/master$ /master/ permanent;
        }

        # Master API endpoints
        location ~ ^/master/(rest|meta)/ {
            rewrite ^/master/(.*) /$1 break;
            proxy_pass http://syncthing-master:8384;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Handle websockets
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 86400;
        }

        location /master/ {
            proxy_pass http://syncthing-master:8384/;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            sub_filter 'href="/' 'href="/master/';
            sub_filter 'src="/' 'src="/master/';
            sub_filter 'url(/' 'url(/master/';
            sub_filter '"/meta' '"/master/meta';
            sub_filter '"/rest' '"/master/rest';
            sub_filter '"/vendor' '"/master/vendor';
            sub_filter '"/theme-assets' '"/master/theme-assets';
            sub_filter '"/syncthing' '"/master/syncthing';
            sub_filter_once off;
            sub_filter_types text/html text/css application/javascript;
        }

        # Iris instance
        location /iris {
            rewrite ^/iris$ /iris/ permanent;
        }

        # Iris API endpoints
        location ~ ^/iris/(rest|meta)/ {
            rewrite ^/iris/(.*) /$1 break;
            proxy_pass http://syncthing-khv:8384;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Handle websockets
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 86400;
        }

        location /iris/ {
            proxy_pass http://syncthing-khv:8384/;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            sub_filter 'href="/' 'href="/iris/';
            sub_filter 'src="/' 'src="/iris/';
            sub_filter 'url(/' 'url(/iris/';
            sub_filter '"/meta' '"/iris/meta';
            sub_filter '"/rest' '"/iris/rest';
            sub_filter '"/vendor' '"/iris/vendor';
            sub_filter '"/theme-assets' '"/iris/theme-assets';
            sub_filter '"/syncthing' '"/iris/syncthing';
            sub_filter_once off;
            sub_filter_types text/html text/css application/javascript;
        }

    }

  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
      <title>Syncthing Instances</title>
      <style>
        body {
          font-family: Arial, sans-serif;
          display: flex;
          justify-content: center;
          align-items: center;
          height: 100vh;
          margin: 0;
          background-color: #f0f0f0;
        }
        .container {
          text-align: center;
          background: white;
          padding: 40px;
          border-radius: 10px;
          box-shadow: 0 2px 10px rgba(0,0,0,0.1);
        }
        h1 {
          color: #333;
          margin-bottom: 30px;
        }
        .links {
          display: flex;
          flex-direction: column;
          gap: 15px;
        }
        a {
          display: inline-block;
          padding: 15px 30px;
          background-color: #0078e7;
          color: white;
          text-decoration: none;
          border-radius: 5px;
          transition: background-color 0.3s;
        }
        a:hover {
          background-color: #0056b3;
        }
      </style>
    </head>
    <body>
      <div class="container">
        <h1>Syncthing Instances</h1>
        <div class="links">
          <a href="/nas/">NAS Instance</a>
          <a href="/master/">Master Instance</a>
          <a href="/iris/">Iris Instance</a>
        </div>
      </div>
    </body>
    </html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: syncthing-router
  namespace: syncthing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: syncthing-router
  template:
    metadata:
      labels:
        app: syncthing-router
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: config
              mountPath: /etc/nginx/conf.d
            - name: html
              mountPath: /usr/share/nginx/html
      volumes:
        - name: config
          configMap:
            name: nginx-config
            items:
              - key: default.conf
                path: default.conf
        - name: html
          configMap:
            name: nginx-config
            items:
              - key: index.html
                path: index.html
---
apiVersion: v1
kind: Service
metadata:
  name: syncthing-router
  namespace: syncthing
spec:
  selector:
    app: syncthing-router
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
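The `sub_filter` rules above are plain literal substitutions: nginx rewrites root-relative asset paths in the proxied Syncthing HTML so they resolve under the instance's path prefix. The same rewrite can be sketched standalone with sed (sed here only stands in for nginx's in-flight substitution; the sample HTML is illustrative):

```shell
# What the /nas/ sub_filter rules do to a proxied page, simulated with sed.
html='<link href="/theme-assets/a.css"><script src="/syncthing/app.js"></script>'
rewritten=$(printf '%s' "$html" \
  | sed -e 's|href="/|href="/nas/|g' -e 's|src="/|src="/nas/|g')
printf '%s\n' "$rewritten"
# -> <link href="/nas/theme-assets/a.css"><script src="/nas/syncthing/app.js"></script>
```

`sub_filter_once off` makes nginx apply each rule to every occurrence (like sed's `g` flag), and `sub_filter_types` extends the rewriting beyond the default `text/html` to CSS and JavaScript responses.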
@@ -4,13 +4,8 @@ persistence:
  config:
    enabled: true
    type: hostPath
    hostPath: "/k8s/syncthing"
    hostPath: "/k8s/Syncthing"
    mountPath: "/var/syncthing"
  storage:
    enabled: true
    type: hostPath
    hostPath: "/home/share"
    mountPath: "/storage"
nodeSelector:
  kubernetes.io/hostname: iris.khv
service:
@@ -20,23 +15,24 @@ service:
      port: 8384
  listen:
    enabled: true
    type: LoadBalancer
    type: NodePort
    externalTrafficPolicy: Local
    ports:
      listen:
        enabled: true
        port: 30023
        port: 22000
        protocol: TCP
        targetPort: 22000
    selector:
      app.kubernetes.io/name: syncthing
  discovery:
    enabled: true
    type: NodePort
    externalTrafficPolicy: Cluster
    externalTrafficPolicy: Local
    ports:
      discovery:
        enabled: true
        port: 21027
        protocol: UDP
        targetPort: 21027

        port: 21027
        protocol: UDP
        targetPort: 21027
@@ -5,13 +5,13 @@ persistence:
  config:
    enabled: true
    type: hostPath
    hostPath: "/k8s/syncthing"
    hostPath: "/k8s/syncthing-config"
    mountPath: "/var/syncthing"
  storage:
    enabled: true
    type: hostPath
    hostPath: "/k8s/"
    mountPath: "/storage"
    hostPath: "/k8s/Syncthing-repos"
    mountPath: "/Syncthing-repos"
nodeSelector:
  kubernetes.io/hostname: master.tail2fe2d.ts.net
service:
@@ -21,23 +21,24 @@ service:
      port: 8384
  listen:
    enabled: true
    type: LoadBalancer
    type: NodePort
    externalTrafficPolicy: Local
    ports:
      listen:
        enabled: true
        port: 30023
        port: 22000
        protocol: TCP
        targetPort: 22000
    selector:
      app.kubernetes.io/name: syncthing
  discovery:
    enabled: true
    type: NodePort
    externalTrafficPolicy: Cluster
    externalTrafficPolicy: Local
    ports:
      discovery:
        enabled: true
        port: 21027
        protocol: UDP
        targetPort: 21027

        port: 21027
        protocol: UDP
        targetPort: 21027
44  k8s/apps/syncthing/syncthing-nas.yaml  Normal file
@@ -0,0 +1,44 @@
image:
  tag: latest
persistence:
  config:
    enabled: true
    type: hostPath
    hostPath: "/mnt/storage/Storage/syncthing-config"
    mountPath: "/var/syncthing"
  storage:
    enabled: true
    type: hostPath
    hostPath: "/mnt/storage/Storage/Syncthing-repos"
    mountPath: "/Syncthing-repos"
nodeSelector:
  kubernetes.io/hostname: nas.homenet
service:
  main:
    ports:
      http:
        port: 8384
  listen:
    enabled: true
    type: NodePort
    externalTrafficPolicy: Local
    ports:
      listen:
        enabled: true
        port: 22000
        protocol: TCP
        targetPort: 22000
  discovery:
    enabled: true
    type: NodePort
    externalTrafficPolicy: Local
    ports:
      discovery:
        enabled: true
        port: 21027
        protocol: UDP
        targetPort: 21027
        port: 21027
        protocol: UDP
        targetPort: 21027
36  k8s/apps/syncthing/traefik-simple.yaml  Normal file
@@ -0,0 +1,36 @@
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: syncthing-ingressroute
  namespace: syncthing
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`ss.hexor.cy`)
      kind: Rule
      services:
        - name: syncthing-router
          port: 80
      middlewares:
        - name: authentik-forward-auth
          namespace: syncthing
  tls:
    secretName: syncthing-tls
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: authentik-forward-auth
  namespace: syncthing
spec:
  forwardAuth:
    address: http://authentik-server.authentik.svc.cluster.local/outpost.goauthentik.io/auth/traefik
    trustForwardHeader: true
    authResponseHeaders:
      - X-authentik-username
      - X-authentik-groups
      - X-authentik-email
      - X-authentik-name
      - X-authentik-uid
@@ -31,6 +31,13 @@ spec:
        - name: vaultwarden
          image: 'vaultwarden/server:latest'
          imagePullPolicy: Always
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          env:
            - name: DOMAIN
              value: https://vw.hexor.cy
25  k8s/apps/vpn/config.yaml  Normal file
@@ -0,0 +1,25 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: outfleet-rs-config
data:
  config.toml: |-
    [database]
    url = "postgres://outfleet_rs:FMj#bA0XW14Pd2@psql.psql.svc:5432/outfleet_rs"

    [web]
    host = "0.0.0.0"
    port = 8080
    base_url = "https://vpn.hexor.cy"

    [telegram]
    enabled = false
    admin_chat_ids = []
    allowed_users = []

    [xray]
    config_path = "./templates"

    [log]
    level = "debug"
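One caveat on the `[database]` URL: `#` is the fragment delimiter in RFC 3986 URIs, so a strict connection-URI parser can truncate the password at that character; libpq-style URIs expect such characters percent-encoded as `%23`. A small sketch of the encoding (with an illustrative placeholder password, not the real credential):

```shell
# '#' would start a URI fragment, cutting the password short for strict
# parsers; percent-encoding it keeps the whole credential in the userinfo.
# Placeholder password for illustration only.
raw='FMj#example'
encoded=$(printf '%s' "$raw" | sed 's/#/%23/g')
printf 'postgres://outfleet_rs:%s@psql.psql.svc:5432/outfleet_rs\n' "$encoded"
# -> postgres://outfleet_rs:FMj%23example@psql.psql.svc:5432/outfleet_rs
```

Whether this bites depends on how outfleet's database driver parses the URL; if the current form works, its parser is being lenient.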
66  k8s/apps/vpn/deployment.yaml  Normal file
@@ -0,0 +1,66 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: outfleet-rs
  labels:
    app: outfleet-rs
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  selector:
    matchLabels:
      app: outfleet-rs
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: outfleet-rs
    spec:
      hostname: outfleet-rs
      nodeSelector:
        kubernetes.io/hostname: master.tail2fe2d.ts.net
      containers:
        - name: outfleet-rs
          image: 'ultradesu/outfleet:rs-0.2'
          imagePullPolicy: Always
          command: ["/bin/sh"]
          args:
            - "-c"
            - |
              set -x
              /app/xray-admin --host 0.0.0.0 --port 8080
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: RUST_LOG
              value: "info"
          volumeMounts:
            - name: outfleet-config
              mountPath: /app/config.toml  # <-- target path inside container
              subPath: config.toml         # <-- use the specific key as a file
              readOnly: true
      volumes:
        - name: outfleet-config
          configMap:
            name: outfleet-rs-config
            items:
              - key: config.toml
                path: config.toml

---
apiVersion: v1
kind: Service
metadata:
  name: outfleet-rs
spec:
  selector:
    app: outfleet-rs
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
@@ -6,4 +6,7 @@ resources:
  - ./external-secrets.yaml
  - ./outfleet.yaml
  - ./shadowsocks.yaml
  - ./deployment.yaml
  - ./config.yaml
  - ./xray.yaml
@@ -30,8 +30,11 @@ spec:
          args:
            - "-c"
            - |
              python ./manage.py makemigrations vpn
              set -x
              #python ./manage.py makemigrations
              #python ./manage.py makemigrations vpn
              python ./manage.py migrate
              python ./manage.py migrate vpn
              python ./manage.py create_admin
              python ./manage.py runserver 0.0.0.0:8000
          envFrom:
209  k8s/apps/vpn/xray.yaml  Normal file
@@ -0,0 +1,209 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: xray-config-template
data:
  config.json.template: |
    {
      "log": {
        "loglevel": "warning"
      },
      "api": {
        "tag": "api",
        "listen": "TAILSCALE_IP:10086",
        "services": [
          "HandlerService",
          "StatsService",
          "LoggerService",
          "RoutingService",
          "ReflectionService"
        ]
      },
      "stats": {},
      "policy": {
        "system": {
          "statsInboundDownlink": true,
          "statsInboundUplink": true,
          "statsOutboundDownlink": true,
          "statsOutboundUplink": true
        }
      },
      "inbounds": [],
      "outbounds": [
        {
          "tag": "direct",
          "protocol": "freedom",
          "settings": {}
        }
      ],
      "routing": {
        "rules": []
      }
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: xray-init-script
data:
  init.sh: |
    #!/bin/sh
    set -e

    echo "Starting Xray configuration setup..."

    # Find xray binary location
    XRAY_BIN=""
    for path in /usr/bin/xray /usr/local/bin/xray /bin/xray /opt/xray/xray; do
      if [ -x "$path" ]; then
        XRAY_BIN="$path"
        echo "Found Xray binary at: $XRAY_BIN"
        break
      fi
    done

    if [ -z "$XRAY_BIN" ]; then
      echo "Error: Xray binary not found"
      echo "Available files in common locations:"
      ls -la /usr/bin/xray* 2>/dev/null || echo "No xray in /usr/bin/"
      ls -la /usr/local/bin/xray* 2>/dev/null || echo "No xray in /usr/local/bin/"
      ls -la /bin/xray* 2>/dev/null || echo "No xray in /bin/"
      exit 1
    fi

    # Get Tailscale IP address
    TAILSCALE_IP=""

    # Try different ways to get Tailscale IP
    if command -v ip >/dev/null 2>&1; then
      TAILSCALE_IP=$(ip addr show tailscale0 2>/dev/null | grep 'inet ' | awk '{print $2}' | cut -d'/' -f1 | head -n1)
    fi

    # Fallback: try to find any interface with 100.x.x.x IP (typical Tailscale range)
    if [ -z "$TAILSCALE_IP" ]; then
      TAILSCALE_IP=$(ip route get 8.8.8.8 2>/dev/null | grep -o 'src [0-9\.]*' | grep '100\.' | awk '{print $2}' | head -n1)
    fi

    # Another fallback: check all interfaces for 100.x.x.x
    if [ -z "$TAILSCALE_IP" ]; then
      TAILSCALE_IP=$(ip addr show 2>/dev/null | grep -o 'inet 100\.[0-9\.]*' | awk '{print $2}' | head -n1)
    fi

    # Final fallback: use localhost if no Tailscale IP found
    if [ -z "$TAILSCALE_IP" ]; then
      echo "Warning: Could not find Tailscale IP, using 127.0.0.1"
      TAILSCALE_IP="127.0.0.1"
    else
      echo "Found Tailscale IP: $TAILSCALE_IP"
    fi

    # Create config directory
    mkdir -p /usr/local/etc/xray

    # Replace TAILSCALE_IP placeholder in config template
    sed "s/TAILSCALE_IP/$TAILSCALE_IP/g" /config-template/config.json.template > /usr/local/etc/xray/config.json

    echo "Generated Xray config:"
    cat /usr/local/etc/xray/config.json

    # Increase file descriptor limits
    ulimit -n 65536 2>/dev/null || echo "Warning: Could not increase file descriptor limit"

    echo "Starting Xray with binary: $XRAY_BIN"
    exec "$XRAY_BIN" run -c /usr/local/etc/xray/config.json
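The detect-then-template core of init.sh can be exercised standalone. Here a canned `ip addr`-style sample stands in for the real command, so the last-resort 100.x.x.x pipeline and the sed substitution can be checked without a Tailscale interface:

```shell
# Canned `ip addr show` output; a real node would run the command instead.
sample='2: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP>
    inet 100.101.102.103/32 scope global tailscale0'

# Same last-resort pipeline as init.sh: first 100.x.x.x address wins.
TAILSCALE_IP=$(printf '%s\n' "$sample" | grep -o 'inet 100\.[0-9.]*' | awk '{print $2}' | head -n1)
[ -n "$TAILSCALE_IP" ] || TAILSCALE_IP="127.0.0.1"

# Same substitution init.sh applies to the config template.
template='"listen": "TAILSCALE_IP:10086"'
printf '%s\n' "$template" | sed "s/TAILSCALE_IP/$TAILSCALE_IP/g"
# -> "listen": "100.101.102.103:10086"
```

Binding the gRPC API to the Tailscale address (rather than 0.0.0.0) keeps it reachable only over the tailnet, which is why the script goes through three fallbacks before settling for loopback.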
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: xray-daemon
  labels:
    app: xray
spec:
  selector:
    matchLabels:
      app: xray
  template:
    metadata:
      labels:
        app: xray
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
        xray: "true"
      tolerations:
        - operator: Exists
          effect: NoSchedule
      containers:
        - name: xray
          image: teddysun/xray:latest
          command: ["/bin/sh"]
          args: ["/scripts/init.sh"]
          securityContext:
            privileged: true
            capabilities:
              add:
                - NET_ADMIN
                - NET_RAW
          volumeMounts:
            - name: config-template
              mountPath: /config-template
              readOnly: true
            - name: init-script
              mountPath: /scripts
              readOnly: true
            - name: xray-config
              mountPath: /usr/local/etc/xray
          ports:
            - containerPort: 10086
              protocol: TCP
              name: api
          livenessProbe:
            tcpSocket:
              port: 10086
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            tcpSocket:
              port: 10086
            initialDelaySeconds: 5
            periodSeconds: 5
          resources:
            limits:
              memory: "512Mi"
              cpu: "500m"
            requests:
              memory: "256Mi"
              cpu: "250m"
      volumes:
        - name: config-template
          configMap:
            name: xray-config-template
            defaultMode: 0644
        - name: init-script
          configMap:
            name: xray-init-script
            defaultMode: 0755
        - name: xray-config
          emptyDir: {}
      restartPolicy: Always

---
apiVersion: v1
kind: Service
metadata:
  name: xray-api-service
  labels:
    app: xray
spec:
  type: ClusterIP
  ports:
    - port: 10086
      targetPort: 10086
      protocol: TCP
      name: api
  selector:
    app: xray
@@ -13,9 +13,6 @@ spec:
    targetRevision: HEAD
    path: k8s/core/argocd
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true
@@ -14,8 +14,8 @@ spec:
      labels:
        app.kubernetes.io/part-of: argocd
      data:
-       id: "{{ .client_id | quote }}"
-       secret: "{{ .client_secret | quote }}"
+       id: "{{ .client_id }}"
+       secret: "{{ .client_secret }}"
  data:
    - secretKey: client_id
      sourceRef:
@@ -10,7 +10,7 @@ resources:
helmCharts:
  - name: argo-cd
    repo: https://argoproj.github.io/argo-helm
-   version: 7.8.26
+   version: 8.1.3
    releaseName: argocd
    namespace: argocd
    valuesFile: values.yaml
@@ -17,9 +17,10 @@ configs:
    server.insecure: "true"
  cm:
    create: true
    exec.enabled: true
    kustomize.buildOptions: --enable-helm
    application.instanceLabelKey: argocd.argoproj.io/instance
-   admin.enabled: true
+   admin.enabled: false
+   timeout.reconciliation: 60s
    oidc.config: |
      name: Authentik
@@ -32,7 +33,20 @@ configs:
    create: true
    policy.default: ""
    policy.csv: |
-     g, k8s_dashboard, role:admin
+     # Bound OIDC Group and internal role
+     g, Game Servers Managers, GameServersManagersRole
+     # Role permissions
+     p, GameServersManagersRole, applications, get, games/*, allow
+     p, GameServersManagersRole, applications, update, games/*, allow
+     p, GameServersManagersRole, applications, sync, games/*, allow
+     p, GameServersManagersRole, applications, override, games/*, allow
+     p, GameServersManagersRole, applications, action/*, games/*, allow
+     p, GameServersManagersRole, exec, create, games/*, allow
+     p, GameServersManagersRole, logs, get, games/*, allow
+     p, GameServersManagersRole, applications, delete, games/*, deny
+
+     # Admin policy
+     g, ArgoCD Admins, role:admin

  secret:
    createSecret: true
@@ -9,7 +9,7 @@ resources:
helmCharts:
  - name: authentik
    repo: https://charts.goauthentik.io
-   version: 2025.2.3
+   version: 2025.8.1
    releaseName: authentik
    namespace: authentik
    valuesFile: values.yaml
@@ -1,6 +1,6 @@
global:
  image:
-   tag: "2025.4.1"
+   tag: "2025.8.1"
  nodeSelector:
    kubernetes.io/hostname: master.tail2fe2d.ts.net
@@ -15,6 +15,14 @@ worker:
  envFrom:
    - secretRef:
        name: authentik-creds
+ volumes:
+   - name: dshm
+     emptyDir:
+       medium: Memory
+       sizeLimit: 512Mi
+ volumeMounts:
+   - name: dshm
+     mountPath: /dev/shm
server:
  envFrom:
    - secretRef:
@@ -30,15 +38,14 @@ server:
        - nas.hexor.cy        # TrueNAS Limassol
        - nc.hexor.cy         # NextCloud
        - of.hexor.cy         # Outfleet-v2
-       - master.hexor.cy     # k8s dashboard
+       - k8s.hexor.cy        # k8s dashboard
        - qbt.hexor.cy        # qBittorrent for Jellyfin
        - prom.hexor.cy       # Prometheus
        - ss.hexor.cy         # Syncthing UI
        - khm.hexor.cy        # Known Hosts keys Manager
        - backup.hexor.cy     # Kopia Backup UI
        - fm.hexor.cy         # Filemanager
        - hexound.hexor.cy    # Hexound
        - minecraft.hexor.cy  # Minecraft UI and server
        - pass.hexor.cy       # k8s-secret for openai
    tls:
      - secretName: idm-tls
        hosts:
@@ -39,6 +39,13 @@ spec:
      - name: bitwarden-cli
        image: ultradesu/bitwarden-client:2025.5.0
        imagePullPolicy: Always
+       resources:
+         requests:
+           memory: "128Mi"
+           cpu: "100m"
+         limits:
+           memory: "512Mi"
+           cpu: "500m"
        env:
          - name: BW_HOST
            valueFrom:
@@ -20,6 +20,13 @@ spec:
      - name: kubernetes-dashboard
        image: kubernetesui/dashboard:v2.7.0
        imagePullPolicy: Always
+       resources:
+         requests:
+           memory: "128Mi"
+           cpu: "100m"
+         limits:
+           memory: "512Mi"
+           cpu: "500m"
        ports:
          - containerPort: 8443
            protocol: TCP
@@ -87,6 +94,13 @@ spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
+         resources:
+           requests:
+             memory: "64Mi"
+             cpu: "50m"
+           limits:
+             memory: "256Mi"
+             cpu: "200m"
          ports:
            - containerPort: 8000
              protocol: TCP
@@ -13,9 +13,7 @@ spec:
    targetRevision: HEAD
    path: k8s/core/postgresql
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
@@ -40,6 +40,9 @@ spec:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
+       conversionStrategy: Default
+       decodingStrategy: None
+       metadataPolicy: None
        key: 832042b9-7edb-4f4c-9254-3c8884ba9733
        property: login.username
    - secretKey: password
@@ -48,6 +51,9 @@ spec:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
+       conversionStrategy: Default
+       decodingStrategy: None
+       metadataPolicy: None
        key: 832042b9-7edb-4f4c-9254-3c8884ba9733
        property: login.password
    - secretKey: client_id
@@ -56,6 +62,9 @@ spec:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
+       conversionStrategy: Default
+       decodingStrategy: None
+       metadataPolicy: None
        key: 832042b9-7edb-4f4c-9254-3c8884ba9733
        property: fields[0].value
    - secretKey: client_secret
@@ -64,6 +73,9 @@ spec:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
+       conversionStrategy: Default
+       decodingStrategy: None
+       metadataPolicy: None
        key: 832042b9-7edb-4f4c-9254-3c8884ba9733
        property: fields[1].value
    - secretKey: pgadmin_url
@@ -72,6 +84,9 @@ spec:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
+       conversionStrategy: Default
+       decodingStrategy: None
+       metadataPolicy: None
        key: 832042b9-7edb-4f4c-9254-3c8884ba9733
        property: fields[2].value
---
@@ -98,6 +113,10 @@ spec:
            {{ .grafana }}
          USER_khm: |-
            {{ .khm }}
+         USER_kanjai: |-
+           {{ .kanjai }}
+         USER_outfleet_rs: |-
+           {{ .outfleet_rs }}
  data:
    - secretKey: authentik
      sourceRef:
@@ -105,6 +124,9 @@ spec:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
+       conversionStrategy: Default
+       decodingStrategy: None
+       metadataPolicy: None
        key: 2a9deb39-ef22-433e-a1be-df1555625e22
        property: fields[0].value
    - secretKey: outfleet
@@ -113,6 +135,9 @@ spec:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
+       conversionStrategy: Default
+       decodingStrategy: None
+       metadataPolicy: None
        key: 2a9deb39-ef22-433e-a1be-df1555625e22
        property: fields[1].value
    - secretKey: grafana
@@ -121,6 +146,9 @@ spec:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
+       conversionStrategy: Default
+       decodingStrategy: None
+       metadataPolicy: None
        key: 2a9deb39-ef22-433e-a1be-df1555625e22
        property: fields[2].value
    - secretKey: khm
@@ -129,6 +157,9 @@ spec:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
+       conversionStrategy: Default
+       decodingStrategy: None
+       metadataPolicy: None
        key: 2a9deb39-ef22-433e-a1be-df1555625e22
        property: fields[3].value
    - secretKey: nextcloud
@@ -137,6 +168,9 @@ spec:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
+       conversionStrategy: Default
+       decodingStrategy: None
+       metadataPolicy: None
        key: 2a9deb39-ef22-433e-a1be-df1555625e22
        property: fields[4].value
    - secretKey: paperless
@@ -145,5 +179,30 @@ spec:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
+       conversionStrategy: Default
+       decodingStrategy: None
+       metadataPolicy: None
        key: 2a9deb39-ef22-433e-a1be-df1555625e22
        property: fields[5].value
+   - secretKey: kanjai
+     sourceRef:
+       storeRef:
+         name: vaultwarden-login
+         kind: ClusterSecretStore
+     remoteRef:
+       conversionStrategy: Default
+       decodingStrategy: None
+       metadataPolicy: None
+       key: 2a9deb39-ef22-433e-a1be-df1555625e22
+       property: fields[7].value
+   - secretKey: outfleet_rs
+     sourceRef:
+       storeRef:
+         name: vaultwarden-login
+         kind: ClusterSecretStore
+     remoteRef:
+       conversionStrategy: Default
+       decodingStrategy: None
+       metadataPolicy: None
+       key: 2a9deb39-ef22-433e-a1be-df1555625e22
+       property: fields[8].value
@@ -13,7 +13,7 @@ resources:
helmCharts:
  - name: pgadmin4
    repo: https://helm.runix.net
-   version: 1.37.0
+   version: 1.50.0
    releaseName: pgmanager
    namespace: psql
    valuesFile: pgadmin4-values.yaml
@@ -1,5 +1,5 @@
image:
- tag: "9.3"
+ tag: "9.8"
  pullPolicy: Always
env:
  email: "postgres@hexor.cy"
@@ -10,7 +10,7 @@ admin:
grafana.ini:
  auth:
    signout_redirect_url: https://idm.hexor.cy/application/o/grafana/end-session/
-   oauth_auto_login: true
+   # oauth_auto_login: true
  auth.generic_oauth:
    name: authentik
    enabled: true
@@ -14,16 +14,10 @@ prometheus:
        labels: {instance: jp}
      - targets: ['100.117.24.104:9098']
        labels: {instance: bg}
      - targets: ['100.117.24.104:9099']
        labels: {instance: fi}

-   - job_name: term_humid_sensors
+   - job_name: cs_16_server
      static_configs:
        - targets: ['100.117.24.104:7536']
    - job_name: win_exporter
      static_configs:
        - targets: ['10.0.5.100:9182']
          labels: {instance: win.homenet}
        - targets: ['prom-a2s-exporter.counter-strike.svc:9841']
          labels: {instance: master}

  retention: "99999d"
  retentionSize: "0"
@@ -10,7 +10,7 @@ spec:
    kind: Plan
    plural: plans
    singular: plan
- preserveUnknownFields: false
+ #preserveUnknownFields: false
  scope: Namespaced
  versions:
    - additionalPrinterColumns:
@@ -16,7 +16,7 @@ spec:
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
-   version: v1.33.1+k3s1
+   version: v1.34.1+k3s1
---
# Agent plan
apiVersion: upgrade.cattle.io/v1
@@ -39,5 +39,5 @@ spec:
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
-   version: v1.33.1+k3s1
+   version: v1.34.1+k3s1
21 k8s/games/beam-ng/app.yaml Normal file
@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: beam-ng
  namespace: argocd
spec:
  project: games
  destination:
    namespace: beam-ng
    server: https://kubernetes.default.svc
  source:
    repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
    targetRevision: HEAD
    path: k8s/games/beam-ng
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true
46 k8s/games/beam-ng/deployments.yaml Normal file
@@ -0,0 +1,46 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: beam-ng
  labels:
    app: beam-ng
spec:
  selector:
    matchLabels:
      app: beam-ng
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: beam-ng
    spec:
      nodeSelector:
        kubernetes.io/hostname: master.tail2fe2d.ts.net
      containers:
        - name: beam-ng
          image: 'rouhim/beammp-server'
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "4Gi"
              cpu: "2000m"
          env:
            - name: BEAMMP_NAME
              value: 'Anal Hexor'
            - name: BEAMMP_AUTH_KEY
              value: '1488_228'
          ports:
            - name: udp
              containerPort: 30814
              protocol: UDP
            - containerPort: 30814
              name: tcp
              protocol: TCP
8 k8s/games/beam-ng/kustomization.yaml Normal file
@@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - app.yaml
  - deployments.yaml
  - services.yaml
20 k8s/games/beam-ng/services.yaml Normal file
@@ -0,0 +1,20 @@
---
apiVersion: v1
kind: Service
metadata:
  name: beam-ng
spec:
  externalIPs:
    - 138.201.61.182
  selector:
    app: beam-ng
  ports:
    - protocol: TCP
      name: tcp
      port: 30814
      targetPort: 30814
    - protocol: UDP
      name: udp
      port: 30814
      targetPort: 30814
21 k8s/games/counter-strike-16/app.yaml Normal file
@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: counter-strike-16
  namespace: argocd
spec:
  project: games
  destination:
    namespace: counter-strike
    server: https://kubernetes.default.svc
  source:
    repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
    targetRevision: HEAD
    path: k8s/games/counter-strike-16
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true
81 k8s/games/counter-strike-16/deployments.yaml Normal file
@@ -0,0 +1,81 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cs16-server-config
data:
  MAXPLAYERS: "10"
  START_MAP: "fy_pool_day"
  SERVER_NAME: "GEYMERSKIY SOYUZ"
  START_MONEY: "1000"
  BUY_TIME: "0.25"
  FRIENDLY_FIRE: "1"
  SERVER_PASSWORD: ""
  RCON_PASSWORD: ""
  ADMIN_STEAM: "0:0:27591350"
  RESTART_ON_FAIL: "true"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: counter-strike-16
  labels:
    app: counter-strike-16
spec:
  selector:
    matchLabels:
      app: counter-strike-16
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: counter-strike-16
    spec:
      nodeSelector:
        #kubernetes.io/hostname: home.homenet
        kubernetes.io/hostname: master.tail2fe2d.ts.net
      terminationGracePeriodSeconds: 10
      containers:
        - name: prom-a2s-exporter
          image: armsnyder/a2s-exporter:latest
          resources:
            requests:
              memory: "32Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "200m"
          ports:
            - containerPort: 9841
              protocol: TCP
          args:
            - --address
            - cs.hexor.cy:30015
        - name: counter-strike-16
          image: 'kingk0der/counter-strike-1.6:latest'
          resources:
            requests:
              memory: "512Mi"
              cpu: "200m"
            limits:
              memory: "2Gi"
              cpu: "1000m"
          args:
            - +log
            - -port
            - "30015"
          envFrom:
            - configMapRef:
                name: cs16-server-config
          ports:
            - containerPort: 26900
              protocol: UDP
            - containerPort: 27020
              protocol: UDP
            - containerPort: 30015
              protocol: UDP
            - containerPort: 30015
              protocol: TCP
8 k8s/games/counter-strike-16/kustomization.yaml Normal file
@@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - app.yaml
  - deployments.yaml
  - services.yaml
  - restart-job.yaml
55 k8s/games/counter-strike-16/restart-job.yaml Normal file
@@ -0,0 +1,55 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cs16-restart-sa

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cs16-restart-role
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "patch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cs16-restart-rb
subjects:
  - kind: ServiceAccount
    name: cs16-restart-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cs16-restart-role

---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cs16-daily-restart
spec:
  schedule: "0 4 * * *"  # every day at 04:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cs16-restart-sa
          restartPolicy: OnFailure
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest
              env:
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
              command:
                - /bin/sh
                - -c
                - |
                  kubectl -n "$POD_NAMESPACE" rollout restart deployment/counter-strike-16
47 k8s/games/counter-strike-16/services.yaml Normal file
@@ -0,0 +1,47 @@
---
apiVersion: v1
kind: Service
metadata:
  name: counter-strike-16-lb
spec:
  type: LoadBalancer
  selector:
    app: counter-strike-16
  ports:
    - name: game-udp
      port: 30015
      targetPort: 30015
      protocol: UDP
      nodePort: 30015
    - name: game-tcp
      port: 30015
      targetPort: 30015
      protocol: TCP
      nodePort: 30015
    - name: rcon
      port: 27020
      targetPort: 27020
      protocol: UDP
      nodePort: 30020
    - name: hltv
      port: 26900
      targetPort: 26900
      protocol: UDP
      nodePort: 30900

---
apiVersion: v1
kind: Service
metadata:
  name: prom-a2s-exporter
  labels:
    app: counter-strike-16
spec:
  selector:
    app: counter-strike-16
  ports:
    - name: metrics
      port: 9841
      targetPort: 9841
      protocol: TCP
  type: ClusterIP
@@ -6,35 +6,64 @@ metadata:
  namespace: minecraft
data:
  nginx.conf: |
-   user nginx;
-   worker_processes 1;
+   user nginx;
+   worker_processes 1;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;

    events {
-       worker_connections 1024;
+       worker_connections 1024;
    }

    http {
-       server {
-           listen 80;
+       include /etc/nginx/mime.types;
+       default_type application/octet-stream;

-           location / {
-               proxy_pass http://localhost:8123;
-               sub_filter 'Minecraft Dynamic Map' "Hexor's MC server";
-               sub_filter "</body>" '<p style="background-color: #CEC6CB; color: black; padding: 10px 10px; text-align: center; font-size: large; text-decoration: none; display: inline-block; border-radius: 4px; position: absolute; top: 10px; left: 150px;">Get <a href="https://github.com/PrismLauncher/PrismLauncher/releases/tag/8.4" >Prism Launcher</a> and <a href="/clients/1.12.2.zip" >client.zip</a> for this server. Server address <b>minecraft.hexor.cy:30565</b></p></body>';
-               sub_filter_once off;
+       server {
+           listen 80;

+           # Custom 502 error page with auto-refresh
+           error_page 502 /502.html;
+           location = /502.html {
+               internal;
+               return 200 '<!DOCTYPE html><html><head><meta charset="utf-8"><title>Server Loading</title><style>body{font-family:Arial,sans-serif;text-align:center;margin-top:100px;background:#f0f0f0}h1{color:#333}p{color:#666;font-size:18px}</style></head><body><h1>Server is loading probably...</h1><p>Please wait a moment and try refreshing the page.</p><script>setTimeout(function(){window.location.reload();}, 10000);</script></body></html>';
+               add_header Content-Type text/html;
+           }

+           # Main location - proxy to Minecraft Dynmap
+           location / {
+               # Proxy configuration for Dynmap server
+               proxy_pass http://localhost:8123;
+               proxy_set_header Host $host;
+               proxy_set_header X-Real-IP $remote_addr;
+               proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+               proxy_set_header X-Forwarded-Proto $scheme;

+               # Inject user authentication meta tag into HTML head
+               sub_filter '<head>' '<head><meta name="remote-user" content="$http_x_authentik_username">';

+               # Replace default Dynmap title with custom server name
+               sub_filter 'Minecraft Dynamic Map' "Hexor's MC server";

+               # Inject all custom content before closing body tag (single replacement)
+               sub_filter "</body>" '<script>function getUsername(){var headers=document.querySelectorAll("meta");for(var i=0;i<headers.length;i++){if(headers[i].getAttribute("name")==="remote-user"){return headers[i].getAttribute("content");}}var jwt=document.cookie.split("; ").find(row=>row.startsWith("authentik_session="));if(jwt){try{var token=jwt.split("=")[1];var payload=JSON.parse(atob(token.split(".")[1]));return payload.sub||payload.username||"web-user";}catch(e){}}return "web-user";}var username=getUsername();console.log("Username found:", username);if(username && username!=="web-user" && window.location.search.indexOf("playername=")===-1){var currentUrl=new URL(window.location.href);currentUrl.searchParams.set("playername",username);console.log("Redirecting to:", currentUrl.href);window.location.href=currentUrl.href;}document.addEventListener("DOMContentLoaded",function(){var userBlock=document.createElement("div");userBlock.style.cssText="background-color: #CEC6CB; color: black; padding: 8px; text-align: center; font-size: medium; border-radius: 4px; position: absolute; top: 10px; right: 150px; max-width: 200px;";userBlock.innerHTML="Logged in as: <b>"+username+"</b>";document.body.appendChild(userBlock);});</script><p style="background-color: #CEC6CB; color: black; padding: 10px 10px; text-align: center; font-size: large; text-decoration: none; display: inline-block; border-radius: 4px; position: absolute; top: 10px; left: 150px;">GEYMERSKIY SOYUZ Server <br>Get <a href="https://github.com/PrismLauncher/PrismLauncher/releases/tag/8.4" >Prism Launcher</a> and <a href="/clients/1.12.2.zip" >client.zip</a> for this server. Server address <b>minecraft.hexor.cy:30565</b><br><br><a href="#" onclick="showInstallModal(); return false;" style="color: black; text-decoration: underline;">Windows Install Script</a></p><div id="installModal" style="display: none; position: fixed; z-index: 1000; left: 0; top: 0; width: 100%; height: 100%; background-color: rgba(0,0,0,0.5);"><div style="background-color: #CEC6CB; margin: 15% auto; padding: 10px; border-radius: 4px; width: 70%; max-width: 500px; text-align: center; color: black; font-size: large;"><h3 style="margin-top: 0; color: black;">Windows Installation</h3><p style="color: black;">Copy and paste this command into PowerShell:</p><textarea id="scriptCommand" readonly style="width: 90%; height: 60px; font-family: monospace; padding: 8px; border: 1px solid #888; border-radius: 4px; resize: none; background-color: white; color: black;">iwr -useb https://minecraft.hexor.cy/clients/win-install.ps1 | iex</textarea><br><br><button id="copyButton" onclick="copyToClipboard()" style="background-color: #CEC6CB; color: black; padding: 10px 15px; border: 1px solid #888; border-radius: 4px; cursor: pointer; margin-right: 10px; font-size: large; text-decoration: none;">Copy</button><button onclick="closeInstallModal()" style="background-color: #CEC6CB; color: black; padding: 10px 15px; border: 1px solid #888; border-radius: 4px; cursor: pointer; font-size: large; text-decoration: none;">Close</button></div></div><script>function showInstallModal() { document.getElementById("installModal").style.display = "block"; } function closeInstallModal() { document.getElementById("installModal").style.display = "none"; } function copyToClipboard() { var textarea = document.getElementById("scriptCommand"); textarea.select(); textarea.setSelectionRange(0, 99999); if (document.execCommand("copy")) { var button = document.getElementById("copyButton"); button.style.borderColor = "#4CAF50"; setTimeout(function() { button.style.borderColor = "#888"; }, 2000); } } window.onclick = function(event) { var modal = document.getElementById("installModal"); if (event.target == modal) { closeInstallModal(); } }</script></body>';

+               # Apply sub_filter replacements globally (not just once)
+               sub_filter_once off;
+           }

+           # Static file serving for client downloads
+           location /clients/ {
+               alias /mc/clients/;
+               sendfile on;                                  # Enable efficient file serving
+               add_header Content-Disposition "attachment";  # Force download
+               autoindex on;                                 # Enable directory listing
+               gzip off;                                     # Disable compression for downloads
+               chunked_transfer_encoding off;                # Disable chunked encoding
+           }
+       }
-       location /clients/ {
-           types { }
-           sendfile on;
-           tcp_nopush on;
-           tcp_nodelay on;
-           keepalive_timeout 65;
-           sendfile_max_chunk 1m;
-           default_type application/zip;
-           add_header Content-Disposition "attachment";
-           alias /mc/clients/;
-           autoindex on;
-       }
-       }
    }

---
apiVersion: apps/v1
kind: Deployment
@@ -68,22 +97,12 @@ spec:

      terminationGracePeriodSeconds: 10
      containers:
-       - name: nginx
-         image: nginx:latest
-         ports:
-           - containerPort: 80
-         volumeMounts:
-           - name: nginx-config
-             mountPath: /etc/nginx/nginx.conf
-             subPath: nginx.conf
-           - name: storage
-             mountPath: /mc
        - name: minecraft
          image: 'openjdk:8-jdk-alpine'
          command: ["java"]
          args:
-           - -Xms12G
-           - -Xmx12G
+           - -Xms4G
+           - -Xmx4G
            - -XX:+UseG1GC
            - -XX:+ParallelRefProcEnabled
            - -XX:MaxGCPauseMillis=200
@@ -107,12 +126,12 @@ spec:
            - nogui
          workingDir: /mc/
          resources:
-           limits:
-             memory: 15Gi
-             #cpu: 1
-           requests:
-             memory: 10Gi
-             #cpu: 100m
+           requests:
+             memory: "8Gi"
+             cpu: "2000m"
+           limits:
+             memory: "12Gi"
+             cpu: "4000m"
          ports:
            - name: game
              containerPort: 25565
@@ -120,9 +139,29 @@ spec:
            - name: dynmap
              containerPort: 8123
              protocol: TCP
+           - name: webstatus-mod
+             containerPort: 8080
+             protocol: TCP
          volumeMounts:
            - name: storage
              mountPath: /mc
+       - name: nginx
+         image: nginx:latest
+         resources:
+           requests:
+             memory: "64Mi"
+             cpu: "50m"
+           limits:
+             memory: "256Mi"
+             cpu: "200m"
+         ports:
+           - containerPort: 80
+         volumeMounts:
+           - name: nginx-config
+             mountPath: /etc/nginx/nginx.conf
+             subPath: nginx.conf
+           - name: storage
+             mountPath: /mc

---
apiVersion: v1
27 k8s/games/minecraft/ingress.yaml Normal file
@@ -0,0 +1,27 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dynmap-tls-ingress
  annotations:
    ingressClassName: traefik
    cert-manager.io/cluster-issuer: letsencrypt
    traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd
    acme.cert-manager.io/http01-edit-in-place: "true"
spec:
  rules:
    - host: minecraft.hexor.cy
      http:
        paths:
          - path: /clients/
            pathType: Prefix
            backend:
              service:
                name: minecraft
                port:
                  number: 80
  tls:
    - secretName: dynmap-tls
      hosts:
        - minecraft.hexor.cy
@@ -5,4 +5,4 @@ resources:
  - app.yaml
  - deployments.yaml
  - services.yaml
-
+ #- ingress.yaml
@@ -3,7 +3,6 @@ apiVersion: v1
kind: Service
metadata:
  name: minecraft-dynmap
- namespace: minecraft
spec:
  selector:
    app: minecraft
@@ -12,12 +11,24 @@ spec:
      port: 80
      targetPort: 80

+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: minecraft-webstatus-mod
+spec:
+  selector:
+    app: minecraft
+  ports:
+    - protocol: TCP
+      port: 80
+      targetPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: minecraft-game
  namespace: minecraft
spec:
  type: LoadBalancer
  ports:
16 terraform/authentik/.claude/settings.local.json Normal file
@@ -0,0 +1,16 @@
{
  "permissions": {
    "allow": [
      "WebSearch",
      "WebFetch(domain:registry.terraform.io)",
      "Bash(C:\\Users\\ab\\AppData\\Local\\Microsoft\\WinGet\\Packages\\Hashicorp.Terraform_Microsoft.Winget.Source_8wekyb3d8bbwe\\terraform.exe apply -auto-approve)",
      "Bash(\"C:\\Users\\ab\\AppData\\Local\\Microsoft\\WinGet\\Packages\\Hashicorp.Terraform_Microsoft.Winget.Source_8wekyb3d8bbwe\\terraform.exe\" apply -auto-approve)",
      "Bash(\"C:\\Users\\ab\\AppData\\Local\\Microsoft\\WinGet\\Packages\\Hashicorp.Terraform_Microsoft.Winget.Source_8wekyb3d8bbwe\\terraform.exe\" apply -auto-approve -lock=false)",
      "Bash(\"C:\\Users\\ab\\AppData\\Local\\Microsoft\\WinGet\\Packages\\Hashicorp.Terraform_Microsoft.Winget.Source_8wekyb3d8bbwe\\terraform.exe\" plan -lock=false)",
      "Bash(\"C:\\Users\\ab\\AppData\\Local\\Microsoft\\WinGet\\Packages\\Hashicorp.Terraform_Microsoft.Winget.Source_8wekyb3d8bbwe\\terraform.exe\" apply -replace=\"authentik_outpost.outposts[\"\"kubernetes-outpost\"\"]\" -auto-approve -lock=false)",
      "Bash(terraform plan:*)"
    ],
    "deny": [],
    "ask": []
  }
}
44
terraform/authentik/.terraform.lock.hcl
generated
Normal file
44
terraform/authentik/.terraform.lock.hcl
generated
Normal file
@@ -0,0 +1,44 @@
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.

provider "registry.terraform.io/goauthentik/authentik" {
  version     = "2025.8.1"
  constraints = ">= 2023.10.0, 2025.8.1"
  hashes = [
    "h1:R3h8ADB0Kkv/aoY0AaHkBiX2/P4+GnW8sSgkN30kJfQ=",
    "zh:0c3f1083fd48f20ed06959401ff1459fbb5d454d81c8175b5b6d321b308c0be3",
    "zh:21c6d93f8d26e688da38a660d121b5624e3597c426c671289f31a17a9771abbf",
    "zh:301b5763ffc4c5fe47aa7e851ce0b19f71bab4fae5c81003ad81b38775e85f78",
    "zh:4f7ee6473f6a687340538ddac0ec4a0453664186b15fdb0bb2fb5fcd8fb3ad30",
    "zh:7927f4f634c9e072d4aa6620d09e97dc83eeb1dbd0667102086779cd5fc495c1",
    "zh:84e7c2a3f3de721a54abe4c971d9a163127f5e4af91d023260fea305ac74bcf4",
    "zh:92af52aaac518c426164eb731d282f51a5825e64e6a02b0695952177a7af7d9c",
    "zh:a6920a54d5df69342f4ea2d903676145b00e7375d2f2eecc0840858d83b3b4a8",
    "zh:ac8a60801fc55fd05b3471778f908ed43072e046997c0082644c9602b84dafec",
    "zh:b1cc29e2878aa94a3827fd5e1dd8cffb98397aa4093d6a4852c6e53157e9b35f",
    "zh:c2d78f308c4d70a16ef4f6d1f4822a64f8f160d0a207f2121904cdd6f4942db4",
    "zh:ca970e5776f408059a84b4e17f6ac257ec92afae956be74f3807c548e4567eaa",
    "zh:eb2e3650ee0eec033207b6d72fcb938dc5846c6feb8a61ae30d61981ea411269",
    "zh:fcb93e51c84ba592bc2b075d7342e475126e5029620959666999b5b1bd11cb98",
  ]
}

provider "registry.terraform.io/hashicorp/random" {
  version     = "3.7.2"
  constraints = ">= 3.5.0"
  hashes = [
    "h1:356j/3XnXEKr9nyicLUufzoF4Yr6hRy481KIxRVpK0c=",
    "zh:14829603a32e4bc4d05062f059e545a91e27ff033756b48afbae6b3c835f508f",
    "zh:1527fb07d9fea400d70e9e6eb4a2b918d5060d604749b6f1c361518e7da546dc",
    "zh:1e86bcd7ebec85ba336b423ba1db046aeaa3c0e5f921039b3f1a6fc2f978feab",
    "zh:24536dec8bde66753f4b4030b8f3ef43c196d69cccbea1c382d01b222478c7a3",
    "zh:29f1786486759fad9b0ce4fdfbbfece9343ad47cd50119045075e05afe49d212",
    "zh:4d701e978c2dd8604ba1ce962b047607701e65c078cb22e97171513e9e57491f",
    "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3",
    "zh:7b8434212eef0f8c83f5a90c6d76feaf850f6502b61b53c329e85b3b281cba34",
    "zh:ac8a23c212258b7976e1621275e3af7099e7e4a3d4478cf8d5d2a27f3bc3e967",
    "zh:b516ca74431f3df4c6cf90ddcdb4042c626e026317a33c53f0b445a3d93b720d",
    "zh:dc76e4326aec2490c1600d6871a95e78f9050f9ce427c71707ea412a2f2f1a62",
    "zh:eac7b63e86c749c7d48f527671c7aee5b4e26c10be6ad7232d6860167f99dbb0",
  ]
}
terraform/authentik/README.md (new file, 55 lines)
@@ -0,0 +1,55 @@
# Authentik Terraform Module

Terraform module for managing Authentik applications with OAuth2/OpenID Connect and Proxy providers, including automatic Outpost assignment.

## Usage

```hcl
module "authentik" {
  source = "./authentik"

  authentik_url   = "https://auth.example.com"
  authentik_token = var.authentik_token

  oauth_applications = {
    "gitlab" = {
      name          = "GitLab OAuth"
      slug          = "gitlab"
      redirect_uris = ["https://gitlab.example.com/users/auth/openid_connect/callback"]
    }
  }

  proxy_applications = {
    "portainer" = {
      name          = "Portainer"
      slug          = "portainer"
      external_host = "https://portainer.example.com"
      internal_host = "http://portainer:9000"
      outpost       = "k8s-outpost"
    }
  }

  outposts = {
    "k8s-outpost" = {
      name               = "Kubernetes Outpost"
      type               = "proxy"
      service_connection = "k8s-local"
    }
  }
}
```

## Structure

- `main.tf` - Main configuration
- `variables.tf` - Input variables
- `outputs.tf` - Output values
- `modules/oauth-provider/` - OAuth2/OIDC provider module
- `modules/proxy-provider/` - Proxy provider module
- `modules/saml-provider/` - SAML provider module
- `terraform.tfvars.example` - Configuration example

## Requirements

- Terraform >= 1.0
- Authentik provider >= 2023.10.0
- Authentik API token with admin permissions
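The requirements above assume the goauthentik provider is configured in the root module. A minimal sketch of that wiring (the variable names mirror the module inputs shown under Usage; the URL is a placeholder):

```hcl
terraform {
  required_providers {
    authentik = {
      source  = "goauthentik/authentik"
      version = ">= 2023.10.0"
    }
  }
}

# Assumed wiring: the API token comes from a sensitive root variable,
# matching the authentik_token input shown in the Usage section.
provider "authentik" {
  url   = "https://auth.example.com"
  token = var.authentik_token
}
```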
terraform/authentik/groups.tfvars (new file, 10 lines)
@@ -0,0 +1,10 @@
groups = {
  "admins" = {
    name         = "Administrators"
    is_superuser = true
    attributes = {
      notes = "Managed by Terraform"
    }
  }
}
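main.tf splits this map into root and child groups on the `parent` field, where `parent` names another key in the same map. A hedged tfvars sketch with a child group (the "media-admins" entry is illustrative, not part of the repo):

```hcl
groups = {
  "admins" = {
    name         = "Administrators"
    is_superuser = true
    attributes   = { notes = "Managed by Terraform" }
  }
  # Hypothetical child group: `parent` must reference the key of a root
  # group, since main.tf resolves it via authentik_group.root_groups[...].
  "media-admins" = {
    name         = "Media Administrators"
    is_superuser = false
    parent       = "admins"
    attributes   = { notes = "Managed by Terraform" }
  }
}
```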
terraform/authentik/main.tf (new file, 309 lines)
@@ -0,0 +1,309 @@
data "authentik_flow" "default_authorization_flow" {
  slug = var.default_authorization_flow
}

data "authentik_flow" "default_authentication_flow" {
  slug = var.default_authentication_flow
}

data "authentik_flow" "default_invalidation_flow" {
  slug = var.default_invalidation_flow
}

# Root groups (without parent)
resource "authentik_group" "root_groups" {
  for_each = {
    for k, v in var.groups : k => v
    if v.parent == null
  }

  name         = each.value.name
  is_superuser = each.value.is_superuser
  attributes   = jsonencode(each.value.attributes)
}

# Child groups (with parent)
resource "authentik_group" "child_groups" {
  for_each = {
    for k, v in var.groups : k => v
    if v.parent != null
  }

  name         = each.value.name
  is_superuser = each.value.is_superuser
  parent       = authentik_group.root_groups[each.value.parent].id
  attributes   = jsonencode(each.value.attributes)

  depends_on = [authentik_group.root_groups]
}

# Auto-created groups for proxy applications
resource "authentik_group" "proxy_app_groups" {
  for_each = {
    for k, v in var.proxy_applications : k => v
    if v.create_group == true
  }

  name         = "TF-${each.value.name} Users"
  is_superuser = false
  attributes = jsonencode({
    notes    = "Auto-created for ${each.value.name} application"
    app_slug = each.value.slug
  })
}

# Auto-created groups for OAuth applications
resource "authentik_group" "oauth_app_groups" {
  for_each = {
    for k, v in var.oauth_applications : k => v
    if v.create_group == true
  }

  name         = "TF-${each.value.name} Users"
  is_superuser = false
  attributes = jsonencode({
    notes    = "Auto-created for ${each.value.name} application"
    app_slug = each.value.slug
  })
}

resource "authentik_certificate_key_pair" "certificates" {
  for_each = var.certificates

  name             = each.value.name
  certificate_data = each.value.certificate_data
  key_data         = each.value.key_data
}

data "authentik_service_connection_kubernetes" "local_k8s" {
  name = "Local Kubernetes Cluster"
}

resource "authentik_flow" "flows" {
  for_each = var.flows

  name               = each.value.name
  title              = each.value.title
  slug               = each.value.slug
  designation        = each.value.designation
  policy_engine_mode = each.value.policy_engine_mode
  compatibility_mode = each.value.compatibility_mode
  layout             = each.value.layout
  denied_action      = each.value.denied_action
}

resource "authentik_property_mapping_provider_scope" "oidc_mappings" {
  for_each = {
    for k, v in var.property_mappings : k => v
    if v.oidc_scope != null
  }

  name       = each.value.name
  scope_name = each.value.oidc_scope
  expression = each.value.expression
}

resource "authentik_property_mapping_provider_saml" "saml_mappings" {
  for_each = {
    for k, v in var.property_mappings : k => v
    if v.saml_name != null
  }

  name       = each.value.name
  saml_name  = each.value.saml_name
  expression = each.value.expression
}

module "oauth_applications" {
  source = "./modules/oauth-provider"

  for_each = var.oauth_applications

  name                       = each.value.name
  app_name                   = each.value.name
  app_slug                   = each.value.slug
  app_group                  = each.value.group
  client_id                  = each.value.client_id
  authorization_flow         = try(authentik_flow.flows[each.value.authorization_flow].id, data.authentik_flow.default_authorization_flow.id)
  invalidation_flow          = data.authentik_flow.default_invalidation_flow.id
  redirect_uris              = each.value.redirect_uris
  client_type                = each.value.client_type
  include_claims_in_id_token = each.value.include_claims_in_id_token
  access_code_validity       = each.value.access_code_validity
  access_token_validity      = each.value.access_token_validity
  refresh_token_validity     = each.value.refresh_token_validity
  property_mappings          = each.value.property_mappings
  signing_key                = each.value.signing_key
  policy_engine_mode         = each.value.policy_engine_mode
  meta_description           = each.value.meta_description
  meta_launch_url            = each.value.meta_launch_url
  meta_icon                  = each.value.meta_icon
  scope_mappings             = each.value.scope_mappings

  # Access control - only pass explicitly defined groups
  access_groups = [
    for group_key in each.value.access_groups :
    try(
      authentik_group.root_groups[group_key].id,
      authentik_group.child_groups[group_key].id
    )
  ]
}

module "proxy_applications" {
  source = "./modules/proxy-provider"

  for_each = var.proxy_applications

  name                          = each.value.name
  app_name                      = each.value.name
  app_slug                      = each.value.slug
  app_group                     = each.value.group
  external_host                 = each.value.external_host
  internal_host                 = each.value.internal_host
  internal_host_ssl_validation  = each.value.internal_host_ssl_validation
  authorization_flow            = try(authentik_flow.flows[each.value.authorization_flow].id, data.authentik_flow.default_authorization_flow.id)
  invalidation_flow             = data.authentik_flow.default_invalidation_flow.id
  mode                          = each.value.mode
  intercept_header_auth         = each.value.intercept_header_auth
  basic_auth_enabled            = each.value.basic_auth_enabled
  basic_auth_user_attribute     = each.value.basic_auth_username_attribute
  basic_auth_password_attribute = each.value.basic_auth_password_attribute
  cookie_domain                 = each.value.cookie_domain
  skip_path_regex               = each.value.skip_path_regex
  policy_engine_mode            = each.value.policy_engine_mode
  meta_description              = each.value.meta_description
  meta_launch_url               = each.value.meta_launch_url
  meta_icon                     = each.value.meta_icon

  # Access control - only pass explicitly defined groups
  access_groups = [
    for group_key in each.value.access_groups :
    try(
      authentik_group.root_groups[group_key].id,
      authentik_group.child_groups[group_key].id
    )
  ]
}

# Binding auto-created groups to their applications
resource "authentik_policy_binding" "auto_group_bindings" {
  for_each = {
    for k, v in var.proxy_applications : k => v
    if v.create_group == true
  }

  target = module.proxy_applications[each.key].application_uuid
  group  = authentik_group.proxy_app_groups[each.key].id
  order  = 100

  depends_on = [
    module.proxy_applications,
    authentik_group.proxy_app_groups
  ]
}

# Binding auto-created groups to their OAuth applications
resource "authentik_policy_binding" "oauth_auto_group_bindings" {
  for_each = {
    for k, v in var.oauth_applications : k => v
    if v.create_group == true
  }

  target = module.oauth_applications[each.key].application_uuid
  group  = authentik_group.oauth_app_groups[each.key].id
  order  = 100

  depends_on = [
    module.oauth_applications,
    authentik_group.oauth_app_groups
  ]
}

module "saml_applications" {
  source = "./modules/saml-provider"

  for_each = var.saml_applications

  name               = each.value.name
  app_name           = each.value.name
  app_slug           = each.value.slug
  app_group          = each.value.group
  authorization_flow = try(authentik_flow.flows[each.value.authorization_flow].id, data.authentik_flow.default_authorization_flow.id)
  invalidation_flow  = data.authentik_flow.default_invalidation_flow.id
  acs_url            = each.value.acs_url
  issuer             = each.value.issuer
  audience           = each.value.audience
  sp_binding         = each.value.sp_binding
  signing_key        = each.value.signing_key
  property_mappings  = [for pm in each.value.property_mappings : authentik_property_mapping_provider_saml.saml_mappings[pm].id]
  name_id_mapping    = each.value.name_id_mapping != null ? authentik_property_mapping_provider_saml.saml_mappings[each.value.name_id_mapping].id : null

  assertion_valid_not_before      = each.value.assertion_valid_not_before
  assertion_valid_not_on_or_after = each.value.assertion_valid_not_on_or_after
  session_valid_not_on_or_after   = each.value.session_valid_not_on_or_after
  policy_engine_mode              = each.value.policy_engine_mode
  meta_description                = each.value.meta_description
  meta_launch_url                 = each.value.meta_launch_url
  meta_icon                       = each.value.meta_icon
}

locals {
  oauth_outpost_assignments = {
    for app_key, app in var.oauth_applications : app_key => app.outpost
    if app.outpost != null
  }

  proxy_outpost_assignments = {
    for app_key, app in var.proxy_applications : app_key => app.outpost
    if app.outpost != null
  }

  outpost_providers = {
    for outpost_key, outpost in var.outposts : outpost_key => concat(
      [for app_key, app_outpost in local.oauth_outpost_assignments :
        module.oauth_applications[app_key].provider_id if app_outpost == outpost_key],
      [for app_key, app_outpost in local.proxy_outpost_assignments :
        module.proxy_applications[app_key].provider_id if app_outpost == outpost_key]
    )
  }
}

resource "authentik_outpost" "outposts" {
  for_each = {
    for k, v in var.outposts : k => v
    if length(lookup(local.outpost_providers, k, [])) > 0
  }

  name               = each.value.name
  type               = "proxy"
  protocol_providers = local.outpost_providers[each.key]
  service_connection = data.authentik_service_connection_kubernetes.local_k8s.id
  config = jsonencode({
    log_level                      = "info"
    docker_labels                  = null
    authentik_host                 = var.authentik_url
    docker_network                 = null
    container_image                = null
    docker_map_ports               = true
    refresh_interval               = "minutes=5"
    kubernetes_replicas            = 1
    kubernetes_namespace           = "authentik"
    authentik_host_browser         = ""
    object_naming_template         = "ak-outpost-%(name)s"
    authentik_host_insecure        = false
    kubernetes_json_patches        = null
    kubernetes_service_type        = "ClusterIP"
    kubernetes_image_pull_secrets  = []
    kubernetes_ingress_class_name  = null
    kubernetes_disabled_components = []
    kubernetes_ingress_annotations = {}
    kubernetes_ingress_secret_name = "authentik-outpost-tls"
  })

  depends_on = [
    module.oauth_applications,
    module.proxy_applications
  ]
}
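The `locals` block in main.tf fans application provider IDs into per-outpost lists, and the `for_each` filter on `authentik_outpost.outposts` then creates only outposts that received at least one provider. A worked trace under the README's Usage example (the provider ID is a hypothetical placeholder):

```hcl
# Given (from the Usage example in README.md):
#   proxy_applications["portainer"].outpost = "k8s-outpost"
#   no oauth application sets `outpost`
#
# local.oauth_outpost_assignments = {}
# local.proxy_outpost_assignments = { "portainer" = "k8s-outpost" }
#
# For outpost key "k8s-outpost", concat() collects the matching provider IDs:
# local.outpost_providers = {
#   "k8s-outpost" = [module.proxy_applications["portainer"].provider_id]
# }
#
# An outpost whose provider list is empty is filtered out and never created.
```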
terraform/authentik/modules/oauth-provider/main.tf (new file, 103 lines)
@@ -0,0 +1,103 @@
terraform {
  required_providers {
    authentik = {
      source  = "goauthentik/authentik"
      version = ">= 2023.10.0"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 3.5.0"
    }
  }
}

# Get all available scope mappings
data "authentik_property_mapping_provider_scope" "all_scopes" {
  managed_list = [
    "goauthentik.io/providers/oauth2/scope-email",
    "goauthentik.io/providers/oauth2/scope-openid",
    "goauthentik.io/providers/oauth2/scope-profile"
  ]
}

# Filter scope mappings based on requested scopes
locals {
  scope_name_mapping = {
    "openid"  = "goauthentik.io/providers/oauth2/scope-openid"
    "profile" = "goauthentik.io/providers/oauth2/scope-profile"
    "email"   = "goauthentik.io/providers/oauth2/scope-email"
  }

  selected_scope_ids = [
    for scope in var.scope_mappings :
    data.authentik_property_mapping_provider_scope.all_scopes.ids[index(data.authentik_property_mapping_provider_scope.all_scopes.managed_list, local.scope_name_mapping[scope])]
    if contains(keys(local.scope_name_mapping), scope)
  ]
}

resource "random_password" "client_secret" {
  count   = var.client_secret == null ? 1 : 0
  length  = 40
  special = true
}

resource "authentik_provider_oauth2" "provider" {
  name                       = var.name
  client_id                  = var.client_id != null ? var.client_id : random_id.client_id[0].hex
  client_secret              = var.client_secret != null ? var.client_secret : random_password.client_secret[0].result
  client_type                = var.client_type
  authorization_flow         = var.authorization_flow
  invalidation_flow          = var.invalidation_flow
  include_claims_in_id_token = var.include_claims_in_id_token
  access_code_validity       = var.access_code_validity
  access_token_validity      = var.access_token_validity
  refresh_token_validity     = var.refresh_token_validity
  signing_key                = var.signing_key

  allowed_redirect_uris = [
    for uri in var.redirect_uris : {
      matching_mode = "strict"
      url           = uri
    }
  ]

  property_mappings = length(var.property_mappings) > 0 ? var.property_mappings : local.selected_scope_ids
}

resource "random_id" "client_id" {
  count       = var.client_id == null ? 1 : 0
  byte_length = 20
}

resource "authentik_application" "app" {
  name               = var.app_name
  slug               = var.app_slug
  protocol_provider  = authentik_provider_oauth2.provider.id
  group              = var.app_group
  policy_engine_mode = var.policy_engine_mode
  meta_description   = var.meta_description
  meta_launch_url    = var.meta_launch_url
  meta_icon          = var.meta_icon
}

resource "authentik_policy_binding" "app_access" {
  for_each = var.access_policies

  target = authentik_application.app.id
  policy = each.value.policy_id
  order  = each.value.order

  enabled        = lookup(each.value, "enabled", true)
  timeout        = lookup(each.value, "timeout", 30)
  negate         = lookup(each.value, "negate", false)
  failure_result = lookup(each.value, "failure_result", true)
}

# Binding groups to the application
resource "authentik_policy_binding" "group_bindings" {
  for_each = { for idx, group_id in var.access_groups : idx => group_id }

  target = authentik_application.app.uuid
  group  = each.value
  order  = 10 + each.key
}
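The module above generates a client ID and secret when none are supplied, and falls back to the openid/profile/email scope mappings when no explicit property mappings are given. A minimal direct invocation (the application name, slug, and redirect URI are illustrative; the flow IDs reuse the root module's data sources):

```hcl
module "grafana_oauth" {
  source = "./modules/oauth-provider"

  # Hypothetical application; only the names, slug, and the two flow IDs
  # are required - every other input has a default.
  name               = "Grafana OAuth"
  app_name           = "Grafana"
  app_slug           = "grafana"
  authorization_flow = data.authentik_flow.default_authorization_flow.id
  invalidation_flow  = data.authentik_flow.default_invalidation_flow.id
  redirect_uris      = ["https://grafana.example.com/login/generic_oauth"]

  # With no client_id/client_secret given, random_id and random_password
  # inside the module generate both.
  scope_mappings = ["openid", "profile", "email"]
}
```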
terraform/authentik/modules/oauth-provider/outputs.tf (new file, 30 lines)
@@ -0,0 +1,30 @@
output "provider_id" {
  description = "ID of the OAuth2 provider"
  value       = authentik_provider_oauth2.provider.id
}

output "application_id" {
  description = "ID of the application"
  value       = authentik_application.app.id
}

output "application_uuid" {
  description = "UUID of the application"
  value       = authentik_application.app.uuid
}

output "client_id" {
  description = "OAuth2 Client ID"
  value       = authentik_provider_oauth2.provider.client_id
}

output "client_secret" {
  description = "OAuth2 Client Secret"
  value       = authentik_provider_oauth2.provider.client_secret
  sensitive   = true
}

output "application_slug" {
  description = "Application slug"
  value       = authentik_application.app.slug
}
terraform/authentik/modules/oauth-provider/variables.tf (new file, 150 lines)
@@ -0,0 +1,150 @@
variable "name" {
  description = "Name of the OAuth2 provider"
  type        = string
}

variable "app_name" {
  description = "Name of the application"
  type        = string
}

variable "app_slug" {
  description = "Slug of the application"
  type        = string
}

variable "app_group" {
  description = "Group for the application"
  type        = string
  default     = ""
}

variable "client_id" {
  description = "OAuth2 Client ID"
  type        = string
  default     = null
}

variable "client_secret" {
  description = "OAuth2 Client Secret"
  type        = string
  default     = null
  sensitive   = true
}

variable "client_type" {
  description = "OAuth2 Client type (confidential or public)"
  type        = string
  default     = "confidential"

  validation {
    condition     = contains(["confidential", "public"], var.client_type)
    error_message = "Client type must be either 'confidential' or 'public'."
  }
}

variable "authorization_flow" {
  description = "Authorization flow UUID"
  type        = string
}

variable "invalidation_flow" {
  description = "Invalidation flow UUID"
  type        = string
}

variable "redirect_uris" {
  description = "List of allowed redirect URIs"
  type        = list(string)
  default     = []
}

variable "access_code_validity" {
  description = "Access code validity duration"
  type        = string
  default     = "minutes=1"
}

variable "access_token_validity" {
  description = "Access token validity duration"
  type        = string
  default     = "minutes=5"
}

variable "refresh_token_validity" {
  description = "Refresh token validity duration"
  type        = string
  default     = "days=30"
}

variable "include_claims_in_id_token" {
  description = "Include claims in ID token"
  type        = bool
  default     = true
}

variable "signing_key" {
  description = "Signing key UUID"
  type        = string
  default     = null
}

variable "property_mappings" {
  description = "List of property mapping UUIDs"
  type        = list(string)
  default     = []
}

variable "policy_engine_mode" {
  description = "Policy engine mode"
  type        = string
  default     = "all"

  validation {
    condition     = contains(["all", "any"], var.policy_engine_mode)
    error_message = "Policy engine mode must be either 'all' or 'any'."
  }
}

variable "meta_description" {
  description = "Application meta description"
  type        = string
  default     = ""
}

variable "meta_launch_url" {
  description = "Application launch URL"
  type        = string
  default     = ""
}

variable "meta_icon" {
  description = "Application icon URL"
  type        = string
  default     = ""
}

variable "access_policies" {
  description = "Access policies for the application"
  type = map(object({
    policy_id      = string
    order          = number
    enabled        = optional(bool, true)
    timeout        = optional(number, 30)
    negate         = optional(bool, false)
    failure_result = optional(bool, true)
  }))
  default = {}
}

variable "access_groups" {
  description = "List of group IDs that have access to the application"
  type        = list(string)
  default     = []
}

variable "scope_mappings" {
  description = "List of scope mappings for the OAuth provider"
  type        = list(string)
  default     = ["openid", "profile", "email"]
}
terraform/authentik/modules/proxy-provider/main.tf (new file, 58 lines)
@@ -0,0 +1,58 @@
terraform {
  required_providers {
    authentik = {
      source  = "goauthentik/authentik"
      version = ">= 2023.10.0"
    }
  }
}

resource "authentik_provider_proxy" "provider" {
  name                          = var.name
  external_host                 = var.external_host
  internal_host                 = var.internal_host
  internal_host_ssl_validation  = var.internal_host_ssl_validation
  authorization_flow            = var.authorization_flow
  invalidation_flow             = var.invalidation_flow
  mode                          = var.mode
  cookie_domain                 = var.cookie_domain
  skip_path_regex               = var.skip_path_regex
  intercept_header_auth         = var.intercept_header_auth
  basic_auth_enabled            = var.basic_auth_enabled
  basic_auth_password_attribute = var.basic_auth_password_attribute

  property_mappings = var.property_mappings
}

resource "authentik_application" "app" {
  name               = var.app_name
  slug               = var.app_slug
  protocol_provider  = authentik_provider_proxy.provider.id
  group              = var.app_group
  policy_engine_mode = var.policy_engine_mode
  meta_description   = var.meta_description
  meta_launch_url    = var.meta_launch_url
  meta_icon          = var.meta_icon
}

resource "authentik_policy_binding" "app_access" {
  for_each = var.access_policies

  target = authentik_application.app.id
  policy = each.value.policy_id
  order  = each.value.order

  enabled        = lookup(each.value, "enabled", true)
  timeout        = lookup(each.value, "timeout", 30)
  negate         = lookup(each.value, "negate", false)
  failure_result = lookup(each.value, "failure_result", true)
}

# Binding groups to the application
resource "authentik_policy_binding" "group_bindings" {
  for_each = { for idx, group_id in var.access_groups : idx => group_id }

  target = authentik_application.app.uuid
  group  = each.value
  order  = 10 + each.key
}
terraform/authentik/modules/proxy-provider/outputs.tf (new file, 35 lines)
@@ -0,0 +1,35 @@
output "provider_id" {
  description = "ID of the Proxy provider"
  value       = authentik_provider_proxy.provider.id
}

output "application_id" {
  description = "ID of the application"
  value       = authentik_application.app.id
}

output "application_uuid" {
  description = "UUID of the application"
  value       = authentik_application.app.uuid
}

output "application_slug" {
  description = "Application slug"
  value       = authentik_application.app.slug
}

output "launch_url" {
  description = "Application launch URL"
  value       = authentik_application.app.meta_launch_url
}

output "external_host" {
  description = "External host URL"
  value       = authentik_provider_proxy.provider.external_host
}

output "internal_host" {
  description = "Internal host URL"
  value       = authentik_provider_proxy.provider.internal_host
}
terraform/authentik/modules/proxy-provider/variables.tf (new file, 151 lines)
@@ -0,0 +1,151 @@
variable "name" {
  description = "Name of the Proxy provider"
  type        = string
}

variable "app_name" {
  description = "Name of the application"
  type        = string
}

variable "app_slug" {
  description = "Slug of the application"
  type        = string
}

variable "app_group" {
  description = "Group for the application"
  type        = string
  default     = ""
}

variable "external_host" {
  description = "External hostname for the proxy"
  type        = string
}

variable "internal_host" {
  description = "Internal hostname for the proxy"
  type        = string
  default     = ""
}

variable "internal_host_ssl_validation" {
  description = "Enable SSL validation for internal host"
  type        = bool
  default     = true
}

variable "authorization_flow" {
  description = "Authorization flow UUID"
  type        = string
}

variable "invalidation_flow" {
  description = "Invalidation flow UUID"
  type        = string
}

variable "mode" {
  description = "Proxy mode (proxy, forward_single, forward_domain)"
  type        = string
  default     = "proxy"

  validation {
    condition     = contains(["proxy", "forward_single", "forward_domain"], var.mode)
    error_message = "Mode must be one of: proxy, forward_single, forward_domain."
  }
}

variable "cookie_domain" {
  description = "Cookie domain for the proxy"
  type        = string
  default     = ""
}

variable "skip_path_regex" {
  description = "Regular expression for paths to skip authentication"
  type        = string
  default     = ""
}

variable "intercept_header_auth" {
  description = "Intercept header authentication"
  type        = bool
  default     = false
}

variable "basic_auth_enabled" {
  description = "Enable basic authentication"
  type        = bool
  default     = false
}

variable "basic_auth_password_attribute" {
  description = "Attribute for basic auth password"
  type        = string
  default     = ""
}

variable "basic_auth_user_attribute" {
  description = "Attribute for basic auth username"
  type        = string
  default     = ""
}

variable "property_mappings" {
  description = "List of property mapping UUIDs"
  type        = list(string)
  default     = []
}

variable "policy_engine_mode" {
  description = "Policy engine mode"
  type        = string
  default     = "all"

  validation {
    condition     = contains(["all", "any"], var.policy_engine_mode)
    error_message = "Policy engine mode must be either 'all' or 'any'."
  }
}

variable "meta_description" {
  description = "Application meta description"
  type        = string
  default     = ""
}

variable "meta_launch_url" {
  description = "Application launch URL"
  type        = string
  default     = ""
}

variable "meta_icon" {
  description = "Application icon URL"
  type        = string
  default     = ""
}

variable "access_policies" {
  description = "Access policies for the application"
  type = map(object({
    policy_id      = string
    order          = number
    enabled        = optional(bool, true)
    timeout        = optional(number, 30)
    negate         = optional(bool, false)
    failure_result = optional(bool, true)
  }))
  default = {}
}

variable "access_groups" {
  description = "List of group IDs that have access to the application"
  type        = list(string)
  default     = []
}
terraform/authentik/modules/saml-provider/main.tf (new file, 53 lines)
@@ -0,0 +1,53 @@
|
||||
terraform {
|
||||
required_providers {
|
||||
authentik = {
|
||||
source = "goauthentik/authentik"
|
||||
version = ">= 2023.10.0"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
data "authentik_certificate_key_pair" "default" {
|
||||
name = "authentik Self-signed Certificate"
|
||||
}
|
||||
|
||||
resource "authentik_provider_saml" "provider" {
|
||||
name = var.name
|
||||
authorization_flow = var.authorization_flow
|
||||
invalidation_flow = var.invalidation_flow
|
||||
acs_url = var.acs_url
|
||||
issuer = var.issuer
|
||||
audience = var.audience
|
||||
sp_binding = var.sp_binding
|
||||
signing_kp = var.signing_key != null ? var.signing_key : data.authentik_certificate_key_pair.default.id
|
||||
property_mappings = var.property_mappings
|
||||
name_id_mapping = var.name_id_mapping
|
||||
|
||||
assertion_valid_not_before = var.assertion_valid_not_before
|
||||
assertion_valid_not_on_or_after = var.assertion_valid_not_on_or_after
|
||||
session_valid_not_on_or_after = var.session_valid_not_on_or_after
|
||||
}
|
||||
|
||||
resource "authentik_application" "app" {
|
||||
name = var.app_name
|
||||
slug = var.app_slug
|
||||
protocol_provider = authentik_provider_saml.provider.id
|
||||
group = var.app_group
|
||||
policy_engine_mode = var.policy_engine_mode
|
||||
meta_description = var.meta_description
|
||||
meta_launch_url = var.meta_launch_url
|
||||
meta_icon = var.meta_icon
|
||||
}
|
||||
|
||||
resource "authentik_policy_binding" "app_access" {
|
||||
for_each = var.access_policies
|
||||
|
||||
target = authentik_application.app.id
|
||||
policy = each.value.policy_id
|
||||
order = each.value.order
|
||||
|
||||
enabled = lookup(each.value, "enabled", true)
|
||||
timeout = lookup(each.value, "timeout", 30)
|
||||
negate = lookup(each.value, "negate", false)
|
||||
failure_result = lookup(each.value, "failure_result", true)
|
||||
}
|
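A minimal sketch of calling this module from a root configuration; the flow data sources and hostnames are assumptions, not taken from this repo:

```hcl
module "grafana_saml" {
  source = "./modules/saml-provider"

  name               = "grafana-saml"
  app_name           = "Grafana (SAML)"
  app_slug           = "grafana-saml"
  authorization_flow = data.authentik_flow.default_authorization.id # assumed data source
  invalidation_flow  = data.authentik_flow.default_invalidation.id  # assumed data source
  acs_url            = "https://gf.example.com/saml/acs"
  issuer             = "authentik"
  audience           = "https://gf.example.com/saml/metadata"
}
```

With `signing_key` left unset, the module falls back to the authentik self-signed certificate via the `data "authentik_certificate_key_pair" "default"` lookup.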
terraform/authentik/modules/saml-provider/outputs.tf (new file, 24 lines)
@@ -0,0 +1,24 @@
output "provider_id" {
  description = "ID of the SAML provider"
  value       = authentik_provider_saml.provider.id
}

output "application_id" {
  description = "ID of the application"
  value       = authentik_application.app.id
}

output "provider_name" {
  description = "Name of the SAML provider"
  value       = authentik_provider_saml.provider.name
}

output "acs_url" {
  description = "Assertion Consumer Service URL"
  value       = authentik_provider_saml.provider.acs_url
}

output "issuer" {
  description = "SAML Issuer"
  value       = authentik_provider_saml.provider.issuer
}
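A caller can re-export these module outputs at the root level, e.g. (a sketch assuming a module instance named `grafana_saml`):

```hcl
output "grafana_saml_acs_url" {
  description = "ACS URL of the Grafana SAML provider"
  value       = module.grafana_saml.acs_url
}
```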
terraform/authentik/modules/saml-provider/variables.tf (new file, 124 lines)
@@ -0,0 +1,124 @@
variable "name" {
  description = "Name of the SAML provider"
  type        = string
}

variable "app_name" {
  description = "Name of the application"
  type        = string
}

variable "app_slug" {
  description = "Slug of the application"
  type        = string
}

variable "app_group" {
  description = "Group of the application"
  type        = string
  default     = ""
}

variable "authorization_flow" {
  description = "Authorization flow ID"
  type        = string
}

variable "invalidation_flow" {
  description = "Invalidation flow ID"
  type        = string
}

variable "acs_url" {
  description = "Assertion Consumer Service URL"
  type        = string
}

variable "issuer" {
  description = "SAML Issuer"
  type        = string
}

variable "audience" {
  description = "SAML Audience"
  type        = string
}

variable "sp_binding" {
  description = "Service Provider binding (post or redirect)"
  type        = string
  default     = "post"
}

variable "signing_key" {
  description = "Certificate key pair ID for signing"
  type        = string
  default     = null
}

variable "property_mappings" {
  description = "List of property mapping IDs"
  type        = list(string)
  default     = []
}

variable "name_id_mapping" {
  description = "Property mapping ID for NameID"
  type        = string
  default     = null
}

variable "assertion_valid_not_before" {
  description = "Assertion valid not before"
  type        = string
  default     = "minutes=-5"
}

variable "assertion_valid_not_on_or_after" {
  description = "Assertion valid not on or after"
  type        = string
  default     = "minutes=5"
}

variable "session_valid_not_on_or_after" {
  description = "Session valid not on or after"
  type        = string
  default     = "minutes=86400"
}

variable "policy_engine_mode" {
  description = "Policy engine mode"
  type        = string
  default     = "all"
}

variable "meta_description" {
  description = "Application description"
  type        = string
  default     = ""
}

variable "meta_launch_url" {
  description = "Application launch URL"
  type        = string
  default     = ""
}

variable "meta_icon" {
  description = "Application icon URL"
  type        = string
  default     = ""
}

variable "access_policies" {
  description = "Access policies for the application"
  type = map(object({
    policy_id      = string
    order          = number
    enabled        = optional(bool, true)
    timeout        = optional(number, 30)
    negate         = optional(bool, false)
    failure_result = optional(bool, true)
  }))
  default = {}
}
terraform/authentik/oauth2-apps.tfvars (new file, 171 lines)
@@ -0,0 +1,171 @@
oauth_applications = {
  "paperless" = {
    name                       = "Paperless-NGX"
    slug                       = "paperless"
    group                      = "Tools"
    meta_description           = "Document management system"
    meta_icon                  = "https://img.icons8.com/fluency/48/documents.png"
    redirect_uris              = ["https://docs.hexor.cy/accounts/oidc/authentik/login/callback/"]
    client_type                = "confidential"
    include_claims_in_id_token = true
    access_code_validity       = "minutes=1"
    access_token_validity      = "minutes=5"
    refresh_token_validity     = "days=30"
    scope_mappings             = ["openid", "profile", "email"]
    create_group               = true
    access_groups              = ["admins"]
  }

  "gitea" = {
    name                       = "Gitea"
    slug                       = "gitea"
    group                      = "Tools"
    meta_description           = "Git repository hosting"
    meta_icon                  = "https://img.icons8.com/?size=100&id=20906&format=png&color=000000"
    redirect_uris              = ["https://gt.hexor.cy/user/oauth2/Authentik/callback"]
    client_type                = "confidential"
    include_claims_in_id_token = true
    access_code_validity       = "minutes=1"
    access_token_validity      = "minutes=10"
    refresh_token_validity     = "days=30"
    scope_mappings             = ["openid", "profile", "email"]
    access_groups              = ["admins"]
  }

  "jellyfin" = {
    name             = "Jellyfin"
    slug             = "jellyfin"
    group            = "Media and Storage"
    meta_description = "Media streaming server"
    meta_icon        = "https://img.icons8.com/plasticine/100/jellyfin.png"
    redirect_uris = [
      "https://jf.hexor.cy/sso/OID/r/authentik",
      "https://jf.hexor.cy/sso/OID/redirect/authentik"
    ]
    client_type                = "confidential"
    include_claims_in_id_token = true
    access_code_validity       = "minutes=1"
    access_token_validity      = "minutes=10"
    refresh_token_validity     = "days=30"
    scope_mappings             = ["openid", "profile", "email"]
    access_groups              = ["admins"]
  }

  "argocd" = {
    name                       = "ArgoCD"
    slug                       = "argocd"
    group                      = "Core"
    meta_description           = "GitOps deployment tool"
    meta_icon                  = "https://img.icons8.com/color-glass/48/octopus.png"
    redirect_uris              = ["https://ag.hexor.cy/auth/callback"]
    client_type                = "confidential"
    include_claims_in_id_token = true
    access_code_validity       = "minutes=1"
    access_token_validity      = "minutes=5"
    refresh_token_validity     = "days=30"
    scope_mappings             = ["openid", "profile", "email"]
    signing_key                = "1b1b5bec-034a-4d96-871a-133f11322360"
    access_groups              = ["admins"]
  }

  "grafana" = {
    name                       = "Grafana"
    slug                       = "grafana"
    group                      = "Core"
    meta_description           = "Monitoring and observability"
    meta_icon                  = "https://img.icons8.com/fluency/48/grafana.png"
    redirect_uris              = ["https://gf.hexor.cy/login/generic_oauth"]
    client_type                = "confidential"
    include_claims_in_id_token = true
    access_code_validity       = "minutes=1"
    access_token_validity      = "minutes=5"
    refresh_token_validity     = "days=30"
    scope_mappings             = ["openid", "profile", "email"]
    access_groups              = ["admins"]
  }

  "immich" = {
    name             = "Immich"
    slug             = "immich"
    group            = "Media and Storage"
    meta_description = "Photo and video management"
    meta_icon        = "https://img.icons8.com/fluency/48/photos.png"
    redirect_uris = [
      "https://photos.hexor.cy/auth/login",
      "https://photos.hexor.cy/user-settings",
      "app.immich:///oauth-callback",
      "http://photos.homenet:30283/auth/login",
      "http://photos.homenet:30283/user-settings"
    ]
    client_type                = "confidential"
    include_claims_in_id_token = true
    access_code_validity       = "minutes=1"
    access_token_validity      = "minutes=5"
    refresh_token_validity     = "days=30"
    scope_mappings             = ["openid", "profile", "email"]
    signing_key                = "1b1b5bec-034a-4d96-871a-133f11322360"
    access_groups              = ["admins"]
    create_group               = true
  }

  "pgadmin" = {
    name                       = "Postgres WEB Admin"
    slug                       = "pgadmin"
    group                      = "Core"
    meta_description           = "PostgreSQL WEB administration"
    meta_icon                  = "https://img.icons8.com/?size=100&id=JRnxU7ZWP4mi&format=png&color=000000"
    redirect_uris              = ["https://pg.hexor.cy/oauth2/authorize"]
    client_type                = "confidential"
    include_claims_in_id_token = true
    access_code_validity       = "minutes=1"
    access_token_validity      = "minutes=5"
    refresh_token_validity     = "days=30"
    scope_mappings             = ["openid", "profile", "email"]
    access_groups              = ["admins"]
    signing_key                = "1b1b5bec-034a-4d96-871a-133f11322360"
  }

  "home-assistant-lms" = {
    name             = "Home Assistant LMS"
    slug             = "home-assistant-lms"
    group            = "Internal"
    meta_description = "Home Assistant Limassol"
    meta_icon        = "https://img.icons8.com/stickers/100/smart-home-automation.png"
    redirect_uris = [
      "http://ha-lms:8123/auth/oidc/callback",
      "http://ha-lms.homenet:8123/auth/oidc/callback",
    ]
    meta_launch_url            = "http://ha-lms:8123/auth/oidc/welcome"
    client_type                = "confidential"
    include_claims_in_id_token = true
    access_code_validity       = "minutes=1"
    access_token_validity      = "minutes=5"
    refresh_token_validity     = "days=30"
    scope_mappings             = ["openid", "profile", "email"]
    access_groups              = ["admins"]
    create_group               = true
    signing_key                = "1b1b5bec-034a-4d96-871a-133f11322360"
  }

  "home-assistant-london" = {
    name             = "Home Assistant London"
    slug             = "home-assistant-london"
    group            = "Internal"
    meta_description = "Home Assistant London"
    meta_icon        = "https://img.icons8.com/stickers/100/smart-home-automation.png"
    redirect_uris = [
      "http://ha-london:8123/auth/oidc/callback",
      "http://ha-london.tail2fe2d.ts.net:8123/auth/oidc/callback",
    ]
    meta_launch_url            = "http://ha-london:8123/auth/oidc/welcome"
    client_type                = "confidential"
    include_claims_in_id_token = true
    access_code_validity       = "minutes=1"
    access_token_validity      = "minutes=5"
    refresh_token_validity     = "days=30"
    scope_mappings             = ["openid", "profile", "email"]
    access_groups              = ["admins"]
    create_group               = true
    signing_key                = "1b1b5bec-034a-4d96-871a-133f11322360"
  }
}
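These tfvars are presumably consumed by a `for_each` module block in the root configuration; a sketch of what that wiring could look like (the module path and the assumption that every attribute is forwarded are mine, not from this diff):

```hcl
module "oauth_applications" {
  source   = "./modules/oauth2-provider" # assumed module path
  for_each = var.oauth_applications

  # Each map entry becomes one provider/application pair.
  name          = each.value.name
  slug          = each.value.slug
  redirect_uris = each.value.redirect_uris
  client_type   = each.value.client_type
}
```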
terraform/authentik/outputs.tf (new file, 124 lines)
@@ -0,0 +1,124 @@
output "oauth_applications" {
  description = "OAuth2/OpenID applications details"
  value = {
    for k, v in module.oauth_applications : k => {
      application_id   = v.application_id
      application_uuid = v.application_uuid
      client_id        = v.client_id
      client_secret    = v.client_secret
      slug             = v.application_slug
    }
  }
  sensitive = true
}

output "proxy_applications" {
  description = "Proxy applications details"
  value = {
    for k, v in module.proxy_applications : k => {
      application_id   = v.application_id
      application_uuid = v.application_uuid
      external_host    = v.external_host
      internal_host    = v.internal_host
      slug             = v.application_slug
    }
  }
}

output "outposts" {
  description = "Outposts details"
  value = {
    for k, v in authentik_outpost.outposts : k => {
      id   = v.id
      name = v.name
      type = v.type
    }
  }
}

output "groups" {
  description = "Groups details"
  value = merge(
    {
      for k, v in authentik_group.root_groups : k => {
        id   = v.id
        name = v.name
      }
    },
    {
      for k, v in authentik_group.child_groups : k => {
        id   = v.id
        name = v.name
      }
    },
    {
      for k, v in authentik_group.proxy_app_groups : k => {
        id           = v.id
        name         = v.name
        auto_created = true
        type         = "proxy"
      }
    },
    {
      for k, v in authentik_group.oauth_app_groups : k => {
        id           = v.id
        name         = v.name
        auto_created = true
        type         = "oauth"
      }
    }
  )
}

output "flows" {
  description = "Custom flows details"
  value = {
    for k, v in authentik_flow.flows : k => {
      id   = v.id
      slug = v.slug
      name = v.name
    }
  }
}

output "certificates" {
  description = "Certificates details"
  value = {
    for k, v in authentik_certificate_key_pair.certificates : k => {
      id                 = v.id
      name               = v.name
      fingerprint_sha256 = v.fingerprint_sha256
      fingerprint_sha1   = v.fingerprint_sha1
    }
  }
}

# Output for applications table generation
output "applications_for_wiki" {
  description = "Applications data formatted for wiki table generation"
  value = {
    proxy_apps = {
      for k, v in var.proxy_applications : k => {
        name         = v.name
        type         = "Proxy"
        url          = v.external_host
        internal_url = v.internal_host
        group        = v.group
        description  = v.meta_description
        icon         = v.meta_icon
        slug         = v.slug
      }
    }
    oauth_apps = {
      for k, v in var.oauth_applications : k => {
        name        = v.name
        type        = "OAuth2/OpenID"
        url         = length(v.redirect_uris) > 0 ? "https://${split("/", replace(v.redirect_uris[0], "https://", ""))[0]}" : ""
        group       = v.group
        description = v.meta_description
        icon        = v.meta_icon
        slug        = v.slug
      }
    }
  }
}
terraform/authentik/providers.tf (new file, 13 lines)
@@ -0,0 +1,13 @@
terraform {
  required_providers {
    authentik = {
      source  = "goauthentik/authentik"
      version = "2025.8.1"
    }
  }
}

provider "authentik" {
  url   = "https://idm.hexor.cy"
  token = "qXcuoCg77JaRgqnU6rqIIBa8MBJ4UNyLPTL89dZI8zeC2jfaWqQ7k56BJs8F"
}
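The provider block above commits its API token in plain text; a common alternative is to read it from a sensitive variable instead (a sketch — the variable name is an assumption):

```hcl
variable "authentik_token" {
  description = "API token for the authentik provider"
  type        = string
  sensitive   = true
}

provider "authentik" {
  url   = "https://idm.hexor.cy"
  token = var.authentik_token # e.g. supplied via TF_VAR_authentik_token
}
```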
terraform/authentik/proxy-apps.tfvars (new file, 191 lines)
@@ -0,0 +1,191 @@
proxy_applications = {
  "k8s-dashboard" = {
    name                         = "K8S dashboard"
    slug                         = "k8s-dashboard-ns"
    group                        = "Core"
    external_host                = "https://k8s.hexor.cy"
    internal_host                = "http://kubernetes-dashboard.kubernetes-dashboard.svc"
    internal_host_ssl_validation = false
    meta_description             = "K8S dashboard chart"
    mode                         = "proxy"
    outpost                      = "kubernetes-outpost"
    meta_icon                    = "https://img.icons8.com/color/48/kubernetes.png"
    create_group                 = true
    access_groups                = ["admins"]
  }

  "filemanager" = {
    name                         = "FM filemanager"
    slug                         = "fm-filemanager"
    group                        = "Core"
    external_host                = "https://fm.hexor.cy"
    internal_host                = "http://fb-filemanager-filebrowser.syncthing.svc"
    internal_host_ssl_validation = false
    meta_description             = "File browser"
    mode                         = "proxy"
    outpost                      = "kubernetes-outpost"
    meta_icon                    = "https://img.icons8.com/external-anggara-flat-anggara-putra/32/external-folder-basic-user-interface-anggara-flat-anggara-putra.png"
    create_group                 = true
    access_groups                = ["admins"]
  }

  "prometheus" = {
    name                         = "Prometheus"
    slug                         = "prometheus"
    group                        = "Core"
    external_host                = "https://prom.hexor.cy"
    internal_host                = "http://prometheus-kube-prometheus-prometheus.prometheus.svc:9090"
    meta_description             = ""
    meta_icon                    = "https://img.icons8.com/fluency/48/prometheus-app.png"
    mode                         = "proxy"
    outpost                      = "kubernetes-outpost"
    internal_host_ssl_validation = false
    create_group                 = true
    access_groups                = ["admins"]
  }

  "kubernetes-secrets" = {
    name                         = "kubernetes-secrets"
    slug                         = "k8s-secret"
    group                        = "Core"
    external_host                = "https://pass.hexor.cy"
    internal_host                = "http://secret-reader.k8s-secret.svc:80"
    internal_host_ssl_validation = false
    meta_description             = ""
    meta_icon                    = "https://img.icons8.com/ios-filled/50/password.png"
    mode                         = "proxy"
    outpost                      = "kubernetes-outpost"
    create_group                 = true
    access_groups                = ["admins"]
  }

  # Tools applications
  "vpn" = {
    name                         = "VPN"
    slug                         = "vpn"
    group                        = "Tools"
    external_host                = "https://of.hexor.cy"
    internal_host                = "http://outfleet.vpn.svc"
    internal_host_ssl_validation = false
    meta_description             = ""
    skip_path_regex              = <<-EOT
      /u/
      /stat/
      /ss/
      /xray/
      /dynamic/
    EOT
    meta_icon                    = "https://img.icons8.com/?size=100&id=fqAD3lAB6zTe&format=png&color=000000"
    mode                         = "proxy"
    outpost                      = "kubernetes-outpost"
    create_group                 = true
    access_groups                = ["admins"]
  }

  "outfleet-rs" = {
    name                         = "OutFleet"
    slug                         = "outfleet-rs"
    group                        = "Tools"
    external_host                = "https://vpn.hexor.cy"
    internal_host                = "http://outfleet-rs.vpn.svc"
    internal_host_ssl_validation = false
    meta_description             = ""
    skip_path_regex              = <<-EOT
      /sub/
    EOT
    meta_icon                    = "https://img.icons8.com/?size=100&id=fqAD3lAB6zTe&format=png&color=000000"
    mode                         = "proxy"
    outpost                      = "kubernetes-outpost"
    create_group                 = true
    access_groups                = ["admins"]
  }

  "qbittorrent" = {
    name                         = "qBittorrent"
    slug                         = "qbittorent"
    group                        = "Tools"
    external_host                = "https://qbt.hexor.cy"
    internal_host                = "http://qbittorrent.jellyfin.svc"
    internal_host_ssl_validation = false
    meta_description             = ""
    meta_icon                    = "https://img.icons8.com/nolan/64/qbittorrent--v2.png"
    mode                         = "proxy"
    outpost                      = "kubernetes-outpost"
    create_group                 = true
    access_groups                = ["admins"]
  }

  # Media and Storage applications
  "kopia" = {
    name                         = "Kopia"
    slug                         = "kopia"
    group                        = "Media and Storage"
    external_host                = "https://backup.hexor.cy"
    internal_host                = "http://100.72.135.2:51515"
    internal_host_ssl_validation = false
    meta_description             = ""
    meta_icon                    = "https://img.icons8.com/external-flaticons-lineal-color-flat-icons/64/external-backup-productivity-flaticons-lineal-color-flat-icons.png"
    mode                         = "proxy"
    outpost                      = "kubernetes-outpost"
    create_group                 = true
    access_groups                = ["admins"]
  }

  "syncthing-master" = {
    name                         = "Syncthing-master"
    slug                         = "syncthing-master"
    group                        = "Media and Storage"
    external_host                = "https://ss.hexor.cy"
    internal_host                = "https://syncthing-master.syncthing.svc:8384"
    internal_host_ssl_validation = false
    meta_description             = ""
    meta_icon                    = "https://img.icons8.com/?size=100&id=Id4NcEcXcYzF&format=png&color=000000"
    mode                         = "proxy"
    outpost                      = "kubernetes-outpost"
    create_group                 = true
    access_groups                = ["admins"]
  }

  "truenas" = {
    name                         = "TrueNAS"
    slug                         = "truenas-proxy"
    group                        = "Media and Storage"
    external_host                = "https://nas.hexor.cy"
    internal_host                = "http://10.0.5.107:81"
    internal_host_ssl_validation = false
    meta_description             = ""
    meta_icon                    = "https://img.icons8.com/dusk/64/nas.png"
    mode                         = "proxy"
    outpost                      = "kubernetes-outpost"
    create_group                 = true
    access_groups                = ["admins"]
  }

  "khm" = {
    name                         = "KHM"
    slug                         = "khm"
    group                        = "Media and Storage"
    external_host                = "https://khm.hexor.cy"
    internal_host                = "http://khm.khm.svc:8080"
    internal_host_ssl_validation = false
    meta_description             = ""
    meta_icon                    = "https://img.icons8.com/liquid-glass/48/key.png"
    mode                         = "proxy"
    outpost                      = "kubernetes-outpost"
    access_groups                = ["admins", "khm"] # use existing groups
    create_group                 = true
  }

  "minecraft" = {
    name                         = "Minecraft"
    slug                         = "minecraft"
    group                        = "Media and Storage"
    external_host                = "https://minecraft.hexor.cy"
    internal_host                = "http://minecraft-dynmap.minecraft.svc"
    internal_host_ssl_validation = false
    meta_description             = ""
    meta_icon                    = "https://img.icons8.com/color/48/minecraft-grass-cube.png"
    mode                         = "proxy"
    outpost                      = "kubernetes-outpost"
  }
}

Some files were not shown because too many files have changed in this diff.