Compare commits: 45b49944b3...auto-updat (100 commits)
| SHA1 |
|---|
| 3ad3f6db11 |
| 353bb877be |
| e523bb8bea |
| b433373725 |
| 3026e53746 |
| 63669c69ff |
| fa98e553cd |
| 055ef8aa77 |
| 22b359a7ee |
| 611e3e31dd |
| ddbd53e476 |
| f8a9d91932 |
| 262fea115d |
| 1e1a015dc0 |
| e76ebdd8c3 |
| 0c2ce55a41 |
| 6e9de5addf |
| 887a9a2306 |
| 776109d795 |
| c998426b44 |
| 536be6a61f |
| 713481c726 |
| f6411b7b65 |
| 3af6d98be8 |
| a45af9d4bc |
| 76937930ce |
| d4ff8d4665 |
| e0cf9371ae |
| 1126cb25bc |
| 44250dc937 |
| d9db73e078 |
| 71ce9f15ef |
| 6b855294af |
| 8dd16e24e6 |
| 3df95f46a5 |
| c0151eb2c9 |
| 6d7e365058 |
| 0b5361323a |
| 56352fef4b |
| 7a1f792391 |
| defe0cbdf5 |
| 7285c62b37 |
| 60f8d86fca |
| 2387653edd |
| 78a639162b |
| 90b197bcbe |
| 156d26aaf9 |
| 700b9cf5ff |
| 84bd1fc05a |
| cb3defd28c |
| 47320b9df6 |
| 0e4204b2ea |
| 3230c67f16 |
| 60c764a657 |
| 869a75f6e4 |
| 26429f1d4c |
| 5718981b67 |
| 8c6ca27523 |
| 4334ea25b7 |
| 18e60344b3 |
| 113f1e668d |
| 9b1caa28ef |
| bd927b7ea6 |
| 1688ac67c2 |
| eda944741e |
| e744629f85 |
| e38ba5b70c |
| 860f83445a |
| 8e1e20f478 |
| 93afe8b187 |
| 54e83b0af9 |
| cbed0939fa |
| e545346fec |
| aae0255843 |
| c7fc3504ba |
| b2701a8435 |
| df39182fea |
| c9bee8c049 |
| 439ad751e9 |
| 4681a3b263 |
| 45f208a4ff |
| a19648aacc |
| 7d7906edc7 |
| 30a3b1cd60 |
| 8dfc8e2549 |
| 8e18775e3e |
| 1ee1b9a1c6 |
| 407b11f40d |
| 82cf742edb |
| 360f260cb5 |
| 31fe1bf95d |
| 7c8a3b2aaa |
| 792421ffc5 |
| 014db9ad81 |
| 602b729d97 |
| fa7b81c1b3 |
| 7deb5971a5 |
| dbfdc0f122 |
| 984f7030a3 |
| 3e2fe905bd |
.gitea/scripts/README-update-readme.md (new file, 147 lines)
@@ -0,0 +1,147 @@

# Auto-Update README Workflow

## Overview

This workflow automatically updates the `README.md` file with the current list of ArgoCD applications, based on the directory structure in `k8s/`.

## How it works

1. **Trigger**: The workflow runs automatically when changes are pushed to `k8s/**` paths
2. **Scan**: A Python script scans the `k8s/` directory structure and finds all applications
3. **Generate**: Creates README.md with badges for all found applications
4. **Create PR**: If changes are detected, creates a Pull Request for manual review

## Files

- `.gitea/workflows/update-readme.yaml` - Gitea Actions workflow
- `.gitea/scripts/generate-readme.py` - Python script for README generation

## Directory Structure

The script expects the following k8s directory structure:

```
k8s/
├── core/        # Core infrastructure applications
│   ├── argocd/
│   ├── authentik/
│   └── ...
├── apps/        # User applications
│   ├── gitea/
│   ├── immich/
│   └── ...
└── games/       # Game servers
    ├── minecraft/
    └── ...
```

Each subdirectory name becomes an application name in the README.

## Required Secrets

The workflow requires the following secrets to be configured in Gitea:

| Secret | Description | Example |
|--------|-------------|---------|
| `GT_URL` | Gitea instance URL | `https://gt.hexor.cy` |
| `GT_TOKEN` | Gitea API token with repo write access | `glpat-xxxxx...` |
| `GT_OWNER` | Repository owner (username or org) | `ab` |
| `GT_REPO` | Repository name | `homelab` |

### How to create a Gitea Token

1. Go to Settings → Applications → Generate New Token
2. Give it a name like "README Update Bot"
3. Select scopes: `repo` (Full control of repositories)
4. Generate and copy the token
5. Add it as a secret in the repository settings

## Badge Format

Badges are generated using a predictable pattern:

```markdown
[![](https://ag.hexor.cy/api/badge?name=app-name&revision=true)](https://ag.hexor.cy/applications/argocd/app-name)
```

This allows you to see at a glance which applications are:
- ✅ Healthy and synced (green badge)
- ⚠️ Degraded or out of sync (yellow badge)
- ❌ Unhealthy or failed (red badge)

## Manual Trigger

You can manually trigger the workflow from Gitea:

1. Go to the Actions tab
2. Select the "Auto-update README" workflow
3. Click "Run workflow"
4. Select a branch and run

## Example Output

The generated README will look like:

```markdown
# homelab

ArgoCD homelab project

## ArgoCD Applications Status

| Application | Status |
| :--- | :---: |
| **argocd** | [![](https://ag.hexor.cy/api/badge?name=argocd&revision=true)](https://ag.hexor.cy/applications/argocd/argocd) |
...
```

## Reviewing Pull Requests

When the workflow creates a PR:

1. Check the Actions tab for the workflow run details
2. Review the PR in the Pull Requests tab
3. Verify the application list matches your k8s/ structure
4. Merge when ready

The PR will include:
- Updated application list
- Timestamp of generation
- Automatic commit message

## Troubleshooting

### No PR created

- Check whether there are actually changes in README.md
- Verify the secrets are configured correctly
- Check the workflow logs in the Actions tab

### Wrong applications listed

- Verify the k8s/ directory structure
- Ensure folder names match the expected application names
- Check for hidden directories (starting with `.`)

### Badge not loading

- Verify the ArgoCD badge API is accessible at `https://ag.hexor.cy`
- Check that the application name matches the ArgoCD application name
- Ensure the application exists in ArgoCD

## Maintenance

### Update badge URL

If you need to change the badge URL pattern, edit:
- `.gitea/scripts/generate-readme.py` - function `generate_badge_line()`

### Change workflow trigger

To modify when the workflow runs, edit:
- `.gitea/workflows/update-readme.yaml` - the `on:` section

### Add new categories

To add new categories (besides core/apps/games), edit:
- `.gitea/scripts/generate-readme.py` - functions `scan_k8s_directory()` and `generate_readme_content()`
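The badge pattern described above is mechanical enough to sketch. The following is a rough illustration (not the repository's actual script) of how a single badge markdown snippet could be assembled from an application name:

```python
# Illustrative only: builds the documented badge markdown for one app name.
def badge_markdown(app_name: str) -> str:
    # The badge image is served by the ArgoCD badge API;
    # the surrounding link points at the application's ArgoCD page.
    badge_url = f"https://ag.hexor.cy/api/badge?name={app_name}&revision=true"
    app_url = f"https://ag.hexor.cy/applications/argocd/{app_name}"
    return f"[![]({badge_url})]({app_url})"

print(badge_markdown("argocd"))
```

Because the URL is derived purely from the directory name, a broken badge almost always means the folder name and the ArgoCD application name have drifted apart.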
.gitea/scripts/generate-readme.py (new file, 161 lines)
@@ -0,0 +1,161 @@

```python
#!/usr/bin/env python3
"""
Generate README.md with ArgoCD application status badges.
Scans the k8s/ directory structure to find all applications and generates badges for them.
"""

import os
import sys
from pathlib import Path
from typing import Dict, List


def scan_k8s_directory(k8s_path: str) -> Dict[str, List[str]]:
    """
    Scan the k8s/ directory and return applications grouped by category.

    Args:
        k8s_path: Path to the k8s directory

    Returns:
        Dictionary with categories as keys and lists of app names as values
    """
    apps_by_category: Dict[str, List[str]] = {
        'core': [],
        'apps': [],
        'games': []
    }

    k8s_dir = Path(k8s_path)

    for category in apps_by_category.keys():
        category_path = k8s_dir / category
        if category_path.exists() and category_path.is_dir():
            # Get all subdirectories (each subdirectory is an app)
            apps = [
                d.name for d in category_path.iterdir()
                if d.is_dir() and not d.name.startswith('.')
            ]
            apps_by_category[category] = sorted(apps)

    return apps_by_category


def generate_badge_line(app_name: str) -> str:
    """
    Generate a markdown table row with the status badge for an application.

    Args:
        app_name: Name of the application

    Returns:
        Markdown-formatted string with the badge
    """
    badge_url = f"https://ag.hexor.cy/api/badge?name={app_name}&revision=true"
    app_url = f"https://ag.hexor.cy/applications/argocd/{app_name}"
    return f"| **{app_name}** | [![]({badge_url})]({app_url}) |"


def generate_readme_content(apps_by_category: Dict[str, List[str]]) -> str:
    """
    Generate README.md content with all applications.

    Args:
        apps_by_category: Dictionary with apps grouped by category

    Returns:
        Complete README.md content
    """
    lines = [
        "# homelab",
        "",
        "ArgoCD homelab project",
        "",
        "## ArgoCD Applications Status",
        "",
        "<table>",
        "<tr>",
        "<td valign=\"top\" width=\"50%\">",
        "",
        "### Core Applications",
        "",
        "| Application | Status |",
        "| :--- | :---: |"
    ]

    # Add core applications
    for app in apps_by_category.get('core', []):
        lines.append(generate_badge_line(app))

    lines.extend([
        "",
        "### Games",
        "",
        "| Application | Status |",
        "| :--- | :---: |"
    ])

    # Add games
    for app in apps_by_category.get('games', []):
        lines.append(generate_badge_line(app))

    lines.extend([
        "</td>",
        "<td valign=\"top\" width=\"50%\">",
        "",
        "### Applications",
        "",
        "| Application | Status |",
        "| :--- | :---: |"
    ])

    # Add applications
    for app in apps_by_category.get('apps', []):
        lines.append(generate_badge_line(app))

    lines.extend([
        "",
        "</td>",
        "</tr>",
        "</table>"
    ])

    return '\n'.join(lines) + '\n'


def main():
    if len(sys.argv) < 2:
        print("Usage: generate-readme.py <k8s-directory> [output-file]")
        print("Example: generate-readme.py k8s/ README.md")
        sys.exit(1)

    k8s_path = sys.argv[1]
    output_file = sys.argv[2] if len(sys.argv) > 2 else "README.md"

    if not os.path.exists(k8s_path):
        print(f"Error: Directory {k8s_path} does not exist")
        sys.exit(1)

    print(f"📁 Scanning {k8s_path}...")
    apps_by_category = scan_k8s_directory(k8s_path)

    # Print statistics
    total_apps = sum(len(apps) for apps in apps_by_category.values())
    print(f"✅ Found {total_apps} applications:")
    for category, apps in apps_by_category.items():
        if apps:
            print(f" - {category}: {len(apps)} apps")

    print(f"📝 Generating {output_file}...")
    readme_content = generate_readme_content(apps_by_category)

    with open(output_file, 'w', encoding='utf-8') as f:
        f.write(readme_content)

    print(f"✅ {output_file} generated successfully")
    print(f" Total lines: {len(readme_content.splitlines())}")
    print(f" File size: {len(readme_content)} bytes")


if __name__ == "__main__":
    main()
```
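To see the scanning behavior in isolation, here is a stripped-down sketch of the same grouping logic (a simplified re-implementation for illustration, not the file above), run against a throwaway directory tree:

```python
# Sketch: group k8s/ subdirectories by category, skipping hidden dirs.
import tempfile
from pathlib import Path

def scan(k8s_path: str) -> dict:
    categories = {'core': [], 'apps': [], 'games': []}
    for category in categories:
        category_path = Path(k8s_path) / category
        if category_path.is_dir():
            # Each visible subdirectory counts as one application.
            categories[category] = sorted(
                d.name for d in category_path.iterdir()
                if d.is_dir() and not d.name.startswith('.')
            )
    return categories

with tempfile.TemporaryDirectory() as tmp:
    # Build a scratch tree mirroring the documented layout.
    for sub in ('core/argocd', 'apps/gitea', 'apps/immich',
                'games/minecraft', 'apps/.hidden'):
        (Path(tmp) / sub).mkdir(parents=True)
    print(scan(tmp))
    # → {'core': ['argocd'], 'apps': ['gitea', 'immich'], 'games': ['minecraft']}
```

Note that `apps/.hidden` is dropped and names are sorted per category, which keeps the generated README deterministic.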
.gitea/workflows/update-readme.yaml (new file, 164 lines)
@@ -0,0 +1,164 @@

```yaml
name: 'Auto-update README'

on:
  push:
    branches: [ "main" ]
    paths:
      - 'k8s/**'
  workflow_dispatch:

permissions:
  contents: write
  pull-requests: write

jobs:
  update-readme:
    name: 'Generate README and Create MR'
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Configure Git
        run: |
          git config --global user.name "Gitea Actions Bot"
          git config --global user.email "actions@gitea.local"

      - name: Generate README
        run: |
          echo "📋 Starting README generation..."
          python3 .gitea/scripts/generate-readme.py k8s/ README.md

          if [ -f "README.md" ]; then
            echo "✅ README generated successfully"
            echo "📄 File size: $(wc -c < README.md) bytes"
            echo "📄 Lines: $(wc -l < README.md)"
          else
            echo "❌ README not generated"
            exit 1
          fi

      - name: Check for changes
        id: check_changes
        run: |
          if git diff --quiet README.md; then
            echo "No changes detected in README.md"
            echo "has_changes=false" >> $GITHUB_OUTPUT
          else
            echo "Changes detected in README.md"
            echo "has_changes=true" >> $GITHUB_OUTPUT
          fi

      - name: Create Pull Request
        # id is required so the Summary step can read this step's outputs
        id: create_pr
        if: steps.check_changes.outputs.has_changes == 'true'
        run: |
          # Set variables
          GITEA_URL="${{ secrets.GT_URL }}"
          GITEA_TOKEN="${{ secrets.GT_TOKEN }}"
          GITEA_OWNER="${{ secrets.GT_OWNER }}"
          GITEA_REPO="${{ secrets.GT_REPO }}"
          BRANCH_NAME="auto-update-readme-$(date +%Y%m%d-%H%M%S)"

          echo "🔍 Configuration:"
          echo "GITEA_URL: ${GITEA_URL:-NOT SET}"
          echo "GITEA_OWNER: ${GITEA_OWNER:-NOT SET}"
          echo "GITEA_REPO: ${GITEA_REPO:-NOT SET}"
          echo "BRANCH_NAME: $BRANCH_NAME"

          # Create and push a new branch
          echo "🌿 Creating branch: $BRANCH_NAME"
          git checkout -b "$BRANCH_NAME"
          git add README.md
          git commit -m "Auto-update README with current k8s applications" \
            -m "Generated by CI/CD workflow on $(date +%Y-%m-%d\ %H:%M:%S)" \
            -m "This PR updates the README.md file with the current list of applications found in the k8s/ directory structure."

          # Push the branch to the remote
          echo "📤 Pushing branch to remote..."
          git push origin "$BRANCH_NAME"

          # Create a Pull Request using the Gitea API
          echo "🔀 Creating Pull Request..."

          PR_TITLE="Auto-update README with k8s applications"

          # Create the PR body
          cat > /tmp/pr_body.json <<EOF
          {
            "title": "$PR_TITLE",
            "body": "This PR automatically updates README.md based on the current k8s/ directory structure.\n\n## Changes\n- Updated application list in README.md\n- Applications are now synced with k8s/ folders\n\n## Review\nPlease review and merge if everything looks correct.\n\n---\n🤖 This PR was automatically generated by CI/CD workflow\n⏰ Generated at: $(date '+%Y-%m-%d %H:%M:%S')",
            "head": "$BRANCH_NAME",
            "base": "main"
          }
          EOF

          # Create the PR via the API
          echo "Making API request to: $GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/pulls"
          echo "Request body:"
          cat /tmp/pr_body.json

          RESPONSE=$(curl -s -w "\nHTTP_CODE:%{http_code}" -X POST \
            -H "Authorization: token $GITEA_TOKEN" \
            -H "Content-Type: application/json" \
            -d @/tmp/pr_body.json \
            "$GITEA_URL/api/v1/repos/$GITEA_OWNER/$GITEA_REPO/pulls")

          # Extract the HTTP code and response body
          HTTP_CODE=$(echo "$RESPONSE" | grep "HTTP_CODE:" | cut -d':' -f2)
          RESPONSE_BODY=$(echo "$RESPONSE" | sed '/HTTP_CODE:/d')

          echo "API Response (HTTP $HTTP_CODE):"
          echo "$RESPONSE_BODY"

          # Extract the PR number from the response
          PR_NUMBER=$(echo "$RESPONSE_BODY" | grep -o '"number":[0-9]*' | head -1 | cut -d':' -f2)

          if [ -n "$PR_NUMBER" ] && [ "$HTTP_CODE" = "201" ]; then
            echo "✅ Pull Request created successfully!"
            echo "📝 PR #$PR_NUMBER"
            echo "🔗 URL: $GITEA_URL/$GITEA_OWNER/$GITEA_REPO/pulls/$PR_NUMBER"

            # Save PR info for the summary
            echo "pr_number=$PR_NUMBER" >> $GITHUB_OUTPUT
            echo "pr_url=$GITEA_URL/$GITEA_OWNER/$GITEA_REPO/pulls/$PR_NUMBER" >> $GITHUB_OUTPUT
          else
            echo "⚠️ Failed to create Pull Request (HTTP $HTTP_CODE)"
            echo "Response: $RESPONSE_BODY"

            # Check whether the PR already exists
            if echo "$RESPONSE_BODY" | grep -q "already exists"; then
              echo "ℹ️ PR already exists for this branch"
              exit 0
            fi

            exit 1
          fi

      - name: Summary
        if: always()
        run: |
          echo "## 📊 README Update Summary" >> $GITHUB_STEP_SUMMARY

          if [ -f "README.md" ]; then
            echo "- ✅ README generated successfully" >> $GITHUB_STEP_SUMMARY

            if [ "${{ steps.check_changes.outputs.has_changes }}" = "true" ]; then
              echo "- ✅ Changes detected" >> $GITHUB_STEP_SUMMARY
              echo "- ✅ Pull Request created" >> $GITHUB_STEP_SUMMARY

              if [ -n "${{ steps.create_pr.outputs.pr_number }}" ]; then
                echo "" >> $GITHUB_STEP_SUMMARY
                echo "**PR:** [#${{ steps.create_pr.outputs.pr_number }}](${{ steps.create_pr.outputs.pr_url }})" >> $GITHUB_STEP_SUMMARY
              fi
            else
              echo "- ℹ️ No changes detected - README already up to date" >> $GITHUB_STEP_SUMMARY
            fi
          else
            echo "- ❌ README generation failed" >> $GITHUB_STEP_SUMMARY
          fi

          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Generated at:** $(date)" >> $GITHUB_STEP_SUMMARY
```
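The `-w "\nHTTP_CODE:%{http_code}"` trick in the Create Pull Request step can be exercised without a live Gitea instance. The sketch below substitutes a canned response for the real `curl` call (the PR number 42 is made up) and applies the same splitting and extraction:

```shell
# Stand-in for: RESPONSE=$(curl -s -w "\nHTTP_CODE:%{http_code}" ...)
RESPONSE=$(printf '{"number":42,"title":"Auto-update README"}\nHTTP_CODE:201')

# Split the trailing status line back out of the combined output.
HTTP_CODE=$(echo "$RESPONSE" | grep "HTTP_CODE:" | cut -d':' -f2)
RESPONSE_BODY=$(echo "$RESPONSE" | sed '/HTTP_CODE:/d')

# Pull the PR number out of the JSON body, as the workflow does.
PR_NUMBER=$(echo "$RESPONSE_BODY" | grep -o '"number":[0-9]*' | head -1 | cut -d':' -f2)

echo "HTTP $HTTP_CODE, PR #$PR_NUMBER"
# prints: HTTP 201, PR #42
```

The grep-based JSON parsing is fragile by design (no `jq` dependency); it only works because the Gitea response puts `"number"` early in the object.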
.gitignore (2 lines added)
@@ -14,6 +14,8 @@ crash.*.log
*.tfvars.json
!*terraform.tfvars

# claude ai
.claude/
# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
README.md
@@ -16,10 +16,11 @@ ArgoCD homelab project
| **authentik** | [![](https://ag.hexor.cy/api/badge?name=authentik&revision=true)](https://ag.hexor.cy/applications/argocd/authentik) |
| **cert-manager** | [![](https://ag.hexor.cy/api/badge?name=cert-manager&revision=true)](https://ag.hexor.cy/applications/argocd/cert-manager) |
| **external-secrets** | [![](https://ag.hexor.cy/api/badge?name=external-secrets&revision=true)](https://ag.hexor.cy/applications/argocd/external-secrets) |
| **k3s-system-upgrade** | [![](https://ag.hexor.cy/api/badge?name=k3s-system-upgrade&revision=true)](https://ag.hexor.cy/applications/argocd/k3s-system-upgrade) |
| **kube-system-custom** | [![](https://ag.hexor.cy/api/badge?name=kube-system-custom&revision=true)](https://ag.hexor.cy/applications/argocd/kube-system-custom) |
| **kubernetes-dashboard** | [![](https://ag.hexor.cy/api/badge?name=kubernetes-dashboard&revision=true)](https://ag.hexor.cy/applications/argocd/kubernetes-dashboard) |
| **postgresql** | [![](https://ag.hexor.cy/api/badge?name=postgresql&revision=true)](https://ag.hexor.cy/applications/argocd/postgresql) |
| **prom-stack** | [![](https://ag.hexor.cy/api/badge?name=prom-stack&revision=true)](https://ag.hexor.cy/applications/argocd/prom-stack) |
| **system-upgrade** | [![](https://ag.hexor.cy/api/badge?name=system-upgrade&revision=true)](https://ag.hexor.cy/applications/argocd/system-upgrade) |

### Games

@@ -39,18 +40,22 @@ ArgoCD homelab project
| **greece-notifier** | [![](https://ag.hexor.cy/api/badge?name=greece-notifier&revision=true)](https://ag.hexor.cy/applications/argocd/greece-notifier) |
| **hexound** | [![](https://ag.hexor.cy/api/badge?name=hexound&revision=true)](https://ag.hexor.cy/applications/argocd/hexound) |
| **immich** | [![](https://ag.hexor.cy/api/badge?name=immich&revision=true)](https://ag.hexor.cy/applications/argocd/immich) |
| **iperf3** | [![](https://ag.hexor.cy/api/badge?name=iperf3&revision=true)](https://ag.hexor.cy/applications/argocd/iperf3) |
| **jellyfin** | [![](https://ag.hexor.cy/api/badge?name=jellyfin&revision=true)](https://ag.hexor.cy/applications/argocd/jellyfin) |
| **k8s-secrets** | [![](https://ag.hexor.cy/api/badge?name=k8s-secrets&revision=true)](https://ag.hexor.cy/applications/argocd/k8s-secrets) |
| **khm** | [![](https://ag.hexor.cy/api/badge?name=khm&revision=true)](https://ag.hexor.cy/applications/argocd/khm) |
| **ollama** | [![](https://ag.hexor.cy/api/badge?name=ollama&revision=true)](https://ag.hexor.cy/applications/argocd/ollama) |
| **paperless** | [![](https://ag.hexor.cy/api/badge?name=paperless&revision=true)](https://ag.hexor.cy/applications/argocd/paperless) |
| **pasarguard** | [![](https://ag.hexor.cy/api/badge?name=pasarguard&revision=true)](https://ag.hexor.cy/applications/argocd/pasarguard) |
| **qbittorent-nas** | [![](https://ag.hexor.cy/api/badge?name=qbittorent-nas&revision=true)](https://ag.hexor.cy/applications/argocd/qbittorent-nas) |
| **rustdesk** | [![](https://ag.hexor.cy/api/badge?name=rustdesk&revision=true)](https://ag.hexor.cy/applications/argocd/rustdesk) |
| **sonarr-stack** | [![](https://ag.hexor.cy/api/badge?name=sonarr-stack&revision=true)](https://ag.hexor.cy/applications/argocd/sonarr-stack) |
| **stirling-pdf** | [![](https://ag.hexor.cy/api/badge?name=stirling-pdf&revision=true)](https://ag.hexor.cy/applications/argocd/stirling-pdf) |
| **syncthing** | [![](https://ag.hexor.cy/api/badge?name=syncthing&revision=true)](https://ag.hexor.cy/applications/argocd/syncthing) |
| **tg-bots** | [![](https://ag.hexor.cy/api/badge?name=tg-bots&revision=true)](https://ag.hexor.cy/applications/argocd/tg-bots) |
| **vaultwarden** | [![](https://ag.hexor.cy/api/badge?name=vaultwarden&revision=true)](https://ag.hexor.cy/applications/argocd/vaultwarden) |
| **vpn** | [![](https://ag.hexor.cy/api/badge?name=vpn&revision=true)](https://ag.hexor.cy/applications/argocd/vpn) |

</td>
</tr>
</table>
k8s/apps/iperf3/app.yaml (new file, 21 lines)
@@ -0,0 +1,21 @@

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: iperf3
  namespace: argocd
spec:
  project: apps
  destination:
    namespace: iperf3
    server: https://kubernetes.default.svc
  source:
    repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
    targetRevision: HEAD
    path: k8s/apps/iperf3
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true
```
k8s/apps/iperf3/daemonset.yaml (new file, 56 lines)
@@ -0,0 +1,56 @@

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: iperf3-server
spec:
  selector:
    matchLabels:
      app: iperf3-server
  template:
    metadata:
      labels:
        app: iperf3-server
    spec:
      subdomain: iperf3
      initContainers:
        - name: set-hostname
          image: busybox:1.35
          command: ['sh', '-c']
          args:
            - |
              NODE_NAME=$(echo $NODE_NAME | cut -d'.' -f1 | tr '[:upper:]' '[:lower:]')
              echo "iperf3-${NODE_NAME}" > /etc/hostname
              hostname "iperf3-${NODE_NAME}"
          securityContext:
            privileged: true
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
      containers:
        - name: iperf3-server
          image: networkstatic/iperf3:latest
          args: ["-s"]
          ports:
            - containerPort: 5201
              protocol: TCP
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: HOSTNAME
              value: $(NODE_NAME)
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - effect: NoExecute
          operator: Exists
```
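The `set-hostname` init container's normalization pipeline can be tried standalone. Assuming a node name such as `Master.Tail2fe2d.Ts.Net` (a hypothetical example), it keeps only the first DNS label and lowercases it before building the `iperf3-` hostname:

```shell
# Same pipeline as the init container, with a made-up node name.
NODE_NAME="Master.Tail2fe2d.Ts.Net"
SHORT=$(echo $NODE_NAME | cut -d'.' -f1 | tr '[:upper:]' '[:lower:]')
echo "iperf3-${SHORT}"
# prints: iperf3-master
```

This is what makes the per-node headless-service names like `iperf3-<node>.iperf3.iperf3.svc.cluster.local` predictable for the ServiceMonitor relabeling further down.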
k8s/apps/iperf3/iperf3-exporter-daemonset.yaml (new file, 40 lines)
@@ -0,0 +1,40 @@

```yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: iperf3-exporter
  labels:
    app: iperf3-exporter
spec:
  selector:
    matchLabels:
      app: iperf3-exporter
  template:
    metadata:
      labels:
        app: iperf3-exporter
    spec:
      containers:
        - name: iperf3-exporter
          image: ghcr.io/edgard/iperf3_exporter:1.2.2
          ports:
            - containerPort: 9579
              name: metrics
              protocol: TCP
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "200m"
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - effect: NoExecute
          operator: Exists
```
k8s/apps/iperf3/iperf3-exporter-service.yaml (new file, 15 lines)
@@ -0,0 +1,15 @@

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: iperf3-exporter
  labels:
    app: iperf3-exporter
spec:
  selector:
    app: iperf3-exporter
  ports:
    - name: metrics
      protocol: TCP
      port: 9579
      targetPort: 9579
```
k8s/apps/iperf3/kustomization.yaml (new file, 10 lines)
@@ -0,0 +1,10 @@

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - daemonset.yaml
  - service-headless.yaml
  - iperf3-exporter-daemonset.yaml
  - iperf3-exporter-service.yaml
  - servicemonitor.yaml
```
k8s/apps/iperf3/service-headless.yaml (new file, 14 lines)
@@ -0,0 +1,14 @@

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: iperf3
spec:
  clusterIP: None
  selector:
    app: iperf3-server
  ports:
    - name: iperf3
      protocol: TCP
      port: 5201
      targetPort: 5201
```
k8s/apps/iperf3/servicemonitor.yaml (new file, 36 lines)
@@ -0,0 +1,36 @@

```yaml
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: iperf3-exporter
  labels:
    app: iperf3-exporter
    release: prometheus
spec:
  selector:
    matchLabels:
      app: iperf3-exporter
  endpoints:
    - port: metrics
      path: /probe
      interval: 5m
      scrapeTimeout: 30s
      params:
        duration:
          - "10"
        streams:
          - "4"
      relabelings:
        - sourceLabels: [__address__]
          targetLabel: __param_target
          regex: (.+):9579
          replacement: iperf3-${1}.iperf3.iperf3.svc.cluster.local:5201
        - sourceLabels: [__param_target]
          targetLabel: instance
        - targetLabel: __address__
          replacement: iperf3-exporter.iperf3.svc.cluster.local:9579
      metricRelabelings:
        - sourceLabels: [__name__]
          regex: iperf3_(.+)
          targetLabel: __name__
          replacement: network_${1}
```
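The first relabeling rewrites a scrape address ending in `:9579` into the per-node iperf3 server address on port 5201 via the capture group. Prometheus anchors relabel regexes fully, so an equivalent Python sketch adds explicit anchors (the node name here is a made-up example):

```python
# Sketch of the relabeling's regex/replacement semantics, not Prometheus itself.
import re

address = "somenode:9579"  # hypothetical __address__ value
# Same pattern/replacement pair as the relabeling above, with ^$ to mimic
# Prometheus' full-match behavior.
target = re.sub(r'^(.+):9579$',
                r'iperf3-\1.iperf3.iperf3.svc.cluster.local:5201',
                address)
print(target)
# prints: iperf3-somenode.iperf3.iperf3.svc.cluster.local:5201
```

The second relabeling then copies that value into `instance`, and the third points `__address__` back at the exporter service, so the exporter probes the per-node server while metrics stay labeled per node.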
k8s/apps/ollama/app.yaml (new file, 21 lines)
@@ -0,0 +1,21 @@

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ollama
  namespace: argocd
spec:
  project: apps
  destination:
    namespace: ollama
    server: https://kubernetes.default.svc
  source:
    repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
    targetRevision: HEAD
    path: k8s/apps/ollama
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true
```
k8s/apps/ollama/external-secrets.yaml (new file, 33 lines)
@@ -0,0 +1,33 @@

```yaml
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: oidc-secret
spec:
  target:
    name: oidc-secret
    deletionPolicy: Delete
    template:
      type: Opaque
      data:
        OAUTH_CLIENT_SECRET: |-
          {{ .OAUTH_CLIENT_SECRET }}
        OAUTH_CLIENT_ID: |-
          {{ .OAUTH_CLIENT_ID }}
  data:
    - secretKey: OAUTH_CLIENT_SECRET
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: 97959a8b-e3b2-4b34-bc54-ddb6476a12ea
        property: fields[0].value
    - secretKey: OAUTH_CLIENT_ID
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: 97959a8b-e3b2-4b34-bc54-ddb6476a12ea
        property: fields[1].value
```
k8s/apps/ollama/kustomization.yaml (new file, 21 lines)
@@ -0,0 +1,21 @@

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - external-secrets.yaml

helmCharts:
  - name: ollama
    repo: https://otwld.github.io/ollama-helm/
    version: 0.4.0
    releaseName: ollama
    namespace: ollama
    valuesFile: ollama-values.yaml
    includeCRDs: true
  - name: open-webui
    repo: https://helm.openwebui.com/
    version: 8.14.0
    releaseName: openweb-ui
    namespace: ollama
    valuesFile: openweb-ui-values.yaml
    includeCRDs: true
```
k8s/apps/ollama/ollama-values.yaml (new file, 8 lines)
@@ -0,0 +1,8 @@

```yaml
image:
  repository: ollama/ollama
  pullPolicy: Always
  tag: "latest"
nodeSelector:
  kubernetes.io/hostname: master.tail2fe2d.ts.net
ingress:
  enabled: false
```
k8s/apps/ollama/openweb-ui-values.yaml (new file, 52 lines)
@@ -0,0 +1,52 @@

```yaml
clusterDomain: ai.hexor.cy

extraEnvVars:
  GLOBAL_LOG_LEVEL: debug
  OAUTH_PROVIDER_NAME: authentik
  OPENID_PROVIDER_URL: https://idm.hexor.cy/application/o/openwebui/.well-known/openid-configuration
  OPENID_REDIRECT_URI: https://ai.hexor.cy/oauth/oidc/callback
  WEBUI_URL: https://ai.hexor.cy
  # Allows auto-creation of new users using OAuth. Must be paired with ENABLE_LOGIN_FORM=false.
  ENABLE_OAUTH_SIGNUP: true
  # Disables the user/password login form. Required when ENABLE_OAUTH_SIGNUP=true.
  ENABLE_LOGIN_FORM: false
  OAUTH_MERGE_ACCOUNTS_BY_EMAIL: true

extraEnvFrom:
  - secretRef:
      name: oidc-secret
nodeSelector:
  kubernetes.io/hostname: master.tail2fe2d.ts.net
ollamaUrls:
  - http://ollama.ollama.svc:11434
ollama:
  enabled: false
  ollama:
    gpu:
      enabled: false
    models:
      pull:
        - qwen3-vl:8b
      run:
        - qwen3-vl:8b

pipelines:
  enabled: true

tika:
  enabled: true

websocket:
  enabled: true

ingress:
  enabled: true
  class: traefik
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd
  host: "ai.hexor.cy"
  tls:
    - hosts:
        - '*.hexor.cy'
      secretName: ollama-tls
```
k8s/apps/pasarguard/certificate.yaml (new file, 14 lines)
@@ -0,0 +1,14 @@

```yaml
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pasarguard-tls
  labels:
    app: pasarguard
spec:
  secretName: pasarguard-tls
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  dnsNames:
    - ps.hexor.cy
```
264
k8s/apps/pasarguard/configmap-scripts.yaml
Normal file
264
k8s/apps/pasarguard/configmap-scripts.yaml
Normal file
@@ -0,0 +1,264 @@
|
||||
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: pasarguard-scripts
  labels:
    app: pasarguard-node
data:
  init-uuid.sh: |
    #!/bin/bash
    set -e
    echo "Started"
    # NODE_NAME is already set via environment variable
    NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)

    # Get DNS name from node label xray-node-address
    DNS_NAME=$(kubectl get node "${NODE_NAME}" -o jsonpath='{.metadata.labels.xray-node-address}')

    if [ -z "${DNS_NAME}" ]; then
      echo "ERROR: Node ${NODE_NAME} does not have label 'xray-node-address'"
      exit 1
    fi

    echo "Node: ${NODE_NAME}"
    echo "DNS Name from label: ${DNS_NAME}"

    # Use DNS name for ConfigMap name to ensure uniqueness
    CONFIGMAP_NAME="node-uuid-${DNS_NAME//./-}"

    echo "Checking ConfigMap: ${CONFIGMAP_NAME}"

    # Check if ConfigMap exists and get UUID
    if kubectl get configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" &>/dev/null; then
      echo "ConfigMap exists, reading UUID..."
      API_KEY=$(kubectl get configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" -o jsonpath='{.data.API_KEY}')

      if [ -z "${API_KEY}" ]; then
        echo "UUID not found in ConfigMap, generating new one..."
        API_KEY=$(cat /proc/sys/kernel/random/uuid)
        kubectl patch configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" --type merge -p "{\"data\":{\"API_KEY\":\"${API_KEY}\"}}"
      else
        echo "Using existing UUID from ConfigMap"
      fi
    else
      echo "ConfigMap does not exist, creating new one..."
      API_KEY=$(cat /proc/sys/kernel/random/uuid)
      kubectl create configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" \
        --from-literal=API_KEY="${API_KEY}" \
        --from-literal=NODE_NAME="${NODE_NAME}"
    fi

    # Save UUID and node info to shared volume for the main container
    echo -n "${API_KEY}" > /shared/api-key
    echo -n "${NODE_NAME}" > /shared/node-name
    echo -n "${CONFIGMAP_NAME}" > /shared/configmap-name
    echo "UUID initialized: ${API_KEY}"
    echo "Node name: ${NODE_NAME}"
    echo "ConfigMap: ${CONFIGMAP_NAME}"

    # Create Certificate for this node using DNS name from label
    CERT_NAME="pasarguard-node-${DNS_NAME//./-}"

    echo "Creating Certificate: ${CERT_NAME} for ${DNS_NAME}"

    # Check if Certificate already exists
    if ! kubectl get certificate "${CERT_NAME}" -n "${NAMESPACE}" &>/dev/null; then
      echo "Certificate does not exist, creating..."
      cat <<EOF | kubectl apply -f -
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: ${CERT_NAME}
      namespace: ${NAMESPACE}
    spec:
      secretName: ${CERT_NAME}-tls
      issuerRef:
        name: letsencrypt
        kind: ClusterIssuer
      dnsNames:
        - ${DNS_NAME}
    EOF
    else
      echo "Certificate already exists"
    fi

    # Wait for certificate to be ready
    echo "Waiting for certificate to be ready..."
    for i in {1..600}; do
      if kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" &>/dev/null; then
        echo "Certificate secret is ready!"
        break
      fi
      echo "Waiting for certificate... ($i/600)"
      sleep 1
    done

    if ! kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" &>/dev/null; then
      echo "WARNING: Certificate secret not ready after 600 seconds"
    else
      # Extract certificate and key from secret to shared volume
      echo "Extracting certificate and key..."
      kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" -o jsonpath='{.data.tls\.crt}' | base64 -d > /shared/tls.crt
      kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" -o jsonpath='{.data.tls\.key}' | base64 -d > /shared/tls.key
      echo "Certificate and key extracted successfully."
      cat /shared/tls.crt
    fi

    # Create individual Service and Endpoints for this node
    # Take only first part of node name before first dot
    NODE_SHORT_NAME="${NODE_NAME%%.*}"
    SERVICE_NAME="${NODE_SHORT_NAME}"

    # Get node internal IP (take only first IP if multiple)
    NODE_IP=$(kubectl get node "${NODE_NAME}" -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}' | awk '{print $1}')

    echo "Creating Service: ${SERVICE_NAME} for node ${NODE_NAME} (short: ${NODE_SHORT_NAME}) with IP ${NODE_IP}"

    # Create Service without selector
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: ${SERVICE_NAME}
      namespace: ${NAMESPACE}
      labels:
        app: pasarguard-node
        node: ${NODE_NAME}
    spec:
      clusterIP: None
      ports:
        - name: api
          port: 62050
          protocol: TCP
          targetPort: 62050
        - name: metrics
          port: 9550
          protocol: TCP
          targetPort: 9550
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: ${SERVICE_NAME}
      namespace: ${NAMESPACE}
      labels:
        app: pasarguard-node
        node: ${NODE_NAME}
    subsets:
      - addresses:
          - ip: ${NODE_IP}
            nodeName: ${NODE_NAME}
        ports:
          - name: api
            port: 62050
            protocol: TCP
          - name: metrics
            port: 9550
            protocol: TCP
    EOF

    echo "Service created: ${SERVICE_NAME}.${NAMESPACE}.svc.cluster.local -> ${NODE_IP}:62050"
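The resource names above are derived with two bash parameter expansions: `${DNS_NAME//./-}` turns a dotted hostname into a DNS-1123-safe resource name, and `${NODE_NAME%%.*}` keeps only the short node name. A standalone sketch with made-up sample hostnames:

```shell
#!/bin/bash
# Sample values (hypothetical; the real script reads them from the node).
DNS_NAME="node1.hexor.cy"
NODE_NAME="cy.tail2fe2d.ts.net"

# ${VAR//./-} replaces every dot with a dash.
CONFIGMAP_NAME="node-uuid-${DNS_NAME//./-}"

# ${VAR%%.*} strips everything from the first dot onward.
NODE_SHORT_NAME="${NODE_NAME%%.*}"

echo "${CONFIGMAP_NAME}"   # node-uuid-node1-hexor-cy
echo "${NODE_SHORT_NAME}"  # cy
```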
  exporter-start.sh: |
    #!/bin/sh
    # Install required tools
    apk add --no-cache wget curl iproute2-ss bash

    # Download v2ray-exporter
    echo "Downloading v2ray-exporter..."
    ARCH=$(uname -m)
    case $ARCH in
      x86_64)
        BINARY_ARCH="amd64"
        ;;
      aarch64|arm64)
        BINARY_ARCH="arm64"
        ;;
      *)
        echo "Unsupported architecture: $ARCH"
        exit 1
        ;;
    esac

    echo "Detected architecture: $ARCH, using binary: v2ray-exporter_linux_$BINARY_ARCH"
    wget -L -O /tmp/v2ray-exporter "https://github.com/wi1dcard/v2ray-exporter/releases/download/v0.6.0/v2ray-exporter_linux_$BINARY_ARCH"
    mv /tmp/v2ray-exporter /usr/local/bin/v2ray-exporter
    chmod +x /usr/local/bin/v2ray-exporter

    # Wait for initial API port file
    echo "Waiting for initial xray API port file..."
    while [ ! -f /shared/xray-api-port ]; do
      echo "Waiting for API port file..."
      sleep 2
    done

    # Main loop - restart exporter if it crashes or port changes
    while true; do
      if [ -f /shared/xray-api-port ]; then
        API_PORT=$(cat /shared/xray-api-port)
        if [ -n "$API_PORT" ]; then
          echo "Starting v2ray-exporter with endpoint 127.0.0.1:$API_PORT"
          /usr/local/bin/v2ray-exporter --v2ray-endpoint "127.0.0.1:$API_PORT" --listen ":9550" &
          EXPORTER_PID=$!

          # Wait for exporter to exit or port file to change
          while kill -0 $EXPORTER_PID 2>/dev/null; do
            if [ -f /shared/xray-api-port ]; then
              NEW_PORT=$(cat /shared/xray-api-port)
              if [ "$NEW_PORT" != "$API_PORT" ]; then
                echo "API port changed from $API_PORT to $NEW_PORT, restarting exporter"
                kill $EXPORTER_PID 2>/dev/null
                wait $EXPORTER_PID 2>/dev/null
                break
              fi
            fi
            sleep 5
          done

          echo "Exporter stopped, restarting..."
          wait $EXPORTER_PID 2>/dev/null
        fi
      fi
      sleep 2
    done
  pasarguard-start.sh: |
    #!/bin/sh
    # Read API_KEY from shared volume created by init container
    if [ -f /shared/api-key ]; then
      export API_KEY=$(cat /shared/api-key)
      echo "Loaded API_KEY from shared volume"
    else
      echo "WARNING: API_KEY file not found, using default"
    fi

    cd /app

    # Start main process in background
    ./main &
    MAIN_PID=$!

    # Start continuous port monitoring in background
    {
      sleep 10  # Wait for xray to start initially
      LAST_PORT=""

      while true; do
        API_PORT=$(netstat -tlpn | grep xray | grep 127.0.0.1 | awk '{print $4}' | cut -d: -f2 | head -1)
        if [ -n "$API_PORT" ] && [ "$API_PORT" != "$LAST_PORT" ]; then
          echo "Found xray API port: $API_PORT"
          echo -n "$API_PORT" > /shared/xray-api-port
          LAST_PORT="$API_PORT"
        fi
        sleep 5  # Check every 5 seconds
      done
    } &
    PORT_MONITOR_PID=$!

    # Wait for main process to finish
    wait $MAIN_PID

    # Clean up port monitor
    kill $PORT_MONITOR_PID 2>/dev/null
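The two scripts above hand off through `/shared/xray-api-port`: the node container writes the current xray API port, and the exporter sidecar restarts v2ray-exporter whenever the file's contents change. A minimal local sketch of that change-detection step (the temp file stands in for the shared volume):

```shell
#!/bin/sh
# Stand-in for /shared/xray-api-port on the shared emptyDir volume.
PORT_FILE=$(mktemp)

# The node container publishes the port it found.
echo -n "10085" > "$PORT_FILE"
API_PORT=$(cat "$PORT_FILE")

# Simulate xray restarting on a different random API port.
echo -n "10099" > "$PORT_FILE"
NEW_PORT=$(cat "$PORT_FILE")

# Same comparison the exporter loop makes before killing and relaunching.
if [ "$NEW_PORT" != "$API_PORT" ]; then
  ACTION="restart-exporter"
else
  ACTION="keep-running"
fi
echo "$ACTION"
rm -f "$PORT_FILE"
```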
@@ -97,178 +97,66 @@ spec:
        - name: init-uuid
          image: bitnami/kubectl:latest
          env:
            - name: GODEBUG
              value: "x509sha1=1"
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          command:
            - /bin/bash
            - -c
            - |
              set -e

              # NODE_NAME is already set via environment variable
              NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)

              # Get DNS name from node label xray-node-address
              DNS_NAME=$(kubectl get node "${NODE_NAME}" -o jsonpath='{.metadata.labels.xray-node-address}')

              if [ -z "${DNS_NAME}" ]; then
                echo "ERROR: Node ${NODE_NAME} does not have label 'xray-node-address'"
                exit 1
              fi

              echo "Node: ${NODE_NAME}"
              echo "DNS Name from label: ${DNS_NAME}"

              # Use DNS name for ConfigMap name to ensure uniqueness
              CONFIGMAP_NAME="node-uuid-${DNS_NAME//./-}"

              echo "Checking ConfigMap: ${CONFIGMAP_NAME}"

              # Check if ConfigMap exists and get UUID
              if kubectl get configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" &>/dev/null; then
                echo "ConfigMap exists, reading UUID..."
                API_KEY=$(kubectl get configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" -o jsonpath='{.data.API_KEY}')

                if [ -z "${API_KEY}" ]; then
                  echo "UUID not found in ConfigMap, generating new one..."
                  API_KEY=$(cat /proc/sys/kernel/random/uuid)
                  kubectl patch configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" --type merge -p "{\"data\":{\"API_KEY\":\"${API_KEY}\"}}"
                else
                  echo "Using existing UUID from ConfigMap"
                fi
              else
                echo "ConfigMap does not exist, creating new one..."
                API_KEY=$(cat /proc/sys/kernel/random/uuid)
                kubectl create configmap "${CONFIGMAP_NAME}" -n "${NAMESPACE}" \
                  --from-literal=API_KEY="${API_KEY}" \
                  --from-literal=NODE_NAME="${NODE_NAME}"
              fi

              # Save UUID and node info to shared volume for the main container
              echo -n "${API_KEY}" > /shared/api-key
              echo -n "${NODE_NAME}" > /shared/node-name
              echo -n "${CONFIGMAP_NAME}" > /shared/configmap-name
              echo "UUID initialized: ${API_KEY}"
              echo "Node name: ${NODE_NAME}"
              echo "ConfigMap: ${CONFIGMAP_NAME}"

              # Create Certificate for this node using DNS name from label
              CERT_NAME="pasarguard-node-${DNS_NAME//./-}"

              echo "Creating Certificate: ${CERT_NAME} for ${DNS_NAME}"

              # Check if Certificate already exists
              if ! kubectl get certificate "${CERT_NAME}" -n "${NAMESPACE}" &>/dev/null; then
                echo "Certificate does not exist, creating..."
                cat <<EOF | kubectl apply -f -
              apiVersion: cert-manager.io/v1
              kind: Certificate
              metadata:
                name: ${CERT_NAME}
                namespace: ${NAMESPACE}
              spec:
                secretName: ${CERT_NAME}-tls
                issuerRef:
                  name: letsencrypt
                  kind: ClusterIssuer
                dnsNames:
                  - ${DNS_NAME}
              EOF
              else
                echo "Certificate already exists"
              fi

              # Wait for certificate to be ready (max 60 seconds)
              echo "Waiting for certificate to be ready..."
              for i in {1..60}; do
                if kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" &>/dev/null; then
                  echo "Certificate secret is ready!"
                  break
                fi
                echo "Waiting for certificate... ($i/60)"
                sleep 1
              done

              if ! kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" &>/dev/null; then
                echo "WARNING: Certificate secret not ready after 60 seconds"
              else
                # Extract certificate and key from secret to shared volume
                echo "Extracting certificate and key..."
                kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" -o jsonpath='{.data.tls\.crt}' | base64 -d > /shared/tls.crt
                kubectl get secret "${CERT_NAME}-tls" -n "${NAMESPACE}" -o jsonpath='{.data.tls\.key}' | base64 -d > /shared/tls.key
                echo "Certificate and key extracted successfully"
              fi

              # Create individual Service and Endpoints for this node
              # Take only first part of node name before first dot
              NODE_SHORT_NAME="${NODE_NAME%%.*}"
              SERVICE_NAME="${NODE_SHORT_NAME}"

              # Get node internal IP (take only first IP if multiple)
              NODE_IP=$(kubectl get node "${NODE_NAME}" -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}' | awk '{print $1}')

              echo "Creating Service: ${SERVICE_NAME} for node ${NODE_NAME} (short: ${NODE_SHORT_NAME}) with IP ${NODE_IP}"

              # Create Service without selector
              cat <<EOF | kubectl apply -f -
              apiVersion: v1
              kind: Service
              metadata:
                name: ${SERVICE_NAME}
                namespace: ${NAMESPACE}
                labels:
                  app: pasarguard-node
                  node: ${NODE_NAME}
              spec:
                clusterIP: None
                ports:
                  - name: api
                    port: 62050
                    protocol: TCP
                    targetPort: 62050
              ---
              apiVersion: v1
              kind: Endpoints
              metadata:
                name: ${SERVICE_NAME}
                namespace: ${NAMESPACE}
                labels:
                  app: pasarguard-node
                  node: ${NODE_NAME}
              subsets:
                - addresses:
                    - ip: ${NODE_IP}
                      nodeName: ${NODE_NAME}
                  ports:
                    - name: api
                      port: 62050
                      protocol: TCP
              EOF

              echo "Service created: ${SERVICE_NAME}.${NAMESPACE}.svc.cluster.local -> ${NODE_IP}:62050"
            - /scripts/init-uuid.sh
          volumeMounts:
            - name: shared-data
              mountPath: /shared
            - name: scripts
              mountPath: /scripts
      containers:
        - name: xray-exporter
          image: alpine:3.18
          imagePullPolicy: IfNotPresent
          command:
            - /bin/sh
            - /scripts/exporter-start.sh
          ports:
            - name: metrics
              containerPort: 9550
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /scrape
              port: metrics
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /scrape
              port: metrics
            initialDelaySeconds: 45
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"
          volumeMounts:
            - name: shared-data
              mountPath: /shared
              readOnly: true
            - name: scripts
              mountPath: /scripts
        - name: pasarguard-node
          image: 'pasarguard/node:v0.1.1'
          imagePullPolicy: Always
          command:
            - /bin/sh
            - -c
            - |
              # Read API_KEY from shared volume created by init container
              if [ -f /shared/api-key ]; then
                export API_KEY=$(cat /shared/api-key)
                echo "Loaded API_KEY from shared volume"
              else
                echo "WARNING: API_KEY file not found, using default"
              fi

              cd /app
              exec ./main
            - /scripts/pasarguard-start.sh
          ports:
            - name: api
              containerPort: 62050
@@ -285,7 +173,7 @@ spec:
            - name: SERVICE_PROTOCOL
              value: "grpc"
            - name: DEBUG
              value: "false"
              value: "true"
            - name: SSL_CERT_FILE
              value: "/shared/tls.crt"
            - name: SSL_KEY_FILE
@@ -320,7 +208,13 @@ spec:
          volumeMounts:
            - name: shared-data
              mountPath: /shared
              readOnly: true
              readOnly: false
            - name: scripts
              mountPath: /scripts
      volumes:
        - name: shared-data
          emptyDir: {}
        - name: scripts
          configMap:
            name: pasarguard-scripts
            defaultMode: 0755
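The `defaultMode: 0755` on the scripts ConfigMap volume is what lets the containers invoke `/scripts/*.sh` directly: without an executable file mode the mounted keys are plain `0644` files. A local sketch of the same effect (the temp file stands in for a mounted key):

```shell
#!/bin/sh
# Write a tiny script the way a ConfigMap key is projected into the volume.
SCRIPT=$(mktemp)
printf '#!/bin/sh\necho ok\n' > "$SCRIPT"

# defaultMode: 0755 applies this mode to every projected file.
chmod 0755 "$SCRIPT"

# Now the file can be executed directly, as the pod does with /scripts/*.sh.
RESULT=$("$SCRIPT")
echo "$RESULT"
rm -f "$SCRIPT"
```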
@@ -21,9 +21,20 @@ spec:
    spec:
      nodeSelector:
        kubernetes.io/hostname: master.tail2fe2d.ts.net
      initContainers:
        - name: download-template
          image: busybox:latest
          command:
            - sh
            - -c
            - |
              wget -O /templates/subscription/index.html https://github.com/PasarGuard/subscription-template/releases/latest/download/index.html
          volumeMounts:
            - name: subscription-template
              mountPath: /templates/subscription
      containers:
        - name: pasarguard-web
          image: 'pasarguard/panel:v1.4.1'
          image: 'pasarguard/panel:v1.7.2'
          imagePullPolicy: Always
          envFrom:
            - secretRef:
@@ -33,34 +44,32 @@ spec:
              value: "0.0.0.0"
            - name: UVICORN_PORT
              value: "8000"
            - name: DOCS
              value: "true"
            - name: UVICORN_SSL_CERTFILE
              value: "/etc/ssl/certs/tls.crt"
              value: "/app/tls/tls.crt"
            - name: UVICORN_SSL_KEYFILE
              value: "/etc/ssl/private/tls.key"
              value: "/app/tls/tls.key"
            - name: CUSTOM_TEMPLATES_DIRECTORY
              value: "/code/app/templates/"
            - name: SUBSCRIPTION_PAGE_TEMPLATE
              value: "subscription/index.html"
          ports:
            - name: http
              containerPort: 8000
              protocol: TCP
          volumeMounts:
            - name: tls-certs
              mountPath: /etc/ssl/certs
              readOnly: true
            - name: tls-keys
              mountPath: /etc/ssl/private
            - name: tls
              mountPath: /app/tls
              readOnly: true
            - name: subscription-template
              mountPath: /code/app/templates/subscription
      volumes:
        - name: tls-certs
        - name: tls
          secret:
            secretName: pasarguard-tls
            items:
              - key: tls.crt
                path: tls.crt
        - name: tls-keys
          secret:
            secretName: pasarguard-tls
            items:
              - key: tls.key
                path: tls.key
        - name: subscription-template
          emptyDir: {}
---
apiVersion: v1
kind: Service
@@ -7,5 +7,5 @@ resources:
  - ./deployment.yaml
  - ./daemonset.yaml
  - ./certificate.yaml
  - ./configmap-scripts.yaml
  - ./servicemonitor.yaml
k8s/apps/pasarguard/servicemonitor.yaml (new file, 21 lines)
@@ -0,0 +1,21 @@
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: pasarguard-node-metrics
  labels:
    app: pasarguard-node
    release: prometheus
spec:
  selector:
    matchLabels:
      app: pasarguard-node
  endpoints:
    - port: metrics
      path: /scrape
      interval: 30s
      scrapeTimeout: 10s
      honorLabels: true
  namespaceSelector:
    matchNames:
      - pasarguard
k8s/apps/tg-bots/app.yaml (new file, 21 lines)
@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tg-bots
  namespace: argocd
spec:
  project: apps
  destination:
    namespace: tg-bots
    server: https://kubernetes.default.svc
  source:
    repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
    targetRevision: HEAD
    path: k8s/apps/tg-bots
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true
k8s/apps/tg-bots/desubot.yaml (new file, 42 lines)
@@ -0,0 +1,42 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: desubot
  labels:
    app: desubot
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  selector:
    matchLabels:
      app: desubot
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: desubot
    spec:
      nodeSelector:
        kubernetes.io/hostname: home.homenet
      containers:
        - name: desubot
          image: 'ultradesu/desubot:latest'
          imagePullPolicy: Always
          envFrom:
            - secretRef:
                name: desubot
          env:
            - name: RUST_LOG
              value: "info"
          volumeMounts:
            - mountPath: /storage
              name: storage
      volumes:
        - name: storage
          nfs:
            server: nas.homenet
            path: /mnt/storage/Storage/k8s/desubot/
            readOnly: false
k8s/apps/tg-bots/external-secrets.yaml (new file, 49 lines)
@@ -0,0 +1,49 @@
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: get-id-bot
spec:
  target:
    name: get-id-bot
    deletionPolicy: Delete
    template:
      type: Opaque
      data:
        TELOXIDE_TOKEN: |-
          {{ .token }}

  data:
    - secretKey: token
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: 97bd0af9-54ab-429a-b060-09626525f4cd
        property: fields[0].value

---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: desubot
spec:
  target:
    name: desubot
    deletionPolicy: Delete
    template:
      type: Opaque
      data:
        TELEGRAM_BOT_TOKEN: |-
          {{ .token }}

  data:
    - secretKey: token
      sourceRef:
        storeRef:
          name: vaultwarden-login
          kind: ClusterSecretStore
      remoteRef:
        key: 97bd0af9-54ab-429a-b060-09626525f4cd
        property: fields[1].value
k8s/apps/tg-bots/get-id-bot.yaml (new file, 36 lines)
@@ -0,0 +1,36 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: get-id-bot
  labels:
    app: get-id-bot
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  selector:
    matchLabels:
      app: get-id-bot
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: get-id-bot
    spec:
      nodeSelector:
        kubernetes.io/hostname: master.tail2fe2d.ts.net
      containers:
        - name: get-id-bot
          image: 'ghcr.io/house-of-vanity/get_id_bot:main'
          imagePullPolicy: Always
          envFrom:
            - secretRef:
                name: get-id-bot
          env:
            - name: RUST_LOG
              value: "info"
k8s/apps/tg-bots/kustomization.yaml (new file, 9 lines)
@@ -0,0 +1,9 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - app.yaml
  - get-id-bot.yaml
  - external-secrets.yaml
  - desubot.yaml
@@ -10,7 +10,7 @@ resources:
helmCharts:
  - name: authentik
    repo: https://charts.goauthentik.io
    version: 2025.8.1
    version: 2025.10.1
    releaseName: authentik
    namespace: authentik
    valuesFile: values.yaml
@@ -1,6 +1,6 @@
global:
  image:
    tag: "2025.8.1"
    tag: "2025.10.1"
  nodeSelector:
    kubernetes.io/hostname: master.tail2fe2d.ts.net
@@ -8,7 +8,7 @@
# BW_HOST: base64(url)
# BW_USERNAME: base64(name)
# BW_PASSWORD: base64(pass)
# 81212111-6350-4069-8bcf-19a67d3964a5
# Vaultwarden bot - 81212111-6350-4069-8bcf-19a67d3964a5
---
apiVersion: apps/v1
kind: Deployment
@@ -6,7 +6,7 @@ metadata:
  namespace: immich
spec:
  capacity:
    storage: 50Gi
    storage: 55Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
@@ -79,14 +79,14 @@ spec:
      - CREATE EXTENSION IF NOT EXISTS earthdistance;

  storage:
    size: 50Gi
    size: 55Gi
    storageClass: nfs-storage
    pvcTemplate:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
          storage: 55Gi
      storageClassName: nfs-storage
      volumeMode: Filesystem
@@ -11,7 +11,7 @@ spec:
  source:
    repoURL: ssh://git@gt.hexor.cy:30022/ab/homelab.git
    targetRevision: HEAD
    path: k8s/core/prometheus
    path: k8s/core/prom-stack
  syncPolicy:
    automated:
      selfHeal: true
@@ -4,16 +4,16 @@ prometheus:
  prometheusSpec:
    enableRemoteWriteReceiver: true
    additionalScrapeConfigs:
      - job_name: outline_vpn
      - job_name: xray_vpn
        metrics_path: /scrape
        static_configs:
          - targets: ['100.117.24.104:9095']
            labels: {instance: cy}
          - targets: ['100.117.24.104:9096']
            labels: {instance: am}
          - targets: ['100.117.24.104:9097']
            labels: {instance: jp}
          - targets: ['100.117.24.104:9098']
            labels: {instance: bg}
          - targets: ['cy.tail2fe2d.ts.net:9550']
            labels: {job: cy}
          - targets: ['x86.tail2fe2d.ts.net:9550']
            labels: {job: am}
          - targets: ['jp.tail2fe2d.ts.net:9550']
            labels: {job: jp}

      - job_name: cs_16_server
        static_configs:
          - targets: ['prom-a2s-exporter.counter-strike.svc:9841']
@@ -1,7 +1,7 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: k3s-system-upgrade
  name: system-upgrade
  namespace: argocd
spec:
  project: core
@@ -167,5 +167,26 @@ oauth_applications = {
    create_group = true
    signing_key = "1b1b5bec-034a-4d96-871a-133f11322360"
  }

  "openwebui" = {
    name = "OpenWeb UI"
    slug = "openwebui"
    group = "Tools"
    meta_description = "OpenWeb UI"
    meta_icon = "https://ollama.com/public/ollama.png"
    redirect_uris = [
      "https://ai.hexor.cy/oauth/oidc/callback",
    ]
    meta_launch_url = "https://ai.hexor.cy"
    client_type = "confidential"
    include_claims_in_id_token = true
    access_code_validity = "minutes=1"
    access_token_validity = "minutes=5"
    refresh_token_validity = "days=30"
    scope_mappings = ["openid", "profile", "email"]
    access_groups = ["admins"]
    create_group = true
    signing_key = "1b1b5bec-034a-4d96-871a-133f11322360"
  }
}
@@ -192,11 +192,13 @@ EOT
    slug = "pasarguard"
    group = "Tools"
    external_host = "https://ps.hexor.cy"
    internal_host = "http://pasarguard.pasarguard.svc"
    internal_host = "https://pasarguard.pasarguard.svc:80"
    internal_host_ssl_validation = false
    meta_description = ""
    skip_path_regex = <<-EOT
      /sub/
      /dashboard/
      /
    EOT
    meta_icon = "https://img.icons8.com/?size=100&id=fqAD3lAB6zTe&format=png&color=000000"
    mode = "proxy"