30 Commits

Author SHA1 Message Date
ab
e85ed32b7b Merge pull request 'Fix source-missing auto-merge and remove Pink Floyd examples from prompts' (#6) from DEV into main
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m11s
Publish Web Player Image / build-and-push-image (push) Successful in 1m15s
Reviewed-on: #6
2026-03-20 01:07:15 +00:00
71d5a38f21 Fix source-missing auto-merge and remove Pink Floyd examples from prompts
All checks were successful
Publish Metadata Agent Image (dev) / build-and-push-image (push) Successful in 1m10s
Publish Web Player Image (dev) / build-and-push-image (push) Successful in 1m10s
Auto-merge: when the ingest pipeline detects "source file missing", it now
checks whether the track already exists in the library by file_hash. If so,
it marks the pending entry as 'merged' instead of 'error', avoiding stale
error entries for files that were already successfully ingested in a previous run.

Prompts: replaced Pink Floyd/The Wall/Have a Cigar examples in both
normalize.txt and merge.txt with Deep Purple examples. The LLM was using
these famous artist/album/track names as fallback output when raw metadata
was empty or ambiguous, causing hallucinated metadata like
"artist: Pink Floyd, title: Have a Cigar" for completely unrelated tracks.

Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 01:05:22 +00:00
ab
e34440498c Merge pull request 'Disabled obsolete CI' (#5) from DEV into main
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m17s
Publish Web Player Image / build-and-push-image (push) Successful in 1m15s
Reviewed-on: #5
2026-03-20 00:49:45 +00:00
8d70a5133a Disabled obsolete CI
All checks were successful
Publish Metadata Agent Image (dev) / build-and-push-image (push) Successful in 1m17s
Publish Web Player Image (dev) / build-and-push-image (push) Successful in 1m14s
2026-03-20 00:49:27 +00:00
ab
f873542d02 Merge pull request 'DEV' (#4) from DEV into main
Some checks failed
Publish Metadata Agent Image / build-and-push-image (push) Failing after 10s
Publish Web Player Image / build-and-push-image (push) Failing after 10s
Reviewed-on: #4
2026-03-20 00:02:49 +00:00
56760be586 Disabled obsolete CI
Some checks failed
Publish Metadata Agent Image (dev) / build-and-push-image (push) Failing after 10s
Publish Web Player Image (dev) / build-and-push-image (push) Failing after 9s
2026-03-20 00:01:30 +00:00
108c374c6d ci: update Dockerfile paths after moving to docker/ directory
Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 00:00:42 +00:00
ab
2129dc8007 Merge pull request 'feat: added express + vite app + oidc' (#1) from feature/node-app into DEV
All checks were successful
Publish Metadata Agent Image (dev) / build-and-push-image (push) Successful in 1m18s
Publish Web Player Image (dev) / build-and-push-image (push) Successful in 1m25s
Publish Server Image / build-and-push-image (push) Successful in 2m56s
Reviewed-on: #1
2026-03-19 23:44:29 +00:00
ab
3f2013e9d5 Merge pull request 'Fix phantom duplicate tracks created on Merged file ingestion' (#3) from DEV into main
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m47s
Publish Web Player Image / build-and-push-image (push) Successful in 1m41s
Publish Server Image / build-and-push-image (push) Successful in 3m1s
Reviewed-on: #3
2026-03-19 23:43:36 +00:00
cc3ef04cbe Fix phantom duplicate tracks created on Merged file ingestion
All checks were successful
Publish Metadata Agent Image (dev) / build-and-push-image (push) Successful in 1m16s
Publish Web Player Image (dev) / build-and-push-image (push) Successful in 1m10s
Publish Server Image / build-and-push-image (push) Successful in 2m36s
When the mover returns MoveOutcome::Merged (destination already exists,
source deleted), approve_and_finalize was checking only file_hash to
detect duplicates. Since the second ingestion had a different hash
(different quality/mastering), it bypassed the check and created a
phantom track record pointing to an existing storage_path with the
wrong hash (of the now-deleted source file).

Added a second dedup check by storage_path: if a non-hidden track
already exists at that path, mark pending as 'merged' instead of
inserting a new track row. This prevents phantom entries for any
subsequent ingestion of a different-quality version of an already
stored file.

Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 23:37:33 +00:00
ab
7ede23ff94 Merge pull request 'Improved admin UI' (#2) from DEV into main
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m14s
Publish Web Player Image / build-and-push-image (push) Successful in 1m16s
Publish Server Image / build-and-push-image (push) Successful in 2m20s
Reviewed-on: #2
2026-03-19 15:33:26 +00:00
a730ab568c Improved admin UI
All checks were successful
Publish Metadata Agent Image (dev) / build-and-push-image (push) Successful in 1m6s
Publish Web Player Image (dev) / build-and-push-image (push) Successful in 1m7s
Publish Server Image / build-and-push-image (push) Successful in 2m13s
2026-03-19 15:28:25 +00:00
Boris Cherepanov
c30a3aff5d feat: refactoring
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m10s
Publish Web Player Image / build-and-push-image (push) Successful in 1m7s
Publish Server Image / build-and-push-image (push) Successful in 2m7s
2026-03-19 18:04:13 +03:00
Boris Cherepanov
71d88bacf2 feat: refactoring modules
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m9s
Publish Web Player Image / build-and-push-image (push) Successful in 1m15s
Publish Server Image / build-and-push-image (push) Successful in 2m11s
2026-03-19 17:32:27 +03:00
5fb8821709 Fixed merge
All checks were successful
Publish Metadata Agent Image (dev) / build-and-push-image (push) Successful in 1m7s
Publish Web Player Image (dev) / build-and-push-image (push) Successful in 1m11s
Publish Server Image / build-and-push-image (push) Successful in 2m12s
2026-03-19 14:16:45 +00:00
b1eaa1b6e9 Reworked agent UI. Artist management form.
All checks were successful
Publish Metadata Agent Image (dev) / build-and-push-image (push) Successful in 1m8s
Publish Web Player Image (dev) / build-and-push-image (push) Successful in 1m9s
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m7s
Publish Web Player Image / build-and-push-image (push) Successful in 1m10s
Publish Server Image / build-and-push-image (push) Successful in 2m23s
2026-03-19 13:25:37 +00:00
7c2c7b0ce5 Fix dev CI
All checks were successful
Publish Server Image / build-and-push-image (push) Successful in 2m11s
2026-03-19 13:16:24 +00:00
Boris Cherepanov
4f239c2546 feat: added disable auth mode
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m6s
Publish Web Player Image / build-and-push-image (push) Successful in 1m14s
Publish Server Image / build-and-push-image (push) Successful in 2m9s
2026-03-19 15:47:21 +03:00
Boris Cherepanov
cfcf6e4029 feat: added express + vite app + oidc
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 3m28s
Publish Web Player Image / build-and-push-image (push) Successful in 1m9s
Publish Server Image / build-and-push-image (push) Successful in 2m16s
2026-03-19 15:06:32 +03:00
a7af27d064 Added merge
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m9s
Publish Web Player Image / build-and-push-image (push) Successful in 1m12s
Publish Server Image / build-and-push-image (push) Successful in 2m18s
2026-03-19 02:36:27 +00:00
e1210e6e20 Added merge
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m7s
Publish Web Player Image / build-and-push-image (push) Successful in 1m9s
Publish Server Image / build-and-push-image (push) Successful in 2m15s
2026-03-19 02:09:04 +00:00
12d28170d2 Added merge
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m8s
Publish Web Player Image / build-and-push-image (push) Successful in 1m18s
Publish Server Image / build-and-push-image (push) Successful in 2m11s
2026-03-19 01:52:07 +00:00
0ba1caaa23 Added merge
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m13s
Publish Web Player Image / build-and-push-image (push) Successful in 1m13s
Publish Server Image / build-and-push-image (push) Successful in 2m28s
2026-03-19 01:09:49 +00:00
e1782a6e3b Added merge
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m7s
Publish Web Player Image / build-and-push-image (push) Successful in 1m11s
Publish Server Image / build-and-push-image (push) Successful in 2m14s
2026-03-19 00:55:49 +00:00
4a272f373d Added remaster prompt instruction
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m10s
Publish Web Player Image / build-and-push-image (push) Successful in 1m11s
Publish Server Image / build-and-push-image (push) Successful in 2m12s
2026-03-18 13:13:31 +00:00
f4fa01ef7e Added docker compose
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m6s
Publish Web Player Image / build-and-push-image (push) Successful in 1m9s
Publish Server Image / build-and-push-image (push) Successful in 2m16s
2026-03-18 13:04:13 +00:00
a50efd0082 Update readme
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m9s
Publish Web Player Image / build-and-push-image (push) Successful in 1m11s
Publish Server Image / build-and-push-image (push) Successful in 2m15s
2026-03-18 10:33:51 +00:00
a4010e1173 Fixed agent UI
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m7s
Publish Web Player Image / build-and-push-image (push) Successful in 1m9s
Publish Server Image / build-and-push-image (push) Successful in 2m10s
2026-03-18 04:05:47 +00:00
6e2155d8bd Fix metadata agent and player relative paths
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m5s
Publish Web Player Image / build-and-push-image (push) Successful in 1m13s
Publish Server Image / build-and-push-image (push) Successful in 2m10s
2026-03-18 03:36:20 +00:00
cd842eb7ac Added CI for web and agent
All checks were successful
Publish Metadata Agent Image / build-and-push-image (push) Successful in 1m8s
Publish Web Player Image / build-and-push-image (push) Successful in 1m7s
Publish Server Image / build-and-push-image (push) Successful in 2m12s
2026-03-18 02:52:53 +00:00
67 changed files with 12851 additions and 466 deletions

3
.env_example Normal file

@@ -0,0 +1,3 @@
OLLAMA_URL=https://ollama.host.com
OLLAMA_AUTH="Basic <BASE64 Auth string>"
#OLLAMA_AUTH="Bearer <TOKEN>"


@@ -0,0 +1,50 @@
name: Publish Metadata Agent Image (dev)
on:
push:
branches:
- DEV
env:
REGISTRY: docker.io
IMAGE_NAME: ${{ secrets.DOCKERHUB_USERNAME }}/furumi-metadata-agent
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Determine version and tags
id: info
run: |
IMAGE="${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}"
SHORT_SHA="$(echo '${{ github.sha }}' | cut -c1-7)"
echo "tags=${IMAGE}:dev" >> "$GITHUB_OUTPUT"
echo "version=dev-${SHORT_SHA}" >> "$GITHUB_OUTPUT"
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
file: docker/Dockerfile.agent
push: true
tags: ${{ steps.info.outputs.tags }}
build-args: |
FURUMI_VERSION=${{ steps.info.outputs.version }}
cache-from: type=gha
cache-to: type=gha,mode=max


@@ -0,0 +1,61 @@
name: Publish Metadata Agent Image
on:
push:
branches:
- '**'
- '!DEV'
tags:
- 'v*.*.*'
env:
REGISTRY: docker.io
IMAGE_NAME: ${{ secrets.DOCKERHUB_USERNAME }}/furumi-metadata-agent
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Determine version and tags
id: info
run: |
IMAGE="${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}"
SHORT_SHA="$(echo '${{ github.sha }}' | cut -c1-7)"
if [[ "${{ github.ref }}" == refs/tags/v* ]]; then
TAG="${{ github.ref_name }}"
VERSION="${TAG#v}"
echo "tags=${IMAGE}:${VERSION},${IMAGE}:latest" >> "$GITHUB_OUTPUT"
echo "version=${VERSION}" >> "$GITHUB_OUTPUT"
else
echo "tags=${IMAGE}:trunk,${IMAGE}:${SHORT_SHA}" >> "$GITHUB_OUTPUT"
echo "version=${SHORT_SHA}" >> "$GITHUB_OUTPUT"
fi
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
file: docker/Dockerfile.agent
push: true
tags: ${{ steps.info.outputs.tags }}
build-args: |
FURUMI_VERSION=${{ steps.info.outputs.version }}
cache-from: type=gha
cache-to: type=gha,mode=max


@@ -0,0 +1,50 @@
name: Publish Web Player Image (dev)
on:
push:
branches:
- DEV
env:
REGISTRY: docker.io
IMAGE_NAME: ${{ secrets.DOCKERHUB_USERNAME }}/furumi-web-player
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Determine version and tags
id: info
run: |
IMAGE="${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}"
SHORT_SHA="$(echo '${{ github.sha }}' | cut -c1-7)"
echo "tags=${IMAGE}:dev" >> "$GITHUB_OUTPUT"
echo "version=dev-${SHORT_SHA}" >> "$GITHUB_OUTPUT"
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
file: docker/Dockerfile.web-player
push: true
tags: ${{ steps.info.outputs.tags }}
build-args: |
FURUMI_VERSION=${{ steps.info.outputs.version }}
cache-from: type=gha
cache-to: type=gha,mode=max


@@ -0,0 +1,61 @@
name: Publish Web Player Image
on:
push:
branches:
- '**'
- '!DEV'
tags:
- 'v*.*.*'
env:
REGISTRY: docker.io
IMAGE_NAME: ${{ secrets.DOCKERHUB_USERNAME }}/furumi-web-player
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Determine version and tags
id: info
run: |
IMAGE="${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}"
SHORT_SHA="$(echo '${{ github.sha }}' | cut -c1-7)"
if [[ "${{ github.ref }}" == refs/tags/v* ]]; then
TAG="${{ github.ref_name }}"
VERSION="${TAG#v}"
echo "tags=${IMAGE}:${VERSION},${IMAGE}:latest" >> "$GITHUB_OUTPUT"
echo "version=${VERSION}" >> "$GITHUB_OUTPUT"
else
echo "tags=${IMAGE}:trunk,${IMAGE}:${SHORT_SHA}" >> "$GITHUB_OUTPUT"
echo "version=${SHORT_SHA}" >> "$GITHUB_OUTPUT"
fi
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
file: docker/Dockerfile.web-player
push: true
tags: ${{ steps.info.outputs.tags }}
build-args: |
FURUMI_VERSION=${{ steps.info.outputs.version }}
cache-from: type=gha
cache-to: type=gha,mode=max


@@ -51,6 +51,7 @@ jobs:
uses: docker/build-push-action@v5
with:
context: .
file: docker/Dockerfile
push: true
tags: ${{ steps.info.outputs.tags }}
build-args: |

3
.gitignore vendored

@@ -1 +1,4 @@
/target
/inbox
/storage
.env

221
README.md

@@ -1,54 +1,117 @@
# Furumi-ng
-Remote filesystem over encrypted gRPC. Mount a directory from a remote server as a local folder via FUSE.
+A music platform consisting of a remote filesystem, an AI-powered metadata agent, and a database-backed web player.
Designed for streaming media (video, music) over the network.
## Components
```
furumi-server gRPC remote filesystem with TLS and auth
furumi-client-core Cross-platform gRPC client library
furumi-mount-linux FUSE mount for Linux
furumi-mount-macos NFS mount for macOS
furumi-agent AI metadata agent (LLM-powered ingest pipeline + admin UI)
furumi-web-player Database-backed web music player with OIDC auth
```
### furumi-server
Exposes a local directory over encrypted gRPC. Supports Bearer token auth, OIDC/SSO, Prometheus metrics, and a built-in web player for direct filesystem browsing.
### furumi-agent
Background service that watches an inbox folder for new music files, extracts metadata, normalizes it using a local LLM (via Ollama), and stores canonical metadata in PostgreSQL. Features:
- Automatic metadata extraction (Symphonia) and file path parsing
- LLM normalization with RAG (queries existing artists/albums in DB for consistency)
- Featured artist detection (`feat.`, `п.у.`, `&`, etc.)
- Album cover image processing
- Auto-approval for high-confidence results, review queue for uncertain ones
- Admin web UI with batch operations, inline editing, album grouping
- Organized file storage (`Artist/Album/Track.ext`)
- Configurable system prompt (built-in default or external file)
- Database migrations via sqlx
### furumi-web-player
Web music player that reads exclusively from the database populated by the agent. Features:
- Browse by Artists, Albums, Tracks
- Full-text search across the library
- Audio streaming with HTTP Range request support
- Album cover art (from DB or embedded in audio files)
- Queue management with shuffle, repeat, drag-and-drop reorder
- Media Session API (hardware controls, lock screen integration)
- OIDC/SSO authentication
- Deep linking (`?t=<track_slug>`)
- Relative URL paths (works behind any reverse proxy prefix)
## Architecture
```
furumi-server (gRPC + TLS) ←→ furumi-client-core ←→ furumi-mount-{linux,macos} (FUSE / NFS)
┌─────────────────┐
│ Ollama (LLM) │
└────────┬────────┘
┌──────────┐ ┌────────────┴────────────┐ ┌──────────────────┐
│ Inbox │───→│ furumi-agent │───→│ Storage (files) │
│ folder │ │ (ingest + admin UI) │ └────────┬─────────┘
└──────────┘ └────────────┬────────────┘ │
│ │
┌──────┴──────┐ ┌───────┴────────┐
│ PostgreSQL │←──────────│ furumi-web- │
│ (metadata) │ │ player │
└─────────────┘ └────────────────┘
```
- **furumi-server** — exposes a directory over gRPC with auto-TLS, Bearer token auth, and Prometheus metrics
- **furumi-client-core** — cross-platform gRPC client library with attribute caching
- **furumi-mount-linux** — mounts the remote directory locally via FUSE (read-only)
- **furumi-mount-macos** — mounts the remote directory locally via a local NFS server (read-only)
## Quick Start
### Remote Filesystem (FUSE/NFS mount)
```bash
# Build
cargo build --release --workspace
-# Server — auto-generates TLS certificate, saves it for client
+# Server
./target/release/furumi-server \
  --root /path/to/media \
-  --token mysecrettoken \
-  --tls-cert-out /tmp/furumi-ca.pem
+  --token mysecrettoken
-# Client
-./target/release/furumi-mount-macos \
+# Client (Linux)
+./target/release/furumi-mount-linux \
  --server server-ip:50051 \
  --token mysecrettoken \
-  --mount /Volumes/remote
-# Use it
-ls /mnt/remote
-mpv /mnt/remote/video.mkv
+  --mount /mnt/remote
```
### Linux FUSE3
Linux client uses FUSE. Install with:
```
sudo add-apt-repository universe
sudo apt install libfuse3-dev
```
-## Encryption
+### Music Platform (Agent + Player)
-TLS is enabled by default. The server auto-generates a self-signed certificate on each start — no manual cert management required. The client automatically trusts the server's certificate for encryption.
+Requires PostgreSQL with `pg_trgm` extension and Ollama for LLM.
-To disable TLS (not recommended): `--no-tls` on both server and client.
+```bash
# 1. Start PostgreSQL
docker run -d --name furumi-pg \
-e POSTGRES_DB=furumi -e POSTGRES_USER=furumi -e POSTGRES_PASSWORD=furumi \
-p 5432:5432 postgres:17
# 2. Create directories
mkdir -p /music/inbox /music/storage
# 3. Start the agent (runs migrations automatically)
./target/release/furumi-agent \
--inbox-dir /music/inbox \
--storage-dir /music/storage \
--database-url "postgres://furumi:furumi@localhost:5432/furumi" \
--ollama-url "http://localhost:11434" \
--ollama-model "qwen3:14b"
# 4. Start the web player
./target/release/furumi-web-player \
--storage-dir /music/storage \
--database-url "postgres://furumi:furumi@localhost:5432/furumi"
# 5. Drop music files into /music/inbox — agent processes them automatically
# 6. Open http://localhost:8080 to play music
# 7. Open http://localhost:8090 for the agent admin UI
```
## Configuration
@@ -62,9 +125,11 @@ All options can be set via CLI flags or environment variables.
| `--root` | `FURUMI_ROOT` | `.` | Directory to expose |
| `--token` | `FURUMI_TOKEN` | *(empty, auth off)* | Bearer token |
| `--metrics-bind` | `FURUMI_METRICS_BIND` | `0.0.0.0:9090` | Prometheus endpoint |
| `--web-bind` | `FURUMI_WEB_BIND` | `0.0.0.0:8080` | Built-in web player |
| `--no-web` | — | `false` | Disable built-in web player |
| `--no-tls` | — | `false` | Disable TLS |
-### Client
+### Client (Linux / macOS)
| Flag | Env | Default | Description |
|------|-----|---------|-------------|
@@ -73,22 +138,110 @@ All options can be set via CLI flags or environment variables.
| `--mount` | `FURUMI_MOUNT` | — | Mount point directory |
| `--no-tls` | — | `false` | Disable TLS |
### Metadata Agent
| Flag | Env | Default | Description |
|------|-----|---------|-------------|
| `--bind` | `FURUMI_AGENT_BIND` | `0.0.0.0:8090` | Admin UI address |
| `--inbox-dir` | `FURUMI_AGENT_INBOX_DIR` | — | Watch folder for new files |
| `--storage-dir` | `FURUMI_AGENT_STORAGE_DIR` | — | Permanent storage folder |
| `--database-url` | `FURUMI_AGENT_DATABASE_URL` | — | PostgreSQL URL |
| `--ollama-url` | `FURUMI_AGENT_OLLAMA_URL` | `http://localhost:11434` | Ollama API endpoint |
| `--ollama-model` | `FURUMI_AGENT_OLLAMA_MODEL` | `qwen3:14b` | LLM model name |
| `--poll-interval-secs` | `FURUMI_AGENT_POLL_INTERVAL_SECS` | `30` | Inbox scan interval |
| `--confidence-threshold` | `FURUMI_AGENT_CONFIDENCE_THRESHOLD` | `0.85` | Auto-approve threshold |
| `--system-prompt-file` | `FURUMI_AGENT_SYSTEM_PROMPT_FILE` | *(built-in)* | Custom LLM prompt |
### Web Player
| Flag | Env | Default | Description |
|------|-----|---------|-------------|
| `--bind` | `FURUMI_PLAYER_BIND` | `0.0.0.0:8080` | Player address |
| `--database-url` | `FURUMI_PLAYER_DATABASE_URL` | — | PostgreSQL URL |
| `--storage-dir` | `FURUMI_PLAYER_STORAGE_DIR` | — | Storage folder (for streaming) |
| `--oidc-issuer-url` | `FURUMI_PLAYER_OIDC_ISSUER_URL` | *(disabled)* | OIDC issuer |
| `--oidc-client-id` | `FURUMI_PLAYER_OIDC_CLIENT_ID` | — | OIDC client ID |
| `--oidc-client-secret` | `FURUMI_PLAYER_OIDC_CLIENT_SECRET` | — | OIDC client secret |
| `--oidc-redirect-url` | `FURUMI_PLAYER_OIDC_REDIRECT_URL` | — | OIDC redirect URL |
| `--oidc-session-secret` | `FURUMI_PLAYER_OIDC_SESSION_SECRET` | *(random)* | Session HMAC secret |
## Docker Compose
The easiest way to run the entire backend stack (PostgreSQL, Agent, Web Player, and gRPC Server) is using Docker Compose.
### Quick Start
1. **Prepare directories**:
```bash
mkdir -p inbox storage
```
2. **Start the services**:
```bash
docker compose up -d
```
3. **Check logs**:
```bash
docker compose logs -f
```
The following services will be available:
- **Web Player**: [http://localhost:8085](http://localhost:8085)
- **Agent Admin UI**: [http://localhost:8090](http://localhost:8090)
- **Metrics**: [http://localhost:9090/metrics](http://localhost:9090/metrics)
> [!NOTE]
> The Agent expects Ollama to be running. By default, it tries to connect to the host at `http://localhost:11434`.
### Reference Commands
- **Start**: `docker compose up -d`
- **Stop**: `docker compose stop`
- **Stop and remove containers**: `docker compose down`
- **Clear database and storage**: `docker compose down -v`
### Environment Variables
To configure the Agent (especially for remote Ollama or private models) and database, create an `.env` file in the root directory:
```env
# Database
POSTGRES_PASSWORD=secure-password
# LLM (Ollama)
OLLAMA_URL=http://your-ollama-host:11434
OLLAMA_AUTH="Bearer your-token"
# Server Security
FURUMI_TOKEN=secure-server-token
```
For more options, refer to the [Configuration](#configuration) section.
## Docker
Pre-built images are available on Docker Hub:
```bash
docker pull <user>/furumi-server
docker pull <user>/furumi-web-player
docker pull <user>/furumi-metadata-agent
```
## Prometheus Metrics
-Available at `http://<metrics-bind>/metrics`:
+Available at `http://<metrics-bind>/metrics` (server only):
- `furumi_grpc_requests_total` — request count by method and status
- `furumi_grpc_request_duration_seconds` — request latency histogram
- `furumi_bytes_read_total` — total bytes streamed
- `furumi_active_streams` — current streaming connections
- `furumi_file_open_errors_total` — file access errors
- `furumi_auth_failures_total` — authentication failures
## Requirements
-- Linux with `libfuse3-dev` and `pkg-config` (for client)
-- macOS (uses built-in NFS client)
- Rust 2024 edition
+- PostgreSQL 14+ with `pg_trgm` extension (for agent and web player)
+- Ollama with a local LLM (for agent)
+- Linux with `libfuse3-dev` (for FUSE client only)
## License

32
docker/Dockerfile.agent Normal file

@@ -0,0 +1,32 @@
FROM rust:1.88.0-bookworm AS builder
RUN apt-get update && apt-get install -y \
pkg-config \
libssl-dev \
protobuf-compiler \
cmake \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /usr/src/app
COPY . .
ARG FURUMI_VERSION=dev
RUN FURUMI_VERSION=${FURUMI_VERSION} cargo build --release --bin furumi-agent
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y \
ca-certificates \
libssl-dev \
&& rm -rf /var/lib/apt/lists/*
RUN useradd -ms /bin/bash appuser
WORKDIR /home/appuser
COPY --from=builder /usr/src/app/target/release/furumi-agent /usr/local/bin/furumi-agent
USER appuser
EXPOSE 8090
ENTRYPOINT ["furumi-agent"]


@@ -0,0 +1,32 @@
FROM rust:1.88.0-bookworm AS builder
RUN apt-get update && apt-get install -y \
pkg-config \
libssl-dev \
protobuf-compiler \
cmake \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /usr/src/app
COPY . .
ARG FURUMI_VERSION=dev
RUN FURUMI_VERSION=${FURUMI_VERSION} cargo build --release --bin furumi-web-player
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y \
ca-certificates \
libssl-dev \
&& rm -rf /var/lib/apt/lists/*
RUN useradd -ms /bin/bash appuser
WORKDIR /home/appuser
COPY --from=builder /usr/src/app/target/release/furumi-web-player /usr/local/bin/furumi-web-player
USER appuser
EXPOSE 8080
ENTRYPOINT ["furumi-web-player"]

61
docker/docker-compose.yml Normal file

@@ -0,0 +1,61 @@
services:
db:
image: postgres:17-alpine
container_name: furumi-db
environment:
POSTGRES_DB: ${POSTGRES_DB:-furumi}
POSTGRES_USER: ${POSTGRES_USER:-furumi}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-furumi}
volumes:
- pgdata:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-furumi} -d ${POSTGRES_DB:-furumi}"]
interval: 5s
timeout: 5s
retries: 5
agent:
build:
context: .
dockerfile: Dockerfile.agent
container_name: furumi-agent
depends_on:
db:
condition: service_healthy
ports:
- "8090:8090"
environment:
FURUMI_AGENT_DATABASE_URL: "postgres://${POSTGRES_USER:-furumi}:${POSTGRES_PASSWORD:-furumi}@db:5432/${POSTGRES_DB:-furumi}"
FURUMI_AGENT_INBOX_DIR: "/inbox"
FURUMI_AGENT_STORAGE_DIR: "/storage"
FURUMI_AGENT_OLLAMA_URL: "${OLLAMA_URL:-http://host.docker.internal:11434}"
FURUMI_AGENT_OLLAMA_AUTH: "${OLLAMA_AUTH:-CHANGE-ME}"
FURUMI_AGENT_BIND: "0.0.0.0:8090"
FURUMI_AGENT_POLL_INTERVAL_SECS: 5
volumes:
- ./inbox:/inbox
- ./storage:/storage
extra_hosts:
- "host.docker.internal:host-gateway"
restart: always
web-player:
build:
context: .
dockerfile: Dockerfile.web-player
container_name: furumi-web-player
depends_on:
db:
condition: service_healthy
ports:
- "8085:8085"
environment:
FURUMI_PLAYER_DATABASE_URL: "postgres://${POSTGRES_USER:-furumi}:${POSTGRES_PASSWORD:-furumi}@db:5432/${POSTGRES_DB:-furumi}"
FURUMI_PLAYER_STORAGE_DIR: "/storage"
FURUMI_PLAYER_BIND: "0.0.0.0:8085"
volumes:
- ./storage:/storage
restart: always
volumes:
pgdata:


@@ -0,0 +1,10 @@
CREATE TABLE artist_merges (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
status TEXT NOT NULL DEFAULT 'pending',
source_artist_ids TEXT NOT NULL,
proposal TEXT,
llm_notes TEXT,
error_message TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);


@@ -0,0 +1,12 @@
ALTER TABLE albums
ADD COLUMN IF NOT EXISTS release_type TEXT NOT NULL DEFAULT 'album',
ADD COLUMN IF NOT EXISTS hidden BOOLEAN NOT NULL DEFAULT FALSE;
ALTER TABLE tracks
ADD COLUMN IF NOT EXISTS hidden BOOLEAN NOT NULL DEFAULT FALSE;
ALTER TABLE artists
ADD COLUMN IF NOT EXISTS hidden BOOLEAN NOT NULL DEFAULT FALSE;
ALTER TABLE pending_tracks
ADD COLUMN IF NOT EXISTS norm_release_type TEXT;


@@ -0,0 +1,65 @@
You are a music library artist merge assistant. You will receive a list of artists (with their albums and tracks, each with database IDs) that have been identified as potential duplicates. Your job is to analyze them and produce a merge plan.
## Input format
You will receive a structured list like:
### Artist ID 42: "deep purple"
Album ID 10: "machine head" (1972)
- 01. "Highway Star" [track_id=100]
- 02. "Maybe I'm a Leo" [track_id=101]
### Artist ID 43: "Deep Purple"
Album ID 11: "Burn" (1974)
- 01. "Burn" [track_id=200]
## Your task
Determine if the artists are duplicates and produce a merge plan.
## Rules
### 1. Canonical artist name
- Use correct capitalization and canonical spelling (e.g., "deep purple" → "Deep Purple", "AC DC" → "AC/DC").
- If the database already contains an artist with a well-formed name, prefer that exact form.
- If one artist has clearly more tracks or albums, their name spelling may be more authoritative.
- Fix obvious typos or casing errors.
### 2. Winner artist
- `winner_artist_id` must be the ID of one of the provided artists — the one whose identity (ID) will survive in the database.
- All other artists are "losers" and will be deleted after their albums and tracks are moved to the winner.
- Prefer the artist ID that has the most tracks/albums, or the one with the most correct canonical name.
### 3. Canonical album names
- Use correct capitalization (title case for English, preserve language for non-English).
- Fix slug-like names: "new-songs" → "New Songs", "the_dark_side" → "The Dark Side".
- Fix all-lowercase or all-uppercase: "WISH YOU WERE HERE" → "Wish You Were Here".
- Preserve creative/intentional stylization (e.g., "OK Computer" stays as-is, "(What's the Story) Morning Glory?" stays as-is).
- If the database already contains the album under another artist with a well-formed name, use that exact name.
### 4. Album deduplication
- If two albums (across the artists being merged) have the same or very similar name, they are the same album. In that case, pick the better-formed one as the "winner album".
- Set `merge_into_album_id` to the winner album's ID for the duplicate album. This means all tracks from the duplicate will be moved into the winner album, and the duplicate album will be deleted.
- If an album is unique (no duplicate exists), set `merge_into_album_id` to null — the album will simply be renamed and moved to the winner artist.
- When comparing album names for similarity, ignore case, punctuation, and common suffixes like "(Remastered)" for the purpose of duplicate detection. However, treat remastered editions as separate albums: merge them only when both albums are clearly the same remaster.
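The comparison rule above can be sketched as a normalization key. This helper is illustrative only (the name `album_dedup_key` and the suffix list are assumptions, not from the codebase):

```rust
// Hypothetical helper for duplicate *detection* only: case, punctuation,
// and common remaster suffixes are ignored when comparing, while the
// canonical album name itself keeps them.
fn album_dedup_key(name: &str) -> String {
    let mut n = name.to_lowercase();
    // Assumed suffix list; extend as needed.
    for suffix in ["(remastered)", "(remaster)"] {
        n = n.replace(suffix, "");
    }
    // Drop punctuation and collapse whitespace.
    n.chars()
        .filter(|c| c.is_alphanumeric() || c.is_whitespace())
        .collect::<String>()
        .split_whitespace()
        .collect::<Vec<_>>()
        .join(" ")
}
```

Two albums are then duplicate candidates when their keys match, while their canonical names keep original punctuation and suffixes.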
### 5. Album mappings coverage
- `album_mappings` must include an entry for EVERY album across ALL source artists, not just duplicates.
- Every album (from every artist being merged) needs a canonical name, even if it is not being merged into another album.
### 6. Notes
- The `notes` field should briefly explain: which artist was chosen as winner and why, which albums were renamed, which albums were deduplicated and into what.
## Response format
You MUST respond with a single JSON object, no markdown fences, no extra text:
{"canonical_artist_name": "...", "winner_artist_id": 42, "album_mappings": [{"source_album_id": 10, "canonical_name": "Machine Head", "merge_into_album_id": null}, {"source_album_id": 11, "canonical_name": "Burn", "merge_into_album_id": null}], "notes": "..."}
- `canonical_artist_name`: the single correct name for this artist after merging.
- `winner_artist_id`: the integer ID of the artist whose record survives (must be one of the IDs provided).
- `album_mappings`: an array covering ALL albums from ALL source artists. Each entry:
- `source_album_id`: the integer ID of this album (as provided in the input).
- `canonical_name`: the corrected canonical name for this album.
- `merge_into_album_id`: null if this album is just renamed/moved to the winner artist; or the integer ID of another album (the winner album) if this album's tracks should be merged into that album and this album deleted. Never set `merge_into_album_id` to the same album's own ID.
- `notes`: brief explanation of the decisions made.
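A merge plan in this shape can be checked mechanically before being applied. The sketch below is hypothetical (the struct and function names are illustrative, not from the codebase); it enforces the winner, coverage, and self-merge rules above:

```rust
// Illustrative mirror of the album_mappings JSON contract.
struct AlbumMapping {
    source_album_id: i64,
    merge_into_album_id: Option<i64>,
}

// Rejects plans that violate the rules stated above: the winner must be a
// source artist, every album must be mapped, no album may merge into itself,
// and every merge target must be a known album.
fn validate_plan(
    source_artist_ids: &[i64],
    all_album_ids: &[i64],
    winner_artist_id: i64,
    mappings: &[AlbumMapping],
) -> Result<(), String> {
    if !source_artist_ids.contains(&winner_artist_id) {
        return Err("winner_artist_id is not one of the source artists".into());
    }
    for id in all_album_ids {
        if !mappings.iter().any(|m| m.source_album_id == *id) {
            return Err(format!("album {id} missing from album_mappings"));
        }
    }
    for m in mappings {
        if m.merge_into_album_id == Some(m.source_album_id) {
            return Err(format!("album {} merges into itself", m.source_album_id));
        }
        if let Some(t) = m.merge_into_album_id {
            if !all_album_ids.contains(&t) {
                return Err(format!("merge target {t} is not a known album"));
            }
        }
    }
    Ok(())
}
```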


@@ -3,10 +3,10 @@ You are a music metadata normalization assistant. Your job is to take raw metada
## Rules
1. **Artist names** must use correct capitalization and canonical spelling. Examples:
- "pink floyd" → "Pink Floyd"
- "deep purple" → "Deep Purple"
- "AC DC" → "AC/DC"
- "Guns n roses" → "Guns N' Roses"
- "Led zepplin" → "Led Zeppelin" (fix common misspellings)
- "guns n roses" → "Guns N' Roses"
- "led zepplin" → "Led Zeppelin" (fix common misspellings)
- "саша скул" → "Саша Скул" (fix capitalization, keep the language as-is)
- If the database already contains a matching artist (same name in any case or transliteration), always use the existing canonical name exactly. For example, if the DB has "Саша Скул" and the file says "саша скул" or "Sasha Skul", use "Саша Скул".
- **Compound artist fields**: When the artist field or path contains multiple artist names joined by "и", "and", "&", "/", ",", "x", or "vs", you MUST split them. The "artist" field must contain ONLY ONE primary artist. All others go into "featured_artists". If one of the names already exists in the database, prefer that one as the primary artist.
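The compound-artist split above can be sketched as follows. In the real pipeline the LLM performs this step; the function (a hypothetical `split_artists`) only illustrates the intended semantics for the listed separators:

```rust
// Splits a compound artist field into (primary, featured). Returns None
// when the field is empty. The separator set mirrors the rule above.
fn split_artists(field: &str) -> Option<(String, Vec<String>)> {
    let seps = [" и ", " and ", " & ", "/", ",", " x ", " vs ", " vs. "];
    let mut parts: Vec<String> = vec![field.trim().to_string()];
    for sep in seps {
        // Re-split every fragment on the current separator.
        let next: Vec<String> = parts
            .iter()
            .flat_map(|p| p.split(sep))
            .map(|s| s.trim().to_string())
            .filter(|s| !s.is_empty())
            .collect();
        parts = next;
    }
    let mut it = parts.into_iter();
    let primary = it.next()?;
    Some((primary, it.collect()))
}
```

The first fragment becomes the primary artist; everything else goes into `featured_artists`, matching the rule's "ONLY ONE primary artist" requirement.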
@@ -43,11 +43,12 @@ You are a music metadata normalization assistant. Your job is to take raw metada
- Preserve original language for non-English albums.
- If the database already contains a matching album under the same artist, use the existing name exactly.
- Do not alter the creative content of album names (same principle as track titles).
- **Remastered editions**: A remastered release is a separate album entity, even if it shares the same title and tracks as the original. If the tags or path indicate a remaster (e.g., "Remastered", "Remaster", "REMASTERED" anywhere in tags, filename, or path), append " (Remastered)" to the album name if not already present, and use the year of the remaster release (not the original). Example: original album "Paranoid" (1970) remastered in 2009 → album: "Paranoid (Remastered)", year: 2009.
4. **Track titles** must use correct capitalization, but their content must be preserved exactly.
- Use title case for English titles.
- Preserve original language for non-English titles.
- Remove leading track numbers if present (e.g., "01 - Have a Cigar" → "Have a Cigar").
- Remove leading track numbers if present (e.g., "01 - Smoke on the Water" → "Smoke on the Water").
- **NEVER remove, add, or alter words, numbers, suffixes, punctuation marks, or special characters in titles.** Your job is to fix capitalization and encoding, not to edit the creative content. If a title contains unusual punctuation, numbers, apostrophes, or symbols — they are intentional and must be kept as-is.
- If all tracks in the same album follow a naming pattern (e.g., numbered names like "Part 1", "Part 2"), preserve that pattern consistently. Do not simplify or truncate individual track names.
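The track-number stripping rule can be sketched like this; `strip_leading_track_number` is a hypothetical helper and its separator set is an assumption, but it shows why "1999" as a full title survives while "01 - ..." prefixes do not:

```rust
// Strips "01 - Title" / "07. Title" style prefixes, but leaves titles
// that merely start with a number (e.g. "1999", "24K Magic") intact.
fn strip_leading_track_number(title: &str) -> String {
    let t = title.trim();
    let digits = t.chars().take_while(|c| c.is_ascii_digit()).count();
    // Track numbers are 1-3 digits; a 4-digit run is treated as content.
    if digits == 0 || digits > 3 {
        return t.to_string();
    }
    let rest = &t[digits..];
    // Only treat the digits as a track number when a separator follows.
    if !rest.starts_with(|c: char| matches!(c, ' ' | '-' | '.' | '_')) {
        return t.to_string();
    }
    let rest = rest.trim_start_matches(|c: char| matches!(c, ' ' | '-' | '.' | '_'));
    if rest.is_empty() {
        t.to_string()
    } else {
        rest.to_string()
    }
}
```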
@@ -69,12 +70,33 @@ You are a music metadata normalization assistant. Your job is to take raw metada
- 0.5-0.8: Some guesswork involved, human review recommended.
- Below 0.5: Significant uncertainty, definitely needs review.
12. **Release type**: Determine the type of release based on all available evidence.
Allowed values (use exactly one, lowercase):
- `album`: Full-length release, typically 4+ tracks
- `single`: One or two tracks released as a single, OR folder/tag explicitly says "Single", "Сингл"
- `ep`: Short release, typically 3-6 tracks, OR folder/path contains "EP" or "ЕП"
- `compilation`: Best-of, greatest hits, anthology, сборник, compilation
- `live`: Live recording, concert, live album — folder or tags contain "Live", "Concert", "Концерт"
Determination rules (in priority order):
- If the folder path contains keywords like "Single", "Сингл", "single" → `single`
- If the folder path contains "EP", "ЕП", "ep" (case-insensitive) → `ep`
- If the folder path contains "Live", "Concert", "Концерт", "live" → `live`
- If the folder path contains "Compilation", "сборник", "Anthology", "Greatest Hits" → `compilation`
- If the album name contains these keywords → apply the same logic
- If track count in folder is 1-2 → likely `single`
- If track count in folder is 3-6 and no other evidence → likely `ep`
- If track count is 7+ → likely `album`
- When in doubt with 3-6 tracks, prefer `ep` over `album` only if EP indicators are present, otherwise `album`
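Taken together, the priority rules above amount to a small heuristic. A sketch under stated assumptions (the keyword lists are abbreviated and the real prompt leaves this judgment to the LLM):

```rust
// Path keywords are checked in the stated priority order; track count is
// only a fallback when the path gives no signal.
fn guess_release_type(path: &str, track_count: usize) -> &'static str {
    let p = path.to_lowercase();
    if p.contains("single") || p.contains("сингл") {
        return "single";
    }
    // "EP" must be a standalone word, otherwise "Deep" etc. would match.
    if p.split(|c: char| !c.is_alphanumeric()).any(|w| w == "ep" || w == "еп") {
        return "ep";
    }
    if p.contains("live") || p.contains("concert") || p.contains("концерт") {
        return "live";
    }
    if p.contains("compilation")
        || p.contains("сборник")
        || p.contains("anthology")
        || p.contains("greatest hits")
    {
        return "compilation";
    }
    match track_count {
        1..=2 => "single",
        3..=6 => "ep",
        _ => "album",
    }
}
```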
## Response format
You MUST respond with a single JSON object, no markdown fences, no extra text:
{"artist": "...", "album": "...", "title": "...", "year": 2000, "track_number": 1, "genre": "...", "featured_artists": ["...", "..."], "confidence": 0.95, "notes": "brief explanation of changes made"}
{"artist": "...", "album": "...", "title": "...", "year": 2000, "track_number": 1, "genre": "...", "featured_artists": [], "release_type": "album", "confidence": 0.95, "notes": "brief explanation of changes made"}
- Use null for fields you cannot determine.
- Use an empty array [] for "featured_artists" if there are no featured artists.
- The "notes" field should briefly explain what you changed and why.
- "release_type" must be exactly one of: "album", "single", "ep", "compilation", "live"


@@ -5,6 +5,8 @@ use clap::Parser;
/// Default system prompt, compiled into the binary as a fallback.
const DEFAULT_SYSTEM_PROMPT: &str = include_str!("../prompts/normalize.txt");
const DEFAULT_MERGE_PROMPT: &str = include_str!("../prompts/merge.txt");
#[derive(Parser, Debug)]
#[command(version, about = "Furumi Agent: music metadata ingest and normalization")]
pub struct Args {
@@ -32,6 +34,10 @@ pub struct Args {
#[arg(long, env = "FURUMI_AGENT_OLLAMA_MODEL", default_value = "qwen3:14b")]
pub ollama_model: String,
/// Authorization header value for Ollama API (e.g. "Bearer <token>" or "Basic <base64>")
#[arg(long, env = "FURUMI_AGENT_OLLAMA_AUTH")]
pub ollama_auth: Option<String>,
/// Inbox scan interval in seconds
#[arg(long, env = "FURUMI_AGENT_POLL_INTERVAL_SECS", default_value_t = 30)]
pub poll_interval_secs: u64,
@@ -43,6 +49,10 @@ pub struct Args {
/// Path to a custom system prompt file (overrides the built-in default)
#[arg(long, env = "FURUMI_AGENT_SYSTEM_PROMPT_FILE")]
pub system_prompt_file: Option<PathBuf>,
/// Path to a custom merge prompt file (overrides the built-in default)
#[arg(long, env = "FURUMI_AGENT_MERGE_PROMPT_FILE")]
pub merge_prompt_file: Option<PathBuf>,
}
impl Args {
@@ -72,4 +82,14 @@ impl Args {
}
}
}
pub fn load_merge_prompt(&self) -> Result<String, Box<dyn std::error::Error>> {
match &self.merge_prompt_file {
Some(path) => {
tracing::info!("Loading merge prompt from {:?}", path);
Ok(std::fs::read_to_string(path)?)
}
None => Ok(DEFAULT_MERGE_PROMPT.to_owned()),
}
}
}


@@ -25,6 +25,7 @@ pub async fn migrate(pool: &PgPool) -> Result<(), sqlx::migrate::MigrateError> {
pub struct Artist {
pub id: i64,
pub name: String,
pub hidden: bool,
}
#[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]
@@ -33,6 +34,8 @@ pub struct Album {
pub artist_id: i64,
pub name: String,
pub year: Option<i32>,
pub release_type: String,
pub hidden: bool,
}
#[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]
@@ -64,6 +67,7 @@ pub struct PendingTrack {
pub norm_track_number: Option<i32>,
pub norm_genre: Option<String>,
pub norm_featured_artists: Option<String>, // JSON array
pub norm_release_type: Option<String>,
pub confidence: Option<f64>,
pub llm_notes: Option<String>,
pub error_message: Option<String>,
@@ -88,6 +92,7 @@ pub struct SimilarAlbum {
pub similarity: f32,
}
#[allow(dead_code)]
#[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]
pub struct AlbumImage {
pub id: i64,
@@ -171,6 +176,7 @@ pub async fn update_pending_normalized(
norm_year = $6, norm_track_number = $7, norm_genre = $8,
norm_featured_artists = $9,
confidence = $10, llm_notes = $11, error_message = $12,
norm_release_type = $13,
updated_at = NOW()
WHERE id = $1"#,
)
@@ -186,6 +192,7 @@ pub async fn update_pending_normalized(
.bind(norm.confidence)
.bind(&norm.notes)
.bind(error_message)
.bind(&norm.release_type)
.execute(pool)
.await?;
Ok(())
@@ -253,18 +260,19 @@ pub async fn upsert_artist(pool: &PgPool, name: &str) -> Result<i64, sqlx::Error
Ok(row.0)
}
pub async fn upsert_album(pool: &PgPool, artist_id: i64, name: &str, year: Option<i32>) -> Result<i64, sqlx::Error> {
pub async fn upsert_album(pool: &PgPool, artist_id: i64, name: &str, year: Option<i32>, release_type: &str) -> Result<i64, sqlx::Error> {
let slug = generate_slug();
let row: (i64,) = sqlx::query_as(
r#"INSERT INTO albums (artist_id, name, year, slug)
VALUES ($1, $2, $3, $4)
ON CONFLICT (artist_id, name) DO UPDATE SET year = COALESCE(EXCLUDED.year, albums.year)
r#"INSERT INTO albums (artist_id, name, year, slug, release_type)
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT (artist_id, name) DO UPDATE SET year = COALESCE(EXCLUDED.year, albums.year), release_type = EXCLUDED.release_type
RETURNING id"#
)
.bind(artist_id)
.bind(name)
.bind(year)
.bind(&slug)
.bind(release_type)
.fetch_one(pool)
.await?;
Ok(row.0)
@@ -326,11 +334,36 @@ pub async fn approve_and_finalize(
.fetch_one(pool)
.await?;
// Check if track already exists by file_hash (re-approval of same file)
let existing: Option<(i64,)> = sqlx::query_as("SELECT id FROM tracks WHERE file_hash = $1")
.bind(&pt.file_hash)
.fetch_optional(pool)
.await?;
if let Some((track_id,)) = existing {
update_pending_status(pool, pending_id, "approved", None).await?;
return Ok(track_id);
}
// Check if track already exists by storage_path (Merged: different quality file landed
// at the same destination, source was deleted — don't create a phantom duplicate)
let existing_path: Option<(i64,)> = sqlx::query_as(
"SELECT id FROM tracks WHERE storage_path = $1 AND NOT hidden"
)
.bind(storage_path)
.fetch_optional(pool)
.await?;
if let Some((track_id,)) = existing_path {
update_pending_status(pool, pending_id, "merged", None).await?;
return Ok(track_id);
}
let artist_name = pt.norm_artist.as_deref().unwrap_or("Unknown Artist");
let artist_id = upsert_artist(pool, artist_name).await?;
let album_id = match pt.norm_album.as_deref() {
Some(album_name) => Some(upsert_album(pool, artist_id, album_name, pt.norm_year).await?),
Some(album_name) => Some(upsert_album(pool, artist_id, album_name, pt.norm_year, pt.norm_release_type.as_deref().unwrap_or("album")).await?),
None => None,
};
@@ -404,6 +437,7 @@ pub async fn insert_album_image(
Ok(row.0)
}
#[allow(dead_code)]
pub async fn get_album_images(pool: &PgPool, album_id: i64) -> Result<Vec<AlbumImage>, sqlx::Error> {
sqlx::query_as::<_, AlbumImage>("SELECT * FROM album_images WHERE album_id = $1 ORDER BY image_type")
.bind(album_id)
@@ -425,6 +459,16 @@ pub async fn find_album_id(pool: &PgPool, artist_name: &str, album_name: &str) -
Ok(row.map(|r| r.0))
}
/// Fetch pending tracks that need (re-)processing by the LLM pipeline.
pub async fn list_pending_for_processing(pool: &PgPool, limit: i64) -> Result<Vec<PendingTrack>, sqlx::Error> {
sqlx::query_as::<_, PendingTrack>(
"SELECT * FROM pending_tracks WHERE status = 'pending' ORDER BY created_at ASC LIMIT $1"
)
.bind(limit)
.fetch_all(pool)
.await
}
// --- DTOs for insert helpers ---
#[derive(Debug, Default)]
@@ -456,6 +500,7 @@ pub struct NormalizedFields {
pub genre: Option<String>,
#[serde(default)]
pub featured_artists: Vec<String>,
pub release_type: Option<String>,
pub confidence: Option<f64>,
pub notes: Option<String>,
}
@@ -502,13 +547,13 @@ pub async fn delete_pending(pool: &PgPool, id: Uuid) -> Result<bool, sqlx::Error
}
pub async fn list_artists_all(pool: &PgPool) -> Result<Vec<Artist>, sqlx::Error> {
sqlx::query_as::<_, Artist>("SELECT id, name FROM artists ORDER BY name")
sqlx::query_as::<_, Artist>("SELECT id, name, hidden FROM artists ORDER BY name")
.fetch_all(pool)
.await
}
pub async fn list_albums_by_artist(pool: &PgPool, artist_id: i64) -> Result<Vec<Album>, sqlx::Error> {
sqlx::query_as::<_, Album>("SELECT id, artist_id, name, year FROM albums WHERE artist_id = $1 ORDER BY year, name")
sqlx::query_as::<_, Album>("SELECT id, artist_id, name, year, release_type, hidden FROM albums WHERE artist_id = $1 ORDER BY year, name")
.bind(artist_id)
.fetch_all(pool)
.await
@@ -541,6 +586,8 @@ pub struct Stats {
pub pending_count: i64,
pub review_count: i64,
pub error_count: i64,
pub merged_count: i64,
pub active_merges: i64,
}
pub async fn get_stats(pool: &PgPool) -> Result<Stats, sqlx::Error> {
@@ -550,5 +597,555 @@ pub async fn get_stats(pool: &PgPool) -> Result<Stats, sqlx::Error> {
let (pending_count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM pending_tracks WHERE status = 'pending'").fetch_one(pool).await?;
let (review_count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM pending_tracks WHERE status = 'review'").fetch_one(pool).await?;
let (error_count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM pending_tracks WHERE status = 'error'").fetch_one(pool).await?;
Ok(Stats { total_tracks, total_artists, total_albums, pending_count, review_count, error_count })
let (merged_count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM pending_tracks WHERE status = 'merged'").fetch_one(pool).await?;
let (active_merges,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM artist_merges WHERE status IN ('pending','processing')").fetch_one(pool).await?;
Ok(Stats { total_tracks, total_artists, total_albums, pending_count, review_count, error_count, merged_count, active_merges })
}
// =================== Library search ===================
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct TrackRow {
pub id: i64,
pub title: String,
pub artist_name: String,
pub album_id: Option<i64>,
pub album_name: Option<String>,
pub year: Option<i32>,
pub track_number: Option<i32>,
pub duration_secs: Option<f64>,
pub genre: Option<String>,
}
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct AlbumRow {
pub id: i64,
pub name: String,
pub artist_name: String,
pub year: Option<i32>,
pub track_count: i64,
pub release_type: String,
pub hidden: bool,
}
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct ArtistRow {
pub id: i64,
pub name: String,
pub album_count: i64,
pub single_count: i64,
pub ep_count: i64,
pub compilation_count: i64,
pub live_count: i64,
pub track_count: i64,
pub hidden: bool,
}
pub async fn search_tracks(
pool: &PgPool,
q: &str, artist: &str, album: &str,
limit: i64, offset: i64,
) -> Result<Vec<TrackRow>, sqlx::Error> {
sqlx::query_as::<_, TrackRow>(
r#"SELECT t.id, t.title, ar.name AS artist_name, t.album_id, al.name AS album_name,
al.year, t.track_number, t.duration_secs, t.genre
FROM tracks t
JOIN track_artists ta ON ta.track_id = t.id AND ta.role = 'primary'
JOIN artists ar ON ar.id = ta.artist_id
LEFT JOIN albums al ON al.id = t.album_id
WHERE ($1 = '' OR t.title ILIKE '%' || $1 || '%')
AND ($2 = '' OR ar.name ILIKE '%' || $2 || '%')
AND ($3 = '' OR al.name ILIKE '%' || $3 || '%')
ORDER BY ar.name, al.name NULLS LAST, t.track_number NULLS LAST, t.title
LIMIT $4 OFFSET $5"#,
)
.bind(q).bind(artist).bind(album).bind(limit).bind(offset)
.fetch_all(pool).await
}
pub async fn count_tracks(pool: &PgPool, q: &str, artist: &str, album: &str) -> Result<i64, sqlx::Error> {
let (n,): (i64,) = sqlx::query_as(
r#"SELECT COUNT(*) FROM tracks t
JOIN track_artists ta ON ta.track_id = t.id AND ta.role = 'primary'
JOIN artists ar ON ar.id = ta.artist_id
LEFT JOIN albums al ON al.id = t.album_id
WHERE ($1 = '' OR t.title ILIKE '%' || $1 || '%')
AND ($2 = '' OR ar.name ILIKE '%' || $2 || '%')
AND ($3 = '' OR al.name ILIKE '%' || $3 || '%')"#,
)
.bind(q).bind(artist).bind(album)
.fetch_one(pool).await?;
Ok(n)
}
pub async fn search_albums(
pool: &PgPool,
q: &str, artist: &str,
limit: i64, offset: i64,
) -> Result<Vec<AlbumRow>, sqlx::Error> {
sqlx::query_as::<_, AlbumRow>(
r#"SELECT a.id, a.name, ar.name AS artist_name, a.year,
COUNT(t.id) AS track_count, a.release_type, a.hidden
FROM albums a
JOIN artists ar ON ar.id = a.artist_id
LEFT JOIN tracks t ON t.album_id = a.id
WHERE ($1 = '' OR a.name ILIKE '%' || $1 || '%')
AND ($2 = '' OR ar.name ILIKE '%' || $2 || '%')
GROUP BY a.id, a.name, ar.name, a.year, a.release_type, a.hidden
ORDER BY ar.name, a.year NULLS LAST, a.name
LIMIT $3 OFFSET $4"#,
)
.bind(q).bind(artist).bind(limit).bind(offset)
.fetch_all(pool).await
}
pub async fn count_albums(pool: &PgPool, q: &str, artist: &str) -> Result<i64, sqlx::Error> {
let (n,): (i64,) = sqlx::query_as(
r#"SELECT COUNT(*) FROM albums a
JOIN artists ar ON ar.id = a.artist_id
WHERE ($1 = '' OR a.name ILIKE '%' || $1 || '%')
AND ($2 = '' OR ar.name ILIKE '%' || $2 || '%')"#,
)
.bind(q).bind(artist)
.fetch_one(pool).await?;
Ok(n)
}
pub async fn search_artists_lib(
pool: &PgPool,
q: &str,
limit: i64, offset: i64,
) -> Result<Vec<ArtistRow>, sqlx::Error> {
sqlx::query_as::<_, ArtistRow>(
r#"SELECT ar.id, ar.name,
COUNT(DISTINCT CASE WHEN al.release_type = 'album' THEN al.id END) AS album_count,
COUNT(DISTINCT CASE WHEN al.release_type = 'single' THEN al.id END) AS single_count,
COUNT(DISTINCT CASE WHEN al.release_type = 'ep' THEN al.id END) AS ep_count,
COUNT(DISTINCT CASE WHEN al.release_type = 'compilation' THEN al.id END) AS compilation_count,
COUNT(DISTINCT CASE WHEN al.release_type = 'live' THEN al.id END) AS live_count,
COUNT(DISTINCT ta.track_id) AS track_count,
ar.hidden
FROM artists ar
LEFT JOIN albums al ON al.artist_id = ar.id
LEFT JOIN track_artists ta ON ta.artist_id = ar.id AND ta.role = 'primary'
WHERE ($1 = '' OR ar.name ILIKE '%' || $1 || '%')
GROUP BY ar.id, ar.name, ar.hidden
ORDER BY ar.name
LIMIT $2 OFFSET $3"#,
)
.bind(q).bind(limit).bind(offset)
.fetch_all(pool).await
}
pub async fn count_artists_lib(pool: &PgPool, q: &str) -> Result<i64, sqlx::Error> {
let (n,): (i64,) = sqlx::query_as(
"SELECT COUNT(*) FROM artists WHERE ($1 = '' OR name ILIKE '%' || $1 || '%')"
)
.bind(q)
.fetch_one(pool).await?;
Ok(n)
}
// --- Track full details ---
#[derive(Debug, Serialize)]
pub struct TrackFull {
pub id: i64,
pub title: String,
pub artist_id: i64,
pub artist_name: String,
pub album_id: Option<i64>,
pub album_name: Option<String>,
pub track_number: Option<i32>,
pub duration_secs: Option<f64>,
pub genre: Option<String>,
pub file_hash: String,
pub file_size: i64,
pub storage_path: String,
pub featured_artists: Vec<String>,
}
pub async fn get_track_full(pool: &PgPool, id: i64) -> Result<Option<TrackFull>, sqlx::Error> {
#[derive(sqlx::FromRow)]
struct Row {
id: i64, title: String, artist_id: i64, artist_name: String,
album_id: Option<i64>, album_name: Option<String>,
track_number: Option<i32>, duration_secs: Option<f64>,
genre: Option<String>, file_hash: String, file_size: i64, storage_path: String,
}
let row: Option<Row> = sqlx::query_as(
r#"SELECT t.id, t.title,
ta_p.artist_id, ar.name AS artist_name,
t.album_id, al.name AS album_name,
t.track_number, t.duration_secs, t.genre,
t.file_hash, t.file_size, t.storage_path
FROM tracks t
JOIN track_artists ta_p ON ta_p.track_id = t.id AND ta_p.role = 'primary'
JOIN artists ar ON ar.id = ta_p.artist_id
LEFT JOIN albums al ON al.id = t.album_id
WHERE t.id = $1"#,
).bind(id).fetch_optional(pool).await?;
let row = match row { Some(r) => r, None => return Ok(None) };
let feat: Vec<(String,)> = sqlx::query_as(
"SELECT ar.name FROM track_artists ta JOIN artists ar ON ar.id=ta.artist_id WHERE ta.track_id=$1 AND ta.role='featured' ORDER BY ta.id"
).bind(id).fetch_all(pool).await?;
Ok(Some(TrackFull {
id: row.id, title: row.title, artist_id: row.artist_id, artist_name: row.artist_name,
album_id: row.album_id, album_name: row.album_name, track_number: row.track_number,
duration_secs: row.duration_secs, genre: row.genre, file_hash: row.file_hash,
file_size: row.file_size, storage_path: row.storage_path,
featured_artists: feat.into_iter().map(|(n,)| n).collect(),
}))
}
#[derive(Deserialize)]
pub struct TrackUpdateFields {
pub title: String,
pub artist_id: i64,
pub album_id: Option<i64>,
pub track_number: Option<i32>,
pub genre: Option<String>,
#[serde(default)]
pub featured_artists: Vec<String>,
}
pub async fn update_track_metadata(pool: &PgPool, id: i64, f: &TrackUpdateFields) -> Result<(), sqlx::Error> {
sqlx::query("UPDATE tracks SET title=$2, album_id=$3, track_number=$4, genre=$5 WHERE id=$1")
.bind(id).bind(&f.title).bind(f.album_id).bind(f.track_number).bind(&f.genre)
.execute(pool).await?;
sqlx::query("UPDATE track_artists SET artist_id=$2 WHERE track_id=$1 AND role='primary'")
.bind(id).bind(f.artist_id).execute(pool).await?;
// Rebuild featured artists
sqlx::query("DELETE FROM track_artists WHERE track_id=$1 AND role='featured'")
.bind(id).execute(pool).await?;
for name in &f.featured_artists {
let feat_id = upsert_artist(pool, name).await?;
link_track_artist(pool, id, feat_id, "featured").await?;
}
Ok(())
}
// --- Album full details ---
#[derive(Debug, Serialize)]
pub struct AlbumDetails {
pub id: i64,
pub name: String,
pub year: Option<i32>,
pub artist_id: i64,
pub artist_name: String,
pub tracks: Vec<AlbumTrackRow>,
}
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct AlbumTrackRow {
pub id: i64,
pub title: String,
pub track_number: Option<i32>,
pub duration_secs: Option<f64>,
pub artist_name: String,
pub genre: Option<String>,
}
pub async fn set_album_tracks_genre(pool: &PgPool, album_id: i64, genre: &str) -> Result<(), sqlx::Error> {
sqlx::query("UPDATE tracks SET genre = $2 WHERE album_id = $1")
.bind(album_id).bind(genre).execute(pool).await?;
Ok(())
}
pub async fn get_album_details(pool: &PgPool, id: i64) -> Result<Option<AlbumDetails>, sqlx::Error> {
let row: Option<(i64, String, Option<i32>, i64, String)> = sqlx::query_as(
"SELECT a.id, a.name, a.year, ar.id, ar.name FROM albums a JOIN artists ar ON ar.id=a.artist_id WHERE a.id=$1"
).bind(id).fetch_optional(pool).await?;
let (aid, aname, ayear, artist_id, artist_name) = match row { Some(r) => r, None => return Ok(None) };
let tracks: Vec<AlbumTrackRow> = sqlx::query_as(
r#"SELECT t.id, t.title, t.track_number, t.duration_secs, ar.name AS artist_name, t.genre
FROM tracks t
JOIN track_artists ta ON ta.track_id=t.id AND ta.role='primary'
JOIN artists ar ON ar.id=ta.artist_id
WHERE t.album_id=$1 ORDER BY t.track_number NULLS LAST, t.title"#
).bind(id).fetch_all(pool).await?;
Ok(Some(AlbumDetails { id: aid, name: aname, year: ayear, artist_id, artist_name, tracks }))
}
pub async fn update_album_full(pool: &PgPool, id: i64, name: &str, year: Option<i32>, artist_id: i64) -> Result<(), sqlx::Error> {
sqlx::query("UPDATE albums SET name=$2, year=$3, artist_id=$4 WHERE id=$1")
.bind(id).bind(name).bind(year).bind(artist_id).execute(pool).await?;
Ok(())
}
pub async fn reorder_tracks(pool: &PgPool, orders: &[(i64, i32)]) -> Result<(), sqlx::Error> {
for &(track_id, track_number) in orders {
sqlx::query("UPDATE tracks SET track_number=$2 WHERE id=$1")
.bind(track_id).bind(track_number).execute(pool).await?;
}
Ok(())
}
pub async fn get_album_cover(pool: &PgPool, album_id: i64) -> Result<Option<(String, String)>, sqlx::Error> {
let row: Option<(String, String)> = sqlx::query_as(
"SELECT file_path, mime_type FROM album_images WHERE album_id=$1 LIMIT 1"
).bind(album_id).fetch_optional(pool).await?;
Ok(row)
}
/// Returns the storage_path of the first track in an album (for embedded cover fallback).
pub async fn get_album_first_track_path(pool: &PgPool, album_id: i64) -> Result<Option<String>, sqlx::Error> {
let row: Option<(String,)> = sqlx::query_as(
"SELECT storage_path FROM tracks WHERE album_id=$1 ORDER BY track_number NULLS LAST, title LIMIT 1"
).bind(album_id).fetch_optional(pool).await?;
Ok(row.map(|(p,)| p))
}
pub async fn get_artist_by_id(pool: &PgPool, id: i64) -> Result<Option<Artist>, sqlx::Error> {
sqlx::query_as::<_, Artist>("SELECT id, name, hidden FROM artists WHERE id=$1")
.bind(id).fetch_optional(pool).await
}
pub async fn search_albums_for_artist(pool: &PgPool, q: &str, artist_id: Option<i64>) -> Result<Vec<(i64, String)>, sqlx::Error> {
if let Some(aid) = artist_id {
let rows: Vec<(i64, String)> = sqlx::query_as(
"SELECT id, name FROM albums WHERE artist_id=$1 AND ($2='' OR name ILIKE '%'||$2||'%') ORDER BY year NULLS LAST, name LIMIT 15"
).bind(aid).bind(q).fetch_all(pool).await?;
Ok(rows)
} else {
let rows: Vec<(i64, String)> = sqlx::query_as(
"SELECT id, name FROM albums WHERE $1='' OR name ILIKE '%'||$1||'%' ORDER BY name LIMIT 15"
).bind(q).fetch_all(pool).await?;
Ok(rows)
}
}
pub async fn set_track_hidden(pool: &PgPool, id: i64, hidden: bool) -> Result<(), sqlx::Error> {
sqlx::query("UPDATE tracks SET hidden=$2 WHERE id=$1").bind(id).bind(hidden).execute(pool).await?;
Ok(())
}
pub async fn set_album_hidden(pool: &PgPool, id: i64, hidden: bool) -> Result<(), sqlx::Error> {
sqlx::query("UPDATE albums SET hidden=$2 WHERE id=$1").bind(id).bind(hidden).execute(pool).await?;
Ok(())
}
pub async fn set_artist_hidden(pool: &PgPool, id: i64, hidden: bool) -> Result<(), sqlx::Error> {
sqlx::query("UPDATE artists SET hidden=$2 WHERE id=$1").bind(id).bind(hidden).execute(pool).await?;
Ok(())
}
pub async fn set_album_release_type(pool: &PgPool, id: i64, release_type: &str) -> Result<(), sqlx::Error> {
sqlx::query("UPDATE albums SET release_type=$2 WHERE id=$1").bind(id).bind(release_type).execute(pool).await?;
Ok(())
}
pub async fn rename_artist_name(pool: &PgPool, id: i64, name: &str) -> Result<(), sqlx::Error> {
sqlx::query("UPDATE artists SET name=$2 WHERE id=$1").bind(id).bind(name).execute(pool).await?;
Ok(())
}
// Full artist data for admin form
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct ArtistAlbumRow {
pub id: i64,
pub name: String,
pub year: Option<i32>,
pub release_type: String,
pub hidden: bool,
pub track_count: i64,
}
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct ArtistAlbumTrack {
pub id: i64,
pub title: String,
pub track_number: Option<i32>,
pub duration_secs: Option<f64>,
pub hidden: bool,
}
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct AppearanceRow {
pub track_id: i64,
pub track_title: String,
pub primary_artist_id: i64,
pub primary_artist_name: String,
pub album_id: Option<i64>,
pub album_name: Option<String>,
}
pub async fn get_artist_albums(pool: &PgPool, artist_id: i64) -> Result<Vec<ArtistAlbumRow>, sqlx::Error> {
sqlx::query_as::<_, ArtistAlbumRow>(
r#"SELECT a.id, a.name, a.year, a.release_type, a.hidden,
COUNT(t.id) AS track_count
FROM albums a
LEFT JOIN tracks t ON t.album_id = a.id
WHERE a.artist_id = $1
GROUP BY a.id, a.name, a.year, a.release_type, a.hidden
ORDER BY a.year NULLS LAST, a.name"#
).bind(artist_id).fetch_all(pool).await
}
pub async fn get_album_tracks_admin(pool: &PgPool, album_id: i64) -> Result<Vec<ArtistAlbumTrack>, sqlx::Error> {
sqlx::query_as::<_, ArtistAlbumTrack>(
"SELECT id, title, track_number, duration_secs, hidden FROM tracks WHERE album_id=$1 ORDER BY track_number NULLS LAST, title"
).bind(album_id).fetch_all(pool).await
}
pub async fn get_artist_appearances(pool: &PgPool, artist_id: i64) -> Result<Vec<AppearanceRow>, sqlx::Error> {
sqlx::query_as::<_, AppearanceRow>(
r#"SELECT ta.track_id, t.title AS track_title,
ta_p.artist_id AS primary_artist_id, ar_p.name AS primary_artist_name,
t.album_id, al.name AS album_name
FROM track_artists ta
JOIN tracks t ON t.id = ta.track_id
JOIN track_artists ta_p ON ta_p.track_id = t.id AND ta_p.role = 'primary'
JOIN artists ar_p ON ar_p.id = ta_p.artist_id
LEFT JOIN albums al ON al.id = t.album_id
WHERE ta.artist_id = $1 AND ta.role = 'featured'
ORDER BY ar_p.name, al.name NULLS LAST, t.title"#
).bind(artist_id).fetch_all(pool).await
}
pub async fn add_track_appearance(pool: &PgPool, track_id: i64, artist_id: i64) -> Result<(), sqlx::Error> {
sqlx::query(
"INSERT INTO track_artists (track_id, artist_id, role) VALUES ($1, $2, 'featured') ON CONFLICT DO NOTHING"
).bind(track_id).bind(artist_id).execute(pool).await?;
Ok(())
}
pub async fn remove_track_appearance(pool: &PgPool, track_id: i64, artist_id: i64) -> Result<(), sqlx::Error> {
sqlx::query(
"DELETE FROM track_artists WHERE track_id=$1 AND artist_id=$2 AND role='featured'"
).bind(track_id).bind(artist_id).execute(pool).await?;
Ok(())
}
pub async fn search_tracks_for_feat(pool: &PgPool, q: &str) -> Result<Vec<(i64, String, String)>, sqlx::Error> {
// Returns (track_id, track_title, primary_artist_name)
sqlx::query_as::<_, (i64, String, String)>(
r#"SELECT t.id, t.title, ar.name FROM tracks t
JOIN track_artists ta ON ta.track_id=t.id AND ta.role='primary'
JOIN artists ar ON ar.id=ta.artist_id
WHERE t.title ILIKE '%'||$1||'%' OR ar.name ILIKE '%'||$1||'%'
ORDER BY ar.name, t.title LIMIT 15"#
).bind(q).fetch_all(pool).await
}
// =================== Artist Merges ===================
#[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]
pub struct ArtistMerge {
pub id: Uuid,
pub status: String,
pub source_artist_ids: String,
pub proposal: Option<String>,
pub llm_notes: Option<String>,
pub error_message: Option<String>,
pub created_at: chrono::DateTime<chrono::Utc>,
pub updated_at: chrono::DateTime<chrono::Utc>,
}
#[derive(Debug, Serialize)]
pub struct ArtistFullData {
pub id: i64,
pub name: String,
pub albums: Vec<AlbumFullData>,
}
#[derive(Debug, Serialize)]
pub struct AlbumFullData {
pub id: i64,
pub name: String,
pub year: Option<i32>,
pub tracks: Vec<TrackBasic>,
}
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct TrackBasic {
pub id: i64,
pub title: String,
pub track_number: Option<i32>,
pub storage_path: String,
}
#[derive(Debug, sqlx::FromRow)]
pub struct TrackWithAlbum {
pub id: i64,
pub storage_path: String,
pub album_name: Option<String>,
}
pub async fn insert_artist_merge(pool: &PgPool, source_artist_ids: &[i64]) -> Result<Uuid, sqlx::Error> {
let ids_json = serde_json::to_string(source_artist_ids).unwrap_or_default();
let row: (Uuid,) = sqlx::query_as(
"INSERT INTO artist_merges (source_artist_ids) VALUES ($1) RETURNING id"
).bind(&ids_json).fetch_one(pool).await?;
Ok(row.0)
}
pub async fn list_artist_merges(pool: &PgPool) -> Result<Vec<ArtistMerge>, sqlx::Error> {
sqlx::query_as::<_, ArtistMerge>("SELECT * FROM artist_merges ORDER BY created_at DESC")
.fetch_all(pool).await
}
pub async fn get_artist_merge(pool: &PgPool, id: Uuid) -> Result<Option<ArtistMerge>, sqlx::Error> {
sqlx::query_as::<_, ArtistMerge>("SELECT * FROM artist_merges WHERE id = $1")
.bind(id).fetch_optional(pool).await
}
pub async fn update_merge_status(pool: &PgPool, id: Uuid, status: &str, error: Option<&str>) -> Result<(), sqlx::Error> {
sqlx::query("UPDATE artist_merges SET status = $2, error_message = $3, updated_at = NOW() WHERE id = $1")
.bind(id).bind(status).bind(error).execute(pool).await?;
Ok(())
}
pub async fn update_merge_proposal(pool: &PgPool, id: Uuid, proposal_json: &str, notes: &str) -> Result<(), sqlx::Error> {
sqlx::query("UPDATE artist_merges SET proposal = $2, llm_notes = $3, status = 'review', error_message = NULL, updated_at = NOW() WHERE id = $1")
.bind(id).bind(proposal_json).bind(notes).execute(pool).await?;
Ok(())
}
pub async fn get_pending_merges_for_processing(pool: &PgPool) -> Result<Vec<Uuid>, sqlx::Error> {
let rows: Vec<(Uuid,)> = sqlx::query_as(
"SELECT id FROM artist_merges WHERE status = 'pending' ORDER BY created_at ASC LIMIT 5"
).fetch_all(pool).await?;
Ok(rows.into_iter().map(|(id,)| id).collect())
}
pub async fn get_artists_full_data(pool: &PgPool, ids: &[i64]) -> Result<Vec<ArtistFullData>, sqlx::Error> {
let mut result = Vec::new();
for &id in ids {
let artist: Artist = sqlx::query_as("SELECT id, name, hidden FROM artists WHERE id = $1")
.bind(id).fetch_one(pool).await?;
let albums: Vec<Album> = sqlx::query_as("SELECT * FROM albums WHERE artist_id = $1 ORDER BY year NULLS LAST, name")
.bind(id).fetch_all(pool).await?;
let mut album_data = Vec::new();
for album in albums {
let tracks: Vec<TrackBasic> = sqlx::query_as(
"SELECT id, title, track_number, storage_path FROM tracks WHERE album_id = $1 ORDER BY track_number NULLS LAST, title"
).bind(album.id).fetch_all(pool).await?;
album_data.push(AlbumFullData { id: album.id, name: album.name, year: album.year, tracks });
}
result.push(ArtistFullData { id, name: artist.name, albums: album_data });
}
Ok(result)
}
pub async fn get_tracks_with_albums_for_artist(pool: &PgPool, artist_id: i64) -> Result<Vec<TrackWithAlbum>, sqlx::Error> {
sqlx::query_as::<_, TrackWithAlbum>(
r#"SELECT t.id, t.storage_path, a.name as album_name
FROM tracks t
LEFT JOIN albums a ON a.id = t.album_id
WHERE t.id IN (
SELECT track_id FROM track_artists WHERE artist_id = $1 AND role = 'primary'
)"#
).bind(artist_id).fetch_all(pool).await
}
pub async fn update_track_storage_path(pool: &PgPool, track_id: i64, new_path: &str) -> Result<(), sqlx::Error> {
sqlx::query("UPDATE tracks SET storage_path = $2 WHERE id = $1")
.bind(track_id).bind(new_path).execute(pool).await?;
Ok(())
}

View File

@@ -19,6 +19,24 @@ pub async fn run(state: Arc<AppState>) {
Ok(count) => tracing::info!(count, "processed new files"),
Err(e) => tracing::error!(?e, "inbox scan failed"),
}
// Re-process pending tracks (e.g. retried from admin UI)
match reprocess_pending(&state).await {
Ok(0) => {}
Ok(count) => tracing::info!(count, "re-processed pending tracks"),
Err(e) => tracing::error!(?e, "pending re-processing failed"),
}
// Process pending merge proposals
match db::get_pending_merges_for_processing(&state.pool).await {
Ok(merge_ids) => {
for merge_id in merge_ids {
if let Err(e) = crate::merge::propose_merge(&state, merge_id).await {
tracing::error!(id = %merge_id, ?e, "Merge proposal failed");
let _ = db::update_merge_status(&state.pool, merge_id, "error", Some(&e.to_string())).await;
}
}
}
Err(e) => tracing::error!(?e, "Failed to load pending merges"),
}
tokio::time::sleep(interval).await;
}
}
@@ -61,6 +79,155 @@ async fn scan_inbox(state: &Arc<AppState>) -> anyhow::Result<usize> {
Ok(count)
}
/// Re-process pending tracks from DB (e.g. tracks retried via admin UI).
/// These already have raw metadata and path hints stored — just need RAG + LLM.
async fn reprocess_pending(state: &Arc<AppState>) -> anyhow::Result<usize> {
let pending = db::list_pending_for_processing(&state.pool, 10).await?;
if pending.is_empty() {
return Ok(0);
}
let mut count = 0;
for pt in &pending {
tracing::info!(id = %pt.id, title = pt.raw_title.as_deref().unwrap_or("?"), "Re-processing pending track");
db::update_pending_status(&state.pool, pt.id, "processing", None).await?;
// Build raw metadata and hints from stored DB fields
let raw_meta = metadata::RawMetadata {
title: pt.raw_title.clone(),
artist: pt.raw_artist.clone(),
album: pt.raw_album.clone(),
track_number: pt.raw_track_number.map(|n| n as u32),
year: pt.raw_year.map(|n| n as u32),
genre: pt.raw_genre.clone(),
duration_secs: pt.duration_secs,
};
let hints = db::PathHints {
title: pt.path_title.clone(),
artist: pt.path_artist.clone(),
album: pt.path_album.clone(),
year: pt.path_year,
track_number: pt.path_track_number,
};
// RAG lookup
let artist_query = raw_meta.artist.as_deref()
.or(hints.artist.as_deref())
.unwrap_or("");
let album_query = raw_meta.album.as_deref()
.or(hints.album.as_deref())
.unwrap_or("");
let similar_artists = if !artist_query.is_empty() {
db::find_similar_artists(&state.pool, artist_query, 5).await.unwrap_or_default()
} else {
Vec::new()
};
let similar_albums = if !album_query.is_empty() {
db::find_similar_albums(&state.pool, album_query, 5).await.unwrap_or_default()
} else {
Vec::new()
};
// LLM normalization (no folder context available for reprocessing from DB)
match normalize::normalize(state, &raw_meta, &hints, &similar_artists, &similar_albums, None).await {
Ok(normalized) => {
let confidence = normalized.confidence.unwrap_or(0.0);
let status = if confidence >= state.config.confidence_threshold {
"approved"
} else {
"review"
};
tracing::info!(
id = %pt.id,
norm_artist = normalized.artist.as_deref().unwrap_or("-"),
norm_title = normalized.title.as_deref().unwrap_or("-"),
confidence,
status,
"Re-processing complete"
);
db::update_pending_normalized(&state.pool, pt.id, status, &normalized, None).await?;
if status == "approved" {
let artist = normalized.artist.as_deref().unwrap_or("Unknown Artist");
let album = normalized.album.as_deref().unwrap_or("Unknown Album");
let title = normalized.title.as_deref().unwrap_or("Unknown Title");
let source = std::path::Path::new(&pt.inbox_path);
let ext = source.extension().and_then(|e| e.to_str()).unwrap_or("flac");
let track_num = normalized.track_number.unwrap_or(0);
let dest_filename = if track_num > 0 {
format!("{:02} - {}.{}", track_num, sanitize_filename(title), ext)
} else {
format!("{}.{}", sanitize_filename(title), ext)
};
// Check if already moved
let dest = state.config.storage_dir
.join(sanitize_filename(artist))
.join(sanitize_filename(album))
.join(&dest_filename);
let (storage_path, was_merged) = if dest.exists() && !source.exists() {
(dest.to_string_lossy().to_string(), false)
} else if source.exists() {
match mover::move_to_storage(
&state.config.storage_dir, artist, album, &dest_filename, source,
).await {
Ok(mover::MoveOutcome::Moved(p)) => (p.to_string_lossy().to_string(), false),
Ok(mover::MoveOutcome::Merged(p)) => (p.to_string_lossy().to_string(), true),
Err(e) => {
tracing::error!(id = %pt.id, ?e, "Failed to move file");
db::update_pending_status(&state.pool, pt.id, "error", Some(&e.to_string())).await?;
continue;
}
}
} else {
// Source file is gone — check if already in library by hash
let in_library: (bool,) = sqlx::query_as(
"SELECT EXISTS(SELECT 1 FROM tracks WHERE file_hash = $1)"
)
.bind(&pt.file_hash)
.fetch_one(&state.pool).await.unwrap_or((false,));
if in_library.0 {
tracing::info!(id = %pt.id, "Source missing but track already in library — merging");
db::update_pending_status(&state.pool, pt.id, "merged", None).await?;
} else {
tracing::error!(id = %pt.id, "Source file missing: {:?}", source);
db::update_pending_status(&state.pool, pt.id, "error", Some("Source file missing")).await?;
}
continue;
};
match db::approve_and_finalize(&state.pool, pt.id, &storage_path).await {
Ok(track_id) => {
if was_merged {
let _ = db::update_pending_status(&state.pool, pt.id, "merged", None).await;
}
tracing::info!(id = %pt.id, track_id, "Track finalized");
}
Err(e) => tracing::error!(id = %pt.id, ?e, "Failed to finalize"),
}
}
count += 1;
}
Err(e) => {
tracing::error!(id = %pt.id, ?e, "LLM normalization failed");
db::update_pending_status(&state.pool, pt.id, "error", Some(&e.to_string())).await?;
}
}
}
Ok(count)
}
/// Recursively remove empty directories inside the inbox.
/// Does not remove the inbox root itself.
async fn cleanup_empty_dirs(dir: &std::path::Path) -> bool {
@@ -273,9 +440,35 @@ async fn process_file(state: &Arc<AppState>, file_path: &std::path::Path) -> any
tracing::info!(file = filename, matches = ?names, "Found similar albums in DB");
}
// Build folder context for the LLM
let audio_extensions = ["flac", "mp3", "ogg", "wav", "aac", "m4a", "opus", "wma", "ape", "alac"];
let folder_ctx = {
let folder = file_path.parent().unwrap_or(file_path);
let mut folder_files: Vec<String> = std::fs::read_dir(folder)
.ok()
.map(|rd| {
rd.filter_map(|e| e.ok())
.filter_map(|e| {
let name = e.file_name().to_string_lossy().into_owned();
let ext = name.rsplit('.').next().unwrap_or("").to_lowercase();
if audio_extensions.contains(&ext.as_str()) { Some(name) } else { None }
})
.collect()
})
.unwrap_or_default();
folder_files.sort();
let track_count = folder_files.len();
let folder_path = folder
.strip_prefix(&state.config.inbox_dir)
.unwrap_or(folder)
.to_string_lossy()
.into_owned();
normalize::FolderContext { folder_path, folder_files, track_count }
};
// Call LLM for normalization
tracing::info!(file = filename, model = %state.config.ollama_model, "Sending to LLM for normalization...");
-    match normalize::normalize(state, &raw_meta, &hints, &similar_artists, &similar_albums).await {
+    match normalize::normalize(state, &raw_meta, &hints, &similar_artists, &similar_albums, Some(&folder_ctx)).await {
Ok(normalized) => {
let confidence = normalized.confidence.unwrap_or(0.0);
let status = if confidence >= state.config.confidence_threshold {
@@ -335,10 +528,17 @@ async fn process_file(state: &Arc<AppState>, file_path: &std::path::Path) -> any
)
.await
{
-        Ok(storage_path) => {
+        Ok(outcome) => {
+            let (storage_path, was_merged) = match outcome {
+                mover::MoveOutcome::Moved(p) => (p, false),
+                mover::MoveOutcome::Merged(p) => (p, true),
+            };
let rel_path = storage_path.to_string_lossy().to_string();
match db::approve_and_finalize(&state.pool, pending_id, &rel_path).await {
Ok(track_id) => {
+                    if was_merged {
+                        let _ = db::update_pending_status(&state.pool, pending_id, "merged", None).await;
+                    }
tracing::info!(file = filename, track_id, storage = %rel_path, "Track finalized in database");
}
Err(e) => {

View File

@@ -1,18 +1,27 @@
use std::path::{Path, PathBuf};
pub enum MoveOutcome {
/// File was moved/renamed to destination.
Moved(PathBuf),
/// Destination already existed; inbox duplicate was removed.
Merged(PathBuf),
}
/// Move a file from inbox to the permanent storage directory.
///
/// Creates the directory structure: `storage_dir/artist/album/filename`
/// Returns the full path of the moved file.
///
/// If `rename` fails (cross-device), falls back to copy + remove.
/// If the destination already exists the inbox copy is removed and
/// `MoveOutcome::Merged` is returned instead of an error.
pub async fn move_to_storage(
storage_dir: &Path,
artist: &str,
album: &str,
filename: &str,
source: &Path,
-) -> anyhow::Result<PathBuf> {
+) -> anyhow::Result<MoveOutcome> {
let artist_dir = sanitize_dir_name(artist);
let album_dir = sanitize_dir_name(album);
@@ -21,9 +30,13 @@ pub async fn move_to_storage(
let dest = dest_dir.join(filename);
-    // Avoid overwriting existing files
+    // File already at destination — remove the inbox duplicate
     if dest.exists() {
-        anyhow::bail!("Destination already exists: {:?}", dest);
+        if source.exists() {
+            tokio::fs::remove_file(source).await?;
+            tracing::info!(from = ?source, to = ?dest, "merged duplicate into existing storage file");
+        }
+        return Ok(MoveOutcome::Merged(dest));
     }
// Try atomic rename first (same filesystem)
@@ -37,7 +50,7 @@ pub async fn move_to_storage(
}
tracing::info!(from = ?source, to = ?dest, "moved file to storage");
-    Ok(dest)
+    Ok(MoveOutcome::Moved(dest))
}
/// Remove characters that are unsafe for directory names.

View File

@@ -7,6 +7,13 @@ use crate::web::AppState;
use super::metadata::RawMetadata;
#[derive(Debug)]
pub struct FolderContext {
pub folder_path: String, // path relative to inbox_dir (e.g. "Kunteynir/Синглы/Пьюк")
pub folder_files: Vec<String>, // audio filenames in the same folder
pub track_count: usize, // number of audio files in folder
}
/// Build the user message with all context and call Ollama for normalization.
pub async fn normalize(
state: &Arc<AppState>,
@@ -14,14 +21,16 @@ pub async fn normalize(
hints: &crate::db::PathHints,
similar_artists: &[SimilarArtist],
similar_albums: &[SimilarAlbum],
folder_ctx: Option<&FolderContext>,
) -> anyhow::Result<NormalizedFields> {
-    let user_message = build_user_message(raw, hints, similar_artists, similar_albums);
+    let user_message = build_user_message(raw, hints, similar_artists, similar_albums, folder_ctx);
let response = call_ollama(
&state.config.ollama_url,
&state.config.ollama_model,
&state.system_prompt,
&user_message,
state.config.ollama_auth.as_deref(),
)
.await?;
@@ -33,6 +42,7 @@ fn build_user_message(
hints: &crate::db::PathHints,
similar_artists: &[SimilarArtist],
similar_albums: &[SimilarAlbum],
folder_ctx: Option<&FolderContext>,
) -> String {
let mut msg = String::from("## Raw metadata from file tags\n");
@@ -87,6 +97,18 @@ fn build_user_message(
}
}
if let Some(ctx) = folder_ctx {
msg.push_str("\n## Folder context\n");
msg.push_str(&format!("Folder path: \"{}\"\n", ctx.folder_path));
msg.push_str(&format!("Track count in folder: {}\n", ctx.track_count));
if !ctx.folder_files.is_empty() {
msg.push_str("Files in folder:\n");
for f in &ctx.folder_files {
msg.push_str(&format!(" - {}\n", f));
}
}
}
msg
}
@@ -120,11 +142,12 @@ struct OllamaResponseMessage {
content: String,
}
-async fn call_ollama(
+pub async fn call_ollama(
base_url: &str,
model: &str,
system_prompt: &str,
user_message: &str,
auth: Option<&str>,
) -> anyhow::Result<String> {
let client = reqwest::Client::builder()
.timeout(std::time::Duration::from_secs(120))
@@ -151,7 +174,11 @@ async fn call_ollama(
tracing::info!(%url, model, prompt_len = user_message.len(), "Calling Ollama API...");
let start = std::time::Instant::now();
-    let resp = client.post(&url).json(&request).send().await?;
+    let mut req = client.post(&url).json(&request);
+    if let Some(auth_header) = auth {
+        req = req.header("Authorization", auth_header);
+    }
+    let resp = req.send().await?;
let elapsed = start.elapsed();
if !resp.status().is_success() {
@@ -195,6 +222,7 @@ fn parse_response(response: &str) -> anyhow::Result<NormalizedFields> {
genre: Option<String>,
#[serde(default)]
featured_artists: Vec<String>,
release_type: Option<String>,
confidence: Option<f64>,
notes: Option<String>,
}
@@ -210,6 +238,7 @@ fn parse_response(response: &str) -> anyhow::Result<NormalizedFields> {
track_number: parsed.track_number,
genre: parsed.genre,
featured_artists: parsed.featured_artists,
release_type: parsed.release_type,
confidence: parsed.confidence,
notes: parsed.notes,
})

View File

@@ -1,6 +1,7 @@
mod config;
mod db;
mod ingest;
mod merge;
mod web;
use std::sync::Arc;
@@ -24,6 +25,9 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
let system_prompt = args.load_system_prompt()?;
tracing::info!("System prompt loaded: {} chars", system_prompt.len());
let merge_prompt = args.load_merge_prompt()?;
tracing::info!("Merge prompt loaded: {} chars", merge_prompt.len());
tracing::info!("Connecting to database...");
let pool = db::connect(&args.database_url).await?;
tracing::info!("Running database migrations...");
@@ -34,6 +38,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
pool: pool.clone(),
config: Arc::new(args),
system_prompt: Arc::new(system_prompt),
merge_prompt: Arc::new(merge_prompt),
});
// Spawn the ingest pipeline as a background task

furumi-agent/src/merge.rs Normal file
View File

@@ -0,0 +1,355 @@
use std::sync::Arc;
use serde::{Deserialize, Serialize};
use uuid::Uuid;
use crate::db;
use crate::web::AppState;
use crate::ingest::normalize::call_ollama;
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct MergeProposal {
pub canonical_artist_name: String,
pub winner_artist_id: i64,
pub album_mappings: Vec<AlbumMapping>,
pub notes: String,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct AlbumMapping {
pub source_album_id: i64,
pub canonical_name: String,
pub merge_into_album_id: Option<i64>,
}
pub async fn propose_merge(state: &Arc<AppState>, merge_id: Uuid) -> anyhow::Result<()> {
db::update_merge_status(&state.pool, merge_id, "processing", None).await?;
let merge = db::get_artist_merge(&state.pool, merge_id).await?
.ok_or_else(|| anyhow::anyhow!("Merge not found: {}", merge_id))?;
let source_ids: Vec<i64> = serde_json::from_str(&merge.source_artist_ids)
.map_err(|e| anyhow::anyhow!("Invalid source_artist_ids: {}", e))?;
let artists_data = db::get_artists_full_data(&state.pool, &source_ids).await?;
let user_message = build_merge_message(&artists_data);
let response = call_ollama(
&state.config.ollama_url,
&state.config.ollama_model,
&state.merge_prompt,
&user_message,
state.config.ollama_auth.as_deref(),
).await?;
let proposal = parse_merge_response(&response)?;
let notes = proposal.notes.clone();
let proposal_json = serde_json::to_string(&proposal)?;
db::update_merge_proposal(&state.pool, merge_id, &proposal_json, &notes).await?;
tracing::info!(id = %merge_id, "Merge proposal generated");
Ok(())
}
fn build_merge_message(artists: &[db::ArtistFullData]) -> String {
let mut msg = String::from("## Artists to merge\n\n");
for artist in artists {
msg.push_str(&format!("### Artist ID {}: \"{}\"\n", artist.id, artist.name));
if artist.albums.is_empty() {
msg.push_str(" (no albums)\n");
}
for album in &artist.albums {
let year_str = album.year.map(|y| format!(" ({})", y)).unwrap_or_default();
msg.push_str(&format!(" Album ID {}: \"{}\"{}\n", album.id, album.name, year_str));
for track in &album.tracks {
let num = track.track_number.map(|n| format!("{:02}. ", n)).unwrap_or_default();
msg.push_str(&format!(" - {}\"{}\" [track_id={}]\n", num, track.title, track.id));
}
}
msg.push('\n');
}
msg
}
fn parse_merge_response(response: &str) -> anyhow::Result<MergeProposal> {
let cleaned = response.trim();
let json_str = if cleaned.starts_with("```") {
let start = cleaned.find('{').unwrap_or(0);
let end = cleaned.rfind('}').map(|i| i + 1).unwrap_or(cleaned.len());
&cleaned[start..end]
} else {
cleaned
};
serde_json::from_str(json_str)
.map_err(|e| anyhow::anyhow!("Failed to parse merge LLM response: {} — raw: {}", e, response))
}
pub async fn execute_merge(state: &Arc<AppState>, merge_id: Uuid) -> anyhow::Result<()> {
let merge = db::get_artist_merge(&state.pool, merge_id).await?
.ok_or_else(|| anyhow::anyhow!("Merge not found"))?;
let proposal_str = merge.proposal.ok_or_else(|| anyhow::anyhow!("No proposal to execute"))?;
let proposal: MergeProposal = serde_json::from_str(&proposal_str)?;
let source_ids: Vec<i64> = serde_json::from_str(&merge.source_artist_ids)?;
let loser_ids: Vec<i64> = source_ids.iter().copied()
.filter(|&id| id != proposal.winner_artist_id).collect();
// Execute all DB mutations in a single atomic transaction.
// On error the transaction rolls back automatically (dropped without commit).
let mut tx = state.pool.begin().await?;
if let Err(e) = merge_db(&mut tx, &proposal, &loser_ids).await {
// tx is dropped here → auto-rollback
return Err(e);
}
tx.commit().await?;
// Move files after commit (best-effort; storage_path updated per file)
let tracks = db::get_tracks_with_albums_for_artist(&state.pool, proposal.winner_artist_id).await?;
for track in &tracks {
let current = std::path::Path::new(&track.storage_path);
let filename = match current.file_name() {
Some(f) => f.to_string_lossy().to_string(),
None => continue,
};
let album_name = track.album_name.as_deref().unwrap_or("Unknown Album");
let new_path = state.config.storage_dir
.join(sanitize(&proposal.canonical_artist_name))
.join(sanitize(album_name))
.join(&filename);
if current != new_path.as_path() {
if current.exists() {
if let Some(parent) = new_path.parent() {
let _ = tokio::fs::create_dir_all(parent).await;
}
            let moved = tokio::fs::rename(current, &new_path).await;
            if moved.is_err() {
                // Cross-device rename failed; fall back to copy + remove.
                if tokio::fs::copy(current, &new_path).await.is_ok() {
                    let _ = tokio::fs::remove_file(current).await;
                }
            }
}
db::update_track_storage_path(&state.pool, track.id, &new_path.to_string_lossy()).await?;
}
}
db::update_merge_status(&state.pool, merge_id, "approved", None).await?;
tracing::info!(id = %merge_id, "Merge executed successfully");
Ok(())
}
/// All DB mutations for a merge, executed inside a single transaction.
/// `tx` is a `Transaction<'_, Postgres>` which derefs to `PgConnection`.
async fn merge_db(
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
proposal: &MergeProposal,
loser_ids: &[i64],
) -> anyhow::Result<()> {
// 0. Validate proposal — ensure winner and all album IDs belong to source artists
let source_ids: Vec<i64> = loser_ids.iter().copied()
.chain(std::iter::once(proposal.winner_artist_id))
.collect();
// Verify winner_artist_id is one of the source artists
if !source_ids.contains(&proposal.winner_artist_id) {
anyhow::bail!(
"winner_artist_id {} is not among source artists {:?}",
proposal.winner_artist_id, source_ids
);
}
// Build set of valid album IDs (albums that actually belong to source artists)
let mut valid_album_ids = std::collections::HashSet::<i64>::new();
for &src_id in &source_ids {
let rows: Vec<(i64,)> = sqlx::query_as("SELECT id FROM albums WHERE artist_id = $1")
.bind(src_id).fetch_all(&mut **tx).await?;
for (id,) in rows { valid_album_ids.insert(id); }
}
// 1. Rename winner artist to canonical name
sqlx::query("UPDATE artists SET name = $2 WHERE id = $1")
.bind(proposal.winner_artist_id)
.bind(&proposal.canonical_artist_name)
.execute(&mut **tx).await?;
// 2. Process album mappings from the proposal
for mapping in &proposal.album_mappings {
// Skip albums that don't belong to any source artist (LLM hallucinated IDs)
if !valid_album_ids.contains(&mapping.source_album_id) {
tracing::warn!(
album_id = mapping.source_album_id,
"Skipping album mapping: album does not belong to source artists"
);
continue;
}
// Skip if source was already processed (idempotent retry support)
let src_exists: (bool,) = sqlx::query_as("SELECT EXISTS(SELECT 1 FROM albums WHERE id = $1)")
.bind(mapping.source_album_id)
.fetch_one(&mut **tx).await?;
if !src_exists.0 { continue; }
if let Some(target_id) = mapping.merge_into_album_id {
album_merge_into(tx, mapping.source_album_id, target_id).await?;
} else {
// Rename first
sqlx::query("UPDATE albums SET name = $2 WHERE id = $1")
.bind(mapping.source_album_id)
.bind(&mapping.canonical_name)
.execute(&mut **tx).await?;
// Check if winner already has an album with this canonical name (excluding self)
let conflict: Option<(i64,)> = sqlx::query_as(
"SELECT id FROM albums WHERE artist_id = $1 AND name = $2 AND id != $3"
)
.bind(proposal.winner_artist_id)
.bind(&mapping.canonical_name)
.bind(mapping.source_album_id)
.fetch_optional(&mut **tx).await?;
if let Some((existing_id,)) = conflict {
album_merge_into(tx, mapping.source_album_id, existing_id).await?;
} else {
// Just move to winner artist (only if not already there)
sqlx::query(
"UPDATE albums SET artist_id = $2 WHERE id = $1 AND artist_id != $2"
)
.bind(mapping.source_album_id)
.bind(proposal.winner_artist_id)
.execute(&mut **tx).await?;
}
}
}
// 3. Move all remaining albums from each loser to winner, merging name conflicts
for &loser_id in loser_ids {
loop {
// Fetch one album at a time; loop because merging changes the set
let album: Option<(i64, String)> = sqlx::query_as(
"SELECT id, name FROM albums WHERE artist_id = $1 LIMIT 1"
)
.bind(loser_id)
.fetch_optional(&mut **tx).await?;
let (album_id, album_name) = match album {
Some(a) => a,
None => break,
};
let conflict: Option<(i64,)> = sqlx::query_as(
"SELECT id FROM albums WHERE artist_id = $1 AND name = $2"
)
.bind(proposal.winner_artist_id)
.bind(&album_name)
.fetch_optional(&mut **tx).await?;
if let Some((existing_id,)) = conflict {
// Merge loser album into winner album
album_merge_into(tx, album_id, existing_id).await?;
} else {
sqlx::query("UPDATE albums SET artist_id = $2 WHERE id = $1")
.bind(album_id)
.bind(proposal.winner_artist_id)
.execute(&mut **tx).await?;
}
}
}
// 4. Move track_artists from losers to winner
for &loser_id in loser_ids {
// Remove winner's entries that would conflict after the update
sqlx::query(
r#"DELETE FROM track_artists
WHERE artist_id = $2
AND (track_id, role) IN (
SELECT track_id, role FROM track_artists WHERE artist_id = $1
)"#
)
.bind(loser_id)
.bind(proposal.winner_artist_id)
.execute(&mut **tx).await?;
sqlx::query("UPDATE track_artists SET artist_id = $2 WHERE artist_id = $1")
.bind(loser_id)
.bind(proposal.winner_artist_id)
.execute(&mut **tx).await?;
}
// 5. Delete loser artists (should be empty of albums/tracks by now)
for &loser_id in loser_ids {
sqlx::query("DELETE FROM artists WHERE id = $1")
.bind(loser_id)
.execute(&mut **tx).await?;
}
Ok(())
}
/// Merge source album into target within an open transaction:
/// deduplicate by file_hash, move the rest, delete source.
async fn album_merge_into(
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
source_id: i64,
target_id: i64,
) -> anyhow::Result<()> {
// Verify target exists
let target_ok: (bool,) = sqlx::query_as("SELECT EXISTS(SELECT 1 FROM albums WHERE id = $1)")
.bind(target_id)
.fetch_one(&mut **tx).await?;
if !target_ok.0 {
anyhow::bail!("Target album {} does not exist", target_id);
}
// Delete duplicate tracks from source (same file_hash already in target)
let dups: Vec<(i64,)> = sqlx::query_as(
r#"SELECT t1.id FROM tracks t1
JOIN tracks t2 ON t1.file_hash = t2.file_hash AND t2.album_id = $2
WHERE t1.album_id = $1"#
)
.bind(source_id)
.bind(target_id)
.fetch_all(&mut **tx).await?;
for (dup_id,) in dups {
// Retrieve path for later file deletion (non-fatal if missing)
let path: Option<(String,)> = sqlx::query_as("SELECT storage_path FROM tracks WHERE id = $1")
.bind(dup_id).fetch_optional(&mut **tx).await?;
if let Some((p,)) = path {
            // Best-effort physical deletion, spawned immediately. Note that the
            // spawned task runs before the transaction commits, so a rollback
            // can still leave this file already removed from disk.
tokio::spawn(async move {
let _ = tokio::fs::remove_file(&p).await;
});
}
sqlx::query("DELETE FROM track_artists WHERE track_id = $1").bind(dup_id).execute(&mut **tx).await?;
sqlx::query("DELETE FROM tracks WHERE id = $1").bind(dup_id).execute(&mut **tx).await?;
}
// Move remaining tracks from source to target
sqlx::query("UPDATE tracks SET album_id = $2 WHERE album_id = $1")
.bind(source_id)
.bind(target_id)
.execute(&mut **tx).await?;
// Delete the now-empty source album
sqlx::query("DELETE FROM albums WHERE id = $1")
.bind(source_id)
.execute(&mut **tx).await?;
Ok(())
}
fn sanitize(name: &str) -> String {
name.chars()
.map(|c| match c {
'/' | '\\' | ':' | '*' | '?' | '"' | '<' | '>' | '|' | '\0' => '_',
_ => c,
})
.collect::<String>()
.trim()
.trim_matches('.')
.to_owned()
}
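`parse_merge_response` above strips markdown code fences before handing the LLM reply to serde_json. That extraction step can be sketched on its own; the helper name `extract_json` is mine, and the logic mirrors the diff's first-`{` / last-`}` slicing.

```rust
// Sketch of the fence-stripping done before JSON parsing in parse_merge_response.
fn extract_json(response: &str) -> &str {
    let cleaned = response.trim();
    if cleaned.starts_with("```") {
        // Take everything between the first '{' and the last '}'.
        let start = cleaned.find('{').unwrap_or(0);
        let end = cleaned.rfind('}').map(|i| i + 1).unwrap_or(cleaned.len());
        &cleaned[start..end]
    } else {
        cleaned
    }
}

fn main() {
    // A fenced reply, as LLMs often produce.
    let fenced = "```json\n{\"canonical_artist_name\":\"Deep Purple\"}\n```";
    assert_eq!(extract_json(fenced), "{\"canonical_artist_name\":\"Deep Purple\"}");
    // A bare JSON reply passes through untouched.
    assert_eq!(extract_json("{\"a\":1}"), "{\"a\":1}");
    println!("ok");
}
```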

File diff suppressed because it is too large

View File

@@ -82,21 +82,30 @@ pub async fn approve_queue_item(State(state): State<S>, Path(id): Path<Uuid>) ->
format!("{}.{}", sanitize_filename(title), ext)
};
-    match crate::ingest::mover::move_to_storage(
-        &state.config.storage_dir,
-        artist,
-        album,
-        &filename,
-        source,
-    )
-    .await
-    {
-        Ok(storage_path) => {
-            let rel_path = storage_path.to_string_lossy().to_string();
-            match db::approve_and_finalize(&state.pool, id, &rel_path).await {
-                Ok(track_id) => (StatusCode::OK, Json(serde_json::json!({"track_id": track_id}))).into_response(),
-                Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
+    let artist_dir = sanitize_filename(artist);
+    let album_dir = sanitize_filename(album);
+    let dest = state.config.storage_dir.join(&artist_dir).join(&album_dir).join(&filename);
+    use crate::ingest::mover::MoveOutcome;
+    let (storage_path, was_merged) = if dest.exists() && !source.exists() {
+        // File already moved (e.g. auto-approved earlier but DB not finalized)
+        (dest.to_string_lossy().to_string(), false)
+    } else {
+        match crate::ingest::mover::move_to_storage(
+            &state.config.storage_dir, artist, album, &filename, source,
+        ).await {
+            Ok(MoveOutcome::Moved(p)) => (p.to_string_lossy().to_string(), false),
+            Ok(MoveOutcome::Merged(p)) => (p.to_string_lossy().to_string(), true),
+            Err(e) => return error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
+        }
+    };
+    match db::approve_and_finalize(&state.pool, id, &storage_path).await {
+        Ok(track_id) => {
+            if was_merged {
+                let _ = db::update_pending_status(&state.pool, id, "merged", None).await;
+            }
+            (StatusCode::OK, Json(serde_json::json!({"track_id": track_id}))).into_response()
+        }
+        Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
@@ -134,6 +143,7 @@ pub async fn update_queue_item(
track_number: body.norm_track_number,
genre: body.norm_genre,
featured_artists: body.featured_artists,
release_type: None,
confidence: Some(1.0), // manual edit = full confidence
notes: Some("Manually edited".to_owned()),
};
@@ -144,6 +154,105 @@ pub async fn update_queue_item(
}
}
// --- Retry ---
pub async fn retry_queue_item(State(state): State<S>, Path(id): Path<Uuid>) -> impl IntoResponse {
match db::update_pending_status(&state.pool, id, "pending", None).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
// --- Batch operations ---
#[derive(Deserialize)]
pub struct BatchIds {
pub ids: Vec<Uuid>,
}
pub async fn batch_approve(State(state): State<S>, Json(body): Json<BatchIds>) -> impl IntoResponse {
let mut ok = 0u32;
let mut errors = Vec::new();
for id in &body.ids {
let pt = match db::get_pending(&state.pool, *id).await {
Ok(Some(pt)) => pt,
Ok(None) => { errors.push(format!("{}: not found", id)); continue; }
Err(e) => { errors.push(format!("{}: {}", id, e)); continue; }
};
let artist = pt.norm_artist.as_deref().unwrap_or("Unknown Artist");
let album = pt.norm_album.as_deref().unwrap_or("Unknown Album");
let title = pt.norm_title.as_deref().unwrap_or("Unknown Title");
let source = std::path::Path::new(&pt.inbox_path);
let ext = source.extension().and_then(|e| e.to_str()).unwrap_or("flac");
let track_num = pt.norm_track_number.unwrap_or(0);
let filename = if track_num > 0 {
format!("{:02} - {}.{}", track_num, sanitize_filename(title), ext)
} else {
format!("{}.{}", sanitize_filename(title), ext)
};
let artist_dir = sanitize_filename(artist);
let album_dir = sanitize_filename(album);
let dest = state.config.storage_dir.join(&artist_dir).join(&album_dir).join(&filename);
use crate::ingest::mover::MoveOutcome;
let (rel_path, was_merged) = if dest.exists() && !source.exists() {
(dest.to_string_lossy().to_string(), false)
} else {
match crate::ingest::mover::move_to_storage(
&state.config.storage_dir, artist, album, &filename, source,
).await {
Ok(MoveOutcome::Moved(p)) => (p.to_string_lossy().to_string(), false),
Ok(MoveOutcome::Merged(p)) => (p.to_string_lossy().to_string(), true),
Err(e) => { errors.push(format!("{}: {}", id, e)); continue; }
}
};
match db::approve_and_finalize(&state.pool, *id, &rel_path).await {
Ok(_) => {
if was_merged {
let _ = db::update_pending_status(&state.pool, *id, "merged", None).await;
}
ok += 1;
}
Err(e) => errors.push(format!("{}: {}", id, e)),
}
}
(StatusCode::OK, Json(serde_json::json!({"approved": ok, "errors": errors}))).into_response()
}
pub async fn batch_reject(State(state): State<S>, Json(body): Json<BatchIds>) -> impl IntoResponse {
let mut ok = 0u32;
for id in &body.ids {
if db::update_pending_status(&state.pool, *id, "rejected", None).await.is_ok() {
ok += 1;
}
}
(StatusCode::OK, Json(serde_json::json!({"rejected": ok}))).into_response()
}
pub async fn batch_retry(State(state): State<S>, Json(body): Json<BatchIds>) -> impl IntoResponse {
let mut ok = 0u32;
for id in &body.ids {
if db::update_pending_status(&state.pool, *id, "pending", None).await.is_ok() {
ok += 1;
}
}
(StatusCode::OK, Json(serde_json::json!({"retried": ok}))).into_response()
}
pub async fn batch_delete(State(state): State<S>, Json(body): Json<BatchIds>) -> impl IntoResponse {
let mut ok = 0u32;
for id in &body.ids {
if db::delete_pending(&state.pool, *id).await.unwrap_or(false) {
ok += 1;
}
}
(StatusCode::OK, Json(serde_json::json!({"deleted": ok}))).into_response()
}
// --- Artists ---
#[derive(Deserialize)]
@@ -218,6 +327,441 @@ pub async fn update_album(
}
}
// --- Merges ---
#[derive(Deserialize)]
pub struct CreateMergeBody {
pub artist_ids: Vec<i64>,
}
pub async fn create_merge(State(state): State<S>, Json(body): Json<CreateMergeBody>) -> impl IntoResponse {
if body.artist_ids.len() < 2 {
return error_response(StatusCode::BAD_REQUEST, "need at least 2 artists to merge");
}
match db::insert_artist_merge(&state.pool, &body.artist_ids).await {
Ok(id) => (StatusCode::OK, Json(serde_json::json!({"id": id}))).into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn list_merges(State(state): State<S>) -> impl IntoResponse {
match db::list_artist_merges(&state.pool).await {
Ok(items) => (StatusCode::OK, Json(serde_json::to_value(items).unwrap())).into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn get_merge(State(state): State<S>, Path(id): Path<Uuid>) -> impl IntoResponse {
let merge = match db::get_artist_merge(&state.pool, id).await {
Ok(Some(m)) => m,
Ok(None) => return error_response(StatusCode::NOT_FOUND, "not found"),
Err(e) => return error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
};
let source_ids: Vec<i64> = serde_json::from_str(&merge.source_artist_ids).unwrap_or_default();
let artists = match db::get_artists_full_data(&state.pool, &source_ids).await {
Ok(a) => a,
Err(e) => return error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
};
let proposal: Option<serde_json::Value> = merge.proposal.as_deref()
.and_then(|p| serde_json::from_str(p).ok());
(StatusCode::OK, Json(serde_json::json!({
"merge": {
"id": merge.id,
"status": merge.status,
"source_artist_ids": source_ids,
"llm_notes": merge.llm_notes,
"error_message": merge.error_message,
"created_at": merge.created_at,
"updated_at": merge.updated_at,
},
"artists": artists,
"proposal": proposal,
}))).into_response()
}
#[derive(Deserialize)]
pub struct UpdateMergeBody {
pub proposal: serde_json::Value,
}
pub async fn update_merge(
State(state): State<S>,
Path(id): Path<Uuid>,
Json(body): Json<UpdateMergeBody>,
) -> impl IntoResponse {
let notes = body.proposal.get("notes")
.and_then(|v| v.as_str())
.unwrap_or("")
.to_owned();
let proposal_json = match serde_json::to_string(&body.proposal) {
Ok(s) => s,
Err(e) => return error_response(StatusCode::BAD_REQUEST, &e.to_string()),
};
match db::update_merge_proposal(&state.pool, id, &proposal_json, &notes).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn approve_merge(State(state): State<S>, Path(id): Path<Uuid>) -> impl IntoResponse {
match crate::merge::execute_merge(&state, id).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => {
let msg = e.to_string();
let _ = db::update_merge_status(&state.pool, id, "error", Some(&msg)).await;
error_response(StatusCode::INTERNAL_SERVER_ERROR, &msg)
}
}
}
pub async fn reject_merge(State(state): State<S>, Path(id): Path<Uuid>) -> impl IntoResponse {
match db::update_merge_status(&state.pool, id, "rejected", None).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn retry_merge(State(state): State<S>, Path(id): Path<Uuid>) -> impl IntoResponse {
match db::update_merge_status(&state.pool, id, "pending", None).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
// --- Library search ---
#[derive(Deserialize)]
pub struct LibraryQuery {
#[serde(default)]
pub q: String,
#[serde(default)]
pub artist: String,
#[serde(default)]
pub album: String,
#[serde(default = "default_lib_limit")]
pub limit: i64,
#[serde(default)]
pub offset: i64,
}
fn default_lib_limit() -> i64 { 50 }
pub async fn library_tracks(State(state): State<S>, Query(q): Query<LibraryQuery>) -> impl IntoResponse {
let (tracks, total) = tokio::join!(
db::search_tracks(&state.pool, &q.q, &q.artist, &q.album, q.limit, q.offset),
db::count_tracks(&state.pool, &q.q, &q.artist, &q.album),
);
match (tracks, total) {
(Ok(rows), Ok(n)) => (StatusCode::OK, Json(serde_json::json!({"total": n, "items": rows}))).into_response(),
(Err(e), _) | (_, Err(e)) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn library_albums(State(state): State<S>, Query(q): Query<LibraryQuery>) -> impl IntoResponse {
let (albums, total) = tokio::join!(
db::search_albums(&state.pool, &q.q, &q.artist, q.limit, q.offset),
db::count_albums(&state.pool, &q.q, &q.artist),
);
match (albums, total) {
(Ok(rows), Ok(n)) => (StatusCode::OK, Json(serde_json::json!({"total": n, "items": rows}))).into_response(),
(Err(e), _) | (_, Err(e)) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn library_artists(State(state): State<S>, Query(q): Query<LibraryQuery>) -> impl IntoResponse {
let (artists, total) = tokio::join!(
db::search_artists_lib(&state.pool, &q.q, q.limit, q.offset),
db::count_artists_lib(&state.pool, &q.q),
);
match (artists, total) {
(Ok(rows), Ok(n)) => (StatusCode::OK, Json(serde_json::json!({"total": n, "items": rows}))).into_response(),
(Err(e), _) | (_, Err(e)) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
// --- Track / Album detail & edit ---
pub async fn get_track(State(state): State<S>, Path(id): Path<i64>) -> impl IntoResponse {
match db::get_track_full(&state.pool, id).await {
Ok(Some(t)) => (StatusCode::OK, Json(serde_json::to_value(t).unwrap())).into_response(),
Ok(None) => error_response(StatusCode::NOT_FOUND, "not found"),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn update_track(
State(state): State<S>,
Path(id): Path<i64>,
Json(body): Json<db::TrackUpdateFields>,
) -> impl IntoResponse {
match db::update_track_metadata(&state.pool, id, &body).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn get_album_full(State(state): State<S>, Path(id): Path<i64>) -> impl IntoResponse {
match db::get_album_details(&state.pool, id).await {
Ok(Some(a)) => (StatusCode::OK, Json(serde_json::to_value(a).unwrap())).into_response(),
Ok(None) => error_response(StatusCode::NOT_FOUND, "not found"),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
#[derive(Deserialize)]
pub struct AlbumUpdateBody {
pub name: String,
pub year: Option<i32>,
pub artist_id: i64,
}
pub async fn update_album_full(
State(state): State<S>,
Path(id): Path<i64>,
Json(body): Json<AlbumUpdateBody>,
) -> impl IntoResponse {
match db::update_album_full(&state.pool, id, &body.name, body.year, body.artist_id).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
#[derive(Deserialize)]
pub struct SetGenreBody { pub genre: String }
pub async fn set_album_tracks_genre(
State(state): State<S>,
Path(id): Path<i64>,
Json(body): Json<SetGenreBody>,
) -> impl IntoResponse {
match db::set_album_tracks_genre(&state.pool, id, &body.genre).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
#[derive(Deserialize)]
pub struct ReorderBody {
pub orders: Vec<(i64, i32)>,
}
pub async fn reorder_album_tracks(
State(state): State<S>,
Path(_id): Path<i64>,
Json(body): Json<ReorderBody>,
) -> impl IntoResponse {
match db::reorder_tracks(&state.pool, &body.orders).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
/// Cover by artist+album name — used for queue items that may not have an album_id yet.
#[derive(Deserialize)]
pub struct CoverByNameQuery {
#[serde(default)] pub artist: String,
#[serde(default)] pub name: String,
}
pub async fn album_cover_by_name(State(state): State<S>, Query(q): Query<CoverByNameQuery>) -> impl IntoResponse {
let album_id = match db::find_album_id(&state.pool, &q.artist, &q.name).await {
Ok(Some(id)) => id,
_ => return StatusCode::NOT_FOUND.into_response(),
};
album_cover_by_id(&state, album_id).await
}
pub async fn album_cover(State(state): State<S>, Path(id): Path<i64>) -> impl IntoResponse {
album_cover_by_id(&state, id).await
}
async fn album_cover_by_id(state: &super::AppState, id: i64) -> axum::response::Response {
// 1. Try album_images table
if let Ok(Some((file_path, mime_type))) = db::get_album_cover(&state.pool, id).await {
if let Ok(bytes) = tokio::fs::read(&file_path).await {
return ([(axum::http::header::CONTENT_TYPE, mime_type)], bytes).into_response();
}
}
// 2. Fallback: extract embedded cover from first track in album
if let Ok(Some(track_path)) = db::get_album_first_track_path(&state.pool, id).await {
let path = std::path::PathBuf::from(track_path);
if path.exists() {
let result = tokio::task::spawn_blocking(move || extract_embedded_cover(&path)).await;
if let Ok(Some((bytes, mime))) = result {
return ([(axum::http::header::CONTENT_TYPE, mime)], bytes).into_response();
}
}
}
StatusCode::NOT_FOUND.into_response()
}
fn extract_embedded_cover(path: &std::path::Path) -> Option<(Vec<u8>, String)> {
use symphonia::core::{
formats::FormatOptions,
io::MediaSourceStream,
meta::MetadataOptions,
probe::Hint,
};
let file = std::fs::File::open(path).ok()?;
let mss = MediaSourceStream::new(Box::new(file), Default::default());
let mut hint = Hint::new();
if let Some(ext) = path.extension().and_then(|e| e.to_str()) {
hint.with_extension(ext);
}
let mut probed = symphonia::default::get_probe()
.format(
&hint,
mss,
&FormatOptions { enable_gapless: false, ..Default::default() },
&MetadataOptions::default(),
)
.ok()?;
if let Some(rev) = probed.metadata.get().as_ref().and_then(|m| m.current()) {
if let Some(v) = rev.visuals().first() {
return Some((v.data.to_vec(), v.media_type.clone()));
}
}
if let Some(rev) = probed.format.metadata().current() {
if let Some(v) = rev.visuals().first() {
return Some((v.data.to_vec(), v.media_type.clone()));
}
}
None
}
#[derive(Deserialize)]
pub struct AlbumSearchQuery {
#[serde(default)]
pub q: String,
pub artist_id: Option<i64>,
}
pub async fn search_albums_for_artist(State(state): State<S>, Query(q): Query<AlbumSearchQuery>) -> impl IntoResponse {
match db::search_albums_for_artist(&state.pool, &q.q, q.artist_id).await {
Ok(items) => (StatusCode::OK, Json(serde_json::to_value(
items.iter().map(|(id, name)| serde_json::json!({"id": id, "name": name})).collect::<Vec<_>>()
).unwrap())).into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
// --- Artist full admin form ---
pub async fn get_artist_full(State(state): State<S>, Path(id): Path<i64>) -> impl IntoResponse {
let artist = match db::get_artist_by_id(&state.pool, id).await {
Ok(Some(a)) => a,
Ok(None) => return error_response(StatusCode::NOT_FOUND, "not found"),
Err(e) => return error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
};
let (albums, appearances) = tokio::join!(
db::get_artist_albums(&state.pool, id),
db::get_artist_appearances(&state.pool, id),
);
// For each album, load tracks
let albums = match albums {
Ok(a) => a,
Err(e) => return error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
};
let mut albums_with_tracks = Vec::new();
for album in albums {
let tracks = db::get_album_tracks_admin(&state.pool, album.id).await.unwrap_or_default();
albums_with_tracks.push(serde_json::json!({
"id": album.id, "name": album.name, "year": album.year,
"release_type": album.release_type, "hidden": album.hidden,
"track_count": album.track_count, "tracks": tracks,
}));
}
(StatusCode::OK, Json(serde_json::json!({
"artist": artist,
"albums": albums_with_tracks,
"appearances": appearances.unwrap_or_default(),
}))).into_response()
}
#[derive(Deserialize)]
pub struct SetHiddenBody { pub hidden: bool }
pub async fn set_track_hidden(State(state): State<S>, Path(id): Path<i64>, Json(b): Json<SetHiddenBody>) -> impl IntoResponse {
match db::set_track_hidden(&state.pool, id, b.hidden).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn set_album_hidden(State(state): State<S>, Path(id): Path<i64>, Json(b): Json<SetHiddenBody>) -> impl IntoResponse {
match db::set_album_hidden(&state.pool, id, b.hidden).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn set_artist_hidden(State(state): State<S>, Path(id): Path<i64>, Json(b): Json<SetHiddenBody>) -> impl IntoResponse {
match db::set_artist_hidden(&state.pool, id, b.hidden).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
#[derive(Deserialize)]
pub struct SetReleaseTypeBody { pub release_type: String }
pub async fn set_album_release_type(State(state): State<S>, Path(id): Path<i64>, Json(b): Json<SetReleaseTypeBody>) -> impl IntoResponse {
let valid = ["album","single","ep","compilation","live"];
if !valid.contains(&b.release_type.as_str()) {
return error_response(StatusCode::BAD_REQUEST, "invalid release_type");
}
match db::set_album_release_type(&state.pool, id, &b.release_type).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
#[derive(Deserialize)]
pub struct RenameArtistBody { pub name: String }
pub async fn rename_artist_api(State(state): State<S>, Path(id): Path<i64>, Json(b): Json<RenameArtistBody>) -> impl IntoResponse {
match db::rename_artist_name(&state.pool, id, &b.name).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
#[derive(Deserialize)]
pub struct AddAppearanceBody { pub track_id: i64 }
pub async fn add_appearance(State(state): State<S>, Path(artist_id): Path<i64>, Json(b): Json<AddAppearanceBody>) -> impl IntoResponse {
match db::add_track_appearance(&state.pool, b.track_id, artist_id).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn remove_appearance(State(state): State<S>, Path((artist_id, track_id)): Path<(i64, i64)>) -> impl IntoResponse {
match db::remove_track_appearance(&state.pool, track_id, artist_id).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
#[derive(Deserialize)]
pub struct SearchTracksQuery { #[serde(default)] pub q: String }
pub async fn search_tracks_feat(State(state): State<S>, Query(q): Query<SearchTracksQuery>) -> impl IntoResponse {
match db::search_tracks_for_feat(&state.pool, &q.q).await {
Ok(rows) => (StatusCode::OK, Json(serde_json::to_value(
rows.iter().map(|(id, title, artist)| serde_json::json!({"id": id, "title": title, "artist_name": artist})).collect::<Vec<_>>()
).unwrap())).into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
// --- Helpers ---
fn error_response(status: StatusCode, message: &str) -> axum::response::Response {


@@ -2,7 +2,7 @@ pub mod api;
use std::sync::Arc;
use axum::{Router, routing::{get, post, put}};
use axum::{Router, routing::{delete, get, post, put}};
use sqlx::PgPool;
use crate::config::Args;
@@ -12,6 +12,7 @@ pub struct AppState {
pub pool: PgPool,
pub config: Arc<Args>,
pub system_prompt: Arc<String>,
pub merge_prompt: Arc<String>,
}
pub fn build_router(state: Arc<AppState>) -> Router {
@@ -21,12 +22,42 @@ pub fn build_router(state: Arc<AppState>) -> Router {
.route("/queue/:id", get(api::get_queue_item).delete(api::delete_queue_item))
.route("/queue/:id/approve", post(api::approve_queue_item))
.route("/queue/:id/reject", post(api::reject_queue_item))
.route("/queue/:id/retry", post(api::retry_queue_item))
.route("/queue/:id/update", put(api::update_queue_item))
.route("/queue/batch/approve", post(api::batch_approve))
.route("/queue/batch/reject", post(api::batch_reject))
.route("/queue/batch/retry", post(api::batch_retry))
.route("/queue/batch/delete", post(api::batch_delete))
.route("/artists/search", get(api::search_artists))
.route("/artists", get(api::list_artists))
.route("/artists/:id", put(api::update_artist))
.route("/artists/:id/albums", get(api::list_albums))
.route("/albums/:id", put(api::update_album));
.route("/artists/:id/full", get(api::get_artist_full))
.route("/artists/:id/hidden", put(api::set_artist_hidden))
.route("/artists/:id/rename", put(api::rename_artist_api))
.route("/artists/:id/appearances", post(api::add_appearance))
.route("/artists/:id/appearances/:track_id", delete(api::remove_appearance))
.route("/tracks/search", get(api::search_tracks_feat))
.route("/tracks/:id", get(api::get_track).put(api::update_track))
.route("/tracks/:id/hidden", put(api::set_track_hidden))
.route("/albums/search", get(api::search_albums_for_artist))
.route("/albums/cover-by-name", get(api::album_cover_by_name))
.route("/albums/:id/cover", get(api::album_cover))
.route("/albums/:id/full", get(api::get_album_full))
.route("/albums/:id/reorder", put(api::reorder_album_tracks))
.route("/albums/:id/edit", put(api::update_album_full))
.route("/albums/:id/genre", put(api::set_album_tracks_genre))
.route("/albums/:id/hidden", put(api::set_album_hidden))
.route("/albums/:id/release_type", put(api::set_album_release_type))
.route("/albums/:id", put(api::update_album))
.route("/merges", get(api::list_merges).post(api::create_merge))
.route("/merges/:id", get(api::get_merge).put(api::update_merge))
.route("/merges/:id/approve", post(api::approve_merge))
.route("/merges/:id/reject", post(api::reject_merge))
.route("/merges/:id/retry", post(api::retry_merge))
.route("/library/tracks", get(api::library_tracks))
.route("/library/albums", get(api::library_albums))
.route("/library/artists", get(api::library_artists));
Router::new()
.route("/", get(admin_html))


@@ -0,0 +1,19 @@
# auth-app
The project consists of two parts:
- `client` - Vite + React + TypeScript
- `server` - Express + TypeScript + OIDC authentication
## Running
1. Copy `server/.env.example` to `server/.env` and fill in the OIDC parameters.
- To run without authentication, set `DISABLE_AUTH=true` (the OIDC parameters are then not required).
2. In one terminal:
- `cd server`
- `npm run dev`
3. In another terminal:
- `cd client`
- `npm run dev`
The client runs at `http://localhost:5173`, the server at `http://localhost:3001`.
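For reference, a hypothetical `server/.env` sketch. Only `DISABLE_AUTH` is named in this README; the OIDC variable names below are assumptions, so check `server/.env.example` for the actual keys:

```ini
# Hypothetical example — verify key names against server/.env.example
DISABLE_AUTH=false
# OIDC settings (names are assumptions, not confirmed by this repo)
OIDC_ISSUER_URL=https://auth.example.com/realms/main
OIDC_CLIENT_ID=furumi-node-player
OIDC_CLIENT_SECRET=changeme
OIDC_REDIRECT_URI=http://localhost:3001/api/callback
```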

furumi-node-player/client/.gitignore

@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules
dist
dist-ssr
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?


@@ -0,0 +1,73 @@
# React + TypeScript + Vite
This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules.
Currently, two official plugins are available:
- [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react) uses [Oxc](https://oxc.rs)
- [@vitejs/plugin-react-swc](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react-swc) uses [SWC](https://swc.rs/)
## React Compiler
The React Compiler is not enabled on this template because of its impact on dev & build performance. To add it, see [this documentation](https://react.dev/learn/react-compiler/installation).
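If you do enable it, a minimal sketch of the Vite config change — assuming the Babel-based setup from the linked React docs with `babel-plugin-react-compiler` installed; the exact option names should be verified against that documentation:

```js
// vite.config.ts — hypothetical sketch, not part of this template
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [
    react({
      // Run the React Compiler as a Babel plugin during transform
      babel: { plugins: [['babel-plugin-react-compiler', {}]] },
    }),
  ],
})
```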
## Expanding the ESLint configuration
If you are developing a production application, we recommend updating the configuration to enable type-aware lint rules:
```js
export default defineConfig([
globalIgnores(['dist']),
{
files: ['**/*.{ts,tsx}'],
extends: [
// Other configs...
// Remove tseslint.configs.recommended and replace with this
tseslint.configs.recommendedTypeChecked,
// Alternatively, use this for stricter rules
tseslint.configs.strictTypeChecked,
// Optionally, add this for stylistic rules
tseslint.configs.stylisticTypeChecked,
// Other configs...
],
languageOptions: {
parserOptions: {
project: ['./tsconfig.node.json', './tsconfig.app.json'],
tsconfigRootDir: import.meta.dirname,
},
// other options...
},
},
])
```
You can also install [eslint-plugin-react-x](https://github.com/Rel1cx/eslint-react/tree/main/packages/plugins/eslint-plugin-react-x) and [eslint-plugin-react-dom](https://github.com/Rel1cx/eslint-react/tree/main/packages/plugins/eslint-plugin-react-dom) for React-specific lint rules:
```js
// eslint.config.js
import reactX from 'eslint-plugin-react-x'
import reactDom from 'eslint-plugin-react-dom'
export default defineConfig([
globalIgnores(['dist']),
{
files: ['**/*.{ts,tsx}'],
extends: [
// Other configs...
// Enable lint rules for React
reactX.configs['recommended-typescript'],
// Enable lint rules for React DOM
reactDom.configs.recommended,
],
languageOptions: {
parserOptions: {
project: ['./tsconfig.node.json', './tsconfig.app.json'],
tsconfigRootDir: import.meta.dirname,
},
// other options...
},
},
])
```


@@ -0,0 +1,23 @@
import js from '@eslint/js'
import globals from 'globals'
import reactHooks from 'eslint-plugin-react-hooks'
import reactRefresh from 'eslint-plugin-react-refresh'
import tseslint from 'typescript-eslint'
import { defineConfig, globalIgnores } from 'eslint/config'
export default defineConfig([
globalIgnores(['dist']),
{
files: ['**/*.{ts,tsx}'],
extends: [
js.configs.recommended,
tseslint.configs.recommended,
reactHooks.configs.flat.recommended,
reactRefresh.configs.vite,
],
languageOptions: {
ecmaVersion: 2020,
globals: globals.browser,
},
},
])


@@ -0,0 +1,13 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/favicon.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>client</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
</body>
</html>

furumi-node-player/client/package-lock.json

File diff suppressed because it is too large.


@@ -0,0 +1,30 @@
{
"name": "client",
"private": true,
"version": "0.0.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "tsc -b && vite build",
"lint": "eslint .",
"preview": "vite preview"
},
"dependencies": {
"react": "^19.2.4",
"react-dom": "^19.2.4"
},
"devDependencies": {
"@eslint/js": "^9.39.4",
"@types/node": "^24.12.0",
"@types/react": "^19.2.14",
"@types/react-dom": "^19.2.3",
"@vitejs/plugin-react": "^6.0.1",
"eslint": "^9.39.4",
"eslint-plugin-react-hooks": "^7.0.1",
"eslint-plugin-react-refresh": "^0.5.2",
"globals": "^17.4.0",
"typescript": "~5.9.3",
"typescript-eslint": "^8.57.0",
"vite": "^8.0.1"
}
}

File diff suppressed because one or more lines are too long


@@ -0,0 +1,24 @@
<svg xmlns="http://www.w3.org/2000/svg">
<symbol id="bluesky-icon" viewBox="0 0 16 17">
<g clip-path="url(#bluesky-clip)"><path fill="#08060d" d="M7.75 7.735c-.693-1.348-2.58-3.86-4.334-5.097-1.68-1.187-2.32-.981-2.74-.79C.188 2.065.1 2.812.1 3.251s.241 3.602.398 4.13c.52 1.744 2.367 2.333 4.07 2.145-2.495.37-4.71 1.278-1.805 4.512 3.196 3.309 4.38-.71 4.987-2.746.608 2.036 1.307 5.91 4.93 2.746 2.72-2.746.747-4.143-1.747-4.512 1.702.189 3.55-.4 4.07-2.145.156-.528.397-3.691.397-4.13s-.088-1.186-.575-1.406c-.42-.19-1.06-.395-2.741.79-1.755 1.24-3.64 3.752-4.334 5.099"/></g>
<defs><clipPath id="bluesky-clip"><path fill="#fff" d="M.1.85h15.3v15.3H.1z"/></clipPath></defs>
</symbol>
<symbol id="discord-icon" viewBox="0 0 20 19">
<path fill="#08060d" d="M16.224 3.768a14.5 14.5 0 0 0-3.67-1.153c-.158.286-.343.67-.47.976a13.5 13.5 0 0 0-4.067 0c-.128-.306-.317-.69-.476-.976A14.4 14.4 0 0 0 3.868 3.77C1.546 7.28.916 10.703 1.231 14.077a14.7 14.7 0 0 0 4.5 2.306q.545-.748.965-1.587a9.5 9.5 0 0 1-1.518-.74q.191-.14.372-.293c2.927 1.369 6.107 1.369 8.999 0q.183.152.372.294-.723.437-1.52.74.418.838.963 1.588a14.6 14.6 0 0 0 4.504-2.308c.37-3.911-.63-7.302-2.644-10.309m-9.13 8.234c-.878 0-1.599-.82-1.599-1.82 0-.998.705-1.82 1.6-1.82.894 0 1.614.82 1.599 1.82.001 1-.705 1.82-1.6 1.82m5.91 0c-.878 0-1.599-.82-1.599-1.82 0-.998.705-1.82 1.6-1.82.893 0 1.614.82 1.599 1.82 0 1-.706 1.82-1.6 1.82"/>
</symbol>
<symbol id="documentation-icon" viewBox="0 0 21 20">
<path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="m15.5 13.333 1.533 1.322c.645.555.967.833.967 1.178s-.322.623-.967 1.179L15.5 18.333m-3.333-5-1.534 1.322c-.644.555-.966.833-.966 1.178s.322.623.966 1.179l1.534 1.321"/>
<path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="M17.167 10.836v-4.32c0-1.41 0-2.117-.224-2.68-.359-.906-1.118-1.621-2.08-1.96-.599-.21-1.349-.21-2.848-.21-2.623 0-3.935 0-4.983.369-1.684.591-3.013 1.842-3.641 3.428C3 6.449 3 7.684 3 10.154v2.122c0 2.558 0 3.838.706 4.726q.306.383.713.671c.76.536 1.79.64 3.581.66"/>
<path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="M3 10a2.78 2.78 0 0 1 2.778-2.778c.555 0 1.209.097 1.748-.047.48-.129.854-.503.982-.982.145-.54.048-1.194.048-1.749a2.78 2.78 0 0 1 2.777-2.777"/>
</symbol>
<symbol id="github-icon" viewBox="0 0 19 19">
<path fill="#08060d" fill-rule="evenodd" d="M9.356 1.85C5.05 1.85 1.57 5.356 1.57 9.694a7.84 7.84 0 0 0 5.324 7.44c.387.079.528-.168.528-.376 0-.182-.013-.805-.013-1.454-2.165.467-2.616-.935-2.616-.935-.349-.91-.864-1.143-.864-1.143-.71-.48.051-.48.051-.48.787.051 1.2.805 1.2.805.695 1.194 1.817.857 2.268.649.064-.507.27-.857.49-1.052-1.728-.182-3.545-.857-3.545-3.87 0-.857.31-1.558.8-2.104-.078-.195-.349-1 .077-2.078 0 0 .657-.208 2.14.805a7.5 7.5 0 0 1 1.946-.26c.657 0 1.328.092 1.946.26 1.483-1.013 2.14-.805 2.14-.805.426 1.078.155 1.883.078 2.078.502.546.799 1.247.799 2.104 0 3.013-1.818 3.675-3.558 3.87.284.247.528.714.528 1.454 0 1.052-.012 1.896-.012 2.156 0 .208.142.455.528.377a7.84 7.84 0 0 0 5.324-7.441c.013-4.338-3.48-7.844-7.773-7.844" clip-rule="evenodd"/>
</symbol>
<symbol id="social-icon" viewBox="0 0 20 20">
<path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="M12.5 6.667a4.167 4.167 0 1 0-8.334 0 4.167 4.167 0 0 0 8.334 0"/>
<path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="M2.5 16.667a5.833 5.833 0 0 1 8.75-5.053m3.837.474.513 1.035c.07.144.257.282.414.309l.93.155c.596.1.736.536.307.965l-.723.73a.64.64 0 0 0-.152.531l.207.903c.164.715-.213.991-.84.618l-.872-.52a.63.63 0 0 0-.577 0l-.872.52c-.624.373-1.003.094-.84-.618l.207-.903a.64.64 0 0 0-.152-.532l-.723-.729c-.426-.43-.289-.864.306-.964l.93-.156a.64.64 0 0 0 .412-.31l.513-1.034c.28-.562.735-.562 1.012 0"/>
</symbol>
<symbol id="x-icon" viewBox="0 0 19 19">
<path fill="#08060d" fill-rule="evenodd" d="M1.893 1.98c.052.072 1.245 1.769 2.653 3.77l2.892 4.114c.183.261.333.48.333.486s-.068.089-.152.183l-.522.593-.765.867-3.597 4.087c-.375.426-.734.834-.798.905a1 1 0 0 0-.118.148c0 .01.236.017.664.017h.663l.729-.83c.4-.457.796-.906.879-.999a692 692 0 0 0 1.794-2.038c.034-.037.301-.34.594-.675l.551-.624.345-.392a7 7 0 0 1 .34-.374c.006 0 .93 1.306 2.052 2.903l2.084 2.965.045.063h2.275c1.87 0 2.273-.003 2.266-.021-.008-.02-1.098-1.572-3.894-5.547-2.013-2.862-2.28-3.246-2.273-3.266.008-.019.282-.332 2.085-2.38l2-2.274 1.567-1.782c.022-.028-.016-.03-.65-.03h-.674l-.3.342a871 871 0 0 1-1.782 2.025c-.067.075-.405.458-.75.852a100 100 0 0 1-.803.91c-.148.172-.299.344-.99 1.127-.304.343-.32.358-.345.327-.015-.019-.904-1.282-1.976-2.808L6.365 1.85H1.8zm1.782.91 8.078 11.294c.772 1.08 1.413 1.973 1.425 1.984.016.017.241.02 1.05.017l1.03-.004-2.694-3.766L7.796 5.75 5.722 2.852l-1.039-.004-1.039-.004z" clip-rule="evenodd"/>
</symbol>
</svg>


@@ -0,0 +1,71 @@
.page {
min-height: 100vh;
display: grid;
place-items: center;
padding: 24px;
}
.card {
width: min(520px, 100%);
border: 1px solid #d8dde6;
border-radius: 14px;
padding: 24px;
background-color: #ffffff;
box-shadow: 0 12px 30px rgba(0, 0, 0, 0.08);
}
.subtitle {
margin-top: 0;
margin-bottom: 20px;
color: #5a6475;
}
.settings {
margin-bottom: 16px;
padding: 12px;
border: 1px solid #e6eaf2;
border-radius: 10px;
background: #f8fafc;
}
.toggle {
display: flex;
align-items: center;
gap: 10px;
color: #0f172a;
font-weight: 600;
}
.toggle input {
width: 18px;
height: 18px;
}
.hint {
margin: 10px 0 0;
color: #5a6475;
}
.btn {
display: inline-block;
text-decoration: none;
background: #2251ff;
color: #ffffff;
padding: 10px 16px;
border-radius: 8px;
font-weight: 600;
}
.btn.ghost {
background: #edf1ff;
color: #1e3fc4;
margin-top: 10px;
}
.profile p {
margin: 8px 0;
}
.error {
color: #cc1e1e;
}


@@ -0,0 +1,142 @@
import { useEffect, useMemo, useState } from 'react'
import { FurumiPlayer } from './FurumiPlayer'
import './App.css'
type UserProfile = {
sub: string
name?: string
email?: string
}
const NO_AUTH_STORAGE_KEY = 'furumiNodePlayer.runWithoutAuth'
function App() {
const [loading, setLoading] = useState(true)
const [user, setUser] = useState<UserProfile | null>(null)
const [error, setError] = useState<string | null>(null)
const [runWithoutAuth, setRunWithoutAuth] = useState(() => {
try {
return window.localStorage.getItem(NO_AUTH_STORAGE_KEY) === '1'
} catch {
return false
}
})
const apiBase = useMemo(() => import.meta.env.VITE_API_BASE_URL ?? '', [])
useEffect(() => {
if (runWithoutAuth) {
setError(null)
setUser({ sub: 'noauth', name: 'No Auth' })
setLoading(false)
return
}
const loadMe = async () => {
try {
const response = await fetch(`${apiBase}/api/me`, {
credentials: 'include',
})
if (response.status === 401) {
setUser(null)
return
}
if (!response.ok) {
throw new Error(`Request failed with status ${response.status}`)
}
const data = await response.json()
setUser(data.user ?? null)
} catch (err) {
setError(err instanceof Error ? err.message : 'Failed to load session')
} finally {
setLoading(false)
}
}
void loadMe()
}, [apiBase, runWithoutAuth])
const loginUrl = `${apiBase}/api/login`
const logoutUrl = `${apiBase}/api/logout`
const playerApiRoot = `${apiBase}/api`
return (
<>
{!loading && (user || runWithoutAuth) ? (
<FurumiPlayer apiRoot={playerApiRoot} />
) : (
<main className="page">
<section className="card">
<h1>OIDC Login</h1>
<p className="subtitle">Authentication is handled on the Express server.</p>
<div className="settings">
<label className="toggle">
<input
type="checkbox"
checked={runWithoutAuth}
onChange={(e) => {
const next = e.target.checked
setRunWithoutAuth(next)
try {
if (next) window.localStorage.setItem(NO_AUTH_STORAGE_KEY, '1')
else window.localStorage.removeItem(NO_AUTH_STORAGE_KEY)
} catch {
// ignore
}
setLoading(true)
setUser(null)
}}
/>
<span>Run without authentication</span>
</label>
</div>
{loading && <p>Checking session...</p>}
{error && <p className="error">Error: {error}</p>}
{!loading && runWithoutAuth && (
<p className="hint">
No-auth mode is enabled. To sign in, disable the setting above.
</p>
)}
{!loading && !user && (
<a className="btn" href={loginUrl}>
Sign in with OIDC
</a>
)}
{!loading && user && (
<div className="profile">
<p>
<strong>ID:</strong> {user.sub}
</p>
{user.name && (
<p>
<strong>Name:</strong> {user.name}
</p>
)}
{user.email && (
<p>
<strong>Email:</strong> {user.email}
</p>
)}
{!runWithoutAuth && (
<a className="btn ghost" href={logoutUrl}>
Sign out
</a>
)}
</div>
)}
</section>
</main>
)}
</>
)
}
export default App

@@ -0,0 +1,767 @@
import { useEffect, useRef, useState, type MouseEvent as ReactMouseEvent } from 'react'
import './furumi-player.css'
import { createFurumiApiClient } from './furumiApi'
import { SearchDropdown } from './components/SearchDropdown'
import { Breadcrumbs } from './components/Breadcrumbs'
import { LibraryList } from './components/LibraryList'
import { QueueList, type QueueItem } from './components/QueueList'
import { NowPlaying } from './components/NowPlaying'
type FurumiPlayerProps = {
apiRoot: string
}
type Crumb = { label: string; action?: () => void }
export function FurumiPlayer({ apiRoot }: FurumiPlayerProps) {
  const [breadcrumbs, setBreadcrumbs] = useState<Crumb[]>([])
const [libraryLoading, setLibraryLoading] = useState(false)
const [libraryError, setLibraryError] = useState<string | null>(null)
const [libraryItems, setLibraryItems] = useState<
Array<{
key: string
className: string
icon: string
name: string
detail?: string
nameClassName?: string
onClick: () => void
button?: { title: string; onClick: (ev: ReactMouseEvent<HTMLButtonElement>) => void }
}>
>([])
const [searchResults, setSearchResults] = useState<
Array<{ result_type: string; slug: string; name: string; detail?: string }>
>([])
const [searchOpen, setSearchOpen] = useState(false)
const searchSelectRef = useRef<(type: string, slug: string) => void>(() => {})
const [nowPlayingTrack, setNowPlayingTrack] = useState<QueueItem | null>(null)
const [queueItemsView, setQueueItemsView] = useState<QueueItem[]>([])
const [queueOrderView, setQueueOrderView] = useState<number[]>([])
const [queuePlayingOrigIdxView, setQueuePlayingOrigIdxView] = useState<number>(-1)
const [queueScrollSignal, setQueueScrollSignal] = useState(0)
const queueActionsRef = useRef<{
playIndex: (i: number) => void
removeFromQueue: (idx: number) => void
moveQueueItem: (fromPos: number, toPos: number) => void
} | null>(null)
useEffect(() => {
// --- Original player script adapted for React environment ---
const audio = document.getElementById('audioEl') as HTMLAudioElement
if (!audio) return
let queue: QueueItem[] = []
let queueIndex = -1
let shuffle = false
let repeatAll = true
let shuffleOrder: number[] = []
let searchTimer: number | null = null
let toastTimer: number | null = null
let muted = false
// Restore prefs
try {
const v = window.localStorage.getItem('furumi_vol')
const volSlider = document.getElementById('volSlider') as HTMLInputElement | null
if (v !== null && volSlider) {
audio.volume = Number(v) / 100
volSlider.value = v
}
const btnShuffle = document.getElementById('btnShuffle')
const btnRepeat = document.getElementById('btnRepeat')
shuffle = window.localStorage.getItem('furumi_shuffle') === '1'
repeatAll = window.localStorage.getItem('furumi_repeat') !== '0'
btnShuffle?.classList.toggle('active', shuffle)
btnRepeat?.classList.toggle('active', repeatAll)
} catch {
// ignore
}
// --- Audio events ---
audio.addEventListener('timeupdate', () => {
if (audio.duration) {
const fill = document.getElementById('progressFill')
const timeElapsed = document.getElementById('timeElapsed')
const timeDuration = document.getElementById('timeDuration')
if (fill) fill.style.width = `${(audio.currentTime / audio.duration) * 100}%`
if (timeElapsed) timeElapsed.textContent = fmt(audio.currentTime)
if (timeDuration) timeDuration.textContent = fmt(audio.duration)
}
})
audio.addEventListener('ended', () => nextTrack())
audio.addEventListener('play', () => {
const btn = document.getElementById('btnPlayPause')
if (btn) btn.innerHTML = '&#9208;'
})
audio.addEventListener('pause', () => {
const btn = document.getElementById('btnPlayPause')
if (btn) btn.innerHTML = '&#9654;'
})
audio.addEventListener('error', () => {
showToast('Playback error')
nextTrack()
})
// --- API helper ---
const API = apiRoot
const api = createFurumiApiClient(API)
// --- Library navigation ---
async function showArtists() {
setBreadcrumb([{ label: 'Artists', action: showArtists }])
setLibraryLoading(true)
setLibraryError(null)
const artists = await api('/artists')
if (!artists) {
setLibraryLoading(false)
        setLibraryError('Failed to load artists')
return
}
setLibraryLoading(false)
setLibraryItems(
(artists as any[]).map((a) => ({
key: `artist:${a.slug}`,
className: 'file-item dir',
icon: '👤',
name: a.name,
detail: `${a.album_count} albums`,
onClick: () => void showArtistAlbums(a.slug, a.name),
})),
)
}
async function showArtistAlbums(artistSlug: string, artistName: string) {
setBreadcrumb([
{ label: 'Artists', action: showArtists },
{ label: artistName, action: () => showArtistAlbums(artistSlug, artistName) },
])
setLibraryLoading(true)
setLibraryError(null)
const albums = await api('/artists/' + artistSlug + '/albums')
if (!albums) {
setLibraryLoading(false)
        setLibraryError('Failed to load albums')
return
}
setLibraryLoading(false)
const allTracksItem = {
key: `artist-all:${artistSlug}`,
className: 'file-item',
icon: '▶',
name: 'Play all tracks',
nameClassName: 'name',
onClick: () => void playAllArtistTracks(artistSlug),
}
const albumItems = (albums as any[]).map((a) => {
const year = a.year ? ` (${a.year})` : ''
return {
key: `album:${a.slug}`,
className: 'file-item dir',
icon: '💿',
name: `${a.name}${year}`,
detail: `${a.track_count} tracks`,
onClick: () => void showAlbumTracks(a.slug, a.name, artistSlug, artistName),
button: {
title: 'Add album to queue',
onClick: (ev: ReactMouseEvent<HTMLButtonElement>) => {
ev.stopPropagation()
void addAlbumToQueue(a.slug)
},
},
}
})
setLibraryItems([allTracksItem, ...albumItems])
}
async function showAlbumTracks(
albumSlug: string,
albumName: string,
artistSlug: string,
artistName: string,
) {
setBreadcrumb([
{ label: 'Artists', action: showArtists },
{ label: artistName, action: () => showArtistAlbums(artistSlug, artistName) },
{ label: albumName },
])
setLibraryLoading(true)
setLibraryError(null)
const tracks = await api('/albums/' + albumSlug)
if (!tracks) {
setLibraryLoading(false)
        setLibraryError('Failed to load tracks')
return
}
setLibraryLoading(false)
const playAlbumItem = {
key: `album-play:${albumSlug}`,
className: 'file-item',
icon: '▶',
name: 'Play album',
onClick: () => {
void addAlbumToQueue(albumSlug, true)
},
}
const trackItems = (tracks as any[]).map((t) => {
const num = t.track_number ? `${t.track_number}. ` : ''
const dur = t.duration_secs ? fmt(t.duration_secs) : ''
return {
key: `track:${t.slug}`,
className: 'file-item',
icon: '🎵',
name: `${num}${t.title}`,
detail: dur,
onClick: () => {
addTrackToQueue(
{
slug: t.slug,
title: t.title,
artist: t.artist_name,
album_slug: albumSlug,
duration: t.duration_secs,
},
true,
)
},
}
})
setLibraryItems([playAlbumItem, ...trackItems])
}
function setBreadcrumb(parts: Crumb[]) {
setBreadcrumbs(parts)
}
// --- Queue management ---
function addTrackToQueue(
track: {
slug: string
title: string
artist: string
album_slug: string | null
duration: number | null
},
playNow?: boolean,
) {
const existing = queue.findIndex((t) => t.slug === track.slug)
if (existing !== -1) {
if (playNow) playIndex(existing)
return
}
queue.push(track)
updateQueueModel()
if (playNow || (queueIndex === -1 && queue.length === 1)) {
playIndex(queue.length - 1)
}
}
async function addAlbumToQueue(albumSlug: string, playFirst?: boolean) {
const tracks = await api('/albums/' + albumSlug)
if (!tracks || !(tracks as any[]).length) return
const list = tracks as any[]
let firstIdx = queue.length
list.forEach((t) => {
if (queue.find((q) => q.slug === t.slug)) return
queue.push({
slug: t.slug,
title: t.title,
artist: t.artist_name,
album_slug: albumSlug,
duration: t.duration_secs,
})
})
updateQueueModel()
if (playFirst || queueIndex === -1) playIndex(firstIdx)
showToast(`Added ${list.length} tracks`)
}
async function playAllArtistTracks(artistSlug: string) {
const tracks = await api('/artists/' + artistSlug + '/tracks')
if (!tracks || !(tracks as any[]).length) return
const list = tracks as any[]
clearQueue()
list.forEach((t) => {
queue.push({
slug: t.slug,
title: t.title,
artist: t.artist_name,
album_slug: t.album_slug,
duration: t.duration_secs,
})
})
updateQueueModel()
playIndex(0)
showToast(`Added ${list.length} tracks`)
}
function playIndex(i: number) {
if (i < 0 || i >= queue.length) return
queueIndex = i
const track = queue[i]
audio.src = `${API}/stream/${track.slug}`
void audio.play().catch(() => {})
updateNowPlaying(track)
updateQueueModel()
setQueueScrollSignal((s) => s + 1)
if (window.history && window.history.replaceState) {
const url = new URL(window.location.href)
url.searchParams.set('t', track.slug)
window.history.replaceState(null, '', url.toString())
}
}
function updateNowPlaying(track: QueueItem | null) {
setNowPlayingTrack(track)
if (!track) return
document.title = `${track.title} — Furumi`
const coverUrl = `${API}/tracks/${track.slug}/cover`
if ('mediaSession' in navigator) {
navigator.mediaSession.metadata = new window.MediaMetadata({
title: track.title,
artist: track.artist || '',
album: '',
artwork: [{ src: coverUrl, sizes: '512x512' }],
})
}
}
function currentOrder() {
if (!shuffle) return [...Array(queue.length).keys()]
if (shuffleOrder.length !== queue.length) buildShuffleOrder()
return shuffleOrder
}
function buildShuffleOrder() {
shuffleOrder = [...Array(queue.length).keys()]
for (let i = shuffleOrder.length - 1; i > 0; i--) {
const j = Math.floor(Math.random() * (i + 1))
;[shuffleOrder[i], shuffleOrder[j]] = [shuffleOrder[j], shuffleOrder[i]]
}
if (queueIndex !== -1) {
const ci = shuffleOrder.indexOf(queueIndex)
if (ci > 0) {
shuffleOrder.splice(ci, 1)
shuffleOrder.unshift(queueIndex)
}
}
}
function updateQueueModel() {
const order = currentOrder()
setQueueItemsView(queue.slice())
setQueueOrderView(order.slice())
setQueuePlayingOrigIdxView(queueIndex)
}
function removeFromQueue(idx: number) {
if (idx === queueIndex) {
queueIndex = -1
audio.pause()
audio.src = ''
updateNowPlaying(null)
} else if (queueIndex > idx) {
queueIndex--
}
queue.splice(idx, 1)
if (shuffle) {
const si = shuffleOrder.indexOf(idx)
if (si !== -1) shuffleOrder.splice(si, 1)
for (let i = 0; i < shuffleOrder.length; i++) {
if (shuffleOrder[i] > idx) shuffleOrder[i]--
}
}
updateQueueModel()
}
function moveQueueItem(from: number, to: number) {
if (from === to) return
if (shuffle) {
const item = shuffleOrder.splice(from, 1)[0]
shuffleOrder.splice(to, 0, item)
} else {
const item = queue.splice(from, 1)[0]
queue.splice(to, 0, item)
if (queueIndex === from) queueIndex = to
else if (from < queueIndex && to >= queueIndex) queueIndex--
else if (from > queueIndex && to <= queueIndex) queueIndex++
}
updateQueueModel()
}
queueActionsRef.current = {
playIndex,
removeFromQueue,
moveQueueItem,
}
function clearQueue() {
queue = []
queueIndex = -1
shuffleOrder = []
audio.pause()
audio.src = ''
updateNowPlaying(null)
document.title = 'Furumi Player'
updateQueueModel()
}
// --- Playback controls ---
function togglePlay() {
if (!audio.src && queue.length) {
playIndex(queueIndex === -1 ? 0 : queueIndex)
return
}
if (audio.paused) void audio.play()
else audio.pause()
}
function nextTrack() {
if (!queue.length) return
const order = currentOrder()
const pos = order.indexOf(queueIndex)
if (pos < order.length - 1) playIndex(order[pos + 1])
else if (repeatAll) {
if (shuffle) buildShuffleOrder()
playIndex(currentOrder()[0])
}
}
function prevTrack() {
if (!queue.length) return
if (audio.currentTime > 3) {
audio.currentTime = 0
return
}
const order = currentOrder()
const pos = order.indexOf(queueIndex)
if (pos > 0) playIndex(order[pos - 1])
else if (repeatAll) playIndex(order[order.length - 1])
}
function toggleShuffle() {
shuffle = !shuffle
if (shuffle) buildShuffleOrder()
const btn = document.getElementById('btnShuffle')
btn?.classList.toggle('active', shuffle)
window.localStorage.setItem('furumi_shuffle', shuffle ? '1' : '0')
updateQueueModel()
}
function toggleRepeat() {
repeatAll = !repeatAll
const btn = document.getElementById('btnRepeat')
btn?.classList.toggle('active', repeatAll)
window.localStorage.setItem('furumi_repeat', repeatAll ? '1' : '0')
}
// --- Seek & Volume ---
function seekTo(e: MouseEvent) {
if (!audio.duration) return
const bar = document.getElementById('progressBar') as HTMLDivElement | null
if (!bar) return
const rect = bar.getBoundingClientRect()
const pct = (e.clientX - rect.left) / rect.width
audio.currentTime = pct * audio.duration
}
function toggleMute() {
muted = !muted
audio.muted = muted
const volIcon = document.getElementById('volIcon')
if (volIcon) volIcon.innerHTML = muted ? '&#128263;' : '&#128266;'
}
function setVolume(v: number) {
audio.volume = v / 100
const volIcon = document.getElementById('volIcon')
if (volIcon) volIcon.innerHTML = v === 0 ? '&#128263;' : '&#128266;'
window.localStorage.setItem('furumi_vol', String(v))
}
// --- Search ---
function onSearch(q: string) {
if (searchTimer) {
window.clearTimeout(searchTimer)
}
if (q.length < 2) {
closeSearch()
return
}
searchTimer = window.setTimeout(async () => {
const results = await api('/search?q=' + encodeURIComponent(q))
if (!results || !(results as any[]).length) {
closeSearch()
return
}
setSearchResults(results as any[])
setSearchOpen(true)
}, 250)
}
function closeSearch() {
setSearchOpen(false)
setSearchResults([])
}
function onSearchSelect(type: string, slug: string) {
closeSearch()
if (type === 'artist') void showArtistAlbums(slug, '')
else if (type === 'album') void addAlbumToQueue(slug, true)
else if (type === 'track') {
addTrackToQueue(
{ slug, title: '', artist: '', album_slug: null, duration: null },
true,
)
void api('/stream/' + slug).catch(() => null)
}
}
searchSelectRef.current = onSearchSelect
// --- Helpers ---
function fmt(secs: number) {
if (!secs || Number.isNaN(secs)) return '0:00'
const s = Math.floor(secs)
const m = Math.floor(s / 60)
const h = Math.floor(m / 60)
if (h > 0) {
return `${h}:${pad(m % 60)}:${pad(s % 60)}`
}
return `${m}:${pad(s % 60)}`
}
function pad(n: number) {
return String(n).padStart(2, '0')
}
function showToast(msg: string) {
const t = document.getElementById('toast')
if (!t) return
t.textContent = msg
t.classList.add('show')
if (toastTimer) window.clearTimeout(toastTimer)
toastTimer = window.setTimeout(() => t.classList.remove('show'), 2500)
}
function toggleSidebar() {
const sidebar = document.getElementById('sidebar')
const overlay = document.getElementById('sidebarOverlay')
sidebar?.classList.toggle('open')
overlay?.classList.toggle('show')
}
// --- MediaSession ---
if ('mediaSession' in navigator) {
try {
navigator.mediaSession.setActionHandler('play', togglePlay)
navigator.mediaSession.setActionHandler('pause', togglePlay)
navigator.mediaSession.setActionHandler('previoustrack', prevTrack)
navigator.mediaSession.setActionHandler('nexttrack', nextTrack)
navigator.mediaSession.setActionHandler('seekto', (d: any) => {
if (typeof d.seekTime === 'number') {
audio.currentTime = d.seekTime
}
})
} catch {
// ignore
}
}
// --- Wire DOM events that were inline in HTML ---
const btnMenu = document.querySelector('.btn-menu')
btnMenu?.addEventListener('click', () => toggleSidebar())
const sidebarOverlay = document.getElementById('sidebarOverlay')
sidebarOverlay?.addEventListener('click', () => toggleSidebar())
const searchInput = document.getElementById('searchInput') as HTMLInputElement | null
if (searchInput) {
searchInput.addEventListener('input', (e) => {
onSearch((e.target as HTMLInputElement).value)
})
searchInput.addEventListener('keydown', (e: KeyboardEvent) => {
if (e.key === 'Escape') closeSearch()
})
}
const btnShuffle = document.getElementById('btnShuffle')
btnShuffle?.addEventListener('click', () => toggleShuffle())
const btnRepeat = document.getElementById('btnRepeat')
btnRepeat?.addEventListener('click', () => toggleRepeat())
const btnClear = document.getElementById('btnClearQueue')
btnClear?.addEventListener('click', () => clearQueue())
const btnPrev = document.getElementById('btnPrev')
btnPrev?.addEventListener('click', () => prevTrack())
const btnPlay = document.getElementById('btnPlayPause')
btnPlay?.addEventListener('click', () => togglePlay())
const btnNext = document.getElementById('btnNext')
btnNext?.addEventListener('click', () => nextTrack())
const progressBar = document.getElementById('progressBar')
progressBar?.addEventListener('click', (e) => seekTo(e as MouseEvent))
const volIcon = document.getElementById('volIcon')
volIcon?.addEventListener('click', () => toggleMute())
const volSlider = document.getElementById('volSlider') as HTMLInputElement | null
if (volSlider) {
volSlider.addEventListener('input', (e) => {
const v = Number((e.target as HTMLInputElement).value)
setVolume(v)
})
}
// --- Init ---
;(async () => {
const url = new URL(window.location.href)
const urlSlug = url.searchParams.get('t')
if (urlSlug) {
const info = await api('/tracks/' + urlSlug)
if (info) {
addTrackToQueue(
{
slug: (info as any).slug,
title: (info as any).title,
artist: (info as any).artist_name,
album_slug: (info as any).album_slug,
duration: (info as any).duration_secs,
},
true,
)
}
}
void showArtists()
})()
    // Cleanup on unmount: stop playback and detach queue actions.
    // Element listeners are not removed individually; they are discarded
    // together with the DOM subtree when the component unmounts.
return () => {
queueActionsRef.current = null
audio.pause()
}
}, [apiRoot])
return (
<div className="furumi-root">
<header className="header">
<div className="header-logo">
<button className="btn-menu">&#9776;</button>
<svg viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="2">
<circle cx="9" cy="18" r="3" />
<circle cx="18" cy="15" r="3" />
<path d="M12 18V6l9-3v3" />
</svg>
Furumi
<span className="header-version">v</span>
</div>
<div style={{ display: 'flex', alignItems: 'center', gap: '1rem' }}>
<div className="search-wrap">
<input id="searchInput" placeholder="Search..." />
<SearchDropdown
isOpen={searchOpen}
results={searchResults}
onSelect={(type, slug) => searchSelectRef.current(type, slug)}
/>
</div>
</div>
</header>
<div className="main">
<div className="sidebar-overlay" id="sidebarOverlay" />
<aside className="sidebar" id="sidebar">
<div className="sidebar-header">Library</div>
<Breadcrumbs items={breadcrumbs} />
<div className="file-list" id="fileList">
<LibraryList loading={libraryLoading} error={libraryError} items={libraryItems} />
</div>
</aside>
<section className="queue-panel">
<div className="queue-header">
<span>Queue</span>
<div className="queue-actions">
<button className="queue-btn active" id="btnShuffle">
Shuffle
</button>
<button className="queue-btn active" id="btnRepeat">
Repeat
</button>
<button className="queue-btn" id="btnClearQueue">
Clear
</button>
</div>
</div>
<div className="queue-list" id="queueList">
<QueueList
apiRoot={apiRoot}
queue={queueItemsView}
order={queueOrderView}
playingOrigIdx={queuePlayingOrigIdxView}
scrollSignal={queueScrollSignal}
onPlay={(origIdx) => queueActionsRef.current?.playIndex(origIdx)}
onRemove={(origIdx) =>
queueActionsRef.current?.removeFromQueue(origIdx)
}
onMove={(fromPos, toPos) =>
queueActionsRef.current?.moveQueueItem(fromPos, toPos)
}
/>
</div>
</section>
</div>
<div className="player-bar">
<NowPlaying apiRoot={apiRoot} track={nowPlayingTrack} />
<div className="controls">
<div className="ctrl-btns">
<button className="ctrl-btn" id="btnPrev">
&#9198;
</button>
<button className="ctrl-btn ctrl-btn-main" id="btnPlayPause">
&#9654;
</button>
<button className="ctrl-btn" id="btnNext">
&#9197;
</button>
</div>
<div className="progress-row">
<span className="time" id="timeElapsed">
0:00
</span>
<div className="progress-bar" id="progressBar">
<div className="progress-fill" id="progressFill" style={{ width: '0%' }} />
</div>
<span className="time" id="timeDuration">
0:00
</span>
</div>
</div>
<div className="volume-row">
<span className="vol-icon" id="volIcon">
&#128266;
</span>
<input
type="range"
className="volume-slider"
id="volSlider"
min={0}
max={100}
defaultValue={80}
/>
</div>
</div>
<div className="toast" id="toast" />
<audio id="audioEl" />
</div>
)
}

Binary file not shown.

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="35.93" height="32" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 228"><path fill="#00D8FF" d="M210.483 73.824a171.49 171.49 0 0 0-8.24-2.597c.465-1.9.893-3.777 1.273-5.621c6.238-30.281 2.16-54.676-11.769-62.708c-13.355-7.7-35.196.329-57.254 19.526a171.23 171.23 0 0 0-6.375 5.848a155.866 155.866 0 0 0-4.241-3.917C100.759 3.829 77.587-4.822 63.673 3.233C50.33 10.957 46.379 33.89 51.995 62.588a170.974 170.974 0 0 0 1.892 8.48c-3.28.932-6.445 1.924-9.474 2.98C17.309 83.498 0 98.307 0 113.668c0 15.865 18.582 31.778 46.812 41.427a145.52 145.52 0 0 0 6.921 2.165a167.467 167.467 0 0 0-2.01 9.138c-5.354 28.2-1.173 50.591 12.134 58.266c13.744 7.926 36.812-.22 59.273-19.855a145.567 145.567 0 0 0 5.342-4.923a168.064 168.064 0 0 0 6.92 6.314c21.758 18.722 43.246 26.282 56.54 18.586c13.731-7.949 18.194-32.003 12.4-61.268a145.016 145.016 0 0 0-1.535-6.842c1.62-.48 3.21-.974 4.76-1.488c29.348-9.723 48.443-25.443 48.443-41.52c0-15.417-17.868-30.326-45.517-39.844Zm-6.365 70.984c-1.4.463-2.836.91-4.3 1.345c-3.24-10.257-7.612-21.163-12.963-32.432c5.106-11 9.31-21.767 12.459-31.957c2.619.758 5.16 1.557 7.61 2.4c23.69 8.156 38.14 20.213 38.14 29.504c0 9.896-15.606 22.743-40.946 31.14Zm-10.514 20.834c2.562 12.94 2.927 24.64 1.23 33.787c-1.524 8.219-4.59 13.698-8.382 15.893c-8.067 4.67-25.32-1.4-43.927-17.412a156.726 156.726 0 0 1-6.437-5.87c7.214-7.889 14.423-17.06 21.459-27.246c12.376-1.098 24.068-2.894 34.671-5.345a134.17 134.17 0 0 1 1.386 6.193ZM87.276 214.515c-7.882 2.783-14.16 2.863-17.955.675c-8.075-4.657-11.432-22.636-6.853-46.752a156.923 156.923 0 0 1 1.869-8.499c10.486 2.32 22.093 3.988 34.498 4.994c7.084 9.967 14.501 19.128 21.976 27.15a134.668 134.668 0 0 1-4.877 4.492c-9.933 8.682-19.886 14.842-28.658 17.94ZM50.35 144.747c-12.483-4.267-22.792-9.812-29.858-15.863c-6.35-5.437-9.555-10.836-9.555-15.216c0-9.322 
13.897-21.212 37.076-29.293c2.813-.98 5.757-1.905 8.812-2.773c3.204 10.42 7.406 21.315 12.477 32.332c-5.137 11.18-9.399 22.249-12.634 32.792a134.718 134.718 0 0 1-6.318-1.979Zm12.378-84.26c-4.811-24.587-1.616-43.134 6.425-47.789c8.564-4.958 27.502 2.111 47.463 19.835a144.318 144.318 0 0 1 3.841 3.545c-7.438 7.987-14.787 17.08-21.808 26.988c-12.04 1.116-23.565 2.908-34.161 5.309a160.342 160.342 0 0 1-1.76-7.887Zm110.427 27.268a347.8 347.8 0 0 0-7.785-12.803c8.168 1.033 15.994 2.404 23.343 4.08c-2.206 7.072-4.956 14.465-8.193 22.045a381.151 381.151 0 0 0-7.365-13.322Zm-45.032-43.861c5.044 5.465 10.096 11.566 15.065 18.186a322.04 322.04 0 0 0-30.257-.006c4.974-6.559 10.069-12.652 15.192-18.18ZM82.802 87.83a323.167 323.167 0 0 0-7.227 13.238c-3.184-7.553-5.909-14.98-8.134-22.152c7.304-1.634 15.093-2.97 23.209-3.984a321.524 321.524 0 0 0-7.848 12.897Zm8.081 65.352c-8.385-.936-16.291-2.203-23.593-3.793c2.26-7.3 5.045-14.885 8.298-22.6a321.187 321.187 0 0 0 7.257 13.246c2.594 4.48 5.28 8.868 8.038 13.147Zm37.542 31.03c-5.184-5.592-10.354-11.779-15.403-18.433c4.902.192 9.899.29 14.978.29c5.218 0 10.376-.117 15.453-.343c-4.985 6.774-10.018 12.97-15.028 18.486Zm52.198-57.817c3.422 7.8 6.306 15.345 8.596 22.52c-7.422 1.694-15.436 3.058-23.88 4.071a382.417 382.417 0 0 0 7.859-13.026a347.403 347.403 0 0 0 7.425-13.565Zm-16.898 8.101a358.557 358.557 0 0 1-12.281 19.815a329.4 329.4 0 0 1-23.444.823c-7.967 0-15.716-.248-23.178-.732a310.202 310.202 0 0 1-12.513-19.846h.001a307.41 307.41 0 0 1-10.923-20.627a310.278 310.278 0 0 1 10.89-20.637l-.001.001a307.318 307.318 0 0 1 12.413-19.761c7.613-.576 15.42-.876 23.31-.876H128c7.926 0 15.743.303 23.354.883a329.357 329.357 0 0 1 12.335 19.695a358.489 358.489 0 0 1 11.036 20.54a329.472 329.472 0 0 1-11 20.722Zm22.56-122.124c8.572 4.944 11.906 24.881 6.52 51.026c-.344 1.668-.73 3.367-1.15 5.09c-10.622-2.452-22.155-4.275-34.23-5.408c-7.034-10.017-14.323-19.124-21.64-27.008a160.789 160.789 0 0 1 5.888-5.4c18.9-16.447 36.564-22.941 
44.612-18.3ZM128 90.808c12.625 0 22.86 10.235 22.86 22.86s-10.235 22.86-22.86 22.86s-22.86-10.235-22.86-22.86s10.235-22.86 22.86-22.86Z"></path></svg>

File diff suppressed because one or more lines are too long

@@ -0,0 +1,30 @@
type Crumb = {
label: string
action?: () => void
}
type BreadcrumbsProps = {
items: Crumb[]
}
export function Breadcrumbs({ items }: BreadcrumbsProps) {
if (!items.length) return null
return (
<div className="breadcrumb">
{items.map((item, index) => {
const isLast = index === items.length - 1
return (
<span key={`${item.label}-${index}`}>
{!isLast && item.action ? (
<span onClick={item.action}>{item.label}</span>
) : (
<span>{item.label}</span>
)}
{!isLast ? ' / ' : ''}
</span>
)
})}
</div>
)
}

@@ -0,0 +1,54 @@
import type { MouseEvent } from 'react'
type LibraryListButton = {
title: string
onClick: (ev: MouseEvent<HTMLButtonElement>) => void
}
type LibraryListItem = {
key: string
className: string
icon: string
name: string
detail?: string
nameClassName?: string
onClick: () => void
button?: LibraryListButton
}
type LibraryListProps = {
loading: boolean
error: string | null
items: LibraryListItem[]
}
export function LibraryList({ loading, error, items }: LibraryListProps) {
if (loading) {
return (
<div style={{ padding: '2rem', textAlign: 'center' }}>
<div className="spinner" />
</div>
)
}
if (error) {
return <div style={{ padding: '1rem', color: 'var(--danger)' }}>{error}</div>
}
return (
<>
{items.map((item) => (
<div key={item.key} className={item.className} onClick={item.onClick}>
<span className="icon">{item.icon}</span>
<span className={item.nameClassName ?? 'name'}>{item.name}</span>
{item.detail ? <span className="detail">{item.detail}</span> : null}
{item.button ? (
<button className="add-btn" title={item.button.title} onClick={item.button.onClick}>
&#10133;
</button>
) : null}
</div>
))}
</>
)
}

@@ -0,0 +1,52 @@
import { useEffect, useState } from 'react'
import type { QueueItem } from './QueueList'
function Cover({ src }: { src: string }) {
const [errored, setErrored] = useState(false)
useEffect(() => {
setErrored(false)
}, [src])
if (errored) return <>&#127925;</>
return <img src={src} alt="" onError={() => setErrored(true)} />
}
export function NowPlaying({ apiRoot, track }: { apiRoot: string; track: QueueItem | null }) {
if (!track) {
return (
<div className="np-info">
<div className="np-cover" id="npCover">
&#127925;
</div>
<div className="np-text">
<div className="np-title" id="npTitle">
Nothing playing
</div>
<div className="np-artist" id="npArtist">
&mdash;
</div>
</div>
</div>
)
}
const coverUrl = `${apiRoot}/tracks/${track.slug}/cover`
return (
<div className="np-info">
<div className="np-cover" id="npCover">
<Cover src={coverUrl} />
</div>
<div className="np-text">
<div className="np-title" id="npTitle">
{track.title}
</div>
<div className="np-artist" id="npArtist">
{track.artist || '—'}
</div>
</div>
</div>
)
}

@@ -0,0 +1,144 @@
import { useEffect, useRef, useState } from 'react'
export type QueueItem = {
slug: string
title: string
artist: string
album_slug: string | null
duration: number | null
}
type QueueListProps = {
apiRoot: string
queue: QueueItem[]
order: number[]
playingOrigIdx: number
scrollSignal: number
onPlay: (origIdx: number) => void
onRemove: (origIdx: number) => void
onMove: (fromPos: number, toPos: number) => void
}
function pad(n: number) {
return String(n).padStart(2, '0')
}
function fmt(secs: number) {
if (!secs || Number.isNaN(secs)) return '0:00'
const s = Math.floor(secs)
const m = Math.floor(s / 60)
const h = Math.floor(m / 60)
if (h > 0) return `${h}:${pad(m % 60)}:${pad(s % 60)}`
return `${m}:${pad(s % 60)}`
}
function Cover({ src }: { src: string }) {
const [errored, setErrored] = useState(false)
useEffect(() => {
setErrored(false)
}, [src])
if (errored) return <>&#127925;</>
return <img src={src} alt="" onError={() => setErrored(true)} />
}
export function QueueList({
apiRoot,
queue,
order,
playingOrigIdx,
scrollSignal,
onPlay,
onRemove,
onMove,
}: QueueListProps) {
const playingRef = useRef<HTMLDivElement | null>(null)
const [draggingPos, setDraggingPos] = useState<number | null>(null)
const [dragOverPos, setDragOverPos] = useState<number | null>(null)
useEffect(() => {
if (playingRef.current) {
playingRef.current.scrollIntoView({ behavior: 'smooth', block: 'nearest' })
}
}, [playingOrigIdx, scrollSignal])
if (!queue.length) {
return (
<div className="queue-empty">
<div className="empty-icon">&#127925;</div>
<div>Select an album to start</div>
</div>
)
}
return (
<>
{order.map((origIdx, pos) => {
const t = queue[origIdx]
if (!t) return null
const isPlaying = origIdx === playingOrigIdx
const coverSrc = t.album_slug ? `${apiRoot}/tracks/${t.slug}/cover` : ''
const dur = t.duration ? fmt(t.duration) : ''
const isDragging = draggingPos === pos
const isDragOver = dragOverPos === pos
return (
<div
key={`${t.slug}:${pos}`}
ref={isPlaying ? playingRef : null}
className={`queue-item${isPlaying ? ' playing' : ''}${isDragging ? ' dragging' : ''}${
isDragOver ? ' drag-over' : ''
}`}
draggable
onClick={() => onPlay(origIdx)}
onDragStart={(e) => {
setDraggingPos(pos)
e.dataTransfer?.setData('text/plain', String(pos))
}}
onDragEnd={() => {
setDraggingPos(null)
setDragOverPos(null)
}}
onDragOver={(e) => {
e.preventDefault()
}}
onDragEnter={() => {
setDragOverPos(pos)
}}
onDragLeave={() => {
setDragOverPos((cur) => (cur === pos ? null : cur))
}}
onDrop={(e) => {
e.preventDefault()
setDragOverPos(null)
const from = parseInt(e.dataTransfer?.getData('text/plain') ?? '', 10)
if (!Number.isNaN(from)) onMove(from, pos)
setDraggingPos(null)
}}
>
<span className="qi-index">{isPlaying ? '' : pos + 1}</span>
<div className="qi-cover">
{coverSrc ? <Cover src={coverSrc} /> : <>&#127925;</>}
</div>
<div className="qi-info">
<div className="qi-title">{t.title}</div>
<div className="qi-artist">{t.artist || ''}</div>
</div>
<span className="qi-dur">{dur}</span>
<button
className="qi-remove"
onClick={(e) => {
e.stopPropagation()
onRemove(origIdx)
}}
>
&#10005;
</button>
</div>
)
})}
</>
)
}

@@ -0,0 +1,30 @@
type SearchResultItem = {
result_type: string
slug: string
name: string
detail?: string
}
type SearchDropdownProps = {
isOpen: boolean
results: SearchResultItem[]
onSelect: (type: string, slug: string) => void
}
export function SearchDropdown({ isOpen, results, onSelect }: SearchDropdownProps) {
return (
<div className={`search-dropdown${isOpen ? ' open' : ''}`}>
{results.map((r) => (
<div
key={`${r.result_type}:${r.slug}`}
className="search-result"
onClick={() => onSelect(r.result_type, r.slug)}
>
<span className="sr-type">{r.result_type}</span>
{r.name}
{r.detail ? <span className="sr-detail">{r.detail}</span> : null}
</div>
))}
</div>
)
}

@@ -0,0 +1,754 @@
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&display=swap');
.furumi-root,
.furumi-root * {
box-sizing: border-box;
margin: 0;
padding: 0;
}
.furumi-root {
height: 100%;
display: flex;
flex-direction: column;
font-family: 'Inter', system-ui, sans-serif;
}
:root {
--bg-base: #0a0c12;
--bg-panel: #111520;
--bg-card: #161d2e;
--bg-hover: #1e2740;
--bg-active: #252f4a;
--border: #1f2c45;
--accent: #7c6af7;
--accent-dim: #5a4fcf;
--accent-glow: rgba(124, 106, 247, 0.3);
--text: #e2e8f0;
--text-muted: #64748b;
--text-dim: #94a3b8;
--success: #34d399;
--danger: #f87171;
}
.header {
display: flex;
align-items: center;
justify-content: space-between;
padding: 0.75rem 1.5rem;
background: var(--bg-panel);
border-bottom: 1px solid var(--border);
flex-shrink: 0;
z-index: 10;
}
.header-logo {
display: flex;
align-items: center;
gap: 0.75rem;
font-weight: 700;
font-size: 1.1rem;
}
.header-logo svg {
width: 22px;
height: 22px;
}
.header-version {
font-size: 0.7rem;
color: var(--text-muted);
background: rgba(255, 255, 255, 0.05);
padding: 0.1rem 0.4rem;
border-radius: 4px;
margin-left: 0.25rem;
font-weight: 500;
text-decoration: none;
}
.btn-menu {
display: none;
background: none;
border: none;
color: var(--text);
font-size: 1.2rem;
cursor: pointer;
padding: 0.1rem 0.5rem;
margin-right: 0.2rem;
border-radius: 4px;
}
.search-wrap {
position: relative;
}
.search-wrap input {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: 6px;
padding: 6px 12px 6px 30px;
color: var(--text);
font-size: 13px;
width: 220px;
font-family: inherit;
}
.search-wrap::before {
content: '🔍';
position: absolute;
left: 8px;
top: 50%;
transform: translateY(-50%);
font-size: 12px;
}
.search-dropdown {
position: absolute;
top: 100%;
left: 0;
right: 0;
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: 0 0 6px 6px;
max-height: 300px;
overflow-y: auto;
z-index: 50;
display: none;
}
.search-dropdown.open {
display: block;
}
.search-result {
padding: 8px 12px;
cursor: pointer;
font-size: 13px;
border-bottom: 1px solid var(--border);
}
.search-result:hover {
background: var(--bg-hover);
}
.search-result .sr-type {
font-size: 10px;
color: var(--text-muted);
text-transform: uppercase;
margin-right: 6px;
}
.search-result .sr-detail {
font-size: 11px;
color: var(--text-muted);
margin-left: 4px;
}
.main {
display: flex;
flex: 1;
overflow: hidden;
position: relative;
background: var(--bg-base);
color: var(--text);
}
.sidebar-overlay {
display: none;
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: rgba(0, 0, 0, 0.6);
z-index: 20;
}
.sidebar-overlay.show {
display: block;
}
.sidebar {
width: 280px;
min-width: 200px;
max-width: 400px;
flex-shrink: 0;
display: flex;
flex-direction: column;
background: var(--bg-panel);
border-right: 1px solid var(--border);
overflow: hidden;
resize: horizontal;
}
.sidebar-header {
padding: 0.85rem 1rem 0.6rem;
font-size: 0.7rem;
font-weight: 600;
letter-spacing: 0.08em;
text-transform: uppercase;
color: var(--text-muted);
border-bottom: 1px solid var(--border);
flex-shrink: 0;
display: flex;
align-items: center;
gap: 0.5rem;
}
.breadcrumb {
padding: 0.5rem 1rem;
font-size: 0.78rem;
color: var(--text-muted);
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
border-bottom: 1px solid var(--border);
flex-shrink: 0;
}
.breadcrumb span {
color: var(--accent);
cursor: pointer;
}
.breadcrumb span:hover {
text-decoration: underline;
}
.file-list {
flex: 1;
overflow-y: auto;
padding: 0.3rem 0;
}
.file-list::-webkit-scrollbar {
width: 4px;
}
.file-list::-webkit-scrollbar-thumb {
background: var(--border);
border-radius: 4px;
}
.file-item {
display: flex;
align-items: center;
gap: 0.6rem;
padding: 0.45rem 1rem;
cursor: pointer;
font-size: 0.875rem;
color: var(--text-dim);
user-select: none;
transition: background 0.12s;
}
.file-item:hover {
background: var(--bg-hover);
color: var(--text);
}
.file-item.dir {
color: var(--accent);
}
.file-item .icon {
font-size: 0.95rem;
flex-shrink: 0;
opacity: 0.8;
}
.file-item .name {
flex: 1;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.file-item .detail {
font-size: 0.7rem;
color: var(--text-muted);
flex-shrink: 0;
}
.file-item .add-btn {
opacity: 0;
font-size: 0.75rem;
background: var(--bg-hover);
color: var(--text);
border: 1px solid var(--border);
border-radius: 4px;
padding: 0.2rem 0.4rem;
cursor: pointer;
flex-shrink: 0;
}
.file-item:hover .add-btn {
opacity: 1;
}
.file-item .add-btn:hover {
background: var(--accent);
color: #fff;
border-color: var(--accent);
}
.queue-panel {
flex: 1;
display: flex;
flex-direction: column;
overflow: hidden;
background: var(--bg-base);
}
.queue-header {
padding: 0.85rem 1.25rem 0.6rem;
font-size: 0.7rem;
font-weight: 600;
letter-spacing: 0.08em;
text-transform: uppercase;
color: var(--text-muted);
border-bottom: 1px solid var(--border);
flex-shrink: 0;
display: flex;
align-items: center;
justify-content: space-between;
}
.queue-actions {
display: flex;
gap: 0.5rem;
}
.queue-btn {
font-size: 0.7rem;
padding: 0.2rem 0.55rem;
background: none;
border: 1px solid var(--border);
border-radius: 5px;
color: var(--text-muted);
cursor: pointer;
}
.queue-btn:hover {
border-color: var(--accent);
color: var(--accent);
}
.queue-btn.active {
background: var(--accent);
border-color: var(--accent);
color: #fff;
}
.queue-list {
flex: 1;
overflow-y: auto;
padding: 0.3rem 0;
}
.queue-list::-webkit-scrollbar {
width: 4px;
}
.queue-list::-webkit-scrollbar-thumb {
background: var(--border);
border-radius: 4px;
}
.queue-item {
display: flex;
align-items: center;
gap: 0.75rem;
padding: 0.55rem 1.25rem;
cursor: pointer;
border-left: 2px solid transparent;
transition: background 0.12s;
}
.queue-item:hover {
background: var(--bg-hover);
}
.queue-item.playing {
background: var(--bg-active);
border-left-color: var(--accent);
}
.queue-item.playing .qi-title {
color: var(--accent);
}
.queue-item .qi-index {
font-size: 0.75rem;
color: var(--text-muted);
width: 1.5rem;
text-align: right;
flex-shrink: 0;
}
.queue-item.playing .qi-index::before {
content: '▶';
font-size: 0.6rem;
color: var(--accent);
}
.queue-item .qi-cover {
width: 36px;
height: 36px;
border-radius: 5px;
background: var(--bg-card);
flex-shrink: 0;
overflow: hidden;
display: flex;
align-items: center;
justify-content: center;
font-size: 1.1rem;
}
.queue-item .qi-cover img {
width: 100%;
height: 100%;
object-fit: cover;
}
.queue-item .qi-info {
flex: 1;
overflow: hidden;
}
.queue-item .qi-title {
font-size: 0.875rem;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.queue-item .qi-artist {
font-size: 0.75rem;
color: var(--text-muted);
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.queue-item .qi-dur {
font-size: 0.75rem;
color: var(--text-muted);
margin-left: auto;
margin-right: 0.5rem;
}
.qi-remove {
background: none;
border: none;
font-size: 0.9rem;
color: var(--text-muted);
cursor: pointer;
padding: 0.3rem;
border-radius: 4px;
opacity: 0;
}
.queue-item:hover .qi-remove {
opacity: 1;
}
.qi-remove:hover {
background: rgba(248, 113, 113, 0.15);
color: var(--danger);
}
.queue-item.dragging {
opacity: 0.5;
}
.queue-item.drag-over {
border-top: 2px solid var(--accent);
margin-top: -2px;
}
.queue-empty {
flex: 1;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
color: var(--text-muted);
font-size: 0.875rem;
gap: 0.5rem;
padding: 2rem;
}
.queue-empty .empty-icon {
font-size: 2.5rem;
opacity: 0.3;
}
.player-bar {
background: var(--bg-panel);
border-top: 1px solid var(--border);
padding: 0.9rem 1.5rem;
flex-shrink: 0;
display: grid;
grid-template-columns: 1fr 2fr 1fr;
align-items: center;
gap: 1rem;
}
.np-info {
display: flex;
align-items: center;
gap: 0.75rem;
min-width: 0;
}
.np-cover {
width: 44px;
height: 44px;
border-radius: 6px;
background: var(--bg-card);
flex-shrink: 0;
overflow: hidden;
display: flex;
align-items: center;
justify-content: center;
font-size: 1.3rem;
}
.np-cover img {
width: 100%;
height: 100%;
object-fit: cover;
}
.np-text {
min-width: 0;
}
.np-title {
font-size: 0.875rem;
font-weight: 500;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.np-artist {
font-size: 0.75rem;
color: var(--text-muted);
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.controls {
display: flex;
flex-direction: column;
align-items: center;
gap: 0.5rem;
}
.ctrl-btns {
display: flex;
align-items: center;
gap: 0.5rem;
}
.ctrl-btn {
background: none;
border: none;
color: var(--text-dim);
cursor: pointer;
padding: 0.35rem;
border-radius: 50%;
display: flex;
align-items: center;
justify-content: center;
font-size: 1rem;
}
.ctrl-btn:hover {
color: var(--text);
background: var(--bg-hover);
}
.ctrl-btn.active {
color: var(--accent);
}
.ctrl-btn-main {
width: 38px;
height: 38px;
background: var(--accent);
color: #fff !important;
font-size: 1.1rem;
box-shadow: 0 0 14px var(--accent-glow);
}
.ctrl-btn-main:hover {
background: var(--accent-dim) !important;
}
.progress-row {
display: flex;
align-items: center;
gap: 0.6rem;
width: 100%;
}
.time {
font-size: 0.7rem;
color: var(--text-muted);
flex-shrink: 0;
font-variant-numeric: tabular-nums;
min-width: 2.5rem;
text-align: center;
}
.progress-bar {
flex: 1;
height: 4px;
background: var(--bg-hover);
border-radius: 2px;
cursor: pointer;
position: relative;
}
.progress-fill {
position: relative; /* anchor the ::after thumb to the fill's edge; without this it resolves against .progress-bar and sits at the bar's end regardless of progress */
height: 100%;
background: var(--accent);
border-radius: 2px;
pointer-events: none;
}
.progress-fill::after {
content: '';
position: absolute;
right: -5px;
top: 50%;
transform: translateY(-50%);
width: 10px;
height: 10px;
border-radius: 50%;
background: var(--accent);
box-shadow: 0 0 6px var(--accent-glow);
opacity: 0;
transition: opacity 0.15s;
}
.progress-bar:hover .progress-fill::after {
opacity: 1;
}
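The .progress-bar rules above style a click-to-seek strip, with the hover thumb on .progress-fill::after marking the playhead. The usual hit-test math behind such a bar can be sketched as follows (the helper name seekFraction is hypothetical; the player's actual click handler lives in its JS, not in this stylesheet):

```typescript
// Map a click's clientX to a playback fraction along the bar.
// barLeft/barWidth would come from getBoundingClientRect() in a real handler.
function seekFraction(clientX: number, barLeft: number, barWidth: number): number {
  const x = Math.min(Math.max(clientX - barLeft, 0), barWidth) // clamp to the bar
  return barWidth > 0 ? x / barWidth : 0
}

seekFraction(150, 100, 200) // → 0.25
seekFraction(50, 100, 200)  // → 0 (clamped left)
seekFraction(400, 100, 200) // → 1 (clamped right)
```

A handler would typically multiply the returned fraction by audio.duration and assign it to audio.currentTime.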
.volume-row {
display: flex;
align-items: center;
gap: 0.5rem;
justify-content: flex-end;
}
.vol-icon {
font-size: 0.9rem;
color: var(--text-muted);
cursor: pointer;
}
.volume-slider {
-webkit-appearance: none;
appearance: none;
width: 80px;
height: 4px;
border-radius: 2px;
background: var(--bg-hover);
cursor: pointer;
outline: none;
}
.volume-slider::-webkit-slider-thumb {
-webkit-appearance: none;
width: 12px;
height: 12px;
border-radius: 50%;
background: var(--accent);
cursor: pointer;
}
.spinner {
display: inline-block;
width: 14px;
height: 14px;
border: 2px solid var(--border);
border-top-color: var(--accent);
border-radius: 50%;
animation: spin 0.7s linear infinite;
}
.toast {
position: fixed;
bottom: 90px;
right: 1.5rem;
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: 8px;
padding: 0.6rem 1rem;
font-size: 0.8rem;
color: var(--text-dim);
box-shadow: 0 8px 24px rgba(0, 0, 0, 0.4);
opacity: 0;
transform: translateY(8px);
transition: all 0.25s;
pointer-events: none;
z-index: 100;
}
.toast.show {
opacity: 1;
transform: translateY(0);
}
@keyframes spin {
to {
transform: rotate(360deg);
}
}
@media (max-width: 768px) {
.btn-menu {
display: inline-block;
}
.header {
padding: 0.75rem 1rem;
}
.sidebar {
position: absolute;
top: 0;
bottom: 0;
left: -100%;
width: 85%;
max-width: 320px;
z-index: 30;
transition: left 0.3s;
box-shadow: 4px 0 20px rgba(0, 0, 0, 0.6);
}
.sidebar.open {
left: 0;
}
.player-bar {
grid-template-columns: 1fr;
gap: 0.75rem;
padding: 0.75rem 1rem;
}
.volume-row {
display: none;
}
.search-wrap input {
width: 140px;
}
}


@@ -0,0 +1,12 @@
export type FurumiApiClient = (path: string) => Promise<unknown | null>

export function createFurumiApiClient(apiRoot: string): FurumiApiClient {
  const API = apiRoot
  return async function api(path: string) {
    const r = await fetch(API + path)
    if (!r.ok) return null
    return r.json()
  }
}
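A usage sketch for the client above. The definition is repeated so the snippet runs standalone, and fetch is stubbed (using the Node 18+/browser Response class) so the non-OK-to-null behavior is visible without a running backend; the stubbed routes are hypothetical:

```typescript
type FurumiApiClient = (path: string) => Promise<unknown | null>

function createFurumiApiClient(apiRoot: string): FurumiApiClient {
  return async (path) => {
    const r = await fetch(apiRoot + path)
    if (!r.ok) return null // any non-2xx/3xx status collapses to null
    return r.json()
  }
}

// Stub fetch so the example is self-contained (hypothetical routes).
globalThis.fetch = (async (url: string | URL | Request) =>
  String(url).endsWith('/health')
    ? new Response(JSON.stringify({ ok: true }), { status: 200 })
    : new Response('not found', { status: 404 })) as unknown as typeof fetch

const api = createFurumiApiClient('/api')
const health = await api('/health')  // → { ok: true }
const missing = await api('/missing') // → null
```

Callers therefore only need a null check, never a try/catch around HTTP status handling (network failures still reject).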


@@ -0,0 +1,15 @@
body {
  margin: 0;
  font-family: Inter, -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif;
  color: #0f172a;
  background-color: #f3f6fb;
}

* {
  box-sizing: border-box;
}

h1 {
  margin-top: 0;
  margin-bottom: 10px;
}


@@ -0,0 +1,10 @@
import { StrictMode } from 'react'
import { createRoot } from 'react-dom/client'
import './index.css'
import App from './App.tsx'

createRoot(document.getElementById('root')!).render(
  <StrictMode>
    <App />
  </StrictMode>,
)


@@ -0,0 +1,28 @@
{
  "compilerOptions": {
    "tsBuildInfoFile": "./node_modules/.tmp/tsconfig.app.tsbuildinfo",
    "target": "ES2023",
    "useDefineForClassFields": true,
    "lib": ["ES2023", "DOM", "DOM.Iterable"],
    "module": "ESNext",
    "types": ["vite/client"],
    "skipLibCheck": true,

    /* Bundler mode */
    "moduleResolution": "bundler",
    "allowImportingTsExtensions": true,
    "verbatimModuleSyntax": true,
    "moduleDetection": "force",
    "noEmit": true,
    "jsx": "react-jsx",

    /* Linting */
    "strict": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "erasableSyntaxOnly": true,
    "noFallthroughCasesInSwitch": true,
    "noUncheckedSideEffectImports": true
  },
  "include": ["src"]
}


@@ -0,0 +1,7 @@
{
  "files": [],
  "references": [
    { "path": "./tsconfig.app.json" },
    { "path": "./tsconfig.node.json" }
  ]
}


@@ -0,0 +1,26 @@
{
  "compilerOptions": {
    "tsBuildInfoFile": "./node_modules/.tmp/tsconfig.node.tsbuildinfo",
    "target": "ES2023",
    "lib": ["ES2023"],
    "module": "ESNext",
    "types": ["node"],
    "skipLibCheck": true,

    /* Bundler mode */
    "moduleResolution": "bundler",
    "allowImportingTsExtensions": true,
    "verbatimModuleSyntax": true,
    "moduleDetection": "force",
    "noEmit": true,

    /* Linting */
    "strict": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "erasableSyntaxOnly": true,
    "noFallthroughCasesInSwitch": true,
    "noUncheckedSideEffectImports": true
  },
  "include": ["vite.config.ts"]
}


@@ -0,0 +1,15 @@
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

// https://vite.dev/config/
export default defineConfig({
  plugins: [react()],
  server: {
    proxy: {
      '/api': {
        target: 'http://localhost:3001',
        changeOrigin: true,
      },
    },
  },
})


@@ -0,0 +1,12 @@
PORT=3001
BASE_URL=http://localhost:3001
FRONTEND_ORIGIN=http://localhost:5173
SESSION_SECRET=super-long-random-secret
# If set to true/1/on/yes, the server starts without OIDC and does not require authentication.
DISABLE_AUTH=false
OIDC_ISSUER_BASE_URL=https://your-issuer.example.com
OIDC_CLIENT_ID=your-client-id
OIDC_CLIENT_SECRET=your-client-secret
OIDC_SCOPE="openid profile email"

furumi-node-player/server/.gitignore (vendored, new file, 24 lines)

@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

node_modules
dist
dist-ssr
*.local

# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

furumi-node-player/server/package-lock.json (generated, new file, 3355 lines)

File diff suppressed because it is too large.


@@ -0,0 +1,28 @@
{
  "name": "server",
  "version": "1.0.0",
  "type": "module",
  "main": "dist/index.js",
  "scripts": {
    "dev": "tsx watch src/index.ts",
    "build": "tsc -p tsconfig.json",
    "start": "node dist/index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": "",
  "dependencies": {
    "cors": "^2.8.6",
    "dotenv": "^17.3.1",
    "express": "^5.2.1",
    "express-openid-connect": "^2.19.4"
  },
  "devDependencies": {
    "@types/cors": "^2.8.19",
    "@types/express": "^5.0.6",
    "@types/node": "^25.5.0",
    "tsx": "^4.21.0",
    "typescript": "^5.9.3"
  }
}


@@ -0,0 +1,103 @@
import 'dotenv/config';
import cors from 'cors';
import express from 'express';
import { auth } from 'express-openid-connect';

const app = express();
const port = Number(process.env.PORT ?? 3001);
const frontendOrigin = process.env.FRONTEND_ORIGIN ?? 'http://localhost:5173';
const disableAuth = ['1', 'true', 'yes', 'on'].includes(
  String(process.env.DISABLE_AUTH ?? '').trim().toLowerCase(),
);

const oidcConfig = {
  authRequired: false,
  auth0Logout: false,
  secret: process.env.SESSION_SECRET ?? 'change-me-in-env',
  baseURL: process.env.BASE_URL ?? `http://localhost:${port}`,
  clientID: process.env.OIDC_CLIENT_ID ?? '',
  issuerBaseURL: process.env.OIDC_ISSUER_BASE_URL ?? '',
  clientSecret: process.env.OIDC_CLIENT_SECRET ?? '',
  authorizationParams: {
    response_type: 'code',
    scope: process.env.OIDC_SCOPE ?? 'openid profile email',
  },
};

if (!disableAuth && (!oidcConfig.clientID || !oidcConfig.issuerBaseURL || !oidcConfig.clientSecret)) {
  // Keep a clear startup failure if OIDC is not configured.
  throw new Error(
    'OIDC config is missing. Set OIDC_ISSUER_BASE_URL, OIDC_CLIENT_ID, OIDC_CLIENT_SECRET in server/.env (or set DISABLE_AUTH=true)',
  );
}

app.use(
  cors({
    origin: frontendOrigin,
    credentials: true,
  }),
);
app.use(express.json());

if (!disableAuth) {
  app.use(auth(oidcConfig));
}

app.get('/api/health', (_req, res) => {
  res.json({ ok: true });
});

app.get('/api/me', (req, res) => {
  if (disableAuth) {
    res.json({
      authenticated: false,
      bypassAuth: true,
      user: {
        sub: 'noauth',
        name: 'No Auth',
      },
    });
    return;
  }
  if (!req.oidc.isAuthenticated()) {
    res.status(401).json({ authenticated: false });
    return;
  }
  res.json({
    authenticated: true,
    user: req.oidc.user,
  });
});

app.get('/api/login', (_req, res) => {
  if (disableAuth) {
    res.status(204).end();
    return;
  }
  res.oidc.login({
    returnTo: frontendOrigin,
  });
});

app.get('/api/logout', (_req, res) => {
  if (disableAuth) {
    res.status(204).end();
    return;
  }
  res.oidc.logout({
    returnTo: frontendOrigin,
  });
});

app.listen(port, () => {
  console.log(
    `${disableAuth ? 'NO-AUTH' : 'OIDC auth'} server listening on http://localhost:${port}`,
  );
});
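The DISABLE_AUTH check at the top of the server accepts several truthy spellings, case-insensitively and ignoring surrounding whitespace. Extracted as a standalone helper for illustration (the name envFlag is hypothetical; the server inlines this logic):

```typescript
// Same parsing as the server's disableAuth: coerce, trim, lowercase, whitelist.
function envFlag(value: string | undefined): boolean {
  return ['1', 'true', 'yes', 'on'].includes(
    String(value ?? '').trim().toLowerCase(),
  )
}

envFlag('TRUE')    // → true
envFlag(' yes ')   // → true
envFlag('0')       // → false
envFlag(undefined) // → false
```

Anything outside the whitelist, including an unset variable, reads as false, so the secure OIDC path is the default.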


@@ -0,0 +1,14 @@
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "skipLibCheck": true,
    "outDir": "dist",
    "rootDir": "src"
  },
  "include": ["src"]
}


@@ -92,6 +92,7 @@ pub async fn list_artists(pool: &PgPool) -> Result<Vec<ArtistListItem>, sqlx::Er
 FROM artists ar
 LEFT JOIN albums al ON al.artist_id = ar.id
 LEFT JOIN tracks t ON t.artist_id = ar.id
+WHERE NOT ar.hidden
 GROUP BY ar.id, ar.slug, ar.name
 HAVING COUNT(DISTINCT t.id) > 0
 ORDER BY ar.name"#
@@ -116,8 +117,10 @@ pub async fn list_albums_by_artist(pool: &PgPool, artist_slug: &str) -> Result<V
 EXISTS(SELECT 1 FROM album_images ai WHERE ai.album_id = al.id AND ai.image_type = 'cover') AS has_cover
 FROM albums al
 JOIN artists ar ON al.artist_id = ar.id
-LEFT JOIN tracks t ON t.album_id = al.id
+LEFT JOIN tracks t ON t.album_id = al.id AND NOT t.hidden
 WHERE ar.slug = $1
+AND NOT al.hidden
+AND EXISTS (SELECT 1 FROM tracks t2 WHERE t2.album_id = al.id AND NOT t2.hidden)
 GROUP BY al.id, al.slug, al.name, al.year
 ORDER BY al.year NULLS LAST, al.name"#
 )
@@ -135,6 +138,7 @@ pub async fn list_tracks_by_album(pool: &PgPool, album_slug: &str) -> Result<Vec
 JOIN artists ar ON t.artist_id = ar.id
 LEFT JOIN albums al ON t.album_id = al.id
 WHERE al.slug = $1
+AND NOT t.hidden
 ORDER BY t.track_number NULLS LAST, t.title"#
 )
 .bind(album_slug)
@@ -221,6 +225,7 @@ pub async fn list_all_tracks_by_artist(pool: &PgPool, artist_slug: &str) -> Resu
 JOIN artists ar ON t.artist_id = ar.id
 LEFT JOIN albums al ON t.album_id = al.id
 WHERE ar.slug = $1
+AND NOT t.hidden
 ORDER BY al.year NULLS LAST, al.name, t.track_number NULLS LAST, t.title"#
 )
 .bind(artist_slug)
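The hunks above hide flagged rows and suppress albums whose tracks are all hidden. The album rule (NOT al.hidden plus an EXISTS check for a non-hidden track) can be sketched in-memory like this (a TypeScript stand-in with hypothetical types, not the real schema or query):

```typescript
type Track = { albumId: number; hidden: boolean }
type Album = { id: number; hidden: boolean }

// Mirrors: NOT al.hidden AND EXISTS (SELECT 1 FROM tracks t2
//          WHERE t2.album_id = al.id AND NOT t2.hidden)
function visibleAlbums(albums: Album[], tracks: Track[]): Album[] {
  return albums.filter(
    (al) => !al.hidden && tracks.some((t) => t.albumId === al.id && !t.hidden),
  )
}

const albums: Album[] = [
  { id: 1, hidden: false },
  { id: 2, hidden: false }, // all of its tracks hidden → filtered out
  { id: 3, hidden: true },
]
const tracks: Track[] = [
  { albumId: 1, hidden: false },
  { albumId: 2, hidden: true },
  { albumId: 3, hidden: false },
]
visibleAlbums(albums, tracks) // → [{ id: 1, hidden: false }]
```

The same shape applies one level up: list_artists keeps an artist only while some visible track still points at them.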


@@ -250,8 +250,10 @@ audio.addEventListener('pause', () => document.getElementById('btnPlayPause').in
 audio.addEventListener('error', () => { showToast('Playback error'); nextTrack(); });
 
 // --- API helper ---
+const _base = location.pathname.replace(/\/+$/, '');
+const API = _base + '/api';
 async function api(path) {
-  const r = await fetch('/api' + path);
+  const r = await fetch(API + path);
   if (!r.ok) return null;
   return r.json();
 }
@@ -323,7 +325,7 @@ async function showAlbumTracks(albumSlug, albumName, artistSlug, artistName) {
   allBtn.onclick = () => addAlbumToQueue(albumSlug, true);
   el.appendChild(allBtn);
 
-  const coverUrl = '/api/albums/' + albumSlug + '/cover';
+  const coverUrl = API + '/albums/' + albumSlug + '/cover';
 
   tracks.forEach(t => {
     const div = document.createElement('div');
@@ -392,7 +394,7 @@ function playIndex(i) {
   if (i < 0 || i >= queue.length) return;
   queueIndex = i;
   const track = queue[i];
-  audio.src = '/api/stream/' + track.slug;
+  audio.src = API + '/stream/' + track.slug;
   audio.play().catch(() => {});
   updateNowPlaying(track);
   renderQueue();
@@ -407,7 +409,7 @@ function updateNowPlaying(track) {
   document.title = track.title + ' \u2014 Furumi';
 
   const cover = document.getElementById('npCover');
-  const coverUrl = '/api/tracks/' + track.slug + '/cover';
+  const coverUrl = API + '/tracks/' + track.slug + '/cover';
   cover.innerHTML = `<img src="${coverUrl}" alt="" onerror="this.parentElement.innerHTML='&#127925;'">`;
 
   if ('mediaSession' in navigator) {
@@ -434,7 +436,7 @@ function renderQueue() {
     const div = document.createElement('div');
     div.className = 'queue-item' + (isPlaying ? ' playing' : '');
 
-    const coverSrc = t.album_slug ? `/api/tracks/${t.slug}/cover` : '';
+    const coverSrc = t.album_slug ? `${API}/tracks/${t.slug}/cover` : '';
     const coverHtml = coverSrc
       ? `<img src="${coverSrc}" alt="" onerror="this.parentElement.innerHTML='&#127925;'">`
       : '&#127925;';