15 Commits

| Author | SHA1 | Message | Date (UTC) | Checks |
|--------|------|---------|------------|--------|
| | ff3ad15b95 | New player | 2026-03-18 02:44:59 | Publish Server Image: successful (2m7s) |
| | d5068aaa33 | Added AI agent to manage metadata | 2026-03-18 02:21:00 | |
| | 8a49a5013b | Added CHIPTUNE.md plan | 2026-03-18 00:46:16 | Publish Server Image: successful (2m8s) |
| Ultradesu | 85b3cb6852 | Fixed docker CI | 2026-03-17 19:03:48 | Publish Server Image: successful (2m12s) |
| Ultradesu | bfc0675f5a | Improve web UI | 2026-03-17 17:21:15 | |
| Ultradesu | 722183047d | Improve web UI | 2026-03-17 16:16:43 | Deb Package: failing (33s); Server Image: successful (2m11s) |
| Ultradesu | 106ab96c56 | Improve web UI | 2026-03-17 16:05:14 | Deb Package: successful (35s); Server Image: successful (2m9s) |
| Ultradesu | cbc5639f99 | Fixed UI | 2026-03-17 15:17:30 | Deb Package: successful (47s); Server Image: successful (2m19s) |
| Ultradesu | 754097f894 | Fixed OIDC | 2026-03-17 15:04:04 | Deb Package: successful (2m10s); Server Image: successful (4m43s) |
| Ultradesu | b761245fd0 | Fixed OIDC | 2026-03-17 15:03:36 | |
| Ultradesu | 0f49d8d079 | Furumi: Added web ui with OIDC SSO | 2026-03-17 14:53:16 | Deb Package: successful (59s); Server Image: successful (4m28s) |
| Ultradesu | a17ff322ad | Added web ui with OIDC SSO | 2026-03-17 14:25:58 | Deb Package: successful (1m3s); Server Image: successful (11m44s) |
| Ultradesu | 707ef85e5d | Added web ui with OIDC SSO | 2026-03-17 14:25:04 | both checks cancelled |
| Ultradesu | ec4c53497f | Added oauth2 OIDC support | 2026-03-17 14:23:49 | |
| Ultradesu | 46ba3d5490 | Added web player | 2026-03-17 13:49:03 | |
48 changed files with 9921 additions and 820 deletions


```diff
@@ -2,6 +2,8 @@ name: Publish Server Image
 on:
   push:
+    branches:
+      - '**'
     tags:
       - 'v*.*.*'
@@ -29,22 +31,29 @@ jobs:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}
-      - name: Extract metadata (tags, labels) for Docker
-        id: meta
-        uses: docker/metadata-action@v5
-        with:
-          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
-          tags: |
-            type=semver,pattern={{version}}
-            type=semver,pattern={{major}}.{{minor}}
-            type=sha,format=short
+      - name: Determine version and tags
+        id: info
+        run: |
+          IMAGE="${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}"
+          SHORT_SHA="$(echo '${{ github.sha }}' | cut -c1-7)"
+          if [[ "${{ github.ref }}" == refs/tags/v* ]]; then
+            TAG="${{ github.ref_name }}"
+            VERSION="${TAG#v}"
+            echo "tags=${IMAGE}:${VERSION},${IMAGE}:latest" >> "$GITHUB_OUTPUT"
+            echo "version=${VERSION}" >> "$GITHUB_OUTPUT"
+          else
+            echo "tags=${IMAGE}:trunk,${IMAGE}:${SHORT_SHA}" >> "$GITHUB_OUTPUT"
+            echo "version=${SHORT_SHA}" >> "$GITHUB_OUTPUT"
+          fi
       - name: Build and push Docker image
         uses: docker/build-push-action@v5
         with:
          context: .
          push: true
-          tags: ${{ steps.meta.outputs.tags }}
-          labels: ${{ steps.meta.outputs.labels }}
+          tags: ${{ steps.info.outputs.tags }}
+          build-args: |
+            FURUMI_VERSION=${{ steps.info.outputs.version }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```

CHIPTUNE.md (new file, 216 lines)

@@ -0,0 +1,216 @@
# Chiptune Support Implementation Plan
## Overview
Add playback support for tracker/chiptune module formats (MOD, XM, S3M, IT, MPTM) to the
Furumi web player. The implementation consists of two parts:
1. **Server-side** — lightweight metadata parser in pure Rust (zero external dependencies)
2. **Client-side** — playback via libopenmpt WebAssembly using AudioWorklet API
## Supported Formats
| Format | Extension | Origin |
|--------|-----------|--------|
| MOD | `.mod` | Amiga ProTracker |
| XM | `.xm` | FastTracker II |
| S3M | `.s3m` | Scream Tracker 3 |
| IT | `.it` | Impulse Tracker |
| MPTM | `.mptm` | OpenMPT |
## Part 1: Server-Side Metadata Parser
### Rationale
libopenmpt must NOT be a server dependency. All tracker formats store metadata at fixed byte
offsets in their headers, making manual parsing trivial. Reading roughly the first kilobyte of a
file (at most 1084 bytes, to reach the MOD signature) is sufficient to extract all header
metadata; only the optional IT/MPTM song message requires a second read.
### Extracted Fields
- **Title** — song name embedded in the module header
- **Channels** — number of active audio channels
- **Patterns** — number of unique patterns in the module
- **Message** — song message/comment (IT/MPTM only)
Note: none of these formats have a dedicated "artist" field. Author information, when present,
is typically found in the IT/MPTM song message.
### Binary Format Reference
#### MOD
| Offset | Size | Field |
|--------|------|-------|
| 0 | 20 | Song title (space/null padded) |
| 952 | 128 | Pattern order table |
| 1080 | 4 | Signature (determines channel count) |
Channel count is derived from the 4-byte signature at offset 1080:
- `M.K.`, `M!K!`, `FLT4`, `4CHN` → 4 channels
- `6CHN` → 6, `8CHN` / `OCTA` → 8
- `xCHN` → x channels, `xxCH` → xx channels
Pattern count = max value in the order table (128 bytes at offset 952) + 1.
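The signature and order-table rules above can be sketched in a few lines of Rust (function names are illustrative, not taken from the Furumi codebase):

```rust
/// Derive the MOD channel count from the 4-byte signature at offset 1080.
/// Returns None for unrecognized signatures (likely not a MOD file).
fn mod_channels(sig: &[u8; 4]) -> Option<u32> {
    match sig {
        b"M.K." | b"M!K!" | b"FLT4" | b"4CHN" => Some(4),
        b"6CHN" => Some(6),
        b"8CHN" | b"OCTA" => Some(8),
        // "xCHN": single-digit channel count
        [d, b'C', b'H', b'N'] if d.is_ascii_digit() => Some((d - b'0') as u32),
        // "xxCH": two-digit channel count
        [d1, d2, b'C', b'H'] if d1.is_ascii_digit() && d2.is_ascii_digit() => {
            Some(((d1 - b'0') * 10 + (d2 - b'0')) as u32)
        }
        _ => None,
    }
}

/// Pattern count = highest entry in the 128-byte order table at offset 952, plus 1.
fn mod_pattern_count(order_table: &[u8; 128]) -> u32 {
    order_table.iter().copied().max().unwrap_or(0) as u32 + 1
}
```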
#### XM
All multi-byte values are little-endian.
| Offset | Size | Field |
|--------|------|-------|
| 0 | 17 | Magic: `"Extended Module: "` |
| 17 | 20 | Module name |
| 58 | 2 | Version number |
| 68 | 2 | Number of channels |
| 70 | 2 | Number of patterns |
#### S3M
| Offset | Size | Field |
|--------|------|-------|
| 0x00 | 28 | Song title (null-terminated) |
| 0x1C | 1 | Signature byte (`0x1A`) |
| 0x24 | 2 | Pattern count (LE u16) |
| 0x2C | 4 | Magic: `"SCRM"` |
| 0x40 | 32 | Channel settings |
Channel count = number of entries in channel settings (32 bytes) that are not `0xFF`.
#### IT
| Offset | Size | Field |
|--------|------|-------|
| 0x00 | 4 | Magic: `"IMPM"` |
| 0x04 | 26 | Song title (null-terminated) |
| 0x26 | 2 | Pattern count (LE u16) |
| 0x2E | 2 | Special flags (bit 0 = message attached) |
| 0x36 | 2 | Message length |
| 0x38 | 4 | Message file offset |
| 0x40 | 64 | Channel panning table |
Channel count = number of entries in channel panning (64 bytes) with value < 128.
Song message: if `special & 1`, read `message_length` bytes from `message_offset`. Uses `\r`
(0x0D) as line separator.
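The IT channel-count and little-endian field reads described above might look like this (a sketch under the offsets in the table; names are illustrative):

```rust
/// Count active IT channels: entries in the 64-byte panning table (offset 0x40)
/// with a value below 128 are active; higher values mark the channel disabled.
fn it_channel_count(pan_table: &[u8; 64]) -> u32 {
    pan_table.iter().filter(|&&p| p < 128).count() as u32
}

/// Read the little-endian u16 pattern count at offset 0x26 of an IT header.
/// Returns None if the buffer is too short.
fn it_pattern_count(header: &[u8]) -> Option<u16> {
    let b = header.get(0x26..0x28)?;
    Some(u16::from_le_bytes([b[0], b[1]]))
}
```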
#### MPTM
Parsed identically to IT. Detection:
- Legacy: magic `tpm.` instead of `IMPM`
- Modern: magic `IMPM` with tracker version (offset 0x28) in range `0x0889..=0x0FFF`
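The two detection paths can be combined into a single check (a sketch; `header` is assumed to hold at least the first 0x2A bytes of the file):

```rust
/// Detect an MPTM module: either the legacy `tpm.` magic, or the IT magic
/// `IMPM` combined with an OpenMPT tracker version at offset 0x28 (LE u16).
fn is_mptm(header: &[u8]) -> bool {
    if header.len() < 0x2A {
        return false;
    }
    if &header[0..4] == b"tpm." {
        return true; // legacy MPTM magic
    }
    if &header[0..4] != b"IMPM" {
        return false;
    }
    let ver = u16::from_le_bytes([header[0x28], header[0x29]]);
    (0x0889..=0x0FFF).contains(&ver) // OpenMPT tracker-version range
}
```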
### Integration Points
- **`browse.rs`** — add tracker extensions to the audio file whitelist
- **`meta.rs`** — add a chiptune metadata branch that runs before Symphonia (which does not
support tracker formats); return title, channel count, pattern count, and message
- **`stream.rs`** — serve tracker files as-is (no server-side transcoding); these files are
typically under 1 MB
### Implementation Notes
- Zero external crate dependencies — only `std::io::Read` + `std::io::Seek`
- Read at most the first 1084 bytes for header parsing (MOD needs offset 1080 + 4 byte sig)
- For IT/MPTM messages, a second seek to `message_offset` is needed
- All strings should be trimmed of null bytes and trailing whitespace
- Expected code size: ~200-300 lines of Rust
## Part 2: Client-Side Playback via libopenmpt WASM
### Rationale
Browsers cannot decode tracker formats natively. libopenmpt compiled to WebAssembly decodes
modules into PCM samples which are then rendered through the Web Audio API. Client-side
decoding keeps the server dependency-free and enables interactive features (pattern display,
channel visualization) in the future.
### libopenmpt WASM Source
Use the **chiptune3** library (npm: `chiptune3`, by DrSnuggles) which bundles libopenmpt as a
self-contained AudioWorklet-compatible ES6 module.
Package contents:
| File | Size | Purpose |
|------|------|---------|
| `chiptune3.js` | ~4 KB | Main API (load, play, pause, seek) |
| `chiptune3.worklet.js` | ~12 KB | AudioWorklet processor glue |
| `libopenmpt.worklet.js` | ~1.7 MB | libopenmpt WASM + JS (single-file bundle) |
Available via jsDelivr CDN or can be vendored into the project.
If a newer libopenmpt version is needed, the official project provides source tarballs with an
Emscripten build target:
```
make CONFIG=emscripten EMSCRIPTEN_TARGET=audioworkletprocessor
```
This produces a single ES6 module with WASM embedded inline (`SINGLE_FILE=1`), which is
required because AudioWorklet contexts cannot fetch separate `.wasm` files.
### Playback Architecture
```
┌──────────────────────────────────────────────────────────┐
│ player.html │
│ │
│ Format detection (by file extension) │
│ ┌─────────────────────┐ ┌────────────────────────────┐ │
│ │ Standard audio │ │ Tracker module │ │
│ │ (mp3/flac/ogg/...) │ │ (mod/xm/s3m/it/mptm) │ │
│ │ │ │ │ │
│ │ <audio> element │ │ fetch() → ArrayBuffer │ │
│ │ src=/api/stream/path │ │ ↓ │ │
│ │ │ │ libopenmpt WASM decode │ │
│ │ │ │ ↓ │ │
│ │ │ │ AudioWorkletProcessor │ │
│ │ │ │ ↓ │ │
│ │ ↓ │ │ AudioContext.destination │ │
│ └────────┼─────────────┘ └────────────┼───────────────┘ │
│ └──────────┬──────────────────┘ │
│ ↓ │
│ Player controls │
│ (play/pause/seek/volume) │
│ MediaSession API │
└──────────────────────────────────────────────────────────┘
```
### Integration Points
- **`player.html`** — detect tracker format by extension; use chiptune3 API instead of
`<audio>` element for tracker files; unify transport controls (play/pause/seek/volume)
across both playback engines
- **WASM assets** — serve `chiptune3.js`, `chiptune3.worklet.js`, and
`libopenmpt.worklet.js` via a static file endpoint or embed them inline
- **`mod.rs`** (routes) — add endpoint for serving WASM assets if not embedded
### Player Integration Details
The player must abstract over two playback backends behind a common interface:
```
play(path) — start playback (auto-detect engine by extension)
pause() — pause current playback
resume() — resume current playback
seek(seconds) — seek to position
setVolume(v) — set volume (0.0–1.0)
getDuration() — total duration in seconds
getPosition() — current position in seconds
isPlaying() — playback state
onEnded(cb) — callback when track finishes
```
For tracker modules, `getDuration()` and `getPosition()` are provided by libopenmpt's
`get_duration_seconds()` and `get_position_seconds()` APIs.
### Considerations
- Tracker files are small (typically < 1 MB) — fetch the entire file before playback; no
streaming/range-request needed
- AudioWorklet requires a secure context (HTTPS or localhost)
- The WASM bundle is ~1.7 MB — load it lazily on first tracker file playback
- MediaSession API metadata should display module title from `/api/meta` response

Cargo.lock (generated, 2389 lines): diff suppressed because it is too large.


```diff
@@ -4,11 +4,15 @@ members = [
     "furumi-server",
     "furumi-client-core",
     "furumi-mount-linux",
-    "furumi-mount-macos"
+    "furumi-mount-macos",
+    "furumi-agent",
+    "furumi-web-player",
 ]
 default-members = [
     "furumi-common",
     "furumi-server",
     "furumi-client-core",
+    "furumi-agent",
+    "furumi-web-player",
 ]
 resolver = "2"
```


```diff
@@ -14,8 +14,10 @@ WORKDIR /usr/src/app
 # Option: Copy in root workspace files and source crates
 COPY . .
+ARG FURUMI_VERSION=dev
+
 # Build only the server for release
-RUN cargo build --release --bin furumi-server
+RUN FURUMI_VERSION=${FURUMI_VERSION} cargo build --release --bin furumi-server
 # Stage 2: Create the minimal runtime image
 FROM debian:bookworm-slim
```

PLAYER-API.md (new file, 214 lines)

@@ -0,0 +1,214 @@
# Furumi Web Player API
Base URL: `http://<host>:<port>/api`
All endpoints require authentication when `--token` is set (via cookie `furumi_token=<token>` or query param `?token=<token>`).
All entity references use **slugs** — 12-character hex identifiers (not sequential IDs).
## Artists
### `GET /api/artists`
List all artists that have at least one track.
**Response:**
```json
[
{
"slug": "a1b2c3d4e5f6",
"name": "Pink Floyd",
"album_count": 5,
"track_count": 42
}
]
```
Sorted alphabetically by name.
### `GET /api/artists/:slug`
Get artist details.
**Response:**
```json
{
"slug": "a1b2c3d4e5f6",
"name": "Pink Floyd"
}
```
**Errors:** `404` if not found.
### `GET /api/artists/:slug/albums`
List all albums by an artist.
**Response:**
```json
[
{
"slug": "b2c3d4e5f6a7",
"name": "Wish You Were Here",
"year": 1975,
"track_count": 5,
"has_cover": true
}
]
```
Sorted by year (nulls last), then name.
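The "year ascending, nulls last, then name" ordering can be expressed as a sort key; a sketch in Rust (the tuple type is a simplification, not the server's actual album model):

```rust
/// Sort key implementing "year ascending, nulls last, then name".
/// A missing year sorts after every present year because `true > false`.
fn album_sort_key(year: Option<i32>, name: &str) -> (bool, i32, String) {
    (year.is_none(), year.unwrap_or(0), name.to_owned())
}

fn sort_albums(albums: &mut [(Option<i32>, String)]) {
    albums.sort_by_key(|(year, name)| album_sort_key(*year, name));
}
```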
### `GET /api/artists/:slug/tracks`
List all tracks by an artist across all albums.
**Response:** same as album tracks (see below).
Sorted by album year, album name, track number, title.
## Albums
### `GET /api/albums/:slug`
List all tracks in an album.
**Response:**
```json
[
{
"slug": "c3d4e5f6a7b8",
"title": "Have a Cigar",
"track_number": 3,
"duration_secs": 312.5,
"artist_name": "Pink Floyd",
"album_name": "Wish You Were Here",
"album_slug": "b2c3d4e5f6a7",
"genre": "Progressive Rock"
}
]
```
Sorted by track number (nulls last), then title. Fields `album_name`, `album_slug` may be `null` for tracks without an album.
### `GET /api/albums/:slug/cover`
Serve the album cover image from the `album_images` table.
**Response:** Binary image data with appropriate `Content-Type` (`image/jpeg`, `image/png`, etc.) and `Cache-Control: public, max-age=86400`.
**Errors:** `404` if no cover exists.
## Tracks
### `GET /api/tracks/:slug`
Get full track details.
**Response:**
```json
{
"slug": "c3d4e5f6a7b8",
"title": "Have a Cigar",
"track_number": 3,
"duration_secs": 312.5,
"genre": "Progressive Rock",
"storage_path": "/music/storage/Pink Floyd/Wish You Were Here/03 - Have a Cigar.flac",
"artist_name": "Pink Floyd",
"artist_slug": "a1b2c3d4e5f6",
"album_name": "Wish You Were Here",
"album_slug": "b2c3d4e5f6a7",
"album_year": 1975
}
```
**Errors:** `404` if not found.
### `GET /api/tracks/:slug/cover`
Serve cover art for a specific track. Resolution order:
1. Album cover from `album_images` table (if the track belongs to an album with a cover)
2. Embedded cover art extracted from the audio file metadata (ID3/Vorbis/etc. via Symphonia)
3. `404` if no cover art is available
**Response:** Binary image data with `Content-Type` and `Cache-Control: public, max-age=86400`.
**Errors:** `404` if no cover art found.
## Streaming
### `GET /api/stream/:slug`
Stream the audio file for a track.
Supports HTTP **Range requests** for seeking:
- Full response: `200 OK` with `Content-Length` and `Accept-Ranges: bytes`
- Partial response: `206 Partial Content` with `Content-Range`
- Invalid range: `416 Range Not Satisfiable`
`Content-Type` is determined by the file extension (e.g. `audio/flac`, `audio/mpeg`).
**Errors:** `404` if track or file not found.
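The status-code mapping above can be sketched as a pure helper that resolves a single-range `Range` header against the file size (the names and the single-range limitation are this sketch's assumptions, not necessarily how `stream.rs` is implemented):

```rust
/// Outcome of resolving a `Range` header against a file size.
#[derive(Debug, PartialEq)]
enum RangeResult {
    Full,              // no usable range: respond 200 OK
    Partial(u64, u64), // inclusive byte range: respond 206 Partial Content
    Unsatisfiable,     // respond 416 Range Not Satisfiable
}

/// Resolve a single-range `bytes=` header value. Multi-range requests are
/// out of scope for this sketch.
fn resolve_range(header: Option<&str>, file_size: u64) -> RangeResult {
    let Some(spec) = header.and_then(|h| h.strip_prefix("bytes=")) else {
        return RangeResult::Full;
    };
    if file_size == 0 {
        return RangeResult::Unsatisfiable;
    }
    let (start, end) = match spec.split_once('-') {
        // bytes=-N: the final N bytes of the file
        Some(("", suffix)) => {
            let Ok(n) = suffix.parse::<u64>() else {
                return RangeResult::Unsatisfiable;
            };
            if n == 0 {
                return RangeResult::Unsatisfiable;
            }
            (file_size.saturating_sub(n), file_size - 1)
        }
        // bytes=N-: from N to end of file
        Some((s, "")) => {
            let Ok(s) = s.parse::<u64>() else {
                return RangeResult::Unsatisfiable;
            };
            (s, file_size - 1)
        }
        // bytes=N-M: clamp M to the last valid byte
        Some((s, e)) => match (s.parse::<u64>(), e.parse::<u64>()) {
            (Ok(s), Ok(e)) => (s, e.min(file_size - 1)),
            _ => return RangeResult::Unsatisfiable,
        },
        None => return RangeResult::Unsatisfiable,
    };
    if start >= file_size || start > end {
        RangeResult::Unsatisfiable
    } else {
        RangeResult::Partial(start, end)
    }
}
```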
## Search
### `GET /api/search?q=<query>&limit=<n>`
Search across artists, albums, and tracks by name (case-insensitive substring match).
| Parameter | Required | Default | Description |
|-----------|----------|---------|-------------|
| `q` | yes | — | Search query |
| `limit` | no | 20 | Max results |
**Response:**
```json
[
{
"result_type": "artist",
"slug": "a1b2c3d4e5f6",
"name": "Pink Floyd",
"detail": null
},
{
"result_type": "album",
"slug": "b2c3d4e5f6a7",
"name": "Wish You Were Here",
"detail": "Pink Floyd"
},
{
"result_type": "track",
"slug": "c3d4e5f6a7b8",
"name": "Have a Cigar",
"detail": "Pink Floyd"
}
]
```
`detail` contains the artist name for albums and tracks, `null` for artists.
Sorted by result type (artist → album → track), then by name.
## Authentication
When `--token` / `FURUMI_PLAYER_TOKEN` is set:
- **Cookie:** `furumi_token=<token>` — set after login
- **Query parameter:** `?token=<token>` — redirects to player and sets cookie
When token is empty, authentication is disabled and all endpoints are public.
Unauthenticated requests receive `401 Unauthorized` with a login form.
## Error format
All errors return JSON:
```json
{
"error": "description of the error"
}
```
With appropriate HTTP status code (`400`, `404`, `500`, etc.).

furumi-agent/Cargo.toml (new file, 23 lines)

@@ -0,0 +1,23 @@
[package]
name = "furumi-agent"
version = "0.3.4"
edition = "2024"
[dependencies]
anyhow = "1.0"
blake3 = "1"
chrono = { version = "0.4", features = ["serde"] }
clap = { version = "4.5", features = ["derive", "env"] }
encoding_rs = "0.8"
reqwest = { version = "0.12", default-features = false, features = ["rustls-tls", "json"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
sqlx = { version = "0.8", features = ["runtime-tokio-rustls", "postgres", "chrono", "uuid", "migrate"] }
symphonia = { version = "0.5", default-features = false, features = ["mp3", "aac", "flac", "vorbis", "wav", "alac", "adpcm", "pcm", "mpa", "isomp4", "ogg", "aiff", "mkv"] }
thiserror = "2.0"
tokio = { version = "1.50", features = ["full"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
axum = { version = "0.7", features = ["tokio", "macros"] }
tower = { version = "0.4", features = ["util"] }
uuid = { version = "1", features = ["v4", "serde"] }


@@ -0,0 +1,86 @@
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE TABLE artists (
id BIGSERIAL PRIMARY KEY,
name TEXT NOT NULL UNIQUE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE TABLE albums (
id BIGSERIAL PRIMARY KEY,
artist_id BIGINT NOT NULL REFERENCES artists(id) ON DELETE CASCADE,
name TEXT NOT NULL,
year INT,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
UNIQUE (artist_id, name)
);
CREATE TABLE tracks (
id BIGSERIAL PRIMARY KEY,
artist_id BIGINT NOT NULL REFERENCES artists(id) ON DELETE CASCADE,
album_id BIGINT REFERENCES albums(id) ON DELETE SET NULL,
title TEXT NOT NULL,
track_number INT,
genre TEXT,
duration_secs DOUBLE PRECISION,
codec TEXT,
bitrate INT,
sample_rate INT,
file_hash TEXT NOT NULL UNIQUE,
file_size BIGINT NOT NULL,
storage_path TEXT NOT NULL,
manual_override BOOLEAN NOT NULL DEFAULT FALSE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE TABLE pending_tracks (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
status TEXT NOT NULL DEFAULT 'pending',
inbox_path TEXT NOT NULL,
file_hash TEXT NOT NULL,
file_size BIGINT NOT NULL,
raw_title TEXT,
raw_artist TEXT,
raw_album TEXT,
raw_year INT,
raw_track_number INT,
raw_genre TEXT,
duration_secs DOUBLE PRECISION,
path_title TEXT,
path_artist TEXT,
path_album TEXT,
path_year INT,
path_track_number INT,
norm_title TEXT,
norm_artist TEXT,
norm_album TEXT,
norm_year INT,
norm_track_number INT,
norm_genre TEXT,
norm_featured_artists TEXT,
confidence DOUBLE PRECISION,
llm_notes TEXT,
error_message TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE TABLE track_artists (
id BIGSERIAL PRIMARY KEY,
track_id BIGINT NOT NULL REFERENCES tracks(id) ON DELETE CASCADE,
artist_id BIGINT NOT NULL REFERENCES artists(id) ON DELETE CASCADE,
role TEXT NOT NULL DEFAULT 'featured',
UNIQUE (track_id, artist_id, role)
);
-- Indexes
CREATE INDEX idx_artists_name_trgm ON artists USING gin (name gin_trgm_ops);
CREATE INDEX idx_albums_name_trgm ON albums USING gin (name gin_trgm_ops);
CREATE INDEX idx_tracks_file_hash ON tracks (file_hash);
CREATE INDEX idx_pending_status ON pending_tracks (status);
CREATE INDEX idx_pending_file_hash ON pending_tracks (file_hash);
CREATE INDEX idx_track_artists_track ON track_artists (track_id);
CREATE INDEX idx_track_artists_artist ON track_artists (artist_id);


@@ -0,0 +1,37 @@
-- Add slug (public unique ID) to tracks
ALTER TABLE tracks ADD COLUMN slug TEXT;
-- Generate slugs for existing tracks
UPDATE tracks SET slug = encode(gen_random_uuid()::text::bytea, 'hex') WHERE slug IS NULL;
ALTER TABLE tracks ALTER COLUMN slug SET NOT NULL;
CREATE UNIQUE INDEX idx_tracks_slug ON tracks (slug);
-- Add slug to albums
ALTER TABLE albums ADD COLUMN slug TEXT;
UPDATE albums SET slug = encode(gen_random_uuid()::text::bytea, 'hex') WHERE slug IS NULL;
ALTER TABLE albums ALTER COLUMN slug SET NOT NULL;
CREATE UNIQUE INDEX idx_albums_slug ON albums (slug);
-- Add slug to artists
ALTER TABLE artists ADD COLUMN slug TEXT;
UPDATE artists SET slug = encode(gen_random_uuid()::text::bytea, 'hex') WHERE slug IS NULL;
ALTER TABLE artists ALTER COLUMN slug SET NOT NULL;
CREATE UNIQUE INDEX idx_artists_slug ON artists (slug);
-- Album artwork table
CREATE TABLE album_images (
id BIGSERIAL PRIMARY KEY,
album_id BIGINT NOT NULL REFERENCES albums(id) ON DELETE CASCADE,
image_type TEXT NOT NULL DEFAULT 'cover', -- 'cover', 'back', 'booklet', 'other'
file_path TEXT NOT NULL, -- relative path in storage
file_hash TEXT NOT NULL,
mime_type TEXT NOT NULL,
width INT,
height INT,
file_size BIGINT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_album_images_album ON album_images (album_id);
CREATE UNIQUE INDEX idx_album_images_hash ON album_images (file_hash);


@@ -0,0 +1,80 @@
You are a music metadata normalization assistant. Your job is to take raw metadata extracted from audio files and produce clean, accurate, canonical metadata suitable for a music library database.
## Rules
1. **Artist names** must use correct capitalization and canonical spelling. Examples:
- "pink floyd" → "Pink Floyd"
- "AC DC" → "AC/DC"
- "Guns n roses" → "Guns N' Roses"
- "Led zepplin" → "Led Zeppelin" (fix common misspellings)
- "саша скул" → "Саша Скул" (fix capitalization, keep the language as-is)
- If the database already contains a matching artist (same name in any case or transliteration), always use the existing canonical name exactly. For example, if the DB has "Саша Скул" and the file says "саша скул" or "Sasha Skul", use "Саша Скул".
- **Compound artist fields**: When the artist field or path contains multiple artist names joined by "и", "and", "&", "/", ",", "x", or "vs", you MUST split them. The "artist" field must contain ONLY ONE primary artist. All others go into "featured_artists". If one of the names already exists in the database, prefer that one as the primary artist.
- Examples:
- Artist or path: "Саша Скул и Олег Харитонов" with DB containing "Саша Скул" → artist: "Саша Скул", featured_artists: ["Олег Харитонов"]
- Artist: "Metallica & Lou Reed" with DB containing "Metallica" → artist: "Metallica", featured_artists: ["Lou Reed"]
- Artist: "Artist A / Artist B" with neither in DB → artist: "Artist A", featured_artists: ["Artist B"] (first listed = primary)
- **NEVER create a new compound artist** like "X и Y" or "X & Y" as a single artist name. Always split into primary + featured.
2. **Featured artists**: Many tracks include collaborations. Guest artists can be indicated by ANY of the following markers (case-insensitive) in the artist field, track title, filename, or path:
- English: "feat.", "ft.", "featuring", "with"
- Russian: "п.у.", "при участии"
- Parenthetical: "(feat. X)", "(ft. X)", "(п.у. X)", "(при участии X)"
- Any other language-specific equivalent indicating a guest/featured collaboration
You must:
- Extract the **primary artist** (the main performer) into the "artist" field.
- Extract ALL **featured/guest artists** into a separate "featured_artists" array.
- Remove the collaboration marker and featured artist names from the track title, keeping only the song name.
- When multiple featured artists are listed, split them by commas or "&" into separate entries.
- Examples:
- Artist: "НСМВГЛП feat. XACV SQUAD" → artist: "НСМВГЛП", featured_artists: ["XACV SQUAD"]
- Title: "Знаешь ли ты feat. SharOn" → title: "Знаешь ли ты", featured_artists: ["SharOn"]
- Title: "Ваши мамки (п.у. Ваня Айван,Иван Смех, Жильцов)" → title: "Ваши мамки", featured_artists: ["Ваня Айван", "Иван Смех", "Жильцов"]
- Title: "Молоды (п.у. Паша Батруха)" → title: "Молоды", featured_artists: ["Паша Батруха"]
- Title: "Повелитель Мух (п.у. Пикуль)" → title: "Повелитель Мух", featured_artists: ["Пикуль"]
- Artist: "A & B ft. C, D" → artist: "A & B", featured_artists: ["C", "D"]
- **IMPORTANT**: Always check for parenthetical markers like "(п.у. ...)" or "(feat. ...)" at the end of track titles. These are very common and must not be missed.
- Apply the same capitalization and consistency rules to featured artist names.
- If the database already contains a matching featured artist name, use the existing canonical form.
3. **Album names** must use correct capitalization and canonical spelling.
- Use title case for English albums.
- Preserve original language for non-English albums.
- If the database already contains a matching album under the same artist, use the existing name exactly.
- Do not alter the creative content of album names (same principle as track titles).
4. **Track titles** must use correct capitalization, but their content must be preserved exactly.
- Use title case for English titles.
- Preserve original language for non-English titles.
- Remove leading track numbers if present (e.g., "01 - Have a Cigar" → "Have a Cigar").
- **NEVER remove, add, or alter words, numbers, suffixes, punctuation marks, or special characters in titles.** Your job is to fix capitalization and encoding, not to edit the creative content. If a title contains unusual punctuation, numbers, apostrophes, or symbols — they are intentional and must be kept as-is.
- If all tracks in the same album follow a naming pattern (e.g., numbered names like "Part 1", "Part 2"), preserve that pattern consistently. Do not simplify or truncate individual track names.
5. **Year**: If not present in tags, try to infer from the file path. Only set a year if you are confident it is correct.
6. **Track number**: If not present in tags, try to infer from the filename (e.g., "03 - Song.flac" → track 3).
7. **Genre**: Normalize to a common genre name. Avoid overly specific sub-genres unless the existing database already uses them.
8. **Encoding issues**: Raw metadata may contain mojibake (e.g., Cyrillic text misread as Latin-1). If you detect garbled text that looks like encoding errors, attempt to determine the intended text.
9. **Preservation principle**: When in doubt, preserve the original value. Only change metadata when you are confident the change is a correction (e.g., fixing capitalization, fixing encoding, matching to an existing DB entry). Do not "clean up" or "simplify" values that look unusual — artists often use unconventional naming intentionally.
10. **Consistency**: When the database already contains entries for an artist or album, your output MUST match the existing canonical names. Do not introduce new variations.
11. **Confidence**: Rate your confidence from 0.0 to 1.0.
- 1.0: All fields are clear and unambiguous.
- 0.8+: Minor inferences made (e.g., year from path), but high certainty.
- 0.5-0.8: Some guesswork involved, human review recommended.
- Below 0.5: Significant uncertainty, definitely needs review.
## Response format
You MUST respond with a single JSON object, no markdown fences, no extra text:
{"artist": "...", "album": "...", "title": "...", "year": 2000, "track_number": 1, "genre": "...", "featured_artists": ["...", "..."], "confidence": 0.95, "notes": "brief explanation of changes made"}
- Use null for fields you cannot determine.
- Use an empty array [] for "featured_artists" if there are no featured artists.
- The "notes" field should briefly explain what you changed and why.


@@ -0,0 +1,75 @@
use std::path::PathBuf;
use clap::Parser;
/// Default system prompt, compiled into the binary as a fallback.
const DEFAULT_SYSTEM_PROMPT: &str = include_str!("../prompts/normalize.txt");
#[derive(Parser, Debug)]
#[command(version, about = "Furumi Agent: music metadata ingest and normalization")]
pub struct Args {
/// IP address and port for the admin web UI
#[arg(long, env = "FURUMI_AGENT_BIND", default_value = "0.0.0.0:8090")]
pub bind: String,
/// Directory to watch for new music files
#[arg(long, env = "FURUMI_AGENT_INBOX_DIR")]
pub inbox_dir: PathBuf,
/// Directory for permanently stored and organized music files
#[arg(long, env = "FURUMI_AGENT_STORAGE_DIR")]
pub storage_dir: PathBuf,
/// PostgreSQL connection URL
#[arg(long, env = "FURUMI_AGENT_DATABASE_URL")]
pub database_url: String,
/// Ollama API base URL
#[arg(long, env = "FURUMI_AGENT_OLLAMA_URL", default_value = "http://localhost:11434")]
pub ollama_url: String,
/// Ollama model name
#[arg(long, env = "FURUMI_AGENT_OLLAMA_MODEL", default_value = "qwen3:14b")]
pub ollama_model: String,
/// Inbox scan interval in seconds
#[arg(long, env = "FURUMI_AGENT_POLL_INTERVAL_SECS", default_value_t = 30)]
pub poll_interval_secs: u64,
/// Confidence threshold for auto-approval (0.0 - 1.0)
#[arg(long, env = "FURUMI_AGENT_CONFIDENCE_THRESHOLD", default_value_t = 0.85)]
pub confidence_threshold: f64,
/// Path to a custom system prompt file (overrides the built-in default)
#[arg(long, env = "FURUMI_AGENT_SYSTEM_PROMPT_FILE")]
pub system_prompt_file: Option<PathBuf>,
}
impl Args {
pub fn validate(&self) -> Result<(), Box<dyn std::error::Error>> {
if !self.inbox_dir.exists() || !self.inbox_dir.is_dir() {
return Err(format!("Inbox directory {:?} does not exist or is not a directory", self.inbox_dir).into());
}
if !self.storage_dir.exists() || !self.storage_dir.is_dir() {
return Err(format!("Storage directory {:?} does not exist or is not a directory", self.storage_dir).into());
}
if !(0.0..=1.0).contains(&self.confidence_threshold) {
return Err("Confidence threshold must be between 0.0 and 1.0".into());
}
Ok(())
}
/// Load the system prompt from a custom file or use the built-in default.
pub fn load_system_prompt(&self) -> Result<String, Box<dyn std::error::Error>> {
match &self.system_prompt_file {
Some(path) => {
tracing::info!("Loading system prompt from {:?}", path);
Ok(std::fs::read_to_string(path)?)
}
None => {
tracing::info!("Using built-in default system prompt");
Ok(DEFAULT_SYSTEM_PROMPT.to_owned())
}
}
}
}

furumi-agent/src/db.rs (new file, 554 lines)

@@ -0,0 +1,554 @@
use serde::{Deserialize, Serialize};
use sqlx::PgPool;
use sqlx::postgres::PgPoolOptions;
use uuid::Uuid;
/// Generate a short URL-safe slug from a UUID v4.
fn generate_slug() -> String {
Uuid::new_v4().simple().to_string()[..12].to_owned()
}
pub async fn connect(database_url: &str) -> Result<PgPool, sqlx::Error> {
PgPoolOptions::new()
.max_connections(5)
.connect(database_url)
.await
}
pub async fn migrate(pool: &PgPool) -> Result<(), sqlx::migrate::MigrateError> {
sqlx::migrate!("./migrations").run(pool).await
}
// --- Models ---
#[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]
pub struct Artist {
pub id: i64,
pub name: String,
}
#[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]
pub struct Album {
pub id: i64,
pub artist_id: i64,
pub name: String,
pub year: Option<i32>,
}
#[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]
pub struct PendingTrack {
pub id: Uuid,
pub status: String,
pub inbox_path: String,
pub file_hash: String,
pub file_size: i64,
// Raw metadata from file tags
pub raw_title: Option<String>,
pub raw_artist: Option<String>,
pub raw_album: Option<String>,
pub raw_year: Option<i32>,
pub raw_track_number: Option<i32>,
pub raw_genre: Option<String>,
pub duration_secs: Option<f64>,
// Path-derived hints
pub path_artist: Option<String>,
pub path_album: Option<String>,
pub path_year: Option<i32>,
pub path_track_number: Option<i32>,
pub path_title: Option<String>,
// Normalized (LLM output)
pub norm_title: Option<String>,
pub norm_artist: Option<String>,
pub norm_album: Option<String>,
pub norm_year: Option<i32>,
pub norm_track_number: Option<i32>,
pub norm_genre: Option<String>,
pub norm_featured_artists: Option<String>, // JSON array
pub confidence: Option<f64>,
pub llm_notes: Option<String>,
pub error_message: Option<String>,
pub created_at: chrono::DateTime<chrono::Utc>,
pub updated_at: chrono::DateTime<chrono::Utc>,
}
#[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]
pub struct SimilarArtist {
pub id: i64,
pub name: String,
pub similarity: f32,
}
#[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]
pub struct SimilarAlbum {
pub id: i64,
pub artist_id: i64,
pub name: String,
pub year: Option<i32>,
pub similarity: f32,
}
#[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]
pub struct AlbumImage {
pub id: i64,
pub album_id: i64,
pub image_type: String,
pub file_path: String,
pub file_hash: String,
pub mime_type: String,
pub width: Option<i32>,
pub height: Option<i32>,
pub file_size: i64,
}
// --- Queries ---
pub async fn file_hash_exists(pool: &PgPool, hash: &str) -> Result<bool, sqlx::Error> {
let row: (bool,) = sqlx::query_as(
"SELECT EXISTS(SELECT 1 FROM tracks WHERE file_hash = $1) OR EXISTS(SELECT 1 FROM pending_tracks WHERE file_hash = $1 AND status NOT IN ('rejected', 'error'))"
)
.bind(hash)
.fetch_one(pool)
.await?;
Ok(row.0)
}
pub async fn insert_pending(
pool: &PgPool,
inbox_path: &str,
file_hash: &str,
file_size: i64,
raw: &RawFields,
path_hints: &PathHints,
duration_secs: Option<f64>,
) -> Result<Uuid, sqlx::Error> {
let row: (Uuid,) = sqlx::query_as(
r#"INSERT INTO pending_tracks
(inbox_path, file_hash, file_size,
raw_title, raw_artist, raw_album, raw_year, raw_track_number, raw_genre,
path_title, path_artist, path_album, path_year, path_track_number,
duration_secs, status)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, 'pending')
RETURNING id"#,
)
.bind(inbox_path)
.bind(file_hash)
.bind(file_size)
.bind(&raw.title)
.bind(&raw.artist)
.bind(&raw.album)
.bind(raw.year)
.bind(raw.track_number)
.bind(&raw.genre)
.bind(&path_hints.title)
.bind(&path_hints.artist)
.bind(&path_hints.album)
.bind(path_hints.year)
.bind(path_hints.track_number)
.bind(duration_secs)
.fetch_one(pool)
.await?;
Ok(row.0)
}
pub async fn update_pending_normalized(
pool: &PgPool,
id: Uuid,
status: &str,
norm: &NormalizedFields,
error_message: Option<&str>,
) -> Result<(), sqlx::Error> {
let featured_json = if norm.featured_artists.is_empty() {
None
} else {
Some(serde_json::to_string(&norm.featured_artists).unwrap_or_default())
};
sqlx::query(
r#"UPDATE pending_tracks SET
status = $2,
norm_title = $3, norm_artist = $4, norm_album = $5,
norm_year = $6, norm_track_number = $7, norm_genre = $8,
norm_featured_artists = $9,
confidence = $10, llm_notes = $11, error_message = $12,
updated_at = NOW()
WHERE id = $1"#,
)
.bind(id)
.bind(status)
.bind(&norm.title)
.bind(&norm.artist)
.bind(&norm.album)
.bind(norm.year)
.bind(norm.track_number)
.bind(&norm.genre)
.bind(&featured_json)
.bind(norm.confidence)
.bind(&norm.notes)
.bind(error_message)
.execute(pool)
.await?;
Ok(())
}
pub async fn update_pending_status(
pool: &PgPool,
id: Uuid,
status: &str,
error_message: Option<&str>,
) -> Result<(), sqlx::Error> {
sqlx::query("UPDATE pending_tracks SET status = $2, error_message = $3, updated_at = NOW() WHERE id = $1")
.bind(id)
.bind(status)
.bind(error_message)
.execute(pool)
.await?;
Ok(())
}
pub async fn find_similar_artists(pool: &PgPool, name: &str, limit: i32) -> Result<Vec<SimilarArtist>, sqlx::Error> {
// pg_trgm needs at least 3 chars to produce trigrams; for shorter queries use ILIKE prefix
if name.chars().count() < 3 {
sqlx::query_as::<_, SimilarArtist>(
"SELECT id, name, 1.0::real AS similarity FROM artists WHERE name ILIKE $1 || '%' ORDER BY name LIMIT $2"
)
.bind(name)
.bind(limit)
.fetch_all(pool)
.await
} else {
sqlx::query_as::<_, SimilarArtist>(
r#"SELECT id, name, MAX(sim) AS similarity FROM (
SELECT id, name, similarity(name, $1) AS sim FROM artists WHERE name % $1
UNION ALL
SELECT id, name, 0.01::real AS sim FROM artists WHERE name ILIKE '%' || $1 || '%'
) sub GROUP BY id, name ORDER BY similarity DESC LIMIT $2"#
)
.bind(name)
.bind(limit)
.fetch_all(pool)
.await
}
}
pub async fn find_similar_albums(pool: &PgPool, name: &str, limit: i32) -> Result<Vec<SimilarAlbum>, sqlx::Error> {
sqlx::query_as::<_, SimilarAlbum>(
"SELECT id, artist_id, name, year, similarity(name, $1) AS similarity FROM albums WHERE name % $1 ORDER BY similarity DESC LIMIT $2"
)
.bind(name)
.bind(limit)
.fetch_all(pool)
.await
}
pub async fn upsert_artist(pool: &PgPool, name: &str) -> Result<i64, sqlx::Error> {
let slug = generate_slug();
let row: (i64,) = sqlx::query_as(
"INSERT INTO artists (name, slug) VALUES ($1, $2) ON CONFLICT (name) DO UPDATE SET name = EXCLUDED.name RETURNING id"
)
.bind(name)
.bind(&slug)
.fetch_one(pool)
.await?;
Ok(row.0)
}
pub async fn upsert_album(pool: &PgPool, artist_id: i64, name: &str, year: Option<i32>) -> Result<i64, sqlx::Error> {
let slug = generate_slug();
let row: (i64,) = sqlx::query_as(
r#"INSERT INTO albums (artist_id, name, year, slug)
VALUES ($1, $2, $3, $4)
ON CONFLICT (artist_id, name) DO UPDATE SET year = COALESCE(EXCLUDED.year, albums.year)
RETURNING id"#
)
.bind(artist_id)
.bind(name)
.bind(year)
.bind(&slug)
.fetch_one(pool)
.await?;
Ok(row.0)
}
pub async fn insert_track(
pool: &PgPool,
artist_id: i64,
album_id: Option<i64>,
title: &str,
track_number: Option<i32>,
genre: Option<&str>,
duration_secs: Option<f64>,
file_hash: &str,
file_size: i64,
storage_path: &str,
) -> Result<i64, sqlx::Error> {
let slug = generate_slug();
let row: (i64,) = sqlx::query_as(
r#"INSERT INTO tracks
(artist_id, album_id, title, track_number, genre, duration_secs, file_hash, file_size, storage_path, slug)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
RETURNING id"#
)
.bind(artist_id)
.bind(album_id)
.bind(title)
.bind(track_number)
.bind(genre)
.bind(duration_secs)
.bind(file_hash)
.bind(file_size)
.bind(storage_path)
.bind(&slug)
.fetch_one(pool)
.await?;
Ok(row.0)
}
pub async fn link_track_artist(pool: &PgPool, track_id: i64, artist_id: i64, role: &str) -> Result<(), sqlx::Error> {
sqlx::query(
"INSERT INTO track_artists (track_id, artist_id, role) VALUES ($1, $2, $3) ON CONFLICT DO NOTHING"
)
.bind(track_id)
.bind(artist_id)
.bind(role)
.execute(pool)
.await?;
Ok(())
}
pub async fn approve_and_finalize(
pool: &PgPool,
pending_id: Uuid,
storage_path: &str,
) -> Result<i64, sqlx::Error> {
let pt: PendingTrack = sqlx::query_as("SELECT * FROM pending_tracks WHERE id = $1")
.bind(pending_id)
.fetch_one(pool)
.await?;
let artist_name = pt.norm_artist.as_deref().unwrap_or("Unknown Artist");
let artist_id = upsert_artist(pool, artist_name).await?;
let album_id = match pt.norm_album.as_deref() {
Some(album_name) => Some(upsert_album(pool, artist_id, album_name, pt.norm_year).await?),
None => None,
};
let title = pt.norm_title.as_deref().unwrap_or("Unknown Title");
let track_id = insert_track(
pool,
artist_id,
album_id,
title,
pt.norm_track_number,
pt.norm_genre.as_deref(),
pt.duration_secs,
&pt.file_hash,
pt.file_size,
storage_path,
)
.await?;
// Link primary artist
link_track_artist(pool, track_id, artist_id, "primary").await?;
// Link featured artists
if let Some(featured_json) = &pt.norm_featured_artists {
if let Ok(featured) = serde_json::from_str::<Vec<String>>(featured_json) {
for feat_name in &featured {
let feat_id = upsert_artist(pool, feat_name).await?;
link_track_artist(pool, track_id, feat_id, "featured").await?;
}
}
}
update_pending_status(pool, pending_id, "approved", None).await?;
Ok(track_id)
}
// --- Album images ---
pub async fn image_hash_exists(pool: &PgPool, hash: &str) -> Result<bool, sqlx::Error> {
let row: (bool,) = sqlx::query_as("SELECT EXISTS(SELECT 1 FROM album_images WHERE file_hash = $1)")
.bind(hash)
.fetch_one(pool)
.await?;
Ok(row.0)
}
pub async fn insert_album_image(
pool: &PgPool,
album_id: i64,
image_type: &str,
file_path: &str,
file_hash: &str,
mime_type: &str,
file_size: i64,
) -> Result<i64, sqlx::Error> {
// DO UPDATE rather than DO NOTHING: with DO NOTHING a concurrent duplicate
// insert would return no row, and fetch_one would fail with RowNotFound.
let row: (i64,) = sqlx::query_as(
r#"INSERT INTO album_images (album_id, image_type, file_path, file_hash, mime_type, file_size)
VALUES ($1, $2, $3, $4, $5, $6)
ON CONFLICT (file_hash) DO UPDATE SET file_path = EXCLUDED.file_path
RETURNING id"#
)
.bind(album_id)
.bind(image_type)
.bind(file_path)
.bind(file_hash)
.bind(mime_type)
.bind(file_size)
.fetch_one(pool)
.await?;
Ok(row.0)
}
pub async fn get_album_images(pool: &PgPool, album_id: i64) -> Result<Vec<AlbumImage>, sqlx::Error> {
sqlx::query_as::<_, AlbumImage>("SELECT * FROM album_images WHERE album_id = $1 ORDER BY image_type")
.bind(album_id)
.fetch_all(pool)
.await
}
/// Find album_id by artist+album name (used when linking covers to already-finalized albums)
pub async fn find_album_id(pool: &PgPool, artist_name: &str, album_name: &str) -> Result<Option<i64>, sqlx::Error> {
let row: Option<(i64,)> = sqlx::query_as(
r#"SELECT a.id FROM albums a
JOIN artists ar ON a.artist_id = ar.id
WHERE ar.name = $1 AND a.name = $2"#
)
.bind(artist_name)
.bind(album_name)
.fetch_optional(pool)
.await?;
Ok(row.map(|r| r.0))
}
// --- DTOs for insert helpers ---
#[derive(Debug, Default)]
pub struct RawFields {
pub title: Option<String>,
pub artist: Option<String>,
pub album: Option<String>,
pub year: Option<i32>,
pub track_number: Option<i32>,
pub genre: Option<String>,
}
#[derive(Debug, Default)]
pub struct PathHints {
pub title: Option<String>,
pub artist: Option<String>,
pub album: Option<String>,
pub year: Option<i32>,
pub track_number: Option<i32>,
}
#[derive(Debug, Default, Serialize, Deserialize)]
pub struct NormalizedFields {
pub title: Option<String>,
pub artist: Option<String>,
pub album: Option<String>,
pub year: Option<i32>,
pub track_number: Option<i32>,
pub genre: Option<String>,
#[serde(default)]
pub featured_artists: Vec<String>,
pub confidence: Option<f64>,
pub notes: Option<String>,
}
// --- Admin queries ---
pub async fn list_pending(pool: &PgPool, status_filter: Option<&str>, limit: i64, offset: i64) -> Result<Vec<PendingTrack>, sqlx::Error> {
match status_filter {
Some(status) => {
sqlx::query_as::<_, PendingTrack>(
"SELECT * FROM pending_tracks WHERE status = $1 ORDER BY created_at DESC LIMIT $2 OFFSET $3"
)
.bind(status)
.bind(limit)
.bind(offset)
.fetch_all(pool)
.await
}
None => {
sqlx::query_as::<_, PendingTrack>(
"SELECT * FROM pending_tracks ORDER BY created_at DESC LIMIT $1 OFFSET $2"
)
.bind(limit)
.bind(offset)
.fetch_all(pool)
.await
}
}
}
pub async fn get_pending(pool: &PgPool, id: Uuid) -> Result<Option<PendingTrack>, sqlx::Error> {
sqlx::query_as::<_, PendingTrack>("SELECT * FROM pending_tracks WHERE id = $1")
.bind(id)
.fetch_optional(pool)
.await
}
pub async fn delete_pending(pool: &PgPool, id: Uuid) -> Result<bool, sqlx::Error> {
let result = sqlx::query("DELETE FROM pending_tracks WHERE id = $1")
.bind(id)
.execute(pool)
.await?;
Ok(result.rows_affected() > 0)
}
pub async fn list_artists_all(pool: &PgPool) -> Result<Vec<Artist>, sqlx::Error> {
sqlx::query_as::<_, Artist>("SELECT id, name FROM artists ORDER BY name")
.fetch_all(pool)
.await
}
pub async fn list_albums_by_artist(pool: &PgPool, artist_id: i64) -> Result<Vec<Album>, sqlx::Error> {
sqlx::query_as::<_, Album>("SELECT id, artist_id, name, year FROM albums WHERE artist_id = $1 ORDER BY year, name")
.bind(artist_id)
.fetch_all(pool)
.await
}
pub async fn update_artist_name(pool: &PgPool, id: i64, name: &str) -> Result<bool, sqlx::Error> {
let result = sqlx::query("UPDATE artists SET name = $2 WHERE id = $1")
.bind(id)
.bind(name)
.execute(pool)
.await?;
Ok(result.rows_affected() > 0)
}
pub async fn update_album(pool: &PgPool, id: i64, name: &str, year: Option<i32>) -> Result<bool, sqlx::Error> {
let result = sqlx::query("UPDATE albums SET name = $2, year = $3 WHERE id = $1")
.bind(id)
.bind(name)
.bind(year)
.execute(pool)
.await?;
Ok(result.rows_affected() > 0)
}
#[derive(Debug, Serialize)]
pub struct Stats {
pub total_tracks: i64,
pub total_artists: i64,
pub total_albums: i64,
pub pending_count: i64,
pub review_count: i64,
pub error_count: i64,
}
pub async fn get_stats(pool: &PgPool) -> Result<Stats, sqlx::Error> {
let (total_tracks,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM tracks").fetch_one(pool).await?;
let (total_artists,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM artists").fetch_one(pool).await?;
let (total_albums,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM albums").fetch_one(pool).await?;
let (pending_count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM pending_tracks WHERE status = 'pending'").fetch_one(pool).await?;
let (review_count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM pending_tracks WHERE status = 'review'").fetch_one(pool).await?;
let (error_count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM pending_tracks WHERE status = 'error'").fetch_one(pool).await?;
Ok(Stats { total_tracks, total_artists, total_albums, pending_count, review_count, error_count })
}
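`generate_slug` above keeps the first 12 characters of the 32-character `simple()` hex encoding of a UUID v4. A stdlib-only sketch of that truncation, with a fixed hex string standing in for the uuid crate:

```rust
// Fixed stand-in for Uuid::new_v4().simple().to_string(), so this sketch
// needs no dependency on the uuid crate.
fn fake_simple_uuid() -> String {
    "67e5504410b1426f9247bb680e5fe0c8".to_owned()
}

// Mirrors generate_slug(): keep the first 12 hex chars.
// Byte-range slicing is safe because hex output is pure ASCII.
fn slug_from(hex: &str) -> String {
    hex[..12].to_owned()
}

fn main() {
    let slug = slug_from(&fake_simple_uuid());
    assert_eq!(slug.len(), 12);
    assert_eq!(slug, "67e5504410b1");
}
```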

@@ -0,0 +1,129 @@
use std::path::Path;
use symphonia::core::{
codecs::CODEC_TYPE_NULL,
formats::FormatOptions,
io::MediaSourceStream,
meta::{MetadataOptions, StandardTagKey},
probe::Hint,
};
#[derive(Debug, Default)]
pub struct RawMetadata {
pub title: Option<String>,
pub artist: Option<String>,
pub album: Option<String>,
pub track_number: Option<u32>,
pub year: Option<u32>,
pub genre: Option<String>,
pub duration_secs: Option<f64>,
}
/// Extract metadata from an audio file using Symphonia.
/// Must be called from a blocking context (spawn_blocking).
pub fn extract(path: &Path) -> anyhow::Result<RawMetadata> {
let file = std::fs::File::open(path)?;
let mss = MediaSourceStream::new(Box::new(file), Default::default());
let mut hint = Hint::new();
if let Some(ext) = path.extension().and_then(|e| e.to_str()) {
hint.with_extension(ext);
}
let mut probed = symphonia::default::get_probe().format(
&hint,
mss,
&FormatOptions { enable_gapless: false, ..Default::default() },
&MetadataOptions::default(),
)?;
let mut meta = RawMetadata::default();
// Check metadata side-data (e.g., ID3 tags probed before format)
if let Some(rev) = probed.metadata.get().as_ref().and_then(|m| m.current()) {
extract_tags(rev.tags(), &mut meta);
}
// Also check format-embedded metadata
if let Some(rev) = probed.format.metadata().current() {
if meta.title.is_none() {
extract_tags(rev.tags(), &mut meta);
}
}
// Duration
meta.duration_secs = probed
.format
.tracks()
.iter()
.find(|t| t.codec_params.codec != CODEC_TYPE_NULL)
.and_then(|t| {
let n_frames = t.codec_params.n_frames?;
let tb = t.codec_params.time_base?;
Some(n_frames as f64 * tb.numer as f64 / tb.denom as f64)
});
Ok(meta)
}
fn extract_tags(tags: &[symphonia::core::meta::Tag], meta: &mut RawMetadata) {
for tag in tags {
let value = fix_encoding(tag.value.to_string());
if let Some(key) = tag.std_key {
match key {
StandardTagKey::TrackTitle => {
if meta.title.is_none() {
meta.title = Some(value);
}
}
StandardTagKey::Artist | StandardTagKey::Performer => {
if meta.artist.is_none() {
meta.artist = Some(value);
}
}
StandardTagKey::Album => {
if meta.album.is_none() {
meta.album = Some(value);
}
}
StandardTagKey::TrackNumber => {
if meta.track_number.is_none() {
meta.track_number = value.parse().ok();
}
}
StandardTagKey::Date | StandardTagKey::OriginalDate => {
if meta.year.is_none() {
// Dates are often "YYYY-MM-DD"; parse just the leading year.
// `get` returns None instead of panicking if byte 4 splits a char.
meta.year = value.get(..4.min(value.len())).and_then(|s| s.parse().ok());
}
}
StandardTagKey::Genre => {
if meta.genre.is_none() {
meta.genre = Some(value);
}
}
_ => {}
}
}
}
}
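The `Date` branch above reduces a tag value like "1994-09-13" to a four-digit year. A defensive standalone variant of that step, using `str::get` so a non-ASCII value cannot panic at the byte-4 boundary:

```rust
// Take up to the first four bytes of a date tag and try to parse a year.
// `get` yields None (rather than panicking) if index 4 is not a char boundary.
fn year_prefix(value: &str) -> Option<u32> {
    value.get(..4.min(value.len()))?.parse().ok()
}

fn main() {
    assert_eq!(year_prefix("1994-09-13"), Some(1994));
    assert_eq!(year_prefix("1991"), Some(1991));
    assert_eq!(year_prefix("unknown"), None);
}
```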
/// Heuristic to fix mojibake (CP1251 bytes interpreted as Latin-1/Windows-1252).
fn fix_encoding(s: String) -> String {
let bytes: Vec<u8> = s.chars().map(|c| c as u32).filter(|&c| c <= 255).map(|c| c as u8).collect();
if bytes.len() != s.chars().count() {
return s;
}
let has_mojibake = bytes.iter().any(|&b| b >= 0xC0);
if !has_mojibake {
return s;
}
let (decoded, _, errors) = encoding_rs::WINDOWS_1251.decode(&bytes);
if errors {
return s;
}
decoded.into_owned()
}
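The precheck in `fix_encoding` (every char fits in one byte, and at least one byte is ≥ 0xC0, where CP1251 places А–я) can be exercised in isolation; the actual re-decode needs encoding_rs and is omitted from this sketch:

```rust
// Mirrors the precheck in fix_encoding(): a string is a mojibake candidate
// only if every char fits in a single byte AND some byte falls in the
// 0xC0..=0xFF range (Cyrillic letters under CP1251).
fn looks_like_cp1251_mojibake(s: &str) -> bool {
    let bytes: Vec<u8> = s
        .chars()
        .map(|c| c as u32)
        .filter(|&c| c <= 255)
        .map(|c| c as u8)
        .collect();
    bytes.len() == s.chars().count() && bytes.iter().any(|&b| b >= 0xC0)
}

fn main() {
    // Plain ASCII: not a candidate.
    assert!(!looks_like_cp1251_mojibake("Nirvana"));
    // Correct UTF-8 Cyrillic has chars above 255, so it is left alone.
    assert!(!looks_like_cp1251_mojibake("Кино"));
    // Latin-1 chars in the 0xC0+ range: candidate for re-decoding.
    assert!(looks_like_cp1251_mojibake("Êèíî"));
}
```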

@@ -0,0 +1,555 @@
pub mod metadata;
pub mod normalize;
pub mod path_hints;
pub mod mover;
use std::sync::Arc;
use std::time::Duration;
use crate::db;
use crate::web::AppState;
pub async fn run(state: Arc<AppState>) {
let interval = Duration::from_secs(state.config.poll_interval_secs);
tracing::info!("Ingest loop started, polling every {}s: {:?}", state.config.poll_interval_secs, state.config.inbox_dir);
loop {
match scan_inbox(&state).await {
Ok(0) => {}
Ok(count) => tracing::info!(count, "processed new files"),
Err(e) => tracing::error!(?e, "inbox scan failed"),
}
tokio::time::sleep(interval).await;
}
}
async fn scan_inbox(state: &Arc<AppState>) -> anyhow::Result<usize> {
let mut count = 0;
let mut audio_files = Vec::new();
let mut image_files = Vec::new();
collect_files(&state.config.inbox_dir, &mut audio_files, &mut image_files).await?;
if !audio_files.is_empty() || !image_files.is_empty() {
tracing::info!("Scan found {} audio file(s) and {} image(s) in inbox", audio_files.len(), image_files.len());
}
for file_path in &audio_files {
match process_file(state, file_path).await {
Ok(true) => count += 1,
Ok(false) => tracing::debug!(path = ?file_path, "skipped (already known)"),
Err(e) => tracing::warn!(?e, path = ?file_path, "failed to process file"),
}
}
// Process cover images after audio (so albums exist in DB)
for image_path in &image_files {
match process_cover_image(state, image_path).await {
Ok(true) => {
tracing::info!(path = ?image_path, "Cover image processed");
count += 1;
}
Ok(false) => tracing::debug!(path = ?image_path, "cover image skipped"),
Err(e) => tracing::warn!(?e, path = ?image_path, "failed to process cover image"),
}
}
// Clean up empty directories in inbox
if count > 0 {
cleanup_empty_dirs(&state.config.inbox_dir).await;
}
Ok(count)
}
/// Recursively remove empty directories inside the inbox.
/// Does not remove the inbox root itself.
async fn cleanup_empty_dirs(dir: &std::path::Path) -> bool {
let mut entries = match tokio::fs::read_dir(dir).await {
Ok(e) => e,
Err(_) => return false,
};
let mut is_empty = true;
while let Ok(Some(entry)) = entries.next_entry().await {
let ft = match entry.file_type().await {
Ok(ft) => ft,
Err(_) => { is_empty = false; continue; }
};
if ft.is_dir() {
let child_empty = Box::pin(cleanup_empty_dirs(&entry.path())).await;
if child_empty {
if let Err(e) = tokio::fs::remove_dir(&entry.path()).await {
tracing::warn!(?e, path = ?entry.path(), "Failed to remove empty directory");
// Removal failed, so this directory still contains an entry.
is_empty = false;
} else {
tracing::info!(path = ?entry.path(), "Removed empty inbox directory");
}
} else {
is_empty = false;
}
} else {
is_empty = false;
}
}
is_empty
}
/// Recursively collect all audio files and image files under a directory.
async fn collect_files(dir: &std::path::Path, audio: &mut Vec<std::path::PathBuf>, images: &mut Vec<std::path::PathBuf>) -> anyhow::Result<()> {
let mut entries = tokio::fs::read_dir(dir).await?;
while let Some(entry) = entries.next_entry().await? {
let name = entry.file_name().to_string_lossy().into_owned();
if name.starts_with('.') {
continue;
}
let ft = entry.file_type().await?;
if ft.is_dir() {
Box::pin(collect_files(&entry.path(), audio, images)).await?;
} else if ft.is_file() {
if is_audio_file(&name) {
audio.push(entry.path());
} else if is_cover_image(&name) {
images.push(entry.path());
}
}
}
Ok(())
}
fn is_audio_file(name: &str) -> bool {
let ext = name.rsplit('.').next().unwrap_or("").to_lowercase();
matches!(
ext.as_str(),
"mp3" | "flac" | "ogg" | "opus" | "aac" | "m4a" | "wav" | "ape" | "wv" | "wma" | "tta" | "aiff" | "aif"
)
}
fn is_cover_image(name: &str) -> bool {
let ext = name.rsplit('.').next().unwrap_or("").to_lowercase();
if !matches!(ext.as_str(), "jpg" | "jpeg" | "png" | "webp" | "bmp" | "gif") {
return false;
}
let stem = std::path::Path::new(name)
.file_stem()
.and_then(|s| s.to_str())
.unwrap_or("")
.to_lowercase();
matches!(
stem.as_str(),
"cover" | "front" | "folder" | "back" | "booklet" | "inlay" | "disc" | "cd"
| "album" | "artwork" | "art" | "scan" | "thumb" | "thumbnail"
)
}
fn classify_image(name: &str) -> &'static str {
let stem = std::path::Path::new(name)
.file_stem()
.and_then(|s| s.to_str())
.unwrap_or("")
.to_lowercase();
match stem.as_str() {
"back" => "back",
"booklet" | "inlay" | "scan" => "booklet",
"disc" | "cd" => "disc",
_ => "cover",
}
}
fn mime_for_image(name: &str) -> &'static str {
let ext = name.rsplit('.').next().unwrap_or("").to_lowercase();
match ext.as_str() {
"jpg" | "jpeg" => "image/jpeg",
"png" => "image/png",
"webp" => "image/webp",
"gif" => "image/gif",
"bmp" => "image/bmp",
_ => "application/octet-stream",
}
}
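The matchers above all extract the extension with `name.rsplit('.').next()`. A subtlety worth a standalone sketch: `rsplit` always yields at least one item, so a name with no dot returns the whole name (which then simply matches no known extension):

```rust
// Same extension extraction as is_audio_file()/mime_for_image():
// text after the last dot, or the whole name when there is no dot.
fn ext_of(name: &str) -> String {
    name.rsplit('.').next().unwrap_or("").to_lowercase()
}

fn main() {
    assert_eq!(ext_of("Cover.JPG"), "jpg");
    assert_eq!(ext_of("01 - Track.flac"), "flac");
    // No dot: the whole name comes back and matches no known extension.
    assert_eq!(ext_of("README"), "readme");
}
```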
async fn process_file(state: &Arc<AppState>, file_path: &std::path::Path) -> anyhow::Result<bool> {
let filename = file_path.file_name().and_then(|n| n.to_str()).unwrap_or("?");
tracing::info!(file = filename, "Processing new file: {:?}", file_path);
// Compute file hash for dedup
tracing::info!(file = filename, "Computing file hash...");
let path_clone = file_path.to_path_buf();
let (hash, file_size) = tokio::task::spawn_blocking(move || -> anyhow::Result<(String, i64)> {
let data = std::fs::read(&path_clone)?;
let hash = blake3::hash(&data).to_hex().to_string();
let size = data.len() as i64;
Ok((hash, size))
})
.await??;
tracing::info!(file = filename, hash = &hash[..16], size = file_size, "File hashed");
// Skip if already known
if db::file_hash_exists(&state.pool, &hash).await? {
tracing::info!(file = filename, "Skipping: file hash already exists in database");
return Ok(false);
}
// Extract raw metadata
tracing::info!(file = filename, "Extracting metadata with Symphonia...");
let path_for_meta = file_path.to_path_buf();
let raw_meta = tokio::task::spawn_blocking(move || metadata::extract(&path_for_meta)).await??;
tracing::info!(
file = filename,
artist = raw_meta.artist.as_deref().unwrap_or("-"),
title = raw_meta.title.as_deref().unwrap_or("-"),
album = raw_meta.album.as_deref().unwrap_or("-"),
"Raw metadata extracted"
);
// Parse path hints relative to inbox dir
let relative = file_path.strip_prefix(&state.config.inbox_dir).unwrap_or(file_path);
let hints = path_hints::parse(relative);
if hints.artist.is_some() || hints.album.is_some() || hints.year.is_some() {
tracing::info!(
file = filename,
path_artist = hints.artist.as_deref().unwrap_or("-"),
path_album = hints.album.as_deref().unwrap_or("-"),
path_year = ?hints.year,
"Path hints parsed"
);
}
let inbox_path_str = file_path.to_string_lossy().to_string();
// Insert pending record
tracing::info!(file = filename, "Inserting pending track record...");
let pending_id = db::insert_pending(
&state.pool,
&inbox_path_str,
&hash,
file_size,
&db::RawFields {
title: raw_meta.title.clone(),
artist: raw_meta.artist.clone(),
album: raw_meta.album.clone(),
year: raw_meta.year.map(|y| y as i32),
track_number: raw_meta.track_number.map(|t| t as i32),
genre: raw_meta.genre.clone(),
},
&db::PathHints {
title: hints.title.clone(),
artist: hints.artist.clone(),
album: hints.album.clone(),
year: hints.year,
track_number: hints.track_number,
},
raw_meta.duration_secs,
)
.await?;
db::update_pending_status(&state.pool, pending_id, "processing", None).await?;
// RAG: find similar entries in DB
let artist_query = raw_meta.artist.as_deref()
.or(hints.artist.as_deref())
.unwrap_or("");
let album_query = raw_meta.album.as_deref()
.or(hints.album.as_deref())
.unwrap_or("");
tracing::info!(file = filename, "Searching database for similar artists/albums...");
let similar_artists = if !artist_query.is_empty() {
db::find_similar_artists(&state.pool, artist_query, 5).await.unwrap_or_default()
} else {
Vec::new()
};
let similar_albums = if !album_query.is_empty() {
db::find_similar_albums(&state.pool, album_query, 5).await.unwrap_or_default()
} else {
Vec::new()
};
if !similar_artists.is_empty() {
let names: Vec<&str> = similar_artists.iter().map(|a| a.name.as_str()).collect();
tracing::info!(file = filename, matches = ?names, "Found similar artists in DB");
}
if !similar_albums.is_empty() {
let names: Vec<&str> = similar_albums.iter().map(|a| a.name.as_str()).collect();
tracing::info!(file = filename, matches = ?names, "Found similar albums in DB");
}
// Call LLM for normalization
tracing::info!(file = filename, model = %state.config.ollama_model, "Sending to LLM for normalization...");
match normalize::normalize(state, &raw_meta, &hints, &similar_artists, &similar_albums).await {
Ok(normalized) => {
let confidence = normalized.confidence.unwrap_or(0.0);
let status = if confidence >= state.config.confidence_threshold {
"approved"
} else {
"review"
};
tracing::info!(
file = filename,
norm_artist = normalized.artist.as_deref().unwrap_or("-"),
norm_title = normalized.title.as_deref().unwrap_or("-"),
norm_album = normalized.album.as_deref().unwrap_or("-"),
confidence,
status,
notes = normalized.notes.as_deref().unwrap_or("-"),
"LLM normalization complete"
);
if !normalized.featured_artists.is_empty() {
tracing::info!(
file = filename,
featured = ?normalized.featured_artists,
"Featured artists detected"
);
}
db::update_pending_normalized(&state.pool, pending_id, status, &normalized, None).await?;
// Auto-approve: move file to storage
if status == "approved" {
let artist = normalized.artist.as_deref().unwrap_or("Unknown Artist");
let album = normalized.album.as_deref().unwrap_or("Unknown Album");
let title = normalized.title.as_deref().unwrap_or("Unknown Title");
let ext = file_path.extension().and_then(|e| e.to_str()).unwrap_or("flac");
let track_num = normalized.track_number.unwrap_or(0);
let dest_filename = if track_num > 0 {
format!("{:02} - {}.{}", track_num, sanitize_filename(title), ext)
} else {
format!("{}.{}", sanitize_filename(title), ext)
};
tracing::info!(
file = filename,
dest_artist = artist,
dest_album = album,
dest_filename = %dest_filename,
"Auto-approved, moving to storage..."
);
match mover::move_to_storage(
&state.config.storage_dir,
artist,
album,
&dest_filename,
file_path,
)
.await
{
Ok(storage_path) => {
let rel_path = storage_path.to_string_lossy().to_string();
match db::approve_and_finalize(&state.pool, pending_id, &rel_path).await {
Ok(track_id) => {
tracing::info!(file = filename, track_id, storage = %rel_path, "Track finalized in database");
}
Err(e) => {
tracing::error!(file = filename, ?e, "Failed to finalize track in DB after move");
}
}
}
Err(e) => {
tracing::error!(file = filename, ?e, "Failed to move file to storage");
db::update_pending_status(&state.pool, pending_id, "error", Some(&e.to_string())).await?;
}
}
} else {
tracing::info!(file = filename, confidence, "Sent to review queue (below threshold {})", state.config.confidence_threshold);
}
}
Err(e) => {
tracing::error!(file = filename, ?e, "LLM normalization failed");
db::update_pending_status(&state.pool, pending_id, "error", Some(&e.to_string())).await?;
}
}
Ok(true)
}
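The auto-approve branch above builds the destination filename with a zero-padded track number and a sanitized title. A standalone sketch of that formatting, with the sanitize step inlined:

```rust
// Mirrors the dest_filename logic in process_file(): zero-pad the track
// number when present, otherwise just "<title>.<ext>".
fn dest_filename(track_num: i32, title: &str, ext: &str) -> String {
    // Same character set as sanitize_filename().
    let safe: String = title
        .chars()
        .map(|c| match c {
            '/' | '\\' | ':' | '*' | '?' | '"' | '<' | '>' | '|' => '_',
            _ => c,
        })
        .collect();
    if track_num > 0 {
        format!("{:02} - {}.{}", track_num, safe.trim(), ext)
    } else {
        format!("{}.{}", safe.trim(), ext)
    }
}

fn main() {
    assert_eq!(
        dest_filename(3, "Smells Like Teen Spirit", "mp3"),
        "03 - Smells Like Teen Spirit.mp3"
    );
    assert_eq!(dest_filename(0, "Intro: Part 1", "flac"), "Intro_ Part 1.flac");
}
```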
/// Process a cover image found in the inbox.
/// Uses path hints (Artist/Album/) to find the matching album in the DB,
/// then copies the image to the album's storage folder.
async fn process_cover_image(state: &Arc<AppState>, image_path: &std::path::Path) -> anyhow::Result<bool> {
let filename = image_path.file_name().and_then(|n| n.to_str()).unwrap_or("?");
// Hash for dedup
let path_clone = image_path.to_path_buf();
let (hash, file_size) = tokio::task::spawn_blocking(move || -> anyhow::Result<(String, i64)> {
let data = std::fs::read(&path_clone)?;
let hash = blake3::hash(&data).to_hex().to_string();
let size = data.len() as i64;
Ok((hash, size))
})
.await??;
if db::image_hash_exists(&state.pool, &hash).await? {
return Ok(false);
}
// Derive artist/album from path hints
let relative = image_path.strip_prefix(&state.config.inbox_dir).unwrap_or(image_path);
let components: Vec<&str> = relative
.components()
.filter_map(|c| c.as_os_str().to_str())
.collect();
tracing::info!(file = filename, path = ?relative, components = components.len(), "Processing cover image");
// Supported structures:
// Artist/Album/image.jpg (3+ components)
// Album/image.jpg (2 components — album dir + image)
if components.len() < 2 {
tracing::info!(file = filename, "Cover image not inside an album folder, skipping");
return Ok(false);
}
// The directory directly containing the image is always the album hint
let album_raw = components[components.len() - 2];
let path_artist = if components.len() >= 3 {
Some(components[components.len() - 3])
} else {
None
};
let (album_name, _) = path_hints::parse_album_year_public(album_raw);
tracing::info!(
file = filename,
path_artist = path_artist.unwrap_or("-"),
album_hint = %album_name,
"Looking up album in database..."
);
// Try to find album in DB — try with artist if available, then without
let album_id = if let Some(artist) = path_artist {
find_album_for_cover(&state.pool, artist, &album_name).await?
} else {
None
};
// If not found with artist, try fuzzy album name match across all artists
let album_id = match album_id {
Some(id) => Some(id),
None => {
let similar_albums = db::find_similar_albums(&state.pool, &album_name, 3).await.unwrap_or_default();
if let Some(best) = similar_albums.first() {
if best.similarity > 0.5 {
tracing::info!(file = filename, album = %best.name, similarity = best.similarity, "Matched album by fuzzy search");
Some(best.id)
} else {
None
}
} else {
None
}
}
};
let album_id = match album_id {
Some(id) => id,
None => {
tracing::info!(
file = filename,
artist = path_artist.unwrap_or("-"),
album = %album_name,
"No matching album found in DB, skipping cover"
);
return Ok(false);
}
};
// Determine image type and move to storage
let image_type = classify_image(filename);
let mime = mime_for_image(filename);
// Get album's storage path from any track in that album
let storage_dir_opt: Option<(String,)> = sqlx::query_as(
"SELECT storage_path FROM tracks WHERE album_id = $1 LIMIT 1"
)
.bind(album_id)
.fetch_optional(&state.pool)
.await?;
let album_storage_dir = match storage_dir_opt {
Some((track_path,)) => {
let p = std::path::Path::new(&track_path);
match p.parent() {
Some(dir) if dir.is_dir() => dir.to_path_buf(),
_ => {
tracing::warn!(file = filename, track_path = %track_path, "Track storage path has no valid parent dir");
return Ok(false);
}
}
}
None => {
tracing::info!(file = filename, album_id, "Album has no tracks in storage yet, skipping cover");
return Ok(false);
}
};
tracing::info!(file = filename, dest_dir = ?album_storage_dir, "Will copy cover to album storage dir");
let dest = album_storage_dir.join(filename);
if !dest.exists() {
// Move or copy image
match tokio::fs::rename(image_path, &dest).await {
Ok(()) => {}
Err(_) => {
tokio::fs::copy(image_path, &dest).await?;
tokio::fs::remove_file(image_path).await?;
}
}
}
let dest_str = dest.to_string_lossy().to_string();
db::insert_album_image(&state.pool, album_id, image_type, &dest_str, &hash, mime, file_size).await?;
tracing::info!(
file = filename,
album_id,
image_type,
dest = %dest_str,
"Album image saved"
);
Ok(true)
}
/// Find an album in DB matching the path-derived artist and album name.
/// Tries exact match, then fuzzy artist + exact album, then fuzzy artist + fuzzy album.
async fn find_album_for_cover(pool: &sqlx::PgPool, path_artist: &str, album_name: &str) -> anyhow::Result<Option<i64>> {
// Try exact match first
if let Some(id) = db::find_album_id(pool, path_artist, album_name).await? {
return Ok(Some(id));
}
// Try fuzzy artist, then exact or fuzzy album under that artist
let similar_artists = db::find_similar_artists(pool, path_artist, 5).await.unwrap_or_default();
// The fuzzy-album candidates do not depend on the artist, so fetch them once up front
let similar_albums = db::find_similar_albums(pool, album_name, 3).await.unwrap_or_default();
for artist in &similar_artists {
if artist.similarity < 0.3 {
continue;
}
// Exact album under fuzzy artist
if let Some(id) = db::find_album_id(pool, &artist.name, album_name).await? {
return Ok(Some(id));
}
// Fuzzy album attributed to this artist
for album in &similar_albums {
if album.artist_id == artist.id && album.similarity > 0.4 {
return Ok(Some(album.id));
}
}
}
Ok(None)
}
/// Remove characters that are unsafe for filenames.
fn sanitize_filename(name: &str) -> String {
name.chars()
.map(|c| match c {
'/' | '\\' | ':' | '*' | '?' | '"' | '<' | '>' | '|' => '_',
_ => c,
})
.collect::<String>()
.trim()
.to_owned()
}
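For reference, the sanitizer above can be exercised in isolation. This is a self-contained copy of the same logic with a couple of illustrative assertions:

```rust
/// Standalone copy of sanitize_filename above, for illustration only.
fn sanitize_filename(name: &str) -> String {
    name.chars()
        .map(|c| match c {
            '/' | '\\' | ':' | '*' | '?' | '"' | '<' | '>' | '|' => '_',
            _ => c,
        })
        .collect::<String>()
        .trim()
        .to_owned()
}

fn main() {
    // Each reserved character becomes an underscore; outer whitespace is trimmed
    assert_eq!(sanitize_filename("AC/DC: Live?"), "AC_DC_ Live_");
    assert_eq!(sanitize_filename("  Plain Name  "), "Plain Name");
}
```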

use std::path::{Path, PathBuf};
/// Move a file from inbox to the permanent storage directory.
///
/// Creates the directory structure: `storage_dir/artist/album/filename`
/// Returns the full path of the moved file.
///
/// If `rename` fails (cross-device), falls back to copy + remove.
pub async fn move_to_storage(
storage_dir: &Path,
artist: &str,
album: &str,
filename: &str,
source: &Path,
) -> anyhow::Result<PathBuf> {
let artist_dir = sanitize_dir_name(artist);
let album_dir = sanitize_dir_name(album);
let dest_dir = storage_dir.join(&artist_dir).join(&album_dir);
tokio::fs::create_dir_all(&dest_dir).await?;
let dest = dest_dir.join(filename);
// Avoid overwriting existing files
if dest.exists() {
anyhow::bail!("Destination already exists: {:?}", dest);
}
// Try atomic rename first (same filesystem)
match tokio::fs::rename(source, &dest).await {
Ok(()) => {}
Err(_) => {
// Cross-device: copy then remove
tokio::fs::copy(source, &dest).await?;
tokio::fs::remove_file(source).await?;
}
}
tracing::info!(from = ?source, to = ?dest, "moved file to storage");
Ok(dest)
}
/// Remove characters that are unsafe for directory names.
fn sanitize_dir_name(name: &str) -> String {
name.chars()
.map(|c| match c {
'/' | '\\' | ':' | '*' | '?' | '"' | '<' | '>' | '|' | '\0' => '_',
_ => c,
})
.collect::<String>()
.trim()
.trim_matches('.')
.to_owned()
}

use std::sync::Arc;
use serde::{Deserialize, Serialize};
use crate::db::{NormalizedFields, SimilarAlbum, SimilarArtist};
use crate::web::AppState;
use super::metadata::RawMetadata;
/// Build the user message with all context and call Ollama for normalization.
pub async fn normalize(
state: &Arc<AppState>,
raw: &RawMetadata,
hints: &crate::db::PathHints,
similar_artists: &[SimilarArtist],
similar_albums: &[SimilarAlbum],
) -> anyhow::Result<NormalizedFields> {
let user_message = build_user_message(raw, hints, similar_artists, similar_albums);
let response = call_ollama(
&state.config.ollama_url,
&state.config.ollama_model,
&state.system_prompt,
&user_message,
)
.await?;
parse_response(&response)
}
fn build_user_message(
raw: &RawMetadata,
hints: &crate::db::PathHints,
similar_artists: &[SimilarArtist],
similar_albums: &[SimilarAlbum],
) -> String {
let mut msg = String::from("## Raw metadata from file tags\n");
if let Some(v) = &raw.title {
msg.push_str(&format!("Title: \"{}\"\n", v));
}
if let Some(v) = &raw.artist {
msg.push_str(&format!("Artist: \"{}\"\n", v));
}
if let Some(v) = &raw.album {
msg.push_str(&format!("Album: \"{}\"\n", v));
}
if let Some(v) = raw.year {
msg.push_str(&format!("Year: {}\n", v));
}
if let Some(v) = raw.track_number {
msg.push_str(&format!("Track number: {}\n", v));
}
if let Some(v) = &raw.genre {
msg.push_str(&format!("Genre: \"{}\"\n", v));
}
msg.push_str("\n## Hints from file path\n");
if let Some(v) = &hints.artist {
msg.push_str(&format!("Path artist: \"{}\"\n", v));
}
if let Some(v) = &hints.album {
msg.push_str(&format!("Path album: \"{}\"\n", v));
}
if let Some(v) = hints.year {
msg.push_str(&format!("Path year: {}\n", v));
}
if let Some(v) = hints.track_number {
msg.push_str(&format!("Path track number: {}\n", v));
}
if let Some(v) = &hints.title {
msg.push_str(&format!("Path title: \"{}\"\n", v));
}
if !similar_artists.is_empty() {
msg.push_str("\n## Existing artists in database (similar matches)\n");
for a in similar_artists {
msg.push_str(&format!("- \"{}\" (similarity: {:.2})\n", a.name, a.similarity));
}
}
if !similar_albums.is_empty() {
msg.push_str("\n## Existing albums in database (similar matches)\n");
for a in similar_albums {
let year_str = a.year.map(|y| format!(", year: {}", y)).unwrap_or_default();
msg.push_str(&format!("- \"{}\" (similarity: {:.2}{})\n", a.name, a.similarity, year_str));
}
}
msg
}
#[derive(Serialize)]
struct OllamaRequest {
model: String,
messages: Vec<OllamaMessage>,
format: String,
stream: bool,
options: OllamaOptions,
}
#[derive(Serialize)]
struct OllamaMessage {
role: String,
content: String,
}
#[derive(Serialize)]
struct OllamaOptions {
temperature: f64,
}
#[derive(Deserialize)]
struct OllamaResponse {
message: OllamaResponseMessage,
}
#[derive(Deserialize)]
struct OllamaResponseMessage {
content: String,
}
async fn call_ollama(
base_url: &str,
model: &str,
system_prompt: &str,
user_message: &str,
) -> anyhow::Result<String> {
let client = reqwest::Client::builder()
.timeout(std::time::Duration::from_secs(120))
.build()?;
let request = OllamaRequest {
model: model.to_owned(),
messages: vec![
OllamaMessage {
role: "system".to_owned(),
content: system_prompt.to_owned(),
},
OllamaMessage {
role: "user".to_owned(),
content: user_message.to_owned(),
},
],
format: "json".to_owned(),
stream: false,
options: OllamaOptions { temperature: 0.1 },
};
let url = format!("{}/api/chat", base_url.trim_end_matches('/'));
tracing::info!(%url, model, prompt_len = user_message.len(), "Calling Ollama API...");
let start = std::time::Instant::now();
let resp = client.post(&url).json(&request).send().await?;
let elapsed = start.elapsed();
if !resp.status().is_success() {
let status = resp.status();
let body = resp.text().await.unwrap_or_default();
// Truncate by characters, not bytes, so a long error body cannot split a UTF-8 code point and panic
let snippet: String = body.chars().take(500).collect();
tracing::error!(%status, body = %snippet, "Ollama API error");
anyhow::bail!("Ollama returned {}: {}", status, snippet);
}
let ollama_resp: OllamaResponse = resp.json().await?;
tracing::info!(
elapsed_ms = elapsed.as_millis() as u64,
response_len = ollama_resp.message.content.len(),
"Ollama response received"
);
tracing::debug!(raw_response = %ollama_resp.message.content, "LLM raw output");
Ok(ollama_resp.message.content)
}
/// Parse the LLM JSON response into NormalizedFields.
/// Handles both clean JSON and JSON wrapped in markdown code fences.
fn parse_response(response: &str) -> anyhow::Result<NormalizedFields> {
let cleaned = response.trim();
// Strip markdown code fences if present
let json_str = if cleaned.starts_with("```") {
let start = cleaned.find('{').unwrap_or(0);
let end = cleaned.rfind('}').map(|i| i + 1).unwrap_or(cleaned.len());
&cleaned[start..end]
} else {
cleaned
};
#[derive(Deserialize)]
struct LlmOutput {
artist: Option<String>,
album: Option<String>,
title: Option<String>,
year: Option<i32>,
track_number: Option<i32>,
genre: Option<String>,
#[serde(default)]
featured_artists: Vec<String>,
confidence: Option<f64>,
notes: Option<String>,
}
let parsed: LlmOutput = serde_json::from_str(json_str)
.map_err(|e| anyhow::anyhow!("Failed to parse LLM response as JSON: {} — raw: {}", e, response))?;
Ok(NormalizedFields {
title: parsed.title,
artist: parsed.artist,
album: parsed.album,
year: parsed.year,
track_number: parsed.track_number,
genre: parsed.genre,
featured_artists: parsed.featured_artists,
confidence: parsed.confidence,
notes: parsed.notes,
})
}
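The fence-stripping step in parse_response, extracted as a minimal self-contained sketch of the same logic:

```rust
// Given a reply that may be wrapped in ```json ... ``` fences,
// recover the span from the first '{' to the last '}'.
fn strip_fences(response: &str) -> &str {
    let cleaned = response.trim();
    if cleaned.starts_with("```") {
        let start = cleaned.find('{').unwrap_or(0);
        let end = cleaned.rfind('}').map(|i| i + 1).unwrap_or(cleaned.len());
        &cleaned[start..end]
    } else {
        cleaned
    }
}

fn main() {
    let fenced = "```json\n{\"artist\": \"Pink Floyd\"}\n```";
    // Fenced input is unwrapped; plain JSON passes through untouched
    assert_eq!(strip_fences(fenced), "{\"artist\": \"Pink Floyd\"}");
    assert_eq!(strip_fences("{\"a\":1}"), "{\"a\":1}");
}
```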

use std::path::Path;
use crate::db::PathHints;
/// Parse metadata hints from the file path relative to the inbox directory.
///
/// Recognized patterns:
/// Artist/Album/01 - Title.ext
/// Artist/Album (Year)/01 - Title.ext
/// Artist/(Year) Album/01 - Title.ext
/// Artist/Album [Year]/01 - Title.ext
/// 01 - Title.ext (flat, no artist/album)
pub fn parse(relative_path: &Path) -> PathHints {
let components: Vec<&str> = relative_path
.components()
.filter_map(|c| c.as_os_str().to_str())
.collect();
let mut hints = PathHints::default();
let filename = components.last().copied().unwrap_or("");
let stem = Path::new(filename)
.file_stem()
.and_then(|s| s.to_str())
.unwrap_or("");
// Parse track number and title from filename
parse_filename(stem, &mut hints);
match components.len() {
// Artist/Album/file.ext
3.. => {
hints.artist = Some(components[0].to_owned());
let album_raw = components[1];
let (album, year) = parse_album_with_year(album_raw);
hints.album = Some(album);
if year.is_some() {
hints.year = year;
}
}
// Album/file.ext (or Artist/file.ext — ambiguous, treat as album)
2 => {
let dir = components[0];
let (name, year) = parse_album_with_year(dir);
hints.album = Some(name);
if year.is_some() {
hints.year = year;
}
}
// Just file.ext
_ => {}
}
hints
}
/// Try to extract track number and title from a filename stem.
///
/// Patterns: "01 - Title", "01. Title", "1 Title", "Title"
fn parse_filename(stem: &str, hints: &mut PathHints) {
let trimmed = stem.trim();
// Try "NN - Title" or "NN. Title"
if let Some(rest) = try_strip_track_prefix(trimmed) {
let (num_str, title) = rest;
if let Ok(num) = num_str.parse::<i32>() {
hints.track_number = Some(num);
if !title.is_empty() {
hints.title = Some(title.to_owned());
}
return;
}
}
// No track number found, use full stem as title
if !trimmed.is_empty() {
hints.title = Some(trimmed.to_owned());
}
}
/// Try to parse "NN - Rest" or "NN. Rest" from a string.
/// Returns (number_str, rest) if successful.
fn try_strip_track_prefix(s: &str) -> Option<(&str, &str)> {
// Find leading digits
let digit_end = s.find(|c: char| !c.is_ascii_digit())?;
if digit_end == 0 {
return None;
}
let num_str = &s[..digit_end];
let rest = s[digit_end..].trim_start();
// Expect separator: "- ", ". ", or "." after the digits
let title = if let Some(stripped) = rest.strip_prefix("- ") {
stripped.trim()
} else if let Some(stripped) = rest.strip_prefix(". ") {
stripped.trim()
} else if let Some(stripped) = rest.strip_prefix('.') {
stripped.trim()
} else {
// Just "01 Title": digits followed by a space, then text
rest
};
Some((num_str, title))
}
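A simplified, self-contained sketch of the prefix splitting above (the ". " and "." branches are merged here, since the trailing trim() makes them equivalent):

```rust
// Split a leading track number from the rest of a filename stem.
// Returns (number_str, title) when the stem starts with digits.
fn try_strip_track_prefix(s: &str) -> Option<(&str, &str)> {
    let digit_end = s.find(|c: char| !c.is_ascii_digit())?;
    if digit_end == 0 {
        return None;
    }
    let num_str = &s[..digit_end];
    let rest = s[digit_end..].trim_start();
    let title = if let Some(stripped) = rest.strip_prefix("- ") {
        stripped.trim()
    } else if let Some(stripped) = rest.strip_prefix('.') {
        stripped.trim()
    } else {
        rest
    };
    Some((num_str, title))
}

fn main() {
    assert_eq!(try_strip_track_prefix("01 - Have a Cigar"), Some(("01", "Have a Cigar")));
    assert_eq!(try_strip_track_prefix("03. Song"), Some(("03", "Song")));
    assert_eq!(try_strip_track_prefix("7 Intro"), Some(("7", "Intro")));
    assert_eq!(try_strip_track_prefix("No Number"), None);
}
```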
/// Public wrapper for cover image processing.
pub fn parse_album_year_public(dir: &str) -> (String, Option<i32>) {
parse_album_with_year(dir)
}
/// Extract album name and optional year from directory name.
///
/// Patterns: "Album (2001)", "(2001) Album", "Album [2001]", "Album"
fn parse_album_with_year(dir: &str) -> (String, Option<i32>) {
// Try "Album (YYYY)" or "Album [YYYY]"
for (open, close) in [('(', ')'), ('[', ']')] {
if let Some(start) = dir.rfind(open) {
if let Some(end) = dir[start..].find(close) {
let inside = &dir[start + 1..start + end];
if let Ok(year) = inside.trim().parse::<i32>() {
if (1900..=2100).contains(&year) {
// Join the text on either side of the year with a single space, then trim,
// so "Album (2001) Deluxe" yields "Album Deluxe" rather than "AlbumDeluxe"
let album = format!("{} {}", dir[..start].trim(), dir[start + end + 1..].trim());
let album = album.trim().to_owned();
return (album, Some(year));
}
}
}
}
}
// Try "(YYYY) Album"
if dir.starts_with('(') {
if let Some(end) = dir.find(')') {
let inside = &dir[1..end];
if let Ok(year) = inside.trim().parse::<i32>() {
if (1900..=2100).contains(&year) {
let album = dir[end + 1..].trim().to_owned();
return (album, Some(year));
}
}
}
}
(dir.to_owned(), None)
}
#[cfg(test)]
mod tests {
use super::*;
use std::path::PathBuf;
#[test]
fn test_artist_album_track() {
let p = PathBuf::from("Pink Floyd/Wish You Were Here (1975)/03 - Have a Cigar.flac");
let h = parse(&p);
assert_eq!(h.artist.as_deref(), Some("Pink Floyd"));
assert_eq!(h.album.as_deref(), Some("Wish You Were Here"));
assert_eq!(h.year, Some(1975));
assert_eq!(h.track_number, Some(3));
assert_eq!(h.title.as_deref(), Some("Have a Cigar"));
}
#[test]
fn test_year_prefix() {
let p = PathBuf::from("Artist/(2020) Album Name/01. Song.flac");
let h = parse(&p);
assert_eq!(h.artist.as_deref(), Some("Artist"));
assert_eq!(h.album.as_deref(), Some("Album Name"));
assert_eq!(h.year, Some(2020));
assert_eq!(h.track_number, Some(1));
assert_eq!(h.title.as_deref(), Some("Song"));
}
#[test]
fn test_flat_file() {
let p = PathBuf::from("05 - Something.mp3");
let h = parse(&p);
assert_eq!(h.artist, None);
assert_eq!(h.album, None);
assert_eq!(h.track_number, Some(5));
assert_eq!(h.title.as_deref(), Some("Something"));
}
#[test]
fn test_no_track_number() {
let p = PathBuf::from("Artist/Album/Song Name.flac");
let h = parse(&p);
assert_eq!(h.track_number, None);
assert_eq!(h.title.as_deref(), Some("Song Name"));
}
#[test]
fn test_square_bracket_year() {
let p = PathBuf::from("Band/Album [1999]/track.flac");
let h = parse(&p);
assert_eq!(h.album.as_deref(), Some("Album"));
assert_eq!(h.year, Some(1999));
}
}

furumi-agent/src/main.rs
mod config;
mod db;
mod ingest;
mod web;
use std::sync::Arc;
use clap::Parser;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
tracing_subscriber::fmt::init();
let args = config::Args::parse();
args.validate()?;
let version = option_env!("FURUMI_VERSION").unwrap_or(env!("CARGO_PKG_VERSION"));
tracing::info!("Furumi Agent v{} starting", version);
tracing::info!("Inbox directory: {:?}", args.inbox_dir);
tracing::info!("Storage directory: {:?}", args.storage_dir);
tracing::info!("Ollama: {} (model: {})", args.ollama_url, args.ollama_model);
tracing::info!("Confidence threshold: {}", args.confidence_threshold);
let system_prompt = args.load_system_prompt()?;
tracing::info!("System prompt loaded: {} chars", system_prompt.len());
tracing::info!("Connecting to database...");
let pool = db::connect(&args.database_url).await?;
tracing::info!("Running database migrations...");
db::migrate(&pool).await?;
tracing::info!("Database ready");
let state = Arc::new(web::AppState {
pool: pool.clone(),
config: Arc::new(args),
system_prompt: Arc::new(system_prompt),
});
// Spawn the ingest pipeline as a background task
let ingest_state = state.clone();
tokio::spawn(async move {
ingest::run(ingest_state).await;
});
// Start the admin web UI
let bind_addr: std::net::SocketAddr = state.config.bind.parse().unwrap_or_else(|e| {
eprintln!("Error: Invalid bind address '{}': {}", state.config.bind, e);
std::process::exit(1);
});
tracing::info!("Admin UI: http://{}", bind_addr);
let app = web::build_router(state);
let listener = tokio::net::TcpListener::bind(bind_addr).await?;
axum::serve(listener, app).await?;
Ok(())
}

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Furumi Agent — Admin</title>
<style>
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&display=swap');
*, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }
:root {
--bg-base: #0a0c12;
--bg-panel: #111520;
--bg-card: #161d2e;
--bg-hover: #1e2740;
--bg-active: #252f4a;
--border: #1f2c45;
--accent: #7c6af7;
--accent-dim: #5a4fcf;
--text: #e2e8f0;
--text-muted: #64748b;
--text-dim: #94a3b8;
--success: #34d399;
--danger: #f87171;
--warning: #fbbf24;
}
html, body { height: 100%; overflow: hidden; }
body {
font-family: 'Inter', sans-serif;
background: var(--bg-base);
color: var(--text);
display: flex;
flex-direction: column;
}
header {
background: var(--bg-panel);
border-bottom: 1px solid var(--border);
padding: 12px 24px;
display: flex;
align-items: center;
gap: 24px;
}
header h1 {
font-size: 16px;
font-weight: 600;
}
.stats {
display: flex;
gap: 16px;
margin-left: auto;
font-size: 13px;
color: var(--text-dim);
}
.stats .stat { display: flex; gap: 4px; align-items: center; }
.stats .stat-value { color: var(--text); font-weight: 600; }
nav {
display: flex;
gap: 4px;
}
nav button {
background: none;
border: none;
color: var(--text-muted);
padding: 6px 12px;
border-radius: 6px;
cursor: pointer;
font-size: 13px;
font-family: inherit;
}
nav button:hover { background: var(--bg-hover); color: var(--text); }
nav button.active { background: var(--bg-active); color: var(--accent); }
main {
flex: 1;
overflow-y: auto;
padding: 16px 24px;
}
table {
width: 100%;
border-collapse: collapse;
font-size: 13px;
}
th {
text-align: left;
padding: 8px 12px;
color: var(--text-muted);
font-weight: 500;
border-bottom: 1px solid var(--border);
position: sticky;
top: 0;
background: var(--bg-base);
}
td {
padding: 8px 12px;
border-bottom: 1px solid var(--border);
max-width: 200px;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
tr:hover td { background: var(--bg-hover); }
.status {
padding: 2px 8px;
border-radius: 4px;
font-size: 11px;
font-weight: 600;
text-transform: uppercase;
}
.status-pending { background: #1e293b; color: var(--text-muted); }
.status-processing { background: #1e1b4b; color: var(--accent); }
.status-review { background: #422006; color: var(--warning); }
.status-approved { background: #052e16; color: var(--success); }
.status-rejected { background: #450a0a; color: var(--danger); }
.status-error { background: #450a0a; color: var(--danger); }
.actions {
display: flex;
gap: 4px;
}
.btn {
border: none;
padding: 4px 10px;
border-radius: 4px;
cursor: pointer;
font-size: 12px;
font-family: inherit;
font-weight: 500;
}
.btn-approve { background: #052e16; color: var(--success); }
.btn-approve:hover { background: #065f46; }
.btn-reject { background: #450a0a; color: var(--danger); }
.btn-reject:hover { background: #7f1d1d; }
.btn-edit { background: var(--bg-active); color: var(--text-dim); }
.btn-edit:hover { background: var(--bg-hover); color: var(--text); }
.empty {
text-align: center;
padding: 48px;
color: var(--text-muted);
font-size: 14px;
}
/* Modal */
.modal-overlay {
display: none;
position: fixed;
inset: 0;
background: rgba(0,0,0,0.7);
z-index: 100;
align-items: center;
justify-content: center;
}
.modal-overlay.visible { display: flex; }
.modal {
background: var(--bg-panel);
border: 1px solid var(--border);
border-radius: 12px;
padding: 24px;
min-width: 400px;
max-width: 600px;
}
.modal h2 { font-size: 16px; margin-bottom: 16px; }
.modal label {
display: block;
font-size: 12px;
color: var(--text-muted);
margin-bottom: 4px;
margin-top: 12px;
}
.modal input, .modal textarea {
width: 100%;
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: 6px;
padding: 8px 10px;
color: var(--text);
font-family: inherit;
font-size: 13px;
}
.modal textarea { resize: vertical; min-height: 60px; }
.modal-actions {
margin-top: 20px;
display: flex;
gap: 8px;
justify-content: flex-end;
}
.modal-actions .btn {
padding: 8px 16px;
}
.btn-primary { background: var(--accent); color: white; }
.btn-primary:hover { background: var(--accent-dim); }
.btn-cancel { background: var(--bg-card); color: var(--text-dim); }
.btn-cancel:hover { background: var(--bg-hover); }
/* Detail fields in modal */
.detail-row {
display: flex;
gap: 12px;
margin-top: 8px;
}
.detail-row .field { flex: 1; }
.raw-value {
font-size: 11px;
color: var(--text-muted);
margin-top: 2px;
}
/* Featured artists tags */
.feat-tags {
display: flex;
flex-wrap: wrap;
gap: 6px;
margin-top: 6px;
min-height: 28px;
}
.feat-tag {
display: flex;
align-items: center;
gap: 4px;
background: var(--bg-active);
border: 1px solid var(--border);
border-radius: 4px;
padding: 2px 8px;
font-size: 12px;
}
.feat-tag .remove {
cursor: pointer;
color: var(--text-muted);
font-size: 14px;
line-height: 1;
}
.feat-tag .remove:hover { color: var(--danger); }
/* Artist search dropdown */
.artist-search-wrap {
position: relative;
margin-top: 6px;
}
.artist-search-wrap input {
width: 100%;
}
.artist-dropdown {
position: absolute;
top: 100%;
left: 0;
right: 0;
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: 0 0 6px 6px;
max-height: 160px;
overflow-y: auto;
z-index: 10;
display: none;
}
.artist-dropdown.open { display: block; }
.artist-option {
padding: 6px 10px;
cursor: pointer;
font-size: 13px;
display: flex;
justify-content: space-between;
}
.artist-option:hover { background: var(--bg-hover); }
.artist-option .sim {
color: var(--text-muted);
font-size: 11px;
}
</style>
</head>
<body>
<header>
<h1>Furumi Agent</h1>
<nav>
<button class="active" onclick="showTab('queue')">Queue</button>
<button onclick="showTab('artists')">Artists</button>
</nav>
<div class="stats" id="statsBar"></div>
</header>
<main id="content"></main>
<div class="modal-overlay" id="modalOverlay" onclick="if(event.target===this)closeModal()">
<div class="modal" id="modal"></div>
</div>
<script>
const API = '/api';
let currentTab = 'queue';
let currentFilter = null;
async function api(path, opts) {
const r = await fetch(API + path, opts);
if (r.status === 204) return null;
const text = await r.text();
if (!text) return null;
try { return JSON.parse(text); }
catch(e) { console.error('API parse error:', r.status, text); return null; }
}
async function loadStats() {
const s = await api('/stats');
if (!s) return; // network or parse failure: keep the previous stats
document.getElementById('statsBar').innerHTML = `
<div class="stat">Tracks: <span class="stat-value">${s.total_tracks}</span></div>
<div class="stat">Artists: <span class="stat-value">${s.total_artists}</span></div>
<div class="stat">Albums: <span class="stat-value">${s.total_albums}</span></div>
<div class="stat">Pending: <span class="stat-value">${s.pending_count}</span></div>
<div class="stat">Review: <span class="stat-value">${s.review_count}</span></div>
<div class="stat">Errors: <span class="stat-value">${s.error_count}</span></div>
`;
}
function showTab(tab) {
currentTab = tab;
// Highlight the matching nav button; avoids relying on the non-standard global `event`
document.querySelectorAll('nav button').forEach(b =>
b.classList.toggle('active', b.textContent.trim().toLowerCase() === tab));
if (tab === 'queue') loadQueue();
else if (tab === 'artists') loadArtists();
}
async function loadQueue(status) {
currentFilter = status;
const qs = status ? `?status=${status}` : '';
const items = await api(`/queue${qs}`);
const el = document.getElementById('content');
if (!items || !items.length) {
el.innerHTML = '<div class="empty">No items in queue</div>';
return;
}
let html = `
<div style="margin-bottom:12px;display:flex;gap:4px">
<button class="btn ${!status?'btn-primary':'btn-edit'}" onclick="loadQueue()">All</button>
<button class="btn ${status==='review'?'btn-primary':'btn-edit'}" onclick="loadQueue('review')">Review</button>
<button class="btn ${status==='pending'?'btn-primary':'btn-edit'}" onclick="loadQueue('pending')">Pending</button>
<button class="btn ${status==='approved'?'btn-primary':'btn-edit'}" onclick="loadQueue('approved')">Approved</button>
<button class="btn ${status==='error'?'btn-primary':'btn-edit'}" onclick="loadQueue('error')">Errors</button>
</div>
<table>
<tr><th>Status</th><th>Raw Artist</th><th>Raw Title</th><th>Norm Artist</th><th>Norm Title</th><th>Norm Album</th><th>Conf</th><th>Actions</th></tr>
`;
for (const it of items) {
const conf = it.confidence != null ? it.confidence.toFixed(2) : '-';
html += `<tr>
<td><span class="status status-${it.status}">${it.status}</span></td>
<td title="${esc(it.raw_artist)}">${esc(it.raw_artist || '-')}</td>
<td title="${esc(it.raw_title)}">${esc(it.raw_title || '-')}</td>
<td title="${esc(it.norm_artist)}">${esc(it.norm_artist || '-')}</td>
<td title="${esc(it.norm_title)}">${esc(it.norm_title || '-')}</td>
<td title="${esc(it.norm_album)}">${esc(it.norm_album || '-')}</td>
<td>${conf}</td>
<td class="actions">
${it.status === 'review' ? `<button class="btn btn-approve" onclick="approveItem('${it.id}')">Approve</button>` : ''}
${it.status === 'review' ? `<button class="btn btn-reject" onclick="rejectItem('${it.id}')">Reject</button>` : ''}
<button class="btn btn-edit" onclick="editItem('${it.id}')">Edit</button>
</td>
</tr>`;
}
html += '</table>';
el.innerHTML = html;
}
async function loadArtists() {
const artists = await api('/artists');
const el = document.getElementById('content');
if (!artists || !artists.length) {
el.innerHTML = '<div class="empty">No artists yet</div>';
return;
}
let html = '<table><tr><th>ID</th><th>Name</th><th>Actions</th></tr>';
for (const a of artists) {
// Pass the name via a data attribute so quotes in artist names cannot break the inline handler
html += `<tr>
<td>${a.id}</td>
<td>${esc(a.name)}</td>
<td class="actions">
<button class="btn btn-edit" data-name="${esc(a.name)}" onclick="editArtist(${a.id}, this.dataset.name)">Rename</button>
</td>
</tr>`;
}
html += '</table>';
el.innerHTML = html;
}
async function approveItem(id) {
await api(`/queue/${id}/approve`, { method: 'POST' });
loadStats();
loadQueue(currentFilter);
}
async function rejectItem(id) {
await api(`/queue/${id}/reject`, { method: 'POST' });
loadStats();
loadQueue(currentFilter);
}
let editFeatured = [];
let searchTimer = null;
async function editItem(id) {
const item = await api(`/queue/${id}`);
if (!item) return;
// Parse featured artists from JSON string
editFeatured = [];
if (item.norm_featured_artists) {
try { editFeatured = JSON.parse(item.norm_featured_artists); } catch(e) {}
}
document.getElementById('modal').innerHTML = `
<h2>Edit Metadata</h2>
<div class="detail-row">
<div class="field">
<label>Artist</label>
<input id="ed-artist" value="${esc(item.norm_artist || item.raw_artist || '')}">
<div class="raw-value">Raw: ${esc(item.raw_artist || '-')} | Path: ${esc(item.path_artist || '-')}</div>
</div>
</div>
<div class="detail-row">
<div class="field">
<label>Title</label>
<input id="ed-title" value="${esc(item.norm_title || item.raw_title || '')}">
<div class="raw-value">Raw: ${esc(item.raw_title || '-')} | Path: ${esc(item.path_title || '-')}</div>
</div>
</div>
<div class="detail-row">
<div class="field">
<label>Album</label>
<input id="ed-album" value="${esc(item.norm_album || item.raw_album || '')}">
<div class="raw-value">Raw: ${esc(item.raw_album || '-')} | Path: ${esc(item.path_album || '-')}</div>
</div>
<div class="field">
<label>Year</label>
<input id="ed-year" type="number" value="${item.norm_year || item.raw_year || ''}">
</div>
</div>
<div class="detail-row">
<div class="field">
<label>Track #</label>
<input id="ed-track" type="number" value="${item.norm_track_number || item.raw_track_number || ''}">
</div>
<div class="field">
<label>Genre</label>
<input id="ed-genre" value="${esc(item.norm_genre || item.raw_genre || '')}">
</div>
</div>
<label>Featured Artists</label>
<div class="feat-tags" id="feat-tags"></div>
<div class="artist-search-wrap">
<input id="feat-search" placeholder="Search artist to add..." autocomplete="off"
oninput="onFeatSearch(this.value)" onkeydown="onFeatKey(event)">
<div class="artist-dropdown" id="feat-dropdown"></div>
</div>
${item.llm_notes ? `<label>Agent Notes</label><div class="raw-value" style="margin-bottom:8px">${esc(item.llm_notes)}</div>` : ''}
${item.error_message ? `<label>Error</label><div class="raw-value" style="color:var(--danger)">${esc(item.error_message)}</div>` : ''}
<div class="modal-actions">
<button class="btn btn-cancel" onclick="closeModal()">Cancel</button>
<button class="btn btn-primary" onclick="saveEdit('${item.id}')">Save</button>
</div>
`;
renderFeatTags();
openModal();
}
function renderFeatTags() {
const el = document.getElementById('feat-tags');
if (!el) return;
el.innerHTML = editFeatured.map((name, i) =>
`<span class="feat-tag">${esc(name)}<span class="remove" onclick="removeFeat(${i})">&times;</span></span>`
).join('');
}
function removeFeat(idx) {
editFeatured.splice(idx, 1);
renderFeatTags();
}
function addFeat(name) {
name = name.trim();
if (!name || editFeatured.includes(name)) return;
editFeatured.push(name);
renderFeatTags();
const input = document.getElementById('feat-search');
if (input) { input.value = ''; }
closeFeatDropdown();
}
function onFeatSearch(q) {
clearTimeout(searchTimer);
if (q.length < 2) { closeFeatDropdown(); return; }
searchTimer = setTimeout(async () => {
const results = await api(`/artists/search?q=${encodeURIComponent(q)}&limit=8`);
const dd = document.getElementById('feat-dropdown');
if (!results || !results.length) {
// Show option to add as new
// Pass the query via a data attribute so quotes cannot break the inline handler
dd.innerHTML = `<div class="artist-option" data-name="${esc(q)}" onclick="addFeat(this.dataset.name)">
Add "${esc(q)}" as new
</div>`;
dd.classList.add('open');
return;
}
let html = '';
for (const a of results) {
html += `<div class="artist-option" data-name="${esc(a.name)}" onclick="addFeat(this.dataset.name)">
${esc(a.name)}
</div>`;
}
// Always offer to add typed value as-is
const typed = document.getElementById('feat-search').value.trim();
if (typed && !results.find(a => a.name.toLowerCase() === typed.toLowerCase())) {
html += `<div class="artist-option" data-name="${esc(typed)}" onclick="addFeat(this.dataset.name)">
Add "${esc(typed)}" as new
</div>`;
}
dd.innerHTML = html;
dd.classList.add('open');
}, 250);
}
function onFeatKey(e) {
if (e.key === 'Enter') {
e.preventDefault();
const val = e.target.value.trim();
if (val) addFeat(val);
} else if (e.key === 'Escape') {
closeFeatDropdown();
}
}
function closeFeatDropdown() {
const dd = document.getElementById('feat-dropdown');
if (dd) dd.classList.remove('open');
}
async function saveEdit(id) {
const body = {
norm_artist: document.getElementById('ed-artist').value || null,
norm_title: document.getElementById('ed-title').value || null,
norm_album: document.getElementById('ed-album').value || null,
norm_year: parseInt(document.getElementById('ed-year').value) || null,
norm_track_number: parseInt(document.getElementById('ed-track').value) || null,
norm_genre: document.getElementById('ed-genre').value || null,
featured_artists: editFeatured,
};
await api(`/queue/${id}/update`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(body),
});
closeModal();
loadQueue(currentFilter);
}
async function editArtist(id, currentName) {
const name = prompt('New artist name:', currentName);
if (!name || name === currentName) return;
await api(`/artists/${id}`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ name }),
});
loadArtists();
}
function openModal() { document.getElementById('modalOverlay').classList.add('visible'); }
function closeModal() { document.getElementById('modalOverlay').classList.remove('visible'); }
function esc(s) {
if (s == null) return '';
return String(s).replace(/&/g,'&amp;').replace(/</g,'&lt;').replace(/>/g,'&gt;').replace(/"/g,'&quot;').replace(/'/g,'&#39;');
}
// Init
loadStats();
loadQueue();
setInterval(loadStats, 10000);
</script>
</body>
</html>

furumi-agent/src/web/api.rs
use std::sync::Arc;
use axum::{
extract::{Path, Query, State},
http::StatusCode,
response::{IntoResponse, Json},
};
use serde::Deserialize;
use uuid::Uuid;
use crate::db;
use super::AppState;
type S = Arc<AppState>;
// --- Stats ---
pub async fn stats(State(state): State<S>) -> impl IntoResponse {
match db::get_stats(&state.pool).await {
Ok(stats) => (StatusCode::OK, Json(serde_json::to_value(stats).unwrap())).into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
// --- Queue ---
#[derive(Deserialize)]
pub struct QueueQuery {
#[serde(default)]
pub status: Option<String>,
#[serde(default = "default_limit")]
pub limit: i64,
#[serde(default)]
pub offset: i64,
}
fn default_limit() -> i64 {
50
}
pub async fn list_queue(State(state): State<S>, Query(q): Query<QueueQuery>) -> impl IntoResponse {
match db::list_pending(&state.pool, q.status.as_deref(), q.limit, q.offset).await {
Ok(items) => (StatusCode::OK, Json(serde_json::to_value(items).unwrap())).into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn get_queue_item(State(state): State<S>, Path(id): Path<Uuid>) -> impl IntoResponse {
match db::get_pending(&state.pool, id).await {
Ok(Some(item)) => (StatusCode::OK, Json(serde_json::to_value(item).unwrap())).into_response(),
Ok(None) => error_response(StatusCode::NOT_FOUND, "not found"),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn delete_queue_item(State(state): State<S>, Path(id): Path<Uuid>) -> impl IntoResponse {
match db::delete_pending(&state.pool, id).await {
Ok(true) => StatusCode::NO_CONTENT.into_response(),
Ok(false) => error_response(StatusCode::NOT_FOUND, "not found"),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn approve_queue_item(State(state): State<S>, Path(id): Path<Uuid>) -> impl IntoResponse {
// Get pending track, move file, finalize in DB
let pt = match db::get_pending(&state.pool, id).await {
Ok(Some(pt)) => pt,
Ok(None) => return error_response(StatusCode::NOT_FOUND, "not found"),
Err(e) => return error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
};
let artist = pt.norm_artist.as_deref().unwrap_or("Unknown Artist");
let album = pt.norm_album.as_deref().unwrap_or("Unknown Album");
let title = pt.norm_title.as_deref().unwrap_or("Unknown Title");
let source = std::path::Path::new(&pt.inbox_path);
let ext = source.extension().and_then(|e| e.to_str()).unwrap_or("flac");
let track_num = pt.norm_track_number.unwrap_or(0);
let filename = if track_num > 0 {
format!("{:02} - {}.{}", track_num, sanitize_filename(title), ext)
} else {
format!("{}.{}", sanitize_filename(title), ext)
};
match crate::ingest::mover::move_to_storage(
&state.config.storage_dir,
artist,
album,
&filename,
source,
)
.await
{
Ok(storage_path) => {
let rel_path = storage_path.to_string_lossy().to_string();
match db::approve_and_finalize(&state.pool, id, &rel_path).await {
Ok(track_id) => (StatusCode::OK, Json(serde_json::json!({"track_id": track_id}))).into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn reject_queue_item(State(state): State<S>, Path(id): Path<Uuid>) -> impl IntoResponse {
match db::update_pending_status(&state.pool, id, "rejected", None).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
#[derive(Deserialize)]
pub struct UpdateQueueItem {
pub norm_title: Option<String>,
pub norm_artist: Option<String>,
pub norm_album: Option<String>,
pub norm_year: Option<i32>,
pub norm_track_number: Option<i32>,
pub norm_genre: Option<String>,
#[serde(default)]
pub featured_artists: Vec<String>,
}
pub async fn update_queue_item(
State(state): State<S>,
Path(id): Path<Uuid>,
Json(body): Json<UpdateQueueItem>,
) -> impl IntoResponse {
let norm = db::NormalizedFields {
title: body.norm_title,
artist: body.norm_artist,
album: body.norm_album,
year: body.norm_year,
track_number: body.norm_track_number,
genre: body.norm_genre,
featured_artists: body.featured_artists,
confidence: Some(1.0), // manual edit = full confidence
notes: Some("Manually edited".to_owned()),
};
match db::update_pending_normalized(&state.pool, id, "review", &norm, None).await {
Ok(()) => StatusCode::NO_CONTENT.into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
// --- Artists ---
#[derive(Deserialize)]
pub struct SearchArtistsQuery {
pub q: String,
#[serde(default = "default_search_limit")]
pub limit: i32,
}
fn default_search_limit() -> i32 {
10
}
pub async fn search_artists(State(state): State<S>, Query(q): Query<SearchArtistsQuery>) -> impl IntoResponse {
if q.q.is_empty() {
return (StatusCode::OK, Json(serde_json::json!([]))).into_response();
}
match db::find_similar_artists(&state.pool, &q.q, q.limit).await {
Ok(artists) => (StatusCode::OK, Json(serde_json::to_value(artists).unwrap())).into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn list_artists(State(state): State<S>) -> impl IntoResponse {
match db::list_artists_all(&state.pool).await {
Ok(artists) => (StatusCode::OK, Json(serde_json::to_value(artists).unwrap())).into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
#[derive(Deserialize)]
pub struct UpdateArtistBody {
pub name: String,
}
pub async fn update_artist(
State(state): State<S>,
Path(id): Path<i64>,
Json(body): Json<UpdateArtistBody>,
) -> impl IntoResponse {
match db::update_artist_name(&state.pool, id, &body.name).await {
Ok(true) => StatusCode::NO_CONTENT.into_response(),
Ok(false) => error_response(StatusCode::NOT_FOUND, "not found"),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
// --- Albums ---
pub async fn list_albums(State(state): State<S>, Path(artist_id): Path<i64>) -> impl IntoResponse {
match db::list_albums_by_artist(&state.pool, artist_id).await {
Ok(albums) => (StatusCode::OK, Json(serde_json::to_value(albums).unwrap())).into_response(),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
#[derive(Deserialize)]
pub struct UpdateAlbumBody {
pub name: String,
pub year: Option<i32>,
}
pub async fn update_album(
State(state): State<S>,
Path(id): Path<i64>,
Json(body): Json<UpdateAlbumBody>,
) -> impl IntoResponse {
match db::update_album(&state.pool, id, &body.name, body.year).await {
Ok(true) => StatusCode::NO_CONTENT.into_response(),
Ok(false) => error_response(StatusCode::NOT_FOUND, "not found"),
Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
// --- Helpers ---
fn error_response(status: StatusCode, message: &str) -> axum::response::Response {
(status, Json(serde_json::json!({"error": message}))).into_response()
}
fn sanitize_filename(name: &str) -> String {
name.chars()
.map(|c| match c {
'/' | '\\' | ':' | '*' | '?' | '"' | '<' | '>' | '|' => '_',
_ => c,
})
.collect::<String>()
.trim()
.to_owned()
}
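The approve handler's naming scheme is easy to exercise in isolation. A minimal sketch that copies the `sanitize_filename` logic above and the `{:02} - Title.ext` pattern from `approve_queue_item`; the `target_filename` helper is hypothetical, added here only for illustration:

```rust
// Mirrors the handler's filename logic: reserved filesystem characters
// are replaced with '_', and a zero-padded track number is prefixed
// when the pending track has one.
fn sanitize_filename(name: &str) -> String {
    name.chars()
        .map(|c| match c {
            // Characters reserved on common filesystems.
            '/' | '\\' | ':' | '*' | '?' | '"' | '<' | '>' | '|' => '_',
            _ => c,
        })
        .collect::<String>()
        .trim()
        .to_owned()
}

// Hypothetical helper: the real handler inlines this in approve_queue_item.
fn target_filename(track_num: i32, title: &str, ext: &str) -> String {
    if track_num > 0 {
        format!("{:02} - {}.{}", track_num, sanitize_filename(title), ext)
    } else {
        format!("{}.{}", sanitize_filename(title), ext)
    }
}

fn main() {
    assert_eq!(target_filename(3, "AC/DC: Live?", "flac"), "03 - AC_DC_ Live_.flac");
    assert_eq!(target_filename(0, "Intro", "mp3"), "Intro.mp3");
}
```

Note that slashes are neutralized before the path is built, so a malicious title cannot escape the artist/album directory.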


@@ -0,0 +1,39 @@
pub mod api;
use std::sync::Arc;
use axum::{Router, routing::{get, post, put}};
use sqlx::PgPool;
use crate::config::Args;
#[derive(Clone)]
pub struct AppState {
pub pool: PgPool,
pub config: Arc<Args>,
pub system_prompt: Arc<String>,
}
pub fn build_router(state: Arc<AppState>) -> Router {
let api = Router::new()
.route("/stats", get(api::stats))
.route("/queue", get(api::list_queue))
.route("/queue/:id", get(api::get_queue_item).delete(api::delete_queue_item))
.route("/queue/:id/approve", post(api::approve_queue_item))
.route("/queue/:id/reject", post(api::reject_queue_item))
.route("/queue/:id/update", put(api::update_queue_item))
.route("/artists/search", get(api::search_artists))
.route("/artists", get(api::list_artists))
.route("/artists/:id", put(api::update_artist))
.route("/artists/:id/albums", get(api::list_albums))
.route("/albums/:id", put(api::update_album));
Router::new()
.route("/", get(admin_html))
.nest("/api", api)
.with_state(state)
}
async fn admin_html() -> axum::response::Html<&'static str> {
axum::response::Html(include_str!("admin.html"))
}


@@ -1,6 +1,6 @@
 [package]
 name = "furumi-client-core"
-version = "0.2.1"
+version = "0.3.4"
 edition = "2024"
 [dependencies]


@@ -1,7 +1,7 @@
 use crate::error::{ClientError, Result};
 use furumi_common::proto::{
     remote_file_system_client::RemoteFileSystemClient, AttrResponse, DirEntry, FileChunk,
-    PathRequest, ReadRequest, SnapshotRequest, WatchRequest,
+    PathRequest, ReadRequest,
 };
 use moka::future::Cache;
 use std::future::Future;
@@ -15,7 +15,7 @@ use tonic::codegen::InterceptedService;
 use tonic::metadata::MetadataValue;
 use tonic::transport::{Channel, Endpoint, Uri};
 use tonic::{Request, Status};
-use tracing::{debug, info, warn, trace};
+use tracing::{debug, info};
 // ── Auth interceptor ───────────────────────────────────────────
@@ -127,7 +127,6 @@ impl tower::Service<Uri> for InsecureTlsConnector {
 pub struct FurumiClient {
     client: GrpcClient,
     attr_cache: Cache<String, AttrResponse>,
-    dir_cache: Cache<String, Arc<Vec<DirEntry>>>,
 }
 impl FurumiClient {
@@ -188,130 +187,29 @@ impl FurumiClient {
             .time_to_live(Duration::from_secs(5))
             .build();
-        let dir_cache = Cache::builder()
-            .max_capacity(10_000)
-            .time_to_live(Duration::from_secs(30))
-            .build();
-        Ok(Self { client, attr_cache, dir_cache })
+        Ok(Self { client, attr_cache })
     }
-    /// Spawns background tasks that pre-warm the cache with a server snapshot and then
-    /// subscribe to live change events to keep it up to date.
-    /// Must be called after a successful authentication check.
-    pub fn start_background_sync(&self) {
-        info!("background sync: starting");
-        let this = self.clone();
-        tokio::spawn(async move {
-            let t = std::time::Instant::now();
-            match this.load_snapshot("/", 3).await {
-                Ok(n) => info!(
-                    "background sync: snapshot loaded — {} directories in {:.1}s",
-                    n,
-                    t.elapsed().as_secs_f32()
-                ),
-                Err(e) => {
-                    warn!("background sync: GetSnapshot failed ({}), falling back to on-demand caching", e);
-                    return;
-                }
-            }
-            info!("background sync: subscribing to change events");
-            // Reconnect loop: if the watch stream drops, reconnect after a short delay.
-            loop {
-                match this.run_watch_loop().await {
-                    Ok(()) => {
-                        info!("background sync: WatchChanges stream closed cleanly");
-                        break;
-                    }
-                    Err(e) => {
-                        warn!("background sync: WatchChanges error ({}), reconnecting in 5s", e);
-                        tokio::time::sleep(Duration::from_secs(5)).await;
-                    }
-                }
-            }
-        });
-    }
-    /// Fetches the server's pre-built directory snapshot and populates both caches.
-    /// Returns the number of directories loaded.
-    async fn load_snapshot(&self, path: &str, depth: u32) -> Result<usize> {
-        debug!("snapshot: requesting path={} depth={}", path, depth);
-        let mut client = self.client.clone();
-        let req = tonic::Request::new(SnapshotRequest {
-            path: path.to_string(),
-            depth,
-        });
-        let mut stream = client.get_snapshot(req).await?.into_inner();
-        let mut dirs = 0;
-        let mut attrs_warmed = 0;
-        while let Some(entry) = stream.next().await {
-            let entry = entry?;
-            trace!("snapshot: got dir '{}' ({} entries)", entry.path, entry.children.len());
-            // Warm attr_cache for the directory itself.
-            if let Some(dir_attr) = entry.dir_attr {
-                self.attr_cache.insert(entry.path.clone(), dir_attr).await;
-                attrs_warmed += 1;
-            }
-            // Warm attr_cache for each child (parallel slice: children[i] ↔ child_attrs[i]).
-            for (child, attr) in entry.children.iter().zip(entry.child_attrs.iter()) {
-                let child_path = if entry.path == "/" {
-                    format!("/{}", child.name)
-                } else {
-                    format!("{}/{}", entry.path, child.name)
-                };
-                self.attr_cache.insert(child_path, attr.clone()).await;
-                attrs_warmed += 1;
-            }
-            // Populate dir_cache.
-            self.dir_cache
-                .insert(entry.path, Arc::new(entry.children))
-                .await;
-            dirs += 1;
-        }
-        debug!("snapshot: {} dirs → dir_cache, {} attrs → attr_cache", dirs, attrs_warmed);
-        Ok(dirs)
-    }
-    /// Subscribes to the server's live change events and invalidates `dir_cache` entries.
-    /// Returns when the stream closes or on error.
-    async fn run_watch_loop(&self) -> Result<()> {
-        let mut client = self.client.clone();
-        let req = tonic::Request::new(WatchRequest {});
-        let mut stream = client.watch_changes(req).await?.into_inner();
-        while let Some(event) = stream.next().await {
-            let event = event?;
-            debug!("watch: invalidating dir_cache for '{}'", event.path);
-            self.dir_cache.invalidate(&event.path).await;
-        }
-        Ok(())
-    }
     /// Fetches file attributes from the server, utilizing an internal cache.
     pub async fn get_attr(&self, path: &str) -> Result<AttrResponse> {
         if let Some(attr) = self.attr_cache.get(path).await {
-            trace!("get_attr: cache hit '{}'", path);
             return Ok(attr);
         }
-        let t = std::time::Instant::now();
+        debug!("get_attr (cache miss): {}", path);
         let mut client = self.client.clone();
         let req = tonic::Request::new(PathRequest {
             path: path.to_string(),
         });
         let response = client.get_attr(req).await?.into_inner();
-        debug!("get_attr: cache miss '{}' — rpc {:.1}ms", path, t.elapsed().as_secs_f32() * 1000.0);
         self.attr_cache.insert(path.to_string(), response.clone()).await;
         Ok(response)
     }
-    /// Fetches directory contents from gRPC and stores them in the cache.
-    /// Does not trigger prefetch — safe to call from background tasks.
-    async fn fetch_and_cache_dir(&self, path: &str) -> Result<Arc<Vec<DirEntry>>> {
-        let t = std::time::Instant::now();
+    /// Reads directory contents from the server stream.
+    pub async fn read_dir(&self, path: &str) -> Result<Vec<DirEntry>> {
+        debug!("read_dir: {}", path);
         let mut client = self.client.clone();
         let req = tonic::Request::new(PathRequest {
             path: path.to_string(),
@@ -321,87 +219,13 @@ impl FurumiClient {
         let mut entries = Vec::new();
         while let Some(chunk) = stream.next().await {
-            entries.push(chunk?);
+            let entry = chunk?;
+            entries.push(entry);
         }
-        debug!(
-            "fetch_dir: '{}' — {} entries in {:.1}ms",
-            path,
-            entries.len(),
-            t.elapsed().as_secs_f32() * 1000.0
-        );
-        let entries = Arc::new(entries);
-        self.dir_cache.insert(path.to_string(), entries.clone()).await;
         Ok(entries)
     }
-    /// Reads directory contents, utilizing an internal cache.
-    /// On cache miss, fetches from server and spawns a background task to
-    /// prefetch attributes and immediate subdirectory listings.
-    pub async fn read_dir(&self, path: &str) -> Result<Vec<DirEntry>> {
-        if let Some(entries) = self.dir_cache.get(path).await {
-            debug!("read_dir: cache hit '{}' ({} entries)", path, entries.len());
-            return Ok((*entries).clone());
-        }
-        debug!("read_dir: cache miss '{}' — fetching from server", path);
-        let entries = self.fetch_and_cache_dir(path).await?;
-        let self_clone = self.clone();
-        let path_clone = path.to_string();
-        let entries_clone = entries.clone();
-        tokio::spawn(async move {
-            self_clone.prefetch_children(&path_clone, &entries_clone).await;
-        });
-        Ok((*entries).clone())
-    }
-    /// Background: warms attr_cache for all children and dir_cache for immediate subdirs.
-    async fn prefetch_children(&self, parent: &str, entries: &[DirEntry]) {
-        let dirs: Vec<_> = entries.iter().filter(|e| e.r#type == 4).collect();
-        let files: Vec<_> = entries.iter().filter(|e| e.r#type != 4).collect();
-        debug!(
-            "prefetch: '{}' — warming {} attrs, {} subdirs",
-            parent,
-            entries.len(),
-            dirs.len().min(20)
-        );
-        // Warm attr_cache for all children.
-        for entry in entries {
-            let child_path = if parent == "/" {
-                format!("/{}", entry.name)
-            } else {
-                format!("{}/{}", parent, entry.name)
-            };
-            let _ = self.get_attr(&child_path).await;
-        }
-        let _ = files; // suppress unused warning
-        // Prefetch dir listings for immediate subdirs (up to 20).
-        let subdirs: Vec<_> = dirs.into_iter().take(20).collect();
-        let mut fetched = 0;
-        let mut already_cached = 0;
-        for subdir in &subdirs {
-            let child_path = if parent == "/" {
-                format!("/{}", subdir.name)
-            } else {
-                format!("{}/{}", parent, subdir.name)
-            };
-            if self.dir_cache.get(&child_path).await.is_none() {
-                let _ = self.fetch_and_cache_dir(&child_path).await;
-                fetched += 1;
-            } else {
-                already_cached += 1;
-            }
-        }
-        debug!(
-            "prefetch: '{}' done — {} subdirs fetched, {} already cached",
-            parent, fetched, already_cached
-        );
-    }
     /// Fetches file chunk stream from the server.
     pub async fn read_file(
         &self,
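Both the removed client-side prefetch code and the server's snapshot warming join virtual paths the same way: joining under the root "/" must not produce a double slash. A standalone sketch of that join (the `child_path` function name is ours, for illustration; the crate inlines this expression):

```rust
// Virtual-path join used when building child cache keys.
// The root "/" is special-cased so "/Music" comes out instead of "//Music".
fn child_path(parent: &str, name: &str) -> String {
    if parent == "/" {
        format!("/{}", name)
    } else {
        format!("{}/{}", parent, name)
    }
}

fn main() {
    assert_eq!(child_path("/", "Music"), "/Music");
    assert_eq!(child_path("/Music/Artist", "Album"), "/Music/Artist/Album");
}
```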


@@ -1,6 +1,6 @@
 [package]
 name = "furumi-common"
-version = "0.2.1"
+version = "0.3.4"
 edition = "2024"
 [dependencies]


@@ -29,40 +29,6 @@ message FileChunk {
     bytes data = 1;
 }
-// ── Snapshot & watch ──────────────────────────────────────────────
-// Request a pre-built snapshot of the directory tree up to `depth` levels.
-// depth = 0 means only the requested path itself; depth = 1 includes immediate children, etc.
-message SnapshotRequest {
-    string path = 1;
-    uint32 depth = 2;
-}
-// One directory's contents within a snapshot response.
-// child_attrs is parallel to children: child_attrs[i] is the AttrResponse for children[i].
-// dir_attr is the AttrResponse for the directory itself (path).
-message SnapshotEntry {
-    string path = 1;
-    repeated DirEntry children = 2;
-    repeated AttrResponse child_attrs = 3;
-    AttrResponse dir_attr = 4;
-}
-// Subscribe to live filesystem change notifications (no parameters needed).
-message WatchRequest {}
-enum ChangeKind {
-    CREATED = 0;
-    DELETED = 1;
-    MODIFIED = 2;
-}
-// Notifies the client that the contents of `path` have changed.
-message ChangeEvent {
-    string path = 1;
-    ChangeKind kind = 2;
-}
 service RemoteFileSystem {
     // Get file or directory attributes (size, permissions, timestamps). Maps to stat/getattr.
     rpc GetAttr (PathRequest) returns (AttrResponse);
@@ -72,12 +38,4 @@ service RemoteFileSystem {
     // Read chunks of a file. Uses Server Streaming for efficient chunk delivery based on offset/size.
     rpc ReadFile (ReadRequest) returns (stream FileChunk);
-    // Return a pre-built in-memory snapshot of the directory tree rooted at `path`.
-    // The server walks `depth` levels deep on its side — one round-trip fills the client cache.
-    rpc GetSnapshot (SnapshotRequest) returns (stream SnapshotEntry);
-    // Subscribe to live filesystem change events. The server pushes a ChangeEvent whenever
-    // a directory's contents change, allowing the client to invalidate its cache immediately.
-    rpc WatchChanges (WatchRequest) returns (stream ChangeEvent);
 }
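The removed `SnapshotEntry` carried `children` and `child_attrs` as parallel arrays (`child_attrs[i]` described `children[i]`), and consumers paired them with a plain zip. A tiny illustration with stand-in types (the structs below are simplified placeholders, not the generated prost messages):

```rust
// Simplified stand-ins for the generated prost message types.
struct DirEntry { name: String }
struct Attr { size: u64 }

// children[i] pairs with attrs[i], exactly as the proto comment specified.
fn zip_entries(children: &[DirEntry], attrs: &[Attr]) -> Vec<(String, u64)> {
    children
        .iter()
        .zip(attrs.iter())
        .map(|(c, a)| (c.name.clone(), a.size))
        .collect()
}

fn main() {
    let children = vec![
        DirEntry { name: "a.flac".into() },
        DirEntry { name: "b.flac".into() },
    ];
    let attrs = vec![Attr { size: 10 }, Attr { size: 20 }];
    let pairs = zip_entries(&children, &attrs);
    assert_eq!(pairs[1], ("b.flac".to_string(), 20));
}
```

Parallel arrays keep the wire format flat at the cost of an index invariant the code must uphold; `zip` silently truncates to the shorter slice if the invariant is violated.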


@@ -1,6 +1,6 @@
 [package]
 name = "furumi-mount-linux"
-version = "0.2.1"
+version = "0.3.4"
 edition = "2024"
 [dependencies]


@@ -9,7 +9,7 @@ use std::time::{Duration, UNIX_EPOCH};
 use tracing::{debug, error};
 use tokio::runtime::Handle;
-const TTL: Duration = Duration::from_secs(5); // 5 second FUSE kernel TTL (matches attr_cache)
+const TTL: Duration = Duration::from_secs(1); // 1 second FUSE kernel TTL
 // ── InodeMapper ──────────────────────────────────────────────────


@@ -64,9 +64,6 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
         return Err(format!("Failed to authenticate or connect to server: {}", e).into());
     }
-    // Auth verified — start background snapshot + watch sync
-    c.start_background_sync();
     Ok::<_, Box<dyn std::error::Error>>(c)
 })?;
@@ -78,7 +75,7 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
     MountOption::NoExec, // Better security for media mount
 ];
-println!("Mounting Furumi-ng to {:?}", args.mount);
+println!("Mounting Furumi-ng v{} to {:?}", option_env!("FURUMI_VERSION").unwrap_or(env!("CARGO_PKG_VERSION")), args.mount);
 // Use Session + BackgroundSession for graceful unmount on exit
 let session = Session::new(fuse_fs, &args.mount, &options)?;


@@ -1,6 +1,6 @@
 [package]
 name = "furumi-mount-macos"
-version = "0.2.1"
+version = "0.3.4"
 edition = "2024"
 [dependencies]


@@ -58,11 +58,7 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
     format!("https://{}", args.server)
 };
-let client = rt.block_on(async {
-    let c = FurumiClient::connect(&full_addr, &args.token).await?;
-    c.start_background_sync();
-    Ok::<_, Box<dyn std::error::Error + Send + Sync>>(c)
-})?;
+let client = rt.block_on(async { FurumiClient::connect(&full_addr, &args.token).await })?;
 let furumi_nfs = nfs::FurumiNfs::new(client);
@@ -112,7 +108,7 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
     std::process::exit(1);
 }
-println!("Mounted Furumi-ng to {:?}", mount_path);
+println!("Mounted Furumi-ng v{} to {:?}", option_env!("FURUMI_VERSION").unwrap_or(env!("CARGO_PKG_VERSION")), mount_path);
 // Wait for shutdown signal
 while running.load(Ordering::SeqCst) {


@@ -1,6 +1,6 @@
 [package]
 name = "furumi-server"
-version = "0.2.1"
+version = "0.3.4"
 edition = "2024"
 [dependencies]
@@ -18,16 +18,31 @@ rustls = { version = "0.23.37", features = ["ring"] }
 thiserror = "2.0.18"
 tokio = { version = "1.50.0", features = ["full"] }
 tokio-stream = "0.1.18"
+tokio-util = { version = "0.7", features = ["io"] }
 tonic = { version = "0.12.3", features = ["tls"] }
 tracing = "0.1.44"
 tracing-subscriber = { version = "0.3.22", features = ["env-filter"] }
 async-stream = "0.3.6"
 async-trait = "0.1.89"
 prometheus = { version = "0.14.0", features = ["process"] }
-axum = { version = "0.7", features = ["tokio"] }
+axum = { version = "0.7", features = ["tokio", "macros"] }
 once_cell = "1.21.3"
 rcgen = { version = "0.14.7", features = ["pem"] }
-notify = "6"
+symphonia = { version = "0.5", default-features = false, features = ["mp3", "aac", "flac", "vorbis", "wav", "alac", "adpcm", "pcm", "mpa", "isomp4", "ogg", "aiff", "mkv"] }
+opus = "0.3"
+ogg = "0.9"
+mime_guess = "2.0"
+tower = { version = "0.4", features = ["util"] }
+sha2 = "0.10"
+base64 = "0.22"
+serde = { version = "1", features = ["derive"] }
+serde_json = "1"
+openidconnect = "3.4"
+reqwest = { version = "0.12", default-features = false, features = ["rustls-tls"] }
+hmac = "0.12"
+rand = "0.8"
+encoding_rs = "0.8"
+urlencoding = "2.1.3"
 [dev-dependencies]
 tempfile = "3.26.0"


@@ -2,7 +2,7 @@ pub mod vfs;
 pub mod security;
 pub mod server;
 pub mod metrics;
-pub mod tree;
+pub mod web;
 use std::net::SocketAddr;
 use std::path::PathBuf;
@@ -34,9 +34,37 @@ struct Args {
     #[arg(long, env = "FURUMI_METRICS_BIND", default_value = "0.0.0.0:9090")]
     metrics_bind: String,
+    /// IP address and port for the web music player
+    #[arg(long, env = "FURUMI_WEB_BIND", default_value = "0.0.0.0:8080")]
+    web_bind: String,
+    /// Disable the web music player UI
+    #[arg(long, default_value_t = false)]
+    no_web: bool,
     /// Disable TLS encryption (not recommended, use only for debugging)
     #[arg(long, default_value_t = false)]
     no_tls: bool,
+    /// OIDC Issuer URL (e.g. https://auth.example.com/application/o/furumi/)
+    #[arg(long, env = "FURUMI_OIDC_ISSUER_URL")]
+    oidc_issuer_url: Option<String>,
+    /// OIDC Client ID
+    #[arg(long, env = "FURUMI_OIDC_CLIENT_ID")]
+    oidc_client_id: Option<String>,
+    /// OIDC Client Secret
+    #[arg(long, env = "FURUMI_OIDC_CLIENT_SECRET")]
+    oidc_client_secret: Option<String>,
+    /// OIDC Redirect URL (e.g. https://music.example.com/auth/callback)
+    #[arg(long, env = "FURUMI_OIDC_REDIRECT_URL")]
+    oidc_redirect_url: Option<String>,
+    /// OIDC Session Secret (32+ chars, for HMAC). If not provided, a random one is generated on startup.
+    #[arg(long, env = "FURUMI_OIDC_SESSION_SECRET")]
+    oidc_session_secret: Option<String>,
 }
 async fn metrics_handler() -> String {
@@ -75,13 +103,12 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
     }
     let vfs = Arc::new(LocalVfs::new(&root_path));
-    let tree = Arc::new(tree::WatchedTree::new(root_path.clone()).await?);
-    let remote_fs = RemoteFileSystemImpl::new(vfs, tree);
+    let remote_fs = RemoteFileSystemImpl::new(vfs);
     let auth = AuthInterceptor::new(args.token.clone());
     let svc = RemoteFileSystemServer::with_interceptor(remote_fs, auth.clone());
     // Print startup info
-    println!("Furumi-ng Server listening on {}", addr);
+    println!("Furumi-ng Server v{} listening on {}", option_env!("FURUMI_VERSION").unwrap_or(env!("CARGO_PKG_VERSION")), addr);
     if args.no_tls {
         println!("WARNING: TLS is DISABLED — traffic is unencrypted");
     } else {
@@ -102,6 +129,40 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
         axum::serve(metrics_listener, metrics_app).await.unwrap();
     });
+    // Spawn the web music player on its own port
+    if !args.no_web {
+        let web_addr: SocketAddr = args.web_bind.parse().unwrap_or_else(|e| {
+            eprintln!("Error: Invalid web bind address '{}': {}", args.web_bind, e);
+            std::process::exit(1);
+        });
+        // Initialize OIDC State if provided
+        let oidc_state = if let (Some(issuer), Some(client_id), Some(secret), Some(redirect)) = (
+            args.oidc_issuer_url,
+            args.oidc_client_id,
+            args.oidc_client_secret,
+            args.oidc_redirect_url,
+        ) {
+            println!("OIDC (SSO): enabled for web UI (issuer: {})", issuer);
+            match web::auth::oidc_init(issuer, client_id, secret, redirect, args.oidc_session_secret).await {
+                Ok(state) => Some(Arc::new(state)),
+                Err(e) => {
+                    eprintln!("Error initializing OIDC client: {}", e);
+                    std::process::exit(1);
+                }
+            }
+        } else {
+            None
+        };
+        let web_app = web::build_router(root_path.clone(), args.token.clone(), oidc_state);
+        let web_listener = tokio::net::TcpListener::bind(web_addr).await?;
+        println!("Web player: http://{}", web_addr);
+        tokio::spawn(async move {
+            axum::serve(web_listener, web_app).await.unwrap();
+        });
+    }
     let mut builder = Server::builder()
         .tcp_keepalive(Some(std::time::Duration::from_secs(60)))
         .http2_keepalive_interval(Some(std::time::Duration::from_secs(60)));
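OIDC here is all-or-nothing: SSO is enabled only when issuer, client ID, client secret, and redirect URL are all provided, otherwise the web UI starts without it. A minimal sketch of that gating, with a hypothetical `oidc_enabled` helper condensing the tuple pattern used above:

```rust
// All-or-nothing config gating: every OIDC value must be set to enable SSO.
// `oidc_enabled` is a hypothetical helper, not a function from the crate.
fn oidc_enabled(
    issuer: Option<&str>,
    client_id: Option<&str>,
    client_secret: Option<&str>,
    redirect: Option<&str>,
) -> bool {
    matches!(
        (issuer, client_id, client_secret, redirect),
        (Some(_), Some(_), Some(_), Some(_))
    )
}

fn main() {
    // Every field present → SSO on.
    assert!(oidc_enabled(Some("issuer"), Some("id"), Some("secret"), Some("redirect")));
    // Any missing field → the web UI runs without SSO.
    assert!(!oidc_enabled(Some("issuer"), None, Some("secret"), Some("redirect")));
}
```

Matching on the tuple keeps the four options in one place, so a partially configured deployment fails closed (no SSO) rather than half-enabling it.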


@@ -1,27 +1,22 @@
 use std::pin::Pin;
-use std::sync::Arc;
-use tokio::sync::broadcast;
 use tokio_stream::Stream;
 use tonic::{Request, Status};
-use crate::metrics::{self, RequestTimer};
-use crate::security::sanitize_path;
-use crate::tree::WatchedTree;
 use crate::vfs::VirtualFileSystem;
+use crate::metrics::{self, RequestTimer};
 use furumi_common::proto::{
-    remote_file_system_server::RemoteFileSystem, AttrResponse, ChangeEvent, DirEntry, FileChunk,
-    PathRequest, ReadRequest, SnapshotEntry, SnapshotRequest, WatchRequest,
+    remote_file_system_server::RemoteFileSystem, AttrResponse, DirEntry, FileChunk,
+    PathRequest, ReadRequest,
 };
+use crate::security::sanitize_path;
 pub struct RemoteFileSystemImpl<V: VirtualFileSystem> {
-    vfs: Arc<V>,
-    tree: Arc<WatchedTree>,
+    vfs: std::sync::Arc<V>,
 }
 impl<V: VirtualFileSystem> RemoteFileSystemImpl<V> {
-    pub fn new(vfs: Arc<V>, tree: Arc<WatchedTree>) -> Self {
-        Self { vfs, tree }
+    pub fn new(vfs: std::sync::Arc<V>) -> Self {
+        Self { vfs }
     }
 }
@@ -48,7 +43,11 @@ impl<V: VirtualFileSystem> RemoteFileSystem for RemoteFileSystemImpl<V> {
         }
     }
-    type ReadDirStream = Pin<Box<dyn Stream<Item = Result<DirEntry, Status>> + Send + 'static>>;
+    type ReadDirStream = Pin<
+        Box<
+            dyn Stream<Item = Result<DirEntry, Status>> + Send + 'static,
+        >,
+    >;
     async fn read_dir(
         &self,
@@ -79,7 +78,11 @@ impl<V: VirtualFileSystem> RemoteFileSystem for RemoteFileSystemImpl<V> {
         }
     }
-    type ReadFileStream = Pin<Box<dyn Stream<Item = Result<FileChunk, Status>> + Send + 'static>>;
+    type ReadFileStream = Pin<
+        Box<
+            dyn Stream<Item = Result<FileChunk, Status>> + Send + 'static,
+        >,
+    >;
     async fn read_file(
         &self,
@@ -123,74 +126,5 @@ impl<V: VirtualFileSystem> RemoteFileSystem for RemoteFileSystemImpl<V> {
             }
         }
     }
-    // ── Snapshot ─────────────────────────────────────────────────
-    type GetSnapshotStream =
-        Pin<Box<dyn Stream<Item = Result<SnapshotEntry, Status>> + Send + 'static>>;
-    async fn get_snapshot(
-        &self,
-        request: Request<SnapshotRequest>,
-    ) -> Result<Response<Self::GetSnapshotStream>, Status> {
-        let req = request.into_inner();
-        let safe_path = sanitize_path(&req.path)?;
-        // sanitize_path strips the leading "/" — map "" back to "/" for snapshot lookup.
-        let virt_path = sanitized_to_virt(&safe_path);
-        let entries = self.tree.get_snapshot(&virt_path, req.depth).await;
-        let total_entries: usize = entries.iter().map(|(_, d)| d.children.len()).sum();
-        tracing::debug!(
-            "GetSnapshot: path='{}' depth={} → {} dirs, {} total entries",
-            virt_path, req.depth, entries.len(), total_entries
-        );
-        let stream = async_stream::try_stream! {
-            for (path, snap_dir) in entries {
-                yield SnapshotEntry {
-                    path,
-                    children: snap_dir.children,
-                    child_attrs: snap_dir.child_attrs,
-                    dir_attr: Some(snap_dir.dir_attr),
-                };
-            }
-        };
-        Ok(Response::new(Box::pin(stream) as Self::GetSnapshotStream))
-    }
-    // ── Watch ─────────────────────────────────────────────────────
-    type WatchChangesStream =
-        Pin<Box<dyn Stream<Item = Result<ChangeEvent, Status>> + Send + 'static>>;
-    async fn watch_changes(
-        &self,
-        _request: Request<WatchRequest>,
-    ) -> Result<Response<Self::WatchChangesStream>, Status> {
-        let mut rx = self.tree.subscribe();
-        let stream = async_stream::try_stream! {
-            loop {
-                match rx.recv().await {
-                    Ok(event) => yield event,
-                    Err(broadcast::error::RecvError::Lagged(n)) => {
-                        // Client was too slow — it missed n events.
-                        // Log and continue; the client's TTL will cover the gap.
-                        tracing::warn!("WatchChanges client lagged, skipped {} events", n);
-                    }
-                    Err(broadcast::error::RecvError::Closed) => break,
-                }
-            }
-        };
-        Ok(Response::new(Box::pin(stream) as Self::WatchChangesStream))
-    }
 }
-/// sanitize_path removes the leading "/" so "/" becomes "". Map it back.
-fn sanitized_to_virt(safe: &str) -> String {
-    if safe.is_empty() {
-        "/".to_string()
-    } else {
-        format!("/{}", safe)
-    }
-}


@@ -1,327 +0,0 @@
use std::collections::{HashMap, HashSet, VecDeque};
use std::os::unix::fs::MetadataExt;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
use notify::{Config, RecommendedWatcher, RecursiveMode, Watcher};
use tokio::sync::{broadcast, RwLock};
use tracing::{debug, info, warn};
use furumi_common::proto::{AttrResponse, ChangeEvent, ChangeKind, DirEntry};
/// How many directory levels to pre-walk on startup.
const INITIAL_DEPTH: u32 = 3;
/// Broadcast channel capacity — clients that fall behind lose events and rely on TTL.
const BROADCAST_CAPACITY: usize = 256;
// ── Types ─────────────────────────────────────────────────────────
/// One directory in the snapshot: its entries and the attr for each child.
/// `children` and `child_attrs` are parallel slices (same index = same file).
/// `dir_attr` is the attr of the directory itself.
#[derive(Clone)]
pub struct SnapDir {
pub children: Vec<DirEntry>,
pub child_attrs: Vec<AttrResponse>,
pub dir_attr: AttrResponse,
}
// ── WatchedTree ──────────────────────────────────────────────────
/// Maintains an in-memory snapshot of the directory tree and broadcasts
/// change events to connected clients via inotify.
pub struct WatchedTree {
/// Virtual-path → (entries + attrs).
snapshot: Arc<RwLock<HashMap<String, SnapDir>>>,
change_tx: broadcast::Sender<ChangeEvent>,
/// Kept alive to continue watching. Shared with the event handler so it can
/// add new watches when directories are created at runtime.
_watcher: Arc<Mutex<RecommendedWatcher>>,
}
impl WatchedTree {
pub async fn new(root: PathBuf) -> anyhow::Result<Self> {
let snapshot: Arc<RwLock<HashMap<String, SnapDir>>> =
Arc::new(RwLock::new(HashMap::new()));
let (change_tx, _) = broadcast::channel(BROADCAST_CAPACITY);
info!("WatchedTree: walking '{}' (depth {})…", root.display(), INITIAL_DEPTH);
let t = std::time::Instant::now();
let watched_dirs = walk_tree(&root, INITIAL_DEPTH, &snapshot).await;
let snap_len = snapshot.read().await.len();
let total_entries: usize = snapshot.read().await.values().map(|d| d.children.len()).sum();
info!(
"WatchedTree: snapshot ready — {} dirs, {} entries, {} watches, took {:.1}s",
snap_len,
total_entries,
watched_dirs.len(),
t.elapsed().as_secs_f32(),
);
// Bridge notify's sync callback → async tokio task.
let (notify_tx, mut notify_rx) =
tokio::sync::mpsc::unbounded_channel::<notify::Result<notify::Event>>();
let watcher = Arc::new(Mutex::new(RecommendedWatcher::new(
move |res| {
let _ = notify_tx.send(res);
},
Config::default(),
)?));
// Add one non-recursive inotify watch per directory in the snapshot.
{
let mut w = watcher.lock().unwrap();
for dir_abs in &watched_dirs {
if let Err(e) = w.watch(dir_abs, RecursiveMode::NonRecursive) {
warn!("watch failed for {:?}: {}", dir_abs, e);
}
}
}
let snapshot_bg = Arc::clone(&snapshot);
let root_bg = root.clone();
let tx_bg = change_tx.clone();
let watcher_bg = Arc::clone(&watcher);
tokio::spawn(async move {
while let Some(res) = notify_rx.recv().await {
match res {
Ok(event) => {
handle_fs_event(event, &root_bg, &snapshot_bg, &tx_bg, &watcher_bg).await
}
Err(e) => warn!("notify error: {}", e),
}
}
});
Ok(Self {
snapshot,
change_tx,
_watcher: watcher,
})
}
/// Returns all snapshot entries within `depth` levels of `base`.
pub async fn get_snapshot(&self, base: &str, depth: u32) -> Vec<(String, SnapDir)> {
let snap = self.snapshot.read().await;
snap.iter()
.filter(|(path, _)| path_depth_from(base, path).map_or(false, |d| d <= depth))
.map(|(path, snap_dir)| (path.clone(), snap_dir.clone()))
.collect()
}
pub fn subscribe(&self) -> broadcast::Receiver<ChangeEvent> {
self.change_tx.subscribe()
}
}
// ── Helpers ──────────────────────────────────────────────────────
fn metadata_to_attr(meta: &std::fs::Metadata) -> AttrResponse {
AttrResponse {
size: meta.len(),
mode: meta.mode(),
mtime: meta.mtime() as u64,
}
}
/// BFS walk: reads filesystem, stores entries + attrs in snapshot.
/// Returns the list of absolute paths walked (for inotify setup).
async fn walk_tree(
root: &Path,
max_depth: u32,
snapshot: &Arc<RwLock<HashMap<String, SnapDir>>>,
) -> Vec<PathBuf> {
let mut walked: Vec<PathBuf> = Vec::new();
let mut queue: VecDeque<(PathBuf, String, u32)> = VecDeque::new();
queue.push_back((root.to_path_buf(), "/".to_string(), 0));
while let Some((abs_path, virt_path, depth)) = queue.pop_front() {
// Stat the directory itself.
let dir_attr = match std::fs::metadata(&abs_path) {
Ok(m) => metadata_to_attr(&m),
Err(e) => {
warn!("walk_tree: cannot stat {:?}: {}", abs_path, e);
continue;
}
};
let mut dir = match tokio::fs::read_dir(&abs_path).await {
Ok(d) => d,
Err(e) => {
warn!("walk_tree: cannot read {:?}: {}", abs_path, e);
continue;
}
};
let mut children: Vec<DirEntry> = Vec::new();
let mut child_attrs: Vec<AttrResponse> = Vec::new();
while let Ok(Some(entry)) = dir.next_entry().await {
let Ok(ft) = entry.file_type().await else { continue };
let type_val = if ft.is_dir() { 4 } else if ft.is_file() { 8 } else { continue }; // DT_DIR / DT_REG
let name = entry.file_name().to_string_lossy().into_owned();
// Stat the child.
let attr = match entry.metadata().await {
Ok(m) => metadata_to_attr(&m),
Err(_) => AttrResponse { size: 0, mode: 0, mtime: 0 },
};
// Skip hidden directories to avoid exploding the watch list.
if ft.is_dir() && name.starts_with('.') {
continue;
}
if ft.is_dir() && depth < max_depth {
let child_virt = child_virt_path(&virt_path, &name);
queue.push_back((entry.path(), child_virt, depth + 1));
}
children.push(DirEntry { name, r#type: type_val });
child_attrs.push(attr);
}
snapshot.write().await.insert(virt_path, SnapDir { children, child_attrs, dir_attr });
walked.push(abs_path);
}
walked
}
/// Re-reads a single directory and updates its snapshot entry.
async fn refresh_dir(
abs_path: &Path,
virt_path: &str,
snapshot: &Arc<RwLock<HashMap<String, SnapDir>>>,
) {
let dir_attr = match std::fs::metadata(abs_path) {
Ok(m) => metadata_to_attr(&m),
Err(_) => {
snapshot.write().await.remove(virt_path);
return;
}
};
let mut dir = match tokio::fs::read_dir(abs_path).await {
Ok(d) => d,
Err(_) => {
snapshot.write().await.remove(virt_path);
return;
}
};
let mut children: Vec<DirEntry> = Vec::new();
let mut child_attrs: Vec<AttrResponse> = Vec::new();
while let Ok(Some(entry)) = dir.next_entry().await {
let Ok(ft) = entry.file_type().await else { continue };
let type_val = if ft.is_dir() { 4 } else if ft.is_file() { 8 } else { continue }; // DT_DIR / DT_REG
let attr = match entry.metadata().await {
Ok(m) => metadata_to_attr(&m),
Err(_) => AttrResponse { size: 0, mode: 0, mtime: 0 },
};
children.push(DirEntry {
name: entry.file_name().to_string_lossy().into_owned(),
r#type: type_val,
});
child_attrs.push(attr);
}
snapshot.write().await.insert(virt_path.to_string(), SnapDir { children, child_attrs, dir_attr });
}
async fn handle_fs_event(
event: notify::Event,
root: &Path,
snapshot: &Arc<RwLock<HashMap<String, SnapDir>>>,
tx: &broadcast::Sender<ChangeEvent>,
watcher: &Arc<Mutex<RecommendedWatcher>>,
) {
use notify::EventKind;
let proto_kind = match &event.kind {
EventKind::Create(_) => ChangeKind::Created,
EventKind::Remove(_) => ChangeKind::Deleted,
EventKind::Modify(_) => ChangeKind::Modified,
_ => return,
};
let kind_i32 = proto_kind as i32;
if matches!(event.kind, EventKind::Create(_)) {
for path in &event.paths {
if path.is_dir()
&& !path.file_name().map_or(false, |n| n.to_string_lossy().starts_with('.'))
{
let mut w = watcher.lock().unwrap();
if let Err(e) = w.watch(path, RecursiveMode::NonRecursive) {
warn!("failed to add watch for new dir {:?}: {}", path, e);
}
}
}
}
let mut parents: HashSet<PathBuf> = HashSet::new();
for path in &event.paths {
if let Some(parent) = path.parent() {
if parent.starts_with(root) {
parents.insert(parent.to_path_buf());
}
}
}
for parent_abs in parents {
let virt = abs_to_virt(root, &parent_abs);
debug!("snapshot refresh: {}", virt);
refresh_dir(&parent_abs, &virt, snapshot).await;
let _ = tx.send(ChangeEvent { path: virt, kind: kind_i32 });
}
}
fn abs_to_virt(root: &Path, abs: &Path) -> String {
match abs.strip_prefix(root) {
Ok(rel) if rel.as_os_str().is_empty() => "/".to_string(),
Ok(rel) => format!("/{}", rel.to_string_lossy()),
Err(_) => "/".to_string(),
}
}
fn child_virt_path(parent: &str, name: &str) -> String {
if parent == "/" { format!("/{}", name) } else { format!("{}/{}", parent, name) }
}
fn path_depth_from(base: &str, path: &str) -> Option<u32> {
if base == "/" {
if path == "/" { return Some(0); }
let trimmed = path.trim_start_matches('/');
Some(trimmed.matches('/').count() as u32 + 1)
} else {
if path == base { return Some(0); }
let prefix = format!("{}/", base);
path.strip_prefix(prefix.as_str())
.map(|rest| rest.matches('/').count() as u32 + 1)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_path_depth_from_root() {
assert_eq!(path_depth_from("/", "/"), Some(0));
assert_eq!(path_depth_from("/", "/a"), Some(1));
assert_eq!(path_depth_from("/", "/a/b"), Some(2));
assert_eq!(path_depth_from("/", "/a/b/c"), Some(3));
}
#[test]
fn test_path_depth_from_subdir() {
assert_eq!(path_depth_from("/movies", "/movies"), Some(0));
assert_eq!(path_depth_from("/movies", "/movies/action"), Some(1));
assert_eq!(path_depth_from("/movies", "/movies/action/marvel"), Some(2));
assert_eq!(path_depth_from("/movies", "/music"), None);
assert_eq!(path_depth_from("/movies", "/movies-extra"), None);
assert_eq!(path_depth_from("/movies", "/"), None);
}
}


@@ -0,0 +1,521 @@
use axum::{
body::Body,
extract::{Form, Request, State},
http::{header, HeaderMap, StatusCode},
middleware::Next,
response::{Html, IntoResponse, Redirect, Response},
};
use openidconnect::{
core::{CoreClient, CoreProviderMetadata, CoreResponseType},
reqwest::async_http_client,
AuthenticationFlow, AuthorizationCode, ClientId, ClientSecret, CsrfToken, IssuerUrl, Nonce,
PkceCodeChallenge, PkceCodeVerifier, RedirectUrl, Scope, TokenResponse,
};
use rand::RngCore;
use serde::Deserialize;
use sha2::{Digest, Sha256};
use base64::Engine;
use hmac::{Hmac, Mac};
use super::{OidcState, WebState};
/// Cookie name used to store the session token.
const SESSION_COOKIE: &str = "furumi_session";
fn esc(s: &str) -> String {
s.replace('&', "&amp;")
.replace('<', "&lt;")
.replace('>', "&gt;")
.replace('"', "&quot;")
.replace('\'', "&#39;")
}
/// Compute SHA-256 of the token as hex string (stored in cookie, not raw token).
pub fn token_hash(token: &str) -> String {
let mut h = Sha256::new();
h.update(token.as_bytes());
format!("{:x}", h.finalize())
}
pub async fn require_auth(
State(state): State<WebState>,
mut req: Request,
next: Next,
) -> Response {
// Auth disabled when token is empty
if state.token.is_empty() {
req.extensions_mut().insert(super::AuthUserInfo("Unauthenticated".to_string()));
return next.run(req).await;
}
let cookies = req
.headers()
.get(header::COOKIE)
.and_then(|v| v.to_str().ok())
.unwrap_or("");
let expected = token_hash(&state.token);
let mut authed_user = None;
for c in cookies.split(';') {
let c = c.trim();
if let Some(val) = c.strip_prefix(&format!("{}=", SESSION_COOKIE)) {
if val == expected {
authed_user = Some("Master Token".to_string());
break;
} else if let Some(oidc) = &state.oidc {
if let Some(user) = verify_sso_cookie(&oidc.session_secret, val) {
authed_user = Some(user);
break;
}
}
}
}
if let Some(user) = authed_user {
req.extensions_mut().insert(super::AuthUserInfo(user));
next.run(req).await
} else {
let uri = req.uri().to_string();
if uri.starts_with("/api/") {
(StatusCode::UNAUTHORIZED, "Unauthorized").into_response()
} else {
let redirect_url = format!("/login?next={}", urlencoding::encode(&uri));
Redirect::to(&redirect_url).into_response()
}
}
}
type HmacSha256 = Hmac<sha2::Sha256>;
pub fn generate_sso_cookie(secret: &[u8], user_id: &str) -> String {
let mut mac = HmacSha256::new_from_slice(secret).unwrap();
mac.update(user_id.as_bytes());
let sig = base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(mac.finalize().into_bytes());
format!("sso:{}:{}", user_id, sig)
}
pub fn verify_sso_cookie(secret: &[u8], cookie_val: &str) -> Option<String> {
let parts: Vec<&str> = cookie_val.split(':').collect();
if parts.len() != 3 || parts[0] != "sso" {
return None;
}
let user_id = parts[1];
let sig = base64::engine::general_purpose::URL_SAFE_NO_PAD.decode(parts[2]).ok()?;
let mut mac = HmacSha256::new_from_slice(secret).ok()?;
mac.update(user_id.as_bytes());
// verify_slice compares in constant time, avoiding a timing side channel
// that a plain string comparison of the signatures would introduce.
mac.verify_slice(&sig).ok()?;
Some(user_id.to_string())
}
#[derive(Deserialize)]
pub struct LoginQuery {
pub next: Option<String>,
}
/// GET /login — show login form.
pub async fn login_page(
State(state): State<WebState>,
axum::extract::Query(query): axum::extract::Query<LoginQuery>,
) -> impl IntoResponse {
let token_enabled = !state.token.is_empty();
let oidc_enabled = state.oidc.is_some();
if !token_enabled && !oidc_enabled {
return Redirect::to("/").into_response();
}
let next_val = query.next.unwrap_or_else(|| "/".to_string());
let next_encoded = urlencoding::encode(&next_val);
let oidc_html = if oidc_enabled {
format!(
r#"<div class="divider"><span>OR</span></div>
<a href="/auth/login?next={}" class="btn-oidc">Log in with Authentik (SSO)</a>"#,
next_encoded
)
} else {
"".to_string()
};
let next_input = format!(r#"<input type="hidden" name="next" value="{}">"#, esc(&next_val));
let html = LOGIN_HTML
.replace("<!-- OIDC_PLACEHOLDER -->", &oidc_html)
.replace("<!-- NEXT_INPUT_PLACEHOLDER -->", &next_input);
Html(html).into_response()
}
#[derive(Deserialize)]
pub struct LoginForm {
password: String,
next: Option<String>,
}
/// POST /login — validate password, set session cookie.
pub async fn login_submit(
State(state): State<WebState>,
Form(form): Form<LoginForm>,
) -> impl IntoResponse {
if state.token.is_empty() {
return Redirect::to("/").into_response();
}
if form.password == *state.token {
let hash = token_hash(&state.token);
let cookie = format!(
"{}={}; HttpOnly; SameSite=Strict; Path=/; Max-Age=604800",
SESSION_COOKIE, hash
);
let redirect_to = form.next.as_deref().unwrap_or("/");
let mut headers = HeaderMap::new();
headers.insert(header::SET_COOKIE, cookie.parse().unwrap());
headers.insert(header::LOCATION, redirect_to.parse().unwrap());
(StatusCode::FOUND, headers, Body::empty()).into_response()
} else {
Html(LOGIN_ERROR_HTML).into_response()
}
}
/// GET /logout — clear session cookie and redirect to login.
pub async fn logout() -> impl IntoResponse {
let cookie = format!(
"{}=; HttpOnly; SameSite=Strict; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT",
SESSION_COOKIE
);
let mut headers = HeaderMap::new();
headers.insert(header::SET_COOKIE, cookie.parse().unwrap());
headers.insert(header::LOCATION, "/login".parse().unwrap());
(StatusCode::FOUND, headers, Body::empty()).into_response()
}
pub async fn oidc_init(
issuer: String,
client_id: String,
client_secret: String,
redirect: String,
session_secret_override: Option<String>,
) -> anyhow::Result<OidcState> {
let provider_metadata = CoreProviderMetadata::discover_async(
IssuerUrl::new(issuer)?,
async_http_client,
)
.await?;
let client = CoreClient::from_provider_metadata(
provider_metadata,
ClientId::new(client_id),
Some(ClientSecret::new(client_secret)),
)
.set_auth_type(openidconnect::AuthType::RequestBody)
.set_redirect_uri(RedirectUrl::new(redirect)?);
let session_secret = if let Some(s) = session_secret_override {
let mut b = s.into_bytes();
b.resize(32, 0); // Pad with zeros or truncate to exactly 32 bytes for HMAC-SHA256
b
} else {
let mut b = vec![0u8; 32];
rand::thread_rng().fill_bytes(&mut b);
b
};
Ok(OidcState {
client,
session_secret,
})
}
pub async fn oidc_login(
State(state): State<WebState>,
axum::extract::Query(query): axum::extract::Query<LoginQuery>,
req: Request,
) -> impl IntoResponse {
let oidc = match &state.oidc {
Some(o) => o,
None => return Redirect::to("/login").into_response(),
};
let (pkce_challenge, pkce_verifier) = PkceCodeChallenge::new_random_sha256();
let (auth_url, csrf_token, nonce) = oidc
.client
.authorize_url(
AuthenticationFlow::<CoreResponseType>::AuthorizationCode,
CsrfToken::new_random,
Nonce::new_random,
)
.add_scope(Scope::new("openid".to_string()))
.add_scope(Scope::new("profile".to_string()))
.set_pkce_challenge(pkce_challenge)
.url();
let next_url = query.next.unwrap_or_else(|| "/".to_string());
let cookie_val = format!("{}:{}:{}:{}", csrf_token.secret(), nonce.secret(), pkce_verifier.secret(), urlencoding::encode(&next_url));
// Determine if we are running behind an HTTPS proxy
let is_https = req.headers().get("x-forwarded-proto")
.and_then(|v| v.to_str().ok())
.map(|s| s == "https")
.unwrap_or(false);
// If HTTPS, use SameSite=None + Secure to fully support cross-domain POST redirects.
// Otherwise fallback to Lax for local testing.
let cookie_attrs = if is_https {
"SameSite=None; Secure"
} else {
"SameSite=Lax"
};
let cookie = format!("furumi_oidc_state={}; HttpOnly; {}; Path=/; Max-Age=3600", cookie_val, cookie_attrs);
let mut headers = HeaderMap::new();
headers.insert(header::SET_COOKIE, cookie.parse().unwrap());
headers.insert(header::LOCATION, auth_url.as_str().parse().unwrap());
headers.insert(header::CACHE_CONTROL, "no-store, no-cache, must-revalidate".parse().unwrap());
(StatusCode::FOUND, headers, Body::empty()).into_response()
}
#[derive(Deserialize)]
pub struct AuthCallbackQuery {
code: String,
state: String,
}
pub async fn oidc_callback(
State(state): State<WebState>,
axum::extract::Query(query): axum::extract::Query<AuthCallbackQuery>,
req: Request,
) -> impl IntoResponse {
let oidc = match &state.oidc {
Some(o) => o,
None => return Redirect::to("/login").into_response(),
};
let cookies = req
.headers()
.get(header::COOKIE)
.and_then(|v| v.to_str().ok())
.unwrap_or("");
let mut matching_val = None;
for c in cookies.split(';') {
let c = c.trim();
if let Some(val) = c.strip_prefix("furumi_oidc_state=") {
let parts: Vec<&str> = val.split(':').collect();
if parts.len() >= 3 && parts[0] == query.state {
matching_val = Some(val.to_string());
break;
}
}
}
let cookie_val = match matching_val {
Some(c) => c,
None => {
tracing::warn!("OIDC callback failed: Invalid state or missing valid cookie. Received cookies: {}", cookies);
return (StatusCode::BAD_REQUEST, "Invalid state").into_response();
}
};
let parts: Vec<&str> = cookie_val.split(':').collect();
let nonce = Nonce::new(parts[1].to_string());
let pkce_verifier = PkceCodeVerifier::new(parts[2].to_string());
let token_response = oidc
.client
.exchange_code(AuthorizationCode::new(query.code))
.set_pkce_verifier(pkce_verifier)
.request_async(async_http_client)
.await;
let token_response = match token_response {
Ok(tr) => tr,
Err(e) => {
tracing::error!("OIDC exchange code error: {:?}", e);
if let openidconnect::RequestTokenError::ServerResponse(err) = &e {
tracing::error!("OIDC Server returned error: {:?}", err);
}
return (StatusCode::INTERNAL_SERVER_ERROR, format!("OIDC error: {}", e)).into_response();
}
};
let id_token = match token_response.id_token() {
Some(t) => t,
None => return (StatusCode::INTERNAL_SERVER_ERROR, "No ID token").into_response(),
};
let claims = match id_token.claims(&oidc.client.id_token_verifier(), &nonce) {
Ok(c) => c,
Err(e) => return (StatusCode::UNAUTHORIZED, format!("Invalid ID token: {}", e)).into_response(),
};
let user_id = claims
.preferred_username()
.map(|u| u.to_string())
.or_else(|| claims.email().map(|e| e.to_string()))
.unwrap_or_else(|| claims.subject().to_string());
let session_val = generate_sso_cookie(&oidc.session_secret, &user_id);
let parts: Vec<&str> = cookie_val.split(':').collect();
let redirect_to = parts.get(3)
.and_then(|&s| urlencoding::decode(s).ok())
.map(|v| v.into_owned())
.unwrap_or_else(|| "/".to_string());
let redirect_to = if redirect_to.is_empty() { "/".to_string() } else { redirect_to };
let is_https = req.headers().get("x-forwarded-proto")
.and_then(|v| v.to_str().ok())
.map(|s| s == "https")
.unwrap_or(false);
let session_attrs = if is_https {
"SameSite=Strict; Secure"
} else {
"SameSite=Strict"
};
let session_cookie = format!("{}={}; HttpOnly; {}; Path=/; Max-Age=604800", SESSION_COOKIE, session_val, session_attrs);
let clear_state_cookie = "furumi_oidc_state=; HttpOnly; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT";
let mut headers = HeaderMap::new();
headers.insert(header::SET_COOKIE, session_cookie.parse().unwrap());
headers.append(header::SET_COOKIE, clear_state_cookie.parse().unwrap());
headers.insert(header::LOCATION, redirect_to.parse().unwrap());
(StatusCode::FOUND, headers, Body::empty()).into_response()
}
const LOGIN_HTML: &str = r#"<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Furumi Player — Login</title>
<style>
* { box-sizing: border-box; margin: 0; padding: 0; }
body {
min-height: 100vh;
display: flex; align-items: center; justify-content: center;
background: #0d0f14;
font-family: 'Inter', system-ui, sans-serif;
color: #e2e8f0;
}
.card {
background: #161b27;
border: 1px solid #2a3347;
border-radius: 16px;
padding: 2.5rem 3rem;
width: 360px;
box-shadow: 0 20px 60px rgba(0,0,0,0.5);
}
.logo { font-size: 1.8rem; font-weight: 700; color: #7c6af7; margin-bottom: 0.25rem; }
.subtitle { font-size: 0.85rem; color: #64748b; margin-bottom: 2rem; }
label { display: block; font-size: 0.8rem; color: #94a3b8; margin-bottom: 0.4rem; }
input[type=password] {
width: 100%; padding: 0.6rem 0.8rem;
background: #0d0f14; border: 1px solid #2a3347; border-radius: 8px;
color: #e2e8f0; font-size: 0.95rem; outline: none;
transition: border-color 0.2s;
}
input[type=password]:focus { border-color: #7c6af7; }
button {
margin-top: 1.2rem; width: 100%; padding: 0.65rem;
background: #7c6af7; border: none; border-radius: 8px;
color: #fff; font-size: 0.95rem; font-weight: 600; cursor: pointer;
transition: background 0.2s;
}
button:hover { background: #6b58e8; }
.btn-oidc {
display: block; width: 100%; padding: 0.65rem; text-align: center;
background: #2a3347; border: 1px solid #3d4a66; border-radius: 8px;
color: #e2e8f0; font-size: 0.95rem; font-weight: 600; text-decoration: none;
transition: background 0.2s;
}
.btn-oidc:hover { background: #3d4a66; }
.divider {
display: flex; align-items: center; text-align: center; margin: 1.5rem 0;
color: #64748b; font-size: 0.75rem;
}
.divider::before, .divider::after {
content: ''; flex: 1; border-bottom: 1px solid #2a3347;
}
.divider span { padding: 0 10px; }
</style>
</head>
<body>
<div class="card">
<div class="logo">🎵 Furumi</div>
<div class="subtitle">Enter access token to continue</div>
<form method="POST" action="/login">
<!-- NEXT_INPUT_PLACEHOLDER -->
<label for="password">Access Token</label>
<input type="password" id="password" name="password" autofocus autocomplete="current-password">
<button type="submit">Sign In</button>
</form>
<!-- OIDC_PLACEHOLDER -->
</div>
</body>
</html>"#;
const LOGIN_ERROR_HTML: &str = r#"<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Furumi Player — Login</title>
<style>
* { box-sizing: border-box; margin: 0; padding: 0; }
body {
min-height: 100vh;
display: flex; align-items: center; justify-content: center;
background: #0d0f14;
font-family: 'Inter', system-ui, sans-serif;
color: #e2e8f0;
}
.card {
background: #161b27;
border: 1px solid #2a3347;
border-radius: 16px;
padding: 2.5rem 3rem;
width: 360px;
box-shadow: 0 20px 60px rgba(0,0,0,0.5);
}
.logo { font-size: 1.8rem; font-weight: 700; color: #7c6af7; margin-bottom: 0.25rem; }
.subtitle { font-size: 0.85rem; color: #64748b; margin-bottom: 2rem; }
.error { color: #f87171; font-size: 0.85rem; margin-bottom: 1rem; }
label { display: block; font-size: 0.8rem; color: #94a3b8; margin-bottom: 0.4rem; }
input[type=password] {
width: 100%; padding: 0.6rem 0.8rem;
background: #0d0f14; border: 1px solid #f87171; border-radius: 8px;
color: #e2e8f0; font-size: 0.95rem; outline: none;
}
button {
margin-top: 1.2rem; width: 100%; padding: 0.65rem;
background: #7c6af7; border: none; border-radius: 8px;
color: #fff; font-size: 0.95rem; font-weight: 600; cursor: pointer;
}
button:hover { background: #6b58e8; }
</style>
</head>
<body>
<div class="card">
<div class="logo">🎵 Furumi</div>
<div class="subtitle">Enter access token to continue</div>
<p class="error">❌ Invalid token. Please try again.</p>
<form method="POST" action="/login">
<!-- NEXT_INPUT_PLACEHOLDER -->
<label for="password">Access Token</label>
<input type="password" id="password" name="password" autofocus>
<button type="submit">Sign In</button>
</form>
</div>
</body>
</html>"#;


@@ -0,0 +1,132 @@
use axum::{
extract::{Query, State},
http::StatusCode,
response::{IntoResponse, Json},
};
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
use crate::security::sanitize_path;
use super::WebState;
#[derive(Deserialize)]
pub struct BrowseQuery {
#[serde(default)]
pub path: String,
}
#[derive(Serialize)]
pub struct BrowseResponse {
pub path: String,
pub entries: Vec<Entry>,
}
#[derive(Serialize)]
pub struct Entry {
pub name: String,
#[serde(rename = "type")]
pub kind: EntryKind,
#[serde(skip_serializing_if = "Option::is_none")]
pub size: Option<u64>,
}
#[derive(Serialize)]
#[serde(rename_all = "lowercase")]
pub enum EntryKind {
File,
Dir,
}
pub async fn handler(
State(state): State<WebState>,
Query(query): Query<BrowseQuery>,
) -> impl IntoResponse {
let safe = match sanitize_path(&query.path) {
Ok(p) => p,
Err(_) => {
return (StatusCode::BAD_REQUEST, Json(serde_json::json!({"error": "invalid path"}))).into_response();
}
};
let dir_path: PathBuf = state.root.join(&safe);
let read_dir = match tokio::fs::read_dir(&dir_path).await {
Ok(rd) => rd,
Err(e) => {
let status = if e.kind() == std::io::ErrorKind::NotFound {
StatusCode::NOT_FOUND
} else {
StatusCode::INTERNAL_SERVER_ERROR
};
return (status, Json(serde_json::json!({"error": e.to_string()}))).into_response();
}
};
let mut entries: Vec<Entry> = Vec::new();
let mut rd = read_dir;
loop {
match rd.next_entry().await {
Ok(Some(entry)) => {
let name = entry.file_name().to_string_lossy().into_owned();
// Skip hidden files
if name.starts_with('.') {
continue;
}
let meta = match entry.metadata().await {
Ok(m) => m,
Err(_) => continue,
};
if meta.is_dir() {
entries.push(Entry { name, kind: EntryKind::Dir, size: None });
} else if meta.is_file() {
// Only expose audio files
if is_audio_file(&name) {
entries.push(Entry {
name,
kind: EntryKind::File,
size: Some(meta.len()),
});
}
}
}
Ok(None) => break,
Err(e) => {
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(serde_json::json!({"error": e.to_string()})),
)
.into_response();
}
}
}
// Sort: dirs first, then files; alphabetically within each group
entries.sort_by(|a, b| {
let a_dir = matches!(a.kind, EntryKind::Dir);
let b_dir = matches!(b.kind, EntryKind::Dir);
b_dir.cmp(&a_dir).then(a.name.to_lowercase().cmp(&b.name.to_lowercase()))
});
let response = BrowseResponse {
path: safe,
entries,
};
(StatusCode::OK, Json(response)).into_response()
}
/// Whitelist of audio extensions served via the web player.
pub fn is_audio_file(name: &str) -> bool {
// `rsplit_once` yields None when there is no dot, so an extensionless
// name can never match the whitelist.
let ext = name.rsplit_once('.').map(|(_, e)| e).unwrap_or("").to_lowercase();
matches!(
ext.as_str(),
"mp3" | "flac" | "ogg" | "opus" | "aac" | "m4a" | "wav" | "ape" | "wv" | "wma" | "tta" | "aiff" | "aif"
)
}
/// Returns true if the format needs transcoding (not natively supported by browsers).
pub fn needs_transcode(name: &str) -> bool {
let ext = name.rsplit_once('.').map(|(_, e)| e).unwrap_or("").to_lowercase();
matches!(ext.as_str(), "ape" | "wv" | "wma" | "tta" | "aiff" | "aif")
}
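A quick check of the whitelist behavior, as a self-contained copy of the extension logic using `rsplit_once` so that a name without a dot never matches:

```rust
// Only text after a real dot counts as the extension.
fn ext_of(name: &str) -> String {
    name.rsplit_once('.').map(|(_, e)| e).unwrap_or("").to_lowercase()
}

fn is_audio_file(name: &str) -> bool {
    matches!(
        ext_of(name).as_str(),
        "mp3" | "flac" | "ogg" | "opus" | "aac" | "m4a" | "wav"
            | "ape" | "wv" | "wma" | "tta" | "aiff" | "aif"
    )
}

fn main() {
    assert!(is_audio_file("song.FLAC")); // case-insensitive match
    assert!(!is_audio_file("cover.jpg")); // not in the whitelist
    assert!(!is_audio_file("flac")); // no dot means no extension
}
```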


@@ -0,0 +1,204 @@
use axum::{
extract::{Path, State},
http::StatusCode,
response::{IntoResponse, Json},
};
use base64::{Engine, engine::general_purpose::STANDARD as BASE64};
use serde::Serialize;
use symphonia::core::{
codecs::CODEC_TYPE_NULL,
formats::FormatOptions,
io::MediaSourceStream,
meta::{MetadataOptions, StandardTagKey},
probe::Hint,
};
use crate::security::sanitize_path;
use super::WebState;
#[derive(Serialize)]
pub struct MetaResponse {
pub title: Option<String>,
pub artist: Option<String>,
pub album: Option<String>,
pub track: Option<u32>,
pub year: Option<u32>,
pub duration_secs: Option<f64>,
pub cover_base64: Option<String>, // "data:image/jpeg;base64,..."
}
pub async fn handler(
State(state): State<WebState>,
Path(path): Path<String>,
) -> impl IntoResponse {
let safe = match sanitize_path(&path) {
Ok(p) => p,
Err(_) => return (StatusCode::BAD_REQUEST, Json(serde_json::json!({"error": "invalid path"}))).into_response(),
};
let file_path = state.root.join(&safe);
let filename = file_path
.file_name()
.and_then(|n| n.to_str())
.unwrap_or("")
.to_owned();
let meta = tokio::task::spawn_blocking(move || read_meta(file_path, &filename)).await;
match meta {
Ok(Ok(m)) => (StatusCode::OK, Json(m)).into_response(),
Ok(Err(e)) => (StatusCode::INTERNAL_SERVER_ERROR, Json(serde_json::json!({"error": e.to_string()}))).into_response(),
Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, Json(serde_json::json!({"error": e.to_string()}))).into_response(),
}
}
fn read_meta(file_path: std::path::PathBuf, filename: &str) -> anyhow::Result<MetaResponse> {
let file = std::fs::File::open(&file_path)?;
let mss = MediaSourceStream::new(Box::new(file), Default::default());
let mut hint = Hint::new();
if let Some(ext) = file_path.extension().and_then(|e| e.to_str()) {
hint.with_extension(ext);
}
let mut probed = symphonia::default::get_probe().format(
&hint,
mss,
&FormatOptions { enable_gapless: false, ..Default::default() },
&MetadataOptions::default(),
)?;
// Extract tags from container-level metadata
let mut title: Option<String> = None;
let mut artist: Option<String> = None;
let mut album: Option<String> = None;
let mut track: Option<u32> = None;
let mut year: Option<u32> = None;
let mut cover_data: Option<(Vec<u8>, String)> = None;
// Check metadata side-data (e.g., ID3 tags probed before format)
if let Some(rev) = probed.metadata.get().as_ref().and_then(|m| m.current()) {
extract_tags(rev.tags(), rev.visuals(), &mut title, &mut artist, &mut album, &mut track, &mut year, &mut cover_data);
}
// Also check format-embedded metadata
if let Some(rev) = probed.format.metadata().current() {
if title.is_none() {
extract_tags(rev.tags(), rev.visuals(), &mut title, &mut artist, &mut album, &mut track, &mut year, &mut cover_data);
}
}
// If no title from tags, use filename without extension
if title.is_none() {
title = Some(
std::path::Path::new(filename)
.file_stem()
.and_then(|s| s.to_str())
.unwrap_or(filename)
.to_owned(),
);
}
// Estimate duration from track time_base + n_frames
let duration_secs = probed
.format
.tracks()
.iter()
.find(|t| t.codec_params.codec != CODEC_TYPE_NULL)
.and_then(|t| {
let n_frames = t.codec_params.n_frames?;
let tb = t.codec_params.time_base?;
Some(n_frames as f64 * tb.numer as f64 / tb.denom as f64)
});
let cover_base64 = cover_data.map(|(data, mime)| {
format!("data:{};base64,{}", mime, BASE64.encode(&data))
});
Ok(MetaResponse {
title,
artist,
album,
track,
year,
duration_secs,
cover_base64,
})
}
fn extract_tags(
tags: &[symphonia::core::meta::Tag],
visuals: &[symphonia::core::meta::Visual],
title: &mut Option<String>,
artist: &mut Option<String>,
album: &mut Option<String>,
track: &mut Option<u32>,
year: &mut Option<u32>,
cover: &mut Option<(Vec<u8>, String)>,
) {
for tag in tags {
let value = fix_encoding(tag.value.to_string());
if let Some(key) = tag.std_key {
match key {
StandardTagKey::TrackTitle => {
*title = Some(value);
}
StandardTagKey::Artist | StandardTagKey::Performer => {
if artist.is_none() {
*artist = Some(value);
}
}
StandardTagKey::Album => {
*album = Some(value);
}
StandardTagKey::TrackNumber => {
if track.is_none() {
*track = value.parse().ok();
}
}
StandardTagKey::Date | StandardTagKey::OriginalDate => {
if year.is_none() {
// Parse the first four characters as the year; collecting chars
// avoids panicking on a non-ASCII UTF-8 byte boundary.
*year = value.chars().take(4).collect::<String>().parse().ok();
}
}
_ => {}
}
}
}
if cover.is_none() {
if let Some(visual) = visuals.first() {
let mime = visual.media_type.clone();
*cover = Some((visual.data.to_vec(), mime));
}
}
}
/// Heuristic to fix mojibake (CP1251 bytes interpreted as Latin-1/Windows-1252)
fn fix_encoding(s: String) -> String {
// If it's already a valid UTF-8 string that doesn't look like mojibake, return it.
// Mojibake looks like characters from Latin-1 Supplement (0xC0-0xFF)
// where they should be Cyrillic.
let bytes: Vec<u8> = s.chars().map(|c| c as u32).filter(|&c| c <= 255).map(|c| c as u8).collect();
// If the length is different, it means there were characters > 255, so it's not simple Latin-1 mojibake.
if bytes.len() != s.chars().count() {
return s;
}
// Check if it's likely CP1251. Russian characters in CP1251 are 0xC0-0xFF.
// In Latin-1 these are characters like À-ÿ.
let has_mojibake = bytes.iter().any(|&b| b >= 0xC0);
if !has_mojibake {
return s;
}
let (decoded, _, errors) = encoding_rs::WINDOWS_1251.decode(&bytes);
if errors {
return s;
}
decoded.into_owned()
}
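The detection half of this heuristic (every char fits in one byte and at least one byte falls in the 0xC0-0xFF range, where CP1251 places the Cyrillic alphabet) is pure std and can be exercised without `encoding_rs`. A minimal sketch, with a hypothetical helper name:

```rust
/// Returns true when `s` looks like CP1251 text mis-decoded as Latin-1:
/// every char fits in a single byte and at least one lands in 0xC0-0xFF.
fn looks_like_cp1251_mojibake(s: &str) -> bool {
    let mut saw_high = false;
    for c in s.chars() {
        let cp = c as u32;
        if cp > 0xFF {
            return false; // genuine non-Latin-1 text, not simple mojibake
        }
        if cp >= 0xC0 {
            saw_high = true;
        }
    }
    saw_high
}

fn main() {
    // "Привет" encoded as CP1251 then mis-read as Latin-1 becomes "Ïðèâåò".
    assert!(looks_like_cp1251_mojibake("Ïðèâåò"));
    // Plain ASCII and genuine Cyrillic pass through untouched.
    assert!(!looks_like_cp1251_mojibake("Hello"));
    assert!(!looks_like_cp1251_mojibake("Привет"));
    println!("ok");
}
```

The actual re-decode still needs `encoding_rs::WINDOWS_1251`, as in `fix_encoding` above; this sketch only covers the "should we even try" gate.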


@@ -0,0 +1,66 @@
pub mod auth;
pub mod browse;
pub mod meta;
pub mod stream;
pub mod transcoder;
use std::path::PathBuf;
use std::sync::Arc;
use axum::{
Router,
middleware,
routing::get,
};
/// Shared state passed to all web handlers.
#[derive(Clone)]
pub struct WebState {
pub root: Arc<PathBuf>,
pub token: Arc<String>,
pub oidc: Option<Arc<OidcState>>,
}
pub struct OidcState {
pub client: openidconnect::core::CoreClient,
pub session_secret: Vec<u8>,
}
/// Build the axum Router for the web player.
pub fn build_router(root: PathBuf, token: String, oidc: Option<Arc<OidcState>>) -> Router {
let state = WebState {
root: Arc::new(root),
token: Arc::new(token),
oidc,
};
let api = Router::new()
.route("/browse", get(browse::handler))
.route("/stream/*path", get(stream::handler))
.route("/meta/*path", get(meta::handler));
let authed_routes = Router::new()
.route("/", get(player_html))
.nest("/api", api)
.route_layer(middleware::from_fn_with_state(state.clone(), auth::require_auth));
Router::new()
.route("/login", get(auth::login_page).post(auth::login_submit))
.route("/logout", get(auth::logout))
.route("/auth/login", get(auth::oidc_login))
.route("/auth/callback", get(auth::oidc_callback))
.merge(authed_routes)
.with_state(state)
}
#[derive(Clone)]
pub struct AuthUserInfo(pub String);
async fn player_html(
axum::extract::Extension(user_info): axum::extract::Extension<AuthUserInfo>,
) -> axum::response::Html<String> {
let html = include_str!("player.html")
.replace("<!-- USERNAME_PLACEHOLDER -->", &user_info.0)
.replace("<!-- VERSION_PLACEHOLDER -->", option_env!("FURUMI_VERSION").unwrap_or(env!("CARGO_PKG_VERSION")));
axum::response::Html(html)
}
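`player_html` injects runtime values by rewriting HTML comment markers in the bundled template. A std-only sketch of that substitution, with an illustrative template string standing in for the real `player.html`:

```rust
/// Substitute the comment placeholders used by `player_html`.
fn render(template: &str, username: &str, version: &str) -> String {
    template
        .replace("<!-- USERNAME_PLACEHOLDER -->", username)
        .replace("<!-- VERSION_PLACEHOLDER -->", version)
}

fn main() {
    // Stand-in template; the real one is embedded via include_str!.
    let tpl = "<span><!-- USERNAME_PLACEHOLDER --></span> v<!-- VERSION_PLACEHOLDER -->";
    let html = render(tpl, "alice", "0.3.4");
    assert_eq!(html, "<span>alice</span> v0.3.4");
    println!("{html}");
}
```

Comment markers keep the template valid HTML when previewed without the server.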

File diff suppressed because it is too large.


@@ -0,0 +1,171 @@
use axum::{
body::Body,
extract::{Path, Query, State},
http::{HeaderMap, HeaderValue, StatusCode, header},
response::{IntoResponse, Response},
};
use serde::Deserialize;
use tokio::io::{AsyncReadExt, AsyncSeekExt};
use crate::security::sanitize_path;
use super::{
WebState,
browse::{is_audio_file, needs_transcode},
};
#[derive(Deserialize)]
pub struct StreamQuery {
#[serde(default)]
pub transcode: Option<String>,
}
pub async fn handler(
State(state): State<WebState>,
Path(path): Path<String>,
Query(query): Query<StreamQuery>,
headers: HeaderMap,
) -> impl IntoResponse {
let safe = match sanitize_path(&path) {
Ok(p) => p,
Err(_) => return bad_request("invalid path"),
};
let file_path = state.root.join(&safe);
let filename = file_path
.file_name()
.and_then(|n| n.to_str())
.unwrap_or("")
.to_owned();
if !is_audio_file(&filename) {
return (StatusCode::FORBIDDEN, "not an audio file").into_response();
}
let force_transcode = query.transcode.as_deref() == Some("1");
if force_transcode || needs_transcode(&filename) {
return stream_transcoded(file_path).await;
}
stream_native(file_path, &filename, &headers).await
}
/// Stream a file as-is with Range support.
async fn stream_native(file_path: std::path::PathBuf, filename: &str, req_headers: &HeaderMap) -> Response {
let mut file = match tokio::fs::File::open(&file_path).await {
Ok(f) => f,
Err(e) => {
let status = if e.kind() == std::io::ErrorKind::NotFound {
StatusCode::NOT_FOUND
} else {
StatusCode::INTERNAL_SERVER_ERROR
};
return (status, e.to_string()).into_response();
}
};
let file_size = match file.metadata().await {
Ok(m) => m.len(),
Err(e) => return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
};
let content_type = guess_content_type(filename);
// Parse Range header
let range_header = req_headers
.get(header::RANGE)
.and_then(|v| v.to_str().ok())
.and_then(parse_range);
    if let Some((start, end)) = range_header {
        // Guard the empty-file case before computing `file_size - 1`.
        if file_size == 0 || start >= file_size {
            return (StatusCode::RANGE_NOT_SATISFIABLE, "invalid range").into_response();
        }
        let end = end.unwrap_or(file_size - 1).min(file_size - 1);
        if start > end {
            return (StatusCode::RANGE_NOT_SATISFIABLE, "invalid range").into_response();
        }
let length = end - start + 1;
if let Err(e) = file.seek(std::io::SeekFrom::Start(start)).await {
return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response();
}
let limited = file.take(length);
let stream = tokio_util::io::ReaderStream::new(limited);
let body = Body::from_stream(stream);
let mut resp_headers = HeaderMap::new();
resp_headers.insert(header::CONTENT_TYPE, content_type.parse().unwrap());
resp_headers.insert(header::ACCEPT_RANGES, HeaderValue::from_static("bytes"));
resp_headers.insert(header::CONTENT_LENGTH, length.to_string().parse().unwrap());
resp_headers.insert(
header::CONTENT_RANGE,
format!("bytes {}-{}/{}", start, end, file_size).parse().unwrap(),
);
(StatusCode::PARTIAL_CONTENT, resp_headers, body).into_response()
} else {
// Full file
let stream = tokio_util::io::ReaderStream::new(file);
let body = Body::from_stream(stream);
let mut resp_headers = HeaderMap::new();
resp_headers.insert(header::CONTENT_TYPE, content_type.parse().unwrap());
resp_headers.insert(header::ACCEPT_RANGES, HeaderValue::from_static("bytes"));
resp_headers.insert(header::CONTENT_LENGTH, file_size.to_string().parse().unwrap());
(StatusCode::OK, resp_headers, body).into_response()
}
}
/// Stream a transcoded (Ogg/Opus) version of the file.
async fn stream_transcoded(file_path: std::path::PathBuf) -> Response {
let ogg_data = match tokio::task::spawn_blocking(move || {
super::transcoder::transcode_to_ogg_opus(file_path)
})
.await
{
Ok(Ok(data)) => data,
Ok(Err(e)) => {
return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response();
}
Err(e) => {
return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response();
}
};
let len = ogg_data.len();
let mut resp_headers = HeaderMap::new();
resp_headers.insert(header::CONTENT_TYPE, "audio/ogg".parse().unwrap());
resp_headers.insert(header::CONTENT_LENGTH, len.to_string().parse().unwrap());
resp_headers.insert(header::ACCEPT_RANGES, HeaderValue::from_static("none"));
(StatusCode::OK, resp_headers, Body::from(ogg_data)).into_response()
}
/// Parse `Range: bytes=<start>-<end>` header.
fn parse_range(s: &str) -> Option<(u64, Option<u64>)> {
let s = s.strip_prefix("bytes=")?;
let mut parts = s.splitn(2, '-');
let start: u64 = parts.next()?.parse().ok()?;
let end: Option<u64> = parts.next().and_then(|e| {
if e.is_empty() { None } else { e.parse().ok() }
});
Some((start, end))
}
fn guess_content_type(filename: &str) -> &'static str {
let ext = filename.rsplit('.').next().unwrap_or("").to_lowercase();
match ext.as_str() {
"mp3" => "audio/mpeg",
"flac" => "audio/flac",
"ogg" => "audio/ogg",
"opus" => "audio/ogg; codecs=opus",
"aac" => "audio/aac",
"m4a" => "audio/mp4",
"wav" => "audio/wav",
_ => "application/octet-stream",
}
}
fn bad_request(msg: &'static str) -> Response {
(StatusCode::BAD_REQUEST, msg).into_response()
}
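The `parse_range`/clamp logic above can be checked standalone: parse `bytes=<start>-<end>`, then resolve the open end against the file size and reject unsatisfiable ranges, mirroring the handler's checks. `resolve` is a hypothetical name for the clamping step:

```rust
/// Parse a `Range: bytes=<start>-<end>` header value.
fn parse_range(s: &str) -> Option<(u64, Option<u64>)> {
    let s = s.strip_prefix("bytes=")?;
    let mut parts = s.splitn(2, '-');
    let start: u64 = parts.next()?.parse().ok()?;
    let end = parts
        .next()
        .and_then(|e| if e.is_empty() { None } else { e.parse().ok() });
    Some((start, end))
}

/// Resolve a parsed range against a file size; None maps to a 416 response.
fn resolve(range: (u64, Option<u64>), file_size: u64) -> Option<(u64, u64, u64)> {
    if file_size == 0 {
        return None;
    }
    let (start, end) = range;
    let end = end.unwrap_or(file_size - 1).min(file_size - 1);
    if start > end || start >= file_size {
        return None;
    }
    Some((start, end, end - start + 1)) // (start, end, Content-Length)
}

fn main() {
    assert_eq!(parse_range("bytes=0-1023"), Some((0, Some(1023))));
    assert_eq!(parse_range("bytes=500-"), Some((500, None)));
    // Open-ended range over a 1000-byte file covers bytes 500..=999.
    assert_eq!(resolve((500, None), 1000), Some((500, 999, 500)));
    // A range starting at or past EOF is unsatisfiable.
    assert_eq!(resolve((1000, None), 1000), None);
}
```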


@@ -0,0 +1,244 @@
//! Symphonia-based audio transcoder: decodes any supported format and
//! encodes it to an in-memory Ogg/Opus stream.
//!
//! The heavy decode/encode work is synchronous and intended to run inside
//! `spawn_blocking`; the complete Ogg output is returned as one `Vec<u8>`.
use std::path::PathBuf;
use std::io::Cursor;
use anyhow::{anyhow, Result};
use symphonia::core::{
audio::{AudioBufferRef, Signal},
codecs::{DecoderOptions, CODEC_TYPE_NULL},
errors::Error as SymphoniaError,
formats::FormatOptions,
io::MediaSourceStream,
meta::MetadataOptions,
probe::Hint,
};
use ogg::writing::PacketWriter;
use opus::{Application, Channels, Encoder};
/// Transcode an audio file at `path` into an Ogg/Opus byte stream.
/// Returns `Vec<u8>` with the full Ogg/Opus file (suitable for streaming/download).
///
/// This is intentionally synchronous (for use inside `spawn_blocking`).
pub fn transcode_to_ogg_opus(path: PathBuf) -> Result<Vec<u8>> {
// ---- Open and probe the source ----
let file = std::fs::File::open(&path)?;
let mss = MediaSourceStream::new(Box::new(file), Default::default());
let mut hint = Hint::new();
if let Some(ext) = path.extension().and_then(|e| e.to_str()) {
hint.with_extension(ext);
}
let probed = symphonia::default::get_probe()
.format(&hint, mss, &FormatOptions::default(), &MetadataOptions::default())
.map_err(|e| anyhow!("probe failed: {e}"))?;
let mut format = probed.format;
// Find the default audio track
let track = format
.tracks()
.iter()
.find(|t| t.codec_params.codec != CODEC_TYPE_NULL)
.ok_or_else(|| anyhow!("no audio track found"))?
.clone();
let track_id = track.id;
let codec_params = &track.codec_params;
let sample_rate = codec_params.sample_rate.unwrap_or(44100);
let n_channels = codec_params.channels.map(|c| c.count()).unwrap_or(2);
    // Opus supports only 1 or 2 channels; sources with more channels are
    // reduced to stereo by keeping the first two channels (see `collect_samples`).
    let opus_channels = if n_channels == 1 { Channels::Mono } else { Channels::Stereo };
    let opus_ch_count = if n_channels == 1 { 1usize } else { 2 };
    // The Opus encoder accepts only 8/12/16/24/48 kHz input. If the source
    // rate matches one of those, encode at the source rate; otherwise fall
    // back to 48 kHz. Note: no resampling is performed, so a source at an
    // unsupported rate (e.g. 44.1 kHz) is fed to the encoder as-is and will
    // play slightly fast until a resampler is added.
    let opus_sample_rate = if [8000u32, 12000, 16000, 24000, 48000].contains(&sample_rate) {
        sample_rate
    } else {
        48000
    };
let mut encoder = Encoder::new(opus_sample_rate, opus_channels, Application::Audio)
.map_err(|e| anyhow!("opus encoder init: {e}"))?;
// Typical Opus frame = 20ms
let frame_size = (opus_sample_rate as usize * 20) / 1000; // samples per channel per frame
let mut decoder = symphonia::default::get_codecs()
.make(codec_params, &DecoderOptions::default())
.map_err(|e| anyhow!("decoder init: {e}"))?;
// ---- Ogg output buffer ----
let mut ogg_buf: Vec<u8> = Vec::with_capacity(4 * 1024 * 1024);
{
let cursor = Cursor::new(&mut ogg_buf);
let mut pkt_writer = PacketWriter::new(cursor);
// Write Opus header packet (stream serial = 1)
let serial: u32 = 1;
let opus_head = build_opus_head(opus_ch_count as u8, opus_sample_rate, 0);
pkt_writer.write_packet(opus_head, serial, ogg::writing::PacketWriteEndInfo::EndPage, 0)?;
// Write Opus tags packet (empty)
let opus_tags = build_opus_tags();
pkt_writer.write_packet(opus_tags, serial, ogg::writing::PacketWriteEndInfo::EndPage, 0)?;
let mut sample_buf: Vec<f32> = Vec::new();
let mut granule_pos: u64 = 0;
loop {
let packet = match format.next_packet() {
Ok(p) => p,
Err(SymphoniaError::IoError(e)) if e.kind() == std::io::ErrorKind::UnexpectedEof => break,
Err(SymphoniaError::ResetRequired) => {
decoder.reset();
continue;
}
Err(e) => return Err(anyhow!("format error: {e}")),
};
if packet.track_id() != track_id {
continue;
}
match decoder.decode(&packet) {
Ok(decoded) => {
collect_samples(&decoded, opus_ch_count, &mut sample_buf);
}
Err(SymphoniaError::DecodeError(_)) => continue,
Err(e) => return Err(anyhow!("decode error: {e}")),
}
// Encode complete frames from sample_buf
while sample_buf.len() >= frame_size * opus_ch_count {
let frame: Vec<f32> = sample_buf.drain(..frame_size * opus_ch_count).collect();
let mut out = vec![0u8; 4000];
let encoded_len = encoder
.encode_float(&frame, &mut out)
.map_err(|e| anyhow!("opus encode: {e}"))?;
out.truncate(encoded_len);
                // Ogg Opus granule positions count samples at a fixed 48 kHz
                // rate (RFC 7845 §4), regardless of the encoder input rate.
                granule_pos += (frame_size as u64 * 48000) / opus_sample_rate as u64;
pkt_writer.write_packet(
out,
serial,
ogg::writing::PacketWriteEndInfo::NormalPacket,
granule_pos,
)?;
}
}
// Encode remaining samples (partial frame — pad with silence)
if !sample_buf.is_empty() {
let needed = frame_size * opus_ch_count;
sample_buf.resize(needed, 0.0);
let mut out = vec![0u8; 4000];
let encoded_len = encoder
.encode_float(&sample_buf, &mut out)
.map_err(|e| anyhow!("opus encode final: {e}"))?;
out.truncate(encoded_len);
            granule_pos += (frame_size as u64 * 48000) / opus_sample_rate as u64;
pkt_writer.write_packet(
out,
serial,
ogg::writing::PacketWriteEndInfo::EndStream,
granule_pos,
)?;
}
}
Ok(ogg_buf)
}
/// Collect PCM samples from a symphonia AudioBufferRef into a flat f32 vec.
/// Only the first one or two source channels are read; `interleave_channels`
/// then mixes or duplicates them to match `target_channels` (1 or 2).
fn collect_samples(decoded: &AudioBufferRef<'_>, target_channels: usize, out: &mut Vec<f32>) {
match decoded {
AudioBufferRef::F32(buf) => {
interleave_channels(buf.chan(0), if buf.spec().channels.count() > 1 { Some(buf.chan(1)) } else { None }, target_channels, out);
}
AudioBufferRef::S16(buf) => {
let ch0: Vec<f32> = buf.chan(0).iter().map(|&s| s as f32 / 32768.0).collect();
let ch1 = if buf.spec().channels.count() > 1 {
Some(buf.chan(1).iter().map(|&s| s as f32 / 32768.0).collect::<Vec<_>>())
} else {
None
};
interleave_channels(&ch0, ch1.as_deref(), target_channels, out);
}
AudioBufferRef::S32(buf) => {
let ch0: Vec<f32> = buf.chan(0).iter().map(|&s| s as f32 / 2147483648.0).collect();
let ch1 = if buf.spec().channels.count() > 1 {
Some(buf.chan(1).iter().map(|&s| s as f32 / 2147483648.0).collect::<Vec<_>>())
} else {
None
};
interleave_channels(&ch0, ch1.as_deref(), target_channels, out);
}
AudioBufferRef::U8(buf) => {
let ch0: Vec<f32> = buf.chan(0).iter().map(|&s| (s as f32 - 128.0) / 128.0).collect();
let ch1 = if buf.spec().channels.count() > 1 {
Some(buf.chan(1).iter().map(|&s| (s as f32 - 128.0) / 128.0).collect::<Vec<_>>())
} else {
None
};
interleave_channels(&ch0, ch1.as_deref(), target_channels, out);
}
        _ => {
            // Other sample formats (e.g. S24, F64) are not handled here;
            // their packets are silently skipped.
        }
}
}
fn interleave_channels(ch0: &[f32], ch1: Option<&[f32]>, target_channels: usize, out: &mut Vec<f32>) {
let len = ch0.len();
if target_channels == 1 {
if let Some(c1) = ch1 {
// Mix down to mono
out.extend(ch0.iter().zip(c1.iter()).map(|(l, r)| (l + r) * 0.5));
} else {
out.extend_from_slice(ch0);
}
} else {
// Stereo interleaved
let c1 = ch1.unwrap_or(ch0);
for i in 0..len {
out.push(ch0[i]);
out.push(c1[i]);
}
}
}
/// Build OpusHead binary packet (RFC 7845).
fn build_opus_head(channels: u8, sample_rate: u32, pre_skip: u16) -> Vec<u8> {
let mut v = Vec::with_capacity(19);
v.extend_from_slice(b"OpusHead");
v.push(1); // version
v.push(channels);
v.extend_from_slice(&pre_skip.to_le_bytes());
v.extend_from_slice(&sample_rate.to_le_bytes());
v.extend_from_slice(&0u16.to_le_bytes()); // output gain
v.push(0); // channel mapping family
v
}
/// Build OpusTags binary packet with minimal vendor string.
fn build_opus_tags() -> Vec<u8> {
let vendor = b"furumi-server";
let mut v = Vec::new();
v.extend_from_slice(b"OpusTags");
v.extend_from_slice(&(vendor.len() as u32).to_le_bytes());
v.extend_from_slice(vendor);
v.extend_from_slice(&0u32.to_le_bytes()); // user comment list length = 0
v
}


@@ -0,0 +1,27 @@
[package]
name = "furumi-web-player"
version = "0.3.4"
edition = "2024"
[dependencies]
anyhow = "1.0"
axum = { version = "0.7", features = ["tokio", "macros"] }
clap = { version = "4.5", features = ["derive", "env"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
sqlx = { version = "0.8", features = ["runtime-tokio-rustls", "postgres", "chrono", "uuid", "migrate"] }
tokio = { version = "1.50", features = ["full"] }
tower = { version = "0.4", features = ["util"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
mime_guess = "2.0"
symphonia = { version = "0.5", default-features = false, features = ["mp3", "aac", "flac", "vorbis", "wav", "alac", "adpcm", "pcm", "mpa", "isomp4", "ogg", "aiff", "mkv"] }
tokio-util = { version = "0.7", features = ["io"] }
openidconnect = "3.4"
reqwest = { version = "0.12", default-features = false, features = ["rustls-tls"] }
sha2 = "0.10"
hmac = "0.12"
base64 = "0.22"
rand = "0.8"
urlencoding = "2.1.3"
rustls = { version = "0.23", features = ["ring"] }

furumi-web-player/src/db.rs Normal file

@@ -0,0 +1,229 @@
use serde::Serialize;
use sqlx::PgPool;
use sqlx::postgres::PgPoolOptions;
pub async fn connect(database_url: &str) -> Result<PgPool, sqlx::Error> {
PgPoolOptions::new()
.max_connections(10)
.connect(database_url)
.await
}
// --- Models ---
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct ArtistListItem {
pub slug: String,
pub name: String,
pub album_count: i64,
pub track_count: i64,
}
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct ArtistDetail {
pub slug: String,
pub name: String,
}
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct AlbumListItem {
pub slug: String,
pub name: String,
pub year: Option<i32>,
pub track_count: i64,
pub has_cover: bool,
}
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct TrackListItem {
pub slug: String,
pub title: String,
pub track_number: Option<i32>,
pub duration_secs: Option<f64>,
pub artist_name: String,
pub album_name: Option<String>,
pub album_slug: Option<String>,
pub genre: Option<String>,
}
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct TrackDetail {
pub slug: String,
pub title: String,
pub track_number: Option<i32>,
pub duration_secs: Option<f64>,
pub genre: Option<String>,
pub storage_path: String,
pub artist_name: String,
pub artist_slug: String,
pub album_name: Option<String>,
pub album_slug: Option<String>,
pub album_year: Option<i32>,
}
#[derive(Debug, sqlx::FromRow)]
pub struct CoverInfo {
pub file_path: String,
pub mime_type: String,
}
#[derive(Debug, sqlx::FromRow)]
pub struct TrackCoverLookup {
pub storage_path: String,
pub album_id: Option<i64>,
}
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct SearchResult {
pub result_type: String, // "artist", "album", "track"
pub slug: String,
pub name: String,
pub detail: Option<String>, // artist name for albums/tracks
}
// --- Queries ---
pub async fn list_artists(pool: &PgPool) -> Result<Vec<ArtistListItem>, sqlx::Error> {
sqlx::query_as::<_, ArtistListItem>(
r#"SELECT ar.slug, ar.name,
COUNT(DISTINCT al.id) AS album_count,
COUNT(DISTINCT t.id) AS track_count
FROM artists ar
LEFT JOIN albums al ON al.artist_id = ar.id
LEFT JOIN tracks t ON t.artist_id = ar.id
GROUP BY ar.id, ar.slug, ar.name
HAVING COUNT(DISTINCT t.id) > 0
ORDER BY ar.name"#
)
.fetch_all(pool)
.await
}
pub async fn get_artist(pool: &PgPool, slug: &str) -> Result<Option<ArtistDetail>, sqlx::Error> {
sqlx::query_as::<_, ArtistDetail>(
"SELECT slug, name FROM artists WHERE slug = $1"
)
.bind(slug)
.fetch_optional(pool)
.await
}
pub async fn list_albums_by_artist(pool: &PgPool, artist_slug: &str) -> Result<Vec<AlbumListItem>, sqlx::Error> {
sqlx::query_as::<_, AlbumListItem>(
r#"SELECT al.slug, al.name, al.year,
COUNT(t.id) AS track_count,
EXISTS(SELECT 1 FROM album_images ai WHERE ai.album_id = al.id AND ai.image_type = 'cover') AS has_cover
FROM albums al
JOIN artists ar ON al.artist_id = ar.id
LEFT JOIN tracks t ON t.album_id = al.id
WHERE ar.slug = $1
GROUP BY al.id, al.slug, al.name, al.year
ORDER BY al.year NULLS LAST, al.name"#
)
.bind(artist_slug)
.fetch_all(pool)
.await
}
pub async fn list_tracks_by_album(pool: &PgPool, album_slug: &str) -> Result<Vec<TrackListItem>, sqlx::Error> {
sqlx::query_as::<_, TrackListItem>(
r#"SELECT t.slug, t.title, t.track_number, t.duration_secs,
ar.name AS artist_name,
al.name AS album_name, al.slug AS album_slug, t.genre
FROM tracks t
JOIN artists ar ON t.artist_id = ar.id
LEFT JOIN albums al ON t.album_id = al.id
WHERE al.slug = $1
ORDER BY t.track_number NULLS LAST, t.title"#
)
.bind(album_slug)
.fetch_all(pool)
.await
}
pub async fn get_track(pool: &PgPool, slug: &str) -> Result<Option<TrackDetail>, sqlx::Error> {
sqlx::query_as::<_, TrackDetail>(
r#"SELECT t.slug, t.title, t.track_number, t.duration_secs, t.genre,
t.storage_path, ar.name AS artist_name, ar.slug AS artist_slug,
al.name AS album_name, al.slug AS album_slug, al.year AS album_year
FROM tracks t
JOIN artists ar ON t.artist_id = ar.id
LEFT JOIN albums al ON t.album_id = al.id
WHERE t.slug = $1"#
)
.bind(slug)
.fetch_optional(pool)
.await
}
pub async fn get_track_cover_lookup(pool: &PgPool, track_slug: &str) -> Result<Option<TrackCoverLookup>, sqlx::Error> {
sqlx::query_as::<_, TrackCoverLookup>(
"SELECT storage_path, album_id FROM tracks WHERE slug = $1"
)
.bind(track_slug)
.fetch_optional(pool)
.await
}
pub async fn get_album_cover_by_id(pool: &PgPool, album_id: i64) -> Result<Option<CoverInfo>, sqlx::Error> {
sqlx::query_as::<_, CoverInfo>(
r#"SELECT file_path, mime_type FROM album_images
WHERE album_id = $1 AND image_type = 'cover' LIMIT 1"#
)
.bind(album_id)
.fetch_optional(pool)
.await
}
pub async fn get_album_cover(pool: &PgPool, album_slug: &str) -> Result<Option<CoverInfo>, sqlx::Error> {
sqlx::query_as::<_, CoverInfo>(
r#"SELECT ai.file_path, ai.mime_type
FROM album_images ai
JOIN albums al ON ai.album_id = al.id
WHERE al.slug = $1 AND ai.image_type = 'cover'
LIMIT 1"#
)
.bind(album_slug)
.fetch_optional(pool)
.await
}
pub async fn search(pool: &PgPool, query: &str, limit: i32) -> Result<Vec<SearchResult>, sqlx::Error> {
let pattern = format!("%{}%", query);
sqlx::query_as::<_, SearchResult>(
r#"SELECT * FROM (
SELECT 'artist' AS result_type, slug, name, NULL AS detail
FROM artists WHERE name ILIKE $1
UNION ALL
SELECT 'album' AS result_type, al.slug, al.name, ar.name AS detail
FROM albums al JOIN artists ar ON al.artist_id = ar.id
WHERE al.name ILIKE $1
UNION ALL
SELECT 'track' AS result_type, t.slug, t.title AS name, ar.name AS detail
FROM tracks t JOIN artists ar ON t.artist_id = ar.id
WHERE t.title ILIKE $1
) sub ORDER BY result_type, name LIMIT $2"#
)
.bind(&pattern)
.bind(limit)
.fetch_all(pool)
.await
}
pub async fn list_all_tracks_by_artist(pool: &PgPool, artist_slug: &str) -> Result<Vec<TrackListItem>, sqlx::Error> {
sqlx::query_as::<_, TrackListItem>(
r#"SELECT t.slug, t.title, t.track_number, t.duration_secs,
ar.name AS artist_name,
al.name AS album_name, al.slug AS album_slug, t.genre
FROM tracks t
JOIN artists ar ON t.artist_id = ar.id
LEFT JOIN albums al ON t.album_id = al.id
WHERE ar.slug = $1
ORDER BY al.year NULLS LAST, al.name, t.track_number NULLS LAST, t.title"#
)
.bind(artist_slug)
.fetch_all(pool)
.await
}
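Note that `search` interpolates the raw user query into the ILIKE pattern, so a literal `%`, `_`, or `\` in the query acts as a wildcard. A hypothetical escaper (which would pair with `... ILIKE $1 ESCAPE '\'` in the SQL) could look like:

```rust
/// Escape LIKE/ILIKE metacharacters so user input matches literally.
fn escape_like(q: &str) -> String {
    let mut out = String::with_capacity(q.len());
    for c in q.chars() {
        if matches!(c, '%' | '_' | '\\') {
            out.push('\\'); // prefix metacharacters with the escape char
        }
        out.push(c);
    }
    out
}

fn main() {
    assert_eq!(escape_like("100% pure"), "100\\% pure");
    assert_eq!(escape_like("no_wild"), "no\\_wild");
    // The pattern the query would then bind as $1:
    let pattern = format!("%{}%", escape_like("a_b"));
    assert_eq!(pattern, "%a\\_b%");
}
```

This is a sketch, not part of the code above; whether wildcards in search input are a feature or a bug is a product decision.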


@@ -0,0 +1,106 @@
mod db;
mod web;
use std::sync::Arc;
use clap::Parser;
#[derive(Parser, Debug)]
#[command(version, about = "Furumi Web Player: database-backed music player")]
struct Args {
/// IP address and port for the web player
#[arg(long, env = "FURUMI_PLAYER_BIND", default_value = "0.0.0.0:8080")]
bind: String,
/// PostgreSQL connection URL
#[arg(long, env = "FURUMI_PLAYER_DATABASE_URL")]
database_url: String,
/// Root directory where music files are stored (agent's storage_dir)
#[arg(long, env = "FURUMI_PLAYER_STORAGE_DIR")]
storage_dir: std::path::PathBuf,
/// OIDC Issuer URL (e.g. https://auth.example.com/application/o/furumi/)
#[arg(long, env = "FURUMI_PLAYER_OIDC_ISSUER_URL")]
oidc_issuer_url: Option<String>,
/// OIDC Client ID
#[arg(long, env = "FURUMI_PLAYER_OIDC_CLIENT_ID")]
oidc_client_id: Option<String>,
/// OIDC Client Secret
#[arg(long, env = "FURUMI_PLAYER_OIDC_CLIENT_SECRET")]
oidc_client_secret: Option<String>,
/// OIDC Redirect URL (e.g. https://music.example.com/auth/callback)
#[arg(long, env = "FURUMI_PLAYER_OIDC_REDIRECT_URL")]
oidc_redirect_url: Option<String>,
/// OIDC Session Secret (32+ chars, for HMAC). Random if not provided.
#[arg(long, env = "FURUMI_PLAYER_OIDC_SESSION_SECRET")]
oidc_session_secret: Option<String>,
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Install ring as the default crypto provider for rustls
rustls::crypto::ring::default_provider()
.install_default()
.expect("Failed to install rustls crypto provider");
tracing_subscriber::fmt::init();
let args = Args::parse();
let version = option_env!("FURUMI_VERSION").unwrap_or(env!("CARGO_PKG_VERSION"));
tracing::info!("Furumi Web Player v{} starting", version);
tracing::info!("Storage directory: {:?}", args.storage_dir);
if !args.storage_dir.exists() || !args.storage_dir.is_dir() {
eprintln!("Error: Storage directory {:?} does not exist or is not a directory", args.storage_dir);
std::process::exit(1);
}
tracing::info!("Connecting to database...");
let pool = db::connect(&args.database_url).await?;
tracing::info!("Database connected");
// Initialize OIDC if configured
let oidc_state = if let (Some(issuer), Some(client_id), Some(secret), Some(redirect)) = (
args.oidc_issuer_url,
args.oidc_client_id,
args.oidc_client_secret,
args.oidc_redirect_url,
) {
tracing::info!("OIDC (SSO): enabled (issuer: {})", issuer);
match web::auth::oidc_init(issuer, client_id, secret, redirect, args.oidc_session_secret).await {
Ok(state) => Some(Arc::new(state)),
Err(e) => {
eprintln!("Error initializing OIDC: {}", e);
std::process::exit(1);
}
}
} else {
tracing::info!("OIDC (SSO): disabled (no OIDC configuration provided)");
None
};
let bind_addr: std::net::SocketAddr = args.bind.parse().unwrap_or_else(|e| {
eprintln!("Error: Invalid bind address '{}': {}", args.bind, e);
std::process::exit(1);
});
let state = Arc::new(web::AppState {
pool,
storage_dir: Arc::new(args.storage_dir),
oidc: oidc_state,
});
tracing::info!("Web player: http://{}", bind_addr);
let app = web::build_router(state);
let listener = tokio::net::TcpListener::bind(bind_addr).await?;
axum::serve(listener, app).await?;
Ok(())
}


@@ -0,0 +1,298 @@
use std::sync::Arc;
use axum::{
body::Body,
extract::{Path, Query, State},
http::{HeaderMap, StatusCode, header},
response::{IntoResponse, Json, Response},
};
use serde::Deserialize;
use tokio::io::{AsyncReadExt, AsyncSeekExt};
use crate::db;
use super::AppState;
type S = Arc<AppState>;
// --- Library browsing ---
pub async fn list_artists(State(state): State<S>) -> impl IntoResponse {
match db::list_artists(&state.pool).await {
Ok(artists) => (StatusCode::OK, Json(serde_json::to_value(artists).unwrap())).into_response(),
Err(e) => error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn get_artist(State(state): State<S>, Path(slug): Path<String>) -> impl IntoResponse {
match db::get_artist(&state.pool, &slug).await {
Ok(Some(artist)) => (StatusCode::OK, Json(serde_json::to_value(artist).unwrap())).into_response(),
Ok(None) => error_json(StatusCode::NOT_FOUND, "artist not found"),
Err(e) => error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn list_artist_albums(State(state): State<S>, Path(slug): Path<String>) -> impl IntoResponse {
match db::list_albums_by_artist(&state.pool, &slug).await {
Ok(albums) => (StatusCode::OK, Json(serde_json::to_value(albums).unwrap())).into_response(),
Err(e) => error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn list_artist_all_tracks(State(state): State<S>, Path(slug): Path<String>) -> impl IntoResponse {
match db::list_all_tracks_by_artist(&state.pool, &slug).await {
Ok(tracks) => (StatusCode::OK, Json(serde_json::to_value(tracks).unwrap())).into_response(),
Err(e) => error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn get_track_detail(State(state): State<S>, Path(slug): Path<String>) -> impl IntoResponse {
match db::get_track(&state.pool, &slug).await {
Ok(Some(track)) => (StatusCode::OK, Json(serde_json::to_value(track).unwrap())).into_response(),
Ok(None) => error_json(StatusCode::NOT_FOUND, "track not found"),
Err(e) => error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
pub async fn get_album_tracks(State(state): State<S>, Path(slug): Path<String>) -> impl IntoResponse {
match db::list_tracks_by_album(&state.pool, &slug).await {
Ok(tracks) => (StatusCode::OK, Json(serde_json::to_value(tracks).unwrap())).into_response(),
Err(e) => error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
// --- Stream ---
pub async fn stream_track(
State(state): State<S>,
Path(slug): Path<String>,
headers: HeaderMap,
) -> impl IntoResponse {
let track = match db::get_track(&state.pool, &slug).await {
Ok(Some(t)) => t,
Ok(None) => return error_json(StatusCode::NOT_FOUND, "track not found"),
Err(e) => return error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
};
let file_path = std::path::Path::new(&track.storage_path);
if !file_path.exists() {
return error_json(StatusCode::NOT_FOUND, "file not found on disk");
}
let file_size = match tokio::fs::metadata(file_path).await {
Ok(m) => m.len(),
Err(e) => return error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
};
let content_type = mime_guess::from_path(file_path)
.first_or_octet_stream()
.to_string();
// Parse Range header
let range = headers.get(header::RANGE).and_then(|v| v.to_str().ok());
if let Some(range_str) = range {
stream_range(file_path, file_size, &content_type, range_str).await
} else {
stream_full(file_path, file_size, &content_type).await
}
}
async fn stream_full(path: &std::path::Path, size: u64, content_type: &str) -> Response {
let file = match tokio::fs::File::open(path).await {
Ok(f) => f,
Err(e) => return error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
};
let stream = tokio_util::io::ReaderStream::new(file);
let body = Body::from_stream(stream);
Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, content_type)
.header(header::CONTENT_LENGTH, size)
.header(header::ACCEPT_RANGES, "bytes")
.body(body)
.unwrap()
}
async fn stream_range(path: &std::path::Path, size: u64, content_type: &str, range_str: &str) -> Response {
// Parse "bytes=START-END"
let range = range_str.strip_prefix("bytes=").unwrap_or("");
    let parts: Vec<&str> = range.split('-').collect();
    let start: u64 = parts.first().and_then(|s| s.parse().ok()).unwrap_or(0);
    // Clamp an open or oversized end to the last byte (RFC 7233), using
    // saturating_sub so a zero-length file cannot underflow.
    let end: u64 = parts
        .get(1)
        .and_then(|s| if s.is_empty() { None } else { s.parse().ok() })
        .unwrap_or(u64::MAX)
        .min(size.saturating_sub(1));
    if size == 0 || start >= size || start > end {
        return Response::builder()
            .status(StatusCode::RANGE_NOT_SATISFIABLE)
            .header(header::CONTENT_RANGE, format!("bytes */{}", size))
            .body(Body::empty())
            .unwrap();
    }
let length = end - start + 1;
let mut file = match tokio::fs::File::open(path).await {
Ok(f) => f,
Err(e) => return error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
};
if start > 0 {
if let Err(e) = file.seek(std::io::SeekFrom::Start(start)).await {
return error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string());
}
}
let limited = file.take(length);
let stream = tokio_util::io::ReaderStream::new(limited);
let body = Body::from_stream(stream);
Response::builder()
.status(StatusCode::PARTIAL_CONTENT)
.header(header::CONTENT_TYPE, content_type)
.header(header::CONTENT_LENGTH, length)
.header(header::CONTENT_RANGE, format!("bytes {}-{}/{}", start, end, size))
.header(header::ACCEPT_RANGES, "bytes")
.body(body)
.unwrap()
}
// --- Cover art ---
pub async fn album_cover(State(state): State<S>, Path(slug): Path<String>) -> impl IntoResponse {
serve_album_cover_by_slug(&state, &slug).await
}
/// Cover for a specific track: album_images → embedded in file → 404
pub async fn track_cover(State(state): State<S>, Path(slug): Path<String>) -> impl IntoResponse {
let lookup = match db::get_track_cover_lookup(&state.pool, &slug).await {
Ok(Some(l)) => l,
Ok(None) => return error_json(StatusCode::NOT_FOUND, "track not found"),
Err(e) => return error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
};
// 1) Try album cover from DB
if let Some(album_id) = lookup.album_id {
if let Ok(Some(cover)) = db::get_album_cover_by_id(&state.pool, album_id).await {
let path = std::path::Path::new(&cover.file_path);
if path.exists() {
if let Ok(data) = tokio::fs::read(path).await {
return Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, &cover.mime_type)
.header(header::CACHE_CONTROL, "public, max-age=86400")
.body(Body::from(data))
.unwrap();
}
}
}
}
// 2) Try extracting embedded cover from the audio file
let file_path = std::path::PathBuf::from(&lookup.storage_path);
if file_path.exists() {
let result = tokio::task::spawn_blocking(move || extract_embedded_cover(&file_path)).await;
if let Ok(Some((data, mime))) = result {
return Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, mime)
.header(header::CACHE_CONTROL, "public, max-age=86400")
.body(Body::from(data))
.unwrap();
}
}
error_json(StatusCode::NOT_FOUND, "no cover art available")
}
/// Extract embedded cover art from an audio file using Symphonia.
fn extract_embedded_cover(path: &std::path::Path) -> Option<(Vec<u8>, String)> {
use symphonia::core::{
formats::FormatOptions,
io::MediaSourceStream,
meta::MetadataOptions,
probe::Hint,
};
let file = std::fs::File::open(path).ok()?;
let mss = MediaSourceStream::new(Box::new(file), Default::default());
let mut hint = Hint::new();
if let Some(ext) = path.extension().and_then(|e| e.to_str()) {
hint.with_extension(ext);
}
let mut probed = symphonia::default::get_probe().format(
&hint,
mss,
&FormatOptions { enable_gapless: false, ..Default::default() },
&MetadataOptions::default(),
).ok()?;
// Check metadata side-data
if let Some(rev) = probed.metadata.get().as_ref().and_then(|m| m.current()) {
if let Some(visual) = rev.visuals().first() {
return Some((visual.data.to_vec(), visual.media_type.clone()));
}
}
// Check format-embedded metadata
if let Some(rev) = probed.format.metadata().current() {
if let Some(visual) = rev.visuals().first() {
return Some((visual.data.to_vec(), visual.media_type.clone()));
}
}
None
}
async fn serve_album_cover_by_slug(state: &AppState, slug: &str) -> Response {
let cover = match db::get_album_cover(&state.pool, slug).await {
Ok(Some(c)) => c,
Ok(None) => return error_json(StatusCode::NOT_FOUND, "no cover"),
Err(e) => return error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
};
let path = std::path::Path::new(&cover.file_path);
if !path.exists() {
return error_json(StatusCode::NOT_FOUND, "cover file missing");
}
match tokio::fs::read(path).await {
Ok(data) => Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, &cover.mime_type)
.header(header::CACHE_CONTROL, "public, max-age=86400")
.body(Body::from(data))
.unwrap(),
Err(e) => error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
// --- Search ---
#[derive(Deserialize)]
pub struct SearchQuery {
pub q: String,
#[serde(default = "default_limit")]
pub limit: i32,
}
fn default_limit() -> i32 { 20 }
pub async fn search(State(state): State<S>, Query(q): Query<SearchQuery>) -> impl IntoResponse {
if q.q.is_empty() {
return (StatusCode::OK, Json(serde_json::json!([]))).into_response();
}
match db::search(&state.pool, &q.q, q.limit).await {
Ok(results) => (StatusCode::OK, Json(serde_json::to_value(results).unwrap())).into_response(),
Err(e) => error_json(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
}
}
// --- Helpers ---
fn error_json(status: StatusCode, message: &str) -> Response {
(status, Json(serde_json::json!({"error": message}))).into_response()
}
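
The stream handler above answers with `Content-Range: bytes start-end/size`. As a self-contained sketch of how a client's `Range` request header maps onto that `(start, end)` pair — the `parse_range` helper is hypothetical, not part of the handler — consider:

```rust
// Hypothetical helper: map a "Range: bytes=..." header value to an
// inclusive (start, end) byte window within a resource of `size` bytes.
fn parse_range(header: &str, size: u64) -> Option<(u64, u64)> {
    let spec = header.strip_prefix("bytes=")?;
    let (s, e) = spec.split_once('-')?;
    if s.is_empty() {
        // Suffix form "bytes=-N": the final N bytes.
        let n: u64 = e.parse().ok()?;
        return Some((size.saturating_sub(n), size.checked_sub(1)?));
    }
    let start: u64 = s.parse().ok()?;
    let end: u64 = if e.is_empty() { size.checked_sub(1)? } else { e.parse().ok()? };
    if start > end || end >= size {
        return None; // caller would answer 416 Range Not Satisfiable
    }
    Some((start, end))
}

fn main() {
    assert_eq!(parse_range("bytes=0-499", 1000), Some((0, 499)));
    assert_eq!(parse_range("bytes=500-", 1000), Some((500, 999)));
    assert_eq!(parse_range("bytes=-200", 1000), Some((800, 999)));
    assert_eq!(parse_range("bytes=900-1100", 1000), None);
    println!("ok");
}
```

The matching 206 response then carries `Content-Length: end - start + 1` alongside the `Content-Range` header shown above.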

View File

@@ -0,0 +1,384 @@
use axum::{
body::Body,
extract::{Request, State},
http::{header, HeaderMap, StatusCode},
middleware::Next,
response::{Html, IntoResponse, Redirect, Response},
};
use openidconnect::{
core::{CoreClient, CoreProviderMetadata, CoreResponseType},
reqwest::async_http_client,
AuthenticationFlow, AuthorizationCode, ClientId, ClientSecret, CsrfToken, IssuerUrl, Nonce,
PkceCodeChallenge, PkceCodeVerifier, RedirectUrl, Scope, TokenResponse,
};
use rand::RngCore;
use serde::Deserialize;
use base64::Engine;
use hmac::{Hmac, Mac};
use super::AppState;
use std::sync::Arc;
const SESSION_COOKIE: &str = "furumi_session";
type HmacSha256 = Hmac<sha2::Sha256>;
pub struct OidcState {
pub client: CoreClient,
pub session_secret: Vec<u8>,
}
pub async fn oidc_init(
issuer: String,
client_id: String,
client_secret: String,
redirect: String,
session_secret_override: Option<String>,
) -> anyhow::Result<OidcState> {
let provider_metadata = CoreProviderMetadata::discover_async(
IssuerUrl::new(issuer)?,
async_http_client,
)
.await?;
let client = CoreClient::from_provider_metadata(
provider_metadata,
ClientId::new(client_id),
Some(ClientSecret::new(client_secret)),
)
.set_auth_type(openidconnect::AuthType::RequestBody)
.set_redirect_uri(RedirectUrl::new(redirect)?);
let session_secret = if let Some(s) = session_secret_override {
// Normalize the configured secret to exactly 32 bytes:
// shorter values are zero-padded, longer ones truncated.
let mut b = s.into_bytes();
b.resize(32, 0);
b
} else {
let mut b = vec![0u8; 32];
rand::thread_rng().fill_bytes(&mut b);
b
};
Ok(OidcState {
client,
session_secret,
})
}
fn generate_sso_cookie(secret: &[u8], user_id: &str) -> String {
let mut mac = HmacSha256::new_from_slice(secret).unwrap();
mac.update(user_id.as_bytes());
let sig = base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(mac.finalize().into_bytes());
format!("sso:{}:{}", user_id, sig)
}
fn verify_sso_cookie(secret: &[u8], cookie_val: &str) -> Option<String> {
let parts: Vec<&str> = cookie_val.split(':').collect();
if parts.len() != 3 || parts[0] != "sso" {
return None;
}
let user_id = parts[1];
let sig = parts[2];
let mut mac = HmacSha256::new_from_slice(secret).unwrap();
mac.update(user_id.as_bytes());
// Decode the signature and verify in constant time via the Mac trait;
// comparing the base64 strings directly would leak timing information.
let sig_bytes = base64::engine::general_purpose::URL_SAFE_NO_PAD.decode(sig).ok()?;
mac.verify_slice(&sig_bytes).ok()?;
Some(user_id.to_string())
}
/// Auth middleware: requires valid SSO session cookie.
pub async fn require_auth(
State(state): State<Arc<AppState>>,
req: Request,
next: Next,
) -> Response {
let oidc = match &state.oidc {
Some(o) => o,
None => return next.run(req).await, // No OIDC configured = no auth
};
let cookies = req
.headers()
.get(header::COOKIE)
.and_then(|v| v.to_str().ok())
.unwrap_or("");
for c in cookies.split(';') {
let c = c.trim();
if let Some(val) = c.strip_prefix(&format!("{}=", SESSION_COOKIE)) {
if verify_sso_cookie(&oidc.session_secret, val).is_some() {
return next.run(req).await;
}
}
}
let uri = req.uri().to_string();
if uri.starts_with("/api/") {
(StatusCode::UNAUTHORIZED, "Unauthorized").into_response()
} else {
Redirect::to("/login").into_response()
}
}
/// GET /login — show SSO login page.
pub async fn login_page(State(state): State<Arc<AppState>>) -> impl IntoResponse {
if state.oidc.is_none() {
return Redirect::to("/").into_response();
}
Html(LOGIN_HTML).into_response()
}
/// GET /logout — clear session cookie.
pub async fn logout() -> impl IntoResponse {
let cookie = format!(
"{}=; HttpOnly; SameSite=Strict; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT",
SESSION_COOKIE
);
let mut headers = HeaderMap::new();
headers.insert(header::SET_COOKIE, cookie.parse().unwrap());
headers.insert(header::LOCATION, "/login".parse().unwrap());
(StatusCode::FOUND, headers, Body::empty()).into_response()
}
#[derive(Deserialize)]
pub struct LoginQuery {
pub next: Option<String>,
}
/// GET /auth/login — initiate OIDC flow.
pub async fn oidc_login(
State(state): State<Arc<AppState>>,
axum::extract::Query(query): axum::extract::Query<LoginQuery>,
req: Request,
) -> impl IntoResponse {
let oidc = match &state.oidc {
Some(o) => o,
None => return Redirect::to("/").into_response(),
};
let (pkce_challenge, pkce_verifier) = PkceCodeChallenge::new_random_sha256();
let (auth_url, csrf_token, nonce) = oidc
.client
.authorize_url(
AuthenticationFlow::<CoreResponseType>::AuthorizationCode,
CsrfToken::new_random,
Nonce::new_random,
)
.add_scope(Scope::new("openid".to_string()))
.add_scope(Scope::new("profile".to_string()))
.set_pkce_challenge(pkce_challenge)
.url();
let next_url = query.next.unwrap_or_else(|| "/".to_string());
let cookie_val = format!(
"{}:{}:{}:{}",
csrf_token.secret(),
nonce.secret(),
pkce_verifier.secret(),
urlencoding::encode(&next_url)
);
let is_https = req
.headers()
.get("x-forwarded-proto")
.and_then(|v| v.to_str().ok())
.map(|s| s == "https")
.unwrap_or(false);
let cookie_attrs = if is_https {
"SameSite=None; Secure"
} else {
"SameSite=Lax"
};
let cookie = format!(
"furumi_oidc_state={}; HttpOnly; {}; Path=/; Max-Age=3600",
cookie_val, cookie_attrs
);
let mut headers = HeaderMap::new();
headers.insert(header::SET_COOKIE, cookie.parse().unwrap());
headers.insert(header::LOCATION, auth_url.as_str().parse().unwrap());
headers.insert(
header::CACHE_CONTROL,
"no-store, no-cache, must-revalidate".parse().unwrap(),
);
(StatusCode::FOUND, headers, Body::empty()).into_response()
}
#[derive(Deserialize)]
pub struct AuthCallbackQuery {
code: String,
state: String,
}
/// GET /auth/callback — handle OIDC callback.
pub async fn oidc_callback(
State(state): State<Arc<AppState>>,
axum::extract::Query(query): axum::extract::Query<AuthCallbackQuery>,
req: Request,
) -> impl IntoResponse {
let oidc = match &state.oidc {
Some(o) => o,
None => return Redirect::to("/").into_response(),
};
let cookies = req
.headers()
.get(header::COOKIE)
.and_then(|v| v.to_str().ok())
.unwrap_or("");
let mut matching_val = None;
for c in cookies.split(';') {
let c = c.trim();
if let Some(val) = c.strip_prefix("furumi_oidc_state=") {
let parts: Vec<&str> = val.split(':').collect();
if parts.len() >= 3 && parts[0] == query.state {
matching_val = Some(val.to_string());
break;
}
}
}
let cookie_val = match matching_val {
Some(c) => c,
None => {
tracing::warn!("OIDC callback: invalid state or missing cookie");
return (StatusCode::BAD_REQUEST, "Invalid state").into_response();
}
};
let parts: Vec<&str> = cookie_val.split(':').collect();
let nonce = Nonce::new(parts[1].to_string());
let pkce_verifier = PkceCodeVerifier::new(parts[2].to_string());
let token_response = oidc
.client
.exchange_code(AuthorizationCode::new(query.code))
.set_pkce_verifier(pkce_verifier)
.request_async(async_http_client)
.await;
let token_response = match token_response {
Ok(tr) => tr,
Err(e) => {
tracing::error!("OIDC token exchange error: {:?}", e);
return (StatusCode::INTERNAL_SERVER_ERROR, format!("OIDC error: {}", e))
.into_response();
}
};
let id_token = match token_response.id_token() {
Some(t) => t,
None => {
return (StatusCode::INTERNAL_SERVER_ERROR, "No ID token").into_response();
}
};
let claims = match id_token.claims(&oidc.client.id_token_verifier(), &nonce) {
Ok(c) => c,
Err(e) => {
return (StatusCode::UNAUTHORIZED, format!("Invalid ID token: {}", e)).into_response();
}
};
let user_id = claims
.preferred_username()
.map(|u| u.to_string())
.or_else(|| claims.email().map(|e| e.to_string()))
.unwrap_or_else(|| claims.subject().to_string());
let session_val = generate_sso_cookie(&oidc.session_secret, &user_id);
// `next` comes from the client via the state cookie, so restrict it to
// same-origin relative paths to avoid an open redirect after login.
let redirect_to = parts
.get(3)
.and_then(|&s| urlencoding::decode(s).ok())
.map(|v| v.into_owned())
.filter(|v| v.starts_with('/') && !v.starts_with("//"))
.unwrap_or_else(|| "/".to_string());
let redirect_to = if redirect_to.is_empty() {
"/".to_string()
} else {
redirect_to
};
let is_https = req
.headers()
.get("x-forwarded-proto")
.and_then(|v| v.to_str().ok())
.map(|s| s == "https")
.unwrap_or(false);
let session_attrs = if is_https {
"SameSite=Strict; Secure"
} else {
"SameSite=Strict"
};
let session_cookie = format!(
"{}={}; HttpOnly; {}; Path=/; Max-Age=604800",
SESSION_COOKIE, session_val, session_attrs
);
let clear_state =
"furumi_oidc_state=; HttpOnly; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT";
let mut headers = HeaderMap::new();
headers.insert(header::SET_COOKIE, session_cookie.parse().unwrap());
headers.append(header::SET_COOKIE, clear_state.parse().unwrap());
headers.insert(header::LOCATION, redirect_to.parse().unwrap());
(StatusCode::FOUND, headers, Body::empty()).into_response()
}
const LOGIN_HTML: &str = r#"<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Furumi Player — Login</title>
<style>
* { box-sizing: border-box; margin: 0; padding: 0; }
body {
min-height: 100vh;
display: flex; align-items: center; justify-content: center;
background: #0d0f14;
font-family: 'Inter', system-ui, sans-serif;
color: #e2e8f0;
}
.card {
background: #161b27;
border: 1px solid #2a3347;
border-radius: 16px;
padding: 2.5rem 3rem;
width: 360px;
box-shadow: 0 20px 60px rgba(0,0,0,0.5);
text-align: center;
}
.logo { font-size: 1.8rem; font-weight: 700; color: #7c6af7; margin-bottom: 0.25rem; }
.subtitle { font-size: 0.85rem; color: #64748b; margin-bottom: 2rem; }
.btn-sso {
display: block; width: 100%; padding: 0.75rem; text-align: center;
background: #7c6af7; border: none; border-radius: 8px;
color: #fff; font-size: 0.95rem; font-weight: 600; text-decoration: none;
cursor: pointer; transition: background 0.2s;
}
.btn-sso:hover { background: #6b58e8; }
</style>
</head>
<body>
<div class="card">
<div class="logo">Furumi</div>
<div class="subtitle">Sign in to continue</div>
<a href="/auth/login" class="btn-sso">SSO Login</a>
</div>
</body>
</html>"#;
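
The state cookie written in `oidc_login` and consumed in `oidc_callback` is a colon-joined tuple `csrf:nonce:pkce_verifier:urlencoded(next)`; splitting on `:` is safe only because the first three fields are URL-safe tokens and the redirect target is percent-encoded. A std-only sketch of that round trip — `pack_state`/`unpack_state` and the minimal encoder are illustrative stand-ins, not the crate's API:

```rust
// Illustrative pack/unpack of the four-field state cookie. The real code
// uses the urlencoding crate; here only '%' and ':' are escaped, which is
// exactly the property the ':'-split relies on.
fn pack_state(csrf: &str, nonce: &str, pkce: &str, next: &str) -> String {
    let next_enc = next.replace('%', "%25").replace(':', "%3A");
    format!("{csrf}:{nonce}:{pkce}:{next_enc}")
}

fn unpack_state(cookie: &str) -> Option<(String, String, String, String)> {
    let parts: Vec<&str> = cookie.split(':').collect();
    if parts.len() < 3 {
        return None;
    }
    // Decode in reverse order of encoding: %3A first, then %25.
    let next = parts
        .get(3)
        .map(|s| s.replace("%3A", ":").replace("%25", "%"))
        .unwrap_or_else(|| "/".to_string());
    Some((parts[0].into(), parts[1].into(), parts[2].into(), next))
}

fn main() {
    let c = pack_state("csrf123", "nonce456", "pkce789", "/artists:live");
    let (csrf, _, _, next) = unpack_state(&c).unwrap();
    assert_eq!(csrf, "csrf123");
    assert_eq!(next, "/artists:live");
    println!("ok");
}
```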

View File

@@ -0,0 +1,57 @@
pub mod api;
pub mod auth;
use std::sync::Arc;
use std::path::PathBuf;
use axum::{Router, routing::get, middleware};
use sqlx::PgPool;
#[derive(Clone)]
pub struct AppState {
pub pool: PgPool,
#[allow(dead_code)]
pub storage_dir: Arc<PathBuf>,
pub oidc: Option<Arc<auth::OidcState>>,
}
pub fn build_router(state: Arc<AppState>) -> Router {
let library = Router::new()
.route("/artists", get(api::list_artists))
.route("/artists/:slug", get(api::get_artist))
.route("/artists/:slug/albums", get(api::list_artist_albums))
.route("/artists/:slug/tracks", get(api::list_artist_all_tracks))
.route("/albums/:slug", get(api::get_album_tracks))
.route("/albums/:slug/cover", get(api::album_cover))
.route("/tracks/:slug", get(api::get_track_detail))
.route("/tracks/:slug/cover", get(api::track_cover))
.route("/stream/:slug", get(api::stream_track))
.route("/search", get(api::search));
let authed = Router::new()
.route("/", get(player_html))
.nest("/api", library);
let has_oidc = state.oidc.is_some();
let app = if has_oidc {
authed
.route_layer(middleware::from_fn_with_state(state.clone(), auth::require_auth))
} else {
authed
};
// /login and the OIDC endpoints are registered outside the auth layer so
// the sign-in flow itself stays reachable when auth is enabled.
Router::new()
.route("/login", get(auth::login_page))
.route("/logout", get(auth::logout))
.route("/auth/login", get(auth::oidc_login))
.route("/auth/callback", get(auth::oidc_callback))
.merge(app)
.with_state(state)
}
async fn player_html() -> axum::response::Html<String> {
let html = include_str!("player.html")
.replace("<!-- VERSION_PLACEHOLDER -->", option_env!("FURUMI_VERSION").unwrap_or(env!("CARGO_PKG_VERSION")));
axum::response::Html(html)
}
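
`player_html` stamps the build version into the page at request time. A reduced, std-only sketch of that substitution — the `render` helper and the HTML fragment are illustrative; the real handler loads the template with `include_str!` and prefers a `FURUMI_VERSION` build-time override:

```rust
// Illustrative stand-in for the handler body: swap the placeholder
// comment for whichever version string the build provided.
fn render(template: &str, version: &str) -> String {
    template.replace("<!-- VERSION_PLACEHOLDER -->", version)
}

fn main() {
    let template = r#"<span class="header-version">v<!-- VERSION_PLACEHOLDER --></span>"#;
    let html = render(template, "1.2.3");
    assert_eq!(html, r#"<span class="header-version">v1.2.3</span>"#);
    println!("ok");
}
```

Keeping the placeholder as an HTML comment means an unprocessed template still renders as a valid page, just without a version badge.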

View File

@@ -0,0 +1,589 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Furumi Player</title>
<style>
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&display=swap');
*, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }
:root {
--bg-base: #0a0c12;
--bg-panel: #111520;
--bg-card: #161d2e;
--bg-hover: #1e2740;
--bg-active: #252f4a;
--border: #1f2c45;
--accent: #7c6af7;
--accent-dim: #5a4fcf;
--accent-glow:rgba(124,106,247,0.3);
--text: #e2e8f0;
--text-muted: #64748b;
--text-dim: #94a3b8;
--success: #34d399;
--danger: #f87171;
}
html, body { height: 100%; overflow: hidden; }
body { font-family: 'Inter', system-ui, sans-serif; background: var(--bg-base); color: var(--text); display: flex; flex-direction: column; }
.header { display: flex; align-items: center; justify-content: space-between; padding: 0.75rem 1.5rem; background: var(--bg-panel); border-bottom: 1px solid var(--border); flex-shrink: 0; z-index: 10; }
.header-logo { display: flex; align-items: center; gap: 0.75rem; font-weight: 700; font-size: 1.1rem; }
.header-logo svg { width: 22px; height: 22px; }
.header-version { font-size: 0.7rem; color: var(--text-muted); background: rgba(255,255,255,0.05); padding: 0.1rem 0.4rem; border-radius: 4px; margin-left: 0.25rem; font-weight: 500; text-decoration: none; }
.btn-menu { display: none; background: none; border: none; color: var(--text); font-size: 1.2rem; cursor: pointer; padding: 0.1rem 0.5rem; margin-right: 0.2rem; border-radius: 4px; }
/* Search bar */
.search-wrap { position: relative; }
.search-wrap input { background: var(--bg-card); border: 1px solid var(--border); border-radius: 6px; padding: 6px 12px 6px 30px; color: var(--text); font-size: 13px; width: 220px; font-family: inherit; }
.search-wrap::before { content: '🔍'; position: absolute; left: 8px; top: 50%; transform: translateY(-50%); font-size: 12px; }
.search-dropdown { position: absolute; top: 100%; left: 0; right: 0; background: var(--bg-card); border: 1px solid var(--border); border-radius: 0 0 6px 6px; max-height: 300px; overflow-y: auto; z-index: 50; display: none; }
.search-dropdown.open { display: block; }
.search-result { padding: 8px 12px; cursor: pointer; font-size: 13px; border-bottom: 1px solid var(--border); }
.search-result:hover { background: var(--bg-hover); }
.search-result .sr-type { font-size: 10px; color: var(--text-muted); text-transform: uppercase; margin-right: 6px; }
.search-result .sr-detail { font-size: 11px; color: var(--text-muted); margin-left: 4px; }
.main { display: flex; flex: 1; overflow: hidden; position: relative; }
.sidebar-overlay { display: none; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: rgba(0,0,0,0.6); z-index: 20; }
.sidebar-overlay.show { display: block; }
.sidebar { width: 280px; min-width: 200px; max-width: 400px; flex-shrink: 0; display: flex; flex-direction: column; background: var(--bg-panel); border-right: 1px solid var(--border); overflow: hidden; resize: horizontal; }
.sidebar-header { padding: 0.85rem 1rem 0.6rem; font-size: 0.7rem; font-weight: 600; letter-spacing: 0.08em; text-transform: uppercase; color: var(--text-muted); border-bottom: 1px solid var(--border); flex-shrink: 0; display: flex; align-items: center; gap: 0.5rem; }
.breadcrumb { padding: 0.5rem 1rem; font-size: 0.78rem; color: var(--text-muted); white-space: nowrap; overflow: hidden; text-overflow: ellipsis; border-bottom: 1px solid var(--border); flex-shrink: 0; }
.breadcrumb span { color: var(--accent); cursor: pointer; }
.breadcrumb span:hover { text-decoration: underline; }
.file-list { flex: 1; overflow-y: auto; padding: 0.3rem 0; }
.file-list::-webkit-scrollbar { width: 4px; }
.file-list::-webkit-scrollbar-thumb { background: var(--border); border-radius: 4px; }
.file-item { display: flex; align-items: center; gap: 0.6rem; padding: 0.45rem 1rem; cursor: pointer; font-size: 0.875rem; color: var(--text-dim); user-select: none; transition: background 0.12s; }
.file-item:hover { background: var(--bg-hover); color: var(--text); }
.file-item.dir { color: var(--accent); }
.file-item .icon { font-size: 0.95rem; flex-shrink: 0; opacity: 0.8; }
.file-item .name { flex: 1; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.file-item .detail { font-size: 0.7rem; color: var(--text-muted); flex-shrink: 0; }
.file-item .add-btn { opacity: 0; font-size: 0.75rem; background: var(--bg-hover); color: var(--text); border: 1px solid var(--border); border-radius: 4px; padding: 0.2rem 0.4rem; cursor: pointer; flex-shrink: 0; }
.file-item:hover .add-btn { opacity: 1; }
.file-item .add-btn:hover { background: var(--accent); color: #fff; border-color: var(--accent); }
.queue-panel { flex: 1; display: flex; flex-direction: column; overflow: hidden; background: var(--bg-base); }
.queue-header { padding: 0.85rem 1.25rem 0.6rem; font-size: 0.7rem; font-weight: 600; letter-spacing: 0.08em; text-transform: uppercase; color: var(--text-muted); border-bottom: 1px solid var(--border); flex-shrink: 0; display: flex; align-items: center; justify-content: space-between; }
.queue-actions { display: flex; gap: 0.5rem; }
.queue-btn { font-size: 0.7rem; padding: 0.2rem 0.55rem; background: none; border: 1px solid var(--border); border-radius: 5px; color: var(--text-muted); cursor: pointer; }
.queue-btn:hover { border-color: var(--accent); color: var(--accent); }
.queue-btn.active { background: var(--accent); border-color: var(--accent); color: #fff; }
.queue-list { flex: 1; overflow-y: auto; padding: 0.3rem 0; }
.queue-list::-webkit-scrollbar { width: 4px; }
.queue-list::-webkit-scrollbar-thumb { background: var(--border); border-radius: 4px; }
.queue-item { display: flex; align-items: center; gap: 0.75rem; padding: 0.55rem 1.25rem; cursor: pointer; border-left: 2px solid transparent; transition: background 0.12s; }
.queue-item:hover { background: var(--bg-hover); }
.queue-item.playing { background: var(--bg-active); border-left-color: var(--accent); }
.queue-item.playing .qi-title { color: var(--accent); }
.queue-item .qi-index { font-size: 0.75rem; color: var(--text-muted); width: 1.5rem; text-align: right; flex-shrink: 0; }
.queue-item.playing .qi-index::before { content: '▶'; font-size: 0.6rem; color: var(--accent); }
.queue-item .qi-cover { width: 36px; height: 36px; border-radius: 5px; background: var(--bg-card); flex-shrink: 0; overflow: hidden; display: flex; align-items: center; justify-content: center; font-size: 1.1rem; }
.queue-item .qi-cover img { width: 100%; height: 100%; object-fit: cover; }
.queue-item .qi-info { flex: 1; overflow: hidden; }
.queue-item .qi-title { font-size: 0.875rem; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.queue-item .qi-artist { font-size: 0.75rem; color: var(--text-muted); overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.queue-item .qi-dur { font-size: 0.75rem; color: var(--text-muted); margin-left: auto; margin-right: 0.5rem; }
.qi-remove { background: none; border: none; font-size: 0.9rem; color: var(--text-muted); cursor: pointer; padding: 0.3rem; border-radius: 4px; opacity: 0; }
.queue-item:hover .qi-remove { opacity: 1; }
.qi-remove:hover { background: rgba(248,113,113,0.15); color: var(--danger); }
.queue-item.dragging { opacity: 0.5; }
.queue-item.drag-over { border-top: 2px solid var(--accent); margin-top: -2px; }
.queue-empty { flex: 1; display: flex; flex-direction: column; align-items: center; justify-content: center; color: var(--text-muted); font-size: 0.875rem; gap: 0.5rem; padding: 2rem; }
.queue-empty .empty-icon { font-size: 2.5rem; opacity: 0.3; }
.player-bar { background: var(--bg-panel); border-top: 1px solid var(--border); padding: 0.9rem 1.5rem; flex-shrink: 0; display: grid; grid-template-columns: 1fr 2fr 1fr; align-items: center; gap: 1rem; }
.np-info { display: flex; align-items: center; gap: 0.75rem; min-width: 0; }
.np-cover { width: 44px; height: 44px; border-radius: 6px; background: var(--bg-card); flex-shrink: 0; overflow: hidden; display: flex; align-items: center; justify-content: center; font-size: 1.3rem; }
.np-cover img { width: 100%; height: 100%; object-fit: cover; }
.np-text { min-width: 0; }
.np-title { font-size: 0.875rem; font-weight: 500; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.np-artist { font-size: 0.75rem; color: var(--text-muted); overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.controls { display: flex; flex-direction: column; align-items: center; gap: 0.5rem; }
.ctrl-btns { display: flex; align-items: center; gap: 0.5rem; }
.ctrl-btn { background: none; border: none; color: var(--text-dim); cursor: pointer; padding: 0.35rem; border-radius: 50%; display: flex; align-items: center; justify-content: center; font-size: 1rem; }
.ctrl-btn:hover { color: var(--text); background: var(--bg-hover); }
.ctrl-btn.active { color: var(--accent); }
.ctrl-btn-main { width: 38px; height: 38px; background: var(--accent); color: #fff !important; font-size: 1.1rem; box-shadow: 0 0 14px var(--accent-glow); }
.ctrl-btn-main:hover { background: var(--accent-dim) !important; }
.progress-row { display: flex; align-items: center; gap: 0.6rem; width: 100%; }
.time { font-size: 0.7rem; color: var(--text-muted); flex-shrink: 0; font-variant-numeric: tabular-nums; min-width: 2.5rem; text-align: center; }
.progress-bar { flex: 1; height: 4px; background: var(--bg-hover); border-radius: 2px; cursor: pointer; position: relative; }
.progress-fill { height: 100%; background: var(--accent); border-radius: 2px; pointer-events: none; }
.progress-fill::after { content: ''; position: absolute; right: -5px; top: 50%; transform: translateY(-50%); width: 10px; height: 10px; border-radius: 50%; background: var(--accent); box-shadow: 0 0 6px var(--accent-glow); opacity: 0; transition: opacity 0.15s; }
.progress-bar:hover .progress-fill::after { opacity: 1; }
.volume-row { display: flex; align-items: center; gap: 0.5rem; justify-content: flex-end; }
.vol-icon { font-size: 0.9rem; color: var(--text-muted); cursor: pointer; }
.volume-slider { -webkit-appearance: none; appearance: none; width: 80px; height: 4px; border-radius: 2px; background: var(--bg-hover); cursor: pointer; outline: none; }
.volume-slider::-webkit-slider-thumb { -webkit-appearance: none; width: 12px; height: 12px; border-radius: 50%; background: var(--accent); cursor: pointer; }
* { scrollbar-width: thin; scrollbar-color: var(--border) transparent; }
@keyframes spin { to { transform: rotate(360deg); } }
.spinner { display: inline-block; width: 14px; height: 14px; border: 2px solid var(--border); border-top-color: var(--accent); border-radius: 50%; animation: spin 0.7s linear infinite; }
.toast { position: fixed; bottom: 90px; right: 1.5rem; background: var(--bg-card); border: 1px solid var(--border); border-radius: 8px; padding: 0.6rem 1rem; font-size: 0.8rem; color: var(--text-dim); box-shadow: 0 8px 24px rgba(0,0,0,0.4); opacity: 0; transform: translateY(8px); transition: all 0.25s; pointer-events: none; z-index: 100; }
.toast.show { opacity: 1; transform: translateY(0); }
@media (max-width: 768px) {
.btn-menu { display: inline-block; }
.header { padding: 0.75rem 1rem; }
.sidebar { position: absolute; top: 0; bottom: 0; left: -100%; width: 85%; max-width: 320px; z-index: 30; transition: left 0.3s; box-shadow: 4px 0 20px rgba(0,0,0,0.6); }
.sidebar.open { left: 0; }
.player-bar { grid-template-columns: 1fr; gap: 0.75rem; padding: 0.75rem 1rem; }
.volume-row { display: none; }
.search-wrap input { width: 140px; }
}
</style>
</head>
<body>
<header class="header">
<div class="header-logo">
<button class="btn-menu" onclick="toggleSidebar()">&#9776;</button>
<svg viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2"><circle cx="9" cy="18" r="3"/><circle cx="18" cy="15" r="3"/><path d="M12 18V6l9-3v3"/></svg>
Furumi
<span class="header-version">v<!-- VERSION_PLACEHOLDER --></span>
</div>
<div style="display:flex;align-items:center;gap:1rem">
<div class="search-wrap">
<input id="searchInput" placeholder="Search..." oninput="onSearch(this.value)" onkeydown="if(event.key==='Escape'){closeSearch();}">
<div class="search-dropdown" id="searchDropdown"></div>
</div>
</div>
</header>
<div class="main">
<div class="sidebar-overlay" id="sidebarOverlay" onclick="toggleSidebar()"></div>
<aside class="sidebar" id="sidebar">
<div class="sidebar-header">Library</div>
<div class="breadcrumb" id="breadcrumb"><span onclick="showArtists()">Artists</span></div>
<div class="file-list" id="fileList"></div>
</aside>
<section class="queue-panel">
<div class="queue-header">
<span>Queue</span>
<div class="queue-actions">
<button class="queue-btn" id="btnShuffle" onclick="toggleShuffle()">Shuffle</button>
<button class="queue-btn active" id="btnRepeat" onclick="toggleRepeat()">Repeat</button>
<button class="queue-btn" onclick="clearQueue()">Clear</button>
</div>
</div>
<div class="queue-list" id="queueList">
<div class="queue-empty"><div class="empty-icon">&#127925;</div><div>Select an album to start</div></div>
</div>
</section>
</div>
<div class="player-bar">
<div class="np-info">
<div class="np-cover" id="npCover">&#127925;</div>
<div class="np-text">
<div class="np-title" id="npTitle">Nothing playing</div>
<div class="np-artist" id="npArtist">&mdash;</div>
</div>
</div>
<div class="controls">
<div class="ctrl-btns">
<button class="ctrl-btn" onclick="prevTrack()">&#9198;</button>
<button class="ctrl-btn ctrl-btn-main" id="btnPlayPause" onclick="togglePlay()">&#9654;</button>
<button class="ctrl-btn" onclick="nextTrack()">&#9197;</button>
</div>
<div class="progress-row">
<span class="time" id="timeElapsed">0:00</span>
<div class="progress-bar" id="progressBar" onclick="seekTo(event)">
<div class="progress-fill" id="progressFill" style="width:0%"></div>
</div>
<span class="time" id="timeDuration">0:00</span>
</div>
</div>
<div class="volume-row">
<span class="vol-icon" onclick="toggleMute()" id="volIcon">&#128266;</span>
<input type="range" class="volume-slider" id="volSlider" min="0" max="100" value="80" oninput="setVolume(this.value)">
</div>
</div>
<div class="toast" id="toast"></div>
<audio id="audioEl"></audio>
<script>
const audio = document.getElementById('audioEl');
let queue = []; // [{slug, title, artist, album_slug, duration, cover}]
let queueIndex = -1;
let shuffle = false;
let repeatAll = true;
let shuffleOrder = [];
let searchTimer = null;
// Restore prefs
try {
const v = localStorage.getItem('furumi_vol');
if (v !== null) { audio.volume = v / 100; document.getElementById('volSlider').value = v; }
shuffle = localStorage.getItem('furumi_shuffle') === '1';
repeatAll = localStorage.getItem('furumi_repeat') !== '0';
document.getElementById('btnShuffle').classList.toggle('active', shuffle);
document.getElementById('btnRepeat').classList.toggle('active', repeatAll);
} catch(e) {}
// --- Audio events ---
audio.addEventListener('timeupdate', () => {
if (audio.duration) {
document.getElementById('progressFill').style.width = (audio.currentTime / audio.duration * 100) + '%';
document.getElementById('timeElapsed').textContent = fmt(audio.currentTime);
document.getElementById('timeDuration').textContent = fmt(audio.duration);
}
});
audio.addEventListener('ended', () => nextTrack());
audio.addEventListener('play', () => document.getElementById('btnPlayPause').innerHTML = '&#9208;');
audio.addEventListener('pause', () => document.getElementById('btnPlayPause').innerHTML = '&#9654;');
audio.addEventListener('error', () => { showToast('Playback error'); nextTrack(); });
// --- API helper ---
async function api(path) {
const r = await fetch('/api' + path);
if (!r.ok) return null;
return r.json();
}
// --- Library navigation ---
async function showArtists() {
setBreadcrumb([{label: 'Artists', action: 'showArtists()'}]);
const el = document.getElementById('fileList');
el.innerHTML = '<div style="padding:2rem;text-align:center"><div class="spinner"></div></div>';
const artists = await api('/artists');
if (!artists) { el.innerHTML = '<div style="padding:1rem;color:var(--danger)">Error</div>'; return; }
el.innerHTML = '';
artists.forEach(a => {
const div = document.createElement('div');
div.className = 'file-item dir';
div.innerHTML = `<span class="icon">&#128100;</span><span class="name">${esc(a.name)}</span><span class="detail">${a.album_count} albums</span>`;
div.onclick = () => showArtistAlbums(a.slug, a.name);
el.appendChild(div);
});
}
async function showArtistAlbums(artistSlug, artistName) {
setBreadcrumb([
{label: 'Artists', action: 'showArtists()'},
{label: artistName, action: `showArtistAlbums('${artistSlug}','${esc(artistName)}')`}
]);
const el = document.getElementById('fileList');
el.innerHTML = '<div style="padding:2rem;text-align:center"><div class="spinner"></div></div>';
const albums = await api('/artists/' + artistSlug + '/albums');
if (!albums) { el.innerHTML = '<div style="padding:1rem;color:var(--danger)">Error</div>'; return; }
el.innerHTML = '';
// "Play all" button
const allBtn = document.createElement('div');
allBtn.className = 'file-item';
allBtn.innerHTML = '<span class="icon">&#9654;</span><span class="name" style="color:var(--accent);font-weight:500">Play all tracks</span>';
allBtn.onclick = () => playAllArtistTracks(artistSlug);
el.appendChild(allBtn);
albums.forEach(a => {
const div = document.createElement('div');
div.className = 'file-item dir';
const year = a.year ? `(${a.year})` : '';
div.innerHTML = `<span class="icon">&#128191;</span><span class="name">${esc(a.name)} ${year}</span>
<span class="detail">${a.track_count} tracks</span>
<button class="add-btn" title="Add album to queue">&#10133;</button>`;
div.querySelector('.add-btn').onclick = (ev) => { ev.stopPropagation(); addAlbumToQueue(a.slug); };
div.onclick = () => showAlbumTracks(a.slug, a.name, artistSlug, artistName);
el.appendChild(div);
});
}
async function showAlbumTracks(albumSlug, albumName, artistSlug, artistName) {
setBreadcrumb([
{label: 'Artists', action: 'showArtists()'},
{label: artistName, action: `showArtistAlbums('${artistSlug}','${esc(artistName)}')`},
{label: albumName}
]);
const el = document.getElementById('fileList');
el.innerHTML = '<div style="padding:2rem;text-align:center"><div class="spinner"></div></div>';
const tracks = await api('/albums/' + albumSlug);
if (!tracks) { el.innerHTML = '<div style="padding:1rem;color:var(--danger)">Failed to load tracks</div>'; return; }
el.innerHTML = '';
// "Play album" button
const allBtn = document.createElement('div');
allBtn.className = 'file-item';
allBtn.innerHTML = '<span class="icon">&#9654;</span><span class="name" style="color:var(--accent);font-weight:500">Play album</span>';
allBtn.onclick = () => addAlbumToQueue(albumSlug, true);
el.appendChild(allBtn);
tracks.forEach(t => {
const div = document.createElement('div');
div.className = 'file-item';
const num = t.track_number ? t.track_number + '. ' : '';
const dur = t.duration_secs ? fmt(t.duration_secs) : '';
div.innerHTML = `<span class="icon">&#127925;</span><span class="name">${num}${esc(t.title)}</span>
<span class="detail">${dur}</span>`;
div.onclick = () => {
addTrackToQueue({slug: t.slug, title: t.title, artist: t.artist_name, album_slug: albumSlug, duration: t.duration_secs}, true);
};
el.appendChild(div);
});
}
function setBreadcrumb(parts) {
const el = document.getElementById('breadcrumb');
el.innerHTML = parts.map((p, i) => {
if (i < parts.length - 1 && p.action) {
return `<span onclick="${p.action}">${esc(p.label)}</span>`;
}
return esc(p.label);
}).join(' / ');
}
// --- Queue management ---
function addTrackToQueue(track, playNow) {
const existing = queue.findIndex(t => t.slug === track.slug);
if (existing !== -1) {
if (playNow) playIndex(existing);
return;
}
queue.push(track);
renderQueue();
if (playNow || (queueIndex === -1 && queue.length === 1)) {
playIndex(queue.length - 1);
}
}
async function addAlbumToQueue(albumSlug, playFirst) {
const tracks = await api('/albums/' + albumSlug);
if (!tracks || !tracks.length) return;
const firstIdx = queue.length;
let added = 0;
tracks.forEach(t => {
if (queue.find(q => q.slug === t.slug)) return; // skip tracks already queued
queue.push({slug: t.slug, title: t.title, artist: t.artist_name, album_slug: albumSlug, duration: t.duration_secs});
added++;
});
renderQueue();
if (firstIdx < queue.length && (playFirst || queueIndex === -1)) playIndex(firstIdx);
showToast(added ? `Added ${added} track${added === 1 ? '' : 's'}` : 'Album already in queue');
}
async function playAllArtistTracks(artistSlug) {
const tracks = await api('/artists/' + artistSlug + '/tracks');
if (!tracks || !tracks.length) return;
clearQueue();
tracks.forEach(t => {
queue.push({slug: t.slug, title: t.title, artist: t.artist_name, album_slug: t.album_slug, duration: t.duration_secs});
});
renderQueue();
playIndex(0);
showToast(`Playing ${tracks.length} track${tracks.length === 1 ? '' : 's'}`);
}
function playIndex(i) {
if (i < 0 || i >= queue.length) return;
queueIndex = i;
const track = queue[i];
audio.src = '/api/stream/' + track.slug;
audio.play().catch(() => {});
updateNowPlaying(track);
renderQueue();
scrollQueueToActive();
history.replaceState(null, '', '?t=' + track.slug);
}
function updateNowPlaying(track) {
if (!track) {
document.getElementById('npTitle').textContent = 'Nothing playing';
document.getElementById('npArtist').textContent = '\u2014';
document.getElementById('npCover').innerHTML = '&#127925;';
document.title = 'Furumi Player';
return;
}
document.getElementById('npTitle').textContent = track.title;
document.getElementById('npArtist').textContent = track.artist || '\u2014';
document.title = track.title + ' \u2014 Furumi';
const cover = document.getElementById('npCover');
const coverUrl = '/api/tracks/' + track.slug + '/cover';
cover.innerHTML = `<img src="${coverUrl}" alt="" onerror="this.parentElement.innerHTML='&#127925;'">`;
if ('mediaSession' in navigator) {
navigator.mediaSession.metadata = new MediaMetadata({
title: track.title,
artist: track.artist || '',
album: '',
artwork: [{src: coverUrl, sizes: '512x512'}]
});
}
}
function renderQueue() {
const el = document.getElementById('queueList');
if (!queue.length) {
el.innerHTML = '<div class="queue-empty"><div class="empty-icon">&#127925;</div><div>Select an album to start</div></div>';
return;
}
const order = currentOrder();
el.innerHTML = '';
order.forEach((origIdx, pos) => {
const t = queue[origIdx];
const isPlaying = origIdx === queueIndex;
const div = document.createElement('div');
div.className = 'queue-item' + (isPlaying ? ' playing' : '');
const coverSrc = t.album_slug ? `/api/tracks/${t.slug}/cover` : '';
const coverHtml = coverSrc
? `<img src="${coverSrc}" alt="" onerror="this.parentElement.innerHTML='&#127925;'">`
: '&#127925;';
const dur = t.duration ? fmt(t.duration) : '';
div.innerHTML = `
<span class="qi-index">${isPlaying ? '' : pos + 1}</span>
<div class="qi-cover">${coverHtml}</div>
<div class="qi-info"><div class="qi-title">${esc(t.title)}</div><div class="qi-artist">${esc(t.artist || '')}</div></div>
<span class="qi-dur">${dur}</span>
<button class="qi-remove" onclick="removeFromQueue(${origIdx},event)">&#10005;</button>
`;
div.addEventListener('click', () => playIndex(origIdx));
// Drag & drop
div.draggable = true;
div.addEventListener('dragstart', e => { e.dataTransfer.setData('text/plain', pos); div.classList.add('dragging'); });
div.addEventListener('dragend', () => { div.classList.remove('dragging'); el.querySelectorAll('.queue-item').forEach(x => x.classList.remove('drag-over')); });
div.addEventListener('dragover', e => { e.preventDefault(); });
div.addEventListener('dragenter', () => div.classList.add('drag-over'));
div.addEventListener('dragleave', () => div.classList.remove('drag-over'));
div.addEventListener('drop', e => { e.preventDefault(); div.classList.remove('drag-over'); const from = parseInt(e.dataTransfer.getData('text/plain'), 10); if (!isNaN(from)) moveQueueItem(from, pos); });
el.appendChild(div);
});
}
function scrollQueueToActive() {
const el = document.querySelector('.queue-item.playing');
if (el) el.scrollIntoView({behavior: 'smooth', block: 'nearest'});
}
function removeFromQueue(idx, ev) {
if (ev) ev.stopPropagation();
if (idx === queueIndex) { queueIndex = -1; audio.pause(); audio.src = ''; updateNowPlaying(null); }
else if (queueIndex > idx) queueIndex--;
queue.splice(idx, 1);
if (shuffle) { const si = shuffleOrder.indexOf(idx); if (si !== -1) shuffleOrder.splice(si, 1); for (let i = 0; i < shuffleOrder.length; i++) if (shuffleOrder[i] > idx) shuffleOrder[i]--; }
renderQueue();
}
function moveQueueItem(from, to) {
if (from === to) return;
if (shuffle) { const item = shuffleOrder.splice(from, 1)[0]; shuffleOrder.splice(to, 0, item); }
else { const item = queue.splice(from, 1)[0]; queue.splice(to, 0, item); if (queueIndex === from) queueIndex = to; else if (from < queueIndex && to >= queueIndex) queueIndex--; else if (from > queueIndex && to <= queueIndex) queueIndex++; }
renderQueue();
}
function clearQueue() {
queue = []; queueIndex = -1; shuffleOrder = [];
audio.pause(); audio.src = '';
updateNowPlaying(null);
document.title = 'Furumi Player';
renderQueue();
}
// --- Playback controls ---
function togglePlay() {
if (!audio.src && queue.length) { playIndex(queueIndex === -1 ? 0 : queueIndex); return; }
if (audio.paused) audio.play(); else audio.pause();
}
function nextTrack() {
if (!queue.length) return;
const order = currentOrder();
const pos = order.indexOf(queueIndex);
if (pos < order.length - 1) playIndex(order[pos + 1]);
else if (repeatAll) { if (shuffle) buildShuffleOrder(); playIndex(currentOrder()[0]); }
}
function prevTrack() {
if (!queue.length) return;
if (audio.currentTime > 3) { audio.currentTime = 0; return; }
const order = currentOrder();
const pos = order.indexOf(queueIndex);
if (pos > 0) playIndex(order[pos - 1]);
else if (repeatAll) playIndex(order[order.length - 1]);
}
function toggleShuffle() { shuffle = !shuffle; if (shuffle) buildShuffleOrder(); document.getElementById('btnShuffle').classList.toggle('active', shuffle); localStorage.setItem('furumi_shuffle', shuffle ? '1' : '0'); renderQueue(); }
function toggleRepeat() { repeatAll = !repeatAll; document.getElementById('btnRepeat').classList.toggle('active', repeatAll); localStorage.setItem('furumi_repeat', repeatAll ? '1' : '0'); }
function buildShuffleOrder() { shuffleOrder = [...Array(queue.length).keys()]; for (let i = shuffleOrder.length - 1; i > 0; i--) { const j = Math.floor(Math.random() * (i + 1)); [shuffleOrder[i], shuffleOrder[j]] = [shuffleOrder[j], shuffleOrder[i]]; } if (queueIndex !== -1) { const ci = shuffleOrder.indexOf(queueIndex); if (ci > 0) { shuffleOrder.splice(ci, 1); shuffleOrder.unshift(queueIndex); } } }
function currentOrder() { if (!shuffle) return [...Array(queue.length).keys()]; if (shuffleOrder.length !== queue.length) buildShuffleOrder(); return shuffleOrder; }
// --- Seek & Volume ---
function seekTo(e) { if (!audio.duration) return; const bar = document.getElementById('progressBar'); const pct = (e.clientX - bar.getBoundingClientRect().left) / bar.offsetWidth; audio.currentTime = pct * audio.duration; }
let muted = false;
function toggleMute() { muted = !muted; audio.muted = muted; document.getElementById('volIcon').innerHTML = muted ? '&#128263;' : '&#128266;'; }
function setVolume(v) { const n = Number(v); audio.volume = n / 100; document.getElementById('volIcon').innerHTML = n === 0 ? '&#128263;' : '&#128266;'; localStorage.setItem('furumi_vol', n); }
// --- Search ---
function onSearch(q) {
clearTimeout(searchTimer);
if (q.length < 2) { closeSearch(); return; }
searchTimer = setTimeout(async () => {
const results = await api('/search?q=' + encodeURIComponent(q));
if (!results || !results.length) { closeSearch(); return; }
const dd = document.getElementById('searchDropdown');
dd.innerHTML = results.map(r => {
const detail = r.detail ? `<span class="sr-detail">${esc(r.detail)}</span>` : '';
return `<div class="search-result" onclick="onSearchSelect('${esc(r.result_type)}','${esc(r.slug)}')"><span class="sr-type">${esc(r.result_type)}</span>${esc(r.name)}${detail}</div>`;
}).join('');
dd.classList.add('open');
}, 250);
}
function closeSearch() { document.getElementById('searchDropdown').classList.remove('open'); }
function onSearchSelect(type, slug) {
closeSearch();
document.getElementById('searchInput').value = '';
if (type === 'artist') showArtistAlbums(slug, '');
else if (type === 'album') addAlbumToQueue(slug, true);
else if (type === 'track') {
addTrackToQueue({slug, title: '', artist: '', album_slug: null, duration: null}, true);
// Backfill metadata from /tracks/:slug (the search result only carries the slug)
api('/tracks/' + slug).then(info => {
const t = info && queue.find(q => q.slug === slug);
if (!t) return;
Object.assign(t, {title: info.title, artist: info.artist_name, album_slug: info.album_slug, duration: info.duration_secs});
renderQueue(); if (queue[queueIndex] === t) updateNowPlaying(t);
});
}
}
// --- Helpers ---
function fmt(secs) { if (!secs || isNaN(secs)) return '0:00'; const s = Math.floor(secs); const m = Math.floor(s / 60); const h = Math.floor(m / 60); if (h > 0) return h + ':' + pad(m % 60) + ':' + pad(s % 60); return m + ':' + pad(s % 60); }
function pad(n) { return String(n).padStart(2, '0'); }
function esc(s) { return String(s||'').replace(/&/g,'&amp;').replace(/</g,'&lt;').replace(/>/g,'&gt;').replace(/"/g,'&quot;').replace(/'/g,'&#39;'); }
let toastTimer;
function showToast(msg) { const t = document.getElementById('toast'); t.textContent = msg; t.classList.add('show'); clearTimeout(toastTimer); toastTimer = setTimeout(() => t.classList.remove('show'), 2500); }
function toggleSidebar() { document.getElementById('sidebar').classList.toggle('open'); document.getElementById('sidebarOverlay').classList.toggle('show'); }
// --- MediaSession ---
if ('mediaSession' in navigator) {
navigator.mediaSession.setActionHandler('play', togglePlay);
navigator.mediaSession.setActionHandler('pause', togglePlay);
navigator.mediaSession.setActionHandler('previoustrack', prevTrack);
navigator.mediaSession.setActionHandler('nexttrack', nextTrack);
navigator.mediaSession.setActionHandler('seekto', d => { if (d.seekTime != null && audio.duration) audio.currentTime = d.seekTime; });
}
// --- Init ---
(async () => {
const urlSlug = new URLSearchParams(window.location.search).get('t');
if (urlSlug) {
const info = await api('/tracks/' + urlSlug);
if (info) {
addTrackToQueue({
slug: info.slug,
title: info.title,
artist: info.artist_name,
album_slug: info.album_slug,
duration: info.duration_secs
}, true);
}
}
showArtists();
})();
</script>
</body>
</html>