From 6eb7129637910a592ed16db6f6fafe7657c7a0c3 Mon Sep 17 00:00:00 2001 From: xCyanGrizzly Date: Wed, 25 Mar 2026 21:40:13 +0100 Subject: [PATCH] docs: add package grouping design spec and implementation plan Co-Authored-By: Claude Opus 4.6 (1M context) --- ...rch-indicators-size-limit-skipped-files.md | 1580 +++++++++++++++++ .../plans/2026-03-25-package-grouping.md | 1343 ++++++++++++++ ...icators-size-limit-skipped-files-design.md | 241 +++ .../2026-03-25-package-grouping-design.md | 246 +++ 4 files changed, 3410 insertions(+) create mode 100644 docs/superpowers/plans/2026-03-24-search-indicators-size-limit-skipped-files.md create mode 100644 docs/superpowers/plans/2026-03-25-package-grouping.md create mode 100644 docs/superpowers/specs/2026-03-24-search-indicators-size-limit-skipped-files-design.md create mode 100644 docs/superpowers/specs/2026-03-25-package-grouping-design.md diff --git a/docs/superpowers/plans/2026-03-24-search-indicators-size-limit-skipped-files.md b/docs/superpowers/plans/2026-03-24-search-indicators-size-limit-skipped-files.md new file mode 100644 index 0000000..b16ac0c --- /dev/null +++ b/docs/superpowers/plans/2026-03-24-search-indicators-size-limit-skipped-files.md @@ -0,0 +1,1580 @@ +# Search Match Indicators, Size Limit Increase, Skipped/Failed Files Implementation Plan + +> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking. + +**Goal:** Add search match indicators to the STL files table, raise the ingestion size limit to 200 GB, and track skipped/failed archives with a retry UI. + +**Architecture:** Three independent features sharing one migration. Feature 1 (size limit) is a one-line config change. Feature 2 (search indicators) modifies `searchPackages()` to return per-package match counts and pipes that through to the table and file drawer. 
Feature 3 (skipped files) adds a new `SkippedPackage` model, worker-side recording, and a UI tab with retry capability. + +**Tech Stack:** Prisma 7.4, Next.js 16 (App Router), TanStack Table, shadcn/ui, TypeScript 5.9 + +**Spec:** `docs/superpowers/specs/2026-03-24-search-indicators-size-limit-skipped-files-design.md` + +--- + +## File Structure + +### Create +- `src/app/(app)/stls/_components/skipped-packages-tab.tsx` — Skipped/failed packages table with retry buttons +- `src/app/(app)/stls/_components/skipped-columns.tsx` — Column definitions for skipped packages table + +### Modify +- `worker/src/util/config.ts` — Raise default `maxZipSizeMB` from 4096 to 204800 +- `prisma/schema.prisma` — Add `SkipReason` enum, `SkippedPackage` model, reverse relations +- `worker/src/worker.ts` — Add `accountId` to `PipelineContext`, record skips/failures, clean up on success +- `worker/src/db/queries.ts` — Add `upsertSkippedPackage()` and `deleteSkippedPackage()` functions +- `src/lib/telegram/types.ts` — Add `matchedFileCount`/`matchedByContent` to `PackageListItem`, add `SkippedPackageItem` type +- `src/lib/telegram/queries.ts` — Modify `searchPackages()` for grouped counts, add skipped package queries +- `src/app/(app)/stls/page.tsx` — Pass search term, fetch skipped count +- `src/app/(app)/stls/_components/stl-table.tsx` — Accept search prop, pass to columns/drawer, add tabs +- `src/app/(app)/stls/_components/package-columns.tsx` — Add `matchedFileCount`/`matchedByContent` to `PackageRow`, render match badge +- `src/app/(app)/stls/_components/package-files-drawer.tsx` — Accept `highlightTerm`, highlight matching files, auto-expand matched folders +- `src/app/(app)/stls/actions.ts` — Add retry server actions + +--- + +## Task 1: Raise Ingestion Size Limit + +**Files:** +- Modify: `worker/src/util/config.ts:6` + +- [ ] **Step 1: Change the default max size** + +In `worker/src/util/config.ts`, change line 6: + +```typescript +// Before: +maxZipSizeMB: 
parseInt(process.env.WORKER_MAX_ZIP_SIZE_MB ?? "4096", 10), +// After: +maxZipSizeMB: parseInt(process.env.WORKER_MAX_ZIP_SIZE_MB ?? "204800", 10), +``` + +- [ ] **Step 2: Verify worker builds** + +Run: `cd worker && npx tsc --noEmit` +Expected: No errors + +- [ ] **Step 3: Commit** + +```bash +git add worker/src/util/config.ts +git commit -m "feat: raise default ingestion size limit from 4GB to 200GB" +``` + +--- + +## Task 2: Prisma Schema — SkippedPackage Model + +**Files:** +- Modify: `prisma/schema.prisma` + +- [ ] **Step 1: Add SkipReason enum and SkippedPackage model** + +Add after the `ArchiveExtractRequest` model (end of file area) in `prisma/schema.prisma`: + +```prisma +enum SkipReason { + SIZE_LIMIT + DOWNLOAD_FAILED + EXTRACT_FAILED + UPLOAD_FAILED +} + +model SkippedPackage { + id String @id @default(cuid()) + fileName String + fileSize BigInt + reason SkipReason + errorMessage String? + sourceChannelId String + sourceChannel TelegramChannel @relation(fields: [sourceChannelId], references: [id], onDelete: Cascade) + sourceMessageId BigInt + sourceTopicId BigInt? 
+ isMultipart Boolean @default(false) + partCount Int @default(1) + accountId String + account TelegramAccount @relation(fields: [accountId], references: [id], onDelete: Cascade) + createdAt DateTime @default(now()) + + @@unique([sourceChannelId, sourceMessageId]) + @@index([reason]) + @@index([accountId]) + @@map("skipped_packages") +} +``` + +- [ ] **Step 2: Add reverse relations to existing models** + +In `TelegramAccount` model (line ~401-418), add inside the relations block (after `fetchRequests`): + +```prisma + skippedPackages SkippedPackage[] +``` + +In `TelegramChannel` model (line ~420-437), add inside the relations block (after `packages`): + +```prisma + skippedPackages SkippedPackage[] +``` + +- [ ] **Step 3: Generate Prisma client and verify** + +Run: `npx prisma generate` +Expected: Success, no errors + +- [ ] **Step 4: Create migration** + +Run: `npx prisma migrate dev --name add-skipped-packages` +Expected: Migration created successfully + +- [ ] **Step 5: Commit** + +```bash +git add prisma/ +git commit -m "feat: add SkippedPackage model for tracking skipped/failed archives" +``` + +--- + +## Task 3: Worker — Record Skipped/Failed Archives + +**Files:** +- Modify: `worker/src/db/queries.ts` +- Modify: `worker/src/worker.ts:279-298` (PipelineContext), `worker/src/worker.ts:436-448` (pipelineCtx creation), `worker/src/worker.ts:781-802` (size guard), `worker/src/worker.ts:726-732` (set failure catch) + +- [ ] **Step 1: Add worker DB functions for skipped packages** + +In `worker/src/db/queries.ts`, add these functions: + +```typescript +export async function upsertSkippedPackage(data: { + fileName: string; + fileSize: bigint; + reason: "SIZE_LIMIT" | "DOWNLOAD_FAILED" | "EXTRACT_FAILED" | "UPLOAD_FAILED"; + errorMessage?: string; + sourceChannelId: string; + sourceMessageId: bigint; + sourceTopicId?: bigint | null; + isMultipart: boolean; + partCount: number; + accountId: string; +}) { + return db.skippedPackage.upsert({ + where: { + 
sourceChannelId_sourceMessageId: { + sourceChannelId: data.sourceChannelId, + sourceMessageId: data.sourceMessageId, + }, + }, + update: { + reason: data.reason, + errorMessage: data.errorMessage ?? null, + fileName: data.fileName, + fileSize: data.fileSize, + createdAt: new Date(), + }, + create: { + fileName: data.fileName, + fileSize: data.fileSize, + reason: data.reason, + errorMessage: data.errorMessage ?? null, + sourceChannelId: data.sourceChannelId, + sourceMessageId: data.sourceMessageId, + sourceTopicId: data.sourceTopicId ?? null, + isMultipart: data.isMultipart, + partCount: data.partCount, + accountId: data.accountId, + }, + }); +} + +export async function deleteSkippedPackage( + sourceChannelId: string, + sourceMessageId: bigint +) { + return db.skippedPackage.deleteMany({ + where: { sourceChannelId, sourceMessageId }, + }); +} +``` + +- [ ] **Step 2: Add `accountId` to PipelineContext** + +In `worker/src/worker.ts`, add `accountId` to the `PipelineContext` interface (line ~279-298): + +```typescript +interface PipelineContext { + client: Client; + runId: string; + accountId: string; // <-- ADD THIS + channelTitle: string; + channel: TelegramChannel; + // ... rest unchanged +} +``` + +And add it to the `pipelineCtx` creation (line ~436-448): + +```typescript +const pipelineCtx: PipelineContext = { + client, + runId: activeRunId, + accountId: account.id, // <-- ADD THIS + channelTitle: channel.title, + // ... 
rest unchanged +}; +``` + +- [ ] **Step 3: Record SIZE_LIMIT skips** + +In `worker/src/worker.ts` at the size guard (line ~784-802), after the `updateRunActivity` call and before the `return`, add: + +```typescript + await upsertSkippedPackage({ + fileName: archiveName, + fileSize: totalArchiveSize, + reason: "SIZE_LIMIT", + sourceChannelId: channel.id, + sourceMessageId: archiveSet.parts[0].id, + sourceTopicId: ctx.sourceTopicId, + isMultipart: archiveSet.isMultipart, + partCount: archiveSet.parts.length, + accountId: ctx.accountId, + }); +``` + +Add the import at top of worker.ts: +```typescript +import { upsertSkippedPackage, deleteSkippedPackage } from "./db/queries.js"; +``` + +- [ ] **Step 4: Record processing failures in the catch block** + +In `worker/src/worker.ts` at the archive set failure catch (the `processArchiveSets` function, line ~726-732), enhance the catch block: + +```typescript + } catch (setErr) { + // If a set fails, do NOT advance the watermark past it + accountLog.warn( + { err: setErr, baseName: archiveSets[setIdx].baseName }, + "Archive set failed, watermark will not advance past this set" + ); + // Record the failure for visibility in the UI + try { + const archiveSet = archiveSets[setIdx]; + const totalSize = archiveSet.parts.reduce((sum, p) => sum + p.fileSize, 0n); + await upsertSkippedPackage({ + fileName: archiveSet.parts[0].fileName, + fileSize: totalSize, + reason: "DOWNLOAD_FAILED", // Catch-all for any pipeline failure at this level + errorMessage: setErr instanceof Error ? 
setErr.message : String(setErr), + sourceChannelId: ctx.channel.id, + sourceMessageId: archiveSet.parts[0].id, + sourceTopicId: ctx.sourceTopicId, + isMultipart: archiveSet.isMultipart, + partCount: archiveSet.parts.length, + accountId: ctx.accountId, + }); + } catch { + // Best-effort — don't fail the run if skip recording fails + } + } +``` + +- [ ] **Step 5: Clean up skip records on successful ingestion** + +In `worker/src/worker.ts`, in `processOneArchiveSet`, after the `createPackageWithFiles` call succeeds (near the end of the function where `counters.zipsIngested++` is), add: + +```typescript + // Clean up any prior skip record for this archive + await deleteSkippedPackage(channel.id, archiveSet.parts[0].id); +``` + +- [ ] **Step 6: Verify worker builds** + +Run: `cd worker && npx tsc --noEmit` +Expected: No errors + +- [ ] **Step 7: Commit** + +```bash +git add worker/src/db/queries.ts worker/src/worker.ts +git commit -m "feat: record skipped/failed archives in database for UI visibility" +``` + +--- + +## Task 4: Search Match Indicators — Backend + +**Files:** +- Modify: `src/lib/telegram/types.ts:1-17` +- Modify: `src/lib/telegram/queries.ts:165-257` + +- [ ] **Step 1: Add match fields to PackageListItem** + +In `src/lib/telegram/types.ts`, add to the `PackageListItem` interface (after `sourceChannel`): + +```typescript + matchedFileCount: number; + matchedByContent: boolean; +``` + +- [ ] **Step 2: Update listPackages to include default match fields** + +In `src/lib/telegram/queries.ts`, in the `listPackages` function's mapping (line ~47-60), add the two default fields: + +```typescript + const mapped: PackageListItem[] = items.map((pkg) => ({ + // ... existing fields ... 
+ sourceChannel: pkg.sourceChannel, + matchedFileCount: 0, + matchedByContent: false, + })); +``` + +- [ ] **Step 3: Rewrite searchPackages to return match counts** + +Replace the `searchPackages` function in `src/lib/telegram/queries.ts` (lines 165-257): + +```typescript +export async function searchPackages(options: { + query: string; + page: number; + limit: number; + searchIn: "packages" | "files" | "both"; +}) { + const q = options.query; + + if (options.searchIn === "files" || options.searchIn === "both") { + // Get per-package file match counts + const fileMatches = await prisma.packageFile.groupBy({ + by: ["packageId"], + where: { + OR: [ + { fileName: { contains: q, mode: "insensitive" } }, + { path: { contains: q, mode: "insensitive" } }, + ], + }, + _count: { _all: true }, + }); + + const fileMatchMap = new Map( + fileMatches.map((m) => [m.packageId, m._count._all]) + ); + const fileMatchedIds = fileMatches.map((f) => f.packageId); + + const packageNameIds = + options.searchIn === "both" + ? 
( + await prisma.package.findMany({ + where: { fileName: { contains: q, mode: "insensitive" } }, + select: { id: true }, + }) + ).map((p) => p.id) + : []; + + const allIds = [...new Set([...fileMatchedIds, ...packageNameIds])]; + + const [items, total] = await Promise.all([ + prisma.package.findMany({ + where: { id: { in: allIds } }, + orderBy: { indexedAt: "desc" }, + skip: (options.page - 1) * options.limit, + take: options.limit, + select: { + id: true, + fileName: true, + fileSize: true, + contentHash: true, + archiveType: true, + fileCount: true, + isMultipart: true, + indexedAt: true, + creator: true, + tags: true, + previewData: true, + sourceChannel: { select: { id: true, title: true } }, + }, + }), + Promise.resolve(allIds.length), + ]); + + const mapped: PackageListItem[] = items.map((pkg) => ({ + id: pkg.id, + fileName: pkg.fileName, + fileSize: pkg.fileSize.toString(), + contentHash: pkg.contentHash, + archiveType: pkg.archiveType, + fileCount: pkg.fileCount, + isMultipart: pkg.isMultipart, + hasPreview: pkg.previewData !== null, + creator: pkg.creator, + tags: pkg.tags, + indexedAt: pkg.indexedAt.toISOString(), + sourceChannel: pkg.sourceChannel, + matchedFileCount: fileMatchMap.get(pkg.id) ?? 0, + matchedByContent: fileMatchMap.has(pkg.id), + })); + + return { + items: mapped, + pagination: { + page: options.page, + limit: options.limit, + total, + totalPages: Math.ceil(total / options.limit), + }, + }; + } + + // Search packages only + return listPackages({ + page: options.page, + limit: options.limit, + sortBy: "indexedAt", + order: "desc", + }); +} +``` + +- [ ] **Step 4: Verify app builds** + +Run: `npx tsc --noEmit` (from project root) +Expected: Errors about `PackageRow` missing the new fields — this is expected, we fix it in the next task. 
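+The id-merging in Step 3 can be sanity-checked in isolation. Below is a small sketch (not part of the plan's files) of the same union-and-count logic; `FileMatchGroup` mirrors the row shape returned by `prisma.packageFile.groupBy`, and `mergeMatches` is a hypothetical helper name:

```typescript
// Mirrors the shape of prisma.packageFile.groupBy({ by: ["packageId"], _count: { _all: true } })
interface FileMatchGroup {
  packageId: string;
  _count: { _all: number };
}

// Hypothetical helper: union content-matched and name-matched package ids,
// keeping a per-package count of matching files for the badge.
function mergeMatches(
  fileMatches: FileMatchGroup[],
  packageNameIds: string[]
): { allIds: string[]; fileMatchMap: Map<string, number> } {
  const fileMatchMap = new Map(
    fileMatches.map((m) => [m.packageId, m._count._all])
  );
  // The Set de-duplicates packages matched both by name and by content.
  const allIds = [
    ...new Set([...fileMatches.map((m) => m.packageId), ...packageNameIds]),
  ];
  return { allIds, fileMatchMap };
}
```

+A package that matched only by name ends up in `allIds` but not in `fileMatchMap`, which is why Step 3 derives `matchedByContent` from `fileMatchMap.has(pkg.id)` and defaults `matchedFileCount` to 0.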
+- [ ] **Step 5: Commit**
+
+```bash
+git add src/lib/telegram/types.ts src/lib/telegram/queries.ts
+git commit -m "feat: return per-package file match counts from searchPackages"
+```
+
+---
+
+## Task 5: Search Match Indicators — Frontend (Table)
+
+**Files:**
+- Modify: `src/app/(app)/stls/page.tsx:25-53`
+- Modify: `src/app/(app)/stls/_components/stl-table.tsx:26-32, 34-40, 78-110, 116-168`
+- Modify: `src/app/(app)/stls/_components/package-columns.tsx:10-26, 28-32, 61-65, 75-88`
+
+- [ ] **Step 1: Add match fields to PackageRow**
+
+In `src/app/(app)/stls/_components/package-columns.tsx`, add to the `PackageRow` interface (after `sourceChannel`, line ~22-26):
+
+```typescript
+  matchedFileCount: number;
+  matchedByContent: boolean;
+```
+
+- [ ] **Step 2: Add searchTerm to PackageColumnsProps and render match badge**
+
+In `src/app/(app)/stls/_components/package-columns.tsx`, add `searchTerm` to `PackageColumnsProps` (line ~28-32):
+
+```typescript
+interface PackageColumnsProps {
+  onViewFiles: (pkg: PackageRow) => void;
+  onSetCreator: (pkg: PackageRow) => void;
+  onSetTags: (pkg: PackageRow) => void;
+  searchTerm: string;
+}
+```
+
+Update the `getPackageColumns` destructuring to include `searchTerm`:
+
+```typescript
+export function getPackageColumns({
+  onViewFiles,
+  onSetCreator,
+  onSetTags,
+  searchTerm,
+}: PackageColumnsProps): ColumnDef<PackageRow>[] {
+```
+
+Update the `fileName` column cell (line ~78-88) to render the match badge (badge styling is illustrative):
+
+```typescript
+  {
+    accessorKey: "fileName",
+    header: ({ column }) => <DataTableColumnHeader column={column} title="Name" />,
+    cell: ({ row }) => (
+      <div className="flex items-center gap-2">
+        <span className="truncate font-medium">{row.original.fileName}</span>
+        {row.original.isMultipart && (
+          <Badge variant="outline" className="text-xs">
+            Multi
+          </Badge>
+        )}
+        {searchTerm && row.original.matchedByContent && (
+          <Badge variant="secondary" className="shrink-0 text-xs">
+            {row.original.matchedFileCount} matching
+          </Badge>
+        )}
+      </div>
+    ),
+    enableHiding: false,
+  },
+```
+
+- [ ] **Step 3: Pass searchTerm from page to StlTable**
+
+In `src/app/(app)/stls/page.tsx`, pass `search` to `StlTable` (line ~45-53):
+
+```typescript
+  return (
+    <StlTable
+      data={result.items}
+      pageCount={result.pagination.totalPages}
+      totalCount={result.pagination.total}
+      ingestionStatus={ingestionStatus}
+      availableTags={availableTags}
+      searchTerm={search}
+    />
+  );
+```
+
+- [ ] **Step 4: Accept searchTerm in StlTable and pipe to columns/drawer**
+
+In `src/app/(app)/stls/_components/stl-table.tsx`:
+
+Add `searchTerm` to `StlTableProps` (line ~26-32):
+
+```typescript
+interface StlTableProps {
+  data: PackageRow[];
+  pageCount: number;
+  totalCount: number;
+  ingestionStatus: IngestionAccountStatus[];
+  availableTags: string[];
+  searchTerm: string;
+}
+```
+
+Add `searchTerm` to the destructured props (line ~34-40):
+
+```typescript
+export function StlTable({
+  data,
+  pageCount,
+  totalCount,
+  ingestionStatus,
+  availableTags,
+  searchTerm,
+}: StlTableProps) {
+```
+
+Pass `searchTerm` to `getPackageColumns` (line ~78):
+
+```typescript
+  const columns = getPackageColumns({
+    onViewFiles: (pkg) => setViewPkg(pkg),
+    // ... existing handlers unchanged ...
+    searchTerm,
+  });
+```
+
+**Note:** Do NOT pass `highlightTerm` to `PackageFilesDrawer` yet — that prop is added in Task 6. It will be wired up in Task 6 Step 6, after the drawer itself has been updated.
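+The badge cell in Step 2 is easier to unit-test if its logic is factored into a pure helper. The snippet below is an optional suggestion, not an existing file, and the label wording is an assumption; the plan only requires that `matchedFileCount` be shown when a search term is active and `matchedByContent` is true:

```typescript
// Suggested (hypothetical) label helper for the match badge.
// Returns null when the badge should not render at all.
function matchBadgeLabel(
  searchTerm: string,
  row: { matchedByContent: boolean; matchedFileCount: number }
): string | null {
  // No badge without an active search, and none for name-only matches.
  if (!searchTerm || !row.matchedByContent) return null;
  const n = row.matchedFileCount;
  return n === 1 ? "1 matching file" : `${n} matching files`;
}
```

+Rendering then reduces to computing `matchBadgeLabel(searchTerm, row.original)` and showing a `Badge` only when the result is non-null.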
+ +- [ ] **Step 5: Verify app builds** + +Run: `npm run build` +Expected: Build succeeds + +- [ ] **Step 6: Commit** + +```bash +git add src/app/(app)/stls/page.tsx src/app/(app)/stls/_components/stl-table.tsx src/app/(app)/stls/_components/package-columns.tsx +git commit -m "feat: show file match count badge in search results" +``` + +--- + +## Task 6: Search Match Indicators — File Drawer Highlighting + +**Files:** +- Modify: `src/app/(app)/stls/_components/package-files-drawer.tsx:51-55, 118-128, 186-210, 226, 477-504` + +- [ ] **Step 1: Add highlightTerm to PackageFilesDrawerProps** + +In `src/app/(app)/stls/_components/package-files-drawer.tsx`, update the props interface (line ~51-55): + +```typescript +interface PackageFilesDrawerProps { + pkg: PackageRow | null; + open: boolean; + onOpenChange: (open: boolean) => void; + highlightTerm?: string; +} +``` + +Update the component signature (line ~226): + +```typescript +export function PackageFilesDrawer({ pkg, open, onOpenChange, highlightTerm }: PackageFilesDrawerProps) { +``` + +- [ ] **Step 2: Add a helper to check if a file matches the highlight term** + +Add a helper function near the top of the file (after the `getExtBadgeClass` function): + +```typescript +function fileMatchesHighlight(file: FileItem, term: string): boolean { + if (!term) return false; + const lower = term.toLowerCase(); + return ( + file.fileName.toLowerCase().includes(lower) || + file.path.toLowerCase().includes(lower) + ); +} +``` + +- [ ] **Step 3: Add highlight term to TreeNodeView props and highlight matching files** + +Update the `TreeNodeView` props to accept `highlightTerm` (line ~118-128): + +```typescript +function TreeNodeView({ + node, + depth, + search, + defaultOpen, + highlightTerm, +}: { + node: TreeNode; + depth: number; + search: string; + defaultOpen: boolean; + highlightTerm?: string; +}) { +``` + +Add a helper inside `TreeNodeView` to check if a subtree contains highlighted files: + +```typescript + const 
hasHighlightedDescendant = useMemo(() => { + if (!highlightTerm) return false; + function check(n: TreeNode): boolean { + if (n.file && fileMatchesHighlight(n.file, highlightTerm!)) return true; + for (const child of n.children.values()) { + if (check(child)) return true; + } + return false; + } + return check(node); + }, [node, highlightTerm]); +``` + +Update the `useEffect` for auto-expanding to also expand when there are highlighted descendants (line ~141-143): + +```typescript + useEffect(() => { + if (search || hasHighlightedDescendant) setOpen(true); + }, [search, hasHighlightedDescendant]); +``` + +In the file node rendering (line ~186-210), add a highlight class when the file matches: + +```typescript + // File node + if (node.file) { + const isHighlighted = highlightTerm ? fileMatchesHighlight(node.file, highlightTerm) : false; + return ( +
+      <div
+        className={
+          "flex items-center gap-2 rounded px-2 py-1" +
+          (isHighlighted ? " bg-primary/10" : "") // highlight styling is illustrative
+        }
+        style={{ paddingLeft: `${depth * 16 + 8}px` }}
+      >
+        <span className="truncate">{node.name}</span>
+        {node.file.extension && (
+          <Badge variant="outline" className="text-xs">
+            .{node.file.extension}
+          </Badge>
+        )}
+        <span className="ml-auto text-xs text-muted-foreground">
+          {formatBytes(node.file.uncompressedSize)}
+        </span>
+      </div>
+    );
+  }
+```
+
+Pass `highlightTerm` through recursive `TreeNodeView` calls (line ~173-181):
+
+```typescript
+  {open &&
+    sortedChildren.map((child) => (
+      <TreeNodeView
+        key={child.name}
+        node={child}
+        depth={depth + 1}
+        search={search}
+        defaultOpen={defaultOpen}
+        highlightTerm={highlightTerm}
+      />
+    ))}
+```
+
+- [ ] **Step 4: Pass highlightTerm to TreeNodeView from the main render**
+
+In the `PackageFilesDrawer` component, where `TreeNodeView` is rendered for root children (line ~468-475), add the new prop to the existing element, keeping its other props unchanged:
+
+```typescript
+  // In the existing root-level <TreeNodeView ... /> element, add:
+  highlightTerm={highlightTerm}
+```
+
+- [ ] **Step 5: Add highlighting to the flat list render path too**
+
+In the flat list render path (line ~477-504), add the same highlight logic:
+
+```typescript
+  {filtered.map((file) => {
+    const isHighlighted = highlightTerm ? fileMatchesHighlight(file, highlightTerm) : false;
+    return (
+      <div
+        key={file.path}
+        className={
+          "flex items-center gap-2 rounded px-2 py-1" +
+          (isHighlighted ? " bg-primary/10" : "") // highlight styling is illustrative
+        }
+      >
+        <span className="truncate" title={file.path}>
+          {file.fileName}
+        </span>
+        {file.extension && (
+          <Badge variant="outline" className="text-xs">
+            .{file.extension}
+          </Badge>
+        )}
+        <span className="ml-auto text-xs text-muted-foreground">
+          {formatBytes(file.uncompressedSize)}
+        </span>
+      </div>
+    );
+  })}
+```
+
+- [ ] **Step 6: Wire highlightTerm prop in StlTable**
+
+In `src/app/(app)/stls/_components/stl-table.tsx`, update the `PackageFilesDrawer` usage to pass the prop:
+
+```typescript
+  <PackageFilesDrawer
+    pkg={viewPkg}
+    open={viewPkg !== null}
+    onOpenChange={(open) => {
+      if (!open) setViewPkg(null);
+    }}
+    highlightTerm={searchTerm}
+  />
+```
+
+- [ ] **Step 7: Verify app builds and lint passes**
+
+Run: `npm run build && npm run lint`
+Expected: Both pass
+
+- [ ] **Step 8: Commit**
+
+```bash
+git add src/app/(app)/stls/_components/package-files-drawer.tsx src/app/(app)/stls/_components/stl-table.tsx
+git commit -m "feat: highlight matching files in package drawer when opened from search"
+```
+
+---
+
+## Task 7: Skipped/Failed Packages — App Queries & Types
+
+**Files:**
+- Modify: `src/lib/telegram/types.ts`
+- Modify: `src/lib/telegram/queries.ts`
+
+- [ ] **Step 1: Add SkippedPackageItem type**
+
+In `src/lib/telegram/types.ts`, add after the `PackageFileItem` interface:
+
+```typescript
+export interface SkippedPackageItem {
+  id: string;
+  fileName: string;
+  fileSize: string;
+  reason: "SIZE_LIMIT" | "DOWNLOAD_FAILED" | "EXTRACT_FAILED" | "UPLOAD_FAILED";
+  errorMessage: string | null;
+  sourceChannel: {
+    id: string;
+    title: string;
+  };
+  sourceMessageId: string;
+  isMultipart: boolean;
+  partCount: number;
+  createdAt: string;
+}
+```
+
+- [ ] **Step 2: Add query functions for skipped packages**
+
+In `src/lib/telegram/queries.ts`, add these functions:
+
+```typescript
+export async function listSkippedPackages(options: {
+  page: number;
+  limit: number;
+  reason?: "SIZE_LIMIT" | "DOWNLOAD_FAILED" | "EXTRACT_FAILED" | "UPLOAD_FAILED";
+}) {
+  const where: Record<string, unknown> = {};
+  if (options.reason) where.reason = options.reason;
+
+  const [items, total] = await Promise.all([
+    prisma.skippedPackage.findMany({
+      where,
+      orderBy: { createdAt: "desc" },
+      skip: (options.page - 1) * options.limit,
+      take: options.limit,
+      include: {
+        sourceChannel: { select: { id: true, title: true } },
+      },
+    }),
+    prisma.skippedPackage.count({ where }),
+  ]);
+
+  const mapped: SkippedPackageItem[] = items.map((s) => ({
+    id: s.id,
+    fileName: s.fileName,
+    fileSize: s.fileSize.toString(),
+    reason: s.reason,
+    errorMessage: s.errorMessage,
+    sourceChannel: s.sourceChannel,
+    sourceMessageId: s.sourceMessageId.toString(),
+    isMultipart: s.isMultipart,
+    partCount: s.partCount,
+    createdAt: s.createdAt.toISOString(),
+  }));
+
+  return {
+    items: mapped,
+    pagination: {
+      page: options.page,
+      limit: options.limit,
+      total,
+      totalPages: Math.ceil(total / options.limit),
+    },
+  };
+}
+
+export async function countSkippedPackages(): Promise<number> {
+  return prisma.skippedPackage.count();
+}
+```
+
+Add `SkippedPackageItem` to the import in queries.ts:
+
+```typescript
+import type {
+  PackageListItem,
+  PackageDetail,
+  PackageFileItem,
+  IngestionAccountStatus,
+  SkippedPackageItem,
+} from "./types";
+```
+
+- [ ] **Step 3: Verify app builds**
+
+Run: `npx tsc --noEmit`
+Expected: No errors
+
+- [ ] **Step 4: Commit**
+
+```bash
+git add src/lib/telegram/types.ts src/lib/telegram/queries.ts
+git commit -m "feat: add query functions for listing skipped/failed packages"
+```
+
+---
+
+## Task 8: Skipped/Failed Packages — Retry Server Actions
+
+**Files:**
+- Modify: `src/app/(app)/stls/actions.ts`
+
+- [ ] **Step 1: Add retry server actions**
+
+In `src/app/(app)/stls/actions.ts`, add the following (the `ActionResult<T>` return type is assumed to be the file's existing server-action result type):
+
+```typescript
+export async function retrySkippedPackageAction(
+  id: string
+): Promise<ActionResult<void>> {
+  const session = await auth();
+  if (!session?.user?.id) return { success: false, error: "Unauthorized" };
+
+  try {
+    const skipped = await prisma.skippedPackage.findUnique({
+      where: { id },
+    });
+    if (!skipped) return { success: false, error: "Skipped package not found" };
+
+    // Find the AccountChannelMap and reset watermark if needed
+    const mapping = await prisma.accountChannelMap.findUnique({
+      where: {
+        accountId_channelId: {
+          accountId: skipped.accountId,
+          channelId: skipped.sourceChannelId,
+        },
+      },
+    });
+
+    if (mapping) {
+      const targetId = skipped.sourceMessageId - 1n;
+
+      // Only reset if the watermark is past this message
+      if (mapping.lastProcessedMessageId && mapping.lastProcessedMessageId >= skipped.sourceMessageId) {
+        await prisma.accountChannelMap.update({
+          where: { id: mapping.id },
+          data: { lastProcessedMessageId: targetId },
+        });
+      }
+
+      // Also reset TopicProgress if this was a forum topic message
+      if (skipped.sourceTopicId) {
+        const topicProgress = await prisma.topicProgress.findFirst({
+          where: {
+            accountChannelMapId: mapping.id,
+            topicId: skipped.sourceTopicId,
+          },
+        });
+        if (topicProgress && topicProgress.lastProcessedMessageId && topicProgress.lastProcessedMessageId >= skipped.sourceMessageId) {
+          await prisma.topicProgress.update({
+            where: { id: topicProgress.id },
+            data: { lastProcessedMessageId: targetId },
+          });
+        }
+      }
+    }
+
+    // Delete the skip record
+    await prisma.skippedPackage.delete({ where: { id } });
+
+    revalidatePath("/stls");
+    return { success: true, data: undefined };
+  } catch {
+    return { success: false, error: "Failed to retry skipped package" };
+  }
+}
+
+export async function retryAllSkippedPackagesAction(
+  reason?: "SIZE_LIMIT" | "DOWNLOAD_FAILED" | "EXTRACT_FAILED" | "UPLOAD_FAILED"
+): Promise<ActionResult<void>> {
+  const session = await auth();
+  if (!session?.user?.id) return { success: false, error: "Unauthorized" };
+
+  try {
+    const where: Record<string, unknown> = {};
+    if (reason) where.reason = reason;
+
+    const skippedItems = await prisma.skippedPackage.findMany({ where });
+
+    if (skippedItems.length === 0) {
+      return { success: true, data: undefined };
+    }
+
+    // Group by (accountId, channelId) to find minimum messageId per channel
+    const channelResets = new Map<
+      string,
+      {
+        mappingKey: { accountId: string; channelId: string };
+        minMessageId: bigint;
+        topicResets: Map<bigint, bigint>;
+      }
+    >();
+
+    for (const item of skippedItems) {
+      const key = `${item.accountId}:${item.sourceChannelId}`;
+      const existing = channelResets.get(key);
+      const targetId = item.sourceMessageId - 1n;
+
+      if (!existing) {
+        const topicResets = new Map<bigint, bigint>();
+        if
(item.sourceTopicId) {
+          topicResets.set(item.sourceTopicId, targetId);
+        }
+        channelResets.set(key, {
+          mappingKey: { accountId: item.accountId, channelId: item.sourceChannelId },
+          minMessageId: targetId,
+          topicResets,
+        });
+      } else {
+        if (targetId < existing.minMessageId) {
+          existing.minMessageId = targetId;
+        }
+        if (item.sourceTopicId) {
+          const existingTopic = existing.topicResets.get(item.sourceTopicId);
+          if (!existingTopic || targetId < existingTopic) {
+            existing.topicResets.set(item.sourceTopicId, targetId);
+          }
+        }
+      }
+    }
+
+    // Reset watermarks
+    for (const reset of channelResets.values()) {
+      const mapping = await prisma.accountChannelMap.findUnique({
+        where: { accountId_channelId: reset.mappingKey },
+      });
+      if (!mapping) continue;
+
+      if (mapping.lastProcessedMessageId && mapping.lastProcessedMessageId > reset.minMessageId) {
+        await prisma.accountChannelMap.update({
+          where: { id: mapping.id },
+          data: { lastProcessedMessageId: reset.minMessageId },
+        });
+      }
+
+      // Reset topic progress
+      for (const [topicId, targetId] of reset.topicResets) {
+        const topicProgress = await prisma.topicProgress.findFirst({
+          where: { accountChannelMapId: mapping.id, topicId },
+        });
+        if (topicProgress && topicProgress.lastProcessedMessageId && topicProgress.lastProcessedMessageId > targetId) {
+          await prisma.topicProgress.update({
+            where: { id: topicProgress.id },
+            data: { lastProcessedMessageId: targetId },
+          });
+        }
+      }
+    }
+
+    // Delete all matching skip records
+    await prisma.skippedPackage.deleteMany({ where });
+
+    revalidatePath("/stls");
+    return { success: true, data: undefined };
+  } catch {
+    return { success: false, error: "Failed to retry skipped packages" };
+  }
+}
+```
+
+- [ ] **Step 2: Verify app builds**
+
+Run: `npx tsc --noEmit`
+Expected: No errors
+
+- [ ] **Step 3: Commit**
+
+```bash
+git add src/app/(app)/stls/actions.ts
+git commit -m "feat: add retry server actions for skipped/failed packages"
+```
+
+---
+
+## Task 9: Skipped/Failed Packages — UI Components
+
+**Files:**
+- Create: `src/app/(app)/stls/_components/skipped-columns.tsx`
+- Create: `src/app/(app)/stls/_components/skipped-packages-tab.tsx`
+
+- [ ] **Step 1: Create skipped package column definitions**
+
+Create `src/app/(app)/stls/_components/skipped-columns.tsx` (header titles and cell styling are illustrative):
+
+```typescript
+"use client";
+
+import { type ColumnDef } from "@tanstack/react-table";
+import { DataTableColumnHeader } from "@/components/shared/data-table-column-header";
+import { Badge } from "@/components/ui/badge";
+import { Button } from "@/components/ui/button";
+import { RotateCw } from "lucide-react";
+import {
+  Tooltip,
+  TooltipContent,
+  TooltipTrigger,
+} from "@/components/ui/tooltip";
+
+export interface SkippedRow {
+  id: string;
+  fileName: string;
+  fileSize: string;
+  reason: "SIZE_LIMIT" | "DOWNLOAD_FAILED" | "EXTRACT_FAILED" | "UPLOAD_FAILED";
+  errorMessage: string | null;
+  sourceChannel: { id: string; title: string };
+  isMultipart: boolean;
+  partCount: number;
+  createdAt: string;
+}
+
+function formatBytes(bytesStr: string): string {
+  const bytes = Number(bytesStr);
+  if (bytes === 0) return "0 B";
+  const k = 1024;
+  const sizes = ["B", "KB", "MB", "GB", "TB"];
+  const i = Math.floor(Math.log(bytes) / Math.log(k));
+  return `${parseFloat((bytes / Math.pow(k, i)).toFixed(1))} ${sizes[i]}`;
+}
+
+const REASON_LABELS: Record<
+  SkippedRow["reason"],
+  { label: string; variant: "secondary" | "destructive" }
+> = {
+  SIZE_LIMIT: { label: "Size Limit", variant: "secondary" },
+  DOWNLOAD_FAILED: { label: "Download Failed", variant: "destructive" },
+  EXTRACT_FAILED: { label: "Extract Failed", variant: "destructive" },
+  UPLOAD_FAILED: { label: "Upload Failed", variant: "destructive" },
+};
+
+export function getSkippedColumns({
+  onRetry,
+}: {
+  onRetry: (row: SkippedRow) => void;
+}): ColumnDef<SkippedRow>[] {
+  return [
+    {
+      accessorKey: "fileName",
+      header: ({ column }) => <DataTableColumnHeader column={column} title="Name" />,
+      cell: ({ row }) => (
+        <div className="flex items-center gap-2">
+          <span className="truncate font-medium">{row.original.fileName}</span>
+          {row.original.isMultipart && (
+            <Badge variant="outline" className="text-xs">
+              {row.original.partCount} parts
+            </Badge>
+          )}
+        </div>
+      ),
+      enableHiding: false,
+    },
+    {
+      accessorKey: "fileSize",
+      header: ({ column }) => <DataTableColumnHeader column={column} title="Size" />,
+      cell: ({ row }) => (
+        <span className="tabular-nums">
+          {formatBytes(row.original.fileSize)}
+        </span>
+      ),
+    },
+    {
+      accessorKey: "reason",
+      header: ({ column }) => <DataTableColumnHeader column={column} title="Reason" />,
+      cell: ({ row }) => {
+        const { label, variant } = REASON_LABELS[row.original.reason];
+        return <Badge variant={variant}>{label}</Badge>;
+      },
+    },
+    {
+      accessorKey: "errorMessage",
+      header: "Error",
+      cell: ({ row }) => {
+        const msg = row.original.errorMessage;
+        if (!msg) return <span className="text-muted-foreground">{"\u2014"}</span>;
+        return (
+          <Tooltip>
+            <TooltipTrigger asChild>
+              <span className="block max-w-[200px] truncate text-xs text-muted-foreground">
+                {msg}
+              </span>
+            </TooltipTrigger>
+            <TooltipContent className="max-w-sm">
+              <p className="break-words">{msg}</p>
+            </TooltipContent>
+          </Tooltip>
+        );
+      },
+    },
+    {
+      id: "channel",
+      header: ({ column }) => <DataTableColumnHeader column={column} title="Channel" />,
+      cell: ({ row }) => (
+        <span className="truncate">
+          {row.original.sourceChannel.title}
+        </span>
+      ),
+      accessorFn: (row) => row.sourceChannel.title,
+    },
+    {
+      accessorKey: "createdAt",
+      header: ({ column }) => <DataTableColumnHeader column={column} title="Date" />,
+      cell: ({ row }) => (
+        <span className="tabular-nums">
+          {new Date(row.original.createdAt).toLocaleDateString()}
+        </span>
+      ),
+    },
+    {
+      id: "actions",
+      cell: ({ row }) => (
+        <Button variant="ghost" size="sm" onClick={() => onRetry(row.original)}>
+          <RotateCw className="mr-2 h-4 w-4" />
+          Retry
+        </Button>
+      ),
+      enableHiding: false,
+    },
+  ];
+}
+```
+
+- [ ] **Step 2: Create skipped packages tab component**
+
+Create `src/app/(app)/stls/_components/skipped-packages-tab.tsx`:
+
+```typescript
+"use client";
+
+import { useTransition } from "react";
+import { useRouter } from "next/navigation";
+import { toast } from "sonner";
+import { RotateCw } from "lucide-react";
+import { useDataTable } from "@/hooks/use-data-table";
+import { getSkippedColumns, type SkippedRow } from "./skipped-columns";
+import { DataTable } from "@/components/shared/data-table";
+import { DataTablePagination } from "@/components/shared/data-table-pagination";
+import { Button } from "@/components/ui/button";
+import { retrySkippedPackageAction, retryAllSkippedPackagesAction } from "../actions";
+
+interface SkippedPackagesTabProps {
+  data: SkippedRow[];
+  pageCount: number;
+  totalCount: number;
+}
+
+export function SkippedPackagesTab({
+  data,
+  pageCount,
+  totalCount,
+}: SkippedPackagesTabProps) {
+  const router = useRouter();
+  const [isPending, startTransition] = useTransition();
+
+  const columns = getSkippedColumns({
+    onRetry: (row) => {
+      startTransition(async () => {
+        const result = await retrySkippedPackageAction(row.id);
+        if (result.success) {
+          toast.success(`"${row.fileName}" queued for retry`);
+          router.refresh();
+        } else {
+          toast.error(result.error);
+        }
+      });
+    },
+  });
+
+  const { table } = useDataTable({ data, columns, pageCount });
+
+  return (
+ {totalCount > 0 && ( +
+ +
+ )} + + +
+  );
+}
+```
+
+- [ ] **Step 3: Commit**
+
+```bash
+git add src/app/(app)/stls/_components/skipped-columns.tsx src/app/(app)/stls/_components/skipped-packages-tab.tsx
+git commit -m "feat: add skipped/failed packages table UI components"
+```
+
+---
+
+## Task 10: Wire Up Tabs in STL Page
+
+**Files:**
+- Modify: `src/app/(app)/stls/page.tsx`
+- Modify: `src/app/(app)/stls/_components/stl-table.tsx`
+
+- [ ] **Step 1: Fetch skipped packages data in page.tsx**
+
+In `src/app/(app)/stls/page.tsx`, update imports and data fetching:
+
+```typescript
+import { auth } from "@/lib/auth";
+import { redirect } from "next/navigation";
+import { listPackages, searchPackages, getIngestionStatus, getAllPackageTags, listSkippedPackages, countSkippedPackages } from "@/lib/telegram/queries";
+import { StlTable } from "./_components/stl-table";
+
+interface Props {
+  searchParams: Promise<Record<string, string | string[] | undefined>>;
+}
+
+export default async function StlFilesPage({ searchParams }: Props) {
+  const session = await auth();
+  if (!session?.user?.id) redirect("/login");
+
+  const params = await searchParams;
+
+  const page = Number(params.page) || 1;
+  const perPage = Number(params.perPage) || 20;
+  const sort = (params.sort as string) ?? "indexedAt";
+  const order = (params.order as "asc" | "desc") ?? "desc";
+  const search = (params.search as string) ?? "";
+  const creator = (params.creator as string) || undefined;
+  const tag = (params.tag as string) || undefined;
+  const tab = (params.tab as string) ?? "packages";
+
+  // Fetch packages, ingestion status, tags, and skipped count in parallel
+  const [result, ingestionStatus, availableTags, skippedCount] = await Promise.all([
+    search
+      ? 
searchPackages({
+          query: search,
+          page,
+          limit: perPage,
+          searchIn: "both",
+        })
+      : listPackages({
+          page,
+          limit: perPage,
+          creator,
+          tag,
+          sortBy: sort as "indexedAt" | "fileName" | "fileSize",
+          order,
+        }),
+    getIngestionStatus(),
+    getAllPackageTags(),
+    countSkippedPackages(),
+  ]);
+
+  // Fetch skipped packages only if on that tab
+  const skippedResult = tab === "skipped"
+    ? await listSkippedPackages({ page, limit: perPage })
+    : null;
+
+  return (
+    <StlTable
+      data={result.data}
+      pageCount={result.pagination.totalPages}
+      totalCount={result.pagination.total}
+      ingestionStatus={ingestionStatus}
+      availableTags={availableTags}
+      searchTerm={search}
+      skippedData={skippedResult?.data ?? []}
+      skippedPageCount={skippedResult?.pagination.totalPages ?? 0}
+      skippedTotalCount={skippedCount}
+    />
+  );
+}
+```
+
+- [ ] **Step 2: Add tabs to StlTable**
+
+In `src/app/(app)/stls/_components/stl-table.tsx`, add the tab UI. Update imports:
+
+```typescript
+import { Tabs, TabsContent, TabsList, TabsTrigger } from "@/components/ui/tabs";
+import { Badge } from "@/components/ui/badge";
+import { SkippedPackagesTab } from "./skipped-packages-tab";
+import type { SkippedRow } from "./skipped-columns";
+```
+
+Update props:
+
+```typescript
+interface StlTableProps {
+  data: PackageRow[];
+  pageCount: number;
+  totalCount: number;
+  ingestionStatus: IngestionAccountStatus[];
+  availableTags: string[];
+  searchTerm: string;
+  skippedData: SkippedRow[];
+  skippedPageCount: number;
+  skippedTotalCount: number;
+}
+```
+
+Update the component to use tabs. The return JSX should become:
+
+```typescript
+  const activeTab = searchParams.get("tab") ?? "packages";
+
+  const updateTab = useCallback(
+    (value: string) => {
+      const params = new URLSearchParams(searchParams.toString());
+      if (value === "packages") {
+        params.delete("tab");
+      } else {
+        params.set("tab", value);
+      }
+      params.set("page", "1");
+      router.push(`${pathname}?${params.toString()}`, { scroll: false });
+    },
+    [router, pathname, searchParams]
+  );
+
+  return (
+    <Tabs value={activeTab} onValueChange={updateTab} className="space-y-4">
+      <TabsList>
+        <TabsTrigger value="packages">Packages</TabsTrigger>
+        <TabsTrigger value="skipped">
+          Skipped / Failed
+          {skippedTotalCount > 0 && (
+            <Badge variant="secondary" className="ml-2">
+              {skippedTotalCount}
+            </Badge>
+          )}
+        </TabsTrigger>
+      </TabsList>
+
+      <TabsContent value="packages" className="space-y-4">
+        <div className="flex items-center gap-2">
+          <div className="relative flex-1">
+            <Search className="absolute left-3 top-1/2 size-4 -translate-y-1/2 text-muted-foreground" />
+            <Input
+              defaultValue={searchTerm}
+              onChange={(e) => updateSearch(e.target.value)}
+              className="pl-9 h-9"
+            />
+          </div>
+          {/* existing tag filter (rendered when availableTags.length > 0) and toolbar controls, unchanged */}
+        </div>
+        {/* existing DataTable and pagination JSX, unchanged */}
+      </TabsContent>
+
+      <TabsContent value="skipped">
+        <SkippedPackagesTab
+          data={skippedData}
+          pageCount={skippedPageCount}
+          totalCount={skippedTotalCount}
+        />
+      </TabsContent>
+
+      {/* existing drawer, with the new highlightTerm prop added */}
+      <PackageFilesDrawer
+        pkg={viewPkg}
+        open={viewPkg !== null}
+        onOpenChange={(open) => {
+          if (!open) setViewPkg(null);
+        }}
+        highlightTerm={searchTerm}
+      />
+    </Tabs>
+ ); +``` + +Make sure to add the new props to the destructured params and add the `updateTab` callback. Remove the old JSX that is now inside `TabsContent`. + +- [ ] **Step 3: Verify the Tabs component exists** + +Check if `@/components/ui/tabs` exists. If not, install it: + +Run: `npx shadcn@latest add tabs` (if missing) + +- [ ] **Step 4: Verify app builds and lint passes** + +Run: `npm run build && npm run lint` +Expected: Both pass + +- [ ] **Step 5: Commit** + +```bash +git add src/app/(app)/stls/page.tsx src/app/(app)/stls/_components/stl-table.tsx +git commit -m "feat: add skipped/failed packages tab to STL files page" +``` + +--- + +## Task 11: Final Build Verification + +- [ ] **Step 1: Full build check** + +Run: `npm run build` +Expected: Build succeeds + +- [ ] **Step 2: Lint check** + +Run: `npm run lint` +Expected: No errors + +- [ ] **Step 3: Worker build check** + +Run: `cd worker && npx tsc --noEmit` +Expected: No errors + +- [ ] **Step 4: Prisma generate check** + +Run: `npx prisma generate` +Expected: Success diff --git a/docs/superpowers/plans/2026-03-25-package-grouping.md b/docs/superpowers/plans/2026-03-25-package-grouping.md new file mode 100644 index 0000000..caffe01 --- /dev/null +++ b/docs/superpowers/plans/2026-03-25-package-grouping.md @@ -0,0 +1,1343 @@ +# Package Grouping Implementation Plan + +> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking. + +**Goal:** Group related packages that were posted together in Telegram so they appear as collapsible rows in the STL files table, with auto-detection via album IDs and manual grouping. + +**Architecture:** New `PackageGroup` model links related `Package` records. The worker captures `media_album_id` during scanning and creates groups post-indexing. 
The app uses a two-step SQL query for paginated display items (groups + standalone packages). UI renders group rows with expand/collapse and supports manual group management. + +**Tech Stack:** Prisma 7 (PostgreSQL), Next.js 16 (App Router, server components + server actions), TanStack Table, shadcn/ui, TDLib (tdl) + +**Spec:** `docs/superpowers/specs/2026-03-25-package-grouping-design.md` + +**Testing:** No test framework configured. Each task includes manual verification steps. + +--- + +## File Map + +### New Files +| File | Responsibility | +|------|---------------| +| `prisma/migrations/_add_package_groups/migration.sql` | Schema migration (auto-generated) | +| `src/app/api/groups/[id]/preview/route.ts` | Group preview image endpoint | +| `src/app/(app)/stls/_components/group-row.tsx` | Group row rendering (collapsed + expanded header) | +| `src/app/(app)/stls/_components/group-toolbar.tsx` | Toolbar for "Group Selected" action | +| `worker/src/grouping.ts` | Post-processing: album detection → PackageGroup creation | + +### Modified Files +| File | Changes | +|------|---------| +| `prisma/schema.prisma` | Add `PackageGroup` model, add `packageGroupId` to `Package`, add back-relation to `TelegramChannel` | +| `worker/src/archive/multipart.ts` | Add `mediaAlbumId?` to `TelegramMessage` interface | +| `worker/src/preview/match.ts` | Add `mediaAlbumId?` to `TelegramPhoto` interface | +| `worker/src/tdlib/download.ts` | Capture `media_album_id` from TDLib messages in scan loop | +| `worker/src/tdlib/topics.ts` | Capture `media_album_id` from TDLib messages in forum topic scan loop | +| `worker/src/worker.ts` | Call grouping post-processing after package indexing loop | +| `worker/src/db/queries.ts` | Add `createPackageGroup`, `linkPackagesToGroup` functions | +| `src/lib/telegram/types.ts` | Add `PackageGroupRow`, `DisplayItem` union type | +| `src/lib/telegram/queries.ts` | Add `listDisplayItems`, `getDisplayItemCount`, group CRUD queries | +| 
`src/app/(app)/stls/actions.ts` | Add server actions for group rename, dissolve, create, remove member, update preview, send all | +| `src/app/(app)/stls/_components/package-columns.tsx` | Add chevron column, checkbox column, group-aware rendering | +| `src/app/(app)/stls/_components/stl-table.tsx` | Add expand/collapse state, selection state, group toolbar, integrate new display item data shape | +| `src/app/(app)/stls/page.tsx` | Switch from `listPackages` to `listDisplayItems`, pass group data to table | + +--- + +## Task 1: Prisma Schema Migration + +**Files:** +- Modify: `prisma/schema.prisma` + +- [ ] **Step 1: Add PackageGroup model to schema** + +In `prisma/schema.prisma`, add the new model after the `Package` model (after line ~495): + +```prisma +model PackageGroup { + id String @id @default(cuid()) + name String + mediaAlbumId String? + sourceChannelId String + previewData Bytes? + createdAt DateTime @default(now()) + updatedAt DateTime @updatedAt + + packages Package[] + sourceChannel TelegramChannel @relation(fields: [sourceChannelId], references: [id], onDelete: Cascade) + + @@unique([mediaAlbumId, sourceChannelId]) + @@index([sourceChannelId]) + @@map("package_groups") +} +``` + +- [ ] **Step 2: Add packageGroupId to Package model** + +In the `Package` model (around line 479, after `previewMsgId`), add: + +```prisma + packageGroupId String? + packageGroup PackageGroup? 
@relation(fields: [packageGroupId], references: [id], onDelete: SetNull) +``` + +And add this index alongside the existing indexes (after line ~493): + +```prisma + @@index([packageGroupId]) +``` + +- [ ] **Step 3: Add back-relation to TelegramChannel** + +In the `TelegramChannel` model (around line 435, after `skippedPackages`), add: + +```prisma + packageGroups PackageGroup[] +``` + +- [ ] **Step 4: Generate and run the migration** + +```bash +cd /e/Projects/DragonsStash && npx prisma migrate dev --name add_package_groups +``` + +Expected: Migration creates `package_groups` table, adds `packageGroupId` column and index to `packages`, creates unique index on `(mediaAlbumId, sourceChannelId)`. + +- [ ] **Step 5: Verify Prisma client generation** + +```bash +cd /e/Projects/DragonsStash && npm run db:generate +``` + +Expected: Prisma client generates without errors. `prisma.packageGroup` is available. + +- [ ] **Step 6: Commit** + +```bash +git add prisma/schema.prisma prisma/migrations/ +git commit -m "feat: add PackageGroup schema for album-based file grouping" +``` + +--- + +## Task 2: Worker — Add mediaAlbumId to Interfaces + +**Files:** +- Modify: `worker/src/archive/multipart.ts` +- Modify: `worker/src/preview/match.ts` + +- [ ] **Step 1: Add mediaAlbumId to TelegramMessage** + +In `worker/src/archive/multipart.ts`, update the `TelegramMessage` interface (line 7-13): + +```typescript +export interface TelegramMessage { + id: bigint; + fileName: string; + fileId: string; + fileSize: bigint; + date: Date; + mediaAlbumId?: string; +} +``` + +- [ ] **Step 2: Add mediaAlbumId to TelegramPhoto** + +In `worker/src/preview/match.ts`, update the `TelegramPhoto` interface (line 5-13): + +```typescript +export interface TelegramPhoto { + id: bigint; + date: Date; + /** Caption text on the photo message (if any). */ + caption: string; + /** The smallest photo size available — used as thumbnail. 
*/ + fileId: string; + fileSize: number; + mediaAlbumId?: string; +} +``` + +- [ ] **Step 3: Build worker to verify no type errors** + +```bash +cd /e/Projects/DragonsStash/worker && npm run build +``` + +Expected: Clean build. The new optional field doesn't break any existing call sites. + +- [ ] **Step 4: Commit** + +```bash +cd /e/Projects/DragonsStash && git add worker/src/archive/multipart.ts worker/src/preview/match.ts +git commit -m "feat: add mediaAlbumId to TelegramMessage and TelegramPhoto interfaces" +``` + +--- + +## Task 3: Worker — Capture media_album_id During Scanning + +**Files:** +- Modify: `worker/src/tdlib/download.ts` + +- [ ] **Step 1: Add media_album_id to TdMessage interface** + +In `worker/src/tdlib/download.ts`, update the `TdMessage` interface (lines 35-58) to add `media_album_id`: + +```typescript +interface TdMessage { + id: number; + date: number; + media_album_id?: string; + content: { + _: string; + document?: { + file_name?: string; + document?: { + id: number; + size: number; + local?: { + path?: string; + is_downloading_completed?: boolean; + }; + }; + }; + photo?: { + sizes?: TdPhotoSize[]; + }; + caption?: { + text?: string; + }; + }; +} +``` + +- [ ] **Step 2: Pass media_album_id through to TelegramMessage** + +In the `getChannelMessages` function, update the archive push block (around line 208-215). Change the `archives.push` call to include `mediaAlbumId`: + +```typescript + archives.push({ + id: BigInt(msg.id), + fileName: doc.file_name, + fileId: String(doc.document.id), + fileSize: BigInt(doc.document.size), + date: new Date(msg.date * 1000), + mediaAlbumId: msg.media_album_id && msg.media_album_id !== "0" ? msg.media_album_id : undefined, + }); +``` + +- [ ] **Step 3: Pass media_album_id through to TelegramPhoto** + +In the same function, update the photo push block (around line 224-231). 
Change the `photos.push` call to include `mediaAlbumId`: + +```typescript + photos.push({ + id: BigInt(msg.id), + date: new Date(msg.date * 1000), + caption, + fileId: String(smallest.photo.id), + fileSize: smallest.photo.size || smallest.photo.expected_size, + mediaAlbumId: msg.media_album_id && msg.media_album_id !== "0" ? msg.media_album_id : undefined, + }); +``` + +- [ ] **Step 4: Add media_album_id to forum topic scanning** + +`worker/src/tdlib/topics.ts` has a parallel `getTopicMessages` function with its own inline message struct. Apply the same changes: + +1. Add `media_album_id?: string` to the inline TDLib message struct in `getTopicMessages` +2. Update the `archives.push` block to include `mediaAlbumId` +3. Update the `photos.push` block to include `mediaAlbumId` + +Use the exact same pattern as steps 2-3 above. + +- [ ] **Step 5: Build worker to verify** + +```bash +cd /e/Projects/DragonsStash/worker && npm run build +``` + +Expected: Clean build. + +- [ ] **Step 6: Commit** + +```bash +cd /e/Projects/DragonsStash && git add worker/src/tdlib/download.ts worker/src/tdlib/topics.ts +git commit -m "feat: capture media_album_id from TDLib messages during channel and topic scanning" +``` + +--- + +## Task 4: Worker — Group DB Queries + +**Files:** +- Modify: `worker/src/db/queries.ts` + +- [ ] **Step 1: Add createOrFindPackageGroup function** + +At the end of `worker/src/db/queries.ts`, add: + +```typescript +export async function createOrFindPackageGroup(input: { + mediaAlbumId: string; + sourceChannelId: string; + name: string; + previewData?: Buffer | null; +}): Promise { + // findFirst + conditional create (Prisma doesn't support upsert on nullable compound unique) + const existing = await db.packageGroup.findFirst({ + where: { + mediaAlbumId: input.mediaAlbumId, + sourceChannelId: input.sourceChannelId, + }, + select: { id: true }, + }); + + if (existing) return existing.id; + + const group = await db.packageGroup.create({ + data: { + mediaAlbumId: 
input.mediaAlbumId, + sourceChannelId: input.sourceChannelId, + name: input.name, + previewData: input.previewData ? new Uint8Array(input.previewData) : undefined, + }, + }); + + return group.id; +} + +export async function linkPackagesToGroup( + packageIds: string[], + groupId: string +): Promise { + await db.package.updateMany({ + where: { id: { in: packageIds } }, + data: { packageGroupId: groupId }, + }); +} +``` + +- [ ] **Step 2: Build worker to verify** + +```bash +cd /e/Projects/DragonsStash/worker && npm run build +``` + +Expected: Clean build. + +- [ ] **Step 3: Commit** + +```bash +cd /e/Projects/DragonsStash && git add worker/src/db/queries.ts +git commit -m "feat: add createOrFindPackageGroup and linkPackagesToGroup worker queries" +``` + +--- + +## Task 5: Worker — Grouping Post-Processing + +**Files:** +- Create: `worker/src/grouping.ts` +- Modify: `worker/src/worker.ts` + +- [ ] **Step 1: Create grouping.ts module** + +Create `worker/src/grouping.ts`: + +```typescript +import type { Client } from "tdl"; +import type { TelegramMessage } from "./archive/multipart.js"; +import type { TelegramPhoto } from "./preview/match.js"; +import { downloadPhotoThumbnail } from "./tdlib/download.js"; +import { createOrFindPackageGroup, linkPackagesToGroup } from "./db/queries.js"; +import { childLogger } from "./util/logger.js"; +import { db } from "./db/client.js"; + +const log = childLogger("grouping"); + +interface IndexedPackageRef { + packageId: string; + sourceMessageId: bigint; + mediaAlbumId?: string; +} + +/** + * After a scan cycle's packages are individually indexed, detect album groups + * and create PackageGroup records linking the members. 
+ * + * - Collects indexed packages that share a non-zero mediaAlbumId + * - Creates (or finds existing) PackageGroup per album + * - Links all member packages via packageGroupId + * - Downloads album photo as group preview if available + */ +export async function processAlbumGroups( + client: Client, + sourceChannelId: string, + indexedPackages: IndexedPackageRef[], + photos: TelegramPhoto[] +): Promise { + // Group indexed packages by mediaAlbumId + const albumMap = new Map(); + for (const pkg of indexedPackages) { + if (!pkg.mediaAlbumId || pkg.mediaAlbumId === "0") continue; + const group = albumMap.get(pkg.mediaAlbumId) ?? []; + group.push(pkg); + albumMap.set(pkg.mediaAlbumId, group); + } + + if (albumMap.size === 0) return; + + log.info({ albumCount: albumMap.size }, "Detected album groups to process"); + + for (const [albumId, members] of albumMap) { + if (members.length < 2) continue; // Single-file albums aren't groups + + try { + // Find the first package's fileName for the group name fallback + const firstPkg = await db.package.findFirst({ + where: { id: { in: members.map((m) => m.packageId) } }, + orderBy: { sourceMessageId: "asc" }, + select: { id: true, fileName: true }, + }); + + // Try to find a caption from the album's photo message + const albumPhoto = photos.find((p) => p.mediaAlbumId === albumId); + const groupName = albumPhoto?.caption || firstPkg?.fileName || "Unnamed Group"; + + // Download preview from album photo if available + let previewData: Buffer | null = null; + if (albumPhoto) { + previewData = await downloadPhotoThumbnail(client, albumPhoto.fileId); + } + + const groupId = await createOrFindPackageGroup({ + mediaAlbumId: albumId, + sourceChannelId, + name: groupName, + previewData, + }); + + // Idempotent link — safe to re-run if some packages were indexed in prior scans + const packageIds = members.map((m) => m.packageId); + await linkPackagesToGroup(packageIds, groupId); + + log.info( + { albumId, groupId, groupName, memberCount: 
packageIds.length }, + "Linked packages to album group" + ); + } catch (err) { + log.warn({ albumId, err }, "Failed to create album group — packages still indexed individually"); + } + } +} +``` + +- [ ] **Step 2: Integrate grouping into worker pipeline** + +In `worker/src/worker.ts`, find the `processArchiveSets` function. The function processes archive sets in a loop (around lines 726-772) and tracks `maxProcessedId`. After the processing loop ends, add the grouping step. + +First, add the import at the top of `worker.ts`: + +```typescript +import { processAlbumGroups } from "./grouping.js"; +``` + +Then, in the `processArchiveSets` function, add tracking for indexed packages. Near line 726 (before the archive set loop), add: + +```typescript + const indexedPackageRefs: { packageId: string; sourceMessageId: bigint; mediaAlbumId?: string }[] = []; +``` + +Inside the per-set processing (in `processOneArchiveSet`), after the `createPackageWithFiles` call (around line 1149), the function needs to return the created package ID. Since `processOneArchiveSet` is a void function called from `processArchiveSets`, modify `processArchiveSets` to capture the result. + +The cleanest integration point: in the `processArchiveSets` loop body (around line 740), after a successful `processOneArchiveSet` call, query for the created package by contentHash or source message and push to `indexedPackageRefs`. But simpler: have `processOneArchiveSet` return the package ID. + +Find the `processOneArchiveSet` function signature. Change its return type from `Promise` to `Promise` (returning the created package ID, or null if it reused an existing upload). + +After the `createPackageWithFiles` call (around line 1149), capture the return value: + +```typescript + const pkg = await createPackageWithFiles({ ... }); + // ... existing code after creation ... + return pkg.id; +``` + +Add `return null;` to the early-return paths (size guard, dedup skip). 
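+
+As a minimal sketch of the signature change described above (the interface, field names, and bodies here are stand-ins invented for illustration, not the real worker code):
+
+```typescript
+// Sketch only: shows the Promise<string | null> return pattern. The real
+// processOneArchiveSet has a different signature and real guard logic.
+interface ArchiveSetStub {
+  id: string;
+  tooLarge: boolean; // stands in for the size guard
+  alreadyIndexed: boolean; // stands in for the dedup skip
+}
+
+async function processOneArchiveSet(set: ArchiveSetStub): Promise<string | null> {
+  if (set.tooLarge) return null; // early-return paths now yield null
+  if (set.alreadyIndexed) return null;
+  const pkg = { id: `pkg-${set.id}` }; // stands in for createPackageWithFiles(...)
+  return pkg.id; // success path returns the created package id
+}
+
+async function demo(): Promise<Array<string | null>> {
+  // Caller keeps only non-null ids for the grouping post-processing step
+  return Promise.all([
+    processOneArchiveSet({ id: "1", tooLarge: false, alreadyIndexed: false }),
+    processOneArchiveSet({ id: "2", tooLarge: true, alreadyIndexed: false }),
+  ]);
+}
+```
+
+The caller then filters out the `null` entries before pushing into `indexedPackageRefs`, so skipped or deduplicated sets never reach the grouping step.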
+ +Back in `processArchiveSets`, in the success branch of the try/catch (around line 740), capture the return: + +```typescript + const packageId = await processOneArchiveSet(/* ... */); + if (packageId) { + const firstPart = archiveSet.parts[0]; + indexedPackageRefs.push({ + packageId, + sourceMessageId: firstPart.id, + mediaAlbumId: firstPart.mediaAlbumId, + }); + } +``` + +After the loop (around line 773, before `return maxProcessedId`), add: + +```typescript + // Post-processing: group packages by Telegram album ID + if (indexedPackageRefs.length > 0) { + await processAlbumGroups( + ctx.client, + channel.id, + indexedPackageRefs, + scanResult.photos + ); + } +``` + +- [ ] **Step 3: Build worker to verify** + +```bash +cd /e/Projects/DragonsStash/worker && npm run build +``` + +Expected: Clean build. + +- [ ] **Step 4: Commit** + +```bash +cd /e/Projects/DragonsStash && git add worker/src/grouping.ts worker/src/worker.ts +git commit -m "feat: add album grouping post-processing to worker pipeline" +``` + +--- + +## Task 6: App — Types + +**Files:** +- Modify: `src/lib/telegram/types.ts` + +- [ ] **Step 1: Add PackageGroupRow and DisplayItem types** + +At the end of `src/lib/telegram/types.ts`, add: + +```typescript +export interface PackageGroupRow { + id: string; + name: string; + hasPreview: boolean; + totalFileSize: string; + totalFileCount: number; + packageCount: number; + combinedTags: string[]; + archiveTypes: ("ZIP" | "RAR" | "SEVEN_Z" | "DOCUMENT")[]; + latestIndexedAt: string; + sourceChannel: { id: string; title: string }; + packages: PackageListItem[]; +} + +export type DisplayItem = + | { type: "package"; data: PackageListItem } + | { type: "group"; data: PackageGroupRow }; +``` + +- [ ] **Step 2: Verify app build** + +```bash +cd /e/Projects/DragonsStash && npm run build +``` + +Expected: Clean build. Types are exported but not yet consumed. 
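+
+As a quick illustration of how consumers can narrow the `DisplayItem` union, here is a self-contained sketch with stub types (the real `PackageListItem` and `PackageGroupRow` carry many more fields than shown):
+
+```typescript
+// Stub types for illustration; the real interfaces are defined in types.ts.
+interface PackageListItemStub { id: string; fileName: string }
+interface PackageGroupRowStub { id: string; name: string; packageCount: number }
+
+type DisplayItemStub =
+  | { type: "package"; data: PackageListItemStub }
+  | { type: "group"; data: PackageGroupRowStub };
+
+// The `type` discriminant narrows `data` automatically in each branch,
+// so table cells can render package rows and group rows without casts.
+function rowLabel(item: DisplayItemStub): string {
+  switch (item.type) {
+    case "package":
+      return item.data.fileName;
+    case "group":
+      return `${item.data.name} (${item.data.packageCount} packages)`;
+  }
+}
+```
+
+This is the same exhaustiveness guarantee the table components rely on: adding a third variant to the union becomes a compile error in every `switch` that forgets to handle it.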
+ +- [ ] **Step 3: Commit** + +```bash +cd /e/Projects/DragonsStash && git add src/lib/telegram/types.ts +git commit -m "feat: add PackageGroupRow and DisplayItem types" +``` + +--- + +## Task 7: App — Queries + +**Files:** +- Modify: `src/lib/telegram/queries.ts` + +- [ ] **Step 1: Add listDisplayItems query** + +Add the following function to `src/lib/telegram/queries.ts`: + +```typescript +export async function listDisplayItems(options: { + page: number; + limit: number; + channelId?: string; + creator?: string; + tag?: string; + sortBy: "indexedAt" | "fileName" | "fileSize"; + order: "asc" | "desc"; +}): Promise<{ items: DisplayItem[]; pagination: PaginatedResponse["pagination"] }> { + const { page, limit, channelId, creator, tag, sortBy, order } = options; + + // Build WHERE clause fragments for raw SQL + const conditions: string[] = []; + const params: unknown[] = []; + let paramIdx = 1; + + if (channelId) { + conditions.push(`p."sourceChannelId" = $${paramIdx++}`); + params.push(channelId); + } + if (creator) { + conditions.push(`p."creator" = $${paramIdx++}`); + params.push(creator); + } + if (tag) { + conditions.push(`$${paramIdx++} = ANY(p."tags")`); + params.push(tag); + } + + const whereClause = conditions.length > 0 ? `WHERE ${conditions.join(" AND ")}` : ""; + + // Sort column mapping + const sortCol = sortBy === "fileName" ? `"fileName"` : sortBy === "fileSize" ? `"fileSize"` : `"indexedAt"`; + const sortDir = order === "asc" ? 
"ASC" : "DESC"; + + // Step 1: Count display items + const countSql = ` + SELECT COUNT(*) AS count FROM ( + SELECT DISTINCT COALESCE(p."packageGroupId", p."id") AS display_id + FROM packages p + ${whereClause} + ) AS display_items + `; + const countResult = await prisma.$queryRawUnsafe<[{ count: bigint }]>(countSql, ...params); + const total = Number(countResult[0].count); + + // Step 2: Get display item IDs for this page + const itemsSql = ` + SELECT + COALESCE(p."packageGroupId", p."id") AS display_id, + CASE WHEN p."packageGroupId" IS NOT NULL THEN 'group' ELSE 'package' END AS display_type, + MAX(p.${sortCol}) AS sort_value + FROM packages p + ${whereClause} + GROUP BY COALESCE(p."packageGroupId", p."id"), + CASE WHEN p."packageGroupId" IS NOT NULL THEN 'group' ELSE 'package' END + ORDER BY sort_value ${sortDir} + LIMIT $${paramIdx++} OFFSET $${paramIdx++} + `; + params.push(limit, (page - 1) * limit); + + const displayRows = await prisma.$queryRawUnsafe< + { display_id: string; display_type: "group" | "package" }[] + >(itemsSql, ...params); + + // Step 3: Fetch full data for each display item + const groupIds = displayRows.filter((r) => r.display_type === "group").map((r) => r.display_id); + const packageIds = displayRows.filter((r) => r.display_type === "package").map((r) => r.display_id); + + // Fetch standalone packages + const standalonePackages = packageIds.length > 0 + ? await prisma.package.findMany({ + where: { id: { in: packageIds } }, + select: { + id: true, fileName: true, fileSize: true, contentHash: true, + archiveType: true, fileCount: true, isMultipart: true, + indexedAt: true, creator: true, tags: true, previewData: true, + sourceChannel: { select: { id: true, title: true } }, + }, + }) + : []; + + // Fetch groups with their member packages + const groups = groupIds.length > 0 + ? 
await prisma.packageGroup.findMany({ + where: { id: { in: groupIds } }, + select: { + id: true, name: true, previewData: true, + sourceChannel: { select: { id: true, title: true } }, + packages: { + select: { + id: true, fileName: true, fileSize: true, contentHash: true, + archiveType: true, fileCount: true, isMultipart: true, + indexedAt: true, creator: true, tags: true, previewData: true, + sourceChannel: { select: { id: true, title: true } }, + }, + orderBy: { indexedAt: "desc" }, + }, + }, + }) + : []; + + // Build DisplayItem array in the original sort order + const packageMap = new Map(standalonePackages.map((p) => [p.id, p])); + const groupMap = new Map(groups.map((g) => [g.id, g])); + + const items: DisplayItem[] = displayRows.map((row) => { + if (row.display_type === "package") { + const pkg = packageMap.get(row.display_id)!; + return { + type: "package" as const, + data: { + id: pkg.id, + fileName: pkg.fileName, + fileSize: pkg.fileSize.toString(), + contentHash: pkg.contentHash, + archiveType: pkg.archiveType, + fileCount: pkg.fileCount, + isMultipart: pkg.isMultipart, + hasPreview: pkg.previewData !== null, + creator: pkg.creator, + tags: pkg.tags, + indexedAt: pkg.indexedAt.toISOString(), + sourceChannel: pkg.sourceChannel, + matchedFileCount: 0, + matchedByContent: false, + }, + }; + } else { + const grp = groupMap.get(row.display_id)!; + const allTags = [...new Set(grp.packages.flatMap((p) => p.tags))]; + const archiveTypes = [...new Set(grp.packages.map((p) => p.archiveType))]; + return { + type: "group" as const, + data: { + id: grp.id, + name: grp.name, + hasPreview: grp.previewData !== null, + totalFileSize: grp.packages.reduce((sum, p) => sum + p.fileSize, 0n).toString(), + totalFileCount: grp.packages.reduce((sum, p) => sum + p.fileCount, 0), + packageCount: grp.packages.length, + combinedTags: allTags, + archiveTypes, + latestIndexedAt: grp.packages.length > 0 + ? 
grp.packages[0].indexedAt.toISOString() + : new Date().toISOString(), + sourceChannel: grp.sourceChannel, + packages: grp.packages.map((pkg) => ({ + id: pkg.id, + fileName: pkg.fileName, + fileSize: pkg.fileSize.toString(), + contentHash: pkg.contentHash, + archiveType: pkg.archiveType, + fileCount: pkg.fileCount, + isMultipart: pkg.isMultipart, + hasPreview: pkg.previewData !== null, + creator: pkg.creator, + tags: pkg.tags, + indexedAt: pkg.indexedAt.toISOString(), + sourceChannel: pkg.sourceChannel, + matchedFileCount: 0, + matchedByContent: false, + })), + }, + }; + } + }); + + return { + items, + pagination: { page, limit, total, totalPages: Math.ceil(total / limit) }, + }; +} +``` + +- [ ] **Step 2: Add group CRUD queries** + +Add these functions to `src/lib/telegram/queries.ts`: + +```typescript +export async function getPackageGroup(groupId: string) { + return prisma.packageGroup.findUnique({ + where: { id: groupId }, + select: { + id: true, name: true, previewData: true, mediaAlbumId: true, + sourceChannelId: true, createdAt: true, + sourceChannel: { select: { id: true, title: true } }, + packages: { + select: { + id: true, fileName: true, fileSize: true, archiveType: true, + fileCount: true, creator: true, tags: true, + }, + orderBy: { indexedAt: "desc" }, + }, + }, + }); +} + +export async function updatePackageGroupName(groupId: string, name: string) { + return prisma.packageGroup.update({ + where: { id: groupId }, + data: { name: name.trim() }, + }); +} + +export async function updatePackageGroupPreview(groupId: string, previewData: Buffer) { + return prisma.packageGroup.update({ + where: { id: groupId }, + data: { previewData: new Uint8Array(previewData) }, + }); +} + +export async function createManualGroup(name: string, packageIds: string[]) { + // Verify all packages belong to the same channel (cross-channel groups are not supported) + const pkgs = await prisma.package.findMany({ + where: { id: { in: packageIds } }, + select: { sourceChannelId: 
true }, + }); + const channelIds = new Set(pkgs.map((p) => p.sourceChannelId)); + if (channelIds.size > 1) { + throw new Error("Cannot group packages from different channels"); + } + + const firstPkg = pkgs[0]; + + const group = await prisma.packageGroup.create({ + data: { + name: name.trim(), + sourceChannelId: firstPkg.sourceChannelId, + }, + }); + + // Move packages to new group (removes from any existing group) + await prisma.package.updateMany({ + where: { id: { in: packageIds } }, + data: { packageGroupId: group.id }, + }); + + // Clean up empty groups left behind + await prisma.packageGroup.deleteMany({ + where: { + packages: { none: {} }, + id: { not: group.id }, + }, + }); + + return group; +} + +export async function addPackagesToGroup(packageIds: string[], groupId: string) { + await prisma.package.updateMany({ + where: { id: { in: packageIds } }, + data: { packageGroupId: groupId }, + }); + + // Clean up empty groups left behind + await prisma.packageGroup.deleteMany({ + where: { packages: { none: {} } }, + }); +} + +export async function removePackageFromGroup(packageId: string) { + const pkg = await prisma.package.findUniqueOrThrow({ + where: { id: packageId }, + select: { packageGroupId: true }, + }); + + if (!pkg.packageGroupId) return; + + await prisma.package.update({ + where: { id: packageId }, + data: { packageGroupId: null }, + }); + + // Clean up empty group + await prisma.packageGroup.deleteMany({ + where: { id: pkg.packageGroupId, packages: { none: {} } }, + }); +} + +export async function dissolveGroup(groupId: string) { + await prisma.package.updateMany({ + where: { packageGroupId: groupId }, + data: { packageGroupId: null }, + }); + await prisma.packageGroup.delete({ where: { id: groupId } }); +} +``` + +- [ ] **Step 3: Add import for DisplayItem type** + +At the top of `src/lib/telegram/queries.ts`, ensure `DisplayItem` and `PackageGroupRow` are imported from `./types`: + +```typescript +import type { PackageListItem, PackageDetail, 
PaginatedResponse, DisplayItem, PackageGroupRow } from "./types"; +``` + +- [ ] **Step 4: Update searchPackages to include group names** + +In the `searchPackages` function, add a `LEFT JOIN` to `package_groups` when building the query. When `searchIn` is `"packages"` or `"both"`, add `PackageGroup.name` as an additional search target: + +After the existing `where: { fileName: { contains: query, mode: "insensitive" } }` block for package name matching, also find packages whose group name matches: + +```typescript +// Also match by group name +const groupNameMatches = await prisma.package.findMany({ + where: { + packageGroup: { name: { contains: query, mode: "insensitive" } }, + }, + select: { id: true }, +}); +const groupMatchIds = groupNameMatches.map((p) => p.id); +``` + +Merge `groupMatchIds` into the existing `allIds` set before the final query. + +- [ ] **Step 5: Verify app build** + +```bash +cd /e/Projects/DragonsStash && npm run build +``` + +Expected: Clean build. + +- [ ] **Step 6: Commit** + +```bash +cd /e/Projects/DragonsStash && git add src/lib/telegram/queries.ts src/lib/telegram/types.ts +git commit -m "feat: add listDisplayItems query, search by group name, and group CRUD operations" +``` + +--- + +## Task 8: App — Group Preview API Route + +**Files:** +- Create: `src/app/api/groups/[id]/preview/route.ts` + +- [ ] **Step 1: Create group preview endpoint** + +Create `src/app/api/groups/[id]/preview/route.ts`: + +```typescript +import { NextResponse } from "next/server"; +import { prisma } from "@/lib/prisma"; +import { authenticateApiRequest } from "@/lib/telegram/api-auth"; + +/** + * GET /api/groups/:id/preview + * Returns the group's preview thumbnail image as JPEG binary. 
+ */
+export async function GET(
+  request: Request,
+  { params }: { params: Promise<{ id: string }> }
+) {
+  const authResult = await authenticateApiRequest(request);
+  if ("error" in authResult) return authResult.error;
+
+  const { id } = await params;
+
+  const group = await prisma.packageGroup.findUnique({
+    where: { id },
+    select: { previewData: true },
+  });
+
+  if (!group || !group.previewData) {
+    return new NextResponse(null, { status: 404 });
+  }
+
+  const buffer =
+    group.previewData instanceof Buffer
+      ? group.previewData
+      : Buffer.from(group.previewData);
+
+  return new NextResponse(buffer, {
+    status: 200,
+    headers: {
+      "Content-Type": "image/jpeg",
+      "Content-Length": String(buffer.length),
+      "Cache-Control": "public, max-age=3600, immutable",
+    },
+  });
+}
+```
+
+- [ ] **Step 2: Verify app build**
+
+```bash
+cd /e/Projects/DragonsStash && npm run build
+```
+
+- [ ] **Step 3: Commit**
+
+```bash
+cd /e/Projects/DragonsStash && git add src/app/api/groups/
+git commit -m "feat: add group preview image API endpoint"
+```
+
+---
+
+## Task 9: App — Server Actions for Groups
+
+**Files:**
+- Modify: `src/app/(app)/stls/actions.ts`
+
+- [ ] **Step 1: Add group server actions**
+
+Import the new query functions at the top of `src/app/(app)/stls/actions.ts`:
+
+```typescript
+import {
+  updatePackageGroupName,
+  updatePackageGroupPreview,
+  createManualGroup,
+  removePackageFromGroup,
+  dissolveGroup,
+  addPackagesToGroup,
+} from "@/lib/telegram/queries";
+```
+
+Then add these server actions after the existing ones:
+
+```typescript
+export async function renameGroupAction(
+  groupId: string,
+  name: string
+): Promise<{ success: boolean; error?: string }> {
+  const session = await auth();
+  if (!session?.user?.id) return { success: false, error: "Unauthorized" };
+
+  if (!name.trim()) return { success: false, error: "Group name is required" };
+
+  await updatePackageGroupName(groupId, name);
+  revalidatePath("/stls");
+  return { success: true };
+}
+
+export async function
dissolveGroupAction(groupId: string): Promise<{ success: boolean; error?: string }> {
+  const session = await auth();
+  if (!session?.user?.id) return { success: false, error: "Unauthorized" };
+
+  await dissolveGroup(groupId);
+  revalidatePath("/stls");
+  return { success: true };
+}
+
+export async function createGroupAction(
+  name: string,
+  packageIds: string[]
+): Promise<{ success: boolean; error?: string }> {
+  const session = await auth();
+  if (!session?.user?.id) return { success: false, error: "Unauthorized" };
+
+  if (!name.trim()) return { success: false, error: "Group name is required" };
+  if (packageIds.length < 2) return { success: false, error: "Need at least 2 packages" };
+
+  await createManualGroup(name, packageIds);
+  revalidatePath("/stls");
+  return { success: true };
+}
+
+export async function removeFromGroupAction(packageId: string): Promise<{ success: boolean; error?: string }> {
+  const session = await auth();
+  if (!session?.user?.id) return { success: false, error: "Unauthorized" };
+
+  await removePackageFromGroup(packageId);
+  revalidatePath("/stls");
+  return { success: true };
+}
+
+export async function updateGroupPreviewAction(
+  groupId: string,
+  formData: FormData
+): Promise<{ success: boolean; error?: string }> {
+  const session = await auth();
+  if (!session?.user?.id) return { success: false, error: "Unauthorized" };
+
+  const file = formData.get("preview") as File | null;
+  if (!file) return { success: false, error: "No file provided" };
+
+  const buffer = Buffer.from(await file.arrayBuffer());
+  await updatePackageGroupPreview(groupId, buffer);
+  revalidatePath("/stls");
+  return { success: true };
+}
+
+export async function sendAllInGroupAction(groupId: string): Promise<{ success: boolean; error?: string }> {
+  const session = await auth();
+  if (!session?.user?.id) return { success: false, error: "Unauthorized" };
+
+  // Resolve the user's linked Telegram account (same pattern as /api/telegram/bot/send)
+  const telegramLink = await prisma.telegramLink.findUnique({
+    where: { userId: session.user.id },
+  });
+  if (!telegramLink) {
+    return { success: false, error: "No linked Telegram account.
Link one in Settings → Telegram." }; + } + + const group = await prisma.packageGroup.findUnique({ + where: { id: groupId }, + select: { + packages: { + where: { destChannelId: { not: null }, destMessageId: { not: null } }, + select: { id: true }, + }, + }, + }); + + if (!group) return { success: false, error: "Group not found" }; + if (group.packages.length === 0) return { success: false, error: "No uploadable packages in group" }; + + // Queue send requests for each package, skipping those with existing pending/sending requests + for (const pkg of group.packages) { + const existingPending = await prisma.botSendRequest.findFirst({ + where: { + packageId: pkg.id, + telegramLinkId: telegramLink.id, + status: { in: ["PENDING", "SENDING"] }, + }, + }); + if (!existingPending) { + await prisma.botSendRequest.create({ + data: { + packageId: pkg.id, + telegramLinkId: telegramLink.id, + requestedByUserId: session.user.id, + status: "PENDING", + }, + }); + } + } + + revalidatePath("/stls"); + return { success: true }; +} +``` + +- [ ] **Step 2: Add prisma import if not present** + +Make sure `prisma` is imported: + +```typescript +import { prisma } from "@/lib/prisma"; +``` + +- [ ] **Step 3: Verify app build** + +```bash +cd /e/Projects/DragonsStash && npm run build +``` + +- [ ] **Step 4: Commit** + +```bash +cd /e/Projects/DragonsStash && git add src/app/(app)/stls/actions.ts +git commit -m "feat: add server actions for group rename, dissolve, create, remove, preview, and send all" +``` + +--- + +## Task 10: App — Update Page to Use Display Items + +**Files:** +- Modify: `src/app/(app)/stls/page.tsx` + +- [ ] **Step 1: Switch to listDisplayItems** + +In `src/app/(app)/stls/page.tsx`, update the imports to include `listDisplayItems`: + +```typescript +import { listDisplayItems, searchPackages, getIngestionStatus, getAllPackageTags, countSkippedPackages, listSkippedPackages } from "@/lib/telegram/queries"; +``` + +Update the data fetch in the parallel `Promise.all`. 
Replace the `listPackages` call: + +```typescript + const [result, ingestionStatus, availableTags, skippedCount] = await Promise.all([ + search + ? searchPackages({ query: search, page, limit: perPage, searchIn: "both" }) + : listDisplayItems({ page, limit: perPage, creator, tag, sortBy: sort as "indexedAt" | "fileName" | "fileSize", order }), + getIngestionStatus(), + getAllPackageTags(), + countSkippedPackages(), + ]); +``` + +Update the props passed to `StlTable` — `result.items` is now `DisplayItem[]` when not searching, and `PackageListItem[]` when searching. The `StlTable` component will need to handle both, so wrap search results as `DisplayItem[]`: + +```typescript + const displayItems = search + ? result.items.map((item: PackageListItem) => ({ type: "package" as const, data: item })) + : result.items; +``` + +Pass `displayItems` instead of `result.items` to `StlTable`. + +- [ ] **Step 2: Update StlTable props type** + +This will be done in Task 11 when we modify the table component. For now, just ensure the page passes the right data shape. + +- [ ] **Step 3: Commit** + +```bash +cd /e/Projects/DragonsStash && git add src/app/(app)/stls/page.tsx +git commit -m "feat: switch STL page from listPackages to listDisplayItems" +``` + +--- + +## Task 11: App — UI Table with Group Support + +**Files:** +- Modify: `src/app/(app)/stls/_components/stl-table.tsx` +- Modify: `src/app/(app)/stls/_components/package-columns.tsx` +- Create: `src/app/(app)/stls/_components/group-row.tsx` +- Create: `src/app/(app)/stls/_components/group-toolbar.tsx` + +This is the largest task. It modifies the table to render both group rows and package rows, with expand/collapse and selection for manual grouping. + +- [ ] **Step 1: Create group-row.tsx component** + +Create `src/app/(app)/stls/_components/group-row.tsx`. This component renders a single group as a collapsible row. When collapsed it shows aggregates; when expanded it shows a header row + indented member packages. 
+
+Key elements:
+- Chevron toggle button (ChevronRight rotated when expanded)
+- Preview thumbnail (from `/api/groups/${groupId}/preview` or fallback icon)
+- Editable group name (click to edit inline, calls `renameGroupAction`)
+- "Mixed" type badge or most common type
+- Combined size, file count, tag badges
+- Actions: Send All, Dissolve Group (with confirmation dialog)
+- Expanded state renders member `PackageRow` entries with indent and "Remove from group" action
+
+Use the existing UI patterns from `package-columns.tsx` for consistency (same badge styles, size formatting, etc.).
+
+- [ ] **Step 2: Create group-toolbar.tsx component**
+
+Create `src/app/(app)/stls/_components/group-toolbar.tsx`. Shows when 2+ packages are selected:
+- "Group N Selected" button
+- Clicking it opens a dialog prompting for group name
+- On submit, calls `createGroupAction(name, selectedPackageIds)`
+- Clears selection after success
+
+- [ ] **Step 3: Update package-columns.tsx**
+
+In `src/app/(app)/stls/_components/package-columns.tsx`:
+
+Add a checkbox column as the first column for row selection:
+
+```typescript
+{
+  id: "select",
+  header: ({ table }) => (
+    <Checkbox
+      checked={table.getIsAllPageRowsSelected()}
+      onCheckedChange={(value) => table.toggleAllPageRowsSelected(!!value)}
+      aria-label="Select all"
+      className="h-4 w-4"
+    />
+  ),
+  cell: ({ row }) => (
+    <Checkbox
+      checked={row.getIsSelected()}
+      onCheckedChange={(value) => row.toggleSelected(!!value)}
+      aria-label="Select row"
+      className="h-4 w-4"
+    />
+  ),
+  enableSorting: false,
+  enableHiding: false,
+  size: 32,
+}
+```
+
+Add a "Remove from group" option in the actions dropdown for packages that have a `packageGroupId`. This means `PackageRow` needs to carry `packageGroupId: string | null` so the cell can conditionally show the action.
+
+- [ ] **Step 4: Update stl-table.tsx**
+
+In `src/app/(app)/stls/_components/stl-table.tsx`:
+
+Update `StlTableProps` to accept `DisplayItem[]`:
+
+```typescript
+interface StlTableProps {
+  data: DisplayItem[];
+  pageCount: number;
+  totalCount: number;
+  // ... rest stays the same
+}
+```
+
+Add state for:
+- `expandedGroups: Set<string>` — which group IDs are expanded
+- Row selection via TanStack Table's built-in selection
+
+Render logic:
+- Iterate over `data` items
+- If `item.type === "group"`:
+  - Render a `<GroupRow />` component
+  - If expanded, render the member packages as indented rows using the existing column definitions
+- If `item.type === "package"`:
+  - Render a normal package row as today
+
+Show the `<GroupToolbar />` when the selected count is >= 2.
+
+The `DataTable` component (`src/components/shared/data-table.tsx`) renders rows generically from TanStack Table. Since we need custom group rows interspersed, the cleanest approach is to **not use the generic `DataTable` for the packages tab** and instead render the table body directly in `stl-table.tsx`, similar to how `DataTable` does it but with group-awareness.
+
+- [ ] **Step 5: Verify app build**
+
+```bash
+cd /e/Projects/DragonsStash && npm run build
+```
+
+- [ ] **Step 6: Manual testing**
+
+1. Start the dev environment: `npm run dev`
+2. Navigate to `/stls`
+3. Verify standalone packages render as before
+4. Create a manual group: select 2+ packages via checkboxes, click "Group Selected", enter a name
+5. Verify the group appears as a collapsed row with aggregated data
+6. Click the chevron to expand — member packages appear indented
+7. Click group name to edit it inline
+8. Test "Remove from group" on a member package
+9. Test "Dissolve Group" on the group row
+10. Test "Send All" on the group row
+
+- [ ] **Step 7: Commit**
+
+```bash
+cd /e/Projects/DragonsStash && git add src/app/(app)/stls/_components/
+git commit -m "feat: add group row rendering, expand/collapse, selection, and manual grouping UI"
+```
+
+---
+
+## Task 12: App — Group Preview Upload in UI
+
+**Files:**
+- Modify: `src/app/(app)/stls/_components/group-row.tsx`
+
+- [ ] **Step 1: Add preview upload to group row**
+
+In the group row's preview cell, make the thumbnail clickable. On click, open a file input dialog.
On file selection, call `updateGroupPreviewAction(groupId, formData)`.
+
+Reuse the pattern from the existing package preview upload in `package-files-drawer.tsx` — it uses a hidden `<input type="file">` triggered by a button click, then submits via FormData.
+
+- [ ] **Step 2: Verify and commit**
+
+```bash
+cd /e/Projects/DragonsStash && npm run build
+git add src/app/(app)/stls/_components/group-row.tsx
+git commit -m "feat: add preview image upload to group rows"
+```
+
+---
+
+## Verification Checklist
+
+After all tasks are complete, verify end-to-end:
+
+- [ ] Worker builds cleanly: `cd worker && npm run build`
+- [ ] App builds cleanly: `npm run build`
+- [ ] Migration applied: `npm run db:migrate`
+- [ ] Worker scans a channel with an album of files → PackageGroup created automatically
+- [ ] STL table shows album groups as collapsed rows
+- [ ] Expand/collapse works
+- [ ] Manual grouping (select + group) works
+- [ ] Group rename works
+- [ ] Group dissolve works
+- [ ] Remove from group works
+- [ ] Send All works (queues requests for all members)
+- [ ] Group preview upload works
+- [ ] Search finds groups by name
+- [ ] Filtering by tag/creator correctly shows groups when any member matches
+- [ ] Pagination is correct (groups take one slot)
diff --git a/docs/superpowers/specs/2026-03-24-search-indicators-size-limit-skipped-files-design.md b/docs/superpowers/specs/2026-03-24-search-indicators-size-limit-skipped-files-design.md
new file mode 100644
index 0000000..4964bf2
--- /dev/null
+++ b/docs/superpowers/specs/2026-03-24-search-indicators-size-limit-skipped-files-design.md
@@ -0,0 +1,241 @@
+# Design: Search Match Indicators, Size Limit Increase, Skipped/Failed Files Overview
+
+**Date:** 2026-03-24
+**Status:** Approved
+
+## Overview
+
+Three related improvements to the STL packages system:
+
+1. **Search match indicators** — Show which internal files matched a search query, with highlighted files in the drawer
+2.
**Size limit increase** — Raise the ingestion limit from 4 GB to 200 GB so large multipart archives aren't skipped +3. **Skipped/failed files overview** — Track and display archives that were skipped or failed, with retry capability + +--- + +## Feature 1: Size Limit Increase + +### Change + +`worker/src/util/config.ts` line 6 — change default from `"4096"` to `"204800"`. + +One-line change. The split/upload pipeline already handles arbitrary sizes. The 2 GB per-part Telegram API limit is a separate hard-coded constant and stays as-is. + +### Impact + +- Archives up to 200 GB will now be attempted +- Multipart archives where individual parts are under 2 GB (but total exceeds 4 GB) will no longer be skipped — these upload directly without any splitting +- Single files over 2 GB are automatically split into 2 GB parts (existing behavior) +- Temp disk usage during processing can now reach up to ~200 GB per archive + +--- + +## Feature 2: Search Match Indicators + +### Backend Changes + +**File:** `src/lib/telegram/queries.ts` — `searchPackages()` + +When `searchIn` is `"files"` or `"both"`, change the PackageFile query from `distinct` to a **grouped count**: + +```typescript +// Current: findMany with select: { packageId }, distinct: ["packageId"] +// New: groupBy packageId with _count +const fileMatches = await prisma.packageFile.groupBy({ + by: ["packageId"], + where: { + OR: [ + { fileName: { contains: q, mode: "insensitive" } }, + { path: { contains: q, mode: "insensitive" } }, + ], + }, + _count: { _all: true }, +}); +``` + +This returns `{ packageId: string, _count: { _all: number } }[]`. + +Note: `PackageRow` in `package-columns.tsx` mirrors `PackageListItem` and must also receive the two new fields. 
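The grouped counts can then be folded into the search results as the two new fields are assembled. A sketch, not the repo's actual code — `annotateMatches` is an illustrative helper, and the `FileMatch` shape mirrors the `groupBy` result above:

```typescript
// Shape returned by the groupBy call above (assumed).
interface FileMatch {
  packageId: string;
  _count: { _all: number };
}

// Annotate list items with per-package match info. Works on any item type
// that carries an `id`, so the real PackageListItem fits without changes.
function annotateMatches<T extends { id: string }>(
  items: T[],
  fileMatches: FileMatch[]
): (T & { matchedFileCount: number; matchedByContent: boolean })[] {
  // One Map lookup per item instead of scanning fileMatches repeatedly.
  const counts = new Map(fileMatches.map((m) => [m.packageId, m._count._all]));
  return items.map((item) => {
    const matchedFileCount = counts.get(item.id) ?? 0;
    return { ...item, matchedFileCount, matchedByContent: matchedFileCount > 0 };
  });
}
```

Packages matched only by package name simply get `matchedFileCount: 0` and `matchedByContent: false`, which is what the badge rendering below keys off.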
+ +**File:** `src/lib/telegram/types.ts` — `PackageListItem` + +Add two fields: +- `matchedFileCount: number` — how many files inside matched (0 if matched by package name only) +- `matchedByContent: boolean` — true if any files inside matched + +### Frontend Changes + +**File:** `src/app/(app)/stls/page.tsx` + +Pass the search term to `StlTable` as a new prop. + +**File:** `src/app/(app)/stls/_components/stl-table.tsx` + +Pass search term to columns via TanStack Table column meta. + +**File:** `src/app/(app)/stls/_components/package-columns.tsx` + +When search is active and `matchedByContent` is true, render a clickable badge below the filename: e.g., "3 file matches". Clicking opens the `PackageFilesDrawer` with a `highlightTerm` prop set to the search term. + +**File:** `src/app/(app)/stls/_components/package-files-drawer.tsx` + +- Accept optional `highlightTerm: string` prop +- Render full file tree as normal (all files visible) +- Files whose `fileName` or `path` case-insensitively contains `highlightTerm` get a subtle highlight (amber/yellow background on the row) +- Auto-expand folders that contain highlighted files +- The drawer's own search input remains independent + +### Data Flow + +1. User types search term in STL table search input +2. URL updates with `?search=value`, page reloads +3. `page.tsx` calls `searchPackages()` with `searchIn: "both"` +4. Query returns packages with `matchedFileCount` and `matchedByContent` +5. Table renders "N file matches" badge on content-matched rows +6. User clicks badge -> drawer opens with full tree, matching files highlighted +7. Folders containing matches auto-expanded + +--- + +## Feature 3: Skipped/Failed Files Overview + +### Database Schema + +New model in `prisma/schema.prisma`: + +```prisma +enum SkipReason { + SIZE_LIMIT + DOWNLOAD_FAILED + EXTRACT_FAILED + UPLOAD_FAILED +} + +model SkippedPackage { + id String @id @default(cuid()) + fileName String + fileSize BigInt + reason SkipReason + errorMessage String? 
+ sourceChannelId String + sourceChannel TelegramChannel @relation(fields: [sourceChannelId], references: [id], onDelete: Cascade) + sourceMessageId BigInt + sourceTopicId BigInt? + isMultipart Boolean @default(false) + partCount Int @default(1) + accountId String + account TelegramAccount @relation(fields: [accountId], references: [id], onDelete: Cascade) + createdAt DateTime @default(now()) + + @@unique([sourceChannelId, sourceMessageId]) + @@index([reason]) + @@index([accountId]) + @@map("skipped_packages") +} +``` + +Reverse relations must be added to `TelegramChannel` and `TelegramAccount` models: +```prisma +// In TelegramChannel: +skippedPackages SkippedPackage[] + +// In TelegramAccount: +skippedPackages SkippedPackage[] +``` + +### Worker Changes + +**File:** `worker/src/worker.ts` + +Extend `PipelineContext` interface to include `accountId` (derived from the ingestion run's account). + +At each skip/failure point, upsert a `SkippedPackage` record: + +- **Size limit skip** (line 784): reason `SIZE_LIMIT`, no error message +- **Download failure** (catch in download loop): reason `DOWNLOAD_FAILED` + error text +- **Extract/metadata failure** (catch in extract): reason `EXTRACT_FAILED` + error text +- **Upload failure** (catch in upload): reason `UPLOAD_FAILED` + error text + +On **successful ingestion** of a package, delete any existing `SkippedPackage` with the same `(sourceChannelId, sourceMessageId)` — so successful retries clean up after themselves. + +**File:** `worker/src/db/queries.ts` + +Add functions: +- `upsertSkippedPackage(data)` — create or update skip record +- `deleteSkippedPackage(sourceChannelId, sourceMessageId)` — remove on success + +### Retry Mechanism + +Retrying a skipped package: +1. Delete the `SkippedPackage` record +2. Find the `AccountChannelMap` record using both `accountId` and `sourceChannelId`, then reset its `lastProcessedMessageId` to `sourceMessageId - 1` (only if less than current watermark) +3. 
If `sourceTopicId` is non-null, also reset the corresponding `TopicProgress.lastProcessedMessageId` for that topic +4. The next ingestion cycle picks up the message and re-attempts processing + +For "Retry All" (e.g., all `SIZE_LIMIT` skips after raising the limit): +- Delete all matching `SkippedPackage` records +- For each affected (account, channel) pair, reset `AccountChannelMap` watermark to the minimum `sourceMessageId - 1` among deleted records +- For each affected (account, channel, topic) triple, reset `TopicProgress` watermark similarly + +**Note on behavioral distinction:** `DOWNLOAD_FAILED`, `EXTRACT_FAILED`, and `UPLOAD_FAILED` archives already naturally retry because the worker does not advance the watermark past failed sets. The `SkippedPackage` record provides visibility into these failures. The explicit retry/watermark reset is only strictly needed for `SIZE_LIMIT` skips (where the watermark does advance past the skipped message). The UI should present both types but the retry button is most impactful for `SIZE_LIMIT` skips. + +**Performance note:** "Retry All" can cause the worker to re-scan large message ranges. The existing dedup logic (`packageExistsBySourceMessage`) ensures already-ingested packages are skipped quickly, but there is a scanning cost proportional to the number of messages between the reset watermark and the current position. + +### Frontend Changes + +**File:** `src/app/(app)/stls/_components/stl-table.tsx` + +Add a "Skipped / Failed" tab alongside the main packages table. 
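Returning to the retry mechanism above: for "Retry All", the watermark reset reduces to a per-(account, channel) minimum over the deleted records. A pure sketch with hypothetical record shapes (field names follow the `SkippedPackage` schema; the helper itself is ours):

```typescript
// Minimal view of a deleted SkippedPackage record (assumed shape).
interface SkipRecord {
  accountId: string;
  sourceChannelId: string;
  sourceMessageId: bigint;
}

// For each (account, channel) pair, the watermark must move back to just
// before the OLDEST skipped message so every skipped set gets re-scanned.
function watermarkResets(records: SkipRecord[]): Map<string, bigint> {
  const resets = new Map<string, bigint>();
  for (const r of records) {
    const key = `${r.accountId}:${r.sourceChannelId}`;
    const candidate = r.sourceMessageId - 1n;
    const current = resets.get(key);
    if (current === undefined || candidate < current) resets.set(key, candidate);
  }
  return resets;
}
```

Each resulting entry would then be applied with an update guarded by `lastProcessedMessageId > reset`, so the watermark only ever moves backward, never forward.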
+ +**New file:** `src/app/(app)/stls/_components/skipped-packages-tab.tsx` + +Table columns: +- **fileName** — archive name +- **fileSize** — formatted size +- **reason** — color-coded badge: `SIZE_LIMIT` (yellow), `DOWNLOAD_FAILED` (red), `EXTRACT_FAILED` (red), `UPLOAD_FAILED` (red) +- **errorMessage** — truncated with expandable tooltip/popover for full text +- **channel** — source channel title +- **createdAt** — when the skip/failure was recorded + +Actions: +- **Retry** button per row — server action that deletes record + resets watermark +- **Retry All** button in the header — bulk retry, filterable by reason + +**File:** `src/app/(app)/stls/page.tsx` + +Fetch skipped packages count (for tab badge) alongside existing queries. + +**File:** `src/data/` or `src/lib/telegram/queries.ts` + +Add query functions: +- `listSkippedPackages(options)` — paginated list with reason filter +- `countSkippedPackages()` — for tab badge +- `retrySkippedPackage(id)` — delete record + reset watermark +- `retryAllSkippedPackages(reason?)` — bulk retry + +**File:** `src/app/(app)/stls/actions.ts` + +Add server actions: +- `retrySkippedPackageAction(id)` +- `retryAllSkippedPackagesAction(reason?)` + +--- + +## Files to Create/Modify + +### Create +- `src/app/(app)/stls/_components/skipped-packages-tab.tsx` — skipped packages table UI +- Prisma migration for `SkippedPackage` model + +### Modify +- `worker/src/util/config.ts` — raise default max size +- `worker/src/worker.ts` — record skips/failures, clean up on success +- `worker/src/db/queries.ts` — add skip record CRUD functions +- `prisma/schema.prisma` — add `SkippedPackage` model and `SkipReason` enum +- `src/lib/telegram/queries.ts` — modify `searchPackages()` for match counts, add skipped package queries +- `src/lib/telegram/types.ts` — add `matchedFileCount`/`matchedByContent` to `PackageListItem`, add skipped package types +- `src/app/(app)/stls/page.tsx` — pass search term, fetch skipped count, add tab +- 
`src/app/(app)/stls/_components/stl-table.tsx` — accept search prop, render tabs +- `src/app/(app)/stls/_components/package-columns.tsx` — render match badge +- `src/app/(app)/stls/_components/package-files-drawer.tsx` — accept highlightTerm, highlight matching files, auto-expand matched folders +- `src/app/(app)/stls/actions.ts` — add retry server actions diff --git a/docs/superpowers/specs/2026-03-25-package-grouping-design.md b/docs/superpowers/specs/2026-03-25-package-grouping-design.md new file mode 100644 index 0000000..9f3d696 --- /dev/null +++ b/docs/superpowers/specs/2026-03-25-package-grouping-design.md @@ -0,0 +1,246 @@ +# Package Grouping Design + +## Overview + +Add the ability to group related packages that were posted together in a Telegram channel (e.g., "DUNGEON BLOCKS - Colossal Dungeon" with 6 separate archive files). Groups appear as collapsible rows in the STL files table, with support for both automatic detection via Telegram album IDs and manual grouping through the UI. + +## Goals + +- Automatically detect and group files posted together in Telegram (same `media_album_id`) +- Display groups as collapsed rows in the STL table with aggregated metadata +- Allow manual grouping/ungrouping of packages via the UI +- Support editable group names and preview images +- Enable "Send All" to deliver every package in a group via the bot + +## Non-Goals + +- Merging grouped packages into a single Package record (each stays independent) +- Time-proximity heuristics for grouping (too error-prone) +- Grouping across different source channels + +--- + +## Data Model + +### New `PackageGroup` Table + +```prisma +model PackageGroup { + id String @id @default(cuid()) + name String + mediaAlbumId String? + sourceChannelId String + previewData Bytes? 
+ createdAt DateTime @default(now()) + updatedAt DateTime @updatedAt + + packages Package[] + sourceChannel TelegramChannel @relation(fields: [sourceChannelId], references: [id], onDelete: Cascade) + + @@unique([mediaAlbumId, sourceChannelId]) + @@index([sourceChannelId]) + @@map("package_groups") +} +``` + +### Package Model Changes + +Add optional group membership: + +```prisma +model Package { + // ... existing fields ... + packageGroupId String? + packageGroup PackageGroup? @relation(fields: [packageGroupId], references: [id], onDelete: SetNull) + + @@index([packageGroupId]) +} +``` + +### TelegramChannel Model Changes + +Add back-relation for the new `PackageGroup` model: + +```prisma +model TelegramChannel { + // ... existing fields and relations ... + packageGroups PackageGroup[] +} +``` + +### Key Decisions + +- `mediaAlbumId` is `String?` (TDLib int64 stringified) — only used for dedup lookups, avoids BigInt complexity +- `@@unique([mediaAlbumId, sourceChannelId])` prevents duplicate album-derived groups when re-scanning. 
PostgreSQL treats NULLs as distinct in unique constraints, so manually-created groups (with `mediaAlbumId = null`) are not constrained by this — which is correct behavior +- Idempotency for album groups uses `findFirst({ where: { mediaAlbumId, sourceChannelId } })` + conditional `create`, not `upsert`, because Prisma does not support `upsert` on compound unique keys with nullable fields +- `onDelete: SetNull` on `Package.packageGroup` means dissolving a group automatically unlinks all members +- `onDelete: Cascade` on `PackageGroup.sourceChannel` means deleting a channel cleans up its groups +- `sourceTopicId` is omitted from `PackageGroup` — it can be inferred from member packages, and manual groups may span topics +- `@@map("package_groups")` follows the project's snake_case table naming convention +- `previewData` stores JPEG thumbnail bytes directly on the group (same pattern as Package) + +--- + +## Worker Changes + +### TelegramMessage Interface + +Add optional `mediaAlbumId` field: + +```typescript +export interface TelegramMessage { + id: bigint; + fileName: string; + fileId: string; + fileSize: bigint; + date: Date; + mediaAlbumId?: string; // Absent or "0" when not part of an album +} +``` + +The field is optional to minimize call-site changes. The grouping step treats `undefined` and `"0"` equivalently as "not part of an album." + +### TelegramPhoto Interface + +Add optional `mediaAlbumId` field: + +```typescript +export interface TelegramPhoto { + id: bigint; + date: Date; + caption: string; + fileId: string; + fileSize: number; + mediaAlbumId?: string; // For album-to-preview correlation +} +``` + +### Channel Scanning + +In `getChannelMessages()`, read `media_album_id` from the TDLib message object (already present in TDLib responses, just not captured today). Add `media_album_id?: string` to the `TdMessage` interface and pass through to both `TelegramMessage` and `TelegramPhoto`. 
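Since the grouping step treats `undefined` and `"0"` equivalently, that rule can live in a single normalization helper — a sketch; the function name is ours, not the repo's:

```typescript
// TDLib reports media_album_id as a stringified int64, where "0" means the
// message is not part of an album. Normalize both absence and "0" to undefined
// so downstream grouping code only has one "no album" case to check.
function normalizeAlbumId(raw: string | undefined): string | undefined {
  return raw === undefined || raw === "0" ? undefined : raw;
}
```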
+ +The document pass and photo pass already run as separate loops over `searchChatMessages`. Both loops capture `media_album_id` independently. Correlation happens at grouping time: album photos are matched to album documents by comparing their `mediaAlbumId` values, not at scan time. + +### Group Creation (Post-Processing) + +After each scan cycle's packages are individually processed (downloaded, hashed, uploaded, indexed), a post-processing step handles grouping: + +1. Collect all packages from the current scan batch that share the same non-zero `mediaAlbumId` +2. For each distinct `mediaAlbumId`, check if a `PackageGroup` already exists via `findFirst({ where: { mediaAlbumId, sourceChannelId } })` +3. If no group exists, create one: + - **Name:** caption of the first message in the album (falls back to first file's base name) + - **Preview:** find a `TelegramPhoto` from the scan's `photos[]` array with the same `mediaAlbumId`. If found, download via `downloadPhotoThumbnail`. If not, the group starts with no preview (can be added in UI later) +4. Link all member packages via an idempotent `updateMany` — sets `packageGroupId` on all packages whose `sourceMessageId` is in the album's message set. This handles both newly-indexed packages and previously-indexed ones that were created in an earlier partial scan (e.g., if one package failed and was retried later) + +The per-package pipeline is unchanged — each file is still downloaded, hashed, deduped, split, uploaded, and indexed independently. Grouping is a layer on top. + +--- + +## Query Layer + +### Paginated Listing with Groups + +The STL table shows "display items" — either a group (collapsed) or a standalone package. Pagination operates on display items so that a group occupies exactly one slot regardless of member count. 
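The plan's `page.tsx` snippet wraps search results as `{ type: "package" as const, data: item }`, which implies a discriminated union along these lines — a hypothetical sketch; the stand-in row interfaces are reduced to a couple of fields for illustration:

```typescript
// Minimal stand-ins for the real row types (illustrative only).
interface PackageListItem { id: string }
interface PackageGroupRow { id: string; memberCount: number }

// One slot in the paginated table: a collapsed group or a standalone package.
type DisplayItem =
  | { type: "group"; data: PackageGroupRow }
  | { type: "package"; data: PackageListItem };

// Narrowing helper the table's render loop could use: inside the `true`
// branch, TypeScript knows `item.data` is a PackageGroupRow.
function isGroup(item: DisplayItem): item is Extract<DisplayItem, { type: "group" }> {
  return item.type === "group";
}
```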
+ +**Two-step query approach** (handles filters correctly): + +**Step 1 — Find matching display item IDs:** + +```sql +-- Find all group IDs and standalone package IDs where at least one member matches filters +SELECT DISTINCT COALESCE(p."packageGroupId", p.id) AS display_id, + CASE WHEN p."packageGroupId" IS NOT NULL THEN 'group' ELSE 'package' END AS display_type, + MAX(p."indexedAt") AS sort_date +FROM packages p +LEFT JOIN package_groups pg ON pg.id = p."packageGroupId" +WHERE 1=1 + -- Optional filters applied here (creator, tags, search text, channelId) +GROUP BY COALESCE(p."packageGroupId", p.id), + CASE WHEN p."packageGroupId" IS NOT NULL THEN 'group' ELSE 'package' END +ORDER BY sort_date DESC +LIMIT $1 OFFSET $2 +``` + +**Step 2 — Fetch full data:** + +For groups on the current page, fetch all member packages (including those that didn't match filters — the group appears because at least one member matched, but the expanded view shows all members). For standalone packages, fetch the full package data. + +**Count query** (for pagination total): + +```sql +SELECT COUNT(*) FROM ( + SELECT DISTINCT COALESCE(p."packageGroupId", p.id) + FROM packages p + WHERE 1=1 + -- Same filters as step 1 +) AS display_items +``` + +### Group Row Aggregates + +Computed in the step 2 fetch: total file size (sum), total file count (sum), combined tags (array union), member package count per group. These populate the collapsed group row. + +### Search + +`searchPackages` adds `PackageGroup.name` to search targets via a `LEFT JOIN` to `package_groups`. If any package in a group matches by name/file content, or the group name matches, the whole group appears. + +### Filtering + +Creator/tag filters apply to member packages. A group appears if any member matches the filter. The group row shows aggregates of all members (not just matching ones). 
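The SQL's grouping and ordering semantics can be mirrored in plain TypeScript, which is handy for unit-testing the expected page composition — a sketch of the behavior, not the production query:

```typescript
interface Pkg {
  id: string;
  packageGroupId: string | null;
  indexedAt: Date;
}

interface DisplayKey {
  displayId: string;
  displayType: "group" | "package";
  sortDate: Date;
}

// Mirrors step 1: COALESCE(packageGroupId, id) grouping, MAX(indexedAt) as
// the sort key, DESC ordering, then LIMIT/OFFSET pagination.
function pageOfDisplayItems(pkgs: Pkg[], limit: number, offset: number): DisplayKey[] {
  const byDisplayId = new Map<string, DisplayKey>();
  for (const p of pkgs) {
    const displayId = p.packageGroupId ?? p.id;
    const existing = byDisplayId.get(displayId);
    // Keep the latest indexedAt per display item (MAX aggregate).
    if (!existing || p.indexedAt > existing.sortDate) {
      byDisplayId.set(displayId, {
        displayId,
        displayType: p.packageGroupId ? "group" : "package",
        sortDate: p.indexedAt,
      });
    }
  }
  return [...byDisplayId.values()]
    .sort((a, b) => b.sortDate.getTime() - a.sortDate.getTime())
    .slice(offset, offset + limit);
}
```

Note that a group with many members still occupies exactly one slot, and its position is driven by its most recently indexed member.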
+ +### New Query Functions + +| Function | Purpose | +|----------|---------| +| `listDisplayItems(page, limit, filters)` | Two-step paginated query returning groups + standalone packages | +| `getDisplayItemCount(filters)` | Count of display items for pagination total | +| `getPackageGroup(groupId)` | Group metadata + all member packages | +| `updatePackageGroupName(groupId, name)` | Rename group | +| `updatePackageGroupPreview(groupId, previewData)` | Replace group preview | +| `addPackagesToGroup(packageIds, groupId)` | Manual grouping — add to existing group | +| `removePackageFromGroup(packageId)` | Ungroup single package | +| `createManualGroup(name, packageIds)` | Create new group from UI | +| `dissolveGroup(groupId)` | Ungroup all members, delete group record | + +For manual grouping of packages that already belong to different groups: the UI first dissolves empty source groups (groups where all members were moved), then links the selected packages to the target group. Non-selected members of source groups remain in their original group. + +--- + +## UI Changes + +### STL Table — Group Rows + +- **Collapsed (default):** Single row showing preview thumbnail, group name (editable inline), archive type badge ("Mixed" if heterogeneous), combined size, combined file count, combined tags (editable), source channel, latest `indexedAt`, actions +- **Expanded:** Chevron toggle reveals member packages as indented sub-rows with their existing columns and per-package actions +- Chevron icon on the left of the row toggles expand/collapse + +**Loading strategy:** Member packages for all groups on the current page are prefetched in a single batched query during the step 2 fetch. This means expand/collapse is instant (no on-demand loading) and avoids per-row loading states. + +### Group Row Actions + +- **Send All** — Queues bot send requests for every package in the group. Checks for existing PENDING/SENDING requests per package to avoid duplicates. 
+- **View Files** — Opens file drawer showing all member packages' files, separated by package name headers +- **Dissolve Group** — Ungroups all members (confirmation required) + +### Individual Package Actions (Within a Group) + +- Existing: Send, View Files +- New: "Remove from group" in dropdown menu + +### Manual Grouping + +- Checkbox selection column on package rows +- When 2+ packages selected, a "Group Selected" button appears in the table toolbar +- Prompts for a group name, creates the group +- If selected packages belong to existing groups, those packages are moved to the new group. Source groups that become empty are automatically dissolved. + +### Preview Editing + +- Click the group's preview thumbnail to upload a replacement image +- Same upload flow as individual packages (existing component reuse) + +### No Changes To + +- Skipped/failed packages tab +- Package detail drawer internals +- Search UI (just broader matching behind the scenes)