mirror of
https://github.com/xCyanGrizzly/DragonsStash.git
synced 2026-05-11 06:11:15 +00:00
fix: raise size limit and make MAX_PART_SIZE configurable
All checks were successful
continuous-integration/drone/push Build is passing
- Raise WORKER_MAX_ZIP_SIZE_MB from 4GB to 200GB (production .env)
- Make MAX_PART_SIZE configurable via MAX_PART_SIZE_MB env var (default 1950 MiB, set to 3900 for Premium accounts)
- Remove hardcoded 1950 MiB constants in split.ts and worker.ts
- Add grouping system audit report with real-world failure cases

10 archives were blocked by the 4GB limit (up to 70.5GB). They will be retried on the next ingestion cycle.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -1020,7 +1020,7 @@ async function processOneArchiveSet(
     (sum, p) => sum + p.fileSize,
     0n
   );
-  const MAX_UPLOAD_SIZE = 1950n * 1024n * 1024n; // Match split.ts MAX_PART_SIZE
+  const MAX_UPLOAD_SIZE = BigInt(config.maxPartSizeMB) * 1024n * 1024n;
   const hasOversizedPart = archiveSet.parts.some((p) => p.fileSize > MAX_UPLOAD_SIZE);

  if (hasOversizedPart) {
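The commit says the limit now comes from a MAX_PART_SIZE_MB environment variable with a 1950 MiB default. A minimal sketch of how that env var might be parsed into the config value used in the diff; the helper name, DEFAULT_MAX_PART_SIZE_MB, and the fallback behavior are assumptions for illustration, not the repository's actual code:

```typescript
// Hypothetical sketch: read MAX_PART_SIZE_MB from the environment,
// falling back to the 1950 MiB default mentioned in the commit message.
const DEFAULT_MAX_PART_SIZE_MB = 1950; // default; 3900 for Premium accounts

function parseMaxPartSizeMB(raw: string | undefined): number {
  const parsed = Number(raw);
  // Use the default when the variable is unset, empty, or not a positive number.
  return Number.isFinite(parsed) && parsed > 0 ? parsed : DEFAULT_MAX_PART_SIZE_MB;
}

const maxPartSizeMB = parseMaxPartSizeMB(process.env.MAX_PART_SIZE_MB);

// BigInt arithmetic to match the BigInt fileSize comparison in the diff.
const MAX_UPLOAD_SIZE: bigint = BigInt(maxPartSizeMB) * 1024n * 1024n;
```

Validating the value rather than trusting `Number(process.env.…)` directly avoids a NaN limit silently blocking every upload when the variable is mistyped.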