=====================================================================

                            CERT-Renater

                Note d'Information No. 2025/VULN907
_____________________________________________________________________

DATE                : 29/12/2025

HARDWARE PLATFORM(S): /

OPERATING SYSTEM(S): Systems running MongoDB versions prior to 8.2.3,
                     8.1.2, 8.0.17, 7.0.28, 6.0.27, 5.0.32, 4.4.30.

=====================================================================
https://jira.mongodb.org/browse/SERVER-115508
https://jira.mongodb.org/browse/SERVER-106075
https://jira.mongodb.org/browse/SERVER-108565
https://jira.mongodb.org/browse/SERVER-103582
https://jira.mongodb.org/browse/SERVER-101180
_____________________________________________________________________


Make minimally sized buffers for uncompressed Messages




    Type:    Bug
    Resolution:    Fixed
    Priority:    Major - P3
    Fix Version/s:    4.4.30, 5.0.32, 6.0.27, 8.2.3, 8.0.17, 7.0.28
    Affects Version/s:    None
    Component/s:    None
    Labels:        auto-fix-version-interrupted 


    Backwards Compatibility:    Fully Compatible
    Operating System:    ALL
    Linked BF Score:    200
    CAR Domain/s:    None


Issue Status as of Dec 29 2025

SUMMARY

This is a critical fix to address CVE-2025-14847. Upgrade to 8.2.3,
8.0.17, 7.0.28, 6.0.27, 5.0.32, or 4.4.30.

ISSUE DESCRIPTION AND IMPACT

A client-side exploit of the server's zlib implementation can cause
the server to return uninitialized heap memory to a client that has
not authenticated. We strongly recommend upgrading to a fixed version
as soon as possible.

This issue affects MongoDB versions:

    MongoDB 8.2.0 through 8.2.2
    MongoDB 8.0.0 through 8.0.16
    MongoDB 7.0.0 through 7.0.26
    MongoDB 6.0.0 through 6.0.26
    MongoDB 5.0.0 through 5.0.31
    MongoDB 4.4.0 through 4.4.29
    All MongoDB Server v4.2 versions
    All MongoDB Server v4.0 versions
    All MongoDB Server v3.6 versions


WORKAROUND

We strongly suggest you upgrade immediately.

If you cannot upgrade immediately, disable zlib compression on the
MongoDB Server by starting mongod or mongos with a
networkMessageCompressors or a net.compression.compressors option
that explicitly omits zlib. Example safe values include "snappy,zstd"
or "disabled".
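
For example, a minimal configuration sketch (assuming a standard
mongod.conf; adapt the compressor list to your deployment):

# mongod.conf: permit only non-zlib compressors
net:
  compression:
    compressors: snappy,zstd

# equivalent command-line form
mongod --networkMessageCompressors snappy,zstd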


REMEDIATION

Upgrade to MongoDB 8.2.3, 8.0.17, 7.0.28, 6.0.27, 5.0.32, or
4.4.30.

_____________________________________________________________________

Prepared Transactions with apiVersion Fail to Resume After Primary
Failover



    Type:    Bug
    Resolution:    Fixed
    Priority:    Major - P3
    Fix Version/s:    8.3.0-rc0, 8.2.2, 7.0.26, 8.0.16
    Affects Version/s:    None
    Component/s:    None
    Labels:    None

    Assigned Teams:    Replication	
    Backwards Compatibility:    Fully Compatible
    Operating System:    ALL
    Backport Requested:    v8.2, v8.1, v8.0, v7.0, v6.0	
    Sprint:    Repl 2025-06-23, Repl 2025-07-07, Repl 2025-07-21,
               Repl 2025-08-04
    CAR Domain/s:    None


Issue Status as of November 20, 2025

ISSUE DESCRIPTION AND IMPACT
MongoDB uses a two-phase commit protocol to handle cross-shard
transactions. This protocol works in the following way:

    Prepare Phase: The transaction is prepared on all involved shards,
ensuring that each shard is ready to commit.
    Commit Phase: Once all shards successfully prepare the transaction,
a commit command is sent to all shards. The system waits for
acknowledgments from all shards before confirming success to the client.
At this point, the client expects the transaction's data to be
committed across all shards.

The problem arises when a client explicitly sets apiVersion in their
transaction (a sketch follows the list below), and during the two-phase
commit process:

    The transaction reaches the prepare phase successfully.
    A failover event occurs on some of the shards (e.g., the primary on
that shard steps down, a new primary is elected, or the same primary
restarts and resumes).
    The shard that undergoes failover may return an "API Version
Mismatch" error when the commit command is issued. This causes the
transaction to remain in the "prepared" state on that shard.
    The two-phase commit coordinator misinterprets this error as a
successful acknowledgment and incorrectly marks the transaction as
committed. It then returns success to the client.
    Transactions left in the prepared state can block further write or
read operations involving the affected documents (especially those with
higher timestamps than the prepared transaction's timestamp).
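
To make the failure scenario above concrete, here is a hedged sketch
of the kind of client workload involved (connection string, database,
and field names are illustrative; mongosh pins the Stable API with its
--apiVersion flag):

# connect through mongos with an explicit API version
mongosh "mongodb://mongos.example:27017" --apiVersion 1

// then, in the shell: a transaction touching documents on two shards
const session = db.getMongo().startSession();
session.startTransaction();
const orders = session.getDatabase("test").orders;
orders.insertOne({ _id: 1, region: "eu" });  // routed to one shard
orders.insertOne({ _id: 2, region: "us" });  // routed to another shard
session.commitTransaction();  // drives the two-phase commit above

If a failover hits one of the shards between prepare and commit, this
is where the coordinator can mistake the "API Version Mismatch" error
for a successful acknowledgment.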

Impact:

    Versions v8.0.0 through v8.0.12: Due to a separate bug
(SERVER-105751), prepared transactions may be "reaped" (removed) after
a default timeout of 30 minutes (TransactionRecordMinimumLifetimeMinutes).
        This potentially leaves the data in a torn state across shards,
leading to logical data inconsistency where clients observe inconsistent
transaction outcomes.
        DIAGNOSIS: There is currently no way to diagnose this issue
directly from the server.
        REMEDIATION: No remediation can be performed directly on the
server.

    If SERVER-105751 is not hit: the prepared transaction remains
indefinitely:
        DIAGNOSIS:
            This causes persistent issues for subsequent operations,
typically frequent `writeConflict` errors when modifying documents
held by the prepared transaction.
            It also causes unbounded growth of the oplog, because the
prepared transaction blocks oplog truncation from advancing.
        REMEDIATION:
            If the commit/abort state of the transaction can be
determined from other shards (via logs, oplog, or config.transactions)
or from the client, manual intervention is required to abort or commit
the blocked prepared transaction. However, if definitive data is
unavailable, recovery cannot be guaranteed.


AFFECTED VERSIONS

    5.0.0 - 5.0.31
    6.0.0 - 6.0.26
    7.0.0 - 7.0.25
    8.0.0 - 8.0.15
    8.2.0 - 8.2.1

------------------------------------------------------
Original description

A prepared transaction that was initiated with apiVersion set cannot be
continued on a new primary after a failover. This is because we do not
preserve apiParameters (such as apiVersion) during oplog application for
prepared transactions. As a result, when the new primary takes over, it
will have empty apiParameters.

When the transaction coordinator later sends the commit or abort decision
to the new primary, the new primary will detect an APIMismatchError and
assert. However, the coordinator treats this as an acknowledgement,
leading to a situation where the distributed transaction may be committed
on some shards but remain stuck in the prepared state on others.

This can result in a partially committed transaction after a failover,
which is an unsafe state.

Given that apiParameters is kept only as in-memory state, we could
also hit this error if the same primary restarted and stepped up to
become primary again after going through startup recovery.

_____________________________________________________________________

Check bucket size before writing to storage for time-series writes



    Type:    Task
    Resolution:    Fixed
    Priority:    Major - P3
    Fix Version/s:    8.2.1, 8.3.0-rc0, 7.0.26, 8.0.16
    Affects Version/s:    None
    Component/s:    None
    Labels:        server-rapid-response-resolved 


    Assigned Teams:    Storage Execution	
    Backwards Compatibility:    Fully Compatible
    Backport Requested:    v8.2, v8.0, v7.0	
    Sprint:
    Storage Execution 2025-08-04, Storage Execution 2025-08-18,
Storage Execution 2025-09-01, Storage Execution 2025-09-15, Storage
Execution 2025-09-29, Storage Execution 2025-10-13
    Linked BF Score:    0
    CAR Domain/s:    None

Certain writes to time-series collections are able to generate documents
larger than 16MB, which can crash secondaries. Various conservative size
limits already exist that prevent most writes from getting close to the
BSON 16 MB size limit. This ticket adds a final check to time-series
bucket writes before writing to storage.

 

Ordered writes will automatically retry an oversized bucket update on
a new bucket, in a best-effort attempt to fit the measurement into the
collection. All other writes ([un]ordered bucket inserts and unordered
updates) will reject measurements that are too big.
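
For context, a sketch of the kind of time-series write involved
(collection and field names are illustrative, and this is not a
reliable reproduction, since the existing conservative limits close
most buckets long before 16 MB):

// create a time-series collection (illustrative names)
db.createCollection("metrics", {
  timeseries: { timeField: "ts", metaField: "sensor" }
});

// measurements sharing a metaField value accumulate in one bucket
// document; the fix adds a final size check on that bucket document
// before it is written to storage
db.metrics.insertOne({ ts: new Date(), sensor: 1, payload: "x".repeat(1024) });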


_____________________________________________________________________

Racy authorization check in killCursors allows killing cursors from
other users




    Type:    Bug
    Resolution:    Fixed
    Priority:    Major - P3
    Fix Version/s:    8.2.0-rc0, 8.0.14, 7.0.26
    Affects Version/s:    6.0.0, 7.0.0, 8.1.0-rc0, 8.0.0, 8.2.0-rc0
    Component/s:    None
    Labels:    None


    Assigned Teams:    Query Execution	
    Backwards Compatibility:    Fully Compatible
    Operating System:    ALL
    Backport Requested:    v8.1, v8.0, v7.0, v6.0	
    Sprint:
    QE 2025-05-12, QE 2025-05-26, QE 2025-06-09, QE 2025-06-23,
QE 2025-07-07, QE 2025-07-21
    CAR Domain/s:    None

The killCursors and releaseMemory authorization checks have potential
for a "time-of-check to time-of-use" race allowing a malicious user
to intentionally kill cursors from other users:

 

In the authorization check for both commands, if the cursor does not
exist, it will let the command continue running, expecting that when
the command tries to kill the cursor, it will find again that it still
does not exist, and fail.

However, it may happen that the cursor is allocated right after the
auth check. The chance that this succeeds is not negligible, because
cursor IDs are predictable: they are allocated using a
non-cryptographically-secure pseudo-random number generator that is
shared across clients.

 

In a multi-user/multi-tenant environment, a malicious user may be
able to reconstruct the state of the PRNG from observed cursor IDs,
use it to guess the next cursor IDs to be allocated by other users,
and then attempt to kill them, which will occasionally succeed when
the cursor gets allocated right after the auth check.
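
For reference, a minimal sketch of the command at the center of the
race (collection name and cursor ID are placeholders):

// killCursors takes a collection name and a list of cursor IDs;
// the racy authorization check described above runs before the kill
db.runCommand({
  killCursors: "orders",
  cursors: [NumberLong("4567890123456789")]
});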

_____________________________________________________________________

Fix standalone BatchedDelete large _ids edge case



    Type:    Bug
    Resolution:    Fixed
    Priority:    Major - P3
    Fix Version/s:    8.2.0-rc0, 8.1.2, 8.0.13, 7.0.26
    Affects Version/s:    7.0.17, 8.0.6
    Component/s:    None
    Labels:    None

    Assigned Teams:    Query Execution	
    Backwards Compatibility:    Fully Compatible
    Operating System:    ALL
    Backport Requested:    v8.1, v8.0, v7.0, v6.0	
    Sprint:
    QE 2025-03-03, QE 2025-03-17, QE 2025-03-31, QE 2025-04-14,
QE 2025-04-28, QE 2025-05-12, QE 2025-05-26, QE 2025-06-09,
QE 2025-06-23
    CAR Domain/s:    None

When running the following shell code against a replica set or
standalone (a sharded cluster was not validated), mongod crashes with
an invariant failure at src/mongo/db/exec/batched_delete_stage.cpp:418
(https://github.com/10gen/mongo/blob/7ca242fa77c2a15f3a3628db2a2cf1c14c6c7231/src/mongo/db/exec/batched_delete_stage.cpp#L418):

// insert a document whose _id is a ~16 MB string, just under the
// 16777216-byte BSONObjMaxUserSize limit
kCollName = "boom";
db[kCollName].insert({_id: "X".repeat(16776704)});
// the batched delete of this single oversized document trips the invariant
db[kCollName].remove({});

Error: network error while attempting to run command 'delete' on
host '127.0.0.1:27017' :: caused by :: dbclient error communicating
with server 127.0.0.1:27017 :: caused by :: futurize :: caused by :: Connection closed by peer


Log output from mongod:

{"t":{"$date":"2025-02-21T14:11:16.083+00:00"},"s":"F",  "c":"ASSERT",   "id":23079,   "ctx":"conn1","msg":"Invariant failure","attr":{"expr":"*bufferOffset > 0","location":"src/mongo/db/exec/batched_delete_stage.cpp:418:40:long long mongo::BatchedDeleteStage::_commitBatch(WorkingSetID *, std::set<WorkingSetID> *, unsigned int *, unsigned int *, unsigned int *)"}}
{"t":{"$date":"2025-02-21T14:11:16.083+00:00"},"s":"F",  "c":"ASSERT",   "id":23080,   "ctx":"conn1","msg":"\n\n***aborting after invariant() failure\n\n"}

Note that the _id value of the inserted document is quite large
(a 16776704-byte string). This if condition and block in
batched_delete_stage.cpp are likely wrong, as they do not account for
the fact that even a batch containing a single document can trigger
them.

At this point, applyOpsBytes is 512 bytes (2 *
kApplyOpsNonArrayEntryPaddingBytes) plus the size of the _id field's
BSONElement. The BSONElement size is the length of the field name plus
the length of the value plus some BSON overhead. Since 16776704 + 512 =
16777216 is exactly BSONObjMaxUserSize (16 MB), the element overhead
pushes the total past the limit, so the if branch is taken here.

This is reproducible in master, but likely also in previous
versions.


=========================================================
+ CERT-RENATER        |    tel : 01-53-94-20-44         +
+ 23/25 Rue Daviel    |    fax : 01-53-94-20-41         +
+ 75013 Paris         |   email:cert@support.renater.fr +
=========================================================




